How to make complexity look simple
Complexity is not about details – it is all about creating simple models and asking simple questions which produce complex answers…
You may not think that complex things such as nuclear reactors, making metals out of hydrogen, the economic system, three-dimensional billiards and “cultural hitch-hiking” have much in common, but Professor Graeme Ackland can explain them all in simple terms that anyone can follow – using models running on supercomputers...
Finding out what happens when you squeeze a mixture of deuterium and hydrogen so hard that it forms a new kind of metal (“the great unsolved challenge of high-pressure physics”) and trying to extend the life of nuclear reactors by working out how long the steel will survive radiation are scientific challenges which don’t seem to sit very comfortably together. And it’s hard to imagine much similarity between testing the economic system to destruction, analysing climate change and working out how DNA and languages spread throughout Europe by piggybacking on new farming technologies. But whether it's materials or archaeology we're studying, they can all be understood by using differential equations – solved using a supercomputer – to model what happens and predict what will happen in the future.
According to Graeme Ackland (Professor of Computer Simulation at the University of Edinburgh), companies, atoms and species (including Homo sapiens) behave in the same basic way. Individuals interact with other things and with each other. The sum of these individual interactions creates larger systems, whether it’s the structure of a metal, the climate or the global economy. We may try to understand the Big Picture in terms of behaviour and rules, but sometimes events can throw the rulebook out of the window.
To explain his work in simulation and how “simple” models can be used to solve so many complex problems, Ackland uses several examples. One of his department’s most notable recent achievements was to help a team at Edinburgh's Centre for Science at Extreme Conditions (CSEC) advance its work in metallizing hydrogen. The scientists “squeezed tiny amounts of solid hydrogen between diamonds, to create some of the highest static pressures ever achieved on Earth,” comparable to the insides of giant planets such as Jupiter and Saturn, where metallized hydrogen creates magnetic fields which can be easily detected. By firing a laser through this novel material, the team set out to analyse its structure and establish whether or not it was a metal, but because the signal faded as it passed through the sample, there was not enough data to be sure – just a few “experimental peaks.” To build a better picture and fill in the gaps in the data, Ackland and his colleague Ioan Magdau used quantum mechanics, and supercomputer software developed over the past 25 years, to generate models of different arrangements of atoms which might resemble the new material. They then ran simulations and compared the results with the data extracted from the actual experiment.
When they found a model which generated numbers close to the numbers produced in the lab (the experimental peaks), they used it to do further simulations, which helped to establish that the novel material is not in fact metal. The great unsolved problem remains unsolved.
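The matching step can be pictured with a toy sketch: score each candidate arrangement of atoms by how many of the experimental peaks it reproduces, and keep the best-scoring one for further simulation. Everything below – the peak positions, the tolerance, the candidate names – is invented purely for illustration, not taken from the actual study.

```python
# Toy sketch of scoring candidate structures against a handful of
# experimental peaks. All values here are hypothetical.

def match_score(simulated_peaks, experimental_peaks, tolerance=0.05):
    """Fraction of experimental peaks with a simulated peak nearby."""
    hits = 0
    for exp in experimental_peaks:
        if any(abs(exp - sim) <= tolerance for sim in simulated_peaks):
            hits += 1
    return hits / len(experimental_peaks)

# Hypothetical simulated peak positions (arbitrary units) for two candidates.
candidates = {
    "structure_A": [1.02, 1.48, 2.11, 2.95],
    "structure_B": [1.00, 1.50, 2.40, 3.10],
}
experimental = [1.01, 1.49, 2.12]  # only a few peaks survived the noise

best = max(candidates, key=lambda name: match_score(candidates[name], experimental))
print(best)  # structure_A reproduces all three experimental peaks
```

In the real work the “simulated peaks” come from quantum-mechanical calculations rather than hand-written lists, but the comparison-and-selection logic is the same in spirit.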
This recent project echoes Ackland’s other research into metals: analysing the steel used in nuclear fission reactors to see how long it will last under radiation, and to help design new types of longer-lasting steel. When the first reactors were built, not very much was known about the steel used, but scientists did know that it would be bombarded by radioactive particles, and that this would eventually damage the structure, requiring the reactor to be decommissioned. Damage is created at an alarming rate, but atoms then move and swap positions, so the structure effectively “heals itself.” Other atoms migrate to the surface, leaving a “hole” behind where they have been and causing the surface to swell, which eventually weakens the structure and makes it unstable. But how long does it take for these defects to develop?
The stakes are high – if the lifetime of a nuclear reactor can be safely extended beyond the original specification, the power it produces is virtually free because most of the costs are already incurred when the station is built and running costs are relatively low – the extra power is an unexpected bonus. Decommissioning is also expensive, and the longer that this is delayed, the more productive the reactor will be.
To analyse the lifetime of the steel in the reactor, one approach is to speed up the process by bombarding the steel with radiation at 40 times the real-world rate, compressing its possible lifetime of about 40 years into a single year – but this is not entirely realistic, because the “self-healing” takes longer than this, so the results could be misleading. Simulation, however, enables researchers to create “computationally fast, yet accurate models” of the interatomic forces involved, to simulate 40 years' worth of radiation damage (enough energy to vaporise the entire reactor ten times over) and self-healing (enough to heal almost all that damage). When these results are compared with real-world data from already decommissioned reactors (for example, reactors from the Soviet era), the models are confirmed to be reliable enough to predict what will happen to currently active reactors, and thus extend their possible lifetime.
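Why the 40-times-accelerated experiment can mislead is easy to see with a toy rate equation: damage arrives in proportion to the dose rate, while self-healing removes defects at its own fixed pace, so the same total dose delivered faster leaves far more unhealed damage. All the rates below are invented for illustration only.

```python
# Toy rate model: d(defects)/dt = dose_rate - heal_rate * defects.
# Defect creation scales with dose rate; self-healing does not.
# All rates and units are hypothetical.

def surviving_defects(dose_rate, total_dose, heal_rate=0.5, dt=0.01):
    """Euler-integrate the defect count until the total dose is delivered."""
    defects = 0.0
    t, t_end = 0.0, total_dose / dose_rate
    while t < t_end:
        defects += (dose_rate - heal_rate * defects) * dt
        t += dt
    return defects

TOTAL_DOSE = 40.0                                                # same dose in both cases
slow = surviving_defects(dose_rate=1.0, total_dose=TOTAL_DOSE)   # "real life": 40 years
fast = surviving_defects(dose_rate=40.0, total_dose=TOTAL_DOSE)  # accelerated: 1 year

# Same total dose, but the accelerated sample ends with far more unhealed damage.
print(slow, fast)
```

In the slow case healing keeps pace with damage and the defect count settles near a low steady state; in the fast case healing never catches up before the experiment ends.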
Ackland's work on radiation damage in steels, in partnership with colleagues from Germany, France, Japan, the US and Russia, as part of a G8-sponsored international project for fusion science, has been “terminated” following the crisis in Ukraine and imposition of sanctions on Russia, but Ackland says results so far have been encouraging. “The simulations demonstrate we know what's going on,” he says. And, in effect, this guarantees “free” energy, as long as the reactors are proved to be safe.
The work for power companies, including EDF, will also change the way reactors are designed. The steel used in the first generation of reactors contained cobalt, nickel and copper, but the cobalt and nickel become radioactive over time (“intermediate-level nuclear waste”), making reactors dangerous to decommission, while the copper (which got into the steel inadvertently during production) forms particles which make the steel more brittle. Using simulations to predict how long the steel will last enables power companies to build new reactors that use special “nuclear steels,” with structures which are better at resisting radiation and which do not become radioactive themselves – lasting longer and producing cheaper energy.
“Doing the experiments is difficult,” says Ackland, “and you can't wait for decades to get the results.” Simulation, however, can model the primary damage, and Ackland compares this to “three-dimensional billiards,” looking at what happens to millions of atoms, as they interact inside the steel.
Ackland’s other passion is to use simulations to model ecological and economic systems (using “novel approaches to realistic networks of interacting autonomes”). For example, traditional models suggest that if a company competes with other companies, the best way to win market share is to lower your prices, then cut production costs to optimise profits. By dividing the world into loosely connected local markets, Ackland produced very different results: when there were very few competitors, this kicked off a race to the bottom, the businesses became unsustainable and then collapsed – and other companies came in to take their place, starting the whole process over again. These new competitors could either keep their prices very low or exploit their market dominance and keep prices high. Keeping interactions local is key: when a company’s reach extends throughout the global economy, its collapse means the economic system collapses – it is simply unsustainable.
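A minimal agent-based sketch of this kind of dynamic might look like the following: firms in one local market undercut each other, collapse when their price falls below cost, and are replaced by fresh entrants who restart the cycle. Every parameter here is invented; this illustrates the boom-and-bust idea, not Ackland's actual model.

```python
import random

# One local market of competing firms. Firms undercut the cheapest rival;
# a firm selling below cost goes bankrupt and is replaced by a new entrant
# charging a high price again. All numbers are invented for illustration.

random.seed(1)
COST = 1.0           # unit production cost
N_FIRMS = 5
prices = [round(random.uniform(1.5, 2.5), 2) for _ in range(N_FIRMS)]
bankruptcies = 0

for step in range(200):
    cheapest = min(prices)
    for i in range(N_FIRMS):
        if prices[i] > cheapest:
            # undercut the current market leader slightly
            prices[i] = cheapest - random.uniform(0.0, 0.05)
        if prices[i] < COST:
            # race to the bottom ends in collapse; a new entrant takes over
            bankruptcies += 1
            prices[i] = 2.0

print(bankruptcies)  # repeated waves of collapse, not a stable equilibrium
```

The point the sketch captures is that the race to the bottom never settles: each collapse resets the cycle, so the market shows dynamic boom and bust rather than the single stable equilibrium a traditional model would predict.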
“What fell out of these models,” says Ackland, “is that once you allow different things to be happening in different places, you get completely different behaviour, whether it is climate or ecology, or different species and companies.” And when there is unstable behaviour, Ackland makes this part of the model, unlike economists who reboot their models as soon as they start to break down. “The key thing is the model shows dynamic boom and bust, just like the real world,” says Ackland. “Every local region can be unstable, yet the system as a whole is stable.”
The signature of these computer models, says Ackland, is lots of bankruptcies in many local markets. The more they are connected, the bigger the eventual system collapses become. “Sometimes I observe things I find personally offensive or immoral,” says Ackland, “but that is not the point.” For example, if you encourage the redistribution of wealth or promote globalisation, this accelerates system collapse. And similar phenomena can also be seen when you model ecological systems, with species competing like companies trying to optimise profits or win market share, and ultimately facing the threat of extinction.
Another project looked at how advances in farming technology spread throughout Europe, bringing cultural markers (for example, languages) and genes in their wake, as a result of reaction (local people copying imported ideas and changing their behaviour) or diffusion (good ideas being exported). The first computer model showed these processes in action, but produced some unusual results – for example, the new ideas did not spread north through central Europe as expected. When the model was adjusted to allow the ideas to spread via water (e.g. the Rhine and the Danube), instead of only overland, this solved the problem. And other anomalies also appeared, suggesting that the model should take more account of events, particularly at very specific locations – in other words, small localised effects can have big consequences, and the rulebook (determinism) does not always shape the course of human history.
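The effect of adding waterways can be illustrated with a toy diffusion model: farming “intensity” spreads along a one-dimensional chain of regions, and cells lying on a hypothetical river corridor pass it along faster. All numbers below are invented for illustration; the real model was far richer.

```python
# Toy 1-D diffusion sketch: farming spreads from region 0 outward,
# faster across cells that sit on a (hypothetical) river corridor.

N = 50
RIVER = set(range(20, 30))     # invented river corridor in mid-chain
D_LAND, D_RIVER = 0.1, 0.4     # diffusion rates; river is faster

def spread(steps, use_river):
    farming = [0.0] * N
    farming[0] = 1.0           # farming starts at one end of the chain
    for _ in range(steps):
        new = farming[:]
        for i in range(1, N - 1):
            d = D_RIVER if (use_river and i in RIVER) else D_LAND
            new[i] += d * (farming[i - 1] - 2 * farming[i] + farming[i + 1])
        farming = new
    return farming

overland = spread(2000, use_river=False)
with_river = spread(2000, use_river=True)

# Distant regions adopt farming sooner when the river carries it faster.
print(overland[40], with_river[40])
```

Even this crude sketch shows the qualitative behaviour described above: a fast corridor in the middle of the map changes how quickly ideas reach the regions beyond it.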
“Complexity models are simple,” says Ackland, “and we learn as much from what goes wrong as what goes right.”
Ackland says that what links these various projects is the fact that they all concern multiple objects – whether they are atoms, farmers, companies or species – which interact with many other objects. “I have a strong view,” he says, “that an algorithm is a sufficient theoretical explanation. Many people think that mathematics will explain it all (the theory of everything), but this leads to studying only the equations you can actually solve, and most phenomena still unsolved are complex enough for that to be impossible.”
What interests Ackland is how conventional approaches are hamstrung by the language of cause and effect – for example, do atoms go somewhere because their own electrons take them there, or do the electrons simply go with the atoms? “We eventually realised that to make progress we must abandon the notion of cause and effect, and look for a ‘self-consistent solution’ which is favourable for the electrons and favourable for the atoms.”
“Complexity is not about details,” says Ackland. “It is all about creating simple models – asking simple questions which produce complex answers.”
So next time you are looking for free energy, or trying to anticipate the next important economic trend and increase your company’s profits, computer models – and three-dimensional billiards – may be the answer.
Bikes, cows and farmers
In addition to his work on fusion materials, high-pressure crystallography, economics and climate change, Professor Ackland's work has thrown up some bizarre and unexpected conclusions on a range of “esoteric projects,” including when pelotons form in bike races, how animals evolved herding behaviour, why farming made people smaller and why cows have eyes on the side of their heads. He also collects curious and counter-intuitive problems, only some of which require a degree in physics to solve.