
In around four and a half billion years the Andromeda galaxy and our own Milky Way will finish their long mutual plummet. On a first grazing pass, the delicate spiral arms will be yanked almost out of their sockets. Perhaps their denizens will look back with relief as the dislocated galaxies retreat across their future night skies. But then they’ll pause for a hundred-thousand-year eye-blink before falling back together. In a series of whirling collisions, all spiral structure will be obliterated, gas will be compacted to produce waves of supernovae, and the giant Milkdromeda galaxy will be born.

The event I just described will happen as surely as the Sun will rise tomorrow. We know this because we’ve calculated the chaotic gravitational and hydrodynamic interactions of countless stars, gas, and dark matter particles over billions of years. And simulations of galaxy collisions are just the beginning. We routinely simulate the universe on all of its scales, from planets to large fractions of the cosmos. Today we’re going to see how it’s possible to build a universe in a computer - and see if there’s a limit to what we can simulate.

---

In 1941 the Swedish astronomer Erik Holmberg conducted what was probably the first simulation of the universe, right when the first programmable computers were being assembled. But Holmberg didn’t use a computer. He arrayed 37 light bulbs on a plane, each one representing billions of stars in a spiral galaxy disk. The light from each bulb stood in for its gravity - felt more strongly close to the bulb, and dropping off with the square of distance, just like gravity. At any point on the disk, a photocell could determine in which direction the intensity of light was strongest. So the simulation went like this: Holmberg started with a pair of these light-bulb galaxies next to each other, with the motion of each bulb existing only as a note in a table. He would then measure the light at each bulb, which told him the summed “gravitational” pull on that group of stars. He’d then adjust the velocities of all the bulbs according to Newton’s laws. Finally, he’d allow time to tick forward - he’d move all the bulbs according to their new velocities. Then, from that new configuration he’d start the process again. And so the two galaxies collided, and just as with our modern supercomputer simulations, he witnessed the destruction of the disks, the throwing-off of tidal streams, and the formation of an elliptical galaxy.
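If you want to play with Holmberg’s trick yourself, here’s a minimal sketch in Python - not his actual table-top bookkeeping, just an illustration of the analogy, with a made-up bulb layout - showing how an inverse-square “light intensity” summed over every other bulb stands in for the gravitational pull on one bulb.

```python
import numpy as np

def light_pull(positions, brightness, i, eps=0.05):
    """Inverse-square 'light' (standing in for gravity) on bulb i from all other bulbs.

    positions:  (N, 2) array of bulb coordinates on the table
    brightness: (N,) array standing in for each bulb's mass
    eps:        small softening so two nearly overlapping bulbs don't blow up
    """
    delta = positions - positions[i]                 # vectors pointing toward every other bulb
    dist2 = np.sum(delta**2, axis=1) + eps**2        # squared distances (softened)
    dist2[i] = np.inf                                # a bulb doesn't pull on itself
    # intensity falls as 1/r^2 and points along delta/r, so combine as delta / r^3
    return np.sum(brightness[:, None] * delta / dist2[:, None]**1.5, axis=0)

# Two 37-bulb "galaxies" side by side (positions are invented for illustration)
rng = np.random.default_rng(0)
galaxy_a = rng.normal(loc=(-1.5, 0.0), scale=0.5, size=(37, 2))
galaxy_b = rng.normal(loc=(+1.5, 0.0), scale=0.5, size=(37, 2))
bulbs = np.vstack([galaxy_a, galaxy_b])
brightness = np.ones(len(bulbs))

print(light_pull(bulbs, brightness, i=0))            # net "gravitational" pull on bulb 0
```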

Holmberg’s experiment was ingenious, but why go to all this effort? Since the time of Isaac Newton it had been possible to write down equations describing the trajectory of any pair of massive bodies moving in each other’s gravitational fields. And then to solve those equations to find their positions at any time in the future. But note I said “pair” of bodies. This only works for 2 objects. For 3 or more bodies there is no simple set of equations describing their future evolution. This is the so-called 3-body problem, which we’ve discussed in the past. The 3-body problem doesn’t have an analytical solution - no simple master equations - which is why Holmberg had to solve the problem with light bulbs. Holmberg did what we call a numerical calculation, in which an impossibly complex computation is broken down into a series of much simpler steps. This particular type of numerical calculation is called an N-body simulation.

Newton’s laws of motion and gravity are applied over a series of time steps. Each step is short enough that we can assume that the global gravitational field is constant - it only changes in the next step, after all the particles have made their moves. The predictions of these N-body simulations can be as accurate as you like, as long as you make the time steps small enough. And I would hope so, because this is essentially how we calculate the trajectories that place people on the Moon or land a robot on a comet. For simulating the solar system, this method is pretty reasonable. In fact, the room-sized computers used to calculate the Apollo trajectories had only a tiny fraction of the power of your smartphone.
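Here’s what that time-stepping loop looks like in code - a minimal sketch of direct-summation gravity with a leapfrog (“kick-drift-kick”) update, which is one common choice of integrator; the units, softening length, particle count, and step size are placeholder values, not anything from a real mission or published code.

```python
import numpy as np

G = 1.0           # gravitational constant in code units (placeholder)
SOFTENING = 1e-2  # keeps the force finite when two particles pass very close

def accelerations(pos, mass):
    """Direct summation: every particle feels every other particle (O(N^2))."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        delta = pos - pos[i]                                    # vectors to all other particles
        dist3 = (np.sum(delta**2, axis=1) + SOFTENING**2) ** 1.5
        acc[i] = G * np.sum(mass[:, None] * delta / dist3[:, None], axis=0)
    return acc

def step(pos, vel, mass, dt):
    """One leapfrog step: half kick, full drift, half kick."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
    pos = pos + dt * vel                              # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # half kick
    return pos, vel

# Toy run: 100 particles with random positions and small random velocities
rng = np.random.default_rng(1)
pos = rng.normal(size=(100, 3))
vel = 0.1 * rng.normal(size=(100, 3))
mass = np.full(100, 1.0 / 100)

for _ in range(1000):                                 # many small, fixed time steps
    pos, vel = step(pos, vel, mass, dt=1e-3)
```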

But if we want realistic simulations, say, of a galaxy with its billions of stars, we need to do a bit better. Fortunately, computing power has improved somewhat since Holmberg’s day and the Apollo missions - but actually, nowhere near enough to do N-body simulations of entire galaxies using the method I described. Let’s think about what this computation really needs. In the simplest type of N-body simulation you need to compute the effect of every particle on every other particle. So if there are N particles, that’s roughly N^2 calculations. For a modern million-particle simulation of a star cluster, that’s a trillion computations per time step.
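That back-of-envelope number is easy to check, and it also previews why the tricks described below matter so much - the comparison here just counts pair interactions, not actual floating-point operations.

```python
import math

N = 1_000_000                       # a million-particle star cluster
pairwise = N * N                    # brute force: every particle on every other particle
tree_like = N * math.log2(N)        # what a tree code costs, up to constant factors

print(f"{pairwise:.1e} interactions per step (brute force)")   # 1.0e+12
print(f"{tree_like:.1e} interactions per step (N log N)")      # about 2.0e+07
```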

The real challenge in astrophysics is that we often have to deal with a huge range in scale. For example, in a cosmological simulation we’re trying to watch the detailed formation of individual galaxies, as well as the evolution of giant clusters of galaxies. But if we keep enough resolution to get the fine structure, it’s computationally impossible to also simulate a volume large enough to capture the largest scales. To do these simulations, astrophysicists have come up with ingenious tricks.

Perhaps the most important is to avoid having to consider every single particle pair. Remember, the strength of gravity drops off with distance squared. For nearby particles it’s important to consider every individual interaction, but for more distant locations it’s okay to clump particles together and consider only their summed gravitational effect.

One of the original approaches is a so-called tree code. It works like this: you start with a volume full of particles, each with its starting position and velocity. Now divide up the volume into 8 sub-cubes. Then divide each of these into 8, and so on. You stop dividing in any given part of the volume when there is no more than one particle per cube. Next, you run an N-body simulation by calculating the summed gravitational pull on each given particle. But now you have a shortcut - when you calculate the effect from distant locations, you don’t do it for each particle - instead you do it for all the particles inside one of these cubes at once. The larger the distance, the larger the cube you can use. This sounds like a complicated process, but now the number of calculations you need to do goes down from N^2 to N-times-log-N, which for large particle numbers is much, much faster.
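What’s described above is essentially the Barnes-Hut algorithm. Here’s a compact sketch of the idea in Python: build the octree by recursive subdivision, have every cube remember its total mass and centre of mass, then walk the tree with an “opening angle” test that decides when a whole cube can stand in for the particles inside it. The opening angle, softening, and units are placeholder choices, and a real code would add many refinements.

```python
import numpy as np

G = 1.0           # gravitational constant in code units (placeholder)
THETA = 0.5       # opening angle: smaller means more accuracy, more work
SOFTENING = 1e-2  # keeps the force finite at tiny separations

class Cell:
    """A cube of the simulation volume: holds one particle, or 8 child cubes."""
    def __init__(self, center, size):
        self.center = np.asarray(center, dtype=float)  # geometric centre of the cube
        self.size = size                               # side length of the cube
        self.mass = 0.0                                # total mass inside
        self.com = np.zeros(3)                         # centre of mass of the contents
        self.children = None                           # list of 8 sub-cubes, or None
        self.particle = None                           # (position, mass) for a leaf

    def insert(self, pos, m):
        # Assumes no two particles share exactly the same position.
        if self.children is None and self.particle is None:
            self.particle = (pos, m)                   # empty leaf: store the particle
        else:
            if self.children is None:                  # occupied leaf: subdivide it
                self._subdivide()
                old_pos, old_m = self.particle
                self.particle = None
                self._child_for(old_pos).insert(old_pos, old_m)
            self._child_for(pos).insert(pos, m)
        # Keep running totals so every cube knows its mass and centre of mass
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m

    def _subdivide(self):
        q = self.size / 4
        self.children = [Cell(self.center + q * (2 * np.array(o) - 1), self.size / 2)
                         for o in np.ndindex(2, 2, 2)]

    def _child_for(self, pos):
        o = (pos > self.center).astype(int)            # which half along each axis
        return self.children[o[0] * 4 + o[1] * 2 + o[2]]

    def acceleration(self, pos):
        if self.mass == 0.0:
            return np.zeros(3)
        delta = self.com - pos
        dist = np.sqrt(np.sum(delta**2) + SOFTENING**2)
        # Leaf cube, or far enough away that the whole cube acts as one blob:
        if self.children is None or self.size / dist < THETA:
            return G * self.mass * delta / dist**3
        return sum(child.acceleration(pos) for child in self.children)

# Toy usage: a random cloud of 1000 equal-mass particles
rng = np.random.default_rng(2)
positions = rng.uniform(-1, 1, size=(1000, 3))
masses = np.full(1000, 1.0 / 1000)

root = Cell(center=np.zeros(3), size=4.0)              # a cube comfortably containing everything
for p, m in zip(positions, masses):
    root.insert(p, m)

print(root.acceleration(positions[0]))                 # pull on the first particle
```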

Another approach is the particle-mesh method, in which particles are converted into a density distribution and a gravitational potential across a grid. The force at each point can then be solved for using Fourier transform methods, which can be very fast. Adaptive particle meshes can be used to add higher resolution where needed - say, where the stars have higher density or structure. Modern mesh codes also do classic particle-particle interactions at short ranges to improve accuracy at small scales.
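Here’s a minimal sketch of the particle-mesh idea for a periodic box, assuming the simplest possible mass assignment (nearest grid point, where real codes use cloud-in-cell or better): deposit the particles onto a grid, solve Poisson’s equation for the potential with a fast Fourier transform, then difference the potential to get the force on the grid.

```python
import numpy as np

def particle_mesh_potential(pos, mass, box_size, n_grid, G=1.0):
    """Gravitational potential on a periodic grid from a set of particles."""
    # 1. Deposit mass onto the grid (nearest grid point, for simplicity)
    cell = box_size / n_grid
    idx = np.floor(pos / cell).astype(int) % n_grid          # periodic wrap
    density = np.zeros((n_grid,) * 3)
    np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]), mass / cell**3)

    # 2. Solve Poisson's equation  div(grad(phi)) = 4*pi*G*rho  in Fourier space:
    #    phi_k = -4*pi*G * rho_k / k^2
    rho_k = np.fft.fftn(density)
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=cell)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                         # avoid dividing by zero
    phi_k = -4 * np.pi * G * rho_k / k2
    phi_k[0, 0, 0] = 0.0                                      # drop the mean (k = 0) mode
    return np.real(np.fft.ifftn(phi_k))

def grid_forces(phi, box_size):
    """Force per unit mass = -grad(phi), via central differences on the periodic grid."""
    cell = box_size / phi.shape[0]
    return [-(np.roll(phi, -1, axis=a) - np.roll(phi, 1, axis=a)) / (2 * cell)
            for a in range(3)]

# Toy usage: 10,000 particles in a periodic box, potential on a 64^3 grid
rng = np.random.default_rng(3)
pos = rng.uniform(0, 100.0, size=(10_000, 3))
mass = np.full(10_000, 1.0)
phi = particle_mesh_potential(pos, mass, box_size=100.0, n_grid=64)
fx, fy, fz = grid_forces(phi, box_size=100.0)
```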

These mesh codes are useful for systems of particles interacting under gravity. But they also work for another really important astrophysical situation - flowing gas. The universe started as an ocean of gas a few hundred thousand years after the Big Bang. That gas is still everywhere - it flows into galaxies from beyond, rides their disks, fragments and collapses into stars, and forms whirlpools and jets around new stars and black holes. All of these processes are key parts of galaxy evolution and of star and planet formation, and so we’d better be able to simulate them. We call these hydrodynamic simulations - they simulate the flow of gas using the equations of fluid dynamics.

Particle-mesh approaches can do this, but it’s more common to use an approach called smoothed-particle hydrodynamics. “SPH” codes don’t use a rigid grid, but rather track tracer particles within the fluid - those particles effectively make up a constantly shifting grid. SPH codes are used to simulate the flows of gas in galaxies and around quasars, star and planet formation, and even their destruction in collisions or supernovae. SPH can even be used to model galaxy formation, where the stars are treated as a type of fluid.
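The core SPH operation is a kernel-weighted sum over neighbouring tracer particles. Here’s a minimal sketch using the standard cubic-spline smoothing kernel - a real code would use a neighbour tree instead of the brute-force loop shown here, and would adapt the smoothing length h to the local density.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline SPH kernel with compact support of radius h."""
    q = r / h
    w = np.zeros_like(q)
    norm = 8.0 / (np.pi * h**3)
    inner = q <= 0.5
    outer = (q > 0.5) & (q <= 1.0)
    w[inner] = norm * (1 - 6 * q[inner]**2 + 6 * q[inner]**3)
    w[outer] = norm * 2 * (1 - q[outer])**3
    return w

def sph_density(pos, mass, h):
    """Density at each tracer particle: a kernel-weighted sum over its neighbours."""
    density = np.zeros(len(pos))
    for i in range(len(pos)):
        r = np.sqrt(np.sum((pos - pos[i])**2, axis=1))   # distances to every particle
        density[i] = np.sum(mass * cubic_spline_kernel(r, h))
    return density

# Toy usage: a blob of gas particles with a fixed smoothing length
rng = np.random.default_rng(4)
pos = rng.normal(size=(2000, 3))
mass = np.full(2000, 1.0 / 2000)
rho = sph_density(pos, mass, h=0.3)
print(rho.min(), rho.max())
```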

In practice, modern simulations often use an amalgam of these methods - for example, SPH for large-scale flows and particle-particle N-body for small-scale interactions. Astrophysicists often inject all sorts of other physics into their simulations. In your galaxy simulation you might need a separate prescription that describes how stars age and die. In your quasar disk you need to separately simulate how light travels through the hot plasma. And don’t get me started on the complexity of including magnetic fields, or of Einstein’s general relativity when the gravitational field becomes very strong.

With the techniques now available, using parallelized code on modern computing clusters, we can produce some pretty insane simulations. We can see how stars form in multitudes from collapsing gas clouds, and how planets then coalesce in the disks surrounding those stars. We can watch as galaxies form, with gas and dark matter interacting to produce waves of star formation and supernovae, settling into spiral structures - just like we see in the real universe. And then we have cosmological simulations, which create entire virtual universes, from the moment the first atoms formed to the modern day. One of the most famous is the Millennium simulation from the Max Planck Institute in Germany. It simulated a cube around 13 billion light-years wide containing over 300 billion particles, each representing a billion Suns’ worth of dark matter. But the current largest cosmological simulation is AbacusSummit, which recently simulated roughly 60 trillion particles on the Summit supercomputer at Oak Ridge National Laboratory.

So how far can this go? Can we ever simulate a real universe, in which creatures evolve that can themselves simulate universes? In fact, could we be such creatures? Probably not. None of these simulations contains the full information of an actual universe - or even a tiny part of it. As we discussed recently, a full quantum description of the world contains unthinkably more information than is contained in a typical simulation, which just tracks particle positions and velocities. No conceivable technology could fully simulate a quantum universe, except perhaps a cosmically sized quantum computer. Which is kind of what the universe is. We’ve talked about this simulation hypothesis stuff before, and maybe we’ll come back to it. For now let’s just be proud of the science we can get from modern simulations. Because we’ve come an awfully long way since Erik Holmberg pushed some light bulbs around on a table. And the fidelity and size of our simulations will only get better as computing power continues its exponential growth, and as we invent better and better algorithms. Perhaps there’s no limit to what we can learn about the outside universe by rebuilding it inside our computers, and then peering into a simulated spacetime.

Comments

Anonymous

"...using paralyzed code"? I think you meant parallelized. Otherwise, that's a great article.

pbsspacetime

Amazingly both of these jokes work: Why couldn't the code cross the road? It was paralyzed code. or Why couldn't the code cross the road? It was parallelized code.

Anonymous

Oooh, so it’s hard to compute the trajectories of > 2 bodies in the gravitational field of each other 🤔 try to compute my blood sugar one hour from now 😃