If we ever want to simulate a universe, we should probably learn to simulate even a single atomic nucleus. But it’s taken some of the most incredible ingenuity of the past half-century to figure out how to do that. All so that today I can teach you how to simulate a very, very small universe.

Physics has been insanely successful at finding the underlying rules by which the universe operates. It helps that a lot of those rules seem to be mathematical. By writing down the laws of physics as equations, we can make predictions about how the universe should behave - that lets us test our theories and basically become wizards capable of predicting the future and manipulating the foundations of reality.

But our wizarding only works if we can solve the equations, and it's impossible to calculate perfectly the evolution of all but the most simple systems. That’s especially true when we study the quantum world, where the information density is obscenely high. As we saw in a previous episode, it takes as many bits as there are particles in the universe to store all the information in the wavefunction of a single large molecule. We also talked about the hack that lets us do it anyway - density functional theory.

DFT is good for simulating the electrons in an atom. But the behaviour of electrons is baby stuff compared to the atomic nucleus. Every proton and neutron is composed of three quarks stuck together by gluons. Well, actually, that’s a simplification.

Every nucleon is a roiling, shifting swarm of virtual quarks and gluons that just LOOKS like three quarks from the outside. The messy interactions of quarks via gluons are described by quantum chromodynamics, or QCD, in the same way that quantum electrodynamics describes the interactions of electrons and any other charged particles via photons. We’re going to come back to a full description of QCD very soon, but you don’t need it for this video. Today we just need to understand why it’s complicated.

Instead of the one type of charge in QED, in QCD there are three - which we call colour charges, hence the chromo in chromodynamics. That’s the first complication. The second is that quarks never appear on their own - they’re always bound to other quarks in composite particles called hadrons, of which protons and neutrons are examples. To test QED we can chuck a photon at an electron and see what happens. But to test QCD, we can’t just poke a quark with a gluon. Instead we need to figure out what the theory predicts about properties of hadrons that are actually measurable. And that’s near impossible because of the third problem. The force mediated by gluons is very strong - earning it the name the strong force. And that strength turns the interior of a hadron into a maelstrom of activity which can’t possibly be calculated on a blackboard, and at first glance looks impossible to simulate on any computer we could ever build. Or it would be, if it weren’t for the fact that people are exceptionally clever, and came up with lattice simulations.

Before we do the hard stuff, let’s review the comparatively “easy” quantum electrodynamics. Say we want to predict what happens when two electrons are shot towards each other. We can actually calculate the almost exact probability of them bouncing apart with a given speed and angle. We do that by adding up all the possible ways that interaction could happen. For example, there are various ways the first electron could emit a photon which is absorbed by the second, or vice versa. Or it could happen via two photons or more, or one of those photons could spontaneously form an electron-positron pair before becoming a photon again, and so on. Each family of interaction types is represented by a Feynman diagram, and quantum electrodynamics gives us a recipe book for adding up the probabilities. We talked about this stuff in some previous videos. But there are literally infinite ways this interaction could happen, each more complex than the last. So how do you know when to stop adding new diagrams?

We kind of got lucky with that. As the interactions get more complicated, their probabilities get smaller and smaller. Diagrams with more than a few twists and turns add almost nothing to the probability, so we only have to include the simplest few levels. Each pair of vertices in a Feynman diagram represents the probability of a pair of electrons interacting with the electromagnetic field - emitting and absorbing a virtual photon. And there’s a set probability of that happening each time - it’s around 1/137. So every time you add another pair of vertices to a Feynman diagram, the interaction it represents becomes 137 times less likely. A diagram with only 6 vertices is nearly 20,000 times less likely than the simplest 2-vertex diagram. So we just choose the precision we want for our calculation and ignore any interactions that nudge the probability by less than that.
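
To spell out that arithmetic: going from the simplest 2-vertex diagram to a 6-vertex diagram adds two extra vertex pairs, each costing a factor of roughly 1/137 in probability:

\[
\left(\frac{1}{137}\right)^{2} \;=\; \frac{1}{18{,}769} \;\approx\; \frac{1}{20{,}000}
\]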

This 1/137 thing is the fine structure constant. It’s the coupling strength between the electron and electromagnetic field. The smallness of the fine structure constant means the electromagnetic interaction is relatively weak. Relative to the strong force anyway.

OK, back to the strong force. Now we have two quarks hurtling towards each other. They’re going to interact by the strong force, which is mediated by virtual gluons of the gluon field rather than virtual photons of the electromagnetic field. We can draw Feynman diagrams of these interactions, now with curly gluon lines. Presumably, to calculate the probability of a given interaction we again just add up diagrams, with the probability of each diagram determined by the number of vertices.

The coupling strength for the strong nuclear force is ingeniously named the strong coupling constant. It’s much higher than the fine structure constant - of order 1, though it depends on energy scale. That’s what makes the strong force strong, and it’s also what makes strong force interactions very difficult to calculate. You no longer have the luxury of throwing away all but the simplest Feynman diagrams when you’re calculating the interaction probability for your pair of quarks. Adding new vertices doesn’t decrease the probability anywhere near as quickly as with QED.
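
To see why throwing away diagrams stops working, here’s a minimal sketch comparing how fast successive diagram orders shrink. It assumes the rough picture above - one factor of the coupling per extra vertex pair - and uses an illustrative value of 0.5 for the strong coupling, since the true value depends on energy scale:

```python
# Rough illustration only: relative size of successive diagram orders,
# assuming each extra vertex pair multiplies the contribution by the coupling.
ALPHA_QED = 1 / 137   # fine structure constant (approximate)
ALPHA_QCD = 0.5       # strong coupling, order 1 at nuclear energies (illustrative)

for order in range(1, 6):
    print(f"order {order}:  QED term ~ {ALPHA_QED ** order:.1e}   "
          f"QCD term ~ {ALPHA_QCD ** order:.1e}")

# QED terms shrink by ~1/137 per order, so a handful of diagrams suffice.
# QCD terms barely shrink, so no finite set of diagrams is a good approximation.
```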

This means you can’t possibly do this calculation with pen and paper. In fact, it’s extremely challenging to do QCD with the Feynman diagram approach even with computers. Before any particle physicists start shouting at me, I’ll quickly add the caveat that there are some unusual cases where the strong coupling constant becomes small and quarks can be understood with Feynman diagrams - this is the phenomenon known as asymptotic freedom. But if I tried to explain that too, we’d be here all day.

In general, if we can’t calculate what quantum chromodynamics predicts for the behaviour of quarks, how can we even test the theory? Well, we need to abandon Feynman diagrams. In fact, we need to abandon the idea of particles altogether. See, it turns out that the virtual particles that we were using to calculate particle interactions … don’t actually exist. We’ve talked about that fact previously. Real particles are sustained oscillations in a quantum field that have real energy and consistent properties. Virtual particles are just a handy calculation tool - a way of representing something deeper. They represent the transient disturbances in quantum fields due to the presence of real particles that couple to those fields.

The strong nuclear force - the coupling between the quark and gluon fields - is so intense that the disturbances of those fields are way too tumultuous to be easily approximated by virtual particles. Instead we have to try to model the fields more directly. That’s where lattice QCD comes in. It’s an effort to model how the quantum fields themselves evolve over the course of a strong force interaction. Similar to how Feynman diagrams work, to do this you need to account for all possible paths between the starting and final field configurations to get the probability of that transition happening.

Now, there’s a good reason we don’t do this for electromagnetism: there’s an astronomical number of configurations that the field could pass through in the intervening time. No supercomputer could do that even given the entire life of the universe. For QED, Feynman diagrams let us reduce the number of field configurations by approximating them as virtual particles. For QCD we have to stick with fields, so we need a different hack. In fact we need a few of them.

Let’s make sure we understand exactly what we’re trying to do here. We want the probability that some wiggly quantum field wiggles its way between two states. Let’s go back to electromagnetism just for a second, because there’s an analogous case there, and one that we already covered. Before Richard Feynman came up with his famous diagrams, he devised a way to calculate quantum probabilities called the Feynman path integral. It calculates the probability that a particle will move from one location to another by adding up the probabilities of all possible paths between those points. Actually, it also includes the impossible paths, but no time to explain that now. Every time you add the probability for a single Feynman diagram, you’re really adding up an infinite number of possible trajectories using the Feynman path integral.
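
In symbols, and glossing over normalisation, the path integral amplitude for a particle going from position x_a at time t_a to x_b at time t_b is a sum over every path, each weighted by a phase set by the classical action S:

\[
\langle x_b, t_b \,|\, x_a, t_a \rangle \;=\; \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar},
\qquad
S[x(t)] = \int_{t_a}^{t_b} L\big(x(t), \dot{x}(t)\big)\, dt
\]

The probability is the squared magnitude of that summed amplitude - this “adding up phases” is the bookkeeping we’ll keep leaning on below.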

In lattice QCD, we want to do something like a Feynman path integral, but instead of trajectories through physical space we add up trajectories through the space of field configurations. That’s … much harder, for three reasons. First, any patch of spacetime technically contains an infinite number of points and no computer can hold an infinite amount of memory. So we need to pixelate spacetime so there’s only a finite number of points.
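
As a minimal sketch of what “pixelating” buys us - with made-up grid sizes, and a single real number standing in for the much richer quark and gluon degrees of freedom - a discretised field is just a finite array a computer can actually hold:

```python
import numpy as np

# Hypothetical toy discretisation: one real number per lattice site.
N, T = 8, 16            # spatial sites per direction and time slices (illustrative)
a = 0.1                 # lattice spacing - the "pixel" size, in femtometres say

field = np.zeros((T, N, N, N))   # a finite grid instead of a continuum of points
print(field.size, "numbers to store, instead of infinitely many")
```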

But even then, there’s still an astronomical number of ways that the field can move from the starting to final configuration. And because these configurations are messy, we can’t do a simple integration like in the Feynman path integral. So our second trick is called Monte Carlo sampling. This is an extremely common computational method in which you do your calculation based on randomized selections from some distribution.
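
As a generic illustration of Monte Carlo sampling - nothing QCD-specific - here’s the classic estimate of pi: instead of summing over every point in a square, we draw random samples and average what we find.

```python
import random

# Monte Carlo estimate of pi: sample random points in the unit square and
# count the fraction landing inside the quarter circle of radius 1.
samples = 100_000
inside = sum(
    1 for _ in range(samples)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
print("pi is roughly", 4 * inside / samples)
```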

So we randomly choose a selection of field configurations of our pixelated space that get us from the start to the end of our interaction. But these can’t be totally random, because some of these paths are still more likely than others. In the Feynman path integral, the probability of each path comes from adding up all the little shifts in the particle’s phase from each step. Then, at the end of the path, you add together the phases of all paths to get a probability. This is exactly like the famous double slit experiment, where the probability of a particle landing at a certain point on the screen depends on whether the phases of different paths through the two slits add together or cancel out.
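
For just two paths with phases φ₁ and φ₂, that double-slit bookkeeping looks like this - the relative phase decides whether the two contributions reinforce or cancel:

\[
P \;\propto\; \left| e^{i\varphi_1} + e^{i\varphi_2} \right|^{2} \;=\; 2 + 2\cos(\varphi_1 - \varphi_2)
\]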

Our quantum field is a 3-D pixelated lattice that evolves through time. As with the path integral, each time step results in a complex-valued phase shift at each spatial point. Those complex numbers are fine in the Feynman path integral because in the end they get squared into real probabilities. But they’re very difficult to deal with in Monte Carlo approaches. So we need one more hack - our most ingenious yet. We’re going to pretend that time is just another dimension of space. This operation is called the Wick rotation, and it eliminates the complex nature of the phase shifts. If we also “pixelate” the time dimension, then we have a lattice with 4 spatial dimensions.
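
Schematically, the Wick rotation substitutes imaginary time, which turns the oscillating quantum phase into a real, exponentially damped weight - exactly the kind of thing Monte Carlo sampling handles well:

\[
t \;\to\; -i\tau
\qquad\Longrightarrow\qquad
e^{\,i S/\hbar} \;\to\; e^{-S_E/\hbar}
\]

Here S_E is the “Euclidean” action - a real number - so every field configuration now carries a real, positive weight, just like a Boltzmann factor in statistical mechanics.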

Now our coupled quark-gluon fields look like this: a lattice of points with connections. The points are the quark field and the connections are the gluon field. Getting rid of the complex quantum phases means this isn’t really even a quantum problem any more. The structure looks like a crystal - admittedly a 4-dimensional one, but a classical crystal. And guess what - we understand how crystals work extremely well. We can now simulate this lattice using the laws of statistical mechanics.
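
Real lattice QCD codes are enormous, but the statistical-mechanics engine at their core is a Metropolis-style Monte Carlo update. Here’s a minimal toy analogue on a 2D lattice of spins - an Ising model, not QCD - just to show the shape of the algorithm: propose a local change, then accept it with a probability set by how much it changes the energy (the stand-in for the Euclidean action):

```python
import math
import random

# Toy analogue only: Metropolis sampling of a 2D Ising model, standing in for
# sampling lattice field configurations weighted by exp(-S_E).
L = 16                   # lattice size (illustrative)
beta = 0.6               # inverse temperature, setting the weight's scale
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def delta_energy(x, y):
    """Energy change from flipping the spin at (x, y), with periodic boundaries."""
    s = spins[x][y]
    neighbours = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y] +
                  spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
    return 2 * s * neighbours

for step in range(100_000):
    x, y = random.randrange(L), random.randrange(L)
    dE = delta_energy(x, y)
    # Metropolis rule: always accept changes that lower the energy,
    # otherwise accept with the Boltzmann-like probability exp(-beta * dE).
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[x][y] *= -1

print("average spin after sampling:", sum(map(sum, spins)) / L**2)
```

Swap the spins for quark and gluon field variables and the energy for the Euclidean QCD action, and you have the skeleton of how these lattices are actually sampled.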

We’re not all the way there yet. After all, spacetime isn’t really a discrete lattice of points. And it turns out that the things you want to calculate, like the mass of a hadron, DO depend on your choice of pixel size. But in a very simple way. You can run your simulation multiple times for different lattice spacings to figure out that relationship. For example, the prediction for neutron mass gets larger with increasing lattice spacing, but if you draw a trend line through your simulation results you can find the neutron mass in the case of zero spacing - a continuous spacetime. And guess what - it works.
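
That extrapolation to zero spacing is just a curve fit. With invented numbers purely for illustration, it might look like this: fit the simulated masses against the lattice spacing and read off the value at a spacing of zero.

```python
import numpy as np

# Hypothetical results: hadron mass (GeV) from runs at several lattice spacings (fm).
spacings = np.array([0.12, 0.09, 0.06, 0.045])
masses   = np.array([0.985, 0.972, 0.961, 0.955])   # invented numbers

# Fit a simple polynomial trend and evaluate it at zero spacing,
# i.e. extrapolate to continuous spacetime.
coeffs = np.polyfit(spacings, masses, deg=2)
continuum_mass = np.polyval(coeffs, 0.0)
print(f"extrapolated continuum mass: {continuum_mass:.3f} GeV")
```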

This trick of transforming quantum fields into a lattice was first discovered by Ken Wilson all the way back in 1974, back when you’d be lucky to fit a kilobyte of RAM on a computer. Since then, and with the help of vastly improved computational resources, lattice QCD has accurately predicted many things, from the masses and decay rates of hadrons to the exotic properties of quark-gluon plasma. These simulations were also an essential part of the prediction side of the new muon g-2 results.

The fact that lattice QCD even works gives us deep insights into the nature of quantum fields. For one thing, it doesn’t use virtual particles at all, but rather simulates the quantum fields more or less directly. That helps us put to bed the idea that virtual particles are anything more than an approximation of what these messy fields are really doing during an interaction.

So there’s your whirlwind introduction to lattice QCD. As our computing power grows, one day we’ll likely be able to build detailed simulations of entire collections of hadrons, like the nucleus of a single atom. We will never simulate a whole universe this way - nor any other way, in all likelihood. But we’re going to learn so much just simulating such tiny patches of spacetime.

Comments

Rileymetal

What happens inside a proton? Positivity.

Ted Jones

Has Spacetime done an episode on the Weak Force? I think with the weak force you are looking at a waveform and probabilities -- like with QED -- but I could have that wrong. Why an isotope would seem to be stable for millions of years and then suddenly radiate a particle is more interesting to me than the messy confusion going on inside a hadron.

Anonymous

As a follow up, could you do an episode on the Nielsen Ninomiya theorem?