What is going wrong in particle physics?
[This is a transcript of the video.]
If you follow news about particle physics, then you know that it comes in three types. It’s either that they haven’t found that thing they were looking for. Or they’ve come up with something new to look for. Which they’ll later report not having found. Or it’s something so boring you don’t even finish reading the headline.
Why do particle physicists constantly make wrong predictions? And what'll happen next? That's what we'll talk about today.
The list of things that particle physicists said should exist but that no one’s ever seen is very long. No supersymmetric particles, no proton decay, no dark matter particles, no WIMPs, no axions, no sterile neutrinos. There’s about as much evidence for any of those as for Bigfoot, though Bigfoot would probably have got me more views. Some particle physicists even predicted unparticles, and those weren’t found either. It’s been going like this for 50 years, ever since the 1970s.
In the 1970s, particle physicists completed what's now called the standard model. The standard model of particle physics collects all the fundamental particles that matter is made of and their interactions. When the model was completed, not all these particles had yet been measured. But one after the other, they were experimentally confirmed.
The W and Z bosons were discovered in 1983 at CERN, the top quark was discovered in 1995 at Fermilab. And the last one was the Higgs boson, which was found at CERN in 2012. It was the final piece of the standard model. There are no more particles left to look for.
But particle physicists believed there’d be more to find. Indeed, I’d guess, most of them still believe this today. Or at least they’d tell you they believe it.
Already in the 1970s they said that the standard model wasn’t good enough because it collects three different fundamental forces: That’s the electromagnetic, the strong, and the weak nuclear force. Particle physicists wanted those to be unified to one force. Why? Because that’d be nicer.
Theories which combine these three forces are called “grand unified theories”. You get them by postulating a bigger symmetry than that of the standard model. Grand Unified Theories, GUTs for short, reproduced the standard model in the range that it had been tested already but led to deviations in untested ranges.
I’d say at the time grand unification was a reasonable thing to try. Because symmetry principles had worked well in physics in the past. The standard model itself was born out of symmetry principles. And even though Einstein himself – yes, that guy again – didn’t use symmetry arguments, we today understand his theories as realizations of certain symmetries. But this time, more symmetries didn’t work.
Grand unified theories made a prediction, which is that one of the constituents of atomic nuclei, the proton, is unstable. Starting in the 1980s, experiments looked for proton decay. They didn’t see it. This ruled out several models for grand unification. But you can make those models more complicated so that they remain compatible with observations. That’s what physicists did. And that’s where the problems began.
Next there was the axion. The standard model contains about two dozen numbers that must be determined by experiment. One of them is known as the theta parameter. Experimentally it's been found to be zero, or so small it's indistinguishable from zero. If it was non-zero, then the strong nuclear force would violate a symmetry known as CP-symmetry. That the theta parameter is zero or very small is known as the strong CP problem.
It isn't really a problem because the standard model works just fine with simply setting the theta parameter to zero. But particle physicists don't like small numbers. It's a feeling that I'm sure most of us have experienced when looking at our bank statements, but particle physicists are somewhat more accepting. They accept small numbers if there's a mechanism keeping them small.
The standard model has no such mechanism. This is why, to make the small theta parameter acceptable, particle physicists added a mechanism to the standard model that'd force the parameter to be small. But a consequence of this modification was the existence of a new particle, which Frank Wilczek called the "axion" in 1978. The name's a pun on the symmetry axis of the mechanism and the name of an American laundry detergent, because the axion particle was a particularly clean solution.
Unfortunately, the axion turned out to not exist. If the axion existed, neutron stars would cool very quickly which we don’t observe. With this argument, the axion was experimentally ruled out almost as quickly as it was introduced, in 1980.
But physicists didn’t give up on the axion. Like with grand unification, they changed the theory so that it’d evade the experimental constraints. The new type of axion was introduced in 1981 and was originally called the “harmless axion”. It was then for some while called the “invisible axion,” but today it is often just called the “axion”. Lots of experiments have looked and continue to look for these invisible axions. None was ever detected, but physicists still look for their invisible friends.
Wilczek by the way invented another particle in 1982 which he called the “familon”. No one’s found that either.
Yet another flawed idea that particle physicists came up with in the 1970s is supersymmetry. Supersymmetry postulates that all particles in the standard model have a partner particle. This idea was dead on arrival, because those partner particles have the same masses as the standard model particles that they belong to. If they existed, they’d have shown up in the first particle colliders, which they did not.
Supersymmetry was therefore amended immediately, so that the supersymmetric partner particles would have much higher masses. It takes high energies to produce heavy particles, so it’d take big particle colliders to see those heavy supersymmetric particles.
The first supersymmetric models made predictions that were tested in the 1990s at the Large Electron Positron Collider at CERN. Those predictions were falsified. Supersymmetry was then amended again to prevent the falsified processes from happening. The next bigger collider, the Tevatron, was supposed to find them. That didn't happen. Then they were supposed to show up at the Large Hadron Collider. And that didn't happen either.
Particle physicists continue to change and amend those supersymmetric models so that they don’t run into conflict with new data.
The reason particle physicists liked supersymmetry, besides that it neatly abbreviates to SUSY, was that they claimed it'd solve what's known as the "hierarchy problem." That's the question of why the mass of the Higgs boson is so much smaller than the Planck mass. You may say, well, why not? And indeed, there's no reason why not.
The mass of the Higgs boson is a constant of nature. It’s one of those free parameters in the standard model. This means you can’t predict it, you just go and measure it. Supersymmetry doesn’t change anything about this. The Higgs boson mass is still a free parameter in a supersymmetric extension of the standard model, and you still cannot predict it. Supersymmetry therefore does not “explain” the mass of the Higgs boson. You measure it and that’s that.
Then there are all kinds of dark matter particles. A type that is particularly popular is called “Weakly Interacting Massive Particles”, WIMPs for short. Experiments have looked for WIMPs since the 1980s. They haven’t found them. Each time an experiment came back empty-handed, particle physicists claimed the particles were a little bit more weakly interacting, and said they need a better detector.
There are more experiments that have looked for all kinds of other particles and continue to not find them. There are headlines about this literally every couple of weeks. The PandaX-4T experiment looked for light fermionic dark matter. They didn't find it. The STEREO experiment looked for sterile neutrinos. They didn't find them. CDEX didn't find light WIMPs, HESS didn't find any evidence for WIMP annihilation, the MICROSCOPE experiment didn't find a fifth force, an experiment called SENSEI didn't find sub-GeV dark matter. And so on.
The pattern is this: Particle physicists invent particles, make predictions for those invented particles, and when these predictions are falsified, they change the model and make new predictions. They say it’s good science because these hypotheses are falsifiable. I’m afraid most of them believe this.
But just because a hypothesis is falsifiable doesn’t mean it’s good science. And no, Popper didn’t say that a hypothesis which is falsifiable is also scientific. He said that a hypothesis which is scientific is also falsifiable. In case you’re a particle physicist, here’s a diagram that should help. Example: Tomorrow you will receive 1000 dollars from my friend the prince of Nigeria. Falsifiable but not scientific.
The best way to see that what particle physicists are doing isn’t good science is by noting that it’s not working. Good scientists should learn from their failures, but particle physicists have been making the same mistakes for 50 years.
But why is it not working? I'll try to illustrate this with a simple sketch. If you understand the following two minutes, you can outsmart most particle physicists, and you don't want to miss that opportunity, do you?
Suppose you have a bunch of data and you fit a model to it. The model is this curve. You can think of the model as an algorithm with input parameters if you like, or just a set of equations that you work out by hand. Either way, it's a bunch of mathematical assumptions.
If you make a model more complicated by adding more assumptions, you can fit the data better, but the more complicated the model becomes, the less useful it will be. Eventually the model is more complicated than the data. At this point you can fit anything, and the model is entirely useless. This is called “overfitting”. The best model is one that reaches a balance between simplicity of the model and accuracy of the fit. Let’s suppose it’s this one.
If you get new data and the data do not agree with what was previously your best model, then you improve the model. This is normal scientific practice, and this is probably what particle physicists think they are doing. But it’s not what they are doing. The currently best model is the standard model and all the data agree with it, so there’s no reason to amend it.
Here is what they are doing instead. Let’s imagine that this curve is the standard model and this is all the existing data. And imagine we have a particle physicist, let’s call him Bob. Bob says, that’s nice, but we haven’t checked the model over here. And, he says, I could make this model more complicated so that the curve goes instead this way. Or that way. Or any other way. I’ll pick this one, call this my “prediction”, and hey, I’ll publish it in PRL.
Why do I predict it? Because I can. Because, you see, my model agrees with all the data, so this prediction could be correct, right? Right? And it's falsifiable. Therefore, I am a good scientist. And all of Bob's friends with all their different predictions say the same. They are all good scientists. Every single one of them. And as a result of all that "good science," we get any possible prediction.
Then they do an experiment and the data come in and, wouldn't you know it, they agree with the standard model. And Bob and all his friends say: Oh well, no worries, we'll update our prediction. Now the deviations are in this range where we still haven't measured, so we need bigger experiments. And also, I'll write a new paper about it.
What’s the problem with that procedure? The problem is that those models with all their different predictions are unnecessarily complicated. They should never have been put forward. They are not scientific hypotheses, they are made-up stories, like my friend the prince of Nigeria who will send you money tomorrow. Though, if you send me one hundred dollars today I’ll have another word with him.
There are only two justifications for making a model more complicated. The first is if you already have data that requires it. We can call this an inconsistency with data. The second is if the model isn’t working properly: it makes several predictions that contradict each other, or no predictions at all. We can call this an internal inconsistency.
And that’s what’s going wrong in particle physics. They have no justification for making the standard model more complicated. When they do it nevertheless, it isn’t working, because that’s just not how science works. If you change a good model, then that change should be an improvement, not a complication.
I believe the reason they don't notice what they're doing is that they have invented all these pseudo-problems that the complicated models are supposed to fix. Like the absence of unification. Or some parameters being small. These aren't real problems because they do not prevent them from making predictions with the standard model. They are just aesthetic misgivings. In fact, if you look at the list of unsolved problems in the foundations of physics on Wikipedia, most problems on the list are pseudo-problems. I have a list in which I distinguish real from pseudo-problems; I'll leave a link in the info below.
And there are a few real problems in the foundations of physics. But they are difficult to solve and particle physicists don’t seem to like working on them, but then I repeat myself.
I’ve been giving many talks about this. It hasn’t made me friends among particle physicists. But it’s not like I am against particle physics. I like particle physics. That’s why I talk about those problems. It bothers me that they’re not making progress.
There are some common replies that I get from particle physicists. The first is to just deny that anything is wrong. Because, hey, they are writing so many papers and holding so many conferences. Or they will argue that it sometimes just takes a long time to find evidence for a new prediction. For example, it took more than 30 years from the hypothesis of the neutrino to its confirmation. It took half a century to directly detect gravitational waves, and so on.
But both of those objections are beside the point. The issue isn’t that it’s taking a long time. The issue is that particle physicists make all those wrong predictions. And that they think that’s business as usual.
The next objection they bring up is normally that, yes, there are all those wrong predictions, but they don’t matter. The only thing that matters is that we haven’t tested the standard model in this or that range, and we should.
The problem with this argument is that there are thousands of possible tests we could do in physics, and all of them cost money, sometimes a lot of money. We must decide which tests are the most promising ones, the ones most likely to lead to progress. That’s why we need good predictions for where something new can be found. And that’s why all those wrong predictions are a problem.
Particle physicists know of course that predictions are important, because that’s why they always claim that some new experiment would be able to rule out this or that particle. Though they usually don’t mention that there wasn’t any reason to think those particles existed in the first place.
Besides, in which other discipline of science do we excuse thousands of wrong predictions with saying it doesn’t matter?
Another common reply I get from particle physicists is that it doesn't matter that those models are all wrong, because while they're working on them, they might stumble over something else that's interesting. And that's possible, but it's not a good strategy for knowledge discovery. As I've already said a few times, as a matter of fact it does not work.
Also if that’s really the motivation for their work, then I think they should put this into their project proposals: Hey, I don’t actually think that those particles I’m going on about here exist but please give me money anyway because I’m smart and maybe while I write useless papers I’ll have a good idea about something else entirely. I’m sure that’ll fly.
Another objection that particle physicists often bring up is that this guessing worked in the past. But if you look at past predictions in the foundations of physics which turned out to be correct, and that did not just confirm an existing theory, then it was those which made a necessary change to the theory. The Higgs boson, for example, is necessary to make the standard model work. Anti-particles, predicted by Dirac, are necessary to make quantum mechanics compatible with special relativity. Neutrinos were necessary to explain observations. Three generations of quarks are necessary to explain CP violation. And so on.
That the physicists who made those predictions didn’t always know that doesn’t matter. The point is that we can learn from this. It tells us that a good strategy is to focus on necessary changes to a model, those that resolve an inconsistency with data, or an internal inconsistency.
One final objection I want to mention usually doesn’t come from particle physicists, but from people in other fields who think that we need all those models to explain dark matter. But that’s mixing up two different things.
We need either dark matter or a modification of gravity to explain observations in astrophysics and cosmology. But if it’s dark matter, then the only thing we need to explain observations is how the mass is distributed. Details about the particles, if they exist, are unnecessary. What particle physicists do is guessing these unnecessary details. They guess, for example, that those particles will be produced at some particle collider. Which then doesn’t happen.
So what will happen to particle physicists? Well, if you extrapolate from their past behaviour to the future, then the best prediction for what will happen is: Nothing. They will continue doing the same thing they’ve been doing for the past 50 years. It will continue to not work. Governments will realize that particle physics is eating up a lot of money for nothing in return, funding will collapse, people will leave, the end.