
[This is a transcript of the video.]

If you follow news about particle physics, then you know that it comes in three types. It’s either that they haven’t found that thing they were looking for. Or they’ve come up with something new to look for. Which they’ll later report not having found. Or it’s something so boring you don’t even finish reading the headline.

How come particle physicists constantly make wrong predictions? And what’ll happen next? That’s what we’ll talk about today.

The list of things that particle physicists said should exist but that no one’s ever seen is very long. No supersymmetric particles, no proton decay, no dark matter particles, no WIMPs, no axions, no sterile neutrinos. There’s about as much evidence for any of those as for Bigfoot, though Bigfoot would probably have got me more views. Some particle physicists even predicted unparticles, and those weren’t found either. It’s been going like this for 50 years, ever since the 1970s.

In the 1970s, particle physicists completed what’s now called the standard model. The standard model of particle physics collects all the fundamental particles that matter is made of and their interactions. When the model was completed, not all these particles had yet been measured. But one after the other, they were experimentally confirmed.

The W and Z bosons were discovered in 1983 at CERN, the top quark was discovered in 1995 at Fermilab. And the last one was the Higgs boson, which was found at CERN in 2012. It was the final piece of the standard model. There are no more particles left to look for.

But particle physicists believed there’d be more to find. Indeed, I’d guess, most of them still believe this today. Or at least they’d tell you they believe it.

Already in the 1970s they said that the standard model wasn’t good enough because it collects three different fundamental forces: That’s the electromagnetic, the strong, and the weak nuclear force. Particle physicists wanted those to be unified into one force. Why? Because that’d be nicer.

Theories which combine these three forces are called “grand unified theories”. You get them by postulating a bigger symmetry than that of the standard model. Grand unified theories, GUTs for short, reproduced the standard model in the ranges where it had already been tested, but led to deviations in untested ranges.

I’d say at the time grand unification was a reasonable thing to try. Because symmetry principles had worked well in physics in the past. The standard model itself was born out of symmetry principles. And even though Einstein himself – yes, that guy again – didn’t use symmetry arguments, we today understand his theories as realizations of certain symmetries. But this time, more symmetries didn’t work.

Grand unified theories made a prediction, which is that one of the constituents of atomic nuclei, the proton, is unstable. Starting in the 1980s, experiments looked for proton decay. They didn’t see it. This ruled out several models for grand unification. But you can make those models more complicated so that they remain compatible with observations. That’s what physicists did. And that’s where the problems began.

Next there was the axion. The standard model contains about two dozen numbers that must be determined by experiment. One of them is known as the theta parameter. Experimentally it’s been found to be zero or so small it’s indistinguishable from zero. If it was non-zero, then the strong nuclear force would violate a symmetry known as CP-symmetry. That the theta parameter is zero or very small is known as the strong CP problem.

It isn’t really a problem because the standard model works just fine with simply setting the theta parameter to zero. But particle physicists don’t like small numbers. It’s a feeling that I’m sure most of us have experienced when looking at our bank statements, but particle physicists are somewhat more accepting. They accept small numbers if there’s a mechanism keeping them small.

The standard model has no such mechanism. This is why, to make the small theta parameter acceptable, particle physicists added a mechanism to the standard model that’d force the parameter to be small. But a consequence of this modification was the existence of a new particle, which Frank Wilczek called the “axion” in 1978. The name’s a pun on the symmetry-axis of the mechanism and the name of an American laundry detergent, because the axion was a particularly clean solution.

Unfortunately, the axion turned out to not exist. If the axion existed, neutron stars would cool very quickly, which we don’t observe. With this argument, the axion was experimentally ruled out almost as quickly as it was introduced, in 1980.
But physicists didn’t give up on the axion. Like with grand unification, they changed the theory so that it’d evade the experimental constraints. The new type of axion was introduced in 1981 and was originally called the “harmless axion”. It was then for a while called the “invisible axion,” but today it is often just called the “axion”. Lots of experiments have looked and continue to look for these invisible axions. None was ever detected, but physicists still look for their invisible friends.

Wilczek by the way invented another particle in 1982 which he called the “familon”. No one’s found that either.

Yet another flawed idea that particle physicists came up with in the 1970s is supersymmetry. Supersymmetry postulates that all particles in the standard model have a partner particle. This idea was dead on arrival, because those partner particles would have the same masses as the standard model particles they belong to. If they existed, they’d have shown up in the first particle colliders, which they did not.

Supersymmetry was therefore amended immediately, so that the supersymmetric partner particles would have much higher masses. It takes high energies to produce heavy particles, so it’d take big particle colliders to see those heavy supersymmetric particles. 

The first supersymmetric models made predictions that were tested in the 1990s at the Large Electron Positron Collider at CERN. Those predictions were falsified. Supersymmetry was then amended again to prevent the falsified processes from happening. The next bigger collider, the Tevatron, was supposed to find them. That didn’t happen. Then they were supposed to show up at the Large Hadron Collider. And that didn’t happen either.

Particle physicists continue to change and amend those supersymmetric models so that they don’t run into conflict with new data.

The reason particle physicists liked supersymmetry, besides that it neatly abbreviates to SUSY, was that they claimed it’d solve what’s known as the “hierarchy problem.” That’s the question of why the mass of the Higgs boson is so much smaller than the Planck mass. You may say, well, why not? And indeed, there’s no reason why not.

The mass of the Higgs boson is a constant of nature. It’s one of those free parameters in the standard model. This means you can’t predict it, you just go and measure it. Supersymmetry doesn’t change anything about this. The Higgs boson mass is still a free parameter in a supersymmetric extension of the standard model, and you still cannot predict it. Supersymmetry therefore does not “explain” the mass of the Higgs boson. You measure it and that’s that.

Then there are all kinds of dark matter particles. A type that is particularly popular is called “Weakly Interacting Massive Particles”, WIMPs for short. Experiments have looked for WIMPs since the 1980s. They haven’t found them. Each time an experiment came back empty-handed, particle physicists claimed the particles were a little more weakly interacting and said they needed a better detector.

There are more experiments that have looked for all kinds of other particles and continue to not find them. There are headlines about this literally every couple of weeks. The PandaX-4T experiment looked for light fermionic dark matter. They didn’t find it. The STEREO experiment looked for sterile neutrinos. They didn’t find them. CDEX didn’t find light WIMPs, HESS didn’t find any evidence for WIMP annihilation, the MICROSCOPE experiment didn’t find a fifth force, and an experiment called SENSEI didn’t find sub-GeV dark matter. And so on.

The pattern is this: Particle physicists invent particles, make predictions for those invented particles, and when these predictions are falsified, they change the model and make new predictions. They say it’s good science because these hypotheses are falsifiable. I’m afraid most of them believe this.

But just because a hypothesis is falsifiable doesn’t mean it’s good science. And no, Popper didn’t say that a hypothesis which is falsifiable is also scientific. He said that a hypothesis which is scientific is also falsifiable. In case you’re a particle physicist, here’s a diagram that should help. Example: Tomorrow you will receive 1000 dollars from my friend the prince of Nigeria. Falsifiable but not scientific.

The best way to see that what particle physicists are doing isn’t good science is by noting that it’s not working. Good scientists should learn from their failures, but particle physicists have been making the same mistakes for 50 years.  

But why is it not working? I’ll try to illustrate this with a simple sketch. If you understand the following two minutes, you can outsmart most particle physicists, and you don’t want to miss that opportunity, do you?

Suppose you have a bunch of data and you fit a model to it. The model is this curve. You can think of the model as an algorithm with input parameters if you like, or just a set of equations that you work out by hand. Either way, it’s a bunch of mathematical assumptions.

If you make a model more complicated by adding more assumptions, you can fit the data better, but the more complicated the model becomes, the less useful it will be. Eventually the model is more complicated than the data. At this point you can fit anything, and the model is entirely useless. This is called “overfitting”. The best model is one that reaches a balance between simplicity of the model and accuracy of the fit. Let’s suppose it’s this one.
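The tradeoff described above can be made concrete with a toy numerical sketch (a hypothetical illustration I’ve added here, not anything from the video): fit noisy samples of a known curve with polynomials of increasing degree. Adding parameters always fits the existing data better, but past some point the model fits the noise, and it describes fresh data from the same underlying law worse. That turning point is overfitting.

```python
# Toy illustration of overfitting (hypothetical example, not from the video).
# We fit noisy samples of a known curve with polynomials of increasing degree.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: np.sin(2 * np.pi * x)      # the "real" law behind the data

x_train = np.linspace(0.0, 1.0, 15)
y_train = truth(x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.0, 1.0, 100)          # fresh data from the same law
y_test = truth(x_test) + rng.normal(0.0, 0.2, x_test.size)

errors = {}
for degree in (1, 3, 14):                    # degree 14 = one parameter per data point
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    errors[degree] = (train_err, test_err)
    print(f"degree {degree:2d}: train error {train_err:.4f}, test error {test_err:.4f}")
```

The degree-1 line underfits, while the degree-14 polynomial runs through every training point (training error essentially zero) yet does worse on the new data than the modest degree-3 fit. The “best model” is the one balancing simplicity against accuracy of the fit.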

If you get new data and the data do not agree with what was previously your best model, then you improve the model. This is normal scientific practice, and this is probably what particle physicists think they are doing. But it’s not what they are doing. The currently best model is the standard model and all the data agree with it, so there’s no reason to amend it.

Here is what they are doing instead. Let’s imagine that this curve is the standard model and this is all the existing data. And imagine we have a particle physicist, let’s call him Bob. Bob says, that’s nice, but we haven’t checked the model over here. And, he says, I could make this model more complicated so that the curve goes instead this way. Or that way. Or any other way. I’ll pick this one, call this my “prediction”, and hey, I’ll publish it in PRL.

Why do I predict it? Because I can. Because you see, my model agrees with all the data, so this prediction could be correct, right? Right? And it’s falsifiable. Therefore, I am a good scientist. And all of Bob’s friends with all their different predictions say the same. They are all good scientists. Every single one of them. And as a result of all that “good science,” we get any possible prediction.

Then they do an experiment and the data come in and, would you know it, they agree with the standard model. And Bob and all his friends say: Oh well, no worries, we’ll update our prediction, now the deviations are in this range where we still haven’t measured, we need bigger experiments. And also, I’ll write a new paper about it.

What’s the problem with that procedure? The problem is that those models with all their different predictions are unnecessarily complicated. They should never have been put forward. They are not scientific hypotheses, they are made-up stories, like my friend the prince of Nigeria who will send you money tomorrow. Though, if you send me one hundred dollars today I’ll have another word with him.

There are only two justifications for making a model more complicated. The first is if you already have data that requires it. We can call this an inconsistency with data. The second is if the model isn’t working properly: it makes several predictions that contradict each other, or no predictions at all. We can call this an internal inconsistency.

And that’s what’s going wrong in particle physics. They have no justification for making the standard model more complicated. When they do it nevertheless, it doesn’t work, because that’s just not how science works. If you change a good model, then that change should be an improvement, not a complication.

I believe the reason they don’t notice what they’re doing is that they have invented all these pseudo-problems that the complicated models are supposed to fix. Like the absence of unification. Or some parameters being small. These aren’t real problems because they do not prevent them from making predictions with the standard model. They are just aesthetic misgivings. In fact, if you look at the list of unsolved problems in the foundations of physics on Wikipedia, most problems on the list are pseudo-problems. I have a list in which I distinguish real from pseudo-problems, to which I’ll leave a link in the info below.

And there are a few real problems in the foundations of physics. But they are difficult to solve and particle physicists don’t seem to like working on them, but then I repeat myself.  

I’ve been giving many talks about this. It hasn’t made me friends among particle physicists. But it’s not like I am against particle physics. I like particle physics. That’s why I talk about those problems. It bothers me that they’re not making progress.


There are some common replies that I get from particle physicists. The first is to just deny that anything is wrong. Because, hey, they are writing so many papers and holding so many conferences. Or they will argue that it sometimes just takes a long time to find evidence for a new prediction. For example, it took more than 30 years from the hypothesis of the neutrino to its confirmation. It took half a century to directly detect gravitational waves. And so on.

But both of those objections are beside the point. The issue isn’t that it’s taking a long time. The issue is that particle physicists make all those wrong predictions. And that they think that’s business as usual.

The next objection they bring up is normally that, yes, there are all those wrong predictions, but they don’t matter. The only thing that matters is that we haven’t tested the standard model in this or that range, and we should.

The problem with this argument is that there are thousands of possible tests we could do in physics, and all of them cost money, sometimes a lot of money. We must decide which tests are the most promising ones, the ones most likely to lead to progress. That’s why we need good predictions for where something new can be found. And that’s why all those wrong predictions are a problem.

Particle physicists know of course that predictions are important, because that’s why they always claim that some new experiment would be able to rule out this or that particle. Though they usually don’t mention that there wasn’t any reason to think those particles existed in the first place.

Besides, in which other discipline of science do we excuse thousands of wrong predictions by saying it doesn’t matter?

Another common reply I get from particle physicists is that it doesn’t matter that those models are all wrong, because while they’re working on them, they might stumble over something else that’s interesting. And that’s possible, but it’s not a good strategy for knowledge discovery. As I’ve already said a few times, it does, as a matter of fact, not work.

Also if that’s really the motivation for their work, then I think they should put this into their project proposals: Hey, I don’t actually think that those particles I’m going on about here exist but please give me money anyway because I’m smart and maybe while I write useless papers I’ll have a good idea about something else entirely. I’m sure that’ll fly.

Another objection that particle physicists often bring up is that this guessing worked in the past. But if you look at past predictions in the foundations of physics which turned out to be correct, and that did not just confirm an existing theory, then it was those which made a necessary change to the theory. The Higgs boson, for example, is necessary to make the standard model work. Anti-particles, predicted by Dirac, are necessary to make quantum mechanics compatible with special relativity. Neutrinos were necessary to explain observations. Three generations of quarks are necessary to explain CP violation. And so on.

That the physicists who made those predictions didn’t always know that doesn’t matter. The point is that we can learn from this. It tells us that a good strategy is to focus on necessary changes to a model, those that resolve an inconsistency with data, or an internal inconsistency.

One final objection I want to mention usually doesn’t come from particle physicists, but from people in other fields who think that we need all those models to explain dark matter. But that’s mixing up two different things.

We need either dark matter or a modification of gravity to explain observations in astrophysics and cosmology. But if it’s dark matter, then the only thing we need to explain observations is how the mass is distributed. Details about the particles, if they exist, are unnecessary. What particle physicists do is guessing these unnecessary details. They guess, for example, that those particles will be produced at some particle collider. Which then doesn’t happen.
 
So what will happen to particle physicists? Well, if you extrapolate from their past behaviour to the future, then the best prediction for what will happen is: Nothing. They will continue doing the same thing they’ve been doing for the past 50 years. It will continue to not work. Governments will realize that particle physics is eating up a lot of money for nothing in return, funding will collapse, people will leave, the end. 

Files

What's Going Wrong in Particle Physics? (This is why I lost faith in science.)

Try out my quantum mechanics course (and many others on math and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription. Why do particle physicists constantly make wrong predictions? In this video, I explain the history and status of the problem. 👉 Transcript and References on Patreon ➜ https://www.patreon.com/Sabine 💌 Sign up for my weekly science newsletter. It's free! ➜ https://sabinehossenfelder.com/newsletter/ 📖 Check out my new book "Existential Physics" ➜ http://existentialphysics.com/ 🔗 Join this channel to get access to perks ➜ https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join 00:00 Intro 00:30 The History of the Problem 08:29 The Cause of the Problem 14:52 Common Objections and Answers 19:37 What Will Happen? 20:04 Learn Physics on Brilliant #science #physics #particlephysics

Comments

Anonymous

I am utterly disappointed. This episode gives me NOTHING to pick a fight over. Specifically targeting each failed prediction, starting with GUT and going down the list of dead ends, is a convincing argument that particle physics is broken. Naming names was great, especially calling out a Nobel laureate who makes comments that make him sound like a good old boy chauvinist. Don’t hold back on others either. If the leadership doesn’t change, the discipline won’t either. I would have liked a bit more time on what experimental particle physicists can do, if anything, to advance the frontiers of our understanding. Perhaps that’s a theme for a different episode. Back to the issue at hand, a promising weekend for a social media scuffle has been ruined. I might as well pick up my sticks and go ruin a walk.

Anonymous

It's not even an hour and a half since the video dropped, particle physicists are probably still drinking their hot caffeinated beverages. Give it a bit of time. I'm buying some margarine later to make caramel popcorn with, should be good. 😸

Sabine

Sorry to disappoint ;) Yes, indeed, I meant to say more about experimental particle physics, as I always feel I am somewhat unfair to experimentalists. But it just got too long. You're right, maybe another episode on this would be interesting.

Anonymous

@Sabine: Of course another episode would be interesting. 😺

Tanj

@Sabine so the most prominent area of inconsistency is gravity vs. quantum mechanics, an area where standard model makes no progress. Correct? Are there productive, useful physics investigations flowing from that? Your prediction would seem to be that is the gold mine. If not, is there not actually an inconsistency, or are we simply lacking the tools to investigate it?

Anonymous

About symmetries: This is the physical religion of these days. Historically it was the world understanding of the Greek philosopher Plato. In his view, the rotation of planets was caused by the fact that our world is determined by structures and symmetries. So every free object has to follow the important structure of a circuit. As we know, this was changed by Newton, who replaced the structures by physical laws, here the law of inertia and the law of gravity. But when, a hundred years ago now, the first measurements of particles were not explainable by the physical models of that time, Heisenberg explicitly stated that the solution can only be to return to the structural thinking of Plato. That was the origin of quantum mechanics. Einstein, on the other hand, did originally not like this way of thinking. But he was influenced by the German educational system of that time, which was strongly related to this thinking of Plato. And so Einstein was happy to find a theory of relativity which also was a theory of structures. The relativity of Lorentz, which was in fact a physical one, did not have a chance in that physical community.

Anonymous

This episode hit the mark perfectly. Not a single confirmed prediction for over half a century. 🙈 A separate discussion of experimental particle physics would be great. The recent W mass re-analysis and the disappearance of the B meson decay anomaly don’t exactly inspire confidence in that community’s ability to deliver new physics. Even neutrino oscillations seem just as mysterious today as when they were first discovered.

Anonymous

I appreciate your courage, Sabine. I hope they don't try to do the modern equivalent of burning you at the stake.

Anonymous

The only thing I didn't understand is why we only need to explain the distribution of dark matter and not what kind of fields/particles are responsible for it. Are you saying the kinds of fields/particles is a theory that modifies GR? Why can't it be both that and at the same time explain the distribution of dark matter?

Anonymous

As Sabine has already stated (or suspected), the dark matter problem is not one of new particles or new fields but a different understanding of gravitation. But this will not be a minor change like MOND, it is surely a greater one. We are getting close to a solution if we take into account that the measured spatial distribution of dark matter effects is exactly the same as the distribution of photons around a galaxy. Assume for a moment that dark matter is identical to photons and photons cause the same gravitational field as massive particles. With this assumption the whole problem is solved, and this solution works even quantitatively. This is a property which we do not find in any of the theories which are in discussion now.

Anonymous

"A different understanding of gravitation" can also include new fields... Or something else, my favorite being a Euclidean region of spacetime. I don't know what you mean by "distribution of photons" Aren't they distributed as expected for the stars in a galaxy? Is there anything else emitting photons (surely not a hypothetical dark matter). "Assume for a moment that dark matter is identical to photons". this is a non-starter because dark matter is by definition not visible with photons. And I'm almost sure people have included the known gravitational effect of photons in calculating the angular velocity distribution of stars in galaxies. The effect is surely very small, probably orders of magnitude smaller than the required energy necessary to explain the data.

Anonymous

'Dark photons' have already been posited if I remember correctly, but that's the sort of non-particle that Sabine's been blasting.

Anonymous

The rotation curves and the investigations of dark matter in galaxy clusters show an effective spatial density around galaxies and galaxy clusters of 1/r^2, where r is the distance from the center of the galaxy or the galaxy cluster. Photons have exactly the same spatial distribution around their source; that is the luminosity law. The new idea which I have mentioned is that there are further indications in physics that gravitation is independent of mass. The dependence was once the position of Newton who could not understand anything different at his time. We are using the so called "weak equivalence principle" that every object has the same gravitational acceleration independent of its mass. This is in fact no principle but the simple physical fact that mass does not have a role in gravitation. - The name "dark matter" only shows the helplessness of the physicists in this context and does not have any further meaning.

Anonymous

Hi luval, stars do not trace all of the light from a galaxy. The neutral hydrogen gas in spiral galaxies, traced in radio light, can sometimes extend out to double the visible light disk. The hot gas component, traced in X-ray light, forms a huge halo around galaxies. But, I don't agree with Albrecht that the distribution of light (in all wavelengths) from a galaxy matches the distance scales over which we see gravitational effects from dark matter. Specifically, the orbital velocities of the Large and Small Magellanic Clouds predict about 800 billion M_solar inside of 50 kpc and 900 billion M_solar inside of 60 kpc -- or 100 billion M_solar between 50-60 kpc, where the light emission, from both neutral hydrogen (radio) and extremely hot hydrogen (X-rays) is extremely faint. So it would seem that photons are not distributed exactly the same as the gravitational effects of dark matter. Edit: I didn't see Albrecht's response before I posted my remarks. Albrecht, if you are more simply making the case that both gravity and light are inverse square laws, then my response about photon vs dark matter distribution is not on point. But, it leads me to wonder of a test for your hypothesis that photons might have a gravitational effect. For example, the Shapiro delay for the double millisecond pulsar system PSR J1811–2405 has been measured. If photons contributed significantly to the local gravitational field, then I might expect the Shapiro delay to not match that predicted based solely on the masses of the two pulsars. I'm just spitballing here, but there must be other direct tests of your idea. I'm just as flummoxed as all of you about dark matter -- is it just that we don't have gravity figured out?, is it actually some sort of matter at all?, does it change phase in some way as we might expect of a superfluid-like material? 
I think Sabine's point is that we are in the dark about dark matter (pun intended on my part), so it is pointless to invent particles to search for with a collider when we don't even know if dark matter is particles.

Anonymous

Hi Tracey, the model of gravity which I am referring to says that every elementary particle contributes equally to the gravitational field. No dependence of its mass. The best proof which I can offer is the fact that the predictions of this model, applied to NGC 3198, are in a full quantitative agreement with the observation. And this model does not use any adaptable parameters. There is no other model or theory of DM which comes close to this.

Anonymous

Regarding the distribution of dark matter vs particle physics, I think Sabine should make another video about it, as there are some things to unpack there for the general public. Gravity's 10^-40 order of magnitude role (negligible in particle collider experiments) compared to other forces isn't generally well understood by the public. So while it is understandable that particle physics profession may be interested in figuring out what dark matter is independently, it is not required for gravity to be included in standard model details to understand the astrophysical phenomena. However it is worth emphasizing that the same is not true for other forces, for example electromagnetic / quantum theories are deeply involved with particles even at astrophysical levels, e.g. the lines of emission and such, while nuclear forces drive stars, metallicities etc

Anonymous

Sabine makes a larger point, whereby science discoveries are usually made to correct existing observations. The experimental apparatus and environment that provides evidence for DM have been telescopes and astronomical observations. None of the particle experiments performed here on Earth in standard laboratory conditions had found or suggested DM. I think it is worth for particle physics to take a brief look at DM to see if it can be detected in colliders, but given that the starting assumption is that DM only interacts gravitationally and gravity is 10^-40 of the other forces (effects of gravity are undetectable in collider experiments), it does not make sense to keep making up experiments in colliders to detect it. Now of course if astrophysics comes back and says 'hey we think DM isn't really dark, it interacts via other force(s) and here's some proof' then by all means go do collider experiments as long as they are within the required energy range. Currently astronomers need to drive that conversation, and I think the basics of DM have already been checked in colliders, and they need to look for things that are more accessible to collider apparatuses.

Anonymous

@Albrech Yes this is exactly the kind of science that needs to be performed on DM. More astronomical observations, studies of density and distribution, in regions of 'empty space' and near strong gravity (e.g. galaxy centers) to see if something new comes up. Looking for elementary particle miniscule objects that interact via gravity only (40 orders of magnitude weaker) here on Earth doesn't make any sense beyond the initial thinking and measuring that's already been done.

Anonymous

@ Luval yes we need to understand the DM particles 'eventually' , but there needs to be a robust apparatus or experimental method to do so. Imagine trying to do computational electromagnetics a million years ago in the stone age. I feel like we're currently in the stone age of gravity and the standard model unification, not because there aren't smart people, but because there aren't proper experimental tools to probe interactions with gravity at elementary particle scales. Perhaps they can do some experiments with heavier nuclei / larger quantum objects to see if effects of gravity can be seen, but it's going to be really hard.

Anonymous

Very interesting. I had never considered the philosophy of science, mostly because I never considered philosophy to be pre-science, until I read 'A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology' by David Merritt last year. He discussed those problems in developing 'theories' and models, so I had a far better appreciation for your discussion than I would have had.

Anonymous

I have the same feeling of religion here. It seems that animals in general have evolved certain capabilities, such as classification (https://www.scientificamerican.com/article/wired-for-categorization/), but that ours is a bit more complex. So it seems a small step for physicists to extend the Standard Model to supersymmetry, because of the way they see order in the world; their classification structure is different from others'. I see religion as merely a belief system, with or without a god or gods, and being an atheist I am ready to accept little or no symmetry. Others require more, I figure.

Anonymous (edited)

“Good scientists should learn from their failures, but particle physicists have been making the same mistakes for 50 years.” One problem of present particle physics seems to be that certain convictions are treated like religion. An example is the size of the electron. The mainstream opinion is that it is point-like (< 10^-19 m). At this size, a classical understanding of the magnetic moment would need an internal rotation of the charge at > 10,000 times c — of course ridiculous, and so it is said to require QM. Erwin Schrödinger, however, estimated (1930) the electron’s size to be around 4*10^-13 m with an internal oscillation at c. If this is used, not only the magnetic moment but also the spin can be determined *classically*, and particularly the mass with great precision (without Higgs!). The present assumption about the size of the electron is from scattering experiments. But those determine only the size of the charge in the electron, which is thus orbiting at c.
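[Editor's note: the numbers in this comment can be reproduced with a deliberately naive model — put the electron's mass on a spinning ring of radius r and ask how fast the rim must move to carry spin angular momentum ħ/2, i.e. v = ħ/(2mr). This is just the arithmetic behind the "faster than c" argument, not a physical model or an endorsement of the comment's conclusions; the 10^-19 m bound is taken from the comment, and ħ/(m_e c) is the reduced Compton wavelength, the scale Schrödinger's estimate corresponds to:]

```python
hbar = 1.055e-34   # reduced Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

def ring_speed(r):
    """Rim speed for a ring of mass m_e and radius r to carry spin
    angular momentum hbar/2:  L = m v r = hbar/2  =>  v = hbar/(2 m r)."""
    return hbar / (2 * m_e * r)

# At the experimental size bound (< 1e-19 m) the rim would need ~2e6 c,
# comfortably beyond the comment's "> 10,000 times c":
print(ring_speed(1e-19) / c)

# At the reduced Compton wavelength (~3.9e-13 m) it is exactly c/2:
print(ring_speed(hbar / (m_e * c)) / c)
```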

Sabine

I'm not sure what you mean by "prominent". If you mean that it's the one that has previously attracted the most attention, then yes, I guess that's true. Well, as I've been saying for two decades, the best thing to do in this area is to push for experimental tests, and that is indeed now slowly under way. So I am actually quite optimistic that something is going to come out of it after all. If you're asking about theoretical approaches, I think all existing approaches are wrong.

Sabine

What I mean by distribution is the energy density. That's the only thing you need in General Relativity. What fields or particles give rise to this energy density, if anything, just doesn't enter the model.

Anonymous

So is there an energy distribution (the same for all galaxies?) as a function of distance from the center (and angle?) that fits all the data without having to modify GR?
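[Editor's note: for context on this question — what astronomers actually fit is not one universal curve but a parametrised halo family; a common choice is the NFW density profile ρ(r) = ρ0 / [(r/rs)(1 + r/rs)²], whose enclosed mass integrates in closed form. A sketch with illustrative, not fitted, parameter values:]

```python
import math

def nfw_enclosed_mass(r, rho0, rs):
    """Mass inside radius r for the NFW profile
    rho(r) = rho0 / ((r/rs) * (1 + r/rs)**2), integrated analytically:
    M(<r) = 4 pi rho0 rs^3 * [ln(1 + x) - x/(1 + x)], with x = r/rs."""
    x = r / rs
    return 4 * math.pi * rho0 * rs**3 * (math.log(1 + x) - x / (1 + x))

def circular_velocity(r, rho0, rs, G=6.674e-11):
    """Newtonian circular speed v = sqrt(G M(<r) / r) -- adequate here,
    since galactic fields are weak and orbital speeds are << c."""
    return math.sqrt(G * nfw_enclosed_mass(r, rho0, rs) / r)

KPC = 3.086e19  # one kiloparsec in metres

# Illustrative halo parameters (NOT a fit to any particular galaxy):
rho0 = 1.0e-21  # characteristic density, kg/m^3
rs = 20 * KPC   # scale radius

for r_kpc in (5, 10, 20, 50):
    v = circular_velocity(r_kpc * KPC, rho0, rs)
    print(f"r = {r_kpc:3d} kpc   v = {v / 1e3:.0f} km/s")
```

The profile is "the same for all galaxies" only in functional form: rho0 and rs are fitted separately per galaxy, which is part of why the question above has no single-curve answer.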