
Content

[This is a transcript with references.]

Some topics we cover on this channel are a little heavy, so today I want to talk about something lighthearted. Human extinction. What’s the risk of human extinction and what are the biggest factors that contribute to the risk? That’s what we’ll talk about today.

Why is Sabine talking about human extinction? Personal hobby? No, I got into this through my PhD thesis. Not because the thesis was that bad, but because it was about the production of black holes at the Large Hadron Collider. At the time a lot of people were scared that such a black hole could eat up the planet.

The reaction I saw to this from almost all particle physicists was to laugh it off. I got the impression they couldn’t even contemplate the possibility that they might accidentally kill us all. So they just discarded the idea as ridiculous. Most of them still do this today. Remember how they didn’t bother with enough lifeboats on the Titanic? It was kind of like that.

The idea that a particle collider might destroy the planet by creating a black hole wasn’t quite as stupid as particle physicists wanted you to believe. I’ll say a little more about this later. But this is what got me thinking about human extinction. We shouldn’t discard the possibility of extinction as silly just because it’s never happened before; we should take this threat seriously. Well, maybe not too seriously. I’m not good with that. But let’s at least talk about it.

What do we even mean by human extinction? After all, a lot of species have gone extinct, but sometimes that just means they produced offspring that eventually became genetically so different we called it a different species, like with the different species of “homo” in our own past.

The phrase “Existential Risk” comes from longtermists, a particular species of humans that we talked about just a few weeks ago. The type of extinction they worry about is not a gradual transition to another species, but the end of all intelligent life on earth. Some might argue it’s not all that clear there’s intelligent life on earth to begin with, but Nick Bostrom, director of the Future of Humanity Institute, put it like this: “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”

But would human extinction really be that bad? Well, since you’re asking, a few years ago, a group of psychologists from the UK did a survey on this. They recruited about 180 Americans and asked them whether extinction is bad. The exact question they used is: “Suppose that humanity went extinct in a catastrophe. This means that no human being will ever live anymore in the future. Would that be bad or not bad?”  – “yes” or “no”?

78 percent answered that, yes, human extinction is bad. Which means, conversely, that roughly one in five said extinction wouldn’t be bad.

You might wonder whether those people were just trolling, but I believe most of them were quite sincere. That’s because a later question asked them to explain why they felt that way. The people who said that extinction wouldn’t be bad typically had one of three arguments. (a) Because humans are destroying the planet and nature would be better off without us. (b) It’s the natural way of things. Or (c) If no one’s around, it can’t be bad. Or to put it another way, if we chop down the last tree, it’s okay because other trees can’t hear it fall.

The logic on the other side of the argument is also interesting. The most common reason people gave for why extinction is bad was, well, if everyone is dead then I’m dead too and I’d rather not be dead, or a similar statement about their children. The next most common explanation was some version of “what’s there to explain, of course extinction is bad.” I guess that’s all the people who haven’t seen Jurassic Park.

Okay, having thus found tentative evidence that most people think extinction is kind of bad, what are the greatest risks?

We can roughly classify existential risks into natural disasters that we play no part in, and self-caused disasters. At the moment, the self-caused disasters are the more urgent ones to deal with because they’re multiplying as we develop more powerful technologies. The risks that longtermists are currently most worried about are nuclear war, climate change, biotechnology, and artificial intelligence.

The biggest problem with nuclear war isn’t the detonations, and it isn’t the radiation either, it’s the enormous amount of dust and soot that’d be injected into the atmosphere. This blocks a lot of sunlight and causes what’s been dubbed “nuclear winter”. Except it isn’t just one winter, it’d last for more than a decade.

Just a few months ago, an international team of researchers published a paper in the journal Nature Food with a new analysis of the consequences of nuclear war. They combined the predictions from a climate model with models for crop yields and fish stocks.

For a major nuclear war, for example between the United States and Russia, the nuclear winter could cause air temperatures to drop by more than 10 degrees on average. Rainfall would also noticeably decrease because the summer monsoon would significantly weaken in some of the world’s most fertile areas. This would lead to massive food shortages all over the globe, faster than we can develop any technology to deal with the problem. They estimate that up to 5 billion people could die from starvation. Yeah, that’s grim. What did you expect, clicking on a video about extinction?

In this figure, the red color is for places where the average amount of available calories falls below the amount necessary for survival. You see that for a major nuclear war, that’s basically the entire world, except Australia, New Zealand, and Argentina. So cockroaches and crocodiles will be fine, don’t worry.

With climate change, too, the major problem isn’t the primary effect, it’s the secondary effects that impede our ability to recover from other problems. It’s the consequences of an increasing number of natural disasters, droughts, and fires that lead to economic distress, upset supply networks, and cause international tension. Or, to put this differently: We are only so many people and there’s only so much we can do. If we’re forced to constantly cope with climate change, other things won’t get done.

Climate change is unlikely to cause complete extinction on its own because it’s a self-limiting problem. If economies collapse, carbon dioxide emissions will decrease and in two hundred years or so we might be able to start over again. But in those two hundred years humanity will be extraordinarily susceptible to additional problems like a nuclear war or pandemics.

A few months ago, a team of researchers from institutions such as the Centre for the Study of Existential Risk in Cambridge and, again, the Future of Humanity Institute in Oxford, published a paper in which they say it’s necessary to consider such “bad-to-worst-case scenarios” and that this risk is “dangerously underexplored”. Christmas parties at those institutes must be fun. Hey, Simon, I saw your new paper about multi-resistant bacteria in the wake of nuclear wars, well done! But I want to talk to you about brain-eating fungi.

Pandemics are bad enough, but a pandemic caused by a bioengineered virus could be worse. Think of a virus as lethal as Ebola but as contagious as measles, and a government response as sluggish as the one we’ve seen with COVID.

I actually think COVID was a blessing in disguise because it’s a fairly mild virus that gave us an excellent test run. It might have been bad at the time but think of COVID like the 3rd Pirates of the Caribbean movie. Yes, it was bad but worse was yet to come. Hopefully next time we’ll be better prepared - both for the next pandemic and the next pirates movie.

But viruses aren’t the only problem, there are also bacteria and fungi and other bioweapons that can induce diseases. And then there’s the risk that genetically modified organisms escape from the lab into the wild and cause ecosystems to collapse.

The biggest problem with Artificial Intelligence that longtermists see is that an AI could become intelligent enough to survive independently of us but pursue interests that conflict with our own. They call it the “misalignment problem”. In the worst case, the AIs might decide to get rid of us. And could we really blame them? I mean, most of us can’t draw a flower, let alone a human face, so what’s the point of our existence really?

This wouldn’t necessarily be an extinction event in the sense that intelligent life would still exist, it just wouldn’t be us. Under which circumstances you might consider an AI species a continuation of our own line is rather unclear. Longtermists argue it depends on whether the AI continues our “values”, but it seems odd to me to define a species by its values, and I’m not sure our values are all that great to begin with.

In any case, I consider this scenario unlikely because it assumes that advanced AIs will soon be easy to build and reproduce which is far from reality. If you look at what’s currently happening, supercomputers are getting bigger and bigger, and the bigger they get the more difficult they are to maintain and the longer it takes to train them. If you extrapolate the current trend to the next few hundred years, we will at best have a few intelligent machines owned by companies or governments, and each will require a big crew to keep it alive. They won’t take over the world any time soon.

What do we know about the likelihood of those human-caused extinction scenarios? Pretty much nothing, at least that’s my reading of the literature.

Take for example this survey that US Senator Richard Lugar sent to 132 experts in 2005. He asked them “What is the probability (expressed as a percentage) of an attack involving a nuclear explosion occurring somewhere in the world in the next ten years?” The answers of the so-called experts were all over the map, from zero to 100 percent, so you might as well not bother asking.

According to the Australian philosopher Toby Ord, the risk of self-caused extinction in the next 100 years is 1 in 6. Well, as I keep preaching, a number without an uncertainty estimate isn’t science. If you added an uncertainty estimate to this number, I think you’d find anything between zero and 1. So your guess is as good as his.

Let’s then have a look at the naturally occurring existential risks. I don’t want to go through all of them, but I do want to mention the biggest risk which is currently that of a supervolcano eruption. That’s an eruption which ejects more than a thousand cubic kilometers of material. They’re known to have happened repeatedly in the past. One of the most famous examples is Yellowstone. It had three mega eruptions in the past 2 million years, each of which covered most of the western US in ash a foot deep. The next eruption will probably come in the next 100 thousand years or so. Our planet has about a dozen supervolcanoes.

Supervolcano eruptions are a problem for the same reason as nuclear war. They can inject a lot of dust into the atmosphere that’d cool the planet rather suddenly, possibly by more than 10 degrees for a decade.

The problem with asteroid impacts, too, is that they would propel a lot of dust into the atmosphere. But that would take a pretty big asteroid, and big asteroids are luckily rare and also quite easy to spot. The asteroid that caused the extinction of the dinosaurs 65 million years ago is estimated to have been about 10 kilometers in diameter.

NASA currently knows about four asteroids of that size and none of them is on a collision course with us. If a new one appeared, we’d probably know at least a few months in advance. Getting a redirect mission under way would currently take several years of planning at least, which isn’t fast enough. But this is a problem that we can solve with current technology, technology that doesn’t require Bruce Willis or Ben Affleck to make it work. Really, supervolcanoes are the bigger problem, and there’s very little that current technology can do about them. Let alone Willis or Affleck. Another scary natural risk that we can’t currently do anything about is big solar flares. I talked about this in an earlier video.

Doing risk estimates is somewhat easier for natural disasters than for self-caused ones, because we can estimate their frequency from past records. This was done in a 2019 paper by researchers from Oxford including the previously mentioned Toby Ord.

They used the observation that humans have survived at least 200 thousand years on this planet to estimate the annual probability of human extinction from natural causes. And in this case, they actually do have an uncertainty estimate. They say it’s less than one in 87 thousand with 90 percent probability and less than one in 14 thousand with more than 99.9 percent probability.

If one uses records of the entire lineage of Homo which dates back about two million years, then the annual probability of extinction from natural causes falls below one in 870 thousand with 90 percent confidence. You’re probably more likely to see an expert being right about nuclear war than this.
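
These bounds follow from a simple survival-likelihood argument: if the annual extinction risk were p, the chance of making it through T years would be (1 − p)^T, and demanding that this chance not be absurdly small caps how large p can be. Here is a minimal sketch of the arithmetic in Python (a reconstruction that reproduces the quoted figures; the paper’s actual statistical treatment is more careful, and the function name is just illustrative):

```python
def annual_risk_upper_bound(survived_years: int, confidence: float = 0.90) -> float:
    """Largest annual extinction probability p for which surviving
    `survived_years` years still has likelihood at least 1 - confidence,
    i.e. solve (1 - p)**T = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / survived_years)

# Homo sapiens: at least 200,000 years of survival
p_sapiens = annual_risk_upper_bound(200_000)
print(f"one in {1 / p_sapiens:,.0f}")   # one in ~86,859 -- the "one in 87 thousand"

# The entire Homo lineage: about 2 million years
p_homo = annual_risk_upper_bound(2_000_000)
print(f"one in {1 / p_homo:,.0f}")      # one in ~868,589 -- the "one in 870 thousand"
```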

These estimates have a general problem which is that from a sample of one you can’t tell apart the probability of occurrence from the probability of having picked a particular element of the sample. That is to say, we might just have been unusually lucky and the number they came up with isn’t the probability that we’ll go extinct tomorrow, but a statement about how lucky we’ve been so far.

Here's an example of what I mean. Suppose you have a billion planets and each day half of them evaporate into nothing, so the daily extinction risk is one in two. After a month there’s about 1 planet left. The people on this one planet could now calculate the probability of going extinct tomorrow based on the observation that they’ve survived one month. They’d arrive at an estimated daily extinction risk of less than 6 in 1000 with 90 percent confidence, which is wildly wrong. The reason it’s wrong is that the people on that planet don’t know of all the other planets which went poof.
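
A minimal simulation of this thought experiment (a sketch in Python; the exact value of the naive bound depends on which statistical convention one uses, and the figure quoted above presumably comes from a different convention than the simple bound sketched here, but every convention lands far below the true one-in-two risk, which is the point):

```python
import numpy as np

rng = np.random.default_rng(42)

n_planets = 1_000_000_000   # a billion planets
true_daily_risk = 0.5       # each day, half of them evaporate
days = 30                   # about a month

# Thin the population day by day with binomial sampling.
survivors = n_planets
for _ in range(days):
    survivors = rng.binomial(survivors, 1 - true_daily_risk)
print("survivors after a month:", survivors)    # typically 0, 1, or 2

# A survivor who only knows "we lasted 30 days" and applies the same
# survival-likelihood bound as above infers a far smaller daily risk:
naive_bound = 1 - 0.1 ** (1 / days)             # 90 percent confidence
print(f"naive upper bound: {naive_bound:.3f}")  # ~0.074, vs the true 0.5
```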

This means that the estimate based on observations from our own planet assumes we’re a typical planet, and not an extraordinarily lucky one.

The only way to make an estimate which does not rely on this assumption is to look at other planets to figure out how typical we are. At the moment this can’t tell us much about natural disasters on our planet, because we can’t observe those on other planets. But it can give us an estimate for the risk that our entire planet gets destroyed by natural causes, for example because a black hole comes by or a supernova explosion goes off in the vicinity.

This estimate was done in 2005 by Nick Bostrom and Max Tegmark, and they found that the annual probability of our planet being destroyed is less than one in a trillion. Hey, at least I have some good news in this video! The reason they looked at this was that at the time people were worried that the Large Hadron Collider would produce a black hole, which, like a particle going round in the LHC, returns me to the beginning.

So, was there ever really a risk that the LHC would destroy the planet? The most common argument that particle physicists bring up is that cosmic ray collisions in the upper atmosphere sometimes happen at total energies higher than the collisions at the LHC. Therefore, they say, if those collisions could create dangerous black holes, we’d have died long ago.

There are two problems with this argument. The first one is what Bostrom and Tegmark addressed in their paper: the probability might not be small; we might just have been very lucky so far.

The bigger problem is that it’s a false comparison, because the risk doesn’t come from just any microscopic black holes, but from those that move slowly relative to Earth. These would eat up matter, grow, and then sit in the center of the Earth sucking in the rest of the planet. Cosmic ray collisions have a center of mass system that moves rapidly relative to Earth, and therefore everything that’s produced in those collisions is very likely to also move fast. This is not the case for LHC collisions. They give a very different distribution of velocities that makes it much more likely to produce a dangerously slow black hole.

The actual reason this was never likely is an entirely different one. You can’t produce microscopic black holes at the LHC if Einstein’s theory of general relativity is correct. Yes, this guy again. Like reasons for the world ending, he really pops up everywhere, doesn’t he?

The production of black holes at the LHC only becomes possible if you change Einstein’s theory. Why would you do that? The reason that particle physicists had for doing this was the same reason they had for believing that the LHC would produce supersymmetric particles. It’s an idea called naturalness. I explained in my first book why this naturalness idea is not scientific.

But, you see, if particle physicists had been honest about this, if they’d admitted that the idea that the LHC would produce those tiny black holes in the first place was nonsense, they’d also have had to admit that it was nonsense to claim it would produce dark matter particles or supersymmetry. So they had to come up with a different reason.

Okay, so in summary, the biggest existential risk is our own stupidity.

Files

Human Extinction: What Are the Risks?

What do we know about the risks of humans going extinct? In today's video I collect what we know about the frequency of natural disasters and just how they would kill us, and estimates for man-made disasters.

00:00 Intro
00:30 What Is an Existential Risk?
02:00 Would Extinction be Bad?
04:18 Man-made Disasters
10:36 What's The Risk of Man-made Disasters?
11:35 Natural Disasters
13:38 What's the Risk of Natural Disasters?
16:55 Why Can't the LHC Produce Black Holes?
19:29 Protect Your Data with NordVPN

Many thanks to Jordi Busqué for helping with this video http://jordibusque.com/

Comments

Anonymous

Don't you mean 1000 cubic kilometers, rather than 1000 cubic meters?

Sabine

Ah, you're right of course. Sorry about that. Will put a note under the video. Thanks for pointing out!

Anonymous

Excellent video. Many thanks!

Anonymous

For over 19 minutes, I thought Sabine was punking everyone with gobbledygook disguised as an "Extinction" video. Then at 19:40 Sabine stated, "the biggest existential risk is our own stupidity". So at 19:40, Sabine pulled out Occam's Razor and everything made sense. I agree that the biggest threat to our extinction is ourselves.

If, for the past 20 years, major powers had not been fighting over money and oil and territory, and who can pee the farthest, a more permanent space station could be in the works or completed. This space [station] could have supported the build-up of a permanent station on the moon. The moon station could have been the launching pad for missions to Mars and reactor-driven robotic missions out of the solar system. Such missions might have brought an end to some of the mining of natural resources on earth. The space stations could have eventually evolved into star ships, with the ability to haul cryo-sleep modules (maybe 30 years out) and food production modules that have the ability to grow and/or print any type of food we want. But many of the leaders and billionaires of the world have chosen greed over the long-term safety of the planet and its inhabitants.

By definition, we lack synergy, which (by definition) is "the combined power of a group of things when they are working together that is greater than the total power achieved by each working separately". I see this problem in the company where I work. People who refuse to share information with me because it will help me achieve my goals quicker and (in their minds) make them look bad. Then there is me, the voice of one crying in the wilderness. I try to document everything so that those who come after me have the information they need to accomplish a task. How many different levels of this problem exist between my world and the world of the world leader/billionaire? I cannot get upper management to fix this problem, which is where the problem has to be fixed. And because of societal issues, I will never be in upper management. So our own stupidity will prevent people from being the best they can be and having the best impact on a company's success. There is a good chance it will also prevent the next Einstein from ever getting a chance.

I had not planned to bring up Greta Thunberg (yea, that girl again), but I have to say the movement Greta started gives me hope. Millions of future voters in countries throughout the world realize that when they work together they have a voice and a chance to effect change. Up until then the 'old guard' and their friends in the 1% club had ensured the world would keep running happily along until the end of time, which for our world could be 60 years from now. Greta's influence has not only impacted young people, Greta has been able to influence/shame world leaders to the point where the climate crisis was suddenly high on their agenda. It is my hope that Greta's influence on world leaders goes far beyond the climate crisis as young voters throughout the world elect reasonable-minded politicians who believe that working together accomplishes more than fighting together ever has and ever will.

Looking back on what I wrote I thought mine was bigger than Sabine's. No, it's not. Happy New Year to everyone. David Brown 12/31/2022

Anonymous

OK, so supersymmetry is NOT on the list of things that are likely to cause human extinction -- good to know. I was actually kind of amazed that so many commenters on YouTube and elsewhere went with our own stupidity as the cause. Yeah, jokingly, obviously stupidity is what will do us in, or perhaps, it's not so much stupidity as a lack of foresight.

Anonymous

Happy New Year to you as well David. I am optimistic there are many more Gretas growing up and we’ll be ok.

Anonymous

Is salvation Supersymmetric then 😆? Perhaps not our stupidity, as it was quite clear from the episode there are smart people thinking about all sorts of contingencies, but our arrogance, when we choose not to listen and engage on matters that seem to be a distant concern.

Anonymous

As you mention, extinction events have happened in the past and will continue into the future. The issue is not extinction itself, nothing lasts forever, but how the extinction takes place. There is a big moral difference between extinction due to climate change from natural causes and the extinction event that humans are causing due to overpopulation, direct slaughter, encroachment, our extractive economy, climate change, etc. Natural causes have no moral basis, human causes do; we simply exalt ourselves above accountability for the harm that we cause those we don't value, or value only as objects for our use.

The Permian Extinction was caused by climate change and was estimated to take 60,000 +/- 48,000 years. That was without humans. With humans, global warming is but the latest stressor of wildlife already in decline, and the imbalance is seen in the estimate that our biomass, at 0.06 GtC, is almost 10X that of all wild mammals, at 0.007 GtC. However, add to ours that of our livestock, at 0.1 GtC, which brings us to 22.8X that of all wild mammals. We have destroyed the natural balance and that is precisely what happened in the previous extinction events, except for the one at the end of the Cretaceous.

Longtermism is, IMO, just plain stupid. Adherents disregard pain and suffering that we cause because they are concerned with the idea of us not existing forever. I have no patience for their ideology because my concern is with our victims, nonhuman animals, not us. We may deserve what we do to ourselves, but as we consider ourselves moral we have obligations to those we don't value that we choose to ignore for purely selfish reasons, and therefore they do not deserve what we do to them. We are driving an extinction event that will reach a tipping point to collapse if we do not change our selfish and wasteful ways. The fossil record preserves such events that we choose to ignore.

Anonymous

Happy New Year to you too! Now, to be the Debby Downer: https://www.pnas.org/doi/10.1073/pnas.1711842115 AND https://www.theguardian.com/environment/2022/oct/13/almost-70-of-animal-populations-wiped-out-since-1970-report-reveals-aoe Unless we replace our extractive economy with a sustainable one and develop policies to get people to stop having kids to drive our population to well below 2 billion, all is lost; we will actually reach the tipping point that causes another mass extinction that will include us. For me, the issue is the pain and suffering that we cause our victims because we exalt ourselves above accountability for the harm we cause them. They deserve better from us, moral beings.

Anonymous

“The actual reason this was never likely is an entirely different one. You can’t produce microscopic black holes at the LHC if Einstein’s theory of general relativity is correct. Yes, this guy again. Like reasons for the world ending, he really pops up everywhere, doesn’t he?“ Why does he (Einstein) pop up everywhere? Answer: Because he is treated like the revealer of a religion. Is THAT science?? It was Lorentz who started the theory of relativity. But for his approach he had to assume the existence of molecules in order to explain the contraction. This was considered by the physical community of his time to be too speculative and therefore not scientific. So Einstein seemed more straightforward with his assumption of a curved space-time. We now know that Lorentz was right. But that train has left the station, as we say. So we have to live with an unnecessarily complicated theory. And with the problems of dark matter and dark energy, which do not exist in Lorentz's world.

Anonymous

Lorentz is widely credited for deriving the coordinate transformations of special relativity. However, he did so by studying Maxwell’s equations. For example, see 21-6 in volume II of the Feynman lectures: https://www.feynmanlectures.caltech.edu/II_21.html. As for DM and DE, that’s just nomenclature for phenomena that we have yet to explain. Doesn’t matter what we call it, no one knows what’s causing either.

Anonymous

But yes, we know it; or we can know it if we follow Lorentz. If we continue Lorentz’s approach to relativity (which is based on physical facts rather than on principles) up to general relativity, then we can understand that gravity has nothing to do with mass. That means every elementary particle contributes equally to the field. And that means that the “normal” photons contribute equally with the other particles. And this explains DM completely. More than that, the explanation is quantitative, in contrast to all other attempts at explanation. Even Einstein himself was in 1911 close to this understanding of gravity. Dark energy is much simpler. If we accept that the speed of light was slightly higher in former times, then the Doppler evaluation yields a higher recession speed for old stars. So there is no acceleration and consequently no need for any dark energy.

Anonymous

So in other words what you are saying is that guy was wrong. The speed of light is not constant? If you got some convincing references, do share.

Anonymous

I am unfortunately not as educated as the rest but this sparked my interest: WHAT HUMANS CAN LEARN FROM CALHOUN'S RODENT UTOPIA

Anonymous

It is the other way around: Einstein never gave a physical argument for why the speed of light is constant. He only said: it is an observation. – But this observation is valid only for us living at this time. What about an observation made one billion years ago? That is not available as a direct measurement, but it is as an indirect measurement. And such indirect measurements were made by Saul Perlmutter et al. and by Adam Riess et al. However, the explanation that space underlies an accelerated motion was seen by them and by the physical community as the simpler and more plausible explanation than a change of c. If we look at our present understanding of cosmology, the assumption of a change of the space and a change of the speed of light are symmetric to each other. So the question is: what is more plausible? I once had the occasion to talk to Saul Perlmutter about this question. He liked the idea of a changing space better than a changing c, but he did not give me solid arguments for his position.

Anonymous (edited)

Hi Albrecht and Rad, There are observational reasons for a constant c. Remember that c = 1/sqrt(mu_0 epsilon_0) where mu_0 and epsilon_0 are the permeability and permittivity of free space, which are also constants of nature. Atomic structure is dependent on mu_0 and epsilon_0, or rather, if these two constants change, then atomic energy levels will change, not just shift red-ward or blue-ward. The energies of the Halpha, Hbeta, Hgamma, Ca II H & K lines, etc would change and the energy difference between these spectral lines would change. If c was different in the past, then when we look at the spectra of galaxies far away, the spectra would be different from local galaxies -- which is not observed. What is observed is just a simple Doppler shift.
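
For reference, the relation quoted in the comment above can be checked numerically with CODATA values (a quick throwaway sketch):

```python
from math import sqrt

mu_0 = 1.25663706212e-6       # vacuum permeability in H/m (CODATA 2018)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity in F/m (CODATA 2018)

# c = 1/sqrt(mu_0 * epsilon_0), as used in the comment above
c = 1 / sqrt(mu_0 * epsilon_0)
print(f"c = {c:,.0f} m/s")    # 299,792,458 m/s, the defined speed of light
```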

Anonymous

Thanks for bringing up Calhoun's rodent experiments, Per, that was enlightening to read. It looks like in Universe 25, all of the teenage mice were addicted to their cell phones 24 hours a day and lost all interest in sex and reproduction :-).

Anonymous

You mean it's the distraction of cell phones / games / entertainment that potentially stops us from evolving or reproducing?

Anonymous

I was mostly joking, but even so we can see some parallels between the rodent civilizations Calhoun manipulated and pockets of humans. I'm thinking of things as heavy as the Rwandan genocide between the Hutus and the Tutsis all the way to much lighter things like teenagers not learning how to socialize in person because they do all of their socializing with text messages from a distance. With the proliferation of young, male INCELs, I do wonder if these socialization changes are at least partially to blame.

Anonymous

Hi Tracey, That is an interesting point, but also a complex one. The energy levels of the atom do not only depend on c, but also on Planck’s constant h and the actual mass m. And those both depend in turn on c. A quite complex situation. So, what is the final result then for the energy level? Or the difference of two of them? Did you determine that? To the other point of c, epsilon, and mu. Your equation is formally correct, but physically misleading. The speed of light c is not given by epsilon and mu, but mu is given by c and epsilon. The reason is that magnetism is not a force parallel to the electric one, but magnetism is only an apparent force, somewhat equivalent to the Coriolis force. It is a relativistic side effect of the electrical one. Maxwell could not know this because he did not know the Lorentz transformation which is needed to understand magnetism. The Maxwell equations are helpful for the daily life of an engineer, but do not reflect the physical reality. - Do you want literature?

Anonymous

For the spin-flip transition of neutral hydrogen, I end up with deltaE proportional to c^6 after accounting for where c enters into the equation with the Bohr radius, m_e, m_p, mu_0 etc. A 1% change in c would change the emission line from 1420 MHz to 1335 MHz. At the same time, the Balmer line energies are proportional to c^4, so they would be at different apparent Doppler shifts than the neutral hydrogen line.

Regardless, I don't think my spectral argument is necessarily correct, otherwise the situation would have been settled long ago. If, for example, c was different in the first few minutes of the universe, but has been constant since, we wouldn't see this in the spectra of galaxies. If c was different by milli percent, we would have a hard time sorting this out given the noise level in the spectra of faint, distant galaxies.

How much does c need to change with time in order to explain the effects currently attributed to dark matter and dark energy? If c only needs to change by milli percent, then I guess my real argument isn't about spectra but about Occam's razor. Atoms are complex systems where the structure depends on fundamental, interrelated constants of nature. What makes more sense, that c (and everything else) changes in such a way that we exactly recover atomic structure (and fine structure and hyperfine structure) or that c is constant? Given no way to settle this with current data limitations, I would side with Saul Perlmutter and stick with the simpler explanation.

I was fixing to hit reply just now, but what flew through my mind is what is the definition of "simpler" here. If changing c by some, currently undetectable, amount solves dark matter and dark energy and keeps atomic structure the same as near as we can measure, is not this the simpler explanation? Is there a mechanism that drives the time evolution of c and the other fundamental constants of the universe? What other signatures of a c(t) universe might exist to break the degeneracy between the competing explanations? It's certainly an interesting idea, Albrecht, but I'm not courageous enough to stake a claim on c(t) hill.

Anonymous (edited)

Hi Tracey, First of all I want to give you the reference to the following paper: A. Albrecht & J. Magueijo: “A time varying speed of light as a solution to cosmological puzzles”; arXiv:astro-ph/9811018 ; 1999. They have investigated the benefit and the implications of a variable c with the result that they find it helpful for several problems in the present understanding of cosmology. To your question about dark energy and dark matter: Dark energy would be explained (or better unnecessary!) if the speed of light was higher by a few percent a billion years ago. Rough estimate. And this can also be applied to the question of inflation. Physics assumes an expansion of space by 50 orders of magnitude in the beginning. Alternatively we can assume an increase of the speed of light by just this factor. Dark matter does not have anything to do with a variable speed of light. Dark matter is explained by a theory which says that gravity has nothing to do with mass, but every elementary particle contributes in the same way; so also photons. You find that the assumption of an accelerated expansion of the space is simpler than a variable c? I object to this, because the so called dark energy is now in discussion for several decades and no one in physics has a slight idea how this could work. Whereas a variable c can explain several observations which now are causing problems. See Albrecht and Magueijo. To your quantitative considerations in the beginning: the wave function of the hydrogen atom does not only contain c but also h and m, as I have already mentioned. And the latter depend both on c. That should be in your consideration.

Anonymous

So what I hear you say is that there is something behind all these smaller groups that you point out as maladapted. Doesn't "God" love the normal curve? Would we not see extremes on both sides of any spectrum? I guess... at what point is it a problem we have to address, and how would we discover its root cause?

Anonymous

Hi Per, I normally don't check this far back in the feed, so sorry my response is late. I fully admit that I may be a victim of the news/social media algorithm where it seems like certain classes of outliers have grown in recent years. I'm just as stupid as the next person when it comes to perception vs reality. I think you're right in that it would be difficult to determine if the relative amount of outliers has actually grown given that they may have always been around, just silent because of the lack of social media through most of history. I also wonder how transferable the rodent social experiments are to human populations. On the face of it, we can jokingly make a number of connections, but there are so many social variables in humans that there may be no way to truly isolate one variable. For example, there have been a number of political candidates pushing universal basic income, and it certainly sounds like a good idea on many fronts, but this is essentially what the rats were given in their utopia, so what unintended consequences might arise in human populations? I certainly don't know, but this is why UBI needs more study before it is implemented on large scales.

Anonymous

Thank you for answering my message. The Egyptians built the pyramids even though they don't have any existential value. I guess food was abundant and they decided to keep people busy. In our modern world there is no reason why people cannot continue working even though robots do the "real" work. Making people do nothing is the root of all troubles... it's better to slowly progress into a utopia where people can find what they are good at and do that instead. But I am just a bear looking at the stars; I don't think wise people really understand what they are doing right now... Robots are an asset, but people, like rats, need a purpose to wake up every morning. GDP can be provided by robots while people are entertained by jobs that give them some "purpose".