Human Extinction: How High is the Risk? (Patreon)
[This is a transcript with references.]
Some topics we cover on this channel are a little heavy, so today I want to talk about something lighthearted. Human extinction. What’s the risk of human extinction and what are the biggest factors that contribute to the risk? That’s what we’ll talk about today.
Why is Sabine talking about human extinction? Personal hobby? No, I got into this through my PhD thesis. Not because the thesis was that bad, but because it was about the production of black holes at the Large Hadron Collider. At the time a lot of people were scared that such a black hole could eat up the planet.
The reaction I saw to this from almost all particle physicists was to laugh it off. I got the impression they couldn’t even contemplate the possibility they might accidentally kill us all. So, they just discarded the idea as ridiculous. Most of them still do this today. Remember how they didn’t bother putting enough lifeboats on the Titanic? It was kind of like that.
The idea that a particle collider might destroy the planet by creating a black hole wasn’t quite as stupid as particle physicists wanted you to believe. I’ll say a little more about this later. But this is what got me thinking about human extinction. We shouldn’t discard the possibility of extinction as silly because it’s never happened before, we should take this threat seriously. Well, maybe not too seriously. I’m not good with that. But let’s at least talk about it.
What do we even mean by human extinction? After all, a lot of species have gone extinct, but sometimes that just means they produced offspring that eventually became genetically so different we called it a different species, like with the different species of “homo” in our own past.
The phrase “Existential Risk” comes from longtermists, a particular species of humans that we just talked about a few weeks ago. The type of extinction they worry about is not a gradual transition to another species, but the end of all intelligent life on earth. Some might argue it’s not all that clear there’s intelligent life on earth to begin with, but Nick Bostrom, director of the Future of Humanity Institute, put it like this: “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”
But would human extinction really be that bad? Well, since you’re asking, a few years ago, a group of psychologists from the UK did a survey on this. They recruited about 180 Americans and asked them whether extinction is bad. The exact question they used is: “Suppose that humanity went extinct in a catastrophe. This means that no human being will ever live anymore in the future. Would that be bad or not bad?” – “yes” or “no”?
78 percent answered that “yes”, human extinction is bad. And that means, indeed, that roughly one in five said extinction wouldn’t be bad.
You might wonder whether those people were just trolling, but I believe most of them were quite sincere. That’s because a later question asked them to explain why they felt that way. The people who said that extinction would be good typically had one of three arguments. (a) Because humans are destroying the planet and nature would be better off without us. (b) It’s the natural way of things. Or (c) If no one’s around, it can’t be bad. Or to put it another way, if we chop down the last tree, it’s okay because other trees can’t hear it fall.
The logic on the other side of the argument is also interesting. The most common reason people gave for why extinction is bad was, well, if everyone is dead then I’m dead too and I’d rather not be dead, or a similar statement about their children. The next most common explanation was some version of “what’s there to explain, of course extinction is bad.” I guess that’s all the people who haven’t seen Jurassic Park.
Okay, having thus found tentative evidence that most people think extinction is kind of bad, what are the greatest risks?
We can roughly classify existential risks into natural disasters we had no hand in, and self-caused disasters. At the moment, the self-caused disasters are the more urgent ones to deal with because they’re multiplying as we develop more powerful technologies. The risks that longtermists are currently most worried about are nuclear war, climate change, biotechnology, and artificial intelligence.
The biggest problem with nuclear war isn’t the detonations, and it isn’t the radiation either, it’s the enormous amount of dust and soot that’d be injected into the atmosphere. This blocks a lot of sunlight and causes what’s been dubbed “nuclear winter”. Except it isn’t just one winter, it’d last for more than a decade.
Just a few months ago, an international team of researchers published a paper in the journal Nature Food with a new analysis of the consequences of nuclear war. They combined the predictions from a climate model with models for crop yield and fish stocks.
For a major nuclear war, for example between the United States and Russia, the nuclear winter could cause air temperatures to drop by more than 10 degrees on average. Rainfall would also noticeably decrease because the summer monsoon would significantly weaken in some of the world’s most fertile areas. This would lead to massive food shortages all over the globe, faster than we could develop any technology to deal with the problem. They estimate that up to 5 billion people could die from starvation. Yeah, that’s grim. What did you expect, clicking on a video about extinction?
In this figure, the red color is for places where the average amount of available calories falls below the amount necessary for survival. You see that for a major nuclear war, that’s basically the entire world, except Australia, New Zealand, and Argentina. So cockroaches and crocodiles will be fine, don’t worry.
With climate change, too, the major problem isn’t the primary effect, it’s the secondary effects that impede our ability to recover from other problems. It’s the consequences of an increasing number of natural disasters, droughts, and fires that lead to economic distress, which upsets supply networks, which causes international tension. Or, to put this differently: We are only so many people and there’s only so much we can do. If we’re forced to constantly cope with climate change, other things won’t get done.
Climate change is unlikely to cause complete extinction on its own because it’s a self-limiting problem. If economies collapse, carbon dioxide emissions will decrease, and in two hundred years or so we might be able to start over again. But in those two hundred years humanity will be extraordinarily susceptible to additional problems like a nuclear war or pandemics.
A few months ago, a team of researchers from institutions such as the Centre for the Study of Existential Risk in Cambridge and, again, the Future of Humanity Institute in Oxford, published a paper in which they say it’s necessary to consider such “bad-to-worst-case scenarios” and that this risk is “dangerously underexplored”. Christmas parties at those institutes must be fun. Hey, Simon, I saw your new paper about multi-resistant bacteria in the wake of nuclear wars, well done! But I want to talk to you about brain-eating fungi.
Pandemics are bad enough, but a pandemic caused by a bioengineered virus could be worse. Think of a virus as lethal as Ebola but as contagious as measles, and a government response as sluggish as we’ve seen with COVID.
I actually think COVID was a blessing in disguise because it’s a fairly mild virus that gave us an excellent test run. It might have been bad at the time but think of COVID like the 3rd Pirates of the Caribbean movie. Yes, it was bad but worse was yet to come. Hopefully next time we’ll be better prepared - both for the next pandemic and the next pirates movie.
But viruses aren’t the only problem; there are also bacteria, fungi, and other agents that can be turned into disease-causing bioweapons. And then there’s the risk that genetically modified organisms escape from the lab into the wild and cause ecosystems to collapse.
The biggest problem with Artificial Intelligence that longtermists see is that an AI could become intelligent enough to survive independently of us but pursue interests that conflict with our own. They call it the “misalignment problem”. In the worst case, the AIs might decide to get rid of us. And could we really blame them? I mean, most of us can’t draw a flower, let alone a human face, so what’s the point of our existence really?
This wouldn’t necessarily be an extinction event in the sense that intelligent life would still exist, it just wouldn’t be us. Under which circumstances you might consider an AI species a continuation of our own line is rather unclear. Longtermists argue it depends on whether the AI continues our “values”, but it seems odd to me to define a species by its values, and I’m not sure our values are all that great to begin with.
In any case, I consider this scenario unlikely because it assumes that advanced AIs will soon be easy to build and reproduce which is far from reality. If you look at what’s currently happening, supercomputers are getting bigger and bigger, and the bigger they get the more difficult they are to maintain and the longer it takes to train them. If you extrapolate the current trend to the next few hundred years, we will at best have a few intelligent machines owned by companies or governments, and each will require a big crew to keep it alive. They won’t take over the world any time soon.
What do we know about the likelihood of those human-caused extinction scenarios? Pretty much nothing, at least that’s my reading of the literature.
Take for example this survey that US Senator Richard Lugar sent to 132 experts in 2005. He asked them “What is the probability (expressed as a percentage) of an attack involving a nuclear explosion occurring somewhere in the world in the next ten years?” The answers of the so-called experts were all over the map, ranging from zero to 100 percent, so you might as well not bother asking.
According to the Australian philosopher Toby Ord, the risk of self-caused extinction in the next 100 years is 1 in 6. Well, as I keep preaching, a number without an uncertainty estimate isn’t science. If you added an uncertainty estimate to this number, I think you’d find anything between zero and one. So your guess is as good as his.
Let’s then have a look at the naturally occurring existential risks. I don’t want to go through all of them, but I do want to mention the biggest risk which is currently that of a supervolcano eruption. That’s an eruption which ejects more than a thousand cubic kilometers of material. They’re known to have happened repeatedly in the past. One of the most famous examples is Yellowstone. It had three mega eruptions in the past 2 million years, each of which covered most of the western US in ash a foot deep. The next eruption will probably come in the next 100 thousand years or so. Our planet has about a dozen supervolcanoes.
Supervolcano eruptions are a problem for the same reason as nuclear war. They can inject a lot of dust into the atmosphere that’d cool the planet rather suddenly, possibly by more than 10 degrees for a decade.
The problem with asteroid impacts, too, is that they would propel a lot of dust into the atmosphere. But that would take a pretty big asteroid, and big asteroids are luckily rare and also quite easy to spot. The asteroid that caused the extinction of the dinosaurs 65 million years ago is estimated to have been about 10 kilometers in diameter.
NASA currently knows about four asteroids of that size and none of them is on a collision course with us. If a new one appeared, we’d probably know at least a few months in advance. Getting a redirect mission under way would currently take at least several years of planning, which isn’t fast enough. But this is a problem that we can solve with current technology, technology that doesn’t require Bruce Willis or Ben Affleck to make it work. Really, supervolcanoes are the bigger problem, and there’s very little that current technology can do about them. Let alone Willis or Affleck. Another scary natural risk that we can’t currently do anything about is big solar flares. I talked about this in an earlier video.
Doing risk estimates is somewhat easier for natural disasters than for self-caused ones, because we can estimate their frequency from past records. This was done in a 2019 paper by researchers from Oxford including the previously mentioned Toby Ord.
They used the observation that humans have survived at least 200 thousand years on this planet to estimate the annual probability of human extinction from natural causes. And in this case, they actually do have an uncertainty estimate. They say it’s less than one in 87 thousand with 90 percent probability and less than one in 14 thousand with more than 99 point 9 percent probability.
If one uses records of the entire lineage of Homo which dates back about two million years, then the annual probability of extinction from natural causes falls below one in 870 thousand with 90 percent confidence. You’re probably more likely to see an expert being right about nuclear war than this.
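If you want to check these numbers, here is a minimal sketch in Python. It assumes the standard survival-likelihood bound, namely that an annual extinction probability p is ruled out at confidence level C once the survival probability (1 − p)^n drops below 1 − C; under that assumption it reproduces the paper’s rounded figures.

```python
def annual_risk_bound(years_survived, confidence):
    # Rule out any annual extinction probability p for which the
    # chance of surviving this long, (1 - p)**years_survived,
    # falls below 1 - confidence.
    return 1 - (1 - confidence) ** (1 / years_survived)

# Homo sapiens: at least 200,000 years of survival
print(1 / annual_risk_bound(200_000, 0.90))    # ~87,000, so p < 1 in 87,000
# The entire Homo lineage: about 2 million years
print(1 / annual_risk_bound(2_000_000, 0.90))  # ~870,000, so p < 1 in 870,000
```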
These estimates have a general problem, which is that from a sample of one you can’t tell the probability of occurrence apart from the probability of having picked a particular element of the sample. That is to say, we might just have been unusually lucky, and the number they came up with isn’t the probability that we’ll go extinct tomorrow, but a statement about how lucky we’ve been so far.
Here's an example of what I mean. Suppose you have a billion planets and each day half of them evaporate into nothing, so the daily extinction risk is one in two. After a month there’s about 1 planet left. The people on this one planet could now calculate the probability of going extinct tomorrow based on the observation that they’ve survived one month. They’d arrive at an estimated daily extinction risk of less than 6 in 1000 with 90 percent confidence, which is wildly wrong. The reason it’s wrong is that the people on that planet don’t know of all the other planets which went poof.
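Here is a quick simulation of that toy universe, just to make the effect concrete; the planet count and the daily risk are the ones from the example, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_planets = 10**9  # a billion planets
p_daily = 0.5      # true daily extinction risk: one in two

# Each day, every surviving planet independently survives with
# probability one half.
survivors = n_planets
for day in range(30):
    survivors = rng.binomial(survivors, 1 - p_daily)

print(survivors)  # typically 0, 1, or 2 planets left after a month

# Any planet still around necessarily has a 30-day record with zero
# extinction events, so a risk estimate based on its own history
# alone comes out tiny -- nowhere near the true value of one half.
```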
This means that the estimate based on observations from our own planet assumes we’re a typical planet, and not an extraordinarily lucky one.
The only way to make an estimate which does not rely on this assumption is to look at other planets to figure out how typical we are. At the moment this can’t tell us much about natural disasters on our planet, because we can’t observe those on other planets. But it can give us an estimate for the risk that our entire planet gets destroyed by natural causes, for example because a black hole comes by or a supernova goes off in the vicinity.
This estimate was done in 2005 by Nick Bostrom and Max Tegmark, and they found that the annual probability of our planet being destroyed is less than one in a trillion. Hey, at least I have some good news in this video! The reason they looked at this was that at the time people were worried that the Large Hadron Collider would produce a black hole, which, like a particle going round in the LHC, returns me to the beginning.
So, was there ever really a risk that the LHC would destroy the planet? The most common argument that particle physicists bring up is that cosmic ray collisions in the upper atmosphere sometimes happen at total energies higher than the collisions at the LHC. Therefore, they say, if those collisions could create dangerous black holes, we’d have died long ago.
There are two problems with this argument. The first one is what Bostrom and Tegmark addressed in their paper. The probability might not be small, we might just have been very lucky so far.
The bigger problem is that it’s a false comparison, because the risk doesn’t come from just any microscopic black holes, but from those that move slowly relative to earth. These would eat up matter, grow, and then sit in the center of the earth sucking in the rest of the planet. Cosmic ray collisions have a center-of-mass frame that moves rapidly relative to earth, and therefore everything that’s produced in those collisions is very likely to also move fast. This is not the case for LHC collisions. They give a very different distribution of velocities, which makes it much more likely to produce a dangerously slow black hole.
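To put some numbers on this, here is a back-of-the-envelope calculation in Python. It is only a sketch: the 13 TeV is the LHC’s proton-proton collision energy, and I treat the cosmic ray’s target as a single proton at rest for simplicity.

```python
m_p = 0.938       # proton mass in GeV
sqrt_s = 13000.0  # LHC proton-proton collision energy in GeV (13 TeV)

# Lab energy a cosmic ray proton needs, hitting a proton at rest, to
# reach the same collision energy: s = 2*E_lab*m_p + 2*m_p**2
E_lab = (sqrt_s**2 - 2 * m_p**2) / (2 * m_p)
print(f"required cosmic ray energy: {E_lab:.1e} GeV")  # ~9e7 GeV

# Lorentz boost of the center-of-mass frame relative to earth
gamma = (E_lab + m_p) / sqrt_s
print(f"boost of the CM frame: gamma = {gamma:.0f}")   # ~7000

# Anything produced near rest in that frame moves at essentially the
# speed of light relative to earth. In a symmetric LHC collision the
# CM frame is at rest in the lab, so slow products are possible.
```

So even a black hole born at rest in the cosmic ray’s center-of-mass frame would zip through the planet with a boost factor of several thousand, while an LHC black hole could be produced almost at rest.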
The actual reason this was never likely is an entirely different one. You can’t produce microscopic black holes at the LHC if Einstein’s theory of general relativity is correct. Yes, this guy again. Like reasons for the world ending, he really pops up everywhere, doesn’t he?
The production of black holes at the LHC only becomes possible if you change Einstein’s theory. Why would you do that? The reason that particle physicists had for doing this was the same reason they had for believing that the LHC would produce supersymmetric particles. It’s an idea called naturalness. I explained in my first book why this naturalness idea is not scientific.
But, you see, if particle physicists had been honest about this, if they’d admitted that the idea that the LHC would produce those tiny black holes in the first place was nonsense, they’d also have had to admit that it was nonsense to claim it would produce dark matter particles or supersymmetry. So they had to come up with a different reason.
Okay, so in summary, the biggest existential risk is our own stupidity.