
[This is a transcript with links to references.]

We’ve seen a lot of headlines in the past year about how dangerous AI is and how overblown these fears are. I’ve found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they’re worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they found is not very reassuring. Let’s have a look.

This new report is based on several rounds of interviews with 12 experts on software development, using what’s called the Delphi method. The Delphi method is named after the Oracle of Delphi, a position held by a priestess in the Greek city of Delphi around 2500 years ago. The Oracle’s task was, supposedly, to convey messages from the gods about the future.

The Delphi method was invented by the American non-profit RAND Corporation in the 1950s to make better use of experts’ knowledge. It works by conducting in-depth interviews with the experts. The interviews are then transcribed and shared anonymously with the other participants, who comment on each other’s answers and add further information. Then another round of interviews is done. This process can be repeated several times.
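To make the iteration concrete, here is a minimal Python sketch of how a Delphi-style process converges. It is not the researchers’ actual protocol: the median-based feedback, the revision rule, and all numbers are invented purely for illustration.

    import statistics

    def delphi_rounds(initial_estimates, rounds=3, pull=0.5):
        """Toy Delphi iteration: in each round, every expert sees an
        anonymized summary of the group's answers (here, the median)
        and revises their own estimate part-way toward it."""
        estimates = list(initial_estimates)
        for _ in range(rounds):
            feedback = statistics.median(estimates)  # anonymized group feedback
            estimates = [e + pull * (feedback - e) for e in estimates]
        return estimates

    # Hypothetical example: 12 experts guess the year some milestone is reached.
    experts = [2030, 2035, 2038, 2040, 2040, 2041, 2042, 2045, 2050, 2055, 2060, 2070]
    print(delphi_rounds(experts))  # the estimates cluster over successive rounds

In a real Delphi study the feedback is qualitative, transcribed interviews rather than a single number, but the structure of repeated, anonymized rounds is the same.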

The Delphi method has become a common way for companies and committees to leverage expert knowledge and convert it into actionable plans, and that’s what these researchers also did.

They asked a lot of questions about what would happen in software development by the year 2040 and eventually identified five points on which the experts more or less agreed.

The first one is that they all agree that by 2040, corners will be cut in AI safety. But interestingly enough, they think it’s not because of competition between companies, but because of competition between nations; in particular, they name the United States and China.

The results are summarized in this chart, where blue means agreement, orange disagreement, and white no opinion. Two of the experts said that by 2040 AI would cause events with at least a million deaths; that’s a megadeath. Yes, megadeath is actually a unit, not just the name of a heavy metal band. You can also see that several experts disagree, but this is partly because they think it will “only” be a few thousand fatalities.

Another thing on which the experts all agree is that by 2040, quantum computing will only just be coming into use. Again, you can see that some of them disagree, but in the text it’s explained that they disagree by degree, in that one could say quantum computing is already being used today; it just has no commercial relevance, and that’s not going to change by 2040.

The next point of agreement is that almost all of them are worried that AI will make it increasingly hard to tell truth from fiction in various domains, from written text to images to video, and that it will likely come to an arms race in which some AIs produce fake content and other AIs constantly try to identify content as fake, quite possibly sometimes accidentally flagging the truth as fake. It’ll be a mess. One of the participants summarized it like this:

“We’re not going to be living in a George Orwell world.… We’re going to be living in a Philip K. Dick world [where] nobody knows what’s true.” And just in case you’re too young to remember, Philip K. Dick wrote a bunch of dystopian future novels in which his characters frequently question the nature of reality, the most famous probably being “Do Androids Dream of Electric Sheep?”, which was later adapted into the movie Blade Runner.

Now those three points I basically expected to see, but the last two I found somewhat of a surprise. The experts all agree that by 2040 it will become common to buy and own internet assets by way of tokenship. A tokenship is basically a digital record, and it’s what NFTs have become known for. Even more interestingly, they don’t think that this tokenization will happen through blockchain technology but through other distributed services. According to one of the interviewees, “Blockchain has now proved its irrelevance.”

And the final item is that they think the increasing complexity of software in general, and that of AI in particular, will make it hard to tell apart accidents from deliberate manipulation, essentially because no human will be able to really figure out what’s going on. Modern-day Kafka, basically.

The experts also came up with a bunch of proposals for how to address these issues. As you’d expect, they ask for regulations on AI safety and more built-in safety requirements and outcome checks on software development. This is what is listed here as “ambient accountability”. They also ask for better education of people in relevant positions and more input from the social sciences on what the impact of all these changes might be. These are surely all good ideas, and they’ll surely all be pretty much ignored.

I am confident these experts know what they’re talking about, but I think they have somewhat of a blind spot in an area that I care a lot about, which is scientific publication. AI is going to make it dramatically easier to produce rubbish papers and fake data and spread these all over the globe. In fact, I would bet it’s basically happening as we speak.

This falls into the general category of fake news and misinformation, but I’d argue it’s an underestimated special case. That’s because fact checkers heavily rely on scientific publication, and if that base erodes, the entire house will tumble down.

So yes, interesting times ahead. Maybe we’ll soon find out whether androids do dream of electric sheep and, if they do, whether that makes them vegan.

AI experts make predictions for 2040. I was a little surprised.

The paper is here: https://ieeexplore.ieee.org/document/10380243

Comments

Anonymous

In matters of fraud, my money is on humans over AI. We already have the fake science news covered: https://www.science.org/content/article/paper-mills-bribing-editors-scholarly-journals-science-investigation-finds

Anonymous

Thanks for the link. As to betting money, mine I’ll keep in my pocket until I know what best to do with it, such as betting on possible future developments, like some humans using “AI” to help them commit fraud. When it comes to engineers and scientists inflating their CVs by using things such as ChatGPT to write parts or all of a manuscript to be submitted for publication, then revising it to make it look more human-made and removing glaring errors, I understand this is happening already. So things are really not splendid even now, as I have already written here, regarding what is going on in scientific publishing other than paper mills. I think Sabine has mentioned them indirectly.

Anonymous

Would Megadeth have been a thing if Mustaine had stayed with Metallica?