Content

Drawing on interviews with half a dozen top experts at companies like Nvidia, Google DeepMind, and Microsoft, as well as a host of other specialists in the field, I'll give you their timelines - and mine - for artificial general intelligence. By the end of the video, you should have a much better sense of what scale is needed, and what the approximate consensus is.

Links:

6 Exclusive Interviews, plus:

Metaculus Predictions: https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

Hassabis: https://www.independent.co.uk/tech/ai-deepmind-artificial-general-intelligence-b2332322.html

Carl Shulman: https://www.youtube.com/watch?v=_kRg-ZP1vQc&t=2151s

https://twitter.com/BostonDynamics/status/1618619858978996225

Nvidia B100: https://www.semianalysis.com/p/nvidias-plans-to-crush-competition

Yann LeCun: https://www.youtube.com/watch?v=EGDG3hgPNp8&t=5716s

https://twitter.com/ylecun/status/1731445805817409918

Synapses and Neural Networks: https://wp.nyu.edu/yungjurick/2020/03/15/debate-on-the-relationship-between-neural-network-and-the-brain/

Wait But Why: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Jensen Huang: https://www.youtube.com/watch?v=Pkj-BLHs6dE

Amodei Clip: https://twitter.com/dwarkesh_sp/status/1688577515550597121

Avatars at the End: https://shunsukesaito.github.io/rgca/

Levels of AGI, Google DeepMind: https://arxiv.org/pdf/2311.02462.pdf

Comments

Sean Betts

Excited for this - I knew you’d be a Wait But Why fan!

Rakesh Murria

Yann LeCun has really brought his AGI expectations forward, it feels? He used to be very dismissive of LLMs.

Brian Crabtree

Does OpenAI's 4-year Superalignment goal of July 2027 inform your timelines? I think Jan Leike said the plan is... Year 1-3: Solve alignment of human-level AI safety researchers. Year 4: Spin up millions of them to solve Superalignment. Also, given the huge influx of talent and compute into AI research, shouldn't we expect someone to figure out a successor to the transformer before 2028? Seems reasonable to expect humanity will collectively "push" on AGI architectures more in 2024 than it did in the past 10 years combined.

Shaun McDonogh

Feels like the timeline to AGI is quadratic to the number of recursive actionable breakthroughs (for example transformers). With each recursive breakthrough leading to the next breakthrough, maybe the question is how many breakthroughs lead to AGI? I would guess 3. With the timespan between each breakthrough being shorter due to the self recursive nature you pointed out after featuring Jensen. Also mind and body…feels like chicken and egg. Does embodiment help AI learn cause and effect better…no idea but fascinating.

Sean Betts

Great TED interview with Shane Legg released on AGI that you should include in the list above: https://youtu.be/kMUdrUP-QCs?si=WV7UgyvCNl6BKg29

AIExplained

With Sutskever leaving most likely, wonder if that superalignment team is slowed down...

AIExplained

Thing is, with sheer scale of compute, we might not even need breakthroughs (like Mamba perhaps?) to get to AGI...

Brian Crabtree

Yah, Ilya leaving would almost certainly slow down the superalignment team. And I think it's worth noting that Jan Leike expects to have an internal GPT by mid-2026 capable of "roughly human-level automated alignment research". That sounds very AGI-like. He said:

"The roughly human-level automated alignment researcher is this instrumental goal that we are pursuing in order to figure out how to align superintelligence because we don’t yet know how to do that. ... If you want to work back from the four years, I think, basically, probably, in three years you would want to be mostly done with your automated alignment researcher, assuming the capabilities are there. If they’re not there, then, our project might take longer, but for the best reasons." axrp.net/episode/2023/07/27/episode-24-superalignment-jan-leike.html

Also of note from that interview: "...we would want them [human-level automated alignment researchers] to figure out how to better align the next iteration of itself that then can work on this problem with even more brain power and then make more progress and tackle a wider range of approaches. And so you kind of bootstrap your way up to eventually having a system that can do very different research that will then allow us to align superintelligence."

So I think he's saying the team's plan for year 4 is a controlled, year-long explosion of both alignment and intelligence - with the "alignment explosion" staying one step ahead of the intelligence explosion until superalignment is solved by mid-2027. Presumably their internal GPT would bootstrap itself well above human-level intelligence to accomplish this, which is WILD. Does anyone have a different interpretation, though?

Shaun McDonogh

Perhaps that is correct. Sheer compute might be enough to spark learning and reasoning beyond the data. GPT-4 seems to have given us a glimpse of that. I feel positive that we will live to see the truth of it regardless. I bet you £5 it takes another 3 breakthroughs though lol.

Jon Kurishita

Does embodiment help AI learn cause and effect better? I would say we could simulate many of the embodiment features of humans in virtual worlds and SIM-type games. You can already see Unreal 5 starting to integrate this. Nvidia is also trying to do such simulation with the Omniverse ecosystem and its capabilities. We may see the first AGI-like agents in these worlds FIRST before we see it in the real physical world.

AIExplained

Fantastic point, I think you are right. Simulated embodiment would be the obvious first instantiation - this is what Nvidia are working so hard on. In effect, the first full-fledged AGI might appear to us like a video game character...

Anonymous

Very well done. Happy to be around for the ride!

Anonymous

This was amazing!!

Martin Percy

Thanks Philip, great video. One question you might like to address at some stage if you're talking to a neuroscientist: with Transformers, did AI perhaps stumble onto a process similar to that used by human brains?

r

You are wonderful Phillip. Thank you for your hard work 🙏🏿😔. Your channel is an island of good sense takes amid an ocean of AI hype nonsense.