Episode Details
“Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’” by 80000_Hours
Description
By Rob Wiblin | Watch on YouTube | Listen on Spotify | Read transcript
Episode summary
Whether or not we get AGI in the next few years, a lot of people are starting to not really care about that question. They still expect the next 25 years or the next 50 years to play out kind of like the last 25 years or the last 50 years…
Whereas I think that there's a pretty good chance that by 2050 the world will look as different from today as today does from the hunter-gatherer era. It's like 10,000 years of progress rather than 25 years of progress driven by AI automating all intellectual activity.
— Ajeya Cotra
Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan?
Today's guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.
She thinks there's a meaningful chance we’ll see as much [...]
---
Outline:
(00:22) Episode summary
(03:39) Highlights
(03:42) The spectrum of expectations about AGI
(06:50) Wildly different views about the economic effects of AI
(10:44) The most dangerous AI progress might remain secret
(14:08) White-knuckling the 12-month window after automated AI R&D
(17:06) EA as an incubator for avant-garde causes others won't touch
---
First published:
February 17th, 2026
---
Narrated by TYPE III AUDIO.