AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out
Description
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?
With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.
Check out the full transcript on the 80,000 Hours website.
You can decide whether the views we (and our guests) expressed back then have held up over these last two busy years. You’ll hear:
- Ajeya Cotra on overrated AGI worries
- Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
- Ian Morris on why the future must be radically different from the present
- Nick Joseph on whether his company’s internal safety policies are enough
- Richard Ngo on what everyone gets wrong about how ML models work
- Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
- Carl Shulman on why you’ll prefer robot nannies over human ones
- Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
- Hugo Mercier on why even superhuman AGI won’t be that persuasive
- Rob Long on the case for and against digital sentience
- Anil Seth on why he thinks consciousness is probably biological
- Lewis Bollard on whether AI advances will help or hurt nonhuman animals
- Rohin Shah on whether humanity’s work ends at the point it creates AGI
And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.
Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:00:58)
- Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
- Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
- Rob & Luisa: Agentic AI and designing machine people (00:24:06)
- Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
- Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
- Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
- Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
- Richard Ngo on the most important mis