Rob Miles is the most popular AI safety educator on YouTube, with millions of views across his videos explaining AI alignment to general audiences. He dropped out of his PhD in 2011 to focus entirely on AI safety communication – a prescient career pivot that positioned him as one of the field's most trusted voices over a decade before ChatGPT made AI risk mainstream.
Rob sits firmly in the 10-90% P(Doom) range, though he admits his uncertainty is "hugely variable" and depends heavily on how humanity responds to the challenge. What makes Rob particularly compelling is the contrast between his characteristic British calm and his deeply serious assessment of our situation. He's the type of person who can explain existential risk with the measured tone of a nature documentarian while internally believing we're probably headed toward catastrophe.
Rob has identified several underappreciated problems, particularly around alignment stability under self-modification. He argues that even if we align current AI systems, there's no guarantee their successors will inherit those values – a discontinuity problem that most safety work ignores. He's also highlighted the "missing mood" in AI discourse, where people discuss potential human extinction with the emotional register of an academic conference rather than an emergency.
We explore Rob's mainline doom scenario involving recursive self-improvement, why he thinks there's enormous headroom above human intelligence, and his views on everything from warning shots to the Malthusian dynamics that might govern a post-AGI world. Rob makes a fascinating case that we may be the "least intelligent species capable of technological civilization" – which has profound implications for what smarter systems might achieve.
Our key disagreement centers on strategy: Rob thinks some safety-minded people should work inside AI companies to influence them from within, while I argue this enables "tractability washing" that makes the companies look responsible while they race toward potentially catastrophic capabilities. Rob sees it as necessary harm reduction; I see it as providing legitimacy to fundamentally reckless enterprises.
The conversation also tackles a meta-question about communication strategy. Rob acknowledges that his measured, analytical approach might be missing something crucial – that perhaps someone needs to be "running around screaming" to convey the appropriate emotional urgency. It's a revealing moment from someone who's spent over a decade trying to wake people up to humanity's most important challenge, only to watch the world continue treating it as an interesting intellectual puzzle rather than an existential emergency.
Timestamps
* 00:00:00 - Cold Open
* 00:00:28 - Introducing Rob Miles
* 00:01:42 - Rob's Background and Childhood
* 00:02:05 - Being Aspie
* 00:04:50 - Less Wrong Community and "Normies"
* 00:06:24 - Chesterton's Fence and Cassava Root
* 00:09:30 - Transition to AI Safety Research
* 00:11:52 - Discovering Communication Skills
* 00:15:36 - YouTube Success and Channel Growth
* 00:16:46 - Current Focus: Technical vs Political
* 00:18:50 - Nuclear Near-Misses and Y2K
* 00:21:55 - What’s Your P(Doom)™
* 00:27:31 - Uncertainty About Human Response
* 00:31:04 - Views on Yudkowsky and AI Risk Arguments
* 00:42:07 - Mainline Catastrophe Scenario
* 00:47:32 - Headroom Above Human Intelligence
* 00:54:58 - Detailed Doom Scenario
* 01:01:07 - Self-Modification and Alignment Stability
* 01:17:26 - Warning Shots Problem
Published 6 days, 6 hours ago