Episode Details
“Worlds where we solve AI alignment on purpose don’t look like the world we live in” by MichaelDickens
Description
(Or: Why I don't see how the probability of extinction could be less than 25% on the current trajectory)
Cross-posted from my website.
AI developers are trying to build superintelligent AI. If they succeed, there's a high risk that the AI will kill everyone. The AI companies know this; they believe they can figure out how to align the AI so that it doesn't kill us.
Maybe we solve the alignment problem before superintelligent AI kills everyone. But if we do, it will happen because we got lucky, not because we as a civilization treated the problem with the gravity it deserves—unless we start taking the alignment problem dramatically more seriously than we currently do.
Think about what it looks like when a hard problem gets solved. Think about the Apollo program: engineers working out minute details; running simulation after simulation; planning for remote possibilities.
Think about what it looks like when a hard problem doesn't get solved. Consider the world's response to COVID.
When I look at civilization's response to the AI alignment problem, I do not see something resembling Apollo. When I visualize what it looks like for civilization to buckle [...]
The original text contained 6 footnotes which were omitted from this narration.
---
First published:
March 20th, 2026
---
Narrated by TYPE III AUDIO.