Podcast Episodes

Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]



Season 2 Episode 9


This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF.

This is an additional installment of our "Worthy Succ…


Published 3 months, 1 week ago

Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]



Season 5 Episode 1


This is an interview with Jack Shanahan, a three-star General and former Director of the Joint AI Center (JAIC) within the US Department of Defense. 

This is the first installment of our "US-China AGI Re…


Published 3 months, 3 weeks ago

Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]



Season 2 Episode 8


This is an interview with Richard Ngo, an AGI researcher and thinker with extensive stints at both OpenAI and DeepMind.

This is an additional installment of our "Worthy Successor" series - where we exp…


Published 4 months, 1 week ago

Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]



Season 4 Episode 2


This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and G…


Published 4 months, 3 weeks ago

Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]



Season 4 Episode 1


This is an interview with Max Tegmark, MIT professor, Founder of the Future of Life Institute, and author of Life 3.0.

This interview was recorded on-site at AI Safety Connect 2025, a side event f…


Published 5 months, 1 week ago

Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]



Season 2 Episode 7


This is an interview with Dr. Michael Levin, a pioneering developmental biologist at Tufts University.

This is an additional installment of our "Worthy Successor" series - where we explore the kinds o…


Published 5 months, 3 weeks ago

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]




Season 3 Episode 6


This is an interview with Eliezer Yudkowsky, AI Researcher at the Machine Intelligence Research Institute.

This is the sixth installment of our "AGI Governance" series - where we explore the means, ob…


Published 7 months, 1 week ago

Connor Leahy - Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]



Season 3 Episode 5


This is an interview with Connor Leahy, the Founder and CEO of Conjecture.

This is the fifth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of …


Published 7 months, 3 weeks ago

Andrea Miotti - A Human-First AI Future [AGI Governance, Episode 4]



Season 3 Episode 4


This is an interview with Andrea Miotti, the Founder and Executive Director of ControlAI.

This is the fourth installment of our "AGI Governance" series - where we explore the means, objectives, and im…


Published 8 months, 1 week ago

Stephen Ibaraki - The Beginning of AGI Global Coordination [AGI Governance, Episode 3]



Season 3 Episode 3


This is an interview with Stephen Ibaraki, the Founder of the ITU's AI for Good initiative (the ITU is a United Nations agency) and Chairman of REDDS Capital.

This is the third installment of our "AGI Governanc…


Published 8 months, 3 weeks ago




