The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3 Trailer

Published 2 years, 5 months ago
Description

This is the trailer for Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more, discussing how current AI systems are black boxes: no one knows how they work inside.

For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans; no tech background is required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.



#AI #airisk #alignment #interpretability #doom #aisafety #openai #anthropic #eliezeryudkowsky #maxtegmark #connorleahy



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe
