For Humanity, An AI Safety Podcast Episode #2: The Alignment Problem

Published 2 years, 5 months ago
Description

Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans.


This is the AI safety podcast for all people — no tech background required. We focus only on the threat of human extinction from AI.


In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.



This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe