Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more discussing how current AI systems…
Published 2 years ago
This is the trailer for Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more discussin…
Published 2 years ago
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans.
This is the AI …
Published 2 years ago
Did you know the makers of AI have no idea how to control their technology, while they admit it has the power to create human extinction? In For Humanity: An AI Safety Podcast, Episode #2 The Alignme…
Published 2 years, 1 month ago
How bout we choose not to just all die? Are you with me?
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores t…
Published 2 years, 1 month ago
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligen…
Published 2 years, 1 month ago