
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

Description
This is a linkpost for https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut If you enjoy this, please consider subscribing to my Substack.

Sam Altman has said he believes that developing artificial general intelligence (AGI) could lead to human extinction, yet OpenAI is racing to build it as fast as possible. Why?

The most common story for how AI could overpower humanity involves an “intelligence explosion”: an AI system becomes capable enough to improve its own capabilities, bootstrapping its way to superintelligence. But even without recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.)

Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes.

Sam [...]

---

First published:
February 10th, 2024

Source:
https://www.lesswrong.com/posts/pEAHbJRiwnXCjb4A7/sam-altman-s-chip-ambitions-undercut-openai-s-safety

Linkpost URL:
https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut

---

Narrated by TYPE III AUDIO.

