Podcast Episodes
“The Failed Strategy of Artificial Intelligence Doomers” by Ben Pace
This is the best sociological account of the AI x-risk reduction efforts of the last ~decade that I've seen. I encourage folks to engage with its cri…
1 year, 2 months ago
“Murder plots are infohazards” by Chris Monteiro
Hi all
I've been hanging around the rationalist-sphere for many years now, mostly writing about transhumanism, until things started to change in 2016 …
1 year, 2 months ago
“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by garrison
This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artif…
1 year, 2 months ago
“The ‘Think It Faster’ Exercise” by Raemon
Ultimately, I don’t want to solve complex problems via laborious, complex thinking, if we can help it. Ideally, I'd want to basically intuitively fol…
1 year, 2 months ago
“So You Want To Make Marginal Progress...” by johnswentworth
Once upon a time, in ye olden days of strange names and before google maps, seven friends needed to figure out a driving route from their parking lot…
1 year, 2 months ago
“What is malevolence? On the nature, measurement, and distribution of dark traits” by David Althaus
Summary
In this post, we explore different ways of understanding and measuring malevolence and explain why individuals with concerning levels of mal…
1 year, 2 months ago
“How AI Takeover Might Happen in 2 Years” by joshc
I’m not a natural “doomsayer.” But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I’m like …
1 year, 2 months ago
“Gradual Disempowerment, Shell Games and Flinches” by Jan_Kulveit
Over the past year and half, I've had numerous conversations about the risks we describe in Gradual Disempowerment. (The shortest useful summary of t…
1 year, 2 months ago
“Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development” by Jan_Kulveit, Raymond D, Nora_Ammann, Deger Turan, David Scott Krueger (formerly: capybaralet), David Duvenaud
This is a link post. Full version on arXiv | X
Executive summary
AI risk scenarios usually portray a relatively sudden loss of human control to AIs,…
1 year, 2 months ago
“Planning for Extreme AI Risks” by joshc
This post should not be taken as a polished recommendation to AI companies and instead should be treated as an informal summary of a worldview. The c…
1 year, 2 months ago