Podcast Episodes
“Building AI Research Fleets” by bgold, Jesse Hoogland
From AI scientist to AI research fleet
Research automation is here (1, 2, 3). We saw it coming and planned ahead, which puts us ahead of most (4, 5, …
1 year, 3 months ago
“What Is The Alignment Problem?” by johnswentworth
So we want to align future AGIs. Ultimately we’d like to align them to human values, but in the shorter term we might start with other targets, like …
1 year, 3 months ago
“Applying traditional economic thinking to AGI: a trilemma” by Steven Byrnes
Traditional economics thinking has two strong principles, each based on abundant historical data:
Principle (A): No “lump of labor”: If human populat…
1 year, 3 months ago
“Passages I Highlighted in The Letters of J.R.R. Tolkien” by Ivan Vendrov
All quotes, unless otherwise marked, are Tolkien's words as printed in The Letters of J.R.R. Tolkien: Revised and Expanded Edition. All emphases mine.…
1 year, 3 months ago
“Parkinson’s Law and the Ideology of Statistics” by Benquo
The anonymous review of The Anti-Politics Machine published on Astral Codex Ten focuses on a case study of a World Bank intervention in Lesotho, and te…
1 year, 3 months ago
“Capital Ownership Will Not Prevent Human Disempowerment” by beren
Crossposted from my personal blog. I was inspired to cross-post this here given the discussion that this post on the role of capital in an AI future …
1 year, 3 months ago
“Activation space interpretability may be doomed” by bilalchughtai, Lucius Bushnaq
TL;DR: There may be a fundamental problem with interpretability work that attempts to understand neural networks by decomposing their individual acti…
1 year, 3 months ago
“What o3 Becomes by 2028” by Vladimir_Nesov
Funding for $150bn training systems just turned less speculative, with OpenAI o3 reaching 25% on FrontierMath, 70% on SWE-bench Verified, 2700 on Codeforce…
1 year, 3 months ago
“What Indicators Should We Watch to Disambiguate AGI Timelines?” by snewman
(Cross-post from https://amistrongeryet.substack.com/p/are-we-on-the-brink-of-agi, lightly edited for LessWrong. The original has a lengthier introdu…
1 year, 3 months ago
“How will we update about scheming?” by ryan_greenblatt
I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their creators such as by faking alignment). Recently,…
1 year, 3 months ago