Podcast Episodes

“Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall” by Vladimir_Nesov

It'll take until ~2050 to repeat the level of scaling that pretraining compute is experiencing this decade, as increasing funding can't sustain the …

9 months, 3 weeks ago

“Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis” by jeanne_, eeeee

In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean has b…

10 months ago

[Linkpost] “Jaan Tallinn’s 2024 Philanthropy Overview” by jaan

This is a link post. to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with the 2024 results.

in 2024 my donations fun…

10 months ago

“Impact, agency, and taste” by benkuhn

I’ve been thinking recently about what sets apart the people who’ve done the best work at Anthropic.

You might think that the main thing that makes …

10 months ago

[Linkpost] “To Understand History, Keep Former Population Distributions In Mind” by Arjun Panickssery

This is a link post. Guillaume Blanc has a piece in Works in Progress (I assume based on his paper) about how France's fertility declined earlier tha…

10 months, 1 week ago

“AI-enabled coups: a small group could use AI to seize power” by Tom Davidson, Lukas Finnveden, rosehadshar

We’ve written a new report on the threat of AI-enabled coups.

I think this is a very serious risk – comparable in importance to AI takeover but muc…

10 months, 1 week ago

“Accountability Sinks” by Martin Sustrik

Back in the 1990s, ground squirrels were briefly fashionable pets, but their popularity came to an abrupt end after an incident at Schiphol Airport …

10 months, 1 week ago

“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

Subtitle: Bad for loss of control risks, bad for concentration of power risks

I’ve had this sitting in my drafts for the last year. I wish I’d been …

10 months, 1 week ago

“Why Should I Assume CCP AGI is Worse Than USG AGI?” by Tomás B.

Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me accept the Dario/Leopold/Altman frame that AGI …

10 months, 1 week ago

“Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI” by Kaj_Sotala

Introduction

Writing this post puts me in a weird epistemic position. I simultaneously believe that:

The reasoning failures that I'll discuss are st…

10 months, 1 week ago
