Episode Details
"Responsible Scaling Policy v3" by Holden Karnofsky
Description
All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.
Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.
First, the big picture:
- I expect some people will be upset about the move away from a “hard commitments”/“binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.)
- I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]
---
Outline:
(05:32) How it started: the original goals of RSPs
(11:25) How it's going: the good and the bad
(11:51) A note on my general orientation toward this topic
(14:56) Goal 1: forcing functions for improved risk mitigations
(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard
(18:24) A mixed success/failure story: impact on information security
(20:42) ASL-4 and ASL-5 prep: the wrong incentives
(25:00) When forcing functions do and don't work well
(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)
(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)
(30:59) RSP v3's attempt to amplify the good and reduce the bad
(36:01) Do these benefits apply only to the most safety-oriented companies?
(37:40) A revised, but not overturned, vision for RSPs
(39:08) Q&A
(39:10) On the move away from implied unilateral commitments
(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?
(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?
(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?
(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?
(45:10) Why did you have to do this now? Couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?
[... 15 more sections]
---
First published:
February 24th, 2026
Source:
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsible-scaling-policy-v3
---
Narrated by TYPE III AUDIO