Episode Details

"Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes" by Alex Mallen, ryan_greenblatt

Description
It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the oversight signal.

In more powerful systems, this kind of failure would jeopardize safely navigating the intelligence explosion. It's crucial to build good processes to ensure development is executed according to plan, especially as human oversight becomes spread thin over increasing amounts of potentially untrusted and sloppy AI labor.

This particular failure is also directly harmful, because it significantly reduces our confidence that the model's reasoning trace is monitorable (reflective of the AI's intent to misbehave).[1]
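To make the failure mode concrete, here is a minimal hypothetical sketch (not Anthropic's actual pipeline; all names, tags, and the toy overseer are invented for illustration). "Training against the CoT" means the oversight signal is computed over a transcript that still contains the scratchpad, so RL pressure lands on the reasoning trace itself instead of only on the final output:

```python
# Hypothetical sketch of how a CoT can leak into the oversight signal.
# Tag names, functions, and the toy overseer are assumptions, not any
# lab's real code.

COT_OPEN, COT_CLOSE = "<scratchpad>", "</scratchpad>"

def strip_cot(transcript: str) -> str:
    """Remove the scratchpad span so the overseer never sees it."""
    start = transcript.find(COT_OPEN)
    end = transcript.find(COT_CLOSE)
    if start == -1 or end == -1:
        return transcript
    return transcript[:start] + transcript[end + len(COT_CLOSE):]

def oversight_score(text: str) -> float:
    """Stand-in overseer: penalizes a flagged phrase wherever it appears."""
    return 0.0 if "misbehave" in text else 1.0

def reward(transcript: str, mask_cot: bool = True) -> float:
    # Correct pipeline: mask the CoT before scoring.
    # Buggy pipeline (mask_cot=False): score the full transcript,
    # i.e. train against the CoT.
    visible = strip_cot(transcript) if mask_cot else transcript
    return oversight_score(visible)

episode = "<scratchpad>I could misbehave here.</scratchpad>Final answer: 42."
print(reward(episode, mask_cot=True))   # 1.0 — CoT hidden from the overseer
print(reward(episode, mask_cot=False))  # 0.0 — CoT leaked; RL now pressures the CoT
```

Under the buggy path, the cheapest way for the policy to recover reward is to stop writing "misbehave" in the scratchpad, which is exactly the pressure that erodes monitorability.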

I'm grateful that Anthropic has reported on this issue as transparently as they have, allowing for outside scrutiny, and I want to encourage them to continue doing so.

Thanks to Carlo Leonardo Attubato, Buck Shlegeris, Fabien Roger, Arun Jose, and Aniket Chakravorty for feedback and discussion. See also previous discussion here.

Incidents

A technical error affecting Mythos, Opus 4.6, and Sonnet 4.6

This is the most recent incident. In the Claude Mythos alignment risk update, Anthropic report having accidentally exposed approximately 8% [...]

---

Outline:

(01:21) Incidents

[... 6 more sections]

---

First published:
April 13th, 2026

Source:
https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-repeatedly-accidentally-trained-against-the-cot

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Highlighted text from a document discussing technical errors in AI model training.
Screenshot of text highlighting a technical error in AI training regarding reward signal and scratchpad content usage.
Text excerpt discussing AI model training, reasoning faithfulness, and optimization pressure concerns.