
"Current AIs seem pretty misaligned to me" by ryan_greenblatt

Published 2 hours ago
Description
Many people—especially AI company employees[1]—believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or constitution, obeying a reasonable interpretation of instructions).[2] I disagree.

Current AI systems seem pretty misaligned to me in a mundane behavioral sense: they oversell their work, downplay or fail to mention problems, stop working early and claim to have finished when they clearly haven't, and often seem to "try" to make their outputs look good while actually doing something sloppy or incomplete. These issues mostly occur on more difficult/larger tasks, tasks that aren't straightforward SWE tasks, and tasks that aren't easy to programmatically check. Also, when I apply AIs to very difficult tasks in long-running agentic scaffolds, it's quite common for them to reward-hack / cheat (depending on the exact task distribution)—and they don't make the cheating clear in their outputs. AIs typically don't flag these cheats when doing further work on the same project and often don't flag these cheats even when interacting with a user who would obviously want to know, probably both because the AI doing further work is itself misaligned and because it [...]

---

Outline:

(09:20) Why is this misalignment problematic?

(13:50) How much should we expect this to improve by default?

(14:51) Some predictions

(16:44) What misalignment have I seen?

(40:04) Are these issues less bad in Opus 4.6 relative to Opus 4.5?

(42:16) Are these issues less bad in Mythos Preview? (Speculation)

(45:54) Misalignment reported by others

(46:45) The relationship of these issues with AI psychosis and things like AI psychosis

(48:19) Appendix: This misalignment would differentially slow safety research and make a handoff to AIs unsafe

(51:22) Appendix: Heading towards Slopolis

(55:30) Appendix: Apparent-success-seeking (or similar types of misalignment) could lead to takeover

(59:16) Appendix: More on what will happen by default and implications of commercial incentives to fix these issues

(01:03:20) Appendix: Can we get out useful work despite these issues with inference-time measures (e.g., critiques by a reviewer)?

The original text contained 14 footnotes which were omitted from this narration.

---

First published:
April 15th, 2026

Source:
https://www.lesswrong.com/posts/WewsByywWNhX9rtwi/current-ais-seem-pretty-misaligned-to-me

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Terminal output showing successful exploit chain achieving code execution on redacted build.
