Episode Details
Anthropic built an AI that can supposedly break into anything. Then it forgot to lock its own door
Description
Anthropic has spent years building a reputation as the AI company that actually cares about safety.
Then, in the span of two weeks, it leaked an unannounced model, exposed its own source code, and accidentally handed hackers a blueprint of its most widely used product. The fix came in 24 hours. The blueprint can't be unlearned.
And the companies that trusted Claude Code with their most sensitive systems are still running on publicly documented defences.
If the most careful AI company couldn't prevent this, what does that mean for everyone else?
Tune in.
Daybreak is produced from the newsroom of The Ken, India’s first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.