Episode Details

Anthropic built an AI that can supposedly break into anything. Then it forgot to lock its own door

Episode 729 · Published 5 days, 7 hours ago
Description

Anthropic has spent years building a reputation as the AI company that actually cares about safety.

Then, in the span of two weeks, it leaked an unannounced model, exposed its own source code, and accidentally handed hackers a blueprint of its most widely used product. The fix came within 24 hours. The blueprint can't be unlearned.

And the companies that trusted Claude Code with their deepest systems are still running on publicly documented defences.

If the most careful AI company couldn't prevent this, what does that mean for everyone else?

Tune in.

Daybreak is produced from the newsroom of The Ken, India’s first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.
