
OpenAI says it may ‘adjust’ its safety requirements if a rival lab releases ‘high-risk’ AI

Published 10 months, 1 week ago
Description

In an update to its Preparedness Framework — the internal framework OpenAI uses to decide whether its AI models are safe and what safeguards, if any, are needed during development and release — OpenAI said it may “adjust” its requirements if a rival AI lab releases a “high-risk” system without comparable safeguards.

