Grok, deepfakes and who should police AI


Published 1 month, 1 week ago
Description

What happens when AI gets it wrong? After a backlash over the misuse of Elon Musk’s AI tool Grok, new restrictions have been imposed on editing images of real people. Is this a sign that AI regulation is lagging, and who should be in charge – governments or Silicon Valley? This week, Danny and Katie are joined by AI computer scientist Kate Devlin from King’s College London to discuss why this moment could be a turning point for global AI rules.


Image: Getty


Hosted on Acast. See acast.com/privacy for more information.
