What Could Possibly Go Wrong? Safety Analysis for AI Systems

Published 5 months, 2 weeks ago
Description

How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet, waiting to be found by system search engines. These attacks and others exploit the complexity of even seemingly secure AI systems.

In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI's CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis, or STPA, a hazard-analysis technique uniquely suited to handling the complexity of AI systems during assurance.
