Episode Details

Back to Episodes
Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities

Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities

Episode 155 Published 9 months ago
Description

Large language models present new security challenges, especially when they leverage external data sources through Retrieval Augmented Generation (RAG) architectures . This podcast explores the unique attack techniques that exploit these systems, including indirect prompt injection and RAG poisoning. We delve into how offensive testing methods like AI red teaming are crucial for identifying and addressing these critical vulnerabilities in the evolving AI landscape.

www.securitycareers.help/navigating-the-ai-frontier-a-cisos-perspective-on-securing-generative-ai/

www.hackernoob.tips/the-new-frontier-how-were-bending-generative-ai-to-our-will

 

Listen Now

Love PodBriefly?

If you like Podbriefly.com, please consider donating to support the ongoing development.

Support Us