Episode Details

Back to Episodes

Prompts gone rogue. [Research Saturday]

Season 8 Episode 341 Published 1 year, 6 months ago
Description

Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.

This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.

The research can be found here:

Learn more about your ad choices. Visit megaphone.fm/adchoices

Listen Now

Love PodBriefly?

If you like Podbriefly.com, please consider donating to support the ongoing development.

Support Us