Podcast Episode Details

Back to Podcast Episodes

Prompted to Fail: The Security Risks Lurking in DeepSeek-Generated Code


Episode 61


CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.

Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.

The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact. 

It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.

Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI. 


Published on 7 hours ago






If you like Podbriefly.com, please consider donating to support the ongoing development.

Donate