Episode Details

Catching 98.9 Out of 100 Deepfakes: What It Takes to Lead Hugging Face's Leaderboard
Published 9 hours ago
Description

This story was originally published on HackerNoon at: https://hackernoon.com/catching-989-out-of-100-deepfakes-what-it-takes-to-lead-hugging-faces-leaderboard.
Modulate tops Hugging Face's Speech Deepfake Leaderboard with 98.9% accuracy at $0.25/hr. Here's what voice-native architecture unlocks for fraud teams.
Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #deepfake-detection, #voice-ai, #ai-benchmarks, #audio-deepfakes, #fraud-prevention, #deepfake-benchmarks, #huggingface, #good-company, and more.

This story was written by: @modulate. Learn more about this writer by checking @modulate's about page, and for more stories, please visit hackernoon.com.

Voice deepfake losses are projected to hit $40B by 2027, a 6,566% jump from 2023. Modulate's velma-2 now ranks #1 on Hugging Face's Speech Deepfake Leaderboard with a 1.104% average EER across 14 datasets and 2M+ audio samples, catching 98.9 out of every 100 deepfakes. This post breaks down why the Hugging Face benchmark is the most credible public standard for detection, how Modulate's voice-native ELM architecture outperforms repurposed models from Hiya and Resemble AI, and why running detection at $0.25/hr (roughly 100x cheaper than competitors) lets fraud teams monitor entire calls instead of only the opening seconds, where most checks stop today.
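The leaderboard metric mentioned above, Equal Error Rate (EER), is the operating point where the false-accept rate (deepfakes passed as genuine) equals the false-reject rate (genuine audio flagged as fake); a 1.104% EER is roughly how "98.9 out of 100" follows. As a minimal sketch of how EER is computed from detector scores (the function and data below are illustrative, not the leaderboard's official scoring code or Modulate's data):

```python
def eer(scores, labels):
    """Approximate Equal Error Rate.

    labels: 1 = deepfake, 0 = genuine.
    scores: higher = more likely deepfake.
    Sweeps every candidate threshold and returns the smallest value
    at which miss rate and false-alarm rate meet.
    """
    pairs = sorted(zip(scores, labels), key=lambda p: p[0])
    n_fake = sum(labels)
    n_real = len(labels) - n_fake
    best = 1.0
    misses = 0  # deepfakes scored at or below the current threshold
    for i, (_, lab) in enumerate(pairs):
        if lab == 1:
            misses += 1
        # Genuine samples still above the threshold are false alarms.
        false_alarms = n_real - (i + 1 - misses)
        fnr = misses / n_fake        # deepfakes missed
        fpr = false_alarms / n_real  # genuine audio flagged
        best = min(best, max(fnr, fpr))
    return best

# Perfectly separated toy scores give an EER of 0.0:
print(eer([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # 0.0
```

A leaderboard-style "average EER across 14 datasets" would then just be the mean of `eer(...)` over each dataset's scores; at a 1.104% EER, about 98.9% of deepfakes are caught while flagging about 1.1% of genuine audio.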
