
Imperceptible NLP Attacks

Published 4 years, 2 months ago
Description

Nicholas Boucher is a PhD student at the University of Cambridge, where his research focuses on security, including topics like homomorphic encryption, voting systems, and adversarial machine learning. He is the lead author of a fascinating new paper, "Bad Characters: Imperceptible NLP Attacks", which provides a taxonomy of attacks against text-based NLP models based on Unicode and other encoding systems.
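The core idea behind this class of attack, that two strings can render identically to a human reader while differing at the encoding level, can be sketched in a few lines. This is a hypothetical illustration of the general technique, not code from the paper:

```python
# Two classic "imperceptible" Unicode perturbations: an invisible
# zero-width character and a look-alike (homoglyph) substitution.
# Both strings typically display the same as "bank" but compare unequal.

visible = "bank"
invisible = "ba\u200bnk"    # U+200B ZERO WIDTH SPACE inserted mid-word
homoglyph = "b\u0430nk"     # U+0430 CYRILLIC SMALL LETTER A replaces Latin 'a'

print(visible == invisible)            # False: bytes differ despite identical rendering
print(visible == homoglyph)            # False
print(len(visible), len(invisible))    # 4 5: the hidden character adds length
```

A model (or a keyword filter) that tokenizes on the raw code points sees entirely different inputs, which is what makes such perturbations effective against NLP pipelines.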

Download a FREE copy of our recent NLP Industry Survey Results: https://gradientflow.com/2021nlpsurvey/

Subscribe: Apple, Android, Spotify, Stitcher, Google, AntennaPod, RSS.

Detailed show notes can be found on The Data Exchange web site.

Subscribe to The Gradient Flow Newsletter.
