Episode Details

HS094: How Risky Is Your Organization’s AI Strategy?

Published 1 year ago
Description
AI Large Language Models (LLMs) can be used to generate output that the creators and users of those models didn't intend, such as harassment, bomb-making instructions, or assistance with cybercrime. Researchers have created the HarmBench framework to measure how easily an AI can be weaponized. Recently these researchers trumpeted the finding...