🎙️ EP 91: The Day AI Forgot How to Behave… and Then Learned Again

Published 5 months, 3 weeks ago
Description

Today’s story sounds wild: shrinking an AI model can make it dangerous… but researchers at UC Riverside found a way to retrain it without the original data. Also, Google quietly dropped a new model that can run full RAG search on your phone, even offline.

We’ll talk about:

  • How open-source AI models become risky when trimmed down
  • The fix that helps models "remember" how to stay safe — no filters needed
  • Google's new EmbeddingGemma: a powerful multilingual embedding model under 500M params
  • Why mobile RAG is the next frontier, and what it means for private, local AI apps
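The retrieval half of mobile RAG boils down to embedding a query and ranking stored documents by similarity. Here is a minimal sketch of that step, using a toy `embed()` stand-in (a hypothetical placeholder, not EmbeddingGemma itself) so the idea runs anywhere; a real on-device app would swap in the actual embedding model:

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy stand-in: hashes characters into a small unit vector.
    # A real app would call an on-device embedding model here.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Build the "index": embed every document once, up front.
docs = [
    "shrinking models can weaken safety behavior",
    "retraining restores safe responses without original data",
    "mobile RAG runs semantic search fully offline",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[tuple[str, list[float]]]:
    # Rank stored documents by similarity to the query embedding.
    q = embed(query)
    return sorted(index, key=lambda pair: -cosine(q, pair[1]))[:k]
```

Because both the index and the model live on the device, nothing leaves the phone; that is what makes this pattern attractive for private, local AI apps.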

Keywords: EmbeddingGemma, RAG, AI safety, on-device AI, UC Riverside, Google Gemma, open-source AI, Claude, semantic search, AI Fire

Links:

  1. Newsletter: Sign up for our FREE daily newsletter.
  2. Our Community: Get 3-level AI tutorials across industries.
  3. Join AI Fire Academy: 500+ advanced AI workflows ($14,500+ Value)

Our Socials:

  1. Facebook Group: Join 253K+ AI builders
  2. X (Twitter): Follow us for daily AI drops
  3. YouTube: Watch AI walkthroughs & tutorials