Episode Details

LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)

Episode 261 · Published 1 year, 7 months ago
Description

In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm? Let’s separate the genius from the guesswork in this breakdown of AI’s creativity problem.

TL;DR

LLM generalization without hallucinations: is that possible?

References

https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf

https://www.lamini.ai/blog/lamini-memory-tuning