
“Finetuning Borges” by Linch


My newest hobby is fine-tuning a Chinese open-source LLM to generate Pierre Menard, Author of the Quixote (originally by Borges). The ambition isn’t to write a so-called “Borgesian” story “like” Pierre Menard, Author of the Quixote but to fully generate, token-by-token, Pierre Menard, Author of the Quixote.

Importantly, this can’t be a mere act of machine transcription, or even a matter of memorizing the story in the weights [to-do: attach paper]. No, the LLM has to fully generate a story that coincides, word for word, with the earlier Pierre Menard, Author of the Quixote.

Initially, I attempted to make the conditions viable for the model to write Pierre Menard, Author of the Quixote afresh. One strategy proposed on X.com is to situate Borges in Kimi K2.5-Thinking by putting the entire life history and literary influences of Borges into Kimi's system prompt. Unfortunately, I ran into the problem that the 256K-token context window is a tad too small, by about five orders of magnitude or so.
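The back-of-envelope arithmetic behind that shortfall can be sketched as follows. The token count for "an entire life history and literary influences" is my own illustrative assumption, chosen only so the gap matches the five orders of magnitude the post describes:

```python
import math

# Kimi K2.5-Thinking's advertised context window, in tokens (from the post).
context_window = 256_000

# Assumed, purely illustrative: tokens needed to encode Borges's full life
# history and literary influences. Not a measured figure.
borges_life_tokens = 2.56e10

shortfall = borges_life_tokens / context_window
orders_of_magnitude = math.log10(shortfall)

print(f"shortfall factor: {shortfall:,.0f}")           # 100,000
print(f"orders of magnitude: {orders_of_magnitude:.0f}")  # 5
```

Under these assumed numbers the prompt would need roughly 100,000 times more room than the window provides, i.e. the "five orders of magnitude or so" of the post.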

I then considered doing more advanced fine-tuning to imitate Borges’ intellectual influences and life trajectory. Start with machine unlearning to erase everything post-1939, followed by sparse autoencoders to isolate the “Jorge Luis Borges” feature in Kimi's latent space [...]
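To make the sparse-autoencoder step concrete, here is a minimal sketch of an SAE forward pass over residual-stream activations. Every name, width, and weight here is hypothetical; isolating an actual "Jorge Luis Borges" feature would require Kimi's real activations and a full training run, not this toy:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512  # assumed: model width and overcomplete dictionary size

# Randomly initialized encoder/decoder weights (a real SAE would train these).
W_enc = rng.normal(0, 0.02, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.02, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features, then reconstruct them."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU -> nonnegative feature activations
    x_hat = f @ W_dec + b_dec               # linear reconstruction
    return f, x_hat

x = rng.normal(size=(8, d_model))  # a stand-in batch of residual-stream activations
f, x_hat = sae_forward(x)

# Training minimizes reconstruction error plus an L1 sparsity penalty on f;
# the hoped-for "Borges" feature would be one dictionary column whose
# activation tracks Borges-related inputs.
loss = np.mean((x - x_hat) ** 2) + 1e-3 * np.abs(f).mean()
print(f.shape, x_hat.shape)
```

The sparsity penalty is what encourages each latent to fire on a narrow, interpretable slice of inputs, which is the property the "isolate the Borges feature" step is banking on.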

---

First published:
April 20th, 2026

Source:
https://www.lesswrong.com/posts/GQXymCGL3D8mZD5o4/finetuning-borges

---

Narrated by TYPE III AUDIO.

---

Images from the article:

Wooden desk with lamp and book surrounded by towering server racks.

