World Models & General Intuition: Khosla's largest bet since LLMs & OpenAI
Description
From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs.
We sat down with Pim to dig into: why game highlights are “episodic memory for simulation” (and how Medal’s privacy-first action labels became a world-model goldmine https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features); what it takes to build fully vision-based agents that see only frames and output actions in real time; how General Intuition transfers from games to real-world video and then into robotics; why world models and LLMs are complementary rather than rivals; what founders with proprietary datasets should know before selling or licensing to labs; and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world.
We discuss:
* How Medal’s 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models
* Building fully vision-based agents that see only frames and output actions, yet play like (and sometimes better than) humans
* Transferring from arcade-style games to realistic games to real-world video using the same perception–action recipe
* Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. “just” pretty video generation
* Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players
* Pim’s path from RuneScape private servers, Tourette’s, and reverse engineering to leading a frontier world-model lab
* How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent
* GI’s first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a “frames in, actions out” API
* Using Medal clips as “episodic memory for simulation” to move from imitation learning to RL via world models and negative events
* The 2030 vision: spatial–temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world
—
Pim
* LinkedIn: https://www.linkedin.com/in/pimdw/
Where to find Latent Space
* X: https://x.com/latentspacepod
Timestamps
00:00:00 Introduction and Medal's Gaming Data Advantage
00:02:08 Exclusive Demo: Vision-Based Gaming Agents
00:06:17 Action