Unsupervised Learning x Latent Space Crossover Special
Description
If you’re in SF: Join us for the Claude Plays Pokemon hackathon this Sunday!
If you’re not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!
Unsupervised Learning is a podcast that interviews the sharpest minds in AI about what's real today, what will be real in the future, and what it means for businesses and the world, helping builders, researchers, and founders deconstruct and understand the biggest breakthroughs.
Top guests: Noam Shazeer, Bob McGrew, Noam Brown, Dylan Patel, Percy Liang, David Luan
Full Episode on Their YouTube
Timestamps
* 00:00 Introduction and Excitement for Collaboration
* 00:27 Reflecting on Surprises in AI Over the Past Year
* 01:44 Open Source Models and Their Adoption
* 06:01 The Rise of GPT Wrappers
* 06:55 AI Builders and Low-Code Platforms
* 09:35 Overhyped and Underhyped AI Trends
* 22:17 Product Market Fit in AI
* 28:23 Google's Current Momentum
* 28:33 Customer Support and AI
* 29:54 AI's Impact on Cost and Growth
* 31:05 Voice AI and Scheduling
* 32:59 Emerging AI Applications
* 34:12 Education and AI
* 36:34 Defensibility in AI Applications
* 40:10 Infrastructure and AI
* 47:08 Challenges and Future of AI
* 52:15 Quick Fire Round and Closing Remarks
Transcript
[00:00:00] Introduction and Podcast Overview
[00:00:00] Jacob: Well, thanks so much for doing this, guys. I feel like we've been excited to do a collab for a while.
[00:00:13] swyx: I love crossovers. Yeah, this is great. Like the ultimate meta: podcasters talking to other podcasters. Podcasts all the way up.
[00:00:21] Jacob: I figured we'd have a pretty free-ranging conversation today, but I brought a few conversation starters to kick us off.
[00:00:27] Reflecting on AI Surprises and Trends
[00:00:27] Jacob: And so I figured one interesting place to start: obviously it feels like this world is changing every few months. As you guys reflect on the past year, what surprised you the most?
[00:00:36] Alessio: I think definitely reasoning models. Well, there's what surprised us in a good way,
[00:00:44] and maybe in a bad way. In a good way, I would say reasoning models, and the release of them right after the NeurIPS talk by Ilya about the end of pre-training scaling. There was maybe a little "it's so over" and then "we're so back" in such a short period. It was really [00:01:00] fortuitous
[00:01:00] Jacob: timing, though, right?
[00:01:01] As pre-training died. I mean, obviously within the labs they knew pre-training was dying and had to find something, but from the outside it felt like one flowed right into the other.
[00:01:09] Alessio: Yeah, exactly. So that was a good surprise.
[00:01:12] swyx: If you wanna make that comment about timing, I would say it's suspiciously neat, because we know that Strawberry was being worked on for like two years-ish.
[00:01:20] And we know exactly when Noam joined OpenAI, and that was obviously a big strategic bet by OpenAI. So for it to transition so nicely, when pre-training is kind of tapped out, into "now inference time is the new scaling law" is very convenient. Like, if there were an Illuminati, this would be what they planned.