Adam Marblestone – AI is missing something fundamental about the brain

Adam Marblestone has worked on brain-computer interfaces, quantum computing, formal mathematics, nanotech, and AI research. And he thinks AI is missing something fundamental about the brain.

Why are humans so much more sample efficient than AIs? How is the brain able to encode desires for things evolution has never seen before (and therefore could not have hard-wired into the genome)? What do human loss functions actually look like?

Adam walks me through some potential answers to these questions as we discuss what human learning can tell us about the future of AI.

Watch on YouTube; read the transcript.

Sponsors

* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today at gemini.google.com

* Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture

(00:22:20) – Amortized inference and what the genome actually stores

(00:42:42) – Model-based vs model-free RL in the brain

(00:50:31) – Is biological hardware a limitation or an advantage?

(01:03:59) – Why a map of the human brain is important

(01:23:28) – What value will automating math have?

(01:38:18) – Architecture of the brain

Further reading

Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs steering subsystem; referenced throughout the episode.

A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI.

Adam’s blog, and Convergent Research’s blog on essential technologies.

A Tutorial on Energy-Based Learning by Yann LeCun

What Does It Mean to Understand a Neural Network? - Kording & Lillicrap

E11 Bio and their brain connectomics approach

Sam Gershman on what dopamine is doing in the brain.

Gwern’s proposal on training models on the brain’s hidden states.

Relevant episodes: Ilya Sutskever, Richard Sutton, Andrej Karpathy

