Adam Marblestone – AI is missing something fundamental about the brain

12/30/2025 · 1 hr 50 min

Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google DeepMind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.

In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question.

Watch on YouTube; read the transcript.

Sponsors

* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today gemini.google.com

* Labelbox helps you train agents to do economically valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps

(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture

(00:22:20) – Amortized inference and what the genome actually stores

(00:42:42) – Model-based vs model-free RL in the brain

(00:50:31) – Is biological hardware a limitation or an advantage?

(01:03:59) – Why a map of the human brain is important

(01:23:28) – What value will automating math have?

(01:38:18) – Architecture of the brain

Further reading

Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs. steering subsystems; referenced throughout the episode.

A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI

Adam’s blog, and Convergent Research’s blog on essential technologies.

A Tutorial on Energy-Based Learning by Yann LeCun

What Does It Mean to Understand a Neural Network? - Kording & Lillicrap

E11 Bio and their brain connectomics approach

Sam Gershman on what dopamine is doing in the brain

Gwern’s proposal on training models on the brain’s hidden states

Relevant episodes: Ilya Sutskever, Richard Sutton, Andrej Karpathy

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript preview

First 90 seconds
  1. Dwarkesh Patel · Host · 0:00

    The big million-dollar question that I have that, um, I've been trying to get the answer to through all these interviews with AI researchers: How does the brain do it, right? Like, we're throwing way more data at these LLMs, and they still have a small fraction of the total capabilities that a human does. So what's going on?

  2. Adam Marblestone · Guest · 0:15

    Yeah. I mean, this might be the quadrillion-dollar question- (laughs) ... or something like that. It's- it's- it's arguab- you could make an argument this is the most important, you know, question in science. I don't claim to know the answer. I- I also don't really think that the answer will necessarily come even from a lot of smart people thinking about it as much as they are. I- my- my overall, like, meta-level take is that we have to empower the field of neuroscience to just make neuroscience a- a more powerful, uh, field, technologically and otherwise, to actually be able to crack a question like this. But maybe the- the way that we would think about this now with, like, modern AI, neural nets, deep learning, is that there are sort of these- these cer- certain key components of that. There's the architecture. Um, there's maybe hyperparameters of the architecture. How many layers do you have or sort of properties of that architecture? There is the learning algorithm itself. How do you train it? You know, backprop, gradient descent, um, is it something else? There is how is it initialized, okay? So if we take the learning part of the system, it still may have some initialization of- of the weights. Um, and then there are also cost functions. There's like, what is it being trained to do?

  3. Dwarkesh Patel · Host · 1:27

    Yeah.

  4. Adam Marblestone · Guest · 1:28

    What's the reward signal? What are the loss functions?
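The components Adam enumerates here — architecture, hyperparameters, learning algorithm, initialization, and cost function — map directly onto any standard deep-learning training setup. A minimal sketch in plain Python (all names illustrative, not from the episode; a one-parameter linear model stands in for a real architecture):

```python
def predict(w, x):
    # "Architecture": here, a one-parameter linear model.
    return w * x

def loss(w, data):
    # "Cost function": mean squared error over (x, y) pairs.
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Analytic gradient of the loss with respect to w.
    return sum(2 * (predict(w, x) - y) * x for x, y in data) / len(data)

def train(data, w0=0.0, lr=0.1, steps=100):
    # "Initialization" (w0) plus the "learning algorithm"
    # (gradient descent); lr and steps are "hyperparameters".
    w = w0
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x
w = train(data)
```

Adam’s point in the episode is that AI research has poured effort into the first four ingredients, while the brain’s distinctive contribution may live in the last one: what the system is trained to do.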
