Top Professor Condemns AGI Development: “It’s Frankly Evil” — Geoffrey Miller


Geoffrey Miller is an evolutionary psychologist at the University of New Mexico, bestselling author, and one of the world's leading experts on signaling theory and human sexual selection. His book "Mate" was hugely influential for me personally during my dating years, so I was thrilled to finally get him on the show.

In this episode, Geoffrey drops a bombshell 50% P(Doom) assessment, coming from someone who wrote foundational papers on neural networks and genetic algorithms back in the '90s before pivoting to study human mating behavior for 30 years.

What makes Geoffrey's doom perspective unique is that he thinks both inner and outer alignment might be unsolvable in principle. He's also surprisingly bearish on AI's current value, arguing it hasn't been net positive for society yet despite OpenAI's $14 billion in revenue.

We cover his fascinating intellectual journey from early AI researcher to pickup artist advisor to AI doomer, why Asperger's people make better psychology researchers, the polyamory scene in rationalist circles, and his surprisingly optimistic take on cooperating with China. Geoffrey brings a deeply humanist perspective. He genuinely loves human civilization as it is and sees no reason to rush toward our potential replacement.

* 00:00:00 - Introducing Prof. Geoffrey Miller

* 00:01:46 - Geoffrey’s intellectual career arc: AI → evolutionary psychology → back to AI

* 00:03:43 - Signaling theory as the main theme driving his research

* 00:05:04 - Why evolutionary psychology is legitimate science, not just speculation

* 00:08:18 - Being a professor in the AI age and making courses "AI-proof"

* 00:09:12 - Getting tenure in 2008 and using academic freedom responsibly

* 00:11:01 - Student cheating epidemic with AI tools, going "fully medieval"

* 00:13:28 - Should professors use AI for grading? (Geoffrey says no, would be unethical)

* 00:23:06 - Coming out as Aspie and neurodiversity in academia

* 00:29:15 - What is sex and its role in evolution (error correction vs. variation)

* 00:34:06 - Sexual selection as an evolutionary "supercharger"

* 00:37:25 - Dating advice, pickup artistry, and evolutionary psychology insights

* 00:45:04 - Polyamory: Geoffrey’s experience and the rationalist connection

* 00:50:96 - Why rationalists tend to be poly vs. Chesterton's fence on monogamy

* 00:54:07 - The "primal" lifestyle and evolutionary medicine

* 00:56:59 - How Iain M. Banks' Culture novels shaped Geoffrey’s AI thinking

* 01:05:26 - What’s Your P(Doom)™

* 01:08:04 - Main doom scenario: AI arms race leading to unaligned ASI

* 01:14:10 - Bad actors problem: antinatalists, religious extremists, eco-alarmists

* 01:21:13 - Inner vs. outer alignment - both may be unsolvable in principle

* 01:23:56 - "What's the hurry?" - Why rush when alignment might take millennia?

* 01:28:17 - Disagreement on whether AI has been net positive so far

* 01:35:13 - Why AI won't magically solve longevity or other major problems

* 01:37:56 - Unemployment doom and loss of human autonomy

* 01:40:13 - Cosmic perspective: We could be "the baddies" spreading unaligned AI

* 01:44:93 - "Humanity is doing incredibly well" - no need for Hail Mary AI

* 01:49:01 - Why ASI might be bad at solving alignment (lacks human cultural wisdom)

* 01:52:06 - China cooperation: "Whoever builds ASI first loses"


Published 1 month ago





