Podcast Episodes

"Humans provide an untapped wealth of evidence about alignment" by TurnTrout & Quintin Pope

https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about#fnref7a5ti4623qb

Crossposted from the AI Alig…

3 years, 6 months ago

"Changing the world through slack & hobbies" by Steven Byrnes

https://www.lesswrong.com/posts/DdDt5NXkfuxAnAvGJ/changing-the-world-through-slack-and-hobbies


Introduction

In EA orthodoxy, if you're really seri…

3 years, 6 months ago

"«Boundaries», Part 1: a key missing concept from utility theory" by Andrew Critch

https://www.lesswrong.com/posts/8oMF8Lv5jiGaQSFvo/boundaries-part-1-a-key-missing-concept-from-utility-theory

Crossposted from the AI Alignment For…

3 years, 6 months ago

"ITT-passing and civility are good; "charity" is bad; steelmanning is niche" by Rob Bensinger

https://www.lesswrong.com/posts/MdZyLnLHuaHrCskjy/itt-passing-and-civility-are-good-charity-is-bad

I often object to claims like "charity/steelmanni…

3 years, 7 months ago

"What should you change in response to an "emergency"? And AI risk" by Anna Salamon

https://www.lesswrong.com/posts/mmHctwkKjpvaQdC3c/what-should-you-change-in-response-to-an-emergency-and-ai

Related to: Slack gives you the ability…

3 years, 7 months ago

"On how various plans miss the hard bits of the alignment challenge" by Nate Soares

https://www.lesswrong.com/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment

Crossposted from the AI Alignment Forum…

3 years, 7 months ago

"Humans are very reliable agents" by Alyssa Vance

https://www.lesswrong.com/posts/28zsuPaJpKAGSX4zq/humans-are-very-reliable-agents

Over the last few years, deep-learning-based AI has progressed ex…

3 years, 7 months ago

"Looking back on my alignment PhD" by TurnTrout

https://www.lesswrong.com/posts/2GxhAyn9aHqukap2S/looking-back-on-my-alignment-phd

The funny thing about long periods of time is that they do, event…

3 years, 7 months ago

"It’s Probably Not Lithium" by Natália Coelho Mendonça

https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-probably-not-lithium

A Chemical Hunger (a), a series by the authors of the blog Slime Mold Ti…

3 years, 7 months ago

"What Are You Tracking In Your Head?" by John Wentworth

https://www.lesswrong.com/posts/bhLxWTkRc8GXunFcB/what-are-you-tracking-in-your-head

A large chunk - plausibly the majority - of real-world experti…

3 years, 7 months ago
