Podcast Episodes

[HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi

Support ongoing human narrations of LessWrong's curated posts:
www.patreon.com/LWCurated

Source:
https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/scale…

2 years ago

[HUMAN VOICE] "Acting Wholesomely" by OwenCB

Source:
https://www.lesswrong.com/posts/Cb7oajdrA5DsHCqKd/actin…

2 years ago

The Story of “I Have Been A Good Bing”

Rationality is Systematized Winning, so rationalists should win. We’ve tried saving the world from AI, but that's really hard and we’ve had … mixed r…

2 years ago

The Best Tacit Knowledge Videos on Every Subject

TL;DR

Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Ta…

2 years ago

[HUMAN VOICE] "My Clients, The Liars" by ymeskhout

Source:
https://www.lesswrong.com/posts/h99tRkpQGxwtb9Dpv/my-cl…

2 years ago

[HUMAN VOICE] "Deep atheism and AI risk" by Joe Carlsmith

Source:
https://www.lesswrong.com/posts/sJPbmm8Gd34vGYrKd/deep-…

2 years ago

[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon

Source:
https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/cfar-…

2 years, 1 month ago

[HUMAN VOICE] "Speaking to Congressional staffers about AI risk" by Akash, hath

Source:
https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speak…

2 years, 1 month ago

Many arguments for AI x-risk are wrong

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The following is a lightly edited version of a memo I wrote for…

2 years, 1 month ago

Tips for Empirical Alignment Research

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. TLDR: I’ve collected some tips for research that I’ve given to …

2 years, 1 month ago

