Podcast Episodes

"How 'Discovering Latent Knowledge in Language Models Without Supervision' Fits Into a Broader Alignment Scheme" by Collin

https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without

Crossposted from the AI Alignment Forum.…

3 years, 1 month ago

"Models Don't 'Get Reward'" by Sam Ringer

https://www.lesswrong.com/posts/TWorNr22hhYegE4RT/models-don-t-get-reward

Crossposted from the AI Alignment Forum. May contain more technical jargon t…

3 years, 1 month ago

"The Feeling of Idea Scarcity" by John Wentworth

https://www.lesswrong.com/posts/mfPHTWsFhzmcXw8ta/the-feeling-of-idea-scarcity

Here’s a story you may recognize. There's a bright up-and-coming young …

3 years, 1 month ago

"The next decades might be wild" by Marius Hobbhahn

https://www.lesswrong.com/posts/qRtD4WqKRYEtT5pi3/the-next-decades-might-be-wild

Crossposted from the AI Alignment Forum. May contain more technical j…

3 years, 2 months ago

"Lessons learned from talking to >100 academics about AI safety" by Marius Hobbhahn

https://www.lesswrong.com/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics

Crossposted from the AI Alignment Forum. …

3 years, 3 months ago

"How my team at Lightcone sometimes gets stuff done" by jacobjacob

https://www.lesswrong.com/posts/6LzKRP88mhL9NKNrS/how-my-team-at-lightcone-sometimes-gets-stuff-done

Disclaimer: I originally wrote this as a private …

3 years, 3 months ago

"Decision theory does not imply that we get to have nice things" by So8res

https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice

Crossposted from the AI Alignment Forum. May…

3 years, 3 months ago

"What 2026 looks like" by Daniel Kokotajlo

https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2022

Crossposted from the AI Alignment Forum. May contain more technical jargon…

3 years, 3 months ago

"Counterarguments to the basic AI x-risk case"

3 years, 3 months ago

"Introduction to abstract entropy" by Alex Altair

https://www.lesswrong.com/posts/REA49tL5jsh69X3aM/introduction-to-abstract-entropy#fnrefpi8b39u5hd7

This post, and much of the following sequence, was…

3 years, 3 months ago

