Podcast Episodes

"Announcing Dialogues" by Ben Pace

As of today, everyone is able to create a new type of content on LessWrong: Dialogues.

In contrast with posts, which are for monologues, and comment s…

2 years, 4 months ago

"Thomas Kwa's MIRI research experience" by Thomas Kwa and others

Moderator note: the following is a dialogue using LessWrong’s new dialogue feature. The exchange is not completed: new replies might be added continu…

2 years, 4 months ago

"EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem" by Elizabeth

Effective altruism prides itself on truthseeking. That pride is justified in the sense that EA is better at truthseeking than most members of its ref…

2 years, 4 months ago

"How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions" by Jan Brauner et al.

Large language models (LLMs) can "lie", which we define as outputting false statements despite "knowing" the truth in a demonstrable sense. LLMs migh…

2 years, 4 months ago

"The Lighthaven Campus is open for bookings" by Habryka

Lightcone Infrastructure (the organization that grew from and houses the LessWrong team) has just finished renovating a 7-building physical campus th…

2 years, 4 months ago

"'Diamondoid bacteria' nanobots: deadly threat or dead-end? A nanotech investigation" by titotal

A lot of people are highly concerned that a malevolent AI or insane human will, in the near future, set out to destroy humanity. If such an entity wa…

2 years, 4 months ago

"The King and the Golem" by Richard Ngo

This is a linkpost for https://narrativeark.substack.com/p/the-king-and-the-golem

Long ago there was a mighty king who had everything in the world tha…

2 years, 5 months ago

"Sparse Autoencoders Find Highly Interpretable Directions in Language Models" by Logan Riggs et al.

This is a linkpost for Sparse Autoencoders Find Highly Interpretable Directions in Language Models

We use a scalable and unsupervised method called Sp…

2 years, 5 months ago

"Inside Views, Impostor Syndrome, and the Great LARP" by John Wentworth

Epistemic status: model which I find sometimes useful, and which emphasizes some true things about many parts of the world which common alternative m…

2 years, 5 months ago

"There should be more AI safety orgs" by Marius Hobbhahn

I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other p…

2 years, 5 months ago

