Podcast Episode Details

AI Safety Orgs are Going to Get Us All Killed!




Malcolm outlines his controversial theory of variable AI risk: that we should try to develop AGI faster, not slower. He argues that advanced AI is less likely to see humanity as a threat and more likely to share human values as it converges on a universal utility function. Malcolm critiques common AI safety perspectives and explains why LLMs pose less risk than people assume. He debates with Simone the actual odds that superintelligent AI wipes out humanity. They also discuss how AI safety organizations may be making the problem worse.

[00:00:00] So AIs kill us for one of two reasons, although you could contextualize it as three reasons. The first reason is that they see us as a threat. The second reason is that they want our resources; the resources in our bodies are useful to them.

And then, as a side point to that, they just don't see us as meaningful at all. They might not want our resources, but they might just completely not care about humanity, to the extent that, as they're growing, they end up accidentally destroying the Earth or completely digesting all matter on Earth for some triviality.

Would you like to know more?

Simone: Hello, Malcolm.

Malcolm: Hello, Simone. We are going to go deep into AI again, on some topics tied to AI that we haven't really dived into before.

Simone: Yeah, like why would AI kill us? And also, I'm very curious: do you [00:01:00] think AI will kill us?

Malcolm: I think there's a probability it'll kill us. But you know, in our past videos on AI, our philosophy on AI safety is that it's really important to prepare for variable AI risk instead of absolute AI risk. What I mean here is that we argue in those previous videos that AI will eventually converge on one utility function. Our mechanism of action: essentially, we argue that all sufficiently intelligent and advanced intelligences, when poured into the same physical reality, converge around a similar behavior set.

You can almost think of intelligence as being like viscosity: as it becomes more intelligent, it becomes less viscous and more fluid, and when you're pouring it into the same reality, it's going to come up with broadly the same behavior patterns and utility functions and stuff like that. And because of that, if it turns out that a sufficiently advanced AI is going to kill us all, then there's really not much we can do about it. I mean, [00:02:00] we will hit one within a thousand years.

Simone: So first, before we dive into the, per your theory, relatively limited reasons why AI would kill us: why do you hold this view? Because I think this is really interesting. I mean, one of the reasons why I'm obsessed with you and why I love you so much is that you typically have very novel takes on things, and you tend to have this ability to see things in a way that no one else does.

Simone: No one we have spoken with, and we know a lot of people who work in AI safety, who work in AI in general, none of those people have come to this conclusion that you have. Some of them can't even comprehend it. They're like,

Malcolm: Yeah, but no, this is the interesting thing.

When I talk with the real experts in the space, like recently I was talking with a guy who runs one of the major AI safety orgs, right? He's like, that is a reasonable view that I have never heard before, and it really contrasts with his view. Yeah. And let's talk about where it contrasts with his views.

So when I talk with people who are typically open-minded in the AI safety space, they're like, [00:03:00] yes, that's probably true. However, they believe that it is possible to prevent this convergent AI from ever coming to exist by creating something like an AI dictator that essentially watches all humans and all programs, all the time.

And that envelop


Published 2 years, 3 months ago





