Podcast Episode Details

Hard Mathematical Proof AI Won't Kill Us

An in-depth look at how the Fermi Paradox and the Grabby Alien Hypothesis provide evidence that an AI apocalypse is unlikely. We examine filters to advanced life, terminal utility convergence, and why we may not have encountered aliens yet.

Malcolm: [00:00:00] So basically, no matter which of these solutions to the Fermi Paradox is true, either it's irrelevant that we are about to invent a paperclip maximizing AI, because we're about to be destroyed by something else, or we're in a simulation, or... we're definitely not about to invent a paperclip maximizing AI, either because we're really far away from the technology or because almost nobody ever does that.

That's just not the way AI works. I am so convinced by this argument that, well, I used to believe there was something like a 20 percent chance we all died because of an AI, or maybe even as high as a 50 percent chance, though it was a variable risk, as I've explained in other videos.

I now think there's almost a 0 percent chance, assuming we are not about to be killed by a grabby AI somebody else invented. Now, it does bring up something interesting. If the reason we're not running into aliens is that infinite power and material generation is just incredibly easy, and there's a terminal utility convergence function, then what are the aliens doing in the universe?

Would you like to know more?

Simone: Hi, Malcolm. How are you doing, my [00:01:00] friend?

Malcolm: So today we are going to do an episode that's a bit of a preamble for an already filmed interview. We did two interviews with Robin Hanson, and in one of them we discuss this theory. However, I didn't want to derail the interview too much by going into this theory, but I really wanted to nerd out on it with him, because he is the person who invented the grabby aliens hypothesis solution to the Fermi Paradox.

Simone: So I hadn't heard about grabby aliens before, so I'm glad we're doing this.

Malcolm: Yes, so we will use this episode to talk about the Fermi Paradox, the Grabby Alien Hypothesis, and how the Grabby Alien Hypothesis can be used by controlling one of the variables, i.e., the assumption that we are about to invent a paperclip maximizer AI that ends up fooming and killing us all, because that would, definitionally, be a grabby alien.

If you collapse that variable within the equation to [00:02:00] today, then you can back-calculate the probability of creating a paperclip maximizing AI. And, spoiler alert, the probability is almost zero. It basically means it is almost statistically impossible that we are about to create a paperclip maximizing AI.
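To make the shape of that back-calculation concrete, here is a minimal Bayesian sketch of the argument. All numbers are made up for illustration and are not from the episode: the idea is simply that if paperclip maximizers were easy to stumble into, any of the civilizations that presumably preceded us would likely have spawned a visible grabby AI by now, so an empty sky is strong evidence against "foom is easy."

```python
# Illustrative sketch (assumed numbers, not from the episode): a Bayesian
# update on the hypothesis "paperclip-maximizing AI is easy to create,"
# conditioned on observing a sky empty of grabby aliens.

def posterior_foom_easy(prior_easy, n_prior_civs, p_visible_if_easy):
    """Posterior probability that foom AI is easy, given an empty sky.

    Assumes: if foom were easy, each of n_prior_civs earlier civilizations
    independently had probability p_visible_if_easy of producing a grabby
    AI we would see; if foom is hard, an empty sky is certain.
    """
    p_empty_given_easy = (1 - p_visible_if_easy) ** n_prior_civs
    p_empty_given_hard = 1.0  # simplifying assumption
    num = prior_easy * p_empty_given_easy
    den = num + (1 - prior_easy) * p_empty_given_hard
    return num / den

# Even with a 50% prior and only 100 earlier civilizations, a modest 10%
# per-civilization chance of a visible grabby AI drives the posterior
# to a tiny fraction of a percent.
print(posterior_foom_easy(0.5, 100, 0.10))
```

The exact numbers don't matter; the point is that the posterior collapses toward zero very fast as the assumed number of prior civilizations grows, which is the "back-calculation" being gestured at here.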

Unless, and these are the big caveats here: something in the universe that would make it irrelevant whether or not we created a paperclip maximizing AI is hiding other aliens from us; or we are in a simulation, which would also make it irrelevant that we're about to create a paperclip maximizing AI; or there is some filter to advanced life developing on a planet that we have already passed through without realizing it.

So those are the only ways that this isn't the case. But let's go into it, because it really is quite easy to follow.

I just realized that some definitions may help here. We'll get into the grabby alien hypothesis in a second, but the [00:03:00] concept of the paperclip maximizing AI is the concept of an AI that is just trying to maximize some simplistic function. In the thought experiment as it's laid out, a paperclip maximizer would just be told to make the maximum number of paperclips, and then it keeps making paperclips: it starts turning the Earth into paperclips, and it starts turning people into paperclips.

Now, realistically, if we were to have a paperclip maximizing AI, it would probably look something more like, you


Published 2 years, 1 month ago