Artificial Intelligence is not trying to kill you

I am lucky enough to be invited to speak on the topic of AI fairly often. I talk about the evolution of AI, where we are now and where we are headed. At most of these talks I get a question on the same theme: “Are you not afraid that when Artificial Intelligence gets good enough, it will dominate / kill / get rid of / conquer us humans?”

I think it is a very valid concern, especially when you look at the current state of the debate about AI and how we often portray AI in our visions of the future.



However, I am not afraid at all, and that is because I think the underlying assumption is flawed.

Current State of AI

Before we dig into that, let's examine where we are and what we know right now. I subscribe to the theory of three (r)evolutionary stages of AI: ANI, AGI and ASI. ANI, Artificial Narrow Intelligence, is our current stage: computer-driven mechanisms trained by humans to solve narrow problems, often incredibly well. Think of an algorithm trained to detect cancer in a CT scan, much better and quicker than humans can, but not sentient or intelligent in any way.

The next stage is AGI, or Artificial General Intelligence: a much broader intelligence, appearing sentient, able to pass the Turing test (https://en.wikipedia.org/wiki/Turing_test) and able to expand itself. It is also the development stage that sets us on the course for ASI.

The final stage is ASI, or Artificial Super Intelligence. At this stage the AI will quickly surpass human abilities and rapidly dwarf us, to the point where we might not even be able to comprehend how it functions. From the perspective of intellectual capacity, we will become quite irrelevant.

And really, it is this coming of ASI that scares people. Because if it is so powerful, will it not want to dominate us humans? Much like we dominate the entire planet, not because of superior physique, but because of our intellectual capacity. This is also known as the AI Takeover (https://en.wikipedia.org/wiki/AI_takeover).

The theory of the friendly AI

I have a theory about why we need not fear the AI Takeover. I often share it in conversation, but I thought it was high time to put it down in writing to start a healthy discussion on the topic.

I believe that our need to dominate, and indeed the need of most biological beings, including plants, is driven by a need to survive. In a competitive and often resource-scarce environment, being better than and dominating other species or peers was a way of surviving. You could of course argue that an AI could be resource-starved, in that there might be finite compute resources. That aside, a truly sentient AI will not have evolved in a competitive world like us biological beings. Therefore, it would have no built-in motivation, evolved over millions of years, to dominate or destroy other species. In addition, an AI would most likely not stand to gain anything by destroying humans.

You could of course argue that the AI could learn from our own history and copy a pattern of behaviour, but again, what would it stand to gain?

AI in science fiction

If we examine two scenarios made popular by science fiction, we also see that they make little sense.

In The Matrix we are used as batteries, but in reality we are a terribly inefficient power source compared to whatever else an ASI would be able to come up with. In the Terminator movies, Skynet declares war on humans and builds killer robots to destroy us. But why? Skynet could happily have lived on in peaceful co-existence with humans. And besides, an ASI would never build inefficient humanoid killer robots; instead it would use nanobots to disassemble us at the molecular level, or a custom virus with a 100% hit rate.

Now, I agree that once we make it to the ASI stage, one of two things will happen. Either AI could, as the late Stephen Hawking said, "be the worst event in the history of civilization", and herald our extinction. Alternatively, it could make us the first species to live forever, perhaps even granting eternal life to the individual. I am an optimist, so naturally I believe in the latter. But mass extinction as a result of AI is a possibility. From my perspective, it would not be because of an evil AI motivated to dominate and kill. It would be the result of poor directives or wishes from us silly humans. The most common example: when ASI arrives, we have still not solved climate change. We ask our artificial overlord to help us. A quick examination shows that we, humans, are the cause. The AI then promptly removes us all to return the planet to a better state. Not an evil AI, just stupid humans.

And I think that is really the moral of the story. Fear not artificial intelligence – fear real stupidity!
