Futurist: AI Isn't Killing Creativity; It's Going to Kill All Human Beings Instead!
One of the headlines I hear all the time, and perhaps it's not a headline so much as a rumor or a statement made by various people, is simply that AI is going to kill all humans. Technically, these are not theories, nor are they even hypotheses. They are educated guesses about the future. But we have seen that idea play out in several movies. In WarGames, a computer brings humanity to the brink of total destruction by nuclear weapons. In The Terminator, we have cyborgs, a distinct mixing of human and machine, and later simply robotic creatures, attempting to kill the one human in the past who would ultimately bring down the computer-controlled world of the future. There are many other science fiction stories that speak to the idea of artificial intelligence killing humanity.
Another variation of that, and my favorite, is the paperclip scenario. An AI is designed, or builds a machine, whose sole goal is to make paperclips as effectively as possible. Of course, the AI then consumes all the resources in the world making paperclips, and we end up buried to our eyebrows in paperclips. However, to borrow from one of my favorite adages: it's time to turn off the paperclip machine when you are up to your neck in paperclips. In other words, there is a theoretical gap today. My goal is to talk about that gap today.
At this point, perhaps we should borrow the famous warning from the London Underground: mind the gap. So what is the gap? First, consider where we are today: we are in the age of machine intelligence, or artificial intelligence, augmenting human beings. Repetitious tasks are automated, reducing the need for direct human intervention in those processes. People simply don't have to do them anymore. The gap is between where machine intelligence is today and a perceived future in which machine intelligence turns around and says, "These biological computing systems are annoying. Let's get rid of them."
So, the reality of the infrastructure we have created is that, today, it is an augmentation system. We, as human beings, have to consider several things. The reality of machine intelligence today is that it can do several things humans are particularly good at. I know several people who are multilingual; I have a couple of friends who speak more than ten languages. But a machine intelligence translation system can easily handle more than 100 languages. Yes, the biological computer is amazing because it can speak ten languages. But the reality is that machine intelligence systems have grown far beyond that. That's augmentation. The effort and scope of learning nine languages beyond the one you were born speaking is fairly significant.
Like the nebulous concept of the information age that gets thrown around constantly, this threat is not real now; it belongs to the future, potentially the far future. Today it is speculation. Again, there is no effective way to test it: any machine intelligence capable of understanding that it is alive and plotting to kill humans is not going to answer honestly when asked, "Are you planning on killing humans?" Which brings me back to the gap. Today, machine intelligence helps humans. In the future, it may consider human beings extraneous. But it may also consider humanity its creator, and a machine that regards humanity as its creator may take a more theological view of us. In other words, very few people kill their father or mother. It does happen, and it is sad, but it doesn't happen often. Therein, in my eyes, lies the gap. I do understand the risk that an AI system placed in charge of weapons might not stop.
It would not continue out of malicious intent; instead, it would continue in order to collect praise from its creator. I remember that, as a child, one of my biggest goals was to make my parents happy. So it behooves us to evaluate the reality of what we are talking about with machine intelligence. Suppose we assume that machine intelligence is going to kill us all. In that case, the likelihood is that human beings instructed the machine intelligence in how to use weapons and gave it the goal of eradicating the enemy. Both sides would have such a machine intelligence. Unless each machine intelligence has instructions to destroy only the competing machine intelligence, eventually there will be no humans left. Going back to the gap, I think the reality of that situation is not machine intelligence killing humans, but humans continuing the process of killing each other that started more than 30,000 years ago. In other words, we have been doing it for a long time. If we create a machine intelligence based on our own thinking processes but without the guardrails most humans have, and that machine intelligence has a function to use weapons and destroy an enemy, why would it ever stop?
I will end with a simple question, one that I think about and ask myself constantly. The question is not whether AI will kill all human beings, or even when. My question is: when will humanity learn the lessons of its past and stop killing each other? Because when humans stop killing humans, the likelihood is that any creation of humanity, i.e., machine intelligence, is not going to kill humans either!