The rise of the killer robots! When will AI get dangerous?
https://commons.wikimedia.org/wiki/File:Celia-killer-bots-attacking-thom_mango_concept-art_02.png


Will the robots be taking over? Will AI get so powerful that there will be killer robots wandering the streets? 

Elon Musk says he is worried about this existential threat of AI. Was he serious, or was his tongue firmly in his cheek?

Let's take a look. It would be folly to go up against Elon, but the answer has to be no: there is no threat of machines going rogue, *unless we program them to*. Does that sound like a circular statement? It is, and yet it is not.

When you think of a killer robot what do you imagine? Chances are it is anthropomorphic in appearance, has some cognitive capacity and, this is the most important part, an awareness of its own existence. This is the Hollywood version.

That last part, self-awareness, is not possible for a computer. It is interesting to look at what machine learning really does. While I have been searching for a new position I have also been enjoying reading up on the latest in deep learning technology. I took courses on this in college, and the techniques are much the same as they have been for quite a while.

One thing stands out: machine learning is actually extremely banal. A machine can be trained to have what looks like cognitive capacity, but it is nothing like the cognitive capacity that we humans have.

The training goes like this (a very high-level description): you give the machine an input (say, a digital representation of a picture) and a set of initial parameters. The machine computes an output by performing a whole series of mathematical operations on the input and the parameters.

You also provide the value the output should be (say you want to know whether the picture is a cat picture, so the correct output is either 0 (no) or 1 (yes)). The machine measures the error between the output it computed and the output you say is correct, adjusts the parameters, and recomputes its output. It does this over and over and over again until the error is reduced to zero (or very, very, very, very close to zero).

Then the training is done, and the machine can use the parameters it has discovered to inspect new pictures and tell whether they are cat pictures. With enough training it can get very, very, very, very close to 100% accurate.
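To make the loop above concrete, here is a minimal sketch in plain Python: logistic regression trained by gradient descent on made-up toy data. The feature vectors, labels, and learning rate are all invented for illustration; no real system trains on anything this small, but the shape of the procedure (compute output, measure error, adjust parameters, repeat) is the same.

```python
import math
import random

random.seed(0)

# Toy data: 200 "pictures" of 4 features each; label is 1 ("cat")
# when the features sum to a positive number. Entirely made up.
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(200)]
y = [1.0 if sum(x) > 0 else 0.0 for x in X]

w = [0.0] * 4   # the "initial parameters"
b = 0.0
lr = 0.5        # how big an adjustment to make each round

for step in range(500):
    grad_w = [0.0] * 4
    grad_b = 0.0
    for features, label in zip(X, y):
        # Compute the output from the input and the current parameters.
        z = sum(wi * xi for wi, xi in zip(w, features)) + b
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "cat"
        # Measure the error against the answer we provided...
        err = p - label
        for i in range(4):
            grad_w[i] += err * features[i]
        grad_b += err
    # ...and adjust the parameters to shrink the error. Then repeat.
    w = [wi - lr * gw / len(X) for wi, gw in zip(w, grad_w)]
    b -= lr * grad_b / len(X)

def predict(features):
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1.0 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0.0

accuracy = sum(predict(x) == label for x, label in zip(X, y)) / len(X)
```

After a few hundred rounds of this mechanical adjust-and-repeat, the discovered parameters classify the toy "pictures" almost perfectly. Nothing in the loop "knows" what a cat is.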

So it now has cognitive ability of a sort, but we can note that:

a) this is nothing like human cognitive ability

b) the main work the machine does is the mathematical operations needed to learn the parameters that give no error. These operations are roughly the same regardless of the problem: whether you want to know if a picture is a cat picture, recognize speech, predict the stock market, play chess, etc., you are applying similar procedures.

c) The machine has no self-awareness of any kind. It does not 'know' what it is doing. It does not 'think' like we think. Cat pictures or language or stock prices or chess games have no conceptual meaning to the machine, at least not how we would understand such a meaning.
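Point (b) can be made concrete with a sketch: the very same training routine, unchanged, learns two unrelated labelings; only the data fed to it differs. The problems, labels, and data below are invented purely for illustration.

```python
import math
import random

def train(X, y, steps=500, lr=0.5):
    """One generic gradient-descent loop: identical code whatever the task."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(steps):
        gw, gb = [0.0] * n, 0.0
        for xs, label in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, xs)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - label
            gw = [g + err * xi for g, xi in zip(gw, xs)]
            gb += err
        w = [wi - lr * g / len(X) for wi, g in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def accuracy(w, b, X, y):
    correct = 0
    for xs, label in zip(X, y):
        z = sum(wi * xi for wi, xi in zip(w, xs)) + b
        correct += (z > 0) == (label == 1.0)
    return correct / len(X)

random.seed(1)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]

# Two unrelated "problems", one procedure: only the labels change.
y_cats = [1.0 if x[0] > 0 else 0.0 for x in X]         # "is it a cat?"
y_up   = [1.0 if x[1] + x[2] > 0 else 0.0 for x in X]  # "will the stock go up?"

acc_cats = accuracy(*train(X, y_cats), X, y_cats)
acc_up   = accuracy(*train(X, y_up), X, y_up)
```

The `train` function never changes; cats and stock prices are just different columns of numbers to it, which is exactly why neither has any conceptual meaning to the machine.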

If intelligence implies being able to perceive this conceptual meaning, then no matter how powerful computers become, no matter how sophisticated the software is, the machine has no intelligence whatsoever. And it never will. It is 'dumb' in that way, a dumb workhorse, doing exactly what we have told it to do. And the computer will never do anything we have not told it to do. In the case of a software bug, we may think we told the machine something different from what it does, but even there the machine is exactly following the instructions given to it; we just gave it faulty instructions.

Consequently, morality, notions like good and evil, also has no conceptual meaning to the machine. A machine is indifferent to what it is doing. So the robots will go rogue only if we have trained them to, and in that case they will not really be rogue, will they? They are then no different from any of the other tools we have developed since humans first developed tools. A knife is also indifferent. You can use it to cut vegetables or you could use it to hurt someone. The usage is up to you.

So if a new intelligence does arise, at least by this definition of intelligence, it will not come from the software lab or the computer science department. It will come from the biology lab. We are now capable of growing synthetic meat. At some point we might, you never know, it could happen, be able to take non-living materials and combine them to create something living that has its own unique DNA.

We might be able to make a cell that is a new species altogether. It will have its own intelligence, nothing like the intelligence of a human, but from there on, who knows? Maybe a hundred years from now, or a thousand, or ten thousand, or maybe fifty thousand, we will be able to create a new species that has a brain like ours. It will be self-aware and fully capable of going rogue. And then it might actually go rogue.

Until then, enjoy your cat pictures! And your dog pictures too.
