Weapons Are Not Controlled by A.i. ~ Weapons Are Created by Humans
Many of my friends and associates ask me what I think of an A.i. performing nefarious acts, replacing humans at work, and so on. My response often starts with a chuckle at their naivete.
A.i. is not nefarious in its own right. The programmer or creator / operator might be nefarious, but an A.i. is not. We have nothing to worry about regarding A.i. planning the demise of humanity. I tell those who ask me this question, "Until a machine has the ability or the conscious desire to independently wish harm on us, we have nothing to worry about." And because it requires a conscious mind to intentionally act in a nefarious manner, we as humans have nothing to worry about.
Now, that does not mean a programmer or A.i. operator cannot or will not design a machine to hurt other people. This would require that the A.i. be given control and be trained, like an attack dog, to hurt other people. Even if the machine has control over the hardware (weapons) necessary to cause bodily harm, one must see that the machine is not nefarious in its own right; the creator / operator would have to be the one with ill intent. Warfare is often a subject of contention amongst my peers and those with whom I speak regarding A.i. and their suspicions about it.
Today, we already have drones armed with missiles and guns, and these drones could be operated by an A.i. in order to kill a human. But it would require that someone (i.e., a human being) provide this A.i. with a trained model specifically designed to identify a target and subsequently fire a wired weapon to cause the bodily harm. The fact is that it would be neither ethical nor effective to give a drone autonomy over the use of such weapon(s). In fact, the human would be irresponsible and malicious at the least, and homicidal / psychotic at the worst.
Effective use of weapons is based upon defensive combat needs only, where the lives of innocent people are at risk. If someone were to arm an autonomous A.i.-driven machine and give it the power to use that weapon on its own, then that individual human would be defined as homicidal. Some people have a hard time understanding this: it is the creator or programmer who would be homicidal, not the device. So let me give you a scenario.
Say your vehicle has a bad parking brake. You park your car on a steep hill, leave the vehicle in neutral, and then pull the e-brake. You know the brake is on, but it is faulty and very likely to fail. If your bag of records falls over, knocks the e-brake loose, and the vehicle rolls down the hill to collide with a sidewalk full of children, the vehicle is not guilty of murder; the individual who parked with a faulty e-brake is guilty. The machine is not at fault. The person behind the machine is guilty. A.i. and weapons are, in many respects, no different from your car, its bad e-brake, and a sack of music records. The human(s) behind the weaponized technology are to blame. A.i.-powered machines have no desire to hurt or kill.
Please share this article, so you and your network will understand what is real about weaponized A.i.-controlled systems.
I had originally been on the fence about publishing this article, until recently someone I know who is not educated about the concepts of A.i. cited weaponized systems as something we must fear. It is a very common question I am asked by a great many smart people as well. So I wrote this article to set the record straight, at least in layman's terms. A.i. at its core is not inherently evil or nefarious. It would have to be the human(s) behind the system who seek to hurt someone. A human would need to design a system and provide the A.i. trained targets and models in order to attack humans or any other living being.