Autonomous Weapons Debate: A list of all the arguments for and against
Thomas Jestin
Co-founder & CEO @ KRDS. Co-founder @ YeldaAI. Writer (SingularityHub, SpaceTechAsia, HuffPost, Les Échos & more)
For several years, a debate has been raging in AI and military circles: what should we do about autonomous weapons?
These weapons would be the third great revolution in the art of war: after firearms, which allowed kings to strengthen their power over local lords by destroying their castles and put an end to the nomadic threat from the Asian steppes, and then the atomic bomb, which, as everyone knows, ended the Second World War in the Pacific and has given us a balance of terror ever since.
Several questions arise, however, about these new weapons:
- What is an autonomous weapon?
- Should we ban them? Why?
- And if so, is it even possible?
What is an autonomous weapon?
- “Weapons have been able to track their prey unsupervised since the first acoustic-homing torpedoes were used in the second world war. Most modern weapons used against fast-moving machines home in on their sound, their radar reflections or their heat signatures. But, for the most part, the choice about what to home in on—which aircraft’s hot jets, which ship’s screws—is made by a person. An exception is in defensive systems, such as the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.”
- “In 2017, according to a report by the Stockholm International Peace Research Institute (SIPRI), a think-tank, there were 49 deployed systems which could detect possible targets and attack them without human intervention.”
- “One of them is the Harop, an Israeli kamikaze drone, that bolts from its launcher and then just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance. If the Harop is left alone, it will eventually fly back to a pre-assigned airbase, land itself and wait for its next job. Should an air-defence radar lock on to it with malicious intent, though, the drone will follow the radar signal to its source and the warhead nestled in its bulbous nose will blow the drone, the radar and any radar operators in the vicinity to kingdom come. Israel Aerospace Industries (IAI) has been selling the Harop for more than a decade. A number of countries have bought the drone, including India and Germany.”
- “Switzerland, for instance, says that autonomous weapons are those able to act “in partial or full replacement of a human in the use of force, notably in the targeting cycle”, thus encompassing Harop and Brimstone, among many others. Britain, by contrast, says autonomous weapons are only those “capable of understanding higher level intent and direction”. That excludes everything in today’s arsenals, or for that matter on today’s drawing boards.”
So autonomy in weapons is not really new; but the term has become a catch-all that is difficult to navigate, and it needs clarifying.
Advocates of a prohibition on autonomous weapons designate systems that are:
i. able to kill;
ii. offensive, that is, capable of attacking, not merely defending;
iii. autonomous, in the sense of being able to search for a target, identify it and neutralize it, all without human intervention, thus exhibiting a certain degree of autonomy, a certain intelligence, by relying on AI algorithms.
But even that definition remains unsatisfying. Yann LeCun points out, for example, that an antipersonnel mine is indeed an autonomous weapon, but a very stupid one that indiscriminately kills whoever steps on it.
Though it is rarely spelled out, the concept also implies some form of physical embodiment: we are not talking about just an AI here, but about something made of matter, a robot, a machine, a piece of equipment with its own ammunition and energy source, a combination of these elements, all of it of course also running on algorithms. Such a weapon can take the form of a flying drone, an underwater drone, a tank, a vehicle, a robot insect; anything is possible in shape and size, using all kinds of projectiles, waves and bombs.
A credible example of what a serious killer robot could soon look like is the microdrone that can autonomously identify someone in a crowd, rush at them and detonate its explosive charge. This concept was popularized in the video "Slaughterbots", produced by advocates of a ban on autonomous weapons.
As Elon Musk said in November 2018: "You could make a swarm of assassin drones for very little money. By just taking the face ID chip that's used in cellphones, and having a small explosive charge and a standard drone, and just have it do a grid sweep of the building until they find the person they're looking for, ram into them and explode. You could do that right now.... No new technology is needed."
To rephrase, one could say the debate is about "intelligent machines able to kill proactively without human intervention".
But again, what is intelligence? Is there even a clear border between the ability to defend and the ability to attack? And when do we consider that there is human intervention? If I switch on a killer robot in the morning and it then goes on a killing spree all day long, can we say there was human intervention?
Let's nevertheless move forward with this approximate definition in mind.
So, should we ban "intelligent machines intended to kill proactively without human intervention"? And if so, is it even possible?
Here are the most relevant arguments for and against that I could find after reading many articles on the subject:
A. For prohibition:
1. Without a ban, these weapons will sooner or later be produced in industrial quantities. Some of them, like the killer microdrones in the "Slaughterbots" video, will be much more compact, much cheaper and much simpler to use than Kalashnikovs, so that one way or another they would eventually end up, in large numbers, in the hands of malicious actors able to launch deadly, large-scale surprise attacks. It is for this reason that they constitute weapons of mass destruction and must be prohibited.
2. Autonomous weapons, especially killer microdrones, because they can be deployed en masse by a very small group of people, raise an accountability puzzle: we risk witnessing mass killings without being able to establish clearly who is behind them, which can also serve as a pretext for retaliation against convenient but innocent enemies (recall the invasion of Iraq by the United States after September 11). We can already imagine false-flag operations of all kinds conducted to destabilize a region or a country, or to trigger a war between two great powers by a third lurking in the shadows. The development of weapons capable of mass destruction should therefore absolutely be avoided, also because their use would be difficult to attribute. And even short of mass killings, these weapons would allow untraceable targeted assassinations as never before. Remember as well that it was an assassination that sparked the First World War.
2.1 Objection: the accountability problem is real, but not as acute as in cyberwar, since a small group of people would still need to act in the physical world to trigger an attack, and intelligence services can usually trace such networks back, though they may admittedly seize the opportunity to pin the blame on others.
3. By making it possible to kill and destroy ultra-selectively, without engaging troops, these weapons could paradoxically lower the threshold for triggering a war. If a country no longer has to pay the price of blood to conduct an operation, perhaps it will hesitate less, which could cause more wars than before. We can quote Yuval Noah Harari here: "If the USA had had killer robots in the Vietnam War, the My Lai massacre might have been prevented, but the war itself could have dragged on for many more years, because the American government would have had fewer worries about demoralized soldiers, massive anti-war demonstrations, or a movement of ‘veteran robots against the war’ (some American citizens might still have objected to the war, but without the fear of being drafted themselves, the memory of personally committing atrocities, or the painful loss of a dear relative, the protesters would probably have been both less numerous and less committed)."
3.1 Objection: the cost in human lives may be lower for the power that attacks with autonomous weapons, but if the victims on the other side are too numerous, which is precisely what advocates of a ban fear, it is a safe bet that in a globalized world the images would circulate very quickly, move the international community, and weigh on public opinion in the aggressor country, or at least in other countries. This motivation can therefore probably be put in perspective. We can quote Yann LeCun here: "Modern wars are sensitive to public opinion, and the deadly nature of a war makes it, at a minimum, suspect. It was domestic and international public opinion that drove the United States to disengage from Vietnam. Militarily they were superior, but the accumulation of victims turned against them."
4. Autonomous weapons, packed with embedded electronics and highly connected, will be vulnerable to hacking. If a hostile power took control of such weapons and turned them against their owners or other targets, the consequences could be dramatic.
4.1 Objection: this problem in fact concerns all remote-controlled machines stuffed with electronics, not only autonomous weapons, and since there is no question of banning remote-controlled military equipment, adding autonomous weapons to the list does not change this very real threat much. Some argue, moreover, that autonomous cars rather than autonomous weapons will be tomorrow's major security threat: since they will be very numerous and will operate near civilians in cities, large-scale hacking could turn them into battering rams far more devastating.
5. Banning these weapons will not solve every problem, but an imperfectly respected ban is better than no ban at all. The Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and the prohibitions on biological and chemical weapons, antipersonnel mines and blinding lasers are telling examples: none is perfect, but we are probably doing much better with them than we would without. As Yoshua Bengio (one of the godfathers of deep learning) says: "I think we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way. Of course, you’ll never completely prevent it, and people say, ‘Some rogue country will develop these things.’ My answer is that one, we want to make them feel guilty for doing it."
5.1 Objection: the nuclear issue is unique, given the difficulty of developing such weapons coupled with their absolute destructive power, which has produced a certain equilibrium through the risk of mutually assured destruction (MAD). The other existing prohibitions hold because the military considers the ratio [(effectiveness) / (stigma associated with use)] too low: these weapons are not effective enough, and in most cases less effective than initially anticipated, to justify their use in view of the horror they inspire. Autonomous weapons are a completely different matter: we are asking nations to give them up while their potential effectiveness cannot yet be quantified and is suspected to be immense. That is too much to ask, especially when the production cost, once the design is complete, is expected to be modest. In the same spirit, twentieth-century attempts to ban submarines and air-launched missiles failed miserably! It seems unthinkable that nations would give up developing such weapons unless all of them agreed to participate in the first place. And indeed, discussions at the UN are currently blocked, among others by the United States and Russia. It is probably for these reasons that the United Kingdom seems to have changed its position between September 2017 (UK bans fully autonomous weapons after Elon Musk letter) and November 2018 (Britain funds research into drones that decide who they kill, says report). Finally, even if everyone agreed to ban autonomous weapons, should we trust China, among others?
Commenting on the central government's work report, Lieutenant General Liu Guozhi, director of the Chinese Central Military Commission's Science and Technology Commission, said of the military applications of AI that the world is "on the eve of a new scientific and technological revolution" and that "whoever doesn't disrupt will be disrupted!" (source)
Xi Jinping promised Barack Obama in 2015 never to militarize the small islets of the South China Sea, a promise which, as we know, was then broken. Who can believe that China would let inspectors search facilities on its territory? With the technology considered so promising, not so expensive to industrialize, and many of the building blocks needed for its design available in the public domain, it is hard to imagine that no one will ever cheat. Why take the risk of respecting an agreement that others could so easily violate?
B. Against prohibition:
1. There is strong reason to believe that autonomous weapons will make conflicts less bloody.
1.1 More competent weapons: autonomous weapons are in fact more reliable than humans, because they do only what they are told (and will for a long time yet, occasional bugs aside) and do it better. Humans, prey to their emotions, and at times to their perversity and sadism, do not always obey, and when they do what is asked of them, they are not as competent as machines that can be endowed with superhuman abilities: more enduring, faster, better at aiming, and so on. Autonomous weapons would allow the most surgical targeting of the enemies and infrastructure to be destroyed, leaving the rest unscathed. Let's quote Y. N. Harari again here: "On 16 March 1968 a company of American soldiers went berserk in the South Vietnamese village of My Lai, and massacred about 400 civilians. This war crime resulted from the local initiative of men who had been involved in jungle guerrilla warfare for several months. It did not serve any strategic purpose, and contravened both the legal code and the military policy of the USA. It was the fault of human emotions. If the USA had deployed killer robots in Vietnam, the massacre of My Lai would never have occurred."
1.2 Less violence, because there is less need for self-preservation: a good part of the violence of conflict comes from soldiers' need to protect themselves, to take the fewest possible risks with their lives. This leads troops engaged at the front, when in doubt, to use their weapons to neutralize the opponent, on the logic of "shoot first and see what happens." Autonomous weapons can be programmed to be more careful, more reactive than proactive in the use of force, since losing them is more acceptable than sacrificing human lives. As Rodney Brooks, former director of the MIT Computer Science and Artificial Intelligence Laboratory and founder of iRobot, explains: "It always seemed to me that a robot could afford to shoot second. A 19-year-old kid just out of high school in a foreign country in the dark of night with guns going off around them can’t afford to shoot second."
2. A ban would be in vain, because malicious actors will always be able to assemble tools and equipment that are authorized and/or already available on the black market into autonomous killer weapons themselves, for example by combining a civilian drone, an explosive charge or firearm, and facial recognition software.
2.1 Objection: yes, that is possible, but such DIY workarounds to a ban on the industrial production of autonomous killer weapons would not produce them in quantities sufficient to make them weapons of mass destruction. As Stuart Russell, an artificial intelligence expert (co-author of the standard university AI textbook and one of the instigators of the call for a ban), says: "It could be that under a treaty, there would be a verification mechanism that would require the cooperation of drone manufacturers and the people who make chips for self-driving cars and so on, so that anyone ordering large quantities would be noticed—in the same way that anyone ordering large quantities of precursor chemicals for chemical weapons is not going to get away with it because the corporation is required, by the chemical weapons treaty, to know its customer and to report any unusual attempts that are made to purchase large quantities of certain dangerous products. I think it will be possible to have a fairly effective regime that could prevent very large diversions of civilian technology to create autonomous weapons. (...) In small numbers, though, autonomous weapons don’t have a huge advantage over a piloted weapon. If you’re going to launch an attack with ten or twenty weapons, you might as well pilot them because you can probably find ten or twenty people to do that." A toy sketch of the kind of reporting mechanism Russell describes follows below.
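To make Russell's idea concrete, here is a minimal, purely illustrative sketch of a know-your-customer reporting rule of the kind he describes: a supplier flags any customer whose cumulative orders of a controlled component cross a reporting threshold. The threshold, the customer names and the order data are all invented for illustration; no real compliance system or API is implied.

```python
# Toy sketch of a KYC-style reporting rule (all values are illustrative assumptions).
from collections import defaultdict

REPORT_THRESHOLD = 500  # hypothetical threshold: units per customer per reporting period

def flag_large_buyers(orders):
    """orders: iterable of (customer_id, quantity) pairs for one reporting period."""
    totals = defaultdict(int)
    for customer, quantity in orders:
        totals[customer] += quantity
    # Report every customer whose cumulative volume crosses the threshold.
    return {c: q for c, q in totals.items() if q >= REPORT_THRESHOLD}

# Hypothetical order book: split orders from the same buyer still get flagged.
orders = [("acme-labs", 40), ("shell-co", 300), ("shell-co", 450)]
print(flag_large_buyers(orders))  # {'shell-co': 750}
```

The point of the design is that aggregating per customer defeats the obvious evasion of splitting one large order into many small ones, which is also, in spirit, how chemical-precursor reporting works.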
3. A ban is not needed, because defenses will appear in response to autonomous weapons. For drones, for example, researchers are busy perfecting ways to jam the signals they receive in order to disorient them, to fry their electronics remotely with electromagnetic waves, to take control of them by hacking, to capture or stop them with nets, or to destroy them with lasers or projectiles. In short, there is no lack of ideas; "necessity is the mother of invention", as the saying goes.
3.1 Objection: the problem of killer microdrones is much tougher than it seems. The conclusions of a 2017 Pentagon exercise assessing different defense systems, the culmination of 15 years of research on the subject, are worrying: "steadily mixed results", "the thorny puzzle of counter-terrorism", "most of the technologies tested are still immature". It is quite possible that these microdrones will keep staying ahead of defense systems, bearing in mind that protection will be needed everywhere, all the time, and a single failure could be enough to allow a slaughter. We must also understand that the goal is not only to find a way to neutralize these microdrones, but to do so more cheaply than they cost to produce; otherwise it is simply a losing battle: neutralizing a $1,000 drone with a $1 million missile makes no sense, as the back-of-the-envelope calculation below illustrates.
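To see why the economics matter, here is a minimal worked version of the cost-exchange problem. All figures are illustrative assumptions extrapolated from the $1,000-drone versus $1 million-missile example above, not sourced estimates.

```python
# Back-of-the-envelope cost-exchange calculation (all figures assumed).
drone_cost = 1_000            # assumed cost of one attacking microdrone, in dollars
interceptor_cost = 1_000_000  # assumed cost of one defending missile, in dollars
swarm_size = 1_000            # assumed number of drones in a single attack

attack_cost = swarm_size * drone_cost         # $1,000,000 to attack
defense_cost = swarm_size * interceptor_cost  # $1,000,000,000 to defend

# The defender spends 1,000 times more than the attacker per engagement,
# an unsustainable ratio even before counting missed intercepts.
print(f"cost-exchange ratio: {defense_cost / attack_cost:,.0f}:1")  # 1,000:1
```

As long as this ratio stays far above 1, the defender is on the losing side of the exchange even when every intercept succeeds.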
4. A ban would be futile because its terms would be too vague and too easily bypassed, making it impossible to enforce. It might hold in peacetime, that is to say, precisely when it is not needed, but it would collapse as soon as a war erupted. In particular:
4.1 If we forbid only offensive autonomous weapons: Yoshua Bengio thinks indeed that "there’s nothing to stop us from building defensive technology. There’s a big difference between defensive weapons that will kill off drones, and offensive weapons that are targeting humans. Both can use AI." Fine, suppose only defensive autonomous weapons were allowed: is the difference with offensive ones really so marked? The hardware could be the same in both cases, and the only difference would be the software, a few lines of code basically. A defensive autonomous weapon can be reprogrammed to attack very easily and quickly. The same goes for autonomous reconnaissance drones with no embedded weaponry: since they are unmanned and can cost very little per unit, they too can be reprogrammed into suicide bombers and thrown at a target. In early 2017, the US Air Force announced that it had successfully tested the deployment of a hundred microdrones from a fighter jet: "These drones are not pre-programmed individually, they form a collective organism driven by a shared artificial brain allowing them to adapt their flight to each other like a school of fish". These drones are supposedly being developed for reconnaissance, but it is then hard to see why they would need to fly in tight formation; the potential offensive use is barely veiled.
4.2 If we allow only autonomous weapons that target enemies in military uniform, or other robots: the same weapons can be reprogrammed to target civilians without difficulty.
4.3 If we forbid only fully autonomous weapons that require no human intervention at all: it is very easy to develop semi-autonomous weapons (they already exist) in which a human always validates the decision to kill or strike, but these too can easily be reprogrammed to act without validation. How can we prevent that?
So it seems that the advent of autonomous killer weapons is inescapable. What do you think? Do you see other arguments for or against? Tell me about it on Twitter.
Finally, let us point out that while we focus on this threat, we are losing sight of another, much more imminent one: AI-boosted cyberwarfare. The US Department of Homeland Security acknowledged in 2018 that a Russian cyber-attack could have led to the takeover of part of the US power grid ("They got to the point where they could have thrown switches" to disrupt the flow of electricity to the grid, Jonathan Homer, DHS's chief of industrial-control-system analysis, told the WSJ). It is feared that such attempts will be amplified in frequency and intensity by AI. "A cyber hurricane is threatening us", says a former Airbus strategy director. The French think tank Montaigne Institute has just published a report, Like a Hurricane: Preparing for a Large Cyber Attack.
Nor have we talked about the damage to our societies from AI-amplified disinformation: fake images, videos and audio generated at will, as well as mass phone calls by robots with disarmingly human voices.
(To be notified once a month of upcoming articles on AI, technical progress at large and their impact on human societies, you can register here)
(You can also follow me on Twitter by clicking here; I strive to share only bite-sized, counter-intuitive facts and figures about the future, tech, the economy and geopolitics, most of the time with no link to click)