The 42 Reasons why Killer Robots are Unstoppable
And how we must lead in A.I. / Autonomous Weapons
Being in Silicon Valley with regular trips to Washington, DC and Beijing, China, I spend a lot of time with top innovators in A.I., autonomy, robotics, cyber, and of course gaming, the pioneer of many such technologies. And at BENS and the National Cyber & Tech Council, we support the NSA, U.S. Cyber Command, the DoD, the Department of Homeland Security, and the White House in aspects of cybersecurity and A.I./Autonomy.
And in all this, one thing has become clear to me: “Killer Robots”, aka autonomous weapons systems (AWS) operating at least partially outside of direct human control in actual combat roles, are coming, and there is no way we can stop them. The U.S. and our allies must lead, not follow, this revolution, which is arguably even more critical to our security, liberty, and prosperity than cybersecurity and nuclear deterrence:
- Game-changing weapons have extreme consequences. From bow & arrow to machine gun, from bronze spear to ICBM, from trebuchet to nuclear bomb, game-changing weapons make or break empires, as well as free, secure, and prosperous societies.
- A.I./Autonomy is a Game-changer. Autonomous weapon systems for combat sense more completely, target more precisely, act more rapidly, are more enduring, more flexible, more dispensable… than any human warfighter. Autonomous command & control reduces time for concepts of operations development, target selection, and mission assignments.
- All of the world’s military leaders understand this. China, for example, plans to put artificial intelligence into its newest cruise missiles. The Chinese Navy is developing unmanned surface and underwater systems, including autonomous unmanned underwater vehicles to track and find submarines.
- Chinese scientists are working hard to master autonomous intelligence for these underwater robots, not just individually but also teaching them how to ‘swarm’: to work together with one another, and with other naval platforms, with little or no human oversight.
- At the 2015 World Robot Conference in Beijing, China’s Harbin Institute of Technology unveiled a fleet of semi-autonomous robots that can wield anti-tank weapons, grenade launchers, or assault rifles.
- Russia’s military is preparing to fight on a robotic battlefield: Russia’s Strategic Missile Forces announced they would in the near future deploy armed sentry robots that can select and destroy targets, with no human in- or even on-the-loop, at five missile installations.
- Russian defense contractor Uralvagonzavod, builder of the Armata T-14, stated “we will be able to show prototypes [of unmanned, semi-autonomous tanks] in 1.5 to 2 years. We are gradually moving away from crewed machines.”
- The U.S., with its utmost dependence on qualitative warfighting superiority to offset its quantitative disadvantages, cannot rest and has woken up to the challenge. Autonomy and human-machine assistance are core elements of the Pentagon’s Third Offset Strategy.
- The U.S. military already has about 11,000 unmanned aerial vehicles (UAVs) and a similar number of unmanned ground systems. Most have limited Autonomy today, but their A.I. grows continuously more capable. The U.S. military is working on dozens of different programs to increase the autonomous capabilities of its weapons systems.
- UAVs started mostly in military applications but have expanded into commercial, scientific, recreational, and agricultural uses, as well as policing, surveillance, aerial photography, and drone racing. Civilian drones now vastly outnumber military ones. More than a million UAVs have been sold in the United States. Their A.I./Autonomy progresses quickly, and the advancements feed back to the military.
- Reactive Autonomy, e.g. collective flight, real-time collision avoidance, wall following, and situational awareness, has already been achieved in high-end systems using sensors such as cameras, GPS, motion sensors, lidar, radar, and sonar.
- More proactively, UAVs can take off from and land on aircraft carriers, travel to space, and spy on and track targets. One of the ultimate goals for UAVs, the fully autonomous swarm, will be technically feasible in a few years.
- The Pentagon’s tech-focused Defense Innovation Unit Experimental (DIUx) awarded a “prototype project in the area of Autonomous Tactical Airborne Drones.” The contract comes from the Naval Special Warfare Command, which oversees Navy SEALs.
- The U.S. Air Force Research Lab has demonstrated the simulated ability of drones to defeat human pilots. An experienced former Air Force battle manager tried repeatedly and failed to score a kill and “was shot out of the air by the [A.I.] reds every time after protracted engagements.”
- Predators and Reapers are made for counterterrorism operations, but are not designed to withstand anti-aircraft defenses or air-to-air combat. The DoD’s “Unmanned Systems Integrated Roadmap” foresees UAVs in combat, including extended capabilities, human-UAV interaction, managing increased information flux, increased Autonomy and UAV-specific munitions.
- Commercial technologies are outpacing military technology in Autonomy in unmanned undersea vehicles (UUVs), too. While their development was pioneered by the U.S. Navy, the commercial sector with undersea oil exploration and oceanography now leads in autonomous platforms.
- Counterterrorism and time-critical targeting require wider field-of-view sensing with higher resolution and frame rates. As new sensors have reached the battlefield (Gorgon Stare, Argus, Constant Hawk) data collection far outpaces the ability to send raw sensory data back for analysis. On-board A.I. is the only viable solution and will deliver game-changing advantages.
- In 2014, the Pentagon’s influential Defense Science Board (DSB) studied the use of Autonomy across all warfighting domains. The team identified opportunities for the DoD to enhance mission efficiency, shrink life-cycle costs, reduce loss of life, and perform new missions. It concluded there are “substantial operational benefits and potential perils associated with the use of Autonomy”.
- The DSB just released its “Summer Study on Autonomy” on the future of A.I., Autonomy, and Robotics. It states that “Autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions, but the DoD must move more rapidly to realize this value.”
- DSB stresses that machines and computers can process much more data much more quickly than can humans, “enabling the U.S. to act inside an adversary’s operations cycle.” And that is why it is “vital if the U.S. is to sustain military advantage.”
- The study provides recommendations aligned with three over-arching vectors: accelerating DoD’s adoption of autonomous capabilities, strengthening the operational pull for Autonomy, expanding the envelope of technologies available for use on DoD missions.
- According to the DSB, “Autonomy delivers significant military value”, including “opportunities to reduce the number of warfighters in harm’s way, increase the quality and speed of decisions in time-critical operations, and enable new missions that would otherwise be impossible”.
- DSB confirms that fielded capabilities demonstrate ongoing progress in embedding autonomous functionality into systems, and “many development programs already underway include an increasingly sophisticated use of Autonomy”.
- It also points to the commercial sector: “Autonomy is becoming a ubiquitous enabling capability for products from advisory expert systems to autonomous vehicles,” providing “opportunities for the DoD to leverage the investments of others, while also providing substantial capabilities to potential adversaries.”
- DSB highlights that one of the most contentious applications of Autonomy is for command & control in military operations or warfighting, but “the potential benefits are real”.
- In the study, four categories are used to characterize underlying technologies critical to the development of autonomous systems: Sense; Think/Decide; Act; Team. Advances in all four areas compound each other, similar to the case I make in The A.I.-Robotic Chain-Reaction.
- Advances to Sense are driven by a diverse array of applications (including, but not limited to, autonomous systems) that share a common need to reduce sensor size, weight, power requirements, and (I add), the need for more and better data.
- Artificial intelligence, which enables the Think/Decide functionality in autonomous systems, is benefiting from advances in computational power (e.g. GPUs, TPUs) as well as availability of vast data sets from more and better Sensors.
- Demand for productivity growth via automation was an early driver of advances in actuators and mobility that Act; and “demand is growing as robotics become more intelligent and new applications are emerging”.
- A growing number of applications require human-machine teaming and collaboration, letting each do what it does best, but also “imposing new requirements on the underlying Team technologies”.
- The study concludes that the “DoD must accelerate its exploitation of Autonomy”, both to “realize the potential military value and to remain ahead of adversaries who also will exploit its operational benefits”.
- To date, of all the great powers, only the U.S. and the UK have policies on autonomous weapon systems. They allow research but limit deployment. The U.S. policy has backdoors, however, and the military can use Autonomy if deemed necessary. There are no consistent guidelines for R&D and, above all, Training & Testing.
- Entering the A.I.-Robotic Chain-Reaction, we will see further dramatic breakthroughs in autonomous, self-learning systems. Self-learning Autonomy will be superior in many (likely most) strategic and tactical aspects of warfighting, and sooner than linear projections suggest. While U.S. policy highlights Verification & Validation, learning systems are not fully predictable.
- Anecdotally, when we built the dynamic Game-world of RIFT at Trion, we pioneered fully data-driven, server-based, and much more advanced A.I. for non-player characters (NPCs) in a non-linear, dynamic virtual environment. We quickly encountered unpredictable, so-called emergent behavior. One classic example was a beach with crocodiles roaming and killing crabs. Later we found the crabs teaming up, “swarming” so to speak, and killing crocodiles, without ever having been programmed to.
- In other words, classic Verification & Validation is necessary but not sufficient for non-linear, dynamically evolving, learning systems, which in any case are “Alien” Intelligence as much as Artificial Intelligence: they “reason” along different paths than humans, have other sensors and other data sources than us, operate on different contextual assumptions, and lack aspects of environmental and ethical awareness.
- Autonomous weapon systems need to be trained & tested, aka red-teamed, at all stages of development and deployment. The best and only feasible approach is to “Play” through the widest possible range of scenarios in an evermore complex, non-linear, dynamic virtual (aka Game) world and to record and analyze all “Training Missions” and all learning progress.
- By constantly Playing, during R&D and beyond, key aspects of behavior can be emphasized or eliminated. With evermore detailed world models, the effectiveness of adversary actions and autonomous counteractions can be evaluated and vulnerabilities found.
- As development progresses, all Training Missions should take place in worlds of ever-greater complexity and fidelity, as well as evermore extreme scenarios, to catch potential outliers and unforeseen reactions. This can only be done in dynamic Game worlds as Training & Testing environments.
- When things deviate from acceptable outcomes, autonomous weapon systems must allow humans at all times to immediately intervene, correct, and terminate actions, keeping them, if not in the action loop, then at least in the information-, reaction-, and accountability-loop.
- Finally, all autonomous weapons systems must be fully audit-capable and must preserve, record and transmit all their actions faithfully and in all available detail.
- Autonomous weapons systems are inevitable. And as with cyber, nuclear, mine, chemical, and biological weapons, the U.S. must lead the development of international rules of engagement. We first need to proactively build consensus among our allies, as well as a shared vision and R&D, Training & Testing, and Deployment Guidelines for A.I., Autonomy, and Robotics.
- Finally, we must work with and establish new international regulatory bodies to agree and legislate all aspects of autonomous weapons systems and strictly enforce such new laws and binding agreements.
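The crab-swarm anecdote and the swarming ambitions described above rest on the same mechanism: simple local rules producing unplanned collective behavior. Here is a minimal sketch of how clustering can emerge without any agent being programmed to form a group; the two rules, the parameters, and the scenario are all illustrative assumptions, not taken from any real system.

```python
import random

# Minimal sketch of emergent "swarming": each agent follows only two
# local rules (drift toward the average position of nearby agents, and
# back off from any too-close neighbor). No agent is programmed to form
# a group, yet clusters emerge -- the kind of behavior that classic
# Verification & Validation of individual agents will not predict.

NUM_AGENTS = 30
NEIGHBOR_RADIUS = 20.0   # how far an agent "senses" others
MIN_SEPARATION = 2.0     # collision-avoidance distance
STEPS = 200

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def step(agents):
    new_agents = []
    for a in agents:
        neighbors = [b for b in agents if b is not a and dist(a, b) < NEIGHBOR_RADIUS]
        if not neighbors:
            new_agents.append(a)
            continue
        cx = sum(b[0] for b in neighbors) / len(neighbors)
        cy = sum(b[1] for b in neighbors) / len(neighbors)
        # Rule 1: drift toward the local center of mass.
        dx, dy = (cx - a[0]) * 0.05, (cy - a[1]) * 0.05
        # Rule 2: back off from any too-close neighbor.
        for b in neighbors:
            if dist(a, b) < MIN_SEPARATION:
                dx += (a[0] - b[0]) * 0.5
                dy += (a[1] - b[1]) * 0.5
        new_agents.append((a[0] + dx, a[1] + dy))
    return new_agents

def mean_pairwise_distance(agents):
    pairs = [(a, b) for i, a in enumerate(agents) for b in agents[i + 1:]]
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

random.seed(1)
agents = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(NUM_AGENTS)]
before = mean_pairwise_distance(agents)
for _ in range(STEPS):
    agents = step(agents)
after = mean_pairwise_distance(agents)
print(f"mean pairwise distance: {before:.1f} -> {after:.1f}")
```

The point of the sketch is that the grouping is nowhere in the code as a goal; it falls out of the interaction of two innocuous local rules, exactly the way the RIFT crabs' behavior did.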
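The Play-based Training & Testing loop argued for above can be sketched as a scenario sweep: randomize scenario parameters at increasing difficulty, run the policy through each one, record every run, and look for failure outliers. Everything here, the toy policy, the scenario model, and the confidence threshold, is a hypothetical stand-in, not a real weapon-system interface.

```python
import random

# Illustrative sketch of scenario-sweep red-teaming: run a stubbed
# autonomous policy through many randomized scenarios of increasing
# difficulty, record every run, and surface the failure mode that
# matters most (engaging a non-hostile target).

def toy_policy(scenario):
    # Stub: "engage" only when target confidence clears a threshold.
    return "engage" if scenario["target_confidence"] > 0.8 else "hold"

def make_scenario(rng, difficulty):
    # Harder scenarios blur the confidence signal (more ambiguity).
    noise = rng.gauss(0, 0.1 * difficulty)
    is_hostile = rng.random() < 0.5
    confidence = max(0.0, min(1.0, (0.9 if is_hostile else 0.3) + noise))
    return {"target_confidence": confidence, "truth_hostile": is_hostile}

def sweep(runs_per_level=1000, levels=(1, 2, 4)):
    rng = random.Random(0)
    records = []
    for difficulty in levels:
        for _ in range(runs_per_level):
            s = make_scenario(rng, difficulty)
            action = toy_policy(s)
            # The outlier red-teaming must catch: engaging a non-hostile.
            failure = action == "engage" and not s["truth_hostile"]
            records.append({"difficulty": difficulty, "action": action,
                            "failure": failure})
    return records

records = sweep()
for d in (1, 2, 4):
    subset = [r for r in records if r["difficulty"] == d]
    rate = sum(r["failure"] for r in subset) / len(subset)
    print(f"difficulty {d}: false-engage rate {rate:.3f}")
```

A policy that looks flawless at low difficulty starts producing false engagements as the scenarios get more extreme, which is precisely why Training Missions must keep escalating complexity rather than stop at the comfortable cases.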
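The intervention and audit requirements in the last bullets can likewise be sketched in code: every proposed action passes through a human-on-the-loop supervisor that can allow, veto, or terminate, and every decision is appended to an audit trail that can be exported in full. The class, the verdict strings, and the supervisor policy below are all hypothetical illustrations, not any real system's API.

```python
import json
import time

# Hypothetical sketch: an autonomous controller proposes actions; a
# human-on-the-loop supervisor can veto or terminate at any time, and
# every decision (including blocked ones) is recorded to an audit log,
# as the article argues all autonomous weapons systems must support.

class AuditedController:
    def __init__(self, supervisor):
        self.supervisor = supervisor   # callable: action -> "allow" | "veto" | "terminate"
        self.audit_log = []            # full, faithful record of all decisions
        self.terminated = False

    def _record(self, action, verdict):
        self.audit_log.append({"time": time.time(),
                               "action": action,
                               "verdict": verdict})

    def execute(self, action):
        if self.terminated:
            self._record(action, "blocked:terminated")
            return False
        verdict = self.supervisor(action)
        self._record(action, verdict)
        if verdict == "terminate":
            self.terminated = True
            return False
        return verdict == "allow"

    def export_log(self):
        # Transmit the audit trail in full detail (here: as JSON).
        return json.dumps(self.audit_log)

# Usage: a supervisor policy that vetoes any "engage" action lacking
# positive identification, and terminates on a protected-target hit.
def supervisor(action):
    if action.get("target_class") == "protected":
        return "terminate"
    if action["type"] == "engage" and not action.get("pid_confirmed"):
        return "veto"
    return "allow"

ctrl = AuditedController(supervisor)
print(ctrl.execute({"type": "move", "heading": 90}))                  # True
print(ctrl.execute({"type": "engage", "pid_confirmed": False}))       # False (vetoed)
print(ctrl.execute({"type": "engage", "target_class": "protected"}))  # False (terminated)
print(ctrl.execute({"type": "move", "heading": 180}))                 # False (blocked)
print(len(ctrl.audit_log))                                            # 4
```

Note that even actions blocked after termination are still recorded: the accountability-loop only works if the log captures what the system tried to do, not just what it was allowed to do.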