When an algorithm becomes a weapon
As countries compete to be the first to regulate AI, another race is taking place in the shadows - the race of AI warfare. The upcoming EU AI Act places little emphasis on AI systems used for military purposes. Such systems are exempt from EU regulation and, one could say, are given a free pass. With the unfortunate events that have led to war in eastern Europe, policymakers' views might quickly take a different turn, especially as we might be standing on the threshold of the first AI war.
The third warfare revolution
Experts emphasise that we are living through the third warfare revolution - the first being the invention of gunpowder and the second, not so long ago, the invention of nuclear weapons. Now we are approaching the third revolution - autonomous warfare.
The invention of nuclear weapons in the aftermath of World War II was a grim warning of humankind's ability to annihilate itself. Atomic weapons changed warfare forever, as a war could now turn into mass destruction in the blink of an eye.
It took two tragic atomic bombings, both carried out by the U.S. against Japan, and decades of a nerve-wracking Cold War between the U.S.S.R. and the U.S. for major governments to realise that limits on military weapons were needed to ensure that humankind would not destroy itself.
In 1970, the Treaty on the Non-Proliferation of Nuclear Weapons came into effect. The pact established the concept of recognised nuclear-weapon states (the U.S., the U.S.S.R., China, Great Britain and France) and non-nuclear-weapon states, and aimed to prevent the further spread of nuclear weapons, with the long-term objective of complete disarmament. As we know now, the treaty not only failed to disarm the states that already had nuclear weapons but also did not prevent states that never signed it, such as India, Pakistan and Israel, from obtaining nuclear arms.
The threat of nuclear war still hangs gloomily over humanity, and with the wave of the third warfare revolution and the development of autonomous weapons, that threat has only grown.
AI in the Russia/Ukraine war
In 2017, Russian President Vladimir Putin said: "Whoever becomes the leader in AI will be the leader of the world". Recent events give us a good idea of what Putin's aspirations were when he gave that speech. Fortunately, Russia is far behind in the AI race, and it does not seem that it will become a leader anytime soon - a recent assessment by the U.S. Center for Naval Analyses reported that Russia still trails the U.S. and China in AI defence capabilities. Nevertheless, the might shown by Russia in the Ukraine conflict could be the first step toward more sinister warfare. The same report suggests that "Russia's use of military drones in Ukraine could be [a] psychological and public relations tool which allows Russia to showcase its capabilities and ability to compete in a rapidly evolving information war that goes hand in hand with the actual combat".
The idea that the Russia/Ukraine conflict is the first AI war is quickly being validated, as both sides have reportedly used lethal autonomous weapon systems in combat. The Russians are deploying so-called 'kamikaze' drones, which can loiter in the air for hours, waiting until a target appears. On the other front, Ukraine is successfully using similar weapons received from other countries, such as Turkey's Bayraktar TB2 drones, to inflict significant damage on Russia's heavy artillery.
Experts explain that the use of drones in war has many advantages. They are relatively cheap to produce compared to tanks and missiles (whose cost can run into millions of dollars), easy to use, and can deliver deadly results. A prototype drum magazine that can hold multiple mortar bombs and be attached to a commercial drone has been spotted in use by Ukrainian soldiers. Such devices raise long-term concerns that civilians could obtain these easily accessible weapons and turn them into tools of mass destruction.
From mud-soaked trenches to radio-controlled drones - the advancement of autonomous weapons has completely changed the flow of war, and while robots are sent to fight on the battlefield, another crucial battle is taking place online. There have been reported cases of AI systems analysing blog posts, TikTok videos or tweets to predict troop movements or attacks.
The Russian government is well known for its notorious disinformation campaigns, so it is no surprise that this approach has been used to misinform Ukrainians. In early March, a video of Ukrainian President Volodymyr Zelenskyy announcing Ukraine's surrender to Russia appeared on social media. Another video shared on social media showed Vladimir Putin announcing a peace deal with Ukraine. Fortunately, due to their poor quality, both videos were easily recognised as deepfakes - videos manipulated with deep learning technology. However, as the technology advances, detecting deepfakes will not always be so easy. This short-lived deception was the first publicly known attempt to weaponise deepfakes.
Other AI and ML technologies have also found their use in the Russia/Ukraine war, one of them supplied by the already infamous U.S. facial recognition company Clearview AI. Clearview suffered reputational damage after it was revealed that the company had trained its algorithms on millions of social media pictures obtained without users' permission. Its technology has been used by the Ukrainian military to identify dead Russian soldiers and inform their families. Such grim work has helped Ukraine challenge Russian propaganda that under-reports the number of fatalities suffered by Russian forces in the war. Spreading misinformation, and particularly twisting the facts, is one of the ways this war is being fought, and both sides are active in it.
Can AI make war safer?
It is difficult to place the words war and safe next to each other. The aim should be to avoid war rather than to make it safer. The perception of a "safer" war could backfire, leading to more conflicts, as less is put on the line. With the invention of nuclear weapons, many believed that the damage a nuclear war could inflict would deter countries from starting wars.
Nuclear deterrence theory holds that states would be reluctant to use nuclear weapons in warfare for fear of nuclear retaliation from other states. The theory has been criticised as a "slippery slope", providing a false sense of security to the countries that own nuclear weapons and encouraging further nuclear proliferation.
Deterrence theory, whether applied to nuclear or autonomous weapons, is insufficient because of irrational human nature. History has taught us that desperate leaders often leave morals and common sense behind and default to an "anything-goes" approach. During World War I, Germany used submarines to sink civilian ships. Such practices, known as unrestricted submarine warfare, were perceived as barbaric and were one of the reasons the U.S. joined the war.
After the war, an international treaty initiated by the U.S. banned unrestricted submarine warfare. But after the 1941 Japanese attack on Pearl Harbor, it took no more than six hours for the U.S. military to disregard the legal and ethical norms it had so eagerly advocated and order unrestricted submarine warfare against Japan. The American campaign against Japan's civilian merchant fleet during World War II was later acknowledged to be equivalent to a war crime. This is a stark example of how easily moral beliefs can be abandoned in times of war.
On the other hand, some speculate that AI and autonomous weapons could bring original solutions to the threat of war. For example, countries could agree that all wars would be fought only with robots, which could save many human lives.
The U.S.-based National Security Commission on AI (NSCAI) emphasised in its report that future wars will be fought "algorithm against algorithm". The report estimates that advantage in war will shift from traditional factors, such as the size of a force and its armament, to the amount and quality of data, connectivity, computing power, algorithms and the security of systems, as well as autonomous weapons on the battlefield.
Autonomous weapons could also be better at eliminating mistakes. In one instance, an autonomous weapon correctly identified a photographer pointing a camera at the drone, while similar situations have confused human soldiers and led to fatal results. On the other hand, the difference between the scope of an error made by an autonomous weapon and one made by a human is like "mailing a letter and tweeting it", with some experts pointing out that AI mistakes could lead to "accidental wars". And who would be to blame when such a mistake occurs?
Others speculate that robots could act more humanely than human soldiers, as they would not need to be programmed with self-preservation instincts, potentially eliminating the "shoot-first, ask questions later" attitude. This could also eliminate many of the war crimes committed by soldiers.
The use of autonomous weapons in warfare might have its merits and could be viewed as a safer path for the future of war. Nevertheless, recent examples from Ukraine reveal that, more often than not, the power of autonomous weapons is used not to make war safer but simply to increase the scope of attacks, often targeting civilians rather than equipment on the battlefield.
You obviously never saw the Star Trek episode "A Taste of Armageddon" (Episode 23 of the first series). It is the horrors of war, the death and suffering, that bring wars to a close. Smart people don't need to see that death and suffering before they wake up and find a way to keep the peace, because they can imagine it. Stupid and venal people, unfortunately, do. Only when their children, brothers, sons and mothers have died a grisly death will they decide they have lost. Should one side merely lose an expensive real-life video game, with plastic helicopter toys, it would simply send a hypersonic missile to nuke the major cities of the victor, or release an engineered virus and deny responsibility. Wars where people don't suffer, or suffer far less, would simply last far longer.
Thank you Aidan for some important reflections. Just to be clear, the reason why military use of AI is not covered in the EU's AI Act is not related to any value judgements about military AI. The EU simply does not have legal competency to regulate on matters related to the military. Defence is a national competency of the member states.