Killer robots
Mimi Hammad
Technical Recruiter & CEO | Expert in Talent Acquisition for Tech & Startups | Passionate about Building Diverse Teams
For most of human history, war has been a giant meat grinder: masses of humans and their sidekicks trying desperately to kill each other with sticks, rocks, swords, and now bombs and guns. The promise of precision weapons is to reduce the overall horror of war, but on this trajectory, is Skynet about to go online?
Is it acceptable for a machine to make life-and-death decisions in war? Many situations in war are grey, and that is a real challenge for autonomous weapons. How would a robot know the difference between what’s legal and what’s right? How would it know when to tell the difference?
When people worry about autonomous weapons, one of the fears is robots running amok and killing civilians, the sort of thing you see in Terminator movies. In fact, autonomous weapons have been around for a long time. Landmines, for example, are not intelligent, but once a human arms and places them, they work on their own. One of the arguments for autonomous weapons is that, just as self-driving cars may someday reduce accidents on the roads, machines could do the same in war. Autonomous weapons could avoid civilian casualties, making war more precise and more humane. In World War II, nations bombed entire cities and killed hundreds of thousands of civilians. One of the unfortunate realities of the technology at the time was that bombing was not very precise. Today, advanced militaries can place a bomb precisely where they want it.
Moving forward, militaries are developing next-generation robotic combat systems that would use more autonomy, though how much autonomy is not always clear from their plans. Senior Russian military leaders have said their intention is to build fully roboticized units capable of independent operations. The US Navy’s fleet now includes the Sea Hunter, a totally autonomous ship. Swarms are the next step in military robotics, and people are experimenting with what they might look like and how they would dramatically change military tactics and warfare. How do you fight with a swarm? What commands do you give a swarm? How do swarms even fight each other? What are the best tactics for swarm warfare?
The idea of handing a lethal weapon to a computer seems, on the face of it, risky and problematic. We want to be mindful of the risks, but we also don’t want to foreclose the possibility that there may be ways to make war more precise and more humane, and to save civilian lives in the process.
As this technology evolves, we have to think about how involved humans should be when technology takes a life. Are there things we don’t want machines to decide without us? Some may argue that there are things machines can’t deal with: situations that are too complex and require human judgement and moral understanding. For example, the laws of war don’t set an age for combatants, so a robot would not know the difference between a five-year-old and a thirty-five-year-old. A robot programmed to comply perfectly with the laws of war would kill regardless of age.
If we fought a war and no one felt bad about the killing, if no one slept uneasily at night afterwards, what would that say about us and the fact that we were killing other human beings?
Right now, the United States is not building lethal autonomous weapons. War presents complex moral challenges, and I don’t think any machine we see today, or in the foreseeable future, will be able to deal with them. Not because machines cannot make those decisions, but because they shouldn’t.
These moments of humanity seem important. They seem like something we wouldn’t want to give up. It’s an opportunity for humans to exercise empathy and recognise the humanity of the other side. If the history of war has taught us anything, it’s that just because we don’t want a technology to be used for killing people, that doesn’t mean it won’t be.
We’ve seen many times through our history when, in peacetime, militaries said “we’re not going to use this kind of technology”. Before World War I, militaries said they wouldn’t use poison gas, yet once the shooting started and poison gas was available, they used it. Prior to World War II, militaries said they wouldn’t carry out aerial attacks on cities… In the course of interviewing experts for “Army of None”, Larry Schuette, the former director of research at the Office of Naval Research, asked a really important question about the attack on Pearl Harbour: “when we’re thinking about autonomous weapons, is it December 6th or December 8th?” Is it before or after these game-changing events that we change the way we think about risk and what we’re willing to tolerate in warfare?