Does the World Want Lethal Autonomous Robots?
It does not happen often: a call for a public debate about the desirability of a technology that is still at least 20 or 30 years away from even being possible. But that is exactly what Human Rights Watch and Harvard Law School’s International Human Rights Clinic are doing with the publication of the report Losing Humanity: The Case Against Killer Robots.
The title of the report leaves no doubt about the position the authors are taking: lethal autonomous robots should be stopped from ever coming into existence. They argue that autonomous systems are inherently incapable of operating in compliance with the international laws of war.
Autonomous robots would be unable to distinguish between civilians and combatants, would lack the human compassion that naturally deters people from killing others, and, because they are truly autonomous, would leave no human who could be held responsible for violations of the laws of war.
Proponents argue that robots can be programmed to use deadly force if, and only if, doing so is in compliance with the law. Moreover, because they lack the human drive toward self-preservation and aren’t motivated by anger, they might actually be better at conflict resolution than humans. And most importantly, the deployment of autonomous weapons can save the lives of human soldiers.
Automatic weapons
Fully autonomous weapons are defined in the report as systems that decide when and how to attack without a human in the loop. Many armies already deploy automatic weapons, but all of them retain a point of human decision-making in the process. Automatic weapons also operate within a limited scope. Robotics professor Noel Sharkey, quoted in the report, defines an automatic robot as one that “carries out a pre-programmed sequence of operations or moves in a structured environment. A good example is a robot arm painting a car.” An autonomous robot, he continues, “is similar to an automatic machine except that it operates in open and unstructured environments. The robot is still controlled by a program but now receives information from its sensors that enable it to adjust the speed and direction of its motors (and actuators) as specified by the program.”
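To make Sharkey’s distinction concrete, here is a minimal Python sketch. The robots, sensor readings and function names are entirely hypothetical; the point is only the structural difference between a fixed sequence and a sensor-driven loop.

```python
# Minimal, illustrative sketch of Sharkey's distinction (all names hypothetical).

PRE_PROGRAMMED_POSES = [(0, 0), (10, 0), (10, 5)]  # fixed joint positions for a paint arm
SAFE_DISTANCE = 2.0                                # metres

def run_automatic(move_to, spray):
    """Automatic robot: a pre-programmed sequence in a structured environment."""
    for pose in PRE_PROGRAMMED_POSES:  # the sequence never changes, whatever happens around it
        move_to(pose)
        spray()

def run_autonomous(read_distance, steer_away, drive_forward, steps=100):
    """Autonomous robot: still just a program, but each action depends on live
    sensor data coming from an open, unstructured environment."""
    for _ in range(steps):
        if read_distance() < SAFE_DISTANCE:  # the sensor reading decides what happens next
            steer_away()
        else:
            drive_forward()
```

In both cases the program fully determines the behaviour; in the second, however, that behaviour can no longer be predicted from the program alone, because it depends on whatever the sensors report.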
There are many examples of automatic weapons that ultimately have a human finger on the trigger. For instance, South Korea has deployed sentry robots in the Demilitarized Zone. The SGR-1 can detect a human two miles away in the daytime and a mile away at night with its heat and motion sensors. When a human target is detected, the robot sends a warning to a human-controlled command center. The soldier in control can communicate with the target to establish whether he or she is an enemy. It is the soldier who decides whether to fire the sentry robot’s 5.56 mm machine gun.
An example of an automatic weapon operating within a structured environment is Israel’s Iron Dome defense system. Stationed at the border with Gaza, it is currently operational in the resurgent Israeli-Palestinian conflict. It uses radar to detect incoming missiles and responds with interceptor missiles. On detecting a threat, the system recommends a counter-action. Because a minimal response time is essential to the success of a counter-action, the operator must decide in a split second. But the limited extent of human input is compensated for by the limits of the system’s scope: it operates as an air defense system only.
Laws of war
Fully autonomous weapons, by definition, do not have humans in the loop. And if they are designed to operate in an unstructured environment, they cannot be programmed with a predetermined response for every possible scenario. That means the robots have to make autonomous decisions about engaging and destroying an enemy based on their sensor data and the algorithms processing it.
And this is what the authors of the report consider to be the core of the problem. The lack of human cognition and external constraint makes autonomous robots incapable of complying with the laws of war. In particular the Geneva Conventions, the treaties concerning the humanitarian treatment of non-combatants in time of war, stipulate protocols a robot cannot be expected to follow.
At the heart of the Geneva Conventions is the stipulation that combatants must ‘distinguish between the civilian population and combatants’. A military encounter that fails to do so is unlawful. Robots are incapable of making such a distinction, the authors state, especially in the age of asymmetric warfare, where enemy combatants intermingle with the local population. In traditional state-to-state warfare soldiers can be recognized by their uniforms and their position in the battle zone. In asymmetric warfare human soldiers rely heavily on judging the intentions of potential targets. Such nuanced observations are difficult for robots.
Another problem is liability. Because the robot is autonomous, neither the programmer nor the commanding officer can be held liable. Robert Sparrow, a professor of political philosophy and applied ethics, is quoted in the report saying: ‘The possibility that an autonomous system will make choices other than those predicted and encouraged by its programmers is inherent in the claim that it is autonomous.’ And since the robot can’t be punished, war crimes might move beyond the reach of the law.
Ethical algorithms
Proponents of autonomous weapons point out that robotic warfare will prevent the death and injury of human soldiers. They also have greater confidence in the ability of future robots to comply with the laws of war. Ronald Arkin, a roboticist at the Georgia Institute of Technology, is introduced in the report as a proponent. He has articulated the idea of endowing robots with an ‘ethical governor’: essentially a set of algorithms that prevents the robot from applying lethal force unless specified conditions are met. For instance, if the robot cannot determine whether a subject is a civilian or a combatant, it can’t use force.
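As a rough illustration of what such a constraint might look like in code, here is a minimal sketch. It is not Arkin’s actual architecture; the target classes, confidence threshold and function names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Target:
    classification: str  # "combatant", "civilian" or "unknown", from a hypothetical perception system
    confidence: float    # how certain that perception system is, between 0.0 and 1.0

def governor_permits_force(target: Target, min_confidence: float = 0.99) -> bool:
    """Allow lethal force only when every hard constraint is met.
    If the robot cannot establish that the target is a combatant, force is withheld."""
    if target.classification != "combatant":
        return False                      # civilians and unknowns are never engaged
    if target.confidence < min_confidence:
        return False                      # uncertainty defaults to restraint
    return True

# An ambiguous contact is never engaged; only a confidently classified combatant passes.
print(governor_permits_force(Target("unknown", 0.40)))      # False
print(governor_permits_force(Target("combatant", 0.995)))   # True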
With such constraints in place, the robot cannot act in violation of the laws of war. And because robots aren’t driven by anger, fear or self-preservation, they might even be better suited for conflict resolution than their human counterparts.
Public debate
Having weighed the pros and cons, Human Rights Watch and Harvard Law School’s International Human Rights Clinic conclude that the world does not need killer robots. They recommend that states prohibit the development, production and use of fully autonomous weapons through an international, legally binding instrument. Roboticists, they suggest, should establish a code of conduct.
Perhaps more interesting than their somewhat simplistic conclusion is the call for a public debate. It would indeed be valuable to have a planet-wide discussion about what we want from our future technology.
Image: Taranis British Unmanned Aircraft. Source: Losing Humanity