Killer Robots: The Tech Moves Faster Than The Debate
Lethal autonomous weapon systems will become feasible within years, not decades. As technological development steamrolls on, an international treaty to regulate such killer drones is still lacking, says computer science professor Stuart Russell.
Last week I attended the HITB Haxpo in Amsterdam, the expo arm of the Hack In The Box security conference where hackerspaces, makers and companies showcase their latest inventions and hacks. One of the hackerspaces had brought a toy gun hooked up to a Kinect. The weapon tracked people as they walked by, aiming for their center of mass. “Military grade weaponry,” joked one of the participants. “This is nowhere near military grade,” I replied, “and that's what scares me.”
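To make the demo concrete, here is a minimal sketch of the kind of logic involved. The read_joints() and set_pan_tilt() functions are hypothetical stand-ins for real Kinect skeleton tracking and a servo controller; this is an illustration of the idea, not the hackerspace's actual code.

```python
# Sketch: aim a pan/tilt mount at a tracked person's center of mass.
# read_joints() and set_pan_tilt() are hypothetical stubs standing in
# for real Kinect skeleton output and servo hardware.
import math

def read_joints() -> list[tuple[float, float, float]]:
    """Stub: (x, y, z) joint positions in metres, camera-relative.
    A real build would pull these from Kinect skeleton tracking
    (e.g. via OpenNI or libfreenect)."""
    return [(0.1, 1.2, 3.0), (0.0, 0.9, 3.0), (-0.1, 0.5, 3.0)]

def center_of_mass(joints):
    """Approximate the target's center of mass as the mean joint position."""
    n = len(joints)
    return tuple(sum(axis) / n for axis in zip(*joints))

def aim_angles(x, y, z):
    """Convert a 3-D point to pan/tilt angles (degrees) for the mount."""
    pan = math.degrees(math.atan2(x, z))   # left/right
    tilt = math.degrees(math.atan2(y, z))  # up/down
    return pan, tilt

def set_pan_tilt(pan, tilt):
    """Stub for the servo controller driving the toy gun."""
    print(f"aiming: pan={pan:.1f} deg, tilt={tilt:.1f} deg")

joints = read_joints()
if joints:
    set_pan_tilt(*aim_angles(*center_of_mass(joints)))
```

The unsettling part is how little is needed: a commodity depth sensor, a few lines of trigonometry and a servo loop.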
A dystopian image came to mind: the city of Amsterdam with target-tracking guns mounted on walls next to today's ubiquitous CCTV cameras, and a person watching a wall of screens in city hall, deciding which types of anomalous behavior merit pulling the trigger.
As it turns out, I wasn't imaginative enough. In the near future it will be possible for such a decision to be made not by a human but by an algorithm. And wall-mounted isn't exactly state of the art either.
Lethal Autonomous Weapon Systems (LAWS) are quickly becoming a reality, according to Professor Stuart Russell of the University of California, Berkeley, who specializes in artificial intelligence:
“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined. For example, the technology already demonstrated for self-driving cars, together with the human-like tactical control learned by DeepMind's DQN system, could support urban search-and-destroy missions.”
Russell's position was published in the May 27 edition of the scientific journal Nature in a segment called Robotics: Ethics of artificial intelligence.
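For readers unfamiliar with the DQN system Russell mentions: its core is a neural network trained to estimate the value of each action via a temporal-difference update. Below is a minimal, illustrative sketch of that update, assuming PyTorch is available; the network sizes, state dimensions and the random placeholder batch are my assumptions, not DeepMind's actual configuration.

```python
# Minimal sketch of the DQN update rule. Shapes, sizes and the fake
# replay batch below are illustrative placeholders only.
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

# Q-network: maps a state to one estimated return per action.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())  # frozen copy, synced periodically
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy: explore randomly, otherwise pick the best-valued action."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(states, actions, rewards, next_states, dones):
    """One temporal-difference step toward r + gamma * max_a' Q_target(s', a')."""
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        targets = rewards + GAMMA * target_net(next_states).max(1).values * (1 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Placeholder batch standing in for transitions sampled from a replay buffer.
batch = 32
td_update(
    torch.randn(batch, STATE_DIM),
    torch.randint(0, N_ACTIONS, (batch,)),
    torch.randn(batch),
    torch.randn(batch, STATE_DIM),
    torch.zeros(batch),
)
```

Russell's point is that nothing in this recipe is exotic: the same handful of components, rewired, could learn tactical behavior instead of Atari games.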
The United Nations has been discussing LAWS to ascertain whether they should be covered under the Convention on Certain Conventional Weapons (CCW). The CCW prohibits the use of weapons that 'cause unnecessary or unjustifiable suffering to combatants or affect civilians indiscriminately'. The convention entered into force in 1983, and additional protocols have been added over the years, such as the 1995 restriction on blinding laser weapons.
Some countries, like Japan and Germany, want to ban LAWS outright. But the US, Israel and the United Kingdom, the three countries most advanced in creating these types of systems, are reluctant to negotiate a treaty, claiming existing laws sufficiently cover these new capabilities.
Russell, who attended one of these meetings to give expert testimony, writes in his Nature article:
“In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their maneuverability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenseless. This is not a desirable future.”
He calls upon the AI and robotics community to take a stand and participate in the debate. Because if no one speaks up, we'll find ourselves in a dystopian sci-fi scenario all too soon.
Image: Campaign to Stop Killer Robots CC-BY.