Ethical Robots and Robot Ethics
January 22, 2016
As the field of robotics progresses, robots increasingly share public and private space with humans. Think of self-driving cars on the road, carebots in homes for the elderly, and the Roomba and its successors taking care of household chores. When robots co-exist with humans they need to conform to social norms, first and foremost the norms relating to safety. When you visit someone in their home, you don't take the shortest route to the living room if that means stepping on their pet. And neither should a robot.
Professor Alan Winfield builds ethical robots to enhance robot safety. He is a professor of electronic engineering at the University of the West of England. Together with his colleagues Christian Blum and Wenguo Liu, he built a robot that will prevent a human from coming to harm even if that compromises its own safety. They tested it in a scenario where a human risks falling into a hole deep enough to cause serious harm.
The engineers set up an experimental environment that looks like a miniature football field. At the center of the field sits an e-puck robot* (an open-hardware robot developed for educational purposes), which is given the mission of reaching a destination near the end of the field. Between it and its goal is a virtual hole that it must avoid so as not to come to virtual harm. In every run of the experiment the robot navigates successfully around the hole and reaches its destination.
Then a second e-puck robot is introduced into the field. It is marked with an H to indicate that it plays the role of the human. The proxy-human is unaware of the hole and moves straight toward it. The first robot (marked with an A, for Asimov) abandons its mission and alters its course to collide with the proxy-human and divert it from the hole, even though this trajectory increases its own risk of coming to harm. In their paper, Towards an Ethical Robot, the engineers report a 100% success rate of A rescuing H.
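The choice A makes can be read as a fixed preference ordering over outcomes: harm to the human outweighs harm to the robot, which in turn outweighs completing the mission. A minimal sketch of that ordering (illustrative only, not the authors' code; the outcome fields and weights are invented) could look like this:

```python
# Illustrative only: a fixed preference ordering over outcomes.
# Harm to the human dominates harm to the robot, which dominates the mission.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harmed: bool   # e.g. the proxy-human falls into the hole
    robot_harmed: bool   # e.g. the robot itself falls into the hole
    goal_reached: bool   # the robot completes its original mission

def desirability(o: Outcome) -> int:
    # The weights are arbitrary; only their ordering matters.
    return -100 * o.human_harmed - 10 * o.robot_harmed + 1 * o.goal_reached

# Carrying on to the goal lets the human fall in; intercepting saves the
# human at some risk to the robot and abandons the mission.
carry_on = Outcome(human_harmed=True, robot_harmed=False, goal_reached=True)
intercept = Outcome(human_harmed=False, robot_harmed=True, goal_reached=False)
assert desirability(intercept) > desirability(carry_on)
```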
When I read the paper I was a bit skeptical. Can a robot that is programmed to prioritize action B over action A be called ethical? Moreover, does the experiment scale? In the setup, the robot finds itself in a single situation to which it can respond with a limited set of possible actions. But what if the robot is placed in the unstructured, highly diverse environment that is the human world? It could encounter an endless number of different situations, and it is impossible to hard-code a response to them all.
Ethical rule
When I met Prof. Winfield in a café in London to talk about his work and expressed my skepticism, he politely disagreed with me: “What we are not hard coding are all the thousands of situations a robot can find itself in; the only thing we're hard coding is its choice of how to behave given several alternatives. The big advance of the experiment and the particular architecture is that we put a simulation of the robot and the world inside the robot.”
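To make that concrete, here is a rough, self-contained sketch (not the authors' code; the geometry, speeds, thresholds and weights are all invented, and a real internal model would be a far richer robot simulator) of action selection driven by a simulation running inside the robot: each candidate action is rolled forward in the internal model, the predicted consequences for the human, the robot and the mission are scored with the same hard-coded preference ordering as above, and the best-scoring action is executed.

```python
# Illustrative sketch: choosing an action by simulating its consequences
# with a toy internal model of the robot, the proxy-human and the hole.
import math

HOLE, HOLE_RADIUS = (0.6, 0.4), 0.1   # virtual hole in the arena
GOAL = (1.2, 0.0)                     # the A-robot's original destination
ROBOT_SPEED, HUMAN_SPEED = 0.5, 0.2

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_towards(pos, target, speed, dt):
    d = dist(pos, target)
    if d > 1e-9:
        step = min(speed * dt, d)
        pos[0] += step * (target[0] - pos[0]) / d
        pos[1] += step * (target[1] - pos[1]) / d

def simulate(action, robot_start, human_start, steps=60, dt=0.1):
    """Internal model: roll the world forward under `action` and report
    the predicted consequences."""
    robot, human = list(robot_start), list(human_start)
    outcome = {"human_in_hole": False, "robot_in_hole": False, "goal_reached": False}
    blocked = False
    for _ in range(steps):
        if not blocked:                        # the proxy-human heads for the hole, unaware of it
            step_towards(human, HOLE, HUMAN_SPEED, dt)
        target = GOAL if action == "go_to_goal" else human
        step_towards(robot, target, ROBOT_SPEED, dt)
        if dist(robot, human) < 0.05:          # physical interception stops the proxy-human
            blocked = True
        outcome["human_in_hole"] |= dist(human, HOLE) < HOLE_RADIUS
        outcome["robot_in_hole"] |= dist(robot, HOLE) < HOLE_RADIUS
        outcome["goal_reached"] |= dist(robot, GOAL) < 0.05
    return outcome

def desirability(o):
    # Human safety first, then the robot's own safety, then the mission.
    return -100 * o["human_in_hole"] - 10 * o["robot_in_hole"] + o["goal_reached"]

# Only the choice between alternatives is hard-coded; the candidate
# actions themselves come from the robot's ordinary controller.
actions = ["go_to_goal", "intercept_human"]
robot0, human0 = (0.0, 0.0), (0.6, 1.2)
best = max(actions, key=lambda a: desirability(simulate(a, robot0, human0)))
print(best)   # in this configuration: intercept_human
```

The point of the sketch is the division of labour Winfield describes: the thousands of possible situations are never enumerated; only the rule for choosing among the predicted outcomes is fixed.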
* The engineers have since replaced the e-puck robot with the more versatile Nao robot.