Asimov's Three Laws of Robotics were meant to ensure that robots would serve as safe and useful tools for humans, but some modern roboticists say the rules don't mesh with current technology and propose a new set of robotics laws.
Asimov's Laws of Robotics, first fully outlined in the short story "Runaround," were meant to give robots utility as tools for humans while ensuring that the robots would never be used to harm humans:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws have been popular not only with science fiction enthusiasts but also with professional roboticists, to the extent that the South Korean government is using them as a guideline for its Robot Ethics Charter. But, according to David Woods, a systems engineer at Ohio State University, and Robin Murphy, a rescue robotics expert at Texas A&M University, when dealing with robots that are not yet self-aware, Asimov's Laws function better as a literary device than as an ethical guideline.
Still, Woods and Murphy believe that Asimov was on the right track, and that engineers and programmers need a set of rules to govern their robots and the way they deploy them, both to ensure human safety and to allow robots to operate with minimal human oversight:
Their first law says that humans may not deploy robots without a work system that meets the highest legal and professional standards of safety and ethics. Their second revised law requires robots to respond to humans as appropriate for their roles, and assumes that robots are designed to respond to certain orders from a limited number of humans.
Their third revised law proposes that robots have enough autonomy to protect their own existence, as long as such protection does not conflict with the first two laws and allows for smooth transfer of control between human and robot. That means a Mars rover should automatically know not to drive off a cliff, unless human operators specifically tell it to do so.
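Woods and Murphy state their laws in prose, not code, but a rough sketch may help make the logic concrete. The sketch below is purely illustrative and is not taken from their proposal: the Rover and Command classes, the AUTHORIZED_OPERATORS set, and the cliff_detected check are all hypothetical names. It shows a robot that accepts commands only from operators in an approved role (their second law) and refuses a hazardous action unless a human explicitly overrides it (their third law).

```python
# Hypothetical illustration only; none of these names come from Woods and Murphy.

AUTHORIZED_OPERATORS = {"mission_control"}  # second law: only certain humans may command this robot


class Command:
    def __init__(self, sender: str, action: str, override_safety: bool = False):
        self.sender = sender                    # who issued the command
        self.action = action                    # e.g. "drive_forward"
        self.override_safety = override_safety  # explicit human override of self-protection


class Rover:
    def cliff_detected(self) -> bool:
        """Placeholder for onboard hazard sensing."""
        return False

    def execute(self, action: str) -> None:
        print(f"executing: {action}")

    def handle(self, cmd: Command) -> None:
        # Second revised law: respond only to humans whose role permits it.
        if cmd.sender not in AUTHORIZED_OPERATORS:
            print("ignored: sender is not an authorized operator for this robot")
            return

        # Third revised law: protect the robot's own existence by default,
        # but allow a smooth transfer of control back to the human operator.
        if self.cliff_detected() and not cmd.override_safety:
            print("refused: hazard detected; awaiting explicit operator override")
            return

        self.execute(cmd.action)


if __name__ == "__main__":
    rover = Rover()
    rover.handle(Command("mission_control", "drive_forward"))
    rover.handle(Command("unknown_user", "drive_forward"))
```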
Too often, Woods and Murphy say, roboticists try to push robots beyond the limits of their programming, giving them more autonomy than is technologically feasible, resulting in injuries to humans and damage to property and the robots themselves. The best model of Woods and Murphy's proposed laws? NASA, which carefully tests robots and identifies their limitations, so that the machines can operate with minimal human supervision during the routine portions of missions, while a human operator can take over if there are any surprises.
And even if robots do become more autonomous, Woods and Murphy note, they will still require ethical guidelines more complex than the Laws of Robotics:
"People are making this leap of faith that robot autonomy will grow and solve our problems," Woods added. "But there's not a lot of evidence that autonomy by itself will make these hard, high-risk decisions go away."