The Military's Spending Millions to Build Robots with Morals

Imagine a future where autonomous robots make life-or-death decisions based not just on data, but on a preprogrammed moral code. This is not the plot of a dystopian novel. It's the directive of a new Pentagon program that will scare your socks off.

Over the next five years, the Office of Naval Research is awarding $7.5 million in grant money for university researchers to build a robot that knows right from wrong. This sense of moral consequence could make autonomous systems operate more efficiently and, well, autonomously. Some people even think that machines could make better decisions than humans, since they could follow the rules of engagement to the letter and calculate the outcomes of multiple different scenarios.

It sort of makes sense when you think of it like that. "With drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions," AI researcher Steven Omohundro told Defense One. "Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define 'the rules of war' and this technology is likely to increase the stakes for that."

On the other hand, programming robots with a certain moral code assumes we can all agree on what that moral code is. Without digging too far into your college philosophy syllabus, it's easy to see how this could be a pretty contentious task. And while computer processing power could come in handy when, say, handling triage at a field hospital, it gets super tricky when you're pointing missiles at people.

"I do not think that they will end up with a moral or ethical robot," said Noel Sharkey, another AI expert, in response to the news. "For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care. It will follow a human designer's idea of ethics."

The debate goes on and on. It's worth having, though, especially since we're depending more and more on machines. And research suggests that we already hold robots morally accountable for their actions. So why not program some morals into them? Maybe because then they'd decide that the right thing to do is take control of the world away from weak humans. You've read Asimov. You know how this story ends.