One side effect of all this talk about the risk of killer robots is that everyone is now rushing to assure us exactly how they would prevent it. Microsoft CEO Satya Nadella is the latest to do so, in an extremely vague way.
Earlier this month, Google engineers published a paper outlining five principles of safety. Their principles were things like “avoid negative side effects” which, while not earth-shaking, at least were fairly concrete and illustrated with examples from a cleaning robot. Nadella’s, as outlined in Slate, are filled with jargon.
1. A.I. must be designed to assist humanity: Glad we’re clear on this one; I was unsure for a little bit there.
2. A.I. must be transparent: Sounds good, although his point that “the tech will know things about humans, but the humans must know about the machines” seems so obvious that the need to point it out is a little ominous.
3. A.I. must maximize efficiencies without destroying the dignity of people: He adds that this means “the tech industry should not dictate the values and virtues of this future,” but this may have already happened. See: everyone trying to learn to code.
4. A.I. must be designed for intelligent privacy: What is “intelligent” privacy? Does it mean something like Facebook’s privacy settings, which are so complicated that only tech-savvy people know how to keep strangers from seeing their tagged college photos?
5. A.I. must have algorithmic accountability so that humans can undo unintended harm: Wonder what he thinks about Google’s kill switch.
6. A.I. must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate: This last one is the most clearly written and makes the most sense, given all the instances of an AI, oh, classifying black people as gorillas or, much worse, mistakenly flagging black defendants as having a higher risk of committing future crimes.
To be fair, no real-world safety rules can really live up to Asimov’s Three Laws of Robotics. But when all of Nadella’s jargon boils down to “try not to hurt people,” he maybe could have done a little better.