Science fiction author Isaac Asimov famously predicted that we’ll one day have to program robots with a set of laws that protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we’re developing AI responsibly and safely. At a recent gathering, a group of experts took up exactly that task, drafting 23 principles intended to steer the development of AI in a positive direction—and to ensure it doesn’t destroy us.
The new guidelines, dubbed the 23 Asilomar AI Principles, touch on issues spanning research, ethics, and foresight—from research strategies and data rights to transparency and the risks of artificial superintelligence. Previous attempts to establish AI guidelines, including efforts by the IEEE Standards Association, Stanford University’s AI100 Standing Committee, and even the White House, were either too narrow in scope or far too generalized. The Asilomar principles, by contrast, synthesize much of the current thinking on the matter into a kind of best-practices rulebook for AI development. The principles aren’t yet enforceable, but they are meant to influence how research is conducted moving forward.
Artificial intelligence is at the dawn of a golden era, as evidenced by the emergence of digital personal assistants like Siri, Alexa, and Cortana, self-driving vehicles, and algorithms that exceed human capacities in meaningful ways (in the latest development, an AI defeated the world’s best poker players). But unlike many other tech sectors, this area of research isn’t bound by formal safety regulations or standards, leading to concerns that AI could eventually go off the rails and become a burden instead of a benefit. Common fears include AI replacing human workers, disempowering us, and becoming a threat to our very existence.