Specter of Deadly A.I. Looms In Wake of Invite-Only Asilomar Conference

Science fiction is great fun, but should we really be quaking in our boots over dangerous A.I. anytime soon? A growing number of scientists say yes, and the results of their February conference at Asilomar are finally being made public.

At the conference, the scientists debated research limits on A.I., much as their colleagues in genetics and biotechnology have already done with stem cells. Their thoughts were published this weekend under an ominous, dark-cloud headline at The New York Times: "Scientists Worry Machines May Outsmart Man."

The location is actually an interesting bit of trivia, as Asilomar hosted a groundbreaking conference on genetics and biology in 1975. At that conference, scientists met to debate their newfound ability to reshape life at the cellular level. As the Times notes, the conference ultimately led to guidelines for "recombinant DNA research" and a Nobel Prize for organizer Paul Berg.

Today's scientists hope to put similar guidelines in place for A.I., although many worried openly that autonomous people-killing robots were already here.

But for every cautionary tale out of Asilomar these days, there's a detractor ready to debunk the warnings with a bit of what they believe to be common sense. Said startup guru and investor Chris Dixon (via Gawker's own Nick Denton, no less), "Is the nytimes serious? AI researchers I know are embarrassed by the lack of progress, not worried about too much."

Indeed, when Wilson chatted with Wired for War author PW Singer during our ominous Machines Behaving Deadly theme week, we learned that a Terminator uprising was unlikely to happen anytime soon because the "preconditions" simply weren't in place—yet.

"The Global Hawk drone may be able to take off on its own, fly on its own, but it still needs someone to put that gasoline in there," he said. Nevertheless, as Wilson added after that comment, "it's not hard to see how this precondition could eventually be overcome." No kidding.

Many of the details from this conference are still emerging, but from what we read today, the proceedings clearly carried an ominous, cautious tone throughout. "I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions," said Tom Mitchell, a professor of AI and machine learning at Carnegie Mellon University. "[But] the meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives."

Sounds like a split decision. Who's afraid of some big, bad A.I. now? [New York Times]