Navy Says Combat Robots Multiplying Fast, Need "Battlefield Ethics" Pronto

A new report commissioned by the Office of Naval Research makes a grim statement about the increasing danger of combat-trained robots in the 21st century: "We are going to need a warrior code."

Simply put, increasingly sophisticated robots programmed to kill enemies living and animatronic will at some point be making their own decisions about whom to shoot, and when. Congress has mandated increased reliance on unmanned vehicles for both "deep-strike" air combat and ground combat, to be deployed at a fast clip over the next six years. Meanwhile, the code driving these machines is no longer written by one person who understands all of its facets, but by specialized teams who don't necessarily know every component of the operating system. According to the Times UK article, the report raises the following bone-chilling concerns:

How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians—the robot, its programmer or the US president? Should the robots have a "suicide switch" and should they be programmed to preserve their lives?

The ONR report is the military's first step toward what its chief compiler, Dr. Patrick Lin, calls a "warrior code": essentially a combination of hard-coded rules and an AI learning period in which robots are taught a form of "battlefield ethics."
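The report itself publishes no code, but the "programmed rules" half of that idea is easy to picture. Below is a minimal, purely hypothetical Python sketch of hard-coded rules of engagement acting as a veto over whatever a learned targeting policy proposes; every name in it (Target, EngagementRules, may_engage) is invented for illustration and is not drawn from the report.

```python
# Illustrative sketch only -- not from the ONR report. It imagines the
# "programmed rules" half of a warrior code as a hard-coded filter that
# must approve any engagement before a learned policy gets a vote.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool    # has a human operator confirmed hostile status?
    near_civilians: bool  # are noncombatants inside the blast radius?
    confidence: float     # sensor confidence, 0.0 to 1.0

class EngagementRules:
    """Hard-coded rules of engagement: every check must pass."""
    MIN_CONFIDENCE = 0.95

    def may_engage(self, target: Target) -> bool:
        if not target.is_combatant:
            return False  # never fire on noncombatants
        if target.near_civilians:
            return False  # no engagement near civilians
        if target.confidence < self.MIN_CONFIDENCE:
            return False  # uncertain? hold fire
        return True

rules = EngagementRules()
print(rules.may_engage(Target(True, False, 0.99)))  # True: clear shot
print(rules.may_engage(Target(True, True, 0.99)))   # False: civilians nearby
```

The point of a design like this is that the hard rules sit outside anything the robot learns: the AI can only propose a shot, and the fixed code disposes of it.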

If this doesn't sound new, it's because a) you read a lot of Asimov, b) you love Will Smith more than most Will Smith fans, or c) you caught the civilian discussion of this matter on Gizmodo last year. We'll understand if your answer is d) all of the above. [Times UK via Geekologie]