In the television show Person of Interest, the Machine uses government surveillance data to predict which individuals will be involved in a crime, whether as perpetrators or victims. How close are we to a real-life version of the Machine? Current statistical modeling systems suggest we're not far off.
A new article in the Economist examines a handful of systems used for varied predictive purposes. The first is the Spatio-Cultural Abductive Reasoning Engine, better known by the ominous acronym SCARE, developed by computer scientists at the United States Military Academy at West Point. SCARE's purpose is to predict the behavior of a group that is notoriously difficult to anticipate: guerrilla armies. While military tacticians may have trouble predicting the movements of guerrillas, SCARE draws on analyses of guerrilla behavior in Iraq and Afghanistan to come up with accurate predictions. For example, when trying to determine where guerrillas plant bombs, SCARE builds models from previous bombing sites, topographical and street-map data, and ethnic, linguistic, and attitudinal data about the people in the region. Today, SCARE can predict the location of a munitions dump to within about 700 meters, and project leader Major Paulo Shakarian is upgrading the system to include phone-traffic data.
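The article doesn't spell out SCARE's actual algorithms, but the general flavor of this kind of geospatial prediction is easy to sketch. The toy Python below ranks candidate cache locations by their proximity to previous bombing sites; every coordinate, weight, and distance here is made up for illustration and isn't taken from SCARE.

```python
# A toy illustration of spatial prediction from past attack sites: score
# candidate cache locations by proximity to known bombing sites. This is a
# crude stand-in for SCARE's geospatial abductive reasoning, with invented
# numbers throughout.
import math

# Known bombing sites as (x, y) positions in meters on a local grid.
bombing_sites = [(1200, 800), (1500, 950), (1350, 700), (2100, 1600)]

# Candidate locations for a munitions cache (e.g., sampled from a street map).
candidates = [(1300, 850), (1900, 1500), (400, 300)]

def score(candidate, sites, bandwidth=700.0):
    """Sum of Gaussian kernels centered on past sites; higher = more likely."""
    total = 0.0
    for sx, sy in sites:
        d = math.hypot(candidate[0] - sx, candidate[1] - sy)
        total += math.exp(-(d / bandwidth) ** 2)
    return total

# Rank candidates; the top-scoring point plays the role of the predicted dump.
ranked = sorted(candidates, key=lambda c: score(c, bombing_sites), reverse=True)
print("Most likely cache location:", ranked[0])
```

A real system would layer terrain, road networks, and demographic data on top of this, but ranking locations by plausibility is the same basic move.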
The US Navy has its own modeling system, RiftLand, which is being developed by computer science professor Claudio Cioffi-Revilla at George Mason University and focuses on East Africa around the Great Rift Valley. Using data from charitable organizations, academic groups, and governments, RiftLand seeks to bring clarity to the complexities of a region where many countries are dealing with civil conflict. In a place where central governments are not the driving force, RiftLand has to base its predictions on very specific details about much smaller groups: which groups provide certain types of services, which groups value those services, which groups are likely to turn hostile toward one another, and which groups are likely to owe other groups favors.
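That kind of bookkeeping is exactly what agent-based models are built for. Here's a minimal, entirely hypothetical Python sketch of how a model might track those relationships and score the odds of two groups cooperating; the groups, attributes, and weights are invented and aren't drawn from RiftLand itself.

```python
# A minimal sketch of agent-based bookkeeping: track which groups provide and
# value services, who is hostile to whom, and who owes whom favors, then score
# the likelihood of cooperation between any two groups. The scoring rule and
# example groups are invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    provides: set = field(default_factory=set)    # services the group supplies
    values: set = field(default_factory=set)      # services the group needs
    hostile_to: set = field(default_factory=set)  # names of rival groups
    owes_favor_to: set = field(default_factory=set)

def cooperation_score(a: Group, b: Group) -> float:
    """Crude heuristic: complementary needs and favors raise the score,
    hostility lowers it."""
    score = 0.0
    score += 2.0 * len(a.values & b.provides)   # b can supply what a needs
    score += 2.0 * len(b.values & a.provides)   # a can supply what b needs
    score += 1.0 if b.name in a.owes_favor_to else 0.0
    score += 1.0 if a.name in b.owes_favor_to else 0.0
    score -= 3.0 if (b.name in a.hostile_to or a.name in b.hostile_to) else 0.0
    return score

herders = Group("herders", provides={"livestock"}, values={"water", "grazing"},
                hostile_to={"farmers"})
farmers = Group("farmers", provides={"grain", "water"}, values={"livestock"},
                owes_favor_to={"herders"})
print(cooperation_score(herders, farmers))  # hostility weighed against mutual needs
```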
Meanwhile, systems like MIT's Condor, Aptima's Epidemiological Modelling of the Evolution of MEssages (E-MEME), and Lockheed Martin's Worldwide Integrated Crisis Early Warning System (W-ICEWS) analyze blogs, tweets, Facebook posts, news media, and other websites to read the social barometer. And yes, politicians and governments are using these sentiment analyses to predict the likelihood of protests and demonstrations.
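The named systems are far more sophisticated, but the core of lexicon-based sentiment analysis fits in a few lines of Python. In the toy example below, the word lists, threshold, and sample posts are all invented for illustration and have nothing to do with Condor, E-MEME, or W-ICEWS.

```python
# Lexicon-based sentiment scoring in miniature: count unrest-related and
# calming words in a stream of posts and report a rough "temperature".
UNREST_WORDS = {"protest", "strike", "riot", "corruption", "outrage", "march"}
CALM_WORDS = {"celebrate", "agreement", "peaceful", "holiday", "support"}

def unrest_score(posts):
    """Fraction of sentiment-bearing words that point toward unrest."""
    hot = cold = 0
    for post in posts:
        for word in post.lower().split():
            word = word.strip(".,!?#@")
            if word in UNREST_WORDS:
                hot += 1
            elif word in CALM_WORDS:
                cold += 1
    return hot / (hot + cold) if (hot + cold) else 0.0

posts = [
    "Thousands plan to march against corruption tomorrow #protest",
    "A peaceful holiday weekend in the capital",
    "General strike called after the latest outrage",
]
score = unrest_score(posts)
print(f"Unrest score: {score:.2f}", "- elevated" if score > 0.6 else "- calm")
```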
As impressive (and perhaps frightening) as these systems are becoming, they still rely on a strong human element. People must understand how other people operate — their values and interests — and how their behavior can be influenced. So the Machine could someday be real — provided a human is constantly teaching it what all that surveillance data really means.
What makes heroic strife [The Economist — Tip of the hat to Mike for bringing this to our attention]