Knowing a cyberattack is coming before it happens is hugely useful, but it's tricky to achieve in practice. Now MIT has built an artificial intelligence system that can predict attacks 85 percent of the time.
Cyberattack spotters work in two main ways. Some are AI systems that simply look for anomalies in internet traffic. They work, but often throw up false positives: warnings about a threat when nothing is actually wrong. Other systems are built on rules developed by human experts, but it's hard to hand-craft rules that catch every attack.
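To see why pure anomaly detection produces false positives, consider a toy statistical detector (this is an illustration of the general problem, not AI2's actual method): anything far from the historical mean gets flagged, so a legitimate traffic surge looks just like an attack.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations
    from the mean of past observations."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

# Requests per minute on a typical day (illustrative numbers).
baseline = [100, 110, 95, 105, 98, 102, 97, 108]

print(is_anomalous(baseline, 104))  # normal traffic -> False
print(is_anomalous(baseline, 900))  # attack-scale spike -> True
print(is_anomalous(baseline, 450))  # viral-post surge -> True (a false positive)
```

The detector has no way to tell the last two cases apart, which is exactly the gap that human feedback is meant to close.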
Instead, researchers from MIT’s Computer Science and Artificial Intelligence Lab built a new AI—creatively named AI2—that combines the two approaches.
AI2 uses three different machine learning algorithms to detect suspicious events. Like any AI system, though, it needs feedback from a human to tell it whether those events really are suspicious. Most of us couldn't tell a DDoS attack from a legitimate surge in traffic, so AI2 shows its first set of results to an expert analyst.
With that feedback, it learns which events it should have classified as attacks and refines its internal models accordingly. Over time, it becomes better at separating signal from noise, surfacing fewer incorrect results and in turn saving the expert's time.
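The feedback loop can be sketched roughly like this (the scores, labels, and update rule here are illustrative assumptions, not MIT's implementation): the system shows its most suspicious events to an analyst, and the analyst's labels are used to recalibrate where the alert threshold sits.

```python
def refine_threshold(labeled, old_threshold):
    """Place the new alert threshold between the lowest-scoring
    confirmed attack and the highest-scoring benign event."""
    attack_scores = [s for s, is_attack in labeled if is_attack]
    benign_scores = [s for s, is_attack in labeled if not is_attack]
    if attack_scores and benign_scores:
        return (min(attack_scores) + max(benign_scores)) / 2
    return old_threshold

# Anomaly scores for one day's flagged events (hypothetical values).
events = [0.95, 0.40, 0.88, 0.35, 0.70]
threshold = 0.30  # naive starting point: almost everything is an alert

# Analyst reviews the top events and labels each as (score, is_attack).
analyst_labels = [(0.95, True), (0.88, True), (0.70, False), (0.40, False)]
threshold = refine_threshold(analyst_labels, threshold)

# Midpoint between the lowest attack (0.88) and highest benign (0.70),
# so next round only the genuinely attack-like events trigger alerts.
alerts = [s for s in events if s > threshold]
print(alerts)  # [0.95, 0.88]
```

Each round of labels narrows the model's notion of "suspicious", which is why the false-positive rate falls as the analyst keeps providing feedback.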
In tests carried out using 3.6 billion log lines of internet activity, AI2 was able to identify 85 percent of attacks ahead of time. It also created five times fewer false positives than existing cyberattack spotting AIs. The work was presented last week at the IEEE International Conference on Big Data Security in New York City.
Over time, the researchers explain, it only gets more effective. “The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” explained Kalyan Veeramachaneni in a press release. “That human-machine interaction creates a beautiful, cascading effect.”