Adriel Desautels was suddenly in a serious mess, and it was entirely his fault.
Sitting in his college dorm room back in the mid-1990s, Desautels let his curiosity run rampant. He had a hunch that his school’s network was woefully insecure, so he took it upon himself to test it and find out.
“My thoughts at the time were, ‘Hey, it’s university. I’m here to learn. How much harm can there really be in doing it?’” Desautels says in a recent phone call, the hint of a tremor in his voice.
It wasn’t long before he found himself in a dull faculty conference room, university officials hammering him with questions as a pair of ominous-looking men—Desautels says he still doesn’t know who they were, but it’s hard not to assume they had badges in their pockets—stood quietly listening on the sidelines.
Penetrating the school’s network proved simple, he says, and thanks to Desautels’ affable arrogance, talking his way out of trouble was easier still. Forensically speaking, he argued to the school officials, there was no way to prove he did it. It could’ve just as easily been another student, at another computer, in a dorm room that wasn’t his. And he was right; they couldn’t prove shit, Desautels recalls. One of the mystery men smiled knowingly.
The school ultimately dropped the case, and in what has become a tech-entrepreneur cliché, Desautels soon dropped out as well. He was making far too much money as a legitimate hacker-for-hire and decided to start his own company instead. Since the late ’90s, he has run security firms of his own, some of which have come and gone. Since 2006, he’s been the head of Netragard, a penetration testing company that simulates what it calls “realistic threats.” Stamped on the company’s business cards is its motto: “We protect you from people like us.”
Desautels has the nerdy look of any other computer security guy you might run into. But his textbook appearance hides a fierce contrarian in a security industry he believes is building its riches on the false promise that it can keep people safe.
Where many see industry standard practices, he sees a long con. In his eyes, the marketing around most commercially available security tools is “snake oil.” Inflated egos, shoddy work ethics, and unachievable promises leave corporations and the wealth of confidential data they hold at risk. That includes, of course, the personal and private information of the world’s consumers, increasingly exposed by major security incidents—only a small percentage of which we ever hear about.
When not restrained by their clients, who occasionally place annoying limits on how hard they can go to test a company’s security, Netragard’s team of about 35 hackers does everything it can to be unstoppable. The company shuns the use of commercial vulnerability scanners—the kinds of tools most companies rely on to test their security—and opts instead for aggressive penetration tests, both remotely and on site.
The team crafts specialized tools to suit their needs on the fly, including zero-day malware developed in-house. Its social engineers manipulate employees by phone, email, and in person, capturing credentials and other information needed to breach networks. They forge documents, hack security keypads, and clone work badges that open doors concealing tightly controlled servers. If all else fails, they can always just bribe someone at the door. On one occasion, for instance, a Netragard engineer slipped an office maid $50 to get inside a building. He just needed to use the bathroom, he told her. But within minutes, he was leaping over a wall outside lugging a computer server under his arm.
Of course, it doesn’t always go that easily.
“During one on-site test, our engineer managed to break into an office, but we didn’t know the CEO had his own private security,” Desautels tells me, laughing. “They rushed in with guns and started sweeping the office. Then they hog-tie our guy and start dragging him out of there while he’s screaming, ‘I’ve got a get-out-of-jail-free card!’”
During another engagement at a major bank, the name of which Desautels could not disclose, Netragard managed within minutes to seize control of the bank’s systems. The bank’s executives, having already had the networks scanned multiple times by other firms, were in complete shock. Time and again, those scans had overlooked a simple remote desktop protocol protected only by the username and password: “admin,” “admin.” The bank had sunk thousands of dollars into its security. Yet a couple of hackers were able to compromise its entire domain in roughly the time it takes to brew a pot of coffee.
“People need to understand,” Desautels says, “there’s a major difference between the type of security tests companies are willing to pay for and what actual hackers are willing to do to get paid.”
The bank’s people thought they had done their due diligence before Netragard’s team stepped in—gotten all of that “security stuff” taken care of. They could even produce paperwork saying the bank’s systems were secure, that its customers were safe. They weren’t, of course. The tests the bank had paid for were a joke, and the people who’d conducted them, while they may have worked in security, weren’t hackers. Whether they were conscious of it or not, what they really offered was a false sense of security, and that only left their customers at greater risk.
“They don’t understand, if you don’t test at the same level as the threat you’ll be facing, it’s like testing a bulletproof vest with a squirt gun,” Desautels says of companies that are increasingly becoming targets while paying fortunes for “security solutions” that offer little in terms of defense.
Worldwide security spending is estimated to reach $96 billion in 2018, up from $82 billion in 2016 and $89 billion in 2017. Yet the number of data breaches and the amount of data stolen are also projected to increase. In the first half of 2017, more data was stolen than in all of 2016 combined. Estimates place the worldwide cost of cybercrime by 2021 somewhere around $6 trillion—annually.
By the time Desautels founded his company, the industry was becoming increasingly predatory and full of scammers. In 2008, for example, the Federal Trade Commission was forced to act after, it said, “more than one million consumers” were tricked into purchasing “security software” from companies that relied on false and misleading marketing schemes, known as “scareware.” Pop-ups and other ad-like modules on websites shouted spurious claims of compromise at users: “Your PC is infected. Click here for a scan.” But the so-called scans, which could cost $40 or more, were fake.
Legitimate security tools and snake oil remain indistinguishable to the average user, as baffling as the difference between a van Meegeren forgery and a priceless Vermeer to someone without an art history degree. Most corporate executives, unfortunately, are equally blind, as are many IT professionals and the people who hire them, nearly all of whom rely heavily on outside vendors to tell them when and if they’re secure. Security companies love to brag about how their services are being utilized by billion-dollar corporations—up until the point those corporations get compromised.
Famously, the security firm behind the Metasploit framework, Rapid7, once boasted about a major supermarket chain deploying its NeXpose “enterprise vulnerability management solution.” But after hackers breached the chain and made off with 4.2 million credit cards—considered one of the largest such thefts at the time—all of the promotional materials and testimonials related to the chain abruptly vanished from the security firm’s website.
While many legitimate security firms today offer services that can, at the very least, detect known vulnerabilities and threats, their marketing departments are rarely realistic about the level of protection these services truly offer. Walking through a Dallas airport last week, I spotted a sign that read: “Say Goodbye to Data Breaches.” Yeah, right.
It had only been a few years since the company that put up that sign suffered its own humiliating breach. Claiming that you can stop data breaches is a lot like claiming you can stop employees from stealing office supplies. You can minimize, you can reduce, you can mitigate. But you can’t actually stop someone from walking out the front door with a box of paperclips. Even the National Security Agency, the nation’s largest employer of mathematicians, can’t solve this problem.
Simply put, the best vulnerability scanner in the world can’t detect stupidity or the malicious intent of an employee. Just ask the CIA.
In the first few years of business, Netragard’s team built up their reputation by hacking major Las Vegas casinos, pharmaceutical companies, and nuclear research facilities—with permission, of course. The penetration testing side of the business was booming, but the company was also pulling in millions from the sale of zero-day exploits—a practice that remains controversial to this day, particularly among privacy advocates concerned about government spies stockpiling “cyber weapons.”
“We were making money doing offensive penetration testing,” Desautels says, “but we were also known throughout the zero-day world as being the go-to guys if you needed to break something.”
Who was it buying up all these previously undiscovered exploits? Desautels just smiles. “People always ask me that and I always give them the same answer,” he says. “Public and private sector.” In other words, defense contractors, private security firms, and three-letter agencies. To Desautels, the idea that the government is stockpiling zero days for the purpose of mass surveillance is preposterous. “These exploits cost millions of dollars apiece,” he says. “And they have a shelf life of basically three months.” After that, the odds of getting detected rapidly increase; so do the odds that the vulnerability will be fixed.
For more than five years after the company was founded, the majority of Netragard’s zero-day customers were based in America. But between 2013 and 2014, that changed. The company had received a special recommendation from a trusted buyer—which Desautels refers to only as “a very well known entity here in the US”—and so Netragard began dealing to a Milan-based firm known as Hacking Team, which had done business with the FBI, among other US agencies.
The decision to work with these foreign hackers would ultimately threaten not only Netragard’s lucrative zero-day business but its entire reputation.
Hacking Team, a major supplier of surveillance and intrusion software for governments worldwide, got hacked itself in 2015. Roughly 400GB of internal company emails and documents were dumped online, including invoices for zero-day purchases that traced back to Netragard. The emails were later vacuumed up by WikiLeaks, which tossed them into a searchable archive.
The leak revealed that Hacking Team had been peddling spyware to authoritarian regimes with abysmal human rights records. For example, the firm had sold surveillance software to a Sudanese intelligence service notorious for the systematic torture of students, journalists, and activists. Human rights workers tied the armed forces that relied on that intelligence to incidents of mass sexual violence.
That revelation heralded the end of Netragard’s exploit-brokering business—and several shadowy “programs” in the US along with it.
“I was disgusted,” Desautels says. “If we can’t trust the people who we’re selling to to keep these things within the right hands, we just don’t want to be a part of this.” Still, Desautels remains unapologetic about dealing in zero days “because a lot of good can come from this.” The wares brokered by Netragard had been deployed, he says, in operations tracking human traffickers, in child pornography investigations, and in various homeland security scenarios.
“The public doesn’t understand that,” Desautels says. “It’s not like we can jump out and explain, ‘Hey everybody, we’re doing this because it protects you, it saves lives, and it’s necessary.’ It’s something somebody is going to do at some level, somewhere. And for the other countries that aren’t our friends, we need to have the same level of capability.”
As if defending the covert sale of zero days to government agencies wasn’t controversial enough, Desautels also operates under the idea that publicly disclosing vulnerabilities—which is generally seen as a security industry best practice—actually imperils customers. His reasons are difficult to dispute.
When a company discovers a vulnerability, it’s under increasing pressure to alert everyone it can—or else face accusations of putting people at risk in the interest of shareholders. Concealing a vulnerability is often painted as covering one’s own ass. The problem is that once bugs are disclosed, malicious hackers also become aware of their existence. Consumers and businesses, meanwhile, often fail to deploy a fix.
Statistically, less than 1 percent of hacks involve the use of zero-day exploits. In fact, the exploits affecting most companies today date back several years.
“I feel like if you find a bug, if you find out that something is dangerous, you find a flaw in the system, there should be a way to get that resolved without putting the world at risk,” Desautels says. “This is a serious failure on the part of the security industry.”
Desautels believes “egos that exist in the industry” are to blame for the idea that disclosing vulnerabilities is in the public good. “You have researchers who want their name out there. They want to be talked about. It builds up your credibility, it builds up your portfolio,” he says. “But the bad almost always outweighs the good.” A number of major ransomware attacks in several countries last year, for example, took advantage of vulnerabilities that had long been public. For months and even years, companies had simply ignored the need to patch. And it’s consumers who often suffer.
“I wish there was a way to guarantee that people would fix vulnerabilities when they are published,” he says. “But the reality is, there isn’t. It never happens that way, and it probably never will.”
Ultimately, the number one problem with security is that often there is no security. “The majority of people responsible for protecting networks know fuck-all about protecting networks,” Desautels says. “And even those who do aren’t always allowed to.” In his experience, the vast majority of enterprise customers just want a check in a box that says they’ve been tested, that claims they’re secure. They want a gold star to slap on their website. Even among the 20 percent who do care—those who actually want real fixes and real protection—most aren’t given the resources needed to actually get the job done. “The board or the executive management doesn’t fund it, doesn’t understand it, doesn’t see the value, won’t support it,” he says. And when they do, getting a vulnerability patched is “this huge bureaucratic process that sometimes never happens.”
What inevitably follows is a data breach, or some other kind of huge data loss event, or a ransomware attack that cripples a hospital, or an airport, or a shipping company. Millions of people exposed, and what do they get? A subscription to a credit monitoring service; an identity theft insurance card.
“People don’t patch or they can’t patch, the security technology that’s supposed to protect them doesn’t work nearly as well as advertised, and boom, you have critical failures left and right,” Desautels says, pausing for a moment. “You know what doesn’t help either...,” he adds with a sigh, “most people in the security industry couldn’t hack their way out of a wet paper bag with a lightsaber.”
This story was produced with support from the Mozilla Foundation as part of its mission to educate individuals about their security and privacy on the internet.