Since the Cambridge Analytica privacy scandal first broke last month, Facebook has tried out a number of PR strategies to address the growing outcry. At this point, the social media company is just going for broke, telling the public it should just assume that “most” of the 2.2 billion Facebook users have probably had their public data scraped by “malicious actors.” That’s huge news. But it doesn’t feel huge—if you managed to pick up that detail at all.
Facebook knew ahead of time that former Cambridge Analytica employee Christopher Wylie was blowing the whistle to multiple media outlets on the political data analytics company’s unauthorized procurement of user data from a third-party app developer. At first, it decided to go with the “get ahead of the story” approach, admitting that it had known about this for years and saying that it was taking proactive measures to punish the bad actors. Once Wylie’s story broke, Facebook pivoted into “be quiet and hope it goes away” mode. But it didn’t go away, and Zuckerberg emerged five days later to give interviews. And it still didn’t go away. On Wednesday, the company switched into a full-on “flood the zone” approach, dropping multiple news posts and hastily inviting hundreds of reporters to an impromptu conference call.
Meanwhile, reports have focused on a variety of issues that have popped up in just the last 24 hours. It’s hard to focus on what matters—and frankly, all of it seems to matter, so in turn, it ends up feeling like none of it does. This is the Trump PR playbook, and Facebook is running it perfectly. It’s the media version of too big to fail; call it too big to matter. Let us suggest that you just zero in on one detail from yesterday’s blog post about new restrictions on data access on the platform.
Mike Schroepfer, Facebook’s chief technology officer, explained that prior to yesterday, “people could enter another person’s phone number or email address into Facebook search to help find them.” This function would help you cut through all the John Smiths and locate the page of your John Smith. He gave the example of Bangladesh, where the tool was used for 7 percent of all searches. Thing is, it was also useful to data-scrapers. Schroepfer wrote:
However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.
The full meaning of that paragraph might not be readily apparent, but imagine you’re a hacker who bought a huge database of phone numbers on the dark web. Those numbers might have some use on their own, but they become way more useful for breaking into individual systems or committing fraud if you can attach more data to them. Facebook is saying that this kind of malicious actor would regularly take one of those numbers and use the platform to hunt down all publicly available data on its owner. This process, of course, could be automated and reap huge rewards with little effort. Suddenly, the hacker might have a user’s number, photos, marriage status, email address, birthday, location, pet names, and more—an excellent toolkit to do some damage.
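To see why this is such a low-effort attack, here is a toy sketch in Python. Nothing in it is Facebook’s actual API; `lookup_by_phone` and the profile data are made-up stand-ins. The point is how mechanically a leaked phone list gets enriched into dossiers.

```python
# Illustrative sketch only. The "search endpoint" here is a hypothetical
# stand-in for any feature that maps a phone number to public profile data.

# Fake public-profile directory: phone number -> whatever the platform exposes.
PUBLIC_PROFILES = {
    "+15550001": {"name": "John Smith", "city": "Dayton", "photo": "js.jpg"},
    "+15550002": {"name": "Jane Doe", "city": "Austin", "photo": "jd.jpg"},
}

def lookup_by_phone(number):
    """Hypothetical search feature: return public profile data, if any."""
    return PUBLIC_PROFILES.get(number)

def enrich(leaked_numbers):
    """For each number from a purchased list, attach any public data found."""
    dossier = {}
    for number in leaked_numbers:
        profile = lookup_by_phone(number)
        if profile:
            # The bare number is now tied to a name, a city, a photo...
            dossier[number] = profile
    return dossier

leaked = ["+15550001", "+15550003", "+15550002"]  # numbers bought on the dark web
print(enrich(leaked))
```

Scale the leaked list to millions of numbers and automate the loop, and you have the operation Schroepfer described.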
What information on your Facebook profile is public? Are you totally sure about your answer? For years, the company has made tweaks here and there to its privacy settings; it even used to change up your settings without informing you. Arguably, it still doesn’t do enough to inform you of what’s going on and why you should care—despite its latest attempt to “make clear” what its revamped data policy means. (By the way, that new data policy also came out on Wednesday, on top of everything else.)
In yesterday’s Q&A, Zuckerberg explained that Facebook did have some basic protections to prevent the sort of automation that makes this particularly convenient, but “we did see a number of folks who cycled through many thousands of IPs, hundreds of thousands of IP addresses to evade the rate-limiting system, and that wasn’t a problem we really had a solution to.” The ultimate solution was to shut the features down. As far as the impact goes, “I think the thing people should assume, given this is a feature that’s been available for a while—and a lot of people use it in the right way—but we’ve also seen some scraping, I would assume if you had that setting turned on, that someone at some point has accessed your public information in this way,” Zuckerberg said. Did you have that setting turned on? Ever? Given that Facebook says “most” accounts were affected, it’s safe to assume you did.
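To get a feel for why that evasion works, here is a minimal sketch of a rate limiter keyed on source IP (an assumption on our part; Facebook hasn’t published how its system worked). The point it illustrates: a per-IP request budget means little to someone cycling through hundreds of thousands of addresses.

```python
import time
from collections import defaultdict

class PerIPRateLimiter:
    """Toy fixed-window rate limiter keyed on the requester's IP address."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(list)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the window.
        recent = [t for t in self.hits[ip] if now - t < self.window]
        self.hits[ip] = recent
        if len(recent) >= self.max_requests:
            return False  # this IP has exhausted its budget
        recent.append(now)
        return True

limiter = PerIPRateLimiter(max_requests=5, window_seconds=60)

# One IP hammering the search endpoint gets cut off almost immediately...
from_one_ip = sum(limiter.allow("1.2.3.4", now=0) for _ in range(100))

# ...but the same 100 requests spread across 100 IPs all sail through.
rotated = sum(limiter.allow(f"10.0.0.{i}", now=0) for i in range(100))

print(from_one_ip, rotated)  # 5 100
```

The fix Facebook landed on is the one this code can’t express: delete the endpoint entirely.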
While Facebook is gradually bumping up the number of users who likely had their data compromised in the Cambridge Analytica scandal, it’s facing a barrage of questions about how many other instances have occurred because of third-party apps. It has responded with vague admissions but no concrete numbers. It even ran full-page newspaper ads saying that it “expects there are others.” Yesterday, the company just decided to go for it, essentially saying, “you know what, all of it, just assume somebody got your data.”
It’s a pretty good strategy. We all get blitzed with information, Facebook gives an illusion that it’s responding to calls for greater transparency, and all the while it’s really not giving us a whole lot of insight into how the big machine works. When you’ve spent years providing virtually no transparency, any ray of sunshine seems like a bright spotlight.
Down the road, we’ll likely find out that a different bad company got ahold of even more bad data and used it in an effort to do something even more horrible than manipulating the American electorate, and Facebook will just say, “Well, we told you so.” Don’t let this tactic numb you to the fact that it shoulders the blame for everything.
Mark Zuckerberg has known from the beginning that his creation was bad for privacy and security. Activists, the press, and tech experts have been saying it for years, but we the public either didn’t understand, didn’t care, or chose to ignore the warnings. That’s not totally the public’s fault. We’re only now seeing a big red example of what it means for one company, controlled by one man, to have control over seemingly limitless personal information. Even the NSA can’t keep its secret hacking tools on lockdown; why would Facebook be able to protect your information? In many respects, it was just giving it away.
Zuckerberg argued in Wednesday’s Q&A that the “reality of a lot of this is that when you are building something like Facebook that is unprecedented in the world, there are going to be things that you mess up.” Imagine if, in 1977, it had become altogether clear that lead paint posed enormous health risks, but instead of banning it, we just said, “Ya know, maybe we’ll just use a little less lead paint.” And then proceeded to let the lead paint industry decide how much lead paint it wants to manufacture.
When he was asked about the effort to #deletefacebook, Zuckerberg said, “I don’t think there has been any meaningful impact we’ve observed.” That could mean that there’s not all that much anger out there, or it could mean that Facebook has tangled itself up in our lives so thoroughly that the idea of deleting Facebook is as unimaginable as, say, suddenly ending the production of fossil fuels. When the people at the social network realized that the search function was an intractable security problem, they concluded there was only one solution: shut it down. Maybe there’s a lesson there.