A Facebook initiative announced last year designed to generate “independent, credible research about the role of social media in elections” is faltering, BuzzFeed reported this week, citing multiple sources with knowledge of the program and its participants. According to Facebook’s former chief security officer, reporters who covered the company’s Cambridge Analytica scandal are at least partly to blame.
Alex Stamos, who oversaw security at Facebook when news first broke about the scandal last year, criticized BuzzFeed and “other outlets” over what he called “unbalanced reporting on privacy,” saying the media coverage of Facebook’s numerous privacy violations has been geared all along toward hampering its ability to share data for legitimate research.
[See updates below for comments from Stamos, who also says this article is an “extremely unfair characterization” of his tweets and “Gizmodo at its finest.”]
“If the misuse of an API by an academic is considered the greatest privacy scandal in the history of the internet, how do you think the companies are going to treat future academic research?” he asked, referring to Aleksandr Kogan, the data scientist whose app, “thisisyourdigitallife,” collected the Facebook user data later acquired by Cambridge Analytica. (BuzzFeed’s story notably does not mention Cambridge Analytica once.)
Stamos’ characterization of the scandal as “misuse of an API by an academic” belies the fact that Facebook intentionally designed a system in which a single user was authorized to consent to the release of data belonging to thousands of other users. Thanks to Facebook, while only a few hundred thousand people downloaded Kogan’s app, Cambridge Analytica was able to acquire data on some 86 million others, none of whom agreed to take part in Kogan’s research.
This practice continued during Stamos’ tenure at Facebook, as described by a New York Times story last year, which Stamos took shots at on Friday, saying it “completely misrepresent[ed] normal industry practices like 3rd party clients.” The story, which relied on internal documents generated in 2017, described how Facebook allowed companies like Microsoft and Amazon to access the names and contact information of users’ friends without their consent.
This system behind Facebook’s $50 billion business makes it a liability for any user to “friend” another. There’s simply no way to be sure which friends will agree to surrender one’s personal information. And that’s an incentive for the company not to make its data-sharing practices too apparent, which is why its privacy settings have never fully reflected how those practices actually work. (As part of its recent $5 billion settlement with the FTC, Facebook agreed to start asking users’ permission before sharing their data beyond what’s specified in its privacy settings.)
According to Stamos, this is just a “normal” industry practice, and any attempt to shine a light on the finer details of Facebook’s operations only serves to negatively sensationalize its behavior. Facebook is now too afraid to hand over its data to give researchers a better look at how its platform impacts democratic elections around the world—something CEO Mark Zuckerberg called “crazy” after the 2016 election—and, per Stamos, the media is to blame.
BuzzFeed’s tech and business editor, John Paczkowski, responded to Stamos’ criticism by calling out “a decade of disregard for user privacy and a profound lack of transparency about how our personal data is being used,” citing, among other examples, Facebook’s attempt to pilfer user data through promotion of its “privacy” product, Onavo.
Onavo, spyware masquerading as a virtual private network (VPN), was marketed to Facebook users as a means to “protect” their accounts. In reality, it granted the company access to a wealth of private information—data which most VPNs are proud to advertise they do not collect—including daily wifi and cellular data usage. The app also collected data on other apps running on the user’s phone, including Facebook competitors like Snapchat. Nevertheless, Facebook presented the app as a way for users to keep their personal information secure.
Onavo remains one of the most grotesque attempts by Facebook to plunder its users’ data for marketing purposes, and it was reportedly ejected from the Apple App Store on precisely that basis almost exactly a year ago.
While Stamos has since criticized Facebook’s use of Onavo, he was notably the company’s most senior official in charge of security oversight and compliance when Facebook decided to start advertising Onavo to users in the Facebook app. It was still running last year on the day he quit the company. Still, maybe he believes that, too, is somehow BuzzFeed’s fault.
Update, 6pm: Stamos sent Gizmodo the following:
I’m not sure it would be right to say that I’m ‘implicating’ the media. I am glad [BuzzFeed’s Craig Silverman] wrote the piece, because I really want SS1 to work out and for differential-privacy based solutions to academic research to become the standard. The problem is that most of the media coverage reflects the larger societal issue: we don’t know how open we want these companies to be or how to define ‘public’.
There was recently a mini-scandal about comments being scraped by a Mexican company off of public [Facebook] pages. Twitter offers an API that does the same thing and nobody bats an eye. As the WSJ pointed out, FBI is trying to hire a company to do something Facebook has to try to prevent.
I think the coverage of data protection issues at Buzzfeed and elsewhere almost never talks about the tradeoffs.
“Yes, I think Onavo should not exist and that [Facebook] should not have bought them,” he said, adding: “‘I don’t like Facebook for these other things’ does not change the basic trade-offs with SS1 and other attempts by academics to study what’s happening on the world’s largest social network.”