When the Federal Trade Commission fined Facebook $5 billion for deceiving its users on privacy, the agency celebrated the penalty as “record-breaking and history-making.” Two years and $200 billion in revenue later, Facebook has found a way to turn lemons into lemonade, deceiving its users once again and using a seemingly powerless FTC to do it.
Facebook’s Tuesday night crackdown on research into the dangerous falsehoods perpetuated by its platform was predicated on the lie that the FTC had effectively forced its hand. “We took these actions to stop unauthorized scraping and protect people’s privacy in line with our privacy program under the FTC Order,” it said. That statement sparked a flurry of condemnation from federal lawmakers, who accused the company of working to conceal its role in fostering fraud and abuse that’s having a corrosive effect on the country.
Facebook wasn’t shy about laying out its motivation, though its excuses contained one lie of omission after another. The action was taken, it said, to obstruct research into its platform conducted out of New York University. That work was intended to deepen public understanding of a range of critical societal harms: the attraction of violent belief systems, the weaponization of election disinformation, conspiratorial attitudes eroding the public’s faith in validated medical science, and more.
The Knight Institute, a First Amendment nonprofit housed at Columbia University, is convinced Facebook’s motive was somehow even more sinister: Although Facebook had denounced the NYU researchers and their methods ten months ago, their work was allowed to continue until Tuesday, hours after the company learned the researchers had expanded the project to examine Facebook’s role in the events of January 6, the day of the Capitol insurrection.
Naturally, Facebook’s letter pinning the suspensions on its promises to the FTC neglected to mention the uncomfortable nature of this research: the work it was trying to stifle focused on social media’s role in spreading hoaxes and conspiracy theories undermining public health officials’ efforts to rein in the novel coronavirus and its variants, a virus that, in only a quarter of the time, has racked up a death toll equivalent to that of the American Civil War.
Facebook attempted to malign the researchers by insinuating they were violating the privacy of its users. This isn’t true in the slightest.
A browser extension developed at NYU for Firefox and Chrome—which users install so that researchers can review any ads Facebook inserts in their feeds—was sucking up, the company claimed, “data about Facebook users who did not install it or consent to collection.” This is wildly misleading. The extension catalogs advertisements exclusively, a fact Facebook appears to intentionally avoid stating. NYU doesn’t even collect the names of the people using the tool. Facebook’s aim in omitting this seems clear: to cast the NYU team and Cambridge Analytica in the same light, and the suspensions as a necessary step to prevent its next big breach. In reality, Facebook is only protecting itself from public scrutiny over whether it follows its own guidelines when accepting money to promote information.
It went on to say the tool, known as Ad Observer, had been designed to “evade [its] detection systems,” as though the researchers hadn’t issued a press release announcing its launch or put the code online for the company to review.
Seemingly, the only thing Facebook is complaining about here is that NYU hasn’t given it the ability to track who’s using Ad Observer. And why, an inquisitive person might ask, might Facebook even want to do that? The answer seems obvious: to seize control of the experiment. If Facebook can tell which accounts are aiding in NYU’s research, then it can manipulate the results on a whim. Any ads relevant to the work—anything remotely related to politics, the covid-19 vaccine, or the Capitol riot—could be manually reviewed ahead of time, or omitted from feeds entirely.
The company claims it offered NYU an alternative dataset to further its research and—through a series of omissions—implies the only difference is that its data is more privacy-friendly. What it doesn’t say is that the dataset only covers a three-month period leading up to the 2020 election. NYU’s team expanded the scope of its project this year to include, for instance, disinformation about the covid-19 vaccine; Facebook’s data, however, stops a month before the first vaccine was approved, making it mostly useless today. None of Facebook’s statements mention this.
What’s more, Facebook removed a majority of ads related to politics and social issues from its data. Ads that received fewer than 100 impressions are not included, an arbitrary cutoff the company calls a “privacy protective measure.” (Advertisers who got 101 impressions apparently don’t need privacy.) Those low-dollar ads are, in fact, the meat and potatoes of NYU’s research, which isn’t focused merely on big political campaigns but on smaller ones that sometimes paid less than $100 to misinform select groups of voters using Facebook’s microtargeting platform.
Individually, these omissions are small and to be expected of a company trying to paint itself in the best light possible. But the more they stack up, the less sense its excuses seem to make. Its users were effectively providing NYU with screenshots of their Facebook feeds voluntarily—with everything but advertisements blurred out. That’s not an invasion of privacy. Facebook’s only counteroffer required NYU to give it sole authority to control and limit the data underlying its research.
“These are the actions of a company that clearly has something to hide about how dangerous misinformation and disinformation is spreading on its platform,” said Rep. Frank Pallone, Jr., chairman of the House Energy and Commerce Committee, which has a broad remit over public health matters.
Rep. Jan Schakowsky, chair of the committee’s consumer protection panel, added that Facebook “wants to strike fear in the hearts of their critics and chill academic research that might undermine [its] bottom line.” A spokesperson for Schakowsky went on to say Facebook’s reasoning for the suspension, that it was required under its settlement with the FTC, was bogus.
“Of course we don’t accept that interpretation—look at how Facebook reacted to another scraping incident earlier this year,” the aide said, referring to Facebook’s decision not to tell users if they are among 530 million people whose data was stolen. “They said they had no responsibility to inform the tens of millions of people whose data was scraped that their personal info may have been compromised.”
The FTC’s acting director, Samuel Levine, wrote on Thursday that he was “disappointed” in Facebook for falsely blaming the privacy settlement the agency had negotiated with the company. “Indeed, the FTC supports efforts to shed light on opaque business practices,” he wrote, “especially around surveillance-based advertising.”
Levine went on to thank Facebook for having “now corrected the record,” something it hasn’t actually done. Its original post blaming the FTC hasn’t been updated, and Facebook’s Twitter account, which posted the letter, hasn’t shared any clarification. Levine seems to be referring to a statement Wired published Wednesday:
Joe Osborne, a Facebook spokesperson, acknowledges that the consent decree didn’t force Facebook to suspend the researchers’ accounts. Rather, he says, Section 7 of the decree requires Facebook to implement a “comprehensive privacy program” that “protects the privacy, confidentiality, and integrity” of user data.
Facebook, in other words, has acknowledged that it wasn’t compelled to do anything. Instead, on the very day it learned researchers might be gathering evidence of its role in the Capitol riot, it abruptly moved to quash that research, seizing on a convenient justification. The FTC punished Facebook two years ago for deceiving its users about privacy, a deception the company has just repeated by using their privacy as a scapegoat.
Only a week ago, Facebook’s chief counsel committed to “timely, transparent communication” with the FTC about any “significant developments,” Levine wrote, but no one on the commission had received so much as a phone call about this research crackdown. Regardless, he confirmed, the FTC will take no action. Instead, Levine added that he has “hope” the company wasn’t intentionally using privacy, or its agreement with the agency, “as a pretext to advance other aims.”
The timing of Facebook’s action and the bevy of misleading excuses it offered—compounded by the fact that the researchers remain suspended despite the FTC having now offered its approval in writing—make it clear that that’s exactly what happened. Worse, the lack of consequences signals there’s plenty of wiggle room for Facebook to interpret its privacy rules in ways that best serve its own purposes, not its users’.
More than a hundred academics, researchers, and technologists signed a letter on Friday denouncing the company over its attempt to silence a “critical watchdog over a powerful corporation.”
“The Ad Observatory enables research that is critical to assessing whether Facebook is living up to its own transparency promises,” the letter says. “It allows researchers to verify that Facebook’s Ad Library is publishing all the ads running on its platform. The Ad Observatory also collects information not available in Facebook’s own Library, including information on why ads are being targeted to specific users. This information is critically important to understanding potential manipulation, as well as the broader civic impacts of advertising, particularly political advertising.”
“We see Facebook’s actions against NYU as part of a long-standing pattern among large technology firms,” it concludes, “all of whom have systematically undermined accountability and independent, public-interest research.”