While news of Mark Zuckerberg’s appearance on the Joe Rogan Experience was making the rounds Friday night, eventually trending on Twitter, attorneys at his company were preparing to settle one of the biggest lawsuits it’s ever faced — ignited by one of the biggest controversies it’s ever caused.
Court filings show that, at least in principle, a deal has been struck in San Francisco federal court between lawyers defending the company now called Meta and the two law firms that represent millions of users burned in the 2018 data-privacy disaster now known simply as the Cambridge Analytica scandal. Both sides have requested 60 days to finalize the terms of the settlement, and financial terms of the proposed deal have not been disclosed.
Plaintiffs in the case allege that Facebook violated a laundry list of state and federal consumer protection and privacy laws by sharing the personal information of its users with the Britain-based political consulting firm Cambridge Analytica. The consulting firm, engulfed by the controversy, filed for bankruptcy over four years ago, around the same time the class action suit was initiated.
Carole Cadwalladr, the British journalist in whom the whistleblower, Christopher Wylie, confided, framed the settlement on Saturday as an eleventh-hour attempt by Zuckerberg to avoid being deposed.
“It is a measure of how desperate Zuckerberg is to avoid answering questions about Facebook’s cover-up of the Cambridge Analytica data breach that Facebook has settled this case just days away from him being cross-examined under oath for six hours,” Cadwalladr told the Guardian, adding the company seemed prepared to pay “almost any sum of money” to avoid its executives being questioned under oath. Facebook did not immediately respond to a request for comment.
As Facebook missteps go, Cambridge Analytica is nearly ancient history, but when the scandal erupted in earnest, it made worldwide news and elicited dozens of lawsuits against the company. Most were eventually consolidated in the Northern District of California, where plaintiffs filed an exhaustive complaint alleging consumer fraud and negligence.
While it had been reported in 2015 that data gathered from tens of millions of Facebook users had been used to create “psychological profiles” of U.S. voters in an effort to elect Sen. Ted Cruz president of the United States, it wasn’t until a whistleblower came forward in 2018 that the controversy truly exploded.
Following Cruz’s defeat in the 2016 primaries, Cambridge Analytica went on to consult for the Donald Trump campaign. The firm’s primary investor, hedge fund billionaire Robert Mercer, contributed more than $15 million toward Trump’s victory, records show, while the Trump campaign spent at least $5 million of its war chest on Cambridge Analytica’s software.
Facebook faced a $5 billion fine as a result of a Federal Trade Commission investigation into its privacy practices spurred by the Cambridge Analytica revelations. It was the largest penalty ever issued by the agency. A separate shareholder lawsuit filed last year accuses Facebook executives, including former Chief Operating Officer Sheryl Sandberg, of overpaying the Federal Trade Commission by some $4.9 billion as part of a 2019 settlement that found Facebook had deceived users over its ability to control the privacy of their personal information. The plaintiffs allege the overpayment was part of an “express quid pro quo” arrangement to protect Zuckerberg from being personally named.
An amended complaint filed by the plaintiffs in the class action suit describes Facebook as a company that was initially valued purely for its growth. Users were joining the platform steadily and in droves. They grew comfortable with posting personal information, believing it to be shared solely “with the connections they’d selected.”
Absent this expectation of privacy, Facebook may have never taken off. The site’s evolution from one accessible to only a handful of universities into the 21st century’s most dominant communications platform may best be understood as a series of successive changes designed to favor user engagement over the value users actually place on those selected “connections.”
In a leaked internal post from 2018, one employee observed that the company’s News Feed — run by a powerful ranking algorithm that constantly strives to surface only content it believes users will engage with — had reduced the cost of “friending” to almost nothing: “By reducing the cost of friending close to zero, ranking changes the semantics of friending from ‘I care about you’ to ‘I might conceivably care about something you share someday.’”
In 2006, after Facebook failed to give any notice before rolling out the News Feed, users were shocked to suddenly find that their every interaction was being broadcast to everyone they’d ever friended. This sparked a sizable user revolt. As many as 100,000 users flocked to a group called “Students Against Facebook News Feed.” It didn’t matter that those same interactions were never really hidden in the first place. To Facebook, it was a novel way of nudging users to engage more with one another. But to many users, the experience of using the site was drastically different from what they’d signed up for, and substantially less private.
Zuckerberg’s immediate response was to accuse his users of being hysterical. In a post instructing them to “calm down” and “breathe,” the then-22-year-old CEO promised: “Nothing you do is being broadcast; rather, it is being shared with people who care about what you do—your friends.”
While he’d later admit to having done a “bad job” at explaining the Feed (and “an even worse job” at giving users control over it), Zuckerberg’s assurances that people’s information was only being shared with those they knew, or had chosen to friend, would quickly prove false.
An increasing push to monetize the site created what the plaintiffs in the lawsuit call a “profound conflict of interest.”
With its revenue generated almost exclusively from advertising, Facebook stood to gain only so much from growth alone. It was equally essential, if not more so, to squeeze Facebook’s existing user base for as much engagement as possible. The best method, it found, was third-party apps, the most notable of which, early on, was FarmVille.
Between June 2009 and March 2010, the “agriculture-simulation” game drew upwards of 34 million daily users. Its in-game purchases quickly generated hundreds of millions in revenue, with Facebook receiving 30 percent of the take. There was a major financial incentive to work closely with outside developers and arm them with whatever data they needed to succeed.
In his 2019 book, Zucked: Waking Up to the Facebook Catastrophe, early investor Roger McNamee — a venture capitalist and cofounder of Silver Lake, one of Silicon Valley’s premier buyout giants — describes the company’s earliest efforts to monetize user data: “Social games like FarmVille cause people to spend much more time on Facebook. Users see a lot of ads. Zynga had a brilliant insight: adding a social component to its games would leverage Facebook’s architecture and generate far more revenue, creating an irresistible incentive for Facebook to cooperate. In 2010, Facebook introduced a tool that enabled third-party developers to harvest friends lists and data from users.”
When experts on the platform’s history speak of Zuckerberg’s efforts to “monetize data,” they do not necessarily mean treating the data itself as a commodity. Rather, it became its own form of currency.
Prompted by the Cambridge Analytica scandal, a report submitted by the U.K. House of Commons in 2019 cites Ashkan Soltani, a former technologist at the Federal Trade Commission, who describes this process further: “Facebook’s business model is ‘to monetise data’, which evolved into Facebook paying app developers to build apps, using the personal information of Facebook’s users. To Mr. Soltani, Facebook was and is still making the following invitation: ‘Developers, please come and spend your engineering hours and time in exchange for access to user data.’”
As Facebook toiled away at finding more inventive ways to drive engagement—often by conducting experiments on its own unaware users—it also forged relationships with outside data brokers, combining the information its users voluntarily provided with rich personal data collected externally about their habits and activities elsewhere online.
A March 2018 study cited by the class action complaint found that three-fourths of Facebook users were unaware the company continued to track them after they left the site.
Eventually, tens of thousands of apps—games, quizzes, and surveys—acquired access to the data that Facebook was offering. Among them, in 2014, was This Is Your Digital Life, a “personality quiz” created by an academic researcher named Aleksandr Kogan. While only 270,000 people actually downloaded This Is Your Digital Life, Facebook later estimated that Kogan had obtained data on roughly 87 million people, while acknowledging it couldn’t be sure of the figure.
When news broke in 2018 that Kogan had handed the data to Cambridge Analytica, Facebook’s head of security, Alex Stamos, now in charge of a consultancy of his own, rushed to defend the company on Twitter. His defense was that Facebook had disclosed, in its terms of service, that “friend data” could be accessed through its API. Users, in other words, should have known. Stamos went on to denounce characterizations by the New York Times and the Guardian, which labeled the leak a “data breach.” The term was not appropriate, he said, because Facebook had not, strictly speaking, been hacked.
He deleted the tweets soon after.
Attorneys for the plaintiffs have cited an array of Zuckerberg’s own public statements to help solidify the case against him, lending, perhaps, some credence to Cadwalladr’s claim that the sudden appearance of a settlement is an effort to keep him from testifying under oath.
“We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well,” Zuckerberg told the press in 2018. “That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy.”
“We didn’t take a broad enough view of what our responsibility is,” he added, “and that was a huge mistake.”