If Mark Zuckerberg’s concerned citizen act has convinced you that Facebook’s trying really, really hard, or that the company is too big to possibly moderate hate on its platform, read this resignation letter from software engineer Ashok Chandwaney. After five and a half years at the company, they’ve unequivocally concluded that Facebook is “an organization that is profiting off hate in the US and globally”—echoing, essentially, the crux of protests from advertisers and fellow employees, findings from human rights investigators, and repeated criticism from civil rights leaders.
In the letter, which the Washington Post reports was initially published on the company’s internal message board, Chandwaney frames Facebook as an institution that’s “on the wrong side of history.” They go on to enumerate examples: Facebook’s failure to mitigate hateful Islamophobic lies that fueled killings during the 2016-2017 Myanmar genocide; Facebook’s refusal to remove a widely reported Kenosha, Wisconsin, militia page immediately preceding the deadly shooting of protesters; Facebook’s decision to give Donald Trump a platform to call for shooting civilians; and Facebook’s continued allowance of discriminatory ad targeting.
“Every day ‘the looting starts, the shooting starts’ stays up is a day that we choose to minimize regulatory risk at the expense of the safety of Black, Indigenous, and people of color,” Chandwaney writes, referencing a May 29 post by Trump. 1) Yes! And 2) this is exactly where Mark Zuckerberg’s slippery position that “bad speech” can be counteracted with “more speech” falls apart: the post still stands without any kind of label, and no amount of speech elsewhere on Facebook changes the fact that a call for violence against citizens by the president of the United States is still hanging there.
Chandwaney argues that, contrary to popular belief, Facebook is decidedly not too big to moderate—or, at least, that it could be doing far more if it invested the same energy in safety that it pours into routine operations. Chandwaney says that the “moving fast” motto in practice means an engineer might be told about a bug and fix it within the course of a meeting, whereas Facebook does the bare minimum to preserve its reputation after it’s alerted to dangers by civil rights organizations, researchers, the public, and the media. “In fact, we continue to pass the buck with the Kenosha Guard failure being pinned on contract content moderators, who are underpaid and undersupported in their jobs - both of which are things Facebook could almost instantly fix if it so chose,” they wrote.
Chandwaney also charges Facebook with designing different rules on misinformation for right-wing publications in order to preserve a conservative audience, which sources within Facebook confirmed to the Washington Post.
Facebook has not responded to a request for comment from Gizmodo, but it told the Washington Post that it does not “benefit from hate,” invests “billions of dollars each year to keep our community safe,” and is “in deep partnership with outside experts to review and update our policies.” Facebook has clearly ignored at least one recommendation from its auditors: that it hold politicians to the same rules as it does everybody else. Facebook also pointed to its QAnon purge—one year after the FBI named the conspiracy movement a domestic terror threat—as progress. We’ll update this post if we hear back from Facebook.
Perhaps the takeaway for Facebook is Chandwaney’s framing of this as a long-term reputational problem (“the wrong side of history”), since Facebook seems to treat civil rights as a crisis management issue. From a business standpoint, getting ahead of another bloody conflict could be even more valuable than preserving the existing far-right and conservative userbase in the short term.
Gizmodo was unable to reach Chandwaney in time for publication, but you should absolutely read this letter.
Update, 9/9/2020 8:22 a.m. ET: Facebook sent the following statement:
“We don’t benefit from hate. We invest billions of dollars each year to keep our community safe and are in deep partnership with outside experts to review and update our policies. This summer, we launched an industry leading policy to go after QAnon, grew our fact-checking program, and removed millions of posts tied to hate organizations - over 96% of which we found before anyone reported them to us.”