
Company That Profits Off Political Misinformation Says It's Going to Do More to Stop It

After a month of blistering criticism over Facebook’s decision to help more politicians lie in online advertisements, the company has conveniently announced an update on its efforts to shield American voters from the misinformation chronically afflicting its platform.

“We have a responsibility to stop abuse and election interference on our platform. That’s why we’ve made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts,” four senior Facebook officials blogged on Monday, while announcing several changes intended to “help protect the democratic process.”

For weeks, Facebook, which makes the bulk of its revenue on ads, has been under fire over a recent change to its advertising policies that carves out room for “false and misleading” content promoted by “political figures.” (The policy previously prohibited all ads containing “false” and “misleading” information.) The change was revealed on October 3 by the newsletter Popular Information, which also pointed to a Trump campaign video ad on Facebook containing debunked attacks on Joe Biden:

At least two of Facebook’s fact-checking partners, including Politifact, have reviewed Trump’s assertion, featured in his new ad. Both fact-checkers clearly concluded that Trump’s claim about Biden was false.

As Popular Information goes on to note, Facebook hadn’t really been enforcing the old policy anyway. The newsletter includes several examples of political ads promoting outright false information. Regardless, CEO Mark Zuckerberg defended the company’s decision last week in a speech at Georgetown University, telling a crowd that he didn’t believe it was right for a private company to “censor politicians,” and that doing so would only favor incumbents and “whoever the media covers.”

“I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true,” he said, digressing miles away from the subject of political advertisements, pretending instead that anyone at all had suggested fact-checking the Instagram feeds of a bunch of undergrads.

Facebook’s new changes will conveniently offer Zuckerberg and his surrogates something to point to when lawmakers inevitably call them to testify (again) about Facebook’s disproportionate influence over the election process and the flood of deliberately false information that its users consume every day. For instance, over the next month, the company says it intends to begin rating content fact-checked by third parties as “false” or “partly false” in an effort to help people “decide for themselves what to read, trust and share.”

Facebook also says it wants to make it easier for users to “understand” political ads, and will thus begin divulging how much cash individual politicians are spending. It will also take steps to combat voter suppression and intimidation, monitoring for false voter registration and voting information. Additionally, it’s offering to monitor the accounts of elected officials and political candidates—with their consent—for “potential hacking threats.”

But what good are any of these measures when Facebook is simultaneously accepting millions of dollars in ad buys to bombard its users with lies? If a candidate for office can pay Facebook to tell millions of Americans that their opponent tried to bribe foreign leaders when there’s zero evidence it ever happened, who gives a shit if Facebook hires a few more fact-checkers to warn users that some absurd claim might be “partly false”? Will its users even believe its warnings? Of course not.

Facebook can try to justify its policies by repeating the phrase “free speech” as many times as it wants, on as many campuses as will give Mark a podium to shout it from. But in the end, Facebook isn’t fighting misinformation; it’s getting paid millions upon millions of dollars to spread it as far and as wide as it can, by at least one unscrupulous candidate who’ll say virtually anything to win the race.

Far from defending voters from misinformation, Facebook is in the lying-to-voters business. And business is good.