Meta, owner of social media platforms repeatedly cited amongst the top purveyors of falsehoods and fake news, wants you to know it’s super serious about combating lies and bogus claims ahead of the 2022 midterm elections.
In a blog post published Tuesday, Nick Clegg, Meta’s President of Global Affairs, showcased some of the ways the company is attempting to beef up security on its platforms ahead of the upcoming midterm elections to combat hate speech, voter interference, and foreign influence. Taken together, Meta’s efforts mirror the larger-scale election integrity changes it rolled out prior to the 2020 presidential election, which some researchers applauded and others ultimately found insufficient. The measures also come as Meta endures continued criticism on multiple fronts over its handling of recent election-related misinformation in Brazil and other countries outside the U.S.
Meta claims it has hundreds of staff focused specifically on the midterms, spread out across 40 teams. The company said it will prohibit ads on its platforms encouraging people not to vote or calling into question the legitimacy of the elections. Anyone who has wandered onto Facebook in the past two years knows that the second goal is much easier said than done. Additionally, Meta says it will remove misinformation about dates, locations, times, and methods of voting, as well as falsehoods regarding who’s eligible to vote, whether or not a given vote will be counted, and calls for violence in relation to voting.
Broadly, Meta says it’s working with the Cybersecurity and Infrastructure Security Agency, as well as state and local election officials, to “make sure we’re all preparing for different scenarios.”
“We’re fighting both foreign interference and domestic operations, and have exposed and disrupted dozens of networks that have attempted to interfere with the U.S. elections,” Meta said in a fact-sheet. “We also continue to carry out proactive sweeps on platform to catch any banned organizations attempting to violate our policies or cause offline harm.”
In just the first quarter of this year, Meta claims it removed 2.5 million pieces of content tied to “organized hate.” The company, which raked in $28 billion in its most recent quarterly earnings, applauded itself for investing around $5 billion on safety and security for the entirety of last year. As it did during the 2020 elections, Meta says it will prohibit new political ads during the final week leading up to the elections. Meta will also use its home pages to send notifications regarding voter registration as well as information on how and where to vote.
That all sounds well and good, but recent criticisms around Meta’s handling of election information in Brazil call into question the effectiveness of Meta’s safeguards. A new report published by the international NGO Global Witness claims Facebook was unable to detect explicit election-related misinformation. As part of its study, Global Witness submitted 10 Brazilian Portuguese-language ads, five of which contained blatant election misinformation and five others “aiming to delegitimize the electoral process.” Global Witness says all 10 of the ads were approved by Facebook.
“Facebook knows very well that its platform is used to spread election disinformation and undermine democracy around the world,” Global Witness Senior Advisor Jon Lloyd said in a statement. “Despite Facebook’s self-proclaimed efforts to tackle disinformation—particularly in high stakes elections—we were appalled to see that they accepted every single election disinformation ad we submitted in Brazil.”
Global Witness says Facebook approved ads that contained false information regarding when and where to vote, as well as incorrect information regarding methods for voting. Global Witness’s findings come on the heels of similar criticisms surrounding Facebook’s handling of political content in Myanmar, Ethiopia, and Kenya.
In a statement sent to Gizmodo, a Meta spokesperson did not refute the Global Witness findings but said the company is “deeply committed to protecting election integrity in Brazil and around the world.”
“We have prepared extensively for the 2022 election in Brazil,” the spokesperson said. “We’ve launched tools that promote reliable information and label election-related posts, established a direct channel for the Superior Electoral Court to send us potentially-harmful content for review, and continue closely collaborating with Brazilian authorities and researchers. Our efforts in Brazil’s previous election resulted in the removal of 140,000 posts from Facebook and Instagram for violating our election interference policies and 250,000 rejections of unauthorized political ads.”
Facebook, and now by extension Meta, has had to eat a justifiably heaping pile of shit related to election misinformation since former president Donald Trump’s 2016 victory. Researchers and lawmakers condemned the platform for allegedly letting foreign actors manipulate the company’s newsfeed with torrents of fake or misleading content. In a rare admission of fault, Meta executives have previously acknowledged they could have done more to bolster their platform.
To its credit, Facebook did implement an expansive list of new procedures and policies to try to limit misinformation on the site during the 2020 election, though research shows that didn’t stop potentially false information from exploding in popularity. While some groups extolled Facebook for the extra steps it took in 2020—particularly for dutifully removing over 100,000 posts attempting to “obstruct voting” and for its moratorium on political ads in the months following the 2020 elections—its efforts weren’t universally praised.
A report released in March 2021 by Avaaz determined Facebook could have prevented an estimated 10.1 billion page views to 100 prominent pages known for spreading election misinformation if the company had not waited until October 2020 to make adjustments to its algorithm. Avaaz estimates those pages managed to triple their monthly interactions on the platform between October 2019 and October 2020.
Facebook’s own internal research, revealed in The Facebook Papers, shows a majority of U.S. adults believe Facebook was at least partially to blame for the January 6 Capitol Hill attack. Other Facebook data collected in the immediate aftermath of the attack pinpointed then-President Donald Trump’s own account as being principally responsible for a surge in reports concerning violations of its violence and incitement rules. Elsewhere, the documents reveal Facebook employees were aware of growing fears among U.S. users about being exposed to election-related misinformation.