Fake news. Foreign election interference. Dark money propaganda. All of these served as catalysts for Facebook’s push for more transparency around political advertising on its platform. But its efforts to be more open are flawed, and the consequences are already becoming apparent.
On Wednesday, Facebook rejected a boosted post from the nonprofit investigative journalism outlet Reveal, flagging it as “political content.” The post promoted an article about a lawsuit against a detention center for immigrant children that allegedly forcibly injected kids with drugs. According to a screenshot posted by Reveal, Facebook wouldn’t allow the news organization to boost the article because it wasn’t authorized to run political ads, a process that involves confirming one’s physical address through a code mailed by Facebook and providing the company with a government-issued ID.
Because Facebook has reduced the number of news stories it displays on users’ feeds, promoting an article through an ad is one of the only ways to ensure a significant audience can actually find it on the platform. By limiting publishers’ ability to reach their audience under the guise of cracking down on political ads, however, Facebook responded poorly to a contentious question: Where do you draw the line between political content that’s meant to sway votes and nonpartisan information critical for public knowledge?
Byard Duncan, an engagement reporter at Reveal, told Gizmodo in an email that the post was flagged for “potentially containing political content.” He continued: “We’re not an advocacy organization; we’re an award-winning investigative news nonprofit that’s worked for decades to hold the powerful accountable, shine a bright light on injustice and protect the most vulnerable in our society.”
ProPublica pointed out this month that Facebook’s new policy around political ads is imperfect, flagging news content as political while failing to flag indisputably political ads. The new policy, which Facebook announced in April and began enforcing last month, requires advertisers to receive Facebook’s blessing before they can boost political ads. This includes “issue ads,” or what Facebook characterized as “political topics that are being debated across the country.” These issues include abortion, civil rights, health, immigration, and “values,” to name a few.
“This could be really confusing to consumers because it’s labeling news content as political ad content,” Stefanie Murray, director of the Center for Cooperative Media at Montclair State University, told ProPublica.
Facebook’s policy has noble intentions—to call attention to posts with political motivations. What’s evident from the Reveal post is that the rule isn’t simply educating users about what’s political—it’s censoring crucial news. To lump an arguably vital piece of journalistic work in with the likes of election-related ads or ads about how taxes are evil undermines the value of the story while misleading users into believing it is presented with some sort of bias or partisan affiliation.
Simply put, by lumping news in with content intended to influence voters, Facebook is creating its own misinformation.
The problem of Facebook’s overzealous flagging of political content extends beyond the news. Vulnerable communities, too, are subjected to this new flavor of what is effectively censorship. Earlier this month, an employee at the Central Indiana Community Foundation tried to boost a post announcing a new podcast episode. Ben Snyder, a marketing manager at the foundation, told Gizmodo that Facebook disapproved the post, which was about the unique challenges faced by the LGBTQ community in central Indiana, because the foundation was not authorized to advertise political content.
“I feel like when you classify a conversation about any marginalized person or community as political it does a huge disservice and it’s a way of censoring and silencing an already silenced population,” Snyder said. “And then it becomes that much more difficult to address the challenges that these communities are facing as they can’t talk about it.”
Snyder said that the stories included in the podcast episode and the post were not inherently political or pushing any type of political agenda—they weren’t asking users to vote a certain way, or contact their representative, or get engaged with the cause. The podcast was just telling people’s stories. “It censors or stifles conversations that are helpful,” Snyder said of Facebook flagging such a post as political, adding that “it sets up the pretense that when you’re talking about the LGBTQ community that you have to censor yourself.”
Facebook itself admits its system remains broken but says its flaws are necessary to block “bad actors.” Reveal’s ad for its story, “not the actual story, was flagged because it contains political content,” a Facebook spokesperson told Gizmodo in an email. “We will soon be launching a separate section in our archive for news ads about politics, which we agree is different than advocacy. But, we flag both to prevent workarounds for bad actors.”
With both the Reveal and Central Indiana Community Foundation incidents, it’s evident that Facebook’s new system can be detrimental to important discourse. News organizations and communities shouldn’t need authorization to signal-boost fact-based news and the stories of vulnerable populations. What’s more, they shouldn’t have to flag those posts as political. The implication is that one is expected to take a side in the face of such information. To make that implication about, say, children forcibly injected with drugs gives users permission to strip themselves of humanity more than it offers transparency around political ideologies.
“Whether it’s the LGBTQ community or people of color, for people to not be allowed to speak out about their differences? That terrifies me,” Snyder said. “And in a public forum as huge as Facebook to have to, for lack of a better word, whitewash everything just so we can share and communicate stories on that platform is extremely troubling.”