At the end of 2016, Facebook decided to finally do something about its misinformation problem. It tasked a bunch of third-party fact-checkers to flag and weed out fake and misleading content from its News Feed. Now, it’s extending this fact-checking effort to Instagram.
“Our approach to misinformation is the same as Facebook’s—when we find misinfo, rather than remove it, we’ll reduce its distribution,” Stephanie Otway, a spokeswoman for Instagram, told Poynter. “We can use image recognition technology to find the same piece of content on Instagram and take automatic action.”
Image recognition technology! Automatic action! That sounds great and all. But it still leaves one key question unanswered: Is this massive fact-checking effort actually helping us live in a more reality-based world, or is it all just for show?
Otway told Poynter that Instagram will filter fake content out of its Explore tab and hashtag result pages. This is part of a testing phase, and as with Facebook’s fact-checking program, third-party fact-checkers will be the ones tasked with scrutinizing potentially fake content. In fact, the program being tested on Instagram mirrors the workflow for weeding out fake content on Facebook: fact-checkers will reportedly review flagged Instagram posts on the same dashboard they use for Facebook posts.
It’s good, and crucial, for Facebook to continue taking steps toward ensuring all of its products stop contributing to our collective loosening grip on reality. But what remains unclear is just how effective these efforts are. It’s been over two years since Facebook first began going after dangerous bullshit on the social network, and there are still deeply disturbing instances of hateful propaganda fueling genocide on the platform, on top of the more innocuous, day-to-day viral hoaxes and misleading posts.
Looking at the state of its platforms today, it’s not immediately obvious, or even reassuring, that Facebook has figured out a way to resolve its misinformation problem. And in the absence of any hard metrics, transparency reports, or testimony from its fact-checkers, we’re left simply trusting Facebook’s commitment to the truth. Given all that we know about Facebook now, why would anyone do that?
Some metrics Facebook could share either with the public, or at the very least with those laboring over these efforts, include how frequently people click on and share links before and after they have been demoted or tagged with a warning, how many posts have been flagged and debunked per region, and whether the overall credibility of the news sites people click on has improved. That’s just to name a few.
Gemma Mendoza, who leads fact-checking efforts as well as research on disinformation on social media at Philippines-based Rappler, a partner in Facebook’s fact-checking program, told Gizmodo last year that many false claims don’t come from just one URL but are spread across multiple copycat sites, and that it would be helpful to know whether Facebook’s system is tracking the suspect claim or just the URL. A Facebook spokesperson said at the time that the company has “machine-learning driven similarity detection processes in place to catch duplicate hoaxes.”
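Facebook hasn’t detailed how that similarity detection actually works. For a sense of the general idea, though, one common baseline for matching near-duplicate images is a perceptual “difference hash”: shrink the image to a tiny grayscale grid, record which pixels are brighter than their neighbors, and compare two images by counting how many of those bits differ. The sketch below is purely illustrative, not Facebook’s system, and the pixel grids are made-up sample values:

```python
# Illustrative sketch of perceptual difference hashing (dHash) for
# near-duplicate image detection. NOT Facebook's actual system; the
# pixel data below is invented for demonstration.

def dhash(pixels):
    """Compute a difference hash from a grid of grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbor, so the hash tends to survive resizing, recompression,
    and small edits that don't change the image's overall gradients.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests near-duplicates."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [40, 30, 20], [5, 50, 5]]
# A re-encoded copy: every value shifted slightly, gradients preserved.
recompressed = [[12, 21, 33], [41, 32, 19], [7, 52, 4]]
# A different image entirely.
unrelated = [[90, 10, 80], [10, 90, 10], [80, 10, 90]]

print(hamming(dhash(original), dhash(recompressed)))  # 0: flagged as a copy
print(hamming(dhash(original), dhash(unrelated)))     # 4: not a match
```

A hash like this matches visually similar copies of one flagged image, which is roughly the shape of the copycat problem Mendoza describes, but it says nothing about whether the underlying *claim* is tracked when it reappears as freshly made graphics or text on new URLs.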
A Facebook spokesperson told Gizmodo in August of last year that its fact-checking program has been able to “reduce future views of debunked stories by 80%, but it’s worth noting that we don’t believe it’s a silver bullet to fighting misinformation.” A spokesperson shared the same statistic in an email on Tuesday. But the company hasn’t released any convincing metrics—or really any meaningful metrics at all—to give us a sense of the impact of these efforts on its platforms over time. Tai Nalon, the director of Brazilian fact-checking startup Aos Fatos, a company that works with Facebook’s program, confirmed in an email on Tuesday that Facebook hasn’t released any metrics to fact-checkers or become more transparent about the program’s effectiveness.
“They just announced they will be doing it from now on with posts (images that are similar to the ones reported as false on Facebook) that are on the discover tab,” Nalon said, referring to the company’s fact-checking expansion to Instagram. “They will track them through hashtags as well. That’s all I know.”
Fact-checkers have even called for more transparency around their own work. “They promised some metrics to us,” Mendoza told Gizmodo last year. She said that they have seen hypothetical numbers, but no exact numbers based on their own work. At the time, Mendoza characterized Rappler’s relationship with Facebook as “frenemies.”
“What everybody wants from Facebook is an improvement in the quality of information, in the quality of misinformation being flagged to us,” Phil Chetwynd, editor in chief of Agence France-Presse, which also fact-checks for Facebook, told Gizmodo last year, referring to a better system for flagged misinformation. “That is something they are still really struggling to provide for us.”
In an email to Gizmodo, a Facebook spokesperson pointed to a February blog post from the company detailing a handful of studies that explored fake news on Facebook, indicating that there has been some progress, according to this research, in tackling misinformation on the platform. But the link between Facebook’s efforts and diminishing visits to websites pushing bogus information remains frustratingly vague.
While it’s heartening that Facebook is now extending these very efforts to Instagram, we still don’t really know if this years-long program is the most effective strategy. It’s reasonable and valid for fact-checkers to want to have some hard evidence indicating that their admirable work isn’t in vain. Facebook doesn’t have an in-house team working on one of its most egregious issues, and to outsource its efforts while keeping those very experts in the dark on the efficacy of their work is both pretty shitty and pretty shady. And for the public, the expansion of these efforts would sound more promising if we knew they worked.