Seemingly every new day brings news of Facebook’s failure to adequately manage its products, be that due to privacy or security issues, the spread of misinformation or hate speech, or whatever the hell has been causing its string of recent crashes. But even as these issues have unfurled in recent years—some of them with dire consequences—Facebook appears to have been prioritizing monitoring its own image and that of its top executives, including by keeping tabs on fantastical posts and memes related to CEO Mark Zuckerberg.
In yet another sign that the ways Facebook can erode trust in its products are evidently limitless, Bloomberg reported Monday that the social media network has used specialized toolkits to monitor and shepherd the public’s opinion of the company and its top brass. This reportedly involved the use of two programs: one titled Stormchaser and another dubbed Night’s Watch, evidently a Game of Thrones reference.
Citing former employees and internal documents, Bloomberg reported that Stormchaser has been used by Facebook employees since 2016 to track viral content involving everything from “Delete Facebook” campaigns to claims that Zuckerberg is an alien (big if true). In some cases, the company reportedly targeted sharers of such content with specialized messages to debunk bogus claims, including memes and those dumb “copy and paste” posts people still love to share for some inexplicable reason. A spokesperson told the outlet it stopped using Stormchaser to track memes midway through last year.
Night’s Watch, meanwhile, reportedly allowed Facebook staffers to see how information about Facebook spread on the platform and its other products like WhatsApp. In the case of WhatsApp, because messages are end-to-end encrypted, the company allegedly cross-referenced how some users cited information from WhatsApp on Facebook to gauge virality.
It might be assumed that any company, particularly one whose product is a social media platform, would utilize its own tools to ascertain public perception. But these alleged initiatives to control the narrative may have come at the cost of combating other forms of misinformation on the platform, according to Bloomberg, citing a former employee. Optics-wise, this doesn’t look especially great for a company with a serious misinformation problem linked to everything from election interference to genocide.
A spokesperson for the company did not immediately return a request for comment about the report. However, a spokeswoman told Bloomberg that Facebook didn’t use Stormchaser “to fight false news because that wasn’t what it was built for, and it wouldn’t have worked.” The spokeswoman added:
The tool was built with simple technology that helped us detect posts about Facebook based on keywords, so we could consider whether to respond to product confusion on our own platform. Comparing the two is a false equivalence.
It bears repeating that Facebook has long defended allowing misinformation on its platform—whether or not it’s capable of managing the problem in any significant way, which does not appear to be the case—because it doesn’t explicitly break its rules. In an arguably performative gesture, Facebook employs fact-checking partners to monitor the content shared by its 2 billion global users and demote, but not necessarily remove, content they deem misleading or inaccurate. Removal is reserved for special cases in which there may be a safety threat.
Debate over whether this is an adequate response arose recently after manipulated videos of House Speaker Nancy Pelosi went viral on the platform—content that it defended leaving up even after it was roundly criticized and determined to be doctored. Facebook’s head of global policy management, Monika Bickert, said in an interview with CNN’s Anderson Cooper in May that the company thinks “it’s important for people to make their own informed choice about what to believe.”
Still, it appears that when it comes to Facebook’s own public image and those of its leaders, the company may be stepping in more than it would like us to know. But at this point, given Facebook’s cache of fuck-ups, are any of us even surprised that its apparent priority is shielding its ass rather than fixing its immensely profitable misinformation machine?