As the deadly coronavirus continues to wreak havoc in China and around the world, several social media giants are fighting the outbreak on a different front: by curbing the spread of misinformation and fake claims online amid a public health emergency.
False information about the coronavirus has proliferated on these platforms as more and more cases come to light. So far, 23 countries outside of China have reported cases of the virus, and more than 300 people have died as a result of the outbreak, all of them in China.
The question then becomes: What exactly are these platforms doing about it?
Twitter has fielded more than 15 million tweets about the outbreak in the past four weeks, though none have been “coordinated attempts to spread disinformation at scale,” the company wrote in a recent statement. When asked via email what existing protocol Twitter’s been citing in its response, a spokesperson pointed us to its global policy regarding platform manipulation, which bans misleading, fake, and spam accounts.
Twitter searches for “coronavirus” in some countries now trigger a “Know the Facts” prompt along with a link to resources from the Centers for Disease Control and Prevention. Among these countries are several that have reported cases of the virus, including the U.S., Australia, and Japan, with more to be added “as the need arises.”
If this tool looks familiar, it’s because Twitter originally rolled it out last year in partnership with the U.S. Department of Health and Human Services as a method of combating vaccine misinformation.
Facebook doesn’t exactly have the best track record when it comes to stamping out potentially harmful pseudoscience. Anti-vaxxer nonsense, snake oil testimonials, and other misleading health information have traditionally run rampant on the platform until public pressure finally forced Facebook’s hand.
Fortunately, that doesn’t appear to be the case this time around. Facebook’s been taking a proactive stance, employing its global network of fact-checkers to suss out fake posts about the coronavirus and vowing to take down “false claims or conspiracy theories” that have been flagged by health authorities.
In a company blog post, Facebook’s head of health, Kang-Xing Jin, explained that these kinds of fake posts violate the platform’s existing policies banning content that could cause physical harm. Hashtags perpetuating similar false claims will also be blocked.
“We’re focusing on claims that are designed to discourage treatment or taking appropriate precautions,” Jin wrote. “This includes claims related to false cures or prevention methods — like drinking bleach cures the coronavirus — or claims that create confusion about health resources that are available.”
Since Snapchat posts only have a 24-hour lifespan and the platform lacks any sort of public newsfeed, its very design prevents misinformation and fake news from going viral, a company spokesperson told Gizmodo via email. You could argue the app’s Discover feature basically operates as its newsfeed, but its content is much more curated, particularly relying on companies and publishers previously vetted by Snap.
Though Reddit is no stranger to conspiracy theories and toxic communities, it appears to be pulling its weight in this crusade as well. On Friday, Reddit tacked a banner to the top of its homepage directing users to a megathread in its r/AskScience forum for all their questions about the coronavirus outbreak.
Reddit also quarantined the subreddit r/Wuhan_Flu “for containing misinformation and/or hoax content,” a company spokesperson told Gizmodo via email. This excludes the forum from search results and recommendations and limits users’ ability to share its content. (You may recall the pro-fascist, Trump-championing r/The_Donald subreddit received similar treatment last year.)
Quarantined forums also require viewers to opt in via a prompt before viewing any content, to avoid accidental exposure. In the case of this specific subreddit, Reddit also provides a warning message and a link to CDC resources in the prompt.