
Facebook wants you to know that it is committed to stopping the spread of internet hoaxes. But it requires some mental gymnastics to understand how signal-boosting comments with the word “fake” in them would help fight misinformation. In a recent test, however, that’s exactly what the social network did.
As the BBC reports, Facebook conducted an experiment last month in which comments containing the word “fake” were pushed to the top of comment threads below links for some users. Thus Facebook comments below stories from The New York Times, the BBC, The Guardian, and other news outlets all began with messages stating “fake.”
“We’re always working on ways to curb the spread of misinformation on our platform, and sometimes run tests to find new ways to do this. This was a small test which has now concluded,” a Facebook spokesperson told the BBC. “We wanted to see if prioritising comments that indicate disbelief would help. We’re going to keep working to find new ways to help our community make more informed decisions about what they read and share.”
Back in March, Facebook debuted a feature intended to better highlight fake news stories on its site by marking them as “disputed” by third-party fact-checkers. While this doesn’t prevent users from sharing a story, it gives them a non-partisan expert opinion on the truthfulness of the article. But simply promoting any comment with the word “fake” under stories that may actually be legitimate is a mystifying strategy for curbing nonsense on the platform.
Facebook, of course, has a storied history of trying out little “tests” on its users. The company messed with the emotional content on the News Feeds of nearly 70,000 users in June 2014 to determine whether happy or negative content online can directly affect someone’s mood. (It can.) The company also experimented with an “I Voted” button on the platform for years to see how it influenced voting behavior. And in 2012, Facebook’s Data Science Team randomly hid links hundreds of millions of times to “assess how often people end up promoting the same links because they have similar information sources and interests,” according to Technology Review.
It’s hard to know whether Facebook sincerely believed that elevating comments with the word “fake” in them would help users determine which stories were factually accurate, or whether this was just another social experiment to see how these types of comments influence its users. We have reached out to Facebook for comment and will update this story if and when it responds.
[BBC]