Facebook's Latest Solution to Fake News? More Machines, Baby


Take a shot every time Facebook evades editorial accountability.

On Thursday, Facebook announced that it is going to use “updated machine learning” algorithms in order to better spot and counter misinformation on its platform. The company says it will use its existing third-party fact checkers to review stories that the new algorithm flags, and their reports may be shown below flagged stories in a section called Related Articles.


The Related Articles feature—a list of suggested links offering varying perspectives—is technically not new. Facebook started publicly testing the feature in April, but now the company is rolling it out more widely in the US, Germany, France, and the Netherlands, TechCrunch reported on Thursday. These are countries where Facebook already has fact-checking partnerships in place.


Facebook says its objective with Related Articles and updated machine learning tech is to offer users more context on the validity of a story they see in their feed. The company aims to help users make better judgment calls as to whether or not they should believe a potential hoax, or share it to their network.

But it’s also just another way for Facebook to continue acting like a news outlet for billions of users without directly accepting any journalistic responsibility.

“We don’t want to be and are not the arbiters of the truth,” Facebook News Feed integrity product manager Tessa Lyons told TechCrunch. “The fact checkers can give the signal of whether a story is true or false.”

But while Facebook does not want to be seen as the authority over what stories are permitted on its platform, it is one. By delegating the subjective work to non-Facebook employees and leaning on machine learning technology, Facebook still gets to wield its influence as an editorial outlet without being labeled as one. And if any mistakes are made—if a politically charged story is wrongly flagged as a hoax, say, or if Facebook accidentally recommends fake news—Facebook can now more easily shift blame to a glitch or a third party.


Facebook hasn’t shared why its updated machine learning algorithm is more capable than it once was—or whether a previous version was ever widely in use in users’ News Feeds before today. But the company is seemingly hell-bent on trying to fix its misinformation problem (the one Mark Zuckerberg once blew off) out in the open. The shady part is that Facebook wants to do this without being held directly responsible for how it handles a potential hoax. Facebook isn’t calling the shots; the machines are.



You gotta love watching Gizmodo swing back and forth on this issue.

First, Gizmodo argues that humans are bad and shouldn’t be trusted to determine what news is news. (Importantly, Giz misreported that entire story—in actuality, humans were determining what was newsworthy, and human bias seeped in, which is pretty inevitable and noncontroversial. There was no mandate to suppress conservative news, as the article implied. Just human disagreement on what is and isn’t newsworthy. No big deal; the big result there was just a Congressional investigation.)

So Facebook removed the human element. You’d think that Gizmodo would be excited about that, but no, now they’re trashing that method.

Make up your mind. Either you want people actively intervening, knowing that people make subjective decisions that not everyone will agree with, or you want machines to determine newsworthiness.