Anas Modamani (Image: AP)

Facebook’s problems of late are legion: fake news, hate speech, trolls, and the hideous and unpredictable violent outbursts on its Live platform. Now, apparently, the German court system is in the mix.

On Monday, a Syrian refugee named Anas Modamani and his lawyer appeared in a Würzburg courtroom to duke it out with Mark Zuckerberg’s social network. Modamani sued Facebook over a 2015 selfie he took with German chancellor Angela Merkel. The photo went viral, but it also began to appear with statements saying Modamani was a terror suspect, and a number of fake news reports shared on Facebook falsely linked him to terror attacks in Brussels and Berlin.

In court, Modamani’s lawyer Chan-jo Jun argued that Facebook has the technical capability to detect the selfie and prevent it from being spread further.


From Bloomberg, here’s Facebook’s reply:

“There are billions of postings each day,” said Martin Munz, a Facebook lawyer. “You want us to employ a sort of wonder machine to detect each misuse. Such a machine doesn’t exist.”

“A sort of wonder machine.”

There are a few things wrong here. First, Facebook isn’t being asked to prevent all instances of hate speech. It’s being asked to use its vast technical capabilities to track and detect the spread of a single image that was misused. It may be a needle in a haystack, but it’s not outside the realm of possibility—particularly in light of Facebook’s habit of making grand proclamations.


Just last year, Mark Zuckerberg set an incredibly ambitious timeline to “[help] cure all diseases by the end of this century.” He also wants to connect the entire world using solar-powered drones that beam down wi-fi internet, and is hiring someone to build “brain-computer interface” tech for the “communications platform of the future.” Facebook has all kinds of moonshot ideas, yet it often drops the ball when it comes to the fundamentals. Modamani certainly isn’t the first example of Facebook getting it wrong when it comes to moderation: It haphazardly removes posts meant to condemn social crises, not encourage them.

Facebook routinely (and very publicly) announces incredibly ambitious plans and ideas. But when it comes to policing racism and hate speech, Facebook’s perpetually sunny push for dynamism and inventiveness cuts out like spotty wi-fi. And amid rising anti-refugee sentiment in Europe, this isn’t simply a matter of algorithms or hoaxes: it’s someone’s life.

Moreover, Facebook already does a whole lot when it comes to content analysis. It’s been rolling out image detection upgrades for years, and it can apparently recognize users in photos even when their faces are obscured. It also has international teams monitoring content for both terrorist material and porn 24 hours a day. It’s partnering with media companies worldwide to vet content and curb the spread of fake news, which Germany thinks should be a fineable offense. Crucially, it also maintains a staggering amount of data on its users, from income level to religion and ethnicity, using metadata analysis to create extremely accurate profiles.

So why is it so reluctant to try its hand at combatting “misuse,” particularly when it comes at the expense of an individual? Facebook has said it’s working on fighting both fake news and hate speech. This viral image involves both, and suddenly Facebook claims it would take something out of Terminator or Back to the Future to help.


This isn’t to say that this kind of moderation is by any means simple—as Facebook’s lawyer notes, there are billions of postings out there. But to act as though the solution requires some supernatural or miraculous intervention is spineless and antithetical to Facebook’s approach to many other problems.