Facebook Security Chief Alex Stamos Hits Back at Media Coverage of Its Algorithms

Photo: AP

Alex Stamos, Facebook’s chief security officer, defended his employer against “coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.” in a lengthy tweetstorm on Saturday.

Stamos was responding to Lawfare editor Quinta Jurecic, who criticized Facebook’s decision to have human editors oversee more ads, calling it a cop-out and saying the company should instead fix its broken algorithms.

In his response, Stamos was particularly concerned with what he saw as attacks on Facebook for not doing enough to police rampant misinformation spreading on the platform, saying journalists largely underestimate the difficulty of filtering content for the site’s billions of users and deride its employees as out-of-touch tech bros. He added that the company should not become a “Ministry of Truth,” a reference to the totalitarian propaganda bureau in George Orwell’s 1984.

“If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased,” Stamos wrote. “If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.”

“If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful,” he added. “... If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad.”

So that, apparently, is the view from the inside: amid the worsening scandal over its sale of targeted ads to organizations linked to the Russian government before the 2016 elections, some senior staff believe the press is taking low blows at Facebook’s careful attempts to walk the tightrope between social media company and content provider.

As TechCrunch noted, one problem with this argument is that Facebook employees are reluctant to speak to the media, because the social media giant “will fire employees that talk to the press without authorization.”

But the meat of Stamos’ argument underplays the reality that Facebook has become so huge and so powerful that management struggles to define exactly what it is and what it intends to accomplish. And while the company faces potential criticism if it overreaches in combating misinformation and abuse on the platform, its current problems stem very much from its preference for the opposite: hands-off solutions that often boil down to relatively small tweaks.

Everyone has understood the moral hazard of Facebook-as-censor for years, and it’s getting a little tiring to hear it repeatedly brought up as an excuse when the needle currently sits very far in the other direction.

Yet it’s hard to take issue with Stamos’ explanation of how difficult it is to design an algorithm that accounts for every potential issue when deciding what content to prioritize, which brings this back to the original point: human editors. It’s not a cop-out to bring in humans to help supervise Facebook ad sales or manually filter content; some of the company’s woes come from replacing human editors with supposedly less biased algorithms, as it did in 2016, when it fired its entire news team just as its “fake” news problem was exploding.

Let’s meet in the middle here: Put real effort into anticipating and countering problems with solutions that aren’t just clever technical workarounds, and perhaps that will make a small dent in Facebook’s growing reputation as an aloof overseer of our digital lives.

[TechCrunch]

"... An upperclassman who had been researching terrorist groups online." - Washington Post

DISCUSSION

As one of those “academics” referenced by Mr. Stamos, I have to agree with him on a couple of points. My university, like most others, is VERY concerned about the nexus of information literacy and critical thinking as it applies to online content. And it IS a different beast than applying the same general principles to more static content. The speed with which the content spreads, the difficulty of determining provenance, and the purposeful and intentional targeting of specific populations for maximum effect present unique challenges. We are working on curriculum, experimenting with units in existing courses, and trying some pre-packaged interventions coming out of places like Stanford, but we don’t know the outcomes yet. One problem I am finding is that while you can walk a class through an exercise debunking some particular story, even college students seem unable to generalize those strategies to the next story, especially when its angle is something they WANT to believe or feel they have some personal anecdote to support.

But it was my personal experience that brought home how difficult this is. After a relative of mine posted a link to a story claiming that a research team of Argentinian doctors had evidence that the Zika virus was not what was causing microcephaly, and that the real culprit was pesticides, I decided to go full-on Snopes: break down and debunk the linked story step by step and then post my process. Well, 40 minutes later I had finally done what I felt was a comprehensive and authoritative job. But in those 40 minutes I had to 1) track down the original article through a maze of reposts and links, 2) find the actual empirical article everyone was referencing (which was interesting, because in many of these cases you never do find an actual scientific study), 3) read the actual study carefully, and by carefully I mean as someone who has a research PhD, teaches research methods, and is a tenured professor with an active research agenda, 4) determine that the empirical study had virtually NOTHING to do with establishing any link between Zika and microcephaly, and finally 5) post all of this shit to prove my point. And by the way, the only reason I was even able to read the study referenced was that I have access to science journals through my university’s database; otherwise I would have had to cough up $20 or $30 to download and view it, as it was behind a paywall.

Whatever social media companies are going to do going forward (and yes, I think something has to be done) is not going to be easy and may only put a tiny dent in the problem, unless it comes with the other side of the equation: a broad, comprehensive education of the public in how to spot crap. But that will not be cheap, easy, or fast.