“If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased,” Stamos wrote. “If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.”

“If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful,” he added. “... If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad.”

So that, apparently, is the view from the inside—that amid the worsening scandal over its sale of targeted ads to organizations linked to the Russian government before the 2016 elections, some senior staff believe the press is taking low blows at Facebook's careful attempts to walk the tightrope between being a social media company and being a content provider.

As TechCrunch noted, one problem with this argument is that Facebook employees are reluctant to speak to the media, because the social media giant “will fire employees that talk to the press without authorization.”

But the meat of Stamos' argument underplays the reality that Facebook has become so huge and so powerful that even its management struggles to define exactly what it is and what it intends to accomplish. While the company would face criticism if it overreached in combating misinformation and abuse on the platform, its current problems stem largely from the inverse: a preference for hands-off solutions that often boil down to relatively small tweaks.

Everyone has understood the moral hazard of Facebook-as-censor for years, and it's getting a little tiring to hear it repeatedly brought up as an excuse when the needle currently sits very far in the other direction.

Yet it's hard to take issue with Stamos' explanation of how difficult it is to design an algorithm that weighs all of the potential issues when deciding what content to prioritize—which brings this back to the original point: human editors. It's not a cop-out to bring in humans to help supervise Facebook ad sales or manually filter content; some of the company's woes stem from replacing human editors with supposedly less biased algorithms, as it did in 2016, when it fired its entire news team just as its fake news problem was exploding.

Let's meet in the middle here: put real effort into anticipating and countering problems with solutions that aren't just clever technical workarounds, and perhaps that will make a small dent in Facebook's growing reputation as an aloof overseer of our digital lives.