The best place online to lead us unsuspecting sheep from an innocent recipe video to a defense of racialist pseudoscience, or to push pedophiles toward videos of kids, has decided maybe, just maybe, it doesn’t want to be known for those things anymore. So it’s taking the most obvious step there is: banning blatantly pro-Nazi content.
In a company blog post today, the platform announced an enhanced hate speech policy that will prohibit “videos that promote or glorify Nazi ideology” as well as videos that deny the veracity of “well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary.” To be clear, this is the bare minimum, and it comes after months of YouTube claiming to take action against “borderline” content that may promote biased or hateful views without necessarily breaking the site’s rules.
According to the New York Times, thousands of videos are expected to be pulled from the platform as a result of the stricter hate speech rules.
The Nazi problem, both on YouTube and on social platforms in general, is not new—but a controversy that has been roiling the video giant more recently is far-right pundit Steven Crowder’s repeated harassment of Vox journalist Carlos Maza. After a thread detailing years of attacks on Maza, often on the basis of his sexuality and heritage, YouTube replied to him directly on Twitter, stating that although Crowder used “language that was clearly hurtful, the videos as posted don’t violate our policies.”
These new, tighter rules around hate speech include prohibiting “promoting violence or hatred” based on nationality, race, and sexual orientation. It’s unclear at this time whether Crowder’s videos run afoul of the new rules. YouTube did not immediately respond to a number of questions sent by Gizmodo.
Included in YouTube’s vague mea culpa is an even vaguer promise to surface “more authoritative content in recommendations.” According to the press release, attempts to curb problematic content have resulted in a 50-percent reduction in recommendation-based views of “borderline content and harmful misinformation.” That YouTube needed to take action to reduce this sort of material being algorithmically pushed toward users suggests there’s a problem with the way it recommends videos—something anyone who has spent a meaningful amount of time on the site has likely experienced firsthand. Yet the existence of increasingly partisan, dangerous, or uninformed rabbit holes is something the platform’s own content chief, Neal Mohan, denied outright barely two months ago.
What constitutes a more authoritative source remains an open question. YouTube’s track record of surfacing engaging content being what it is, I’m sure the opacity of those statements is no cause for concern.
YouTube has repeatedly been criticized for letting hateful content and disinformation run rampant on its platform. A Bloomberg investigation from earlier this year found staffers believed their bosses turned a blind eye to these glaring issues, seeking more views and higher engagement over responsible stewardship of the second most visited website on the web. (CEO Susan Wojcicki has stridently denied this accusation on Twitter.)
The push to limit hate speech online has been a challenge, and one that platforms have taken up with the highest degree of reluctance. Facebook wrestled with how to update its policies, finally banning white nationalism and white separatism in March of this year, though actual enforcement of that policy has been a mixed bag. YouTube today joins Facebook in the category of “platforms that care enough to at least pretend to try publicly,” while Twitter and others remain staunchly in the “lol, you’re on your own” camp.
We’ve reached out to YouTube with questions on the policy change and will update if we receive any answers.