YouTube has not had an easy go of it over the last 24 hours. Right in the middle of LGBT Pride Month, the platform responded to Vox Media host Carlos Maza’s claims that right-wing YouTube host Steven Crowder had repeatedly taunted him with racist, homophobic language, including “lispy queer,” “token Vox gay atheist sprite,” “gay Mexican,” and “anchor baby,” by... ruling that Crowder was not, in fact, in completely obvious violation of its anti-hate-speech rules.
YouTube then made it worse, issuing talking points that its hate-speech standard turns on whether “criticism is focused primarily on debating the opinions expressed or is solely malicious” and that the “main point” of Crowder’s videos was “not to harass or threaten.” Faced with a furious backlash, it spent most of Wednesday rushing a new hate speech policy out the door, reversing course by saying it would demonetize Crowder’s content, and then waffling back and forth on how much of a slap on the wrist it would actually apply.
On Wednesday night, Chris Dale, the head of YouTube’s beleaguered comms team, tried yet again to explain exactly what is going on in a post to the Official YouTube Blog. It’s not great! Here’s some of what he wrote, broken down by section:
One of the most important issues we face is around harassment. We enforce our policies here rigorously and regardless of the creator in question: In the first quarter of 2019, we removed tens of thousands of videos and accounts for violation of our policies on cyberbullying and harassment. We also removed hundreds of millions of comments, many of which were flagged and removed due to harassment.
“Rigorously and regardless of the creator in question” is a good laugh, given that this situation only developed after Maza publicly called out Crowder, building a critical mass that forced YouTube to pay attention.
The rest of these stats are essentially meaningless—in part because “tens of thousands of videos and accounts” and “hundreds of millions of comments” are a drop in the world-spanning YouTube bucket, but also in part because they lack any context or comparison point. Forget truncating the y-axis, we don’t even have an x-axis or a legend.
Just today, we took another step in our fight against hate speech and our responsibility to reduce the spread of harmful borderline content. As mentioned, one of our upcoming projects will reexamine our harassment policy, as well.
YouTube did indeed announce it was banning supremacist ideology and denial of “well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary” on Wednesday. YouTube has also been claiming to be taking action on “borderline” content for months, and as Wired noted, platforms including YouTube have long claimed to be cleaning up their act with little to show for it. This is just a PR exercise unless and until that changes.
As an open platform, we sometimes host opinions and views that many, ourselves included, may find offensive. These could include edgy stand-up comedy routines, a chart-topping song, or a charged political rant — and more. Short moments from these videos spliced together paint a troubling picture. But, individually, they don’t always cross the line.
There are two key policies at play here: harassment and hate speech. For harassment, we look at whether the purpose of the video is to incite harassment, threaten or humiliate an individual; or whether personal information is revealed. We consider the entire video: For example, is it a two-minute video dedicated to going after an individual? A 30-minute video of political speech where different individuals are called out a handful of times? Is it focused on a public or private figure?
No one alleged that Crowder had personally encouraged his followers to go after Maza. But this is a fair point to bring up in isolation—creators are only responsible for their own behavior. To an extent. Which brings us to the next point:
For hate speech, we look at whether the primary purpose of the video is to incite hatred toward or promote supremacism over a protected group; or whether it seeks to incite violence. To be clear, using racial, homophobic, or sexist epithets on their own would not necessarily violate either of these policies. For example, as noted above, lewd or offensive language is often used in songs and comedic routines. It’s when the primary purpose of the video is hate or harassment.
This is a nice little sleight of hand. YouTube has ditched its original point about whether content is “focused primarily on debating the opinions expressed or is solely malicious,” probably because a large swathe of the internet repeatedly pointed out that hate speech is not an exchange of ideas but deliberate bullying.
Hate speech is implicitly dehumanizing and blows a big, loud dog whistle for audiences to go after targets. Couple the “primary purpose” standard with a harassment policy that leaves a plausible deniability loophole a mile wide, and YouTube is still left with rules that don’t address the original Crowder situation: A creator repeatedly pushing the boundaries enough that his audience gets the message, but can wink-wink, nudge-nudge his way out of the consequences until enough people complain.
Even if a creator’s content doesn’t violate our community guidelines, we will take a look at the broader context and impact, and if their behavior is egregious and harms the broader community, we may take action. In the case of Crowder’s channel, a thorough review over the weekend found that individually, the flagged videos did not violate our Community Guidelines. However, in the subsequent days, we saw the widespread harm to the YouTube community resulting from the ongoing pattern of egregious behavior, took a deeper look, and made the decision to suspend monetization. In order to be considered for reinstatement, all relevant issues with the channel need to be addressed, including any videos that violate our policies, as well as things like offensive merchandise.
Here’s where it all goes off the rails. YouTube appears to be caught in a bind where its hate speech policy clearly prohibited Crowder’s language, but it seemingly doesn’t want to get dragged into another headache-inducing debate about censoring conservatives (too late!), or rock its “extremism rabbit hole” gravy train too far off the tracks, or whatever. Who knows what’s going on over there anymore? Does YouTube? Or is it just pulling a bunch of levers and contorting the facts of what happened through all of those loopholes to machine-pulp and squeeze out an arbitrary result it thinks will calm enough people down to ride out this news cycle?
In the coming months, we will be taking a hard look at our harassment policies with an aim to update them — just as we have to so many policies over the years — in consultation with experts, creators, journalists and those who have, themselves, been victims of harassment. We are determined to evolve our policies, and continue to hold our creators and ourselves to a higher standard.