YouTube’s Copyright Filter Is Crushing Video Critique—And It’s Getting Worse

Graphic: Elena Scotti (Photos: AP, Shutterstock)

In July, Harry “hbomberguy” Brewis shared a video on his popular YouTube channel called “RWBY Is Disappointing, And Here’s Why.” The two-and-a-half-hour video, a sharp, detailed critique of the cartoon RWBY, was the result of a lot of work by Brewis and his producer, Kat Lo. It also took an extra week and a half of editing and $1,000 in legal fees just to get and keep the video up on YouTube. All because of YouTube’s copyright filter. And thanks to a new proposed law by Sen. Thom Tillis, Brewis’ experience could become virtually everyone’s.

YouTube’s copyright filter is a labyrinthine nightmare called Content ID. Content ID works by scanning all the videos on YouTube and comparing them to a database of material submitted by copyright holders—often music labels and movie and TV studios—which YouTube has allowed to add material to that database directly. Once Content ID matches a few seconds of an uploaded video to something in the database, regardless of context, a number of automatic penalties can be imposed. According to Google, most of the time the rights-holder chooses to just take the money generated by ads placed by Google on the video. If the original creator didn’t want any ads put on their video, too bad. But in other cases, the rights-holder can make something much worse happen: They can make sure no one sees the video at all.

The problem with filters like Content ID is that their restrictions have nothing to do with the law. The ability to use copyrighted material without permission or payment—especially short clips for purposes such as criticism, commentary, education, and so on—is protected by something called “fair use.” It’s easy to get into the weeds of fair use, but the important thing to note is that whether or not a use is fair depends on a lot of context. Context that Content ID simply can’t determine. All it does is determine whether elements of a work match their source, not what is actually being done with the material. For example, a movie review using a 14-second bit of a film to illustrate what is good or bad will trigger a Content ID match to the whole movie. As far as Content ID is concerned, those 14 seconds are no different from a complete copy of the film being uploaded. So while algorithms like this might be useful in flagging potential infringement, the fact that Content ID automatically applies penalties, with no human review involved at all, is a problem.
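To see why matching alone can never answer the fair use question, consider a toy model of fingerprint-style matching. To be clear, this is my own illustrative sketch, not YouTube’s actual algorithm: the `fingerprint` and `matches` functions, the hashing of fixed windows, and the five-second window size are all assumptions made up for the example. The point is the one above: a system that only asks “does this appear in the reference database?” returns the same verdict for a 14-second review clip as for a full pirated copy.

```python
# Toy sketch of fingerprint-style matching. Everything here is an
# illustrative assumption, not YouTube's real system.

def fingerprint(samples, window=5):
    """Hash every `window`-second slice of a work into a set."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def matches(reference_db, upload, window=5):
    """True if ANY slice of the upload appears in the database,
    no matter how short the upload is or why the material is used."""
    return bool(fingerprint(upload, window) & reference_db)

# A "movie" represented abstractly as one sample per second.
movie = list(range(7200))        # a two-hour film
db = fingerprint(movie)          # the rights-holder registers it

full_pirate_copy = movie         # the entire film, re-uploaded
review_clip = movie[100:114]     # a 14-second excerpt in a critique

print(matches(db, full_pirate_copy))  # True
print(matches(db, review_clip))       # True: same verdict, zero context
```

Both uploads come back `True` because the matcher has no input for purpose, proportion, or commentary, which are exactly the factors fair use turns on.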

Brewis’ situation is not unique. And it’s possible that it’s about to be the best-case scenario for anyone trying to share videos, music, or art online. You may remember the overzealous EU copyright directive that passed last year. We are seeing a call for new, faster, harsher penalties in the United States, too. In the giant spending and covid-19 relief package, there are two new copyright bills: the CASE Act, which creates a weird quasi-court in the Copyright Office that can hand down $30,000 “small claims” judgments with limited appeal options, and a bill to make certain streaming operations a felony. Earlier this year, the Copyright Office issued a report that argued that the problem with the internet is that not enough content is removed, and not enough people are losing their internet access because of unproven accusations of copyright infringement. Content ID and YouTube have existed for a long time. Why dig into filters now? Because we keep hearing calls to make them mandatory. Just last week, the Senate subcommittee on intellectual property had a hearing on such things, where it was claimed, over and over, that perfect filters do exist; tech companies just haven’t been forced to build them.

After a year of hearings in which the public interest was routinely unrepresented or ignored (hearings that repeatedly raised the question of why copyright merited monthly attention in a year filled with other concerns), Sen. Tillis has produced a draft bill that does all sorts of dangerously bad things to the internet. Most relevant to this discussion is that it requires internet services to monitor uploads, requires what is called “notice and staydown,” and calls for the establishment of “standard technical measures.” Within the draft it’s clear: All of these things require filters. And it appears that Tillis has joined the chorus claiming that filters will solve all our ills.

They’re wrong. I spent the last year digging through YouTube’s own documentation, reports from others, and doing interviews as part of a research project I did into Content ID for the Electronic Frontier Foundation. You can see a year of work and almost 10,000 words in the whitepaper, “Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online.” But I’m not writing here as an EFF employee—I am writing as a private internet user who, through a love of media, has forced herself to become an expert in the arcane world of intellectual property. And it’s a mess. Content ID alone is so complicated that those who rely on YouTube for their livelihood are constantly trying to divine, through trial and error, how it works.

So let me tell you: There is no secret, better filter out there that is just hidden, waiting for tech companies to use it. YouTube’s Content ID is one of the best-funded and most-used filters online. It’s not just removing legal speech from the internet, it’s dictating it. It’s forcing every website, every ISP, every whatever internet service Congress decides new laws should apply to, to have a filter that will ruin lives. If Google’s filter doesn’t work, why does anyone think a cheaper, less-tested one will?

So when we talk about the problems with YouTube, we are talking about a possible future of the internet. We are talking about people trying to make use of their right to free expression being blocked by an algorithm. People trying to make a living as independent creators seeing their work shut down or their wages taken, with no reasonable way to appeal.

In Brewis and the RWBY video’s case, the penalty chosen by the studio was the extreme one: destruction. So Brewis first tried uploading 20-minute portions of his video to YouTube to see what Content ID matches they triggered so that he could edit them out. But the 20-minute videos came back clear of any matches. So he uploaded the full video. It came back with two matches. So he trimmed the claimed portions and reuploaded. It came back with two new matches. He edited again. And reuploaded again. And again. And again.

Brewis discovered that the studio behind RWBY, Roosterteeth, has set Content ID to automatically take down any video that has a Content ID match. Roosterteeth explained that it expects creators using its material to go through YouTube’s Content ID dispute process, which lets the studio know that someone is using its material. Roosterteeth then decides whether it approves of the use, gets the YouTuber to agree to its terms, and manually changes Content ID to “just” put ads on the video and collect the money they generate.

To avoid letting the company behind the series he was criticizing determine the fate of his criticism, Brewis re-edited the whole video so that not a single clip of the show was over five seconds long. This added a week and a half to production time. He also paid a lawyer $1,000 to confirm that he was within his rights to use the clips.

While I have spent untold hours unwinding the tangled web of Content ID, none of the broad generalities of my research will be surprising to anyone who has experienced the capricious and ever-changing algorithm behind Content ID. There are stories of creators making videos to suit the algorithm, of simply handing over revenue to the people you are criticizing in your work to avoid the hassle, and of certain works or even whole types of art simply left uncriticized because getting through the copyright filter is just too difficult.

Challenging Content ID matches is fraught as hell for creators uploading videos. Trying to map out how things work inevitably turns you into that Charlie Day meme. It’s so confusing that literal experts in copyright law have been baffled by this system. So it’s not a surprise that many YouTubers have decided to just submit to the almighty algorithm. While fair use does not have any concrete number of seconds that makes a use legal, Content ID triggers matches on, anecdotally, snippets of five to 10 seconds. So YouTubers choose clips under five seconds long. No, not the clip that matches their point best. Just the clip that will pass Content ID. Fair use does not require payment. But YouTubers will let the revenue generated by their videos go to the rights-holders rather than chance a fight. Hey, at least that way the video is visible, right?

This is a ridiculous outcome, by the way. A critic shouldn’t be handing over their wages to the major corporation behind the movie, show, game, or song they are reviewing. That’s never how that job has worked.

Content ID is much more sensitive to audio-only material, matching music much more often than full audiovisual material. You may have heard about the classical musicians who have been consistently blocked while trying to post videos of themselves playing music that no one currently owns because those composers have been dead for hundreds of years. This is why.

What’s happening there is that while the compositions are in the public domain, there is still copyright in specific performances. Let’s say CBS is the label behind an album of classical music written by Beethoven but played in 1990 by Yo-Yo Ma. Ma’s performance is copyrighted, but Beethoven, dying as he did almost 200 years ago, no longer has a copyright in the composition. CBS puts the recording of Ma into Content ID. Content ID then flags anyone playing Beethoven because, unsurprisingly, two people playing the same compositions on the same instrument sound the same to a computer.

Music reviews are simply less common on YouTube for this reason. It’s so much harder to make a living at it since the videos get blocked or the money gets taken away.

Content ID is not the only filter out there, but we have to take the problems with it more seriously. First of all, Google has spent over $100 million on Content ID, and it’s still a pile of garbage. Second, YouTube’s basically cornered the market on user-uploaded video. The very existence of the word “YouTuber” proves that. People are far more often called YouTubers than they are vloggers, video essayists, or any other generic term. Everyone I talked to was clear that they were on YouTube not because it was the best option, but because it was the only option.

While I dearly love to bash Big Tech, technology isn’t magic. The algorithm is not going to save us. We should be very skeptical of anyone who says this, in any context. Google has spent $100 million trying to get Hollywood off its back, and it hasn’t worked. And if filters become a requirement, who is going to be able to afford that? Just YouTube and its parent company Google, probably.

The promise of the internet was lowering barriers to expression. Studies keep showing that the world of mainstream criticism is overwhelmingly white and male. Part of the solution has been bypassing traditional gatekeepers and going directly to audiences. But Content ID stands between creators and audiences by blocking videos. It makes independent criticism a difficult job since it unjustly redirects revenue from the critic to the criticized. If the content cartel has its way, filters will be everywhere, and again all sorts of voices will disappear.

Artists who complain about infringement have real concerns, but the idea that filters will save them is deeply misguided. Instead, a whole other set of artists will be all but wiped out. We should be asking ourselves, constantly, why it is that there are so many ways to fast-track copyright claims, but not anything else.

This isn’t to say that platforms should be taking down more speech, but that they spend a disproportionate amount of time and energy on policing intellectual property. Why are intellectual property claims the fastest way to get something vanished off the internet? Possibly because the groups making those claims are some of the largest companies in the world, with the resources to make Big Tech worry and get Congress to do their bidding.

Laws that increase penalties for intellectual property violations and make more and more hoops for people to jump through to share their thoughts are not wins for free expression or for any regular internet users. The only winners will be Big Tech and Big Content.

Any proposed law that prevents people from being heard, enriches monopolies already making obscene amounts of money, and shreds the ability of regular people to use the internet should be fought as hard as possible.

Since I talked to Brewis three months ago, he has had a years-old video completely blocked and a video about the BBC’s Sherlock blocked in the UK by Content ID. This is the future some people want for everyone. And we need to stop it.

Katharine is the Associate Director of Policy and Activism at the Electronic Frontier Foundation and the former managing editor of io9. She writes about technology policy and pop culture.
