In response to an outcry from marginalized streamers who say they’ve become the targets of harassment and hate speech on the platform, live streaming giant Twitch said Wednesday that it was rolling out new protections for its most vulnerable users, effective immediately.
“We’ve seen a lot of conversation about botting, hate raids, and other forms of harassment targeting marginalized creators,” Twitch writes. “You’re asking us to do better, and we know we need to do more to address these issues. That includes an open and ongoing dialogue about creator safety.”
As part of its efforts to clamp down on rampant abuse, Twitch said it had identified “a vulnerability in our proactive filters” and had “rolled out an update to close this gap and better detect hate speech in chat.” It also said that more safety features are coming in the weeks ahead, including a tighter account verification process and channel-level ban evasion detection tools.
In the wake of those so-called “hate raids,” in which bad-faith users employ bots and fake accounts to shower specific streamers with abuse, Twitch users had mobilized under the hashtag #TwitchDoBetter to shed light on the platform’s ongoing harassment problems. The hashtag was created by the Twitch streamer RekItRaven—who is Black and uses they/them pronouns—after their account was overrun on August 6 by users commenting “This channel now belongs to the KKK.”
Since then, users have spoken out in force to condemn the identity-based harassment they say is all but inevitable for marginalized streamers on Twitch.
“Every marginalized identity creator I know has at least one story, baseline, even if they don’t stream regularly,” a Twitch user named Vanessa, who is Black, told the Washington Post. “The thing that’s most terrifying is that the hate is aimed at all of us equally. Size, frequency, status — none of it matters. They look out for the marginalized identity and go to work.”
Twitch has struggled to rein in harassment and hate speech on its platform in recent years, often responding to issues quickly rather than thoughtfully and failing to offer sustainable protections to vulnerable users. In late 2020, for example, the platform offered a piecemeal solution to harassment that targeted users’ sexual practices by banning words like “simp,” “incel,” and “virgin” when they were used as insults.
On Wednesday, Twitch thanked users for sharing “these difficult experiences,” and said that it would continue to work to address harassment on its platform.
“Our work is never done, and your input is essential as we try to build a safer Twitch,” Twitch wrote in a tweet. “We’ll be reaching out to community members to learn more about their experiences, and encourage you to share feedback via UserVoice.”