“Don’t read the comments” has become a common maxim for anyone who has spent meaningful time online; it means that if you go sifting through online comments, you’re going to see some shit. A new experimental tool from Google aims to give users more power over what they do and don’t see. As it stands, it’s nice in concept but still pretty futile in practice.
The tool, called Tune, is a Chrome extension rooted in the same machine learning models used for Perspective, Google’s API that flags “toxic” content. Tune also identifies toxic content, but rather than serve developers and publishers, it is intended for everyday users. It works on platforms like YouTube, Facebook, Twitter, Reddit, and Disqus, and it gives users the option to choose how much content they want to filter.
According to an announcement post from CJ Adams, a product manager at Google-owned Jigsaw, you can adjust “the volume of toxic comments” with a control that looks like a volume dial in the shape of a chat bubble. Turn the volume all the way down and you get “zen mode,” which filters out every comment the tool identifies as toxic; turn it all the way up and every comment shows. The middle ground lets users customize which types of toxic comments they want to see.
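Google hasn’t published how the dial maps to its underlying models, but the idea is easy to picture as a score threshold. The sketch below is a hypothetical illustration: the toxicity scores, the `filter_comments` function, and the 0.1 floor (so “zen mode” still shows clearly non-toxic comments) are all invented for this example, not taken from Tune itself.

```python
# Hypothetical model of Tune's "volume dial": treat the dial position as a
# toxicity-score cutoff. Scores are stand-ins for what a Perspective-style
# model might return (0.0 = benign, 1.0 = highly toxic); the real
# extension's thresholds and scoring are not public.

def filter_comments(comments, volume):
    """Keep a comment only if its toxicity score falls under the dial's cutoff.

    volume=0.0 is "zen mode" (only clearly non-toxic comments survive);
    volume=1.0 shows every comment. The 0.1 floor is an arbitrary choice
    for this sketch so that zen mode doesn't hide benign comments too.
    """
    threshold = 0.1 + 0.9 * volume
    return [text for text, score in comments if score <= threshold]

comments = [
    ("Great video, thanks for sharing!", 0.05),
    ("This take is kind of lazy.", 0.40),
    ("You are an idiot and should log off forever.", 0.92),
]

print(filter_comments(comments, volume=1.0))  # all three comments show
print(filter_comments(comments, volume=0.5))  # drops the 0.92-score insult
print(filter_comments(comments, volume=0.0))  # zen mode: only the benign comment
```

The point of the sliding threshold is that "toxicity" isn't a binary for these models; it's a probability, and the dial just decides how much of that probability mass you're willing to look at.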
When you add the extension to Chrome, the default setting filters for toxicity, with “toxic” comments characterized as “comments likely to make people leave a discussion.” You can also customize with other filters, a feature labeled “Very Experimental”: attacks on identity, insults, profanity, threats, and sexually explicit content. Users check boxes next to each type of comment they want to filter. These experimental options cover the kinds of comments most likely to punch you in the gut and let all the air out of you; censoring them also contributes to the sanitization of the web.
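Those category names mirror attributes exposed by Google’s public Perspective API (IDENTITY_ATTACK, INSULT, PROFANITY, THREAT, SEXUALLY_EXPLICIT), so the checkbox behavior can be sketched as a per-attribute test. The per-comment scores, the `hide_comment` helper, and the 0.8 cutoff below are invented for illustration; Tune’s actual logic isn’t documented.

```python
# Sketch of the "Very Experimental" per-category filters: a comment is hidden
# if any category the user has checked scores above a cutoff. Category names
# follow Perspective API attributes; scores and cutoff are made up here.

CUTOFF = 0.8  # hypothetical score above which a category counts as present

def hide_comment(scores, checked_categories):
    """Return True if any user-checked category scores above the cutoff."""
    return any(scores.get(cat, 0.0) > CUTOFF for cat in checked_categories)

# Invented scores for a single comment: insulting, mildly profane, no threat.
comment_scores = {"INSULT": 0.91, "PROFANITY": 0.35, "THREAT": 0.02}

print(hide_comment(comment_scores, {"THREAT"}))            # False: no threat detected
print(hide_comment(comment_scores, {"INSULT", "THREAT"}))  # True: insult above cutoff
```

A single hard cutoff like this is exactly where the article’s worry bites: a sex-education comment that scores high on SEXUALLY_EXPLICIT gets hidden just as readily as actual harassment.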
Of course, scrubbing these types of comments from view isn’t actually sweeping them off the web—their removal is unique to each user’s preferences. Meaning, if someone wants to filter any mention of sexual acts or lewd content from their comments section, that only applies to their experience online. But as we’ve seen, these types of filters sometimes target educational and sex-positive content. While you may be wiping your comments of unwanted adult content, you may also erase sex education or empowering commentary on sexual identity. Machines still lack the ability to understand the nuance and complexities of human language, which has been increasingly evident in the failures of social networks’ AI-based moderation efforts.
Jigsaw’s Adams acknowledges that Tune isn’t a fully baked product yet, noting in the blog post that it is still in its experimental phase. “It still misses some toxic comments and incorrectly hides some non-toxic comments,” he wrote. “We’re constantly working to improve the underlying technology, and users can easily give feedback right in the tool to help us improve our algorithms.” Adams also pointed out in the blog post that this tool isn’t designed to be “a solution for direct targets of harassment (for whom seeing direct threats can be vital for their safety), nor is Tune a solution for all toxicity.”
Giving users more individual power over the type of content they see and don’t see on the internet is useful, and working toward such a future is a good thing. But it’s important to note that Tune is very much a glimmer of that possibility. For starters, it’s a browser extension, so it won’t work in apps. And like other moderation efforts leaning on AI, it’s going to miss actual toxic content and remove content that is harmless or, more insidiously, educational and empowering.
The default setting, filtering out discussions that might turn you away from content online, is also pretty vague. If it simply catches content you disagree with, this kind of filter might only deepen our online echo chambers. Meanwhile, the filters for content intended to threaten you, insult you, or attack your identity are still labeled experimental. If a tool can’t reliably clean up the comments that really shake you to your core, it seems misguided to position it as a way to address toxicity online.