Twitter is well aware that it’s a breeding ground for online harassment, and it has done a terrible job stopping it. Today, the company announced three tactics to better combat tweeting trolls.
First, it expanded its definition of a threat to cover a wider umbrella of shitbag tweets. Then it introduced a “locking” policy that gives it more control over blocking threatening accounts. But then, and this is where things get weird, Twitter announced a sort of troll-hunting product.
Twitter has already started rolling the product out. The language is vague, but here’s the plan:
We have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive.
It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.
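The mechanics Twitter describes can be sketched in a toy form. To be clear, everything below is an assumption: Twitter hasn’t published how its classifier actually works, so the similarity measure, the weights, and the threshold are all invented purely for illustration of the two signals it names (account age and similarity to previously flagged content).

```python
from datetime import datetime, timedelta

# Hypothetical examples a "safety team" previously judged abusive (invented).
KNOWN_ABUSIVE = {"go away loser", "nobody wants you here"}

def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two tweets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def abuse_score(text: str, account_created: datetime, now: datetime) -> float:
    """Combine the two signals Twitter mentions: account age and similarity
    to content previously determined to be abusive. Weights are made up."""
    max_sim = max((jaccard_similarity(text, k) for k in KNOWN_ABUSIVE), default=0.0)
    is_new_account = (now - account_created).days < 7  # arbitrary cutoff
    return 0.7 * max_sim + 0.3 * (1.0 if is_new_account else 0.0)

def tweet_is_visible(text: str, account_created: datetime, now: datetime,
                     viewer_follows_author: bool) -> bool:
    """Mimic the described behavior: a suspected-abusive tweet stays visible
    only to people who explicitly sought it out (e.g. followers); everyone
    else never sees it, and the author is never notified."""
    if abuse_score(text, account_created, now) >= 0.5:  # arbitrary threshold
        return viewer_follows_author
    return True

now = datetime(2015, 4, 21)
fresh_account = now - timedelta(days=1)
# A day-old account echoing known-abusive text gets hidden from non-followers.
print(tweet_is_visible("go away loser", fresh_account, now,
                       viewer_follows_author=False))  # → False
```

Note the key design choice the announcement implies: the tweet isn’t deleted and the author isn’t told anything; the system just quietly narrows who can see it.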
It’s important that Twitter is actively trying to moderate its community more aggressively to keep people safe, but this is an odd product. It’s an automated system with no human moderators, so what happens if someone’s innocent tweets get flagged?
Twitter isn’t going to tell the authors of the flagged tweets that it’s hiding their words, which is kind of bizarre: it’s tantamount to a tweet-specific shadowban. Nor will it proactively lock an account that displays a pattern of abusive behavior. Instead, it’ll punish trolls by showing their tweets only to people who explicitly seek them out.
So basically, Twitter has an experimental algorithm devoted to predicting abusive tweets and making them hard to find, without telling the trolls that it’s hiding their tweets or hindering them in any other way.
Google has already funded a research project to develop a similar algorithm, so the idea is clearly floating around Silicon Valley. But this specific variation on it is kind of bizarre: it mildly mutes abuse rather than eradicating it.
Unless Twitter makes it impossible for people to sign up for an infinite number of Twitter accounts, it’s never going to be able to solve its troll problem. Even if it’s easier to report a threat now, and Twitter has more control over locking accounts, it takes literally less than a minute to register for a new account.
Trolls can simply keep switching from one account to another as soon as they get reported, and something tells me they’ll find a way to get their tweets seen. The decision to expand the definition of a threat was long overdue and necessary, and locking seems like it could dissuade some abuse, which is good. But none of these tactics address the tricky task of stopping trolls from resurfacing under another alias.