Twitter's One Step Closer to Figuring Out Why It Sucks


Twitter, a social network known for its perpetual feed of vitriol and targeted harassment, is turning to academia to answer the decade-old question: Why is it so bad?

The social network announced on Monday that it has chosen two partners to help it “measure the health of public conversation on Twitter.” The company noted in a blog post that a review committee—Twitter employees from various departments—looked at over 230 proposals since March. The process whittled down the finalists over several rounds, ultimately resulting in the final two projects.


The first group of experts, led by Dr. Rebekah Tromble, assistant professor of political science at Leiden University, will look at “echo chambers and uncivil discourse” to figure out “how communities form around political discussions on Twitter, and the challenges that may arise as those discussions develop.”

One of the metrics the group will analyze is “the extent to which people acknowledge and engage with diverse viewpoints on Twitter,” which in our experience looks more like an army of trolls yelling “I hope you die” at a woman who tweeted about the gender wage gap than a thoughtful, informed interaction. The group will also aim to develop algorithms that can distinguish between incivility and intolerant discourse. Intolerant discourse, the group found, “is inherently threatening to democracy.”


Twitter’s second research partner will look at “how people use Twitter, and how exposure to a variety of perspectives and backgrounds can decrease prejudice and discrimination,” the company wrote. The experts—professors from the University of Oxford and the University of Amsterdam—will help develop “text classifiers for language commonly associated with positive sentiment, cooperative emotionality, and integrative complexity [that] will be adapted to the structure of communication on Twitter.”

It’s inarguably a good thing for Twitter to examine how its platform is used and how that impacts its users—especially if that impact is severely unhealthy. It’s also a good thing for this scrutiny to come from outside parties, though there is inherent bias in the chosen topics given that they were hand-selected by Twitter itself. This research is part and parcel of the company’s larger mission to make its service less abusive and more productive—an admirable and essential effort, if a sorely belated one.