Twitter, which has a spotty history of applying policies governing misinformation, hate speech, and inciting rhetoric from government officials, elected representatives, and unverified propagandists, said Thursday that it is working to ensure “healthy civic conversation” ahead of the U.S. midterms.
In a blog post, the company said it had reactivated its civic integrity policy, which works to limit messaging designed to deceive users about “when, where, or how” to vote. While the company says content aimed at “manipulating or interfering in elections” is not allowed, it says it plans to respond to such posts by adding labels notifying users that they’re misleading.
Twitter says it will refrain from recommending or amplifying this content once it’s labeled (namely, by removing it from feeds organized by its “Top Tweets” algorithm), and users trying to “like” or retweet the content will be prompted with a discouraging notice. Tweets tagged by Twitter’s civics team as having “potential for harm” will become unshareable entirely, it says (so anyone looking to spread them will have to take screenshots).
At the same time, Twitter says it intends to “prebunk” misinformation by inserting accurate details concerning when and where to vote into users’ timelines; create state-specific “hubs” producing real-time information from trusted sources; and launch a dedicated tab that will serve up important election-related announcements.
The accounts of people running for office will also be labeled as such, a feature Twitter began deploying in May.
“As election day nears, we’ll continue to share real-time information about our approach,” the company said.
Twitter is currently involved in litigation attempting to force the sale of the company to billionaire Elon Musk, who is now trying desperately to claw his way out of the $44 billion merger agreement, likely the most expensive attempt to “own the libs” in recorded history.
A study out of Stanford University last month found that, during the 2020 election, Twitter managed to label only 70 percent of misleading claims.
In an article for Lawfare Blog, the researchers—Samantha Bradshaw of American University and Shelby Grossman of the Stanford Internet Observatory—explained:
We found many examples of Twitter treating identical tweets differently as well. On Nov. 4, 2020, Trump tweeted, “We are up BIG, but they are trying to STEAL the Election. We will never let them do it. Votes cannot be cast after the Polls are closed!” Twitter put the tweet behind a warning label. In response, Trump supporters shared the text of the tweet verbatim. Sometimes Twitter labeled these tweets and sometimes it didn’t, even though the tweets were identical and tweeted within minutes of each other.
Examining a pool of over 600 misleading tweets, Grossman and Bradshaw noted that Twitter was “22 percent more likely to label tweets from verified users, compared to unverified users.”
The researchers said they were unable to draw any reasonable conclusions as to why 30 percent of misleading claims were treated differently by Twitter, even when tweets were “seemingly identical” to others it did label. Their best guess: the tweets were placed into a queue for content moderators, who just happened to respond differently to the same misinformation.