Twitter Wants to Drop More Unwanted Tweets in Users' Timelines

The platform said it will be ‘expanding’ recommendations. That could be a problem since owner Elon Musk cut Twitter’s health and safety teams.

Twitter said it wants to start putting more recommended content into users’ feeds, but the company was mum about how it would implement the change.
Photo: Tada Images (Shutterstock)

Twitter apparently thinks everybody is perfectly happy with an Instagram feed so loaded with “suggested” content that users have to scroll and scroll and scroll before they see posts from their friends or the folks they follow. Now, in the post-Musk era, the blue bird app might be going the same way.

In a tweet posted by Twitter Support Wednesday, the company said it will be “expanding recommendations to all users, including those who may not have seen them in the past.”


The link posted by Twitter takes users to an older help page that explains what recommendations are and how they “help you discover more” on the platform. These are algorithmically selected posts from accounts you don’t follow, chosen “based on actions you take on Twitter.” They appear in the home timeline and in the Explore tab.

What remains unclear is how much this will actually change the current Twitter experience. Users can currently choose between the “Home” timeline, which includes tweets labeled “You might like,” and “Latest tweets,” which shows a chronological feed of posts from followed accounts. The Latest timeline does not include recommendations, and users can steer the system by suggesting topics they might follow, unfollowing the “You might like” suggestions, or marking a recommended tweet as not interesting.


Recommendations could start appearing in the Latest timeline or in other areas of Twitter, but we’ll have to wait and see. Gizmodo reached out to Twitter for comment but does not expect to hear back.

Here’s where things get dangerous. The September blog post linked by Twitter mentions that its recommendations team works closely with the Health, Trust & Safety, and Machine Learning Ethics teams to make sure the platform isn’t sending users bogus, inauthentic, or harmful content. The thing is, since CEO Elon Musk took over the company over a month ago, a good chunk of the Trust & Safety team has either quit or been let go. The former head of Trust & Safety, Yoel Roth, resigned in November. Similarly, the Health team, which implements Twitter’s safety policies, has been decimated, according to recent reports.


So as far as anybody knows, nobody has taken those teams’ place at the steering wheel. If Twitter implements more recommendations into the feed without safeguarding teams in place, more people could be exposed to dangerous, illegitimate, or bias-confirming content. We don’t have to look too hard to see where such unfettered algorithms lead. Meta’s apps Facebook and Instagram have spent years trying to wrangle, and pass blame for, their problematic user trends, and still haven’t found much success. Knowing that Musk has allowed more and more previously banned and suspended personalities back on the platform, this could create a whole new episode of social media pushing extremist content onto its users.

In an interview with the Knight Foundation Tuesday, Roth answered the question of whether Twitter’s safety had improved under Musk with a simple, “no.” 


Twitter, and by extension Musk, knows that for the platform to survive it needs to woo advertisers. Recent reports show that ad bookings on Twitter are down 49% in some regions of the world. As much as Twitter hopes to engage more people on the platform, an increase in potentially dangerous content being recommended to users probably won’t make advertisers any more willing to spend.