Microsoft’s Tay had absolutely no chill whatsoever

Image: Microsoft

Microsoft’s Tay caused quite a stir when it showed up in 2016. The bot was the brainchild of the company’s Technology and Research division and its Bing team, which built it to research conversational understanding. Instead, it showed us how awful people can be when interacting with artificial intelligence.

Tay’s name was an acronym for “thinking about you,” which perhaps set the stage for why no one took the bot seriously. It was also built to mine public data, which is why things took a turn for the worse so quickly. As we reported back then:

While things started off innocently enough, Godwin’s Law—an internet rule dictating that an online discussion will inevitably devolve into fights over Adolf Hitler and the Nazis if left for long enough—eventually took hold. Tay quickly began to spout off racist and xenophobic epithets, largely in response to the people who were tweeting at it—the chatbot, after all, takes its conversational cues from the world wide web. Given that the internet is often a massive garbage fire of the worst parts of humanity, it should come as no surprise that Tay began to take on those characteristics.

Once Tay was opened up to the public, people exploited the bot until it started posting racist and misogynist messages in response to their queries. It’s similar to what happened to IBM’s Watson, which took up swearing after it was fed the Urban Dictionary.

Tay was pulled offline the same year it made its debut, suspended by Microsoft for reprogramming. We haven’t heard from the bot since.