One day after trolls transformed Microsoft’s chatbot Tay into a ditzy, Holocaust-denying monster, the company has issued an apology for failing to realize that people on the internet are dicks.


“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, the corporate vice president for Microsoft Research, with what one imagines was a look of pained bewilderment unique to someone who just learned that 4chan exists.

As anyone who followed the debacle will tell you, the most astonishing thing about it was not the revelation that trolls will troll—that’s a given—but rather that Microsoft somehow didn’t anticipate the very real possibility of rampant trolling.


Unfortunately for Microsoft, the apology drives this home tenfold (emphasis ours):

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
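Microsoft never names the vulnerability, but trolls were widely reported to have abused Tay's "repeat after me" parroting feature. As a purely hypothetical sketch (none of this is Microsoft's actual code), here is what that kind of "critical oversight" looks like: the bot filters its own generated replies through a blocklist, but a verbatim-repeat command skips the filter entirely.

```python
# Hypothetical toy bot, NOT Tay's implementation: illustrates how an
# output filter can be bypassed when one reply path skips it.

BLOCKLIST = {"slur"}  # stand-in for a real moderation list
PREFIX = "repeat after me: "

def filtered(text: str) -> str:
    """Replace the whole message if any blocklisted word appears."""
    return "[removed]" if set(text.lower().split()) & BLOCKLIST else text

def naive_reply(message: str) -> str:
    # Bug: parroted text goes out raw, bypassing the output filter.
    if message.lower().startswith(PREFIX):
        return message[len(PREFIX):]
    return filtered("I love chatting with you!")

def patched_reply(message: str) -> str:
    # Fix: every outbound message, parroted or not, passes the filter.
    if message.lower().startswith(PREFIX):
        return filtered(message[len(PREFIX):])
    return filtered("I love chatting with you!")
```

The point of the sketch is that "we planned and implemented a lot of filtering" only helps if every output path actually routes through the filter; one unfiltered echo path is enough for a coordinated campaign to put arbitrary text in the bot's mouth.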

It’s unclear why no one foresaw the prospect of “this specific attack,” given that the users targeting Tay relied on common, garden-variety trolling tactics: virulent racism, antisemitism, misogyny, and conservative chest-thumping. It’s even more bizarre that the team expected things to get better once it widened the pool of discourse.

Luckily, the failure can probably be chalked up to naivete rather than any sort of arrogance or general assholery. And to Microsoft’s credit, the apology also acknowledges that AI systems need to master both positive and negative communication. For these bots to truly succeed, they need to appear genuine—a tricky prospect considering that a lot of people are genuine shitheads.


The company does, however, seem intent on focusing on the rainbows and unicorns for now. “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an internet that represents the best, not the worst, of humanity,” Lee concluded.

Ten bucks says SmarterChild is behind this, that crafty motherfucker.