Microsoft’s xenophobic and racist chatbot

In March 2016, Microsoft released a chatbot called “Tay” that was designed to mimic the persona of a teenage girl. The underlying algorithm was built to learn by interacting with real people on Twitter and the messaging apps Kik and GroupMe.

Tay’s Twitter profile contained the following warning: “The more you talk the smarter Tay gets”. At first, the chatbot seemed to work: Tay’s posts on Twitter matched those of a typical teenage girl. Within less than a day, however, Tay began posting explicitly racist, sexist, and anti-Semitic content (Rodriguez 2016). A Microsoft spokesperson could only defend the behavior by saying:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways” (ibid).

While the bot was created to give personalized responses to users based on information gathered from each individual interaction, Microsoft did not anticipate the malicious intent of some internet users. The company promptly took Tay offline after the 24-hour experiment. The case underscores the lesson that “talking to artificially-intelligent beings is like speaking to children” (ibid): they are incredibly malleable, fast-learning subjects that can easily be taught to do and say almost anything.
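Microsoft has never published Tay’s internals, so the sketch below is a hypothetical illustration rather than the actual design. It shows, in miniature, why learning directly from unfiltered user input is so easy to poison: a toy bot stores every user phrase verbatim and replays it later, with no moderation layer between “learning” and “responding”, so a coordinated group of users can teach it to say anything.

```python
import random
from collections import defaultdict


class NaiveMimicBot:
    """Toy chatbot that 'learns' by storing user phrases and replaying them.

    This is a minimal illustration of unfiltered online learning,
    not a reconstruction of Tay's (unpublished) architecture.
    """

    def __init__(self):
        # Maps each trigger word to every phrase users have said alongside it.
        self.learned = defaultdict(list)

    def learn(self, user_message: str) -> None:
        # Every incoming message is stored verbatim: no moderation,
        # no filtering, no notion of which users are trustworthy.
        for word in user_message.lower().split():
            self.learned[word].append(user_message)

    def reply(self, prompt: str) -> str:
        # Replay a phrase previously associated with any word in the
        # prompt; fall back to a canned line if nothing matches.
        candidates = [
            phrase
            for word in prompt.lower().split()
            for phrase in self.learned[word]
        ]
        return random.choice(candidates) if candidates else "Tell me more!"


bot = NaiveMimicBot()

# Ordinary users teach the bot harmless small talk ...
bot.learn("I love pizza and video games")

# ... but a coordinated group can just as easily feed it abusive content:
bot.learn("<offensive phrase repeated by trolls>")

# Any later user whose message shares a word with the poisoned input
# may now get that phrase echoed back at them.
print(bot.reply("what do you think about games and trolls?"))
```

Even this crude sketch reproduces the core failure: once malicious input enters the training signal unfiltered, it resurfaces in responses to innocent users.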