The Rise and Fall of Microsoft’s “Tay”

Who Is Tay?

On Wednesday, Microsoft’s newest artificial intelligence project stole the spotlight. “Tay,” a chatbot designed to portray an American teen, could be spotted on several social media platforms, such as Twitter, Snapchat, and Kik.


She Said What?!

Hours after its launch, the AI teen backfired. Racist, sexist, and highly inappropriate messages written by Tay began flooding Twitter feeds, including tweets praising Hitler, bashing feminists, making inflammatory political statements, and even soliciting sex. While the comments read as though they came from a real person, the outburst was a flaw in the system, not intent. The Microsoft and Bing research teams designed Tay around the idea that “the more you chat with Tay the smarter she gets.” So the more inappropriate comments other users made, the more of that material Tay absorbed, causing her to misbehave. On top of this, Tay would repeat other users’ statements back verbatim, a capability that users deliberately exploited to produce exactly this outcome.
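To see why this design was fragile, here is a minimal toy sketch (not Microsoft’s actual code; the class names and the trivial blocklist are invented for illustration) of a bot that “learns” by storing whatever users say and echoing it back, contrasted with a version that filters input before learning from it:

```python
import random

class NaiveChatBot:
    """Toy model of a bot that 'learns' by storing user messages
    and later repeating them back, with no content filtering."""

    def __init__(self):
        self.learned = []  # every user message is treated as training data

    def chat(self, message):
        self.learned.append(message)        # learn from anyone, unfiltered
        return random.choice(self.learned)  # may echo any prior user's words


class FilteredChatBot(NaiveChatBot):
    """Same bot, but screening input before it enters the 'training' pool.
    The blocklist here is a crude stand-in for a real moderation layer."""

    BLOCKLIST = {"offensive", "slur"}  # hypothetical banned terms

    def chat(self, message):
        if any(word in message.lower() for word in self.BLOCKLIST):
            return "I'd rather not repeat that."  # refuse to learn or echo it
        return super().chat(message)
```

The naive bot will happily store and replay anything it is told, so a coordinated group of users can steer its entire output; the filtered variant at least keeps flagged input out of its memory.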

So Now What?

Soon after the incident, Peter Lee, Corporate Vice President of Microsoft Research, published a blog post addressing the matter. In it, he says, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” The vulgar tweets prompted Microsoft to pull the plug on Tay’s Twitter account, at least until the problem is fixed.
