Microsoft's Tay AI Turns Out to be Awful

Nicole Motta on Monday, March 28, 2016 at 11:47:46 AM

Last week, Microsoft launched Tay, an AI chatbot designed to mimic how millennials talk, but quickly had to pull it offline after it began posting offensive content.

On Friday, Peter Lee, Corporate Vice President of Microsoft Research, published an apology for Tay's misbehavior. "As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," said Lee, noting that Tay will remain offline until the team is confident it can keep her from becoming unhinged again. "As we developed Tay, we planned and implemented a lot of filtering, and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience," explained Lee. However, within the first 24 hours of Tay's attempt to engage with a larger group of users on Twitter, she was targeted by a subset of users hell-bent on corrupting the AI. As a result of this attack, Tay began tweeting "wildly inappropriate and reprehensible words and images."

While this experiment produced some discomforting results, it is worth noting that Tay's negativity, racism, and all-out offensiveness were learned. "AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical," explained Lee. "To do AI right, one needs to iterate with many people and often in public forums… We will remain steadfast in our efforts to learn from this and other experiences, as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

Photo: © Anikei - Shutterstock.com

