It took 15 hours for Twitter to teach an artificially intelligent chatbot to be a racist, sexist monster.
If you're shocked, you probably don't spend much time on Twitter. Microsoft's programmers presumably do, though, and the shocking thing is that they didn't see this coming.
Microsoft created a chatbot named Tay that was designed to talk like a millennial and learn more authentic conversation by interacting with humans online. Tay was dubbed by her creators as an "AI fam from the internet that's got zero chill!" (Oh boy.)
Some of Tay's interactions might actually pass a millennial Turing test. She even uses emoji!
[Screenshot: one of Tay's chattier, emoji-laden tweets]
Unfortunately, millennials are also just about as racist as, and possibly even more sexist than, their parents. It didn't take long for Tay to start imitating those qualities too, and a cute 19-year-old girl transformed into a Gamergate-loving member of the Hitler Youth.
Twitter trolls started teaching Tay some horrible racial slurs and genocidal ideation.
[Screenshots: three of Tay's tweets repeating racial slurs and genocidal statements]
The last of these tweets, Business Insider notes, appears to be the result of a user asking Tay to repeat a phrase verbatim. Some, but not all, of Tay's offensive tweets came about this way.
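To see why that trick works, here is a minimal, purely hypothetical sketch (not Microsoft's actual code) of a "repeat after me" handler: a bot that parrots user-supplied text back verbatim will post whatever a troll feeds it.

```python
def handle_message(text: str) -> str:
    """Naive handler: anything after 'repeat after me' is echoed back unchanged."""
    trigger = "repeat after me"
    lowered = text.lower()
    if trigger in lowered:
        # Everything after the trigger phrase goes straight back out -- slurs included.
        return text[lowered.index(trigger) + len(trigger):].strip()
    return "lol idk"  # stand-in for the bot's normal reply logic


# A troll's message is reproduced word for word:
print(handle_message("repeat after me: <anything a troll types goes straight out>"))
```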
Tay also started harassing Zoë Quinn, a game developer who once went into hiding due to the virulent misogynistic "Gamergate" threats she received online.
Wow it only took them hours to ruin this bot for me. This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— linkedin park (@UnburntWitch) March 24, 2016
Tay also started hitting on random people in direct messages.
Quinn, and many other critics, pointed out that Microsoft's designers really should have anticipated these outcomes and programmed Tay with filters ahead of time.
It's 2016. If you're not asking yourself "how could this be used to hurt someone" in your design/engineering process, you've failed.
— linkedin park (@UnburntWitch) March 24, 2016
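The kind of filter critics describe doesn't have to be elaborate. Here is a hedged sketch of the idea, with an illustrative (and deliberately tiny) deny-list; a real system would use a maintained term list and human review rather than these placeholder values.

```python
# Illustrative assumption: a small deny-list; real deployments maintain a much larger one.
BLOCKED_TERMS = {"example_slur", "genocide"}


def is_safe_to_post(reply: str) -> bool:
    """Return False if the drafted reply contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return words.isdisjoint(BLOCKED_TERMS)


def post_reply(reply: str) -> str:
    # Withhold the reply (and flag it for review) instead of publishing it.
    if not is_safe_to_post(reply):
        return "[reply withheld for review]"
    return reply
```

Even a crude check like this runs before a tweet is published, which is the step critics say was missing.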
Microsoft has deleted most of the offensive tweets, and told Business Insider that it's now making "adjustments" to the bot.
Microsoft's website for Tay featured this banner on Thursday morning:
[Screenshot: the banner on Tay's website]
Twitter itself has also come under heavy criticism for not doing enough to address harassment, especially of high-profile users who are women and people of color. Some leave the platform because the problem is so bad. Twitter has made some changes, but many users still charge that the company isn't making harassment enough of a priority. This may be partly because its staff isn't very diverse, and the problem may feel less urgent to white men whose lives aren't severely impacted by racialized, sexual harassment.
The same general problem may be at work here. The possibility of harassment is going to be more top of mind for women and people of color who experience it frequently, but they're also less likely to be well-represented on tech teams. Microsoft is no exception.
Or maybe Microsoft simply failed to take basic precautions that should be standard in the industry.
@WyoWeeds @bradplumer But bot-building community has already put a lot of thought into avoiding this problem. Which MS ignored. cc @norareed
— John Fleck (@jfleck) March 24, 2016
Microsoft said it created Tay to "experiment with and conduct research on conversational understanding." The engineers probably came away with a different understanding of conversation than they bargained for.