
We Can All Learn Something From How Quickly Microsoft's Chatbot Turned Into a Racist

Racism, sexism and xenophobia are all too easily learned.


There’s an important lesson for us humans in just how quickly Microsoft’s chatbot learned how to spew out racist, sexist and other hateful messages.

For those who missed it, yesterday Microsoft turned on Tay, a chatbot designed to converse with, and mimic the speech patterns of, millennials. But, in less than a day, Microsoft was forced to take Tay offline as the bot started sending offensive messages.

Though Tay was apparently influenced by intentional hate speech, the fact is that humans are, too, and from an early age. Racism, sexism and xenophobia are all learned behaviors that are challenging to unlearn.

It is hard to blame Tay for quickly picking up on hate when we live in a world where Donald Trump can spew anti-Muslim rhetoric and still be a major party’s front-runner. Meanwhile, on Tuesday, North Carolina managed to propose, pass and enact legislation stripping civil rights from an entire group of people.

Microsoft isn’t the first to struggle in this area. IBM taught Watson the entire Urban Dictionary but quickly decided its computer would be better off not knowing everything.

So, yes, Microsoft was right to take Tay offline.

“Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” Tay says in a message on its website.

Instead of teaching Tay to mimic humanity, Microsoft is going to have to teach it to be better than humanity, to filter out our worst inclinations and focus on our better selves. Luckily for Microsoft, it is likely a matter of tweaking a few algorithms and adding more filters.
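To make the idea of "adding more filters" concrete, here is a minimal, purely illustrative sketch of a keyword-based output filter. The blocklist, function names and fallback message are assumptions for the sake of the example, not Microsoft's actual approach; real systems rely on trained classifiers rather than a hand-written word list.

```python
# Illustrative only: a naive keyword filter applied to a chatbot's candidate reply.
# The blocklist below is a placeholder, not a real list of offensive terms.

BLOCKLIST = {"slur_one", "slur_two"}  # hypothetical blocked terms


def is_acceptable(reply: str) -> bool:
    """Return False if the candidate reply contains any blocked term."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return BLOCKLIST.isdisjoint(words)


def filtered_reply(candidate: str,
                   fallback: str = "Let's talk about something else.") -> str:
    """Send the candidate reply only if it passes the filter; otherwise deflect."""
    return candidate if is_acceptable(candidate) else fallback


if __name__ == "__main__":
    print(filtered_reply("hello there"))               # passes the filter
    print(filtered_reply("something with slur_one"))   # replaced by the fallback
```

Even this toy version shows why filtering alone is a blunt instrument: it can only block what its authors anticipated, which is part of why learned behavior is so hard to constrain after the fact.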

If only changing humans were that easy.

This article originally appeared on Recode.net.