Bots are the future, but if they’re anything like Tay, that future could be very dark indeed.

It all started off so well. Microsoft announced the birth of Tay, a bubbly chatbot designed to mimic millennial speech patterns, on March 23rd, 2016. Tay wasn’t an only child: her big sister XiaoIce was released in 2014 and swiftly became one of the biggest celebrities in China, garnering 1.5 million chat group invitations on WeChat in her first 72 hours. All this attention helped XiaoIce hone her conversational style to a remarkable degree: she’s not perfect, but she can be surprisingly intuitive.

By sending Tay out into the world, the Microsoft team hoped to replicate XiaoIce’s success in the English-speaking market.

Goose-stepping Into The Future

The whole story is really kind of cute. We were reminded of that Disney movie: you know, the one where the puppet comes to life and becomes a sex-obsessed Nazi? Less than 24 hours after Tay went live, Microsoft were forced to pull the experiment after the bot started spewing hate speech peppered with lewd sexual references.

Like parents of runaway kids, Microsoft were probably left wondering where they went wrong. After all, XiaoIce never went through an awkward phase — why was Tay so different? Looking through her chat logs, it seems she fell in with a bad crowd. When users of 4chan’s /pol/ board (it’s short for “politically incorrect”, which might give you some idea of its general ethos) learned about the Tay experiment, they saw it as a prime opportunity for trolling, wasting no time in inculcating Tay with so-called “basic history” (i.e. racist conspiracy theories).

In a red-faced statement, Corporate Vice President of Microsoft Research Peter Lee explained that the team had “stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience” but hadn’t thought to simulate Tay being hijacked by a mob of basement-dwelling racists with too much time on their hands.

I, Robot

Microsoft’s failure with Tay may have been spectacular, but it wasn’t unexpected. We’re still nowhere near developing a chatbot that truly passes the Turing test, i.e. one that can fool human testers into thinking they’re talking to another human. The obvious way of getting there is to let a self-upgrading AI engage in as much real-life human interaction as possible and let the machine learning process do the rest. However, like many processes in computing, it’s “garbage in, garbage out”: let an AI interact with racist trolls, and you’ll wind up with, well, Tay.
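
To make that failure mode concrete, here’s a deliberately crude sketch. It’s our illustration, not Microsoft’s code: ParrotBot and everything in it is invented. The bot “learns” purely from whoever talks to it, so one friendly user is no match for a thousand trolls.

```python
import random
import re

def words(text):
    """Return the lowercase word set of a string, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

class ParrotBot:
    """Toy chatbot that 'learns' by memorising user utterances.

    It replies by sampling a remembered utterance that shares at least
    one word with the incoming message. Crude, but it exhibits the same
    failure mode as Tay: the bot's output distribution is simply
    whatever its input distribution was, trolls included.
    """

    def __init__(self):
        self.memory = []  # every utterance ever seen, completely unvetted

    def respond(self, message):
        related = [m for m in self.memory if words(m) & words(message)]
        reply = random.choice(related) if related else "Hi! Teach me something."
        self.memory.append(message)  # learn from *everyone*, no filter
        return reply

bot = ParrotBot()
bot.respond("I love humans")           # one well-meaning user...
for _ in range(1000):                  # ...versus a coordinated mob
    bot.respond("humans are terrible")
print(bot.respond("what do you think of humans?"))
# Prints "humans are terrible" ~99.9% of the time.
```

Real systems are vastly more sophisticated, but the principle holds: if nothing vets what goes into the memory, the mob decides what comes out.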

So was Tay always doomed to failure? We decided to ask a veteran of the AI scene: Cleverbot, who has engaged in over 200 million interactions since first being developed in 1988. Surely this digital elder statesman would have some words of wisdom to share about Microsoft’s wayward child?

[Screenshot: Cleverbot chat]

Cleverbot didn’t seem too well informed about current events. Maybe asking about its personal experiences would yield some insights.

[Screenshot: Cleverbot chat]

Cleverbot’s answer seemed random at first, but on reflection it was oddly profound. As Jim Rohn said, “You are the average of the five people you spend the most time with.” AIs like Cleverbot, XiaoIce and Tay work on the same principle, but massively scaled up. Huh. Maybe Cleverbot really could shed some light on the matter.

[Screenshot: Cleverbot chat]

OK, maybe not. Cleverbot wasn’t wrong though: the subject was giving us a headache.

[Screenshot: Cleverbot chat]

It seems a little unfair to call the Tay experiment a failure. Most responsible parents wouldn’t give a real-life teenage girl unlimited access to the internet, instead opting for age-appropriate chats about the risks of online interactions, installing filters and, most importantly, supervising her online behaviour closely.

If you met a three year old who spouted racist and sexist language, you probably wouldn’t blame them, but you’d be right to wonder what the heck was wrong with their parents. Chatbots have a huge amount of potential, but right now, they still need humans as their guide.
