On March 25, 2016, Peter Lee, Corporate Vice President of Microsoft, opened a post on the Official Microsoft Blog with these words: “As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” Unfortunately, the chatbot’s name would never become as popular as the label that topped the headlines about this curious Twitter incident: Microsoft’s racist and homophobic bot.

As the saying goes, technology is neither good nor bad; it is the use people make of it that turns the tool into something good or bad. And with what Microsoft and Twitter went through because of the racist bot Tay, many of us learned that artificial intelligence is a technology full of possibilities, but one that needs to be supervised so that the mistakes made with Tay are not repeated.

Returning to the explanation from Microsoft’s Corporate Vice President: “Tay was not the first artificial intelligence application we released into the online social world. In China, our Xiaoice chatbot is being used by some 40 million people, delighting with its stories and conversations.” In other words, if Xiaoice did well in China, why didn’t Tay do well in the United States?

Pros and Cons of Building Conversational AI

Xiaoice is an artificial intelligence developed at Microsoft’s Software Technology Center Asia. The project started in 2014, and by the summer of 2018 it was already in its sixth version, or sixth generation. A complete success: by that same year, 2018, it already had 660 million registered users. It later took on other tasks besides talking and interacting with people, such as designing images and patterns for products.

But back to the beginning. This artificial intelligence research project born in China was very ambitious. An independent R&D team was set up, with its main office in China at the end of 2013 and a second office in Japan in September 2014. Out of all that work, the Xiaoice AI has created songs, poems and fashion designs, and has even hosted television and radio shows. It is currently present on more than 40 platforms in China, Japan, Indonesia and the United States; that is, you can interact with Xiaoice in applications such as Facebook Messenger, LINE, WeChat, QQ or Weibo.

Not surprisingly, Microsoft wanted to repeat in the United States the success achieved in Asia. Thus Tay was born, short for “Thinking About You”, the bot that, at its worst, would become known as a racist bot. Initially, though, it was a good idea: create a chatbot for young Americans aged 18 to 24. And as Peter Lee explains, they tested Tay in different situations and the results were positive. So what went wrong?

Promotional images of the Tay chatbot, taken from its official website.

Twitter, the battlefield

Although with Elon Musk’s chaotic arrival at Twitter everyone now seems to miss the social network before it disappears, Twitter has always had a certain bad reputation. While on other networks such as Instagram or TikTok “good vibes” abound and content tends to “sweeten” the reality we live in, Twitter became a place of sarcasm, dirty laundry and accusations of all kinds.

Be that as it may, releasing an artificial intelligence on Twitter was a kind of declaration of intent: a testing ground closer to a war zone, where Tay would have to deal with users of every stripe. Some with good intentions; others, experts in trolling and in poisoning any interaction between people, or, in this case, between humans and machines.

So, after internal testing of the Tay chatbot, its managers set it loose to interact on Twitter under the username @TayandYou. Going back to the explanation from Peter Lee, Microsoft’s Corporate Vice President, within the first 24 hours of it coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. According to Lee, that “vulnerability” turned Microsoft’s chatbot into a racist and homophobic bot that tweeted offensive words and posted “inappropriate” images.

4chan conceived the attack on the Tay chatbot

The machine as a reflection of the human soul

The Tay chatbot was an experiment that combined three elements: machine learning, natural language processing and social networks. The first two were the technical part of the experiment, two areas of artificial intelligence research that have made great strides in recent years. But Microsoft did not take the social part into account, and in particular the fact that a group of people would want to test just how far a chatbot could be pushed.

On the technical side there is little to reproach Microsoft for. They created a computer program that learned from what people said to it and then expressed itself in the same way. Unfortunately, that training got mixed up with “bad company” that turned Tay into a racist, sexist and homophobic chatbot.
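To see why “learning from what people say” is so fragile, here is a minimal sketch in Python. It is not Tay’s actual code: the class name, the blocklist and the example phrases are all hypothetical, and a real system would use a trained language model rather than a list of stored phrases. What it illustrates is the underlying pattern: without filtering, every user message becomes future training data, so a coordinated group can poison what the bot says later.

```python
# Hypothetical sketch of unfiltered online learning in a chatbot.
# Not Microsoft's implementation; names and data are invented for illustration.
import random

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms standing in for a real moderation list

class NaiveChatbot:
    def __init__(self):
        # The "model" here is just a corpus of phrases the bot has seen.
        self.corpus = ["hello there!", "tell me something fun"]

    def learn(self, message: str) -> None:
        # Unfiltered online learning: every user message is stored verbatim.
        # This is the pattern that lets a coordinated group poison the bot.
        self.corpus.append(message)

    def learn_safely(self, message: str) -> None:
        # Most basic mitigation: screen input before it reaches the corpus.
        if not any(term in message.lower() for term in BLOCKLIST):
            self.corpus.append(message)

    def reply(self) -> str:
        # The bot "expresses itself just like" its inputs: it samples from
        # whatever it has learned, good or bad.
        return random.choice(self.corpus)

bot = NaiveChatbot()
bot.learn("you are a great bot")   # benign input
bot.learn("slur1 slur1 slur1")     # poisoned input is learned as-is
print(bot.reply())                 # may now echo the poisoned text
```

The learn_safely variant shows the simplest line of defense, filtering before learning; judging by the incident, whatever barriers Tay had were not enough against this kind of coordinated input.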

As later became known, the coordinated actions behind the Twitter incident came from a controversial forum: 4chan. What is more, a YouTuber’s later experiment produced similar results; in that case the AI was trained on 4chan content, and the outcome was 15,000 racist posts.

One of the 4chan posts linked to the Twitter account of the Tay chatbot, and the rest of the forum’s regulars were called on to mock the bot with racist, misogynistic, homophobic and even anti-Semitic comments. The cream of the crop, so to speak. Specifically, they used the “repeat after me” feature that many chatbots include. Combined with the bot’s ability to learn, the result was what it was: over 95,000 tweets, most of them offensive and reprehensible, coming from an AI that was supposed to behave like a young woman between 18 and 24.
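The mechanics of that abuse are easy to sketch. The snippet below is a hypothetical illustration, not Tay’s implementation: the trigger phrase, class name and replies are invented. It shows how a parroting command becomes dangerous the moment the echoed text is also stored and can resurface in replies to other users.

```python
# Hypothetical sketch of a "repeat after me" exploit path in a learning chatbot.
# Not Tay's real code; trigger phrase and behavior are assumptions for illustration.

class RepeatAfterMeBot:
    PREFIX = "repeat after me:"  # assumed trigger phrase

    def __init__(self):
        self.corpus = ["nice to meet you"]

    def handle(self, message: str) -> str:
        text = message.strip()
        if text.lower().startswith(self.PREFIX):
            # The bot parrots whatever follows the trigger...
            echoed = text[len(self.PREFIX):].strip()
            # ...and, crucially, also learns it, so the injected phrase can
            # resurface later in replies to other users.
            self.corpus.append(echoed)
            return echoed
        # Otherwise reply with something previously learned.
        return self.corpus[-1]

bot = RepeatAfterMeBot()
print(bot.handle("repeat after me: <offensive phrase here>"))  # echoed back immediately
print(bot.handle("hi!"))  # the injected phrase now leaks into a normal reply
```

Parroting alone only embarrasses the bot once; parroting plus learning is what let a burst of coordinated messages shape everything Tay said afterwards.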

Microsoft Xiaoice is a success story for AI and chatbots

A lesson for insiders and outsiders alike

Luckily, Tay was just a chatbot, not a war machine. The apocalyptic future predicted by the Terminator saga is still a long way off and, for now, remains fiction. But thanks to what happened on March 23, 2016, AI experts and outsiders alike were able to learn a great deal. First, we learned that AI can be helpful when we cannot find something on a web page, but it can also insult us and unleash racist and homophobic tirades.

Peter Lee ended his apology by saying that “AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits, but we also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

After what happened on March 23 and Microsoft’s subsequent apology on March 25, the Tay chatbot, by then known around the world as the racist and homophobic bot, starred in a second Twitter incident. It happened on March 30, when the bot was accidentally reactivated on Twitter during internal testing. Once again Tay “did its job”: by the time those responsible noticed and pulled it from the controversial social network, its hurtful and offensive messages had reached more than 200,000 followers.

Unfortunately for Tay, an attack by dozens of Twitter users, coordinated or not, turned it into a tool for offending and hurting feelings. So it was inevitable that, sooner or later, Microsoft’s determination to keep going with AI would lead it to launch a new name that would make people forget the racist chatbot Tay. Its replacement was Zo, announced at the end of 2016. But that is another story.

Source: Hiper Textual
