Just a few days after launching the beta test of its Bing AI, Microsoft severely restricted user access to it. According to the plan, everything should have gone the other way: without learning from practical examples in the course of live dialogue, such a system would never emerge from its infancy, yet that is exactly what it has now been forbidden to do. So what went wrong?
Microsoft has invested heavily in generative text AI, but few now remember that the company already had such an experience back in 2016, and it ended sadly. Microsoft's previous foray into AI technologies was the chatbot Tay, designed for social networks. The name was short for "Thinking About You": the chatbot was supposed to win over its interlocutors with its attentiveness.
Alas, Tay turned out to be too diligent a student, while completely devoid of critical thinking. It absorbed everything written to it, and when cynical users realized this, they deliberately taught the chatbot bad things: racism, Nazism, sexism, insults, and so on. These trolls were so successful that in less than 24 hours Tay, judging by its replies, had come to "hate" humanity. Microsoft had to shut it down urgently.
With Bing AI, judging by the first reviews, a similar story is repeating itself. Unlike the guileless Tay, it does not appear to have been subjected to any deliberate, organized trolling, yet the chatbot is already teetering on the brink of chaos and deliberately escalating negativity in dialogues with users. Microsoft, however, is in no hurry to disconnect it from the network. All that remains is to watch how it ends.

Source: Tech Cult
