The Russian Human Rights Council was outraged that the voice assistants "Alice" and "Marusya" avoid answering some political questions. The head of the council, Valery Fadeev, expressed his indignation that even ChatGPT gives more detailed answers. Yandex, the developer of Alice, countered that if the virtual assistant gave incorrect answers, the company "would most likely get banned completely."
At the event "Protection of the Rights and Freedoms of Citizens in the Context of Digitalization," the head of the Human Rights Council, Valery Fadeev, addressed the rapid introduction of artificial intelligence into citizens' everyday lives. Fadeev was outraged that virtual assistants developed by the Russian companies Yandex and VK refuse to answer certain questions related to the territorial borders of the Russian Federation and the actions of the Russian army on the territory of Ukraine.
Meanwhile, the head of the HRC admitted, even the American ChatGPT gives detailed and complete answers to these questions.
"ChatGPT has answers. I thought they would be harsh propaganda responses, but no: on the one hand there is an opinion and, on the other, there is a discussion. Pretty vague, but there is an answer. Why are our new tools shy about providing answers? It is not a question of censorship, it is a question of a nation's attitude towards its history, the most important ideological question," Kommersant quotes Fadeev.
In turn, Alexander Krainov, director of artificial intelligence technology development at Yandex, tried to explain to Fadeev how neural networks work. According to him, the service cannot give a single canonical answer, since it tries to imitate all of the texts it has seen during training.
There is also an element of randomness in the algorithm, so the answers are not always identical.
"So, if you ask the same query every time, the result will be different. In particular, the answers can differ in absolutely radical ways," Krainov explained.
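The randomness Krainov describes comes from how language models choose each next word: instead of always picking the single most likely token, they sample from a probability distribution. The sketch below is purely illustrative (it is not Yandex's code, and the function name and toy logits are invented for this example); it shows temperature sampling, the standard mechanism that makes repeated identical queries produce different answers.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token index by sampling from a softmax over the logits.

    logits: raw model scores for each candidate next token (toy values here).
    temperature: higher values flatten the distribution (more randomness);
    values near zero make it almost deterministic (argmax-like).
    """
    scaled = [x / temperature for x in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# The same prompt (same logits) can yield a different token on each call,
# which is why asking the same question twice can give different answers.
```

With three candidate tokens scored `[2.0, 1.0, 0.5]`, repeated calls at `temperature=1.0` return different indices across runs, while a very low temperature collapses the choice to the highest-scoring token.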
The Yandex representative added that the neural network's responses are not edited by hand, and noted that for some errors the company could be held criminally liable.
“Avoiding an answer is the best thing we can do now, because if we answered wrong, we would most likely get banned completely,” Krainov concluded.
However, the HRC was not satisfied with this explanation. Council member Igor Ashmanov noted that these voice assistants are positioned, among other things, as "children's companions." Ashmanov argued that there is no point in offering children multiple perspectives on a given situation and that "the child must have an answer."
“The same goes for history: you can’t, even talking to a teenager, give him many points of view,” Ashmanov said.
Since the start of active development of artificial intelligence technologies in Russia and the launch of various AI-based "smart" services, developers have periodically faced comments and complaints about generated content. Last spring, the leader of the "Fair Russia – For the Truth" faction, Sergei Mironov, complained to the Prosecutor General's Office about the "negative image of Russia" in images generated by Sberbank's Kandinsky neural network.
And Yandex's YandexGPT was criticized by Dmitry Medvedev.
Author:
Natalia Gormaleva
Source: RB
