Prabhakar Raghavan, head of Google’s search division, has once again reminded users that information received from AI bots cannot be fully trusted: the data they provide may sometimes be completely fictitious.
“This type of artificial intelligence that we are talking about now can sometimes produce what we call hallucinations,” Raghavan said in an interview with Welt am Sonntag. “That means the machine gives a convincing, but completely made-up answer.”
The task of the developers, according to the expert, is to reduce the number of such “hallucinations” to a minimum.
In a recent Google presentation, the company’s “ChatGPT competitor” chatbot Bard gave an inaccurate answer, causing Alphabet shares to fall 8.9%.
Raghavan urged people to be attentive to the results that a chatbot presents to them. According to the executive, developers must provide users with tools to verify results, including disclosure of sources.
“We want to test [the chatbot] on a large enough scale that in the end we are satisfied with the results of verifying the validity of its answers,” said a Google representative.
Microsoft is ahead of Google in integrating an AI bot into its search engine. Last week, users posted screenshots of the updated Bing with integrated ChatGPT. There is still no word on a release date for the Bard chatbot.
“We certainly feel the urgency, but we also feel a great responsibility,” Raghavan said. “We definitely don’t want to mislead the public.”
Author:
Ahmed Sadulayev
Source: RB
