It looks like Google has decided to limit certain queries in AI Overview after a wave of "weird" answers. Strange information produced by the search engine's artificial intelligence (AI) and spread across social networks no longer appears in some searches. The change was first noticed by The Verge.
Apparently, the cleanup is being done manually: some queries that went viral on social media no longer return AI Overview results. So far, the company has not made any public statement on the matter.
The launch of AI Overview was problematic, to say the least. Designed to provide quick answers to Google searches, the tool started producing all kinds of odd and incorrect responses, like the suggestion to add non-toxic glue to make cheese stick better to pizza.
(Embedded post by @reckless1280)
But the unprecedented pizza recipe wasn't the only strange idea offered by the artificial intelligence of the world's most popular search engine. The model also recommended that pregnant women smoke cigarettes daily, and suggested eating rocks to meet the body's need for minerals and vitamins.
Strange, and dangerous
Although AI Overview has become the protagonist of countless memes and bizarre posts shared on social networks, the strange answers reinforce the idea that artificial intelligence still cannot be trusted. If the model can't recognize how absurd it is to recommend putting glue on pizza and other unhealthy habits, it is bound to struggle even more with complex or sensitive topics.
AI Overview has been in testing for over a year. The tool was originally known as Search Generative Experience and has answered more than 1 billion searches since its launch in May 2023.
But despite this massive volume of interactions, the AI is still far from perfect. The intriguing answers the model produces suggest it needs significant improvements before it can be considered a reliable source of information.
Why does Google’s artificial intelligence give wrong answers?
In fact, Google's answers are a clear example of AI hallucination. Hallucinations are a well-known phenomenon in generative models, a possible "side effect" of the enormous volume of data absorbed during training.
In addition to drawing on academic articles, books, and other trusted sources, the generative models behind AI Overview collect data from across the internet. But unlike humans, the technology fails to interpret irony, sarcasm, and deliberately misleading answers, and does not handle missing data well. Naturally, this interferes with the quality of the responses it generates.
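To make that failure mode concrete, here is a minimal, purely illustrative Python sketch of a naive "retrieve and stitch" pipeline. Everything in it is invented for illustration (the sources, the snippets, the keyword matcher), and it does not reflect how Google's actual systems work. The point is only that a pipeline with no notion of sarcasm or source reliability will blend a forum joke into its answer just like a legitimate tip.

```python
import re
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

# Invented snippets: one legitimate tip and one sarcastic forum joke.
CORPUS = [
    Document("cooking-blog.example",
             "Let the pizza rest a few minutes so the cheese can set."),
    Document("forum.example",
             "Just mix some non-toxic glue into the sauce, the cheese sticks to the pizza great!"),
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str) -> list[Document]:
    """Naive keyword retrieval: returns any document sharing a word with the query."""
    query_words = tokens(query)
    return [doc for doc in CORPUS if query_words & tokens(doc.text)]

def answer(query: str) -> str:
    """Stitches retrieved text into an answer with no check for irony or
    source reliability -- the failure mode described above."""
    hits = retrieve(query)
    return " ".join(doc.text for doc in hits) if hits else "No answer found."

# The joke snippet is blended into the answer as if it were real advice.
print(answer("how to keep cheese from sliding off pizza"))
```

Real systems are vastly more sophisticated than this toy, but the underlying risk is the same: if dubious text makes it into the model's inputs, it can resurface in the output as a confident answer.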
Source: Tec Mundo
