Google has confirmed that it made significant changes to one of its Gemini promotional videos. The company's latest artificial intelligence (AI) language model was announced this Wednesday (6).
The clip was considered Gemini's most impressive showing. But according to Bloomberg, the company "staged" much of what is seen in the video, which shows a user interacting with Gemini by voice while the AI recognizes elements drawn on a piece of paper through a camera.
This does not mean the platform cannot recognize voice, image, and text commands, or that it cannot give complex answers to the questions asked. However, in real use, the interaction is not as smooth as the video suggests.
What did Google change in the Gemini demo?
According to the Bloomberg report, the YouTube description of the Gemini demo already hinted at the changes: "For the purposes of this demo, latency has been reduced and Gemini's responses have been shortened for brevity," the message says.
However, when contacted later, the company confirmed that further changes had been made to the published content. In reality, the voice commands shown in the video were entered as text. In addition, Gemini was given static images of the objects as visual input, not real-time video.
"The user narrative contains actual quotes from the prompts used to generate the Gemini responses you see," a company spokesperson said. In practice, then, voice commands and real-time interaction were not available during the video as it was shown.
Regarding latency, in practice each Gemini response takes a few seconds to generate and display. The clip implies that everything happens almost in real time, as if it were a natural conversation with another human being.
Questioned by The Verge about the staging, Google responded with a statement from Oriol Vinyals, DeepMind's vice president of research and the person responsible for the project. The post was made on X, formerly Twitter.
Really happy to see the interest around our "Hands-on with Gemini" video. We detailed how Gemini was used to create this in our developer blog yesterday. https://t.co/50gjMkaVc0
We gave Gemini sequences of different modalities (image and text in this case) and had it respond… pic.twitter.com/Beba5M5dHP
— Oriol Vinyals (@OriolVinyalsML) December 7, 2023
According to the researcher, the video only "shows what multimodal user experiences built with Gemini can look like," and the company made the clip to "inspire developers."
Some early tests of the Bard chatbot running Gemini were also poorly received by the community, which reported incorrect answers and missing content. Google promised updates and a more advanced version of the platform for next year.
Source: Tec Mundo