Microsoft announced that the new Bing with ChatGPT uses a custom version of GPT-4 optimized for search. The company used OpenAI's announcement to confirm what many had suspected for weeks. According to its creators, the new model can solve complex problems more accurately thanks to broader general knowledge and improved reasoning skills.

“If you’ve used the new Bing preview at any time in the past five weeks, you’ve already experienced a preview of this powerful model,” said Yusuf Mehdi, director of search and devices at Microsoft. “As OpenAI makes updates to GPT-4 and beyond, Bing benefits from those improvements,” he added. Mehdi noted that updates made by Microsoft's own engineers are also being folded into the model so that users get the most complete version.

GPT-4 can handle over 25,000 words of text, which means it can generate longer content and sustain longer conversations. According to OpenAI, the new model is 82% less likely to respond to requests for disallowed content and 40% more likely to give factual answers than GPT-3.5.

Integrating GPT-4 into the Microsoft browser turns it into a powerful assistant, or “co-pilot,” as the company calls it. Bing can turn web searches into conversations with a more “human” AI that makes fewer mistakes. By analyzing user behavior and refining the rules, Microsoft has made it possible for users to chat for extended sessions without the ChatGPT AI losing its composure.

Bing’s secret weapon is image recognition

Among the improved features we might see in future versions of Bing with ChatGPT is document analysis. GPT-4 can translate and improve writing by catching spelling and grammatical errors. The AI is also more creative than the previous version: it has a sharper sense of humor and is capable of writing song lyrics or even movie scripts.

GPT-4 recognizes an image of eggs and flour and recommends recipes, a capability Bing will inherit from ChatGPT.
Image recognition in GPT-4. Photo: OpenAI

One of GPT-4's most important features is image recognition. The new version of the model accepts images as input and can generate captions, classifications, and analysis. In practice, users could submit a photo of vegetables or meat and ask Bing which dishes they can cook with them.
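To make the workflow concrete, here is a minimal sketch of how a text-plus-image request to a GPT-4-class model can be structured, following the general shape of OpenAI's chat-completions message format. The model name, image URL, and helper function are illustrative placeholders, not confirmed details of Bing's integration, and the snippet only builds the request payload rather than calling any API.

```python
# Sketch: building a multimodal (text + image) chat request payload.
# Model name and image URL are hypothetical placeholders.
def build_recipe_request(image_url: str) -> dict:
    """Assemble a request asking the model for recipe ideas from a photo."""
    return {
        "model": "gpt-4-vision-preview",  # placeholder model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "What dishes can I cook with these ingredients?",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": image_url},
                    },
                ],
            }
        ],
    }


request = build_recipe_request("https://example.com/eggs-and-flour.jpg")
print(request["messages"][0]["content"][0]["type"])  # prints "text"
```

In this message format, a single user turn carries a list of content parts, which is what lets a prompt mix plain text with one or more images.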

A few weeks ago, Microsoft announced that it was working on a model that could extract text from images and solve visual puzzles. Kosmos-1 can answer questions about pictures, perform simple math operations, and recognize text and numbers. The most interesting thing about this AI, however, is its performance on a test that measures human intelligence and abstract reasoning.

Although Kosmos-1 is not related to OpenAI or GPT-4, it is clear that Microsoft is interested in developing and eventually integrating a multimodal model. The company has confirmed that it will host an artificial intelligence event on March 16, where it will share details on how AI will benefit its future products and services.

How to use GPT-4 in Bing with ChatGPT

Until then, those interested in using GPT-4 in Bing with ChatGPT can sign up during the preview phase. Setting Edge as your default browser and Bing as your default search engine will help you skip a few places in line. For now, the availability of the final version is unknown, although if you get access to the beta, you can also use it on your mobile device.

Source: Hiper Textual

Science and technology, 00:43 | 15 March 2023
I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.
