American developer OpenAI has presented an updated GPT model: a multimodal artificial intelligence model capable of analyzing audio, images and text in real time. The updated model is called GPT-4o, where the "o" stands for "omni," meaning "all" or "complete."
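For developers, multimodal input of this kind is normally sent through OpenAI's chat completions API. The snippet below is a minimal sketch, assuming the model is exposed under the name "gpt-4o" and accepts mixed text-and-image content; it is an illustration, not part of the announcement itself.

```python
# Minimal sketch: asking GPT-4o about an image plus a text prompt.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this picture?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

# Print the model's text reply.
print(response.choices[0].message.content)
```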
OpenAI intends to roll GPT-4o out across its products in the coming weeks. The company is confident the updated model will improve the performance of the ChatGPT bot. The bot already has a voice mode, but with the update it will behave more like a voice assistant.
New features in GPT-4o
The update will allow the bot to recognize emotions in the user's voice and intonation, and the AI will be able to respond to human actions in real time. The developers have also improved the model's visual capabilities.
According to Bloomberg, the update will give regular users access to features that were previously only available with a paid subscription.
According to Mira Murati, CTO of OpenAI, the company has made a leap in the ease of use of ChatGPT.
Free users will now be able to ask the bot to search the internet for answers, have it remember conversation details for future context, and receive responses in different voices.
The company has also made features such as chart creation, data analysis, and the ability to work with files and images available for free.
GPT-4o is available in both free and paid versions. ChatGPT Plus and Team subscribers get a message limit five times higher than free users.
Author:
Natalia Gormaleva
Source: RB
