By hallucinations, the company means answers that sound plausible but are false, which chatbots such as ChatGPT nevertheless deliver with confidence.

For example, the researchers asked a model for the dissertation title and the date of birth of one of the paper's authors and received three different answers, all of them wrong. According to OpenAI, the reason is that during pre-training models learn to predict the next word in a text, with no labels distinguishing truth from fiction.

Frequently repeated facts are absorbed well, while rare details such as birthdays or minor events almost always trip the model up. According to the researchers, the core problem lies not only in training but also in how AI is evaluated.

Today models are scored on accuracy: the more correct answers, the higher the result. That rewards guessing rather than an honest "I don't know."
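
To see why accuracy-only scoring pushes a model toward guessing, here is a minimal Python sketch. The function name and the 25% figure are illustrative assumptions, not numbers from the paper:

```python
def accuracy_score(answer: str | None, truth: str) -> int:
    """Binary accuracy: 1 point for a correct answer, 0 for anything
    else, including an honest 'I don't know' (represented as None)."""
    return 1 if answer == truth else 0

# A model that is right only 25% of the time still maximizes its
# expected score by always guessing, because abstaining earns nothing.
p_correct = 0.25
expected_if_guessing = p_correct * 1 + (1 - p_correct) * 0   # 0.25
expected_if_abstaining = 0.0                                  # always 0
print(expected_if_guessing > expected_if_abstaining)          # True
```

Under this metric, guessing strictly dominates admitting uncertainty, no matter how unreliable the model is.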

OpenAI proposes changing the metrics: penalize confident mistakes more heavily than admissions of uncertainty, and award partial credit for correctly expressing doubt. If the approach does not change, the authors believe, AI will keep "learning to guess" instead of learning to be careful.
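
A minimal sketch of the kind of metric the authors argue for; the specific point values (+1, +0.5, -1) are my own illustrative assumptions, not the paper's:

```python
def hedged_score(answer: str | None, truth: str) -> float:
    """Sketch of an evaluation that punishes confident errors hardest."""
    if answer is None:   # the model admits "I don't know"
        return 0.5       # partial credit for expressing uncertainty
    if answer == truth:
        return 1.0       # full credit for a correct answer
    return -1.0          # a confident mistake costs more than abstaining

# Under this rule, the same 25%-accurate model is better off abstaining:
p_correct = 0.25
expected_if_guessing = p_correct * 1.0 + (1 - p_correct) * -1.0  # -0.5
expected_if_abstaining = 0.5
print(expected_if_abstaining > expected_if_guessing)             # True
```

The sign of the expected value flips, so a model optimized against such a score has an incentive to abstain when it is unsure rather than to bluff.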

Source: Ferra
