According to a peer-reviewed study published on Wednesday, although Google's medical AI chatbot Med-PaLM has passed the medical licensing exam, its answers still do not measure up to those of real doctors.
As a reminder, in December of last year Google introduced its latest AI-based product, Med-PaLM, a language model capable of answering questions on medical topics. The IT giant's management announced that Med-PaLM was the first AI language model trained on huge amounts of medical information to pass the US Medical Licensing Examination (USMLE).
The passing threshold is the same one applied to medical students taking the exam during their training: roughly 60% correct answers. Med-PaLM cleared that bar, scoring 67.6%.
To reduce the number of incorrect answers, Google announced that its specialists had developed a new evaluation criterion, a benchmark for assessing the new version of its AI model. As a result, the updated model's score on the USMLE-standard test rose to 86.5%.
As James Davenport, a computer scientist at the University of Bath in the UK, notes, the main problem with Med-PaLM remains "the big difference between answering medical questions and actual medicine."
Source: Tech Cult
