Chatbots based on generative artificial intelligence, such as ChatGPT, have very interesting applications, but they always need a person watching over them. For example, it has been observed that they can be useful for medical diagnosis as long as there is a trained person available to check whether they have made a mistake. They make fewer and fewer errors, but someone still needs to verify them just in case. The problem arises when that person does not trust the AI, and in the end the chatbot’s assistance becomes useless.

This seems to be what is happening to doctors, at least in the United States. It was confirmed by a group of scientists from Stanford University in a study published in JAMA Network Open.

In it, diagnoses made by a number of doctors were evaluated with and without the help of a chatbot. There was almost no difference between the doctors who used ChatGPT and those who did not. However, when the chatbot was left to work without human supervision, the results were much better.

Chatbot or not chatbot, that is the question

The study involved 50 doctors, residents and attending physicians, who were divided into two groups. Everyone had to make a diagnosis based on a case history and explain the reasoning that led them to it. But there were differences: the doctors in the first group worked without any outside help, while those in the second group used ChatGPT. In addition, there was a third group with no doctors at all, in which the chatbot was given only the case information, without any supervision.

The case histories were taken from real cases that had never been published. This ensured both that the doctors were not already familiar with the cases and that they were not part of the chatbot’s training data.

Doctors who did not use ChatGPT made the correct diagnosis in 74% of cases, while those who used the chatbot reached 76%. In contrast, when ChatGPT was left to run unsupervised, it achieved a 90% success rate.

As Dr. Adam Rodman, who helped design the study, explained to The New York Times, this is due to a certain mistrust on the part of doctors. In general, they tend to trust their own diagnosis more than the advice of a chatbot, so even when the chatbot contradicts their answer, they stick with their own.

You have to know how to use ChatGPT

Another factor influencing these results is that many doctors do not know how to use chatbots properly. They tend to treat ChatGPT as if it were a search engine like Google: they ask it specific questions but do not give it the full history of the patient awaiting diagnosis. Providing the complete history is a much more effective way to get useful answers.
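The difference between the two prompting styles can be sketched in a few lines. This is a minimal illustration, not code from the study: the function names and the sample case are hypothetical, and the resulting strings would simply be sent as the user message to whatever chatbot is being used.

```python
# Hypothetical sketch contrasting two ways a clinician might prompt a chatbot.
# The helper names and the sample case below are illustrative only.

def terse_prompt(symptom: str) -> str:
    """Search-engine style: a single narrow question, with no patient context."""
    return f"What causes {symptom}?"

def full_history_prompt(history: str) -> str:
    """Full-case style: paste the entire clinical history and ask for a ranked differential."""
    return (
        "You are assisting with a diagnostic exercise.\n"
        "Full clinical history:\n"
        f"{history}\n"
        "List the most likely diagnoses in order, with your reasoning for each."
    )

if __name__ == "__main__":
    history = (
        "62-year-old patient, two weeks of progressive exertional dyspnea, "
        "low-grade fever, and a new heart murmur."
    )
    print(terse_prompt("exertional dyspnea"))
    print(full_history_prompt(history))
```

The first style throws away everything the chatbot could have reasoned over; the second gives it the same material a physician would use, which is closer to how the unsupervised chatbot was used in the study.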

Some doctors do not know how to use chatbots correctly. Credit: NCI (Unsplash)

Of course, a chatbot does not always get everything right, so it is important to have a person supervising it. However, that person must also be willing to admit their own possible mistakes.

The future of automated chatbots

An unsupervised chatbot is not necessarily dangerous; in the best cases it produces good results, as it did here. However, advances in artificial intelligence increasingly show how important it is to always give these algorithms specific instructions.

It is very important that those instructions be precise, since the bot will never stop for lack of information: it will look for a way out, usually the one that seems simplest, even if that means ignoring ethical criteria. So while what is happening with doctors and ChatGPT is little more than an anecdote, it reminds us that, at the end of the day, artificial intelligence can handle itself quite well. Just in case, it is better not to take our eyes off it.

Source: Hiper Textual
