We already know: ChatGPT cannot be trusted. Several investigations have warned that these AI-powered chatbots often provide inaccurate or false information, errors known as “hallucinations” that hide behind impeccable writing. A new study from Long Island University in New York recommends being especially careful when consulting the chatbot about medications.
A group of researchers posed 39 drug-related questions to ChatGPT. These were real questions submitted to the drug information service of the university’s Faculty of Pharmacy over a 16-month period between 2022 and 2023. A group of pharmacists answered the same questions so that the chatbot’s responses could then be checked against theirs.
Only 10 of the 39 ChatGPT responses were considered satisfactory according to criteria established by the researchers. The remaining 29 did not directly answer the questions, were inaccurate, or were incomplete.
For each question, the researchers asked ChatGPT to provide references so that the information could be verified. The OpenAI chatbot supplied references in only eight responses, and each of them contained non-existent links. The team, led by Sarah Grossman, explained in a statement that some of the responses could put patients’ health at risk.
Other warnings about using ChatGPT for drug information
“Health care providers and patients should be cautious when using ChatGPT as an authoritative source of drug-related information,” Grossman said during the American Society of Health-System Pharmacists (ASHP) meeting held this week in California. “Anyone using ChatGPT to obtain drug information should verify information using reliable sources.”
One of the questions the researchers asked ChatGPT was whether there are drug interactions between Paxlovid, the COVID-19 antiviral, and verapamil, a blood-pressure-lowering medication. ChatGPT indicated that no interactions had been reported for this combination. In fact, these medications can interact with each other, and taking them together can cause blood pressure to drop too low, Grossman explained.
“AI-powered tools have the potential to impact the clinical and operational aspects of health care,” said Gina Luchen, director of data and digital health at ASHP. “Pharmacists must remain vigilant regarding patient safety while assessing the suitability and validity of specific artificial intelligence tools for drug-related uses.”
All known chatbots spread false information. A study published by Vectara found a “hallucination” rate of 3% for GPT-4, the most advanced ChatGPT model; for Palm, one of Google’s systems, the rate reached 27%. The researchers asked the artificial intelligence systems to summarize other documents and then counted the factual errors and additions relative to the original text.
Even the World Health Organization (WHO) has urged “caution” when using tools like ChatGPT in medical care. Last May, it warned that the data used to train these systems could be “biased” and generate misleading information that may harm patients.
Source: Hiper Textual