There has been a lot of talk in recent months about how the new wave of AI-powered chatbots, ChatGPT among them, could turn many industries upside down, including the legal profession.

However, judging by what happened recently in a New York case, it may be some time before highly skilled lawyers are pushed aside by the technology.

The strange episode began when Roberto Mata sued Colombian airline Avianca, claiming he had been injured on a flight to New York.

When Avianca asked the judge to dismiss the case, Mata’s legal team responded with a brief citing half a dozen similar cases in an effort to persuade the judge to let their client’s case proceed, according to the New York Times.

The problem was that neither the airline’s lawyers nor the judge could find any evidence of the cases cited in the brief. Why? Because ChatGPT had invented them all.

The brief’s author, Steven A. Schwartz, a highly experienced attorney at the firm Levidow, Levidow & Oberman, admitted in an affidavit that he had used OpenAI’s famed ChatGPT chatbot to search for similar cases, but said he had since “found it to be unreliable.”

Schwartz told the judge that he had not used ChatGPT before and “therefore was unaware of the possibility that its content might be false.”

When putting together the brief, Schwartz even asked ChatGPT to confirm that the cases had actually taken place. The ever-helpful chatbot answered in the affirmative, saying that information about them could be found in “authoritative legal databases.”

The lawyer at the center of the storm said he was “very sorry” for having used ChatGPT to generate the brief and insisted he would “never do this in the future without fully verifying its authenticity.”

Describing the situation as unprecedented after reviewing what he called a legal filing full of “fake judgments, with fake citations and fake internal citations,” Judge Castel scheduled a hearing for early next month to consider possible sanctions.

As impressive as ChatGPT and similar chatbots are at producing high-quality, flowing text, they are also known for making things up and presenting them as fact, something Schwartz learned at his own expense. The phenomenon is known as “hallucination,” and it remains one of the biggest problems facing chatbot developers as they work to stamp it out.

In another recent example of a generative AI tool going off the rails, an Australian mayor accused ChatGPT of spreading lies about him, including the claim that he had been jailed for bribery while working at a bank more than a decade ago.

Mayor Brian Hood was in fact a whistleblower in that case and was never charged with a crime, so he was understandably upset when people began telling him about the chatbot’s rewriting of history.

Source: Digital Trends

