False information generated by ChatGPT about a radio host has resulted in a lawsuit against OpenAI in the United States. The suit, filed in a Georgia court on Monday (5), accuses the developer of libel and seeks compensation.
The case concerns radio host Mark Walters, who was named in a chatbot-generated summary as having allegedly defrauded and embezzled funds from a nonprofit. Walters, however, was never charged with these crimes; in other words, the artificial intelligence (AI) invented the information.
These details were fabricated by the chatbot when journalist Fred Riehl asked it to summarize a real court case. In its summary, the tool relayed genuine details but also included falsehoods, as the lawsuit in question had nothing to do with Walters or with the embezzlement of funds.
Although OpenAI presents ChatGPT as a reliable resource that can help users “learn something new,” it also acknowledges that the tool is not infallible: a warning on the chatbot’s home screen states that the system can “sometimes generate false information.”
Innocent or guilty?
As a reminder, US law protects internet companies with respect to content produced by third parties and hosted on their platforms. It is not yet clear, however, whether that protection extends to an artificial intelligence capable of generating new content.
According to UCLA legal expert Eugene Volokh, the libel claim against OpenAI is “legally valid” in principle. However, because Walters did not notify the company and give it a chance to remove the false statements, the expert has doubts about his chances of winning. In addition, Walters will have to prove that he was actually harmed by the false information.
Source: Tec Mundo