This phenomenon, called “hallucination” in the AI industry, highlights a fundamental flaw: AI’s tendency to fabricate information rather than provide accurate data.

Nieman Lab decided to see if ChatGPT would provide accurate links to articles from news organizations that OpenAI pays millions of dollars to license. Nieman Lab’s Andrew Deck asked the service to provide links to high-profile exclusive stories published by 10 publishers that OpenAI has deals with, including the Associated Press, The Wall Street Journal, Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico. In response, ChatGPT generated fictitious URLs that led to 404 error pages.

Despite OpenAI’s ongoing efforts to expand ChatGPT’s capabilities, these experiments demonstrate the inherent risks of relying on AI for factual accuracy. If AI cannot perform basic tasks, such as providing correct URLs, trust in it becomes questionable.

Source: Ferra

