This study showed how a hypothetical scammer could use ChatGPT to create well-written but completely fabricated abstracts and submit them to academic journals for publication.

If such a submission is accepted, the scammer could use the language model to write an entirely fabricated article with invented data, non-existent clinical trial participants, and meaningless results.

As part of the experiment, the researchers asked experts to examine texts created by humans and by artificial intelligence and determine which was which. The experts misidentified 32% of the research abstracts generated by the language model and 14% of the abstracts written by humans.

According to the researchers, creating a believably fake study currently takes considerable time and effort. With artificial intelligence, however, the task can be completed in a matter of minutes, which greatly simplifies the production of fakes.

Source: Ferra
