The 154-year-old journal Nature, which publishes fundamental academic research in science and technology, has reaffirmed its commitment to ethical principles. The editors hold that images accompanying articles, which illustrate the author's line of reasoning, should serve as supporting material that allows readers to trace the underlying sources. Generative AI does not provide links to the data sources on which it bases a given response or piece of content. The resulting lack of transparency is unacceptable for scientific research.
At the same time, the journal's editors do not object to neural-network-generated images in articles that are themselves about artificial intelligence.
“Why are we banning the use of generative AI in visual content? Ultimately, it’s a matter of integrity. The publishing process, in both science and the arts, rests on a shared commitment to integrity. This includes transparency. As researchers, editors, and publishers, we need to know the sources of data and images so that their accuracy and validity can be verified. Current generative AI tools do not provide access to their sources, so this kind of verification cannot be performed,” Nature's editors said in a statement.
Source: Ferra