OpenAI announced on Tuesday that an invisible watermark will be included in images created by its DALL-E 3 artificial intelligence. The identifiers will be added to the images’ metadata.
The measure complies with the standards of the Coalition for Content Provenance and Authenticity (C2PA), an organization of companies dedicated to developing systems that provide context and history for digital media.
According to the company, the C2PA markers will appear in images created with ChatGPT on the web and in applications using the DALL-E 3 API. The mobile version will get the new feature on February 12.
In addition to the identifiers in the metadata, all images will have a “CR” symbol in the upper left corner.
According to OpenAI, adding the markers has no negative impact on latency or on the quality of the AI-generated image. The company notes, however, that the feature will slightly increase file sizes in some cases.
The method is not infallible
However, adding tags to metadata is not a foolproof solution to stop the spread of fake images. OpenAI itself points out that data “could easily be removed, either accidentally or intentionally.”
For example, when sharing images on social networks, how much of the metadata survives varies from platform to platform. Taking a screenshot of the image also strips the embedded identifiers.
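To illustrate why metadata-based provenance is so easy to lose, here is a minimal Python sketch using the Pillow library (the file names are hypothetical and this is not OpenAI's or C2PA's tooling): simply re-encoding an image produces a copy without its embedded metadata, which is roughly what happens when a screenshot is taken or a platform re-compresses an upload.

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode an image without carrying over its metadata.

    Saving only the pixel data into a fresh file drops EXIF/XMP
    payloads and text chunks, which is why screenshots and many
    social-network re-uploads lose provenance identifiers.
    """
    with Image.open(src_path) as img:
        pixels = img.copy()   # keep only the pixel data
    pixels.save(dst_path)     # written without the original metadata

# Hypothetical file names, for illustration only.
strip_metadata("dalle3_output.png", "stripped_copy.png")
```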
In any case, OpenAI’s addition is an important step towards transparency in AI-generated images, something increasingly demanded of companies in this sector, as elections will be held in many countries this year.
Source: Tec Mundo