YouTube has decided to strengthen its policies against disinformation and misleading content created with artificial intelligence. The platform will require creators to warn viewers when they upload videos containing “synthetic” content; that is, content created or manipulated with tools such as generative AI. This applies both to regular long-form videos and to Shorts.
This measure does not yet have a specific rollout date. On YouTube’s official blog, the company explained that the new labels for modified content will begin appearing in the coming months, so they will not become widely visible until 2024. The idea is to work with YouTubers to make sure they are aware of the new requirements and guidelines.
Once this feature becomes available, creators will have to indicate during the upload process whether the video has been processed using artificial intelligence. YouTube will then display two types of warnings on the post: one in the description field and another directly over the player. In both cases it will show the following message: “Modified or synthetic content. Sound or images have been digitally altered or generated.”
However, it is worth clarifying that the warning over the player will be limited to synthetic content related to sensitive topics prone to misinformation; for example, public health crises, ongoing armed conflicts, events involving public officials, or electoral processes, among others.
“We will require creators to disclose when they have created altered or synthetic content that is realistic, including using artificial intelligence tools. When creators upload content, we will have new options for them to indicate that it contains realistic but altered or synthetic content. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they did not actually do,” the Google-owned platform explained.
Penalties for non-compliance with YouTube’s AI-modified content policies

YouTube’s initiative is interesting and seeks to introduce a new level of control over the spread of misleading or outright false content. The reality, however, is that creators whose main motivation is misinformation will likely choose not to label their videos as modified, even when they are. That is why Google must have an effective system in place to enforce the new rules.
YouTube indicates that creators who choose not to label their processed videos as such will be subject to various penalties, ranging from removal of the content in question to expulsion from the platform’s Partner Program, among others. It does not specify whether sanctions will take effect after a certain number of violations; it only says this will happen if the behavior occurs “consistently.”
It is also important to note that the new warnings on videos with modified or synthetic content do not replace the Community Guidelines. This means YouTube will be able to remove inappropriate AI-processed content even if it carries a warning.
Finally, YouTube will not limit itself to flagging videos processed with third-party software. Google is now one of the leading companies in artificial intelligence and is gradually integrating it into its products, so warnings will also be displayed when a video contains elements created with its native tools. One example is Dream Screen, a utility that lets creators generate AI images and videos to use as backgrounds in Shorts.
Source: Hiper Textual

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.