YouTube is taking new measures to protect artists and YouTubers from deepfake audio and video created with artificial intelligence. Google’s video platform plans to launch two tools to help detect fake content that imitates the voices and likenesses of content creators and public figures.
As YouTube explains, one of the tools will be dedicated exclusively to detecting synthetically generated singing voices. It has already been developed and will begin testing early next year. The company did not specify how it works, although it did mention that it is integrated into Content ID, its copyright management system.
Once activated, the tool will automatically detect uploads that imitate the singing voices of YouTube partners who publish their music on the site. If the material was created without permission or for defamatory purposes, artists can request its removal.
Another feature YouTube is working on, also related to Content ID, seeks to put an end to deepfakes featuring the faces of actors, YouTubers, athletes, musicians, and other public figures by automatically detecting them so action can be taken. This option is still in development and there is no word on when it will launch on the video platform, although all indications are that it will not be available until 2025.
YouTube is expanding its strategy against deepfakes and AI-generated imitations
This isn’t the first time YouTube has announced changes or new tools to combat the spread of deepfakes and AI-generated simulations. In November 2023, the company said it would begin accepting requests to remove AI-generated content that “simulates identifiable individuals.” It also included a tool that would allow record labels and agencies representing artists to request the removal of songs created by AI.
Meanwhile, last July, YouTube changed its privacy rules to allow users to report realistic AI-generated videos that include deepfakes. These measures are part of a growing strategy to address the rapid evolution of generative AI, especially given the dramatic increase in viral counterfeit content in recent years as it becomes easier to create.
In addition to providing greater protection against deepfakes, YouTube has confirmed its intention to ban AI companies from training their models on videos uploaded by content creators. This has been a source of controversy for some time. Let’s not forget that OpenAI found itself in the eye of a storm over allegations that it used videos from the site to train Sora, and in recent weeks there has been renewed controversy over apparently similar behavior by NVIDIA, Anthropic, and even Apple.
YouTube has already warned that scraping videos to train AI models is a violation of its terms of service. Now the company promises to improve its systems to make such activity easier to detect and even to block those who engage in it.
Source: Hiper Textual
I am Garth Carter and I work at Gadget Onus. I specialize in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is presenting news stories accurately and in an engaging manner that captures the attention of my audience.