Several sexually explicit images of Taylor Swift’s face flooded the internet this week. They are deepfakes: fake content created using artificial intelligence. One of the photos that went viral on X (formerly Twitter) was seen more than 47 million times before it was removed. The scandal highlights one of the dangers of these new tools that experts have been warning about for years, one that particularly affects women.
It took X about 17 hours to block the account (a “verified” user, by the way) that posted the material in question. By then, the image had been shared more than 24,000 times and had received hundreds of thousands of likes. “AI Taylor Swift” became a trend that further fueled the spread of the content.
“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X, and we have a zero-tolerance policy towards such content,” Elon Musk’s social network later said in a statement. But the problem goes beyond this platform.
These Taylor Swift deepfakes, for example, were first shared on Telegram. 404 Media reported that the images were created in a group on the messaging app where users share explicit images of women generated by artificial intelligence.
This time it was Taylor Swift, but the practice has already claimed plenty of victims. This month, NBC News found pornographic deepfakes of around thirty female celebrities. It didn’t take extensive research: they simply had to Google each artist’s name alongside the term “deepfake”.
The Taylor Swift case as evidence of the damage deepfakes cause

Generative artificial intelligence has made it easier to spread abusive practices that have been evolving for years. Sensity AI, an Amsterdam-based company dedicated to detecting fake content, warned as early as 2019 that 96% of deepfake videos on the internet were pornographic, and that most of the victims were women.
Between 2018 and 2020, the number of fake videos available online (not just pornographic ones) doubled every six months. With the arrival of new generative artificial intelligence tools, their proliferation is expected to increase significantly.
This doesn’t just affect Taylor Swift and other famous artists. Sensity AI also found in 2020 that at least 100,000 pornographic deepfake images of women had been distributed in Telegram groups. Users of those channels said they used bots to generate the content, creating deepfakes from photos that real women had posted on their social media.
One such scandal erupted last September in Almendralejo, Spain, where more than 20 girls between 11 and 17 years old reported that nude images of them had been created using artificial intelligence tools. The material was distributed via Telegram and WhatsApp, according to the BBC.
Reality Defender, a cybersecurity company, told The New York Times that Taylor Swift’s images were almost certainly created using a diffusion model. This technology underpins some of the best-known new image generation tools, such as those from OpenAI, Google, and Midjourney. Ben Coleman, CEO of Reality Defender, explained that there are currently more than 100,000 publicly available applications and models.
“What happened to Taylor Swift is nothing new. For years, women have been targeted by deepfakes without their consent,” Yvette D. Clarke, a Democratic US congresswoman, wrote on X. “And with advancements in AI, creating deepfakes is easier and cheaper.” Several US states restrict this type of content, but federal legislation on the matter is limited.
What are big tech companies doing?

The best-known tools, such as those from Google and OpenAI, do not allow the creation of sexually explicit content or the use of the likeness of famous people, such as politicians or artists. But big tech companies have a major influence on the distribution of deepfakes like Taylor Swift’s.
Google lets victims request the removal of this content, but it does not actively search for or take down pornographic deepfakes. “We only review the URLs that you or your authorized representative submit on the form,” the company states in the request template.
Microsoft, for its part, considers pornographic deepfakes to fall under its category of non-consensual intimate imagery (NCII). Like Google, it offers a form that allows victims to report content that appears on Bing.
“The distribution of NCII is a serious violation of privacy and personal dignity, with devastating consequences for victims,” Courtney Gregoire, Microsoft’s chief digital safety officer, said in a statement last August. “Microsoft prohibits NCII on our platforms and services, including soliciting it or advocating for the production or distribution of intimate images without the permission of the affected individual,” Gregoire explained.
However, both search engines surface several portals that present themselves as specialists in celebrity pornographic deepfakes. Most of the content depicts women. Google and Bing also prominently list several apps for creating this kind of abusive content.
A Wired investigation in October found at least 35 sites dedicated to deepfake porn, plus another 300 that publish such material without consent in one way or another. Between them, they host hundreds of thousands of videos.
Source: Hipertextual