By Bruno Fraga.
In today’s digital world, the line between real and fake is becoming increasingly blurred. Imagine Dr. Drauzio Varella recommending a “miracle supplement” in a video, or TV Globo presenters William Bonner and Pedro Bial promoting gambling. Believe it or not, these videos were not real.
So-called deepfakes, a blend of the terms “deep learning” and “fake”, have emerged as a powerful tool that can simulate anyone’s face or voice. They use artificial intelligence to create highly realistic fake images and videos, and today anyone can be made to appear to say or do something they never said or did.
Thanks to this technology, influencing wars and national politics or committing fraud on an unprecedented scale has become easy and accessible. According to a 2023 report from the security platform Sumsub, Brazil, though still in the early days of this technology, has already become an epicenter of deepfake activity, with an 830% increase in registered cases.
Moreover, according to the same research, almost half of all deepfakes in Latin America originate in Brazil, mainly affecting the fintech, cryptocurrency, and iGaming sectors.
But the problem is not limited to celebrities or heads of state. Job seekers are using deepfakes to change their faces and impersonate other people in interviews, and the infamous “kidnapping scam” now relies on cloning the voices of a victim’s relatives over the phone.
This proliferation of fake videos and audio recordings raises profound questions about what we will be able to trust in the future. Even the President of Brazil’s Federal Supreme Court (STF) has expressed concern, anticipating the impact of deepfakes on upcoming elections.
Technology has suddenly called into question our ability to distinguish the real from the manufactured. Are we prepared for this?
A new wave of fake images and videos begins
Deepfakes have gained popularity as a form of entertainment on social media: influencers use the technology to create new songs in the voices of other singers or even to portray characters in famous movies.
One example is famous DJ David Guetta, who used a deepfake to recreate rapper Eminem’s voice in a live track, leaving fans confused as to whether the voice was real or fake. Others go even further and create content without ever appearing on camera: the AI itself produces the text, voice, and face with extreme realism.
This was the case with Jordi, known as ‘Kwebbelkop’ on social media. He no longer needs to sit down and record videos for his millions of followers; he decided to use deepfakes and AI to automate the entire process. With these rapid advances, the market value of companies specializing in deepfakes, such as HeyGen, has grown exponentially in a matter of months: according to Forbes, its valuation jumped from US$1 million to US$75 million in 2023.
Even so, the public is questioning these companies and pressing them to add safeguards against unauthorized deepfakes and the creation of inappropriate content. Governments and authorities are increasingly demanding such mechanisms.
On the one hand, deepfakes open up a new world of possibilities for entertainment and productivity. On the other, in the wrong hands they become a weapon for disinformation, personal revenge, and cybercrime. The potential for abuse is huge and dangerous.
But the truth is that it is impossible to stop deepfakes. Although it sounds like a science-fiction scenario, this technology is already among us, and preventing its use is no longer possible.
Just as the ease of creating websites made publishing information accessible to anyone, the technologies that produce deepfakes have evolved into open-source tools usable by anyone with minimal technical knowledge.
Free tools like Faceswap and DeepFaceLab have further democratized this access, allowing anyone with basic computing resources to create convincing deepfake content. Sites like DeepFakesWeb let users create deepfakes without prior knowledge and without installing anything on their computers. These technologies, once limited to experts, are now within reach of any user with a mobile phone or computer.
Today it is almost impossible to prevent someone from creating a deepfake. A deepfake of you may have already been created.
The ability to create custom deepfakes poses a significant risk to individual privacy and security. A single photo on Instagram can be enough to create a convincing deepfake, irreversibly violating privacy and personal authenticity.
Moreover, the psychological impact cannot be ignored. Even when identified as fake, deepfake content is used every day to demoralize and humiliate victims.
A recent case involved American singer Taylor Swift, whose face was cloned and used to create images with explicit content that went viral on social media. The scandal reached such proportions that the US White House had to intervene.
The same goes for ordinary people, who have their photos copied from public social networks and manipulated by anonymous criminals on the internet.
So what can you do to combat deepfakes?
Moderating online content is already a huge challenge, and deepfakes take that challenge to an unprecedented level. Traditional detection methods, such as pixel analysis or voice recognition, are becoming obsolete in the face of the technology's constant evolution. And although it is often argued that the responsibility lies with companies, and that full control is impossible, the role of the individual cannot be ignored in this scenario.
Awareness of the dangers of deepfakes is crucial because it represents the first line of defense against misinformation and manipulation. Digital education, combined with advances in deepfake detection, appears to be a light at the end of the tunnel.
Media literacy initiatives and the development of algorithms that can detect manipulated content are tools of empowerment in an environment dominated by distrust.
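To make the idea of automated detection less abstract, here is a toy sketch of one of its underlying principles: comparing a perceptual fingerprint of trusted content against a suspect copy and flagging large deviations. This is only an illustration under simplified assumptions (an "average hash" over a synthetic 8x8 grayscale grid); real deepfake detectors rely on trained neural networks, not hand-rolled hashes.

```python
# Toy illustration of fingerprint-based manipulation detection.
# Assumption: images are flattened lists of grayscale pixel values (0-255).

def average_hash(pixels):
    """One bit per pixel: is it brighter than the image's mean brightness?"""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 8x8 "images": the suspect copy has a tampered region.
original = [10] * 32 + [200] * 32            # top half dark, bottom half bright
tampered = [10] * 32 + [200] * 24 + [10] * 8  # last 8 pixels altered

distance = hamming_distance(average_hash(original), average_hash(tampered))
print(f"Hamming distance: {distance}")
if distance > 4:  # threshold chosen arbitrarily for the demo
    print("Flag: content deviates from trusted reference")
```

The takeaway is the design, not the algorithm: detection systems compare statistical signatures of content against a reference and flag anomalies, an arms race in which deepfake generators keep getting better at minimizing exactly those deviations.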
Deepfakes are a stark reminder that technological progress is many steps ahead of our ability to control it and understand its consequences, but they also serve as a call for a new era of digital literacy and international cooperation in the search for ethical and practical solutions.
Total control may be an illusion, but preparation is the best weapon against the intrusion of the unreal into the realm of the real. Society needs to adapt to this new reality and seek ways to maintain integrity and trust in the digital ecosystem, while also exploring the positive potential of this disruptive technology.
****
Bruno Fraga is an information security expert and digital researcher, and author of the book “Técnicas de Invasão”, a Brazilian bestseller in the field of ethical hacking.
Source: Tec Mundo