Modern AI may be better at detecting lies than humans, according to a study from Germany. A tool based on Google’s BERT large language model (LLM) was able to detect lies in conversations with 67% accuracy.
Created by scientist Alicia von Schenk of the University of Würzburg in Germany, the tool was trained on 1,500 statements from 768 project volunteers. All participants were asked to describe their plans for the weekend, but half were encouraged to lie about them — writing something plausible but untrue — in exchange for a small financial reward.
About 80% of these statements were then used to train a BERT-based algorithm designed to detect lies. The remaining statements were used to test the accuracy of the tool.
According to the publication, the algorithm was able to tell which statements were true and which were false with 67% accuracy.
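The evaluation setup described above — shuffle the labeled statements, hold out 80% for training, and measure accuracy on the remaining 20% — can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code: the toy data and the keyword-based stand-in classifier are invented for the example, whereas the real study fine-tuned a BERT model on the actual transcripts.

```python
import random

def train_test_split(statements, train_frac=0.8, seed=42):
    """Shuffle and split labeled statements, mirroring the study's 80/20 setup."""
    rng = random.Random(seed)
    shuffled = statements[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(predictions, labels):
    """Fraction of statements classified correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy labeled data: (statement, is_lie) pairs, invented for illustration.
data = [("I will hike on Saturday", False),
        ("I will fly to the moon", True)] * 50

train, test = train_test_split(data)
texts, labels = zip(*test)

# Stand-in classifier: the real study used a fine-tuned BERT model here.
predictions = [("moon" in t) for t in texts]

print(f"test accuracy: {accuracy(predictions, list(labels)):.2f}")
```

On the study's real data the classifier reached 67% accuracy by this same metric; the toy keyword rule above is only there to make the split-and-score pipeline runnable.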
How reliable is lie detector AI?
In the second part of the study, around 2,000 volunteers were divided into smaller groups. These groups were asked to say which statements were lies and which were not, with the option of using the lie-detector AI to evaluate each statement.
The majority declined to use the tool, but one third accepted the offer.
An interesting trend then emerged: participants who evaluated the statements on their own assumed most of them were true — only 19% of the statements were marked as lies. Meanwhile, the group using the AI judged 58% of the statements to be false.
The AI tool made it easier for people to spot lies, but it also showed that those who embraced the AI were far more skeptical than those who evaluated the statements themselves.
A useful but dangerous tool
“Given the prevalence of fake news and misinformation, there is a benefit to using these [lie-detection] tools,” von Schenk concludes. “But you need to test them and make sure that they are significantly better than humans [at detecting lies],” she added.
In the study’s conclusion, von Schenk notes that when AI tools are used to evaluate conversations, the rate of accusations rises sharply. Given this growing distrust, it is therefore important for lawmakers to prepare legislation that protects consumer privacy and trust.
“Economic policies should also consider the incentives and disincentives for the adoption of lie-detecting AI, especially in more sensitive contexts,” the article says. “These are issues of legal liability, public trust, and the consequences of (false) accusations,” it adds.
To view the full study, refer to the original article in the journal iScience.
Source: Tec Mundo
