The polygraph measures respiratory rate, heart rate, and blood pressure to infer whether a person is lying. The instrument is already 85 years old and can hardly be considered reliable.
A group of scientists from the University of Würzburg has proposed an alternative: a system based on BERT, a language model from Google. It could be used routinely, for example, to search for fabricated claims in a CV or to verify posts on social media.
The researchers say the system helps people spot deception more often, but it also encourages them to suspect one another of lying more readily.
AI can detect lies and truths with 67% accuracy
In a recent study, Alicia von Schenk and her colleagues at the University of Würzburg in Germany developed a tool that detects lies far better than ordinary people do, and they ran several experiments to find out how it would be used.
The scientists collected 1,536 statements, half of them true and half false. Using Google’s BERT language model, they trained an algorithm to distinguish lies from truths on 80% of these statements, then tested the resulting tool on the remaining 20%.
It turned out to correctly determine whether a statement was true or false 67% of the time. Humans do much worse at this task: on average, we guess correctly only about half the time.
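The evaluation setup described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors’ actual pipeline: the real study fine-tuned Google’s BERT, which is replaced here by a random-guessing placeholder so that the arithmetic of the 80/20 split and the accuracy metric is visible.

```python
import random

def train_test_split(items, train_fraction=0.8, seed=0):
    """Shuffle the items and split them into train and test portions."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 1,536 statements, half labeled true (1) and half false (0), as in the study.
statements = [(f"statement {i}", i % 2) for i in range(1536)]
train, test = train_test_split(statements)
print(len(train), len(test))  # 1228 train, 308 test

# Placeholder "model" that guesses at random -- roughly what untrained humans
# achieve (~50%). The BERT-based tool reached 67% on its held-out test set.
rng = random.Random(1)
preds = [rng.randint(0, 1) for _ in test]
labels = [y for _, y in test]
print(round(accuracy(preds, labels), 2))
```

The 67% figure reported in the study is the same metric computed here, only with a fine-tuned BERT classifier producing the predictions instead of a coin flip.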
To find out how people would interact with the tool, the scientists conducted a series of tests and came to the following conclusions:
- Even when people can use an AI polygraph for a small fee and receive financial rewards for accurate judgments, they still aren’t very interested in using it.
- Only about a third of the volunteers used the tool, perhaps out of skepticism toward the technology or out of optimism about their own ability to detect lies. Those who did use it, however, almost always followed the algorithm’s verdict.
The reliability of a tool also influences our behavior. As a rule, we tend to assume that those around us are telling the truth, and the study confirmed this: although the volunteers knew that half of the statements were lies, they marked only 19% of them as such. When using the AI tool, however, they suspected deception in 58% of cases.
In some ways, this is a good thing: such tools let us catch more lies. But it also erodes trust, without which relationships cannot be built.
In the study, von Schenk and her colleagues were only interested in building a tool that could detect lies better than humans. That isn’t especially hard, considering how bad we are at it. But imagine such a tool being used on a daily basis: assessing the credibility of social media posts, or checking resumes and interview answers for false claims.
Are we willing to accept an accuracy rate of 80%, where only four out of five statements are correctly judged as true or false? Would even 99% accuracy be sufficient? It is difficult to say.
It is worth remembering that every lie-detection method has been prone to error. The polygraph measures heart rate and other signs of arousal because some signs of stress were thought to be unique to liars. That is not true, and it has been known for a long time.
That is why lie detector results are generally inadmissible in American courts. Even so, polygraphs are still used in some settings, for example on reality shows, often to the considerable harm of their participants.
Imperfect AI tools could have an even bigger impact, von Schenk says. Only a limited number of people can be tested with a polygraph each day, but AI can screen for deception on an almost unlimited scale.
“Given that we have so much fake news and disinformation, these technologies have advantages,” says von Schenk. “But we need to test them thoroughly to make sure they are significantly better than humans. If an AI lie detector generates a lot of accusations, we might as well not use it at all,” she adds.