Physicist Stephen Hawking once said, “The development of full artificial intelligence could spell the end of the human race.” More recently, Rameez Kureshi, director of the University of Hull’s master’s program in artificial intelligence, suggested that “the ability to think and move as a human being” could allow computers to do our work for us.
However, a new study published in the journal Transactions on Machine Learning Research suggests that both ideas may be exaggerated. After examining large language models (LLMs) such as GPT-4, researchers at the University of Amsterdam and the Santa Fe Institute found that although these systems can solve a variety of analogy problems, their performance drops when the problems are modified even slightly.
This reinforces the idea that AI does not think the way humans do: even when it produces consistent answers, it may lack the deep understanding needed to adapt its reasoning to variations of a problem. This difficulty in handling subtle changes could be a fundamental weakness of artificial intelligence.
Comparing AI with human intelligence
The authors began with problems in which the task was to complete a matrix by identifying the missing element, and to compare how well people and machines performed. The underlying cognitive process, called analogical reasoning, compares two different things to highlight the aspects they share. For example: cup is to coffee as ??? is to soup. Answer: bowl.
Applying similar tests to GPT-based programs, study co-author Martha Lewis, an assistant professor of neurosymbolic AI at the University of Amsterdam, found that the models cannot perform analogical reasoning as well as people on letter-sequence problems.
“Take the letter-string analogy: if abcd goes to abce, what does ijkl go to? Most people will answer ‘ijkm’, and [the AI] tends to give this answer too,” she explained to Live Science. But when asked “if abbcd goes to abcd, what does ijkkl go to?”, people responded correctly with “ijkl” (simply removing the repeated element), while the GPT models tended to fail.
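To make the two rules concrete, here is a minimal Python sketch (our illustration, not the authors’ code; the function names are ours) of the transformations behind these letter-string analogies:

```python
# Minimal sketch of the two letter-string analogies described above,
# with simple rule-based answers. Function names are illustrative.

def successor_rule(s: str) -> str:
    """abcd -> abce: replace the last letter with its alphabetic successor."""
    return s[:-1] + chr(ord(s[-1]) + 1)

def remove_repeat_rule(s: str) -> str:
    """abbcd -> abcd: drop the doubled letter, keeping one copy."""
    out = []
    for ch in s:
        if not out or out[-1] != ch:
            out.append(ch)
    return "".join(out)

# Original problem: if abcd goes to abce, what does ijkl go to?
assert successor_rule("ijkl") == "ijkm"        # the answer most humans give
# Counterfactual variant: if abbcd goes to abcd, what does ijkkl go to?
assert remove_repeat_rule("ijkkl") == "ijkl"   # humans get this; GPT models often miss it
```

The variant matters because it requires inferring a different rule from a single example rather than reusing a familiar pattern.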
The risks of using AI for real-world problem-solving

To test whether the GPT models are robust, the authors evaluated them on both the original analogy problems and slightly modified versions. Unlike humans, the AI models performed well on the standard tests but failed on the adaptations. This suggests that artificial intelligence prioritizes pattern matching over abstract understanding, which limits its cognitive flexibility.
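As a rough illustration of how such a robustness check could be run, here is a hedged sketch using the OpenAI Python client; the model name, prompts, and scoring are placeholders, not the study’s actual protocol:

```python
# Illustrative robustness check: score a model on an original analogy
# problem and on a perturbed variant of the same problem.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBLEMS = [
    # (prompt, expected answer) -- placeholder items, not the study's materials
    ("If abcd goes to abce, what does ijkl go to? Answer with the letters only.", "ijkm"),
    ("If abbcd goes to abcd, what does ijkkl go to? Answer with the letters only.", "ijkl"),
]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

for prompt, expected in PROBLEMS:
    answer = ask(prompt)
    status = "OK" if expected in answer else "MISS"
    print(f"{prompt!r}: got {answer!r}, expected {expected!r} -> {status}")
```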
In the digit-matrix problems, the AIs, unlike people, performed poorly when the position of the missing number changed. In the story analogies, GPT-4 tended to always choose the first answer presented. And when the researchers reworded key elements, the models relied on surface features rather than on cause-and-effect relationships.
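The first-answer bias in the story analogies can be probed with a simple control: present the same options in every possible order and check whether the model’s choice follows the content or the position. A minimal sketch, with made-up option texts and a stand-in for the real model call:

```python
# Simple control for answer-order bias: permute the options and see
# whether the chosen *text* stays stable across orderings.
import itertools
from collections import Counter

options = ["story with the same causal structure", "story with surface similarity only"]

def mock_model_choice(ordered_options):
    # Stand-in for a real model call; a purely position-biased model
    # always picks whatever appears first.
    return ordered_options[0]

picks = Counter()
for perm in itertools.permutations(options):
    picks[mock_model_choice(list(perm))] += 1

# A content-driven model picks the same text in every ordering;
# a position-biased model's picks split evenly across the texts.
print(picks)
```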
For the authors, this weakness in analogical reasoning could pose a serious problem in critical sectors such as law, where AI is used for case-law analysis and sentencing suggestions. The lack of robustness should serve as a warning for any application that depends on abstracting from specific examples to more general rules. As Lewis put it: “It is less about what is in the data and more about how the data is used.”
What did you think of discovering that even artificial intelligence has its limits? Comment on our social networks and share this story with your friends. Also, read about ChatGPT-4’s criticism of its own creator’s views on artificial superintelligence.
Source: Tec Mundo

I’m Blaine Morgan, an experienced journalist and writer with over 8 years of experience in the tech industry. My expertise lies in writing about technology news and trends, covering everything from cutting-edge gadgets to emerging software developments. I’ve written for several leading publications including Gadget Onus where I am an author.