The study adds to previous findings suggesting that large language models (LLMs) solve problems through probabilistic pattern matching rather than formal logical reasoning. When irrelevant information, such as details about the size of a fruit, was added to a math problem, the models' accuracy dropped sharply, in some cases by as much as 65.7%.
Experts believe that further progress in artificial intelligence will require models capable of abstract symbolic manipulation, akin to the operations of traditional algebra.
Source: Ferra
