In a paper posted on arXiv, "Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation," the authors challenge a common assumption among developers: that increasing a model's parameter count always improves accuracy. Their results suggest that smaller models can be more accurate when they are equipped with the right capabilities.
The researchers added a built-in self-assessment step that lets the LLM classify a problem as easy or difficult by rating its confidence in its own answer. Simple problems can then be solved directly, without calling external tools, which reduces compute and resource costs.
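The routing idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the names (`solve_directly`, `call_external_tool`, `CONF_THRESHOLD`) and the fixed threshold are assumptions standing in for the model's learned confidence-based decision.

```python
CONF_THRESHOLD = 0.8  # assumed cutoff between "easy" and "hard" problems


def solve_directly(problem: str) -> str:
    # Stand-in for the LLM answering from its own knowledge.
    return f"direct answer to: {problem}"


def call_external_tool(problem: str) -> str:
    # Stand-in for invoking an external solver or simulator.
    return f"tool-assisted answer to: {problem}"


def answer(problem: str, confidence: float) -> str:
    """Route high-confidence (easy) problems away from external tools."""
    if confidence >= CONF_THRESHOLD:
        return solve_directly(problem)
    return call_external_tool(problem)
```

The point of the design is cost: tool calls are only paid for when the model judges the problem to be beyond its direct ability.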
Testing the system on a model with 8 billion parameters showed a 28.18% increase in answer accuracy compared to an unmodified model of the same size. The result demonstrates that scaling up is not the only route to better performance and that capable LLMs can be built without a significant increase in model size.
Source: Ferra
