In a series of experiments, the researchers showed that removing these elements can cut an AI model's accuracy to as little as 20%.
In the first test, the team trained the model to restore text from tokens, the minimal units into which input text is divided. It turned out that the greatest contextual load was carried by stop words and punctuation marks, not by "semantic" nouns or verbs.
The researchers then removed the same elements from language-understanding benchmarks, including MMLU and Babylon. The result: even large models, including ChatGPT, began to make mistakes more often.
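The ablation described above can be illustrated with a minimal sketch. The stopword list and helper function below are illustrative assumptions, not the researchers' actual code or word list; the point is simply to show what a benchmark prompt looks like once punctuation and stop words are stripped away.

```python
import string

# Illustrative stopword list (assumption: the study's full list is not given here).
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "and", "or", "that", "it"}

def strip_function_words(text: str) -> str:
    """Remove punctuation and common stop words, keeping only 'semantic' words."""
    # Drop punctuation characters.
    no_punct = text.translate(str.maketrans("", "", string.punctuation))
    # Drop stop words (case-insensitive match).
    words = [w for w in no_punct.split() if w.lower() not in STOPWORDS]
    return " ".join(words)

prompt = "What is the capital of France, and why is it famous?"
print(strip_function_words(prompt))  # → What capital France why famous
```

Even a human reader loses some of the question's structure here, which mirrors why models scored worse on tasks after this kind of removal.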
The researchers emphasized that treating "insignificant" words as secondary is a mistake, especially when interacting with AI.
Source: Ferra
