According to academician Harutyun Avetisyan, director of the Institute of Systems Programming of the Russian Academy of Sciences, data "poisoning" — in which cybercriminals introduce distortions into training datasets — causes artificial intelligence systems to malfunction and produce incorrect results. To address this problem, scientists have developed a test dataset called SLAVA that helps evaluate the accuracy of algorithms and protect against such attacks. Reliable versions of frameworks for working with artificial intelligence have also been created.
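The general mechanism of such an attack can be sketched in a few lines: if an attacker flips or distorts even a modest fraction of the training labels, the model's accuracy on a clean evaluation set drops, which is precisely the kind of degradation a benchmark dataset is meant to expose. The example below is a minimal, generic illustration in Python with scikit-learn; it does not use the SLAVA dataset or the institute's actual tooling, and the `poison_labels` helper and all parameters are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training examples."""
    poisoned = labels.copy()
    n_poison = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: flip 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction = {fraction:.0%}, held-out accuracy = {acc:.3f}")
```

Running the sketch shows accuracy falling as the poisoned fraction grows, which is why evaluation on a trusted, held-out benchmark is a practical first line of defence.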
Avetisyan noted that, with the growing use of generative artificial intelligence models, protecting such systems has become increasingly important. AI errors have already led to scams involving fake videos and even to court decisions based on inaccurate data. However, according to the academician, abandoning artificial intelligence is not an option, as that would cause a technological lag. Instead, reliable artificial intelligence systems must be built on the basis of modern scientific developments.
Source: Ferra
