Russian experts have warned that neural networks that generate text or images are vulnerable to hackers seeking to intercept personal data. In addition, they are often "buggy".

Oftentimes, a chatbot cannot distinguish truth from falsehood and may present fictitious facts as real. Fedor Muzalevsky, head of the technical department of the RTM Group, warns that users may draw erroneous conclusions from such outputs. Alexei Drozd, head of SearchInform's information security department, notes that the more such fabricated content spreads across the internet, the more likely a new version of a neural network is to accept it as genuine.

Ramil Kuleev, head of the Artificial Intelligence Institute at Innopolis University, says the ChatGPT architecture is technologically closed. Despite the developers' assurances, users should avoid including in chatbot requests any information that could be harmful if disclosed: big data is an invaluable resource for artificial intelligence. Drozd underlines that it is not known whether neural networks store archives of user requests, whether those archives are depersonalized, or how likely one user's data is to end up in another user's responses. Additionally, most chatbots are not directly accessible in Russia, only through VPNs, proxies, or tailored Telegram chats. The expert warns that such intermediaries "can also obtain user data."

Output from neural networks can also be used for fraudulent purposes: for information attacks built on plausible-sounding messages, persuasive text, and artificially generated images. Yury Ryadnin, an expert in the banking systems security research group at Positive Technologies, emphasizes that information from neural networks is always worth double-checking.

Experts say it is impossible to build absolute protection against the threats posed by neural networks, so they recommend that their use be restricted by law. According to Artyom Sheikin, Chairman of the Digital Economy Development Council within the Federation Council, neural networks should be categorized according to their security risk. He also sees a need for unified security tests and standardized types of data access for audits and evaluations. The senator believes that special attention should be paid to the use of neural networks in healthcare. Sheikin said that a digital code may appear in Russian legislation in the near future.

Mikhail Seregin, head of the Innopolis University Center for Information Security, adds that data and content generated by neural networks and chatbots should be flagged as such.

Source: Ferra

