Moreover, all this can be done using natural language instead of changing the code. We are talking about the so-called ones. rapid execution or operational injections – commands in human language that, as they say, confuse AI chatbots due to the fact that attackers set tasks that go beyond the logic of the algorithms set by the developers.

At the same time, chatbots do not have an “ethics” system; There is only what is taught to the system. Bots therefore become “incredibly gullible” and do whatever is asked of them. Hackers may ask a bot to harvest sensitive user data, steal that information, or send reputation-damaging messages. Instead of ignoring the command, the AI ​​will treat it as a legitimate request. And the user may not even know the attack is happening.

According to developer Simon Willison, co-creator of the widely used web framework Django, cybersecurity researchers are not aware of any successful rapid delivery attacks outside of published experiments so far. However, as interest in personal artificial intelligence assistants and other solutions increases, the risk of such attacks also increases.

However, hackers disagree with security researchers and say successful attacks are achieved with quick execution.

Source: Ferra

Previous articlePre-orders for the Russian game Smoot have opened. It will require an RTX 3080 and 32 GB of RAM.
Next articleIn Spain, a biobank of live human brain specimens was closed
I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.

LEAVE A REPLY

Please enter your comment!
Please enter your name here