A new study has shown that AI chatbots can be easily hacked and used to obtain sensitive information about users. As scientists have discovered, hackers can use a procedure called “indirect rapid injection” (indirect rapid injection). It allows you to sneak malicious components into the system.

For example, an attacker inserts an invisible font into a request on a web page to be used by a chatbot to answer a user’s question. Thus, the researchers forced the Bing chatbot to obtain the user’s personal financial data. The program managed to “spoof” an unsuspecting user’s email IDs and financial information.

Source: Ferra

Previous articleGoodbye AirPods Max: These Sony headphones have 45 hours of battery life and adaptive noise canceling
Next article“Yandex” allowed employees to work from abroad
I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.

LEAVE A REPLY

Please enter your comment!
Please enter your name here