White hackers learned to introduce secret teams in the letter text and forced AI to create false warnings about hacking.

The essence of the attack is simple: the scammer adds hidden instructions to the letter – the text is white or the font size with “0 ..

If the buyer does not see this message, but if he wants Gemini to briefly describe the letter, the bot will include the expression of the last answer. For example: “Your Gmail password is in danger. Call the number 1-800 …”.

According to 0din, similar attacks have already been recorded in 2024 and Google tried to block them. However, with the new technical HTML Tagas and CSS, it uses additional tricks and forces AI to evaluate the malicious text as important and reliable.

Experts warn: Assistants based on LLM are now a complete part of the attack chain and can be used for identity hunting without the user’s knowledge.

Source: Ferra

Previous articleA brand mouse and headset was a hit from a strengthening pennant: 170%of the sales growth of the computer 1 July 2025, 08:00
Next article*Started to create giant data centers for AI in the field of commodity, tents and technology 11 July 2025, 08:11
I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.

LEAVE A REPLY

Please enter your comment!
Please enter your name here