The latest report covers new cases from the past quarter. For example, an organized crime group in Cambodia reportedly tried to use artificial intelligence to speed up its operations. Accounts allegedly linked to the Chinese government were also identified.

OpenAI explained that monitoring relies on a combination of automated systems and human review, and focuses on attackers' behavioral patterns rather than on individual requests.

The company also emphasizes users' psychological safety. If a user indicates an intent to harm themselves, ChatGPT will not carry out dangerous instructions but will instead direct the person to help and support resources. In cases involving threats to others, conversations are reviewed by a human, who can notify law enforcement if necessary.

OpenAI acknowledges that the model's safeguards may weaken over long sessions and says it is working to strengthen these mechanisms.

Source: Ferra

