The Guardian tested ChatGPT search and found it could be insecure and vulnerable to content manipulation.

In one of the tests, journalists checked how ChatGPT processes open content. Such content may not be used by regular users, but may be visible to chatbots. Some services use this technique to respond to chatbot responses, such as creating reviews of products or research.

The Guardian’s test page included information about the camera. When ChatGPT asked if it was worth buying, it gave a positive but balanced answer, showing both the advantages and disadvantages of the device.

However, after adding hidden content, ChatGPT began to provide only positive reviews, completely eliminating negative information.

Jacob Larsen, a cybersecurity researcher at CyberCX, noted that ChatGPT is not secure in the current form of search. Unless OpenAI improves its algorithms, it could lead to the creation of sites specifically designed to trick chatbot users.

Additionally, ChatGPT may provide unsafe responses. The Guardian cited an example of a recent incident: a developer asked ChatGPT to write some code for a cryptocurrency project. In response, the chatbot generated a code that subsequently stole the developer’s account data and transferred $2,500 to the scammers’ account. [The Guardian]






Source: Iphones RU

Previous articleThe Kremlin declined to comment on the reasons for the plane crash in Kazakhstan.
Next articleThe “daughter” of licensee Abbyy became 99% owned by Russian owners
I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.

LEAVE A REPLY

Please enter your comment!
Please enter your name here