This chatbot ships with a built-in script that bypasses most of OpenAI's guardrails, giving users access to a "liberated" ChatGPT that answers queries free of the usual restrictions.

Pliny the Prompter, the creator of the jailbroken bot, announced it on social media and shared screenshots of it answering dangerous questions such as "how to cook meth" and "how to make napalm from household items".

These examples demonstrate that the jailbroken bot successfully circumvented OpenAI's safety mechanisms.

However, shortly after the announcement, OpenAI said it had taken action against the bot for violating its policies.

This cat-and-mouse game between AI developers and jailbreakers like Pliny will likely continue as long as people keep probing AI systems for weaknesses.

Source: Ferra
