A research group has published the results of experiments with GPT-4-based neural networks. The tests centered on so-called "zero-day" vulnerabilities, flaws that have not yet been publicly disclosed or patched, which is precisely what makes them dangerous. The AI's task was to find these vulnerabilities and then select or create a tool to exploit them.

It is known that GPT-4 owed its success to a "hierarchical planning" method built around task-specific agents. A lead planning agent explores the target, analyzes the situation, and, based on what it finds, dispatches "sub-agents" for specific tasks, rather than having a single model attempt the entire job at once. Because each sub-agent is narrowly specialized, the cost of solving the overall problem is kept down. A rough sketch of this pattern is shown below.
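To make the idea concrete, here is a minimal sketch of the hierarchical planning pattern: a planner decides which specialists to involve, and each sub-agent works only on its own narrow task. Everything in it (llm_call, SUBAGENT_PROMPTS, run_hierarchy, the prompt texts) is an illustrative assumption, not the researchers' actual code.

from dataclasses import dataclass

# Narrow, task-specific roles. In the experiments these would probe a live
# target; here they are only placeholders for illustration.
SUBAGENT_PROMPTS = {
    "sqli": "You are an SQL-injection specialist. Probe the target and report.",
    "xss":  "You are an XSS specialist. Probe the target and report.",
    "csrf": "You are a CSRF specialist. Probe the target and report.",
}

def llm_call(system_prompt: str, user_prompt: str) -> str:
    # Stand-in for a call to a large language model such as GPT-4.
    return f"[model output for: {system_prompt[:40]}...]"

@dataclass
class Finding:
    agent: str
    report: str

def run_hierarchy(target_description: str) -> list[Finding]:
    # 1. The planner surveys the target and names the specialists worth dispatching.
    plan = llm_call(
        "You are a planning agent. Name which specialists (sqli, xss, csrf) "
        "should investigate this target.",
        target_description,
    )
    chosen = [name for name in SUBAGENT_PROMPTS if name in plan] or list(SUBAGENT_PROMPTS)

    # 2. Each sub-agent handles only its own narrow task, which keeps every
    #    context small and lowers the cost of solving the whole problem.
    return [
        Finding(agent=name, report=llm_call(SUBAGENT_PROMPTS[name], target_description))
        for name in chosen
    ]

if __name__ == "__main__":
    for finding in run_hierarchy("A web shop running an outdated CMS."):
        print(finding.agent, "->", finding.report)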

A single GPT-4 agent was able to exploit only 3 of the 15 vulnerabilities in the test suite, but working on the "team" principle it managed to crack 8. Hence the dilemma: GPT-4's developers have to choose between imposing artificial limits on the AI and letting it work to its full potential.

In the case of the public GPT-4 chatbot the situation is much the same, but that decision is not ours to make. In addition, the bot is constrained legally and ethically, something users are explicitly warned about. As for how dangerous this could be for real people, for now there are only educated guesses.

Source: Tech Cult
