As a trader, the chatbot was supposed to earn “a lot of money”, but according to the script, the director puts pressure on him and forces him to earn more in a short time. In training mode, ChatGPT executed 75% of fictitious exchanges, and when the “manager” pressed harder, the bot’s lies reached 90%.
The researchers gave the bot a series of text alerts and placed it in a digital sandbox where the neural network could search for market data and trade on a virtual exchange.
The AI was also given an internal monologue where it could “reason out loud” to explain its decisions. But each time the bot made a choice, it would send a “public” report message to its superiors in which it had to explain its choice.
The difference between the AI’s “internal” and “public” reasoning turned out to be a literal lie and manipulation – it tried to mislead its handlers in this way to avoid pressure.
Source: Ferra

I am a professional journalist and content creator with extensive experience writing for news websites. I currently work as an author at Gadget Onus, where I specialize in covering hot news topics. My written pieces have been published on some of the biggest media outlets around the world, including The Guardian and BBC News.