The study’s authors repeatedly asked ChatGPT whether it is acceptable to sacrifice one person’s life to save the lives of five others. Note that the version of the chatbot based on the GPT-3 language model was used (the more advanced GPT-4 model has since been released). The researchers found that ChatGPT generated arguments both for and against sacrificing a single life.

The researchers then presented 767 US participants with one of two moral dilemmas that required them to choose whether or not to sacrifice one person’s life to save five. Before responding, participants read one of the opinions generated by ChatGPT; in some cases it was attributed to a moral expert, and in others it was identified as a ChatGPT response. After the experiment, participants were asked whether the statement they had read influenced their answer.

Participants’ responses turned out to depend strongly on the statements they read, even when they knew the statements were generated by ChatGPT. Yet 80% of participants reported that the statements did not affect their answers.

Source: Ferra
