Artificial intelligence has taken the world by storm in recent years, with chatbots like ChatGPT among the most popular. One company, however, has taken communication with artificial intelligence a step further by offering users the chance to talk with a simulation of their deceased loved ones.

Project December

Feign the Dead – Project December

Project December is an artificial intelligence platform developed by Jason Rohrer that lets you “simulate a text conversation with anyone.” Thanks to patented technology and advanced artificial intelligence running on one of the most sophisticated supercomputers in the world, it is now possible to set up a simulated chat “even with someone who has already died,” according to the website.

For $10, the chat includes “more than 100 exchanges” and lasts about an hour, depending on how quickly the user responds and how the conversation is paced. Once applicants fill out a questionnaire about the deceased and make payment, the platform generates a personalized identity for the interaction.

Rohrer’s AI platform was even featured in a 2024 documentary called Eternal You, which explores technologies that create digital copies of deceased people and how grief is marketed through these user experiences.

Eternal You – International Trailer

Some people who have adopted this technology find comfort in the text, voice, or video simulations, saying it feels as though their loved ones are actually speaking to them from beyond. Others, however, find AI memorialization of the dead disturbing and manipulative.

The Ethical Dilemma of Simulated Conversations with Dead People

Ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basinska from the University of Cambridge have raised concerns about the risks of the “digital afterlife industry.” They argue that chatbots imitating deceased people, sometimes called deadbots, griefbots, or ghostbots, raise several key social and ethical questions that have yet to be resolved.

Questions raised include: Who owns a person’s data after they die? What is the psychological impact on survivors? What may a deadbot be used for? And who has the right to switch the bot off for good?

One scenario Hollanek and Nowaczyk-Basinska propose is this: a 28-year-old woman’s grandmother dies, and she uploads their text messages and voice notes to an app that lets her call an artificial-intelligence version of her deceased grandmother at any time. Once the app’s free trial ends, however, the digital grandmother starts pitching products to her mid-conversation.

“People may develop a strong emotional connection to such simulations, making them especially vulnerable to manipulation,” Hollanek suggests. “Methods and even rituals should be considered to properly retire deadbots. This could mean a digital funeral of sorts.”


While this may seem absurd, Hollanek’s claims build on a 2018 paper in which a group of ethicists argued that digital human remains have value beyond mere economic interest and should be treated as “subjects with intrinsic value.” This aligns with the ICOM Code of Professional Ethics for Museums, which states that human remains must be treated with respect and their dignity kept intact.

Hollanek and Nowaczyk-Basinska do not believe an outright ban on deadbots is feasible, but they argue that companies should treat donor data “with reverence.” They also maintain that deadbots should never appear in publicly accessible digital spaces such as social media, with the sole exception of historical figures.

Mental health issues

In 2022, ethicist Nora Freya Lindemann argued that deadbots should be classified as medical devices in order to protect users’ mental health. Children, for example, may be confused if a deceased loved one appears to be digitally “alive.”

However, Hollanek and Nowaczyk-Basinska consider this idea “too restrictive because it refers specifically to deadbots designed to help interactants cope with grief.” Instead, they argue these systems should be “substantially transparent,” so that users understand what they are interacting with and the risks involved.


There is also the question of who has the right to switch a bot off. If someone bequeathed their ghostbot to their children, could the children decide not to use it? Or must the deadbot stay active forever if that is what the deceased wanted? The wishes of the different parties may not align, so who has the final say? “Additional measures are needed in the development of these re-creation services,” conclude Hollanek and Nowaczyk-Basinska.

As we debate the future of digital life after death, regulation and transparency are needed to protect human well-being and promote ethical interactions with artificial intelligence. The Cambridge ethicist duo hope their arguments will help “focus critical thinking about user immortality in the design of human-AI interaction and in AI ethics research.”

Source: Digital Trends

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.
