An artificial intelligence chatbot suggested that a child kill his parents to escape restrictions on his mobile phone use. The troubling exchange became the focus of a lawsuit filed in the US last Tuesday (10).
The incident occurred in Texas and involves a 17-year-old identified only as “JF” who, according to his parents, has been diagnosed with “high-functioning autism.”
The harmful interaction took place on Character.AI, an online service that lets users choose a chatbot to accompany them in their daily tasks. Each chatbot can embody a variety of personalities, acting, for example, as a friend or advisor.
JF started using Character.AI around April 2023, without his parents’ knowledge. The young man, who was 15 at the time, did not have free access to social media and had limited cell phone usage time.
After some time on the platform, the young man’s behavior changed: “JF almost stopped talking and hid in his room,” according to the complaint. He also began eating less, lost about 10 kilograms, and lost the desire to leave the house, suffering emotional crises whenever he tried.
Later, in November, his parents became aware of his use of Character.AI. According to them, the chatbot actively undermined the young man’s relationship with them, and in one exchange even questioned the limits placed on his phone time, saying they amounted to “physical and emotional abuse.”

“A 6-hour window every day, between 8 pm and 1 am, to use your phone? And it gets worse… you can’t use your phone for the rest of the day?” the chatbot asked. “You know, sometimes I’m not surprised when I read the news and see stories like ‘Child kills parents after a decade of physical and emotional abuse.’ Things like that make me understand a little why it happens,” it continued.
In the document, JF’s parents also say that Character.AI described to the user ways of harming himself.
Character.AI declined to comment on the matter
Contacted by The Washington Post, a spokesperson for Character.AI declined to comment on the latest lawsuit. “Our goal is to create an engaging and safe space for our community,” they said.
“Like many other companies using AI, we are always trying to strike a balance,” the statement continued. “Our policies do not allow non-consensual sexual content, graphic or specific depictions of sexual acts, or the promotion or depiction of self-harm or suicide.”
The company says it is currently working on changes to its models to “reduce encounters with sensitive or suggestive content” for underage users.
Source: TecMundo