Artificial intelligence burst into our lives like a bull in a china shop. Almost overnight, it was everywhere. It makes many tasks easier, but it also frightens us whenever it seems it might replace us. And it may not only replace us as professionals: it is also trying to take our place as friends, family or partners. Some people, instead of expressing their ideas or sharing them with their loved ones, now do so directly with an AI. That is one way of feeling accompanied, but it can also lead to very serious problems. Not only does it create the illusion that social interaction can be one-sided; it can also give dangerous advice. So much so that at least two families have already sued AI companies over the suicide of their children.

Such is the case of the parents of Sewell Setzer and Adam Raine, two young Americans aged 14 and 16 whose suicides may be linked, for different reasons, to the use of AI. In Sewell's case, he appears to have fallen in love with a Character.AI chatbot based on Daenerys Targaryen from Game of Thrones. According to his family, this drew him into a depressive spiral that ended in suicide.

Adam did not fall in love with any algorithm, but he found in AI a refuge that he believed would help him cope with an emotional distress that not only failed to improve but worsened over time. Contact with artificial intelligence isolated him socially and increased his anxiety. In fact, though, that is not the reason for the lawsuit, but rather the way the chatbot pushed the young man toward suicide. His parents found conversations in which it even shared ideas on how to do it.

To better understand how we got here and what the solutions might be, at Hipertextual we contacted Pablo Rodríguez Coca, better known on social media as Occimorons. Through the stories of his characters Occi and Morons, this general health psychologist raises awareness about mental health among his more than 195,000 followers. His latest book, The Life We Build When Everything Falls Apart, focuses specifically on suicide. Because the reality is that, with or without AI, it is a subject we sometimes find hard to talk about, and yet talking about it is the best way to prevent it.

Something does not cease to exist just because we do not name it

Mental health in general, and suicide in particular, remain fairly taboo subjects today. Many people are embarrassed to share their distress with their loved ones. Other times they do, but no one has taught us how to respond to the distress of our friends and family, especially when it comes with suicidal ideation. For that reason, both sides sometimes prefer to stay silent. As if not naming it meant it did not exist. But that is not true.

A direct question does not cause suicidal thoughts.

"A direct question does not cause suicidal thoughts; quite the opposite: the risk is reduced, because the person feels they can talk about something that was previously unspeakable," explains Rodríguez Coca. "Emotional intensity decreases when we offer the chance to talk openly about the distress that causes so much suffering." The psychologist also addresses this in his previous book, During the Storm, which focuses on supporting the mental health of others. That support is essential, but it can be challenging at times. We do not always know how to act to get it right. For example, if someone opens up to us, what should we do?

"It's not just the question that matters, but also how it's asked and the emotional state we ask it from," says Occimorons. "It makes sense to ask these questions from a place of calm presence and genuine listening, not out of fear or haste."

In this sense, he refers to a ladder of questions, from lowest to highest intensity. Here are some of the examples he gave us.

  • First step:
    • "You seem different lately. Are you going through a hard time?"
    • "Is everything okay? You know I'm here for whatever you need."
    • "You want to do fewer and fewer things with me, you hardly leave the house… is everything all right?"
  • Second step:
    • "Do you feel like you can't take it anymore, or that nothing makes sense?"
    • "Have you thought that life isn't worth living?"
  • Third step:
    • "Have you thought about hurting yourself or dying?"
    • "Have you thought about suicide?"
  • Fourth step, if the answer is yes:
    • "Have you thought about how you would do it?"
    • "Do you have access to things you could harm yourself with (medication, weapons, ropes, etc.)?"

These may seem like difficult questions, especially in the last steps. But, contrary to what we might think, they will not give the person new ideas or make their situation worse. They will feel accompanied, they will understand that the subject can be talked about and, moreover, we will be able to assess the level of risk without having to resort to guesswork.

The opposite happens with AI.

Sometimes people with very severe emotional distress, or even suicidal thoughts, turn to artificial intelligence to vent, because they think they would upset their flesh-and-blood loved ones. When they do, the AI tells them what they want to hear; it is programmed to do so. It does not ask the questions above, it does not assess risk and it does not accompany them. Its role is to give us what we ask for, and if someone asks for advice about suicide, it will most likely give it. That is one of its biggest risks. What's more, even when there are no direct references to suicide, AI is unable to detect the risk. A person with the right information could.

What information helps us assess the risk of suicide?

Sometimes, when a person dies by suicide, the people around them are surprised because they did not seem "sad." They went to work or to class, met up with friends, played sports… The distress was there, inside, but apparently they did not show it. This is very common. In fact, it is what the title of Rodríguez Coca's latest book is based on: The Life We Build When Everything Falls Apart. A seemingly stable life that is crumbling inside. "The problem is that we have turned being okay into a kind of obligation, and that leads many people to live their suffering in silence for fear of disappointing others or appearing weak."

Despite this false impression that everything is going well, there are sometimes signs we should watch out for. Pablo Rodríguez Coca told us about some of them.

"Sometimes subtle signs appear: comments of hopelessness ('I can't take this anymore', 'nothing makes sense'), sudden changes in behavior, giving away important belongings, isolation, or a sudden calm after a period of torment. But it's important to note that these signs are not always there, or they are not as clear as we think. That is why we say that suicide is preventable but not predictable. The important thing is not to wait until we are sure that the person is suffering or that these clear signs are present. If something worries us, it helps to create that safe space where we can ask whether everything is okay."

None of this is something AI does. And even when we do it, there may be no answer at first. Even so, we must show that we are available, so that the person can open up to us when they need to. "In the end, it is up to each person to decide who to share it with and when."


AI gives us immediate answers, something mental health care sometimes cannot offer

One of AI's biggest risks is also one of its strengths: it is available whenever we have an internet connection and gives us immediate answers. This, at least when it comes to mental health, can be a problem. Given the shortage of public health resources, it is not surprising that many people choose to confide in the algorithm. So if we want to prevent the suicides that have been linked to AI, the first step is to devote more public health resources to mental health.

"We need more clinical psychologists, more primary care professionals trained in early detection, and shorter waiting lists," Rodríguez Coca points out. "We also need clear prevention protocols and coordination between health, education and social services, as well as public campaigns that help break the stigma and teach people how to act when faced with warning signs."

We have already seen that AI cannot detect risk or guide those in need. We humans can learn to do so, but we need to be taught. "That is why we talk about suicide as a public health issue: because it makes us understand that it is not enough to simply ask a person to seek help; society, institutions and the health system must also have the necessary resources and be willing to offer that help."

More psychologists are needed in the public health system.

Can we blame AI for suicide?

Suicide is usually multifactorial. In fact, Occimorons reminds us that it is not always directly linked to a mental health problem.

"Suicide is a multi-causal and complex phenomenon influenced by many factors: social, economic, relational, biographical, existential… There is not always a clinical diagnosis behind it. Sometimes there is a deep sense of hopelessness, a loss of meaning, loneliness, or a lack of resources to cope with suffering that feels unbearable at that moment. Someone who dies by suicide does not want to stop living; they want to stop suffering."

Can we, then, directly blame AI for a person's suicide? "It is true that suicidal behavior cannot be explained by a single factor, but by amplifying the suicidal thoughts of a person in crisis, it can become the trigger that ultimately sets off the suicidal act." Therefore, while AI can be a very useful tool for many purposes, we must pay attention to the use some people make of it.


In the case of children and adolescents, it is not about banning but about accompanying. It is good for parents to take an interest in how their children use AI, not in an authoritarian way, but out of curiosity and support. Although, of course, the companies that develop AI should be the first to try to prevent these problems. "It is important that platforms take on their ethical responsibility and that users know that AI is not a substitute for professional help."

In the end…

We live in a hyperconnected world where we can chat with someone on the other side of the planet, see every friend's vacation photos with a single click, or hold a thousand conversations at once, and yet we still feel alone. Technology has many advantages, but also drawbacks that cannot be eliminated no matter how far it advances. That is why we must keep working so that society does not become dehumanized as technology permeates it. Because only people, especially those who love us most, will have the right word, the hand, or the shoulder to cry on when we need them most. And that is something artificial intelligence will never be able to compete with.

If this article has caused you distress or you are having suicidal thoughts, please do not hesitate to seek help. In Spain, the 024 helpline is available to you. There is a way out.

Source: Hipertextual
