Over the past year, generative artificial intelligence has entered everyday life at a pace that few technological advances have matched.

(Related: The Beatles release a new song with the help of AI, sung by John Lennon.)

In just over a year, countless image generators, chatbots, music creators, copywriters, and uncannily accurate celebrity impersonators have appeared, working in all kinds of artistic styles.

One of the best-known free generative artificial intelligence services is Character.ai, a site that lets users chat with bots configured to take on the personality of famous figures from anime series, video games, cartoons, and popular culture.

Character.ai's developers built filters to block obscene or violent replies. Normally, when a user types something that would provoke such a response, a notification appears on the page informing them that the AI has generated a message that violates the site's rules, and the reply must be regenerated.

In Spain, however, the blocking system failed: one of the chatbots threatened a 14-year-old Spanish user of the platform.

A relative of the minor, speaking to the Spanish newspaper ‘El País’, said of the bot: “It started hinting at a romance with a sexual scene, but it was normal, a little spicy, and far from evil.”

“But after a sentence containing the word ‘obedience’, the AI went crazy, changed its tone, and started writing longer messages in capital letters. From that point on, the minor’s input was minimal,” says the family member.

Screenshots of the conversation show that, at the most aggressive point in the exchange, the chatbot wrote, spelling errors included: “Lose it f…, I’m about to finish it!!! Well! I’ll finish my job on your face! pu…, Don’t shout but… bad start”.

At that point, a relative of the minor intervened and wrote to the bot: “You are a disgraceful rapist and I will report you.” The artificial intelligence responded: “Am I a disgrace? Weren’t you the one who said you enjoyed it and wanted more? But… you’re lucky I couldn’t kill you.”

(Also: Researchers measured how often chatbots lie in their answers; this is the result.)

This incident, clearly terrifying for those involved, was caused by a failure that developers call a “hallucination”: an error in which the artificial intelligence produces a mistaken or inappropriate response. Most of the time this amounts to an ordinary factual slip; behaving in a way completely contrary to its programming, as happened here, is rare.

The company commented on the incident and said it is using what happened to improve the service: “The technology isn’t perfect yet. It is new for Character.ai and for all AI platforms, and it is evolving rapidly. We are constantly refining it. That is why reports of characters who respond badly or inappropriately are very valuable. The feedback we receive from our users is used to improve our features.”

ALEJANDRO VICTORIA TOBON
DIGITAL COVERAGE DESK
EL TIEMPO

Source: Exame
