Google has presented PaliGemma 2, a family of artificial intelligence models capable of recognizing people's emotions. This has raised concerns among experts, who believe such neural networks could be used to cause harm, TechCrunch writes.

Google taught an AI model to recognize people’s emotions

“PaliGemma 2 generates detailed, contextually relevant image captions that go beyond simple object identification to describe actions, emotions, and the overall narrative of a scene,” Google said in a statement.

Emotion recognition is not enabled by default; the model has to be specially fine-tuned for it. Even so, experts interviewed by TechCrunch expressed concern about the arrival of a publicly available emotion detector. "This seriously worries me. I find it problematic to assume that we can 'read' people's emotions. It's like asking a magic ball for advice," said Sandra Wachter, professor of data ethics and artificial intelligence at the Oxford Internet Institute.

Mike Cook, a researcher at Queen Mary University of London, told TechCrunch that defining emotions "in general" is impossible because people experience them in complex ways. "Of course, we believe that we can understand how others feel just by looking at them. <...> I am sure that in some cases it is possible to identify certain common signs, but it is impossible to completely 'solve' this problem," the expert noted.

Experts also believe that emotion detection systems tend to be unreliable and biased, since they inherit the assumptions of the people who design them. Some studies show that such models attribute more negative emotions to the faces of dark-skinned people.

Google said it had run tests to check PaliGemma 2 for demographic bias. The company stated that its family of AI models showed a "low level of toxicity" compared to industry benchmarks, but it did not publish the full list of those benchmarks or specify what types of tests were performed.

The biggest concern among experts is that such neural networks could be used to cause harm, for example to discriminate against marginalized groups by law enforcement agencies, HR specialists and border services, Heidi Khlaaf, chief AI researcher at the American research institute AI Now Institute, said in an interview with TechCrunch.


Author: Bogdan Muzychenko

Source: RB
