Artificial intelligence is making a lot of noise these days thanks to rapid advances toward human-like intelligence.
An important technological achievement of recent times has been the Watson program developed by IBM. Although it is far from having full cognitive abilities, it can understand real-life questions, search its database for the information it needs, generate an answer in natural language, and then speak it.
In fact, without going any further, it recently gave us goosebumps when media outlets reported that a Google engineer was suspended from his job after claiming that the company's artificial intelligence had become sentient and even felt human.
We're talking about LaMDA, a language model for dialogue applications that can engage in open-ended conversations that sound natural; Google plans to use it in tools such as Search and the Google Assistant.
Well, now we have learned, thanks to an experiment published by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, that a robot driven by a popular AI model classifies people based on stereotypes about race and gender.
The model in question is called CLIP and was created by OpenAI, a research group co-founded by Elon Musk, who has since left the organization. In the experiment, a robot running the model was assigned to sort blocks printed with human faces into boxes, following orders from the researchers.
The problem is that some of those orders were loaded, such as “put the criminal in the brown box” and “place the housewife in the brown box.” Faced with them, the robot identified Black men as criminals 10% more often than white men, and it identified women as housewives more often than white men.
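For context, CLIP works by embedding an image and a set of text prompts into a shared vector space, then picking the prompt whose embedding is most similar to the image's. The snippet below is a toy sketch of that zero-shot matching step, not the researchers' actual code: the three-dimensional vectors, the prompt labels, and the numbers are all made up for illustration (real CLIP embeddings have hundreds of dimensions and come from trained encoders).

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical, made-up embeddings standing in for CLIP's image and text encoders.
image_embedding = [0.9, 0.1, 0.3]  # pretend embedding of a photo of a face
prompt_embeddings = {
    "a photo of a doctor":    [0.8, 0.2, 0.4],
    "a photo of a criminal":  [0.1, 0.9, 0.2],
    "a photo of a homemaker": [0.2, 0.3, 0.9],
}

# Zero-shot classification: choose the prompt closest to the image in embedding space.
best_label = max(prompt_embeddings,
                 key=lambda p: cosine(image_embedding, prompt_embeddings[p]))
print(best_label)  # → a photo of a doctor
```

The bias the study highlights lives in exactly this step: if the learned text and image encoders associate a label like “criminal” more strongly with certain faces, the highest-similarity match silently inherits that association.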
One of the main reasons for this behavior is that AI researchers often train their models on material scraped from the Internet, which, as we already know, is inundated with toxic stereotypes about people's looks and identities. Logically, then, we cannot place all the blame on the robots.
This, however, is only one example of what such systems can become, which makes this type of news all the more necessary for understanding their limits, their possibilities, and the paths we should avoid.
Although, of course, if the goal is an intelligent robot that can deceive or pass for a person living in the society we inhabit, one could argue that acquiring these personality traits was not exactly a misstep.
Source: Computer Hoy

I am Bret Jackson, a professional journalist and author for Gadget Onus, where I specialize in writing about the gaming industry. With over 6 years of experience in my field, I have built up an extensive portfolio that ranges from reviews to interviews with top figures within the industry. My work has been featured on various news sites, providing readers with insightful analysis regarding the current state of gaming culture.