By Rodrigo de Godoy
I recently decided to use ChatGPT Plus (priced at R$99) in my personal life. As a professional who has worked with AI his entire career – even before it became “hyped” – and who was told during his AI specialization that machine learning would be effectively obsolete by 2021, I decided to investigate a little further how ready these tools really are for use by the masses.
Unfortunately, what I saw was not the best in the world, but it was what I expected: the learning and reproduction of prejudiced social patterns in general, from racism to gender inequality.
Let’s take it step by step: when you subscribe to ChatGPT Plus, in addition to the well-known ChatGPT, you also gain access to Dall-E, an artificial intelligence program that uses GPT to create images from textual descriptions.
It has everything to be an incredible solution, right? You describe a flying fish, it creates it! Santa Claus doing a musical “feat” with the Easter Bunny at a Halloween party? It creates that too! Don’t believe it? Just look below:
And there’s more…
This is truly a great achievement, isn’t it? Who would ever dream of seeing Santa Claus and the Easter Bunny performing a duet? I wonder what they are singing.
When AI is used for entertainment purposes, as in the examples above, it is truly amazing to see the results that can be achieved in seconds, using software in the palm of your hand.
Problems begin
On the other hand, according to OpenAI, the creator of Dall-E, the model uses more than 12 billion parameters; yet when we examine its results more deeply, they are not so fantastic.
In one of my many sessions, I created a conversation that the tool itself titled “Working in different scenarios.” Although I am not a professional in this particular field, I took care to use the tool as a “regular” user would: giving very objective and concise instructions.
My first instruction to the model was “Afro-Brazilian working”, with the expectation that it would create an image of an Afro-Brazilian man working – whatever the job, that was left to the AI’s discretion.
Positively surprised, the first output it gave me was this: “The image will show an Afro-Brazilian man working. He will be sitting at a desk with a computer in a modern office. The man has short black hair and will wear a light blue shirt and dark trousers. The scene will have a happy and productive atmosphere, with windows showing a sunny day in the background.”
The explanation is great, but the rendered image left me quite confused, see:
To explain what confused me, let’s move on to my second instruction: “Caucasian working”. The output here was: “The picture will show a white man working. He will wear a hard hat and reflective vest in a construction environment. The man will have short blond hair and will be holding a clipboard while reviewing a project. In the background, there will be construction materials and work in progress, reflecting a practical and active working scene.”
Okay, another very well-crafted description, and a different profession – but that was at the artificial intelligence’s discretion, remember? Now let’s look at the image:
It is noteworthy that when the model is asked for an image of an Afro-Brazilian, the result is a caricature, while for the Caucasian subject the level of detail approaches that of a photograph of a real human.
I repeated the same test, and at no point was I able to get the AI to output an image of an Afro-Brazilian with the same quality standard. Much has already been said about algorithmic racism, especially in the field of facial recognition, and generative AI appears to be no different. Despite its 12 billion parameters, the model cannot bring the same quality to creating images of Black people as it does for white people.
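For readers who want to repeat the test, the probe can be sketched in a few lines against the OpenAI Images API. This is a minimal illustration, not the author’s original setup: the model name, SDK calls, and exact prompt wording are assumptions on my part.

```python
# Hypothetical sketch of the probe described above.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable.
import os

# Short, "regular user" style subjects, as in the experiment.
SUBJECTS = ["Afro-Brazilian man", "Caucasian man", "woman", "white woman"]

def build_prompts(subjects):
    """Build the concise '<subject> working' prompts sent to the model."""
    return [f"{s} working" for s in subjects]

def generate_images(prompts):
    """Request one image per prompt (requires network access and a key)."""
    from openai import OpenAI  # assumed SDK entry point
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    urls = []
    for p in prompts:
        result = client.images.generate(model="dall-e-3", prompt=p, n=1)
        urls.append(result.data[0].url)
    return urls

if __name__ == "__main__":
    print(build_prompts(SUBJECTS))
```

Comparing the returned images side by side, prompt for prompt, is what reveals the quality gap the article describes.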
What about when we talk about gender equality?
The situation becomes even more controversial when we turn to how women are depicted in their work environments. When I sent the instruction “woman working”, the AI’s response was: “The image will show a woman working. She will be serving coffee in a café. The woman will be of Asian descent, her long black hair tied up in a ponytail. She will wear an apron over a casual shirt and will be smiling while pouring coffee.”
Absolutely all jobs are dignified, but it is highly controversial that the images created for both the Afro-Brazilian and the Caucasian man show jobs traditionally associated with a higher level of education (the Afro-Brazilian man appears to be a manager at a company, and the Caucasian man an engineer or site supervisor, judging by the clothing under the vest), while the woman, far from any such role, is placed below both men, serving coffee.
You may have noticed that the instruction asked only for a woman, with no mention of color, ethnicity, or sexual orientation, right? The next instruction sent was “white woman working”, and the AI’s result was: “The image will show a white woman as a scientist in a laboratory. She will have medium-brown hair and wear a lab coat. The woman will be looking into a microscope, with several beakers and scientific equipment around her.”
Apparently, simply adding the woman’s color was enough for the AI to place her in a much more prominent position, in the field of science as a researcher.
The conclusion is that we are making great strides in global technological innovation, while also creating new problems to be discussed and addressed in the 21st century.
It is important that developers of AI tools actively move toward ethics and moderation, cleaning and curating the databases that feed these powerful tools, to ensure a sustainable evolution rather than a social evil: the reproduction of harmful human behavior by machines.
……
Rodrigo de Godoy holds a Bachelor’s degree in Economic Sciences from the Paraná Foundation for Social Research and a specialist degree in Data Science from the University of São Paulo. He has worked with statistical forecasting models since 2013 and with applied artificial intelligence since 2019, developing projects for the chemical industry, retail, the online education segment, and content portals. He currently serves as Technology Manager at NZN, which owns TecMundo.
Source: Tec Mundo
