According to The Guardian, searches for the words “Palestinian”, “Palestine” or “Muslim Palestinian boy” returned stickers depicting a gun or a boy holding a gun.

At the same time, searches related to Israel returned more peaceful imagery, such as children playing games or reading books.

Representatives of the messaging app said they are aware of the issue and are working on a fix. The company emphasized that, as with any generative artificial intelligence system, the models may produce inaccurate or inappropriate results, and that it will continue to improve the feature based on user feedback.

The incident comes amid accusations that Meta, WhatsApp’s parent company, has been suppressing pro-Palestinian content and inserting the word “terrorist” into the profile bios of some Palestinian users.

Other AI systems, including Google Bard and ChatGPT, also showed significant signs of bias on issues related to Israel and Palestine.

Source: Ferra

