The American company OpenAI is testing AI-based systems to identify and remove unwanted content. According to Bloomberg, the GPT-4 model can do six months' worth of moderation work in a single day.

OpenAI tests AI for content moderation

The report notes that the moderation process should not be fully automated, since using technology to interpret the nuances of human writing can be challenging.

Instead, OpenAI proposes using AI to free up staff time for more complex moderation tasks.

  • In early August, OpenAI introduced GPTBot, a web crawler that gathers information from the Internet to train the company's new language models. It will collect only public data, filtering out sources that gather personal information or violate the service's rules. The bot will also not have access to paywalled content.


Karina Pardaeva

Source: RB


