OpenAI, the lab responsible for ChatGPT, announced on Tuesday (31st) the launch of a mechanism to distinguish AI-generated text from human-written text. The release follows debate over uses of ChatGPT, including in academic work and even malware creation.
According to the company, it is very difficult to distinguish AI-generated text with 100% certainty. However, tools such as the new identifier can be useful for flagging automated disinformation campaigns, for example.
Classifier accuracy
The company is still training the classifier to identify AI-generated content and admits it is “not entirely reliable”. In its latest tests on English texts, the tool correctly identified only 26% of AI-written texts.
The tool also mislabeled 9% of human-written texts as AI-generated. The company cautions, however, that the solution should improve over time.
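To see what these two rates mean in combination, here is an illustrative calculation (not OpenAI code). The split between AI-written and human-written texts is an assumption chosen for the example:

```python
# Illustrative arithmetic: how the reported 26% true-positive rate and
# 9% false-positive rate combine under an assumed mix of texts.
# Assumption: an evaluation set of 500 AI-written and 500 human-written texts.

def classifier_outcomes(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Return (true positives, false positives, precision) for the given mix."""
    tp = n_ai * tpr        # AI texts correctly flagged as AI (26%)
    fp = n_human * fpr     # human texts wrongly flagged as AI (9%)
    precision = tp / (tp + fp)
    return tp, fp, precision

tp, fp, precision = classifier_outcomes(n_ai=500, n_human=500)
print(f"AI texts flagged: {tp:.0f}")          # 130 of the 500 AI texts
print(f"Human texts flagged: {fp:.0f}")       # 45 of the 500 human texts
print(f"Precision of an 'AI' label: {precision:.0%}")
```

Under these assumed numbers, roughly one in four flagged texts would be a mislabeled human text, which illustrates why the company discourages treating the label as definitive.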
According to OpenAI’s official blog post, the classifier is less reliable on texts shorter than one thousand characters, and even longer texts are sometimes mislabeled. The company also recommends using it only with English texts, since that is the language in which it performs best.
“The longer the input text, the higher the reliability of our classifier,” the company explains. “Compared to our previously published classifier, this new classifier is significantly more reliable on texts from newer AI systems.”
Available for public testing
OpenAI announced that it is making the classifier publicly available for testing in order to gather feedback. The company also believes that broader use of the tool by different users could make it more effective.
However, the company cautions that, given its current limitations, the tool should not be considered 100% reliable. For example, OpenAI recommends initially using it only on texts written in English.
Source: Tec Mundo
