Medium doesn’t want the AI models behind ChatGPT or Bard to be trained on articles its users post. For this reason, the platform has announced that it will block such technologies. The measure is similar to ones already implemented by CNN, Reuters, and The New York Times.
Tony Stubblebine, the blogging platform’s CEO, argued that this responds to the need to address a critical issue: setting limits on what counts as fair use of information publicly available on the internet.
“AI companies profit from your writing without asking for your consent or offering you compensation or credit. We could ask for much more, but these ‘3 Cs’ are the bare minimum,” explained the Medium leader.
At the same time, the executive said he would do everything possible to block the use of articles published on Medium to train AI models. This position will stand until companies take steps to ensure fair use of the material. Stubblebine indicated that the ban applies at the site-wide level, and admitted that while it is far from an ideal or foolproof solution, it is the best they can do for now.
“What we need for our writers is a granular approach that works at the individual author and story level. A more robust protocol would likely look like a search engine sitemap, allowing the site to explicitly say what is available for AI training and what is not. Medium would be happy to provide tools for authors to set these permissions. But first we need some kind of standard.”
Tony Stubblebine, CEO of Medium.
Medium is up against OpenAI and other artificial intelligence companies.

In his statement, Medium’s CEO acknowledges that the new policy is, in principle, difficult to enforce. The company has decided to update its terms and conditions to prohibit the use of materials published on the platform to train AI models without prior written consent. In addition, it will add certain blocks to the site’s robots.txt file.
Despite this, Stubblebine points out that OpenAI is the only artificial intelligence company that currently makes it possible to block its web crawler, in this case GPTBot, from scraping the service’s content and using it to train tools such as ChatGPT. Other firms have so far turned a deaf ear to such requests. This doesn’t mean Sam Altman’s company is the “good guy” of the story, as it already faces a slew of lawsuits over its use of copyrighted material and personal data scraped from the web without permission.
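As an illustration of the mechanism described above (a sketch, not Medium’s actual robots.txt), blocking OpenAI’s crawler uses the standard Robots Exclusion Protocol with the `GPTBot` user-agent token that OpenAI documents for its crawler:

```txt
# Tell OpenAI's crawler (GPTBot) not to fetch any page on this site
User-agent: GPTBot
Disallow: /
```

A site could instead disallow only certain paths, but as Stubblebine notes, robots.txt has no notion of per-author or per-story permissions, and compliance is voluntary on the crawler’s part, which is why he calls it far from foolproof.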
Medium also claims that it cannot rely on copyright law to prevent AI models from being trained on the site’s articles, as the law does not cover such a use case. Moreover, since the posts belong to the users and not the platform, Medium cannot take on the legal obligation of fighting these corporations on their behalf.
In search of a coalition
The blogging site’s goal is to form a coalition with other companies in a similar situation. However, Medium argues that not all of them are ready to publicly take on giants like OpenAI, Google, or Meta. Stubblebine adds that if organizations like Wikipedia or Creative Commons got involved, they would likely achieve positive results faster.
At the moment, the platform claims it does not intend to hinder the development of artificial intelligence. But it also doesn’t want the value of what its users write to be diluted just so the technology can be used to generate spam.
What Medium is pushing for is interesting: for the first time, someone has openly called for a consent standard governing fair use of web materials for training AI models. It will be interesting to see whether this idea gains traction in the near future.
Source: Hiper Textual

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.