One of the biggest concerns around artificial intelligence is its looming use in the military sphere. Armed forces are constantly searching for new technologies that will give them an advantage on the battlefield, so AI will inevitably become a key ally. OpenAI knows it can partner with the militaries of different countries, and it does not plan to waste that opportunity.
Until a few days ago, OpenAI prohibited the use of its artificial intelligence language models in the military sphere. However, according to The Intercept, on January 10 the company updated its usage policy to remove this exclusion. This can be verified by checking the old version of its website stored in the Internet Archive.
Clearly, OpenAI decided to make this important change with as little noise as possible. Why? Because it is controversial. Sooner or later, however, it was bound to be discovered.
The aforementioned outlet contacted OpenAI for more information. The company believes that, instead of using the words “military and warfare,” it is wiser to prohibit the use of its artificial intelligence to “harm others.” The problem is that this phrase has a fairly broad meaning and, unlike the previous wording, does not explicitly rule out its use in war for various purposes.
As for the timing of this change to its policy, OpenAI says it is due to the recent launch of GPTs, the feature that lets people create their own chatbots without advanced programming knowledge. Consequently, the company found it appropriate to make clear to users that they cannot create a tool that causes harm to others.
“OpenAI’s goal is to create a set of universal principles that are easy to remember and apply, especially since our tools are now used around the world by everyday users who can now also create their own GPTs.”
OpenAI.
Do none of these arguments convince you? Us neither. The real reason for the change is something else, something you may have suspected all along.
OpenAI wants to work with military organizations
Finally, in a statement released to the press, OpenAI admitted that it is already working with the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense whose goal is to develop new technologies for military use.
“Our policy does not allow our tools to be used to harm people, develop weapons, monitor communications, injure others, or destroy property. However, there are national security use cases that align with our mission.
For example, we are already working with DARPA to develop new cybersecurity tools to protect the open source software on which critical infrastructure depends. It was not clear whether these beneficial uses were allowed under the term ‘military’ in our previous policy. The purpose of our update, therefore, is to provide clarity and the ability to have these discussions.”
OpenAI.
No doubt they got carried away with the explanation. They simply needed to acknowledge from the start that they changed their policies to be able to work with governments and win multi-million-dollar contracts. As simple as that.
Source: Hiper Textual