Last week, the announcement of a partnership between the Los Alamos National Laboratory and the company OpenAI to use artificial intelligence (AI) in nuclear weapons safety, logically enough, caused a stir. It is hard not to see it as a case of the fox guarding the henhouse. But is it really so dangerous? And, more importantly, is it what the headlines suggest?
The reality is not as apocalyptic as it may sound in our heads. OpenAI published a statement on January 30 announcing the work it will carry out at Los Alamos. Most of it has nothing to do with nuclear weapons. Even so, the news put on the table the fear that AI really is being used to control them.
In fact, this is not a new fear. The United Nations has already expressed its commitment to having autonomous weapons banned by 2026; that is, weapons that act without human intervention. This does not apply only to nuclear weapons, but also, for example, to missiles designed to launch when they detect a possible threat.
We know that artificial intelligence is not infallible. It can be wrong, precisely because it does not contemplate the possibility of being wrong. AI is that friend we all have who, when they don't know something, simply makes it up. The problem is that, when it comes to controlling weapons, this can be very dangerous. That is why no one dares to do it. Unfortunately, not out of consideration for their rivals, but because it could also turn against them.
What will OpenAI do at Los Alamos?
The Los Alamos laboratory is known as the place where the first atomic bomb was created: the headquarters where the famous Oppenheimer made history, and later came to regret it.
Today it is a laboratory specializing in national security, science, energy and environmental management. It carries out work for the Department of Defense as well as for the intelligence community and the Department of Homeland Security. But its activity goes far beyond nuclear weapons. It studies new materials, algorithms to accelerate scientific forecasting, and endless applications of nuclear energy, ranging from electricity generation to medicine.
In its newly published statement, OpenAI said that the missions to be carried out in cooperation with this laboratory will be mainly six:
- “Accelerate the basic science that underpins the United States' global technological leadership.”
- “Identify new approaches to treating and preventing disease.”
- “Usher in a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure.”
- “Improve US security through better detection of natural and man-made threats, such as biological and cyber threats, before they emerge.”
- “Deepen our understanding of the forces that govern the universe, from fundamental mathematics to high-energy physics.”
- “Improve cybersecurity and protect the American power grid.”
As for nuclear weapons, they indicate that there will be a “careful and selective review of OpenAI's AI models by security researchers with security clearances.” They also mention other very dangerous types of weapons, such as biological weapons: “We are working closely together to evaluate the risks that more advanced models may pose in the creation of biological weapons.”
In short, they are not going to put artificial intelligence to work on its own, nor let it decide whether or not nuclear weapons are fired. They will simply use it to study the best safety methods, always under human supervision.
Artificial intelligence, nuclear weapons and energy
To discuss this topic, at Hipertextual we contacted Francisco Herrera, professor of Computer Science and Artificial Intelligence at the University of Granada, director of the DaSCI research institute and member of the Royal Academy of Engineering. He collaborated with us as part of the strategic project “Ethical, Responsible and General-Purpose Artificial Intelligence for Cybersecurity” IAFER-Cib (C074/23), the result of the cooperation agreement signed between the National Cybersecurity Institute (INCIBE) and the University of Granada. This initiative is carried out within the framework of the Recovery, Transformation and Resilience funds, financed by the European Union (Next Generation EU).
The first thing the expert makes clear is that we must distinguish between a decision and a recommendation, since in any scenario that involves risk to people, AI should never make decisions. It only gives recommendations, and it is a person who makes the decision.
“If there is a high risk to people, the system must be verified and held accountable. In the United States they speak of safety and security, where safety refers to behavior that is safe for people and security to protection against attacks. In Spanish, these are usually translated as protección and seguridad.”

Francisco Herrera, professor of Computer Science and Artificial Intelligence.
Therefore, any AI system aimed at situations in which a person could be harmed must be verified to make sure it is safe in both senses. This is very well regulated in Europe, and also in the United States, where the Los Alamos laboratory is located.
Logically, everything involving nuclear energy, and especially nuclear weapons, poses a high risk to humans. In such cases, AI should never decide, only recommend. It would, on the other hand, be viable for it to make decisions in situations such as booking appointments at a hairdresser's, where the worst an artificial intelligence error (because yes, they can fail) could do is “tell a client that there are no free appointments when there are.” That puts no one in danger.
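To make the distinction concrete, here is a minimal sketch of this human-in-the-loop pattern. It is our own illustration, not code from OpenAI or Los Alamos, and the function and threshold names are invented: the model (here a trivial stand-in) only ever returns a recommendation, and nothing is executed until a person explicitly approves it.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """What the model is allowed to produce: advice, never an action."""
    action: str
    confidence: float
    rationale: str

def model_recommend(sensor_reading: float) -> Recommendation:
    # Hypothetical stand-in for a real model: flags anomalous readings.
    if sensor_reading > 0.9:
        return Recommendation("inspect_site", 0.97, "reading far above baseline")
    return Recommendation("no_action", 0.99, "reading within normal range")

def execute(action: str) -> None:
    print(f"Executing: {action}")

def human_in_the_loop(sensor_reading: float) -> None:
    rec = model_recommend(sensor_reading)
    print(f"AI recommends '{rec.action}' ({rec.confidence:.0%}): {rec.rationale}")
    # Key point: the system never acts on the recommendation by itself.
    # A person must explicitly approve it before anything happens.
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(rec.action)
    else:
        print("Recommendation rejected; nothing is executed.")

if __name__ == "__main__":
    human_in_the_loop(0.95)
```

The design choice is simply that `execute` is never reachable from the model's output alone; removing the confirmation step is what would turn a recommender into an autonomous system.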

Should we worry about Trump’s arrival?
Donald Trump has already begun withdrawing the United States from the World Health Organization and from the Paris Agreement against climate change. He has also launched an energy emergency plan that removes all restrictions on drilling oil wells in protected areas. These are just a few examples of the decisions by Joe Biden that he has reversed since the start of his term. His intention is to undo everything that could be linked to the previous president.
This, as Herrera explained to us, also includes the executive order on AI. Joe Biden issued one in 2023, tied to the need to ensure that algorithms are reliable and safe. Trump has now revoked that order, but has not yet revealed his own intentions. “There is uncertainty about what the United States will do now, so they will have to say what their working model in this area will be,” says the professor we consulted. “But this does not mean they will do what could not be done before. There are no AI systems applied to all these areas.”

Nuclear weapons and other autonomous weapons: the UN's stance
The UN recently stated its intention to ban autonomous weapons by 2026. This is a way of anticipating what could happen, given the pace of advances in artificial intelligence. But it does not mean that such weapons are already in use.
“Autonomous weapons would be able to make decisions about firing or attacking on their own,” explains Herrera. “Right now, we do not know of any that exist.” Even so, the expert tells us about the closest thing that exists today. He points, for example, to the case of the Gaza Strip, where the Israeli army has an algorithm for detecting Hamas terrorists, trained on images of the terrorists' faces. When the algorithm detects one of these faces, it raises an alert, and a human decides whether to shoot. It is estimated that 9 out of 10 times the algorithm is right. That tenth person may simply be someone who looks like a terrorist but is not. Even so, the army decides to shoot.
In this case, we are not dealing with an autonomous weapon. That tenth person may perfectly well be an Israeli. Perhaps that is why the army does not dare to leave everything in the hands of AI. But, by the looks of it, neither does it wait until it is 100% certain that the person is a terrorist. It shoots, without further ado.
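To put that 9-out-of-10 figure in perspective, a back-of-the-envelope calculation helps. The alert volume below is an invented number, purely for illustration; only the 90% estimate comes from the article.

```python
# Rough illustration of what "right 9 out of 10 times" means at scale.
precision = 0.9   # the article's estimate: 90% of alerts hit the right person
alerts = 1_000    # hypothetical number of alerts raised over time (our assumption)

false_positives = alerts * (1 - precision)
print(f"Out of {alerts} alerts, about {false_positives:.0f} point at the wrong person.")
# -> Out of 1000 alerts, about 100 point at the wrong person.
```

In other words, an accuracy that sounds high still produces a steady stream of misidentified people once the system is used at scale, which is precisely why a human is kept in the loop.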
In this case, the terror is sown by humans, not by AI. The same has happened with nuclear weapons throughout history. It is true that leaving them in the hands of these new technologies, without any supervision, could be very dangerous. But fortunately that has not yet happened, and everything possible is being done to regulate it so that it never does.
Source: Hipertextual
