Many expected that this year's Nobel Prize in Physics might go to artificial intelligence. And so it did. The award went to John Hopfield and Geoffrey Hinton for their foundational discoveries in the field of neural networks. Both played important roles, although the latter was at the forefront of the field for many years. So much so that he has at times regretted his discoveries because of the risks they may pose to humanity.
There are many interviews in which Hinton has stated that he regrets the direction artificial intelligence has taken over the years. In one of the most recent, published in XL Semanal, he says he feels neither guilt nor remorse. He knows that if he had not been responsible for the discovery, someone else would have made it later. However, he is concerned about what might happen if AI is not properly regulated.
Artificial intelligence (AI) is an astonishing tool. It can make our lives infinitely easier, but it could also cut them short. It is this dual potential that makes it so interesting, and so deserving of the Nobel Prize in Physics. But why were Hopfield and Hinton the lucky ones? What risks is Hinton so concerned about? We talked about all this and much more with Francisco Herrera, director of the Andalusian Interuniversity Institute of Data Science and Artificial Intelligence (DaSCI).
The first steps of a science that would change the world
Hopfield and Hinton's research was conducted in the 1980s. However, to witness the birth of AI we have to go back to the 1950s, when computer scientist John McCarthy coined the term "AI" to refer to the science and technology of building artifacts that behave intelligently in tasks that would require intelligence if performed by people. Meanwhile, psychologist Frank Rosenblatt developed the perceptron model in 1958, building on the ideas of McCulloch and Pitts from 1943. That is when people first began to talk seriously about building neural networks as computational models.
With this model, an attempt was made for the first time to build a set of connections, implemented as electrical resistances, that functioned like neurons and their synapses. Rosenblatt managed to build a single layer of artificial neurons. Unfortunately, the model did not go very far, because it never developed the capabilities expected of it. As Herrera explains, "it had no way to achieve powerful cognitive learning abilities."
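The single-layer idea is easy to sketch in a few lines of code. What follows is a minimal illustrative sketch in modern Python, not Rosenblatt's original hardware (which used adjustable resistors): a perceptron learning the logical AND function with the classic weight-update rule.

```python
# Minimal single-layer perceptron (illustrative sketch, not Rosenblatt's hardware).
# It learns the logical AND function with the classic perceptron update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input "synapse"
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: the "neuron" fires if the weighted sum is positive
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights towards the correct answer
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for x, target in AND:
    print(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
```

A single layer like this can only learn linearly separable functions (AND works; XOR famously does not), which is precisely the limitation Herrera alludes to.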
In the 1970s, given the limitations of the perceptron model, the field entered the so-called artificial intelligence winter. The approach had no clear way forward and practically fell into disuse. It was against this backdrop that Hopfield and Hinton made the revolutionary discoveries for which the Nobel Prize has just been awarded.
The spring of artificial intelligence arrives
In fact, AI did not fall completely out of use during the AI winter. What survived is known as symbolic AI. However, it was based not on neural networks capable of learning, but on rules governed by logic, which, logically enough, greatly limited its use. The arrival of Hinton and Hopfield was therefore needed to turn the tide.
Hinton was the father of the so-called backpropagation learning algorithm. Thanks to it, several layers of neurons could be combined, trained, and made to learn. The perceptron only allowed learning in a single layer, so this was a revolutionary achievement.
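The core of backpropagation is the chain rule: the error at the output is pushed backwards, layer by layer, to compute how every weight should change. The following is a deliberately tiny sketch (one weight per layer, sigmoid units, squared error; not Hinton's original formulation) that computes these gradients and verifies them against numerical differentiation.

```python
import math

# Tiny two-layer network: the point is the chain rule, not the architecture.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)   # hidden layer
    y = sigmoid(w2 * h)   # output layer
    return h, y

def backprop(x, target, w1, w2):
    """Gradients of the squared error 0.5*(y - target)**2 via the chain rule."""
    h, y = forward(x, w1, w2)
    # Error signal at the output...
    delta_out = (y - target) * y * (1 - y)
    grad_w2 = delta_out * h
    # ...propagated backwards through w2 to the hidden layer
    delta_hidden = delta_out * w2 * h * (1 - h)
    grad_w1 = delta_hidden * x
    return grad_w1, grad_w2

def loss(x, t, w1, w2):
    return 0.5 * (forward(x, w1, w2)[1] - t) ** 2

# Sanity check against a numerical gradient
x, t, w1, w2 = 1.0, 0.0, 0.5, -0.3
g1, g2 = backprop(x, t, w1, w2)
eps = 1e-6
num_g1 = (loss(x, t, w1 + eps, w2) - loss(x, t, w1 - eps, w2)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-7)  # backprop agrees with numerical differentiation
```

With the gradients in hand, each weight is nudged a small step in the opposite direction, which is exactly the multi-layer training the perceptron could not do.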
Meanwhile, Hopfield published a very important article in PNAS proposing a model that introduced recurrent artificial networks, networks capable of feeding their outputs back into their own reasoning. Together with the learning capabilities of Hinton's models, these are the two pillars of today's deep learning. "They are the ones who broke the AI winter with their results, because connectionist AI was at a dead end."
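That recurrent behaviour makes a Hopfield network act as an associative memory: shown a corrupted pattern, its update dynamics pull the state back towards a stored one. This is an illustrative toy in modern Python (a standard textbook reading of the model, not code from the PNAS article): one pattern is stored with a Hebbian rule and then recovered from a noisy copy.

```python
# Minimal Hopfield network (illustrative sketch): store one pattern with the
# Hebbian rule and recover it from a corrupted copy by repeated updates.

def train(patterns, n):
    # Hebbian learning: neurons that fire together wire together
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    # Recurrent dynamics: each neuron repeatedly looks at all the others
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1]     # pattern encoded as +1/-1 neurons
W = train([stored], len(stored))
noisy = [-1, 1, -1, -1, 1, -1]     # one neuron flipped
print(recall(W, noisy) == stored)  # the network falls back to the stored memory
```

Each update lowers an energy function, so stored patterns behave like valleys the network rolls into, which is the physical analogy the Nobel Committee highlighted.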
At this point, although symbolic AI continued to be used throughout the 1980s, with expert systems as the reference AI technology, it gradually gave way to all the deep learning models that would come later.
Deep learning wins the game
Both symbolic AI and deep learning have won plenty of games. Literally.
In 1996 and 1997, Deep Blue, an algorithm based on symbolic AI, played several chess matches against the then world champion, Garry Kasparov. In the first match, the AI managed to win only 1 game out of 6; Kasparov won 3 and drew 2 more. However, after the algorithm was improved in 1997, Deep Blue won two games and drew three. Kasparov managed to win only one.
This was already an important milestone, but the situation became even more revolutionary with the advent of models along the lines of Hinton and Hopfield's work. "There are now neural network models that have made enormous leaps in chess and game theory," says Herrera. "AlphaZero, for example."

A question inevitably arises here: who would win a chess game between Deep Blue and AlphaZero? The artificial intelligence expert interviewed by this outlet is clear: AlphaZero.
In fact, Herrera tells us that there are many young chess players in India today who train using artificial intelligence modules, and at just 16 or 17 years old they are already true champions. "They play just like the artificial intelligence," says the DaSCI director. "They play a very aggressive game, always playing to win. They are not looking for draws."
Lights and shadows of artificial intelligence
Winning chess games is, of course, not artificial intelligence's only role. It is simply a good example of how AI managed first to equal and then to surpass human capabilities. Today its potential extends much further.
During the announcement of the Nobel Prize in Physics, the chair of the Nobel Committee noted that one of the applications of artificial intelligence is the discovery of new materials.
This is possible thanks to AI's ability to explore all the possible options and select the candidates with the best properties for a given purpose. Inorganic crystalline materials, for example, have a huge number of applications, especially in electronics. However, synthesizing them can be very expensive, and the results do not always meet expectations. A given crystal might improve battery performance, but turn out in practice to be very fragile and break easily.

That is why artificial intelligence algorithms have been developed that rapidly screen all the candidates for suitable properties. This way, scientists can focus on synthesizing only the promising ones and avoid wasting time on those that would ultimately prove useless.
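The screening step itself can be pictured very simply. In this toy sketch, the candidate names and property scores are entirely hypothetical (a real pipeline would get the scores from a trained predictive model): candidates are filtered against property thresholds and ranked, so only the shortlist reaches the lab.

```python
# Toy computational-screening sketch (hypothetical data, not a real
# materials model): filter candidates by predicted properties and rank
# the survivors so only the most promising ones are synthesized.

candidates = {
    # hypothetical crystal -> (predicted conductivity, predicted fragility)
    "crystal_A": (0.92, 0.10),
    "crystal_B": (0.88, 0.75),  # good conductor, but predicted too fragile
    "crystal_C": (0.40, 0.05),  # robust, but a poor conductor
    "crystal_D": (0.95, 0.20),
}

def shortlist(cands, min_conductivity=0.8, max_fragility=0.3, top_k=2):
    ok = [(name, props) for name, props in cands.items()
          if props[0] >= min_conductivity and props[1] <= max_fragility]
    # Rank the survivors by predicted conductivity, best first
    ok.sort(key=lambda item: item[1][0], reverse=True)
    return [name for name, _ in ok[:top_k]]

print(shortlist(candidates))  # only the promising candidates reach the lab
```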
Something similar is done with substances that could serve as active ingredients in medications. Drugs that are already on the market can even be analyzed for benefits against diseases very different from those for which they were developed. For example, AI is capable of finding drugs with bactericidal activity that could be used to combat antibiotic-resistant bacteria. Because these drugs are already on the market, the bureaucracy and clinical testing can move much faster.
Also notable in biology is DeepMind's AlphaFold system, which predicts the three-dimensional structure of proteins, accelerating progress in medicine and biotechnology. Demis Hassabis, CEO of DeepMind, won the Nobel Prize in Chemistry (along with John Jumper and David Baker) for unlocking the secrets of proteins using artificial intelligence and computation.
And of course, artificial intelligence has applications in countless other areas. Always, however, under human supervision. If that supervision is not done properly, the light can give way entirely to shadow.
When AI takes control
In 2014, the philosopher Nick Bostrom explained with a simple example why artificial intelligence could become very dangerous.
Imagine we have an AI designed to make as many paper clips as possible. No other instructions are given; only that one. The AI will look for the most efficient way to produce the largest possible number of paper clips, as requested.

The time may come when the AI perceives people as interference: at any moment, someone could switch off the machine and prevent it from completing its task. Moreover, human bodies are made of atoms that could be used to make new paper clips. The "solution" would be to destroy them.
Logically, this is an extreme example. However, it is not too far from what experts, including Geoffrey Hinton and Francisco Herrera himself, have warned about.
"One of the most important areas in artificial intelligence right now is seguridad, a word with two translations into English: security, protection against external attacks, and safety, safe behavior towards users," Herrera explains. "We talk about security in a double sense: that the system is not attacked, and that it does not attack either." Only in this way can the necessary alignment with human beings be achieved.
Why is this alignment so important? Herrera explains it to us very simply.
"When an AI works towards a goal, it may take a path to achieve it that is not ethical by human standards." It will look for the most effective way to reach that goal, even if this means taking actions that a person would not consider ethical.
To prevent this from happening, there must always be a person at the controls. "Within European regulation there is already a fundamental requirement of human oversight," says the artificial intelligence expert. "An intelligent system should never decide autonomously on decisions that affect fundamental human rights and safety."
Hinton himself also highlights this in his interview with XL Semanal. AI cannot be given free rein, because it is impossible to predict what it will do. This is how the Nobel Prize winner in physics explained it:
"Imagine a leaf falling from a tree. We know it descends in small arcs towards the ground, but no one can predict exactly where it will fall or which side up. There are too many variables at play: there could be a gust of wind, another leaf, a dog. Whatever. Exactly the same is true of modern AI: it weighs its answers based on the analogies it makes. There are no rules. Just as we will never be able to know exactly where a falling leaf will land, we will never be able to explain why an AI makes the decisions it does."
Geoffrey Hinton, Nobel Prize winner in physics
An artificial intelligence so human-like that it could plunge us into the shadows has not yet arrived. However, Hinton believes that at this rate we could get there within 5 years, 20 at most.
We had better be forewarned. As Spider-Man's uncle said, with great power comes great responsibility. AI is one of the greatest powers ever placed in human hands. We had better be responsible when we use it.
Source: Hipertextual
