In recent months, the artificial intelligence (AI) industry has attracted the attention of millions and sparked debate about the ethical issues surrounding the technology. In a study published in the Journal of Artificial Intelligence Research, the researchers explain that current data suggest it is impossible to control an artificial superintelligence.
Many people fear that artificial intelligence could trigger some kind of apocalypse and cause the extinction of humanity. According to the scientists, this could happen in one particular scenario: if AI reaches the level of artificial superintelligence. The problem, according to the research, is that such a system would likely act on its own, and experts would not be able to control it.
For example, if a superintelligence tried to destroy the world and scientists developed a program to stop it, the super-AI could intervene and prevent that software from being used. To stop it from destroying the world, it would first be necessary to create a simulation of the superintelligence, but humans would probably be unable to create a simulation as advanced as the artificial intelligence itself.
“A superintelligence represents a fundamentally different problem from those normally studied under the heading of ‘robotic ethics’. This is because a superintelligence is versatile and therefore has the potential to mobilize a variety of resources to achieve goals that are potentially incomprehensible to humans, let alone controllable,” explains the study, published in 2021.
The problem is that, since this type of artificial intelligence would operate at a level above human intelligence, it would probably not be possible to impose limits and rules on it, such as "do no harm to humans". Part of this idea was inspired by the reasoning of Alan Turing, considered the 'father of computing', who showed that it is logically impossible to predict the outcome of every program, let alone the behavior of an AI at that level.
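Turing's argument can be illustrated with a short diagonalization sketch. This is not code from the study, just a toy Python illustration under the assumption that someone hands us a claimed halting-decider `halts`; the names `build_counterexample` and `g` are hypothetical, and an actual infinite loop is replaced by a string label so the example can run:

```python
def build_counterexample(halts):
    """Given any claimed halting-decider `halts`, construct a program g
    that the decider misjudges (Turing's diagonal argument)."""
    def g():
        # g consults the decider about itself and does the opposite.
        if halts(g):
            return "loops forever"  # stand-in label for an infinite loop
        else:
            return "halts"
    return g

# Whatever the decider predicts, g does the opposite of the prediction:
g1 = build_counterexample(lambda prog: True)   # decider: "g1 halts"
assert g1() == "loops forever"                 # ...but g1 would loop

g2 = build_counterexample(lambda prog: False)  # decider: "g2 loops"
assert g2() == "halts"                         # ...but g2 halts
```

The point the researchers build on is the same: no general procedure can predict (and hence reliably constrain) the behavior of an arbitrary program, which a superintelligence would effectively be.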
For the researchers, one alternative to keep super-AIs from destroying the world is to limit their capabilities, since it will not be possible to properly teach them ethics. One such possibility is to significantly limit what the AI can learn by disconnecting it from the internet. (Currently, ChatGPT is not directly connected to the internet.)
In early 2023, fears about AI led tech figures such as Elon Musk and Steve Wozniak to voice their concerns and sign the open letter "Pause Giant AI Experiments". The goal was to ask experts to pause all development of large AI systems for at least six months so that the possibilities of the technology could be explored more carefully.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter states. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Source: Tec Mundo
I’m Blaine Morgan, an experienced journalist and writer with over 8 years of experience in the tech industry. My expertise lies in writing about technology news and trends, covering everything from cutting-edge gadgets to emerging software developments. I’ve written for several leading publications including Gadget Onus where I am an author.