Left to act on their own, AIs can be very dangerous. In fact, years ago an artificial intelligence ethicist gave an example in which an algorithm designed to produce the maximum possible number of paper clips could end up destroying humanity. Even so, researchers in China wanted to experiment by putting an AI in control of one of their Earth observation satellites. And what they found, if it is what they suspect, is appalling.
Initially, they thought the artificial intelligence was turning its attention to random spots on the planet, without much interest. However, when they examined the selected areas, they found that all of them had a history of conflict with China. It could be, then, that the AI was scouting for a confrontation.
This, too, may be a coincidence, but there is reason to fear that it is not. This experiment, like the ethicist's theoretical example, highlights the importance of setting specific goals for artificial intelligence. If AIs are allowed to act without restraint, it seems that people are not among their priorities.
Artificial intelligence at the helm of the satellite
The experiment is described in an article in the South China Morning Post. The researchers reveal how, even though it went against their mission, they decided to leave an artificial intelligence in control of one of their satellites, Qimingxing 1, for 24 hours. It was given no instructions or tasks; they simply left it free to see what it would focus on.
The targets it chose caught their attention. It first focused on Patna, a large city in India on the Ganges River. It kept its observation there for a long time and then, after a new scan, moved on to the Japanese port of Osaka.
These could be random places. However, a cursory glance at history is enough to suspect that they are not.
The AI's war targets
For decades, China and India have had a border dispute over the Galwan Valley, next to Tibet. Although the area was declared part of India when the country belonged to the United Kingdom, China never accepted this.
The disagreement sparked a long-standing conflict, though it did not boil over until 2020, when the first soldiers died on the border. At the very beginning of the COVID-19 pandemic, India denounced the death of 20 of its soldiers in the Kashmir region at the hands of the Chinese army.
Patna is not located in the Galwan Valley itself, but in the northeast of the country, not far from Tibet. In addition, one of the dead soldiers, Sunil Kumar, came from this city. The AI's search for information could therefore have singled it out as a war target.
As for Osaka, its port is known to occasionally receive ships from the US Navy operating in the Pacific.
All this suggests that the artificial intelligence's choice of these two points was perhaps not random. It may have been selecting war targets based on historical information from its country. With no mission in front of it, the algorithm could only focus on its owner: the Chinese government. The ethicist had already warned us: AI does not know how to protect humanity; it only understands who it is working for, and that is the only thing it will focus on. Therefore, if we are not looking for conflict, we must make its job clear to it. Artificial intelligence can be very useful, but only as long as nothing is left to chance.
Source: Hiper Textual
