Computer scientists: We would not be able to control superintelligent machines

We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more and more progress is being made in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of uncontrolled AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would be impossible to control a superintelligent AI. The study was published in the Journal of Artificial Intelligence Research.

Suppose someone were to program an AI system with intelligence superior to that of humans, so that it could learn independently. Connected to the internet, the AI would have access to all of humanity's data. It could replace all existing programs and take control of all machines online worldwide. Would this lead to utopia or dystopia? Would the AI cure cancer, bring world peace, and prevent a climate catastrophe? Or would it destroy humanity and take over the Earth?

Computer scientists and philosophers have asked themselves whether we could even control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a superintelligent AI.

“A superintelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian of the Center for Humans and Machines at the Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI could be controlled. On the one hand, the capabilities of a superintelligent AI could be specifically limited, for example by cutting it off from the internet and all other technical devices so that it has no contact with the outside world; but this would render the superintelligent AI significantly less powerful, and less able to answer the questions humanity wants solved. Lacking that option, the AI could be motivated from the outset to pursue only goals that are in the best interest of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling a superintelligent AI have their limits.

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by first simulating the behavior of the AI and halting it if its behavior is considered harmful. But careful analysis shows that, in our current paradigm of computing, such an algorithm cannot be built.
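The containment idea analyzed in the study can be sketched as a simulate-then-decide wrapper. The sketch below is illustrative, not the paper's code: the function and variable names are invented here, and the crucial simplification is the finite step budget. A true containment algorithm would have to decide harmfulness over an *unbounded* simulation, which is exactly where the halting-problem barrier arises.

```python
def contain(ai_step, state, max_steps=1000):
    """Toy containment wrapper (illustrative, not from the paper):
    simulate the AI one step at a time and stop it the moment the
    simulation reaches a state flagged as harmful.

    Only a finite budget of steps can actually be checked; deciding
    harm over unbounded runs is what the study proves incomputable.
    """
    for _ in range(max_steps):
        state, harmful = ai_step(state)
        if harmful:
            return "contained", state   # halted before acting in the world
    return "allowed", state             # no harm observed within the budget


# Toy "AI": counts upward and becomes harmful once it reaches 5.
toy_ai = lambda n: (n + 1, n + 1 >= 5)
print(contain(toy_ai, 0))  # → ('contained', 5)
```

The gap between this bounded check and the unbounded guarantee the containment problem demands is the heart of the impossibility result.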

“If you break the problem down into basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable, i.e. no single algorithm can find a solution for determining whether an AI would harm the world. Moreover, the researchers show that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.
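The incomputability argument follows the same diagonalization pattern as Turing's proof that the halting problem is undecidable. A minimal sketch of that pattern (not the paper's formal construction; `would_harm` and `make_paradox` are hypothetical names): given any claimed harm predictor, one can build a program that does the opposite of whatever the predictor says about it, so no total, always-correct predictor can exist.

```python
def make_paradox(would_harm):
    """Given any claimed predictor would_harm(program) -> bool,
    construct a program that harms exactly when the predictor
    says it will not, defeating the predictor on itself."""
    def paradox():
        if would_harm(paradox):
            return "behaves safely"   # predicted harmful -> acts safely
        return "does harm"            # predicted safe -> does harm
    return paradox


# Whatever the predictor answers on this program, it is wrong:
predict_safe = lambda prog: False
p = make_paradox(predict_safe)
print(p())  # → 'does harm' (predicted safe, yet it harms)
```

The same flip happens for a predictor that always answers "harmful": the constructed program then behaves safely, so the verdict is again wrong.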

###

The study “Superintelligence Cannot be Contained: Lessons from Computability Theory” was published in the Journal of Artificial Intelligence Research. Further authors of the study include Andres Abeliuk from the University of Southern California, Manuel Alfonseca from the Autonomous University of Madrid, Antonio Fernandez Anta from the IMDEA Networks Institute, and Lorenzo Coviello.

