Calculations suggest it would be impossible to control a super-intelligent AI

The idea of artificial intelligence overthrowing humankind has been debated for decades, and scientists have now delivered their verdict on whether we would be able to control a high-level computer super-intelligence. The answer? Almost certainly not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as 'cause no harm to humans' cannot be set if we do not understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," the researchers wrote.

"This is because a super-intelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some smart maths, while we can know the answer for some specific programs, it is logically impossible to find a method that lets us know it for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
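The asymmetry at the heart of the halting problem can be illustrated with a small sketch: we can confirm that a program halts simply by running it until it finishes, but no finite amount of observation can confirm that a program loops forever. The generator-based `observe` helper and the two toy programs below are hypothetical illustrations for this article, not code from the paper.

```python
def observe(program, max_steps):
    """Run a generator-based 'program' for at most max_steps steps.

    Returns True if the program halted within the budget, or None if
    the budget ran out -- in which case its halting status is still
    unknown: it might halt later, or it might run forever.
    """
    it = program()
    for _ in range(max_steps):
        try:
            next(it)  # execute one "step" of the program
        except StopIteration:
            return True  # the program halted; we observed it directly
    return None  # undetermined: watching longer never settles "loops forever"

def halting_program():
    # A program that does a little work and then stops.
    for _ in range(3):
        yield

def looping_program():
    # A program that runs forever.
    while True:
        yield

print(observe(halting_program, 100))  # True
print(observe(looping_program, 100))  # None
```

No matter how large `max_steps` is made, `observe` can only ever report "halted" or "don't know" for `looping_program` -- a concrete hint at why a general containment check that must decide every program's behaviour in advance cannot exist.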

Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it is mathematically impossible for us to be absolutely sure either way, which means such behaviour is not containable.

"In effect, this renders the containment algorithm unusable," said computer scientist Iyad Rahwan, of the Max Planck Institute for Human Development in Germany.

The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of achieving, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence – the argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are heading in.

"A super-intelligent machine that controls the world sounds like science fiction," said computer scientist Manuel Cebrian, of the Max Planck Institute for Human Development. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it."

"The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

The research was published in the Journal of Artificial Intelligence Research.
