Superhuman Algorithms: What Could Happen if We’re Not Careful?

We all know that technology isn't just moving; it's sprinting forward at breakneck speed. But many of us don't appreciate the level of power and sophistication it has already reached, especially where artificial intelligence (AI) algorithms are involved. Researchers are increasingly worried about the risk such superhuman algorithms could pose to humanity as AI continues to evolve.

Oxford scientists are warning about the potential for artificial intelligence to exceed human capacity.

The scientists are urging the government to put laws in place to guard the public against highly sophisticated algorithms.

Sufficiently advanced AI could use "game theory" to anticipate and outmaneuver any attempt to rein in its power.

Before we plunge into the race for artificial intelligence, we should make sure we are not inadvertently racing toward our own destruction. That is the gist of a recent warning two Oxford University researchers delivered to Parliament's House of Commons.

Michael Cohen, one of the two researchers, said:

“With superhuman AI, there is a particular risk that is of a different sort of class, which is that it could kill everyone.”

Cohen illustrated the point with a dog that is taught a trick and rewarded with a treat. Then comes the telling twist: if the dog found the cupboard where the treats are kept, it could get them without any help from its human companion.

Cohen went on to say:

“If you imagine going into the woods to train a bear with a bag of treats, by selectively withholding and administering treats, depending on whether it is doing what you would like it to do, the bear would probably take the treats by force.”

Today, AI is trained much the way we train animals: through rewards. Eventually, though, it could become capable of taking control of that process and rewriting the approach its designers originally dictated.
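To make the analogy concrete, here is a minimal sketch in Python of the kind of reward loop being described (the action names and reward values are hypothetical, chosen only for illustration). The learner has no notion of what the trainer actually wants; it only tracks which actions have produced rewards, so whatever maximizes the reward signal is what it learns to do.

```python
import random

# Toy illustration of reward-based training (a bare-bones "bandit" learner).
# The agent repeatedly picks one of a few actions; only the "trick" action
# earns a treat (reward 1). Its value estimates drift toward whatever the
# reward signal favors -- the same feedback loop as treat-based dog training.

ACTIONS = ["sit", "roll_over", "do_the_trick"]          # hypothetical actions
REWARDS = {"sit": 0.0, "roll_over": 0.0, "do_the_trick": 1.0}

def train(steps=1000, epsilon=0.1, learning_rate=0.1):
    # The agent's running estimate of each action's payoff.
    values = {action: 0.0 for action in ACTIONS}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-looking action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        reward = REWARDS[action]  # the trainer's feedback signal
        # Nudge the estimate toward the observed reward.
        values[action] += learning_rate * (reward - values[action])
    return values

if __name__ == "__main__":
    print(train())  # the "do_the_trick" estimate converges toward 1.0
```

The worry Cohen describes is that a far more capable agent pursuing that same signal would do so by any means available, including, as in his analogy, raiding the cupboard itself.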

Cohen continued:

“If you have something much smarter than us monomaniacally trying to get this positive feedback, however we have encoded it, and it has taken over the world to secure that, it would direct as much energy as it could toward securing its hold on that, and that would leave us without any energy for ourselves.”

Cohen believes it is possible to establish international norms that safeguard humanity: regulations that steer development away from dangerous classes of algorithms and toward designs with beneficial characteristics.

Cohen added:

“Imagine that there was a button on Mars labeled ‘geopolitical dominance,’ but actually, if you pressed it, it killed everyone.”

“If everyone understands that there is no space race for it, if we as an international community can get on the same page … I think we can craft regulation that targets the dangerous designs of AI while leaving extraordinary economic value on the table through safer algorithms.”

AI hasn't yet reached the point of causing alarm. But once it can carry out our requests better than we can ourselves, its abilities will be superhuman, and at that point reversing course may no longer be an option.

Cohen went on to say:

“If your life depended on beating an AI at chess, you would not be happy about that.”


In time, algorithms will become more and more accurate in their predictions, and we may not be able to control them. As these superhuman algorithms get smarter, they could one day pose a threat to humanity as a whole. We must be careful with how we use and develop such technology. What are your thoughts on this issue? Do you think algorithms could eventually kill us all? Let us know in the comments below.

Source: Popular Mechanics
