The far-reaching, intricate effects of artificial intelligence have no historical precedent; they are reshaping the human experience. Its scale and complexity are unprecedented, and its disruption touches every part of our lives.
This statement may seem exaggerated. Forbes, however, is known for sober insight rather than over-the-top commentary.
Before drawing any firm conclusions, we need to slow down and examine artificial intelligence carefully. It should be noted that machines lack genuine intelligence; this is not a new idea.
This “stuff” — the wires and diodes built by people — may appear alive and conscious, but it has no true intelligence. It mimics intelligent behavior, and mimicry is not the same as being alive or conscious.
In essence, computers and artificial intelligence (A.I.) are no different from any other tool, however complex and increasingly so. The risk lies not in the technology itself but in its creators and users, who put it to their own ends.
A.I. carries the political and social biases of those who created it, making it, like a gun, a useful tool and one prone to misuse. Those biases are ingrained in its algorithms and lines of code, in the very wires and diodes.
An early warning sign of A.I. misuse came when self-driving vehicles were developed for military use. Equipped with weapons such as machine guns, these vehicles are guided by computers across complicated terrain, and it is the computers that can make decisions about navigation and targeting.
Though humans normally control the decision to shoot, some scenarios push us away from this fundamental principle: a bundle of wires and diodes could select a target — perhaps a human — and carry out the act of killing. This complicates matters, because nothing is ever really that straightforward.
Flight computers aboard aircraft such as the B-2 stealth bomber are swift enough to make critical aerodynamic adjustments up to 20 times a second, something beyond any human pilot's capability.
When situations develop rapidly and the potential for collateral damage is high, decisive action may be necessary; the decision whether to fire may have to be made in an instant.
Erroneous decisions to shoot are not the only path to tragedy; “friendly fire” fatalities are one result, but a hostage situation can still prove fatal for the hostage even when shooting is avoided. The decision not to shoot can be just as deadly.
There is no undoing what has been done; we have passed the point of no return in allowing computers to make decisions that, in some cases, cannot be avoided. We are thrust into a murky unknown, full of hazards and surprises, traveling into a future that has not been mapped.
We must recognize A.I. as a useful tool, one to be employed but never relied upon for moral decisions, which only humans can make. Rather than the artificial intelligence of machines, we must call upon our natural, God-given intelligence.
To some, A.I. may seem like no big deal, but its potential cannot be ignored. We must use it responsibly and ethically to build a better future for all.