Why Artificial Intelligence Remains A Dangerous Myth

Elon Musk and other tech leaders have issued dire warnings and urged a six-month moratorium on artificial intelligence (AI) research, arguing that the technology is advancing so rapidly that it poses existential risks.

Despite the widely held belief that true artificial intelligence is at hand, or at least possible in the future, some researchers disagree. The supposed universal agreement on this point is nothing more than a myth.

In an essay published in a Nature journal, Ragnar Fjelland, emeritus professor at the Centre for the Study of the Sciences and the Humanities at the University of Bergen, cautioned against accepting the promise of AI too readily. Entitled “Why general artificial intelligence will not be realized,” his essay is dense but worth reading.

Humans are often guilty of overestimating technology while underestimating their own intellect. That tendency to undervalue human capability stretches back to Plato’s teachings, and Fjelland echoes it in his work.


The mid-20th-century mathematician Alan Turing was at the forefront of the debate over AI. His most famous proposal, the ‘Turing Test,’ measures how well a computer can fool a human being into thinking they are conversing with another human.

By that standard, the test of whether a computer is engaging in human-style intelligence has more or less been passed; as a standard, however, it is deeply inadequate.

There is no agreement on what a human dream is or why dreams occur, so there is no way to say whether a computer could ever experience one. Likewise, no AI can spontaneously tell a humorous story, slip in a Freudian blunder, and then catch itself and reflect on the mistake. These higher levels of human intelligence remain beyond today’s computers.

Furthermore, much of human knowledge and intelligence is implicit rather than explicitly conveyed. As Fjelland explains, nearly everyone learns to walk competently without formal instruction, yet very few could explain through mathematics and physics how it is done.

Much of human intelligence lies not in the information we hold but in our ability to apply it through lived experience of the physical world, not through pure mental exercise. That kind of experiential knowledge is something we have barely begun to explore and understand.

One reason we rarely hear these questions asked is that the people most often consulted about whether AI is real or attainable are AI experts themselves. Naturally, those who have built careers in the field tend to believe that artificial intelligence is real and achievable.

Many careers and a great deal of funding depend on that belief. This does not mean these experts are wrong; it means the debate turns on questions of definition in which other parties, such as philosophers and theologians, also have a stake.

None of this implies that machine learning will not have substantial, even dangerous, consequences for society. If self-driving vehicles displace the world’s many millions of truck drivers, that alone would be a major disruption.

But technology has been displacing human work since the invention of the plow in ancient times. Self-driving vehicles are no exception, and they do not require artificial intelligence in any deep sense.

The deeper questions about creativity and intuition matter far more than the humorous pieces churned out by ChatGPT. We should not expect a replacement for William Faulkner or James Joyce in the imminent future, or perhaps ever; such achievements are likely beyond reach.

The output of these systems also bears a notable human fingerprint, most vividly in the politically biased responses they give to various prompts.

AI does pose a danger, but not in the way many think. The far greater hazard is human beings devaluing their own intelligence. AI does not operate the way the human mind does, and refusing to believe that misconception is an important part of navigating the technological age.

No computer will ever possess the extraordinary genius of the Bard. Shakespeare wrote that man is a marvellous piece of work: noble in reason, infinite in faculty, admirable in form and movement, like an angel in action, and like a god in apprehension.

No matter how hard they try, Elon Musk and his engineers cannot build a computer that matches the understanding and imagination characteristic of human intelligence. Humanity will remain unrivaled in its aptitude for telling stories, depicting reality, and expressing emotion, and there is no reason to expect that to change.

The fear of AI taking over the world and causing harm is largely a product of science fiction and does not reflect the current state of the technology. However, policymakers, businesses, and individuals must still understand and proactively mitigate the risks that do exist. That means investing in research and development that prioritizes ethical considerations, creating regulations that ensure transparency and accountability in how AI is built and used, and educating the public about the capabilities and limitations of AI. Ultimately, the responsible development and use of AI can bring significant benefits to society, but only through a careful and thoughtful approach to its risks and challenges.

Source: Fox News
