ChatGPT output displayed on a computer screen in Boston on Tuesday, March 21, 2023, with the OpenAI logo visible on a nearby mobile phone. The release of such tools has raised questions about whether tech companies are moving too fast with artificial intelligence technology that could one day outsmart humans.
Leading computer scientists and tech-industry figures are calling for a six-month pause to assess the risks of artificial intelligence. Their petition, published Wednesday, March 29, 2023, responds to San Francisco startup OpenAI's recent release of GPT-4 and warns that companies may be deploying powerful A.I. technology too hastily.
Elon Musk, Apple co-founder Steve Wozniak and other prominent computer scientists and tech-industry notables agree that a six-month pause to consider the risks is necessary.
Their petition, published Wednesday, responds to OpenAI's release of GPT-4, a more advanced successor to its widely used chatbot ChatGPT, which helped spark a race among tech giants Microsoft and Google to unveil similar applications.
What Are They Saying?
The letter warns that A.I. systems with “human-competitive intelligence” pose serious risks to humanity, from flooding the internet with false information and automating away jobs to more far-reaching dangers that sound lifted straight out of a science fiction movie.
In recent months, it says, A.I. labs have been locked in an escalating race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict or control.
The letter urges all A.I. labs to pause, for at least six months, the training of A.I. systems more powerful than GPT-4. The pause should be public and verifiable, and if it cannot be enacted quickly, governments should step in and impose a moratorium.
The U.K. government released a paper Wednesday outlining its approach to regulating high-risk A.I. tools, aiming to keep legislation from stifling innovation. E.U. lawmakers, meanwhile, have been negotiating sweeping rules specifically on A.I.
Who Signed It?
The petition is organized by the nonprofit Future of Life Institute, which says confirmed signatories include leading A.I. researchers Yoshua Bengio, Stuart Russell and Gary Marcus.
Joining Wozniak were former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against nuclear war.
Musk, who runs Tesla, Twitter and SpaceX, has long expressed concern about A.I.'s existential risks; he was also a co-founder and early investor in OpenAI.
A surprise addition to the list was Emad Mostaque, CEO of Stability AI, maker of the A.I. image generator Stable Diffusion, which partners with Amazon and competes with OpenAI's DALL-E.
What Is The Best Response?
OpenAI, Microsoft and Google did not respond to requests for comment Wednesday, but the letter has already drawn plenty of criticism.
“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously,” says James Grimmelmann, a Cornell University professor of digital and information law.
“It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars.”
Is The A.I. Hysteria Justified?
Not all the letter's signatories are worried about “superhuman” A.I.; many instead raised concerns about nefarious uses of A.I. that is only somewhat more capable than what currently exists.
ChatGPT is an impressive tool, but at its core it is a text generator: trained on a huge body of written works, it predicts which words are most likely to come next in response to a prompt.
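The next-word idea behind such text generators can be illustrated with a toy sketch. The code below is a hypothetical bigram counter invented for illustration, nothing like OpenAI's actual neural network, and the names `train_bigrams` and `predict_next` are made up here:

```python
# Toy sketch of next-word prediction -- NOT ChatGPT's model, which is a
# large neural network. This bigram counter only illustrates the core
# idea: learn from text which word most often follows each word.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word in the corpus, which words follow it."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word seen most often after `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(predict_next(model, "on"))  # "the" follows "on" most often
```

A real large language model does the same job with a neural network over huge corpora, scoring every word in its vocabulary rather than counting adjacent pairs, which is why its output reads fluently instead of parroting the training text.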
Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he does not share fears of intelligent machines soon advancing beyond human control.
What he fears instead is the deployment of “mediocre A.I.” by criminals or terrorists for malicious ends, such as tricking people or spreading dangerous misinformation.
Today’s technology already poses risks that we are ill-prepared to manage, and those risks will only grow as the technology advances. That demands our full attention and immediate action.
The call to halt the A.I. race urges everyone involved in A.I. development to prioritize ethical considerations and work together so that A.I. is built responsibly and transparently. The potential benefits of A.I. are significant, but so are the risks, and only a proactive approach to its development will ensure it benefits society.
Source: Denver 7 Colorado News