AI Experts Urge Global Pause On AI Development, Citing “Risks To Society”

In an open letter, Elon Musk and a group of AI experts and industry leaders have called for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing the potential risks such systems could pose to society.

OpenAI recently unveiled GPT-4, the fourth iteration of its GPT (Generative Pre-trained Transformer) program, impressing observers with wide-ranging capabilities that include holding remarkably human-like conversations, composing songs, and summarising lengthy documents.

The letter, issued by the Future of Life Institute and signed by more than 1,000 people, including Elon Musk, calls for a pause on advanced AI development until shared safety protocols have been developed and audited by independent experts.

The letter states that powerful AI systems should be created only once we are confident their effects will be beneficial and their risks manageable.

OpenAI did not respond to a request for comment.

The letter urges developers to join forces with policymakers to draft laws and regulations for AI, warning that systems capable of outperforming humans could pose risks to society and civilization, including economic and political disruption.

Co-signatories included AI heavyweights such as “godfather of AI” Yoshua Bengio, researchers at Alphabet-owned DeepMind, Stability AI CEO Emad Mostaque, and pioneer researcher Stuart Russell.

The Future of Life Institute is mainly funded by the Musk Foundation, along with two other organizations – Founders Pledge, a London-based effective altruism group, and the Silicon Valley Community Foundation – according to the European Union’s transparency register.

On Monday, Europol, the EU’s law enforcement agency, added to existing concerns about advanced AI such as ChatGPT, raising ethical and legal issues and warning of possible misuse in phishing attempts, disinformation, and cybercrime.

The UK government recently described plans for a “flexible” regulatory framework for artificial intelligence, designed to adapt as the technology landscape changes.

The strategy, set out in a policy paper published on Wednesday, does not propose a dedicated AI regulator. Instead, oversight will be split among existing regulators responsible for human rights, health and safety, and competition.

Transparency Of Artificial Intelligence

Musk, whose carmaker Tesla uses AI in its Autopilot system, has been outspoken about his concerns over AI.

Since OpenAI released ChatGPT last year, rivals have rushed to develop comparable large language models, and companies have raced to integrate generative AI into their products.

Through partnerships with around a dozen firms, OpenAI’s ChatGPT can now draw on third-party services, such as ordering groceries through Instacart or booking flights through Expedia.

Sam Altman, OpenAI’s chief executive, has not signed the letter, a spokesperson for the Future of Life Institute confirmed to Reuters.

Gary Marcus, a professor at New York University and a signatory of the letter, says:

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,”

“The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”

Critics, however, have accused the letter’s signatories, some of whom advocate using AI across various industries, of spreading “hype” about the technology, arguing that claims about its current capabilities are greatly exaggerated.

Johanna Björklund, an associate professor and AI researcher at Umeå University, says:

“These kinds of statements are meant to raise hype. It’s meant to get people worried,”

“I don’t think there’s a need to pull the handbrake.”

Rather than pausing research, she argued, AI researchers should face greater transparency requirements.

She added:

“If you do AI research, you should be very transparent about how you do it.”

The call for a global pause on AI development is a reminder of the need for caution and responsible development of these powerful technologies. By working together to establish clear ethical guidelines and regulations for the development and use of AI, we can ensure that these technologies are used safely and beneficially while minimizing the potential risks associated with their uncontrolled development.
