Elon Musk & Bill Gates Join Urgent Call To Halt ‘Out Of Control’ AI

Tech leaders from some of the best-known companies in the industry are calling on artificial intelligence labs to pause the training of their most powerful AI systems for at least six months, warning of “profound risks to society and humanity.”

Dozens of tech leaders, professors, and researchers, including Elon Musk, Bill Gates, and Apple co-founder Steve Wozniak, signed a letter published overnight by the Future of Life Institute – a nonprofit organization backed by Musk.

The letter comes just two weeks after OpenAI announced GPT-4, a more powerful iteration of the technology behind its popular AI chatbot, ChatGPT.

In early tests and a company demonstration, the technology was shown drafting lawsuits, passing standardized exams, and building a working website from nothing more than a hand-drawn sketch.

The letter calls on all AI labs, OpenAI included, to pause work on any systems “more powerful than GPT-4.”

It proposes that AI labs and independent experts use the pause to develop “shared safety protocols” that would ensure such tools are safe beyond a reasonable doubt.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said.

Unfortunately, the letter continues, that level of planning and management is not happening; instead, recent months have seen an intense race to build and release ever more powerful AI systems whose capabilities even their developers cannot predict or control.

Should a pause not be implemented quickly, the letter says, governments should step in and institute a moratorium.

ChatGPT drew a wave of attention after its launch late last year, reigniting a race among tech companies to develop and deploy similar AI tools.

OpenAI, Microsoft, and Google are at the front of that race, with IBM, Amazon, Baidu, and Tencent all working on similar projects. A number of startups have also launched products such as AI writing assistants and image generators.

AI experts have expressed growing concern about these tools’ potential for biased responses, their ability to spread misinformation, and their impact on consumer privacy.

The tools have also raised questions about how AI could upend professions, enable students to cheat, and reshape our relationship with technology.

The letter reflects legitimate concerns among tech leaders over the unregulated use of AI technologies, according to Lian Jye Su, an analyst at ABI Research.

Still, he called some elements of the petition “ridiculous,” including the request to halt AI development beyond GPT-4, arguing that the suggestion could serve signatories who want to preserve their dominance in the field.

Musk helped found OpenAI in 2015 but left three years later and has since been critical of the company. Gates, for his part, co-founded Microsoft, which has invested billions of dollars in OpenAI.

“Corporate ambitions and desire for dominance often triumph over ethical concerns,” Su said. “I won’t be surprised if these organisations are already testing something more advanced than ChatGPT or [Google’s] Bard as we speak.”

As rapidly as the AI industry has progressed, the letter points to unease both inside and outside the business, hinting at a larger question about the safety of these advancements.

Regulators in China, the EU, and Singapore have already established early versions of AI governance frameworks, each with its own regulatory requirements.

The call to halt “out of control” AI is a reminder of the need for caution and responsibility in developing these powerful technologies. Clear ethical guidelines and regulations for how AI is built and used can help ensure it delivers benefits safely while minimizing the risks of unchecked development.

Source: @9News
