Sign The Open Letter Asking All AI Labs To Take Immediate Action For At Least 6 Months

Over 1,100 signatories, including Elon Musk, Steve Wozniak, Stuart Russell, Andrew Yang, and Yoshua Bengio, have signed an open letter calling for an immediate six-month pause on the training of AI systems more powerful than GPT-4.

The letter claims that, rather than engaging in prudent planning and management, AI labs have recently locked themselves into an out-of-control race to develop and deploy ever more powerful digital minds, without a proper understanding of how to control them.

However powerful contemporary AI systems become at general tasks, we still need to ask whether we should let them flood our information channels with propaganda and untrustworthy information.

Decisions such as these (whether to automate away jobs, create nonhuman minds that might outnumber, outsmart, and replace humans, or risk losing control of our civilization) should not be left in the hands of unelected tech leaders. They must not be delegated to those without a democratic mandate.

Highly advanced AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. That confidence must be well justified and grow in proportion to the magnitude of a system's potential effects.

The letter also echoes OpenAI's recent statement on artificial general intelligence, which suggested that at some point it may be important to obtain independent review before training future systems, and that the most advanced efforts should agree to limit the rate of growth of compute used for creating new models.

We urge all AI laboratories to cease training AI systems more powerful than GPT-4 for six months. The letter emphasizes that this pause should be public and verifiable and include all key actors; if it cannot be enacted quickly, the signatories, many of them AI experts, call on governments to step in and institute a moratorium.

The letter’s signatories, who include engineers from Meta and Google, Stability AI founder and CEO Emad Mostaque, and non-tech professionals such as an electrician and an esthetician, make it especially noteworthy; perhaps even more remarkable, though, are those who have not signed it.

Notably absent are OpenAI (the organization responsible for GPT-4) and Anthropic (which splintered off from OpenAI to build a “safer” AI chatbot): no one from either company has signed the letter.

Who Signed On? Explore Some Of The Signatories Here

– Yoshua Bengio, head of the Montreal Institute for Learning Algorithms (Mila) and professor at the Université de Montréal, received the Turing Award for his pioneering work on deep learning.

– Stuart Russell, professor of computer science at the University of California, Berkeley, and director of the Center for Intelligent Systems, is best known as co-author of the standard textbook “Artificial Intelligence: A Modern Approach.”

– Elon Musk, CEO of SpaceX, Tesla, and Twitter.

– Steve Wozniak, Co-founder, Apple

– Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem

– Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship

– Connor Leahy, CEO, Conjecture

– Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute

– Evan Sharp, Co-Founder, Pinterest

– Chris Larsen, Co-Founder, Ripple

– Emad Mostaque, CEO, Stability AI

– Valerie Pisano, President & CEO, MILA

– John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks

– Rachel Bronson, President, Bulletin of the Atomic Scientists

– Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute

– Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics

– Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute

– Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute

– Sean O’Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk

– Tristan Harris, Executive Director, Center for Humane Technology

– Marc Rotenberg, Center for AI and Digital Policy, President

– Nico Miailhe, The Future Society (TFS), Founder and President

– Zachary Kenton, DeepMind, Senior Research Scientist

– Ramana Kumar, DeepMind, Research Scientist

The open letter is a call to action for all AI labs to take responsibility for the ethical development of AI and to pause research that could have harmful consequences. It is a reminder that AI development must be guided by transparency, accountability, and social responsibility, and that the benefits of AI can be realized only if it is developed responsibly and ethically.

