Prominent artificial intelligence experts, tech entrepreneurs, and scientists have signed an open letter calling for a pause in the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4, so that the risks the technology may pose can be thoroughly examined.
GPT-4 can increasingly rival humans at a range of tasks, the letter warns. That capability could be used to automate jobs and spread false information, and, as a more distant possibility, it could culminate in AI systems capable of replacing people and remaking the world as we know it.
According to the letter, signed by figures including Yoshua Bengio, Yuval Noah Harari, Jaan Tallinn, and Elon Musk, the training of GPT-5, reportedly already underway, and of all other AI systems more powerful than GPT-4 should be paused for at least six months.
The Future of Life Institute, an organization focused on technological risks to humanity, requested a “public and verifiable” pause in the development of advanced AI models such as GPT-4. The letter asserts that everyone working in the field should take part in this timeout.
If such a pause cannot be enacted quickly, the letter argues, governments should step in and impose a moratorium. The signatories concede that this is unlikely to happen within six months, but maintain that the pause should be put in place regardless.
The letter’s signatories include people from Microsoft and Google, companies that are themselves building advanced language models. Neither company, nor OpenAI, responded to requests for comment.
The recent announcement of GPT-4 has generated a lot of excitement and, in some cases, concern over its capabilities. With AI systems making impressive strides, this letter addresses the implications of these advancements.
ChatGPT, OpenAI’s popular chatbot, achieves impressive scores on various academic tests. It can also correctly answer complex questions that were long thought to require a level of intelligence beyond what AI systems had achieved.
Yet GPT-4 also makes plenty of trivial, logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.
The letter’s signatories express concern over a profit-driven race among OpenAI, Microsoft, and Google to develop and release new AI models as quickly as possible. They fear that these developments are outpacing the ability of society and regulators to keep up.
Microsoft has invested $10 billion in OpenAI to keep pace with the rapid change in AI technology, and has used OpenAI’s models to add AI capabilities to products such as Bing, its search engine.
Until this year, Google had declined to publicly release its own language models, citing ethical concerns, even though the company pioneered some of the AI techniques used to build GPT-4.
Microsoft’s advances in search appear to have pushed Google to accelerate its plans: the company debuted Bard, a rival to ChatGPT, and made PaLM, a language model similar to OpenAI’s offerings, accessible through an API.
“It feels like we are moving too quickly,” says Stone, a professor at UT Austin and chair of the One Hundred Year Study on Artificial Intelligence, a project that aims to understand how AI will affect our future.
The call to pause large language models like ChatGPT highlights the need for a more robust and responsible approach to AI development. By addressing the ethical implications of these models and building appropriate safeguards, we can help ensure that AI serves the betterment of society rather than becoming a source of harm or division.