European regulators and law enforcement agencies are increasingly concerned about the potential risks of generative AI platforms such as ChatGPT, and are looking for ways to rein in humanity’s headlong sprint toward digitalization.
ChatGPT, a platform with few restrictions and little guidance, has grown tremendously since its launch on Nov. 30, 2022, racking up more than 1.6 billion visits. Users can ask it for essays, poems, spreadsheets, and computer code.
Europol, the European Union Agency for Law Enforcement Cooperation, recently cautioned that ChatGPT – one of thousands of AI technologies currently in use – could facilitate unlawful activities, including phishing, writing malicious software, and even supporting terrorism.
Europol says:
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,”
“As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse.”
OpenAI, the creator of ChatGPT, was threatened with hefty fines by Italy’s privacy regulator, the Garante, after a glitch exposed user data. The regulator demanded clarification on how user information is handled and insisted on age restrictions for the service, temporarily banning ChatGPT in Italy.
This month, the European Data Protection Board set up a task force to handle complaints about personal-data violations across the EU’s twenty-seven member states. Regulators in Spain, France, and Germany are part of the effort and are working together to harmonize their rules.
Dragos Tudorache, a European lawmaker and co-sponsor of the Artificial Intelligence Act, told Yahoo News that the act is close to being finalized in the European Parliament and would create a central AI authority.
Dragos Tudorache says:
“It’s a wake-up call in Europe,”
“We have to discern very clearly what is going on and how to frame the rules.”
Anyone who has used Alexa or played chess online has seen the power of artificial intelligence firsthand; it has become part of everyday life in recent years. ChatGPT makes that power more tangible: it is a large language model that can answer questions and carry out tasks within seconds, offering a directly interactive AI experience.
Mark Bünger is the co-founder of Futurity Systems, a Barcelona-based consulting agency. His company specializes in bringing science-based innovation to its clients.
Mark Bünger says:
“ChatGPT has knowledge that even very few humans have,”
“Among the things it knows better than most humans is how to program a computer. So, it will probably be very good and very quick to program the next, better version of itself. And that version will be even better and program something no humans even understand.”
Experts have also voiced concerns about fraud enabled by this highly efficient technology, citing examples such as identity theft and plagiarism in educational institutions.
Taylor, deputy director of the Edinburgh Centre for Robotics, plays a significant role in the field.
Taylor says:
“For educators, the possibility that submitted coursework might have been assisted by, or even entirely written by, a generative AI system like OpenAI’s ChatGPT or Google’s Bard, is a cause for concern,”
OpenAI and Microsoft, which has backed OpenAI financially while also building a competing chatbot of its own, were unavailable for comment for this article.
Since ChatGPT’s free public trial opened on Nov. 30, Cecilia Tham, CEO of Futurity Systems, has watched programmers adapt it to create thousands of new chatbots, from PlantGPT, which helps monitor house plants, to far more ominous experiments.
Cecilia Tham says:
“AI has been around for decades, but it’s booming now because it’s available for everyone to use,”
She points to one recent creation, a bot “that is designed to generate chaotic or unpredictable outputs” and whose stated goal is to “destroy humanity.”
AutoGPT, also known as Autonomous GPT, is an adaptation built on top of the model that can pursue more complex, goal-oriented tasks.
Tham continues:
“For instance, you can say ‘I want to make 1,000 euros a day. How can I do that?’— and it will figure out all the intermediary steps to that goal. But what if someone says ‘I want to kill 1,000 people. Give me every step to do that’?” Even though the ChatGPT model has restrictions on the information it can give, she notes that “people have been able to hack around those.”
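Tham’s description (state a goal and let the system work out the intermediary steps) is, in essence, an automated planning loop wrapped around the language model. The sketch below is a minimal, hypothetical illustration of that pattern in Python, assuming the OpenAI Python SDK (openai 1.x) and an API key in the environment; it is not AutoGPT’s actual code, and the helper names are invented for the example.

# Minimal sketch of an AutoGPT-style loop: the model is first asked to break a
# goal into steps, then each step is sent back to the model to be carried out.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY environment
# variable; function names here are illustrative, not AutoGPT's real internals.
from openai import OpenAI

client = OpenAI()

def plan_steps(goal: str) -> list[str]:
    """Ask the model to decompose a goal into short, numbered steps."""
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Break the user's goal into short, numbered steps."},
            {"role": "user", "content": goal},
        ],
    )
    text = reply.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

def run(goal: str, max_steps: int = 5) -> None:
    # Each planned step becomes the next prompt, which is what lets a tool
    # like this pursue a goal "on its own" within the model's guardrails.
    for step in plan_steps(goal)[:max_steps]:
        answer = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Carry out this step and report the result: {step}"}],
        )
        print(step, "->", answer.choices[0].message.content)

if __name__ == "__main__":
    run("Outline three legitimate ways a small business could earn 1,000 euros a day")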
Last month, the Future of Life Institute – a think tank focused on technology – released an open letter calling for a temporary pause in AI development because of the potential dangers of chatbots and AI.
Elon Musk and Apple co-founder Steve Wozniak were among the signatories.
The letter warns:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” and “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
The signatories, calling for a six-month pause in the development of AI systems more powerful than GPT-4, urged governments to ‘institute a moratorium’ should key players in the industry fail to do so voluntarily. They proposed the pause as an opportunity to formulate the necessary regulations.
The ChatGPT controversy serves as a wake-up call for ensuring that AI is developed and used responsibly and ethically. While the risks of advanced AI systems are real, we cannot ignore their potential benefits. By working together to address these challenges, we can ensure that AI is a force for good rather than a source of harm.
Source: Yahoo News