Rick Claypool and Cheyenne Hunt authored the report, “Sorry in Advance!” Rapid Rush to Deploy Generative AI Risks a Wide Array of Automated Harms, published Tuesday, which argues that a halt should be called until appropriate government measures are established to protect the public from the dangers of generative AI. The analysis seeks to shift the conversation around this type of artificial intelligence so that decision-makers and citizens can have a say in how these technologies affect our lives.
Since the launch of OpenAI’s ChatGPT in November, there has been increased interest in generative AI tools.
According to the report, generative AI tools have generated “a huge amount of buzz—especially among the Big Tech corporations best positioned to profit from them.”

“The most enthusiastic boosters say AI will change the world in ways that make everyone rich—and some detractors say it could kill us all,” the report states. “Separate from frightening threats that may materialize as the technology evolves are real-world harms the rush to release and monetize these tools can cause—and, in many cases, is already causing.”
Claypool and Hunt identified five distinct areas of potential harm:
– Damaging Democracy: Spambots that spread misinformation have been around for some time, but with the help of generative AI tools, it is simpler than ever to create a large amount of false political material. Audio and video production AI technologies are also becoming more advanced, making it harder to discern between genuine and artificial content.
– Consumer Concerns: Businesses using generative AI to maximize profits are harvesting user data, manipulating customers, and entrenching the advantages of large corporations. Scammers, meanwhile, are employing these tools in increasingly elaborate frauds.
– Worsening Inequality: Generative AI has the potential to perpetuate and intensify existing systemic biases, such as racism and sexism, and could give bullies and abusers more ways to harm their victims. If widely adopted, this type of AI could significantly worsen economic inequality.
– Undermining Worker Rights: AI tools are trained on text and images created by humans, work for which companies often hire low-wage workers abroad. Yet using AI to automate media production could reduce or eliminate the need for that human labor, deskilling workers and taking away their jobs.
– Environmental Concerns: Training and maintaining generative AI tools demands computing power that is growing faster than developers’ efficiency advances can offset. This could force large tech companies to quadruple or quintuple their computing capacity, with a corresponding increase in their carbon footprints.
Public Citizen cautioned that businesses are deploying potentially hazardous AI tools faster than their damaging effects can be analyzed or mitigated.
“History offers no reason to believe that corporations can self-regulate away the known risks—especially since many of these risks are as much a part of generative AI as they are of corporate greed,” the report continues. “Businesses rushing to introduce these new technologies are gambling with peoples’ lives and livelihoods, and arguably with the very foundations of a free society and livable world.”
Public Citizen is hosting a hybrid in-person and virtual event on Thursday, April 27, at which U.S. Rep. Ted Lieu (D-Calif.) and 10 other panelists will discuss the risks of AI technology and how to manage its growth given the current lack of regulation. Those interested must register by Friday to join.
Calls for the regulation of artificial intelligence have been escalating. In March, Geoffrey Hinton, regarded as a leading expert on neural networks, made the case for governing AI.
Hinton, often called the “godfather of artificial intelligence,” compared the quickly advancing technology’s potential impact to that of “the Industrial Revolution, or electricity, or maybe the wheel.”
When CBS News’ Brook Silva-Braga asked about the technology’s potential to wipe out humanity, Hinton cautioned that such an outcome “is not impossible to imagine.”
Hinton’s dread stems not from current programs like ChatGPT but from what is known as “artificial general intelligence” (AGI), through which computers could develop and act on ideas of their own.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI,” Hinton said. “Now I think it may be 20 years or less.” He even admitted that he wouldn’t rule out the possibility of AGI arriving within five years, a major departure from a few years ago when he “would have said, ‘No way.'”

“We have to think hard about how to control that,” Hinton added. “We don’t know, we haven’t been there yet, but we can try.”
Hinton is certainly not the only AI pioneer to voice such concerns. OpenAI CEO Sam Altman wrote in a February blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”
Altman, however, has not signed a recently published open letter, now endorsed by more than 26,000 people, that demands a six-month pause on training any AI system more powerful than OpenAI’s newest model, GPT-4.
The letter emphasizes that we should only create powerful AI systems when we can be sure that their consequences will be beneficial and that their potential risks can be kept in check.
The Public Citizen report emphasizes that AI tools already available, such as chatbots that dispense misleading information, apps that generate fake videos, and voice-cloning tools used in scams, are already causing or threaten to cause significant harm. That harm includes deepening inequality, undermining democracy, eliminating jobs, exploiting consumers, and worsening the climate emergency.
Claypool and Hunt stress that corporations deploying generative AI without strong restrictions pose serious dangers, but that these threats need not become reality if appropriate measures are taken in time.
Government regulation can keep companies from deploying the technologies too quickly (or block them entirely if they prove unsafe) and establish standards that protect people from risks. It can require companies using generative AI to avoid foreseeable harms, respect the concerns of communities and creators, test their technologies before release, and take responsibility and accept accountability when something goes wrong.
It can require that fairness be built into the technologies and ensure that if AI does boost efficiency and displace workers, the economic gains are shared with those harmed rather than concentrated among a small number of firms, executives, and shareholders.
Last week, the Biden administration signaled its interest in an AI accountability mechanism by calling for public input on how to make such systems lawful, effective, moral, secure, and reliable. Senate Majority Leader Chuck Schumer (D-N.Y.) is also taking early steps toward AI legislation in response to growing regulatory concern.
Claypool and Hunt emphasize the need for strong safeguards and government regulation before AI technology is made widely available, urging a pause until such protections are in place.
Experts are calling for a “pause” on the development and deployment of AI until appropriate regulations are in place, so that the technology’s potential risks and negative consequences can be adequately addressed while development continues. Ultimately, the responsible use of AI will require a combination of regulation, transparency, and ethical considerations. As we move forward, it will be important to strike a balance between innovation and accountability to ensure that AI serves the best interests of society as a whole.
Source: Alternet.org