Prepare For The Next Level Of AI: How It Impacts Our Democracy

Social media is eroding democracy by providing a platform for individuals with extreme views to meet and strategize. As discussed in this series, it is easier than ever for like-minded people with fringe opinions to find one another.

The Constitution’s drafters saw geographic dispersal as a tool to counteract the potential power of dangerous factions. In today’s world, however, individuals do not need to rely on political representatives to make their voices heard in public.

As a result, our representative institutions need repair. To move forward, we must seek replacements for geographic dispersal as a check on faction, through measures such as ranked-choice voting. And democracy is only beginning to feel the effects of this problem.

Generative artificial intelligence promises a new wave of misinformation, a tool that bad actors could readily use to spread false information on an even greater scale.

A healthy democracy could harness this new technology, with both its benefits and its dangers in view, to plan for the economic transitions its rapid advance will bring. Whether our present democracy is capable of governing and managing these challenges is a question that deserves serious consideration.

Out of such worries, I joined many technologists, academics and even controversial visionaries such as Elon Musk in signing an open letter calling for a pause of at least six months on the training of “AI systems more powerful than GPT-4.”

Last month, the OpenAI lab released GPT-4, a significant improvement over its predecessor, ChatGPT, which launched in November. This development is what prompted the letter.

Whether we have entered the age of artificial general intelligence (AGI), as opposed to the narrow AI of machines tasked with specific jobs, such as Siri, is hotly contested in the technology field. AGI would mean machines capable of matching humans at nearly any activity.

The technology brings new problems of misinformation and fraud, and it unleashes a wave of emergent properties and capabilities that could be a genuine game changer.

The much-anticipated arrival of AGI may finally be upon us thanks to the advanced generative foundation models behind GPT-4. These models have already outperformed most humans on coding tasks, the LSAT and many other tests. Is this a sign that AGI is here?

Bill Gates, the co-founder of Microsoft and a fervent believer in the power of OpenAI, argues that for all their promise, GPT-4 and other large language models remain confined to limited tasks.

According to a comprehensive review by Microsoft Research, however, the newest machine-learning models show a “spark” of artificial general intelligence. Although my lab receives funding from Microsoft Research, I believe this assessment is accurate.

The debate over whether the time has come to regulate an intelligence that functions unpredictably is ongoing, but the breakthrough’s benefits and potential harms are already recognizable. That alone demands our attention, whatever our stance on regulation.

Automation can now complete many activities that traditionally required human labour, including in many white-collar positions. The resulting gain in productivity will also disrupt the labour market, posing risks not just to truck drivers but also to lawyers, coders and anyone whose income largely depends on intellectual labour.

Gates says:

“When productivity goes up, society benefits because people are freed up to do other things, at work and at home.

Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles.”

OpenAI knows its technology’s potential applications and risks and has adopted usage policies prohibiting certain activities.

Because of the risk of physical harm, OpenAI prohibits the generation of child sexual abuse material; hateful, harassing or violent content; and malware, along with weapons development, military operations, the management of critical infrastructure such as energy, transportation and water, and the promotion of self-harm.

Activities carrying a high risk of economic harm include multilevel marketing, gambling, payday lending and automated determinations of eligibility for credit, employment, educational institutions or public assistance.

Fraudulent or deceptive activities that seek to mislead people are also barred: scams, astroturfing, coordinated inauthentic behaviour, disinformation, plagiarism and pseudo-pharmaceuticals.

Other prohibited uses include adult content; the unauthorized practice of law, medicine or financial advice; political campaigning or lobbying through the generation of high volumes of campaign materials; and any actions that violate privacy.

None of this is to deny the technology’s many advantages; it could, for instance, finally deliver truly personalized learning. But emphasizing those benefits is not the primary purpose of our open letter.

Generative AI might even help shift the economy away from the dominance of big tech by paying internet users for the raw data they produce, treating that production as compensated labour. If successful, such a measure would alter the dynamics that have produced today’s power imbalance.

Given the drastic effects this social transformation is likely to have on us, we should avoid rushed decisions, and we should not let the change be steered entirely by a handful of engineers in a few laboratories. We must ensure we are adequately prepared for its full scope and magnitude.

We require a respite to evaluate what humanity has brought into being, to determine how best to manage it, and to develop methods of overseeing the development and use of these novel technologies so that those who build them are held accountable.

The National Institute of Standards and Technology has begun a standard-setting process. The public sector should also invest in third-party auditing to better understand model capabilities and data ingestion. Accelerating this work is essential if AI is to be used well in our society.

We must also explore and continue to establish “compute governance”: regulating access to the massive computing power, and the energy it consumes, that drives the new models, much as access to uranium is overseen in nuclear technology.

We must strive to enhance not only our democracy but also the tools it uses. A suspension of generative AI development could give us time to get a handle on the technology and to experiment with new methods of improving governance.

The Commerce Department has sought to regulate new AI models, and it is now exploring how some of the tools AI has generated could make the public comment process itself more robust and meaningful.

Before we can effectively deploy these emerging technologies for next-generation governance and ensure they have a positive impact on democracy, we need to take time to consider the challenges, something GPT-4 cannot do for us. Only by governing these technologies responsibly can we reap their full benefits.

One of the biggest challenges facing our democracy is the need to develop new policies and regulations to keep up with technological change. As AI continues to evolve, we must work together to ensure that it is deployed in a way that is ethical, transparent, and accountable. Failure to do so risks creating new inequalities and exacerbating existing ones. It is time for our policymakers and leaders to step up and take action to ensure our democracy is ready for the next level of AI.

Source: Washington Post
