The Accelerating Use Of Generative AI May Prompt U.S. Action

The rise of generative artificial intelligence (AI) has been rapid, and its applications keep expanding as the technology moves into everyday life. AI has been used in image and voice recognition software, autonomous vehicle design and manufacturing, natural language understanding for consumer services such as Alexa and Google Home, and even creative writing.

What might be surprising is that this rapidly developing field has prompted the U.S. government to take a closer look at how it can regulate AI, covering not just the development of intelligent systems but also the consequences of their use, with policy implications both domestically and globally. In this blog post, I'll explore why generative AI has drawn such interest from policymakers across all levels of government.

The increasing popularity of ChatGPT and other generative AI tools may prompt the federal government to provide businesses with guidelines or regulations for this technology.

Generative AI has recently gained significant attention thanks to ChatGPT, which OpenAI launched in November 2022. The technology can produce new content in the form of video, audio, images, text, and code.

ChatGPT is based on OpenAI's GPT-3, a large language model. With the help of prompts, it can be used to generate essays, answer questions, write computer code, and detect security flaws within networks.
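
To make that prompt-driven workflow concrete, here is a minimal sketch of how a business might send a single prompt to a GPT-style model through OpenAI's API and read back the generated text. The model name, prompt, and parameters below are illustrative assumptions rather than anything described in the article, and the exact client interface depends on the library version installed.

# Minimal sketch of prompt-driven text generation with the OpenAI Python client.
# Assumes the `openai` package (v1+) is installed and the OPENAI_API_KEY
# environment variable is set; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompt = "Write a short essay on how generative AI might be regulated in the U.S."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model could be substituted here
    messages=[{"role": "user", "content": prompt}],
    max_tokens=300,  # cap the length of the generated reply
)

# The generated content comes back in the first choice's message.
print(response.choices[0].message.content)

The same pattern, a plain-language prompt in and generated text out, is what sits behind essay writing, question answering, and code generation alike.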

Companies are investigating how generative AI can support their operations. Google and Microsoft have both increased their investments in generative AI, and news outlets such as BuzzFeed are experimenting with it to create stories.

Generative AI also raises several potential issues, such as copyright concerns and malicious use by bad actors to spread false information or write fraudulent reviews and comments.

As adoption increases, the technology is being embraced by consumers, businesses, and government entities alike.

Ammanath says there is a "need to ensure the tools are being used responsibly."

"I can see the current hype around generative AI being a catalyst for further guidance and possibly regulation by government entities," Ammanath says.

Federal Regulation Of Generative AI

Ammanath said she is confident that regulation of AI is inevitable. Organizations are actively trying to figure out how AI works, how it affects customers, and how to ensure that those who build and use AI are accountable for fair and transparent operations.

The U.S. recently unveiled the Blueprint for an AI Bill of Rights to steer businesses toward ethical use of AI tools. Meanwhile, the European Union and the United Kingdom are actively exploring regulations for artificial intelligence, something the U.S. government has yet to tackle.

New York City public schools are among the many institutions that have started taking steps to limit the use of generative AI tools; school authorities have blocked access to certain AI tools on their devices, according to Ammanath.

Ammanath went on to say:

“Lawmakers have proposed regulations on the use of facial recognition and other applications of AI, so it’s likely we will also see strategies and regulations emerge around the use of generative AI tools.”

Alan Pelz-Sharpe, founder of the market analysis firm Deep Analysis, believes official guidance is necessary for the use of AI, especially generative AI.

Artists and the stock image company Getty Images are bringing lawsuits against makers of generative AI tools over the purportedly unlawful use of their images to create new material.

Alan Pelz-Sharpe says:

“The government would do well to guide U.S. businesses toward a safe route in this regard, with guidance on how to ensure defensible use of generated content that doesn’t impinge on existing copyright protections and is defensible on any advice such content might offer.”

The accelerating use of generative AI may well prompt U.S. action. Still, the most pressing concern for regulators should be overseeing how these models are used and setting clear, tempered expectations for what they can do. Plenty of questions still need answers before we see the large-scale impact of these applications, but as the technology gets better at understanding the world, it will become increasingly important to make sure it is being put to good use.

Source: techtarget.com
