The release of generative AI systems is bringing us to a tipping point, even though only a few years earlier many countries committed to developing human-centric and trustworthy AI. These new technologies are far from human-centric and trustworthy, and they warrant urgent attention.
Such systems cannot be reproduced or validated, and they fabricate and hallucinate. For example, they can provide instructions on how to carry out terrorist attacks, assassinate political figures, and conceal child abuse.
GPT-4 could also enable mass surveillance at scale by combining capabilities such as ingesting images, algorithmically linking identities, and building detailed profiles.
Secrecy in this rapidly changing industry has grown along with the technology. The technical paper accompanying GPT-4 reveals little about the training data or the number of parameters, and even less about the processes used to assess the model.
Every emerging AI policy framework requires an independent impact assessment before an AI technology is deployed.
Many firms building AI, along with leading experts in the field, have said regulation is necessary; yet in the US there is little movement toward protective legal measures, while other countries are moving quickly.
The current trajectory of AI deployment is unsustainable. All stakeholders should be informed about the impacts of AI, and qualified experts must evaluate these models for accuracy before they are put into use.
A national commission should examine the implications of artificial intelligence (AI) for American society. It would identify both AI’s benefits and its risks, promote algorithmic transparency, and devise measures to prevent algorithmic bias.
The Center for AI and Digital Policy has partnered with other organizations to file a complaint with the Federal Trade Commission, urging it to investigate ChatGPT, the product developed by OpenAI.
Given its role as the leading consumer protection agency in the United States, we believe the Federal Trade Commission (FTC) has the authority to act on this emerging problem.
We ask the FTC to impose a moratorium on the release of new commercial versions of GPT until appropriate safeguards are established, and to initiate a rulemaking process to regulate the generative AI industry.
We acknowledge the many potential benefits of AI but recognize that, without proper oversight and monitoring, it could produce harmful results. We support growth and innovation while stressing the need to manage risks and avoid catastrophic consequences.
We urge the FTC to hit the “pause button,” allowing our institutions, laws, and social norms to catch up. This is a critical opportunity to re-establish agency over these technologies before we lose control altogether.
As we move forward with the FTC Complaint and Petition, we welcome advice and input on the issues we should raise with the FTC.
Accurate, authoritative descriptions of the risks posed by GPT, expert opinions, and analysis of how it violates the FTC’s guidelines on the marketing and advertising of AI products and services are especially helpful to us.
Identifying Topics For Further Research & Discussion
– Enhanced risk to cybersecurity
– Enhanced risk to data protection and privacy
– Enhanced risk to children’s safety
– Failure to conduct an independent risk assessment before deployment
– Failure to establish independent risk assessment throughout the AI lifecycle
– Failure to accurately describe the data source
– Failure to disclose data collection practices regarding users
– False advertising regarding reliability
– Lack of transparency in outputs produced
– Replication of bias in protected categories
We cannot include general policy arguments, unsupported claims, or rhetorical statements, as our arguments must be evidence-based. We rely only on cited studies and scholarly literature to support our reasoning.
OpenAI is obligated to comply with the reports and policy guidance the FTC has issued in recent years concerning the marketing and advertising of AI-related products and services.
In recent guidance on AI claims in marketing, the FTC has urged companies to ask themselves:
– Are you exaggerating what your AI product can do?
– Are you promising that your AI product does something better than a non-AI product?
– Are you aware of the risks?
– Does the product use AI at all?
In 2021, the FTC warned of the risk of discrimination posed by AI, emphasizing that seemingly “neutral” AI systems can still produce unequal outcomes for members of legally protected classes, such as those defined by race.
The FTC has decades of experience enforcing three laws that bear directly on developers and users of AI:
– Section 5 of the FTC Act
– Fair Credit Reporting Act
– Equal Credit Opportunity Act
Drawing on its recent work and enforcement actions, the FTC has offered lessons on using AI truthfully, fairly, and equitably:
– Start with the right foundation
– Watch out for discriminatory outcomes
– Embrace transparency and independence
– Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results
– Tell the truth about how you use data
– Do more good than harm
– Hold yourself accountable – or be ready for the FTC to do it for you
The FTC states that it has extensive experience with the challenges of using data and algorithms to make decisions about consumers.
The FTC has conducted numerous investigations and brought many enforcement actions focusing on violations within its authority related to AI and automated decision-making, scrutinizing multiple companies in this field.
Through its enforcement actions, studies, and guidance, the FTC has emphasized that AI tools must be transparent, explainable, fair, and empirically accurate, and it aims to promote accountability throughout the process.
That experience, and the laws the FTC enforces, offer valuable guidance to companies seeking to better manage the consumer protection risks posed by AI and algorithms:
– Be transparent.
  – Don’t deceive consumers about how you use automated tools.
  – Be transparent when collecting sensitive data.
  – If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice.
– Explain your decision to the consumer.
  – If you deny consumers something of value based on algorithmic decision-making, explain why.
  – If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank-ordered by importance.
  – If you might change the terms of a deal based on automated tools, tell consumers.
– Ensure that your decisions are fair.
  – Don’t discriminate based on protected classes.
  – Focus on inputs, but also on outcomes.
  – Give consumers access and an opportunity to correct information used to make decisions about them.
– Ensure that your data and models are robust and empirically sound.
– Hold yourself accountable for compliance, ethics, fairness, and non-discrimination.
The FTC has also warned of the potential harms of using AI to combat online problems, including inaccuracy, bias, discrimination, and increased commercial surveillance.
In a report to Congress, the FTC urged policymakers to “exercise great caution” when considering AI as a policy solution, warning of the dangers of employing AI to tackle online issues.
The report notes that the use of AI, particularly by large tech corporations and other organizations, comes with significant limitations and problems: AI tools can be inaccurate, biased, and discriminatory, and reliance on them can push businesses toward increasingly intrusive forms of commercial surveillance.
As the use of AI becomes more prevalent across industries, independent oversight of companies such as OpenAI is essential. The FTC’s response in 2023 is an opportunity to ensure that generative AI is developed and deployed responsibly, and it will shape the development and regulation of AI for years to come.
Source: Center for AI and Digital Policy