FTC Warns Of Potential Misuse Of AI Technology Like ChatGPT

At a Congressional hearing on Tuesday, FTC chair Lina Khan and other commissioners cautioned House representatives about the ability of modern AI technologies such as ChatGPT to accelerate fraud. The hearing focused on the Federal Trade Commission’s efforts to safeguard American consumers from fraud.

The warning came in response to a question about what the Commission is doing to shield Americans from unfair practices arising from technological advances. Khan acknowledged that, whatever benefits AI may offer, it also creates new risks for the FTC to manage.

Khan told the House representatives:

“AI presents a whole set of opportunities, but also presents a whole set of risks,”

“And I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,”

Khan emphasized that AI's potential to facilitate fraud is a serious issue that must be taken seriously.

FTC commissioner Rebecca Slaughter followed up on the chair’s remarks by stressing the agency’s ability to adapt and combat AI-fueled fraud. She noted that the FTC has brought in technology specialists to support its consumer protection and competition work, so that potential AI-related issues can be identified and addressed.

Slaughter said:

“There’s a lot of noise around AI right now and it’s important because it is [a] revolutionary technology in some ways,”

“But our obligation is to do what we’ve always done — which is apply the tools we have to these changing technologies, make sure that we have the expertise to do that effectively, but to not be scared off by the idea that this is a new revolutionary technology, and dig right in on protecting people,”

Khan, Slaughter, and fellow commissioner Alvaro Bedoya testified before the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce. Their testimony covered more than artificial intelligence.

Their written testimony detailed the FTC’s work to protect consumers from technology-related harms. That work includes tackling spam phone calls, warning online home buyer Opendoor about false claims regarding sale prices, going after members of the crypto community for deceptive practices, safeguarding health data collected by websites and apps, addressing COPPA violations by Epic Games (creator of Fortnite), ordering Chegg to better protect personal information, fighting junk fees and making it easier for customers to cancel subscriptions, and addressing deceptive practices in the gig economy.

The agency established the Office of Technology in February to support its law enforcement and policy efforts. The office provides in-house technical expertise, allowing the FTC to keep pace with technological advances.

The FTC highlighted the Office of Technology’s focus on data security and privacy, digital marketplaces, augmented and virtual reality, the gig economy, and ad tracking technology, in addition to “automated decision-making,” which could include artificial intelligence.

The FTC said:

“The creation of the Office of Technology builds on the FTC’s efforts over the years to expand its in-house technological expertise, and it brings the agency in line with other leading antitrust and consumer protection enforcers around the world,”

Responsible use of AI technology can help mitigate risks and protect consumers from fraud and other malicious activity. As AI advances and becomes more ubiquitous, we must prioritize ethical considerations and work together to ensure these powerful tools are used for the greater good.

Source: TechCrunch
