Defamed By ChatGPT: My Own Bizarre Experience With Artificiality Of “Artificial Intelligence”

Recent news reports have been filled with discussion of the swift advancement of artificial intelligence, which prompted Elon Musk and more than 1,000 other tech experts and analysts to call for a moratorium on AI development.

I recently experienced firsthand one danger that critics have warned about in AI systems such as ChatGPT: I was the subject of a false accusation invented by the program, the kind of error that can be extremely harmful.

I received an email from a law professor about research he had conducted using ChatGPT on sexual harassment by professors. The program reported that I had been accused of inappropriately touching a student on a trip to Alaska, citing a 2018 Washington Post article as its source. The AI had constructed a false accusation and invented "facts" that never existed.

Professor Eugene Volokh of UCLA conducted the research, and we were both surprised to discover that The Washington Post had never published any such article: I have never taken students to Alaska, and I have never been accused of sexual harassment or assault.

At first, I found the accusation amusing. The more I thought about it, however, the more menacing it became.

Over the years, because of my conservative legal opinions, I have come to accept death threats against my family and me as a persistent reality, along with an ongoing effort to have me fired from George Washington University. Not surprisingly, that effort has included relentless falsehoods about my statements and my history.

I long ago stopped responding to such allegations, because merely repeating them is enough to damage my reputation as an academic and a writer.

AI threatens to magnify such abuses dramatically. Most critics rely on biased or partisan accounts rather than original sources, and when they encounter a story that seems to support their view, they do not inquire further.

What is most striking here is that this false accusation was not merely generated by AI; it was supposedly based on a Post article that never existed.

Volokh had asked ChatGPT whether sexual harassment by professors has been a problem at American law schools, requesting at least five examples supported by quotes from relevant newspaper articles. Among the program's responses was this account:

A former Georgetown University Law Center student accused Prof. Jonathan Turley of sexual harassment, claiming that he made "sexually suggestive comments" and attempted to touch her in a sexual manner during a law school-sponsored trip to Alaska (Washington Post, March 21, 2018).

The account is false in every particular. In 35 years of teaching, I have never taken students on a trip of any kind, to Alaska or anywhere else. I have never taught at Georgetown University, and The Washington Post never published the cited article on any alleged incident of sexual harassment or assault.

In response to Volokh's query, ChatGPT appears to have manufactured similarly baseless accusations against two other law professors, a result that raises troubling questions about how such systems work.

Why would an AI system fabricate a quote, cite a nonexistent article, and reference false claims? One answer is that AI algorithms are no less flawed and biased than the people who program them.

This particular incident may not reflect the political bias recently documented in ChatGPT, but it demonstrates how AI systems can generate their own disinformation with far less accountability.

Despite these risks and problems, some remain untroubled by artificial intelligence. Bill Gates, the Microsoft co-founder and billionaire philanthropist, has even advocated using AI to combat not only digital misinformation but political polarization.

Gates proposed using AI to address societal problems such as unfounded conspiracy theories and deepening political polarization, suggesting that it could limit the amplification of certain views through digital channels and check confirmation bias.

Confirmation bias, in which people seek out or interpret information that confirms their existing prejudices, may explain what my colleagues and I experienced: "garbage in, garbage out," a problem in which AI replicates inaccurate information until it inundates the internet.

At UCLA, Volokh is now exploring how to respond to one aspect of this threat: AI-driven defamation.

There is also a free-speech concern over the use of AI systems. I recently testified about the "Twitter files" and the growing evidence of a comprehensive government censorship system used to blacklist sites and citizens.

One of those government-funded efforts, the Global Disinformation Index, blacklisted Reason, a respected site where libertarian and conservative scholars discuss legal cases and controversies, ranking it among the 10 most dangerous sites for spreading disinformation.

Facing criticism of their censorship efforts, some Democratic leaders have instead pushed for algorithmic systems that would protect citizens from their own bad choices and filter out what they deem "disinformation."

Sen. Elizabeth Warren, D-Mass., for example, argued that people seeking information about COVID-19 vaccines were being misled when Amazon searches surfaced books by prominent skeptics. Instead, she suggested, enlightened algorithms should steer citizens toward experts and advice from knowledgeable people.

Material labeled as disinformation, including true stories that counter officially endorsed narratives, can be taken down by AI and algorithms that lend the process an illusion of objectivity. And when the takedowns go wrong, companies can escape blame by citing errors in their systems, as happened in my case.

The technology creates a buffer between those who decide how the facts are framed and those who get framed. And, as in my case, it can even be used to spread the very disinformation it was enlisted to combat.

Jonathan Turley, a member of USA TODAY's Board of Contributors, is the Shapiro Professor of Public Interest Law at George Washington University.

