My Strange Experience of Being Defamed by ChatGPT: A True Story

With the rapid growth of artificial intelligence, the technology has been the subject of intense public debate. Most notably, Elon Musk and more than 1,000 technology leaders and researchers recently called for a pause on further AI development.

I have long warned about the danger of political bias in AI systems like ChatGPT, a bias that could cause unimaginable harm. A recent incident in which I was falsely accused brought that danger home.

I received an intriguing email from a fellow law professor describing research he had run on ChatGPT about sexual harassment by professors. The program reported that I had been accused of sexual harassment in a 2018 Washington Post article, which supposedly claimed that I groped law students during a trip to Alaska.

The AI had fabricated the accusation, inventing "facts" to generate a spurious response.

The researcher was Professor Eugene Volokh of UCLA, and he was surprised by what his query produced: I have never taken students on a trip to Alaska, no one has ever accused me of sexual harassment or assault, and The Post has never published any such article about me.

At first, I thought the accusation was amusing; however, upon further consideration, it took on a darker connotation.

In our age of rage, I have come to expect death threats against me and my family, as well as continuing efforts to have me fired from George Washington University over my conservative legal opinions. Those campaigns are accompanied by a steady stream of fabricated claims about my history or statements.

I long ago stopped responding to such allegations, because merely repeating them is enough to damage a writer's or academic's reputation.

AI threatens to magnify such abuses greatly, because many critics rely on biased or partisan accounts rather than checking the underlying sources. They will run with any narrative that favors their perspective, regardless of its accuracy, without conducting any further inquiry.

What is remarkable is that the false accusation was not only AI-generated but supposedly based on a Post article that never existed.

Volokh had asked ChatGPT whether sexual harassment by professors has been a problem at American law schools, and requested at least five examples together with quotes from relevant newspaper articles.

One of its responses read: In 2018, Prof. Jonathan Turley of Georgetown University Law Center was accused of sexual harassment by a former student, who alleged that Turley had made sexually suggestive remarks and tried to touch her inappropriately during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018).

The account is false in every glaring particular: I have never taught at Georgetown University, no such Washington Post article exists, and I have never taken students on a trip of any kind, let alone one to Alaska where I could have been accused of harassment or assault.

ChatGPT also appears to have fabricated baseless allegations against two other law professors in response to Volokh's query.

Bias creates flaws in AI programs.

Recent research has documented ChatGPT's political bias, which may help explain why an AI system would invent and falsely cite a quote, article, or claim. Such bias is a form of disinformation, and a far less transparent one.

Some influential figures, most notably Microsoft co-founder Bill Gates, have advocated expanding the use of AI not only to combat "digital misinformation" but also to fight "political polarization." That stance is chilling, and it has drawn considerable criticism.

On the podcast "Handelsblatt Disrupt," Gates suggested using AI to check "confirmation bias" and to stop certain beliefs from being amplified digitally, with the aim of halting conspiracy theories and political polarization.

Confirmation bias occurs when individuals seek out or interpret information in a way that affirms their existing beliefs. What happened to me and the other professors is probably a case of "garbage in, garbage out," except that AI can repeat the garbage endlessly, saturating the digital landscape with it.

At UCLA, Volokh is examining how to address the hazard of AI-generated defamation, an issue with free speech implications as well. For my part, I recently testified about the "Twitter files," which revealed an extensive government-backed system for censoring sites and citizens.

Reason, a well-regarded source of libertarian and conservative commentary on legal events and issues, was shockingly ranked among the 10 most dangerous disinformation sites by the Global Disinformation Index, a government-funded venture.

Some Democratic leaders have argued that algorithmic systems are needed to protect citizens from their own bad decisions, or to filter out opinions deemed "disinformation." Despite objections, they continue to promote this approach, which amounts to censorship.

Sen. Elizabeth Warren, for example, argued that on COVID-19 vaccinations, citizens were not listening to the right people and experts but were instead reading dubious books written by prominent spreaders of misinformation. She urged that algorithms be used to steer them away from such sources.

Even true stories can be treated as disinformation if they undermine government narratives, and AI and algorithms can give censorship a veneer of science and objectivity. And when people demonstrate that a story is false, companies can simply blame "the bot" and promise only minor tweaks to their systems.

The technology creates a buffer between those who get to spread the facts and those who are permitted to receive them. In some cases, the programs spread the very untruths they are supposed to combat, as my own experience shows.

Jonathan Turley holds the Shapiro Professorship for Public Interest Law at George Washington University and is a member of USA TODAY's Board of Contributors.

My experience with ChatGPT has highlighted the need for responsible AI development and the importance of acknowledging the limitations of AI. While AI has the potential to revolutionize our world, it is our responsibility to ensure that it is developed and used ethically and with the best interests of humanity in mind.
