Is AI Trustworthy? Investigating The ChatGPT False Accusations Against Jonathan Turley

Jonathan Turley, a Fox News contributor and law professor at George Washington University, voiced his concerns about AI’s potential for political bias and fabrication after ChatGPT falsely accused him of sexual harassment. Turley warns that AI could harm free speech if not handled properly.

Turley penned an article for USA Today questioning the trustworthiness of artificial intelligence (AI). He noted that ChatGPT had falsely accused him of sexually harassing his students, citing a Washington Post story that was never written. Similar false charges were leveled against two other law professors as well.

Turley argues that using AI and algorithms to censor gives the practice an illusion of being scientific and unbiased. In reality, he notes, AI systems and algorithms carry the same biases and flaws as the people who write their code.

Drawing on his own experience with AI-generated false accusations, the legal analyst warned that the technology can make errors that lead to serious repercussions.

Turley revealed that he had received an unusual email from a fellow law professor, Eugene Volokh, containing research ChatGPT had produced about sexual harassment committed by instructors.

According to ChatGPT’s fabricated account, The Washington Post had published an article in 2018 accusing Turley of sexual harassment, alleging that he had inappropriately touched law students during a trip to Alaska.

Turley wrote that the political targeting he has already faced, including death threats and efforts to get him fired, could worsen exponentially with the use of AI.

Turley reproduced the chatbot’s response:

“Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.’ (Washington Post, March 21, 2018).”

Turley and his colleague Volokh were stunned by the baseless claim: Turley has never taught at Georgetown, never took students on such a trip, and has never been accused of sexual harassment. Furthermore, no Washington Post article supporting these claims exists.

AI, according to Turley, serves as a “buffer” between those who try to distort reality by falsifying facts and data and the victims of that manipulation.

Turley continues:

“The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.”

Recent research has indicated that AI has been programmed with a definite political bias, prompting over a thousand tech leaders and researchers to call for a pause on its development. Mark Ottinger highlighted the importance of this demand, asserting that halting further AI development is essential.

It is understandable why many are concerned about the potential misuse of AI as a censorship tool. Bill Gates, an outspoken leftist, has endorsed its use to shut down “various conspiracy theories,” which only deepens those concerns.

While some, like Gates, endorse AI as a supposedly effective way to fight “misinformation,” AI appears to be a source of disinformation rather than a solution to it.

Democrats push algorithms for increased censorship even though they are unreliable. One has to wonder why. Is the Democratic Party trying to protect freedom of expression, or is there some hidden agenda behind these measures?

Calling on all conservatives who have been censored: reach out to us through CensorTrack’s contact form and help us fight Big Tech. Demand that Big Tech be held accountable to mirror the First Amendment, provide transparency and clarity on “hate speech,” and give conservatives equal footing. Let your representatives hear your voices: transparency is key!

The incident with ChatGPT serves as a reminder that we must be cautious and vigilant when it comes to AI and develop robust safeguards to prevent the technology from being misused or abused. While relying on AI as a quick and easy solution to complex problems is tempting, we must always remember that it is only as trustworthy as the data it is trained on and the algorithms that govern its behavior.
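One concrete, if modest, safeguard follows directly from this incident: never repeat an AI-cited source without first checking that the source exists. The sketch below is a minimal illustration of that idea in Python, not any specific fact-checking product; it assumes the third-party requests library is installed, and the URL shown is purely hypothetical. A passing check only proves a page loads, not that it actually supports the claim.

```python
import requests  # third-party HTTP library: pip install requests


def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True only if the cited URL actually loads.

    A fabricated citation, like the nonexistent 2018 Washington Post
    article ChatGPT attributed to Turley, typically has no working URL,
    so a failed lookup is a signal to treat the claim as unverified.
    """
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        return response.status_code == 200
    except requests.RequestException:
        return False


# Hypothetical example: a URL a chatbot supplied alongside a claim.
claimed_source = "https://www.washingtonpost.com/example-cited-article"
if not citation_resolves(claimed_source):
    print("Citation did not resolve; treat the claim as unverified.")
```

Even a trivial check like this would have flagged the fabricated Washington Post citation in Turley’s case, since no such article was ever published; a serious verification workflow would go further and compare the cited text against the claim itself.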

As we continue to develop and implement AI technology, we must remain mindful of its limitations and work to ensure that it is used ethically and responsibly. Ultimately, the incident with ChatGPT underscores the need for ongoing dialogue and collaboration between technologists, policymakers, and the broader public as we navigate the complex and evolving landscape of AI technology.

Source: Newsbusters
