Unveiling The Potential Risks Of AI Language Models – Opening A Pandora’s Box

Faisal Elali, a medical student and researcher at the State University of New York Downstate Health Sciences University, and Leena Rachid, a medical scribe and researcher at NewYork-Presbyterian/Weill Cornell Medical Center, wanted to find out whether artificial intelligence could be used to generate a research paper – and how best to detect such papers.

Artificial intelligence is playing an increasingly valuable role in scientific research. It is used to sift through complex data, but until now it has not been the tool that writes the paper itself. AI-generated articles, however, can appear credible even when the underlying study was fabricated. How reliable are such papers?

In a study featured in the open-access journal Patterns, the two researchers showed that it is possible to generate a research paper with ChatGPT, an AI-based language model. Simply by prompting it with questions, they got ChatGPT to produce multiple convincing yet entirely fabricated abstracts. Someone intent on fraud could submit such fake abstracts to numerous journals for publication.

If an abstract were accepted, the same technique could generate the entire study, complete with fictitious data, non-existent participants, and meaningless outcomes. The result could appear valid, particularly if the field is complex or the work is not reviewed by an expert.

In an earlier study referenced in the paper, human reviewers were shown research abstracts written by people and by artificial intelligence. Notably, 32% of the AI-generated abstracts were mistakenly judged to be genuine, while 14% of the human-written ones were incorrectly flagged as artificial.

The research team ran their ChatGPT-generated study through three online AI detectors. The texts were readily identified as machine-generated, which suggests that journals should consider using AI detection tools to screen out fraudulent submissions.

However, when the same text was run through a free online AI-powered rewording tool, the detectors' consensus shifted to "likely human." This indicates that more specialized AI detection tools, ones that can withstand paraphrasing, will be needed.
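The study relied on off-the-shelf online detectors, but the basic intuition behind many of them can be illustrated with a simple heuristic: machine-generated text tends to be unusually predictable to a language model, so low perplexity is a weak signal of AI authorship. The sketch below illustrates that heuristic using the open-source GPT-2 model from the Hugging Face transformers library. It is a minimal, assumed example, not the detection pipeline used by the researchers, and the threshold is an arbitrary placeholder.

```python
# Minimal perplexity-based heuristic for flagging possibly AI-generated text.
# Illustrative sketch only: real detectors are far more sophisticated, and the
# threshold below is an arbitrary assumption, not a validated cut-off.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
    """Low perplexity means the text is very 'predictable' to the model,
    a weak hint (not proof) that it was produced by a language model."""
    return perplexity(text) < threshold

# Hypothetical abstract text, for illustration only.
abstract = "We conducted a randomized trial comparing drug B with drug A ..."
print(f"Perplexity: {perplexity(abstract):.1f}")
print("Flag for manual review:", looks_machine_generated(abstract))
```

Paraphrasing tends to raise a text's perplexity, which helps explain why a simple statistical check like this is easy to evade and why more robust detection tools are needed.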

Doing science is hard labor, and writing up its specifics takes real effort. Still, even a barely furred ape can string together coherent sentences given enough caffeine and time – something the author of this article knows from experience.

Constructing a convincing fake study with enough detail to seem legitimate would normally take considerable work, including hours of research into how to appear authentic – likely too tedious for most would-be troublemakers. AI technology, however, can finish the job in minutes, making mischief far easier. The researchers voice this concern in their paper, warning that the consequences could be serious.

Consider a hypothetical, undetected fabricated study claiming, contrary to the actual evidence, that drug B should be used over drug A to treat a medical condition. Such a paper could skew meta-analyses and systematic reviews on the topic, which in turn shape healthcare policies, standards of care, and clinical recommendations. And even when fraud is discovered, pulling back citations and reprints of a retracted study is notoriously difficult.

The paper’s authors point out that, beyond straightforward malicious intent, medical professionals also face pressure to publish large numbers of papers to win research grants or advance their careers. They note that this pressure has grown since the US Medical Licensing Examination became pass/fail rather than numerically scored, pushing students who want to stand out from the crowd to rely heavily on published research.

A reliable AI detection system that can identify and screen out potentially fraudulent medical research is now more important than ever. The repercussions could be serious if such papers enter the published literature or, worse, inform how practitioners treat patients.

AI language models have long aimed to produce text indistinguishable from human writing. It should not surprise us, then, that we now need AI to detect when someone uses artificial intelligence to generate fraudulent work that is hard to tell from the real thing. What may be surprising is how soon we need it.

As this article has shown, AI language models come with real risks, ranging from data privacy issues to the perpetuation of bias. The race to build ever more accurate and powerful models should remind us to leverage advances in artificial intelligence while critically examining their ethical implications.

Any deployment of AI technology should be undertaken only after carefully weighing its potential upside against its risks. Only through open and honest dialogue among researchers, technologists, and other stakeholders can we ensure the responsible use of AI for safer and fairer real-world applications.

Source: medicalxpress.com
