OpenAI, the artificial intelligence research company behind the widely acclaimed natural language processing tool ChatGPT, has released a new, admittedly “not fully reliable” tool for detecting AI-generated text.
Warning: AI technology can be convenient and powerful, but using and creating it carries risks. By making this tool publicly available, OpenAI hopes to help users better understand how these technologies operate so they can make informed decisions when using or creating them.
The tool, created by the OpenAI research laboratory behind ChatGPT, aims to determine whether a piece of text was written by a machine, although OpenAI notes that its reliability is still limited.
The new classifier attempts to distinguish text written by a human from text generated by AI, including the popular ChatGPT. The company announced the addition in a recent blog post.
According to OpenAI researchers, a good classifier can identify signs of AI-written text and could be employed in scenarios such as detecting academic dishonesty or exposing AI chatbots posing as humans, although they cautioned that it is not possible to guarantee detection of all text written this way.
In evaluations, the classifier, while not completely trustworthy, correctly identified 26% of AI-written English texts. On the other hand, 9% of human-written texts were mistakenly labeled as the work of automated writing tools.
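To make those figures concrete: a 26% detection rate and a 9% false-positive rate are computed from a labeled evaluation set. The sketch below is purely illustrative, with a made-up sample set and counts chosen to reproduce the reported percentages; it is not OpenAI's actual evaluation code.

```python
# Illustrative only: how detection rates like the reported 26% and 9%
# are computed from labeled evaluation data. Sample data is made up.

def detection_rates(samples):
    """samples: list of (is_ai_written, flagged_as_ai) boolean pairs."""
    ai = [flagged for is_ai, flagged in samples if is_ai]
    human = [flagged for is_ai, flagged in samples if not is_ai]
    tpr = sum(ai) / len(ai)        # share of AI texts correctly flagged
    fpr = sum(human) / len(human)  # share of human texts wrongly flagged
    return tpr, fpr

# Hypothetical set: 50 AI-written texts (13 flagged), 100 human-written (9 flagged).
samples = [(True, i < 13) for i in range(50)] + [(False, i < 9) for i in range(100)]
tpr, fpr = detection_rates(samples)
print(f"AI texts correctly flagged: {tpr:.0%}")   # 26%
print(f"Human texts wrongly flagged: {fpr:.0%}")  # 9%
```

The asymmetry matters in practice: a low false-positive rate is what keeps a detector from wrongly accusing human writers, even at the cost of missing most AI-written text.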
“Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems,” the company wrote.
ChatGPT’s opening to public access has sparked a wave of concern among educational institutions worldwide that the tool could be used to cheat in exams or assessments.
Lecturers in the UK are being urged to rethink how their courses are assessed. Several universities have banned the use of AI in assessments and returned to pen-and-paper exams.
At Deakin University in Australia, one lecturer noted that roughly one-fifth of the assessments they had marked over the Australian summer showed signs of AI assistance.
Many scientific journals have banned the use of ChatGPT in papers. OpenAI acknowledged that the classifier has flaws, including unreliability on texts of fewer than 1,000 characters and a tendency to mislabel some human-written work as computer-generated.
OpenAI also notes that the classifier is suitable only for English text, performing considerably worse in other languages. The same limitation applies to code, where its reliability is low.
OpenAI suggested that the classifier should not be the sole basis for judging where a text originated, but that it could be used to complement other methods of determining its origin.
OpenAI has invited educational institutions to share their experiences of using ChatGPT in classrooms, a call intended to gather insights that will help shape the technology and improve its effectiveness.
Last month, the three main universities in South Australia updated their policies to permit the use of AI such as ChatGPT, provided its use is disclosed. In contrast, most institutions have opted to ban it.
While it is a positive step that ChatGPT’s maker OpenAI has released a tool to help identify AI-generated content, the tool is not yet fully reliable, leaving clear room for improvement. As more and more companies use AI for content generation, better ways of identifying this type of content will need to be developed.
Source: The Guardian