OpenAI Launches Tool to Identify AI-Generated Text

OpenAI has unveiled a new AI detection tool that could change how we differentiate “artificial” from “human” writing. The release from the leading artificial intelligence lab could reshape how written content is evaluated online.

The tool examines patterns in a piece of text to determine whether it was written by a human or generated by machine learning algorithms.

The technology could have a significant impact on industries such as tech and journalism, where it may help curb fraudulent content production and confirm that online stories are authentic.

OpenAI has developed and released an AI Text Classifier, built on its own language model technology, that attempts to detect whether a given piece of content was machine-generated by tools such as ChatGPT.

In its blog post, OpenAI explains that the AI Text Classifier is a carefully trained GPT model that estimates the probability that a text was composed by an AI system, with sources such as ChatGPT in mind.
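As a rough sketch of what such a classifier does, the snippet below scores a passage as likely human- or machine-written. It does not call OpenAI's new AI Text Classifier itself; instead it uses the company's earlier open-source GPT-2 output detector (a fine-tuned RoBERTa model published on Hugging Face), purely to illustrate the same idea.

```python
# Illustrative sketch only: this uses OpenAI's earlier open-source GPT-2 output
# detector, not the new AI Text Classifier, to show how a fine-tuned language
# model can assign a "human vs. machine" judgment to a passage of text.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",  # OpenAI's GPT-2 output detector
)

sample = (
    "OpenAI has released a classifier that tries to estimate how likely it is "
    "that a passage of text was produced by an AI system such as ChatGPT."
)

result = detector(sample)[0]
# The label names and the confidence score come from the model's own config;
# as with OpenAI's newer tool, longer inputs generally give the detector more
# signal to work with.
print(f"label={result['label']} score={result['score']:.2f}")
```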

OpenAI released the new tool after many universities and K-12 school districts banned its ChatGPT chatbot over the technology's capacity to complete students' assignments, such as writing book reports, essays, and even programming homework.

ChatGPT has been restricted in the K-12 public school districts of New York City, Seattle, Los Angeles, and Baltimore, and educational institutions in France and India have prohibited use of the program on student computers.

BleepingComputer tested OpenAI's new AI text classifier and found its results mixed and, at times, inconclusive.

When the classifier was run against BleepingComputer's own articles, it readily recognized that they had been composed by a human rather than a machine.

Testing OpenAI's AI Text Classifier on BleepingComputer content

However, when assessing text produced by ChatGPT and You.com's AI chatbot, the classifier was far less certain, struggling to conclude that the passages were machine-generated even though they had been created through artificial intelligence.

OpenAI cautions against relying solely on the new AI Text Classifier when checking whether students cheated on homework assignments, stressing that the tool should not be treated as definitive proof of academic dishonesty.

OpenAI says:

“Our classifier is not fully reliable.”

“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”

“Our classifier’s reliability typically improves as the length of the input text increases.”
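To put those figures in perspective, a quick back-of-the-envelope calculation shows why OpenAI's caution matters. The 20% share of AI-written submissions used below is an assumed example, not a number from OpenAI:

```python
# Back-of-the-envelope illustration of OpenAI's quoted evaluation numbers.
# Assumption (not from the article): 20% of submitted essays are AI-written.
tpr = 0.26      # AI-written text correctly flagged as "likely AI-written"
fpr = 0.09      # human-written text incorrectly flagged as AI-written
ai_rate = 0.20  # assumed share of AI-written submissions

flagged_ai = ai_rate * tpr            # AI-written essays that get flagged
flagged_human = (1 - ai_rate) * fpr   # human-written essays wrongly flagged
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flagged essays that are actually AI-written: {precision:.0%}")
```

Under those assumptions, only about 4 in 10 flagged essays would actually be AI-written, and roughly three quarters of AI-written essays would slip through unflagged, which is why OpenAI warns against treating the classifier's verdict as proof.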

With further training data, the classifier may become a more reliable tool for detecting AI-generated content, but for now its accuracy is not assured.

While this may seem like a welcome development for anyone trying to spot AI-generated text, there are potential dangers. For example, someone could use a detector like this to tune a bot until its output passes as human-written, leaving us unable to tell whether we are talking to a human or a machine. The same underlying technology could also be put to less-than-honorable uses, such as creating fake news articles. Only time will tell how the tool will be used and what its societal implications will be.

Source: bleepingcomputer.com

 
