In late December of his sophomore year at Rutgers University, Kai Cobbs found himself wrestling with a question he had assumed was absurd: could AI actually be less intelligent than humans? After some head-scratching, he arrived at a perturbing answer: it could.
After hearing others wax poetic about the AI tool ChatGPT, Cobbs decided to test its capabilities while outlining an essay on the evolution of capitalism.
Cobbs expected the tool to craft a considered response to his prompt. To his dismay, the result was an incoherent piece of writing, well below acceptable standards, that could not be taken as authoritative.
“The quality of writing was appalling. The phrasing was awkward and it lacked complexity,” Cobbs said.
“I just logically can’t imagine a student using writing that was generated through ChatGPT for a paper or anything when the content is just plain bad.”
The launch of OpenAI’s chatbot has presented educators with a challenge: how to handle student work produced with AI assistance. Some appear to find the technology acceptable; Cobbs adamantly disagrees.
While some public school systems, like New York City’s, have banned ChatGPT on school devices and networks over concerns about cheating, universities have been hesitant to enact similar bans.
As generative AI makes its way into higher education, it has sparked debate over plagiarism and scholarly conduct. With a proliferation of new digital tools directly affecting institutions of learning, the very concept of academic integrity may need to be redefined to account for them.
The introduction of ChatGPT does not mark the first time educators have worried about improper use of the internet in academia. Rather, its arrival underscores that such concerns have long existed and must now be addressed.
Universities across the country faced a similar scramble when Wikipedia launched in 2001, racing to review and update their academic integrity policies to keep pace with the new technology.
In some professions, bots that can produce convincing writing or designs are already a reality. With that in mind, institutions of higher learning must assess the new challenge before them and update their guidelines and judgments to meet its complexities.
Plagiarism is defined as an act of appropriation: taking another person’s writing, ideas, or creative expressions without giving the original author proper credit, and passing off someone else’s work as one’s own.
According to Emily Hipchen, a member of Brown University’s Academic Code Committee, when a work is produced by artificial intelligence rather than a person, authorship can be hard to determine. The question turns on competing definitions of authorship and creative voice.
“If [plagiarism] is stealing from a person,” Hipchen said, “then I don’t know that we have a person who is being stolen from.”
Alison Daily, chair of the Academic Integrity Program at Villanova University, shares Hipchen’s concerns. Both are weighing whether text-generating algorithms could qualify as “persons” under existing definitions of plagiarism.
Universities are struggling to keep up with the technology available to students and are searching for new ways to address plagiarism. Some see ChatGPT itself as potentially part of the answer, a tool that could help campuses reduce cheating. It remains to be seen how the technology develops and whether it becomes a standard tool at universities worldwide.