How And Why This Teacher Has Adopted An Open ChatGPT Policy

Cheating. It’s the scourge of academia, and teachers everywhere struggle to prevent it in their classrooms. Traditional methods of curbing it rely on strict enforcement and harsh punishments, but one educator has taken a different approach: an open ChatGPT policy.

This innovative teacher wants to use technology not only to limit dishonesty but also to cultivate an environment where students are encouraged to be honest with themselves, and with each other, about their learning. Let’s explore why adopting an open ChatGPT policy may be the key to keeping students honest about their classwork and to delivering better educational outcomes for everyone involved.

Everyone Is Cheating: Why This Teacher Adopted an Open ChatGPT Policy

Ethan Mollick has a question for humans and machines alike: why can’t we coexist harmoniously?

The associate professor at the University of Pennsylvania’s Wharton School has made it clear that we are now living in an A.I. world and, as such, we must learn to share it.

Professor Ethan Mollick says:

“This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT.”

Since ChatGPT was introduced in November 2022, educators have worried that it could make cheating easier.

Some school districts have banned OpenAI’s bot over those concerns. The A.I. system can write poetry and programming code, and it may even be capable of passing an MBA exam.

A Wharton professor recently fed the questions from a core MBA course’s final exam into the chatbot and was surprised to find that, despite some mistakes in its math, it would have earned a B or B-minus.

Mollick has decided to go against the grain this year: he is requiring his students to use ChatGPT, and for the first time he has added an artificial intelligence policy to his syllabus.

He said the feedback from his entrepreneurship and innovation classes has been positive, and the transition is going very well.

Mollick continues to say:

“The truth is, I probably couldn’t have stopped them even if I didn’t require it.”

This week, he held a session inviting students to brainstorm ideas for their class project. Most participants had ChatGPT open, prompting it to generate project ideas and following up on the bot’s suggestions with further questions.

Ethan Mollick adds:

“And the ideas so far are great, partially as a result of that set of interactions.”

He readily acknowledges his conflicting feelings of excitement and apprehension about the potential impact of artificial intelligence on assessments in the classroom. Yet, he maintains that educators must keep up with modern developments.

Ethan Mollick went on to say:

“We taught people how to do math in a world with calculators.”

Educators now face the task of teaching students how the world has changed and how they can adapt to those changes.

Mollick’s new policy describes A.I. as an “emerging skill”: its output can contain errors or omissions, so students must cross-check it against other sources and remain responsible for any mistakes that result from its use.

Most importantly, students must acknowledge when and how they have used it.

“Failure to do so is in violation of academic honesty policies,” the policy reads.

Mollick isn’t the first to try to put guardrails in place for a post-ChatGPT world.

Earlier this month, 22-year-old Princeton student Edward Tian created an app to detect whether a piece of text was written by a machine. Named GPTZero, it proved so popular at launch that it crashed from overuse.

“Humans deserve to know when something is written by a human or written by a machine,” Tian told NPR of his motivation.

Mollick concurs, though he isn’t convinced that educators can ever fully stop cheating.

A survey of Stanford students found that many had already used ChatGPT on their final exams, and thousands of people in countries such as Kenya are estimated to be writing essays on behalf of international students.

Mollick went on to say:

“I think everybody is cheating … I mean, it’s happening. So what I’m asking students to do is just be honest with me.”

“Tell me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want, and that’s all I’m asking from them. We’re in a world where this is happening, but now it’s just going to be at an even grander scale.”

“I don’t think human nature changes as a result of ChatGPT. I think capability did.”

Gabe O’Connor produced the radio interview with Ethan Mollick, which Christopher Intagliata edited.

After reading this article, could an open ChatGPT policy work in your classroom? Have you ever struggled with the issue of cheating in your classroom? How did you handle it? We’d love to hear your thoughts and experiences in the comments below.

Source: NPR
