How To Write Engaging Emails For Students With ChatGPT

In the wake of the recent shooting at Michigan State University, a college made a well-intentioned effort to communicate with its students about the tragedy. The approach backfired, however: the email was drafted with the AI language model ChatGPT, and the message was criticized as insensitive and lacking a human touch.

The incident has raised important questions about the role of artificial intelligence in communication and the potential risks and challenges of relying on such technology for sensitive and complex topics.

The email was sent on Thursday by the Office of Equity, Diversity and Inclusion at Peabody College, part of Vanderbilt University in Nashville, Tennessee.

The five-paragraph email, addressed to the “Peabody Family,” described the Michigan State shooting as a heartbreaking reminder of how essential it is to look out for one another and to ensure everyone feels welcome.

It went on to say that, as members of the Peabody campus, the community must reflect on the consequences of such an event and act to ensure it is doing everything it can to create an environment where everyone feels safe and included.

The message read with an almost mechanical tone, and a note at the bottom made clear that it had been generated by OpenAI’s ChatGPT.

An associate dean at Peabody later sent a follow-up email, reported by The Vanderbilt Hustler student newspaper, apologizing for the office’s “poor judgement.”

Nicole Joseph, the associate dean, acknowledged that the email carried an admirable message of inclusivity, but said that using ChatGPT to write communications for Peabody College, particularly in a time of grief following a tragedy, did not reflect the college’s values.

She continued: “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”

The email sparked outrage among many students. Laith Kayat, a Vanderbilt University senior whose sister attends Michigan State University, told The Vanderbilt Hustler: “It’s incredibly ironic that you would use a computerized message about solidarity and togetherness when it seems like you can’t be bothered to think about it yourself.”

Peabody College and Vanderbilt University did not immediately respond to Insider’s requests for comment.

The recent incident involving the use of ChatGPT by a college to write an email to students about the Michigan State University shooting highlights the potential risks and challenges associated with relying on AI language models for sensitive and complex communication.

While AI language models like ChatGPT can be efficient and helpful for many tasks, they carry limitations and biases. As this case showed, the language used in the email struck some recipients as insensitive and potentially harmful, reflecting the limitations and biases that may have been present in the model’s training data.

Moving forward, organizations and individuals need to approach AI language models with caution and critical thinking, especially when dealing with sensitive or emotionally charged topics. It’s also crucial to consider the potential impacts of the language used by AI models on diverse groups of people and to take steps to ensure that any communication is respectful and inclusive.
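
For teams that still want to experiment with AI-assisted drafting of routine, non-sensitive announcements, the sketch below shows one way to keep a person in the loop. It is a minimal illustration only, not the workflow anyone at Peabody used: it assumes the openai Python package (v1+ client), an OPENAI_API_KEY environment variable, and an illustrative model name, and it refuses to release a draft without an explicit human sign-off and an AI-disclosure line.

```python
# Minimal sketch: generate a first draft with the OpenAI API, then force a
# human review step and an AI-assistance disclosure before anything is sent.
# Assumptions (not from the article): the `openai` package (v1+ interface),
# OPENAI_API_KEY set in the environment, and the model name used here.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def draft_student_email(topic: str) -> str:
    """Ask the model for a first draft only; a person must edit and approve it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; use whatever model your account offers
        messages=[
            {
                "role": "system",
                "content": "You draft campus emails to students. Be factual, warm, and concise.",
            },
            {"role": "user", "content": f"Draft a short email to students about: {topic}"},
        ],
    )
    return response.choices[0].message.content


def release_after_human_review(draft: str) -> str | None:
    """Never send automatically: show the draft, require typed approval,
    and append a disclosure that AI assisted with the draft."""
    print(draft)
    answer = input("Has a person reviewed, edited, and approved this draft? (yes/no) ")
    if answer.strip().lower() != "yes":
        print("Not released: AI-assisted drafts require human review before delivery.")
        return None
    return draft + "\n\n(This message was drafted with AI assistance and reviewed by staff.)"


if __name__ == "__main__":
    final_text = release_after_human_review(
        draft_student_email("updated library hours during exam week")
    )
    if final_text:
        # Hand final_text to your normal email system here.
        print(f"Approved draft ready to send ({len(final_text)} characters).")
```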

Source: Business Insider Africa
