ChatGPT vs Students: Study Reveals Who Writes Better

AI-generated essays don’t yet live up to the efforts of real students, according to new research from the University of East Anglia (UK). A new study published in Written Communication compared the work of 145 real students with essays generated by ChatGPT. While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area: they lacked a personal touch.

As the line between human and machine writing continues to blur, the study underlines the importance of fostering critical literacy and ethical awareness in the digital age. It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognizing machine-generated essays.

Prof Ken Hyland, from UEA’s School of Education and Lifelong Learning, said: “Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments.

“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts.

“In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers.”

Lack of 'personal touch'

The research team analysed 145 essays written by real university students and another 145 generated by ChatGPT. “We were particularly interested in looking at what we called ‘engagement markers’ like questions and personal commentary,” said Prof Hyland. “We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive. They were full of rhetorical questions, personal asides, and direct appeals to the reader – all techniques that enhance clarity, build connection with the reader, and produce a strong argument.”

"The ChatGPT essays on the other hand, while linguistically fluent were more impersonal. The AI essays mimicked academic writing conventions but they were unable to inject text with a personal touch or to demonstrate a clear stance. They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic. This reflects the nature of its training data and statistical learning methods, which prioritize coherence over conversational nuance,” he added.

No algorithm can teach students how to think

Despite its shortcomings, the study does not dismiss the role of AI in the classroom. Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts. “When students come to school, college or university, we’re not just teaching them how to write, we’re teaching them how to think, and that’s something no algorithm can replicate,” added Prof Hyland.

StepUp Note

This research highlights the difference between original human thinking and AI-generated summaries in written language. If the purpose of writing is to summarize existing knowledge or information, AI is very useful. However, this research article shows a clear advantage for human writers when the goal is to engage the reader in the thoughts and feelings of the writer. Verbal communication creates the foundation for written communication, and StepUp to Learn exercises help children practice thinking and talking in ways that encourage the development of useful written language.

Note by Nancy W. Rowe, MS, CCC/A

Reposted from University of East Anglia
