A recent academic paper titled “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT” explored the challenges artificial intelligence poses to academic honesty and plagiarism detection. Ironically, the paper itself was written by ChatGPT, a fact unknown to readers and peer reviewers. Professor Debby Cotton of Plymouth Marjon University, listed as the lead author, said the aim was to demonstrate ChatGPT’s strong writing ability and to warn that universities are in an “arms race” with rapidly improving AI technology.
Universities have long battled essay mills that sell pre-written assignments to students, but now even these services appear to be using AI tools like ChatGPT. Many institutions admit they are struggling to identify and discipline students who submit AI-generated work. Some universities in the UK are already planning to expel students caught using ChatGPT to cheat.
Thomas Lancaster, a computer scientist at Imperial College London, said universities were “panicking” because it is difficult to prove a text was written by a machine. ChatGPT often produces grammatically correct and fluent writing that can surpass that of some students. However, Lancaster noted one giveaway: the chatbot’s poor grasp of academic referencing, which often results in fabricated or unreliable citations. To make their paper convincing, Cotton and her colleagues had to manually fix ChatGPT’s references before submitting it.
Lancaster added that ChatGPT could handle early undergraduate work but would struggle with advanced, specialized writing such as dissertations. He emphasized that students relying on AI would eventually be exposed as their coursework became more complex.
In response, universities such as Bristol have issued new guidance on detecting AI-assisted cheating. Repeat offenders could face expulsion. Professor Kate Whittington from Bristol said universities must maintain academic standards: “If you cheat your way to a degree, you might get an initial job, but your career won’t progress.”
Irene Glendinning, head of academic integrity at Coventry University, said students caught using AI inappropriately would be required to undergo training, and repeated cheating would lead to expulsion. She urged faculty to watch for writing that doesn’t match a student’s usual voice or shows “lots of facts and little critique.”
Glendinning warned that students who fail to recognize the limitations of AI tools risk exposure. In computer science, for example, AI-generated code often contains bugs that only someone with real programming knowledge could fix. She emphasized that students paying £9,250 a year were ultimately “cheating themselves” by not using their education to genuinely learn.
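Glendinning’s point about buggy AI-generated code can be made concrete. The snippet below is a hypothetical illustration, not taken from the article: a primality check of the kind an AI tool might plausibly produce, which looks correct at a glance but fails on edge cases, alongside a corrected version that a student with real programming knowledge could supply.

```python
def is_prime_buggy(n):
    """Plausible-looking but flawed: wrongly accepts 1 as prime and
    misses divisors equal to the integer square root of n."""
    if n < 1:
        return False
    for i in range(2, int(n ** 0.5)):  # bug: upper bound excludes sqrt(n)
        if n % i == 0:
            return False
    return True  # bug: also returns True for n == 1


def is_prime_fixed(n):
    """Corrected: rejects n < 2 and includes sqrt(n) in the divisor check."""
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True
```

For example, `is_prime_buggy(9)` returns `True` because the loop never tests the divisor 3, while `is_prime_fixed(9)` correctly returns `False`; spotting why requires exactly the understanding Glendinning says shortcut-takers lack.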
As AI tools become more sophisticated, universities face growing pressure to adapt their policies and teaching methods. While ChatGPT can assist learning, unchecked use threatens academic integrity. The challenge for educators, therefore, is not only to detect AI misuse but also to teach students how to engage with such technologies responsibly.
1. What was the main purpose of Professor Debby Cotton’s paper?
2. According to Thomas Lancaster, why are universities “panicking” about ChatGPT?
3. What feature of ChatGPT’s writing might reveal its use, according to the text?
4. What is implied about students who rely on ChatGPT for their coursework?
5. What is the main challenge universities face according to the article?