Rethinking AI in Education: The Unintended Consequences of AI Detection Tools


In the rapidly evolving world of artificial intelligence (AI), we are constantly faced with new challenges and ethical dilemmas. One such issue has recently been highlighted by a study covered in The Guardian, which reveals a concerning bias in AI detection tools, particularly against non-native English speakers.

These AI detection tools are designed to identify whether a piece of text was written by a human or generated by an AI. They are increasingly used in academic and professional settings to prevent what some consider a new form of cheating: using AI to write essays or job applications. However, the study found that these tools often incorrectly flag work produced by non-native English speakers as AI-generated.

The researchers tested seven popular AI text detectors on 91 English essays written by non-native speakers for the Test of English as a Foreign Language (TOEFL). More than half of these essays were incorrectly identified as AI-generated. In stark contrast, when essays written by native English-speaking eighth graders in the US were tested, over 90% were correctly identified as human-written.

The bias seems to stem from how these detectors assess what is human and what is AI-generated. They use a measure called “text perplexity”, which gauges how “surprised” or “confused” a generative language model is when trying to predict the next word in a sentence. Large language models like ChatGPT are trained to produce low perplexity text, which means that if humans use a lot of common words in a familiar pattern in their writing, their work is at risk of being mistaken for AI-generated text. This risk is greater with non-native English speakers, who are more likely to adopt simpler word choices.
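To make the idea concrete, here is a minimal sketch of how perplexity can be computed. Real detectors score text with a large language model that predicts each next word from context; this toy version uses a hand-made unigram probability table (the word probabilities and vocabulary size below are invented for illustration), but it shows the same core effect: text built from common, high-probability words scores a lower perplexity than text full of rare words.

```python
import math

def perplexity(tokens, probs, vocab_size):
    """Perplexity of a token sequence under a unigram model.

    Tokens missing from the table get a small fallback probability
    (1 / vocab_size). Lower perplexity means the model finds the
    text less "surprising" -- the property detectors associate
    with AI-generated writing.
    """
    log_prob = 0.0
    for tok in tokens:
        p = probs.get(tok, 1.0 / vocab_size)
        log_prob += math.log(p)
    # Perplexity is the exponentiated average negative log-probability.
    return math.exp(-log_prob / len(tokens))

# Hypothetical unigram probabilities, for illustration only.
probs = {
    "the": 0.07, "is": 0.04, "good": 0.02, "very": 0.02,
    "exquisite": 0.0001, "ineffable": 0.00005,
}

common = "the good is very good".split()     # simple, familiar words
rare = "the ineffable is exquisite".split()  # unusual word choices

print(perplexity(common, probs, vocab_size=50_000))
print(perplexity(rare, probs, vocab_size=50_000))
```

Running this, the sentence made of common words yields a much lower perplexity than the one with rare words, even though both were "written" by a human. That is exactly the failure mode described above: a non-native speaker's plainer vocabulary pushes their score toward the low-perplexity region that detectors treat as machine-like.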

The implications of these findings are serious. AI detectors could falsely flag college and job applications as AI-generated, and they could marginalize non-native English speakers online, since search engines such as Google downgrade content they assess as AI-generated. In education, non-native students face a greater risk of false accusations of cheating, which can be detrimental to a student's academic career and psychological well-being.

In light of these findings, Jahna Otterbacher at the Cyprus Center for Algorithmic Transparency at the Open University of Cyprus suggests a different approach. Instead of fighting AI with more AI, we should develop an academic culture that promotes the use of generative AI in a creative, ethical manner. She warns that AI models like ChatGPT, which are constantly learning from public data, will eventually learn to outsmart any detector.

This study serves as a reminder that as we continue to integrate AI into our lives, we must remain vigilant about its potential unintended consequences. It’s crucial that we continue to question and scrutinize the tools we use, especially when they have the potential to discriminate or cause harm. As we move forward, let’s ensure that our use of AI in education and other sectors is not only innovative but also fair and ethical.

For more details, you can read the full article here.


Thanks for taking the time to read this post. If you’ve enjoyed the insights and stories, consider showing your support by subscribing to my weekly newsletter. It’s a great way to stay updated and dive deeper into my content. Alternatively, if you love audiobooks or want to try them, click here to start your free trial with Audible. Your support in any form means the world to me and helps keep this blog thriving. Looking forward to connecting with you more!