Rethinking AI in Education: The Unintended Consequences of AI Detection Tools


In the rapidly evolving world of artificial intelligence (AI), we are constantly faced with new challenges and ethical dilemmas. One such issue has recently been brought to light by a study reported in The Guardian. The study reveals a concerning bias in AI detection tools, particularly against non-native English speakers.

These AI detection tools are designed to identify whether a piece of text has been written by a human or generated by an AI. They are increasingly being used in academic and professional settings to prevent what some consider a new form of cheating – using AI to write essays or job applications. However, the study found that these tools often incorrectly flag work produced by non-native English speakers as AI-generated.

The researchers tested seven popular AI text detectors using 91 English essays written by non-native speakers. Over half of these essays, written for the Test of English as a Foreign Language (TOEFL), were incorrectly identified as AI-generated. In stark contrast, when essays written by native English-speaking eighth graders in the US were tested, over 90% were correctly identified as human-generated.

The bias seems to stem from how these detectors assess what is human and what is AI-generated. They use a measure called “text perplexity”, which gauges how “surprised” or “confused” a generative language model is when trying to predict the next word in a sentence. Large language models like ChatGPT are trained to produce low perplexity text, which means that if humans use a lot of common words in a familiar pattern in their writing, their work is at risk of being mistaken for AI-generated text. This risk is greater with non-native English speakers, who are more likely to adopt simpler word choices.
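To make the mechanism concrete, here is a minimal sketch of how a perplexity-based check might work, using GPT-2 through the Hugging Face transformers library. This illustrates the general idea only: the sample sentences are invented, and none of the detectors in the study necessarily work exactly this way.

```python
# A minimal sketch, not any real detector's implementation:
# scoring text by perplexity with GPT-2 via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over its next-token predictions.
        out = model(input_ids=enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Plain, common phrasing is easy to predict and scores low; unusual
# phrasing scores high. A naive detector that flags low-perplexity
# text as AI-generated would penalize the simpler sentence, even
# though a human wrote both.
simple = "I like school. My teacher is nice and I learn many things."
ornate = "Pedagogy, at its most luminous, resists tidy quantification."
print(f"simple: {perplexity(simple):.1f}")
print(f"ornate: {perplexity(ornate):.1f}")
```

The flaw the study points to is visible even in this toy version: predictability is a property of phrasing, not of authorship, so fluent but simple human writing can receive the same low scores that AI output does.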

The implications of these findings are serious. AI detectors could falsely flag college and job applications as AI-generated, and they could marginalize non-native English speakers online, since search engines such as Google downgrade content they assess to be AI-generated. In education, non-native students face a greater risk of being falsely accused of cheating, which can be detrimental to their academic careers and psychological well-being.

In light of these findings, Jahna Otterbacher of the Cyprus Center for Algorithmic Transparency at the Open University of Cyprus suggests a different approach. Instead of fighting AI with more AI, we should develop an academic culture that promotes the use of generative AI in a creative, ethical manner. She warns that AI models like ChatGPT, which are constantly learning from public data, will eventually learn to outsmart any detector.

This study serves as a reminder that as we continue to integrate AI into our lives, we must remain vigilant about its potential unintended consequences. It’s crucial that we continue to question and scrutinize the tools we use, especially when they have the potential to discriminate or cause harm. As we move forward, let’s ensure that our use of AI in education and other sectors is not only innovative but also fair and ethical.

For more details, you can read the full article here.



The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!

5 Questions Students Should Ask About AI-Generated Content


Do your students enjoy interacting with AI chatbots? Are they fascinated by the idea of AI-generated content, such as articles, poems, or even code? Do you want to help your students learn how to discern the difference between human and AI-generated content? If you answered yes to any of these questions, consider integrating AI literacy education into your lessons.

AI literacy expands traditional literacy to include new forms of reading, writing, and communicating. It involves understanding how AI systems work, how they generate content, and how to critically evaluate the information they produce. AI literacy empowers people to be critical thinkers and makers, effective communicators, and active citizens in an increasingly digital world.

Think of it this way: Students learn print literacy — how to read and write. But they should also learn AI literacy — how to “read and write” AI-generated messages in different forms, whether it’s a text, an article, a poem, or anything else. The most powerful way for students to put these skills into practice is through both critiquing the AI-generated content they consume and analyzing the AI-generated content they create.

So, how should students learn to critique and analyze AI-generated content? Most leaders in the AI literacy community use some version of these five key questions:

  1. Who created this AI model? Help your students understand that all AI models have creators and underlying objectives. The AI models we interact with were constructed by someone with a particular vision, background, and agenda. Encourage students to question both the messages they see and the platforms on which those messages are shared.
  2. What data was used to train this AI model? Different AI models are trained on different datasets, which can greatly influence their output. Help students recognize that a model's training data shapes what it produces, including gaps and biases that may not be obvious at first glance.
  3. How might different people interpret this AI-generated content? This question helps students consider how all of us bring our own individual backgrounds, values, and beliefs to how we interpret AI-generated messages. For any piece of AI-generated content, there are often as many interpretations as there are viewers.
  4. Which lifestyles, values, and points of view are represented — or missing? Just as we all bring our own backgrounds and values to how we interpret what we see, AI-generated messages themselves are embedded with values and points of view. Help students question and consider how certain perspectives or voices might be missing from a particular AI-generated message.
  5. Why is this AI-generated content being produced? With this question, have students explore the purpose of the AI-generated content. Is it to inform, entertain, or persuade, or could it be some combination of these? Also, have students explore possible motives behind why certain AI-generated content has been produced.

As teachers, we can think about how to weave these five questions into our instruction, helping our students think critically about AI-generated content. A few scenarios could include lessons where students interact with AI chatbots or any time we ask students to create AI-generated projects. Eventually, as we model this type of critical thinking for students, asking these questions will become second nature to them.
