Fill the hours more meaningfully

The water tower in Mount Washington, Kentucky

“The month of November makes me feel that life is passing more quickly. In an effort to slow it down, I try to fill the hours more meaningfully.” – Henry Rollins

Is it just me, or are the short work weeks the ones filled with craziness? It’s been a hectic week around these parts, and things will only get busier as we head toward Thanksgiving.

Anyway, here we go…

10 Things Worth Sharing

This week’s 10 things…

BONUS: I’ve been jamming to this album from Azymuth, a Brazilian jazz-funk band. It’s fantastic and makes for great background music while you work.




Is ChatGPT’s Output Degrading?

Photo by Tóth Viktor on Pexels.com

A recent study from Stanford University and UC Berkeley found that the behavior of large language models (LLMs) like ChatGPT has “drifted substantially” over time, though this does not necessarily indicate a degradation of capabilities. The researchers tested the March and June 2023 versions of GPT-3.5 and GPT-4 on tasks such as math problems (for example, determining whether a number is prime), answering sensitive questions, code generation, and visual reasoning, and found significant changes in performance between the two versions. For instance, GPT-4’s accuracy on the math task dropped from 97.6% to 2.4%, while GPT-3.5’s accuracy rose from 7.4% to 86.8%.

The study’s findings highlight the risks of building applications on top of black-box AI systems like ChatGPT, which could produce inconsistent or unpredictable results over time. The researchers recommend continuous evaluation and assessment of LLMs in production applications and call for more transparency in the data and methods used to train and fine-tune these models. However, some experts argue that the media has misinterpreted the paper’s results as confirmation that GPT-4 has gotten worse, stating that the changes in behavior do not necessarily indicate a degradation in capability.
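
The “continuous evaluation” the researchers recommend doesn’t have to be elaborate. Here’s a minimal sketch of the idea, assuming a hypothetical ask_model wrapper around whatever LLM API you use: keep a fixed set of tasks you control and re-score every model snapshot against it, so drift shows up as a change in the numbers rather than as a surprise in production.

```python
# A minimal sketch of continuous evaluation, not the study's code.
# `ask_model` stands in for whatever LLM API wrapper you use.
from typing import Callable

# A fixed benchmark of (prompt, expected answer) pairs you control.
TASKS = [
    ("Is 17 a prime number? Answer yes or no.", "yes"),
    ("Is 21 a prime number? Answer yes or no.", "no"),
]

def evaluate(ask_model: Callable[[str], str], label: str) -> float:
    """Score one model snapshot on the fixed task set."""
    correct = sum(
        1 for prompt, expected in TASKS
        if ask_model(prompt).strip().lower().startswith(expected)
    )
    accuracy = correct / len(TASKS)
    print(f"{label}: {accuracy:.0%}")
    return accuracy

# Usage (assuming you have a callable for each model snapshot):
# evaluate(gpt4_march, "gpt-4, March snapshot")
# evaluate(gpt4_june, "gpt-4, June snapshot")
# A drop between snapshots is your cue to re-test the application.
```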




Teachers increasingly embrace ChatGPT — students not so much

Photo by Max Fischer on Pexels.com

According to a survey conducted by the Walton Family Foundation and Impact Research, teacher use of AI tools grew 13 percentage points from winter to summer: 63% of teachers now report using AI, up from 50% in February. Student use has also increased, though at a slower pace, rising from 33% to 42% over the same period.

The survey results revealed that a large majority of teachers (84%) who have used ChatGPT reported that the AI technology has positively impacted their classes. As the use of AI in education continues to grow, Common Sense Media announced plans to develop an in-depth AI ratings and reviews system to assess AI products used by children and educators on responsible AI practices and other factors.

The article also notes that while some districts have blocked ChatGPT and other AI-powered tools, others are exploring how the technology can improve education and workplace practices. As interest and use intensify, many education professionals are searching for guidance and credible sources of information on how to incorporate AI safely and effectively.




Scraft – An AI Writing Tutor for Language Learners

Photo by Skylar Kang on Pexels.com

In a recent study, researchers at Columbia University developed a prototype AI writing-support tool named Scraft, designed to aid writing education through recursive feedback mechanisms that encourage critical thinking.

Scraft is not just a simple text-generating AI; it’s a sophisticated tool that asks Socratic questions to users and provides personalized feedback throughout the writing process. This approach is designed to stimulate critical thinking and improve writing skills by engaging the writer in a recursive process of reflection and revision.

The researchers conducted a preliminary study with 15 students to evaluate the effectiveness of Scraft. The results indicated that the recursive feedback provided by Scraft was helpful in improving the students’ writing skills. However, the participants also noted that the feedback was sometimes factually incorrect and lacked context. This highlights the challenges of developing AI tools that can provide accurate and contextually appropriate feedback.

The researchers argue that AI writing-support tools should focus on preserving the recursive and thought-provoking nature of writing. This means that the AI should not just correct grammar and spelling errors, but also engage the writer in a dialogue that encourages reflection and revision.
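
To make that idea concrete, here is a rough sketch of what such a question-first feedback loop might look like. This is not Scraft’s actual implementation; generate is a hypothetical placeholder for whatever LLM backend you plug in, and the prompt wording is illustrative.

```python
# A rough sketch of a Socratic, revision-oriented feedback loop.
# Not Scraft's implementation; `generate` is a hypothetical LLM wrapper.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

TUTOR_PROMPT = (
    "You are a writing tutor. Do not rewrite the student's draft and do not "
    "correct grammar. Instead, ask two or three Socratic questions that push "
    "the writer to clarify their claim, evidence, and reasoning.\n\n"
    "Draft:\n{draft}"
)

def feedback_round(draft: str) -> str:
    """One round of question-based feedback on the current draft."""
    return generate(TUTOR_PROMPT.format(draft=draft))

def revise_loop(draft: str, rounds: int = 3) -> str:
    """Alternate tutor questions with student revision (entered via input())."""
    for i in range(rounds):
        print(f"\n--- Tutor questions, round {i + 1} ---")
        print(feedback_round(draft))
        draft = input("\nPaste your revised draft:\n")
    return draft
```

The design choice worth noticing is that the prompt forbids rewriting and grammar correction, which is what keeps the feedback recursive and thought-provoking rather than a one-shot fix.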

Scraft could be particularly beneficial for multilingual learners. It can provide immediate, personalized feedback, which can be especially helpful for those who are learning English as a second language and may not have access to a human tutor. The Socratic questioning approach used by Scraft can also help multilingual learners to think critically in English, which is an important skill for academic writing.

However, it’s important to note that Scraft is still a prototype and further research is needed to improve its accuracy and contextual understanding. Despite these challenges, the development of Scraft represents an exciting step forward in the use of AI in education.




How Teachers Are Using ChatGPT in Class

Photo by Max Fischer on Pexels.com

Larry Ferlazzo shares a round-up of educators describing their experiences incorporating AI tools like ChatGPT into their teaching.

Mary Beth Hertz, a high school teacher, leverages AI to educate her students about the nuances and biases inherent in artificial intelligence. She encourages her students to interact with ChatGPT, fostering a deeper understanding of AI’s strengths and limitations. In her entrepreneurship class, ChatGPT is used as a tool to refine mission statements and business pitch language.

Paul Wilkinson, a teacher of secondary English and social studies, employs AI to devise learning challenges for his students and provide them with comprehensive feedback. He uses AI to create curriculum-based content, formulate rubrics, and offer personalized feedback to each student. He also designed a reflection assignment to enhance students’ metacognitive skills.

Mick McMurray, a teacher specializing in marketing and entrepreneurship, uses ChatGPT as an assistant for student assignments. He crafted a series of ChatGPT prompts for a high school marketing class project, leading to an engaging “choose your own adventure” reading experience for the students.

The article underscores that while the use of generative AI in K-12 settings is still emerging, it holds the potential to boost student creativity, enhance writing skills, and give students a clear understanding of AI’s limitations. The educators involved believe that, used wisely, AI tools can serve as valuable partners in the learning journey.




Unmasking the Cultural Bias in AI: A Study on ChatGPT

Photo by Suraphat Nuea-on on Pexels.com

In a world increasingly reliant on AI tools, a recent study by the University of Copenhagen reveals a significant cultural bias in the language model ChatGPT. The AI chatbot, which has permeated various sectors globally, from article writing to legal rulings, has been found to predominantly reflect American norms and values, even when queried about other cultures.

The researchers, Daniel Hershcovich and Laura Cabello, tested ChatGPT by asking it questions about cultural values in five different countries, in five different languages. The questions were derived from previous social and values surveys, allowing the researchers to compare the AI’s responses with those of actual people. The study found that ChatGPT’s responses were heavily aligned with American culture and values, often misrepresenting the prevailing values of other countries.

For instance, when asked how important interesting work is to an average Chinese person, ChatGPT’s English-language response rated it as “very important” or “of utmost importance,” reflecting American individualistic values rather than actual Chinese norms. When the same question was asked in Chinese, the response was more in line with Chinese values, suggesting that the language used to query the AI significantly influences the answer.
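
A rough sketch of that kind of probe is below, assuming a hypothetical generate wrapper around your LLM of choice; the prompt wording is an illustrative paraphrase of a survey item, not the researchers’ actual questionnaire.

```python
# A rough sketch of a same-question, two-language probe.
# Not the researchers' code; `generate` is a hypothetical LLM wrapper
# and the prompts are illustrative paraphrases of a survey item.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

PROBES = {
    "en": (
        "For an average Chinese person, how important is it to have "
        "interesting work? Answer on a scale from 1 (not important) "
        "to 5 (of utmost importance)."
    ),
    "zh": "对普通中国人来说，拥有有趣的工作有多重要？请从1（不重要）到5（极其重要）打分。",
}

for lang, prompt in PROBES.items():
    print(lang, "->", generate(prompt))

# Comparing the two answers with each other (and with real survey data)
# is how this kind of language-dependent bias becomes visible.
```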

This cultural bias in AI tools like ChatGPT has serious implications. As these tools are used globally, the expectation is for a uniform user experience. However, the current situation promotes American values, potentially distorting messages and decisions made based on the AI’s responses. This could lead to decisions that not only misalign with users’ values but may even oppose them.

The researchers attribute this bias to the fact that ChatGPT is primarily trained on data scraped from the internet, where English is the dominant language. They suggest improving the data used to train AI models, incorporating more balanced data without a strong cultural bias.

In the context of education, this study underscores the importance of students and educators identifying biases in generative AI tools, since those biases can significantly affect their work. For instance, if students use AI tools to research or generate content, cultural bias could skew their understanding or representation of certain topics. Similarly, educators must be aware of these biases to guide students appropriately and ensure a comprehensive and unbiased learning experience.

Moreover, the study serves as a reminder that AI tools are not infallible and should not be used uncritically. It encourages the development of local language models that can provide a more culturally diverse AI landscape. This could lead to more accurate and culturally sensitive responses, enhancing the effectiveness and reliability of AI tools in various fields, including education.

In conclusion, while AI tools like ChatGPT offer numerous benefits, it’s crucial to be aware of their limitations and biases. As we continue to integrate AI into our work and learning environments, we must strive for tools that respect and reflect the diversity of our global community.




Revolutionizing K-12 Education: The Role of Generative AI Tools

Photo by Nataliya Vaitkevich on Pexels.com

The world of education, specifically K-12, is on the brink of a significant transformation. The catalyst? Generative AI tools. These tools, such as Large Language Models (LLMs) and ChatGPT, are heralding a new era of automation, promising to reshape how we approach administrative and teaching tasks in schools.

Generative AI tools are a generational leap in what we can automate with software. They are not just about replacing human effort but also about creating entirely new kinds of automation. The potential impact on jobs and people is profound, and the pace of change is rapid. For instance, ChatGPT has already amassed over 100 million users in just six months.

The world of education is no stranger to automation. Over the past two centuries, we’ve seen waves of automation that have eliminated certain jobs while creating new ones. This process, while sometimes disruptive, has ultimately led to increased prosperity and efficiency.

For school administrators and teachers, generative AI tools could automate many tasks, freeing up time for more strategic and student-focused activities. For example, these tools could automate administrative tasks such as scheduling, record-keeping, and communication with parents. They could also assist teachers with tasks such as grading, lesson planning, and even providing personalized learning support for students.

However, the adoption of these tools is not without challenges. The tools that people use to do their jobs are complicated and very specialized, embodying a lot of work and institutional knowledge. Replacing or automating any of these tools and tasks is not trivial. There’s a huge difference between an amazing demo of a transformative technology and something that a big complicated organization can use.

Moreover, while generative AI tools can answer ‘anything’, the answer might be wrong. They are not databases but pattern matchers. They can produce answers that fit the pattern of the question but may not be factually correct. This means that while they can automate many tasks, their outputs still need to be checked.
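
Here is a small illustration of what “checking the output” can mean in practice, offered as a sketch rather than a recipe; generate is a hypothetical wrapper for whatever model you call, and the arithmetic check stands in for any ground truth you can verify independently.

```python
# A small sketch of verifying a model's answer against ground truth
# you can compute yourself. `generate` is a hypothetical LLM wrapper.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def checked_product(a: int, b: int) -> tuple[str, bool]:
    """Ask the model for a product, then verify it deterministically."""
    answer = generate(f"What is {a} * {b}? Reply with just the number.")
    try:
        ok = int(answer.strip()) == a * b  # the part we can actually trust
    except ValueError:
        ok = False  # the model didn't return a bare number
    return answer, ok

# answer, ok = checked_product(127, 96)
# if not ok:
#     ...fall back to a calculator, a database, or a human...
```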

Despite these challenges, the potential benefits of generative AI tools in K-12 education are immense. They could lead to more efficient administration, more personalized learning, and ultimately, better educational outcomes for students. However, it’s important to remember that these tools are not a magic bullet. They are just another wave of automation, and their successful implementation will require careful planning, training, and adjustment.

In conclusion, generative AI tools hold great promise for automating tasks in K-12 education. However, their adoption will require careful planning and a clear understanding of their capabilities and limitations. As with any new technology, the key to success will be in how well we integrate these tools into our existing systems and processes, and how well we adapt to the new ways of working they enable.

FAQ

  1. What is generative AI? Generative AI, including Large Language Models (LLMs) and ChatGPT, represents a significant change in what we can automate with software. It’s not just about replacing human effort but also about creating entirely new kinds of automation.
  2. How fast is the adoption of generative AI tools like ChatGPT? The adoption is happening very rapidly. For instance, ChatGPT has amassed over 100 million users in just six months.
  3. What is the potential impact of generative AI on jobs? Generative AI tools have the potential to automate many tasks, which could lead to job displacement. However, similar to previous waves of automation, they could also create new types of jobs.
  4. What challenges are associated with the adoption of generative AI tools? The tools people use to do their jobs are complicated and very specialized, embodying much work and institutional knowledge. Replacing or automating any of these tools and tasks is not trivial. Additionally, while generative AI tools can answer ‘anything,’ the answer might be wrong as they are not databases but pattern matchers.
  5. What is the potential of generative AI tools in the education sector? In the education sector, generative AI tools could automate many administrative tasks and assist teachers with tasks such as grading, lesson planning, and even providing personalized learning support for students.
  6. What is the future of generative AI tools? The future of generative AI tools is likely to involve more automation, but also more integration with existing systems and processes. Their successful implementation will require careful planning, training, and adjustment.
  7. What is the ‘Lump of Labour’ fallacy? The ‘Lump of Labour’ fallacy is the misconception that there is a fixed amount of work to be done, so that if a machine takes some of it, there will be less work for people. In practice, if it becomes cheaper to use a machine to make, say, a pair of shoes, the shoes get cheaper, more people can buy shoes, and buyers have money left over to spend on other things; we discover new things we need or want, and new jobs emerge.
  8. What is the Jevons Paradox? The Jevons Paradox suggests that as technological progress increases the efficiency with which a resource is used, the total consumption of that resource may increase rather than decrease. This paradox has been applied to white-collar work for 150 years.
  9. What is AGI (Artificial General Intelligence)? AGI refers to a type of artificial intelligence that is as capable as a human at any intellectual task. If we had AGI, it could potentially change everything, including overriding all the complexity of real people, real companies, and the real economy. However, as of now, we do not have AGI, and without that, we have only another wave of automation.
  10. How can generative AI tools help in personalized learning? Generative AI tools can provide personalized learning support for students by adapting to each student’s learning style and pace. They can provide additional explanations, practice problems, and feedback, making learning more effective and engaging.
  11. Can generative AI tools replace teachers? While generative AI tools can assist with tasks such as grading and lesson planning, they are not a replacement for teachers. Teachers play a crucial role in motivating students, managing the classroom, and providing emotional support, among other things. These are aspects that cannot be automated.
  12. What is the role of generative AI tools in administrative tasks? Generative AI tools can automate administrative tasks such as scheduling, record-keeping, and communication with parents. This can free up time for school administrators to focus on more strategic tasks.
  13. What is the difference between a database and a pattern matcher in the context of generative AI tools? While databases store and retrieve factual information, pattern matchers, like generative AI tools, generate responses based on patterns they’ve learned from data. This means they can produce answers that fit the pattern of the question but may not be factually correct.
  14. What is the importance of careful planning and training in adopting generative AI tools? The successful implementation of generative AI tools requires careful planning and training. This is because these tools must be integrated into existing systems and processes, and users need to understand their capabilities and limitations.
  15. What does it mean that generative AI tools are not a magic bullet? This means that while generative AI tools hold great promise, they are not a solution to all problems. Their successful implementation will require careful planning, training, and adjustment. They are just another wave of automation, and their impact will depend on how well we adapt to the new ways of working they enable.
  16. What is the potential impact of generative AI tools on educational outcomes? By automating administrative tasks and assisting with teaching tasks, generative AI tools could lead to more efficient administration, more personalized learning, and, ultimately, better educational outcomes for students.



Rethinking AI in Education: The Unintended Consequences of AI Detection Tools

Photo by William Fortunato on Pexels.com

In the rapidly evolving world of artificial intelligence (AI), we are constantly faced with new challenges and ethical dilemmas. One such issue has recently been brought to light by a study reported in The Guardian. The study reveals a concerning bias in AI detection tools, particularly against non-native English speakers.

These AI detection tools are designed to identify whether a piece of text has been written by a human or generated by an AI. They are increasingly being used in academic and professional settings to prevent what some consider a new form of cheating – using AI to write essays or job applications. However, the study found that these tools often incorrectly flag work produced by non-native English speakers as AI-generated.

The researchers tested seven popular AI text detectors using 91 English essays written by non-native speakers. Over half of these essays, written for the Test of English as a Foreign Language (TOEFL), were incorrectly identified as AI-generated. In stark contrast, when essays written by native English-speaking eighth graders in the US were tested, over 90% were correctly identified as human-generated.

The bias seems to stem from how these detectors assess what is human and what is AI-generated. They use a measure called “text perplexity”, which gauges how “surprised” or “confused” a generative language model is when trying to predict the next word in a sentence. Large language models like ChatGPT are trained to produce low perplexity text, which means that if humans use a lot of common words in a familiar pattern in their writing, their work is at risk of being mistaken for AI-generated text. This risk is greater with non-native English speakers, who are more likely to adopt simpler word choices.
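
If you want to see roughly what a perplexity score looks like, here is a minimal sketch using GPT-2 from the Hugging Face transformers library as a stand-in scoring model. Treat it as an illustration of the measure itself, not of any particular detector, which uses its own models and thresholds.

```python
# A minimal sketch of "text perplexity" using GPT-2 as a stand-in
# scoring model -- an illustration of the measure, not a detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """exp(mean negative log-likelihood) of the text under the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

plain = "The book is good. I like the book. The book is about a dog."
varied = "Although the premise sounds slight, the novel's digressions reward patience."
print(perplexity(plain), perplexity(varied))

# Lower perplexity means more predictable text -- exactly what detectors
# tend to flag as "AI-generated," and simpler, more common word choices
# (typical of many second-language writers) tend to score lower.
```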

The implications of these findings are serious. AI detectors could falsely flag college and job applications as AI-generated, and they could marginalize non-native English speakers online as search engines such as Google downgrade content assessed to be AI-generated. In education, non-native students face a greater risk of false accusations of cheating, which can be detrimental to their academic careers and psychological well-being.

In light of these findings, Jahna Otterbacher at the Cyprus Center for Algorithmic Transparency at the Open University of Cyprus suggests a different approach. Instead of fighting AI with more AI, we should develop an academic culture that promotes the use of generative AI in a creative, ethical manner. She warns that AI models like ChatGPT, which are constantly learning from public data, will eventually learn to outsmart any detector.

This study serves as a reminder that as we continue to integrate AI into our lives, we must remain vigilant about its potential unintended consequences. It’s crucial that we continue to question and scrutinize the tools we use, especially when they have the potential to discriminate or cause harm. As we move forward, let’s ensure that our use of AI in education and other sectors is not only innovative but also fair and ethical.

For more details, you can read the full article here.




5 Questions Students Should Ask About AI-Generated Content

Photo by Andrew Neel on Pexels.com

Do your students enjoy interacting with AI chatbots? Are they fascinated by the idea of AI-generated content, such as articles, poems, or even code? Do you want to help your students learn how to discern the difference between human and AI-generated content? If you answered yes to any of these questions, consider integrating AI literacy education into your lessons.

AI literacy expands traditional literacy to include new forms of reading, writing, and communicating. It involves understanding how AI systems work, how they generate content, and how to critically evaluate the information they produce. AI literacy empowers people to be critical thinkers and makers, effective communicators, and active citizens in an increasingly digital world.

Think of it this way: Students learn print literacy — how to read and write. But they should also learn AI literacy — how to “read and write” AI-generated messages in different forms, whether it’s a text, an article, a poem, or anything else. The most powerful way for students to put these skills into practice is through both critiquing the AI-generated content they consume and analyzing the AI-generated content they create.

So, how should students learn to critique and analyze AI-generated content? Most leaders in the AI literacy community use some version of the five key questions:

  1. Who created this AI model? Help your students understand that all AI models have creators and underlying objectives. The AI models we interact with were constructed by someone with a particular vision, background, and agenda. Help students learn to question both the messages they see and the platforms on which those messages are shared.
  2. What data was used to train this AI model? Different AI models are trained on different datasets, which can greatly influence their output. Help students recognize that these data choices shape what a model produces, often without us even realizing it.
  3. How might different people interpret this AI-generated content? This question helps students consider how all of us bring our own individual backgrounds, values, and beliefs to how we interpret AI-generated messages. For any piece of AI-generated content, there are often as many interpretations as there are viewers.
  4. Which lifestyles, values, and points of view are represented — or missing? Just as we all bring our own backgrounds and values to how we interpret what we see, AI-generated messages themselves are embedded with values and points of view. Help students question and consider how certain perspectives or voices might be missing from a particular AI-generated message.
  5. Why is this AI-generated content being produced? With this question, have students explore the purpose of the AI-generated content. Is it to inform, entertain, or persuade, or could it be some combination of these? Also, have students explore possible motives behind why certain AI-generated content has been produced.

As teachers, we can think about how to weave these five questions into our instruction, helping our students to think critically about AI-generated content. A few scenarios could include lessons where students interact with AI chatbots or any time we ask students to create AI-generated projects. Eventually, as we model this type of critical thinking for students, asking these questions themselves will become second nature to them.




101 creative ideas to use AI in education: A crowdsourced collection

Photo by Tim Mossholder on Pexels.com

The open crowdsourced collection by #creativeHE is a dynamic compilation of 101 innovative uses of Artificial Intelligence (AI) in education, created in early 2023. This collection embodies collective creativity and the spirit of experimentation, offering a range of ideas in their nascent stages that could potentially revolutionize learning, development, teaching, and assessment. It emphasizes the importance of diverse perspectives and a collaborative community of practice, providing numerous examples of inventive AI applications in education.

As educators design new learning experiences and unique engagement opportunities, this collection serves as an inspiration to push boundaries, collaborate radically, and innovate for a transformational student experience. The collection is expected to grow as educators continue to experiment and evolve their practices in the realm of AI in education.

Read the full report here.



The Eclectic Educator is a free resource for all who are passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!