The Best Books for Understanding AI — A Reading List for Educators and Curious Humans

Photo by Pavel Danilyuk on Pexels.com

A quick note before the list: I’ve been living in this space for a while now — as an instructional coach, a Google Certified Innovator, a doctoral student, and someone who uses AI tools daily in my actual work. The books I’m recommending here are ones I’d press into the hands of a thoughtful educator or a curious non-technical reader. This is not a developer’s reading list. If you want to build LLMs from scratch, you’re reading the wrong blog.

What I care about: understanding what these systems actually are, what they can and can’t do, what they mean for teaching and learning, and how to think clearly about the cultural and ethical questions they raise. The AI book market has exploded with hype, doom, and everything in between. Most of it isn’t worth your time. Here’s what is.


Where to Start

Co-Intelligence: Living and Working with AI — Ethan Mollick (2024)

This is the book I recommend first to every educator asking me where to begin, and it’s not particularly close. Mollick is a Wharton professor who has been using AI in his classroom since the day ChatGPT launched and writing about it — honestly and with genuine curiosity — at his Substack ever since. Unlike most AI books, this one was written by someone with actual daily practice rather than theoretical distance.

The central argument is in the title: AI as co-intelligence, not replacement intelligence. Mollick’s four rules for working with AI are practical enough to start using today and deep enough to keep thinking about. His concept of the “jagged frontier” — that AI is weirdly capable at things we’d consider hard and oddly bad at things we’d consider easy — is the single most useful mental model I’ve found for calibrating what to expect.

For educators specifically, Chapter 7 on AI in schools is worth the price of the book alone. Mollick is genuinely thoughtful about the implications for assessment, expertise development, and what we’re actually asking students to do when we assign traditional work in an era of capable AI tools. He doesn’t hand you easy answers. He asks better questions.

Worth noting: some readers already deep in this space find it a bit surface-level, and it was written in 2023, so some specifics are already dated. Read it for the framework, not the technical details.

Get Co-Intelligence


Understanding What AI Actually Is

Artificial Intelligence: A Guide for Thinking Humans — Melanie Mitchell (2019)

Still the best accessible introduction to what AI fundamentally is and isn’t. Mitchell is a professor of complexity at the Santa Fe Institute, and she brings real intellectual rigor to a topic that attracts an unusual amount of noise. This book predates the LLM explosion, which is actually part of what makes it valuable — it gives you the conceptual foundation to understand why systems like GPT surprised even the researchers who built them.

Mitchell is especially good on the gap between narrow AI capability and what we loosely call “understanding.” If you want to have an informed opinion about whether AI is “really” thinking, read this first.

Get Artificial Intelligence: A Guide for Thinking Humans


The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma — Mustafa Suleyman (2023)

This is the big-picture book. Suleyman co-founded DeepMind and Inflection AI before becoming CEO of Microsoft AI — he is, in other words, someone who has spent his career at the center of this thing. The Coming Wave is his argument that we are facing a genuine civilizational inflection point with AI (and synthetic biology), and that the window to build appropriate containment structures around these technologies is narrowing rapidly.

What distinguishes it from most AI doom-or-boom books is specificity. Suleyman doesn’t deal in vague anxieties — he makes concrete arguments about the concentration of power, economic disruption, and the structural problems of trying to regulate technology that spreads faster than governance can follow. Readable, serious, and useful for understanding why AI isn’t just a productivity story.

Get The Coming Wave


The Ethics and Alignment Problem

The Alignment Problem: Machine Learning and Human Values — Brian Christian (2020)

If you want to understand why making AI systems that reliably do what we want them to do is genuinely hard — technically, philosophically, and ethically — this is the book. Christian spent years interviewing researchers at the leading AI labs and built a rigorous, human-readable account of the problem at the center of AI safety.

The alignment problem isn’t abstract. It shows up in recommendation systems that optimize for engagement and produce radicalization. It shows up in hiring algorithms that encode historical discrimination. It shows up every time a system is optimized for a measurable proxy of what we actually want, rather than the thing itself. Christian is excellent on how this happens, why it’s hard to fix, and what the researchers working on it are actually doing.

This book complements Mollick’s more optimistic framing well. Read both.

Get The Alignment Problem


Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence — Kate Crawford (2021)

The critical perspective this list needs. Crawford, a researcher at USC and co-founder of the AI Now Institute, makes a compelling argument that AI systems are not software abstractions — they are material, political, and economic objects with real costs and embedded power dynamics. The rare earths in the hardware, the data center energy consumption, the contract workers labeling training data in difficult conditions, and the labor displacement — Crawford maps all of it.

I don’t agree with everything in this book, and Crawford’s perspective is explicitly critical rather than balanced. But the questions she raises are important and underrepresented in the mainstream AI conversation. If you’ve read Mollick and want a counterweight, this is it.

Get Atlas of AI


The History and the People

Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World — Cade Metz (2021)

The best narrative history of the deep learning revolution. Metz is a New York Times technology reporter who covers this beat obsessively, and he had remarkable access to the key figures: Geoffrey Hinton, Yann LeCun, Demis Hassabis, and the others who turned decades of dormant theory into the technology now reshaping every industry.

This is the book if you want to understand why everything changed so fast after 2012, what the competitive dynamics between labs looked like, and how the researchers themselves thought about what they were building. Reads like a thriller — the science is real, the rivalries are real, and the ethical stakes land harder when you know the people involved.

Get Genius Makers


For Educators Specifically

Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) — Salman Khan (2024)

Sal Khan founded Khan Academy. He’s also an optimist, which comes through clearly in this book. Brave New Words makes the case for AI as tutor, mentor, and educational equalizer — arguing that tools like Khanmigo can bring the one-on-one tutoring advantage (Bloom’s famous “two sigma” finding, that individual tutoring improves outcomes dramatically over classroom instruction) to every student who needs it.

I read this more critically than I read Mollick, because the institutional interests are more directly aligned with the argument. But the core vision — that AI could close genuine equity gaps in access to high-quality educational support — is worth taking seriously, and the specific examples from Khan Academy’s work are compelling. Read it alongside the Crawford book for balance.

Get Brave New Words


The Short Version

If you read only one: Mollick’s Co-Intelligence. It’s the most practical and most directly relevant to anyone working in education or doing knowledge work of any kind.

If you want the big picture: Suleyman’s The Coming Wave. The most serious argument about what’s actually at stake.

If you want the history: Metz’s Genius Makers. The best story of how we got here.

If you want the ethics: Christian’s The Alignment Problem for the technical/philosophical dimension, Crawford’s Atlas of AI for the political/material dimension.


These books sit alongside my broader reading on technology and education — if you’re interested in that context, the Zettelkasten post covers the note-taking system I use to actually hold onto what I read across all of this.



The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!

AI doesn’t write well, but neither do most people

Giles Turnbull has some thoughts on AI-generated writing:

First of all, I’ll come clean about where I stand, generally speaking: I’m an AI sceptic, especially on using AI for writing. I can see it being useful for other things – but that’s because I’m a writer, right?

I see AI generated text and most of the time, I think it’s rubbish. It’s dull, it’s derivative, it always sounds like a thousand other things I’ve read before. Because the AI has been trained on those thousands of things, all now easy to find on the internet.

But: do I think AI is quite good at making simple software, or basic web tools? Well, yeah, I have tried it for that, and I thought: “Hmm yeah this isn’t too shabby.”

And of course I would think that, wouldn’t I? I don’t know better. I’m not a software engineer.

I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession. I fall into the same trap as everyone else. I recognise, and admit to, my own bias.

Yes, using AI to do a job someone else does is fun. Ultimately, generative AI is an efficiency tool. Writing a first draft, especially for students who don’t have a lot of experience, is absolutely something AI can do for you. It will give you structure. It will help you overcome the blank page.

Should you then take up the writing task on your own? Sure. The only way to get better at writing is to write, and that holds whether your first draft came from you or from an AI.

Write more. Use whatever tools you have to get it done.

Source: gilest.org: AI and the human voice

What Teachers Need to Understand About AI and the Economy — A Reading List

Photo by Matheus Bertelli on Pexels.com

Here’s something that should be keeping school leaders up at night: 55% of recent graduates report that their academic programs didn’t prepare them to use generative AI tools in the workforce. Not just use AI well — use it at all. We are preparing students for an economy that is reorganizing itself faster than our curriculum review cycles can keep up with, and most schools are responding with either panic or denial.

The World Economic Forum’s Future of Jobs Report 2025 projects that AI will displace 92 million jobs while creating 170 million new ones — a net gain on paper, but that math only works if the people losing the 92 million jobs can access the 170 million new ones. That transition requires education, retraining, and policy infrastructure that does not currently exist at the scale needed. Young workers in AI-exposed occupations are already experiencing shifts in employment. The college wage premium has flattened. Jobs requiring AI skills now command a 56% wage premium over those that don’t — up from 25% just the year before.

This is not an abstract future problem. It is the context in which our students will graduate.

I don’t write primarily about business or economics — this site is about education, technology, and the ideas that shape both. But understanding how AI is disrupting the economy is part of understanding what we are actually preparing students for. The books below are the ones I’d put in front of any educator or school leader who wants to think more seriously about this.


The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma — Mustafa Suleyman

Get it on Amazon

Suleyman co-founded DeepMind (later acquired by Google) and Inflection AI before becoming CEO of Microsoft AI. He is, in other words, someone who has been building this technology from the ground up and who has had to think carefully about what he was building.

The Coming Wave is his argument that we are facing a genuine inflection point: AI and synthetic biology are advancing faster than governance structures can keep pace with, and the window to build appropriate containment mechanisms is closing. His central concern isn’t that AI is malevolent — it’s that the concentration of power that comes with controlling transformative technology is itself the problem, whether that power sits with corporations, governments, or both.

For educators: the chapter on economic disruption is essential reading. Suleyman doesn’t pretend the transition will be smooth. He takes seriously the question of what happens to people and communities during the displacement phase, which is precisely the phase our current students are entering.


AI Superpowers: China, Silicon Valley, and the New World Order — Kai-Fu Lee

Get it on Amazon

Lee has a unique vantage point: he worked at Apple and Microsoft, moved to Beijing to lead Google China, and then became one of China’s leading AI investors. AI Superpowers was published in 2018, and some of the specific competitive dynamics have shifted, but the core argument holds: we are in a global race for AI dominance between two different models of how AI development should work, and the outcomes of that race will have profound economic consequences at every level.

The section on job displacement is where this book becomes most directly relevant to educators. Lee argues that routine cognitive work is the most vulnerable to automation — not just manual labor — and that the categories of work that will be protected are those requiring creativity, empathy, and complex human judgment. That framing has direct implications for what we teach and why.

Read this alongside The Coming Wave for a richer picture of the geopolitical and economic forces shaping the AI landscape.


Prediction Machines: The Simple Economics of Artificial Intelligence — Ajay Agrawal, Joshua Gans & Avi Goldfarb

Get it on Amazon

Three economists from the University of Toronto built their framework around a deceptively simple claim: AI is, fundamentally, a technology that makes prediction cheaper. When prediction gets cheaper, the value of the things that complement prediction — judgment, action, data — increases. When prediction gets cheaper, the value of things that substitute for prediction — routine rule-following, low-stakes decision-making — decreases.

This framework is useful for educators because it maps directly onto a question we should be asking about curriculum: what are we teaching students that will be substituted by cheap AI prediction, and what are we teaching them that will be complemented by it? The answer has real implications for what genuinely rigorous education looks like in an AI economy. Prediction Machines is the most analytically useful book on this list for thinking through those questions.


The Age of AI: And Our Human Future — Henry Kissinger, Eric Schmidt & Daniel Huttenlocher

Get it on Amazon

An unusual collaboration: a former Secretary of State, a former Google CEO, and an MIT computer scientist thinking together about what AI means for how human societies understand the world. The book is less about the economic disruption and more about the epistemological one — the way AI systems generate outputs that humans can use without understanding how those outputs were produced, and what that does to decision-making in business, government, and education.

The argument that lands hardest for me as an educator: we have spent centuries building institutions of learning around the transmission and evaluation of human knowledge. AI is producing a new kind of knowledge — statistical, pattern-based, extraordinarily capable, and fundamentally alien to how human minds work. What does education mean in that context? This book doesn’t fully answer the question, but it asks it more precisely than most.


Power and Prediction: The Disruptive Economics of Artificial Intelligence — Ajay Agrawal, Joshua Gans & Avi Goldfarb

Get it on Amazon

The follow-up to Prediction Machines, published in 2022, moves from “here’s what AI does to economics” to “here’s how organizations and institutions will be restructured by it.” The core new argument: AI doesn’t just automate tasks; it disrupts the decision-making systems in which those tasks are embedded. That disruption creates power shifts — between professions, between institutions, between incumbents and challengers.

The education implications are direct. The authors discuss healthcare and legal services as sectors being restructured by AI-driven prediction, and the analysis applies equally to education. What happens to the teacher’s role when AI can provide personalized feedback faster and at greater scale? What happens to credentialing when AI can assess competencies that diplomas approximate? These aren’t comfortable questions, but they’re the right ones to be asking now rather than after the disruption has already happened.


The Question Underneath All of These Books

The books above are written primarily for business leaders, policymakers, and economists. That’s who they were designed for. But they all circle around a fundamentally educational question: what kind of people do we need to develop, and what do we need to prepare them for, in an economy being reorganized by AI?

Self-Determination Theory gives us part of the answer — humans are most resilient and most capable when they have genuine autonomy, a sense of competence, and meaningful connection. Those psychological needs don’t get automated. They get more important as the tasks around them do.

The Connectivist framing that the network is where knowledge lives is also useful here: in an economy where AI can provide information faster than any human, the competitive advantage lies in the quality of your connections — to ideas, to people, to problems worth solving — and in your capacity to navigate those networks with judgment. That’s what education in an AI economy should be building.

These books don’t answer those questions for us. But they describe the problem with enough precision that we can start asking the right ones.


Related on this site: the AI books post covers the books I’d recommend for understanding what AI actually is — how it works, what it can and can’t do, and what the most credible researchers think about its implications. That’s a companion list to this one.




What If Every Teacher Could Build an AI Tutor? David Wiley’s Generative Textbooks Idea Is Worth Your Attention


There’s a particular kind of idea that shows up in education technology every few years — one that sounds almost too obvious once you hear it, but that nobody had quite put together that way before. David Wiley’s work on generative textbooks is one such idea.

I’ve been following Wiley for a long time. If you’ve ever used an open textbook in a course or benefited from freely available educational materials online, there’s a good chance his fingerprints are on the infrastructure that made that possible. He’s one of the founders of the open educational resources movement — the effort to create, share, and freely adapt teaching and learning materials under open licenses. It’s unglamorous, important work that has saved students billions of dollars in textbook costs and given teachers genuine tools they can actually modify.

So when Wiley started applying that same philosophy to AI, I paid attention.


The Problem He’s Solving

The standard AI-in-education conversation goes like this: here are some tools (ChatGPT, Gemini, Claude, take your pick), and here are some ways teachers can use them. The tools belong to the companies. The teachers are users. If the company changes pricing, changes policy, or shuts down, the teacher starts over.

Wiley’s question is different: what if the instructional logic — the pedagogical intelligence built into an AI learning experience — belonged to the teacher? What if any educator could author an AI-powered learning tool without writing code, without a budget, and without surrendering control to a platform?

That’s what generative textbooks are attempting to answer.


How It Actually Works

The architecture is simpler than it sounds. A generative textbook isn’t a document — it’s a structured collection of inputs that, when assembled, tell an AI model exactly how to behave as a learning tool for a specific subject.

Here’s what an author creates:

  • A book-level prompt stub — the template that sets the AI’s voice, tone, format, and overall behavior. Think of this as the personality and ground rules of the learning experience.
  • Learning objectives — one per chapter or topic, short statements about what a learner should understand or be able to do.
  • Topic summaries — accurate, context-rich summaries written for the AI, not for students. These are what the model uses to stay grounded in accurate content rather than hallucinating.
  • Activity templates — the types of interactions available: flashcards, explanations, quiz questions, Socratic dialogue, whatever the author builds in.

When a student picks a topic and an activity type, the system assembles the relevant pieces into a single prompt and sends it to the language model, which generates a fresh, tailored learning experience — not retrieved from a database, but generated in the moment based on the author’s pedagogical structure.
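The assembly step is simple enough to sketch in code. The following Python is a hypothetical illustration of the idea only — not Wiley’s actual implementation; the class name, field names, and the biology example content are all invented for demonstration:

```python
from dataclasses import dataclass, field

@dataclass
class GenerativeTextbook:
    """Hypothetical sketch of the author-created pieces described above."""
    prompt_stub: str                                          # book-level voice, tone, ground rules
    objectives: dict[str, str] = field(default_factory=dict)  # topic -> learning objective
    summaries: dict[str, str] = field(default_factory=dict)   # topic -> grounding summary (for the AI)
    activities: dict[str, str] = field(default_factory=dict)  # activity type -> instructions

    def assemble(self, topic: str, activity: str) -> str:
        """Combine the author's pieces into one prompt for any chat model."""
        return "\n\n".join([
            self.prompt_stub,
            f"Learning objective: {self.objectives[topic]}",
            f"Grounding summary (stay within this content, do not invent facts):\n{self.summaries[topic]}",
            f"Activity: {self.activities[activity]}",
        ])

book = GenerativeTextbook(
    prompt_stub="You are a patient biology tutor. Use plain language and check understanding often.",
    objectives={"photosynthesis": "Explain how plants convert light into chemical energy."},
    summaries={"photosynthesis": "Photosynthesis occurs in chloroplasts, where light energy drives the conversion of carbon dioxide and water into glucose and oxygen."},
    activities={"quiz": "Write three multiple-choice questions, then reveal and explain the answers."},
)

prompt = book.assemble("photosynthesis", "quiz")  # paste into any chat interface
```

The point of the sketch is the separation of concerns: everything the teacher authors is plain text, and the only “code” is the join that stitches the pieces together at the moment a student makes a choice.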

As Wiley puts it: in this model, prompt engineering is instructional design. The authoring isn’t code — it’s curriculum work. That’s a meaningful distinction for teachers.


The Clever Pivot on Cost

The original prototype sent prompts through an API to open-weight language models hosted on Groq. Clean, seamless, technically elegant. Also not free — API calls cost money at scale, and Wiley found that most educators he consulted weren’t particularly concerned with whether the underlying model was “open” in the ideological sense. They were concerned with whether it was free for students.

So he made a pragmatic call: rather than routing prompts through a back-end service, the tool now assembles the prompt and copies it to the student’s clipboard. The student pastes it into whatever AI interface they already have access to — ChatGPT’s free tier, Gemini, a school-licensed model, whatever.

This is inelegant in the user-experience sense. There’s a copy-paste step that breaks the flow. Analytics become difficult. Student privacy depends on whatever tool they choose to use. Wiley is honest about all of this — he describes the project explicitly as a tech demonstration, not a finished product.

But there’s something worth noticing in the pragmatism. The decision prioritizes actual access over technical elegance. For students in districts that can’t afford platform licenses and teachers who don’t control their school’s technology budget, a tool that works with the free tier of a consumer AI product is more useful than a seamless experience behind a paywall.


Where Wiley Has Taken This Since

The generative textbook prototype was a starting point, and Wiley has kept building. His more recent thinking has evolved toward what he calls OELMs — Open Educational Language Models — a framework that combines open-licensed content with AI in a more sophisticated way.

The key addition is retrieval-augmented generation (RAG): rather than grounding the AI’s behavior in only a few paragraph-length topic summaries, an OELM includes a curated collection of OER content that the model actively retrieves from when generating responses. This makes the outputs more accurate, more traceable to specific source materials, and more trustworthy for educational use, addressing one of the genuine limitations of general-purpose language models: they can confabulate confidently.
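To make the RAG pattern concrete, here is a toy sketch of the retrieve-then-ground loop. This is my illustration of the general technique, not Wiley’s OELM code; real systems rank passages by embedding similarity rather than the crude word-overlap scoring used here:

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank OER passages by word overlap with the query.
    Production systems use embedding similarity, but the shape is the same."""
    query_words = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(question: str, oer_passages: list[str]) -> str:
    """Ground the model in retrieved source text instead of its own recall."""
    context = "\n---\n".join(retrieve(question, oer_passages))
    return (
        "Answer using only the source material below. "
        f"Say which passage you relied on.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
```

The trustworthiness gain comes from the instruction plus the retrieved context: the model is asked to answer from specific, openly licensed text, which makes its output checkable against the sources.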

The broader argument Wiley is making — that generative AI is the logical successor to OER — is worth sitting with. His claim isn’t that AI replaces open textbooks, but that the principles that made OER valuable (open licensing, participatory creation, the ability to adapt and remix) need to be extended into the AI space. As the educational materials market shifts toward AI-powered products, the question of who owns the instructional logic matters enormously for equity and access.


What This Means for Teachers

I want to be careful not to oversell where this project currently is. The generative textbooks site is live and explorable, but this is genuinely early-stage work. The copy-paste workflow has real friction. The quality of the learning experience depends heavily on the quality of the inputs a teacher creates, which means the authoring itself requires genuine pedagogical thought — garbage in, garbage out applies acutely here.

But the underlying question Wiley is raising is one I think about a lot as an instructional coach: who gets to design the learning experience, and on whose terms?

The dominant model in AI-powered education right now is platform-centric. A company builds an AI tool, schools license it, teachers become users. This mirrors exactly what happened with traditional educational technology — districts buy the LMS, teachers work inside it, the pedagogical architecture belongs to the vendor. We know how that story tends to go: cost escalation, lock-in, tools that don’t quite fit what teachers actually need because they were designed generically.

Wiley’s generative textbooks project is asking whether there’s another path — one where educators are architects rather than users. Where the instructional intelligence lives in open, adaptable, teacher-created structures rather than in proprietary platforms. Where a teacher in a school with limited resources can build a learning tool that’s as good as anything a well-funded district is paying for.

That’s not a modest ambition. And it’s not finished yet. But it’s the kind of work that tends to matter more than it seems to when it starts.


Go explore:


Related reading: my AI books post covers Ethan Mollick’s Co-Intelligence, which has useful framing for educators thinking about AI as a co-teacher rather than a replacement — a theme that runs directly through Wiley’s work.

Google’s Stitch Update: “Vibe Design” and the Shrinking Distance Between Ideas and Tools

A preview of the updated Stitch AI-design tool from Google

Google recently announced a major update to its experimental design tool, Stitch. If you haven’t heard of it before, Stitch is an AI-powered interface design tool—but this update signals something bigger than just new features.

Google is now describing Stitch as an “AI-native software design canvas”—a space where users can move from an idea to a high-fidelity interface using natural language, images, or even voice.

That shift in language matters.

What’s Actually New in This Update?

Stitch isn’t new, but this version pushes it in a different direction. A few highlights stand out.

First, Stitch is no longer framed as a traditional design tool. Instead of starting with wireframes or components, users are encouraged to begin with intent—what they want to build, how it should feel, and what it should accomplish. In practice, that means you can describe a goal and generate a working interface almost immediately.

Second, Google introduces the idea of “vibe design.” While the phrasing might feel a little buzzword-heavy, the concept is straightforward. Rather than trying to get a design right on the first attempt, users can explore multiple directions quickly and refine toward a stronger result.

Third, the updated Stitch includes a design agent that works alongside the user. This agent can reason across the entire project, suggest changes, and help explore different directions simultaneously. It shifts the process from step-by-step construction to something closer to collaboration.

Another notable addition is the introduction of DESIGN.md, an agent-friendly markdown file that captures design rules and structure. This makes it easier to move designs into other tools or continue development with AI systems without starting over.

Finally, Stitch now supports instant prototyping of user flows. Instead of static screens, users can connect interfaces and immediately experience how someone would move through the app. That ability to test ideas quickly changes the pace of iteration.

Why This Matters for Educators

At first glance, this might seem like a tool built for designers or developers. But the implications for classrooms are more immediate than they appear.

For years, we’ve asked students to design solutions to problems—create a product, propose an innovation, build something meaningful—but those ideas often remain abstract. They exist in slides, posters, or written descriptions.

Tools like Stitch begin to close that gap.

Students can take an idea—such as a tool to help track progress in Algebra 1—and generate a working interface in minutes. From there, they can evaluate it, revise it, and improve it. The work becomes more tangible, and the feedback loop becomes faster.

That shift from describing an idea to interacting with it has real potential to deepen thinking.

The Bigger Shift Underneath

What Stitch represents is part of a broader change in how creation works.

The more technical aspects of building—layout, structure, and basic interaction design—are increasingly handled by AI. That doesn’t eliminate the need for skill, but it does change where the most important thinking happens.

Instead of focusing primarily on execution, the emphasis shifts toward clearly defining problems, making intentional design decisions, and evaluating whether something is actually useful.

Those are the kinds of capacities we want students to develop, but they’re often overshadowed by the mechanics of building something from scratch.

A Quick Reality Check

This doesn’t automatically lead to better learning.

If we simply replace “make a slideshow” with “generate an app,” we haven’t meaningfully changed the task. The tool itself isn’t the innovation. The thinking behind how it’s used is what matters.

Used thoughtfully, however, tools like Stitch can support faster iteration, more visible thinking, and more authentic design work.

Try This in Your Classroom

If you’re curious about what this might look like in practice, you don’t need a full unit redesign to get started. A simple activity can open the door.

Start with a question tied to your content:

  • “What would a tool that helps students master this unit actually look like?”
  • “How could we design something that makes feedback more useful?”
  • “What would help someone learn this concept more effectively?”

Have students work individually or in small groups to:

  1. Define the purpose of their tool
  2. Describe the user (another student, themselves, a teacher)
  3. Generate a design using Stitch or another AI interface tool
  4. Review the result and critique it

Then push their thinking:

  • What works about this design?
  • What doesn’t?
  • What would you change to make it more useful?
  • How does it connect to what we know about learning?

The goal isn’t to build a perfect product. It’s to move students into a cycle of idea → prototype → critique → revision, which is where deeper learning tends to happen.

Final Thought

Google describes this update as helping users “close the gap from idea to reality in minutes rather than days.”

That may sound ambitious, but it reflects a real trend.

As that gap continues to shrink, the question for educators isn’t whether students can build things. It’s what we ask them to build—and whether those tasks are worthy of the tools now available to them.



The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!

AI as Co-Teacher or AI as Replacement?

bionic hand and human hand finger pointing
Photo by cottonbro studio on Pexels.com

“Empathy, evidently, existed only within the human community.”
— Philip K. Dick, Do Androids Dream of Electric Sheep?

There’s a moment in Philip K. Dick’s novel when the line between human and machine doesn’t shatter—it thins. The androids aren’t clumsy metallic caricatures. They’re articulate. Quick. Convincing. They can simulate emotional response so well that distinguishing them from humans requires careful testing. The danger isn’t brute force. It’s indistinguishability. It’s the subtle shift where simulation becomes “good enough,” and we stop asking what’s been replaced.

That’s what this moment in education feels like to me.

Not collapse. Not revolution. Just a quiet thinning of the line.

At Virginia Tech, a graduate course in Structural Equation Modeling nearly fell apart when the instructor unexpectedly dropped out. It was required. Students needed it to graduate. There wasn’t time to hire someone new.

Instead of postponing the course, the department tried something that would have sounded like speculative fiction even five years ago. Half of the weekly learning objectives would be taught traditionally—through textbook and human instruction. The other half would be taught entirely through ChatGPT.

Students received the same objectives either way. They completed the same assessments. And importantly, they submitted their AI chat logs along with their work so their reasoning could be examined. Every student passed.

You can read that as proof that AI can replace textbooks, maybe even instructors. Dr. Ivan Hernandez himself noted that AI can already function as a replacement for traditional textbooks and, to a certain extent, for instructors. That’s the easy interpretation, and it’s the one that will generate headlines.

But that’s not what interests me most.

What interests me is that Hernandez never surrendered the architecture.

He didn’t dissolve the classroom into a chatbot. He designed an experiment. He kept the objectives. He kept the assessments. He required documentation. He reviewed the logs. AI was allowed inside the system, but it did not define the system. The machine participated, but it did not govern.

That distinction feels subtle. It isn’t.

Because at the same time, another model of schooling is gaining attention. According to a 404 Media report, students at Alpha School reportedly complete core academic work in roughly two hours per day. AI systems deliver most of the instruction. Adults function more as guides and coaches around the edges. The pitch is efficiency, personalization, and mastery at speed.

Now we’re standing inside the tension Dick was writing about decades ago.

If a system can simulate understanding, simulate responsiveness, and simulate personalized feedback, at what point do we stop asking whether it is human-centered?


When I talk about vibrant learning, I’m not talking about colorful classrooms or surface-level engagement. I’m talking about environments where students are actively constructing meaning, forming identity, navigating networks of knowledge, and experiencing the kind of belonging that makes intellectual risk possible. Vibrant learning is relational. It’s cognitively demanding. It depends on friction. It requires the presence of other minds.

And it is, almost by definition, inefficient.

The Science of Learning and Development has made something abundantly clear: learning isn’t merely cognitive processing. It is relational and contextual. Emotion and identity are braided into cognition. Belonging isn’t a nice add-on; it’s neurological infrastructure. When students feel safe enough to wrestle with ideas, they engage in deeper processing. When they feel unseen or disconnected, their cognitive system shifts toward protection rather than exploration.

Now imagine reorganizing schooling around algorithmic instruction as the primary academic engine.

Can AI explain structural equation modeling? Absolutely. The Virginia Tech experiment clearly demonstrates that. But explanation isn’t the same thing as formation. Learning is not just absorbing information; it’s situating yourself within a community of inquiry. It’s deciding what counts as credible. It’s learning how to disagree well. It’s building intellectual humility alongside intellectual confidence.

Connectivism adds another layer. Knowledge doesn’t reside in a single authority. It lives in networks—human, digital, and cultural. Learning is the ability to form and traverse those networks. AI belongs in that web. It can extend it. It can accelerate feedback loops. It can surface patterns that would take humans far longer to see.

But networks remain generative only when no single node dominates the topology.

When most academic interaction flows through a single algorithmic system, the structure centralizes. It becomes efficient. Predictable. Optimized. And optimization is not neutral. It always reflects a priority.

In Hernandez’s classroom, AI is one node among many. Students engage with it, but their interactions are documented and subject to human evaluation. The professor remains the architect. The AI is instrumentation. That’s augmentation.

In the Alpha-style model, as it’s been described, AI becomes the instructional spine. Humans support it. That’s substitution.

The difference between augmentation and substitution isn’t technological. It’s architectural.

And architecture shapes identity.


I understand why the efficiency model is appealing. Public education is strained. Teachers are exhausted. Districts are underfunded. Families are frustrated. If someone promises individualized instruction in two focused hours a day, it feels like relief. It feels like progress. It feels like the system finally catching up to the technology that already saturates students’ lives.

But we have to ask what we’re optimizing for.

If the goal is procedural mastery at scale, AI-centered instruction makes sense. You can compress problem sets. You can adapt pacing. You can automate feedback. You can produce measurable gains efficiently.

But public education, at its best, was never solely about workforce preparation. It was about citizenship. It was about forming people who can navigate complexity, ambiguity, disagreement, and shared life. That kind of formation doesn’t thrive in compressed, frictionless environments. It depends on relational tension. It depends on encountering other minds. It depends on spaces where empathy is not simulated but practiced.

Dick’s line lingers because it names something we’re tempted to overlook: empathy exists within the human community. Machines can model tone. They can generate encouragement. They can approximate responsiveness. But vibrant learning depends on something more than approximation. It depends on shared vulnerability, on the subtle cues of presence, on the unpredictable back-and-forth that shapes identity as much as it shapes understanding.

The Virginia Tech experiment shows that AI can assist with cognition. It does not prove that AI can replace the relational architecture in which cognition becomes character.

That’s the line.

It’s thin. And it’s easy to cross without noticing.

If pedagogy remains accountable to human judgment, AI can deepen vibrant learning. It can expand networks, accelerate iteration, and free educators to focus on the uniquely human dimensions of teaching. It can serve as a co-teacher inside a human-designed ecosystem.

But if pedagogy becomes accountable to platform architecture—if efficiency and throughput quietly become the organizing principles—then vibrant learning will slowly give way to optimized progression. The system may still function. Students may still perform. But something harder to measure will thin.

An educated workforce can be trained through efficient systems.

An educated citizenry must be formed within human communities.

The question before us isn’t whether AI works. It clearly does.

The question is who remains responsible for the architecture.

If we keep that responsibility—if we treat AI as instrumentation rather than architecture—then this moment could expand what’s possible in ways that genuinely support vibrant learning. If we don’t, if we reorganize schooling around efficiency engines and call it innovation, we may find that we’ve streamlined education while quietly narrowing what it means to be educated.

The machine can assist.

But empathy, formation, and responsibility still belong within the human community.

And whether that remains true in our schools will depend on the choices we make now—quietly, structurally, and often in the name of progress.




AI Schools and the Illusion of Efficiency

close up photo of an abstract art
Photo by Marek Piwnicki on Pexels.com

A recent investigation into Alpha School, a high-tuition “AI-powered” private school, revealed faulty AI-generated lessons, hallucinated questions, scraped curriculum materials, and heavy student surveillance. Former employees described students as “guinea pigs.”

That’s the headline.

But the real issue isn’t whether one school deployed AI sloppily.

The real issue is whether we are confusing technological acceleration with educational progress.

The Seduction of the Two-Hour School Day

Alpha’s pitch is simple and powerful: compress academic learning into two hyper-efficient hours using AI tutors, then free the rest of the day for creativity and passion projects.

If you believe traditional schooling wastes time, that promise is intoxicating.

But here’s the problem:

Efficiency is not the same thing as development.

From a Science of Learning and Development (SoLD) perspective, learning is not merely the transmission of content. It is a process that integrates cognition, emotion, identity, and social context. Durable learning requires safety, belonging, agency, and meaning-making.

You cannot compress belonging into a two-hour block.

You cannot automate identity formation.

And you cannot hallucinate your way to deep understanding.

Connectivism Is Not Automation

Some defenders of AI-heavy schooling argue that we are simply witnessing the next phase of networked learning. Knowledge is distributed. AI becomes a node in the network. Personalized pathways replace one-size-fits-all instruction.

That language sounds connectivist.

But Connectivism is not about replacing human nodes with machine ones.

It is about expanding networks of meaning.

In a connectivist system:

  • Learning happens across relationships.
  • Knowledge flows through dynamic connections.
  • Judgment matters more than memorization.
  • Pattern recognition and critical filtering are essential skills.

AI can participate in that network.

But when AI becomes the primary instructional authority — generating content, generating assessments, evaluating its own outputs — the network collapses into a closed loop.

AI checking AI is not distributed intelligence.

It is recursive automation.

Connectivism requires diversity of nodes.

Not monoculture.

Surveillance Is Not Personalization

The investigation also described extensive monitoring: screen recording, webcam footage, mouse tracking, and behavioral nudges.

This is framed as personalization.

It is not.

It is optimization.

SoLD research clarifies that psychological safety and autonomy are foundational to learning. When students feel constantly watched, agency erodes. Compliance increases. Anxiety increases.

You can nudge behavior with surveillance.

You cannot cultivate intrinsic motivation that way.

If our model of learning begins to resemble corporate productivity software, we should pause.

Education is not a workflow dashboard.

The Hidden Variable: Selection Bias

To be fair, Alpha School reportedly produces strong test scores.

However, high-tuition schools serve families with financial, cultural, and educational capital. Research consistently shows that standardized test performance correlates strongly with income.

If affluent students succeed in an AI-heavy environment, that does not prove that the AI caused the success.

It may simply mean the students would succeed almost anywhere. I often say those students would succeed with a ham sandwich for a teacher.

The question is not whether AI can serve already advantaged learners.

The question is whether AI, deployed without deep pedagogical grounding, strengthens or weakens human development.

The Real Design Question

The danger is not AI itself.

The danger is designing educational systems around what AI does well.

AI does well at:

  • Drafting content
  • Generating practice questions
  • Scaling feedback
  • Recognizing surface patterns

AI does not do well at:

  • Reading emotional context
  • Building trust
  • Modeling intellectual humility
  • Navigating moral ambiguity
  • Forming identity

SoLD reminds us that learning is relational and developmental.

Connectivism reminds us that learning is networked and distributed.

If we optimize for what AI does well and marginalize what humans do uniquely well, we create a system that is efficient — but thin.

Fast — but shallow.

Impressive — but fragile.

What This Means for Public Education

This story is not merely about a private school engaging in aggressive experimentation.

It is a preview.

Every district will face pressure to:

  • Automate instruction
  • Replace textbooks with AI tutors
  • Compress seat time
  • Increase data capture

The answer cannot be a blanket rejection.

Nor can it be an uncritical adoption.

The answer is design discipline.

We should use AI to:

  • Reduce administrative drag
  • Prototype lessons
  • Support differentiated feedback
  • Expand access to expertise

But we should anchor every AI decision in two non-negotiables:

  1. Does this strengthen human relationships?
  2. Does this expand student agency and meaning-making?

If the answer is no, we are not innovating.

We are optimizing the wrong variable.

The Choice in Front of Us

We stand at a fork.

We can design AI systems around human development.

Or we can redesign human development around AI systems.

One path amplifies Connectivism, relational trust, and whole-child growth.

The other path creates compliant, monitored, hyper-efficient learners who score well but lack deep agency.

Technology will not make that choice for us.

We will.




10 Things: Week Ending August 22, 2025

Photo by Dom J on Pexels.com

We’re two weeks into the school year, and I’ve already seen some incredible examples of authentic learning in action. It’s a good reminder of Steve Wozniak’s advice: keep the main thing the main thing—and don’t sell out for something that only looks better.

This week’s newsletter rounds up 10 links worth your time, from AI and education to remote learning, punk archives, and why cell phone bans never work.

Read the full newsletter here →




We must build AI for people; not to be a person


My life’s mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I’m focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported.

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

Some thoughts from Mustafa Suleyman on building AI that doesn’t try to convince people it’s human. Or that it needs rights. Or that it’s a god.

Sadly, we’re already having those discussions.




You Might Be Trying to Replace the Wrong People with AI

I was at a leadership group and people were telling me, “We think that with AI we can replace all of our junior people in our company.” I was like, “That’s the dumbest thing I’ve ever heard. They’re probably the least expensive employees you have, they’re the most leaned into your AI tools, and how’s that going to work when you go 10 years in the future and you have no one that has built up or learned anything?”

So says Matt Garman, CEO of Amazon Web Services. A better question to ask: why wouldn’t we teach our high school students how to use AI to help them write code and solve problems more efficiently?

We live in weird times, when people constantly retreat to what came before rather than moving forward.

Life is the future, not the past.


