AI as Co-Teacher or AI as Replacement?


“Empathy, evidently, existed only within the human community.”
— Philip K. Dick, Do Androids Dream of Electric Sheep?

There’s a moment in Philip K. Dick’s novel when the line between human and machine doesn’t shatter—it thins. The androids aren’t clumsy metallic caricatures. They’re articulate. Quick. Convincing. They can simulate emotional response so well that distinguishing them from humans requires careful testing. The danger isn’t brute force. It’s indistinguishability. It’s the subtle shift where simulation becomes “good enough,” and we stop asking what’s been replaced.

That’s what this moment in education feels like to me.

Not collapse. Not revolution. Just a quiet thinning of the line.

At Virginia Tech, a graduate course in Structural Equation Modeling nearly fell apart when the instructor unexpectedly dropped out. It was required. Students needed it to graduate. There wasn’t time to hire someone new. Instead of postponing the course, the department tried something that would have sounded like speculative fiction even five years ago. Half of the weekly learning objectives would be taught traditionally—through textbook and human instruction. The other half would be taught entirely through ChatGPT. Students received the same objectives either way. They completed the same assessments. And importantly, they submitted their AI chat logs along with their work so their reasoning could be examined. Every student passed.

You can read that as proof that AI can replace textbooks, maybe even instructors. Dr. Ivan Hernandez himself noted that AI can already function as a replacement for traditional textbooks and, to a certain extent, for instructors. That’s the easy interpretation, and it’s the one that will generate headlines.

But that’s not what interests me most.

What interests me is that Hernandez never surrendered the architecture.

He didn’t dissolve the classroom into a chatbot. He designed an experiment. He kept the objectives. He kept the assessments. He required documentation. He reviewed the logs. AI was allowed inside the system, but it did not define the system. The machine participated, but it did not govern.

That distinction feels subtle. It isn’t.

Because at the same time, another model of schooling is gaining attention. According to a 404 Media report on Alpha School, students complete core academic work in roughly two hours per day. AI systems deliver most of the instruction. Adults function more as guides and coaches around the edges. The pitch is efficiency, personalization, and mastery at speed.

Now we’re standing inside the tension Dick was writing about decades ago.

If a system can simulate understanding, simulate responsiveness, and simulate personalized feedback, at what point do we stop asking whether it is human-centered?


When I talk about vibrant learning, I’m not talking about colorful classrooms or surface-level engagement. I’m talking about environments where students are actively constructing meaning, forming identity, navigating networks of knowledge, and experiencing the kind of belonging that makes intellectual risk possible. Vibrant learning is relational. It’s cognitively demanding. It depends on friction. It requires the presence of other minds.

And it is, almost by definition, inefficient.

The Science of Learning and Development has made something abundantly clear: learning isn’t merely cognitive processing. It is relational and contextual. Emotion and identity are braided into cognition. Belonging isn’t a nice add-on; it’s neurological infrastructure. When students feel safe enough to wrestle with ideas, they engage in deeper processing. When they feel unseen or disconnected, their cognitive system shifts toward protection rather than exploration.

Now imagine reorganizing schooling around algorithmic instruction as the primary academic engine.

Can AI explain structural equation modeling? Absolutely. The Virginia Tech experiment clearly demonstrates that. But explanation isn’t the same thing as formation. Learning is not just absorbing information; it’s situating yourself within a community of inquiry. It’s deciding what counts as credible. It’s learning how to disagree well. It’s building intellectual humility alongside intellectual confidence.

Connectivism adds another layer. Knowledge doesn’t reside in a single authority. It lives in networks—human, digital, and cultural. Learning is the ability to form and traverse those networks. AI belongs in that web. It can extend it. It can accelerate feedback loops. It can surface patterns that would take humans far longer to see.

But networks remain generative only when no single node dominates the topology.

When most academic interaction flows through a single algorithmic system, the structure centralizes. It becomes efficient. Predictable. Optimized. And optimization is not neutral. It always reflects a priority.

In Hernandez’s classroom, AI is one node among many. Students engage with it, but their interactions are documented and subject to human evaluation. The professor remains the architect. The AI is instrumentation. That’s augmentation.

In the Alpha-style model, as it’s been described, AI becomes the instructional spine. Humans support it. That’s substitution.

The difference between augmentation and substitution isn’t technological. It’s architectural.

And architecture shapes identity.


I understand why the efficiency model is appealing. Public education is strained. Teachers are exhausted. Districts are underfunded. Families are frustrated. If someone promises individualized instruction in two focused hours a day, it feels like relief. It feels like progress. It feels like the system finally catching up to the technology that already saturates students’ lives.

But we have to ask what we’re optimizing for.

If the goal is procedural mastery at scale, AI-centered instruction makes sense. You can compress problem sets. You can adapt pacing. You can automate feedback. You can produce measurable gains efficiently.

But public education, at its best, was never solely about workforce preparation. It was about citizenry. It was about forming people who can navigate complexity, ambiguity, disagreement, and shared life. That kind of formation doesn’t thrive in compressed, frictionless environments. It depends on relational tension. It depends on encountering other minds. It depends on spaces where empathy is not simulated but practiced.

Dick’s line lingers because it names something we’re tempted to overlook: empathy exists within the human community. Machines can model tone. They can generate encouragement. They can approximate responsiveness. But vibrant learning depends on something more than approximation. It depends on shared vulnerability, on the subtle cues of presence, on the unpredictable back-and-forth that shapes identity as much as it shapes understanding.

The Virginia Tech experiment shows that AI can assist with cognition. It does not prove that AI can replace the relational architecture in which cognition becomes character.

That’s the line.

It’s thin. And it’s easy to cross without noticing.

If pedagogy remains accountable to human judgment, AI can deepen vibrant learning. It can expand networks, accelerate iteration, and free educators to focus on the uniquely human dimensions of teaching. It can serve as a co-teacher inside a human-designed ecosystem.

But if pedagogy becomes accountable to platform architecture—if efficiency and throughput quietly become the organizing principles—then vibrant learning will slowly give way to optimized progression. The system may still function. Students may still perform. But something harder to measure will thin.

An educated workforce can be trained through efficient systems.

An educated citizenry must be formed within human communities.

The question before us isn’t whether AI works. It clearly does.

The question is who remains responsible for the architecture.

If we keep that responsibility—if we treat AI as instrumentation rather than architecture—then this moment could expand what’s possible in ways that genuinely support vibrant learning. If we don’t, if we reorganize schooling around efficiency engines and call it innovation, we may find that we’ve streamlined education while quietly narrowing what it means to be educated.

The machine can assist.

But empathy, formation, and responsibility still belong within the human community.

And whether that remains true in our schools will depend on the choices we make now—quietly, structurally, and often in the name of progress.



The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!

AI Schools and the Illusion of Efficiency


A recent investigation into Alpha School, a high-tuition “AI-powered” private school, revealed faulty AI-generated lessons, hallucinated questions, scraped curriculum materials, and heavy student surveillance. Former employees described students as “guinea pigs.”

That’s the headline.

But the real issue isn’t whether one school deployed AI sloppily.

The real issue is whether we are confusing technological acceleration with educational progress.

The Seduction of the Two-Hour School Day

Alpha’s pitch is simple and powerful: compress academic learning into two hyper-efficient hours using AI tutors, then free the rest of the day for creativity and passion projects.

If you believe traditional schooling wastes time, that promise is intoxicating.

But here’s the problem:

Efficiency is not the same thing as development.

From a Science of Learning and Development (SoLD) perspective, learning is not merely the transmission of content. It is a process that integrates cognition, emotion, identity, and social context. Durable learning requires safety, belonging, agency, and meaning-making.

You cannot compress belonging into a two-hour block.

You cannot automate identity formation.

And you cannot hallucinate your way to deep understanding.

Connectivism Is Not Automation

Some defenders of AI-heavy schooling argue that we are simply witnessing the next phase of networked learning. Knowledge is distributed. AI becomes a node in the network. Personalized pathways replace one-size-fits-all instruction.

That language sounds connectivist.

But Connectivism is not about replacing human nodes with machine ones.

It concerns the expansion of networks of meaning.

In a connectivist system:

  • Learning happens across relationships.
  • Knowledge flows through dynamic connections.
  • Judgment matters more than memorization.
  • Pattern recognition and critical filtering are essential skills.

AI can participate in that network.

But when AI becomes the primary instructional authority — generating content, generating assessments, evaluating its own outputs — the network collapses into a closed loop.

AI checking AI is not distributed intelligence.

It is recursive automation.

Connectivism requires diversity of nodes.

Not monoculture.

Surveillance Is Not Personalization

The investigation also described extensive monitoring: screen recording, webcam footage, mouse tracking, and behavioral nudges.

This is framed as personalization.

It is not.

It is optimization.

SoLD research clarifies that psychological safety and autonomy are foundational to learning. When students feel constantly watched, agency erodes. Compliance increases. Anxiety increases.

You can nudge behavior with surveillance.

You cannot cultivate intrinsic motivation that way.

If our model of learning begins to resemble corporate productivity software, we should pause.

Education is not a workflow dashboard.

The Hidden Variable: Selection Bias

To be fair, Alpha School reportedly produces strong test scores.

However, high-tuition schools serve families with financial, cultural, and educational capital. Research consistently shows that standardized test performance correlates strongly with income.

If affluent students succeed in an AI-heavy environment, that does not prove that the AI caused the success.

It may simply mean the students would succeed almost anywhere. I often say those students would succeed with a ham sandwich for a teacher.

The question is not whether AI can serve already advantaged learners.

The question is whether AI, deployed without deep pedagogical grounding, strengthens or weakens human development.

The Real Design Question

The danger is not AI itself.

The danger is designing educational systems around what AI does well.

AI does well at:

  • Drafting content
  • Generating practice questions
  • Scaling feedback
  • Recognizing surface patterns

AI does not do well at:

  • Reading emotional context
  • Building trust
  • Modeling intellectual humility
  • Navigating moral ambiguity
  • Forming identity

SoLD reminds us that learning is relational and developmental.

Connectivism reminds us that learning is networked and distributed.

If we optimize for what AI does well and marginalize what humans do uniquely well, we create a system that is efficient — but thin.

Fast — but shallow.

Impressive — but fragile.

What This Means for Public Education

This story is not merely about a private school engaging in aggressive experimentation.

It is a preview.

Every district will face pressure to:

  • Automate instruction
  • Replace textbooks with AI tutors
  • Compress seat time
  • Increase data capture

The answer cannot be a blanket rejection.

Nor can it be an uncritical adoption.

The answer is design discipline.

We should use AI to:

  • Reduce administrative drag
  • Prototype lessons
  • Support differentiated feedback
  • Expand access to expertise

But we should anchor every AI decision in two non-negotiables:

  1. Does this strengthen human relationships?
  2. Does this expand student agency and meaning-making?

If the answer is no, we are not innovating.

We are optimizing the wrong variable.

The Choice in Front of Us

We stand at a fork.

We can design AI systems around human development.

Or we can redesign human development around AI systems.

One path amplifies Connectivism, relational trust, and whole-child growth.

The other path creates compliant, monitored, hyper-efficient learners who score well but lack deep agency.

Technology will not make that choice for us.

We will.




Daring Greatly: The Courage Manual You Didn’t Know You Needed


I want to be honest about my relationship with Daring Greatly before I say anything else, because I think it matters.

When Brené Brown’s TED talk went viral, I was skeptical. The vocabulary — vulnerability, wholehearted, shame resilience — sounded like the kind of therapeutic language that gets plastered on motivational posters and stripped of the difficult specificity that actually makes it useful. I’d seen the ideas travel from a research context to a corporate keynote to a school district “culture” initiative, losing precision at every step.

So I put off reading the book for longer than I should have.

I was wrong to. Daring Greatly is not what I expected. It's more rigorous, more honest, and more specifically useful than the way it tends to be discussed would suggest. And for anyone who works in education — particularly anyone who coaches teachers, which requires asking adults to be vulnerable about their practice in ways that most professional norms actively discourage — it's genuinely important.


What the Book Actually Is

Brown is a qualitative researcher who spent years studying connection, shame, and what she calls “wholeheartedness” — the capacity to engage fully in life despite uncertainty and imperfection. Daring Greatly is built on that research: real data, patterns from thousands of interviews, and a framework she developed to understand what gets in the way of genuine engagement.

The central claim is that vulnerability — defined as risk, emotional exposure, and uncertainty without guaranteed outcome — is not weakness. It is the precondition for courage, creativity, connection, and meaningful work. The armor we build to avoid vulnerability (perfectionism, cynicism, numbing, controlling) protects us in the short term and costs us everything in the long term.

The book takes its title from a Theodore Roosevelt quote: the famous “man in the arena” passage, the one about the critic who sits in the cheap seats versus the person who is actually in the fight, who “dares greatly” even knowing they will fail sometimes. Brown uses it as a frame for what she’s asking: not to eliminate vulnerability, but to choose it deliberately, in service of what matters.


Why It Matters in Schools, Specifically

Teaching is one of the most vulnerable jobs there is, and we have almost no professional language for that.

Every day, teachers stand in front of 25 or 30 people and attempt to make something happen — understanding, curiosity, skill, connection — without any guarantee that it will work. The lesson they planned might fall flat. The explanation they thought was clear might turn out to be confusing. A student they’ve been trying to reach for weeks might shut down at the one moment they feel like they’re finally getting through. This happens constantly, and mostly in silence, because the professional culture of teaching tends to reward certainty and penalize visible struggle.

As an instructional coach, a significant part of my work involves watching teachers teach — sitting in classrooms, observing, taking notes, then having conversations about what I saw. This is, if you think about it, a structured invitation to vulnerability. I’m asking a professional to let someone into the most imperfect part of their work, the part they haven’t figured out yet, and to talk about it honestly.

What Brown’s research makes clear is why this is so hard and why so many coaching relationships fail to produce genuine reflection: shame. Not dramatic shame, but the quiet, ambient kind — the professional fear that if you let someone see what’s not working, they’ll conclude that you are not working. That the struggle is evidence of inadequacy rather than evidence of honest effort in a genuinely difficult job.

Brown’s framework for navigating this — what she calls shame resilience, the capacity to recognize shame, reality-check the story you’re telling yourself, reach out, and speak it rather than let it drive behavior — is a practical map for the conversations coaching depends on. It’s not therapeutic language. It’s professional development infrastructure.


The Research Versus the Brand

Here’s my honest caveat, because this book has a complicated position in the culture.

The research underlying Daring Greatly is real and legitimate. Brown’s qualitative work is careful, and her framework is grounded in patterns observed among real people. The book respects the reader’s intelligence.

But Brown has also become a brand, and the brand version of these ideas is considerably more diluted than the book version. The corporate keynote version of “vulnerability” often means “share something personal at the start of a meeting to build rapport,” which is not what Brown is describing. The school culture version tends to mean “hang growth mindset posters and say ‘we value failure,’” which is also not what Brown is describing.

The book itself is more demanding than that. It’s asking for something that is genuinely uncomfortable: not performed openness but actual risk. Not vulnerability as a tactic, but vulnerability as a condition of meaningful work. There’s a significant difference, and if you’ve been exposed to the brand version without the book version, the book may surprise you with how much harder it asks you to be on yourself.


What Resonates as an Educator

A few things from this reread that I keep thinking about:

The distinction between perfectionism and high standards. Brown is not arguing against excellence. She’s arguing against the specific cognitive trap of using perfectionism as a protective strategy — the belief that if you do everything perfectly, you can avoid criticism, judgment, and failure. That trap is everywhere in teaching and education leadership, and it produces exactly the opposite of what it promises.

The concept of “foreboding joy.” The tendency to preemptively imagine disaster when things are going well — to hold back from full engagement because full engagement feels dangerous. Teachers who’ve been through painful years sometimes develop this reflex: don’t get attached to a good moment because it will end. It’s a real pattern, and Brown names it precisely.

The arena metaphor, applied to professional learning. The person in the arena is the teacher who tries something new, has it fall apart in front of their students, and then learns from it. The person in the cheap seats is anyone who critiques without attempting. School cultures that penalize visible struggle and reward only polished performance push people out of the arena and into the cheap seats — and then wonder why professional learning doesn’t stick.


Who Should Read This

If you coach teachers or lead professional development, this book will give you a framework for understanding why the work is harder than it looks and what the emotional conditions for genuine growth actually require. Read it before you design your next coaching cycle.

If you’re a teacher who’s been in the profession long enough to have developed professional armor — the particular efficiency and distance that protects you from full engagement — this book will name what’s happening with more precision than most things you’ll find in education-specific reading.

If you’re skeptical of self-help books in general (I was), give the first three chapters a try before deciding. It earns its keep.

Rating: 4 out of 5. The research is real, the framework is useful, and the writing is clear without being condescending. The point off is because some sections drift toward brand territory: the motivational phrasing that feels more like it was designed for an audience than worked out for a reader. The core is worth it.

Get Daring Greatly


If You Liked This, Read Next

Dare to Lead by Brené Brown — Brown’s follow-up focuses on leadership and organizations rather than on individuals. More directly applicable to school leaders and coaches.

The Gifts of Imperfection by Brené Brown — The book that preceded Daring Greatly, covering many of the same ideas with more focus on personal life than professional. A good companion.

Mindset by Carol Dweck — The growth mindset research that maps directly onto what Brown is describing about perfectionism and failure. Read together, they’re more useful than either is alone. (Affiliate link)

The Shift to Student-Led by Tucker and Novak — Connects Brown’s ideas about vulnerability and risk to the classroom specifically: what it actually means to create conditions where students (and teachers) can fail productively. (Affiliate link)


Related on this site: the Mastery post covers the long arc of skill development in teaching. Brown and Greene are in conversation, whether they know it or not — Brown asks what makes it possible to keep showing up to hard work, Greene asks what happens when you do.


