
“Empathy, evidently, existed only within the human community.”
— Philip K. Dick, Do Androids Dream of Electric Sheep?
There’s a moment in Philip K. Dick’s novel when the line between human and machine doesn’t shatter—it thins. The androids aren’t clumsy metallic caricatures. They’re articulate. Quick. Convincing. They can simulate emotional response so well that distinguishing them from humans requires careful testing. The danger isn’t brute force. It’s indistinguishability. It’s the subtle shift where simulation becomes “good enough,” and we stop asking what’s been replaced.
That’s what this moment in education feels like to me.
Not collapse. Not revolution. Just a quiet thinning of the line.
At Virginia Tech, a graduate course in Structural Equation Modeling nearly fell apart when the instructor unexpectedly withdrew. The course was required; students needed it to graduate, and there wasn't time to hire someone new. Instead of postponing it, the department tried something that would have sounded like speculative fiction even five years ago. Half of the weekly learning objectives would be taught traditionally, through textbook and human instruction. The other half would be taught entirely through ChatGPT. Students received the same objectives either way. They completed the same assessments. And importantly, they submitted their AI chat logs along with their work so their reasoning could be examined. Every student passed.
You can read that as proof that AI can replace textbooks, maybe even instructors. Dr. Ivan Hernandez himself noted that AI can already function as a replacement for traditional textbooks and, to a certain extent, for instructors. That’s the easy interpretation, and it’s the one that will generate headlines.
But that’s not what interests me most.
What interests me is that Hernandez never surrendered the architecture.
He didn’t dissolve the classroom into a chatbot. He designed an experiment. He kept the objectives. He kept the assessments. He required documentation. He reviewed the logs. AI was allowed inside the system, but it did not define the system. The machine participated, but it did not govern.
That distinction feels subtle. It isn’t.
Because at the same time, another model of schooling is gaining attention. According to a 404 Media report, students at Alpha School complete their core academic work in roughly two hours per day. AI systems deliver most of the instruction. Adults function more as guides and coaches around the edges. The pitch is efficiency, personalization, and mastery at speed.
Now we’re standing inside the tension Dick was writing about decades ago.
If a system can simulate understanding, simulate responsiveness, and simulate personalized feedback, at what point do we stop asking whether it is human-centered?
When I talk about vibrant learning, I’m not talking about colorful classrooms or surface-level engagement. I’m talking about environments where students are actively constructing meaning, forming identity, navigating networks of knowledge, and experiencing the kind of belonging that makes intellectual risk possible. Vibrant learning is relational. It’s cognitively demanding. It depends on friction. It requires the presence of other minds.
And it is, almost by definition, inefficient.
The Science of Learning and Development has made something abundantly clear: learning isn’t merely cognitive processing. It is relational and contextual. Emotion and identity are braided into cognition. Belonging isn’t a nice add-on; it’s neurological infrastructure. When students feel safe enough to wrestle with ideas, they engage in deeper processing. When they feel unseen or disconnected, their cognitive system shifts toward protection rather than exploration.
Now imagine reorganizing schooling around algorithmic instruction as the primary academic engine.
Can AI explain structural equation modeling? Absolutely. The Virginia Tech experiment clearly demonstrates that. But explanation isn’t the same thing as formation. Learning is not just absorbing information; it’s situating yourself within a community of inquiry. It’s deciding what counts as credible. It’s learning how to disagree well. It’s building intellectual humility alongside intellectual confidence.
Connectivism adds another layer. Knowledge doesn’t reside in a single authority. It lives in networks—human, digital, and cultural. Learning is the ability to form and traverse those networks. AI belongs in that web. It can extend it. It can accelerate feedback loops. It can surface patterns that would take humans far longer to see.
But networks remain generative only when no single node dominates the topology.
When most academic interaction flows through a single algorithmic system, the structure centralizes. It becomes efficient. Predictable. Optimized. And optimization is not neutral. It always reflects a priority.
In Hernandez’s classroom, AI is one node among many. Students engage with it, but their interactions are documented and subject to human evaluation. The professor remains the architect. The AI is instrumentation. That’s augmentation.
In the Alpha-style model, as it’s been described, AI becomes the instructional spine. Humans support it. That’s substitution.
The difference between augmentation and substitution isn’t technological. It’s architectural.
And architecture shapes identity.
I understand why the efficiency model is appealing. Public education is strained. Teachers are exhausted. Districts are underfunded. Families are frustrated. If someone promises individualized instruction in two focused hours a day, it feels like relief. It feels like progress. It feels like the system finally catching up to the technology that already saturates students’ lives.
But we have to ask what we’re optimizing for.
If the goal is procedural mastery at scale, AI-centered instruction makes sense. You can compress problem sets. You can adapt pacing. You can automate feedback. You can produce measurable gains efficiently.
But public education, at its best, was never solely about workforce preparation. It was about citizenship. It was about forming people who can navigate complexity, ambiguity, disagreement, and shared life. That kind of formation doesn't thrive in compressed, frictionless environments. It depends on relational tension. It depends on encountering other minds. It depends on spaces where empathy is not simulated but practiced.
Dick’s line lingers because it names something we’re tempted to overlook: empathy exists within the human community. Machines can model tone. They can generate encouragement. They can approximate responsiveness. But vibrant learning depends on something more than approximation. It depends on shared vulnerability, on the subtle cues of presence, on the unpredictable back-and-forth that shapes identity as much as it shapes understanding.
The Virginia Tech experiment shows that AI can assist with cognition. It does not prove that AI can replace the relational architecture in which cognition becomes character.
That’s the line.
It’s thin. And it’s easy to cross without noticing.
If pedagogy remains accountable to human judgment, AI can deepen vibrant learning. It can expand networks, accelerate iteration, and free educators to focus on the uniquely human dimensions of teaching. It can serve as a co-teacher inside a human-designed ecosystem.
But if pedagogy becomes accountable to platform architecture—if efficiency and throughput quietly become the organizing principles—then vibrant learning will slowly give way to optimized progression. The system may still function. Students may still perform. But something harder to measure will thin.
An educated workforce can be trained through efficient systems.
An educated citizenry must be formed within human communities.
The question before us isn’t whether AI works. It clearly does.
The question is who remains responsible for the architecture.
If we keep that responsibility—if we treat AI as instrumentation rather than architecture—then this moment could expand what’s possible in ways that genuinely support vibrant learning. If we don’t, if we reorganize schooling around efficiency engines and call it innovation, we may find that we’ve streamlined education while quietly narrowing what it means to be educated.
The machine can assist.
But empathy, formation, and responsibility still belong within the human community.
And whether that remains true in our schools will depend on the choices we make now—quietly, structurally, and often in the name of progress.
The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!