Tag: connectivism

  • AI as Co-Teacher or AI as Replacement?

Photo by cottonbro studio on Pexels.com

    “Empathy, evidently, existed only within the human community.”
    — Philip K. Dick, Do Androids Dream of Electric Sheep?

    There’s a moment in Philip K. Dick’s novel when the line between human and machine doesn’t shatter—it thins. The androids aren’t clumsy metallic caricatures. They’re articulate. Quick. Convincing. They can simulate emotional response so well that distinguishing them from humans requires careful testing. The danger isn’t brute force. It’s indistinguishability. It’s the subtle shift where simulation becomes “good enough,” and we stop asking what’s been replaced.

    That’s what this moment in education feels like to me.

    Not collapse. Not revolution. Just a quiet thinning of the line.

    At Virginia Tech, a graduate course in Structural Equation Modeling nearly fell apart when the instructor unexpectedly dropped out. It was required. Students needed it to graduate. There wasn’t time to hire someone new. Instead of postponing the course, the department tried something that would have sounded like speculative fiction even five years ago. Half of the weekly learning objectives would be taught traditionally—through textbook and human instruction. The other half would be taught entirely through ChatGPT. Students received the same objectives either way. They completed the same assessments. And importantly, they submitted their AI chat logs along with their work so their reasoning could be examined. Every student passed.

You can read that as proof that AI can replace textbooks, maybe even instructors. Dr. Ivan Hernandez, the professor behind the experiment, noted that AI can already function as a replacement for traditional textbooks and, to a certain extent, for instructors. That's the easy interpretation, and it's the one that will generate headlines.

    But that’s not what interests me most.

    What interests me is that Hernandez never surrendered the architecture.

    He didn’t dissolve the classroom into a chatbot. He designed an experiment. He kept the objectives. He kept the assessments. He required documentation. He reviewed the logs. AI was allowed inside the system, but it did not define the system. The machine participated, but it did not govern.

    That distinction feels subtle. It isn’t.

Because at the same time, another model of schooling is gaining attention. According to a 404 Media report on Alpha School, students complete core academic work in roughly two hours per day. AI systems deliver most of the instruction. Adults function more as guides and coaches around the edges. The pitch is efficiency, personalization, and mastery at speed.

    Now we’re standing inside the tension Dick was writing about decades ago.

    If a system can simulate understanding, simulate responsiveness, and simulate personalized feedback, at what point do we stop asking whether it is human-centered?


    When I talk about vibrant learning, I’m not talking about colorful classrooms or surface-level engagement. I’m talking about environments where students are actively constructing meaning, forming identity, navigating networks of knowledge, and experiencing the kind of belonging that makes intellectual risk possible. Vibrant learning is relational. It’s cognitively demanding. It depends on friction. It requires the presence of other minds.

    And it is, almost by definition, inefficient.

    The Science of Learning and Development has made something abundantly clear: learning isn’t merely cognitive processing. It is relational and contextual. Emotion and identity are braided into cognition. Belonging isn’t a nice add-on; it’s neurological infrastructure. When students feel safe enough to wrestle with ideas, they engage in deeper processing. When they feel unseen or disconnected, their cognitive system shifts toward protection rather than exploration.

    Now imagine reorganizing schooling around algorithmic instruction as the primary academic engine.

    Can AI explain structural equation modeling? Absolutely. The Virginia Tech experiment clearly demonstrates that. But explanation isn’t the same thing as formation. Learning is not just absorbing information; it’s situating yourself within a community of inquiry. It’s deciding what counts as credible. It’s learning how to disagree well. It’s building intellectual humility alongside intellectual confidence.

    Connectivism adds another layer. Knowledge doesn’t reside in a single authority. It lives in networks—human, digital, and cultural. Learning is the ability to form and traverse those networks. AI belongs in that web. It can extend it. It can accelerate feedback loops. It can surface patterns that would take humans far longer to see.

    But networks remain generative only when no single node dominates the topology.

    When most academic interaction flows through a single algorithmic system, the structure centralizes. It becomes efficient. Predictable. Optimized. And optimization is not neutral. It always reflects a priority.

    In Hernandez’s classroom, AI is one node among many. Students engage with it, but their interactions are documented and subject to human evaluation. The professor remains the architect. The AI is instrumentation. That’s augmentation.

    In the Alpha-style model, as it’s been described, AI becomes the instructional spine. Humans support it. That’s substitution.

    The difference between augmentation and substitution isn’t technological. It’s architectural.

    And architecture shapes identity.


    I understand why the efficiency model is appealing. Public education is strained. Teachers are exhausted. Districts are underfunded. Families are frustrated. If someone promises individualized instruction in two focused hours a day, it feels like relief. It feels like progress. It feels like the system finally catching up to the technology that already saturates students’ lives.

    But we have to ask what we’re optimizing for.

    If the goal is procedural mastery at scale, AI-centered instruction makes sense. You can compress problem sets. You can adapt pacing. You can automate feedback. You can produce measurable gains efficiently.

But public education, at its best, was never solely about workforce preparation. It was about citizenship. It was about forming people who can navigate complexity, ambiguity, disagreement, and shared life. That kind of formation doesn't thrive in compressed, frictionless environments. It depends on relational tension. It depends on encountering other minds. It depends on spaces where empathy is not simulated but practiced.

    Dick’s line lingers because it names something we’re tempted to overlook: empathy exists within the human community. Machines can model tone. They can generate encouragement. They can approximate responsiveness. But vibrant learning depends on something more than approximation. It depends on shared vulnerability, on the subtle cues of presence, on the unpredictable back-and-forth that shapes identity as much as it shapes understanding.

    The Virginia Tech experiment shows that AI can assist with cognition. It does not prove that AI can replace the relational architecture in which cognition becomes character.

    That’s the line.

    It’s thin. And it’s easy to cross without noticing.

    If pedagogy remains accountable to human judgment, AI can deepen vibrant learning. It can expand networks, accelerate iteration, and free educators to focus on the uniquely human dimensions of teaching. It can serve as a co-teacher inside a human-designed ecosystem.

    But if pedagogy becomes accountable to platform architecture—if efficiency and throughput quietly become the organizing principles—then vibrant learning will slowly give way to optimized progression. The system may still function. Students may still perform. But something harder to measure will thin.

    An educated workforce can be trained through efficient systems.

    An educated citizenry must be formed within human communities.

    The question before us isn’t whether AI works. It clearly does.

    The question is who remains responsible for the architecture.

    If we keep that responsibility—if we treat AI as instrumentation rather than architecture—then this moment could expand what’s possible in ways that genuinely support vibrant learning. If we don’t, if we reorganize schooling around efficiency engines and call it innovation, we may find that we’ve streamlined education while quietly narrowing what it means to be educated.

    The machine can assist.

    But empathy, formation, and responsibility still belong within the human community.

    And whether that remains true in our schools will depend on the choices we make now—quietly, structurally, and often in the name of progress.




  • AI Schools and the Illusion of Efficiency

Photo by Marek Piwnicki on Pexels.com

    A recent investigation into Alpha School, a high-tuition “AI-powered” private school, revealed faulty AI-generated lessons, hallucinated questions, scraped curriculum materials, and heavy student surveillance. Former employees described students as “guinea pigs.”

    That’s the headline.

    But the real issue isn’t whether one school deployed AI sloppily.

    The real issue is whether we are confusing technological acceleration with educational progress.

    The Seduction of the Two-Hour School Day

    Alpha’s pitch is simple and powerful: compress academic learning into two hyper-efficient hours using AI tutors, then free the rest of the day for creativity and passion projects.

    If you believe traditional schooling wastes time, that promise is intoxicating.

    But here’s the problem:

    Efficiency is not the same thing as development.

    From a Science of Learning and Development (SoLD) perspective, learning is not merely the transmission of content. It is a process that integrates cognition, emotion, identity, and social context. Durable learning requires safety, belonging, agency, and meaning-making.

    You cannot compress belonging into a two-hour block.

    You cannot automate identity formation.

    And you cannot hallucinate your way to deep understanding.

    Connectivism Is Not Automation

    Some defenders of AI-heavy schooling argue that we are simply witnessing the next phase of networked learning. Knowledge is distributed. AI becomes a node in the network. Personalized pathways replace one-size-fits-all instruction.

    That language sounds connectivist.

    But Connectivism is not about replacing human nodes with machine ones.

It is about expanding networks of meaning.

    In a connectivist system:

    • Learning happens across relationships.
    • Knowledge flows through dynamic connections.
    • Judgment matters more than memorization.
    • Pattern recognition and critical filtering are essential skills.

    AI can participate in that network.

    But when AI becomes the primary instructional authority — generating content, generating assessments, evaluating its own outputs — the network collapses into a closed loop.

    AI checking AI is not distributed intelligence.

    It is recursive automation.

    Connectivism requires diversity of nodes.

    Not monoculture.

    Surveillance Is Not Personalization

    The investigation also described extensive monitoring: screen recording, webcam footage, mouse tracking, and behavioral nudges.

    This is framed as personalization.

    It is not.

    It is optimization.

SoLD research makes clear that psychological safety and autonomy are foundational to learning. When students feel constantly watched, agency erodes. Compliance increases. Anxiety increases.

    You can nudge behavior with surveillance.

    You cannot cultivate intrinsic motivation that way.

    If our model of learning begins to resemble corporate productivity software, we should pause.

    Education is not a workflow dashboard.

    The Hidden Variable: Selection Bias

    To be fair, Alpha School reportedly produces strong test scores.

    However, high-tuition schools serve families with financial, cultural, and educational capital. Research consistently shows that standardized test performance correlates strongly with income.

    If affluent students succeed in an AI-heavy environment, that does not prove that the AI caused the success.

    It may simply mean the students would succeed almost anywhere. I often say those students would succeed with a ham sandwich for a teacher.

    The question is not whether AI can serve already advantaged learners.

    The question is whether AI, deployed without deep pedagogical grounding, strengthens or weakens human development.

    The Real Design Question

    The danger is not AI itself.

    The danger is designing educational systems around what AI does well.

    AI does well at:

    • Drafting content
    • Generating practice questions
    • Scaling feedback
    • Recognizing surface patterns

    AI does not do well at:

    • Reading emotional context
    • Building trust
    • Modeling intellectual humility
    • Navigating moral ambiguity
    • Forming identity

    SoLD reminds us that learning is relational and developmental.

    Connectivism reminds us that learning is networked and distributed.

    If we optimize for what AI does well and marginalize what humans do uniquely well, we create a system that is efficient — but thin.

    Fast — but shallow.

    Impressive — but fragile.

    What This Means for Public Education

    This story is not merely about a private school engaging in aggressive experimentation.

    It is a preview.

    Every district will face pressure to:

    • Automate instruction
    • Replace textbooks with AI tutors
    • Compress seat time
    • Increase data capture

    The answer cannot be a blanket rejection.

    Nor can it be an uncritical adoption.

    The answer is design discipline.

    We should use AI to:

    • Reduce administrative drag
    • Prototype lessons
    • Support differentiated feedback
    • Expand access to expertise

    But we should anchor every AI decision in two non-negotiables:

    1. Does this strengthen human relationships?
    2. Does this expand student agency and meaning-making?

    If the answer is no, we are not innovating.

    We are optimizing the wrong variable.

    The Choice in Front of Us

    We stand at a fork.

    We can design AI systems around human development.

    Or we can redesign human development around AI systems.

    One path amplifies Connectivism, relational trust, and whole-child growth.

    The other path creates compliant, monitored, hyper-efficient learners who score well but lack deep agency.

    Technology will not make that choice for us.

    We will.




  • Daring Greatly: The Courage Manual You Didn’t Know You Needed


    Blistering verdict: Brené Brown turns vulnerability from a punchline into a power-up. Daring Greatly isn’t self-help fluff; it’s a rigor-backed field guide for stepping into the arena when your brain is screaming, “Nope.” It reads fast, hits hard, and leaves you with language—and habits—that change how you lead, teach, parent, and show up.


    Spoiler-free recap (no “cheap seats” commentary included)

    Brown’s premise is simple and seismic: vulnerability is courage in action—the willingness to be seen when outcomes aren’t guaranteed. Drawing on years of qualitative research, she maps how shame (the fear of disconnection) drives perfectionism, numbing, and armor… and how shame resilience (naming what’s happening, reality-checking our stories, reaching out, and speaking it) gives us our lives back.

    You’ll walk through:

    • Scarcity culture (“never enough”) vs. worthiness (“I’m enough, so I can risk more”).
    • Armor types—perfectionism, foreboding joy, cynicism—and how to set them down.
    • Empathy as antidote (connection > fixing).
    • Wholeheartedness: living with courage + compassion + connection, anchored by boundaries.

    No plot twists to spoil—just a research-driven blueprint that makes bravery behavioral, not mythical.


    Why this book still matters (and why your team/family/class will feel it)

    • It rewires the courage myth. Courage isn’t swagger; it’s risk + emotional exposure + uncertainty. That framing scales from a tough conversation to a moonshot.
    • It gives you a shared language. “Armor,” “scarcity,” “shame triggers,” “wholehearted”—terms your team can actually use in meetings without rolling their eyes.
    • It upgrades feedback culture. Vulnerability isn’t oversharing; it’s specific, boundaried honesty. That’s the backbone of psychological safety and real performance.
    • It’s ruthlessly practical. The book reads like a human-systems playbook: name it, normalize it, and move—together.



    What hits different in 2025

    • AI & authenticity. In a world of auto-generated polish, human risk-taking is the differentiator. Vulnerability is how we build trust beyond the algorithm.
    • Hybrid work, thin trust. Distance amplifies story-making. Brown’s “story I’m telling myself…” move is rocket fuel for remote teams and relationships.
    • Schools & Gen Z. Teens live under surveillance capitalism. Teaching boundaries + worthiness beats any pep talk on resilience.

    Read it like a field guide (fast, no navel-gazing required)

    • Skim for tools, then circle back for depth. Treat each section like a drill you can run this week.
    • Practice out loud. Say the scripts: “Here’s what I’m afraid of… Here’s what I need… The story I’m telling myself is…”
    • Pick one arena. A hard 1:1, a classroom norm, a family ritual. Ship courage in small, observable iterations.

    For my fellow geeks & builders

    If Neuromancer gave us cyberspace, this gives us the social API for courage. It’s the middleware between your values and your behavior under load. Think of shame as a high-latency bug; Brown gives you the observability tools to catch it in prod and roll a patch without taking the system down.


    Who will love this

    • Leaders & coaches who care about performance and people.
    • Educators & parents building cultures of belonging without lowering standards.
    • Makers & founders whose work requires public risk and iterative failure.
    • Anyone tired of armoring up and ready to try brave instead of perfect.

    Pair it with (next reads)

    • The Gifts of Imperfection (Brown) — the on-ramp to wholehearted living.
    • Dare to Lead (Brown) — her organizational upgrade, perfect for teams.
    • Crucial Conversations (Patterson et al.) — tactics for high-stakes talk, post-armor.

    Final verdict

    Five stars, zero hedging. Daring Greatly is the rare book that alters your behavioral defaults. It’s sticky, quotable, and wildly usable the minute you close it. If you build products, classes, teams, or families, this is the courage stack you want installed.


    Ready to step into the arena? Grab Daring Greatly in paperback, hardcover, or audio—whichever format helps you practice while you read. (Some links on my site may be affiliate links, which help support this work at no extra cost to you.)



    The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!