Google’s Stitch Update: “Vibe Design” and the Shrinking Distance Between Ideas and Tools

A preview of the updated Stitch AI-design tool from Google

Google recently announced a major update to its experimental design tool, Stitch. If you haven’t heard of it before, Stitch is an AI-powered interface design tool—but this update signals something bigger than just new features.

Google is now describing Stitch as an “AI-native software design canvas”—a space where users can move from an idea to a high-fidelity interface using natural language, images, or even voice.

That shift in language matters.

What’s Actually New in This Update?

Stitch isn’t new, but this version pushes it in a different direction. A few highlights stand out.

First, Stitch is no longer framed as a traditional design tool. Instead of starting with wireframes or components, users are encouraged to begin with intent—what they want to build, how it should feel, and what it should accomplish. In practice, that means you can describe a goal and generate a working interface almost immediately.

Second, Google introduces the idea of “vibe design.” While the phrasing might feel a little buzzword-heavy, the concept is straightforward. Rather than trying to get a design right on the first attempt, users can explore multiple directions quickly and refine toward a stronger result.

Third, the updated Stitch includes a design agent that works alongside the user. This agent can reason across the entire project, suggest changes, and help explore different directions simultaneously. It shifts the process from step-by-step construction to something closer to collaboration.

Another notable addition is DESIGN.md, an agent-friendly Markdown file that captures design rules and structure. This makes it easier to move designs into other tools or continue development with AI systems without starting over.

Finally, Stitch now supports instant prototyping of user flows. Instead of static screens, users can connect interfaces and immediately experience how someone would move through the app. That ability to test ideas quickly changes the pace of iteration.

Why This Matters for Educators

At first glance, this might seem like a tool built for designers or developers. But the implications for classrooms are more immediate than they appear.

For years, we’ve asked students to design solutions to problems—create a product, propose an innovation, build something meaningful—but those ideas often remain abstract. They exist in slides, posters, or written descriptions.

Tools like Stitch begin to close that gap.

Students can take an idea—such as a tool to help track progress in Algebra 1—and generate a working interface in minutes. From there, they can evaluate it, revise it, and improve it. The work becomes more tangible, and the feedback loop becomes faster.

That shift from describing an idea to interacting with it has real potential to deepen thinking.

The Bigger Shift Underneath

What Stitch represents is part of a broader change in how creation works.

The more technical aspects of building—layout, structure, and basic interaction design—are increasingly handled by AI. That doesn’t eliminate the need for skill, but it does change where the most important thinking happens.

Instead of focusing primarily on execution, the emphasis shifts toward clearly defining problems, making intentional design decisions, and evaluating whether something is actually useful.

Those are the kinds of capacities we want students to develop, but they’re often overshadowed by the mechanics of building something from scratch.

A Quick Reality Check

This doesn’t automatically lead to better learning.

If we simply replace “make a slideshow” with “generate an app,” we haven’t meaningfully changed the task. The tool itself isn’t the innovation. The thinking behind how it’s used is what matters.

Used thoughtfully, however, tools like Stitch can support faster iteration, more visible thinking, and more authentic design work.

Try This in Your Classroom

If you’re curious about what this might look like in practice, you don’t need a full unit redesign to get started. A simple activity can open the door.

Start with a question tied to your content:

  • “What would a tool that helps students master this unit actually look like?”
  • “How could we design something that makes feedback more useful?”
  • “What would help someone learn this concept more effectively?”

Have students work individually or in small groups to:

  1. Define the purpose of their tool
  2. Describe the user (another student, themselves, a teacher)
  3. Generate a design using Stitch or another AI interface tool
  4. Review the result and critique it

Then push their thinking:

  • What works about this design?
  • What doesn’t?
  • What would you change to make it more useful?
  • How does it connect to what we know about learning?

The goal isn’t to build a perfect product. It’s to move students into a cycle of idea → prototype → critique → revision, which is where deeper learning tends to happen.

Final Thought

Google describes this update as helping users “close the gap from idea to reality in minutes rather than days.”

That may sound ambitious, but it reflects a real trend.

As that gap continues to shrink, the question for educators isn’t whether students can build things. It’s what we ask them to build—and whether those tasks are worthy of the tools now available to them.



The Eclectic Educator is a free resource for everyone passionate about education and creativity. If you enjoy the content and want to support the newsletter, consider becoming a paid subscriber. Your support helps keep the insights and inspiration coming!

AI as Co-Teacher or AI as Replacement?

Photo by cottonbro studio on Pexels.com

“Empathy, evidently, existed only within the human community.”
— Philip K. Dick, Do Androids Dream of Electric Sheep?

There’s a moment in Philip K. Dick’s novel when the line between human and machine doesn’t shatter—it thins. The androids aren’t clumsy metallic caricatures. They’re articulate. Quick. Convincing. They can simulate emotional response so well that distinguishing them from humans requires careful testing. The danger isn’t brute force. It’s indistinguishability. It’s the subtle shift where simulation becomes “good enough,” and we stop asking what’s been replaced.

That’s what this moment in education feels like to me.

Not collapse. Not revolution. Just a quiet thinning of the line.

At Virginia Tech, a graduate course in Structural Equation Modeling nearly fell apart when the instructor unexpectedly dropped out. It was required. Students needed it to graduate. There wasn’t time to hire someone new.

Instead of postponing the course, the department tried something that would have sounded like speculative fiction even five years ago. Half of the weekly learning objectives would be taught traditionally—through textbook and human instruction. The other half would be taught entirely through ChatGPT.

Students received the same objectives either way. They completed the same assessments. And importantly, they submitted their AI chat logs along with their work so their reasoning could be examined. Every student passed.

You can read that as proof that AI can replace textbooks, maybe even instructors. Dr. Ivan Hernandez himself noted that AI can already function as a replacement for traditional textbooks and, to a certain extent, for instructors. That’s the easy interpretation, and it’s the one that will generate headlines.

But that’s not what interests me most.

What interests me is that Hernandez never surrendered the architecture.

He didn’t dissolve the classroom into a chatbot. He designed an experiment. He kept the objectives. He kept the assessments. He required documentation. He reviewed the logs. AI was allowed inside the system, but it did not define the system. The machine participated, but it did not govern.

That distinction feels subtle. It isn’t.

Because at the same time, another model of schooling is gaining attention. A 404 Media report on Alpha School describes students completing core academic work in roughly two hours per day. AI systems deliver most of the instruction. Adults function more as guides and coaches around the edges. The pitch is efficiency, personalization, and mastery at speed.

Now we’re standing inside the tension Dick was writing about decades ago.

If a system can simulate understanding, simulate responsiveness, and simulate personalized feedback, at what point do we stop asking whether it is human-centered?


When I talk about vibrant learning, I’m not talking about colorful classrooms or surface-level engagement. I’m talking about environments where students are actively constructing meaning, forming identity, navigating networks of knowledge, and experiencing the kind of belonging that makes intellectual risk possible. Vibrant learning is relational. It’s cognitively demanding. It depends on friction. It requires the presence of other minds.

And it is, almost by definition, inefficient.

The Science of Learning and Development has made something abundantly clear: learning isn’t merely cognitive processing. It is relational and contextual. Emotion and identity are braided into cognition. Belonging isn’t a nice add-on; it’s neurological infrastructure. When students feel safe enough to wrestle with ideas, they engage in deeper processing. When they feel unseen or disconnected, their cognitive system shifts toward protection rather than exploration.

Now imagine reorganizing schooling around algorithmic instruction as the primary academic engine.

Can AI explain structural equation modeling? Absolutely. The Virginia Tech experiment clearly demonstrates that. But explanation isn’t the same thing as formation. Learning is not just absorbing information; it’s situating yourself within a community of inquiry. It’s deciding what counts as credible. It’s learning how to disagree well. It’s building intellectual humility alongside intellectual confidence.

Connectivism adds another layer. Knowledge doesn’t reside in a single authority. It lives in networks—human, digital, and cultural. Learning is the ability to form and traverse those networks. AI belongs in that web. It can extend it. It can accelerate feedback loops. It can surface patterns that would take humans far longer to see.

But networks remain generative only when no single node dominates the topology.

When most academic interaction flows through a single algorithmic system, the structure centralizes. It becomes efficient. Predictable. Optimized. And optimization is not neutral. It always reflects a priority.

In Hernandez’s classroom, AI is one node among many. Students engage with it, but their interactions are documented and subject to human evaluation. The professor remains the architect. The AI is instrumentation. That’s augmentation.

In the Alpha-style model, as it’s been described, AI becomes the instructional spine. Humans support it. That’s substitution.

The difference between augmentation and substitution isn’t technological. It’s architectural.

And architecture shapes identity.


I understand why the efficiency model is appealing. Public education is strained. Teachers are exhausted. Districts are underfunded. Families are frustrated. If someone promises individualized instruction in two focused hours a day, it feels like relief. It feels like progress. It feels like the system finally catching up to the technology that already saturates students’ lives.

But we have to ask what we’re optimizing for.

If the goal is procedural mastery at scale, AI-centered instruction makes sense. You can compress problem sets. You can adapt pacing. You can automate feedback. You can produce measurable gains efficiently.

But public education, at its best, was never solely about workforce preparation. It was about citizenry. It was about forming people who can navigate complexity, ambiguity, disagreement, and shared life. That kind of formation doesn’t thrive in compressed, frictionless environments. It depends on relational tension. It depends on encountering other minds. It depends on spaces where empathy is not simulated but practiced.

Dick’s line lingers because it names something we’re tempted to overlook: empathy exists within the human community. Machines can model tone. They can generate encouragement. They can approximate responsiveness. But vibrant learning depends on something more than approximation. It depends on shared vulnerability, on the subtle cues of presence, on the unpredictable back-and-forth that shapes identity as much as it shapes understanding.

The Virginia Tech experiment shows that AI can assist with cognition. It does not prove that AI can replace the relational architecture in which cognition becomes character.

That’s the line.

It’s thin. And it’s easy to cross without noticing.

If pedagogy remains accountable to human judgment, AI can deepen vibrant learning. It can expand networks, accelerate iteration, and free educators to focus on the uniquely human dimensions of teaching. It can serve as a co-teacher inside a human-designed ecosystem.

But if pedagogy becomes accountable to platform architecture—if efficiency and throughput quietly become the organizing principles—then vibrant learning will slowly give way to optimized progression. The system may still function. Students may still perform. But something harder to measure will thin.

An educated workforce can be trained through efficient systems.

An educated citizenry must be formed within human communities.

The question before us isn’t whether AI works. It clearly does.

The question is who remains responsible for the architecture.

If we keep that responsibility—if we treat AI as instrumentation rather than architecture—then this moment could expand what’s possible in ways that genuinely support vibrant learning. If we don’t, if we reorganize schooling around efficiency engines and call it innovation, we may find that we’ve streamlined education while quietly narrowing what it means to be educated.

The machine can assist.

But empathy, formation, and responsibility still belong within the human community.

And whether that remains true in our schools will depend on the choices we make now—quietly, structurally, and often in the name of progress.




AI Schools and the Illusion of Efficiency

Photo by Marek Piwnicki on Pexels.com

A recent investigation into Alpha School, a high-tuition “AI-powered” private school, revealed faulty AI-generated lessons, hallucinated questions, scraped curriculum materials, and heavy student surveillance. Former employees described students as “guinea pigs.”

That’s the headline.

But the real issue isn’t whether one school deployed AI sloppily.

The real issue is whether we are confusing technological acceleration with educational progress.

The Seduction of the Two-Hour School Day

Alpha’s pitch is simple and powerful: compress academic learning into two hyper-efficient hours using AI tutors, then free the rest of the day for creativity and passion projects.

If you believe traditional schooling wastes time, that promise is intoxicating.

But here’s the problem:

Efficiency is not the same thing as development.

From a Science of Learning and Development (SoLD) perspective, learning is not merely the transmission of content. It is a process that integrates cognition, emotion, identity, and social context. Durable learning requires safety, belonging, agency, and meaning-making.

You cannot compress belonging into a two-hour block.

You cannot automate identity formation.

And you cannot hallucinate your way to deep understanding.

Connectivism Is Not Automation

Some defenders of AI-heavy schooling argue that we are simply witnessing the next phase of networked learning. Knowledge is distributed. AI becomes a node in the network. Personalized pathways replace one-size-fits-all instruction.

That language sounds connectivist.

But Connectivism is not about replacing human nodes with machine ones.

It is about expanding networks of meaning.

In a connectivist system:

  • Learning happens across relationships.
  • Knowledge flows through dynamic connections.
  • Judgment matters more than memorization.
  • Pattern recognition and critical filtering are essential skills.

AI can participate in that network.

But when AI becomes the primary instructional authority — generating content, generating assessments, evaluating its own outputs — the network collapses into a closed loop.

AI checking AI is not distributed intelligence.

It is recursive automation.

Connectivism requires diversity of nodes.

Not monoculture.

Surveillance Is Not Personalization

The investigation also described extensive monitoring: screen recording, webcam footage, mouse tracking, and behavioral nudges.

This is framed as personalization.

It is not.

It is optimization.

SoLD research makes clear that psychological safety and autonomy are foundational to learning. When students feel constantly watched, agency erodes. Compliance increases. Anxiety increases.

You can nudge behavior with surveillance.

You cannot cultivate intrinsic motivation that way.

If our model of learning begins to resemble corporate productivity software, we should pause.

Education is not a workflow dashboard.

The Hidden Variable: Selection Bias

To be fair, Alpha School reportedly produces strong test scores.

However, high-tuition schools serve families with financial, cultural, and educational capital. Research consistently shows that standardized test performance correlates strongly with income.

If affluent students succeed in an AI-heavy environment, that does not prove that the AI caused the success.

It may simply mean the students would succeed almost anywhere. I often say those students would succeed with a ham sandwich for a teacher.

The question is not whether AI can serve already advantaged learners.

The question is whether AI, deployed without deep pedagogical grounding, strengthens or weakens human development.

The Real Design Question

The danger is not AI itself.

The danger is designing educational systems around what AI does well.

AI does well at:

  • Drafting content
  • Generating practice questions
  • Scaling feedback
  • Recognizing surface patterns

AI does not do well at:

  • Reading emotional context
  • Building trust
  • Modeling intellectual humility
  • Navigating moral ambiguity
  • Forming identity

SoLD reminds us that learning is relational and developmental.

Connectivism reminds us that learning is networked and distributed.

If we optimize for what AI does well and marginalize what humans do uniquely well, we create a system that is efficient — but thin.

Fast — but shallow.

Impressive — but fragile.

What This Means for Public Education

This story is not merely about a private school engaging in aggressive experimentation.

It is a preview.

Every district will face pressure to:

  • Automate instruction
  • Replace textbooks with AI tutors
  • Compress seat time
  • Increase data capture

The answer cannot be a blanket rejection.

Nor can it be an uncritical adoption.

The answer is design discipline.

We should use AI to:

  • Reduce administrative drag
  • Prototype lessons
  • Support differentiated feedback
  • Expand access to expertise

But we should anchor every AI decision in two non-negotiables:

  1. Does this strengthen human relationships?
  2. Does this expand student agency and meaning-making?

If the answer is no, we are not innovating.

We are optimizing the wrong variable.

The Choice in Front of Us

We stand at a fork.

We can design AI systems around human development.

Or we can redesign human development around AI systems.

One path amplifies Connectivism, relational trust, and whole-child growth.

The other path creates compliant, monitored, hyper-efficient learners who score well but lack deep agency.

Technology will not make that choice for us.

We will.




10 Things: Week Ending August 22, 2025

Photo by Dom J on Pexels.com

We’re two weeks into the school year, and I’ve already seen some incredible examples of authentic learning in action. It’s a good reminder of Steve Wozniak’s advice: keep the main thing the main thing—and don’t sell out for something that only looks better.

This week’s newsletter rounds up 10 links worth your time, from AI and education to remote learning, punk archives, and why cell phone bans never work.

Read the full newsletter here →




We must build AI for people; not to be a person


My life’s mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I’m focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported.

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

Some thoughts from Mustafa Suleyman on building AI that doesn’t convince people that AI is a human, or needs rights. Or is a god.

Sadly, we’re already having those discussions.




You Might Be Trying to Replace the Wrong People with AI

I was at a leadership group and people were telling me, “We think that with AI we can replace all of our junior people in our company.” I was like, “That’s the dumbest thing I’ve ever heard. They’re probably the least expensive employees you have, they’re the most leaned into your AI tools, and how’s that going to work when you go 10 years in the future and you have no one that has built up or learned anything?”

So says Matt Garman, CEO of Amazon Web Services. A better question to ask: What do you mean, you don’t want to teach your high school students how to use AI to help them write code and solve problems more efficiently?

We live in weird times when people constantly retreat to what came before and show no intention of moving forward.

Life is the future, not the past.




Democratizing AI in Education: David Wiley’s Vision of Generative Textbooks


David Wiley is experimenting with what he calls generative textbooks — a mashup of OER (open educational resources) and generative AI. His core idea is:

What if anyone who can create an open textbook could also create an AI-powered, interactive learning tool without writing code?

From Open Content to Open AI-Driven Learning

For decades, Wiley has championed open educational resources (OER)—teaching and learning materials freely available to adapt and share under open licenses like Creative Commons. With generative AI now in the mix, Wiley sees a unique opportunity to merge the participatory spirit of OER with the dynamic adaptability of language models.

The result? A new kind of learning tool that feels less like a dusty PDF and more like a responsive learning app—crafted by educators, powered by AI, and free for students to use.

The Anatomy of a Generative Textbook

Wiley’s prototype isn’t just a fancy textbook—it’s a modular, no-code authoring system for AI-powered learning. Here’s how it works:

  • Learning Objectives: Short, focused statements about what learners should master.
  • Topic Summaries: Context-rich summaries intended for the AI—not students—to ground the model’s responses in accurate content.
  • Activities: Learning interactions like flashcards, quizzes, or explanations.
  • Book-Level Prompt Stub: A template that sets tone, personality, response format (e.g., Markdown), and overall voice.

To build a generative textbook with ten chapters, an author creates:

  1. One book-level prompt stub
  2. Ten learning objectives (one per chapter)
  3. Ten concise topic summaries
  4. Various activity templates aligned with each chapter

A student then picks a topic and an activity. The system stitches together the right bits into a prompt and feeds it to a language model—generating a live, tailored learning activity.
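
To make that assembly step concrete, here is a rough sketch of how those pieces might be stitched into a single prompt. This is not Wiley’s actual code or schema; the type names and fields below are hypothetical, but they show how authoring reduces to writing a few short text fields that a template combines whenever a student makes a selection.

```typescript
// Hypothetical data model for a generative textbook (illustrative only).
interface Chapter {
  objective: string; // what the learner should master
  summary: string;   // context written for the AI, not shown to students
}

interface GenerativeTextbook {
  promptStub: string;                  // book-level tone, voice, and response format
  chapters: Chapter[];                 // one objective and summary per chapter
  activities: Record<string, string>;  // activity name -> instruction template
}

// Stitch the selected chapter and activity into one prompt for a language model.
function buildPrompt(book: GenerativeTextbook, chapterIndex: number, activity: string): string {
  const chapter = book.chapters[chapterIndex];
  return [
    book.promptStub,
    `Learning objective: ${chapter.objective}`,
    `Background for you (do not repeat verbatim to the student): ${chapter.summary}`,
    `Activity: ${book.activities[activity]}`,
  ].join("\n\n");
}

// Example: a one-chapter book with two activity templates.
const sampleBook: GenerativeTextbook = {
  promptStub: "You are a friendly, patient tutor. Respond in Markdown.",
  chapters: [{
    objective: "Solve one-variable linear equations.",
    summary: "Linear equations in one variable are solved by applying inverse operations to both sides until the variable is isolated.",
  }],
  activities: {
    flashcards: "Create five question-and-answer flashcards that practice this objective.",
    quiz: "Write a short three-question quiz with worked solutions.",
  },
};

console.log(buildPrompt(sampleBook, 0, "flashcards"));
```

The student-facing tool only needs a topic picker and an activity picker; everything else is templating, which is what keeps the authoring experience no-code.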

Open Source, Open Models, Open Access

True to his roots, Wiley made the tool open source and prioritized support for open-weight models—AI models whose architectures and weights are freely available. His prototype initially sent prompts to a model hosted via the Groq API, making it easy to swap in different open models—or even ones students host locally.

Yet here’s the catch: even open models cost money to operate via API. And according to Wiley, most educators he consulted were less concerned with “open” and more with “free for students.”

A Clever—and Simple—Solution

Wiley’s creative workaround: instead of pushing the AI prompt through the API, the tool now simply copies the student’s prompt to their clipboard and directs them to whatever AI interface they prefer (e.g., ChatGPT, Gemini, a school-supported model). Students just paste and run it themselves.
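
That handoff needs very little machinery. A minimal sketch of the browser side, using the standard Clipboard API (illustrative, and not Wiley’s actual implementation), might look like this:

```typescript
// Copy an assembled prompt to the clipboard so the student can paste it into
// whatever AI chat interface they already have access to (illustrative sketch).
async function handOffPrompt(prompt: string): Promise<void> {
  try {
    // navigator.clipboard.writeText requires a secure context and a user gesture.
    await navigator.clipboard.writeText(prompt);
    alert("Prompt copied. Paste it into your preferred AI chat to run the activity.");
  } catch {
    // Clipboard access can fail (permissions, older browsers); fall back to showing the text.
    window.prompt("Copy this prompt manually:", prompt);
  }
}
```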

There’s elegance in that simplicity:

  • No cost per token—students use models they already have access to.
  • Quality-first—they can choose the best proprietary models, not just open ones.
  • Flexibility—works with institution-licensed models or free-tier access.

Of course, there are trade-offs:

  • The experience feels disjointed (copy/paste instead of seamless).
  • Analytics and usage data are much harder to capture.
  • Learners’ privacy depends on the model they pick—schools and developers can’t guarantee it.

A Prototype, Not a Finished Product

Wiley is clear: this is a tech demonstration, not a polished learning platform. The real magic comes from well-crafted inputs—clear objectives, accurate summaries, and effective activities. Garbage in, garbage out, especially with generative AI.

As it stands, generative textbooks aren’t ready to replace traditional textbooks—but they can serve as innovative supplements, offering dynamic learning experiences beyond static content.

The Bigger Picture: Where OER Meets GenAI

Wiley’s vision reflects a deeper shift in education: blending open pedagogy with responsive AI-driven learning. It’s not just about access; it’s about giving educators and learners the ability to co-create, remix, and personalize knowledge in real time.

Broader research echoes this trend: scholars are exploring how generative AI can support the co-creation, updating, and customization of learning materials while urging care around authenticity and synthesis.

Related Innovations in Open AI for Education

  • VTutor: An open-source SDK that brings animated AI agents to life with real-time feedback and expressive avatars—promising deeper human-AI interaction.
  • AI-University (AI‑U): A framework that fine-tunes open-source LLMs using lecture videos, notes, and textbooks, offering tailored course alignment and traceable output to learning materials.
  • GAIDE: A toolkit that empowers educators to use generative AI for curriculum development, grounded in pedagogical theory and aimed at improving content quality and educator efficiency.

Final Thoughts

David Wiley’s generative textbooks project is less about launching a product and more about launching possibilities. It’s a thought experiment turned demonstration: what if creating powerful, AI-powered learning experiences were as easy as drafting a few sentences?

In this vision:

  • Educators become prompt architects.
  • Students become active participants, selecting how they engage.
  • Learning becomes dynamic, authorable, and—critically—free to access.

That’s the open promise of generative textbooks. It may be rough around the edges now, but the implication is bold: a future where learning tools evolve with educators and learners—rather than being fixed in print.


Bonus reading & resources:




Beyond Policing AI: Rethinking Assessment Through Authentic Learning and Connectivism


Leon Furze makes an important case: if the best we can do in the age of AI is to tighten surveillance, we’ve already lost.

In all corners of education, we need to stop policing artificial intelligence and focus instead on designing better assessments. GenAI gives us an excuse to have these conversations. AI needs to prompt us to reflect on what matters most: validity, fairness, transparency and of course, learning.

Instead of treating generative AI as a threat to assessment, we should see it as a provocation—an opportunity to reimagine how we measure and value learning. His five principles (validity, reality, transparency, process, and professional judgement) are solid on their own, but when refracted through authentic learning and connectivism, they take on even sharper meaning.

1. Validity becomes authenticity.
Assessment validity isn’t just about matching standards to outcomes—it’s about ensuring that what students are asked to do actually matters. Authentic learning demands that assessments reflect the messy, interconnected problems students will face beyond school. A lab report, a policy pitch, or a podcast that connects with a real audience provides validity in a way a locked-down multiple-choice exam never will. AI doesn’t threaten that kind of assessment; it strengthens it, because students must decide how and when to use the tool responsibly within authentic contexts.

2. Designing for reality means designing for networks.
Furze’s “design for reality” principle resonates strongly with connectivism. The reality is that knowledge no longer lives solely inside a student’s head—it’s distributed across networks of people, resources, and technologies. An assessment that ignores that fact is already outdated. When we allow students to bring AI into the process (declared openly, as Furze suggests), we invite them to practice navigating networks of information, filtering noise from signal, and building connections that mirror the way knowledge flows in the real world.

3. Transparency and trust are relational, not transactional.
Authentic learning environments thrive on trust: teachers trust students to take risks, and students trust teachers to guide without over-policing. Connectivism reminds us that learning happens in community, and that means shared norms around how tools like AI are used. Instead of “thou shalt not” rules, we need open conversations: Why might you use AI here? When might it short-circuit your learning? Transparency becomes less about compliance and more about cultivating reflective practitioners who can articulate their choices.

4. Assessment as process = learning as ongoing connection.
If assessment is a process, not a point in time, then it looks less like a final judgment and more like a portfolio of evolving connections. Students don’t just demonstrate what they know; they show how they know, who they connect with, and how their thinking shifts over time. This is connectivism in action: learning is the ability to make and traverse connections, not the ability to store facts in isolation. AI can become part of that process—as a collaborator, a draft partner, or even a provocateur that challenges their assumptions.

5. Respecting professional judgement = empowering educators as designers.
Authentic learning doesn’t happen in lockstep with rigid policies; it requires teachers to design experiences that matter in their contexts. Connectivism reminds us that teachers are nodes in the network too, bringing their expertise, relationships, and creativity. Respecting professional judgement means trusting teachers to balance the affordances of AI with the human dimensions of belonging, curiosity, and care.

The big takeaway?
AI doesn’t invalidate assessment. It invalidates bad assessment. If the only way an assignment “works” is by pretending students live in a vacuum, disconnected from tools, networks, and communities, then it was never truly authentic to begin with.

For those of us who see learning as both deeply human and deeply networked, Furze’s five principles are a call to action: design assessments that honor authenticity, embrace connections, and prepare students for a world where knowledge is always evolving—and never isolated.

Here are a few ideas to get your creative mind going as you think about redesigning your assessments:

1. Color Mapping Across Disciplines (Art + Science)

Task: Students design a digital exhibit that compares different historical models of color (Newton’s circle, Munsell’s system, RGB cubes). They use AI tools to generate visualizations, then critique the limitations of each.

  • Authenticity: Color mapping is both a scientific and artistic problem. Students engage in real-world disciplinary practices.
  • Connectivism: Students link to a network of thinkers (Newton to Munsell) and share their exhibits with peers online.
  • AI Role: Visualization generator, comparison tool, but students must justify why a model matters for perception or art.

2. Community Podcast: Local Environmental Issues (ELA + Science + Civics)

Task: Students research a local environmental challenge (e.g., water quality, urban green space), create a podcast episode featuring expert interviews, and use AI to help with transcription, sound editing, and draft questions.

  • Authenticity: Students contribute to civic discourse in their community.
  • Connectivism: They learn from and connect with real experts and share publicly.
  • AI Role: Drafting interview questions, transcribing recordings, generating promotional materials—but students remain responsible for the core knowledge and ethical framing.

3. History “What If” Simulation (Social Studies)

Task: Students use AI to model counterfactual scenarios (e.g., “What if the printing press had been invented 200 years earlier?”). They must critique the AI’s reasoning, identify inaccuracies, and build their own historically valid narrative in response.

  • Authenticity: Historians often test counterfactuals to sharpen their understanding of cause and effect.
  • Connectivism: Students cross-reference scholarly works, archives, and even online history communities.
  • AI Role: Idea generator and foil—the flawed AI answers become a catalyst for deeper historical reasoning.

4. Entrepreneurial Pitch for a School Problem (Business + Math + Design)

Task: Students identify a real issue in their school (e.g., cafeteria waste, lack of study space), design a product/service solution, and pitch it to administrators or community members. AI is used for market research summaries, prototype visuals, or cost projections.

  • Authenticity: Mirrors real entrepreneurial problem-solving.
  • Connectivism: Students collaborate with community stakeholders and pitch to an authentic audience.
  • AI Role: Research and prototyping assistant, not a substitute for problem-finding or decision-making.

5. Literature in the Age of Machines (ELA)

Task: Students select a literary theme (identity, power, justice) and compare how a human-authored poem and an AI-generated poem tackle it. They publish a critical essay or multimedia piece reflecting on authorship, creativity, and meaning.

  • Authenticity: Engages with contemporary debates about art and authorship.
  • Connectivism: Students link across traditions—classic texts, modern scholarship, AI-driven art.
  • AI Role: Source of creative “texts” to analyze, not a replacement for analysis.

Why These Work

Each task:

  • Builds validity by aligning with standards and real-world practices.
  • Designs for reality, where AI is part of the workflow.
  • Encourages transparency—students must declare and justify how they used AI.
  • Emphasizes process, not just a single product.
  • Relies on teacher judgment to guide reflection and assess growth.



Neuromancer: The book that jailbreaks the future


Blistering verdict: Neuromancer doesn’t predict the future—it jailbreaks it. William Gibson plugs you into a neon-slick, rain-slicked world where data has gravity, money moves at the speed of light, and the line between human and machine is just another corporate asset to be negotiated. It’s fast. It’s razor-sharp. And four decades on, it still crackles like a live wire.


Spoiler-free recap (no ICE burned, promise)

Meet Case—a burned-out “console cowboy” who once rode the matrix like a god until he crossed the wrong people and lost the only thing that mattered: his ability to jack in. He’s offered a dangerous second chance by a mysterious patron with deep pockets and deeper secrets. Enter Molly, a mirror-shaded street samurai with retractable razors and zero patience for anyone’s nonsense. The job? A multilayered, globe-hopping (and orbit-hopping) heist threading megacorps, black-market biohacks, and an AI problem that’s less “glitch” and more “philosophical earthquake.”

The plot moves like a hot knife through black ice—tight, propulsive, and always one layer more ambitious than you think. Every chapter ups the stakes; every alleyway has a camera; every ally might be a contractor. You don’t need spoilers. You need a seatbelt.


Why this book still matters (and why geeks keep handing it to friends)

  • It gave us our mental model of the net. Gibson’s “cyberspace” isn’t just a word—it’s an interface, a mythos, a feeling. The luminous grids, the consensual hallucination of a shared data world? That’s the cultural operating system we installed long before broadband.
  • It forged the cyberpunk aesthetic. Street-level grit meets orbital decadence; chrome and sweat; hackers and mercenaries threading the seams of empire. If you love The Matrix, Ghost in the Shell, Cyberpunk 2077, or Mr. Robot, you’re drinking from this well.
  • It nailed corporate power as world-building. Megacorps behaving like nations. Security as religion. Branding as surveillance. In 2025, tell me that doesn’t feel uncomfortably like a user agreement we all clicked.
  • It treats AI as character, not prop. Neuromancer asks the questions we’re still arguing about in boardrooms and labs: autonomy, constraint, alignment, and what “self” means when the self can be copied, merged, or monetized.
  • The prose is pure overclocked poetry. Gibson writes like he’s soldering language: compressed, glittering, and purpose-built. The sentences hum; the metaphors bite; the world feels legible and alien at once.

What hits different in 2025

  • Identity as a login. Case isn’t just locked out of systems; he’s locked out of himself. That anxiety—who are we without access?—is the backbone of our cloud-tethered lives.
  • The gig-hacker economy. Contractors, fixers, “teams” assembled like temporary code branches. It’s Upwork with thermoptic shades.
  • Biohacking & upgrade culture. From dermal mods to black-clinic tune-ups, the book treats the body like firmware—exactly how today’s wearables, implants, and nootropics culture wants you to think.
  • Algorithmic power. Replace “AI” with your favorite recommendation engine and the social physics hold: it watches, it optimizes, it nudges. The ethics still sting.

How to read it (and love it)

  • Surf the jargon. Don’t stop to define every acronym. Let the context teach you like you’re a rookie riding shotgun with veterans.
  • Trust the city. The settings—Chiba City, the Sprawl, orbit—are more than backdrops; they’re tutorial levels. Watch what they reward and punish.
  • Hear the bassline. The book is paced like a heist film. When it slows, it’s loading a bigger payload. When it sprints, hang on.

If you’re this kind of reader, this book is your jam

  • You love high-concept, high-velocity fiction that respects your intelligence.
  • You care about tech culture’s DNA—where our metaphors and nightmares came from.
  • You’re a world-building nerd who wants settings that feel lived-in, not wallpapered.
  • You’re into AI, hacking, and systems thinking and want a story that treats them as more than shiny props.

The influence blast radius

Neuromancer is ground zero for the cyberpunk sensibility: the hero is small, the system is massive, and victory looks like carving a human-sized space in a machine-sized world. Its fingerprints are everywhere—console cowboys inspiring dev culture; “ICE” as the vibe under every security audit; fashion, music, and UI design that still chase its cool. Even the way journalists write about breaches and “entering the network” leans on Gibson’s visual grammar. Read it and you’ll start seeing the code behind the cultural interface.


After you jack out: what to read next

  • Count Zero and Mona Lisa Overdrive (finish the Sprawl Trilogy—richer world, expanding consequences).
  • Burning Chrome (short stories that sharpen the vision).
  • Adjacent canon: Neal Stephenson’s Snow Crash (satire-powered rocket fuel), Pat Cadigan’s Synners (media and minds), and Rudy Rucker’s Ware series (weirder, wilder, wonderfully so).

Final verdict

Neuromancer is essential reading—full stop. It’s the rare novel that changed the language we use to talk about technology and remains a pulse-pounding ride. If the Internet is the city we all live in now, Gibson drew the first street map that felt true. Pick it up for the thrills; keep it on your shelf for the ideas that won’t let you go.


Ready to jack in? Grab Neuromancer in paperback, ebook, or audio—however you mainline stories—and let it rewrite your mental firmware. (Some links on my site may be affiliate links, which help support the work at no extra cost to you.)




Wednesday assorted links