Empirical Research

Asking Slow Questions in the Age of Fast Machines: Neil Postman’s Seven Questions and the AI-Powered Classroom

This week I watched an old college lecture from 1997, “The Surrender of Culture to Technology”, by the media theorist and culture critic Neil Postman. The lecture was based on his book Technopoly, in which Postman, in a quite entertaining and provocative way, raises a series of questions that need answering in regard to new technology. Television and the internet in particular are in his scope during the talk.

Today, almost 30 years later, in a time when artificial intelligence (AI) is becoming part of classrooms, lesson planning, and student assessment, it is likewise urgent to pause and ask the right questions. In Technopoly (1992) Neil Postman proposed seven questions that any society should ask when a new technology is introduced. His intention was not to stop innovation, but to foster technological literacy, meaning not just the ability to use technology, but the ability to understand its cultural, social, and political consequences. As schools and other educational institutions across the globe begin implementing generative AI tools like ChatGPT, adaptive learning platforms, and AI-based grading assistants, Postman’s critical lens becomes not only relevant but, in my opinion, quite necessary.

Postman’s Seven Questions, Reimagined for the Age of AI in Education

Postman asks the following seven questions. Answering one makes it possible to move on to the next; in that sense, the questions form a taxonomy. I have listed them here.

  • What is the problem that this new technology solves?
  • Whose problem is it?
  • What new problems do we create by solving this problem?
  • Which people and institutions will be most impacted by a technological solution?
  • What changes in language occur as the result of technological change?
  • Which shifts in economic and political power might result when this technology is adopted?
  • What alternative (and unintended) uses might be made of this technology?

Rethinking Education in the Age of AI: A Postman Perspective

It is worth pausing to ask not only what AI can do, but why we are inviting it into education in the first place. Postman’s questions are a way to resist the seduction of innovation for its own sake. Reimagining his seven inquiries in the context of today’s AI revolution in schools reveals both the promises and the perils of our current trajectory.

We begin, as Postman would, by asking: What is the problem to which this technology is the solution? In education, AI is often framed as a remedy for overworked teachers, disengaged students, or slow feedback loops. Tools powered by machine learning claim to tailor instruction to the individual, offering faster responses than any teacher could manage. But beneath this efficiency lies a more fundamental question: is the core challenge really a lack of automation—or rather, a lack of meaningful human connection in learning? This leads us to the second question: Whose problem is it? The burdens AI alleviates—lesson planning, grading, administrative tracking—are largely those of the teacher or the institution. Rarely does AI directly respond to the student’s need for dialogue, struggle, or relational guidance. When a student submits an AI-generated essay, the final product may appear polished, but the learning process of drafting, reflecting, and revising often vanishes. In solving the adult’s problem, we may be ignoring the child’s.
Yet every solution brings new complications. What new problems might be created by solving the old one? In one Danish secondary school, the use of ChatGPT among students has led to a spike in what teachers call “algorithmic authorship.” Educators now spend more time detecting machine-written work than offering thoughtful feedback. The tool meant to conquer writer’s block has instead eroded authorship, critical thinking, and integrity, forcing teachers into the role of AI police rather than mentors.

So, who benefits? Certainly the EdTech industry, whose products are increasingly embedded in national education policies. Governments hoping to reduce costs and standardize testing also stand to gain. But do students truly benefit, when automation risks dulling their curiosity, creativity, and capacity for reflection? In classrooms where AI-generated feedback replaces teacher dialogue, efficiency comes at the expense of education.

And inevitably, who loses out? The open-ended question loses. The productive error loses. The slow conversation and the unpredictable insight lose. In a U.S. high school piloting AI tutoring, students report turning to the chatbot first, not their peers, not their teachers. Authority is shifting, and with it, the fragile space where democratic dialogue and educational experimentation unfold.

At its core, every technology promotes certain values. So we must ask: What values does AI promote in education? The dominant values are speed, precision, and performance. These are not inherently negative, but they may come at the cost of empathy, ambiguity, and critical reflection. Education, in its richest form, is not about solving problems quickly, but about dwelling in questions, learning how to navigate complexity, contradiction, and uncertainty. These are not tasks that can or should be outsourced to algorithms.

Finally, we must ask: Which institutions are changed by this technology, and how? Schools, once envisioned as democratic communities of inquiry, risk becoming data-driven service platforms. In some UK primary schools, AI-generated reading assessments have replaced teacher-pupil conversations. The result? More data points, but fewer relationships. More measurement, but less meaning.

Even though I may be playing the role of the overly critical here, my concern is that if we adopt AI in education without asking these seven questions, we risk letting the technology reshape our values, practices, and institutions in ways we neither intended nor fully understand. Neil Postman warned us not to become tools of our tools. In an age of smart machines, the real test of our intelligence is whether we still remember how to ask the human questions.

Democracy Requires Friction

Postman argued that new technologies are not additions to a culture; they change everything. AI in education is not just a tool – it’s a force that reshapes how we think about knowledge, authority, and agency. If democratic education means more than standardized test scores – if it means learning to think together, disagree respectfully, and act ethically – then we must treat AI with caution and curiosity, not blind adoption. The purpose of education is not to prepare students to become machines. It is to help them become fully human. This includes the ability to ask questions that machines cannot answer: Who am I responsible for? What kind of society do we want? What does it mean to be free? Postman reminds us that just because we can automate learning doesn’t mean we should. We should not fear technology, but we should fear forgetting to ask what it asks of us. Neil Postman gave us a framework for technological critique grounded in human values, democratic education, and cultural awareness.

In the evolving landscape of political communication, politicians and heads of state are increasingly turning to artificial intelligence as an instrument of narrative control and persuasion. AI-powered content generation, micro-targeting, and sentiment analysis allow leaders to craft highly personalized, emotionally resonant messages that can bypass traditional media gatekeepers and exploit citizens’ psychological vulnerabilities. This creates a profound imbalance in democratic discourse: when politicians use AI to simulate authenticity, amplify propaganda, or flood public spheres with tailored disinformation, they effectively automate manipulation. The opacity of algorithmic messaging, often delivered through digital echo chambers, blurs the line between persuasion and coercion. Rather than fostering informed participation, such practices risk undermining public trust, diluting accountability, and eroding the deliberative foundations upon which democratic societies depend.

In my opinion, asking Postman’s questions in a time of artificial intelligence is not just an intellectual exercise – it is a civic responsibility, especially when dealing with education.

References:

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Knopf.

Postman, N. (1995). The end of education: Redefining the value of school. New York: Vintage Books.

Postman, N. (1999). Building a bridge to the 18th century: How the past can improve our future. New York: Vintage Books.

Danish Secondary School using AI

Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation

Thoughts

Are we making ourselves dumber with AI?

As tools like ChatGPT become an increasingly regular part of daily life – in school, at university, and at work – an important question arises: What happens to our brains and the learning processes we engage in when we use AI to help us? And what implications could that have on education?

What Happens to Learning When Your Brain Meets LLMs?

In a new study from MIT, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, researchers asked 54 university students to write essays over four sessions. They divided the students into three groups. One group could only use ChatGPT, another could use Google (but no AI), and a third had to rely on their own knowledge – no tools at all. At the same time, participants’ brain activity was monitored using EEG, and they were interviewed after each session. The results were then compared across groups and sessions.

Participants who used ChatGPT showed the lowest levels of brain activity during the writing task. Compared to those who used Google, and especially those who wrote without any digital assistance, their cognitive engagement was significantly reduced. Not only were their brains less active, but they also struggled to recall what they had written. Many couldn’t remember even a single sentence from their own essays, suggesting that the writing process hadn’t left a strong imprint on their memory. This lack of connection extended to their sense of ownership. Unlike the other groups, ChatGPT users were less likely to feel that the essay truly belonged to them. On the other hand, those who didn’t use AI consistently outperformed the others – in language quality, idea development, and cognitive engagement. They wrote essays they could remember, quote from, and take pride in. Their brains were fully activated throughout the process.

In another study, The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI, researchers showed that participants who relied on a large language model like ChatGPT to help answer reading comprehension questions remembered significantly less than those who completed the task on their own. When tested afterwards, LLM users not only retained fewer facts, but they also struggled to recall the core ideas and arguments from the original texts. This memory gap wasn’t just a matter of forgetting details—it reflected a deeper issue. Those who used the AI assistant showed less mental effort and engagement during the task. They often skimmed the material and leaned on the AI to interpret it for them, rather than processing the information themselves. Ironically, the more helpful the AI seemed in the moment, the less participants learned from the experience. By offloading the work of understanding and remembering to the model, they missed the cognitive processes that usually lead to durable learning. Even when participants reviewed the material again later, those who had initially used the AI still performed worse in memory tests. The short-term convenience came at the cost of long-term understanding—a paradox at the heart of using LLMs as study companions.

My three key take-aways

From the studies, I think three lessons should be learned.

Both studies highlight a significant shift in how learners engage with material when using large language models (LLMs) like ChatGPT. Rather than promoting deeper understanding, the use of LLMs tends to reduce active cognitive engagement. Kosmyna et al. showed that students using ChatGPT displayed notably less brain activity during writing tasks, suggesting superficial engagement. Similarly, Li et al. found that learners relied more on the AI’s interpretation than on their own meaning-making processes. In both cases, the use of AI tools displaced the kinds of effortful processing typically associated with engagement and deeper learning processes.

Further, the studies reveal a paradox: while AI can assist in completing tasks, it appears to undermine long-term memory formation. Participants in Li et al.’s study remembered significantly less factual and conceptual information when they had used ChatGPT for reading comprehension. Kosmyna et al. observed similar patterns: ChatGPT users were often unable to recall even a single sentence from their own essays. This suggests that the use of LLMs disrupts the encoding and consolidation processes crucial to memory.

Finally, both studies suggest the emergence of cognitive offloading and tool dependency. In Kosmyna et al., students who had used ChatGPT struggled to return to unsupported writing, showing lower brain activity and reduced originality when AI was removed. Li et al. noted that even after revisiting the material, participants who initially used the LLM still performed worse, indicating lasting effects of early reliance. In essence, the more learners depend on AI, the more their autonomous cognitive abilities seem to atrophy.

Table 1: Summing up the key take-aways.

Learning Processes
  • Your Brain on ChatGPT (Kosmyna et al., 2024): AI use reduces cognitive activation and engagement. Students rely less on their own thinking, and their essays become more uniform and superficial.
  • The Memory Paradox (Li et al., 2024): AI use leads to shallow reading and reduced effort in comprehension. Participants often defer interpretation to the model instead of making meaning themselves.
  • Overall implications: AI increases the risk of superficial learning, turning students into passive recipients rather than active meaning-makers. Deep learning is weakened.

Memory
  • Your Brain on ChatGPT (Kosmyna et al., 2024): ChatGPT users had difficulty recalling what they had written. Many couldn’t quote a single sentence from their own essays.
  • The Memory Paradox (Li et al., 2024): Participants remembered significantly fewer facts and key ideas after using AI. Even after review, their performance remained lower.
  • Overall implications: AI disrupts the cognitive processes that support lasting understanding and memory formation.

Dependence on Tools
  • Your Brain on ChatGPT (Kosmyna et al., 2024): Students who used ChatGPT struggled to return to independent writing. Their brains remained less active, and their essays less original.
  • The Memory Paradox (Li et al., 2024): Although not the focus, the study shows lasting negative effects even after revisiting the material—suggesting cognitive reliance.
  • Overall implications: Signs of cognitive offloading and tool dependence. Overuse of AI may undermine self-directed learning.

Final remarks

The findings of the two studies raise important considerations for how educators integrate AI tools into teaching. Both studies suggest a need for intentional design—where AI use supports rather than replaces cognitive effort. This could involve more structured tasks that require students to reflect on and explain AI-generated content, or using LLMs as partners for dialogue and critique rather than generators of content. This kind of meta-reflection on tool use could enhance engagement and deeper understanding. Another option is a phased approach where students move between assisted and independent work. AI should be positioned not as a shortcut, but as a scaffold—used to enhance, not erode, the learning process.

The tools we use shape our understanding of problems and how to solve them. With LLMs at hand, we have a tool that can easily help us with tasks such as reading and writing texts, generating content, or even planning and structuring teaching and learning activities. But at what cost?

To me, there’s a real danger associated with the use of large language models: they risk making us mentally lazy and dulling our cognitive capacities. This can create a self-reinforcing cycle — the more we rely on generative AI, the more dependent we become, and the less effort we put into thinking for ourselves. In turn, we risk becoming not only more passive, but quite simply less intelligent. It’s a vicious circle. Of course, in certain contexts, generative AI can help us simplify or speed up specific tasks. But we must not lose sight of who should remain the creative force in the process. In schools, we give students assignments that serve both as training exercises and as tasks for developing understanding. Both types are essential, and we should not replace them just because we can, or because it feels easier. Learning is fundamentally based on effort and perseverance. When we ask students to write—not just letters, but full texts—we do so not only to produce content, but because the process teaches them to think, to structure, to organise their thoughts. It teaches them to be persistent and creative. As a kind of self-check, try noticing how long you can maintain your reading focus without reaching for your phone or being tempted to let an LLM summarise the text for you. Is it under 20 minutes? Or perhaps writing a longer text suddenly feels overwhelming or more exhausting than it used to?

These may be warning signs worth paying attention to—especially when it comes to children, whose brains are still forming and developing.

Empirical Research

Crafting Dialogic Classrooms: How Wood and Code Inspire Student Voice


In today’s push toward digital technology in education, we often ask: How can technology support—not replace—student engagement, dialogue and creativity? My colleague Lene Illum and I are asking ourselves how to design for students’ voices and dialogue in writing practices in school. As part of our research, we believe that meaningful answers lie at the intersection of the tangible and the digital. We have developed SkriveXpeditionen (The WritingXpedition), a design that draws on both aspects.

SkriveXpeditionen is a didactic design developed to support creative and dialogic writing processes in Danish L1 classrooms at the intermediate level (typically grade 5, ages 11-12). The design combines physical and digital tools to scaffold students’ narrative thinking, collaborative dialogue, and creative expression.

SkriveXpeditionen consists of two main elements.

First, a set of physical wooden tiles. These tiles are engraved with various narrative motifs and symbolic images. Some tiles include QR codes that link to excerpts from a shared literary starting text. The tiles function as tactile prompts that help students:

  • Generate and structure ideas
  • Decompose narrative elements
  • Visualise connections and develop plot structures

Second, the open-source interactive writing tool Twine. Twine enables students to create non-linear, interactive stories, where readers make choices that affect the outcome. The platform invites computational thinking through sequencing, logic, and hypertext structure, while simultaneously encouraging literary creativity.
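The kind of non-linear structure students build in Twine can be sketched in a few lines of Python. This is only an illustration, not Twine’s own syntax: passages are nodes, and reader choices are links between them. The passage names and texts below are invented for the example.

```python
# Each passage maps to (text shown to the reader, {choice label: next passage}).
# A passage with no outgoing links is an ending.
story = {
    "start": ("You find a horrible hand on the doorstep.",
              {"Pick it up": "pickup", "Run away": "run"}),
    "pickup": ("The hand twitches. The story branches again...", {}),
    "run": ("You flee, but the hand follows. The end?", {}),
}

def play(passage="start", choices=None):
    """Traverse the story along a list of pre-made reader choices,
    returning the sequence of passages visited."""
    choices = list(choices or [])
    visited = [passage]
    while choices:
        _text, links = story[passage]
        passage = links[choices.pop(0)]
        visited.append(passage)
    return visited

# Following the choice "Pick it up" from the start:
# play(choices=["Pick it up"]) → ["start", "pickup"]
```

The sequencing (order of passages), logic (which choice leads where), and hypertext structure (the link graph) mentioned above are all visible in this tiny model, which is why tools like Twine lend themselves to computational thinking alongside literary creativity.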

In the image, the pedagogical design is displayed. A is the starting point, where students hear a short read-aloud from the beginning of a novel. In this case, it is “The Horrible Hand”, a fantasy novel. B represents the wooden pieces and the phase where students engage and collaborate on their stories. Finally, C represents the process of creating their stories in Twine.

The Pedagogical Rationale

SkriveXpeditionen is designed to cultivate what Neil Mercer calls exploratory talk: a form of dialogue where students collectively explore, justify, and refine ideas. The material artefacts serve as mediators that bring students’ ideas into a shared dialogic space. The goal is not only to support writing outcomes but to:

  • Enhance oral language development
  • Foster embodied and multimodal communication
  • Promote partner-awareness and collaborative meaning-making

Learning Outcomes and Observations

Preliminary findings from classroom interventions show that SkriveXpeditionen:

  • Strengthens students’ engagement in the writing process
  • Encourages playful experimentation and co-creation
  • Supports students in structuring stories and making narrative decisions
  • Increases participation opportunities, especially for students who might struggle with traditional writing tasks

The design also reveals how materiality—in the form of physical tiles—can play a central role in shaping dialogic learning environments and computational literacy practices.

SkriveXpeditionen is more than a writing tool. It is a hybrid learning design that brings together literature, technology, and embodied dialogue to support student creativity and collaboration. By combining material and digital media, it opens new pedagogical pathways for teaching writing in ways that are meaningful, imaginative, and deeply social.

Enhancing Computational Literacy Through Objects to Think With

SkriveXpeditionen is a teaching design that invites students into a creative and exploratory learning space, where storytelling and technology are tightly interwoven. Drawing on Andrea diSessa’s concept of computational literacy, learning is understood here as a materially-supported deployment of skills and dispositions toward meaningful intellectual goals.

Unlike the narrower notion of computational thinking, often framed as a general set of problem-solving skills, computational literacy expands the view by emphasising three interrelated dimensions: the cognitive, the social, and the material. These dimensions are not separate layers but intertwined aspects of how learners engage with the world.

SkriveXpeditionen brings all three dimensions into play. The physical wooden tiles serve as cognitive scaffolds, allowing students to break down and recompose narrative ideas. Group work and conversation create a shared space where meaning is socially negotiated. At the same time, both the tiles and the Twine platform act as material mediators, giving shape to students’ abstract thinking and enabling new forms of interaction and expression.

In this interplay, students do not merely write stories—they engage in thinking through materials. They work with narrative elements, symbols, and code to construct meaning within a shared ecology of learning. It is precisely within this process that what Seymour Papert called powerful ideas begin to emerge.

Students learn how ideas evolve in collaboration and dialogue, as they articulate and refine their thinking through shared language, gesture, and embodied interaction. They work with narrative systems, grappling with cause-effect relationships, branching logic, and interactive structures—not as abstract concepts, but through concrete storytelling practices. They also explore how representational elements can be rearranged and transformed, discovering that story components are not fixed but fluid and malleable.

SkriveXpeditionen is not about teaching programming per se. Instead, it aligns with Papert’s deeper pedagogical vision of using technology as a medium for expression, a tool for exploration, and a mirror for thinking.

We will continue our research and development

Lene and I will continue our endeavour into SkriveXpeditionen and how to enhance and develop the design. Our next steps are to focus on a more generic concept that can be applied to a variety of texts in L1.

Stay tuned!

Empirical Research

Teaching with AI: Creativity, Student Agency, and the Role of Didactic Imagination

As generative artificial intelligence (GAI) rapidly makes its way into classrooms, discussions about its educational implications tend to oscillate between promise and peril. In our recent article published in Unge Pædagoger (https://u-p.dk/vare/2025-nr-2/), Peter Holmboe and I explore how GAI might become a tool not of automation, but of amplification — nurturing rather than replacing students’ creative engagement with the world.

At the heart of our argument is the idea that creativity is not a spontaneous spark or a gift bestowed on a few, but a socially and materially situated process that thrives on exploration, reflection, and dialogue. GAI, when used with care, can become a medium for such engagement — but only if educators retain a clear focus on human agency, intentionality, and context.

From Prompt to Product: A Framework for Creative AI Integration

To support this reframing, we propose a practical teaching model based on three focal points: prompt, process, and product. Each stage reflects different opportunities for teacher intervention and student engagement:

  • Prompts are not mere instructions to the machine; they are invitations to think differently, to explore multiple meanings, and to frame the problem creatively.
  • Processes involve iteration, dialogue, and experimentation — often where the real learning and growth happens.
  • Products, whether a story, a song, or a prototype, become less about perfection and more about reflection: what did we learn by making this?

This model is enriched by three complementary methods: immersion, tinkering, and disruption. Each represents a way for students to work with GAI in ways that retain ownership of the learning process.

  • Immersion promotes deep, focused work within well-defined boundaries.
  • Tinkering supports playful experimentation, where learning happens through trial, error, and surprise.
  • Disruption challenges habits and assumptions, using constraints or provocations to push thinking in new directions.

Creativity is Situated — and So is AI

We argue that creativity does not exist in a vacuum. Following Schön, Tanggaard, and Vygotsky, we locate creative thinking in the embodied, social, and material world. This is where human intelligence diverges most significantly from GAI: AI may generate content, but it cannot inhabit context. It predicts plausible output; it does not understand meaning. This has implications for how we teach with GAI. If students merely outsource creative tasks to a machine, we risk losing what matters most: their voice, their struggle, their growth. However, if we invite them to collaborate with GAI — to question it, repurpose it, and respond to it — then the technology becomes a stimulus, not a substitute.

Toward a Pedagogy of Possibility

Teaching with GAI calls for what we term didactic imagination: a combination of foresight, courage, and responsiveness. It means being willing to reshape curricula, adapt practices, and imagine new learning trajectories — not because we surrender to technological determinism, but because we remain committed to meaningful, learner-centered education. Seen through the notion of didactic imagination, teaching with GAI is not merely a matter of integrating a new tool into the classroom — it represents a profound shift in how we conceive of pedagogy, knowledge, and student engagement. Didactic imagination challenges educators to go beyond reactive adaptation and instead engage in proactive rethinking of educational practice. It is a stance that requires:

  • Foresight to anticipate how GAI may shape future forms of knowledge production, communication, and creativity — and to prepare students not just to use tools, but to question and redefine them.
  • Courage to depart from familiar routines, assessment models, and linear instructional design in favour of more open-ended, exploratory, and student-driven approaches.
  • Responsiveness to the evolving needs, interests, and capacities of students in a rapidly changing world — acknowledging that meaningful learning emerges in the dynamic interplay between structure and spontaneity, between teacher intention and student agency.

Didactic imagination implies treating curricula not as fixed templates, but as living frameworks that must be continually reinterpreted in light of new possibilities. This may mean designing activities where students co-develop prompts with GAI, reflect critically on algorithmic bias, or remix AI-generated content in ways that foreground their own perspectives. It may mean disrupting traditional roles of teacher and student, where the teacher becomes a co-inquirer, and the classroom becomes a lab for collective sense-making.

Importantly, embracing didactic imagination does not mean abandoning rigour or coherence. Rather, it calls on us to re-anchor educational practice in the core values of curiosity, empathy, agency, and dialogue. In this view, GAI becomes a provocateur — a reflective partner that invites new ways of asking questions, framing problems, and expressing understanding.

Thus, the real innovation lies not in the machine, but in how we choose to imagine and inhabit the pedagogical spaces it opens. The challenge for educators is to hold open these spaces — not for efficiency, but for exploration. Not to automate learning, but to animate it.

Concluding thoughts

In light of this, I invite colleagues across sectors and disciplines to pause and reflect — not merely on how generative AI (GAI) fits into current pedagogical structures, but on how it compels us to rethink some of the fundamental principles of education itself. The integration of GAI challenges us to reconsider what it means to learn, to create, and to be an agent in the process of knowledge-building.

What does student agency mean in an era of generative AI?

When machines can generate text, images, code, and even ideas with remarkable fluency, the concept of student agency cannot be reduced to mere task completion or content production. Agency must be reframed as the capacity to make meaningful decisions within complex, sociotechnical environments — to pose original questions, to shape technological tools for personal or communal ends, and to navigate ambiguity with intentionality. It’s about giving students the authority and responsibility to direct their learning journeys — not in isolation, but in active dialogue with intelligent systems. In this view, agency becomes not just the right to act, but the ability to critically reflect on how and why we act in partnership with AI.

How can we design learning experiences where GAI is used to provoke, not predetermine, creativity?

Too often, educational technology has been employed to automate or simplify learning, reducing complexity instead of engaging with it. But GAI opens new possibilities: it can serve as a creative irritant, a tool for playful experimentation, or a mirror that reflects and reframes student thinking. Learning designs that foreground iteration, co-construction, and reflection — rather than fixed outcomes — are essential. Imagine prompts that ask students to revise or challenge an AI-generated poem, or collaborative projects where students must make the logic behind AI decisions visible and debatable. In these scenarios, creativity is not something AI delivers — it is something students practice and develop through interaction with AI.

How do we assess creative work when the process involves both human and machine actors?

Traditional assessment models — focused on individual output, originality, and correctness — are poorly suited to hybrid creative processes. We need evaluative frameworks that can account for process, intention, and transformation. This includes assessing how students shape AI contributions, how they reflect on ethical and contextual implications, and how they position themselves as co-authors of meaning. Rubrics may include dimensions like critical decision-making, iterative development, or responsiveness to feedback. Importantly, assessment must shift from product-focused grading to process-aware evaluation — making visible the learning embedded in the co-creation journey.

Can disruption — not just fluency — become a valued competence in our AI-enhanced classrooms?

Fluency in AI tools is important, but fluency alone risks producing compliance rather than creativity. We must also value disruption — the ability to interrupt routines, challenge defaults, and see beyond the surface of algorithmic convenience. This includes introducing ‘productive friction’ into learning environments: constraints that force rethinking, prompts that provoke surprise, and design challenges that resist easy automation. By cultivating the capacity to critique and complicate technology, we nurture students who don’t just use AI, but who actively shape its cultural, ethical, and creative trajectories.

      Empirical Research

      Rethinking Computational Thinking in Education

      Computational thinking (CT) has become a buzzword in educational policy and curriculum reform. Promoted as a fundamental 21st-century skill, it is often described as a universal way of thinking—akin to literacy and numeracy. But beneath this seemingly neutral framing lies a deeper question: What kind of thinking do we want students to engage in, and what role should schools play in nurturing it?

      The current dominant view of CT, popularised by Jeannette Wing, sees it as a set of abstract, transferable skills drawn from computer science—algorithmic thinking, abstraction, problem decomposition. This approach fits neatly into existing curricular structures and assessment regimes, but it risks sidelining the messier, more situated, and culturally embedded dimensions of learning with and through computers.
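To make "problem decomposition" in this Wing-style sense concrete, here is a small illustrative sketch. The task and function names are my own invention, not drawn from Wing's work: a problem such as "find the most frequent word in a text" is broken into small, reusable steps.

```python
# Illustrative sketch of problem decomposition (hypothetical example):
# one task split into three small, composable steps.

def tokenize(text):
    """Step 1: break the text into lowercase words."""
    return text.lower().split()

def count_words(words):
    """Step 2: tally how often each word occurs."""
    freq = {}
    for word in words:
        freq[word] = freq.get(word, 0) + 1
    return freq

def most_frequent(freq):
    """Step 3: pick the word with the highest count."""
    return max(freq, key=freq.get)

text = "the turtle draws the square and the turtle turns"
result = most_frequent(count_words(tokenize(text)))
print(result)  # → "the"
```

Each step is abstract and transferable, which is exactly why this framing fits so neatly into curricula and tests; what it leaves out is why anyone would want to count these words in the first place.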

      An alternative perspective comes from Seymour Papert, whose work in the 1970s and 80s laid the groundwork for what we now call CT. Papert didn’t frame CT as a fixed set of skills. Instead, he was concerned with thinking deeply about thinking itself—learning through making, experimenting, and expressing ideas in computational media. His approach, known as constructionism, was grounded in the idea that children learn best when they are actively engaged in building things that are meaningful to them.

AI-generated image created in ChatGPT displaying three themes in Papert’s development of constructionism: the Turtle program, soap sculptures, and samba schools.


      Our educational system rejects the “false theories” of children, thereby rejecting the way children really learn. (Papert, Mindstorms 1980)

      Central to Papert’s vision were what he called “objects-to-think-with”—tangible or digital artefacts that serve as tools for thought. These could be programmable turtles on the screen, floor robots, or soap sculptures. The key is that learners engage with these objects not through instruction, but through exploration and iteration. The act of programming becomes a medium for expressing ideas, testing hypotheses, and developing personal and shared understandings.

      Papert’s notion of epistemological pluralism is equally crucial. He recognised that learners approach problems in different ways—some prefer planning and abstraction, others tinker and iterate. Both styles are valid, and a healthy learning environment supports this diversity. In contrast, much of today’s CT implementation privileges the abstract, logical, and formal, often marginalising intuitive, creative, or sensory approaches to computational problem-solving.

      Another critical insight from Papert is his view of schools as cultural institutions with deeply ingrained norms. He was sceptical of how technologies—computers included—tend to be absorbed into existing school structures rather than transforming them. He warned against what he called technocentrism—the belief that technological tools alone can drive educational change. For Papert, the real power of the computer lay not in the machine itself, but in its potential to disrupt traditional pedagogies and empower learners.

      Little by little the subversive features of the computer were eroded away: Instead of cutting across and so challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation (Papert, The Children’s Machine, 1993)

      Papert’s vision of a “Samba School for Computing” offers a compelling metaphor. Inspired by the inclusive, community-based learning culture of Brazilian samba schools, he imagined computational learning as a pluralistic, joyful, and participatory activity. Instead of rigid curricula and standardised assessment, imagine spaces where children and adults collaboratively explore, build, play, and perform with computational media—learning not just to code, but to express, critique, and co-create.

      This vision remains deeply relevant today. While CT is often justified by its economic utility—preparing students for future jobs—Papert reminds us that schools should not merely serve existing societal needs. They should be spaces for reimagining society itself. Rather than training students to think like computer scientists, we might ask how computation can support them in thinking like designers, storytellers, activists, or citizens.

      Moreover, Papert’s critique of the school’s “immune system”—its tendency to neutralise radical ideas—is as pertinent as ever. Today’s digital tools are often used to reinforce traditional instruction rather than to reimagine it. Many implementations of CT end up focusing on tool mastery rather than tool invention, reinforcing rather than disrupting existing power structures in education.

      A genuinely transformative approach to CT would begin not with abstract definitions but with concrete engagements: what are learners passionate about? What problems do they want to solve? What stories do they want to tell? From there, educators can scaffold experiences that build computational fluency in ways that are meaningful and contextually grounded.

      Key Takeaways for Schools Today:

1. Reframe CT as situated practice. Rather than treating computational thinking as a decontextualised skill set, we should design learning environments that situate CT in meaningful, hands-on, and culturally relevant practices.
2. Value epistemological diversity. Support different ways of knowing and thinking. Not all students thrive through abstraction—some learn best through tinkering, storytelling, or physical interaction with materials. All of these are valid pathways into computational understanding.
3. Challenge the school’s “immune system”. Schools must remain open to educational models that challenge the status quo. CT has the potential to democratise and humanise learning—if we resist the urge to reduce it to testable outcomes and instead embrace it as a medium for expression, reflection, and cultural participation.

      Empirical Research

      Assessing Computational Literacy in First Language (L1) Teaching

      New article out in Nordic Journal of Comparative and International Education.

      Computational Thinking should be rejected as a generic set of skills that can be applied and transferred to fit all subjects. Computational Thinking should rather be seen as context-dependent and integrated into the specific subject’s existing methods and traditions. In other words, instead of pushing a computer science template onto an existing subject, more consideration should be given to aligning computational approaches with the specific subject.

Proud to share a new article, Assessing Computational Literacy in First Language (L1) Teaching, by Marie Falkesgaard Slot and me.
In the article, we propose a cross-disciplinary framework to assess computational literacy (CL) in L1 settings, focusing on four principles that bridge traditional language arts with computational approaches.
We further reflect on how applying these principles can help formulate new learning goals that better align with the emerging demands of 21st-century education. Throughout the article, we argue that a CL approach provides a more socially rooted and context-sensitive method for integrating computational methods into non-computer science subjects, offering theoretical clarity and practical benefits for educators and researchers alike. The article opens with a discussion of the CL approach in relation to assessment.

      Book, Book Chapter, Design Experiments, Empirical Research

      Bridging the Gap: Computational Literacy Beyond Computer Science

      In today’s rapidly evolving world, computational literacy is no longer confined to computer science classrooms. It’s time we explore how these skills can enhance learning across all subjects, including language arts.

I’m excited to share insights from the 17th chapter of our recent book Creating Design Knowledge in Educational Innovation, where I explore how computational literacy can transform learning across all subjects—not just computer science.

In my chapter, Designing for Computational Literacy in Non-CS Subjects, I explore how computational literacy can be integrated into non-CS subjects such as language arts. I share some insights from a case, The Horrible Hand, in which pupils combine storytelling with computational tools like Twine to craft interactive, multimedia stories. This approach deepened their understanding of narrative structures and enhanced their creativity and collaboration skills.
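Twine stores stories in its own formats (Twee source, story formats like Harlowe), but the structure pupils actually work with is a graph of passages connected by links. A minimal sketch of that structure in Python — the passage names are invented for illustration and this is not Twine's actual file format:

```python
# A toy branching narrative, illustrating the passage-and-link graph
# that tools like Twine make visible. Passages are invented examples;
# this is not Twine's Twee format.
story = {
    "start": ("You find a horrible hand on the doorstep.",
              {"Pick it up": "hand", "Run away": "escape"}),
    "hand": ("The hand twitches in your grasp. The story branches on.", {}),
    "escape": ("You flee down the street, but the story follows.", {}),
}

def passage(name):
    """Return a passage's text and the reader's available choices."""
    text, links = story[name]
    return text, sorted(links)

def follow(name, choice):
    """Move to the passage a given choice links to."""
    _, links = story[name]
    return links[choice]

text, choices = passage("start")
print(text)
print(choices)                         # the branches open to the reader
print(follow("start", "Pick it up"))   # → "hand"
```

Writing even a toy engine like this makes the narrative structure itself an object of discussion: pupils can see that a story is a set of nodes and choices, and argue about which branches it should have.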

      By integrating computational methods into different subject areas, we unlock new learning possibilities for students. This leads us to important questions:

      💡 How can we make computational literacy accessible and meaningful across diverse subjects?

      💡 What new literacies can emerge when we blend computational tools with traditional teaching methods?

      💡 How can this empower students to engage in deeper, more meaningful learning?

      The future of education is cross-disciplinary and integrative—and my chapter highlights these opportunities.

      Book Chapter, Design Experiments, Publication

Linking Design Principles to Context and Evidence

      A Semantic Web Approach

      Empirical Research

      Creating Design Knowledge in Educational Innovation: Theory, Methods, and Practice

I’m thrilled to announce a recent book to which I contributed several chapters and for which I was part of the editorial team.

In today’s rapidly evolving educational landscape, understanding how research-informed design knowledge is created, represented, and applied is critical for innovation. Our book Creating Design Knowledge in Educational Innovation: Theory, Methods, and Practice provides a comprehensive exploration of this process, offering theoretical, methodological, and practical insights for those involved in educational research and innovation projects.

Through 21 chapters, the book delves into how educational researchers, designers, teachers, and other practitioners can ensure that the outcomes of their projects are not only scalable and applicable but also impactful in real-world educational settings. It provides practical “know-how” based on robust research and design experience, making it a valuable resource for anyone looking to bridge the gap between theory and practice.

Through the work and collaboration of 19 international researchers, the book critically reflects on current theories and methodologies while also looking ahead to future developments. Emerging technologies such as semantic web tools and AI are explored as potential game-changers in how we approach educational research and design.

      The book is particularly useful for researchers, students, and designers aiming to produce research-informed design principles that are both grounded in evidence and practically applicable. Whether you are involved in research or on-the-ground innovation, this book offers essential guidance on creating design knowledge that truly makes a difference.

      Link to the whole book is here

Download a preview of the book, including the table of contents and the first introductory chapter. Click here

      Empirical Research

AI-Enhanced Google NotebookLM

I’ve been investigating the new podcast feature in Google’s NotebookLM, and I must admit that I’m quite amazed by it.

      Google’s NotebookLM (formerly known as Project Tailwind) represents a significant advancement in AI-powered tools aimed at enhancing the management and interaction with personal notes and documents. Utilizing large language models, the tool enables users to upload documents such as research papers, lecture notes, or study guides and subsequently ask questions, generate summaries, and retrieve specific information.

      One notable feature is the introduction of a podcast option, which allows users to generate audio content based on their notes. For students, the potential benefits are clear: personalized study assistance, streamlined access to key concepts, and the ability to review materials in a variety of formats. For educators, NotebookLM promises efficiency in lesson planning and content delivery, allowing for the rapid curation and dissemination of key academic materials.

      However, there are critical considerations to take into account. The reliance on AI-generated summaries and answers raises concerns about the accuracy and depth of the information provided. The simplification of complex topics through automated summaries could lead to an oversimplification of content, potentially missing nuances essential to deep understanding. Additionally, the podcast feature, while convenient, risks reducing engagement with written materials, which remain crucial for the development of critical reading and analytical skills.

To give a taste, I share the result of letting NotebookLM transform one of my open-access articles into a podcast. Both the article and the podcast can be reviewed, and even though it is simplified, the podcast explains some complex issues in a quite ‘refreshing’ way. I even took some of its analogies into my upcoming presentation for my students.

      Link to the article: https://learningtech.laeremiddel.dk/en/read-learning-tech/learning-tech-13/computational-literacy/

      Listen to the podcast here