Empirical Research

Asking Slow Questions in the Age of Fast Machines: Neil Postman’s Seven Questions and the AI-Powered Classroom

This week I watched an old college lecture from 1997, “The Surrender of Culture to Technology”, by the media theorist and cultural critic Neil Postman. The lecture was based on his book Technopoly, in which Postman, in a quite entertaining and provocative way, raises a series of questions that need answering with regard to new technology. Television and the internet in particular are in his scope during the talk.

Today, almost 30 years later, in a time when artificial intelligence (AI) is becoming part of classrooms, lesson planning, and student assessment, it is likewise urgent to pause and ask the right questions. In Technopoly (1992) Neil Postman proposed seven questions that any society should ask when a new technology is introduced. His intention was not to stop innovation, but to foster technological literacy, meaning not just the ability to use technology, but the ability to understand its cultural, social, and political consequences. As schools and other educational institutions across the globe begin implementing generative AI tools like ChatGPT, adaptive learning platforms, and AI-based grading assistants, Postman’s critical lens becomes not only relevant but, in my opinion, quite necessary.

Postman’s Seven Questions, Reimagined for the Age of AI in Education

Postman asks the following seven questions. Answering one makes it possible to move on to the next; in that sense, there is a taxonomy to the questions. I have listed them here.

  • What is the problem that this new technology solves?
  • Whose problem is it?
  • What new problems do we create by solving this problem?
  • Which people and institutions will be most impacted by a technological solution?
  • What changes in language occur as the result of technological change?
  • Which shifts in economic and political power might result when this technology is adopted?
  • What alternative (and unintended) uses might be made of this technology?

Rethinking Education in the Age of AI: A Postman Perspective

It is worth pausing to ask not only what AI can do, but why we are inviting it into education in the first place. Postman’s questions are a way to resist the seduction of innovation for its own sake. Reimagining Postman’s seven inquiries in the context of today’s AI revolution in schools reveals both the promises and the perils of our current trajectory.

We begin, as Postman would, by asking: What is the problem to which this technology is the solution? In education, AI is often framed as a remedy for overworked teachers, disengaged students, or slow feedback loops. Tools powered by machine learning claim to tailor instruction to the individual, offering faster responses than any teacher could manage. But beneath this efficiency lies a more fundamental question: is the core challenge really a lack of automation – or rather, a lack of meaningful human connection in learning? This leads us to the second question: Whose problem is it? The burdens AI alleviates – lesson planning, grading, administrative tracking – are largely those of the teacher or the institution. Rarely does AI directly respond to the student’s need for dialogue, struggle, or relational guidance. When a student submits an AI-generated essay, the final product may appear polished, but the learning process of drafting, reflecting, and revising often vanishes. In solving the adult’s problem, we may be ignoring the child’s.
Yet every solution brings new complications. What new problems might be created by solving the old one? In one Danish secondary school, the use of ChatGPT among students has led to a spike in what teachers call “algorithmic authorship.” Educators now spend more time detecting machine-written work than offering thoughtful feedback. The tool meant to conquer writer’s block has instead eroded authorship, critical thinking, and integrity, forcing teachers into the role of AI police rather than mentors.

So, who benefits? Certainly the EdTech industry, whose products are increasingly embedded in national education policies. Governments hoping to reduce costs and standardize testing also stand to gain. But do students truly benefit, when automation risks dulling their curiosity, creativity, and capacity for reflection? In classrooms where AI-generated feedback replaces teacher dialogue, efficiency comes at the expense of education.

And inevitably, who loses out? The open-ended question loses. The productive error loses. The slow conversation and the unpredictable insight lose. In a U.S. high school piloting AI tutoring, students report turning to the chatbot first, not their peers, not their teachers. Authority is shifting, and with it, the fragile space where democratic dialogue and educational experimentation unfold.

At its core, every technology promotes certain values. So we must ask: What values does AI promote in education? The dominant values are speed, precision, and performance. These are not inherently negative, but they may come at the cost of empathy, ambiguity, and critical reflection. Education, in its richest form, is not about solving problems quickly, but about dwelling in questions, learning how to navigate complexity, contradiction, and uncertainty. These are not tasks that can or should be outsourced to algorithms.

Finally, we must ask: Which institutions are changed by this technology, and how?
Schools, once envisioned as democratic communities of inquiry, risk becoming data-driven service platforms. In some UK primary schools, AI-generated reading assessments have replaced teacher-pupil conversations. The result? More data points, but fewer relationships. More measurement, but less meaning.

Even though I may be playing the role of the overly critical observer here, my concern is that if we adopt AI in education without asking these seven questions, we risk letting the technology reshape our values, practices, and institutions in ways we neither intended nor fully understand. Neil Postman warned us not to become tools of our tools. In an age of smart machines, the real test of our intelligence is whether we still remember how to ask the human questions.

Democracy Requires Friction

Postman argued that new technologies are not additions to a culture; they change everything. AI in education is not just a tool – it’s a force that reshapes how we think about knowledge, authority, and agency. If democratic education means more than standardized test scores – if it means learning to think together, disagree respectfully, and act ethically – then we must treat AI with caution and curiosity, not blind adoption. The purpose of education is not to prepare students to become machines. It is to help them become fully human. This includes the ability to ask questions that machines cannot answer: Who am I responsible for? What kind of society do we want? What does it mean to be free? Postman reminds us that just because we can automate learning doesn’t mean we should. We should not fear technology, but we should fear forgetting to ask what it asks of us. Neil Postman gave us a framework for technological critique grounded in human values, democratic education, and cultural awareness.

In the evolving landscape of political communication, politicians and heads of state are increasingly turning to artificial intelligence as an instrument of narrative control and persuasion. AI-powered content generation, micro-targeting, and sentiment analysis allow leaders to craft highly personalized, emotionally resonant messages that can bypass traditional media gatekeepers and exploit citizens’ psychological vulnerabilities. This creates a profound imbalance in democratic discourse: when politicians use AI to simulate authenticity, amplify propaganda, or flood public spheres with tailored disinformation, they effectively automate manipulation. The opacity of algorithmic messaging, often delivered through digital echo chambers, blurs the line between persuasion and coercion. Rather than fostering informed participation, such practices risk undermining public trust, diluting accountability, and eroding the deliberative foundations upon which democratic societies depend.

In my opinion, asking Postman’s questions in a time of artificial intelligence is not just an intellectual exercise – it is a civic responsibility, especially when dealing with education.

References:

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Knopf.

Postman, N. (1995). The end of education: Redefining the value of school. New York: Vintage Books.

Postman, N. (1999). Building a bridge to the 18th century: How the past can improve our future. New York: Vintage Books.

Danish Secondary School using AI

Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation

Empirical Research

Teaching with AI: Creativity, Student Agency, and the Role of Didactic Imagination


As generative artificial intelligence (GAI) rapidly makes its way into classrooms, discussions about its educational implications tend to oscillate between promise and peril. In our recent article published in Unge Pædagoger (https://u-p.dk/vare/2025-nr-2/), Peter Holmboe and I explore how GAI might become a tool not of automation, but of amplification — nurturing rather than replacing students’ creative engagement with the world.
At the heart of our argument is the idea that creativity is not a spontaneous spark or a gift bestowed on a few, but a socially and materially situated process that thrives on exploration, reflection, and dialogue. GAI, when used with care, can become a medium for such engagement — but only if educators retain a clear focus on human agency, intentionality, and context.

From Prompt to Product: A Framework for Creative AI Integration

To support this reframing, we propose a practical teaching model based on three focal points: prompt, process, and product. Each stage reflects different opportunities for teacher intervention and student engagement:

  • Prompts are not mere instructions to the machine; they are invitations to think differently, to explore multiple meanings, and to frame the problem creatively.
  • Processes involve iteration, dialogue, and experimentation — often where the real learning and growth happen.
  • Products, whether a story, a song, or a prototype, become less about perfection and more about reflection: what did we learn by making this?

This model is enriched by three complementary methods: immersion, tinkering, and disruption. Each represents a way for students to work with GAI in ways that retain ownership of the learning process.

  • Immersion promotes deep, focused work within well-defined boundaries.
  • Tinkering supports playful experimentation, where learning happens through trial, error, and surprise.
  • Disruption challenges habits and assumptions, using constraints or provocations to push thinking in new directions.

Creativity is Situated — and So is AI

We argue that creativity does not exist in a vacuum. Following Schön, Tanggaard, and Vygotsky, we locate creative thinking in the embodied, social, and material world. This is where human intelligence diverges most significantly from GAI: AI may generate content, but it cannot inhabit context. It predicts plausible output; it does not understand meaning. This has implications for how we teach with GAI. If students merely outsource creative tasks to a machine, we risk losing what matters most: their voice, their struggle, their growth. However, if we invite them to collaborate with GAI — to question it, repurpose it, and respond to it — then the technology becomes a stimulus, not a substitute.

Toward a Pedagogy of Possibility

Teaching with GAI calls for what we term didactic imagination: a combination of foresight, courage, and responsiveness. It means being willing to reshape curricula, adapt practices, and imagine new learning trajectories — not because we surrender to technological determinism, but because we remain committed to meaningful, learner-centered education. Seen through the notion of didactic imagination, teaching with GAI is not merely a matter of integrating a new tool into the classroom — it represents a profound shift in how we conceive of pedagogy, knowledge, and student engagement. Didactic imagination challenges educators to go beyond reactive adaptation and instead engage in proactive rethinking of educational practice. It is a stance that requires:

  • Foresight to anticipate how GAI may shape future forms of knowledge production, communication, and creativity — and to prepare students not just to use tools, but to question and redefine them.
  • Courage to depart from familiar routines, assessment models, and linear instructional design in favour of more open-ended, exploratory, and student-driven approaches.
  • Responsiveness to the evolving needs, interests, and capacities of students in a rapidly changing world — acknowledging that meaningful learning emerges in the dynamic interplay between structure and spontaneity, between teacher intention and student agency.

Didactic imagination implies treating curricula not as fixed templates, but as living frameworks that must be continually reinterpreted in light of new possibilities. This may mean designing activities where students co-develop prompts with GAI, reflect critically on algorithmic bias, or remix AI-generated content in ways that foreground their own perspectives. It may mean disrupting traditional roles of teacher and student, where the teacher becomes a co-inquirer, and the classroom becomes a lab for collective sense-making.

Importantly, embracing didactic imagination does not mean abandoning rigour or coherence. Rather, it calls on us to re-anchor educational practice in the core values of curiosity, empathy, agency, and dialogue. In this view, GAI becomes a provocateur — a reflective partner that invites new ways of asking questions, framing problems, and expressing understanding.

Thus, the real innovation lies not in the machine, but in how we choose to imagine and inhabit the pedagogical spaces it opens. The challenge for educators is to hold open these spaces — not for efficiency, but for exploration. Not to automate learning, but to animate it.

Concluding thoughts

In light of this, I invite colleagues across sectors and disciplines to pause and reflect — not merely on how generative AI (GAI) fits into current pedagogical structures, but on how it compels us to rethink some of the fundamental principles of education itself. The integration of GAI challenges us to reconsider what it means to learn, to create, and to be an agent in the process of knowledge-building.

What does student agency mean in an era of generative AI?

When machines can generate text, images, code, and even ideas with remarkable fluency, the concept of student agency cannot be reduced to mere task completion or content production. Agency must be reframed as the capacity to make meaningful decisions within complex, sociotechnical environments — to pose original questions, to shape technological tools for personal or communal ends, and to navigate ambiguity with intentionality. It’s about giving students the authority and responsibility to direct their learning journeys — not in isolation, but in active dialogue with intelligent systems. In this view, agency becomes not just the right to act, but the ability to critically reflect on how and why we act in partnership with AI.

How can we design learning experiences where GAI is used to provoke, not predetermine, creativity?

Too often, educational technology has been employed to automate or simplify learning, reducing complexity instead of engaging with it. But GAI opens new possibilities: it can serve as a creative irritant, a tool for playful experimentation, or a mirror that reflects and reframes student thinking. Learning designs that foreground iteration, co-construction, and reflection — rather than fixed outcomes — are essential. Imagine prompts that ask students to revise or challenge an AI-generated poem, or collaborative projects where students must make the logic behind AI decisions visible and debatable. In these scenarios, creativity is not something AI delivers — it is something students practice and develop through interaction with AI.

How do we assess creative work when the process involves both human and machine actors?

Traditional assessment models — focused on individual output, originality, and correctness — are poorly suited to hybrid creative processes. We need evaluative frameworks that can account for process, intention, and transformation. This includes assessing how students shape AI contributions, how they reflect on ethical and contextual implications, and how they position themselves as co-authors of meaning. Rubrics may include dimensions like critical decision-making, iterative development, or responsiveness to feedback. Importantly, assessment must shift from product-focused grading to process-aware evaluation — making visible the learning embedded in the co-creation journey.

Can disruption — not just fluency — become a valued competence in our AI-enhanced classrooms?

Fluency in AI tools is important, but fluency alone risks producing compliance rather than creativity. We must also value disruption — the ability to interrupt routines, challenge defaults, and see beyond the surface of algorithmic convenience. This includes introducing ‘productive friction’ into learning environments: constraints that force rethinking, prompts that provoke surprise, and design challenges that resist easy automation. By cultivating the capacity to critique and complicate technology, we nurture students who don’t just use AI, but who actively shape its cultural, ethical, and creative trajectories.

Empirical Research

AI-enhanced Google NotebookLM

I’ve been investigating the new podcast feature in Google’s NotebookLM, and I must admit that I’m quite amazed by it.

Google’s NotebookLM (formerly known as Project Tailwind) represents a significant advancement in AI-powered tools aimed at enhancing the management and interaction with personal notes and documents. Utilizing large language models, the tool enables users to upload documents such as research papers, lecture notes, or study guides and subsequently ask questions, generate summaries, and retrieve specific information.

One notable feature is the introduction of a podcast option, which allows users to generate audio content based on their notes. For students, the potential benefits are clear: personalized study assistance, streamlined access to key concepts, and the ability to review materials in a variety of formats. For educators, NotebookLM promises efficiency in lesson planning and content delivery, allowing for the rapid curation and dissemination of key academic materials.

However, there are critical considerations to take into account. The reliance on AI-generated summaries and answers raises concerns about the accuracy and depth of the information provided. The simplification of complex topics through automated summaries could lead to an oversimplification of content, potentially missing nuances essential to deep understanding. Additionally, the podcast feature, while convenient, risks reducing engagement with written materials, which remain crucial for the development of critical reading and analytical skills.

To give a taste, I share the result of letting NotebookLM transform one of my open-access articles into a podcast. Both the article and the podcast can be reviewed, and even if it is simplified, the podcast explains some complex issues in quite a ‘refreshing’ way. I even took some of the analogies into my upcoming presentation for my students.

Link to the article: https://learningtech.laeremiddel.dk/en/read-learning-tech/learning-tech-13/computational-literacy/

Listen to the podcast here

Thoughts

How AI is Changing The Idea Of Lifelong Learning

In today’s world, Artificial Intelligence (AI) is not just for tech experts—it’s something we’re all interacting with more and more, sometimes without even realizing it. From apps that help you learn a new language to smart systems that assist in your workplace, AI is everywhere. But how exactly does AI change the way we learn throughout our lives? When reading through research and books, I found that I could sort the literature on AI into three main ideas. Let’s take a quick tour of these ideas!

Idea 1. AI for Efficiency: Making Learning Faster And More Efficient

The first idea focuses on how AI can make learning more efficient. Imagine having a personal tutor available 24/7, helping you practice new skills or understand difficult concepts. AI systems like these are designed to step in where humans might be limited by time or resources. They’re especially common in workplaces, where they help employees learn new skills quickly to keep up with changes in technology. The downside? These AI tools often treat learning as something mechanical—focused on getting the right answers or completing tasks as quickly as possible, without much room for creativity or deep thinking.

Key Point: AI can help us learn faster, but it might miss the bigger picture of what learning is really about.

Idea 2. AI as A Virtual Assistant: Learning With AI, Not Just From It

The second idea highlights AI as more of a partner or assistant. Instead of just spitting out answers, AI systems work alongside humans, offering feedback and suggestions as we learn together. “Coboting” could be a term for that. For example, AI might act as a language assistant, helping you practice speaking by responding to your sentences in real-time. This approach recognizes that learning isn’t just about getting the correct answers—it’s about the process, the environment, and the relationships involved.

Key Point: AI can be a learning buddy, making the process more interactive and engaging.

Idea 3. AI As An Impulse For Change: Changing How We Learn Altogether

The third, and most futuristic idea, sees AI as a force that could completely change how we think about learning. Here, AI isn’t just a tool or a partner—it’s part of a broader shift in how we live and work. This might mean that AI is reshaping the roles humans play in the workplace or even in our personal lives. For instance, an AI might not just help you learn a language but might be part of a new way of communicating entirely, blurring the lines between technology and humanity.

Key Point: AI could lead to entirely new ways of learning, pushing us to rethink our roles in the world.

Wrapping It Up

AI is reshaping the idea of lifelong learning – from speeding up the learning process to transforming it altogether. Whether AI is acting as a tool, an assistant, or a revolutionary force, it’s clear that the future of institutionalized learning is going to look very different from what we’re used to. And while these changes offer possibilities, they also challenge us to think critically about what it means to learn in a world where AI has become an agent. Do we want AI in school – and if so, how do we want it to shape the way we work?

A recent study gives a nice review of the current research within the above mentioned areas. For further reading: Palenski, T., Hills, L., Unnikrishnan, S. et al. How AI Works: Reconfiguring Lifelong Learning. Postdigit Sci Educ (2024). https://doi.org/10.1007/s42438-024-00496-y

Empirical Research

AI in a Danish educational context

On Wednesday, April 24, 2024, the expert group appointed by the Danish government released its recommendations regarding ChatGPT in relation to test and examination formats.

They can be found here: https://www.uvm.dk/aktuelt/nyheder/uvm/2024/april/240424-ekspertgruppe-klar-med-anbefalinger-for-brug-af-chatgpt-ved-proever

On Thursday, April 25, 2024, a more nuanced opinion piece followed. The expert group suggests a paradigm shift and advocates considering fewer and different testing formats, not solely relying on written, reproductive, and individual assessments of students’ knowledge and skills. https://www.altinget.dk/uddannelse/artikel/medlemmer-af-ekspertgruppe-her-er-de-anbefalinger-vi-ikke-blev-bedt-om

This has triggered some thoughts that I would like to share here.

Not much new added

I will hardly offend anyone (that is certainly not my intention) by pointing out that the recommendations and nuances add little new to the table; rather, they reinforce something that has been pointed out for years – just with a different rationale than artificial intelligence. And maybe that’s fair enough, since it’s not really the task of the expert group. Therefore, it’s particularly pleasing that they subsequently supplement with their other considerations – which were not commissioned by the Ministry of Education.

I was also glad to see that the article in the Danish online news outlet Altinget is not just about ChatGPT and digital tools but, more broadly, about generative artificial intelligence. That’s a very important nuance. Artificial intelligence is much more than large language models – as the expert group also emphasises.

With language models in mind, it’s obvious that traditional testing formats no longer make sense. That collaboration, creativity, critical thinking, and communication skills are important is just as obvious. That tests should be based on a practical and student-oriented approach has been discussed since the early 1900s, starting with the work of Thorndike and colleagues on how learning transfers.

So, why hasn’t anything happened earlier?

Perhaps because the calculator, computer, internet, Wikipedia, and other technological developments gave us a greater sense of being in control than artificial intelligence does. Perhaps because now, it would be politically foolish not to do something about what has been pointed out for so long in education. Since the consequences of doing nothing would be obvious to everyone, including the public.

It has been said before, but again, it can’t be said enough. We need to rethink the school’s continued logic of industrialization, where instead of taming the world as if it were a wild bull, students are driven through steel gates to slaughter as if they were beef cattle.

At the same time, we might also need to rethink what we understand by life skills in our age. On the one hand, being able to understand and handle the digital layer surrounding us. And on the other hand, being able to emancipate ourselves from being dependent on it.

The school should encompass both.

Recent times with cybercrime, war, and pandemics have clearly shown the helplessness and panic that sneak into a population when technology fails or a minor or major crisis hits us.

One could briefly consider: what do we (as individuals and communities) do if we lose power for 2-3 weeks due to a super solar storm or an attack on critical infrastructure? Neither is as unlikely as we think, and the question is whether we are adaptable enough to handle it.

On a less existential level, smaller challenges such as the Chromebook issues in Danish schools from 2022 (and onwards) can create major concerns and almost paralyze teaching. The Danish Data Protection Agency’s restrictions and decisions regarding the limitation of Google Workspace in schools led to statements like “We can’t teach without Chromebooks.” Perhaps an exaggeration to emphasize a point, but also a symptom of how technology can create needs that are difficult to ignore.

Paradoxically, it could prompt the question: Do we want a school that becomes dependent on artificial intelligence and other digital solutions? Or a school that shirks its responsibility to develop versatile and cultured individuals who will navigate a world with these technologies?

So, it’s about balance!

“A teacher that could be replaced by Google should be!” – a saying well known in the education landscape in 2016. Could the same sentence be rewritten today, with “ChatGPT” replacing “Google”?

In any case, reflection is required on the balance between, on the one hand, teaching and education that require human contact, and, on the other, learning that can be accessed through dialogue with artificial intelligence.

That the language models are imprecise, hallucinate, or don’t account for X or Y is only a temporary setback, not a lasting argument for human teachers. It’s just a matter of time before more and larger data and training sets are released and provided – then a large language model such as ChatGPT can provide a more precise and nuanced answer than any teacher or educator.

So, what kind of school/education and teacher/educator is necessary? This is the fundamental question that arises.

In light of the possibilities with artificial intelligence, the most immediate and banal answer is that it will be the school or teacher who focuses not primarily on knowledge and skills, but on relationships, humanity, empathy, adaptability, embodiment, and creativity.

This raises the question of whether our educational systems can handle this kind of school thinking when we also see the need to compare ourselves and live up to international test standards.

More skilled or lazier

As the expert group points out, large language models in education increase the need for students to be good at asking questions rather than providing answers. At the same time, one might add that students should also become adept at modelling questions about the world computationally and properly validating answers so that artificial intelligence is a help and enrichment for students’ activities in school.
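To make the idea of “modelling questions about the world computationally and properly validating answers” slightly more concrete, here is a minimal, hypothetical sketch in Python. The scenario, the dates, and the chatbot’s claimed answer are all invented for the illustration; the point is only that a student can reproduce and check an AI-given figure with a few lines of their own code rather than accepting it on faith:

```python
from datetime import date, timedelta

# Hypothetical question a student might want to model computationally:
# "How many school days are left before the exam on 20 June 2025,
# assuming a five-day school week and no holidays?" (invented scenario)

def school_days_between(start: date, end: date) -> int:
    """Count weekdays (Monday-Friday) from start (inclusive) to end (exclusive)."""
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # weekday() returns 0-4 for Monday-Friday
            days += 1
        current += timedelta(days=1)
    return days

# Suppose a chatbot answered "about 40 school days" for this interval.
claimed_by_ai = 40
computed = school_days_between(date(2025, 4, 28), date(2025, 6, 20))
print(computed)                             # the independently computed figure
print(abs(computed - claimed_by_ai) <= 2)   # is the AI's claim roughly right?
```

The model itself is trivial, but the habit it illustrates is not: translating a question into something checkable, and then treating the AI’s answer as a claim to be validated rather than a result to be copied.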

The fear of cheating is real enough, but perhaps we should fear laziness even more in the long term. Not lazy in the sense that we just lean back; on the contrary, one can imagine that we now must accomplish even more in less time, since AI can assist us in solving different tasks more efficiently. No, lazy in the sense that we no longer need to think, ponder, remember, and concentrate – because artificial intelligence entices us with quick answers and solutions. Neuroscience researchers have long pointed out that digital technologies have consequences for these brain functions, and that our ability to remember is closely related to bodily experiences and memories thereof. So, prompting AI to do our thinking tasks poses a real risk of making our brains lazy.

Therefore, the tests and evaluation formats now being developed should embed AI as a tool and be based on the students’ situational contexts. AI can be a powerful tool for generating ideas and for helping to aggregate, organise, and summarise some forms of knowledge. However, solving real-world human problems in contexts that depend on action-based solutions requires humans.

What the future holds is uncertain. As the researcher and futurist Roy Amara once said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” I do think that we need to think carefully about AI in education. With the work of Joseph Weizenbaum in mind, there are things AI can do that we should not want it to do.