Empirical Research

Crafting Dialogic Classrooms: How Wood and Code Inspire Student Voice


In today’s push toward digital technology in education, we often ask: How can technology support—not replace—student engagement, dialogue, and creativity? My colleague Lene Illum and I are asking ourselves how to design for students’ voices and dialogue in writing practices in school. As part of our research, we believe that meaningful answers lie at the intersection of the tangible and the digital. We have developed SkriveXpeditionen (The WritingXpedition), a design that draws on both.

SkriveXpeditionen is a didactic design developed to support creative and dialogic writing processes in Danish L1 classrooms at the intermediate level (typically grade 5, ages 11-12). The design combines physical and digital tools to scaffold students’ narrative thinking, collaborative dialogue, and creative expression.
SkriveXpeditionen consists of two main elements.

First, a set of physical wooden tiles. These tiles are engraved with various narrative motifs and symbolic images. Some tiles include QR codes that link to excerpts from a shared literary starting text. The tiles function as tactile prompts that help students:

  • Generate and structure ideas
  • Decompose narrative elements
  • Visualise connections and develop plot structures

Second, the open-source interactive writing tool Twine. Twine enables students to create non-linear, interactive stories, where readers make choices that affect the outcome. The platform invites computational thinking through sequencing, logic, and hypertext structure, while simultaneously encouraging literary creativity.
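
To make the branching structure concrete, below is a minimal, hypothetical sketch of the kind of story logic students build in Twine: named passages connected by reader choices. It is written in Python rather than Twine’s own passage format, purely to illustrate the sequencing and branching involved; the passage names and story text are invented for this example.

```python
# A minimal, hypothetical model of a branching story: each passage has text
# and a set of choices, where each choice points to another passage.
story = {
    "start": {
        "text": "You find a carved wooden tile on the forest path.",
        "choices": {"Pick it up": "tile", "Walk past it": "path"},
    },
    "tile": {
        "text": "The tile shows a horrible hand. It begins to glow.",
        "choices": {"Drop it": "path", "Hold on": "ending_glow"},
    },
    "path": {
        "text": "You continue down the path and the story ends quietly.",
        "choices": {},
    },
    "ending_glow": {
        "text": "Light swallows you. To be continued...",
        "choices": {},
    },
}

def play(passage_id="start"):
    """Walk through the story, letting the reader choose each next passage."""
    passage = story[passage_id]
    print(passage["text"])
    while passage["choices"]:
        options = list(passage["choices"])
        for number, option in enumerate(options, 1):
            print(f"  {number}. {option}")
        pick = options[int(input("Choose: ")) - 1]
        passage = story[passage["choices"][pick]]
        print(passage["text"])

if __name__ == "__main__":
    play()
```

In Twine itself, each entry would correspond to a passage and each choice to a link, which is where the hypertext structure mentioned above comes from.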

The image displays the pedagogical design. A is the starting point, where students listen to a short read-aloud from the beginning of a novel. In this case, it is “The Horrible Hand”, a fantasy novel. B represents the wooden pieces and the phase where students engage and collaborate on their stories. Finally, C represents the process of creating their stories in Twine.

      The Pedagogical Rationale

      SkriveXpeditionen is designed to cultivate what Neil Mercer calls exploratory talk: a form of dialogue where students collectively explore, justify, and refine ideas. The material artefacts serve as mediators that bring students’ ideas into a shared dialogic space. The goal is not only to support writing outcomes but to:

      • Enhance oral language development
      • Foster embodied and multimodal communication
      • Promote partner-awareness and collaborative meaning-making

      Learning Outcomes and Observations

      Preliminary findings from classroom interventions show that SkriveXpeditionen:

      • Strengthens students’ engagement in the writing process
      • Encourages playful experimentation and co-creation
      • Supports students in structuring stories and making narrative decisions
      • Increases participation opportunities, especially for students who might struggle with traditional writing tasks

      The design also reveals how materiality—in the form of physical tiles—can play a central role in shaping dialogic learning environments and computational literacy practices.
      SkriveXpeditionen is more than a writing tool. It is a hybrid learning design that brings together literature, technology, and embodied dialogue to support student creativity and collaboration. By combining material and digital media, it opens new pedagogical pathways for teaching writing in ways that are meaningful, imaginative, and deeply social.

Enhancing Computational Literacy through Objects-to-Think-With

      SkriveXpeditionen is a teaching design that invites students into a creative and exploratory learning space, where storytelling and technology are tightly interwoven. Drawing on Andrea diSessa’s concept of computational literacy, learning is understood here as a materially-supported deployment of skills and dispositions toward meaningful intellectual goals.

      Unlike the narrower notion of computational thinking, often framed as a general set of problem-solving skills, computational literacy expands the view by emphasising three interrelated dimensions: the cognitive, the social, and the material. These dimensions are not separate layers but intertwined aspects of how learners engage with the world.

      SkriveXpeditionen brings all three dimensions into play. The physical wooden tiles serve as cognitive scaffolds, allowing students to break down and recompose narrative ideas. Group work and conversation create a shared space where meaning is socially negotiated. At the same time, both the tiles and the Twine platform act as material mediators, giving shape to students’ abstract thinking and enabling new forms of interaction and expression.

      In this interplay, students do not merely write stories—they engage in thinking through materials. They work with narrative elements, symbols, and code to construct meaning within a shared ecology of learning. It is precisely within this process that what Seymour Papert called powerful ideas begin to emerge.

      Students learn how ideas evolve in collaboration and dialogue, as they articulate and refine their thinking through shared language, gesture, and embodied interaction. They work with narrative systems, grappling with cause-effect relationships, branching logic, and interactive structures—not as abstract concepts, but through concrete storytelling practices. They also explore how representational elements can be rearranged and transformed, discovering that story components are not fixed but fluid and malleable.

      SkriveXpeditionen is not about teaching programming per se. Instead, it aligns with Papert’s deeper pedagogical vision of using technology as a medium for expression, a tool for exploration, and a mirror for thinking.

      We will continue our research and development

Lene and I will continue our work on SkriveXpeditionen and on how to enhance and develop the design. Our next step is to focus on a more generic concept that can be applied to a variety of texts in L1.

      Stay tuned!

      Empirical Research

      Teaching with AI: Creativity, Student Agency, and the Role of Didactic Imagination


As generative artificial intelligence (GAI) rapidly makes its way into classrooms, discussions about its educational implications tend to oscillate between promise and peril. In our recent article published in Unge Pædagoger (https://u-p.dk/vare/2025-nr-2/), Peter Holmboe and I explore how GAI might become a tool not of automation, but of amplification — nurturing rather than replacing students’ creative engagement with the world.
      At the heart of our argument is the idea that creativity is not a spontaneous spark or a gift bestowed on a few, but a socially and materially situated process that thrives on exploration, reflection, and dialogue. GAI, when used with care, can become a medium for such engagement — but only if educators retain a clear focus on human agency, intentionality, and context.

      From Prompt to Product: A Framework for Creative AI Integration

      To support this reframing, we propose a practical teaching model based on three focal points: prompt, process, and product. Each stage reflects different opportunities for teacher intervention and student engagement:

      • Prompts are not mere instructions to the machine; they are invitations to think differently, to explore multiple meanings, and to frame the problem creatively.
• Processes involve iteration, dialogue, and experimentation — often where the real learning and growth happen.
      • Products, whether a story, a song, or a prototype, become less about perfection and more about reflection: what did we learn by making this?

      This model is enriched by three complementary methods: immersion, tinkering, and disruption. Each represents a way for students to work with GAI in ways that retain ownership of the learning process.

      • Immersion promotes deep, focused work within well-defined boundaries.
      • Tinkering supports playful experimentation, where learning happens through trial, error, and surprise.
      • Disruption challenges habits and assumptions, using constraints or provocations to push thinking in new directions.

      Creativity is Situated — and So is AI

      We argue that creativity does not exist in a vacuum. Following Schön, Tanggaard, and Vygotsky, we locate creative thinking in the embodied, social, and material world. This is where human intelligence diverges most significantly from GAI: AI may generate content, but it cannot inhabit context. It predicts plausible output; it does not understand meaning. This has implications for how we teach with GAI. If students merely outsource creative tasks to a machine, we risk losing what matters most: their voice, their struggle, their growth. However, if we invite them to collaborate with GAI — to question it, repurpose it, and respond to it — then the technology becomes a stimulus, not a substitute.

      Toward a Pedagogy of Possibility

Teaching with GAI calls for what we term didactic imagination: a combination of foresight, courage, and responsiveness. It means being willing to reshape curricula, adapt practices, and imagine new learning trajectories — not because we surrender to technological determinism, but because we remain committed to meaningful, learner-centered education. Seen through the notion of didactic imagination, teaching with GAI is not merely a matter of integrating a new tool into the classroom — it represents a profound shift in how we conceive of pedagogy, knowledge, and student engagement. Didactic imagination challenges educators to go beyond reactive adaptation and instead engage in proactive rethinking of educational practice. It is a stance that requires:

      • Foresight to anticipate how GAI may shape future forms of knowledge production, communication, and creativity — and to prepare students not just to use tools, but to question and redefine them.
      • Courage to depart from familiar routines, assessment models, and linear instructional design in favour of more open-ended, exploratory, and student-driven approaches.
      • Responsiveness to the evolving needs, interests, and capacities of students in a rapidly changing world — acknowledging that meaningful learning emerges in the dynamic interplay between structure and spontaneity, between teacher intention and student agency.

      Didactic imagination implies treating curricula not as fixed templates, but as living frameworks that must be continually reinterpreted in light of new possibilities. This may mean designing activities where students co-develop prompts with GAI, reflect critically on algorithmic bias, or remix AI-generated content in ways that foreground their own perspectives. It may mean disrupting traditional roles of teacher and student, where the teacher becomes a co-inquirer, and the classroom becomes a lab for collective sense-making.

      Importantly, embracing didactic imagination does not mean abandoning rigour or coherence. Rather, it calls on us to re-anchor educational practice in the core values of curiosity, empathy, agency, and dialogue. In this view, GAI becomes a provocateur — a reflective partner that invites new ways of asking questions, framing problems, and expressing understanding.

      Thus, the real innovation lies not in the machine, but in how we choose to imagine and inhabit the pedagogical spaces it opens. The challenge for educators is to hold open these spaces — not for efficiency, but for exploration. Not to automate learning, but to animate it.

      Concluding thoughts

      In light of this, I invite colleagues across sectors and disciplines to pause and reflect — not merely on how generative AI (GAI) fits into current pedagogical structures, but on how it compels us to rethink some of the fundamental principles of education itself. The integration of GAI challenges us to reconsider what it means to learn, to create, and to be an agent in the process of knowledge-building.

      What does student agency mean in an era of generative AI?

      When machines can generate text, images, code, and even ideas with remarkable fluency, the concept of student agency cannot be reduced to mere task completion or content production. Agency must be reframed as the capacity to make meaningful decisions within complex, sociotechnical environments — to pose original questions, to shape technological tools for personal or communal ends, and to navigate ambiguity with intentionality. It’s about giving students the authority and responsibility to direct their learning journeys — not in isolation, but in active dialogue with intelligent systems. In this view, agency becomes not just the right to act, but the ability to critically reflect on how and why we act in partnership with AI.

      How can we design learning experiences where GAI is used to provoke, not predetermine, creativity?

      Too often, educational technology has been employed to automate or simplify learning, reducing complexity instead of engaging with it. But GAI opens new possibilities: it can serve as a creative irritant, a tool for playful experimentation, or a mirror that reflects and reframes student thinking. Learning designs that foreground iteration, co-construction, and reflection — rather than fixed outcomes — are essential. Imagine prompts that ask students to revise or challenge an AI-generated poem, or collaborative projects where students must make the logic behind AI decisions visible and debatable. In these scenarios, creativity is not something AI delivers — it is something students practice and develop through interaction with AI.

      How do we assess creative work when the process involves both human and machine actors?

      Traditional assessment models — focused on individual output, originality, and correctness — are poorly suited to hybrid creative processes. We need evaluative frameworks that can account for process, intention, and transformation. This includes assessing how students shape AI contributions, how they reflect on ethical and contextual implications, and how they position themselves as co-authors of meaning. Rubrics may include dimensions like critical decision-making, iterative development, or responsiveness to feedback. Importantly, assessment must shift from product-focused grading to process-aware evaluation — making visible the learning embedded in the co-creation journey.

      Can disruption — not just fluency — become a valued competence in our AI-enhanced classrooms?

      Fluency in AI tools is important, but fluency alone risks producing compliance rather than creativity. We must also value disruption — the ability to interrupt routines, challenge defaults, and see beyond the surface of algorithmic convenience. This includes introducing ‘productive friction’ into learning environments: constraints that force rethinking, prompts that provoke surprise, and design challenges that resist easy automation. By cultivating the capacity to critique and complicate technology, we nurture students who don’t just use AI, but who actively shape its cultural, ethical, and creative trajectories.

      Empirical Research

      Rethinking Computational Thinking in Education

      Computational thinking (CT) has become a buzzword in educational policy and curriculum reform. Promoted as a fundamental 21st-century skill, it is often described as a universal way of thinking—akin to literacy and numeracy. But beneath this seemingly neutral framing lies a deeper question: What kind of thinking do we want students to engage in, and what role should schools play in nurturing it?

      The current dominant view of CT, popularised by Jeannette Wing, sees it as a set of abstract, transferable skills drawn from computer science—algorithmic thinking, abstraction, problem decomposition. This approach fits neatly into existing curricular structures and assessment regimes, but it risks sidelining the messier, more situated, and culturally embedded dimensions of learning with and through computers.

      An alternative perspective comes from Seymour Papert, whose work in the 1970s and 80s laid the groundwork for what we now call CT. Papert didn’t frame CT as a fixed set of skills. Instead, he was concerned with thinking deeply about thinking itself—learning through making, experimenting, and expressing ideas in computational media. His approach, known as constructionism, was grounded in the idea that children learn best when they are actively engaged in building things that are meaningful to them.

AI-generated image (created in ChatGPT) displaying three themes in Papert’s development of constructionism: the Turtle program, soap sculptures, and samba schools.

Our educational system rejects the “false theories” of children, thereby rejecting the way children really learn. (Papert, Mindstorms, 1980)

      Central to Papert’s vision were what he called “objects-to-think-with”—tangible or digital artefacts that serve as tools for thought. These could be programmable turtles on the screen, floor robots, or soap sculptures. The key is that learners engage with these objects not through instruction, but through exploration and iteration. The act of programming becomes a medium for expressing ideas, testing hypotheses, and developing personal and shared understandings.
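
As a small illustration of an object-to-think-with, here is a minimal sketch using Python’s standard turtle module, a direct descendant of Papert’s Logo turtle. The square-drawing task is a classic example chosen for this post, not taken from Papert’s texts: the learner can first “play turtle” by walking and turning, and then express that body knowledge as a short program.

```python
# Python's standard-library turtle module, a descendant of Papert's Logo turtle.
import turtle

def draw_square(side_length=100):
    """Trace a square by repeating the same walk-and-turn a child could act out."""
    pen = turtle.Turtle()
    for _ in range(4):
        pen.forward(side_length)  # walk one side
        pen.right(90)             # turn the corner
    turtle.done()                 # keep the window open until it is closed

if __name__ == "__main__":
    draw_square()
```

Changing the repeat count or the turning angle turns the same few lines into triangles, stars, or spirals, which invites the kind of iterative exploration described above.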

      Papert’s notion of epistemological pluralism is equally crucial. He recognised that learners approach problems in different ways—some prefer planning and abstraction, others tinker and iterate. Both styles are valid, and a healthy learning environment supports this diversity. In contrast, much of today’s CT implementation privileges the abstract, logical, and formal, often marginalising intuitive, creative, or sensory approaches to computational problem-solving.

      Another critical insight from Papert is his view of schools as cultural institutions with deeply ingrained norms. He was sceptical of how technologies—computers included—tend to be absorbed into existing school structures rather than transforming them. He warned against what he called technocentrism—the belief that technological tools alone can drive educational change. For Papert, the real power of the computer lay not in the machine itself, but in its potential to disrupt traditional pedagogies and empower learners.

      Little by little the subversive features of the computer were eroded away: Instead of cutting across and so challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation (Papert, The Children’s Machine, 1993)

      Papert’s vision of a “Samba School for Computing” offers a compelling metaphor. Inspired by the inclusive, community-based learning culture of Brazilian samba schools, he imagined computational learning as a pluralistic, joyful, and participatory activity. Instead of rigid curricula and standardised assessment, imagine spaces where children and adults collaboratively explore, build, play, and perform with computational media—learning not just to code, but to express, critique, and co-create.

      This vision remains deeply relevant today. While CT is often justified by its economic utility—preparing students for future jobs—Papert reminds us that schools should not merely serve existing societal needs. They should be spaces for reimagining society itself. Rather than training students to think like computer scientists, we might ask how computation can support them in thinking like designers, storytellers, activists, or citizens.

      Moreover, Papert’s critique of the school’s “immune system”—its tendency to neutralise radical ideas—is as pertinent as ever. Today’s digital tools are often used to reinforce traditional instruction rather than to reimagine it. Many implementations of CT end up focusing on tool mastery rather than tool invention, reinforcing rather than disrupting existing power structures in education.

      A genuinely transformative approach to CT would begin not with abstract definitions but with concrete engagements: what are learners passionate about? What problems do they want to solve? What stories do they want to tell? From there, educators can scaffold experiences that build computational fluency in ways that are meaningful and contextually grounded.

      Key Takeaways for Schools Today:

1. Reframe CT as situated practice. Rather than treating computational thinking as a decontextualised skill set, we should design learning environments that situate CT in meaningful, hands-on, and culturally relevant practices.
2. Value epistemological diversity. Support different ways of knowing and thinking. Not all students thrive through abstraction—some learn best through tinkering, storytelling, or physical interaction with materials. All of these are valid pathways into computational understanding.
3. Challenge the school’s “immune system”. Schools must remain open to educational models that challenge the status quo. CT has the potential to democratise and humanise learning—if we resist the urge to reduce it to testable outcomes and instead embrace it as a medium for expression, reflection, and cultural participation.

      Empirical Research

AI in a Danish educational context

On Wednesday, April 24, 2024, the expert group appointed by the Danish Government released its recommendations regarding ChatGPT in relation to test and examination formats.

      They can be found here: https://www.uvm.dk/aktuelt/nyheder/uvm/2024/april/240424-ekspertgruppe-klar-med-anbefalinger-for-brug-af-chatgpt-ved-proever

On Thursday, April 22, 2024, a more nuanced opinion piece arrived: the expert group suggests a paradigm shift and advocates considering fewer and different testing formats, rather than relying solely on written, reproductive, and individual assessments of students’ knowledge and skills. https://www.altinget.dk/uddannelse/artikel/medlemmer-af-ekspertgruppe-her-er-de-anbefalinger-vi-ikke-blev-bedt-om

      This has triggered some thoughts that I would like to share here.

      Not much new added

I hardly offend anyone (that’s certainly not my intention) by pointing out that the recommendations and the nuances add little new to the table; rather, they reinforce something that has been pointed out for years – just with rationales other than artificial intelligence. And maybe that’s fair enough, since it wasn’t really the expert group’s task. It’s therefore particularly pleasing that they subsequently supplement with their other considerations – the ones not commissioned by the Ministry of Education.

I was also pleased to notice that the article in the Danish online news outlet Altinget is not just about ChatGPT and digital tools but, more broadly, about generative artificial intelligence. That’s a very important nuance. Artificial intelligence is much more than large language models – as the expert group also emphasises.

With language models in mind, it’s obvious that traditional testing formats no longer make sense. That collaboration, creativity, critical thinking, and communication skills are important is just as obvious. And that tests should be based on a practical and student-oriented approach has been discussed since the early 1900s, starting with Thorndike and colleagues’ work on how learning transfers.

      So, why hasn’t anything happened earlier?

Perhaps because the calculator, the computer, the internet, Wikipedia, and other technological developments gave us a greater sense of being in control than artificial intelligence does. Perhaps because it would now be politically foolish not to act on what has been pointed out for so long in education, since the consequences of doing nothing would be obvious to everyone, including the public.

      It has been said before, but again, it can’t be said enough. We need to rethink the school’s continued logic of industrialization, where instead of taming the world as if it were a wild bull, students are driven through steel gates to slaughter as if they were beef cattle.

      At the same time, we might also need to rethink what we understand by life skills in our age. On the one hand, being able to understand and handle the digital layer surrounding us. And on the other hand, being able to emancipate ourselves from being dependent on it.

      The school should encompass both.

      Recent times with cybercrime, war, and pandemics have clearly shown the helplessness and panic that sneak into a population when technology fails or a minor or major crisis hits us.

One could briefly consider: What do we (as individuals and communities) do if we lose power for 2-3 weeks due to a super solar storm or an attack on critical infrastructure? Neither is as unlikely as we think, and the question is whether we are adaptable enough to handle it.

On a less existential level, smaller challenges such as the Chromebook issues in Danish schools from 2022 (and onwards) can create major concerns and almost paralyze teaching. The Danish Data Protection Agency’s restrictions and decisions regarding the limitation of Google Workspace in schools led to statements like “We can’t teach without Chromebooks.” Perhaps an exaggeration to emphasize a point, but also a symptom of how technology can create needs that are difficult to ignore.

      Paradoxically, it could prompt the question: Do we want a school that becomes dependent on artificial intelligence and other digital solutions? Or a school that shirks its responsibility to develop versatile and cultured individuals who will navigate a world with these technologies?

      So, it’s about balance!

“A teacher that could be replaced by Google – should be!” – a saying well known in the education landscape in 2016. Could the same sentence be rewritten today, with “ChatGPT” replacing “Google”?

In any case, reflection is required on the balance between teaching and education, which require human contact, and learning that can be accessed through dialogue with artificial intelligence.

That the language models are imprecise, hallucinate, or don’t account for X or Y is only a temporary setback, not a lasting argument for human teachers. It’s just a matter of time before more and larger data and training sets are released – and then a large language model such as ChatGPT will be able to provide a more precise and nuanced answer than any teacher or educator.

      So, what kind of school/education and teacher/educator is necessary? This is the fundamental question that arises.

      In light of the possibilities with artificial intelligence, the most immediate and banal answer is that it will be the school or teacher who focuses not primarily on knowledge and skills, but on relationships, humanity, empathy, adaptability, embodiment, and creativity.

      This raises the question of whether our educational systems can handle this kind of school thinking when we also see the need to compare ourselves and live up to international test standards.

      More skilled or lazier

As the expert group points out, large language models in education increase the need for students to be good at asking questions rather than providing answers. At the same time, one might add that students should also become adept at modelling questions about the world computationally and at properly validating answers, so that artificial intelligence becomes a help and an enrichment for students’ activities in school.

The fear of cheating is real enough, but perhaps we should fear laziness even more in the long term. Not lazy in the sense that we just lean back; on the contrary, one can imagine that we now must accomplish even more in less time, since AI can assist us in solving different tasks more efficiently. No, lazy in the sense that we no longer need to think, ponder, remember, and concentrate – because artificial intelligence entices us with quick answers and solutions. Neuroscience researchers have long pointed out that digital technologies have consequences for these brain functions, and that our ability to remember is closely related to bodily experiences and the memories of them. So, prompting AI to do our thinking tasks poses some risk of making our brains lazy.

Therefore, the tests and evaluation formats being developed should embed AI as a tool and be based on students’ situational contexts. AI can be a powerful tool for generating ideas and for helping to aggregate, organise, and summarise some forms of knowledge. However, solving real-world human problems in contexts that depend on action-based solutions requires humans.

What the future holds is uncertain. As the researcher and futurist Roy Amara once said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” I do think we need to think carefully about AI in education. With Joseph Weizenbaum’s work in mind, there are things AI can do that we do not want it to do.