Thoughts

Are we making ourselves dumber with AI?

As tools like ChatGPT become an increasingly regular part of daily life – in school, at university, and at work – an important question arises: What happens to our brains and the learning processes we engage in when we use AI to help us? And what implications could that have for education?

What Happens to Learning When Your Brain Meets LLMs?

In a new study from MIT, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, researchers asked 54 university students to write essays over four sessions. They divided the students into three groups. One group could only use ChatGPT, another could use Google (but no AI), and a third had to rely on their own knowledge – no tools at all. At the same time, participants’ brain activity was monitored using EEG, and they were interviewed after each session. The results were then compared across groups and sessions.

The study found that participants who used ChatGPT showed the lowest levels of brain activity during the writing task. Compared to those who used Google, and especially those who wrote without any digital assistance, their cognitive engagement was significantly reduced. Not only were their brains less active, but they also struggled to recall what they had written. Many couldn’t remember even a single sentence from their own essays, suggesting that the writing process hadn’t left a strong imprint on their memory. This lack of connection extended to their sense of ownership: unlike the other groups, ChatGPT users were less likely to feel that the essay truly belonged to them.

By contrast, those who didn’t use AI consistently outperformed the others – in language quality, idea development, and cognitive engagement. They wrote essays they could remember, quote from, and take pride in. Their brains were fully activated throughout the process.

In another study, The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI, researchers showed that participants who relied on a large language model like ChatGPT to help answer reading comprehension questions remembered significantly less than those who completed the task on their own. When tested afterwards, LLM users not only retained fewer facts, but they also struggled to recall the core ideas and arguments from the original texts. This memory gap wasn’t just a matter of forgetting details—it reflected a deeper issue. Those who used the AI assistant showed less mental effort and engagement during the task. They often skimmed the material and leaned on the AI to interpret it for them, rather than processing the information themselves. Ironically, the more helpful the AI seemed in the moment, the less participants learned from the experience. By offloading the work of understanding and remembering to the model, they missed the cognitive processes that usually lead to durable learning. Even when participants reviewed the material again later, those who had initially used the AI still performed worse in memory tests. The short-term convenience came at the cost of long-term understanding—a paradox at the heart of using LLMs as study companions.

My three key take-aways

From these studies, I think three lessons can be drawn.

Both studies highlight a significant shift in how learners engage with material when using large language models (LLMs) like ChatGPT. Rather than promoting deeper understanding, the use of LLMs tends to reduce active cognitive engagement. Kosmyna et al. showed that students using ChatGPT displayed notably less brain activity during writing tasks, suggesting superficial engagement. Similarly, Li et al. found that learners relied more on the AI’s interpretation than on their own meaning-making processes. In both cases, the use of AI tools displaced the kinds of effortful processing typically associated with engagement and deeper learning processes.

Further, the studies reveal a paradox: while AI can assist in completing tasks, it appears to undermine long-term memory formation. Participants in Li et al.’s study remembered significantly less factual and conceptual information when they had used ChatGPT for reading comprehension. Kosmyna et al. observed similar patterns: ChatGPT users were often unable to recall even a single sentence from their own essays. This suggests that the use of LLMs disrupts the encoding and consolidation processes crucial to memory.

Finally, both studies suggest the emergence of cognitive offloading and tool dependency. In Kosmyna et al., students who had used ChatGPT struggled to return to unsupported writing, showing lower brain activity and reduced originality when AI was removed. Li et al. noted that even after revisiting the material, participants who initially used the LLM still performed worse, indicating lasting effects of early reliance. In essence, the more learners depend on AI, the more their autonomous cognitive abilities seem to atrophy.

Table 1: Summing up the key take-aways.

| | Your Brain on ChatGPT (Kosmyna et al., 2024) | The Memory Paradox (Li et al., 2024) | Overall Implications |
|---|---|---|---|
| Learning Processes | AI use reduces cognitive activation and engagement. Students rely less on their own thinking, and their essays become more uniform and superficial. | AI use leads to shallow reading and reduced effort in comprehension. Participants often defer interpretation to the model instead of making meaning themselves. | AI increases the risk of superficial learning, turning students into passive recipients rather than active meaning-makers. Deep learning is weakened. |
| Memory | ChatGPT users had difficulty recalling what they had written. Many couldn’t quote a single sentence from their own essays. | Participants remembered significantly fewer facts and key ideas after using AI. Even after review, their performance remained lower. | AI disrupts the cognitive processes that support lasting understanding and memory formation. |
| Dependence on Tools | Students who used ChatGPT struggled to return to independent writing. Their brains remained less active, and their essays less original. | Although not the focus, the study shows lasting negative effects even after revisiting the material—suggesting cognitive reliance. | Signs of cognitive offloading and tool dependence. Overuse of AI may undermine self-directed learning. |

Final remarks

The findings of the two studies raise important considerations for how educators integrate AI tools into teaching. Both studies suggest a need for intentional design—where AI use supports rather than replaces cognitive effort. This could involve more structured tasks that require students to reflect on and explain AI-generated content, or using LLMs as partners for dialogue and critique rather than generators of content. This kind of meta-reflection on tool use could enhance engagement and deeper understanding. Another option is a phased approach where students move between assisted and independent work. AI should be positioned not as a shortcut, but as a scaffold—used to enhance, not erode, the learning process.

The tools we use shape our understanding of problems and how to solve them. With LLMs at hand, we have a tool that can easily help us with tasks such as reading and writing texts, generating content, or even planning and structuring teaching and learning activities. But at what cost?

To me, there’s a real danger associated with the use of large language models: they risk making us mentally lazy and dulling our cognitive capacities. This can create a self-reinforcing cycle — the more we rely on generative AI, the more dependent we become, and the less effort we put into thinking for ourselves. In turn, we risk becoming not only more passive, but quite simply less intelligent. It’s a vicious circle. Of course, in certain contexts, generative AI can help us simplify or speed up specific tasks. But we must not lose sight of who should remain the creative force in the process.

In schools, we give students assignments that serve both as training exercises and as tasks for developing understanding. Both types are essential, and we should not replace them just because we can, or because it feels easier. Learning is fundamentally based on effort and perseverance. When we ask students to write—not just letters, but full texts—we do so not only to produce content, but because the process teaches them to think, to structure, to organise their thoughts. It teaches them to be persistent and creative.

As a kind of self-check, try noticing how long you can maintain your reading focus without reaching for your phone or being tempted to let an LLM summarise the text for you. Is it under 20 minutes? Or perhaps writing a longer text suddenly feels overwhelming or more exhausting than it used to?

These may be warning signs worth paying attention to—especially when it comes to children, whose brains are still forming and developing.

Thoughts

How AI is Changing The Idea Of Lifelong Learning

In today’s world, Artificial Intelligence (AI) is not just for tech experts—it’s something we’re all interacting with more and more, sometimes without even realizing it. From apps that help you learn a new language to smart systems that assist in your workplace, AI is everywhere. But how exactly does AI change the way we learn throughout our lives? When reading through research and books, I found that I could sort the literature on AI into three main ideas. Let’s take a quick tour of these ideas!

Idea 1. AI for Efficiency: Making Learning Faster And More Efficient

The first idea focuses on how AI can make learning more efficient. Imagine having a personal tutor available 24/7, helping you practice new skills or understand difficult concepts. AI systems like these are designed to step in where humans might be limited by time or resources. They’re especially common in workplaces, where they help employees learn new skills quickly to keep up with changes in technology. The downside? These AI tools often treat learning as something mechanical—focused on getting the right answers or completing tasks as quickly as possible, without much room for creativity or deep thinking.

Key Point: AI can help us learn faster, but it might miss the bigger picture of what learning is really about.

Idea 2. AI as A Virtual Assistant: Learning With AI, Not Just From It

The second idea highlights AI as more of a partner or assistant. Instead of just spitting out answers, AI systems work alongside humans, offering feedback and suggestions as we learn together. “Coboting” could be a term for that. For example, AI might act as a language assistant, helping you practice speaking by responding to your sentences in real-time. This approach recognizes that learning isn’t just about getting the correct answers—it’s about the process, the environment, and the relationships involved.

Key Point: AI can be a learning buddy, making the process more interactive and engaging.

Idea 3. AI As An Impulse For Change: Changing How We Learn Altogether

The third – and most futuristic – idea sees AI as a force that could completely change how we think about learning. Here, AI isn’t just a tool or a partner—it’s part of a broader shift in how we live and work. This might mean that AI is reshaping the roles humans play in the workplace or even in our personal lives. For instance, an AI might not just help you learn a language but might be part of a new way of communicating entirely, blurring the lines between technology and humanity.

Key Point: AI could lead to entirely new ways of learning, pushing us to rethink our roles in the world.

Wrapping It Up

AI is reshaping the idea of lifelong learning – from speeding up the learning process to transforming it altogether. Whether AI is acting as a tool, an assistant, or a revolutionary force, it’s clear that the future of institutionalized learning is going to look very different from what we’re used to. And while these changes offer possibilities, they also challenge us to think critically about what it means to learn in a world where AI has become an agent. Do we want AI in school – and if so, how do we want it to shape the way we work?

A recent study gives a nice review of the current research within the above-mentioned areas. For further reading: Palenski, T., Hills, L., Unnikrishnan, S. et al. How AI Works: Reconfiguring Lifelong Learning. Postdigit Sci Educ (2024). https://doi.org/10.1007/s42438-024-00496-y

Thoughts

Five Didactical Perspectives on Computational Thinking

Introduction

This post contributes five perspectives on how to didactically address progression in the work with Computational Thinking (CT) in teaching.
I take as my point of departure a perspective on CT that was revitalized and popularized in 2010 by Jeannette Wing, who defined the term as:
“The thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent” (Wing, 2010).

Wing’s take can be seen as problematic in different ways; however, this will not be treated in depth here.
Although Wing’s formulation is somewhat abstract, special emphasis is placed on the fact that these are cognitive problem-solving strategies, where students model the world through specific forms of representation that are based on informatics and computer science as disciplines. Wing emphasizes that it is about thinking like a computer scientist and that the special modelling competence inherent in the disciplines can be generalized across all subject disciplines in the same way as reading, writing and arithmetic.

With this point of departure, the aim here is to provide teachers with perspectives for didactic reflection on CT as a specific way of working within the subjects. It is, therefore, not an attempt to provide instructions, but rather to highlight points of attention for planning.

Five perspectives:

What follows are five didactic perspectives that point out important aspects in the student’s development of CT as a problem-solving strategy. The five perspectives are:

1. The pre-computational perspective
2. The bodily perspective
3. The abstract perspective
4. Perspectives on problems and their solutions
5. Perspectives on the creative and imaginative

I do not claim this to be a model of progression, but the five points reflect a movement from the concrete towards the abstract. Although students, to a certain degree, possess the ability to think abstractly in different ways, they often have few or no prerequisites for computational modelling. Therefore, an important overall point of attention is to create opportunities for the students to be gradually introduced to computational strategies and methods in an interplay between the tangible/concrete and the symbolic/abstract.

The pre-computational perspective

Working computationally involves a basic understanding of symbol manipulation and modelling. This requires the students to become aware that e.g. coding and programming are based on certain kinds of logic and terminologies. The students must learn a special computational grammar (certain syntaxes) before they can begin concrete computational work. For example, abilities within mathematical logic, decomposition and pattern recognition are part of the prerequisites students must possess in order to work computationally.
This can be taught in many ways, but a well-known and well-tested method for older children is to have them solve mathematical puzzles. One of many examples is Cut Hive Puzzles (http://inabapuzzle.com/), which I explain in a previous post.

Although the computational historically has its origins in mathematics, CT is not limited to mathematical problem-solving. By working with logic tasks, however, students learn basic principles that can later be transferred to situated computational methods. For example, written programs consist of certain logical rules and patterns.

The pre-computational perspective must be seen as an indication that students do not necessarily have the necessary prerequisites to think computationally in the sense of a computer scientist, but that this, along with reading and arithmetic, must be trained and developed. In the CT literature, coding and programming are often compared to professional skills that are similar to complex mathematical problem-solving in the natural sciences, and high-level literature analyses in the language subjects. Therefore, here too, the students need a basic understanding that precedes the computational.

The bodily perspective

A more concrete and less abstract approach to understanding and implementing CT in relation to a subject task can be achieved through physical activities. Activities such as “program your friend” or “bodygramming” are examples of such approaches. Programming a friend to execute an algorithm can help show how precisely one needs to formulate rules if the computational agent (the friend) is to execute them correctly and in the same way every time. The bodily perspective makes it concrete and visible how the algorithm is performed and whether this is done correctly. In such a process, students can isolate and correct the places where things go wrong and discuss how, for example, an IF/THEN statement can be formulated more precisely.

The Finnish researcher Jussi Mikkonen proposes “bodygramming” as a bodily method for teaching students programming. Bodygramming means that students physically behave like a computer program, through step-by-step prescribed actions. In this way, the students get an experience of the synchronous processes connected to code, at a slow, human pace, and gain an alternative way of understanding basic programming concepts and abstractions.
The bodily perspective points out the need for CT to be made concrete for the students. These two examples (Program a friend and Bodygramming) can make programming visible and concrete for students in a way that makes it tangible and debatable. In this way, it is ensured that CT does not just become something that takes place as part of abstract thought processes, but also something that physically unfolds in the world. At the same time, this perspective can shed light on how we as humans (as opposed to computers and AI) partly build our understanding of phenomena on contextual interpretations and intuition.

The abstract perspective

CT embeds a perspective that is based on students’ ability to think abstractly about phenomena in the world and translate these into ways that can be processed computationally and rule-based.

With the Cut Hive example above in mind, this means a move from being able to solve this puzzle to writing down rules for its solution.

A simple example is:

IF <a given input>
THEN <a specific action>

IF <a hexagon with area 2 contains the number 1 or 2>
THEN <the second hexagon contains the missing number>

Writing down rules for programs includes abstractions, generalizations and pattern recognition. Students must be able to exclude other parts of the puzzle in order to simplify and generalize, while at the same time comparing the generalized rules with the rest of the game’s rules (pattern recognition).

An example of the rewriting of a concrete event into a general rule:
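Such a rewriting can be sketched in code. The snippet below is a hypothetical illustration of my own – the function name and the representation of a region as a list of placed numbers are assumptions, not taken from the original puzzle material:

```python
# Concrete event: a marked region of area 2 already contains a 1,
# so the other cell must contain the 2.
# General rule: a marked region of n cells must contain the numbers 1..n,
# so the numbers still to be placed are exactly those not yet present.

def missing_numbers(placed, region_size):
    """Return the set of numbers still to be placed in a marked region."""
    required = set(range(1, region_size + 1))  # every region needs 1..n
    return required - set(placed)

print(missing_numbers([1], 2))     # the concrete event: only the 2 remains
print(missing_numbers([3, 1], 4))  # the general rule at work: 2 and 4 remain
```

The move from the first comment to the second mirrors exactly the abstraction the students must perform: the concrete observation becomes a rule that works for any region size.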

The example shows how the abstract perspective in Wing’s definition of CT can play out. What distinguishes this approach to problem-solving in school from others is that it involves thought processes that seek to reduce the complexity and interpretive possibilities of phenomena in such a way that a machine (which cannot make decisions based on intuitive interpretations and emotions) can solve tasks that would be too difficult or take too long for humans to do. The abstract perspective thus points to the fact that there are specific ways of thinking, with special purposes, that students must learn.

The problem-based perspective

In most contexts, CT is related to the solving of complex problems that cannot be handled by humans alone. In Wing’s formulation, it is also explicitly mentioned that it is about both the formulation of problems and their solutions. Not all problems are relevant, nor solvable with these methods alone.
Often, the problems to be solved are closely connected with, and situated in, the subject. However, there are some fundamental characteristics of computational problems in non-computer-science subjects:

1. Data is collected and processed (analysed).
2. An algorithm is created that can help solve the problem.
3. The algorithm can be executed by a computer or a human.
4. The problems have multiple, often open, solutions.

As hinted at, academic problems are not tied to the fields of informatics or data science. Poems, for example, are full of patterns (e.g. rhyme, metre, syllables) that can be transformed into rules and classifications for recognizing and categorizing other poems. Another approach in language subjects could be to write four sentences and let the students put them in a logical order based on the data the sentences contain.
The problem-based perspective points to a need to reflect on which problem types are suitable for CT and on the level and complexity of the problem in relation to the students’ prerequisites. Analyzing poems will be difficult for beginners because it requires skills based on complex knowledge of genre and language, whereas putting four sentences into a logically coherent order based on a content analysis would be easy for a high-school student.
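As a toy illustration of turning one poetic pattern – end rhyme – into a checkable rule, one might write something like the following. This is a deliberately naive sketch of my own (real rhyme detection needs phonetics, not spelling), meant only to show how a literary pattern can become rule-based:

```python
def last_word(line):
    """Extract the final word of a line, stripped of punctuation."""
    return line.lower().split()[-1].strip(".,!?;:")

def rhymes(line_a, line_b, suffix_len=2):
    """Crude rule: two lines 'rhyme' if their last words share a spelling suffix."""
    return last_word(line_a)[-suffix_len:] == last_word(line_b)[-suffix_len:]

print(rhymes("The cat sat on the mat,", "and wore a funny hat."))      # True
print(rhymes("The cat sat on the mat,", "while dogs ran in the sun.")) # False
```

Even a rule this crude makes the didactic point: the students must decide what counts as a pattern, encode it, and then confront the cases where the rule and their intuition disagree.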

The creative and imaginative perspective

Although Wing, in her definition, is particularly concerned with the problem-solving perspective, there are examples in the literature of CT involving more than that. There are also artistic and creative perspectives associated with designing solutions to problems or expressing oneself and one’s imaginaries through coding. In such a perspective, creativity is part of shaping something from a set of conditions. Conditions here could mean that algorithms or algorithmic processes are included in the design itself and/or in the product in one way or another.
The German professor Yasmin Kafai, who is, among other things, one of the developers behind the coding platform Scratch, has in recent years worked with young people’s design of electronic textiles as a special way of expressing themselves. Kafai emphasizes that this way of working with the computational offers special opportunities for young people to have a critical-constructive voice and to participate democratically through freedom of expression. The students’ thoughts, feelings and attitudes are expressed when they design and create different products from textiles combined with microcomputers such as LilyPads (see e.g. https://www.exploringcs.org/e-textiles). The creative perspective thus points out that CT in teaching is not always associated only with problems but can also support creative and softer forms of interpretation.

Concluding remarks

In the recent literature in Denmark, CT is seen as part of a movement focused on empowering children to take a critical stance towards digitization and the use of media. In an educational context, one could ask: what possibilities and limitations does the computer as a tool entail when it is involved in the solution of problems, and what do students need in order to use it? In this light, CT makes good sense as a modelling competence that, through certain methods, enables the student to transform concrete problems into something a computer can help solve. However, it is also important that teachers consider the following:

  • What CT is not. Is it, as Wing imagines, a transversal and general competence that reaches into, but also beyond, the subjects? Or is it just one of many methodological tools that students must have in their toolbox?
  • How is CT reconciled with the subject’s already built-in logics?
  • To what extent does CT contribute to the students’ general education?

Teaching is complex, and the questions above show that the five perspectives highlighted here are not exhaustive, but merely specific points of attention that can be included and discussed in relation to the teacher’s other general and subject-didactic reflections.

Thoughts

Few thoughts on math puzzles and computational problem-solving

Recently I read the book The Power of Computational Thinking: Games, Magic and Puzzles to Help You Become a Computational Thinker by Paul Curzon and Peter W. McOwan. It got me thinking about math, problem-solving, and computation – not least what it takes for young children to grasp some of the concepts that are involved in computational thinking.

Working computationally involves a basic understanding of symbol manipulation and modeling. This requires that the pupils become aware that e.g., coding and programming are subject to certain kinds of logic and terminologies. The pupils must learn special grammar (certain syntaxes) and think in specific ways before they can begin concrete computational work. For example, abilities within mathematical logic, decomposition, and pattern recognition are part of the prerequisites pupils must possess to work computationally.

These can be taught in many ways, but a well-known and well-tested method is to have pupils solve mathematical puzzles. One of many examples is Cut Hive Puzzles (http://inabapuzzle.com/), which, in short, present a pattern of cells (hexagons) in which some walls are marked with thicker lines. There are two rules:

  1. Each marked area must contain the numbers from 1 up to the number of cells in the marked area (4 in the example below).
  2. The same number may not appear twice within the same marked area, and the same number may not appear in two cells that touch each other.

https://teachinglondoncomputing.org/cut-hive-puzzles/ 

By working with this type of task, the pupils learn basic principles that can later be transferred to computational methods, such as the fact that writing programs consist of certain logical rules and patterns.
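The two rules above can themselves be written down as checkable logic. Here is a minimal sketch of my own (the representation of a region as a list of values, and of neighbours as a list, is an assumption for illustration, not taken from the puzzle site):

```python
def region_ok(values):
    """Rule 1: a marked region of n cells must contain exactly the numbers 1..n."""
    return sorted(values) == list(range(1, len(values) + 1))

def neighbours_ok(value, neighbour_values):
    """Rule 2: a cell may not touch another cell holding the same number."""
    return value not in neighbour_values

print(region_ok([2, 4, 1, 3]))      # True: a valid region of four cells
print(region_ok([2, 2, 1, 3]))      # False: the 2 appears twice
print(neighbours_ok(3, [1, 2, 4]))  # True: no touching cell repeats the 3
```

Solving the puzzle by hand and then formalizing its rules like this is precisely the transfer the paragraph describes: the logical constraints the pupils already reason with become explicit, executable rules.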

Although computation historically has its origins in mathematics, CT is not limited to mathematical problem-solving. The pre-computational perspective considers that pupils do not necessarily have the necessary prerequisites to think computationally but that this, along with reading and arithmetic, must be trained and developed. In the CT literature, the abilities to code and program are often compared to literacy skills that are needed for complex mathematical problem-solving in the natural sciences and high-level literature analysis in the language subjects. Therefore, pupils need a basic understanding that precedes the computational.

CT embeds a perspective that is based on the pupils’ ability to think abstractly about phenomena in the world and translate these in ways that can be processed computationally and rule-based. With the Cut Hive example above in mind, this means going from being able to solve a puzzle to writing down rules for solving it.

A very simple example is:

IF <a given input>
THEN <a specific action>

IF <a hexagon with area 2 contains the number 1 or 2>
THEN <the second hexagon contains the missing number>

Writing down rules for programs includes abstractions, generalizations, and pattern recognition. Pupils must be able to exclude other parts of the puzzle to simplify and generalize while simultaneously comparing the generalized rules with the rest of the game’s rules (pattern recognition).

The example here shows how the abstract perspective is fundamental in CT.

What distinguishes this approach to problem-solving in school from others is that, in most cases, it is a question of thought processes that seek to reduce the complexity and interpretation possibilities of phenomena in such a way that a machine (that cannot make decisions based on intuitive interpretations and emotions) can solve tasks that would be too difficult, or take too long for humans to complete.

See more here: https://teachinglondoncomputing.org/cut-hive-puzzles/