
Are we making ourselves dumber with AI?

As tools like ChatGPT become an increasingly regular part of daily life – in school, at university, and at work – an important question arises: What happens to our brains and the learning processes we engage in when we use AI to help us? And what implications could that have for education?

What Happens to Learning When Your Brain Meets LLMs?

In a new study from MIT, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, researchers asked 54 university students to write essays over four sessions. They divided the students into three groups: one could only use ChatGPT, another could use Google (but no AI), and a third had to rely on their own knowledge – no tools at all. Throughout the sessions, participants' brain activity was monitored using EEG, and they were interviewed after each session. The results were then compared across groups and sessions.

The study found that participants who used ChatGPT showed the lowest levels of brain activity during the writing task. Compared to those who used Google, and especially those who wrote without any digital assistance, their cognitive engagement was significantly reduced. Not only were their brains less active; they also struggled to recall what they had written. Many couldn't remember even a single sentence from their own essays, suggesting that the writing process hadn't left a strong imprint on their memory. This lack of connection extended to their sense of ownership: unlike the other groups, ChatGPT users were less likely to feel that the essay truly belonged to them. Those who didn't use AI, by contrast, consistently outperformed the others – in language quality, idea development, and cognitive engagement. They wrote essays they could remember, quote from, and take pride in, and their brains were fully activated throughout the process.

In another study, The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI, researchers showed that participants who relied on a large language model like ChatGPT to help answer reading comprehension questions remembered significantly less than those who completed the task on their own. When tested afterwards, LLM users not only retained fewer facts; they also struggled to recall the core ideas and arguments from the original texts. This memory gap wasn't just a matter of forgetting details; it reflected a deeper issue. Those who used the AI assistant showed less mental effort and engagement during the task. They often skimmed the material and leaned on the AI to interpret it for them, rather than processing the information themselves. Ironically, the more helpful the AI seemed in the moment, the less participants learned from the experience. By offloading the work of understanding and remembering to the model, they missed the cognitive processes that usually lead to durable learning. Even when participants reviewed the material again later, those who had initially used the AI still performed worse on memory tests. The short-term convenience came at the cost of long-term understanding: a paradox at the heart of using LLMs as study companions.

My three key take-aways

From these studies, I draw three lessons.

Both studies highlight a significant shift in how learners engage with material when using large language models (LLMs) like ChatGPT. Rather than promoting deeper understanding, the use of LLMs tends to reduce active cognitive engagement. Kosmyna et al. showed that students using ChatGPT displayed notably less brain activity during writing tasks, suggesting superficial engagement. Similarly, Li et al. found that learners relied more on the AI's interpretation than on their own meaning-making. In both cases, the AI tools displaced the kinds of effortful processing typically associated with engagement and deeper learning.

Further, the studies reveal a paradox: while AI can assist in completing tasks, it appears to undermine long-term memory formation. Participants in Li et al.’s study remembered significantly less factual and conceptual information when they had used ChatGPT for reading comprehension. Kosmyna et al. observed similar patterns: ChatGPT users were often unable to recall even a single sentence from their own essays. This suggests that the use of LLMs disrupts the encoding and consolidation processes crucial to memory.

Finally, both studies suggest the emergence of cognitive offloading and tool dependency. In Kosmyna et al., students who had used ChatGPT struggled to return to unsupported writing, showing lower brain activity and reduced originality when AI was removed. Li et al. noted that even after revisiting the material, participants who initially used the LLM still performed worse, indicating lasting effects of early reliance. In essence, the more learners depend on AI, the more their autonomous cognitive abilities seem to atrophy.

Table 1: Summing up the key take-aways.

| | Your Brain on ChatGPT (Kosmyna et al., 2024) | The Memory Paradox (Li et al., 2024) | Overall Implications |
|---|---|---|---|
| Learning Processes | AI use reduces cognitive activation and engagement. Students rely less on their own thinking, and their essays become more uniform and superficial. | AI use leads to shallow reading and reduced effort in comprehension. Participants often defer interpretation to the model instead of making meaning themselves. | AI increases the risk of superficial learning, turning students into passive recipients rather than active meaning-makers. Deep learning is weakened. |
| Memory | ChatGPT users had difficulty recalling what they had written. Many couldn't quote a single sentence from their own essays. | Participants remembered significantly fewer facts and key ideas after using AI. Even after review, their performance remained lower. | AI disrupts the cognitive processes that support lasting understanding and memory formation. |
| Dependence on Tools | Students who used ChatGPT struggled to return to independent writing. Their brains remained less active, and their essays less original. | Although not the focus, the study shows lasting negative effects even after revisiting the material, suggesting cognitive reliance. | Signs of cognitive offloading and tool dependence. Overuse of AI may undermine self-directed learning. |

Final remarks

The findings of these two studies raise important considerations for how educators integrate AI tools into teaching. Both suggest a need for intentional design, where AI use supports rather than replaces cognitive effort. This could involve structured tasks that require students to reflect on and explain AI-generated content, or the use of LLMs as partners for dialogue and critique rather than as generators of content. This kind of meta-reflection on tool use could strengthen engagement and deepen understanding. Another option is a phased approach in which students alternate between assisted and independent work. AI should be positioned not as a shortcut, but as a scaffold: used to enhance, not erode, the learning process.

The tools we use shape how we understand problems and how we solve them. With LLMs at hand, we have a tool that can easily help us with tasks such as reading and writing texts, generating content, or even planning and structuring teaching and learning activities. But at what cost?

To me, there's a real danger associated with the use of large language models: they risk making us mentally lazy and dulling our cognitive capacities. This can create a self-reinforcing cycle: the more we rely on generative AI, the more dependent we become, and the less effort we put into thinking for ourselves. In turn, we risk becoming not only more passive, but quite simply less intelligent. It's a vicious circle.

Of course, in certain contexts, generative AI can help us simplify or speed up specific tasks. But we must not lose sight of who should remain the creative force in the process. In schools, we give students assignments that serve both as training exercises and as tasks for developing understanding. Both types are essential, and we should not replace them just because we can, or because it feels easier. Learning is fundamentally built on effort and perseverance. When we ask students to write, not just letters but full texts, we do so not only to produce content, but because the process teaches them to think, to structure, to organise their thoughts. It teaches them to be persistent and creative.

As a kind of self-check, try noticing how long you can maintain your reading focus without reaching for your phone or being tempted to let an LLM summarise the text for you. Is it under 20 minutes? Or perhaps writing a longer text suddenly feels overwhelming, or more exhausting than it used to?

These may be warning signs worth paying attention to—especially when it comes to children, whose brains are still forming and developing.
