
AI in a Danish educational context

On Wednesday, April 24, 2024, the expert group appointed by the Danish Government released its recommendations on ChatGPT in relation to test and examination formats.

They can be found here: https://www.uvm.dk/aktuelt/nyheder/uvm/2024/april/240424-ekspertgruppe-klar-med-anbefalinger-for-brug-af-chatgpt-ved-proever

The following day, Thursday, April 25, 2024, a more nuanced opinion piece arrived. In it, members of the expert group suggest a paradigm shift and advocate considering fewer and different testing formats, rather than relying solely on written, reproductive, and individual assessments of students’ knowledge and skills: https://www.altinget.dk/uddannelse/artikel/medlemmer-af-ekspertgruppe-her-er-de-anbefalinger-vi-ikke-blev-bedt-om

This has triggered some thoughts that I would like to share here.

Not much new added

I will hardly offend anyone (that’s certainly not my intention) by pointing out that the recommendations and their nuances add little new to the table; rather, they reinforce points that have been made for years, just with a rationale other than artificial intelligence. And perhaps that’s fair enough, since breaking new ground was not really the expert group’s task. It is therefore particularly pleasing that they follow up with their further considerations, which the Ministry of Education did not commission.

I was also glad to notice that the article in the Danish online news outlet Altinget is not just about ChatGPT and digital tools but, more broadly, about generative artificial intelligence. That’s a very important nuance. Artificial intelligence is much more than large language models, as the expert group also emphasises.

With language models in mind, it’s obvious that traditional testing formats no longer make sense. That collaboration, creativity, critical thinking, and communication skills are important is just as obvious. And that tests should be based on a practical and student-oriented approach has been discussed since the early 1900s, beginning with the work of Thorndike and colleagues on how learning transfers.

So, why hasn’t anything happened earlier?

Perhaps because the calculator, the computer, the internet, Wikipedia, and other technological developments gave us a greater sense of being in control than artificial intelligence does. Perhaps because it would now be politically foolish not to act on what has been pointed out for so long in education, since the consequences of doing nothing would be obvious to everyone, including the public.

It has been said before, but it can’t be said enough: we need to rethink the school’s continued logic of industrialisation, where, instead of taming the world as if it were a wild bull, students are driven through steel gates to slaughter as if they were beef cattle.

At the same time, we might also need to rethink what we understand by life skills in our age: on the one hand, being able to understand and handle the digital layer surrounding us; on the other, being able to emancipate ourselves from dependence on it.

The school should encompass both.

Recent years of cybercrime, war, and pandemics have clearly shown the helplessness and panic that creep into a population when technology fails or a crisis, minor or major, hits us.

One could briefly consider: what do we (as individuals and communities) do if we lose power for two to three weeks due to a super solar storm or an attack on critical infrastructure? Neither is as unlikely as we think, and the question is whether we are adaptable enough to handle it.

On a less existential level, smaller challenges such as the Chromebook issues in Danish schools from 2022 onwards can create major concerns and almost paralyse teaching. The Danish Data Protection Agency’s restrictions and decisions limiting the use of Google Workspace in schools led to statements like “We can’t teach without Chromebooks.” Perhaps an exaggeration to emphasise a point, but also a symptom of how technology can create needs that are difficult to ignore.

Paradoxically, this prompts the question: do we want a school that becomes dependent on artificial intelligence and other digital solutions? Or a school that shirks its responsibility to develop the versatile, cultured individuals who will have to navigate a world with these technologies?

So, it’s about balance!

“A teacher that could be replaced by Google – should be!” was a well-known saying in the education landscape in 2016. Could the same sentence be written today, with “ChatGPT” replacing “Google”?

In any case, reflection is required on the balance between teaching and education, which require human contact, and the kind of learning that can be accessed through dialogue with artificial intelligence.

That the language models are imprecise, hallucinate, or fail to account for X or Y is only a temporary setback, not a lasting argument for human teachers. It’s just a matter of time before more and larger data and training sets are released and provided; then a large language model such as ChatGPT will be able to give more precise and nuanced answers than any teacher or educator.

So what kind of school and education, and what kind of teacher and educator, do we need? This is the fundamental question that arises.

In light of the possibilities of artificial intelligence, the most immediate and banal answer is that it will be the school or teacher that focuses not primarily on knowledge and skills, but on relationships, humanity, empathy, adaptability, embodiment, and creativity.

This raises the question of whether our educational systems can handle this kind of school thinking when we also feel the need to compare ourselves with, and live up to, international test standards.

More skilled or lazier

As the expert group points out, large language models in education increase the need for students to be good at asking questions rather than providing answers. One might add that students should also become adept at modelling questions about the world computationally and at properly validating the answers, so that artificial intelligence is a help and an enrichment for students’ activities in school.
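To make that concrete, here is a minimal sketch in Python of what “modelling a question computationally and validating the answer” could look like in a classroom. The `ask_llm` helper is a hypothetical placeholder for whatever chat-model API is at hand, and the coin-flip question merely stands in for any question a student can model in code.

```python
import random

def ask_llm(question: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    return "About 0.5"  # imagine this answer came back from a chat model

def simulate_coin_fairness(flips: int = 100_000) -> float:
    """Model the question 'how often does a fair coin land heads?' in code."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

claimed = ask_llm("How often does a fair coin land heads?")
measured = simulate_coin_fairness()

# The validation step: the student checks the AI's claim against their own
# computational model instead of accepting it at face value.
print(f"AI's answer:        {claimed}")
print(f"Simulated estimate: {measured:.3f}")
```

The value lies less in this particular example than in the habit it trains: treating an AI’s answer as a claim to be checked against one’s own model of the world.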

The fear of cheating is real enough, but in the long term we should perhaps fear laziness even more. Not lazy in the sense that we just lean back; on the contrary, one can imagine that we will now have to accomplish even more in less time, since AI can assist us in solving different tasks more efficiently. Lazy, rather, in the sense that we no longer need to think, ponder, remember, and concentrate, because artificial intelligence entices us with quick answers and solutions. Neuroscience researchers have long pointed out that digital technologies have consequences for precisely these brain functions, and that our ability to remember is closely tied to bodily experiences and our memories of them. Prompting AI to do our thinking tasks thus risks making our brains lazy.

Therefore, the test and evaluation formats now being developed should embed AI as a tool and be based on the students’ situational contexts. AI can be a powerful tool for generating ideas and for helping to aggregate, organise, and summarise some forms of knowledge. Solving real-world human problems in contexts that depend on action-based solutions, however, requires humans.

What the future holds is uncertain. As the researcher and futurist Roy Amara once said, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” I do think we need to think carefully about AI in education. And, recalling the work of Joseph Weizenbaum: there are things AI can do that we do not want it to do.
