How cognitive biases risk distorting digital learning

There is a precise, almost imperceptible moment when an algorithm makes a decision that concerns us. It is not something we see happening: it appears instead as a content suggestion, a didactic recommendation, a “personalized” learning path. Behind a proposal that seems neutral, the result of an objective calculation, there actually hides a choice. A preference. A simplification.

Let me give an example to clarify what I mean: imagine a user accessing their learning path after logging into the educational platform. The system interface suggests continuing to a leadership module, because the algorithm has noticed an interest in team management. Meanwhile, another user, with a similar profile, receives a different indication from the same platform: for him or her, the artificial intelligence deems a course on inclusive communication more suitable. Nothing wrong, apparently. And yet, in that invisible association between pattern and person, between data and identity, the seed of bias creeps in.

The illusion of impartiality is the great paradox of algorithmic training. The tools that promise to reduce human error, rationalize complexity, and optimize the learning experience risk reproducing—and sometimes amplifying—the same cognitive distortions they claim to correct.

As Kate Crawford observes in her Atlas of AI (2021), “every artificial intelligence system is a political condensation: a set of human choices made opaque by mathematics.” In digital training, this opacity manifests discreetly: through the criteria by which data are aggregated, the logic of content recommendations, or the predictive algorithms that estimate the likelihood of completing a course. In this web of numbers and correlations, the learner risks becoming a synthetic, partial profile, constructed on behavioral indicators. And if learning is a deeply human, subjective, interpretative process, its reduction to a predictive scheme represents a loss of cognitive complexity.
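To make that reduction concrete, here is a minimal, purely illustrative sketch in Python of how a completion-probability model compresses a learner into a handful of behavioral indicators. The feature names and weights are invented for the example; no real platform is being described.

```python
import math

# Hypothetical behavioral profile: the whole learner, as the model sees them.
learner = {
    "logins_per_week": 2.0,
    "avg_quiz_score": 0.64,
    "videos_completed_ratio": 0.4,
}

# Invented weights standing in for a trained predictive model.
weights = {
    "logins_per_week": 0.8,
    "avg_quiz_score": 2.5,
    "videos_completed_ratio": 1.7,
}
intercept = -3.0

def completion_probability(profile: dict, w: dict, b: float) -> float:
    """Logistic score: a person's learning, reduced to a single number."""
    z = b + sum(w[k] * profile[k] for k in w)
    return 1.0 / (1.0 + math.exp(-z))

print(f"Estimated completion probability: "
      f"{completion_probability(learner, weights, intercept):.2f}")
```

Everything that does not fit into those three indicators (motivation, context, the reasons behind a pause) simply does not exist for the model.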

Understanding algorithmic bias

Every algorithm presupposes an act of interpretation. The moment it translates reality into numbers, it performs a reduction. And in this reduction, choices, perspectives, and assumptions inevitably lurk. Ben Williamson, in his Learning Machines (2019), highlights how artificial intelligence applied to education is not a mere technical tool but a “political device” reflecting the priorities and cognitive models of its creators.

In the field of e-learning, algorithmic bias takes various forms. It can arise from training data, from the criteria for profile categorization, or from predictive models interpreting user interactions. Apparently, everything boils down to a data quality problem. But the point, in my view, is subtler and of a distinctly cultural (or philosophical?) nature: what idea of learning are we translating into code?

Contemporary digital learning constantly balances between the desire for objectivity and control and the inherently subjective, personal nature of learning. Bias represents our very difficulty in managing cognitive complexity.

We have observed this repeatedly in data-driven training projects: the models work perfectly until they run into the unpredictable human element (curiosity, deviation, creative error). That is where the algorithm, lacking context, reveals its fragility.

The cognitive reduction of the learning experience

When a system “personalizes” a path, it adapts it to the user’s profile, but at the same time it tends to narrow the field of possibilities. What is presented as a fit for individual needs often coincides with a statistical alignment: I show you what users similar to you have chosen; I guide you toward what you are most likely to complete.
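A minimal sketch of that logic, with a hypothetical catalogue and invented course histories, shows how “users similar to you” quietly becomes the entire horizon of what gets suggested:

```python
# Minimal, invented sketch of "I show you what users similar to you have chosen".
# The users, course histories, and similarity rule are all hypothetical.

completed = {
    "anna":  {"leadership", "team management", "negotiation"},
    "luca":  {"leadership", "team management", "public speaking"},
    "maria": {"inclusive communication", "design thinking"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two course histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, histories: dict) -> list:
    """Suggest only what the most similar user has already completed."""
    mine = histories[user]
    others = {u: h for u, h in histories.items() if u != user}
    nearest = max(others, key=lambda u: jaccard(mine, others[u]))
    return sorted(histories[nearest] - mine)

print(recommend("anna", completed))  # -> ['public speaking']
```

In this toy example, Anna is only ever offered what her nearest neighbour has already done; everything outside that neighbourhood never appears on her screen.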

This dynamic generates a cognitive reduction: the complexity of human learning is compressed into repeated patterns. In digital training, this means that the learner risks being immersed in a “cognitive bubble,” an ecosystem in which everything is relevant, but only within the limits of one’s own profile. What is the most immediate side effect? Inevitably, the learning horizon narrows, along with the ability to be surprised, to explore, to navigate uncertainty, to grow, to stand out: in short, to bring innovation and richness.

A subtle pedagogical risk is that artificial intelligence ends up replacing curiosity with conformity. And yet, learning means, by its nature, exercising cognitive freedom, deviating from the predictable. How, then, can we escape this impasse? Perhaps by accepting the challenge that AI poses to us: not so much to eliminate biases, but to resist the temptation of their comfort.

Data-driven training and ethical governance

In 2023, the OECD report “Artificial Intelligence in Education” warned governments and companies against the risk of “governing learning through data analysis rather than through educational knowledge.” In other words: data can inform, but it must not dictate strategies.

The ethics of algorithmic training is not only about the transparency of models but about cognitive governance: who decides what counts as learning, which variables are significant, which outcomes are desirable?

According to UNESCO (AI and the Futures of Learning, 2023), a fair learning system must ensure that “algorithms reflect the diversity of human contexts” and that every automated decision is accompanied by informed human oversight. This implies a profound rethinking of the roles of trainers, learning designers, and strategic decision-makers—those who will govern the shift from “data analysis” to “educational decision-making.”

New professional figures will undoubtedly emerge, perhaps even that of a “learning ethics designer,” a role capable of dialoguing with data scientists, translating metrics into educational language, and ensuring that the human dimension remains central.

The expert’s view: notes from the craft of digital knowledge

I confess that every time I open a learning analytics dashboard, I feel an ambivalent sensation. On one hand, the elegance of the graphs, the clarity of trends, the apparent precision reassure me. On the other, I know that behind those numbers lies a complexity that no visualization can ever fully convey. I have learned that reading data means listening to a story: the story of those who interacted, who stopped, who skipped a module because perhaps they did not find it relevant. The task of the digital trainer today is to learn to read against the current: not only what the algorithm shows, but what it leaves unsaid.

Many training professionals in both the private sector and public administration often find themselves, perhaps unconsciously, mediating between two cultures: that of efficiency and that of understanding. Bias creeps in precisely here—in the desire to measure even what should remain partly ineffable: motivation, effort, implicit learning. I consider it quite likely that among the skills I will need to strengthen in the coming years is my ability to interpret the complexity of data. And perhaps this is the frontier on which the future of education will be played: that of trainers “aware of the limits of the algorithm,” capable of using it as an ally, not as an oracle.

Educating the algorithm

The issue of algorithmic bias does not concern only technology but the culture of knowledge. Every time a machine proposes a learning path, it implicitly invites us to share its worldview: linear, calculable, predictive. But human learning thrives on discontinuity, unpredictability, and creative ambiguity. Educating the algorithm means developing a new cognitive literacy: knowing how to read how systems work, recognizing their limits, understanding that every suggestion is a partial narrative. It is not about rejecting technology but inhabiting it critically.

In the future of digital learning, true innovation will not lie in the precision of recommendations but in the ability to train cognitively autonomous citizens—aware of the mechanisms through which algorithms interpret their behavior. Perhaps, then, the most important task will not be to “eliminate biases” but to learn to live with them consciously, transforming them into opportunities for reflection. Because the way we educate our machines inevitably reflects the way we educate ourselves.

by Valentina Urli
Digital Learning Manager