The European AI Act: what artificial intelligence will no longer be allowed to do

The approval of the AI Act by the European Union marks a decisive step in the regulation of technologies based on artificial intelligence. After years of discussions, consultations and technical revisions, legislators have defined a regulatory framework that aims to balance innovation with the protection of fundamental rights. The result is a complex piece of legislation designed to classify AI systems according to their level of risk and regulate their use in a proportional manner.

The regulation officially entered into force on 1 August 2024, but in practice its first provisions begin to apply from 2025, with the transition period expected to end in 2028.

The implications of this regulatory intervention are profound. For the first time, Europe explicitly identifies which AI behaviors constitute a threat to security, privacy or human dignity, establishing precise limits on what these technologies may or may not do. The industry will need to adapt through concrete technical measures, from revising datasets to implementing stricter model governance and ensuring full traceability of automated decisions.

Manipulation techniques and exploitation of vulnerabilities

Among the practices banned by the AI Act are systems designed to manipulate individuals subliminally or to exploit cognitive, emotional or physical vulnerabilities. This prohibition is not generic but is based on a technical analysis of how models function. Systems capable of inferring psychological traits, steering decisions without informed consent or identifying behavioral fragilities will be considered high risk or banned outright. The goal is to prevent AI from being used to influence unaware individuals, especially in commercial or political contexts where the predictive power of models could be leveraged to alter human behavior.

Biometric surveillance and real-time recognition

The regulation intervenes decisively on biometric surveillance. The use of real-time facial recognition systems in public spaces is heavily restricted and, in many cases, prohibited. From a technical standpoint, such systems require a complex pipeline involving continuous video acquisition, extraction of biometric features, database comparison and identification generation within milliseconds. The AI Act considers this process incompatible with principles of proportionality and data minimization, as it entails systematic and potentially indiscriminate tracking of the population. The exceptions provided are tightly limited and include strict technical requirements such as continuous auditing, encrypted logging and independent verification processes.
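The "encrypted logging" and "continuous auditing" requirements mentioned above imply tamper-evident records of every biometric query. A minimal sketch of one common approach, a hash-chained append-only audit log, is shown below; the class, field names and example events are illustrative assumptions, not terminology from the regulation itself.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry includes the hash of the previous
    one, so any later tampering breaks verification of the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value before the first entry

    def append(self, event: dict) -> str:
        record = {"event": event, "prev_hash": self._last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        self.entries.append({**record, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            record = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
            payload = json.dumps(record, sort_keys=True).encode()
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

# Hypothetical events for a tightly limited biometric-search exception
log = AuditLog()
log.append({"action": "biometric_query", "operator": "unit-7",
            "legal_basis": "judicial warrant"})
log.append({"action": "match_reviewed", "reviewer": "human-analyst"})
```

In production this chain would be written to write-once storage and checked by an independent verifier, but the core idea, that each record cryptographically commits to its predecessor, is what makes retroactive edits detectable.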

Social scoring and systematic profiling

The regulation bans any form of social scoring similar to systems experimented with in certain non-European countries. Technologically, social scoring relies on aggregating data from multiple sources to generate profiles that can influence access to services, opportunities and rights. The AI Act deems this approach incompatible with democratic values because it gives disproportionate decision-making power to machines, based on data that is often opaque or inaccurate. Companies will need to eliminate any scoring architecture using personal information not directly related to the specific service being provided.
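One concrete way to enforce "personal information directly related to the specific service" is a per-service feature allowlist applied before any scoring model runs. The sketch below illustrates that idea under invented names; the service, attributes and allowlist are assumptions for illustration, not part of the AI Act text.

```python
# Hypothetical per-service allowlists of attributes directly related
# to the service being provided.
SERVICE_ALLOWLISTS = {
    "loan_pricing": {"income", "existing_debt", "repayment_history"},
}

def minimize(service: str, applicant: dict) -> dict:
    """Drop every attribute not on the service's allowlist, so
    unrelated personal data never reaches the scoring model."""
    allowed = SERVICE_ALLOWLISTS[service]
    return {k: v for k, v in applicant.items() if k in allowed}

applicant = {
    "income": 42000,
    "existing_debt": 5000,
    "repayment_history": "on_time",
    "social_media_activity": "...",    # unrelated -> must be dropped
    "neighborhood_reputation": "...",  # unrelated -> must be dropped
}
clean = minimize("loan_pricing", applicant)
```

The design choice here is deliberate: an allowlist (only named fields pass) rather than a blocklist, so newly collected attributes are excluded by default until someone justifies their relevance to the service.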

Predictive analysis applied to individual behavior

Another central point concerns systems for behavioral prediction that attempt to anticipate the likelihood that an individual will commit a crime, exhibit specific behaviors or make certain decisions. The regulation clearly distinguishes between aggregated statistical models, which are valid for monitoring broad phenomena, and targeted predictive models intended to evaluate a single person. The latter are prohibited because they rely on correlations that are often decontextualized, biased datasets and a lack of verifiable explainability. The technology is not considered reliable enough to generate assessments with direct consequences on people’s lives.
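The distinction drawn above, aggregated statistics versus targeted individual prediction, can be made concrete in a few lines. This is a toy contrast under invented data and names, not a real policing system: the aggregate function monitors a broad phenomenon, while the individual-level scorer represents the category of system the regulation prohibits.

```python
from statistics import mean

# Invented monthly incident rates per 1,000 residents, by district
incidents_per_1000 = {
    "district_a": [3.1, 2.8, 3.4],
    "district_b": [1.2, 1.0, 1.4],
}

def district_trend(district: str) -> float:
    """Aggregate statistical model: average rate for a whole district,
    the kind of broad monitoring the AI Act still permits."""
    return mean(incidents_per_1000[district])

def individual_risk_score(person: dict) -> float:
    """Targeted prediction about a single person: the category of
    system the AI Act prohibits outright."""
    raise NotImplementedError(
        "banned under the AI Act: targeted behavioral prediction"
    )
```

The asymmetry is the point: the same underlying data can support population-level monitoring while remaining an impermissible basis for decisions about one identifiable individual.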

Transparency, traceability and model governance

Beyond explicit prohibitions, the AI Act introduces mandatory, stricter oversight of high-risk AI systems. This includes the adoption of structured model governance, formal documentation of the model lifecycle, decision traceability and continuous performance monitoring. From a technical perspective, organizations will need to implement detailed logs, model versioning systems, explainability tools and security protocols to ensure that systems behave in predictable and verifiable ways. Artificial intelligence must become a transparent and measurable component of technical infrastructure.
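Decision traceability in practice means that every automated decision can be tied back to a specific model version and input. A minimal sketch of such a trace record follows; the schema, field names and model identifier are assumptions for illustration, not a format mandated by the regulation.

```python
import hashlib
import json
from datetime import datetime, timezone

def trace_decision(model_version: str, features: dict, output: str) -> dict:
    """Build an auditable record of one automated decision: which model
    version ran, a hash of its input, what it decided, and when."""
    input_hash = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()
    return {
        "model_version": model_version,
        "input_hash": input_hash,  # raw personal data stays out of the log
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical high-risk decision being traced
record = trace_decision("credit-model-2.4.1", {"income": 42000}, "approved")
```

Hashing the input rather than storing it keeps the trace auditable (the same features always produce the same hash) without the log itself becoming a second repository of personal data.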

Implications for the European technology ecosystem

The AI Act does not merely impose prohibitions; it steers technological development toward a safer, more accountable model. Companies will have to invest in risk-mitigation processes, independent auditing and architectures designed to meet European standards. This will drive a significant transformation in system design, toward models that are more robust, more interpretable and built on verifiable ethical foundations. The regulation may slow some deployments, but it creates the conditions for a more sustainable evolution of artificial intelligence, strengthening the trust of users and institutions alike.

In this sense, corporate training plays a fundamental role in understanding the new AI Act regulations, enabling these powerful new technologies to be used in compliance with the law, and above all, with respect for people. If you'd like to learn more about this topic and understand how e-learning could be the solution for your company, contact us.

by Alessandro Chiarato
Marketing Manager