
Bocconi Knowledge

07/10/2025 Grace Haruni

Quo Vadis, EU (Law)? Navigating the Future of AI Regulation 1/4

AI and the Future of Fundamental Rights: Striking a Balance Between Progress and Protection

On November 28 and 29, Bocconi University, the Bocconi Lab for European Studies (BLEST) and the LLM in European Business and Social Law (EBSL) organized the fourth edition of Quo Vadis, EU (Law)?. This year’s edition focused on navigating the future of AI regulation, analyzing and discussing the legal and constitutional challenges posed by artificial intelligence (AI). Pietro Sirena, Dean of the Law School, delivered the opening remarks alongside Prof. Lillà Montagnani (Bocconi University) and Prof. Eleanor Spaventa (Bocconi University).

The opening panel was chaired by Marco Gerbaudo (Bocconi University), and the speakers were Giovanni De Gregorio (Universidade Católica Portuguesa), Federica Paolucci (Bocconi University), Giovanni Zaccaroni (University of Milano-Bicocca) and Francesco Paolo Patti (Bocconi University).

 

The conference brought together four esteemed experts to discuss the evolving landscape of artificial intelligence (AI) regulation in Europe. The panel examined the challenges posed by AI technologies to constitutional principles, fundamental rights, and the rule of law, focusing on the European Union’s regulatory responses. Through an analysis of the AI Act and its interplay with other legislative frameworks such as the GDPR, the speakers highlighted the opportunities and difficulties of ensuring a balance between innovation and rights protection. 

Giovanni De Gregorio: Risk Regulation in AI Policies

Giovanni De Gregorio, Chair of Law and Technology at Universidade Católica Portuguesa in Lisbon, provided an insightful overview of how risk-based regulation is shaping AI governance, the constitutional implications of AI, and the challenges of ensuring accountability in an evolving technological landscape. His talk revolved around three key themes: the rise of the algorithmic society, the risk-based approach, and constitutional challenges.

 

The Rise of the Algorithmic Society

De Gregorio opened the discussion by framing the scenario we currently find ourselves in and its characteristics, on the basis of which the constitutional response and the challenges encountered can then be addressed.

At the moment, we can observe an interplay between the private and public sectors in regulating AI. As the dialogue between constitutional democracies and business intensifies around the regulation of technology, the state increasingly competes with the power of social media and online platforms. This situation is referred to as the algorithmic society, in which AI is used as a form of governance.

 

Risk-Based Approach

Looking at how constitutional democracies have reacted so far, we can observe a new regulatory wave based on procedural safeguards and limitations of economic freedoms, aimed at ensuring the protection of fundamental rights and democratic values.

The methodology that characterizes this regulation is the risk-based approach: the adoption of a regulatory framework in which duties and obligations are scaled and adapted to the concrete risks deriving from a specific activity. Risk, in other words, functions as a proxy for the balancing of interests and values, an activity that is intrinsically constitutional in nature.

The risk-based approach is not uniform: moving from the AI Act to the Digital Services Act (DSA) and the GDPR, the approaches differ in how they balance the need for flexible regulation against the need to involve the private sector. The public sector has recognized its limits and uses the risk-based approach to work with the private sector, which is more aware of what is inside the box and can therefore regulate better. The regulatory methodology also differs: in the GDPR we find a bottom-up approach, where the risk assessment is carried out by the targets of the regulation, whereas in the AI Act the approach is top-down, with the risk assessment made by the law-maker. Through the notion of risk, these legal instruments aim to strike a balance between, on the one hand, the economy-oriented interest in innovation and the creation of an internationally competitive digital single market and, on the other, the often conflicting interest in the protection of democratic values and the rights and freedoms of individuals.
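To make the top-down/bottom-up contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the obligation lists are simplified, though the four tiers mirror the AI Act’s actual risk categories. The point is that, in a top-down scheme, duties attach to tiers fixed in advance by the legislator rather than emerging from a self-assessment by the regulated entity.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5 AI Act)
    HIGH = "high"                  # e.g. the use cases listed in Annex III
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Top-down: the law-maker fixes both the tier and the attached duties
# in advance; the regulated entity does not define its own risk model.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance and technical documentation",
        "human oversight",
        "fundamental rights impact assessment (Art. 27, for certain deployers)",
    ],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def duties_for(tier: RiskTier) -> list[str]:
    """Look up the duties the legislator has attached to a given tier."""
    return OBLIGATIONS[tier]

print(duties_for(RiskTier.HIGH))
```

Under the GDPR’s bottom-up logic, by contrast, the mapping itself would be produced by the data controller’s own risk assessment rather than fixed in the statute.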

 

Constitutional Challenges

Among the challenges is the increasing entanglement between the public and the private spheres, which requires instruments of collaboration in order to prevent the erosion of the rule of law.

Furthermore, AI’s autonomous nature and evolving datasets make consistent regulatory scrutiny difficult, especially as decisions made by AI systems can vary over time.

In conclusion, questions around compliance, enforcement, and sanctions remain unresolved.

 

Federica Paolucci: Fundamental Rights and Biometric Data

Federica Paolucci, a PhD researcher at Bocconi University, delved into the implications of AI systems on fundamental rights, with a particular focus on biometric data. She emphasized the need for comprehensive assessments to mitigate potential infringements.

 

Impact of AI on Fundamental Rights

To begin, Paolucci discussed the 2023 case of Glukhin v. Russia, a pivotal case highlighting the risks AI poses to fundamental rights and the first such case decided by an international court, the European Court of Human Rights (ECtHR). The Court found that the use of facial recognition technology for mass surveillance during protests in Moscow violated privacy and data protection rights, in addition to limiting freedom of assembly.

The case reveals the dual nature of AI’s impact: direct interference (e.g., privacy breaches) and indirect interference (e.g., chilling effects on other rights). AI systems influence privacy, non-discrimination, freedom of expression, and access to justice, often in ways that are opaque and difficult to regulate. Moreover, these risks can be amplified by systemic biases embedded in algorithms and by the lack of meaningful oversight.

 

Fundamental Rights Impact Assessment (FRIA)

To better protect rights, Impact Assessments (IAs) have been introduced. IAs are tools used to evaluate the potential risks and impacts of policies, technologies, or actions on outcomes such as the environment, social values, or rights. They are important for anticipating risks, proposing mitigation measures, and enhancing accountability and transparency.

An approach that focuses solely on one fundamental right, such as data protection under Article 8 of the Charter and its implications in the GDPR, is insufficient: it fails to account for the potential impact on other fundamental rights. The European legislator has therefore sought to develop a broader framework that incorporates a risk-based approach, where the use of certain technologies is linked to specific risks and accountability lies with those who engage with them. Impact assessments are not only essential for ensuring compliance with the AI Act; they also make deployers precisely aware of the potential infringements on fundamental rights that may arise from using (rather than programming) these specific systems.

Art. 27 of the AI Act introduces the Fundamental Rights Impact Assessment (FRIA) as a mandatory requirement for high-risk systems. This tool aims to evaluate potential rights violations and enforce accountability. However, challenges include the lack of standardized methodologies and the complexity of identifying all the rights potentially affected by AI systems.

The FRIA's role is to ensure compliance with the Charter of Fundamental Rights of the European Union (CFREU). It applies to both public institutions and private deployers of public services, shifting from ex post enforcement to anticipatory governance. FRIA covers not only data protection but also rights such as equality, non-discrimination, and freedom of assembly. It applies both vertically (state-to-individual) and horizontally (private entity-to-individual).

Art. 27 is intended to bring together a description of the process, the categories of risks, the implementation of measures, and steps to mitigate potential harms. Many challenges remain, however: the potential for underenforcement, the lack of a standardized methodology, operational complexity, the dynamic nature of AI risks and, lastly, the overlap with other assessments.
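As an illustration only (the AI Act prescribes no data format, and these field names are hypothetical), the four components that Art. 27 brings together could be modeled as a simple record:

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Hypothetical structure mirroring the four components of Art. 27."""
    process_description: str      # how and in what context the system is used
    risk_categories: list[str]    # rights potentially affected
    measures_in_place: list[str]  # safeguards already implemented
    mitigation_steps: list[str]   # steps planned against residual harms

fria = FRIARecord(
    process_description="Biometric access control at a public building",
    risk_categories=["privacy", "non-discrimination", "freedom of assembly"],
    measures_in_place=["transparency notice", "data minimization"],
    mitigation_steps=["periodic bias audit", "judicial oversight channel"],
)
```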

 

Biometric Data Risks

Biometric technologies pose specific risks, particularly in view of the upcoming enforcement of obligations under the EU’s AI Act. These technologies underline the urgency and importance of robust FRIA methodologies to assess risks related to privacy, equality, and freedom of assembly. A tailored FRIA for biometrics is proposed, built around a fundamental rights checklist designed to enforce proportionality and necessity tests, so that biometric systems are used only when strictly necessary and proportionate. Key risks include privacy violations, such as those seen in public surveillance cases; non-discrimination issues due to racial and gender biases in facial recognition systems; and the potential deterrence of freedom of assembly caused by pervasive surveillance.
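A minimal sketch of what such a checklist could look like in practice (the items and the Python structure are invented for illustration; no official checklist is prescribed):

```python
# Hypothetical fundamental-rights checklist for a biometric FRIA.
# Each answer should be True only if the safeguard is satisfied.
CHECKLIST = [
    ("necessity", "Biometric processing is strictly necessary for the aim"),
    ("proportionality", "The interference is proportionate to that aim"),
    ("privacy", "Retention and access are limited to what the aim requires"),
    ("non_discrimination", "Accuracy was tested across demographic groups"),
    ("assembly", "The risk of deterring public gatherings was mitigated"),
]

def failed_items(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items that are unsatisfied and need mitigation."""
    return [item for key, item in CHECKLIST if not answers.get(key, False)]

# Example: a deployment that has not been bias-tested fails one item.
print(failed_items({"necessity": True, "proportionality": True,
                    "privacy": True, "assembly": True}))
```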

The entry into force of obligations for systems falling under Article 5 of the AI Act, particularly for real-time biometric identification systems, is set for February 2025. This makes the adoption of FRIA even more critical. Mitigation measures, such as transparency and judicial oversight, are essential to address these concerns. Paolucci’s research emphasizes the need for sector-specific guidelines for high-risk applications like biometric systems and strengthening oversight mechanisms to ensure that FRIA compliance is substantive, not symbolic.

 

Giovanni Zaccaroni: AI and Democracy in the relationship between the EU and the CoE: The Promessi Sposi?

Giovanni Zaccaroni, Assistant Professor of EU Law at the University of Milano-Bicocca, explored the relationship between the European Union (EU) and the Council of Europe (CoE) when it comes to AI and Democracy, analyzing elements such as the Framework Convention on AI.

The current framework and future scenarios were made even more vivid through a metaphorical comparison with Alessandro Manzoni’s novel I Promessi Sposi.

 

The Promessi Sposi: An Analogy for EU and Council of Europe Relations

The relationship between the European Union (EU) and the Council of Europe (CoE) can be portrayed through the lens of the classic Italian novel "The Betrothed" (Promessi Sposi), a tale of two lovers, Renzo and Lucia, whose union is thwarted by external forces.

In this analogy, Renzo represents the Council of Europe, Lucia stands for the EU, and the Court of Justice is Don Abbondio: a character who does not facilitate the union but has nonetheless been reevaluated positively by literary criticism over time.

The autonomy of EU law is symbolized by the bravi, who uphold the strict boundaries of EU law, especially in light of the EU’s failed attempt to join the European Convention on Human Rights (ECHR), blocked by the Court of Justice in Opinion 2/13 over concerns about the autonomy of EU law. The Framework Convention on AI, so far the first and only international treaty on AI focusing on fundamental rights, democracy, and the rule of law, plays the role of Fra Cristoforo, attempting to facilitate the union between Renzo and Lucia by bridging the gap between the EU and the CoE in the area of AI governance.

 

The EU Approach to AI

The EU’s approach to AI has predominantly been shaped by its internal market framework, with a focus on creating uniform regulations for AI deployment. The AI Act (Regulation (EU) 2024/1689), for instance, is primarily structured around market considerations, with Article 27 addressing the fundamental rights impact assessment (FRIA), though this provision was only included following pressure from the European Parliament.

The early impression suggests that to safeguard fundamental rights effectively in the context of AI, the EU may need to lean more heavily on the Charter of Fundamental Rights, even though its broad scope and the challenges national judges face in applying it leave room for uncertainty. This reflects the limitations of the current EU framework in fully addressing the complexities of AI’s impact on fundamental rights, such as privacy and non-discrimination. As the Council of Europe has recognized, the Framework Convention on AI and human rights offers a more comprehensive approach that takes these concerns into account, especially by mandating a more specific focus on AI’s democratic implications.

 

The Framework Convention: A Potential Fix for EU's Approach to AI?

The Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, adopted by the Council of Europe in May 2024 and signed by the EU in September 2024, represents a significant step toward harmonizing the approach to AI across Europe. It aims to bridge the gap left by the EU’s failure to join the ECHR, as the EU has signed the Framework Convention and intends to implement it primarily through the AI Act and other related EU legal instruments.

However, the EU’s approach to AI and the Framework Convention’s methodology for risk and impact assessments diverge, as highlighted by scholars such as Ziller, De Gregorio, Paolucci, Pollicino and others in 2024. The EU's current methodology is more market-driven, while the Framework Convention emphasizes the need for comprehensive, rights-based assessments.

While the EU’s accession to the Framework Convention is a promising development, challenges remain. The EU seeks to ratify the Framework Convention as an EU-only international agreement, but it is not yet clear whether this will be possible: questions remain about whether the Member States will have to ratify the Framework Convention individually and, should the EU manage to ratify it, about which instruments will implement it and whether a dedicated court responsible for the Framework Convention will be needed. The relationship between EU law and the Council of Europe remains complex.

In conclusion, while the Framework Convention offers an opportunity to strengthen the relationship between the EU and the Council of Europe, the potential for internal friction between the EU and its member states, as well as external tensions between the EU and the Council, could complicate the full realization of a unified AI governance framework. As the EU moves towards ratifying the Framework Convention, its eventual accession to the ECHR may become more pressing, creating a more cohesive legal framework for the protection of fundamental rights in AI deployment across Europe.

 

Francesco Paolo Patti: GDPR and AI Act

Francesco Paolo Patti, Professor at Bocconi University, addressed the intersection of the GDPR and the AI Act, focusing on their cumulative application and implications for private law.

 

Integration of the AI Act and GDPR and Compliance Challenges

The increasing use of AI presents complex challenges for data protection, particularly when integrating AI technologies with existing legal frameworks like the GDPR. Both the AI Act and GDPR share a risk-based approach, but they differ in terms of their application, especially regarding the rights of individuals. While the GDPR provides direct rights for data subjects to intervene and exercise their privacy rights, the AI Act does not explicitly grant these rights but ensures compliance through a framework that protects those rights indirectly.

As these regulations overlap, understanding how the AI Act and GDPR interact is essential for developing an effective compliance strategy.

The compliance obligations that are classic under the GDPR will be integrated into the AI Act, so that the privacy by design of tomorrow will be shaped not only by what we find in the GDPR but also by what we find in the AI Act.

An example of the interplay between the AI Act and the GDPR is the role of the Data Protection Authorities (DPAs). Already responsible for data protection under the GDPR, DPAs are expected to play a key role under the AI Act as Main Supervisory Authorities for high-risk AI systems in crucial sectors, especially where such systems are likely to affect individuals’ rights and freedoms. In addition, DPAs should serve as a single point of contact for the public and for counterparts at Member State and EU levels.

 

GDPR in the AI Act

The AI Act and GDPR are closely aligned, especially in the integration of privacy by design principles, which extend beyond GDPR’s original scope to incorporate AI systems.

A complicating factor in this interplay is chronology: the AI Act came later, so the GDPR contains no mention of the AI Act, whereas the AI Act refers to the GDPR.

However, the AI Act does not replace the GDPR but operates alongside it, ensuring that AI systems comply with data protection requirements when processing personal data. Three aspects characterize this collaborative relationship: non-interference, the use of GDPR definitions, and cumulative application.

Non-interference means that the AI Act does not aim to alter or overlap with existing legal provisions governing the protection of personal data. Nor does it change the obligations of providers or users of AI systems, who remain responsible as data controllers or data processors under national or EU data protection law.

In addition, the AI Act incorporates GDPR definitions such as “personal data” and “special categories of personal data,” ensuring that the protections of the GDPR continue to apply to AI systems and thereby guaranteeing consistency and coherence.

Lastly, the cumulative application of the two frameworks requires AI providers to conduct both Data Protection Impact Assessments (DPIAs) under the GDPR and Fundamental Rights Impact Assessments (FRIAs) under the AI Act, especially for high-risk AI systems. These assessments help address privacy and fundamental rights risks, particularly when AI is deployed in sensitive sectors such as law enforcement and border management.
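A toy illustration of this cumulative regime (the function and its flags are hypothetical; the article references are real): the two assessments stack rather than substitute for one another.

```python
def required_assessments(processes_personal_data: bool,
                         high_risk_ai: bool) -> set[str]:
    """Illustrative rule of thumb: obligations accumulate across regimes."""
    assessments = set()
    if processes_personal_data:
        assessments.add("DPIA (Art. 35 GDPR)")
    if high_risk_ai:
        assessments.add("FRIA (Art. 27 AI Act)")
    return assessments

# A high-risk AI system that also processes personal data needs both:
print(required_assessments(processes_personal_data=True, high_risk_ai=True))
```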

 

Practical applications: from Biometrics to the SCHUFA case

The use of AI technologies such as biometric data processing presents specific challenges under the GDPR. The AI Act addresses these issues by imposing strict safeguards on AI systems that process sensitive data. For example, biometric data is subject to stringent legal conditions, especially for high-risk AI systems, and transparency obligations ensure that individuals are informed about data processing.

Another practical application is the SCHUFA case (C-634/21), which concerns a credit-scoring system that indicates the probability that a person will repay a loan.

This case highlights the challenges of applying the GDPR to AI systems. It raises the question of whether the automated establishment of a creditworthiness probability triggers the protections of Article 22 of the GDPR, under which the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. The scope of application can be read widely or narrowly, so two interpretations arise: one where the establishment of the score itself counts as automated decision-making (triggering Article 22 and requiring a basis such as explicit consent) and another where the score is merely a preparatory step, thus not triggering the same restrictions.
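The practical stake of the two readings can be sketched as follows (hypothetical code; the score, threshold, and function are invented). Under the broad reading, even producing and relying on the probability value counts as an Art. 22 decision, so a safeguard such as human involvement, or another Art. 22(2) basis, must come before the score is acted upon:

```python
def credit_decision(score: float, human_reviewed: bool) -> str:
    """Toy scoring flow. Under a broad reading of Art. 22 GDPR, a score
    that a lender relies on decisively is itself an automated decision,
    so acting on it without human involvement (or another Art. 22(2)
    basis, such as explicit consent) would not be permitted."""
    if not human_reviewed:
        raise PermissionError("Solely automated decision: Art. 22 applies")
    return "approve" if score >= 0.7 else "refer to manual underwriting"
```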

Additionally, AI systems trained with publicly available personal data must comply with the GDPR, including conducting DPIAs. This is crucial when personal data is sourced from third parties, ensuring that data subject rights are balanced with legitimate interests. As AI regulations evolve, clear guidance is needed to harmonize AI development and data protection compliance while allowing for innovation and technological progress.
