Quo Vadis, EU (Law)? Navigating the Future of AI Regulation 3/4
AI in Finance: Transforming the Future of the Financial Sector
AI reliability and liability
Antonio Davola, Assistant Professor of Economic Law at the University of Bari, explores the increasingly pivotal role of Artificial Intelligence (AI) in the financial sector, with a particular focus on AI reliability and liability. His research addresses the complex intersection between AI technologies and financial law, emphasizing the need for a comprehensive liability framework tailored to the nature of AI in financial services. This inquiry proceeds in three steps: first, examining the roles AI plays in the provision and legal structuring of financial services; second, analyzing how liability for AI functioning operates as a regulatory and legal mechanism, particularly where AI systems make autonomous decisions; and third, assessing how AI is integrated into legal documents and contracts, and how this affects service provision and the legal responsibilities of financial actors.

The presence of AI in the private financial sector is expanding rapidly, with technologies such as high-frequency trading (HFT) algorithms, robo-advisors, and automated credit scoring systems becoming increasingly prevalent. However, the unregulated or under-regulated deployment of such AI technologies introduces significant risks, including discriminatory outcomes, systemic instability, and accountability gaps. Notable episodes in recent years, including several in 2022, have highlighted instances where investment managers relied heavily on externally operated AI systems for portfolio management, raising critical questions about oversight and control. Scholars and institutions alike have observed the outsized influence that AI-based HFT systems now exert on financial markets. Even in traditional banking services, AI-driven credit scoring tools have been criticized for their potential to produce disparate impacts across socio-economic groups.
Despite these concerns, the use of AI continues to rise, as seen in the increasing number of licensing applications for AI technologies in finance. Although credit scoring remains the most scrutinized use case, AI applications span a much broader spectrum, including customer service bots, algorithmic trading, risk management systems, and tools for resolution and recovery planning.
Some of these systems fall under the scope of existing legal frameworks such as the NCC law, but many do not. The AI Act, for instance, classifies credit scoring and biometric identification systems as high-risk applications, while many others, such as robo-advisory chatbots, may be considered low or minimal risk, which affects how they are supervised under the broader regulatory regime. This fragmented classification highlights a critical regulatory gap between the development of AI technologies and their governance within financial services.

Two problematic dynamics emerge: first, the disaggregation of financial services, where core activities such as credit scoring or HFT are outsourced to or developed by third-party providers (e.g., PISPs and AISPs under the EU’s payment services directives); and second, the resulting interdependency and complexity, which make it difficult for traditional financial institutions to identify and assign liability clearly. Current financial and banking regulations still operate largely under a subject-based framework, emphasizing authorization and trust as foundational principles. However, as AI-driven functions grow more autonomous and decentralized, this model is increasingly strained. The financial system’s ability to allocate resources effectively and maintain stability hinges on identifying trustworthy actors, but in a landscape shaped by AI, trust and responsibility become more diffuse. Ultimately, even as AI redefines operational mechanisms within finance, traditional financial institutions remain the gatekeepers, entrusted with mediating between technological advancement and regulatory compliance in a system that continues to evolve in scope and complexity.
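The fragmented risk classification described above can be pictured as a simple lookup that a compliance team might maintain. This is only an illustrative sketch: the "high-risk" tags for credit scoring and biometric identification follow the AI Act classification mentioned in the text, but the other tier assignments, the function name, and the use-case labels are assumptions, not a statement of the law.

```python
# Hypothetical sketch of AI Act risk tiering for financial AI use cases.
# Only the two "high-risk" entries reflect the classification discussed
# above; all other mappings are illustrative assumptions.

AI_ACT_TIERS = {
    "credit_scoring": "high-risk",            # per the AI Act, as noted above
    "biometric_identification": "high-risk",  # per the AI Act, as noted above
    "robo_advisory_chatbot": "limited-risk",  # assumption for illustration
    "customer_service_bot": "minimal-risk",   # assumption for illustration
}

def risk_tier(use_case: str) -> str:
    """Return the assumed AI Act risk tier for a use case,
    defaulting to 'unclassified' for anything not yet mapped."""
    return AI_ACT_TIERS.get(use_case, "unclassified")

print(risk_tier("credit_scoring"))   # high-risk
print(risk_tier("hft_algorithm"))    # unclassified
```

The `"unclassified"` default mirrors the regulatory gap the text identifies: many financial AI applications simply fall outside the current classification scheme.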
Katja Langenbucher’s Presentation on AI, Financial Profiling, and the Gaps in Regulation
Katja Langenbucher explored the evolving role of AI in financial profiling, particularly credit scoring, and highlighted how current legal frameworks, specifically in the EU, are not yet equipped to address the complexities introduced by these technologies.
Where Regulation Falls Short
One of the most striking points was her argument that AI-based scoring is not regulated in the way it should be. The law touches on various related areas: data protection, banking, responsible lending, anti-discrimination. However, the role of intermediaries is not addressed.
Intermediaries are those who create and operate the models behind credit scores. Because they are not addressed, a gap emerges that is particularly evident when comparing the EU’s AI Act to the US’s Fair Credit Reporting Act (FCRA). While the FCRA imposes some duties on scoring agencies, EU regulation of AI-based scoring is still catching up, despite progress such as the ECJ’s decision in the Schufa case.
Consumer Rights and Legal Interpretation
Two key GDPR provisions offer protection to individuals subjected to automated decisions: Article 22 stipulates the right not to be subject to decisions based solely on automated processing, including profiling, and Article 15 ensures a right to “meaningful information” about the logic behind those decisions.
Especially interesting was her reference to the ECJ’s interpretation in the Schufa case: the court ruled that a probability-based credit score can itself constitute a “decision,” even if it is only used to inform a lender’s decision. This broad reading opens significant potential for consumer protection.
Notably, Article 86 of the AI Act strengthens the individual’s right to an explanation when decisions are made on the basis of AI outputs. The actual implementation of these rights remains a work in progress, however, and consumers often still have no clear path to understanding or challenging their score.
Regulatory Focus: Where It Should Shift
Instead of focusing solely on the end decision, she proposed regulating the profiling process itself. That includes: defining and separating profiling from decision-making; imposing quality requirements on both the data and the models used; and expanding consumer rights to challenge scoring systems, ensure transparency, and demand human oversight, especially when decisions rest on “black box” algorithms.
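The proposed separation of profiling from decision-making can be sketched in code. In this minimal illustration, the profiling stage produces a score (with provenance metadata, so the intermediary's model can be audited), and the decision stage applies a lending policy to it. Everything here is invented for illustration: the field names, the scoring formula, and the 0.6 threshold are assumptions, not a real scoring methodology.

```python
# Illustrative sketch: profiling (score production) kept distinct from
# decision-making (a lender acting on the score), so each stage could be
# regulated on its own terms. All numbers and names are hypothetical.

from dataclasses import dataclass

@dataclass
class Profile:
    score: float          # output of the profiling stage
    model_version: str    # provenance, enabling audit of the intermediary's model

def profile_applicant(income: float, debt: float) -> Profile:
    """Profiling stage: compute a probability-like score from applicant data."""
    score = max(0.0, min(1.0, income / (income + debt + 1e-9)))
    return Profile(score=score, model_version="demo-0.1")

def lending_decision(p: Profile, threshold: float = 0.6) -> str:
    """Decision stage: a policy applied to the profile. Under the Schufa
    reading, the score itself may already count as a 'decision'."""
    return "approve" if p.score >= threshold else "refer_to_human_review"

p = profile_applicant(income=50_000, debt=20_000)
print(lending_decision(p))  # approve
```

Keeping the two functions separate makes the regulatory point concrete: quality requirements and transparency duties could attach to `profile_applicant` (the intermediary's domain) independently of the lender's policy in `lending_decision`.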
She presented the ECJ’s Schufa ruling as a guide for future legislation, in line with the need to regulate along the lines of Annex III of the AI Act.
The Emerging Market for Scores
Scoring is no longer just about credit: we are entering an era in which many domains, from marketing to employment, could be subject to algorithmic scoring. Despite this, such scores are often vaguely regulated or not regulated at all.
Langenbucher warned of a growing "market for scores," where profiling expands beyond traditional financial services, but with unclear boundaries and minimal oversight.
This presentation made it clear that AI offers efficiency and scalability in financial profiling; however, the regulatory framework has not kept pace. Her focus on intermediaries, profiling processes, and consumer rights is a much-needed contribution to the broader conversation about responsible and reliable AI.
There is an urgent need to rethink regulation not only around AI decision-making but around AI-driven profiling itself. As AI continues to shape financial markets and consumer experiences, the law must evolve to ensure fairness, transparency, and accountability.