Bocconi Knowledge

17/11/2022 Marco Alibrando

Quo Vadis EU (Law)? - 2022

Governing Digital Disruption: The Case of Regulatory Sandboxes

On November 17 and 18, 2022, Bocconi's LLM in European Business and Social Law (EBSL) and the Bocconi Lab for European Studies (BLEST) hosted a cycle of conferences dedicated to current developments in EU law. The second session revolved around ways of governing digital disruption, focusing on the tool of regulatory sandboxes.

A regulatory sandbox is defined as a tool allowing businesses to explore and experiment with new and innovative products, services or business models under a regulator's supervision. Its main goal is to give innovators incentives to test their innovations in a controlled environment: in doing so, it also allows regulators to better understand the technology and fosters consumer choice in the long term. However, regulatory sandboxes can pose a significant problem, as they carry a risk of being misused or abused: they therefore need an appropriate legal framework to succeed. To explore this topic further from a European perspective, the Bocconi Faculty of Law invited Profs. Laura Zoboli (Bocconi University) and Marie Skłodowska-Curie (University of Turin), who were joined by Prof. Cristina Poncibò (University of Turin) and Dr. Gabriele Mazzini (European Commission).

The delicate balance between consumers' interests and competition

Prof. Poncibò discussed regulatory sandboxes, focusing mainly on competition and consumers' interests. Firstly, regarding competition: whereas public authorities' reports on sandboxes are positive because sandboxes allow regulators to simultaneously pursue and balance different goals (such as fostering innovation, regulating the market for new services, driving competition law, protecting consumers and, more recently, fostering sustainability), according to Prof. Poncibò it is not always possible to manage all these things together. For instance, there could be cases where competition needs to be restricted in order to foster sustainability.[1]

In any case, to understand the impact that sandboxes could have on promoting competition in digital markets, it is useful to first look at the current status quo. Nowadays, a violation of Articles 101 and 102 TFEU[2] triggers a sanctioning system built on ex-ante asymmetric regulation of a specific sector, in this case pro-competitive asymmetric regulation.

However, there are two issues to consider. The first is whether sector-specific asymmetric regulation really promotes competition in the markets for new digital services. According to Prof. Poncibò, this solution does not work, because the law assumes that competition is most likely to come from services that are perfect substitutes for the incumbent's. In digital markets, however, competition takes place through tools and services that innovatively develop the incumbent's offer and take it in a new direction. An example of this is the rise of TikTok which, despite offering different services, was still able to undermine Meta's dominant position. In addition, there are the risks and perils of overregulation, which benefits only lawyers and auditors while disadvantaging small and new market entrants. The second issue to consider is whether regulatory sandboxes offer alternatives and/or complements to sectorial regulation in promoting competition. They certainly do at first, when a new service is being tested, but this is part of a broader theory, namely the theory of experimental regulation.[3]

The cooperative approach of the Financial Conduct Authority in the UK

Prof. Poncibò additionally discussed some interesting scenarios deriving from the combination of regulatory sandboxes and consumer protection. In the UK, the Financial Conduct Authority (FCA) allowed firms to test innovative propositions in the market with real consumers. Contrary to EU law's idea of a "weak consumer", a confused and vulnerable individual who must be protected, in this case the consumer was an active part of the testing. Thus, a new way of seeing the consumer, as an important part of the design of rules, is arising. This cooperative approach paves the way for the emergence of light-touch regulation: indeed, during the testing period, the extensive regulatory framework for consumer protection would not apply. Nevertheless, as everything happens under the constant oversight of the public authority, which stresses the need to provide redress mechanisms in case something goes wrong, consumers can still obtain compensation. In other words, light-touch regulation does not mean a "wild west" of experimentation to the detriment of citizens.

The Artificial Intelligence Act proposal

Dr. Mazzini provided an overview of Articles 53-54 of the AI Act,[4] which are the relevant norms concerning regulatory sandboxes. The AI Act's main goal is to foster trust among the population, but excellence is also an important component of the whole strategy, and the sandboxes play a bridging role between these two aims. Dr. Mazzini initially focused on Recital 72, from which a double dimension transpires: on the one hand, the companies wishing to participate in the sandbox want some guarantees on how the legal framework will be applied, while on the other hand, the authorities need to tailor their resources. Furthermore, he highlighted an interesting element added in Recital 72, namely that the conduct of participants in the sandbox should be taken into account in determining whether to impose fines pursuant to Regulation (EU) 2016/679 (GDPR). Indeed, even though sandboxes should not represent a free pass for non-compliance with the applicable norms, at the same time there should be a signal that if a company makes an extra effort to participate in a sandbox and to comply with the guidance, there will be some tolerance.

In relation to the key features of the AI Act, the premarketing phase turns out to be the core: specifically, high-risk systems must comply with specific requirements and follow a procedure to ensure compliance before they enter the market. This phase lasts for a limited time, and it also allows for testing under real-world conditions. Article 53 provides the sandboxes' governance framework, which should be established at the national level by national competent authorities.[5] Moreover, the AI Act applies irrespective of who the provider is, including law enforcement authorities, if they are developing in-house AI systems, and EU institutions. In addition, participating in a sandbox does not mean that companies need not comply with the relevant law: on the contrary, the goal of participating in the sandboxes is precisely to foster compliance. Nevertheless, there is a margin of discretion and flexibility, as the authority can decide how compliance should be met by the participants in the sandbox.

Article 54 establishes a special regime regarding the re-use of personal data. Generally, the use of personal data is subject to the GDPR; however, the GDPR foresees that national law may regulate the further processing of personal data to pursue certain public interest objectives. This is exactly the origin of Article 54: it is not lex specialis, but essentially provides a legal basis for the re-use of personal data and sets out several safeguards and conditions.

Lastly, Dr. Mazzini examined the current status of the AI Act: the proposal was presented by the Commission in April 2021 and, under the Ordinary Legislative Procedure, is now being discussed by the Council, under the French Presidency, and by the Parliament; negotiations within Parliament are not expected to finish before March 2023.

 


[1] The most recent and clearest example of this is the sandbox set up by the Hellenic Competition Commission (HCC).

[2] Article 101 TFEU prohibits agreements that restrict competition, while Article 102 TFEU prohibits abusive conduct by companies that hold a dominant position in a particular market.

[3] C. Sabel, J. Zeitlin, Experimentalist Governance, Oxford University Press, Oxford, 2010.

[4] The Artificial Intelligence Act (AI Act) is a Regulation proposed on April 21, 2021 by the European Commission, which aims to introduce a common regulatory and legal framework for artificial intelligence.

[5] Specifically, the AI Act foresees three categories of authorities: the Market Surveillance Authorities, the National Supervisory Authority and the Notifying Authority.
