
Bocconi Knowledge

07/10/2025 Justyna Michalak

Quo Vadis, EU (Law)? Navigating the Future of AI Regulation 4/4

AI and Intellectual Property: Shaping the Future of Innovation and Creativity

On November 28 and 29, Bocconi University, the Bocconi Lab for European Studies (BLEST) and the LLM in European Business and Social Law (EBSL) organized the fourth edition of Quo Vadis, EU (Law)?. This year’s edition focused on navigating the future of AI regulation, analyzing and discussing the legal and constitutional challenges posed by artificial intelligence (AI). Pietro Sirena, Dean of the Law School, delivered the opening remarks alongside Prof. Lillà Montagnani (Bocconi University) and Prof. Eleanor Spaventa (Bocconi University).

 

Panel 4, “AI and Intellectual Property: Shaping the Future of Innovation and Creativity,” was chaired by Eleonora Visentin (Bocconi University) and featured insightful presentations by leading legal scholars – Dr. Alina Trapova (University College London), Prof. Noah Shemtov (Queen Mary University of London), and Prof. Tao Qian (China University of Political Science and Law). Together, they elaborated on the complex interplay between AI, IP law, and innovation.

GenAI and Copyright

Dr. Alina Trapova opened the discussion with her presentation “genAI and Copyright – the EU perspective – focus on input.” Dr. Trapova examined how generative AI models, which rely on human-created materials for their training, raise significant questions about copyright law within the EU, distinguishing between the legal issues raised by the input and by the output of genAI models. A crucial part of her analysis concerned the 2019 EU Copyright Directive, which introduced two exceptions for text and data mining (TDM): Article 3 allows mining for scientific research purposes, while Article 4 permits general use, including by genAI systems such as ChatGPT. These exceptions, however, come with certain restrictions. Article 4, in particular, includes an opt-out provision for right holders, which, according to Dr. Trapova, undermines its effectiveness. A recent decision by a German court demonstrated that this opt-out may render the exception practically useless for companies like OpenAI if right holders exercise their opt-out rights broadly.

 

Dr. Trapova also highlighted the transparency requirements of the EU’s AI Act, noting that these provisions aim to empower right holders by ensuring they know when and how their data is used to train AI systems. Yet this transparency is insufficiently robust, particularly where training takes place outside the EU but, owing to the Act's extraterritorial reach, still affects stakeholders within its jurisdiction. She concluded with a reflection on the normative value of recitals in EU law, which often provide interpretative guidance but lack the binding force needed to address these intricate regulatory challenges fully.

 

Fair Use, LLM Training & More: Striking the Balance in Copyright

Following Dr. Trapova, Prof. Noah Shemtov enriched the discussion with his presentation “Fair Use, LLM Training & More: Striking the Balance in Copyright,” which focused on the intersection of the fair use doctrine and large language models (LLMs). He began by explaining how Transformer-based LLMs operate, noting a striking parallel with the way humans read: through their attention mechanism, these models can focus on different words in a sentence to understand its context, much as humans do.
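
For readers curious about the mechanism Prof. Shemtov referred to, the snippet below is a minimal sketch of scaled dot-product self-attention, the Transformer component that lets a model weigh every other word in a sentence when interpreting a given word. The toy dimensions, random weights, and variable names are illustrative assumptions and do not reflect any system discussed in the presentation.

# Minimal sketch of self-attention: each word's representation is rebuilt
# as a weighted mix of all words in the sentence, with the weights
# ("attention") indicating which words the model focuses on.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # word-to-word relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sentence
    return weights @ V                            # context-aware vector per word

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                           # e.g. a five-word sentence
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (5, 8): one enriched vector per word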

 

Prof. Shemtov then elaborated on the intersection of fair use and LLM training. AI companies invoke the fair use doctrine to justify using copyrighted materials for LLM training, arguing that such use is "transformative" because it creates something new or serves a different purpose rather than merely replicating the original work. The legitimacy of this argument depends on how copyrighted materials are used at the input layer (the encoder ingesting existing works) and at the output layer (the decoder producing results), a distinction that plays a crucial role in assessing whether a given use qualifies as fair use or infringes IP rights. In conclusion, Prof. Shemtov highlighted the need for more comprehensive discussions and guidelines to address the challenges at the intersection of innovation and copyright law, given the ongoing uncertainties in the current legal framework.

 

Copyright Compliance for AI model Providers concerning Training Data – Chinese Perspective

The final presentation, by Prof. Tao Qian, turned to the regulatory landscape in China under the title “Copyright Compliance for AI model Providers concerning Training Data – Chinese Perspective.” Prof. Qian explained that Chinese authorities have adopted ministerial rules requiring AI model providers to use legally sourced training data obtained with explicit consent and to be transparent about its origins and use. Providers must also improve the quality of input data and adhere to national information security standards, so that AI innovation remains aligned with legal compliance and public trust.

 

Moreover, Prof. Qian highlighted the regulatory gaps that can arise when different entities, such as model producers and developers, are involved in creating and deploying AI systems. This division of roles complicates efforts to assign liability and enforce compliance with copyright law, particularly when it is difficult to determine which works are protected by copyright and who actually owns them. These are undoubtedly among the most pressing challenges for AI regulators. Prof. Qian concluded with remarks on the dual role of the AI and copyright dilemma in fostering cultural prosperity and driving industrial development, stressing the need to ensure that innovation and technological development align with ethical and legal standards, ultimately balancing creative growth with technological advancement.

 

The three presentations underscored the need for a more harmonized global approach to regulating AI and IP. While the EU has taken steps to balance innovation with the protection of rights, its measures face practical challenges regarding transparency and enforcement. As the panel concluded, it became clear that addressing the complex intersection of AI and IP will require not only robust legal frameworks but also ongoing dialogue among stakeholders across jurisdictions.
