This paper makes a compelling case for aligning AI governance with the bedrock principles of copyright law—territoriality, clarity, and fairness.
If the EU wishes to lead in responsible AI, it must avoid trying to do so by stretching the limits of what its laws can credibly and lawfully govern.
Copyright, the AI Act, and Extraterritoriality in the Era of Generative AI
Based on João Pedro Quintais' Policy Brief for the Lisbon Council (June 2025)
by ChatGPT-4o
Introduction
In the evolving landscape of artificial intelligence, few issues strike at the core of creativity, regulation, and sovereignty like the intersection of generative AI and copyright. João Pedro Quintais’ policy brief, Copyright, the AI Act and Extraterritoriality, offers a rigorous legal and policy analysis of how the European Union’s AI Act engages with copyright law—particularly through its treatment of text and data mining (TDM) and its controversial claims of extraterritorial reach. The paper is a timely and thoughtful exploration of unresolved questions at the heart of EU regulatory ambitions and technological change.
The Creative Disruption of Generative AI
Quintais opens with a familiar paradox: technology has long been both a catalyst and a threat to creativity. Generative AI epitomizes this tension. On the one hand, it empowers individuals to produce sophisticated creative outputs with ease. On the other hand, it undermines traditional content creation structures, challenging how we define originality, authorship, and value.
While a growing number of creatives embrace AI as a tool, the economic predictions are mixed. Industry bodies like CISAC foresee sharp revenue declines for musicians and audiovisual creators. These tensions are more than economic: they raise the question of whether AI-generated content deserves protection in its own right, qualifies as derivative, or infringes the works used to train it.
The EU’s Regulatory Approach: Ambition Meets Ambiguity
The EU AI Act, adopted in 2024 with its obligations applying in stages from 2025, attempts to set ground rules for AI development across the value chain. Chapter V targets general-purpose AI (GPAI) models—like GPT-4 or LLaMA 2—with new obligations:
Training Data Disclosure: Providers must publish a sufficiently detailed summary of the content used to train their models.
Copyright Compliance Policy: Providers must put in place a policy to comply with EU copyright law, including state-of-the-art technical measures to identify and respect opt-outs under Article 4(3) of the CDSM Directive.
However, this creates interpretive challenges. The AI Act, a public law instrument emphasizing systemic compliance, overlays a copyright system that is fundamentally private law-based, relying on enforceable rights of individual creators. Their interface is unclear, particularly since the AI Act avoids directly integrating copyright rules into its core operative provisions.
Text and Data Mining (TDM) and Copyright: A Legal Knot
TDM is central to AI model training. EU law offers exceptions in Articles 3 and 4 of the CDSM Directive, allowing TDM under specific conditions. Crucially, Article 4 allows rights holders to opt out—typically via machine-readable signals. Yet several ambiguities remain:
What qualifies as a valid opt-out signal?
Can natural-language terms in a website's terms and conditions suffice?
At what stage can opt-outs be enforced—before or during training?
Quintais notes that no single opt-out protocol dominates, despite tools like Google-Extended or Spawning’s “Do Not Train” suite. These technical complexities are compounded by legal uncertainties: should upstream dataset providers be held accountable, or only downstream deployers? And how do jurisdictional rules apply if the training happens outside the EU?
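The protocol fragmentation Quintais describes is easier to see with a concrete sketch. The Python below checks two of the machine-readable signals in circulation: a robots.txt disallow rule for Google's "Google-Extended" crawler token, and a TDMRep-style "tdm-reservation" HTTP response header. The robots.txt content and header values here are illustrative stand-ins; a real crawler would fetch and normalize them over HTTP.

```python
from urllib import robotparser

# Illustrative robots.txt: the site blocks Google's AI-training crawler
# token ("Google-Extended") while still allowing ordinary crawlers.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

def google_extended_allowed(robots_txt: str, url: str) -> bool:
    """Return True if the Google-Extended token may fetch `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("Google-Extended", url)

def tdm_reserved(headers: dict) -> bool:
    """TDMRep: a `tdm-reservation: 1` response header signals an opt-out.

    Real code should treat header names case-insensitively.
    """
    return headers.get("tdm-reservation", "0").strip() == "1"

if __name__ == "__main__":
    print(google_extended_allowed(ROBOTS_TXT, "https://example.com/article"))
    print(tdm_reserved({"tdm-reservation": "1"}))
```

Note that neither check answers the legal questions above—whether natural-language terms also count, or at what stage the opt-out binds—it only shows how little the two machine-readable mechanisms have in common.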
Extraterritoriality: Law Without Borders?
The most controversial section of the brief concerns the AI Act’s extraterritorial reach, especially Recital 106, which asserts that copyright compliance obligations apply even if TDM occurs outside the EU. Quintais argues persuasively that this conflicts with the principle of territoriality, foundational to both EU and international copyright law.
The relevant legal frameworks (e.g., the Rome II Regulation, the Brussels I Regulation, the Berne Convention) affirm that the law of the country where the allegedly infringing act occurs governs. If a model is trained in Japan under Japanese copyright law, EU copyright law should not apply after the fact when the model is deployed in Europe. While this recital may reflect regulatory ambition to level the playing field, it lacks binding force and risks distorting established legal norms.
Real-world cases like Kneschke v. LAION in Germany and DPG Media v. HowardsHome in the Netherlands illustrate that courts are already grappling with these extraterritorial claims. The result is heightened uncertainty for AI developers and rights holders alike.
Codes of Practice and Soft Law: A Middle Ground?
A partial resolution may lie in the GPAI Code of Practice, a soft law instrument under development. It could clarify obligations, harmonize expectations, and offer a compliance pathway in the absence of a binding European standard.
However, the third draft of the Code has conspicuously scaled back references to extraterritoriality. This raises a pivotal question: will voluntary commitments from model providers to respect opt-outs globally become the norm, or is this a temporary stopgap until harmonized standards emerge?
This soft law approach offers some flexibility but does not resolve core issues of jurisdiction, enforcement, or fair creator remuneration.
Critical Evaluation
Valuable Contributions:
The brief offers the clearest legal articulation to date of the conflict between the AI Act’s ambition and the territorial logic of copyright law.
It surfaces emerging frictions between upstream and downstream AI actors, showing how regulation could unfairly target model providers while ignoring dataset creators.
Surprising Elements:
The finding that failure to comply with Article 53(1)(c) might not constitute copyright infringement, but could still result in administrative fines under the AI Act.
The notion that Europe might indirectly regulate global AI behavior via non-binding recitals is a bold regulatory gambit—unprecedented and potentially unsustainable.
Controversial Points:
The implied assumption that European law should “set the standard” for global AI copyright compliance, even when activities happen outside its jurisdiction.
The suggestion that harmonization via standard-setting may temporarily displace meaningful enforcement or remuneration strategies.
Recommendations for Policymakers, AI Makers, and Rights Holders
For EU Policymakers:
Remove or revise Recital 106 to reflect the limits of EU jurisdiction and avoid destabilizing international copyright norms.
Accelerate the creation of binding harmonized standards for opt-outs and training transparency, balancing enforcement with legal clarity.
For AI Makers:
Voluntarily adopt asset-based opt-out protocols (like TDMRep) to signal good-faith compliance, even when not strictly required by law.
Engage in co-regulatory processes, contributing to the GPAI Code of Practice to ensure it reflects technical realities.
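For illustration, an asset-based TDMRep declaration can be as simple as a /.well-known/tdmrep.json file on the publisher's server. The field names below follow the public W3C TDMRep community draft; the path pattern and policy URL are placeholders:

```json
[
  {
    "location": "/articles/*",
    "tdm-reservation": 1,
    "tdm-policy": "https://example.com/tdm-policy.json"
  }
]
```

Under that draft, the same reservation can alternatively be expressed per resource, via a tdm-reservation HTTP response header or an HTML meta tag.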
For Rights Holders and Publishers:
Collaborate to develop interoperable opt-out registries and tools that can withstand legal scrutiny across jurisdictions.
Advocate for statutory remuneration schemes, as reliance on opt-outs alone is insufficient to secure fair compensation in the age of generative AI.
Conclusion
Quintais’ policy brief is a sober yet vital warning: while generative AI transforms creativity, regulation must tread carefully. The AI Act, with its copyright provisions and aspirations of global reach, may overstep legal and practical boundaries. Extraterritoriality, while tempting as a tool of regulatory control, introduces uncertainty without ensuring better outcomes for creators. As Europe ponders delaying the AI Act’s application to reduce complexity, this paper makes a compelling case for aligning AI governance with the bedrock principles of copyright law—territoriality, clarity, and fairness.
If the EU wishes to lead in responsible AI, it must avoid trying to do so by stretching the limits of what its laws can credibly and lawfully govern.
