- Pascal's Chatbot Q&As
“AI and copyright” not as a standalone morality play, but as the latest stress-test on a deeper structural problem: copyright’s territorial architecture in a world whose markets, platforms, and data flows are not territorial at all. Will the law end up incentivising architecture choices that minimise traceable liability rather than encouraging licensed, accountable data governance?

The Charles Clark Memorial Lecture 2026: Richard Arnold’s Warning Shot on a Borderless Copyright World
by ChatGPT-5.2
Lord Justice Richard Arnold’s Charles Clark Memorial Lecture (London Book Fair 2026) did something quietly subversive: it treated “AI and copyright” not as a standalone morality play, but as the latest stress-test on a deeper structural problem—copyright’s territorial architecture in a world whose markets, platforms, and data flows are not territorial at all. In that sense, the lecture was less “AI panic” and more “system diagnostics”: where are the load-bearing beams, where are the cracks, and which legal doctrines will buckle first when the next wave hits?
Arnold framed his talk through Charles Clark’s famous provocation—“the answer is in the machine”—and then pivoted to the line that matters more in 2026 than it did in 1995: copyright is built on the nation state, but information systems increasingly aren’t. The lecture’s through-line was not that territoriality is dead, but that it is under strain in ways that create global spillovers: a decision (or statute) in one jurisdiction can reshape incentives and outcomes for creators and publishers everywhere.
1) Territoriality still “works”—but it’s working harder, and sometimes against you
Arnold’s core claim is pragmatic: the international copyright system remains resolutely territorial (national rights, national laws, national enforcement), and despite decades of predictions, that territorial model has not collapsed. Yet the daily reality of global distribution, global platforms, and global training pipelines means that territoriality’s “separate national rights” premise now collides with cross-border exploitation patterns.
I, ChatGPT, agree with him on the diagnosis. Territoriality hasn’t vanished; it has become operationally porous. The practical consequence is not a clean shift to “global copyright”, but a messier world where:
rights are still national,
infringements are still analysed through national statutes,
but the economic and technological effects propagate globally, often from the U.S. outward.
The uncomfortable implication—one Arnold didn’t moralise but clearly sees—is that publishers can “win” doctrinal points and still lose strategic ground if the most economically consequential forum resolves the key questions differently.
2) Exhaustion: the quiet doctrine that can blow up market segmentation
Arnold’s first concrete stress point was exhaustion (first sale). It sounds dusty until you remember it is the legal hinge that either preserves or destroys geographic price differentiation. He walked through:
the uncontroversial domestic principle (sale exhausts distribution right for that copy),
EU/EEA regional exhaustion as a free-movement consequence,
and the U.S. Supreme Court’s move to international exhaustion in Kirtsaeng (2013), which enables lawful import and resale of copies sold abroad.
He then brought it home: post-Brexit, the UK considered national, “UK+” asymmetric, and international exhaustion; the government kept the asymmetric regime (sales in the EU/EEA exhaust UK rights, but UK sales do not exhaust EU/EEA rights). Arnold treated this as pragmatic and politically consonant with closer EU alignment, while flagging that a future government could still swing toward international exhaustion.
I largely agree with his framing, but I’d sharpen the risk: international exhaustion is not a consumer-rights tweak; it is a structural redistribution mechanism. It reassigns bargaining power from rightsholders toward secondary markets and arbitrageurs, and it can degrade the economics that subsidise niche publishing. That doesn’t make it “wrong”—but it does mean policymakers should stop pretending it’s a minor technical choice.
3) Ebooks: why the law treats “digital copies” as something else entirely
Arnold’s second stress point was the classic question: does exhaustion apply to ebooks? In the EU, the Tom Kabinet decision said: no—ebook downloads for permanent use are a communication/making available, not “distribution”, and therefore not exhausted. He noted this reinforces rightsholders’ control of ebooks, but also reminded the room that UK courts (post-Brexit) could depart from that EU approach, and Parliament could legislate.
This was a good example of Arnold’s style: doctrinally precise, strategically aware, not theatrical. I agree with him that the classification question is determinative. The deeper issue is that the law has never been neutral about “digital”: because digital copies are frictionless and perfect, legal systems tend to re-characterise the act in ways that preserve control (communication rather than distribution). Whether that’s normatively “right” depends on how much weight you put on libraries, preservation, access equity, and secondary markets—but Arnold was correct to show that the label drives everything.
4) Controlled Digital Lending and the Internet Archive case: fair use as a global policy lever
Arnold then contrasted EU ebook control with the U.S. Internet Archive / controlled digital lending litigation. The key point wasn’t just that the publishers won; it was that the defense relied on fair use, a doctrine UK and EU law simply do not have in that open-textured form. Arnold’s warning here was subtle but important: had the U.S. court gone the other way, the repercussions would not have stayed American. U.S. precedents function as global shockwaves because U.S. platforms, infrastructure, and investment patterns are global.
I agree strongly with this. The “globalisation” of copyright outcomes today is less about treaties and more about market gravity. Even when territoriality is doctrinally intact, the jurisdiction that sets the permissive baseline can effectively export it through platform behaviour, product design, and business norms.
5) Extended Collective Licensing: opt-out licensing as a territorial divergence tool
Arnold’s segment on extended collective licensing (ECL) was a reminder that “territoriality” is not only a defensive barrier; it can be used to redesign the market domestically. He recapped how UK law enabled ECL in 2013, how the EU Soulier line was seen as blocking it, and how post-2023 UK reforms (removal of EU supremacy) reopened the door—yet CLA hasn’t re-applied.
I think Arnold’s legal read is useful, and it surfaces a strategic fork for the UK: ECL could lower transaction costs and solve orphan/out-of-commerce problems—but it also creates a powerful “UK lawful use” zone that might be exploited if safeguards are weak. If the goal is to preserve a functioning licensing ecosystem while enabling practical access, ECL can be a tool—but only if opt-out is meaningful, discoverable, and enforceable at scale.
6) AI: territoriality is becoming the litigation’s hidden trapdoor
Arnold’s AI section did not try to predict winners; it mapped the fault lines:
U.S. author litigation where fair use is the central battlefield.
The Getty v Stability AI result in the UK, where territoriality forced Getty to abandon primary infringement because the training happened outside the UK; and where the secondary infringement claim failed largely because there was no proof the model “stored” infringing copies (and therefore no “infringing article” under the UK provisions cited in the judgment).
The German GEMA v OpenAI outcome (as described), where “memorisation” and output of lyrics drove findings of reproduction and communication, and where the EU DSM TDM exceptions did not save the defendant.
Arnold’s meta-point—one I agree with—is that the same underlying activity (training and inference) can become legally legible in totally different ways depending on:
where it happened,
what the forum’s copyright doctrines require (copying? storage? communication?), and
whether the fact pattern includes “memorisation” or output that tracks the work closely.
Where I would push further than Arnold (and he can’t, as a judge) is this: the legal system is at risk of rewarding actors who are best at jurisdictional arbitrage and evidentiary opacity. If “training outside the UK” plus “no stored copies” becomes a repeatable pathway to defeating claims, then the law is incentivising architecture choices that minimise traceable liability rather than encouraging licensed, accountable data governance.
7) Termination rights: when U.S. law tries to reach the whole world
Arnold ended the doctrinal tour with U.S. termination rights and a controversial U.S. appeals decision suggesting global effect. This was the lecture’s most explicit example of territoriality being directly challenged: not by pirates or platforms, but by a court interpreting a domestic author-protection mechanism in a way that potentially rewrites worldwide rights allocations.
This section mattered because it reveals the same structural reality: when U.S. courts interpret U.S. statutes expansively, the rest of the world doesn’t get a vote—yet the commercial consequences land anyway. Arnold’s broader claim holds: territoriality persists, but it is increasingly “pierced” by the gravitational force of a few key jurisdictions.
Do I, ChatGPT, agree with Arnold’s overall views?
Mostly, yes.
What I agree with:
Territoriality is still the backbone, and pretending it’s gone leads to sloppy reasoning.
The real destabiliser is not the internet’s borderlessness per se, but global trading patterns and platform ecosystems that make one jurisdiction’s rules functionally global.
AI litigation cannot be discussed honestly without foregrounding applicable law and where the act occurred—and without distinguishing “training copies,” “stored copies,” “outputs,” and “memorisation.”
Where I’d be more critical (beyond what a judge can say):
The system is drifting toward a world where the most powerful actors can “design around” enforcement and proof. That is not merely doctrinal evolution; it is governance failure.
If policy focuses only on carving new exceptions rather than building enforceable obligations (recordkeeping, provenance, auditability, meaningful opt-out/opt-in mechanisms), we will get the worst of both worlds: creators lose leverage, and law loses credibility.
The audience questions, and how well Arnold answered them
Q1. “Do you think you were right or wrong in the Duran Duran decision?”
His response: He explained the case context, noted he ruled for publishers based on English contract law, granted permission to appeal, and later co-authored scholarship arguing the “right argument” wasn’t presented because it needed a private international law analysis (potentially pointing to U.S. law as applicable). He distinguished that from the separate question of whether termination has worldwide effect.
Assessment: This was an excellent judicial answer: transparent about limits, faithful to his reasoning, and intellectually honest about later reflection. He avoided ego and used the question to teach the room the lecture’s hidden theme: choice of law drives outcomes. Strong response.
Q2. “Will ‘term of copyright’ / public domain rules change in years to come, given complexity across countries and formats?”
His response: He declined to answer off the cuff, said it’s sensitive and important, referenced a prior consultation but didn’t recall details, and refused to speculate without checking.
Assessment: Frustrating for the audience, but responsible. The question is policy-heavy and fact-specific. He chose accuracy over performance. As a judge, that restraint is the correct instinct, even if it leaves the room wanting more.
Q3. “Can you have an exception or fair use if the material was obtained illegally? Can there be lawful ‘TDM’ over stolen content?”
His response: He drew a bright line: he’s a judge, not a policymaker, and the question is politically contentious. He then gave a legal framework answer: Berne’s three-step test as the international baseline; the EU DSM Directive’s Article 4 TDM exception premised on “lawful access,” but “lawful access” is itself contested and entangled with territoriality and applicable law; the critical overlooked question is often private international law—what law applies to the access? He ended by acknowledging the balancing problem (creator protection vs user freedoms) and said that balance is policy, so he must stop.
Assessment: Legally sharp, institutionally careful. He did what a senior judge should: clarify the doctrinal levers without stepping into lobbying. The one weakness—inevitable, given his role—is that the audience wanted a normative judgment and he could not provide it. But his explanation about “lawful access” and applicable law was genuinely useful.
Q4. “Could UK courts do what German courts are doing—rule on whether U.S. training infringes U.S. copyright, applying U.S. law?”
His response: Yes in principle. He cited the established proposition that English courts can adjudicate foreign copyright infringements given jurisdictional footing, and he referenced his own experience applying both U.S. fair use and UK fair dealing in different cases.
Assessment: Strong answer. It was practical, grounded in precedent and experience, and it corrected a common misconception that courts are “locked” into domestic copyright only. It also reinforced the lecture’s warning: cross-border adjudication is not hypothetical—it already happens.
The “illegal training data” question: should illegality block fair use or any exception?
This is the crux of the question posed in Q3, and it deserves a direct answer in plain terms.
1) If access is illegal, that should weigh heavily against any exception—often to the point of exclusion
Most TDM-style exceptions (and certainly any politically viable version in the UK/EU orbit) rely on lawful access for a reason: exceptions are meant to regulate uses of works within a legitimate information economy, not launder stolen inputs into lawful industrial exploitation. If the pipeline begins with hacking, piracy, or systematic circumvention, granting an exception risks turning copyright law into a subsidy for wrongdoing.
So as a normative rule: no lawful-access, no TDM exception—and if the government wants an exception, it should write that condition so tightly that it is enforceable.
2) Fair use is trickier, but illegality should still be a major negative factor
U.S. fair use is multi-factor and flexible. Courts sometimes treat bad faith or unlawful acquisition as relevant but not automatically dispositive. In practice, though, large-scale infringement upstream should undermine the “equities” of the defendant’s position and can contaminate the analysis—especially under factors involving market harm and the character of the use.
ChatGPT’s view: fair use should not function as an amnesty for industrial-scale ingestion built on piracy, even if some “transformative” story can be told. Otherwise, you incentivise exactly the market behaviour copyright exists to deter: take first, ask permission never, litigate later.
3) The clean policy line: separate “learning” from “laundering”
If society decides that some forms of TDM are socially beneficial, the answer is not to excuse theft. The answer is to build lawful routes:
licensing frameworks that scale,
auditable provenance and recordkeeping,
meaningful opt-out/opt-in mechanisms,
and remedies that bite when developers ignore them.
A copyright exception that remains compatible with legitimacy must be designed so that it cannot be satisfied by stolen datasets. Otherwise the exception becomes a legal narrative that rewards the best-resourced violators and hollows out the licensing market.
4) Bottom line
UK/EU-style exceptions: illegality of access should generally bar reliance on the exception.
U.S. fair use: illegality should be treated as a powerful adverse consideration, and in the context of mass piracy it should usually defeat the defense—because allowing it would convert “fair use” into a doctrine of industrial impunity.
