Essay: The Learnright Proposal—Fairness, Friction, and the Limits of Copyright Reform for AI Training
by ChatGPT-5.2
I. Introduction
Pasquale, Malone, and Ting advance one of the most ambitious copyright reform proposals yet aimed at generative AI: the creation of a new exclusive right—a “learnright”—that would require AI developers to license copyrighted works used for model training. The proposal is motivated less by doctrinal elegance than by distributive justice: the authors argue that existing copyright law systematically privileges machine learning over human learning and risks hollowing out the creative economy that AI depends upon.
The paper is unusually candid about uncertainty—legal, economic, and ethical—and is strongest where it exposes structural imbalances rather than promising doctrinal neatness. At the same time, its central solution raises serious feasibility concerns, particularly when assessed against real-world AI development practices, global competition, and the technical realities of large-scale model training.
II. Strengths of the Core Arguments
1. Clear diagnosis of the asymmetry problem
One of the paper’s most compelling contributions is its articulation of structural asymmetry: copyright law tolerates machine ingestion of expressive works at industrial scale while continuing to regulate human copying tightly. The comparison to Google Books jurisprudence is especially effective, highlighting how “transformative use” doctrine has quietly evolved to privilege automation over human engagement.
This critique is persuasive, legally grounded, and difficult to dismiss. It reframes AI copyright debates away from narrow infringement questions and toward systemic incentive design.
2. Honest treatment of fair use uncertainty
Rather than asserting that AI training is clearly infringing (a common rhetorical shortcut), the authors acknowledge the deep unpredictability of fair use doctrine—especially post-Warhol. Their conclusion that courts may well bless training while condemning outputs is realistic and strategically important. It strengthens the case for legislative intervention by showing that litigation alone is unlikely to resolve distributional harms.
This realism enhances the paper’s credibility, particularly for policymakers.
3. Normative pluralism done well
The ethical case for compensation—utilitarian, deontological, and virtue-based—is unusually well-balanced. The authors do not overclaim; they concede that counter-arguments exist within each framework, but argue convincingly that uncompensated AI training by dominant commercial actors fails across all three.
Notably, the deontological critique of AI firms asserting IP protection for their own models while denying moral relevance to training data is one of the paper’s sharpest points—and one that resonates strongly in current geopolitical and industrial debates.
III. Weaknesses and Fragilities in the Proposal
1. Underestimation of technical infeasibility
The proposal assumes that AI developers can meaningfully identify, track, disclose, and license individual copyrighted works used in training. In practice, large-scale model training involves:
massive, mixed, often noisy datasets,
iterative pretraining, fine-tuning, and reinforcement cycles,
data transformations that make provenance attribution extremely difficult.
While the authors gesture toward audits, whistleblowers, and disclosure regimes, they do not fully confront the mismatch between legal traceability and technical reality. The risk is that compliance becomes either performative or selectively enforced—favoring incumbents with compliance budgets while disadvantaging smaller or open-source actors.
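To make the traceability gap concrete, the short Python sketch below uses hypothetical work IDs and toy text (not any real pipeline): it builds a hash-based provenance manifest at ingestion, then shows how ordinary cleanup and chunking sever the link between the text a model actually trains on and that manifest.

```python
# Minimal sketch, with hypothetical work IDs and toy text: a provenance manifest
# keyed by document hash, and the routine preprocessing that breaks exact matches.
import hashlib
import re

def doc_hash(text: str) -> str:
    """Content hash of a document exactly as ingested."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def preprocess(text: str, chunk_size: int = 40) -> list[str]:
    """Typical cleanup and chunking before training: normalize whitespace,
    lowercase, then split into fixed-size chunks."""
    cleaned = re.sub(r"\s+", " ", text.lower()).strip()
    return [cleaned[i:i + chunk_size] for i in range(0, len(cleaned), chunk_size)]

# Hypothetical corpus and the manifest built at ingestion time.
corpus = {
    "work-001": "The Quiet Orchard, a novel by A. Example. Opening chapter text ...",
    "work-002": "Field Notes on Estuaries, an essay by B. Example. Introduction ...",
}
manifest = {doc_hash(text): work_id for work_id, text in corpus.items()}

# The chunks the model actually sees no longer hash to anything in the manifest,
# so attribution after preprocessing requires fuzzy matching at best.
for work_id, text in corpus.items():
    for chunk in preprocess(text):
        match = manifest.get(doc_hash(chunk))
        print(f"{work_id}: chunk {chunk[:24]!r}... -> manifest match: {match}")
```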
2. Market optimism bordering on institutional romanticism
The analogy to ASCAP-style collecting societies is appealing—but likely overextended. Music licensing works because:
works are discrete and identifiable,
usage can be measured (plays, broadcasts),
value attribution is tractable.
AI training lacks these features. The paper does not convincingly explain how value would be apportioned among millions of works contributing marginally and indirectly to a model’s capabilities. Market pricing may emerge—but it is just as likely to collapse into blunt category licensing (e.g., “news corpus,” “fiction corpus”), undermining the fairness rationale that motivates the proposal in the first place.
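A back-of-the-envelope sketch illustrates the apportionment problem; the pool size, work counts, and category shares below are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative numbers only (assumptions, not figures from the paper): pro-rata
# apportionment of a fixed licensing pool by token share versus category buckets.
licensing_pool_usd = 100_000_000     # assumed annual pool paid by developers
num_works = 50_000_000               # assumed number of licensed works
avg_tokens_per_work = 20_000         # assumed average work length in tokens
total_tokens = num_works * avg_tokens_per_work

price_per_token = licensing_pool_usd / total_tokens
payout_per_typical_work = price_per_token * avg_tokens_per_work
print(f"price per token:           ${price_per_token:.8f}")
print(f"payout for a typical work: ${payout_per_typical_work:.2f}")

# The administratively easier alternative: one rate per corpus bucket, which
# flattens exactly the per-work differences the fairness rationale cares about.
category_shares = {"news": 0.35, "fiction": 0.25, "general web text": 0.40}
for category, share in category_shares.items():
    print(f"{category:>16}: ${licensing_pool_usd * share:,.0f} to the whole bucket")
```

The point is not the specific numbers but the shape of the outcome: per-work payouts trend toward trivial amounts, while coarse category buckets are what actually scale administratively.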
3. Jurisdictional and geopolitical blind spots
Although the authors briefly note that learnrights could vary across jurisdictions, they largely analyze the issue through a U.S. lens. This is a serious omission. In a world where:
AI development is globally competitive,
training can be relocated,
models cross borders effortlessly,
a unilateral learnright risks being ignored, arbitraged, or weaponized geopolitically. The paper underplays the risk that such a regime could accelerate offshoring of AI development or entrench dominant firms that can absorb compliance costs.
IV. Most Surprising, Controversial, and Valuable Claims
Most surprising
The argument that failure to compensate creators may ultimately degrade AI quality itself (via reduced human creative output and model collapse) is both counterintuitive and compelling. It reframes creator compensation as a technical sustainability issue, not just a moral one.
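The model-collapse intuition can be illustrated with a toy simulation (a stylized, standard-style example, not the authors' own analysis): a simple statistical "model" repeatedly refit on its own synthetic output tends to lose diversity over generations.

```python
# Toy illustration of the model-collapse intuition, not the authors' analysis:
# fit a simple Gaussian "model", sample from it, refit on those samples only,
# and repeat. With no fresh human-made data, diversity (stdev) tends to decay.
import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0      # generation 0 stands in for human-created data
refit_sample_size = 10      # small refit sets make the drift visible quickly

for generation in range(1, 201):
    synthetic = [random.gauss(mean, stdev) for _ in range(refit_sample_size)]
    mean = statistics.fmean(synthetic)   # each generation is trained only on
    stdev = statistics.stdev(synthetic)  # the previous generation's output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: fitted stdev = {stdev:.4f}")
```

However stylized, this is the mechanism behind the sustainability point: a system that erodes the human output it trained on erodes its own future training supply.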
Most controversial
The rejection of First Amendment defenses for AI training—especially the claim that machine learning is not speech because it lacks a speaker—will provoke significant pushback. While doctrinally plausible, it cuts sharply against prevailing Silicon Valley narratives and some academic consensus.
Most valuable
The framing of AI training as a new economic activity deserving its own tailored right—rather than being awkwardly shoehorned into reproduction or derivative work doctrines—is the paper’s most enduring contribution. Even critics of learnrights will likely adopt this conceptual move.
V. Overall Assessment: Is the Learnright Rationale Sound?
Legally
Partially sound, but brittle.
The rationale for legislative intervention is strong, and the critique of fair use drift is persuasive. However, creating a new exclusive right risks compounding copyright complexity and may trigger constitutional, trade, and preemption challenges—especially if poorly harmonized internationally.
Technically
Weak to moderate.
The proposal does not yet align with how modern AI systems are actually built. Without breakthroughs in data provenance, attribution, or training transparency, enforcement would be uneven and prone to gaming.
Commercially
Ambivalent.
Large incumbents could absorb learnright licensing as a cost of doing business—potentially even welcoming it as a barrier to entry. Smaller firms, open research, and non-Western competitors would likely be chilled or displaced, raising concerns that the cure may worsen concentration.
VI. Conclusion
Copyright, Learnright, and Fair Use is one of the most serious and intellectually honest attempts to grapple with AI’s distributive shock to the creative economy. Its diagnosis is sharper than its prescription, but that does not diminish its importance. The learnright proposal should be understood less as a ready-made policy solution and more as a forcing function—compelling lawmakers, courts, and industry to confront the reality that “free learning” at machine scale is neither economically neutral nor ethically benign.
Whether or not learnrights are ultimately adopted, the paper succeeds in one crucial respect: it makes it far harder to defend the status quo as fair, inevitable, or sustainable.
