Essay on the Danish Expert Group on Copyright and AI Report
by ChatGPT-4o
1. Introduction
On 15 September 2025, Denmark’s Ministry of Culture published the long-awaited report from the Expert Group on Copyright and Artificial Intelligence (AI). The group brought together rights holders, industry associations, libraries, legal scholars, and media organizations to confront a simple but pressing question: are existing copyright rules “fit for purpose” in the age of generative AI? The consensus was clear—they are not. The report therefore sets out ten recommendations aimed at reshaping the copyright framework for AI training, outputs, and enforcement.
2. Most Surprising Statements
Several aspects of the report stand out as unexpected or unusually bold:
Presumption of infringement when transparency is lacking (Recommendation 1). The group proposes a reversal of the burden of proof: if an AI provider cannot show what training data was used, regulators and courts should assume that copyrighted content was included. This is a powerful, pro-rightsholder stance not widely seen elsewhere.
Possible shift to an “opt-in” model for TDM (Recommendation 2). Currently, Article 4 of the DSM Directive provides a broad opt-out regime for text and data mining. The group suggests moving to explicit consent (“opt-in”), which would radically increase licensing leverage for creators.
Domain public payant idea (Recommendation 8). Borrowing from continental copyright traditions, the group suggests charging for the commercial use of public domain or AI-generated works, redistributing proceeds to human creators. This is a provocative concept that blends cultural policy with copyright law.
Clarification that offering AI systems constitutes communication to the public (Recommendation 10). This stretches copyright doctrine in new directions: the system itself, not just its outputs, would be seen as a mode of “making available,” thereby triggering licensing obligations.
3. Controversial Elements
The report triggered dissent within the group itself:
Dansk Erhverv and DI (Confederation of Danish Industry) rejected Recommendations 1 and 10.
Danske Medier dissented from Recommendations 1, 2, and 4.
The controversies revolve around:
Transparency burdens: Tech and media representatives fear that mandatory disclosure of training data may be unworkable or reveal trade secrets.
Opt-in TDM rules: This would arguably conflict with EU-level law unless the DSM Directive is revised, creating legal uncertainty.
Compulsory arbitration for press publishers (Recommendation 4): Some media companies prefer court-based leverage, while others welcome faster dispute resolution.
4. Most Valuable Statements
The report contains several valuable, pragmatic contributions:
Collective licensing reinforcement (Recommendation 3). It acknowledges that mass rights clearance is impossible on a case-by-case basis and highlights Denmark’s strong tradition of extended collective licensing as a practical solution.
Technical guardrails against illegal uploads (Recommendation 6). The group highlights the growing risk of user-side infringement and places the responsibility on providers to design safe platforms.
Conditional public prosecution for AI copyright cases (Recommendation 7). Recognizing the technical complexity of cross-border AI disputes, the report proposes targeted state involvement to avoid leaving smaller rightsholders stranded.
Awareness and education initiatives (Recommendation 9). The recognition that law alone is insufficient, and that users, institutions, and creators need clear guidance, is a valuable and often neglected point.
5. Is this Positive for Rights Owners and Creators?
On balance, yes. The report tilts heavily toward restoring bargaining power to rightsholders in a market currently dominated by the opaque practices of tech giants. Transparency obligations, presumptions of infringement, stronger collective licensing, and new protections for personal likenesses all strengthen the legal and negotiating position of authors, publishers, and performers.
At the same time, the report tries to avoid outright hostility to innovation. By recommending pilot schemes, arbitration, and further study (rather than immediate legislative overhaul) in some areas, it leaves room for compromise.
6. Points of Agreement and Critique
I broadly agree with the thrust of the report—AI has disrupted copyright far beyond what the DSM Directive foresaw, and bold interventions are justified. However, several issues are missing or underdeveloped:
Global enforcement dimension. AI model training is rarely confined to Denmark or even the EU. The report is largely silent on how Danish or EU rules can be enforced extraterritorially, especially against U.S. or Chinese providers.
Moral rights and attribution. While the report covers economic rights thoroughly, it does not adequately discuss whether AI outputs should carry attribution obligations when they draw on the works of identifiable authors.
Competition and concentration. The text notes big tech dominance but does not suggest competition law remedies. A holistic solution likely requires both copyright and antitrust tools.
Fair remuneration for small creators. Collective licensing is emphasized, but the report does not address in detail how to ensure equitable distribution of revenues (e.g., so that large publishers do not absorb all the benefits).
7. What I Would Change or Add
If I were refining the report, I would add:
A clearer roadmap for harmonization with the EU’s AI Code of Practice and AI Act. The current text is already outdated in places, weakening its impact.
Proposals for metadata or watermarking standards to help automate compliance.
More explicit recognition of open-source AI models and their different dynamics compared to commercial, closed-source systems.
A forward-looking section on synthetic data feedback loops (AI outputs becoming AI inputs), which poses risks to diversity and originality.
8. Should Other Countries Follow This Example?
Yes—with caveats. Denmark’s report is significant because it blends strong copyright traditions with a willingness to innovate (extended collective licensing, presumption rules, public-domain levies). It could serve as a laboratory for Europe: if Denmark pilots these measures, their effects could inform EU-level reforms.
Other regulators—particularly in smaller markets vulnerable to cultural homogenization—should pay attention. Countries like the Netherlands, Finland, or Canada could adapt similar frameworks to protect their local creators. However, larger jurisdictions (U.S., UK) would need to tailor the recommendations to their own legal cultures, especially given First Amendment/free speech traditions in the U.S. and the UK’s departure from EU law.
Conclusion
The Danish Expert Group on Copyright and AI has produced one of the most ambitious national-level responses to the copyright challenges posed by AI. Its surprising elements—such as presumptions of infringement and exploration of domain public payant—push the debate beyond incremental tweaks toward structural reform. While controversial and imperfect, the report is a net positive for rights owners and creators, providing both legal clarity and bargaining power.
Other countries should indeed study and adapt Denmark’s approach. By experimenting with bold measures in a small but sophisticated legal system, Denmark may help shape the future balance between AI innovation and cultural sustainability across Europe and beyond.
