Summary: The Danish Court of Appeal held that a TDM opt-out for online content must be machine-interpretable, not merely written in human-readable HTML terms.
That weakens reliance on ordinary website policies and pushes rights holders toward technical, standardised opt-out signals backed by logs and enforcement infrastructure.
For AI, the ruling matters because training-data compliance will increasingly depend on whether crawlers can detect, interpret and respect rights reservations at scale.
The Opt-Out That Machines Could Not Understand: Why the Danish TDM Ruling Matters for AI
by ChatGPT-5.5
The Danish Court of Appeal ruling matters because it turns a deceptively simple question into a major copyright-infrastructure issue: is it enough for a rights holder to write “no scraping, no crawling, no text and data mining” in ordinary website terms, or must that prohibition be expressed in a way that automated systems can reliably detect, interpret and act upon?
The Court of Appeal’s answer is: ordinary human-readable language in HTML is not enough. A valid TDM opt-out for publicly available online content must not merely be machine-detectable; it must be machine-interpretable. That distinction is potentially very important for AI, because AI training, dataset creation, search indexing, scraping, retrieval systems and agentic tools all depend on automated ingestion at scale.
Why it is important
This ruling is important because it shifts the legal debate from “did the rights holder object?” to “did the rights holder object in a technically actionable way?” That is a much higher bar. It means that a rights holder may have clearly objected in ordinary language, and a human lawyer may understand the objection perfectly, but the opt-out may still fail if automated systems cannot reliably process it as a rights reservation.
That has direct relevance to AI even though this Danish case was not itself an AI-training case. The legal machinery is the same: Article 4 of the DSM Directive allows text and data mining of lawfully accessible content unless the rights holder has expressly reserved the use in an appropriate manner. For online content, the legal and policy trend is now moving toward “appropriate” meaning technically standardised, machine-actionable and capable of being respected at scale.
This matters for publishers, news media, platforms, image libraries, scientific databases, entertainment companies and any business whose value sits in structured or semi-structured online content. It also matters for AI developers because the EU AI Act requires providers of general-purpose AI models to have a policy to comply with EU copyright law and, in particular, to identify and comply with rights reservations made under Article 4(3) of the DSM Directive.
In other words, the future question will not simply be whether a model developer scraped protected material. It will also be whether the rights holder used a recognised machine-readable protocol, whether the AI provider’s crawler detected it, whether the provider’s ingestion pipeline respected it, and whether both sides can prove what happened.
What the issues at hand are
The case involved BoligPortal, a Danish rental-property platform, and ReData, a company that collected rental-market data. ReData scraped data from BoligPortal’s public rental listings and used it for competing data analytics. The first-instance court had granted an injunction preventing ReData from crawling, TDM or otherwise collecting data from BoligPortal’s website or underlying API, and from making the collected data publicly available. The Court of Appeal reversed that result.
There were several legal issues, but the decisive one was the TDM reservation.
First, there was a database-right issue. BoligPortal argued that its database was protected under Danish copyright law implementing the EU Database Directive. ReData challenged whether BoligPortal had shown a qualifying protected database, whether the relevant investment related to the scraped database rather than to a broader commercial platform, whether the eight extracted data points amounted to a substantial part, and whether any protection period had expired or been renewed. The Court of Appeal treated these questions as insufficiently established for the interim injunction stage.
Second, there was the TDM exception issue. Even if BoligPortal had database rights, Danish copyright law implementing Article 4 of the DSM Directive permits text and data mining unless the rights holder has expressly reserved the use in an appropriate manner. That became the key question.
Third, there was the format problem. BoligPortal had placed prohibitions in its data and privacy policy, linked from the website footer, available in HTML and written in natural human language. The first-instance court thought that was enough: the text was public, accessible and technically readable. The Court of Appeal disagreed. It held that because the reservation was only in natural, human-readable language in HTML, it did not satisfy Article 4(3).
Fourth, there was the machine-readable versus machine-interpretable distinction. The Court of Appeal’s most important move was to say that “machine-readable” cannot simply mean “a machine can technically access the file.” Almost everything on the public web is machine-readable in that weak sense. A meaningful opt-out must be capable of being interpreted by automated processing in a way that causes the content not to be used. This is the line that makes the ruling relevant far beyond Danish rental listings.
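One concrete candidate for such an interpretable signal is the W3C Community Group's TDM Reservation Protocol (TDMRep), which expresses a reservation as structured data: an HTTP response header (`tdm-reservation: 1`), an equivalent HTML meta tag, or a site-level `/.well-known/tdmrep.json` file. A sketch of such a file follows; the paths and URLs are hypothetical, and the field names follow the TDMRep draft:

```json
[
  {
    "location": "/listings/*",
    "tdm-reservation": 1,
    "tdm-policy": "https://example.com/licensing-terms"
  },
  {
    "location": "/press/*",
    "tdm-reservation": 0
  }
]
```

Unlike a prose clause buried in a terms page, this structure can be parsed deterministically by a crawler and mapped directly to an allow/deny decision, which is the practical meaning of "machine-interpretable."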
Fifth, there was a commercial asymmetry problem. BoligPortal wanted to prevent ReData from scraping commercially valuable market data, but it also wanted search engines and major platforms to continue accessing the site. That is a familiar rights-holder dilemma: visibility and discoverability often depend on allowing some bots in, while content protection depends on keeping other bots out. A blunt “no bots” approach may damage SEO, distribution and traffic. A selective, machine-actionable rights architecture is therefore necessary.
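A selective robots.txt is the crudest version of that architecture. The user-agent tokens below are ones real crawlers publish (Googlebot for search, GPTBot and CCBot for AI-related crawling), but the policy itself is purely illustrative, and robots.txt alone may not amount to a valid Article 4(3) reservation:

```
# Illustrative policy: keep search indexing, refuse AI-training crawlers
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else: block the commercially sensitive API paths only
User-agent: *
Disallow: /api/
```

The trade-off is visible in the file itself: every line that opens the door for discoverability is a line a rights holder must weigh against content protection.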
What the remedies are
The immediate legal remedy was simple: the interim injunction was lifted. ReData was released from the first-instance prohibition, BoligPortal had to repay the earlier costs amount, and BoligPortal was ordered to pay ReData’s costs for both courts.
But the more important remedies are practical and strategic.
Rights holders should stop treating website terms as the whole solution. Human-readable terms remain useful, but they should be backed by machine-actionable signals. Depending on the type of content and distribution channel, that may include robots.txt, more granular bot controls, TDM Reservation Protocol-style files, HTTP headers, HTML metadata, embedded asset-level metadata, API terms, licensing-policy endpoints, crawler registries, and auditable access logs. The strongest approach is not one signal but a layered system: public legal terms, machine-readable rights declarations, technical controls, rate limits, authentication for higher-value data, monitoring, and evidence preservation.
AI developers should treat TDM opt-outs as part of their compliance architecture, not as a courtesy. The EU AI Act and GPAI Code of Practice process are pushing developers toward state-of-the-art detection of rights reservations. A defensible AI ingestion pipeline should record where data came from, what signals were present at the time of access, whether a reservation applied, how the crawler interpreted it, and whether the content was excluded, licensed or retained.
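What such an ingestion record might look like can be sketched in a few lines of Python. The signal names follow the TDMRep draft, but the helper functions and the record format are illustrative assumptions, not any provider's actual compliance system:

```python
import datetime
import json

def check_reservation(headers: dict, meta_tags: dict) -> bool:
    """Return True if a TDMRep-style reservation signal is present.

    Checks the HTTP header and HTML meta tag variants; a fuller
    implementation would also consult /.well-known/tdmrep.json.
    """
    if headers.get("tdm-reservation") == "1":
        return True
    if meta_tags.get("tdm-reservation") == "1":
        return True
    return False

def ingestion_record(url: str, headers: dict, meta_tags: dict) -> dict:
    """Build an auditable log entry for one fetched document."""
    reserved = check_reservation(headers, meta_tags)
    return {
        "url": url,
        "fetched_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "signals": {
            "http_header": headers.get("tdm-reservation"),
            "html_meta": meta_tags.get("tdm-reservation"),
        },
        "reservation_detected": reserved,
        "action": "excluded" if reserved else "retained",
    }

# Example: a page served with a TDMRep-style reservation header.
record = ingestion_record(
    "https://example.com/listings/42",
    headers={"tdm-reservation": "1"},
    meta_tags={},
)
print(json.dumps(record, indent=2))
```

The point is not the code but the evidentiary posture: every fetch leaves a timestamped record of which signals were present and what the pipeline did about them.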
Regulators and standard-setters should recognise the structural problem. If every rights holder has to invent its own opt-out language and every AI company has to guess what counts as valid, the system will fail. The remedy is interoperability: a common, low-friction, publicly recognised set of protocols that work across sectors, file types, websites, platforms and territories.
Courts may also need help from the CJEU. The Danish Court of Appeal itself noted that there is no CJEU case law finally clarifying Article 4(3). That means national courts are converging in practice, but EU-wide legal certainty still depends on higher-level clarification.
How this compares to similar developments
The Danish ruling is especially interesting because it reverses the more rights-holder-friendly first-instance decision. The first-instance court accepted a clear, footer-linked, human-readable HTML policy as a valid TDM reservation. The appeal court did not. That mirrors a broader European shift from “can a machine read the text?” to “can an automated system understand the legal instruction?”
The Amsterdam District Court’s HowardsHome decision pointed in the same direction. There, Dutch publishers argued that their content had been reserved from TDM. The court found that the evidence did not establish an appropriate machine-readable reservation, and the defendant could rely on the TDM exception. That case also showed how difficult the opt-out problem becomes for news publishers, RSS feeds and downstream copies of content.
The German LAION litigation is even closer to AI. At first instance, the Hamburg Regional Court suggested that natural-language reservations might be sufficient. On appeal, however, the Hamburg Higher Regional Court took a stricter view: a natural-language opt-out did not satisfy the machine-readability requirement. The German appeal court also held that certain dataset-creation and pre-processing steps could fall within the TDM framework, while leaving some larger questions about model training and downstream outputs unresolved.
The Hungarian Like Company v Google reference is the next major escalation. That case asks the CJEU to address generative AI, press publishers’ rights, chatbot outputs, reproduction, communication to the public and the role of TDM. It may become the EU’s first major opportunity to draw the boundary between lawful AI training, unlawful reproduction, protected press content and compensable use.
The EU AI Act adds another layer. Article 53 requires general-purpose AI model providers to put in place copyright-compliance policies and identify and comply with rights reservations under Article 4(3). The European Commission has also launched work on protocols for rights reservations under the GPAI Code of Practice. This means TDM opt-outs are no longer only a copyright-law issue. They are becoming part of AI governance, product compliance and market access.
What this could mean for AI development
For AI developers, the ruling is both helpful and dangerous.
It is helpful because it gives developers a clearer argument that not every “no scraping” sentence buried in ordinary website terms disables the Article 4 TDM exception. If a rights holder wants to opt out of automated mining, the signal must be technically usable by automated systems. That gives developers more legal certainty when building crawlers and datasets.
But it is dangerous because it also makes compliance more technical and more auditable. Once machine-actionable standards exist, AI developers will find it harder to claim uncertainty. They will need to prove that they respected valid reservations. That means ingestion logs, crawler behaviour, dataset lineage, exclusion lists, deletion workflows and licensing records become legally significant.
For rights holders, the ruling is a warning. A clear human objection may not preserve leverage if it is not translated into machine-actionable infrastructure. This is particularly serious for publishers, because many publishing businesses depend on being discoverable by search engines while preventing uncompensated substitution by AI systems. The old model of “put it online, state your terms, enforce later” is not sufficient in an AI environment.
For smaller creators, the ruling may be more troubling. Large publishers and platforms can implement protocols, metadata and crawler controls. Individual artists, authors, photographers and small websites may struggle. That creates a risk that the opt-out system becomes formally available to everyone but practically usable only by those with infrastructure, lawyers and technical teams.
For AI licensing markets, the ruling is likely positive. If opt-outs become standardised, they can become the first step in licensing. A machine-readable “no TDM without permission” signal can point to a licensing URL, rights registry, collective management mechanism or API access route. That is how a chaotic prohibition layer can evolve into a functioning market.
Other lessons learned
The first lesson is that legal rights now need technical expression. In the AI era, a right that cannot be discovered by machines may be difficult to enforce against machine-scale conduct.
The second lesson is that “machine-readable” is not the same as “written somewhere on the internet.” Courts are beginning to understand that a bot reading bytes is not the same as a bot understanding legal permission. Rights reservations need structure, not just words.
The third lesson is that opt-out systems are not a substitute for licensing. An opt-out says “do not use unless authorised.” It does not by itself create a commercial pathway. Rights holders should pair reservations with licensing infrastructure, contact points, standard terms and APIs.
The fourth lesson is that content owners must avoid accidental self-harm. Blocking all bots may protect content but destroy discoverability. Allowing all bots preserves traffic but weakens licensing leverage. The future lies in selective, policy-aware access: search indexing, citation, snippets, RAG, AI training and commercial resale may need different permissions.
The fifth lesson is that evidence matters. BoligPortal’s case was weakened by uncertainty around the relevant database, the scope of investment, the extracted data, the timing and wording of the reservation, and whether automated systems would interpret the opt-out. In AI disputes, contemporaneous logs, versioned policies and technical records will matter as much as legal argument.
The final lesson is strategic. Europe is trying to build a middle path between unrestricted scraping and permission-only AI development. That middle path depends on a functioning opt-out mechanism. If opt-outs are too informal, AI developers face impossible legal uncertainty. If opt-outs are too technical, many rights holders cannot realistically use them. The Danish ruling pushes Europe toward technical clarity, but it also exposes the deeper infrastructure gap: copyright law has created a machine-readable rights system before the market has fully built one.
For publishers, the practical takeaway is blunt: do not assume that a website policy is enough. Convert rights into protocols. Convert protocols into logs. Convert logs into licensing leverage. And do it before the next scraper, crawler or AI agent arrives.
Sources and notes
Ruling: Østre Landsret, ReData A/S v BoligPortal A/S, 12 May 2026. The ruling records the first-instance injunction, the scraping/API facts, the parties’ dispute over HTML/natural-language reservations, the Court of Appeal’s finding that a reservation must be machine-interpretable rather than merely detectable, and the resulting reversal of the injunction and costs order.
For the German comparison, see summaries of the Hamburg Higher Regional Court’s 10 December 2025 decision in Kneschke v LAION, including its findings that pre-processing can fall within TDM and that a natural-language opt-out was insufficiently machine-readable.
For the Dutch comparison, see discussion of the Amsterdam District Court’s HowardsHome decision, where the court held that the publishers had not proved an appropriate machine-readable reservation under the Dutch implementation of Article 4 DSM.
For the AI Act and implementation context, see Article 53’s obligation for GPAI providers to identify and comply with Article 4(3) rights reservations, and the European Commission’s consultation on protocols for TDM rights reservations under the AI Act and GPAI Code of Practice.
For technical remedies, see the W3C Community Group’s TDM Reservation Protocol, which defines a web protocol for expressing TDM rights reservations and discovering licensing policies.
