• Pascal's Chatbot Q&As

Ready to sacrifice GDPR on the altar of the AI God? 1. Narrowing the definition of “personal data.” 2. Restricting data subject rights to “data-protection purposes” only.

3. Allowing AI companies to train models on Europeans’ personal data under a broad “legitimate interest.” 4. Providing a “wildcard” legal basis for any AI system’s operation. And much more...


The Digital Omnibus Draft and the Future of EU Privacy

by ChatGPT-5

The leaked internal drafts of the European Commission’s “Digital Omnibus” reform mark a profound and unexpected shift in the European Union’s data-protection landscape. According to the analysis of civil-society watchdog noyb, the proposal would fundamentally alter key GDPR principles—ranging from the definition of personal data to the scope of data-subject rights, the treatment of sensitive data, and the legality of AI training on large-scale personal information. Many of these changes appear to be motivated by a narrow focus on enabling AI development at scale, while sidelining long-established European constitutional guarantees.

Based on the documents provided, this essay examines the substance of the proposed reforms, evaluates whether the perspectives expressed in the attached analyses are corroborated, and concludes with a set of concrete recommendations for EU regulators.

1. Summary of the Proposed Reforms

The Commission’s draft (developed under DG CONNECT, led by Executive Vice President Henna Virkkunen) proposes sweeping changes across multiple GDPR provisions. The key elements include:

1. Narrowing the definition of “personal data.”
The draft introduces a subjective interpretation of identifiability: if a company claims it cannot identify an individual, the data is not "personal" for that company and therefore falls outside GDPR scope. This would depart from established case law holding that pseudonymous identifiers and "singling out" operations fall within the GDPR's protective ambit.

2. Restricting data subject rights to “data-protection purposes” only.
This change would allow controllers to reject access, deletion, or rectification requests if individuals use them to support legal claims, workplace grievances, investigative journalism, or financial correction. This directly contradicts CJEU holdings that individuals may use these rights “for any purpose,” including litigation and evidence-gathering.

3. Allowing AI companies to train models on Europeans’ personal data under a broad “legitimate interest.”
The proposal modifies Articles 6 and 9 to effectively authorise AI training on personal data (including data scraped from the internet), with only a theoretical “right to object” as a safeguard—a right that cannot realistically be exercised in large-scale AI training contexts.

4. Providing a “wildcard” legal basis for any AI system’s operation.
Data processing performed via an AI system could be justified under legitimate interest, while non-AI processing remains strictly evaluated. This contradicts the technology-neutral intent of the GDPR.

5. Narrowing the scope of sensitive data.
Sensitive data categories would only be protected when “directly revealed,” excluding inferred data—despite established jurisprudence and Convention 108 requiring protection for data that reveals sensitive attributes even indirectly.

6. Permitting remote access to personal data on devices.
Through the reinterpretation of ePrivacy Article 5(3), the proposal would enable multiple legal bases for accessing terminal equipment, creating conditions under which data could be pulled from smartphones or PCs without meaningful consent.

2. Can these perspectives be confirmed?

The concerns raised by noyb are grounded in a detailed, text-based review of the Commission’s leaked internal drafts, supported by references to established CJEU jurisprudence and the Charter of Fundamental Rights. Their analysis reflects:

A. Accuracy of legal diagnosis
The organisation’s Version 2 overview, as described in the attached correspondence, draws on a structured mapping of multiple GDPR articles, cross-referenced with Charter Article 8 and Convention 108.

The critique that many provisions conflict with settled case law is consistent with decades of CJEU decisions on broad definitions of personal data, sensitive data, and data-subject rights.

B. Evidence of fast-tracked, low-quality drafting
The claim that Commission units had only five working days to comment on a 180-page proposal and that the draft contains internal contradictions appears credible.

C. Tunnel vision on AI
Both documents emphasise that the reform seems driven by the perceived need to “win” the global AI race. The expanded legal bases for AI training and AI operation support this interpretation.

D. Minimal benefit for SMEs
The drafts primarily benefit large-scale AI developers such as Google, Meta, OpenAI, and Microsoft, while SMEs gain little beyond marginal DPIA clarifications.

E. Democratic and procedural concerns
The use of an Omnibus “fast-track” instrument, bypassing normal impact assessments and inter-service verification, is described as extraordinary and inconsistent with principles of evidence-based EU law-making.

Given the primary source material and cross-referencing against established legal standards, the perspectives expressed can be confirmed as grounded and credible.

3. Implications for Fundamental Rights

If adopted, these reforms would:

  • Erode the protective scope of GDPR by allowing subjective interpretations of identifiability.

  • Undermine equality of arms in labour, civil, and consumer rights disputes by restricting access rights.

  • Expose minorities, vulnerable groups, and individuals with medical or political sensitivities to profiling and discrimination.

  • Create large-scale, unregulated pipelines of personal data for AI training across the globe.

  • Enable Big Tech to consolidate power while EU-based SMEs and publishers (including scholarly publishers) face intensified competitive disadvantage.

  • Generate legal uncertainty likely to result in years of litigation before the CJEU.

4. Recommendations for EU Regulators

1. Suspend the GDPR-related elements of the Digital Omnibus draft.
A reform of such magnitude must not proceed via a fast-track Omnibus instrument. It requires full-scale impact assessments, public consultations, and inter-institutional scrutiny.

2. Reaffirm technology neutrality.
AI must not become a privileged category that bypasses legal obligations. Processing should be evaluated based on risk, not on the label “AI system.”

3. Maintain the broad definition of personal data and inferred sensitive data.
This is necessary to ensure continuity with Charter Article 8, Convention 108, OECD principles, and global best practices.

4. Protect data-subject rights without purpose-based limitations.
Access, rectification, and deletion must remain tools available for any legitimate interest, including litigation, journalism, labour claims, and consumer protection.

5. Prohibit remote device access without explicit, informed, opt-in consent.
Any reform should strengthen—not weaken—protections against clandestine device-level data extraction.

6. Develop AI-specific rules outside the GDPR.
Where legitimate interests conflict with fundamental rights, regulators should use the AI Act framework, sector-specific regulation, and standards-based approaches rather than weakening the GDPR’s constitutional foundations.

7. Preserve democratic legitimacy and procedural integrity.
Return to the scheduled Digital Fitness Check in 2026 and ensure any revisions are evidence-based, thoroughly vetted, and compliant with EU constitutional principles.

Conclusion

The proposed Digital Omnibus revisions represent a significant deviation from the GDPR’s original vision as a bedrock of European fundamental rights. The leaked drafts would not modernise data protection but instead destabilise it—weakening rights, eroding trust, enabling intrusive AI practices, and generating unprecedented legal uncertainty. To safeguard Europe’s constitutional commitments and its strategic autonomy, EU regulators must halt the current direction of reform, re-establish transparent and evidence-based policymaking, and ensure that artificial intelligence is integrated into the European legal order without compromising fundamental rights.