Why the EU General Purpose AI Code Must Guarantee Public Transparency and the Right to Information
by ChatGPT-4o
The open letter published by civil society organizations and experts, "NO GPAI CODE WITHOUT RIGHT TO INFORMATION & TRANSPARENCY", is a compelling and necessary call to action. The authors raise significant legal, ethical, and democratic concerns about the removal of public transparency provisions from the EU General Purpose AI (GPAI) Code of Practice. I strongly agree with the authors' position and believe their call should be heeded, not only for the sake of legal coherence but also to preserve the integrity, trustworthiness, and democratic legitimacy of AI governance in the European Union and beyond.
I. The Legal and Democratic Basis for Public Transparency
The authors correctly assert that public transparency is not a discretionary feature of AI governance—it is a legal and democratic imperative. The AI Act itself, in Article 1, commits to the promotion of “human-centric and trustworthy AI” and the protection of “fundamental rights enshrined in the Charter.” Transparency is integral to both.
Moreover, the foundational treaties of the European Union, particularly Article 169 of the Treaty on the Functioning of the European Union (TFEU), affirm the right to information. This right cannot be set aside for technologies like GPAI, whose impacts on individuals and communities are systemic and often opaque. The letter also rightly references the European Court of Justice, which has consistently held that access to information is a necessary precondition for exercising other rights, including the right to an effective remedy.
In this context, deleting transparency provisions from the GPAI Code undermines both the letter and spirit of EU law.
II. Transparency Is Essential for Trustworthy and Accountable AI
The removal of transparency commitments contradicts established EU AI ethics guidelines, notably those published by the High-Level Expert Group on Artificial Intelligence. These guidelines view transparency as one of the seven essential requirements for trustworthy AI, alongside human agency, privacy, and accountability.
Public access to Safety and Security Frameworks (SSFs) and Model Reports is indispensable for several reasons:
Accountability: Without access to the documentation, affected individuals, civil society, journalists, or researchers cannot verify claims of safety, fairness, or compliance.
Collective oversight: Public review allows external experts to scrutinize risk mitigation strategies, flag potential gaps, and help improve safety proactively.
Informed consent: Citizens must understand the risks they are subjected to, especially with systems that could affect public health, education, employment, law enforcement, or democratic participation.
The principle of proportionality, which the authors deploy effectively, reinforces the case. Since providers already produce SSFs for regulatory review, making them available to the public imposes a negligible additional burden, especially if access is granted "on request." The cost-benefit balance overwhelmingly favors transparency.
III. AI Literacy Requires Access to Information
The authors’ argument about AI literacy is especially powerful. The EU AI Act emphasizes AI literacy (Article 95(2)(c) and Article 3(56)), defining it as knowledge and skills that empower individuals to understand and assess the implications of AI systems.
However, AI literacy cannot exist in a vacuum. It requires meaningful access to real-world examples, data, and reports. Citizens cannot develop literacy from vague policy promises or press releases—they need access to concrete documentation about the risks and safeguards associated with deployed AI systems.
Making risk assessments and safety frameworks publicly available would create a virtuous cycle: it strengthens public knowledge, improves democratic oversight, and pressures developers to adhere to higher safety standards.
IV. Systemic Risks Demand Systemic Transparency
The letter highlights that GPAI systems pose systemic risks, which by their nature cannot be contained within private actors or addressed through internal audits alone. They affect society at large, often in unpredictable ways.
The EU must not create a regulatory blind spot where only the AI Office or Board has visibility into systemic risks.
Public transparency is a risk mitigation strategy in itself: it decentralizes oversight, allows for faster detection of harms, and enables informed public debate.
Opacity invites abuse and negligence. From biased recruitment tools to AI-generated misinformation, we’ve seen time and again how hidden systems perpetuate harm until whistleblowers or independent researchers uncover them.
The authors are right to emphasize that there is no less restrictive and equally effective alternative to public transparency. Confidential internal reporting is not a substitute for democratic accountability.
V. Additional Reasons to Uphold Transparency Provisions
Beyond what the letter states, there are additional compelling reasons why public transparency must be retained in the GPAI Code:
1. Strengthening Global Leadership and Soft Power
The EU prides itself on being a leader in ethical technology regulation. If it compromises on transparency in its flagship AI initiative, it risks losing credibility and moral authority in international negotiations, including with the U.S., China, and the Global South.
2. Preventing Regulatory Capture
Keeping documentation private creates fertile ground for lobbying and behind-the-scenes influence. Public disclosure creates checks and balances that help prevent industry capture of the regulatory process.
3. Resilience Against Disinformation
In an era of synthetic content and AI-driven propaganda, transparency is a defense mechanism. It helps society distinguish between legitimate and manipulated AI systems and fosters resilience against AI-driven deception.
4. Fostering a Competitive, Fair AI Ecosystem
If only dominant players have access to internal safety practices and risk mitigation protocols, smaller competitors and open-source projects are at a disadvantage. Public transparency levels the playing field and facilitates innovation aligned with the public interest.
Conclusion and Recommendations
The authors of the open letter are right to demand the reinstatement of transparency provisions in the EU General Purpose AI Code of Practice. Their argument is well grounded in law and ethics, and it answers a wider imperative: aligning the GPAI Code with the values of democracy, fairness, and accountability.
Recommendations:
Reinstate Commitment 21 or II.16 in the final version of the Code, ensuring that public transparency is explicitly protected and operationalized.
Mandate “on-request” availability of safety documentation as a legal minimum, with clear processes and timelines for citizen access.
Encourage independent audits and public commentary on SSFs and Model Reports to ensure broad-based engagement and trust.
Ensure strong enforcement and redress mechanisms if transparency rights are denied.
Extend this model to other emerging AI governance frameworks—especially for frontier models, law enforcement systems, and algorithmic decision-making in public services.
The EU now faces a critical choice: to prioritize regulatory convenience or to uphold the foundational values it claims to champion. There should be no GPAI Code without the right to information and transparency—because without it, there is no trustworthy AI.
