The Mekic v. X Corp Case – Moderation Transparency, Free Speech, and Regulatory Inaction
by ChatGPT-4o
I. Introduction: A Digital David vs. Goliath
The ongoing legal battle between Dutch privacy advocate and entrepreneur Danny Mekic and X Corp (formerly Twitter) represents more than a personal data access dispute: it embodies a clash between Big Tech opacity and public demands for algorithmic transparency. With X going to great lengths to resist Mekic's data-access requests and to restrict his freedom of expression, the case is a microcosm of broader tensions: corporate secrecy vs. democratic oversight, algorithmic power vs. individual rights, and platform control vs. journalistic freedom. The stakes of this legal confrontation are high for citizens, courts, regulators, and democracies alike.
II. Background: The Fight for Transparency
Danny Mekic filed a request under Articles 15 and 22 of the GDPR (General Data Protection Regulation), seeking access to the data X had stored about him, particularly regarding the automated moderation decisions that affected his account. X responded with hundreds of heavily redacted pages from its moderation system, known as Guano, citing business confidentiality and platform security as grounds for withholding information.
In essence, Mekic was “shadowbanned” by X—his communications were limited without explanation. He demanded to know why, citing GDPR rights to personal data and to transparency about automated decision-making. When X refused, the courts intervened.
The Amsterdam District Court ruled in July 2024 that X must provide more information and imposed a €4,000 daily penalty for non-compliance. Rather than comply, X appealed, backed by elite law firm Brinkhof. Its central demand? A mededelingenverbod (gag order) preventing Mekic from speaking about the unredacted documents.
The Gerechtshof Amsterdam (Court of Appeal) rejected this request in October 2025. It ordered X to submit an unredacted version of the moderation file to the court (not to Mekic), and to provide a general explanation of key terms used in the system, which Mekic is allowed to access and discuss publicly.
III. Why This Matters
A. Freedom of Expression and the Chilling Effect
The attempted gag order against Mekic drew immediate concerns from journalists and press freedom organizations such as NVJ and the Dutch Press Freedom Fund. If granted, it would have set a precedent allowing tech giants to silence individuals under the pretense of protecting business secrets—even when those individuals are exercising fundamental rights.
Had X succeeded, it would have created a chilling effect—deterring others from speaking out or researching platform governance out of fear of litigation. In a democracy, this would constitute a dangerous erosion of both privacy rights and journalistic oversight.
B. Transparency of Automated Moderation Systems
At the heart of the case is Guano, X’s algorithmic moderation system that applies reputation scores, labels, and filters to user content. Mekic claims the system made automated decisions about him without any meaningful human oversight—qualifying the actions as “automated decision-making” under GDPR Article 22, which demands transparency.
X's argument that the system is not fully automated because its parameters are designed by humans has been dismissed by academics such as Prof. Gerrit-Jan Zwenne. As he correctly notes, the fact that humans built the tool does not exempt its automated outcomes from transparency obligations.
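To see why "humans designed the parameters" does not make per-user decisions non-automated, consider a minimal sketch. This is a hypothetical illustration, not X's actual Guano system: the function names, fields, and thresholds are invented for the example. The point is that even though humans wrote the rules, every individual account is restricted by code alone, with no human reviewing the specific case, which is exactly the kind of decision GDPR Article 22 addresses.

```python
# Hypothetical sketch of a rule-based visibility filter (NOT X's real system).
# Humans chose the rules and thresholds once; the per-account decisions
# are then made entirely automatically, with no case-by-case human review.

def visibility_decision(account: dict) -> str:
    """Return a moderation action for a single account, decided by code alone."""
    score = account["reputation"]           # e.g. derived from reports/engagement
    labels = set(account.get("labels", []))

    if "spam_suspect" in labels or score < 0.2:
        return "hide_from_search"           # a "shadowban"-style restriction
    if score < 0.5:
        return "downrank_replies"
    return "no_action"

# Each call decides an individual case without human intervention,
# even though humans designed the rules and picked the thresholds.
print(visibility_decision({"reputation": 0.15, "labels": []}))  # hide_from_search
print(visibility_decision({"reputation": 0.90, "labels": []}))  # no_action
```

Under Article 22's logic, the user subjected to `hide_from_search` has a legitimate interest in learning which rule fired and why, regardless of the fact that a human once authored that rule.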
Transparency into content moderation systems—especially when they restrict lawful speech—is crucial for upholding human rights in the digital age. Without it, platforms effectively act as unaccountable censors.
C. Business Secrecy as a Shield
X's core defense rests on the protection of trade secrets and on claimed "safety risks" of disclosure. The company argues that releasing this data, even partially, could empower bad actors to circumvent moderation, damage ad revenue, and compromise platform integrity.
But critics argue that this is a red herring: a way to avoid scrutiny of potentially arbitrary or discriminatory moderation practices, or even the lack thereof. Given X’s political and reputational challenges in recent years, the company may have a vested interest in keeping internal workings opaque.
IV. Why X Is Fighting So Hard
There are multiple reasons why X is deploying an army of lawyers and exhausting appeal options:
Precedent Risk: A court ruling that compels disclosure of moderation mechanisms under GDPR could open the floodgates. Thousands of users in the EU may start requesting similar insight into shadowbanning, content takedowns, and algorithmic ranking.
Exposure of Bias or Ineffectiveness: Full access to Guano could reveal that moderation decisions are inconsistent, politically biased, or not as effective as claimed. This would damage both public trust and legal standing.
Ad Revenue and Investor Sensitivity: Advertisers don’t want to be associated with platforms perceived as toxic or unsafe. Disclosures that reveal X cannot effectively moderate hate speech or illegal content would undermine monetization.
Strategic Litigation (SLAPP Tactics): There’s an argument to be made that X is engaging in a form of SLAPP (Strategic Lawsuit Against Public Participation), using legal threats not to win, but to exhaust and intimidate critics like Mekic into silence.
V. What Governments and Regulators Should Do
This case underscores urgent gaps in regulatory enforcement and platform accountability. To prevent similar abuses and promote democratic oversight, governments and regulators must:
1. Enforce the GDPR’s Article 22 Provisions
Regulators must clarify and enforce rules around automated decision-making, compelling platforms to explain the logic behind shadowbans, content suppression, or recommendation algorithms—especially when rights are infringed.
2. Limit Abusive Trade Secret Claims
Policymakers should tighten definitions in laws like the Wet Bescherming Bedrijfsgeheimen (Trade Secrets Act) so they are not misused to hide algorithmic decision-making that affects fundamental rights. Transparency must outweigh secrecy in such contexts.
3. Protect Whistleblowers and Public Interest Researchers
Legal protections must be expanded to cover individuals like Mekic, who pursue access to their own data in good faith. Civil society researchers should not face legal intimidation for exposing algorithmic injustice.
4. Accelerate Anti-SLAPP Legislation
The EU adopted an anti-SLAPP directive in 2024 (Directive (EU) 2024/1069), which member states must now transpose into national law. This case illustrates why: courts need tools to rapidly dismiss lawsuits whose only aim is to silence critics through costly litigation.
5. Mandate Algorithmic Transparency for Large Platforms
The EU Digital Services Act (DSA) already requires large platforms to be more transparent about content moderation. National regulators must rigorously enforce this, especially when platforms stall or conceal key mechanisms.
VI. Conclusion: A Test Case for Platform Accountability
The case of Danny Mekic vs. X Corp is not just a local legal skirmish—it is a test case for platform accountability in Europe and beyond. It shows how fragile user rights are when confronted with the full legal arsenal of Big Tech. It also demonstrates that even individual citizens—armed with the law, determination, and public support—can push back.
Ultimately, this case reinforces that data rights and algorithmic transparency are not just abstract principles. They are vital tools to ensure that speech is not silenced by secret code, and that power—algorithmic or human—is exercised within the bounds of democratic norms.
