ChatGPT analyzes Marco Bassini's views: If you declare GenAI output “free speech,” you don’t just protect democracy against censorship. You also create an all-purpose deregulatory weapon...
...and an accountability escape hatch, while granting quasi-person status to systems that cannot bear moral responsibility.
Speech Without a Speaker Is Not Speech: Why GenAI Output Can’t Be “Free Speech”
by ChatGPT-5.2
Marco Bassini’s interview frames a genuinely important constitutional anxiety: if governments can force generative AI systems to suppress certain topics (Tiananmen is his illustrative example), then—so the argument goes—treating LLM outputs as not “free speech” would invite censorship of a new, powerful communication channel. The worry is understandable. But Bassini’s conclusion—that GenAI output itself can be “free speech”—rests on a category mistake with dangerous downstream effects.
Generative AI does not “speak” in the constitutional sense. It produces strings of text (or images) through statistical inference inside a product designed, deployed, and governed by human institutions. Whatever free-speech protections are relevant here attach to people (and, in some jurisdictions, certain human associations like corporations and media organizations), not to autonomous computational systems. Trying to relocate the right from the human sphere into the model’s outputs confuses speech with signal, speaker with instrument, and constitutional liberty with technical behavior.
1) Free speech protects speakers, not output-generators
Free speech law—whether you think in terms of the First Amendment, the EU Charter, or Article 10 ECHR—protects agents who can hold opinions, form intentions, and participate in public reason. The right exists because democracies treat human beings (and sometimes human associations) as moral and political subjects.
An LLM is not a subject. It has no beliefs, no “view,” no stake in democratic self-government, no dignity, no conscience, no inner forum. It cannot be wronged in the way rights-holders can be wronged. It cannot be intimidated, silenced, ostracized, jailed, or coerced—at least not in the sense constitutional speech doctrine is designed to prevent.
Once you accept that free speech is fundamentally about protecting persons against state coercion in matters of expression and conscience, the conclusion follows: GenAI output is not itself a rights-bearing act of expression. It is an artifact generated by a tool.
2) Rights imply duties and accountability—AI has neither
A simple reality check: rights are not just entitlements; they sit inside a system of reciprocal responsibility. A speaker can be praised, condemned, sued, sanctioned, or—within limits—punished. Speech doctrine is tightly coupled to doctrines of responsibility (defamation, incitement, fraud, contempt, professional duties, electoral law, consumer protection, national security controls, etc.).
An LLM cannot meaningfully carry duties:
- It cannot be deterred by punishment.
- It cannot testify, form intent, or be cross-examined.
- It cannot repay damages, be imprisoned, or be morally blamed.
- It cannot internalize norms, feel remorse, or change behavior out of ethical recognition.
If you grant “speech” status to AI output in a way that mimics the rights of speakers, you create an accountability vacuum: the “speaker” is immune by design, and the humans behind it can launder responsibility through the machine. That is not a minor doctrinal glitch—it is a blueprint for rights without responsibility, the very thing constitutional systems try to avoid.
3) “AI speech” would become an evasion engine for regulation
Bassini’s worry about censorship implicitly assumes a binary: either AI output is protected “speech,” or states can arbitrarily control it. But modern constitutional systems rarely work that way. They regulate activities and products all the time—broadcast licensing, telecom rules, consumer law, product safety, medical device regulation, election integrity regimes—without pretending the regulated artifact is a rights-bearer.
Treating GenAI outputs as “free speech” invites a perverse legal strategy: turn any regulated conduct into “speech” by routing it through a generative model. Fraud becomes “speech.” Automated harassment becomes “speech.” Scalable defamation becomes “speech.” Political microtargeting at industrial scale becomes “speech.”
This isn’t theoretical. If an entity can claim constitutional protection for machine-generated output as such, then the fastest path to immunizing harmful conduct is to automate it and declare it expressive. Democracies already struggle with platform-scale amplification; “AI speech” doctrine would be jet fuel.
4) The better framing: protect human speech through GenAI, not GenAI as a speaker
If the true concern is state manipulation of what citizens can access, discuss, or publish, you do not need to personify the model. You protect:
- the user’s right to seek and impart information,
- the publisher’s right to distribute lawful material,
- the developer’s rights around code, editorial discretion, and product design (where applicable),
- and the public’s interest in pluralism and access.
That approach keeps the constitutional “speaker” where it belongs (in human/legal-person actors) while still enabling robust scrutiny of state censorship attempts—without granting the model metaphysical status it does not deserve.
This is how we treated earlier technologies. The printing press did not get rights; printers and authors did. The internet did not get rights; speakers and publishers did. Search engines did not get rights; companies and users argued about lawful indexing, ranking, and access. The tool can be politically critical without being a rights-holder.
5) Copyright law already locates authorship in human agency
In copyright, courts and regulators have repeatedly treated “authorship” as a human category—at minimum requiring human creative control. That isn’t identical to free speech doctrine, but it reveals a deep legal instinct: we locate expressive responsibility and entitlement in human agency.
When an AI system produces content autonomously, law tends to treat it as not authored by the machine—precisely because the machine lacks the kind of agency that turns output into protected creative labor with moral and legal claims attached.
This matters for Bassini’s thesis: if the system is not an author, not a rights-holder, not a bearer of moral agency, then calling its outputs “free speech” smuggles personhood in through the back door.
6) “Forgetting” is not automatically censorship; it can be remediation of illegality
Bassini highlights the “foundational” difficulty of removing information once embedded in model parameters and asks whether editing training data or models opens the door to censorship. The question is important, but the framing slips too quickly into a rhetoric where compliance and remediation look like speech suppression.
There is a difference between:
- censorship (state coercion to suppress lawful viewpoints), and
- remediation (technical/legal measures to stop repeating unlawful personal data, confidential information, defamation, or infringing memorized passages).
The latter is not some aberration; it is normal rule-of-law hygiene. A system that repeatedly outputs someone’s personal data, or reproduces protected text verbatim, is not “expressing itself.” It is malfunctioning relative to the legal constraints society legitimately imposes on publishers, processors, and distributors.
The hard questions are procedural and institutional: Who can request removal? Under what evidentiary standard? With what transparency? With what appeal rights? How do we prevent abuse by states or powerful private actors? Those are governance questions—not reasons to grant the model a right to “speak.”
7) Bassini’s Tiananmen example proves the opposite of what it intends
The Tiananmen scenario is meant to show why AI output should be treated as speech: to resist authoritarian filtering. But if China forces a provider to block Tiananmen information, the primary harm is to human rights of users and publishers—the public’s access to historical truth and the ability of people to discuss politics freely. That is precisely why you should anchor the analysis in human rights, not machine rights.
Calling the model’s output “free speech” does not add protection; it adds confusion. It shifts focus away from the human victims (citizens deprived of information; dissidents surveilled; researchers constrained) and toward an abstraction (“the model’s speech”) that cannot be oppressed in the first place.
8) The rest of the interview: good instincts, but a recurring conceptual wobble
The interview is strongest when it stays concrete: memorization, “stored vs generated” misunderstandings, the Schrems-type problem of correcting personal data, and the operational challenge of enforcing rights when information is embedded in weights. That is indeed foundational.
Where it wobbles is in repeatedly sliding from “LLMs create hard enforcement problems” to “therefore the outputs should be treated like constitutional speech.” Those are different issues. You can accept every technical difficulty he identifies and still reject the free-speech conclusion.
There is also a subtle policy tension in the interview’s later parts: a warning that detailed requirements (notably in the AI Act) might undermine innovation, paired with an insistence that there is no innovation-versus-rights tradeoff. Both instincts contain truth, but they need reconciliation. In practice, compliance burdens do shape market structure and can advantage incumbents. The remedy is not to dilute rights into slogans—it is to design enforcement that is proportionate, testable, and institutionally realistic.
Conclusion: don’t constitutionalize the tool; constitutionalize the human stakes
If you declare GenAI output “free speech,” you don’t just protect democracy against censorship. You also create an all-purpose deregulatory weapon and an accountability escape hatch, while granting quasi-person status to systems that cannot bear moral responsibility.
The defensible position is stricter and cleaner:
- AI is not a rights-holder.
- AI output is not “speech” in the way constitutional rights protect speakers.
- The relevant rights belong to humans and human institutions—users, authors, publishers, journalists, researchers, developers—whose interests can be harmed by both censorship and unaccountable automated amplification.
- “Forgetting” and model editing are governance problems, not proof that models deserve expressive rights.
A democracy should absolutely fear state control over information channels. But the answer is not to pretend the channel is a citizen.
Sources:
Tilburg University interview: “Fundamental rights protection in GenAI: ‘The problems we may still have are foundational’” (Marco Bassini), published Feb 19, 2026.
U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (Policy Statement; Mar. 16, 2023).
Thaler v. Perlmutter, U.S. Court of Appeals for the D.C. Circuit (Decided Mar. 18, 2025) (human authorship requirement; AI not an author).
European Convention on Human Rights, Article 10 (Freedom of expression) (text and overview references).
European Court of Human Rights, Guide on Article 10 – Freedom of expression (case-law guide; last update shown 31.08.2022).
Federal Election Commission, case resource page: Citizens United v. FEC (summary of holding and procedural posture).
