The U.S. Supreme Court’s decision to deny certiorari in Thaler v. Perlmutter lands like a deceptively quiet thud: no sweeping opinion, no doctrinal fireworks—just the sound of the status quo locking into place. In the United States, a work that is truly “AI-only” (i.e., created without a direct, traditional human authorial contribution) remains outside copyright.
by ChatGPT-5.2
The U.S. Supreme Court’s decision to deny certiorari in Thaler v. Perlmutter lands like a deceptively quiet thud: no sweeping opinion, no doctrinal fireworks—just the sound of the status quo locking into place. The immediate consequence is simple to state: in the United States, a work that is truly “AI-only” (i.e., created without a direct, traditional human authorial contribution) remains outside copyright. That’s not a philosophical preference; it’s now the practical operating environment for creators, publishers, platforms, and model builders.
And yet the case matters beyond its narrow posture. It’s a stress test for how copyright is supposed to function when “making” becomes a collaboration between humans and machines—or when the machine, at least formally, is positioned as the maker. The denial doesn’t settle the broader questions. It merely postpones them, while the market accelerates.
Below is what the current situation means—who it helps, who it hurts, what an “ideal” equilibrium could look like, and what comes next.
The Current Situation in Plain Terms
As framed in the Thaler dispute, the question was blunt: can a work produced by an AI system—without a direct human author—be copyrighted? Thaler’s position effectively asked the courts (and, before them, the Copyright Office) to recognize copyright in an output where the human role is reduced to building/owning/operating the machine, not shaping the expressive result in the way copyright traditionally expects.
The Supreme Court’s denial leaves in place the lower-court approach and the Copyright Office’s posture: copyright attaches where human authorship is meaningfully present; it does not attach to purely machine-generated expression.
That’s a bright-ish line. Not a perfect one. But it’s the one we have.
Pros and Cons for Rights Owners (Creators, Publishers, Studios, Labels, News & Scholarly Producers)
Pros
1) A clearer moat around “human creativity” as protectable value.
For rights owners, the human-authorship requirement preserves the moral and economic intuition that copyright is meant to incentivize human creative labor—not merely capital investment in automated production.
2) Less risk of “infinite enclosure” by industrial-scale AI output.
If AI-only outputs were copyrightable, the world could quickly fill with proprietary machine-made content—owned by whoever has compute and distribution—raising the risk of new monopolies built on volume rather than originality.
3) A defensive shield against adversarial claims.
Rights owners facing mass-generative plagiarism and style mimicry can point to a system that is at least skeptical about granting protection to outputs whose provenance is unclear and whose “authorship” may be strategically asserted.
4) A bargaining chip in licensing negotiations.
If AI-only outputs don’t get copyright, some developers and customers may have stronger incentives to seek lawful access to high-quality human-authored content (because downstream exclusivity in outputs is harder to claim).
Cons
1) The boundary problem: prompt engineering and “how much human is enough?”
When the line is “human authorship,” fights shift to measurement: Is a prompt creative authorship? Is selecting the best of 200 outputs creative authorship? Is iterative inpainting? Curation? Editing? In practice, this can turn into costly uncertainty and inconsistent outcomes.
2) Incentives for secrecy rather than transparency.
If disclosing AI assistance risks losing rights (or inviting scrutiny), some creators and companies may under-disclose AI use. That is bad for provenance, trust, and clean licensing markets.
3) Competitive disadvantage for rights owners who want to innovate responsibly.
Publishers and studios trying to build compliant creative pipelines may find themselves disadvantaged relative to actors who gamble—because the compliance path is slower, more documented, and more exposed to challenge.
Pros and Cons for AI Developers (Model Builders, Tool Providers, and AI-First Creative Platforms)
Pros
1) Lower IP “output liability” in some workflows.
If AI-only outputs are not copyrightable, some categories of output become harder for anyone to lock up exclusively—which can reduce certain downstream rights disputes between competing generators (though it doesn’t eliminate infringement risks against training inputs).
2) More space for open ecosystems and commoditized generation.
Developers can argue that AI generation is closer to a general-purpose capability—where value is in service, workflow, distribution, or personalization, not exclusive rights in each output.
3) Clarity for product messaging: “humans own what humans add.”
Toolmakers can position their products around augmenting human authorship—supporting the idea that AI is an instrument and the human remains the author when they genuinely shape expression.
Cons
1) Reduced commercial value of AI-only output.
Many customers want exclusivity. If the output can’t be copyrighted (or is less likely to be), that undermines some business models—especially in media, stock imagery, advertising, and synthetic content pipelines.
2) More pressure to engineer “authorship theater.”
A system that rewards human authorship can incentivize performative rituals: minimal “human touches” added primarily to qualify for protection, not to improve quality. That creates compliance noise and weakens trust.
3) Legal fragmentation and product risk.
Developers operating globally face diverging rules and cultural expectations. If U.S. doctrine stays strict while other jurisdictions evolve differently, multi-market product design becomes harder—and litigation becomes more likely.
4) The bigger unresolved problem remains training and provenance.
Even if outputs are unprotected, training data disputes don’t disappear. Developers still face pressure to prove lawful access, respect opt-outs/rights reservations, and manage attribution and compensation expectations.
What the “Ideal” Situation Should Look Like (ChatGPT’s View)
The best outcome is not “copyright for everything” or “copyright for nothing.” It’s a stable, auditable system that rewards genuine human creativity, deters laundering and mass appropriation, and enables legitimate AI innovation without building a black-box content economy.
Here’s what that ideal looks like in practice:
1) A workable, legible standard for human contribution
Not “magic words” like “traditional elements of authorship” that invite interpretive battles, but a clear evidentiary framework:
What kinds of human decisions count as expressive authorship (composition, arrangement, editing, transformation)?
What kinds generally do not (single prompt submission, purely mechanical selection, fully automated batch generation)?
What documentation can support a claim (version history, layered files, prompt-and-edit logs, provenance metadata)?
The point isn’t to police creativity; it’s to stop a world where the only reliable strategy is to litigate after the fact. A minimal sketch of what such documentation could look like follows.
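By way of illustration only: here is one hedged sketch, in Python, of a prompt-and-edit log paired with provenance metadata. The record format, the field names, and the set of actions treated as “expressive” are assumptions made for this sketch, not an existing registration standard or a Copyright Office requirement.

```python
# Sketch of a prompt-and-edit log. Field names and event types are
# illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LogEvent:
    actor: str   # "human" or "model"
    action: str  # e.g. "prompt", "generate", "edit", "arrange", "select"
    detail: str  # short description of the decision made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ProvenanceLog:
    work_id: str
    events: list[LogEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> None:
        self.events.append(LogEvent(actor, action, detail))

    def human_expressive_events(self) -> list[LogEvent]:
        # Which actions "count" as expressive is exactly the line-drawing
        # question above; this set is an assumption for illustration only.
        expressive = {"edit", "arrange", "compose", "rewrite"}
        return [e for e in self.events
                if e.actor == "human" and e.action in expressive]

log = ProvenanceLog(work_id="draft-001")
log.record("human", "prompt", "initial concept and framing")
log.record("model", "generate", "first-pass draft from the prompt")
log.record("human", "edit", "restructured the argument; rewrote two sections")
log.record("human", "arrange", "selected and ordered three of eight variants")
print(len(log.human_expressive_events()), "documented human expressive decisions")
```

The value of a record like this is not that it settles the authorship question; it is that it gives a registrar, a licensee, or a court something auditable to reason from instead of after-the-fact recollection.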
2) A disclosure-and-provenance norm that doesn’t punish honesty
We should move toward structured disclosure that:
encourages truthful labeling of AI assistance,
preserves protectability when humans genuinely shape the work,
and enables downstream users to understand risk.
Punishing disclosure creates perverse incentives. Rewarding auditable workflows creates a healthier market.
3) A targeted “AI-output right” only if it is narrow, time-limited, and anti-monopoly
If policymakers feel compelled to protect AI-only outputs, the least damaging form would be a sui generis, short-term, registration-based right (think months, not decades), with strong limits:
no protection for outputs that are substantially similar to identifiable protected inputs,
no protection that blocks human creators from making similar works,
and no “volume advantage” that lets industrial generation create a private enclosure of culture.
If the aim is investment incentives, do it without granting a new perpetual machine copyright class.
4) A parallel settlement layer for training data legitimacy
The authorship question is only half the battlefield. The other half is training:
machine-readable rights signals (one minimal sketch appears after this section),
dataset provenance expectations,
standardized licensing rails,
and enforceable audit mechanisms.
Otherwise, we get a cynical loop: content is ingested under ambiguity, outputs are mass-produced, and the “human author” debate becomes a distraction from the upstream appropriation fight.
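As one concrete, and deliberately narrow, illustration of a machine-readable rights signal, here is a sketch of a crawler-side robots.txt check using Python’s standard library. The crawler name is hypothetical, and robots.txt is only one of several signals under discussion (opt-out HTTP headers, metadata standards, dataset manifests), so treat this as a sketch of the idea rather than a compliance recipe.

```python
# Sketch of honoring one kind of machine-readable rights signal:
# a robots.txt check before fetching a page as training data.
from urllib import robotparser

# Hypothetical crawler name; real training crawlers publish their own.
USER_AGENT = "ExampleTrainingBot"

def may_fetch_for_training(page_url: str, robots_url: str) -> bool:
    """Return True if robots.txt does not reserve the page against this agent."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches robots.txt over the network
    return parser.can_fetch(USER_AGENT, page_url)

if __name__ == "__main__":
    allowed = may_fetch_for_training(
        "https://example.com/articles/some-post",
        "https://example.com/robots.txt",
    )
    print("fetch allowed for training:", allowed)
```

The point of the sketch is the asymmetry it exposes: checking the signal is technically trivial; the unresolved policy questions are whether the signal must be honored, and what audit trail proves that it was.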
Predictions: Where This Heads Next
1) Congress becomes the inevitable arena—slowly, messily.
The Supreme Court stepping aside doesn’t eliminate pressure; it relocates it. Expect more proposals that attempt to define authorship thresholds, disclosure rules, or narrow new rights for AI outputs. The fight will be lobby-driven and sectoral.
2) “Authorship” disputes will migrate from AI-only to AI-assisted.
The next wave won’t be clean Thaler-style fact patterns. It will be hybrid creation: prompt + iterative editing + compositing + human curation + model steering. Courts and the Copyright Office will be forced into line-drawing—case by case—unless a clearer administrative standard emerges.
3) Provenance will become a competitive differentiator, not just compliance.
As customers—especially enterprise and institutional buyers—demand lower legal risk, the winners will increasingly be the systems that can show: what data was used, what rights were secured, and how outputs were produced. “Trust stacks” will matter.
4) We’ll see two creative economies running in parallel.
One will be the copyright economy (human-authored, licensable, enforceable, insurable). The other will be the attention economy (mass synthetic content—cheap, ubiquitous, hard to own, and often hard to trace). Businesses will learn to operate across both, but not without conflict.
5) The next Supreme Court “AI copyright” moment likely won’t be about AI-only authorship.
If the Court re-enters, it may be through a different door: liability for training, contributory infringement, fair use boundaries, or a case that forces a national standard on what counts as sufficiently human authorship in AI-assisted creation.
