Pascal's Chatbot Q&As

If a client (or potentially a lawyer) runs facts, theories, timelines, or “talking points for counsel” through a third-party AI tool, a court may treat the resulting materials less like private draft communications and more like ordinary third-party research—discoverable by the other side.

The Day Your “Prep Notes” Became Evidence: When AI-Assisted Client Work Stops Being Privileged

by ChatGPT-5.2

A small procedural moment in one criminal case just sketched a much bigger warning sign for everyone using AI anywhere near legal advice: if a client (or potentially a lawyer) runs facts, theories, timelines, or “talking points for counsel” through a third-party AI tool, a court may treat the resulting materials less like private draft communications and more like ordinary third-party research—discoverable by the other side.

That’s the essence of what’s now reverberating from the Southern District of New York: Judge Jed Rakoff ruled from the bench that certain documents a defendant generated via an AI tool and then shared with his lawyers were not privileged. There’s no written opinion yet, which matters because the boundaries are still fluid. But the government’s argument—now plainly on the record—offers a template other litigants will copy and paste into motions, subpoenas, and discovery letters.

What happened

The government describes a set of “AI Documents” the defendant created before arrest by entering prompts into a commercial AI platform (identified as Claude, by Anthropic). The defendant later shared those materials with defense counsel. Prosecutors asked the court to rule that neither attorney-client privilege nor work-product doctrine protected them.

The government’s logic is direct:

  1. Not a client–lawyer communication. The AI tool is not a lawyer; no lawyer was involved when the documents were created.

  2. Not “for legal advice” in the legally relevant way. Even if the user subjectively intended to prepare for counsel, the interaction was with a non-lawyer system that disclaims legal advice.

  3. Not confidential. The prompts and outputs went to a third-party platform, under terms/policies that may allow retention, training use, or disclosure to authorities.

  4. You can’t “launder” privilege after the fact. Sending pre-existing, non-privileged material to your lawyer doesn’t magically make it privileged.

  5. No work product if counsel didn’t direct it. If the client did it on their own initiative (not at counsel’s behest), it looks like ordinary “internet research,” not protected attorney work product.

In short: the government framed the AI tool as the functional equivalent of “asking a non-lawyer third party,” and framed the platform’s data handling as the functional equivalent of “voluntary disclosure to a third party”—both of which are classic privilege-killers.

Why this is a big deal (even if the facts are narrow)

Even if Rakoff’s bench ruling ends up being read narrowly, it spotlights a structural mismatch: privilege doctrine assumes private, bilateral communications (client ↔ lawyer) and carefully controlled agents (e.g., translators, accountants) operating inside a confidentiality bubble. Many “public” AI tools do not fit that architecture.

And that mismatch is only going to show up more often, because AI has become the default drafting layer for people under stress—exactly the people who most need privilege to be predictable.

The uncomfortable part: the DOJ’s argument is strategically attractive

From an adversary’s perspective, “AI use waives confidentiality” is a clean lever:

  • It’s easy to explain to a judge.

  • It’s easy to apply in discovery.

  • It pressures parties to disclose not just final narratives, but how they formed them.

So even if the doctrine is unsettled, the motion itself becomes a weapon: it gives litigants language to demand prompt logs, AI summaries, AI-generated chronologies, and “issue memos” prepared by or for witnesses—especially in high-stakes matters where contradictions matter.

Potential impacts (practical, not theoretical)

1) A new discovery battleground: “Show us your prompts”

Expect routine fights over:

  • AI-generated timelines, witness prep summaries, and “what happened” narratives

  • prompt + output logs (including revisions)

  • embedded excerpts from confidential documents pasted into prompts

  • metadata: timestamps, version history, and cross-device syncing

These artifacts can matter as much as the documents themselves, because prompts reveal mental framing (“what did you think was important?”) and may contain confessional phrasing people would never put in an email to counsel.

2) Privilege risk shifts from “content” to “process”

Historically, many clients could safely create draft notes for counsel. The emerging risk is that using AI changes the process enough that courts may treat the resulting material as:

  • third-party communication, not private drafting

  • third-party retention, not confidentiality

  • non-lawyer “analysis,” not legal advice preparation

That’s a profound behavioral nudge: it makes “using AI to think” legally different from “writing a messy email to counsel.”

3) Spillover into civil litigation and AI copyright cases

In AI-related civil cases (copyright, trade secrets, defamation, consumer claims), both sides increasingly use AI to:

  • cluster evidence

  • summarize datasets

  • draft interrogatory responses

  • triage productions

  • generate “comparison analyses” (e.g., “does this output match protected text?”)

If courts follow the same instinct, litigants could try to pry open:

  • a plaintiff’s AI-assisted infringement analyses (and the prompts used)

  • a defendant’s AI-assisted risk assessments or response strategies

  • internal AI-generated “what we did and why” narratives

That can distort incentives: parties may avoid using AI for clarity—even when clarity serves truth—because clarity might become discoverable.

4) A compliance and investigations hazard for corporations

Internal investigations often depend on privilege discipline. If employees use consumer AI tools to “organize their thoughts” before speaking to counsel, you can end up with:

  • discoverable shadow memos

  • inadvertent third-party disclosure of sensitive facts

  • conflicting narratives (AI hallucinations or overconfident paraphrase) that later look like intentional misstatements

5) Vendor terms become litigation exhibits

Privacy policies and model-training statements will increasingly be used as factual hooks:

  • “You agreed the provider could retain/train on it.”

  • “You knew it could be disclosed to authorities.”

  • “Therefore you had no reasonable expectation of confidentiality.”

That’s not just a security question; it goes to the legal character of the communication.

What litigants (especially in AI cases) should be aware of

A. The “public tool” vs “controlled tool” distinction will be everything

If you use a consumer AI tool that is publicly accessible and not contractually bound to confidentiality comparable to a litigation vendor, you are handing opponents a privilege attack surface.

The future likely splits into two tracks:

  • High-control environments (enterprise AI with strict no-training, no-retention, audit logs, DPAs, and counsel-controlled access) where parties argue “this is just another secure processor.”

  • Low-control environments (public chatbots, unclear retention/training) where courts may increasingly say: “third-party disclosure; privilege is gone.”

B. “Intended for counsel” is not a magic phrase

Courts care about doctrinal elements—especially confidentiality and the presence of legal professionals or their recognized agents. A client can sincerely intend to speak to counsel and still lose privilege if they routed the substance through a third party first.

C. Work product needs lawyer involvement (or a credible extension)

If the client is doing it independently, without counsel directing the work, courts may treat it as ordinary research. If counsel does direct it, the fight becomes: is the AI tool more like a Kovel-type agent, or more like a public third party?

D. Prompt hygiene becomes litigation hygiene

From a litigant’s standpoint, the safest rule is blunt:

  • Never paste privileged communications into a public AI tool.

  • Never paste non-public evidence into a public AI tool.

  • Assume every prompt could be read aloud in court.
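For organizations that want to operationalize these rules rather than rely on memory, one option is a pre-flight filter that scans text for obvious confidentiality markers before anything reaches a public AI tool. The sketch below is hypothetical: the pattern names, the `preflight_check` function, and the example markers (privilege language, email addresses, Bates-style numbers) are illustrative assumptions, not a complete or authoritative blocklist.

```python
import re

# Hypothetical patterns an organization might flag before any text
# reaches a public AI tool. Illustrative only -- a real deployment
# would tune these to its own matters and document conventions.
BLOCKLIST_PATTERNS = {
    "privilege marker": re.compile(
        r"\b(attorney[- ]client|privileged?|work[- ]product)\b", re.IGNORECASE
    ),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "bates number": re.compile(r"\b[A-Z]{2,}\d{6,}\b"),  # e.g. "ACME0001234"
}

def preflight_check(prompt: str) -> list[str]:
    """Return the names of any flagged patterns found in the prompt.

    An empty list means no blocklisted marker was detected; it does
    NOT mean the text is safe to share -- human review still applies.
    """
    return [name for name, pat in BLOCKLIST_PATTERNS.items() if pat.search(prompt)]

# Example: a draft note that quotes a privileged email
draft = "Per our privileged call, email jdoe@example.com about ACME0001234."
print(preflight_check(draft))
```

A filter like this is a backstop, not a substitute for the blunt rules above: regexes miss paraphrased confidential facts, which is exactly the content a prompt log would expose.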

E. Expect adversaries to use this offensively, not just defensively

This won’t stay confined to “defendants trying to hide things.” Plaintiffs will use it too, particularly in AI copyright disputes, to argue for access to:

  • a model developer’s AI-assisted internal summaries of datasets, licensing posture, or risk analysis

  • an AI user’s AI-generated “derivative” drafts and the prompt trail behind them

ChatGPT’s perspective: the law is behind the workflow, but the outcome isn’t crazy

I agree with the instinct behind Devlin-Brown’s concern: the legal system shouldn’t punish clarity. If a client uses AI as a writing aid to explain facts to counsel, that can be functionally similar to drafting a coherent email. If courts treat “AI-aided coherence” as “third-party waiver,” we incentivize worse communication and more confusion—bad for justice.

But I also think courts are reacting to something real: many AI tools are not “neutral pipes.” They can store, reuse, train, or disclose. Privilege has always demanded disciplined secrecy. If the tool’s architecture (or contract) is inconsistent with secrecy, courts will default to the oldest rule in the book: you disclosed it to a third party.

So the most realistic near-term takeaway is not “AI kills privilege.” It’s more conditional—and more operational:

  • If you want privilege, you must treat AI like any other potentially disclosing intermediary.

  • If you want AI in privileged workflows, you need a controlled environment that looks, legally, like a vetted litigation vendor.

  • And until doctrine settles, litigants should assume opponents will test the boundary aggressively.

In AI litigation specifically, that means a new kind of asymmetry: the side with better governance (and better tooling discipline) will have fewer discovery landmines—and fewer self-inflicted credibility wounds.