United States v. Heppner: The Federal Ruling That Redrew Attorney-Client Privilege for the AI Era

On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York issued the first federal ruling of its kind: a defendant who fed his attorneys' privileged communications into a non-enterprise AI tool lost attorney-client privilege over those communications. The AI tool was Anthropic's Claude — the consumer version, not an enterprise deployment. The decision has immediate implications for every law firm and every client who uses public AI tools to research legal strategy.

United States v. Heppner — S.D.N.Y., February 10, 2026

Citation: United States v. Heppner (S.D.N.Y., decided February 10, 2026)
Judge: Hon. Jed S. Rakoff, U.S. District Judge
Defendant: Bradley Heppner, former CEO of Beneficient (financial services)
Charges: Securities fraud, wire fraud, false statements to auditors (~$150M alleged scheme)
AI Tool: Anthropic Claude — non-enterprise consumer version
Core Ruling: AI-generated documents not protected by attorney-client privilege or work product doctrine
Privilege Waiver: Feeding attorney communications into consumer AI may waive privilege over underlying communications
Significance: Believed to be the first federal court ruling of its kind on AI and privilege

The facts are precisely the kind of scenario that every compliance training deck warned about — and that attorneys kept doing anyway. After receiving a grand jury subpoena and retaining counsel, Bradley Heppner — without his attorneys' knowledge or direction — used a non-enterprise consumer version of Claude to research his own legal situation. He input information he had received from his attorneys into the tool, generating 31 documents outlining defense strategy and legal arguments. Federal agents seized those documents during a search of his home. His attorneys moved to suppress them as privileged. Judge Rakoff denied the motion on three independent grounds.

31
Privileged strategy documents — all ruled discoverable
Heppner generated 31 documents analyzing his defense strategy by feeding his attorneys' confidential advice into consumer Claude. Judge Rakoff ruled every one discoverable. The technical reason: consumer AI terms of service explicitly permit the provider to access user data and use prompts for model training — eliminating any reasonable expectation of confidentiality.

The Three Holdings — and Why Each Independently Dooms the Privilege Claim

Judge Rakoff's ruling rests on three independent legal conclusions. Any one of them would be sufficient to defeat the privilege claim. Together, they create a framework that bars privilege protection for AI-assisted legal research conducted outside enterprise deployments.

Holding 1: No Attorney-Client Privilege

Attorney-client privilege protects confidential communications between an attorney and client made for the purpose of obtaining legal advice. The court found it did not apply here: the consumer tool's terms of service permit the provider to access user data and use prompts for model training, eliminating any reasonable expectation of confidentiality — and without a reasonable expectation of confidentiality, the privilege never attaches.

Holding 2: No Work Product Protection

The work product doctrine (codified for civil litigation in Fed. R. Civ. P. 26(b)(3)) protects documents prepared in anticipation of litigation by or for a party or its representative, including the party's attorney. The court held it did not apply because Heppner conducted the research independently — not at counsel's direction, not as counsel's agent, and not as part of a coordinated litigation strategy with counsel. The 31 documents were his own work product, produced without attorney direction or involvement.

Holding 3: Potential Waiver of Underlying Attorney Communications

Most alarmingly for practicing attorneys: the court noted that by feeding his attorneys' privileged communications into the consumer AI tool, Heppner may have waived privilege over the underlying communications from his attorneys — the emails, strategy memos, and advice that he input as prompts. This is the cascade effect. The client's use of a consumer AI tool doesn't just lose protection for the AI-generated output. It potentially strips protection from the privileged attorney advice that was used as input.

The cascade risk: If your client pastes your legal memo into ChatGPT or consumer Claude to "get a second opinion," they may have waived privilege over your memo. This is not hypothetical — it is now the holding of a federal district court. Your engagement letter needs to address this explicitly.

Enterprise vs. Consumer AI: The Technical Distinction That Determines Privilege

The Heppner ruling turns on a technical fact: Heppner used the consumer version of Claude, not an enterprise deployment. This distinction is legally critical and technically specific.

Consumer AI tools (ChatGPT free/Plus, Claude.ai free/Pro, Gemini consumer) share a set of characteristics that defeat privilege claims: terms of service that permit the provider to access user data and use prompts for model training, shared infrastructure with no tenant isolation, no contractually guaranteed confidentiality, no audit trail under the firm's control, and provider staff access for safety review.

Enterprise AI deployments (ChatGPT Enterprise, Claude Enterprise/API with zero-data-retention, private LLM deployments) can address some of these concerns — but only if properly configured:

What "Enterprise" Actually Requires for Privilege Protection

Zero data retention: Contractual prohibition on retaining or using conversation data for any purpose
No training data use: Explicit opt-out or contractual prohibition on model training
Data isolation: Your tenant data is not accessible to other customers or provider staff except with your authorization
Audit logging: Complete audit trail of who accessed what conversation and when
Incident notification: Contractual obligation to notify you if data is accessed by unauthorized parties
DPA/BAA equivalent: Data Processing Agreement establishing the provider as a trusted agent processing confidential information on your behalf
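To make the checklist concrete, the six requirements above can be expressed as a structured vendor-assessment record. The sketch below is illustrative Python — the schema, field names, and example profile are hypothetical, not any provider's actual terms:

```python
from dataclasses import dataclass, fields

@dataclass
class EnterpriseAIVendorProfile:
    """One field per requirement in the table above (hypothetical schema)."""
    zero_data_retention: bool      # contractual bar on retaining/using conversation data
    no_training_on_data: bool      # written guarantee, not a default or an opt-out
    tenant_isolation: bool         # no cross-customer or provider-staff access
    audit_logging: bool            # complete who/what/when access trail
    incident_notification: bool    # contractual breach-notification duty
    dpa_in_place: bool             # DPA/BAA-equivalent naming the provider as agent

def privilege_gaps(profile: EnterpriseAIVendorProfile) -> list[str]:
    """Names of every requirement the vendor fails; empty list = all six met."""
    return [f.name for f in fields(profile) if not getattr(profile, f.name)]

# A typical consumer-tier tool fails all six checks.
consumer_tier = EnterpriseAIVendorProfile(
    zero_data_retention=False, no_training_on_data=False, tenant_isolation=False,
    audit_logging=False, incident_notification=False, dpa_in_place=False,
)
print(privilege_gaps(consumer_tier))   # prints all six requirement names
```

A profile that returns an empty list is a candidate for privileged work; anything else is, under the emerging bar guidance, presumptively not.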

The Prior Warning: Mata v. Avianca (S.D.N.Y. 2023)

Heppner builds on a foundation laid by Mata v. Avianca. That 2023 case addressed a different AI failure mode — hallucinated citations rather than privilege waiver — but it established the principle that attorneys have a professional responsibility to understand and verify the output of any AI tool they use.

Mata v. Avianca, Inc. — Related Case (S.D.N.Y. 2023)

Citation: Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC (S.D.N.Y. June 22, 2023), 678 F. Supp. 3d 443
Judge: Hon. P. Kevin Castel
Attorneys: Steven A. Schwartz, Peter LoDuca — Levidow, Levidow & Oberman P.C.
Issue: ChatGPT hallucinated 6+ non-existent case citations submitted to court
Sanction: $5,000 per attorney + $5,000 against the firm
AI Tool: ChatGPT (public consumer version)

The facts are painful in their clarity. Attorney Steven Schwartz used ChatGPT to research case law supporting a personal injury claim against Avianca. The model returned a list of apparently on-point citations. Schwartz, by his own admission in subsequent filings, did not verify the citations against Westlaw, Lexis, or any other legal research database. He submitted the brief. Opposing counsel notified the court that the cases could not be found. Judge Castel ordered the attorneys to produce the cases. They could not. The cases did not exist.

What followed was a masterclass in what courts now expect from attorneys deploying AI. In his 46-page sanctions opinion, Judge Castel did not merely sanction the lawyers for the fake citations. He articulated a framework for professional responsibility in the age of generative AI that state bars across the country would spend the next two years encoding into formal ethics opinions.

$15,000
Total sanctions against Levidow, Levidow & Oberman
$5,000 against Schwartz, $5,000 against LoDuca (who signed the brief without reviewing it), $5,000 against the firm. Plus mandatory continuing legal education. Plus reputational damage that no dollar figure captures.

What Actually Went Wrong: A Technical Analysis

The narrative in the popular press focused on "AI hallucinations" as though ChatGPT's tendency to confabulate citations was a surprise or an anomaly. It was neither. The failure in Mata v. Avianca was architectural — a mismatch between the capabilities of a public consumer AI product and the verification requirements of legal practice. Understanding this mismatch is essential for any law firm deploying AI tools today.

The Hallucination Mechanism

Large language models like GPT-4 do not retrieve information from databases. They generate text that is statistically consistent with the patterns in their training data. When asked for case citations, an LLM produces strings of text that look like case citations — jurisdiction designations, year numbers, party names, reporter volumes — because it has seen millions of case citations in training. It does not check whether those specific citations correspond to real decisions.

This is not a bug that OpenAI failed to fix. It is a fundamental architectural characteristic of transformer-based language models. Retrieval-Augmented Generation (RAG) architectures, fine-tuned legal research models, and dedicated citation-verification layers can substantially reduce hallucination rates in legal contexts — but public ChatGPT, as deployed by Schwartz, had none of these safeguards.
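The architectural fix the paragraph describes can be sketched in a few lines. In a retrieval-grounded design, the system may cite only documents actually retrieved from a verified corpus, and it refuses when retrieval comes up empty. Everything below — the corpus, the retrieval function, the response format — is a hypothetical illustration, not any vendor's implementation:

```python
# Retrieval-grounded generation sketch: citations may come only from a
# verified corpus, never from free generation.
VERIFIED_CORPUS = {
    "678 F. Supp. 3d 443": "Mata v. Avianca, Inc. (S.D.N.Y. 2023): sanctions "
                           "for submitting fabricated AI-generated citations.",
    # ... real, verified decisions only ...
}

def retrieve(query: str) -> dict[str, str]:
    """Naive keyword match standing in for a real legal search index."""
    words = query.lower().split()
    return {cite: text for cite, text in VERIFIED_CORPUS.items()
            if any(w in text.lower() for w in words)}

def answer_with_grounded_citations(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # Refuse rather than confabulate -- the key difference from raw generation.
        return "No verified authority found; no citation will be offered."
    return f"[draft answer grounded in retrieved authority] Sources: {'; '.join(hits)}"

print(answer_with_grounded_citations("fabricated citations sanctions"))
```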

The Verification Gap

Public ChatGPT has no connection to Westlaw, Lexis, Fastcase, or any legal database. It cannot verify whether a citation exists. It will not tell you a citation is fabricated — it will present fabricated citations with the same confident prose it uses for real ones. This is not a failure mode. This is how the product works.
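Because the model will not flag its own fabrications, verification has to happen outside the model. A minimal sketch of that outer check — extract every reporter-style citation from a draft and confirm each against an authoritative source — where the regular expression is deliberately simplified and the lookup is a stand-in for a real Westlaw/Lexis/Fastcase query:

```python
import re

# Matches simple reporter citations like "678 F. Supp. 3d 443" or "925 F.3d 101".
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|F\. Supp\.(?: \d?d)?|S\. Ct\.)\s+\d{1,4}\b")

def unverified_citations(brief_text: str, verified_lookup) -> list[str]:
    """Return every citation that could NOT be confirmed; empty list = draft passed.
    `verified_lookup` stands in for a query against an authoritative database."""
    return [c for c in CITATION_RE.findall(brief_text) if not verified_lookup(c)]

known = {"678 F. Supp. 3d 443"}                      # hypothetical verified database
draft = "See 678 F. Supp. 3d 443; but cf. 925 F.3d 101."
print(unverified_citations(draft, lambda c: c in known))
# ['925 F.3d 101'] -- must be confirmed or cut by a human before filing
```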

The Audit Trail Problem

A secondary failure in Mata v. Avianca — one the sanctions opinion touched on but that has received less coverage — was the complete absence of an audit trail. Attorney LoDuca signed a brief containing fabricated citations without reviewing the underlying research. There was no log of what prompts were submitted to ChatGPT. There was no record of what the model returned. There was no timestamp showing when the research was conducted or who reviewed it.

In enterprise legal contexts, this audit trail is not merely good practice — it is required for privilege maintenance, malpractice defense, and increasingly, bar compliance. When a client later challenges a legal opinion, or when opposing counsel moves to strike a brief, the attorney's ability to demonstrate a reasonable verification process matters enormously. With public ChatGPT, that process simply does not exist.
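What such an audit trail might look like in practice: one append-only record per AI interaction, held in the firm's own systems. This is a minimal sketch — the field set mirrors the checklist later in this piece, the JSONL file stands in for a practice management system, and the content hashes stand in for the full copies a real system would keep in the firm's document management system:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(path: str, attorney_id: str, matter_no: str,
                       prompt: str, output: str) -> None:
    """Append one audit record per AI interaction, under the firm's control."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "attorney_id": attorney_id,
        "matter_no": matter_no,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verified_by": None,   # filled in when a human reviews the output
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_interaction("ai_audit.jsonl", "atty-0042", "2026-0137",
                   prompt="summarize deposition transcript ...",
                   output="[model output] ...")
```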

The Supervision Failure

ABA Model Rules 5.1 and 5.3 require supervising attorneys to ensure that subordinate lawyers and non-lawyers comply with the Rules of Professional Conduct. Judge Castel found that LoDuca — who signed the brief — failed to meet his supervisory obligations by lending his name to work product he had not reviewed. The fact that the "subordinate" in this case was an AI system, rather than a paralegal or associate, did not change the analysis. The supervising attorney remains responsible for the work product regardless of whether it was generated by a human, a database, or a language model.

The Bar Ethics Response: Six States Lead the Way

Mata v. Avianca was a catalyst. Within months of Judge Castel's opinion, state bars began issuing formal ethics guidance on AI use in legal practice. The following table summarizes the six most consequential opinions to date:

Formal AI Ethics Opinions: Key State Bars

New York — NY State Bar Ethics Op. 1253 (2024): Attorneys must understand AI tool capabilities and limitations; supervise AI output; verify citations; disclose AI use when it affects the substance of work product; and obtain client consent before submitting confidential data to third-party AI systems.

California — CA State Bar Formal Op. 2023-L-0002: Competent AI use requires understanding the tool's data retention and training practices. Confidential client information must not be disclosed to AI systems that may retain, share, or train on that data without client consent. Requires documented vendor due diligence.

Florida — FL Bar Op. 24-1 (2024): Lawyers using generative AI for client matters must evaluate whether the platform adequately protects confidential information. Notes that standard consumer AI terms of service are insufficient for Rule 1.6 compliance, and identifies training data contamination as a specific risk.

Texas — TX Prof. Ethics Comm. Op. 699 (2024): Emphasizes that the duty of competence (Rule 1.01) includes technology competence. Attorneys must verify AI-generated research through authoritative sources. The opinion specifically cites Mata v. Avianca as the paradigm case of insufficient verification.

Pennsylvania — PA Bar Assoc. Formal Op. 2024-300: Requires lawyers to implement reasonable measures to prevent unauthorized disclosure when using AI. "Reasonable measures" include reviewing vendor data policies, assessing data retention practices, and using enterprise-grade deployments that offer contractual confidentiality protections.

New Jersey — NJ Advisory Comm. Prof. Ethics Op. 740 (2024): Among the most detailed opinions on technical architecture. Distinguishes between consumer AI (insufficient), enterprise AI with data processing agreements (potentially sufficient with safeguards), and isolated private deployments (strongly preferred for sensitive matters).

ABA Formal Opinion 512 (2024)

In addition to state bar guidance, the ABA issued its first comprehensive generative AI ethics opinion in 2024. ABA Formal Opinion 512 addresses four core duties: competence (Model Rule 1.1), confidentiality (Rule 1.6), communication (Rule 1.4), and supervision (Rules 5.1 and 5.3). The opinion is notable for its technical specificity. It explicitly states that attorneys must evaluate whether AI tools "maintain reasonable confidentiality" and notes that consent to a vendor's standard terms of service does not constitute adequate client disclosure under Rule 1.4 when sensitive client information is being processed.

Opinion 512 also addresses billing — an area the sanctions order in Mata did not reach. The ABA notes that a lawyer may not bill a client for time that AI efficiency gains eliminated — billing must reflect the work actually performed, not the hours a task would have taken without the tool — and that the economics of AI use should be addressed in the engagement letter.

The Trajectory Is Clear: As of early 2026, 47 state bars have issued some form of formal guidance on AI use in legal practice, up from six in mid-2023. The convergence point across all opinions is the same: consumer AI products are presumptively insufficient for work involving confidential client information without additional safeguards.

The Technical Architecture of Privilege-Safe AI

Understanding why public ChatGPT fails privilege requirements — and what a compliant alternative actually looks like — requires examining four distinct technical layers: data routing, retention policy, contractual frameworks, and access control architecture.

Layer 1: Where Does the Data Go?

When an attorney pastes a contract summary into ChatGPT, that text is transmitted over the public internet to OpenAI's servers. OpenAI's privacy policy and terms of use govern what happens next. For the consumer product, that data was historically used to train future model versions — a fact that Samsung discovered when its engineers inadvertently uploaded proprietary semiconductor yield data to ChatGPT in 2023, prompting a company-wide ban on the tool. OpenAI has since offered opt-out mechanisms and enterprise tiers with stronger commitments, but with the default consumer product, every prompt remains a transmission of client data to a third-party commercial entity over which the law firm has no contractual control.

Public ChatGPT (Consumer)

  • Data transmitted to OpenAI servers
  • Historically trained on user inputs
  • No BAA-equivalent agreement
  • Shared infrastructure, no isolation
  • No audit trail in attorney's control
  • OpenAI staff may access for safety review
  • Data may cross jurisdictions (EU concerns)
  • No contractual privilege acknowledgment
  • Citation verification: none
  • Sub-processors: undisclosed in consumer tier

Claire Architecture (Isolated Deployment)

  • Data stays in firm's own systems
  • Zero training on client data, ever
  • Full contractual data protection
  • Isolated tenant, no cross-client exposure
  • Audit trail in firm's practice management system
  • No vendor staff access to client data
  • Data residency controls available
  • Privilege architecture documented
  • Ephemeral session memory, no persistence
  • All sub-processors disclosed and contracted

Layer 2: Isolated Tenancy vs. Shared Infrastructure

Enterprise AI platforms deployed for legal use must operate on isolated tenant architecture — meaning that the model instance serving Law Firm A has no access to the data, prompts, or outputs generated by Law Firm B. This is analogous to the difference between a dedicated server and a shared hosting environment in traditional IT. On shared infrastructure, cross-contamination is not merely theoretical; it is an architectural possibility that vendor security controls must actively prevent, and those controls can fail.

The New Jersey Advisory Committee's Opinion 740 specifically flagged shared-infrastructure AI as a concern for sensitive matters, noting that "the theoretical possibility of cross-client data exposure creates privilege exposure that prudent counsel should avoid." Isolated tenant deployment eliminates this vector entirely.
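The distinction can be illustrated in miniature: in an isolated design, each firm's instance owns its own private state, and there is no shared object through which a bug could leak one tenant's data to another. A hypothetical sketch of the pattern, not Claire's or any vendor's actual implementation:

```python
class TenantInstance:
    """Isolated tenancy in miniature: each firm gets its own instance,
    with state visible only to that tenant -- no shared store exists."""

    def __init__(self, firm_id: str):
        self.firm_id = firm_id
        self._context: list[str] = []   # private to this tenant

    def submit(self, prompt: str) -> str:
        self._context.append(prompt)
        return f"[response for {self.firm_id} using only its own context]"

# One instance per firm; no object is shared between them.
firm_a = TenantInstance("firm-a")
firm_b = TenantInstance("firm-b")
firm_a.submit("privileged matter details ...")
assert firm_b._context == []   # Firm B's instance holds none of Firm A's data
```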

Layer 3: The Training Data Contamination Problem

Perhaps the most legally consequential architectural question is whether client data submitted to an AI system may be incorporated into future model training runs. If Attorney A's privileged communication with Client X is used to train a model that Attorney B then queries on behalf of Client Y (a competitor), the privilege chain has been broken in a manner that is both practically irremediable and legally catastrophic.

OpenAI's enterprise agreements include provisions against training on customer data, but the consumer product historically lacked these protections. More importantly, even with contractual protections, the verification of training exclusions is difficult. A privilege-safe architecture should make training exclusion verifiable, not merely contractual — ideally through an air-gapped or isolated deployment where the model weights are never updated with client inputs.
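One way to make the exclusion verifiable rather than merely contractual: attest that the deployed model weights are bit-identical before and after client sessions, since any training run would change them. A minimal sketch, assuming the firm (or its auditor) can read the weight files of an isolated deployment; the path and *.bin layout are hypothetical:

```python
import hashlib
from pathlib import Path

def weights_fingerprint(weights_dir: str) -> str:
    """SHA-256 over all weight files, read in a stable order."""
    h = hashlib.sha256()
    for path in sorted(Path(weights_dir).rglob("*.bin")):
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
                h.update(chunk)
    return h.hexdigest()

before = weights_fingerprint("/srv/model/weights")
# ... client sessions run against the deployed model here ...
after = weights_fingerprint("/srv/model/weights")
assert before == after, "Weights changed: training-exclusion guarantee violated"
```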

12-Point Technical Audit Checklist for Law Firms Using AI

AI Privilege & Ethics Audit Checklist

01
Data Routing Documentation

Can you identify, in writing, exactly where client data goes when submitted to your AI tool? "To the vendor's servers" is not sufficient — you need geographic location, infrastructure provider, and data flow diagrams.

02
Training Exclusion Guarantee

Does your vendor provide a contractual guarantee — not a default setting, not an opt-out, but a written guarantee — that client data will never be used to train, fine-tune, or improve any model?

03
Isolated Tenant Architecture

Is your deployment isolated from other customers? Ask specifically: "Does our model instance share compute, memory, or storage with any other customer's instance?" The answer must be no.

04
Citation Verification Protocol

Post-Mata, every AI research output involving case citations must be verified against an authoritative legal database (Westlaw, Lexis, Fastcase) before filing. Document this verification in your work product file.

05
Audit Trail Preservation

Do you maintain logs of AI interactions involving client matters? These logs should include: timestamp, attorney ID, matter number, data submitted, and output received. These are discoverable in malpractice litigation.

06
Client Disclosure in Engagement Letter

Per ABA Op. 512 and NY Ethics Op. 1253: does your engagement letter disclose AI use, identify the type of AI system used, and obtain consent for processing of confidential information?

07
Sub-Processor Disclosure

AI vendors use sub-processors (cloud providers, GPU infrastructure, monitoring services). Do you have a complete list of sub-processors who may touch client data? Are they contractually bound to the same confidentiality standards as the primary vendor?

08
Incident Response Obligations

If the AI vendor suffers a data breach involving your client data, what notification obligations do you have? Under Model Rule 1.4 and many state data breach laws, you must notify affected clients promptly. Verify your vendor's breach notification SLA.

09
Supervisory Policy (Rules 5.1 & 5.3)

Have you adopted a written AI supervision policy? It should specify: which tasks AI may perform, what verification is required for each task type, who reviews AI output before it leaves the firm, and who is responsible when AI output is wrong.

10
Geographic Data Transfer Analysis

If you represent EU-based clients or EU-domiciled entities, GDPR Article 44 restricts data transfer outside the EEA. Does your AI vendor provide EU data residency options? Is a Data Processing Agreement (DPA) in place with Standard Contractual Clauses?

11
State Bar CLE Compliance

Post-Mata, several jurisdictions now require or strongly encourage AI-specific CLE. California and New York both address AI competence in their mandatory technology CLE requirements. Verify your attorneys are current.

12
Conflict Check Architecture

If AI is used in the intake or matter-opening phase, ensure the conflict check process is not compromised. AI systems that process prospective client information must operate within the same conflict-check protocols as human intake staff — and must not expose information about one prospective client to another.

How Claire's Architecture Preserves Privilege

The failures in Mata v. Avianca — and the structural risks that every state bar has now identified in public AI deployments — are architectural problems. They require architectural solutions. Patching a consumer AI product with verification checklists and training reminders addresses symptoms, not causes. The cause is that public AI products were not designed for the privilege requirements of legal practice.

Claire's Privilege-by-Design Architecture

Each element of Claire's technical deployment was designed to satisfy the specific requirements articulated in ABA Op. 512, NY Ethics Op. 1253, CA Formal Op. 2023-L-0002, and the Mata v. Avianca sanctions framework.

Isolated Tenant Deployment

Each law firm client operates in a fully isolated tenant environment. The model instance serving your firm processes no data from any other firm, ever. There is no shared compute, no shared memory, no shared storage layer between tenants. Cross-client data exposure is architecturally impossible, not merely contractually prohibited.

Zero Training on Client Data — Guaranteed

Claire's architecture uses ephemeral session memory. Client data submitted during a session is processed in-session and discarded when the session terminates. It does not persist to any database, it is not used for fine-tuning, and it does not influence the model's behavior in any subsequent session — for your firm or any other. This is verifiable through architecture review, not merely contractual assertion.
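The ephemeral-session concept can be illustrated with a short sketch. This is a conceptual illustration of the design described above, not Claire's actual code — conversation state lives only in process memory for the life of the session and is discarded on exit, while audit metadata (as described below) is written to the firm's own systems:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_session():
    """Conceptual sketch: conversation state exists only for the session."""
    context: list[str] = []       # in-memory only; never written to disk
    try:
        yield context             # model calls append to / read from this list
    finally:
        context.clear()           # all client content discarded at session end

with ephemeral_session() as session:
    session.append("privileged prompt: analyze the indemnification clause ...")
    # ... model inference scoped to this session's context only ...
# After the block exits, the application retains none of the session's content;
# only audit metadata (timestamp, attorney, matter) is persisted, firm-side.
```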

Audit Trail in Your Control

Every Claire interaction involving client matters is logged to your practice management system — not to Claire's systems. This means you own the audit trail. You control it, you can produce it in litigation, and you can demonstrate to bar regulators the exact scope and nature of AI use on any matter.

No Vendor Staff Access to Client Data

Claire's operational model does not require — and expressly prohibits — vendor staff access to client data for any purpose including "quality assurance," "content moderation," or "safety review." The data processing agreement reflects this prohibition with contractual teeth, not merely policy language.

Citation Verification Integration

Unlike public ChatGPT, Claire's legal research capabilities integrate with authoritative citation databases before delivering research output. Any case citation included in Claire's output has been verified against primary legal sources. The Mata v. Avianca failure mode is eliminated at the architecture level.

Engagement Letter Templates with AI Disclosure

Claire includes ABA Op. 512-compliant engagement letter language that discloses AI use to clients, describes the architecture of the system, and obtains informed consent for AI-assisted work product. This satisfies the disclosure requirements in every state bar opinion addressing AI use as of 2026.

The Lasting Lessons of Mata v. Avianca

Judge Castel's sanctions order was not primarily about ChatGPT. It was about the fundamental obligation of attorneys to stand behind their work product — to verify what they submit to courts, to supervise the process that produces their filings, and to understand the tools they use. These obligations predate AI by centuries. What changed in 2023 is that a new category of tool arrived that could produce highly plausible legal-sounding text that was factually wrong, and attorneys began using it without understanding what it could and could not do.

The sanctions in Mata were modest relative to the reputational damage — $15,000 total, easily absorbed by an ongoing law practice. But the sanctions opinion itself has become the most-cited document in legal AI ethics, referenced in bar opinions from New Jersey to California, in law review articles, in CLE curricula, and in law firm AI policies nationwide. Judge Castel did not just sanction two attorneys. He wrote the founding document of the legal AI compliance era.

For law firms deploying AI today, the question is not whether to use AI — it is how to use it in ways that satisfy bar obligations, preserve privilege, maintain client trust, and produce reliable work product. The firms that understand the architectural distinction between public consumer AI and purpose-built legal AI are the ones positioned to capture the efficiency benefits without the liability exposure. The firms that do not understand this distinction are the next Mata v. Avianca.

The pattern is clear: courts are not tolerating AI hallucinations as an excuse, they are not treating them as a novel mitigating circumstance, and they are increasingly imposing significant consequences for failure to verify AI-generated research. The verification obligation is not aspirational — it is enforceable professional responsibility.

For more on the specific technical risks of using public LLMs in legal practice, and the seven distinct privilege waiver vectors attorneys should understand before deploying any AI tool, see our companion analysis: ChatGPT in Your Law Firm: 7 Ways You're Waiving Attorney-Client Privilege. For a full technical overview of Claire's confidentiality architecture, see Client Confidentiality Technical Architecture and the Legal practice overview.

Claire
Ask Claire about legal AI compliance — privilege-safe architecture for law firms