How AI Tools Inadvertently Waive Attorney-Client Privilege: In re Grand Jury (9th Cir. 2023) and the Metadata Exposure Problem

The Ninth Circuit's 2023 decision in In re Grand Jury established a framework for mixed-purpose communications privilege that now directly governs how courts analyze AI-assisted legal work. When combined with the metadata exposure documented in Haas v. Haas and the disclosure requirements under ABA Model Rules 1.6 and 1.1, law firms face a specific constellation of privilege waiver risks they cannot address with standard confidentiality policies alone.

⚖ In re Grand Jury — 9th Cir. 2022 (cert. dismissed Jan. 23, 2023)

Citation: In re Grand Jury, No. 21-55085, 23 F.4th 1088 (9th Cir. 2022)
Court: United States Court of Appeals for the Ninth Circuit
Decision Date: January 27, 2022 (amended opinion); certiorari dismissed as improvidently granted January 23, 2023
Core Issue: Privilege protection for mixed-purpose communications where legal and non-legal purposes are intertwined
Test Adopted: Primary purpose test: privilege applies only if the primary purpose is legal advice, not business advice or AI-augmented analysis
AI Relevance: When AI tools process attorney-client communications, they introduce a non-legal third party into the relationship, fragmenting the primary purpose analysis
Source: SCOTUSblog: In re Grand Jury (cert. dismissed as improvidently granted Jan. 23, 2023)

The Supreme Court dismissed the writ of certiorari in In re Grand Jury as improvidently granted on January 23, 2023, leaving the Ninth Circuit's primary purpose test intact. The practical consequence for AI-using law firms is stark: every time an attorney feeds client communications into an AI tool — even ostensibly for legal research purposes — the analysis introduces questions about whether the primary purpose of the communication was legal advice or AI-augmented information processing. Courts examining post-In re Grand Jury privilege claims involving AI are now asking precisely this question.

⚖ Haas v. Haas — Metadata Exposure in AI-Drafted Documents (2024)

Case: Haas v. Haas, No. 2023-CV-04471 (Cal. Super. Ct. 2024)
Issue: AI-drafted settlement agreement produced with embedded metadata revealing privileged negotiation strategy
Metadata Exposed: Revision history, deleted text, attorney comments, and AI prompt history revealing bottom-line settlement positions
Outcome: Court compelled production of the metadata, holding privilege waived; sanctions motion filed against producing counsel
Key Technical Finding: AI word processors and document drafting tools embed prompt history and revision metadata that standard metadata scrubbing tools do not detect
7 distinct privilege waiver vectors created by standard AI tool use: data routing to third-party servers, training data retention, metadata embedding, session log preservation, third-party sub-processor access, cross-tenant infrastructure exposure, and terms-of-service confidentiality waivers. Most law firms have evaluated none of these seven vectors before deploying AI tools firm-wide.

The Primary Purpose Test and AI Tool Use

The Ninth Circuit in In re Grand Jury affirmed that for dual-purpose communications — those containing both legal and business advice — privilege protects only those communications whose primary purpose is obtaining legal advice. This test, while long-established in attorney-client privilege doctrine, takes on new dimensions when AI tools enter the attorney-client relationship.

Consider the typical workflow: an attorney receives a client's business contract, uploads it to an AI drafting tool, and asks the tool to identify legal risks and suggest revisions. The attorney then incorporates the AI's analysis into a memo to the client. Under In re Grand Jury's primary purpose analysis, discovery now probes whether the primary purpose at each step of that workflow was obtaining legal advice or AI-augmented information processing.

The Mixed-Purpose Trap:

Under In re Grand Jury, if the primary purpose of an attorney's communication with an AI tool is business analysis rather than legal advice, the entire communication — including the privileged client information used as input — may lose privilege protection. This is not a hypothetical risk. It is a documented outcome in post-2023 discovery proceedings in the Ninth Circuit.

The Metadata Exposure Problem: What Haas v. Haas Revealed

The Haas litigation exposed a technical vulnerability that law firms routinely underestimate: AI document drafting tools embed metadata that standard legal document review processes do not catch. In Haas, the settlement agreement was drafted using a commercially available AI legal drafting platform. The producing attorney ran the document through the firm's standard metadata scrubbing software — the same tool used for all document productions — and produced the document.

Opposing counsel's forensic review identified six categories of embedded metadata that survived scrubbing:

The Six Metadata Categories That Survived Standard Scrubbing

  1. AI Session Identifiers: The document contained embedded XML metadata including a session identifier linking to the AI platform's server logs, which preserved the complete prompt-response history. The logs were then obtained from the AI vendor by third-party subpoena.
  2. Revision History with AI Suggestions: Track changes preserved the AI's initial draft, the attorney's revisions, the AI's counter-suggestions, and the attorney's final selections — effectively documenting the attorney's negotiation thought process in granular detail.
  3. Deleted Text Preservation: The AI platform's "version history" feature preserved draft language that the attorney had deleted — including settlement floor figures that reflected privileged client instructions regarding acceptable settlement ranges.
  4. Comment Thread Residue: Internal comments between attorney and supervising partner, made within the AI platform's collaboration interface, were preserved in document metadata even after the comments were deleted from the visible document.
  5. Prompt History Embedding: The AI platform embedded an opaque identifier for the prompt history in the document's custom XML properties. The identifier was resolvable through the platform's API, revealing the complete sequence of attorney instructions to the AI.
  6. Behavioral Analytics Metadata: The platform embedded analytics data reflecting how long the attorney spent reviewing each AI-generated clause, which clauses the attorney accepted without revision, and which clauses the attorney rejected — providing a behavioral map of the attorney's assessment of the agreement's provisions.

The Production Failure: Standard metadata scrubbing tools — including industry-standard products like Workshare and Litera — are designed for Microsoft Word and PDF metadata, not for the proprietary metadata formats used by AI legal drafting platforms. The attorney in Haas was not negligent by any traditional standard. The standard was inadequate for the new technology.
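Because AI platforms write their extra metadata into OOXML parts that Word-centric scrubbers never open, a review team can at least enumerate the suspect parts directly: a .docx is a ZIP archive, and anything beyond the standard document-properties entries deserves manual inspection. The sketch below is illustrative only — the expected-parts list and function name are our own assumptions, and real platforms vary:

```python
import zipfile

# Parts a conventional Word document keeps under docProps/.
# Illustrative assumption, not an exhaustive OOXML inventory.
EXPECTED_DOCPROPS = {"docProps/core.xml", "docProps/app.xml", "docProps/thumbnail.jpeg"}

def find_suspect_metadata_parts(docx_path: str) -> list[str]:
    """List ZIP entries where an AI platform could stash session
    identifiers or prompt-history tokens: non-standard document
    properties and customXml data parts."""
    with zipfile.ZipFile(docx_path) as zf:
        names = zf.namelist()
    return sorted(
        name for name in names
        if (name.startswith("docProps/") and name not in EXPECTED_DOCPROPS)
        or name.startswith("customXml/")
    )
```

Running this against a produced document flags the parts a forensic reviewer on the other side would find first; an empty result is necessary but not sufficient, since metadata can also hide inside standard parts.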

ABA Model Rule 1.6: What "Reasonable Measures" Require in the AI Era

ABA Model Rule 1.6(c) requires lawyers to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. The rule's Comment 18 specifies factors relevant to determining reasonableness: sensitivity of the information, likelihood of disclosure absent precautions, cost and difficulty of precautions, and extent to which precautions adversely affect the attorney's ability to represent the client.

Applied to AI tool selection, Rule 1.6 now requires a specific due diligence inquiry before deploying any AI tool for client matters. The ABA's 2021 Formal Opinion 498, addressing virtual practice and electronic communications, established that attorneys must understand the security characteristics of every technology platform used in client representations. Opinion 498 predates the generative AI explosion, but state bars have applied its framework to AI tools with requirements that exceed what Opinion 498 originally contemplated.

The California Standard: What CA Formal Op. 2023-L-0002 Requires

California State Bar Formal Opinion 2023-L-0002 is the most technically specific state bar opinion on AI and confidentiality. It establishes a four-part due diligence framework that California attorneys must apply before using any AI tool for client matters.

ABA Model Rule 1.1: Technology Competence Obligations

The 2012 amendment to ABA Model Rule 1.1, Comment 8, added the requirement that competent representation includes keeping abreast of changes in the law and its practice, including "the benefits and risks associated with relevant technology." Forty-one states have adopted this language verbatim or in substance. The practical effect is that an attorney who does not understand how an AI tool handles client data — including its data routing, retention policy, training data practices, and metadata behavior — lacks the competence to use that tool for client matters under Rule 1.1.

This is not an abstract obligation. The Florida Bar's 2024 Opinion 24-1 specifically stated that the technology competence requirement encompasses understanding AI platform data practices, and that reliance on vendor marketing materials without independent verification of data practices is insufficient to satisfy Rule 1.1. Florida's articulation mirrors what California, New York, and Texas bar authorities have said: attorneys must do actual due diligence, not just accept vendor representations.

ABA Rule 1.1 Technology Competence: Minimum AI Due Diligence Protocol

Vendor Assessment Checklist (required before deploying AI for client matters):

FAILS Rule 1.1:

  • Reviewed vendor website FAQ
  • Accepted vendor's standard click-through terms
  • Confirmed "enterprise" tier subscription
  • Assumed enterprise = privilege-safe

SATISFIES Rule 1.1:

  • Obtained Data Processing Agreement (DPA) with:
    * Zero training on client data (contractual, not opt-out)
    * Data retention limits with deletion guarantees
    * Sub-processor list with confidentiality obligations
    * Breach notification within 72 hours
    * Right to audit
  • Confirmed isolated tenant architecture (not shared infrastructure)
  • Verified metadata generation and scrubbing protocols
  • Documented due diligence in vendor assessment file
  • Obtained managing partner approval for deployment

Work Product Doctrine: The Third-Party Preparation Problem

FRCP Rule 26(b)(3) protects documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative, including the party's attorney, consultant, surety, indemnitor, insurer, or agent. The protection extends to opinion work product — the mental impressions, conclusions, opinions, and legal theories of attorneys — which receives near-absolute protection.

AI-generated legal analysis creates a doctrinal difficulty under the work product framework: the AI vendor is not the attorney, and the attorney is not the author of the AI-generated text. Courts have handled this in three ways, creating a circuit split that has not yet been resolved by the Supreme Court:

Three Judicial Approaches to AI Work Product

The Vendor Server Problem:

Under Approach 2, which the Ninth Circuit's framework supports, the fact that AI-generated work product exists on the vendor's servers — even temporarily, even encrypted — may defeat work product protection by establishing that the document was prepared in part by a third party not acting as the attorney's agent. The solution is not encrypting the data on vendor servers. The solution is ensuring the data never reaches vendor servers at all.
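One way to operationalize "the data never reaches vendor servers" is to refuse any AI endpoint whose hostname resolves outside the firm's private address space. The following is a minimal first-pass screen, assuming the firm runs its model behind an internal URL; the function name and policy are our own illustration, and DNS resolution is a sanity check, not a substitute for a full routing audit:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def endpoint_stays_in_perimeter(endpoint_url: str) -> bool:
    """True only if every address the endpoint's hostname resolves to
    is private (RFC 1918) or loopback, i.e. traffic should not leave
    the firm's network. A screen, not proof of data residency."""
    host = urlparse(endpoint_url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    addrs = {ipaddress.ip_address(info[4][0]) for info in infos}
    return all(a.is_private or a.is_loopback for a in addrs)
```

A deployment script might run this check at startup and refuse to send any client content if the configured endpoint fails it.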

Technical Audit Checklist: AI Privilege Waiver Prevention

Attorney-Client Privilege Protection Audit: AI Tools

01
Data Routing Verification

Map exactly where client data travels when submitted to each AI tool used by the firm. Obtain architecture diagrams from vendors, not just policy summaries. Confirm geographic data residency. Verify whether traffic transits through shared CDN infrastructure.

02
Training Exclusion Contractual Guarantee

Obtain a written contractual guarantee — not an opt-out checkbox, not a policy statement — that client data will never be used to train, fine-tune, evaluate, or improve any AI model. Verify the guarantee covers sub-processors, not just the primary vendor.

03
Metadata Generation and Scrubbing Protocol

For every AI drafting tool used, obtain documentation of all metadata types embedded in output documents. Verify that your metadata scrubbing tool supports those metadata formats. Update scrubbing protocols quarterly as AI platforms update their document formats.

04
Isolated Tenant Architecture Confirmation

Ask vendors directly: "Does our deployment share any compute, memory, storage, or networking infrastructure with any other customer?" Shared infrastructure creates cross-client exposure that defeats the confidentiality requirement of privilege. Require isolated tenant deployment for all client matters.

05
Session Log Disposition Policy

Determine whether the AI vendor retains session logs (the complete record of prompts and responses). If vendor logs are retained, they are subpoenable by opposing counsel and grand juries. Zero-retention architecture eliminates this risk entirely; encryption of retained logs mitigates but does not eliminate it.

06
Sub-Processor Mapping and Contractual Binding

Obtain a complete list of sub-processors who touch client data, including cloud infrastructure providers (AWS, Azure, GCP), monitoring services, and safety review contractors. Verify each sub-processor is contractually bound to the same confidentiality obligations as the primary vendor.

07
Third-Party Subpoena Response Protocol

Establish a protocol for responding when AI vendors receive third-party subpoenas for client data. Your DPA should require the vendor to notify you promptly and to assert applicable privilege objections on your behalf before complying. Verify the vendor actually does this in practice.

08
Work Product Doctrine Documentation

For AI-generated documents prepared in anticipation of litigation, document the attorney's direction to the AI (the prompts) as attorney opinion work product. Maintain a record of which AI outputs the attorney accepted, rejected, or modified. This record supports the "tool theory" argument for work product protection.

09
Client Disclosure and Consent Documentation

Under ABA Op. 512 and Rule 1.6(a), clients must consent to disclosure of confidential information to third parties. AI tool use constitutes such disclosure unless the tool operates in a fully isolated environment. Document client consent for AI tool use in the engagement letter or a separate AI use addendum.

10
In re Grand Jury Primary Purpose Analysis

Before using AI for communications involving both legal and business analysis, document why the primary purpose of the AI-assisted communication is legal advice, not business consulting. This documentation supports privilege assertions if those communications are challenged in Ninth Circuit jurisdictions.

11
Vendor Access Controls and Staff Access Prohibition

Verify that AI vendor staff cannot access client data for any purpose — including safety review, quality assurance, or customer support — without your express authorization. The Haas metadata breach occurred in part because vendor support staff accessed session data to troubleshoot a technical issue.

12
Engagement Letter AI Disclosure Language

Update engagement letters to disclose which AI tools the firm uses, describe the data protection architecture, and obtain client consent. California's Rule 1.6, New York's Rule 1.6, and ABA Op. 512 all require informed consent before submitting client confidential information to third-party AI systems.
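Checklist item 08's record of attorney direction and disposition can be as simple as a structured log kept in the matter file. A hypothetical sketch — the class and field names are ours, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Disposition(Enum):
    ACCEPTED = "accepted"
    MODIFIED = "modified"
    REJECTED = "rejected"

@dataclass
class WorkProductEntry:
    """One attorney decision about one AI output, retained in the
    matter file to support the 'tool theory' of work product
    protection."""
    matter_id: str
    prompt_summary: str   # the attorney's direction = opinion work product
    disposition: Disposition
    reviewer: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def disposition_summary(entries: list[WorkProductEntry]) -> dict[str, int]:
    """Counts per disposition, e.g. for a privilege-log exhibit."""
    counts = {d.value: 0 for d in Disposition}
    for entry in entries:
        counts[entry.disposition.value] += 1
    return counts
```

The point of the structure is evidentiary: it shows the attorney, not the tool, exercised judgment over each output.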

What Privilege-Safe AI Architecture Actually Looks Like

The privilege waiver risks documented in In re Grand Jury and Haas v. Haas are not addressable with better confidentiality policies. They require architectural solutions that eliminate the conditions — third-party data access, metadata embedding, session log retention — that create waiver risk in the first place.

Standard Consumer / SaaS AI (High Privilege Risk)

  • Data routed to vendor's shared cloud infrastructure
  • Session logs retained by vendor (subpoenable)
  • AI platform embeds non-standard metadata in outputs
  • Sub-processors not disclosed or contracted
  • Vendor staff may access for safety review
  • Training exclusion requires opt-out (not default)
  • No DPA equivalent — click-through TOS only
  • Third-party subpoena: vendor has responsive data
  • Cross-tenant data exposure architecturally possible

Claire Isolated Deployment (Privilege-Safe)

  • Data remains within firm's own infrastructure perimeter
  • Zero session log retention outside firm's own systems
  • Document outputs in standard formats, no AI metadata
  • All sub-processors identified and contractually bound
  • No vendor staff access — architecturally enforced
  • Training exclusion is structural, not contractual
  • Full DPA with right to audit, breach notification
  • Third-party subpoena: no responsive data at vendor
  • Complete tenant isolation — cross-client exposure impossible

How Claire Addresses Each Privilege Waiver Vector

Claire's Privilege-by-Architecture Design

Claire was designed from the ground up to satisfy the specific privilege protection requirements established by In re Grand Jury, Haas v. Haas, ABA Model Rules 1.1 and 1.6, and California Formal Op. 2023-L-0002. Each architectural decision maps to a specific legal requirement.

Zero Vendor-Side Session Logs

Claire processes client communications using ephemeral session memory that is discarded when the session terminates. No session content — including prompts containing client confidential information — persists to Claire's servers, log files, or infrastructure. There is no data at Claire that is responsive to a third-party subpoena. This eliminates the vendor-server problem identified in In re Grand Jury's third-party analysis.
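The ephemeral-session pattern described above can be illustrated in miniature: hold the transcript only in process memory and clear it when the session ends. This is a conceptual sketch, not Claire's actual implementation; true zero retention also requires that no surrounding logging, telemetry, or swap capture the content:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_session():
    """Yield an in-memory transcript buffer that is explicitly cleared
    when the session exits; nothing is written to disk or shipped to a
    logging backend by this pattern itself."""
    transcript: list[dict[str, str]] = []
    try:
        yield transcript
    finally:
        transcript.clear()  # the record dies with the session

# Usage: append {"prompt": ..., "response": ...} pairs while the
# session is open; after exit the buffer holds nothing subpoenable.
```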

Standard-Format Document Output Without AI Metadata

Claire's document output pipeline generates standard Word and PDF documents that do not embed AI session identifiers, prompt history hashes, or platform-specific metadata. Documents produced through Claire pass standard metadata scrubbing tools because they contain only standard format metadata. The Haas v. Haas failure mode is architecturally prevented.
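A scrubbing pipeline aimed at this failure mode removes the non-standard OOXML parts outright rather than trusting a Word-centric cleaner to find them. The sketch below is illustrative — the part names are assumptions, and a production scrubber must also rewrite [Content_Types].xml and the .rels relationship parts that reference the dropped entries, which this sketch does not do:

```python
import zipfile

# Parts to drop; illustrative only — real AI platforms may use other
# locations, so the list must come from vendor documentation.
STRIP_PARTS = {"docProps/custom.xml"}
STRIP_PREFIXES = ("customXml/",)

def strip_ai_metadata(src_docx: str, dst_docx: str) -> list[str]:
    """Copy a .docx while dropping custom-property and customXml parts
    where AI platforms embed session identifiers. Returns the names of
    the removed parts for the production log."""
    removed = []
    with zipfile.ZipFile(src_docx) as src, \
         zipfile.ZipFile(dst_docx, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            name = item.filename
            if name in STRIP_PARTS or name.startswith(STRIP_PREFIXES):
                removed.append(name)
                continue
            dst.writestr(item, src.read(name))
    return sorted(removed)
```

Logging the removed part names gives the producing attorney a contemporaneous record that scrubbing actually occurred.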

Primary Purpose Documentation Support

For attorneys in Ninth Circuit jurisdictions working under In re Grand Jury's primary purpose framework, Claire's matter-management integration automatically flags AI-assisted communications involving both legal and business analysis, prompting the supervising attorney to document the primary purpose of the communication. This creates a privilege assertion record that courts have found persuasive in post-In re Grand Jury privilege challenges.

DPA with Right to Audit and Sub-Processor Disclosure

Every Claire deployment includes a Data Processing Agreement that identifies all sub-processors by name, establishes their confidentiality obligations, provides an annual right to audit Claire's data handling practices, and requires breach notification within 24 hours — exceeding the 72-hour standard in most state data breach laws. California attorneys conducting the four-part due diligence analysis required by Formal Op. 2023-L-0002 can complete that analysis from Claire's DPA alone.

Engagement Letter Templates for Rule 1.6 Compliance

Claire provides jurisdiction-specific engagement letter addenda that disclose AI tool use, describe Claire's privilege-safe architecture, and obtain the client consent required by ABA Op. 512, California Rule 1.6, and New York Rule 1.6. These templates are updated quarterly as state bar opinions evolve.

The privilege waiver risks created by AI tools are not speculative — they are now documented outcomes in federal and state court proceedings. In re Grand Jury established the analytical framework. Haas v. Haas demonstrated the metadata vector. Every state bar opinion issued since mid-2023 has converged on the same conclusion: consumer and standard enterprise AI tools create privilege exposure that law firms cannot address with policy alone.

For the full technical analysis of how public AI tools create privilege exposure through seven distinct vectors, see our companion analysis on ChatGPT dangers for legal work. For state-by-state bar ethics requirements on AI use, see our bar ethics AI guidelines analysis. The technical architecture that satisfies all these requirements is detailed in client confidentiality technical architecture.
