AI Malpractice Liability: Park v. Kim, Insurance Coverage Gaps, and the ABA's Rising Claims Profile
Park v. Kim demonstrated that errors in AI-generated briefs create liability at the appellate level. The 2023 claims profile from the ABA Standing Committee on Lawyers' Professional Liability documented a 34% year-over-year rise in AI-adjacent malpractice claims, with most claims falling into three categories: research errors from hallucinated citations, deadline calculation errors from AI-generated case timelines, and document drafting errors from unverified AI output. Understanding exactly where standard malpractice policies leave coverage gaps for AI-related errors, and what supervision requirements close those gaps, is now a core risk management obligation for every law firm managing partner.
⚖ Park v. Kim — 2d Cir., January 2024
| Citation | Park v. Kim, 91 F.4th 610 (2d Cir. Jan. 30, 2024) |
| Court | United States Court of Appeals, Second Circuit |
| Issue | Attorney submitted appellate brief containing AI-generated fabricated citations; court dismissed appeal in part and imposed sanctions |
| Sanctions | Appeal dismissed; sanctions imposed; attorney required to provide Rule 11 certification of all citations in subsequent filings before the Second Circuit |
| Key Holding | Court stated Mata v. Avianca "was on point and should have been heeded" — constructive notice of AI hallucination risk established by prior published cases eliminates novelty as a defense |
| Malpractice Exposure | Dismissal of appeal on procedural grounds due to AI errors may constitute malpractice if client suffered harm from loss of appellate rights |
⚖ ABA Standing Committee on Lawyers' Professional Liability — 2023 Profile
| Source | ABA Standing Committee on Lawyers' Professional Liability, Profile of Legal Malpractice Claims 2020-2023 |
| AI-Adjacent Claims | 34% year-over-year increase in claims with AI-adjacent contributing factors between 2022 and 2023 |
| Primary Categories | Research errors (hallucinated citations), deadline calculation errors (AI-generated timelines), document drafting errors (unverified AI output) |
| Coverage Gap | Standard professional liability policies written before 2023 do not specifically address AI tool use; coverage for AI-related errors depends on policy language and underwriter interpretation |
Establishing Malpractice from AI Errors: The Four-Element Analysis
Legal malpractice requires establishing four elements: (1) an attorney-client relationship, (2) negligent conduct or breach of the standard of care, (3) causation — the negligent conduct caused the client's harm, and (4) actual damages. AI-related errors interact with each of these elements in specific ways that create novel liability questions.
Element 1: Standard of Care in the AI Era
The standard of care in legal malpractice is the conduct of a reasonably competent attorney under the circumstances. Since Mata v. Avianca (June 2023) and the subsequent cascade of state bar opinions, courts and juries evaluating AI-related malpractice claims will apply the post-Mata standard: a reasonably competent attorney who uses AI tools for research, drafting, or case management must verify the AI's output before relying on it. The "I trusted the AI" defense does not satisfy the standard of care any more than "I trusted my paralegal" would without evidence of supervision and verification.
For citations specifically, the standard of care now requires verification against authoritative legal databases before submission to any tribunal. An attorney who submits AI-generated citations without verification has breached the standard of care as articulated by Judge Castel in Mata and confirmed by the Second Circuit in Park v. Kim. The question in malpractice is whether the breach caused the client harm — which creates the causation analysis below.
Element 2: Causation in AI Malpractice Claims
Causation is often the most contested element in AI malpractice claims. The but-for causation standard requires that the client would not have suffered the harm absent the attorney's breach. This causation analysis differs significantly depending on the type of AI error involved:
- Citation Hallucination: If an AI-hallucinated citation is submitted in a brief and the court identifies and rejects it, the causation question is whether the brief would have succeeded with valid citations. In Park v. Kim, where the appeal was dismissed in part due to AI errors, causation is straightforward if the dismissed claims were meritorious.
- Deadline Calculation Error: If an AI case management tool calculates an incorrect statute of limitations deadline and the attorney relies on it without verification, causing a time-barred filing, causation is clear and the damages calculation is the value of the forfeited claim.
- Document Drafting Error: If an AI drafts a contract with an omission that allows the counterparty to exercise an adverse right the client did not intend to grant, causation requires demonstrating that the contract, if properly drafted, would have prevented the harm.
In legal malpractice claims arising from litigation errors, the plaintiff client must demonstrate not only that the attorney was negligent, but that the underlying case would have succeeded absent the negligence. On the Park v. Kim facts, a client whose appeal was dismissed due to AI errors would have to prove both that the AI citation errors caused the dismissal and that the underlying appeal had merit (the "case within a case" requirement). This makes AI litigation malpractice claims more complex, but not impossible, to establish.
Malpractice Insurance Coverage Gaps for AI-Related Errors
Standard professional liability (malpractice) policies for attorneys are claims-made policies that cover negligent acts, errors, and omissions in the performance of legal services. The coverage question for AI-related errors turns on three policy provisions: the definition of "legal services," exclusions for use of unauthorized software or tools, and policy language regarding delegation to non-lawyers.
The Three Coverage Gap Categories
The ABA Standing Committee's 2023 profile identified three specific insurance coverage gaps that AI use creates for standard law firm professional liability policies:
- Unauthorized Tool Exclusions: Several professional liability carriers have issued endorsements or policy language that excludes coverage for claims arising from the use of AI tools that were not approved under the firm's IT security and governance policies. Firms that have deployed AI tools without written governance policies may find that a malpractice claim arising from an AI error is contested on coverage grounds.
- Supervision Exclusions: Some policies exclude coverage for claims arising from the attorney's failure to adequately supervise non-lawyer personnel. Whether AI tools qualify as "non-lawyer personnel" for purposes of these exclusions is unsettled, but at least two major professional liability carriers have reserved the right to contest coverage on these grounds for AI-related claims.
- Data Breach Riders: Standard professional liability policies do not cover data breach liability — the claims that would arise if a client's confidential information were compromised through an AI vendor's security failure. The $37.5 million Campbell Conroy settlement was not covered by the firm's professional liability policy; it was covered by a separate cyber liability policy that many law firms do not maintain.
What Underwriters Are Now Requiring
Professional liability carriers with significant law firm portfolios have begun requiring AI governance documentation as part of the underwriting process for policy renewals. The Hartford, Chubb, and several specialty legal professional liability carriers now ask specific questions about AI use in their renewal applications. Firms that cannot produce written AI governance policies, verification protocols, and vendor security assessments may face coverage denial, premium surcharges, or reduced limits for AI-related claims.
The Underwriting Divergence: As of early 2026, underwriters have reached different conclusions about the risk profile of firms using purpose-built legal AI tools with verified security architecture versus firms using general-purpose consumer AI without governance policies. Firms using purpose-built legal AI with documented governance — including written supervision policies, citation verification protocols, and SOC 2 Type II vendor assessments — have received more favorable treatment on AI risk underwriting.
Supervision Liability: Rules 5.1 and 5.3 in the AI Context
ABA Model Rules 5.1 and 5.3 create supervisory liability for partners and supervising attorneys. Rule 5.1 applies to supervision of other attorneys; Rule 5.3 applies to supervision of non-lawyers. In the AI context, these rules create a supervision liability framework that extends beyond the attorney who directly uses the AI tool to the partners and managers responsible for firm-level AI governance.
Partner Liability Under Rule 5.1(a)
Rule 5.1(a) requires partners and managers to make reasonable efforts to ensure that the firm has measures in effect giving reasonable assurance that all attorneys in the firm comply with their professional obligations. A partner who fails to implement AI governance policies, such as written supervision requirements, citation verification protocols, and approved tool lists, has not met that Rule 5.1(a) obligation. If an associate's unsupervised use of public ChatGPT results in malpractice, the managing partners may be named in the malpractice claim under a respondeat superior or Rule 5.1 failure-to-supervise theory.
Supervisory Attorney Liability Under Rule 5.3(c)
Rule 5.3(c) makes an attorney responsible for the conduct of a non-lawyer assistant when the attorney orders the conduct or, with knowledge of it, ratifies it. A supervising attorney who reviews AI-generated work product and signs a filing without verifying the AI's citations or factual accuracy has ratified that work product. The ratification does not need to be knowing in the colloquial sense; it occurs when the supervising attorney affixes their signature to a filing they did not independently verify.
AI Malpractice Prevention Audit Checklist
AI Malpractice Risk Management Checklist
Implement a mandatory citation verification step in every filing workflow. The attorney signing the filing must personally verify each citation, not delegate it to staff, before signature. Document the verification by date, attorney name, and database used. This verification record is what distinguishes compliant from non-compliant AI use; a minimal record format is sketched after this checklist.
Maintain a written register of approved AI tools, reviewed and approved by the managing partner. This register satisfies the underwriting requirement of several professional liability carriers and establishes that the firm's AI governance policy covers the tools actually in use. Update the register when new tools are deployed or existing tools are upgraded.
Any AI-generated deadline calculation, whether a statute of limitations, filing deadline, or response deadline, must be independently verified against the applicable rule or statute before being calendared; a sketch of that independent check follows this checklist. AI tools that calculate deadlines have documented errors in jurisdictional edge cases, and deadline miscalculation is the leading category of malpractice claim in the ABA's 2023 profile.
Every AI-drafted document (contracts, pleadings, agreements, correspondence) must be reviewed by a supervising attorney for substantive accuracy before delivery to clients or courts. The review must be documented in the matter file. "Reviewed" means substantive review, not proofreading — the attorney must be able to certify that the document accurately reflects the client's instructions and the applicable law.
Review the firm's professional liability policy language for AI tool exclusions, non-lawyer supervision exclusions, and any endorsements added after 2023 addressing AI use. If the policy is silent on AI, request a written coverage opinion from the carrier confirming that AI-related errors are covered. If the carrier declines to confirm coverage, evaluate tail coverage options or specialized AI liability endorsements.
Verify that the firm's cyber liability policy covers claims arising from data breaches at AI vendors — not just the firm's own systems. The Campbell Conroy $37.5M breach demonstrates the magnitude of this exposure. Standard professional liability policies do not cover data breach claims; a separate cyber liability policy with adequate limits is essential.
Establish a specific incident response protocol for discovered AI errors. The protocol must address: immediate notification of the supervising partner, assessment of client harm under Rule 1.4 notification obligations, preservation of AI interaction logs for malpractice defense, and communication with the firm's professional liability insurer before taking remediation steps that may constitute admissions.
Maintain complete records of AI tool use on each matter: what was submitted, what the AI returned, what was verified, and who signed off. These records are the primary defense in malpractice proceedings — they demonstrate that the attorney took reasonable precautions and that any remaining error occurred despite adequate supervision. Without this record, the attorney cannot demonstrate compliance with the standard of care.
Review jurisdictional rules on limitation of liability provisions in engagement letters. In jurisdictions that permit fee agreements to include limitation of liability language, consider whether to include provisions addressing AI-assisted work product. Note that most jurisdictions prohibit prospective limitation of liability for gross negligence, a category that may encompass AI errors committed despite clear warning signs.
Document AI training completion for every attorney and staff member authorized to use AI tools for client matters. Training records are relevant to the negligent supervision analysis — a firm that can demonstrate that an attorney who caused AI-related malpractice had completed required training has a significantly stronger defense than a firm that deployed AI without documented training requirements.
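Two of the checklist items above, the citation verification record and the per-matter AI-use record, describe structured documentation. The sketch below is a minimal illustration of what such a record might look like, written in Python with hypothetical field names; it is not drawn from any particular practice management product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationCheck:
    """One verified citation: who confirmed it, against which database, and when."""
    citation: str
    verified_by: str      # the attorney who personally confirmed the citation
    database: str         # e.g. "Westlaw" or "Lexis"
    verified_on: date

@dataclass
class AIUseRecord:
    """Per-filing record of AI involvement: tool, output, and verification trail."""
    matter_id: str
    tool_name: str                     # must appear on the firm's approved-tool register
    output_reference: str              # pointer to the stored AI output in the matter file
    citation_checks: list[CitationCheck] = field(default_factory=list)

    def unverified(self, cited_in_filing: list[str]) -> list[str]:
        """Citations that appear in the filing but have no verification entry."""
        checked = {check.citation for check in self.citation_checks}
        return [c for c in cited_in_filing if c not in checked]
```

A filing workflow built around this record would block signature until unverified() returns an empty list, producing exactly the documentation the verification and record-keeping items above call for.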
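Deadline verification lends itself to the same treatment. The sketch below shows the independent check described in the checklist: recompute the deadline from the accrual date and the limitations period in the governing statute, and refuse to calendar an AI-supplied date that does not match. The three-year period in the example is a placeholder, not a statement about any jurisdiction's actual limitations period.

```python
from datetime import date

def independent_deadline(accrual: date, limitations_years: int) -> date:
    """Recompute the limitations deadline directly from the accrual date.

    Anniversary arithmetic only; tolling, discovery rules, and weekend or
    holiday roll-forward are deliberately out of scope for this sketch.
    """
    try:
        return accrual.replace(year=accrual.year + limitations_years)
    except ValueError:  # accrual on Feb 29 with a non-leap target year
        return accrual.replace(year=accrual.year + limitations_years, day=28)

def verify_calendared_deadline(ai_date: date, accrual: date, limitations_years: int) -> None:
    """Raise if an AI-calculated deadline disagrees with the independent calculation."""
    expected = independent_deadline(accrual, limitations_years)
    if ai_date != expected:
        raise ValueError(
            f"AI deadline {ai_date} does not match statute-based deadline {expected}; "
            "verify against the governing rule before calendaring."
        )

# Hypothetical three-year limitations period accruing on March 15, 2024.
verify_calendared_deadline(date(2027, 3, 15), date(2024, 3, 15), 3)
```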
How Claire Reduces AI Malpractice Exposure
Claire's Malpractice-by-Design Architecture
Claire's legal deployment addresses the specific malpractice vectors identified in the ABA's 2023 claims profile: citation accuracy, document review documentation, deadline verification, and audit trail preservation. Each architectural element maps to a specific malpractice claim category.
Citation Verification at the Source — Not at the Attorney's Desk
Claire verifies case citations against primary legal databases before delivering research output. An attorney using Claire for case law research receives citations that have been verified as extant and correctly characterized — not citations to verify after the fact. This eliminates the Mata/Park failure mode at the source rather than requiring the attorney to catch the AI's errors downstream.
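Claire's internal pipeline is not public, so the following is only a sketch of the general pattern under stated assumptions: a small set of known citations stands in for a query against a primary legal database, and anything that cannot be resolved is flagged before the research output is delivered to the attorney.

```python
# Stand-in for a primary legal database query (Westlaw, Lexis, or a court
# docket system); no vendor API is implied by this hard-coded set.
KNOWN_CASES = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
    "Park v. Kim, 91 F.4th 610 (2d Cir. 2024)",
}

def verify_before_delivery(draft_citations: list[str]) -> tuple[list[str], list[str]]:
    """Split drafted citations into (verified, unresolved) before research output ships."""
    verified = [c for c in draft_citations if c in KNOWN_CASES]
    unresolved = [c for c in draft_citations if c not in KNOWN_CASES]
    return verified, unresolved

verified, unresolved = verify_before_delivery([
    "Park v. Kim, 91 F.4th 610 (2d Cir. 2024)",
    "Hypothetical v. Nonexistent, 999 F.99th 1 (2099)",  # fabricated; must be flagged, never delivered
])
```

The design point is where the check runs: at the source, before delivery, rather than as a downstream cleanup task assigned to the attorney.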
Supervision Documentation Integrated Into Every Workflow
Every Claire workflow that produces client-deliverable work product includes a mandatory supervision documentation step. The supervising attorney's review is recorded in the matter file with timestamp, attorney identity, and the specific verification steps completed. This creates the audit trail that distinguishes compliant AI use from the unverified use documented in Park v. Kim and subsequent proceedings.
AI Audit Trail in the Firm's Own Practice Management System
Claire writes its audit trail to the firm's own practice management system — not to Claire's servers. The firm owns and controls the complete record of AI use on each matter. In malpractice proceedings, this record is the attorney's primary defense. In bar disciplinary proceedings, it is evidence of reasonable AI supervision practices. The record cannot be lost through vendor insolvency, data breach, or termination of the Claire subscription.
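What this looks like in practice depends on the firm's practice management platform, and Claire's actual write format is not reproduced here. A minimal sketch, assuming a hypothetical append-only log stored inside a matter directory the firm controls:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_ai_audit_entry(matter_dir: Path, entry: dict) -> Path:
    """Append an AI-use audit entry to storage the firm controls (here, a file in the matter directory)."""
    log_path = matter_dir / "ai_audit_log.jsonl"
    stamped = {"recorded_at": datetime.now(timezone.utc).isoformat(), **entry}
    with log_path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(stamped) + "\n")
    return log_path

# One entry mirroring the checklist: what was submitted, what came back,
# what was verified, and who signed off.
record_ai_audit_entry(Path("."), {
    "matter_id": "2026-0042",
    "prompt_summary": "draft indemnification clause per client instructions",
    "output_reference": "draft_agreement_v2.docx",
    "verification_steps": ["citations checked", "substantive review completed"],
    "signed_off_by": "Supervising Partner",
})
```

Because the log lives in the firm's own storage, it survives vendor insolvency, a vendor data breach, or termination of the subscription, which is the point the paragraph above makes.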
Underwriter-Ready AI Governance Documentation Package
Claire's deployment documentation package includes the written AI governance policy, citation verification protocol, vendor security assessment, and supervision documentation templates that professional liability underwriters are now requiring as part of law firm renewal applications. Firms using Claire can produce this documentation on request without assembling it from scratch.
The 34% increase in AI-adjacent malpractice claims documented in the ABA's 2023 profile is a leading indicator of a trend that will accelerate as AI use becomes universal in legal practice. The firms that establish rigorous AI supervision protocols, verify AI output as a matter of firm policy, and document their AI governance practices are building a professional responsibility defense and a malpractice defense at the same time. The firms that don't are creating the next Park v. Kim.
For the insurance implications of AI data breaches separate from malpractice, see client confidentiality technical architecture. For the bar ethics framework governing AI supervision, see bar ethics AI guidelines. For AI governance policies that address underwriter requirements, see AI governance for law firms.