AI Governance for Law Firms: ILTA 2024 Survey, the Jones Day Policy Incident, and the NIST AI RMF Legal Adaptation
The International Legal Technology Association's 2024 survey found that 68% of law firms lack a formal AI governance policy, even though every major state bar with AI ethics guidance requires one. The 2023 Jones Day AI policy leak demonstrated what happens when a firm's internal AI governance documents become public without a framework to contextualize them: the revelation that a prominent firm's internal policy prohibited AI use for client work while individual attorneys were using consumer ChatGPT created reputational and regulatory exposure the firm spent months managing. The NIST AI Risk Management Framework (AI RMF), adapted to the specific requirements of legal practice, and ISO 42001 provide the architecture for governance policies that simultaneously satisfy bar ethics requirements, professional liability underwriters, and sophisticated clients.
⚖ ILTA 2024 Technology Survey — Law Firm AI Governance Gap
| Source | International Legal Technology Association (ILTA), 2024 Technology Survey, published August 2024 |
| Governance Gap | 68% of law firms lack a formal written AI governance policy as of mid-2024 |
| AI Adoption Rate | 73% of surveyed firms report at least one AI tool deployed for legal work |
| Policy Gap Interpretation | 73% of firms have deployed AI while only 32% have governance policies, meaning at least 41% of firms (the 73% adoption rate minus the 32% policy rate, a lower bound) use AI without the governance framework required by their applicable state bars |
| Large Firm vs. Small Firm | Firms with 100+ attorneys: 47% have AI governance policy. Firms with under 10 attorneys: 12% have AI governance policy |
| Source URL | iltanet.org/resources/surveys |
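The 41% in the table is a floor, not an exact overlap figure; a few lines of arithmetic make the bound explicit. A minimal sketch, where the variable names are ours, not ILTA's:

```python
# Minimal arithmetic behind the ILTA 2024 "governance gap" figure.
# The 41% is a lower bound: it assumes every firm with a policy is
# also among the firms that have deployed AI.
adoption_rate = 0.73   # firms with at least one AI tool deployed
policy_rate = 0.32     # firms with a formal written AI governance policy

# At minimum, this share of all firms uses AI with no governance policy.
ungoverned_lower_bound = max(0.0, adoption_rate - policy_rate)
print(f"Firms using AI without a policy: at least {ungoverned_lower_bound:.0%}")
# -> Firms using AI without a policy: at least 41%
```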
⚖ Jones Day AI Policy Incident (2023)
| Firm | Jones Day (Am Law 100 firm, approximately 2,500 attorneys globally) |
| Incident Date | March 2023 |
| Nature | Internal AI acceptable use policy became public through a media report; policy prohibited use of AI tools for client work while individual attorneys were using consumer ChatGPT for legal research |
| Policy Content | Policy prohibited inputting client information into ChatGPT or similar AI tools without firm IT approval; policy stated that unauthorized AI use could expose client data and waive privilege |
| Significance | Confirmed that Am Law 100 firms had already identified the privilege waiver and data exposure risks; demonstrated the reputational risk of policy-practice gaps where attorneys bypass governance policies |
| Outcome | Multiple law firms publicly disclosed their AI governance policies following the Jones Day incident; accelerated state bar AI ethics opinion development |
What the Jones Day Incident Reveals About Policy-Practice Gaps
The Jones Day AI policy incident illustrates a governance failure common across the legal industry: firms adopt policies prohibiting or limiting AI use for client work, and attorneys bypass those policies through personal accounts or unofficial channels. The gap between stated policy and actual practice produces a worse outcome than either (1) a permissive policy the firm actually enforces or (2) no policy at all, because the gap combines the liability of non-compliant AI use with the reputational liability of maintaining a policy the firm does not follow.
The Jones Day policy was not wrong on the merits: prohibiting consumer ChatGPT for client work is appropriate under ABA Model Rule 1.6 and the privilege analysis in subsequent cases, including the Heppner ruling. The failure was one of implementation. The policy offered no compliant alternative through which attorneys could capture AI efficiency gains, so attorneys faced a choice between efficiency (using consumer ChatGPT) and policy compliance (doing the work without AI), and many chose efficiency.
The Governance Framework Principle: An effective AI governance policy is not primarily a prohibition document. It is a permissions document that specifies which AI tools are approved for which uses under which conditions — and provides the approved tools that enable attorneys to be efficient without creating compliance exposure. Prohibition without alternative creates the Jones Day gap. Approval with appropriate controls creates sustainable compliance.
NIST AI Risk Management Framework Adapted for Legal Practice
The National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF), published in January 2023, provides a voluntary framework for managing risks to trustworthy AI. It organizes AI governance into four functions: Govern, Map, Measure, and Manage. Adapted for law firm use, the NIST AI RMF addresses the obligations under ABA Model Rules 1.1 (competence), 1.6 (confidentiality), and 5.3 (supervision) that state bar AI ethics opinions emphasize.
GOVERN: Establishing AI Governance Structures
NIST AI RMF's Govern function requires establishing an organizational culture and processes for AI risk management. For law firms, this translates into:
- AI Governance Committee: A standing committee with responsibility for AI policy development, tool approval, and compliance monitoring. Membership should include: managing partner (or designee), IT director, general counsel or ethics counsel, and a representative practice group leader from each major practice area.
- AI Policy Document: A written policy covering approved tools, prohibited tools, acceptable use criteria, confidentiality requirements, supervision obligations, training requirements, incident reporting, and quarterly review process.
- Accountability Structure: Named individuals responsible for AI governance at the firm level (AI Committee Chair), practice group level (Practice Group AI Liaison), and matter level (Supervising Attorney).
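A minimal sketch of how this three-level accountability structure might be recorded, useful for answering "who is accountable for AI use on this matter?" during an audit. The class and field names are illustrative assumptions, not part of the NIST framework:

```python
from dataclasses import dataclass, field

@dataclass
class AccountabilityStructure:
    committee_chair: str                      # firm level: AI Committee Chair
    practice_group_liaisons: dict[str, str]   # practice group -> AI Liaison
    # matter level: matter number -> supervising attorney
    supervising_attorneys: dict[str, str] = field(default_factory=dict)

    def responsible_for(self, matter: str, practice_group: str) -> list[str]:
        """Everyone accountable for AI use on a given matter, bottom-up."""
        chain = []
        if matter in self.supervising_attorneys:
            chain.append(self.supervising_attorneys[matter])
        if practice_group in self.practice_group_liaisons:
            chain.append(self.practice_group_liaisons[practice_group])
        chain.append(self.committee_chair)
        return chain
```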
MAP: Identifying and Categorizing AI Risks
NIST AI RMF's Map function requires identifying and categorizing AI risks in context. For law firms, the risk mapping exercise must address four specific risk categories, modeled in the sketch after this list:
- Confidentiality Risk: The risk that client confidential information is exposed to third parties through AI tool data routing, retention, or training. Risk level varies by AI tool architecture and data classification of the matter.
- Accuracy Risk: The risk that AI output contains errors — hallucinated citations, incorrect legal analysis, drafting errors — that harm clients if relied upon without adequate verification.
- Privilege Risk: The risk that AI tool use constitutes disclosure to a third party that defeats attorney-client privilege or work product protection, as established in In re Grand Jury (9th Cir. 2023) and United States v. Heppner (S.D.N.Y. 2026).
- Supervision Risk: The risk that AI output is incorporated into work product without adequate attorney review, violating the standard of care under Rules 5.1 and 5.3 and creating malpractice exposure.
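The four categories lend themselves to a structured record per tool and use case. A minimal sketch, assuming a simple low/managed/high scale; the scale and names are our illustration, not NIST terminology:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    CONFIDENTIALITY = "confidentiality"
    ACCURACY = "accuracy"
    PRIVILEGE = "privilege"
    SUPERVISION = "supervision"

class RiskLevel(Enum):
    LOW = 1
    MANAGED = 2
    HIGH = 3

@dataclass
class UseCaseRiskAssessment:
    tool: str
    use_case: str                          # e.g. "legal research memo drafting"
    ratings: dict[RiskCategory, RiskLevel]

    def requires_committee_review(self) -> bool:
        """Any HIGH rating should block deployment pending committee review."""
        return any(level is RiskLevel.HIGH for level in self.ratings.values())

# Hypothetical example entry for the AI governance file.
assessment = UseCaseRiskAssessment(
    tool="ResearchAssistant",  # hypothetical tool name
    use_case="citation-supported legal research",
    ratings={
        RiskCategory.CONFIDENTIALITY: RiskLevel.LOW,
        RiskCategory.ACCURACY: RiskLevel.MANAGED,
        RiskCategory.PRIVILEGE: RiskLevel.LOW,
        RiskCategory.SUPERVISION: RiskLevel.MANAGED,
    },
)
print(assessment.requires_committee_review())  # False
```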
MEASURE: Assessing AI Performance and Risk
NIST AI RMF's Measure function requires quantitative and qualitative assessment of AI performance and risk. For law firm AI governance, this translates into:
- Vendor Security Assessment: SOC 2 Type II review, ISO 27001 controls mapping, sub-processor identification, and data retention period documentation for each approved AI tool
- Citation Accuracy Monitoring: Systematic sampling of AI research outputs to assess citation accuracy rates for each tool used for legal research, documented quarterly
- Incident Tracking: Documentation of AI-related errors, near-misses, and policy violations in a centralized incident log, reviewed quarterly by the AI Governance Committee
- Training Completion Rates: Tracking of attorney and staff AI training completion rates by tool category, with remediation requirements for non-compliant staff
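Two of these measurements reduce to straightforward calculations. A hedged sketch of quarterly citation-accuracy sampling and training-completion tracking; the sample size and function names are illustrative assumptions:

```python
import random

def sample_outputs(all_output_ids: list[str], k: int = 25) -> list[str]:
    """Random sample of AI research outputs for manual verification this quarter."""
    return random.sample(all_output_ids, min(k, len(all_output_ids)))

def citation_accuracy(sampled_citations: list[bool]) -> float:
    """Share of sampled citations verified as real and on point (non-empty sample)."""
    return sum(sampled_citations) / len(sampled_citations)

def training_completion(completed: set[str], authorized: set[str]) -> float:
    """Completion rate among attorneys authorized to use the tool category."""
    return len(completed & authorized) / len(authorized)

# Example: 23 of 25 sampled citations check out this quarter.
verified = [True] * 23 + [False] * 2
print(f"Citation accuracy: {citation_accuracy(verified):.0%}")  # 92%
```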
MANAGE: Treating and Monitoring AI Risks
NIST AI RMF's Manage function requires implementing risk treatments and monitoring their effectiveness. For law firms:
- Tool Approval Process: A formal approval process for new AI tools that includes vendor security assessment, legal review of vendor terms, bar ethics compliance analysis, and practice group suitability assessment
- Continuous Monitoring: Regular review of AI tool vendor updates (new features, changed terms of service, security incidents) and assessment of policy implications
- Incident Response: A documented incident response process for AI-related errors including client notification assessment, bar reporting assessment, and policy update requirements
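The tool approval process is essentially a gate that passes only when every required assessment is documented. A minimal sketch, with assessment names mirroring the bullet above; the structure itself is our assumption:

```python
REQUIRED_ASSESSMENTS = (
    "vendor_security_assessment",
    "vendor_terms_legal_review",
    "bar_ethics_compliance_analysis",
    "practice_group_suitability",
)

def approval_status(completed: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, missing assessments) for a candidate AI tool."""
    missing = [a for a in REQUIRED_ASSESSMENTS if not completed.get(a, False)]
    return (not missing, missing)

# A tool missing its practice group suitability assessment stays unapproved.
approved, missing = approval_status({
    "vendor_security_assessment": True,
    "vendor_terms_legal_review": True,
    "bar_ethics_compliance_analysis": True,
    "practice_group_suitability": False,
})
print(approved, missing)  # False ['practice_group_suitability']
```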
ISO 42001: AI Management System Standard for Legal
ISO/IEC 42001:2023, the first international standard for artificial intelligence management systems (AIMS), was published in December 2023. The standard provides requirements for establishing, implementing, maintaining, and continually improving an AI management system within the context of an organization. For law firms, ISO 42001 certification demonstrates to enterprise clients — particularly financial services, healthcare, and government clients with their own AI governance requirements — that the firm has implemented a structured AI management system that meets international standards.
ISO 42001 Requirements Most Relevant to Law Firms
- Clause 4: Context of the Organization: Requires firms to identify internal and external issues that affect AI use, including bar ethics obligations, client confidentiality requirements, and jurisdictional variations in AI regulation. The context analysis is the foundation for the rest of the AIMS.
- Clause 6: Planning: Requires AI risk assessment documenting the risks associated with each AI use case — using the Map function from NIST AI RMF — and establishing objectives for each AI use case that are measurable and reviewed periodically.
- Clause 8: Operation: Requires controlled implementation of AI systems, including vendor evaluation procedures, access controls, and incident management. The operational requirements map directly to the technical controls required by Florida Bar Op. 24-1 and California's four-part due diligence framework.
- Clause 9: Performance Evaluation: Requires monitoring, measurement, analysis, and evaluation of AI system performance. For law firms, this means systematic evaluation of AI tool accuracy, confidentiality compliance, and bar ethics compliance — not just anecdotal assessment.
- Annex A Controls: ISO 42001 includes a set of controls in Annex A specifically for AI risks, including controls for AI transparency, explainability, fairness, and robustness. The transparency and explainability controls are particularly relevant to law firms that need to explain AI-assisted work product to clients and courts.
AI Vendor Evaluation Criteria for Law Firms
The NIST AI RMF and ISO 42001 frameworks both require documented vendor evaluation processes. Florida Bar Op. 24-1 and California's four-part framework specify the substantive elements of that evaluation. The following criteria represent the intersection of all applicable requirements; a machine-checkable sketch follows the list:
Minimum Vendor Evaluation Criteria
- Data retention policy: Zero retention for session content, or clearly defined retention period with documented deletion procedures. Consumer AI tools with indefinite retention fail this criterion.
- Training data exclusion: Contractual (not opt-out) prohibition on using client data for model training. The contractual guarantee must cover sub-processors, not just the primary vendor.
- Isolated tenant architecture: Confirmed absence of cross-client data pathways, documented in architecture specifications, not merely asserted in marketing materials.
- SOC 2 Type II report: Current (within 12 months) Type II report with scope covering AI processing systems and without sub-processor carve-outs for the systems that process client data.
- Sub-processor disclosure: Complete list of sub-processors with confidentiality obligations documented in the vendor's DPA or equivalent agreement.
- Breach notification SLA: Contractual obligation to notify the firm within 24-72 hours of any security incident involving firm client data, with a documented incident response process.
- Right to audit: Contractual right for the firm to audit the vendor's data handling practices annually, or to require the vendor to produce current third-party audit reports demonstrating compliance.
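These criteria can be encoded as machine-checkable fields so that a vendor assessment file records pass or fail against each one. A sketch under stated assumptions; the field names and pass/fail logic are illustrative, not language from any bar opinion:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorProfile:
    retention_days: Optional[int]          # None = indefinite retention
    training_exclusion_contractual: bool   # contractual, not opt-out
    training_exclusion_covers_subs: bool   # guarantee extends to sub-processors
    isolated_tenant_documented: bool       # documented in architecture specs
    soc2_type2_age_months: Optional[int]   # None = no report
    subprocessors_disclosed: bool          # complete list in DPA or equivalent
    breach_notice_hours: Optional[int]     # contractual SLA; None = no SLA
    right_to_audit: bool                   # audit right or third-party reports

def failed_criteria(v: VendorProfile) -> list[str]:
    """Return the minimum evaluation criteria this vendor fails, if any."""
    failures = []
    if v.retention_days is None:
        failures.append("indefinite data retention")
    if not (v.training_exclusion_contractual and v.training_exclusion_covers_subs):
        failures.append("training exclusion not contractual across sub-processors")
    if not v.isolated_tenant_documented:
        failures.append("isolated tenancy not documented")
    if v.soc2_type2_age_months is None or v.soc2_type2_age_months > 12:
        failures.append("no current SOC 2 Type II report")
    if not v.subprocessors_disclosed:
        failures.append("sub-processor list missing")
    if v.breach_notice_hours is None or v.breach_notice_hours > 72:
        failures.append("breach notification SLA absent or exceeds 72 hours")
    if not v.right_to_audit:
        failures.append("no audit right or equivalent third-party reports")
    return failures
```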
Law Firm AI Governance Implementation Checklist
AI Governance Framework Implementation: Law Firm Checklist
Draft and adopt a written AI governance policy covering: approved tools by use category, prohibited tools, acceptable use conditions, confidentiality requirements, supervision obligations, training requirements, incident reporting, and a quarterly review schedule. The NYSBA found fewer than 15% of firms had such a policy as of April 2024, leaving the remaining 85-plus percent exposed to disciplinary risk.
Establish a standing AI Governance Committee with named members from firm management, IT, and practice groups. The committee must meet at least quarterly to review incident reports, assess new tools, and update policies in response to evolving bar guidance. Document meeting minutes as evidence of governance activity for bar regulators and underwriters.
Maintain a register of approved AI tools with specific use conditions for each tool. The register must specify: which practice areas may use the tool, which data classification levels the tool may process, what verification is required before use, and the approval date and next review date (see the register sketch after this checklist). The Jones Day incident demonstrated what happens when policy and practice diverge; the register is the mechanism for managing that gap.
Complete a NIST AI RMF Map function risk assessment for each AI use case deployed at the firm: confidentiality risk, accuracy risk, privilege risk, and supervision risk. Document the assessment in the AI governance file. Update the assessment when the tool changes materially or when new bar guidance is issued addressing the use case.
Complete the minimum vendor evaluation criteria for each approved AI tool and document the evaluation in a vendor assessment file. The file must include: SOC 2 Type II report review notes, data retention analysis, training exclusion documentation, sub-processor list, breach notification SLA, and right to audit confirmation. This package satisfies Florida Bar Op. 24-1 and California's four-part framework simultaneously.
Implement mandatory AI training for all attorneys and staff authorized to use AI tools for client matters. Training must cover: how approved tools work, what they can and cannot do, verification requirements for each task type, and the disciplinary consequences of policy violations. Track completion rates and implement escalation procedures for non-completion.
Maintain a centralized AI incident log documenting: tool errors, near-misses, policy violations, and bar compliance issues. Review the log quarterly at the AI Governance Committee meeting. Use incident data to update the approved tools register, verification requirements, and training content. The incident log is the primary evidence of active governance in bar disciplinary proceedings.
Conduct quarterly audits of actual AI tool use against the approved tools register (one log-reconciliation approach appears in the sketch after this checklist). Audit methods include: IT system logs showing which AI tools are accessing the firm's network, attorney self-certification forms, and periodic review of submitted work product for AI disclosure language. The Jones Day incident showed that policy without audit creates worse liability than no policy.
Include AI governance policy acknowledgment in lateral hire onboarding. Lateral attorneys bring AI habits and tool preferences from prior firms — some compliant, some not. Require written acknowledgment of the firm's AI governance policy and confirmation that unauthorized tools will not be used for firm client matters. The Jenkins conflict case demonstrates the downstream consequences of inadequate lateral onboarding.
For firms seeking ISO 42001 certification to satisfy enterprise client vendor qualification requirements, conduct a gap analysis against the ISO 42001 standard before initiating the certification process. The gap analysis will identify which elements of the firm's AI governance framework require enhancement before the certification audit.
Review and update the AI governance policy annually, or when significant new bar guidance is issued. The policy must be updated to reflect guidance issued by bars in each jurisdiction where the firm practices. A policy current as of January 2024 does not address bar opinions issued in late 2024 and 2025. Set calendar reminders tied to known state bar AI opinion release schedules.
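Two of the checklist items, the approved-tools register and the quarterly use audit, pair naturally: the audit reconciles observed use against the register. A minimal sketch assuming IT logs can be reduced to (tool, practice area) pairs; all data shapes here are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    tool: str
    practice_areas: set[str]         # practice areas approved to use the tool
    max_data_classification: str     # e.g. "confidential"
    verification_required: str       # e.g. "cite-check all citations"
    approved: date
    next_review: date

def audit_observed_use(register: dict[str, RegisterEntry],
                       observed: list[tuple[str, str]]) -> list[str]:
    """Flag (tool, practice_area) pairs from IT logs that fall outside the
    register: unapproved tools or use outside the approved practice areas."""
    findings = []
    for tool, practice_area in observed:
        entry = register.get(tool)
        if entry is None:
            findings.append(f"unapproved tool in use: {tool}")
        elif practice_area not in entry.practice_areas:
            findings.append(f"{tool} used outside approved scope: {practice_area}")
    return findings
```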
How Claire Accelerates AI Governance Implementation
Claire's AI Governance Deployment Package
Claire's law firm deployment includes the governance documentation that the ILTA survey found 68% of firms lack, and that bar ethics opinions, professional liability underwriters, and enterprise clients are requiring. The governance package is not a static, off-the-shelf template; it is a documentation set customized to the firm's specific deployment architecture.
Pre-Built AI Governance Policy Template (Rule 5.3(a) Compliant)
Claire provides a written AI governance policy template that satisfies ABA Rule 5.3(a)'s firm-level supervision requirement and has been reviewed by bar ethics specialists in California, New York, Florida, Texas, New Jersey, and Illinois. The template is customizable by practice area and is updated quarterly as state bar AI ethics guidance evolves.
Complete Vendor Assessment Documentation Package
Claire's deployment documentation package provides all elements of the vendor assessment required by Florida Bar Op. 24-1 and California's four-part framework: SOC 2 Type II report, ISO 27001 controls mapping, data retention documentation (zero retention), training exclusion guarantee, sub-processor list with confidentiality obligations, breach notification SLA, and right to audit. Firms using Claire can complete the California four-part due diligence analysis from Claire's documentation package alone.
NIST AI RMF Risk Assessment Framework for Legal
Claire provides a NIST AI RMF Map function risk assessment template pre-populated with the specific risk categories relevant to legal AI use: confidentiality risk (Claire: low, due to zero-retention architecture), accuracy risk (Claire: managed, through citation verification and supervision workflow), privilege risk (Claire: low, due to isolated deployment), and supervision risk (Claire: managed, through supervision documentation integration).
Policy-Practice Alignment Through System-Level Controls
Claire's deployment model aligns policy and practice at the system level — not just through attorney training. Because Claire is the firm's approved AI tool, and because access to consumer AI tools for client matters is logged and monitored through the firm's network security controls, the Jones Day gap (policy says no, attorneys do it anyway) is closed through architecture, not just admonition.
The 68% of law firms without AI governance policies documented in the ILTA 2024 survey are not firms that have decided governance is unnecessary; they are firms that have not yet built the framework. Bar ethics obligations, underwriting requirements, and enterprise client expectations are creating urgency. The NIST AI RMF and ISO 42001 frameworks provide the structure, and the Jones Day incident provides the cautionary example of policy without implementation. The firms building governance frameworks now will not be learning those lessons in disciplinary proceedings or through the next malpractice claim.
For the bar ethics requirements that make AI governance mandatory, see bar ethics AI guidelines. For the malpractice insurance underwriting requirements that governance satisfies, see AI malpractice liability. For the multi-practice conflicts coordination that requires governance infrastructure, see multi-practice AI coordination.