AI Governance for Law Firms: ILTA 2024 Survey, the Jones Day Policy Incident, and the NIST AI RMF Legal Adaptation

The International Legal Technology Association's 2024 survey found that 68% of law firms lack a formal AI governance policy, even though every major state bar with AI ethics guidance requires one. The Jones Day AI policy leak of 2023 demonstrated what happens when a firm's internal AI governance documents become public without a governance framework to provide context: a prominent firm's internal policy prohibited AI use for client work while individual attorneys were using consumer ChatGPT, creating reputational and regulatory exposure the firm spent months managing. The NIST AI Risk Management Framework (AI RMF), adapted for the specific requirements of legal practice, and ISO 42001 provide the architecture for governance policies that simultaneously satisfy bar ethics requirements, professional liability underwriters, and sophisticated clients.

⚖ ILTA 2024 Technology Survey — Law Firm AI Governance Gap

Source: International Legal Technology Association (ILTA), 2024 Technology Survey, published August 2024
Governance Gap: 68% of law firms lack a formal written AI governance policy as of mid-2024
AI Adoption Rate: 73% of surveyed firms report at least one AI tool deployed for legal work
Policy Gap Interpretation: 73% of firms have deployed AI; only 32% have governance policies — meaning approximately 41% of firms use AI without the governance framework required by their applicable state bars
Large Firm vs. Small Firm: Firms with 100+ attorneys: 47% have an AI governance policy. Firms with under 10 attorneys: 12% have one
Source URL: iltanet.org/resources/surveys

⚖ Jones Day AI Policy Incident (2023)

Firm: Jones Day (Am Law 100 firm, approximately 2,500 attorneys globally)
Incident Date: March 2023
Nature: Internal AI acceptable use policy became public through a media report; the policy prohibited use of AI tools for client work while individual attorneys were using consumer ChatGPT for legal research
Policy Content: Prohibited inputting client information into ChatGPT or similar AI tools without firm IT approval; stated that unauthorized AI use could expose client data and waive privilege
Significance: Confirmed that Am Law 100 firms had already identified the privilege waiver and data exposure risks; demonstrated the reputational risk of policy-practice gaps where attorneys bypass governance policies
Outcome: Multiple law firms publicly disclosed their AI governance policies following the incident; accelerated state bar AI ethics opinion development
68%
Law firms with no AI governance policy — despite 73% having deployed AI tools
The ILTA 2024 governance gap is worse than it appears: among firms that do have governance policies, fewer than 40% have reviewed those policies since initial adoption — meaning they may not reflect the bar ethics guidance issued in 2024 or the specific tools the firm has deployed. A policy written in 2022 for general AI tools does not address the GPT-4o, Claude 3.5, and specialized legal AI tools that attorneys are using in 2026.

What the Jones Day Incident Reveals About Policy-Practice Gaps

The Jones Day AI policy incident illustrates a governance failure that is common across the legal industry: firms adopt policies prohibiting or limiting AI use for client work, but attorneys bypass those policies using personal accounts or unofficial channels. The gap between the firm's stated policy and the actual practice creates a worse outcome than either (1) having a permissive policy that the firm enforces or (2) having no policy at all — because the gap creates both the liability of non-compliant AI use and the reputational liability of having a policy the firm does not follow.

The Jones Day policy was not wrong. It was correct — prohibiting the use of consumer ChatGPT for client work is appropriate under ABA Model Rule 1.6 and the privilege analysis in subsequent cases including the Heppner ruling. The problem was implementation: the policy existed but was not accompanied by an alternative that enabled attorneys to use AI for efficiency gains within a compliant framework. Attorneys faced a choice between efficiency gains (using consumer ChatGPT) and policy compliance (doing the work without AI) — and many chose efficiency.

The Governance Framework Principle: An effective AI governance policy is not primarily a prohibition document. It is a permissions document that specifies which AI tools are approved for which uses under which conditions — and provides the approved tools that enable attorneys to be efficient without creating compliance exposure. Prohibition without alternative creates the Jones Day gap. Approval with appropriate controls creates sustainable compliance.

NIST AI Risk Management Framework Adapted for Legal Practice

The National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF), published in January 2023, provides a voluntary framework for managing risks to trustworthy AI. The framework organizes AI risk management into four functions: Govern, Map, Measure, and Manage. Adapted for law firm use, the NIST AI RMF addresses the specific requirements of ABA Model Rules 1.1 (competence), 1.6 (confidentiality), and 5.3 (supervision) that state bar AI ethics opinions require.

GOVERN: Establishing AI Governance Structures

NIST AI RMF's Govern function requires establishing an organizational culture and processes for AI risk management. For law firms, this translates into the structures detailed in the implementation checklist below: a written AI governance policy, a standing AI Governance Committee, and an approved tools register.

MAP: Identifying and Categorizing AI Risks

NIST AI RMF's Map function requires identifying and categorizing AI risks in context. For law firms, the risk mapping exercise must address four specific risk categories: confidentiality risk, accuracy risk, privilege risk, and supervision risk.

MEASURE: Assessing AI Performance and Risk

NIST AI RMF's Measure function requires quantitative and qualitative assessment of AI performance and risk. For law firm AI governance, this translates into tracking tool errors, near-misses, and policy violations through a centralized incident log and reviewing that data on a regular schedule.

MANAGE: Treating and Monitoring AI Risks

NIST AI RMF's Manage function requires implementing risk treatments and monitoring their effectiveness. For law firms, this means acting on incident data: updating the approved tools register, verification requirements, and training content as new risks surface.

ISO 42001: AI Management System Standard for Legal

ISO/IEC 42001:2023, the first international standard for artificial intelligence management systems (AIMS), was published in December 2023. The standard provides requirements for establishing, implementing, maintaining, and continually improving an AI management system within the context of an organization. For law firms, ISO 42001 certification demonstrates to enterprise clients — particularly financial services, healthcare, and government clients with their own AI governance requirements — that the firm has implemented a structured AI management system that meets international standards.

ISO 42001 Requirements Most Relevant to Law Firms

AI Vendor Evaluation Criteria for Law Firms

The NIST AI RMF and ISO 42001 frameworks both require documented vendor evaluation processes. Florida Bar Op. 24-1 and California's four-part framework specify the substantive elements of that evaluation. The following criteria represent the intersection of all applicable requirements:

Minimum Vendor Evaluation Criteria

- SOC 2 Type II report review
- Data retention analysis (including any zero-retention guarantees)
- Training exclusion documentation (client data excluded from model training)
- Sub-processor list with confidentiality obligations
- Breach notification SLA
- Right to audit confirmation

Law Firm AI Governance Implementation Checklist

AI Governance Framework Implementation: Law Firm Checklist

01
Written AI Governance Policy (Required by Rule 5.3(a))

Draft and adopt a written AI governance policy covering: approved tools by use category, prohibited tools, acceptable use conditions, confidentiality requirements, supervision obligations, training requirements, incident reporting, and a quarterly review schedule. The NYSBA found fewer than 15% of firms had this as of April 2024, leaving the remaining 85% exposed to disciplinary risk.

02
AI Governance Committee Establishment

Establish a standing AI Governance Committee with named members from firm management, IT, and practice groups. The committee must meet at least quarterly to review incident reports, assess new tools, and update policies in response to evolving bar guidance. Document meeting minutes as evidence of governance activity for bar regulators and underwriters.

03
Approved Tools Register with Use Conditions

Maintain a register of approved AI tools with specific use conditions for each tool. The register must specify: which practice areas may use the tool, which data classification levels the tool may process, what verification is required before use, and the approval date and next review date. The Jones Day incident demonstrated what happens when policy and practice diverge — the register is the mechanism for managing that gap.
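The register described above can be modeled as structured data so that review dates and use conditions are machine-checkable. A minimal sketch in Python; the tool name, field names, and review logic here are illustrative assumptions, not drawn from any actual firm register:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    """One entry in the firm's approved AI tools register."""
    name: str
    practice_areas: list[str]      # which practice groups may use the tool
    max_data_classification: str   # highest data class the tool may process
    verification_required: str     # what a supervising attorney must verify
    approval_date: date
    next_review_date: date

    def review_overdue(self, today: date) -> bool:
        # Flag entries whose scheduled governance review has lapsed
        return today > self.next_review_date

# Hypothetical example entry
entry = ApprovedTool(
    name="ExampleLegalAI",
    practice_areas=["litigation", "corporate"],
    max_data_classification="confidential",
    verification_required="attorney verifies all citations before filing",
    approval_date=date(2025, 1, 15),
    next_review_date=date(2025, 4, 15),
)
```

A scheduled job iterating over such entries and flagging `review_overdue` results would give the Governance Committee a standing agenda item for each quarterly meeting.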

04
NIST AI RMF Risk Assessment for Each Use Case

Complete a NIST AI RMF Map function risk assessment for each AI use case deployed at the firm: confidentiality risk, accuracy risk, privilege risk, and supervision risk. Document the assessment in the AI governance file. Update the assessment when the tool changes materially or when new bar guidance is issued addressing the use case.
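The four-category Map assessment can be recorded as a simple structure. A sketch assuming a three-level rating scale; the scale and the committee-review trigger are assumptions for illustration, not taken from the NIST AI RMF text:

```python
RATINGS = ("low", "managed", "high")

def map_assessment(confidentiality: str, accuracy: str,
                   privilege: str, supervision: str) -> dict:
    """Record a Map-function risk assessment for one legal AI use case."""
    ratings = {
        "confidentiality_risk": confidentiality,
        "accuracy_risk": accuracy,
        "privilege_risk": privilege,
        "supervision_risk": supervision,
    }
    for category, level in ratings.items():
        if level not in RATINGS:
            raise ValueError(f"{category}: unknown rating {level!r}")
    # Assumed rule: any 'high' rating triggers committee review before deployment
    ratings["needs_committee_review"] = "high" in ratings.values()
    return ratings
```

Storing the returned dict in the AI governance file, with a timestamp, gives the documented assessment the checklist item calls for.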

05
Vendor Evaluation Documentation Package

Complete the minimum vendor evaluation criteria for each approved AI tool and document the evaluation in a vendor assessment file. The file must include: SOC 2 Type II report review notes, data retention analysis, training exclusion documentation, sub-processor list, breach notification SLA, and right to audit confirmation. This package satisfies Florida Bar Op. 24-1 and California's four-part framework simultaneously.
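The completeness check for a vendor assessment file can be automated. A sketch using the six elements named in the checklist item above; the identifiers are illustrative labels, not a regulatory taxonomy:

```python
# Elements listed in the vendor evaluation checklist item (illustrative names)
REQUIRED_ELEMENTS = {
    "soc2_type2_review_notes",
    "data_retention_analysis",
    "training_exclusion_documentation",
    "subprocessor_list",
    "breach_notification_sla",
    "right_to_audit_confirmation",
}

def missing_elements(vendor_file: set[str]) -> set[str]:
    """Return the evaluation elements still absent from a vendor assessment file."""
    return REQUIRED_ELEMENTS - vendor_file
```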

06
Mandatory AI Training Program with Completion Tracking

Implement mandatory AI training for all attorneys and staff authorized to use AI tools for client matters. Training must cover: how approved tools work, what they can and cannot do, verification requirements for each task type, and the disciplinary consequences of policy violations. Track completion rates and implement escalation procedures for non-completion.
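Completion tracking with escalation can be reduced to a simple query over a completion record. A sketch under the assumption that training status is kept as a name-to-boolean mapping:

```python
def overdue_training(completions: dict[str, bool]) -> list[str]:
    """List attorneys and staff who have not completed mandatory AI training,
    sorted for the escalation report."""
    return sorted(name for name, done in completions.items() if not done)
```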

07
AI Incident Log and Quarterly Review

Maintain a centralized AI incident log documenting: tool errors, near-misses, policy violations, and bar compliance issues. Review the log quarterly at the AI Governance Committee meeting. Use incident data to update the approved tools register, verification requirements, and training content. The incident log is the primary evidence of active governance in bar disciplinary proceedings.
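The quarterly review can start from a tally of logged incidents by type. A sketch using the four incident types named above; the type labels and log shape are assumptions:

```python
from collections import Counter

# Incident types from the checklist item above (illustrative labels)
INCIDENT_TYPES = ("tool_error", "near_miss", "policy_violation", "bar_compliance")

def quarterly_summary(log: list[dict]) -> Counter:
    """Tally incidents by type for the Governance Committee's quarterly review."""
    for record in log:
        if record["type"] not in INCIDENT_TYPES:
            raise ValueError(f"unknown incident type: {record['type']!r}")
    return Counter(record["type"] for record in log)
```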

08
Policy-Practice Alignment Audits (Preventing the Jones Day Gap)

Conduct quarterly audits of actual AI tool use against the approved tools register. Audit methods include: IT system logs showing which AI tools are accessing the firm's network, attorney self-certification forms, and periodic review of submitted work product for AI disclosure language. The Jones Day incident showed that policy without audit creates worse liability than no policy.
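The core of the alignment audit is a set difference: tools observed in IT system logs that are not on the approved register. A minimal sketch; the tool names are hypothetical:

```python
def unapproved_tools(observed: set[str], approved: set[str]) -> set[str]:
    """Tools seen in network logs but absent from the approved register.
    A non-empty result is the policy-practice gap the audit exists to surface."""
    return observed - approved
```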

09
Lateral Hire AI Governance Onboarding

Include AI governance policy acknowledgment in lateral hire onboarding. Lateral attorneys bring AI habits and tool preferences from prior firms — some compliant, some not. Require written acknowledgment of the firm's AI governance policy and confirmation that unauthorized tools will not be used for firm client matters. The Jenkins conflict case demonstrates the downstream consequences of inadequate lateral onboarding.

10
ISO 42001 Gap Analysis (for Enterprise Client Requirements)

For firms seeking ISO 42001 certification to satisfy enterprise client vendor qualification requirements, conduct a gap analysis against the ISO 42001 standard before initiating the certification process. The gap analysis will identify which elements of the firm's AI governance framework require enhancement before the certification audit.

11
Annual Policy Review Against Evolving Bar Guidance

Review and update the AI governance policy annually, or when significant new bar guidance is issued. The policy must be updated to reflect guidance issued by bars in each jurisdiction where the firm practices. A policy current as of January 2024 does not address bar opinions issued in late 2024 and 2025. Set calendar reminders tied to known state bar AI opinion release schedules.

How Claire Accelerates AI Governance Implementation

Claire's AI Governance Deployment Package

Claire's law firm deployment includes the governance documentation that the ILTA survey found 68% of firms lack — and that bar ethics opinions, professional liability underwriters, and enterprise clients are requiring. The governance package is not a one-size-fits-all template; it is a documentation set customized to the firm's specific deployment architecture.

Pre-Built AI Governance Policy Template (Rule 5.3(a) Compliant)

Claire provides a written AI governance policy template that satisfies ABA Rule 5.3(a)'s firm-level supervision requirement and has been reviewed by bar ethics specialists in California, New York, Florida, Texas, New Jersey, and Illinois. The template is customizable by practice area and is updated quarterly as state bar AI ethics guidance evolves.

Complete Vendor Assessment Documentation Package

Claire's deployment documentation package provides all elements of the vendor assessment required by Florida Bar Op. 24-1 and California's four-part framework: SOC 2 Type II report, ISO 27001 controls mapping, data retention documentation (zero retention), training exclusion guarantee, sub-processor list with confidentiality obligations, breach notification SLA, and right to audit. Firms using Claire can complete the California four-part due diligence analysis from Claire's documentation package alone.

NIST AI RMF Risk Assessment Framework for Legal

Claire provides a NIST AI RMF Map function risk assessment template pre-populated with the specific risk categories relevant to legal AI use: confidentiality risk (Claire: low, due to zero-retention architecture), accuracy risk (Claire: managed, through citation verification and supervision workflow), privilege risk (Claire: low, due to isolated deployment), and supervision risk (Claire: managed, through supervision documentation integration).

Policy-Practice Alignment Through System-Level Controls

Claire's deployment model aligns policy and practice at the system level — not just through attorney training. Because Claire is the firm's approved AI tool, and because access to consumer AI tools for client matters is logged and monitored through the firm's network security controls, the Jones Day gap (policy says no, attorneys do it anyway) is closed through architecture, not just admonition.

The 68% of law firms without AI governance policies documented in the ILTA 2024 survey are not firms that have decided governance is unnecessary — they are firms that have not yet built the framework. Bar ethics obligations, underwriting requirements, and enterprise client expectations are creating urgency. The NIST AI RMF and ISO 42001 frameworks provide the structure. The Jones Day incident provides the cautionary example of policy without implementation. The firms building governance frameworks now will not be learning those lessons in disciplinary proceedings or during their next malpractice claim.

For the bar ethics requirements that make AI governance mandatory, see bar ethics AI guidelines. For the malpractice insurance underwriting requirements that governance satisfies, see AI malpractice liability. For the multi-practice conflicts coordination that requires governance infrastructure, see multi-practice AI coordination.

Ask Claire about AI governance frameworks: NIST AI RMF + ISO 42001 adapted for legal practice.