State Bar AI Ethics Guidelines: Florida Op. 24-1, California, New York, and ABA Model Rules 5.3, 1.1, and 1.4

Between mid-2023 and early 2026, every major state bar issued formal guidance on artificial intelligence in legal practice. All 47 state bar opinions issued to date converge on the same point: attorneys have affirmative obligations to evaluate AI tools before deployment, to supervise AI output, and to disclose AI use to clients in specific circumstances. Florida Bar Opinion 24-1, the California State Bar's 2023 Practical Guidance, and the New York State Bar AI Task Force Report collectively establish the substantive standard. ABA Model Rules 5.3, 1.1, and 1.4 provide the disciplinary framework. This analysis explains what compliance actually requires, jurisdiction by jurisdiction.

⚖ Florida Bar Opinion 24-1 (2024) — Formal AI Ethics Opinion

Citation: Florida Bar Ethics Opinion 24-1, issued January 19, 2024
Issuing Body: Professional Ethics Committee, The Florida Bar
Core Holding: Lawyers using generative AI must evaluate whether the platform adequately protects confidential information under Florida Rule 4-1.6; standard consumer AI terms of service are insufficient for Rule 4-1.6 compliance
Specific Risk: Training data contamination: client information may be incorporated into AI training data and influence responses to third parties, including adverse parties
Due Diligence Required: Attorneys must review AI vendor privacy policies, assess data retention and training practices, and implement reasonable security measures before using AI for client matters
Source: Florida Bar Ethics Opinion 24-1

⚖ California State Bar Practical Guidance for AI Use (2023)

Citation: State Bar of California, Practical Guidance for the Use of Generative AI, November 2023
Status: Formal guidance (not a formal ethics opinion but binding as practical guidance under California Rule 3-110)
Core Requirements: Four-part due diligence framework: data retention analysis, training data contamination assessment, sub-processor identification, contractual confidentiality protections
Disclosure Trigger: Attorneys must disclose AI use when it affects the substance of work product and must obtain client consent before processing confidential information through third-party AI systems
Source: California Bar Practical Guidance on AI

⚖ New York State Bar Association AI Task Force Report (2024)

Citation: NYSBA Task Force on Artificial Intelligence, Report and Recommendations, April 2024
Status: Formal report with recommendations adopted by NYSBA; supplements NY Ethics Op. 1253 (2024)
Core Recommendations: Attorneys must understand AI tool capabilities and limitations; supervise AI output; verify citations; disclose AI use when it affects the substance of work; and obtain client consent before processing confidential data
Supervision Emphasis: Rule 5.3 supervision obligations extend to AI systems used in legal work; the supervising attorney is professionally responsible for AI output included in work product
Source: NYSBA AI Task Force Report (PDF)
47 state bars had formal AI ethics guidance as of February 2026, up from 6 in mid-2023. The acceleration followed Mata v. Avianca (June 2023) and ABA Formal Op. 512 (2024). Every jurisdiction that has issued guidance converges on the same four obligations: competence (Rule 1.1), confidentiality evaluation (Rule 1.6), supervision (Rule 5.3), and disclosure (Rule 1.4). The specific requirements differ by jurisdiction, which creates compliance complexity for multi-state firms.

ABA Model Rule 5.3: Supervising AI as a Non-Lawyer Assistant

ABA Model Rule 5.3 requires lawyers and law firms to ensure that the conduct of non-lawyer assistants is compatible with the professional obligations of the lawyer. The rule applies to paralegals, legal assistants, and contract staff — and, under every state bar opinion issued since 2023, to AI systems used in legal work.

The supervision framework under Rule 5.3 has three dimensions that are particularly relevant to AI:

5.3(a): Firm-Level Supervision Policies

Rule 5.3(a) requires a partner or shareholder with managerial authority to make reasonable efforts to ensure the firm has measures in effect that give reasonable assurance that non-lawyer conduct is compatible with professional obligations. Applied to AI, this requires the firm to adopt written AI governance policies that specify: which AI tools are approved for which tasks, what verification is required before AI output is included in work product, who is responsible for supervising AI use at the firm level, and how compliance is monitored.

The NYSBA AI Task Force Report found that as of April 2024, fewer than 15% of New York law firms had written AI governance policies. Firms without such policies cannot demonstrate Rule 5.3(a) compliance. For the consequences of operating without AI governance policies, see the AI governance for law firms analysis.

5.3(b): Attorney-Level Supervision

Rule 5.3(b) requires the supervising attorney to make reasonable efforts to ensure that non-lawyers comply with the professional obligations of the attorney. This means the attorney who uses AI output in their work product is responsible for verifying that output satisfies the standards applicable to attorney work product: citation accuracy, factual accuracy, appropriate characterization of legal authority, and compliance with court rules.

Judge Castel's Mata v. Avianca sanctions opinion is the most cited articulation of this principle: "The party's attorney is responsible for the contents of all filings made on behalf of a party, regardless of whether those contents were generated by a lawyer, paralegal, or artificial intelligence." Rule 5.3(b) provides the disciplinary authority behind this principle.

5.3(c): Ratification Liability

Rule 5.3(c) provides that an attorney is responsible for the conduct of a non-lawyer assistant if the attorney orders or ratifies the conduct. An attorney who reviews AI-generated work product and submits it to a court without correction has ratified that work product, including any errors, hallucinations, or inaccuracies it contains. The review need not be thorough for ratification to attach; the ratification is complete when the attorney affixes their signature.

ABA Model Rule 1.1: Technology Competence for AI

ABA Model Rule 1.1, Comment 8 (2012 amendment), requires attorneys to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." This comment has been adopted verbatim by 41 states and in substantially equivalent form by 9 others. It is the primary vehicle through which bar authorities have imposed specific AI competence requirements.

What AI Competence Requires Under State Bar Interpretations

The following state bars have issued the most specific interpretations of Rule 1.1's technology competence requirement as applied to AI:

State Bar AI Ethics Opinions: Technology Competence Requirements

Florida (FL Bar Op. 24-1, Jan. 2024): Attorneys must understand how AI tools process, store, and potentially share client data. Reliance on marketing materials without independent verification of data practices is insufficient. Attorneys must conduct actual due diligence on vendor data policies before deployment.

California (CA Practical Guidance, Nov. 2023): Four-part due diligence framework required before using any AI tool for client matters: (1) data retention analysis, (2) training data contamination assessment, (3) sub-processor identification, (4) contractual confidentiality protections. Each element must be documented.

New York (NY Ethics Op. 1253, 2024): Attorneys must understand AI tool capabilities and limitations before use. Specific understanding required: hallucination rates for the specific task type, data retention practices, training data policies, and citation verification capabilities. The supervision policy must address each AI tool used.

Texas (TX Prof. Ethics Comm. Op. 699, 2024): Rule 1.01 competence includes technology competence. Attorneys must verify AI-generated research through authoritative sources. Mata v. Avianca is cited as the paradigm case of incompetent AI use. Verification of citations before filing is specifically required.

Pennsylvania (PA Bar Formal Op. 2024-300): Attorneys must implement reasonable measures to prevent unauthorized AI disclosure. "Reasonable measures" require reviewing vendor data policies, assessing data retention practices, and using enterprise-grade deployments offering contractual confidentiality protections.

New Jersey (NJ Advisory Comm. Op. 740, 2024): The most technically specific state bar opinion on AI architecture. Distinguishes consumer AI (insufficient), enterprise AI with DPAs (potentially sufficient with safeguards), and isolated private deployments (strongly preferred for sensitive matters). Shared infrastructure is flagged as a specific concern.

Illinois (IL State Bar Advisory Op. 24-03, 2024): Competent AI use requires understanding the specific AI system's architecture, not just vendor categories (e.g., "enterprise" vs. "consumer"). Illinois attorneys must be able to describe their AI system's data handling to a technically sophisticated client or regulator upon request.

ABA (ABA Formal Op. 512, 2024): Addresses four core duties: competence (Rule 1.1), confidentiality (Rule 1.6), communication (Rule 1.4), and supervision (Rules 5.1 and 5.3). Attorneys must evaluate whether AI tools "maintain reasonable confidentiality." Standard TOS consent does not constitute adequate client disclosure under Rule 1.4.

ABA Model Rule 1.4: When Must Attorneys Disclose AI Use?

ABA Model Rule 1.4 requires attorneys to promptly inform clients of any decision or circumstance with respect to which the client's informed consent is required, and to explain a matter to the extent reasonably necessary to permit the client to make informed decisions. Applied to AI use, this creates a disclosure obligation in two specific scenarios:

Scenario 1: AI Use Requiring Client Consent Under Rule 1.6

When an attorney uses an AI tool that transmits client confidential information to a third party (as all consumer AI tools do), Rule 1.6(a) requires the client's informed consent to that disclosure unless the disclosure is impliedly authorized to carry out the representation. California's four-part framework treats this consent requirement as a mandatory element of AI deployment for client matters — attorneys must either obtain client consent or use AI tools that do not transmit client data to third parties.

Scenario 2: AI Use That Affects the Substance of Work Product

ABA Formal Op. 512 holds that attorneys must disclose AI use when it affects the substance of work product delivered to the client. This is a substantive-effect standard, not a categorical prohibition or categorical requirement. An attorney who uses AI to draft a contract and substantially modifies the draft based on their own judgment has used AI as a drafting tool — disclosure may not be required. An attorney who substantially relies on AI analysis in advising a client has used AI in a way that may affect the substance of the advice — disclosure may be required.

The Billing Dimension

ABA Formal Op. 512 addresses a billing question that Rule 1.4 does not expressly resolve: attorneys may not charge clients at full hourly rates for time saved by AI efficiency gains without disclosing the efficiency gain and adjusting billing accordingly. This disclosure and honesty obligation has generated disciplinary proceedings in California and New York separate from the confidentiality and supervision issues addressed elsewhere.

Supervision Requirements: What Policies Must Address

State bar opinions from New York, California, Florida, Texas, Pennsylvania, and New Jersey all require law firms to have written AI supervision policies. The specific required elements vary by jurisdiction but converge on a common minimum core: approved tools, verification requirements, supervision hierarchy, training requirements, and incident reporting.

Disclosure Requirements: What Clients Must Be Told

The disclosure requirements under Rule 1.4, as interpreted by state bar opinions, create three distinct disclosure obligations:

Pre-Engagement Disclosure: Engagement Letter Language

Most state bars require disclosure of AI tool use before the representation begins — specifically in the engagement letter or retainer agreement. The minimum required disclosure includes: that the firm uses AI tools in its practice, what type of AI tool is used (with sufficient description to enable the client to assess the privacy implications), and a description of the data protection measures in place to protect client confidential information processed by the AI tool.

Matter-Specific Disclosure: When AI Affects Advice

When AI use affects the substance of advice given to a client on a specific matter, attorneys in California, New York, and New Jersey must disclose that AI contributed to the analysis and must be prepared to describe how the AI output was verified and supplemented by attorney analysis. This disclosure obligation is triggered by substance, not by the mere fact of AI use.

Billing Disclosure: AI Efficiency and Fee Adjustment

ABA Formal Op. 512's position on billing — that attorneys must disclose AI efficiency gains to clients rather than billing at full rates for AI-accelerated tasks — has been adopted by California, New York, and Texas bar authorities. The practical implication is that engagement letters should address AI use and billing methodology together, not separately.

Bar Ethics AI Compliance Checklist

Multi-Jurisdiction AI Ethics Compliance Checklist

01. Written AI Governance Policy (Rule 5.3(a))

Every firm must have a written AI governance policy addressing approved tools, verification requirements, supervision hierarchy, training requirements, and incident reporting. The NYSBA AI Task Force found that fewer than 15% of New York firms had such a policy as of April 2024, leaving the remaining 85% exposed to disciplinary risk.

02. Jurisdiction-Specific Due Diligence Protocol (Rule 1.1)

Apply the most stringent due diligence framework applicable to any jurisdiction where the firm operates. California's four-part framework is the current high watermark. Document the due diligence analysis for each AI tool deployed, with a record that can be produced to bar regulators.

03. Engagement Letter AI Disclosure Language (Rule 1.4)

Update engagement letters to disclose AI use, describe the data protection architecture, and obtain client consent for processing confidential information through AI systems. Jurisdiction-specific versions required for California, New York, New Jersey, Texas, and Florida, where disclosure requirements are most specific.

04. Court Filing AI Disclosure Compliance

Over 30 federal district courts have standing orders requiring AI disclosure in filings. Maintain a current register of applicable disclosure requirements for all courts where the firm regularly files. Ensure AI disclosure is included in filings before signature, not as an afterthought.

05. Attorney AI Training Program (Rules 1.1 and 5.3)

Implement attorney AI training covering: how the firm's approved AI tools work, what they can and cannot do, verification requirements for each task type, and the disciplinary consequences of failing to verify AI output. CLE credit available in most jurisdictions for documented AI ethics training.

06. Data Retention and Training Opt-Out Verification

For each AI tool deployed, verify and document the data retention period and training data opt-out status for each attorney account. Florida Bar Op. 24-1 specifically requires this verification — accepting vendor marketing representations without independent verification does not satisfy the standard.

07. Sub-Processor Identification and Contracting

California's Practical Guidance requires identification of all sub-processors who may access client data. Obtain a complete sub-processor list from each AI vendor and verify that each sub-processor is contractually bound to the same confidentiality obligations as the primary vendor.

08. Citation Verification Protocol Documentation

Document the firm's citation verification protocol as a matter of firm policy, not just individual attorney practice. The protocol should specify: which verification tool is used, who performs the verification, and how verification is documented in the work product file.

09. AI Billing Disclosure in Engagement Letters

Per ABA Formal Op. 512 (adopted by CA, NY, TX): engagement letters must describe how AI use affects billing methodology. Attorneys cannot bill full hourly rates for AI-accelerated tasks without disclosure. Address this explicitly in engagement letter language, not just in firm billing guidelines.

10. Multi-State Compliance Matrix

For multi-state firms, maintain a compliance matrix that maps each AI tool to the jurisdictions where it is used and confirms compliance with each jurisdiction's specific requirements. The matrix must be updated quarterly as state bar opinions evolve.

11. Incident Response and Bar Reporting Protocol

Establish a protocol for AI-related errors that may require client notification under Rule 1.4 or bar reporting under Rule 8.3. The protocol should include: assessment of whether the error constitutes malpractice, attorney notification obligations under Rule 1.4, and documentation for malpractice defense.
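A multi-state compliance matrix like the one in item 10 can be sketched as a simple data structure that maps each AI tool to the jurisdictions where it is used and flags unmet requirements. This is a minimal illustration only: the tool name, jurisdiction codes, and requirement keys below are hypothetical, not any bar's official taxonomy or any vendor's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical per-jurisdiction requirement keys; a real matrix would track
# each state bar opinion's specific elements and update them as opinions evolve.
JURISDICTION_REQUIREMENTS = {
    "CA": {"data_retention_analysis", "training_contamination_assessment",
           "subprocessor_identification", "contractual_confidentiality"},
    "FL": {"vendor_policy_review", "training_optout_verified"},
    "NY": {"supervision_policy", "citation_verification_protocol"},
}

@dataclass
class AIToolRecord:
    """Due diligence record for one AI tool across the firm's jurisdictions."""
    name: str
    jurisdictions: set
    completed_checks: set = field(default_factory=set)

def compliance_gaps(tool: AIToolRecord) -> dict:
    """Return, per jurisdiction, the requirement checks still missing."""
    gaps = {}
    for state in tool.jurisdictions:
        required = JURISDICTION_REQUIREMENTS.get(state, set())
        missing = required - tool.completed_checks
        if missing:
            gaps[state] = missing
    return gaps

tool = AIToolRecord(
    name="example-drafting-assistant",   # hypothetical tool name
    jurisdictions={"CA", "FL"},
    completed_checks={"data_retention_analysis", "vendor_policy_review",
                      "training_optout_verified"},
)
print(compliance_gaps(tool))
# Florida's two checks are complete; three of California's four elements remain
```

Reviewing the output of such a gap report quarterly would align with the update cadence the checklist recommends for the matrix itself.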

How Claire Satisfies Multi-Jurisdiction Bar Ethics Requirements

Claire's Bar Ethics Compliance Architecture

Claire was designed to satisfy the specific requirements of Florida Bar Op. 24-1, California's Practical Guidance, the NYSBA AI Task Force recommendations, and ABA Formal Op. 512 simultaneously — addressing the multi-jurisdiction compliance challenge that most AI vendor products require law firms to solve on their own.

Pre-Built Jurisdiction-Specific Disclosure Templates

Claire provides engagement letter AI disclosure language approved for California, New York, New Jersey, Florida, Texas, Illinois, and Pennsylvania — the seven jurisdictions with the most specific AI disclosure requirements. Templates are updated quarterly as state bar opinions evolve. Firms using Claire do not need to draft Rule 1.4 AI disclosure language from scratch.

California Four-Part Due Diligence Documentation Package

Claire's deployment documentation satisfies each element of California's four-part due diligence framework: data retention analysis (zero retention, documented in DPA), training data contamination assessment (structural training exclusion, verified through architecture review), sub-processor identification (complete sub-processor list in DPA), and contractual confidentiality protections (full DPA with right to audit).

Written AI Governance Policy Template (Rule 5.3(a))

Claire provides a law firm AI governance policy template that satisfies Rule 5.3(a)'s firm-level supervision requirement, addressing approved tools, verification requirements, supervision hierarchy, training requirements, and incident reporting. The template is customizable by practice area and is pre-reviewed by bar ethics specialists in each major jurisdiction.

Supervision Workflow Documentation for Rule 5.3(b)

Every Claire workflow that produces work-product-quality output includes a supervision documentation step that records the supervising attorney's review, verification steps taken, and sign-off. This documentation satisfies Rule 5.3(b)'s supervision requirement and creates a malpractice defense record that most law firms currently lack entirely.
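A supervision record of the kind described above can, in principle, be captured as a small structured log entry per work product. The sketch below is an assumption-laden illustration of that idea: the field names, matter number, and verification steps are invented for the example and do not reflect Claire's actual schema.

```python
import json
from datetime import datetime, timezone

def supervision_record(matter_id, attorney, verification_steps):
    """Build a Rule 5.3(b)-style review record for AI-assisted work product.

    All field names are illustrative assumptions; a real system would define
    its own schema and retention policy for these records.
    """
    return {
        "matter_id": matter_id,
        "supervising_attorney": attorney,
        "verification_steps": list(verification_steps),  # e.g. citation checks
        "signed_off_at": datetime.now(timezone.utc).isoformat(),
    }

record = supervision_record(
    "2026-000123",   # hypothetical matter number
    "J. Doe",
    ["citations verified against official reporters",
     "factual assertions checked against the record"],
)
print(json.dumps(record, indent=2))
```

Keeping such entries in the work product file gives the firm exactly the kind of contemporaneous documentation that supports both Rule 5.3(b) compliance and a later malpractice defense.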

The 47 state bars that have issued AI ethics guidance are not issuing advisory suggestions — they are establishing the minimum standards below which practicing attorneys face disciplinary proceedings. The convergence around Rules 1.1, 1.4, 1.6, and 5.3 means that multi-state compliance is achievable through a coherent architecture, not through jurisdiction-by-jurisdiction patchwork. For the specific technical architecture that satisfies Rule 1.6's confidentiality requirements, see client confidentiality technical architecture. For AI governance framework details, see AI governance for law firms. For privilege waiver risks under the primary purpose test, see AI privilege waiver risks.
