Insurance AI Compliance: NAIC Model Bulletin, NY DFS Circular Letter & Algorithmic Underwriting Bias
Insurance carriers deploying AI for underwriting, claims adjudication, and pricing face a rapidly expanding regulatory framework. The NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (2023) and New York DFS Circular Letter No. 1 (2019) together establish accountability standards that many carriers have yet to operationalize. Enforcement against algorithmic underwriting bias is no longer theoretical: regulators are examining AI-driven decisions for disparate impact under the federal Fair Housing Act, state anti-discrimination statutes, and the NAIC Unfair Trade Practices Model Act.
NAIC Model Bulletin on Use of Artificial Intelligence Systems by Insurers (2023)
Adopted: December 4, 2023 by the NAIC membership
Scope: All insurers using AI systems in insurance operations including underwriting, rating, claims, and marketing
Key requirement: Insurers must implement an AI governance framework ensuring AI systems are accurate, reliable, explainable, and do not violate unfair discrimination prohibitions
Accountability standard: Carriers bear full accountability for third-party AI vendor decisions — "vendor said so" is not a defense
Bias testing: Requires proxy discrimination analysis when non-protected characteristics correlate with protected class status
Documentation: Insurers must maintain records sufficient to demonstrate that AI use complies with applicable insurance laws
Source: NAIC Model Bulletin — naic.org
NY DFS Circular Letter No. 1 (2019): The Algorithmic Underwriting Standard
New York DFS Circular Letter No. 1, issued January 18, 2019, was the first major guidance from a US insurance regulator on the use of AI and external data in underwriting. It established that insurers using external data sources, algorithms, or predictive models in underwriting must be able to demonstrate that the use does not result in unfair discrimination based on protected characteristics under New York Insurance Law § 2606. The circular applies to all life insurance underwriting in New York that relies on external data or AI.
The DFS guidance requires that before deploying any AI-driven underwriting model, an insurer must: (1) perform an independent audit of the data sources used; (2) test the model for disparate impact on protected classes; (3) document the relationship between any external data element and mortality risk; and (4) ensure the model does not use data that is an unlawful proxy for race, national origin, gender, religion, or disability status. Any insurer that cannot demonstrate these steps risks a finding of unfair discrimination under New York Insurance Law.
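To make step (2) concrete, the following is a minimal sketch of a disparate impact screen in Python, assuming a pandas DataFrame of underwriting decisions with an approved outcome column and a race group column; the column names, the reference group, and the 0.80 screening threshold (borrowed from the EEOC four-fifths rule of thumb) are illustrative assumptions, not values prescribed by DFS.

import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, outcome: str, group_col: str,
                         reference_group: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's rate.

    A ratio below 0.80 (the EEOC four-fifths rule of thumb) is a common
    screening threshold for potential disparate impact; it is a screen,
    not a legal standard.
    """
    rates = df.groupby(group_col)[outcome].mean()
    return rates / rates[reference_group]

# Illustrative use against hypothetical application data:
# ratios = adverse_impact_ratio(apps, "approved", "race", reference_group="White")
# flagged = ratios[ratios < 0.80]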
Colorado Senate Bill 21-169 — Insurance Unfair Discrimination by Algorithms
Enacted: June 2021 (effective January 2023 for life insurance)
Scope: Life insurers using external data, algorithms, or predictive models in underwriting
Key prohibition: Insurers may not use external data that unfairly discriminates based on race, color, national origin, religion, sex, sexual orientation, disability, or gender identity
Required testing: Annual testing of all algorithms for unfair discrimination, submitted to Colorado DOI
Penalty: Market conduct examination, license revocation, civil penalties under Colorado Insurance Code
Significance: First US state law requiring affirmative annual algorithm bias testing by insurers
Algorithmic Underwriting Bias: The Proxy Discrimination Problem
The central compliance risk in insurance AI is proxy discrimination. Modern machine learning models trained on historical insurance data can develop correlations between seemingly neutral variables — ZIP code, credit score, education level, occupation, social media behavior, purchasing patterns — and protected class membership. When these proxies drive underwriting or pricing decisions, the model effectively discriminates on the basis of race, national origin, or other protected characteristics even if those variables are not directly used.
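One common way to screen for this risk is to test how well the underwriting model's own input features, taken together, can predict protected class membership: if they can, the feature set encodes the protected attribute even though it is never an explicit input. A minimal sketch using scikit-learn, where proxy_capacity_auc is a hypothetical helper and the data is assumed to carry binary protected class labels obtained for testing purposes:

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def proxy_capacity_auc(X, protected_labels) -> float:
    """Cross-validated AUC of predicting the protected attribute from
    the model's input features. AUC near 0.5 suggests little joint
    proxy signal; AUC well above 0.5 warrants variable-level review."""
    clf = GradientBoostingClassifier()
    scores = cross_val_score(clf, X, protected_labels,
                             scoring="roc_auc", cv=5)
    return scores.mean()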
The National Fair Housing Alliance (NFHA) filed a formal complaint with HUD in 2020 alleging that insurance pricing algorithms used by major homeowners insurers violated the Fair Housing Act by correlating credit-based insurance scores with race. The complaint cited research showing that neighborhoods with higher proportions of Black residents received higher insurance premiums even after controlling for risk factors. HUD opened a civil rights investigation, which remains ongoing. Multiple state insurance departments have since launched their own investigations into credit-based insurance scoring.
Claire's Insurance AI Governance Framework
Claire provides insurance carriers with an AI governance infrastructure that addresses NAIC Model Bulletin requirements, NY DFS Circular Letter standards, and state-level algorithmic bias testing mandates. The platform supports the full lifecycle from model documentation through annual bias testing and regulatory examination response.
Automated Proxy Discrimination Analysis
Claire's fairness testing module runs disparate impact analysis across all protected class proxies before model deployment and on an annual basis thereafter — meeting the Colorado SB 21-169 annual testing requirement and the NAIC documentation standard. Results are automatically formatted for regulatory submission.
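As a rough illustration of what a submission-ready output from such a testing cycle might contain (this is not Claire's actual implementation, and every field name is an assumption):

import json
import datetime

def annual_bias_report(model_id: str, ratios: dict, threshold: float = 0.80) -> str:
    """Assemble disparate impact ratios (for example, from the
    four-fifths screen sketched earlier) into a JSON record suitable
    for attachment to a regulatory filing."""
    report = {
        "model_id": model_id,
        "test_date": datetime.date.today().isoformat(),
        "metric": "adverse_impact_ratio",
        "threshold": threshold,
        "results": ratios,
        "flagged_groups": [g for g, r in ratios.items() if r < threshold],
    }
    return json.dumps(report, indent=2)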
Model Documentation and Audit Trail
Every AI model used in underwriting, pricing, or claims generates an automatic documentation package including training data sources, validation results, bias test results, and a plain-language explanation of decision factors — satisfying the explainability requirements of NY DFS Circular Letter No. 1 and the NAIC Model Bulletin accountability standard.
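The shape of such a package might resemble the following sketch; the fields mirror the items listed above, but the names are assumptions rather than a published schema:

from dataclasses import dataclass

@dataclass
class ModelDocumentationPackage:
    """Illustrative record of the documentation items described above."""
    model_id: str
    purpose: str                      # underwriting, pricing, or claims
    training_data_sources: list[str]
    validation_results: dict          # accuracy metrics, by segment
    bias_test_results: dict           # e.g., adverse impact ratios
    decision_factor_summary: str      # plain-language explanation
    accountability_owner: str = ""    # named owner per the governance policy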
Third-Party Vendor Accountability
Claire's vendor governance module manages the documentation and testing obligations for third-party AI vendors, addressing the NAIC Model Bulletin's clear statement that carriers cannot delegate accountability to vendors. Vendor AI systems are subject to the same documentation and bias testing requirements as internally built models.
Insurance AI Compliance Checklist
NAIC / NY DFS / State Law AI Compliance
AI governance framework documented: Board-approved policy covering all AI systems used in insurance operations, including third-party models, with named accountability owners for each system.
Disparate impact testing before deployment: All underwriting and pricing models tested for disparate impact on race, gender, national origin, disability, and religion before production deployment.
Annual algorithm bias testing (Colorado SB 21-169): Life insurers operating in Colorado submit annual testing results to Colorado DOI covering all algorithms using external data.
Proxy variable analysis: All input variables in underwriting models reviewed for potential proxy correlation with protected class status, including correlation analysis with census data.
Adverse action explainability: When AI-driven underwriting produces an adverse outcome, the carrier must be able to provide a specific, accurate explanation that complies with FCRA adverse action notice requirements (see the reason-code sketch following this checklist).
Vendor contract requirements: Third-party AI vendor contracts require the vendor to provide model documentation, bias testing results, and cooperation with regulatory examinations.
Model inventory with risk classification: Complete inventory of all AI systems in use, classified by risk level, with higher-risk models (underwriting, claims denial) subject to enhanced governance.
Human oversight for high-stakes decisions: Claims denials and underwriting declinations above defined thresholds require human review of AI recommendations before final decision.
Examination-ready documentation package: All model documentation, testing results, and governance records maintained in examination-ready format accessible within 48 hours of regulator request.
Consumer complaint monitoring: AI-driven decisions monitored for patterns in consumer complaints that may indicate discriminatory outcomes not detected in pre-deployment testing.
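For the adverse action item above, one widely used reason-code method for linear and logistic scoring models ranks features by how far the applicant's values pulled the score below the population average. A minimal sketch, with the caveat that tree or neural models would need attribution methods such as SHAP instead, and that an FCRA-compliant notice also requires an accurate plain-language description for each reason:

import numpy as np

def adverse_action_reasons(coef: np.ndarray, x: np.ndarray,
                           x_mean: np.ndarray, names: list[str],
                           top_k: int = 4) -> list[str]:
    """Rank features by their contribution to the applicant's score
    shortfall relative to the population average (the classic
    points-below-average reason-code method for linear scores)."""
    contrib = coef * (x - x_mean)   # per-feature score contribution
    order = np.argsort(contrib)     # most negative (most adverse) first
    return [names[i] for i in order[:top_k]]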
Frequently Asked Questions
Does the NAIC Model Bulletin have the force of law?
The NAIC Model Bulletin itself is not binding, but states that adopt it give it regulatory force through examination authority. As of 2024, a growing number of states reference the Model Bulletin in market conduct examinations. Importantly, state anti-discrimination laws and unfair trade practices statutes already apply to AI-driven decisions — the Model Bulletin clarifies how regulators will evaluate compliance.
Can an insurer use credit scores in underwriting?
Credit-based insurance scores remain legal in most states for property and casualty insurance, but their use in life insurance underwriting is increasingly scrutinized, and several states have passed restrictions. The core legal test is whether the credit score functions as a proxy for race or national origin: if correlation analysis shows that it does, its use can violate state anti-discrimination law even where credit scoring itself is permitted.
What constitutes adequate documentation under the NAIC Model Bulletin?
Documentation must include: a description of the AI system's purpose and function; data sources used in training; validation results including accuracy and fairness testing; the process for ongoing monitoring; escalation procedures for identified issues; and records showing senior management accountability. Documentation must be retained for the period required by state record-keeping laws.
How does the NY DFS circular apply to out-of-state insurers writing New York policies?
The NY DFS Circular Letter No. 1 applies to all insurers authorized to write life insurance in New York, regardless of domicile. An insurer domiciled in another state that writes New York life insurance policies must comply with NY's standards for algorithmic underwriting. This includes the requirement to demonstrate that external data use does not result in unfair discrimination under New York Insurance Law § 2606.
What happens during an insurance market conduct examination of AI systems?
Examiners increasingly request model documentation, training data descriptions, bias testing results, and samples of AI-driven decisions for re-review. Carriers that cannot produce this documentation face examination findings. Significant findings can result in consent orders, civil penalties, and mandated remediation. The NAIC's 2024 Market Conduct Annual Statement data now includes AI usage questions.
Prepare Your Insurance AI for Regulatory Examination
Related reading: Finance AI Overview | Mortgage AI Compliance | EU AI Act for FinTech | CFPB Fair Lending AI