Enterprise AI Governance Frameworks: NIST AI RMF, ISO 42001, EU AI Act, Colorado SB 205, and NYC Local Law 144

Key Regulatory Frameworks

• NIST AI RMF published: Jan 2023
• ISO 42001:2023 published: Dec 2023
• EU AI Act high-risk enforcement: Aug 2026+
• Colorado SB 205 effective: Feb 2026
Converging Mandatory Requirements — 2026 Enforcement Wave

Multiple AI governance frameworks move from voluntary guidance to mandatory enforcement in 2025-2026. The EU AI Act's prohibitions on unacceptable-risk AI became enforceable February 2, 2025, and its high-risk AI requirements become enforceable August 2, 2026. Colorado SB 205, covering algorithmic discrimination in consequential decisions, became effective February 1, 2026. Organizations without documented AI governance programs face direct regulatory exposure.
Section 01

The AI Governance Landscape: Voluntary Frameworks to Mandatory Regulation

Enterprise AI governance has undergone a structural shift from 2023 to 2026. What began as voluntary frameworks — NIST AI RMF, OECD AI Principles, IEEE ethics guidelines — has progressively been supplemented by mandatory regulations with enforcement teeth. The EU AI Act's enforcement timeline, AI bias laws in Colorado and New York City, and emerging federal AI guidance have created a compliance landscape that organizations cannot navigate using voluntary frameworks alone.

The challenge for enterprise AI governance programs is that these frameworks do not map cleanly onto each other. NIST AI RMF organizes risk management around four functions: GOVERN, MAP, MEASURE, and MANAGE. ISO 42001:2023 applies ISO's management system structure (Plan-Do-Check-Act) to AI-specific requirements. The EU AI Act organizes requirements around risk tiers (unacceptable, high, limited, minimal). Colorado SB 205 focuses specifically on algorithmic discrimination in high-risk decisions. NYC Local Law 144 targets automated employment decision tools. An organization subject to all of these — as any US company with EU employees or customers may be — must build a governance program that simultaneously satisfies each framework's distinct structural requirements.
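To make the mapping problem concrete, a governance program might track a single AI system's status across all five frameworks in one record. The sketch below is a hypothetical illustration; the field names and values are assumptions, not an official schema from any of the frameworks.

```python
# Hypothetical cross-framework tracking record for one AI system.
# All field names and values are illustrative assumptions.
hiring_screener = {
    "system": "resume-screening-model-v3",
    "nist_ai_rmf": {
        "govern": "policy approved; owner: AI Risk Committee",
        "map": "inventoried; stakeholders: applicants, recruiters",
        "measure": "quarterly accuracy and bias testing",
        "manage": "open risk treatment: disparate-impact mitigation",
    },
    "iso_42001": {"impact_assessment": "completed", "annex_a_gaps": 2},
    "eu_ai_act": {"risk_tier": "high", "open_articles": [11, 12, 14]},
    "colorado_sb205": {"consequential_decision": True, "impact_assessment_due": True},
    "nyc_ll144": {"is_aedt": True, "last_bias_audit": "2025-09-30"},
}
```

One record per system makes visible, at a glance, which framework obligations remain open for that system.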

• €35M: EU AI Act maximum fine, or 7% of global annual turnover, for the most serious violations
• $20K: Colorado SB 205 civil penalty per violation for AI discrimination in consequential decisions
• ISO 42001:2023: the first certifiable AI management system standard
• NIST AI RMF 1.0: GOVERN, MAP, MEASURE, and MANAGE functions
Section 02

NIST AI RMF 1.0: Building the Governance Foundation

NIST's AI Risk Management Framework, published January 26, 2023, provides a voluntary framework for managing AI risks in a manner consistent with NIST's existing cybersecurity and privacy risk management frameworks. AI RMF 1.0 organizes risk management into four core functions, each with subcategories and informative references.

GOVERN: Establishing AI Risk Accountability

The GOVERN function establishes the organizational structures, policies, and accountability mechanisms for AI risk management. GOVERN.1 covers policies, processes, procedures, and practices across the organization related to mapping, measuring, and managing AI risks. GOVERN.6 addresses AI risks arising from third-party software, data, and other supply chain issues. For enterprise implementation, GOVERN requires: a documented AI governance policy approved at the board or executive level, designated accountability for AI risk (typically a Chief AI Officer or an AI Risk Committee), clear escalation paths for AI risks, and integration of AI risk into enterprise risk management processes.

MAP: AI Risk Identification

MAP activities categorize AI use cases, identify affected stakeholders, and assess the potential impact of AI system failures or harmful outputs. MAP.1 requires establishing and documenting the context in which AI systems are used, including organizational risk tolerances. MAP.5 requires characterizing potential impacts, including harms, to individuals, groups, communities, organizations, and society. For enterprise governance programs, MAP activities produce the AI system inventory: a registry of all AI systems in production use, with their risk classifications, affected stakeholder groups, and identified risk vectors.
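For illustration, an inventory entry might be modeled as a small record type. This is a minimal sketch with assumed field names, not a NIST-prescribed schema (Python 3.9+ for the built-in generic annotations).

```python
from dataclasses import dataclass, field

# Minimal sketch of one AI system inventory entry produced by MAP
# activities. Field names are illustrative assumptions.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    vendor: str
    eu_risk_tier: str                       # "unacceptable" | "high" | "limited" | "minimal"
    stakeholders: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    risk_tolerance: str = "undetermined"    # documented per MAP

registry = [
    AISystemRecord(
        name="credit-scoring-v2",
        purpose="consumer credit decisions",
        vendor="internal",
        eu_risk_tier="high",
        stakeholders=["applicants", "underwriters"],
        identified_risks=["disparate impact", "model drift"],
    ),
]
```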

MEASURE: AI Risk Assessment

MEASURE activities assess and analyze the risks identified through MAP activities. MEASURE.1 covers identifying and applying appropriate measurement methods and metrics. MEASURE.2 covers evaluating AI systems for trustworthy characteristics. MEASURE.4 covers gathering and assessing feedback on measurement efficacy. In practice, MEASURE requires: documented performance metrics for each AI system; regular testing of AI outputs for accuracy, bias, robustness, and security; and mechanisms to detect AI performance degradation over time (model drift), as in the sketch below.
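As a minimal illustration of drift detection, the sketch below compares a current accuracy measurement against a baseline recorded at deployment and flags degradation beyond a tolerance. The baseline, tolerance, and alert behavior are assumptions for illustration, not values prescribed by NIST.

```python
# Crude drift signal: alert when accuracy drops more than a set tolerance
# below the accuracy measured at deployment. Thresholds are assumptions.
BASELINE_ACCURACY = 0.92   # assumed accuracy at deployment
DRIFT_TOLERANCE = 0.03     # alert if accuracy drops more than 3 points

def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def check_for_degradation(predictions, labels):
    current = accuracy(predictions, labels)
    if BASELINE_ACCURACY - current > DRIFT_TOLERANCE:
        # A real program would page the system owner and open a MANAGE
        # risk-treatment item; here we only report the finding.
        print(f"ALERT: accuracy {current:.3f} vs baseline {BASELINE_ACCURACY}")
    return current
```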

MANAGE: AI Risk Treatment

MANAGE activities implement risk response plans, prioritize risk treatment based on risk assessment outcomes, and track treatment progress. MANAGE.1 requires prioritizing and responding to AI risks based on assessment results. MANAGE.2 includes mechanisms to supersede, disengage, or deactivate AI systems whose risks cannot be adequately managed. MANAGE.4 requires documenting and regularly monitoring risk treatments and associated communication plans. For enterprise programs, MANAGE requires a documented risk treatment plan for each identified AI risk, with owners, timelines, and escalation paths.
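A treatment record of that kind might be sketched as below; the fields, escalation path, and overdue rule are assumptions for illustration (Python 3.10+ for the union annotation).

```python
from dataclasses import dataclass
from datetime import date

# Sketch of one MANAGE risk-treatment record. Field names are assumed.
@dataclass
class RiskTreatment:
    risk_id: str
    description: str
    owner: str
    due: date
    escalation_path: str = "AI Risk Committee, then CRO"

    def overdue(self, today: date | None = None) -> bool:
        return (today or date.today()) > self.due

rt = RiskTreatment(
    risk_id="RT-114",
    description="mitigate disparate impact in resume screener",
    owner="ml-platform-lead",
    due=date(2026, 6, 1),
)
if rt.overdue():
    print(f"Escalate {rt.risk_id} via: {rt.escalation_path}")
```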

Section 03

ISO 42001:2023: The Certifiable AI Management System Standard

ISO/IEC 42001:2023, published December 18, 2023, is the first international standard for an AI management system (AIMS) — a structured, certifiable program for managing AI-related risks and impacts throughout an organization. ISO 42001 follows the ISO High-Level Structure used by ISO 27001 (information security) and ISO 9001 (quality management), making it structurally familiar to organizations already operating under those frameworks.

ISO 42001 certification — obtainable from accredited certification bodies — provides organizations with an externally validated demonstration that their AI management system meets international standards. This has become commercially significant: enterprise procurement processes increasingly require vendors to demonstrate AI governance credentials, and ISO 42001 certification is emerging as the credential of choice. Several large enterprise AI vendors, including major cloud providers, announced ISO 42001 certification or roadmaps in 2024-2025.

Key ISO 42001 Requirements for Enterprise AI

ISO 42001's requirements include: establishing an organizational context and interested parties analysis specific to AI, setting an AI policy at the leadership level, documenting an AI system impact assessment process, implementing controls for AI risk specific to the organization's context, and conducting periodic management reviews of AI management system effectiveness. The standard's Annex A provides a comprehensive control set covering AI system characteristics, data for AI, and AI system life cycle.

NIST AI RMF + ISO 42001 Convergence

NIST has published a crosswalk showing the alignment between AI RMF functions and ISO/IEC 42001 controls. Organizations implementing ISO 42001 can demonstrate NIST AI RMF alignment through this mapping. This dual-framework approach satisfies both US federal agency AI guidance (which references NIST) and international procurement requirements (ISO certification).

EU AI Act: Mandatory for High-Risk AI

EU AI Act Article 9 requires high-risk AI systems to have a risk management system throughout the lifecycle. Article 10 requires data governance for training, validation, and testing data. Article 11 requires technical documentation. Article 12 requires record-keeping. Article 72 requires post-market monitoring. These requirements broadly align with ISO 42001's management system structure.

Colorado SB 205 and NYC Local Law 144

Colorado SB 205 (effective February 1, 2026) requires developers and deployers of high-risk AI systems making consequential decisions to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. NYC Local Law 144 (enacted 2021, enforced since July 2023) requires annual bias audits for automated employment decision tools. Colorado mandates documented impact assessments and consumer disclosures; NYC mandates published bias audit results and candidate notification.
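NYC Local Law 144 bias audits are built around impact ratios: each category's selection rate divided by the selection rate of the most-selected category. The sketch below shows that calculation; the applicant counts are fabricated purely for illustration.

```python
# Impact-ratio calculation used in NYC Local Law 144 bias audits:
# each category's selection rate divided by the highest selection rate.
def impact_ratios(selected: dict, total: dict) -> dict:
    rates = {group: selected[group] / total[group] for group in total}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical applicant counts by category.
ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 30},
    total={"group_a": 120, "group_b": 100},
)
print(ratios)  # {'group_a': 1.0, 'group_b': 0.75}
```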

Section 04

EU AI Act: Risk Tiers and Enterprise Obligations

The EU AI Act, which entered into force August 1, 2024, organizes AI systems into four risk tiers with progressively more stringent requirements. Enterprises deploying AI must first classify their AI systems by risk tier, then implement the requirements applicable to each tier.

Risk Tier | Examples | Key Requirements | Maximum Fine
Unacceptable risk (prohibited) | Social scoring, subliminal manipulation, real-time remote biometric identification (with narrow exceptions) | Prohibited; cannot deploy | €35M or 7% of global turnover
High risk | AI in employment decisions, credit scoring, critical infrastructure, biometric identification | Risk management, technical documentation, human oversight, transparency, accuracy requirements | €15M or 3% of global turnover
Limited risk | Chatbots, emotion recognition, AI-generated content | Transparency obligations: disclose AI interaction | €7.5M or 1% of global turnover
Minimal risk | AI-enabled video games, spam filters | Voluntary codes of conduct encouraged | N/A

High-risk AI requirements under Articles 9-15 include: a documented risk management system covering the entire AI lifecycle (Article 9); data governance for training/validation/testing datasets (Article 10); technical documentation including system architecture, training data specifications, and performance metrics (Article 11); logging of AI system operations for post-incident investigation (Article 12); transparency to deployers about capabilities and limitations (Article 13); human oversight mechanisms enabling monitoring and intervention (Article 14); and demonstrated accuracy, robustness, and cybersecurity (Article 15).
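To make the tier logic concrete, the sketch below performs a deliberately simplified keyword triage. The keyword lists and function are illustrative assumptions; actual classification is a legal determination against the Act's annexes and should be reviewed by counsel (see the classification item in the checklist below).

```python
# Deliberately simplified first-pass triage of EU AI Act risk tier by
# use-case keywords. Illustrative only; not a substitute for legal review.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"employment", "credit scoring", "critical infrastructure",
             "biometric identification"}
LIMITED = {"chatbot", "emotion recognition", "generated content"}

def triage_risk_tier(use_case: str) -> str:
    text = use_case.lower()
    if any(keyword in text for keyword in PROHIBITED):
        return "unacceptable (prohibited)"
    if any(keyword in text for keyword in HIGH_RISK):
        return "high (Articles 9-15 apply)"
    if any(keyword in text for keyword in LIMITED):
        return "limited (transparency obligations)"
    return "minimal (confirm manually)"

print(triage_risk_tier("chatbot for employment screening"))  # -> high tier
```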

Section 05

AI Governance Program Technical Audit Checklist

  • AI System Inventory — Complete Registry: Maintain a registry of all AI systems in production use. For each system, document: purpose, vendor/developer, data inputs, output types, affected stakeholders, risk tier classification (EU AI Act), and current NIST AI RMF governance status. Review quarterly.
  • Risk Tier Classification — EU AI Act: Classify each AI system against EU AI Act risk tiers. For systems with EU exposure, obtain legal review of the classification. High-risk classification triggers mandatory compliance requirements by August 2026. Document the classification rationale for each system.
  • AI Governance Policy — Board/Executive Approval: Adopt and document an AI governance policy approved at board or executive level. The policy must address: AI risk tolerance, accountability structures, prohibited use cases, data governance, and human oversight requirements. Review and update annually.
  • NIST AI RMF GOVERN Function — Accountability Structure: Document designated AI risk accountability (CAIO, AI Risk Committee, or equivalent). Define escalation paths from operational AI use to executive oversight. Integrate AI risk into the enterprise risk register. Document a quarterly review cadence for AI risk posture.
  • ISO 42001 Gap Assessment: Conduct a gap assessment of current AI management practices against ISO 42001:2023 requirements. Document gaps and a remediation roadmap. If ISO 42001 certification is a commercial requirement, engage an accredited certification body for a formal assessment timeline.
  • High-Risk AI — EU AI Act Articles 9-15 Compliance: For each high-risk AI system (EU AI Act classification), document compliance status for Articles 9-15. Implement the risk management system, technical documentation, human oversight mechanism, and logging. Conduct a conformity assessment before EU deployment.
  • Colorado SB 205 — High-Risk AI Impact Assessment: For AI systems making consequential decisions (employment, housing, financial, healthcare, insurance) affecting Colorado residents, conduct a documented impact assessment for algorithmic discrimination risk. Implement risk mitigation measures. Maintain assessment documentation for a minimum of three years.
  • NYC Local Law 144 — Automated Employment Decision Tools: For any automated employment decision tools used to screen NYC candidates for hiring or employees for promotion, conduct an annual bias audit by an independent auditor. Publish the bias audit summary on the company website. Notify NYC candidates and employees of AEDT use and available accommodations.
  • AI System Performance Monitoring — NIST MEASURE Function: Implement continuous monitoring of AI system performance metrics: accuracy, false positive/negative rates, demographic disparity in outputs, latency, and availability. Alert on metric degradation. Conduct a quarterly performance review for all high-risk AI systems.
  • AI Audit Trail — Record-Keeping for Investigation and Compliance: Implement comprehensive audit trails for all high-risk AI decisions (a minimal record sketch follows this checklist). The audit trail must capture: input data, model version, decision output, confidence score, human review status, and timestamp. Retain for a minimum of five years for EU AI Act high-risk systems. Ensure audit trails are accessible to regulators on request.
  • Third-Party AI Vendor Governance: Extend the AI governance program to cover third-party AI vendors. Require vendors to provide: EU AI Act compliance documentation, NIST AI RMF alignment evidence or ISO 42001 certification, bias testing results, and data handling agreements. Include AI governance requirements in vendor contracts.
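As referenced in the audit-trail checklist item above, a minimal append-only record might look like the following sketch. The schema, hashing choice, and JSON Lines storage are illustrative assumptions, not a format prescribed by the EU AI Act.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of an append-only audit record for one high-risk AI
# decision, mirroring the fields the checklist calls for. Schema is assumed.
def write_audit_record(input_data, model_version, decision, confidence,
                       human_reviewed, log_path="ai_audit_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # A hash of the canonicalized input stands in for (or accompanies)
        # the raw input, depending on retention and privacy requirements.
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()).hexdigest(),
        "model_version": model_version,
        "decision": decision,
        "confidence": confidence,
        "human_review": human_reviewed,
    }
    with open(log_path, "a") as f:          # append-only JSON Lines file
        f.write(json.dumps(record) + "\n")
    return record

write_audit_record({"applicant_id": "A-1002"}, "screener-v3.1",
                   "advance", 0.87, human_reviewed=False)
```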
Section 06

How Claire Supports Enterprise AI Governance Programs

Claire's AI Governance Architecture

EU AI Act Technical Documentation — Claire provides customers with pre-populated EU AI Act Article 11 technical documentation for Claire's AI components, covering system architecture, training data descriptions, performance metrics, and risk management measures. Reduces customer documentation burden for high-risk AI conformity assessments.
NIST AI RMF GOVERN/MEASURE Evidence — Claire generates quarterly AI performance reports aligned with NIST AI RMF MEASURE function requirements: accuracy metrics, demographic disparity analysis, availability statistics, and security testing results. Evidence is formatted for inclusion in enterprise AI governance reporting.
ISO 42001 Alignment Documentation — Claire's AI management practices are documented against ISO 42001:2023 control requirements. Customers receive a Claire-specific ISO 42001 alignment mapping as part of enterprise onboarding — usable directly as evidence in customer ISO 42001 certification assessments.
Bias Testing and Demographic Analysis — Claire's quality assurance program includes quarterly bias testing across demographic dimensions relevant to Colorado SB 205 and NYC Local Law 144 compliance. Results are provided to customers in audit-ready format for regulatory disclosure requirements.
Comprehensive Audit Trail for Regulatory Inspection — Claire's logging architecture captures all required EU AI Act Article 12 log data for high-risk AI operations. Logs are immutable, timestamped, and exportable in regulatory inspection format. Retention is configurable up to 10 years.