CCAR/DFAST AI: Federal Reserve CCAR 2024 Results, SR 11-7 Model Risk & Stress Scenario Generation
The Federal Reserve's Comprehensive Capital Analysis and Review (CCAR) and Dodd-Frank Act Stress Testing (DFAST) programs represent the most rigorous model governance environment in US financial regulation. Banks participating in CCAR must subject every model used in stress testing to the Fed's model risk management expectations — and the Fed has increasingly found model governance deficiencies in AI and ML models used in stress testing. The Fed's 2024 CCAR results highlighted seven banks with capital plan weaknesses, and model governance was cited as a contributing factor in several cases.
Federal Reserve CCAR 2024 Results — Model Governance Findings
Results released: June 2024
Participants: 31 bank holding companies with assets over $100 billion
Key finding: Several participating banks received capital plan deficiency findings related to model documentation and governance — including insufficient documentation of AI models used in loss estimation and inadequate independent validation of ML-based credit loss models
Remediation: Banks with model governance deficiencies must submit remediated capital plans and demonstrate SR 11-7 compliance for all models before the next CCAR cycle
Source: Federal Reserve CCAR — federalreserve.gov
Regulatory Risks and Compliance Challenges
SR 11-7's model risk management framework requires that all models used in material risk management and capital calculations be subject to: independent validation by a function separate from model developers; ongoing performance monitoring; documentation adequate to allow replication of model results; and board-level awareness of model risk. For AI models in CCAR, these requirements create specific governance challenges: a neural network or gradient boosting model that produces highly accurate loss estimates may still fail SR 11-7 documentation requirements if its individual predictions cannot be explained or its results replicated.
AI is increasingly used to generate CCAR stress scenarios: ML models trained on historical economic data produce hypothetical adverse scenarios that supplement the Federal Reserve's prescribed scenarios. These AI-generated scenarios must be documented rigorously enough that Fed examiners can evaluate both the reasonableness of the scenarios and the methodology used to generate them. Scenario generation that produces implausible or insufficiently adverse scenarios creates CCAR submission risk.
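One common pattern behind this kind of scenario generation can be sketched in a few lines: fit a distribution to historical macro movements, simulate many hypothetical paths, and retain the most adverse ones. The macro variables, covariance figures, horizon, and severity score below are illustrative assumptions, not any bank's actual methodology.

```python
import numpy as np

# Hypothetical quarterly changes in three macro drivers (GDP growth,
# unemployment change, HPI growth); in practice these would come from
# historical data, simulated here for illustration.
rng = np.random.default_rng(42)
hist = rng.multivariate_normal(
    mean=[0.5, 0.0, 1.0],                                  # % per quarter
    cov=[[1.0, -0.6, 0.4], [-0.6, 0.5, -0.2], [0.4, -0.2, 1.5]],
    size=120)                                              # 30 years of quarters

mu, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)

# Sample many hypothetical 9-quarter paths (the CCAR projection horizon)
# and keep the most adverse, scored here by cumulative GDP decline plus
# cumulative unemployment rise -- the scoring rule is an assumption.
draws = rng.multivariate_normal(mu, cov, size=(10_000, 9))
severity = -draws[:, :, 0].sum(axis=1) + draws[:, :, 1].sum(axis=1)
adverse = draws[np.argsort(severity)[-100:]]               # top 1% most severe

print(adverse.shape)  # (100, 9, 3): 100 paths, 9 quarters, 3 variables
```

The documentation burden described above attaches to every choice in such a pipeline: the variable set, the distributional assumption, and especially the severity metric used to rank paths.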
Claire's AI Compliance Solution
Claire Platform Capabilities
CCAR Model Documentation Automation
Claire automates the production of SR 11-7-compliant model documentation for AI stress testing models — capturing model architecture, training data lineage, validation results, and performance monitoring history in the format that Federal Reserve examiners require in CCAR submissions.
Stress Scenario AI Generation with Documentation
Claire's scenario generation module uses AI to produce hypothetical stress scenarios that supplement Fed-prescribed scenarios, with full documentation of the economic methodology, scenario severity assessment, and historical precedent analysis that regulators require to evaluate AI-generated scenarios.
AI Model Independent Validation Support
Claire provides the testing framework for independent validation of AI CCAR models — running discrimination tests, calibration tests, and stability analyses that meet SR 11-7 standards for ML model architectures and generating validation reports formatted for Federal Reserve examination.
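The three test families named above can be sketched compactly. This is a minimal illustration assuming a binary default outcome `y_true` and model probabilities `y_prob`; the decile bucketing and the tolerance thresholds a validator would apply are set by the bank's model risk policy, not shown here.

```python
import numpy as np

def auc(y_true, y_prob):
    """Discrimination: rank-based AUC (probability a defaulter outranks a non-defaulter)."""
    order = np.argsort(y_prob)
    ranks = np.empty(len(y_prob))
    ranks[order] = np.arange(1, len(y_prob) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def calibration_table(y_true, y_prob, bins=10):
    """Calibration: predicted vs observed default rate by score decile."""
    edges = np.quantile(y_prob, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, y_prob, side="right") - 1, 0, bins - 1)
    return [(y_prob[idx == b].mean(), y_true[idx == b].mean())
            for b in range(bins) if (idx == b).any()]

def psi(expected, actual, bins=10):
    """Stability: Population Stability Index between development and current scores."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))
```

An SR 11-7 validation report would pair each statistic with its documented acceptance range (a PSI above roughly 0.25 is a common, though not regulatory, trigger for investigation) and with the validator's qualitative assessment.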
Compliance Checklist
AI Regulatory Compliance Requirements
AI model risk management framework: Governance applied to all quantitative models including AI.
Independent model validation: Annual independent validation of material AI models.
Examination-ready documentation: AI governance documentation maintained for regulatory access within 48 hours.
Third-party AI vendor oversight: Documentation of oversight activities for all AI vendors.
Fair lending monitoring: Monthly disparate impact analysis of AI credit decisions.
Consumer protection review: AI customer-facing tools reviewed for UDAAP and consumer protection compliance.
Data quality governance: Training data quality documented and reviewed for AI models.
Audit trail maintenance: Immutable audit trail of all AI decisions affecting consumers or regulatory obligations.
Board AI risk reporting: Quarterly AI risk reporting to board covering model performance and regulatory developments.
Incident response for AI failures: Written incident response plan for AI model failures with regulator notification protocols.
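As one illustration of the fair lending monitoring item above, a monthly disparate impact screen is often computed as an adverse impact ratio against the four-fifths rule. The approval counts below are hypothetical, and the 80% threshold is a common screening convention rather than a regulatory bright line.

```python
# Hypothetical monthly approval outcomes by group: (approved, applied).
approvals = {"group_a": (820, 1000), "group_b": (610, 1000)}

rates = {g: a / n for g, (a, n) in approvals.items()}
benchmark = max(rates.values())                    # highest group approval rate
air = {g: r / benchmark for g, r in rates.items()} # adverse impact ratio

# Flag any group whose ratio falls below the four-fifths (80%) threshold.
flags = [g for g, ratio in air.items() if ratio < 0.8]
print(rates, air, flags)
```

A flagged group would not by itself establish a fair lending violation; it triggers the deeper regression-based analysis and business-justification review that examiners expect to see documented.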
Frequently Asked Questions
What model governance does the Fed require for CCAR AI models?
The Federal Reserve applies SR 11-7 model risk management to all CCAR and DFAST models, including AI/ML models. SR 11-7 requires: model purpose and scope documentation; description of model inputs, processing, and outputs; quantitative testing results; independent validation by a function separate from model developers; ongoing performance monitoring; and board-level awareness of model risk. For AI models, documentation must describe the model architecture and feature importance in addition to traditional model documentation.
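Feature-importance documentation of the kind described above is often produced with permutation importance: shuffle one feature at a time and measure the drop in model accuracy. The toy data, feature meanings, and stand-in model below are illustrative assumptions, not a Fed-prescribed method.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 3))        # e.g. LTV, DTI, and an irrelevant feature
y = (0.9 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 5000) > 0).astype(int)

def model(X):
    """Stand-in for a fitted ML model; scores on the same signal as y."""
    return (0.9 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature is shuffled; larger = more important."""
    rng = np.random.default_rng(seed)
    base = (model(X) == y).mean()
    imp = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
            drops.append(base - (model(Xp) == y).mean())
        imp.append(float(np.mean(drops)))
    return imp

imp = permutation_importance(model, X, y)
```

For SR 11-7 purposes the point is auditability: the importance figures, the repeat count, and the seed all belong in the model documentation so a validator can replicate them.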
How does the Federal Reserve examine CCAR model governance?
Fed examiners conduct horizontal reviews of CCAR model governance across participating banks. Reviews cover: model inventories and risk tiers; validation independence and quality; documentation completeness; monitoring program effectiveness; and governance escalation pathways. Banks that cannot provide complete documentation for CCAR models, or whose validation quality does not meet Fed standards, receive supervisory findings that must be remediated before the next CCAR cycle.
Can AI be used to generate CCAR stress scenarios?
Yes. Some banks are using AI to generate hypothetical stress scenarios as supplements to the Fed's prescribed scenarios. AI-generated scenarios must meet the same documentation and reasonableness standards as expert-judgment scenarios — including economic justification for the severity level, assessment of internal consistency, and comparison to historical precedents. AI scenario generation that produces scenarios the Fed finds inadequately severe or methodologically flawed creates CCAR submission risk.
What happens when a bank has CCAR model governance deficiencies?
When Federal Reserve examiners find CCAR model governance deficiencies, they may: require the bank to resubmit its capital plan with remediated models; impose capital distribution restrictions while remediation is underway; issue a Matter Requiring Immediate Attention (MRIA) with a specific remediation deadline; and factor model governance quality into the bank's overall supervisory rating. Persistent model governance deficiencies can affect a bank's standing in the CCAR process.
How is DFAST different from CCAR for AI model governance purposes?
DFAST (Dodd-Frank Act Stress Testing) applies to banks with more than $100 billion in assets and requires supervisory stress tests using Fed-provided scenarios (annually for the largest firms, every other year for Category IV firms). CCAR is a more comprehensive annual review of capital adequacy and planning for the largest banks. The model governance requirements under SR 11-7 apply equally to DFAST and CCAR models. The difference is that CCAR also reviews the bank's own capital planning process, including the models and assumptions the bank uses in its own internal stress tests, not just the regulatory stress test.
Related: Finance AI Overview | AI Model Risk Management | Regulatory Compliance