Canada OSFI AI: Guideline E-23 Model Risk, Technology & Cyber Guidance, and FINTRAC AML Automation

The Office of the Superintendent of Financial Institutions (OSFI) — Canada's federal banking regulator — has established an AI governance framework through Guideline E-23 (Model Risk Management, 2023), the Technology and Cyber Risk Management Guideline (2023), and supervisory guidance on AI use in federally regulated financial institutions (FRFIs). FINTRAC (Financial Transactions and Reports Analysis Centre of Canada) has issued guidance on AML program automation that specifically addresses AI-powered transaction monitoring.

C$9.1T
Total assets held by federally regulated financial institutions under OSFI supervision (OSFI, 2023)
OSFI's 2023 Guideline E-23 was a significant update to Canada's model risk management framework, explicitly addressing AI and ML models. The guideline applies to all federally regulated financial institutions and requires comprehensive model governance including independent validation, ongoing monitoring, and board-level model risk reporting.

OSFI Guideline E-23 — Model Risk Management (2023)

Issued: 2023 (supersedes 2017 guidance)
Scope: All federally regulated financial institutions using models in material risk management, financial reporting, or customer-facing decisions
Key AI provision: E-23 explicitly addresses AI/ML models, requiring: model documentation adequate to understand AI model functioning; independent validation with AI-specific testing protocols; ongoing performance monitoring; and clear accountability for AI model decisions
Explainability: OSFI expects AI models affecting customers or capital to provide explainable outputs — black-box models in high-stakes decisions create E-23 compliance risk
Source: OSFI E-23 — osfi-bsif.gc.ca

Regulatory Risks and Compliance Challenges

FINTRAC's 2021 guidance on AML program effectiveness specifically addresses AI-powered transaction monitoring, noting that AI systems must meet the same standards as rule-based systems — detecting suspicious transactions, generating Suspicious Transaction Reports (STRs, Canada's equivalent of SARs), and maintaining the documentation that FINTRAC reviews in AML program assessments. FINTRAC has been conducting enhanced AML program assessments since 2021, with AI systems specifically reviewed for adequacy of monitoring and STR filing.

Canada's Artificial Intelligence and Data Act (AIDA), included as Part 3 of Bill C-27 (Digital Charter Implementation Act 2022), proposes a risk-based AI governance framework for high-impact AI systems. Although AIDA had not received Royal Assent as of early 2026, Canadian financial institutions should anticipate regulation of AI credit scoring, insurance pricing, and automated investment advice as high-impact systems under future AIDA implementation.

Claire's AI Compliance Solution

Claire Platform Capabilities

OSFI E-23 AI Model Documentation

Claire implements OSFI Guideline E-23 model documentation requirements for AI systems — providing training data lineage documentation, model architecture description, validation results, and ongoing performance monitoring records in the format OSFI examiners request.
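The E-23 documentation elements described above — purpose, data lineage, architecture, validation results, monitoring, and accountability — can be captured as a structured, serializable record. A minimal sketch in Python; the field names and example values are illustrative assumptions, not Claire's actual schema or an OSFI-prescribed format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRecord:
    """Illustrative E-23-style documentation record for one AI model."""
    model_id: str
    purpose: str                    # business use and materiality
    training_data_sources: list    # data lineage: where inputs came from
    architecture: str               # model family and key hyperparameters
    validation_date: str            # last independent validation
    validation_findings: list
    monitoring_metrics: dict        # ongoing performance indicators
    accountable_owner: str          # named individual accountable for decisions

record = ModelRecord(
    model_id="credit-score-v3",
    purpose="Retail credit adjudication (customer-facing, material)",
    training_data_sources=["core_banking.loans_2018_2023", "bureau_feed_v2"],
    architecture="Gradient-boosted trees, 400 estimators, max_depth=6",
    validation_date="2025-11-01",
    validation_findings=["Stable AUC across vintages",
                         "No proxy-variable bias detected"],
    monitoring_metrics={"auc": 0.81, "psi": 0.04},
    accountable_owner="VP, Retail Credit Risk",
)

# Serialize into an examination-ready document package
print(json.dumps(asdict(record), indent=2))
```

Keeping the record as structured data rather than free-form prose makes it straightforward to maintain a model inventory and export per-model documentation on request.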

FINTRAC AML AI Program Compliance

Claire's AML automation module generates FINTRAC-compliant STR workflows from AI transaction monitoring — ensuring that AI-generated suspicious activity indicators trigger documentation and STR filing that meets FINTRAC's substantive review standards.

OSFI Technology Risk AI Assessment

Claire aligns AI governance with OSFI's Technology and Cyber Risk Management Guideline requirements — assessing AI system risks within the technology risk framework and generating the documentation that OSFI supervisors expect in technology risk examinations.

Compliance Checklist

AI Regulatory Compliance Requirements

1. AI model risk management framework: Governance applied to all quantitative AI models with inventory, validation, and monitoring.
2. Independent model validation: Annual independent validation of material AI models with results documented.
3. Examination-ready documentation: AI governance documentation maintained for regulatory access within 48 hours.
4. Third-party AI vendor oversight: Documentation of oversight activities for all AI vendors.
5. Fair lending and anti-discrimination monitoring: Regular testing of AI decisions for prohibited bias.
6. Consumer protection review: AI customer-facing tools reviewed for applicable consumer protection compliance.
7. Data quality governance: Training data quality documented and reviewed annually.
8. Immutable audit trail: Records of all AI decisions affecting consumers or regulatory obligations.
9. Board AI risk reporting: Quarterly AI risk reporting to the board covering model performance and regulatory developments.
10. Incident response plan: Written incident response plan for AI model failures with regulator notification protocols.
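An immutable audit trail (checklist item 8) is commonly implemented by chaining each record to the hash of the previous one, so any after-the-fact edit breaks the chain and is detectable. A minimal stdlib sketch of the technique — an illustration, not Claire's implementation:

```python
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry embeds the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> str:
        payload = json.dumps({"event": event, "prev_hash": self._last_hash},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event,
                             "prev_hash": self._last_hash,
                             "hash": entry_hash})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"event": e["event"], "prev_hash": prev},
                                 sort_keys=True)
            if (e["prev_hash"] != prev or
                    hashlib.sha256(payload.encode()).hexdigest() != e["hash"]):
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append({"model": "credit-score-v3", "decision": "decline", "subject": "anon-123"})
trail.append({"model": "credit-score-v3", "decision": "approve", "subject": "anon-456"})
assert trail.verify()

# Retroactively altering a recorded decision is detected
trail.entries[0]["event"]["decision"] = "approve"
assert not trail.verify()
```

In production the same idea is usually backed by write-once storage so the log itself cannot be silently replaced; the hash chain proves integrity of whatever copy an examiner is given.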

Frequently Asked Questions

What does OSFI Guideline E-23 require for AI models?

OSFI E-23 (2023) requires FRFIs to implement model risk management for all models used in material risk management, financial reporting, or customer-facing decisions. For AI models, E-23 requires: documentation describing the model's purpose, inputs, processing, and outputs; independent validation by a function separate from model developers; ongoing performance monitoring; model risk reporting to senior management and the board; and clear accountability for model decisions.

How does FINTRAC assess AML AI programs?

FINTRAC conducts AML program assessments of reporting entities to evaluate whether their AML programs effectively detect and report suspicious transactions. For AI-powered transaction monitoring, FINTRAC assesses: the coverage of the AI monitoring system across transaction types; the quality of alerts generated; the STR filing rate from AI-generated alerts; the documentation maintained for alerts that were reviewed and not escalated; and the governance over the AI model including validation and tuning.
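The metrics listed above — STR filing rate from AI alerts and documentation of non-escalated alerts — can be computed directly from an alert disposition log. A sketch using hypothetical field names (the log structure is an assumption for illustration):

```python
from collections import Counter

# Hypothetical alert log from an AI transaction-monitoring system.
# Each alert records its outcome and whether a rationale was documented.
alerts = [
    {"id": "A1", "outcome": "str_filed", "documented": True},
    {"id": "A2", "outcome": "closed_no_str", "documented": True},
    {"id": "A3", "outcome": "closed_no_str", "documented": False},
    {"id": "A4", "outcome": "str_filed", "documented": True},
]

outcomes = Counter(a["outcome"] for a in alerts)
str_filing_rate = outcomes["str_filed"] / len(alerts)

# Alerts closed without an STR must still carry a documented rationale;
# undocumented closures are the kind of gap an assessment would flag.
undocumented_closures = [
    a["id"] for a in alerts
    if a["outcome"] == "closed_no_str" and not a["documented"]
]

print(f"STR filing rate: {str_filing_rate:.0%}")          # 50%
print(f"Undocumented closures: {undocumented_closures}")  # ['A3']
```

Tracking these figures continuously, rather than reconstructing them at assessment time, makes it possible to spot documentation gaps before FINTRAC does.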

Does Canada's AIDA regulate financial AI?

Canada's Artificial Intelligence and Data Act (AIDA), Part 3 of Bill C-27, proposes a risk-based framework for high-impact AI systems. While AIDA had not received Royal Assent as of early 2026, it proposes that high-impact AI — including AI credit scoring and insurance pricing systems — would be subject to governance, impact assessment, and transparency requirements. Canadian financial institutions should prepare AI governance programs that would satisfy anticipated AIDA high-impact system requirements.

How does OSFI's Technology and Cyber Risk Guideline address AI?

OSFI's Technology and Cyber Risk Management Guideline (Guideline B-13) addresses AI within the broader technology risk framework. OSFI expects FRFIs to assess AI systems for technology risk — including availability risk, integrity risk, and confidentiality risk — and to manage AI vendor technology risk as part of their third-party technology risk management programs. AI-specific provisions focus on model drift monitoring and AI system availability controls.
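Model drift monitoring, mentioned above, is commonly measured with the Population Stability Index (PSI) between the training-time score distribution and recent production scores; a PSI above roughly 0.25 is a widely used (though not regulator-mandated) trigger for revalidation. A stdlib-only sketch of the calculation:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between two score samples, equal-width bins."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Floor at a small value so empty bins don't produce log(0)
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # training-time scores
drifted = [min(1.0, i / 100 + 0.3) for i in range(100)]    # scores shifted upward

assert psi(baseline, baseline) < 0.01   # identical distributions: PSI near zero
assert psi(baseline, drifted) > 0.25    # shifted distribution: drift flagged
```

Running such a check on a schedule, and logging the result into the model's monitoring record, is one way to evidence the ongoing drift monitoring that both E-23 and the technology risk guideline expect.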

What privacy requirements apply to AI training data in Canada?

AI training data in Canada must comply with the Personal Information Protection and Electronic Documents Act (PIPEDA) and, if enacted, its proposed successor the Consumer Privacy Protection Act (CPPA, Bill C-27 Part 1). Using personal financial data to train AI models requires a valid legal basis (consent or legitimate purpose), data minimization, purpose limitation, and retention limits. The CPPA would add algorithmic transparency rights for automated decision systems that significantly affect individuals.

Ready to strengthen your AI compliance program? Claire helps financial institutions navigate complex regulatory requirements with automated monitoring, audit trails, and examination-ready documentation. Book a demo with Claire.

Related: Finance AI Overview  |  AI Model Risk Management  |  Regulatory Compliance
