Australia APRA AI: CPS 230 Operational Risk, CPG 234 Information Security & ASIC AI Guidance

The Australian Prudential Regulation Authority (APRA) has established comprehensive AI governance requirements through its Prudential Standard CPS 230 Operational Risk Management (effective July 1, 2025), the CPG 234 Information Security guidance, and guidance issued jointly with ASIC on AI use in financial services. CPS 230 directly addresses AI-related operational risk by requiring regulated entities to assess and manage the operational risks of AI systems — including AI systems provided by third-party vendors.

A$7.7T
Total assets under APRA supervision (APRA Annual Report 2023)
APRA's CPS 230, effective July 2025, is the most significant prudential standard update in a decade. It explicitly requires regulated entities to identify, assess, and manage AI-related operational risks — treating AI failure as a material operational risk scenario requiring board-level oversight and documented controls.

APRA CPS 230 — Operational Risk Management (Effective July 1, 2025)

Standard: Prudential Standard CPS 230 Operational Risk Management
Effective date: July 1, 2025 for all APRA-regulated entities
Key AI provision: CPS 230 requires regulated entities to identify material operational risks — AI system failures are explicitly within scope as material operational risk events requiring risk assessment, controls, and board notification procedures
Service provider clause: Third-party AI providers must be assessed as material service providers where AI failure could significantly disrupt critical operations
CPG 234: Information Security guidance requires AI systems handling sensitive data to meet classification-appropriate security controls
Source: APRA CPS 230 — apra.gov.au

Regulatory Risks and Compliance Challenges

ASIC's 2024 guidance on AI in financial services identified specific compliance concerns for Australian financial services licensees (AFSLs) using AI in advice, lending, and investment management. ASIC's focus areas include: explainability of AI recommendations for retail clients; consumer protection for AI-driven financial product recommendations; and adherence to the best interests duty in AI-powered financial advice. ASIC has confirmed that existing AFSL obligations apply to AI regardless of the technology used to deliver advice.

Australia's AI governance landscape is further shaped by the Australian Government's National AI Framework (2023) and the Mandatory AI Guardrails for high-risk AI settings (2024 consultation). Financial institutions should anticipate mandatory AI governance requirements for high-risk financial AI — including credit scoring, insurance pricing, and automated investment advice — analogous to EU AI Act requirements. APRA and ASIC are coordinating with the Department of Industry, Science and Resources on sector-specific AI governance.

Claire's AI Compliance Solution

Claire Platform Capabilities

CPS 230 Operational Risk AI Assessment

Claire's operational risk module identifies AI systems that constitute material operational risks under CPS 230, assessing failure scenarios, concentration risks in vendor AI, and control effectiveness — generating documentation that meets CPS 230's board reporting and self-assessment requirements.

CPG 234 AI Information Security

Claire's security assessment module evaluates AI systems handling sensitive APRA-regulated entity data against CPG 234 security classification requirements — identifying control gaps and generating remediation plans for AI security governance.

ASIC Best Interests Duty for AI Advice

Claire monitors AI-driven financial advice outputs for compliance with the AFSL best interests duty (Corporations Act s961B) — flagging recommendations that may not be in the best interests of retail clients and generating documentation of the advice suitability assessment.

Compliance Checklist

AI Regulatory Compliance Requirements

01. AI model risk management framework: Governance applied to all quantitative AI models with inventory, validation, and monitoring.
02. Independent model validation: Annual independent validation of material AI models with results documented.
03. Examination-ready documentation: AI governance documentation maintained for regulatory access within 48 hours.
04. Third-party AI vendor oversight: Documentation of oversight activities for all AI vendors.
05. Fair lending and anti-discrimination monitoring: Regular testing of AI decisions for prohibited bias.
06. Consumer protection review: AI customer-facing tools reviewed for applicable consumer protection compliance.
07. Data quality governance: Training data quality documented and reviewed annually.
08. Immutable audit trail: Records of all AI decisions affecting consumers or regulatory obligations.
09. Board AI risk reporting: Quarterly AI risk reporting to the board covering model performance and regulatory developments.
10. Incident response plan: Written incident response plan for AI model failures with regulator notification protocols.
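The "immutable audit trail" item above can be made concrete with a hash-chained, append-only log: each decision record embeds the hash of the previous record, so any later alteration is detectable on verification. This is a minimal illustrative sketch in Python, not a prescribed implementation; the record fields and class name are assumptions for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of AI decisions. Each record is chained to the
    previous one by a SHA-256 hash, so any later edit is detectable."""

    def __init__(self):
        self._records = []

    def append(self, decision: dict) -> dict:
        # Genesis records chain to a fixed all-zero hash.
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "prev_hash": prev_hash,
        }
        # Hash a canonical (sorted-key) serialization of the record body.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev_hash = "0" * 64
        for record in self._records:
            if record["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

trail = AuditTrail()
trail.append({"model": "credit-scoring-v3", "applicant": "A-1001", "outcome": "approved"})
trail.append({"model": "credit-scoring-v3", "applicant": "A-1002", "outcome": "declined"})
assert trail.verify()

# Tampering with a stored decision breaks the chain.
trail._records[0]["decision"]["outcome"] = "declined"
assert not trail.verify()
```

In production such a log would typically be backed by write-once storage or periodic anchoring of the latest hash to an external system, so that the chain itself cannot be silently regenerated.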

Frequently Asked Questions

When does APRA CPS 230 take effect and what does it require for AI?

CPS 230 takes effect on July 1, 2025 for all APRA-regulated entities including ADIs, insurers, and RSE licensees. For AI, CPS 230 requires entities to: identify AI-related operational risks in their operational risk assessment; assess third-party AI vendors as potential material service providers; implement controls for AI failure scenarios; and include AI risks in board-level operational risk reporting.

How does APRA treat third-party AI providers under CPS 230?

CPS 230 requires entities to classify as 'material service providers' those providers whose failure or poor service could materially disrupt critical operations. Third-party AI vendors providing AI systems for credit assessment, fraud detection, or operational automation should be assessed against this standard. Material service providers are subject to enhanced due diligence, contractual protections, and ongoing oversight requirements under CPS 230.

What ASIC obligations apply to AI-powered financial advice?

AFSL holders using AI to provide financial advice must comply with the best interests duty (Corporations Act s961B), which requires advisers to act in the best interests of retail clients. AI advice systems must be designed to provide advice genuinely in the client's best interests — not to optimize for product distribution metrics. ASIC has stated that AI does not change the legal obligation; only the method of delivery changes.

How does CPG 234 apply to AI systems handling sensitive data?

APRA's CPG 234 Information Security guidance requires regulated entities to classify information assets by sensitivity and apply security controls appropriate to the classification level. AI systems that process or store sensitive prudential data — customer financial records, credit history, health data for insurers — must meet the security controls required for the highest sensitivity data they handle. AI model training pipelines handling sensitive data must be assessed under CPG 234.
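The classification-appropriate-controls idea can be sketched as a simple policy check: map each sensitivity level to a minimum control set, then report what an AI system is missing. The levels and control names below are hypothetical examples for illustration, not APRA-defined terms.

```python
# Illustrative mapping of data-sensitivity classifications to minimum
# security controls, in the spirit of CPG 234's classification-appropriate
# controls. All level and control names here are invented examples.
REQUIRED_CONTROLS = {
    "public":   {"access_logging"},
    "internal": {"access_logging", "role_based_access"},
    "sensitive": {"access_logging", "role_based_access", "encryption_at_rest"},
    "highly_sensitive": {
        "access_logging", "role_based_access", "encryption_at_rest",
        "encryption_in_transit", "key_management_review",
    },
}

def control_gaps(classification: str, implemented: set) -> set:
    """Return the controls still missing for the given classification level."""
    return REQUIRED_CONTROLS[classification] - implemented

# An AI training pipeline handling customer credit histories would inherit
# the highest classification of any data it touches.
gaps = control_gaps(
    "highly_sensitive",
    {"access_logging", "role_based_access", "encryption_at_rest"},
)
print(sorted(gaps))
```

A real assessment would of course draw the classification scheme and control catalogue from the entity's own information security policy rather than a hard-coded table.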

What is ASIC's enforcement approach to AI in financial services?

ASIC's enforcement approach focuses on consumer outcomes rather than technology method. ASIC examines whether AI-powered financial products produce outcomes that comply with applicable law — including best interests duty, product disclosure requirements, and consumer protection laws — regardless of whether the delivery mechanism is AI or human. ASIC has stated it will pursue enforcement where AI-driven advice or products cause consumer harm, applying existing legal frameworks.

Ready to strengthen your AI compliance program? Claire helps financial institutions navigate complex regulatory requirements with automated monitoring, audit trails, and examination-ready documentation. Book a demo with Claire.

Related: Finance AI Overview  |  AI Model Risk Management  |  Regulatory Compliance
