FCA Consumer Duty and AI: The Four Outcome Rules That Govern Every AI-Driven Financial Service

The FCA's Consumer Duty, effective July 31, 2023 for new and existing open products and services, represents the most significant shift in UK retail financial regulation in a generation. It replaces the previous "Treating Customers Fairly" framework with a substantively higher standard: firms must not merely avoid treating customers unfairly, they must actively deliver good outcomes. For firms deploying AI in customer-facing financial services, Consumer Duty is not a background compliance exercise — it is a framework that directly governs how AI communicates with customers, how AI-driven pricing is set, how AI handles vulnerable customers, and how firms evidence that their AI is actually achieving the outcomes the regulation requires.

FCA PS22/9 — Consumer Duty: Final Rules and Guidance, July 2022

Published: July 27, 2022 (PS22/9)
Effective dates:
  — July 31, 2023: New and existing open products and services
  — July 31, 2024: Closed products and services
Primary obligation: The Consumer Principle — firms must act to deliver good outcomes for retail customers (PS22/9 § 2.1)
Four outcome rules: Products and Services; Price and Value; Consumer Understanding; Consumer Support
Foreseeable harm standard: Firms must take reasonable steps to avoid causing foreseeable harm, even where the customer has not complained
AI-specific guidance: FCA CP23/24 (November 2023): AI and automated decision-making do not modify Consumer Duty obligations
Monitoring obligation: FCA Finalised Guidance FG22/5: monitoring framework must include outcome testing, not just process compliance
Official source: FCA PS22/9 — fca.org.uk

The Consumer Duty framework is outcome-oriented in a way that distinguishes it sharply from the process-compliance model of its predecessor. Under Treating Customers Fairly, a firm could demonstrate compliance by showing it had the right policies and procedures in place. Under Consumer Duty, policies and procedures are a starting point — the firm must demonstrate, with evidence, that its policies and procedures actually produced good outcomes for its customers. For AI-driven services, this means the evidence question is not "did we configure the AI correctly" but "did the AI actually deliver good outcomes to the customers it served."

1. Consumer Duty's Four Outcome Rules Applied to AI

Consumer Duty is structured around four outcome rules, each of which imposes specific obligations on firms deploying AI in retail financial services. Understanding how each outcome rule applies to AI is essential to building a compliant AI deployment architecture.

Outcome 1: Products and Services

The Products and Services outcome requires that firms design products and services that are fit for purpose and are targeted at an identified customer segment for whose needs the product is appropriate. For AI-driven financial products, this means the AI itself is part of the product design — including how the AI adapts the product presentation, recommendations, or terms to individual customers based on their data.

An AI that personalizes a product in ways that make it appropriate for some customers in the target segment but inappropriate for others — for example, by recommending add-ons or features to customers who have no need for them based on profit margin rather than customer need — creates a Products and Services outcome failure. The product design obligation extends to the AI's personalization logic, not just the base product specification.

Outcome 2: Price and Value

The Price and Value outcome requires that the price of a product or service is reasonable relative to the overall benefits it delivers to retail customers. This directly implicates AI-driven pricing engines that may, by design or by emergent behavior, offer different prices to different customers based on factors that correlate with their vulnerability, sophistication, or willingness to shop around rather than the risk or cost of serving them.

Dynamic pricing AI that identifies customers who are unlikely to switch providers and prices to capture maximum consumer surplus from those customers — rather than pricing to reflect the true cost and value of the product — creates Price and Value outcome risk. The FCA has explicitly stated that differential pricing strategies that exploit customer inertia or lack of market awareness are inconsistent with the Consumer Duty price and value standard.

Jul 31, 2023
Consumer Duty effective date for open products and services
Every AI-driven customer interaction, pricing decision, communication, and support process for retail customers in scope since that date must deliver evidenced good outcomes. Closed products joined on July 31, 2024. The FCA has confirmed that AI and automated decision-making create no exceptions to these obligations.

Outcome 3: Consumer Understanding

The Consumer Understanding outcome requires that firms' communications enable retail customers to make informed decisions. This is the outcome most directly affected by AI-generated content and automated communications. AI that generates personalized customer communications, product explanations, risk disclosures, or investment summaries must produce communications that actually enable the specific customer receiving them to understand the product, its risks, and its costs — not communications that are technically accurate but constructed in ways that the target customer cannot understand.

The Consumer Understanding outcome does not merely require that communications be accurate. It requires that they be comprehensible to the customer receiving them. An AI that generates highly technical product descriptions for retail customers who lack the financial literacy to understand them fails the Consumer Understanding outcome even if every statement in the description is accurate. The test is not whether the communication is objectively correct — it is whether it enables the specific customer to make an informed decision.

Outcome 4: Consumer Support

The Consumer Support outcome requires that firms provide timely and effective support to retail customers when they need it. For AI-driven support — including AI chatbots, automated service channels, and AI-assisted call centers — this means the AI support must actually resolve customer needs, not merely deflect them. A chatbot that consistently fails to resolve customer queries and routes customers to a queue where they wait 45 minutes for a human agent is not providing effective consumer support, even if the chatbot itself is technically functioning as designed.

2. The Foreseeable Harm Standard and AI Liability

The foreseeable harm standard is the most far-reaching aspect of Consumer Duty for firms deploying AI in customer-facing services. Under PS22/9, firms must take reasonable steps to avoid causing foreseeable harm to retail customers — including harm that has not yet occurred and including harm that the customer has not complained about.

The "foreseeable" standard requires firms to think prospectively about the ways their products and services — including their AI systems — could cause harm to customers, and to take reasonable steps to prevent that harm before it materializes. This is a proactive obligation, not a reactive one. The FCA does not require perfection — it requires that firms identify foreseeable harm and take reasonable preventive steps.

What Makes Harm Foreseeable in AI Systems

For AI deployed in retail financial services, harm is foreseeable where the design, training, or optimization of the AI system creates conditions that predictably produce adverse outcomes for at least some customers. Examples include: optimization objectives that reward revenue or retention without regard to customer outcomes; pricing logic that targets customer inertia or willingness to pay rather than cost and value; communications generated at a technical level the target segment cannot understand; and support flows that deflect customer queries rather than resolve them.

Foreseeable Harm Does Not Require Actual Harm: A firm that deploys an AI system with an unmitigated foreseeable harm has already breached the Consumer Duty foreseeable harm standard, even if no customer has yet complained and no harm has yet materialized in a measurable way. The FCA's supervisory approach to Consumer Duty includes proactive review of firm AI systems for foreseeable harm potential — not just reactive investigation of customer complaints. Firms should assess foreseeable harm before deploying AI, not after customers complain.

3. Sludge and Dark Patterns: AI-Generated Friction Enforcement Risk

Consumer Duty explicitly prohibits "sludge" — practices that use friction, complexity, or behavioral exploitation to impede customers from making decisions that are in their interest. The FCA's finalised guidance on Consumer Duty identifies sludge as a direct violation of the Consumer Principle and the Consumer Support outcome. For firms using AI, the sludge prohibition has particularly sharp teeth because AI is exceptionally capable of generating, optimizing, and personalizing friction at scale.

AI-Generated Sludge: The Key Patterns

The FCA's Consumer Duty guidance and its parallel work on behavioral economics identify several patterns of AI-generated friction that create Consumer Duty enforcement risk:

Auto-enrollment and negative option design: AI systems that automatically enroll customers in premium services, add-ons, or upgraded products based on behavioral triggers — and that make cancellation difficult, obscure, or friction-laden — violate the Consumer Duty principle that customers should be able to exercise their choices without unreasonable barriers. The Consumer Support outcome specifically addresses the obligation to make it as easy for customers to exit or switch products as it is to enter them.

Cancellation friction optimization: AI that is configured to optimize customer retention by inserting friction into the cancellation process — additional steps, waiting periods, hidden options, misleading framing — is directly within the Consumer Duty sludge prohibition. The FCA has explicitly stated that AI-designed retention flows that make cancellation artificially difficult violate the Consumer Principle even when each individual friction point might appear innocuous in isolation.

Choice architecture manipulation: AI-driven interface design that uses default settings, placement, framing, and visual prominence to direct customers toward choices that benefit the firm at the customer's expense is a dark pattern that Consumer Duty prohibits. This includes AI-optimized product comparison pages that make higher-margin products more visually prominent without that prominence reflecting genuine value differences.

The "Easy to Exit" Obligation: PS22/9 § 8.81 states explicitly that firms must make it as easy for customers to exit or switch products as it is to take them out. This "easy to exit" obligation is directly relevant to any AI system involved in offboarding, cancellation, or switching flows. If the AI makes the exit flow more complex or friction-laden than the entry flow — and the disparity cannot be justified by legitimate consumer protection or regulatory requirements — it creates a Consumer Duty violation.
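One way to operationalize the easy-to-exit comparison is a simple friction-parity check over the entry and exit journeys. The sketch below is illustrative only: the journey steps, friction flags, and function names are assumptions, and a real audit would use the firm's own journey instrumentation rather than hand-labelled steps.

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    name: str
    # Friction indicators: extra screens, waiting periods, mandatory calls, etc.
    is_friction: bool = False

def friction_count(journey: list[JourneyStep]) -> int:
    """Count the friction-bearing steps in a customer journey."""
    return sum(1 for step in journey if step.is_friction)

def exit_parity_check(entry: list[JourneyStep], exit_: list[JourneyStep]) -> dict:
    """Flag journeys where exit carries more friction than entry.

    A disparity is not automatically a violation -- friction justified by
    legitimate consumer protection is permitted -- but it must be documented.
    """
    entry_f, exit_f = friction_count(entry), friction_count(exit_)
    return {
        "entry_friction": entry_f,
        "exit_friction": exit_f,
        "requires_justification": exit_f > entry_f,
    }

# Hypothetical flows for illustration only.
entry = [JourneyStep("quote"), JourneyStep("identity check"), JourneyStep("confirm")]
exit_ = [JourneyStep("login"),
         JourneyStep("retention offer", is_friction=True),
         JourneyStep("call to confirm cancellation", is_friction=True),
         JourneyStep("confirm")]

result = exit_parity_check(entry, exit_)
```

A flagged disparity is the starting point of the audit, not its conclusion: the firm must either document a legitimate justification or simplify the exit flow.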

4. Vulnerable Customer Identification in AI

Consumer Duty's requirements for vulnerable customers represent one of the most technically demanding aspects of AI compliance under the framework. The FCA's Finalised Guidance FG21/1 on the fair treatment of vulnerable customers — which the Consumer Duty framework builds on — defines a vulnerable customer as someone who, due to their personal circumstances, is especially susceptible to harm, particularly when a firm is not acting with appropriate levels of care, and identifies four drivers of vulnerability: health, life events, resilience, and capability.

The AI Vulnerability Detection Obligation

Under Consumer Duty, firms must have processes for identifying customers who may be vulnerable and for adapting their service delivery to the needs of those customers. For firms deploying AI in customer interactions, this creates an obligation to detect vulnerability signals within AI-mediated interactions and route affected customers to appropriate support pathways. An AI chatbot that is unable to detect that a customer is in financial distress and continues to apply standard debt collection messaging to that customer fails the Consumer Duty vulnerable customer requirement.

The FCA has not prescribed the specific signals that AI must detect to identify vulnerability, but its guidance and supervisory communications identify several categories of vulnerability indicators relevant to AI systems: language patterns indicating distress or cognitive difficulty; disclosed life events such as bereavement, illness, or job loss; behavioral patterns indicating financial distress, such as missed payments or escalating borrowing; and interaction patterns suggesting the customer does not understand the product or process.

AI Interactions with Vulnerable Customers: Prohibited Approaches

Consumer Duty prohibits firms from using AI in ways that exploit the vulnerabilities of retail customers. This includes AI that is configured to engage more aggressively with customers showing financial distress signals — for example, AI debt collection systems that increase contact frequency when payment behavior deteriorates, or AI credit limit engines that make more credit available to customers showing signs that they cannot manage existing credit. These practices may generate short-term revenue but create foreseeable harm to vulnerable customers and violate the Consumer Duty foreseeable harm standard.

5. Monitoring and Evidence Requirements for AI-Driven Services

The Consumer Duty monitoring obligation is among its most operationally demanding requirements for firms using AI. FCA Finalised Guidance FG22/5 makes clear that the monitoring framework required under Consumer Duty must focus on outcomes — evidence that the firm's products, pricing, communications, and support are actually delivering good results for retail customers — not just inputs and processes.

What Outcome Testing Means for AI Systems

For AI-driven services, outcome testing requires that firms gather evidence about the actual results the AI is producing for customers — not just evidence that the AI is configured and operating as designed. A firm cannot satisfy the outcome testing requirement by demonstrating that its AI chatbot answered X% of queries within Y seconds. It must demonstrate that customers whose queries were handled by the AI actually received effective support — that they understood the information they received, that their issues were resolved, and that they were not directed toward outcomes that served the firm's interests over their own.
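The distinction between process metrics and outcome metrics can be made concrete with a small sketch. The interaction records and field names below are assumptions for illustration: the point is that a process metric (response speed) can look healthy while the outcome metrics FG22/5-style testing asks about (resolution, subsequent complaints) tell a different story.

```python
# Hypothetical interaction records; field names are illustrative assumptions.
interactions = [
    {"response_seconds": 2.1, "issue_resolved": False, "complaint_followed": True},
    {"response_seconds": 1.8, "issue_resolved": True,  "complaint_followed": False},
    {"response_seconds": 2.4, "issue_resolved": False, "complaint_followed": False},
]

# Process metric: looks healthy in isolation.
avg_response = sum(i["response_seconds"] for i in interactions) / len(interactions)

# Outcome metrics: the evidence outcome testing actually requires.
resolution_rate = sum(i["issue_resolved"] for i in interactions) / len(interactions)
complaint_rate = sum(i["complaint_followed"] for i in interactions) / len(interactions)

print(f"avg response {avg_response:.1f}s, "
      f"resolved {resolution_rate:.0%}, complaints {complaint_rate:.0%}")
```

Here the chatbot answers in about two seconds on average, yet resolves only a third of queries — exactly the gap between "configured as designed" and "delivering good outcomes".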

The evidence sources that the FCA expects firms to draw on for AI outcome monitoring include: customer complaint data and its root-cause analysis; outcome testing results for AI communications; AI interaction logs and resolution metrics; vulnerability identification and escalation data; and customer feedback gathered through the channels the AI serves.

Board Annual Consumer Duty Assessment: PS22/9 requires that the board of every firm subject to Consumer Duty receive and review an annual Consumer Duty assessment. This assessment must include evidence of the consumer outcomes the firm is achieving — including outcomes delivered by AI systems. A board that approves the annual Consumer Duty assessment without adequate evidence about AI-driven outcome quality is not discharging its Consumer Duty governance obligation. The FCA's supervisory approach includes review of the board Consumer Duty assessment process and the quality of the evidence base it rests on.

6. 12-Item FCA Consumer Duty AI Compliance Checklist

FCA Consumer Duty AI Compliance Checklist

01

Consumer Duty outcome mapping for all AI touchpoints: Map every customer-facing AI interaction to the relevant Consumer Duty outcome rule (Products and Services, Price and Value, Consumer Understanding, Consumer Support). For each AI touchpoint, identify the specific obligations that apply and the evidence required to demonstrate compliance. This mapping is the foundation of the Consumer Duty AI compliance framework.
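As a minimal sketch of the mapping described above — all touchpoint and outcome names are illustrative assumptions — a firm might maintain a machine-checkable register that flags any AI touchpoint left without a valid outcome mapping:

```python
# The four Consumer Duty outcome rules as index keys.
OUTCOMES = {"products_and_services", "price_and_value",
            "consumer_understanding", "consumer_support"}

# Hypothetical register of customer-facing AI touchpoints.
AI_TOUCHPOINTS = {
    "product_recommender":  {"products_and_services", "price_and_value"},
    "pricing_engine":       {"price_and_value"},
    "disclosure_generator": {"consumer_understanding"},
    "support_chatbot":      {"consumer_support", "consumer_understanding"},
}

def unmapped_touchpoints(touchpoints: dict[str, set[str]]) -> list[str]:
    """Return touchpoints with no valid Consumer Duty outcome mapping."""
    return [name for name, outcomes in touchpoints.items()
            if not outcomes or not outcomes <= OUTCOMES]
```

Running `unmapped_touchpoints` as part of deployment review makes the foundation check automatic: no AI touchpoint ships without at least one outcome rule, and its evidence obligations, attached.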

02

Foreseeable harm assessment for AI system design: Before deploying any new AI system in retail financial services, conduct a documented foreseeable harm assessment. The assessment must identify: what customer harms the AI could foreseeably cause; whether the design of the AI exacerbates those harms through its optimization objective; what safeguards are in place to prevent foreseeable harms; and what monitoring would detect harms that materialize after deployment.
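The four questions in that assessment can be carried in a documented record whose completeness is checked before deployment. This structure is a sketch under assumed field names, not a prescribed FCA format:

```python
from dataclasses import dataclass, field

@dataclass
class ForeseeableHarmAssessment:
    """Pre-deployment record mirroring the four assessment questions above.
    Field names are illustrative assumptions."""
    system: str
    foreseeable_harms: list[str] = field(default_factory=list)
    optimization_risks: list[str] = field(default_factory=list)  # does the objective exacerbate harm?
    safeguards: list[str] = field(default_factory=list)
    post_deployment_monitors: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A deployable assessment answers all four questions.
        return all([self.foreseeable_harms, self.optimization_risks,
                    self.safeguards, self.post_deployment_monitors])

# Hypothetical worked example.
assessment = ForeseeableHarmAssessment(
    system="retention-offer model",
    foreseeable_harms=["pricing exploits customer inertia"],
    optimization_risks=["objective rewards retention revenue, not customer value"],
    safeguards=["price bounds tied to cost-of-service"],
    post_deployment_monitors=["differential pricing drift report"],
)
```

Gating deployment on `is_complete()` turns the proactive obligation into a hard release criterion rather than a policy aspiration.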

03

Sludge audit of AI-driven customer journeys: Conduct a sludge audit of all AI-mediated customer journeys, with specific focus on cancellation, switching, complaint, and opt-out flows. Identify any friction that is not justified by legitimate regulatory or consumer protection requirements. Remove AI-designed friction that is designed to impede customer choices rather than protect customers. Document the audit and its findings as evidence of Consumer Duty compliance.

04

Vulnerable customer detection in AI communications: Implement vulnerability signal detection in AI systems that interact with customers. Define the signals the AI will detect — language patterns, behavioral indicators, disclosed life events — and the escalation pathways triggered by each signal type. Test the detection capability before deployment and monitor its performance continuously. Document the vulnerability detection framework for FCA supervisory review.
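A minimal sketch of the detection-and-escalation pattern follows. The signal lexicons, categories, and escalation actions are illustrative assumptions only — a production framework would be far richer, validated against the firm's FG21/1-aligned vulnerability taxonomy, and tested before deployment as the checklist item requires:

```python
import re

# Illustrative signal patterns per vulnerability category (assumed, not prescribed).
SIGNALS = {
    "financial_distress": re.compile(r"\b(can't pay|missed payments?|debt|struggling)\b", re.I),
    "life_event":         re.compile(r"\b(bereave|redundan|divorce|diagnos)\w*", re.I),
    "comprehension":      re.compile(r"\b(don't understand|confused|what does .* mean)\b", re.I),
}

# Hypothetical escalation routing per signal category.
ESCALATION = {
    "financial_distress": "pause_standard_collections_and_route_to_specialist",
    "life_event": "route_to_trained_agent",
    "comprehension": "switch_to_plain_language_and_offer_callback",
}

def detect_and_route(message: str) -> list[tuple[str, str]]:
    """Return (signal category, escalation action) pairs found in a message."""
    return [(name, ESCALATION[name])
            for name, pattern in SIGNALS.items() if pattern.search(message)]

actions = detect_and_route(
    "I've missed payments and I'm struggling since I was made redundant")
```

Note the design choice: detection triggers a routing decision, not just a log entry — matching the requirement that escalation pathways be defined per signal type.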

05

Consumer Understanding outcome testing for AI communications: Test AI-generated customer communications with representative samples of the target customer population — including customers with lower financial literacy — to assess whether the communications actually enable informed decision-making. Document the testing methodology and results. Update AI communication parameters when testing reveals comprehension failures. This is not a one-time exercise; it must be repeated when AI communication parameters change.
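Outcome testing with real customers cannot be automated away, but an automated readability screen can pre-filter obviously technical drafts before they reach human testing. The sketch below uses a crude Flesch-Kincaid grade estimate with a vowel-group syllable proxy — an assumption-laden heuristic, not a substitute for testing with representative customers:

```python
import re

def rough_grade_level(text: str) -> float:
    """Crude Flesch-Kincaid grade estimate (vowel-group syllable proxy).
    Pre-filter only: comprehension must still be tested with real customers."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

# Hypothetical AI-generated drafts for comparison.
plain = "You pay 5 pounds each month. You can cancel at any time."
jargon = ("Contingent deferred sales charges amortise across the surrender "
          "schedule pursuant to the prospectus supplement.")
```

A draft scoring far above the target segment's expected reading level would be routed back for regeneration before any customer, or any comprehension panel, sees it.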

06

Price and Value AI audit: Review AI pricing engines for Consumer Duty Price and Value compliance. Assess whether the pricing logic reflects genuine cost and value drivers rather than customer-specific willingness-to-pay signals or inertia exploitation. Document the pricing methodology and its relationship to the value the product delivers. Identify any differential pricing segments and assess whether the differences reflect legitimate factors or Consumer Duty-prohibited exploitation of customer behavior.
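One practical audit step is to partition the pricing model's input features into documented cost/value drivers, known exploitation-risk signals, and everything else. The factor names below are illustrative assumptions; the real lists come from the firm's documented pricing methodology:

```python
# Factors documented as legitimate cost/value drivers (assumed examples).
LEGITIMATE_FACTORS = {"claims_risk", "cost_to_serve", "product_tier", "term_length"}

# Signals associated with inertia or willingness-to-pay exploitation (assumed).
EXPLOITATION_SIGNALS = {"tenure_without_switching", "shopping_around_score",
                        "price_sensitivity_estimate"}

def audit_pricing_features(model_features: set[str]) -> dict[str, set[str]]:
    """Partition a pricing model's inputs for the Price and Value audit."""
    return {
        "legitimate": model_features & LEGITIMATE_FACTORS,
        "flagged": model_features & EXPLOITATION_SIGNALS,
        "unclassified": model_features - LEGITIMATE_FACTORS - EXPLOITATION_SIGNALS,
    }

report = audit_pricing_features({"claims_risk", "tenure_without_switching", "postcode"})
```

Flagged features need removal or a documented defense; unclassified features (here, a hypothetical `postcode` input) need classification before the audit can conclude, since they may proxy for either category.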

07

Easy-to-exit compliance for AI-managed products: Review all AI-managed customer exit, cancellation, and switching flows against the Consumer Duty "easy to exit" obligation. Entry and exit flows should be comparable in their friction levels. Where the exit flow is more complex or requires more steps, document the legitimate justification. Flows that cannot be justified must be simplified to meet the Consumer Duty standard.

08

AI interaction log retention for Consumer Duty evidence: Establish retention and indexing procedures for AI customer interaction logs that allow the firm to retrieve evidence of specific AI interactions for Consumer Duty supervision purposes. Logs must record sufficient information to reconstruct the AI's decision-making and communication in any given interaction — not just the inputs and outputs, but the AI parameters active at the time and any personalization logic applied to the specific customer.
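The record described above might be serialized along the following lines. The field names and the fixed schema are an illustrative minimum, not a prescribed FCA format — the test is whether the record lets the firm reconstruct the AI's decision-making, active parameters, and personalization for any given interaction:

```python
import json
import datetime

def build_outcome_record(customer_id: str, query: str, response: str,
                         model_version: str, personalization: dict,
                         resolved: bool, vulnerability_signals: list[str],
                         duty_outcome: str = "consumer_support") -> str:
    """Serialise one AI interaction into a retrievable Consumer Duty record."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "customer_id": customer_id,          # pseudonymised in production
        "query": query,
        "response": response,
        "model_version": model_version,      # AI parameters active at the time
        "personalization": personalization,  # logic applied to this customer
        "issue_resolved": resolved,
        "vulnerability_signals": vulnerability_signals,
        "duty_outcome": duty_outcome,        # index key for outcome-rule retrieval
    }
    return json.dumps(record)
```

Indexing on `duty_outcome` is what lets the compliance function answer a supervisory request organized around the four outcome rules rather than re-analyzing raw logs.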

09

Consumer Support outcome monitoring for AI channels: Implement outcome metrics for AI support channels that measure actual customer issue resolution — not just response speed or deflection rates. Track rates at which AI support interactions result in: issue resolved without human escalation; issue resolved after human escalation; issue unresolved at customer abandonment; and customer complaint following AI support interaction. Report these metrics to senior management as part of Consumer Duty monitoring.
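The four tracked rates above reduce to a simple aggregation once each interaction is labelled with its outcome bucket. Bucket labelling is assumed to happen upstream; the names here mirror the checklist item and are otherwise illustrative:

```python
from collections import Counter

# The four outcome buckets named in the checklist item above.
BUCKETS = ("resolved_no_escalation", "resolved_after_escalation",
           "abandoned_unresolved", "complaint_after_ai")

def support_outcome_report(labels: list[str]) -> dict[str, float]:
    """Rate of each Consumer Support outcome bucket across AI interactions.
    Input is one bucket label per interaction, assigned upstream."""
    counts = Counter(labels)
    total = len(labels) or 1
    return {bucket: counts[bucket] / total for bucket in BUCKETS}

# Hypothetical month of labelled interactions.
sample = ["resolved_no_escalation", "resolved_no_escalation",
          "abandoned_unresolved", "complaint_after_ai"]
report = support_outcome_report(sample)
```

Reporting these rates, rather than deflection or speed figures, is what gives senior management an outcome view of the AI channel.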

10

Closed book AI compliance assessment: Firms with closed products and services must have assessed their AI-driven interactions with closed book customers for Consumer Duty compliance as of July 31, 2024. If closed book customers interact with AI — for support queries, account management, or complaints — those interactions must meet the Consumer Duty standard. Closed book status does not exempt a product from Consumer Duty obligations that arise after its effective date.

11

SM&CR accountability for Consumer Duty AI: Under the Senior Managers and Certification Regime, identify the Senior Manager responsible for Consumer Duty compliance and ensure that person's Statement of Responsibilities includes oversight of AI-driven customer outcomes. Document the governance structure through which the responsible Senior Manager receives information about AI performance on Consumer Duty outcomes and the escalation pathway for Consumer Duty AI concerns.

12

Annual board Consumer Duty assessment with AI evidence: The annual board Consumer Duty assessment required under PS22/9 must include outcome evidence from AI-driven services. Prepare an AI Consumer Duty section of the annual assessment that reports on: outcome testing results for AI communications; vulnerable customer identification performance; complaint data for AI-mediated interactions; and any foreseeable harm identified and mitigated during the reporting period. This section must be supported by evidence, not assertions.

7. How Claire Meets FCA Consumer Duty Requirements

Consumer Duty's shift from process compliance to evidenced outcomes requires a different kind of AI compliance architecture — one that produces the evidence of good outcomes as an ongoing output, not one that merely avoids documented process violations. Claire's approach to Consumer Duty compliance is built around three operational principles that directly address the FCA's outcome-evidence requirements.

Claire's Consumer Duty Compliance Architecture

Outcome-Evidencing Interaction Logs

Every AI-mediated customer interaction facilitated by Claire generates a structured outcome record that goes beyond simple input-output logging. The outcome record captures: the customer's expressed need or query; the AI response and its sources; the comprehension signals embedded in the customer's subsequent behavior; whether the interaction achieved issue resolution; and any vulnerability signals detected and the escalation actions taken. These outcome records are indexed by Consumer Duty outcome category — Products and Services, Price and Value, Consumer Understanding, Consumer Support — so that the firm's compliance function can produce Consumer Duty outcome evidence organized around the FCA's framework, not just raw interaction data that requires manual analysis to convert into regulatory evidence.

Vulnerability Signal Detection with Automated Escalation

Claire's customer interaction AI incorporates a vulnerability detection layer that identifies a defined set of vulnerability signals — linguistic patterns indicating distress or cognitive difficulty, disclosed life events, behavioral patterns indicating financial distress — and automatically triggers escalation protocols appropriate to the signal type. The escalation logic is configurable to the firm's specific vulnerable customer framework and is documented in a way that satisfies the FCA's expectation that firms can demonstrate their vulnerability identification processes to supervisors. An AI that detects a customer in financial distress and continues standard engagement without adapting — the Consumer Duty failure mode that the FCA has explicitly identified as a concern — is prevented at the architecture level, not managed through post-hoc complaint analysis.

Annual Consumer Duty Assessment Support

The annual board Consumer Duty assessment required under PS22/9 requires evidence, and evidence requires systematic data collection throughout the year. Claire's Consumer Duty compliance module provides a structured annual assessment report that aggregates the Consumer Duty outcome evidence collected throughout the year — outcome testing results, vulnerability escalation rates, complaint analysis by Consumer Duty outcome category, sludge audit findings, and foreseeable harm assessments — into the format required for board review. The board receives a Consumer Duty report that meets the FCA's evidential standard, not a self-assessment narrative that substitutes assertion for proof.

8. Consumer Duty as Competitive Differentiator

The FCA's Consumer Duty is often framed primarily as a compliance burden — a new and demanding regulatory framework that requires significant investment in data collection, monitoring, governance, and evidence production. This framing is accurate. Consumer Duty is demanding. But it also creates a competitive dynamic that firms with strong Consumer Duty compliance architectures can exploit.

Firms that can demonstrate, with evidence, that their AI systems consistently deliver good outcomes for retail customers are building a regulatory compliance record that will insulate them from supervisory action, reduce their enforcement exposure, and support the FCA's ongoing supervisory relationship with the firm. Firms that deploy AI without the monitoring and evidence infrastructure to demonstrate Consumer Duty compliance are not just creating regulatory risk — they are likely operating AI systems that produce poor customer outcomes, which is also a customer retention and reputation risk.

The FCA's 2024 supervisory approach to Consumer Duty made clear that the regulator is actively examining firms' AI-driven customer outcome evidence, not just their policy documentation. Firms that can present comprehensive, well-organized Consumer Duty outcome evidence from their AI systems — organized around the four outcome rules, supported by specific data rather than general assertions — will receive materially better supervisory treatment than firms that cannot.

Building Consumer Duty compliance into your AI deployment? Claire works with FCA-regulated firms to design AI systems that meet the Consumer Duty foreseeable harm standard, generate the outcome evidence the FCA requires, and incorporate vulnerability detection that satisfies the FCA's expectations for customer-facing AI. Talk to Claire about Consumer Duty AI compliance.

Related reading:
Finance AI Overview  |  Starling Bank £29M KYC Fine  |  CFPB AI Fair Lending  |  SEC AI Washing Enforcement
