SEC's First AI Washing Enforcement: Delphia, Global Predictions, and What Financial AI Must Disclose
On March 18, 2024, the Securities and Exchange Commission announced charges against two investment advisers for making materially false and misleading statements about their use of artificial intelligence — the first "AI washing" enforcement actions in the history of US securities regulation. Delphia (USA) Inc. settled for $225,000. Global Predictions, Inc. settled for $175,000. Together, these cases establish that the existing securities law framework — the Investment Advisers Act, the Marketing Rule, Regulation Best Interest — applies fully to AI claims made by financial services firms. The SEC will treat false AI representations as securities fraud, not merely as marketing exaggeration.
In the Matter of Delphia (USA) Inc. — SEC Administrative Proceeding, March 2024
Date: March 18, 2024
Respondent: Delphia (USA) Inc., registered investment adviser
Penalty: $225,000 civil money penalty
Violations: Investment Advisers Act Section 206(4); Rule 206(4)-1 (Marketing Rule); Rule 206(4)-8
Core finding: Delphia falsely claimed in marketing materials that it used AI and machine learning to analyze client personal data to make personalized investment recommendations. It did not use AI or machine learning to do this. The claims were materially false and misleading.
Official source: SEC Release No. IA-6685 — sec.gov
In the Matter of Global Predictions, Inc. — SEC Administrative Proceeding, March 2024
Date: March 18, 2024
Respondent: Global Predictions, Inc., registered investment adviser
Penalty: $175,000 civil money penalty
Violations: Investment Advisers Act Section 206(4); Rule 206(4)-1 (Marketing Rule)
Core finding: Global Predictions falsely claimed to be "the first regulated AI financial advisor" and made additional false and misleading statements about AI capabilities in its investment process. The AI claims were not substantiated and were materially misleading to prospective clients.
Official source: SEC Release No. IA-6686 — sec.gov
SEC Enforcement Director Gurbir Grewal stated at the time of the actions: "We find that the AI claims made by these investment advisers were not true — AI washing is the new greenwashing." The analogy to ESG greenwashing is deliberate and carries significant regulatory implications. ESG enforcement has expanded rapidly since the SEC brought its first greenwashing cases in 2022. Financial services firms claiming AI capabilities should expect AI washing enforcement to follow the same trajectory.
1. What "AI Washing" Is and Why the SEC Targets It
AI washing is the practice of making false or misleading claims about the use or capabilities of artificial intelligence in a product, service, or investment strategy. In financial services, AI washing typically takes one of three forms: claiming to use AI when no AI is actually deployed; overstating the capabilities or accuracy of AI that is used; or implying that AI produces results that are actually generated by conventional rules-based systems, human analysis, or simpler statistical methods.
The SEC's interest in AI washing is grounded in investor protection. Investors who seek out AI-powered financial advisers are making a material choice — they are specifically seeking the claimed benefits of AI-driven investment analysis, personalization, or risk management. If the AI they believe they are receiving does not exist, or does not function as represented, the investment decision they made was based on false information. This is precisely the harm that securities fraud law is designed to prevent.
SEC Chair Gary Gensler has been emphatic on this point in multiple public statements: "The same rules that have long applied to securities marketing apply to AI claims. If you claim your AI does something, it must actually do that thing. AI does not create a new exception to the anti-fraud provisions of the securities laws." This framing — AI claims as securities representations subject to anti-fraud standards — is the legal architecture that the Delphia and Global Predictions cases instantiate.
The Delphia case is particularly instructive because the false claim was specific and verifiable: the firm stated in marketing materials that it used AI and machine learning to analyze client personal data and generate personalized investment recommendations. SEC examiners investigating the firm's actual investment process found that no such AI or machine learning system existed. The gap between the marketing claim and the operational reality was total — not a matter of degree or interpretation.
Global Predictions presented a different variant of AI washing: the claim to be "the first regulated AI financial advisor" is not merely a capability claim but a product differentiation claim. If the basis for that claim — being AI-driven in a meaningful, differentiated way — is false, then the competitive positioning built on it is itself a material misrepresentation to prospective clients who chose the firm based on that claimed distinction.
2. The Marketing Rule 206(4)-1: The Legal Framework for AI Claims
The primary legal basis for the SEC's AI washing enforcement actions is Rule 206(4)-1 under the Investment Advisers Act, commonly known as the Marketing Rule. Compliance with the Marketing Rule became mandatory on November 4, 2022, when it replaced the previous advertising and cash solicitation rules. It represents the most significant update to investment adviser marketing regulation in decades, and its principles-based prohibitions were designed with modern digital marketing practices in mind — which is why they extend naturally to AI-related claims.
The Seven Prohibited Practices Under the Marketing Rule
The Marketing Rule prohibits investment advisers from disseminating advertisements that contain any of seven categories of misleading content. The category most directly relevant to AI washing is the prohibition on materially false or misleading statements of fact, or omissions of material fact. An AI capability claim that is not true is a false statement of fact. An AI claim that accurately describes a capability the firm possesses but omits material limitations of that capability is a misleading omission.
The Marketing Rule also prohibits unsubstantiated performance claims, references to specific investment advice that was profitable without disclosing unprofitable advice, and testimonials or endorsements that do not meet specified disclosure requirements. All of these prohibitions apply to AI-related claims in investment adviser marketing. An adviser who claims "our AI achieved X% returns" without the required disclosures about how those returns were calculated, the time period, and the impact of fees violates the Marketing Rule regardless of whether the AI claim itself is accurate.
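The gap between gross and net performance that those disclosures exist to surface can be illustrated with a short sketch. The return series and the 1% annual advisory fee here are hypothetical, and the pro-rated fee deduction is a simplified assumption rather than the Marketing Rule's prescribed methodology; the point is only that "our AI achieved X%" is ambiguous until the calculation basis, period, and fee impact are stated.

```python
def net_of_fees_returns(gross_monthly_returns, annual_fee=0.01):
    """Convert gross monthly returns to net returns by deducting a
    pro-rated advisory fee each month (hypothetical 1% annual fee)."""
    monthly_fee = annual_fee / 12
    return [r - monthly_fee for r in gross_monthly_returns]

def cumulative_return(monthly_returns):
    """Compound a series of monthly returns into a cumulative return."""
    total = 1.0
    for r in monthly_returns:
        total *= 1 + r
    return total - 1

gross = [0.01] * 12  # hypothetical: 1% per month, gross of fees
net = net_of_fees_returns(gross)
print(f"gross 12-month return: {cumulative_return(gross):.4f}")
print(f"net 12-month return:   {cumulative_return(net):.4f}")
```

Even this simplified model shows a material spread between the two figures over a single year, which is why the rule requires both to be presented.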
The Substantiation Requirement
The Marketing Rule requires that investment advisers have a reasonable basis for believing that statements in their marketing materials are accurate. For AI claims, this means that the adviser must be able to document that the AI system described in marketing materials actually exists, actually performs the functions described, and actually generates the outputs attributed to it in client-facing materials. Internal documentation — technical specifications, model validation reports, operational logs — that can demonstrate the AI system's existence and functioning is not just good practice. It is a prerequisite for compliance with the substantiation requirement.
Compliance Program Requirements Under the Marketing Rule
Investment advisers subject to the Marketing Rule — which covers all registered investment advisers — are required to adopt and implement written policies and procedures that are reasonably designed to prevent violations of the rule. For firms that market AI capabilities, those policies must include a process for reviewing AI-related claims in marketing materials against the actual operational capabilities of the firm's AI systems before publication. This review process must be documented and must be part of the firm's annual compliance review.
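The pre-publication review process described above can be sketched as a simple gate: no AI claim ships unless it maps to a documented capability of a real system and carries both compliance and technical sign-off. The claim structure, capability registry, and system identifiers below are illustrative assumptions, not SEC-prescribed artifacts.

```python
from dataclasses import dataclass

@dataclass
class MarketingClaim:
    text: str        # the AI claim as it will appear in the advertisement
    system_id: str   # the operational system the claim describes
    compliance_signoff: bool = False
    technical_signoff: bool = False

# Hypothetical registry: documented, validated capabilities per AI system.
CAPABILITY_REGISTRY = {
    "reco-engine-v2": {"ml_ranking", "portfolio_personalization"},
}

def approve_for_publication(claim: MarketingClaim, claimed_capability: str) -> bool:
    """A claim may be published only if (1) the system it describes exists,
    (2) the claimed capability is documented for that system, and
    (3) both compliance and the responsible technical team have signed off."""
    documented = CAPABILITY_REGISTRY.get(claim.system_id, set())
    return (
        claimed_capability in documented
        and claim.compliance_signoff
        and claim.technical_signoff
    )
```

A Delphia-style claim, one that references an AI system with no operational counterpart, fails at step (1) before it ever reaches marketing, which is the layer at which that enforcement action suggests the failure should have been caught.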
3. MNPI and AI: Insider Trading Risk from AI Accessing Financial Data
The AI washing enforcement actions address one dimension of SEC AI risk — false claims about AI capabilities. A distinct and potentially more serious dimension of SEC AI risk involves the use of AI systems that access or process material non-public information (MNPI), creating insider trading liability for the firms and individuals involved.
MNPI is any information about a public company that is both material (would likely influence a reasonable investor's decision) and non-public (has not been disclosed to the general investing public). Trading on the basis of MNPI in breach of a duty of trust or confidence violates Section 10(b) of the Securities Exchange Act and Rule 10b-5 — the core insider trading provisions — regardless of how that MNPI is accessed or processed. An AI system that ingests MNPI and generates investment recommendations based on it does not immunize the firm or its clients from insider trading liability. The AI is not the trader — the firm acting on AI-generated recommendations based on MNPI is.
How AI Creates New MNPI Exposure
AI systems in financial services create MNPI exposure through data pipelines that may be poorly understood by the compliance function. An AI system that aggregates news feeds, social media, satellite imagery, credit card transaction data, or web scraping results may be ingesting information that constitutes MNPI without the compliance team being aware of the specific data sources feeding the model. The 2024 SEC examination priorities specifically identified AI model risk — including the risk that AI models incorporate MNPI through their data ingestion processes — as an area of supervisory focus.
Alternative data — datasets beyond traditional financial statements and market data — is a particularly acute area of MNPI risk for AI-powered investment advisers. Satellite imagery of retail parking lots, credit card transaction aggregates, employee review data, and web traffic analytics may all contain information that constitutes MNPI under the right circumstances. An AI system that trains on or queries these data sources must be subject to an MNPI assessment before deployment, not after an SEC examination reveals the issue.
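A deployment gate of the kind just described might look like the following sketch. The assessment fields, risk categories, and the block-by-default rule for unassessed sources are illustrative policy assumptions, not regulatory text.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataSourceAssessment:
    source_id: str
    description: str
    mnpi_risk: str               # "none", "possible", or "confirmed"
    assessed_on: Optional[date]  # None means no assessment was ever performed

def cleared_for_ingestion(a: DataSourceAssessment) -> bool:
    """A data source may feed an investment-decision model only if a
    documented MNPI assessment exists and found no MNPI. Sources rated
    "possible" would need an information barrier (not modeled here)."""
    if a.assessed_on is None:
        return False  # unassessed sources are blocked by default
    return a.mnpi_risk == "none"
```

The default-deny posture is the design point: a data pipeline the compliance function has never reviewed should not be reachable by the model, rather than reachable until someone objects.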
4. Regulation Best Interest: AI-Generated Investment Recommendations
Regulation Best Interest (Reg BI), effective June 30, 2020, requires broker-dealers to act in the "best interest" of retail customers when making investment recommendations. The SEC has consistently stated that the Reg BI standard applies fully to AI-generated investment recommendations — the fact that a recommendation is generated by an algorithm rather than a human adviser does not reduce or modify the broker-dealer's obligation to ensure the recommendation is in the customer's best interest.
The Four Obligations Under Reg BI
Reg BI imposes four specific obligations on broker-dealers making investment recommendations: a disclosure obligation, a care obligation, a conflict of interest obligation, and a compliance obligation. Each of these applies to AI-generated recommendations:
The Care Obligation requires that before making a recommendation, the broker-dealer exercise reasonable diligence to understand the investment and have a reasonable basis to believe the recommendation is in the customer's best interest based on their investment profile. For AI-generated recommendations, the care obligation requires that the firm understand what its AI is recommending and why — including the ability to explain the recommendation to the customer in terms they can understand. A black-box AI that generates recommendations the firm cannot explain cannot satisfy the care obligation.
The Conflict of Interest Obligation requires broker-dealers to identify and address conflicts of interest associated with recommendations. AI systems may create novel conflicts — for example, if the training data or optimization objective of the AI system is influenced by the firm's own product economics rather than pure customer outcomes, the resulting recommendations will reflect those conflicts. The Reg BI conflict of interest obligation requires firms to assess whether their AI's optimization objective creates recommendations that serve the firm's interests at the customer's expense.
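The care and conflict obligations can be made concrete as preconditions on a recommendation record before it is presented to a customer. The field names and the two checks below are an illustrative reading of those obligations, not a regulatory specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIRecommendation:
    security: str
    rationale: str                   # human-readable drivers of the recommendation
    profile_factors: List[str]       # customer-profile factors the model considered
    conflicts_identified: List[str]  # conflicts found in the conflict assessment
    conflicts_disclosed: List[str]   # conflicts actually disclosed to the customer

def presentable_to_customer(rec: AIRecommendation) -> bool:
    """Care obligation: the recommendation must carry an explanation tied to
    the customer's profile. Conflict obligation: every identified conflict
    must have been disclosed. A black-box output fails the first check."""
    explained = bool(rec.rationale.strip()) and bool(rec.profile_factors)
    conflicts_handled = set(rec.conflicts_identified) <= set(rec.conflicts_disclosed)
    return explained and conflicts_handled
```

An output with an empty rationale, the black-box case discussed above, is structurally incapable of passing the gate, which mirrors the point that an unexplainable recommendation cannot satisfy the care obligation.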
5. Form ADV Disclosure Requirements for AI Tools
Form ADV is the disclosure document that registered investment advisers file with the SEC and provide to clients. Part 2 of Form ADV — the adviser's "brochure" — must describe the adviser's investment process, methods of analysis, and material risks associated with those methods. The SEC has made clear that AI tools used in the investment process must be disclosed in Form ADV.
Specifically, Form ADV Part 2 must describe: the AI or algorithmic systems used to generate investment recommendations; the data sources those systems rely upon; the material limitations and risks of the AI methodology, including the risk of model error, data quality failures, or unexpected behavior in market conditions not represented in training data; and any conflicts of interest associated with the AI system, including conflicts in the firm's relationship with the AI vendor.
What Must Be Disclosed About AI Models
The SEC's guidance on Form ADV disclosure for AI tools tracks the general materiality standard: material information about the investment process must be disclosed. For AI tools, material information includes: whether AI is used at all; whether AI is the primary driver of investment decisions or a supporting tool; the nature of the data the AI ingests; known limitations or failure modes of the AI model; the firm's model validation and ongoing monitoring practices; and the procedures in place to identify and respond to AI model errors that affect client portfolios.
Advisers who describe their use of AI in Form ADV must also update their disclosures promptly when material changes occur — including when an AI system is decommissioned, when a new AI system is adopted, or when the firm becomes aware of material limitations or failure modes in an existing AI system. An adviser who continues to describe AI capabilities in Form ADV after those capabilities have been discontinued is making a false statement in a regulatory filing — the same violation at issue in the Delphia and Global Predictions cases, but in a different document.
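That consistency requirement — no system described in the brochure without a live counterpart, and no material system in production absent from the brochure — can be checked mechanically. The system identifiers below are hypothetical; the sketch assumes the firm maintains both lists.

```python
def adv_disclosure_gaps(disclosed: set, operational: set):
    """Compare AI systems described in Form ADV Part 2 against systems
    actually in production. Returns (stale, undisclosed):
    stale       - described in the brochure but no longer operating
                  (the Delphia-style false-statement risk)
    undisclosed - operating but absent from the brochure."""
    stale = disclosed - operational
    undisclosed = operational - disclosed
    return stale, undisclosed

stale, undisclosed = adv_disclosure_gaps(
    disclosed={"reco-engine-v1", "risk-model-a"},    # hypothetical names
    operational={"reco-engine-v2", "risk-model-a"},
)
```

Run as part of the annual compliance review (and on any system change), both result sets should be empty; anything in `stale` is a candidate false statement in a regulatory filing.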
6. 12-Item SEC AI Compliance Checklist for Financial Services Firms
The Delphia and Global Predictions enforcement actions, combined with the SEC's published examination priorities and chair statements, define a clear set of compliance requirements for investment advisers and broker-dealers using or claiming to use AI. The following checklist addresses the specific failure modes identified in the enforcement record.
SEC AI Compliance Checklist for Investment Advisers and Broker-Dealers
Marketing claim verification protocol: Establish a pre-publication review process for all marketing materials that reference AI, machine learning, or algorithmic capabilities. Each claim must be verified against documented evidence of the operational system it describes. Marketing claims must be reviewed by both the compliance function and the technical team responsible for the AI system before publication.
AI system documentation: Maintain current technical documentation for every AI system used in the investment process. Documentation must include system architecture, data inputs, model type, optimization objective, validation results, and known limitations. This documentation is the evidentiary foundation for substantiating marketing claims and satisfying Form ADV disclosure obligations.
Form ADV AI disclosure review: Conduct a line-by-line review of Form ADV Part 2 to ensure all AI tools used in the investment process are accurately described. Disclosures must be updated promptly when AI systems are added, modified, or discontinued. The AI disclosure must accurately describe the system's role (primary driver vs. supporting tool), data sources, and material limitations.
MNPI assessment for all AI data sources: Conduct a documented MNPI assessment for each data source feeding AI systems used in the investment process. Implement procedures to identify, quarantine, and report MNPI that may be present in AI training or inference data. Establish information barriers between AI data pipelines and personnel with trading authority.
Reg BI explainability requirement: Ensure that every AI-generated investment recommendation can be explained to the recommending registered representative in terms that allow them to satisfy the care obligation to the client. Black-box AI systems that cannot produce human-intelligible explanations of their recommendations do not satisfy Reg BI and must not be used to drive client recommendations without a supplementary explanation process.
Conflict of interest assessment for AI optimization objectives: Review the optimization objective of each AI system used in the investment process to assess whether the objective creates recommendations that favor firm economics over client outcomes. Document the assessment and disclose any identified conflicts in Form ADV and in client communications as required by Reg BI.
AI model validation and ongoing monitoring: Subject all AI models used in the investment process to independent model validation before deployment and at regular intervals thereafter. Validation must include backtesting, stress testing, and assessment of model behavior in market conditions not represented in training data. Monitoring must detect model drift and performance degradation on an ongoing basis.
Performance claim substantiation: Any performance claims — including AI-attributed performance — must be substantiated, must comply with the Marketing Rule's performance presentation requirements, and must include the required disclosures about time period, calculation methodology, and the impact of fees. Performance attributed to AI must be attributable to the specific AI system claimed.
AI vendor due diligence and contract terms: Investment advisers who use third-party AI tools must conduct due diligence on those vendors and must ensure that vendor contracts include: data ownership and privacy provisions, model performance warranties, disclosure rights allowing the adviser to describe the AI in Form ADV, and audit rights allowing the adviser to verify the AI's performance and data practices.
SEC examination readiness for AI: The SEC's 2024 examination priorities specifically identify AI use by investment advisers as an examination focus. Prepare examination responses that address: what AI systems are used, how they affect investment decisions, how the firm verifies AI performance, how conflicts of interest are managed, and how client disclosures are kept current. These questions should be answerable from documented records, not reconstructed after examination begins.
10-K and other public filing AI risk disclosure: Public companies using AI in material business processes must assess whether those uses create material risks requiring disclosure in SEC filings. AI-related material risks include: dependence on AI systems that may fail or be discontinued; liability from AI-generated outputs; regulatory risk from evolving AI regulation; and cybersecurity risk associated with AI data pipelines. Failure to disclose known material AI risks in public filings creates potential 10-K fraud exposure.
Annual AI compliance program review: The Marketing Rule requires annual review of the compliance program. The AI components of the compliance program — marketing claim verification, Form ADV AI disclosure, MNPI procedures for AI data, Reg BI explainability protocols — must be reviewed annually and updated to reflect changes in the firm's AI systems and changes in SEC guidance and enforcement.
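The validation and monitoring item in the checklist above calls for ongoing detection of drift and performance degradation. A minimal sketch, assuming the firm tracks a rolling model error metric against its validation-period baseline; the 25% tolerance is an assumed internal policy value, not an SEC threshold.

```python
from statistics import mean

def drift_detected(baseline_errors, recent_errors, tolerance=0.25):
    """Flag drift when the mean recent error exceeds the validation-period
    baseline error by more than `tolerance` (assumed policy value)."""
    baseline = mean(baseline_errors)
    recent = mean(recent_errors)
    return recent > baseline * (1 + tolerance)
```

A real monitoring program would use statistically grounded tests and multiple metrics, but even this toy version demonstrates the documentation the checklist demands: a defined baseline, a defined threshold, and a recorded decision rule that examiners can inspect.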
7. How Claire Maintains SEC Compliance
The SEC's AI washing enforcement actions establish that the compliance burden for AI claims in financial services is substantive, not formal. It is not enough to add disclaimers to marketing materials or to reference AI vaguely in Form ADV. The compliance requirement is that the AI systems described actually exist, actually perform the functions claimed, and are subject to the ongoing monitoring and governance that the securities laws require.
Claire's SEC Compliance Architecture for Financial AI
Verifiable AI Capability Claims
Claire's financial compliance architecture is built on a principle of documentary verifiability for every AI capability claim. Every function described in client-facing materials or Form ADV disclosures is matched to a documented technical specification, operational log, and validation report. Before any AI capability claim is published — in marketing materials, Form ADV, or client proposals — the compliance team reviews the claim against the technical documentation for the specific system it describes. The Delphia failure — claiming AI capabilities that did not exist — is prevented at the documentation layer, not discovered after an SEC examination.
Explainable Outputs for Reg BI Compliance
All AI-generated analysis and recommendations produced by Claire are accompanied by structured explainability outputs — human-readable summaries of the factors that drove the AI's output, the data sources considered, and the confidence level associated with the output. These explainability outputs allow the registered representative or adviser presenting Claire's analysis to satisfy the care obligation under Reg BI by explaining to the client why the recommendation is in their best interest based on their specific investment profile. Black-box outputs that cannot be explained to clients are not a feature of Claire's architecture — they are a design failure that Claire's explainability engine is specifically built to prevent.
MNPI-Clean Data Pipeline Design
Claire's data architecture includes a documented MNPI assessment for each data source used in financial analysis. Data sources that could contain MNPI are either excluded from the system or processed through an information barrier that prevents their use in generating investment recommendations for clients of firms with trading authority. The assessment is updated whenever new data sources are added, and the firm's compliance function has real-time visibility into the data sources currently active in the system. An SEC examiner asking about Claire's MNPI procedures receives a documented answer, not an improvised one.
8. The Precedent and What Comes Next
The Delphia and Global Predictions cases are founding precedents, not the last word in SEC AI enforcement. The $225,000 and $175,000 penalties reflect the early state of AI washing enforcement — small firms, relatively limited investor harm, and the novelty of the legal theory. As SEC AI enforcement matures, the cases that follow will involve larger firms, larger investor bases, and more sophisticated false AI claims. The penalties will be larger. The reputational consequences will be more severe.
SEC Chair Gensler's repeated public statements on AI in financial services have a consistent theme: existing law applies. The Investment Advisers Act, Regulation Best Interest, the anti-fraud provisions of the Exchange Act, and the disclosure obligations in Form ADV and public company filings all apply to AI-related claims and AI-driven investment processes in exactly the same way they apply to any other marketing claim or investment process. The firms that treat this as a new legal frontier requiring new regulation before compliance is required will find themselves in enforcement proceedings. The firms that apply existing compliance frameworks rigorously to their AI systems will be in a defensible position.
Related reading:
Finance AI Overview
CFPB AI Fair Lending
TD Bank $3.09B AML Fine
FCA Consumer Duty and AI