CCPA and CPRA AI Compliance: Automated Decision-Making Rights, CPPA Draft Regulations, and $7,500 Per-Violation Penalties

CCPA/CPRA Enforcement Reference

Max Fine (Intentional): $7,500 per violation
Max Fine (Unintentional): $2,500 per violation
Violations Involving Minors' Data: $7,500 per violation (the intentional-tier penalty applies to violations involving personal information of consumers under 16)
CPPA AI Regs: Draft 2024
The California Privacy Protection Agency (CPPA) published draft regulations on Automated Decision-Making Technology (ADMT) in November 2023, with revised drafts in 2024, making them the most comprehensive US state AI privacy rules to date. These rules create a right to opt out of ADMT used for significant decisions, a right to access information about how ADMT works, and mandatory pre-deployment risk assessments for ADMT in high-risk categories (employment, housing, education, credit, healthcare). California frequently sets the national standard: what California requires today, other states tend to adopt within two to three years.
Section 01

CCPA vs CPRA: What Changed for AI Systems

California's privacy law has two phases: the California Consumer Privacy Act (CCPA), effective January 1, 2020, established baseline rights; the California Privacy Rights Act (CPRA), effective January 1, 2023 with enforcement from July 2023, significantly expanded those rights and created the California Privacy Protection Agency (CPPA) as a dedicated enforcement body. For AI systems, CPRA's most significant additions are: (1) the right to correct inaccurate personal information (Civil Code Section 1798.106), which applies to AI-generated inferences; (2) the right to opt out of sharing personal information for cross-context behavioral advertising, which applies to AI personalization; (3) restrictions on "sensitive personal information" (SPI), including biometric data and precise geolocation; and (4) data minimization requirements for AI training data.

Under CPRA, "automated decision-making" uses are subject to enhanced transparency requirements. Businesses must disclose in their privacy policy whether they use personal information for automated decision-making and provide meaningful information about how the technology works. The CPPA's 2024 draft ADMT regulations go further, creating specific opt-out rights and risk assessment requirements.

Sephora Consent Order (2022)

California AG settled with Sephora for $1.2M for CCPA violations including selling consumer data without disclosure and failing to honor opt-out requests. First major CCPA enforcement action. Signals enforcement posture for AI data sharing.

DoorDash Settlement (2024)

$375,000 settlement with California AG for selling customer personal information to a third-party marketing cooperative without disclosing it as a "sale" under CCPA. Demonstrates broad interpretation of "sale" that may encompass AI model training data sharing.

CPPA Enforcement (2024)

CPPA (not AG) took its first enforcement action in 2024 against a data broker for failing to register and honor deletion requests. CPPA enforcement of ADMT regulations expected 2025-2026 following rulemaking completion.

Section 02

CPPA Draft ADMT Regulations: What They Require

The CPPA's draft Automated Decision-Making Technology (ADMT) regulations (most recent version: 2024 revisions to the November 2023 draft) define ADMT as "any system that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making." This definition is intentionally broad — it encompasses AI chatbots, recommendation systems, scoring models, and AI agents used for HR, credit, customer service, and operations.

Right to opt-out of ADMT: For ADMT used in "significant decisions" (employment, housing, education, credit, healthcare) or "extensive profiling" (profiling in public spaces, profiling of minors), consumers have the right to opt-out. Businesses must provide a clear and conspicuous opt-out mechanism and cannot deny service (or provide degraded service) to consumers who opt-out, except where ADMT is "reasonably necessary" to provide the service.
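The opt-out logic above can be sketched in code. This is a minimal illustration, not an implementation prescribed by the draft regulations; the class, field names, and category strings are hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative "significant decision" categories from the CPPA draft regs
SIGNIFICANT_DECISION_CATEGORIES = {"employment", "housing", "education", "credit", "healthcare"}

@dataclass
class OptOutRegistry:
    """Tracks consumers who opted out of ADMT for significant decisions."""
    opted_out: set = field(default_factory=set)

    def record_opt_out(self, consumer_id: str) -> None:
        self.opted_out.add(consumer_id)

    def admt_permitted(self, consumer_id: str, decision_category: str,
                       reasonably_necessary: bool = False) -> bool:
        """ADMT may run unless the consumer opted out of a significant-decision
        use, except where ADMT is 'reasonably necessary' to provide the service."""
        if decision_category not in SIGNIFICANT_DECISION_CATEGORIES:
            return True  # opt-out right attaches to significant decisions
        if consumer_id not in self.opted_out:
            return True
        return reasonably_necessary

registry = OptOutRegistry()
registry.record_opt_out("ca-user-123")
print(registry.admt_permitted("ca-user-123", "credit"))           # False: opted out
print(registry.admt_permitted("ca-user-123", "recommendations"))  # True: not a significant decision
```

Note the design choice: the gate defaults to permitting ADMT and only blocks on a recorded opt-out, mirroring California's opt-out (rather than consent-first) model.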

Right to access ADMT: Consumers may request human review of decisions made by ADMT and access to information about the logic, significance, and likely consequences of the automated processing. This is the California equivalent of GDPR Article 22 rights — but broader, applying to decisions with "significant effects" rather than only decisions with "legal or similarly significant effects."

Risk assessments for ADMT: Before deploying ADMT in high-risk categories, businesses must complete a written risk assessment evaluating: (1) the specific purpose and legal basis; (2) risks to consumers' privacy, civil rights, and civil liberties; (3) risk mitigation measures; (4) whether benefits outweigh risks. Risk assessments must be retained for 24 months and made available to the CPPA on request.
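The four required assessment elements and the 24-month retention window map naturally onto a simple record type. A minimal sketch, assuming hypothetical field names (the CPPA prescribes the content, not a data format):

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_DAYS = 730  # 24 months, per the draft regulations described above

@dataclass
class ADMTRiskAssessment:
    """Written risk assessment mirroring the four required elements."""
    purpose_and_legal_basis: str   # (1) specific purpose and legal basis
    consumer_risks: list           # (2) risks to privacy, civil rights, civil liberties
    mitigations: list              # (3) risk mitigation measures
    benefits_outweigh_risks: bool  # (4) benefit/risk conclusion
    completed_on: date

    def retain_until(self) -> date:
        """Date through which the assessment must stay available to the CPPA."""
        return self.completed_on + timedelta(days=RETENTION_DAYS)

    def is_complete(self) -> bool:
        return bool(self.purpose_and_legal_basis and self.consumer_risks and self.mitigations)

ra = ADMTRiskAssessment(
    purpose_and_legal_basis="Resume screening for California applicants",
    consumer_risks=["disparate impact", "opaque rejection logic"],
    mitigations=["human review of all rejections", "annual bias audit"],
    benefits_outweigh_risks=True,
    completed_on=date(2025, 1, 15),
)
print(ra.is_complete(), ra.retain_until())
```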

$7,500: Maximum CCPA/CPRA fine per intentional violation; can multiply rapidly across thousands of consumers
39M: California residents protected by CCPA/CPRA, the largest US state privacy law by population
2026: Estimated effective date for final ADMT regulations following the CPPA rulemaking process
Section 03

CCPA/CPRA AI Implementation: Consumer Rights Mechanics

Right to know (1798.100): Consumers can request disclosure of personal information collected, sold, disclosed, or used for AI processing. For AI systems, this means organizations must be able to identify all personal information that has been processed by their AI systems and generate a readable report of that processing history within 45 days of a verified request.
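A right-to-know response can be sketched as a lookup against the AI data inventory plus deadline tracking. The log structure and function below are hypothetical; only the 45-day window comes from the statute:

```python
from datetime import date, timedelta

RESPONSE_WINDOW_DAYS = 45  # statutory window after a verified request

# Hypothetical per-consumer processing log; a real system would query
# the organization's AI data inventory.
PROCESSING_LOG = {
    "ca-user-123": [
        {"system": "support-chatbot", "data": "conversation transcripts", "purpose": "service delivery"},
        {"system": "churn-model", "data": "usage history", "purpose": "retention scoring"},
    ],
}

def right_to_know_report(consumer_id: str, verified_on: date) -> dict:
    """Assemble a readable disclosure of AI processing for one consumer."""
    return {
        "consumer_id": consumer_id,
        "respond_by": (verified_on + timedelta(days=RESPONSE_WINDOW_DAYS)).isoformat(),
        "ai_processing": PROCESSING_LOG.get(consumer_id, []),
    }

report = right_to_know_report("ca-user-123", date(2025, 3, 1))
print(report["respond_by"], len(report["ai_processing"]))
```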

Right to delete (1798.105): Consumers can request deletion of their personal information, including information used to train AI models. This creates a practical challenge: once personal data has been used in AI model training, "deleting" it from the model may require retraining — which is expensive and potentially impossible for large foundation models. The CPPA has signaled that the deletion obligation applies to training data, creating significant implications for AI companies that fine-tune on customer data without explicit consent or a separate lawful purpose.
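One practical pattern for the training-data problem is to delete from live stores immediately and flag any model whose training corpus included the consumer, rather than silently skipping it. A sketch with a hypothetical inventory shape:

```python
def handle_deletion_request(consumer_id: str, inventory: dict) -> dict:
    """Delete a consumer's records and flag training-data use for follow-up.
    `inventory` maps store names to metadata; the shape is illustrative."""
    deleted, needs_retraining_review = [], []
    for store, meta in inventory.items():
        if consumer_id in meta["subjects"]:
            meta["subjects"].remove(consumer_id)  # delete from the live store
            deleted.append(store)
            if meta.get("used_for_training"):
                # Deletion from model weights may require retraining or scrubbing;
                # queue the affected corpus/model for review.
                needs_retraining_review.append(store)
    return {"deleted_from": deleted, "training_review": needs_retraining_review}

inventory = {
    "crm": {"subjects": {"ca-user-123", "ca-user-456"}, "used_for_training": False},
    "finetune-corpus": {"subjects": {"ca-user-123"}, "used_for_training": True},
}
result = handle_deletion_request("ca-user-123", inventory)
print(result)
```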

Right to correct (1798.106, CPRA): Consumers can request correction of inaccurate personal information, including AI-generated inferences stored about them. If an AI system generates a customer profile or risk score based on personal information, and that profile contains inaccurate information, the consumer can request correction. Organizations must implement a process to receive, verify, and act on correction requests for AI-generated data.
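The receive-verify-act process for corrections can be sketched as an in-place update with an audit trail. The profile shape and field names below are hypothetical:

```python
def handle_correction_request(profile: dict, field_name: str,
                              corrected_value, evidence: str) -> dict:
    """Apply a verified correction to an AI-generated profile field and
    record what changed, so the correction is auditable."""
    old = profile.get(field_name)
    profile[field_name] = corrected_value
    profile.setdefault("correction_log", []).append(
        {"field": field_name, "old": old, "new": corrected_value, "evidence": evidence}
    )
    return profile

# Example: correcting an AI-inferred attribute after verification
profile = {"consumer_id": "ca-user-123", "inferred_income_band": "low"}
handle_correction_request(profile, "inferred_income_band", "middle", "pay stub provided")
print(profile["inferred_income_band"])
```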

Sensitive personal information (1798.121, CPRA): SPI includes health information, racial/ethnic origin, religious beliefs, biometric data, precise geolocation, and financial account information. AI systems that infer or process SPI must disclose this use, allow consumers to limit SPI use to purposes necessary for service delivery, and cannot use SPI for AI training without an appropriate basis.
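Honoring a "Limit Use of My Sensitive Personal Information" election before AI training can be sketched as a field-level filter. The field names are illustrative stand-ins for the statutory SPI categories:

```python
# Illustrative SPI categories summarized from Section 1798.121 above
SPI_FIELDS = {"health_condition", "race_ethnicity", "religion",
              "biometric_id", "precise_geolocation", "financial_account"}

def strip_spi_for_training(record: dict, limit_spi: bool) -> dict:
    """Drop SPI fields from a record before it enters an AI training set
    when the consumer has exercised the SPI-limitation right."""
    if not limit_spi:
        return dict(record)
    return {k: v for k, v in record.items() if k not in SPI_FIELDS}

record = {"user_id": "ca-user-123", "precise_geolocation": "34.05,-118.24", "plan": "pro"}
print(strip_spi_for_training(record, limit_spi=True))
```

Note this only limits *use*; inferred SPI (e.g., health conditions inferred from behavior) would also need to be caught, which is harder than filtering named fields.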

Implementation Checklist

CCPA/CPRA AI Compliance Checklist

  • Update privacy policy for AI: Disclose all AI data processing, automated decision-making use, and data sharing for AI purposes in the privacy policy; include ADMT-specific disclosures
  • Implement opt-out mechanisms: Create an accessible opt-out for AI-based automated decisions ("significant decisions" category per CPPA draft regs); link from the homepage
  • Consumer rights request process: Build a 45-day response process for right-to-know, delete, and correct requests; include AI-processed data in the scope of responses
  • AI data inventory: Maintain a data inventory documenting all personal information used by AI systems, retention periods, and processing purposes
  • ADMT risk assessment: Complete a written risk assessment for AI systems in employment, housing, education, credit, or healthcare categories; retain for 24 months
  • Sensitive personal information controls: Identify AI processing of California SPI; implement a "Limit Use of My Sensitive Personal Information" link; document SPI use limitations
  • Training data review: Audit AI fine-tuning data sources for California resident data; implement deletion-from-training procedures or switch to non-personal training data
  • Vendor agreements: Include CCPA/CPRA data processing terms in all AI vendor contracts; confirm vendors support consumer rights mechanics
  • Minor data protections: Implement enhanced protections for consumers under 16; do not use ADMT for decisions affecting minors without explicit parental consent
  • CPPA monitoring: Track CPPA rulemaking developments; assign a compliance owner to monitor ADMT regulation finalization and the implementation timeline
FAQ

Frequently Asked Questions

Does CCPA/CPRA apply to B2B AI systems or only consumer AI?

CCPA/CPRA applies to personal information of California residents in any context where the business meets the thresholds: annual gross revenue over $26.6M, or processes personal information of 100,000+ California consumers/households, or derives 50%+ of annual revenue from selling/sharing personal information. The B2B exemption that existed under CCPA (for employee/contractor/business contact data) was temporary and expired January 1, 2023 — CPRA fully applies to employee and business contact data, meaning B2B AI systems with California employee data are covered.
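The three alternative thresholds can be expressed as a simple check. This is a sketch, not legal advice; the revenue figure is the inflation-adjusted number cited above and is periodically revised, so verify the current CPPA adjustment:

```python
def ccpa_applies(annual_revenue_usd: int, ca_consumers: int,
                 pct_revenue_from_selling: float) -> bool:
    """Any single threshold triggers coverage (thresholds are disjunctive)."""
    return (annual_revenue_usd > 26_600_000          # gross revenue threshold
            or ca_consumers >= 100_000               # CA consumers/households
            or pct_revenue_from_selling >= 0.5)      # revenue from selling/sharing PI

print(ccpa_applies(5_000_000, 120_000, 0.0))   # True: consumer-count threshold met
print(ccpa_applies(10_000_000, 2_000, 0.1))    # False: no threshold met
```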

What are the CCPA/CPRA fines for AI violations?

CCPA/CPRA fines are $2,500 per unintentional violation and $7,500 per intentional violation (or any violation involving a minor's personal information). Critically, each affected consumer can be a separate 'violation' — a company that intentionally uses AI to make automated employment decisions without required disclosures for 10,000 California employees could face fines of up to $75 million ($7,500 x 10,000). The CPPA also has injunctive authority to compel compliance. CCPA originally provided a mandatory 30-day cure period after notification; CPRA eliminated it, making a business's cure efforts only a discretionary factor the enforcer may consider.
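The per-consumer multiplication is worth making concrete. A one-line exposure calculation using the figures above:

```python
def max_exposure(affected_consumers: int, intentional: bool) -> int:
    """Worst-case fine exposure: each affected consumer can be a separate violation."""
    per_violation = 7_500 if intentional else 2_500
    return affected_consumers * per_violation

print(max_exposure(10_000, intentional=True))   # 75_000_000: the $75M figure above
print(max_exposure(10_000, intentional=False))  # 25_000_000 for unintentional violations
```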

How do CCPA/CPRA ADMT regulations differ from GDPR Article 22?

Both create rights around automated decision-making, but CPRA is broader in several ways. GDPR Article 22 applies only to decisions with 'legal effects or similarly significant effects,' while CPPA's draft ADMT regulations apply to 'significant decisions' (a broader category) and 'extensive profiling.' CPRA explicitly covers employment decisions where GDPR's employment derogation (Article 88) allows member state flexibility. California's opt-out model (versus GDPR's consent-or-exception model) also creates different implementation requirements. Organizations serving both EU and California residents must comply with both.

Can California consumers delete their data from AI models trained on their information?

This is an evolving area. The CPPA has signaled that the right to deletion extends to training data — personal information used to train AI models must be deletable upon request, which for trained models may require retraining or model scrubbing. Practically, for large foundation models, deletion from model weights is technically infeasible without full retraining. The CPPA is expected to address this in its ADMT regulations. The safest approach is to fine-tune AI models on anonymized or synthetic data so that CCPA deletion requests do not implicate model training.

How does Claire support CCPA/CPRA consumer rights?

Claire's architecture supports CCPA/CPRA compliance through: a customer data inventory API that enables organizations to identify all personal information processed by Claire on a per-consumer basis; data deletion API endpoints supporting right-to-delete requests within 45 days; AI inference and conversation log exports for right-to-know requests; opt-out configuration for automated decision-making use cases; and contractual CCPA data processing terms included in all customer agreements. Claire does not use customer data for model training without explicit customer consent.

How Claire Addresses CCPA/CPRA AI Compliance

Claire's data architecture is designed to support CCPA/CPRA compliance from day one: customer-specific data isolation, deletion APIs for right-to-delete requests, no use of customer data for model training without consent, and ADMT risk assessment templates for enterprise AI deployments. Schedule a compliance briefing to review Claire's California privacy compliance architecture.
