GDPR Compliance for AI Systems: Article 22 Automated Decisions, EDPB ChatGPT Guidelines, and 2024 AI Enforcement Actions

GDPR AI Enforcement Reference

• Max Fine: €20M or 4% of global annual revenue
• Meta GDPR Fine (2023): €1.2B
• ChatGPT EDPB Guidelines: Opinion 1/2024 (May 2024)
• EU AI Act Enforcement: August 2026
The Italian DPA (Garante) suspended ChatGPT on March 31, 2023, in the first major AI GDPR enforcement action, finding violations of lawful basis requirements (Article 6), privacy notice requirements (Articles 13/14), age verification obligations for minors, and breach notification obligations. OpenAI was given 20 days to comply and was later fined €15M (December 2024). The EDPB established a dedicated ChatGPT Task Force in April 2023 and published Opinion 1/2024 in May 2024, which DPAs now apply to AI systems processing EU personal data.
Section 01

GDPR Lawful Basis for AI Processing: The Six Bases and AI-Specific Challenges

GDPR Article 6 requires a lawful basis for processing personal data. For AI systems, the choice of lawful basis determines the architecture: consent (Article 6(1)(a)) requires granular, withdrawable consent mechanisms; legitimate interests (Article 6(1)(f)) requires a Legitimate Interests Assessment (LIA) balancing the controller's interests against data subjects' rights; contract performance (Article 6(1)(b)) is limited to strictly necessary processing; legal obligation (Article 6(1)(c)) applies to compliance-mandated processing; vital interests (Article 6(1)(d)) and public task (Article 6(1)(e)) have narrow applicability.

For enterprise AI assistants, the most commonly applicable bases are contract performance (the AI processes employee or customer data as part of delivering a contracted service) and legitimate interests (the organization has a legitimate interest in using AI to improve operational efficiency, balanced against the data subjects' expectations). The Italian Garante's 2023 ChatGPT suspension found that relying on "contract" for broad AI training purposes was not valid — the training of AI models goes beyond what is strictly necessary for service delivery. This ruling has significant implications: organizations cannot rely on contract performance as a lawful basis for using customer data to train or fine-tune AI models.
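The lawful-basis mapping described above can be captured in a machine-readable register that also feeds the Article 30 Records of Processing Activities. A minimal sketch in Python; the activity names and validation rules are illustrative, not a prescribed schema, and a real register needs DPO review:

```python
from dataclasses import dataclass
from enum import Enum

class LawfulBasis(Enum):
    """GDPR Article 6(1) lawful bases."""
    CONSENT = "6(1)(a)"
    CONTRACT = "6(1)(b)"
    LEGAL_OBLIGATION = "6(1)(c)"
    VITAL_INTERESTS = "6(1)(d)"
    PUBLIC_TASK = "6(1)(e)"
    LEGITIMATE_INTERESTS = "6(1)(f)"

@dataclass
class ProcessingActivity:
    """One row in a RoPA-style register: each AI processing
    purpose is tied to exactly one documented lawful basis."""
    purpose: str
    basis: LawfulBasis
    lia_reference: str = ""  # required when basis is legitimate interests

    def validate(self) -> list:
        """Flag gaps a DPO would need to close before go-live."""
        issues = []
        if self.basis is LawfulBasis.LEGITIMATE_INTERESTS and not self.lia_reference:
            issues.append(f"{self.purpose}: legitimate interests requires a documented LIA")
        # Per the Garante's ChatGPT decision, contract performance does not
        # cover model training, which goes beyond strict service necessity.
        if self.basis is LawfulBasis.CONTRACT and "training" in self.purpose.lower():
            issues.append(f"{self.purpose}: contract basis is not valid for model training")
        return issues

register = [
    ProcessingActivity("Customer support triage", LawfulBasis.CONTRACT),
    ProcessingActivity("Model training on support logs", LawfulBasis.CONTRACT),
]
for activity in register:
    for issue in activity.validate():
        print(issue)
```

The second register entry is flagged, mirroring the Garante's finding that contract performance cannot justify training on customer data.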

Special Category Data in AI Systems

GDPR Article 9 imposes additional restrictions on "special category" data: health data, biometric data, racial/ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, sex life/sexual orientation, and criminal convictions. AI systems that process conversations mentioning health conditions, that analyze voice/facial biometrics, or that operate in HR contexts where protected characteristics may be inferred must satisfy one of the Article 9(2) conditions. Explicit consent (9(2)(a)) is the most common basis, but requires a separate, specific consent mechanism distinct from general service terms.
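The Article 9 screening step can be sketched as a simple inventory check. The category strings below are illustrative labels for screening purposes, not a normative taxonomy (criminal-conviction data formally sits under Article 10 rather than Article 9):

```python
# Illustrative labels for GDPR Article 9(1) special categories.
SPECIAL_CATEGORIES = {
    "health", "biometric", "racial or ethnic origin", "political opinions",
    "religious beliefs", "trade union membership", "genetic",
    "sex life or sexual orientation",
}

def requires_article_9_condition(data_categories: set) -> bool:
    """True if any declared data category triggers Article 9, meaning
    an Article 9(2) condition (e.g. explicit consent under 9(2)(a))
    must be documented in addition to the Article 6 lawful basis."""
    return bool(data_categories & SPECIAL_CATEGORIES)

# An HR assistant that infers health status needs an Article 9(2) basis;
# plain contact data does not.
print(requires_article_9_condition({"health", "employment history"}))
print(requires_article_9_condition({"email address"}))
```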

Garante v. OpenAI (2023-2024)

March 2023: Italian DPA suspended ChatGPT for GDPR violations including unlawful basis, inadequate privacy notice, age verification failure, and breach notification failure. €15M fine imposed December 2024.

Meta €1.2B Fine (2023)

Irish DPC fined Meta €1.2 billion in May 2023 for transferring EU personal data to the US without adequate safeguards; Meta's reliance on SCCs was found insufficient post-Schrems II, before the July 2023 adequacy decision. Largest GDPR fine to date, and the same transfer rules apply directly to AI data transfers.

EDPB ChatGPT Task Force

EDPB established a dedicated ChatGPT Task Force in April 2023 and published Opinion 1/2024 in May 2024, addressing lawful basis for AI model training, transparency requirements, and data subject rights for AI-generated data.

Section 02

Article 22: Automated Decision-Making and Profiling

GDPR Article 22 gives individuals the right not to be subject to a decision "based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." This right directly applies to AI systems that make or significantly influence: credit decisions, employment decisions (screening, performance evaluation), insurance underwriting, housing applications, and customer service triage that denies or delays service.

The three conditions that exempt automated decisions from Article 22 restrictions (Article 22(2)) are: necessary for contract performance, authorized by EU/member state law, or based on explicit consent. For all three exceptions, Article 22(3) still requires "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests" — specifically "at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision."

Practical implications for enterprise AI: An AI agent that automatically approves or denies loan applications is an Article 22 system requiring explicit consent or legal authorization, human review capability, and an explanation mechanism. An AI agent that recommends products, routes support tickets, or drafts responses (with human approval before sending) is generally not an Article 22 system because humans make the final decision. The key question is whether the AI decision itself, without human review, produces legal or similarly significant effects on individuals.
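The Article 22 test described above can be expressed as a small triage function. A sketch under stated assumptions: the boolean inputs are illustrative self-assessment flags, and a real determination requires legal review:

```python
def article_22_applies(solely_automated: bool,
                       legal_effect: bool,
                       similarly_significant_effect: bool) -> bool:
    """Article 22(1): the right applies only when a decision is based
    SOLELY on automated processing AND produces legal or similarly
    significant effects. Meaningful human review before the decision
    takes effect removes the 'solely automated' element."""
    return solely_automated and (legal_effect or similarly_significant_effect)

# Auto loan approval with no human in the loop: Article 22 applies.
print(article_22_applies(True, legal_effect=True,
                         similarly_significant_effect=False))
# AI-drafted reply approved by a human before sending: not Article 22.
print(article_22_applies(False, legal_effect=False,
                         similarly_significant_effect=True))
```

If the function returns True, the Article 22(3) safeguards follow: human intervention on request, the right to express a point of view, and the right to contest the decision.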

• €20M: Maximum GDPR fine, or 4% of global annual revenue, whichever is higher
• 72 hrs: GDPR breach notification deadline; applies to AI systems that expose personal data
• 2024: EDPB Opinion 1/2024 on lawful basis for AI training; scraping public data does not in itself create a lawful basis
Section 03

Data Processing Agreements, Sub-Processors, and AI Data Transfers

GDPR Article 28 requires a Data Processing Agreement (DPA) between a controller and any processor that processes personal data on the controller's behalf. For enterprise AI deployments, the DPA chain runs: Customer (controller) → Claire (processor) → AI API providers, infrastructure providers, and other sub-processors. Each link in this chain requires a valid DPA, and sub-processor changes require customer notification (typically 30 days' advance notice).

For cross-border data transfers from the EU/EEA, GDPR Chapter V requires an appropriate transfer mechanism. Following the Schrems II judgment (CJEU, July 2020) that invalidated the Privacy Shield framework, valid mechanisms include: EU Standard Contractual Clauses (SCCs, updated June 2021), the EU-US Data Privacy Framework (adequacy decision, July 2023), Binding Corporate Rules (BCRs), or individual adequacy decisions. Organizations using US-based AI API providers (OpenAI, Anthropic, AWS, Google) must ensure the transfer mechanism is documented — typically the EU-US DPF for US providers who have self-certified, or updated SCCs for those who have not.
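Documenting a Chapter V transfer mechanism per sub-processor can be checked mechanically before deployment. A hedged sketch; the vendor names and statuses are placeholders, not verified certification data, and actual DPF status must be confirmed against the official list at dataprivacyframework.gov:

```python
# Recognized post-Schrems II transfer mechanisms (labels are illustrative).
VALID_MECHANISMS = {"EU-US DPF", "SCCs (June 2021)", "BCRs", "adequacy decision"}

def check_transfers(subprocessors: dict) -> list:
    """Return sub-processors whose documented Chapter V transfer
    mechanism is missing or no longer valid after Schrems II."""
    return [name for name, mechanism in subprocessors.items()
            if mechanism not in VALID_MECHANISMS]

# Placeholder register mapping each sub-processor to its documented mechanism.
register = {
    "us-llm-provider": "EU-US DPF",
    "us-hosting-provider": "SCCs (June 2021)",
    "analytics-vendor": "Privacy Shield",  # invalidated by Schrems II (2020)
}
print(check_transfers(register))  # flags the Privacy Shield entry
```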

AI model training on EU personal data: EDPB Opinion 1/2024 addressed a critical question: can web scraping or customer data be used to train AI models under legitimate interests? The EDPB's answer was nuanced but restrictive: the "balancing test" required for legitimate interests must specifically address the data subjects' reasonable expectations at the time of data collection. Data subjects who submitted data for customer service interactions would not reasonably expect that data to be used for AI model training, undermining a legitimate interests basis for training without explicit consent.

Implementation Checklist

GDPR AI Compliance Checklist

  • Document lawful basis per processing purpose: Map each AI data processing activity to a specific Article 6 lawful basis; document in Records of Processing Activities (RoPA) per Article 30
  • Execute GDPR-compliant DPA: Sign a DPA with all AI processors; confirm updated SCCs (June 2021 version); review sub-processor list and notification procedures
  • DPIA for high-risk AI processing: Conduct a Data Protection Impact Assessment (Article 35) for AI systems with systematic monitoring, large-scale special category data, or AI decisions affecting individuals
  • Article 22 compliance review: Audit AI decisions for Article 22 applicability; implement human review mechanisms and explanation requirements for automated decisions
  • Privacy notice updates: Update privacy notices to disclose AI processing, automated decision-making, and cross-border transfers; ensure Article 13/14 information is provided
  • Data transfer mechanism: Document the EU-US transfer mechanism (EU-US DPF or SCCs) for all US AI API providers; verify DPF self-certification status at dataprivacyframework.gov
  • Breach notification procedures: Implement 72-hour breach notification capability for AI data incidents; document AI-specific breach scenarios in the incident response plan
  • Data subject rights for AI: Implement access, erasure, and restriction rights for AI-processed data; address the right to explanation for automated decisions (Article 22(3))
  • Special category data controls: Identify AI processing of Article 9 special category data; implement Article 9(2) basis documentation and additional safeguards
  • EU AI Act alignment: Map AI systems to EU AI Act risk categories; identify high-risk systems requiring conformity assessment by August 2026
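The 72-hour breach notification item in the checklist above can be made concrete: Article 33(1) requires notifying the supervisory authority without undue delay and, where feasible, within 72 hours of the controller becoming aware of the breach. A minimal deadline helper (timezone handling is deliberately simplified to UTC):

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR Article 33(1)

def notification_deadline(awareness_time: datetime) -> datetime:
    """Latest time for notifying the supervisory authority, counted
    from when the controller becomes aware of the breach."""
    return awareness_time + NOTIFICATION_WINDOW

aware = datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
print(deadline.isoformat())  # 2024-03-04T09:00:00+00:00
```

The clock runs on awareness, not occurrence, which is why incident response plans should log the awareness timestamp explicitly.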
FAQ

Frequently Asked Questions

Does GDPR apply to AI chatbots and virtual assistants?

Yes. Any AI system that processes personal data of EU/EEA residents is subject to GDPR, regardless of where the AI vendor is based. This includes AI chatbots that collect names, email addresses, or account information; AI assistants that access CRM data containing personal information; AI agents that process conversation logs; and AI systems that analyze behavioral data for personalization. The Italian Garante's 2023 ChatGPT suspension confirms that GDPR fully applies to AI systems.

Can we use customer data to fine-tune AI models under GDPR?

Only with care. EDPB Opinion 1/2024 makes clear that using customer data for AI model training requires a valid lawful basis specifically for that training purpose. Contract performance (delivering the service) generally does not extend to model training. Legitimate interests requires a balancing test that must account for data subjects' reasonable expectations. Explicit consent is the most defensible basis but requires affirmative opt-in. Many organizations fine-tune AI on synthetic data or anonymized data to avoid this issue entirely.

What is the GDPR requirement for AI-generated explanations?

GDPR Articles 13(2)(f), 14(2)(g), and 15(1)(h) require that where automated decision-making under Article 22 takes place, controllers provide 'meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.' For AI systems, this creates an explainability requirement. The EDPB has interpreted this to require more than a generic statement that AI is used: the explanation must be specific enough that a data subject can understand why a particular decision was made about them. This is technically challenging for complex neural network models and is one driver of interest in explainable AI (XAI) techniques.

What is the EU AI Act relationship with GDPR for AI systems?

The EU AI Act (entered into force August 2024, full enforcement August 2026) and GDPR operate in parallel for AI systems processing personal data. Both must be complied with. GDPR governs the data processing aspects; the EU AI Act governs AI system risk, transparency, and human oversight. For high-risk AI systems under the AI Act, Article 10 requires high-quality training data, including data governance practices that align with GDPR. The EDPB and the European Artificial Intelligence Board are coordinating to ensure consistent enforcement.

How does Claire ensure GDPR compliance for EU customers?

Claire executes a GDPR-compliant Data Processing Agreement with all EU customers, incorporating the June 2021 Standard Contractual Clauses for international transfers. Claire processes EU customer data in EU AWS regions (Frankfurt, Ireland) by default, with no transfer to US systems unless explicitly configured. Claire's EU customer data is not used for model training. Data subject rights (access, erasure, restriction, portability) are supported via API and customer portal. Claire's DPA includes a 30-day sub-processor change notification obligation.

How Claire Addresses GDPR AI Compliance

Claire's GDPR compliance architecture is designed for EU enterprises in regulated industries. Our DPA covers all processing activities, our EU data residency keeps data in EU AWS regions, and our automated decision support tools include the human oversight mechanisms required by Article 22. Request our DPA template and GDPR technical measures documentation.
