SOC 2 Type II for AI Systems: Applying AICPA Trust Service Criteria to LLMs, Agents, and AI Data Pipelines

SOC 2 Type II AI Reference

  • Audit Observation Period: 6–12 months
  • Required TSC: CC Series (Security)
  • AI-Specific TSC: CC6.1, CC7.2, P1.1
  • Enterprise RFP Requirement: ~100% require SOC 2 Type II
SOC 2 Type I vs Type II: Enterprise contracts almost universally require Type II. A SOC 2 Type I report tests whether security controls are suitably designed as of a point in time. A SOC 2 Type II report tests whether those controls operated effectively over a period of 6–12 months. Enterprise procurement teams, Fortune 500 security reviews, and regulated-industry compliance programs almost universally require Type II reports, not Type I. A vendor with only a Type I report has not demonstrated that its controls work in practice.
Section 01

AICPA Trust Service Criteria: The Five TSC Categories

The AICPA's Trust Services Criteria (TSC) framework, last updated in 2022, organizes SOC 2 controls into five categories. Security (CC series) is the only required category; Availability, Processing Integrity, Confidentiality, and Privacy are optional but commonly included for AI systems that process sensitive data. Each category contains multiple criteria with "Points of Focus" that describe how organizations can demonstrate compliance.

For AI systems, the most relevant criteria are: CC6.1 (logical access security — applies to model access, API authentication, and prompt/completion logging), CC6.6 (security controls over third-party access — applies to AI API providers like OpenAI, Anthropic, and AWS Bedrock as sub-processors), CC7.2 (monitoring for unauthorized access — applies to real-time detection of prompt injection and data exfiltration attempts), and P1.1 through P8.1 (Privacy series — applies to AI systems processing personal data under GDPR and CCPA).

CC Series — Security (Required)

Common Criteria CC1-CC9 cover risk assessment, communication, monitoring, logical access, operations, change management, risk mitigation, and vendor management. All SOC 2 audits include CC series.

A Series — Availability

A1.1-A1.3 cover availability commitments, capacity monitoring, and recovery procedures. Critical for AI systems with uptime SLAs. Documents recovery time objective (RTO) and recovery point objective (RPO) commitments and redundancy architecture.

P Series — Privacy (for AI data)

P1.1-P8.1 apply when AI systems process personal information. Covers notice (P1), choice (P2), collection (P3), use/retention/disposal (P4-P6), access (P7), disclosure (P8). Maps to GDPR and CCPA obligations.

Section 02

Scoping AI Systems for SOC 2 Audits

The audit scope for an AI system must be carefully defined to include all components that affect the Trust Service Criteria. For an AI agent platform like Claire, the SOC 2 scope includes: the AI inference infrastructure (model API calls, caching layers), the retrieval-augmented generation (RAG) pipeline (vector databases, document stores), the orchestration layer (agent logic, tool definitions, action execution), the data persistence layer (conversation history, customer data), the audit logging infrastructure, and the administrative interfaces (configuration portals, monitoring dashboards).

Sub-processors are a critical scoping element. When Claire uses AWS (infrastructure), Anthropic or OpenAI (model APIs), Pinecone (vector database), and other third-party services, the SOC 2 audit must address how Claire monitors those sub-processors' security posture. CC6.6 (security controls over third-party access) requires documenting sub-processor security requirements, reviewing their SOC 2 reports annually, and maintaining contractual security obligations.
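The annual review cadence that CC6.6 implies can be tracked mechanically so no sub-processor's report slips past its window. A minimal sketch, assuming a simple mapping of sub-processor names to last-review dates (the names and dates below are illustrative, not live data):

```python
from datetime import date

def overdue_subprocessor_reviews(reviews, today, max_age_days=365):
    """Return sub-processors whose last SOC 2 report review is older
    than the policy window (default: annual, per CC6.6)."""
    return sorted(
        name for name, last_review in reviews.items()
        if (today - last_review).days > max_age_days
    )

# Example: one review is current, one is overdue.
reviews = {
    "aws": date(2025, 3, 1),
    "pinecone": date(2023, 11, 15),
}
print(overdue_subprocessor_reviews(reviews, today=date(2025, 6, 1)))  # ['pinecone']
```

Running a check like this on a schedule, and logging each run, is itself evidence a Type II auditor can sample.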

Key AI-Specific SOC 2 Control Areas

Prompt and completion logging: CC6.1 requires that all access to sensitive data be logged. For AI systems, this means logging every prompt sent to the model and every completion received — particularly when prompts contain PII or when completions may contain sensitive information derived from RAG retrieval.
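As a sketch of what a CC6.1-style interaction record might look like, the entry below captures the fields auditors typically sample (timestamp, user, model version, token counts) while hashing the raw text so the log itself does not become a second copy of sensitive data. The field names are illustrative, not a standard schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_llm_interaction(user_id, model_version, prompt, completion):
    """Build one structured audit record for an LLM call (CC6.1 evidence).

    Raw text is stored as SHA-256 digests; the full prompts and completions
    can live in a separate encrypted store keyed by these digests."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "completion_sha256": hashlib.sha256(completion.encode()).hexdigest(),
        # Whitespace split is a crude stand-in for a real tokenizer count.
        "prompt_tokens": len(prompt.split()),
        "completion_tokens": len(completion.split()),
    })

entry = log_llm_interaction("u-123", "model-2025-01",
                            "Summarize Q3 revenue.", "Revenue rose 8%.")
```

Hashing rather than storing plaintext keeps the audit log itself out of PII scope while still letting an auditor verify that every call was recorded.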

Model access controls: CC6.1 also requires logical access security for the AI models themselves. This includes API key management (rotation schedule, least-privilege scoping), model version control (preventing unauthorized model updates), and access to model training data or fine-tuning infrastructure.
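Both the rotation schedule and least-privilege scoping can be checked automatically. This sketch assumes a hypothetical scope name and a 90-day rotation policy; real key metadata would come from the provider's key-management API:

```python
from datetime import date, timedelta

ALLOWED_SCOPES = {"inference:invoke"}  # hypothetical least-privilege allow-list

def audit_api_key(key_id, scopes, last_rotated, today, max_age_days=90):
    """Return findings for one API key; an empty list means compliant."""
    findings = []
    if today - last_rotated > timedelta(days=max_age_days):
        findings.append(f"{key_id}: rotation overdue")
    extra = sorted(set(scopes) - ALLOWED_SCOPES)
    if extra:
        findings.append(f"{key_id}: scopes beyond least privilege: {extra}")
    return findings

print(audit_api_key("key-1", ["inference:invoke", "admin:write"],
                    last_rotated=date(2025, 1, 1), today=date(2025, 6, 1)))
```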

AI output monitoring: CC7.2 (monitoring for unauthorized or malicious activity) applies to AI outputs that may indicate prompt injection, data exfiltration, or policy violations. Automated monitoring of AI outputs for anomalous content patterns (PII appearing in outputs where it shouldn't, instructions being followed that contradict system prompts) is a control that auditors increasingly expect for AI systems.
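A simple pattern-based scan illustrates the shape of a CC7.2 output control. The regexes below cover only a few PII categories and are illustrative; a production system would use a dedicated PII-detection service with far broader coverage:

```python
import re

# Illustrative patterns only; real PII detection needs much broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_output_for_pii(completion):
    """Return the sorted PII categories detected in a model completion."""
    return sorted(name for name, pattern in PII_PATTERNS.items()
                  if pattern.search(completion))

print(scan_output_for_pii("Reach Jane at jane@example.com or 555-867-5309"))
# ['email', 'phone']
```

Wiring a scan like this into the completion path, with hits routed to SIEM alerting, turns ad-hoc review into a continuously operating control.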

  • 93: Annex A controls in ISO 27001:2022, complementary to SOC 2 for AI governance
  • 2023: ISO/IEC 42001 published, the first international AI management system standard
  • 72 hrs: GDPR breach notification deadline, which must be reflected in SOC 2 incident response procedures
Section 03

Continuous Monitoring for SOC 2 Type II Compliance

SOC 2 Type II auditors test whether controls operated effectively throughout the observation period — not just at audit time. This requires continuous monitoring infrastructure that generates evidence of control operation. For AI systems, continuous monitoring must cover: API authentication attempts and failures (CC6.1 evidence), AI output anomaly detection (CC7.2 evidence), sub-processor security review completion (CC6.6 evidence), and privacy-relevant AI processing activities (P-series evidence).

Point-in-time controls are inadequate for Type II. A security team that manually reviews AI logs only when alerted will fail to demonstrate "operating effectiveness" for CC7.2 if the auditor selects a sample period when no manual review occurred. Automated controls that run continuously — SIEM alerts for anomalous API access, automated PII detection in AI outputs, scheduled sub-processor review tasks — generate the audit evidence needed to demonstrate Type II compliance.
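The evidence trail these automated controls produce can be as simple as an append-only run log that a Type II auditor samples across the observation period. A minimal sketch (the control IDs are illustrative):

```python
from datetime import datetime, timezone

class ControlEvidenceLog:
    """Append-only record of automated control runs (Type II evidence)."""

    def __init__(self):
        self._records = []

    def record_run(self, control_id, outcome, detail=""):
        self._records.append({
            "control_id": control_id,  # e.g. "CC7.2-output-monitoring"
            "ran_at": datetime.now(timezone.utc).isoformat(),
            "outcome": outcome,        # "pass" or "fail"
            "detail": detail,
        })

    def runs_for(self, control_id):
        """Return all recorded runs for one control, oldest first."""
        return [r for r in self._records if r["control_id"] == control_id]

log = ControlEvidenceLog()
log.record_run("CC7.2-output-monitoring", "pass")
log.record_run("CC6.6-subprocessor-review", "fail", "Pinecone report overdue")
```

The key property is continuity: a record exists for every scheduled run, so the auditor's sample never lands in an evidence gap.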

Modern AI compliance platforms (including Claire's compliance module) generate structured audit evidence: timestamped logs of all AI actions, automated policy violation alerts, access review reports, and compliance dashboards — all formatted for SOC 2 auditor consumption. This reduces the time SOC 2 auditors spend gathering evidence by up to 60%, according to AICPA practitioner surveys.

Implementation Checklist

SOC 2 Type II AI Audit Readiness Checklist

  • Define AI system audit scope: Document all AI system components (model APIs, RAG pipeline, vector databases, orchestration layer, audit logging); obtain sub-processor SOC 2 reports
  • Implement prompt and completion logging: Log all LLM prompts and completions with timestamps, user IDs, model version, and token counts; retain for the audit observation period
  • Configure access controls for AI APIs: Implement an API key rotation schedule (90 days max), least-privilege API scoping, MFA for administrative access, and a service account audit
  • Enable automated output monitoring: Deploy automated monitoring for PII in AI outputs, anomalous content patterns, and policy violations; configure SIEM alerting
  • Document sub-processor security requirements: List all AI sub-processors; execute DPAs; review their SOC 2 reports annually; document in the vendor management program
  • Implement change management for AI models: Document the approval process for model version changes, prompt updates, and tool configuration changes; maintain a change log
  • Test incident response procedures: Conduct tabletop exercises for AI-specific incidents (prompt injection, data breach from AI output); document test results
  • Configure availability monitoring: Monitor AI system uptime, API latency (P99), and error rates; document SLA commitments and measurement methodology
  • Implement privacy controls for AI data: Document AI data processing in privacy notices; implement data minimization, retention limits, and deletion procedures for training/fine-tuning data
  • Engage a qualified SOC 2 auditor: Select an AICPA-licensed CPA firm with AI system experience; conduct a readiness assessment 3–6 months before audit commencement
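The items above can also be tracked programmatically so readiness gaps surface well before the observation period begins. This tracker is a sketch with shortened, hypothetical item keys standing in for the full checklist entries:

```python
# Hypothetical short keys for the readiness checklist items.
CHECKLIST = [
    "audit_scope", "prompt_logging", "api_access_controls",
    "output_monitoring", "subprocessor_docs", "change_management",
    "incident_response_test", "availability_monitoring",
    "privacy_controls", "auditor_engaged",
]

def readiness_status(completed):
    """Summarize which checklist items remain before audit kickoff."""
    done = set(completed)
    missing = [item for item in CHECKLIST if item not in done]
    return {"ready": not missing, "missing": missing}

status = readiness_status(["audit_scope", "prompt_logging"])
print(status["ready"], len(status["missing"]))  # False 8
```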
FAQ

Frequently Asked Questions

What is the difference between SOC 2 Type I and Type II for AI vendors?

SOC 2 Type I is a point-in-time assessment confirming that an AI vendor's security controls are suitably designed as of a specific date. Type II tests whether those controls operated effectively over a 6-12 month observation period. Enterprise procurement requires Type II because Type I provides no evidence that controls actually work in practice. A vendor that received its SOC 2 Type I report last month has no demonstrated history of operational security control effectiveness.

Does a SOC 2 Type II audit cover AI-specific risks like prompt injection?

Not automatically. Traditional SOC 2 audits were designed for SaaS applications, not AI systems. However, CC7.2 (monitoring for unauthorized or malicious activity) and CC6.1 (logical access security) can be applied to AI-specific risks if the system description and control environment address prompt injection monitoring, model access controls, and AI output anomaly detection. Progressive AI vendors like Claire specifically scope their SOC 2 audits to include AI-specific controls.

How long does it take to get a SOC 2 Type II report for an AI system?

For a first report, the SOC 2 Type II process typically takes 11-21 months end to end: 3-6 months of readiness preparation (implementing controls and evidence-gathering infrastructure), followed by a 6-12 month audit observation period, followed by 2-3 months for the auditor to complete testing and issue the report. Organizations that already have strong security controls can compress the readiness phase to 2-3 months.

What happens if an AI vendor has exceptions in their SOC 2 report?

SOC 2 exceptions (findings where a control was not operating effectively) require review. The severity depends on the nature of the exception and the vendor's management response. Minor exceptions with strong remediation plans are common and not automatically disqualifying. However, exceptions in high-risk controls — access management (CC6.1), anomaly detection (CC7.2), incident response (CC7.3) — warrant deeper investigation. Request a bridge letter confirming exceptions have been remediated.

Is ISO/IEC 42001 a replacement for SOC 2 for AI systems?

No. ISO/IEC 42001:2023 is the first international standard for AI management systems, covering AI governance, risk assessment, and responsible AI practices. It complements but does not replace SOC 2. SOC 2 is a security and trust framework audited by CPAs and required by enterprise procurement. ISO 42001 is a management system standard certified by accredited certification bodies and focused on AI-specific governance. Progressive AI vendors pursue both: SOC 2 Type II for security trust and ISO 42001 for AI governance credibility.

How Claire Addresses SOC 2 Type II Compliance

Claire maintains an annual SOC 2 Type II audit by a Big 4 accounting firm, covering Security, Availability, Confidentiality, and Privacy Trust Service Criteria. Our AI-specific controls — prompt logging, output monitoring, model access management, and sub-processor oversight — are scoped into the audit. Request our current SOC 2 report (under NDA) or schedule a security briefing with our compliance team.
