AI Third-Party Risk Management: SEC Cybersecurity Rule, EU DORA Article 30, and AI Supply Chain Concentration Risk
TPRM Reference
AI Supply Chain Risk: The New Concentration Risk in Enterprise AI
The modern enterprise AI stack creates multiple layers of third-party dependency: foundation model providers (OpenAI GPT-4, Anthropic Claude, Google Gemini, Meta LLaMA), AI infrastructure providers (AWS, Azure, Google Cloud), AI platform vendors (LangChain, LlamaIndex, Pinecone, Weaviate), data providers (vector database vendors, embedding model providers), and integration vendors (connectors to CRM, ITSM, ERP systems). Each layer represents a third-party risk.
Model provider concentration risk: 70%+ of enterprise AI deployments in 2024 use one of three foundation model providers (OpenAI, Anthropic, or Google). The OpenAI November 2023 4-hour outage illustrated the business impact — organizations with AI-dependent customer service, sales, or operations workflows experienced significant disruption. The SolarWinds attack illustrated the security dimension: a model provider compromise could inject malicious content into AI responses at scale across all dependent organizations.
AI API provider security posture assessment: Due diligence for foundation model providers must include: SOC 2 Type II report review (OpenAI, Anthropic, and major cloud providers publish these under NDA); security incident history (public breach disclosures, SEC 8-K filings for public companies, CVE publication history); model change management (how are model updates tested and deployed? Can you pin to a specific model version?); data processing agreements (does the provider offer a DPA confirming your data is not used for training?); and exit planning (what's the migration path if the provider is acquired, changes pricing, or discontinues the model?).
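The due-diligence criteria above can be tracked as a simple structured record so that unmet items are flagged before contract signature. This is a minimal hypothetical sketch; the class name, field names, and provider name are illustrative, not part of any standard questionnaire.

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal due-diligence record for a foundation
# model provider, flagging unmet criteria before contract signature.
@dataclass
class ProviderDueDiligence:
    name: str
    soc2_type2_current: bool = False       # SOC 2 Type II report within 12 months
    model_version_pinning: bool = False    # can we pin to a specific model version?
    dpa_no_training_on_data: bool = False  # DPA excludes training on our data
    incident_history_reviewed: bool = False
    exit_plan_documented: bool = False

    def gaps(self) -> list[str]:
        # Return the names of criteria still unmet.
        return [f for f, v in self.__dict__.items()
                if isinstance(v, bool) and not v]

dd = ProviderDueDiligence("ExampleAI",
                          soc2_type2_current=True,
                          dpa_no_training_on_data=True)
print(dd.gaps())  # remaining items to resolve pre-contract
```

In practice each boolean would link to evidence (the SOC 2 report, the signed DPA), but even a flat checklist like this makes gaps auditable.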
SEC Cybersecurity Rule (2023)
SEC cybersecurity disclosure rules (effective December 2023) require public companies to disclose material cybersecurity incidents on Form 8-K within 4 business days, and to describe cybersecurity risk management, strategy, and governance annually on Form 10-K — including third-party AI risk management.
EU DORA (Digital Operational Resilience Act)
DORA (effective January 17, 2025) requires EU financial institutions to manage ICT third-party risk including AI providers: contractual requirements (Article 30), concentration risk assessment (Article 29), ICT incident reporting, and exit strategies for critical ICT providers.
NIST SP 800-161 (C-SCRM)
NIST SP 800-161 Rev 1 (May 2022) provides the Cybersecurity Supply Chain Risk Management (C-SCRM) framework. AI-specific application: assess model integrity (can you verify the model you're using hasn't been tampered with?), provider security practices, and data handling commitments.
EU DORA Article 30: ICT Concentration Risk and AI Providers
DORA (Digital Operational Resilience Act, effective January 17, 2025) applies to EU financial entities including banks, investment firms, insurance companies, payment institutions, and crypto asset service providers. DORA Article 29 requires financial entities to identify and assess ICT concentration risk — the risk that dependency on a single ICT provider (or a small number of providers) creates systemic vulnerability. AI providers are explicitly covered as ICT third-party service providers under DORA.
DORA Article 30 requires written contractual arrangements with all ICT third-party service providers, including: (1) description of all functions outsourced and data categories involved; (2) the ICT third-party provider's SLA commitments; (3) the financial entity's right to audit and inspect; (4) business continuity requirements and exit strategies; (5) data processing location and notification requirements for changes; (6) the right to receive vulnerability and incident notifications; (7) the right to monitor the ICT third-party provider's performance. For AI providers, Article 30 compliance requires more contractual specificity than most AI vendor agreements currently provide.
DORA's exit strategy requirement (Article 28(8)) is particularly relevant for AI: financial entities must have documented exit plans for critical ICT providers, including how they would migrate to an alternative provider with "minimum disruption to time-critical activities." For AI systems where a single foundation model provider is used for all AI interactions, a DORA-compliant exit strategy requires either a secondary model provider agreement or a documented migration procedure with timeline estimates.
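One architectural pattern that makes a documented migration procedure credible is a thin provider abstraction layer, so that switching foundation model providers is a configuration change rather than a rewrite. The sketch below is hypothetical (the class and provider names are illustrative, and real provider SDK calls are stubbed out):

```python
from typing import Protocol

# Hypothetical sketch: abstract the model provider behind one interface,
# so an exit plan only needs to change a configuration key and re-run
# regression tests, not rewrite application code.
class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider:
    def complete(self, prompt: str) -> str:
        return f"[primary] {prompt}"    # stub for the real SDK call

class SecondaryProvider:
    def complete(self, prompt: str) -> str:
        return f"[secondary] {prompt}"  # stub for the alternative provider

PROVIDERS: dict[str, ChatProvider] = {
    "primary": PrimaryProvider(),
    "secondary": SecondaryProvider(),
}

def answer(prompt: str, provider: str = "primary") -> str:
    # The active provider is selected by configuration.
    return PROVIDERS[provider].complete(prompt)
```

With this pattern, the migration timeline estimate in the exit plan covers testing and cutover of a configuration change rather than an application rebuild.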
AI Vendor Due Diligence: Security Assessment Framework
Initial due diligence (pre-contract): Security questionnaire (SIG, CAIQ, or custom AI-specific questionnaire); SOC 2 Type II report review (current, within 12 months); ISO 27001 certificate verification; penetration test summary (executive findings); privacy impact assessment and DPA review; reference checks with regulated industry customers; financial stability assessment (for startups, runway and funding status); model change management policy review.
Contractual requirements for AI vendors: Beyond standard MSA/DPA terms, AI vendor contracts should include: a model version pinning right (the ability to remain on a specific model version when the provider releases updates); data processing restrictions (your data is not used for model training without consent); model change notification (30 days' advance notice of material model behavior changes); security incident notification (24 hours for security incidents, 72 hours for data breaches); audit rights (annual audit, or review of third-party audit reports); an uptime SLA (99.9% minimum for production AI systems, with financial penalties); and exit assistance (data portability, API compatibility for migration).
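The model version pinning right translates operationally into referencing a dated model snapshot rather than a floating alias in every request. A minimal sketch, with illustrative (not real) model identifiers:

```python
# Hypothetical sketch: pin requests to a dated model snapshot rather than
# a floating alias, so provider-side updates don't silently change behavior.
# The model IDs below are illustrative, not real provider identifiers.
PINNED_MODEL = "example-model-2024-06-01"   # dated snapshot (pinned)
FLOATING_ALIAS = "example-model-latest"     # alias the provider may repoint

def build_request(prompt: str, pinned: bool = True) -> dict:
    # Assemble a provider-agnostic chat request payload.
    return {
        "model": PINNED_MODEL if pinned else FLOATING_ALIAS,
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("hello")["model"])
```

Pinning only works if the contract guarantees the snapshot remains available for a defined period, which is why the pinning right and the change notification clause belong together.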
Ongoing monitoring: AI vendor risk doesn't end at contract signing. Ongoing TPRM for AI vendors includes: quarterly review of vendor security bulletins and CVE disclosures; annual SOC 2 report re-review; monitoring vendor financial news (acquisition, funding rounds, leadership changes); reviewing model update release notes for behavior changes; testing AI system performance after model updates; and annual vendor security questionnaire refresh. For critical AI providers, a dedicated vendor relationship manager ensures risk issues are addressed proactively.
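Testing AI system performance after model updates is usually implemented as a golden-set regression check gated before rollout. A minimal hypothetical sketch (the test cases and the stubbed model call are illustrative):

```python
# Hypothetical sketch: a golden-set regression check run after each
# announced model update, before promoting the new version to production.
GOLDEN_CASES = [
    {"prompt": "Reset my password", "must_contain": "password"},
    {"prompt": "What is your refund policy?", "must_contain": "refund"},
]

def fake_model(prompt: str) -> str:
    # Stand-in for the real provider call against the candidate model version.
    return f"Here is help with: {prompt.lower()}"

def regression_pass_rate(model) -> float:
    # Fraction of golden cases whose output contains the expected keyword.
    passed = sum(case["must_contain"] in model(case["prompt"])
                 for case in GOLDEN_CASES)
    return passed / len(GOLDEN_CASES)

assert regression_pass_rate(fake_model) == 1.0  # gate before rollout
```

Real golden sets are larger and usually score semantic similarity rather than keyword presence, but the gating principle is the same: no model version reaches production without clearing the regression threshold.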
AI Third-Party Risk Management Checklist
- AI vendor inventory: Maintain a complete inventory of all AI third-party vendors: foundation model providers, infrastructure, platforms, and data providers; classify as critical, important, or standard
- SOC 2 report collection: Collect current SOC 2 Type II reports from all critical AI vendors annually; review exception items; obtain bridge letters for report gaps
- Contractual DORA requirements: For EU financial institutions: include all DORA Article 30 required provisions in AI vendor contracts; document exit strategies for critical AI providers
- AI vendor security questionnaire: Complete an AI-specific security questionnaire covering: model change management, data processing restrictions, incident notification, penetration test history
- Concentration risk assessment: Identify AI provider concentration risk: how many critical workflows depend on each AI vendor? What is the business impact of a 4-hour outage? A 30-day outage?
- Exit strategy documentation: Document migration procedures for each critical AI provider: alternative providers assessed, migration timeline estimate, data portability process
- SEC cybersecurity disclosure readiness: For public companies: ensure AI vendor security incidents are included in the material cybersecurity incident assessment for Form 8-K; update the 10-K TPRM disclosure
- Model change notification process: Establish a process to receive and test AI model changes before production impact; include a model change notification requirement in vendor contracts
- NIST C-SCRM alignment: Apply NIST SP 800-161 Rev 1 C-SCRM practices to the AI supply chain: model integrity verification, provider security practices assessment, data handling audit
- Annual TPRM review: Conduct an annual third-party risk review of all AI vendors; update risk ratings based on security posture changes, incidents, and business impact reassessment
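The concentration risk assessment item above can start from something as simple as counting critical workflows per provider. A minimal hypothetical sketch (the workflow and provider names are illustrative):

```python
from collections import Counter

# Hypothetical sketch: score AI provider concentration risk by counting
# how many critical workflows depend on each provider. Names are illustrative.
workflows = [
    {"name": "customer_service", "provider": "ProviderA", "critical": True},
    {"name": "fraud_detection",  "provider": "ProviderA", "critical": True},
    {"name": "doc_summaries",    "provider": "ProviderB", "critical": False},
]

concentration = Counter(w["provider"] for w in workflows if w["critical"])
print(concentration.most_common(1))  # the provider with the most critical dependencies
```

A fuller assessment would weight each workflow by its maximum tolerable downtime, but even this raw count surfaces which single-provider outage would hurt most, feeding directly into the exit strategy and BCP items.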
Frequently Asked Questions
What is the biggest AI supply chain risk for enterprise organizations?
The largest supply chain risk is foundation model provider concentration — the majority of enterprise AI systems depend on one of three providers (OpenAI, Anthropic, Google). A compromise, extended outage, or sudden pricing/policy change at any of these providers could affect thousands of enterprise AI systems. Secondary risks include vector database providers (Pinecone, Weaviate), AI orchestration frameworks (LangChain, LlamaIndex), and cloud AI services (AWS Bedrock, Azure OpenAI Service). SolarWinds demonstrated that software supply chain attacks can achieve systemic impact through a single vendor — the AI stack has equivalent concentration.
How does DORA affect AI vendor contracts for EU financial institutions?
DORA Article 30 requires EU financial institutions to include specific provisions in all ICT vendor contracts, including AI providers: description of outsourced functions and data categories, SLA commitments, audit rights, business continuity requirements, exit strategies, data location disclosure, and incident notification rights. Many existing AI vendor agreements (especially for newer providers) don't meet all DORA Article 30 requirements. EU financial institutions must review and renegotiate AI vendor contracts to achieve DORA compliance. The ESAs (European Supervisory Authorities) published templates and guidance on DORA ICT contract requirements in 2024.
What should we include in an AI vendor security questionnaire?
An AI vendor security questionnaire should cover: SOC 2 Type II scope and current report availability; ISO 27001 certification status and version; penetration test frequency, methodology, and most recent findings; data processing restrictions (is customer data used for model training?); model change management process (how are updates tested? Can customers pin model versions?); encryption standards (at rest, in transit, key management); access control architecture (who at the vendor can access customer data?); incident response procedures and notification SLAs; sub-processor list and security requirements; and GDPR/CCPA compliance documentation.
How should we handle AI vendor outages in our business continuity plan?
AI vendor outages should be treated as a category of ICT disruption requiring BCP coverage. BCP provisions for AI vendor outages: identify which business processes depend on AI and their maximum tolerable downtime; for critical AI-dependent processes (customer service, fraud detection, loan processing), maintain a manual fallback process; negotiate AI vendor SLAs with financial penalties for outage duration; implement redundancy where possible (multi-model architecture with automatic failover); test failover procedures annually; and document the fallback procedure in the BCP with clear activation criteria.
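The multi-model redundancy provision can be sketched as a failover wrapper with an explicit activation criterion. This is a hypothetical illustration with stubbed provider calls, not a production failover implementation:

```python
# Hypothetical sketch: multi-model failover with a clear activation
# criterion, as a BCP provision for AI provider outages. Provider calls
# are stubbed; names are illustrative.
class ProviderOutage(Exception):
    pass

def primary_call(prompt: str) -> str:
    # Simulate the primary provider being down.
    raise ProviderOutage("primary provider unavailable")

def secondary_call(prompt: str) -> str:
    return f"[secondary] {prompt}"

def answer_with_failover(prompt: str) -> str:
    try:
        return primary_call(prompt)
    except ProviderOutage:
        # Activation criterion: an outage exception triggers failover.
        # A real system would also log the event and alert the BCP owner.
        return secondary_call(prompt)

print(answer_with_failover("order status"))
```

Because different providers behave differently on the same prompt, the annual failover test should include the golden-set regression check against the secondary provider, not just a connectivity test.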
Does Claire have documented AI supply chain security controls?
Yes. Claire's supply chain security includes: model provenance verification (cryptographic integrity checking of model artifacts); sub-processor security requirements documented in DPA; quarterly sub-processor security review; DORA-compliant contractual provisions for EU financial institution customers; SOC 2 Type II supply chain controls reviewed annually (CC6.6 — third-party access controls); and documented exit assistance procedures for customer offboarding including data portability and API documentation for migration. Request our TPRM questionnaire and sub-processor security documentation as part of vendor due diligence.
How Claire Addresses AI Third-Party Risk
Claire's vendor security architecture includes DORA-compliant contractual provisions, SOC 2 Type II coverage of supply chain controls, quarterly sub-processor security reviews, and documented exit assistance for customer offboarding. Our security questionnaire responses and sub-processor documentation are available for enterprise procurement due diligence. Schedule a vendor security review.