Telehealth AI Compliance: BetterHelp's $7.8M FTC Settlement, Mental Health Data Protections, and the Triple Regulatory Framework
In March 2023, the Federal Trade Commission announced a $7,800,000 settlement with BetterHelp, Inc. — the online therapy platform — for sharing user mental health data with Facebook and Snapchat for advertising purposes, despite explicit promises that such data would never be shared. BetterHelp was not a HIPAA covered entity, making FTC Act Section 5 the applicable enforcement framework. For telehealth providers that are HIPAA-covered, AI systems used in telehealth sessions operate under a more complex regulatory environment: HIPAA's 45 CFR Part 164, FTC Act Section 5, and — for substance use disorder services — 42 CFR Part 2. Each layer has distinct requirements, and failure to address all three simultaneously creates enforcement exposure from multiple directions.
FTC Settlement — BetterHelp, Inc.
| Announced: | March 2, 2023 |
| Settlement: | $7,800,000 consumer redress fund |
| Respondent: | BetterHelp, Inc. (online therapy platform) |
| Violation: | FTC Act Section 5 — deceptive practices; sharing mental health intake data with Facebook, Snapchat for targeted advertising |
| Data Shared: | Mental health intake questionnaire responses, email addresses, IP addresses; used to create advertising audiences |
| Enforcement Framework: | FTC Act Section 5 (15 U.S.C. § 45) — BetterHelp was not a HIPAA covered entity at time of violation |
The BetterHelp settlement establishes that mental health data used for advertising purposes violates FTC Act Section 5's prohibition on deceptive and unfair practices — regardless of HIPAA applicability. For telehealth providers that are HIPAA covered entities, this analysis applies in addition to HIPAA, not instead of it. A HIPAA-covered telehealth provider that deploys AI in sessions and allows that AI system's data flows to reach advertising platforms faces both OCR enforcement under HIPAA and FTC enforcement under Section 5.
The Three-Layer Telehealth Regulatory Framework
HIPAA (45 CFR Part 164) — For Covered Entities and BAs
Applies when the telehealth provider is a covered entity (a healthcare provider that transmits health information electronically in connection with HIPAA standard transactions, most commonly electronic billing) or a business associate of one. The Privacy and Security Rules govern all PHI — including telehealth session content, AI-processed transcripts, and session metadata.
FTC Act Section 5 — For Non-HIPAA and All Consumer-Facing Platforms
Applies to unfair or deceptive practices regardless of HIPAA coverage status. Mental health data shared with advertisers in contradiction of privacy representations = deceptive practice. FTC has signaled aggressive enforcement for health data misuse across the entire health tech sector.
42 CFR Part 2 — Substance Use Disorder, Heightened Protection
Federal regulations specifically governing substance use disorder (SUD) patient records. More restrictive than HIPAA: requires patient-specific written consent for each disclosure, prohibits using SUD records for law enforcement purposes, and prohibits re-disclosure by recipients. AI systems cannot process SUD records without specific Part 2 consent.
FTC Act and Mental Health Data: Beyond BetterHelp
The BetterHelp case is not an isolated enforcement action — it is part of a pattern of FTC enforcement focused on health data misuse in the digital health sector. The FTC's September 2021 Statement on Breaches by Health Apps and Other Connected Devices — followed by enforcement actions against GoodRx and Premom — made clear that the FTC treats health data misuse as actionable under Section 5 and the Health Breach Notification Rule, even when no specific health privacy statute applies. For telehealth AI systems, the FTC analysis applies to:
Data Flows to Analytics and Advertising Platforms
BetterHelp's violation was using mental health intake questionnaire data to build Facebook Custom Audiences and Lookalike Audiences. The data flow that enabled this: users' email addresses (provided during sign-up) were hashed and sent to Facebook's Conversions API alongside behavioral signals that Facebook used to infer mental health status. For AI-enabled telehealth platforms, analogous data flows include: sending session metadata to Google Analytics with user identifiers, passing session completion events to advertising pixels, and using AI-generated session summaries in marketing automation tools.
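The hashed-identifier matching that drove BetterHelp's audience uploads can be sketched in a few lines. The normalization steps below mirror common ad-platform audience-matching conventions (trim, lowercase, SHA-256); exact rules vary by platform:

```python
import hashlib

def hash_email_for_audience_match(email: str) -> str:
    """Normalize and SHA-256 hash an email address the way ad-platform
    audience matching commonly expects (trim whitespace, lowercase).
    Hashing is NOT anonymization: the platform computes the same hashes
    over its own user base and re-identifies every match."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two parties hashing the same address independently produce the same
# digest -- which is exactly what makes the match possible.
assert hash_email_for_audience_match(" Patient@Example.com ") == \
       hash_email_for_audience_match("patient@example.com")
```

Because both sides derive identical digests from the same address, a hashed upload from a mental health platform still tells the ad network precisely which of its users are in treatment — which is why hashing alone did not insulate BetterHelp.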
FTC's "Reasonable Basis" Standard for Health Data
FTC's enforcement position for health data is that representations about privacy must have a reasonable basis, and "health information" includes not just medical records but any information that could reveal a person's health status — including the fact that someone is using a mental health platform. An AI telehealth platform that represents "we don't share your health information" while its analytics stack infers and shares health-related behavioral signals is making a representation without a reasonable basis.
The pixel tracking problem in telehealth: Standard web analytics implementations (Google Analytics 4, Meta Pixel, LinkedIn Insight Tag) collect page-level behavioral data that, on a telehealth platform, reveals health-related information by implication. A user who visits /scheduling/anxiety-therapy-intake, spends 20 minutes completing intake forms, and then visits /video-session has revealed their mental health treatment status through behavioral signals — even if no clinical data is directly transmitted to the analytics platform. FTC's BetterHelp enforcement covers this pattern explicitly.
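One mitigation is a server-side gate that refuses to forward analytics events from clinical paths at all, and strips direct identifiers from everything else. A minimal sketch — the path prefixes and field names are hypothetical, not a standard schema:

```python
from typing import Optional

# Illustrative URL prefixes where user behavior implies health status.
CLINICAL_PATH_PREFIXES = ("/scheduling/", "/intake/", "/video-session")

def filter_analytics_event(event: dict) -> Optional[dict]:
    """Drop events from clinical pages entirely; strip direct
    identifiers from all other events before they reach a third party."""
    if event.get("page_path", "").startswith(CLINICAL_PATH_PREFIXES):
        return None  # never transmitted off-platform
    return {k: v for k, v in event.items()
            if k not in ("email", "user_id", "ip_address")}
```

A `None` return means the event stays inside the platform's own logs; marketing attribution then has to work from non-clinical pages only.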
42 CFR Part 2: Substance Use Disorder Records
42 CFR Part 2 governs the confidentiality of substance use disorder (SUD) patient records maintained by "Part 2 programs" — programs that provide, in whole or in part, alcohol or drug abuse diagnosis, treatment, or referral for treatment. The 2020 Final Rule (85 Fed. Reg. 42986) updated Part 2 to better align with HIPAA, but key differences remain that are particularly significant for AI telehealth deployments:
Patient-Specific Consent for Every Disclosure
Unlike HIPAA's treatment-payment-operations framework that permits many disclosures without patient authorization, 42 CFR Part 2 requires written patient consent for any disclosure of SUD records — including disclosures to other treating providers. Consent must be specific: identify the name of the person to whom disclosure is made, the purpose of the disclosure, what records are being disclosed, and contain a right to revoke. AI systems cannot share SUD records between treating providers without patient-specific Part 2 consent even when HIPAA would permit the same disclosure as treatment-related.
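The consent-specificity requirement lends itself to a precondition check before any disclosure. The sketch below is illustrative — the field names are assumptions, not a legal consent form:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Part2Consent:
    patient_id: str
    recipient_name: str           # named person/entity receiving the records
    purpose: str                  # stated purpose of the disclosure
    records_described: List[str]  # which records may be disclosed
    revocation_notice: bool       # consent states the right to revoke
    revoked: bool = False

def may_disclose(consent: Part2Consent, recipient: str, record_type: str) -> bool:
    """Permit a disclosure only when an unrevoked consent names this
    recipient, states a purpose, describes this record type, and
    includes the right-to-revoke notice."""
    return (not consent.revoked
            and consent.recipient_name == recipient
            and bool(consent.purpose)
            and record_type in consent.records_described
            and consent.revocation_notice)
```

Note that the recipient match is exact: a consent naming one treating provider does not authorize disclosure to a different provider, even for the same treatment purpose.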
Prohibition on Law Enforcement Use
42 CFR Part 2 §2.12(d) prohibits using SUD records in any criminal, civil, or administrative proceeding without patient consent or a court order meeting specific standards. This is more restrictive than HIPAA's law enforcement disclosure provisions. AI systems processing SUD session content must not route that data to any system that could provide law enforcement access — including standard cloud logging infrastructure that may be subject to law enforcement requests.
Re-Disclosure Prohibition
Part 2 records that are disclosed with patient consent may not be re-disclosed by the recipient without additional patient authorization. This creates a data flow constraint for AI systems: SUD session content disclosed to an AI vendor under Part 2 consent cannot be shared by the AI vendor to its sub-processors without each sub-processor also being covered by the original patient consent. This is more stringent than HIPAA's sub-processor BAA chain requirement — Part 2 requires patient consent for each downstream disclosure, not just business associate agreements.
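The resulting data-flow constraint can be expressed as a set difference: before SUD content is routed through an AI vendor, every downstream sub-processor must appear among the recipients the patient consented to. A hedged sketch:

```python
from typing import Set

def uncovered_subprocessors(consented_recipients: Set[str],
                            vendor_subprocessors: Set[str]) -> Set[str]:
    """Return the sub-processors NOT named in the patient's Part 2
    consent. Routing SUD records to any of these requires fresh patient
    consent -- a BAA with the vendor alone is not sufficient."""
    return vendor_subprocessors - consented_recipients
```

If the returned set is non-empty, the disclosure must be blocked (or new consent obtained) before any SUD record reaches the vendor's pipeline.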
AI in Telehealth Sessions: Specific Compliance Requirements
When AI systems are deployed in active telehealth sessions — for real-time transcription, clinical documentation assistance, session quality monitoring, or AI-assisted care coordination — each function creates distinct compliance obligations:
Real-Time Session Transcription
AI transcription of telehealth sessions creates a detailed written record of clinical conversations that are PHI under HIPAA and may be SUD records under Part 2. Compliance requirements: patient disclosure that the session is being transcribed (required by HIPAA §164.520 Notice of Privacy Practices, and potentially by state recording consent laws); secure transmission and storage of transcripts with AES-256 encryption at rest; and BAA coverage for the transcription service with explicit prohibitions on using transcript content for training data.
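These requirements lend themselves to a pre-session gate: transcription is switched on only when every precondition above is verifiably met. A sketch with illustrative attribute names:

```python
from dataclasses import dataclass

@dataclass
class TranscriptionPreconditions:
    patient_disclosed: bool    # NPP / state recording-consent disclosure made
    vendor_baa_executed: bool  # executed BAA with the transcription service
    no_training_clause: bool   # BAA prohibits use of transcripts as training data
    at_rest_cipher: str        # encryption standard for stored transcripts

def transcription_permitted(p: TranscriptionPreconditions) -> bool:
    """Enable AI transcription only when all compliance gates pass."""
    return (p.patient_disclosed and p.vendor_baa_executed
            and p.no_training_clause and p.at_rest_cipher == "AES-256")
```

Evaluating the gate at session start (rather than relying on a one-time vendor onboarding check) also catches BAAs that have lapsed or features enabled after the original review.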
AI-Assisted Clinical Documentation
AI systems that assist with SOAP note generation, assessment documentation, or care plan updates during telehealth sessions process clinical content that constitutes PHI (and potentially Part 2 records). If the documentation AI sends session content to a cloud inference API for processing, that API call requires BAA coverage. If the AI system's output is automatically integrated into the EHR, the EHR FHIR write operation requires proper authentication and audit logging as described in our EHR integration security guide.
Session Quality Monitoring
AI systems that analyze session quality metrics — speaking time ratios, topic detection, sentiment analysis — for clinical supervision or compliance purposes create a secondary derived dataset from session content. This derived dataset is PHI if it can be linked to an individual patient. The PHI classification applies even to aggregate statistics if they are computed per-session and can be linked to identifiable sessions.
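The linkage test described above reduces to a simple classification rule (the key names are hypothetical):

```python
def derived_metric_is_phi(record: dict) -> bool:
    """A derived session-quality metric is PHI when it carries any key
    linking it back to an identifiable patient or session -- even if
    the metric itself is 'just' a statistic like a speaking-time ratio."""
    linking_keys = {"patient_id", "session_id", "clinician_patient_pair"}
    return any(key in record for key in linking_keys)
```

Truly aggregate reporting (per-clinic, per-week, with no session or patient keys) falls outside the rule; per-session statistics do not.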
Video Platform Security for AI-Enhanced Telehealth
Telehealth video sessions require platforms that meet HIPAA technical safeguard requirements. During the COVID-19 public health emergency, OCR exercised enforcement discretion for telehealth conducted over consumer platforms (FaceTime, consumer Zoom). That enforcement discretion ended when the public health emergency expired on May 11, 2023, with a 90-day transition period through August 9, 2023 — telehealth providers must now use HIPAA-compliant video platforms with executed BAAs.
HIPAA-compliant video platform requirements for AI-enhanced telehealth:
- Encryption — Video content must be encrypted in transit (AES-256 media encryption; TLS 1.3 for signaling). The strongest posture is end-to-end encryption (E2EE) with keys controlled by the parties to the session rather than the platform provider, though note HIPAA's encryption specification is "addressable," not an explicit E2EE mandate
- BAA with platform vendor — Zoom Healthcare, Microsoft Teams (Healthcare tier), Doxy.me, and Thera-LINK all offer BAA-eligible telehealth tiers. Standard consumer Zoom, Google Meet, and FaceTime do not
- AI feature compliance — Platform AI features (Zoom AI Companion transcription, Microsoft Copilot for Teams) must be verified for BAA coverage before enabling in healthcare sessions. Platform AI features that send session content to AI processing infrastructure require explicit BAA coverage for that sub-processor
- Session recording controls — Recording must require explicit patient consent at each session. Recordings stored by the platform are ePHI requiring BAA-covered storage with defined retention periods
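The four requirements above can be folded into a single vetting check run before a platform (or a newly enabled platform AI feature) is approved for clinical use. A sketch — the attribute names are assumptions, not a vendor API:

```python
from dataclasses import dataclass

@dataclass
class VideoPlatform:
    name: str
    e2ee_video: bool                 # E2EE with party-controlled keys
    baa_executed: bool               # executed BAA at an eligible tier
    ai_features_baa_covered: bool    # platform AI sub-processors under the BAA
    recording_requires_consent: bool # per-session recording consent enforced

def platform_eligible(p: VideoPlatform) -> bool:
    """Approve a platform for clinical sessions only when all four
    requirements from the checklist above are satisfied."""
    return (p.e2ee_video and p.baa_executed
            and p.ai_features_baa_covered and p.recording_requires_consent)
```

The `ai_features_baa_covered` flag is the one most often missed: a platform's core video tier can be BAA-covered while a later-enabled AI companion feature routes session content to an uncovered sub-processor.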
Telehealth AI Compliance Checklist: 12 Controls
1. Audit all third-party pixels, analytics tags, and tracking scripts for placement on clinical pages. BetterHelp's violation involved tracking pixels placed on intake forms. Any third-party script on pages where patients disclose health information creates FTC Section 5 exposure. Remove all marketing pixels from clinical pages; implement server-side event tracking without patient health data for marketing attribution.
2. Determine whether your telehealth platform is a HIPAA covered entity or business associate. A healthcare provider is a covered entity if it transmits health information electronically in connection with HIPAA standard transactions (most commonly, electronic billing); a vendor handling PHI on a covered entity's behalf is a business associate. Non-covered entities still face FTC Section 5 for health data misuse. Determine your regulatory status and ensure compliance obligations are mapped accordingly — not assumed to be HIPAA-only.
3. Implement 42 CFR Part 2 screening for all SUD-related services. If your telehealth platform provides substance use disorder services (addiction counseling, MAT support, SUD assessment), Part 2 applies. Implement patient-specific written consent for every SUD record disclosure — including disclosures to other treating providers and AI system processing.
4. Verify your telehealth video platform has an executed BAA at an eligible tier. Standard Zoom, Google Meet, and FaceTime do not include BAA coverage. Zoom Healthcare, Microsoft Teams Healthcare, and Doxy.me offer BAA-eligible tiers. Verify the executed BAA — not the vendor's HIPAA compliance page — is in your compliance document registry.
5. Audit AI transcription and documentation features for BAA coverage of the AI sub-processor. If your telehealth platform's AI transcription feature sends session content to an AI processing sub-processor, that sub-processor must be covered by a BAA. Platform AI features (Zoom AI Companion, Teams Copilot) require verification that the AI processing infrastructure is covered under the platform's BAA.
6. Implement patient disclosure for AI session transcription and documentation at the start of every session. Patients must be informed that AI transcription or documentation assistance is used in their session. This is required by HIPAA's Notice of Privacy Practices and may also be required by state recording consent laws. The disclosure must occur before the session begins, not at the end.
7. Classify mental health session data as requiring heightened protection in AI system design. Mental health data — even when permitted under HIPAA — warrants heightened protection given the specific harm its exposure causes. AI systems processing mental health session content should apply: no-training-data restrictions in BAAs, shorter retention periods than general health data, and explicit exclusion from any analytics or reporting that could reveal individual health status.
8. Verify 42 CFR Part 2 re-disclosure prohibition is reflected in AI vendor BAA and sub-processor agreements. If SUD records are disclosed to an AI vendor under Part 2 consent, the AI vendor's BAA must prohibit re-disclosure to sub-processors without additional patient consent. This is a more restrictive standard than HIPAA BAA sub-processor flow-down provisions require.
9. Implement separate session recording consent for AI-analyzed sessions. When AI systems analyze session recordings for quality monitoring, documentation, or supervision purposes, the patient's general consent to telehealth is insufficient. Recording and AI analysis requires specific disclosure and consent separate from the clinical consent to treatment.
10. Audit crisis response workflows for AI telehealth systems. AI telehealth platforms that identify crisis situations (suicidal ideation, self-harm expressions) through natural language processing must have documented crisis response workflows that connect patients to human clinical resources. The FTC's unfair practices standard applies to platforms that identify crises and fail to respond appropriately — in addition to HIPAA obligations for clinical risk documentation.
11. Review state telehealth practice laws for AI-specific requirements. Many states have enacted telehealth practice laws that go beyond federal requirements. Most states require the provider to hold a license in the state where the patient is located at the time of service; some also require specific informed consent disclosures for AI-assisted sessions. Map your patient population's state distribution and verify compliance with each applicable state telehealth law.
12. Establish AI session data retention policies that address both HIPAA retention and state mental health record laws. HIPAA requires compliance documentation to be retained for six years; retention of the medical records themselves is governed by state law, and many states require mental health records to be kept for longer periods — New York requires 6 years from last service, or 3 years after a minor reaches majority. AI session transcripts and documentation assistance outputs are clinical records subject to these retention requirements.
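The retention rule in the final checklist item — keep records for the longest of the applicable periods — can be computed directly. The sketch below treats the periods as inputs rather than hard-coding any state's law, and is illustrative, not legal advice:

```python
from datetime import date
from typing import Optional

def _add_years(d: date, n: int) -> date:
    """Add n calendar years, clamping Feb 29 to Feb 28 when needed."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:  # Feb 29 in a non-leap target year
        return d.replace(year=d.year + n, day=28)

def retention_until(last_service: date,
                    state_years: int,
                    majority_date: Optional[date] = None,
                    years_after_majority: int = 3) -> date:
    """Retain AI session records until the latest of: the six-year
    HIPAA documentation period, the state retention period, and (for
    minors) the post-majority period."""
    candidates = [_add_years(last_service, 6),
                  _add_years(last_service, state_years)]
    if majority_date is not None:
        candidates.append(_add_years(majority_date, years_after_majority))
    return max(candidates)
```

For a minor patient, the post-majority clock can extend retention well past the date-of-service periods, so the `max` over all candidates is the safe deletion boundary.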
How Claire Approaches Telehealth Administrative Compliance
1. Telehealth Administrative Workflows — Not Session Processing
Claire's telehealth scope is the administrative layer: scheduling telehealth appointments, sending HIPAA-compliant session access links, handling post-telehealth follow-up scheduling, and routing prescription refills generated by telehealth encounters. Claire does not process session audio, video, or transcripts. This scope boundary eliminates the AI-in-session compliance complexity — including Part 2 constraints for SUD sessions and FTC concerns about in-session data flows.
2. Zero Third-Party Analytics on Clinical Workflows
Claire's patient-facing interactions contain zero third-party analytics pixels, tracking scripts, or behavioral data transmission. Operational metrics are collected internally for service quality monitoring — no session behavioral data is transmitted to advertising platforms, marketing analytics tools, or social media pixels. This architectural choice directly addresses the BetterHelp pattern and eliminates FTC Section 5 exposure from Claire's administrative workflows.
3. 42 CFR Part 2 Awareness in Scheduling Workflows
When a telehealth encounter is flagged as a substance use disorder service in the EHR, Claire's scheduling workflows apply enhanced handling: no appointment details in unsecured communications, explicit channel confirmation before sending any appointment-related content, and escalation to staff-managed scheduling for cases where the standard reminder protocol would expose SUD treatment status.
The Triple Regulatory Framework Demands Triple-Layer Compliance
BetterHelp's $7.8M FTC settlement documents what happens when a telehealth platform treats health data as a marketing asset rather than a protected category. For HIPAA-covered telehealth providers deploying AI, the same data misuse pattern triggers OCR enforcement in addition to FTC enforcement — doubling the regulatory exposure. And for SUD services, 42 CFR Part 2 adds a third layer of consent requirements that neither HIPAA nor the FTC framework addresses with comparable specificity.
AI systems in telehealth must be designed with all three layers in view simultaneously: HIPAA controls on every PHI data flow, FTC Section 5 compliance in how health data is characterized and used in business operations, and Part 2 consent management for any SUD service context. Organizations that design compliance for one layer and assume the others are covered by implication will find that assumption tested by enforcement actions from multiple agencies — potentially simultaneously.