HIPAA Minimum Necessary Standard: How Healthcare AI Routinely Violates 45 CFR §164.502(b)
HIPAA's minimum necessary standard is one of the most cited provisions in OCR enforcement actions and one of the least understood by the healthcare organizations deploying AI systems. The regulation is direct: when using, disclosing, or requesting protected health information, a covered entity must make reasonable efforts to limit PHI to the minimum necessary to accomplish the intended purpose. Healthcare AI systems violate this standard in two consistent ways — broad FHIR scope grants that pull more PHI than the task requires, and persistent conversation context that accumulates PHI across multiple interactions. Both violations are structurally designed into most AI systems on the market.
⚖️ HIPAA Minimum Necessary — Regulatory Framework
| Framework Element | Detail |
| --- | --- |
| Primary authority | 45 CFR §164.502(b) — minimum necessary standard, general rule |
| Implementation specs | 45 CFR §164.514(d) — limiting access to and uses of PHI |
| HHS guidance | 2013 HHS guidance document on the minimum necessary standard |
| AI classification | HHS 2024 position: AI systems acting on behalf of workforce members are "workforce members" for HIPAA purposes |
| Treatment exception | Treatment uses and disclosures are a limited exception; the full standard applies to payment and operations |
| AI use cases | Scheduling, intake, and follow-up AI are "healthcare operations" — the full minimum necessary standard applies |
| Enforcement | Cited as a contributing factor in the majority of OCR enforcement actions |
The minimum necessary standard did not anticipate AI systems when it was written. The 2013 HHS guidance addressed human workforce members making discrete decisions about PHI access. AI systems create a fundamentally different access pattern: continuous, programmatic, bulk-capable, and operating at a scale no human workforce member could replicate. The standard applies to this new pattern — but the AI systems deployed in healthcare were not designed with this standard in mind.
What the Minimum Necessary Standard Actually Requires
45 CFR §164.502(b) states that covered entities must make reasonable efforts to limit the use or disclosure of PHI to the minimum necessary to accomplish the intended purpose. The implementing specifications at 45 CFR §164.514(d) require covered entities to:
- Identify persons or classes of persons who need access to PHI — and the category of PHI needed for each class
- Make reasonable efforts to limit access — to the identified PHI for the identified purposes
- For requests for PHI from other covered entities — limit the request to the minimum necessary
The 2013 HHS guidance clarifies that "minimum necessary" is an objective standard, not a subjective one. The question is not "did the organization think this amount of PHI was necessary?" — it is "was this amount of PHI actually necessary for the stated purpose?" An AI system that accesses a patient's complete medical history to answer a scheduling question has not met the minimum necessary standard, regardless of whether the vendor configured it that way for convenience or by design.
What "Minimum Necessary" Looks Like in Practice
The standard is task-specific. Different tasks have different minimum necessary PHI profiles:
- Appointment scheduling: Minimum necessary = patient name, date of birth, insurance status, provider preference, appointment type needed. Medical history, diagnoses, medications, and Social Security numbers are not necessary for scheduling.
- Insurance eligibility verification: Minimum necessary = member ID, date of birth, insurance plan. Full medical history is not necessary for eligibility verification.
- Post-discharge follow-up call: Minimum necessary = discharge instructions, follow-up appointment, medication changes. The patient's complete problem list and historical diagnoses are not necessary for a follow-up call about the most recent discharge.
- Patient intake questionnaire: Minimum necessary = information specific to the reason for the visit. A dermatology intake AI does not need cardiac history. A primary care intake AI does not need subspecialty treatment records from unrelated conditions.
The Two Ways AI Systems Violate Minimum Necessary
Healthcare AI systems violate the minimum necessary standard through two architectural patterns that are so common they have become industry defaults. Neither was designed to violate HIPAA — both were designed for convenience and capability. The violation is the result of not applying the minimum necessary standard as a design constraint.
Violation 1: Broad FHIR Scopes That Pull More PHI Than the Task Requires
When a healthcare AI vendor configures FHIR API access, the vendor must choose which FHIR resource types to request access to and what scope level (patient, user, or system) to use. The vendor's incentive is toward broader access: broader access means the AI can answer more types of questions, requires fewer re-authorization flows, and reduces integration complexity.
A scheduling AI with patient/*.read scope — which grants read access to all FHIR resource types for the patient — can access the patient's diagnoses, medications, lab results, imaging reports, and mental health records. The AI needs none of these for scheduling. But the vendor requested the broad scope because it was simpler than implementing resource-specific scope management, and the EHR administrator approved it without evaluating what patient/*.read actually means.
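The difference shows up directly in the OAuth scope string of the SMART on FHIR authorization request. The sketch below is illustrative only: the endpoint URL and client_id are hypothetical, and the scope syntax follows SMART v1 conventions.

```python
from urllib.parse import urlencode

# Over-broad: wildcard read access to every FHIR resource type for the patient.
broad_scope = "patient/*.read"

# Minimum necessary for scheduling: demographics plus appointment management.
minimal_scopes = [
    "patient/Patient.read",
    "patient/Appointment.read",
    "patient/Appointment.write",
]

def authorization_url(scopes):
    """Build the query string for a SMART on FHIR authorization request."""
    return "https://ehr.example.org/oauth2/authorize?" + urlencode({
        "response_type": "code",
        "client_id": "scheduling-ai",            # hypothetical client id
        "scope": " ".join(scopes),
        "aud": "https://ehr.example.org/fhir",   # hypothetical FHIR base URL
    })
```

Reviewing the literal scope parameter in this request, rather than the vendor's marketing language, is how an EHR administrator can verify what a grant actually covers before approving it.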
The scheduling AI that knows your diagnosis: A scheduling AI deployed by a large health system requested patient/*.read scope for all 340,000 patients in the system. The AI's stated function was appointment scheduling. Its actual FHIR access included the full clinical record of every patient — diagnoses, medications, mental health notes, substance use history — for every scheduling interaction. Each scheduling session unnecessarily exposed the complete medical record of one patient to the AI's processing pipeline. Multiplied by 200 scheduling interactions per day, this creates systematic minimum necessary violations at scale.
Violation 2: Persistent Conversation Context That Accumulates PHI Across Sessions
Conversational AI systems improve their responses by maintaining context — the history of what was said in the current and previous conversations. This context memory is a core feature of useful conversational AI. It is also a HIPAA minimum necessary violation when the retained context includes PHI.
When a patient asks a healthcare AI chatbot "what medications am I taking?" and the AI retrieves that information from the EHR, the list of medications becomes part of the conversation context. In the next turn, when the patient asks "when is my next appointment?" the AI still has the medication list in context — even though it is not necessary for answering the appointment question. If the AI persists conversation history across sessions, the medication list remains in the AI's context database indefinitely.
The accumulated context may include:
- Medication names and dosages mentioned in previous sessions
- Diagnoses disclosed by the patient in conversation ("I was told I have diabetes...")
- Social Security numbers if the patient disclosed them for account verification
- Insurance information, including member IDs and plan details
- Mental health or substance use information disclosed in conversational context
- Family member names and relationships mentioned during caregiving conversations
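One architectural remedy is to make session context ephemeral, so PHI disclosed in one interaction cannot survive into the next. A minimal sketch, assuming a context-manager style session wrapper (not any particular vendor's implementation):

```python
class EphemeralSession:
    """Holds conversation context only for the lifetime of one session."""

    def __init__(self, task):
        self.task = task
        self.context = []  # PHI-bearing turns live here, and only here

    def add_turn(self, role, text):
        self.context.append({"role": role, "text": text})

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        # Architectural enforcement: context is destroyed, never archived.
        self.context.clear()
        return False

with EphemeralSession("scheduling") as session:
    session.add_turn("patient", "When is my next appointment?")
    assert len(session.context) == 1  # context exists during the session

# After the session ends, no conversation history survives to accumulate.
assert session.context == []
```

The design choice is that deletion happens in the session teardown path itself, so retention cannot be reintroduced by a configuration change elsewhere.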
AI as Workforce: The 2024 HHS Classification
HIPAA's Privacy Rule defines "workforce" at 45 CFR §160.103 as employees, volunteers, trainees, and other persons whose conduct, in the performance of work for a covered entity, is under the entity's direct control, whether or not they are paid by the covered entity. In 2024, HHS adopted the position that AI systems acting on behalf of workforce members — performing tasks that a workforce member would otherwise perform — are subject to the same HIPAA access control and minimum necessary requirements as the workforce members they represent.
This classification has significant practical implications for healthcare AI deployments:
Implication 1: Role-Based Access Controls Apply to AI Systems
Just as a medical receptionist who handles scheduling does not have access to the full clinical record — only the scheduling-relevant fields — a scheduling AI should have access only to scheduling-relevant FHIR resources. The role-based access control (RBAC) framework that governs human workforce access now applies to AI systems performing the same functions.
Implication 2: AI Access Must Be Tied to Specific Functions
A covered entity cannot authorize an AI system's access to PHI in the abstract — access must be tied to specific job functions. If the AI handles scheduling, its PHI access is authorized for scheduling purposes. If it also handles clinical documentation, separate authorization for clinical PHI access is required. The AI cannot use its scheduling authorization to access clinical records for other purposes.
Implication 3: Training Data Use Requires Separate Authorization
If an AI vendor uses patient interactions to improve its model — even with claimed de-identification — this constitutes a use of PHI for a purpose (model improvement) separate from the authorized purpose (scheduling, intake, follow-up). The minimum necessary analysis for model improvement is different from the minimum necessary analysis for the operational use case. Using operational PHI for model training without separate authorization is a minimum necessary violation.
The workforce classification consequence: Under the HHS 2024 AI-as-workforce position, a healthcare AI system that accesses a patient's complete medical record to answer a scheduling question commits the same HIPAA violation as a receptionist who pulls a patient's full chart to check their appointment time. The AI's access is not less regulated because it is automated — it warrants more scrutiny because it operates at a scale and speed that human workforce members cannot match.
Operations Use Case: Full Minimum Necessary Applies
HIPAA's minimum necessary standard applies differently depending on the purpose of PHI use or disclosure. The distinctions are not academic — they determine whether and how the standard applies to healthcare AI use cases.
Treatment Exception: Limited Scope
HIPAA provides a limited exception to the minimum necessary standard for PHI used or disclosed for treatment purposes (45 CFR §164.502(b)(2)(i)). A treating clinician does not need to apply minimum necessary analysis when accessing patient records for treatment — clinical judgment governs. This exception applies narrowly: to direct treatment by the treating provider, not to AI systems supporting administrative functions.
Payment and Operations: Full Standard Applies
For payment purposes (insurance claims, billing, eligibility verification) and healthcare operations (quality assessment, administrative functions, scheduling, patient intake, follow-up), the full minimum necessary standard applies without exception. AI systems deployed for scheduling, patient intake, and follow-up communication are "healthcare operations" use cases — the full standard governs their PHI access.
Why This Distinction Matters for AI
Most healthcare AI marketed to provider organizations is positioned as supporting clinical operations — not direct treatment. An AI scheduling assistant is operations. An AI patient intake tool is operations. An AI post-discharge follow-up system is operations. All are subject to the full minimum necessary standard. Vendors who imply that AI "supporting care" qualifies for the treatment exception are misrepresenting the regulatory framework.
Over-Broad FHIR Scope Violation
Scheduling AI with system/*.read scope accesses every patient's complete record. Minimum necessary for scheduling requires only Patient demographics and Appointment resources. The difference between the granted scope and the required scope is the violation.
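The violation can be quantified as a set difference between granted and required scopes. A simplified sketch, using an abbreviated resource list and hypothetical helper names:

```python
# Abbreviated stand-in for the full FHIR resource type list.
ALL_RESOURCES = {"Patient", "Appointment", "Condition", "Observation",
                 "MedicationRequest", "DocumentReference"}

def expand(scope):
    """Expand a wildcard scope such as 'system/*.read' into concrete scopes."""
    context, rest = scope.split("/")
    resource, mode = rest.split(".")
    resources = ALL_RESOURCES if resource == "*" else {resource}
    return {f"{context}/{r}.{mode}" for r in resources}

granted = expand("system/*.read")
required = {"system/Patient.read", "system/Appointment.read"}

# Every scope in the excess is PHI access beyond the minimum necessary.
excess = granted - required
```

Running this comparison against the vendor's actual authorization request makes the compliance gap a concrete, auditable artifact rather than an abstract policy concern.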
Persistent Context Accumulation Violation
AI chatbot retains conversation history across sessions, including medication names, diagnoses, and insurance information disclosed in earlier sessions. PHI retained beyond the session in which it was necessary violates the implementation specifications for limiting PHI access.
Workforce Access Control Failure
AI system with user-level EHR access equivalent to a clinician's full access, deployed for scheduling only. No role-based restriction applied to the AI's access scope. PHI available to the AI exceeds the minimum necessary for its function.
Conversation Context Accumulation: The Hidden PHI Reservoir
The minimum necessary violation created by persistent conversation context is harder to see than a broad FHIR scope — because it accumulates gradually, interaction by interaction, until the AI system holds a substantial PHI profile for each patient it has served.
Consider a patient who uses a healthcare AI chatbot over three months:
- Session 1 (January): Patient asks about prescription refill. AI accesses medication list. Conversation retained: "Patient takes metformin 500mg twice daily, lisinopril 10mg, atorvastatin 40mg."
- Session 2 (February): Patient asks about follow-up appointment. AI answers from scheduling data. Context from Session 1 still present in conversation history.
- Session 3 (March): Patient says "I've been dealing with some anxiety lately, is that related to my medications?" AI now has context linking the patient's name, medication list, and mental health disclosure.
- Session 4 (March, same day): Patient calls to reschedule appointment. AI still has the complete context from Sessions 1–3. The scheduling function needs none of it.
After four interactions, the AI's context database for this patient contains a medication list, a mental health disclosure, and a scheduling request — all linked to the patient's identity. The scheduling session needs the appointment data only. The medication information and mental health disclosure are PHI retained beyond the purpose for which they were accessed. This is a minimum necessary violation.
Minimum Necessary AI Compliance Checklist: 12 Requirements
HIPAA Minimum Necessary Checklist for Healthcare AI Deployments
Document the specific PHI required for each AI function before requesting FHIR scopes. For each use case (scheduling, intake, follow-up, billing), list the specific FHIR resource types and fields actually needed. Request only those resources. This documentation is your minimum necessary analysis — required under 45 CFR §164.514(d).
Confirm the AI vendor uses resource-specific FHIR scopes — not patient/*.read or system/*.read. A scheduling AI should have Patient.read and Appointment.read/write — not a wildcard scope that includes Condition, Observation, MedicationRequest, and DocumentReference. Ask the vendor to show you the exact scope string their system requests.
Verify the AI system does not retain conversation context across sessions. Ask: after a session ends, what patient data is retained in the vendor's systems? The answer should be: nothing. If the vendor retains transcripts, derived summaries, or conversation history, ask for the specific PHI categories retained and the retention period — then evaluate against minimum necessary for the stated purpose.
Confirm the AI cannot access PHI beyond what was returned in the current FHIR query. The session context should contain only what was retrieved from the EHR for the current task. PHI from previous sessions, other patients, or other data sources should not be in scope for the current interaction.
Evaluate whether the AI's FHIR access aligns with the workforce role it is performing. Under HHS's 2024 AI-as-workforce guidance, the AI's PHI access should mirror the access that would be appropriate for a human worker in the same role. A scheduling AI should have the same PHI access as a human scheduling staff member — not a clinician.
Verify the vendor does not use operational PHI for model training without separate authorization. Model training on patient interactions is a separate PHI use from operational AI functions. Ask specifically: "Does your system use any patient interaction data — including de-identified or aggregated data — to train or improve the model?" A "yes" answer triggers a separate minimum necessary analysis for the training use case.
Confirm the AI's PHI access is logged at the field level — not just the resource level. FHIR query logs that record "Patient resource accessed" do not support minimum necessary auditing. Logs must capture which specific fields were returned and processed. Ask your EHR vendor whether field-level FHIR access logging is available and enabled for AI client credentials.
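What a field-level log entry might contain is sketched below with hypothetical field names; real logging depends on what your EHR vendor exposes.

```python
import json
from datetime import datetime, timezone

def log_fhir_access(client_id, resource_type, resource, fields_used):
    """Record which fields were returned and which the AI actually used,
    not merely that a resource type was accessed."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "client": client_id,
        "resource": f"{resource_type}/{resource['id']}",
        "fields_returned": sorted(set(resource) - {"id"}),
        "fields_used": sorted(fields_used),  # supports minimum necessary audits
    }
    return json.dumps(entry)

# Hypothetical Patient resource returned by a scheduling query.
patient = {"id": "pat-001", "name": "example", "birthDate": "1970-01-01",
           "address": "example"}
entry = log_fhir_access("scheduling-ai", "Patient", patient,
                        fields_used={"name", "birthDate"})
```

A persistent gap between fields_returned and fields_used is itself audit evidence that the query pulled more PHI than the task required.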
Implement periodic minimum necessary reviews for all AI system PHI access. As AI systems are updated with new features, their PHI access requirements change. The initial minimum necessary analysis is not permanent. Establish a review schedule (at minimum annually, or whenever the AI system's capabilities are significantly modified) to verify access remains minimum necessary.
Verify that AI systems deployed for operations use cases are not accessing PHI under the treatment exception. Operations AI systems — scheduling, intake, follow-up — are not covered by the treatment exception to minimum necessary. Confirm with your HIPAA compliance officer that the AI use case is correctly classified and the appropriate standard applied.
Confirm the vendor provides documentation supporting your minimum necessary analysis — not just their generic compliance documentation. Your organization must conduct its own minimum necessary analysis for each AI use case. The vendor's SOC 2 report or HIPAA attestation documents their infrastructure compliance. Your minimum necessary obligation requires analyzing how your specific use of their system creates PHI access that must be limited.
Review the BAA for training data carve-outs that expand PHI use beyond operations. BAA language permitting de-identified data for model improvement, anonymized analytics, or product development represents additional PHI use beyond the minimum necessary for the operational function. Each additional use category requires its own minimum necessary analysis.
Verify that AI-generated summaries or derived data containing PHI are treated as PHI. If the AI system generates a clinical summary, a risk score, or a care recommendation that contains or is derived from PHI, that derived data is PHI. Minimum necessary applies to how derived PHI is stored, transmitted, and retained — not just to the original source PHI from the EHR.
How Claire Implements the Minimum Necessary Standard
Claire's Architecture: Minimum Necessary by Design, Not Policy
1. Task-Specific FHIR Scopes — Defined Per Function, Not Per Vendor Preference
Claire defines a minimum necessary FHIR scope profile for each AI function type — scheduling, intake, and follow-up each have their own. A scheduling session requests Patient.read and Appointment resources only — not the patient's complete clinical record. The scope is determined not by what would be convenient for the AI's processing but by what the specific task actually requires, then hardcoded into the authorization request for that task type.
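Conceptually, this is a fixed lookup from task type to scope profile, resolved at authorization time. The sketch below illustrates the pattern only; the names and profiles are hypothetical, not Claire's actual code.

```python
# Illustrative per-task scope profiles, keyed by function type.
TASK_SCOPE_PROFILES = {
    "scheduling": ["patient/Patient.read", "patient/Appointment.read",
                   "patient/Appointment.write"],
    "intake":     ["patient/Patient.read",
                   "patient/QuestionnaireResponse.write"],
    "follow_up":  ["patient/Patient.read", "patient/Appointment.read",
                   "patient/MedicationRequest.read"],
}

def scopes_for(task):
    """A session can only request the scopes hardcoded for its task type."""
    try:
        return TASK_SCOPE_PROFILES[task]
    except KeyError:
        raise ValueError(f"no minimum necessary profile defined for {task!r}")
```

Because unknown task types fail closed, a new AI capability cannot obtain PHI access until someone writes, and thereby documents, its minimum necessary profile.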
2. Zero Cross-Session Retention — Architectural Enforcement
Claire's MCP sessions are ephemeral by architecture. When a session ends, the session context — including all PHI accessed during the session — is cleared. There is no conversation history database, no persistent patient profile, and no cross-session context accumulation. The minimum necessary violation described in the scheduling chatbot scenario above cannot occur in Claire's architecture because the mechanism that would create it — persistent session storage — does not exist.
3. No Model Training on Patient Data — Contractually and Architecturally Enforced
Claire's BAA explicitly prohibits use of patient data for model training, fine-tuning, or any purpose beyond the contracted operational function. This is not just a contractual promise — it is architecturally enforced. Because no patient PHI is retained in Claire's infrastructure between sessions, there is no patient data available for training even if the policy permitted it. Minimum necessary for the operational function is the maximum PHI access that occurs.
4. Workforce-Role-Aligned Access — Scheduling AI Accesses Scheduling PHI
Under HHS's 2024 AI-as-workforce guidance, Claire's PHI access is designed to mirror the access that would be appropriate for a human worker in the equivalent role. A Claire scheduling session has the PHI access of a scheduling staff member — not a clinician. When Claire performs intake, it has access to intake-relevant clinical information — not the complete medical record. The minimum necessary analysis is embedded in the design of each task type's FHIR scope profile.
The Design Constraint That Most AI Vendors Skipped
The minimum necessary standard is not a compliance checkbox — it is a design constraint. Applied at the architecture stage, it produces systems where the PHI access pathway is scoped to the task, the session context is cleared when the session ends, and no PHI persists beyond the purpose for which it was accessed. Applied retroactively to a system designed for maximum capability, it produces a compliance gap that cannot be closed without redesigning the system.
Most healthcare AI systems on the market were designed for capability, then reviewed for compliance. The result is systems that access broad FHIR scopes for convenience, retain conversation context for better user experience, and accumulate PHI profiles that the minimum necessary standard prohibits — all while carrying a HIPAA compliance attestation that covers the vendor's infrastructure without addressing the data access pattern that creates the violation.
For healthcare organizations evaluating AI vendors, the most important compliance question is not "is your architecture HIPAA-ready?" — every vendor answers yes. The question is: "What is the minimum necessary FHIR scope for each function your AI performs, and can you show me the actual scope string in the authorization request?" That question separates vendors who have applied the minimum necessary standard as a design constraint from those who have treated it as a compliance checkbox.
45 CFR §164.502(b) is clear. OCR enforcement precedent is clear. The 2024 HHS AI-as-workforce guidance is clear. Healthcare AI systems must access only the PHI necessary for the specific task being performed, must not accumulate PHI across sessions beyond what the task requires, and must treat AI access to PHI with the same scrutiny applied to human workforce access. The standard is not new. The obligation to apply it to AI systems is not ambiguous. The question is only whether the AI vendor designed for it — or designed around it.