AI Act for Healthcare and Medical AI Compliance

EU AI Act compliance guide for healthcare and medical device AI. MDR/IVDR overlap, high-risk classification, and SaMD obligations explained.

Legalithm Team · 17 min read


If you build, deploy, or distribute AI-powered medical devices in the European Union, you face a compliance burden that no other sector shares in quite the same way. Medical device AI must satisfy both the existing Medical Device Regulation (MDR) — or, for diagnostics, the In Vitro Diagnostic Regulation (IVDR) — and the new EU AI Act. These two frameworks overlap substantially in some areas — risk management, technical documentation, post-market surveillance — but diverge sharply in others, particularly around data governance, bias testing, and algorithmic transparency. Understanding where they converge and where they impose genuinely new obligations is the difference between a streamlined compliance programme and a duplicative, expensive one.

This guide maps the dual regulatory landscape. It explains the two pathways through which medical AI becomes high-risk, walks through the timeline, identifies which AI Act requirements are already satisfied by MDR/IVDR and which are not, and provides practical compliance scenarios. If you have not yet classified your system, start with the high-risk classification guide and then return here.

TL;DR — AI Act healthcare essentials

  • Medical device AI faces dual regulation: MDR/IVDR for safety and performance, plus the AI Act for algorithmic transparency, data governance, and bias testing.
  • Two pathways make healthcare AI high-risk: Annex I (safety component of a regulated medical device) and Annex III, area 5 (AI determining access to or influencing healthcare decisions).
  • Class IIa, IIb, and III devices under MDR that require third-party conformity assessment are automatically high-risk under the AI Act — no separate classification needed.
  • Substantial overlap exists: risk management, technical documentation, QMS, and post-market surveillance are already required under MDR.
  • Genuinely new obligations include: data governance with bias testing, specific human oversight requirements, and expanded logging.
  • New AI medical devices must comply from 2 August 2026. Existing devices receive a grace period until 2 August 2027.
  • Conformity assessment follows the MDR/IVDR procedure, with AI Act requirements incorporated — not a separate procedure.
  • Use our AI Act assessment tool to determine your system's classification and applicable obligations.

How healthcare AI becomes high-risk

The AI Act defines two distinct pathways through which an AI system becomes high-risk and subject to Chapter III, Section 2 requirements. Healthcare AI can enter through either — and some systems qualify through both.

Pathway A: Annex I — safety component of a regulated medical device

Article 6(1) establishes that an AI system is high-risk if it is a safety component of a product covered by EU harmonisation legislation listed in Annex I, Section A, and the product requires third-party conformity assessment.

MDR and IVDR are both listed in Annex I, Section A. This means:

  • Class IIa, IIb, and III medical devices incorporating an AI component are automatically high-risk. These device classes require third-party conformity assessment under MDR, satisfying Article 6(1).
  • Class I medical devices are generally excluded because they typically undergo self-assessment. However, Class I devices with a measuring function or supplied sterile may require notified body involvement and could fall within scope.
  • Class C and D IVDs under IVDR follow the same logic — they require notified body assessment and are therefore included.

The critical point: if your AI-powered medical device already requires a notified body under MDR or IVDR, it is high-risk under the AI Act by operation of law. No discretion, no separate risk assessment.

Real-world example: A company develops an AI algorithm that analyses chest CT scans to detect pulmonary nodules. The algorithm is classified as a Class IIb medical device under MDR Rule 11. It requires notified body assessment. By Article 6(1) and Annex I, it is automatically a high-risk AI system.

Pathway B: Annex III, area 5 — access to essential services (health)

Annex III, area 5 covers AI used in "access to and enjoyment of essential private services and essential public services and benefits." Healthcare is explicitly included:

  • AI systems used to evaluate eligibility for public health services, including allocation of healthcare resources.
  • AI systems used to assess health and life risk, including risk categorisation for insurance or care allocation.
  • AI systems used in emergency first response, including prioritisation of emergency dispatch.

This pathway captures AI that influences healthcare decisions even when not classified as a medical device. A health insurer's AI coverage-decision engine or a hospital triage algorithm may not meet the MDR definition of a medical device, but they are high-risk under the AI Act if they determine access to health services.

Where both pathways apply simultaneously: An AI diagnostic tool that is both a Class IIb medical device (Pathway A) and influences clinical decisions about patient treatment access (Pathway B) is high-risk under both pathways. The obligations do not stack — the Chapter III requirements apply once — but the dual classification may be relevant for documentation and conformity assessment routing.

Key timeline for medical AI

| Date | Event | Impact on medical AI |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Clock starts. Begin gap analysis. |
| 2 February 2025 | Prohibited practices apply | Certain AI uses in healthcare (subliminal manipulation, social scoring) become prohibited. |
| 2 August 2025 | GPAI model obligations apply | Foundation models used in medical AI must comply with transparency obligations. |
| 2 August 2026 | High-risk obligations apply for new devices | New AI medical devices placed on the market must fully comply with Chapter III requirements. |
| 2 August 2027 | Grace period ends for existing devices | AI medical devices already on the market with valid MDR/IVDR certificates must comply by this date. |
| Ongoing | MDR/IVDR assessment includes AI Act | Notified bodies integrate AI Act requirements into MDR/IVDR assessments. Any certificate issued after August 2026 reflects AI Act compliance. |

Critical planning note: If your MDR/IVDR certificate is due for renewal between August 2026 and August 2027, the notified body will assess AI Act compliance during renewal. You need AI Act readiness before your renewal date, not before the generic deadline. See the full AI Act timeline for additional milestones.

MDR/IVDR and AI Act: overlap vs new requirements

The most common mistake in healthcare AI compliance is treating the AI Act as an entirely separate regulatory layer. In reality, MDR/IVDR already covers a significant portion of what the AI Act requires. The key is identifying the genuine gaps.

| AI Act requirement | MDR/IVDR equivalent | Overlap | Gap to fill |
|---|---|---|---|
| Risk management (Art. 9) | MDR Annex I, Chapter I | High | Add AI-specific hazards: bias-related risks, algorithmic failure modes, foreseeable misuse. |
| Data governance (Art. 10) | MDR Section 17.1–17.4 (software validation) | Low | Major gap. Requires documented training/validation/test dataset design, bias examination, data quality criteria. |
| Technical documentation (Art. 11) | MDR Annex II, Annex III | Medium | Add model architecture, training methodology, hyperparameters, disaggregated performance metrics. Extend with Annex IV content. |
| Automatic logging (Art. 12) | MDR Section 17.4 (clinical logging) | Medium | Add algorithmic traceability: input/output pairs, version tracking, decision audit trails. |
| Transparency (Art. 13) | MDR Annex I, Chapter III (IFU) | Medium | Disclose that system is AI-powered, capabilities/limitations, expected accuracy across populations. |
| Human oversight (Art. 14) | MDR Section 22.1 (usability) | Low | Significant gap. Override/interrupt capabilities, confidence indicators, human-in/on-the-loop design. |
| Accuracy, robustness, cybersecurity (Art. 15) | MDR general safety; Section 17 (cybersecurity) | Medium–High | Add robustness against adversarial inputs, performance consistency metrics. |
| Quality management (Art. 17) | ISO 13485 | High | Extend to cover data management, model training, bias monitoring, algorithmic change management. |
| Post-market monitoring (Art. 72) | MDR Art. 83–86, PMCF | High | Add monitoring for performance degradation, distribution drift, bias emergence. |

Bottom line: With a compliant MDR/IVDR QMS, you have roughly 50–60% of the AI Act infrastructure in place. The major gaps are data governance, human oversight, and AI-specific elements of risk management and monitoring. Do not build a separate programme — extend the one you have.
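The Article 12 logging gap lends itself to a concrete sketch. The following minimal example, a sketch with assumed field names and an in-memory log (a real system would write to append-only, access-controlled storage), records one traceability entry per inference: a hashed input reference, the output, the model version, and a timestamp:

```python
import hashlib
import json
import time

# Hypothetical sketch of an Article 12-style traceability record. The
# schema ("ts", "model_version", "input_sha256", "output") is an assumption
# for illustration, not a prescribed format.

def log_inference(model_version, input_bytes, output, log):
    """Append one audit record per inference; hashing the input avoids
    storing raw clinical data in the audit trail."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
    }
    log.append(json.dumps(record, sort_keys=True))
    return record

audit_log = []
rec = log_inference("ctnodule-2.3.1", b"<dicom series bytes>",
                    {"finding": "nodule", "probability": 0.91}, audit_log)
print(rec["model_version"], len(audit_log))
```

Pairing the input hash with the model version is what makes individual decisions reconstructable after a model update.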

AI-specific obligations for medical device providers

Data governance and bias in clinical datasets

Article 10 imposes a data governance regime with no direct MDR/IVDR equivalent.

Demographic representativeness. Training, validation, and testing datasets must be sufficiently representative of the intended patient population. Clinical datasets are notorious for underrepresenting ethnic minorities, elderly patients, and paediatric populations. Providers must document dataset demographic composition and justify it against the deployment population, taking documented measures to mitigate underrepresentation through oversampling, synthetic augmentation, or explicit performance disclaimers.
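As a minimal illustration of such a representativeness check (the group labels, proportions, and the 50% tolerance below are assumptions for this sketch, not regulatory values):

```python
from collections import Counter

# Hypothetical sketch: compare training-set demographics against the
# intended deployment population and flag underrepresented subgroups.

def underrepresented_groups(train_labels, deployment_share, tolerance=0.5):
    """Return subgroups whose share of the training data falls below
    `tolerance` times their share of the deployment population."""
    n = len(train_labels)
    train_share = {g: c / n for g, c in Counter(train_labels).items()}
    flagged = {}
    for group, target in deployment_share.items():
        actual = train_share.get(group, 0.0)
        if actual < tolerance * target:
            flagged[group] = {"train": actual, "deployment": target}
    return flagged

# Example: elderly patients make up 30% of the deployment population
# but only 8% of the training data, so they are flagged for mitigation.
train = ["adult"] * 80 + ["elderly"] * 8 + ["paediatric"] * 12
targets = {"adult": 0.55, "elderly": 0.30, "paediatric": 0.15}
print(underrepresented_groups(train, targets))
```

A flagged group would then trigger one of the documented mitigations: oversampling, synthetic augmentation, or an explicit performance disclaimer.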

Training/validation/test set separation. Datasets must be clearly identified and managed separately with data lineage tracking and absence of data leakage — formally documented.
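One common leakage source in clinical datasets is splitting by record rather than by patient, so that scans from the same patient land in both training and test sets. A minimal sketch of a patient-level split with a leakage check (the split ratios and hashing scheme are illustrative assumptions):

```python
import hashlib

# Hypothetical sketch: deterministic patient-level splitting. Hashing the
# patient ID makes the assignment stable across pipeline runs, which can
# be cited in the data lineage documentation.

def assign_split(patient_id, ratios=(("train", 0.8), ("val", 0.1), ("test", 0.1))):
    """Map a patient ID to a split via a hash of the ID."""
    h = int(hashlib.sha256(patient_id.encode()).hexdigest(), 16) % 10_000
    point = h / 10_000
    cumulative = 0.0
    for name, share in ratios:
        cumulative += share
        if point < cumulative:
            return name
    return ratios[-1][0]

def check_no_leakage(records):
    """Verify every patient appears in exactly one split."""
    seen = {}
    for patient_id, split in records:
        if seen.setdefault(patient_id, split) != split:
            raise ValueError(f"patient {patient_id} appears in multiple splits")
    return True

# Three scans per patient: all three inherit the patient's split.
records = [(pid, assign_split(pid)) for pid in ["P001", "P002", "P003"] for _ in range(3)]
print(check_no_leakage(records))
```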

Article 10(5) — health data for bias correction. This permits processing special category data (health data, racial and ethnic origin, genetic data) strictly for bias monitoring, detection, and correction. Safeguards are strict: pseudonymisation, restricted access, no re-use. See the bias testing guide for implementation.

Technical documentation expansion

Article 11 requires documentation beyond MDR's Annex II/III:

  • Model architecture: network topology, algorithm type, ensemble structure — sufficient for an assessor to understand the computational approach.
  • Training methodology: optimisation algorithms, loss functions, regularisation, hyperparameter selection.
  • Disaggregated performance metrics: accuracy, sensitivity, specificity, AUC-ROC broken down by demographic subgroups.
  • Data provenance: origin, collection methods, annotation procedures, quality control.

Extend your existing MDR technical file with an AI-specific annex addressing Annex IV requirements. Structure it as a supplement.
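The disaggregated metrics requirement above can be sketched in a few lines. This is an illustrative example only; the input format of `(label, prediction, group)` triples and the two subgroups are assumptions:

```python
# Hypothetical sketch: per-subgroup sensitivity and specificity for the
# AI Act technical file, computed from binary labels and predictions.

def disaggregated_metrics(samples):
    """samples: iterable of (y_true, y_pred, group) with binary labels."""
    counts = {}
    for y, p, g in samples:
        c = counts.setdefault(g, {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
        if y == 1:
            c["tp" if p == 1 else "fn"] += 1
        else:
            c["tn" if p == 0 else "fp"] += 1
    report = {}
    for g, c in counts.items():
        report[g] = {
            "sensitivity": c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None,
            "specificity": c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else None,
            "n": sum(c.values()),
        }
    return report

data = [
    (1, 1, "female"), (1, 0, "female"), (0, 0, "female"), (0, 0, "female"),
    (1, 1, "male"), (1, 1, "male"), (0, 1, "male"), (0, 0, "male"),
]
for group, m in disaggregated_metrics(data).items():
    print(group, m)
```

Reporting `n` per subgroup matters: a sensitivity gap computed on a handful of samples is noise, not evidence of bias, and the documentation should say so.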

Human oversight in clinical settings

Article 14 requires effective human oversight proportionate to risk and autonomy.

Human-in-the-loop (HITL): Every AI output requires active clinician confirmation. Default for most Class IIb and III systems. A radiology AI flags suspicious lesions; a radiologist confirms or dismisses each.

Human-on-the-loop (HOTL): The AI operates semi-autonomously within defined parameters; clinicians monitor and intervene on anomalies. A continuous monitoring AI triggers alerts when vitals exceed thresholds.

Critical consideration: alert fatigue undermines oversight. A system generating excessive false positives leads clinicians to ignore alerts. Effective oversight means meaningful human engagement — not just technical permission to intervene.

Post-market clinical follow-up + AI Act monitoring

MDR requires post-market surveillance (PMS) and PMCF. Article 72 adds AI-specific monitoring:

| Aspect | MDR PMS/PMCF | AI Act Art. 72 | Combined approach |
|---|---|---|---|
| Clinical performance | Track outcomes, adverse events | N/A | Continue MDR PMCF |
| Algorithmic performance | Not covered | Monitor accuracy, bias, robustness | New. Add algorithmic monitoring. |
| Data drift | Not covered | Monitor input distribution changes | New. Implement drift detection. |
| Bias emergence | Not covered | Monitor subgroup disparities | New. Track disaggregated metrics. |
| Incident reporting | MDR Art. 87 — 15 days | AI Act Art. 73 | Harmonised. Single report for both. |

Integrate AI monitoring into your existing PMS plan. See the post-market monitoring guide for details.
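Drift detection can be as simple as comparing a production feature sample against its validation-time baseline. A minimal sketch using a two-sample Kolmogorov–Smirnov statistic (the 0.2 threshold is an illustrative assumption, not a regulatory value; in practice a library such as SciPy's `ks_2samp` with a significance test would be preferable):

```python
import bisect

# Hypothetical sketch: input-distribution drift check for Article 72
# monitoring, computing the maximum distance between two empirical CDFs.

def ks_statistic(baseline, production):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    xs = sorted(set(baseline) | set(production))
    b, p = sorted(baseline), sorted(production)
    d = 0.0
    for x in xs:
        cdf_b = bisect.bisect_right(b, x) / len(b)
        cdf_p = bisect.bisect_right(p, x) / len(p)
        d = max(d, abs(cdf_b - cdf_p))
    return d

def drifted(baseline, production, threshold=0.2):
    return ks_statistic(baseline, production) > threshold

baseline = [0.1 * i for i in range(100)]        # validation-time feature values
shifted = [0.1 * i + 3.0 for i in range(100)]   # production values after a scanner change
print(drifted(baseline, baseline))  # same distribution: no drift
print(drifted(baseline, shifted))   # shifted distribution: drift detected
```

A drift alarm does not by itself mean the model is wrong; it means the deployment population has moved away from the validated one, which is exactly the event the PMS plan should document and investigate.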

SaMD (Software as a Medical Device) specifics

Software as a Medical Device (SaMD) — standalone software qualifying as a medical device — is the healthcare AI category most comprehensively affected by the AI Act. SaMD products are software-first, meaning every AI Act obligation applies directly.

Classification under MDR

SaMD classification follows Rule 11 of MDR Annex VIII:

  • Class IIa: Information for diagnostic/therapeutic decisions concerning Class I/IIa conditions (e.g., triage tool for low-risk skin conditions).
  • Class IIb: Information for decisions concerning Class IIb/III conditions (e.g., AI analysing ECGs for arrhythmia).
  • Class III: Information directly determining patient treatment (e.g., AI radiation therapy planning).

SaMD under the AI Act

All SaMD classified Class IIa or higher is high-risk under the AI Act via Annex I. Key categories:

  • Clinical decision support (CDS): AI recommending treatment protocols or suggesting differential diagnoses. If used by clinicians for patient-specific decisions, it likely qualifies as SaMD regardless of how it is positioned.
  • AI diagnostic tools: Standalone AI analysing images, labs, or genomic data. Unambiguously SaMD and high-risk.
  • Risk stratification tools: AI assigning patients to risk categories — may qualify as SaMD and/or fall under Annex III, area 5.

Key SaMD compliance considerations

  1. No hardware buffer. Unlike AI embedded in a physical medical device — where the hardware manufacturer handles much of the regulatory infrastructure — SaMD providers must directly manage every compliance aspect, from QMS to post-market surveillance to conformity assessment.
  2. Update cycles vs re-assessment. SaMD products often update frequently (model retraining, algorithm improvements, UI changes). Under both MDR and the AI Act, substantial modifications trigger re-assessment. Providers must establish change management that distinguishes between minor updates (documentation update only) and substantial changes (new conformity assessment required) — Article 43(4) mirrors MDR Article 120 in this regard.
  3. Cloud deployment complexity. SaMD deployed as a cloud service raises questions about continuous compliance monitoring, version control across all deployment instances, and data residency requirements when clinical data crosses borders.
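The minor-versus-substantial gate in point 2 can be encoded as an explicit rule set so that every release passes through it automatically. The trigger list below is an assumption to illustrate the logic; your regulatory team defines the authoritative criteria:

```python
# Hypothetical sketch: a change-management gate classifying SaMD updates
# as minor (documentation update) or substantial (re-assessment required).
# The trigger set is illustrative, not a legal checklist.

SUBSTANTIAL_TRIGGERS = {
    "training_data_changed",
    "model_architecture_changed",
    "intended_purpose_changed",
    "performance_claims_changed",
}

def classify_change(change_flags):
    """change_flags: set of strings describing what an update touches."""
    hits = change_flags & SUBSTANTIAL_TRIGGERS
    if hits:
        return ("substantial", sorted(hits))  # new conformity assessment
    return ("minor", [])                      # documentation update only

print(classify_change({"bug_fix", "ui_copy_changed"}))
print(classify_change({"training_data_changed", "bug_fix"}))
```

Running this gate in CI forces engineering to declare, per release, which regulated aspects an update touches, and leaves an auditable record of each verdict.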

Conformity assessment for medical AI

Conformity assessment for AI medical devices does not require two separate procedures. AI Act requirements are integrated into the existing MDR/IVDR process.

How dual assessment works

Article 43(3) of the AI Act states that for high-risk AI covered by Annex I legislation, the conformity assessment under that legislation applies, provided AI Act Chapter III requirements are also verified. In practice:

  1. Follow the MDR/IVDR route appropriate to your device's risk class (e.g., Annex IX for Class III, Annex XI for Class IIb).
  2. The notified body verifies AI Act compliance alongside MDR/IVDR assessment, checking Articles 9–15.
  3. A single conformity certificate covers both frameworks.
  4. The EU declaration of conformity references both regulations.

Notified body preparedness

Not all MDR/IVDR notified bodies are currently equipped for AI assessment. Expect extended timelines and potential bottlenecks. Engage your notified body early to understand their AI assessment readiness and documentation requirements. If your body lacks AI competence, consider switching — factoring in associated costs and delays.

Structuring documentation

  1. MDR/IVDR technical file — existing documentation covering Annex II/III.
  2. AI Act supplement — addendum covering Annex IV items: model description, data governance, bias testing, disaggregated metrics, human oversight design.
  3. Integrated QMS — extend ISO 13485 with AI-specific procedures (model lifecycle, data pipelines, algorithmic change control).
  4. Combined PMS plan — single plan incorporating MDR PMCF and AI Act Article 72 monitoring.

For cost and timeline details, see the conformity assessment guide.

Real-world compliance scenarios

Scenario 1: AI radiology tool for lung nodule detection

Product: AI analysing chest CT scans, flagging suspicious pulmonary nodules with malignancy probability scores. Radiologist reviews all findings.

MDR: Class IIb (Rule 11 — diagnosis of Class IIb/III conditions). AI Act: High-risk via Annex I.

Key requirements:

  • Risk management: Extend MDR risk file with AI-specific hazards — false negatives leading to missed cancers, false positives leading to unnecessary biopsies, and performance variation across CT scanner manufacturers or imaging protocols.
  • Data governance: Document training dataset demographics (age, sex, ethnicity, scanner types, nodule size distribution). Demonstrate representativeness and conduct bias testing for performance disparities across subgroups.
  • Human oversight: HITL design with clear presentation of AI confidence levels, ability to dismiss or modify findings, and workflow integration that does not create pressure to accept AI outputs uncritically.
  • Technical documentation: Add CNN architecture description, training procedure details, and performance metrics (sensitivity, specificity, AUC-ROC) disaggregated by nodule size, patient demographics, and scanner manufacturer.
  • Post-market monitoring: Extend PMCF with algorithmic performance monitoring per deployment site — track sensitivity and specificity, monitor for data drift as scanner populations change.

Scenario 2: Clinical decision support for treatment recommendations

Product: Cloud-based AI ingesting patient EHRs and recommending oncology treatment protocols. Oncologists use recommendations as one input among many.

MDR: Likely Class IIa or IIb SaMD under Rule 11, depending on positioning. AI Act: High-risk via Annex III, area 5 (influencing healthcare decisions), and potentially Annex I if classified as SaMD.

Key requirements:

  • Data governance: Complex challenge — EHRs contain significant demographic biases (treatment patterns differ by socioeconomic status, geography, and insurance coverage). Document how training data was curated to avoid perpetuating treatment disparities.
  • Human oversight: HITL design with enhanced transparency — the system must present its reasoning (evidence citations, guideline references) alongside recommendations, giving oncologists sufficient information to evaluate and override suggestions.
  • Transparency: Instructions for use must clearly state which cancer types and stages are covered, which patient populations are underrepresented, and conditions under which recommendations require particular caution.
  • Bias testing: Test recommendation patterns across patient demographics. Document whether the system recommends different treatment intensities for equivalent clinical presentations across racial, gender, or age groups — and whether any differences are clinically justified.

Scenario 3: Wearable health monitor with AI anomaly detection

Product: Wrist-worn device with PPG and accelerometer sensors. On-device AI detects atrial fibrillation and abnormal respiratory patterns, alerting users to consult physicians.

MDR: Class IIa (Rule 11 — screening for Class IIa conditions). AI Act: High-risk via Annex I.

Key requirements:

  • Risk management: AI-specific risks include false positive alerts causing unnecessary anxiety, false negatives missing clinically significant arrhythmias, PPG signal quality variation across skin tones (melanin levels affect signal), and motion artefacts during physical activity.
  • Data governance: Training data must represent the full range of skin tones, wrist sizes, age groups, and activity levels. Document melanin-related signal quality analysis and any compensation algorithms.
  • Human oversight: HOTL design — the device operates autonomously for screening, with users and physicians as the oversight layer. Alert thresholds must balance sensitivity (minimise missed arrhythmias) against false positive rates to avoid alert fatigue.
  • On-device AI: Document model optimisation for edge deployment (quantisation, pruning), and how firmware updates are managed without disrupting the validated algorithm — meeting both MDR substantial modification criteria and AI Act change management.
  • Post-market monitoring: Monitor algorithm performance across the real-world user population. Track detection accuracy by skin tone, age, and activity context. Implement automated drift detection for population-level metrics.
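The alert-threshold trade-off in the oversight bullet above can be made concrete: pick the highest score threshold whose sensitivity still meets a clinical floor, which minimises alert volume. The sensitivity floor and the toy score data are illustrative assumptions:

```python
# Hypothetical sketch: threshold selection that holds sensitivity above a
# clinical floor while keeping the false positive rate (and alert fatigue)
# as low as possible.

def pick_threshold(scores, labels, sensitivity_floor=0.95):
    """Return (threshold, sensitivity, false_positive_rate) for the highest
    threshold meeting the floor, or None if no threshold qualifies."""
    positives = sum(labels)
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / positives
        if sens >= sensitivity_floor:
            # Highest qualifying threshold -> fewest total alerts.
            return (t, sens, fp / (len(labels) - positives))
    return None

scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    1,    1,    0,    1,    0,    0]
print(pick_threshold(scores, labels, sensitivity_floor=0.8))
```

The chosen threshold, the floor it satisfies, and the resulting false positive rate all belong in the human-oversight section of the technical file, since together they evidence that oversight remains meaningful rather than nominal.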

Frequently Asked Questions

Does the AI Act apply to all medical device software, or only AI?

The AI Act applies only to AI systems as defined in Article 3(1) — systems using machine learning, knowledge-based, or statistical methods to generate predictions, recommendations, or decisions. Traditional deterministic software (e.g., a dosage calculator using fixed formulae) is generally not covered. However, any learned model — even a simple regression trained on patient data — likely falls within scope. Use our AI Act assessment tool for classification.

We already have MDR certification. Do we need a new assessment?

Not a new assessment, but your certification must be extended to incorporate AI Act requirements. Existing devices have until August 2027. When your certificate is renewed or a new device is placed on the market after 2 August 2026, the notified body assesses AI Act compliance as part of the MDR procedure. See the conformity assessment guide.

How should we handle model retraining under dual regulation?

Both frameworks require assessing whether a change constitutes a substantial modification. Under Article 43(4), changes to training data, model architecture, or intended purpose trigger re-assessment. Establish a change management protocol evaluating every update against both frameworks. Minor updates (bug fixes) need only documentation updates; major changes (new datasets, architecture changes) will likely require re-assessment. See the compliance checklist.

Can we use patient health data for bias testing?

Yes — Article 10(5) permits processing special category data (health data, racial origin, genetic data) strictly for bias monitoring, detection, and correction. Safeguards apply: pseudonymisation, restricted access, no re-use. Involve your DPO in process design. See the bias testing guide.

What happens if our AI device causes harm?

Both frameworks apply simultaneously. MDR Article 87 requires serious incident reporting within 15 days. AI Act Article 73 imposes parallel reporting obligations. Align procedures to satisfy both with a single report. The AI Liability Directive and revised Product Liability Directive will create additional exposure. See the incident reporting guide.

Must we disclose our model architecture to patients?

Not in full technical detail. Article 13 requires sufficient transparency for users to interpret outputs appropriately. Instructions for use must state: the system uses AI, intended purpose and limitations, expected accuracy, validated populations, and degradation conditions. Detailed technical documentation goes to the notified body and market surveillance — not publicly disclosed.

Next steps

Healthcare AI compliance under the AI Act is an extension of your existing MDR/IVDR programme, not a separate project:

  1. Classify your system using both Annex I and Annex III pathways. Use the AI Act assessment tool.
  2. Conduct a gap analysis comparing current MDR/IVDR processes against AI Act Chapter III, using the overlap table above.
  3. Prioritise genuine gaps: data governance, bias testing, AI-specific risk management, human oversight design, algorithmic monitoring.
  4. Engage your notified body early to understand AI assessment readiness.
  5. Extend your QMS with AI-specific procedures rather than building a parallel structure.

For a complete compliance walkthrough, see the EU AI Act compliance checklist. Run the free AI Act assessment for your specific product.

AI Act
Healthcare
Medical Devices
MDR
SaMD
High-Risk
Annex I
Compliance
