
EU AI Act Compliance Checklist 2026: The Complete Step-by-Step Action Plan

The EU AI Act's most significant enforcement wave hits on 2 August 2026. From that date, providers and deployers of high-risk AI systems must meet a demanding set of obligations — or face fines of up to EUR 15 million or 3% of global annual turnover.

This checklist distils the full Regulation (EU) 2024/1689 into a concrete action plan. It is designed for startup founders, CTOs, and lean compliance teams who need to move from "aware" to "audit-ready" in the months ahead.

TL;DR — What you must have in place by 2 August 2026

If you provide or deploy a high-risk AI system in the EU:

  • A complete AI systems inventory with risk classifications for every system.
  • A risk management system that runs iteratively through the AI lifecycle (Article 9).
  • Annex IV technical documentation — all nine mandatory sections (Article 11).
  • A quality management system covering design, testing, data, and monitoring (Article 17).
  • Completed conformity assessment — self-assessment or notified body (Article 43).
  • EU declaration of conformity and CE marking (Articles 47–48).
  • Registration in the EU database (Article 49).
  • Active post-market monitoring and incident reporting processes (Article 72).

If you are a deployer: verified provider compliance, human oversight assignments, log retention (6+ months), affected-person notifications, and — if applicable — a completed fundamental rights impact assessment.

Phase 0: Understand your starting position

Before building anything, you need three baseline answers.

0.1 Build an AI systems inventory

List every AI system your organisation develops, deploys, imports, or distributes. Include third-party AI embedded in SaaS tools your teams use — CRM lead scoring, chatbot plugins, code assistants, HR screening tools. Shadow AI counts. Survey every department.

For each system, record:

  • System name and vendor
  • Intended purpose and actual use
  • Data inputs and outputs
  • Who it affects (employees, customers, public)
  • Your role: provider, deployer, importer, or distributor (Article 3)
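
If you want the inventory to be machine-readable from day one, a simple structured record per system works well. The sketch below is illustrative only: the field names and the Role enum are our own invention, not terminology mandated by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    # Operator roles under Article 3
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    """One row of the AI systems inventory (illustrative schema)."""
    name: str
    vendor: str
    intended_purpose: str
    actual_use: str
    data_inputs: list[str]
    data_outputs: list[str]
    affected_groups: list[str]       # employees, customers, public, ...
    role: Role
    risk_tier: str = "unclassified"  # filled in during step 0.2

inventory = [
    AISystemRecord(
        name="Internal credit-scoring model",
        vendor="in-house",
        intended_purpose="Assess creditworthiness of loan applicants",
        actual_use="Consumer loan underwriting",
        data_inputs=["income", "repayment history"],
        data_outputs=["credit score"],
        affected_groups=["customers"],
        role=Role.PROVIDER,
        risk_tier="high-risk (Annex III 5(a))",
    ),
]
```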

Real-world example: A fintech startup using Stripe Radar for fraud detection, an internally trained credit-scoring model, and GitHub Copilot for development has three AI systems to inventory. Stripe Radar and Copilot are deployed (deployer role); the credit-scoring model is provided (provider role, high-risk under Annex III point 5(a) — creditworthiness assessment).

See the full AI systems inventory guide for the complete process, including shadow AI discovery.

0.2 Classify each system by risk level

The AI Act uses four risk tiers. Your obligations depend entirely on where each system falls:

Risk level | What it covers | Obligations | Deadline
--- | --- | --- | ---
Prohibited (Article 5) | Social scoring, manipulative subliminal techniques, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces/education, untargeted facial image scraping, biometric categorisation inferring sensitive attributes | Banned outright | Already enforceable (2 Feb 2025)
High-risk (Article 6, Annex III) | AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice/democracy | Full compliance (Articles 8–15, 17, 43, 49, 72) | 2 August 2026
Limited risk (Article 50) | Chatbots, deepfake generators, emotion recognition, biometric categorisation | Transparency obligations | 2 August 2026
Minimal risk | Spam filters, recommendation engines, AI-assisted game mechanics | No mandatory obligations (voluntary codes under Article 95) | N/A
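
For a first pass over a large inventory, a rule-of-thumb triage script can flag which systems need closer legal review. The sketch below is a deliberately crude heuristic with abridged keyword lists; it cannot replace an actual Article 5/6 analysis.

```python
# Illustrative first-pass triage -- NOT a substitute for legal analysis.
# Both keyword sets below are abridged; check the Regulation's full text.
PROHIBITED_PRACTICES = {
    "social scoring", "subliminal manipulation",
    "untargeted facial scraping", "workplace emotion recognition",
}
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration",
    "justice and democratic processes",
}

def triage(practice: str, area: str | None) -> str:
    """Rough risk-tier triage for one inventoried system."""
    if practice in PROHIBITED_PRACTICES:
        return "prohibited (Article 5) -- banned since 2 Feb 2025"
    if area in ANNEX_III_AREAS:
        return "high-risk (Annex III) -- full obligations by 2 Aug 2026"
    return "limited or minimal risk -- check Article 50 transparency duties"

print(triage("credit scoring", "essential services"))
# high-risk (Annex III) -- full obligations by 2 Aug 2026
```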

Not sure where your system falls? Run a free AI Act risk classification.

Real-world example: An HR tech company offering AI-powered candidate screening falls squarely into Annex III point 4(a) — "AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates." This is high-risk regardless of company size.

0.3 Determine your role for each system

Your obligations differ sharply depending on whether you are a provider (you built/trained the AI and put it on the market under your name) or a deployer (you use an AI system built by someone else in your professional capacity).

Providers carry the heaviest burden: risk management, technical documentation, conformity assessment, CE marking, post-market monitoring. Deployers have lighter but still binding obligations: human oversight, log retention, affected-person notification, and (in some cases) a fundamental rights impact assessment.

See the provider vs deployer comparison guide for the full side-by-side obligation table.

Phase 1: High-risk provider obligations

If you provide a high-risk AI system, these are your mandatory compliance deliverables.

1.1 Establish a risk management system (Article 9)

The risk management system must be a continuous, iterative process that runs throughout the entire lifecycle of the high-risk AI system. It is not a one-time assessment.

Required steps:

  1. Identify known and reasonably foreseeable risks to health, safety, and fundamental rights.
  2. Estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
  3. Evaluate risks that may emerge from analysis of data gathered from post-market monitoring.
  4. Adopt appropriate and targeted risk management measures, taking into account the state of the art, the generally acknowledged best practices, and the specific circumstances of the system's use.
  5. Test the system to confirm residual risk is acceptable — testing must be performed against preliminarily defined metrics and probabilistic thresholds.

Real-world example: A provider of an AI system used for university admissions (Annex III point 3) must identify risks such as: bias against applicants from certain socioeconomic backgrounds, over-reliance on standardised test scores, inability to account for non-traditional qualifications, and systematic disadvantage to applicants with disabilities. For each risk, the provider documents the probability, severity, mitigation measures adopted, and residual risk after mitigation.
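In practice, teams keep this as a living risk register. A minimal structure for one register entry might look like the sketch below; the fields are our suggestion, since the Act prescribes the process, not a schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One line of an Article 9 risk register (illustrative fields)."""
    risk: str
    probability: str       # e.g. low / medium / high
    severity: str
    mitigations: list[str]
    residual_risk: str     # re-assessed after mitigation (testing, step 5)

register = [
    RiskEntry(
        risk="Bias against applicants from certain socioeconomic backgrounds",
        probability="medium",
        severity="high",
        mitigations=["reweighted training data", "subgroup performance gates"],
        residual_risk="low",
    ),
]
```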

1.2 Implement data governance (Article 10)

  • Define data quality criteria for training, validation, and testing datasets.
  • Examine datasets for possible biases — especially those that could lead to discrimination against protected groups.
  • Take appropriate measures to detect, prevent, and mitigate biases.
  • Document the origin of the data, collection methodology, and preparation procedures (cleaning, labelling, enrichment, aggregation).
  • Where special categories of personal data are processed for bias detection and correction, document the legal basis under GDPR Article 9 and the specific safeguards applied.

Real-world example: A credit-scoring AI provider must document that training data was examined for bias against protected characteristics (gender, ethnicity, age). If the model was trained on historical lending decisions, the provider must address the risk that past discrimination is encoded in the data and describe the debiasing techniques applied.
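A concrete starting point for the bias-examination step is to compare selection rates across protected groups in the training labels. The sketch below uses pandas and the four-fifths disparate-impact heuristic, which is a common fairness rule of thumb, not a threshold set by the AI Act.

```python
import pandas as pd

# Toy stand-in for historical training labels; use your real dataset.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Selection rate per protected group, and the ratio of worst to best.
rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates.to_dict())                    # approx {'A': 0.67, 'B': 0.33}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                           # four-fifths heuristic
    print("flag for review: selection rates diverge across groups")
```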

1.3 Prepare technical documentation (Article 11, Annex IV)

Annex IV specifies nine mandatory sections:

  1. General system description and intended purpose
  2. Development methodology, design choices, and computational resources
  3. Monitoring, functioning, and control description
  4. Performance metrics and appropriateness justification
  5. Risk management documentation
  6. Data governance practices
  7. Human oversight measures
  8. Relevant changes and updates
  9. Post-market monitoring plan

Start drafting now: this typically takes 40–80 hours for moderately complex systems, and 100–200+ hours for systems with complex architectures or large training datasets. Do not try to write it retrospectively.

See the complete Annex IV documentation guide for section-by-section instructions.

1.4 Design automatic logging (Article 12)

  • Build logging into the system architecture, not as an afterthought.
  • Logs must enable traceability of the system's operation throughout its lifecycle.
  • Ensure logs record events relevant to identifying situations where the system may present a risk or undergo a substantial modification, and to facilitating post-market monitoring and the deployer's own operation monitoring.
  • Logs must be retained and made available to market surveillance authorities upon request.
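
A minimal traceability log might capture, per inference, a timestamp, the model version, a hash of the inputs, and the output. The record format below is our own illustration; Article 12 prescribes what logs must enable, not a schema.

```python
import datetime
import hashlib
import json

def log_inference(event_log: list, model_version: str,
                  inputs: dict, output) -> None:
    """Append one traceability record per inference."""
    event_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash inputs rather than storing raw personal data in logs.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    })

events = []
log_inference(events, "credit-scorer-1.3", {"income": 42000}, {"score": 0.71})
print(json.dumps(events, indent=2))
```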

1.5 Provide transparency and instructions for use (Article 13)

Write clear, comprehensive instructions for deployers that include:

  • The provider's identity and contact details
  • The system's intended purpose, capabilities, and limitations
  • The level of accuracy, robustness, and cybersecurity — with disaggregated performance across relevant subgroups where applicable
  • Any known or foreseeable circumstances that may lead to risks
  • The technical measures for human oversight, including interpretability of outputs
  • Expected lifetime, maintenance, and update requirements
  • Where applicable, specifications for input data
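
For the disaggregated-performance point, something as simple as accuracy broken out by subgroup can go straight into the instructions for use. An illustrative helper:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy disaggregated by subgroup.
    `records` is an iterable of (subgroup, prediction, label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

data = [("18-30", 1, 1), ("18-30", 0, 1), ("65+", 1, 1), ("65+", 1, 1)]
print(subgroup_accuracy(data))   # {'18-30': 0.5, '65+': 1.0}
```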

1.6 Design for human oversight (Article 14)

Build oversight features so that natural persons assigned to oversight can:

  • Properly understand the relevant capacities and limitations of the system
  • Duly monitor its operation and detect anomalies, dysfunctions, and unexpected performance
  • Remain aware of automation bias, particularly for systems that provide recommendations for decisions
  • Correctly interpret the system's output, taking into account the characteristics of the system and the interpretation tools and methods available
  • Decide not to use the system, or otherwise disregard, override, or reverse the output
  • Intervene in the operation or interrupt the system through a "stop" button or similar procedure
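
One way to make the "disregard, override, or reverse" requirement concrete in software is to treat every AI output as a recommendation that a named overseer must confirm or override, with the rationale recorded. A minimal sketch, with names of our own choosing:

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    """Human-in-the-loop gate: the AI output stays a recommendation
    until a named overseer confirms or overrides it."""
    ai_output: str
    overseer: str
    final_decision: str | None = None
    override_rationale: str | None = None

    def confirm(self) -> None:
        self.final_decision = self.ai_output

    def override(self, decision: str, rationale: str) -> None:
        # Record why the overseer deviated, for auditability.
        self.final_decision = decision
        self.override_rationale = rationale

d = ReviewedDecision(ai_output="reject application", overseer="j.doe")
d.override("approve application", "non-traditional income verified manually")
print(d.final_decision, "|", d.override_rationale)
```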

1.7 Achieve accuracy, robustness, and cybersecurity (Article 15)

  • Declare accuracy levels and relevant metrics in the instructions for use.
  • Design for resilience against errors, faults, and inconsistencies — including interaction with the environment, other AI systems, and natural persons.
  • Address AI-specific vulnerabilities: data poisoning, model poisoning, adversarial examples (model evasion), and confidentiality attacks such as membership inference.
  • Implement redundancy solutions including backup or fail-safe plans where appropriate.
  • For continual-learning systems, address feedback loop risks that may compromise compliance.

1.8 Establish a quality management system (Article 17)

Document policies and procedures covering:

  • Regulatory compliance strategy and design/development controls
  • Testing, validation, and verification procedures including pre-deployment and ongoing testing
  • Data management practices and data quality controls
  • Post-market monitoring procedures
  • Incident and malfunction reporting processes
  • Communication procedures with competent authorities, notified bodies, and other operators
  • Resource management (including supply chain management where relevant)
  • An accountability framework with assigned responsibilities at management level

1.9 Complete conformity assessment (Article 43)

Before placing your system on the market:

  • Most Annex III high-risk systems: self-assessment (internal conformity assessment, Annex VI) is sufficient.
  • Biometric identification systems and safety components of products under existing EU legislation: third-party assessment by a notified body (Annex VII) is mandatory.
  • Draw up an EU declaration of conformity (Article 47).
  • Affix CE marking (Article 48).

See the conformity assessment guide for the detailed process.

1.10 Register in the EU database (Article 49)

Register the system in the EU database for high-risk AI systems before placing it on the market or putting it into service. The database is publicly accessible and contains system descriptions, provider details, conformity assessment status, and intended purpose.

1.11 Set up post-market monitoring (Article 72)

  • Establish a post-market monitoring system proportionate to the nature and risks of the AI system.
  • Actively collect and analyse data on performance throughout the system's lifetime.
  • Feed monitoring data back into the risk management system.
  • Report serious incidents to market surveillance authorities without undue delay — and no later than 15 days after becoming aware (Article 73).
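
The 15-day outer limit is easy to operationalise as a hard deadline in your incident-response tooling. A trivial sketch (note that Article 73 sets shorter windows for certain incident types, such as death or widespread infringement, so check the Regulation for your case):

```python
import datetime

REPORTING_WINDOW = datetime.timedelta(days=15)   # Article 73 outer limit

def report_due_by(aware_at: datetime.datetime) -> datetime.datetime:
    """Latest permissible report date for a serious incident.
    NB: shorter windows apply to some incident types -- see Article 73."""
    return aware_at + REPORTING_WINDOW

aware = datetime.datetime(2026, 9, 1, tzinfo=datetime.timezone.utc)
print(report_due_by(aware).date())   # 2026-09-16
```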

Phase 2: High-risk deployer obligations

If you deploy (use) a high-risk AI system built by someone else, your obligations are lighter but legally binding.

2.1 Verify the provider's compliance

  • Request and review the EU declaration of conformity.
  • Confirm CE marking is present.
  • Obtain the instructions for use and confirm they cover your deployment context.
  • If a provider cannot supply these, treat it as a serious compliance red flag — consider whether continued use is defensible.

2.2 Implement human oversight (Article 26)

  • Assign competent, trained individuals to oversee the system.
  • Ensure they understand the system's capabilities and limitations — specifically, the tendency towards automation bias.
  • Follow the provider's human oversight instructions.
  • Ensure overseers have the authority and ability to override or disregard the system's output.

Real-world example: A bank deploying a third-party AI system for credit decisions must ensure that trained loan officers review AI recommendations before final decisions, can override the AI's output, understand how the AI produces its scores, and document the rationale when they deviate from the AI's recommendation.

2.3 Monitor and retain logs

  • Monitor the system's operation for risks arising in the specific deployment context.
  • Retain automatically generated logs for a minimum of six months (or longer if required by sector-specific law, e.g., financial services retention requirements).
  • Report serious incidents or malfunctions to the provider and to the relevant market surveillance authority.

2.4 Inform affected persons

  • Inform natural persons that they are subject to a high-risk AI system before or at the time the system is used in relation to them.
  • Provide meaningful information about the logic involved and the significance of the output.
  • Where the system makes or assists in decisions with legal or similarly significant effects, inform the person of their right to obtain an explanation (Article 86).

2.5 Conduct a fundamental rights impact assessment (if applicable) (Article 27)

Required if you are:

  • A body governed by public law
  • A private entity providing public services (for example in education, healthcare, social services, or housing)
  • A deployer of credit-scoring systems or life/health insurance risk-assessment and pricing systems (Annex III points 5(b) and 5(c))

Complete the FRIA before deployment. Register the results in the EU database. See the FRIA guide for the step-by-step process.

Phase 3: Transparency and GPAI obligations

3.1 Transparency for limited-risk systems (Article 50)

  • Chatbots and conversational AI: Inform users they are interacting with an AI system — unless this is obvious from the circumstances and context of use.
  • Deepfakes and synthetic content: Label AI-generated or manipulated image, audio, or video content in a machine-readable format. Deployers must disclose that the content has been artificially generated or manipulated.
  • Emotion recognition and biometric categorisation: Inform individuals that they are being exposed to such systems and describe the system's operation.
  • AI-generated text on matters of public interest: If you publish AI-generated text to inform the public on matters of public interest, disclose that it was generated by AI, unless the content has undergone human editorial review and a person holds editorial responsibility for its publication.

3.2 General-purpose AI model obligations (Articles 51–55)

If you provide a GPAI model (foundation model, large language model):

  • Prepare and maintain technical documentation including training methodology, data sources, and evaluation results (Article 53).
  • Provide information and documentation to downstream providers integrating the model.
  • Establish a policy for compliance with Union copyright law, including the text and data mining opt-out under the Copyright Directive.
  • Publish a sufficiently detailed summary of the training data content.

GPAI models classified with systemic risk (based on cumulative compute exceeding 10^25 FLOPs, or Commission designation) carry additional obligations: adversarial testing (red-teaming), incident monitoring, energy consumption reporting, and cybersecurity measures (Article 55).
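
To see whether the 10^25 FLOPs presumption is even in play, teams often use the rough C ≈ 6 · N · D rule of thumb for dense-transformer training compute. This is an industry approximation, not a method prescribed by the Act, and the figures below are hypothetical.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: C ~ 6 * N * D."""
    return 6 * params * tokens

THRESHOLD = 1e25   # Article 51(2) cumulative-compute presumption

# e.g. a 70B-parameter model trained on 15T tokens:
c = training_flops(70e9, 15e12)
print(f"{c:.2e} FLOPs -> systemic-risk presumption: {c > THRESHOLD}")
# 6.30e+24 FLOPs -> systemic-risk presumption: False
```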

Phase 4: Governance and documentation wrap-up

4.1 Appoint an authorised representative (if needed)

Non-EU providers must appoint an authorised representative established in the EU before placing their system on the market (Article 22). The representative must have a written mandate specifying the tasks to be performed, including maintaining technical documentation, cooperating with authorities, and registering in the EU database.

4.2 Train your workforce on AI literacy (Article 4)

Providers and deployers must ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education, training, the context in which the AI systems are to be used, and the persons or groups on which the AI systems are to be used.

This obligation is already in force as of 2 February 2025.

Real-world example: A healthcare organisation deploying AI-assisted diagnostic tools must ensure that radiologists understand the AI's intended purpose, its accuracy limitations in edge cases, how to interpret confidence scores, and when to override the system. Generic "AI awareness" training is insufficient — it must be tailored to the specific system and context.

4.3 Prepare for market surveillance

National market surveillance authorities can:

  • Request documentation and access to AI systems — including source code in certain circumstances
  • Conduct unannounced on-site inspections
  • Require providers to take corrective measures within a specified period
  • Order the withdrawal or recall of non-compliant AI systems from the market
  • Restrict or prohibit the use of AI systems presenting serious risks
  • Impose administrative fines

Ensure your documentation is audit-ready, accessible on demand, and maintained in at least one official EU language.

Timeline summary

Deadline | What becomes enforceable | Status
--- | --- | ---
2 February 2025 | Prohibited practices, AI literacy obligation | Already in force
2 August 2025 | GPAI model obligations, governance structures | Already in force
2 August 2026 | High-risk obligations, transparency, deployer duties, conformity assessment | Upcoming
2 August 2027 | AI systems embedded in regulated products (Annex I, Section A) | Future

Common mistakes that delay compliance

Mistake 1: Starting with documentation instead of classification

Teams that jump straight to Annex IV technical documentation without first classifying their systems often discover — months later — that they documented the wrong systems, used the wrong scope, or missed systems entirely. Always start with the inventory and classification.

Mistake 2: Treating compliance as a legal-only project

AI Act compliance requires input from engineering (system architecture, logging, testing), data science (data governance, bias assessment, accuracy metrics), product (intended purpose, user instructions), and legal (risk management, conformity assessment). Siloing it in legal guarantees delays.

Mistake 3: Assuming third-party AI is someone else's problem

If you deploy a high-risk AI system built by a vendor, you have your own set of deployer obligations. "Our vendor is compliant" is necessary but not sufficient — you still need human oversight, log retention, affected-person notification, and potentially a FRIA.

Mistake 4: Waiting for harmonised standards

At the time of writing, the European standardisation organisations (CEN/CENELEC) have not yet published all harmonised standards for the AI Act. Waiting for them before starting compliance work means running out of time: the legal obligations apply on 2 August 2026 regardless of the status of the standards. Work from the requirements in Articles 8–15 directly.

Mistake 5: Confusing AI literacy with AI Act compliance

AI literacy (Article 4) is one obligation among many — and it is already in force. Teams that stop at "we did an AI training" and treat it as AI Act compliance miss the substantive obligations: risk management, documentation, conformity assessment, and monitoring.

Practical resource estimate

Activity | Typical effort | Dependencies
--- | --- | ---
AI systems inventory | 2–4 weeks | All departments
Risk classification | 1–2 weeks | Completed inventory
Risk management system | 4–8 weeks | Classification
Data governance documentation | 2–4 weeks | Data science + legal
Technical documentation (Annex IV) | 40–200 hours per system | Risk management, data governance
Quality management system | 4–8 weeks | Can run in parallel
Conformity assessment (self) | 2–4 weeks | Documentation complete
Conformity assessment (notified body) | 6–24 months | Documentation complete
Post-market monitoring setup | 2–4 weeks | System deployed
Total (self-assessment route) | 4–8 months |
Total (notified body route) | 8–18 months |

Start now

The gap between "aware" and "compliant" is larger than most teams expect. Technical documentation alone can take 40–80 hours per system. Conformity assessment through a notified body can take 6–24 months.

Do not wait for August. Start with your risk classification:

Run the free AI Act assessment

Then use the complete AI Act guide to drill into each article and annex referenced in this checklist.

Frequently asked questions

Does the EU AI Act apply to companies outside the EU?

Yes. The AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU, and to deployers located within the EU — regardless of where the provider is established (Article 2). It also applies to providers and deployers located outside the EU where the output produced by the AI system is used in the EU.

Is there a size exemption for startups and SMEs?

No blanket exemption. The substantive obligations (risk management, documentation, conformity assessment) apply regardless of company size. However, SMEs benefit from some procedural simplifications: simplified technical documentation forms (once published by the Commission), reduced conformity assessment fees from notified bodies, and priority access to regulatory sandboxes (Article 57). Penalty caps are also adjusted: for SMEs, each fine is capped at whichever is lower of the percentage of turnover and the fixed euro amount (Article 99(6)).

What happens if I miss the 2 August 2026 deadline?

Non-compliance with high-risk AI system obligations (Articles 8–15) carries fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher. Supplying incorrect, incomplete, or misleading information to authorities carries a further EUR 7.5 million or 1% of turnover. In practice, enforcement will likely focus first on high-profile cases and complaints, but the legal exposure begins on the deadline date. See the penalties and fines guide for the full breakdown.

Can I use my existing ISO 27001 or SOC 2 controls?

Partially. Existing controls for information security, quality management, and data governance provide a foundation — but the AI Act has specific requirements that go beyond general frameworks. For example, ISO 27001 addresses cybersecurity but does not cover AI-specific vulnerabilities (data poisoning, adversarial examples). SOC 2 covers availability and processing integrity but does not address risk management for fundamental rights impacts. Use existing controls as building blocks, not substitutes.

How does the AI Act interact with GDPR?

The two regulations apply simultaneously when AI systems process personal data — which covers most business AI tools. GDPR governs the data processing; the AI Act governs the AI system itself. Impact assessments can be combined (DPIA + FRIA), and documentation can be coordinated. Fines accumulate independently. See the AI Act vs GDPR comparison guide for the detailed breakdown.

What if my AI system is both a product safety component and an Annex III standalone system?

The more stringent route applies. If the system falls under both Annex I (product safety) and Annex III (standalone high-risk), the provider must meet the requirements under both pathways and follow the conformity assessment route that applies to the product safety legislation.

Legalithm is an AI-assisted compliance workflow tool — not legal advice. Final compliance decisions should be reviewed by qualified legal counsel.
