
EU AI Act Timeline: Key Dates and Deadlines

Complete EU AI Act implementation timeline from 2024 to 2027. Every enforcement date, compliance deadline, and what becomes mandatory when.

The EU AI Act (Regulation (EU) 2024/1689) does not switch on all at once. Instead, it phases in across four major enforcement waves stretching from August 2024 to August 2027. Some obligations are already enforceable right now. Others activate in a matter of months. Understanding which deadlines have already passed — and which are bearing down on you — is the first step toward compliance.

This reference covers every significant date in the AI Act's rollout — what each one triggers and what you should be doing about it.
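
The cadence is simple arithmetic: each wave applies from a fixed offset after entry into force. Here is a minimal sketch of that computation — the 6/12/24/36-month offsets come from Article 113, and the script assumes the third-party python-dateutil package:

```python
# Illustrative only: derive the AI Act's enforcement waves from the
# entry-into-force date. The 6/12/24/36-month offsets come from Article 113.
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

WAVES = {
    "Prohibited practices + AI literacy (Articles 4-5)": 6,
    "GPAI obligations + governance bodies (Articles 51-56)": 12,
    "High-risk, transparency, penalties (Chapter III, Article 50)": 24,
    "Annex I product AI + legacy GPAI models": 36,
}

for wave, months in WAVES.items():
    # Obligations apply from the day after the offset lands:
    # 1 Aug 2024 + 6 months = 1 Feb 2025, applicable from 2 Feb 2025.
    applies_from = ENTRY_INTO_FORCE + relativedelta(months=months, days=1)
    print(f"{applies_from.isoformat()}  {wave}")
```

Running it reproduces the four dates in the table below: 2025-02-02, 2025-08-02, 2026-08-02, and 2027-08-02.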

TL;DR — AI Act timeline essentials

  • 1 August 2024 — The Regulation entered into force. The 24-month countdown for most obligations began.
  • 2 February 2025 — Prohibited AI practices (Article 5) became enforceable and AI literacy (Article 4) obligations activated.
  • 2 August 2025 — General-purpose AI (GPAI) model obligations (Articles 51–56) took effect. National competent authorities and the AI Office became operational.
  • 2 August 2026 — The big deadline. High-risk AI system obligations, transparency requirements, penalties, and regulatory sandboxes all activate. This is the date most organisations are preparing for.
  • 2 August 2027 — Final wave. AI systems embedded in products covered by existing EU harmonised legislation (Annex I), and legacy GPAI models placed on the market before August 2025, must comply.

The complete timeline

Overview table

Date | What activates | Key articles
--- | --- | ---
1 Aug 2024 | Entry into force; transition periods begin | Regulation published in OJ
2 Feb 2025 | Prohibited practices; AI literacy | Articles 4, 5
2 Aug 2025 | GPAI obligations; AI Office; national authorities; GPAI Code of Practice | Articles 51–56, 64–68
2 Aug 2026 | High-risk AI obligations; transparency; penalties; regulatory sandboxes | Chapter III (Articles 6–49), Article 50, Articles 99–101
2 Aug 2027 | Annex I product AI; legacy GPAI models | Article 6(1), transitional provisions

Below is the detailed breakdown of each phase.

1 August 2024: Entry into force

Regulation (EU) 2024/1689 was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024. This date set the clock running on every transition period in the Act.

What happened:

  • The AI Act became binding EU law across all 27 Member States.
  • A 24-month general transition period began, giving organisations until 2 August 2026 to comply with the bulk of the Regulation.
  • Shorter transition periods (6 months and 12 months) began for prohibited practices, AI literacy, and GPAI obligations.
  • The European Commission began establishing the AI Office, the central EU body for GPAI oversight.
  • Member States started the process of designating national competent authorities and market surveillance authorities.

For a full introduction to the Regulation, see Understanding the EU AI Act.

2 February 2025: Prohibited practices + AI literacy

Six months after entry into force, the first enforcement wave landed. This was a hard stop — not a grace period.

What became enforceable:

Obligation | Article | What it requires
--- | --- | ---
Prohibited AI practices | Article 5 | Eight categories of AI practice are banned outright — including social scoring, real-time remote biometric identification in public spaces (with narrow exceptions), exploitation of vulnerable groups, subliminal manipulation, emotion recognition in workplaces and education (with exceptions), untargeted facial image scraping, and predictive policing based solely on profiling.
AI literacy | Article 4 | All providers and deployers must ensure their staff have a sufficient level of AI literacy proportionate to their role and AI exposure.

What companies should have done by this date:

  1. Audited their AI portfolio against the Article 5 prohibited list. Any system that falls within the eight banned categories must have been decommissioned, redesigned, or restricted.
  2. Launched an AI literacy programme. Not certification or formal exams — but demonstrable, proportionate effort.
  3. Documented both steps. National authorities can request evidence at any time.

Penalties for prohibited practices are the harshest in the Act: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. For more on the penalty framework, see EU AI Act Penalties and Fines Explained.

Real-world example: A European HR-tech company was using a tool that scored employee emotional states during performance reviews to flag "disengagement risk." Under Article 5(1)(f), emotion recognition in the workplace is prohibited except for safety or medical purposes. The company had to disable the feature entirely before 2 February 2025.

2 August 2025: GPAI obligations + governance bodies

Twelve months after entry into force, the second enforcement wave activated — this time targeting general-purpose AI models and standing up the EU's governance infrastructure.

What became enforceable:

Obligation | Article(s) | What it requires
--- | --- | ---
GPAI model transparency | Articles 51, 53 | All GPAI model providers must publish a sufficiently detailed summary of training content, comply with EU copyright law, and draw up technical documentation.
GPAI with systemic risk | Articles 51, 55 | GPAI models with systemic risk (trained with >10²⁵ FLOPs or designated by the Commission) must additionally perform model evaluations, assess and mitigate systemic risks, track and report serious incidents, and ensure adequate cybersecurity protections.
GPAI Code of Practice | Article 56 | The AI Office published a GPAI Code of Practice providing detailed guidance on how to meet Articles 51–55. Compliance with the Code creates a presumption of conformity.
AI Office operational | Articles 64–68 | The AI Office (within the European Commission) became the primary supervisor for GPAI models, with powers to request information, conduct evaluations, and issue binding measures.
National competent authorities | Article 70 | Each Member State was required to designate at least one national competent authority and notify the Commission.

For a deep dive into GPAI obligations, see GPAI: General-Purpose AI Model Obligations.

Real-world example: An open-source AI lab releasing a large language model in September 2025 needed to publish a training data summary (Article 53(1)(d)), provide technical documentation to downstream providers (Article 53(1)(b)), and put in place a copyright compliance policy. Because the model exceeded 10²⁵ FLOPs of training compute, the lab also had to perform adversarial testing and report serious incidents to the AI Office.
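
The 10²⁵ FLOPs line is checkable with back-of-the-envelope arithmetic. A common rule of thumb for dense transformer training puts total compute at roughly 6 × parameters × training tokens — a community heuristic, not a method the Act prescribes. A sketch with hypothetical model figures:

```python
# Rough check against the Article 51 systemic-risk presumption threshold
# (10^25 training FLOPs). The 6 * N * D approximation is a rule of thumb
# for dense transformers, not a calculation mandated by the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"~{flops:.1e} FLOPs; threshold crossed: {flops > SYSTEMIC_RISK_FLOPS}")
# ~3.6e+25 FLOPs; threshold crossed: True
```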

2 August 2026: The big deadline

This is the date the majority of the AI Act's substantive obligations take effect — and the date most companies are (or should be) racing toward.

What becomes enforceable:

Obligation | Article(s) | Summary
--- | --- | ---
High-risk AI classification | Article 6, Annex III | The framework for determining whether an AI system is high-risk, based on its use in areas listed in Annex III (biometrics, critical infrastructure, education, employment, law enforcement, migration, justice, etc.).
Risk management system | Article 9 | Providers must establish, implement, document, and maintain a continuous risk management system throughout the AI system's lifecycle.
Data governance | Article 10 | Training, validation, and testing datasets must meet quality criteria. Bias examination, gap identification, and data governance measures required.
Technical documentation | Article 11 | Full technical documentation per Annex IV — covering system description, design choices, data, performance metrics, risk management, and more.
Record-keeping and logging | Articles 12, 19 | Automatic logging of events during operation. Logs retained for periods appropriate to the system's purpose.
Transparency to deployers | Article 13 | High-risk systems must be designed to be sufficiently transparent for deployers to interpret output and use the system appropriately.
Human oversight | Article 14 | Systems must be designed to allow effective human oversight, including the ability to understand, monitor, and override the AI.
Accuracy, robustness, cybersecurity | Article 15 | High-risk AI must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout its lifecycle.
Quality management system | Article 17 | Providers must establish a quality management system covering design, development, testing, data management, risk management, post-market monitoring, and more.
Conformity assessment | Article 43 | Before placing a high-risk system on the market, providers must undergo conformity assessment — either internal (self-assessment) or through a notified body, depending on the system type.
EU declaration of conformity | Article 47 | Providers must draw up a written, machine-readable EU declaration of conformity and keep it for 10 years.
CE marking | Article 48 | High-risk AI systems must bear a CE marking indicating conformity.
EU database registration | Article 49 | Providers and deployers must register high-risk AI systems in the EU database before placing them on the market or putting them into service.
Transparency obligations (limited risk) | Article 50 | AI systems interacting with people must disclose they are AI. Deepfake content must be labelled. Emotion recognition and biometric categorisation systems must notify the subject. AI-generated text published to inform the public must be labelled.
Deployer obligations | Articles 26–27 | Deployers must use systems in accordance with instructions, assign human oversight, retain logs, notify providers of issues, and — for certain public-body uses — conduct fundamental rights impact assessments.
Penalties enforceable | Article 99 | National authorities can impose fines: up to EUR 35M / 7% turnover for prohibited practices, EUR 15M / 3% for high-risk violations, EUR 7.5M / 1% for supplying incorrect information.
Regulatory sandboxes | Article 57 | Each Member State must have established at least one AI regulatory sandbox.

For a complete step-by-step compliance plan, see EU AI Act Compliance Checklist 2026. Not sure whether your system qualifies as high-risk? Start with Is My AI System High-Risk? A Classification Guide.

2 August 2027: Annex I products + legacy GPAI

The final enforcement wave extends the AI Act's reach to two remaining categories.

What becomes enforceable:

Obligation | Scope | Details
--- | --- | ---
Annex I product AI | AI systems that are safety components of products already regulated under existing EU harmonised legislation (Annex I) | Products covered by legislation such as the Machinery Regulation, Medical Devices Regulation, Toy Safety Directive, Radio Equipment Directive, and others listed in Annex I. These AI systems must comply with the high-risk requirements, but get an extra year because they are already subject to sectoral conformity assessment procedures that need time to integrate AI Act requirements.
Legacy GPAI models | GPAI models placed on the market before 2 August 2025 | These models benefit from a transitional grace period. By 2 August 2027, they must fully comply with Articles 51–56, including technical documentation and training data summaries.

Why the extra year? For Annex I products, the delay lets standards bodies and notified bodies integrate AI-specific requirements into existing product safety frameworks. For legacy GPAI models, the grace period recognises that retroactively documenting training data and model architecture for models already in production is a non-trivial undertaking.

What's already enforceable right now (April 2026)

As of this writing, two enforcement waves have already passed. Here is a concrete summary of what is currently in force:

Since 2 February 2025:

  • All eight prohibited AI practices under Article 5 are banned. Violators face fines of up to EUR 35 million or 7% of global turnover.
  • The AI literacy obligation under Article 4 is active. Every organisation that provides or deploys AI systems must ensure relevant staff have sufficient AI literacy.

Since 2 August 2025:

  • GPAI model providers must comply with transparency, documentation, and copyright obligations (Article 53).
  • GPAI models with systemic risk must comply with enhanced obligations including model evaluations, risk mitigation, incident reporting, and cybersecurity measures (Article 55).
  • The AI Office is operational and can request information, conduct evaluations, and enforce GPAI rules.
  • National competent authorities have been designated in each Member State.
  • The GPAI Code of Practice has been published and serves as the benchmark for demonstrating compliance.

If you are deploying any AI system, you should have already confirmed it does not fall under a prohibited category and addressed AI literacy. If you provide a GPAI model on the EU market, your technical documentation and training data summary should already be in place. Non-compliance on these points is already sanctionable.

What's coming in 4 months (August 2026)

The 2 August 2026 deadline is the largest single enforcement event in the AI Act. Here is a focused countdown of what activates.

High-risk AI obligations (providers)

From 2 August 2026, providers of high-risk AI systems under Article 6 and Annex III must have in place:

  1. Risk management system — iterative, continuous, documented (Article 9).
  2. Data governance measures — covering training, validation, and testing data quality (Article 10).
  3. Technical documentation — complete per Annex IV (Article 11).
  4. Automatic logging — event recording during operation (Article 12).
  5. Transparency information — clear instructions for deployers (Article 13).
  6. Human oversight design — built-in ability for human intervention (Article 14).
  7. Accuracy, robustness, cybersecurity — throughout the lifecycle (Article 15).
  8. Quality management system — comprehensive organisational controls (Article 17).
  9. Conformity assessment — self-assessment or notified body (Article 43).
  10. EU declaration of conformity and CE marking (Articles 47–48).
  11. EU database registration (Article 49).
  12. Post-market monitoring — systematic process to collect and analyse performance data (Article 72).

High-risk AI obligations (deployers)

Deployers — organisations that use high-risk AI systems under their authority — must:

  1. Use systems in accordance with the provider's instructions of use.
  2. Assign competent natural persons to human oversight.
  3. Ensure input data is relevant and representative for the system's intended purpose.
  4. Monitor operations and report malfunctions or serious incidents to the provider.
  5. Retain logs automatically generated by the system for at least 6 months (or longer if required by other EU or national law) — see the retention sketch after this list.
  6. Conduct a fundamental rights impact assessment (FRIA) where required (Article 27) — applicable to public bodies and certain private entities in specified areas.
  7. Inform affected natural persons that they are subject to a high-risk AI system.
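
Point 5 is mechanical enough to encode. A minimal sketch of a retention check, assuming a simple six-month floor and UTC timestamps — real policies must also honour longer periods under other EU or national law:

```python
# Minimal sketch (not legal advice) of the Article 26(6) log-retention
# floor for deployers: keep automatically generated logs at least six
# months, longer where other EU or national law requires it.
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=184)  # ~6 months, rounded up, never down

def may_delete(log_created_at: datetime, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MIN_RETENTION

# A log entry written on 2 August 2026 becomes deletable in February 2027
# at the earliest (assuming no longer sectoral retention period applies).
entry = datetime(2026, 8, 2, tzinfo=timezone.utc)
print(may_delete(entry, now=datetime(2027, 2, 2, tzinfo=timezone.utc)))  # True
```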

For a detailed comparison of provider vs. deployer obligations, see AI Act: Provider vs Deployer Obligations.

Transparency obligations (limited risk)

Article 50 applies to a broader category of AI systems beyond just high-risk:

  • AI systems that interact with people (chatbots, virtual assistants) must disclose that the person is interacting with AI — unless this is obvious from the circumstances.
  • Emotion recognition and biometric categorisation systems must inform the subjects.
  • Deepfake content (AI-generated or manipulated images, audio, video) must be labelled as artificially generated or manipulated.
  • AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as AI-generated.

For a full breakdown, see AI Act Transparency Obligations: Article 50 and Deepfake Labelling.
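
As a concrete illustration of the first and third duties above, here is a minimal sketch — the disclosure wording and field names are our own; Article 50 mandates the outcome, not a wire format:

```python
# Illustrative sketch of two Article 50 duties: disclosing that a user is
# interacting with AI, and marking generated media. The disclosure wording
# and field names are our own -- the Act prescribes neither.
from dataclasses import dataclass

AI_DISCLOSURE = "You are interacting with an AI system, not a human."

@dataclass
class GeneratedMedia:
    content_uri: str
    ai_generated: bool = True            # machine-readable marker
    label: str = "AI-generated content"  # human-readable label

def first_reply(model_reply: str) -> str:
    # Disclose up front unless it is obvious from the circumstances
    # (Article 50(1)); when in doubt, disclose.
    return f"{AI_DISCLOSURE}\n\n{model_reply}"

print(first_reply("Here is the delivery status you asked about..."))
print(GeneratedMedia(content_uri="https://example.com/video.mp4"))
```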

Penalties become enforceable

Article 99 penalties apply from 2 August 2026:

Violation type | Maximum fine
--- | ---
Prohibited practices (Article 5) | EUR 35 million or 7% of global annual turnover
High-risk system and other operator obligations | EUR 15 million or 3% of global annual turnover
Supplying incorrect, incomplete, or misleading information | EUR 7.5 million or 1% of global annual turnover

SMEs and startups benefit from proportionality provisions — see our guide for startups and SMEs.
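
Each ceiling is "whichever is higher" of the fixed amount and the turnover share. The arithmetic, illustratively:

```python
# Fine ceilings under Article 99 are the higher of a fixed amount and a
# share of total worldwide annual turnover (for SMEs, the lower applies).
def max_fine(fixed_eur: float, turnover_share: float, turnover_eur: float) -> float:
    return max(fixed_eur, turnover_share * turnover_eur)

# Hypothetical provider with EUR 2bn global turnover, prohibited-practice tier:
print(f"EUR {max_fine(35e6, 0.07, 2e9):,.0f}")  # EUR 140,000,000
```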

Regulatory sandboxes

By 2 August 2026, every Member State must have established at least one AI regulatory sandbox — a controlled environment for testing AI systems under regulatory supervision before market placement.

What gets one more year (August 2027)

Two categories receive an extension until 2 August 2027:

Annex I product AI systems

AI systems that serve as safety components of products already regulated under existing EU harmonised legislation listed in Annex I — including the Machinery Regulation, Medical Devices Regulation, Civil Aviation Safety Regulation, Motor Vehicles Type-Approval Regulation, Toy Safety Directive, Radio Equipment Directive, and others. These systems must meet the same Chapter III high-risk requirements, but the extra year allows sectoral conformity assessment procedures to integrate the AI Act.

Legacy GPAI models

GPAI models placed on the market before 2 August 2025 have until 2 August 2027 to achieve full compliance with Articles 51–56. After 2 August 2027, no transitional provisions remain — every obligation in the AI Act is fully enforceable.

Key dates for specific roles

Providers of high-risk AI systems

Date | Obligation
--- | ---
2 Feb 2025 | Confirm no prohibited practices in your portfolio; start AI literacy programmes
2 Aug 2025 | If your system incorporates a GPAI model, ensure your GPAI provider is compliant
2 Aug 2026 | Full Chapter III compliance: risk management, documentation, conformity assessment, CE marking, database registration, post-market monitoring
2 Aug 2027 | If your system is a safety component of an Annex I product, full compliance by this date

Deployers of high-risk AI systems

Date | Obligation
--- | ---
2 Feb 2025 | AI literacy for all staff involved in AI system operation and oversight
2 Aug 2026 | Use in accordance with instructions, human oversight assignments, log retention, affected-person notifications, FRIA (if applicable), monitoring and reporting

GPAI model providers

Date | Obligation
--- | ---
2 Aug 2025 | Technical documentation, training data summary, copyright compliance, downstream provider information. Additional obligations for systemic risk models.
2 Aug 2027 | Legacy models (placed on market before Aug 2025) must achieve full compliance

Providers of limited-risk AI (chatbots, deepfakes, emotion recognition)

Date | Obligation
--- | ---
2 Feb 2025 | AI literacy obligations active
2 Aug 2026 | Article 50 transparency obligations: AI disclosure for chatbots, deepfake labelling, emotion recognition notification

Real-world timeline planning

Scenario 1: Discovering a high-risk system in April 2026

Real-world example: A mid-sized insurance company conducts its first AI systems inventory in April 2026 and discovers its automated claims triage system qualifies as high-risk under Annex III, point 5(c) (risk assessment in relation to life and health insurance).

The company has four months until the 2 August 2026 deadline. Here is what they face:

  1. Weeks 1–2: Complete risk classification and confirm high-risk status. Use the AI Act Assessment tool to validate.
  2. Weeks 3–6: Begin building the risk management system, starting with a risk identification and known limitation analysis. Simultaneously, start compiling technical documentation per Annex IV.
  3. Weeks 7–10: Implement human oversight mechanisms (override capability, monitoring dashboards). Conduct data governance review of training data. Stand up automatic logging.
  4. Weeks 11–14: Run conformity assessment (likely self-assessment for this use case). Prepare the EU declaration of conformity. Register in the EU database.
  5. Weeks 15–16: Activate post-market monitoring. Document everything. Brief the board.

Four months is the minimum viable timeline for a single high-risk system with a cooperative team. Multiple systems or legacy architectures will extend it. The lesson: do not wait until April 2026 to inventory your AI systems.

Scenario 2: A GPAI provider that launched before August 2025

Real-world example: A European AI startup released a foundation model in March 2025 — five months before the 2 August 2025 GPAI deadline. Because the model was placed on the market before the cutoff, it benefits from the legacy GPAI grace period and has until 2 August 2027 to achieve full compliance with Articles 51–56.

However, the grace period applies only to the existing model version. If the startup substantially modifies the model or releases a new version after 2 August 2025, that new version must comply immediately.

The startup's prudent approach:

  • Use the grace period to build technical documentation retroactively, starting with the model card, architecture description, and performance benchmarks.
  • Prioritise the training data summary — the most labour-intensive requirement — tracing data sources, licences, and opt-out compliance.
  • Treat any new model version as a fresh release requiring day-one compliance.
  • Monitor the GPAI Code of Practice for sector-specific guidance.

For a full overview of GPAI obligations, including the systemic risk threshold, see our dedicated guide.

Scenario 3: A deployer using third-party HR AI

Real-world example: A multinational retailer uses a third-party AI recruitment platform to screen CVs and rank candidates. The AI is provided by a SaaS vendor. The retailer is the deployer.

AI used for recruitment falls under Annex III, point 4(a) — recruitment and selection of natural persons, including filtering applications and evaluating candidates. It is high-risk. By 2 August 2026, the retailer must:

  1. Verify provider compliance. Obtain evidence of conformity assessment, EU declaration of conformity, and CE marking. If the vendor cannot demonstrate compliance, switch providers.
  2. Assign human oversight. Designate named individuals with authority to override the AI's recommendations.
  3. Ensure input data relevance. Confirm data fed to the system is representative and does not introduce bias.
  4. Retain logs. Keep the system's automatic logs for at least six months.
  5. Inform candidates. Notify all applicants that AI is used in the recruitment process.
  6. Conduct a FRIA if applicable under Article 27 — bodies governed by public law, private entities providing public services, and certain banking and insurance deployers.

The AI Act does not let you outsource compliance by outsourcing the AI. Deployer obligations exist independently. See Provider vs Deployer Obligations for the full breakdown.

How to prioritise your remaining time

You have approximately four months until 2 August 2026. Here is a prioritised action plan.

Month 1 (April): Assess and classify

  • Inventory every AI system your organisation provides, deploys, or has in development — including tools embedded in third-party software.
  • Classify each system: prohibited (Article 5), high-risk (Article 6 / Annex III), limited risk (Article 50), or minimal risk. Use the AI Act Assessment tool to validate.
  • Determine your role for each system — provider or deployer — since the obligations differ substantially.
  • Record the results in a structured inventory (a sketch follows below) so the later months work from a single source of truth.
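
A hypothetical inventory record for this exercise — the field names are ours, not the Act's:

```python
# Hypothetical inventory schema for Month 1. Field names are illustrative,
# not taken from the Regulation.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

class RiskClass(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK = "high-risk (Article 6 / Annex III)"
    LIMITED = "limited risk (Article 50)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    role: Role
    intended_purpose: str
    risk_class: RiskClass
    annex_iii_area: str | None = None   # e.g. "point 4(a): recruitment"
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV screening platform",
        role=Role.DEPLOYER,
        intended_purpose="Rank and filter job applicants",
        risk_class=RiskClass.HIGH_RISK,
        annex_iii_area="point 4(a): recruitment",
        notes=["Request conformity evidence from vendor"],
    ),
]
print(f"{len(inventory)} system(s) inventoried")
```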

Month 2 (May): Build core compliance infrastructure

  • For high-risk providers: Begin the risk management system and technical documentation — the two most time-consuming requirements.
  • For high-risk deployers: Request compliance evidence from AI providers — EU declaration of conformity, CE marking, instructions of use.
  • For all organisations: Establish or adapt a quality management system to cover AI-specific requirements.
  • Close AI literacy gaps if any remain from the February 2025 obligation.

Month 3 (June): Implement and test

  • Implement human oversight mechanisms — override capabilities, monitoring interfaces, alert systems.
  • Activate automatic logging. Confirm log retention policies (minimum 6 months).
  • Conduct conformity assessment. For most Annex III systems, internal self-assessment suffices; for biometric systems (Annex III, point 1), a notified body may be required unless harmonised standards are applied in full. See Conformity Assessment: Self-Assessment vs Notified Body.
  • Prepare Article 50 transparency measures — chatbot disclosures, deepfake labels, emotion recognition notifications.

Month 4 (July): Finalise, register, and monitor

  • Prepare the EU declaration of conformity (Article 47) and affix CE marking (Article 48) — a machine-readable sketch follows this list.
  • Register all high-risk systems in the EU database (Article 49).
  • Activate post-market monitoring — data collection, performance tracking, incident reporting.
  • Brief senior leadership. Compliance is an organisational commitment, not a departmental project.
  • Document everything. Regulators ask for evidence of process, not just outcomes.
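
On the first bullet: Article 47 requires the declaration to be written and machine-readable but prescribes no file format. A hypothetical JSON-style structure mirroring the content Annex V calls for — the schema and field names are ours:

```python
# Hypothetical machine-readable EU declaration of conformity. Article 47
# requires written, machine-readable form; Annex V lists the content.
# The schema below is illustrative -- no official format exists.
import json

declaration = {
    "ai_system": {"name": "ClaimsTriage", "version": "2.1"},
    "provider": {"name": "Example Insurer BV", "address": "Amsterdam, NL"},
    "sole_responsibility": True,
    "statement": "This AI system is in conformity with Regulation (EU) 2024/1689.",
    "harmonised_standards": [],  # references, once standards are cited in the OJ
    "notified_body": None,       # only where Article 43 requires third-party assessment
    "place_date": "Amsterdam, 2026-07-15",
    "signatory": {"name": "Jane Doe", "function": "Head of Compliance"},
}

# Keep the declaration for 10 years after market placement (Article 47).
print(json.dumps(declaration, indent=2))
```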

Frequently Asked Questions

Has the EU AI Act already entered into force?

Yes. It entered into force on 1 August 2024. Prohibited practices and AI literacy took effect on 2 February 2025. GPAI obligations on 2 August 2025. The most significant wave — high-risk AI, transparency, and penalties — takes effect on 2 August 2026. See Understanding the EU AI Act.

What is the most important deadline for most companies?

2 August 2026. This is when the high-risk AI system obligations under Chapter III become enforceable, along with Article 50 transparency obligations, full penalty powers, and deployer obligations. Unless your only AI systems are GPAI models or Annex I product components, August 2026 is your primary compliance target. Use the compliance checklist to plan.

Do the deadlines apply to companies outside the EU?

Yes. The AI Act applies to any organisation that places an AI system on the EU market or puts one into service in the EU, regardless of where the organisation is established. A US company deploying an AI recruitment tool to screen candidates for EU-based roles is subject to the Act. The Act has extraterritorial reach similar to the GDPR.

What happens if I miss the August 2026 deadline?

Non-compliance after 2 August 2026 exposes your organisation to enforcement action by national market surveillance authorities. Penalties range from EUR 7.5 million to EUR 35 million (or 1–7% of global annual turnover). Authorities may also order withdrawal or recall of your AI system. Beyond fines, non-compliance creates reputational risk, procurement disqualification, and potential civil liability.

Is there a grace period for startups and SMEs?

The deadlines are the same for all organisations. However, the AI Act includes proportionality provisions for SMEs and startups. Article 99 requires that fines be proportionate, regulatory sandboxes must give priority access to SMEs, and conformity assessment fees must reflect the size of smaller operators. The practical burden is lighter, but the legal deadlines are identical. For tailored guidance, see our AI Act Compliance Guide for Startups and SMEs.

Where can I check my AI system's risk classification quickly?

Use Legalithm's free AI Act Assessment tool. It walks you through the criteria from Article 6 and Annex III in under five minutes. For a manual walkthrough, see Is My AI System High-Risk? A Classification Guide.

Conclusion

Here is the bottom line as of April 2026:

  • Prohibited practices and AI literacy — already enforceable. If you haven't addressed these, you're already exposed.
  • GPAI obligations — already enforceable. GPAI model providers should be in compliance now.
  • High-risk AI, transparency, and penalties — enforceable in four months (2 August 2026). This is where most organisations should focus their remaining effort.
  • Annex I product AI and legacy GPAI — you have until 2 August 2027, but don't treat that extra year as slack time.

If you're starting now, the time is short but sufficient — provided you focus on the highest-priority obligations and use every available resource to accelerate. Start with the AI Act Assessment to classify your systems, then follow the Compliance Checklist to build your action plan.

