Delegated & Implementing Acts
Track all adopted and expected acts, codes of practice, standards, and guidance under the AI Act.
Last updated: 2026-04-11
Adopted Acts
Guidelines on Prohibited AI Practices
Legal basis: Article 5 | 4 Feb 2025
The Commission published the first official guidelines on the interpretation and application of the Article 5 prohibited AI practices, timed to coincide with the 2 February 2025 date from which the prohibitions apply. The guidelines clarify the scope of each prohibition (subliminal manipulation, exploitation of vulnerabilities, social scoring, predicting criminal offences based solely on profiling, untargeted facial image scraping, emotion recognition in the workplace and education, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces), with worked examples, the narrow law-enforcement exceptions, and practical criteria for determining whether a system falls within scope.
Guidelines on the Definition of an AI System
Legal basis: Article 3 | 6 Feb 2025
The Commission published guidelines clarifying the Article 3(1) definition of an AI system and how it differs from simpler software. The guidelines provide a decision tree for determining whether a system qualifies as AI under the Regulation, covering the key elements: machine-based operation, varying levels of autonomy, possible adaptiveness after deployment (which may be present but is not required), the capability to infer, and the generation of outputs (predictions, decisions, recommendations, content) that can influence physical or virtual environments. They also address edge cases such as traditional rule-based systems, basic statistical models, and conventional database queries.
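The elements above can be sketched as a simplified checklist. This is an illustrative sketch only: the real assessment is a legal judgement, not a boolean function, and the field names here are hypothetical, not drawn from the guidelines.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical profile mirroring the Article 3(1) definition elements."""
    machine_based: bool
    operates_with_autonomy: bool   # varying levels of autonomy suffice
    infers_from_inputs: bool       # inference, beyond fixed human-defined rules
    generates_outputs: bool        # predictions, decisions, recommendations, content
    influences_environment: bool   # physical or virtual environments

def may_qualify_as_ai(p: SystemProfile) -> bool:
    # Adaptiveness after deployment is possible but not required under
    # the guidelines, so it is deliberately omitted from the conditions.
    return all([p.machine_based, p.operates_with_autonomy,
                p.infers_from_inputs, p.generates_outputs,
                p.influences_environment])

# Edge case from the guidelines: a conventional database query engine
# produces outputs but does not infer, so it falls out of scope.
print(may_qualify_as_ai(SystemProfile(True, False, False, True, True)))  # False
```

The omission of adaptiveness reflects the guidelines' reading that a system need not adapt after deployment to meet the definition.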
GPAI Code of Practice
Legal basis: Article 56 | 2 Apr 2025
The first GPAI Code of Practice was finalised and published, covering transparency, copyright compliance, technical documentation, and risk management for general-purpose AI model providers. Providers may rely on the Code to demonstrate compliance with Chapter V obligations. The AI Office facilitated the drafting process with input from providers, downstream deployers, civil society, and the scientific community.
Standardisation Request to CEN/CENELEC
Legal basis: Article 40 | 1 May 2025
The Commission issued a standardisation request to CEN and CENELEC to develop harmonised standards supporting the AI Act. CEN/CENELEC JTC 21 (Artificial Intelligence) is leading the work, with standards expected to cover risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity for high-risk AI systems. Once references to these standards are published in the Official Journal, compliance with them will create a presumption of conformity with the corresponding requirements.
SME Compliance Templates & Tools
Legal basis: Article 62 | 1 Nov 2025
The AI Office has published templates, checklists, and simplified guidance tools to help SMEs and start-ups comply with the AI Act. These include pre-filled conformity documentation templates, risk assessment frameworks adapted for smaller organisations, and a step-by-step compliance roadmap. They are available on the Commission's AI Act implementation portal.
Expected Acts
These acts are expected but have not yet been formally adopted.
Common Specifications (Fallback)
Legal basis: Article 41
If harmonised standards are not adopted or are insufficient to cover the AI Act requirements, the Commission may adopt common specifications by implementing act. These would provide an alternative path to demonstrate conformity. As of April 2026, the standardisation process is ongoing and no common specifications have been adopted yet.
AI Regulatory Sandbox Implementing Acts
Legal basis: Article 58
The Commission may adopt implementing acts on detailed arrangements for regulatory sandboxes. Member States must establish at least one sandbox by 2 August 2026. The Commission is expected to provide guidance on harmonised sandbox procedures to ensure consistent application across Member States.
Annex III High-Risk List Updates
Legal basis: Article 7
The Commission is empowered to adopt delegated acts to amend the Annex III list of high-risk AI system areas. Updates must follow the criteria in Article 7(2): severity of harm, frequency of use, extent of dependency, vulnerability of affected persons, and reversibility of outcomes. The AI Board must be consulted. No amendments have been adopted as of April 2026.
Serious Incident Reporting Template
Legal basis: Article 73
The Commission is expected to adopt an implementing act establishing a standardised template for serious incident reporting by providers under Article 73. The template would harmonise the information to be provided across Member States.
EU Database for High-Risk AI — Functional Specifications
Legal basis: Article 49
The Commission is expected to adopt an implementing act specifying the functional design and technical specifications of the EU database for high-risk AI systems (Article 49). The database will contain registration data from providers and deployers, enabling public transparency about high-risk AI systems on the Union market. The database section for law enforcement is non-public and restricted to market surveillance authorities.
Post-Market Monitoring Plan Template
Legal basis: Article 72
The Commission may adopt an implementing act establishing a template for the post-market monitoring plan that providers of high-risk AI systems must draw up under Article 72. The template would harmonise the structure, data collection methodology, and reporting format across all Member States, complementing the separate serious incident reporting template under Article 73.
GPAI Model Technical Documentation Template
Legal basis: Article 53
The Commission is expected to adopt an implementing act providing a standardised template for the technical documentation that general-purpose AI model providers must prepare under Article 53, as detailed in Annex XI. The template would cover model architecture, training methodology, compute resources, data governance measures, evaluation results, known limitations, and downstream integration guidance.
Annex I Union Harmonisation Legislation Updates
Legal basis: Article 97
The Commission is empowered under Article 97 to adopt delegated acts to update Annex I Section A (Union harmonisation legislation for products with AI safety components) and Annex I Section B (other Union harmonisation legislation). This ensures the AI Act stays aligned with evolving sectoral product legislation, such as updates to the Medical Devices Regulation, Machinery Regulation, or other safety directives. No amendments have been adopted as of April 2026.
GPAI Systemic Risk Classification Thresholds
Legal basis: Article 51
The Commission may adopt delegated acts to update the thresholds and criteria for classifying GPAI models as posing systemic risk (currently set at 10^25 FLOPs of cumulative training compute). The scientific panel (Article 68) and the AI Office will advise on adjustments as AI capabilities evolve.