
Annex XI: Technical Documentation for Providers of General-Purpose AI Models

In effect since 2 Aug 2025 · 11 min read · EUR-Lex verified Apr 2026

Annex XI specifies the minimum technical documentation that every provider of a general-purpose AI (GPAI) model must prepare under Article 53(1), point (a). Section 1 applies to all GPAI model providers (scaled to model size and risk profile); Section 2 adds requirements for models classified with systemic risk under Article 51. Together, these sections turn Chapter V obligations into auditable artefacts the AI Office can request. Always confirm paragraph-level wording on EUR-Lex.

Who does this apply to?

  • Providers of GPAI models placing them on the Union market or making them available for integration into downstream AI systems
  • Providers of GPAI models classified with systemic risk under Article 51 (Section 2 extras)
  • Engineering, documentation, and compliance teams assembling model cards, data cards, and training reports to meet Annex XI headings
  • Authorised representatives acting on behalf of third-country GPAI providers under Article 54

Scenarios

A foundation model provider publishes a model card covering architecture, parameter count, modalities, and training compute, but omits energy consumption estimates and data provenance.

Incomplete against Annex XI Section 1: energy consumption (point 2(e)) and data information (point 2(c)) are mandatory elements.
Ref. Annex XI, Section 1, points 1–2

A GPAI model is designated with systemic risk; the provider prepares red-teaming reports and adversarial evaluation results.

Aligned with Annex XI Section 2 (evaluation strategies and adversarial testing) on top of Section 1 baseline.
Ref. Annex XI, Section 2, points 1–2

An open-weights model is released with a brief README and no structured documentation mapping Annex XI headings.

Open-source release does not exempt the provider from Annex XI; documentation must still be prepared. Limited exceptions exist under Article 53(2) for free and open-source models that are not systemic risk, but the conditions are strict.
Ref. Art. 53(2) + Annex XI

What Annex XI does (in plain terms)

Annex XI is the checklist behind Article 53(1)(a): it converts the abstract "draw up technical documentation" duty into concrete headings every GPAI model provider must fill. Think of it as the Annex IV equivalent for GPAI models (Annex IV covers high-risk AI systems under Article 11; Annex XI covers GPAI models under Article 53).

The annex has two sections:

  • Section 1 — baseline documentation for every GPAI model provider, scaled to model size and risk.
  • Section 2 — additional documentation only for GPAI models designated with systemic risk under Article 51.

Section 1 — Information required from all providers

Point 1: General description of the GPAI model including:

  • (a) Tasks the model is intended to perform and the type and nature of AI systems it can be integrated into
  • (b) Applicable acceptable use policies
  • (c) Date of release and methods of distribution
  • (d) Architecture and number of parameters
  • (e) Modality (e.g. text, image) and format of inputs and outputs
  • (f) The licence

Point 2: Detailed description of the development process including:

  • (a) Technical means (e.g. instructions, infrastructure, tools) required for integration into AI systems
  • (b) Design specifications and training process: methodologies, techniques, key design choices, rationale, assumptions, optimisation targets, and parameter relevance
  • (c) Data used for training, testing, and validation (where applicable): type, provenance, curation methodologies (cleaning, filtering, etc.), number of data points, scope, main characteristics, sourcing and selection methods, and measures to detect unsuitability and identifiable biases
  • (d) Computational resources used to train the model (e.g. number of floating-point operations), training time, and other relevant training details
  • (e) Known or estimated energy consumption of the model (where unknown, may be based on information about computational resources used)

All Section 1 information must be appropriate to the size and risk profile of the model—proportionality is built into the text.
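As a rough illustration (not a method prescribed by the Act), the "known or estimated" energy figure in point 2(e) can be derived from training compute when measured energy is unavailable. The hardware efficiency and data-centre overhead figures below are assumptions you would need to justify and document:

```python
# Illustrative back-of-the-envelope energy estimate from training compute.
# The sustained-efficiency figure and PUE below are assumptions, not
# values from Annex XI; the Act only asks for a known or estimated figure.

def estimated_energy_kwh(
    training_flops: float,
    flops_per_watt: float = 1e12,  # assumed sustained hardware efficiency (FLOP/s per W)
    pue: float = 1.2,              # assumed data-centre power usage effectiveness
) -> float:
    """Estimate training energy in kWh from total training FLOPs."""
    joules = training_flops / flops_per_watt * pue  # FLOPs / (FLOP/s per W) = joules
    return joules / 3.6e6                           # 1 kWh = 3.6e6 J

# Example: a hypothetical 1e24-FLOP training run at the assumed efficiency
print(round(estimated_energy_kwh(1e24)), "kWh")
```

Whatever method you use, record the assumptions alongside the estimate, since the proportionality clause still applies.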

Section 2 — Additional requirements for systemic-risk models

GPAI models designated with systemic risk under Article 51 must also provide:

1. A detailed description of evaluation strategies, including evaluation results, based on available public evaluation protocols and tools or other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics, and the methodology for identifying limitations.

2. Where applicable, a detailed description of measures for internal and/or external adversarial testing (e.g. red teaming) and model adaptations, including alignment and fine-tuning.

3. Where applicable, a detailed description of the system architecture, explaining how software components build or feed into each other and integrate into the overall processing.

Section 2 is in addition to Section 1—systemic-risk providers must satisfy both sections.

How Annex XI connects to the rest of the Act

  • Article 53(1)(a) — The direct legal hook: providers must prepare documentation "as referred to in Annex XI."
  • Article 51 — Systemic risk classification triggers Section 2 extras.
  • Article 52 — Procedure for systemic-risk designation; once designated, Section 2 kicks in.
  • Article 55 — Systemic-risk obligations reference the evaluation and adversarial testing artefacts that Section 2 documents.
  • Article 56 — Codes of practice can inform how evaluation strategies and metrics in Section 2 are shaped.
  • Article 54 — Authorised representatives may hold and transmit Annex XI documentation on behalf of third-country providers.
  • Annex XII — Transparency information that GPAI providers must make publicly available (complementary to the private-facing Annex XI file).
  • Annex XIII — Systemic risk criteria (the thresholds that determine whether Section 2 applies).
  • Annex IV — Technical documentation for high-risk AI systems (the Annex XI counterpart under Article 11; do not confuse the two — Annex IV covers downstream AI systems, Annex XI covers upstream GPAI models).
  • Article 113 — Application dates (Chapter V applied from 2 August 2025; transitional rules for existing models until 2 August 2026).

Annex XI vs Annex IV (common confusion)

Teams sometimes confuse Annex XI and Annex IV. The distinction:

  • Annex IV → documentation for high-risk AI systems (the downstream product or application), required by Article 11.
  • Annex XI → documentation for GPAI models (the upstream model), required by Article 53(1)(a).

If your GPAI model is integrated into a high-risk system, both annexes may need to be satisfied—Annex XI by the GPAI provider, and Annex IV by the high-risk system provider. The downstream provider relies on Article 53(1)(b) information from the GPAI provider to populate parts of their Annex IV file.

Official wording: Annex XI Section 1 (English)

Editorial note: The following reproduces Annex XI Section 1 from the English consolidated text of Regulation (EU) 2024/1689. Section 2 (systemic risk extras) and the energy-consumption clarification note appear in full on EUR-Lex Annex XI. Always re-open EUR-Lex before compliance decisions.

Section 1

*Information to be provided by all providers of general-purpose AI models*

The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as appropriate to the size and risk profile of the model:

1. A general description of the general-purpose AI model including:

(a) the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;

(b) the acceptable use policies applicable;

(c) the date of release and methods of distribution;

(d) the architecture and number of parameters;

(e) the modality (e.g. text, image) and format of inputs and outputs;

(f) the licence.

2. A detailed description of the elements of the model referred to in point 1, and relevant information of the process for the development, including the following elements:

(a) the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;

(b) the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;

(c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;

(d) the computational resources used to train the model (e.g. number of floating-point operations), training time, and other relevant details related to the training;

(e) known or estimated energy consumption of the model.

With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.

Official wording: Annex XI Section 2 (English)

Section 2

*Additional information to be provided by providers of general-purpose AI models with systemic risk*

1. A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.
2. Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g. red teaming), model adaptations, including alignment and fine-tuning.
3. Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.

Source: EUR-Lex Annex XI.

Recitals (preamble) on EUR-Lex

The recitals in the same consolidated AI Act on EUR-Lex contextualise GPAI documentation, proportionality ("appropriate to the size and risk profile"), transparency, and systemic-risk evaluation. Use the official preamble on EUR-Lex; do not rely on unofficial recital lists without checking sequence and wording against the authentic text.

Compliance checklist

  • Map every Annex XI Section 1 heading (1(a)–(f), 2(a)–(e)) to a concrete artefact in your documentation pipeline (model card, data card, training report, energy audit, etc.).
  • Include acceptable use policies and licence details—these are explicit Annex XI requirements, not optional add-ons.
  • Document data provenance, curation methods, and bias detection measures for training/validation/testing data.
  • Record computational resources (FLOPs, training time) and known or estimated energy consumption.
  • If designated systemic risk: prepare evaluation strategy reports, red-teaming findings, and system architecture diagrams per Section 2.
  • Version-control documentation with each model release placed on the Union market.
  • Review transitional rules: models placed before 2 August 2025 had until 2 August 2026 to produce Annex XI documentation.
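The mapping step in the checklist above can be sketched as a small script that flags Annex XI Section 1 headings with no corresponding artefact. The point identifiers follow Annex XI; the artefact file names are hypothetical examples, not a standard:

```python
# Illustrative sketch: map Annex XI Section 1 points to documentation
# artefacts and flag gaps. Point identifiers follow Annex XI; the
# artefact names in `coverage` below are hypothetical examples.

SECTION_1_POINTS = [
    "1(a) intended tasks and integration targets",
    "1(b) acceptable use policies",
    "1(c) release date and distribution methods",
    "1(d) architecture and parameter count",
    "1(e) modality and I/O formats",
    "1(f) licence",
    "2(a) technical means for integration",
    "2(b) design specifications and training process",
    "2(c) data type, provenance, curation, bias measures",
    "2(d) compute resources and training time",
    "2(e) known or estimated energy consumption",
]

def missing_points(artefact_map: dict[str, str]) -> list[str]:
    """Return Section 1 points with no mapped artefact."""
    covered = set(artefact_map)
    return [p for p in SECTION_1_POINTS if p.split(" ")[0] not in covered]

# Example: a model card pipeline that covers everything except 2(c) and 2(e)
coverage = {
    "1(a)": "model_card.md#intended-use",
    "1(b)": "aup.md",
    "1(c)": "model_card.md#release",
    "1(d)": "model_card.md#architecture",
    "1(e)": "model_card.md#modalities",
    "1(f)": "LICENSE",
    "2(a)": "integration_guide.md",
    "2(b)": "training_report.md",
    "2(d)": "training_report.md#compute",
}
print(missing_points(coverage))  # flags the 2(c) and 2(e) headings
```

Running a check like this per release makes gaps visible before the documentation file is versioned and archived.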


Related annexes

  • Annex IV — Technical documentation for high-risk AI systems (the counterpart under Article 11)
  • Annex XII — Transparency information for GPAI providers
  • Annex XIII — Criteria for the classification of GPAI models with systemic risk

Frequently asked questions

Does Annex XI apply to open-source models?

Yes, unless the narrow Article 53(2) exemption for free and open-source models applies—and even that exemption does not cover systemic-risk models. Open-weights release does not remove Annex XI duties for most commercially offered GPAI models.

What is the difference between Annex XI and Annex IV?

Annex IV is for high-risk AI systems (downstream applications, required by Article 11). Annex XI is for GPAI models (upstream foundation or general-purpose models, required by Article 53). If a GPAI model is integrated into a high-risk system, both annexes apply at different layers of the value chain.

How detailed does the energy consumption reporting need to be?

Annex XI requires 'known or estimated' energy consumption. Where actual energy use is unknown, the estimate may be based on computational resources used. The proportionality clause ('appropriate to the size and risk profile') applies.

What happens if my model is later designated systemic risk?

You must then produce Section 2 documentation (evaluation strategies, adversarial testing, system architecture) in addition to the Section 1 baseline you should already have. Monitor AI Office communications and Annex XIII thresholds.

Can a model card satisfy Annex XI?

A well-structured model card can cover many Annex XI headings, but you must ensure every point in Section 1 (and Section 2 if applicable) is addressed. Map your model card fields to the Annex XI points and fill gaps.