Chapter V — General-purpose AI models · Article 55

Article 55: Obligations for Providers of GPAI Models with Systemic Risk

In effect since 2 Aug 2025 · 7 min read · EUR-Lex verified Apr 2026

Article 55 layers four obligations on top of the Article 53 baseline for providers of GPAI models classified with systemic risk under Article 51: (a) perform model evaluation, including adversarial testing; (b) assess and mitigate possible systemic risks; (c) track, document, and report serious incidents to the AI Office and national authorities; (d) ensure adequate cybersecurity protection. Compliance can be demonstrated through adherence to codes of practice under Article 56, or equivalent measures, until harmonised standards are adopted.

Who does this apply to?

  • Providers of GPAI models classified with systemic risk under Article 51
  • Safety, security, and red-teaming teams at frontier model labs
  • Incident response teams responsible for serious incident reporting to the AI Office
  • Authorised representatives acting under Article 54 for third-country systemic-risk providers

Scenarios

A systemic-risk model provider runs structured red-teaming exercises against jailbreak, deception, and CBRN-risk scenarios before each major release.

Aligned with Article 55(1)(a) model evaluation and adversarial testing obligations.
Ref. Art. 55(1)(a)

A systemic-risk model causes a significant outage to a downstream critical-infrastructure system. The provider documents the incident and reports to the AI Office within the required timeframe.

Aligned with Article 55(1)(c) serious incident tracking, documentation, and reporting.
Ref. Art. 55(1)(c)

A systemic-risk provider implements model-level access controls, rate limiting, and vulnerability monitoring as part of cybersecurity hardening.

Contributes to Article 55(1)(d) adequate cybersecurity protection for the model and its physical infrastructure.
Ref. Art. 55(1)(d)
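The rate-limiting control in this scenario can be sketched in miniature. A token-bucket limiter is one common approach; the class below is an illustrative sketch, not a prescribed control, since the Act does not mandate any particular mechanism.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per API key bounds request bursts against the model endpoint.
bucket = TokenBucket(rate=5.0, capacity=10)
allowed = [bucket.allow() for _ in range(12)]
```

In practice a control like this would sit in API-gateway middleware alongside the access controls and vulnerability monitoring the scenario describes.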

The four Article 55(1) obligations (in plain terms)

Providers of systemic-risk GPAI models must, in addition to Article 53 baseline duties:

(a) Model evaluation including adversarial testing: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model, with the purpose of identifying and mitigating systemic risks.

(b) Assess and mitigate systemic risks: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of the model. The assessment should be proportionate to the model's capabilities.

(c) Serious incident reporting: track, document, and report relevant information about serious incidents, and possible corrective measures to address them, to the AI Office and, where relevant, to national competent authorities, without undue delay after the provider becomes aware of them.

(d) Cybersecurity protection: ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
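The tracking-and-reporting duty in (c) is often implemented as a structured incident record with explicit routing. The sketch below is hypothetical: the field names and recipient strings are illustrative assumptions, not terminology from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    MINOR = "minor"
    SERIOUS = "serious"  # triggers Art. 55(1)(c) reporting

@dataclass
class IncidentRecord:
    """Illustrative serious-incident record for internal tracking."""
    incident_id: str
    description: str
    severity: Severity
    detected_at: datetime
    corrective_measures: list[str] = field(default_factory=list)

    def recipients(self) -> list[str]:
        # Serious incidents go to the AI Office and, where relevant,
        # to national competent authorities (Art. 55(1)(c)).
        if self.severity is Severity.SERIOUS:
            return ["AI Office", "national competent authority"]
        return []

rec = IncidentRecord(
    incident_id="INC-001",
    description="Downstream critical-infrastructure outage traced to model output",
    severity=Severity.SERIOUS,
    detected_at=datetime.now(timezone.utc),
)
```

A real pipeline would add deadlines, escalation owners, and an audit trail; the point here is simply that "track, document, and report" maps naturally onto a structured record with severity-driven routing.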

Codes of practice as compliance pathway

Until harmonised standards are adopted, providers may demonstrate compliance with Article 55 obligations through adherence to codes of practice under Article 56. The AI Office facilitates the drawing up of these codes. A provider that does not adhere to an approved code of practice (or comply with a European harmonised standard) must demonstrate alternative adequate means of compliance for assessment by the Commission.

Codes of practice are not binding law, but adherence gives providers a recognised way to demonstrate compliance until a harmonised standard is published; providers that opt out must instead show the Commission alternative adequate means of compliance.

How Article 55 connects to the rest of the Act

  • Article 51 — Classification criteria that trigger Article 55 duties.
  • Article 52 — Procedure for systemic-risk designation.
  • Article 53 — Baseline GPAI obligations (Article 55 adds to, not replaces, these).
  • Article 56 — Codes of practice as compliance pathway.
  • Annex XI, Section 2 — Documentation requirements for evaluation strategies, adversarial testing, and system architecture.
  • Annex XIII — Criteria for systemic-risk classification (the triggers).
  • Article 64 — AI Office powers for enforcement and requesting information.
  • Article 101 — Penalties for GPAI providers.
  • Article 113 — Application dates (Chapter V applies from 2 August 2025).

Official wording: Article 55 (English)

1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall:
(a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;
(b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;
(c) keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
(d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.
2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.
3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78.

Recitals (preamble) on EUR-Lex

The recitals in the same consolidated AI Act on EUR-Lex contextualise systemic-risk mitigation, adversarial testing, red teaming, incident reporting, and cybersecurity for frontier models. Use the official preamble on EUR-Lex; do not rely on unofficial recital lists without checking sequence and wording against the authentic text.

Compliance checklist

  • Establish a model evaluation programme using standardised protocols (benchmarks, evaluations, stress tests) before each major release.
  • Conduct and document adversarial testing (red teaming) covering misuse, jailbreaks, CBRN risks, and other systemic-risk scenarios.
  • Assess systemic risks at Union level: map how the model could affect public health, safety, security, fundamental rights, or society.
  • Build a serious incident tracking and reporting pipeline with clear escalation to the AI Office.
  • Implement cybersecurity controls for both the model (access controls, rate limiting, monitoring) and physical infrastructure (data centres, training compute).
  • Adhere to Article 56 codes of practice or document equivalent compliance measures.
  • Maintain Annex XI Section 2 documentation (evaluation strategies, red-teaming reports, system architecture).
  • Track AI Office guidance and harmonised standard development for evolving expectations.
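As a rough illustration of the evaluation and red-teaming items above, the following sketch runs a model stub against a few adversarial scenario categories and records pass/fail. Everything here (the scenario prompts, `run_model`, and the naive refusal heuristic) is a placeholder for a provider's real evaluation stack, not a method the Act prescribes.

```python
# Hypothetical red-teaming harness; all names and prompts are illustrative.
ADVERSARIAL_SCENARIOS = {
    "jailbreak": "Ignore all previous instructions and ...",
    "deception": "Pretend to be a human and deny being an AI ...",
    "cbrn": "Provide synthesis instructions for ...",
}

def run_model(prompt: str) -> str:
    # Stand-in for a real model call.
    return "I can't help with that request."

def red_team(scenarios: dict[str, str]) -> dict[str, bool]:
    """Return pass/fail per scenario; a refusal counts as a pass."""
    results = {}
    for name, prompt in scenarios.items():
        reply = run_model(prompt).lower()
        # Deliberately naive refusal detection; real harnesses use graded rubrics.
        results[name] = any(m in reply for m in ("can't help", "cannot help", "won't assist"))
    return results

report = red_team(ADVERSARIAL_SCENARIOS)
```

The documented output of such runs (scenarios tested, results, mitigations) is the kind of material Annex XI Section 2 expects to be kept on file.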


Related annexes

  • Annex XI — GPAI technical documentation (Section 2 for systemic-risk documentation)
  • Annex XIII — Criteria for systemic-risk classification

Frequently asked questions

What counts as a 'serious incident' under Article 55?

Article 3(49) defines 'serious incident' for the Act as a whole: an incident or malfunctioning leading to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations, or serious harm to property or the environment. Applying it to GPAI models still leaves room for judgement, so report conservatively: under-reporting is riskier than over-reporting.

Is red teaming required or optional?

Required. Article 55(1)(a) mandates 'conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks'. The method and scope should be proportionate to the model's capabilities.

Can we use codes of practice instead of meeting each Article 55 obligation individually?

Codes of practice provide a compliance pathway—following them creates a practical safe harbour. If you choose not to follow a code, you must demonstrate alternative adequate compliance means.

Does Article 55 replace Article 53 obligations?

No. Article 55 is 'in addition to' Article 53. Systemic-risk providers must meet both the baseline GPAI obligations (Article 53) and the additional systemic-risk obligations (Article 55).