GPAI Obligations Under the EU AI Act Explained

Complete guide to general-purpose AI model obligations under the EU AI Act. Documentation, transparency, copyright, and systemic risk requirements.

Legalithm Team · 22 min read

GPAI Obligations Under the EU AI Act: What Every Foundation Model Provider and Downstream User Needs to Know

If your company uses ChatGPT, Claude, Gemini, Llama, Mistral, Stable Diffusion, or any other foundation model — whether through an API, a fine-tuned variant, or as a component inside your own product — the EU AI Act's general-purpose AI (GPAI) model obligations apply to your supply chain. These obligations became enforceable on 2 August 2025, making them among the earliest live requirements under the AI Act. Unlike the high-risk system rules that target specific use cases, GPAI obligations attach to the model itself, regardless of how it is ultimately deployed — following the model through the supply chain from the original developer to the final deployer.

This guide covers who qualifies as a GPAI provider, what those providers must do, when systemic risk rules kick in, how the Code of Practice shapes compliance, and what downstream companies need to understand.

TL;DR — GPAI obligations at a glance

  • A general-purpose AI model is any AI model that can perform a wide range of tasks — including models trained with large amounts of data using self-supervision at scale. GPT-4, Claude, Gemini, Llama, Mistral, and Stable Diffusion all qualify.
  • GPAI obligations became enforceable 2 August 2025 — earlier than most other AI Act requirements.
  • All GPAI providers must comply with Article 53: technical documentation, copyright compliance, and transparency/downstream information duties.
  • Models trained with compute exceeding 10²⁵ floating point operations (FLOP) are presumed to pose systemic risk and face additional obligations under Article 55: adversarial testing, cybersecurity measures, incident reporting, and model evaluations.
  • The GPAI Code of Practice, published 10 July 2025 and endorsed 1 August 2025, provides the primary compliance benchmark.
  • Open-source GPAI models receive a limited exemption from some documentation and transparency duties — but not from copyright obligations or systemic risk rules.
  • When a GPAI model is integrated into a high-risk AI system, the high-risk provider must ensure full compliance with both the GPAI and high-risk requirements.
  • Penalties for GPAI non-compliance reach EUR 15 million or 3% of global annual turnover — enforced directly by the European Commission's AI Office.
  • Classify your AI systems now to understand which obligations apply.

What is a general-purpose AI model?

The AI Act defines a general-purpose AI model in Article 3 as an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.

Key characteristics

Three features distinguish a GPAI model from a narrow, single-purpose AI system:

  1. Significant generality: The model is not designed for one specific task. It can handle translation, summarisation, code generation, classification, reasoning, and more — without being retrained for each task.

  2. Self-supervised training at scale: Most qualifying models are trained on large, diverse datasets using unsupervised or self-supervised learning methods. This is the hallmark of the foundation model paradigm.

  3. Integrability: The model can be embedded into many different downstream applications, products, and systems. A model that only works inside one proprietary application may still qualify, but integrability is a strong indicator.

Which models qualify?

The definition captures the major models dominating the current landscape:

Model | Provider | Type | Likely systemic risk?
GPT-4 / GPT-4o | OpenAI | Multimodal LLM | Yes (compute above 10²⁵ FLOP)
Claude 3.5 / Claude 4 | Anthropic | Text LLM | Yes
Gemini 1.5 / Gemini 2 | Google DeepMind | Multimodal LLM | Yes
Llama 3 / Llama 4 | Meta | Open-weight LLM | Yes
Mistral Large | Mistral AI | Text LLM | Likely yes
Stable Diffusion XL | Stability AI | Image generation | Depends on compute
Smaller fine-tuned variants | Various | Various | Typically no

The definition also covers generative AI models specifically — even if a model is marketed solely as a "content creation tool," it falls under the GPAI regime if it meets the generality and scale criteria. The definition applies regardless of how the model is placed on the market: as an API, a platform, downloadable weights, or embedded inside a product.

Who qualifies as a GPAI provider?

Under the AI Act, the GPAI model provider is the natural or legal person that develops a general-purpose AI model or has one developed, and places it on the market. "Placing on the market" includes making the model available — for payment or free of charge — to downstream companies, developers, or the public.

Provider vs downstream provider vs deployer

The GPAI supply chain often involves multiple layers:

  1. Upstream GPAI provider: The company that trains the base model and makes it available (e.g., OpenAI, Anthropic, Meta, Google). This entity bears the core Article 53 obligations and, where applicable, the Article 55 systemic risk obligations.

  2. Downstream provider: A company that takes a GPAI model, fine-tunes it, modifies it, or integrates it into its own AI system, and then places that modified model or system on the market under its own name. This entity may become a GPAI provider in its own right — particularly if the modification is substantial. For a deeper breakdown of how role changes work, see provider vs deployer obligations.

  3. Deployer: A company that uses a GPAI-powered system under its own authority but does not place it on the market. Deployers are not GPAI providers, but they have separate obligations — especially if the system qualifies as high-risk.

The critical question: when does a downstream company become a provider?

A company becomes a GPAI provider when it:

  • Places a substantially modified version of the model on the market under its own name or trademark
  • Fine-tunes a model and makes the fine-tuned variant available as a distinct product or service
  • Integrates a GPAI model into a system and places that system on the market as a new AI system

A company does not become a GPAI provider merely by:

  • Using a GPAI model via API for its own internal operations
  • Deploying a GPAI model within its organisation without making it available to third parties
  • Adding application-layer prompts, UI wrappers, or workflow automation around an existing model without modifying the model itself
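The lists above turn on two questions: does the company make the model available to third parties, and does it modify the model itself? As a rough illustration (not a substitute for legal assessment), that logic can be sketched as a small decision helper:

```python
# Simplified decision helper mirroring the criteria listed above.
# Role determination under the AI Act is ultimately a legal judgement;
# this only encodes the two recurring factors from the text.

def gpai_role(makes_available_to_third_parties: bool,
              modifies_model: bool) -> str:
    """Classify a company's likely role in the GPAI supply chain."""
    if not makes_available_to_third_parties:
        return "deployer"                        # internal use only
    if modifies_model:
        return "GPAI provider (modified model)"  # substantial modification
    return "system provider (upstream keeps model-level GPAI duties)"

print(gpai_role(False, True))   # deployer: fine-tuned, but internal use only
print(gpai_role(True, True))    # downstream GPAI provider
```

A company that only wraps prompts and UI around an unmodified API model stays on the third branch: it may still have system-level duties, but the model-level Article 53 obligations remain upstream.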

Open-source model considerations

The AI Act provides a limited exemption for GPAI models released under a free and open-source licence. Under this exemption, open-source GPAI providers are relieved of certain documentation and information-sharing obligations — but only if the following conditions are met:

  • The model parameters (weights, architecture information, usage information) are made publicly available
  • The model is not classified as posing systemic risk

If a model is open-source but exceeds the 10²⁵ FLOP compute threshold or is designated by the Commission as posing systemic risk, the exemption falls away and the full set of GPAI obligations — including systemic risk duties — applies.

Importantly, the open-source exemption never covers copyright compliance. All GPAI providers, open-source or not, must comply with the copyright-related obligations.

Obligations for all GPAI providers (Article 53)

Article 53 establishes three pillars of obligation that apply to every GPAI provider, regardless of the model's risk classification. These obligations are designed to ensure that downstream providers, deployers, and regulators have the information they need to comply with their own requirements.

Documentation requirements

GPAI providers must draw up and keep up to date technical documentation of the model, including its training and testing process and the results of its evaluation. The documentation must contain, at a minimum:

  • A general description of the model, including its intended tasks, the date of release, modalities (text, image, audio, etc.), and architecture
  • A description of the training data: sources, size, curation methodology, data preparation techniques, and labelling procedures where applicable
  • The computational resources used for training (including FLOP), training methodology, and key design choices
  • Information on the model's capabilities and limitations, including reasonably foreseeable risks and the measures taken to address them
  • An evaluation summary covering performance benchmarks, testing methodology, and known failure modes

This documentation must be provided to the AI Office upon request, and it must be kept up to date as the model is updated or modified. The standard of documentation closely parallels the Annex IV technical documentation requirements for high-risk systems, though tailored to the model level rather than the system level.

Copyright compliance

All GPAI providers must put in place a policy to comply with EU copyright law, specifically the Copyright Directive (Directive 2019/790). This obligation has two operational components:

  1. Rights reservation mechanism: Providers must identify and comply with rights reservations expressed by copyright holders under Article 4(3) of the Copyright Directive — particularly the text and data mining opt-out. This means implementing a system to detect and respect machine-readable opt-out signals (such as robots.txt directives, meta tags, or other standardised mechanisms) used by rights holders to reserve their content from AI training.

  2. Training data transparency: Providers must draw up and make publicly available a sufficiently detailed summary of the content used for training the GPAI model. The AI Office has published a template for this summary. The summary must enable parties with legitimate interests — including copyright holders — to understand and exercise their rights.

This obligation applies equally to open-source and proprietary models. There is no exemption for open-source providers when it comes to copyright compliance. Given the ongoing litigation and regulatory scrutiny around training data practices, this is one of the highest-risk compliance areas for GPAI providers globally.
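As an illustration of the rights reservation component, Python's standard library can already parse robots.txt opt-outs. The crawler user-agent below is hypothetical, and robots.txt is only one of several reservation signals a compliance programme must handle (meta tags and other standardised mechanisms count too):

```python
from urllib import robotparser

# Example robots.txt content a rights holder might publish.
# "ExampleAIBot" is a hypothetical training-crawler user-agent.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The training crawler is opted out; other agents are not.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/article"))      # True
```

A real pipeline would run a check like this per source before ingesting content, and log the result as evidence that opt-outs were respected.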

Transparency and downstream information

GPAI providers must make available to downstream providers who integrate the model into their own systems or products:

  • The technical documentation described above (or a meaningful summary of it)
  • Information that enables downstream providers to comply with their own AI Act obligations — including the information needed for high-risk system documentation, transparency obligations under Article 50 (see also the practical guide to Article 50), and risk management
  • A clear description of the model's capabilities, limitations, and known risks
  • Information about changes to the model that might affect downstream compliance

This is the supply chain connective tissue of the AI Act. In practice, GPAI providers will need to establish structured information-sharing frameworks: model cards, API documentation, safety reports, and change notification processes.
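A downstream information package can start as a structured, versioned model card. The sketch below uses illustrative field names of our own; the Code of Practice templates, not this schema, are the authoritative format:

```python
import json
from dataclasses import dataclass, field, asdict

# Illustrative model-card schema; field names are our own invention,
# loosely tracking the Article 53 documentation items discussed above.

@dataclass
class ModelCard:
    model_name: str
    version: str
    release_date: str
    modalities: list
    capabilities: list
    known_limitations: list
    training_compute_flop: float
    changes_since_last_version: list = field(default_factory=list)

card = ModelCard(
    model_name="example-model",  # hypothetical
    version="1.2.0",
    release_date="2025-09-01",
    modalities=["text"],
    capabilities=["summarisation", "translation"],
    known_limitations=["hallucination under long contexts"],
    training_compute_flop=4.1e24,
)

# Serialise for downstream providers alongside each release.
print(json.dumps(asdict(card), indent=2))
```

Versioning the card and diffing `changes_since_last_version` on each release gives downstream providers the change notifications Article 53 information-sharing implies.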

For companies building on foundation models, the quality of upstream transparency disclosures directly determines your own compliance posture. If your GPAI provider cannot supply the necessary information, you face a gap that no amount of internal effort can fill. Supply chain due diligence — verifying that your model provider meets its Article 53 duties — is a critical first step.

Systemic risk: additional obligations (Article 55)

The AI Act recognises that some models — by virtue of their scale, capabilities, and reach — pose risks that extend beyond individual use cases to society, public safety, fundamental rights, or the EU economy. These are classified as GPAI models with systemic risk.

What triggers systemic risk classification?

There are two pathways to systemic risk classification:

Pathway 1 — Compute threshold: A GPAI model is presumed to pose systemic risk if the cumulative amount of compute used for its training exceeds 10²⁵ floating point operations (FLOP). This is a bright-line rule: if you cross the threshold, the presumption applies. As of early 2026, this threshold captures most frontier models from OpenAI, Anthropic, Google DeepMind, and Meta, while excluding most smaller, specialised, or fine-tuned models.

Pathway 2 — Commission designation: The European Commission may designate a GPAI model as posing systemic risk based on criteria in Annex XIII, even if the compute threshold is not met. Criteria include end user count, degree of autonomy, market penetration, integration into critical infrastructure, and misuse potential. This ensures a widely deployed model is not exempt simply because it was trained efficiently.

GPAI providers must notify the Commission when their model meets either criterion. The notification obligation is on the provider — there is no external screening process that catches you automatically. Failure to notify is itself a compliance violation.
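To see roughly where a model lands against the compute presumption, the widely cited scaling-law rule of thumb C ≈ 6·N·D (total FLOP ≈ 6 × parameters × training tokens) gives a ballpark estimate. Note that this heuristic comes from the machine learning literature, not from the Act, which counts actual cumulative training compute:

```python
# Rough training-compute estimate using the common C ≈ 6·N·D heuristic
# (N = parameters, D = training tokens). This is a ballpark tool only;
# the Act's presumption turns on actual cumulative training FLOP.

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51(2) presumption, in FLOP

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens:
flop = estimated_training_flop(70e9, 15e12)  # ≈ 6.3e24, below the presumption
print(f"{flop:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Under this heuristic, only the largest training runs (hundreds of billions of parameters on tens of trillions of tokens) cross 10²⁵ FLOP, which matches the Act's intent of catching frontier models while sparing typical fine-tuning.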

Additional obligations for systemic risk models

Beyond all Article 53 duties, providers of GPAI models with systemic risk must also comply with Article 55:

Obligation | What it requires
Model evaluation | Perform standardised model evaluations, including assessing capabilities, limitations, and foreseeable risks. Use state-of-the-art evaluation protocols, including benchmarks and red-teaming.
Adversarial testing | Conduct adversarial testing (red-teaming) to identify and mitigate systemic risks. Testing must cover misuse scenarios, failure modes, emergent capabilities, and risks to critical infrastructure.
Systemic risk assessment and mitigation | Assess and mitigate reasonably foreseeable systemic risks — including risks that may materialise or be amplified through the downstream integration of the model into multiple systems.
Cybersecurity | Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, including protection against adversarial attacks, data poisoning, model extraction, and unauthorised access.
Serious incident reporting | Report serious incidents to the AI Office without undue delay. Serious incidents include events where the model's capabilities or limitations lead to, or could foreseeably lead to, significant harm to health, safety, fundamental rights, the environment, or critical infrastructure.
Documentation of incidents and corrective actions | Keep records of serious incidents and corrective actions taken, and make these available to the AI Office upon request.

For frontier AI labs, these obligations formalise practices many already undertake voluntarily — but now with regulatory teeth and enforcement consequences.

The GPAI Code of Practice

The GPAI Code of Practice was published on 10 July 2025 and endorsed by the European Commission on 1 August 2025 — one day before the GPAI obligations became enforceable. It is the single most important compliance document for GPAI providers, translating the high-level obligations of Articles 53 and 55 into concrete, actionable measures.

The Code of Practice is voluntary, but adherence creates a presumption of compliance with the corresponding obligations. You can comply by following the Code, or demonstrate compliance through alternative means — but the burden of proving equivalence falls on you. In practice, the AI Office will use the Code as its baseline when assessing compliance. Deviating without a clear, documented rationale is a risk.

Structure and content

The Code of Practice is organised into three chapters:

Chapter 1 — Transparency obligations: Detailed specifications for the technical documentation, downstream information sharing, and training data summaries required under Article 53. Includes templates for model cards, data provenance summaries, and capability disclosures.

Chapter 2 — Copyright obligations: Practical guidance on implementing the rights reservation mechanism, detecting and respecting opt-out signals, and publishing the training data summary. Addresses the intersection with the Copyright Directive's text and data mining exceptions.

Chapter 3 — Systemic risk safety obligations: Specifications for model evaluation protocols, adversarial testing methodologies, cybersecurity baselines, and incident reporting procedures for models classified as posing systemic risk under Article 55.

How to use the Code for compliance

For companies subject to GPAI obligations: map your obligations under Articles 53 and 55 to the corresponding Code chapters, implement the specific measures, document your adherence, and where you deviate, document your alternative measures and rationale. For downstream providers and deployers, the Code serves as a reference point for evaluating upstream provider compliance — if a provider claims compliance but cannot demonstrate adherence to the Code's measures, that is a due diligence red flag.

Timeline and grace periods

The GPAI obligations under Article 51 and following provisions follow a distinct enforcement timeline within the broader AI Act implementation schedule:

Date | Milestone
1 August 2024 | AI Act enters into force
2 February 2025 | Prohibited practices become enforceable
10 July 2025 | GPAI Code of Practice published
1 August 2025 | Code of Practice endorsed by the Commission
2 August 2025 | GPAI obligations (Articles 51–55) become enforceable for newly placed models
2 August 2026 | Commission enforcement powers for GPAI fully operational; high-risk AI system obligations become enforceable
2 August 2027 | Grace period expires for pre-existing GPAI models

What "pre-existing" means

  • Models placed after 2 August 2025: Must comply with all applicable GPAI obligations immediately. No grace period.
  • Models placed before 2 August 2025: Benefit from a two-year grace period expiring on 2 August 2027. Providers must work toward compliance but are not subject to enforcement during the transition.

Important nuance: if a pre-existing model is significantly modified after 2 August 2025, it may be treated as a newly placed model, triggering immediate compliance. Major retraining, architecture changes, or capability expansions are likely to qualify.
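The transition rules described above reduce to a small amount of date logic. This sketch encodes this article's reading of the grace period; whether a change counts as "significantly modified" is a judgement call in practice:

```python
from datetime import date
from typing import Optional

# Transition dates as described in the timeline above.
GPAI_APPLICABILITY = date(2025, 8, 2)
GRACE_PERIOD_END = date(2027, 8, 2)

def compliance_deadline(placed_on_market: date,
                        modified_after: Optional[date] = None) -> date:
    """Simplified reading of the GPAI transition rules."""
    if placed_on_market >= GPAI_APPLICABILITY:
        return placed_on_market            # newly placed: immediate compliance
    if modified_after and modified_after >= GPAI_APPLICABILITY:
        return modified_after              # treated as newly placed
    return GRACE_PERIOD_END                # pre-existing model: grace period

print(compliance_deadline(date(2024, 3, 1)))    # 2027-08-02
print(compliance_deadline(date(2025, 10, 1)))   # 2025-10-01
```

The middle branch is the trap: a major retrain of a pre-existing model in, say, early 2026 forfeits the remaining grace period from that date.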

Commission enforcement powers

The AI Office is the sole enforcer of GPAI obligations — national authorities do not enforce GPAI rules. The AI Office's enforcement powers become fully operational on 2 August 2026, though it can already conduct preliminary investigations and issue guidance.

Real-world compliance scenarios

Understanding which obligations apply requires mapping your specific role in the GPAI supply chain. The following scenarios illustrate how the rules work in practice.

Real-world example: A startup fine-tuning Llama for legal research. LexAI, a Paris-based startup, takes Meta's Llama 3 (open-weight) and fine-tunes it on European case law, legislative databases, and legal commentary. LexAI makes the fine-tuned model available as a SaaS product to law firms under the "LexAI" brand. LexAI becomes a GPAI provider because it places a modified model on the market under its own name. LexAI must comply with all Article 53 obligations: technical documentation covering the fine-tuning process and legal training data, copyright compliance ensuring the training data was lawfully sourced, and transparency to downstream users about the model's limitations and jurisdictional coverage. LexAI is unlikely to trigger systemic risk — the fine-tuning compute will be far below 10²⁵ FLOP. However, LexAI benefits from Meta's upstream compliance for the base Llama model. LexAI should also evaluate whether the final product triggers high-risk classification under Article 6 when used for legal advice or court filings.

Real-world example: An enterprise deploying GPT-4 via API for customer service. NordicBank, a Scandinavian bank, integrates OpenAI's GPT-4 via API into its customer service chatbot. NordicBank does not modify the model — it uses prompt engineering and retrieval-augmented generation (RAG) at the application layer. NordicBank is a deployer, not a GPAI provider. OpenAI bears the Article 53 and Article 55 obligations for GPT-4. NordicBank's obligations are deployer-level: ensuring human oversight, informing customers that they are interacting with AI (Article 50 transparency), retaining logs, and monitoring for issues. However, NordicBank must still verify that OpenAI provides the downstream information required by Article 53. If the chatbot is used for creditworthiness assessment or financial decision-making, NordicBank may need to treat it as a high-risk AI system, triggering provider-level obligations for the integrated system.

Real-world example: A company training a proprietary foundation model. SovereignAI, a Berlin-based company, trains a proprietary multimodal foundation model from scratch using 5 × 10²⁵ FLOP and offers it via API to enterprise customers across healthcare, finance, and government. SovereignAI is an upstream GPAI provider subject to both Article 53 and Article 55. The compute threshold is clearly exceeded, so the model is presumed to pose systemic risk. SovereignAI must: (1) prepare comprehensive technical documentation including training data provenance and architecture details; (2) implement copyright compliance with rights reservation detection and a public training data summary; (3) provide downstream customers with information needed to meet their own obligations; (4) conduct model evaluations and adversarial testing; (5) implement cybersecurity protections; and (6) establish serious incident reporting to the AI Office. SovereignAI must also notify the Commission of its model's systemic risk classification. Given deployment across healthcare and government, SovereignAI should expect heightened scrutiny from both the AI Office and sector-specific regulators.

How GPAI obligations interact with high-risk rules

One of the most complex aspects of the AI Act's architecture is the interaction between GPAI model obligations and high-risk AI system requirements. These are two separate regulatory tracks that converge when a GPAI model is integrated into a high-risk system.

The two-layer compliance model

The AI Act operates on a principle of layered responsibility:

  • Model layer: The GPAI provider is responsible for model-level obligations (documentation, copyright, transparency, and systemic risk duties where applicable) under Articles 53 and 55.
  • System layer: The provider of the AI system that integrates the GPAI model is responsible for system-level obligations (risk management, data governance, accuracy, robustness, human oversight, conformity assessment) under the high-risk rules.

These layers are cumulative, not alternative. A high-risk system built on a GPAI model must satisfy both sets of requirements.

Practical implications for downstream providers

If you integrate a GPAI model into a system that qualifies as high-risk, you must:

  1. Obtain upstream documentation: Request and verify that the GPAI provider has complied with its Article 53 obligations. The GPAI provider's documentation forms the foundation of your own technical documentation for the system.

  2. Conduct system-level risk management: The GPAI provider's model evaluations do not substitute for your own system-level risk assessment. You must evaluate risks arising from the specific use case, deployment context, and interaction with other system components.

  3. Perform conformity assessment: Your system must undergo the appropriate conformity assessment procedure. The GPAI model's compliance status does not exempt you from this obligation.

  4. Maintain ongoing monitoring: You are responsible for post-market monitoring of the system, including monitoring for issues that originate from the GPAI model layer — such as model drift, emergent behaviours, or capabilities that were not documented by the upstream provider.

  5. Manage update risks: When the GPAI provider updates the underlying model, you must evaluate whether the update affects your system's compliance status. This is particularly relevant for API-based integrations where model updates may occur without explicit opt-in.

Who bears liability?

The GPAI provider is liable for model-layer failures (e.g., inadequate training data curation). The system provider is liable for system-layer failures (e.g., failing to implement safeguards for the specific use case). This layered model incentivises robust contractual arrangements — including SLA-backed information-sharing, change notification, and incident response coordination. For a detailed compliance checklist covering both layers, see the EU AI Act compliance checklist.

Penalties for non-compliance

GPAI non-compliance is penalised under Article 101 (fines for providers of general-purpose AI models), which sits within the AI Act's broader penalty framework in Article 99. The specific penalty tier for GPAI violations is:

Up to EUR 15 million or 3% of global annual turnover (whichever is higher for large organisations; whichever is lower for SMEs and startups).

This is the same penalty tier that applies to high-risk system violations and transparency obligation breaches. It is the middle tier of the AI Act's three-tier penalty structure:

Penalty tier | Maximum fine | Applies to
Tier 1 (highest) | EUR 35 million / 7% of global turnover | Prohibited practices
Tier 2 | EUR 15 million / 3% of global turnover | GPAI obligations, high-risk obligations, transparency obligations
Tier 3 (lowest) | EUR 7.5 million / 1% of global turnover | Supplying incorrect or misleading information to authorities
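As a quick illustration of the Tier 2 cap as this section describes it (including the SME adjustment stated above, which is this article's reading rather than statutory text), the maximum fine can be computed as:

```python
# Tier 2 fine cap as described in this section: EUR 15m or 3% of global
# annual turnover, the higher for large organisations, the lower for SMEs.
# Illustrative only; actual fines are set case-by-case by the AI Office.

def gpai_fine_cap(turnover_eur: float, is_sme: bool) -> float:
    fixed = 15_000_000.0
    pct = 0.03 * turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

print(f"{gpai_fine_cap(2_000_000_000, is_sme=False):,.0f}")  # 60,000,000
print(f"{gpai_fine_cap(10_000_000, is_sme=True):,.0f}")      # 300,000
```

For a large provider, the percentage leg dominates once turnover passes EUR 500 million; below that, the EUR 15 million floor applies.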

For a full breakdown of how fines are calculated, including the SME adjustment mechanism and comparisons with GDPR penalties, see the EU AI Act penalties guide.

Enforcement specifics for GPAI

GPAI obligations are enforced exclusively by the European Commission through the AI Office — a single authority across the entire EU, eliminating jurisdictional arbitrage. The AI Office can request documentation, conduct investigations, impose fines, and require corrective measures including restricting or withdrawing a model from the market. Enforcement powers become fully operational 2 August 2026.

Frequently Asked Questions

Does using the ChatGPT API make me a GPAI provider?

No. If you access GPT-4 or any other GPAI model via API and use it within your own operations — to power a chatbot, generate content, or automate workflows — you are a deployer, not a GPAI provider. OpenAI remains the GPAI provider and bears the Article 53 and Article 55 obligations. Your obligations are at the deployer level: human oversight, transparency to end users, log retention, and monitoring. However, if you build a distinct product or service that you sell to third parties under your own brand — particularly if you fine-tune or substantially modify the model — you may be approaching provider status. The key question is whether you are placing a model or system on the market under your own name.

Are open-source models fully exempt from GPAI obligations?

No. Open-source GPAI models receive a limited exemption from certain documentation and downstream information-sharing requirements. The exemption does not apply to copyright obligations — all GPAI providers, regardless of licensing, must comply with copyright rules. More importantly, the open-source exemption does not apply if the model is classified as posing systemic risk (either because it exceeds the 10²⁵ FLOP compute threshold or because the Commission designates it). Meta's Llama 3, for example, is open-weight but likely exceeds the compute threshold, meaning it must comply with the full set of GPAI obligations including systemic risk duties.

What documentation do I need as a GPAI provider?

At minimum: (1) technical documentation covering architecture, training process, data sources, compute, evaluations, capabilities, and limitations; (2) a publicly available training data summary per AI Office format; (3) a documented copyright compliance policy with rights reservation detection; and (4) downstream information packages for integrators. If classified as posing systemic risk, add model evaluation reports, adversarial testing results, cybersecurity documentation, and an incident reporting procedure. The Code of Practice provides templates for each.

Does fine-tuning a model make me a GPAI provider?

It depends on what you do with the result. If you fine-tune a model and use it solely within your own organisation — for internal operations, research, or analysis — you are a deployer, not a provider. If you fine-tune a model and then make it available to third parties — whether through an API, a downloadable package, or embedded in a product — you become a GPAI provider for the fine-tuned model. Your Article 53 obligations would focus on the fine-tuning layer: what data you used, how you evaluated the fine-tuned model, what capabilities and limitations changed. You would not need to re-document the entire base model, but you must ensure that the upstream provider's documentation is available for the base model.

How does the 10²⁵ FLOP threshold work in practice?

The threshold covers the cumulative amount of compute used for training the model, measured in floating point operations. It is a bright-line rule: if your training run (including any pre-training, continued training, or extensive fine-tuning that is functionally equivalent to training) exceeds 10²⁵ FLOP, the model is presumed to pose systemic risk. The provider can attempt to rebut this presumption by presenting sufficiently substantiated arguments to the Commission, but the burden of proof is on the provider and the bar is high. In practice, the threshold captures current frontier models (GPT-4-class and above) while excluding most smaller models, domain-specific models, and typical fine-tuning runs. The Commission has the power to update this threshold as hardware capabilities evolve.

What if my GPAI provider refuses to share documentation?

This is a genuine compliance risk. If your upstream GPAI provider cannot or will not supply the documentation required under Article 53, you face a gap in your own compliance chain — particularly if you are building a high-risk AI system on top of the model. You should: (1) document your requests and the provider's responses to demonstrate due diligence; (2) evaluate alternative providers that offer compliant documentation; (3) assess whether the gap is material to your own obligations; and (4) consider escalating to the AI Office, which can compel GPAI providers to share required information. The AI Act is designed so that upstream non-compliance does not automatically flow to deployers — but only if you demonstrate reasonable steps to obtain the information. Start with the free AI Act risk classification tool to understand which obligations apply.

The GPAI obligations are not a future concern — they are live regulation with real enforcement consequences. Whether you are training frontier models, fine-tuning open-source models, or building applications on foundation model APIs, understanding your position in the GPAI supply chain is the first step toward compliance. Map your role, assess your obligations, verify your upstream providers, and document everything.

Assess your AI Act obligations now →

