Article 3: Definitions
Article 3 is the EU AI Act’s definitions article: 68 numbered concepts—from AI system, provider, deployer, and operator, through market terms, conformity assessment, data sets, biometric systems, sandboxes, GPAI models, and downstream provider. It does not by itself tell you your obligations; it supplies vocabulary for Article 2 (scope) and later chapters. The complete text of all 68 definitions is set out below.
Who does this apply to?
- Legal, product, and compliance teams mapping roles across the AI value chain (provider, deployer, importer, distributor, authorised representative)
- Data scientists and engineers aligning model lifecycle language with “training / validation / testing data” and GPAI-related definitions
- Public-sector buyers and law-enforcement units interpreting “remote biometric identification”, “real-time”, and related terms in the context of Chapter II and transparency rules
- Anyone drafting policies, contracts, or DPIAs where “personal data”, “profiling”, and “special categories” cross-reference GDPR instruments cited in Article 3
Scenarios
A consultancy hosts a fine-tuned model for a bank but another company’s brand appears on the user interface.
A municipality buys a vendor SaaS product and configures thresholds for alerts.
An EU reseller imports boxed AI appliances from China with the manufacturer’s mark.
A team fine-tunes an open foundation model and integrates it into several customer-facing apps.
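One way to work through scenarios like these is to record a structured role analysis per system, keyed to the Article 3(3)–(8) definition points. The sketch below is purely illustrative: the data structure and field names are assumptions, and the candidate roles are indicative readings of the definitions, not legal conclusions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleAssessment:
    """Working record of an Article 3(3)-(8) role analysis for one AI system.

    candidate_role values are indicative readings only, not legal advice;
    each must be confirmed against the full definition text and Article 2 scope.
    """
    system: str
    candidate_role: str    # e.g. "provider", "deployer", "importer"
    definition_point: int  # Article 3 point number, e.g. 3 for 'provider'
    rationale: str

assessments = [
    RoleAssessment(
        "white-label model hosted for a bank",
        "provider", 3,
        "Article 3(3) turns on under whose name or trademark the system is "
        "placed on the market or put into service.",
    ),
    RoleAssessment(
        "municipal SaaS alerting with configured thresholds",
        "deployer", 4,
        "The municipality uses the system under its authority, not in a "
        "personal non-professional activity (Article 3(4)).",
    ),
    RoleAssessment(
        "boxed AI appliances resold in the EU",
        "importer", 6,
        "An EU entity places on the market a system bearing the mark of a "
        "person established in a third country (Article 3(6)).",
    ),
]

# Each of these actors also falls under the umbrella term 'operator', Article 3(8).
operators = {a.candidate_role for a in assessments}
```

The point of the record is the `rationale` field: when roles shift (white-labelling, subcontracting, resale), the analysis can be re-run against the same definition points.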
What Article 3 does (in plain terms)
Article 3 is a dictionary for the whole AI Act. It assigns precise meanings to actors (provider, deployer, importer, distributor, authorised representative, operator), lifecycle moments (placing on the market, making available, putting into service), and technical/regulatory concepts (conformity assessment, notified bodies, biometric systems, sandboxes, GPAI, and more).
It does not replace Article 2 (who is in scope) or operational chapters (e.g. high-risk requirements in Chapter III). Instead, it stabilises language so later articles can attach duties, timelines, and penalties to defined terms.
Important: several definitions borrow from GDPR (e.g. personal data, profiling, special categories of personal data)—Article 3 points to the cited Union acts; your privacy counsel should read those instruments alongside the AI Act.
Article 3: how to use the definition list
- Start with actors (3)–(8) when assigning RACI roles on a project: whether a party is provider, deployer, or distributor changes the documentation and liability analysis.
- Pair market verbs (9)–(11) with your go-to-market plan: “placing on the market” and “putting into service” trigger different compliance clocks under Article 113.
- Use (12)–(13) when writing instructions for use and misuse scenarios for risk management.
- Jump to biometric clusters (34)–(43) if you touch law-enforcement or public-space analytics.
- GPAI block (63)–(68) matters for foundation-model vendors and integrators—read together with Chapter V.
- Definitions (26)–(68) cover market surveillance, harmonised standards, data concepts, biometric systems, law enforcement, GPAI models, and downstream providers—see the official wording section below for a topic summary.
How Article 3 connects to the rest of the Act
- Article 1 — Subject matter: what the Regulation covers at a high level.
- Article 2 — Scope: who is caught before you apply Article 3 labels in practice.
- Article 4 — AI literacy: operational duty for providers/deployers; Article 3(56) defines the term.
- Article 5 — Prohibited practices: definitions like remote biometric ID and emotion recognition gain bite in Chapter II.
- Article 6 + Annex III — High-risk classification builds on defined terms such as safety component and intended purpose.
- Article 50 — Transparency obligations for certain systems (e.g. deep fake in Article 3(60)).
- Article 51 — GPAI obligations lean on general-purpose AI model / system definitions.
- Article 113 — Application dates when defined duties become enforceable.
Cross-instrument references: Article 3(50)–(52) signpost GDPR concepts; other points reference Regulation (EU) 2019/1020, Regulation (EU) No 1025/2012, and Directive (EU) 2022/2557—open the EUR-Lex anchors from the authentic Article 3 text.
Practical checklist (definitions)
- Glossary-first onboarding: export Article 3 terms relevant to your system class into your AI register.
- Contract alignment: mirror defined roles (provider, deployer, importer, distributor) in MSAs and DPAs.
- Biometric roadmap: if any feature touches biometric data or remote identification, map definitions (34)–(43) before architecture sign-off.
- Model lifecycle: align internal names (“validation set”) with Article 3(29)–(32) to avoid audit gaps.
- GPAI / downstream: trace whether you are a downstream provider (68) integrating someone else’s model.
- Re-verify on EUR-Lex after any consolidated-text update.
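As a sketch of the glossary-first idea above, a living register could key internal terminology to Article 3 point numbers so that audits and contracts cite the same vocabulary. The structure and field names below are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of an Article 3 glossary register keyed to point numbers.
# Entries and field names are illustrative assumptions; populate from the
# authentic EUR-Lex text for your own system class.
ARTICLE_3_REGISTER = {
    29: {"term": "training data", "internal_alias": "train split"},
    31: {"term": "validation data set", "internal_alias": "validation set"},
    32: {"term": "testing data", "internal_alias": "holdout set"},
    68: {"term": "downstream provider", "internal_alias": "integrator"},
}

def cite(point: int) -> str:
    """Return a citation string such as 'Article 3(31): validation data set'."""
    entry = ARTICLE_3_REGISTER[point]
    return f"Article 3({point}): {entry['term']}"
```

For example, `cite(31)` returns `"Article 3(31): validation data set"`, which keeps internal names like “validation set” traceable to the defined term.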
Official wording: Article 3 — Definitions
The following reproduces Article 3 of Regulation (EU) 2024/1689 (definitions (1)–(25) verbatim, and a topic summary for (26)–(68)).
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
(2) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;
(3) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
(4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
(5) ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
(6) ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;
(7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;
(8) ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor;
(9) ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market;
(10) ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;
(11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;
(14) ‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use;
(16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers;
(17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made available on the market;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;
(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;
(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled;
(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation;
(23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed;
(24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing;
(25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;
Definitions (26)–(68) cover the following topics from the same Article 3 of Regulation (EU) 2024/1689:
- (26) market surveillance authority; (27) harmonised standard; (28) common specification; (29) training data; (30) validation data; (31) validation data set; (32) testing data; (33) input data;
- (34) biometric data; (35) biometric identification; (36) biometric verification; (37) special categories of personal data; (38) sensitive operational data; (39) emotion recognition system; (40) biometric categorisation system; (41) remote biometric identification system; (42) real-time remote biometric identification system; (43) post-remote biometric identification system;
- (44) publicly accessible space; (45) law enforcement authority (with sub-points (a)–(b)); (46) law enforcement; (47) AI Office; (48) national competent authority;
- (49) serious incident (with sub-points (a)–(d): death or serious health harm; serious irreversible disruption of critical infrastructure; infringement of fundamental-rights obligations; serious harm to property or the environment);
- (50) personal data (by reference to GDPR Article 4(1)); (51) non-personal data; (52) profiling (by reference to GDPR Article 4(4));
- (53) real-world testing plan; (54) sandbox plan; (55) AI regulatory sandbox; (56) AI literacy; (57) testing in real-world conditions; (58) subject (for real-world testing); (59) informed consent;
- (60) deep fake; (61) widespread infringement (with sub-points (a)–(b)); (62) critical infrastructure (by reference to Directive (EU) 2022/2557);
- (63) general-purpose AI model; (64) high-impact capabilities; (65) systemic risk; (66) general-purpose AI system; (67) floating-point operation; (68) downstream provider.
Recitals (preamble) on EUR-Lex
The recitals in the same consolidated AI Act on EUR-Lex often explain why certain definitions (e.g. biometric or GPAI concepts) are drafted narrowly or broadly. Use the official preamble on EUR-Lex—do not rely on unofficial recital lists without checking sequence and wording against the authentic text.
Compliance checklist
- Maintain a living glossary keyed to Article 3 numbering for each AI product and vendor relationship.
- When roles shift (white-label, subcontracting, marketplace resale), re-run the Article 3(3)–(8) operator analysis.
- For biometric or emotion-inference features, log which Article 3 definitions your DPIA and its annexes cite.
- For foundation models, map Article 3(63)–(68) terms to your model card and integration architecture.
- Keep a dated EUR-Lex PDF/HTML of Article 3 in the compliance evidence set.
Related Articles
Article 1: Subject matter
Article 2: Scope
Article 4: AI literacy
Article 5: Prohibited AI Practices
Article 6: Classification Rules for High-Risk Systems
Annex III: High-Risk AI System Areas
Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems
Article 51: Classification of GPAI Models with Systemic Risk
Article 113: Entry into Force and Application Dates
Related annexes
- Annex III — High-risk use cases (uses many defined terms from Article 3)
Frequently asked questions
Does Article 3 tell me if my product is high-risk?
No. Article 3 supplies definitions; classification follows Article 6, Annex III, and related rules. Use Article 3 to label actors and concepts, then run the classification guides.
Can we invent our own “provider-lite” role?
Commercial nicknames are fine internally, but regulators read the Article 3 meanings. Contracts should map to the defined roles to avoid responsibility gaps.
Why does Article 3 reference GDPR?
Several terms (personal data, profiling, special categories) are defined by reference to Union data-protection law so the AI Act interoperates with GDPR and related instruments—see Article 3(50)–(52) and your counsel’s GDPR analysis.