Chapter I — General provisions

Article 1: Subject matter

Applies from 2 August 2026 · EUR-Lex verified April 2026

Article 1 states the EU Artificial Intelligence Act’s subject matter: it lays down harmonised rules to improve the internal market for AI; protect health, safety, fundamental rights, democracy, the rule of law, and the environment; and support innovation—while listing the main building blocks (prohibited practices, high-risk rules, transparency, GPAI, governance, and more). Always verify paragraph-level wording on EUR-Lex.

Who does this apply to?

  • Anyone affected by AI systems placed on the EU market, put into service, or used in the Union—including providers, deployers, importers, distributors, and product manufacturers where the Act applies
  • Public authorities shaping AI policy, procurement, and supervision in Member States
  • Third-country operators whose AI outputs or models are used or placed in the Union

Scenarios

A U.S. SaaS company offers an API that scores EU job applicants; it has no EU office but sells to French employers.

Article 1(2)(a) situates such cases in the Act’s harmonised rules on placing on the market, putting into service, and use—then Article 2 and onward define scope and roles in detail.
Ref. Art. 1(2)(a)

A national ministry wants to deploy biometric analytics in public space for policing.

The subject matter covers prohibited practices as well as the high-risk and transparency layers; Article 5 and the classification rules (e.g. Article 6, Annex III) determine what is allowed and under which regime.
Ref. Art. 1(2)(b)–(d)

A foundation-model vendor publishes open weights and documentation for EU developers.

Article 1(2)(e) explicitly brings general-purpose AI models into the Regulation’s harmonised framework (detailed in Chapter V and related annexes).
Ref. Art. 1(2)(e)

An SME asks whether the Act only applies to “dangerous” robots.

Article 1(1) lists multiple aims (internal market, trustworthy AI, rights, democracy, environment); Article 1(2) shows obligations span prohibited, high-risk, limited-risk, and GPAI—not only physical robots.
Ref. Art. 1(1)–(2)

What Article 1 does (in plain terms)

Article 1 answers what the EU AI Act is for. Paragraph 1 states the Regulation’s aims, including improving the functioning of the internal market for AI, promoting human-centric and trustworthy AI, and ensuring a high level of protection of health, safety, fundamental rights, democracy, the rule of law, and the environment—while supporting innovation (especially for SMEs and start-ups).

Paragraph 2 lists the main harmonised rule areas the Act establishes. In practice, think of it as the table of contents for the whole law: market and use rules, prohibitions, high-risk requirements, transparency for certain systems, GPAI model rules, governance and enforcement, and measures supporting innovation.

Important: Article 1 does not define who you are (provider vs deployer) or every boundary exception—that is Article 2 (scope) and Article 3 (definitions), read together with Chapter VIII and Annex I adjustments.

Article 1(2): the seven harmonised building blocks

The official text groups the subject matter into lettered items. Use EUR-Lex for exact wording; below is a navigation map:

  • (a) Placing on the market, putting into service, and use of AI systems in the Union
  • (b) Prohibited AI practices (Chapter II)
  • (c) High-risk AI systems, plus obligations on operators (Chapter III and annexes)
  • (d) Transparency obligations for certain AI systems
  • (e) Rules on placing general-purpose AI models on the market (Chapter V and related annexes)
  • (f) Market monitoring, market surveillance, governance, and enforcement
  • (g) Measures to support innovation, in particular for SMEs and start-ups

When you triage a product, walk this list top-to-bottom: first check prohibitions, then classification (high-risk / GPAI), then transparency-only duties, then cross-cutting governance obligations.
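That top-to-bottom triage order can be sketched as a small helper for internal tooling. This is purely illustrative: the `triage()` function, its input keys, and the category labels are assumptions for the sketch, not terminology from the Act, and a real assessment requires legal analysis of Articles 2–7 and beyond.

```python
# Illustrative triage order from Article 1(2): prohibitions first, then
# classification (high-risk / GPAI), then transparency-only duties, then
# cross-cutting governance. Labels and keys are hypothetical.

def triage(system: dict) -> list[str]:
    """Return the rule layers a toy 'system' description would trigger."""
    layers = []
    if system.get("prohibited_practice"):      # Art. 5 hard stop
        return ["prohibited (Art. 5) - stop"]
    if system.get("high_risk"):                # Art. 6 / Annex III
        layers.append("high-risk requirements (Chapter III)")
    if system.get("gpai_model"):               # Chapter V
        layers.append("GPAI model rules (Arts. 51-56)")
    if system.get("interacts_with_humans"):    # Art. 50
        layers.append("transparency duties (Art. 50)")
    layers.append("cross-cutting governance and enforcement")
    return layers

print(triage({"high_risk": True, "interacts_with_humans": True}))
```

Note the early return for prohibited practices: once Article 5 bites, the other layers are moot, which is why prohibitions head the checklist.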

How Article 1 connects to the rest of the Act

  • Article 2 — Scope: territorial and material boundaries (what is in / out of the Regulation).
  • Article 3 — Definitions: provider, deployer, AI system, high-risk, GPAI model, etc.
  • Article 4 — AI literacy: competence measures for providers and deployers, tied to Article 3(56).
  • Article 5 — First operational “hard stop”: prohibited practices.
  • Article 6 + Annex III — Whether a system is high-risk.
  • Article 50 and related provisions — Transparency for limited-risk use cases.
  • Articles 51–56 — GPAI models on the Union market.
  • Article 113 — Application dates (staggered dates of application for many rules).
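The navigation map above can also live as a small lookup table, e.g. inside a compliance wiki generator. The keys and one-line descriptions below are paraphrases for illustration, not official text; verify each against EUR-Lex before relying on them.

```python
# Hypothetical lookup table mirroring the "how Article 1 connects" map.
# Descriptions are paraphrases, not official wording.

AI_ACT_MAP = {
    "Art. 2":      "scope: territorial and material boundaries",
    "Art. 3":      "definitions: provider, deployer, AI system, GPAI model",
    "Art. 4":      "AI literacy measures for providers and deployers",
    "Art. 5":      "prohibited AI practices",
    "Art. 6":      "high-risk classification (with Annex III)",
    "Art. 50":     "transparency for limited-risk use cases",
    "Arts. 51-56": "GPAI models on the Union market",
    "Art. 113":    "staggered application dates",
}

def where_to_look(topic: str) -> list[str]:
    """Return article keys whose description mentions the topic."""
    needle = topic.lower()
    return [art for art, desc in AI_ACT_MAP.items() if needle in desc.lower()]

print(where_to_look("gpai"))  # → ['Art. 3', 'Arts. 51-56']
```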

Article 1 is therefore the orientation clause: if a stakeholder asks “what law is this?”, Article 1 is the authoritative answer—then you drill into the articles named above.

Practical checklist (orientation)

  • Read Article 1 once per program when you onboard legal, product, and security teams—it frames every downstream obligation.
  • Map your offering against Article 1(2)(a)–(g) before deep-diving into Annex III or GPAI chapters.
  • Pair Article 1 with Article 2 in every jurisdictional memo (subject matter ≠ full scope).
  • Revisit after amendments: the consolidated OJ/EUR-Lex text is the source of truth if the Union adjusts timelines or cross-references.

Official wording (excerpt): Article 1(1) and Article 1(2)(a)–(g)

Note: The following paragraphs reproduce the enacting terms of Article 1 as commonly cited from the English consolidated text of Regulation (EU) 2024/1689 on EUR-Lex. Always re-open EUR-Lex for the definitive wording, numbering, and any later amendments before compliance decisions.

1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.

2. This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;

(b) prohibitions of certain AI practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for certain AI systems;

(e) harmonised rules for the placing on the market of general-purpose AI models;

(f) rules on market monitoring, market surveillance, governance and enforcement;

(g) measures to support innovation, with a particular focus on SMEs, including start-ups.

Recitals (preamble) on EUR-Lex

The recitals (preamble) sit in the same consolidated AI Act on EUR-Lex as the articles and are often used when interpreting subject-matter clauses such as Article 1. Use the official preamble on EUR-Lex (table of contents / “Whereas” section)—do not rely on unofficial recital lists without checking the sequence and wording against the authentic text.

Compliance checklist

  • Use Article 1(2) as a structured agenda when scoping any new AI feature or vendor (prohibited → high-risk → transparency → GPAI → governance).
  • Link board-level AI policy to Article 1(1) aims (trustworthy AI, rights, democracy, environment) for proportionality discussions.
  • When briefing non-lawyers, cite Article 1 first, then point to Article 2 for “does this apply to us?” and Article 3 for role definitions.
  • Keep a dated EUR-Lex print/PDF in the compliance record; do not rely on secondary summaries alone for audits.
  • Coordinate with data protection (GDPR) and product-safety teams—Article 1 situates the AI Act alongside amended sector acts listed in the Regulation’s title and recitals.

Related annexes

  • Annex III — High-risk AI systems (classification context for Article 1(2)(c))
  • Annex XI — Technical documentation for GPAI models (context for Article 1(2)(e))

Frequently asked questions

Does Article 1 tell me if my AI is “high risk”?

No. Article 1 explains what the Regulation covers at a high level. Classification follows Articles 5–7, Annex III, and related provisions. Start with Article 1 for orientation, then use the classification guides.

Does Article 1 apply outside the European Union?

Article 1 states the Regulation’s subject matter for the Union legal order. Whether a specific third-country operator is in scope depends on Article 2 and related rules (e.g. placing on the market, putting into service, or use within the Union).

Is Article 1 enough for an audit response?

Regulators expect conformity with operational articles and annexes, not a restatement of Article 1. Use Article 1 in policies as context; evidence should map to the specific articles your system triggers (documentation, risk management, logging, etc.).