
Article 4: AI literacy

Applies from 2 February 2025 · EUR-Lex verified Apr 2026

Article 4 requires providers and deployers of AI systems to take measures so that staff and others operating AI on their behalf reach a sufficient level of AI literacy, calibrated to technical background, training, deployment context, and end-users or groups affected. It complements Article 3(56) (definition of AI literacy) and sits alongside—not instead of—substantive risk, documentation, and transparency duties in later chapters.

Who does this apply to?

  • Providers of AI systems responsible for organisational measures, onboarding, and supervision of staff who operate or oversee AI
  • Deployers using AI under their authority who must ensure operators understand capabilities, limits, and misuse risks
  • HR, L&D, and compliance teams designing training, role profiles, and vendor oversight tied to AI rollouts
  • Procurement and security teams briefing “persons dealing with operation and use” of AI on behalf of the organisation

Scenarios

A hospital deploys a diagnostic support model; radiographers click suggestions without training.

Article 4 expects proportionate measures so those operating the system understand limitations and oversight—complementing (not replacing) high-risk documentation and human oversight rules.
Ref. Art. 4

A SaaS provider ships an API; customer success staff give legal guarantees in calls.

Provider-side literacy should cover what frontline staff may promise; Article 4 targets competence of persons acting on the provider’s behalf, alongside marketing accuracy rules elsewhere.
Ref. Art. 4

A retailer rolls out emotion-analysis cameras to store managers only.

Article 4 asks deployers to consider persons or groups on whom systems are used—here shoppers and staff—not only internal operators.
Ref. Art. 4

Engineers have PhDs but no policy training on prohibited uses.

Article 4 requires taking into account technical knowledge, experience, education and training—literacy is not only “more coding courses”; it can include legal and ethics guardrails.
Ref. Art. 4

What Article 4 does (in plain terms)

Article 4 is a horizontal competence obligation: if you are a provider or deployer of an AI system, you must take measures so that your staff and other people who operate or use AI on your behalf reach a sufficient level of AI literacy—as far as reasonably achievable (“to their best extent” in the English consolidated text).

The duty is contextual: you must account for people’s technical knowledge, experience, education, training, the context of use, and who is affected by the system.

Article 4 does not replace classification, data governance, logging, or transparency duties—it feeds proportionate implementation of those duties by ensuring people understand what the system does, when it fails, and where law draws red lines.

How Article 4 links to “AI literacy” in Article 3

The term AI literacy is defined in Article 3 (point (56) in the consolidated numbering): the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and the possible harm it can cause.

Article 4 operationalises that concept for providers and deployers through organisational measures. When you document compliance, cross-reference your training programme to both Article 4 and the Article 3(56) definition text on EUR-Lex.

How Article 4 connects to the rest of the Act

  • Article 1 — Subject matter: literacy supports the trustworthy deployment called for in the Regulation’s aims.
  • Article 2 — Scope: Article 4 applies where provider/deployer obligations already bite.
  • Article 3 — Definitions: AI literacy (point (56)) and actor labels (provider, deployer).
  • Article 5 — Prohibited practices: training should cover red lines staff must not cross in sales or operations.
  • Article 6 + Annex III — High-risk systems need documentation and oversight; literacy makes those controls real.
  • Article 13 / Article 14 — Literacy underpins transparency to users and human oversight in practice.
  • Article 113 — Application dates determine when Article 4 must be met for your pathway.

Practical checklist (AI literacy)

  • Role-based curricula for engineering, product, legal, support, and leadership—mapped to Article 4 factors.
  • Vendor playbook when outsourcers “operate” your AI: contractual literacy duties and audit rights.
  • Affected-person lens: training for teams deploying AI in sensitive contexts (workers, patients, students, benefits claimants).
  • Join with risk management: link literacy records to risk assessments and incident reviews.
  • Refresh triggers: new model version, new geography, or post-incident lessons learned.
  • Evidence: dated attendance, assessments, and policy acknowledgements stored like other compliance artefacts.
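
The evidence and refresh-trigger items above can be sketched as a minimal record structure. This is an illustrative assumption only: Article 4 prescribes no schema, and every name here (`LiteracyRecord`, `needs_refresh`, the specific trigger conditions) is hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LiteracyRecord:
    """One person's dated AI literacy training evidence (hypothetical schema)."""
    person: str
    role: str                  # e.g. "radiographer", "customer success"
    completed_on: date         # dated attendance/assessment record
    model_version: str         # model version the training covered
    assessment_passed: bool

def needs_refresh(record: LiteracyRecord,
                  current_model_version: str,
                  last_incident: Optional[date] = None) -> bool:
    """Flag a record for retraining on the refresh triggers listed above:
    a new model version, a post-incident review, or a failed assessment."""
    if record.model_version != current_model_version:
        return True
    if last_incident is not None and last_incident > record.completed_on:
        return True
    return not record.assessment_passed
```

Storing records like these alongside risk assessments makes it straightforward to answer a regulator’s “who was trained, on what, and when” with the same rigour as other compliance artefacts.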

Official wording (excerpt): Article 4 in full

Editorial note: The following paragraph reproduces Article 4 as commonly cited from the English consolidated text of Regulation (EU) 2024/1689 on EUR-Lex. Always re-open EUR-Lex for the definitive wording and any later amendments before compliance decisions.

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

Recitals (preamble) on EUR-Lex

The recitals in the same consolidated AI Act on EUR-Lex often explain why competence-building obligations sit in Chapter I. Use the official preamble on EUR-Lex; do not rely on unofficial recital lists without checking sequence and wording against the authentic text.

Compliance checklist

  • Publish an AI literacy policy aligned to Article 4 factors and the Article 3(56) definition of AI literacy.
  • Integrate literacy checkpoints into release gates for new AI features or vendors.
  • Document who counts as “persons … on their behalf” for each AI system (staff, temps, BPO).
  • Pair literacy metrics with oversight and incident metrics for board reporting.
  • Re-run training impact assessments when models or deployment contexts change materially.


Frequently asked questions

Is a one-hour video enough for Article 4?

Article 4 does not prescribe hours; it requires measures toward a sufficient level of literacy given knowledge, training, context, and affected persons. Regulators will look for proportionate, role-specific evidence—not a single format.

Does Article 4 apply to micro-enterprises?

The duty applies to in-scope providers and deployers; the text of Article 4 itself contains no SME carve-out. Scale and proportionality still matter under general EU law principles—confirm with counsel for your facts.

How is Article 4 different from codes of conduct?

Article 4 is a binding literacy baseline for providers/deployers; codes of conduct (Article 95 and related) can add sectoral best practices but do not replace Article 4.