
Article 5: Prohibited AI Practices

Article 5 of the EU AI Act prohibits eight categories of AI practices that pose an unacceptable risk to fundamental rights. These prohibitions became enforceable on 2 February 2025.

Who does this apply to?

  • All AI providers placing AI systems on the EU market
  • All deployers using AI systems in the EU
  • Organizations outside the EU whose AI systems produce output that is used in the EU

What Is Prohibited?

Article 5 identifies eight categories of AI practices that are banned entirely:

1. Subliminal manipulation — AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behavior (Article 5(1)(a))

2. Exploiting vulnerabilities — AI systems that exploit vulnerabilities of specific groups (age, disability, social/economic situation) to materially distort behavior (Article 5(1)(b))

3. Social scoring — AI systems that evaluate or classify natural persons or groups over time based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to unjustified or disproportionate detrimental treatment, whether deployed by public or private actors (Article 5(1)(c))

4. Individual predictive policing — AI systems that make risk assessments of natural persons to predict criminal offences based solely on profiling or personality traits (Article 5(1)(d))

5. Untargeted facial image scraping — AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage (Article 5(1)(e))

6. Workplace/education emotion recognition — AI systems that infer emotions of natural persons in the areas of workplace and education institutions, except for medical or safety reasons (Article 5(1)(f))

7. Biometric categorisation by sensitive attributes — AI systems that categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation (Article 5(1)(g))

8. Real-time remote biometric identification in public spaces — AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions (Article 5(1)(h))
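
For teams adding an internal screening step, the eight categories above can be encoded as a simple checklist. The sketch below (Python, with hypothetical names) is an illustration of that data structure, not a compliance tool: it pairs each category with its Article 5 reference so a review can walk through them one by one.

    # Hypothetical checklist of the eight Article 5 prohibitions,
    # for an internal AI-system review. Illustrative only.
    PROHIBITED_PRACTICES = [
        ("Subliminal manipulation", "Article 5(1)(a)"),
        ("Exploiting vulnerabilities", "Article 5(1)(b)"),
        ("Social scoring", "Article 5(1)(c)"),
        ("Individual predictive policing", "Article 5(1)(d)"),
        ("Untargeted facial image scraping", "Article 5(1)(e)"),
        ("Workplace/education emotion recognition", "Article 5(1)(f)"),
        ("Biometric categorisation by sensitive attributes", "Article 5(1)(g)"),
        ("Real-time remote biometric identification", "Article 5(1)(h)"),
    ]

    def triggered_articles(flagged: set) -> list:
        """Return the Article 5 references for each flagged practice."""
        return [ref for name, ref in PROHIBITED_PRACTICES if name in flagged]

    # Example: a system flagged during review for emotion recognition at work
    print(triggered_articles({"Workplace/education emotion recognition"}))
    # -> ['Article 5(1)(f)']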

Practical Examples

  • A government deploying an AI system to score citizens' 'trustworthiness' based on social media activity → Prohibited (social scoring)
  • A retailer using AI to detect employee emotions during work shifts → Prohibited (workplace emotion recognition)
  • A company scraping social media photos to build a facial recognition database → Prohibited (untargeted facial scraping)
  • A police department using AI to predict which individuals will commit crimes based purely on demographics → Prohibited (predictive policing based on profiling)
  • An advertiser using subliminal techniques in AI-generated content to manipulate purchasing decisions → Prohibited (subliminal manipulation)

Exceptions

Article 5 includes limited exceptions for law enforcement use of real-time remote biometric identification (Article 5(1)(h)):

  • Targeted search for specific victims of abduction, human trafficking, or sexual exploitation, and the search for missing persons
  • Prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack
  • Localisation or identification of a person suspected of a criminal offence listed in Annex II that is punishable by a custodial sentence or detention order with a maximum period of at least four years

These exceptions require prior authorisation by a judicial authority or an independent administrative authority whose decision is binding, and they remain subject to strict necessity and proportionality requirements.

Penalties

Violations of Article 5 carry the highest penalties under the AI Act:

  • Up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for undertakings
  • For SMEs and startups, the same thresholds apply, but the fine is capped at whichever of the two amounts is lower (Article 99(6))

These are the highest fines available under the AI Act, well above the €15 million / 3% ceiling that applies to most other violations.
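
The "whichever is higher" cap for undertakings, and the "whichever is lower" cap for SMEs under Article 99(6), are easy to misread, so here is a minimal Python sketch of the arithmetic. The function name is a hypothetical illustration, and none of this is legal advice.

    def article5_fine_ceiling(annual_turnover_eur: float, is_sme: bool = False) -> float:
        """Maximum administrative fine for an Article 5 violation.

        Illustrative reading of Articles 99(3) and 99(6):
        - undertakings: the HIGHER of EUR 35 million or 7% of total
          worldwide annual turnover for the preceding financial year;
        - SMEs and startups: the LOWER of the same two amounts.
        """
        fixed_cap = 35_000_000                      # EUR 35 million
        turnover_cap = 0.07 * annual_turnover_eur   # 7% of worldwide turnover
        return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

    # A large undertaking with EUR 1 billion turnover: 7% = EUR 70M > EUR 35M
    print(article5_fine_ceiling(1_000_000_000))             # 70000000.0

    # An SME with EUR 20 million turnover: 7% = EUR 1.4M < EUR 35M
    print(article5_fine_ceiling(20_000_000, is_sme=True))   # 1400000.0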

Not sure if your AI system involves prohibited practices? Use our free assessment tool.

Start Free Assessment