AI Act for HR and Recruitment: The Complete Compliance Guide

EU AI Act compliance guide for HR and recruitment AI. High-risk classification, banned practices, vendor obligations, and bias testing for hiring.

Legalithm Team · 25 min read · Topic: AI Act · Updated April 2026


Recruitment and employment AI is the single most common high-risk category under the EU AI Act — and the one that touches the widest range of organisations. You do not need to be a technology company to fall within scope. If your organisation uses AI in hiring — CV screening, interview scoring, candidate ranking, performance monitoring, promotion decisions — the August 2, 2026 deadline for high-risk system compliance applies to you. That deadline is not distant. It is months away. And the obligations it triggers are substantial: conformity assessments, technical documentation, human oversight procedures, bias testing, candidate notification, log retention, and — for certain deployers — a fundamental rights impact assessment. Some practices in recruitment AI are already banned outright, with penalties reaching EUR 35 million or 7% of global turnover. This guide covers every obligation that applies to HR and recruitment AI, from the vendor building the tool to the employer using it, with practical steps, compliance checklists, and real-world scenarios.

TL;DR — AI Act and HR essentials

  • High-risk by default: AI systems used in recruitment, hiring, promotion, termination, task allocation, and performance monitoring are explicitly listed as high-risk in Annex III, point 4 of the EU AI Act.
  • August 2, 2026 deadline: High-risk obligations become enforceable on this date. Systems already on the market before it must also comply once they undergo a significant modification.
  • Already banned: Emotion recognition in workplaces and recruitment contexts, biometric categorisation inferring protected characteristics, social scoring, and subliminal manipulation have been prohibited since February 2, 2025 under Article 5.
  • Two sets of obligations: The AI vendor (provider) and the employer using the tool (deployer) each have independent compliance duties. Vendor compliance does not equal employer compliance.
  • Candidate rights: Candidates and employees must be informed they are subject to AI decision-making, have a right to explanation, and a right to human review.
  • Bias testing is mandatory: Article 10 requires examination of datasets for biases — particularly critical in employment where protected characteristics like age, gender, ethnicity, and disability are directly relevant.
  • Penalties: Up to EUR 35 million or 7% of global turnover for prohibited practices; up to EUR 15 million or 3% of global turnover for high-risk compliance failures. See the full penalties guide.
  • Assessment tool: Use the Legalithm AI Act Assessment to classify your HR AI systems and identify your obligations.

Which HR AI tools are high-risk?

Annex III, point 4 of the AI Act identifies AI systems intended to be used in employment, workers management, and access to self-employment as high-risk. The scope is broad and deliberately so. The European legislators recognised that AI-driven decisions about people's livelihoods carry inherently high fundamental-rights risks — affecting the right to non-discrimination, dignity, fair working conditions, and effective remedy.

The classification under Article 6 does not depend on how sophisticated the AI is. A simple keyword-matching algorithm that automatically rejects CVs is subject to the same high-risk regime as a deep-learning model that analyses video interviews. What matters is the function and context, not the technical complexity.
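To make this concrete, consider a deliberately trivial, hypothetical screening filter (the keyword set and function are invented for illustration). If logic this simple automatically rejects real applications, its function and context, not its sophistication, are what put it in scope:

```python
REQUIRED_KEYWORDS = {"python", "sql"}  # assumed job requirements, for illustration only

def auto_reject(cv_text: str) -> bool:
    """Reject any CV missing a required keyword. There is no machine learning
    here, yet a filter like this gating real applications performs exactly the
    Annex III, point 4 screening function described above."""
    tokens = set(cv_text.lower().split())
    return not REQUIRED_KEYWORDS.issubset(tokens)

print(auto_reject("Data analyst with Python and SQL experience"))  # False: progresses
print(auto_reject("Data analyst with spreadsheet experience"))     # True: screened out
```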

HR AI systems classified as high-risk

| AI system type | Why it is high-risk | Common examples |
|---|---|---|
| CV screening and ranking | Directly determines which candidates progress in the hiring pipeline | ATS ranking algorithms, AI-powered resume parsers that score candidates, automated shortlisting tools |
| Automated interview analysis | Evaluates candidate suitability based on interview performance | Video interview platforms that score verbal responses, natural language analysis of written answers |
| Psychometric and personality testing | Infers personality traits, cognitive abilities, or cultural fit from behavioural data | AI-scored situational judgement tests, gamified assessments with algorithmic scoring |
| Chatbot pre-qualification | Filters candidates through automated questioning before human review | Recruitment chatbots that ask screening questions and determine pass/fail |
| AI sourcing tools | Identify and target potential candidates from databases and public profiles | Tools that crawl LinkedIn or job boards and rank potential candidates by predicted fit |
| Performance monitoring | Continuously assesses employee productivity, behaviour, or output | Keystroke monitoring with AI analysis, productivity scoring dashboards, automated performance ratings |
| Promotion and termination decisions | Recommends or informs decisions about career progression or dismissal | Workforce analytics platforms that flag employees for performance improvement or recommend promotions |
| Task allocation | Assigns work, shifts, or projects to employees based on algorithmic decisions | Gig economy platforms that distribute tasks, warehouse management systems that assign shifts based on predicted output |
| Workforce planning | Predicts staffing needs, recommends layoffs, or optimises headcount | AI models that recommend which roles to eliminate in restructuring, demand-forecasting tools tied to staffing decisions |

The "material influence" threshold

A critical point that many organisations miss: the AI system does not need to make the final decision to be high-risk. If the AI system materially influences the outcome — by screening out candidates before a human sees them, by ranking candidates in an order that determines who gets interviewed, or by assigning a score that a human reviewer relies on — it falls within scope. The legislative intent is clear: the high-risk classification targets systems whose output has a significant effect on the employment relationship, regardless of whether a human formally signs off.

Real-world example: A company uses an AI tool to score 5,000 incoming CVs and presents the top 200 to human recruiters. The recruiter makes the "final" decision about which 20 candidates to interview — but the AI already eliminated 4,800 people. That AI tool is high-risk, and the fact that a human makes the final selection from the AI's shortlist does not change the classification.

For a detailed walkthrough of the classification process, see Is My AI System High-Risk? Classification Guide.

Already-banned practices in recruitment

Before addressing the high-risk obligations that take effect in August 2026, organisations must understand that certain AI practices in recruitment are already prohibited. The Article 5 bans on prohibited AI practices took effect on February 2, 2025. Violations are already enforceable and carry the highest tier of penalties under the AI Act.

Emotion recognition in interviews and workplaces

Article 5(1)(f) prohibits the use of AI systems to infer emotions of natural persons in the areas of workplace and education, except where the AI system is intended to be placed on the market or put into service for medical or safety reasons.

This means any AI tool that analyses a candidate's facial expressions, vocal tone, micro-expressions, body language, or physiological signals during a job interview to infer emotional states — stress, enthusiasm, confidence, deception — is prohibited. The ban applies regardless of whether the emotion inference is the primary function or a secondary feature. If the system produces an output about the candidate's emotional state and that output is used in the recruitment context, it is banned.

Real-world example: A video interview platform offers a feature that analyses candidates' facial expressions and tone of voice to produce an "engagement score" and a "confidence rating." This feature is a prohibited AI practice under the AI Act. Continuing to use it after February 2, 2025 exposes both the vendor and the employer to penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Biometric categorisation inferring protected characteristics

Article 5(1)(g) prohibits AI systems that categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Using AI to infer any of these characteristics from a candidate's photo, video appearance, or voice is prohibited.

Social scoring

Article 5(1)(c) prohibits AI systems that evaluate or classify natural persons based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental treatment in contexts unrelated to the original data collection — or disproportionate treatment relative to the social behaviour's gravity. This covers any AI system that assigns candidates a "social score" or "reputation score" based on aggregated behavioural data.

Subliminal manipulation

Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behaviour in a manner that causes or is reasonably likely to cause significant harm. In recruitment, this could cover AI-driven interfaces designed to manipulate candidate responses or behaviour during assessments through techniques the candidate cannot consciously perceive.

Penalty structure at a glance

| Violation | Maximum fine |
|---|---|
| Prohibited AI practices (Article 5) | EUR 35 million or 7% of global annual turnover |
| High-risk AI obligations (Articles 6–51) | EUR 15 million or 3% of global annual turnover |
| Supplying incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover |

For SMEs and startups, the lower of each pair applies. But for large enterprises using recruitment AI at scale, the turnover-based calculation can produce staggering figures. See the penalties and fines guide for the full calculation methodology, including how group turnover is assessed.
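To see why the turnover prong dominates for large organisations, here is a minimal sketch of the headline arithmetic, assuming the higher of the two values applies to large enterprises and the lower to SMEs, as described above (how group turnover is assessed is covered in the penalties guide):

```python
def fine_ceiling_eur(turnover_eur: float, fixed_cap_eur: float,
                     turnover_pct: float, is_sme: bool) -> float:
    """Headline fine ceiling under the AI Act's two-pronged formula:
    a fixed cap or a percentage of global annual turnover."""
    turnover_based = turnover_eur * turnover_pct
    # Large enterprises face the higher of the two values; SMEs the lower.
    return (min if is_sme else max)(fixed_cap_eur, turnover_based)

# Prohibited-practice tier (EUR 35m / 7%) for a group with EUR 2bn turnover:
print(fine_ceiling_eur(2_000_000_000, 35_000_000, 0.07, is_sme=False))  # 140000000.0
```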

Provider vs employer (deployer) obligations

One of the most dangerous assumptions in HR AI compliance is that buying a "compliant" tool from a vendor discharges the employer's obligations. It does not. The AI Act assigns independent, overlapping obligations to both the provider (the vendor that builds or sells the AI system) and the deployer (the employer that uses it). For a comprehensive comparison across all obligation types, see AI Act Provider vs Deployer Obligations Compared.

Provider obligations (the AI vendor)

The provider is the natural or legal person that develops the AI system or has it developed, and places it on the market or puts it into service under its own name or trademark. For HR AI, this is typically the software company selling the recruitment or workforce management tool.

Core provider obligations for high-risk HR AI:

  • Risk management system (Article 9): Establish and maintain a risk management system throughout the lifecycle, specifically addressing discrimination risks and unfair exclusion of qualified candidates.
  • Data governance (Article 10): Ensure training data is relevant, representative, and examined for possible biases — particularly critical where historical hiring data encodes systemic discrimination. See AI Bias Testing for EU AI Act Compliance.
  • Technical documentation (Article 11, Annex IV): Document the system's intended purpose, design, testing, performance metrics, known limitations, and instructions for use.
  • Automatic logging (Article 12): Build logging capabilities so every candidate score, ranking, or screening outcome is traceable.
  • Transparency (Article 13): Provide deployers with clear information about capabilities, limitations, human oversight requirements, and known biases.
  • Human oversight features (Article 14): Design the system so human reviewers can understand outputs, override decisions, and halt the system.
  • Conformity assessment (Article 43): Complete before market placement — typically self-assessment under Annex VI for HR AI. See Conformity Assessment: Self-Assessment vs Notified Body.
  • Post-market monitoring (Article 72): Systematically monitor performance, risks, and compliance after deployment.
  • EU database registration (Article 49): Register the system before market placement.

Deployer obligations (the employer using the tool)

The deployer is the organisation using the AI system under its own authority. In recruitment, this is the employer, the recruitment agency, or the HR department. The deployer's obligations are separate from and additional to whatever the provider has done.

Core deployer obligations for high-risk HR AI:

  • Use according to instructions (Article 26): Operate the system in accordance with the provider's instructions, respecting its intended purpose, input requirements, and documented limitations.
  • Human oversight (Article 14, Article 26): Assign competent, trained persons who understand the system's capabilities, limitations, and known biases, and who have genuine authority to override or reverse AI decisions.
  • Input data quality (Article 26): Ensure data fed into the system is relevant and representative. If the system was validated on structured CVs but you feed it unstructured application forms, you are responsible for that mismatch.
  • Log retention (Article 26): Retain automatically generated logs for at least six months. Given that employment discrimination claims can be filed years later, consider retaining logs for the applicable limitation period. A minimal log-record sketch follows this list.
  • Inform affected persons (Article 26(11)): Inform candidates and employees that AI is being used — even if the vendor's system has no built-in notification feature.
  • Fundamental rights impact assessment (Article 27): Required for public-sector employers, essential service providers, and education/vocational training institutions. Complete before first use. See the FRIA guide.
  • Cooperation with authorities: Provide market surveillance authorities with access to logs and documentation on request.
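To illustrate the logging and retention duties in practice, here is a minimal, hypothetical sketch of a deployer-side audit record. The field names and the 183-day retention constant are assumptions, not prescriptions from the Act; the point is that every AI-assisted outcome is traceable and kept for at least six months:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone
import json

RETENTION = timedelta(days=183)  # at least six months; extend to local limitation periods

@dataclass
class ScreeningLogEntry:
    candidate_ref: str       # pseudonymous reference, not raw personal data
    system_id: str           # which high-risk AI system produced the output
    model_version: str
    input_ref: str           # e.g. a hash of the CV document that was scored
    ai_score: float
    ai_recommendation: str   # e.g. "shortlist" / "reject"
    human_reviewer: str      # who exercised oversight over this outcome
    human_decision: str      # final decision; may differ from the AI output
    timestamp: str

entry = ScreeningLogEntry(
    candidate_ref="cand-7f3a", system_id="cv-screening-tool", model_version="2.4.1",
    input_ref="sha256:ab12...", ai_score=0.71, ai_recommendation="shortlist",
    human_reviewer="recruiter-042", human_decision="shortlist",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
record = asdict(entry)
record["retain_until"] = (datetime.now(timezone.utc) + RETENTION).isoformat()
print(json.dumps(record, indent=2))  # append to a tamper-evident audit store
```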

Comparison table: provider vs deployer for HR AI

| Obligation | Provider (vendor) | Deployer (employer) |
|---|---|---|
| Risk management system | Design, implement, maintain | Monitor for risks in practice; report to provider |
| Data governance and bias testing | Test training data for bias; mitigate | Ensure input data relevance and quality |
| Technical documentation | Prepare Annex IV documentation | Obtain and retain provider documentation |
| Logging | Build logging capabilities | Retain logs ≥ 6 months |
| Transparency | Provide instructions for use | Inform candidates/employees of AI use |
| Human oversight | Design oversight features | Assign trained persons to exercise oversight |
| Conformity assessment | Complete before market placement | Verify provider has completed assessment |
| FRIA | Not required | Required for public bodies and certain private deployers |
| Post-market monitoring | Systematic monitoring plan | Report malfunctions to provider |
| EU database registration | Register system | Register deployment (public bodies) |

Candidate and employee rights

The AI Act creates a set of rights for individuals affected by high-risk AI systems. In the employment context, this means candidates and employees have enforceable rights when AI is used to evaluate them.

Notification that AI is being used

Under Article 26(11), deployers must inform natural persons that they are subject to the use of a high-risk AI system. Job applicants must be told before or at the point of AI-assisted evaluation. Employees must be told when AI is used for performance monitoring, promotion decisions, or task allocation. The notification must be clear and meaningful — not buried on page 47 of a privacy notice.

Right to explanation of individual decisions

When a high-risk AI system produces a decision with legal effects or that similarly significantly affects a person, the affected person can obtain a clear explanation of the AI's role in the decision. A rejected candidate can request an explanation of how the AI contributed to the rejection. An employee subject to AI-informed performance review can request an explanation of what factors drove the rating. This overlaps with and reinforces GDPR Article 22, which almost always co-applies for HR AI in the EU.

Right to human review

Article 14 requires that human reviewers can override, disregard, or reverse the AI's output. If a candidate is rejected based primarily on an AI screening score and requests human review, the employer must ensure a qualified person genuinely reviews the decision — not simply rubber-stamps the AI's output.

Fundamental rights impact assessment (FRIA) requirements

Article 27 requires certain deployers to complete a FRIA before first use. For HR, this covers public-sector employers (government agencies, municipalities, public universities), essential service providers (banks, insurers), and education/vocational training providers. The FRIA must assess the impact on non-discrimination, dignity, privacy, effective remedy, and fair working conditions. The FRIA guide provides a step-by-step methodology.

Bias testing requirements for HR AI

Bias testing is not optional for high-risk AI systems — it is a legal requirement under Article 10. In the recruitment and employment context, the stakes are particularly high because the protected characteristics most relevant to employment law — age, gender, ethnicity, disability, religion — are precisely the characteristics most likely to be encoded in historical hiring data. For the full technical methodology, see AI Bias Testing for EU AI Act Compliance.

Protected characteristics in employment

The EU Charter of Fundamental Rights, the Employment Equality Directive (2000/78/EC), the Racial Equality Directive (2000/43/EC), and the Gender Equality Directive (2006/54/EC) establish the protected characteristics relevant to employment decisions:

| Protected characteristic | Source | Recruitment relevance |
|---|---|---|
| Gender / sex | EU Charter Art. 21; Gender Equality Directive (2006/54/EC) | Name, pronouns, employment gaps (parental leave) |
| Racial / ethnic origin | EU Charter Art. 21; Racial Equality Directive (2000/43/EC) | Name, address, education institution, language |
| Age | Employment Equality Directive (2000/78/EC) | Graduation year, years of experience, date of birth |
| Disability | Employment Equality Directive (2000/78/EC) | Employment gaps, accommodation history, career patterns |
| Religion | Employment Equality Directive (2000/78/EC) | Name, education institution, volunteer activities |
| Sexual orientation | Employment Equality Directive (2000/78/EC) | Volunteer activities, organisational memberships |
| Nationality | EU Charter Art. 21 | Citizenship, education country, language |

How bias enters recruitment AI

Recruitment AI is uniquely vulnerable to bias for three reasons. First, historical hiring data is biased by definition — training data from past decisions encodes whatever conscious or unconscious biases existed in the organisation's recruitment practices. Second, proxy variables are pervasive — even when protected characteristics are removed, AI models learn correlations with postcode (ethnicity, socioeconomic status), university name (social class), and employment gaps (gender, disability). Third, feedback loops amplify bias — if a biased tool rejects underrepresented candidates, future training data contains fewer successful examples from those groups, compounding the pattern over successive training cycles.
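A quick way to surface proxy leakage, sketched below with invented data and assumed column names, is to test how well the supposedly neutral features predict the protected attribute itself:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical applicant records: the protected attribute is excluded from the
# hiring model's inputs, but nominally neutral features may still encode it.
df = pd.DataFrame({
    "postcode_area": [1, 5, 1, 2, 5, 5, 2, 1, 5, 2, 1, 5],
    "gap_years":     [0, 2, 0, 1, 3, 2, 0, 0, 2, 1, 0, 3],
    "group":         [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1],  # protected attribute
})

X, y = df[["postcode_area", "gap_years"]], df["group"]
# If the features predict group membership much better than chance, they can
# act as proxies and quietly reintroduce the bias their removal was meant to fix.
scores = cross_val_score(LogisticRegression(), X, y, cv=3)
baseline = max(y.mean(), 1 - y.mean())  # accuracy of always guessing the majority
print(f"proxy predictability: {scores.mean():.2f} vs. majority baseline {baseline:.2f}")
```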

Practical bias testing workflow for HR AI

Step 1 — Identify relevant protected attributes: Map the protected characteristics from employment law to the features and proxies in your data. The table above is a starting point — but contextualise it for your system.

Step 2 — Disaggregate performance metrics: Calculate your system's key performance metrics (acceptance rate, score distribution, false positive/negative rates) broken down by each protected attribute. The aggregate accuracy of the system is irrelevant if it performs well for majority groups and poorly for minorities.
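A minimal sketch of this disaggregation, using invented outcomes and assumed column names (ai_shortlisted is the system's decision, qualified a ground-truth label from structured human review):

```python
import pandas as pd

df = pd.DataFrame({
    "group":          ["a", "a", "a", "a", "b", "b", "b", "b"],
    "qualified":      [1, 1, 0, 0, 1, 1, 0, 0],
    "ai_shortlisted": [1, 1, 1, 0, 1, 0, 0, 0],
})

def disaggregate(g: pd.DataFrame) -> pd.Series:
    tp = ((g.ai_shortlisted == 1) & (g.qualified == 1)).sum()
    fn = ((g.ai_shortlisted == 0) & (g.qualified == 1)).sum()
    fp = ((g.ai_shortlisted == 1) & (g.qualified == 0)).sum()
    tn = ((g.ai_shortlisted == 0) & (g.qualified == 0)).sum()
    return pd.Series({
        "selection_rate":      g.ai_shortlisted.mean(),
        "false_negative_rate": fn / (tp + fn) if (tp + fn) else float("nan"),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
    })

# Aggregate accuracy hides disparities; per-group rates expose them.
print(df.groupby("group")[["qualified", "ai_shortlisted"]].apply(disaggregate))
```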

Step 3 — Select appropriate fairness metrics: No single fairness metric captures all aspects of bias, and it is mathematically impossible to satisfy all metrics simultaneously. For recruitment, the most relevant are the following (a worked check follows the list):

  • Demographic parity: Are candidates from different groups selected at similar rates?
  • Equalized odds: Given the same qualification level, are candidates from different groups treated equally?
  • Predictive parity: Does a given AI score mean the same thing regardless of group membership?
  • Four-fifths rule: The US EEOC's 80% rule (each group's selection rate must be at least 80% of the highest group's selection rate) is a useful practical benchmark, though not legally mandated under EU law.
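As an illustration of how the four-fifths benchmark is applied to the disaggregated rates from Step 2, a minimal sketch with invented numbers:

```python
def four_fifths_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the most-selected group.
    Ratios below 0.8 flag potential adverse impact under the EEOC's
    four-fifths rule - a practical benchmark, not an EU legal threshold."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical per-group selection rates from the Step 2 disaggregation:
rates = {"group_a": 0.30, "group_b": 0.21, "group_c": 0.28}
for group, ratio in four_fifths_ratios(rates).items():
    print(f"{group}: ratio {ratio:.2f} -> {'REVIEW' if ratio < 0.8 else 'ok'}")
```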

Step 4 — Test with realistic data: Use production-representative data, not synthetic or idealised test sets. Bias manifests in the interaction between the model, the data distribution, and the operational context — testing on sanitised data misses real-world disparities.

Step 5 — Document everything: Annex IV requires documentation of your data examination measures, bias detection methodology, and remediation steps. Record what you tested, what you found, what thresholds you applied, and what actions you took.

Step 6 — Monitor in production: Bias testing is not a one-time gate. Article 9 requires ongoing risk management, and Article 72 requires post-market monitoring. Track fairness metrics continuously and trigger reassessment when distributions shift.
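Continuous tracking can be as simple as recomputing the same parity ratio over a rolling window of decisions and escalating when it degrades. A minimal sketch, with the window size and alert threshold as assumptions rather than regulatory values:

```python
from collections import deque

WINDOW = 500        # decisions per rolling window (assumed)
PARITY_FLOOR = 0.8  # alert threshold, mirroring the four-fifths benchmark (assumed)

window: deque = deque(maxlen=WINDOW)  # recent (group, shortlisted) pairs

def record_decision(group: str, shortlisted: int) -> None:
    """Append one screening outcome, then re-check parity across the window."""
    window.append((group, shortlisted))
    rates = {}
    for g in {g for g, _ in window}:
        outcomes = [s for gg, s in window if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    if len(rates) >= 2 and max(rates.values()) > 0:
        ratio = min(rates.values()) / max(rates.values())
        if ratio < PARITY_FLOOR:
            # Escalate into the Article 9 risk-management process for reassessment.
            print(f"ALERT: parity ratio {ratio:.2f} below {PARITY_FLOOR}: {rates}")

for g, s in [("a", 1), ("a", 1), ("b", 0), ("b", 1)]:
    record_decision(g, s)
```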

Compliance checklist for HR teams

This checklist is designed for HR departments and HR technology companies preparing for the August 2, 2026 deadline. It covers both provider and deployer obligations. Adapt it based on your role — use the Legalithm AI Act Assessment to determine your classification and obligations.

  1. Inventory all AI systems used in HR. List every tool, platform, or algorithm involved in recruitment, performance management, workforce planning, task allocation, or termination — including features you may not think of as "AI." See How to Build an AI Systems Inventory.

  2. Classify each system's risk level. Apply Article 6 and Annex III, point 4. Use the high-risk classification guide for borderline cases.

  3. Identify your role for each system. Provider (developed in-house), deployer (licensed from vendor), or both? If you substantially modified a vendor's system, you may be a provider under Article 25. See Provider vs Deployer Obligations.

  4. Check for already-banned practices. Review for prohibited practices: emotion recognition, biometric categorisation, social scoring, subliminal manipulation. Discontinue immediately — these bans are already in effect.

  5. Request provider documentation. For every third-party tool, request: technical documentation, instructions for use, conformity assessment evidence, CE marking, and EU database registration. You cannot meet deployer obligations without them.

  6. Implement human oversight procedures. Designate trained personnel for human oversight of each high-risk system. Document their authority, training, and the review/override process.

  7. Establish candidate and employee notification. Create clear notifications for application forms, career portals, interview invitations, and employee handbooks.

  8. Configure and verify logging. Ensure logging captures inputs, outputs, and decisions. Establish a retention policy of at least six months.

  9. Conduct bias testing. Test against protected characteristics relevant to employment law. Document methodology, findings, and remediation. See the bias testing guide.

  10. Complete FRIA if applicable. Public-sector employers and essential service providers must complete a fundamental rights impact assessment before deployment.

  11. Establish monitoring and incident reporting. Detect malfunctions, performance degradation, and bias drift. Define escalation paths. See Article 99 for enforcement timelines.

  12. Document everything. Maintain a compliance file per system: classification rationale, provider documentation, oversight procedures, bias reports, notification templates, log retention records, FRIA, and incident reports.

For a broader compliance checklist covering all AI Act obligations, see the EU AI Act Compliance Checklist 2026.

Real-world compliance scenarios

Scenario 1: HR tech startup providing AI CV screening SaaS

Company: TalentFilter is a 40-person startup that has built an AI-powered CV screening tool. Employers upload job descriptions and receive ranked candidate lists. The system uses NLP to parse CVs, extract skills and experience, and assign a "match score" that predicts job performance.

Role: TalentFilter is the provider under the AI Act. It develops the AI system and places it on the market under its own name.

Key obligations: TalentFilter must complete risk management under Article 9, addressing the risk that its match score disadvantages candidates based on protected characteristics. It must prepare Annex IV technical documentation with performance metrics disaggregated by protected groups. It must conduct bias testing under Article 10 using representative training data. It must design human oversight features so employers can review and override rankings. And it must complete conformity assessment and EU database registration before August 2026.

Real-world example: TalentFilter discovers during bias testing that its system assigns systematically lower match scores to candidates with non-Western European names, because the training data overrepresented CVs from Western European candidates who were subsequently hired. The system learned to associate Western European name patterns with higher quality. TalentFilter must mitigate this — through debiasing techniques, rebalanced training data, or post-processing score adjustments — and document the remediation in its technical documentation.

Scenario 2: Large employer using video interview AI

Company: EuroBank AG is a major European bank with 45,000 employees. It licenses "InterviewIQ," a video interview platform that records candidate responses and produces AI-generated scores for verbal fluency, structured thinking, and domain knowledge. InterviewIQ previously offered an "engagement analysis" feature based on facial expression recognition.

Role: EuroBank is the deployer. InterviewIQ's vendor is the provider.

Key obligations: EuroBank must verify InterviewIQ's conformity assessment, CE marking, and EU database registration. It must immediately discontinue the "engagement analysis" feature — facial expression analysis is a prohibited practice banned since February 2, 2025, carrying penalties up to EUR 35 million or 7% of global turnover. EuroBank must assign trained HR professionals to exercise meaningful human oversight and inform every candidate before the interview that AI will analyse their responses. As a banking institution, EuroBank must complete a FRIA and retain logs for at least six months.

Real-world example: During a FRIA, EuroBank discovers that InterviewIQ's verbal fluency scoring correlates strongly with native language proficiency, systematically disadvantaging candidates whose first language is not the interview language — even when those candidates possess the technical skills required for the role. EuroBank reports this to InterviewIQ's vendor and, pending remediation, assigns human reviewers to manually evaluate candidates whose verbal fluency scores are below threshold before any rejection decision.

Scenario 3: Recruitment agency using AI candidate matching

Company: StaffConnect is a recruitment agency that uses an AI platform to match candidates from its database to client job openings. The platform analyses candidate CVs, past placement outcomes, and client feedback to predict candidate-job fit. StaffConnect built the matching algorithm in-house but uses third-party NLP models for CV parsing.

Role: This is a hybrid scenario. StaffConnect is the provider of the matching algorithm (it developed and operates it under its own authority). It is also a deployer of the third-party NLP model used for CV parsing. The classification under Article 25 depends on whether StaffConnect has substantially modified the NLP model or merely uses it as designed.

Key obligations: For the matching algorithm, StaffConnect bears full provider obligations — risk management, technical documentation, conformity assessment, bias testing, and post-market monitoring. For the third-party NLP model, it must verify the vendor's compliance and fulfil deployer obligations. StaffConnect must be vigilant about feedback loops: if the matching algorithm trains on past placement outcomes that reflect consultant biases, it will learn and amplify them. If StaffConnect has fine-tuned or significantly modified the NLP model, it may have assumed provider obligations under Article 25.

Real-world example: StaffConnect's matching algorithm is trained on 10 years of placement data. Analysis reveals that the algorithm strongly favours candidates who previously worked at Fortune 500 companies — not because those candidates perform better in the roles being filled, but because StaffConnect's consultants historically preferred those candidates and the system learned to replicate that preference. The bias is not against a protected characteristic per se, but it correlates with socioeconomic background and, indirectly, with ethnicity and nationality. StaffConnect must test for these indirect effects and mitigate them.

Common mistakes to avoid

1. Assuming vendor compliance covers employer obligations. The most prevalent mistake. An AI vendor's conformity assessment, CE marking, and technical documentation satisfy the provider's obligations. They do not satisfy the employer's independent deployer obligations: human oversight, candidate notification, log retention, input data quality, and — where applicable — FRIA. These are separate legal duties. You cannot outsource them by contract. See Provider vs Deployer Obligations.

2. Failing to notify candidates that AI is used. Article 26(11) places the notification obligation squarely on the deployer. Many employers assume the recruitment platform handles this, or that a general privacy notice is sufficient. It is not. Candidates must receive a specific, clear notification that an AI system is being used to evaluate them — ideally before the evaluation occurs.

3. Not retaining automatically generated logs. The minimum retention period is six months. Many employers do not configure their systems to retain AI decision logs, or rely on vendors to retain them without verifying. If a market surveillance authority requests logs during an investigation and they do not exist, the employer faces penalties for non-compliance with Article 26 — and loses the ability to demonstrate that the system operated as intended.

4. Ignoring the emotion recognition ban. Numerous video interview platforms historically offered features that analyse candidates' facial expressions, tone of voice, or body language. These features are prohibited under Article 5(1)(f) when used in workplaces or recruitment contexts. The ban took effect on February 2, 2025. If you are still using such features, you are already in violation. Check every feature of every interview platform you use — the emotion analysis may be an optional setting that was enabled by default.

5. Treating "human in the loop" as automatic compliance. Having a human formally approve an AI-generated decision does not satisfy the human oversight requirement. The human must be competent, trained, informed about the system's methodology and limitations, and have genuine authority and capacity to override. If a recruiter processes 500 AI-ranked CVs per day and rubber-stamps the ranking because they have no practical capacity to review individually, that is not meaningful human oversight.

6. Overlooking AI embedded in existing HR software. Many organisations do not realise that their existing HR platforms contain AI features. ATS systems with automated ranking, performance management tools with predictive analytics, workforce planning modules with algorithmic recommendations — these are often marketed as standard features rather than AI systems. Conduct a thorough AI systems inventory to identify all AI components within your HR technology stack.

7. Failing to test for bias against proxy variables. Removing protected characteristics from model inputs does not prevent discrimination. AI models routinely learn to discriminate on proxy variables — postcode, university name, employment gaps, hobbies, language patterns — that correlate with protected characteristics. Your bias testing must examine outcomes disaggregated by protected attributes, not merely confirm that protected attributes are excluded from inputs.

Frequently Asked Questions

Is my applicant tracking system (ATS) high-risk?

It depends on what the ATS does. If it merely stores applications and facilitates scheduling, it is likely not an AI system and therefore not in scope. If it uses algorithms to rank, score, filter, or recommend candidates, those features are high-risk under Annex III, point 4. Many modern ATS platforms have AI features enabled by default — check with your vendor.

What if we only use AI for initial screening, and humans make the final decision?

The AI system is still high-risk. If the AI screens out 90% of applicants before a human sees them, it has materially determined the outcome for those 90%. The human decision applies only to the pool the AI already curated. Full compliance obligations apply.

Do we need to tell candidates we use AI in recruitment?

Yes. Article 26(11) requires deployers to inform affected persons — this is an employer obligation, not the vendor's. Notification should be clear, specific, and provided before AI-assisted evaluation. Include it on the careers page, in the application form, and in interview invitations.

Can we still use AI video interview platforms?

Yes — but you can only use AI to evaluate the content of responses (structured thinking, domain knowledge). You cannot use AI to evaluate emotions, facial expressions, tone of voice, or body language — that constitutes prohibited emotion recognition under Article 5(1)(f). Verify with your vendor and permanently disable any emotion-inference features.

Our AI vendor says they are "AI Act compliant." Is that sufficient?

No. Vendor compliance covers provider obligations only. Your deployer obligations — human oversight, candidate notification, log retention, FRIA — are separate and independent. Request the vendor's conformity assessment documentation, EU declaration of conformity, CE marking, and EU database registration number. Claims without documentation are insufficient. See Provider vs Deployer Obligations.

Does the AI Act apply to AI tools we use for internal HR processes, not just external recruitment?

Yes. Annex III, point 4 covers employment, workers management, and access to self-employment — including performance monitoring, promotion decisions, task allocation, shift scheduling, and termination recommendations. Internal-only tools are in scope.

Next steps

The August 2, 2026 deadline for high-risk AI system compliance is approaching. For HR departments and HR technology companies, the path forward requires immediate action:

  1. Start with classification. Use the Legalithm AI Act Assessment to classify every AI system in your HR technology stack and identify your role and obligations for each.

  2. Eliminate banned practices now. If any recruitment tool uses emotion recognition, biometric categorisation of protected characteristics, or social scoring, discontinue it immediately. These practices are already prohibited.

  3. Engage your vendors. Request compliance documentation from every AI vendor in your HR technology stack. If a vendor cannot demonstrate a clear path to compliance by August 2026, begin evaluating alternatives.

  4. Build internal capabilities. Train HR professionals on AI oversight responsibilities. Develop notification templates, log retention policies, and bias testing workflows.

  5. Document continuously. Every compliance action — classification decisions, vendor assessments, bias testing results, oversight procedures — must be documented. The documentation is not bureaucracy; it is your evidence of compliance.

The EU AI Act Compliance Checklist 2026 provides a comprehensive framework for all obligations. For enforcement timelines and key dates, see Article 99.

