Is Employment AI High-Risk Under EU AI Act?
Yes—explicitly. AI used for employment decisions is classified as high-risk under Annex III, point 4. This guide covers the full scope of regulated employment AI, compliance requirements, and what you need to do before August 2026.
Quick Answer: HIGH-RISK
Employment AI is explicitly listed in Annex III, point 4 of EU Regulation 2024/1689 (the AI Act). This includes AI used for:
- Recruitment and candidate screening
- Performance monitoring and evaluation
- Promotion and termination decisions
Compliance deadline: August 2, 2026. Penalty: up to €15 million or 3% of global annual turnover, whichever is higher.
Annex III Employment Category Explained
The EU AI Act creates a risk-based classification system. Annex III enumerates the high-risk use cases requiring full compliance with Articles 8-15. Employment is point 4:
Annex III, Point 4: Employment, Workers Management, and Access to Self-Employment
"AI systems intended to be used for:
- (a) recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
- (b) making decisions affecting terms of work-related relationships, promotion or termination of work-related contractual relationships, allocating tasks based on individual behaviour or personal traits or characteristics, or monitoring and evaluating the performance and behaviour of persons in such relationships."
— EU Regulation 2024/1689, Annex III, Point 4[1]
The language is deliberately broad. The regulation doesn’t just cover final hiring decisions—it covers any AI system involved in the employment lifecycle from job advertisement through termination.
Full Scope of Covered Employment AI
Many organizations underestimate the breadth of employment AI subject to high-risk requirements. The following systems are explicitly covered:
Recruitment and Hiring
Job Advertisement Targeting
AI systems that determine which candidates see job postings—including LinkedIn’s ad targeting, programmatic job advertising platforms, and audience optimization tools.
Resume Screening
Automated filtering of applications based on keywords, qualifications, or predicted job fit. Includes ATS scoring systems, AI-powered resume parsers, and candidate ranking algorithms.
Interview Analysis
AI that evaluates video interviews, analyzes speech patterns, assesses body language, or scores candidate responses. HireVue, Pymetrics, and similar platforms fall squarely within scope.
Candidate Assessment
Psychometric testing, game-based assessments, skills verification, and predictive analytics that estimate candidate success or cultural fit.
Workforce Management
Performance Monitoring
AI tracking employee productivity, analyzing keystroke patterns, monitoring communications, or evaluating output quality. Includes warehouse tracking systems, call center analytics, and remote work monitoring.
Task Allocation
Systems assigning work based on predicted performance, availability algorithms, or behavioral analysis. Gig economy platforms (Uber, DoorDash, Deliveroo) use such systems extensively.
Promotion Decisions
AI recommending or ranking employees for advancement, succession planning algorithms, or "high-potential" identification systems.
Termination Recommendations
Systems flagging employees for performance improvement plans, predicting attrition risk, or recommending layoff candidates based on algorithmic criteria.
Key Determining Factors
Not every HR software tool is automatically high-risk. The classification depends on whether the AI system:
- Makes or materially influences employment decisions—filtering candidates, scoring performance, recommending actions
- Processes personal data to evaluate individuals—analyzing behavior, traits, or characteristics
- Affects employment relationship terms—compensation, scheduling, task assignment, contractual status
Examples that ARE high-risk:
- AI resume screener that auto-rejects 80% of applications
- Video interview platform that scores candidates on communication skills
- Performance analytics dashboard that identifies "underperformers"
- Algorithmic scheduling that assigns shifts based on predicted efficiency
Examples that may NOT be high-risk:
- Simple keyword search in job boards (no ranking/filtering of candidates)
- Calendar scheduling tool (no performance-based allocation)
- Expense reporting automation (not evaluating individual behavior)
When uncertain, classify conservatively. Regulators may disagree with narrow interpretations, and the penalty asymmetry favors over-compliance.
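As a rough triage aid for the inventory step, the three factors above can be expressed as a conservative screening function. This is an illustrative sketch of the decision logic, not a legal determination; the function name and inputs are our own framing.

```python
def likely_high_risk(makes_or_influences_decisions: bool,
                     evaluates_individuals: bool,
                     affects_employment_terms: bool) -> bool:
    """Rough Annex III, point 4 triage: any 'yes' warrants treating the
    system as high-risk pending a full legal review."""
    return any([makes_or_influences_decisions,
                evaluates_individuals,
                affects_employment_terms])

# Resume screener that auto-rejects applications:
print(likely_high_risk(True, True, False))    # True -> treat as high-risk
# Expense reporting automation:
print(likely_high_risk(False, False, False))  # False -> likely out of scope
```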
High-Risk Compliance Requirements (Articles 9-15)
Employment AI systems must satisfy the full suite of high-risk requirements. The EU AI Act mandates seven categories of obligations:
Article 9: Risk Management System
Continuous, iterative process throughout the AI system lifecycle:
- Identify and analyze foreseeable risks to health, safety, and fundamental rights
- Estimate and evaluate risks from intended use and reasonably foreseeable misuse
- Adopt appropriate risk mitigation measures
Article 10: Data Governance
Training, validation, and testing datasets must be:
- Relevant, sufficiently representative, and free of errors
- Examined for possible biases likely to affect fundamental rights
- Subject to appropriate data governance measures
Article 11: Technical Documentation
Comprehensive documentation per Annex IV:
- General system description, intended purpose, and developer information
- Detailed development process and elements
- Validation, testing procedures, and risk management documentation
Article 12: Record-Keeping (Logging)
Automatic recording of events throughout operation:
- Logging capabilities ensuring traceability of decisions
- Records of inputs, outputs, and persons involved in verification
- Tamper-evident logs retained for appropriate periods
Article 13: Transparency
Enable deployers to understand and interpret:
- System capabilities and limitations
- How to interpret system output appropriately
- Instructions for use in digital or non-digital format
Article 14: Human Oversight
Enable effective oversight by natural persons:
- Fully understand capacities and limitations
- Ability to override, disregard, or reverse AI output
- Awareness of automation bias risk
Article 15: Accuracy, Robustness, and Cybersecurity
Achieve an appropriate, consistent level of performance throughout the lifecycle:
- Accuracy levels and relevant accuracy metrics declared in the instructions for use
- Resilience against errors, faults, and inconsistencies
- Resilience against unauthorized third-party attempts to alter use, outputs, or performance
Article 12 Logging: The GLACIS Connection
Article 12 is where employment AI compliance becomes operationally complex—and where GLACIS provides critical value. The regulation requires:
"High-risk AI systems shall technically allow for the automatic recording of events ('logs') over the lifetime of the system... ensuring a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system." — EU AI Act, Article 12(1)
For employment AI, this means logging:
- Every candidate evaluation: Input data (resume, video, assessment responses), scoring criteria applied, and resulting recommendation
- Every rejection decision: Which candidates were filtered out, at which stage, and why
- Performance assessments: Data points analyzed, weighting applied, and conclusions reached
- Termination recommendations: Complete audit trail from input data to recommendation
- Human override events: When humans deviated from AI recommendations and why
Logs must be tamper-evident and retained appropriately. In employment contexts, this often means years—discrimination claims can be filed long after the decision occurred.
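Article 12 does not prescribe an implementation, but a hash chain is one common way to make logs tamper-evident: each entry commits to the digest of the previous entry, so altering any stored record breaks verification. The following minimal Python sketch is illustrative only; the class and field names are our own, not a GLACIS or regulatory API.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only, hash-chained log of employment AI decision events."""

    def __init__(self):
        self.entries = []
        self._last_digest = "0" * 64  # genesis value for the chain

    def append(self, event_type, payload):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "candidate_evaluation", "human_override"
            "payload": payload,        # inputs, criteria applied, output, reviewer identity
            "prev_digest": self._last_digest,
        }
        serialized = json.dumps(entry, sort_keys=True)
        entry["digest"] = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append(entry)
        self._last_digest = entry["digest"]

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "digest"}
            if body["prev_digest"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

log = DecisionLog()
log.append("candidate_evaluation", {"candidate_id": "c-1042",
                                    "model_version": "screener-v3.2",
                                    "recommendation": "advance"})
log.append("human_override", {"candidate_id": "c-1042", "reviewer": "hr-017",
                              "action": "reject", "reason": "role closed"})
assert log.verify()  # flips to False if any stored entry is later edited
```

A production system would additionally sign entries, ship them to write-once storage, and enforce a retention policy; the chain proves integrity, not completeness or availability.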
Fairness, Bias, and Discrimination Requirements
The EU AI Act places extraordinary emphasis on preventing discrimination in employment AI. Article 10 requires:
Bias Examination Requirement
"Training, validation and testing data sets shall be examined in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law."
— Article 10(2)(f)
For employment AI, this intersects with existing anti-discrimination frameworks:
- EU Employment Equality Directive (2000/78/EC): Prohibits discrimination based on religion or belief, disability, age, or sexual orientation
- EU Racial Equality Directive (2000/43/EC): Prohibits discrimination based on racial or ethnic origin
- Gender Equality Directive (2006/54/EC): Prohibits discrimination based on sex
- GDPR Article 22: Right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects
Organizations must demonstrate they’ve tested for bias across protected characteristics and implemented mitigation measures. This requires:
- Demographic analysis of training data representation
- Disparate impact testing across protected groups
- Ongoing monitoring for bias drift in production
- Documentation of bias findings and remediation steps
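One concrete form of disparate impact testing is the four-fifths rule used in US EEOC practice: flag any group whose selection rate falls below 80% of the highest group's rate. The EU AI Act does not mandate this particular test; the sketch below is one illustrative starting point.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """Return each group's selection rate and whether it clears the 80% ratio."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate, rate / best >= 0.8) for g, rate in rates.items()}

# Toy data: 100 applicants per group
sample = ([("A", True)] * 50 + [("A", False)] * 50
          + [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_check(sample))
# {'A': (0.5, True), 'B': (0.3, False)} -> group B falls below the 0.8 ratio
```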
Interaction with Employment Law
The EU AI Act doesn’t exist in isolation. Employment AI must also satisfy national employment laws, which often impose additional requirements:
Germany: Works Council Co-Determination
Under the Betriebsverfassungsgesetz (Works Constitution Act), works councils have mandatory co-determination rights over:
- Technical devices designed to monitor employee behavior or performance (§87(1)(6))
- Introduction and application of technical devices for data collection (§94)
- Selection guidelines for recruitment and termination (§95)
Deploying employment AI without works council consultation can result in injunctions, even if the system itself is EU AI Act compliant.
France: CNIL and Labor Code
The CNIL (data protection authority) has issued specific guidance on AI-assisted recruitment. The Labor Code requires informing employees of surveillance methods and consulting the comité social et économique (CSE), the employee representative body.
Netherlands: Employee Consent
Dutch data protection authority guidelines require explicit consent for AI-based profiling in employment contexts, beyond GDPR’s legitimate interest basis.
US Regulatory Comparison
While the EU AI Act represents the most comprehensive framework, US employers face a growing patchwork of employment AI regulations:
| Jurisdiction | Regulation | Key Requirements |
|---|---|---|
| Federal (EEOC) | Title VII, ADA | AI tools that produce disparate impact can violate Title VII; employers liable even if vendor-provided |
| New York City | Local Law 144 | Bias audits required for automated employment decision tools; candidate notice; annual public reporting |
| Illinois | AI Video Interview Act | Notice and consent required for AI video interview analysis; data destruction upon request |
| Colorado | Colorado AI Act (2026) | High-risk AI disclosure, impact assessments, discrimination prevention for "consequential decisions" |
| Maryland | Facial Recognition Ban | Prohibits facial recognition in hiring without explicit consent |
| California | CCPA/CPRA | Right to opt out of automated decision-making; transparency requirements |
US multinational companies must increasingly manage compliance across both EU AI Act requirements and this fragmented US landscape.
Evidence Requirements for Regulators
When regulators—or litigants—come asking questions about your employment AI, you’ll need evidence that your controls actually work. Documentation alone isn’t sufficient.
Regulators will request:
- Technical documentation per Annex IV—system architecture, training data details, validation results
- Risk assessment records—identified risks, probability/severity estimates, mitigation measures
- Bias audit results—disparate impact analysis across protected groups, remediation evidence
- Decision logs—complete audit trails for specific candidates or employees
- Human oversight records—evidence that humans reviewed and could override AI decisions
- Incident reports—any serious incidents reported per Article 73
The distinction between policy documentation and execution evidence is critical. A policy stating "humans review all AI recommendations" means nothing without logs proving that review actually occurred.
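That distinction can be made checkable. Assuming decision and review events are logged with a shared identifier (the field names here are hypothetical), a simple join surfaces AI recommendations that no human ever reviewed:

```python
def unreviewed_decisions(decision_log, review_log):
    """Return AI decisions lacking a matching human-review record."""
    reviewed = {r["decision_id"] for r in review_log if r.get("reviewer")}
    return [d for d in decision_log if d["decision_id"] not in reviewed]

decisions = [{"decision_id": "d1", "recommendation": "reject"},
             {"decision_id": "d2", "recommendation": "advance"}]
reviews = [{"decision_id": "d2", "reviewer": "hr-017"}]

print(unreviewed_decisions(decisions, reviews))
# [{'decision_id': 'd1', ...}] -> the policy says "reviewed"; the logs disagree
```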
Implementation Checklist
Employment AI Compliance Checklist
Inventory all employment AI systems
Recruitment tools, performance monitoring, scheduling algorithms, termination analytics
Classify each system’s risk level
Document rationale for classification; conservatively classify uncertain cases as high-risk
Establish risk management process
Identify, analyze, evaluate, and mitigate risks to health, safety, and fundamental rights
Conduct bias audits
Test for disparate impact across protected characteristics; document findings and remediation
Implement Article 12 logging
Automatic, tamper-evident logging of all decisions, inputs, outputs, and human interventions
Design human oversight controls
Ensure humans can understand, override, and reverse AI decisions; train oversight personnel
Prepare technical documentation
Compile Annex IV documentation including system description, data governance, validation results
Consult works councils / employee representatives
Where applicable (Germany, France, Netherlands, etc.), engage employee bodies before deployment
Establish post-market monitoring
Ongoing monitoring for performance degradation, bias drift, and serious incidents (see the drift-monitoring sketch after this checklist)
Train HR and management
Ensure human overseers understand AI capabilities, limitations, and their oversight responsibilities
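For the post-market monitoring step, bias drift can be watched by comparing production selection rates against the rates recorded at the last audit. A minimal sketch; the absolute tolerance of 0.1 is an assumption chosen purely for illustration.

```python
def drift_alert(baseline_rates, current_rates, tolerance=0.1):
    """Flag groups whose current selection rate deviates from the audited
    baseline by more than `tolerance` (absolute difference)."""
    return {group: abs(rate - baseline_rates.get(group, rate)) > tolerance
            for group, rate in current_rates.items()}

baseline = {"A": 0.50, "B": 0.45}   # rates documented at the last bias audit
current  = {"A": 0.52, "B": 0.30}   # group B has drifted downward in production
print(drift_alert(baseline, current))  # {'A': False, 'B': True}
```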
Deadline reminder: All high-risk employment AI systems must be compliant by August 2, 2026. Start now—implementation typically takes 6-12 months.
Frequently Asked Questions
Is employment AI high-risk under the EU AI Act?
Yes. Employment AI is explicitly classified as high-risk under Annex III, point 4: "Employment, workers management and access to self-employment." This includes AI used for recruitment, job advertising, application screening, interview analysis, performance monitoring, promotion decisions, task allocation, and termination recommendations. The compliance deadline is August 2, 2026.
Does our US-based recruitment platform need to comply?
If the platform is used to make employment decisions affecting EU workers or candidates, yes. The EU AI Act has extraterritorial reach—it applies wherever AI system output is used in the EU, regardless of where the provider or deployer is located. Additionally, your platform may need to comply with NYC Local Law 144, the Illinois AI Video Interview Act, or other US regulations depending on where candidates are located.
What’s the difference between "provider" and "deployer" for employment AI?
The provider is the entity that develops or places the AI system on the market (e.g., HireVue, Workday). The deployer is the entity using the system (e.g., a company using HireVue for interviews). Both have obligations: providers must ensure the system enables compliance (logging, transparency, documentation); deployers must implement human oversight, monitor for issues, and use the system as intended. If you customize a general-purpose AI for employment use, you may become the "provider."
Can we use ChatGPT or Claude to screen resumes?
You can, but you become the "provider" of a high-risk AI system. General-purpose AI models are subject to GPAI obligations (Articles 53-55), but when you integrate them into a high-risk use case like employment screening, you bear full responsibility for Articles 8-15 compliance—including risk management, bias testing, logging, human oversight, and conformity assessment. The foundation model provider (OpenAI, Anthropic) doesn’t assume your employment AI liability.
How do we handle employee monitoring tools already deployed?
Existing systems must be brought into compliance by August 2, 2026. Conduct a gap assessment against Articles 9-15 requirements, implement required controls (especially logging and human oversight), document your risk management process, and test for bias. If a system can’t be made compliant, you may need to discontinue or replace it. Don’t forget to consult works councils where applicable.
What penalties apply for non-compliant employment AI?
Non-compliance with high-risk system obligations (Articles 8-15) carries penalties up to €15 million or 3% of total worldwide annual turnover, whichever is higher. For fundamental rights violations including discrimination, the actual penalty may be higher when combined with GDPR, employment law, and anti-discrimination enforcement. Reputational damage from publicized violations often exceeds regulatory fines.
References
- [1] European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
- [2] European Commission. "Annexes to Regulation (EU) 2024/1689 - Annex III High-Risk AI Systems." EUR-Lex, July 12, 2024.
- [3] EEOC. "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence." Guidance, May 2022. eeoc.gov
- [4] NYC Department of Consumer and Worker Protection. "Automated Employment Decision Tools (Local Law 144)." Rules and Guidance, 2023. nyc.gov
- [5] Illinois General Assembly. "Artificial Intelligence Video Interview Act." 820 ILCS 42, 2020.
- [6] Colorado General Assembly. "Colorado Artificial Intelligence Act." SB24-205, 2024.
- [7] German Bundestag. "Betriebsverfassungsgesetz (Works Constitution Act)." §87, §94, §95.
- [8] Council of the European Union. "Council Directive 2000/78/EC Establishing a General Framework for Equal Treatment in Employment and Occupation." November 27, 2000.