Use Case Classification Guide

Is Biometric AI High-Risk Under EU AI Act?

Classification depends on use case. Some biometric AI is prohibited. Most is high-risk. This guide helps you determine which category applies.

Joe Braidwood · CEO, GLACIS · 12 min read · 2,200+ words

Quick Answer

Biometric AI classification under the EU AI Act depends entirely on the specific use case. Some uses are completely prohibited (Article 5), while most are classified as high-risk (Annex III) with strict compliance requirements.

Prohibited (Article 5)
  • Untargeted facial image scraping
  • Real-time remote biometric ID in public
  • Emotion recognition in workplaces/schools
High-Risk (Annex III)
  • Post-remote biometric identification
  • Biometric categorization systems
  • Permitted emotion recognition uses
  • Feb 2025: Prohibitions Active
  • Aug 2026: High-Risk Deadline
  • Article 12: Logging Required
  • Notified Body: Assessment Required


Prohibited Biometric AI Uses (Article 5)

Article 5 of the EU AI Act outright bans certain biometric AI applications that pose unacceptable risks to fundamental rights. These prohibitions took effect February 2, 2025. Organizations deploying prohibited biometric AI face penalties up to €35 million or 7% of global annual turnover.

Prohibited Biometric AI Applications

1. Untargeted Facial Image Scraping

Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This prohibition applies regardless of the database’s intended use—commercial, law enforcement, or otherwise.[1]

2. Real-Time Remote Biometric Identification in Public Spaces

AI systems that identify natural persons in real-time in publicly accessible spaces for law enforcement purposes. "Real-time" means identification occurs simultaneously with the biometric data capture, or without significant delay after it.[1]

Narrow exceptions exist: targeted search for missing children, prevention of imminent terrorist threats, locating suspects of specific serious crimes (terrorism, trafficking, murder). Even excepted uses require prior judicial authorization.

3. Emotion Recognition in Workplaces and Schools

AI systems inferring emotions of individuals in workplace and educational institution contexts. This covers systems detecting stress, engagement, attention, satisfaction, or other emotional states.[1]

Exceptions: Medical or safety purposes only—detecting driver fatigue for vehicle safety, monitoring patient emotional states in therapeutic settings, or medical diagnosis applications.

4. Biometric Categorization Inferring Sensitive Attributes

AI systems categorizing individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. Lawful filtering of biometric datasets by sex (e.g., for law enforcement searches) is permitted.[1]

High-Risk Biometric AI Uses (Annex III)

Biometric AI systems not falling under Article 5 prohibitions are classified as high-risk under Annex III, point 1. These systems must achieve full compliance with Articles 9-15 by August 2, 2026, including third-party notified body assessment.

High-Risk Biometric AI Categories

Post-Remote Biometric Identification

AI systems identifying natural persons at a distance through analysis of recorded biometric data (video footage, photographs) after the fact. Unlike real-time identification, post-remote analysis is permitted but heavily regulated. This includes forensic facial recognition systems used by law enforcement analyzing historical footage.[1]

Biometric Categorization Systems

AI systems assigning natural persons to specific categories based on biometric data. This includes systems categorizing by age, gender, ethnicity (where legally permitted), or other physical characteristics. Commercial applications like customer demographic analysis fall into this category.[1]

Permitted Emotion Recognition

Emotion recognition AI systems used for medical diagnosis, therapeutic monitoring, driver safety monitoring, or other safety-critical applications. While exempt from the workplace/school prohibition, these remain high-risk and require full compliance with Articles 9-15.[1]

Biometric Verification (Authentication)

One-to-one biometric verification systems (fingerprint authentication, face unlock, iris scanning for access control) are generally not high-risk under Annex III unless deployed in high-risk contexts or combined with other high-risk functions. Context determines classification.[1]

Key Determining Factors

The classification of biometric AI depends on several critical factors. Understanding these distinctions is essential for accurate risk categorization.

Factor | Prohibited | High-Risk
Timing | Real-time (simultaneous/minimal delay) | Post-remote (after recording)
Location | Publicly accessible spaces | Private/controlled environments
Context | Workplace/school emotion recognition | Medical/safety emotion recognition
Data Source | Untargeted internet/CCTV scraping | Targeted, consent-based collection
Purpose | Inferring prohibited sensitive attributes | Permitted categorization purposes
Operator | Law enforcement (without exception) | Law enforcement (with authorization)
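The factors above can be sketched as a first-pass triage function. This is an illustrative simplification, not legal logic: the attribute names and decision rules are assumptions that omit the Act's detailed conditions and exceptions.

```python
from dataclasses import dataclass

@dataclass
class BiometricUseCase:
    # Simplified, hypothetical attributes for illustration only.
    realtime: bool                 # identification at/near the moment of capture
    public_space: bool             # publicly accessible space
    law_enforcement: bool
    emotion_recognition: bool
    workplace_or_school: bool
    medical_or_safety: bool
    untargeted_scraping: bool      # building databases by scraping internet/CCTV
    infers_sensitive_attrs: bool   # race, political opinion, religion, etc.

def triage(uc: BiometricUseCase) -> str:
    """Rough first-pass triage mirroring the table above.

    NOT legal advice: the Act's actual tests (Article 5, Annex III)
    carry conditions and exceptions this sketch omits.
    """
    if uc.untargeted_scraping or uc.infers_sensitive_attrs:
        return "prohibited (Article 5)"
    if uc.realtime and uc.public_space and uc.law_enforcement:
        return "prohibited (Article 5, narrow judicially authorized exceptions)"
    if uc.emotion_recognition and uc.workplace_or_school and not uc.medical_or_safety:
        return "prohibited (Article 5)"
    return "high-risk (Annex III) -- full Articles 9-15 compliance"

# Example: forensic facial recognition on recorded footage (post-remote)
case = BiometricUseCase(realtime=False, public_space=True, law_enforcement=True,
                        emotion_recognition=False, workplace_or_school=False,
                        medical_or_safety=False, untargeted_scraping=False,
                        infers_sensitive_attrs=False)
print(triage(case))  # -> high-risk (Annex III) -- full Articles 9-15 compliance
```

Real classification requires legal review; the sketch only shows that timing, location, context, data source, purpose, and operator jointly determine the outcome.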

Law Enforcement Exceptions

Real-time remote biometric identification by law enforcement in public spaces is permitted only for:

  • Targeted searches for missing children
  • Prevention of imminent terrorist threats
  • Locating suspects of specific serious crimes (terrorism, trafficking, murder)

Even when exceptions apply, prior judicial or independent administrative authorization is required. Post-hoc authorization is only permitted in duly justified urgent cases, with authorization sought within 24 hours.[1]

High-Risk Compliance Requirements (Articles 9-15)

Biometric AI systems classified as high-risk must comply with the full suite of requirements in Articles 9-15. For biometric systems specifically, third-party notified body assessment is mandatory—internal conformity assessment is not an option.

Article 9: Risk Management System

Continuous, iterative risk management throughout the system lifecycle. For biometric systems, this includes assessing risks of misidentification, bias across demographic groups, and fundamental rights impacts. Document foreseeable risks, mitigation measures, and residual risk acceptance rationale.

Article 10: Data Governance

Training, validation, and testing data must be subject to appropriate governance. Biometric datasets require examination for representativeness, bias, and gaps across demographic groups. Document data sources, collection methods, and preprocessing procedures.

Article 11: Technical Documentation

Prepare comprehensive technical documentation per Annex IV before placing the system on the market. For biometric systems, this includes detailed accuracy metrics (false acceptance rate, false rejection rate), demographic performance differentials, and testing methodology.

Article 12: Automatic Logging

Systems must automatically record events enabling traceability of functioning. For biometric AI, logs must capture identification/verification requests, confidence scores, match decisions, and input data references—essential for post-incident investigation and ongoing monitoring.

Article 13: Transparency

Provide clear instructions for use to deployers. Biometric systems must include information on accuracy levels, known limitations, demographic performance variations, and proper operating conditions.

Article 14: Human Oversight

Design systems to enable effective human oversight. Critical for biometric identification systems—humans must be able to correctly interpret outputs, decide not to use the system, override decisions, and intervene in real-time when necessary.

Article 15: Accuracy, Robustness, Cybersecurity

Achieve appropriate accuracy levels consistent with intended purpose. Biometric systems must be resilient against adversarial attacks (presentation attacks, morphing attacks) and errors that could lead to misidentification.
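For the accuracy metrics these requirements reference, false acceptance rate (FAR) and false rejection rate (FRR) can be computed from match scores at a chosen threshold. A toy sketch follows; the scores and threshold are made up, and real evaluation follows dedicated biometric testing standards such as ISO/IEC 19795.

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR and FRR at a given match-score threshold.

    FAR: fraction of non-matching (impostor) pairs wrongly accepted.
    FRR: fraction of matching (genuine) pairs wrongly rejected.
    Toy illustration only; not a substitute for standardized testing.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

impostors = [0.10, 0.32, 0.55, 0.61, 0.20]   # scores for non-matching pairs
genuines  = [0.72, 0.88, 0.64, 0.93, 0.58]   # scores for matching pairs
far, frr = far_frr(impostors, genuines, threshold=0.6)
print(f"FAR={far:.2f} FRR={frr:.2f}")  # FAR=0.20 FRR=0.20
```

Computing these rates separately per demographic group, as the documentation requirements above demand, reveals the performance differentials that must be disclosed.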

Article 12 Logging: GLACIS Core Relevance

Article 12 is particularly critical for biometric AI systems. The logging requirements create an auditable record that regulators will scrutinize during market surveillance and incident investigations.

Article 12 Logging Requirements for Biometric AI

High-risk biometric AI systems must have logging capabilities that automatically record:

  • Operation periods: Recording start and end times of system operation
  • Reference database: Which biometric database was queried for each identification
  • Input data: Reference to input data generating matches (enabling post-hoc verification)
  • Human verification: Identity of natural persons verifying identification results
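A minimal sketch of what such a log record might look like, with a SHA-256 digest so records can be chained into a tamper-evident trail. The schema and field names are hypothetical assumptions, not prescribed by the Act.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class BiometricEventLog:
    # Fields mirror the Article 12 items listed above; names are illustrative.
    period_start: str        # operation start time
    period_end: str          # operation end time
    reference_database: str  # which biometric database was queried
    input_data_ref: str      # pointer to input data, not the raw biometric itself
    human_verifier: str      # person who verified the identification result
    prev_hash: str = "0" * 64  # digest of the previous record, forming a chain

    def digest(self) -> str:
        """Deterministic SHA-256 over the canonical JSON form of the record."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

entry = BiometricEventLog(
    period_start="2026-08-02T09:00:00Z",
    period_end="2026-08-02T09:00:04Z",
    reference_database="watchlist-eu-v3",
    input_data_ref="frame://cam12/2026-08-02T09:00:01Z",
    human_verifier="analyst-041",
)
print(entry.digest())  # feed this into the next record's prev_hash
```

Chaining digests this way means altering any past record invalidates every later one, which is the property an auditor checks.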

How GLACIS Helps

GLACIS generates cryptographic evidence that your Article 12 logging controls actually execute correctly—not just documentation claiming they exist. This evidence-based approach satisfies regulator expectations for proof that controls work, creating tamper-evident audit trails for biometric system operations.

GDPR Biometric Data Requirements

Biometric AI systems face dual regulatory obligations: the EU AI Act governs the AI system itself, while GDPR governs the processing of biometric personal data. Organizations must achieve compliance with both frameworks.

GDPR Article 9: Special Category Data

Biometric data processed for identification purposes is special category data under GDPR. Processing is prohibited unless an Article 9(2) exception applies:

  • Explicit consent (Article 9(2)(a))
  • Employment law obligations (Article 9(2)(b))
  • Substantial public interest (Article 9(2)(g))

AI Act Additional Requirements

Beyond GDPR compliance, biometric AI systems must meet AI Act requirements:

  • Risk management for the AI system (Article 9)
  • Data governance for training data (Article 10)
  • Automated logging (Article 12)
  • Human oversight mechanisms (Article 14)

Key overlap: GDPR Article 22 (automated decision-making) and AI Act Article 14 (human oversight) both require meaningful human involvement in consequential decisions. For biometric identification affecting individuals, both regulations demand mechanisms enabling human review and intervention.

US Regulatory Comparison

Unlike the EU’s comprehensive framework, US biometric regulation is fragmented across state laws, municipal ordinances, and sector-specific requirements. Organizations operating transatlantically face divergent compliance obligations.

Jurisdiction | Regulation | Key Requirements
Illinois | BIPA (2008) | Informed written consent, private right of action, $1,000-$5,000 per violation
Texas | CUBI (2009) | Consent required, AG enforcement only, $25,000 per violation
Washington | HB 1493 (2017) | Notice and consent for commercial purposes, AG enforcement
California | CCPA/CPRA | Biometric data is "sensitive personal information," opt-out rights
San Francisco | Ordinance (2019) | Government agencies prohibited from using facial recognition
Federal | None comprehensive | Sector-specific only (HIPAA for health, FCRA for employment)

Key distinction: The EU AI Act creates systematic prohibitions and high-risk requirements that don’t exist at US federal level. BIPA’s private right of action has driven significant litigation, but focuses on data collection consent rather than AI system governance. Organizations may find EU compliance creates a superset covering most US requirements.

Implementation Checklist

Organizations deploying biometric AI systems should systematically address these requirements before the August 2026 deadline.

Pre-Deployment Compliance Checklist

  • Confirm the use case is not prohibited under Article 5
  • Establish a GDPR Article 9 lawful basis for processing biometric data
  • Implement a risk management system covering misidentification and demographic bias (Article 9)
  • Apply data governance to training, validation, and testing datasets (Article 10)
  • Prepare Annex IV technical documentation, including accuracy metrics (Article 11)
  • Deploy automatic logging of operation periods, database queries, inputs, and human verification (Article 12)
  • Provide deployers with instructions covering accuracy levels and limitations (Article 13)
  • Build human oversight mechanisms enabling review, override, and intervention (Article 14)
  • Test accuracy, robustness, and resistance to presentation attacks (Article 15)
  • Engage a notified body early for third-party conformity assessment

Frequently Asked Questions

Is biometric AI high-risk under the EU AI Act?

It depends on the specific use case. Some biometric AI uses are completely prohibited under Article 5 (such as untargeted facial image scraping and real-time remote biometric ID in public spaces). Most other biometric identification and categorization systems are classified as high-risk under Annex III, requiring full compliance with Articles 9-15 by August 2026.

Is facial recognition prohibited under the EU AI Act?

Not entirely. Real-time remote biometric identification in publicly accessible spaces by law enforcement is prohibited with narrow exceptions (missing children, terrorist threats, serious crime suspects). Post-remote facial recognition (analyzing footage after the fact) is classified as high-risk but permitted. Untargeted scraping of facial images from the internet or CCTV to build recognition databases is completely prohibited.

Can I use emotion recognition AI in my workplace?

Generally no. The EU AI Act prohibits emotion recognition AI in workplace and educational settings under Article 5, effective February 2025. Limited exceptions exist for medical or safety purposes—for example, detecting driver fatigue for safety reasons or monitoring patient emotional states in therapeutic contexts. Any permitted emotion recognition remains high-risk and requires full compliance.

Do I need a notified body assessment for biometric AI?

Yes. Biometric identification and categorization systems listed in Annex III require third-party notified body conformity assessment—the internal control pathway available for some high-risk systems does not apply. Assessments cost €10,000-€100,000 and take 3-12 months. Start engagement early to meet the August 2026 deadline.

What evidence do regulators expect for biometric AI compliance?

Regulators expect proof that controls execute—not just policy documentation claiming they exist. This includes: Article 12 logs demonstrating operational traceability, risk assessment records with mitigation evidence, data governance audit trails, human oversight execution records, and accuracy testing results across demographic groups. Generating cryptographic evidence of control execution satisfies these evidentiary expectations.

Does GDPR also apply to biometric AI systems?

Yes, biometric AI systems must comply with both the EU AI Act and GDPR. GDPR classifies biometric data as special category data under Article 9, requiring explicit consent or another lawful basis for processing. The AI Act adds additional requirements for the AI system itself. Organizations need dual compliance strategies addressing both regulations’ requirements.

References

  [1] European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
  [2] European Commission. "Questions and Answers: Artificial Intelligence Act." March 13, 2024. europa.eu
  [3] European Data Protection Board. "Guidelines on the Use of Facial Recognition Technology in the Area of Law Enforcement." 2022. edpb.europa.eu
  [4] Illinois General Assembly. "Biometric Information Privacy Act (BIPA)." 740 ILCS 14. ilga.gov
  [5] Future of Life Institute. "EU AI Act Article-by-Article Analysis." 2024. artificialintelligenceact.eu

Prove Your Biometric AI Controls Work

GLACIS generates cryptographic evidence that your Article 12 logging, human oversight, and risk management controls execute correctly. Get audit-ready documentation for notified body assessment.

Get Free Assessment
