
Are AI Chatbots High-Risk Under EU AI Act?

Definitive classification guide for conversational AI: when chatbots are limited risk versus high-risk, what transparency requires, and which compliance obligations apply.

Joe Braidwood
CEO, GLACIS
12 min read • 2,200+ words

Quick Answer: It Depends on Use Case

Most AI chatbots are LIMITED RISK—requiring only transparency obligations (users must know they’re talking to AI). However, chatbots become HIGH-RISK when they provide medical advice, legal guidance, financial recommendations, or make decisions affecting individuals’ fundamental rights.

The classification depends on what the chatbot does, not the underlying technology. A ChatGPT-powered customer service bot handling return policies is limited risk. The same ChatGPT technology providing diagnostic recommendations becomes high-risk.

  • Art. 50: transparency rule for all chatbots
  • Aug 2, 2026: compliance deadline for high-risk and transparency obligations
  • €15M or 3% of global turnover: maximum fine for high-risk violations
  • €15M or 3% of global turnover: maximum fine for Article 50 transparency violations


When Chatbots Are Limited Risk Only

Most commercial chatbots fall into the EU AI Act’s limited risk category. These systems require transparency obligations under Article 50 but don’t face the extensive conformity assessment and documentation requirements imposed on high-risk AI.

General Customer Service Chatbots

Customer service chatbots handling routine inquiries, such as order status, returns, shipping, and product questions, are limited risk as long as they only provide information and route anything consequential to a human agent.

FAQ and Information Bots

Chatbots providing general information remain limited risk when they serve as interactive knowledge bases rather than advisors, for example bots answering questions about return policies, opening hours, or product details.

Entertainment Chatbots

AI companions, creative writing assistants, gaming NPCs, and entertainment-focused conversational AI are limited risk. Their outputs don’t affect users’ fundamental rights, access to services, or consequential life decisions.

When Chatbots Become High-Risk

A chatbot’s risk classification escalates to high-risk when its purpose involves domains listed in Annex III of the EU AI Act or when it makes consequential decisions affecting individuals’ rights. The technology is identical—the application determines classification.

Medical Advice Chatbots

Chatbots providing healthcare guidance become high-risk under Annex III category 5 (access to essential services). This includes symptom checkers that suggest diagnoses or triage urgency and assistants that recommend treatments or medications.

Note: Chatbots that merely schedule appointments or answer questions about clinic hours remain limited risk. The distinction is whether the chatbot provides clinical judgment affecting health decisions.

Legal Advice Chatbots

AI systems providing legal guidance fall under Annex III category 8 (administration of justice). Chatbots become high-risk here when they provide substantive legal advice or case assessments rather than general legal information.

Financial Advice Chatbots

Financial services chatbots become high-risk under Annex III category 5(b) when they evaluate creditworthiness or make or significantly influence credit and lending decisions.

Chatbots Making Consequential Decisions

Beyond specific domains, any chatbot that makes or significantly influences decisions with material impact on individuals becomes high-risk. This includes chatbots that screen job candidates, assess students, determine eligibility for benefits or essential services, or otherwise gate access to opportunities.

Key Determining Factors

When classifying your chatbot, evaluate these critical factors:

  • Purpose: information, navigation, or entertainment stays limited risk; advice, recommendations, or decisions point to high-risk.
  • Domain: general commerce, support, and content are limited risk; healthcare, legal, finance, employment, and education point to high-risk.
  • Decision authority: limited risk if the chatbot makes no decisions or a human always decides; high-risk if it makes or significantly influences decisions.
  • Impact: convenience, efficiency, and engagement are limited risk; effects on rights, health, financial status, or opportunities point to high-risk.
  • Reversibility: limited risk if outcomes are easily corrected or inconsequential; high-risk if they are difficult to reverse or carry significant consequences.
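
To make the evaluation concrete, the sketch below encodes these factors in a minimal screening helper. The domain labels, flags, and decision logic are illustrative assumptions that simplify Annex III considerably; treat it as a triage aid, not a substitute for documented legal analysis.

```python
# Hypothetical screening helper: encodes the factor comparison above as a
# first-pass check. Domain lists and logic are illustrative, not a legal test.
from dataclasses import dataclass

ANNEX_III_DOMAINS = {"healthcare", "legal", "finance", "employment", "education"}

@dataclass
class ChatbotProfile:
    domain: str                 # e.g. "general_commerce", "healthcare"
    gives_advice: bool          # advice/recommendations vs. pure information
    influences_decisions: bool  # makes or significantly influences decisions
    affects_rights: bool        # impact on rights, health, finances, opportunities

def screen_risk(profile: ChatbotProfile) -> str:
    """Return a provisional classification for internal triage purposes only."""
    if profile.domain in ANNEX_III_DOMAINS and (
        profile.gives_advice or profile.influences_decisions
    ):
        return "likely high-risk: document Annex III analysis and plan Articles 8-15"
    if profile.influences_decisions and profile.affects_rights:
        return "likely high-risk: consequential decision influence"
    return "likely limited risk: Article 50 transparency obligations still apply"

# Example: a returns-policy support bot vs. a symptom-checking bot.
print(screen_risk(ChatbotProfile("general_commerce", False, False, False)))
print(screen_risk(ChatbotProfile("healthcare", True, True, True)))
```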

Article 50 Transparency Requirements

All chatbots—regardless of risk classification—must comply with Article 50 transparency obligations. Users must know they’re interacting with AI, not a human.

Core Disclosure Requirements

Article 50(1) Requirements

  • Clear notification that the user is interacting with an AI system
  • Timely disclosure—at the start of interaction, not buried in terms
  • Accessible format—understandable language, appropriate for audience
  • Exception: Only when "obvious from the circumstances and context of use"

Implementation Best Practices

Effective transparency disclosure typically includes a clear AI notice at the start of the conversation, persistent visual indicators such as icons or labels, disclosure when the conversation is handed off to or from a human agent, and testing that the disclosure is visible and understood.

Penalty for non-compliance: transparency violations under Article 50 carry fines of up to €15 million or 3% of global annual turnover.
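
As an illustration, here is a minimal sketch of the disclosure pattern, assuming a chat backend where the application controls the first message shown to the user. The function names, message wording, and logging hook are illustrative assumptions, not language prescribed by the Act.

```python
# Minimal sketch: surface the Article 50 notice before the first model turn.
# Message wording, session structure, and logging hook are illustrative assumptions.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "You can ask to speak with a person at any time."
)

def log_disclosure_event(session_id: str, locale: str, text: str) -> None:
    """Placeholder: persist proof the notice was shown (see the Article 12 sketch below)."""
    print(f"[disclosure] session={session_id} locale={locale} shown=True")

def start_session(session_id: str, locale: str = "en") -> list[dict]:
    """Open a conversation with the AI notice as the first visible message."""
    log_disclosure_event(session_id, locale, AI_DISCLOSURE)
    return [{"role": "system_notice", "text": AI_DISCLOSURE, "session": session_id}]

# Example: the transcript begins with the disclosure, before any model output.
print(start_session("sess-42"))
```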

Additional Requirements for High-Risk Chatbots

High-risk chatbots must satisfy Articles 8-15 requirements in addition to transparency obligations. This represents a substantial compliance burden requiring dedicated resources.

Article 9: Risk Management

Continuous risk management system throughout the chatbot’s lifecycle. Identify foreseeable risks, estimate probability and severity, implement mitigation measures, and document residual risks.

Article 10: Data Governance

Ensure training, validation, and testing data is relevant, representative, free of errors, and complete. Document data provenance, preparation processes, and bias examination.

Article 13: Transparency

Design for transparency enabling deployers to interpret outputs and use the system appropriately. Provide instructions for use including intended purpose, capabilities, and limitations.

Article 14: Human Oversight

Enable effective human oversight including ability to understand capabilities, monitor operation, interpret outputs, override or interrupt, and prevent automation bias.
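
One way to realize these oversight capabilities is sketched below, assuming a design where low-confidence or consequential draft replies are parked for human review and an operator can take over a session at any time. The confidence threshold, topic check, and queue mechanics are illustrative assumptions.

```python
# Illustrative oversight hook: hold risky draft replies for a human reviewer
# and expose an operator interrupt. Thresholds and storage are assumptions.
import queue

review_queue: "queue.Queue[dict]" = queue.Queue()
interrupted_sessions: set[str] = set()

def _mentions_consequential_topic(text: str) -> bool:
    # Crude illustrative check; a real system would use a vetted policy layer.
    return any(k in text.lower() for k in ("diagnosis", "loan", "dismissal"))

def oversee(session_id: str, draft_reply: str, confidence: float) -> str | None:
    """Return the reply to send, or None if it is parked for human review."""
    if session_id in interrupted_sessions:
        return None  # an operator has taken over this conversation
    if confidence < 0.7 or _mentions_consequential_topic(draft_reply):
        review_queue.put({"session": session_id, "draft": draft_reply})
        return None  # a human decides whether and how to respond
    return draft_reply

def operator_interrupt(session_id: str) -> None:
    """Operator override: stop automated replies for this session (Article 14)."""
    interrupted_sessions.add(session_id)
```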

Article 12 Logging Requirements

High-risk chatbots face stringent logging requirements under Article 12. Logs must enable post-market monitoring, incident investigation, and regulatory inspection.

Required Log Elements

At a minimum, logs should capture when each interaction occurred, the inputs and outputs exchanged, the system or model version in use, and any human oversight interventions, so that incidents can be reconstructed after the fact.

Retention Requirements

Logs must be retained for the system’s lifetime or as specified by applicable sectoral legislation. Healthcare chatbots may require 6-10+ years retention under medical records laws. Financial services may require 5-7 years. Implement tamper-evident logging with cryptographic integrity verification.
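
A common technique for tamper-evident logging is a hash chain: each record embeds a digest of the previous record, so any later edit or deletion breaks verification. The sketch below uses only the Python standard library; the field names and event types are assumptions, not a prescribed Article 12 schema.

```python
# Hash-chained audit log sketch: each entry binds to its predecessor, so edits
# or deletions break the chain. Field names are illustrative assumptions.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, session_id: str, event: str, detail: dict) -> dict:
        record = {
            "ts": time.time(),
            "session": session_id,
            "event": event,          # e.g. "disclosure_shown", "reply_sent"
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

# Example: append two events, then verify the chain end-to-end.
log = AuditLog()
log.append("sess-42", "disclosure_shown", {"locale": "en"})
log.append("sess-42", "reply_sent", {"chars": 180})
print("chain intact:", log.verify())
```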

Deepfake and Synthetic Content Rules

Article 50(4) addresses AI-generated synthetic content—relevant for chatbots producing audio, video, or images.

When Deepfake Rules Apply

Your chatbot triggers synthetic content disclosure requirements if it generates or manipulates audio, image, or video content that could pass as authentic, such as a synthesized voice resembling a real person or photorealistic generated imagery.

Text-only chatbots typically don’t trigger deepfake rules. However, multimodal AI assistants with voice or video capabilities require clear labeling that content is artificially generated or manipulated.
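
A practical starting point is to attach a machine-readable provenance record to every generated audio or image artifact alongside the user-facing notice. The metadata keys below are a hypothetical internal convention, not the C2PA standard or a format mandated by the Act.

```python
# Illustrative provenance wrapper for generated media. The metadata keys are a
# hypothetical internal convention, not a mandated or standardised format.
from datetime import datetime, timezone

def label_synthetic(media_bytes: bytes, media_type: str, model_name: str) -> dict:
    """Bundle generated audio/image content with an 'AI-generated' disclosure record."""
    return {
        "content": media_bytes,
        "media_type": media_type,          # "audio/wav", "image/png", ...
        "ai_generated": True,              # Article 50(4)-style disclosure flag
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "user_notice": "This content was generated by an AI system.",
    }

# Example: wrap a synthesized voice clip before returning it to the client.
labelled = label_synthetic(b"...wav bytes...", "audio/wav", "example-tts-model")
print(labelled["ai_generated"], labelled["user_notice"])
```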

US Regulatory Comparison

The United States lacks comprehensive federal AI chatbot regulation comparable to the EU AI Act. However, a patchwork of existing and emerging laws applies.

Colorado AI Act (SB 24-205)

Effective February 1, 2026. Requires disclosure when AI makes or substantially influences "consequential decisions" in employment, education, financial services, healthcare, housing, insurance, and legal services. Developers must provide documentation and disclosures to deployers; deployers must conduct impact assessments and implement risk management programs.

FTC Act Section 5

Prohibits unfair or deceptive practices. Undisclosed AI interactions may constitute deception. FTC has signaled aggressive enforcement against "dark patterns" and hidden AI use, particularly in contexts where consumers expect human interaction.

FDA Oversight

Medical chatbots providing diagnostic or treatment recommendations may qualify as medical devices requiring FDA clearance or approval. Clinical decision support software guidance applies. 510(k) or De Novo pathway may be required.

State Consumer Protection Laws

California Bot Disclosure Law (SB 1001) requires bots to disclose their non-human nature when selling products or influencing votes. Similar laws emerging in other states. CCPA/CPRA may apply to data collected by chatbots.

Evidence Requirements

Demonstrating compliance requires more than policies—you need evidence that controls actually function. For chatbot compliance, prepare:

Limited Risk (All Chatbots)

  • Documented classification rationale covering intended purpose and Annex III analysis
  • Records showing the AI disclosure actually appears at the start of each conversation
  • Evidence that disclosure wording, placement, and visibility were tested

High-Risk Chatbots (Additional)

  • Article 12 logs with retention and integrity controls
  • Risk management and data governance documentation
  • Records of human oversight, override, and interrupt mechanisms
  • Technical documentation (Annex IV), conformity assessment, and post-market monitoring records

Implementation Checklist


1. Classification Assessment

  • ☐ Document chatbot’s intended purpose and use cases
  • ☐ Evaluate against Annex III high-risk categories
  • ☐ Assess decision-making authority and impact
  • ☐ Document classification rationale

2. Transparency Implementation

  • ☐ Add AI disclosure at conversation start
  • ☐ Implement visual indicators (icons, labels)
  • ☐ Create human handoff disclosure
  • ☐ Test disclosure visibility and comprehension

3. High-Risk: Technical Controls

  • ☐ Implement Article 12 compliant logging
  • ☐ Establish log retention and integrity controls
  • ☐ Build human oversight mechanisms
  • ☐ Implement override and interrupt capabilities

4. High-Risk: Documentation

  • ☐ Complete risk management documentation
  • ☐ Document data governance practices
  • ☐ Prepare technical documentation (Annex IV)
  • ☐ Establish quality management system

5. High-Risk: Conformity Assessment

  • ☐ Determine assessment pathway (internal vs. notified body)
  • ☐ Prepare EU declaration of conformity
  • ☐ Register in EU database (when available)
  • ☐ Implement post-market monitoring

Frequently Asked Questions

My chatbot uses ChatGPT/Claude. Am I the provider or deployer?

You’re typically the "deployer" using a GPAI model from a "provider" (OpenAI, Anthropic). However, if you integrate the model into a high-risk use case (medical advice, credit decisions), you become the "provider" of that high-risk AI system and bear compliance responsibility. The GPAI provider must give you documentation enabling your compliance, but you’re responsible for the final system.

What if my chatbot just routes to humans for important decisions?

Routing alone doesn’t determine classification. If the chatbot merely collects information and routes to humans who make all decisions, it’s likely limited risk. But if the chatbot triages, prioritizes, or makes recommendations that influence human decisions, it may be high-risk—especially in healthcare, employment, or financial contexts.

Do internal employee chatbots need to comply?

Yes. The EU AI Act applies regardless of whether the chatbot serves customers or employees. An HR chatbot screening candidates or providing benefits advice is high-risk. An IT helpdesk chatbot resetting passwords is limited risk. Apply the same use-case analysis.

What’s the timeline for chatbot compliance?

Transparency requirements (Article 50) and high-risk chatbot requirements both apply from August 2, 2026. Start transparency implementation now. High-risk chatbots need 6-12 months for full compliance, so begin immediately if applicable.

Can I add disclaimers to avoid high-risk classification?

Disclaimers don’t change classification. If your chatbot provides medical advice, stating "this isn’t medical advice" doesn’t make it limited risk—it may just add a deceptive practice violation. Classification depends on what the system actually does, not what you label it. However, clear limitations and redirection to professionals may reduce harm—which matters for risk management.

Do voice-enabled chatbots have additional requirements?

Voice chatbots must still disclose AI nature—audio disclosure is acceptable. If the voice is synthesized to resemble a specific person or could be mistaken for authentic human speech, Article 50(4) deepfake provisions may apply. Ensure clear AI identification in voice interactions, particularly at the start of calls.

Get Your Chatbot Compliance Evidence

GLACIS generates cryptographic proof that your chatbot’s transparency disclosures deploy correctly and logging controls function as designed. Evidence that auditors and regulators actually accept.

Start Your Compliance Sprint
