When Chatbots Are Limited Risk Only
Most commercial chatbots fall into the EU AI Act’s limited risk category. These systems are subject to transparency obligations under Article 50 but don’t face the extensive conformity assessment and documentation requirements imposed on high-risk AI.
General Customer Service Chatbots
Customer service chatbots handling routine inquiries are limited risk when they:
- Answer questions about products, services, pricing, or company policies
- Process basic requests like order status, tracking, or return initiation
- Route users to appropriate human agents or departments
- Collect information for human follow-up without making decisions
FAQ and Information Bots
Chatbots providing general information remain limited risk when they serve as interactive knowledge bases rather than advisors. Examples include:
- Website navigation assistants helping users find content
- Product information bots describing features and specifications
- Event or scheduling assistants for bookings and reservations
- Educational content bots providing general learning material
Entertainment Chatbots
AI companions, creative writing assistants, gaming NPCs, and entertainment-focused conversational AI are limited risk. Their outputs don’t affect users’ fundamental rights, access to services, or consequential life decisions.
When Chatbots Become High-Risk
A chatbot’s risk classification escalates to high-risk when its purpose involves domains listed in Annex III of the EU AI Act or when it makes consequential decisions affecting individuals’ rights. The technology is identical—the application determines classification.
Medical Advice Chatbots
Chatbots providing healthcare guidance become high-risk under Annex III category 5 (access to essential services). This includes:
- Symptom checkers that suggest diagnoses or triage urgency
- Treatment recommendation bots suggesting medications or therapies
- Mental health chatbots providing therapeutic interventions or crisis support
- Patient intake bots that influence care prioritization or resource allocation
Note: Chatbots that merely schedule appointments or answer questions about clinic hours remain limited risk. The distinction is whether the chatbot provides clinical judgment affecting health decisions.
Legal Advice Chatbots
AI systems providing legal guidance fall under Annex III category 8 (administration of justice). High-risk legal chatbots include:
- Chatbots recommending legal strategies or predicting case outcomes
- AI drafting legal documents with substantive legal recommendations
- Systems advising on rights, obligations, or legal remedies
- Immigration or asylum guidance bots affecting individuals’ legal status
Financial Advice Chatbots
Financial services chatbots become high-risk under Annex III category 5 when they:
- Assess creditworthiness or make lending recommendations
- Provide personalized investment advice or portfolio recommendations
- Determine insurance eligibility, pricing, or claims decisions
- Make fraud determinations affecting account access
Chatbots Making Consequential Decisions
Beyond specific domains, any chatbot that makes or significantly influences decisions with material impact on individuals becomes high-risk. This includes:
- Employment bots screening candidates, scheduling interviews based on qualifications, or providing hiring recommendations
- Education bots determining course placement, academic progression, or access to educational opportunities
- Benefits bots affecting access to public assistance, housing, or social services
- Essential services bots controlling access to utilities, telecommunications, or other necessities
Key Determining Factors
When classifying your chatbot, evaluate these critical factors (a rough triage sketch follows the table):
| Factor | Limited Risk | High-Risk |
|---|---|---|
| Purpose | Information, navigation, entertainment | Advice, recommendations, decisions |
| Domain | General commerce, support, content | Healthcare, legal, finance, employment, education |
| Decision Authority | No decisions or human always decides | Makes or significantly influences decisions |
| Impact | Convenience, efficiency, engagement | Rights, health, financial status, opportunities |
| Reversibility | Easily corrected or inconsequential | Difficult to reverse or significant consequences |
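As a rough illustration of how the table’s factors might feed an internal classification review, here is a minimal triage sketch in Python. The `ChatbotProfile` fields, the domain list, and the decision logic are assumptions made for this example; an actual determination requires legal analysis against Annex III and the Act’s definitions.

```python
from dataclasses import dataclass

# Illustrative triage helper only; the domain list and decision logic are
# assumptions for this sketch, not a legal determination under the EU AI Act.
ANNEX_III_DOMAINS = {"healthcare", "legal", "finance", "employment",
                     "education", "essential_services", "public_benefits"}

@dataclass
class ChatbotProfile:
    domain: str                     # e.g. "commerce", "healthcare"
    gives_advice: bool              # advice/recommendations vs. pure information
    influences_decisions: bool      # makes or significantly influences decisions
    affects_rights_or_access: bool  # rights, health, finances, opportunities

def triage_classification(p: ChatbotProfile) -> str:
    """Return a preliminary risk flag for internal review (not legal advice)."""
    in_sensitive_domain = p.domain in ANNEX_III_DOMAINS
    if in_sensitive_domain and (p.gives_advice or p.influences_decisions
                                or p.affects_rights_or_access):
        return "potentially high-risk: escalate to legal/compliance review"
    if p.influences_decisions and p.affects_rights_or_access:
        return "potentially high-risk: consequential decisions outside listed domains"
    return "likely limited risk: Article 50 transparency still applies"

# Example: an FAQ bot for an online shop
print(triage_classification(ChatbotProfile(
    domain="commerce", gives_advice=False,
    influences_decisions=False, affects_rights_or_access=False)))
```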
Article 50 Transparency Requirements
All chatbots—regardless of risk classification—must comply with Article 50 transparency obligations. Users must know they’re interacting with AI, not a human.
Core Disclosure Requirements
Article 50(1) Requirements
- Clear notification that the user is interacting with an AI system
- Timely disclosure—at the start of interaction, not buried in terms
- Accessible format—understandable language, appropriate for audience
- Exception: Only when "obvious from the circumstances and context of use"
Implementation Best Practices
Effective transparency disclosure typically includes the following (a minimal implementation sketch follows this list):
- Opening message stating "I’m an AI assistant" or equivalent
- Visual indicators (bot icons, labels) throughout the interface
- Clear distinction when transferring to human agents
- Persistent accessibility of disclosure information
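A minimal sketch of how this disclosure might be wired into a chatbot backend is shown below. The `start_conversation` and `transfer_to_human` helpers and the message wording are illustrative assumptions; adapt the language to your audience and channel.

```python
# Minimal sketch: prepend an AI disclosure to every new conversation and
# announce handoffs to human agents. Wording and function names are illustrative.
AI_DISCLOSURE = (
    "Hi, I'm an AI assistant. I'm not a human, and my answers may contain "
    "errors. You can ask to speak to a human agent at any time."
)
HANDOFF_NOTICE = "You are now being transferred to a human agent."

def start_conversation(session: dict) -> list[str]:
    """Open a session with the AI disclosure as the very first message."""
    session["disclosed_ai"] = True          # record that disclosure was shown
    return [AI_DISCLOSURE]

def transfer_to_human(session: dict) -> list[str]:
    """Make the AI-to-human boundary explicit when escalating."""
    session["handed_off"] = True
    return [HANDOFF_NOTICE]

# Usage
session = {}
messages = start_conversation(session)
messages += transfer_to_human(session)
print("\n".join(messages))
```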
Penalty for non-compliance: Up to €15 million or 3% of global annual turnover, whichever is higher, for transparency violations (Article 99(4)).
Additional Requirements for High-Risk Chatbots
High-risk chatbots must satisfy Articles 8-15 requirements in addition to transparency obligations. This represents a substantial compliance burden requiring dedicated resources.
Article 9: Risk Management
Maintain a continuous risk management system throughout the chatbot’s lifecycle. Identify foreseeable risks, estimate probability and severity, implement mitigation measures, and document residual risks.
Article 10: Data Governance
Ensure training, validation, and testing data is relevant, representative, free of errors, and complete. Document data provenance, preparation processes, and bias examination.
Article 13: Transparency
Design for transparency enabling deployers to interpret outputs and use the system appropriately. Provide instructions for use including intended purpose, capabilities, and limitations.
Article 14: Human Oversight
Enable effective human oversight including ability to understand capabilities, monitor operation, interpret outputs, override or interrupt, and prevent automation bias.
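As one way to realize the override and interrupt capabilities in code, the sketch below gates flagged response categories behind human review and gives operators a stop switch. The `OversightGate` class, category names, and flow are assumptions for illustration, not a prescribed Article 14 design.

```python
# Sketch of a human-oversight gate: responses in flagged categories are held
# for reviewer approval, and a reviewer can override or stop the bot entirely.
# Category names and thresholds are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    paused: bool = False                       # operator "stop" switch
    review_required: set = field(default_factory=lambda: {"triage", "recommendation"})

    def interrupt(self) -> None:
        """Human operator halts automated responses."""
        self.paused = True

    def release(self, draft: str, category: str, reviewer: str | None = None) -> str | None:
        """Return a response only if oversight conditions are satisfied."""
        if self.paused:
            return None                        # bot is interrupted; no output
        if category in self.review_required and reviewer is None:
            return None                        # hold for human verification
        return draft                           # approved, or not review-bound

gate = OversightGate()
print(gate.release("Your order shipped yesterday.", category="order_status"))
print(gate.release("See a doctor within 24 hours.", category="triage"))  # held
print(gate.release("See a doctor within 24 hours.", category="triage",
                   reviewer="nurse_on_duty"))
```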
Article 12 Logging Requirements
High-risk chatbots face stringent logging requirements under Article 12. Logs must enable post-market monitoring, incident investigation, and regulatory inspection; a minimal log-record sketch follows the list of required elements below.
Required Log Elements
- Usage period—timestamps for each interaction session
- Reference database—version of knowledge base or model used
- Input data—user queries and conversation context
- Outputs—all responses, recommendations, and decisions
- Human verification—identification of persons reviewing results
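A minimal log-record sketch covering these elements is shown below. The `ChatLogEntry` field names are an assumed mapping of the Article 12 elements onto a chatbot context, not a schema mandated by the Act.

```python
# Minimal sketch of an Article 12-style log record for a single chatbot turn.
# Field names map to the elements listed above; they are an assumption about
# how you might structure logs, not a required format.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ChatLogEntry:
    session_id: str
    timestamp: str            # usage period: per-interaction timestamps
    model_version: str        # reference database / knowledge base version
    user_input: str           # input data: query plus relevant context
    bot_output: str           # outputs: responses, recommendations, decisions
    reviewed_by: str | None   # human verification: who reviewed, if anyone

entry = ChatLogEntry(
    session_id="sess-42",
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="kb-2025-06-01+model-v3",
    user_input="What does my policy cover?",
    bot_output="Your policy covers ...",
    reviewed_by=None,
)
print(json.dumps(asdict(entry)))
```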
Retention Requirements
Under Article 19, providers must retain automatically generated logs for a period appropriate to the system’s intended purpose, and for at least six months, unless applicable Union or national law requires otherwise. Sectoral legislation frequently requires longer retention: healthcare chatbots may need 6-10+ years under medical records laws, and financial services 5-7 years. Implement tamper-evident logging with cryptographic integrity verification.
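One common way to make logs tamper-evident is to chain entries with cryptographic hashes so that altering any record breaks verification of everything after it. A minimal sketch follows, assuming JSON-serializable records; a production system would typically also sign the hashes and write to append-only storage.

```python
# Sketch of hash-chained, tamper-evident logging: each record stores the hash
# of the previous record, so altering any entry invalidates all later hashes.
import hashlib, json

def append_entry(log: list[dict], record: dict) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "entry_hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for item in log:
        payload = json.dumps(item["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if item["prev_hash"] != prev_hash or item["entry_hash"] != expected:
            return False
        prev_hash = item["entry_hash"]
    return True

log: list[dict] = []
append_entry(log, {"session": "sess-42", "output": "Your policy covers ..."})
append_entry(log, {"session": "sess-42", "output": "Anything else?"})
print(verify_chain(log))          # True
log[0]["record"]["output"] = "x"  # tampering breaks verification
print(verify_chain(log))          # False
```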
Deepfake and Synthetic Content Rules
Article 50(4) addresses AI-generated synthetic content—relevant for chatbots producing audio, video, or images.
When Deepfake Rules Apply
Your chatbot triggers synthetic content disclosure requirements if it:
- Generates realistic synthetic voice responses (voice cloning, TTS resembling real people)
- Creates video avatars or realistic face synthesis
- Generates images depicting real people, places, or events
- Produces content that could be mistaken for authentic recordings
Text-only chatbots typically don’t trigger deepfake rules. However, multimodal AI assistants with voice or video capabilities require clear labeling that content is artificially generated or manipulated.
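If your assistant does emit synthetic audio or images, a simple pattern is to attach an explicit machine-readable label before the artifact leaves the system. The field names in the sketch below are illustrative assumptions, not a format prescribed by Article 50.

```python
# Sketch: wrap any synthetic audio/image artifact with an explicit
# machine-readable disclosure before returning it to the client.
def label_synthetic(artifact_bytes: bytes, media_type: str) -> dict:
    return {
        "media_type": media_type,            # e.g. "audio/wav", "image/png"
        "content": artifact_bytes,
        "ai_generated": True,                # disclosure flag for downstream UIs
        "disclosure_text": "This content was generated by an AI system.",
    }

response = label_synthetic(b"...wav bytes...", "audio/wav")
print(response["ai_generated"], response["disclosure_text"])
```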
US Regulatory Comparison
The United States lacks comprehensive federal AI chatbot regulation comparable to the EU AI Act. However, a patchwork of existing and emerging laws applies.
Colorado AI Act (SB 24-205)
Effective February 1, 2026. Requires disclosure when AI makes or substantially influences "consequential decisions" in employment, education, financial services, healthcare, housing, insurance, and legal services. Developers must provide documentation and disclosures to deployers; deployers must conduct impact assessments and implement risk management programs.
FTC Act Section 5
Prohibits unfair or deceptive practices. Undisclosed AI interactions may constitute deception. The FTC has signaled aggressive enforcement against "dark patterns" and hidden AI use, particularly in contexts where consumers expect human interaction.
FDA Oversight
Medical chatbots providing diagnostic or treatment recommendations may qualify as medical devices requiring FDA clearance or approval. FDA’s clinical decision support software guidance applies, and a 510(k) or De Novo pathway may be required.
State Consumer Protection Laws
California’s Bot Disclosure Law (SB 1001) requires bots to disclose their non-human nature when selling products or influencing votes. Similar laws are emerging in other states. CCPA/CPRA may apply to data collected by chatbots.
Evidence Requirements
Demonstrating compliance requires more than policies—you need evidence that controls actually function. For chatbot compliance, prepare:
Limited Risk (All Chatbots)
- Screenshots or recordings showing AI disclosure at interaction start
- UI/UX documentation demonstrating disclosure placement and prominence
- User testing confirming disclosure is understood
High-Risk Chatbots (Additional)
- Risk management documentation per Annex IV requirements
- Log samples demonstrating Article 12 compliance
- Human oversight procedures and execution records
- Data governance documentation including bias testing results
- Quality management system records
- Conformity assessment documentation (EU declaration of conformity)
Implementation Checklist
Chatbot EU AI Act Compliance
Classification Assessment
- ☐ Document chatbot’s intended purpose and use cases
- ☐ Evaluate against Annex III high-risk categories
- ☐ Assess decision-making authority and impact
- ☐ Document classification rationale
Transparency Implementation
- ☐ Add AI disclosure at conversation start
- ☐ Implement visual indicators (icons, labels)
- ☐ Create human handoff disclosure
- ☐ Test disclosure visibility and comprehension
High-Risk: Technical Controls
- ☐ Implement Article 12 compliant logging
- ☐ Establish log retention and integrity controls
- ☐ Build human oversight mechanisms
- ☐ Implement override and interrupt capabilities
High-Risk: Documentation
- ☐ Complete risk management documentation
- ☐ Document data governance practices
- ☐ Prepare technical documentation (Annex IV)
- ☐ Establish quality management system
High-Risk: Conformity Assessment
- ☐ Determine assessment pathway (internal vs. notified body)
- ☐ Prepare EU declaration of conformity
- ☐ Register in EU database (when available)
- ☐ Implement post-market monitoring
Frequently Asked Questions
My chatbot uses ChatGPT/Claude. Am I the provider or deployer?
You’re typically the "deployer" using a GPAI model from a "provider" (OpenAI, Anthropic). However, if you integrate the model into a high-risk use case (medical advice, credit decisions), you become the "provider" of that high-risk AI system and bear compliance responsibility. The GPAI provider must give you documentation enabling your compliance, but you’re responsible for the final system.
What if my chatbot just routes to humans for important decisions?
Routing alone doesn’t determine classification. If the chatbot merely collects information and routes to humans who make all decisions, it’s likely limited risk. But if the chatbot triages, prioritizes, or makes recommendations that influence human decisions, it may be high-risk—especially in healthcare, employment, or financial contexts.
Do internal employee chatbots need to comply?
Yes. The EU AI Act applies regardless of whether the chatbot serves customers or employees. An HR chatbot screening candidates or providing benefits advice is high-risk. An IT helpdesk chatbot resetting passwords is limited risk. Apply the same use-case analysis.
What’s the timeline for chatbot compliance?
Transparency requirements (Article 50) apply from August 2, 2026, the Act’s general application date, for all chatbots. Annex III high-risk requirements also apply from August 2, 2026 (high-risk AI embedded in Annex I regulated products has until August 2, 2027). Start transparency implementation now. High-risk chatbots need 6-12 months for full compliance, so begin immediately if applicable.
Can I add disclaimers to avoid high-risk classification?
Disclaimers don’t change classification. If your chatbot provides medical advice, stating "this isn’t medical advice" doesn’t make it limited risk—it may just add a deceptive practice violation. Classification depends on what the system actually does, not what you label it. However, clear limitations and redirection to professionals may reduce harm—which matters for risk management.
Do voice-enabled chatbots have additional requirements?
Voice chatbots must still disclose AI nature—audio disclosure is acceptable. If the voice is synthesized to resemble a specific person or could be mistaken for authentic human speech, Article 50(4) deepfake provisions may apply. Ensure clear AI identification in voice interactions, particularly at the start of calls.