
AI Risk Assessment: Complete Framework for Evaluating AI Systems

Risk assessment is the foundation of AI governance. Learn how to identify, evaluate, and prioritize AI risks using proven methodologies and the GLACIS Risk Matrix.


What Is AI Risk Assessment?

AI risk assessment is the systematic process of identifying, analyzing, and evaluating risks associated with artificial intelligence systems. Unlike traditional IT risk assessments, AI risk assessment must address unique challenges:

  • Emergent behavior: AI systems can exhibit unexpected outputs that weren't explicitly programmed
  • Opacity: Many AI models are "black boxes" where decision-making logic isn't transparent
  • Data dependency: Model behavior depends heavily on training data quality and representativeness
  • Continuous evolution: Models may drift over time as data distributions change
  • Scale of impact: AI decisions can affect thousands or millions of people simultaneously

A comprehensive AI risk assessment examines technical, operational, ethical, and regulatory dimensions to inform governance decisions and prioritize mitigation efforts.

Risk Assessment vs. Impact Assessment

Risk assessment focuses on what could go wrong and how to prevent it. Impact assessment focuses on effects on individuals and society. Both are often required together, especially under regulations like the EU AI Act.

AI Risk Categories

AI risks span multiple dimensions. A thorough assessment must consider all categories:

Technical Risks

Model Performance

Accuracy degradation, poor generalization, sensitivity to edge cases, performance variance across subgroups.

Robustness & Reliability

Vulnerability to adversarial inputs, brittleness under distribution shift, inconsistent outputs.

Security Vulnerabilities

Prompt injection, data poisoning, model extraction, membership inference, jailbreaking.

Data Quality Issues

Training data bias, data leakage, privacy violations, outdated or incomplete data.

Operational Risks

  • Integration failures: Incompatibility with existing systems, data pipeline issues
  • Human oversight gaps: Inadequate review processes, automation bias
  • Dependency risks: Reliance on third-party models, APIs, or infrastructure
  • Change management: Uncontrolled updates, lack of version control
  • Incident response: Inadequate detection and remediation capabilities

Ethical & Societal Risks

  • Bias and discrimination: Disparate impact on protected groups
  • Privacy violations: Unauthorized data use, re-identification risks
  • Autonomy undermining: Manipulation, dark patterns, excessive reliance
  • Transparency gaps: Inability to explain decisions to affected parties
  • Accountability voids: Unclear responsibility when things go wrong

Compliance & Legal Risks

  • Regulatory violations: Non-compliance with EU AI Act, state laws, sector regulations
  • Contractual breaches: Violation of customer agreements, SLAs
  • Liability exposure: Product liability, malpractice, negligence claims
  • Reputational damage: Public incidents, media coverage, customer trust loss

GLACIS Risk Matrix

We use a probability-impact matrix to prioritize risks. This helps organizations focus resources on the most significant threats.

GLACIS AI Risk Matrix

| Probability \ Impact | Negligible | Minor | Significant | Severe |
|----------------------|------------|--------|-------------|----------|
| Almost Certain | Medium | High | Critical | Critical |
| Likely | Low | Medium | High | Critical |
| Possible | Low | Medium | Medium | High |
| Unlikely | Low | Low | Medium | Medium |
| Rare | Low | Low | Low | Medium |

  • Low: Accept or monitor
  • Medium: Mitigate
  • High: Priority action
  • Critical: Immediate action
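
In code, the matrix is just a lookup table. The sketch below is a minimal Python rendering of the matrix and legend above; the function and level names are illustrative, not part of any published GLACIS tooling.

```python
# Minimal sketch of the GLACIS probability-impact lookup.
# Names and structure are illustrative, not a published API.

IMPACTS = ["negligible", "minor", "significant", "severe"]

# One row per probability level, columns ordered negligible -> severe,
# transcribed from the matrix above.
MATRIX = {
    "almost_certain": ["Medium", "High", "Critical", "Critical"],
    "likely":         ["Low", "Medium", "High", "Critical"],
    "possible":       ["Low", "Medium", "Medium", "High"],
    "unlikely":       ["Low", "Low", "Medium", "Medium"],
    "rare":           ["Low", "Low", "Low", "Medium"],
}

ACTIONS = {
    "Low": "Accept or monitor",
    "Medium": "Mitigate",
    "High": "Priority action",
    "Critical": "Immediate action",
}

def risk_priority(probability: str, impact: str) -> tuple[str, str]:
    """Return (priority level, recommended action) for one risk."""
    level = MATRIX[probability][IMPACTS.index(impact)]
    return level, ACTIONS[level]

# Example: a PHI exposure judged "possible" with "significant" impact.
print(risk_priority("possible", "significant"))  # ('Medium', 'Mitigate')
```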

Impact Severity Definitions

| Severity | Definition | Healthcare Examples |
|----------|------------|---------------------|
| Negligible | Minor inconvenience, easily reversible, no lasting harm | Scheduling inefficiency, minor UI errors |
| Minor | Noticeable impact, some remediation required, limited scope | Incorrect billing code, delayed notification |
| Significant | Substantial harm, regulatory attention, costly remediation | PHI exposure, biased treatment recommendations |
| Severe | Irreversible harm, patient safety impact, major regulatory action | Misdiagnosis leading to harm, systematic discrimination |

Assessment Methodology

A structured methodology ensures comprehensive and consistent risk assessment:

1. Scope & Context

Define the AI system boundaries and operating context:

  • System purpose and intended use cases
  • Affected stakeholders and populations
  • Deployment environment and integrations
  • Regulatory and contractual requirements

2. Risk Identification

Systematically identify potential risks:

  • Review each risk category (technical, operational, ethical, compliance)
  • Conduct threat modeling for security risks
  • Analyze failure modes and edge cases
  • Consider misuse and adversarial scenarios

3. Risk Analysis

Evaluate each identified risk:

  • Assess probability (rare to almost certain)
  • Assess impact severity (negligible to severe)
  • Identify existing controls and their effectiveness
  • Calculate residual risk after controls (a rough sketch follows this list)
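
Residual risk is typically expressed by re-scoring the risk once control effectiveness is taken into account. As a rough sketch, assume each control is judged by how many probability levels it removes; that downgrade model is an assumption for illustration, not a standard.

```python
# Hypothetical residual-risk re-scoring: a control is judged by how many
# probability levels it removes. The downgrade model is illustrative only.

LEVELS = ["rare", "unlikely", "possible", "likely", "almost_certain"]

def residual_probability(inherent: str, levels_removed: int) -> str:
    """Lower the inherent probability rating, floored at 'rare'."""
    return LEVELS[max(LEVELS.index(inherent) - levels_removed, 0)]

# A 'likely' risk with a control judged to remove one probability level:
print(residual_probability("likely", levels_removed=1))  # 'possible'
```

The residual (probability, impact) pair is then read back through the risk matrix to obtain the residual priority.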

4. Risk Evaluation

Prioritize and make treatment decisions:

  • Apply risk matrix to determine priority
  • Compare against risk appetite and tolerance
  • Identify risks requiring immediate action
  • Determine appropriate treatment strategy

5. Documentation & Monitoring

Create records and establish ongoing monitoring:

  • Document assessment findings and rationale
  • Create risk register with owners and timelines (a minimal sketch follows this list)
  • Establish monitoring for risk indicators
  • Define triggers for reassessment
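
The register itself can be as simple as one structured record per risk. Below is a hypothetical sketch of a single entry using a Python dataclass; the field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskRegisterEntry:
    """One documented risk; field names are illustrative, not a schema."""
    risk_id: str
    description: str
    probability: str                  # "rare" .. "almost_certain"
    impact: str                       # "negligible" .. "severe"
    priority: str                     # Low/Medium/High/Critical from the matrix
    owner: str
    treatment: str                    # avoid / reduce / transfer / accept
    existing_controls: list = field(default_factory=list)
    residual_priority: Optional[str] = None   # priority after controls
    target_date: Optional[date] = None
    monitoring_indicators: list = field(default_factory=list)
    reassessment_triggers: list = field(default_factory=list)

entry = RiskRegisterEntry(
    risk_id="R-014",
    description="PHI appears in LLM prompt logs",
    probability="possible",
    impact="significant",
    priority="Medium",
    owner="Privacy Officer",
    treatment="reduce",
    existing_controls=["log redaction", "role-based access"],
    residual_priority="Low",
    target_date=date(2026, 3, 31),
    monitoring_indicators=["redaction failure rate"],
    reassessment_triggers=["new model version", "privacy incident"],
)
```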

Regulatory Requirements

Multiple regulations now mandate AI risk assessments:

EU AI Act

The EU AI Act requires risk assessments for high-risk AI systems, a category that includes most healthcare AI. Requirements include:

  • Risk management system: Continuous, iterative process throughout the AI lifecycle
  • Residual risk evaluation: Remaining risks must be acceptable
  • Testing against risks: Validation that risks are adequately addressed
  • Post-market monitoring: Ongoing risk tracking after deployment

Colorado AI Act

Effective June 30, 2026, Colorado requires:

  • Impact assessments: For high-risk AI systems making consequential decisions
  • Annual updates: Assessments must be refreshed yearly
  • Disclosure: Consumers must be informed when AI affects them

NIST AI RMF

The NIST AI Risk Management Framework provides voluntary guidance that many organizations adopt. Key risk-related functions:

  • MAP: Establish context and identify risks
  • MEASURE: Assess and track identified risks
  • MANAGE: Prioritize and treat risks

Healthcare-Specific Requirements

Healthcare AI systems may also need to comply with HIPAA risk analysis requirements, FDA premarket submissions (for SaMD), and Joint Commission standards. These often overlap with but don't replace AI-specific risk assessments.

Healthcare AI Risks

Healthcare AI presents unique risk considerations due to patient safety implications:

Clinical Decision Support Risks

  • Misdiagnosis: False positives leading to unnecessary treatment; false negatives missing conditions
  • Automation bias: Clinicians over-trusting AI recommendations
  • Alert fatigue: Too many warnings causing important ones to be ignored
  • Context blindness: AI missing crucial patient context

Documentation AI Risks

  • Hallucination: AI generating false information in clinical notes
  • PHI exposure: Sensitive information in prompts or logs
  • Attribution errors: Incorrect patient data linked to wrong records
  • Semantic drift: Subtle meaning changes that alter clinical interpretation

Administrative AI Risks

  • Access discrimination: Biased scheduling or resource allocation
  • Billing errors: Incorrect coding affecting patient costs and compliance
  • Communication failures: Important messages not delivered or misrouted

Risk Mitigation Strategies

Once risks are identified and prioritized, apply appropriate mitigation strategies:

| Strategy | When to Use | Examples |
|----------|-------------|----------|
| Avoid | Risk is unacceptable and cannot be adequately controlled | Don't deploy AI for this use case; use alternative approach |
| Reduce | Risk can be lowered through controls | Add human review, improve model, implement guardrails |
| Transfer | Risk can be shared with another party | Insurance, contractual allocation, outsourcing |
| Accept | Risk is within tolerance after controls | Document acceptance, establish monitoring |
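
Read top to bottom, the table is a short decision cascade. The sketch below is a deliberately crude illustration of that logic; the boolean predicates are assumptions, and real treatment decisions also weigh cost, context, and risk appetite.

```python
def choose_treatment(within_tolerance: bool,
                     controllable: bool,
                     transferable: bool) -> str:
    """Crude sketch of the avoid/reduce/transfer/accept cascade above."""
    if within_tolerance:
        return "accept"    # document acceptance, establish monitoring
    if controllable:
        return "reduce"    # human review, model improvements, guardrails
    if transferable:
        return "transfer"  # insurance or contractual allocation
    return "avoid"         # unacceptable and uncontrollable: don't deploy

print(choose_treatment(within_tolerance=False, controllable=True,
                       transferable=False))  # reduce
```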

Common Risk Controls

  • Human-in-the-loop: Require human review before consequential actions
  • Confidence thresholds: Escalate low-confidence predictions (see the sketch after this list)
  • Guardrails: Hard limits on outputs (dosage ranges, prohibited actions)
  • Bias testing: Regular evaluation across protected groups
  • Continuous monitoring: Track performance, drift, and anomalies
  • Incident response: Plans for when things go wrong
  • Audit trails: Evidence of control execution
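
Several of these controls reduce to simple gating logic at inference time. Below is a minimal sketch combining a hard guardrail with confidence-threshold escalation; the threshold, dosage limit, and function names are illustrative assumptions, not recommendations.

```python
# Illustrative gating around a model output: hard guardrails first,
# then confidence-based escalation to human review.

CONFIDENCE_THRESHOLD = 0.85   # assumed value; tune per use case
MAX_DAILY_DOSE_MG = 4000      # example hard limit for a dosage output

def route_prediction(prediction: dict) -> str:
    """Decide whether a model output can proceed or needs a human."""
    # Guardrail: block outputs outside hard limits regardless of confidence.
    if prediction.get("dose_mg", 0) > MAX_DAILY_DOSE_MG:
        return "blocked: exceeds dosage guardrail"
    # Confidence threshold: low-confidence outputs go to human review.
    if prediction["confidence"] < CONFIDENCE_THRESHOLD:
        return "escalated: human review required"
    return "auto-approved"

print(route_prediction({"dose_mg": 500, "confidence": 0.92}))   # auto-approved
print(route_prediction({"dose_mg": 500, "confidence": 0.60}))   # escalated
print(route_prediction({"dose_mg": 6000, "confidence": 0.99}))  # blocked
```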

Frequently Asked Questions

What is an AI risk assessment?

An AI risk assessment is a systematic process to identify, analyze, and evaluate potential risks associated with an AI system. It examines technical, operational, ethical, and compliance risks to inform governance decisions and prioritize mitigation efforts.

When is an AI risk assessment required?

Risk assessments are required by the EU AI Act for high-risk systems, by the Colorado AI Act for consequential decisions, and by frameworks like NIST AI RMF and ISO 42001. Many organizations also require them for vendor due diligence and internal governance.

What makes an AI system high-risk?

High-risk classification depends on the domain (healthcare, employment, credit), decision type (consequential vs. advisory), affected population, and reversibility of outcomes. Most healthcare AI is considered high-risk under emerging regulations.

How often should AI risk assessments be updated?

Update assessments when there are material changes (new models, expanded use cases), when regulations change, after incidents, and at minimum annually for high-risk systems. Continuous monitoring should supplement periodic reassessments.

Who should be involved in AI risk assessment?

Effective assessments require cross-functional input: AI/ML engineers, security, legal/compliance, ethics, domain experts (clinicians for healthcare), affected user representatives, and executive sponsors. No single function has complete visibility.

Need Help With AI Risk Assessment?

The Evidence Pack Sprint includes comprehensive risk assessment documentation aligned with NIST AI RMF and EU AI Act requirements.
