Complete implementation guide for the NIST AI Risk Management Framework. Core functions, profiles, and practical application.
28 min read
8,000+ words
Joe Braidwood
CEO, GLACIS
28 min read
Executive Summary
The NIST AI Risk Management Framework (AI RMF 1.0), released January 2023, has emerged as the de facto standard for AI governance in the United States. The framework provides 72 subcategories across 19 categories and 4 core functions, with the 2024 Generative AI Profile (NIST AI 600-1) adding 200+ actions specific to LLM and generative AI risks.[1]
While voluntary, NIST AI RMF is increasingly referenced by regulation. The Colorado AI Act explicitly cites it for safe harbor protection.[2] The Federal Artificial Intelligence Risk Management Act of 2024 would make compliance mandatory for federal agencies.[3] Enterprise customers including Workday and Google have publicly adopted the framework.[4]
This guide provides the complete implementation roadmap with evidence requirements, regulatory crosswalks, and practical controls for each function.
72
Subcategories[1]
233
AI Incidents (2024)[5]
$67B
Hallucination Losses[6]
56%
YoY Incident Rise[5]
In This Guide
Why NIST AI RMF Matters Now
The landscape of AI risk has shifted dramatically. According to the Stanford AI Index, 233 AI-related incidents were reported in 2024—a 56.4% increase over 2023 and a 26-fold increase since 2012.[5] These incidents included deepfake intimate images, chatbots implicated in self-harm, and false identification by anti-theft AI systems.
The financial stakes are substantial. Global losses attributed to AI hallucinations reached $67.4 billion in 2024, according to AllAboutAI research.[6] In a separate finding, 47% of enterprise AI users admitted to making at least one major business decision based on hallucinated content.[6]
Regulators have taken notice. In 2024 alone, 135 different state AI laws were passed in the United States, with over 800 submitted at the start of the year.[7] The NIST AI RMF has emerged as the common reference point across these regulatory efforts.
Legal Safe Harbor: The Colorado AI Act explicitly cites NIST AI RMF compliance as grounds for an affirmative defense. Organizations that demonstrate compliance may qualify for safe harbor protections against enforcement actions.[2]
Penalties under the Colorado AI Act can reach $20,000 per violation, enforced by the Colorado Attorney General.
Enterprise Adoption
Major enterprises have publicly embraced NIST AI RMF:
Workday describes NIST AI RMF as "a concrete benchmark for mapping, measuring, and managing our approach to AI governance" that helps "maintain customer trust and stay true to company core values."[4]
Google has built an AI governance program "aligned with the AI RMF approach and underpinned by industry-leading research."[4]
The 2025 AI Governance Survey found that 30% of organizations have at least one AI model in production, with another 40% running pilots.[8] As deployment accelerates, governance frameworks become essential.
Framework Structure
The NIST AI RMF is organized around four core functions, 19 categories, and 72 subcategories. The companion Playbook provides suggested actions for each subcategory, while remaining voluntary and adaptable to organizational context.[1]
Function
Purpose
Categories
Subcategories
GOVERN
Establish organizational culture and structures
6
~20
MAP
Understand context and identify risks
5
18
MEASURE
Assess and analyze identified risks
5
22
MANAGE
Prioritize and address risks
3
~12
TOTAL
19
72
G
GOVERN
Foundational—enables and informs all other functions
The framework defines seven characteristics that AI systems should exhibit to be considered trustworthy:[1]
Valid and Reliable
Consistent, accurate performance
Safe
No harm to people or environment
Secure and Resilient
Protected, recoverable from disruptions
Accountable and Transparent
Clear ownership and visibility
Explainable and Interpretable
Decisions can be understood
Privacy-Enhanced
Individual privacy protected
Fair with Harmful Bias Managed
Regular auditing for biases with corrective actions
GOVERN: Establishing AI Governance
The GOVERN function is foundational—it enables and informs the other three functions. Without effective governance structures, technical controls lack context and accountability. NIST emphasizes that governance should be established first and maintained throughout the AI lifecycle.[1]
Primary ownership typically sits with General Counsel, CISOs, Head of Risk, or Chief Risk Officer—leaders positioned to operationalize AI risk management as part of broader enterprise risk strategy.[9]
GV.1
Organizational Policies
Establish policies that define acceptable AI uses, risk thresholds, and accountability structures.
Document AI use policies aligned with organizational values and legal obligations
Define risk tolerance thresholds by use case category
Establish approval workflows for high-risk AI applications
Create incident response procedures specific to AI failures
Evidence Required:Policy documents, approval records, version history
GV.2
Roles and Responsibilities
Define clear accountability for AI risk management across the organization.
Assign executive sponsorship for AI governance program
Define RACI matrix for AI risk assessment, approval, and monitoring
Establish cross-functional AI governance committee
Evidence Required:Org charts, role descriptions, committee charters, meeting minutes
GV.3
Workforce and Culture
Build organizational capability and culture for responsible AI development and deployment.
Provide AI risk awareness training for relevant staff
Create channels for reporting AI concerns without retaliation
Foster culture that values AI safety alongside innovation
The MAP function focuses on understanding the context in which AI systems operate and identifying potential risks before deployment. It comprises 5 categories and 18 subcategories.[10]
Effective mapping requires understanding not just intended uses, but foreseeable misuses. The framework emphasizes considering stakeholders who might be affected by AI decisions, including those who may not directly interact with the system.
MP.1
Context and Use Case Analysis
Document the intended purpose, users, and operating environment for each AI system.
Define intended use cases with specific user populations
Identify foreseeable misuses and off-label applications
Assess deployment environment and integration points
Document data sources, quality, and provenance
Evidence Required:Use case documentation, data lineage records, architecture diagrams
MP.2
Stakeholder Impact Assessment
Identify who is affected by AI decisions and how they might be harmed.
Map all stakeholders affected by AI outputs (direct and indirect)
Engage affected communities in risk identification
Evidence Required:Impact assessments, stakeholder maps, engagement records
MP.3
Risk Identification
Systematically identify risks across the AI lifecycle and risk categories.
Catalog risks to each trustworthy AI characteristic
Consider risks from training data, model architecture, deployment
Document known limitations and failure modes
Evidence Required:Risk registers, model cards, limitation documentation
MEASURE: Assessing AI Risks
The MEASURE function involves quantifying and analyzing identified risks through testing, metrics, and ongoing monitoring. It comprises 5 categories and 22 subcategories.[10]
With AI hallucination rates varying from 0.7% (best-in-class) to over 25% in widely deployed enterprise models[6], measurement is critical. The 2024 Stanford AI Index found that standardized evaluations for LLM responsibility are seriously lacking—leading developers test against different benchmarks, complicating cross-model comparison.[5]
MS.1
AI Testing and Evaluation
Establish rigorous testing protocols for AI systems before and after deployment.
Conduct performance testing across diverse conditions and edge cases
Test for adversarial robustness and prompt injection vulnerabilities
Establish ongoing monitoring to detect performance degradation, drift, and emerging risks.
Monitor model performance metrics in production
Detect data drift and distribution shifts
Set up alerting for anomalous outputs or behaviors
Evidence Required:Monitoring dashboards, alert logs, drift reports, incident records
MANAGE: Treating AI Risks
The MANAGE function covers the prioritization and treatment of identified risks, including mitigation strategies and response planning. It translates measurement into action.
Research shows that 76% of enterprises now include human-in-the-loop processes to catch AI errors before deployment, and 91% of enterprise AI policies include explicit protocols for hallucination mitigation.[6]
MG.1
Risk Prioritization
Prioritize risks based on likelihood, impact, and organizational risk tolerance.
Score risks using consistent criteria aligned with enterprise risk
Factor in reversibility and remediation difficulty
Consider regulatory and reputational implications
Evidence Required:Risk scoring matrices, prioritization decisions, review records
MG.2
Risk Treatment Options
Select and implement appropriate risk treatment strategies.
Avoid
Don't deploy the AI system or use case
Mitigate
Implement controls to reduce risk to acceptable levels
Transfer
Share risk via contracts, insurance, or partnerships
Accept
Document with appropriate executive approval
Evidence Required:Treatment decisions, control implementation records, acceptance documentation
MG.3
Incident Response
Prepare for and respond to AI failures, incidents, and unintended outcomes.
Released July 26, 2024, per Executive Order 14110, the Generative AI Profile identifies 12 risks unique to or exacerbated by generative AI and provides over 200 suggested actions for risk management.[11]
The 12 GenAI Risk Categories
1. CBRN Information
Access to chemical, biological, radiological, nuclear weapons information
2. Confabulation
Production of false or misleading content ("hallucinations")
3. Dangerous Content
Creation of violent, hateful, or inciting content
4. Data Privacy
Leakage, unauthorized use, or de-anonymization of personal data
5. Environmental Impacts
High energy consumption and carbon emissions
6. Harmful Bias
Reinforcement of stereotypes and discriminatory outputs
7. Homogenization
Reduction in content diversity and perspective
8. Information Integrity
Mis/disinformation and manipulation of information
9. Information Security
Lowered barrier to cybersecurity attacks
10. Intellectual Property
Training data and output copyright concerns
11. Obscene Content
Generation of sexual, violent, or illegal content
12. Value Chain
Risks from third-party components and integrations
Integration with AI RMF 1.0: The GenAI Profile maps each of the 12 risks to the core GOVERN, MAP, MEASURE, and MANAGE functions, providing specific actions for generative AI contexts. Organizations implementing AI RMF should layer the GenAI Profile for LLM and generative AI deployments.
Regulatory Safe Harbor
NIST AI RMF has transitioned from voluntary guidance to regulatory reference point. Multiple regulations now explicitly cite it as a compliance benchmark.
Regulation
NIST AI RMF Reference
Effective Date
Penalties
Colorado AI Act (SB 205)
Cited as benchmark for safe harbor[2]
June 30, 2026
$20,000/violation
Federal AI Risk Mgmt Act (HR6936)
Would mandate for federal agencies[3]
Proposed
Contract eligibility
Executive Order 14110
Incorporates into federal guidelines[12]
October 2023
Agency compliance
EU AI Act
Compatible risk-based approach
August 2025+
Up to 7% revenue
Colorado AI Act Safe Harbor
The Colorado AI Act explicitly provides an affirmative defense for organizations that can demonstrate compliance with NIST AI RMF or equivalent frameworks:[2]
"Discovering a violation as a result of monitoring, testing or an internal review and curing it, is an affirmative defense if the deployer or developer was in compliance with the latest version of NIST AI Risk Management Framework and ISO/IEC 42001 or any other national or international framework that is substantially similar."
Key compliance obligations under the Colorado AI Act:
Impact assessments required by effective date, then annually and within 90 days of modifications
3-year retention of impact assessments with annual review
Public disclosure of high-risk AI systems and risk management practices
Crosswalk to Other Frameworks
Organizations implementing NIST AI RMF build a strong foundation for compliance with multiple regulatory frameworks:
NIST AI RMF describes what to do, but regulators and auditors increasingly demand proof you did it. The difference determines whether you qualify for safe harbor protections.
Knowledge workers spend an average of 4.3 hours per week verifying AI outputs.[6] Each enterprise employee costs approximately $14,200 per year in hallucination mitigation efforts.[6] The evidence you collect determines whether that investment translates to regulatory protection.
1
Policy Documentation
Written AI policies and procedures
Weak
2
Process Records
Risk assessments, impact documentation
Moderate
3
Execution Logs
Automated monitoring, test results, audit trails
Good
4
Cryptographic Attestation
Signed, timestamped proof of control execution
Strong
Most organizations operate at Levels 1-2. Regulators and sophisticated customers increasingly demand Level 3-4 evidence. Read more about the Proof Gap.
Frequently Asked Questions
Is NIST AI RMF mandatory?
Currently voluntary for private sector organizations. However, the Federal Artificial Intelligence Risk Management Act of 2024 (HR6936) would make compliance mandatory for federal agencies and their contractors.[3] The Colorado AI Act explicitly cites it for safe harbor protection.[2] Many enterprise customers now require NIST AI RMF alignment in procurement.
How does NIST AI RMF differ from NIST CSF?
NIST Cybersecurity Framework (CSF) addresses cybersecurity risks broadly. NIST AI RMF specifically addresses AI-related risks like bias, explainability, hallucinations, and AI-specific security concerns such as prompt injection. Organizations typically need both frameworks for comprehensive risk coverage.
What's the difference between NIST AI RMF and ISO 42001?
NIST AI RMF is a risk management framework focused on AI-specific risks—it tells you what to address. ISO 42001 is a certifiable management system standard that provides the how of organizational implementation. They're complementary: use NIST AI RMF for risk identification and ISO 42001 for management system certification.
Can small organizations implement NIST AI RMF?
Yes. The framework is designed to be scalable and risk-proportionate. Small organizations can implement a simplified version focused on their highest-priority AI systems. Start with governance foundations (GOVERN), identify your critical use cases (MAP), and expand measurement and management based on risk levels.
Who should own NIST AI RMF implementation?
Primary ownership typically sits with General Counsel, CISO, Head of Risk, or Chief Risk Officer—leaders positioned to operationalize AI risk management as part of broader enterprise risk strategy.[9] Implementation requires a cross-functional team including legal, compliance, engineering, and business stakeholders.
References
[1]NIST. "AI Risk Management Framework 1.0." NIST AI 100-1, January 2023. nvlpubs.nist.gov
[2]Colorado General Assembly. "Colorado Artificial Intelligence Act (SB 205)." Signed May 17, 2024. Analysis: RadarFirst
[3]U.S. Congress. "Federal Artificial Intelligence Risk Management Act of 2024 (HR6936)." Introduced January 10, 2024. holisticai.com
[4]NIST. "Perspectives about the NIST AI Risk Management Framework." 2023. nist.gov
[5]Stanford HAI. "AI Index 2025: State of AI in 10 Charts." April 2025. hai.stanford.edu
[6]AllAboutAI. "The Hidden Cost Crisis: Economic Impact of AI Content Reliability Issues." 2025. Analysis: Korra
Our Evidence Pack Sprint delivers board-ready compliance evidence mapped to all 72 NIST AI RMF subcategories. Proof your controls work, not just that policies exist.