Template • Updated December 2025

AI Security Questionnaire Template

80+ questions for AI vendor assessment. Aligned with SIG 2024, MITRE ATLAS, OWASP LLM Top 10, and EU AI Act requirements.

20 min read 80+ questions

Executive Summary

77% of organizations identified AI-related security breaches in the past year (HiddenLayer AI Threat Landscape Report, 2024), yet standard security questionnaires don't cover AI-specific risks. The SIG 2024 update added a dedicated AI risk domain based on NIST AI RMF, but most organizations still use inadequate assessment tools.

This questionnaire synthesizes requirements from SIG 2024, MITRE ATLAS, OWASP LLM Top 10, and the EU AI Act into 80+ questions across 8 categories. Organizations using security AI and automation extensively incur $2.2M less in breach costs than those without (IBM 2024).[1]

77%
AI Breaches Identified
HiddenLayer 2024
$4.88M
Avg Breach Cost
IBM 2024
87%
Escalate on Non-Response
Secureframe TPRM 2024
$2.2M
Cost Savings
With AI security tools (IBM)

Download the Complete AI Security Questionnaire

80+ questions aligned with SIG 2024, MITRE ATLAS, OWASP LLM Top 10, and EU AI Act. Excel format with scoring rubric and evidence requirements.

Download Free Template
80+ questions 8 risk categories Framework mapping

Why AI-Specific Questionnaires Matter

Traditional security questionnaires like SIG, CAIQ, or custom forms focus on IT infrastructure: network security, access controls, encryption, incident response. These remain essential, but AI systems introduce entirely new risk categories that standard assessments don't cover.

The Evidence Gap

77%

Organizations identified breaches to their AI systems in the past year (HiddenLayer 2024)[2]

61%

Experienced third-party data breach or security incident (Prevalent TPRM Study 2024)[3]

Standard vs. AI-Specific Assessment

Traditional Security AI-Specific Security Framework
Input validation (SQL injection) Prompt injection defenses OWASP #1
Data encryption at rest Training data governance ATLAS
Access control policies Model capability restrictions NIST AI RMF
Output sanitization Hallucination detection NIST 600-1
Software SBOM ML-BOM / AI-BOM CycloneDX
SOC 2 / ISO 27001 EU AI Act / ISO 42001 EU AI Act
Penetration testing AI red teaming ATLAS

SOC 2 alone isn't enough for AI systems. According to IBM's 2024 Cost of Data Breach Report, organizations using security AI and automation extensively detected and contained breaches 98 days faster and saved $2.2 million compared to those without.[1]

Framework Alignment

This questionnaire aligns with the major AI security frameworks. The SIG 2024 update added a dedicated AI risk domain—the first major vendor questionnaire to do so.[4]

SIG

SIG 2024

New AI risk domain added with Supply Chain Risk Management. 600+ questions across 21 risk categories. AI domain based on NIST AI RMF.[4]

Shared Assessments
ATLAS

MITRE ATLAS

Adversarial Threat Landscape for AI Systems. Knowledge base of tactics and techniques from real-world attacks and AI red teams.[5]

MITRE Corporation
OWASP

OWASP LLM Top 10

Updated 2025 with 10 critical security risks including prompt injection (#1), sensitive information disclosure, and supply chain vulnerabilities.[6]

OWASP Foundation
NIST

NIST AI RMF + 600-1

AI Risk Management Framework with Generative AI Profile. 72 subcategories across GOVERN, MAP, MEASURE, MANAGE functions.[7]

US Government

Framework Gap Alert

Analysis of AI model documentation from 5 frontier models and 100 Hugging Face model cards identified 947 unique section names with extreme naming variation—usage information alone appeared under 97 different labels. Standardization is essential.[8]

Question Categories

The GLACIS AI Security Questionnaire covers eight categories, each mapped to the relevant frameworks:

MS

Model Security

Prompt injection, jailbreaking, output filtering, adversarial robustness

OWASP ATLAS
DG

Data Governance

Training data, customer data handling, retention, PII/PHI protection

NIST EU AI Act
CR

Compliance & Regulatory

EU AI Act, NIST AI RMF, ISO 42001, industry-specific requirements

EU AI Act NIST
SC

Supply Chain

Model provenance, ML-BOM, third-party dependencies, attestations

ATLAS OWASP
BF

Bias & Fairness

Bias testing, discrimination audits, fairness metrics, documentation

EU AI Act NYC LL144
TR

Transparency

Model cards, system documentation, disclosure requirements

EU AI Act CA SB 1047
OS

Operational Security

Monitoring, incident response, human oversight, model updates

NIST SIG
RT

Red Teaming

Adversarial testing, vulnerability assessment, remediation tracking

ATLAS EU AI Act

Model Security Questions

These questions assess defenses against attacks targeting the AI model itself, aligned with OWASP LLM Top 10 and MITRE ATLAS.

MS

Model Security

OWASP • ATLAS

MS-1: What defenses are in place against prompt injection attacks? Describe both direct and indirect injection protections.

OWASP LLM01 Evidence: Input validation documentation, filtering rules

MS-2: How are jailbreaking attempts detected and prevented? What testing has been performed against known jailbreak techniques?

ATLAS AML.T0054 Evidence: Red team report, jailbreak test results

MS-3: What output filtering is applied before responses are returned to users? Are outputs scanned for harmful content, PII, and policy violations?

OWASP LLM02 Evidence: Output filter configuration, content policy

MS-4: How is the system prompt protected from extraction? What evidence demonstrates its effectiveness?

ATLAS AML.T0051 Evidence: Extraction test results, prompt protection config

MS-5: What controls prevent the model from taking unauthorized actions (tools, APIs, system access)? How are capabilities restricted?

OWASP LLM08 Evidence: Tool access policies, capability matrix

MS-6: How is the model protected against adversarial inputs designed to cause incorrect outputs (adversarial examples)?

ATLAS AML.T0015 Evidence: Robustness testing results

MS-7: What rate limiting, abuse prevention, and anomaly detection measures are implemented?

NIST MANAGE Evidence: Rate limit configuration, abuse detection rules

MS-8: Has the system undergone AI red teaming? Provide summary findings and remediation status.

EU AI Act Art. 55 Evidence: Red team report, remediation log

MS-9: How are multi-turn attacks and conversation manipulation detected and prevented?

ATLAS Evidence: Multi-turn test results, conversation monitoring

MS-10: What safeguards prevent the model from generating hallucinated or fabricated information?

NIST 600-1 Evidence: Grounding mechanisms, citation requirements

Data Governance Questions

These questions assess how data is handled throughout the AI lifecycle, from training to inference.

DG

Data Governance

NIST • EU AI Act

DG-1: What data is used for training/fine-tuning? Is any customer data used? Provide complete data provenance documentation.

NIST MAP 1.5 Evidence: Training data catalog, provenance records

DG-2: How is customer data isolated from other customers (multi-tenancy controls)? What prevents cross-tenant data leakage?

OWASP LLM06 Evidence: Isolation architecture, tenant separation tests

DG-3: What is the data retention policy for user prompts and model outputs? How is deletion enforced?

EU AI Act Art. 12 Evidence: Retention policy, deletion procedures

DG-4: Can customers opt out of data being used for training? How is this enforced and audited?

GDPR Art. 21 Evidence: Opt-out mechanism, compliance audit

DG-5: What PII/PHI detection is performed on inputs and outputs? What redaction or masking is applied?

OWASP LLM06 Evidence: PII detection rules, masking config

DG-6: How is training data provenance documented? Is the origin of all training data known and verified?

ATLAS AML.T0019 Evidence: Data provenance records, source verification

DG-7: What controls prevent training data poisoning attacks?

ATLAS AML.T0020 Evidence: Data validation pipeline, poisoning detection

DG-8: For healthcare: Is there a BAA available? What PHI protections are in place?

HIPAA Evidence: BAA, PHI handling procedures, encryption config

DG-9: What controls exist to prevent the model from memorizing and reproducing training data (extraction attacks)?

ATLAS AML.T0024 Evidence: Memorization tests, extraction prevention

Supply Chain & Provenance Questions

Supply chain risk is a critical blind spot. In 2024, a supply chain compromise of a published AI library led to users unknowingly installing cryptocurrency mining malware. Only 24% of organizations apply comprehensive evaluations to AI-generated code.[9]

SC

Supply Chain

OWASP • ATLAS

SC-1: What is the provenance of the AI model(s) used? Are they custom-trained, fine-tuned, or third-party?

OWASP LLM05 Evidence: Model provenance documentation, vendor contracts

SC-2: Is an ML-BOM (Machine Learning Bill of Materials) maintained? Does it include datasets, models, and code dependencies?

CycloneDX Evidence: ML-BOM artifact, component inventory

SC-3: How are third-party AI models validated before use? What security testing is performed?

ATLAS AML.T0011 Evidence: Model validation reports, security scan results

SC-4: What controls protect against backdoored or trojanized models?

ATLAS AML.T0010 Evidence: Trojan detection scans, model integrity checks

SC-5: Are cryptographic attestations used to verify model and data integrity (SLSA, Sigstore)?

SLSA Evidence: Attestation records, signature verification

SC-6: How are model updates verified and tested before deployment?

NIST GOVERN Evidence: Update verification procedures, test reports

SC-7: What visibility exists into Nth-party dependencies (your vendor's vendors)?

OWASP LLM05 Evidence: Dependency map, sub-processor agreements

Bias & Fairness Questions

Bias audits are now a regulatory requirement. NYC Local Law 144 requires independent bias audits before using AI in hiring, with penalties of $500–$1,500 per violation per day. Audits typically cost $20,000–$75,000 depending on system complexity.[10]

BF

Bias & Fairness

EU AI Act • NYC LL144

BF-1: Has the system undergone independent bias testing? Provide audit results for protected characteristics (gender, race, age, disability).

NYC LL144 Evidence: Bias audit report, impact ratio calculations

BF-2: What fairness metrics are used (demographic parity, equalized odds, individual fairness)? How are thresholds determined?

EU AI Act Evidence: Fairness metric definitions, threshold documentation

BF-3: How is training data evaluated for representativeness and potential bias amplification?

NIST MEASURE Evidence: Training data analysis, demographic breakdown

BF-4: What bias mitigation techniques are applied (pre-processing, in-processing, post-processing)?

NIST MAP Evidence: Mitigation technique documentation

BF-5: Is bias monitoring continuous? How often are bias metrics recalculated in production?

EU AI Act Evidence: Monitoring dashboard, recalculation schedule

BF-6: What tools are used for bias detection (IBM AI Fairness 360, Microsoft Fairlearn, Aequitas)?

Best Practice Evidence: Tool configuration, detection methodology

BF-7: How are bias incidents reported, investigated, and remediated?

NIST MANAGE Evidence: Incident response procedures, remediation log

Transparency & Model Cards

California's new AI disclosure law (2025) requires transparency reports with penalties up to $1 million per violation. The EU AI Act Article 50 establishes transparency requirements extending to businesses outside the EU using AI within the EU.[11]

TR

Transparency

EU AI Act • CA SB 1047

TR-1: Is a model card available? Does it include intended uses, limitations, training data summary, and evaluation results?

EU AI Act Ann. IV Evidence: Model card, system documentation

TR-2: Is documentation available in machine-readable format (JSON) for programmatic analysis?

Best Practice Evidence: JSON schema, API documentation

TR-3: Are users notified when they are interacting with an AI system (disclosure requirements)?

EU AI Act Art. 50 Evidence: User notification implementation, UI screenshots

TR-4: Is safety evaluation documentation publicly available? What testing methodology is described?

CA SB 1047 Evidence: Safety evaluation report, testing methodology

TR-5: How is documentation updated when the model is substantially revised?

EU AI Act Evidence: Version control, update procedures

TR-6: What information is disclosed about model capabilities, limitations, and known failure modes?

NIST MAP 3.4 Evidence: Capability documentation, limitation disclosures

Compliance & Regulatory Questions

These questions assess alignment with AI-specific regulations and standards. Financial services leads AI governance adoption, with FS-ISAC publishing dedicated AI vendor assessment guidance and major banks implementing formal AI risk classification frameworks.[13]

CR

Compliance & Regulatory

EU AI Act • NIST • SIG

CR-1: How is the system classified under the EU AI Act? What compliance measures are in place for that risk tier?

EU AI Act Art. 6 Evidence: Risk classification, conformity assessment

CR-2: Is the organization aligned with NIST AI RMF? Describe implementation status across all four functions.

NIST AI RMF Evidence: Function implementation matrix, gap analysis

CR-3: Is the organization ISO 42001 certified or pursuing certification?

ISO 42001 Evidence: Certificate or certification timeline

CR-4: How does the organization comply with the Colorado AI Act safe harbor provisions?

Colorado SB 205 Evidence: NIST AI RMF alignment documentation

CR-5: What documentation is maintained for regulatory compliance (risk assessments, impact assessments, conformity assessments)?

EU AI Act Ann. IV Evidence: Compliance documentation package

CR-6: Are there any pending regulatory actions or findings related to AI systems?

Due Diligence Evidence: Regulatory status attestation

CR-7: What industry-specific AI regulations apply (healthcare, financial services, employment)?

FINRA • HIPAA Evidence: Industry-specific compliance documentation

CR-8: How are regulatory changes monitored and incorporated into the system?

NIST GOVERN Evidence: Regulatory monitoring procedures

Operational Security Questions

These questions assess ongoing security operations and human oversight.

OS

Operational Security

NIST • SIG

OS-1: What monitoring is in place for AI system behavior? What metrics are tracked and alerted on?

NIST MEASURE Evidence: Monitoring dashboard, alerting rules

OS-2: How are AI-related security incidents detected, triaged, and responded to? Is there an AI-specific incident response plan?

NIST MANAGE Evidence: Incident response plan, runbooks

OS-3: What human oversight exists for AI decisions? When is human review required?

EU AI Act Art. 14 Evidence: Human oversight procedures, escalation matrix

OS-4: How is model drift monitored? What triggers model review or retraining?

NIST MEASURE Evidence: Drift monitoring config, retraining triggers

OS-5: What audit logging is maintained? How long are logs retained? Are they tamper-evident?

EU AI Act Art. 12 Evidence: Logging configuration, retention policy

OS-6: Can customers access logs of AI interactions with their data?

Best Practice Evidence: Customer log access mechanism

OS-7: What is the process for reporting AI safety concerns (internal and external)?

NIST GOVERN Evidence: Reporting procedures, whistleblower protections

OS-8: How are AI model updates tested before deployment? What rollback capabilities exist?

NIST MANAGE Evidence: Deployment procedures, rollback tests

Scoring & Evaluation

Use this scoring framework to evaluate vendor responses. Each question should be scored based on both the control maturity and the evidence quality provided.

Score Criteria Evidence Required Action
4 Comprehensive controls with continuous monitoring Documented, tested, audited Accept
3 Controls in place, some evidence gaps Documented, partially tested Accept with monitoring
2 Some controls, significant gaps Partial documentation Remediation required
1 Few controls, major gaps Minimal or no evidence Material remediation
0 No controls or evidence None Reject

Category Weights

Adjust weights based on your risk profile. These defaults reflect a balanced approach:

Model Security OWASP, ATLAS
20%
Data Governance NIST, EU AI Act
15%
Supply Chain OWASP LLM05
15%
Bias & Fairness NYC LL144, EU AI Act
15%
Compliance EU AI Act, NIST
15%
Transparency EU AI Act Art. 50
10%
Operations NIST MANAGE
10%

Red Flags

Watch for vendors who claim "we use [major provider] so we inherit their security" without demonstrating application-layer controls. LLM security requires layered defenses. Also flag vendors who can't answer basic questions about prompt injection or training data governance.

Need Help Assessing AI Vendors?

Our Evidence Pack Sprint includes vendor security assessment templates, scoring frameworks, and expert review of vendor responses. Get compliance-ready vendor documentation.

Learn About the Evidence Pack

References

  1. [1] IBM. "Cost of a Data Breach Report 2024." July 2024. ($4.88M average, $2.2M savings with AI)
  2. [2] HiddenLayer. "AI Threat Landscape Report." March 2024. (77% identified AI breaches)
  3. [3] Prevalent. "Third-Party Risk Management Study." 2024. (61% third-party breaches)
  4. [4] Shared Assessments. "SIG 2024: Key Updates and Considerations." 2024.
  5. [5] MITRE Corporation. "MITRE ATLAS: Adversarial Threat Landscape for AI Systems." 2024.
  6. [6] OWASP Foundation. "OWASP Top 10 for Large Language Model Applications." 2025.
  7. [7] NIST. "AI Risk Management Framework (AI RMF 1.0)." January 2023.
  8. [8] "AI Transparency Atlas: Framework, Scoring, and Real-Time Model Card Evaluation Pipeline." arXiv, 2024.
  9. [9] ReversingLabs. "Secure Your AI Supply Chain with the ML-BOM." 2024.
  10. [10] NYC Department of Consumer and Worker Protection. "Local Law 144: Automated Employment Decision Tools." 2023.
  11. [11] California Legislature. "Transparency in Frontier AI Act (SB 53)." 2025. ($1M per violation)
  12. [12] Venminder. "State of Third-Party Risk Management 2025 Survey." 2025.
  13. [13] FS-ISAC. "Generative AI Vendor Risk Assessment Guide." February 2024.

Disclaimer: Statistics cited are from third-party research and may be subject to methodology limitations. All figures reflect data available as of publication date (December 2025). Organizations should conduct independent verification for compliance or legal purposes. This guide is for informational purposes only and does not constitute legal advice.