Blog
Insights and analysis on AI compliance, governance, and building the evidence layer for regulated industries.
Why our new CTO left Microsoft after 19 years
Rohit Tatachar joins as co-founder & CTO after nearly two decades at Azure. The inside story.
Healthcare AI is uninsurable
The first framework for underwriting healthcare AI risk. Four case studies. Three liability domains.
We couldn’t ship our own AI
Why we open-sourced auto-redteam and published OVERT 1.0 — the open standard for AI runtime trust.
Why autoredteam.com is an open-source commitment
Why open-source auto-hardening matters and how autoredteam.com connects to safer AI deployment at scale.
Voluntary AI safety just died
Anthropic abandoned its RSP. The voluntary era is over. Here’s what replaces it.
ViVE 2026: Healthcare AI gets asked for its receipts
We’re in LA Feb 22–25. The AI accountability conversation healthcare has been building toward.
2026: the year Healthcare AI gets real
JPM kicks off a pivotal year. State laws take effect, consent litigation accelerates, and governance committees want proof.
The three layers of AI security
Most AI security solutions cover runtime protection. But there’s a critical third layer.
The EU AI Act and healthcare: what to know
Most healthcare AI is classified as high-risk, triggering strict logging requirements.
When AI hallucinations become malpractice
“One beer at a wedding” becomes “daily heroin use.” Without evidence, who’s liable?
Why SOC 2 won’t protect you from AI risk
SOC 2 and HITRUST are essential for IT security. But they weren’t designed for AI.
The Colorado AI Act for healthcare vendors
Colorado's AI Act, the first comprehensive US state law regulating high-risk AI, takes effect June 30, 2026.
Building AI trust through evidence
The difference between “we have guardrails” and “here’s proof.”
How we used AI without a BAA
Deploying an in-line redaction proxy that strips PHI before it reaches external APIs.
Why we built GLACIS on Cloudflare
Global latency, edge compute, and enterprise security via Cloudflare Workers Launchpad.
Free AI runtime security assessment
Discover your AI runtime security posture with our free 2-minute assessment. Get your score and personalized recommendations.
ISO 42001: is certification worth it?
Costs, benefits, and limitations. When certification makes sense vs. using the framework internally.
Ready to unblock your deals?
The Evidence Pack Sprint gives AI vendors board-ready compliance evidence in days — for deals, audits, and internal assurance.
Learn About Evidence Pack Sprint