Panel Recap · 18 min read

Your AI Needs an Alibi

Washington State’s Chief Privacy Officer. A CMIO overseeing 12 hospitals. A Medicare payer executive. A health tech attorney. They walked into a room and described exactly what we’re building—without knowing it exists.

Joe Braidwood, Co-founder & CEO

Tuesday night in Seattle. A room full of healthcare builders, operators, and founders gathered at the AI House for a panel convened by SeaHealthTech. The event’s title—“From Promises to Proof in Healthcare”—captured a tension that everyone in the room felt but few could articulate precisely: healthcare AI adoption is accelerating faster than any other industry, but the infrastructure to prove these systems are safe, accountable, and compliant barely exists.

Moderator Neha Rajdev, co-founder of SeaHealthTech, opened with a striking statistic: according to Menlo Ventures, healthcare is adopting AI applications 2.2 times faster than any other industry. Two-thirds of physicians are already using AI tools. Two-thirds of hospital systems have deployed some form of AI technology. “This is the first time in history there has been no mandate, no regulation, and AI applications are organically jumping up,” she said.

And then the question that hung over the entire evening: “What happens when a vendor puts an application into your hospital system, and then you ask, ‘Can you tell me exactly what the algorithm had when that patient had this decision made at this time?’ Most of the times there will be no answer. It’s a gap.”

That gap—between what AI vendors claim and what organizations can prove—is exactly what the panel spent the next hour dissecting. Not promises of safety and accountability. Proof.

The panel was stacked with exactly the right people to pull this apart from every angle. Dr. Michael Han, CMIO of MultiCare Health System—12 hospitals, more than 300 clinics, one of the oldest Epic installations in the country. Corinne Stroum, head of emerging technologies at SCAN Health Plan, one of the nation’s largest Medicare Advantage organizations. Jefferson Lin, healthcare technology attorney at Scale LLP, who spends his days red-lining AI vendor contracts. Our own Dr. Jennifer Shannon, physician and GLACIS co-founder, who brought the patient-level reality of what happens when AI fails in the exam room. And Katy Ruckle, Washington State’s Chief Privacy Officer—a national leader in AI governance who sits on the state’s AI Task Force, leads the implementation of the governor’s AI executive order, and who, as the evening would reveal, is quietly building one of the most sophisticated state-level AI regulatory frameworks in the country.

Full Panel: AI Governance in Healthcare — SeaHealthTech at the AI House, Seattle. February 3, 2026.

“From Promises to Proof in Healthcare” at the AI House, Seattle. From left: Neha Rajdev (moderator), Katy Ruckle, Jefferson Lin, Corinne Stroum, Dr. Jennifer Shannon, and Dr. Michael Han.

Governance Gets You Through the Door. Then What?

Dr. Han opened with a story that anyone in health system procurement will recognize—and one that reveals why post-deployment monitoring matters more than most people think.

MultiCare undertook one of the most ambitious ambient scribe evaluations in the country. They enrolled 550 providers across three competing solutions for a head-to-head comparison. Cleveland Clinic did something similar with five vendors, sequentially rather than simultaneously. Both organizations ultimately selected Ambience. The process was rigorous, data-driven, and thorough—Han’s team even measured Levenshtein distance on edited notes and found a bimodal distribution, with most providers changing fewer than 20–30 characters but outliers changing over a thousand.
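For a sense of how lightweight that measurement is, here is a minimal sketch in Python of the kind of analysis Han described: compute the edit distance between each AI-generated draft and the note the physician actually signed, then count how many encounters fall in the light-edit band versus the heavy-edit tail. The function names and the (ai_draft, signed_note) pairing are assumptions for illustration, not MultiCare’s actual pipeline.

```python
# Illustrative sketch only: quantifying provider edits to AI-drafted notes
# with Levenshtein distance. Names like ai_draft / signed_note are hypothetical.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def edit_profile(note_pairs: list[tuple[str, str]], light_max: int = 30, heavy_min: int = 1000) -> dict:
    """Bucket encounters into light edits vs. heavy-edit outliers (thresholds from the panel anecdote)."""
    distances = [levenshtein(ai_draft, signed_note) for ai_draft, signed_note in note_pairs]
    return {
        "encounters": len(distances),
        "light_edits": sum(d <= light_max for d in distances),
        "heavy_outliers": sum(d >= heavy_min for d in distances),
    }
```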

But Han’s point wasn’t about the evaluation. It was about what comes after.

“That’s intake—that’s bringing a new vendor, a new capability on board. But it says nothing about what you do once that vendor is in your stack. Does it comply with what your contract says it’s supposed to? Does it comply with rules and regulations? Does it drift? Is it biased? Is it safe? Is it effective? Is it producing an ROI?”

DR. MICHAEL HAN — CMIO, MultiCare

Han calls himself the “CM-I-No”—the person who reflexively says no to every bright shiny object a vendor places in front of him, because he knows his providers will reject anything they don’t trust. His AI Governance Committee at MultiCare is the gatekeeper. “In order to get in the door, there’s a lot of hoops you have to jump through, from a security standpoint, from a compliance standpoint. If you don’t know that HIPAA is two A’s as opposed to two P’s, then you’re not ready.”

But security is just the first step. Beyond that, there’s a new software request process, a data governance committee review, an AI governance review, an architecture review, a CFO review. “And this isn’t even contracting,” Han noted. “This isn’t even BAAs and MSAs and alphabet soup. This is just getting through new software vendor, security questionnaire, data governance committee, and AI governance. It takes a lot of grit.”

The vendors who make it through that gauntlet earn a rare thing: trust. But trust without ongoing verification is fragile, and that was Han’s point. Procurement evaluates a moment in time. What happens on day 31, day 90, day 365? His public acknowledgment of that monitoring gap carries weight precisely because he is the skeptic, the person with one of the hardest front doors in the industry. Even he can’t see what’s happening after a vendor is live.

Corinne Stroum from SCAN reinforced the point from the payer side. “The first question eliminates about 90% of potential vendors,” she said. That question: do you have a CISO? “You can give us this top-tier technology, but fail us on the basics. Penetration testing. Simulated phishing against your employees. Multifactor authentication. Making sure your antivirus software is up to date, instead of asking your employees to bring their own machines.” Her advice to startups was blunt: “Get a fractional CISO. Get someone in who has done this before. Not, ‘Well, we’ve got this guy Ted, and he really likes doing the security.’”

When AI Hallucinates in the Exam Room

Dr. Shannon made the problem concrete with a story that made the room go quiet.

The first time she used an ambient scribe in her practice, it transcribed that she was prescribing Lamictal for PTSD, an indication the drug does not have, and generated it directly in the clinical note. She caught it. She was able to go back to the raw transcript and verify that nothing in her actual conversation would have prompted that output.

But catching the error wasn’t enough. After twenty years of practice, her instinct wasn’t just to fix the note; it was to demand forensic reconstruction.

“I really wanted to be able to go back and reconstruct—were there safety controls at that time? What was actually happening when the AI made that decision?”

DR. JENNIFER SHANNON — Physician & GLACIS Co-founder

This is the difference between claiming safety and having proof of it. A hallucination that puts the wrong drug indication in a medical record isn’t an edge case. It’s the kind of failure that becomes a lawsuit, a board inquiry, and a patient safety event. And today, most organizations have no forensic trail to reconstruct what happened.

Shannon articulated what physicians feel but rarely say in public: the current documentation paradigm—“if you don’t document it, you can’t prove it happened”—now has a dangerous corollary. If the AI documents something that didn’t happen, how do you prove it didn’t? “The gap is patient and physician safety—the proof gap,” she said. “That’s what I would like to see as evidence from the vendors that we work with.”

The Payer Sees It Too

Corinne Stroum from SCAN was the sleeper signal of the evening. Without knowing what GLACIS builds, she independently described nearly every element of the continuous attestation thesis—from the payer side.

On the lifecycle problem, she was blunt: “AI governance is there for the entire lifecycle. Those same groups that you had to get in the door—you’re actually just running on a treadmill and you didn’t know.” She described the emerging discipline: “One of the hottest new terms right now is evals—this capacity to build monitors on what is actually happening with this technology.”

She picked up on a word from earlier in the discussion—drift—and gave it conceptual weight: “I like the symbolism of this word, because it implies that we are dealing with something dynamic, whose answers may change over time.” Dynamic systems require dynamic monitoring. You don’t take a single x-ray of something that’s constantly moving. You put it on continuous telemetry.
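To make the telemetry analogy concrete, here is a minimal sketch of what a drift monitor could look like: freeze a baseline distribution of some per-encounter metric (note length, edit distance, a model confidence score), then compare a rolling window of recent values against it with the population stability index. The metric choice, window size, and alert threshold are illustrative assumptions, not any vendor’s production configuration.

```python
# Minimal sketch of continuous drift telemetry using the population stability index (PSI).
import math
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: list[float], bins: int = 10, window: int = 500):
        lo, hi = min(baseline), max(baseline)
        self.edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
        self.baseline = self._histogram(baseline)
        self.recent = deque(maxlen=window)

    def _bucket(self, x: float) -> int:
        return sum(x > e for e in self.edges)

    def _histogram(self, values: list[float]) -> list[float]:
        counts = [0] * (len(self.edges) + 1)
        for v in values:
            counts[self._bucket(v)] += 1
        total = max(len(values), 1)
        return [(c + 1e-6) / total for c in counts]  # smooth empty bins

    def observe(self, value: float) -> float | None:
        """Record one observation; return the PSI once the rolling window is full."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return None
        current = self._histogram(list(self.recent))
        return sum((c - b) * math.log(c / b) for b, c in zip(self.baseline, current))

# Rule of thumb (an assumption, not a standard): PSI above roughly 0.2 is often
# treated as drift worth an alert; the right threshold depends on the metric.
```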

On what monitoring looks like in practice, Stroum outlined a multilayered approach that extends far beyond engineering teams. “It’s not just your techies who are running synthetic tests. It’s also your legal team. My favorite is legal red-teaming—go find ways that this might get us in trouble, and have them do that sort of testing on an ongoing basis.” That’s a payer telling the market that compliance isn’t a one-time exercise—it’s an ongoing adversarial process.

Stroum also reframed the entire motivation behind AI adoption with a line that brought the room together: “This idea that we can do better is what attracts people, regardless of background, towards AI. They’re not excited necessarily about the AI. They’re excited about breaking the processes that we hate.” The flip side of that excitement, she warned, requires vigilance: “We are trying on something that maybe feels a little unsteady to us. Keep that in the back of your mind.”

On the payer’s specific mandate, Stroum framed it in terms of trust and service: “We want to instill trust, retention, and satisfaction. Our job is to remove friction when one of our seniors is trying to access care or understand their benefits.” AI enables faster responses to physician partners, communications in a member’s native language, at an appropriate literacy level, sensitive to their context. But every one of those capabilities needs to be monitored for the same safety and accuracy problems that haunt the provider side.

When a Medicare payer is describing your product category—continuous lifecycle monitoring, evals, legal red-teaming, drift detection—without knowing you exist, you’re building the right thing.

Washington Is Already Writing the Playbook

This is where the evening got genuinely surprising.

Katy Ruckle is Washington State’s Chief Privacy Officer—a role that, in a state with one of the most active AI policy environments in the country, carries national significance. She sits on the state’s AI Task Force. She leads the implementation of the governor’s AI executive order. She was named to the AI 50 by the Center for Public Sector AI. She’s also a licensed attorney with deep experience directing privacy programs for the Department of Social and Health Services.

What she described on Tuesday night wasn’t aspirational policy language or draft proposals. It was operational reality. Washington State is already requiring AI governance frameworks from vendors, already building interdisciplinary review processes for AI deployments, and already advancing legislation that will formalize requirements many organizations haven’t begun to think about. For anyone deploying AI in healthcare—or any high-risk context—what Ruckle laid out is a preview of what’s coming nationally.

NIST AI RMF: Not Optional, Not Future—Now

The foundation of Washington’s approach is the NIST AI Risk Management Framework. Unlike jurisdictions that reference NIST as a suggestion or a best practice, Washington has made it the operating standard for all state agencies.

“In the state of Washington, we’re focusing on the responsible AI practices that come from the NIST AI Risk Management Framework. That’s been baked into how all our state agencies are required to operationalize the NIST AI principles—and that’s also a recommendation from the AI Task Force to the state legislature to have that be adopted for use in law in Washington.”

KATY RUCKLE — Chief Privacy Officer, Washington State

For state agencies, this isn’t a recommendation; it’s a requirement already in effect. And because the Task Force has formally recommended that the legislature adopt the framework into law, codification is likely. For organizations that have been treating NIST AI RMF as optional reading, the window to get ahead of this is closing.

When organizations ask “which framework should we follow,” Ruckle’s answer was straightforward: “NIST is actually very well respected in industry. Part of the reason is it is also free, versus ISO and some of the other ones you have to pay for.” For startups and smaller organizations navigating the alphabet soup of compliance frameworks, this is practical, actionable guidance from the person writing state policy.

Vendor Certification: The Door Is Already Closing

For any vendor selling AI into government—or into healthcare organizations that work with government—Ruckle revealed a detail that should change how companies think about their compliance posture immediately:

“We are in a place right now where, when we’re contracting, we have to get certification from vendors that they’re using an AI governance program like NIST AI Risk Management Framework, or something consistent, like the ISO standards. We’re trying to bake in that AI governance, no matter how you’re thinking about the use of AI—but especially around high-risk use cases.”

KATY RUCKLE — Chief Privacy Officer, Washington State

And on the question of whether healthcare qualifies as high-risk, Ruckle was unambiguous: “In healthcare, you’re almost always going to be walking into the high-risk space.”

This means that if you’re building or deploying AI in healthcare and you don’t have a documented AI governance program aligned with NIST AI RMF or ISO 42001, you are already behind. Not behind some future regulatory requirement. Behind what’s happening in procurement right now. Washington isn’t waiting for federal legislation. The requirements are live.

The panel discussed vendor certification, risk assessments, and what Washington State is already requiring from AI vendors.

What’s Coming: Risk Assessments, Bias Protections, Biometric Safeguards

Beyond what’s already operational, Ruckle outlined the legislative landscape for the current session:

“Our lawmakers are looking at AI bills this session that include requiring pretty extensive risk assessments on uses for high-risk AI cases—concerns around algorithmic discrimination and bias—and especially around any uses of biometrics, where you can get into some really high-risk areas, including facial recognition.”

KATY RUCKLE — Chief Privacy Officer, Washington State

For organizations deploying AI in clinical settings, employment decisions, insurance underwriting, or any patient-facing context, the message is clear: risk assessments aren’t going to be optional. And the scope extends beyond the model itself to the data, the deployment context, and the potential for discriminatory outcomes. Washington has already been active on biometrics and facial recognition for law enforcement, and those protections are expanding to healthcare and other high-risk sectors.

The Private Right of Action: Washington’s Balanced Approach

One of the most nuanced moments of the evening was Ruckle’s discussion of private rights of action—the legal mechanism that determines whether individuals can sue directly under a law, or whether enforcement is limited to government agencies. This is the detail that keeps general counsels up at night, and Washington’s experience is instructive for anyone tracking state-level AI regulation nationwide.

Washington introduced the Washington Privacy Act in 2020, which became a model for approximately 25 other states across the US. But it failed to pass in Washington itself—in part because it didn’t include a private right of action. The My Health, My Data Act, which did pass, took a different approach. “We tied it to the Consumer Protection Act,” Ruckle explained, “and so you have to actually still prove unfair and deceptive practice, and that the harm was caused. And so we haven’t seen the flood of lawsuits that is the fear of having a private right of action.”

This balanced approach—enabling accountability without creating a litigation free-for-all—is now being applied to AI-specific legislation. “We’re hoping to accomplish the same thing with the AI chatbot bill that’s been introduced this year,” Ruckle said.

For organizations, this means the risk isn’t binary. It’s not “there are no AI laws” versus “everyone can sue you.” It’s a nuanced framework where demonstrating due diligence, documented governance, and ongoing compliance monitoring becomes your best defense—regardless of whether the enforcement mechanism is a government agency or a private plaintiff.

Transparency Without Exposing the Secret Sauce

Ruckle also addressed one of the most common objections from vendors: the fear that transparency requirements will expose proprietary information. Her response pointed to an emerging consensus that regulators understand the tension—but aren’t going to let it be a blanket defense:

“That’s where we’re talking about more like nutrition labels, where you’re not necessarily sharing the secret sauce recipe, but really sharing the basics of what the model contains—categories of data, training elements, those types of things. So you have the model cards.”

KATY RUCKLE — Chief Privacy Officer, Washington State

This “nutrition label” concept is gaining traction across multiple regulatory jurisdictions. Organizations deploying AI will need to articulate what their systems do, what data they use, and how they make decisions, even if the underlying architecture remains confidential. The regulatory message, in effect: we’re not asking for the recipe, we’re asking for the ingredients list.
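As a rough sketch of what that ingredients list might contain, consider a model card expressed as structured data. The product name, fields, and values below are hypothetical, chosen to show the level of disclosure being discussed; real filings would follow whatever schema a regulator or purchaser specifies.

```python
# Hypothetical model card ("nutrition label"): the ingredients, not the recipe.
AMBIENT_SCRIBE_MODEL_CARD = {
    "system": "example-ambient-scribe",  # hypothetical product name
    "intended_use": "Draft clinical notes from recorded encounters; physician review required before signing.",
    "out_of_scope": ["autonomous diagnosis", "medication ordering"],
    "data_categories": ["encounter audio", "clinician edits", "EHR problem lists"],
    "training_data_summary": "De-identified encounter transcripts; composition disclosed by category, not by record.",
    "known_limitations": ["hallucinated drug indications", "reduced accuracy on heavily accented speech"],
    "evaluation": {
        "hallucination_rate": "reported per release",
        "edit_distance_distribution": "monitored post-deployment",
    },
    "governance": {"framework": "NIST AI RMF", "last_risk_assessment": "2026-01"},
}
```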

Interdisciplinary Review: Privacy, Security, Architecture, Accessibility

Washington is also building the institutional infrastructure to ensure AI governance isn’t siloed. Ruckle described the creation of interdisciplinary design review teams: “We are creating the design review team to incorporate the AI piece, to bring all those disciplines—privacy, security, architecture, technology—and then also the users and even accessibility. We’re doing it very intentionally.”

This mirrors what Han and Stroum described in their own organizations, and it signals that the expectation for AI governance is moving beyond a single team or checklist toward a genuinely cross-functional process. Ruckle connected this back to the liability question with a comment that should resonate with every CTO and compliance officer: “That’s why you hear about all the vetting that has to happen before you bring an application into your system. That’s because the liability that we have to hold when we bring the products into our environments.”

A Reassurance for Founders

Ruckle also had a direct message for the founders and startups in the room, many of whom expressed anxiety about the pace and complexity of the regulatory landscape:

“The regulations that I’ve been seeing are not that prescriptive. They’re really more about documenting your risk assessments and demonstrating due diligence. It’s not about being a burden—it’s about trying to help protect consumers from bad practices that might be happening with their data.”

KATY RUCKLE — Chief Privacy Officer, Washington State

This is significant coming from the person who helps write the rules. The message isn’t “regulation is coming and it will destroy your business.” It’s “regulation is coming, it’s reasonable, and if you’re already doing the right things—documenting your governance, assessing your risks, monitoring your deployments—you’re most of the way there.” The organizations that will struggle are the ones building without any governance infrastructure at all—the ones who, as Stroum put it, don’t even have a CISO.

The Regulatory Clock Is Ticking

Jefferson Lin laid out the specific regulatory timeline that’s bearing down on every healthcare AI deployment—and it’s not just Washington:

June 2026: Colorado AI Act Takes Effect
Impact assessments, documentation requirements around discrimination risk, performance metrics, three-year retention requirement. Safe harbor for organizations demonstrating NIST AI RMF or ISO 42001 compliance.

August 2026: EU AI Act High-Risk Obligations
Article 12 automatic logging requirements. Article 11 technical documentation. Article 19 record retention. Annex IV detailed requirements around training data and accuracy metrics. Hefty monetary penalties for non-compliance.

Ongoing: California CIPA Litigation
Sharp HealthCare precedent: $5,000 per violation under California’s 1960s wiretapping statute applied to ambient scribes. Default configuration created legal exposure. Private right of action enabled class action.

2026 Session: Washington State AI Bills
Risk assessments for high-risk AI, algorithmic discrimination protections, biometric safeguards, AI chatbot bill with balanced private right of action. NIST AI RMF already mandated for state agencies.

Feb 23, 2026: HHS Clinical AI Comments Due
Open request for information on clinical AI technical standards. An opportunity to shape federal policy. Lin noted “a lot more activity than I anticipated on that front.”

Lin connected attestation to precedents in other industries: “Finance with SWIFT and streaming attestation and fraud detection. Aviation—we use that analogy, the flight recorder, the black box to be able to go back and see what happened.” The pattern is clear across regulated industries: when the stakes are high enough, retrospective documentation gives way to real-time evidence capture. Healthcare is arriving at the same conclusion, just slower.

The Sharp HealthCare case loomed large. Lin detailed how plaintiffs used California’s Invasion of Privacy Act—a 1960s wiretapping statute—to file a class action related to ambient scribe deployment. “It’s not necessarily about AI being wrong or the output,” he noted. “It’s about how AI systems are deployed, how they’re configured.”

Han walked through the consent problem in operational detail. MultiCare’s workflow includes signage in clinics, language in annual privacy notices, and a provider obligation to obtain verbal consent before recording. But the fundamental paradox remains: you can’t start recording until after you receive consent, which means the consent itself never appears on the recording. Han proposed asking for consent twice—once before the recording starts, and again on the recording itself—so the ambient vendor can include the verification in the transcript. “It adds fifteen seconds. It’s cumbersome. But you can’t start recording until after you receive the consent.”

The Sharp lawsuit exposed what happens when a vendor’s default configuration fills that gap with an unverified assertion. “That’s a no-no,” Han said. “There’s no way of knowing.” That boundary between what was configured and what actually executed is exactly where a cryptographic attestation layer creates defensible proof. Not a checkbox that says “consent obtained.” A verifiable record of what the system was doing at the moment it mattered.
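What might that look like in practice? Here is a minimal sketch, assuming nothing about GLACIS’s actual design: each event, for example “verbal consent captured before recording started,” is appended to a log in which every record is hashed together with its predecessor, so any later edit, deletion, or reordering breaks the chain.

```python
# Minimal sketch of a tamper-evident attestation log (illustrative, not GLACIS's implementation).
import hashlib
import json
import time

def attest(prev_hash: str, event: dict) -> dict:
    """Create an append-only record that binds an event to its predecessor."""
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and link; an edited or reordered record fails."""
    prev = "genesis"
    for rec in chain:
        body = {"ts": rec["ts"], "event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Usage: record the consent event before the scribe starts, then prove it later.
chain = [attest("genesis", {"type": "consent_verified", "encounter": "demo-001"})]
chain.append(attest(chain[-1]["hash"], {"type": "recording_started", "encounter": "demo-001"}))
assert verify(chain)
```

The point isn’t the particular hash construction; it’s that “consent obtained” becomes a record you can re-verify later, rather than a checkbox someone once set.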

From Checklists to Continuous Proof

Shannon crystallized the gap between where the industry is and where it needs to be:

“We have these static checklist frameworks—we’re compliant, we follow NIST AI RMF. But when it really comes down to it, how are we implementing this, both from an operational standpoint and a technical standpoint? That’s where there isn’t a lot of detail in these frameworks.”

DR. JENNIFER SHANNON — Physician & GLACIS Co-founder

Ruckle confirmed this from the regulatory side—and named the exact gap:

“How you operationalize those pieces—that’s really where the rubber meets the road, where you’re really bringing in your controls or remediation to control against risk.”

KATY RUCKLE — Chief Privacy Officer, Washington State

Three different roles, the same conclusion. Ruckle named the operationalization gap from the regulatory side. Stroum described governance as a treadmill from the payer side. Han articulated it from the health system side: procurement tells you nothing about what happens once a vendor is live. Claiming compliance is not the same as proving it.

Shannon made the business case directly: “It’s much easier and cheaper and better to integrate that earlier into either the vendor’s infrastructure or hospital systems, versus after the fact, when you’re having that oh crap moment—what happened, and can we actually reconstruct it?”

Get ahead of the lawsuit. Build the evidence infrastructure now, while you still can. Because reconstructing what happened after a patient safety event, a regulatory inquiry, or a class action is orders of magnitude harder, and more expensive, than capturing it in real time.

Your AI Needs an Alibi

Shannon described the shift from x-ray to telemetry—from static, point-in-time snapshots of compliance to continuous, real-time monitoring of what AI systems are actually doing. It’s the difference between checking a patient’s vital signs once a year and having them on a continuous monitor. In critical care, the choice is obvious. In high-stakes AI deployment, we’re somehow still using the annual checkup model.

And then the line that names the category:

“Your AI needs an alibi… every AI decision having an alibi, and being able to know exactly where it’s been, what it’s doing.”

DR. JENNIFER SHANNON

Every decision, preserved. Every safeguard, proven. Not promises of safety and accountability—but proof.

“Build That. I’m In.”

The panel closed with a lightning round: what proof capability sounds like science fiction today?

Ruckle went the furthest: “Being able to see what is actually happening in the neural networks—to know how it’s making its decisions.” It’s a vision of mechanistic interpretability that even the leading AI labs haven’t fully achieved. But from a policy perspective, it signals where the regulatory expectation is heading. Today, we’re asked to document our risk assessments. Tomorrow, we may be asked to explain, at a technical level, why a model made a specific decision. The organizations building evidence infrastructure now will be better positioned when that expectation arrives.

Lin cited streaming attestation—real-time, continuous proof of what AI systems are doing, analogous to the financial industry’s fraud detection infrastructure.

Stroum described consensus-based outcomes: “Not just one AI, but a series of systems that are fact-checking against each other and their perception of reality from different perspectives.”

Shannon brought it back to the alibi.

And then Han closed the evening:

“I’m going to think about Minority Report… where I can monitor every app that says they have AI in my stack, and be able to monitor them for ROI, for drift, for safety, for efficacy, and be able to drill down… Joe, if you can build that, I’m in.”

DR. MICHAEL HAN — CMIO, MultiCare
The panelists after the discussion. AI House, Seattle. February 3, 2026.

That’s what we’re building.

The evidence layer for healthcare AI. A continuous attestation infrastructure that gives organizations forensic-grade proof of what their AI systems are actually doing—not what vendors claim, but what the evidence shows. Drift detection. Consent verification. Safety control attestation. All in real time, all cryptographically provable, all designed for the regulatory and legal scrutiny that’s already here.

Tuesday night wasn’t just a good panel. It was a room full of healthcare leaders—a CMIO responsible for AI across 12 hospitals, a payer executive managing AI governance for hundreds of thousands of Medicare members, a state privacy officer building the regulatory framework that other states will follow, and an attorney who spends his days drafting the contracts that sit between these organizations—independently describing the product we’ve been building.

A CMIO asked for it by name. A payer described it without knowing it exists. An attorney used our analogy. A state privacy officer confirmed that the regulatory floor is rising—and that organizations without documented, operational AI governance are already exposed. And a physician co-founder made the room understand why it matters at the patient level.

Proof, not promises. That’s the standard now.

If you’re navigating this same gap—between claiming compliance and proving it—we should talk.

See what continuous attestation looks like

GLACIS gives healthcare organizations forensic-grade proof of what their AI systems are actually doing—drift detection, consent verification, safety control attestation, all in real time. The evidence layer. The source of truth. Your first defense.

Talk to Us