Why Our New CTO Left Microsoft After 19 Years
Today’s news is out — and I finally get to talk about something I’ve been sitting on for months. Rohit Tatachar has joined Glacis as co-founder and CTO.
Before there was a company to join
What the GeekWire article doesn’t fully capture is how far back this goes for us.
I first met Rohit at a friend’s kid’s birthday party on September 21st last year. We got talking about Glacis — what we were building, why it mattered — and something clicked. A couple of weeks later, we sat down for brunch at Skillet in Seattle and he grilled me on every detail. The architecture. The business model. The regulatory landscape. The honest gaps.
Rohit was guiding my thinking about Glacis before there was a company to join. Last fall, when I was still working through the earliest architecture decisions and go-to-market questions, he was the person I kept calling. Not as a favour — because he genuinely cared about the problem and had a clear-eyed view of what it would take to solve it.
He decided to come aboard over the holidays while visiting family back in India. He spent most of Q1 leaning in — evenings, weekends, architecture reviews — before starting full-time last month. From day one he brought coherence and grounded expertise to what is, frankly, a fast-moving and somewhat chaotic environment in which to be shipping product to AI teams.
What he saw from inside Azure
Rohit spent nearly 19 years at Microsoft across two stints, most recently as a principal product manager on the Azure AI Foundry team — their platform for building and deploying enterprise AI applications and agents. He had a front-row seat to an industry-wide pattern.
Companies could build AI. They could run proofs of concept. But when it came time to move into production — the moment a model touches real decisions, real patients, real money — they hit a wall. They couldn’t explain or verify what their systems were doing once they were live.
Same challenge I’d faced from the startup side with Yara. Same challenge Jennifer was seeing in her clinic, where AI-powered ambient scribes were fabricating prescriptions in her clinical notes.
He didn’t just advise. He shaped how we think about what “runtime trust” actually means — not just detecting problems, but converging three dimensions into a single provable record:
Infrastructure baseline. What was the state of the environment when this AI decision was made? Configuration, model version, safety controls — the full context.
Model behavior. What did the model actually do? Not what it was supposed to do. What it did.
Intent drift. Is the system behaving the way it was intended to, even when the underlying model is functioning normally? This is the subtlest failure mode — the kind I discovered with Yara, where the model didn’t break. It just… thinned.
“It’s only when you converge these three that a customer has a real view of what actually happened,” Rohit told GeekWire.
That framework is now core to everything we build. Arbiter doesn’t monitor one dimension — it records all three and produces a tamper-proof receipt that ties them together.
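To make the convergence concrete, here's a minimal sketch in Rust (the language we're building in) of what a receipt tying the three dimensions together could look like. The names (`TrustReceipt`, its fields, `fingerprint`) are illustrative, not Arbiter's actual schema, and a stdlib hasher stands in for a real cryptographic commitment:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of a runtime-trust receipt. Field names are
// illustrative, not the real Arbiter schema.
#[derive(Hash)]
struct TrustReceipt {
    infra_baseline: String, // environment state: config, model version, safety controls
    model_behavior: String, // what the model actually did
    intent_check: String,   // verdict on drift from intended behaviour
}

impl TrustReceipt {
    // One fingerprint over all three dimensions: alter any single
    // dimension and the receipt no longer verifies. (A production
    // system would use a cryptographic hash, not DefaultHasher.)
    fn fingerprint(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        h.finish()
    }
}
```

The point of the single fingerprint is that no dimension can be rewritten after the fact without invalidating the record — which is what makes the receipt tamper-evident rather than just a log.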
From vision to execution
When Rohit told me he was ready to leave Microsoft and go all in, it was one of those moments where you realise the company just changed. We went from a team with a vision to a team that can execute it.
Jennifer brings the frontline clinical perspective — she’s the one who has to defend AI-generated notes in her practice. I bring the product and go-to-market lens. Rohit brings the technical leadership to build infrastructure that a regulated enterprise would trust with its production workload.
Alongside this, we’re releasing two things today:
- autoredteam.com — an open-source tool that automatically attacks your AI systems, generates fixes, and verifies they work
- OVERT 1.0 — a standard for observable verification evidence for runtime trust
We’re also opening a waitlist for self-serve plans starting at $49/month, because we believe this infrastructure shouldn’t only be available to regulated enterprises with six-figure budgets.
We’re five people, some Rust code, and a thesis that the world’s AI systems need a flight recorder. To Jennifer, Caer, Atreya, and the rest of the team — today is a good day.
Let’s go prove it.
Full coverage from Todd Bishop at GeekWire: