Our co-founder and CEO, Justin Norden, MD, MBA, MPhil, just released a new episode of the Stanford Health Care AI podcast: Tracking and Trusting AI in Medicine.
He and co-host Matt Lungren, MD, MPH, of Microsoft sat down with Shantanu Nundy, a physician, technologist, and AI advisor to the FDA.
Three takeaways stood out:
1️⃣ AI demand is already pervasive, independent of health-system readiness: Roughly 5–10% of ChatGPT queries are health-related, reflecting real adoption by both patients and clinicians outside any formal oversight. The relevant comparison isn’t to an imaginary, risk-free status quo; it’s to a system where limited access, diagnostic variability, and medical errors are longstanding, well-documented sources of harm. Treating “non-deployment” as the safer option ignores this baseline.
2️⃣ Model accuracy is no longer the limiting factor; socio-technical design is: The sepsis-alert case illustrates the core issue: the algorithm fired correctly, but the alert was dismissed amid routine alert fatigue and workflow noise. The failure mode was human-system interaction, not model capability. The next constraint to solve is embedding AI into clinical pathways with clear prioritization, credible signal, and alignment with professional norms. Otherwise, even high-performing models will underdeliver.
3️⃣ Regulatory focus is shifting toward real-world performance and observability: The FDA’s emerging posture emphasizes post-market evaluation over one-time testing, which depends on infrastructure many organizations lack: a registry of which AI tools are in use, user-level and encounter-level metadata, model versioning, and the ability to link inputs and outputs to outcomes. Absent this foundational “plumbing,” it becomes impossible to detect degradation, bias emergence, or context-specific failure modes at scale.
How is your organization approaching real-world monitoring and AI governance? We’d love to hear what’s working and where you’re stuck in the comments.
🎧 Listen to the full Stanford Health Care AI podcast below 👇