AI, Radiology, and the Stories We Tell

I’m a big believer in storytelling.

A good story sticks in a way raw data never does. We remember it long after the details fade. It builds trust because it shows you understand real life, not just the facts. And when people need to move together, a strong story becomes a shared compass.

But telling the story right matters. Some of you might remember Geoffrey Hinton in 2016 declaring that we should stop training radiologists because AI would soon read images better than they could. That moment set a narrative that lasted almost a decade.

Now, watch this:

I admire Jensen Huang for the progress he and NVIDIA have pushed forward. Their impact on AI is huge, and they’ll keep shaping how fast and how far the field grows. Still, Jensen’s recent comments at the Saudi-US Investment Forum mix real insight with a bit of gloss.

The first issue is the claim that radiology has been “largely converted” to AI. That’s just not where things stand. Yes, radiology has been at the forefront of AI in healthcare. Departments are using more AI, absolutely, but most tools today are narrow helpers. They flag findings, help with triage, shave a few minutes off a workflow. They don’t replace the core work of interpreting images. Saying radiology has been transformed by AI is like saying restaurants have been transformed by automation because the dishwasher is machine-driven.

Another problem is the cause-and-effect jump. Huang ties AI advances directly to increased hiring of radiologists. Hiring trends in medicine almost never move for one reason. Imaging demand keeps rising, burnout is real, COVID backlogs never fully cleared, and demographics vary by region. AI might help productivity, but claiming it’s driving new hiring needs evidence he didn’t show.

He also trims down what radiologists actually do. Diagnosis is still tightly bound to image interpretation. It’s not a clean split between “AI reads images” and “radiologist diagnoses the disease.” Most diagnoses come from synthesizing image findings with clinical notes, labs, prior studies, and that strange pattern recognition skill radiologists build over years. Those pieces don’t separate as neatly as the story suggests.

So the narrative slides from “AI speeds up some image analysis” to “radiologists are now shifting into a higher-level diagnostic role.” There’s some truth there, but it’s nowhere near happening across the board. Most radiologists still spend most of their day reading images, writing reports, fielding calls, sitting in meetings, and dealing with admin tasks. AI helps at the margins. It hasn’t reinvented the job.

And there’s one more thing. The old prediction that “radiologists will be the first jobs replaced by AI” didn’t come from radiologists. It came from tech circles guessing from the outside. So contrasting “AI will replace radiologists” with “actually AI increased hiring” is hype correcting hype.

I truly believe that in the long term, AI will reshape radiology in real ways. Fundamentally. But the present isn’t a near total transformation. It’s a work in progress. Still early days. Useful. Messy. Uneven. And still anchored in the judgment and responsibility of trained radiologists.

And this is where storytelling comes back in. When we talk about AI, the stories we choose shape how people understand what’s happening now and what might come next. If the story is too glossy, we lose the truth. If it’s too grim, we lose the opportunity. The real work is telling it straight, even when the reality is slower, stranger, and harder to package. That’s the story that actually helps people see what’s changing and what still needs to be built.

Does this gentleman describe the current state better in his recent BBC interview?

Insightful perspective, Jan. Balanced narratives like this are essential. AI is progressing, but the real transformation in radiology is still unfolding. Clear storytelling helps the field stay grounded in what’s changing and what still truly depends on radiologists’ judgment.

I appreciate the reminder that hype can mislead both patients and clinicians. The slow, uneven adoption of AI is where the most interesting lessons live.

Great post! Thank you! I was just at MEDICA in Germany presenting several talks this week.
