Why physicians must lead the vetting of medical AI [PODCAST]

The Podcast by KevinMD
Podcast
November 12, 2025

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Cardiologist Saurabh Gupta discusses his article “Physicians must lead the vetting of AI.” In this episode, Saurabh explores how artificial intelligence is reshaping medicine and why its future depends on physician leadership, not passive adoption. Drawing from his experience in cardiology and AI development, he explains why every algorithm influencing clinical care must meet the same rigorous standards as any medical device or drug. Saurabh emphasizes that unvetted AI, not AI itself, is the real risk, underscoring the need for continuous validation, bias testing, and transparency. Viewers will learn how clinicians can move from users to stewards of technology, applying medical reasoning, accountability, and ethics to ensure that innovation truly serves patients.

Our presenting sponsor is Microsoft Dragon Copilot.

Microsoft Dragon Copilot, your AI assistant for clinical workflow, is transforming how clinicians work. Now you can streamline and customize documentation, surface information right at the point of care, and automate tasks with just a click.

Part of Microsoft Cloud for Healthcare, Dragon Copilot offers an extensible AI workspace and a single, integrated platform to help unlock new levels of efficiency. Plus, it’s backed by a proven track record and decades of clinical expertise, and it’s built on a foundation of trust.

It’s time to ease your administrative burdens and stay focused on what matters most with Dragon Copilot, your AI assistant for clinical workflow.

VISIT SPONSOR → https://aka.ms/kevinmd

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome Saurabh Gupta. He’s a cardiologist and physician executive. Today’s KevinMD article is “Physicians must lead the vetting of AI.” Saurabh, welcome to the show.

Saurabh Gupta: Thank you, Kevin. I appreciate being here this morning with you.

Kevin Pho: All right, let’s start by briefly sharing your story, and then we’ll jump right into your KevinMD article.


Saurabh Gupta: Of course. I’m a cardiologist by background and training, and within cardiology, my focus and interests have been in innovation and cutting-edge therapies. For example, my team and I did some of the first transcatheter valve procedures on the West Coast way back. Leading on from that, I’m now on my third startup.

Kevin Pho: Excellent. What got you into that health care startup space? A lot of physicians, of course, don’t get that training when they’re going through medical school and residency. What initially got you interested in that intersection?

Saurabh Gupta: Absolutely. Mostly curiosity, and the ability to have an impact beyond what I could do one patient at a time, which is the most rewarding thing in the world. But then how do you scale up, and how do you solve the bigger challenges in medicine? Also, my core belief is that clinicians are ideally positioned to be at the forefront of innovation in the health care sector, whichever vertical you pick.

Kevin Pho: All right, and it’s an exciting time in the health startup space. Of course, we’re going to talk about artificial intelligence. Your KevinMD article is, “Physicians must lead the vetting of AI.” For those who did not get a chance to read your article, tell us what it’s about.

Saurabh Gupta: It’s about my evolving thinking on how AI is now, I wouldn’t say intrusive, though that might be the word a lot of my friends have used, present in all aspects of clinical medicine. Scheduling systems are running on it. Billing systems are running on it. Increasingly, ambient listening systems are running on it. That got me thinking about the background architecture that supports vetting these tools. Are we applying the same level of rigor to them that we apply to other therapies?

For example, take medical devices. They have a ten- to fifteen-year development life cycle, sometimes for good reason. Fast-forward to today: there were conversations during my time on the Board of Governors of the American College of Cardiology about what the framework for vetting these tools should be. And then, as I mentioned, my belief is that physicians have to be leading that.

Kevin Pho: Specific to AI, what are some of the dangers if we don’t vet these tools properly in medicine?

Saurabh Gupta: Several. If you go through the article, I’ve broken those down into four broad categories. First, utility: is it a technology in search of a solution, or is it actually solving a problem that we as clinicians and patients face every single day? Second, technical robustness: is it accurate? Is it precise? Is it as reliable across diverse populations as the therapies we study?

Third, ethical integrity: is there evidence of bias, whether implicit or subconscious, in these AI systems? Fourth, regulatory transparency: do we understand the logic well enough to explain it to a patient who asks, “How did you come to this decision, doctor?”

Kevin Pho: How are we doing? I know that AI has been in health care only within the last few years now. I know we have a lot of ambient AI scribes, for instance, and now AI is moving into the role of decision support. How is medicine doing in terms of integrating AI responsibly into our workflows?

Saurabh Gupta: I think the technology has far outpaced the framework that allows us to vet it. There’s no question about that. For all of our colleagues in modern medicine, every single day there is one or another AI system being presented as an option, while the vetting framework behind it is not as well established.

Certainly, some of the non-patient-facing interventions are easier targets. Billing, for example, even though it’s important, is a very reasonable early use case. But when it moves into what we call predictive AI and deterministic AI, and then perhaps prescriptive AI, the guardrails have to be very strong around how these therapies, and I would use the word “therapy” here, get integrated into modern medicine.

Kevin Pho: You’re seeing various AI tools moving beyond the framework, moving beyond regulation, and sometimes with unintended consequences in medicine.

Saurabh Gupta: Oh, absolutely. The biggest challenge I’ve seen and struggle with is the lack of transparency around how these systems work. For most of us, take a blood pressure medicine: mechanistically, there have been three decades of work. “This is an ACE inhibitor; that’s the enzyme in the renin-angiotensin system.” Now we have a drug, and obviously you always have off-target effects. In these AI systems, the pace of technical innovation is marvelous. That’s fantastic. But I worry that, behind the scenes, the ability of clinicians to integrate this into their practice in a responsible way lags behind.

Kevin Pho: How do you reconcile the two cultures? Because on one hand you have Silicon Valley, which is “move fast and break things,” and then you have medicine, which infamously moves much slower. If you adopt that “move fast and break things” in medicine, patients can get hurt. You can’t necessarily have that same culture in medicine. How do you reconcile those two philosophies?

Saurabh Gupta: Absolutely. I think we’re on the cusp of a huge technical revolution here, and these tools are very powerful and potentially very, very useful as modern medicine just fundamentally restructures around some of these tools. My belief is that we start with doing what we always do. Physicians are ideally placed to do this: ask introspective and outward-facing questions.

For example: What data trained this model? Does it really reflect my patient population? Anytime we read a clinical trial, the first question is, is this the population that was in the trial, the one that we are treating? What is the false-positive rate? What is the false-negative rate? What’s the sensitivity, specificity? Just basic, simple questions. How do I verify the outputs? Is there source verification? Is there data verification that links back to the primary data?

Most importantly, when it fails, how would I know? For example, let’s take a blood pressure medicine example. When it fails, we know that the patient’s blood pressure was not controlled. With AI systems, that is far, far more challenging to infer or to see.
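The basic performance questions above are the same ones clinicians already ask of any diagnostic test, and they fall directly out of a 2x2 confusion matrix. A minimal illustrative sketch (not from the episode; the function name and the numbers are hypothetical):

```python
def screening_metrics(tp, fp, tn, fn):
    """Compute the performance figures a clinician would ask of any
    diagnostic tool, from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    false_positive_rate = fp / (fp + tn)  # equals 1 - specificity
    false_negative_rate = fn / (fn + tp)  # equals 1 - sensitivity
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_positive_rate": false_positive_rate,
        "false_negative_rate": false_negative_rate,
    }

# Hypothetical example: an AI screen flags 90 of 100 true cases (10 missed)
# and raises 50 false alarms among 900 patients without the disease.
print(screening_metrics(tp=90, fp=50, tn=850, fn=10))
```

The point of the sketch is that these numbers only answer the question "does it reflect my patient population?" when the counts come from a population like the one being treated, which is exactly the clinical-trial question raised above.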

Kevin Pho: The majority of these health-tech startups integrating AI, do they have physician advisors guiding them, or are they purely run by venture capital and technology-based leaders that just want to move as quickly as they can?

Saurabh Gupta: The basic ethos around technical innovation is exactly what you said: move fast, break things. But in medicine, the consequences of that approach are real. I see startups in two broad categories. One is technical founders who, most of the time, not all the time, of course, have a technology for which they are looking for novel applications. I think domain expertise there is very important.

When physician founders, for example, start off at the inception of these companies, then they’re looking at problems to be solved rather than a technology that can solve some problems. It’s a subtle but very important difference.

I do think that once most companies get beyond a certain stage, they do bring physician advisors in. But I have seen less of that at the very inception, other than just bouncing off ideas. Every week I have young kids from Stanford or MIT who will approach me and say, “Hey, I have an idea. Can I just bounce it off you?” But I do think that there’s something lost in that type of experiential bouncing off of ideas versus an immersive founder at the outset of a tech company. That does need to get better.

Kevin Pho: In your ideal situation, what would be the role of physicians if they were to be involved with a health-tech startup? What would their ideal role be?

Saurabh Gupta: I think we should apply what we already know, and if we have interest, we should pursue it. Earlier in the show, you asked me what I did. I approached it the same way I would approach a residency or a fellowship: I set out to learn this ecosystem from people who are far better and more adept at it. An idea is not a product. A product is not a company.

Ultimately, for something to be impactful, it has to get into the hands of consumers. Here the consumers might be patients or health systems. Get involved. Learn about the ecosystem. This is not as hard as it sounds, but it does take some learning. It is different from what we do. Then work with the people around you to bring those ideas to fruition. I think some of the most successful ideas and companies in this space are going to come from people who are at the forefront of doing the work.

Kevin Pho: If a physician were to consider using any number of AI tools in their patient care workflow, and knowing that some of these tools aren’t necessarily vetted by physicians, tell us the type of questions they need to ask before moving forward with a particular AI product.

Saurabh Gupta: Absolutely. The types of questions one would ask are: What are the data and regulatory policies? Where is this data going? What data actually trained this model? How will it perform? Then, frankly, for physicians: where does the liability for this tool lie?

I was at the Board of Governors meeting of the American College of Cardiology, and this question of where the liability lies with these tools came up. That’s an active debate elsewhere as well. The general thinking is: think of it as a scalpel. The surgeon using the scalpel has the ultimate liability and responsibility.

I would say, with the state of AI where it is, recognize what AI does well, and I mentioned the things that AI does well in my article, and the areas where it struggles, the areas where it might be prone to hallucinations.

Then use those case scenarios to have a deeper level of introspection about the software tool. Something as simple as, “Give me a list of all the clinic visits.” AI does really well at that type of task. But if you start thinking about more sophisticated questions on what medicine I should use on this patient, now that becomes very tricky.

Kevin Pho: In areas like clinical decision support, certainly liability comes up. How about things like patient-facing chatbots? Those are becoming more and more prevalent, especially in underserved areas where patients may not have access to clinical care. Some people are using these AI tools as an initial triage. What about the liability there?

Saurabh Gupta: I think it’s an unresolved question in my mind. Obviously, all of them, in their terms of service and the disclaimers at the bottom of the screen, will say, “Well, we are not really responsible for our outputs; use this as you will.” That becomes tricky. I do worry about patients getting misinformation, or perhaps not so much misinformation as out-of-context information. Because what a clinician brings to the table is judgment and wisdom and insight, and all AI systems at present, even the most advanced ones, are lacking in those aspects.

Yes, they can answer simple questions, but I would treat those as what in medicine we call hypothesis-generating activities, and then have a trained clinician put it all together. From a technical perspective, those of us who have interacted with large language models have seen how a chat can lose context over a long thread. That problem is not completely solved, and it matters in clinical medicine: you have patients with ten years’ worth of history. That is hard to wade through, but a skilled clinician is generally able to parse out the details and the significant encounters. That would be an area where I would proceed with caution.

Kevin Pho: You’re immersed in the AI space, of course. Tell us, what do we have to look forward to in the coming months when it comes to that intersection between AI and health care?

Saurabh Gupta: I think we’re very rapidly understanding what the evolution of this space will be: what it can do, what it can do reasonably well, and what it can do very well. I do believe we are moving very fast toward models where some of that judgment and insight is on the horizon. I don’t think it is here yet. The human touch in medicine will always remain important. But some of the simpler tasks, I believe, are ideally suited for AI integration.

The tougher problem, how you actually treat a patient rather than a disease or a keyword search, is one we will have to resolve as a society. What does that human touch mean? What does that comforting voice on the other end mean? Now, obviously, no pun intended, AI systems are experimenting with that too, trying to provide the human touch, for example in mental health counseling. It is a fast-moving field, and I think the near future will tell.

I think in parallel to the technical advances, we have to, as a society, build the responsible frameworks around the use of these. For example, medicine is a regulated profession. Should there be a state medical board for AI systems or a national medical board? I don’t know the answer, Kevin.

Kevin Pho: We’re talking to Saurabh Gupta. He’s an interventional cardiologist and physician executive. Today’s KevinMD article is “Physicians must lead the vetting of AI.” Saurabh, let’s end with some take-home messages that you want to leave with the KevinMD audience.

Saurabh Gupta: Number one: Be excited, not afraid. Number two: Be at the forefront, not behind. Number three: Remember that we as physicians are ideally placed to be at the forefront of these therapies, not behind them.

Kevin Pho: Thank you so much for sharing your perspective and insight. Thanks again for coming on the show.

Saurabh Gupta: Thank you, Kevin. I appreciate it, and I enjoyed the conversation.

Tagged as: Health IT
