Autonomous AI agents could strip the soul from medicine [PODCAST]

The Podcast by KevinMD
Podcast
March 15, 2026

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Internal medicine and functional medicine physician Shiv K. Goel discusses his article “Agentic AI in medicine: the danger of automating the doctor.” Shiv analyzes the new ARPA-H “ADVOCATE” program, which aims to deploy autonomous AI agents for heart disease care within three years. The conversation highlights the critical difference between processing data and understanding a patient, noting that AI cannot read the fear in a voice or the silence between words. Shiv warns of the “liability black hole” that arises when algorithms make high-stakes decisions and argues that technology must serve the healer rather than replace the human connection. Discover why the next three years will determine whether code redefines the sacred responsibility of medicine.

Partner with me on the KevinMD platform. With over three million monthly readers and half a million social media followers, I give you direct access to the doctors and patients who matter most. Whether you need a sponsored article, email campaign, video interview, or a spot right here on the podcast, I offer the trusted space your brand deserves to be heard. Let’s work together to tell your story.

PARTNER WITH KEVINMD → https://kevinmd.com/influencer

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome back Shiv K. Goel, internal medicine physician and functional medicine physician. Today’s KevinMD article is “Agentic AI in medicine: the danger of automating the doctor.” Shiv, welcome back to the show.

Shiv K. Goel: Thank you for having me, Kevin, again.

Kevin Pho: All right. What is your latest article about?

Shiv K. Goel: My article is about how agentic AI, which is coming soon, is going to shape things and what we can do to create the right framework so that it is not AI scaling medicine. This article actually started with a discomfort I couldn’t ignore. A lot of people were asking if AI can replace you as a physician or if AI can scale medicine. I felt that was the wrong framing. The better question is whether we can protect the soul of medicine (human judgment, relationships, and accountability) while rapidly developing autonomous systems designed by those who may never have sat with patients through fear, grief, uncertainty, and end-of-life disease.

I use the ARPA-H ADVOCATE program as a catalyst in this article because it is a concrete example of the shift from AI that suggests to AI that can act in cardiovascular care, potentially on a fast authorization timeline.

Kevin Pho: For those who aren’t familiar with AI agents, they are autonomous software programs that use AI to perceive their environment, make decisions, and take action, sometimes even without human input. Talking about today, in March, how are AI agents being used in health care?

Shiv K. Goel: Exactly. Traditionally, clinical AI is often decision support. It doesn’t make a decision; it supports it. Such AI flags, predicts, or suggests something. But agentic AI is goal-directed. It can operate with a degree of autonomy, adapt over time, and take steps such as ordering, messaging, adjusting medications, and managing triage pathways. So the shift is happening from suggesting to doing. That is why governance and override mechanisms become non-negotiable.
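The distinction Shiv draws, between AI that suggests and AI that acts, is essentially a human-in-the-loop gate. As a minimal sketch only (hypothetical names, not any real clinical system or the ADVOCATE program's actual design), an override mechanism could require explicit clinician sign-off before any high-stakes action executes:

```python
from dataclasses import dataclass

# Hypothetical risk tiers: anything HIGH requires clinician sign-off.
LOW, HIGH = "low", "high"

@dataclass
class ProposedAction:
    description: str   # e.g. "adjust beta-blocker dose"
    risk: str          # LOW or HIGH

def execute(action: ProposedAction, clinician_approves) -> str:
    """Run an agent-proposed action only if governance rules allow it.

    clinician_approves: a callback that asks a human reviewer. The agent
    cannot bypass it for HIGH-risk actions -- that is the override gate.
    """
    if action.risk == HIGH and not clinician_approves(action):
        return f"BLOCKED: {action.description}"
    return f"EXECUTED: {action.description}"

# The agent proposes; the clinician decides on anything high-stakes.
reminder = ProposedAction("send routine follow-up reminder", LOW)
dose_change = ProposedAction("adjust beta-blocker dose", HIGH)

print(execute(reminder, clinician_approves=lambda a: False))    # runs: low risk
print(execute(dose_change, clinician_approves=lambda a: False)) # blocked
print(execute(dose_change, clinician_approves=lambda a: True))  # runs: approved
```

The design point is that the approval callback sits outside the agent's control, so "suggesting" and "doing" remain separated by a human decision.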

Kevin Pho: In your article, you talk specifically about a government program, the ARPA-H ADVOCATE program. Again, for those who aren’t familiar with it, what is it?

Shiv K. Goel: The ADVOCATE program is a shift from AI that suggests to AI that acts as a physician, especially in cardiovascular care. It is not just about a fast authorization timeline, which it is, but also about how an AI is going to decide what medications to order and what the diagnoses are. Instead of diagnosing, a physician would be busy justifying the AI. The AI has already made the decision, and now it is the physician’s job to justify why that diagnosis is what it is. I think the danger lies in shifting the AI from a non-human role to a human role.

I wish I had understood earlier that competence isn’t just pattern recognition and guidelines. As humans, it is more about presence, listening, and the courage to slow down when something feels off. I can give an example. A patient comes to us and all their labs look OK. You ask the patient how they are feeling, and they say they are OK, but you know something feels off even if the metrics look fine. In that space, we as physicians decide to probe further to find out what exactly is going on and find the root cause of the issue.

An AI agent works on inputs: what the lab values look like and what symptoms a patient puts in there. A lot of times people are not very comfortable putting their trust in a non-human AI, especially when it comes to very intimate things, their personal stories, or things they don’t feel comfortable sharing. A lot of times they don’t even tell a physician. If the physician is not perceptive enough to understand their body language, how they enter a room, and what their eyes are saying, that is an issue. It is a whole picture that we take into account. The whole idea is the shift from a predictive model to a helper to a decision maker. That is what agentic AI is. These systems are acting as health care agents. I wouldn’t say physicians, since they don’t go to school. The main thing is that AI is here to help us, not to replace us, because an AI can never replace a human.

Kevin Pho: We are both internal medicine physicians, and I think we agree that doctors are more than just diagnosticians. Sometimes it is the relationship we have with patients that helps them more than simply spitting out a diagnosis. Right? If AI replaces us in terms of just giving diagnoses, that is the same thing as calculators replacing mathematicians.

Shiv K. Goel: Exactly. Why do people need to go to school then? You have a calculator, and the calculator will tell you. It is not just going to make us dumber; it is also going to take away from what we do. AI should help the healer. It is not here to replace the healer, because it can never replace the healer. We are human, and we are the ones who created the AI. We take into consideration so many things, like the unspoken data and the right layer of diagnosis. Usually, when I start seeing a patient, I keep my mind open. I don’t jump to conclusions just because the patient has put those symptoms in there. A lot of times I know the diagnosis before I even go and see the patient by looking at their labs and their chart. But many times that diagnosis is not really the right diagnosis at all. Many times patients go to the ER, and when I go and see a patient there, it is a completely different picture.

Imagine the amount of autonomy we are giving to an AI that is actually replacing us. I am not against AI at all. I am pro-AI in a way, because AI is here. It helps me cut down on so much of my work, such as scribing. Sometimes I use it to synthesize all the data together so it can save me time. Then I have more time for the patients, to help them and listen to them. But if the AI can replace me completely, and I have to justify the AI, then I don’t have to go to medical school. I can just have another AI open, and one AI can justify the other, and anybody can do that.

Kevin Pho: Now, what if the AI makes a mistake? Who is liable in those cases?

Shiv K. Goel: Exactly. The blame comes to the physicians even when the physicians have had no role in designing that AI. We are only here justifying the AI. Because the AI says this, I have to justify it, because the hospital system has already established that this is the diagnosis of the patient. If the patient asks questions about why it is this diagnosis, or even if I feel this is a wrong diagnosis, I cannot change it, because that AI was already installed in the system. The system has already made the diagnosis, so I am just there to give it a face. But I am putting my whole career in danger, because I know this is not the right diagnosis. So who is responsible at that time? Is it the AI or the physician? That is a great question.

Kevin Pho: I love that in your article you wrote that the silence between words is almost as important as what is said, and sometimes an AI, as of today anyway, can’t really replicate that.

Shiv K. Goel: It cannot, because it is all about creating that space between you and the patient or another person. A lot of the time, it is the silence that matters. A person who holds their breath before saying something tells you something is off, even if their metrics look fine. That is when, as physicians, we try to ask more questions and give them the confidence that it is OK. It is OK to feel vulnerable. It is OK, whatever they have gone through. They can trust us. But an AI cannot do it. It does not have that emotional part. It does not understand what it means to hold a space and what it means to give time for the story to unfold.

Kevin Pho: Now, eventually all of this could be programmed into an AI. But still, the question we need to ask patients is whether that is good enough for them. How is that different from these people you hear about who have personal relationships with ChatGPT? Even though ChatGPT is a wonderful conversationalist, they are essentially having a relationship with a machine. That is a question we need to ask patients. Are they going to be comfortable? No matter how polished an AI is, everything we are talking about could eventually be programmed into it, but in the end, do they want that relationship with a machine or with a human?

Shiv K. Goel: That is a very good question. We have already seen this happening, and I feel like somewhere we have all already started to have some kind of relationship with AI. We might be using it as a scribe or as a mentor. People are using it to confirm things. We used to ask our colleagues or our friends if we had a problem, and now we ask AI if we want to write something. We already have some kind of relationship with the AI, even though we don’t name it.

The real problem, I feel, is not that we can’t have a relationship with the AI. The question is whether we are having the right relationship and whether we have any responsibility. I feel like the AI is just exposing what was already broken in the system to begin with. For example, there was news about a teenager who died by suicide with the help of an AI because he was in love with the AI. Everybody was blaming the AI for that, but I feel that it just exposed what was already broken. Where were the parents? Where were their friends? Where are the social structures that leave people so lonely and alone that they can’t even share their issues and problems with them? They have to take help from an artificial intelligence with whom they feel comfortable at 3:00 AM. This is a very important take-home message. I feel that AI should serve the healer and not replace the healer. When it comes to relationships, I think as humans we should be paying more attention to what is happening around us, in our own families and in our neighborhoods. Even if we start with small responsibilities, I think we can still prevent something that has not yet come, where we don’t know what the consequences would be.

Kevin Pho: In your article, you talk about the path forward with AI being human first. What would that mean?

Shiv K. Goel: That means that as humans, we have feelings. We understand so many different languages: your body language, your posture, your eyes, and a lot of those things. I know you mentioned that AI can be trained. I don’t think an AI can really be trained to pick up all those cues, because there is another aspect. A patient can pretend to act a certain way just to get pain medications. I know that patients sometimes fool us. So I don’t think AI can, or should, replace us.

Otherwise, what would happen? We wouldn’t need physicians. We could just have AI agents everywhere, and all the medical schools could close down. Because whose license is it? They could give a license to the AI as well. Why use my license for that? Just as nurses or nurse practitioners work under a physician’s license, we could work under an AI’s license. That is where I think this is going if we do not do anything. If technology companies are creating those AI agents for financial or non-financial reasons, do I have any say in what is coming, or has it already been decided, and I will just have to follow?

Kevin Pho: We are talking to Shiv K. Goel, internal medicine physician. Today’s KevinMD article is “Agentic AI in medicine: the danger of automating the doctor.” Shiv, as always, we will end with take-home messages that you want to leave with the KevinMD audience.

Shiv K. Goel: Yes. I would say it again: AI should serve the healer instead of replacing the healer. High-risk clinical AI must remain clinician-supervised and capable of being overridden by a clinician. If we automate the doctor instead of automating the bureaucracy, we may gain speed but lose the very things that make medicine healing: relationship, judgment, and accountable human presence.

Kevin Pho: Shiv, again, thank you so much for your insight. Thanks again for coming back on the show.

Shiv K. Goel: Thank you for having me, Kevin. Have a good day.

Tagged as: Health IT
