Hospitals must establish safety guardrails before deploying AI [PODCAST]

The Podcast by KevinMD
Podcast
March 8, 2026

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Physician and health care consultant Harvey Castro discusses his article “ChatGPT Health in hospitals: 5 essential safety protocols.” Harvey outlines the immense potential of large language models to reduce administrative burdens while warning of the risks of misinformation and privacy breaches. The conversation details five non-negotiable protocols, including rigorous encryption, human-in-the-loop oversight, and mandatory simulated testing before going live. Harvey emphasizes that transparency is the foundation of care, arguing that patients deserve to know when AI is part of the conversation. Discover why responsible AI adoption requires long-term vigilance and continuous monitoring to ensure patient safety.

Partner with me on the KevinMD platform. With over three million monthly readers and half a million social media followers, I give you direct access to the doctors and patients who matter most. Whether you need a sponsored article, email campaign, video interview, or a spot right here on the podcast, I offer the trusted space your brand deserves to be heard. Let’s work together to tell your story.

PARTNER WITH KEVINMD → https://kevinmd.com/influencer

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome back Harvey Castro, emergency medicine physician and health care consultant. Today’s KevinMD article is “ChatGPT Health in hospitals: five essential safety protocols.” Harvey, welcome back to the show.

Harvey Castro: Thank you for having me.

Kevin Pho: All right, so let’s talk about your latest article about ChatGPT Health. It has been all over the news. What is this one about and what led you to write it?

Harvey Castro: Looking at the big picture back in the year 2022, I wrote the first book about how we use this thing called ChatGPT in health care. I chuckle now because everybody knows what that is. The more I study this, the more I realize that governance and the right way of doing this is crucial. If we get it wrong, there is too much at stake. It is not like we pick a bad stock because AI told us to buy that stock. Literally, people can die.

Kevin Pho: ChatGPT Health was recently introduced within the last few months for those who aren’t familiar with it. What exactly is it in 30 seconds?

Harvey Castro: In 30 seconds, it is kind of interesting. Basically, the ChatGPT algorithm now allows you to upload your wearables and your Apple Health information. As a result, you can go back and forth with the chat and say that your hemoglobin A1c is a certain number and ask what that means. For non-physicians, it is helpful. But at the same time, for us physicians, it is a little unnerving because we wonder if this is correct or if it is hallucinating. We wonder what is happening here.

Kevin Pho: So basically, patients can upload a lot of their personal data. You mentioned things like Apple Watches, but they can also upload their entire health record as well. Then they could use ChatGPT to query it and ask it for any information. Is that correct?

Harvey Castro: That is correct. Yes, and it is kind of scary because you don’t know what is in your medical records at that time. Most patients think they know, but they really don’t know what is all in there. Giving that information up to ChatGPT is a flag from a privacy point of view. Secondly, even if you are OK with that, what happens if it misinterprets what is there? As we know, medical records have so many errors in them already.

Kevin Pho: From the hospital standpoint, what are some of the red flags that they need to be careful of as more and more patients use ChatGPT Health?

Harvey Castro: I broke it down into five pillars in the article. I invite everybody to make sure they read it. The first one obviously is data privacy. If we get this one wrong, it is game over. If your patients don’t trust us, you know they are not coming back. We must make sure that it is HIPAA compliant and that it is encrypted. Not only that, we must ensure the information is safe. Every time we move the data, the data must be safe as well. For example, if a nurse says the hospital AI system is going slow and decides to use a phone and a public AI account, obviously we can’t do that. There are people that say it is going to be OK. The answer is no. We need to make sure we do this correctly.

Kevin Pho: Now, in terms of the privacy risk from the patient standpoint, what are some of the things that they need to look out for? ChatGPT Health encourages patients to upload all their health data there. What are some of the privacy safeguards with that tool?

Harvey Castro: Number one, obviously they are going to make sure that the platform is HIPAA compliant. This means there is no person out there trying to hack into that bridge when you connect to ChatGPT, making sure they are not taking your data. That is obviously number one. Number two is ensuring encryption goes back and forth when you connect. The other thing is that you are actually at the mercy of their servers. We have heard multiple times in the news about different health care companies getting hacked. We have to wonder how we can make sure that ChatGPT doesn’t get hacked because they are taking that information.

Kevin Pho: Now, is that happening? Are we confident in the security of patient privacy from the ChatGPT perspective?

Harvey Castro: That is a tough question. I would imagine with the billions of dollars at stake that they are doing everything possible. At the end of the day, we all know that nothing is one hundred percent secure. I haven’t read anything stating that they have been hacked, but I would imagine that there are tons of people out there lining up trying to break that system.

Kevin Pho: Going back to the hospital standpoint, do they have to take any specific measures above what they are already doing as more and more patients use AI and use ChatGPT Health to analyze their data?

Harvey Castro: That is a really good question. Number one is the diversity in the data itself. The hospital can’t just buy a product without looking at the data. They need to ask if that data represents the population that the hospital is serving. That is a really important question that the hospital must look at. Another thing they must also look at involves using the AI model and monitoring what we on the geeky side call data drift. This means I am asking it questions every day and it is doing what I meant for it to do, but with time, the model will drift to a certain point where it stops doing that accurately. We need to make sure that hospital administrators know that we must have a data scientist in the loop. They need to check the data to make sure it looks good and the model hasn’t changed or drifted to something that it wasn’t meant to be.
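[Editor's note: The data-drift monitoring Harvey describes is often operationalized with a statistic such as the population stability index (PSI), which compares the distribution of incoming inputs against the data the model was validated on. A minimal sketch follows; the bucket count and the ~0.2 alert threshold are common conventions, not details from the article.]

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index: compares two samples of a numeric
    feature. Values above ~0.2 are commonly treated as meaningful drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def frequencies(values):
        counts = [0] * bins
        for v in values:
            # Clamp out-of-range values into the edge buckets
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty buckets so the log term below is defined
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = frequencies(expected), frequencies(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions score near zero; a shifted input distribution scores high, which is the signal to pull the data scientist Harvey mentions into the loop.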

Kevin Pho: Just give us an update today. This episode will go out probably sometime in March. How are hospitals and health care systems currently using large language models, and what is the state of the art?

Harvey Castro: That is a good question. The nice thing is we are seeing a bell curve. Hospital systems that have a little bit more resources are definitely leveraging AI to the point where they are bringing models in-house. They are training the data of their patients in their hospital system and making sure that it is secure. Secondly, they are able to use this data for predictive analytics for different areas like radiology findings. There have been different studies showing that a human can see certain things, but the AI is able to not get tired and see certain patterns that we may miss because we are tired. So the hospital systems are adding this as part of their protocols. What I find fascinating is we are starting to see a divided system. We see hospitals going really strong with AI, doing it correctly, and implementing governance. Then we are seeing the other side where they have no idea or they don’t have the resources to be able to adopt that technology.

Kevin Pho: So for those hospitals that don’t have those resources, are they just using these commercial models? Are they just being more lax with security and compliance? What are these lower-resource hospitals doing?

Harvey Castro: Unfortunately, they are just trying to keep their doors open. A lot of times they are not even adding the extra expense because they can’t afford it. I totally understand that point of view. What I foresee happening, in my biased opinion, is the bigger, stronger, and healthier hospitals coming in and taking over the other hospitals that don’t have those resources. They will use that AI capability as a competitive advantage to possibly close those other hospitals or just take them over. I know the last thing we want to hear as doctors is that we are closing hospitals.

Kevin Pho: As we both know, a lot of these health AI startups are selling their products to these hospitals. As hospitals evaluate these tools, you mentioned your framework, but let’s go into more detail. What are some of the questions they need to ask these new companies to ensure that their tools are not only compliant but effective as well?

Harvey Castro: Number one, they need to ask if the data is junk data and if it is correct. I will give you a quick example. I was in Europe giving a talk in dermatology, and it dawned on me that when I went to medical school, all the books that I read featured a certain type of population. I hadn’t seen rashes on different populations. The same analogy applies here. You should ask the vendor to look at the dataset and ask if it is representative of your data. If it isn’t, ask how they are making sure that it is. Then ask if there is someone in the company on the AI side who is going to check for that model drift as you start feeding it data. Most of the time, they are going to sell it to you and walk away. Then it changes from what you spent all this money on, and the accuracy is not there. Obviously, on the internal side, you have to ask if the hospital can handle the electricity and bandwidth. A lot of people don’t think about that, but when we use AI, we are going to start using more electricity. If it is true that some of the hospitals are just barely making it, we have to ask if they can expend that much more money and energy.

Kevin Pho: I was reading an article noting that because it is so easy to code for these AI companies, they are coming up with all these products without having any physician or clinician look at how the product would really impact hospitals. Now hospitals are just inundated with a lot of these vendors that really have no basis in terms of how medicine is actually practiced. Are you finding something similar through your observations?

Harvey Castro: I totally agree with you. That is why I say it takes a village. It takes people like you and me. The funny thing is when I consult with startups, I ask them if they have a patient as part of their board. They say no. I ask if they have a doctor on the board, and they say no. I think to myself that they only have part of the ecosystem and are trying to create a solution, but they don’t have the full team to represent that solution. Obviously, to your point, we need to make sure that we have the doctor in the loop and the patient in the loop. It is not just the doctor, but the whole staff. As you know, how many times has our nurse saved us from a mistake, or the front desk asked if a patient told us a certain detail that we didn’t even know? We need everybody included now for these health IT companies that consult with us.

Kevin Pho: So when you tell these health IT companies that they need a patient and a clinician in the loop, do they actually listen to you and make those changes?

Harvey Castro: Actually, they do. We have been able to successfully create certain boards for this, and I have been able to look at their models, their frameworks, and how they are operating. At the end of the day, my personal goal is patient safety. I have to add this because I want to make sure we get this in there. We have got to make sure that our hospital systems are transparent with their policies so that if AI is being used, our patients know that it is being used. Some might argue that patients just register and don’t need to know, but we need to be transparent because it is all about trust. If we lose this trust in this adoption of AI in health care, we are going to lose this battle and the war.

Kevin Pho: Now, from the patient standpoint, what are some common scenarios where AI may be used that they may not initially be aware of?

Harvey Castro: I hope patients realize that even down to scheduling an appointment now, a lot of that information is going through AI to look for a slot and give you that information. There are more and more models using ambient listening. A lot of hospital systems are using that. For better or worse, depending on which side you are on, you want to make sure that it is transparent so everybody knows. I really think we need to educate both the physician and hospital side as well as the patient side on how this tool is being used. I know this is very controversial, but we should give an opt-out button. If the patient says they don’t want this AI as part of their health care, we need to be respectful of that and confirm that it is fine. They just need to know that this is a different framework when we are working with them.

Kevin Pho: It is so interesting. Do you ever see a point where hospitals are going to be so intertwined with AI that if patients opt out of that, the hospital system simply can’t serve them appropriately?

Harvey Castro: Yes, unfortunately, I see that day coming. I see a day where it is going to be everywhere. It is going to be in our appointments, predictive analytics, X-rays, imaging, labs, and in our EMR. That is why I say at the end of the day, it is about education. Let’s educate the patients on the good, the bad, and the unknown. That way they understand this product so they don’t fear the unknown and are more likely to accept it.

Kevin Pho: As part of your framework, you talk about keeping a human in the loop. As hospitals adopt some of these AI tools, what would that look like?

Harvey Castro: That is a great question. For example, not many people know this, but when GPT-3.5 first came out, the people rating its output as acceptable or not were often workers in developing countries, not doctors. Raters in other countries would say the output looked correct, and that data was then used. The reason a human in the loop is so important is that it can’t just be any human. It must be a doctor, and not just any doctor. If you are asking an ER question, then make sure you have an ER doctor in the loop. This matters if the AI hallucinates, makes an error, says something that is totally off, or if the guidelines have changed. For example, if an AI was trained until a certain date and a new guideline came out a day after, the AI has no idea. Its output might technically be correct based on its training, but it is no longer accurate. That is where the human comes in, looks at it, and recognizes that it is wrong. That is why we need to have that human oversight.
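[Editor's note: Harvey's point that oversight must reach the right clinician can be sketched as a simple gating rule that routes AI drafts to physician review. Everything here is hypothetical, including the confidence field, the term list, and the 0.9 threshold; a real deployment would key this to specialty and risk tier.]

```python
from dataclasses import dataclass

# Hypothetical high-risk topics that always warrant physician review
HIGH_RISK_TERMS = {"dosage", "contraindication", "allergy"}

@dataclass
class Draft:
    text: str
    confidence: float  # model-reported score in [0, 1]

def needs_physician_review(draft: Draft, threshold: float = 0.9) -> bool:
    """Route a draft to a clinician if the model is unsure
    or the content touches a high-risk topic."""
    if draft.confidence < threshold:
        return True
    lowered = draft.text.lower()
    return any(term in lowered for term in HIGH_RISK_TERMS)
```

A low-confidence scheduling note and a high-confidence dosage recommendation both get flagged; only routine, high-confidence content passes through.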

Kevin Pho: The evaluation of these tools doesn’t just stop once a hospital adopts them. There needs to be continuous monitoring after adoption as well. Please talk more about that.

Harvey Castro: We need to make sure that the data drift I mentioned earlier is managed by having someone audit it. We must ensure it is a full 360-degree feedback loop so that patients, doctors, or the front office can say when something seems off. That way, the data scientist is able to get that information, look at it, and recognize if the model is drifting. More importantly, hopefully the data scientist working on this will notice that the model is shifting and say that we need to pause it for a minute to fix it before using it anymore. That is really important. We must make sure it is tracking trends, accepting clinical feedback, and undergoing regular audits on the system.
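[Editor's note: The feedback loop Harvey describes can be approximated by tracking clinician verdicts over a rolling window and pausing the model when accuracy dips. A sketch with assumed window and floor values:]

```python
from collections import deque

class AccuracyMonitor:
    """Track a rolling window of clinician verdicts on model outputs
    and flag the model for a pause when accuracy degrades."""

    def __init__(self, window: int = 100, floor: float = 0.95):
        self.verdicts = deque(maxlen=window)
        self.floor = floor

    def record(self, correct: bool) -> None:
        self.verdicts.append(correct)

    def should_pause(self) -> bool:
        if len(self.verdicts) < self.verdicts.maxlen:
            return False  # not enough feedback to judge yet
        return sum(self.verdicts) / len(self.verdicts) < self.floor
```

Feedback from doctors, nurses, or front-office staff feeds `record`; when the rolling accuracy drops below the floor, the system is paused for audit rather than left drifting in production.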

Kevin Pho: You are consulting a lot of health care institutions regarding AI adoption. Are the majority of these medical institutions adhering to this framework that we are talking about today?

Harvey Castro: I see it happening. What fascinates me is that I am getting a lot of calls from the Middle East and from Singapore. They are doing things way differently, and I think it comes down to politics. When the government pushes a mandate down saying they will use one electronic medical record, one dataset, and one AI, things are going to happen. Here in the United States, we have fragmentation, lobbyists, and different points of view, so things are a little bit slower. I find it really interesting. I hate making predictions, but I predict we are going to start seeing some really interesting models used outside of the U.S. For example, the Middle East is using digital twins. Being able to have your own digital twin to practice medicine and test what drugs work is incredible. Then doctors can tell the patient that this is a drug they know is going to work best for them. I am getting a little fear of missing out because I want to have that here in the United States.

Kevin Pho: We are talking to Harvey Castro, emergency department physician and health care consultant. Today’s KevinMD article is “ChatGPT Health in hospitals: five essential safety protocols.” Harvey, let’s end with some take-home messages that you want to leave with the KevinMD audience.

Harvey Castro: At the end of the day, think of AI like riding a bike. When you first started riding a bike, you were scared and thought you were going to fall. Before you knew it, you probably started riding without using your hands. It is the same thing with AI. Use the tool, and know the good, the bad, and the ugly. The more you use it, the more you will realize how much more efficient you can be and how you can help your patients.

Kevin Pho: Harvey, thank you again for sharing your perspective and insight. Thanks again for coming back on the show.

Harvey Castro: Thank you.

Tagged as: Health IT
