Why fixing health care’s data quality is crucial for AI success [PODCAST]

Jay Anders, MD
Podcast
June 4, 2025

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Physician executive Jay Anders discusses his article, “Health care’s data problem: the real obstacle to AI success.” Jay asserts that the transformative potential of artificial intelligence in health care is fundamentally dependent on the quality of the underlying clinical data. He explains that while tools like large language models and conversational AI show promise in synthesizing information and easing documentation, their reliability is compromised when fed with data from repositories often filled with inconsistencies, errors, and gaps. This can lead to an “increased workload paradox,” where clinicians spend more time verifying and correcting AI-generated outputs, and a failure to produce the structured data vital for regulatory compliance, quality metrics, and analytics. Jay emphasizes that the “garbage in, garbage out” principle severely hampers interoperability and contributes to significant financial and clinical risks, including medical errors and inefficient workflows. To counter this, he advocates for robust data validation and normalization, enhancement of clinical terminologies, and the use of AI paired with evidence-based algorithms to rectify historical data issues, stressing that establishing trusted data sources is paramount before AI can truly revolutionize health care delivery.

Our presenting sponsor is Microsoft Dragon Copilot.

Want to streamline your clinical documentation and take advantage of customizations that put you in control? What about the ability to surface information right at the point of care or automate tasks with just a click? Now, you can.

Microsoft Dragon Copilot, your AI assistant for clinical workflow, is transforming how clinicians work. Offering an extensible AI workspace and a single, integrated platform, Dragon Copilot can help you unlock new levels of efficiency. Plus, it’s backed by a proven track record and decades of clinical expertise, it’s part of Microsoft Cloud for Healthcare, and it’s built on a foundation of trust.

Ease your administrative burdens and stay focused on what matters most with Dragon Copilot, your AI assistant for clinical workflow.

VISIT SPONSOR → https://aka.ms/kevinmd

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome back Jay Anders; he’s a physician executive. Today’s KevinMD article is “Health care’s data problem: the real obstacle to AI success.” Jay, welcome back to the show.

Jay Anders: Thanks for having me, Kevin.

Kevin Pho: All right, so tell me, before getting to the article itself, what led you to write it in the first place?

Jay Anders: Well, it’s been interesting to see the advent and the explosion of AI, which is producing a whole lot of data that may or may not be correct. As I started looking into this more and more, both in my own family’s experience with health care and in others’, I saw how much bad data is really out there. And what I mean by that is: it’s just flat incorrect, and it’s getting propagated, seemingly because of AI, but not really. We’ve had this problem for years.

Kevin Pho: All right, so tell us about the article itself. Give us, of course, some examples of what we’re talking about as it relates to health care.

Jay Anders: OK. One of the more interesting things with health care data exchange right now, with all of the HIEs and QHINs, is that data is being propagated across multiple systems. And I’ll give you a couple of examples. There’s a big difference between two very common entries: ‘never smoked’ and ‘not smoking.’ Think about that for a minute. ‘Never smoked’ means I’ve never smoked: never had a cigarette in my mouth, never had a cigar, a pipe, nothing. ‘Not smoking,’ however, could mean I stopped this morning; I’m not smoking now. Does that change the risk profile of a patient? It certainly does. But the wrong data is being propagated throughout the system. It’s being accepted by the receiving systems, sent by the sending systems, without any type of cleanup, coding, or the whole host of other manipulations that need to happen to make sure it’s correct.

The other issue is that incorrect diagnoses are being propagated from one system to another, and I’ll give you two examples of that. The CEO of our company, whose father died of liver cancer, had ‘liver cancer’ entered into his own chart. So he went to one of his physicians, who said, “But you’re doing really well. Look at you. The chemo must have worked. It’s great.” He replied, “I don’t have liver cancer. My father had liver cancer.” He’s had a heck of a time trying to get that expunged from his medical record.

Recently, members of my own family went to their physicians, and I got an after-visit summary readout of an ambiently generated AI note, and it was absolutely littered with errors: the conversation with the clinician was misconstrued, and the correct information wasn’t captured by the ambient system. What’s interesting is that we’ve had that problem forever, back when we dictated our notes. We need to read our dictations. We need to make sure that what we’re putting down, what we’re putting our name to, is actually correct. One of my colleagues had an AI-generated ambient listening note in which the subject changed from a ‘he’ to a ‘she’ three times in the same note.

When all of that gets propagated and transmitted, there’s an issue, and I think sooner or later it’s going to catch up with folks. I’m mostly concerned about patient care. Obviously, I’m a physician; I want to make sure that the patient information I get is correct and that I can act on it and trust it. The other issue is more patient-specific: if something incorrect is in your medical record and you go and apply for health insurance or life insurance, how is that going to play going forward? So we’ve got a data problem, and I think the reasons are multifactorial.
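To make the ‘never smoked’ versus ‘not smoking’ distinction concrete, here is a minimal Python sketch of the kind of normalization step Jay describes. The concept names and string mappings are illustrative placeholders, not a real terminology mapping; a production pipeline would map to standard codes (for example, SNOMED CT) through a terminology service.

    # Illustrative sketch: normalizing free-text smoking status before exchange.
    # Concept names are placeholders, not real terminology codes.
    AMBIGUOUS = "AMBIGUOUS_REVIEW_REQUIRED"

    SMOKING_STATUS_MAP = {
        "never smoked": "NEVER_SMOKER",       # no lifetime tobacco use
        "never smoker": "NEVER_SMOKER",
        "not smoking": "CURRENT_NON_SMOKER",  # not smoking now; history unknown
        "non-smoker": "CURRENT_NON_SMOKER",
        "former smoker": "FORMER_SMOKER",
    }

    def normalize_smoking_status(raw: str) -> str:
        """Map a free-text status to a coded concept, or flag it for human review."""
        return SMOKING_STATUS_MAP.get(raw.strip().lower(), AMBIGUOUS)

    for text in ["Never smoked", "Not smoking", "quit last year?"]:
        print(f"{text!r} -> {normalize_smoking_status(text)}")

The point of the AMBIGUOUS fallback is the one Jay keeps returning to: anything the mapping cannot resolve should go to a human rather than being silently propagated.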

Kevin Pho: So let’s talk about some of those reasons, because as you said, even going all the way back to dictation, right? You could have dictation errors, which also introduces bad data into the health record. Are you saying that with the propagation of these ambient AI documentation systems, that problem is just getting exponentially worse because the ambient AI is misconstruing or not picking up or entering inaccurate data?

Jay Anders: I think it’s accelerating it more than anything else. There needs to be a human in the loop. That’s been one of my mantras forever when AI is applied to health care: there needs to be a human in the loop, just as there needed to be a human in the dictation loop. The errors are easily created, and physicians and clinicians aren’t paying attention, because I think they’ve been lulled into believing, “This is going to fix my documentation problem; I don’t have to worry about that anymore.” Well, you kind of still do, because it’s not perfect. You have to make sure that what you’re putting out there is accurate. And I’m all for ambient listening; I’m all for trying to create systems that help physicians work faster, keep them from being burned out, and help them do what they have to do. What I’m seeing is that, yes, there’s some help there, but it’s also creating a different problem than we had before. So I think it’s accelerating the problem as opposed to correcting it.

Kevin Pho: And is it just a matter of clinicians not reading the note that’s generated by the ambient AI and just accepting it as fact?

Jay Anders: I believe that’s a big part of it. And think about how much volume is being created by AI in one of these ambient listening systems. When you go in, as you probably do with your providers and I do with mine, you have a conversation. We’re physicians, so it’s physician to physician; it’s a little different conversation. All of that conversation is getting picked up and analyzed, so it actually creates a larger volume of text that has to be parsed through and verified than in the past, when we had straight dictation. When it was my mind creating that text, I might make mistakes, but I was still responsible for it. This is a different method of creating that text note. And that’s what these are: just blobs of text that could be incorrect.

Kevin Pho: So ever since large language models became popular three to four years ago, there have been exponential improvements in these algorithms. Has there been that same evolution when it comes to ambient AI? And if so, how much further do we have to go to deem these systems as reliable as we’d like?

Jay Anders: That’s a really great question. I think in the last five years, they have improved up to, I will say, the 90 to 95 percent accuracy level. It’s the remaining 5 percent that worries me in health care. It always has. The devil is in the details: if it’s incorrect, it’s incorrect. So instead of replacing any type of review, it just adds more of a review burden. Has it gotten better? Absolutely. What I see most now is that the pressure has shifted from having to dictate, write, or type a note to having to read it, and read it carefully. It only takes one, two, or three mistakes, and you have a real problem on your hands as a clinician and as a patient. There are no cross-checks in this; the clinician now is the cross-check. Are there systems out there that can review these notes and apply some medical knowledge to them? Sure, there are. But they’re not being widely used yet. So I think it’s gotten better. It’s not perfect, and I don’t believe it will ever be perfect. I think it’s up to the clinician practicing medicine to figure out exactly what they want to say about the patient sitting in front of them, and to make sure it’s accurate and correct.

Kevin Pho: Now, is it just these ambient AI technologies that are leading to inaccurate data in our health records? What other technologies that are related to AI are also introducing inaccuracies into the record?

Jay Anders: That’s another great question. When it comes to AI analysis of a patient visit, one thing AI is not very good at right now (it’s getting better, but it’s not very good yet) is coding: level of visit, diagnosis coding. That really requires somebody to review it to make sure the diagnosis is correct. You’ll get this note, and then you’ll have a diagnosis that the note has been labeled with, and you have to cross-check it. You have to cross-check the code as well, because one of the things an AI does, which is still fascinating, is make up what it doesn’t know. I’ll challenge anybody out there: take any AI system you like and try to get it to code correctly, because every now and again it’s just going to make up a code. And that’s going to get propagated. It’s an error. It’s going to go to an insurance company, which is going to say, “Uh-uh, that doesn’t work,” and it’s going to come back to you again. So coding is being improved, in a way, but again, you have to have a human in that loop to make sure it’s correct.
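Jay’s point about made-up codes translates directly into a guardrail: validate every AI-suggested code before it propagates. Here is a minimal Python sketch under that idea; the code list is a tiny sample and the format pattern is approximate, so treat both as assumptions. A real system would check against the full ICD-10-CM release rather than this toy set.

    import re

    # Illustrative guard against hallucinated diagnosis codes. The reference set
    # below is a toy sample; a real system would load the full ICD-10-CM release.
    ICD10_SHAPE = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")  # approximate format
    KNOWN_CODES = {"E11.9", "I10", "J45.909"}  # tiny sample of real codes

    def audit_code(code: str) -> str:
        code = code.strip().upper()
        if not ICD10_SHAPE.match(code):
            return "REJECT: not shaped like an ICD-10 code"
        if code not in KNOWN_CODES:
            return "HOLD: well-formed but not in the reference set; route to a human"
        return "PASS"

    for suggested in ["E11.9", "Z99.123", "banana"]:
        print(suggested, "->", audit_code(suggested))

The HOLD case matters most: a hallucinated code often looks perfectly well formed, so a format check alone is not enough; membership in a trusted code set is the real test.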

Kevin Pho: So, short of having physicians better check their notes before permanently putting them into the medical record, what other paths forward or solutions do you propose?

Jay Anders: Well, a couple of things. The LLMs are not being medically trained in a very precise manner; they try to match millions of words, and it’s very difficult to get that correct 100 percent of the time. So there are other ways to filter that kind of output, to cross-check it, to flag the clinician: “Hey, look at this again. It’s not quite right. Do you want to revise it?” I think there’s going to have to be some type of technology inserted between what AI produces, how it’s trained, and what’s presented to the clinician. And if AI doesn’t know something, it needs to be able to tell the clinician that it doesn’t know, and effectively communicate, “You can’t accept this as truth.” That’s going to be another major step, I think, for AI in health care as a whole, and especially for this bad data problem that we have.

Kevin Pho: So you’re proposing having that in-between model. So you have the ambient AI parsing the discussion, but then you also have that AI cross-checking what’s happening in the discussion versus what’s really in the medical chart, and if there are any discrepancies, it brings that to the attention of the physician before entering it into the medical record. Am I hearing that right?

Jay Anders: You’re hearing it exactly right. Yes. And those systems do exist. There are pure AI systems, and there are other, curated systems out there that will do that kind of thing. So there’s a way to alert physicians. There are a couple of EMR companies I know of right now that have implemented something just like that: when the AI doesn’t know something, the output will actually include a big italicized line that says, “I can’t find anything that this matches, and I can’t find this anywhere else. Are you sure you want to say this?”
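Here is a minimal sketch, in Python, of that “in-between” cross-check: compare terms in an AI-drafted note against the patient’s existing chart and a recognized vocabulary, and surface anything unmatched to the clinician. All of the data structures and terms are hypothetical placeholders; a real implementation would sit on a terminology service and the EMR’s problem list.

    from dataclasses import dataclass

    @dataclass
    class Flag:
        term: str
        reason: str

    def cross_check(note_terms, chart_terms, vocabulary):
        """Flag note terms that are unrecognized or unsupported by the chart."""
        flags = []
        for term in note_terms:
            t = term.lower()
            if t not in vocabulary:
                flags.append(Flag(term, "not a recognized clinical term"))
            elif t not in chart_terms:
                flags.append(Flag(term, "not found elsewhere in this chart"))
        return flags

    vocabulary = {"hypertension", "liver cancer", "appendicitis"}  # stand-in for a terminology service
    chart = {"hypertension"}                                       # stand-in for the existing record
    for f in cross_check(["hypertension", "liver cancer", "glorp"], chart, vocabulary):
        print(f"REVIEW: '{f.term}' ({f.reason})")

Note that “liver cancer” is a valid clinical term but absent from this chart, so it gets flagged for confirmation, which is exactly the check that would have caught the erroneous diagnosis Jay described earlier.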

Kevin Pho: Now, what would be an example of something that they would typically flag?

Jay Anders: More often than not, it is a missed physical finding, a misconstrued physical finding, a misconstrued history item, or a misconstrued diagnosis. Those are the three areas where I see it most.

Kevin Pho: So if it can’t really match that an item truly is part of the medical history, or even recognize it as a medical term, it would flag the clinician to say, “Hey, this is not a medical term that I’m aware of, nor is it in the patient’s chart right now. Do you still want to enter it?” And are you seeing that as the next step in the evolution of these documentation systems: having that AI go-between to cross-check? Is that universally accepted among these ambient AI companies?

Jay Anders: Nothing in AI is universally accepted. That’s one of the issues. I think it would go a long way for transparency and for building trust in documentation: the system is going to tell you something, and if it doesn’t know, it’s going to tell you it doesn’t know, as opposed to, “Here it is, believe it.” So yes, I think it would go a long way to build that transparency and that trust with clinicians.

On the flip side, AI companies (and this is just another one of my opinions) are focusing, as I’ve now read in several places, on replacing, say, a mid-level provider, a nurse practitioner, or a physician assistant with AI. That, to me, is a bad idea right now. The technology is not there yet, and people are racing toward a replacement theory for clinicians. It would be better to augment a clinician’s knowledge and expertise than to try to replace it. These systems are getting much better at being accurate, and there are multiple demonstration projects around that. But ultimately, the person responsible for patient care is the clinician. Nobody is going to sue OpenAI over a misdiagnosis; at least, nobody has yet, though that’s obviously possible. I think that’s part of the whole evolution of this: let’s concentrate on what AI really can do, and do well. Put humans in the loop so the output is cross-checked, and then add another “Are you sure?” before you actually say, “I’m finished with this.”

Kevin Pho: So obviously you’ve been studying this area. I’m always interested in asking people like yourself who are familiar with the trends: what do you foresee in the next year or so? When it comes to that intersection between AI and what I do as a physician, what do I have to look forward to, given how quickly things are evolving in this space?

Jay Anders: My hope is that we will start to build more trust into the systems being implemented, and get rid of the kinds of errors I just talked about, the things that pop up and go unreviewed for whatever reason. I think we can look forward to these systems becoming more and more accurate, but we can also count on the fact that they will not replace a trained clinician. They can augment clinicians; they can really help them do what they have to do. But let’s get off the kick of “AI is going to solve medicine by just doing it.”

I had a very interesting conversation with a surgeon, actually. I asked, “What happens when AI says you have appendicitis? Is AI going to take out your appendix?” The response I got was, “Well, it will recommend antibiotics.” I said, “OK, if your appendix is inflamed, do you want it taken out or not? As a surgeon, do you want it out or not?” He said, “Oh, absolutely.” Well, there you go. It’s not a replacement for surgery or for any procedure. It can augment them, but it can’t replace them. And I think that concept needs to carry down to all of internal medicine, which is my specialty, family practice, general OB-GYN, all of it. Let’s get off this replacement thing and talk about exactly what we can do to train these systems better, make sure their output is checked and correct, and make it easy to keep the human in the loop.

Kevin Pho: We’re talking to Jay Anders. He’s a physician executive. Today’s KevinMD article is “Health care’s data problem: the real obstacle to AI success.” Jay, let’s end with some take-home messages you want to leave with the KevinMD audience.

Jay Anders: To all my colleagues out there using ambient AI: I salute you for taking it on. Please read your notes. Please read what the system is putting out, because it’s very, very important; if I had to pick one, that’s my major take-home from today. The other is that you have to realize (and you know this already; everybody does) that what you write and what you record is permanent and very hard to get rid of. So you have to make sure it’s what you want in that medical record before it goes in, because getting it out is very, very hard.

Kevin Pho: Jay, as always, thank you so much for sharing your perspective and insight, and thanks again for coming back on the show.

Jay Anders: Thanks for having me, Kevin.


Tagged as: Health IT
