
AI in clinical documentation: the hidden risk of automation bias

Gagandeep Rai
Tech
March 18, 2026

The schedule is already behind. It is always already behind. In a family medicine clinic on an ordinary Tuesday, I open a chart and watch an AI tool do what it was built to do: draft the note, compress the story, produce an assessment that reads clean, confident, complete. For a moment it feels like relief, the kind you learn not to trust. Then the woman across from me says, quietly, “I don’t feel like myself. Are you sure this isn’t something serious?”

I look back at the screen. The language is polished. The differential is tidy. The uncertainty, the part that matters, is invisible. And in that gap between what the tool produces and what the patient needs, the real question surfaces. The ethical risk of AI in medicine is not that it makes mistakes. It is that we stop noticing when it does.

I want to be clear about where I stand. I am not here to argue against AI in medicine. Clinicians are drowning in documentation, in fragmented workflows, in the slow suffocation of cognitive overload. If AI can lighten that burden, improve access, bridge language barriers, and help patients feel heard rather than processed, we should pursue it. The promise is not hypothetical. I have felt it during my rotations.

But promise and readiness are not the same thing. And the history of medicine is full of interventions that arrived with genuine benefit and unexamined cost, scaling faster than the governance designed to contain them.

The silent arrival

We have lived through a version of this before. The electronic health record arrived with real promises: standardization, legibility, safer handoffs. It delivered some of them. But it also introduced harms that did not look like harms at first: copy-forward functions that fossilized yesterday’s inaccuracies into today’s plan, alert fatigue that trained a generation of physicians to click past warnings without reading them, documentation that appeared exhaustive while drifting, sentence by sentence, from the patient’s actual experience.

AI is not the EHR. But the lesson is portable: When a technology changes the texture of clinical work, it also changes where errors hide and who ends up absorbing the consequences. What makes today’s moment different is the speed and confidence with which AI is entering clinical spaces that already struggle to hold anyone accountable for anything.

AI does not announce itself. It does not arrive with alarms. It arrives with convenience, slipping into note-writing, triage pathways, portal messages, coding, risk scores, and what gets branded “decision support” but often functions as decision replacement. The more seamlessly it integrates, the less visible its influence becomes. That is the design. And that is the danger.

The erasure of nuance

Consider what happens to uncertainty. Family medicine lives in nuance: early symptoms that could be nothing or could be everything, stories that evolve over weeks, data that is always incomplete. A good clinician holds that ambiguity carefully, because the holding is part of the care. But AI turns uncertainty into confident language. A messy presentation becomes a neat assessment. A provisional impression becomes a crisp plan. The note reads “settled” when the clinical reality is anything but.

This is not a dramatic failure. It is a quiet one. When the note sounds more certain than the encounter felt, it changes behavior downstream: fewer follow-up questions, fewer alternative diagnoses considered, less urgency to bring the patient back. The harm is not a wrong answer. It is the slow erosion of clinical humility, the most protective instinct a physician has.

Then there is the question of who gets hurt. A model can look accurate in aggregate while consistently underperforming for patients with limited English proficiency, women presenting with atypical symptoms, people whose social complexity does not map neatly onto the training data. In primary care, where disparities compound quietly over years of missed screenings, dismissed symptoms, and delayed referrals, a small systematic error is not small. It is a slow-moving structural failure. And “average performance” is the metric that hides it best.

Even a tool that performs well at launch can degrade. Guidelines change. Patient populations shift. Upstream data fields get reformatted. Add the ordinary pressures of clinical workflow (click fatigue, time constraints, the gravitational pull toward accepting whatever the screen suggests) and you get a new category of error: not dramatic, not obvious, but infinitely scalable. “It worked in the pilot” is not the same as “it remains safe at scale,” and the distance between those two statements is where patients live.

Innovation without accountability

Which brings us to the question no one wants to answer cleanly: When AI nudges a clinician toward a diagnosis, a risk score, a triage decision, and it turns out to be wrong, who is responsible? The physician who signed the note? The health system that deployed the tool? The vendor that built and sold it? In practice, accountability becomes blurriest at the exact moment patients need it to be sharpest. Everyone touched the decision. No one owns it.

Innovation without accountability is not progress; it is risk, scaled.

Patients, meanwhile, often have no idea any of this is happening. AI is already touching the parts of care they rarely see: drafted notes they never read, portal messages composed by algorithms, triage decisions shaped before they walk through the door. Trust in medicine was never built by hiding the machinery. It was built by being honest about what the machinery does.

And here is a tension the industry rarely names out loud: Many medical AI companies emphasize that they have physicians on their teams. That can be genuinely valuable; clinician insight can improve safety, relevance, and design. But a physician employed to drive adoption can unintentionally become a trust shortcut, a white coat in the sales funnel, lending credibility to systems that have not earned independent scrutiny. The problem is not doctors working in industry. The problem is when clinical authority substitutes for clinical evidence.

Responsible adoption

So what does responsible adoption actually require? Not a rejection of AI, but a refusal to deploy it as though enthusiasm were a safety plan.

It starts with specificity and local proof. A documentation assistant is not the same as clinical decision support, and the standard of evidence should match the stakes. If a model was trained on another health system’s patients, the vendor’s confidence is not your evidence. Validation must happen here, with your patients, your workflows, your constraints, and it must disaggregate by the categories where harm concentrates: language, race and ethnicity, sex, age, disability, socioeconomic status. If performance differs across these lines, that is not a footnote in a technical report. That is the ethical headline. “It performs well overall” is a sentence that should make any clinician ask: Overall for whom?
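For readers who evaluate these tools, the disaggregated check described above is straightforward to operationalize. A minimal sketch, assuming a hypothetical results table with per-patient predictions, ground truth, and a subgroup column (the column names and toy data here are illustrative, not from any vendor report):

```python
# Hypothetical illustration: per-subgroup sensitivity check for a deployed model.
# Field names ("language", "y_true", "y_pred") are assumptions for this sketch.

from collections import defaultdict

def sensitivity_by_group(records, group_key):
    """Compute sensitivity (true-positive rate) separately for each subgroup."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives (missed cases) per group
    for r in records:
        if r["y_true"] == 1:  # only actual positive cases count toward sensitivity
            g = r[group_key]
            if r["y_pred"] == 1:
                tp[g] += 1
            else:
                fn[g] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Toy data: the model looks acceptable "overall" while missing half the
# positive cases in one language subgroup.
records = [
    {"language": "English", "y_true": 1, "y_pred": 1},
    {"language": "English", "y_true": 1, "y_pred": 1},
    {"language": "English", "y_true": 1, "y_pred": 1},
    {"language": "Spanish", "y_true": 1, "y_pred": 0},
    {"language": "Spanish", "y_true": 1, "y_pred": 1},
]

print(sensitivity_by_group(records, "language"))
```

The point of the sketch is the shape of the question, not the arithmetic: an aggregate metric over these five cases looks tolerable, while the per-group view exposes exactly the kind of concentrated harm the paragraph above warns about.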

It requires accountability structures that exist before something goes wrong, not after. Who owns monitoring? Who reviews adverse events? Who has the authority to pause or roll back a deployment? Health systems need a formal model-governance process, an AI equivalent of pharmacy and therapeutics, with the power to demand evidence, track harms, and say “not yet” without being called anti-innovation. Clinicians should be trained not only on how to use a tool but on how it fails, because “just review the output” is not a safety architecture. It is a liability transfer. And governance must be structurally separated from sales pressure, because credibility should never substitute for independent review.

Finally, it requires transparency with patients. AI should not be a hidden co-author of care. When it meaningfully shapes a decision, a document, or a communication, patients deserve to know. Not as a legal formality. As an act of respect.

None of this is radical. It is, in fact, the minimum. The safest technologies are not the ones that never fail. They are the ones designed to fail visibly, with clear accountability, and with the strongest protections built around the patients most likely to be harmed.

AI may eventually become as routine in clinical life as the stethoscope or the imaging order. But routine is not the same as ethical, and ubiquity is not the same as safety. The question was never whether we would use AI in patient care. The question, the one that will determine whether this technology heals or quietly harms, is whether we will build the governance to match its reach before the consequences do it for us.

The architecture of trust

I want to return to the woman I described at the beginning, the one who sat across from me and said, “I don’t feel like myself.” The one whose uncertainty the AI erased from the note. You may have already formed a picture of her face, her voice, the exam room.

I should tell you something: I wrote her to open this essay. Not to deceive you, but to demonstrate how quickly plausible detail becomes assumed truth. You trusted me, the way a clinician trusts a confident note, the way a system trusts a polished output, the way we are all learning to trust language that sounds like it knows what it is talking about. You did not pause. You did not question it. Why would you? The story was specific. The details were clean. It felt true.

That is the point. If a few sentences of credible prose can slip past the skepticism of someone actively reading an essay about the dangers of trusting AI-generated language, what happens in a clinic, at speed, under pressure, a hundred times a day? What happens when the confident note is not a rhetorical device but a medical record? What happens when no one pauses long enough to ask whether the story on the screen matches the person in the room?

We are building that world right now, one deployment at a time. We can still choose to build it differently. But only if we stop pretending that the architecture of trust is someone else’s problem.

Gagandeep Rai is a medical student.

Tagged as: Health IT

< Previous Post
How AI scribes can rescue clinical education from burnout

ADVERTISEMENT

Related Posts

  • The hidden medication putting Parkinson’s patients at risk

    Rebecca Miller, PhD
  • The hidden bias in how we treat chronic pain

    Richard A. Lawhern, PhD
  • For medical students: 20 pearls to honor every clinical rotation

    Ton La, Jr., MD, JD
  • Understanding why people participate in clinical trials

    Pouria Rostamiasrabadi
  • Why retail pharmacies are the future of diverse clinical trials

    Shelli Pavone
  • The hidden financial burdens shaping modern medicine

    Sarah Fashakin

More in Tech

  • How AI scribes can rescue clinical education from burnout

    Lynn McComas, DNP, ANP-C
  • Health care cyberattacks expose a critical national security failure

    Kristen Cline, BSN, RN
  • AI agents in health care: What they say when we aren’t listening

    Alp Köksal
  • The hidden risks and rewards of AI scribes in medicine

    Arthur Lazarus, MD, MBA
  • The hidden risks of AI-generated progress notes in psychotherapy

    Arthur Lazarus, MD, MBA
  • How AI in dentistry is changing your next checkup

    Sowjanya Gunukula, DDS
  • Most Popular

  • Past Week

    • The dangers of vertical integration in health care

      Stephanie Waggel, MD | Policy
    • The 9 laws of health care quality: Why metrics miss the point

      Constantine Ioannou, MD | Physician
    • Navigating the patchwork of CME requirements by state

      Vladislav Tchatalbachev, MD | Physician
    • Securing physician autonomy with employer-sponsored direct primary care

      Dana Y. Lujan, MBA | Physician
    • Adult disability care transition: Why medicine must grow up

      Ronald L. Lindsay, MD | Conditions
    • AI in clinical documentation: the hidden risk of automation bias

      Gagandeep Rai | Tech
  • Past 6 Months

    • Menstrual health in medicine: Addressing the gender gap in care

      Cynthia Kumaran | Conditions
    • The dangers of vertical integration in health care

      Stephanie Waggel, MD | Policy
    • The 9 laws of health care quality: Why metrics miss the point

      Constantine Ioannou, MD | Physician
    • Why does sex work seem like a more viable path than medicine in 2026?

      Corina Fratila, MD | Physician
    • From Singapore to Canada: a blueprint for primary care transformation

      Ivy Oandasan, MD | Policy
    • How board certification fuels the physician shortage crisis

      Brian Hudes, MD | Physician
  • Recent Posts

    • AI in clinical documentation: the hidden risk of automation bias

      Gagandeep Rai | Tech
    • How AI scribes can rescue clinical education from burnout

      Lynn McComas, DNP, ANP-C | Tech
    • Surviving stage 4 breast cancer: a 10-year journey of hope

      Tami Berczuk | Conditions
    • Why immersive travel may be a powerful tool for behavior change

      Stacey Funt, MD | Physician
    • Health care cyberattacks expose a critical national security failure

      Kristen Cline, BSN, RN | Tech
    • The hidden cost of long-term care policy for family caregivers

      Gerald Kuo | Conditions

Subscribe to KevinMD and never miss a story!

Get free updates delivered free to your inbox.


Find jobs at
Careers by KevinMD.com

Search thousands of physician, PA, NP, and CRNA jobs now.

Learn more

Leave a Comment

Founded in 2004 by Kevin Pho, MD, KevinMD.com is the web’s leading platform where physicians, advanced practitioners, nurses, medical students, and patients share their insight and tell their stories.

Social

  • Like on Facebook
  • Follow on Twitter
  • Connect on Linkedin
  • Subscribe on Youtube
  • Instagram

ADVERTISEMENT

  • Most Popular

  • Past Week

    • The dangers of vertical integration in health care

      Stephanie Waggel, MD | Policy
    • The 9 laws of health care quality: Why metrics miss the point

      Constantine Ioannou, MD | Physician
    • Navigating the patchwork of CME requirements by state

      Vladislav Tchatalbachev, MD | Physician
    • Securing physician autonomy with employer-sponsored direct primary care

      Dana Y. Lujan, MBA | Physician
    • Adult disability care transition: Why medicine must grow up

      Ronald L. Lindsay, MD | Conditions
    • AI in clinical documentation: the hidden risk of automation bias

      Gagandeep Rai | Tech
  • Past 6 Months

    • Menstrual health in medicine: Addressing the gender gap in care

      Cynthia Kumaran | Conditions
    • The dangers of vertical integration in health care

      Stephanie Waggel, MD | Policy
    • The 9 laws of health care quality: Why metrics miss the point

      Constantine Ioannou, MD | Physician
    • Why does sex work seem like a more viable path than medicine in 2026?

      Corina Fratila, MD | Physician
    • From Singapore to Canada: a blueprint for primary care transformation

      Ivy Oandasan, MD | Policy
    • How board certification fuels the physician shortage crisis

      Brian Hudes, MD | Physician
  • Recent Posts

    • AI in clinical documentation: the hidden risk of automation bias

      Gagandeep Rai | Tech
    • How AI scribes can rescue clinical education from burnout

      Lynn McComas, DNP, ANP-C | Tech
    • Surviving stage 4 breast cancer: a 10-year journey of hope

      Tami Berczuk | Conditions
    • Why immersive travel may be a powerful tool for behavior change

      Stacey Funt, MD | Physician
    • Health care cyberattacks expose a critical national security failure

      Kristen Cline, BSN, RN | Tech
    • The hidden cost of long-term care policy for family caregivers

      Gerald Kuo | Conditions

MedPage Today Professional

An Everyday Health Property Medpage Today

Copyright © 2026 KevinMD.com | Powered by Astra WordPress Theme

  • Terms of Use | Disclaimer
  • Privacy Policy
  • DMCA Policy
All Content © KevinMD, LLC
Site by Outthink Group

Leave a Comment

Comments are moderated before they are published. Please read the comment policy.

Loading Comments...