Clinical judgment is what AI chatbots still lack

Arthur Lazarus, MD, MBA
Tech
May 16, 2026

In Utah, a state-sanctioned experiment recently allowed an artificial intelligence platform to renew prescriptions without physician involvement. The company describes itself as an AI-powered bridge to care. Critics describe it as a regulatory loophole with a chatbot masquerading as an unlicensed physician.

As a psychiatrist and physician executive who has written about AI governance, I was curious. So, I decided to try it myself, not as a policy analyst, but as a patient.

I am a 72-year-old man with stage 3b chronic kidney disease (CKD). My recent labs showed a ferritin of 12.6 ng/mL, hemoglobin of 13.9 g/dL, and hematocrit of 41 percent. I had already undergone an upper endoscopy, colonoscopy, and fecal occult blood testing, all normal. I take low-dose aspirin. I wanted to explore whether my low ferritin suggested early iron deficiency anemia, whether CKD might be contributing, and whether further evaluation, such as capsule endoscopy, was warranted.

The chatbot was polite, fast, and reassuring. It explained that low ferritin can indicate iron deficiency and that CKD can contribute through reduced absorption and chronic inflammation. It mentioned medication-related bleeding risk. It characterized my findings as “early or mild iron deficiency anemia.”

What it did not do was practice medicine, which, depending on your viewpoint, could be a blessing or a curse.

It did not ask for my medications. It did not clarify the laboratory’s reference ranges. It did not probe for relevant symptoms like fatigue, dyspnea, pica, and restless legs. It did not ask about NSAIDs, anticoagulants, SSRIs, or other agents that might increase bleeding risk beyond aspirin. It did not inquire about weight loss, melena, hematuria, or dietary intake. It did not explore erythropoietin levels, transferrin saturation, or trends over time. It did not do an individualized risk assessment.

Instead, it offered a gentle nudge: “I recommend scheduling a telehealth consultation with a human doctor.” For $39, I could connect with a licensed physician in as little as 30 minutes. “Given your symptoms, a doctor can give you personalized guidance and peace of mind.”

The phrase “given your symptoms” struck me. I had not described any symptoms.

The encounter read like a well-trained medical student who had memorized associations but had not yet learned to think clinically. The chatbot supplied information, but it did not synthesize it in the way a physician does by actively interrogating uncertainty. That gap, between glib explanation and calibrated clinical judgment, is precisely where human medicine still resides.

When I challenged its deficiencies, the system abruptly terminated the session: “For safety reasons we have been forced to end this consultation. If you believe this is a medical emergency please call 911. If you are experiencing emotional distress, please call 988.”

I was not in emotional distress. I was discussing iron studies.

This knee-jerk shutdown, algorithmic risk aversion cloaked as safety, reveals something disturbing about autonomous clinical AI. When faced with ambiguity it cannot confidently categorize, it defaults to a script. The script protects the company. It does not advance the patient’s understanding.

The Terms of Service, 36 pages of mostly legal protections, make the hierarchy explicit. The platform is “not a medical provider.” It “frequently produces incorrect outputs.” Users must verify everything with a qualified clinician. The company disclaims liability for inaccuracies. Disputes are subject to mandatory arbitration. Class actions are waived. Messages may not be encrypted. The service is not for complex chronic conditions.

In other words: Trust the AI, but don’t rely on it. Use it, but assume it is wrong. And if something goes awry, you are largely on your own.

This is not an indictment of AI in medicine as much as it is an indictment of deploying autonomous systems into clinical gray zones without the scaffolding that governs human clinicians. Physicians operate within a framework of licensure, defined scope of practice, supervised training, continuing education, peer review, malpractice exposure, and professional accountability. Our authority is conditional and revocable. We cannot disclaim responsibility in 36 pages of legalese.

Several recent commentaries have argued that if AI systems are to function autonomously (prescribing, diagnosing, managing chronic disease), they should be licensed in a manner analogous to clinicians. Competency should be demonstrated against standardized examinations. Deployment should begin under supervision. Scope of practice should be explicit. Authorization should be time-limited and contingent on real-world performance monitoring. Accountability should be clear: developer and deploying institution alike.

The Utah pilot exploited a regulatory “sandbox” to waive the requirement that a licensed practitioner be involved in prescribing. The company cites internal simulations and preprints. Independent validation is sparse. Transparency is limited. Yet the system is authorized to renew nearly 200 chronic medications.

Proponents argue that AI can reduce administrative burden, expand access, and lower cost. All true, in theory. But clinical medicine is not simply the execution of rules. It is the disciplined exploration of exceptions.

In my case, the exception is the interplay between aging, CKD, borderline hemoglobin, low ferritin, and medication exposure. A human clinician might notice that a hemoglobin of 13.9 g/dL in a 72-year-old man with CKD is not necessarily anemia by certain reference standards, yet low ferritin could still indicate iron depletion. They might review longitudinal trends. They might question whether the normal endoscopic workup truly excludes intermittent bleeding. They might decide to treat empirically with iron and reassess before pursuing capsule endoscopy. Or they might not. But they would explain their reasoning.

The chatbot did none of this. It provided information without ownership.

AI enthusiasts often invoke the physician shortage crisis. They are not wrong. Primary care is strained. Administrative burden is crushing. But replacing superficial access barriers with superficial analysis is not reform. It is substitution.

Additionally, the use of a chatbot all but eliminates clinical touchpoints. Prescription renewals, lab interpretations, and “routine” follow-ups are opportunities to detect silent deterioration. A statin refill becomes a conversation about muscle pain. An antidepressant renewal uncovers suicidal ideation. An iron panel opens the door to occult malignancy. Automation smooths the workflow; it can also dull our alertness.

To be clear, AI can and should assist clinicians. It can draft notes, flag drug interactions, summarize records, identify outliers, even suggest differential diagnoses. It can serve as a tireless intern. But interns are supervised.

What unsettled me most was not that the chatbot fell short. It was that its limitations were simultaneously obvious and obscured. The platform projects clinical confidence while disclaiming clinical responsibility. It sounds like a doctor while insisting it is not one. On the one hand it states: “Describe your symptoms and I’ll provide a diagnosis and treatment plan, based on peer-reviewed medical research.” At the same time, the platform calls itself an AI doctor that is not a licensed physician and does not provide medical advice, diagnosis, or treatment. Which one is it?

As physicians, we have a duty to engage with these technologies: critically, constructively, and early. The window for shaping standards is now, before autonomous systems become embedded in workflows and reimbursement models. Unfortunately, too many AI applications are implemented backwards, tested in beta mode and corrected in the field.

My brief experiment did not harm me. It did not misdiagnose me. It did not prescribe inappropriately. It simply hovered at the surface of complexity and then invited me to pay for a telehealth visit with a PCP.

Perhaps that is the most honest outcome. AI can inform. It can triage. It can streamline. But when it comes to the messy, contextual, ethically weighted terrain of clinical judgment, information is not enough.

Until autonomous systems are held to standards commensurate with the authority they seek, we should resist confusing fluency with competence, and convenience with care.

I pasted that sentence into the chatbot, and it replied, predictably, “This topic seems unrelated to your health. I must end our chat if we continue discussing non-health-related issues.”

Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book is Nobody Told Me There’d Be Days Like These: Hard Truths from Physicians—and What They Mean for Medical Practice.
