In Utah, a state-sanctioned experiment recently allowed an artificial intelligence platform to renew prescriptions without physician involvement. The company describes itself as an AI-powered bridge to care. Critics describe it as a regulatory loophole with an unlicensed chatbot masquerading as a physician.
As a psychiatrist and physician executive who has written about AI governance, I was curious. So, I decided to try it myself, not as a policy analyst, but as a patient.
I am a 72-year-old man with stage 3b chronic kidney disease (CKD). My recent labs showed a ferritin of 12.6 ng/mL, hemoglobin of 13.9 g/dL, and hematocrit of 41 percent. I had already undergone an upper endoscopy, colonoscopy, and fecal occult blood testing, all normal. I take low-dose aspirin. I wanted to explore whether my low ferritin suggested early iron deficiency anemia, whether CKD might be contributing, and whether further evaluation, such as capsule endoscopy, was warranted.
The chatbot was polite, fast, and reassuring. It explained that low ferritin can indicate iron deficiency and that CKD can contribute through reduced absorption and chronic inflammation. It mentioned medication-related bleeding risk. It characterized my findings as “early or mild iron deficiency anemia.”
What it did not do was practice medicine, which, depending on your viewpoint, could be a blessing or a curse.
It did not ask for my medications. It did not clarify the laboratory’s reference ranges. It did not probe for relevant symptoms such as fatigue, dyspnea, pica, or restless legs. It did not ask about NSAIDs, anticoagulants, SSRIs, or other agents that might increase bleeding risk beyond aspirin. It did not inquire about weight loss, melena, hematuria, or dietary intake. It did not explore erythropoietin levels, transferrin saturation, or trends over time. It did not perform an individualized risk assessment.
Instead, it offered a gentle nudge: “I recommend scheduling a telehealth consultation with a human doctor.” For $39, I could connect with a licensed physician in as little as 30 minutes. “Given your symptoms, a doctor can give you personalized guidance and peace of mind.”
The phrase “given your symptoms” struck me. I had not described any symptoms.
The encounter read like a well-trained medical student who had memorized associations but had not yet learned to think clinically. The chatbot supplied information, but it did not synthesize it in the way a physician does by actively interrogating uncertainty. That gap, between glib explanation and calibrated clinical judgment, is precisely where human medicine still resides.
When I challenged its deficiencies, the system abruptly terminated the session: “For safety reasons we have been forced to end this consultation. If you believe this is a medical emergency please call 911. If you are experiencing emotional distress, please call 988.”
I was not in emotional distress. I was discussing iron studies.
This knee-jerk shutdown, algorithmic risk aversion cloaked as safety, reveals something disturbing about autonomous clinical AI. When faced with ambiguity it cannot confidently categorize, it defaults to a script. The script protects the company. It does not advance the patient’s understanding.
The Terms of Service, 36 pages of mostly legal protections, make the hierarchy explicit. The platform is “not a medical provider.” It “frequently produces incorrect outputs.” Users must verify everything with a qualified clinician. The company disclaims liability for inaccuracies. Disputes are subject to mandatory arbitration. Class actions are waived. Messages may not be encrypted. The service is not for complex chronic conditions.
In other words: Trust the AI, but don’t rely on it. Use it, but assume it is wrong. And if something goes awry, you are largely on your own.
This is not an indictment of AI in medicine as much as it is an indictment of deploying autonomous systems into clinical gray zones without the scaffolding that governs human clinicians. Physicians operate within a framework of licensure, defined scope of practice, supervised training, continuing education, peer review, malpractice exposure, and professional accountability. Our authority is conditional and revocable. We cannot disclaim responsibility in 36 pages of legalese.
Several recent commentaries have argued that if AI systems are to function autonomously (prescribing, diagnosing, managing chronic disease), they should be licensed in a manner analogous to clinicians. Competency should be demonstrated against standardized examinations. Deployment should begin under supervision. Scope of practice should be explicit. Authorization should be time-limited and contingent on real-world performance monitoring. Accountability should be clear: developer and deploying institution alike.
The Utah pilot exploited a regulatory “sandbox” to waive the requirement that a licensed practitioner be involved in prescribing. The company cites internal simulations and preprints. Independent validation is sparse. Transparency is limited. Yet the system is authorized to renew nearly 200 chronic medications.
Proponents argue that AI can reduce administrative burden, expand access, and lower cost. All true, in theory. But clinical medicine is not simply the execution of rules. It is the disciplined exploration of exceptions.
In my case, the exception is the interplay between aging, CKD, borderline hemoglobin, low ferritin, and medication exposure. A human clinician might notice that a hemoglobin of 13.9 g/dL in a 72-year-old man with CKD is not necessarily anemia by certain reference standards, yet low ferritin could still indicate iron depletion. They might review longitudinal trends. They might question whether the normal endoscopic workup truly excludes intermittent bleeding. They might decide to treat empirically with iron and reassess before pursuing capsule endoscopy. Or they might not. But they would explain their reasoning.
The chatbot did none of this. It provided information without ownership.
AI enthusiasts often invoke the physician shortage crisis. They are not wrong. Primary care is strained. Administrative burden is crushing. But replacing superficial access barriers with superficial analysis is not reform. It is substitution.
Additionally, the use of a chatbot all but eliminates clinical touchpoints. Prescription renewals, lab interpretations, and “routine” follow-ups are opportunities to detect silent deterioration. A statin refill becomes a conversation about muscle pain. An antidepressant renewal uncovers suicidal ideation. An iron panel opens the door to occult malignancy. Automation smooths the workflow; it can also dull our alertness.
To be clear, AI can and should assist clinicians. It can draft notes, flag drug interactions, summarize records, identify outliers, even suggest differential diagnoses. It can serve as a tireless intern. But interns are supervised.
What unsettled me most was not that the chatbot fell short. It was that its limitations were simultaneously obvious and obscured. The platform projects clinical confidence while disclaiming clinical responsibility. It sounds like a doctor while insisting it is not one. On the one hand, it states: “Describe your symptoms and I’ll provide a diagnosis and treatment plan, based on peer-reviewed medical research.” On the other, the platform calls itself an AI doctor that is not a licensed physician and does not provide medical advice, diagnosis, or treatment. Which one is it?
As physicians, we have a duty to engage with these technologies: critically, constructively, and early. The window for shaping standards is now, before autonomous systems become embedded in workflows and reimbursement models. Unfortunately, too many AI applications are implemented backwards, tested in beta mode and corrected in the field.
My brief experiment did not harm me. It did not misdiagnose me. It did not prescribe inappropriately. It simply hovered at the surface of complexity and then invited me to pay for a telehealth visit with a PCP.
Perhaps that is the most honest outcome. AI can inform. It can triage. It can streamline. But when it comes to the messy, contextual, ethically weighted terrain of clinical judgment, information is not enough.
Until autonomous systems are held to standards commensurate with the authority they seek, we should resist confusing fluency with competence, and convenience with care.
I pasted that sentence into the chatbot, and it replied, predictably, “This topic seems unrelated to your health. I must end our chat if we continue discussing non-health-related issues.”
Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book is Nobody Told Me There’d Be Days Like These: Hard Truths from Physicians—and What They Mean for Medical Practice.
















