Last week, I spent over two hours on a prior authorization for a patient who needed a medication she’d been stable on for years. Two hours of hold music, transferred calls, and faxed forms, while patients waited.
That same week, Anthropic announced Claude for Healthcare: An AI system that can verify coverage requirements, build claims appeals, and streamline prior authorizations in minutes. OpenAI launched ChatGPT Health days earlier. Both promise to liberate physicians from the administrative quicksand that’s drowning us.
My first reaction was relief. My second was a question every physician should be asking: What happens when AI becomes better at navigating health care than we are?
The administrative burden is killing us
Let’s be honest about where we are. Physicians spend nearly two hours on administrative tasks for every hour of direct patient care. Prior authorizations alone consume an estimated 34 hours per physician per week in some specialties. This isn’t medicine; it’s paperwork with a stethoscope.
Claude for Healthcare directly targets this pain point:
- CMS coverage database integration for real-time coverage verification.
- ICD-10 lookup for coding accuracy.
- Claims appeal generation with supporting documentation.
- PubMed access to 35 million articles for clinical decision support.
Commure, one of Anthropic’s health care partners, estimates Claude could “save clinicians millions of hours annually.” I believe it. The question is what we do with those hours, and what we lose in the process.
What Claude actually does
Unlike the AI chatbots patients have been using for years, Claude for Healthcare connects to the infrastructure of medicine itself.
For us:
- Verifies prior authorization requirements before we submit.
- Drafts appeals with relevant clinical evidence.
- Searches medical literature in seconds, not hours.
- Integrates with EHR systems through FHIR standards.
For patients:
- Translates lab results and medical reports into plain language.
- Connects to Apple Health and Android Health Connect for wellness data.
- Allows record sharing through HealthEx and Function connectors.
The promise is seductive: A world where the two hours I spent on that prior auth become two minutes, freeing me to actually practice medicine.
The race is on
Anthropic and OpenAI are now competing directly for health care dominance. OpenAI has 800 million weekly users and dominates consumer AI; ChatGPT Health extends its reach into personal wellness. Anthropic leads in enterprise adoption, with Claude already serving over 4,400 health care organizations through partners like Commure.
Sanofi reports that “Claude is integral to Sanofi’s AI transformation and is used by most Sanofians daily.” This isn’t experimental anymore. It’s infrastructure.
Meanwhile, the FDA is easing regulation of clinical decision support software. Products delivering single recommendations can now bypass FDA review if they meet certain criteria. The guardrails are coming down just as the technology accelerates.
The trust problem
Both companies emphasize safety: Health data won’t be used for AI training, users can disconnect permissions anytime, and Claude includes disclaimers directing users to professionals. Anthropic requires qualified professional review for care decisions.
But let’s be real: AI systems hallucinate. They generate confident-sounding nonsense. They lack the clinical intuition that comes from years of watching patients, listening to families, and learning from mistakes.
An Anthropic representative acknowledged these systems “can err and should not replace professional judgment.” That’s reassuring, until a patient walks in convinced their AI chatbot diagnosed them correctly and we’re the ones who have to explain why it didn’t.
What I’m actually worried about
My concern isn’t that AI will replace physicians. It’s that AI will be used to justify replacing the conditions that make good medicine possible.
If Claude can handle prior auths in two minutes, will administrators expect us to see more patients per hour? If AI can summarize charts instantly, will payers cut the reimbursement that currently covers documentation time? If patients can get “answers” from chatbots, will insurers argue we’re redundant?
Technology is neutral. The systems that deploy it are not.
What I’m cautiously optimistic about
Despite my concerns, I see genuine potential:
- Democratizing medical knowledge: Billions of people lack access to physicians. AI that explains lab results in plain language could be genuinely life-changing for underserved populations.
- Reducing burnout: If AI handles the administrative torture that’s driving physicians out of medicine, we might actually stay.
- Accelerating research: Claude’s life sciences tools could help drugs reach patients faster through streamlined trial design and regulatory navigation.
- Leveling information asymmetry: Patients who understand their conditions become partners in care, not passive recipients.
The key word is “could.” Whether these benefits materialize depends on how health systems, insurers, and regulators choose to deploy the technology.
What physicians should do now
This technology isn’t coming; it’s here. Our choice is whether to engage with it thoughtfully or let others define how it shapes our profession.
Evaluate AI outputs critically. Hallucinations are improving but not eliminated. The Opus 4.5 model shows better performance on honesty evaluations, but “better” isn’t “perfect.” Trust, but verify.
Maintain the therapeutic relationship. AI can decode data. It cannot hold a patient’s hand, read the fear behind their questions, or know when silence matters more than answers. That’s still ours.
Advocate for appropriate implementation. If your health system adopts AI tools, insist on physician input in how they’re deployed. Productivity metrics shouldn’t be the only measure.
Stay current. This technology evolves monthly. What’s true today may be obsolete by summer. Engage or be left behind.
The bottom line
I don’t know if Claude can really do in two minutes what took me two hours last week. But I know this: The administrative burden crushing physicians is real, and if AI can lift even part of it, we should pay attention.
The danger isn’t AI itself. It’s AI deployed without physician voices at the table, without patient safety as the priority, without recognition that medicine is fundamentally human work that technology can support but never replace.
Claude for Healthcare might be the beginning of something transformative. Or it might be another tool weaponized against the physicians it claims to help. The outcome depends on whether we engage now, or wake up later to a system we no longer recognize.
I’m choosing to engage. Cautiously. Critically. But with eyes open.
The AI doctor isn’t here to replace us. But the AI administrator might be. And that’s the conversation we need to be having.
Shiv K. Goel is a board-certified internal medicine and functional medicine physician based in San Antonio, Texas, focused on integrative and root-cause approaches to health and longevity. He is the founder of Prime Vitality, a holistic wellness clinic, and TimeVitality.ai, an AI-driven platform for advanced health analysis. His clinical and educational work is also shared at drshivgoel.com.
Dr. Goel completed his internal medicine residency at Mount Sinai School of Medicine in New York and previously served as an assistant professor at Texas Tech University Health Science Center and as medical director at Methodist Specialty and Transplant Hospital and Metropolitan Methodist Hospital in San Antonio. He has served as a principal investigator at Mount Sinai Queens Hospital Medical Center and at V.M.M.C. and Safdarjung Hospital in New Delhi, with publications in the Canadian Journal of Cardiology and presentations at the American Thoracic Society International Conference.
He regularly publishes thought leadership on LinkedIn, Medium, and Substack, and hosts the Vitality Matrix with Dr. Goel channel on YouTube. He is currently writing Healing the Split: Reconnecting Body, Mind, and Spirit in Modern Medicine.