We’ve all seen the hype.
AI will revolutionize health care. It will cut documentation time. Improve diagnoses. Save lives. Maybe even replace doctors.
But here’s what I know after 11 years as a hospitalist: Hype without evidence is dangerous. And AI—especially in medicine—isn’t just software. It’s treatment.
If we’re going to let AI influence life-or-death decisions, it needs to meet the same standard as any clinical intervention. That means rigorous trials, transparent design, and cultural alignment. Anything less is malpractice.
We’ve been here before. Remember Theranos? A dazzling promise, no peer-reviewed proof, and flaws that became the medical world’s worst-kept secret. It didn’t just waste money; it risked lives. If we treat AI the same way, rolling out tools without evidence, accountability, or ethics, we’re asking for another disaster.
Clinical AI must be validated like any drug or device. Randomized controlled trials aren’t optional—they’re essential. Dr. David Byrne calls this the “secret sauce” for safe AI implementation, and he’s right. We’d never let a new chemotherapy hit the market based on a good pitch deck and some retrospective data. So why are we doing that with algorithms?
And yet, it’s happening. Tools are being deployed without explainability. Without understanding the data they were trained on. Without knowing how they’ll behave in different populations. That’s not innovation—it’s irresponsibility.
Physicians are not the enemy of progress. But we are skeptics for a reason. Skepticism protects patients. It’s why we double-check vitals, question assumptions, and push back on protocols that don’t feel right. If we’re slow to adopt AI, it’s not because we’re resistant. It’s because we remember what happens when systems overpromise and underdeliver.
That skepticism will only grow if we continue to treat physicians as implementation obstacles instead of partners. If AI is to succeed in health care, it must be built around clinician trust. That starts with education. Our colleagues won’t trust a tool they don’t understand—nor should they.
We need AI literacy woven into training programs, hospital onboarding, and executive discussions. We need frameworks that set standards for designing and reporting clinical trials of AI interventions, like SPIRIT-AI and CONSORT-AI, baked into deployment plans. And we need every leader to understand that an AI rollout is not just an IT project. It’s a clinical intervention that deserves the same scrutiny, the same rigor, and the same humility.
Just because something’s new doesn’t mean it’s good. In Silicon Valley, speed is a virtue. In medicine, safety is. The tech world tests ideas on users. We test interventions on patients. One misstep in a user interface may frustrate a customer. One misstep in medicine can cost a life.
And here’s the real irony.
Physicians want AI to work. We’re tired of clunky EHRs. We want our notes dictated faster, our patients flagged earlier, our discharges smoother. But what we fear is bad change: change without evidence, implementation without governance, and technology that adds burden instead of removing it.
We cannot afford to spend millions on shiny AI dashboards while our EHRs still frustrate basic care. Or roll out “smart” triage tools while ignoring the bias in their training data. Before we launch AI-powered ambulances, let’s make sure we can trust the software that predicts readmissions.
Physicians don’t fear innovation. We fear irresponsibility.
That’s why it’s time to flip the script. AI is not an accessory—it’s becoming part of the care plan. And if we accept that, then it must be evaluated, regulated, and respected the way we evaluate everything else we give to our patients.
We need to start treating AI like chemotherapy.
Not because it’s toxic—but because it’s powerful. Because it requires precision, vigilance, and consent. Because it must be safe before it’s scaled. And because if we get it wrong, the consequences are too great.
AI isn’t the future of health care. It’s the present. But it will only succeed if we build it on the foundation that medicine was always meant to stand on: trust, truth, and evidence.
Rafael Rolon Rivera is an internal medicine physician.