When I was a new hospitalist in a tertiary center, I once told a colleague, “Looks like I got an easy admit.” He just stared at me.
It didn’t take long to understand why.
Vitals can be stable. Labs can look clean. But patients—especially those we assume are “simple”—rarely are. There’s a language beyond numbers: the way a patient breathes, a hesitation in their tone, the look in a spouse’s eye. Over time, physicians develop instincts we don’t always have names for. But they save lives.
Now, we face a new challenge: Clinical AI tools are seeing things even we miss.
Predictive models flag subtle deterioration risks, estimate the likelihood of reintubation, and forecast outcomes with astonishing precision. Some outperform even the most experienced clinicians. Should we trust them more than ourselves?
It’s a false dichotomy. The real answer lives in the space between trust and verification.
Take the reintubation model developed at Vanderbilt. With just four clinical inputs, it could predict post-cardiac surgery reintubation risk better than many providers—and do it faster. Or consider a transplant decision model that didn’t change its data, just how it was presented. By showing surgeons the “time to next equivalent offer,” it helped reduce wasted kidney donations by 5 percent.
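For readers curious what a tool that lean can look like, here is a minimal sketch of a four-input risk model. The features, weights, and data are hypothetical placeholders chosen for illustration; this is not the Vanderbilt model or its actual variables.

```python
# A minimal, hypothetical four-input risk model. Features, weights, and
# data are placeholders for illustration -- NOT the Vanderbilt model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort: 500 post-cardiac-surgery patients, four standardized
# clinical inputs per patient (the inputs themselves are made up).
X = rng.normal(size=(500, 4))
true_weights = np.array([0.8, -1.2, 0.9, 0.3])
y = (X @ true_weights + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# At the bedside, the same four inputs yield a probability the team can
# weigh against its own judgment -- a prompt, not a verdict.
new_patient = np.array([[0.5, -1.0, 1.2, 0.1]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted reintubation risk: {risk:.0%}")
```

The point of the sketch is the scale: four numbers the team already has, turned into one more signal at the bedside.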
These are remarkable wins. But they don’t signal replacement. They signal reinforcement. AI can sharpen our senses, not override them.
Because here’s the truth: AI doesn’t walk into the room.
It doesn’t feel the tension when no one wants to say “hospice.” It doesn’t notice when a patient seems less scared of dying than of telling their son they’re ready.
Daniel Kahneman’s Thinking, Fast and Slow describes two modes of thinking: fast, intuitive reasoning and slow, analytical processing. AI excels at the latter. But the best medical decisions often arise from a blend of both—what I call clinician-AI synergy.
That synergy only works if we build it right.
AI systems must integrate into workflows without friction. No extra logins. No clunky dashboards. Ideally, these tools act like a silent co-pilot—guiding, not dictating. And they must be explainable. If a model offers a score with no reasoning, most clinicians (myself included) won’t trust it. But if we can understand even a simplified rationale, we’re more likely to engage.
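Here is one way that “simplified rationale” might look in practice: the score arrives alongside the handful of inputs pushing it up or down. The names, weights, and values below are hypothetical, and weight-times-value is a deliberately crude stand-in for a formal explainability method.

```python
# Sketch of a "simplified rationale": the score arrives with the inputs
# that push it up or down. Names, weights, and values are hypothetical,
# and weight-times-value is a crude stand-in for a real explainability
# method.
import numpy as np

feature_names = ["age", "PaO2/FiO2", "vent hours", "BMI"]   # placeholders
weights = np.array([0.8, -1.2, 0.9, 0.3])                   # model coefficients
patient = np.array([0.5, -1.0, 1.2, 0.1])                   # standardized inputs

contributions = weights * patient
risk = 1.0 / (1.0 + np.exp(-contributions.sum()))           # logistic link

print(f"Predicted risk: {risk:.0%}")
for i in np.argsort(-np.abs(contributions)):
    direction = "raises" if contributions[i] > 0 else "lowers"
    print(f"  {feature_names[i]:>10} {direction} the score ({contributions[i]:+.2f})")
```

A readout like that invites a conversation with the model; a bare number invites a shrug.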
Culture matters just as much.
I recall a case study where a predictive model was prematurely blamed for prolonged lengths of stay. Fortunately, it had been implemented through a randomized rollout, allowing the team to isolate the real cause—and protect the tool. That level of foresight is essential. Clinicians will push back—not against innovation, but against poor implementation.
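A toy example shows why randomization protects both patients and the tool: if lengths of stay look the same with the model on and with it off, the model is not the culprit. The numbers below are invented purely for illustration.

```python
# Toy illustration of a randomized rollout: the model is live for a random
# half of admissions and off for the rest. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical lengths of stay, in days, for each arm.
los_model_on = rng.gamma(shape=4.0, scale=1.5, size=300)
los_model_off = rng.gamma(shape=4.0, scale=1.5, size=300)

result = stats.ttest_ind(los_model_on, los_model_off)
print(f"Mean LOS with model:    {los_model_on.mean():.1f} d")
print(f"Mean LOS without model: {los_model_off.mean():.1f} d  (p = {result.pvalue:.2f})")

# If both arms look the same, the investigation turns to staffing, case
# mix, or season -- not the prediction tool.
```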
Then comes the ethical tightrope.
If a validated model can reduce readmissions or prevent reintubation, do we cause harm by ignoring it? Possibly. But the reverse is just as dangerous: relying on unvalidated tools, ignoring bias, or surrendering our clinical judgment to black-box algorithms. That’s not decision support. That’s negligence.
Bias in datasets, over-reliance on automation, and data drift are real and present threats. Oversight, monitoring, and rigorous evaluation must be built into any AI deployment. And ultimate accountability must remain with the clinician. That’s not just good practice—it’s a moral imperative.
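Monitoring can be as unglamorous as a scheduled comparison of recent inputs against the population the model was trained on. The sketch below applies a simple two-sample test to a hypothetical “age” feature; the data and the alert threshold are illustrative assumptions, not a standard.

```python
# Routine drift check: compare a recent input distribution against the
# training population. Data and the alert threshold are illustrative
# assumptions, not a standard.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
training_ages = rng.normal(loc=62, scale=12, size=5000)  # hypothetical training cohort
recent_ages = rng.normal(loc=68, scale=12, size=400)     # recent patients skew older

# Two-sample Kolmogorov-Smirnov test; a small p-value flags a shift.
result = stats.ks_2samp(training_ages, recent_ages)
if result.pvalue < 0.01:
    print(f"Possible drift in 'age' (KS = {result.statistic:.2f}); review before trusting scores.")
else:
    print("No material drift detected in 'age'.")
```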
The future of health care will include AI.
There’s no stopping that. But our job isn’t to resist the machine—it’s to elevate the practice of medicine through it. The best outcomes won’t come from man or machine. They’ll come from both, working together.
If we build the right tools, foster the right culture, and keep ethics at the center, we can create the system we’ve always dreamed of—smarter, more humane, and deeply personal.
Rafael Rolon Rivera is an internal medicine physician.