Everyone wants the best model. The flashiest algorithm. The one with the highest AUC and the sexiest machine learning buzzword attached.
But here’s the problem: In hospital care, especially at midsize institutions, the “best” model on paper might be the worst fit for your people, your patients, and your workflow.
As a hospitalist and data-minded clinician, I’ve been exploring how we can use AI to reduce 30-day readmissions—an outcome tied not just to cost, but to continuity, dignity, and trust. We lose $16,000 or more with each bounce-back admission, and worse, we lose momentum in healing. But if we’re going to use AI for this problem, we have to choose wisely.
That starts with asking the right question: Can we trust it enough to act on it?
Models like random forests or logistic regression may lack the allure of deep learning or neural nets. But in the clinical world, interpretability matters more than mystery. If I can’t explain the model to the nurse case manager or to my CMO, we won’t get buy-in—and that’s the end of the road.
What’s more, in a high-stakes setting like readmission prevention, recall matters more than anything else. False negatives aren’t just missed predictions—they’re missed opportunities to intervene. If we fail to flag a high-risk patient, we may lose our only shot at keeping them out of the hospital. A balanced random forest model recently showed recall rates jump from 25 percent to 70 percent without sacrificing accuracy or AUC. That’s not just statistically interesting. That’s operationally relevant.
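For readers who want to see what that looks like in practice, here is a minimal, illustrative sketch. It assumes scikit-learn and the imbalanced-learn package, and it runs on a synthetic cohort with roughly 10 percent readmissions, so the numbers it prints are placeholders rather than results from any hospital's data.

```python
# Illustrative sketch only: synthetic data, placeholder parameters.
# Compares a plain random forest with a balanced random forest on
# recall (sensitivity for the readmission class) and AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.ensemble import BalancedRandomForestClassifier

# Synthetic "readmission" cohort: about 10 percent positive class,
# mimicking the class imbalance we see in real discharge data.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)

for name, model in [
    ("plain random forest", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("balanced random forest", BalancedRandomForestClassifier(n_estimators=200, random_state=42)),
]:
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    pred = model.predict(X_test)
    print(f"{name}: recall={recall_score(y_test, pred):.2f}, "
          f"AUC={roc_auc_score(y_test, prob):.2f}")
```

The balanced variant resamples toward the rare readmission class within each bootstrap, which is why recall on the patients we most need to catch tends to rise without a large hit to AUC.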
And yet, even the best-performing model will fail if no one trusts it.
Clinicians don’t want magic—they want logic. They want a model that aligns with their instincts, one they can argue with and understand. They want to know why the algorithm flagged Mr. Jones for follow-up and not Ms. Smith. This is where interpretability isn’t optional—it’s ethical.
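To make that concrete, here is a small, illustrative sketch of one way to surface the "why" behind a flag, assuming a simple logistic regression. The feature names (prior_admits_12mo, lives_alone, and so on), the patients, and the data are invented for illustration, not drawn from a validated risk model.

```python
# Illustrative sketch only: a toy logistic regression whose per-feature
# contributions show which factors pushed a given patient's risk score up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["prior_admits_12mo", "lives_alone", "num_active_meds", "ed_visits_6mo"]

# Placeholder training data; in practice this would be the hospital's own cohort.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 1).astype(int)

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

def explain(patient_row):
    """Each feature's contribution (coefficient x standardized value) to the log-odds."""
    z = scaler.transform(patient_row.reshape(1, -1))[0]
    contributions = clf.coef_[0] * z
    return sorted(zip(features, contributions), key=lambda t: -abs(t[1]))

mr_jones = X[0]   # stand-ins for two discharged patients
ms_smith = X[1]
print("Mr. Jones:", explain(mr_jones))
print("Ms. Smith:", explain(ms_smith))
```

Sorting the contributions by magnitude gives the case manager a ranked list of the factors that drove the flag, something the team can question, confirm, or override in plain language.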
We also need to be honest about implementation. If your AI tool adds logins, dashboards, or extra cognitive load, it’s dead on arrival. The right solution fits into our flow, not the other way around. It augments our awareness; it doesn’t replace our judgment. And it speaks a language the whole team can understand.
That includes leadership.
If I walk into the boardroom to pitch an AI solution, I need to lead with what matters to them: cost savings, length of stay, and readmission penalties. But then I pivot to what matters to us: safer discharges, cleaner transitions, and fewer preventable harms. That’s how you earn trust from both sides of the hallway.
AI isn’t plug-and-play. It’s not about picking what’s new. It’s about building what fits—clinically, operationally, and culturally. And in midsize hospitals, where every resource counts, that matters more than ever.
Let’s stop chasing the flash. Let’s start building what works.
Rafael Rolon Rivera is an internal medicine physician.