As a third-year medical student, I always carried three essentials during my emergency medicine clerkship: a stethoscope, a Celsius, and, at my fingertips, MDCalc. With a few taps, I could translate uncertainty into structure: the Wells criteria for pulmonary embolism, NEXUS for cervical spine imaging, CAM for delirium, COWS for opioid withdrawal. These tools did not replace my clinical reasoning, but they scaffolded it. They allowed me to justify and present my decisions to my attendings and residents.
I adopted this practice during my previous rotation in outpatient surgery clinic, after watching my attending cite the CODA trial directly in the assessment and plan to justify antibiotics over surgery for uncomplicated appendicitis: a randomized controlled trial published in NEJM that challenged a century of surgical dogma (CODA Collaborative, 2020). That moment stuck with me. Evidence was not just academic; it was operational. It could legitimize restraint. It could defend deviation from tradition. I felt secure knowing my clinical judgment was supported by data. Risk could be stratified. Decisions could be defended.
The black box of clinical scores
Yet each time I opened MDCalc and scrolled through its catalog, I found myself wondering who built them, and how. The Wells score, derived from physician gestalt formalized into a decision rule, improved pretest probability estimation while retaining subjective elements (Wells et al., 1997). NEXUS substantially reduced unnecessary cervical spine imaging at the cost of a small but accepted miss rate, reflecting a deliberate population-level tradeoff (Hoffman et al., 2000). The CAM, though widely adopted, has demonstrated variable sensitivity depending on user training and clinical setting (Inouye et al., 1990). These tools earned legitimacy through rigorous validation, statistical significance, confidence intervals, and diagnostic performance metrics, but none were infallible, nor were they intended to be applied without clinical judgment.
What unsettled me was not that these models had limitations; it was how easily those limitations took a backseat once the models became institutionalized. What began as probabilistic aids slowly hardened into protocols. Protocols, in turn, began to feel like rules. From a student’s vantage point, evidence had a way of ossifying into dogma.
From consumer to critic
Around the same time, a group of classmates and I arrived at a parallel realization through a project of our own. With the guidance of a faculty advisor and a biostatistician, we created an open-access curriculum in data analysis and interpretation. The goal was modest: empower medical students to understand their data well enough to ask better questions, not to replace professional statisticians, but to collaborate with them more substantively. We paired this curriculum with free, peer-led statistics consultations for students engaged in research.
The disclaimer was explicit: This was not a substitute for expert oversight. Rather, it was a hopeful solution to passivity. Too often, students relegated to chart reviews were treated as grunt labor rather than thinkers, extracting data without understanding how it would be analyzed, interpreted, or ultimately used. As the curriculum expanded, we incorporated study design modules, REDCap-based data collection, and hands-on analysis using R and SPSS. Students who once viewed research as opaque began to design their own studies. They moved from executing tasks to architecting questions, choosing outcomes, anticipating bias, and understanding how a statistically significant result may not always translate to clinical relevance.
That shift from consumer to critic, from follower to questioner, felt deeply familiar. It mirrored the apprehension I had begun to feel in the emergency department, where elegant tools sometimes masked uncertainty rather than illuminated it. In both spaces, the lesson was the same: Evidence is most powerful when you understand how it is made.
The danger of cookbook medicine
This realization crystallized for me while reading Blind Spots, in which Dr. Marty Makary invokes Dr. David Sackett, often called the father of evidence-based medicine. Dr. Sackett warned that evidence-based medicine was never meant to be a cookbook. He cautioned against the arrogance of preventive medicine driven more by authority than data, interventions aggressively promoted before being rigorously tested, sometimes causing widespread harm. He criticized unscientific guidelines and the silencing of dissent in the pursuit of certainty.
History has not been kind to medical hubris.
The central thesis remains essential: Good medicine requires both the best available external evidence and individual clinical expertise. Either alone is insufficient.
As a trainee, this tension plays out daily. We are taught algorithms, pathways, and scores, tools that undeniably improve care at scale. At the same time, my EM attending would rightfully remind us to “treat the patient, not the number.” The challenge is not rejecting evidence but resisting the temptation to outsource thinking to it.
Evidence-based medicine is powerful precisely because it embraces uncertainty. P-values do not absolve us of responsibility; they demand interpretation. A statistically significant finding is not a moral mandate. Guidelines are hypotheses frozen in time, awaiting revision.
This is where healthy skepticism becomes a professional virtue.
Asking questions should not be seen as defiance. It is fidelity to the scientific method. Why was this study designed this way? Who was excluded? What outcome was chosen and why? What incentives shaped this recommendation? What harms might be invisible in the average?
When medicine makes recommendations grounded in rigorous studies, transparent data, and humility about what we do not yet know, our profession shines. We reduce harm. We earn trust. But when questioning becomes taboo and certainty becomes performative, we risk repeating the very mistakes that evidence-based medicine was meant to prevent.
As I move forward in training, I still use MDCalc. I still cite trials. I still believe deeply in the power of data. But I now see evidence not as a shield against uncertainty, but as a lens through which to examine it more honestly.
Medicine is not a cookbook. It is a conversation between data and judgment, past and present, humility and conviction. Our responsibility as physicians-in-training is not merely to follow the evidence, but to understand it well enough to challenge it when necessary.
Jay Pendyala is a medical student.