I have watched the U.S. News & World Report “Best Medical Schools” rankings shape behavior in American medicine for decades. As a physician, educator, and someone who has mentored trainees at different stages of their careers, I have seen the rankings function both as a compass and as a distortion. They promise clarity in a complex landscape. Too often, they deliver something closer to a mirage.
The appeal is obvious. Medical school applicants face a bewildering set of choices, and rankings offer a shorthand. They signal prestige, research strength, and, at least in theory, educational quality. Even in their current form, with a tiered system rather than strict ordinal rankings, they retain enormous influence. Schools in the highest tiers, representing the top percentiles, continue to attract attention, applicants, and institutional advantage. For a prospective student trying to make sense of hundreds of programs, that kind of structure feels indispensable.
There is also a legitimate argument that rankings impose a degree of accountability. They force schools to report data, compare outcomes, and compete in ways that can drive improvement. Research productivity, National Institutes of Health (NIH) funding, and the proportion of graduates entering primary care are not inconsequential measures. They reflect real institutional priorities and, in some cases, real societal needs. When U.S. News emphasizes these metrics, it is attempting, however imperfectly, to quantify excellence.
But the deeper I have engaged with these rankings, the more I have come to see their limitations not as technical flaws, but as conceptual ones.
The most pointed critiques have come not from outside medicine, but from within its most elite institutions. In recent years, major medical schools, including Harvard, Columbia, Stanford, Penn, and Mount Sinai, have withdrawn from participation. Their objections converge on a common theme: The rankings measure the wrong things.
At Penn, Dean J. Larry Jameson concluded that the rankings perpetuate a vision the school does not share, emphasizing metrics like grades, test scores, and research funding at the expense of qualities such as creativity, resilience, and empathy. Columbia’s dean went further, calling the rankings “narrow and elitist,” arguing that they reward institutional wealth and reputation rather than the ability to train physicians who meet the needs of a diverse society. Mount Sinai’s leadership captured the problem succinctly: Medical education “cannot be reduced to a set of numbers.”
I agree with them, but I would go a step further. The issue is not just that rankings reduce complexity. It is that they reshape it.
When metrics become targets, behavior follows. If Medical College Admission Test (MCAT) scores and grade point average (GPA) carry weight, schools will optimize for them. If NIH funding is a dominant factor, institutions will invest in research infrastructure that may or may not align with their educational mission. Over time, the rankings do not simply reflect reality; they begin to create it.
This is where the unintended consequences emerge. A system designed to guide applicants can end up narrowing the definition of excellence. It privileges what is measurable over what is meaningful. It risks marginalizing programs that excel in community engagement, primary care training, or the cultivation of humanistic physicians, attributes that are harder to quantify but central to the future of medicine.
To be fair, U.S. News has not stood still. The move to tiered rankings and the removal of peer and residency director surveys reflect an effort to address longstanding criticisms. The organization itself acknowledges that rankings should be “one consideration, not the lone determinant” in choosing a medical school. These are meaningful adjustments. They suggest a recognition that the previous model had reached its limits.
Yet the fundamental tension remains unresolved. On one side is the desire for objective comparison. On the other is the reality that medical education is deeply contextual, relational, and mission-driven. A rigid ranking system, no matter how refined, struggles to capture that.
There is also a broader cultural dimension to this debate. Critics of the withdrawals have argued that abandoning standardized metrics risks undermining meritocracy, replacing quantifiable achievement with less defined criteria. Supporters counter that traditional metrics are themselves biased, favoring applicants with greater resources and access. This is not simply a methodological disagreement. It is a debate about what kind of physicians we want to train and what kind of system we want to build.
As someone who has practiced in both clinical and academic environments, I find truth in both perspectives. Intellectual rigor matters. So do empathy, adaptability, and a commitment to service. The challenge is not choosing one over the other but integrating them in a way that is both fair and meaningful.
So where does that leave us?
The current controversy presents an opportunity, not to abandon rankings entirely, but to rethink them from the ground up.
A better approach would move away from a single, composite hierarchy and toward a multidimensional, transparent framework. Instead of asking, “Which school is best?” we should be asking, “Best for what, and for whom?” Limiting comparisons to just two domains (primary care and research, as the current system largely does) reduces a complex educational network to a narrow axis. It overlooks the wide spectrum of specialties, hybrid careers, and non-clinical pathways (industry, policy, informatics, entrepreneurship) that today’s medical students increasingly pursue.
Imagine a system that reports clearly defined domains: research impact, clinical training exposure, primary care outcomes, diversity and inclusion metrics, student well-being, debt burden, and career trajectories. Each domain would be independently verified, openly reported, and presented without forced aggregation into a single rank. Applicants could then weigh these factors based on their own goals and values.
Crucially, this framework should be co-developed by stakeholders across the ecosystem: educators, students, accrediting bodies, and yes, independent evaluators like U.S. News. Schools that have withdrawn from the rankings have already committed to greater transparency by publishing their own data. That momentum should be harnessed, not fragmented.
We should also incorporate longitudinal outcomes. What happens to graduates five, ten, or fifteen years into practice? Are they serving underserved communities? Advancing science? Leading health systems? These are harder questions to answer, but they are far more aligned with the ultimate mission of medical education.
Finally, we need to acknowledge a simple truth: No ranking system can substitute for judgment. Choosing a medical school is not a consumer decision in the traditional sense. It is a formative step in a professional identity that will evolve over decades. Reducing that decision to a number (or even a tier) does a disservice to the intricacy of the journey.
The U.S. News rankings are not going away. Nor should they. But if they are to remain relevant, they must evolve from arbiters of prestige to facilitators of understanding.
The goal should not be to rank medical schools. It should be to illuminate them.
Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book is Nobody Told Me There’d Be Days Like These: Hard Truths from Physicians—and What They Mean for Medical Practice.