Every major policy in medicine is supposed to rest on evidence. We track outcomes for new drugs, devices, and procedures. We measure hospital mortality, infection rates, and adherence to guidelines. Yet one of the most time-consuming and expensive physician policies of all, Maintenance of Certification (MOC), has never been held to that same standard.
For decades, the American Board of Internal Medicine (ABIM) and other specialty boards have argued that recertification protects patients by ensuring physicians remain up to date. It’s a compelling narrative. But after years of research, the truth is far less clear: There’s little solid evidence that MOC participation, testing, or point accumulation improves how we practice or how our patients fare.
A claim without proof
When the MOC system was created, its premise seemed self-evident. Medicine changes rapidly; physicians must keep learning; therefore, periodic testing and structured education should make us better clinicians.
The problem isn’t with that logic; it’s with the lack of proof.
Most studies cited by ABIM and the American Board of Medical Specialties (ABMS) are observational or associative. They show that physicians who are board certified (as opposed to never certified) sometimes perform better on guideline adherence metrics. But that's not evidence that MOC itself, with its ongoing recertification cycles, fees, and quizzes, improves care. It's like claiming gym memberships improve fitness without measuring who actually works out.
No major randomized, longitudinal, or outcomes-based studies have shown that physicians actively participating in MOC deliver measurably better patient outcomes than those who do not.
A few studies, modest signals
Supporters often cite a handful of studies suggesting benefits. One 2008 analysis in Annals of Internal Medicine found that internists who scored higher on MOC exams provided somewhat better diabetes and hypertension care among Medicare patients. Another, more recent study found small differences in preventive care metrics among physicians who had recently recertified.
Those findings are interesting, but they don’t close the case. Both studies predate the current longitudinal MOC models and rely on surrogate outcomes (such as lab testing rates), not hard clinical endpoints like mortality, complication rates, or readmissions.
In contrast, larger reviews have found no consistent difference in patient outcomes between physicians who maintain certification and those who do not, particularly among those with extensive clinical experience. So far, the correlation between MOC and improved care appears to be tenuous at best.
Meanwhile, the burden grows
Let’s imagine a few physicians caught between ideals and implementation.
Dr. Malik, an outpatient gastroenterologist in his 40s, diligently completes his Longitudinal Knowledge Assessment (LKA) questions every quarter. He finds some of them intellectually stimulating; a well-crafted question can prompt a quick review of guidelines. But more often, he sees content disconnected from his daily work. "It's not that I don't want to keep learning," he says. "I just want to spend the time on something relevant, not a pop quiz on inpatient nephrology when I'm an outpatient gastroenterologist."
Then there’s Dr. Rivera, a hospitalist working in a busy urban center. She’s enrolled in two different longitudinal MOC programs, one for internal medicine and another for her subspecialty. Between clinical shifts, hospital-mandated CME, and MOC tasks, she estimates she spends about 150 hours a year on compliance activities. That’s nearly a month of full-time work, none of which has been shown to improve outcomes or reduce medical errors.
These are familiar stories in 2025. The intention behind MOC, lifelong learning, has been swallowed by a culture of perpetual testing and paperwork.
The problem with proxy metrics
Medicine loves metrics, but MOC relies on the wrong ones.
Passing an online quiz or collecting CME points is not the same as demonstrating competence. Real competence shows up in clinical reasoning, diagnostic accuracy, and patient communication: things that multiple-choice questions cannot measure.
If we were designing a true outcomes-based maintenance system, we’d look at indicators that matter: rates of missed cancer diagnoses, avoidable readmissions, patient safety incidents, or adherence to current therapeutic standards. Instead, we rely on documentation, logins, and fees. It’s as though the system confuses measuring education with measuring impact.
What the data could be, if we bothered to collect it
The irony is that we now have the tools to test whether MOC works. With linked electronic health records, outcomes registries, and physician-level performance data, we could finally study whether maintaining certification affects measurable outcomes.
Imagine if every physician’s certification status could be anonymously linked to real-world patient metrics: screening rates, procedural safety data, chronic disease control. Over a decade, we could know, empirically, whether physicians engaged in structured maintenance deliver better care.
But the boards haven’t done that research. Nor have they opened their data for independent analysis. Without transparency, we’re left with conjecture, not evidence.
The emotional and professional cost
Beyond the lack of data lies a deeper issue: the erosion of professional trust.
Physicians don’t object to learning; they object to being told that their professional worth depends on a process that’s never proven itself. The repeated demands for fees, attestations, and participation create the impression that MOC is less about education and more about control.
For many, it has become a symbol of top-down governance: a handful of administrators and testing vendors dictating how lifelong learning should look, with little regard for real-world practice.
As one colleague put it bluntly: “I don’t need a quarterly quiz to stay current. I need time to actually read, reflect, and care for patients.”
What the boards could do differently
If the certifying boards genuinely want to rebuild trust, they could start by turning MOC into something measurable, transparent, and clinically relevant.
- Commit to outcomes research: Boards should fund independent, peer-reviewed studies that track whether MOC participation correlates with objective improvements in care. Without data, claims of quality are marketing, not medicine.
- Integrate maintenance with actual CME: Instead of proprietary questions and point systems, boards should accept accredited CME that directly aligns with each physician’s scope of practice. A course on colonoscopy quality metrics or endoscopic bleeding control should count every bit as much as an MOC quiz.
- Reduce redundancy: Align board requirements with state CME and hospital credentialing. A single educational activity should satisfy all three.
- Build transparency and reciprocity: Publish annual reports on costs, revenues, and evidence of benefit. Allow physicians to transfer equivalent credits between boards.
- Focus on competence, not compliance: Shift from testing recall to assessing real-world performance through case reviews, peer feedback, or outcomes-based audits.
The grandfather question, again
The inequity between lifetime certificate holders and younger physicians remains a sore point, and it underscores how arbitrary the system has become.
If ongoing evaluation is essential for safety, why are tens of thousands of older physicians exempt? If it isn’t essential, why are their younger colleagues forced to comply?
The existence of this divide undermines every argument for MOC as a quality measure. It turns the program from a professional standard into a generational penalty.
The way forward
At its best, lifelong learning in medicine is self-driven, fueled by curiosity, responsibility, and the daily demands of complex patient care. A good system would harness that motivation, not commodify it.
The current MOC framework, for all its good intentions, has lost that balance. It measures the wrong things, demands too much of the wrong time, and assumes too much about its own value.
The boards could fix this by grounding every requirement in evidence, aligning with state and institutional systems, and eliminating redundant bureaucracy. They could make MOC a true reflection of modern practice, not an expensive ritual repeated under threat of decertification.
Until then, physicians will continue to meet their educational obligations the same way they always have: on their own time, under their own steam, and largely in spite of the system that claims to regulate their competence.
Brian Hudes is a gastroenterologist.