I have a love-hate relationship with practice guidelines. Love, because it is often helpful to refer to a set of evidence-based recommendations as part of clinical decision-making; hate, because of the shortcomings of the guidelines themselves, as well as of the evidence on which they are based.
A recent piece in JAMA and the editorial that accompanied it reinforced my ambivalence.
The research report addressed a straightforward question: How often do class I recommendations change in successive editions of guidelines on the same subject from the same organization? Recall that class I recommendations are things that physicians should do for eligible patients. They are particularly important because these recommendations often form the basis for quality metrics, against which physician performance is measured, increasingly with financial consequences. It is not hard to understand why.
First, the recommendations are, by nature, definitive: if a patient meets certain criteria (e.g., evidence of ischemic vascular disease and no allergy to aspirin), then she should get the indicated therapy or intervention (aspirin), making the quality assessment fairly straightforward. It is also generally easy to detect whether the intervention was delivered. Finally, it is easier to engage clinicians with quality metrics that detect underuse (the patient did not get something he should have) than overuse (the patient got a treatment or service he should not have).
The authors limited their study to guidelines published jointly by the American College of Cardiology and the American Heart Association. These are generally well-respected documents and are often held up as models for how guidelines should be developed and promulgated. (Disclosure: I am a card-carrying fellow of both organizations.) They categorized the status of each original class I recommendation in the subsequent guideline as retained, downgraded or reversed, or omitted. So what did the study find?
Overall, about 9% of the recommendations were downgraded or reversed in the follow-up guideline.
I don’t know about you, but that seems like a lot to me, especially since the median interval between the paired guidelines was only 6 years. It is even more disturbing when you consider how many years it takes to develop quality metrics based on these guidelines, which makes it inevitable that some metrics will be based on discredited recommendations. The discordance between the newest cholesterol management guidelines and the widely adopted HEDIS measure for LDL management is just one example where this is already the case.
I think this is just one more reason why quality measures built around process (did you do this or that in the care of a patient?) have to give way to those measuring outcomes (how well did the patient do under your care?).
Ira Nash is a cardiologist who blogs at Auscultation.