How excited would you be about a medication that lowered your risk of cardiovascular death, heart attack, or stroke by 1.5%? Excited enough to spend a few thousand dollars a year on the drug? I expect not.
What if, instead, the drug reduced those same terrible outcomes by 20%? That's probably enough benefit to interest some people in the drug.
Well, those statistics come from the same clinical trial, evaluating the same drug. In fact, they present the exact same results, simply framed in different ways. The 1.5% number refers to the absolute reduction in the risk of those outcomes: the drug reduced the two-year risk of cardiovascular death, heart attack, and stroke from 7.4% to 5.9%. That's an important reduction by any account, on par with many medications that have become critical in combating cardiovascular disease.
But that 1.5% reduction sounds much less impressive than the "20% reduction" the authors describe in the discussion section of their New England Journal article, a number repeated, practically verbatim, by the physician who wrote an accompanying editorial in the same journal.
How can these experts claim a 20% reduction in risk when the study showed only a 1.5% reduction? Because 1.5% is approximately 20% of 7.4%. When summarizing the impact of this drug, the researchers and the editorialist chose to emphasize the relative risk reduction of the treatment rather than the absolute risk reduction.
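To make that arithmetic concrete, here is a minimal Python sketch (the function names are my own, not anything from the study) that computes both numbers from the trial's reported event rates:

```python
def absolute_risk_reduction(control_risk, treated_risk):
    """Absolute risk reduction: the difference in event rates."""
    return control_risk - treated_risk

def relative_risk_reduction(control_risk, treated_risk):
    """Relative risk reduction: the absolute reduction as a share of the baseline risk."""
    return (control_risk - treated_risk) / control_risk

# Two-year risk of cardiovascular death, heart attack, or stroke reported in the trial
control_risk = 0.074   # 7.4% without the drug
treated_risk = 0.059   # 5.9% with the drug

arr = absolute_risk_reduction(control_risk, treated_risk)
rrr = relative_risk_reduction(control_risk, treated_risk)

print(f"Absolute risk reduction: {arr:.1%}")   # 1.5%
print(f"Relative risk reduction: {rrr:.0%}")   # 20%
```

The same data yield both headlines; the only difference is whether the 1.5 percentage points are reported on their own or divided by the 7.4% baseline.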
To make matters worse, the authors published a figure illustrating the results of the trial. Part of the figure appropriately plots their research findings on a graph in which the y-axis ranges from 0% risk to 100% risk. But within that graph, the authors present a magnified version of their results, which visually exaggerates the benefits of the drug. Here's a picture of that figure within a figure:
Relative risk reduction is a troublesome way to convey the benefits of treatments. Consider two hypothetical medicines, each targeting a different disease. One reduces disease-related hospitalizations by 50%, and the other by 10%. Most would think the first drug is much more helpful than the second. But suppose the first medication treats a disease that rarely requires hospitalization, reducing hospitalization rates from 1% to 0.5%. That's a 50% relative reduction in risk, but only a 0.5% absolute reduction. By contrast, suppose the second drug (for a different disease, remember) reduces hospitalization from 30% to 27%. That's a 10% relative risk reduction, but a 3% reduction in absolute risk. The second drug is significantly more effective at reducing hospitalization than the first.
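Running the same two calculations on these hypothetical numbers (again, just a sketch; the drugs and rates are the invented examples above) shows why the two measures can rank the drugs in opposite order:

```python
# Hypothetical drug A: hospitalization falls from 1% to 0.5%
# Hypothetical drug B: hospitalization falls from 30% to 27%
drugs = {"Drug A": (0.01, 0.005), "Drug B": (0.30, 0.27)}

for name, (control, treated) in drugs.items():
    arr = control - treated               # absolute risk reduction
    rrr = (control - treated) / control   # relative risk reduction
    print(f"{name}: absolute reduction {arr:.1%}, relative reduction {rrr:.0%}")

# Drug A: absolute reduction 0.5%, relative reduction 50%
# Drug B: absolute reduction 3.0%, relative reduction 10%
```

Drug A wins on relative risk reduction; drug B prevents six times as many hospitalizations per hundred patients treated.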
When making treatment decisions, people need information on absolute risk reduction, because it helps them determine whether the benefits of an intervention justify the burdens accompanying it. Medicines cost money. They are a hassle to take. And they carry the risk of side effects. These burdens are only justified if the medication brings enough benefit – enough absolute benefit – to outweigh the accompanying harms.
Prestigious medical journals should do a better job of communicating the risks and benefits of medical interventions to their readership.
Peter Ubel is a physician and behavioral scientist who blogs at his self-titled site, Peter Ubel and can be reached on Twitter @PeterUbel. He is the author of Critical Decisions: How You and Your Doctor Can Make the Right Medical Choices Together. This article originally appeared in Forbes.