We’re five years old. For five years we’ve been doing the same thing the same way, and we’ve analyzed 1,488 stories using our ten standardized criteria on HealthNewsReview.org.
We’ve recently returned from the annual Association of Health Care Journalists (AHCJ) conference in Philadelphia, where we led a two-hour workshop, and where we received a tremendous amount of gratifying feedback on our work and many terrific suggestions about new approaches for the future — the next five years perhaps.
In addition to the stories evaluated on HealthNewsReview.org, we’ve commented on many more stories on this blog, but let’s be clear about the difference:
• On HealthNewsReview.org, we only evaluate stories that include claims of efficacy or safety of health care interventions. We apply ten standardized criteria to the review of every story – the kinds of questions we think consumers need answered in stories. And we almost always have three independent reviewers evaluate each story. Our goal is to be as objective and systematic as possible.
• My blog, in the nature of many personal blogs, often reflects my opinions. It is more subjective. It touches on a broader range of topics.
Here’s the scorecard after 5 years and 1,488 stories evaluated:
The following is a handout prepared for the AHCJ conference. The bold headings are our 10 criteria, followed by bullet points of observations from stories we’ve seen over the past five years, pointing to ways such stories could be improved.
Cost
• It’s not good enough to say, “The cost is much lower than the invasive procedures.” What is that cost? How much lower?
• Do insurance policies cover the intervention?
• Might there be costs for copays and other expenses (psychologists, dietitians, etc.) that amount to significant additional expenditure? If you can’t quantify them, you can at least mention them.
• Even in a business story, beyond reporting the net worth of the company and the projected market, there is room to explore the potential cost to the consumer and/or their insurer for a particular treatment or device. This will inform business readers about the potential market for the subject of the story.
• It may be difficult to estimate costs of an experimental approach that is very early in its development. But can you at least cite costs of existing alternatives? Is the new approach comparable to other approaches whose costs you can cite? Our rule of thumb: If it’s not too early to talk about how well something might work, then it’s not too early to start discussing what it may cost.
• A study published in March 2011 reported “A significant portion of people – perhaps as many as one in five – don’t take drugs a doctor has prescribed because they can’t pay for them, according to a new survey of people visiting an emergency room.” This is why cost information is vital. Yet 70% of the nearly 1,500 stories we’ve reviewed have received unsatisfactory grades on the cost criterion.
• See our new online resource with sources of cost information for journalists.
Benefits
• In most (though not all) stories, relative risks don’t tell consumers as meaningful a story as absolute risks do. Using both is rarely a bad bet. (A worked example follows this list.)
• Insufficient to say “significantly increased.” What does that mean? How was it measured?
• Statistical significance may not equal clinical significance. What difference did it make in people’s lives?
• The plural of anecdote is not data. Patient vignettes make engaging reading but they are not data. When a story is top heavy with personal stories, it makes it hard for readers to sustain their critical thinking when (if) they get to information that is quantitative.
• Reporting only surrogate or intermediate endpoints (changes in blood test scores, etc.) may not capture a true benefit; such markers may not influence health outcomes. What difference did the intervention make in people’s lives?
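A brief worked example may help illustrate the relative-vs.-absolute distinction. The numbers here are hypothetical, not drawn from any particular study: suppose a drug lowers the risk of heart attack over five years from 2 in 100 untreated patients to 1 in 100 treated patients. The relative risk reduction is 50%, which sounds dramatic. The absolute risk reduction is just 1 percentage point (from 2% to 1%), which works out to roughly 100 people treated for every one who benefits. A story that reports only the 50% figure leaves readers with an inflated sense of the benefit.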
Harms
• Not good enough to report on “common side effects.” How common? 40-50% of patients? Important to quantify.
• Don’t accept promoters’ claims that “minimally invasive” automatically means safer. Demand the data. Compare with existing alternative options. Seek independent perspectives.
• Failing to account for “minor” side effects that could have a significant impact on a person’s life.
• Screening tests have harms. It is a common journalistic pitfall to overlook this fact. Don’t minimize anxiety from false positives; if you’ve talked with people who’ve faced this issue, you know it’s a big deal. Don’t forget downstream harms: though a blood test itself may have little risk, the downstream consequences that carry real risks are rarely considered. (A worked illustration follows this list.)
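To see why false positives matter, consider some hypothetical round numbers, not drawn from any specific test: a screening test that is 90% sensitive and 90% specific, applied to 1,000 people in whom the condition has a prevalence of 1%. About 10 people have the condition, and 9 of them test positive. Of the 990 who don’t have it, about 99 also test positive. That’s 108 positive results, of which only 9 (roughly 1 in 12) are true. Each of the other 99 faces anxiety, follow-up testing and possibly treatment for a condition they never had.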
Evidence
• Conflating causation and association – failing to explain limitations of observational studies.
• Failing to report on lack of a control group, lack of blinding, etc.
• Failure to emphasize limitations of small, short-term studies. It’s possible to be accurate but totally unhelpful, as in failing to explain that “A study in 25 people” is tiny and why that matters.
• Failure to include study dropout rate. Why did they drop out?
• Presenting findings from an animal or lab experiment without cautioning about the possibly limited applicability for human health.
• Are you reporting on a small group of patients at one medical center with experienced surgeons? How generalizable is this?
• Getting caught up in reporting on the latest study without reporting on larger, better-designed studies that have been done already (one recent story failed to mention that a recent Cochrane review had examined 106 papers on the same topic!)
• What does it mean to say “relatively inexpensive, painless and safe”?
• Not explaining why presentations at scientific conferences may need to be taken with a grain of salt: data presented at a meeting may have undergone only limited peer review.
• Presenting anecdotes as evidence of a treatment’s harms or benefits – rather than as a single illustration of its use.
Disease Mongering
• Question prevalence estimates. Who says x number of people have restless leg syndrome? Social anxiety disorder? Scrutinize spurious statistics.
• Are you going to be caught reporting on a pseudodisease? A pre-disease state that lowers the threshold for what we call disease, opening new markets for people to be treated with drugs or vitamins or whatever?
• Have you reported on how numbers get changed and thresholds are lowered, so that overnight, millions more have diabetes, high cholesterol, high blood pressure and osteoporosis?
• Have you reported on why women are targets of disease-mongering so often – with premenstrual dysphoric disorder, vaginal atrophy, female sexual dysfunction, overactive bladder, menopause all framed as diseases that must be treated? (Actually, men get their share as well – being hammered about low testosterone, balding, etc.)
• Have you exaggerated the human consequences of a condition? Are you using interviews with “worst-case” scenario patients, holding such patients up as examples as if they were representative of all with this condition?
• Have you framed surrogate markers or intermediate endpoints (test scores, blood values, etc.) as if they were diseases?
Alternatives
• Failing to discuss the harms/benefits of a new idea compared with harms/benefits of existing approaches.
• Failing to discuss how the new approach fits into the realm of existing approaches.
• Try to provide some sense of the inevitably larger evidence base for existing approaches than for the new approach.
• A story should not focus on a surgical approach while never mentioning nonsurgical options or prevention.
• Stories should always consider the option of doing nothing – of “watchful waiting” or “active surveillance” only.
• Stories about screening tests should mention other screening options, including the option of not being screened.
Novelty
• By focusing on one new idea, it may appear that it’s the only thing being researched in the field. Even a line about other approaches/research would help.
• It is incredibly rare that a study is truly novel, appearing out of nowhere. Where the new report comes from is part of the context that is often missing: the big picture. This is hard to know unless the reporter finds a truly independent expert.
• Don’t forget about clinicaltrials.gov as a click away source of information about other studies that are underway about a specific treatment or for a particular condition. It can be useful for providing context about something seemingly innovative.
• Another resource for assessing novelty is PubMed. You can put in a key word or two and pretty quickly establish whether something is truly unique and, if not, how long it’s been around and studied.
Availability
• One recent national story mentioned only that one doctor had done 6 robotic procedures. Was she the only one doing them? If not, where else and how often?
• With devices/procedures, did you consider the availability of trained personnel to deliver the approach? The learning curve? These are important issues that may severely limit availability/adoption, and you can address them in just a few more words.
• Allowing researchers to claim that something “may soon be” available after a study in just a few people is not wise journalism. The path to commercialization is likely years in the making. Especially when a researcher or a company is more specific about a prediction/projection of something being on the market in x years, what is the basis for that prediction? There needs to be some sense of history – of the thousands of previous flashes in the pan in medical research that were equally exciting but never panned out. Don’t treat FDA approval as a fait accompli.
Sources
• Don’t merely consult sources who have a dog in the hunt.
• Single source stories in health/medical news stories are journalistic malpractice. There’s a conflict of interest around every corner in health care and you need to be aware of that and seek independent perspectives. See “List of Independent Experts” on HealthNewsReview.org site (in Toolkit at lower right of home page.)
• Avoid vague attributions such as “Experts believe…” or “Doctors think…” Who? How many? All of them? Are you adding a false air of authority by being vague?
• Failing to make obvious the extent to which a source is likely to be conflicted (e.g., “a PR consultant working for the company said…” or “Dr. Smith, who received a company grant to perform the study, said…”).
News release
• News releases can be valid sources of some information. But journalism is charged with independently vetting claims. So it is unacceptable to rely on a news release as the sole source of information.
Gary Schwitzer has specialized in health care journalism in his more than 30-year career in radio, television, interactive multimedia and the Internet. He is publisher of HealthNewsReview.org.