We often see media headlines or articles about scientific and medical advancements promising imminent treatments or cures. I am not referring to the advertisements for naturopathic, homeopathic, or other unproven remedies that frequently masquerade as news stories, but rather to reports from supposedly reliable sources, such as CBS, Yahoo, or the Chicago Tribune.
These pieces are often composed by reporters or news writers who lack training in science or medical reporting. They frequently make inappropriate assumptions and interpretations about what they read and, at times, copy press releases verbatim.
Can university or corporate press releases be considered dependable? They, too, are composed by non-specialist public relations writers whose goal is to place their organization and its researchers in a positive light, increasing their visibility and (hopefully) funding. The writers sometimes submit their compositions to the researchers they cite for accuracy review, which, ideally, should temper unsupported or grandiose statements. However, administrators pressure researchers to exaggerate, at least a little, for the same reasons as the writers, and at times strongly hint that promotions, benefits, and other recompense may be limited if they do not.
There are “red flags” to look for in an article that indicate it, or the original research, may be questionable. Below are key points to consider when evaluating a media report or an original paper. Not all of them will appear in any one piece, but even one or two should raise suspicion.
For example, a study may involve too few participants, which limits the ability to draw meaningful results and apply them to a larger population. About nine months ago, I reviewed a study with only ten subjects; nonetheless, the author attempted both to draw conclusions and to generalize them. There is no fixed rule as to what constitutes too few subjects, as it depends on what is being assessed. A drug study should include a minimum of several hundred participants; if a specific form of surgery is being examined, however, the number is likely to be in the twenties or thirties.
A related question is whether the participants represent the population of concern. If the study group is eighty percent female, applying the findings to men would be dubious. The same applies to factors such as ethnicity, education, age, and income. In the study noted above, the majority of subjects were university-trained, yet the author attempted to apply the findings to those with a high school diploma or less education.
A significant problem is the lack of an adequate control method or group. Without one, the effectiveness of whatever is being examined cannot be meaningfully determined. An uncontrolled study may suggest that a particular drug works, but it cannot determine whether the drug is more or less effective than a different medication or no treatment at all. If a study examines a past event, controls will likely not exist; such studies only hint at possibilities that require further examination.
Another difficulty is that non-science-trained writers incorrectly present correlations as causation. A correlation or association demonstrates that two phenomena occur in temporal proximity to each other, but it does not indicate that one causes the other. For example, we often hear that exposure to certain chemicals in childhood results in adult cancers. However, too many other potential causes exist within the intervening years to make this a certainty; the exposure can only be said to be associated or correlated with the disease. Some pharmaceutical and product liability lawsuits are based on correlational findings, with plaintiffs’ attorneys assuming, often correctly, that jurors will infer causation despite simplified explanations by defense experts.
These other potential causes are known as confounders, and a researcher must account for them. This was a problem, for example, in a recent study that concluded that abuse in childhood manifests as psychiatric illness in adulthood. The researchers failed to take into account or control for other potentially traumatic events that may have occurred in the decades between the abuse and the adult illness, such as vehicle accidents, house fires, crime victimization, and domestic abuse. Without doing so, their conclusion was meaningless.
Additionally, beware of any study in which the subjects and/or the researchers know who is receiving treatment and who is not. In drug research, these are referred to as “open-label” or “non-blinded” studies. If researchers know who is in the experimental group, there is always the potential for conscious or unconscious bias to influence the results. If subjects know they are in the treatment group, they may report or magnify positive effects, believing that is what is wanted. Alternatively, those who know they are in the control group may experience a “placebo” effect and report a change where none should occur.
It is a common misconception among non-science writers that success in a rodent study predicts an effective human treatment. Only about five percent of these successes result in an effective human therapy, and the average time from a rodent study to a marketable human product is seventeen years, sometimes more. Thus, any success reported in these animals is far from an impending human therapy and may never become one at all.
When you see an announcement trumpeting a scientific or medical breakthrough, examine it critically before placing any credence in it. You do not have to be a scientist to do so.
M. Bennet Broner is a medical ethicist.
