As physicians, we all dread missing a diagnosis: indigestion that turns out to be angina, back pain that signals an aortic aneurysm, migraine that proves to be a brain tumor. Although these figures are only estimates, several studies in the medical literature indicate that misdiagnosis occurs in 15% to 20% of all cases, and in half of these, the patient suffers serious harm.
Researchers have found that the vast majority of misdiagnoses, about 80%, are due to cognitive errors. Can thinking about your thinking help prevent these errors?
In our work as physicians, we are prone to make three major cognitive errors.
Three ‘A’ errors
“Anchoring” occurs when we fix on a particular bit of information or data given to us, and then think in a constrained, linear way. As a result, we may fail to obtain other information about the problem and proceed down only one path of investigation.
A physician can anchor on a specific aspect of the history, a physical finding or a laboratory result. For example, we wrote about a case of anchoring in our November 2008 column, “Anchoring errors ensue when diagnoses get lost in translation,” where a patient’s complaint of gas caused clinicians to initially miss an abdominal aneurysm. Another patient who said he had a lack of “stamina” underwent an extensive and fruitless evaluation involving multiple blood tests and scans until a physician asked what exactly “stamina” meant.
“Availability” is another common thinking trap. Here, we are strongly influenced by dramatic or unusual cases that are prominent in our memory and easily recalled, and thus “available” when we consider a new patient’s problem. As physicians, we are all swayed by the dramatic cases we have seen, and may too quickly conclude that the symptoms or findings before us correspond to those that were present in the prior case.
We saw the power of availability in a case of Crohn’s disease with initial symptoms of weight loss and fatigue. Each specialist who saw the patient immediately considered diagnoses in his or her own particular field, akin to the famous poem about the blind men and the elephant presented in our April 2010 column, “Seeing the whole diagnostic picture.” But availability can sometimes work to the doctor’s advantage, as with a clinician, described in our September 2009 column, “Unmasking the patient’s hidden agenda,” who had recently led a focus group on depression. With that discussion fresh in his mind, he probed a patient’s denial of despair more deeply than usual, and thus averted a potential suicide.
The third major thinking trap is an “attribution error.” This occurs when a physician is overly influenced by a patient’s personal characteristics, particularly those that correspond to social stereotypes. The doctor then fits the history, physical findings and laboratory studies into a preset conception about that person, rather than weighing the information in a dispassionate way.
When we attribute findings to a patient’s social or other characteristics, we fail to consider that this assumption may be misleading. An attribution error was evident in the case of an elderly woman with failure to thrive that was diagnosed as just “old age.” In fact, her poor food intake and weight loss were due to masseter claudication from temporal arteritis, as described in our May 2009 column, “It’s just old age—or is it? Don’t be guided by stereotypes.” Another case of attribution error involved a woman described in our July/August 2010 column, “Attribution error confounds a diagnosis after colon cancer.” The woman was assumed to be nutritionally replete because she was obese, but in fact she was deficient in thiamine.
While attribution errors usually arise from negative stereotypes, there are also positive stereotypes that can mislead us. We devoted our May 2011 column, “Attribution error results from a positive stereotype,” to the case of a man with diabetes who was so deeply educated about his disease that his doctors assumed he knew not to reinject the same body site repeatedly, when in fact he was doing just that; this resulted in lipodystrophy and impaired insulin absorption that presented as apparent insulin resistance. An attribution error was also involved in our October 2010 column, “When patients don’t tell all: The diagnostic challenge,” in assuming that a wealthy businessman in India could not have leprosy as a cause of his neuropathy since this was a disease of the poor.
While these three “A’s” of anchoring, availability and attribution are the most common, there are other cognitive pitfalls.
Confirmation bias
“Confirmation bias” involves ignoring or rationalizing contradictory data to make the pieces of the puzzle fit neatly into the presumed picture. An unusual complaint or laboratory finding is dismissed in our minds as an “outlier” when it should actually raise a red flag, indicating that our presumption may be incorrect. Confirmation bias was prominent in a case of hypothyroidism occurring in a physician, where an elevated creatine phosphokinase level was initially ignored, as shown in our January 2009 column, “Perils of diagnosing the physician-patient.”
Satisfaction of search
During medical school and residency training, many of us learned about Ockham’s razor. This principle, derived from medieval scholars, holds that we should try to find a single unifying explanation for a diverse constellation of clues about a patient’s problem.
While it is valuable to look for a single cause that may explain all of the symptoms, physical findings and laboratory studies, we should keep in mind that in the real world, patients may not adhere to Ockham’s razor. There may be multiple maladies occurring concurrently, and we should not immediately be satisfied in our search when we identify one. This trap of “satisfaction of search” is particularly prominent in radiology, where studies show that once a radiologist has identified an abnormality on an X-ray or scan, his or her mind tends to neglect other findings that might be important and indicate more than a single pathological process at work. In our May 2008 column, “Beware of ‘search satisfaction,’ a common cognitive error,” we wrote about a radiologist who noted a renal tumor on abdominal imaging and then halted his search, so that appendicitis was not detected early. We also discussed the concept in our March 2008 column, “Patient’s doubts about diagnosis prompt a second opinion,” in a case of hypercalcemia in the setting of myeloma where the patient also had primary hyperparathyroidism.
Representativeness error
“Representativeness” or “prototype error” occurs when a case is atypical and therefore eludes physician thinking based on pattern recognition. We illustrated this in a patient with Addison’s disease who lacked the classic findings in our July/August 2011 column, “Priming to diagnose an atypical case, avoid representativeness.” Our January 2010 column, “What to do when one expects everything to fit, but it doesn’t,” discussed how another clinician avoided this pitfall in his diagnosis of an androgen-producing tumor in the ovary that was difficult to detect.
Finally, several cognitive errors may occur at different points along the diagnostic process and reinforce one another, swaying our thinking toward an incorrect diagnosis.
Remedies available
Despite the power of these pitfalls, there are some remedies. We have proposed in prior columns a few simple questions that a clinician can ask himself or herself to protect against falling into one of these thinking traps.
The first is: What else could it be? This allows us to break free if we have anchored, to move away from a dramatic memory in the setting of availability, and to reduce the impact of a stereotype in the setting of attribution error.
A second question is: Does anything not fit? This is a safeguard against confirmation bias, whereby we instruct ourselves to focus on a contradictory or discrepant finding rather than to dismiss it as an outlier and irrelevant.
And last: Could there be more than one process at work? This contradicts Ockham’s razor, and ensures that we are not overly parsimonious in our deliberation. We should always consider whether there may be more than one illness that is contributing to the symptoms and findings in our patients, so we are not too quickly satisfied in our search. Each of these questions helps us to keep an open mind.
Cognitive scientists designate two “systems” of thinking: intuitive and deliberative. Clinicians use both, and each has its advantages as well as its limitations. Intuitive thinking is rapid and efficient but may cause us to miss important clues; this kind of thinking becomes more accurate as we accrue experience. Deliberative thinking is slow but can sometimes help us to see clues that do not register intuitively. Optimally, we merge both systems when we develop a differential diagnosis.
With the time pressure and hectic nature of modern health care, there is, we believe, great value in pausing to reflect on our thinking, particularly when an initial presumption fails to explain the patient’s condition or an empiric therapy does not ameliorate the symptoms. At these times, drawing on both intuitive and deliberative thinking and asking the above questions can be vital in avoiding thinking traps and moving us back onto a better diagnostic path.
Jerome Groopman, a hematologist-oncologist, and Pamela Hartzband, an endocrinologist, are staff physicians at Boston’s Beth Israel Deaconess Medical Center. They are authors of Your Medical Mind: How to Decide What Is Right for You. This article was originally published in ACP Internist.