In hospitals today, administrators are eager to impose guidelines and metrics onto any facet of patient care they can measure. The sentiment that business people and clinical people are at odds flares with each new initiative. Clinicians feel ignored. Patients complain about how little attention they receive. Administrators try to distill complex pieces of information into smooth line items. We all spend as much time with computers as we do with one another.
Systemic failures, in the form of pressures on the wrong groups, abound in U.S. health care. Measurement and standardization are not problems; they’re tools, and tools are sometimes misused. Under logical implementation, data analysis is a powerful force for improvement. But misunderstanding data and the stories they tell can do more harm than good.
Because of the money involved — nearly 20 percent of GDP — we’re experiencing a greater focus than ever on cost and quality reporting. And because of the enormous complexity of health care processes, not just of each care episode but of the indirect and often delayed value each intervention adds to a person’s health, we haven’t yet demonstrated that process measures paint clear pictures of quality or value.
Because measures of process are difficult to understand and communicate, we focus instead on outcomes. If you’re a salesperson, you’re paid based on concrete, immediate information: the number of sales you’ve made. If you’re a surgeon doing elective joint replacements, insurers’ reimbursement for your work is at least a reasonable proxy for your value to a hospital, if not a perfect one. But if you’re out in the community building lifestyle modification programs, you’re going to have a hard time putting a dollar value on your work.
Think about that for a moment. A tool that exists only because it’s supposed to make us more effective — data analysis — is now the very barrier to a solution health care badly needs. We all know about the massive chronic disease burden of our country’s population, and we all have a general understanding that prevention is a cost-effective strategy for reducing that burden. But we can’t capture the value of prevention strategies in spreadsheet cells like we can the cost of a procedure, so prevention is relegated to a lower tier of importance. Soft numbers aren’t less important than hard ones; they’re just more difficult to understand, which is why only five percent of health care dollars are spent on prevention initiatives.
There’s no arguing that outcomes are important, but it’s crucial that we spend more time understanding them than we do collecting them. Analyzing and reporting quality in health care is as convoluted a process as there is. We don’t require or expect staff at every level to understand the process, so we make efforts to streamline data collection and reporting. Focus is redirected toward specific sets of results: hospital discharges and readmissions, avoidable complications, patient satisfaction surveys. The problem with this strategy is that once providers and administrators know which results will be studied, even if they’re good and altruistic workers, they face a constant pressure to produce better data for those outcomes. If staff were already delivering good care, allowing survey results to influence clinical and organizational decisions not only makes those results less meaningful but also creates an artificial impediment to best practice.
This phenomenon exists in many industries. In education, it’s “teaching to the test.” When state governments began to calculate school budgets based on standardized test scores and penalize schools that didn’t improve year over year, administrators altered curricula to emphasize standardized test practice at the expense of teaching students to think and reason. The purpose of standardized testing was to estimate quality of education and teacher performance. Its unintended consequence was to force teachers to spend time on test strategies at the cost of actual teaching. A standardized outcome measure was implemented to assess education quality, and it paradoxically ended up damaging the education process.
Almost any outcome can be manipulated, another reason we should shift focus to good processes rather than good results. If a hospital can reduce length of stay without compromising any other outcome – like readmission – that’s as close to an objective improvement as outcome measures can capture. If a hospital can reduce length of stay at the expense of greater readmission rates, that’s a shift in one direction, but not a clear improvement.
Suppose that Medicare continues to examine both of those metrics, and hospitals begin to discharge more patients to skilled nursing facilities (SNFs) more quickly. If only length of stay and readmission rates are reported, the story those data tell is one of an improving hospital. This is a simplified example, but the point is that outcomes are more malleable than good processes. We can’t grasp causality through outcomes without a good understanding of how we arrive at them. We need to better understand the inner workings of our machine.
We enjoy many advantages in health care. We’re here because we want to be and because of shared belief in the work that we’re doing. Our systems do our patients a disservice if they continue to emphasize numbers on paper at the very real expense of care quality.
We have options going forward, and in the face of inevitable change, the time is right to examine where we’re headed. We could continue on our trajectory of forcing compliance and building results that look good in spreadsheets without telling the whole story. Or we could change course and start to build strong organizational cultures of commitment to good work. But if we ignore the lessons of the education system, we may be repeating its mistakes.
John Corsino is a physical therapist.