This past November, the New England Journal of Medicine published results from the “Advancing Quality” program in the United Kingdom: hospitals in northwest England were paid up to 4% more based on quality scores for treating several common medical conditions. Patient outcomes were compared with those at other National Health Service hospitals not eligible for the bonuses, and with outcomes at the same hospitals for conditions the program did not explicitly measure.
The results were striking: patients with conditions for which hospitals could earn financial bonuses were 6% less likely to die after the program was implemented. This was roughly equivalent to preventing 890 deaths in the 24 hospitals that entered the program.
The UK study stands in stark contrast to a similar trial performed in the U.S., which failed to find any measurable change in patient death rates after hospitals were paid bonuses for higher quality. In that trial, the Premier Hospital Quality Improvement Demonstration, hospitals were similarly required to report quality measures, and top-performing hospitals received bonuses.
In the US study, measures of the process of care, such as the number of heart attack patients who got an aspirin, improved. But outcomes, whether patients lived or died, did not.
Why should doctors and patients care that some pay-for-performance initiatives work in big government-funded studies while others don’t?
The reason is that American medicine, and if you’re a doctor that means your practice, is undergoing a fundamental change in which these “pay-for-performance” programs will soon become the norm. Currently, only a handful of measures are actually tied to pay: since last October, hospitals have been getting dinged for higher-than-expected readmission rates.
But in the very near future, these programs will only spread; increasingly, doctors and hospitals will get paid based on an ever-expanding list of quality and outcome measures.
The fundamental assumption behind pay-for-performance is that doctors and hospitals, given sufficient economic enticement, will put in the extra effort to improve care and quality, and that ultimately our patients will have a better chance of getting out of the hospital alive. Since pay-for-performance is probably here to stay, the key question we should all be asking is: What can we in the U.S., with our failed pay-for-performance model, learn from the successful one in the UK?
To answer this question, we have to look closely at what actually happened during the UK and US trials. In both countries, process measures of quality, the specific actions taken by the healthcare team, improved after the program started. Since process improved in both trials yet death rates fell only in the UK, one conclusion could be that improving processes alone is not enough to reduce death rates, and may even be unrelated to them.
But the key to understanding why deaths declined in the UK and not in the US may lie in pinpointing the differences between the trials. They were similar, but not exactly the same. The financial bonuses were larger in the UK, and hospitals there invested the additional income in a broad variety of quality improvement strategies, such as specialty nurses and new data collection systems that gave clinicians regular feedback on their performance. UK staff also met face-to-face regularly to share lessons and improve continuously.
In fact, the biggest focus was ensuring that providers adhered to pneumonia protocols, one of the pay-for-performance conditions, and this was where the greatest improvement in survival was seen. Little is written about what the U.S. hospitals actually did to improve performance, but according to the New England Journal of Medicine paper, quality improvement training consisted of a series of online “webinars.” Finally, the UK program applied to all patients, whereas the US program applied only to Medicare patients.
The divergent results should also be viewed against a history of mixed findings in studies of pay-for-performance: some programs, such as the UK model, have shown a modest effect on outcomes, while most have fallen flat.
The problem is that many pay-for-performance programs don’t address the underlying motivations of healthcare providers. Many of the reasons that doctors and nurses deliver high quality care (or don’t) have little to do with money. These motivations include the desire to help others, to do a good job at work, to impress and compete with peers, to avoid lawsuits, and the internal drive that got us through years of studying, training, and long hours.
Perhaps what differentiated the UK model from the US one is that it tapped into some of these more powerful motivations. Maybe it made quality improvement not just an extra checkbox for an administrator, but a change in culture that turned British providers into active participants in quality. Culture change requires leadership, provider engagement, and sufficient incentive.
Just having the incentive, the pay-for-performance bonus, without a system response that addresses these fundamental motivations is doomed to failure. (Note also that the pay-for-performance money is a potential bonus for the hospital, not the individual provider.) In the UK, implementing change may also be a little easier because everyone works for the NHS; in the US, many of the providers staffing hospitals don’t even work directly for the hospital.
But regardless of how providers are paid, the real promise of pay-for-performance, we believe, lies in the intense focus it could bring to changing the culture around quality improvement. That focus will hopefully improve the clinical decisions of providers who are already motivated to deliver excellent care, and ultimately make medical care safer and perhaps even cheaper.
Jason H. Wasfy is a cardiologist. Jesse Pines is an emergency physician and health services researcher.