Quality care, quality metrics
I’ve been working a lot on the concept of quality care lately for a major project at my foundation. As I’ve written previously, in the U.S. care quality is generally measured by the services delivered to patients (tests, evaluations, therapies), not by the outcomes achieved. I’d like to revisit that topic and think about why this is wrong and, in my next post, why it is inevitable.
Why measuring care quality through care delivered is wrong
I’ve worked in orthopedics and neurology, and in both fields I’ve seen the difference that experience makes. Atul Gawande has written about how different centers, doing the same things, achieve different results: in cystic fibrosis care, centers follow the same practices, yet some execute them far better than others. I’ve seen the same variation in orthopedics and neurology. Gerry O’Connor of Dartmouth wrote the seminal work on variation in outcomes with the Northern New England Cardiovascular Study Group, showing that expert surgeons’ in-hospital mortality ranged from 1.9% to 9.2% across a similar case mix. The finding was stunning, particularly because publishing physician-versus-physician outcomes is considered taboo. O’Connor went on to guide the cystic fibrosis work Gawande cites, as well as my own registry project.
All this work makes clear that knowing what a physician does is very different from knowing what outcomes he or she achieves. The only thing we gain by measuring quality through services is a system in which the art of care is denigrated and patients may believe they are receiving great care when they are not. Take, for example, Parkinson’s disease. Referral to physical therapy is considered a clinical best practice. The Berlin BIG study, however, showed that not all physical therapy is the same: as work funded in part by my foundation demonstrated, physical therapists who know PD achieve better results than those who don’t. Or take post-traumatic stress disorder, an area where I have done some work: screening for PTSD has been shown to increase the incidence of PTSD. Clinical judgment, such as knowing when to evaluate a patient for PTSD and when not to, is itself a driver of outcomes.
While some say that leaders can be made, no one would claim that there is an algorithm for leadership. Handling the unexpected is a key challenge for leaders, and it is for physicians, too. The work that went into developing Watson, the computer that plays Jeopardy, showed how difficult it is to create algorithms that respond to the vagaries of human communication. For all that effort, Watson still failed to recognize that Toronto is not a U.S. city. Make no mistake: medicine, like Jeopardy, is an art. Parkinson’s disease, a brain disorder characterized by tremor, can present as shoulder pain, a presentation no algorithm anticipates.
What should we learn from all of this? To get the best health outcomes, go to a physician with a history of achieving the best outcomes. As Gawande illustrates, we gain only marginal improvements by observing the physicians who achieve the best outcomes and trying to build an algorithm around their process. We should instead focus our efforts on creating experts, and on processes that scale their ability to deliver care, such as a sophisticated triage process. The UK’s NICE guidelines for Parkinson’s disease try to deliver this. The hardest part of Parkinson’s disease is diagnosis; the second hardest is titrating medications. The NICE guidelines address the first by requiring that every patient with suspected Parkinson’s disease be referred, untreated, to a Parkinson’s specialist. They address the second by supporting the establishment of a new type of provider, the Parkinson’s nurse, who specializes in PD medications. Time will tell whether this works, but it is a step in the right direction, and much better than the prevailing model: manage in primary care (often by algorithm) until primary care fails, then escalate. The NICE model keeps the patient in primary care but layers in specialists to handle the hard parts of management.
In manufacturing, quality improvement isn’t science, but it is based on science: it is about watching for bad outcomes and identifying where, more than why, they occur. Extended to medicine, quality improvement doesn’t have to explain why, for example, a generalist orthopedic surgeon’s total hip replacement outcomes are worse than a hip specialist’s. It only has to recognize that they are, and direct hip replacements to the specialist. We should be doing more to adopt this approach to quality.