Earlier this month, Medicare announced that it is revising the 5-star rating system currently used to measure nursing home performance. The ratings are available on the Nursing Home Compare website, which allows consumers to learn how a facility they might be considering stacks up relative to others. The problem, and the reason the system is being revised (with funding from the IMPACT Act, legislation passed in September), is that it's not at all clear that the vaunted rating system measures anything meaningful.
An investigative piece in the New York Times this past summer dramatically demonstrated the gap between reality and the ratings. The reporter visited the Rosewood Post-Acute Rehab outside Sacramento, California, an attractive facility that had garnered the much-coveted 5-star rating from Medicare. But it turned out that the ratings focused entirely on annual health inspections and on two measures reported by the nursing home itself: staffing ratios and a quality-of-care index. The rating left out data from state authorities, even though it is those authorities that supervise nursing homes. In Rosewood's case, the state of California had fined the facility $100,000 in 2013 for a patient death attributed to inadequate medication monitoring. California had also received 102 patient and family complaints about the facility between 2009 and 2013, well above the state average. And the facility had been the subject of a dozen lawsuits alleging substandard care. The revised rating system, by drawing on external audits of nursing home quality and electronically submitted staffing data, as well as by incorporating some new measures such as the proportion of residents taking antipsychotic medications, is meant to overcome the shortcomings of the current approach. But will it?
Nursing Home Compare is not the only attempt to come up with a single, composite rating for medical facilities, and nursing homes are not the only medical institutions to be graded in this way. Hospitals are also rated, and multiple organizations offer assessments. I recently stumbled on a fascinating case: in June of 2012, the Leapfrog Group, a non-profit organization devoted to measuring and improving safety in medicine, came out with its first hospital ratings. It awarded an A or B to 56% of the hospitals surveyed, a C to 38%, and grades below C to 6%. The UCLA Medical Center was given an F. At the very same time, US News and World Report came out with its annual hospital rankings, and in that report the UCLA Medical Center was ranked #5 in the country. How can the same hospital get an "F" from one rater and what amounts to an "A+" from another? And if you think that maybe UCLA is an unusual case, it's not. Consumer Reports, which also got into the hospital rating business, ranked Massachusetts General Hospital (MGH) below average in the same year (2012) that US News ranked it #1 in the country.
The answer to why different raters get different results is that the grade depends on the methodology used to compute it. Leapfrog assesses hospitals based entirely on performance in the realm of safety and does not adjust for severity of illness. Consumer Reports uses a complicated mixture of a safety score, a patient outcomes score, a patient experience score, a hospital procedures score, and a rating of heart surgery, with several factors going into each of the subscores. US News and World Report looks at outcomes (by which it means mainly mortality), process (which is largely nursing staffing levels), and other factors (a big part of which is reputation). US News also rates hospital departments (neurology, cardiology, oncology, etc.). I was particularly amused a number of years ago when US News ranked the geriatrics department of one of the Boston teaching hospitals among the top 10 in the country. It so happened that the hospital didn't have a geriatrics department.
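To see concretely how the methodology drives the grade, consider a small sketch in Python. The hospitals, subscores, and weights below are all invented for illustration; they do not reflect any actual rater's formula. The point is simply that a composite grade is a weighted sum of subscores, so changing the weights can move the same institution from the bottom of one ranking to the top of another.

```python
# Toy example: the same three hospitals, ranked by three hypothetical "raters"
# that differ only in how they weight the same subscores. All numbers invented.

# Hypothetical subscores on a 0-100 scale.
hospitals = {
    "Hospital A": {"safety": 55, "outcomes": 95, "reputation": 98},
    "Hospital B": {"safety": 90, "outcomes": 70, "reputation": 40},
    "Hospital C": {"safety": 75, "outcomes": 75, "reputation": 60},
}

# Each made-up rater is just a different weighting of the same subscores.
raters = {
    "safety-only rater":      {"safety": 1.0, "outcomes": 0.0, "reputation": 0.0},
    "outcomes-heavy rater":   {"safety": 0.2, "outcomes": 0.6, "reputation": 0.2},
    "reputation-heavy rater": {"safety": 0.1, "outcomes": 0.4, "reputation": 0.5},
}

for rater, weights in raters.items():
    # Composite grade = weighted sum of the subscores.
    composite = {
        name: sum(weights[k] * scores[k] for k in weights)
        for name, scores in hospitals.items()
    }
    ranking = sorted(composite, key=composite.get, reverse=True)
    print(f"{rater}: {' > '.join(ranking)}")
```

Run it and Hospital A, strong on reputation and outcomes but weak on safety, comes in last for the safety-only rater and first for the other two: the same UCLA-style whiplash, produced by nothing more than a change in weights.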
Americans like
report cards. We rank toasters and
washing machines and cars. We rate hotels and restaurants and auto mechanics.
We have institutions devoted to product evaluation (think Consumer Reports) and
thanks to the Internet, we now have a slew of informal, popular evaluations
(think Yelp or TripAdvisor). I admit I find these reports very useful: when I
was looking for a good bed and breakfast recently, I found it helpful to learn
that 50 people gave one particular inn 5 stars.
I could also read the individual comments to get a sense of whether the aspects
of the inn that other travelers liked were of any particular concern to me. But can we really come up with a report card
for a hospital or a nursing home? Can we really reduce performance to a single
grade?
Nursing homes and hospitals will inevitably game the system, just as colleges did when US News and World Report used the ratio of applications to offers of admission as a measure of selectivity. Colleges instructed their admissions officers to travel around the country encouraging students to apply, even if those students couldn't possibly be accepted, because the more students applied, the more "selective" the college became. Some colleges created a huge waiting list and admitted many of their freshman class from the wait list, but counted only the initial acceptance letters in the computation of "offers of admission."
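The arithmetic behind this tactic is worth spelling out. Here is a toy sketch, with invented numbers rather than any actual college's figures: because the selectivity metric is simply applications divided by offers of admission, a college can double its apparent selectivity just by soliciting applications it intends to reject.

```python
# Toy illustration (invented numbers) of gaming a selectivity metric defined as
# applications per offer of admission: soliciting more applications while making
# the same number of offers makes the college look more "selective".

def selectivity(applications: int, offers: int) -> float:
    """Applications per offer of admission; higher looks more 'selective'."""
    return applications / offers

# Before the recruiting campaign: 10,000 applicants, 2,000 offers.
print(selectivity(10_000, 2_000))  # 5.0 applications per offer

# After admissions officers drum up applications from students who could
# never be accepted: 20,000 applicants, still 2,000 offers.
print(selectivity(20_000, 2_000))  # 10.0 -- twice as "selective"
```

Nothing about the college itself has changed; the metric rewards the recruiting campaign, not the education.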
Some students and families have caught on, and the media have started to downplay the annual US News numbers; for the past couple of years, the college rankings haven't been front-page news in the New York Times when the new ones are released.
But colleges continue to regard the rankings as important and use them in
marketing. Similarly, I’ve noticed big banners in the vicinity of some of
Boston’s hospitals proclaiming their latest ranking. And I learned from a
terrific piece of investigative reporting produced by Kaiser Health News
in collaboration
with the Philadelphia Inquirer that the hospitals pay to advertise their
rankings. US News, Leapfrog, and another rating organization, Healthgrades,
charge licensing fees to hospitals for the privilege of trumpeting their
“achievement.” These fees are not peanuts: Healthgrades charges $145,000, US
News charges $50,000 and Leapfrog charges $12,500.
There are now so many rating agencies, using very different rating scales and arriving at widely discrepant results, that there is even an organization, the Informed Patient Institute, that grades the raters. But the truth is that it is impossible to distill the performance of a complex institution such as a hospital or a nursing home to a single measure. Such efforts will inevitably hide the very real variability in performance, which depends on exactly what is measured. What you need to know depends on why you need to know it. Are you an insurance company, deciding whether or how much to reimburse a facility for a particular service? Are you a patient choosing a hospital? (Actually, you probably won't have much say in the matter: in an emergency, you will be taken to the nearest facility, and in other, somewhat less urgent situations, where you go is typically determined by who your doctor is.) Are you a patient or family member choosing a nursing home for long-term care (you may have a fair amount of choice)? For short-term rehab (you will have less choice in the matter)?
So will the revised ratings of nursing homes (coming in January 2015) make the grades meaningful? Probably not. Requiring nursing homes to report data on staffing electronically will likely improve the accuracy of their reporting, but is the degree of improvement worth the millions of dollars that will be spent on it? Including the rate of antipsychotic medication prescribing as a quality indicator might tell us something about whether nursing homes are unnecessarily and inappropriately sedating their residents, assuming the measure adjusts for the rate of serious psychiatric illness in the facility. The bottom line is that a single grade cannot capture all the features of a medical facility's performance that are relevant to all the different individuals and groups for whom the ratings are intended. It's time to abandon composite ratings.