For 35 years, the researchers behind the Dartmouth Atlas of Health Care have been publishing startling data on regional variation in the amount of money spent on medical care in the U.S. The Atlas has consistently shown (and the newest version, released this month, is no exception) that Medicare spending on chronically ill patients during the last 2 years of life varies enormously across states (Dartmouth Atlas of Health Care 2008, www.dartmouthatlas.org). In recent years, for example, California spent $57,914 per patient in the 2 years before death, compared with $33,864 in Iowa. When the brains behind the Atlas looked at what all the extra money buys in high-expenditure states like California, they found that it’s not spent on effective care (interventions that have been unambiguously shown to be beneficial), and it’s not spent on preference-sensitive care (treatments that some patients select while others choose equally effective alternatives with a different side-effect profile). Rather, it is lavished on supply-sensitive care: services whose supply drives their utilization, without any clear-cut benefit. In states like Massachusetts, with a disproportionately high number of specialists and lots of technology, patients correspondingly see specialists more often and receive more high-tech diagnostic tests.
It’s not that some areas have more specialists and fancy equipment per capita because a higher percentage of their population is sick. When age and gender are taken into consideration, the rates of illness in California and Iowa are remarkably similar. California simply has more medical resources per person, so more resources are devoted to the care of Californians. As a result, chronically ill patients in some parts of the country spent 6 days in the hospital during their last 6 months of life, while patients in other regions spent 22 days.
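To make that adjustment concrete, here is a minimal sketch of direct standardization, shown for age only; gender would be handled the same way. Every rate and population share below is hypothetical, invented purely to illustrate the arithmetic, not taken from the Atlas.

# Direct age standardization (Python sketch). Weighting each
# region's age-specific illness rates by a common standard age mix
# removes differences that are due only to demographics.

# Hypothetical age-specific illness rates, cases per 1,000.
rates = {
    "RegionA": {"65-74": 80, "75-84": 150, "85+": 300},
    "RegionB": {"65-74": 82, "75-84": 148, "85+": 305},
}

# Each region's own age mix; RegionA skews older.
population_mix = {
    "RegionA": {"65-74": 0.40, "75-84": 0.40, "85+": 0.20},
    "RegionB": {"65-74": 0.60, "75-84": 0.30, "85+": 0.10},
}

# A common standard population applied to both regions.
standard_mix = {"65-74": 0.55, "75-84": 0.30, "85+": 0.15}

for region, age_rates in rates.items():
    crude = sum(age_rates[g] * population_mix[region][g] for g in age_rates)
    adjusted = sum(age_rates[g] * standard_mix[g] for g in age_rates)
    print(f"{region}: crude {crude:.0f}/1,000, age-adjusted {adjusted:.0f}/1,000")

# RegionA's crude rate (152) exceeds RegionB's (124) only because
# RegionA is older; the age-adjusted rates (134 vs. 135) are nearly
# identical. That is the sense in which California and Iowa are
# "remarkably similar" once demographics are accounted for.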
The critical question is whether the extra expenditures add any value and, if so, whether that value is worth the additional cost. The creators of the Dartmouth Atlas say there is no added value, since all the patients they studied died, regardless of what was spent on them.
But this analysis looks only at chronically ill patients who died, and then asks what kind of care they received in the 2 years before their deaths. There’s a problem with looking back in this way: 2 years before these patients died, their physicians did not know they were going to die. The real question is whether the chronically ill patients who lived benefited from all the extra medical care they got. To answer it, we need to study a group of chronically ill patients in a high-roller state like New York and a comparable group of the same size in a low-spending state like North Dakota. Some of these people, if followed for the next 2 years, will live and some will die, no matter how much is expended on them. After 2 years have passed, we can determine not only what fraction lived in each of the two states, but also what happened to the ones who lived. Did they live longer in New York than their counterparts in North Dakota? Was their quality of life any better? If neither those who lived nor those who died benefited from all the additional resources devoted to their care, then clearly New York was spending too much. But if some people benefited, even if others did not, then the issue is more complex. It’s complicated further if those who lived benefited but those who died were made worse off by the resources spent on them, undergoing painful procedures and spending a great deal of time in the intensive care unit. In either of the latter cases, we need a way, cost-effectiveness analysis being one example, to decide whether we derive enough value from the added money to make it worthwhile.
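To see what such a decision rule could look like, here is a minimal sketch of the arithmetic behind cost-effectiveness analysis: an incremental cost-effectiveness ratio (ICER), the extra dollars spent per extra unit of health gained. The spending figures are the Atlas numbers quoted above; the quality-adjusted life year (QALY) estimates and the willingness-to-pay threshold are invented assumptions, used only to illustrate the calculation.

# Incremental cost-effectiveness ratio (Python sketch).
def icer(cost_high, cost_low, qaly_high, qaly_low):
    """Extra dollars spent per extra quality-adjusted life year."""
    return (cost_high - cost_low) / (qaly_high - qaly_low)

# Suppose high-spending care costs $57,914 per patient (California's
# Atlas figure) and yields 1.30 QALYs over 2 years, while low-spending
# care costs $33,864 (Iowa's figure) and yields 1.25 QALYs. The QALY
# values are hypothetical.
ratio = icer(57914, 33864, 1.30, 1.25)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # $481,000 per QALY

# Compared against commonly cited willingness-to-pay thresholds of
# roughly $50,000-$100,000 per QALY, a ratio that high would say the
# extra spending is not worth the cost; a much smaller QALY gap, or a
# larger benefit, could flip the answer.

If the high-spending state delivered no extra QALYs at all, the denominator would be zero and the extra spending would be pure waste, which is essentially the Atlas authors’ claim.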
To be fair, the creators of the Dartmouth Atlas recognize perfectly well the desirability of looking forward instead of backward. An important study carried out by this group, based on data from 1993-1995 and published in 2003, did exactly that. Elliott Fisher, David Wennberg, and their colleagues studied groups of patients hospitalized for a hip fracture, colon cancer, or a heart attack, as well as a large representative sample of other Medicare patients. They then asked what happened over the next 5 years: within each group, was there any difference in mortality, in functional status (the ability to care for oneself), or in patient satisfaction, depending on how much money was spent on medical care? They found that people living in the highest-spending regions received 60% more medical care than those in the lowest-spending regions, with no difference in outcomes (Elliott Fisher, David Wennberg, Therese Stukel, et al., “The Implications of Regional Variations in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care,” Annals of Internal Medicine 2003;138:288-298). But that study relies on data that is now 15 years old, and it is just one study. I think it very likely that the regional differences in expenditures uncovered in the 2008 Dartmouth Atlas similarly do not translate into benefit for patients, neither for those who lived nor for those who died. But we could be far more confident in that conclusion if we conducted more studies that looked forward instead of back.
If the conclusions of the Dartmouth Atlas are correct, we need to put a stop to the endless proliferation of subspecialists, of expensive diagnostic equipment such as PET scanners, and of facilities such as outpatient surgical centers. Instead, we should identify which medical interventions truly make a difference and limit the supply of those that do not.