A recent commentary in JAMA begins with the sweeping condemnation: “An epidemic of waste blights the US health care delivery system.” Sad as this claim is, it is also encouraging, for who wouldn’t want to get rid of waste? Any attempt at reforming American health care should start by eliminating waste—but to do so, we need to know exactly where to find the waste in the system and how best to dispose of it.
Waste comes in several varieties. A RAND study identifies 3 principal flavors: administrative waste (such as excess expenditures on running a health insurance plan); operational inefficiency (for example, duplication of diagnostic tests); and clinical waste (for instance, spending money on expensive drugs when cheaper ones would be equally effective).
A good starting point in thinking about reform would be to enumerate all the types of each of these forms of waste and to put a dollar amount on each of them. It turns out that the McKinsey Global Institute, the economics research arm of the McKinsey management consulting firm, has done exactly this kind of analysis.
In fact, McKinsey first undertook to identify where the waste is in the US health care system using 2003 data and then repeated the analysis with 2006 data. What’s fascinating about the report is both seeing where the waste is and discovering how much has changed in 3 short years.
The McKinsey report compares US expenditures on health care with those of a group of 13 countries belonging to the Organization for Economic Cooperation and Development (OECD). Reasoning that richer countries are more willing to spend money on health care, the report computes the Estimated Spending According to Wealth (ESAW), a prediction of how much a given country would spend if it were like the OECD average, adjusted for its per capita GDP. In 2006, the US spent nearly $2.1 trillion on health care, or $6,800 per person, eating up 16% of GDP. This was an increase of $363 billion since 2003—and $643 billion more than the average spent in the 13 comparison countries (Austria, Canada, the Czech Republic, Denmark, Finland, France, Germany, Iceland, Poland, Portugal, South Korea, Spain, and Switzerland) after adjusting for per capita wealth (the ESAW).
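To see how a wealth-adjusted benchmark like the ESAW works in practice, consider the sketch below. The country figures and the simple linear fit are illustrative assumptions for demonstration only, not McKinsey’s actual data or methodology.

```python
# Illustrative ESAW-style calculation (hypothetical inputs, not McKinsey's data).
# Idea: fit per-capita health spending against per-capita GDP across peer
# countries, then ask what the US "should" spend at its wealth level.
import numpy as np

# Hypothetical (GDP per capita, health spending per capita) pairs, in USD
peers = np.array([
    [36000, 3600],
    [32000, 3200],
    [28000, 2900],
    [22000, 2100],
    [18000, 1500],
])

# Simple linear fit: spending = slope * gdp + intercept
slope, intercept = np.polyfit(peers[:, 0], peers[:, 1], deg=1)

us_gdp_per_capita = 44000      # rough 2006 figure
us_actual_per_capita = 6800    # per the McKinsey report
us_population = 300e6          # rough 2006 population

esaw_per_capita = slope * us_gdp_per_capita + intercept
excess = (us_actual_per_capita - esaw_per_capita) * us_population

print(f"ESAW per capita: ${esaw_per_capita:,.0f}")
print(f"Estimated excess: ${excess / 1e9:,.0f} billion")
```

With the report’s actual country data, a calculation of roughly this form is what produces the $643 billion excess cited above.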
The excess spending produced no discernible health benefit for Americans. In fact, comparisons of the US health care system with those of its OECD peers typically find that the US performs worse than everyone else. On one report card, where 1 is the best score and 6 is the worst, the US managed to achieve scores of 5 or 6 on 5 measures: quality care, access, efficiency, equity, and healthy lives.
The waste, according to the McKinsey report, falls in 5 areas: outpatient care ($436 billion, or 68% of the excess), drugs ($98 billion, or 15%), administrative costs ($91 billion, or 14%), investments in health ($50 billion, or 7%), and inpatient care ($40 billion, or 6%). (The categories sum to more than 100% of the $643 billion excess because the US spends below its ESAW in a few areas, notably long-term and home care, partially offsetting the overage.)
By outpatient care, the report means visits to physicians’ offices, same-day surgery, dental care, and treatment in ambulatory surgical centers, diagnostic imaging centers, and other outpatient clinics. What’s striking is that this is the fastest growing component of wasteful care, growing at 7.5% each year since 2003. The higher costs in this sector are due principally to two factors: how much physicians are paid in the US and the high profit margins at ambulatory surgery centers and diagnostic imaging centers. Extremely generous physician compensation adds $64 billion of costs to the system each year. This reflects both what we pay specialists and our extravagant use of specialist care: while generalists in the US are paid somewhat better than their counterparts in the comparison countries, specialists are paid far more than in the rest of the developed world. Both ambulatory surgery centers and imaging centers are proliferating rapidly, attracted by operating margins of as much as 25%. The result is that the US has 4 times as many CT scanners and MRI machines as the average OECD country, and does 4 times as many imaging studies each year, again with no measurable benefit in terms of patient outcomes.
Drugs are the second major area of wasted spending (this includes both drugs dispensed to outpatients and drugs administered in hospitals). What’s interesting here is that Americans actually take 10% fewer prescription drugs than the average OECD patient each year. The source of the waste is that drug companies charge, on average, 50% more for brand-name drugs in the US than elsewhere in the world, and Americans use a more expensive mix of drugs (a larger fraction of brand-name or newly released drugs).
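A back-of-the-envelope decomposition shows why drug spending can be higher even though Americans fill fewer prescriptions: the price and mix effects swamp the volume effect. In the sketch below, the volume and price factors come from the figures just cited, while the mix premium is a hypothetical assumption.

```python
# Toy decomposition: relative spending = volume ratio x price ratio x mix ratio.
# The 0.90 and 1.50 factors come from the figures cited above (the markup is
# assumed, for simplicity, to apply across the board); the 1.15 mix premium
# is a hypothetical assumption for illustration.
volume_ratio = 0.90   # Americans take ~10% fewer prescription drugs
price_ratio = 1.50    # brand-name drugs cost ~50% more in the US
mix_ratio = 1.15      # assumed premium from favoring newer, branded drugs

spending_ratio = volume_ratio * price_ratio * mix_ratio
print(f"US drug spending relative to peers: {spending_ratio:.2f}x")  # ~1.55x
```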
Health care administrative costs are next on the list. Most of this relates to operating expenses and profits among private health insurance companies. The American multi-payer system, far from driving down costs through competition, adds costs to the system in the form of marketing, sales, and management overhead. Even Medicare, which until recently had far lower administrative costs than the private sector, has seen its costs rise since 2003 because of the payments it makes to private plans to administer Medicare Advantage and the Part D drug benefit.
Investments in health are also higher than in peer countries after adjusting for per capita wealth; that is, the US spends more on public health and basic research than other countries do. This is one area where the excess, relative to other nations, may well be beneficial rather than wasteful. Even so, whether NIH and state public health departments are spending their money in the most effective way—whether they are getting maximum value for their investment—deserves careful examination.
At the bottom of the list, but still a major source of waste, is inpatient care. What’s fascinating here is that Americans are hospitalized less often, and for shorter stays, than patients anywhere else. The waste stems from the cost per hospital day, which is roughly twice the OECD average. This in turn reflects greater spending on high-tech equipment and subspecialty care.
At least as interesting as the breakdown of the types of waste is the way the distribution has changed in the last few years. The identical analysis by McKinsey using 2003 data found that by far the largest source of waste was inpatient care, accounting for slightly under half of all the waste. Why has the contribution of hospitalization gone from first to last? The mechanism of reimbursement for hospital care by diagnosis-related groups, which gives hospitals a fixed amount of money depending on the reason the patient was admitted rather than a per diem rate, has been in effect for older patients since 1983. What has changed is the availability of a more lucrative alternative—day surgery and treatment in ambulatory surgical centers—for conditions such as hernias and cataracts. The move from one site of care to another dramatically demonstrates the tremendous adaptability of the health care ecosystem.
The implications of this type of analysis for health care reform are profound. If we truly want to decrease wasteful spending, both short-term and long-term interventions will be required. If we want to manage the flow of procedures from the hospital to the outpatient setting wisely, we need to regulate the proliferation of ambulatory surgical centers and diagnostic imaging centers and to control what they charge for their services. If we want to affect the balance of specialty and generalist care, it will not be enough to provide incentives for medical students to go into primary care: we will need to markedly decrease the phenomenal rate of reimbursement for specialists. If we hope to decrease waste in the medication arena, we will need to determine whether new drugs are better than old ones and, if so, how much better. We will need to institute some sort of price control over the pharmaceutical industry, at least by negotiating prices (not currently an option under Medicare Part D). And if we truly want to get rid of wasteful administrative costs, we need to consider a single-payer system, something that is currently not even on the table as Congress debates health care reform. Finally, we need to recognize that some of what other countries deem wasteful, such as high-tech care near the end of life in exchange for a minute chance of life-prolongation, Americans seem to value. If we want to get rid of this type of expenditure, we will need to change the culture that supports this approach, not merely the economic incentives that facilitate it.
June 02, 2009
Is Death Optional?
Just how far attitudes and expectations about aging have changed in the last 60 years hit home for me on reading a 1950 article from the NY Times Magazine called “Recharting Life for an Aging America.” The author, a physician, wrote that “To lead a long and happy life falls, for the average citizen, into the same category of irrational wishes as to be a millionaire or a movie star.” The reality, he said, is that most old people are “lonely, poor, ailing, crippled, ugly, [and] mentally and physically deteriorated.” Today, by contrast, older people take the possibility of ever increasing longevity for granted.
The change in perspective is dramatic and it’s very new: while Americans born in 1950 could expect to live far longer than their grandfathers did, most of the improvement in life expectancy was due to decreases in infant mortality. It was only in 1970—5 years after the introduction of Medicare—that 65-year-olds could look forward to a longer period of retirement than any previous generation. By 2005, white men aged 65 could anticipate another 17.2 years of life and white women 20 years.
Is the result really that Americans today fail to accept that death is inevitable? Or do patients appear to believe that death is optional because physicians seldom discuss life’s final stage and continue to offer treatments, even if they are of little or no benefit?
For all the lip service paid to informed consent and joint physician-patient decision making, older patients seldom understand their likely trajectory with and without a particular treatment. I recently saw a dramatic example of this problem in the course of a palliative care consultation at a major teaching hospital in Boston.
The patient was a man in his late 70s who had been hospitalized with a devastating stroke due to massive bleeding in his brain. He was being kept alive in the ICU with a variety of high-tech interventions. The attending neurologist told the patient’s wife that the likelihood of any recovery was very small but that the full extent of his improvement might not be known for months. The doctor held out no hope of a full recovery and expected that if the patient did survive, he would require total care and would have little if any language capacity.
The patient’s wife didn’t think her husband would have wanted life-prolonging treatment if he would be left with profound limitations on his functioning, but she wasn’t absolutely sure. She wondered if she should authorize further vigorous treatment to “give him a chance.”
What quickly became clear to me was that the wife’s conception of what it would be like for her husband over the next two months if she opted for attempted rehabilitation and life-prolonging treatment bore little relationship to reality. She imagined that “going to rehab” would be as benign as taking a daily vitamin pill. I explained to her that after transfer to a rehab facility, he would likely suffer multiple complications such as pressure ulcers or pneumonia. He would probably be shuttled back and forth between the rehab facility and the hospital—and after all that, he would either die or be left extremely debilitated. Once she understood both what treatment would entail and how unlikely meaningful recovery was, she had no further hesitation: the right course of action for her husband was to focus exclusively on his comfort.
In today’s medical world, this kind of discussion is rare. If Medicare patients are to get appropriate care, and if costs are to be controlled, physicians must have such conversations. But since the focus is on life-prolongation throughout a physician’s training, with little attention to maximizing quality of life or to deciding when to stop, medical education will need to change.
Medicare and Medicaid pay just under $10 billion per year to hospitals in the form of Graduate Medical Education (GME) funds to train residents. But as the Council on Graduate Medical Education observed in a letter to the Secretary of Health and Human Services in May 2009, hospitals are not held accountable for how they spend the money. Their concern is with their own labor needs, not with training the next generation of physicians to manage chronic disease. It is time to monitor and regulate the way the federal government’s money is spent, and to require proficiency in end-of-life discussions along with disease management and care coordination.
Some experts believe that telling patients about the trajectory of illness with different treatment options won’t suffice because patients engage in magical thinking: physicians can lay out the various possible scenarios but patients will gamble that they will be the lucky ones who have the best outcomes. My experience suggests that most patients do respond to realistic discussions about their future, but the way to deal with the minority of patients who might want to try a treatment that has a vanishingly small chance of working is simply not to offer such interventions.
Decisions to take certain kinds of treatment in certain situations off the table should be made at the policy level. This will require holding National Institutes of Health (NIH) consensus conferences to determine a new standard of care for patients in the last phase of life who have chronic conditions such as dementia or heart failure. The Centers for Medicare and Medicaid Services (CMS) will then need to give teeth to the practice guidelines that emerge from such conferences by agreeing to reimburse only treatment that is consistent with those guidelines.
Perhaps the greatest challenge is that policymakers, who will need to endorse the kinds of changes I am suggesting, share the same expectations of ever increasing longevity as other Americans. A good starting point, and the easiest case to make, is therefore to limit treatments that are burdensome, unlikely to be effective, and expensive.
The next step will be to dispassionately analyze interventions that offer only a slight chance of benefit and are expensive, but are not burdensome to patients. Some devices, such as pacemakers, have become increasingly acceptable as they have become smaller and implanting them has become safer and less invasive. Likewise, some cancers have become chronic illnesses because of the development of relatively non-toxic, targeted therapies. Patients naturally want potentially life-extending treatment if it comes in the form of a pill, without the nausea, vomiting, hair loss, and bone marrow suppression associated with conventional chemotherapy.
Ultimately, policymakers will have to take cost-effectiveness into consideration in deciding whether CMS will cover such treatments. Far less politically charged are the steps that should be taken immediately: regulating spending on graduate medical education and limiting reimbursement for treatment that comes at a high price to both patients and society without conferring any appreciable benefit.
A modified version of this piece appeared on the Health Care Cost Monitor, a blog of the Hastings Center.