While casting about for something to discuss in my blog, I stumbled on a short article that advocates renaming the “death panel” the “good planning panel.” The authors point out that family meetings in which physicians, patients, and their loved ones talk about future medical care are generally well received. Moreover, this kind of advance care planning prevents depression and anxiety in both patients and their families, and when patients have these conversations, they typically end up undergoing fewer invasive procedures in their final weeks of life, procedures that most patients say they don’t want. Allowing Medicare reimbursement for such meetings would be a very positive step toward improving care for patients with advanced illness. Whether calling it a “good planning panel” would transform the way people think about these kinds of discussions, in light of the lingering association with the “death panels” born of the right-wing media’s imagination, is another matter. Moreover, “panel” is a poor choice of word, evoking the image of a jury delivering a verdict. But it led me to think about the power of words and the role of euphemisms in medicine.
When the Center to Advance Palliative Care commissioned a market survey a couple of years ago, it learned that most people either had no idea what the term “palliative care” meant or assumed, incorrectly, that it was the same as “hospice,” which they in turn associated with imminent death. (Palliative care is an approach to care for anyone with advanced illness: it neither assumes the patient is close to death nor limits treatment in any way, but rather provides treatment focused on improving quality of life; palliative care can be given alongside life-prolonging medical therapy.) When members of the public were asked whether they were interested in having “an additional layer of support” from their health care team, which is how palliative care was described to them, they were uniformly enthusiastic. Similarly, many physicians were reluctant to broach the topic of “palliative care” with their patients because they thought it would be too frightening; they preferred to offer “supportive care.” So is “supportive care” a more useful name because patients understand that term correctly, or is it a misleading euphemism, designed to make patients think it is something that it isn’t?
And what about the evolution of the “DNR” (do-not-resuscitate) order? Some years back, the phrase “DNAR” (do not attempt resuscitation) was introduced. Since I’m someone who likes to tell things as they are, I favored that substitution. After all, the implication of DNR seemed to be that if only the physician performed CPR, the patient would be perfectly fine. Usually, the reality is quite different: whether or not CPR is performed, the patient with advanced illness whose heart stops beating will almost certainly die. More recently still, some physicians have replaced “DNAR” with “AND,” which stands for “Allow Natural Death.” Instead of focusing on whether a particular technological procedure (CPR) will or will not be attempted, this formulation seeks to tell patients that what is at stake is having a “natural” experience. “Natural,” like “organic,” conjures up something good, unlike, presumably, whatever is unnatural or inorganic. “Allow Natural Death” also adds the word “allow” to imply that if you don’t opt for this course, that is, if you choose CPR, you will be obstructing or preventing something natural from occurring. Never mind that this is precisely the point: what is “natural” in this instance is to die, and CPR is intended to prevent that most unfortunate reality, just as taking insulin to treat diabetes or having bypass surgery to alleviate the symptoms of heart disease are very unnatural but often extremely desirable medical interventions.
So are these verbal permutations a good thing, or are they a kind of sleight-of-mouth, designed to deceive and manipulate? What if the original term (DNR or palliative care, for example) evokes such disgust that patients immediately reject it, whereas the new term (AND or supportive care) has far more positive resonance? I used to buy the bioethics argument that truth-telling is one of the cardinal virtues and a key ingredient of moral medical practice; that failing to tell the patient his diagnosis or his prognosis engenders fear and distrust, not to mention that it is profoundly disrespectful of a person’s autonomy, his individuality, his “right” to know about his own body and his own future. But I’ve been reading some behavioral psychology lately, and I’m not so sure that people make decisions by calmly and systematically weighing the pros and cons of the various alternatives; instead, they seem to rely heavily on their intuitions. What this perspective suggests is that there is no truly neutral way to present information, that words are powerful (though sometimes images are even more powerful), and that the best we can do is to avoid deliberately misleading patients.
So both “death panels” and “good planning panels” are out because they are not panels and they are not about death; “advance care planning discussions” are more accurate. “DNR” and “AND” are out because they mislead; DNAR is more objectively correct, though it may well have positive associations for some patients and negative associations for others. And I’ll stick with calling what I do providing “palliative care” rather than “supportive care,” though I’m quite willing to define palliative care—if I’m asked—as providing support to patients and families through symptom management, psychosocial support, and advance care planning.
Twenty-five years ago, discussions of medical futility were all the rage in bioethics circles. The discussions petered out when it became clear that futility was in the eye of the beholder: physicians and patients often had very different ideas about what futility meant, depending on what they hoped medical treatment would accomplish.
In one case that generated considerable publicity, physicians sought to turn off the ventilator that was keeping 86-year-old Helga Wanglie alive. They argued that the ventilator was futile treatment since it would never allow Mrs. Wanglie, who was in a persistent vegetative state, to regain consciousness. Mrs. Wanglie’s husband, however, argued that keeping his wife alive (supplying the oxygen that her heart needed to keep on beating) was the goal of treatment. And by that standard, the ventilator was performing admirably. The court to which the physicians presented their case did not address whether the treatment was futile; it merely ruled that Mr. Wanglie was the rightful spokesperson for his wife and that his wishes should be followed.
A second problem with futility is that it is a good deal easier to identify after the fact (the patient died, ergo the treatment didn’t work) than in advance. Because futility was proving elusive, medical ethicists stopped talking so much about it and focused instead on ascertaining the patient’s goals of care. The prevailing wisdom came to be that doctors should provide any treatment that was consistent with those goals. Ethics consultations were used to mediate disputes between families and physicians over whether particular treatments could achieve the desired goals. But physicians continued to be bothered by the nagging feeling that at least some of the treatments they provided were morally wrong: they caused needless suffering as well as outrageous costs without much, if any, benefit. A new study just out puts the futility debate back on the table.
The authors of the study convened a focus group of 13 doctors who work in intensive care units, the site of 20% of all deaths in America, and asked them to agree on a definition of futility. They came up with four reasons for judging a treatment futile: the patient was imminently dying, the patient would not be able to survive outside an ICU, the burdens of treatment greatly exceeded the benefits, or the treatment could not possibly achieve the patient’s explicit goals. They then asked physicians at a large medical center in Los Angeles to evaluate each of their ICU patients every day and indicate, using these four criteria, whether the care they were providing was futile. In one fell swoop, the authors got rid of the two problems with previous futility studies. Sort of. They used a prospective design, asking for evaluations in real time, not after the fact. And they defined futile care, albeit by unilateral decree.
Over a 3-month period, the investigators collected data on 1125 patients cared for in one of 5 different ICUs by a total of 36 critical care doctors. They found that 123 patients (11%) were perceived by their physicians to be getting futile treatment at some point during their ICU stay. Another 98 patients (8.6%) got “probably futile treatment.”
What characterized the 123 patients whose doctors were convinced they were getting futile care? Their median age was 67 and 42% were on Medicare. They tended to be older and sicker than the rest of the group. The majority (68%) died before hospital discharge; another 16% died within 6 months; almost all the remainder were transferred to a long-term care facility, dependent on chronic life support. The total cost of futile hospital care for these 123 patients was $2.6 million.
In light of these results, it may be time for critical care specialists to convene a consensus conference to see if they can agree on the criteria for futility. Agreement by the majority of doctors who care for ICU patients would carry far more weight than the opinions of the 13-physician focus group that formed the basis of the current study. If a majority of the nation’s critical care experts came up with criteria for futility, whether the same ones used in this study or some modification, then Medicare would be in a good position to decide to pay only for clinical care that met the newly defined standard of care.
Medicare would not be dictating what is appropriate care; it would not be interfering in the practice of medicine. Medicare would merely be restricting payment to services of established benefit, just as it does when it pays for a cardiac pacemaker or an implantable defibrillator only if patients meet standard clinical criteria. Patients could still opt for treatment their doctors deemed futile if they were willing to pay for it. At an average cost of $4004/day for ICU care, I wonder how many people would pursue this route.
In a recent NY Times opinion piece, ethicist, oncologist, and health policy guru Ezekiel Emanuel lauds the resurgence of the house call. Emanuel says that house calls are bringing back “real personalized medicine” and, as a nice bonus, they’re saving money. But he fails to address why house calls fell into disfavor in the first place, and what we will need to do if we want to change their reputation as second-rate medicine and promote their use.
House calls are inefficient (at least when they involve clinicians actually traveling to the home rather than making a “virtual” house call by video). Reimbursement for a house call by a primary care physician is modest, though it is greater than for an office visit: the most recent Medicare Physician Fee Schedule reports that the highest possible reimbursement for a home visit to an established patient (someone the doctor has seen before) is $177.50, while the highest reimbursement for an office visit for a similar patient is $141.75. Payment for a procedure like colonoscopy or cataract extraction, by contrast, is 3-5 times greater. But beyond these financial considerations is the crucial recognition that physicians want certainty before they diagnose and treat. This kind of certainty comes from EKGs and blood tests and X-rays, only some of which can conveniently be performed in the home.
Consider a fictional but typical 85-year-old woman with mild dementia who lives with her daughter and son-in-law. Let’s call her Suzanne. One morning, Suzanne is much more confused than usual. She can’t figure out how to get dressed, even after her daughter lays out her clothes for her. She tries to eat her oatmeal with a fork. She babbles about how her husband will be coming to take her out for lunch, though her husband has been dead for twenty years and just the day before, she and her daughter visited his grave.
Suzanne’s daughter knows something is terribly wrong. She calls her mother’s physician, who insists that Suzanne go to the hospital emergency room for evaluation. The doctors in the ER do a battery of blood tests, looking for chemical imbalances in the blood or evidence of a failing liver or failing kidneys, even though Suzanne has never had liver or kidney problems. While waiting for the blood test results, they do an electrocardiogram, because people who are having a heart attack are sometimes very confused, even though Suzanne has never had heart trouble. The electrocardiogram is normal, and after two hours, all the blood tests come back normal as well. And just to be sure that Suzanne has not had any bleeding in the brain, she goes for a CT scan of the head, even though she has not fallen and brain bleeds of the kind the doctors are looking for almost always result from a fall. After Suzanne has spent 6 hours in the ER, the doctors conclude that the most likely cause of her confusion is a urinary tract infection, since a urinalysis shows some abnormalities, though a confirmatory culture will not be available for another 2 days. They send her home on oral antibiotics.
The reality is that Suzanne could have been diagnosed and treated at home. It would have been a good deal cheaper: in 2006, Medicare paid an average of $651 for an emergency room visit compared to $180 for an office visit, and the mean ER charge for a urinary tract infection was a stunning $2398. It would also have been far less burdensome to Suzanne, who became even more agitated lying on a stretcher in the hospital, and to her daughter, who took off a full day of work to be with her mother in the ER. Her doctor could have avoided sending Suzanne to the hospital. He could have made a house call, checking by physical examination for various possible explanations for her acute confusion such as severe constipation, bruising on her face or head indicating a recent fall, or abnormally low blood pressure. He could have arranged for simple lab tests to be done in her home, including a urinalysis and basic blood chemistries. He could have started Suzanne on oral antibiotics, treating her for the most likely cause of her problem while waiting for the results. Or he could have sent a visiting nurse to the home and relied on her assessment of Suzanne. Odds are he would have concluded that the most likely cause of her confusion was a urinary tract infection, especially if he knew that the last few times Suzanne had developed worsening confusion, that’s exactly what the problem had been.
But he couldn’t be sure that she wasn’t among the small percentage of older patients who have something else wrong, something serious. And even finding evidence of an infection in the urine wouldn’t have proved that it was really the cause of the confusion: almost half of older women routinely have bacteria in their urine, with no discernible effect on their well-being. So to be certain that Suzanne really had just a urinary tract infection, her physician had to order all those other tests, such as the CT scan, and start treatment only after he had all the results.
Home visits for certain kinds of patients, such as frail elders, can be very beneficial. As Dr. Emanuel points out, studies of innovative home care programs such as the Johns Hopkins Hospital at Home show that they can deliver high-quality results and save money. But if we want to see more house calls, we will need to modify the prevailing culture in which both physicians and patients regard certainty as the gold standard of medical care. We need to recognize that achieving certainty comes with a cost, both in dollars and in the sometimes dangerous and often burdensome tests and procedures to which patients are exposed. Physicians will need to talk with patients or their caregivers about how best to balance the risks and benefits of maximizing certainty.
It’s not often that a “research letter,” a short, preliminary report about ongoing research, makes it into the national media. But this week, newspapers picked up on just this kind of article from Nature, a prominent science journal. The article tentatively concluded that people aged 60-85 who practiced a custom-designed video game several hours a week got better at multitasking. Not only that, but the improvement persisted 6 months later and was manifest not just in better performance on the game but also in other measures of attention and memory. So is it time for octogenarians to start playing video games with their grandchildren?
Even before the University of California San Francisco lab published its NeuroRacer results, online companies like Lumosity were doing a booming business. Calling itself a “brain training and neuroscience research company,” Lumosity creates computer-based games that ostensibly offer a “scientifically proven brain workout.” It reported a 150% increase in business between 2012 and 2013, with 35 million users worldwide by January of this year and as many as 100,000 new subscribers each day. Clearly, people want to believe that playing mind games will keep them sharp and perhaps even fend off dementia.
To be fair, the authors of the study in Nature aren’t proposing anything of the kind. They offer their work as an illustration of the “plasticity” of the prefrontal cortex, that is, the brain’s ability to adapt with practice, even at older ages. But do mind exercises translate into useful improvements, as opposed to better scores on simple tests? And at least as important, if mind exercises are effective, what about singing in a chorus? Participating in a discussion group? Writing a letter to the editor? The new study compared volunteers (hardly a random selection of the population) who played the video game to other volunteers who did not; it did not compare playing the video game to other activities.
What’s wonderful about these other pastimes—playing music, arguing, writing—is that they are fulfilling in and of themselves, whatever their cognitive benefit. Social engagement helps prevent depression; it gives people a sense that they matter. Perhaps it’s harder to study the effects of making music than to measure the EEG (brain wave) correlates of playing video games; after all, playing Beethoven may be different from playing Mozart, trios may be more challenging than duets, and playing the piano may not be equivalent to playing the clarinet. It’s certainly a great deal easier to monetize a video game than a social network that helps older people find others with shared interests.
Researchers should keep on studying highly standardized, precise activities. But for now, I’d take my chances with the real world, not the virtual world.
With Labor Day rapidly approaching, I began wondering about older people in the workforce. Just how many people over 65 work? What about over 75? How is this changing? And what does work mean for older individuals?
Of course, 65 is an arbitrary way to define old age. Most people who turn 65 are not old in any meaningful sense, and they are certainly nowhere near the end of life: they can expect to live another 19.1 years. For women, life expectancy at age 65 is greater still, at 20.3 years. Even age 75 is no longer very old, with a life expectancy of another 12.1 years. Moreover, as I pointed out in my last blog posting, roughly half those years are “disability-free.” But Social Security kicks in at 65 and so does Medicare, so this continues to mark the conventional threshold between working and retirement.
It turns out that a substantial and rising proportion of the population continues to work past age 65. US Census Bureau projections for 2014 are that just under one in five people over age 65 will be working, a 36% increase in just 5 years. For the 65-74 age group, the figure will be slightly over one in four, and for those over 75, a little under 10%. Roughly half of those who continue to work will do so pretty much full time; about one-third will work 15-34 hours a week, with the remainder working 14 hours or less.
The US is not the only developed nation to see a marked increase in older workers. England has experienced a surge of older workers, with numbers topping a million this spring: in 2013, 57% of people who reached the official retirement age said they planned to continue working, compared to 40% a year earlier.
Some of the change is a direct consequence of the recession. The value of retirement plans that were tied up in the stock market took a huge hit, and with it came the realization by many people that they didn’t have enough money saved up to retire at 65. They also stood to lose employer-sponsored health insurance, along with their main source of identity.
What I found fascinating is that there’s a lot of advice available for prospective retirees about where to live, how to save for retirement, and how to make your money last after you do retire, but not much, as a recent article in Time pointed out, about how to make the most of the post-65 period, with or without a job. The pundits encourage everyone to eat well, remain active, and nurture close personal relationships before they turn 65 in the hope of remaining healthy, but they are silent about what to actually do with their lives if they succeed.
My personal advice (I wrote about this in my book, The Denial of Aging: Perpetual Youth, Eternal Life and Other Dangerous Fantasies, in the chapter “Making the Most of the Retirement Years”) is to concentrate on finding meaning in life. If work gives you a sense of meaning and you’re able to keep at it, then do it. If work doesn’t give you a sense of meaning, or if you can no longer continue what you’ve been doing, then it’s best to find something else that gives you that all-important sense of being part of the human community and making a contribution to the world. And it’s the job of the rest of us to make sure there are ample opportunities to do just that.