December 23, 2013

Fragile: Handle with Care

From both an individual and a public health perspective, frailty is one of the most important conditions affecting older people. Along with dementia, which is really just cognitive as opposed to physical frailty, it is a devastating syndrome. Frailty predisposes to recurrent hospitalizations and leads to the dreaded cascade of iatrogenic complications once someone is in the hospital. Frailty leads to nursing home placement and to disability and death. So a recent consensus statement discussing how to approach frailty is one of the most exciting and significant papers to appear in the recent geriatric literature. Because it was published in a third-tier medical journal, it’s only by chance that I stumbled on the article at all.

The consensus paper, authored by 20 geriatricians and including some of the most distinguished figures in the field, is based on a conference convened a year ago for the sole purpose of arriving at a shared perspective on frailty. It offers a definition of frailty, a few validated simple screening tests, several possible medical interventions, and the recommendation that physicians routinely screen for the disorder in people over age 70. The definition the authors came up with is “a medical syndrome with multiple causes and contributions that is characterized by diminished strength, endurance, and reduced physiologic function that increases an individual’s vulnerability for developing increased dependency and death.” Sounds like something written by a committee, but it hits all the high points. One of the screening tools the authors favor is the FRAIL questionnaire. Ask 5 simple questions: are you tired (Fatigue); are you unable to walk up 1 flight of stairs (Resistance); are you unable to walk 1 block (Aerobic); do you have more than 5 illnesses (Illness); and have you lost more than 5% of your weight in the past 6 months (Loss of weight). A score of 3 or greater (a yes answer counts as 1 point) indicates frailty. A score of 1 or 2 implies pre-frailty.
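To make the scoring rule concrete, here is a minimal sketch of the FRAIL tally in code. The questions and cutoffs come from the consensus paper as summarized above; the function name and input format are my own, and the “robust” label for a score of 0 is an inference, since only frailty and pre-frailty are named here.

```python
# A minimal sketch of FRAIL scoring. Questions and cutoffs are from the
# consensus statement as summarized above; names and input format are
# illustrative. The "robust" label for a score of 0 is an inference.

FRAIL_ITEMS = [
    "Fatigue: are you tired?",
    "Resistance: are you unable to walk up 1 flight of stairs?",
    "Aerobic: are you unable to walk 1 block?",
    "Illness: do you have more than 5 illnesses?",
    "Loss of weight: have you lost more than 5% of your weight in the past 6 months?",
]

def frail_category(answers):
    """answers: five booleans, one per item; each 'yes' counts 1 point."""
    score = sum(bool(a) for a in answers)
    if score >= 3:
        return score, "frail"
    if score >= 1:
        return score, "pre-frail"
    return score, "robust"

# Example: tired, can't climb a flight of stairs, >5% weight loss -> frail
print(frail_category([True, True, False, False, True]))  # (3, 'frail')
```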

Frailty can seldom be cured and is often progressive, but in some cases it can be ameliorated. Exercise, nutrition, and medications can all help. In particular, the authors cite a review article indicating that 45-60 minutes of exercise done 3 times a week is beneficial. Calorie supplements can promote weight gain and reduce mortality in frail old people who are undernourished. In people with a low vitamin D blood level, vitamin D supplements can reduce falls, hip fractures, and mortality. Reviewing all prescription medications and getting rid of selected drugs can also be useful. Finally, the conference concluded that primary care physicians as well as other clinicians should routinely screen for frailty in the geriatric population.

No one wants to be labeled “frail.” It’s up there along with “elderly” or “old” as a term everyone seems to want to avoid. But far better to prevent or treat the condition than to pretend it doesn’t exist. It’s time for doctors to pay attention to frailty—to recognize when it’s present and to intervene when possible. How’s that for a New Year’s Resolution!

December 16, 2013

A Rare Win Win in Medicine

How often does anyone come up with an idea for improving medical care that does good and saves money at the same time? Not very often. Even inventions that ought to save money often don’t—for instance, a number of years ago, surgeons figured out how to take out a person’s gallbladder using a fiberoptic device called a laparoscope. Instead of a five-inch incision, patients have a one-inch incision; instead of a 5-day hospitalization, they spend a single night in the hospital; instead of a 6-8 week recuperation period, patients are up and about within days. Good idea? Absolutely. Money-saving? Not so clear. The rates hospitals charged for the new procedure were based on the “equivalent” procedure that insurers were used to paying for, so the per-procedure charge wasn’t much less than that for standard gallbladder surgery, which involves cutting open the abdomen. Not only that, but the total number of people getting gallbladder surgery went up dramatically after the simpler procedure was introduced. Net result? No decrease in overall spending on taking out people’s gallbladders. So when something comes along that both improves outcomes and saves money, it’s worth taking note and celebrating. Palliative care, as a recent article in the New England Journal of Medicine points out, is exactly that kind of remarkable invention.

Palliative care is not synonymous with hospice care. It is not the same as end-of-life care. And it is not a court of last resort, what you get when you’ve exhausted all other possible treatments. Palliative care, as the Center to Advance Palliative Care defines it, is “an extra layer of support,” something that is appropriate at “any stage in a serious illness.” Patients can have palliative care and life-prolonging treatments; they can, for instance, have chemotherapy or radiation therapy as treatment of cancer along with palliative care. What palliative care adds to conventional treatment is a whole team of clinicians (typically a doctor, nurse, and social worker, though it can include a chaplain or music therapist or other professional) whose focus is on managing symptoms (problems such as pain or nausea or depression), on advance care planning (preparing for future medical care), and on providing psychosocial support to the patient with a life-limiting illness and his or her family.

Several studies have now shown that early palliative care improves quality of life and may even lengthen life. In advanced lung cancer, in which patients typically have a prognosis of at most a year, patients who received outpatient palliative care along with conventional cancer care were less depressed, had fewer physical symptoms, and actually lived longer than those who did not get palliative care. Similar findings have been reported for people with severe heart failure, severe chronic lung disease, and multiple sclerosis. My clinical work suggests the same is true for older people with physical frailty or with cognitive frailty (dementia).

What’s remarkable is that palliative care also saves money. It leads to shorter hospital stays, fewer days spent in an intensive care unit, and fewer expensive tests, all without shortening life. Yet most patients with life-threatening illnesses do not receive palliative care services. Why not? The New England Journal article suggests that the way to improve the situation, and implicitly the reason for the current limitations, is by changing the payment system so that insurers would pay physicians for counseling about end-of-life care, by reforming the medical education system to increase palliative care training, and by making palliative care consultation available at all hospitals. These are all reasonable strategies, but I don’t think they get to the heart of the problem.

The main barrier to extending the benefits of palliative care more widely is not economic—palliative care physicians have been successfully billing Medicare and other insurers for their services for years by transforming a “family meeting” into a “history and physical examination” through the inclusion of a few comments about the patient’s medical problem (the “history”) and appearance (“physical examination”) in the medical record. The main barrier to more widespread use of palliative care is psychological. Even though patients and their families who do avail themselves of palliative care generally like what they get, many patients refuse palliative care services because they do not want to face their mortality. At the same time, physicians do not want to propose palliative care because they think they are conceding defeat in the “fight against death.” And Congress does not want to legislate changes in Medicare and Medicaid that mandate broader use of palliative care because of the “death panel” legacy. The truth is that we are all mortal. The question is not whether we will die but what our journey will look like. Maybe it’s time to face this reality.

December 08, 2013

Words, Words, Words

Physicians are not known for their communication skills. Despite sessions in medical school and during residency addressing topics such as “breaking bad news” and “discussing prognosis,” clinicians still do not perform well. A new program for both doctors-in-training and nurse practitioner students sought to improve the poor track record. 

Participating students attended eight 4-hour workshop sessions addressing communication in end-of-life care. Each session included a short lecture, a demonstration by a faculty member of good communication, a practice session for the trainees using a simulated patient, and a discussion. A total of 184 students completed the workshops; another 222 completed “usual education.” (The vast majority were physicians-in-training and only a few were NP students, so I will restrict my comments to doctors.) The students were evaluated for their proficiency in communication and their skill in providing end-of-life care after the educational intervention, with physicians, patients, and family members all contributing to the evaluation process. The bottom line: the workshops did not appear to accomplish anything. Students who participated in the workshops performed at almost exactly the same level as those who received the “usual education,” both in their communication skills and in their delivery of end-of-life care. In fact, the only measurable difference between the two groups was that patients whose physician had taken the training were more likely to be depressed than those whose physician had not. What are we to make of these findings?

The authors offer several interesting possible interpretations. They point out that an earlier study showed that patients who understood they had a very poor prognosis were more likely to rate their physicians’ communication skills as poor than those who incorrectly believed their prognosis was pretty good. So it is entirely possible that what palliative care physicians mean by “good communication skills” is not what patients and families mean. Palliative care doctors think that good communicators give patients a realistic understanding of their clinical situation and elicit their patients’ preferences for future medical care, all in a compassionate and caring way. Maybe patients and families equate “good communication” with encouragement, or instilling hope, or holding out the prospect of cure, however implausible cure may be. Maybe objective assessments by trained faculty are a better way of evaluating success than are patient and family reports; it turns out that when faculty did the evaluating, they found that students did improve after the workshop. Maybe expecting that the students would do a better job providing end-of-life care after a workshop on communication was naïve; after all, excellent end-of-life care includes expert pain management, good diagnostic skills, and appropriate referral to other clinicians, not just good communication.

I’d like to suggest a different conclusion. Perhaps it’s time to stop investing so much effort in trying to change physicians. We should turn instead to a radically different way of educating patients and helping them make the difficult decisions they face near the end of life. One of my colleagues has taken just such a tack. He designs short videos to show patients and their families the medical conditions they have and the interventions they might be offered. Multiple studies have now demonstrated that patients who watch these videos have a much clearer idea of what is at stake and express different preferences from patients who hear a verbal description of their disease and the options for treatment. The videos do not replace clinicians; rather, they give patients a strong foundation on which to build when they talk with their physician. They allow doctors to go beyond explaining the basics and they allow patients to apply the information they have learned to their specific situation.

Most of my career has been devoted to trying to be the best possible communicator with my patients, on the one hand, and to writing books and articles to help clinicians and patients make better decisions, on the other. So it pains me to think that this intensely verbal approach may just not be as effective as carefully constructed videos that show the realities of advanced illness and contemporary treatment. I will continue to write (this blog included) because that’s what I do. But I will also partner with my young colleague to create scientifically accurate videos, reviewed by experts, that complement all those words.

Perhaps Eliza Doolittle put it best, in My Fair Lady:

Words, words, words
I’m so sick of words…

Sing me no song, read me no rhyme, 
Don’t waste my time, show me!
Please don’t implore, beg or beseech,
Don’t make a speech, show me!

December 02, 2013

Aging Well

My mother will turn 88 in a few weeks. According to the definition of successful aging put forward by Rowe and Kahn nearly 16 years ago, she is aging quite well. Her kidneys, lungs, and heart work fine. She is still very active—she teaches a French class once a week at the local senior center, she tutors English to foreigners, she plays Scrabble with friends, and she drives daily to visit my father at the nursing home where he lives. My mother does have her share of medical problems: she has painful arthritis affecting her knees and her back and she is very weak, finding it difficult to turn a door knob or to lift a container of milk. Until about a year ago, she walked at least a mile every day, but now she can only take short walks and has to sit down frequently. Her memory isn’t what it once was, though it’s still pretty good. My mother will say that “old age is no picnic” and that “people live too long” today. When her physician told her she was aging gracefully, she told him he was full of it. Her doctor has one perspective on successful aging; she has another. How are we to put the two views together? Are we using the right definition of “successful aging”?

A new study in The Gerontologist tries to answer this question. The authors carried out in-depth interviews with 56 elders who have significant disabilities and are enrolled in the On Lok program, the original PACE program (Program of All-Inclusive Care for the Elderly) in San Francisco. Members of PACE all have enough disabilities to qualify for entry into a nursing home and for Medicaid enrollment, so they are both frail and poor. In fact, the group studied had an average age of 78; 64% were women; the average number of ADL dependencies (problems in areas such as bathing or dressing) was 2.2, and the average number of IADL dependencies (areas such as food shopping or cooking) was 6.6. It was a diverse group, with 23% African American, 32% Asian American, 20% white, and 20% Latino.

By and large, the group held the view that aging is an unavoidable process that entails disability. The key to successful aging, they said, was to accept your limitations and to adapt. If you have trouble walking, use a walker. If a walker isn’t enough, use a wheelchair. They also tended to focus on relative disability rather than absolute disability—as long as there were others who were worse off, they felt they were doing well. The minority who said they hadn’t aged successfully commented that they had not found ways to adapt to their disabilities and that they felt they were a burden to their families.

So the PACE elders and my mother don't have quite the same perspective. My mother would agree that it’s critical to accept your limitations and to adapt, and she's done that. She doesn’t want to be a burden on anyone, and she isn't. But I doubt she would say she is “aging well." She is aging better than my father, who has dementia and Parkinson's and lives in a nursing home because he needs help with just about everything, but she wouldn't call herself a phenomenal success.

Perhaps the whole idea of “successful aging” or “aging well” is the wrong way to think about this phase of life. For no other stage of development do we assign grades: we don’t say someone had a “successful childhood” or a “failed adolescence.” We might refer to their emotional state during a particular stage: someone might have a “happy childhood” or a “troubled adolescence.” We might use the label “successful” for a career or a marriage, but not for a part of the life cycle. So why do we insist on evaluating aging in this way? 

Instead of grading aging, government and professionals should work to ensure that people are satisfied with their lives and are contributors to their community. After all, this is arguably the goal for the entire population, regardless of age. Our challenge is to figure out how to achieve this for people who are old and frail, whether because of physical impairments, cognitive impairments, or both.

Just as we cannot eradicate inequality among people—they have different genetic endowments, they are born into different families and different cultures—but we can aspire to provide equal opportunity, perhaps our goal for older people should similarly be to promote equality of opportunity. We cannot eliminate differences in disease burden or disability, but we can seek to assure that everyone has a fair chance to make the most of themselves, whatever their situation. It’s time to switch from talking about “successful aging” to coming up with a successful aging policy.


November 24, 2013

Remember the Birth Pangs of Medicare!

The remembrances of JFK this week focused, understandably, on his great promise and how an assassin’s bullet burst the bubble of optimism and exhilaration generated by his election. Implicit in this perspective, however, is the suggestion that all that came after Kennedy was pessimism and gloom, epitomized by the Vietnam War. What is ignored by claims that it-was-all-downhill-after-JFK is that for all Kennedy’s youth, his oratorical skills, and his brilliance, he was in many ways not a terribly effective president. Left largely unsaid is that it was under LBJ, the consummate politician, that progress was made on the liberal agenda. It was LBJ who pushed the Civil Rights Act, Head Start, the Food Stamp program, and Medicare and Medicaid through Congress.

For 50 years, America had resisted national health insurance. Theodore Roosevelt (yes, the Republican Roosevelt) supported national health insurance in the election campaign of 1912, arguing that no society could be strong whose people were sick and poor. But he lost the election, and the issue largely faded from the national agenda until a different Roosevelt was elected president in 1932. While enthusiasm for national health insurance grew within FDR’s administration, the president himself never championed it, as he faced relentless opposition from the AMA and state medical societies.

Truman picked up the baton after WWII, but again confronted insurmountable opposition from the AMA as well as other powerful health care organizations such as the American Hospital Association. Thus, despite continued public support—polls every couple of years between 1936 and 1945 showed a large majority of Americans supported government health insurance—Congress balked. The passage of legislation providing comprehensive national health insurance for the poor (Medicaid) and the elderly (Medicare) during the Johnson Administration in 1965 was an extraordinary achievement, launching a very popular program from which older people continue to benefit today.

Just how important Medicare is to the health and well-being of the 49 million people (8 million disabled Americans and 41 million older individuals) now covered by the program was brought home recently by the publication of an article reminding us that access, affordability, and insurance complexity are generally worse in the US than in 10 other developed countries.

Americans’ poor access and affordability arise largely because of the enormous uninsured population in the US—a situation that does not affect older people, thanks to Medicare, and that will be less of a problem if the Affordable Care Act is allowed to go into effect. Even among insured adults in the US, however, high out-of-pocket spending was a problem, chiefly because of the high deductibles and cost-sharing in many US insurance plans. This problem is less likely to affect older people in light of Medicare’s comparatively modest cost-sharing.

Access to primary care was worse in the US than in many other developed nations, a problem that was particularly severe for the uninsured. This is one domain that also affected the insured, including those with Medicare, because of the relatively poorly developed primary care infrastructure in the US.

In the area of administrative costs and complexity, the US was an outlier, with US health insurers spending $606 per person on administrative costs, more than twice as much as the number 2 spender, Switzerland, and 17 times as much as the number 11 spender, Norway. While there is considerable debate about just how to compute administrative costs of a health insurer, some of the best estimates indicate that Medicare spends less than 2% of its operating expenditures on administrative costs, compared to 11% for Medicare Advantage Plans (the private Medicare spinoffs) and 12% or higher for other private insurers.

Let us celebrate what Medicare has achieved—reasonably good access to comprehensive care at an affordable price for consumers—and make sure that we do not sacrifice these accomplishments as we improve Medicare to make it more responsive to contemporary medical problems and to slow the rate of rise of health care costs. And as we pay tribute to Medicare, with all its imperfections, let us also recognize that the Affordable Care Act, with all its imperfections, aims to do for the rest of the population what Medicare has done for the elderly and the disabled. Obamacare is not national health insurance; it is at its core a plan designed by Republicans, supported by big business, and relying on private rather than government insurers. But its intent is to extend the indubitable benefits of health insurance to another 30 million Americans.

November 17, 2013

Getting Off Drugs

The 1.4 million people who live in nursing homes are among the most vulnerable, powerless individuals in American society. They are old (mean age 79.2), they are physically frail (60% are unable to do 4 or more of the most basic daily activities), and most of them are cognitively impaired, many of them severely (39%). Nursing homes have come a long way since the bad old days when residents were tied up, neglected, and abused, and one of the stratagems for improving care has been the “care planning meeting.” A plan of care must be developed by the facility staff for all new admissions to nursing homes that are Medicare or Medicaid certified, addressing physical, emotional, and medical needs. These plans are reviewed on a quarterly basis—more often if there is a major change in status, such as a hospitalization. And one of the innovations of the last decade is to invite family members to participate in care planning meetings. This gives families information about their loved one and an opportunity to make suggestions and raise concerns. But one issue that neither staff nor families routinely raise, and that the many websites advising families on how to negotiate the unfamiliar nursing home terrain seldom mention, is medications. And that, especially in light of recent revelations, is an essential issue.

The recent revelation is that Johnson & Johnson, the world’s largest drug company, just settled a variety of civil and criminal complaints about its sales of the psychiatric drug risperidone (Risperdal) for $2.2 billion (yes, that’s billion). J&J “accepted accountability” for misbranding Risperdal as useful for treating elderly patients with dementia, for marketing Risperdal for the elderly, and for paying kickbacks to both physicians and Omnicare, the largest pharmacy supplying nursing homes, for using the drug.

It’s been known for quite some time that drugs like risperidone, an “atypical” neuroleptic used in the treatment of schizophrenia, come with considerable side effects. Though less likely to cause Parkinsonian symptoms than earlier “typical” neuroleptics such as chlorpromazine (Thorazine) or haloperidol (Haldol), risperidone can cause sedation, low blood pressure, and dry mouth, among other symptoms. Then it was shown to increase the risk of diabetes and weight gain. And a meta-analysis in 2005 found it increased the risk of sudden death by 60-79%, which led the FDA to issue a “black box” warning—a warning on the risperidone label highlighting its hazards. Families and physicians might have been willing to accept the risk of side effects and even of death when the drug was used in people who were already very old and very sick if it had been effective. Unfortunately, a series of studies looking at whether risperidone and other “atypical neuroleptics” (similar drugs in the same class) were effective in controlling the behavioral symptoms of dementia—problems such as agitation or paranoia—found only limited evidence that they achieve these goals.

Since behavioral symptoms are often very difficult to control and create problems both for the patient and for the nursing home, physicians have continued to use neuroleptics, including risperidone, “off label,” that is, for uses other than those for which the FDA approved them. This is an entirely legitimate practice. What is not legal is for drug companies to advertise their drugs for use in these conditions or to bribe physicians or pharmacies to use the drugs.

The Justice Department is hoping that the new settlement (in which, by the way, J&J does not admit any wrongdoing) will stop the prevailing practice and serve as a deterrent to this kind of behavior in the future. Given that GlaxoSmithKline settled with the government last year for $3 billion over similar behavior with respect to two antidepressants (Paxil and Wellbutrin), along with a diabetes drug, and that Pfizer made a payment of $2.3 billion in 2009 over inappropriate marketing of several other drugs, it’s not so clear that the deal will deter outrageous behavior. It is entirely possible that settlements of this kind are seen by Pharma as the cost of doing business. Everybody misbehaves all the time; occasionally a company is caught; on balance, a periodic payoff may be worth the tremendous benefits. After all, at its peak in 2007, J&J sold $4.5 billion worth of Risperdal. The company has now signed a 5-year “corporate integrity agreement” in addition to paying the fine, but analogous agreements signed by medical device manufacturers in the past led to no substantive changes in behavior.

So in those care planning meetings in the nursing home, if they ask nothing else, family members should ask “what drugs is mom on?” And that should be followed by “why is she on them?” and “are they helping?” And if there is no good reason for giving the medication, ask that it be stopped, especially neuroleptics. It will save mom a lot of misery—and save money for all of us. 

November 10, 2013

Putting Teeth into Medicare

Teeth matter. Not just to chew food, although that is critically important to older people, who are at greater risk of undernourishment than of obesity. Not just for esthetic reasons, although appearance is an important part of self-esteem and teeth are an important part of appearance. Oral health is a significant ingredient in the overall health of older people. For years, geriatricians have recognized poor dentition as a risk factor for pneumonia—the bacteria that build up in dental plaque can get into the lungs and cause infection. The Journal of the American Geriatrics Society, the leading professional journal dealing with medical issues in older individuals, has a special section each month called “Dental and Oral Health,” much as it has a section on “Ethics, Public Policy, and Economics” and one on “Educating and Training.” In October, the article in the “Dental and Oral Health” section was on oral health in old people with diabetes (it’s poor). So if teeth are so important, why isn’t dental care covered by Medicare?

It turns out that lots of arguably important services are not covered by Medicare. In large part, what is covered and what isn’t is still governed by the original 1965 legislation enacting Medicare (Title XVIII of the Social Security Act). Medicare excluded then, and still excludes today, eye exams, refractions, and eyeglasses, as well as auditory exams and hearing aids. It excludes “services that are not medically reasonable and necessary,” although nobody knows what exactly is reasonable and necessary and Congress has assiduously avoided defining the term, resulting in the exclusion of cost from consideration in determining Medicare coverage. Some of the services excluded from the original legislation have since been added in: for example, prescription drug coverage is available thanks to the Medicare Modernization Act of 2003, and certain preventive care such as colorectal cancer screening, Pap smears, and prostate cancer screening have been added. But dental care remains an exclusion: “Items and services that are furnished in connection with the care, treatment, filling, removal or replacement of teeth” are off the table.

Older individuals can have private dental insurance, just as younger people can. But this raises another problem. Dental insurance itself isn’t really insurance at all. It covers routine preventive and maintenance care but specifically excludes the costliest treatment. Typical policies have a $2000 per person annual maximum. All it takes is one or two root canal treatments and the bills start to mount up. So dental insurance has it backwards—it covers the small stuff and leaves you vulnerable to the big bills. The essence of insurance is supposed to be that it protects against extreme loss: as Wikipedia puts it, an individual assumes a guaranteed and known relatively small loss (the premium paid to the insurance company) in exchange for a promise to compensate the insured in case of a far greater loss. 

Medicare includes neither reasonable dental insurance (payment for costly care such as dentures or root canals or extractions) nor conventional dental insurance (payment for routine preventive care and filling simple cavities but minimal coverage for anything else). The reason that Medicare doesn’t cover teeth or a whole host of other services that older people need is that Medicare was designed as insurance for hospital care. While it has gradually expanded—originally, it wasn’t even going to pay doctors—it is only slowly adapting to contemporary reality. What Medicare is still best at is providing comprehensive coverage for acute illness: all the high tech diagnostic procedures and treatments, from PET scans and cardiac catheterizations to surgery and intravenous chemotherapy. What Medicare is not so good at is addressing chronic disease. And most older people suffer from chronic disease. Over two-thirds of people on Medicare have more than one chronic condition; 21% have four to five chronic conditions and 14% have six or more.

Good geriatric care has to be coordinated, integrated, and patient-centered, but Medicare does little to foster any of these features. Medicare still does not pay for case managers to facilitate integration; it is largely fee-for-service, undermining any realistic possibility of integrating physicians, hospitals, and nursing homes; and it does nothing to encourage patients to discuss their goals of care with their physicians. A few experimental programs are underway to remedy these deficits, such as Accountable Care Organizations (to promote integration) and disease management programs (to promote coordination and self-care). But we’re a long way from having a truly modern Medicare program that serves the needs of frail elders and near-frail elders along with those of their more robust counterparts.

So the critics are right that we need to do something about Medicare. But what we need to do is not to privatize the program or cut benefits. If we want to put teeth into Medicare, we should add true dental coverage—and overhaul the program so that it focuses more on chronic care than on acute care, more on home care than on hospital care, and more on human care than on technological care.


November 03, 2013

Showing We Care

Since the 1990s, physicians and patients have been fighting over futility. The doctors look at a patient who is dying and say that further tests and treatment cannot possibly work and shouldn’t be done. The patients, or more commonly their families, look at those same patients and say that they want “everything done” to try to prolong life. 

As often happens in the US, the futility battle ended up in the courtroom. In the case of Helga Wanglie, an 86-year-old woman in a vegetative state after hip surgery, the doctors went to court over whether the patient's husband had the right to insist that she remain on a ventilator. The court, as also often happens, didn’t address the issue of whether the ventilator was or was not appropriate treatment for Mrs. Wanglie; it simply ruled that her husband, as her surrogate, had the right to make the decision. After that case, many physicians concluded that the fight over futility was itself futile. For the last 15 years, physicians have tried to focus on determining a patient’s goals of care and then suggesting what treatments are most consistent with those goals. When they still cannot agree with family members about the right course of action, they resort to mediation, sometimes provided by a hospital ethics committee. But conflicts over perceived futility continue to bubble vigorously below the surface.

A short article in the New England Journal of Medicine, “The Debt of Life—Thai Lessons on a Process-Oriented Ethical Logic,” offers a refreshing way of looking at futility. Based on his experiences doing ethnographic field work in Thailand while a graduate student in anthropology, physician Scott Stonington shines a new light on the typical ICU dilemma. The physicians, he reports, are loath to perform various possible tests and treatments because they think in terms of outcomes. They argue that their interventions won’t work in the sense that they won’t overcome the existing medical problems, and that they are burdensome to the patient and, parenthetically, expensive. The patient’s family, he observes, thinks in terms of the process of care. He describes one Thai family who said that their father had given them “flesh, blood, and breath,” so they had a “debt of life” to pay. The ICU, they reasoned, allowed them to repay their debt: it gave their father flesh (tube feedings for nutrition), blood (intravenous medications and dialysis to cleanse the blood), and breath (a ventilator for breathing). The family was not so much interested in the outcome of treatment as in the treatment itself. In this scenario, the conflict was ultimately resolved when the family came to the conclusion that they had paid their debt and further aggressive care could be discontinued.

I made a very similar argument in my essay, “The Standard of Caring: Why Do We Still Use Feeding Tubes in Patients with Advanced Dementia?” I noted that it had been over 10 years since a series of studies in the medical literature reported that feeding tubes (a tube inserted into the stomach to provide nutrition) did not prolong life in patients with advanced dementia who had eating difficulties. These patients are nearing the end of their lives and no matter what procedures they have, their prognosis remains pretty much the same. Not only don’t the tubes prolong life, but they don’t accomplish a variety of other goals that doctors had hoped they might: preventing pressure ulcers (skin breakdown that is often related to malnutrition) or preventing pneumonia (caused by food going into the lungs instead of the stomach). As a result of these studies, the rate of tube feeding people with advanced dementia has declined, but it is still far from zero. I suggest that the reason some families want a feeding tube is to show that they care. It’s not that they expect to improve some quantifiable outcome—living longer or avoiding pneumonia. It’s that they want to have a way to demonstrate caring. For the same reason, we keep people with advanced dementia clean and dressed. We don’t require a study that shows that they will be less likely to develop an infection if they are kept clean. We don’t demand proof that they will live longer if they are clothed. We assume that being clean and clothed contribute to well-being because they are among the only ways we as caregivers have of showing respect for the human being who happens to have dementia. Tube feeding, from this perspective, is a means of proving that we care. 

There’s an important conclusion to draw from the tube feeding example, a conclusion that applies to the ICU situation as well. If we want to dissuade families from advocating feeding tubes for their relatives with advanced dementia, we need to offer a viable alternative way to demonstrate caring. I suggested using special popsicles made by freezing high protein liquid supplements for patients who have trouble chewing and swallowing but can still suck. Hand feeding, laboriously spoon feeding someone who has trouble feeding himself, is an alternative for people who can still process food in this way. But simply telling families that we won’t feed their relative at all and trying to assure them that the person with advanced dementia will not experience hunger or thirst fails to offer any means of caring. In the ICU setting, perhaps what we need to do is not continue burdensome treatment until families feel their “debt is paid.” Perhaps instead what we need to do is to find genuine alternatives to painful or uncomfortable or undignified treatment. But unless we offer something rather than what families perceive as nothing, we will be stuck with providing what physicians regard as futile treatment. Hospice care is intended to serve this role, but may not offer enough active interventions to satisfy family members. Our challenge is to identify ways to truly show we care.

October 27, 2013

Turning Back the Clock

A fascinating article in this month’s health policy journal Health Affairs concludes that by focusing on diseases one at a time—trying to prevent heart disease or cancer or dementia—we are shooting ourselves in the foot. Instead, we should devote greater effort to delaying the aging process altogether. If we could slow aging, we could in principle delay the onset and progression of all fatal and disabling diseases at once. Instead of surviving your heart attack and then going on to suffer from dementia or cancer, you would remain healthy longer, perhaps dying suddenly, as centenarians have been reported to do. But will delaying aging improve the quality of life? And how likely are we to actually postpone aging any time soon?

Using a complicated model known as the Future Elderly Model (FEM), the authors predict what will happen to health care spending, functional status, and life expectancy under various scenarios. What they find is that decreasing the incidence of heart disease by 25% between 2010 and 2030 wouldn’t do very much for disability rates or overall mortality. Ditto for decreasing the incidence of cancer by the same amount during the same period. In fact, mortality and disability would be much the same as what we can expect if the incidence of cancer and heart disease stayed the same and the only change was that the number of older people increased, as will happen when the baby boomers reach old age. Delaying aging, by contrast, would have a dramatic effect on both length and quality of life. These benefits would come at a considerable cost—by 2060, costs would be $295 billion greater in the delayed-aging scenario than in the status quo scenario, because all those people who live longer would typically qualify for Medicare and Social Security. The good financial news, however, is that changing the age of eligibility for Medicare from 65 to 68 and raising the age of eligibility for Social Security from 67 to 68 would offset the increased costs.

All very compelling. But just what are these potential advances that will allow us to delay aging? The Health Affairs authors cite two scientific papers, one in the Journal of Clinical Investigation and one in a journal called Experimental Gerontology, both published this year. The papers are very intriguing.

The two papers focus on the fact that aging cells secrete a variety of nasty substances that cause chronic inflammation, at least in mice. These chemicals are collectively referred to as SASP (senescence-associated secretory phenotype). SASP or the cells that make them are potential targets for drugs to delay the aging process. So far so good. But as one of the authors points out, it’s not known whether SASP causes chronic age-related disease in people. Moreover, it’s entirely possible that disrupting the processes that cause aging and death will turn on the processes that promote cancer. Finally, as another of the authors argued, actually carrying out clinical research in humans, testing whether a drug (if we had one) has a beneficial effect, will take an estimated 17 years. This would bring us to 2030, the very date by which, according to the Health Affairs model, all the good effects of delaying aging are assumed to have already happened. If we aren’t likely to have any aging-delaying drug available for clinical use before 2030, we can’t plausibly expect any beneficial effect until well after that time.

So by all means, let’s go ahead and invest in the basic science of aging. Let’s encourage more clinically trained geriatricians to go into this kind of research (reportedly, of 7000 board-certified geriatricians, only 12 have research grants from the biological division of the National Institute on Aging). But in the meantime, let’s figure out how best to care for the many frail elders who will be with us for years to come.



October 20, 2013

Are Hospitals Bad for Your Health?

When I was a medical resident, I noticed that bad things kept happening to my older patients: many got confused and some fell and maybe even broke a hip. I wondered whether the problems they developed were related to the acute medical illness for which they were admitted or to the hospitalization itself. 

So I did a study in which I compared the experience of older patients to that of people under 70. By looking through patients’ hospital charts and sitting in on the nurses’ rounds every day, the time when they reported to the next shift what was really going on with their patients, I was able to determine who was confused, who fell, who stopped eating, and who was incontinent. Then I analyzed whether there was any conceivable relationship between their medical problems and the symptom they developed. For example, a person admitted with a stroke or meningitis (an infection of the lining of the brain) could be expected to be confused but not a patient with a stomach ulcer. 

What I found was that 40% of the older patients, compared to 9% of the younger ones, had one or more of these symptoms that couldn’t be explained by their admitting diagnosis. Moreover, as soon as patients had one of these problems, doctors intervened in some way—they ordered restraints for the patients who had fallen or a urinary catheter for those who were incontinent—and all those interventions in turn predisposed to new problems. In subsequent years, several other investigators documented the perils of hospitalization for older people and geriatricians introduced ACE (acute care for the elderly) units to minimize the risk of hospital-induced problems. These units have made a difference, but even in specialized units, older people are at risk of hospital-related complications.

Today, there is a renewed interest in learning about the perils of hospitalization. One prominent researcher introduced the concept of “post-hospitalization syndrome,” arguing that older patients are at heightened risk of problems after discharge, problems related not only to the acute illness for which they were hospitalized, but also to the debilitating effects of having been in the hospital. Patients are often sleep-deprived, poorly nourished, and de-conditioned after a hospital stay, and it is these factors that may predispose to difficulties in the 30 days after discharge. According to this analysis, physicians and nurses need to pay more attention to making the hospital a better and safer place for patients.

Now a new study picks up on the theme of the post-hospitalization syndrome, measuring the risk of adverse drug reactions during this period of heightened vulnerability. Pharmacists reviewed the records of 850 older people who collectively experienced 1000 hospitalizations, and they identified 330 possible adverse drug events (injury from a drug rather than from the underlying disease) during the 45 days after discharge. Physicians looked through the list and agreed that 242 cases were truly adverse drug events, of which 2.5% were life-threatening and another 21% were serious. They deemed just about one-third of these events preventable. Most of the drugs causing these problems were cardiovascular drugs or diuretics (fluid pills that are typically also used to treat heart disease); the next major class of offenders was narcotics. The authors conclude that doctors need to do a better job both in the hospital (deciding what medications a patient should be discharged on) and afterwards (monitoring for side effects).

It seems that patients still get into trouble after hospitalization, particularly frail elders, just as they did 30 years ago when I published my study of iatrogenesis. Adverse drug reactions are yet another form of trouble. But what are the implications of these observations? We should try harder to make the hospital a safer place for frail old patients. We should watch assiduously every time an older patient starts a new drug, and people who are discharged from the hospital often go home with several new medications or new doses of old medications. 

Maybe we should also think about whether the patient should really have been admitted to the hospital in the first place. Perhaps his illness could have been prevented. More plausibly, perhaps we could treat the illness in a way that didn’t necessitate admission to a large, alien institution like a hospital. An older person cared for at home when he develops pneumonia or a worsening of his chronic heart failure won’t suffer from confusion induced by unfamiliar surroundings. He won’t have his sleep disrupted by monitors going off in the adjacent bed or nurses and doctors talking loudly in the hall. Of course he won’t have all the benefits of acute hospital care either, the sophisticated technology, the 24-hour nursing care. But maybe the risks aren’t always worth the benefits. Maybe we should design alternatives to hospital care that feature some of the benefits of the hospital but all the benefits of home.

October 13, 2013

Talking the Talk

A new survey asking people about their wishes for end-of-life care has been getting quite a bit of publicity lately. Commissioned by “The Conversation Project,” a laudable grassroots effort to encourage families to talk to each other about the kind of medical care they would want in their final days, the poll reports some interesting observations—and raises some important questions.

Like other earlier studies, this one finds that most adults (94%) think it is important to talk about end-of-life care. An extensive California study, for example, found that 83% of the adults surveyed thought it was important to make their end-of-life wishes known. A poll of adults in Massachusetts found that 84% were comfortable talking about dying.

And like earlier studies, this one finds that though people think that talking about dying is important, many of them don’t actually talk about it (in the new study, only 27% did). In California, only 36% of adults actually had something in writing about their wishes; in Massachusetts, 51% had had a conversation with family members.

But if we look at people who are very sick or very old or both and ask whether they talked to their families about their wishes, the picture that emerges is a bit different. The Pew Research Center conducted a national poll in 2005 (with 1500 subjects) and another one in 2009 (with 2969 subjects) and found that while only 11% of people aged 18-29 had any kind of written documentation of their end of life wishes, 51% of those over 65 had such a document. Moreover, 63% of older Americans had talked to their adult children about their wishes for medical care in the event of incapacity—and among older women, 71% had had such conversations. So when the new study from the Conversation Project reports that only 27% of those polled had talked to family members about their personal wishes, is this too low? 

While I agree that it’s a good idea for everyone to designate a health care proxy—to state who will make medical decisions on their behalf if they are unable to do so themselves—I don’t think it makes sense for everyone to have “the conversation” about end-of-life care. The overwhelming majority of Americans will die in old age: people over 64 account for 12% of the population but 70% of the deaths. I estimate that asking everyone who is between 18 and 64 to talk about their wishes for medical care at the end of life means that about 280 people will have such a conversation for every person who ends up actually dying in the next year. It’s far more reasonable to target advance care planning to those people who are likely to get very sick in the relatively near future, which means those who are elderly or who already have a life-limiting illness such as advanced heart failure or metastatic cancer. Moreover, if 20- or 30-year-olds actually did have “the conversation,” they would likely express preferences that will change by the time they are old and dying. How much suffering you are willing to endure in exchange for a small chance of living longer, or how much debility you find tolerable, may be very different depending on whether you are already 80 and have outlived most of your birth cohort, and on your underlying health status at the time you develop a terminal illness.
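For the curious, here is a back-of-the-envelope version of that 280-to-1 estimate. The 12% and 70% figures come from the paragraph above; the crude death rate and the age-band shares are my own rough assumptions, so treat this as a sketch of the reasoning rather than an exact calculation.

```python
# A back-of-the-envelope check of the ~280:1 figure. The 12%/70%
# numbers come from the text; the crude death rate (~0.8% per year)
# and the under-18 shares are rough assumptions for illustration.

crude_death_rate = 0.008        # assumed: ~8 deaths per 1,000 people per year

share_pop_over_64 = 0.12        # from the text
share_deaths_over_64 = 0.70     # from the text
share_pop_under_18 = 0.24       # assumed
share_deaths_under_18 = 0.02    # assumed

# Everyone not over 64 or under 18 falls in the 18-64 band.
share_pop_18_64 = 1 - share_pop_over_64 - share_pop_under_18            # ~0.64
share_deaths_18_64 = 1 - share_deaths_over_64 - share_deaths_under_18   # ~0.28

# Conversations held per death within a year, if every 18- to
# 64-year-old had "the conversation":
annual_deaths_18_64 = share_deaths_18_64 * crude_death_rate
print(round(share_pop_18_64 / annual_deaths_18_64))  # ~286, i.e. roughly 280
```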

If pitching discussions about dying to everyone is excessive, talking only about dying isn’t enough. Once it becomes clear that a person is truly dying—and many people who are at that stage have difficulty accepting that the end is imminent, a situation made worse by doctors who have difficulty telling them where they stand—the vast majority of people don’t want more tests and treatments. The default approach to people who will surely die within a few months no matter what is done should be a focus on comfort. Conversations are needed principally to communicate that the end is near and to identify the small minority who do want trials of treatment that have only a very, very small chance of benefit. 

What we desperately need to talk about is the final phase of life, whether that is measured in months or in years, a time demarcated by a marked decline in the ability to function independently. The 85-year-old with heart failure, diabetes, arthritis, and kidney problems may not be dying in any conventional sense of the term, but she is likely to develop some kind of acute illness in the near future, whether pneumonia or dehydration or an exacerbation of her heart condition, and thinking in advance about the approach to medical care that is right for her is critically important. She has real choices to make: does she want maximal medical therapy? Comfort-oriented treatment? Or something in between? As Katy Butler makes so poignantly clear in her recent book, Knocking on Heaven’s Door, this final phase can last a long time—it was 7 years between the time her father got his pacemaker and his death—but decisions made along the way dramatically shape the experience of those years.

So I’m not at all surprised that 94% of the 1067 people surveyed for The Conversation Project said that it is important to talk to your loved ones about your end-of-life care wishes but only 27% have had a discussion about “what they do and don’t want in their final days.” I applaud the Conversation Project for encouraging people to open the door to discussing difficult topics. But let’s be selective about whom we invite in and what we talk about.

October 06, 2013

Where in the World is the USA?


The widespread belief in American exceptionalism means that we tend to think we are unique. Often, being unique slips into being the best. But for a long time, I’ve had a sneaking suspicion that we might be able to learn something about how to improve life for older people by looking at other countries. A new study, based on data from the UN, the World Health Organization, and the World Bank, suggests we could.

The report was released on October 1, the International Day of Older Persons, a day that will be celebrated at the UN this week but that I confess I wasn’t even aware existed. The authors created a “Global AgeWatch Index,” made up of 4 domains: economics, employment and education, the environment, and health. Based on these measures, the best place to live if you’re over 65 is Sweden, which was in the top 10 in all 4 domains. The US placed 8th in the overall ranking (a geometric mean of the 4 domains; a small illustration follows below). But if we look at the health status component, we find a bleaker picture.

Health status was measured based on life expectancy at age 60 (WHO data), healthy life expectancy at age 60 (data from the Global Burden of Disease Study, Institute for Health Metrics and Evaluation, Seattle), and psychological wellbeing (based on the Gallup WorldView, a subjective assessment of whether one’s life has an important purpose or meaning). Based on these indicators, the US ranking is an embarrassing 24.
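For readers who want to see what a geometric mean does, here is the aggregation in a few lines of code. The domain names come from the report; the scores below are invented for the example, not taken from the report’s data.

```python
# Illustration only: the overall index as a geometric mean of the four
# domain scores. Domain names are from the report; the numbers are
# made up for this example.
from math import prod

def overall_index(domain_scores):
    """Geometric mean of the domain scores (0-100 scale assumed)."""
    return prod(domain_scores) ** (1 / len(domain_scores))

# Hypothetical scores: income security, employment and education,
# enabling environment, health status
scores = [60.0, 90.0, 75.0, 70.0]
print(round(overall_index(scores), 1))  # ~73.0
```

Unlike a simple average, the geometric mean punishes a country for doing badly in any one domain, which is why a weak health score can drag down an otherwise respectable ranking.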

Actually, the US didn’t do so well in the economic or environmental domains either—its overall score is respectable mainly because of relatively high employment and high educational attainment among American elders. In terms of income security, the ranking is #36, reflecting a marked income inequality. And on the “enabling environment” dimension, which measures things like social connections and access to public transportation, the US came in at 16.

So who are the role models? Three countries stand out as having high rankings across the board: Sweden, Norway, and Germany. The second tier is comprised of the Netherlands, Canada, Switzerland, and New Zealand. Maybe it’s time we explore what these other countries are doing right and have the humility to learn from their example.

September 29, 2013

A Rose by Any Other Name

While casting about for something to discuss in my blog, I stumbled on a short article that advocates renaming the “death panel” the “good planning panel.” The authors point out that family meetings involving physicians, patients, and their loved ones talking about future medical care are generally well received. Moreover, this kind of advance care planning prevents depression and anxiety in both patients and their families, and when patients have these conversations, they typically end up undergoing fewer invasive procedures in their final weeks of life, procedures that most patients say they don’t want. Allowing Medicare reimbursement for such meetings would be a very positive step in the direction of improving the care of patients with advanced illness. Whether calling it a “good planning panel” would transform the way people think about these kinds of discussions, in light of the lingering association with the “death panels” born of the right-wing media’s imagination, is another matter. Moreover, “panel” is a poor choice of word, evoking the image of a jury delivering a verdict. But it led me to think about the power of words and the role of euphemisms in medicine.

When the Center to Advance Palliative Care commissioned a market survey a couple of years ago, it learned that most people either had no idea what the term “palliative care” meant or assumed, incorrectly, that it was the same as “hospice,” which they in turn associated with imminent death. (Palliative care is an approach to care for anyone with advanced illness: it neither assumes the patient is close to death nor does it in any way limit treatment, but rather provides treatment focused on improving quality of life; palliative care can be given alongside life-prolonging medical therapy.) When members of the public were asked if they were interested in having “an extra layer of support” from their health care team, as palliative care was defined, they were uniformly enthusiastic. Similarly, many physicians were reluctant to broach the topic of “palliative care” with their patients because they thought it would be too frightening; they preferred to offer “supportive care.” So is “supportive care” a more useful name because patients understand that term correctly, or is it a misleading euphemism, designed to make patients think it is something that it isn’t?

And what about the evolution of the “DNR” (do-not-resuscitate) order? Some years back, the phrase “DNAR” (do not attempt resuscitation) was introduced. Since I’m someone who likes to tell things as they are, I favored that substitution. After all, the implication of DNR seemed to be that if only the physician performed CPR, the patient would be perfectly fine. Usually, the reality is quite different: whether or not CPR is performed, the patient with advanced illness whose heart stops beating will almost certainly die. But more recently still, some physicians have replaced “DNAR” with “AND,” which stands for “Allow Natural Death.” Instead of focusing on whether a particular technological procedure (CPR) will or will not be tried, this formulation seeks to tell patients that what is at stake is having a “natural” experience. Natural, like organic, conjures up something good, unlike, presumably, something that is unnatural or inorganic. “Allow Natural Death” also adds the word “allow” to imply that if you don’t opt for this course, that is, if you choose CPR, you will be obstructing or preventing something natural from occurring. Never mind that this is precisely the point—what is “natural” in this instance is to die, and CPR is intended to prevent that most unfortunate reality, just as taking insulin to treat diabetes and having bypass surgery to alleviate the symptoms of heart disease are very unnatural but often extremely desirable medical interventions.

So are these verbal permutations a good thing, or are they a kind of sleight-of-mouth, designed to deceive and manipulate? What if the original term—DNR or palliative care, for example—evokes such disgust that patients immediately reject it, whereas the new term—AND or supportive care—has far more positive resonance? I used to buy the bioethics argument that truth-telling is one of the cardinal virtues and a key ingredient of moral medical practice; that failing to tell the patient his diagnosis or his prognosis engenders fear and distrust, not to mention that it is profoundly disrespectful of a person’s autonomy, his individuality, his “right” to know about his own body and his own future. But I’ve been reading some behavioral psychology lately, and I’m not so sure that people make decisions by calmly and systematically weighing the pros and cons of the various alternatives; they seem, rather, to rely heavily on their intuitions. What this perspective suggests is that there is no truly neutral way to present information, that words are powerful (though sometimes images are even more powerful), and that the best we can do is to avoid deliberately misleading patients.

So both “death panels” and “good planning panels” are out because they are not panels and they are not about death; “advance care planning discussions” are more accurate. “DNR” and “AND” are out because they mislead; DNAR is more objectively correct, though it may well have positive associations for some patients and negative associations for others. And I’ll stick with calling what I do providing “palliative care” rather than “supportive care,” though I’m quite willing to define palliative care—if I’m asked—as providing support to patients and families through symptom management, psychosocial support, and advance care planning.

September 22, 2013

No Sense, Lots of Dollars

Twenty-five years ago, discussions of medical futility were all the rage in bioethics circles. The discussions petered out when it became clear that futility was in the eye of the beholder: physicians and patients often had very different ideas about what futility meant, depending on what they hoped medical treatment would accomplish.
   
In one case that generated considerable publicity, physicians sought to turn off the ventilator that was keeping 86-year-old Helga Wanglie alive. They argued that the ventilator was futile treatment since it would never allow Mrs. Wanglie, who was in a persistent vegetative state, to regain consciousness. Mrs. Wanglie’s husband, however, argued that keeping his wife alive—supplying the oxygen that her heart needed to keep on beating—was the goal of treatment. And by that standard, the ventilator was performing admirably. The court to which the physicians presented their case did not address whether the treatment was futile; it merely ruled that Mr. Wanglie was the rightful spokesperson for his wife and his wishes should be followed.

A second problem with futility is that it is a good deal easier to identify after the fact—the patient died, ergo the treatment didn’t work—than in advance. Because futility was proving elusive, medical ethicists stopped talking so much about it and focused instead on ascertaining the patient’s goals of care. The prevailing wisdom came to be that doctors should provide any treatment consistent with those goals. Ethics consultations were used to mediate disputes between families and physicians over whether particular treatments could achieve the desired goals. But physicians continued to be bothered by the nagging feeling that at least some of the treatments they provided were morally wrong: they caused needless suffering as well as outrageous costs without much, if any, benefit. A new study just out puts the futility debate back on the table.

The authors of the study used a focus group of 13 doctors who work in intensive care units, the site of 20% of all deaths in America, to agree on a definition of futility. The group came up with four criteria for assessing a treatment as futile: the patient was imminently dying; the patient could never survive outside an ICU; the burdens of treatment greatly exceeded the benefits; or the treatment could not possibly achieve the patient’s explicit goals. They then asked physicians at a large medical center in Los Angeles to evaluate each of their ICU patients every day and indicate, using these four criteria, whether the care they were providing was futile. In one fell swoop, the authors got rid of the two problems with previous futility studies—sort of. They used a prospective design, asking for evaluations in real time, not after the fact. And they defined futile care, albeit by unilateral decree.
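
For concreteness, here is a minimal sketch of what the study’s daily screen amounts to as a decision rule. The field names, and the assumption that meeting any single criterion is enough to flag a day’s care as futile, are my own illustration, not the paper’s actual instrument (which also allowed a “probably futile” rating, omitted here):

    from dataclasses import dataclass

    @dataclass
    class DailyAssessment:
        # The four focus-group criteria, answered by the attending
        # physician for each ICU patient, each day.
        imminently_dying: bool
        cannot_survive_outside_icu: bool
        burdens_greatly_exceed_benefits: bool
        cannot_achieve_patient_goals: bool

        def is_futile(self) -> bool:
            # Assumption: any one criterion suffices to flag the
            # day's care as futile.
            return (self.imminently_dying
                    or self.cannot_survive_outside_icu
                    or self.burdens_greatly_exceed_benefits
                    or self.cannot_achieve_patient_goals)

    # Example: a patient judged unable ever to leave the ICU
    print(DailyAssessment(False, True, False, False).is_futile())  # True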

Over a 3-month period, the investigators collected data on 1125 patients cared for in one of 5 different ICUs by a total of 36 critical care doctors. They found that 123 patients (11%) were perceived by their physicians to be getting futile treatment at some point during their ICU stay. Another 98 patients (8.6%) got “probably futile treatment.”

What characterized the 123 patients whose doctors were convinced they were getting futile care? Their median age was 67, and 42% were on Medicare. They were older and sicker than the rest of the group. The majority (68%) died before hospital discharge; another 16% died within 6 months; almost all the remainder were transferred to a long-term care facility, dependent on chronic life support. The total cost of futile hospital care for these 123 patients was $2.6 million.

In light of these results, it may be time for critical care specialists to convene a consensus conference to see if they can agree on criteria for futility. Agreement by the majority of doctors who care for ICU patients would carry far more weight than the 13-physician focus group whose opinions formed the basis of the current study. If a majority of the nation’s critical care experts came up with criteria for futility, whether the same ones used in this study or some modification, then Medicare would be in a good position to decide to pay only for clinical care that met the newly defined standard of care.

Medicare would not be dictating what is appropriate care; it would not be interfering in the practice of medicine. Medicare would merely be restricting payment to services of established benefit, just as it does when it pays for a cardiac pacemaker or an implantable defibrillator only if patients meet standard clinical criteria. Patients could still opt for treatment their doctors deemed futile if they were willing to pay for it. At an average cost of $4004/day for ICU care, I wonder how many people would pursue this route.

September 16, 2013

No Place Like Home

In a recent NY Times opinion piece, ethicist, oncologist and health policy guru Ezekiel Emanuel lauds the resurgence of the house call. Emanuel says that house calls are bringing back “real personalized medicine” and, as a nice bonus, they’re saving money. But he fails to address why house calls fell into disfavor in the first place—and what we will need to do if we want to change their reputation as second-rate medicine and promote their use.

House calls are inefficient (at least when they involve clinicians actually traveling to the home rather than making a “virtual” house call by video). Reimbursement for a house call by a primary care physician is modest, though it is greater than for an office visit: the most recent Medicare Physician Fee Schedule reports that the highest possible reimbursement for a home visit to an established patient (someone the doctor has seen before) is $177.50, while the highest reimbursement for an office visit for a similar patient is $141.75. Payment for a procedure like colonoscopy or cataract extraction, by contrast, is 3-5 times greater. But beyond these financial considerations is the crucial recognition that physicians want certainty before they diagnose and treat. This kind of certainty comes from EKGs and blood tests and X-rays, only some of which can conveniently be performed in the home.
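
A back-of-the-envelope calculation shows why the fee differential doesn’t overcome the travel time. Only the two Medicare fees come from the paragraph above; the visit and travel durations are assumptions I have made up purely for illustration:

    # Effective hourly reimbursement: house call vs. office visit.
    # The fees are from the Medicare Physician Fee Schedule figures
    # cited above; the durations are illustrative assumptions.
    HOME_FEE, OFFICE_FEE = 177.50, 141.75

    home_hours = 1.0 + 0.75  # assume a 1-hour visit plus 45 minutes of travel
    office_hours = 0.5       # assume a 30-minute office visit

    print(f"house call:   ${HOME_FEE / home_hours:.2f}/hour")     # ~$101/hour
    print(f"office visit: ${OFFICE_FEE / office_hours:.2f}/hour")  # ~$283/hour

Under these admittedly rough assumptions, the doctor who stays in the office earns nearly three times as much per hour, which is the inefficiency in a nutshell.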

Consider a fictional but typical 85-year-old woman with mild dementia who lives with her daughter and son-in-law. Let’s call her Suzanne. One morning, Suzanne is much more confused than usual. She can’t figure out how to get dressed, even after her daughter lays out her clothes for her. She tries to eat her oatmeal with a fork. She babbles about how her husband will be coming to take her out for lunch, though her husband has been dead for twenty years and just the day before, she and her daughter visited his grave. 

Suzanne’s daughter knows something is terribly wrong. She calls her mother’s physician, who insists that she go to the hospital emergency room for evaluation. The doctors in the ER do a battery of blood tests, looking for chemical imbalances in the blood or evidence of a failing liver or failing kidneys, even though Suzanne has never had liver or kidney problems. While waiting for the results, they do an electrocardiogram, because people who are having a heart attack are sometimes very confused, even though Suzanne has never had heart trouble. The electrocardiogram is normal. After two hours, all the blood tests come back normal as well. And just to be sure that Suzanne has not had any bleeding in the brain, she goes for a CT scan of the head, even though she has not fallen and brain bleeds of the kind the doctors are looking for almost always result from a fall. After Suzanne has been in the ER for 6 hours, the doctors conclude that the most likely cause of her confusion is a urinary tract infection, since a urinalysis shows some abnormalities, though a confirmatory culture will not be available for another 2 days. They send her home on oral antibiotics.

The reality is that Suzanne could have been diagnosed and treated at home. It would have been a good deal cheaper: in 2006, Medicare paid an average of $651 for an emergency room visit compared to $180 for an office visit, and the mean emergency department charge for a urinary tract infection was a stunning $2398. It would also have been far less burdensome to Suzanne, who became even more agitated lying on the stretcher in the hospital, and to her daughter, who took off a full day of work to be with her mother in the ER. Her doctor could have avoided sending Suzanne to the hospital. He could have made a house call, checking by physical examination for various possible explanations for her acute confusion such as severe constipation, bruising on her face or head indicating a recent fall, or abnormally low blood pressure. He could have arranged for simple lab tests to be done in her home, including a urinalysis and basic blood chemistries. He could have started Suzanne on oral antibiotics, treating her for the most likely cause of her problem, while waiting for the results. Or he could have sent a visiting nurse to the home and relied on her assessment of Suzanne. Odds are he would have concluded that the most likely cause of the confusion was a urinary tract infection—especially if he knew that the last few times Suzanne had developed worsening confusion, that’s exactly what the problem had been.

But he couldn’t be sure that she wasn’t among the few percent of older patients with something else wrong, something serious. And even finding evidence of infection in the urine wouldn’t have proved that it was really the cause of the confusion—almost half of older women routinely have bacteria in their urine, with no discernible effect on their well-being. So to be certain that Suzanne really had just a urinary tract infection, her physician had to order all those other tests, such as the CT scan, and start treatment only after he had all the results.

Home visits for certain kinds of patients, such as frail elders, can be very beneficial. As Dr. Emanuel points out, innovative home care programs such as the Johns Hopkins Hospital-at-Home program can deliver high-quality results and save money. But if we want to see more house calls, we will need to modify the prevailing culture in which both physicians and patients regard certainty as the gold standard of medical care. We need to recognize that achieving certainty comes at a cost—both in dollars and in the sometimes dangerous and often burdensome tests and procedures to which patients are exposed. Physicians will need to talk with patients or their caregivers about how best to balance the risks and benefits of maximizing certainty.

September 08, 2013

Playing Games

It’s not often that a “research letter,” a short, preliminary report about ongoing research, makes it into the national media. But this week, newspapers picked up on just this kind of article from Nature, a prominent science journal. The article tentatively concluded that people aged 60-85 who practiced a custom-designed video game several hours a week got better at multitasking. Not only that, but the improvement persisted 6 months later and was manifest not just in better performance on the game but in other measures of attention and memory. So is it time for octogenarians to start playing video games with their grandchildren?

Even before the University of California San Francisco lab published its NeuroRacer results, online companies like Lumosity were doing a booming business. Calling itself a “brain training and neuroscience research company,” Lumosity creates computer-based games that ostensibly offer a “scientifically proven brain workout.” It reported a 150% increase in business between 2012 and 2013, with 35 million users worldwide by January of this year and as many as 100,000 new subscribers each day. Clearly, people want to believe that playing mind games will keep them sharp and perhaps even fend off dementia. 

To be fair, the authors of the study in Nature aren’t proposing anything of the kind. They offer their work as an illustration of the “plasticity” of the “prefrontal cortex,” or the ability of the brain to adapt with practice, even at older ages. But do mind exercises translate into useful improvements—as opposed to better scores on simple tests? And at least as important, if mind exercises are effective, what about singing in a chorus? Participating in a discussion group? Writing a letter-to-the-editor? The new study compared volunteers (hardly a random selection of the population) who played the video game to other volunteers who did not; it did not compare playing the video game to other activities. 

What’s wonderful about these other pastimes—playing music, arguing, writing—is that they are fulfilling in and of themselves, whatever their cognitive benefit. Social engagement helps prevent depression; it gives people a sense that they matter. Perhaps it’s harder to study the effects of making music than to measure the EEG (brain wave) correlates of playing video games; after all, playing Beethoven may be different from playing Mozart, trios may be more challenging than duets, and playing the piano may not be equivalent to playing the clarinet. It’s certainly a great deal easier to monetize a video game than a social network that helps older people find others with shared interests. 

Researchers should keep on studying highly standardized, precise activities. But for now, I’d take my chances with the real world, not the virtual world.

September 01, 2013

Why We Work

With Labor Day rapidly approaching, I began wondering about older people in the workforce. Just how many people over 65 work? What about over 75? How is this changing? And what does work mean for older individuals?

Of course, 65 is an arbitrary way to define old age. Most people who turn 65 are not old in any meaningful sense—they are certainly nowhere near the end of life: they can expect to live another 19.1 years. For women, life expectancy at age 65 is greater still: 20.3 years. Even age 75 is no longer very old, with a life expectancy of another 12.1 years. Moreover, as I pointed out in my last blog posting, roughly half those years are “disability-free.” But Social Security kicks in at 65 and so does Medicare, so this continues to mark the conventional threshold between working and retirement.

It turns out that a substantial and rising proportion of the population continues to work after their 65th birthdays. US Census Bureau projections for 2014 are that just under one in five people over age 65 will be working, a 36% increase in just 5 years. For the 65-to-74-year-old group, it will be slightly over one in four, and for those over 75, a little under 10%. Roughly half of those who continue to work will do so pretty much full time; about one-third will work 15-34 hours a week, with the remainder working 14 hours or less.

The US is not the only developed nation to see a marked increase in older workers. England has experienced a surge of older workers, with numbers topping a million this spring: in 2013, 57% of people who reached the official retirement age said they planned to continue working, compared to 40% a year earlier. 

Some of the change is a direct consequence of the recession. The value of retirement plans that were tied up in the stock market took a huge hit, and with it came the realization by many people that they didn’t have enough money saved up to retire at 65. They also stood to lose employer-sponsored health insurance—along with their main source of identity.

What I found fascinating is that there’s a lot of advice available for prospective retirees about where to live, how to save for retirement, and how to make your money last after you do retire, but not much, as a recent article in Time pointed out, about how to make the most of the post-65 period, with or without a job. The pundits encourage everyone to eat well, remain active, and nurture close personal relationships before they turn 65 in the hope of remaining healthy, but they are silent about what to actually do with their lives if they succeed.

My personal advice—and I wrote about this in my book, The Denial of Aging: Perpetual Youth, Eternal Life and Other Dangerous Fantasies, in the chapter “Making the Most of the Retirement Years”—is to concentrate on finding meaning in life. If work gives you a sense of meaning and you’re able to keep at it, then do it. If work doesn’t give you a sense of meaning, or if you can no longer continue what you’ve been doing, then it’s best to find something else that gives you that all-important sense of being part of the human community and making a contribution to the world. And it’s the job of the rest of us to make sure there are ample opportunities to do just that.

August 26, 2013

If 75 is the new 40, what's 85?

You’ve probably heard repeatedly that 70 is the new 40, and perhaps also that 80 is the new 65. If that’s true, then quality of life for people who used to be considered old should be much better than it once was. It turns out that the perception that things are better for the elderly is true—but only sort of.

A recent study by the well-known health economist David Cutler and his colleagues carefully analyzes data from the Medicare Current Beneficiary Survey, a rich source of information about the health and welfare of all 47 million Americans enrolled in Medicare. After painstaking study, the authors conclude that the “compression of morbidity” is for real: Americans truly have more years of life without disability today than they did 20 years ago. Someone who turned 65 in 1991 could anticipate living another 17.5 years, of which exactly half were spent with disability. Someone who turned 65 in 2003, by contrast, could look forward to living 18.2 years, of which fully 10.4 years would be disability-free, leaving 7.8 years of disability.
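
The arithmetic is worth spelling out, since the 1991 figures are given only as “half.” Everything below is quoted from the study as described above except the derived 1991 split:

    # Compression of morbidity: life expectancy at 65, split into
    # disability-free years and years lived with disability.
    le_1991 = 17.5
    free_1991 = le_1991 / 2              # "exactly half" => 8.75 good years
    disabled_1991 = le_1991 - free_1991  # 8.75 years with disability

    le_2003, free_2003 = 18.2, 10.4
    disabled_2003 = le_2003 - free_2003  # 7.8 years with disability

    print(f"disability-free years: {free_1991:.2f} -> {free_2003:.1f}")
    print(f"disabled years:        {disabled_1991:.2f} -> {disabled_2003:.1f}")

In other words, over those twelve years the average 65-year-old gained about 1.65 disability-free years while shedding almost a full year of disability: morbidity really was compressed, not just postponed.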

But all these averages (the average life expectancy at 65, the average number of disability-free years) hide an important truth. What a given person will actually experience, just how much impairment he has in his final months or years, depends on what medical conditions he has. In fact, for some people, the experience of old age has gotten a great deal better; for others, it has gotten far, far worse.

Roughly speaking, people follow one of three possible paths in the last year or two of life, and a similar pattern may well describe what happens in the last 5 or more years of life. One group of people die quite quickly and do well until the very end; these include people who have the most common forms of cancer and those who die very suddenly, perhaps from an accident or a heart arrhythmia. People in this group tend to die at a relatively young age and account for about 20% of all people who die. A second group of people have chronic organ system failure, for example congestive heart failure or chronic obstructive pulmonary disease, and have a course of slow decline, punctuated by periods of acute worsening followed by improvement. They do pretty well until the final 6-12 months of life and account for another 25% of the population. A third group of people have poor long-term function and a slow decline, either because of dementia or because of that nebulous condition known as frailty, in which multiple interacting medical problems interfere with daily activities. These include many of the oldest old and constitute 40% of deaths. These three percentages add up to only 85 because the remaining 15% of deaths cannot readily be classified into any of the three main groups.

What this means is that if you are in Group Three, what you will experience is not a “compression of morbidity” but a long, drawn-out period of decline. And the reality is that this third group, which is composed largely of people with dementia, is going to grow as the other groups shrink.

It’s already happening. Between 1997 and 2007, the death rate from heart disease fell 25%. Many people with heart disease are in that middle group who have pretty good functioning until their disease gets so severe that it gets in the way of what they want to do, though some have other diseases as well and are in the third, frail, group. So improvements in the prevention and treatment of heart disease (interventions such as exercise, diet, medications, and pacemakers) have meant fewer people dying of heart problems. But in the same ten-year period, the death rate from Alzheimer’s disease increased by 50%. And all those people are in Group Three, the ones with the slow fade.

So are things better or worse for older people? Maybe that's the wrong question. Maybe the answer is, it depends. 

August 18, 2013

Dementia Redux

A few months ago I wrote about my father’s experience in the nursing home where he lives, commenting on the difficulty of implementing the “culture change movement” that is supposed to promote quality of life for residents. Many readers responded that they, too, had been disappointed with attempts by nursing homes to improve care for people with dementia. Since then, I've been keeping my eyes peeled for studies that examine what approaches to nursing home care actually make a difference for residents.

This month I found an article in a major geriatrics journal that asked a related question: what characteristics of residential facilities are associated with better health outcomes and better psychosocial outcomes for residents with dementia? The authors looked at organizational characteristics (for-profit vs non-profit, urban vs rural, special care units vs no special care units, nursing homes vs assisted living, culture change vs conventional), structures of care (staffing level, proportion of private rooms, staff expertise), and processes of care (activity programs, family involvement, resident-centered care). What was shocking about this report is that although the investigators reviewed 6209 articles written between 1990 and 2012, they found only 14 that met even the rudimentary scientific standards needed to be included in their analysis (for example, a study had to have enough cases to allow the authors to draw meaningful conclusions, and it needed to compare two different strategies used in otherwise similar facilities so the investigators could figure out whether one strategy was better than the other). Of the 14 studies the authors identified, 10 reported specifically on psychosocial outcomes, the issues that concern me most. These 10 studies showed that “person-centered care,” which is at the heart of the culture change movement, did lead to slight improvement in well-being. Overall, however, quality of life was pretty much the same (and not very high) in all the facilities studied, regardless of whether there were private rooms or special activities and whether or not the nursing home was for-profit.

In most nursing homes, unfortunately, the relevant question is far more basic than whether pets or plants or "therapeutic touch" can make a difference for residents. I learned this week from an article in the Boston Globe that my own state of Massachusetts is hoping for the first time to require residential facilities with dementia “special care units” to actually give specialized training to their staff. Right now, dementia care is principally provided by certified nursing assistants (CNAs) and, to a lesser extent, by registered nurses. To become a CNA in Massachusetts, you have to take 75 hours of coursework and have 100 hours of hands-on training in subjects such as giving a bath and taking a blood pressure. A CNA training program, which typically lasts 2-6 months, does not necessarily include much about dementia. Once a CNA is hired in a nursing home, he or she is assumed to have adequate expertise to care for all residents and, until now, no additional training has been mandated.

A total of about 1.7 million people live in nursing homes in the US, of whom 70% have dementia. Another 1.2 million people live in some other form of residential care facility, such as assisted living, of whom 42% have dementia. So it is reassuring that Massachusetts is likely to join the 16 other states that mandate some kind of training for direct care workers in facilities that claim to provide specialized dementia care. It's frightening that this new regulation will mean that workers will receive a mere 8 hours of training initially and 4 additional hours each year; acquiring real expertise in dementia care would surely require at least two or three times as many hours. It's also distressing that nursing home administrators immediately responded to the proposed regulations by protesting that they cannot possibly afford to spend so much time teaching their staff such essentials as gentleness, patience, and tolerance of repetition or techniques for handling such common problems as paranoia, agitation, and wandering.

We have come a long way since great muckraking books like Tender Loving Greed exposed the nursing home industry nearly 40 years ago. We still have a long way to go.