May 31, 2015

Finding Our Way

The last year of life is often filled with trips to the emergency room, admissions to the hospital—frequently the ICU—and multiple visits to medical specialists. The treatments patients endure during that final year are burdensome, invasive, and costly. And in the end, they die anyway.

The problem with this kind of analysis is that it starts with the time of death and works backwards. But we don’t know in advance who is going to die. What about all the people who undergo aggressive treatments and don’t die? Isn’t it possible that they live longer, and sometimes better, because of all those doctors and hospitals? We will all die eventually and the very old will die sooner rather than later. The challenge is to predict how we will get from here to there so that we can make reasonable choices along the way. A new study in the BMJ offers a possible means of figuring that out.

We’ve known for some time that older people follow different trajectories near the end of life, and that a useful way to characterize those trajectories is by the extent of dependence and disability. A rough approximation of what happens is:

[Figure omitted: schematic of end-of-life disability trajectories.]

A more refined description suggests that there are five distinct “trajectories of disability” in the last year of life and that particular medical conditions—heart failure, cancer, or frailty—do not alone determine the path. The new study indicates that a powerful determinant of the path, independent of the medical condition that proves to be the cause of death, is hospitalization.

The authors had the opportunity to analyze data from an ongoing longitudinal study of 754 community-dwelling people aged 70 or older who were initially independent in four essential activities of daily living: bathing, dressing, walking, and transferring (going from lying to sitting and from sitting to standing). A comprehensive home-based assessment was conducted at baseline for every participant and then every 18 months for more than ten years, supplemented by telephone interviews along the way. The evaluation included mental status, chronic conditions, and physical performance. Data were available on 582 decedents.

Using a statistical procedure called “trajectory modeling,” a form of latent class analysis, the authors ended up expanding their earlier classification of five trajectories to six. At one extreme is the total absence of disability in the year prior to death (17.2% of decedents). At the other extreme is persistent severe disability (28.1%): marked disability already present a full year before death that never got any better. In between are catastrophic disability (11.1%), in which a patient becomes acutely disabled, for example from a stroke, and three forms of progressive disability: accelerated disability (9.6%), progressive mild disability (11.1%), and progressive severe disability (23%).
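
The paper’s actual model is considerably more elaborate than anything I can show here, but the underlying idea of latent class analysis can be sketched simply: treat each decedent’s month-by-month disability scores as a vector and let a mixture model sort the vectors into a small number of trajectory classes. Here is a minimal, hypothetical illustration (entirely synthetic data, with a Gaussian mixture standing in for the paper’s latent class trajectory model):

```python
# Rough sketch only: synthetic data, with scikit-learn's Gaussian mixture
# standing in for the paper's latent class trajectory model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
months = np.arange(12)  # the last 12 months of life

def simulate(n, start, slope, noise=0.5):
    """Simulate n patients whose count of disabled ADLs (0-4) drifts linearly."""
    base = start + slope * months
    return np.clip(base + rng.normal(0, noise, (n, 12)), 0, 4)

# Three made-up trajectory shapes: no disability, progressive, persistent severe.
X = np.vstack([
    simulate(50, start=0.0, slope=0.0),
    simulate(50, start=0.5, slope=0.3),
    simulate(50, start=3.5, slope=0.0),
])

# Fit the mixture and assign each patient to a latent trajectory class.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
for k in range(3):
    print(f"class {k}: n={(labels == k).sum()}, "
          f"mean monthly disability = {gmm.means_[k].round(1)}")
```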

The striking result of the analysis is that without exception, the course of disability closely tracked hospitalization. No matter how the authors adjusted their analysis to account for possible confounders, the results remained unchanged. For every trajectory, being admitted to the hospital in a given month had a strong, independent effect on the severity of disability.

Now it’s possible that it was the acute problem leading to hospitalization, not the ensuing hospitalization, that caused the functional decline. The conclusion may be that we need to redouble our efforts to make hospital care for older people better, building on the modest progress we have made to date with ACE (acute care for elders) units and fall prevention protocols. Or the conclusion might be, as the authors suggest, that for patients admitted to the hospital with progressive severe disability or with persistent severe disability, it would be best to offer a palliative approach to medical care.

Whatever else we take away from this intriguing study, we should recognize what was perhaps obvious all along: it is often difficult and frequently impossible to predict from a single point in time what a given patient’s trajectory will look like. But if we consider two or three points in time and ask what the patient’s function is like over time, we can have a far better idea. Just as we cannot determine the slope of a line from just one point but we can calculate the slope from any two points—and we need more points to define more complicated curves—so, too, will we do better at prognosticating if we see patients as dynamic rather than static.
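
To make the line analogy concrete, here is a minimal sketch (with made-up numbers) of fitting a trend to repeated functional assessments:

```python
# Minimal sketch of the "two points give a slope" idea: fit a trend line to
# repeated functional assessments. All numbers here are hypothetical.
import numpy as np

# Months at which function was assessed, and the number of activities of
# daily living (ADLs) in which the patient was dependent at each visit.
months = np.array([0, 6, 12, 18])
adl_dependencies = np.array([0, 1, 1, 3])

# One point leaves the slope undefined; two or more let us fit a
# least-squares line whose slope crudely summarizes the trajectory.
slope, intercept = np.polyfit(months, adl_dependencies, deg=1)
print(f"about {slope:.2f} new ADL dependencies per month")  # ~0.15
```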

May 27, 2015

Getting from Here to There

We all have the same final destination; the next blog post will discuss how we get from here to there.

May 25, 2015

Sneak Preview

Demographically, the US in 2050 will look much the way Germany and Italy do today: 20% of the population will be over age 65. Comparing the attitudes and beliefs of Germans, Italians, and Americans toward elder caregiving, as a new Pew Research Center report does, can give us a glimpse of our future.

The facts are intriguing. Twice as many Italians and Germans as Americans feel that government should bear the greatest responsibility for economic well-being in old age. This reflects today’s reality: in the US, 38% of the income of those over 65 comes from government sources such as Social Security, whereas in Germany and Italy 70% comes from public funds. It may also reflect the fact that there are fewer young people in Germany and Italy to bear the burden of caring for the older generation. The old-age dependency ratio in both European countries is 30, which means there are 30 older adults for every 100 “working age” adults, defined as ages 15-64 (even though 15 is seldom working age in these societies); in the US today, the ratio is 19.5.
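
The ratio itself is simple arithmetic, as a quick sketch with invented population counts shows:

```python
# Old-age dependency ratio: adults 65+ per 100 "working age" adults (15-64).
# The population counts below are invented purely for illustration.
def old_age_dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    return 100 * pop_65_plus / pop_15_to_64

print(old_age_dependency_ratio(pop_65_plus=20_000_000,
                               pop_15_to_64=66_700_000))  # ~30.0
```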

What I found particularly striking is that older Americans continue to earn money from work to a much greater extent than their European counterparts: 32% of the income of elderly Americans derives from work, compared with 20% for Italians and only 13% for Germans. It seems that Americans work more, and depend to a greater extent on working for their identity as well as for their income, than Europeans, who also have shorter work weeks and take more vacation time. And today’s Americans are far more likely to have a private pension of some kind, for example from an employer, than the Germans or Italians: 30% of American retirees receive private pension benefits, compared to 13% of Germans and 7% of Italians.

Those under 65 in all three countries have one belief in common: they are skeptical as to whether government old age benefits will be available to them when they retire. Paradoxically, Italians, who currently depend most heavily on the government for financial support in old age, are more convinced that adult children are obligated to provide financial help to their aging parents (87% assert this) than Americans (76%) or Germans (58%).

It’s sobering to note that though the US elderly are much better off than they were before the introduction of Medicare in 1965, fully 20% of older Americans are poor, twice the rate in Germany or Italy. It’s also disturbing that the generous private pensions Americans received in the past are vanishing, as is employer-provided supplementary health coverage. American culture maintains an ethic of individual and family responsibility but is gradually eroding the support, both private and governmental, that makes such responsibility possible.

If we want to focus on the family as the locus of care (and we shouldn’t kid ourselves into believing that older people won’t need care), we need to make sure that we develop rather than destroy the infrastructure in which those caregivers operate. That means more flexible and part-time job options for caregivers (as well as for older people themselves) and technology that helps caregivers monitor remotely. It means developing a cadre of workers who can supplement the services provided by families and earn a decent wage doing so. It involves providing respite for caregivers so they can get mental health breaks and go on vacation. It involves nothing less than a societal makeover.

May 15, 2015

Dissing the Elderly

Roughly every ten years since 1961, the White House has convened a Conference on Aging. It’s an opportunity for leaders in the field of geriatrics, as well as senior advocates and community representatives, to articulate their vision of how best to ensure that older Americans can lead dignified, meaningful, and healthy lives. We’re due for a WHCOA this year. But it’s the middle of May and the conference, while promised, hasn’t yet been scheduled. What’s going on?

It’s very simple. Congress hasn’t allocated the funds. The framework for the Conference has been established by legislation, and the legislation in question is the Older Americans Act. The problem is that Congress hasn’t re-authorized the Older Americans Act.

Failure to re-authorize the Older Americans Act doesn’t just mean undermining the White House Conference, which this year was supposed to focus on proposals to ensure retirement security, healthy aging, long term care services, and elder justice. It also means imperiling all the other programs supported by the Older Americans Act. The Act created a federal Administration on Aging and regional Area Agencies on Aging, which provide funding for nutrition programs, congregate housing, and community services.

Our just-say-no Congress has evidently decided that the creed of personal responsibility extends to older people as well as to the poor, the disabled, and other vulnerable groups in our society. Medicare is too popular to roll back but other supportive services for older Americans can evidently be cut with impunity. After all, the people whose homes and whose meals are in jeopardy are poor, they often live in rural communities, and many are ethnic minorities whose first language is not English. They need others to speak out for them. So write to your Congressman. As Mahatma Gandhi said, a nation’s greatness is measured by how it treats its weakest members.

May 14, 2015

WHOA!! WH(C)OA coming soon--or is it?

Coming soon: blog post on 2015 White House Conference on Aging

May 03, 2015

The Deciders

Shared decision-making has become something of a sacred cow in medicine, even though few physicians actually practice it. There is certainly evidence that patient participation in discussions about their health care and patient engagement in self-care lead to better outcomes, as well as to greater patient satisfaction. Promoting patient autonomy requires that patients play a role in shaping their fate. So some kind of patient involvement is decidedly a good thing. But shared decision-making, I argue in an article just published in the Journal of Medical Ethics, needs to be re-engineered for it to work in practice.

To be sure, not everyone means exactly the same thing by shared decision-making. But most of the definitions look like this one, offered by a leading proponent and expounder of the model: shared decision-making is an approach in which the patient receives information about available treatment options (including their risks and benefits), the clinician and the patient consider each one in light of the patient’s situation, goals, and preferences, and the two parties jointly select the best option.

The focus of all this deliberation is the selection of a treatment. Which treatment to provide is what the doctor needs to know. And the reason for involving patients in the decision-making is that, unlike selecting which antibiotic to use for pneumonia, which is strictly a technical decision, deciding whether to use chemotherapy or radiation for cancer, or whether to use fourth-line chemotherapy or hospice when the cancer progresses to a very advanced stage, depends on the patient’s values. But what I suggest in my essay is that respecting patient autonomy requires eliciting those values. Once the doctor understands the patient’s goals, once he or she knows what is most important to the patient, that information constitutes data that go into the decision about treatment along with other data about outcomes, side effects, and probabilities. To ask the patient to draw conclusions about which treatment is best, rather than to have the doctor make a recommendation based on a whole raft of information that includes the patient’s input about goals and values, makes no more sense than handing a patient data about antibiotics and expecting him to select which one to take for his pneumonia.

I argue that despite several decades of work seeking to overcome the barriers to shared decision-making—barriers such as cognitive biases, innumeracy, and health illiteracy—and despite evidence that sophisticated decision aids can help patients, most doctors and patients don’t like the conventional approach to shared decision-making and don’t use it. Even medical ethicists who believe strongly in honoring patient autonomy and who have traditionally advocated shared decision-making balk when they themselves or their family members develop cancer and the physician tries to implement shared decision-making. 

The approach I advocate doesn’t go back to the older paternalistic model in which physicians decree and patients obey. Rather, it reformulates the way shared decision-making takes place: what needs to be shared is the process of determining the patient’s goals of care, not the process of deciding how to translate those goals, along with other highly technical information, into a treatment decision.