May 27, 2015

Getting from Here to There

We all have the same final destination. My next blog post will discuss how we get from here to there.

May 25, 2015

Sneak Preview

Demographically, the US in 2050 will look much the way Germany and Italy do today: 20% of the population will be over age 65. Comparing the attitudes and beliefs of Germans, Italians, and Americans toward elder caregiving, as a new Pew Research Center report does, can give us a glimpse of our future.

The facts are intriguing. Twice as many Italians and Germans as Americans feel that government should bear the greatest responsibility for economic well-being in old age. This reflects today’s reality: in the US, 38% of the income of those over 65 comes from government sources such as Social Security, whereas in Germany and Italy 70% comes from public funds. It may also reflect the fact that there are fewer young people in Germany and Italy to bear the burden of caring for the older generation. The old-age dependency ratio in both European countries is 30, which means there are 30 adults over 65 for every 100 “working-age” adults, defined as ages 15-64 (even though 15 is seldom working age in these societies); in the US today, the ratio is 19.5.
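
For readers who want to see the arithmetic behind the ratio, it is simply the 65-and-over population divided by the 15-64 population, multiplied by 100. Here is a minimal sketch; the population figures are hypothetical, chosen only to illustrate the calculation.

```python
def old_age_dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    """Older adults (65+) per 100 'working-age' adults (defined here as ages 15-64)."""
    return 100 * pop_65_plus / pop_15_to_64

# Hypothetical population counts (in millions), for illustration only:
# 30 million adults over 65 against 100 million adults aged 15-64 gives a ratio of 30,
# roughly the figure the Pew report cites for Germany and Italy.
print(old_age_dependency_ratio(30, 100))   # 30.0
```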

What I found particularly striking is that older Americans are more likely to keep earning money from work than their European counterparts: 32% of the income of elderly Americans derives from work, compared to 20% for Italians and only 13% for Germans. It seems that Americans work more—and depend to a greater extent on working for their identity as well as for their income—than Europeans, who also have shorter work weeks and take more vacation time. Today’s older Americans are also far more likely to have a private pension of some kind, for example from an employer, than Germans or Italians: 30% of American retirees receive private pension benefits, compared to 13% of Germans and 7% of Italians.

Those under 65 in all three countries have one belief in common: they are skeptical as to whether government old age benefits will be available to them when they retire. Paradoxically, Italians, who currently depend most heavily on the government for financial support in old age, are more convinced that adult children are obligated to provide financial help to their aging parents (87% assert this) than Americans (76%) or Germans (58%).

It’s sobering to note that though the US elderly are much better off since the introduction of Medicare in 1965, fully 20% of older Americans are poor—twice the rate in Germany or Italy. It’s also disturbing that the generous private pensions Americans received in the past are vanishing, as is employer-provided supplementary health coverage. American culture maintains an ethic of individual and family responsibility but is gradually eroding the support, both private and governmental, that makes that possible. 

If we want to focus on the family as the locus of care—and we shouldn’t kid ourselves into believing that older people won’t need care—we need to develop, rather than destroy, the infrastructure within which those caregivers operate. That means more flexible and part-time job options for caregivers (as well as for older people themselves) and technology that helps caregivers monitor remotely. It means developing a cadre of workers who can supplement the services provided by families and earn a decent wage doing so. It means providing respite for caregivers so they can get mental health breaks and go on vacation. It amounts to nothing less than a societal makeover.

May 15, 2015

Dissing the Elderly

Every ten years since 1961, the White House has convened a Conference on Aging. It’s an opportunity for leaders in the field of Geriatrics as well as senior advocates and community representatives to articulate their vision of how best to assure that older Americans can lead dignified, meaningful, and healthy lives. We’re due for a WHCOA this year. But it’s the middle of May and the conference, while promised, hasn’t yet been scheduled. What’s going on?

It’s very simple: Congress hasn’t allocated the funds. The framework for the Conference is established by legislation, the Older Americans Act, and the problem is that Congress hasn’t re-authorized it.

Failure to re-authorize the Older Americans Act doesn’t just mean undermining the White House Conference, which this year was supposed to focus on proposals to ensure retirement security, healthy aging, long-term care services, and elder justice. It also means imperiling all the other programs the Act supports. The Act created a federal Administration on Aging and regional Area Agencies on Aging, which provide funding for nutrition programs, congregate housing, and community services.

Our just-say-no Congress has evidently decided that the creed of personal responsibility extends to older people as well as to the poor, the disabled, and other vulnerable groups in our society. Medicare is too popular to roll back but other supportive services for older Americans can evidently be cut with impunity. After all, the people whose homes and whose meals are in jeopardy are poor, they often live in rural communities, and many are ethnic minorities whose first language is not English. They need others to speak out for them. So write to your Congressman. As Mahatma Gandhi said, a nation’s greatness is measured by how it treats its weakest members.

May 03, 2015

The Deciders

Shared decision-making has become something of a sacred cow in medicine, even though few physicians actually practice it. There is certainly evidence that patient participation in discussions about their health care and patient engagement in self-care lead to better outcomes, as well as to greater patient satisfaction. Promoting patient autonomy requires that patients play a role in shaping their fate. So some kind of patient involvement is decidedly a good thing. But shared decision-making, I argue in an article just published in the Journal of Medical Ethics, needs to be re-engineered for it to work in practice.

To be sure, not everyone means exactly the same thing by shared decision-making. But most of the definitions look like this one, offered by a leading proponent and expounder of the model: shared decision-making is an approach in which the patient receives information about available treatment options (including their risks and benefits), the clinician and the patient consider each one in light of the patient’s situation, goals, and preferences, and the two parties jointly select the best option.

The focus of all this deliberation is the selection of a treatment. Which treatment to provide is what the doctor needs to know. And the reason for involving patients in the decision-making is that, unlike the choice of which antibiotic to use for pneumonia, which is strictly a technical decision, the choice between chemotherapy and radiation for cancer, or between fourth-line chemotherapy and hospice when the cancer is very advanced, depends on the patient’s values. What I suggest in my essay is that respecting patient autonomy requires eliciting those values. Once the doctor understands the patient’s goals, once he or she knows what is most important to the patient, that information becomes data that goes into the treatment decision along with other data about outcomes, side effects, and probabilities. Asking the patient to draw conclusions about which treatment is best, rather than having the doctor make a recommendation based on a whole raft of information that includes the patient’s input about goals and values, makes no more sense than handing a patient data about antibiotics and expecting him to select which one to take for his pneumonia.

I argue that despite several decades of work seeking to overcome the barriers to shared decision-making—barriers such as cognitive biases, innumeracy, and health illiteracy—and despite evidence that sophisticated decision aids can help patients, most doctors and patients don’t like the conventional approach to shared decision-making and don’t use it. Even medical ethicists who believe strongly in honoring patient autonomy and who have traditionally advocated shared decision-making balk when they themselves or their family members develop cancer and the physician tries to implement shared decision-making. 

The approach I advocate doesn't go back to the older paternalistic model in which physicians decree and patients obey; rather, it reformulates the way shared decision-making takes place by suggesting that what needs to be shared is the process of determining the patient’s goals of care, not the process of deciding how to translate those goals, along with other highly technical information, into a treatment decision.

April 27, 2015

Kissing Consent

Last week was a tough week for humankind. My heart goes out to the hundreds of refugees and would-be immigrants who drowned trying to flee oppression, war, and poverty, and to the thousands of Nepalese who died or lost everything just because they were in the wrong place at the wrong time. But I also ache for the 78-year-old Iowan man accused of rape—and mercifully exonerated—after being intimate, in some form or another, with his demented wife.

It’s always dangerous to discuss a case based exclusively on information from the media or arguments made in court. So I won’t presume to know what actually happened between Mr. and Mrs. Rayhons in her nursing home bed. But I do worry about the well-being of all those who have dementia. And I think that seeking to deprive people with dementia of one of the few pleasures they may still be able to experience in life is tragic.

The pundits have been pontificating that the Rayhons case is all about informed consent. But is it? Since when is informed consent required for anything other than a medical procedure or a research study? The case seems to me to be more about the quality of life of individuals with diminished cognitive capacity, and about the medicalization of society, than about consent.

Informed consent is a tremendously important concept in medical treatment and medical research. One hundred years ago, future Supreme Court Justice Benjamin Cardozo ushered in the modern era of informed consent when he stated forcefully that “Every human being of adult years and sound mind has a right to determine what shall be done with his own body”—and that performing surgery without consent was assault and battery.

The need for informed consent, and the devastating consequences of performing medical experiments without it, were brought home after World War Two, when Nazi experimentation on hapless prisoners was revealed. The Nuremberg Code, promulgated in 1947 in response to the horrors inflicted in the name of science, established voluntary consent as essential to the ethical conduct of research. A mere twenty years later, the anesthesiologist Henry Beecher divulged to the American medical community that research without informed consent was occurring with alarming frequency. As a result, a regulatory framework was put in place to assure ethical conduct by physician investigators, at least among those applying for NIH funding of their work. Voluntariness is the bedrock of informed consent. But what does all this have to do with intimacy between two members of a married couple?

Not much. Sexual intercourse is not a medical procedure. Deciding whether or not to engage in intimacy is not like deciding whether to participate in medical research. It is something that is normal and expected within the context of the marital relationship. Nor do we expect people with Alzheimer’s disease to sign an informed consent form before they have dinner, acknowledging that they are aware of the risks and benefits of the meal they are about to eat. We don’t ask people with Alzheimer’s disease to formally agree to wear a coat when it’s cold out. What business do doctors, nurses, and nursing home administrators—let alone courts and juries—have interfering in the private relationship between two adults? To be sure, not all sexual relationships are voluntary, even within a marriage. People who have dementia are vulnerable and need protection against abuse. But to apply the standard of informed consent to everyday life, a standard meant for people undergoing cardiac catheterization or surgery, or for taking experimental medications of no proven benefit, is profoundly misguided.

We need to broaden our view of how best to approach people with dementia to go beyond a narrow focus on safety. Of course physicians and nurses and social workers want those in our care to be safe. But many vulnerable older people do not consider safety their paramount concern. They would rather take the risk of falling than be confined to bed. (It’s worth noting that while conventional restraints do not necessarily prevent falls and are in fact associated with injury, surely it would be possible to tie people down using four-point restraints in such a way that they could not possibly get up and fall.) They would rather live on their own, even though they might become ill with no one there to help, than forgo their independence. The interest of many, quite likely most, frail or demented older people is in maximizing their quality of life. And that means interaction with other human beings.

We know that people with dementia experience emotions long after they have lost much of their cognitive function. We know they respond to a smile or a hug even if they cannot tell you the full name and birth date of the person who is hugging them. Surely depriving people with dementia of the possibility of intimacy, if they are fortunate enough to be in a relationship with a loving spouse, is cruel and unusual punishment. There are other, better ways to protect the vulnerable.

April 20, 2015

Helping the Other Half

There’s nothing much new in the latest edition of the Alzheimer’s Association’s Alzheimer’s Disease Facts and Figures, which came out last month. Once again, the report documents that about 43% of people age 75-84 have Alzheimer’s. Since no more than 75% of all dementia cases are due to Alzheimer’s disease, that means that over half of people in this age group have dementia of one kind or another (43% ÷ 0.75 ≈ 57%).

The proportion varies depending on how dementia was assessed and on the ethnic, geographic, and gender composition of the people studied. But despite an enormous amount of effort and much progress in understanding the pathophysiology of Alzheimer’s disease, the Alzheimer’s Association reminds us that there is no drug available today that slows or stops the death of neurons that causes Alzheimer’s disease.

The federal government’s National Alzheimer’s Project Act and Obama’s BRAIN (Brain Research Through Advancing Innovative Neurotechnologies) initiative are redoubling the effort to find a cure. That would be terrific—though we should remember that the last concerted effort to wipe out a chronic disease, the War on Cancer, was launched in 1971 and is still being fought.

But what about the other half—those older people who don’t have dementia? Is there anything that medicine should be doing special for them?

A new report from the well-respected, non-governmental, non-profit Institute of Medicine, released last week, addresses this question. Entitled Cognitive Aging: Progress in Understanding and Opportunities for Action, the report makes ten recommendations for collecting data, engaging in research, developing programs, and providing resources that seek to maximize cognitive function in older people who don’t have dementia. It’s an intriguing report, principally because it focuses on what is most important to older people and is the essence of geriatric medicine—function, rather than disease. The authors cite an AARP survey of members in which fully 87% identified “remaining sharp” as one of their major concerns. Older individuals and their families are concerned with optimizing mental performance. They are alarmed by the recently described disorder, “Mild Cognitive Impairment,” a condition that does not meet the criteria for dementia but involves a noticeable decline in memory and thinking, whether or not it progresses to full-blown dementia.

The study is also of interest because of its public health angle—it draws attention to the major societal consequences of age-related cognitive decline, things like traffic accidents (as people with impaired judgment or slow reflexes continue to drive) and financial fraud (resulting from impaired decision-making on the one hand and minimal consumer protections on the other). The recommendations to develop assessment tools, educational programs, and improved regulations, as well as alternative means of transportation, have the potential to maintain quality of life for older individuals and to lower costs.

The report makes common-sense suggestions for preventing age-related cognitive decline: being physically active, remaining socially and intellectually engaged and getting enough sleep. Unfortunately, there is no more evidence that these measures will maintain cognitive function than there is that they will prevent dementia. That is, they might help and they can’t hurt. But to suggest that they are proven to be effective is, alas, to overstate the case.

Despite succumbing to the temptation to offer a little bit of hype along with a lot of wisdom, the report’s authors make a valuable contribution by reminding us to pay attention to the other half. Its broad societal focus is welcome, as is the recognition that it is function rather than disease that matters most to the majority of people as they age.

April 12, 2015

What We Believe

Kudos to the Huffington Post for running an article about the new report from the FrameWorks Institute, “Gauging Aging: Mapping the Gaps Between Expert and Public Understandings of Aging in America.” And shame on the NY Times, the Washington Post, the Wall Street Journal, and the other major newspapers in America for ignoring it. That’s not entirely surprising since the report is all about the disconnect between public perception and reality, and the media are to a large extent responsible for shaping popular understanding.

The new study does not report the results of an opinion poll. It is not based on trendy focus group analysis. It seeks to understand what both geriatric experts and the lay public believe about aging and the “assumptions and thought processes” that underlie their opinions. The authors, supported by funding from AARP and a variety of foundations including the John A Hartford Foundation and the Retirement Research Foundation, use a “cultural-cognitive approach” to their work. That means they probe, they explore, they question. They do not rely on “big data.”

So what did they find? They learned a great deal about the gaps between the scientific understanding of aging (by which I mean physiologic, medical, psychological, and sociologic) and the public’s view. They learned so much that I will just touch on some of the highlights here.

Attitudes toward aging: the experts see aging as presenting challenges but also as offering opportunities for growth and continued contributions to society. The public sees aging as the enemy, to be combatted rather than embraced or supported. In particular, aging is thought to bring with it decay and disability; in fact, older people are very heterogeneous.

Root cause of aging: Americans tend to believe that what happens to them is entirely within their control. If they eat right, exercise, and lead a virtuous life, they can avoid the aging process entirely. The truth is more nuanced, with both genetic and external factors playing a significant role. In a similar vein, the public tends to believe that if older people do become disabled or demented and cannot take care of themselves, then their family rather than the government has an obligation towards them.

What we need in order to age well: The experts see a need to create structures that facilitate engagement by older people—whether opportunities for part-time work, better transportation, or more volunteer positions. A related theme is the need, recognized by experts, for new public policy initiatives to modify today’s reality. The public, by contrast, takes the status quo for granted and assumes it’s up to older people to avail themselves of existing options.

There’s more. Maybe I will write more about this subject next week. Better yet, just read the study. And I look forward to future work from the FrameWorks Institute addressing how to change popular perceptions. Maybe they will shed some light on how to modify the public view of climate change and evolution, too.

April 05, 2015

Chemotherapy for Alzheimer's?

A provocative opinion piece in JAMA Neurology offers new hope for the treatment of Alzheimer’s disease. University of California Santa Barbara neuroscientist Kenneth Kosik begins by acknowledging that recent drug trials have all failed, despite the fact that they were designed based on the latest understanding of the pathophysiology of Alzheimer’s disease. The most spectacular recent failures were studies of two separate monoclonal antibodies that were expected to selectively attack amyloid deposits in the brain. The approach remains intriguing enough that scientists haven’t given up on it, despite the lack of success so far—just a few weeks ago the drug company Biogen announced the results of a Phase I trial of yet another monoclonal antibody, this one evidently quite promising. But Kosik has a different idea.

Kosik comments that the typical response to these failures is either to question whether our understanding of the mechanism by which Alzheimer’s develops is correct or to conclude that treatment will need to begin before symptoms appear. He suggests that perhaps where we have gone wrong is in believing that Alzheimer’s disease is homogeneous. In fact, he argues, there are many genetic variants. The best known are the APOE alleles, with the APOE4 allele conferring particularly high risk. And of course there are the relatively rare familial forms of Alzheimer’s disease, which involve mutations in any of several genes. Perhaps if we sequence a person’s genome, we can find a particular therapy that will work for that person.

This is the approach increasingly used in the treatment of cancer, with a handful of notable successes. In non-small cell lung cancer, for example, there are several mutations typically found in younger patients with no history of smoking, and drugs targeted against these mutations have produced excellent results: erlotinib (Tarceva) and crizotinib (Xalkori), when used in patients with the relevant mutation, can convert what was previously a uniformly lethal disease, usually fatal within a year, into a chronic illness. Will the same strategy work for Alzheimer’s disease?
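
To make the matching idea concrete, here is a toy sketch of the genotype-to-therapy lookup described above. The pairing of erlotinib with EGFR mutations and crizotinib with ALK rearrangements reflects standard lung cancer practice; the data structure and function names are my own illustration, not anyone’s clinical tool.

```python
# Toy illustration of genotype-guided treatment selection (not clinical guidance).
# Erlotinib targets EGFR-mutant tumors; crizotinib targets ALK-rearranged tumors.
TARGETED_THERAPIES = {
    "EGFR mutation": "erlotinib (Tarceva)",
    "ALK rearrangement": "crizotinib (Xalkori)",
}

def suggest_targeted_drug(tumor_alteration: str) -> str:
    """Return a matched targeted agent, or a fallback when no match exists."""
    return TARGETED_THERAPIES.get(
        tumor_alteration, "no targeted agent; consider conventional chemotherapy"
    )

print(suggest_targeted_drug("EGFR mutation"))   # erlotinib (Tarceva)
print(suggest_targeted_drug("KRAS mutation"))   # no targeted agent; consider conventional chemotherapy
```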

Maybe. It’s certainly worth pursuing. And Kosik also has ideas about how to proceed with this kind of targeted, individualized treatment—he recommends what are known as “N of 1 trials,” tests of a potentially useful drug in a single patient rather than the conventional strategy of testing a drug in a large group of people. The proposed strategy would both revolutionize the treatment of a particular disease (Alzheimer’s) and change the search process for new drugs. It’s not likely to change the epidemiologic reality any time soon, but it’s more promising than many radical new ideas. Let's hope NIH or other funding agencies have the wisdom to provide the support needed for this innovative work to go forward.

March 30, 2015

When Push Comes to Shove?

There’s been a lot of talk about advance care planning lately, with Ellen Goodman’s Conversation Project, Atul Gawande’s book, Being Mortal, and very recently, Angelo Volandes’ book, The Conversation. The message: think about what matters to you and discuss with your physician and with your family the approach to medical care that makes sense for you if you become very ill. We know that these kinds of discussions may not lead to the patient’s directive being followed when he or she actually does get very sick—sometimes the advance directive does not travel with the patient to the hospital, often the directive is difficult to interpret in practice (what exactly does “no heroic measures” mean?), and in some circumstances physicians or families override the directive. We know that patients with a medical order such as a POLST (Physician Orders for Life-Sustaining Treatment) are more likely to get what they signed up for than those with a wish statement (for example a living will). What we don’t know much about is whether patients change their minds when faced with an actual illness. A new study examines how often patients admitted to an ICU with some kind of “treatment limitation” in effect nonetheless received the treatment they previously had said they didn’t want.

The answer: about one-quarter of the time. The most common treatment that people specifically said on admission to the hospital they didn’t want was attempted CPR (accounting for 77.4% of the limitations expressed), but 24.6% of them ended up having CPR initiated. Another 21.3% of people said they didn’t want specific therapies such as dialysis or artificial nutrition, but it is not clear from the study just what proportion of them received those treatments. Finally, 3.9% of those admitted said they wanted a focus on comfort. We don’t know what kind of treatment those patients in particular received. All we know is that of the 13,405 patients admitted to one of 141 ICUs at 105 US hospitals between 2001 and 2008 with some kind of treatment limitation request in place, 3123 ended up getting attempted CPR, 3841 got intravenous medicines to maintain their blood pressure, 2660 were put on breathing machines, and 283 got dialysis. We also know that there was considerable variability among hospitals, both in the rate of treatment limitation expressed initially (from 1% to 30%) and in the rate of treatment limitation reversal (2% to 76%). So what does all this mean?
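
As a rough sanity check on the “one-quarter” figure, dividing the number of patients who nonetheless got attempted CPR by the total number admitted with a limitation in place (both counts quoted above) lands in the same ballpark. The variable names below are mine.

```python
# Counts reported in the study, as cited above
patients_with_limitation = 13405   # ICU admissions with some treatment limitation in place
received_attempted_cpr = 3123      # of those, patients in whom CPR was nonetheless attempted

# Roughly 23.3%, i.e., about one-quarter
print(f"{received_attempted_cpr / patients_with_limitation:.1%}")
```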

Since we don’t know what led to the changes that occurred, we can’t know for sure. What is clear is that these were not cases in which the doctors simply never saw the advance directive. These are all cases in which the plan was not to provide a specific treatment (principally CPR) but the order was rescinded and the forbidden treatment was actually administered. So somewhere between entering the ICU and leaving it (whether alive or dead), a major change took place. It could be that when push came to shove, patients changed their minds. When confronted with their own mortality, they chose to have a shot, however small, at living longer. It could be that most of the patients were incapacitated by illness and the actual decision to reverse course was made by a health care proxy or family member—we know from other studies that fully 70% of patients in whom a decision about life-sustaining treatment needs to be made are cognitively unable to engage in decision-making at the crucial moment. It might be that the limitation-of-treatment directive didn’t really reflect the patient’s goals—perhaps what they meant was that they didn’t want to spend their life as a vegetable, not that they didn’t want a trial of life-prolonging treatment—and they came to understand that their directive as written should be revised. Or it could be that patients and families were persuaded to change course because the environment of the ICU promotes life-prolonging treatment and it is very difficult to stick to a plan of care that violates the raison d'etre of the ICU.

In all likelihood, the answer is all of the above. This study does not imply that we should abandon advance care planning. It’s worth pointing out that 75% of patients with treatment limitations in place retained those restrictions. But it does raise questions about whether patients really know what they are signing when they complete an advance directive—whether a traditional living will, a detailed instructional directive, or a POLST form. It makes me more convinced than ever that we need to focus more strongly on ascertaining the patient’s goals of care when we engage in advance care planning, and to leave the translation of those goals into medical treatment to the moment when an actual decision must be made. And it does remind us that hospital culture in general and ICU culture in particular are very powerful. When an institution is structured and staffed to provide life-sustaining care, treating some patients without such care creates cognitive dissonance. If patients truly want a different approach, we should provide an alternative environment in which to deliver that care, either the home or an intermediate setting such as a skilled nursing facility.