December 06, 2010

News of a Life in Review

The father of modern geriatric medicine, Robert Butler, died last summer at the age of 83. His contributions to the study of aging and to the care of older people were prodigious: he was the founding director of the National Institute on Aging, the division of the National Institutes of Health devoted exclusively to the elderly; he persuaded Mount Sinai School of Medicine to establish the first ever Department of Geriatrics, and was promptly appointed its first chairman; and his book, Why Survive? Being Old in America, published in 1975, made the case that America was "ageist," arguing forcefully that views of older people as feeble and "senile" reflected prejudice and ignorance, analogous to "racism."

Less well-known is Butler's endorsement of the idea of a "life review." Older individuals frequently look back on their lives, trying to create a coherent narrative that gives those lives meaning. Butler coined the term "life review" for this process and, far from mocking it as a sign of incipient dementia, as was the fashion in the 1950s, he encouraged the practice, suggesting it gave older people the opportunity to find what Erik Erikson called "ego integrity" in the final stage of life.

American medicine, by contrast, has focused on finding a quick fix to the problems of aging. Vitamin E, an antioxidant, was heralded as a miracle drug until it turned out that, far from preventing heart disease and cancer, it had no effect on either one and seemed to increase the risk of heart failure. Estrogen for women met a similar fate: touted as the fountain of youth and alleged to prevent such age-associated ailments as dementia and coronary artery disease, it was found to be useless in protecting against dementia and to increase the rate of heart disease. Vitamin D, the most recent candidate for an anti-aging potion, just this month met the same fate: the Institute of Medicine concluded that vitamin D had not been shown to boost immunity, to prevent cancer, or to stave off diabetes.
The search for a magic bullet to prevent aging has led us to neglect the quality of life for those who are already old, and who may be frail or demented.

One strategy we already have for improving life satisfaction in old age is life review. A series of small studies this past year have confirmed the utility of life review for psychological well-being. One study of reminiscence therapy in Taiwan nursing homes found less loneliness, less depression, and greater psychological well-being in the experimental group. Similar results emerged from a randomized study conducted in ten Danish nursing homes. A Dutch randomized trial conducted in the community found that enrollment in a life review course called "Looking for Meaning" reduced depressive symptoms, a result that persisted at six months.

Unlike medications, life review has virtually no side effects. But unlike new drugs covered by patents, it will not enrich anybody and has no powerful corporate sponsors (venlafaxine, an antidepressant, costs $70 for a one-month supply, and quetiapine, an antipsychotic commonly used for symptoms such as agitation, costs $90 for a one-month supply).

I recently engaged in a limited form of life review with my parents, who are now 85 and 86. I interviewed them at length about their experiences during their first 25 years. Both were born Jewish in Germany, my mother in the port city of Stettin (now Szczecin, Poland) and my father in the small East Prussian town of Osterode (now Ostroda, also in Poland). In January 1939, they both left Germany forever on a children's transport to Belgium. When Germany invaded Belgium in May 1940, they escaped by train to the south of France as part of a group of 100 refugee children. There they remained, in a children's colony supported by the Swiss Red Cross, until they were threatened with deportation to Nazi death camps and fled, in two separate expeditions, over the border to Switzerland. My interviews focused on their early lives in Germany and their experiences as refugees, first in Belgium, then in France, and next in Switzerland. After the war, refused residency by the Swiss, they immigrated: my father to Brazil and from there to the U.S., and my mother directly to the U.S.

The process of telling their story, while sometimes painful, was clearly valuable to my parents. They found it helped them make sense of their lives to think about how their early experiences affected them as American citizens and as parents. They felt there were lessons to be learned, lessons about countries' responsibilities to refugees and about what it means to act humanely, which they were eager to communicate and hence were gratified to see their story made public in my recently released book, Once They Had a Country: Two Teenage Refugees in the Second World War.

So as we salute Robert Butler for his many invaluable contributions to geriatrics, let us not forget the humble life review.

August 16, 2010

The Alzheimer's Revolution?

First a seemingly arcane debate at a medical conference about the criteria for diagnosing Alzheimer’s disease made news; then an article in a neurology journal was widely touted as showing that a spinal fluid test was “100% accurate in predicting Alzheimer’s.”

Are we on the cusp of a revolution? Is there truly a new way to think about this formidable disease—and is a cure finally at hand?

What we know now is that sometime in their mid- or late-fifties, individuals who are destined to develop Alzheimer’s disease begin accumulating a protein called amyloid-beta in their brains. This material clumps together to form plaques, the very same plaques seen by Alois Alzheimer under the microscope back in 1906, when he first described the disease. In the past, such plaques could only be identified by brain biopsy. Today, their presence can be inferred from “biomarkers” such as a low level of amyloid-beta in the cerebrospinal fluid (CSF) or specific changes on a PET or MRI scan.

The proposed new diagnostic criteria for Alzheimer’s disease reflect these biochemical findings. They do not require that a person have a special scan or a lumbar puncture to make the diagnosis. Instead, they define “probable Alzheimer’s disease dementia” based on the insidious onset of symptoms, a clear-cut history of worsening cognition, and certain cognitive deficits present on history and examination. The role of biomarkers (low CSF amyloid-beta, elevated CSF tau, or specified abnormalities on either PET or MRI scans) is solely to “enhance” the diagnosis, that is, to slightly increase its certainty.

More noteworthy than the suggested diagnostic criteria is the recommendation that a syndrome of “preclinical Alzheimer’s disease” be defined. This recognizes the reality that Alzheimer’s disease takes years to develop and that the opportunity for intervening to prevent the disease will almost surely arise during this developmental phase. Preclinical AD is defined based solely on biomarker evidence of amyloid-beta accumulation or of early neurodegeneration. It requires either measurement of CSF biomarkers or a specialized scan.

In the setting of these proposed definitions, the new study from the Archives of Neurology is very intriguing. It reports on a test of the cerebrospinal fluid that measures both amyloid and tau and combines the results in a way that helps predict whether the patient has AD, or is healthy but likely to develop AD later on. The researchers conducted the study as part of a larger “Alzheimer’s Disease Neuroimaging Initiative” in which they followed a large group of individuals over time: some who were clinically normal, some who had Mild Cognitive Impairment (MCI, a state that often progresses to full-blown dementia), and some who were clinically diagnosed as having Alzheimer’s disease. The subset of this population who agreed to have a lumbar puncture formed the study sample for the Archives work: a total of 114 clinically normal individuals, 200 with MCI, and 102 with clinical evidence of Alzheimer’s. Based on this group, the scientists were able to define a “biomarker mixture” that was abnormal in 90% of those with dementia, 72% of those with Mild Cognitive Impairment, and 36% of those who were cognitively normal. When they tested their model in 2 other groups—one group of 68 people with autopsy-confirmed Alzheimer’s and one group of 57 patients with MCI who were followed for 5 years—they found that the biomarker mixture correctly identified 94% of those in the first group and all of those in the second group who progressed to full-blown dementia.
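To see what those percentages mean in terms of the actual study sample, here is a quick back-of-the-envelope sketch. The group sizes and abnormality rates are the ones quoted above; the arithmetic and rounding are mine:

```python
# Rough arithmetic on the figures quoted from the Archives of Neurology
# study: how many people in each clinical group would be expected to
# show an abnormal CSF "biomarker mixture," given the reported rates.

groups = {
    "clinical Alzheimer's":      (102, 0.90),  # (n, fraction abnormal)
    "Mild Cognitive Impairment": (200, 0.72),
    "cognitively normal":        (114, 0.36),
}

for name, (n, fraction) in groups.items():
    print(f"{name}: ~{round(n * fraction)} of {n} test abnormal")
```

The last line is the provocative one: roughly 41 of the 114 cognitively normal participants carried the abnormal biomarker signature, which is just what one would expect if a sizable minority of them were in the long preclinical phase of the disease described above.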

What all this means is not that patients with a clinical diagnosis of Alzheimer’s disease or of MCI should have a lumbar puncture or a special scan. It decidedly does not mean that cognitively intact older individuals should have their biomarkers measured to see if they are likely to get Alzheimer’s. What it does mean is that we now have the opportunity to do research on healthy people and on those with MCI to see if there is a drug that can prevent the progressive deposition of amyloid that appears to cause full blown Alzheimer’s.

The only problem with this approach is that currently no such drugs have been definitively identified. There are a host of putative disease-modifying agents: vaccines that stimulate the body to develop antibodies against amyloid (which sort of work but with the side effect of causing encephalitis, a brain inflammation), drugs that bind amyloid (which so far have been found to decrease the level of biomarkers but not to affect cognition), and chemicals that interfere with the enzyme that cleaves the abnormal amyloid from a larger precursor protein (which also do what they’re supposed to do biochemically but have not improved cognition).

The hope is that the newly defined preclinical syndrome will encourage researchers to dare to intervene before individuals have any abnormal symptoms. Putting the focus of research squarely on influencing the prodromal period may allow us to prevent or at least delay the progression of Alzheimer’s.

Is this a revolution? It is certainly a conceptual revolution. It embodies our best hope for intervening in the cascade of biochemical events that produces the leading form of dementia. Will it succeed? That remains to be seen.

July 01, 2010

One Hundred Years of Alzheimer's Disease

One hundred years ago, the eminent German physician Emil Kraepelin published the eighth edition of his very successful textbook on psychiatry. It wasn't terribly different from the preceding version, but one small change would prove to have enduring consequences. Several neurologists and psychiatrists had described a disorder that looked very much like the dementia associated with old age but which afflicted people as young as 50. Kraepelin gave the disease a name. He called it "Alzheimer's disease," based on a case report that the neuropathologist Alois Alzheimer had presented in 1906. The name stuck, although "Alzheimer's disease" was ultimately found to be indistinguishable from age-related dementia. What have we learned about the disease in the ensuing 100 years? The Alzheimer's Association's new report, 2010 Alzheimer's Disease Facts and Figures, gives us a snapshot of the disease as we know it today.

For 100 years there has been debate about the cause of dementia. Is it caused by plaques and tangles, the abnormal material that Alois Alzheimer observed under the microscope when he examined the brain of his patient, Frau Auguste D, after her death? Or is it a vascular condition, due to hardening of the arteries? For the last several decades, physicians have been confident that there are several distinct forms of dementia, with Alzheimer's disease and vascular (or multi-infarct) dementia as discrete entities. But now the lines are blurring: a new autopsy study suggests that many cases of dementia are due to a mixture of several problems. By examining the brains of older individuals who had clinical evidence of Alzheimer's disease during life, scientists found that less than half of them had Alzheimer's disease alone. Fully one-third had infarcts (strokes) as well as Alzheimer's changes in their brains. About 15% had changes of Parkinson's disease in addition to Alzheimer's. These findings are not merely of academic interest: if it takes several distinct processes occurring simultaneously to produce dementia, there are potential therapeutic implications. Perhaps it will be sufficient to intervene in just one of the processes, or maybe it will be necessary to strike all the contributors to the disease at once.

When Alzheimer described his patient, Frau Auguste D, a woman who first presented at age 51 with memory loss, paranoia, and the inability to care for herself, dementia was relatively rare. Most cases of dementia, after all, arise in people over age 65, and life expectancy in Germany in the early 20th century was only 60 years. The latest prevalence data show that today 5.3 million Americans have Alzheimer's disease or some other form of dementia, 5.1 million of whom are 65 years of age or older. The disease disproportionately affects women, which is partly but not entirely due to the fact that women live longer than men. A total of 10% of men and 16% of women over age 70 have dementia.

What Kraepelin and Alzheimer did not fully appreciate, but what we know now, is that dementia is a terminal illness: it is the 5th leading cause of death in people over age 65. But most people with dementia also have other chronic diseases, which makes sorting out what actually causes death tricky: fully 60% of Medicare patients with dementia have high blood pressure, 26% have coronary artery disease, 23% have diabetes, and 25% show the residual effects of strokes.

Caring for individuals with dementia was sometimes a challenge even in Alzheimer's day: Frau D. had to be institutionalized because her behavior was so difficult to manage. The situation today is orders of magnitude more problematic. The number of people with dementia is growing dramatically, but the number of trained geriatric professionals (physicians, nurses, social workers) is not. Right now there are a paltry 7,128 physicians who are board certified in geriatrics, with the projected need by 2030 estimated at 36,000 and no evidence that more doctors are going into the field. Even more worrisome is the lack of personal caregivers. Currently most of the direct care for people with dementia is provided by family and friends: 11 million unpaid caregivers provide a stunning 12.5 billion hours of care annually, or about 22 hours/week. As the population with dementia increases, it is far from clear how we will provide adequate professional and non-professional care.

In 1910, medical technology was essentially nonexistent. In 2010, it is widespread, especially in older people and even more so in older people with dementia. Medicare beneficiaries with dementia are 3 times more likely than their non-demented counterparts to be hospitalized. Looked at differently, at any point in time about one-fourth of all hospitalized patients over age 65 have dementia. Reflecting this trend, Medicare spends $15,145/year for each person with dementia compared to $5,272/year for each person without dementia, and these figures do not include long term nursing home care, which is not covered by Medicare. What these numbers mean is that people with dementia receive the same high-tech, life-prolonging treatment that non-demented patients receive, only more of it: they tend to have other chronic diseases, and because they cannot articulate what exactly is bothering them, they are subjected to even more tests than other people. At first glance this might seem like a good thing, indicating that patients with dementia are not discriminated against. But instituting aggressive, life-prolonging therapy in people who have a terminal disease is of questionable benefit, particularly when it both causes suffering (people with dementia do not understand why they are receiving painful or frightening procedures) and is hugely expensive.

Kraepelin and Alzheimer lived in an ethnically and racially homogeneous society. Contemporary America is ethnically diverse and racial disparities in medical care are widespread. What is particularly striking is new evidence that older African Americans have twice the risk of developing dementia as do their white counterparts. The Washington Heights-Inwood Columbia Aging Program found that among people 85 years of age or older, the prevalence of dementia is 30.2% in whites, 58.6% in blacks, and 62.9% in Hispanics. The high rate of hypertension, diabetes, stroke, and coronary disease in African Americans and Hispanics may be responsible for the disparities. There is some hope that prevention of these conditions will ultimately be found to prevent dementia, although a recent NIH state-of-the-science conference concluded that "there is currently no evidence considered to be of even moderate scientific quality supporting the association of any modifiable factor...with reduced risk of Alzheimer's disease."

The centennial of Kraepelin's momentous naming decision is passing almost unnoticed. It should stimulate renewed dedication to addressing the challenges of caring for the growing numbers of patients and families devastated by Alzheimer's disease.

May 26, 2010

Say It Isn't So

My very first post on "Perspectives on Aging" dealt with dementia. I reported on a study that found exercise could decrease the risk of becoming demented. Since then I've blogged about dementia in general and Alzheimer's disease in particular 5 more times without a whole lot of encouraging news. The bad news just got a little worse: a recent National Institutes of Health "state-of-the-science" conference concluded that although a few tantalizing studies have suggested that exercise or social engagement or crossword puzzles could fend off dementia, a systematic evaluation fails to confirm these findings.

Dementia is unfortunately a very common condition which more and more people are developing as the population ages. At last count, 5.4 million Americans had Alzheimer's disease. Medications to treat Alzheimer's are mediocre: the most positive statement the American Psychiatric Association could make in a recent practice guideline about the most effective drugs, the cholinesterase inhibitors, is that they have a "modest" effect in a "substantial minority" of patients. Hardly a ringing endorsement. The antipsychotics, drugs often prescribed to treat the behavioral manifestations of dementia, have been associated with a small increased risk of death, which might be an acceptable price to pay if they worked, but they rarely do. Despite the scientific progress in understanding how Alzheimer's disease develops (we know infinitely more today than Alois Alzheimer did in 1906, when he peered through the microscope and found plaques and tangles outside and inside, respectively, the neurons of his former patient, Auguste D), the prognosis for effective intervention is poor.

The new NIH report is very blunt: "There is currently no evidence considered to be of even moderate scientific quality supporting the association of any modifiable factor...with reduced risk of Alzheimer's disease." By "modifiable factor," the report means nutritional supplements, dietary factors, medications, social factors, economic factors, medical conditions, toxins, and other environmental exposures.

But perhaps we are looking at the problem the wrong way. In geriatrics, it's actually quite rare to find a single intervention that can prevent a complex condition, what geriatricians call a syndrome. A growing literature advocates a multi-pronged attack on this sort of problem. The risk of falling, for example, can be lowered by somewhere between 25 and 39% by a combination of maneuvers, including review of medications and gait and balance training. The risk of developing delirium (an acute confusional state) in the hospital can be lowered by a third with a combination of 6 interventions, including avoiding sleeping pills and providing patients with their glasses and hearing aids.

Maybe, just maybe, the risk of dementia can be lowered by maintaining contacts with other people, participating in social activities, playing music, exercising, and taking medication to treat high blood pressure. Each alone may have an extremely modest effect, but together, they just might make a difference.
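The intuition that several modest effects can compound into a meaningful one is easy to make concrete. Assuming, purely for illustration, that each intervention independently trims the risk by a few percent (none of these numbers come from any study), the combined reduction multiplies out:

```python
# Illustrative only: hypothetical relative risk reductions for each
# intervention, assumed independent. These numbers are made up; they
# simply show how several modest effects compound.

reductions = {
    "social contact":        0.05,
    "social activities":     0.05,
    "playing music":         0.03,
    "exercise":              0.07,
    "treating hypertension": 0.10,
}

remaining_risk = 1.0
for fraction in reductions.values():
    remaining_risk *= (1 - fraction)  # each intervention leaves this share of risk

combined = 1 - remaining_risk
print(f"combined risk reduction: {combined:.1%}")
```

With these made-up inputs, five interventions of 3 to 10% each compound to a combined reduction of roughly 27%, far more than any single one delivers on its own.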

April 13, 2010

It’s the Law

After all the rancor and the political posturing and the delays, we finally have major health reform legislation. The new law will result in 30 million currently uninsured Americans buying health insurance and it will abolish some of the most egregious practices of the insurance industry, such as use of pre-existing conditions to refuse coverage. But what effect will it have on controlling the cost of medical care?

The rate of rise of health care costs is a major threat to the American economy, a point on which Peter Orszag (formerly of the CBO and now director of the Office of Management and Budget), Rand Corporation researchers, and the conservative Heritage Foundation all agree. The long-term projections of the CBO, published last July, are that total U.S. spending on health care, which was 16% of GDP in 2007, will rise to 25% of GDP in 2025 and 37% of GDP in 2050. The new law has the potential to produce a significant decline in the percent of GDP we spend on health care each year. But whether it achieves this end depends on how several key provisions of the bill are actually implemented.

The provisions of the health reform legislation that could have a profound effect on the rate of growth of spending on medical care have to do with payment reform. And the potentially most potent payment reform strategy that appears in the bill is “bundling.” The legislation calls for the establishment of a national Medicare pilot program to “test, develop, and evaluate” bundled payment for acute inpatient hospital care, physician services, outpatient services, and post-acute care for a single episode of care. According to estimates by researchers at Rand, a system of bundled payments could potentially decrease national health spending by as much as 5.4% over the next 10 years (if applied to both Medicare and the private sector). The same researchers propose a 6.2% target for reducing spending on health care over the next 10 years; hence bundling alone could make an enormous difference.

The new law also provides incentives for health care systems to form “accountable care organizations” (essentially networks of providers that agree to capitation, another form of bundling) by offering them a share in the savings they generate for Medicare if they meet quality targets. It remains to be seen how widespread accountable care organizations will become. Health care reform also provides for an “Innovation Center” within CMS to evaluate, test, and expand different payment structures. Whether this provision will lead to savings depends on what payment structures are tested, how effective they are, and whether they are then disseminated. A final provision in the domain of payment reform is the establishment of an Independent Payment Advisory Board, charged with submitting legislative proposals to reduce the per capita rate of growth in Medicare spending whenever spending exceeds a target growth rate. However, the Board is prohibited from submitting proposals that will “ration care,” a provision that, like the notorious “reasonable and necessary” language of the original Medicare statute, will no doubt serve as the basis for rejecting any plan that restricts potentially useful treatment on the basis of cost.

Beyond payment reform, the new law will establish a Patient Centered Outcomes Research Institute to conduct research comparing the clinical effectiveness of alternative treatments. In principle, this kind of information could change medical practice to avoid the use of unnecessary or unnecessarily expensive treatments. But the bill has a built-in guarantee that the information will not be used in this way. It states that the findings of the Institute “may not be construed as mandates, guidelines, or recommendation for payment, coverage … or used to deny coverage.”

A powerful new law is on the books. With it, the U.S. finally joins all the other countries in the developed world in assuring a basic level of health care for its citizens. But will it constrain the rate of growth of spending on medical care? Only if we can depoliticize the boards, centers, and institutes that are the key to change.

A modified version of this article was posted on the Health Care Cost Monitor on March 19, 2010.

March 11, 2010

Drugged in the Nursing Home

This week’s Boston Globe featured an article blasting Massachusetts nursing homes for having too many residents on antipsychotic medication, “Nursing home drug use puts many at risk.” It portrayed vulnerable grandmothers as sedated, mute, and drooling, transformed by drugs into tragically diminished versions of their earlier vivacious selves. The headline could have been from the 1970s when Mary Mendelson wrote her muckraking book, “Tender Loving Greed,” pillorying the nursing home industry. Has nothing changed in the last 30 years?

A great deal has changed. In the 1980s, Congress responded to the deplorable state of nursing home care with the Nursing Home Reform Amendments, part of the Omnibus Budget Reconciliation Act of 1987 (OBRA-87). This major piece of legislation sought to make nursing homes free of both physical and chemical restraints. And to a large extent it worked. Translated into regulations in 1991, OBRA-87 led to enormous declines in the use of antipsychotics as well as other “psychoactive” drugs such as benzodiazepines.

Then new, “atypical” antipsychotic medications came along. Touted as equally effective but less toxic, risperidone (Risperdal), olanzapine (Zyprexa), and quetiapine (Seroquel) appeared with growing frequency in nursing homes. Physicians were mandated by OBRA-87 to monitor the use of these drugs—to look for side effects such as Parkinsonian symptoms or drops in blood pressure. They were also supposed to limit prescription of these new drugs, along with older, “typical” antipsychotics such as haloperidol (Haldol), to well-defined situations. The drugs were to be used for chronic schizophrenia and for dementia with psychotic features unresponsive to alternative treatment. As the prevalence of dementia in nursing homes increased, so too did the prevalence of behavior problems: kicking, biting, smearing feces, and screaming. Less dramatic but nonetheless challenging were other behaviors that were also on the increase, such as pacing, wandering into other residents’ rooms, and urinating in trash cans. These behavioral disturbances were extremely difficult for nursing home staff to manage. Convinced that the new “atypical” antipsychotics were the solution, physicians prescribed more and more of these drugs.

Some residents did improve. Most of them were not transformed into zombies by the medication. But the behavioral symptoms of dementia often come and go without pharmacological intervention. While many physicians were confident they were seeing a benefit of the drugs, others were not so sure. A large randomized trial found that all three of the leading antipsychotics were equally effective in controlling symptoms—and indistinguishable in efficacy from placebo.

Nursing home physicians were skeptical. The study was conducted among outpatients. Maybe long term care residents were different from outpatients—surely they were more demented and had worse symptoms. But at around the same time came other disturbing news: the atypical antipsychotics were associated with a risk of sudden death. The FDA issued a new “black box” warning to physicians prescribing these drugs, followed 3 years later by another black box warning against using the older, “typical” antipsychotics. The drugs were not prohibited, but sober commentators advised prescribing them with great caution, restricting their use to patients with longstanding psychiatric illness and checking an EKG before and after starting the drugs.

In the face of all this bad news, how can it be that a recent study of over 16,000 nursing home admissions found that 29% received at least one antipsychotic over the course of a year? Of these, just about one-third had no identified clinical indication for the drug. Residents admitted to nursing homes with the highest baseline prescribing rates of antipsychotics were 1.37 times more likely than those admitted to nursing homes with the lowest baseline rates to receive a new antipsychotic prescription in the coming year.

Over-use of antipsychotics in the 1970s and 1980s was bad enough. In the intervening years, we have had regulation (OBRA-87), scientific studies of both efficacy and toxicity, and warnings from the FDA. What is going on?

The problem is that agitated behaviors in demented nursing home residents are a major challenge. There is no simple solution. Staff training can help. Increased staff-to-patient ratios can help. But with the rise of alternative options for care such as assisted living and a decline in the total number of people living in nursing homes, those individuals who do live in a nursing home tend to be more demented and have greater behavioral problems than ever before: in 2007, 69% of US nursing home residents had cognitive impairment, with 42% diagnosed with moderate to severe impairment. Solving the problem of how to care for demented nursing home residents will require far more sweeping changes than adding a few in-service programs for staff or hiring a few more nursing aides. The key is culture change.

To understand the role of culture change, we should look at the use of feeding tubes in nursing home patients with advanced dementia. The feeding tube story is in many ways similar to the antipsychotic medication story: the practice persists despite studies showing that feeding tubes do not prolong life, they do not prevent aspiration pneumonia, and they do not promote healing of pressure ulcers, the reasons given for their use. There is tremendous state-to-state variability in the use of feeding tubes. And while studies have identified a variety of institutional factors associated with feeding tube use (for-profit status of the nursing home, absence of advance care planning discussions, speech therapist on staff), there has been no deep understanding of why the rates vary so dramatically.

In the very same issue of the Archives of Internal Medicine that reported on the national use of antipsychotic drugs in nursing homes, Susan Mitchell and her colleagues presented an ethnographic study of two nursing homes to explore the role of nursing home culture in promoting the use of feeding tubes. In one nursing home, 42% of residents with advanced dementia had feeding tubes, while in the other, only 11% did. The study found startling differences between the cultures of the two institutions. The low-use nursing home had a home-like environment, specially trained nursing assistants, and lots of family involvement in decisions about the goals of care. The high-use facility, by contrast, had an institutional atmosphere, inadequately trained nursing assistants, and a focus on regulatory compliance rather than quality of life.

Preliminary evidence suggests that the same nursing homes that foster a culture conducive to hand feeding rather than tube feeding also have a low use of antipsychotic medications. Providence Mount St. Vincent's, a nursing home in Seattle, Washington that pioneered the culture change model, reported eliminating the use of antipsychotic medications entirely. If this is confirmed, it will provide compelling evidence that what matters most to nursing home residents, beyond rules and regulations, is designing a community that is patient-centered, where staff are cross-trained to perform multiple tasks, and where the focus is on relationships. It is in homes like this that even individuals with advanced dementia may be able to flourish without either feeding tubes or antipsychotic medications.

January 04, 2010

When Lawyers Practice Medicine

I sat with my elderly parents and their estate planning lawyer, going over documents. The lawyer had updated their will, revised their powers of attorney, and done various other lawyerly things. She handed my parents page after page for them to sign. After an hour of this, my eyes began glazing over. But then I heard her say “and here’s your living will,” and I was all ears.

My work as a geriatrician and palliative care specialist revolves around advance care planning, discussions with patients about their preferences for medical care in the event they lose the ability to make decisions. It’s a complicated enterprise that begins with clarifying the person’s overall health, then moves on to determining the goals of care—what is most important at that point in time—and then seeks to translate those goals into a plan of action. What I have learned by doing this with hundreds of patients and their families, as well as by studying and writing about it, is that it is a process. Other researchers and practitioners in the field agree that advance care planning is not about completing a form: it’s about discussing prognosis, it’s about explaining what medical interventions can and cannot achieve and the burdens associated with them, and it’s about factoring the patient’s values and preferences into this complex discussion. What was an estate lawyer doing handing out a living will document and saying “sign here”?

Clearly the lawyer thought the document was self-explanatory and that most people would want to sign such a form. Clearly she believed it was her responsibility to offer it to my parents, and she felt she was providing them the opportunity to retain control over their medical care at the end of life.

The irony is that Massachusetts, where this encounter between my parents and their attorney took place, does not have living will legislation. In Massachusetts, the only legally recognized form of advance care planning is a health care proxy—choosing who is empowered to make decisions if the patient loses the capacity to speak for himself. The Massachusetts legislature deliberately opted not to accord official status to living wills because they are vague, ambiguous, and seldom applicable to real life medical situations.

The living will my parents were given is a classic example of pseudo-precision: “If a situation should arise in which there is no reasonable expectation of my recovery from extreme physical or mental disability, I direct that I be allowed to die and not be kept alive by medications or artificial means or procedures which serve only to prolong the process of my dying,” it begins. What is a “reasonable expectation of recovery?” A fifty-fifty chance? Is a 30% chance good enough? 10%? What does “recovery” mean anyway? Going home and living independently? Living in a nursing home and needing help with bathing and dressing? Going from unconsciousness and total paralysis to wakefulness and the ability to move one finger?

But there is more—the document seems, at first glance, to spell out the answers to these questions. It says “without limitation, I intend these instructions to apply if I am (i) terminally ill, (ii) permanently unconscious, or (iii) conscious, but have irreversible brain damage and there is no reasonable expectation that I will regain the ability to make decisions and express my wishes.” What is meant by “terminally ill?” Does it mean conforming to the Medicare hospice definition of having a life-expectancy of 6 months or less, if the disease follows its usual course? Does it mean death is imminent—in the next few hours or days? Or does it mean having a disease that is uniformly fatal, such as Alzheimer’s disease, which lasts 3-5 years, sometimes longer, from diagnosis until death? What is “extreme physical or mental disability?” Does this mean the most advanced stage of Alzheimer’s, or does moderately severe dementia—in which the individual can walk and talk, but has completely lost his short term memory and needs help with bathing, toileting, and personal care—qualify?

That’s not all. If my parents did seem to fall into any of the three categories of terminal illness, permanent unconsciousness, or irreversible brain damage, then the document specifies that they would not want “(a) electrical or mechanical resuscitation of my heart when it has stopped beating; (b) artificial nutrition or hydration when I am unable to take nourishment by mouth; (c) mechanical respiration when I am no longer able to sustain my own breathing; and (d) medications, tests and treatments for any purpose other than comfort.” There is no explanation of what any of these interventions entails, or of their benefits and burdens, nor is there any explanation of what the alternatives might be. In principle, if my parents had a stroke that impaired decision-making capacity and the ability to swallow, but were awake and alert, they would not be given a feeding tube for nutrition, even if they were hungry and indicated they wanted to eat. In principle, if they were diagnosed with a fatal cancer, had a life expectancy of less than six months, and were unable to make decisions for themselves, they could not receive radiation treatments to prolong life or antibiotics to cure a pneumonia.

Not only does this kind of living will potentially limit the use of treatment that might well be entirely appropriate and consistent with my parents’ wishes (if they understood what they were signing), but it also fails to protect them from overtreatment in most of the clinical situations which they are likely to encounter.

Whatever is meant by being “terminally ill” or suffering from “irreversible brain damage” with “no reasonable expectation that I will regain the ability to make decisions,” these are not the situations in which older people commonly find themselves. They find themselves in a whole host of situations in which they might well want limitation of treatment. Many 85-year-olds, for example, have mild dementia, heart disease, arthritis, and diabetes. If they develop a severe pneumonia, another common scenario, this might well precipitate “delirium,” an acute confusional state in which they are unable to make medical decisions. Would they want to be put in an intensive care unit on a ventilator? The living will, which is supposed to guide proxies and physicians, is silent on this issue.

This seemingly innocuous living will, in short, failed to protect my parents from excessively burdensome treatments in a variety of very possible medical situations, and could prevent them from receiving desirable treatments in others. So how could they ensure they would get the kind of treatment they wanted if they were unable to speak for themselves?

The answer is that they could discuss with their physician what sorts of medical interventions are consistent with their goals. Frail elders, for instance, if they wish to remain independent as long as possible, even at the cost of a few months of life, should opt to forgo attempted cardiopulmonary resuscitation. The reason is that CPR, when performed in this setting, is rarely successful, and in those instances where the person survives, he almost always experiences a significant decline in his ability to function on his own. Ideally, the health care proxy is present when the physician talks to the patient about preferences for care, so that he will be well positioned to carry out the patient’s wishes. It is important for the physician to document such discussions in the patient’s medical record. Formal “instructional directives” that spell out just which procedures a patient thinks he would want under which circumstances may be useful as a supplement to advance care planning discussions. An example is the “Five Wishes” document, available from Aging With Dignity.

It is entirely appropriate for lawyers to give their clients a health care proxy form to sign—although it should be pointed out that no legal input is required to choose a surrogate. Anyone can print up the official Massachusetts health care proxy form and sign it, with two adult witnesses. There is no need to pay an attorney $100/hour for such a form. It is also reasonable to recommend to clients that they discuss with their doctor their preferences for medical care—in the context of their medical situation and their personal goals. At best, presenting clients with a living will and suggesting they sign it reduces the complex process of advance care planning to filling out a form. At worst, it is practicing medicine without a license. Is the lawyer in any position to explain the benefits and burdens of the medical interventions referred to in the living will, such as a ventilator (“mechanical respiration when I am no longer able to sustain my own breathing”), a feeding tube (“artificial nutrition…when I am unable to take nourishment by mouth”), or cardiopulmonary resuscitation (“electrical or mechanical resuscitation of my heart when it has stopped beating”)?

Physicians have shirked their responsibility by only sporadically engaging in advance care planning with their patients. The result is that lawyers are taking up the slack. Many state legislatures have contributed to the unfortunate conceptualization of advance medical planning as a legal issue through advance directive legislation: unlike Massachusetts, most states have living will laws as well as health care proxy laws and even prescribe what forms are legally recognized in their state. We need to reclaim advance care planning as a medical intervention that can prevent both over-treatment and under-treatment near the end of life.