September 30, 2018

The Nurse Will See You Now


          Articles in medical journals tend to pay scant attention to the role of nurses in treating illness—or for that matter, to the role of social workers, physical therapists, and many other clinicians. Hence, when JAMA, one of the major American general medical journals, published an article in 2002 entitled, “Hospital Nurse Staffing and Patient Mortality, Nurse Burnout, and Job Dissatisfaction,” it was a bombshell. The lead author, Linda Aiken, is a nurse researcher at the University of Pennsylvania, where she is a Professor of Sociology and the founding director of the Center for Health Outcomes and Policy Research. Looking at survey data from 10,000 nurses and administrative data on nearly 250,000 medical and surgical patients hospitalized in Pennsylvania in 1998-1999, she drew some dramatic conclusions: each additional patient per nurse was associated with a 7 percent increase in the likelihood of dying within 30 days of hospitalization, a 23 percent increase in the odds of nurse “burnout,” and a 15 percent increase in nursing job dissatisfaction. 
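To get a feel for what that 7 percent figure implies, here is a back-of-the-envelope sketch. The assumption that the per-patient odds ratio compounds multiplicatively across additional patients is mine (a common reading of logistic-regression results), not something the study reports directly.

```python
# Back-of-the-envelope: how the reported 7% increase in 30-day mortality odds
# per additional patient per nurse might compound across several added patients.
# The compounding assumption is an illustration, not a finding of the study.

ODDS_RATIO_PER_PATIENT = 1.07  # per-patient figure reported by Aiken et al., JAMA 2002

def relative_odds(extra_patients: int) -> float:
    """Relative odds of 30-day mortality after adding `extra_patients`
    to each nurse's load, assuming the odds ratio compounds."""
    return ODDS_RATIO_PER_PATIENT ** extra_patients

for extra in (1, 2, 4):
    print(f"+{extra} patients per nurse -> {relative_odds(extra):.2f}x the odds")
```

On this reading, a hospital that moves each nurse from four patients to eight would see roughly 30 percent higher odds of 30-day mortality, which is why the finding landed as hard as it did.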

          Aiken's analysis, along with other smaller studies, has led to calls for an increase in nurse to patient staffing ratios in hospitals. In light of general hospital administrative reluctance to make such increases, many nurses have demanded, and some state legislatures have proposed, mandatory increases in nurse to patient ratios. Massachusetts voters are being asked to vote this November on a referendum that would establish mandatory staffing ratios. To date, the only state to have instituted such a requirement is California—which passed legislation in 1999, before Aiken’s landmark study. California’s experience offers an unparalleled opportunity to ask whether mandatory ratios result in the desired improvements in quality of care and whether they produce a variety of unintended consequences. In principle, it could also shed light on whether nurse to patient ratios are particularly important for older people.

            California’s law specifies different nurse to patient ratios for intensive care units, surgical units, and medical units. Compliance was at first uneven, but gradually hospitals conformed to the requirements. Several studies have attempted to assess the outcomes. They are limited because California did not conduct a randomized, controlled experiment before passing its legislation—it did not impose mandatory minimum ratios on some hospitals and not on others. Moreover, nursing staff ratios are hardly the only factor affecting outcomes that changed in the early years of the mandate: many other federal quality improvement initiatives were undertaken to encourage hospitals to prevent pressure ulcers, falls, catheter-related infections and other major hazards of hospitalization. Hospitals have also experienced major financial pressures: the cost of hiring nurses was only one of many economic challenges. Determining cause and effect is not easy. Nearly twenty years later, what do we know?

            First and foremost, did the mandatory ratios result in improved quality of care? One of the most carefully performed studies was undertaken by the California Health Care Foundation, part of UCSF, in 2009. This report looked at pressure ulcers, pneumonia deaths, and deaths from sepsis (blood-borne infection) and was unable to find any statistically significant change after the law was implemented in 2004 (following a multi-year study period to design the regulations). Another study, which focused exclusively on pressure ulcers and falls and used data from CalNOC, a large nursing database for the entire state, concluded there was no change in fall rates. The authors did note a paradoxical increase in pressure ulcers in patients admitted to “step-down” units (units whose patients fall between an ICU and a general medical or surgical floor in acuity) after the requirement of more nurses per patient was introduced, presumably a reflection of sicker patients being admitted to those units. A systematic review of the literature, published in the Annals of Internal Medicine in 2013, failed to find any statistically significant effect on a variety of safety measures. Of note, none of the studies I identified looked at either patient or nursing satisfaction.

            If mandatory nurse to patient ratios did result in more face-time with patients but no demonstrable improvement in overall quality of care, might older patients nonetheless be one subgroup that did benefit?  All we can say is that many of the quality measures that were examined—pressure ulcers and falls, for example—are particularly relevant to older people. 
            What about the feared adverse consequences of imposing rigid nursing staff ratios? An analysis in Health Affairs in 2011 found no evidence that hospitals were substituting less well-qualified staff who still meet the legal requirements, such as LPNs, for registered nurses. The California Health Care Foundation did conclude that hospitals were increasingly relying on “travel nurses” (nurses from out of state or from other countries, hired for short periods of time) and on “float nurses” who move from floor to floor to cover lunch breaks for the regular staff. Such changes may result in less continuity of nursing care for patients. They also showed that hospitals across the state experienced shrinking operating margins beginning in 2002, especially the hospitals that were initially fiscally strongest. However, they emphasize that many other factors could account for this phenomenon. Hospitals of all types did comply with the law, resulting in more hours of nursing care for each patient every day.
          On balance, regulating nurse to patient staff ratios in isolation is not likely to make much of a difference in patient outcomes, nor is it likely to devastate hospitals' finances. Hospitals are complex institutions with many interrelated parts. Just because hospitals with low nurse to patient staffing ratios tend to have poorer outcomes than other hospitals does not mean that if we "fix" the ratio, care will necessarily improve. Assuring that there are enough nurses to provide good care is essential, but that step alone is unlikely to dramatically improve the hospital experience.


September 24, 2018

Of Mice and Men

For middle-aged mice, these are the best of times. Scientists now understand genetic factors that lead to the development of disease, disability, and death—in mice. Most importantly, researchers have found ways to improve the “healthspan,” the period of disease- and disability-free life before death—in mice. The question is whether the approaches they are developing will be applicable to people, and the ethical implications if they are.
The basic ideas are spelled out in a trio of “viewpoint” articles published in JAMA last week. S. Jay Olshansky, writing from an epidemiologic perspective, observes that over the past century, dramatic gains in life expectancy have been accomplished by reducing mortality among children and young adults. But once these gains have been made, the only remaining way to lengthen life expectancy is by extending the lives of people at the other end of the age spectrum. Medical science has therefore concentrated on tackling the diseases of old age, one by one. Unfortunately, as Barzilai et al comment in their essay, “efforts focused on preventing individual diseases will have limited net effect on population health because one disease will be exchanged for another.” We’re already seeing this phenomenon: as fewer people die of heart disease, they develop and die of Alzheimer’s instead. Far better would be to tackle the aging process itself. Targeting the underlying driver of all the chronic diseases at once could, in principle, prevent or at least delay those disorders.
So, what do we know about turning off biological aging? We know there’s a gene in mice with the euphonious name rps6kb1, and if it’s “knocked out” (molecular genetics speak for “inactivated”), female mice live longer, healthier lives. We know there’s another gene called Sirt6 (short for Sirtuin 6), which is present in multiple mammalian species including humans, and if it is “overexpressed” (genetics speak for “turned on”) in male mice, they live longer. We also know that all creatures, including people, have “senescent cells,” old cells that have stopped dividing and start releasing all kinds of chemicals. When an individual has more than some threshold number of such cells, it develops chronic diseases and frailty and is at high risk of dying. When the senescent cells of a mouse are destroyed, the mouse lives longer and without a long period of deterioration before death.
And what progress has been made in identifying drugs that achieve these goals in mice? And what about in people? Reportedly, the Interventions Testing Program, funded by the National Institute on Aging, has examined 26 “candidate drugs” for their effects on mice. They have identified 6, including the anti-inflammatory drug, aspirin, the anti-diabetes drug, acarbose, the immunosuppressive drug, rapamycin, and the estrogen, 17a-estradiol, as effective in some mice. Intervening in mice of an age equivalent to 70 human years has “extended life by more than 20 years and increase[d] health span even more substantially.” Other studies have found that the drug dasatinib (related to the anti-cancer drug, Tarceva) has a powerful effect in destroying senescent cells. In mice that are the equivalent of 80 human years, treatment with dasatinib combined with quercetin (a plant chemical found in green tea, red wine, apples, and other foods) increases survival 36 percent without increasing disability before death.
We don’t know whether any of these chemicals work in humans. And we have no idea at all whether they will produce side effects, though we do know that earlier attempts to interfere with cell lifespan were associated with the development of cancer. This is not entirely surprising, as the essence of cancer is uncontrolled cell proliferation. So even the very upbeat article by Tchkonia and Kirkland, the third of the triad, ends on a cautionary note: “…Patients should be advised not to self-medicate with senolytic agents or other drugs that target fundamental aging processes in the expectation that conditions alleviated in mice will be alleviated in people.”
If, years from now, human studies indicate the drugs or others like them are effective, we will have to deal with the ethical implications of extending the “healthspan.” What will they cost? Will everyone have access to such medications? Will we create greater inequality within society? Between countries? Banning such research on the grounds that a ballooning of the elderly population is unsustainable is almost certainly going to be impossible—the lure of more disease-free life will be irresistible. But we can begin to think about the consequences of our brave new world.

September 17, 2018

An Aspirin a Day...

The headlines this week—aside from the hurricane, the typhoon, and the charge of sexual misconduct against the Supreme Court nominee—are all about aspirin. For older people, unless you live in the Carolinas or Hong Kong, this is definitely the story. A new study (reported as 3 separate studies but really just one study with three different endpoints) threatens to unseat aspirin from its coveted spot as the little-pill-that-could.
A single aspirin a day, many people believed, could stave off heart disease, stroke, cancer, and perhaps dementia. If taken as a “baby aspirin,” a dose of 81 mg a day instead of the 325 mg in a regular aspirin tablet, and with a special “enteric” coating to protect the lining of the stomach, it was touted as effective with virtually no side effects. The truth, unfortunately, seems to be that it is neither effective nor devoid of side effects when taken by healthy older people.
The study, published online in the New England Journal of Medicine, examines three plausible benefits of low dose, enteric-coated aspirin. First, the investigators asked whether aspirin has a desirable effect on cardiovascular events such as heart failure requiring hospitalization, stroke, or heart attack. They found no difference between healthy older people in the US or Australia (where older was defined as over 70 except in blacks and Hispanics, where it was defined as over 65) who took 100 mg of aspirin and those who did not.
Next, they looked at whether aspirin has an effect on how long healthy older people live without developing a disability. Again, they found no statistically significant difference between those who took aspirin and those who didn’t.
Finally, they examined overall mortality in the aspirin-takers and the non-aspirin takers. Once again, the two groups were indistinguishable.
There was, however, one striking difference in outcomes between the 9525 people who were randomized to take aspirin and the 9589 people who were randomized to placebo: the risk of bleeding was significantly higher. And by bleeding, the investigators meant major bleeding such as a gastrointestinal bleed or an intracranial hemorrhage. 
Not only did this randomized controlled study fail to show any benefit from taking aspirin, and not only did it show an increased risk of harm, but even when the results were subjected to subgroup analysis, no group emerged as potential beneficiaries. The authors looked at the composite endpoint (dementia, death, or persistent disability) in several pre-specified subgroups. One was gender: in the past, aspirin has been touted as preventive for healthy men but not women; in this study, neither men nor women benefited. Another was frailty (though I’m not quite sure how 421 of the “healthy” elderly subjects could have met the definition of frailty): in this study, neither the frail nor the non-frail benefited. If anything, there was a trend towards worse outcomes in the frail group, though the numbers were so small that the difference was not statistically significant and might well be due to chance.
No study is perfect and this one is no exception. The median period of observation was 4.7 years, a relatively short period with respect to the time needed to develop dementia or heart disease. The analysis was done on an “intention to treat” basis, which is the way such studies are supposed to be analyzed, but in fact only 2/3 of the people assigned to take aspirin were actually taking it by the end of the study period. The benefit of aspirin might therefore have been under-estimated. The risk of bleeding, however, which was already substantial in the aspirin users, may have also been under-estimated. For some reason, the study used a 100 mg dose even though a standard baby aspirin contains only 81 mg: maybe the results would have been different with an even smaller dose. But the strengths of the study are impressive. It was randomized; follow up was almost complete; data collection seems to have been thorough and careful.
I have a confession to make: for several years, I took a baby aspirin every day. I’m under 70 and I’m female, so my physician did not recommend that I take aspirin. I took it nonetheless because I really don’t want to have a stroke and thought that just maybe taking aspirin was something I could do to help. I took it because years ago, before I went to medical school, I worked in a hematology research lab and spent my days studying platelet aggregation. It turned out that people who had taken a single aspirin tablet within two weeks of my testing their blood showed markedly decreased clumping of platelets, blood cells that are critically involved in the clotting process. About a year ago, I had several episodes of subconjunctival hemorrhage, a benign form of bleeding involving the blood vessels of the eye. I worried the bleeding might be related to aspirin, so I stopped taking it. 
Today, the evidence is compelling that for people without heart disease or dementia or stroke, an aspirin is more likely to cause harm than good. As of now, aspirin has joined the ranks of other failed panaceas such as estrogen and calcium supplements. 

September 09, 2018

Looking Ahead

British researchers recently projected care needs of the very old.  We would do well to pay attention to their analysis—or, if we think Americans are substantially different from their British counterparts, then we should replicate the analysis with our own data. Their study concludes that the number of very old people (over age 85) who will be highly dependent will double by 2035. This despite a marked decline in the anticipated rate of dementia and the growth of the population who remain fully independent. The seeming paradox arises from an increase in the number of comorbidities that the very old will have and the interaction between multimorbidity and dementia. In short, if you do survive to extreme old age and if you are one of the still substantial number of people who develops dementia, you probably will also have several chronic diseases. The combination spells dependence, where the study authors define high dependency as requiring 24-hour care, medium dependency as needing help at regular times daily, low dependency as needing help less than daily, and independence as free from care needs.

A few charts make the points best.

If we look at the proportion of all 85+ year-old men who were highly dependent in 2015, we can see that it was low then and it will have fallen further by 2035. Very old women are somewhat worse off than men today and the discrepancy will increase by 2035. Since the total number of people over 85 will rise considerably over the next 20 years, the total number of dependent, very old men and women will be larger than it is today--in the UK, the numbers will double.

 If, instead, we look at the number of additional years that people who turn 65 can expect to live, we have a more nuanced view. Men who turned 65 in 2015 can anticipate another 18.7 years of life, of which 11.1 years will be spent entirely independent and 1.4 years will be in a state of high dependency. A man who turns 65 in 2035, projections suggest, can expect another 22.2 years of life, during which he will spend 15.2 years independent and only 1.1 years dependent.

Women who turned 65 in 2015 can expect to live another 21 years, during which they will be independent for 10.7 years and very dependent for 2 years. The next generation, which turns 65 in 2035, will have a life expectancy of 24.1 years and will be independent for 11.6 of them. Unfortunately, the length of severe dependency will increase to 2.7 years.
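The proportions behind these projections are easy to work out. A minimal sketch (the years are the ones quoted above; the cohort labels and the idea of comparing shares of remaining life are mine):

```python
# Share of remaining life at 65 spent fully independent vs. highly dependent,
# using the projected UK figures quoted in the text (2015 vs. 2035 cohorts).

cohorts = {
    "men 2015":   {"total": 18.7, "independent": 11.1, "high_dependency": 1.4},
    "men 2035":   {"total": 22.2, "independent": 15.2, "high_dependency": 1.1},
    "women 2015": {"total": 21.0, "independent": 10.7, "high_dependency": 2.0},
    "women 2035": {"total": 24.1, "independent": 11.6, "high_dependency": 2.7},
}

for name, c in cohorts.items():
    indep_pct = c["independent"] / c["total"] * 100
    dep_pct = c["high_dependency"] / c["total"] * 100
    print(f"{name}: {indep_pct:.0f}% of remaining years independent, "
          f"{dep_pct:.0f}% highly dependent")
```

Run this way, the numbers make the divergence plain: for men, high dependency shrinks both in years and as a share of remaining life, while for women it grows on both counts.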

The implication of all this is that we will need a huge increase in the number of caregivers to accommodate the needs of the very old. Unless we are far more successful in rolling back the rate of chronic disease than we have been to date (cardiovascular disease, diabetes, and stroke are the leading offenders) and can also dramatically cut the risk of dementia, we need to start planning now. We should encourage more people to become nursing aides. This will involve raising the pay and enhancing the status of the job. We will also want to seriously consider increasing immigration to fill the needs of the oldest old. And we should not assume we are better off than our British counterparts: on the contrary, older Americans today experience more sickness and disability than our European counterparts, and our health care system devotes less attention to social problems than to medical ones.