May 16, 2016

You Get What You Pay For—Or Do You?

The Affordable Care Act, as it turns out, isn’t just about providing health insurance coverage for the 40 million previously uninsured Americans. It’s also about reforming Medicare, in part to pay for some of the costs of providing health insurance for everyone, in part to keep Medicare from going bust, and in part to improve the quality of care provided by Medicare. The favorite strategy for modifying Medicare is “value-based purchasing,” which is another name for pay-for-performance. The idea is simple: don’t just pay whatever doctors or hospitals ask for and don’t pay per service (the original fee-for-service model); instead, pay based on results. After all, physicians aren’t supposed to perform tests and procedures just for the sake of doing something; they are supposed to do things in order to improve health. So why not pay physicians only if they make people better? 

The problem, of course, is that not everyone will get better, no matter how state-of-the-art their treatment, and some of those who do get better will suffer all kinds of complications along the way. To deal with the realities of taking care of people who are old and sick, Medicare has adopted a policy that rewards—or penalizes—hospitals based on their performance on a combination of measures: the processes of care, the outcomes of care (specifically 30-day mortality), patient satisfaction, and whether or not patients are readmitted to the hospital within a month of discharge. The big question is, does this approach work?

Previous studies have failed to show any benefit on clinical processes or patient satisfaction. Now, a new study in BMJ suggests that it doesn’t improve mortality either. The authors examined mortality among patients with heart attacks, heart failure, or pneumonia (the 3 conditions for which Medicare “incentivizes” hospitals using its value-based reimbursement scheme). They compared mortality rates for these conditions before and after the introduction of Hospital Value-Based Purchasing (HVBP). They studied whether changes in mortality in the target conditions differed from changes in a comparable group of patients with other medical conditions. They tested whether the trends were any different at hospitals that didn’t participate in the HVBP system. And to look for trends, they determined mortality rates over the 3 years before the introduction of HVBP and over the 3 years after. The result: nothing changed.
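For readers who want the logic spelled out, here is a minimal sketch of the difference-in-differences reasoning behind this kind of analysis, with invented mortality rates standing in for the study’s risk-adjusted data:

```python
# A toy difference-in-differences calculation; the numbers are invented,
# and the BMJ study's actual models are far more elaborate.

def did_estimate(target_pre, target_post, control_pre, control_post):
    """Change in the target group minus change in the control group.

    A value near zero suggests the policy produced no effect beyond
    the secular trend shared with the control group.
    """
    return (target_post - target_pre) - (control_post - control_pre)

# Hypothetical 30-day mortality rates (percent), before and after HVBP:
incentivized = {"pre": 12.0, "post": 11.2}  # heart attack, heart failure, pneumonia
non_target = {"pre": 10.5, "post": 9.8}     # other medical conditions

effect = did_estimate(incentivized["pre"], incentivized["post"],
                      non_target["pre"], non_target["post"])
print(f"Difference-in-differences estimate: {effect:+.1f} percentage points")
# ~-0.1: mortality fell everywhere, but no faster in the target conditions.
```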

Not everyone will be satisfied with the authors' choice of comparison groups—either the patients with different medical conditions or the hospitals outside the HVBP system. The risk adjustment process might be flawed. Maybe 3 years wasn’t long enough to see an effect, especially since the incentives have been changing—initially, hospitals were rewarded if they did well; now they are penalized if they do poorly, and the magnitude of the penalty increases annually. So it would be premature to conclude that value-based purchasing is a failure. But surely it isn’t a great success, either, if no one has been able to prove that it does what it’s supposed to.

Medicare has the potential to shape geriatric care in the U.S. There’s no question that strategies invoked in the past, such as the introduction of prospective payment for hospital care (i.e., paying a fixed amount for a given condition, rather than a fixed amount per day in the hospital), have made a huge difference in both costs and outcomes. But it’s not at all clear that the prevailing enthusiasm for pay-for-performance is the answer to providing better, more cost-effective care to older people.

Maybe we need to go back to the drawing board and analyze the weaknesses of our current system. Perhaps what we will find is that the weaknesses are not just fragmentation, lack of coordination, and the triumph of high tech over high touch, although these are all important. Perhaps what we will find is that the weaknesses include a focus on disease rather than function, on individuals rather than families, and on the values of physicians rather than patients.

May 09, 2016

Beyond Doctoring

I’ve long been amazed by the legerdemain that went into deciding what Medicare will cover and what it won’t. I’m not talking about decisions made in the past decade about what procedures to pay for, by and large rational decisions based on a careful analysis of the evidence supporting their efficacy. I’m talking about some of the most basic aspects of Medicare, such as its exclusion of long-term care. Now I recognize that the main concern of those who crafted the 1965 legislation was to provide some kind of health insurance for older people without busting the budget. To achieve this end, they decided to distinguish between things that are medical (which Medicare would ostensibly cover) and things that are not (which it wouldn’t). What that distinction has meant is that housing, transportation, diet, and all kinds of other nominally social goods are off limits for Medicare coverage. A new study by Elizabeth Bradley and her colleagues at Yale shows just how arbitrary—and often counterproductive—such a conceptual divide actually is.

Following up on their groundbreaking work showing that countries with higher social service spending relative to health care spending had better health outcomes, the study team compared the performance of the 50 states (and the District of Columbia) over a 10-year period, from 2000 to 2009. They defined the extent of each state’s investment in social services by calculating the ratio of social service plus public health spending (on education, income support, nutritional assistance, housing, transportation, and the environment) to the state’s total government health care spending (Medicare plus Medicaid). Then they examined the relationship between this ratio and eight health outcomes (including the percent of the population that is obese, has asthma, or has functional limitations, and mortality rates for heart attack, lung cancer, and diabetes). What they found is that states with higher ratios of social to health spending had significantly better health outcomes (in seven of the eight domains).
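To make the key measure concrete, here is a schematic version with invented spending figures for three imaginary states; the actual analysis used a decade of data and multilevel statistical models, not a simple correlation:

```python
# Schematic calculation of the social-to-health spending ratio and its
# association with one outcome. All numbers are invented for illustration.
from statistics import correlation  # requires Python 3.10+

states = {
    # social + public health spending, government health care spending
    # (Medicare plus Medicaid), and percent obese (all hypothetical)
    "State A": {"social": 120, "health": 60, "pct_obese": 24.0},
    "State B": {"social": 100, "health": 80, "pct_obese": 28.5},
    "State C": {"social": 90, "health": 95, "pct_obese": 31.0},
}

ratios = [s["social"] / s["health"] for s in states.values()]
outcomes = [s["pct_obese"] for s in states.values()]

# A negative correlation: the higher the ratio of social to health
# spending, the lower the obesity rate in this toy data.
print(correlation(ratios, outcomes))
```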

The variability in spending on health care (as a percentage of GDP) across the states is striking, ranging from less than 4 percent in Colorado, Utah, and Wyoming to nearly 10 percent in Maine, West Virginia, and Missouri. Likewise, the variability in spending on social services and public health is dramatic, going from about 12 percent to over 20 percent. The net effect is that the allocation of resources between social services and health care differs substantially from one part of the country to the next.

It’s a complicated study and I’m sure that methodology mavens will have a field day with it. But the attempt to assess the contribution of social supports to outcomes is so reasonable and the results are so striking that we have to take very seriously the idea that social factors are a major determinant of health and well-being. I’m convinced this is particularly true in older people, whose quality of life is at least as affected by where they live and their ability to find meaning in life as it is by their physical ailments. I suspect that this study is as important as work by Michael Marmot showing that health worsens as people descend the social ladder—not just because of income inequality, but also because of discrepancies in social status. If we want to foster good health, which the World Health Organization defines as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity,” we need to focus on relationships and housing as well as on drugs and devices. And for older people, that may mean user-friendly computers and better assisted living facilities rather than a left ventricular assist device or a new monoclonal antibody.

April 28, 2016

How Much Help Does a Helper Need for a Helper to Give Help?

For some time, I’ve been insisting that the exclusive focus on patients and doctors in our discussions of “shared decision-making” is misplaced. I’ve maintained that our single-minded devotion to “patient engagement” in the practice of medicine is likewise ill-conceived. For many older patients, making medical decisions and providing hands-on care fall at least in part on the shoulders of caregivers, and for the oldest, frailest, and most cognitively impaired patients, the responsibility rests entirely with caregivers. Yet caregivers are consistently left out of the loop, or given inadequate information, or only called in at the eleventh hour. A new study in Health Affairs confirms my worst suspicions and argues that we need to provide considerably more support to caregivers if they are to function effectively as care partners.

The researchers identified a mere 66 studies that evaluated the involvement of caregivers in making one or more health-related decisions for seniors. Four of the studies tested an intervention such as a decision aid; the others were descriptive. Only 14 of the studies were quantitative; the remainder were qualitative or used mixed methods. The majority of the decisions had to do with either nursing home placement or end-of-life care. Almost all the studies identified unmet caregiver needs.

Interestingly, only one intervention (a decision aid addressing the use of feeding tubes) led to improved decision making in a study that didn’t appear biased. But in general, what emerged from the analysis was that caregivers need more information, they need discussions of values and preferences, they need help in figuring out how to make a decision, and they need support from doctors and nurses—before, during, and after the fateful decision is made.

The new study also recognizes that caregivers are involved in making lots of small but consequential decisions, not just major decisions such as whether an older person should move to a nursing home or enroll in hospice. Deciding whether to bring a patient with cough and fever to the emergency room, for example, versus initiating treatment at home with oral medications and oxygen, or using exclusively comfort-oriented measures such as Tylenol and morphine, has huge implications for the patient’s well-being and future trajectory, and for health care costs.

Caregivers aren’t yet another obstacle for busy doctors and nurses to overcome. Involving caregivers in no way diminishes patient autonomy—in fact, it promotes patient self-determination by providing a window into patients’ wishes and by helping clinicians implement those wishes. The caregiver needs to be seen as the clinician’s best friend, as the partner who can make all the difference. 

The way forward is clear: physicians and nurses taking care of older patients who have a caregiver need to involve that caregiver at every step of the health care journey. Identifying a nurse or social worker to serve as a health care coach for the caregiver would make the system work even better.

April 25, 2016

Where’s the “Assist” in “Assisted Living”?

Assisted living (AL) exists for one very simple reason: most older people don’t want to live in a nursing home. They want privacy and autonomy, which nursing homes seldom offer. Despite all the efforts to put the “home” back into nursing homes, and despite the culture change movement that sought to transform the structure and organization of nursing facilities, most people still don’t want to live in a nursing home. One consequence is that assisted living facilities today are filled with people who not that long ago would have lived in a nursing home: they are old, they have multiple chronic conditions, and just about half of them have some degree of dementia. But assisted living facilities were created with the idea that they would be strictly non-medical residences. That’s a problem.

The tension between the idealized image of the assisted living resident and the actual assisted living resident increasingly translates into a struggle over what services AL can legitimately provide and who will regulate them. The rules are set by the individual states, so what happens in California is not the same as what happens in Alabama. In some states, only a licensed nurse can give a patient a medication. In other states, aides can give out medications. In some states, aides can supervise a patient taking a medication—they can remind the person he is supposed to take a pill and watch him doing it, but they can’t take the pill out of a bottle and give it to him. In other states, aides aren’t even allowed to do that. Periodically, state legislatures try to change the rules about just how medical AL should be. That’s what’s happening in Massachusetts today. Proposed legislation would allow AL to provide certain medical services that are currently unavailable: treating skin problems, providing wound care, giving injections, and administering oxygen. And predictably, conflict has erupted over whether the rules should be changed and if they are, who should be responsible for ongoing monitoring.

The controversy over whether and to what extent AL should be able to provide nursing care is usually framed as a concern about the medicalization of assisted living. The whole idea of AL is that it is much more like a person’s home than like a hospital, and the concern is that if residents can have medical procedures on site, this will undermine AL’s home-like essence. But is that really the way to think about this issue?

After all, if an older person lives in his own residence, say the house where he has lived for the past fifty years, and his spouse gives him his medication, no one would object that his home has turned into a medical facility. Ditto if a family member applies skin cream to a rash. And does it turn the home into a hospital if a personal care attendant wheels in an oxygen tank and hooks it up to a mask or to nasal prongs worn by the older individual? Family members learn to give insulin injections. They are taught how to give artificial nutrition through a gastrostomy tube and to administer intravenous medication. They even operate all kinds of pumps and monitoring equipment. In fact, the report Home Alone, issued a few years ago, found that almost half of all family caregivers reported that medical tasks formed part of their responsibility, including some pretty complex interventions.

Now nursing aides aren’t the same as family members. They take on whatever responsibilities they are assigned because it’s their job, not out of love or compassion or filial obligation. But the point is that if family members routinely perform these sorts of duties, in most cases with minimal instruction and no supervision, then surely aides hired by assisted living facilities could be expected to do precisely the same things, perhaps with a smidgeon more instruction and some degree of ongoing supervision. In any case, the act of putting on a bandage or attaching a bottle of Ensure to a feeding tube doesn’t automatically turn AL into a medical facility. But failing to let aides do some of the tasks that people would expect their families to provide if they lived in their own home turns AL into a very inadequate sort of a home indeed.

Sometimes I think we draw the wrong conclusions about who can do what because we assume that the person who performs a given task should have a thorough understanding of the technology he or she is using. That would be nice, I suppose, but how many of us who drive a car have the slightest understanding of how the transmission works or the difference between a generator and an alternator? In the case of people taking medicines or getting treatment for a rash, we shouldn’t confuse administering treatment with monitoring effectiveness. I don’t see why the same person necessarily has to do both.

Years ago, I read a study of the use of psychotropic medications in the nursing home. The authors were shocked to discover that the nurses who gave out powerful medications had no idea of their side effects and couldn’t identify one if their life depended on it. I thought at the time and I still think today that the researchers’ dismay was misplaced. Someone should have been monitoring those nursing home residents: what was shocking was that nobody was. But did it have to be the person who doled out pills? Her job was to make sure that Sally Smith got pills that had been prescribed for Sally Smith—and not pills that had been prescribed for Stuart Smith. Her job was to make sure that Sally Smith got her pills three times a day and not twice or four times and that she actually swallowed the pills. Her job was to report to a physician if Sally Smith became very sleepy or was more confused than usual or developed difficulty walking—but not to figure out whether the pills were causing those problems.

The same goes for assisted living today. Of course people should be able to get simple “medical” treatment on site, just as they would if they had stayed in their previous home. Of course staff should be able to administer any treatment that family members routinely provide without an RN or an MD degree. Yes, staff need to learn how to do these things. And yes, a system needs to be in place to assure that patients—in this case we are talking about patients—have adequate monitoring of their medical problems. But let’s separate administration of treatment from ongoing assessment of the medical response to treatment. And let’s not transform the character of AL by subjecting it to the same rules as a nursing home. The way forward is to provide on-site medical treatment while designing new rules that deal separately with the training and supervision of aides who are part of the staff and with the provision of ongoing medical care by physicians and nurse practitioners who are not.

April 18, 2016

If It’s Good for Wisconsin...

The big news in palliative care circles this week was the results of the PerryUndem Research poll that surveyed physicians on their views about advance care planning discussions. It made the Boston Globe, it made Forbes, and it made US News and World Report, though I couldn’t find any mention of it in the NY Times, the Washington Post, or the Wall Street Journal—maybe they are holding off until Sunday. Or maybe they realized that the poll is new, but the findings aren’t. Physicians still don’t talk to their patients about advance care planning very much.

To be fair, what is new is that physicians who take care of the sixty-five plus population on a regular basis, or at least primary care physicians, oncologists, pulmonary doctors, and cardiologists, overwhelmingly think they should be talking to patients about their goals of care and their preferences in the face of advanced illness. They think it’s important and that it’s their job to do so. They support Medicare’s decision to reimburse directly for such conversations. But then comes the disconnect. While acknowledging the importance of having such conversations, they have all kinds of excuses for not having them: not enough time, uncertainty about what to say, no formal training. Even those who say that the new reimbursement policy provides a strong incentive to have “the conversation” haven’t actually billed Medicare as yet—only 14 percent of the 470 primary care doctors and 266 subspecialists who were surveyed say they have submitted a bill for advance care planning since the new rule went into effect in January.

The Patient Self-Determination Act of 1990, which mandated that all health care facilities that receive federal money ask patients if they have an advance directive and offer them the opportunity to create one if they don’t, didn’t push doctors to do their job. The availability of Medicare reimbursement hasn’t pushed doctors to act—though perhaps it’s too early to judge. All the publicity given to the last phase of life with projects such as the Conversation Project and books such as Atul Gawande’s bestselling Being Mortal has raised awareness and has perhaps moved physicians to accept that advance care planning is an important part of medical care for patients with advanced illness, but it hasn’t had the kind of impact we’d like to see. So what would work?

The survey identifies two promising areas: formal training and a systematic approach to implementing advance care planning. When either of these was in place, physicians were more likely to report that they had conversations with patients about their preferences. Among physicians who had had some sort of training, 79 percent said they had such discussions at least once a week, compared with 61 percent of those who hadn’t been trained. Among physicians whose practices or health care systems had a system in place to promote advance care planning, 81 percent had the talk, versus 68 percent of those who didn’t work in such a system. Perhaps even more important was the use of an electronic health record with a place to document preferences for care: 79 percent of physicians with such an EHR said they had conversations at least weekly, compared to 51 percent of those who did not.

Maybe the solution to increasing advance care planning is to do more formal training and to promote systems that support this activity, including electronic medical records with a special “field” to enter the results of such conversations. But I suspect that these approaches, though laudable, will not be enough. After all, we don’t know how often physicians actually have advance care planning discussions with their patients; we only know their estimate of how often they discuss such matters—and we also know that when physicians are asked to estimate how much time they spend with each patient, they are notoriously inaccurate. If we really want to ensure that advance care planning takes place, at least for patients with advanced illness, we need to promote advance care planning to the public as well as to doctors.

Earlier public campaigns to promote advance care planning were not very successful: the Robert Wood Johnson Foundation spent millions and its efforts achieved little. But the one approach that by all accounts has worked is the “Respecting Choices” program in La Crosse, Wisconsin, which combined specialized training for clinicians, a systematic approach to implementation within the Gundersen Health System, and education for patients and families. If we truly want to make a difference, it’s not going to be enough to focus on a single approach. We need the kind of comprehensive approach that worked in Wisconsin, and we need to use it throughout the country.

April 11, 2016

Ready, Aim, Fire

Firearms are a geriatric issue. The reason: suicide is more common in older people than in the general population and guns are the method of choice for older people who kill themselves. In fact, elderly white men have the highest suicide rate in the country (29/100,000 compared to the national average of 12.4 deaths/100,000). White men over 85 have a particularly high suicide rate: 47/100,000. A study in the Lancet could in principle help remedy this problem by shedding light on which of the existing firearms laws have any effect.

Examining data on suicides and homicides in the United States between 2008 and 2010, the researchers identified 32,000 gun-related deaths. They then looked at the state in which each death occurred and the firearm legislation in effect there. The results: 25 types of firearm legislation are found across the 50 states. Of these laws, 9 were associated with a decrease in mortality, 9 with an increase in mortality, and 7 were equivocal. The 3 state laws with the greatest evidence of statistically significant benefit are universal background checks for buying guns, universal background checks for buying ammunition, and ID requirements for buying firearms. The single law most likely to lead to an increase in violent deaths is “stand your ground” legislation. The authors projected that with federal-level implementation of universal background checks, gun-related deaths would fall from 10.35/100,000 to 4.46/100,000.
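To put that projection in absolute terms, here is a back-of-the-envelope calculation; the population figure is my assumption for illustration, not a number from the study:

```python
# Translate the projected rate change into an approximate annual death count.
US_POPULATION = 316_000_000       # rough 2013 figure; an assumption

current_rate = 10.35 / 100_000    # gun deaths per person, from the study
projected_rate = 4.46 / 100_000   # with federal universal background checks

averted = US_POPULATION * (current_rate - projected_rate)
print(f"~{averted:,.0f} fewer gun deaths per year")  # roughly 18,600
```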

Now there are serious methodological problems with this study, as US newspapers were quick to point out: basically, it compares what was happening in various states before a particular law was enacted to what happened afterwards and assumes that any changes in gun violence were due to the law. But it’s entirely possible that there were other things going on in those states that led to the change in gun violence. In some cases, especially in states where the law seemed to make matters worse, the changes that were occurring might have led to the decision to enact the legislation in the first place. But this kind of study is the best we have right now. And there’s a reason we don’t have anything better.

The reason we don’t have better studies on the effectiveness (or lack of effectiveness) of various gun control measures is that the CDC, and to a large extent the NIH, are prevented from funding such studies. Thanks to the “Dickey Amendment,” passed by Congress in 1996, “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate [for] or promote gun control.” This clause effectively scared the CDC, which spends millions on studies of highway safety, away from supporting any research on guns. And in 2011, the Dickey Amendment was extended to the NIH.

Congress has made a few attempts to repeal the Dickey Amendment, most recently in January 2016, after the San Bernardino shootings. They went nowhere. The irony is that even the most rabid right-wing politicians and their supporters who want to shrink the federal government, in the most extreme cases eliminating Medicare, Medicaid, Social Security, and the income tax (the view of the Koch brothers, according to Jane Mayer’s book, Dark Money), believe that the one role of the federal government is to protect its citizens from physical harm.

If the federal government is to keep us safe, it has to know how best to achieve that end. Neither ideology nor common sense is a reliable guide to determining effectiveness. Research on how to reduce gun violence is essential—and it’s a geriatric issue.

April 04, 2016

The most interesting article I came upon this past week dealing with an issue of great importance to older people wasn’t in JAMA or the New England Journal of Medicine and it wasn’t a report from the Institute of Medicine or from the Henry J. Kaiser Foundation. It was in the Wall Street Journal.

The article reported that beginning April 1, Medicare is embarking on a brave new experiment: it is “bundling” payment for patients getting a knee or hip replaced. MedPAC, the independent agency that advises Congress on how to improve Medicare, has long advocated reforming the way Medicare pays for surgical procedures. And the CMS Innovation Center has funded a variety of projects testing the ability of bundled payments to improve care. But now, for the first time, proposals and theories affecting nearly half a million patients are being put into practice.

Actually, it’s not half a million patients right away. Only hospitals in the 67 metropolitan areas randomly selected by CMS will be affected—New York and Los Angeles won the lottery—hospitals that perform about one-third of all hip and knee replacement surgeries in Medicare enrollees. And calling the new payment mechanism “bundling” isn’t entirely accurate either: Medicare isn’t giving out a single lump sum for all aspects of care and telling orthopedists, hospitals, radiologists, and rehab facilities to divide it up however they see fit. What it’s doing instead is paying everyone the way it usually does—hospitals get a single DRG (diagnosis-related group) payment, SNFs get paid a prospectively determined amount for each day the patient is in the SNF, and physicians are paid on a fee-for-service basis. But if the total amount that Medicare ends up distributing over a 90-day period exceeds a target figure, the hospital has to pay back the excess. And if the total amount is less than the target, the hospital gets the difference. In short, rather than truly sharing the risk—or, from a clinical perspective, the responsibility—for care, the burden of ensuring that everyone provides optimal care rests solely on the hospital.
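In other words, the “bundle” is really a retrospective reconciliation against a target price. Here is a minimal sketch of that logic; the dollar amounts and the target are invented for illustration:

```python
# Everyone is paid as usual; afterwards, total 90-day episode spending is
# compared to a target, and the hospital alone absorbs the difference.

def reconcile(payments: list[float], target: float) -> float:
    """Return the hospital's reconciliation amount.

    Positive: Medicare pays the hospital the savings.
    Negative: the hospital repays Medicare the overage.
    """
    return target - sum(payments)

# Hypothetical 90-day episode for one knee replacement:
episode = [
    12_000,  # hospital DRG payment
    6_500,   # SNF per-diem payments
    2_300,   # physician fee-for-service payments
    1_200,   # home health and outpatient physical therapy
]
print(reconcile(episode, target=20_000))  # -2000: the hospital owes Medicare
```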

Now I think it’s a good idea for hospitals, rehabs, and doctors to work together—and for that matter, physical therapists and free-standing labs and radiology units as well—but I’m not convinced that placing the responsibility exclusively at the hospital’s doorstep is wise. It’s essentially the same approach Medicare has taken to the problem of hospital readmissions—of patients being discharged, only to come back to the same hospital, sometimes for the same problem, in less than a month. Medicare has instituted a system of penalties for hospitals whose readmission rates exceed a given threshold. As a result, the majority of hospitals were penalized for their readmission rates in 2015, some losing as much as 3 percent of their Medicare reimbursement. In a number of states, including New York and Massachusetts, three-quarters or more of the hospitals were hit with penalties.

The problem in both cases, the readmissions and the payment for joint replacement surgery, is twofold: hospitals do not have control over all aspects of the patient’s care, and sometimes things go wrong that couldn’t have been prevented, no matter how much control the hospital exercised. Many Medicare enrollees are very old and very frail—these patients are likely to get sick again even if they are discharged from the hospital with follow-up arranged, their medications reviewed, and a nurse visit scheduled the day after they get home, all the ingredients of a good “transitional care plan.” These same patients are likely to benefit from a stay in a skilled nursing facility or a rehabilitation hospital after they’ve had a joint replaced, strategies that cost more than sending them home with a few visits by a physical therapist and a nurse or a printout of exercises to do at home.

In the case of the new bundled payments for orthopedic procedures, the hospitals might respond by making sure that their patients go only to the very best skilled nursing facilities, which restore them to perfect functioning in a matter of days, or else go directly home, where the very best visiting nurse service supplies the very best physical therapist, who likewise can restore them to perfect functioning after just a few visits. But I worry that the hospitals might try to cherry-pick patients—only accepting for surgery those people who are eighty-going-on-sixty and will do just fine at home with no services at all. I worry that hospitals will despair of their ability to control anything that goes on in a nursing home or home health agency and will opt instead to buy them up, leading to further consolidation within the hospital industry—and bigger isn’t always better for patients. And I worry that in the unlikely event that the system works, with care improving and costs going down, hospitals will have simply robbed Peter to pay Paul: they will achieve improvements in hip and knee surgery at the expense of care for abdominal surgery or stroke.

I do think that older patients benefit from coordinated care. They win if their orthopedists at the hospital talk to the attending physician at the skilled nursing facility. They win if the details of their hospital stay are available electronically to the staff at the rehab facility. They win if hospitals, SNFs, and home care agencies work together. Let’s hope that Medicare’s experiment achieves that result.

March 28, 2016

Make No Bones About It

For some time, I’ve tried to find an up-to-date list of the medications most commonly prescribed to older people. Sounds like a simple question, but getting an answer has been surprisingly challenging. Most of the available data is ten years old, and that’s a long time in an era when medications go off patent, new medications are introduced, and advertising campaigns affect medication use. Much of the information is for the population as a whole—but kids really are very different from octogenarians in their pill-taking. So I was pleased to find an article in JAMA Internal Medicine this week called “Changes in Prescription and Over-the-Counter Medication and Dietary Supplement Use Among Older Adults in the United States, 2005 vs 2011.” The study drew on in-person interviews with a nationally representative sample of community-dwelling older adults, over-sampling certain populations to make sure its interviewees were truly representative. And the results are revealing.

The main finding is that fully 87.7 percent of adults over 65 (excluding those in institutions) took at least one prescription drug regularly in 2010-2011, up slightly from 2005-2006. Moreover, 35.8 percent of the population took at least five prescription drugs a day (up significantly from 2005, when the rate was 30.6 percent). Lastly, there’s been a 50 percent increase in the number of people taking vitamins or supplements.

The Big Ten medications are pretty much what you would expect, though the actual percentages are a bit surprising. In first place is over-the-counter aspirin (40.2 percent); simvastatin, a cholesterol-lowering medication, is in second place (22.5 percent); and atorvastatin, another statin (formerly sold exclusively as Lipitor, before it lost patent protection), is number ten. The number three, four, six, and seven spots are taken by the anti-hypertensives lisinopril (19.9 percent), hydrochlorothiazide (19.3 percent), metoprolol (14.9 percent), and amlodipine (13.4 percent), respectively, although it’s worth pointing out that these drugs can be used for purposes other than lowering blood pressure—hydrochlorothiazide is a diuretic that may be used to treat heart failure, metoprolol is a beta-blocker often used to treat angina, and amlodipine is a calcium-channel blocker that can also be used in coronary artery disease.

The remaining three drugs on the list are levothyroxine, a thyroid replacement medication, in fifth place; metformin, a drug used to treat diabetes, in eighth place; and omeprazole, a proton-pump inhibitor used for ulcers and acid reflux, in the ninth spot. We are left wondering what all this means: are older people getting too many drugs? Not enough drugs? Are they getting the right medications?

Descriptive statistics cannot answer whether some patients are getting medicines they don’t need (though I’m pretty sure that’s the case) and others aren’t getting medicines from which they might benefit (probably also the case). But I think they do tell us something about the effectiveness of the strategies used to promote medications. When medications are categorized by type, statins are actually taken by just over 50 percent of older people (simvastatin and atorvastatin, drugs number 2 and 10 on the list of individual agents, are not the only statins available) and anti-hypertensives by just over 65 percent of the elderly. What this tells me is that the combination of direct-to-consumer advertising, drug detailing to physicians, and professional society guidelines (the methods used to promote statins and anti-hypertensives, at least when new drugs in each of these classes appeared on the scene) really works to change behavior. It doesn’t prove anything, but it’s awfully suggestive.

Also worth exploring is the dramatic increase in the percentage of older people who take supplements. The authors of the study assert that this occurred although there is “no evidence of any clinical benefit.” I think this is a distortion. There may be little evidence of clinical benefit for some of the supplements, such as omega-3 fatty acids, but the story for vitamin D and calcium is both messier and more illuminating.

Over the years, vitamin D has gone from being clearly necessary for strong bones, to very useful in preventing falls, to a dangerous poison, to a useless additive, and back again. Just what do we know as of 2016? We know that vitamin D is essential to human beings and that we get it from sun exposure or from diet, although few foods naturally contain vitamin D; most dietary sources, such as milk, are fortified. Actually, that’s not quite accurate either, as what we get from the sun and from food is a precursor of the active form of vitamin D that we need to make bones, and we rely on our kidneys and livers to perform the transformation. We also know from the National Health and Nutrition Examination Survey that at least as of 2005-2006, 42 percent of adults had vitamin D levels below 20 ng/ml, which just about all authorities regard as too low. We also know that people who take megadose vitamin D as part of a fad diet, sometimes taking as much as 100 times the recommended daily dose, can be poisoned by such quantities.

The big question remains whether taking supplementary vitamin D—on the order of 800 units a day (not the tens of thousands of units taken by fad dieters)—prevents falls and fractures. Falls and fractures cost over $28 billion in older people, and those are just the direct costs; the figure doesn’t include the pain and suffering and the loss of functioning and independence. The data on the efficacy of vitamin D are a mixed bag, with some studies showing strong evidence that it helps and a few failing to show any benefit at all. Putting all the conflicting evidence together, the American Geriatrics Society recommends, based on the preponderance of evidence, that all older adults, whether living in the community or in an institution, take vitamin D supplements of at least 1000 units together with calcium. Judging by the JAMA Internal Medicine article, we have a long way to go to reach this target: while 35 percent of older people do take a multivitamin (which includes 400 units of vitamin D), just under 16 percent take vitamin D alone.

The back story here is that vitamin D is cheap. No drug company is promoting vitamin D. In addition to being cheap, vitamin D has virtually no side effects (unless it is taken at hundreds of times the recommended dose). And it just might work. We should think about the ways that the consumption of cardiac medications has changed—and the ways that these changes have been achieved. We might learn something about how our system operates and how we can change attitudes and behavior toward a therapy that has a good chance of helping without breaking the bank.

March 14, 2016

Pay to Plan

When Medicare began allowing payment to physicians for advance care planning on January 1, bloggers and editorialists and columnists all commented on the new rule. Many said what a great advance this is. Dr. Diane Meier, founder of the Center to Advance Palliative Care, said it was “substantive and symbolic.” Others were more guarded. Dr. Robert Wachter of the University of California San Francisco said he expected a “modest uptick” in the number of advance care planning conversations, but did not anticipate the rule would be “transformative.” Now, an essay in Health Affairs pulls together the various comments and critiques and concludes that unless we overcome the prevailing “training deficit” (the pervasive inability of physicians to carry out such conversations) and develop a health care system that allows for the implementation of whatever choices patients make, the reform will be meaningless.

The reason I didn’t blog on this topic—apart from the fact that so many other voices were chiming in—is that it seemed obvious to me that the change was exclusively symbolic. It was obvious because the truth is that doctors have been able to bill for advance care planning visits for years. Instead of using the elaborate “coding system” that most doctors use for billing purposes, in which you have to assess the level of complexity of the history, physical exam, and something nebulous called “medical decision making,” it is perfectly legitimate to bill based on time. All you have to do is state how long you spent with the patient (and family) and write that “over 50% of the visit was devoted to counseling.” If you do that, then there are no specific rules about “documenting” the physical examination and the history; your note can focus primarily on the substance of the visit, advance care planning. And you get paid more for a 40-minute office visit using the old system of time and counseling ($145.82 in 2016) than you do for a 30-minute advance care planning visit using the new code ($86.66 in 2016), though you can also tack on an extra 30 minutes for filling out forms with the new system and bill another $75.11.
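For the arithmetic-minded, here is the comparison worked out; the dollar figures are the ones cited above, and only the per-minute comparison is my own:

```python
# Compare 2016 Medicare payment under the two billing approaches.
old_40_min_visit = 145.82   # 40-minute visit billed on time plus counseling
new_acp_first_30 = 86.66    # new advance care planning code, first 30 minutes
new_acp_addon_30 = 75.11    # add-on for an additional 30 minutes

print(f"Old system: ${old_40_min_visit:.2f} for 40 min "
      f"(${old_40_min_visit / 40:.2f}/min)")
print(f"New codes:  ${new_acp_first_30 + new_acp_addon_30:.2f} for 60 min "
      f"(${(new_acp_first_30 + new_acp_addon_30) / 60:.2f}/min)")
# Old system: $145.82 for 40 min ($3.65/min)
# New codes:  $161.77 for 60 min ($2.70/min)
```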

To be sure, to use the old system, the patient has to be physically present. You have to write something in your note about the medical history, but “77 year old retired lawyer with stage 4 non-small cell lung cancer, unresponsive to chemotherapy” is good enough; and you have to write something in your note about the physical examination, but “patient is pale and cachectic; he is hoarse and dyspneic” should suffice. The remainder of the note can address goals of care, choices about limitations of treatment, designation of a health care proxy, and so forth.

Now I don’t want to underestimate the power of symbols—especially since the effort to include any kind of mention of advance care planning in the Affordable Care Act was dead on arrival after Sarah Palin made her notorious “death panel” comments. But sometimes adopting health policies that are doomed to be ineffective leads us astray: we think we have “solved” whatever problem led to the introduction of the policy in the first place, and we fail to solve the actual problem over the coming years. Perhaps the Patient Self-Determination Act of 1990 was a legislative example of the same phenomenon—as a result of this law, every state and the District of Columbia passed some sort of advance directive legislation over the next decade, but we now recognize that this kind of “legal transactional” approach to advance care planning, rather than a more communications-based approach, doesn’t work: either people don’t use it, doctors don’t follow the directive, or the directive doesn’t apply in precisely the clinical situations that real people find themselves in.

So yes, Medicare’s decision to reimburse doctors for time explicitly spent on advance care planning is symbolically important. But I worry that it will result in unwarranted complacence, in our checking off advance care planning reform as “accomplished” on our national to-do list. Now that would be a serious mistake.

March 07, 2016

What We Have in Common(wealth)

A year ago, I reported on an interesting comparative study of older adults: the Commonwealth Fund surveyed the health care experience of adults 55 and older in eleven developed countries and found some striking differences. Now Commonwealth has drilled deeper into its data and analyzed differences among “high need” patients in the US and eight other countries (Australia, Canada, France, Germany, the Netherlands, Norway, Sweden, and Switzerland). As usual when we compare ourselves to other countries in the health arena, we don't do so well. And as usual, the differences are enlightening.

This new study looks at patients who are “high need.” I like that term: instead of talking about “high risk” patients (patients who are really at high risk of hospitalization, institutionalization, or death because they already have a lot of needs, not because we can magically determine that they might develop needs in the future), we focus on people who have problems now. The study defines people as high need if they have either three or more chronic conditions or need help with one of their basic daily activities. I might have preferred a composite measure of chronic diseases and functional difficulties, but it turns out to be useful to separate the two for purposes of international comparisons.

Which brings me to the first interesting observation: the US has more people with at least 3 chronic diseases than anyone else, by a lot. In the US, 42 percent of people over 65 have at least 3 chronic diseases. No other country even comes close. Switzerland is the best, at 19 percent. Everyone else is in the 20-29 percent range. Does this reflect actual disease rates? Or is it just that we are more thorough in diagnosis—some might say by over-diagnosing disease? The flip side of this finding is that the US performed best on the ADL measure—only 14 percent of Americans reported they needed a moderate amount or a lot of help, compared to 50 percent of the French. Surely this is cultural—I can imagine that individualistic Americans like to be self-reliant and don’t want to accept help; perhaps the residents of other countries are far more likely to feel that as they get older, they deserve help.

If the populations are as different as the disease and ADL prevalence variability suggests, then the differences that were found in access, costs, and coordination may be meaningless. But for what it’s worth, here's what the study found: the US has a high rate of preventable emergency room visits (19 percent, compared to a low of 4 percent in Germany); and a high rate of cost-related access problems (22 percent vs a low of 5 percent in Switzerland).

Coordination of care was poor across the board—except in France. It sounds as though French people like to have things done for them, both in terms of assistance in basic activities and having someone arrange their health care for them. They report that they actually get help with coordination; I wonder if they feel they get the help they need in other domains as well.

The US came out on top in a single area: the proportion of older people who report they have a “plan of care.” Since having a plan doesn’t amount to much if you can’t access the services you need in order to implement the plan and you don’t have anyone to help you make sure you get what the plan says you need, this accomplishment isn’t terribly impressive. But I think it does tell us something—just as in the earlier Commonwealth study, which found that American patients were more likely than their European counterparts to have designated a health care proxy, what we see here is that America does well on form. We don’t do as well on substance. That’s the disconnect we need to remedy.