
Chronic Illness, Death, and Dying in Modern Times



Young friends regard this solemn Truth, soon you may die like me in youth: Death is a debt to nature due, which I have paid, and so must you.


Tombstone of James Hull Allen, died August 6, 1793, age 15 years, 3 months, and 21 days


It’s not that I’m afraid to die . . . I just don’t want to be there when it happens.


Woody Allen, Without Feathers


The great force of history comes from the fact that we carry it within us.


James Baldwin


History . . . What’s the point? They’re dead. I’m not.


Ferris Bueller, “Ferris Bueller’s Day Off”


Over the centuries healers have been called upon to palliate, or “make better,” myriad afflictions. Only in recent times has the notion arisen that our primary goal is to identify and cure diseases, thereby prolonging life and, presumably, preventing distressing symptoms and associated suffering. The medical advances made in recent decades are indeed so astonishing that one could almost forgive those who would hope that a cure-based medical system might eliminate scourges such as pain, chronic illness, and the debilitations of old age. However, we remain mortal. I recall a scene from Bernardo Bertolucci’s film Little Buddha in which a child sits with a wise, old monk looking out over a bustling city in Nepal. “What is impermanence?” asks the child. The monk answers, “See these people. All of us and all the people alive today. One hundred years from now we’ll all be dead. That is impermanence.” Intellectually, I understand the truth of this statement. However, that more than 7 billion people will die in a period of 100 years is beyond my comprehension.


As mentioned in the Preface, when I first found my way to palliative care, I was amazed to find that such a body of knowledge existed. Why had I not been exposed to it before? This was not some mere educational oversight. It became clear that as a society overall we were neglecting the needs of the chronically ill and dying. Relative to what could be done, we were (and still are) doing a terrible job. And so, I set out to figure out why.


Much of the early literature suggested the problem was doctors. I also heard this early on from clinicians working in hospice: “Those damn doctors!” In one of the earliest symposia on death and dying in the United States, Wahl even went so far as to suggest that the reason people became doctors was to compensate for their “thanatophobia” (fear of death).1,2 The belief that doctors were the problem was so strong in the United States that the role of physicians was systematically downgraded in early hospices. Hospice leadership was provided largely by nurses and social workers.* God knows, we physicians bear significant responsibility for the state of things. However, I believe most physicians, like most clinicians, choose their path because they want to be of help to people. Surely, things aren’t this simple. Something deeper and more pervasive must be at work.


I began studying history. What I discovered was that what we think of as “modern medicine” is really a product of European history and culture. Perhaps the secret lay here.


That people age, become ill, and die is nothing new. Illness and death have always been part of human experience. Ancient people suffered chronic debilitations often associated with parasitic infections.4 Egyptians in the time of the pharaohs, for example, frequently suffered from schistosomiasis, resulting in chronic pain and weakness. However, how we get sick today and how our society responds to sickness have changed radically. As the nature of illness has changed, so too has dying.5,6,7 In 1900 the top five causes of death in the United States were respiratory infections (influenza and pneumonia), tuberculosis, gastroenteritis, heart disease, and stroke, in that order.8 With the exception of tuberculosis, most of these deaths were relatively sudden, occurring over a few days of illness. In 2017 the top five causes of death in the United States were heart disease, cancer, accidents, chronic obstructive pulmonary disease (COPD), and stroke.9 With the exception of sudden heart attacks and accidental deaths, most of these deaths were due to prolonged, chronic illnesses. While significant differences remain between developed and developing countries in terms of causes of death, the trend seems irreversible: Chronic, degenerative illnesses such as cancer and dementia are increasing. Dying of cholera or some other horrible gastrointestinal scourge seems a very unpleasant way to go. However, because we remain mortal, to prevent one way of dying is, in effect, to “create” another. Even very good things like seatbelts are “carcinogenic” in that by decreasing the chance of dying in car accidents, they proportionately increase the probability of growing older and dying from other diseases such as cancer. That we are more likely to die of chronic illness at an advanced age is not such a terrible thing, considering the alternatives. However, we must take responsibility for these new forms of illness and associated dying.


This book has an unavoidable bias based on my experience practicing palliative care in the United States. I hope this perspective is not entirely irrelevant to people in other parts of the world who are struggling with illness, death, and suffering. Each society and culture will have its particular issues and challenges. For developing, often impoverished, countries, simply increasing the availability of oral morphine for patients dying of cancer may still be an overwhelming challenge. Developed countries are struggling with complex social forces that result in the warehousing of their chronically ill and dying members in institutions far from family and friends. Despite the great social, cultural, and political differences that divide the globe, I would argue that we have more in common than not. These dramatic changes in how we experience illness and death affect all of us. While we may find guidance and strength in our cultural traditions, I think none of us can rely on old ways of “doing” illness and dying. Cultural traditions related to illness and healing evolved slowly over millennia and are resistant to change. Civilizations developed elaborate ways for dealing with illness. As recently as 70 years ago, few people lived to an advanced age. The average life expectancy in the Americas in 1950 was 60 years.10 Few experienced prolonged states of severe incapacitation and dependence. For most, dying was a brief affair, usually lasting a few days and requiring simple acts of kindness from family and friends.


Compression of Morbidity?


In 1980 Dr. James Fries of Stanford published an important article describing a “compression of morbidity” hypothesis (Fig. 1.1).11 In essence, Fries argued that it is possible for people to live healthier lives longer. Fries believed that the maximum life expectancy was relatively fixed at about 100 years. While modest extensions in life expectancy are possible, arguably more important is how long good health can be maintained into old age. Ideally, we would be perfectly healthy until just before the end, when we would fall apart suddenly, like Oliver Wendell Holmes’s “one-hoss shay.” Much of this improved health would result from lifestyle modification. Fries understood that compression of morbidity is not inevitable. He later wrote that this hypothesis arose in part in response to those who thought that “increasing life expectancies would lead inevitably to additional years of chronic debilitating illness, economic collapse, and increasing misery for many seniors,” referencing an earlier article by Gruenberg entitled “The Failures of Success.”12,13 Fries was right in recognizing that modifying lifestyle behaviors and healthy living can result in longer, healthier lives. However, Gruenberg also had a point in highlighting that at a population level, “The goal of medical research work is to ‘diminish disease and enrich life,’ but it produced tools which prolong diseased, diminished lives and so increase the proportion of people who have a disabling or chronic disease.”13 Oddly, despite their differing perspectives, Fries and Gruenberg shared a lot of common ground. Gruenberg called for more research in the prevention of chronic illness, and Fries recognized the importance of good care for the chronically ill and dying. Neither believed that advances in medicine would magically eliminate chronic illness or dying. For my part, I hope I am not making the mistake of arguing for the “inevitability” of chronic illness and misery in old age, as Fries fretted. However, I do not believe that as a society we have yet taken enough responsibility for the side effects of many medical advances. That is, we are still in collective denial as to our current state. The overt focus of medical culture is still on cure and prevention, while everyday medical practice must increasingly attend to chronic illness. I would simply advocate for a more reasonable balance in our attention and allocation of resources.






Figure 1.1. Compression of morbidity. The hope expressed in this theory is that various interventions, such as healthier lifestyles, preventive medicine, and other medical interventions, will shift the curve of decline in health status to the right.


It is safe to say that the dramatic increases in chronic illness and changes in how we die have swamped our cultural coping mechanisms. We are simply unprepared for the vast numbers of people in both developed and developing countries who will succumb to diseases such as cancer and dementia. Our culturally conservative ways of dealing with illness and dying are being outpaced by the rapidity of this change. We must create new ways of responding to modern forms of illness and dying if we are to maintain any hope of living and dying well. We must build a new culture.


Palliative care, as an international movement, is trying to respond to these changes. Palliative care seeks to use the powerful tools developed by modern medicine to address the needs of the sick in terms of relieving suffering and enhancing quality of life. We must also be mindful that the very same medical system that creates these tools too often creates new forms of suffering that must be addressed. Thus, palliative care must walk a tightrope—we try to use a system of medicine for the good of patients and families without being overrun and dominated by that system. Only time will tell if we will succeed. The origins of the palliative care movement are to be found in the hospice movement, an alternative approach to dealing with terminal illness.


Hospice Care—Early History


Traditions of kindness for sick and dying patients are to be found in all societies from antiquity. The beginning of the modern hospice movement is attributed to Dame Cicely Saunders, who founded St. Christopher’s Hospice in London in 1967.14 Two years later Elisabeth Kübler-Ross published her book On Death and Dying, based on her experiences talking with dying patients in a Chicago hospital.15 Was it a coincidence that these two landmark events occurred in such close temporal proximity? I do not think so.


In 1953 the first advanced mammal, a dog named Knowsy (because he “knew” what was on the “other side”), was successfully resuscitated.16 Further advances in resuscitation and advanced life support led to the propagation of cardiopulmonary resuscitation (CPR) and intensive care units during the early 1960s. Early articles on CPR reflected a naive optimism and a lack of concern for the possible consequences of resuscitation. If it was possible to prolong life, then do it! The development of CPR and advanced life support was symbolic of much broader changes in how we understand illness and the role of medicine. The goal of medicine shifted from healing to cure and the elimination of disease. Given early successes in curing diseases such as pneumococcal pneumonia with penicillin, people began to believe that everything could be cured—creating a “cult of cure” of sorts.17 The trick seemed to be to break down the human body, as a machine, into its constituent parts and then figure out how to keep all the parts working or, when irrevocably damaged, how to replace the broken parts. Then, if in theory we could fix all the parts, our body-machines could live on forever. If not forever, at least we could live to a “ripe old age” before dutifully dying suddenly. Great idea. Unfortunately, things did not quite work out that way. People grew old, developing aches, pains, and debilitations that the new system of medicine seemed incapable of addressing. Clinicians also became frustrated. Patients, once “fixed,” did not stay fixed—they kept bouncing back to the hospital. Somewhere, there was a big mistake.


In retrospect, it seems inevitable that a backlash to such blind optimism would arise. Cicely Saunders and Kübler-Ross, as pioneers, were remarkable in having recognized the mistake long before the rest of us. It was easiest to recognize the mistake in considering those patients who were overtly dying, yet the power of belief in the cult of cure was (and still too often is) so strong that it denied the very existence of dying patients as a class of people. Deaths, if they occurred, were aberrations; in the cult of cure people did not die, they “coded.” Kübler-Ross wrote that the greatest barrier to her initial request to speak with dying patients in the hospital was a broad denial that such patients even existed.15 Cicely Saunders’s experiences as a nurse, social worker, and physician convinced her that dying patients were often neglected and ignored, suffering unnecessarily for want of both basic symptom management and attention. A new social institution was needed. This institution became known as hospice.


Early hospices arose as sanctuaries from traditional health care.18 The first hospices in North America were founded in 1975 in Connecticut, New York, and Montreal.19,20 They were all inpatient units, operating on grants and contributions. That same year the VA Palo Alto Health Care System started a small three-bed inpatient hospice within its newly built nursing home in Menlo Park, California, making it one of the first publicly funded hospices in the country. That hospice evolved into the VA Hospice and Palliative Care Center, where I have worked. All such early hospices were inpatient facilities. Home hospice, which for many Americans has become synonymous with hospice, was a later development, spurred on in the United States by the creation of the Medicare Hospice Benefit in 1983. This benefit overtly emphasized the provision of hospice at home and more subtly discouraged the creation of dedicated inpatient hospice facilities by restricting the funds hospice agencies could spend on inpatient care.21 The net result, as of 2017, was more than 4,000 hospice agencies in the United States, which provide the vast majority of their care in the home.22 Relatively few dedicated inpatient hospice units currently exist in the United States, although the number has been growing in recent years, resulting in a significant increase in the percentage of deaths in such units, from 0.2% in 2003 to 8.3% in 2017.23


Inpatient hospices like the VA Hospice and Palliative Care Center have continued within the Department of Veterans Affairs (VA). Hospice care within the VA has evolved independently of Medicare rules.24


The creation of the Medicare Hospice Benefit did many wonderful things for dying patients. Most obviously, it provided the first funding mechanism dedicated to end-of-life care. The benefit also pushed clinicians to consider healing beyond a narrowly constructed medical paradigm. Support for family as “the unit of care” was emphasized. An interdisciplinary approach was essential.20 Bereavement follow-up was mandated as a part of the benefit package. However, the package also contained major flaws. In emphasizing care at home, the benefit seemed to ignore the obvious: The majority of Americans die in either acute care hospitals or nursing homes. Paraphrasing the words of bank robber Willie Sutton, hospice did not “go where the money is.” Hospice care was also narrowly defined as being applicable only to dying patients, rather arbitrarily defined by Medicare as the last six months of life. Stating the obvious, it is not only the imminently dying who wish not to suffer during episodes of illness. A broader concept was needed that could build on the base established by the hospice movement to address the needs of the dying and of those who are seriously and chronically ill beyond hospice, a concept that has come to be known as palliative care.


Regrettably, the early hospice movement in the United States had little effect on the greater health care system. Hospice remained a fringe program, available to and used by few. Most Americans continued to die in acute care hospitals. Most clinicians and laypeople had never heard of hospice or palliative care.


It had not gone unnoticed that with advances such as ICUs, ventilators, and dialysis, chronically ill and dying patients were too often becoming stuck in hospitals, receiving care of little or no benefit. Academic health care leaders looked not to hospice but to medical ethicists for guidance on how to address the problem.25 As many of these patients ended up sedated in ICUs on ventilators or otherwise lacked decision-making capacity, the notion arose that “if we only knew what Gramma would have wanted,” things would get better, as it seemed obvious that most people would not choose care of little benefit. This thinking gave rise to a movement promulgating new concepts of surrogate decision making and advance directives (ADs). The thought was that if a patient lacked decision-making capacity, another person, the “surrogate,” should decide what should be done, based on what the patient would have wanted if he or she were thinking clearly. Not a bad idea—except it largely didn’t work.26 The concept of surrogate decision making made questionable assumptions. Most obviously, it assumed most people had a good understanding of whether or not they wanted resuscitative efforts in certain situations. Later studies would demonstrate that most did not.27 They generally had a poor understanding of what resuscitation entailed, the probability of success, and what “success” would look like if the patient survived.28 ADs were seen as a “fix” for the fact that few individuals were having discussions with loved ones about what they might want in critical situations. Early ADs focused primarily on whether or not one would want resuscitation attempted in certain situations. However, ADs were not embraced by clinicians or the lay public.29,30,31 Physicians seemed reluctant both to encourage completion of ADs and to use them if available. Again, many interpreted this as evidence of some inherent character flaw in physicians. Those damned physicians! There is some justification for this criticism. Many physicians balk at relinquishing control of decisions they view as being primarily “medical” in nature. However, physicians also could see that ADs, as often written, were vague or only questionably related to a particular decision at hand. They also had good cause for doubting how well individuals understood the implications of their ADs or the consistency of their wishes. Besides, the lay public wasn’t exactly clamoring for them.31


A more basic problem was the assumption that proxies would feel comfortable making decisions for loved ones as if they were that loved one (the underlying concept of surrogacy). This idea, promulgated by medical ethicists, was not based on any known historical or cultural tradition.27 It rather reflected the beliefs of a small “subculture” of medical ethicists. Koenig termed this belief system the “autonomy paradigm.”32 It was embraced by our legal system, reflecting, I suspect, the strong streak of individualism in American law. While ICUs and ventilators might be new, there was nothing new about people becoming confused with sickness and decisions needing to be made. In all cultural groups of which I’m aware, decision making for confused loved ones has historically been a family affair. Cultures might vary as to who is identified as the proper proxy, but I know of no cultural group wherein the basis for decision making was driven primarily by the prior wishes of the patient. While knowledge of any such wishes may have influenced decision making, it was generally understood that the proxy would have the clearest understanding of current circumstances necessitating a decision.


Instead of surrogacy I believe the more common basis worldwide for decision making by proxies is something I have called role obligation, wherein the proxy tries to do right by the patient by being true to his or her role—spouse, child, or friend.33,34 I believe some of the resistance to surrogate decision making experienced by many proxies results from conflicts between preferences arising from their role as a “good wife” or son, for example, and prior patient wishes. “He may have not wanted this, but a good wife would never agree to letting him go.” How many times have I heard words to this effect expressed!


Resistance to ADs and the underlying concept of surrogate decision making did not go unnoticed. Obviously, something more needed to be done. Advocates promoted a new law, the Patient Self-Determination Act (PSDA), passed in 1991, which mandated that health care systems receiving Medicare funding must, on admission, provide information about ADs and ask whether or not the admitted patient already has one. As Drought and Koenig reflected, “The passage of the Patient Self-Determination Act in 1991 was based on widespread acceptance—with no empirical research backing the underlying premise or hindering the resolve of policymakers—that everyone should have an advance directive.”27 It soon became clear that the law was having little effect. Most health care systems paid little attention to these requirements, at best just going through the motions of compliance.



Palliative Care Note

Culture eats policy for lunch.
