Prevention of Ischemic Injury in Cardiac Surgery





The development of the heart-lung machine by John and Mary Gibbon over a half-century ago ushered in the modern era of cardiac surgery. Prior to 1953, the emerging field of heart surgery was limited primarily to brief operations conducted on the aorta, great vessels, pericardium, and cardiac surface, all of which were performed without interrupting cardiac function; valvular repairs were often performed with a blind sweep of the surgeon’s finger. The introduction of the Gibbon oxygenator made possible the bloodless, motionless field necessary to perform anything beyond the simplest of cardiac repairs. In the previous decades, advances in the commercial production of the natural anticoagulant heparin had made that drug safe, inexpensive, reversible, and readily available. Together, these two developments provided the foundation for the modern surgical treatment of cardiovascular disease.


Interestingly, despite his initial success with extracorporeal oxygenation (repair of an atrial septal defect in an 18-year-old woman), Gibbon’s next three patients all died, and he never again used the machine. Clearly, oxygenating blood was only one piece of the puzzle, and strategies needed to be developed to keep the patient safe while on the heart-lung machine. This chapter attempts to summarize the major developments in myocardial protection. Some, such as hypothermia, trace their history back to the early days of the field. Others, such as del Nido cardioplegia, can trace their roots to congenital cardiac surgery before gaining widespread acceptance in adult cardiac surgery. Still others, such as ischemic preconditioning (IPC), have largely fallen out of favor. Taken together, these strategies have led to tremendous decreases in the morbidity and mortality associated with heart surgery, and they have enabled cardiac surgery, particularly coronary artery bypass grafting (CABG), to become one of the most widely and successfully performed procedures in the world.


Despite the myriad advancements in the prevention of ischemia during cardiac surgery, no universally applicable myocardial protection technique has been identified, and the ideal method for myocardial protection remains to be established. In part, this may be due to the overall success of four generations of cardiac surgeons in reducing the morbidity and mortality associated with cardiac surgery to very low levels, which makes demonstrating a significant difference between one technique and another more challenging. To a greater extent, the explanation stems from the relatively recent recognition that each cardiac surgery patient is unique, and thus a given patient’s response to cardiac surgery, and in particular to cardiopulmonary bypass (CPB), reflects individual biological variability. Consequently, no single intervention, strategy, or protocol in isolation can be expected to succeed for every patient.


Another explanation for the many ongoing controversies in myocardial protection is the uniqueness of each operating surgeon. In the past 2 decades, medical societies and institutions the world over have attempted a paradigm shift in which practice patterns are grounded in evidence-based medicine. This model deemphasizes intuition and unsystematic clinical experience as sufficient grounds for clinical decision making and stresses the examination of evidence from rational, hypothesis-driven clinical and experimental research. However, recent surveys of practice patterns in the United States and Great Britain make clear that, to a great degree, cardiac surgery is still as much an art as it is a science. Indeed, a cursory review of the Cochrane Library, an internationally recognized, evidence-based health-care database, reveals minimal information on CABG. Similarly, Bartels and colleagues recently conducted a study of the scientific evidence supporting 48 major principles that are currently applied in the performance of CABG, and their evaluation found that the data concerning the effectiveness and safety of every one of these key principles were insufficient in both amount and quality to serve as a basis for practical, evidence-based guidelines. With the recent introduction of larger databases such as the Society of Thoracic Surgeons cardiovascular database, and the even more recent interest in artificial intelligence, we can expect greater clarity in the future as information matures into knowledge and firmer guidelines.


Nevertheless, despite the significant variation that exists between different countries, different institutions, and even different individuals within institutions, some universally accepted tenets can be identified. Recognizing that an efficiently executed and technically superior operation performed on an appropriate patient under the right circumstances is perhaps the greatest form of myocardial protection, in this chapter we attempt to highlight these protective strategies, and the data (or lack thereof) supporting them, in an effort to develop an evidence-based approach to prevention of ischemic injury during cardiac surgery.


Perioperative Ischemia Prevention Strategies


The success of any operation requires a coordinated plan of care that begins prior to the patient entering the operating room and continues throughout the postoperative period. As with any surgical intervention, the ideal medical management of stable and unstable coronary artery disease, heart failure, and acute myocardial infarction (MI) continues to evolve and improve, as does the care of the postoperative cardiac patient. The appropriate use of medicines such as beta-blockers, afterload-reducing agents, statins, and antiplatelet agents is essential, but pharmacotherapy is just one facet of cardiac care. Appropriate vigilance must be directed toward patient-controlled factors such as dietary modification, smoking cessation, glucose control, and exercise, because these factors may doom a technically perfect operation, hastening the onset of graft failure and the recurrence of ischemia. Numerous large, well-conducted clinical trials are available to guide the internist, cardiologist, or intensive care specialist who may be primarily responsible for directing care of the cardiac patient outside of the operating room, and a detailed discussion of the perioperative management of ischemic heart disease is beyond the scope of this chapter. However, there are strategies in which the surgical team plays a central role and which therefore merit brief discussion here.


Optimal Timing of Cardiac Surgery after Acute Myocardial Infarction


The appropriate timing of revascularization surgery in the setting of acute MI has been the subject of uncertainty since the earliest days of CABG. In contrast to the patient with stable coronary artery disease or unstable angina, to whom the general rule of “sooner is better” applies, the traditional dogma for patients who have experienced MI is to delay surgical intervention if possible. For patients with evidence of valvular or papillary muscle dysfunction, ongoing ischemia despite maximal medical therapy, or cardiogenic shock, high mortality in the absence of surgery warrants the risk of immediate surgical intervention, either CABG or implantation of a ventricular assist device (see Chapter 14 ). Less clear is the best course of action with a relatively stable patient who has recently experienced an acute MI.


Over the past half-century, numerous attempts have been made to identify the appropriate timing of operative intervention after MI. In 1974, Cooley and colleagues noted a striking association between the interval from MI to CABG and in-hospital mortality. When CABG was performed within 7 days of MI, mortality was 38%; when performed 31 to 60 days after MI, mortality fell to 6%. This study, along with several others performed in the 1970s with similar outcomes, led a generation of cardiac surgeons to delay operative management of the acute MI patient. As the management of acute MI evolved in the 1980s and 1990s, particularly with the advent of new thrombolytic therapy, platelet inhibitors, percutaneous transluminal coronary angioplasty, coronary stenting, and intra-aortic balloon pumps (IABPs), a number of investigators attempted to readdress this question of timing of CABG after MI. Although contemporary data suggest that perhaps such a long delay between MI and CABG is not necessary, few of these studies were randomized, and results have been disparate. Several large retrospective analyses have been used to create risk models to suggest that, when possible, waiting 7 days after MI may lead to improved outcomes. Other investigators have argued that, in the setting of a nontransmural (non–Q-wave) MI, patients may undergo CABG relatively safely at any time, and that even in the case of a transmural (Q-wave) infarct, a delay of only 48 to 72 hours may be sufficient. Early revascularization after transmural MI with impaired regional or global ventricular function has been shown consistently to carry higher operative mortality and reduced long-term survival. Therefore, no firm conclusion can be made regarding the optimal timing of CABG in the setting of acute MI; the available evidence suggests that a delay of 3 to 7 days is appropriate, especially with impaired ventricular function. As mechanical assist devices increase physicians’ abilities to manage the sequelae of MI, particularly cardiogenic shock, new questions have arisen as to whether the initial surgical intervention should be definitive revascularization or insertion of a temporary assist device to lengthen the window of time between MI and surgery.


Intraoperative Ischemia-Prevention Strategies


Nonoperative Strategies


Anesthesia Considerations


The concept of cardiac anesthesia has been in development since the introduction of the CPB machine in the 1950s, and the modern cardiac anesthesiologist is an invaluable member of the cardiac surgery team. Because the focus of the cardiac surgeon while in the operating room must be devoted to the performance of a technically superior operation, the cardiac anesthesiologist, whose tasks include surveying numerous monitors and ongoing laboratory results, is in perhaps the best position to identify the subtle changes in a patient’s status that may signify myocardial injury or other impending complications. We will now review a number of the specific tools that may be used to identify ischemia, as well as some of the nonoperative remedies employed to prevent or treat ischemia in the setting of cardiac surgery.


Monitoring for ischemia


Routine intraoperative monitoring during cardiac surgery includes temperature, pulse oximetry, capnography, surface electrocardiography, and noninvasive blood pressure monitoring. More invasive methods typically employed during cardiac surgery include an arterial line for continuous blood pressure monitoring and repeated blood sampling, a central venous catheter to measure central venous pressure, and use of a Swan-Ganz pulmonary artery catheter (PAC). Transesophageal echocardiography (TEE) is also an invaluable adjunct in the operating room, especially with valvular surgery.


The PAC provides potentially valuable information regarding pulmonary arterial pressure, pulmonary capillary wedge pressure (a surrogate of left-sided filling pressure), cardiac output, mixed venous saturation, and both systemic and pulmonary vascular resistance, all of which can be used to guide management directed at improving perfusion or optimizing hemodynamic performance after cardiotomy. Nevertheless, use of the PAC has been a topic of great debate ever since its introduction into clinical practice, and the controversy continues to rage today. Almost 20 years ago, several published studies failed to detect a benefit from the use of PACs in the setting of acute myocardial ischemia/infarction. Similarly, in 1989, Tuman and colleagues published their study showing no differences in outcome when PACs were used during CABG. In 1996, the SUPPORT trial (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) reported that PAC use was associated with longer hospital and intensive care unit (ICU) stays, significantly increased costs, and increased mortality, including a 1.5-fold increase in the relative risk of death in postoperative patients. In the ensuing discussions, some called for a moratorium on continued use of the PAC. However, attachment to the device by most practitioners proved strong, and, with evidence that the PAC was a useful tool for identifying perioperative ischemia, a more measured approach was adopted. Nevertheless, even the staunchest of PAC supporters recognized that nonselective, routine use of the PAC was not justified. Today, the decision to place a PAC preoperatively should be made between the surgical and anesthetic teams, and important factors such as the type of operation and baseline cardiac function will help guide this decision. TEE provides important data on valvular and cardiac function and is a common adjunct used in the cardiac operating room. Advocates of TEE point to the fact that it is generally safe and can provide nearly all the same data as the PAC, plus additional information on wall motion, ejection fraction, stroke volume, volume status, and valvular function not ascertainable by any other method. Small, nonrandomized studies of TEE in the setting of CABG have shown that TEE may be more important than PAC in guiding interventions involving fluid administration, vasoactive medications, and other anti-ischemia therapies, and that among high-risk patients undergoing CABG, data provided by TEE affected anesthetic or surgical management 50% of the time.
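The resistance values described above are derived from the PAC's measured pressures and thermodilution cardiac output by standard hemodynamic formulas. The following minimal sketch illustrates the arithmetic; the function names and example values are ours, purely for illustration:

```python
def svr(map_mmhg: float, cvp_mmhg: float, co_l_min: float) -> float:
    """Systemic vascular resistance in dyn*s/cm^5: 80 * (MAP - CVP) / CO."""
    return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

def pvr(mpap_mmhg: float, pcwp_mmhg: float, co_l_min: float) -> float:
    """Pulmonary vascular resistance in dyn*s/cm^5: 80 * (mPAP - PCWP) / CO."""
    return 80.0 * (mpap_mmhg - pcwp_mmhg) / co_l_min

# Illustrative values: MAP 85, CVP 8, mPAP 22, PCWP 12 (mm Hg); CO 4.5 L/min
print(round(svr(85, 8, 4.5)))   # ~1369 dyn*s/cm^5 (normal roughly 800-1200)
print(round(pvr(22, 12, 4.5)))  # ~178 dyn*s/cm^5 (normal roughly < 250)
```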


The reported safety and benefits of TEE notwithstanding, debate similar to that surrounding the PAC has emerged. Critics of TEE point to studies suggesting that unsuspected TEE findings of major significance occur in less than 2% of cases, that intraoperative interpretation by cardiac anesthesiologists is widely variable and often does not concur with later interpretation by cardiologists, and that, like PAC, TEE has not been shown to improve outcomes. Critics also point to the fact that performing TEE may divert the anesthesiologist from performing other critical tasks; in one study, anesthesiologists’ response time to an alarm light was 10 times slower when performing TEE than during monitor observation.


In sum, a range of modalities to detect ischemia is available to the modern cardiac anesthesiologist, and the wealth of information provided by them is vast. In valvular cardiac surgery, the importance of TEE is well established. However, in the setting of myocardial revascularization, serious questions remain unanswered as to whether the information these devices provide, and the way that clinical care is guided by that information, actually improves important outcome variables such as morbidity, mortality, and cost. Thus, judicious rather than routine use is warranted.


Anesthetic agents


The goals of cardiac anesthesia are to maintain hemodynamic stability and myocardial oxygen balance, minimize the incidence and severity of ischemic episodes, and facilitate prompt and uncomplicated separation from CPB and assisted ventilation. Throughout the relatively short history of cardiac surgery, numerous anesthetic regimens have been developed, each with its own proponents and each with purported advantages and shortcomings. To date, however, no evidence establishes any one technique as superior for patients with cardiovascular disease.


Anesthesia during the early years of cardiac surgery consisted primarily of high-dose opioids, first morphine and later synthetic opioids. In the 1970s, fentanyl gained widespread acceptance for both induction and maintenance of anesthesia because of its improved hemodynamic stability. Because opioid-only anesthesia was associated with an unacceptably high risk of patient awareness (which can lead to hypertension, tachycardia, and an attendant increase in myocardial oxygen consumption), benzodiazepines were added. More recently, propofol and volatile anesthetics have become mainstays of both the induction and maintenance of cardiac anesthesia. Although some centers have gone to an all-intravenous anesthetic technique, the reported preconditioning protection provided by inhaled anesthetics has led some to advocate their continued and routine use. In practice, the importance of anesthesia in minimizing ischemic risk is perhaps most evident during induction, when the potentially wide variation in blood pressure may put overwhelming strain on an already stressed heart. Standard therapy in the modern era, therefore, may include preinduction use of beta-blockade in addition to anxiolytics. Typical induction agents include a combination of paralytic, analgesic, and anesthetic agents. Anesthesia is maintained with a combination of analgesics and anesthetic agents.


Metabolic Considerations


Systemic temperature


With the passing of Wilfred Gordon Bigelow in 2005, cardiac surgery lost the man referred to as “the father of heart surgery in Canada.” After training at Johns Hopkins, Bigelow returned to the University of Toronto and the Banting Research Institute in 1947, where over the next 2 decades, he and his colleagues performed much of the initial research that identified hypothermia as a practical means by which the body could be protected during the brief periods of circulatory arrest required for relatively simple cardiac repairs. Prior to his investigations, heart surgery was performed in an environment approaching normothermia. The early CPB circuits did not include a heat exchanger, and most of the temperature change that did occur was passive. Indeed, if any attempt was made to regulate the patient’s body temperature, it was to maintain normothermia. Hypothermia was seen as detrimental to sick and wounded patients because of its adverse effects on coagulation and because metabolic rates were known to increase as patients got colder, an effect largely caused by shivering. Bigelow’s systematic studies of surface-induced hypothermia, primarily using a canine model, were the first to show that with shivering minimized by adequate anesthesia, metabolism actually decreased predictably with core body temperature, with each 10°C temperature drop corresponding to a roughly 50% reduction in oxygen consumption.
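Bigelow's observation is often summarized with the van't Hoff temperature coefficient Q10; a roughly 50% reduction in oxygen consumption per 10°C corresponds to a Q10 of approximately 2. As an approximate model:

$$\dot{V}\mathrm{O}_2(T) \;\approx\; \dot{V}\mathrm{O}_2(37^{\circ}\mathrm{C}) \times Q_{10}^{(T-37)/10}, \qquad Q_{10} \approx 2$$

Under this approximation, oxygen consumption at 27°C is about half the normothermic value, and at 17°C about one-quarter.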


Although metabolism could be essentially halted at low-enough temperatures, providing excellent protection for the brain, temperatures below 28°C to 32°C led to cardiac and pulmonary failure, limiting the early use of hypothermic protection alone to cases requiring only very brief periods of cardiac arrest. With the advent of CPB, however, temperatures could be reduced to 10°C, thereby lengthening the window of protection. Following these early observations on systemic hypothermia, numerous others expanded on the use of hypothermia and its application in cardiac surgery. Shumway and colleagues soon demonstrated the additional benefit gained by topical cooling of the heart using ice-cold saline lavage. This topical effect probably had the greatest impact on the anteriorly situated, thinner-walled right ventricle, which is more prone to rewarming under the operative lights and ambient room temperature than the thicker-walled left ventricle, itself better protected by intravascular protection strategies. Other advances, including introduction of the heat exchanger in the CPB circuit, made hypothermia a mainstay of myocardial protection during the early years of cardiac surgery.


Despite significant advances in our understanding of the molecular processes involved in tissue metabolism, hypothermia continues to be a central component of myocardial protection strategies, particularly in the transplant realm. The use of hypothermia, however, remains relatively individualized between surgeons and institutions. Some centers routinely actively cool during CPB; others simply let the patient’s temperature drift downward. Almost all centers use topical cooling of the heart, although since the early 1990s, a few investigators have advocated “warm heart surgery.”


Although significant debate persists, consensus is building that rather than the creation of severe hypothermia or active maintenance of normothermia, the appropriate temperature for the systemic circulation may be mild to moderate hypothermia (approximately 28°C to 34°C). A number of studies suggest that such a tepid strategy provides the best level of myocardial protection without the consequences of deep hypothermia.


Myocardial acid–base management


The influence of intraoperative systemic pH status on CABG outcomes has been studied primarily in relation to its influence on neuroprotection. Most cardiac surgeons, and certainly cardiac anesthesiologists, are familiar with the concept of α-stat versus pH-stat management strategies, a detailed discussion of which is beyond the scope of this chapter. Less well understood is the relationship between ischemia, myocardial acid–base management, and non-neurologic outcomes.


Normal myocardial pH (7.2) is lower than systemic pH, and various studies have demonstrated that mild acidosis (6.8 to 7.0) may actually protect the myocardium during ischemia by decreasing cardiomyocyte energy demands. However, myocardial acidosis during CABG may be much more severe, with typical measurements of pH 6.5 or lower, and may trigger cell death via apoptosis. Decreased myocardial pH is a consequence of inadequate coronary blood flow, which results in decreased oxygen delivery, decreased washout of hydrogen ions, and an attendant rise in myocardial tissue partial pressure of carbon dioxide, and as such, it may be used as a surrogate marker for myocardial ischemia. During cardiac surgery, low coronary flow may occur as a result of preexisting severe coronary artery stenosis, ineffective cardioplegia during CPB, or inadequate revascularization. Myocardial acidosis may be the sentinel event in a potentially devastating cycle in which inadequate oxygen delivery leads to depressed myocardial function. This in turn necessitates the use of inotropes, which themselves can lead to increased myocardial oxygen consumption. If this cycle is not interrupted, irreversible cardiac injury can occur. As an indicator of underlying ischemia, measurement of myocardial acidosis represents a potentially valuable monitoring tool for guiding the care of the cardiac surgery patient. Regular, repeated arterial blood gas measurements are standard practice in cardiac surgery, but they represent only static images of a constantly changing landscape and may not accurately reflect the condition of the myocardium. Current myocardial protection strategies based on these intermittent measurements may be insufficient. Several attempts have therefore been made to develop real-time, continuous myocardial pH-measuring devices. Continuous blood gas monitors, for example, have been credited with decreasing the need for intraoperative pacing and cardioversion, decreasing the length of postoperative mechanical ventilation, and decreasing the length of ICU stay.
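The dependence of tissue pH on carbon dioxide tension described above follows from the Henderson-Hasselbalch relationship for the bicarbonate buffer system:

$$\mathrm{pH} \;=\; 6.1 \;+\; \log_{10}\!\left(\frac{[\mathrm{HCO}_3^{-}]}{0.03 \times P\mathrm{CO}_2}\right)$$

where 6.1 is the pKa of carbonic acid and 0.03 mmol/L per mm Hg is the solubility coefficient of carbon dioxide. With bicarbonate held constant, a doubling of tissue PCO2 lowers pH by log10(2), or about 0.3 units, which is why impaired washout of carbon dioxide during low coronary flow manifests directly as myocardial acidosis.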


In a series of studies over the past 30 years, Khuri and colleagues have increased our understanding of the potential importance of intramyocardial pH management. These investigators have developed a system using electrodes implanted in the ventricular wall that allows continuous pH measurements to monitor for regional myocardial ischemia and decreased coronary perfusion. Their retrospective analyses conclude that low myocardial pH can predict clinically relevant outcomes ranging from an increased need for inotropic support, to an increased risk of 30-day adverse events, and finally to decreased long-term survival. On the basis of their findings, they have developed a series of recommendations aimed at keeping myocardial pH in a safe range throughout all aspects of any cardiac surgery procedure. Though compelling, these data and recommendations require prospective, randomized examination, and little external validation has been performed; consequently, intramyocardial pH monitoring has not been widely adopted in clinical practice.


Blood glucose


Diabetes mellitus has long been considered an established risk factor both for the development of cardiovascular disease and for significant perioperative morbidity and mortality associated with cardiac surgery. Even after adjusting for other confounding risk factors, such as age, hypertension, hypercholesterolemia, and smoking, diabetes has been shown in numerous studies to be a significant independent predictor of both short- and long-term survival after CABG. The data are not all consistent, as some studies have not identified diabetes as an independent predictor of mortality. Nevertheless, because diabetes now affects almost one-third of patients undergoing bypass surgery, optimal perioperative glucose management must be a priority for all cardiac surgeons.


Although the mechanism of diabetes-related cardiac pathophysiology is multifactorial, patients with more severe forms of the disease (i.e., those who require preoperative insulin therapy), and by extension those with higher blood glucose levels (poor control), have a poorer prognosis. Both acute and chronic hyperglycemia increase the risk of ischemic myocardial injury through a number of mechanisms, all of which may play a role around the time of cardiac surgery. These include a decrease in coronary collateral blood flow, endothelial dysfunction, and attenuation of the protective effects of inhaled anesthetics and other pharmacologic preconditioning agents. The association of higher blood glucose with increased morbidity and mortality has shifted the focus of research in this area away from characterizing the risk from diabetes per se to the role of elevated blood glucose in cardiac pathophysiology. In the absence of intervention, serum glucose concentrations in the intraoperative and perioperative periods often become elevated far above the normal range, even in nondiabetic patients. This elevation, similar to that seen after other forms of surgery and in response to stressors such as trauma or infection, reflects a combination of acute glucose intolerance in the form of insulin suppression, stress-hormone–induced gluconeogenesis, and impaired glucose excretion as a consequence of enhanced renal tubular resorption. The metabolic effects of diabetes and elevated blood glucose have been shown to be similarly wide ranging and include a higher incidence of left ventricular dysfunction, more diffuse coronary artery disease, altered endothelial function, and abnormal fibrinolytic and platelet function.


Maintenance of normal glucose levels during the intraoperative and perioperative periods is difficult, even in nondiabetic patients, and carries with it the risk for potentially life-threatening iatrogenic hypoglycemia. Nevertheless, Furnary and colleagues performed a series of studies investigating the feasibility of tight perioperative glycemic control and its effects on the important outcomes of sternal wound infection and death. An initial study of 1585 diabetic patients undergoing cardiac surgery demonstrated that elevated blood glucose levels (> 200 mg/dL) on the first and second postoperative days were associated with a higher incidence of deep sternal wound infection, and the average blood glucose level over those 2 days was the strongest predictor of deep sternal wound infection in a diabetic patient. On the basis of these findings, these investigators hypothesized that tight glycemic control would decrease the incidence of postoperative sternal wound infections. A prospective study of 2467 diabetic patients undergoing cardiac surgery was performed in which maintaining serum glucose at a level of less than 200 mg/dL was the goal. The control group (968 patients) was treated with intermittent doses of subcutaneous insulin, with administration based on a sliding scale; the study group (1499 patients) was treated with a continuous intravenous insulin infusion in an attempt to maintain a blood glucose level of less than 200 mg/dL. Continuous intravenous insulin infusion resulted in better glycemic control and a significant reduction in the incidence of deep sternal wound infection (0.8%) compared with the intermittent subcutaneous insulin injection group (2.0%; P < .01). A subsequent retrospective review of 3554 diabetic patients undergoing isolated CABG demonstrated that continuous insulin infusion resulted in better glycemic control. Furthermore, tight glycemic control led to a 57% reduction in mortality, with this reduction being accounted for by cardiac-related deaths. On the basis of these results, the authors concluded that diabetes mellitus per se is not a true risk factor for death after CABG and that continuous insulin infusion should become the standard of care for glucose control in diabetic patients undergoing CABG. After review of the available data, other investigators have reached similar conclusions, namely that poor glycemic control, not a diagnosis of diabetes per se, significantly increases the risk of adverse clinical outcomes, prolonged hospitalizations, and increased health-care costs following cardiac surgery. Additional evidence comes from a prospective study involving 1548 critically ill patients in the surgical ICU, in which even tighter control (a serum glucose level goal of between 80 and 110 mg/dL versus 180 to 200 mg/dL) was associated with significantly reduced mortality (4.6% versus 8.0%; P < .04). On the basis of these data, it is reasonable to conclude that tight glycemic control is likely beneficial for all patients undergoing cardiac surgery, and this has become a current standard of practice.
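By way of illustration only, the two glycemic targets discussed above (the goal of less than 200 mg/dL and the tighter 80 to 110 mg/dL goal from the surgical ICU trial) can be expressed as a simple classification sketch. The thresholds come from the studies cited in the text, but the lower bound assumed for the looser target and the suggested actions are ours; real insulin-infusion protocols are institution specific, and this is not clinical guidance:

```python
def classify_glucose(glucose_mg_dl: float, tight_control: bool = False) -> str:
    """Classify a blood glucose reading against the targets cited above.

    Hypothetical sketch: the 80-110 mg/dL tight range and the < 200 mg/dL
    goal are from the studies discussed in the text; the 80 mg/dL lower
    bound for the looser target is an assumption for illustration.
    """
    lower, upper = (80, 110) if tight_control else (80, 200)
    if glucose_mg_dl < lower:
        return "below target: risk of iatrogenic hypoglycemia"
    if glucose_mg_dl > upper:
        return "above target: continuous infusion would be titrated upward"
    return "within target"

print(classify_glucose(185))                      # within the < 200 mg/dL goal
print(classify_glucose(185, tight_control=True))  # above the 80-110 mg/dL goal
```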


Transfusion strategy


Despite the development of national consensus guidelines for blood transfusion in the 1980s, in 2002 it was estimated that some 20% of all allogeneic blood transfusions in the United States were associated with cardiac surgery. Notwithstanding these guidelines, a number of studies have demonstrated that transfusion practices vary dramatically across institutions, with some centers transfusing less than 5% of patients and others transfusing nearly all patients. Different transfusion practices even within the same institution highlight the lack of widely accepted transfusion thresholds. Furthermore, a recent analysis of patients undergoing cardiac surgery found that red blood cell transfusion appears to be more closely associated with morbidity and mortality than is preoperative anemia; therefore, efforts to minimize unnecessary blood transfusion are justified.


The myocardium relies on either increased blood flow or increased oxygen content to satisfy increased oxygen demand. One of the primary rationales for blood transfusions in the setting of cardiac ischemia, therefore, is to increase oxygen delivery to the stressed myocardium. Unfortunately, very little evidence exists to support this rationale. On the contrary, some degree of anemia is required during hypothermic CPB to reduce blood viscosity and allow adequate flow without excessive arterial blood pressure. Furthermore, a number of large studies have concluded that blood transfusion is associated with increased short- and long-term mortality, including transfusions in the setting of CABG.
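The tradeoff described here can be made explicit with the standard oxygen-delivery equations: transfusion raises arterial oxygen content through hemoglobin, whereas hemodilution lowers viscosity at the cost of content:

$$\mathrm{DO}_2 = \mathrm{CO} \times \mathrm{CaO}_2, \qquad \mathrm{CaO}_2 = (1.34 \times \mathrm{Hb} \times \mathrm{SaO}_2) + (0.003 \times \mathrm{PaO}_2)$$

where DO2 is oxygen delivery, CO is cardiac output, Hb is hemoglobin concentration (g/dL), SaO2 is fractional arterial saturation, and PaO2 is arterial oxygen tension (mm Hg). Because the dissolved term is small, oxygen content is dominated by hemoglobin, which explains both the appeal of transfusion during ischemia and why increased flow can compensate for moderate anemia.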


Decreased hematocrit is one of the prime drivers of the decision to transfuse, but management of hematocrit during CPB is controversial. Numerous studies have demonstrated that normovolemic anemia is well tolerated in cardiac patients, even at hematocrit values as low as 14%. Spiess and colleagues analyzed more than 2200 bypass patients and found that high hematocrit (34% or greater) upon entry to the ICU was associated with a significantly higher rate of MI than was a low hematocrit (less than 24%), leading the authors to conclude that low hematocrit might be protective against perioperative MI. In contrast, Klass and colleagues performed a study of 500 CABG patients and found no association between perioperative MI rate and hematocrit value on entry into the ICU. Habib and colleagues examined 5000 operations using CPB and found that a number of clinically significant outcomes, including stroke, MI, cardiac failure, renal failure, pulmonary failure, and mortality, were all increased if the intraoperative hematocrit nadir was less than 22%. Similarly, DeFoe and colleagues demonstrated that low hematocrit during CABG is associated with perioperative cardiac failure and increased in-hospital mortality. Each of these studies is limited by its retrospective design, and significant differences in patient populations and other key factors make direct comparisons difficult. Until a well-designed prospective study is performed, the optimal hematocrit value for CPB will remain undetermined, leaving the evaluation of this key indicator of the need to transfuse to the discretion of the physician.


Although the optimal hematocrit during CABG surgery is the subject of continued debate, data are accumulating on the deleterious effects of blood transfusion. Several studies have demonstrated the proinflammatory properties of transfused blood. In addition, the immunomodulatory effects of transfusion have been known for more than 2 decades, and blood transfusion has been associated with increased risk of bacterial as well as viral infection. A number of blood conservation strategies have been designed with the specific intent of decreasing the need for transfusion, including technical modifications to the bypass circuit and the use of various cell salvage techniques, such as autologous blood transfusion and Cell-Saver use. For those patients in whom transfusion cannot be avoided, the use of leukoreduced blood is gaining favor as a method of minimizing the detrimental effects of transfused blood. A national universal leukoreduction program in Canada has been credited with decreasing mortality and antibiotic use in high-risk patients. In the setting of CABG, the role of transfusing leukoreduced blood is unsettled. At least two studies have shown that leukoreduction is not associated with a decrease in postoperative infections. However, in a well-conducted, prospective trial, Furnary and colleagues showed that transfusing leukoreduced blood confers a survival advantage that is present at 1 month and persists up to 1 year.


In sum, despite the regular occurrence of blood transfusion in cardiac surgery patients, the indications, goals, effectiveness, and safety of this common clinical practice remain uncertain. Clinician preference and habit therefore continue to be the prime determinants of many blood transfusion strategies. Data on transfusion practices in cardiac surgery are mixed, but as one prominent expert in the field has concluded, the predominance of data regarding red blood cell transfusion does not support the premise that it improves outcome. Thus, until the appropriate patients and circumstances of transfusion are better defined, it is a practice to be used judiciously.


Operative Strategies to Prevent Ischemia and Ischemia-Reperfusion Injury


For an operation that is performed safely more than 1 million times annually worldwide, CABG is an incredibly complex procedure. Accordingly, strategies aimed at minimizing the morbidity and mortality associated with heart surgery are equally broad in scope. Although overlap exists, conceptually one may divide these efforts into two broad categories: strategies to protect the myocardium itself and strategies to protect against the effects of CPB. In addition, a third category is emerging that includes newer techniques which are a combination of the two and thus do not fall easily into either of the first two categories.


Myocardial Protection Strategies


Cardioplegia


Generally agreed-on characteristics of the ideal cardioplegia solution are that it will (1) achieve a rapid and sustained diastolic arrest, (2) minimize energy requirements while the heart is arrested, (3) prevent damage caused by the absence of coronary blood flow, and (4) prevent ischemia-reperfusion (I/R) injury when blood flow is restored. The earliest cardioplegia solutions contained a high (2.5%) concentration of potassium citrate. Although this solution was effective in achieving chemical cardiac arrest, it was abandoned after only several years when the high potassium concentration was shown to induce myocardial necrosis. In the mid-1960s, several new cardioplegia solutions were introduced, and they were the forerunners of solutions still in use today. The most popular of these were Bretschneider’s intracellular crystalloid solution, St. Thomas’ Hospital extracellular crystalloid solution, and several solutions developed by American researchers. These new solutions continued to rely on potassium to induce cardiac arrest, although at much lower levels than previous solutions. By the late 1970s, use of potassium-based cold crystalloid cardioplegia had become common practice in the United States. Since that time, efforts to improve on cardioplegia have focused on composition of the solution, temperature, the route of delivery, and the use of special additives.


Cardioplegia composition: blood versus crystalloid


Asanguineous crystalloid solutions have been shown to provide good protection against ischemia, even in cases with prolonged bypass times; however, their poor oxygen-carrying capacity may give rise to myocardial oxygen debt. This problem may be overcome, in part, by reducing myocardial metabolism via hypothermia, by oxygenating the crystalloid solution, or by using blood as the cardioplegia vehicle.


Potassium-based blood cardioplegia, introduced in the late 1970s, was shown experimentally to provide better protection than either blood alone or crystalloid cardioplegia. Work by Buckberg and colleagues demonstrated that blood cardioplegia could be performed safely in humans and with good results. Laks and associates provided similar positive results in 1979. Since that time, a number of studies have shown that blood cardioplegia may lead to decreased creatine kinase-MB enzyme release and improved postoperative ventricular function and that it may be of particular benefit to patients with unstable angina or reduced left ventricular function.


The preponderance of evidence suggests that use of blood cardioplegia is superior to use of crystalloid; however, no large-scale, randomized trial has ever been undertaken to provide a more definitive answer. Despite these limitations, over the past 20 years blood cardioplegia has become the preferred means of myocardial protection for most cardiac surgeons.


Cardioplegia solutions: single dose


Del Nido cardioplegia was developed mainly for use in the congenital cardiac population in the mid-1990s as a way to address the inability of the immature myocardium to tolerate high levels of intracellular calcium influx. As opposed to Buckberg cardioplegia and other whole-blood (WB) solutions, del Nido solution is not glucose based; instead, it is a calcium-free, potassium-rich solution intended to have an electrolyte composition close to that of extracellular fluid. It also contains lidocaine, which blocks the sodium channels that maintain depolarization, thereby limiting intracellular calcium influx.


Del Nido cardioplegia solution is typically mixed with cold blood in a 4:1 ratio (as opposed to Buckberg solution, mixed in a 1:4 ratio). It is usually delivered as a single dose for straightforward cardiac operations, as its safety for up to 90 minutes of myocardial ischemic time has been described. This may improve the efficiency and flow of the operation, as other WB-based cardioplegia solutions are typically redosed every 20 minutes.
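The mixing ratios above translate into simple component arithmetic. A minimal sketch follows; the 1000 mL dose volume is illustrative, and institutional formulations vary:

```python
def component_volumes(total_ml: float, crystalloid_parts: int, blood_parts: int):
    """Split a cardioplegia dose into crystalloid and blood components."""
    total_parts = crystalloid_parts + blood_parts
    crystalloid_ml = total_ml * crystalloid_parts / total_parts
    blood_ml = total_ml * blood_parts / total_parts
    return crystalloid_ml, blood_ml

# del Nido, 4:1 crystalloid:blood -> a 1000 mL dose is 800 mL crystalloid, 200 mL blood
print(component_volumes(1000, 4, 1))
# Buckberg-style whole blood, 1:4 -> a 1000 mL dose is 200 mL crystalloid, 800 mL blood
print(component_volumes(1000, 1, 4))
```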


Over the past decade, del Nido cardioplegia solution has gained increasing favor among adult cardiac surgeons and in many institutions now represents the cardioplegia solution of choice. Potential benefits include its single-dose use for many cases (typical redosing interval around 60 minutes), decreased cost, decreased postoperative glucose perturbations, and possible improved myocardial protection. Mick and colleagues evaluated del Nido and Buckberg use among patients undergoing isolated valve replacement. They found shorter CPB, aortic cross-clamping, and total operative times associated with the use of del Nido solution. Postoperative insulin requirements were also lower in the del Nido group.


Ad and colleagues performed a prospective randomized trial comparing del Nido with WB cardioplegia. They found that the use of del Nido was associated with a lower troponin level 24 hours after surgery. Patients receiving del Nido solution also had a higher return to spontaneous rhythm, and fewer patients in the del Nido group required postoperative inotropic support. These results favor at least equivalent myocardial protection between del Nido and traditional WB solutions and possibly point toward superior myocardial protection with del Nido.


Some have questioned the safety of del Nido cardioplegia in complex adult cardiac operations, including patients with significant multivessel coronary artery disease. Theoretical concerns with single-dose cardioplegia include maldistribution of delivery and inadequate myocardial protection. Several studies have evaluated del Nido solution in these higher-risk patient populations. Yerebakan and colleagues evaluated del Nido use in patients undergoing CABG during an acute MI and found no difference in outcome between del Nido and WB cardioplegia. A recent propensity-matched study compared CABG with del Nido cardioplegia versus Buckberg solution. Del Nido was found to provide equivalent myocardial protection while requiring significantly fewer doses, was associated with decreased aortic cross-clamping time, and had lower postoperative glucose levels. Del Nido has also been found to be effective in reoperative cardiac surgery, including when given in a continuous retrograde fashion during redo CABG with a patent left internal mammary artery (LIMA).


Del Nido cardioplegia likely provides myocardial protection at least equivalent to that of WB solutions, and there may be a signal among published reports that del Nido in fact provides superior myocardial protection during adult cardiac surgery. This is accompanied by its ease of use and potential for increased efficiency during the operation, with correspondingly shorter operative times. Previous concerns regarding patients with significant coronary artery disease are probably overstated; however, continued investigation is needed to establish the safety of del Nido solution in these complex patients.


Cardioplegia temperature: warm versus cold


For 4 decades, hypothermia was considered a fundamental requirement in cardiac surgery. However, in the early 1990s, Lichtenstein and colleagues published the earliest reports describing the use of retrograde continuous normothermic cardioplegia. This study compared 121 consecutive patients undergoing CABG with normothermic cardioplegia against 133 historical control subjects, and it showed significant improvement in perioperative MI rate (1.7% versus 6.8%; P < .05), decreased use of IABP (0.9% versus 9.0%; P < .005), and decreased prevalence of low-output syndrome (3.3% versus 13.5%; P < .005). In 1994, the Warm Heart Investigators Trial reported the initial results of a study involving more than 1700 patients randomized either to continuous warm-blood cardioplegia (systemic temperature 33°C to 37°C) or to cold-blood cardioplegia (systemic temperature 25°C to 30°C). This study again demonstrated decreased evidence of enzymatic MI using normothermia (warm 12.3% versus cold 17.3%; P < .001) and decreased incidence of postoperative low-output syndrome in warm patients (6.1% versus 9.3%; P < .01). A subsequent prospectively designed subanalysis of this study demonstrated that warm cardioplegia significantly reduced the overall prevalence of morbidity and mortality (warm 15.9% versus cold 25.2%; P < .01); this protection was seen across all risk groups. However, given the historical precedent of hypothermia being a critical adjunct to myocardial protection, larger studies comparing normothermic and hypothermic cardioplegia strategies will be needed before large-scale practice patterns evolve. Cold blood cardioplegia remains the current gold standard for protection during cardiac arrest.


Cardioplegia route of delivery: antegrade versus retrograde


Antegrade administration of cardioplegia represents the most physiologic method for delivery and is the workhorse method in a majority of cases. However, retrograde delivery of cardioplegia offers a number of potential advantages over antegrade perfusion, including the ability to perfuse regions of the myocardium that would not be reached via antegrade infusion because of occlusion of coronary arteries and the ability to maintain continuous cardioplegia. Disadvantages include the fact that it is technically more difficult than cannulation of the aorta, that retrograde flow provides less homogeneous distribution of cardioplegic solution, and that the right ventricle and posterior ventricular septum receive inferior protection. Despite these limitations, a number of investigators have demonstrated good outcomes using retrograde cardioplegia. Numerous attempts have been made to determine whether antegrade or retrograde cardioplegia provides superior protection. No definitive conclusion has been reached, but many investigators have determined that a combined approach is likely to yield the greatest success and that high-risk patients with severe coronary artery occlusion and/or left ventricular dysfunction or patients undergoing repeat coronary revascularization stand to benefit the most from retrograde delivery of cardioplegia.


Noncardioplegia myocardial protection strategies


Hypothermia


Of all the non–cardioplegia-based myocardial protection strategies, hypothermia has been used most widely and has had the most consistent benefit. As we have learned more about the potential negative consequences of lowered body temperature, and as other protection strategies have been developed, the role of systemic hypothermia has become less central in myocardial protection. Nevertheless, its importance in the history of cardiac surgery cannot be overemphasized, and even today deep systemic hypothermia, as well as topical hypothermia (i.e., slush), may be the complementary strategy or the strategy of choice in special situations.


Arrest variations


Cardiac arrest serves the dual purpose of greatly reducing the metabolic demand of the myocardium while providing the motionless field necessary to complete many operations. As discussed earlier, potassium-based depolarizing chemical cardioplegia has been the mainstay of cardiac arrest mechanisms since the late 1960s. However, a number of alternative techniques have been employed, many of which may be used in conjunction with chemical cardioplegia. These include hypothermia and intermittent aortic cross-clamping with electrically induced ventricular fibrillation. Newer strategies such as polarized arrest and “electroplegia” have yet to be tested in large clinical studies. Of the alternative accepted arrest techniques, cold crystalloid cardioplegia and intermittent aortic cross-clamping are used most widely. Proponents of intermittent cross-clamping cite its simplicity and the reduced cumulative ischemia in comparison with cardioplegia, but head-to-head comparative studies have failed to demonstrate the superiority of either strategy.


Hypothermic fibrillatory arrest


Hypothermic fibrillatory arrest (without aortic cross-clamping) is used with increasing frequency in alternative access cardiac surgery, and in particular has been championed in minimally invasive valve surgery, with excellent results. This technique generally involves CPB with systemic cooling to 26°C to 30°C. Once the patient begins to fibrillate, the heart is appropriately vented, or the left atrium is opened directly (in the case of mitral valve surgery). An obvious limitation is significant aortic insufficiency, which must be addressed to maintain ventricular decompression and optimal surgical visualization. Upon conclusion of the operation using this technique, the importance of meticulous de-airing cannot be overemphasized. De-airing techniques include use of aortic root vents as well as direct venting of the left ventricle while closing the left atrium (in the case of mitral valve surgery). In the appropriate patient, this represents a powerful tool and should be in the arsenal of the cardiac surgeon who cares for complex patients or performs alternative access surgery.


Cannulation techniques


The modern technique of placing a patient on CPB is remarkably similar to the technique employed by pioneers in the field 50 years ago. Both the venous and arterial systems are cannulated as they enter and exit the heart, respectively. When the venae cavae and aorta are then clamped, blood flow is diverted to the bypass machine, effectively excluding the heart and lungs. Cardiac venting is typically required to remove blood that enters the heart from noncoronary collateral flow, such as the bronchial arteries and Thebesian veins. In addition to improving operative visibility, effective drainage prevents distention of the ventricles. Alternatives to the most common forms of venous drainage (right atrial two-stage and bicaval cannulation) and arterial perfusion (ascending aortic cannulation) exist and may be particularly useful in certain circumstances, but no technique has been demonstrated to have a significant impact on prevention of ischemic or inflammatory injury.


CPB circuit modifications


Since its introduction, the CPB circuit has undergone numerous advances, each of which has contributed to declining morbidity and mortality associated with cardiac surgery. The many challenges posed by use of CPB have made it one of the most researched areas of cardiovascular medicine.


Heparin-bonded circuits


To minimize the systemic inflammatory response to the CPB circuit, a number of strategies have been implemented in an attempt to make the circuit more biocompatible. By far the best studied molecule used in these modified circuits is heparin. In theory, a layer of heparin molecules lining the CPB circuitry may mimic the heparan sulfate that coats endothelial cells in vivo, thereby reducing the pathophysiologic response that occurs when blood cells come into contact with the foreign surface. Gott and colleagues first reported the binding of heparin to artificial surfaces in 1963. Since then, heparin-bonded circuits have been tested extensively, and abundant experimental and clinical evidence suggests that, during CPB, heparin-bonded circuits do in fact attenuate the activation of leukocytes, platelets, and complement; decrease release of inflammatory cytokines; and diminish the formation of thromboembolic debris. Although evidence exists that heparin-bonded circuits may not actually decrease thrombogenesis, several controlled studies suggest that the level of anticoagulation can be safely decreased when heparin-bonded circuits are used. Work from our institution has demonstrated that a strategy that combines heparin-bonded circuits and low-dose heparinization as part of a comprehensive blood conservation strategy decreases the inflammatory response and need for transfusion more than any single measure in isolation.


Cardiotomy suctioning


The possible negative consequences of infusion of cardiotomy suction blood have been recognized for decades. As early as 1963, it was demonstrated that neurologic complications associated with CPB could be ameliorated by discarding shed blood rather than returning it to the patient, and diffuse cerebral intravascular fat emboli have been observed in patients who die of neurologic complications in the perioperative period. To minimize this potentially disastrous complication, a defoaming chamber is incorporated into the cardiotomy reservoir, and various filtration systems have been developed. Recent evidence suggests that use of cell salvage techniques may be an even more effective method of recycling shed mediastinal blood. Cell savers have been shown to reduce the lipid burden from shed blood before it is returned to the patient and to reduce the number of lipid microemboli. An additional advantage of a cell saver is that it removes leukocytes from the shed blood, which may help to minimize the inflammatory reaction.


Less readily apparent than the threat of embolism but perhaps equally detrimental are the significant metabolic and proinflammatory effects that are associated with reinfusion of cardiotomy suction blood. Paradoxically, attempts to minimize transfusion requirements by salvaging shed mediastinal blood may be offset by heightened inflammation, vasomotor dysfunction, and altered coagulation. In an observational study involving 12 academic medical centers and more than 600 patients, Body and colleagues concluded that autotransfusion of shed mediastinal blood was ineffective as a blood conservation strategy and that it may be associated with an increased risk of wound infection. Further refinements to the CPB cardiotomy circuit are likely needed to reduce both the embolic and proinflammatory component of recirculation of shed blood.


Open versus closed circuit


Contact with air and filters is known to contribute to blood activation. The conventional CPB circuit includes an open venous reservoir (“hard shell”), which collects both venous return and cardiotomy blood; blood in this open reservoir is exposed to the air and must pass through an integrated filter. A closed reservoir (“soft shell”) is independent from the cardiotomy reservoir, is never exposed to the air, and does not require a filter. Use of closed reservoirs has been shown to decrease fibrin deposition and decrease the expression of a number of inflammatory mediators, including complement levels, the proinflammatory cytokine interleukin (IL)-8, thromboxane, elastase, and tissue plasminogen activator antigen. More importantly, closed reservoirs have been shown to decrease blood loss, decrease the need for blood transfusion, and decrease the length of stay. Although limited to only a few studies, these data are promising, and use of closed reservoirs may be expected to increase in the coming years as further evidence of their efficacy accumulates. However, closed reservoirs are somewhat limited in their ability to incorporate vacuum-assisted drainage and therefore may not be the best choice for certain high-risk situations.


Pump type


Currently, two types of pumps, roller and centrifugal, are used in the vast majority of cardiac surgery cases with CPB. For many years, CPB was performed exclusively with continuous roller pumps. Hemolysis, the risk of pumping large volumes of air, and spallation (the release of particles from the tubing surface) are known consequences of the roller pump; however, its simplicity of design and implementation, as well as relatively low cost, are used to justify its continued use. Reported advantages of a centrifugal pump are improved blood handling, elimination of the risk of overpressurization, and decreased spallation. In vitro analysis has demonstrated reduced hemolysis using centrifugal pumps; however, two small studies have shown that terminal complement levels, the proinflammatory cytokines IL-6 and IL-8, neutrophil count, and elastase levels are all higher when using centrifugal pumps. Clinical outcomes, including chest tube drainage, transfusion requirements, and length of hospital stay, may be improved through the use of centrifugal pumps, although clinical benefit has not been shown in all studies.


Both roller and centrifugal pumps generate continuous, nonpulsatile blood circulation. In the 1950s, Wesolowski and Welch published a series of reports based on more than 20 years spent developing an artificial pump. Their studies, using a canine model, indicated that a short period (up to 6 hours) of nonpulsatile flow had no apparent effect on pulmonary, cardiac, renal, or central nervous system physiology. Limited evidence accumulated since that time suggests that the flow characteristics do have physiologic consequences, and small studies have demonstrated that pulsatile CPB may reduce endothelial damage, suppress cytokine activation, and prevent increases in endogenous endotoxin levels. Taylor and coworkers have suggested that pulsatile flow may provide significant clinical benefit, including improved postoperative ventricular function and reduced mortality. However, these positive findings have not been universal. The effects of pulsatility have been the subject of several recent reviews. In short, the impact of nonpulsatile flow is not fully known. Furthermore, extrapolating from the success of durable left ventricular assist devices, which expose patients to long-term nonpulsatile flow, the absence of pulsatility probably has minimal impact during surgery.


Blood filtration: leukocyte depletion


The central role of leukocytes, particularly neutrophils, in the inflammatory response to CPB and I/R injury is well established. Like many other strategies, leukocyte depletion seems to be fairly effective in reducing inflammatory cells and mediators involved in the response to CPB, but clinically relevant data have been inconclusive. A number of investigators have noted that leukocyte depletion may be beneficial only in certain populations, such as children, patients with impaired cardiac function, and patients undergoing emergent CABG.


Pharmacologic protection strategies


Just as numerous modifications to the CPB circuit have been devised to combat the complexity of the endogenous response to cardiac surgery, numerous pharmacologic interventions have been studied as well. Taking a broad view of the data regarding pharmacologic anti-inflammatory strategies, two conclusions emerge. First, a common feature of many of these drugs is that although each may significantly reduce biochemical markers of inflammation experimentally, their clinical utility remains questionable. Most have not undergone the sort of large, prospective, double-blind, randomized trial that would permit some measure of certainty about their efficacy. Second, as more is learned about the inflammatory mechanisms initiated by cardiac surgery and the unique response of each cardiac surgery patient to those mechanisms, it is becoming clear that no single therapy, in isolation, is effective or appropriate for all situations in all patients. Thus, future investigations must be designed to determine the most beneficial combinations of anti-inflammatory strategies, so that treatment can be tailored accordingly.


Corticosteroids


Experimentally, corticosteroids have been shown to decrease the levels of numerous proinflammatory cytokines and chemokines; to reduce complement levels; to prevent the production of thromboxane and prostaglandins; and to inhibit the activation of inflammatory cells, including macrophages and neutrophils. The effectiveness of corticosteroids in the setting of CPB has been studied by a number of investigators, and general agreement exists that, at the molecular level, corticosteroids are effective in minimizing the inflammatory response to CPB. Various studies have demonstrated reductions in the release of the proinflammatory cytokines IL-6 and IL-8; in complement levels; in tumor necrosis factor-α (TNF-α); in cellular adhesion molecules; and in neutrophil activation and sequestration. At the same time, levels of the anti-inflammatory cytokine IL-10 have been shown to increase with the use of corticosteroids.


In terms of clinical efficacy, the data regarding corticosteroid use have been far less consistent. Dietzman and colleagues reported some of the first observational studies on the use of corticosteroids in human CPB surgery. On the basis of their findings, they concluded that steroids might decrease vasoconstriction, resulting in improvements in both pulmonary and cardiac function. These positive outcomes were soon called into question by another small study, which found that steroid use led to increased blood loss, decreased cardiac function, and an increased requirement for postoperative mechanical ventilation. In the 3 decades since, numerous small, randomized trials have been published, but their results have been conflicting.


These contradictory clinical findings have fueled considerable controversy over the appropriateness of steroid use in cardiac surgery. Proponents point to data suggesting that steroid use is associated with fewer arrhythmias and improved pulmonary function, and limited data indicate that steroids may directly protect the myocardium against ischemic injury as well. Those who advocate against the use of glucocorticoids argue that the existing data do not adequately demonstrate any clinically significant benefit. In their view, given the lack of proven benefit, and in light of evidence that corticosteroids may prolong mechanical ventilation, suppress T-cell function, and decrease glucose tolerance, thereby increasing the risk of wound disruption and infection, the potential risks are not justified.


After reviewing the extant data, a joint task force of the American College of Cardiology and the American Heart Association published guidelines supporting the “liberal prophylactic use” of corticosteroids in the setting of surgery with CPB, with the notable exception of diabetic patients. In the absence of more definitive data, the present authors arrive at a different conclusion. Although the weight of the evidence strongly supports the notion that corticosteroids ameliorate the proinflammatory response to CPB at the molecular and cellular levels, conclusive evidence that corticosteroids provide clinically significant benefit is lacking. At the same time, the evidence that corticosteroids are harmful is equally insufficient. Until appropriately designed, large, randomized, controlled trials are carried out, expanded use of corticosteroids does not seem warranted.


Hemostatic agents


Several drugs are used to decrease the bleeding associated with CPB in an effort to mitigate transfusion-related morbidity. These agents include the lysine analogs tranexamic acid and epsilon-aminocaproic acid, which reduce bleeding by inhibiting the conversion of plasminogen to plasmin (the serine protease responsible for breaking down fibrin), and desmopressin, a vasopressin analog that induces release of the contents of endothelial cell–associated Weibel-Palade bodies, including von Willebrand factor and the associated coagulation factor VIII, thereby potentiating primary hemostasis. Several meta-analyses suggest that tranexamic acid and epsilon-aminocaproic acid are similarly effective in reducing perioperative bleeding and the risk of transfusion in cardiac surgery patients; desmopressin does not appear to provide as much benefit. Data concerning the clinical outcomes associated with these drugs remain insufficient to allow conclusive recommendations regarding their use.


Recombinant factor VIIa has only recently been introduced to cardiac surgery. It may produce its hemostatic effects either by activating factors IX and X on the surface of activated platelets in the absence of tissue factor, thereby enhancing thrombin generation, or by interacting directly with tissue factor at the site of injury to initiate thrombin generation. Currently, this drug is used primarily as a measure of last resort to treat uncontrollable hemorrhage rather than as routine therapy. Several other factor concentrates are available for salvage situations involving intense coagulopathy.


As cardiac surgery patients become increasingly complex and higher risk, the need for multiple pharmacologic options to decrease bleeding will only grow.


Antioxidants


The generation of reactive oxygen species is a major component of the I/R response to cardiac surgery. Oxygen radicals are primarily produced by activated neutrophils and may exert their deleterious effects by the peroxidation of membrane lipids and the oxidation of protective proteins. The body’s innate antioxidant defenses, including α-tocopherol (vitamin E) and ascorbic acid (vitamin C), are critical in preventing free radical–mediated damage. Indeed, studies have shown an inverse epidemiologic correlation between plasma vitamin E levels and mortality due to ischemic heart disease. CPB induces simultaneous increases in both reactive oxygen species and the body’s own antioxidant defense mechanisms; however, these endogenously produced free-radical scavengers may not be able to compensate fully, leading to subsequent tissue destruction. For this reason, a number of investigators have attempted to determine whether administration of exogenous antioxidants is beneficial in CPB.


In animal models, supplementation with vitamin E and vitamin C has been demonstrated to decrease the molecular damage caused by reactive oxygen species; the free-radical scavengers superoxide dismutase (SOD) and catalase have produced significantly better recovery of left ventricular function after reperfusion; and SOD and allopurinol have been shown to significantly reduce the extent of myocardial necrosis that develops after reversible coronary arterial branch occlusion. The same biochemical protection provided by antioxidants has been seen in humans undergoing cardiac surgery; however, in a number of small trials, the clinically relevant effects have been minor, nonexistent, or even potentially harmful.


Alternative approaches to myocardial protection


Previously in this chapter, we discussed protective strategies that are focused either on direct protection of the myocardium or on ameliorating the inflammatory response and I/R injury that are caused by CPB. In these final paragraphs, we shall examine two protective strategies that do not easily fit into either of those categories.


Bypassing CPB: off-pump CABG


Myocardial revascularization without the use of CPB is not a novel concept. Indeed, many of the landmark events in the early years of cardiothoracic surgery, including the first CABG, were performed without the aid of the bypass circuit. Although operating on the beating heart fell out of favor in the late 1960s with the rise of CPB and cardioplegia, the development of new stabilizing devices and the use of a left anterior thoracotomy rather than median sternotomy contributed to the reintroduction of these techniques into clinical practice in the early 1990s, in large part because both off-pump CABG (OPCAB) and minimally invasive direct CABG offer the theoretical advantage of eliminating CPB-associated morbidity altogether. With the exception of the short period of regional myocardial ischemia created while the anastomoses are being performed, blood flow to the beating heart is uninterrupted, thereby minimizing ischemic injury. In addition to avoiding the deleterious effects of CPB, OPCAB has other putative advantages, including decreased surgical trauma, quicker recovery, and shorter hospital stays.


A number of studies have attempted to compare the inflammatory response of OPCAB with that of standard CABG (CABG with CPB). The majority of reported studies are small and nonrandomized, but most have demonstrated that OPCAB is associated with decreased markers of inflammation compared with standard CABG. For example, leukocyte, neutrophil, and monocyte activation are greater with the use of CPB, and complement levels (C3a, C5a), TNF-α, IL-1, IL-6, IL-8, and IL-10 are all increased with CPB. An important confounding factor in most of these studies is that surgical access (i.e., median sternotomy in standard CABG versus anterolateral thoracotomy for OPCAB) has been demonstrated to play an important role in cytokine release; indeed, some authors believe that the surgical approach may have a greater effect on the inflammatory response than does the use of CPB. In addition, many of the early studies comparing OPCAB with standard CABG did not incorporate the newer drugs and technical modifications (described earlier) that have been specifically designed to ameliorate the effects of CPB. Thus, study protocols that have incorporated normothermia, heparin-bonded circuits, complement inhibitors, and elimination of cardiotomy suction blood from the CPB circuit have yielded results suggesting that surgical trauma, rather than CPB, may be the more significant driver of inflammation. In low-risk patients, differences in markers of inflammation may be undetectable.


A final consideration regarding the role of OPCAB in minimizing inflammation is that although global myocardial ischemia is avoided, regional myocardial ischemia still occurs. When the anastomoses are complete and blood flow is restored, the same mediators of I/R injury come into play. The preservation of regional myocardial perfusion with coronary shunts may preserve left ventricular function and prevent severe hemodynamic consequences, but the ability of shunts to prevent I/R injury has not been adequately examined. Thus, although ischemic myocardial damage may be lessened by OPCAB, it is not eliminated, nor is I/R injury.


The ROOBY (Randomized On/Off Bypass) trial was a randomized controlled trial comparing on-pump CABG with OPCAB; a total of 2203 patients were included in the analysis. No difference was observed in early mortality. However, at 1 year, the OPCAB cohort had significantly inferior graft patency compared with the on-pump CABG cohort (82.5% versus 87.8%; P < .01), and patients in the OPCAB group underwent fewer grafts. Critics of this trial note that many of the operations involved surgical residents, emphasizing the steep learning curve of the procedure. Regardless, OPCAB remains a viable option in experienced hands and in certain clinical settings (e.g., porcelain aorta), although whether it is equivalent or superior to on-pump CABG remains unsettled.
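For readers who wish to sanity-check the reported patency difference, the minimal sketch below runs a standard pooled two-proportion z-test on the ROOBY figures. The per-arm denominators are not given in this chapter, so the roughly even split of the 2203 patients, and the treatment of patency as though it were measured per patient rather than per graft, are assumptions made purely for illustration.

```python
from math import sqrt

# 1-year graft patency reported in the ROOBY trial
p_off, p_on = 0.825, 0.878

# Hypothetical denominators: ROOBY analyzed 2203 patients in total, so an
# approximately even split is assumed here; the trial actually assessed
# patency per graft, so these counts are illustrative only.
n_off, n_on = 1100, 1103

# Pooled two-proportion z-test (large-sample approximation)
p_pool = (p_off * n_off + p_on * n_on) / (n_off + n_on)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_off + 1 / n_on))
z = (p_off - p_on) / se

print(f"z = {z:.2f}")  # about -3.5; |z| > 2.58 corresponds to two-sided P < .01
```

Even under these rough assumptions, the computed statistic is consistent with the significance level reported in the trial.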


Myocardial conditioning


As investigation into the molecular machinery of I/R injury enters its fourth decade, we understand that adaptive, protective cellular mechanisms exist. Much current research is aimed at understanding the key receptors, transduction pathways, and molecular mediators involved so that we may develop methods to modify gene expression in order to shift cellular machinery toward a protective phenotype. The phenomenon of myocardial conditioning is reputed to provide the most powerful protection against ischemic injury yet demonstrated.


Preconditioning


In 1986, Murry and colleagues reported the somewhat paradoxical finding that brief, nonsustained periods of I/R could actually diminish the effects of a subsequent prolonged I/R event. This phenomenon, termed ischemic preconditioning, leads not only to smaller infarct size but also to fewer I/R-induced arrhythmias, improved postischemic contractile recovery, reduced ventricular remodeling, and improved survival. Later research established that the protection afforded by preconditioning occurs in two phases, with an early period of protection beginning within minutes of the preconditioning event and lasting several hours (“classical” or “early” preconditioning), and a later period of protection beginning approximately 24 hours after the preconditioning regimen and lasting as long as 3 to 4 days (“delayed preconditioning”).


Since the initial description by Murry using a canine model, the beneficial effects of IPC have been demonstrated experimentally in multiple species, including humans. In addition to myocardial ischemia, other physiologic and pharmacologic stimuli have been shown to trigger preconditioning, including remote ischemia, rapid atrial pacing, heat shock, adenosine, opioids, volatile anesthetics, endotoxin, and many others. Extensive research has significantly advanced our understanding of the molecular mechanisms underlying both IPC and I/R.


Unfortunately, efforts to translate these promising laboratory findings into clinical therapies have proven disappointing, as recently highlighted by a Working Group of the National Heart, Lung, and Blood Institute. One of the difficulties in translating the gains made in our basic science understanding of the preconditioning phenomenon into clinical practice is that the onset of the ischemic event (e.g., in the case of acute coronary syndromes) is often unpredictable. Although certain at-risk patients may one day benefit from pharmacologic strategies aimed at harnessing the protective power of delayed preconditioning, early preconditioning–focused strategies will be less applicable in these unforeseeable events. On the other hand, cardiac surgery and transplantation are two examples of instances in which the nature and timing of the I/R event are predictable and controllable, and thus surgical I/R is very amenable to strategies aimed at exploiting the resistance to injury provided by early preconditioning.


Unfortunately, despite the wealth of supporting laboratory data, results in humans in the clinical setting have been mixed. In fact, a large randomized trial comparing remote ischemic preconditioning (RIPC) with controls prior to cardiac surgery found no difference in outcomes between groups. Furthermore, a recent meta-analysis of RIPC found that this technique offered no benefit in morbidity or mortality in patients undergoing cardiac surgery. Therefore, this technique has largely been abandoned in contemporary practice.


Postconditioning


The beneficial effects of modified reperfusion have been known for many years and have been demonstrated in the clinical setting. In 2003, Zhao, Vinten-Johansen, and colleagues at Emory published the first study describing a variation of controlled reperfusion that they termed postconditioning. Using an open-chest canine model, they demonstrated that briefly interrupting reperfusion in a repetitive fashion at the onset of coronary reflow greatly diminished infarct size. These intriguing findings have since been reproduced in other animal models, with a degree of protection comparable to that seen with preconditioning. In addition to decreasing infarct size, protection against life-threatening arrhythmias has also been shown.


Although the mechanisms of postconditioning protection are not fully defined, possibilities include attenuation of the injury caused by reactive oxygen species, prevention of cardiomyocyte hypercontracture, reduction of ischemia-induced swelling, and activation and “cross-talk” of various “cell survival” pathways. Some of the key mediators involved in preconditioning have also been studied in postconditioning protocols. In addition to ischemia, anesthetics, adenosine, bradykinin, and insulin are protective when given immediately before or at the time of reperfusion. Important effectors include mitochondrial adenosine triphosphate–regulated potassium (KATP) channels, the mitochondrial permeability transition pore, and phosphoinositide 3-kinase–Akt.


Although still in its infancy, the field of postconditioning has tremendous appeal because of its potential therapeutic impact. As noted earlier, one of the great difficulties in applying preconditioning strategies in the clinical setting is that the ischemic event is frequently unpredictable. In postconditioning, by contrast, any proposed intervention comes at the time of reperfusion, the manner and timing of which are under the control of the physician. Furthermore, in cases of planned I/R (e.g., CABG or organ transplantation), a combination of preconditioning and postconditioning strategies may yield protection superior to either alone.


