The timing between initiation of extracorporeal therapy and ICU admission is another issue that should be taken into account when classifying RRT as early or late. Data from the BEST Kidney registry (8) reveal that, when timing was analyzed in relation to ICU admission, “late” RRT was associated with greater crude mortality, covariate-adjusted mortality, RRT requirement, and hospital length of stay (8). Although several studies have suggested a possible beneficial role of early RRT in AKI patients, contrasting results are available in the literature. In 2002, Bouman et al. (9) showed no differences in ICU or hospital mortality, or in renal recovery, between patients treated with early or late RRT. However, when studies are considered cumulatively in systematic reviews or meta-analyses, whatever the parameters used to define onset, early initiation of RRT appears to be associated with improved outcome (10). In a recent meta-analysis of 15 unique studies published through 2010 comparing early and late initiation of renal support, Karvellas et al. calculated an odds ratio for 28-day mortality of 0.45 in favor of early RRT (10). Similar results were obtained by Wang and colleagues (11) in a 2012 meta-analysis encompassing data from 2,955 patients; this study indicated that early initiation of either continuous or intermittent RRT may reduce mortality in patients with AKI compared with late treatment.
|TABLE 133.2 Indications for Renal Replacement Therapya|
When to Stop
Once RRT has been started, the timing of its discontinuation is another field of uncertainty, as the literature is scarce. Bedside evaluation of weaning from RRT implies two fundamental clinical data elements: the state of renal function, and recovery from the morbidity that initially led to RRT. Current guidelines suggest discontinuing RRT when kidney function has recovered sufficiently to meet patient needs or when RRT “is no longer consistent with the goals of care” (1). Many, but not all, patients receiving RRT will recover renal function, so daily evaluation of the appropriateness of treatment is necessary to identify weaning opportunities, including modality transitions—e.g., from continuous to intermittent—or the decision to withdraw treatment for futility (12,13). The assessment of kidney function during RRT is a complex issue, and clear recommendations are not available. From a practical clinical point of view, diuresis seems to be the most efficient predictor of successful RRT weaning. A large prospective observational study encompassing 529 patients showed that urine output was the most significant predictor of successful termination of RRT (14). It is important to underline that, while diuretics increase urine output even in AKI patients, current guidelines suggest “not using diuretics to enhance kidney function recovery, or to reduce the duration or frequency of RRT” (1); diuretics increase urine output but do not appear to positively influence renal function.
Unanswered Research Topics
A number of issues remain unaddressed, warranting future research. The KDIGO guidelines recommend studies to establish reproducible criteria capable of suggesting the optimal timing for initiation of RRT in AKI patients (1). Timing—early versus late initiation, as well as criteria for weaning—should be correlated with outcome measures, taking into consideration all relevant variables: dose, modality, materials, and anticoagulation.
CONTINUOUS, INTERMITTENT, DIFFUSIVE, CONVECTIVE: AN ONGOING MATTER
Renal replacement consists of the purification of blood across semipermeable membranes; a wide range of molecules—from water to urea to low–, middle–, and high–molecular-weight solutes—are transported across such membranes by the mechanisms of ultrafiltration (water) and of convection and diffusion (solutes) (Fig. 133.1).
During diffusion, the movement of solutes depends upon their tendency to reach the same concentration on each side of the membrane; the practical result is the passage of solutes from the compartment with the higher concentration to the compartment with the lower one. Several characteristics of the semipermeable membrane and of the system deeply affect diffusion: membrane thickness and surface area, temperature, and the diffusion coefficient of the solute. Dialysis is a modality of RRT based predominantly on the principle of diffusion: a dialytic solution flows through the filter countercurrent to blood flow in order to maintain the highest solute gradient from inlet to outlet port.
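The dependence of diffusive transport on membrane thickness, surface area, concentration gradient, and diffusion coefficient follows Fick's first law. The short Python sketch below makes the proportionality explicit; the numeric values are purely illustrative assumptions, not clinical or membrane specifications:

```python
def diffusive_flux(diffusion_coeff, area, conc_gradient, thickness):
    """Fick's first law: mass flux from the high- to the low-concentration
    compartment scales with D * A * dC and inversely with membrane thickness."""
    return diffusion_coeff * area * conc_gradient / thickness

# Illustrative values only: halving membrane thickness doubles solute flux,
# all other factors being equal.
thin_membrane = diffusive_flux(1.8e-5, 10_000, 1.0, 0.001)
thick_membrane = diffusive_flux(1.8e-5, 10_000, 1.0, 0.002)
```

This is why dialyzer design favors thin, high-surface-area fibers: both factors increase diffusive clearance directly.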
During convection, the movement of solute across a semipermeable membrane takes place in conjunction with significant amounts of ultrafiltration (water transfer across the membrane). In other words, as the solvent (plasma water) is pushed across the membrane in response to the transmembrane pressure (TMP) by ultrafiltration (UF), solutes are carried with it, as long as the porosity of the membrane allows the molecules to be sieved from blood. The process of UF is governed by the UF rate (Qf), the membrane UF coefficient (Km), and the TMP gradient generated by the pressures on both sides of the membrane (see the legend of Fig. 133.1).
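The relationship among Qf, Km, and TMP described above can be sketched in a few lines; the pressure values and UF coefficient below are illustrative assumptions, not machine specifications:

```python
def transmembrane_pressure(p_blood, p_effluent, oncotic):
    """TMP (mmHg): mean blood-side hydrostatic pressure minus effluent-side
    pressure minus the plasma oncotic pressure opposing filtration."""
    return p_blood - p_effluent - oncotic

def ultrafiltration_rate(km, tmp):
    """Qf (mL/hr) = membrane UF coefficient Km (mL/hr/mmHg) x TMP (mmHg)."""
    return km * tmp

# Suction applied by the UF pump (negative effluent-side pressure) raises TMP:
tmp = transmembrane_pressure(p_blood=120, p_effluent=-30, oncotic=25)  # 125 mmHg
qf = ultrafiltration_rate(km=20, tmp=tmp)                              # 2,500 mL/hr
```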
The hydrostatic pressure in the blood compartment is dependent on blood flow (Qb): the greater the Qb, the greater the TMP. In modern RRT machines, UF control across the filter is achieved with a pump that applies suction to the UF side of the membrane. Modern systems are designed to maintain a constant Qf; it is worth noting that, when the filter is “fresh,” the initial effect of the UF pump is to retard UF production, generating a positive pressure on the UF side, so that TMP initially depends only on Qb. As the membrane fibers foul, a negative pressure becomes necessary to maintain a constant Qf. In this case, a progressive increase of TMP can be observed, up to a maximal level at which clotting is likely, membrane rupture may occur and, above all, solute clearance may be significantly compromised. Although the size of molecules cleared during convection exceeds that during diffusion, because they are physically dragged to the UF side, this advantage is seriously limited by the protein layer that progressively occludes filter pores during convective treatments (15). A peculiar membrane capacity, termed adsorption, has been shown to have a major role in the removal of higher–molecular-weight toxins (16); however, membrane adsorptive capacity is generally saturated within the first hours of treatment. This observation underlines the limited contribution of adsorption to overall solute clearance and suggests relying only on mass separation processes such as diffusion and convection (17). As UF proceeds, and plasma water and solutes are filtered from blood, hydrostatic pressure within the filter is lost and oncotic pressure is gained, because blood concentrates and hematocrit increases.
The fraction of plasma water that is removed from blood during UF is called the filtration fraction; it should be kept in the range of 20% to 25% to prevent excessive hemoconcentration within the filtering membrane and to avoid the critical point where oncotic pressure equals TMP and a condition of filtration/pressure equilibrium is reached. Finally, replacing plasma water with a substitution solution completes the hemofiltration (HF) process and returns purified blood to the patient. The replacement fluid can be administered after the filter, a process called postdilution HF; alternatively, the solution can be infused before the filter to obtain predilution HF, whereas mixed predilution–postdilution HF is obtained by infusing substitution fluid both before and after the filtering membrane. While postdilution allows a urea clearance equivalent to the delivered effluent flow (e.g., 2,000 mL/hr; see below), predilution, despite a theoretical reduction in solute clearance, prolongs circuit lifespan and reduces the hemoconcentration and protein-caking effects occurring within filter fibers. Conventional HF is performed with a highly permeable, steam-sterilized membrane with a surface area of about 1 m2 and a cutoff point of 30 kDa (Fig. 133.2).
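The 20% to 25% filtration-fraction target can be checked with a one-line calculation; the flow and hematocrit values below are illustrative assumptions, not recommendations:

```python
def filtration_fraction(qf_ml_min, qb_ml_min, hct):
    """Fraction of plasma water removed by UF: Qf divided by the plasma
    flow entering the filter, Qb * (1 - hematocrit)."""
    return qf_ml_min / (qb_ml_min * (1 - hct))

# Qf of 2,000 mL/hr (~33 mL/min) with Qb 150 mL/min and Hct 0.30:
ff_low_qb = filtration_fraction(2000 / 60, 150, 0.30)   # ~0.32, above the 25% limit
# Raising Qb to 250 mL/min brings the same Qf within the target range:
ff_high_qb = filtration_fraction(2000 / 60, 250, 0.30)  # ~0.19
```

When Qb cannot be increased, switching part of the replacement fluid to predilution is the other common way to limit hemoconcentration at a given effluent rate.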
Concept of RRT Dose
The conventional view of RRT dose is that it is a measure of the quantity of blood purification achieved by means of extracorporeal techniques. As this broad concept is too difficult to measure and quantify, the operative view of RRT dose is that it is a measure of the quantity of a representative marker solute that is removed from a patient. This marker solute is considered reasonably representative of similar solutes whose removal is required for blood purification to be considered adequate. This premise has several major flaws: the marker solute cannot and does not represent all the solutes that accumulate in renal failure; its kinetics and volume of distribution differ from those of such solutes; and its removal during RRT is not representative of the removal of other solutes. This is true both for end-stage renal failure and for acute renal failure. However, a significant body of data in the end-stage renal failure literature (18–23) suggests that, despite all of the above major limitations, a single-solute marker assessment of dialysis dose has a clinically meaningful relationship with patient outcome and, therefore, clinical utility. Nevertheless, the HEMO study, examining the effect of intermittent hemodialysis (IHD) dose, reinforced the concept that “less dialysis is worse,” but failed to confirm the intuition that “more dialysis is better” (23). Thus, while this premise has proved useful in end-stage renal failure, it is accepted as only potentially useful in AKI for operative purposes. Hence, the delivered dose of RRT can be described in various terms—efficiency, intensity, frequency, and clinical efficacy—each of which is discussed below.
The efficiency of RRT is represented by the concept of clearance (K), i.e., the volume of blood cleared of a given solute over a given time. K does not reflect the overall solute removal rate (mass transfer) but, rather, its value normalized by the serum concentration. Even when K remains stable over time, the removal rate will vary if the blood levels of the reference molecule change. K depends on solute molecular size and transport modality—diffusion or convection—as well as circuit operational characteristics—blood flow rate (Qb), dialysate flow rate (Qd), ultrafiltration rate (Qf), hemodialyzer type, and size. K can normally be used to compare the treatment dose during each dialysis session, but it cannot be employed as an absolute dose measure to compare treatments with different time schedules. For example, K is typically higher in IHD than in continuous renal replacement therapy (CRRT) and sustained low-efficiency daily dialysis (SLEDD). This is not surprising, since K represents only the instantaneous efficiency of the system. However, mass removal may be greater during SLEDD or CRRT. For this reason, the information about the time span during which K is delivered is fundamental to describe the effective dose of dialysis.
The intensity of RRT can be defined by the product “clearance × time” (Kt). Kt is more useful than K in comparing various RRTs. A further step in assessing dose must include frequency of the Kt application over a particular period (e.g., a week). This additional dimension is given by the product of intensity × frequency (Kt × treatment days/week = Kt d/w). Kt d/w is superior to Kt since it offers information beyond a single treatment—patients with AKI typically require more than one treatment. This concept of Kt d/w offers the possibility to compare disparate treatment schedules—intermittent, alternate-day, daily, continuous. However, it does not take into account the size of the pool of solute that needs to be cleared; this requires the dimension of efficacy.
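The difference between instantaneous efficiency (K) and cumulative intensity applied over a week (Kt d/w) can be illustrated numerically; the K values below are plausible but assumed figures, not measured data:

```python
def weekly_kt_liters(k_ml_min, hours_per_session, sessions_per_week):
    """Kt d/w: clearance x time x frequency, expressed in liters per week."""
    return k_ml_min * 60 * hours_per_session * sessions_per_week / 1000

# IHD: high instantaneous K (assumed 200 mL/min), but only 4 hr x 3 sessions/wk.
ihd_kt = weekly_kt_liters(200, 4, 3)    # 144 L/wk
# CRRT: modest K (assumed 35 mL/min) applied 24 hr/day, 7 days/wk.
crrt_kt = weekly_kt_liters(35, 24, 7)   # ~353 L/wk
```

Despite a nearly sixfold lower instantaneous K, the continuous schedule delivers more than twice the weekly Kt, which is exactly why K alone cannot compare treatments with different time schedules.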
The efficacy of RRT represents the effective solute removal outcome resulting from the administration of a given treatment to a given patient. It can be described by the fractional clearance of a given solute (Kt/V), where V is the volume of distribution of the marker molecule in the body. Kt/V is an established marker of dialysis adequacy for small solutes, correlating with medium-term (several years) survival in chronic hemodialysis patients (23). Urea is typically used as the marker molecule in end-stage kidney disease to guide treatment dose, and a Kt/VUREA of at least 1.2 is currently recommended. As an example, consider the case of a 70-kg patient treated 20 hr/d with postfilter HF at 2.8 L/hr and zero fluid balance. The patient’s KUREA will be about 47 mL/min (2,800 mL/hr ÷ 60 min), because during postfilter HF the ultrafiltered plasma water drags all urea across the membrane, making urea clearance identical to the UF flow. The treatment time (t) will be 1,200 minutes (20 hours × 60 minutes). The urea volume of distribution (V) will be approximately 42,000 mL (60% of 70 kg = 42 L), roughly equal to total body water. Our patient’s Kt/VUREA will therefore be 47 × 1,200/42,000 = 1.34.
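The worked example above is a sketch of simple arithmetic; the small discrepancy with the 1.34 quoted in the text comes only from rounding K up to 47 mL/min:

```python
# 70-kg patient, postfilter HF at 2.8 L/hr, zero balance, 20 hr/day.
k_ml_min = 2800 / 60            # urea clearance equals UF rate: ~46.7 mL/min
t_min = 20 * 60                 # 1,200 minutes of treatment per day
v_ml = 0.6 * 70 * 1000          # V_urea ~ 60% of body weight = 42,000 mL
kt_v = k_ml_min * t_min / v_ml  # ~1.33 (1.34 in the text, with K rounded to 47)
```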
However, the application of Kt/VUREA to patients with AKI has not been rigorously validated. Although the application of Kt/V to dose assessment in AKI is theoretically intriguing, many concerns have been raised, because problems intrinsic to AKI can hinder the accuracy and meaning of such a dose measurement. These include the lack of a metabolic steady state, uncertainty about the volume of distribution of urea (VUREA), a high protein catabolic rate, labile fluid volumes, and possible residual renal function, which changes dynamically during the course of treatment. Furthermore, delivery of the prescribed dose in AKI can be limited by technical problems such as access recirculation, poor blood flows with temporary venous catheters, membrane clotting, and machine malfunction. Finally, clinical issues such as hypotension and vasopressor requirements can be responsible for solute disequilibrium within tissues and organs.
These aspects are particularly evident during IHD, less so during SLEDD, and even less so during CRRT. This difference is due to the fact that, after some days of CRRT, the patient’s urea levels approach a true steady state. Access recirculation is also an issue of lesser impact during low-efficiency continuous techniques. Finally, because the therapy is applied continuously, the effect of solute compartmentalization is minimized and, from a theoretical point of view, single-pool kinetics (spKt/V) can be applied with a reasonable chance of approximating true solute behavior. In a prospective study of continuous therapies, clearance estimated with a simple Excel-based calculator applying standard formulas for K correlated significantly with values obtained from direct blood and dialysate determinations during the first 24 treatment hours, irrespective of the CRRT modality used (24,25).
The major shortcoming of the traditional solute marker–based approach to dialysis dose lies beyond any methodologic critique of single-solute kinetics-based prescriptions: in patients with AKI, the majority of whom are in intensive care, a restrictive (solute-based only) concept of dialysis dose seems grossly inappropriate. In these patients, the therapeutic targets that can be, or need to be, affected by the “dose” of RRT go well beyond the simple control of small solutes as represented by urea. They include control of acid–base status, tonicity, potassium, magnesium, calcium, phosphate, intravascular volume, extravascular volume, and temperature, as well as the avoidance of unwanted side effects associated with the delivery of solute control. In the critically ill patient (e.g., in the setting of coagulopathic bleeding after cardiac surgery), it is much more important for 10 units of fresh frozen plasma, 10 units of cryoprecipitate, and 10 units of platelets to be administered rapidly without inducing fluid overload (because 1 to 1.5 L of ultrafiltrate can be removed in 1 hour) than for Kt/V to reach any particular value; here, the dose of RRT is about prophylactic volume control. In a patient with right ventricular failure, AKI, and ARDS, who is receiving lung-protective ventilation with permissive hypercapnia, and in whom acidemia is inducing a further life-threatening deterioration in pulmonary vascular resistance, the “dose” component of RRT that matters immediately is acid–base control and normalization of pH 24 hours a day; Kt/V (or any other solute-centric concept of dose) is essentially a byproduct of such dose delivery. In a young man with trauma, rhabdomyolysis, and a rapidly rising serum potassium already at 7 mmol/L, the initial dialysis dose is all about controlling hyperkalemia.
In a patient with fulminant liver failure, AKI, sepsis, and cerebral edema awaiting urgent liver transplantation, and whose cerebral edema is worsening because of fever, RRT dose is centered on lowering the temperature without any tonicity shifts that might increase intracranial pressure. Finally, in a patient with pulmonary edema after an ischemic ventricular septal defect requiring emergency surgery, along with AKI, ischemic hepatitis, and the need for inotropic and intra-aortic balloon counterpulsation support, RRT dose mostly concerns removing fluid gently and safely so that the extravascular volume falls while the intravascular volume remains optimal. Solute removal is just a byproduct of fluid control. These aspects of dose must explicitly be considered when discussing the dose of RRT in AKI, for it is likely that patients die more often from incorrect “dose” delivery of this type than incorrect dose delivery of the Kt/V type. Although each and every aspect of this broader understanding of dose is difficult to measure, clinically relevant assessment of dose in critically ill patients with AKI should include all dimensions of such a dose, and not one dimension picked because of a similarity with end-stage renal failure. There is no evidence in the acute field that such solute control data are more relevant to clinical outcomes than volume control, acid–base control, or tonicity control.
Despite all the uncertainty surrounding its meaning and the gross shortcomings related to its accuracy in patients with AKI, the idea that there might be an optimal dose of solute removal retains a powerful hold in the literature. This is likely due to evidence from ESRD, where a minimum Kt/V of 1.2 thrice weekly is indicated as the standard (23). However, the benefits of greater Kt/V accrue over years of therapy; in AKI, any difference in dose would apply for days to weeks, and the view that this would still be sufficient to alter clinical outcomes remains somewhat optimistic. Nonetheless, the hypothesis that higher doses of dialysis may be beneficial in critically ill patients with AKI must be considered by analogy and investigated, and several reports in the literature deal with this issue. Furthermore, the concept of a predefined dose is a powerful tool to guide clinicians to a correct prescription and, at least, to avoid undertreatment.
Brause et al. (26), using continuous venovenous hemofiltration (CVVH), found that higher Kt/V values (0.8 vs. 0.53) correlated with improved uremic control and acid–base balance, although no clinically important outcome metric was affected. Investigators from the Cleveland Clinic (27) retrospectively evaluated 844 patients with AKI requiring CRRT or IHD over a 7-year period. They found that, when patients were stratified for disease severity, dialysis dose did not affect outcome in patients with very high or very low scores, but did correlate with survival in patients with intermediate degrees of illness. A mean Kt/V greater than 1.0 or a time-averaged urea concentration (TACUREA) below 45 mg/dL was associated with increased survival. This study was retrospective, with a clear post hoc selection bias; therefore, the validity of these observations remains highly questionable.
Daily IHD, compared to alternate-day IHD, also seemed to be associated with improved outcome in a randomized trial (28). Daily hemodialysis resulted in significantly improved survival (72% vs. 54%, p = 0.01), better control of uremia, fewer hypotensive episodes, and more rapid resolution of AKI. However, this study had several limitations: sicker, hemodynamically unstable patients were excluded and underwent CRRT instead. Furthermore, according to the mean TACUREA reported, patients receiving conventional alternate-day IHD appear to have been underdialyzed. In addition, this was a single-center study, with all the inherent limitations in regard to external validity. Finally, the alternate-day schedule was associated with significant differences in fluid removal and dialysis-associated hypotension, suggesting that other aspects of “dose” beyond solute control—such as inadequate and episodic volume control—may explain the findings. Clearly, further studies need to be undertaken to assess the effect of IHD dose on outcome.
In a randomized controlled trial of CRRT dose, continuous venovenous postdilution hemofiltration (CVVH) at 35 or 45 mL/kg/hr was associated with improved survival compared to 20 mL/kg/hr in 425 critically ill patients with AKI (29). Applying the Kt/V dose assessment methodology to CVVH, a dose of 35 mL/kg/hr in a 70-kg patient treated for 24 hours would be equivalent to a daily Kt/V of 1.4. Despite the uncertainty regarding the calculation of VUREA, CVVH at 35 mL/kg/hr would still provide an effective daily delivered dose close to 1.2, even if VUREA were underestimated by 20%. Many technical and/or clinical problems—including filter clotting, high filtration fraction in the presence of vascular access dysfunction with fluctuations in blood flow, circuit downtime during surgery or radiologic procedures, and filter changes—can make it difficult, in routine practice, to apply such a strict protocol by pure postdilution HF. Equally important is the observation that this study was conducted over 6 years in a single center, uremic control was not reported, the incidence of sepsis was low compared to the typical populations reported to develop AKI worldwide, and the final outcome was not the accepted 28- or 90-day mortality typically used in ICU trials. Thus, despite the interesting findings, the external validity of this study remains untested.
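The dose-to-Kt/V conversion used in this argument is simple arithmetic; the 60%-of-body-weight estimate of VUREA is the usual assumption, and this sketch only reproduces the calculation, not a prescription rule:

```python
dose_ml_kg_hr = 35
weight_kg = 70
effluent_l_day = dose_ml_kg_hr * weight_kg * 24 / 1000  # 58.8 L of effluent per day
v_urea_l = 0.6 * weight_kg                              # ~42 L
daily_kt_v = effluent_l_day / v_urea_l                  # 1.4

# Even if V_urea were 20% larger than estimated (~50.4 L), the delivered
# daily Kt/V would remain close to the 1.2 target (~1.17).
kt_v_larger_v = effluent_l_day / (v_urea_l * 1.2)
```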
Another prospective randomized trial conducted by Bouman et al. (9) assigned patients to three intensity groups: early high-volume HF (72–96 L/24 hr); early low-volume HF (24–36 L/24 hr); and late low-volume HF (24–36 L/24 hr). These investigators found no difference in terms of renal recovery or 28-day mortality. Unfortunately, prescribed doses were not standardized by weight, causing a wide variability in RRT dose ultimately delivered to patients. Furthermore, the number of patients was small, making the study insufficiently powered and, again, the incidence of sepsis was low compared to the typical populations reported to develop AKI in the world.
Notwithstanding the problems we raise with these studies, they must be seen in light of an absolute lack of any previous attempt to adjust AKI treatment dose to specific target levels. The differences between delivered and prescribed dose in patients with AKI undergoing IHD were analyzed by Evanson and coworkers (30). The authors found that high patient weight, male gender, and low blood flow were limiting factors for RRT administration, and that about 70% of dialyses delivered a Kt/V of less than 1.2. A retrospective study by Venkataraman et al. (31) similarly showed that patients received only 67% of the prescribed CRRT dose. These observations underline that RRT prescriptions for AKI patients in the ICU should be monitored closely to ensure adequate delivery of the prescribed dose.
Two recent large randomized trials, the Randomised Evaluation of Normal versus Augmented Level of Replacement Therapy (RENAL) (32) and the Acute Renal Failure Trial Network (ATN) (33) studies, appeared to definitively refute the concept that a “higher” dose is better. These two large multicenter, randomized controlled trials did not show improved outcome with a “more intensive” dose (40 and 35 mL/kg/hr, respectively) compared to a “less intensive” dose (25 and 20 mL/kg/hr, respectively). Based upon these findings, the current KDIGO guidelines recommend delivering an effluent volume of 20 to 25 mL/kg/hr for CRRT in patients with AKI (1). In addition, by comparing two multicenter CRRT databases—The Beginning and Ending Supportive Therapy (BEST) study (2) and the Japanese Society of Education for Physicians and Trainees in Intensive Care (JSEPTIC) Clinical Trial Group—Uchino et al. (34) found that patients with AKI treated with low-dose CRRT did not have worse short-term outcomes than patients treated with what is currently considered the standard (higher) dose, observing no differences between groups treated with doses of 14.3 and 20.4 mL/kg/hr (34). Finally, considering that high-dose CRRT can lead to electrolyte disorders, removal of nutrients and drugs (e.g., antibiotics), and high costs (35), while low-dose CRRT may expose patients to undertreatment and potentially worsen outcome, defining the range of an adequate treatment dose is a crucial issue. At this time, a delivered dose (without downtime) between 20 and 35 mL/kg/hr may be considered clinically acceptable (36). CRRT prescriptions below 20 mL/kg/hr or above 35 mL/kg/hr fall in the dose-dependent range, where dialytic intensity is likely to negatively affect outcome through undertreatment or overdialysis, respectively.
The prescriptions lying between these two limits, on the other hand, can be considered practice dependent: variables such as timing, patient characteristics, comorbidities, and concomitant supportive pharmacologic therapies may significantly influence outcome, and should prompt careful prescription and closer monitoring of dose delivery.
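A bedside check against these limits reduces to one division; the function names and thresholds below simply mirror the 20 to 35 mL/kg/hr range discussed above, as an illustrative sketch rather than a clinical decision tool:

```python
def delivered_dose_ml_kg_hr(effluent_ml_hr, weight_kg):
    """CRRT dose expressed, as in the trials above, as effluent per kg per hour."""
    return effluent_ml_hr / weight_kg

def classify_dose(dose_ml_kg_hr):
    """Map a delivered dose onto the ranges described in the text."""
    if dose_ml_kg_hr < 20:
        return "dose-dependent range: risk of undertreatment"
    if dose_ml_kg_hr > 35:
        return "dose-dependent range: risk of overdialysis"
    return "practice-dependent range (20-35 mL/kg/hr)"

# A 2,000 mL/hr effluent in an 80-kg patient delivers 25 mL/kg/hr:
dose = delivered_dose_ml_kg_hr(2000, 80)
```

Note that this uses delivered effluent (downtime excluded); as discussed above, delivered dose routinely falls well short of the prescribed dose.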
During RRT, clearance depends on circuit blood flow (Qb), hemofiltration (Qf) or dialysate (Qd) flow, solute molecular weight, and hemodialyzer type and size. Qb, as a variable in delivering RRT dose, is mainly dependent on vascular access and the operational characteristics of the machines utilized in the clinical setting. During convective techniques, Qf is strictly linked to Qb by the filtration fraction. Filtration fraction does not limit Qd, but when the Qd/Qb ratio exceeds 0.3, it can be estimated that dialysate will no longer be completely saturated by blood-diffusing solutes. The search for specific toxins to be cleared, furthermore, has not been successful despite years of research, and urea and creatinine are generally used as reference solutes to measure renal replacement clearance. While available evidence does not allow a direct correlation of the degree of uremia with outcome in chronic renal disease, in the absence of a specific toxic solute, urea and creatinine clearances and blood levels are used to guide treatment dose. During UF, the driving pressure forces solutes, such as urea and creatinine, against the membrane and into the pores, depending on the membrane sieving coefficient (SC) for each molecule. The SC is a dimensionless value estimated by the ratio of the solute concentration in the filtrate to that in plasma water, or blood. An SC of 1.0, as is the case for urea and creatinine, indicates complete permeability, whereas a value of 0 reflects complete impermeability. Molecular size (above approximately 12 kDa) and filter porosity are the major determinants of SC. The K during convection is measured by the product of Qf and SC. Thus, in contrast to diffusion, there is a linear relationship between K and Qf, the SC being the changing variable for different solutes. During diffusion, the linear relationship between K and Qd is lost when Qd exceeds about one-third of Qb.
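The two clearance rules stated above—convective K equals Qf times SC, and loss of dialysate saturation once Qd/Qb exceeds about 0.3—can be sketched as follows (the SC of 0.4 for a larger solute is an assumed, illustrative value):

```python
def convective_clearance(qf_ml_min, sieving_coefficient):
    """K during convection: ultrafiltration rate times the membrane SC."""
    return qf_ml_min * sieving_coefficient

def dialysate_saturated(qd_ml_min, qb_ml_min):
    """Rule of thumb from the text: full saturation of dialysate with
    blood-diffusing solutes can be assumed while Qd/Qb <= ~0.3."""
    return qd_ml_min / qb_ml_min <= 0.3

k_urea = convective_clearance(33.3, 1.0)  # SC = 1.0 for urea: K equals Qf
k_big = convective_clearance(33.3, 0.4)   # larger solute with assumed SC of 0.4
```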
As a rough estimate, we can consider that during continuous low-efficiency treatments, RRT dose is a direct expression of Qf or Qd, independent of which solute must be removed from blood. During continuous treatment, delivering at least a urea clearance of 2 L/hr has been suggested, with clinical evidence that 35 mL/kg/hr may be the best prescription (i.e., about 2.5 L/hr in a 70-kg patient). Other authors suggest a prescription based on patient requirements, i.e., as a function of the urea generation rate and catabolic state of the individual patient. It has been shown, however, that during continuous therapy, a clearance of less than 2 L/hr will almost certainly be insufficient in a critically ill adult patient. For more exact estimations, simple computations have been shown to adequately estimate clearance (23,37). Tables 133.3 and 133.4 show a potential flow chart that could be followed each time an RRT prescription is indicated.
|TABLE 133.3 Algorithm for RRT Prescription|