Dialysis Therapy in the Intensive Care Setting




Arif Showkat

Sergio R. Acchiardo

William F. Owen Jr.



Overview of Dialysis Modalities


Bones can break, muscles can atrophy, glands can loaf, even the brains can go to sleep, without immediately endangering our survival; but should the kidneys fail neither bone, muscle, gland, nor brain could carry on.

Homer W. Smith (1895-1962)

In 1839, Addison [1] reported that stupor, coma, and convulsions were consequences of diseased kidneys, which was referred to as Bright’s disease for much of that century [2]. Almost 50 years later, Tyson [3] noted that uremic symptoms depend on retention of urea and allied substances in the blood, which, when they have accumulated to a sufficient quantity, act on the nervous system producing delirium and convulsions or coma.

These rudimentary observations relevant to the pathobiology of symptomatic renal failure suggested that therapy should be directed at mitigating the retention of nitrogenous products of metabolism. In September 1945, Kolff [4] dialyzed a 67-year-old woman with acute oliguric renal failure. Although this was a successful effort, dialysis remained an experimental tool until Teschan [5] described its use in treating Korean War casualties with posttraumatic renal failure in 1955. In 1960, Scribner et al. [6] devised an exteriorized arteriovenous access for long-term hemodialysis and initiated a 39-year-old man with end-stage renal disease (ESRD) on regular dialysis treatments. By the mid-1960s, hemodialysis was becoming conventional therapy for acute renal failure (ARF) and its application was expanding to patients with ESRD.

Conceptually, peritoneal dialysis is an older technique than hemodialysis, but its practical application was delayed by numerous unsuccessful attempts to treat both acute and end-stage renal disease patients [7]. Results were poor because of technical problems with catheters, peritonitis, and dialysate fluid composition. The subsequent development of commercially prepared dialysate and the introduction of the silicon-cuffed catheter by Tenckhoff and Schechter [8] in 1968 heralded the modern era of peritoneal dialysis. Despite those improvements, peritoneal dialysis was still a somewhat unsatisfactory technique until 1976, when Popovich et al. [9] advocated “portable equilibration” from which continuous ambulatory peritoneal dialysis is modeled.

Because ARF is potentially reversible, its cause should be aggressively sought and corrected. Within the intensive care unit (ICU) setting, although elements of chronicity may or may not be present, most cases of renal failure that require dialytic support are acute in nature. Therefore, most of the subsequent discussion focuses on patients with ARF as well as those with advanced chronic renal failure in the ICU. Unless specifically noted, all subsequent discussion is applicable to both.

Dialysis fulfills two biophysical goals: the addition or removal (“clearance”) of solute and the elimination of excess fluid from the patient (“ultrafiltration”). These two processes can be performed simultaneously or at different times. The dialysis procedures in common use are hemodialysis, hemofiltration, a combination of these, and peritoneal dialysis (Table 25-1).


Hemodialysis

Hemodialysis is a diffusion-driven, size-discriminatory process for the clearance of relatively small solutes such as electrolytes and urea [less than 300 daltons (Da)]. Larger solutes are typically cleared far less readily. During hemodialysis, ultrafiltration is engendered by the generation of negative hydraulic pressure on the dialysate side of the dialyzer. The major components of the hemodialysis system are the artificial kidney, or dialyzer; the respective mechanical devices that pump the patient’s blood and the dialysate through the dialyzer; and the dialysate, a fluid of specified chemical composition used for solute clearance. During “conventional” intermittent hemodialysis, the patient’s blood and dialysate are pumped continuously through the dialyzer in opposite (countercurrent) directions at flow rates averaging 300 and 500 mL per minute, respectively. The dialysate passes through the dialyzer only once (single-pass system) and is discarded after interacting with the blood across the semipermeable membrane of the dialyzer. The efficiency of hemodialysis can be augmented by the use of dialyzers that are more porous to water and solutes. Dialyzers with enhanced performance characteristics are described as high-efficiency or high-flux, depending on their ultrafiltration capacity and their ability to remove larger molecular weight solutes such as β2-microglobulin. High-efficiency hemodialysis uses a high-porosity dialyzer with an ultrafiltration coefficient between 10 and 20 mL per mm Hg per hour. High-flux hemodialysis uses an even more porous dialyzer with an ultrafiltration coefficient greater than 20 mL per mm Hg per hour and higher clearances of solutes larger than 300 Da.


TABLE 25-1. Dialysis Modalities
Technique Dialyzer Physical Principle
Hemodialysis
   Conventional Hemodialyzer Concurrent diffusive clearance and UF
   Sequential UF/clearance Hemodialyzer UF followed by diffusive clearance
   UF Hemodialyzer UF alone
Hemofiltration
   SCUF Hemofilter Arteriovenous UF without a blood pump
   CAVH Hemofilter Arteriovenous convective transport without a blood pump
   CAVHD Hemofilter Arteriovenous hemodialysis without a blood pump
   CAVHDF Hemofilter Arteriovenous hemofiltration and hemodialysis without a blood pump
   CVVH Hemofilter Venovenous convective transport with a blood pump
   CVVHD Hemofilter Venovenous hemodialysis with a blood pump
   CVVHDF Hemofilter Venovenous hemofiltration and hemodialysis with a blood pump
Peritoneal dialysis
   Intermittent None Exchanges performed for 10–12 h every 2–3 d
   CAPD None Manual exchanges performed daily during waking hours
   CCPD None Automated cycling device performs exchanges nightly
CAPD, continuous ambulatory peritoneal dialysis; CAVH, continuous arteriovenous hemofiltration;
CAVHD, continuous arteriovenous hemodialysis; CAVHDF, continuous arteriovenous hemodiafiltration;
CCPD, continuous cycling peritoneal dialysis; CVVH, continuous venovenous hemofiltration; CVVHD, continuous venovenous hemodialysis; CVVHDF, continuous venovenous hemodiafiltration; SCUF, slow continuous ultrafiltration; UF, ultrafiltration.

Variables of the hemodialysis procedure that may be manipulated by the dialysis care team are the type of dialyzer (determines the solute clearance and ultrafiltration capacity of the dialysis treatment), the dialysate composition (influences solute clearance and loading), the blood and dialysate flow (influences solute clearance), the hydraulic pressure that drives ultrafiltration, and the duration of dialysis.

These dialytic parameters should be prescribed so as to afford patients the optimal “dose” of dialysis [10]. Dialysis dose can be determined either by measuring the fractional reduction of blood urea nitrogen with each dialysis or by measuring the total amount of urea removed in each dialysis. Fractional reduction of urea can be expressed as the volume-adjusted fractional clearance of urea, Kt/Vurea, where K is the dialyzer’s urea clearance, t is the time on dialysis, and V is the volume of distribution of urea. It can also be expressed as the percentage reduction in urea during a single hemodialysis treatment (urea reduction ratio, or URR) [11], which is defined mathematically as

URR = [1 – (postdialysis blood urea nitrogen concentration ÷ predialysis blood urea nitrogen concentration)] × 100
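The URR and a crude single-pool Kt/V can be computed directly from paired BUN measurements. The following sketch uses hypothetical illustrative values and the common simplification of neglecting ultrafiltration and intradialytic urea generation:

```python
import math

def urr(pre_bun, post_bun):
    """Urea reduction ratio (%), as defined above."""
    return (1 - post_bun / pre_bun) * 100

def kt_v(pre_bun, post_bun):
    """Crude single-pool Kt/V, ignoring ultrafiltration and
    urea generation (a common simplification)."""
    return -math.log(post_bun / pre_bun)

# Hypothetical treatment: predialysis BUN 100 mg/dL, postdialysis 35 mg/dL.
print(urr(100, 35))              # → 65.0 (%)
print(round(kt_v(100, 35), 2))   # → 1.05
```

As the text notes, such single-pool estimates tend to overestimate urea removal in ARF, where the urea distribution volume may exceed total body water.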

In ARF patients, fractional urea reduction (Kt/V) has been found to overestimate the actual amount of urea removed [12]. In a steady state, the urea distribution volume approximates total body water because urea is freely permeable across cell membranes and equilibrates rapidly among different body spaces. The relationship of urea kinetics to dialysis dose is more complex in ARF patients, however, possibly because of their unstable hemodynamics. The volume of urea distribution has been found to exceed total body water in patients with ARF by approximately 20% [13, 14]. In addition, ARF is usually characterized by hypercatabolism and negative nitrogen balance. Taking these limitations into account, a method for calculating an equilibrated Kt/V has been proposed for dialysis dose measurement in ARF [15].


Hemofiltration and Hemodiafiltration

In contrast to the diffusion-driven solute clearance of hemodialysis, hemofiltration depends principally on convective transport. The patient’s blood is conveyed through an extremely high-porosity dialyzer (hemofilter), resulting in the formation of a protein-free hemofiltrate that resembles plasma water in composition. In the case of arteriovenous hemofiltration, the major determinant of perfusion of the hemofilter is the patient’s mean arterial pressure, whereas the hydrostatic pressure gradient between the blood and hemofiltrate compartments provides the driving force for the formation of the filtrate. For effective hemofiltration, the mean arterial pressure should be maintained at more than 70 mm Hg.

If a hemofiltrate is formed but is not replaced by a replacement fluid, the process is called slow continuous ultrafiltration, during which little solute clearance occurs. An alternative technique that enhances solute clearance is to replace the lost volume continuously with a physiologic solution lacking the solute to be removed. If an arteriovenous blood path is used, the process is called continuous arteriovenous hemofiltration (CAVH). A venovenous circuit may be used with blood flow driven by a blood pump, a procedure referred to as continuous venovenous hemofiltration (CVVH). Optimal solute clearance is achieved by combining diffusive clearance and convective transport. This is accomplished by circulating dialysate through the hemofilter, with or without high ultrafiltration rates (continuous arteriovenous hemodiafiltration [CAVHDF] and continuous arteriovenous hemodialysis [CAVHD], respectively). Alternatively, these procedures may be performed using venovenous access with a blood pump to generate adequate flow rates (continuous venovenous hemodiafiltration [CVVHDF] and continuous venovenous hemodialysis [CVVHD], respectively). Hemodiafiltration combines a high ultrafiltration rate (requiring replacement fluid) with dialysate flow for clearance. Collectively, these modalities are termed continuous renal replacement therapies (CRRT). With the availability of reliable blood pumps and precise ultrafiltration controllers, and to avoid the morbidity of arterial cannulation, venovenous therapies have largely displaced the arteriovenous modes of CRRT.


Peritoneal Dialysis

Solute clearance in peritoneal dialysis is gradient-driven, whereas ultrafiltration during peritoneal dialysis depends on the osmolality of the dialysis solution. In stable ESRD patients, maintenance peritoneal dialysis is performed daily, either by manual instillation and drainage of the dialysate during waking hours (continuous ambulatory peritoneal dialysis) or while sleeping using an automated dialysate cycling device (continuous cycling peritoneal dialysis). The dialysate is allowed to dwell in the peritoneal cavity for variable intervals depending on the clearance and ultrafiltration goals (described as an “exchange”). The dialysate volume is usually 1.5 to 2.0 L per instillation, but can be as great as 3 L for the overnight dwell while the patient is recumbent.

In the acute setting, after the placement of a peritoneal dialysis catheter, peritoneal dialysis is easily initiated and discontinued with limited personnel and equipment. As in ambulatory ESRD patients, peritoneal dialysis for ARF may be performed manually or using an automated cycler. If it is performed acutely using an uncuffed dialysis catheter, rather than the conventional Dacron-cuffed catheters used for ESRD patients, 60 to 80 L of dialysate are exchanged during 48 to 72 hours, and the catheter is removed. The risk of peritonitis increases significantly thereafter without a cuffed catheter. A soft Dacron-cuff catheter is preferred if extended periods of peritoneal dialysis are expected.

A major disadvantage of peritoneal dialysis is its relative inefficiency for solute clearance, which may be problematic for patients in the ICU, who are often hypercatabolic and require high clearance of nitrogenous wastes. The advantages of peritoneal dialysis are that it obviates the use of anticoagulation, uses a biologic membrane (the peritoneum) for dialysis, and demands much less nursing time if automated cycling is used. Careful attention has to be paid to the patient’s nitrogen balance because substantial losses of protein and amino acid may occur through the peritoneum [16].


Indications for Dialysis in Renal Failure

Whereas in chronic renal failure the therapeutic objective is for dialysis to substitute for absent renal function indefinitely, in patients with ARF, the goal of dialytic therapy is to support the patient while awaiting the recovery of adequate renal function to sustain life. In patients with ARF who have had insufficient time to establish compensatory or adaptive physiologic alterations, it is mandatory that dialysis be initiated promptly. Absolute indications for the initiation of dialysis are uremic serositis, uremic encephalopathy, hyperkalemia resistant to conservative therapy, hypervolemia unresponsive to high doses of diuretics, and acidosis that is not adequately corrected with alkali. In addition, there are selected conditions that typically are not life-threatening and that can be managed by more conservative means. These conditions are relative indications for the initiation of dialysis. Examples include azotemia in the absence of uremia, hypercalcemia, hyperuricemia, hypermagnesemia, and bleeding exacerbated by uremia.


Absolute Dialysis Indications


Uremic Encephalopathy and Serositis

Of the complications of uremia, few respond as dramatically to dialysis as those involving the central nervous system (CNS). Tremor, asterixis, diminished cognitive function, neuromuscular irritability, seizures, somnolence, and coma are all reversible manifestations of uremia that merit the provision of dialysis [17]. Reversible cardiopulmonary complications of uremia, such as uremic pericarditis and uremic lung, respond to the initiation of an adequate dialysis regimen, but clinical resolution is more protracted than for the CNS manifestations of uremia [18, 19]. Uremic pericarditis is characterized by the presence of noninfectious inflammation of both layers of the pericardium. It is accompanied by pericardial neovascularization and the development of a serofibrinous exudative effusion. Injudicious use of systemic anticoagulation for hemodialysis or other indications may induce intrapericardial hemorrhage and cardiac tamponade, although these complications may also occur spontaneously. Development of uremic cardiac tamponade has been reported to be associated with prolonged ARF and inadequate renal replacement therapy [20]. Uremic pericarditis may also be associated with systolic dysfunction of the left ventricle and serosal inflammation with pleural hemorrhage. Uremic lung is a poorly understood late pulmonary complication of uremia demonstrating a roentgenographic pattern of atypical pulmonary edema that is not necessarily associated with elevated pulmonary capillary wedge pressures. It is also treated by dialysis.

There is a poor correlation between the blood urea nitrogen (BUN) and the development of uremic signs and symptoms. The BUN reflects not merely the degree of renal insufficiency but also the dietary protein intake, hepatocellular function, and protein catabolic rate. It is thus not surprising that uremic manifestations may arise with a BUN less than 100 mg per dL. In addition to the absolute level, the temporal rate of increase of the BUN seems to influence the development of uremia. Patients with a precipitous decline in renal function, such as those with ARF, typically manifest uremic symptoms at lesser degrees of azotemia than do patients whose renal function declines more gradually, such as those with chronic progressive kidney disease. Finally, selected patient populations, such as children, the elderly, and individuals with diabetes mellitus, appear to have a lower threshold for manifesting uremic symptoms.


Hyperkalemia

During the course of progressive renal insufficiency, the capacity to excrete potassium is compromised and the adaptive cellular uptake declines [17]. The need for emergent treatment of hyperkalemia is based on the degree of hyperkalemia, rate of rise, symptoms, and presence of electrocardiographic changes. Patients with chronic renal failure develop adaptive mechanisms to excrete potassium [18, 21]. In contrast, hyperkalemia in the setting of ARF is usually poorly tolerated. In the absence of indications for urgent treatment, conservative measures will suffice. These interventions include limiting daily intake of potassium (oral or parenteral), discontinuing incriminating medications, augmenting potassium excretion in the urine or stool (if oliguric), and implementing strategies to minimize cardiotoxicity [22, 23 and 24]. Hemodialysis can reduce serum potassium by 1 to 1.5 mmol per L per hour [25]. Peritoneal dialysis and CRRT are not as effective as intermittent hemodialysis in lowering serum potassium in acute, severe hyperkalemia. The decision to start emergency hemodialysis should not preclude treatment with nondialytic measures.
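The 1 to 1.5 mmol per L per hour reduction rate quoted above implies a rough time estimate for a given potassium excess. This back-of-envelope arithmetic is illustrative only, not a treatment protocol, and the example values are hypothetical:

```python
# Rough time estimate to lower serum potassium by hemodialysis,
# using the 1-1.5 mmol/L/h reduction rate quoted in the text.
# Illustrative arithmetic only; not a treatment protocol.
def hours_to_target(k_now, k_target, rate_low=1.0, rate_high=1.5):
    drop = k_now - k_target                  # mmol/L to remove
    return drop / rate_high, drop / rate_low  # (best case, worst case)

# Hypothetical example: serum K 7.2 mmol/L, target 5.0 mmol/L.
fast, slow = hours_to_target(7.2, 5.0)
print(f"{fast:.1f}-{slow:.1f} h")  # → 1.5-2.2 h
```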


Hypervolemia

Hypervolemia is a common complication of renal insufficiency, in both the inpatient and outpatient setting. The obligatory volume of fluids, medications, and food administered to hospitalized patients often exceeds their excretory capacity, even if nonoliguric.

Integral to the prevention and treatment of hypervolemia is the administration of diuretics. Even patients with advanced renal insufficiency (glomerular filtration rate [GFR] less than 15 mL per minute) may respond to aggressive doses of loop diuretics. The absence of an adequate diuretic response (diuresis that is inadequate to meet the volume challenge from obligatory fluids) is an absolute indication for dialysis. Fluid removal can be accomplished by hemodialysis, hemofiltration, or peritoneal dialysis (ultrafiltration). Ultrafiltration rates of more than 3 L per hour can be achieved during hemodialysis, 1 to 3 L per hour during hemofiltration, and usually less than 1 L per hour during peritoneal dialysis.


Acidosis

As renal function declines, endogenously generated organic acids and exogenously ingested acids are retained, and the capacity to generate and reclaim bicarbonate becomes increasingly compromised. In patients who are not hypercatabolic or receiving an acid load, acid generation occurs at a rate of approximately 1 mEq per kg per day, resulting in an uncorrected decline in serum bicarbonate concentration to as low as 12 mEq per L [26]. Therefore, in patients with renal failure, metabolic acidosis is the typical acid-base disturbance.

Severe acidosis in the context of ARF can result in changes in mental state leading to coma and can provoke hypotension by depressing myocardial contractility and causing vasodilatation. The correction of acidosis is thus a major concern, especially in the intensive care setting. Less severe acidosis may be corrected by administration of exogenous oral alkali therapy. Because both bicarbonate and citrate are administered as sodium salts, volume overload is a risk. Excessive correction of severe metabolic acidosis (plasma bicarbonate less than 10 mEq per L) may have adverse consequences, including paradoxical acidification of the cerebrospinal fluid and increased tissue lactic acid production. An initial partial correction to 15 to 20 mEq per L is quite appropriate.

With these concerns in mind, if progressively larger doses of alkali therapy are required to control acidosis in renal failure, dialysis is indicated. Hemodialysis is also useful in treating acidosis accompanying salicylate, methanol, and ethylene glycol poisonings [27, 28, 29 and 30], offering the added benefit of clearing the parent compounds and their toxic metabolites (methanol [formic acid] and ethylene glycol [glycolic acid, oxalate]).


Relative Dialysis Indications

Non-life-threatening indications for dialysis can typically be managed with more conservative interventions. Multiple relatively mild manifestations of ARF may provoke consideration of dialysis when present simultaneously. For example, a hypercatabolic trauma patient with ARF manifested by a gradually increasing serum potassium concentration, declining serum bicarbonate concentration, falling urine output, and a mildly diminished sensorium does not fulfill any of the absolute criteria for the initiation of dialysis. However, in such a case, the need for dialysis is almost inevitable, and the patient’s care is not improved by withholding dialysis until a life-threatening complication of renal failure develops.

Other subcritical chemical abnormalities may be approached with hemodialysis. Hypercalcemia and hyperuricemia associated with ARF are common in patients with malignancies and tumor lysis syndrome, respectively [31, 32]. Hypermagnesemia in renal failure is usually the result of the injudicious use of magnesium containing cathartics or antacids [33]. These metabolic disorders are readily corrected by deletion of the excessive electrolyte from the dialysate. For example, hypercalcemia that is unresponsive to conventional conservative interventions may be corrected by hemodialysis with a reduced calcium dialysate (less than 2.5 mEq per L).

Bleeding from the skin and gastrointestinal tract is a common manifestation of platelet dysfunction in renal insufficiency. The hemostatic defect of renal insufficiency, an impairment of platelet aggregation and adherence, is reflected in the typical threefold prolongation of the bleeding time [33, 34, 35 and 36]. Although either peritoneal dialysis or hemodialysis can correct the platelet defect [37], more conservative therapies are available [38, 39, 40, 41 and 42].


Prophylactic Dialysis and Residual Renal Function

The presence of definitive indications such as severe hyperkalemia and overt uremia prompts the decision to initiate dialysis without further delay. However, in the absence of such emergent indications, the timing of the initiation of dialysis is more a matter of judgment. Prevailing opinion dictates that dialysis should be initiated prophylactically in ARF when life-threatening events appear imminent, understanding that it may be difficult to predict imminent events in so volatile a context.

Several small trials of prophylactic renal replacement therapy have yielded conflicting results. One study examined prophylactic hemodialysis in 44 coronary artery bypass graft patients with chronic kidney disease (serum creatinine more than 2.5 mg per dL) [43]. Patients who were prophylactically dialyzed before surgery had a lower rate of acute renal failure, lower in-hospital mortality, and a shorter hospital stay than those who were not. In a second trial, of chronic renal failure patients (creatinine clearance less than 50 mL per minute) undergoing elective cardiac catheterization, periprocedural CVVH decreased the risk of developing ARF and improved in-hospital and long-term outcomes [44]. In contrast, three trials of prophylactic hemodialysis given to chronic renal failure patients during [45] or immediately after [46, 47] exposure to radiocontrast material failed to show any benefit compared with a nonhemodialysis group treated with intravenous fluid.

An additional consideration in the timing of dialysis is its influence on residual renal function and the length of recovery from ARF. Numerous observations suggest an important and possibly deleterious effect of dialysis on residual renal function. Several investigators have observed that in posttraumatic ARF treated with hemodialysis, there were pathologically demonstrable fresh, focal areas of tubular necrosis 3 to 4 weeks after the initial renal injury [48, 49]. The only culpable hemodynamic insults experienced during this period were short-lived episodes of intradialytic hypotension. Furthermore, the initiation of dialysis is frequently associated with an acute decline in urine output [50]. It has been observed that in patients with advanced renal failure, the institution of hemodialysis results in progressive deterioration in GFR over several months [51]. The fact that peritoneal dialysis does not provoke a similar relentless decline in residual renal function [52] suggests that this may be a consequence of hemodialysis-induced hypotension with abnormal vascular compensation, combined with complement-mediated injury from immunogenic dialyzer membrane materials [53, 54]. Preserving residual renal function must be a high priority in managing patients with severe renal insufficiency [55]. Even modest preservation of GFR may ease fluid management in patients with ARF. A residual GFR of approximately 15 mL per minute is equivalent to 5 hours of hemodialysis with a dialyzer having a urea clearance of 160 mL per minute. The clearance of middle-molecular-weight solutes is especially enhanced with endogenous function compared with hemodialysis.
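The equivalence between residual renal function and intermittent hemodialysis stated above can be checked arithmetically by integrating each urea clearance over a week. The thrice-weekly session schedule is an assumption added for illustration (it is the typical maintenance pattern, not stated in the text):

```python
# Arithmetic check of the residual-function equivalence stated above.
# Assumption (for illustration): hemodialysis is given thrice weekly.
residual_gfr = 15  # mL/min of continuous endogenous urea clearance
weekly_residual = residual_gfr * 60 * 24 * 7 / 1000   # L of plasma cleared/wk

dialyzer_k = 160   # mL/min urea clearance during dialysis
session_h = 5      # hours per session
weekly_hd = dialyzer_k * 60 * session_h * 3 / 1000    # L cleared/wk, 3 sessions

print(round(weekly_residual, 1), round(weekly_hd, 1))  # → 151.2 144.0
```

The two weekly urea clearances are closely matched, supporting the quoted equivalence for small solutes; for middle molecules, as the text notes, continuous endogenous clearance does even better.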

Relatively few studies have examined the timing of dialysis initiation, and most are reports from the early days of dialytic therapy. In 1960, Teschan et al. [56] described their success with the initiation of hemodialysis before the onset of frank uremia. Using this protocol, dialysis was continued on a daily basis to maintain the BUN at less than 150 mg per dL until recovery occurred. Compared with historical control subjects who were dialyzed for severe uremia alone, patient survival was greatly improved. In later years, two relevant randomized prospective studies were performed. In one study, 18 posttraumatic ARF patients were randomized to initiate hemodialysis at a low BUN threshold (predialysis BUN and creatinine less than 70 mg per dL and 5 mg per dL, respectively) or a high BUN threshold (predialysis BUN approximately 150 mg per dL or overt uremia as an indication for dialysis). Subsequent hemodialysis was performed only when these thresholds were reached. The study reported 64% survival in the low BUN group compared with 20% in the high BUN group [57]. Sepsis and hemorrhage were frequent complications in the high BUN group. As intriguing as these observations are, none of these studies was sufficiently well designed to allow definitive conclusions.

In a prospective randomized trial of 106 ARF patients, Bouman et al. [58] compared early (within 12 hours of meeting the criteria for ARF) and late (meeting traditional criteria) initiation of CVVH. The mean serum urea nitrogen at the start of renal replacement therapy was 47 mg per dL in the early-initiation group compared with 105 mg per dL in the late-initiation group. No significant difference was found between the groups in patient survival or recovery of renal function. Overall patient survival in the study was 72.6%, which is significantly higher than in most ARF studies and might indicate enrollment of less-sick patients. Recognition of early ARF and early initiation of CVVHDF in a group of cardiac surgery patients yielded more positive results [59]. Sixty-one cardiac surgery patients were treated with CVVHDF for either early ARF (a decrease in urine output to less than 100 mL within 8 hours, with no response to 50 mg of furosemide) or late ARF (serum creatinine above 5 mg per dL or serum potassium above 5.5 mEq per L). Patients who received early CVVHDF had significantly lower ICU and overall hospital mortality than patients treated with late CVVHDF. A positive clinical outcome was also seen in acute liver failure patients with ARF who received early renal replacement therapy (BUN 117.3 ± 32.8 mg per dL) compared with late (BUN 44.64 ± 19.5 mg per dL) [59].

In summary, although dialytic therapy is an essential tool in the management of ARF, its use or abuse may have detrimental effects on patient outcome. The physician must avoid the temptation to intervene in the course of ARF based purely on the patient’s reaching a threshold blood urea or creatinine level. Proper consideration of the patient’s overall condition, expected course of renal failure, fluid and nutritional requirements, and the presence of comorbid conditions is a wiser approach.


Practical Application of Engineering Principles


Hemodialysis and Hemofiltration


Clearance

The most fundamental biophysical principle of dialysis is solute movement across a semipermeable membrane by diffusion. Diffusional movement of a solute from a region of higher concentration to that of a lower concentration is governed by Fick’s law:

J = – DA × (dc/dx)

where J is the solute flux, D its diffusivity, A is the area available for diffusion, and dc is the change in the concentration of the solute over the intercompartmental distance, dx. For a particular model of dialyzers, dx and A are constant, and for an individual solute, D is constant. It is clear that solute flux is influenced by the surface area and the physical structure of the membrane, the variables that define the clearance characteristic of a given dialyzer.

In clinical practice, the clearance of a solute not only depends on the dialyzer but also the blood and dialysate flow rates, expressed by the following equation:

K = [QBi × (CBi − CB0)] ÷ CBi


In this expression, K is the diffusive clearance of the solute from the blood, QBi is the rate at which blood containing the solute flows into the dialyzer, CBi is the concentration of the solute in the blood entering the dialyzer (arterial end), and CB0 is the concentration of the remaining solute at the egress side of the blood compartment (venous end). This mathematical description is accurate for the situation in which the solute is not initially present in the dialysate (CDi = 0). Usually, the dialysate passes through the dialyzer only once (single-pass dialysis system), and there is minimal convective transport of the solute during its clearance. Thus, the clearance of a solute during dialysis may be functionally defined as the volumetric removal of the solute from the patient’s blood. Within the practical application of this formulation, the clearance of a solute can be modified by altering the patient’s blood flow into the dialyzer (QBi).

A similar relationship exists for the dialysate and the diffusive clearance of a solute from the blood:

K = [QDi × (CD0 − CDi)] ÷ CBi

where QDi is the flow rate of the dialysate into the dialyzer, and CD0 and CDi are the concentrations of solute at the dialysate outlet and inlet ends of the hemodialyzer, respectively. Thus, an additional means of augmenting the diffusive clearance of a solute from the blood into the dialysate, or vice versa, is to increase the dialysate flow rate. Increases in blood, dialysate flow, or both do not improve the clearance linearly. As blood and dialysate flow rates are increased, resistance and turbulence within the dialyzer occur, resulting in a decline in clearance per unit flow of blood or dialysate. For conventional dialyzers, this limitation occurs above 300 and 500 mL per minute, respectively, for blood and dialysate flows. Limitations for high-flux dialyzers are observed at above 400 and 800 mL per minute for blood and dialysate flow rates.
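In a single-pass system with no solute in the inlet dialysate, mass balance requires the blood-side and dialysate-side clearance expressions above to agree. A minimal sketch, using hypothetical flow rates and concentrations:

```python
# Diffusive clearance computed from both sides of the dialyzer.
# All flows in mL/min, concentrations in mg/dL; values hypothetical.
def k_blood(qb_in, c_b_in, c_b_out):
    """Blood-side clearance: K = QBi * (CBi - CB0) / CBi."""
    return qb_in * (c_b_in - c_b_out) / c_b_in

def k_dialysate(qd_in, c_d_out, c_d_in, c_b_in):
    """Dialysate-side clearance: K = QDi * (CD0 - CDi) / CBi."""
    return qd_in * (c_d_out - c_d_in) / c_b_in

# Example: QB 300 mL/min, inlet BUN 100 mg/dL, outlet 30 mg/dL.
kb = k_blood(300, 100, 30)          # 210 mL/min
# Mass balance: solute leaving blood enters dialysate (QD 500 mL/min,
# CDi = 0): 300*(100-30) = 500*CD0, so CD0 = 42 mg/dL.
kd = k_dialysate(500, 42, 0, 100)   # 210 mL/min
print(kb, kd)  # → 210.0 210.0
```

The two results match, which is the defining check of a single-pass system with negligible ultrafiltration.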

The clearance characteristics of dialyzers, as provided by their manufacturers, are determined in vitro; the influence of plasma proteins on solute clearance is not accounted for, and the actual in vivo diffusivity is usually lower [61]. The clearance of a solute by a given dialyzer is a unique property of that solute. Molecules larger than 300 Da, such as vitamin B12 or β2-microglobulin, typically have lower K values compared with smaller solutes, such as urea and potassium. The clearance of these larger solutes from blood depends to a greater extent on ultrafiltration and the passive movement of solute (convective transport). The summary interaction between the diffusive clearance and convective transport of a solute is expressed as

J = (KD × [1 – Qf/QBi] + Qf) × CBi = K × CBi

where Qf is the ultrafiltration rate, KD is the diffusive clearance, and K is the sum of the convective and diffusive clearances. If the diffusive clearance (KD) is large, as is true for urea, the influence of the ultrafiltration rate is small. As the diffusive clearance of a solute declines with increasing molecular weight (the value of KD approaches Qf), the proportionate contribution of convective transport to solute movement increases greatly. The practical application of convective transport of solutes alone is observed during pure hemofiltration (CAVH, CVVH) because no dialysate is passed through the hemofilter (preventing diffusive clearance).
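The interplay of diffusive and convective clearance can be made concrete. In the sketch below, the clearance values are illustrative assumptions chosen to contrast a small solute (large diffusive clearance) with a larger one (diffusive clearance near Qf):

```python
def total_clearance(k_diffusive, q_f, q_b_in):
    """Total clearance K = K_D * (1 - Qf/Q_Bi) + Qf (all in mL/min)."""
    return k_diffusive * (1 - q_f / q_b_in) + q_f

# Urea-like solute: large K_D, so ultrafiltration adds little.
k_urea = total_clearance(210, 10, 300)  # ~213 mL/min
# Larger solute with K_D approaching Qf: convection dominates.
k_big = total_clearance(15, 10, 300)    # ~24.5 mL/min
```

For the larger solute, convection supplies roughly 40% of total removal, versus under 5% for the urea-like case, matching the text's point that convective transport matters more as molecular weight rises.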


Ultrafiltration

An equally important operational variable in the dialysis procedure is the ultrafiltration coefficient (Kf), defined by

Kf = Qf ÷ (PB – PD)

where Qf is the ultrafiltration rate, and PB and PD are the mean pressures in the blood and dialysate compartments, respectively. Analogous to the information derived for the clearance of a particular solute for a specific dialyzer, each dialyzer also has an ultrafiltration coefficient. Because these values are typically derived in vitro, similar limitations exist for their application to the in vivo situation. It is not unusual for an individual dialyzer’s in vitro and in vivo ultrafiltration coefficients to vary by 10% to 20% in either direction.

The ultrafiltration coefficient for a dialyzer operationally defines the volume of ultrafiltrate formed per unit time in response to a given pressure differential (PB minus PD) across the dialysis membrane, and is expressed in milliliters per millimeters of mercury per hour. It is therefore possible to use the ultrafiltration coefficient to calculate the quantity of pressure that must be exerted across the dialysis membrane (transmembrane pressure [TMP]) to achieve a given volume of ultrafiltration during a dialysis session. To make this calculation, it is first necessary to quantitate the pressure that is exerted across the dialysis membrane from the blood to the dialysate compartment. During ultrafiltration, the serum oncotic pressure increases in the dialyzer’s blood compartment from the arterial to the venous end, but this is usually a relatively negligible biophysical factor. The net pressure across the dialysis membrane that arises from the flow of blood and dialysate is defined by the following equation:

Pnet = [(PBi + PB0) ÷ 2] – [(PDi + PD0) ÷ 2]

where PBi and PB0 are the pressures measured at the inflow and outflow ports of the blood compartment, and PDi and PD0 are those of the dialysate compartment. If the Pnet is too low to provide for adequate ultrafiltration during a dialysis session (Pnet × ultrafiltration coefficient × dialysis time < target ultrafiltrate volume), additional pressure can be generated across the dialysis membrane by creating negative pressure in the dialysate compartment. The effective pressure, or TMP, required can be derived from TMP = desired weight loss ÷ (ultrafiltration coefficient × dialysis time).
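The TMP formula above reduces to simple arithmetic. A minimal sketch; the session parameters are assumptions for illustration:

```python
def required_tmp(target_uf_ml, kf_ml_per_mmhg_hr, hours):
    """TMP (mm Hg) = desired ultrafiltrate volume / (Kf * dialysis time)."""
    return target_uf_ml / (kf_ml_per_mmhg_hr * hours)

# Hypothetical session: remove 3,000 mL over 4 h with a dialyzer whose
# ultrafiltration coefficient is 5.0 mL/mm Hg/h.
tmp = required_tmp(3000, 5.0, 4)  # 150 mm Hg
```

If Pnet from blood and dialysate flow alone is less than this value, the balance must be supplied as negative pressure in the dialysate compartment.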

The performance of ultrafiltration during hemodialysis has been greatly simplified by the development of dialysis machines that possess volumetric control systems (“ultrafiltration controllers”). Ultrafiltration with these devices is remarkably precise, and weight loss can be effected in a linear manner per unit of time. Such exact volumetric control is a prerequisite for high-flux hemodialysis, in which high-porosity dialyzers are used.

During most hemodialysis treatments, ultrafiltration and solute clearance are performed simultaneously. However, it is possible to segregate the two procedures temporally by a modification of the hemodialysis procedure described as sequential ultrafiltration clearance [62]. This modification of the conventional hemodialysis procedure is accomplished by first ultrafiltering the patient to a desired volume and then conducting diffusive clearance without ultrafiltration. During the initial ultrafiltration phase, diffusive clearance is prevented by not pumping dialysate through the dialyzer. During the second phase, no negative pressure is instituted, and the small fluid losses secondary to Pnet are balanced by the infusion of saline. Sequential
ultrafiltration clearance has distinct hemodynamic advantages over conventional hemodialysis, making it a particularly useful technique for aggressive fluid removal within a short interval. When ultrafiltration is performed concurrently with diffusive solute clearance, intravascular volume losses may exceed the rate of translocation of fluid from the interstitium. If these losses are not counterbalanced by an appropriate increase in the peripheral vascular resistance and venous refilling, hypotension occurs [63]. With sequential ultrafiltration clearance, these hemodynamic abnormalities are attenuated, and up to 4 L per hour may be removed without causing hypotension. However, unless the total time allotted to dialysis is increased during sequential ultrafiltration clearance, solute clearance is compromised and inadequate dialysis may occur.

During CRRT, ultrafiltration is governed by physical principles different from those for hemodialysis. Because the driving force for blood flow during CAVH is the mean arterial pressure, and the resistance to flow in the hemofilter and blood lines is low, the hydraulic pressure in the blood compartment of the hemofilter is also low. Therefore, during CAVH, the principal driving force for the formation of an ultrafiltrate is the negative hydrostatic pressure within the ultrafiltrate compartment of the hemofilter. This effective negative pressure (Ph) is generated by the weight of the fluid column within the ultrafiltration collection line and is calculated as Ph = height difference (cm) between the hemofilter and collection bag × 0.74. Therefore, to increase or decrease the rate of ultrafiltrate formation during hemofiltration, the collection bag is lowered farther below, or raised toward the level of, the hemofilter. In contrast to hemodialysis, the increase in oncotic pressure at the venous end of the hemofilter is of sufficient magnitude that little ultrafiltration occurs at this end; because solute clearance during hemofiltration is convective, little clearance occurs at the venous end either. This situation is most pronounced when the amount of ultrafiltrate formed is maximal and the blood flow rate through the hemofilter is low. It is therefore undesirable for the net ultrafiltration rate to exceed 25% of the blood flow rate [64]. Excessively large amounts of ultrafiltrate can also result from a kinked or thrombosed venous return line, because the increased resistance translates into increased hydraulic pressure in the blood compartment.
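The relation Ph = height difference × 0.74 can be sketched directly; the bag height below is an assumed value, not a recommendation from the text:

```python
def suction_pressure_mmhg(height_cm):
    """Effective negative pressure (mm Hg) from the ultrafiltrate column:
    Ph = height difference (cm) between hemofilter and collection bag * 0.74.
    """
    return height_cm * 0.74

# Hypothetical setup: collection bag hung 40 cm below the hemofilter.
ph = suction_pressure_mmhg(40)  # 29.6 mm Hg of suction
```

Raising the bag toward the level of the hemofilter shrinks the height difference, and with it the ultrafiltration rate.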

The ultrafiltrate formed during hemofiltration is free of protein and has a solute composition that closely resembles that of the aqueous component of plasma. The quantity of a selected solute that is removed is determined by the volume of ultrafiltrate formed, by its concentration in the blood (and therefore in the ultrafiltrate), and by the composition of the replacement solution. For example, if hemofiltration results in the formation of 0.5 L ultrafiltrate per hour and the ultrafiltrate and the replacement solution contain 5 and 0 mEq per L of potassium, respectively, 30 mEq of potassium will be cleared in 12 hours.
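The potassium example above can be verified arithmetically; the sketch below simply encodes the balance described in the text, assuming replacement volume equals ultrafiltrate volume:

```python
def solute_removed(uf_rate_l_hr, c_ultrafiltrate, c_replacement, hours):
    """Net solute removal (mEq) = ultrafiltrate losses minus replacement
    input, assuming replacement volume matches ultrafiltrate volume."""
    return uf_rate_l_hr * hours * (c_ultrafiltrate - c_replacement)

# Worked example from the text: 0.5 L/h of ultrafiltrate containing
# 5 mEq/L potassium, potassium-free replacement fluid, over 12 hours.
k_removed = solute_removed(0.5, 5, 0, 12)  # 30 mEq
```

The same function shows why adding potassium to the replacement solution (c_replacement > 0) proportionally blunts net removal.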

As mentioned previously, hemofiltration usually necessitates replacement fluid because of the inherently high rate of associated ultrafiltration. The replacement solution can be administered immediately before the hemofilter (predilutional hemofiltration), immediately after it (postdilutional hemofiltration), simultaneously into both locations (pre- and postdilutional hemofiltration), or into the peripheral venous circulation [65]. Predilutional hemofiltration offers the advantage of diluting plasma proteins, effectively lowering the thrombogenicity of the hemofilter and increasing the ultrafiltration rate for a given hydrostatic pressure. However, this technique also reduces the concentration of solutes in the blood entering the hemofilter and therefore may compromise their clearance. Alternatively, the replacement solution can be administered partly before the hemofilter, with the balance infused immediately after it; this preserves the advantages of predilutional hemofiltration while avoiding its clearance disadvantage.


Solute Clearance and Fluid Management in Continuous Renal Replacement Therapy

Solute clearance and fluid management in CRRT are closely related. Fluid status in CRRT can be managed by one of two strategies [66]. Most commonly, CRRT is started with a fixed replacement fluid rate, and the ultrafiltration rate is adjusted according to the patient’s fluid status. When the patient requires a negative fluid balance, the resulting high ultrafiltration rate also helps to achieve adequate solute clearance; conversely, a zero or positive fluid balance requires a reduction in ultrafiltration and thereby impairs solute clearance. The alternative is to start CRRT at a fixed ultrafiltration rate chosen to provide adequate solute clearance; positive, negative, or zero fluid balance is then achieved by adjusting the replacement fluid rate.
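The arithmetic behind the second strategy can be sketched as a simple balance. This framing, and all values below, are illustrative assumptions rather than a protocol from the text:

```python
def uf_rate_for_balance(replacement_l_hr, other_inputs_l_hr, net_removal_l_hr):
    """Ultrafiltration rate (L/h) needed for a target hourly fluid balance:
    outputs must equal replacement fluid plus other inputs plus the
    desired net fluid removal."""
    return replacement_l_hr + other_inputs_l_hr + net_removal_l_hr

# Hypothetical hour: fixed replacement of 2.0 L/h, 0.2 L/h of other IV
# inputs, and a goal of 0.1 L/h net negative balance.
q_uf = uf_rate_for_balance(2.0, 0.2, 0.1)  # 2.3 L/h
```

Setting net_removal_l_hr to zero or a negative value yields the ultrafiltration rate for zero or positive balance, respectively, while the replacement rate (and hence convective clearance) stays fixed.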


Peritoneal Dialysis


Clearance

The simplest kinetic model of solute transport in peritoneal dialysis is that of two compartments separated by a membrane, with the two pools representing blood in the mesenteric vasculature and dialysate in the peritoneal cavity. Solutes passing from the blood into the dialysate compartment encounter three structures of resistance: the capillary endothelium, the interstitial tissues, and the mesothelial cell layer of the peritoneum. Solute clearance occurs both by diffusion and convective transport in peritoneal dialysis. As within the context of hemodialysis, diffusion (according to Fick’s law) can be expressed as J = – DA × dc/dx, where J is the solute flux, D is the diffusion coefficient of that solute, A is the functional surface area of the membrane, dx is the diffusion distance, and dc is the concentration gradient. DA/dx is known as the mass transfer area coefficient (MTAC), and the concentration gradient is the difference between the plasma (CP) and dialysate (CD) concentrations. Hence,

J = MTAC × (CP – CD)

As is evident from these principles of solute transfer, the diffusive clearance can clearly be influenced by the concentration gradient; the diffusion coefficient, which is solute size-dependent; and the characteristics of the peritoneal membrane. As the concentrations approximate each other (CP – CD nears zero), diffusive clearance declines. Thus, to increase clearance, the concentration gradient must be maintained with frequent change of dialysate (increase the number of exchanges with shorter dwell times). The benefit of increasing dialysate exchange rate (above 3.5 L per hour) is limited by the fact that this decreases the effective time available for contact between the dialysate and the peritoneum. A successful alternative to aid clearance is to increase the volume of dialysate per exchange from 2.0 to 3.0 L. These volumes are usually well tolerated by adults, especially when recumbent. The larger volume of
dialysate allows more contact with the peritoneum, facilitating clearance. Increments in the dialysate volume have been shown to augment mass transfer area coefficient [67, 68].
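The decline of diffusive flux as the dialysate equilibrates can be illustrated with the MTAC relation. The MTAC and concentration values below are assumptions for illustration, in arbitrary consistent units:

```python
def peritoneal_flux(mtac, c_plasma, c_dialysate):
    """Diffusive solute flux J = MTAC * (C_P - C_D)."""
    return mtac * (c_plasma - c_dialysate)

# Assumed MTAC of 17 and plasma urea of 100: flux is maximal with fresh
# dialysate and falls as dialysate urea approaches plasma urea.
fresh = peritoneal_flux(17, 100, 0)   # start of a dwell
late = peritoneal_flux(17, 100, 90)   # near equilibration: 10x lower
```

This is why shorter dwell times with more frequent exchanges preserve the concentration gradient and hence the clearance.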

Because ultrafiltration and clearance occur concurrently during peritoneal dialysis, the use of hyperosmolal dialysates augments the clearance of solutes. In this situation, convective transport is added to diffusive clearance. The use of the most osmotically active dialysate (4.25% dextrose) can increase urea clearance by approximately 50% [69]. If the peritoneal vascular surface area is adequate, urea clearance and ultrafiltration are not limited by peritoneal blood flow [70, 71, 72 and 73]. For example, in patients in shock, the clearance of urea is depressed only by approximately 30%; intraperitoneal vasodilators augment small solute clearance by only 20%.


Ultrafiltration

In peritoneal dialysis, the net pressure generated favors movement of water from the dialysate into the capillaries. Therefore, an osmotically active solute (typically dextrose) is used to drive ultrafiltration. As is the situation for solute clearance, ultrafiltration is maximal at the beginning of an exchange and declines as the osmolality gradient dissipates during equilibration. Hyperglycemia, by decreasing the osmolal gradient, also impairs ultrafiltration. Even with optimal exchange frequency and volume, ultrafiltration rates during peritoneal dialysis are usually no more than approximately 700 mL per hour. The ultrafiltrate formed is hypoosmolal relative to serum, so hypernatremia is a common complication of excessive ultrafiltration.


Components of The Dialysis Process


Hemodialysis and Hemofiltration


Dialyzers and Hemofilters

Virtually all commercially available hemodialyzers are configured as large cylinders packed with hollow fibers through which the blood flows (hollow-fiber dialyzer). The dialysate flows through the dialyzer, bathing these fibers, usually in a countercurrent direction. The membrane for these dialyzers is composed of a variety of modified biologic or synthetic materials such as regenerated cellulose, cuprophan, hemophan, cellulose acetate, polysulfone, polymethylmethacrylate, and polyacrylonitrile [74]. The surface area available for solute transport and the filling volume of the blood and dialysate compartments vary significantly among different dialyzers and are a function of the membrane material. These materials vary in their characteristics for solute transport and ultrafiltration, the degree to which they are tolerated by the patient’s immune system (“biocompatibility”), costs, and capacity to be reused. The choice of dialysis membrane materials has been suggested to influence the outcome of patients with ARF, but these findings are inconsistent [75, 76].

The impetus to develop more efficient dialyzers stems from the desire to decrease hemodialysis treatment time through faster solute clearance. A secondary goal is to improve clearance of larger solutes that may be toxic, such as β2-microglobulin. A putative disadvantage of these efficient larger-pore dialyzers is that they may more readily permit the transmembrane back-flux of bacterial-derived lipopolysaccharides from the dialysate into the dialyzer blood compartment (backfiltration) [77]. Patient exposure to these pyrogens produces an acute nonbacteremic febrile illness described as a pyrogen reaction. The use of bicarbonate-buffered dialysates, which permit the growth of Gram-negative bacteria, and of ultrafiltration controllers that limit the rate of ultrafiltration may also contribute to the occurrence of pyrogen reactions.

Use of a polyacrylonitrile membrane (specifically, AN-69; Hospal Corp, Lyon, France) in patients receiving angiotensin-converting enzyme inhibitors can result in an anaphylactoid reaction [78]. Increased production of bradykinin from contact of blood with the membrane, combined with impaired degradation of bradykinin by the angiotensin-converting enzyme inhibitors, appears to be the mechanism of the hypotension.

The characteristics required of a dialyzer or hemofilter used in CRRT must be considered from a different perspective. Because the driving force for hemofiltration is the mean arterial pressure (CAVH) or a low-speed blood pump (CVVH), and clearance is disproportionately dependent on convection, a low-resistance, high-Kuf hemofilter is required. Hemofilters are usually composed of polysulfone, polymethylmethacrylate, or polyacrylonitrile because cuprophan does not have sufficient hydraulic permeability at TMPs of 30 to 70 mm Hg. Hemofilters are available in two geometric configurations: hollow-fiber or parallel-plate (blood and dialysate separated by a flat, semipermeable membrane).

In continuous hemodialysis techniques (CAVHD, CVVHD), solute transport is limited by the dialysate flow rate, unlike conventional intermittent hemodialysis. The blood flow rate is usually 100 to 150 mL per minute, and dialysate flow rate is generally 16 to 30 mL per minute. The rapidity of solute equilibration across the dialysis membrane, which is a function of the hemofilter, determines the type of hemofilter that can be used.


Dialysates for Hemodialysis

The composition of the dialysate is the other key variable of the dialysis process that determines the outcome of this procedure (Table 25-2). Although sodium and potassium are typically the only components of the dialysate that are adjusted to meet the demands of specific clinical situations, the other constituents are equally critical. The dialysate is stored as a liquid or powdered concentrate that is diluted in a fixed ratio to yield the final solute concentration.








TABLE 25-2. Dialysate Formulation for Peritoneal Dialysis and Conventional Hemodialysis

Solute                      Range (Usual Concentration)
Peritoneal dialysis
   Na+                      132 mEq/L
   K+                       0
   Cl−                      96 mEq/L
   Lactate                  35 mEq/L
   Ca2+                     2.5 or 3.5 mEq/L
   Mg2+                     0.5 or 1.5 mEq/L
   Glucose                  1.5, 2.5, or 4.25 g/dL
Conventional hemodialysis
   Na+                      138–145 mEq/L (140)
   K+                       0–4 mEq/L (2)
   Cl−                      100–110 mEq/L (106)
   HCO3−                    35–45 mEq/L (38)
   Ca2+                     1.0–3.5 mEq/L (2.5)
   Mg2+                     1.5 mEq/L (1.5)
   Dextrose                 1.0–2.5 g/L (2)



Glucose

Before hydraulic-driven ultrafiltration became available, the dialysate glucose concentration was maintained at above 1.8 g per L to generate an osmotic gradient between the blood and the dialysate [79]. Although this was effective for inducing ultrafiltration, some patients developed morbidity from hyperosmolality. Currently, dialysates are glucose-free, normoglycemic (0.00% to 0.25%), or modestly hyperglycemic (more than 0.25%). Hemodialysis with a glucose-free dialysate results in a net glucose loss of approximately 30 g and stimulates ketogenesis and gluconeogenesis [80]. Such alterations in intermediary metabolism may be particularly deleterious in chronically or acutely ill hemodialysis patients who are malnourished or on a medication such as propranolol that may induce hypoglycemia [81, 82]. These effects are ameliorated by the use of a normoglycemic dialysate. Additional metabolic consequences occurring from the use of a glucose-free dialysate include an accelerated loss of free amino acids into the dialysate [83], a decline in serum amino acids [84], and enhanced potassium clearance stemming from relative hypoinsulinemia [80]. Therefore, the dialysate glucose concentration should be maintained at normoglycemic concentrations.


Sodium

Historically, the dialysate sodium concentration was maintained on the low side (less than 135 mEq per L) to prevent interdialytic hypertension, exaggerated thirst, and excessive weight gains. However, hyponatremic dialysates increase diffusive sodium loss, with two possible adverse consequences [85]. First, loss of intravascular sodium causes an osmotic shift of fluid from the extracellular to the intracellular compartment, leading to intracellular overhydration and the CNS symptoms of the dialysis disequilibrium syndrome. Second, the loss of intravascular volume produces intradialytic hypotension and its associated symptoms of cramps, headache, nausea, and vomiting. Use of a hypernatremic dialysate prevents net sodium loss and the adverse effects of a low dialysate sodium, and it also enhances the ability to ultrafilter these patients. However, a hypernatremic dialysate carries the risks of inadequate sodium removal, polydipsia, and refractory hypertension. Hemodynamic alterations are minimized when the dialysate and serum sodium concentrations are equal. Thus, there has been an appropriate increase in the dialysate sodium to 140 to 145 mEq per L.

The newer dialysate delivery systems permit real-time modification of dialysate sodium concentrations during hemodialysis by the use of variable-dilution proportioning systems. The technique of sodium “profiling” to fit a patient’s hemodynamic needs has been espoused as a means of accomplishing optimal blood pressure support without increased thirst at the completion of the treatment. The modulation of dialysate sodium concentration can be executed in several patterns [86]. Sodium profiling may reduce the frequency of hypotension during ultrafiltration without decreasing the dialysis time committed to diffusive clearance, as is the case with sequential ultrafiltration clearance [87]. However, it is unclear if this technique offers any advantage over a fixed dialysate sodium of 140 to 145 mEq per L [88, 89 and 90]. Furthermore, interdialytic weight gains appear unaffected by sodium modeling. Combining dialysate sodium concentration with ultrafiltration profiling reduces intradialytic symptoms compared with standard dialysis with constant dialysate sodium and ultrafiltration [91].


Potassium

Unlike urea, which usually behaves as a solute distributed in a single pool, potassium has a variable volume of distribution; only 1% to 2% of the total body store of 3,000 to 3,500 mEq of potassium is present in the extracellular space. The flux of potassium from the intracellular compartment to the extracellular space occurs at a rate different from its subsequent movement across the dialysis membrane into the dialysate. Therefore, the efficacy of potassium removal in hemodialysis is highly variable, difficult to predict, and influenced by dialysis-specific and patient-specific factors [92].

During hemodialysis, approximately 70% of the potassium removed is derived from the intracellular compartment. Because 50 to 80 mEq of potassium are removed in a single dialysis session and only 15 to 20 mEq of potassium are present in the plasma, life-threatening hypokalemia would be the consequence of hemodialysis if this were not the case [92]. However, the volume of distribution of potassium is not constant; the greater the total-body potassium, the lower its volume of distribution [93]. As a result, the fractional decline in plasma potassium during a single dialysis session is greater if the prehemodialysis level is higher. Optimal potassium elimination by hemodialysis is accomplished by daily short hemodialysis treatments instead of protracted sessions every other day. The transfer of potassium from the intracellular to the extracellular compartment usually occurs more slowly than the transfer from the plasma across the dialysis membrane [93, 94], making it difficult to predict the quantity of potassium that will be removed during hemodialysis. A practical consequence of the discordant transfer rates is that the plasma potassium measured immediately after the completion of hemodialysis is approximately 30% less than the steady-state value measured after 5 hours. Therefore, hypokalemia based on blood measurements obtained immediately after the completion of hemodialysis should not be treated with potassium supplements.

The transcellular distribution of potassium is influenced by several variables, including the blood insulin concentration (insulin promotes potassium uptake by cells, lowering its intradialytic clearance), catecholamine activity (β-agonists promote cellular uptake of potassium and α-agonists stimulate the cellular egress of potassium, attenuating and increasing the intradialytic clearance of potassium, respectively) [95], sodium-potassium adenosine triphosphatase activity (pharmacologic inhibition diminishes potassium uptake into cells, which may enhance intradialytic clearance), and systemic pH (alkalemia augments transcellular potassium uptake, which may diminish dialytic clearance of potassium) [95]. Modifications of the dialysate that can enhance the reduction in serum potassium include use of a lower dialysate potassium, a glucose-free dialysate, and an increased dialysate bicarbonate [96]. Use of a higher dialysate HCO3 concentration (39 mmol per L vs. 35 mmol per L and 27 mmol per L) can result in a more rapid decrease in the serum potassium concentration, although this reduction is thought to be due to transfer of potassium from the extracellular to the intracellular fluid compartment, not to removal by dialysis [97]. Paradoxically, it has been observed that, as the gradient for potassium clearance from blood into the dialysate is increased by decreasing the dialysate potassium concentration, the uptake of bicarbonate from the dialysate declines. This interaction between buffer base and potassium in the dialysate is significant: a 1-mEq increase in the potassium gradient results in a decline in bicarbonate loading of 50 mEq.
This interaction should not be overlooked in planning the dialysate prescription for patients being dialyzed for severe acidosis [98]. Use of lower dialysate potassium (1 mEq per L vs. 3 mEq per L) can increase postdialysis blood pressure possibly through increasing peripheral resistance [99]. However, dialysis efficiency seems unaffected.

Because the selection of the dialysate potassium concentration is empirical, most patients are dialyzed against a potassium concentration of 1 to 3 mEq per L. For stable patients who do not have significant cardiac disease or who are not taking cardiac glycosides, a dialysate potassium concentration of 2 to 3 mEq per L is appropriate. In a patient with a history of cardiac disease, especially with arrhythmias and cardiac glycoside usage, the dialysate potassium should be increased to 3 to 4 mEq per L [100]. Such patients are at the greatest risk for the development of dysrhythmias associated with intradialytic potassium flux.

Most cardiac morbidity attributable to the dialysate potassium concentration occurs during the first half of the dialysis session because of the rapid decline in serum potassium concentration associated with large blood-to-dialysate potassium gradient. The sudden fall in serum potassium concentration during the initial phase of hemodialysis can prolong the QTc interval even in patients without cardiac disease [101, 102]. The rapidity of the fall in the plasma potassium concentration, rather than the absolute plasma concentration, appears to determine the risk of cardiac arrhythmias. The effect of dialysate potassium modeling on this initial rapid decline in serum potassium and cardiac arrhythmias was studied in a prospective randomized trial [103]. Hemodialysis patients were randomized to either a fixed dialysate potassium (2.5 mEq/L) or to an exponentially declining dialysate potassium (3.9 to 2.5 mEq/L) that maintains a constant blood-to-dialysate potassium concentration gradient (1.5 mEq/L). Even though the total decreases in serum potassium concentration were similar in both groups, patients with variable dialysate potassium had fewer premature ventricular contractions, particularly in the first hour of dialysis. If a patient has a significant deficit in total body potassium, postdialysis hypokalemia can occur, even if the dialysate potassium concentration is greater than the serum potassium concentration. This seemingly contradictory situation arises because of the potential for a delayed conductance of potassium from the dialysate into the patient, compared with its movement from the extracellular space into the intracellular compartment.
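The constant-gradient strategy from the trial above can be sketched in a few lines. This is a simplified model of the profiling idea only: the function name is hypothetical, the exponential time course used in the trial is not reproduced, and the gradient and floor values are taken from the text.

```python
def profiled_dialysate_k(serum_k, gradient=1.5, floor=2.5):
    """Dialysate potassium (mEq/L) chosen to hold a constant
    blood-to-dialysate gradient, bounded below by the final dialysate
    concentration used in the trial."""
    return max(serum_k - gradient, floor)

# As serum potassium falls during the treatment, the dialysate tracks it,
# avoiding the large early gradient of a fixed prescription:
start = profiled_dialysate_k(5.4)  # 3.9 mEq/L early in the session
end = profiled_dialysate_k(3.5)    # clamped at the 2.5 mEq/L floor
```

A fixed 2.5 mEq/L dialysate against the same starting serum potassium of 5.4 mEq/L would impose a 2.9 mEq/L gradient in the first hour, which is when most dysrhythmias were observed.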


Bases

Although bicarbonate was the original base used in dialysate in the early 1960s, it was superseded by acetate, which is more stable in aqueous solution at neutral pH in the presence of divalent cations. It was subsequently found that vascular instability is much more problematic with predominantly acetate-containing dialysates than with bicarbonate-containing dialysates [104, 105, 106 and 107]. The hemodynamic instability associated with acetate is worsened by hyponatremic dialysates and is lessened with a normonatremic dialysate [105, 108, 109].

Hemodialysis using bicarbonate-buffered dialysate prevents these complications. The transient anion gap metabolic acidosis associated with acetate dialysis is avoided with bicarbonate-based dialysis. A supraphysiologic bicarbonate concentration in the dialysate not only prevents the diffusive loss of bicarbonate from blood to dialysate but generally allows the patient to achieve a net positive bicarbonate balance. Additionally, dialysis-induced hypoxemia is attenuated by a bicarbonate dialysate.

Bicarbonate is now used routinely as the buffer in hemodialysis solutions. Bicarbonate dialysis is now feasible because of the widespread availability of proportioning systems that permit mixing of the separate concentrates containing bicarbonate and divalent cations close to the final entry point of the dialysate into the dialyzer. Unlike the more acidic and hyperosmolal acetate-based dialysate, liquid bicarbonate concentrates and reconstituted bicarbonate dialysates support the growth of Gram-negative bacteria such as Pseudomonas, Acinetobacter, Flavobacterium, and Achromobacter; filamentous fungi; and yeast. Because of the propensity of the dialysate to support bacterial growth, and the morbidity associated with the presence of such growth in the dialysate, strict limits are placed on bacterial growth and presence of lipopolysaccharide in the dialysate, and frequency of dialyzer reuse [110].

A bicarbonate-based dialysate of 30 to 35 mEq per L is conventionally used. Bicarbonate concentrations above 35 mEq per L may result in the development of a metabolic alkalosis with secondary hypoventilation, hypercapnia, and hypoxemia. Use of a higher dialysate bicarbonate to correct metabolic acidosis may have a beneficial effect on the patient’s nutritional status: in maintenance hemodialysis patients, use of a higher dialysate bicarbonate (40 mEq per L) to keep the predialysis total CO2 concentration above 23 mmol per L is associated with a significant decrease in protein degradation. If a bicarbonate dialysate is unavailable, acetate at an equivalent concentration is suitable, but large-surface-area dialyzers or dialyzers with high-efficiency or high-flux transport characteristics cannot be used.


Calcium

Patients with renal failure are prone to develop hypocalcemia, hyperphosphatemia, hypovitaminosis D, and hyperparathyroidism. Positive calcium balance is thus useful as an adjunct during hemodialysis for controlling metabolic bone disease [111, 112 and 113]. In patients with renal failure requiring dialysis, more than 60% of the calcium is not bound to plasma proteins and is in a diffusible equilibrium during hemodialysis [114]. During hemodialysis, serum ionized calcium level is directly related to the dialysate calcium concentration and it varies in a time-dependent manner [115]. Assuming free conductance of calcium across the dialysis membrane secondary to diffusive clearance and an additional contribution secondary to convective losses, a dialysate calcium concentration of roughly 3.5 mEq per L (7.0 mg per dL) is necessary to prevent intradialytic calcium losses [116]. Various complications such as vascular calcification, calciphylaxis, and adynamic bone disease have been attributed to increased calcium load over the years in chronic hemodialysis patients. To avoid the risks of these complications and to allow the use of vitamin D and calcium-containing phosphate binders, it has been recommended that 2.5 mEq per L dialysate calcium be used in maintenance hemodialysis patients [117].


Aug 27, 2016 | Posted by in CRITICAL CARE | Comments Off on Dialysis Therapy in the Intensive Care Setting
