Current Concepts in the Management of Systemic Local Anesthetic Toxicity



Local anesthetics are amphipathic compounds; they are both lipophilic and hydrophilic. The lipophilic component of the molecule allows local anesthetics to cross plasma and intracellular membranes, whereas the hydrophilic portion gives them the ability to interact with charged targets such as structural or catalytic proteins [1] and ion channels. When given at appropriate sites and doses, local anesthetics are safe. However, local anesthetic systemic toxicity (LAST) can occur from either accidental intravascular injection or when an excessive amount of local anesthetic finds its way into the intravascular space. Patient factors can also lower the threshold for LAST such that even normally safe serum concentrations of local anesthetic can lead to clinical instability. Therefore, LAST is the end result of the interplay among patient-specific factors, the peak plasma concentration, and the physicochemical properties of the specific local anesthetic [2]. Intrinsic anesthetic potency and the potential for causing acute cardiac and neurotoxicity parallel the lipid solubility of the drug.


The history of local anesthetics begins with the conquest of Peru by Pizarro in the early part of the 16th century and the introduction of the coca plant to Europe [2]. In 1850 the Austrian explorer von Scherzer first brought back a sufficient quantity of the coca plant to allow the isolation of cocaine by Niemann [3]. In the early 1880s Sigmund Freud suggested to his colleague Carl Koller the idea of using cocaine for its local anesthetic properties, and in 1884 Koller performed the first eye surgery with the use of topical cocaine.


Shortly after the use of cocaine for topical anesthesia, physicians began to inject cocaine near peripheral nerves and into the spinal and epidural spaces [4]. In 1855 Alexander Wood first presented the idea of a nerve block by direct application of cocaine [2], and it was not long before the toxic effects of cocaine were identified. Cocaine not only led to addiction among medical staff but also resulted in deaths among patients and medical staff alike. Even before the introduction of cocaine as a local anesthetic, its toxicity had been reported: in 1868 Moreno y Maiz described cocaine-induced seizures in rats [4]. By 1887 J.B. Mattison had reported 30 cases of cocaine toxicity involving a spectrum of symptoms from convulsions to death [3].


In 1919 Eggleston and Hatcher published a comprehensive summary of the prevention and treatment of LAST. They concluded that different local anesthetics were additive in their toxicity and that adding epinephrine to subcutaneous injections of local anesthetics apparently reduced the incidence of LAST [4]. In 1925 Tatum, Atkins, and Collins showed that seizures from LAST could be controlled with barbiturate injection. By 1928 the medical community in the United States recognized a growing risk of mortality directly attributable to local anesthetics, which led to the formation of an ad hoc Council of the American Medical Association. The Council's recommendations for treating local anesthetic toxicity included cardiac massage and artificial respiration; it concluded that intracardiac adrenaline and digitalis were not useful [3].


As the frequency of central nervous system (CNS) and cardiovascular (CV) systemic toxicity from local anesthetics grew, the medical community was prompted to search for new and less toxic local anesthetics [3]. Giesel isolated tropocaine in 1891 from a Javanese species of coca. Tropocaine proved to have a degree of toxicity similar to that of cocaine; however, structural modifications of tropocaine led to the preparation of newer local anesthetics such as eucaine, Holocaine, and orthoform [3]. In 1900 and 1905 Einhorn synthesized benzocaine and procaine, respectively. Procaine had relatively few side effects but fell out of favor because of its low potency, slow onset, short duration of action, and limited ability to penetrate tissue [3]. Chloroprocaine was created by adding a chlorine substituent to the aromatic ring of procaine. Unfortunately, its use declined after 1980 because of reports of prolonged sensory and motor block following subarachnoid administration of an intended epidural dose. The last ester-type local anesthetic to be developed was tetracaine, in 1930. Tetracaine can be used to achieve 1.5 to 2.5 hours of spinal anesthesia as an isobaric, hypobaric, or hyperbaric solution [3]. It is also effective as a topical airway anesthetic. Lidocaine was synthesized in 1944 and first used clinically in 1948. It quickly became one of the most widely used local anesthetics because of its potency, rapid onset, and effectiveness for infiltration. The safety profile of lidocaine for neuraxial anesthesia was questioned in the late 1980s after numerous reports of transient neurologic symptoms following uneventful spinal anesthesia and of cauda equina syndrome when high concentrations of lidocaine were administered through a continuous spinal catheter. All local anesthetics developed after lidocaine share the amide structure. Mepivacaine and prilocaine, both related to lidocaine, were introduced into clinical use in 1957 and 1960, respectively [3]. Prilocaine's clinical use is limited by its potential to cause methemoglobinemia.


The evolution of modern regional anesthesia begins with the synthesis of bupivacaine in 1957. Shortly after its synthesis, bupivacaine was set aside for clinical use because it was found to be 4 times more toxic than its homolog mepivacaine [3], and it was not introduced into clinical practice until 1965. Bupivacaine belongs to the family of n-alkyl-substituted pipecolyl xylidines; it carries a butyl substitution and an asymmetric carbon atom that constitutes a chiral center [5]. Bupivacaine is a long-acting amide local anesthetic that can be used for neuraxial block, peripheral nerve block, and infiltration. It can produce a differential blockade because lower concentrations mainly provide sensory blockade, whereas motor blockade is seen only at higher concentrations [3].


Hollmen reported the first clinical descriptions of bupivacaine toxicity in 1966. A total of 133 patients were studied for toxic reactions during epidural and caudal anesthesia for abdominal and urological surgery [3]. Of the 6 toxic reactions observed, 5 involved mild to severe CNS toxicity manifesting as tremor or convulsions; one involved hypotension and bradycardia after a caudal block.


In 1969 Beck and Martin reviewed 19,907 cases of paracervical blockade with bupivacaine in women during labor [6]. They found 23 cases of infant death and evidence of newborn acidosis associated with the blocks. It was not until 1983 that bupivacaine was abandoned for use in paracervical blockade [3].


In the early 1970s there were several human volunteer studies of bupivacaine toxicity. Most of these involved continuous infusion of bupivacaine until symptoms appeared [3]. The sample sizes for these studies were extremely small, consisting of only 3 to 6 patients. The main observed symptoms were CNS in nature (lightheadedness, muscle twitching, dizziness, lip numbness, tinnitus, and slurred speech).


The first case of severe cardiovascular toxicity was reported in 1977 by Edde and Deutsch, more than a decade after the introduction of bupivacaine into clinical use. They described ventricular fibrillation in a patient undergoing an interscalene block with 100 mg of bupivacaine [2]. Albright’s editorial in 1979 detailed bupivacaine-induced CV compromise, including ventricular arrhythmias, CV collapse, and death, and suggested a causal relationship between severe CV toxicity and the use of bupivacaine and etidocaine [7].


In the 1980s the pharmaceutical industry began to search for a potent, long-acting local anesthetic with reduced toxicity [3]. This led to the use of the potentially less toxic S-(−)-enantiomer of bupivacaine, levobupivacaine, and the new local anesthetic, ropivacaine. Ropivacaine was introduced into clinical practice in 1996 after evaluation in clinical trials starting in 1990. It is able to produce differential blockade and may have a better safety profile than bupivacaine.


The latest focus on methods to reduce local anesthetic toxicity involves the delivery of local anesthetics mixed with substances capable of slowing their release. Two such approaches involve liposomes and microspheres. Boogaerts and colleagues [8] published the first study of epidural administration of liposomal bupivacaine in 1994.




CNS


Clinical central nervous system toxicity caused by bupivacaine consists of 2 phases. The first, excitatory phase manifests as shivering, muscle twitching, and tremors progressing to tonic-clonic seizures [9]. The second, inhibitory phase involves generalized CNS depression leading to hypoventilation and respiratory arrest. CNS symptoms usually occur before signs of cardiovascular toxicity. The specific mechanism underlying CNS toxicity involves neuronal desynchronization, possibly because of disturbances in γ-aminobutyric acid neurotransmission [10].



CVS


Cardiotoxicity from bupivacaine follows a 2-stage pattern similar to that of CNS toxicity. First, central activation of the sympathetic nervous system produces tachycardia and hypertension, which can mask the direct cardiac depressant effects of bupivacaine. Malignant arrhythmias and contractile dysfunction follow shortly after the initial sympathetic activation, and the end result can be complete cardiovascular collapse.


Bupivacaine blocks cardiac sodium channels in a time- and voltage-dependent manner. The sodium channel block is intensified as heart rate increases or as the membrane potential becomes more depolarized [11]. Bupivacaine has a preference for inactivated sodium channels over those in the resting or open configuration. At low concentrations, bupivacaine blocks sodium channels in a slow-in, slow-out manner; at high concentrations the channel is blocked in a fast-in, slow-out manner. Non–protein-bound (free) bupivacaine concentrations of 0.5 to 5 μg/mL slow conduction of cardiac action potentials, which leads to a prolonged PR interval and a widened QRS complex. The persistence of sodium channel blockade into diastole further slows cardiac conduction and can predispose the heart to re-entrant arrhythmias, unifocal or multifocal ectopic beats, or ventricular tachycardia.
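The rate dependence of this block can be illustrated with a toy, beat-by-beat model: drug binds while channels are open or inactivated during each action potential and only partially unbinds during diastole, so faster heart rates leave less recovery time and the block accumulates. The sketch below is a minimal illustration with hypothetical parameters (k_on, k_off_per_s, and ap_duration_s are illustrative assumptions, not pharmacological data), not a quantitative model of bupivacaine kinetics.

```python
import math

def steady_state_block(heart_rate_bpm, k_on=0.6, k_off_per_s=0.5, ap_duration_s=0.3):
    """Toy model of use-dependent sodium channel block.

    k_on          -- fraction of unblocked channels that become blocked during
                     one action potential (hypothetical, concentration dependent)
    k_off_per_s   -- first-order unbinding rate during diastole (hypothetical)
    ap_duration_s -- action potential duration, during which unbinding is
                     assumed negligible ("slow-out" behavior)
    """
    cycle_s = 60.0 / heart_rate_bpm
    diastole_s = max(cycle_s - ap_duration_s, 0.0)
    persisting = math.exp(-k_off_per_s * diastole_s)  # fraction of block surviving diastole

    block = 0.0
    for _ in range(200):                   # iterate beats until block reaches steady state
        block += k_on * (1.0 - block)      # binding during the action potential
        block *= persisting                # partial unbinding during diastole
    return block

for hr in (60, 100, 150):
    print(f"{hr:3d} bpm -> steady-state fraction of blocked channels ~ {steady_state_block(hr):.2f}")
```

With these illustrative parameters the blocked fraction rises as heart rate increases, mirroring the use-dependent intensification of block described above.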


Local anesthetics also inhibit cardiac contractility in a nonstereoselective manner, which might result from various effects on mitochondrial energy metabolism, intracellular calcium regulation [12,14], inhibition of cAMP [15], or interference with other metabotropic signaling pathways. Bupivacaine reduces both the mitochondrial transmembrane potential and lipid-based respiration in mitochondria [16]. The former reduces the efficiency of oxidative phosphorylation; the latter impairs the transport of the lipid fuel necessary for the 70% of myocardial energy that is normally derived from fatty acid oxidation in the mitochondrial matrix. The observation that bupivacaine inhibits carnitine-acylcarnitine translocase was an important step in the discovery of lipid emulsion therapy (see later discussion). This specific inhibition of carnitine-mediated fatty acyl transfer into the mitochondrial matrix might explain the relatively low dose of bupivacaine that leads to cardiovascular collapse [16]. Bupivacaine also impairs cardiac relaxation (lusitropy), an effect that may be caused by impaired calcium handling in the sarcoplasmic reticulum [17].

