Laboratory Principles




INTRODUCTION







Clinical toxicology addresses the harm caused by acute and chronic exposures to excessive amounts of a xenobiotic. Detecting the presence or measuring the concentration of toxic xenobiotics is the primary activity of the analytical toxicology laboratory. Such testing is closely intertwined with therapeutic drug monitoring (TDM), in which drug concentrations are measured as an aid to optimizing drug dosing regimens. Testing for drugs of abuse is another offshoot of laboratory toxicology that is increasingly requested.



The goal of the hospital toxicology laboratory is to provide clinically relevant test results to support the management of poisoned or intoxicated patients.



To be of optimal clinical value, the laboratory service requires both an appropriate test menu that meets quality criteria and the ability to provide results within a clinically relevant time frame.




RECOMMENDATIONS FOR ROUTINELY AVAILABLE TOXICOLOGY TESTS







Despite a common focus, there is remarkable variability in the range of tests offered by clinical toxicology laboratories. Test menus range from batched testing for routinely monitored drugs and common drugs of abuse to around-the-clock availability of a broad arsenal of assays with the potential to identify thousands of compounds. For most laboratories, it would be entirely impractical and inefficient to attempt to provide a full range of analyses in real time because of cost constraints, staffing issues, and the required technical expertise.



Decisions on the test menu and turnaround times (TATs) to be offered should be made by the laboratory director in consultation with the medical toxicologists and other clinicians who will use the service. These decisions should take into account regional patterns of licit and illicit xenobiotic use and of exposure to environmental xenobiotics, as well as available resources and competing priorities.



The essential questions that need to be addressed include:





  1. Which tests should be available?



  2. Should the assays provide qualitative or quantitative results?



  3. What specimen matrices will be tested (eg, blood, urine, other body fluid)?



  4. When and how should the specimen be obtained?



  5. What is the acceptable TAT?




From a consensus process that involved clinical biochemists, medical toxicologists, forensic toxicologists, and emergency physicians, the National Academy of Clinical Biochemistry (NACB) recommended that hospital laboratories provide a two-tiered approach to toxicology testing.27 In the United Kingdom, the National Poisons Information Service and the Association of Clinical Biochemists recommend a nearly identical list of tests, omitting the anticonvulsants.21



Tier 1 (Table 7–1) includes basic tests that should be offered locally because of their clinical relevance and the technical feasibility of rapid TATs. As with all laboratory tests, they should be ordered selectively based on the history, clinical presentation, and other relevant factors and not simply ordered as a general screening panel for all suspected cases of poisoning. The recommended TATs for most of the tests listed were 1 hour or less.




TABLE 7–1. Tier 1 (Basic) Toxicology Assays



Unfortunately, on-site testing for the toxic alcohols is often difficult because of the complexity of their chromatography-based methodologies. As a compromise, a TAT of 2 to 4 hours was deemed acceptable. Another option is to calculate the serum osmol gap (see below), which provides some important clinical information regarding the toxic alcohols.



Although the consensus for the menu of serum assays was generally excellent, there was less agreement on the need for qualitative urine assays, largely because of poor sensitivity and specificity, poor correlation with clinical effects, and infrequent alteration of patient management. Although these concerns apply to all of the urine drug tests, they led to the explicit omission of tests for tetrahydrocannabinol (THC) and benzodiazepines from the recommended list despite their widespread use. The results for THC have little value in managing patients with acute clinical concerns, and tests for benzodiazepines have an inadequate spectrum of detection. Testing for amphetamines, propoxyphene, and phencyclidine (PCP) is recommended only in areas where use is prevalent. It is also suggested that the diagnosis of tricyclic antidepressant (TCA) toxicity not be based solely on the results of a urine screening immunoassay because a number of other drugs cross-react; TCA results should always be correlated with electrocardiographic and clinical findings. The only urine test included in the United Kingdom guidelines was a spot test for paraquat.21 Paraquat testing was omitted from the NACB guidelines because of the very low incidence of paraquat exposure in North America.27



The NACB guidelines also recommend additional tier 2 testing in selected patients whose clinical presentations are compatible with poisoning but remain undiagnosed and are not improving. In general, such testing should not be ordered until the patient is stabilized and input is obtained from a medical toxicologist or poison control center. This second level of testing may be provided directly by referral to a reference laboratory or a regional toxicology center. Such a system has the advantages of avoiding costly duplication of services and providing a pooled workload to enable the development of technical expertise for complex, low-volume methodologies.



Many physicians order a broad-spectrum toxicology panel, or “tox screen,” for a poisoned or overdosed patient if one is readily available, but only 7% of clinical laboratories provide relatively comprehensive urine toxicology testing (as estimated from proficiency testing data).6 Although broad-spectrum toxicology screens can identify many xenobiotics present in poisoned or overdosed patients, the results of such screens have infrequently altered management or outcomes.4,14,15



The extent to which the NACB recommendations are being followed may be estimated from the numbers of laboratories participating in various types of proficiency testing. Result summaries from the 2017 series of proficiency surveys administered by the College of American Pathologists suggest that among laboratories that offer routine clinical testing, only about 40% offer quantitative assays for lithium, phenobarbital, and theophylline; 50% to 60% offer assays for acetaminophen (APAP), carbamazepine, carboxyhemoglobin, methemoglobin, ethanol, salicylates, and valproic acid; 60% to 70% offer digoxin, iron, and transferrin or iron-binding capacity; and 70% to 80% offer screening tests for drugs of abuse in urine.6



About 7% of laboratories participated in proficiency testing for a full range of toxicology services. These full-service laboratories typically offer quantitative assays for additional therapeutic drugs, particularly TCAs, as well as assays that are designated as broad-spectrum or comprehensive toxicology screens. Only about 2% of laboratories offer testing for volatile alcohols other than ethanol; most of these also offer testing for ethylene glycol.6



Although relatively few laboratories offer a wide range of in-house testing, most laboratories offer a limited toxicology panel and send out specimens to reference laboratories that offer large toxicology menus. The TAT for such “send-out” tests ranges from a few hours to several days, depending on the proximity of the reference laboratory and the type of test requested.



Even in full-service toxicology laboratories, the test menus vary substantially from institution to institution. Larger laboratories typically offer one or more broad-spectrum test panels, often referred to as “tox screens.” There is as much variety in the range of xenobiotics detected by various toxicologic screens as there is in the total menu of toxicologic tests. Routinely available tests are usually listed in a printed or online laboratory manual. Some laboratories with comprehensive services offer locally developed chromatographic assays for additional xenobiotics that are not listed. Testing that is sent to a reference laboratory is often not listed in the laboratory manual. The best way to determine whether a particular xenobiotic can be detected or quantitated is to ask the director or supervisor of the toxicology or clinical chemistry section, because laboratory clerical staff are likely to be aware only of tests listed in the manual.




USING THE TOXICOLOGY LABORATORY







There are many reasons for toxicologic testing. The most common function is to confirm or exclude suspected toxic exposures. A laboratory result provides a level of confidence not readily obtained otherwise and may avert other unproductive diagnostic investigations driven by the desire for completeness and medical certainty. Testing increased diagnostic certainty in more than half of cases,1,15 and in some instances, a diagnosis is based primarily on the results of testing. This can be particularly important in poisonings or overdoses with xenobiotics having a delayed onset of clinical toxicity, such as APAP, or in patients exposed to multiple xenobiotics. In these instances, characteristic clinical findings typically have not developed at the time of presentation or are obscured or altered by the effects of coingestants.



Testing provides two key parameters that will have a major impact on the clinical course, namely, the xenobiotic involved and the intensity of the exposure. This information can assist in triage decisions and can facilitate management decisions, such as the use of specific antidotes or interventions to hasten elimination. Well-defined exposure information can also facilitate provision of optimum advice by poison control centers. Finally, positive findings for ethanol or drugs of abuse in trauma patients should serve as an indication for substance use intervention as well as a risk marker for the likelihood of future trauma.15



Laboratory support of a clinical diagnosis of poisoning provides important feedback, enabling the clinical team to confirm a suspected diagnosis. Another important benefit is reassurance that an unintentional ingestion did not result in absorption of a toxic amount of xenobiotic. This reassurance allows physicians to avoid spending excessive time with patients who are relatively stable. It also allows admissions and discharges to be made and interventions to be undertaken more confidently and efficiently than would be likely based solely on a clinical diagnosis. Testing is also indicated for medicolegal reasons to establish a diagnosis “beyond a reasonable doubt.”



Unfortunately, clinicians are often unaware of the capabilities and limitations of their laboratory. A survey of emergency physicians found that more than 75% were not fully aware of the range of drugs detected and not detected by the toxicology screen of their laboratory. The majority believed that the screen was more comprehensive than it actually was.12



The key to optimal use of the toxicology laboratory is communication. This begins with learning the capabilities of the laboratory, including the xenobiotics on its menus, which can be quantitated and which merely detected, and the anticipated TATs. For screening assays, one should know which xenobiotics are routinely detected, which ones can be detected if specifically requested, and which ones cannot be detected even when present at concentrations that typically result in toxicity.



One should know the specimen type that is appropriate for the test requested. A general rule is that quantitative tests require serum (red stopper) or heparinized plasma (green stopper) but not ethylenediamine tetraacetic acid (EDTA) plasma (lavender stopper) or citrate plasma (light-blue stopper). Both EDTA and citrate bind divalent cations that serve as cofactors for enzymes used as reagents or labels in various assays. Additionally, liquid EDTA and citrate anticoagulants dilute the specimen. Serum or plasma separator tubes (gold or green stoppers with a separator gel at the bottom of the tube) are also acceptable, provided that prolonged gel contact before testing is avoided. Some hydrophobic drugs diffuse slowly into the gel, leading to falsely low results after several hours. A random, clean urine specimen is generally preferred for toxicology screens because the higher drug concentrations usually found in urine can compensate for the lower sensitivity of the broadly focused screening techniques. However, it should be emphasized that urine testing typically provides qualitative information and generally does not indicate the degree of clinical toxicity. The concentrations of xenobiotics and their metabolites are dramatically affected by the hydration status and underlying kidney function of the patient. A urine specimen of 20 mL is usually optimal. Requirements for all specimens vary from laboratory to laboratory.



When requesting a screening test, an important—and often overlooked—item of communication is specifying any xenobiotics of particular concern. This often enables faster results and a greater likelihood of detection. It is also important to provide sufficient clinical information, including the times and dates of suspected exposure and of specimen collection.



Most full-service toxicology laboratories welcome consultation on difficult cases or on results that appear inconsistent with the clinical presentation. Laboratory staff will be familiar with the capabilities and limitations of their testing methods, as well as common sources of discrepant results. For example, they are aware of coadministered drugs or other xenobiotics that interfere either positively or negatively with laboratory measurements.




METHODS USED IN THE TOXICOLOGY LABORATORY







Most tests in the toxicology laboratory are directed toward the identification or quantitation of xenobiotics. The primary techniques used include spot tests, spectrochemical tests, immunoassays, and chromatographic techniques. Mass spectrometry (MS) is also used, usually in conjunction with gas chromatography (GC) or liquid chromatography (LC). Table 7–2 compares the basic features of these methodologies. Other methodologies include ion-selective electrode measurements of lithium, atomic absorption spectroscopy or inductively coupled plasma mass spectrometry for lithium and heavy metals, and anodic stripping methods for heavy metals. Many adjunctive tests, including glucose, creatinine, electrolytes, osmolality, metabolic products, and enzyme activities, are also useful in the management of poisoned or overdosed patients. The focus here is on the major methods used for directly measuring xenobiotics.




TABLE 7–2. Relative Comparison of Toxicology Methods



Spot Tests



These are simple, noninstrumental, qualitative assays that rely on a rapid reaction between a xenobiotic and a chemical reagent to form a colored product. A classic example is the Trinder test for salicylates, in which salicylate is complexed with ferric ions to produce a violet-colored product.2 Although once a mainstay of toxicologic testing, spot tests are rarely used today because of the significant variability in visual interpretation. However, the fundamental chemical reactions used in the tests are often used in modern quantitative methods.



Optical (Spectrophotometric) Tests



Spectrophotometry involves the measurement of light intensity at selected wavelengths. The method depends on the light-absorbing properties of a substance in solution. The intensity of light transmitted through the solution decreases exponentially as the concentration of the absorbing substance increases, so the absorbance (the negative logarithm of the fraction of light transmitted) is directly proportional to concentration (the Beer-Lambert law). The transmitted light is measured and related to the concentration of the analyte.
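As a concrete illustration of this relationship, consider the following minimal sketch in Python; the molar absorptivity used is an arbitrary placeholder, not a value from this chapter:

```python
import math

def absorbance(incident: float, transmitted: float) -> float:
    """Absorbance: A = -log10(I_transmitted / I_incident)."""
    return -math.log10(transmitted / incident)

def concentration(a: float, molar_absorptivity: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)."""
    return a / (molar_absorptivity * path_cm)

# Example: 25% of the incident light is transmitted through a 1-cm cuvette.
a = absorbance(100.0, 25.0)                     # A = 0.602
c = concentration(a, molar_absorptivity=1.0e4)  # epsilon is a placeholder
print(f"A = {a:.3f}, c = {c:.2e} mol/L")
```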



Analytes that are intrinsically light absorbing are measured by direct spectrometry. An example of this is cooximetry for the measurement of hemoglobin and its variants in a whole-blood sample. Because the absorbance of hemoglobin is altered by its oxidative state, measurements at multiple wavelengths are used to individually quantify the amounts of deoxyhemoglobin, oxyhemoglobin, carboxyhemoglobin, and methemoglobin. Classic pulse oximetry, which uses only two wavelengths, yields spurious results in the presence of significant amounts of methemoglobin or carboxyhemoglobin. Cooximetry is relatively free of interferences because the concentrations of the hemoglobins are so much higher than those of other substances in the blood. However, the presence of intensely colored substances, such as methylene blue or hydroxocobalamin, or of unexpected hemoglobin species, such as sulfhemoglobin, causes spurious results or error flags.
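Conceptually, the multiwavelength measurement reduces to solving a small system of linear equations: at each wavelength, the total absorbance is the sum of the contributions of the individual hemoglobin species. The sketch below uses invented extinction coefficients purely for illustration; real cooximeters use many more wavelengths and carefully calibrated coefficients:

```python
import numpy as np

# Rows: wavelengths; columns: O2Hb, HHb (deoxy), COHb, MetHb.
# These extinction coefficients are illustrative placeholders, not real values.
E = np.array([
    [1.2, 0.4, 0.9, 0.5],
    [0.3, 1.1, 0.2, 0.8],
    [0.7, 0.6, 1.3, 0.4],
    [0.5, 0.9, 0.4, 1.2],
])

# Simulate measured absorbances from known species fractions.
true_fractions = np.array([0.80, 0.05, 0.10, 0.05])
A = E @ true_fractions

# Recover the fractions by solving E @ x = A (least squares generalizes
# to the overdetermined case, where wavelengths outnumber species).
fractions, *_ = np.linalg.lstsq(E, A, rcond=None)
print(dict(zip(["O2Hb", "HHb", "COHb", "MetHb"], fractions.round(3))))
```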



Most analytes at physiologically relevant concentrations do not absorb enough light at a distinct wavelength, or are not present at high enough concentrations, to be measured by direct spectrometry. Spectrochemical methods use chemical reactions to produce intensely colored compounds that absorb light at specific wavelengths. The Trinder test described earlier is one early example. The difference between the spot test and the more advanced spectrochemical versions is the degree of automation and the use of a spectrophotometric detector to quantify the concentration of the colored product. The ultimate limitation of the Trinder test is its lack of specificity; a variety of both endogenous and exogenous compounds cross-react with the ferric reagent, yielding a large number of false-positive results.



Indirect spectrochemistry improves on spectrochemical assays by increasing the selectivity of the reaction that generates the light-absorbing product. Enzymes that catalyze highly selective reactions are often used for this purpose. For example, many ethanol assays are based on an indirect spectrochemical method that uses alcohol dehydrogenase to specifically catalyze the oxidation of ethanol to acetaldehyde, with concomitant reduction of the cofactor NAD+ (the oxidized form of nicotinamide adenine dinucleotide) to NADH (the reduced form). The generation of NADH is monitored spectrophotometrically as an increase in absorbance at 340 nm. Reactions that interconvert NAD+ and NADH are very common in the clinical laboratory.
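Because only NADH absorbs at 340 nm, the ethanol concentration follows from the absorbance change via the Beer-Lambert law. A worked sketch, assuming a 1-cm path length and the commonly cited molar absorptivity of NADH at 340 nm (about 6,220 L/mol/cm, a standard literature value rather than one given in this chapter); a real analyzer would also apply a substantial specimen dilution factor:

```python
ETHANOL_MW = 46.07     # g/mol
EPSILON_NADH = 6220.0  # L/(mol*cm) at 340 nm, standard literature value
PATH_CM = 1.0

def ethanol_mg_dl(delta_a340: float, dilution: float = 1.0) -> float:
    """One NADH is produced per ethanol molecule oxidized, so
    [ethanol] (mol/L) = deltaA340 / (epsilon * path), scaled for dilution."""
    mol_per_l = delta_a340 / (EPSILON_NADH * PATH_CM) * dilution
    return mol_per_l * ETHANOL_MW * 100.0  # g/L -> mg/dL

# A deltaA of 0.35 in a specimen diluted 1:100 corresponds to ~26 mg/dL.
print(f"{ethanol_mg_dl(0.35, dilution=100):.0f} mg/dL")
```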



In spectrochemical assays, light absorbance is measured either after completion of the reaction (endpoint method) or repeatedly during the reaction to determine the rate of reaction (kinetic method). The initial rate of reaction is constant and proportional to the concentration of the analyte. Kinetic methods are typically faster and less sensitive to interference from other light-absorbing substances, whose constant absorbance does not affect the measured rate; however, they are more complex and require precise timing.
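The insensitivity of kinetic methods to a constant background can be made concrete with a short sketch: an interfering color shifts every reading equally, changing the intercept of the absorbance-time curve but not its slope. The readings are invented for illustration:

```python
import numpy as np

times = np.array([0, 15, 30, 45, 60])                        # seconds
absorbance = np.array([0.210, 0.262, 0.315, 0.369, 0.420])   # includes a 0.21 baseline

# Initial rate = slope of the early, linear region (absorbance units/s).
rate, intercept = np.polyfit(times, absorbance, 1)

# The rate, not the absolute absorbance, is compared against calibrators,
# so the baseline offset from a colored interferent drops out.
print(f"rate = {rate:.5f} A/s, intercept = {intercept:.3f}")
```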



Although enzymatic methods are more specific, they are not free of interferences from cross-reacting substrates. For example, some lactate assays use the enzyme lactate oxidase, which also accepts glycolate as a substrate. Consequently, patients with high glycolate concentrations from the metabolism of ingested ethylene glycol may have falsely elevated lactate measurements. Likewise, endogenous lactate dehydrogenase oxidizes lactate to pyruvate with concomitant reduction of NAD+ to NADH; in patients with high serum lactate concentrations, this pathway generates NADH and falsely increases results in assays based on NADH production.



Determination of Volatiles by Serum Osmol Gap



As mentioned earlier, because of the significant toxicity of the toxic alcohols and the availability of an effective antidote (fomepizole) for methanol and ethylene glycol poisoning, measurement of these alcohols is considered a tier 1 test. However, on-site availability is often limited. One alternative measure in suspected toxic alcohol overdose is the serum osmol gap, derived from the measured serum osmolality and the calculated serum osmolarity:

Osmol gap = measured serum osmolality − calculated serum osmolarity

where

Calculated serum osmolarity = 2[Na+] + [glucose]/18 + [BUN]/2.8

with sodium in mEq/L and glucose and blood urea nitrogen (BUN) in mg/dL.

Normally, the osmol gap is approximately −2 ± 6. Alcohols and acetone, when present at significant concentrations, increase the measured osmolality and thereby increase the osmol gap. An important caveat is that both the measured osmolality and the calculated osmolarity should be obtained from the same serum sample, and because of the volatile nature of these substances, the osmolality should be measured expeditiously by freezing point depression osmometry (vapor pressure osmometers do not detect volatiles). Furthermore, other substances that may be administered (eg, mannitol for osmotic diuresis or propylene glycol as a solvent for diazepam or phenytoin) also increase serum osmolality (Chap. 12).2
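A minimal sketch of the calculation in code, with invented example values; the divisors 18, 2.8, and 4.6 convert glucose, BUN, and ethanol from mg/dL to mmol/L based on their molecular weights:

```python
def calculated_osmolarity(na_meq_l: float, glucose_mg_dl: float, bun_mg_dl: float,
                          ethanol_mg_dl: float = 0.0) -> float:
    """2[Na+] accounts for sodium and its accompanying anions; the divisors
    convert mg/dL to mmol/L using each substance's molecular weight."""
    return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8 + ethanol_mg_dl / 4.6

def osmol_gap(measured_mosm_kg: float, **kwargs) -> float:
    return measured_mosm_kg - calculated_osmolarity(**kwargs)

# Example: measured 315 mOsm/kg with Na 140 mEq/L, glucose 90 mg/dL, BUN 14 mg/dL.
gap = osmol_gap(315, na_meq_l=140, glucose_mg_dl=90, bun_mg_dl=14)
print(f"osmol gap = {gap:.1f}")  # 315 - 290 = 25, suggesting an unmeasured osmole
```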



Immunoassays



The need to measure very low concentrations of an analyte with a high degree of specificity led to the development of immunoassays. The combination of high affinity and high selectivity makes antibodies excellent assay reagents. There are two common types of immunoassays: noncompetitive and competitive. In noncompetitive immunoassays, the analyte is sandwiched between two antibodies, each of which recognizes a different epitope on the analyte. In competitive immunoassays, analyte from the patient’s specimen competes for a limited number of antibody binding sites with a known amount of a labeled version of the analyte provided in the reaction mixture. Because most xenobiotics are too small to have two distinct antibody binding sites, drug immunoassays are usually competitive.



Competitive immunoassays can be either heterogeneous or homogeneous. An early example of a heterogeneous immunoassay was the radioimmunoassay (RIA). In this technique, the patient sample and radiolabeled antigen are added to a solution containing antibody fixed to a surface, such as the wall of a tube or beads, and compete for binding. A subsequent wash step removes any unbound radiolabel before the tube or beads are placed in a counter to measure the remaining radioactivity, which is inversely proportional to the amount of xenobiotic in the patient sample. Modern heterogeneous assays (Fig. 7–1) have largely replaced radioactive labels with fluorescent or luminescent moieties, often activated by an enzymatic reaction. The fixed surfaces have been updated to microbeads with very large surface areas. These improvements have enabled faster assay times and higher sensitivities than are attainable by homogeneous assays.
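Because the measured signal falls as the analyte concentration rises, quantitative competitive immunoassays are read off a descending calibration curve, commonly fit with a four-parameter logistic (4PL) function. The sketch below uses invented calibrator data and assumes SciPy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: signal falls from a (zero analyte) to d (excess analyte);
    c is the midpoint concentration and b the slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrators (ng/mL) and luminescence counts (descending).
conc = np.array([0.1, 1, 5, 25, 100, 500])
signal = np.array([98000, 91000, 70000, 35000, 12000, 4000])

params, _ = curve_fit(four_pl, conc, signal, p0=[100000, 1.0, 20.0, 3000])
a, b, c, d = params

def concentration_from_signal(y: float) -> float:
    """Invert the 4PL to read an unknown off the calibration curve."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

print(f"unknown at 50,000 counts ~ {concentration_from_signal(50000):.1f} ng/mL")
```

Inverting the fitted curve, as in the last step, is how an unknown specimen's signal is converted to a reported concentration.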




FIGURE 7–1.


Magnetic microparticle chemiluminescent competitive immunoassay. (A) Unlabeled drug from the specimen competes with alkaline phosphatase–labeled drug for binding to antibody-coated magnetic microparticles. The microparticles are then held by a magnetic field while unbound material is washed away. (B) A dioxetane phosphate derivative is added and is dephosphorylated by microparticle-bound alkaline phosphatase to give an unstable dioxetane product that spontaneously decomposes with emission of light. The rate of light production is directly proportional to the amount of alkaline phosphatase bound to the microparticles and inversely proportional to the concentration of competing unlabeled drug from the specimen.





Homogeneous immunoassays are among the most widely used methods. In these techniques, the signal generated by the labeled moiety is modified by binding to the assay antibody, which obviates the need to physically separate or wash out the unbound labels. Avoiding the separation step simplifies the methodology and facilitates automation. Commonly used homogeneous techniques include the enzyme multiplied immunoassay technique (EMIT) and the cloned enzyme donor immunoassay (CEDIA).2



In the EMIT technique (Fig. 7–2), an enzyme–drug conjugate is used as the label and competes with unlabeled drug from the patient sample for antibody binding sites. When antibody binds the labeled drug, it blocks the enzyme's active site and reduces its activity. Unlabeled drug from the patient's specimen reduces antibody binding of the conjugate, increasing enzyme activity in proportion to the drug concentration.
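The competition can be illustrated with a deliberately simplified model: assume a limiting number of antibody binding sites that partition between the enzyme–drug conjugate and specimen drug in proportion to their concentrations (equal affinities), and that antibody-bound conjugate is catalytically inactive. All numbers are arbitrary:

```python
def emit_relative_activity(drug: float, conjugate: float = 1.0,
                           antibody_sites: float = 0.8) -> float:
    """Limiting antibody sites partition between the enzyme-drug conjugate
    and specimen drug in proportion to concentration (equal affinities
    assumed); antibody-bound conjugate is catalytically inactive."""
    bound_conjugate = antibody_sites * conjugate / (conjugate + drug)
    return (conjugate - bound_conjugate) / conjugate

for d in (0.0, 0.5, 1.0, 4.0):
    print(f"drug = {d}: relative enzyme activity = {emit_relative_activity(d):.2f}")
```

As the model shows, measured enzyme activity rises with the specimen drug concentration, which is the signal behavior described above.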




FIGURE 7–2.


Enzyme multiplied immunoassay technique. The drug to be measured is labeled by being attached to the enzyme glucose-6-phosphate dehydrogenase (G6PD) near the active site. (A) Binding of the enzyme-labeled drug to the assay antibody blocks the active site, inhibiting conversion of NAD+ (oxidized form of nicotinamide adenine dinucleotide) to NADH (reduced form of nicotinamide adenine dinucleotide). (B) Unlabeled drug from the specimen can displace the drug–enzyme conjugate from the antibody, thereby unblocking the active site and increasing the rate of reaction.





In the CEDIA technique, a reporter enzyme is genetically engineered as two complementary but individually inactive fragments, one of which is linked to the target drug. The two fragments spontaneously reassemble into the active enzyme in solution. However, in the absence of drug from the patient's specimen, the assay antibody binds the drug-linked fragment and prevents formation of the active enzyme; drug from the specimen competes for the antibody, permitting reassembly in proportion to its concentration.



Both EMIT and CEDIA assays are used for quantitative TDM in blood and for qualitative screening of drugs of abuse in urine. Microparticle capture assays are a type of qualitative competitive immunoassay that have become very popular, especially for urine drug screening tests. The use of either colored latex or colloidal gold microparticles enables the result to be read visually as the presence or absence of a colored band, with no special instrumentation required. Competitive binding occurs as the assay mixture is drawn by capillary action through a porous membrane. This design feature is responsible for the alternate names of the technique: lateral flow immunoassay or immunochromatography.



The simplest microparticle capture design uses an antidrug antibody bound to colored microparticles and a capture zone consisting of immobilized drug (Fig. 7–3). If the specimen is xenobiotic free, the beads will bind to the immobilized analyte, forming a colored band. When the amount of xenobiotic in the patient specimen exceeds the detection limit, all of the antibody sites will be occupied by xenobiotic from the specimen, and no labeled antibody will be retained in the capture zone. The use of multiple antibodies and discrete capture zones with different immobilized analytes can allow several xenobiotics to be detected with a single device.




FIGURE 7–3.


Microparticle capture immunoassay. (A) Diagram of a device before specimen addition. Colored microbeads (about the size of red blood cells) coated with antidrug antibodies (Y) are in the specimen well. At the far end of a porous strip are capture zones with immobilized drug molecules (•) and a control zone with antibodies recognizing the antibodies that coat the microbeads. (B) Adding the urine specimen suspends the microbeads, which are drawn by capillary action through the porous strip and into an absorbent reservoir (hatched area) at the far end of the strip. In the absence of drug in the urine, the antibodies will bind the beads to the capture zone containing the immobilized drug and form a colored band. Excess beads will be bound by antibody–antibody interactions in the control zone, forming a second colored band that verifies the integrity of the antibodies in the device. (C) If the urine contains the drug (•) in concentrations exceeding the detection limit, all of the antibodies on the microbeads will be occupied by drug from the specimen, and the microbeads will not be retained by the immobilized drug in the capture zone. No colored band will form. However, the beads will be bound and form a band in the control zone.





Although immunoassays have a high degree of sensitivity and selectivity, they are also subject to interferences and problems with cross-reactivity. Cross-reactivity refers to the ability of the assay antibody to bind to xenobiotics other than the target analyte. Xenobiotics with similar chemical structures may be efficiently bound, which can lead to falsely elevated results. In some situations, cross-reactivity can be beneficially exploited. For example, some immunoassays effectively detect classes of xenobiotics rather than one specific xenobiotic. Immunoassays for opioids use antibodies to morphine that cross-react to varying degrees with structurally related substances, including codeine, hydrocodone, and hydromorphone. Oxycodone typically has low cross-reactivity, and higher concentrations are required to give a positive result. The cross-reactivity of nonmorphine opiates varies with manufacturer. It is recommended that clinicians consult directly with their laboratory for the relative sensitivities of the immunoassay used. Structurally unrelated synthetic opioids, such as meperidine and methadone, have little or no cross-reactivity and are not detected by opiate immunoassays. Immunoassays for the benzodiazepine class react with a wide variety of benzodiazepines but with varying degrees of sensitivity.14,16 Because of the highly variable response of immunoassays to the various opiates and benzodiazepines, methods based on MS should be used for definitive results.
