KEYWORDS
instrumentation
gas concentrations
infrared absorption
mass spectrometry
Raman scattering analysis
monitoring methods
oxygen
nitrogen
carbon dioxide
anesthetic gases and vapors
How comfortable would you be not having end-tidal carbon dioxide monitoring after intubating a patient? Although we might be called on to intubate a patient in the intensive care unit (ICU) or emergency department (ED) without the benefit of capnography, we certainly rely on it in the operating room (OR). It is a standard of care for our specialty. Not only do we rely on capnography to confirm tracheal intubation, but we can also use the values and even the waveforms it produces to diagnose and treat the patient.
Along with pulse oximetry, capnography came into wide use in anesthesia in the mid-1980s. Clinicians then had two monitors that aided in detecting two of the most dreaded and serious events that could happen during a case: unrecognized hypoxemia and undetected esophageal intubation. Before capnography, undetected esophageal intubation was a leading cause of anesthesia morbidity and mortality. There are old stories in which the surgeon remarking that the blood looked darker than normal was the first sign of an esophageal intubation. What panic that would cause!
Although respiratory monitoring of anesthetic agents is not as vital to good outcomes as end-tidal carbon dioxide monitoring, it too is a standard of care. We rely on it in a myriad of ways during an anesthetic case. This chapter discusses the history, types, and physics (ouch!) involved in capnography and anesthetic respiratory gas monitoring.
We will not go into the physiology and interpretation of capnography waveforms. Although that is a fascinating and important subject, it is covered in other texts in more detail than we could manage here. We encourage you to learn as much as you can about capnography waveform interpretation: not only is it vital for providing good care to the patient, it is also often valuable when you need to troubleshoot problems with your machine or monitor.
A BRIEF HISTORY
You can go into as much detail regarding the history of capnography as you want, much as you can with the physics of capnography and anesthetic gas monitoring. Again, we stand on giants’ shoulders; contributions from chemists, physicists, electrical engineers, and physiologists all converge in capnography. It should not surprise you that much of the driving force to develop capnography came from the study of respiratory physiology. The first regular clinical use of capnography was not in the OR but in the ICU, during the care of ventilated patients. As mentioned, capnography became common in the OR in the mid-1980s and became an American Society of Anesthesiologists standard of care in 1991.
TYPES AND TECHNOLOGY
Three main technologies have been used in clinical capnography: infrared absorption, Raman scattering, and mass spectrometry. The first kind used clinically was the mass spectrometer. We will discuss all three of these types. In addition, capnometers can be divided into types by where they measure the respiratory sample: one type measures the sample within the anesthesia circuit (mainstream, or non-diverting), and the other takes a sample from the circuit and measures it inside a monitor (sidestream, or diverting). We will explain the pros and cons of mainstream versus sidestream sampling and then discuss the technologies used for capnography and anesthetic gas monitoring.
Types
Mainstream
This type measures carbon dioxide from within the patient’s anesthesia circuit. A special attachment called a cuvette is placed at the Y piece. The cuvette is slightly smaller than, and looks like, a heat and moisture exchange (HME) filter. An adaptor snaps over the cuvette. The adaptor is where the electrical parts are located and has a cable that runs to the display screen. Inside the adaptor is an infrared radiation (IR) source that shines IR through the clear window of the cuvette to a sensor also built into the adaptor. Using IR spectroscopy, the carbon dioxide at the Y piece is measured in real time for each inspiration and expiration.
This is the type of capnometer that you may see in an ICU or ED. Many vital sign monitors have a module that can be used for mainstream capnography. Mainstream monitors need regular calibration, which is done by attaching the adaptor to a special calibrating cuvette built into the adaptor cable. Some mainstream capnometers also incorporate an oxygen analyzer. Because no gas is siphoned out of the patient circuit with a mainstream monitor, there is nothing to send to the scavenging system, as there is with sidestream monitors. The cuvettes are reusable after sterilization. There are methods and attachments that allow the clinician to measure end-tidal carbon dioxide via nasal cannula with mainstream sampling. Mainstream sampling is said to be faster than sidestream sampling, but not so much faster that you would choose a mainstream monitor over a sidestream type for that reason alone.
There are a few disadvantages to mainstream monitoring. Blood or secretions inside the cuvette can interfere with proper function. The cuvette/adaptor assembly is relatively heavy compared with the rest of the anesthesia circuit, and it can pull on the endotracheal tube or laryngeal mask airway, possibly kinking or dislodging the tube. The adaptor generates heat, and although it is not that warm to the touch, there have been reports of it causing burns when it rested against a patient for a prolonged period. The adaptor itself is fragile and could be put out of action if it were dropped on the floor. The authors have also noticed that the spring that keeps the adaptor snapped onto the cuvette is easily broken, making it difficult to keep the adaptor in position.
Historically, the biggest disadvantage of the mainstream capnometer was that it did not measure any gas or agent besides carbon dioxide (and oxygen, if the system also had an oxygen analyzer). Although measuring only carbon dioxide and oxygen meets the standard of care, most clinicians preferred to also be able to measure inspired and exhaled anesthetic agents. Although there are now mainstream capnometer designs that do measure anesthetic agents, the earlier ability of sidestream capnometry to do this made sidestream monitoring more widespread.
Sidestream
In this type, a sample of gas is diverted from the patient circuit through a small tube that comes off the circuit near or at the Y piece. This sample travels through the tubing into the monitor box itself. This is the type that is used by most current capnometers and anesthetic gas monitors. Technologies that use or used sidestream sampling include mass spectrometry, Raman spectroscopy, and infrared spectroscopy. Therefore, unlike mainstream sampling, sidestream sampling can be used by monitors capable of measuring not only end-tidal carbon dioxide but all the other anesthetic gases and agents as well.
Sidestream sampling is easily used with a nasal cannula for capnography, using either a homemade version or a specially designed monitoring cannula. The special type of cannula has the sampling line built into the cannula itself, and some have a small spoon-shaped piece of plastic below the nasal prongs that funnels the exhalations of mouth breathers into the sampling ports. You can even rig the nasal cannula to monitor through the lumen of an oral airway. If you are in the magnetic resonance imaging suite, you can simply add extra tubing to the sampling line to monitor the capnogram.
By the way, if you do not know how to fabricate a homemade capnography-capable nasal cannula, you simply stick a 14-gauge intravenous (IV) catheter into a single prong from the side opposite the prong opening. Remove the needle and cut the catheter off flush with the prong. Then you can attach the Luer lock of the sampling line to the end of the IV catheter. The accuracy is decreased, especially with regard to the fraction of inspired oxygen (FiO2), but this simple technique nevertheless works. (Of course, don’t do this with the cannula attached to the patient!)
A few minor problems are associated with sidestream monitoring, but none are horrible enough to make you not want to use this type of capnometer. One problem is the need to scavenge the gas sample when the monitor is done with it if you are using anesthetic agents. The exhaust port of the monitor can be hooked up to the scavenging system of an anesthesia machine without too much trouble. It is even possible to return the sample to the patient circuit through separate tubing. If you are using very low flow or closed-circuit inhalational anesthesia, you must factor in how much volume per minute you are losing through the sampling line. This is usually anywhere from 100 to 150 mL/min; newer models, called microstream units, draw off only 50 mL/min.

Especially in head and neck cases, the sampling line tubing is easily kinked, interfering with monitoring. A quick trick to alleviate this is to use a roll of tape as a support around the tubing. Blood, secretions, and water condensation can also block the tubing. A disposable water trap at the end of the sampling line, right before it enters the monitor, catches moisture; these traps need to be changed on occasion. Leaks in the circuit can be caused by the Luer lock on either end of the sampling line being loose or unattached. Cracks can develop in the plastic water traps and cause either a leak or incorrect readings.
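To make the low-flow arithmetic concrete, here is a minimal sketch (in Python, not from this text) of how much of the fresh gas flow a sidestream analyzer diverts. The 150 and 50 mL/min sampling rates are the figures quoted above; the fresh gas flows are hypothetical illustrative values.

    # Minimal sketch: fraction of fresh gas flow diverted to a sidestream analyzer.
    # Sampling rates (150 and 50 mL/min) are the figures quoted in the text;
    # the fresh gas flows below are hypothetical low-flow examples.

    def fraction_diverted(fresh_gas_flow_ml_min: float, sample_flow_ml_min: float) -> float:
        """Return the fraction of the fresh gas flow lost to the sampling line."""
        return sample_flow_ml_min / fresh_gas_flow_ml_min

    for fgf in (500.0, 1000.0):          # hypothetical low-flow fresh gas flows (mL/min)
        for sample in (150.0, 50.0):     # conventional sidestream vs. microstream draw
            print(f"FGF {fgf:.0f} mL/min, sampling {sample:.0f} mL/min: "
                  f"{fraction_diverted(fgf, sample):.0%} of fresh gas diverted")

At a hypothetical 500 mL/min fresh gas flow, a conventional 150 mL/min draw diverts nearly a third of the fresh gas, which is why the sampling flow matters during very low flow or closed-circuit anesthesia.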
All in all, sidestream monitoring is much more versatile than mainstream monitoring, but mainstream monitoring hardware is less expensive.
Technology
Mass Spectrometry
This was the first kind of capnometer used clinically. You may have used a “mass spec” at some time during your education, such as in a chemistry lab. It is a large machine that can weigh charged atoms and molecules based on how far they travel when ionized in a vacuum and then deflected by an electrical or magnetic field.
Let’s look at an example. You give a basketball on the floor a shove, and it rolls along the floor. It passes a box fan, blowing air into its path. You would not expect the path of the basketball to be influenced much, if at all, by the air blown by the fan. The basketball continues along its path. Now roll a tennis ball along the floor. Because the tennis ball has less mass than the basketball, it would not surprise you too much if the fan blew the tennis ball slightly off course. Next roll a ping-pong ball along the floor and in front of the fan. You would expect the ping-pong ball, because of its lower mass, to be deflected by the fan to a greater degree than either the basketball or the tennis ball.
Now if we repeated this experiment many times for each ball and measured how far off a straight line each ball was deflected by the fan, we would have an averaged set of values that was repeatable in our setup. We could then estimate that a ball weighing somewhere between a tennis ball and a ping-pong ball would be deflected at an angle somewhere between our average deflection angles for those two balls.
In a mass spectrometer, the ions are the balls, the electrical or magnetic field is the fan blowing air, and the deflection of the ions is based on their mass, just like the deflection of the three balls on the floor in front of the fan. Matter is deflected and travels in the mass spectrometer according to its mass, and this can be measured by comparing its length of travel to that of known compounds. That is how a mass spectrometer determines the identity of a compound: by measuring how far its component pieces travel and comparing that with known values.
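As a rough illustration of that "compare against known values" step, here is a minimal Python sketch. The gases are drawn from those mentioned later in this section, but the deflection numbers are arbitrary made-up values, not data from any real instrument.

    # Minimal sketch: identify a gas by matching its measured deflection against
    # a table of known deflections, as described above. The deflection values
    # are arbitrary illustrative numbers, not data from any real mass spectrometer.

    KNOWN_DEFLECTIONS = {
        "isoflurane": 0.7,        # heavier molecules deflect less...
        "carbon dioxide": 2.3,
        "oxygen": 3.1,
        "nitrogen": 3.5,          # ...lighter molecules deflect more
    }

    def identify(measured_deflection: float) -> str:
        """Return the known gas whose stored deflection is closest to the measurement."""
        return min(KNOWN_DEFLECTIONS,
                   key=lambda gas: abs(KNOWN_DEFLECTIONS[gas] - measured_deflection))

    print(identify(3.0))          # prints "oxygen" with these made-up numbers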
The exact same thing happened in these early clinical capnometers. The task was actually easier because of the small number of substances to which the instrument would be exposed. These monitors would not encounter unknowns as in a chemistry lab. They would see the same things all the time: carbon dioxide, nitrogen, halothane, enflurane, isoflurane, nitrous oxide, and oxygen. (Desflurane and sevoflurane weren’t in clinical use back then.) Not only would the monitor tell you what was in the sample, it would also tell you how much. You knew the end-tidal agent, the inspired agent, and the nitrogen, as well as the all-important carbon dioxide.
There was a problem with using mass spectrometers in the OR, however: they were huge pieces of equipment. They would not fit on top of the anesthesia machine or even in the corner. In addition, they were complicated and had a lot of downtime. They were also quite expensive. So the companies that made the technology came up with a way for all the rooms in a busy OR suite to share one mass spectrometer.
Each room used the same mass spectrometer, located in a remote room somewhere in the OR suite. Sampling lines ran from a local monitor (where the readout screen and control buttons were) on top of the anesthesia machine to the central mass spectrometer. About every minute or so, the data on the screen would be updated. This was quite satisfactory: do you really need the end-tidal isoflurane updated constantly? No, once a minute or so is fine. But what about waiting for carbon dioxide? If you’re intubating someone, you surely don’t want to wait a minute until it is your room’s turn again to see whether you are in the trachea! The companies thought of this problem, too. The local box on top of the anesthesia machine in each room contained a separate infrared capnometer. That way, you didn’t have to wait for your room’s turn to know whether you had end-tidal carbon dioxide, because the carbon dioxide analysis was done right there in the room.
All of this may seem quite dated to you, and it is. You can still find a few facilities that have not updated their monitoring from mass spectrometry, but mass spectrometers for clinical anesthetic use are no longer produced. In its time, from the late 1970s (when it was first used in critical care) until the early 1990s, the mass spectrometer was the best way to perform respiratory gas analysis, and many institutions used it into the late 1990s and early 2000s. Interestingly, recent research on the detection of end-tidal propofol has used a mass spectrometer. Even if end-tidal propofol monitoring becomes commonplace one day, though, it is doubtful that mass spectrometry will return to clinical use, because infrared monitors would more likely be modified to measure end-tidal propofol.
Back in those days, you probably would have heard about “Sara” or “Elmer,” depending on what brand of mass spec was used at your hospital. “Sara” stood for “System for Anesthetic and Respiratory Analysis,” which was made by a company called PPG-Biomedical, and “Elmer” referred to “Perkin-Elmer,” the manufacturer of another clinical anesthesia mass spectrometer system.
Raman Scattering
This has nothing to do with spilling noodles. In the 1920s, a man named C.V. Raman discovered that when you shine light onto a compound, some of the light bounces off the molecules of that compound at a different wavelength than the original light. Both this new wavelength and its intensity can be measured, and by comparing them with a catalog of known values, you can tell what the compound is and what its concentration is. This is called Raman spectroscopy because it is based on the process of Raman scattering. Raman won the Nobel Prize in 1930 for this work.
Raman spectroscopy was adapted to clinical anesthesia in the late 1980s. It was a good system for its time. It could do all the things that a mass spectrometer system could do but in a much smaller package, right in the individual OR. The system was called “Rascal” (RAman SCattering AnALyzer). The light source was an argon laser, and about once a month or so, a small Thermos bottle–sized cylinder of argon was changed out on the back of the machine.
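To give a feel for the size of the wavelength shift involved, here is a minimal Python sketch. The 488-nm line is a commonly cited argon-laser output, and the Raman shift used is just an example value; neither is a specification of the Rascal monitor.

    # Minimal sketch: the Stokes-shifted wavelength produced by Raman scattering.
    # The 488 nm laser line and the example Raman shift are illustrative values,
    # not specifications of any particular clinical monitor.

    def stokes_wavelength_nm(laser_nm: float, raman_shift_cm1: float) -> float:
        """Wavelength of light scattered with a given Raman shift (in cm^-1)."""
        incident_cm1 = 1e7 / laser_nm             # convert wavelength to wavenumber
        scattered_cm1 = incident_cm1 - raman_shift_cm1
        return 1e7 / scattered_cm1

    # A shift of about 1,400 cm^-1 moves 488 nm light out to roughly 524 nm,
    # a measurably different wavelength the analyzer can pick out and quantify.
    print(f"{stokes_wavelength_nm(488.0, 1400.0):.1f} nm")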
This technology is no longer in common use in clinical anesthesia. Infrared gas monitoring was, and is, cheaper to buy and to maintain.
Infrared Analysis
This is the most commonly used anesthetic gas monitoring system. The technology is cheaper than the previously mentioned methods, so it has more or less replaced every other means of capnography and anesthetic gas monitoring; mass spectrometry and Raman spectroscopy are now really only of historic interest. Probably every gas monitor you will see today uses infrared spectroscopy.
Infrared technology relies on the fact that different substances absorb different wavelengths of light. This is also discussed in Chapter 17 on pulse oximetry. Simply put, infrared light is shone through the gas sampled from the patient’s airway or circuit, and based on which wavelengths of infrared light the sensor detects (that is, the light the molecules in the sample did not absorb), the monitor can tell the type and amount of gas or vapor present.
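As a back-of-the-envelope illustration of the absorption idea, here is a minimal Python sketch based on the Beer-Lambert law (absorbance is proportional to concentration). The absorptivity and path length are arbitrary illustrative constants, not the calibration values of any real monitor.

    # Minimal sketch: inferring gas concentration from how much IR light gets
    # through the sample, using the Beer-Lambert law. The absorptivity and path
    # length are made-up illustrative constants, not real calibration data.
    import math

    def concentration_percent(transmittance: float,
                              absorptivity_per_percent_cm: float = 0.02,  # hypothetical
                              path_length_cm: float = 1.0) -> float:
        """Estimate gas concentration (%) from the fraction of IR transmitted."""
        absorbance = -math.log10(transmittance)
        return absorbance / (absorptivity_per_percent_cm * path_length_cm)

    # If 79% of the IR at a CO2-specific wavelength reaches the detector:
    print(f"about {concentration_percent(0.79):.1f}% CO2 (with these made-up constants)")

Real monitors bury this calculation inside factory calibration curves, but the principle is the same: less light reaching the detector at a gas-specific wavelength means more of that gas in the sample.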
Infrared gas monitoring systems are designed with multiple chambers so that the components of the sample can be measured simultaneously. (The mass spectrometer and Raman scattering types also have multiple chambers.) Sample analysis is quick enough that you get both inspired and expired results. The units are now so small that they are easily portable and may be built into transport monitors.
The traditional type of infrared device is called “black body.” The black body type generates its infrared light from a heating element inside the monitor. This type works fine, but the IR produced in this manner contains many wavelengths that are not needed, so filters are used to keep those out of the sampling chamber, allowing through only the radiation that is useful for gas monitoring. Smaller units have been designed using what is called “microstream” technology, in which the IR comes from a laser source. This cuts down on the amount of unused IR radiation produced. Microstream units also draw far less sample per minute: 50 mL/min versus the 100 to 150 mL/min that black body units siphon from the patient circuit.
Infrared technology has one main drawback: it cannot detect or measure molecules made up of two identical atoms, such as oxygen and nitrogen (O2 and N2). This means that another form of oxygen sensor is needed, and because nitrogen is undetectable, you cannot rely on the monitor to tell you when a large amount of nitrogen (e.g., from an air embolism) appears. The mass spectrometry and Raman spectroscopy types did measure these two gases.
DISPOSABLE (CHEMICAL) CAPNOMETERS
Most of you have probably seen or used disposable capnometers. They are especially handy for first responders and are also commonly stocked on code carts in health care settings. They are reminiscent of an HME and have 15-mm adaptors to fit in the circuit at the Y piece or onto the endotracheal tube adaptor. These devices incorporate a disk of filter paper that is impregnated with a chemical base and an indicator. The pH of the paper changes when it is exposed to carbon dioxide during exhalation, so the color of the filter paper changes; on inspiration, the color reverts to its initial shade. Which colors are involved depends on the chemistry and brand of the device.
These devices are mainly good for qualitative, not quantitative, information, even though there is a built-in scale against which to compare the color shades that are generated. You will more or less find out whether carbon dioxide is present and whether its level is low, medium, or high. But for what they are designed to do, that is good enough. In a true code or cardiac arrest situation, however, if there is very little or no carbon dioxide exchange, disposable capnometers will be of limited value because there might not be enough carbon dioxide in the exhalations to change the color. They will work for quite a while; one of the authors has even done a whole case using one at a remote location when the regular capnometer failed. Secretions, gastric contents, and moisture are the devices’ worst enemies and interfere with proper function. Table 16-1 lists several types of gas monitors.