Ergonomics of the Anesthesia Workspace





History


Decades ago, accidental delivery of hypoxic gas mixtures was a constant threat during general anesthesia. Many instances of hypoxia were attributed to human error. In some cases, the anesthesiologist mistakenly turned the wrong gas flow control knob or failed to recognize that the oxygen cylinder was empty. In another case, a technician placed the flowmeter tubes in the wrong positions while servicing the anesthesia machine. In each case, a small human error led to major patient injury or death. With modern anesthesia machines, the risk of accidental hypoxia has been dramatically reduced. In effect, the potential for human error has been reduced by redesigning the equipment. The concept that equipment can be designed for optimal performance by the human user is one of the core principles of ergonomics. This chapter reviews the role of ergonomics in the practice of anesthesia, including the design of the anesthesia workplace.




What is Ergonomics?


Most people have probably thought about ergonomic issues, even if they are not familiar with the term. Accidentally sticking oneself with a hypodermic needle and wondering whether there might be a better way to inject intravenous (IV) drugs is, in effect, thinking about safety, one area of concern in ergonomics. Ergonomics involves optimizing the work environment for the benefit of the user, such as moving the anesthesia machine and elevating the chair to see both the patient and the monitors more easily. Evaluating the ease of use of equipment before purchase is another ergonomic activity.


Ergonomics is a discipline that investigates and applies information about human requirements, characteristics, abilities, and limitations to the design, development, engineering, and testing of equipment, tools, systems, and jobs. The objectives of ergonomists are to improve safety, performance, and well-being by optimizing the relationship between people and their work environment. The terms ergonomics, human factors, human engineering, and usability engineering are often used interchangeably; however, the term ergonomics is used exclusively in this chapter.


Scope of Ergonomics


The Software-Hardware-Environment-Liveware (SHEL) model, first introduced by Edwards in 1972, can be used to illustrate the scope of ergonomics (Fig. 24-1). Within this model, all jobs are performed by three classes of resources. The first class is composed of the physical items, or hardware; this includes the buildings, equipment, and materials used for the job. The second class, the software, consists of the rules, guidelines, policies, procedures, and customs involved in the job. People make up the third class of components, the liveware. These components act together within a larger context, or environment, which is composed of external physical, economic, social, and political factors that affect the job.




FIGURE 24-1


The software-hardware-environment-liveware (SHEL) model of system resources.


Ergonomics is the discipline of designing and testing the human/systems interface with the goal of improving the interactions between the liveware component and the other components. In the broadest terms, ergonomics deals with the study and enhancement of the tools and systems used by humans to interact with the physical world around them.


Ergonomics is both a science and a profession, encompassing both research and application. One goal of ergonomics research is to understand and describe the capabilities and limitations of human performance. Another is to develop principles of interaction between people and machines. Examples of ergonomics research are the investigation of visual perception in relation to a particular task and the measurement and compilation of anthropometric data (e.g., the distribution of tibia and femur lengths among men aged 18 to 45 years). Application involves the use of these data in the development of equipment, systems, and jobs. For example, the selection of color coding for displays is based on an understanding of visual perception, information processing, and decision theory, whereas anthropometric data are used in the design of a chair.


Some aspects of ergonomics focus on the worker and the human-to-human interfaces within the system. This may include task and workload analysis, examination of vigilance and fatigue, and analysis of team interactions. The focus of this chapter is on the interface between the liveware and the hardware, that is, the interface between the human and the machine.




Ergonomics Research in Anesthesiology


The number of ergonomics studies in anesthesiology continues to grow. The focus of these studies has been to identify human/machine interface factors that affect patient safety and the anesthesiologist’s job performance.


Task Analysis Studies


Task analysis is a basic ergonomics methodology for evaluating jobs or designing new human/machine systems. Several variants of this methodology are used, such as cognitive task analysis, critical decision method, and time and motion studies, depending on the focus of the problem. Task analysis methods typically involve the structured decomposition of work activities and/or decisions and the classification of these activities as a series of tasks, processes, or classes. At least three interacting components can be identified and described for each task: the task’s goals, constraints, and behaviors.
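As an illustration, the sketch below shows one way such a decomposition might be captured in code; the record type, field names, and the sample task are entirely hypothetical and serve only to make the goals/constraints/behaviors structure concrete.

```python
from dataclasses import dataclass, field

# Hypothetical record type for one unit of a task decomposition; the three
# component lists mirror the goals, constraints, and behaviors described above.
@dataclass
class Task:
    name: str
    goals: list[str] = field(default_factory=list)        # what the task is meant to achieve
    constraints: list[str] = field(default_factory=list)  # limits on how it may be performed
    behaviors: list[str] = field(default_factory=list)    # observable actions that accomplish it

# An invented example entry for an airway task.
laryngoscopy = Task(
    name="laryngoscopy",
    goals=["visualize the glottis"],
    constraints=["limited time before desaturation", "one free hand"],
    behaviors=["position head", "insert blade", "lift to expose the cords"],
)
```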


One of the first formal time and motion studies ever performed was an analysis of surgeons’ tasks in the operating room (OR). Frank and Lillian Gilbreth conducted time and motion studies of surgical teams during the early 1900s and concluded that surgeons spent an inordinate amount of time looking for instruments as they picked them off the tray. Their findings led to the current practice of the surgeon requesting instruments from a nurse, who places the instrument in the surgeon’s hand.


One of the first time-and-motion studies of anesthesiologists was conducted to identify ways to improve anesthesiologists’ job satisfaction. Drui and colleagues filmed eight operations and classified the anesthesiologists’ activities into 24 categories. They then had anesthesiologists rate each activity’s importance, knowledge demand, and skill requirement. They found that filling out the anesthesia record occupied a large proportion of the anesthesiologists’ time but was rated as relatively unimportant and easy to perform. They also found that blood pressure and pulse were determined faster when the pressure gauge was located at the head of the OR table instead of on the anesthesia machine. An unexpected finding was that the anesthesiologist’s attention was directed away from the patient or surgical field 42% of the time. The authors recommended automating the task of creating an anesthesia record and redesigning the anesthesia machine to increase productivity and decrease distraction away from the patient and surgical field.


It is interesting that only recently, almost 40 years after Drui’s recommendations were published, have electronic anesthesia record-keeping systems and integrated anesthesia workstations attained commercial viability. Kennedy and colleagues recorded three coronary artery bypass procedures on video and coded 13 categories of anesthesiologist activity at 2-second intervals. They found that the two most frequent activities were “observe patient” and “scan entire field” but that attention was directed away from the patient and surgical field 30% of the time. Logging data on the anesthesia record occupied 10% to 15% of the anesthesiologists’ time; this activity was tightly linked with observing instrument displays. These authors also recommended automation of the anesthesia record and a more structured arrangement of equipment around the patient and surgical field.


Neither of these studies directly resulted in a redesign of anesthesia equipment. However, in 1976, Fraser Harlake (Orchard Park, NY) produced a prototype line-of-sight anesthesia machine designed by Goodyear and Rendell-Baker. With this machine, the user could see both the patient and the machine controls with minimal eye movement. Although it was never made commercially available, the machine may have nevertheless influenced the design of the Ohio Modulus Wing anesthesia machine (Ohmeda; GE Healthcare, Waukesha, WI). An important feature of the Modulus Wing machine was that the displays and controls had more ergonomic viewing angles and could be positioned closer to the patient.


Boquet and colleagues collected 16 hours of time and motion data during general anesthetic procedures before redesigning an anesthesia system. They recorded and classified 31 manual activities and 26 visual activities. In their study, 40% of the anesthesiologists’ visual attention was directed away from the patient or surgical field, and the anesthesiologists were physically idle 72% of the time. They also found that logging data on the anesthesia record occupied 6% of the anesthesiologists’ time and was frequently linked to measurement of blood pressure. In addition, the patterns of activity were different during the four quarters of the anesthetic procedure. Based on these observations, the authors proposed a new anesthesia machine design.


More recent studies have confirmed previous findings that anesthesiologists spend significant amounts of time on indirect patient-related tasks and that the distribution of tasks is influenced by the stage of the anesthetic procedure. The similarities of the results in these time-and-motion studies are striking, especially because they were conducted over 20 years in a wide variety of clinical settings.


Weinger and colleagues at the University of California–San Diego Medical Center used a combination of task analysis methods to compare the clinical performance of novice and experienced anesthesia care providers. A trained observer used a computer to record, in real time, 28 anesthesia-related tasks during 22 general anesthesia cases. Clinicians also rated their workload at intervals during the case and performed a vigilance task (Fig. 24-2; see also Chapter 23).




FIGURE 24-2


Task distribution, subjective workload, and vigilance of one senior anesthesia resident during a single, routine, 160-minute general anesthetic procedure. Top, Subjective workload score (circles) and response latency to a vigilance light (triangles) over time. Subjective workload was self-reported on a scale from 6 (no effort) to 20 (maximal effort). Response latency was the time required for the anesthesiologist to recognize the illumination of a small red light located adjacent to the electrocardiograph monitor. Bottom, Distribution and pattern of tasks performed. Each data point (square) represents a single occurrence of that task category. IV, intravenous.


Important differences were detected between the novices (residents in their first clinical anesthesia year) and the experts (residents in their third clinical anesthesia year and certified registered nurse anesthetists [CRNAs]). Novices took longer to induce anesthesia, performed fewer tasks per unit of time, and rated their workload as higher (Fig. 24-3). In addition, novices appeared less efficient in their allocation of effort to different tasks. There were, however, many common findings among groups. With few exceptions, task distribution was similar between the novices and experts, although after intubation, experts spent significantly more time observing the surgical field (Fig. 24-4). In both groups, there was a large effect of the stage of the anesthetic on task distribution; during the preintubation period, a more limited set of tasks was performed, and task durations were shorter than during the postintubation phase. Hardly any record keeping was done by novice or expert practitioners during the preintubation period, but record keeping consumed 15% of their postintubation time.




FIGURE 24-3


Workload was higher for novice residents than for experienced practitioners during routine general anesthesia procedures. Subjective workload was scored by an impartial observer, sitting in the operating room, using the 15-point Borg scale (6 = no effort, 20 = maximal effort) throughout a series of procedures performed by novice (2 weeks to 2 months of anesthesia training) or experienced providers (senior residents and certified registered nurse anesthetists). Workload was higher in the novices both before and after intubation (∗ P < .05, novices vs. experienced). In addition, in the experienced providers, but not in the novices, a statistically significant decrease was reported in workload after intubation (§ P < .05, preintubation vs. postintubation). Despite a higher reported workload, novices performed fewer tasks per minute and were less vigilant (data not shown).

(From Weinger MB, Herndon OW, Zornow MH, et al: An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology 1994;80[1]:77-92.)



FIGURE 24-4


Distribution of tasks among expert and novice anesthesia providers during general anesthesia. Experienced practitioners (third-year anesthesia residents and experienced certified registered nurse anesthetists) performed 11 procedures under limited supervision by attending anesthesiologists. Novices (anesthesia residents in their first 8 weeks of training) performed 11 procedures under almost constant attending supervision. Each bar represents the percent of time used in one of 22 task categories during the preintubation or postintubation period of the case. For simplicity, five communication tasks are grouped into the “conversing” category, and two mask ventilation tasks are grouped into the “ventilation by mask” category. Both practitioner groups demonstrated dramatic differences in task distribution between the preintubation and postintubation periods. Little record keeping was done by any of the practitioners during the preintubation period, but recording consumed 15% of postintubation time. With few exceptions, task distribution was similar between the novice and experienced providers; after intubation, experts spent significantly more time observing the surgical field, and novices spent more time conversing with the supervising attending. IV, intravenous line.

(Data from Weinger MB, Herndon OW, Zornow MH, et al: An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology 1994;80[1]:77-92.)


Workload Studies


Workload assessments are important both for evaluating the cognitive requirements of new workplace designs and equipment and for predicting the worker’s cognitive capacity for additional tasks. Workload can have important effects on clinical performance. For example, recovery from critical events may be impaired during high-workload situations. Workload is multidimensional and complex; multiple cognitive, psychological, and physical factors contribute to overall workload, which has been divided into various categories such as perceptual, communicative, mediational, and motor load. Specific workload measurement techniques may be more sensitive and/or specific for different types of workload. From a practical standpoint, workload measures can be divided into psychological, procedural (i.e., task related), and physiologic metrics.


Psychological metrics include psychological tests and survey instruments, either retrospective or prospective. A common example is subjective workload assessment, in which either an observer or the subjects themselves rate their workload, or some component of it, on a predefined scale. For example, Weinger and colleagues assessed subjective workload by having an observer and the subject rate the subject’s workload every 10 to 15 minutes during general anesthesia cases using an integrated workload scale ranging from 6 (no work) to 20 (extremely hard work). They found a strong correlation between the ratings of the subject and the observer, and subjective workload was significantly higher prior to intubation than during the remainder of the case (Fig. 24-5).
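A minimal sketch of the kind of agreement analysis such a study implies is shown below, using a standard Pearson correlation; the paired ratings are invented for illustration and are not data from the study.

```python
from scipy.stats import pearsonr

# Invented paired workload ratings on a Borg-style 6-20 scale, collected at
# 10- to 15-minute intervals during one hypothetical case.
subject_ratings = [15, 12, 9, 8, 8, 10, 7]
observer_ratings = [14, 12, 10, 8, 9, 10, 8]

r, p = pearsonr(subject_ratings, observer_ratings)
print(f"observer-subject correlation: r = {r:.2f}, p = {p:.3f}")
```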




FIGURE 24-5


Workload, as reported by the anesthesia provider, varies with both the type of case and the phase of the anesthetic. Workload reported by experienced anesthesia providers using a Borg scale (6 = no effort to 20 = maximal effort) was higher during induction preintubation than during the maintenance phase postintubation (‡ P < .05, preintubation vs. postintubation) independent of the type of general anesthetic case performed. In addition, subjective workload during cardiac surgical cases was uniformly greater when compared with routine ambulatory general anesthetics (∗ P < .05, cardiac vs. ambulatory) independent of phase of the case.

(Data from Weinger MB, Herndon OW, Gaba DM: The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology 1997;87[1]:144-155; and Weinger MB, Herndon OW, Zornow MH, et al: An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology 1994;80[1]:77-92.)


Procedural workload assessment techniques are generally based on alterations in primary or secondary task performance. For example, Gaba and Lee used the ability of anesthesia residents to perform an extra task (paced arithmetic problems) during administration of anesthesia as a measure of the workload of the primary task (administering anesthesia). They found that performance of the secondary task was compromised in 40% of the samples; that is, the problem was skipped, or there was a greater than 30-second excess response time. Workload was highest during the induction and emergence phases of anesthesia. Higher workload occurred during performance of manual tasks, conversations with OR personnel, and interactions with the attending anesthesiologist.


Weinger and colleagues found a correlation between subjective workload and objective workload measured with a different secondary task probe, time to respond to the illumination of a light in the anesthesia monitoring array. Not only was the response time slower during induction than during maintenance, but less experienced anesthesia residents had slower response times compared with more experienced residents, especially during induction. This suggests that less experienced clinicians may have less spare capacity to respond to new task demands, particularly during high-workload conditions. Findings to date suggest that during the course of a typical OR procedure, the anesthesiologist’s workload is heavy 20% to 30% of the time and very low 30% to 40% of the time, and the anesthesiologist is physically active but able to respond to additional tasks the remainder of the time.


When workload increases, the sympathetic nervous system is activated, leading to a variety of physiologic changes, many of which can be measured. For example, increased workload is associated with increases in heart rate or respiratory rate, decreases in heart rate variability or galvanic skin response, and changes in pupil size or vocal patterns. In two older reports, Toung and colleagues reported that the heart rate of anesthesia providers increased significantly while they were administering anesthesia, and heart rate increased to between 39% and 65% above baseline values at the time of patient intubation, although more experienced individuals manifested less of a heart rate increase. These results have been corroborated by Weinger and colleagues. Experienced anesthesia providers showed significant increases in heart rate, above baseline values, during the induction and emergence phases of general anesthesia in healthy outpatients. In addition, their heart rate variability increased throughout the procedure; this is consistent with diminished stress levels as these experienced providers became more comfortable during the course of administration.


Quantitation of the pace and difficulty of the tasks performed in a job may be an alternative type of procedural workload measure. Weinger and colleagues used data from their time-motion study to generate what they called “task density,” a continuous measure of the number of tasks performed per unit of time. Although task density correlated well with subjective workload in this study, its value seemed limited by the fact that the demands imposed by different tasks were all weighted equally. Workload density has been proposed as a real-time measure that incorporates both task density and a measure of the subjective workload associated with individual clinical tasks (Fig. 24-6). Workload values for common anesthesia tasks were estimated from the results of a questionnaire on which anesthesia providers rated the difficulty of specific tasks (e.g., “observe monitors” or “laryngoscopy”) in three different dimensions: 1) mental workload, 2) physical workload, and 3) psychological stress. Factor analysis was used to generate a single index of the perceived workload for each task (i.e., workload factor scores; Table 24-1). Workload density was calculated by multiplying the amount of time spent on each task by that task’s workload factor score. Workload density correlates with heart rate variability, response latency, and subjective workload.
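The calculation itself is simple; the sketch below implements it as described, assuming a task log of (task, seconds) pairs within one observation window. The factor scores are taken from Table 24-1, but the log format and window length are assumptions made for illustration.

```python
# Workload factor scores for a few tasks, taken from Table 24-1.
WORKLOAD_FACTOR = {
    "laryngoscopy": 1.519,
    "intubation": 1.463,
    "observe monitors": 0.593,
    "recording (manual)": 0.596,
}

def workload_density(task_log, window_seconds):
    """Sum of (time on task x that task's workload factor), per unit of time."""
    weighted = sum(seconds * WORKLOAD_FACTOR[task] for task, seconds in task_log)
    return weighted / window_seconds

# One hypothetical 5-minute window during induction.
log = [
    ("laryngoscopy", 40),
    ("intubation", 30),
    ("observe monitors", 120),
    ("recording (manual)", 60),
]
print(f"workload density: {workload_density(log, 300):.2f}")
```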




FIGURE 24-6


Workload density (diamonds) and average heart rate (triangles) for a senior resident during a single, routine, general anesthesia procedure. Workload density is a real-time procedural measure of clinical workload. Note that the subject took a 10-minute break in the middle of the case, during which time data collection was suspended.


TABLE 24-1

Workload Values Associated with Various Anesthesia Tasks

From Weinger MB, Herndon OW, Gaba DM: The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology 1997;87(1):144-155; and Keenan RL, Boyan P: Cardiac arrest due to anesthesia. JAMA 1985;253:2373-2377.


Task Value
Procedural
Laryngoscopy 1.519
Intubation 1.463
Extubation 1.426
Controlled ventilation by mask 1.399
Teaching 1.333
Airway secure/manipulation 1.130
Position patient 1.130
IV catheter placement 0.940
Spontaneous mask ventilation 0.935
Prep for next case 0.909
Adjust TEE 0.841
Other tasks 0.700
Recording (manual) 0.596
Medication preparation 0.519
Tidying up 0.475
Adjust monitors 0.441
IV medications given 0.426
Anesthesia machine adjustment 0.404
Suctioning 0.352
Adjust IVs 0.222
Conversational
Attending conversation 0.931
Surgeon conversation 0.907
Patient conversation 0.685
Converse with others 0.308
Nurse conversation 0.259
Observational
Observe TEE 0.672
Observe monitors 0.593
Observe patient 0.574
Observe airway 0.500
Observe anesthesia machine 0.482
Observe surgical field 0.352
Observe IVs/fluids 0.154
Other observation 0.154

Higher values represent greater workload. Workload values were calculated using a factor analysis on data obtained from questionnaires completed by anesthesia providers, who were asked to rate each task on a three-point workload scale (high, medium, or low) based on its associated physical effort, mental effort, and psychological stress. Workload values are used in the calculation of workload density.

IV, intravenous line; TEE, transesophageal echocardiography.
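For readers unfamiliar with the method, the sketch below shows how a single composite score per task can be extracted from three-dimensional ratings with an off-the-shelf factor analysis; the rating matrix is invented, and this is not the authors’ actual analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Invented questionnaire data: one row per task, columns are mean ratings
# (1 = low, 2 = medium, 3 = high) for physical effort, mental effort, and
# psychological stress, pooled across respondents.
ratings = np.array([
    [3.0, 2.8, 2.6],   # laryngoscopy
    [1.8, 2.4, 1.9],   # observe monitors
    [1.2, 1.5, 1.1],   # adjust IVs
])

fa = FactorAnalysis(n_components=1, random_state=0)
factor_scores = fa.fit_transform(ratings).ravel()
print(factor_scores)  # one composite workload score per task
```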


Attention Studies


Vigilance has long been considered important to anesthesiologists, as reflected in the word’s inclusion on the official seal of the American Society of Anesthesiologists (ASA). Anesthesiologists understand the need to pay attention to details and subtle signs that could easily be overlooked. Vigilance is discussed in some detail in Chapter 23, although several studies pertinent to the above discussion are presented here. Kay and Neal performed one of the earliest studies of anesthesia vigilance, but their experiment had a number of methodological flaws. Cooper and Cullen subsequently described a better method for investigating auditory vigilance. They used a computer-controlled device to occlude the stethoscope tubing silently at random intervals during routine general anesthesia cases. Study participants were instructed to press a button to restore function whenever they perceived the absence of stethoscope sounds. The elapsed time between the occlusion of the tubing and the press of the button was automatically recorded. Researchers studied 320 stethoscope occlusions in 32 intubated patients; the interval from occlusion to detection ranged from 2 to 457 seconds with a mean of 34 seconds (Fig. 24-7). They concluded that auditory vigilance during general anesthesia was typically high but not infallible. Manual tasks and conversations interfered with auditory vigilance because the subjects were involved in one of these activities in all instances of response times greater than 5 minutes.
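The analysis this method supports is straightforward: summarize the distribution of occlusion-to-detection latencies and inspect the long tail. A minimal sketch follows; the latencies are invented for illustration and are not the study’s data.

```python
import statistics

# Invented occlusion-to-detection latencies, in seconds.
latencies = [4, 9, 12, 18, 21, 25, 30, 34, 47, 58, 95, 310, 420]

print(f"range: {min(latencies)}-{max(latencies)} s")
print(f"mean:  {statistics.mean(latencies):.0f} s")
# Flag the long-tail responses of the kind the authors traced to competing
# manual tasks and conversations.
print("responses > 5 min:", [t for t in latencies if t > 300])
```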




FIGURE 24-7


Comparison of intraoperative vigilance of anesthesiologists from two different studies. Audible vigilance was assessed as response time to detect occlusion of the esophageal stethoscope. Visual vigilance was measured as response time to detect an abnormal value displayed on a physiologic monitor.

(Modified from Cooper JO, Cullen BF: Observer reliability in detecting surreptitious random occlusions of the monaural esophageal stethoscope. J Clin Monit 1990;6:271-275, and Loeb RG: Monitor surveillance and vigilance of anesthesia residents. Anesthesiology 1994;80:527-533.)


In another study, Loeb evaluated visual vigilance in eight anesthesia residents by displaying numbers at random intervals on an OR monitor during operative procedures. The residents were required to detect an “abnormal” value and asked to respond by pressing a button on the anesthesia machine. During 60 minor operative procedures, the average response time was 61 ± 61 seconds (mean ± standard deviation), and 56% of the detections were made within 60 seconds. Compared with Cooper’s study, it appears that response times in the OR are longer for visual than for auditory signals (see Fig. 24-7).


Loeb conducted a second vigilance study to investigate why his subjects took longer to detect changes in monitored data during the induction phase of anesthesia than during the maintenance phase. Residents performed the vigilance task described above, and task analysis data were recorded concurrently by a trained observer. Ten residents were studied during 73 surgical procedures, and performance on the vigilance task correlated with monitor-watching activities. Residents spent less total time watching monitors during induction than during maintenance, and the average duration of monitor observations was shorter. These results, combined with the findings of the above workload studies, suggest that anesthesiologists watch the monitors less during high-workload periods, such as during induction, so they may be less aware of electronically monitored data during that time.


Automation and New Technologies


A recurrent application of task analysis, workload, and attention studies has been to investigate the effect of automation and new technologies on anesthesiologist performance. The impetus for these studies may derive from two opposing schools of thought: one espousing technology and the other decrying it. From one side come claims that technology decreases workload, enhances task efficiency, and increases idle time, thereby allowing the anesthesiologist more opportunity to observe and process information from the patient, equipment, and surgical field. The other side claims that technology removes the human from the information loop, thereby distancing the anesthesiologist from the patient and decreasing situation awareness. A more balanced view may be that technology can improve or degrade human performance, depending on how it is implemented. Automation will only prove beneficial if the human was previously overloaded, the automated system is a team player (i.e., responsive, directable, and nonintrusive), and the interface between them supports the human’s situation awareness. Systems that do not fulfill these criteria may create new problems and degrade overall system performance.


One study suggests a beneficial effect of automation on anesthesiologist task distribution. McDonald and colleagues compared the results of two time-motion studies conducted 5 years apart at the Ohio State University Hospitals. In the newer series, automated blood pressure devices, ventilators, and disconnect alarms were used. With these newer technologies, the time that anesthesiologists spent directly observing or monitoring the patient increased from less than 25% to nearly 60% of their total task time. At the same institution, Allard and colleagues examined the effect of an electronic anesthesia record keeper (EARK) on the time spent keeping records and the anesthesia resident’s situation awareness. They videotaped 37 general anesthesia procedures in which record keeping was done manually and 29 cases that used a commercial EARK. The intraoperative time of the subjects (33 anesthesia residents and 8 CRNAs) was categorized into 15 predefined activities. Situation awareness was assessed by having the subject turn away from the monitors and recall the value of eight patient variables. No difference was reported between the two groups in the time spent keeping records or the ability to recall clinical data accurately.


Loeb also investigated whether intraoperative vigilance was different when residents kept a manual record than when a human assistant performed the charting. Nine residents were studied during 36 procedures in a within-subjects balanced design. Vigilance was assessed as the subject’s response time and detection rate to detect an experimental signal displayed on the physiologic monitor. No overall difference was reported in vigilance between the two record-keeping conditions, but a tendency was observed toward reduced vigilance (i.e., longer response times or lower detection rates) during high-workload periods in the manual record-keeping group.


Weinger and colleagues studied the effects of modern anesthesia technology during the prebypass period of 20 coronary artery bypass graft procedures. In 10 cases record keeping was done manually; in the other 10 cases, a commercially available EARK was used. Transesophageal echocardiography (TEE) was used in all cases. The investigators collected task analysis data (32 task categories, recorded by an observer in real time), subjective workload ratings (10- to 15-minute random intervals), and response latencies to the illumination of a light in the monitoring array. The EARK group spent less time on record keeping between intubation and initiation of bypass and more of their time observing the monitors than did the group keeping manual records. However, no difference was found between the record-keeping groups in subjective workload or rapidity of detecting illumination of the light. Subjects spent nearly 8% of their time observing or adjusting the TEE, and it took an average of 7.4 minutes to insert the TEE and perform a preliminary assessment. Residents were slower to react to the illuminated light while observing or adjusting the TEE than while performing record keeping or other monitor-observation tasks.


In these studies, no demonstrable negative effect of automated record keeping was seen. However, the results from Weinger and colleagues demonstrate a high workload imposed by current TEE technology and suggest the potential for impaired vigilance when TEE is used intraoperatively by a solo practitioner.


Critical Incident Studies


A careful analysis of adverse events and “near misses” can lead to productive changes in system structure, equipment design, training procedures, and other interventions to improve safety. Critical incident analysis is an established method for investigating human error that was first used in 1954 to study near misses in aviation. The technique involves structured interviews of people who have either observed or been involved in unsafe acts. Analysis of these interviews often provides evidence of behavior patterns and other recurrent factors that may contribute to accidents.


Cooper and colleagues were the first to apply the critical incident technique to anesthesiology. From 1975 through 1984, they collected descriptions of 1089 critical incidents from 139 anesthesiologists, residents, and nurse anesthetists. The descriptions were obtained through a combination of retrospective interviews and contemporaneous reports. Critical incidents were defined as occurrences of “human error or equipment failure that could have led (if not discovered or corrected in time) or did lead to an undesirable outcome, ranging from increased length of hospital stay to death.” Their data indicated that human error was responsible for 65% to 70% of the incidents. The 67 incidents that resulted in substantive negative outcomes included 28 technical human errors, 23 judgmental errors, and 13 vigilance errors. A number of recurrent technical human errors were related to the design or organization of equipment. Examples of these included syringe and drug ampoule swaps, gas flow control technical errors, vaporizers unintentionally turned off, drug overdoses (technical), misuses of blood pressure monitors, breathing circuit control technical errors, and wrong IV lines used. On the basis of their findings, the authors recommended a standardized system of syringe labels and redesign of the breathing circuit to prevent disconnections.


A detailed description of one of their critical incidents, gas flow control technical error, illustrates the importance of evaluating equipment designs prior to implementation. At one of the hospitals where the studies were conducted, all the anesthesia machines had been modified. On each machine, the oxygen flow control knob had been replaced with a large, square knob in an attempt to distinguish it from the nitrous oxide knob. However, rather than preventing gas flow control errors, the knob was found to be a contributing factor: half the accidental decreases in oxygen flow occurred when the knob was bumped by an object placed on the desktop surface of the anesthesia machine. This example highlights the importance of field testing new device designs by intended users.


Subsequent critical incident studies have been performed using contemporaneous reporting strategies. In each, human error has been a predominant cause of mishaps, and the patterns of incident types and associated factors have been similar. Kumar and colleagues demonstrated that critical incidents decreased when an anesthesia equipment checklist was used, old anesthesia machines were replaced, and incidents were discussed at department conferences. They recommended critical incident surveillance as a method of identifying specific problems and ensuring quality control.


Since 1989, the Australian Patient Safety Foundation has supported an ongoing multi-institutional collection of anesthesia critical incidents. Anesthesiologists from 90 participating hospitals and practices in Australia and New Zealand anonymously report unexpected incidents using a structured format. Each report is entered into a computerized database after it has been reviewed and classified using standard keywords. In 1993, an exhaustive analysis of the first 2000 incidents was published. Human error was believed to be involved in 83% of the incidents; only 9% of the incidents involved equipment failure. Equipment design improvement was suggested as an appropriate corrective strategy in 17% of the reports. System failure was a contributory factor in 26% of the incidents and, based on the results, the authors recommended 111 system improvements to increase patient safety.


Reporting bias is a recognized shortcoming of the critical incident methodology. Many incidents are never reported, and those that are may be incomplete or inaccurate for a number of reasons. In studies of adverse drug reporting, only a very small fraction of the total number of events are voluntarily reported. Both the number and accuracy of adverse event reports can be enhanced by scanning automatically collected data for predefined criteria, such as out-of-bounds physiologic parameters. A continuing problem, however, is the collection of adverse events that become apparent only in the postoperative period. Until comprehensive computerized medical records are widely available, painstaking follow-up will remain a cornerstone of accurate adverse event collection and analysis.
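A minimal sketch of such criterion-based screening appears below; the record format, parameter names, and thresholds are assumptions made for illustration, not published detection criteria.

```python
# Assumed alarm limits for a few monitored parameters (illustrative only).
LIMITS = {"spo2": (90, 100), "hr": (40, 140), "sbp": (80, 180)}

def flag_events(records):
    """Yield (time, parameter, value) for every out-of-bounds sample."""
    for rec in records:
        for param, (lo, hi) in LIMITS.items():
            value = rec.get(param)
            if value is not None and not lo <= value <= hi:
                yield rec["time"], param, value

samples = [
    {"time": "10:01", "spo2": 98, "hr": 72, "sbp": 120},
    {"time": "10:02", "spo2": 86, "hr": 110, "sbp": 95},  # desaturation flagged
]
for event in flag_events(samples):
    print(event)
```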


Gaba and DeAnda used a comprehensive anesthesia simulator to investigate factors of accident evolution and techniques used by clinicians to recognize and recover from critical events. The simulator recreated the OR environment with real monitors and equipment, and a patient mannequin was used (see Chapter 25). In an initial study of the behavior of residents in response to planned critical incidents, these researchers noted problems and errors that arose in addition to the planned events. They documented 132 unplanned events during 19 simulated cases; 87 events were attributed to human error, and only 4 were equipment failures. However, many of the human failures involved errors in the use of equipment, such as failing to switch the ventilator power back on after hand-ventilating the patient or neglecting to turn up the oxygen flow during preoxygenation. This study indicated that errors occur commonly, that many errors involve interactions with equipment, and that most errors are detected and corrected before they become hazardous to the patient. These findings also apply to experienced clinicians, who averaged five unplanned incidents during each simulated case. Again, many of the experienced practitioners’ errors involved interactions with equipment.


MacKenzie and colleagues took an alternative approach to the study of clinical decision making in anesthesia. Similar to the intraoperative task analysis studies described above, MacKenzie’s group assessed performance during actual trauma cases at the Maryland Shock Trauma Center and developed a sophisticated audio-video and physiologic data-capture methodology that allows off-line analysis. They described four components of task complexity that appear to have a significant impact on teamwork during emergency resuscitation after trauma: 1) multiple concurrent tasks, 2) uncertainty, 3) changing plans, and 4) high workload. Their work suggests that video analysis methodology can be a powerful tool in the evaluation of factors leading to deficiencies in airway management.




Ergonomics Guidelines


The successful development of ergonomically sound equipment and systems requires that ergonomics principles and guidelines be adhered to throughout the entire design cycle, beginning in the predesign phase. A number of ergonomics handbooks and guidelines have been published for equipment designers in fields outside medicine. In the late 1980s, the Association for the Advancement of Medical Instrumentation (AAMI), the professional organization for American clinical/hospital engineers, began a national standards-making process to develop guidance for medical device manufacturers to improve the human factors of their products. The result, “Human Factors Engineering Guidelines and Preferred Practices for the Design of Medical Devices,” was largely an adaptation of human factors design guidance from other industries (especially for the design of military products). In the early 1990s, the AAMI Human Factors Committee decided to revise this document substantially. The group first developed a process-oriented standard on a structured approach to user interface design for all medical devices. This national standard, ANSI/AAMI HE-74-2001, described design approaches relevant to all aspects of the design of devices, including labeling, documentation, and learning tools. More importantly, the standard and the Committee’s deliberations drove greater interest in human factors in the national and even international medical device industry and its regulators. For example, HE-74 was the foundation for the international collateral standard 60601-1-6 on medical device usability from the International Electrotechnical Commission (IEC), which applied only to medical devices requiring electricity to operate. This international standard was subsequently replaced by standard 62366, Medical Devices—Application of Usability Engineering to Medical Devices, a joint standard by the IEC and the International Organization for Standardization (ISO) that applies to all medical devices. The content of HE-74 is provided in Appendix G of standard 62366. At the time of this writing, standard 62366 is undergoing another revision expected to be completed in 2015.


Although these standards specify the process of designing medical device user interfaces, they do not provide any guidance on the design elements of a good medical device user interface. Thus, the AAMI HF Committee spent 5 years creating a companion standard, HE-75 (Human Factors Engineering—Design of Medical Devices), which is intended to provide comprehensive human factors design principles for medical devices. In parallel with this effort, a number of the HF Committee members published a handbook intended to amplify HE-75 with greater topical detail, figures, and case studies. The Food and Drug Administration (FDA) is responsible for federal oversight of medical devices and has become increasingly interested in ensuring that medical device manufacturers use human factors design principles and adhere to standardized good manufacturing practices (GMP). The FDA recently published guidelines on this subject, which also are available on the Internet.


Principles of Good Device Design


User requirements must be emphasized during the design of equipment and devices. The goal is to produce devices that are easily maintained, have an effective user interface, and are tailored to the user’s abilities. This is best accomplished during the early phases of system and equipment design, when the ergonomics and engineering specialists can work together with end users to produce a safe, reliable, and usable product. Norman eloquently presents this principle of user-centered system design in The Psychology of Everyday Things. This book can be recommended for all engineers, programmers, and designers responsible for the development of new medical devices. Some of the key aspects of user-interface design that Norman emphasizes are to 1) make things visible, 2) provide good mapping, 3) create appropriate constraints, 4) simplify tasks, and 5) design for error.


Make Things Visible


A well-designed interface between human and machine conveys to the user the purpose, operational modes, and controlling actions for the device. If the design of the device or system is based on a good conceptual model, its purpose will be readily apparent to the user. Most devices have several operational modes, and the user must be able to determine rapidly and accurately whether the system is in the desired mode and when the mode changes. With most devices, a number of user actions are possible at any given time; with complex systems, the allowable commands often depend on the current operational mode. The user should be able to tell what actions are possible at any given instant and what the consequences of those actions will be. After each user action, the device must provide feedback that is readily understandable and that matches the user’s intentions.


The user’s understanding of the function and operation of a device is paramount to the effectiveness of the system. The function and operation of many common devices are learned through cultural experience. People also expect certain objects to always function in a particular manner: knobs are for turning, buttons are for pushing, and so on. With other devices, the function can and often should be implied by the device itself. That is, the purpose and operation of a particular control or display should by design be as intuitive as possible for the user; for example, the sturdy horizontal handle on the side of the anesthesia machine is for pulling the device from one location to another. Such intuitive operation may be difficult to attain with complex, microprocessor-controlled multifunction devices. However, when the design requires the user to memorize specialized knowledge to operate the system (e.g., “To see the systolic blood pressure trend plot, I must push a particular sequence of soft buttons in a specific order”), the need for training increases, as does the chance of system-induced user error, especially under stressful, unusual, or high-workload conditions.


Provide Good Mapping


Mapping is the relationship between an action and a response and may be natural or artificial. Natural mappings are intuitive; artificial mappings must be learned. Artificial mappings that have been learned so well that the relationship between action and effect is recognized at a subconscious or automatic level are called conventional mappings. On an anesthesia machine, squeezing the bag to inflate the lungs is a natural mapping. Turning the oxygen flow control knob counterclockwise to increase gas flow is an artificial mapping. However, because this design follows the conventional mapping of valves, users typically do not have difficulty adjusting the flow of oxygen on the anesthesia machine. Unfortunately, on many medical machines and systems, activating alternate modes of action, adjusting alarm limits, or manipulating data relies on artificial, unique, and nonstandard mappings.


Any device has three different stages of mapping: 1) between intentions and the required action; 2) between actions and the resulting effects; and 3) between the information provided about the system and the actual state of the system. Inappropriate mapping at any stage leads to delayed learning and poor user performance. If natural or well-known artificial mappings are not used, the designer should seek preexisting standards or perform tests to ascertain optimal mappings, which should be consistent within a single device or system.


Create Appropriate Constraints


Constraints are limitations on the user’s available options or actions and can be physical, semantic, cultural, or logical. The provision of a control that can be oriented only in specific ways is a physical constraint (e.g., a switch can only be either on or off). With a semantic constraint, the meaning of a particular situation controls the set of possible actions. For example, the sounding of an alarm is meant to indicate the need to take some kind of action.


Cultural constraints are a set of allowable actions in social situations: signs, labels, and messages are meant to be read. Natural mappings typically work by logical constraints. When a series of indicator lights are arranged in a row, each with a switch underneath, the logical constraint dictates that the switch underneath a particular indicator light controls, or is associated with, that light. Devices, particularly their human interface components, should contain constraints that facilitate simple, logical, and intuitive operation.


Design for Error


Human performance is prone to error. Slips, or actions that do not go as planned, are a common form of human error arising from interactions with devices. Accidentally pushing the wrong button is an example of a slip (see Chapter 23). It is the device designer’s responsibility to anticipate user errors and minimize the risk that these inevitable errors will produce ill effects. Actions with potentially undesirable consequences should be reversible, or the device should require additional user acknowledgment before completing the action. The designer also can implement a forcing function, a type of constraint that prevents performance of an action that is clearly undesirable. An example of a forcing function is the oxygen/nitrous oxide interlock mechanism that prevents the delivery of a hypoxic gas mixture.
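The sketch below illustrates the forcing-function idea in code, loosely modeled on the oxygen/nitrous oxide interlock: the unsafe setting is made impossible rather than merely warned about. The class, flow units, and the 25% oxygen floor are illustrative assumptions, not a device specification.

```python
MIN_O2_FRACTION = 0.25  # assumed safety floor for the delivered oxygen fraction

class GasMixer:
    """Toy gas mixer whose setter enforces a non-hypoxic mixture."""

    def __init__(self) -> None:
        self.o2_lpm = 2.0
        self.n2o_lpm = 0.0

    def set_flows(self, o2_lpm: float, n2o_lpm: float) -> None:
        total = o2_lpm + n2o_lpm
        if total > 0 and o2_lpm / total < MIN_O2_FRACTION:
            # Forcing function: the unsafe action cannot be completed.
            raise ValueError("rejected: requested mixture would be hypoxic")
        self.o2_lpm, self.n2o_lpm = o2_lpm, n2o_lpm

mixer = GasMixer()
mixer.set_flows(o2_lpm=1.0, n2o_lpm=2.0)    # 33% oxygen: accepted
# mixer.set_flows(o2_lpm=0.5, n2o_lpm=4.0)  # 11% oxygen: raises ValueError
```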


These principles of good design are not limited to the interface between user and machine. A well-designed device is also easy to clean, maintain, and repair, and its documentation is organized and understandable. However, many currently available commercial devices violate these basic design principles. In an ergonomics evaluation of a microprocessor-controlled respiratory gas humidifier, Potter and colleagues found that the device had hidden modes of operation, ambiguous alarm messages, inconsistent control actions, and complex resetting sequences. One clinically used respiratory gas analyzer issues arcane alarm messages and has multiple display formats that are difficult to access. Another gas analyzer has a hidden calibration mode that renders it unusable if the sampling tubing is not attached when the unit is initially powered up. We have noted that at least two brands of limb tourniquet controllers have no indicator that the cuff is inflated, although this impression is mistakenly given by a display of “cuff pressure” and a running timer on the front panel.


Crisis situations tend to generate “use errors” that may not occur during less stressful times. For example, during simulated crisis situations, many subjects forgot to coordinate the setting of the bag/ventilator selector switch on the anesthesia machine breathing circuit when switching between controlled and spontaneous ventilation. At least some of what, on first glance, appears to be human error often can be traced back to poorly designed interfaces between human and machine. Devices often are used inefficiently or incorrectly as a consequence of poor design. When the device acts in unexpected ways, the user develops erroneous or inconsistent mental models of its operation. This problem can be exacerbated when the user has not received adequate instruction before using the device.


Visual Displays


Humans rely heavily on the visual sensory channel for communicating or obtaining information. The cathode ray tube (CRT), printed page, vehicle instrument panel, and line drawing are all examples of visual displays. An early application of ergonomics was the design of instrument displays for military applications. Research continues on developing and improving visual displays for such diverse areas as the airplane cockpit (Fig. 24-8), the nuclear power plant control room, and the office computerized workstation. Although a complete description of this work is outside the scope of this chapter, a number of guidelines that have resulted from these studies are presented. Much of the specific information presented stems directly from the general considerations already discussed.

