
CHAPTER 9 Bibliography




So there’s a bazillion articles on Simulators, and each article has a bibliography as long as your arm. Where do you start? What do they all mean? Do you pound through each and every one and accrete knowledge like a tree adds growth rings? Is there any theme to them other than, “Simulators are really cool, grab your phone, a credit card, and order before midnight tonight and we’ll send you a free Thighmaster”? Is there a way out of this chaos? Yes.


Since 1969 there have been well over 1000 articles published on simulation. The BEME collaboration* (we’ll come back to that later) took more than 3 years to identify, collect, read, and evaluate all of these articles. Do not worry—we’ll help you through this.


We begin this chapter with a brief description of our general search strategy for articles so you have an idea about how we found all of them. Next we briefly review the current areas of simulation research. Although this chapter focuses on the use of simulators for education, training, and assessment, we provide references for the other areas in case you are interested. The heart of this chapter contains an annotated bibliography separated into interesting themes.





OUR LITERATURE SEARCH


We wanted to provide you with the mother of all simulation bibliographies. So we began the search with references from 1969, when the seminal article about simulation in medical education was published by Abrahamson, and proceeded all the way to June 2005. We searched five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science, and Timelit) and employed a total of 91 single search terms and concepts and their Boolean combinations (Table 9-1). Because we know that electronic databases are not perfect and often miss important references, we also manually searched key publications that focused on medical education or were known to contain articles on the use of simulation in medical education. These journals included Academic Medicine, Medical Education, Medical Teacher, Teaching and Learning in Medicine, Surgical Endoscopy, Anesthesia and Analgesia, and Anesthesiology.


Table 9-1 Search Terms and Phrases.
In addition, we manually searched the annual Proceedings of the Medicine Meets Virtual Reality Conference, the annual meeting of the Society for Technology in Anesthesia (now the International Meeting on Simulation in Healthcare), and the biannual Ottawa Conference on Medical Education and Assessment. These Proceedings include “gray literature” (e.g., papers presented at professional meetings, doctoral dissertations) that we thought contained the most relevant references related to our review.


We also performed several basic Internet searches using the Google search engine—an invaluable resource for locating those articles you cannot find anywhere else (it indexes every CV on the web—so you are bound to find even the most obscure reference). Our aim in doing all this was to perform the most thorough literature search possible of peer-reviewed publications, along with reports in the unpublished “gray literature” that have been judged at some level for academic quality.


Not all of the 91 search terms could be used within each of the five databases because the databases do not have a consistent vocabulary. Although each database also has unique coverage and emphasis, we did attempt to use similar text-word or keyword/phrase combinations in the searches. Thus the essential pattern was the same for each search, but adjustments were made for databases that enabled controlled-vocabulary searching in addition to text-word or keyword/phrase searching. This approach acknowledges the role of “art” in information science, recognizing that information retrieval requires professional judgment coupled with high-technology informatics—and a whole lot of extra time on your hands. [Ojala M. Information professionals as technologists. Online 2002;26(4):5.]
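If you are curious what “Boolean combinations” of search terms look like in practice, here is a minimal sketch in Python. The term groups below are invented placeholders, not the actual 91 terms of Table 9-1, and the query syntax is generic rather than tuned to any one database:

```python
from itertools import product

# Illustrative placeholder term groups -- NOT the actual 91 terms of Table 9-1.
simulation_terms = ["simulation", "simulator", "mannequin", "task trainer"]
domain_terms = ["medical education", "anesthesia", "clinical skills"]
outcome_terms = ["training", "assessment", "competence"]

def boolean_queries():
    """Yield one AND-combination per (simulation, domain, outcome) triple,
    the essential search pattern repeated across databases."""
    for sim, dom, out in product(simulation_terms, domain_terms, outcome_terms):
        yield f'"{sim}" AND "{dom}" AND "{out}"'

for query in boolean_queries():
    print(query)
```

Each database would still need hand adjustment for its own controlled vocabulary, which is exactly the “art” of information science described above.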



GENERAL AREAS OF SIMULATION RESEARCH


For the past 36 years, the primary focus of medical simulation research has been to justify its use as a training and assessment method. Nearly all of the articles begin with the obvious comparison of medicine to aviation and clinicians to pilots. Then they spend the rest of the time in a defensive tone justifying simulation as a valid training method, to the point that you think simulators are the ugly stepsister of books, lectures, small group discussions, and patient rounds. We believe it is time to stop all of this defensive research and start moving forward—let’s end the meaningless studies comparing simulators to other unproven methods and begin determining the most effective ways to use simulation for training and assessment. We have an important responsibility as the current generation of trainers who have seen simulation develop and become integrated with traditional training (we are, in a sense, simulation immigrants). We need to start planning how to train the next generations of clinicians who have grown up with simulation (simulation natives) and not worry about previous generations of clinicians (simulation Luddites) who have looked at simulation as some threat to their unproven, outdated, and unsafe “see one, do one, teach one” philosophy. Let us heed the words of Eric Hoffer: “In a time of drastic change, it is the learners who inherit the future. The learned usually find themselves equipped to live in a world that no longer exists.”



Simulators for Training and Assessment


How do you categorize the studies? How do you evaluate the effectiveness of the simulation as a training and/or assessment tool? We are in luck. Donald Kirkpatrick devised a very useful system of four levels for evaluating training programs, a system that has since been modified for direct application to simulation (Kirkpatrick DL. Evaluating Training Programs: The Four Levels, 2nd ed. San Francisco: Berrett-Koehler; 1998). Although originally designed for training settings in varied corporate environments, the concept was later extended to health care education. Kirkpatrick’s framework as adapted for health care education includes all four of these levels, with several split into sublevels (Freeth D, Hammick M, Koppel I, Reeves S, Barr H. A critical review of evaluations of interprofessional education. London: Centre for the Advancement of Interprofessional Education; 2002. http://www.health.ltsn.ac.uk/publications/occasionalpaper02.pdf. Accessed March 10, 2006):

Level 1: Learners’ participation in and reactions to the training experience
Level 2a: Change in learners’ attitudes and perceptions
Level 2b: Change in learners’ knowledge and skills
Level 3a: Change in individual learners’ behavior in practice
Level 3b: Change in organizational practice
Level 4: Benefits to patients

The higher the level, the greater the demonstrated impact of the training.


Unfortunately, there are no studies at the “benefits to patients” level and very few at the “change in organizational practice” level—an example of the latter is the FDA’s decision to grant approval for the use of carotid stents only to clinicians who are trained on a Simulator. As you move down the hierarchy, the number of studies in each category grows.
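To make the hierarchy concrete, here is a minimal sketch of the adapted levels as an ordered type. The level labels follow this chapter’s usage, and the two example studies are invented placeholders:

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """Adapted Kirkpatrick hierarchy as used in this chapter (higher = stronger)."""
    REACTION = 1         # Level 1: learners' reactions
    ATTITUDES = 2        # Level 2a: change in attitudes/perceptions
    KNOWLEDGE = 3        # Level 2b: change in knowledge/skills
    BEHAVIOR = 4         # Level 3a: change in individual behavior
    ORGANIZATION = 5     # Level 3b: change in organizational practice
    PATIENT_BENEFIT = 6  # Level 4: benefits to patients

# Invented placeholder studies, tagged by the highest level of evidence claimed.
studies = [
    ("Post-course satisfaction survey", KirkpatrickLevel.REACTION),
    ("Carotid-stent credentialing tied to simulator training", KirkpatrickLevel.ORGANIZATION),
]

# The higher the level, the greater the impact: list the strongest evidence first.
for title, level in sorted(studies, key=lambda s: s[1], reverse=True):
    print(f"Level {level.value} ({level.name}): {title}")
```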


Now that we have everything organized, we will provide a friendlier approach to reading the literature by grouping articles into themes and even linking some of these to the Kirkpatrick criteria. Truth to tell, those Kirkpatrick criteria are a little tough to wade through. You feel yourself falling into “education PhD—speak,” and not so much “regular old doctor teaching another doctor—speak.”


Simulator articles fall into five main “themes”:

1. It stands to reason
2. Canary in the mineshaft
3. Gee whiz, golly, I belong too
4. Halfway to the station
5. Salvation
1. It stands to reason: Logic dictates that a Simulator makes sense. You wouldn’t want someone flying a plane without flying a “pretend” plane first. You wouldn’t want someone manipulating nuclear reactors without practicing first. So, darn it, it just seems inescapable that a Simulator is the way to go in anesthesia too.

Articles from aviation and industry fit into the “it stands to reason” column. Educational theory gives us some “it stands to reason” arguments as well. Teach with a “death-proof” patient—how can you say no to that? Teach with a patient who can “do what you want” at the stroke of a key. Teach in a setting where the learner has to act, to speak, to interact. Teach where the student has an emotional investment. They’ll learn better. It just plain “stands to reason.”


What would an “anti-Simulator” person say to these “it stands to reason” articles? “Nice. I’m glad a Simulator seems like a good idea. Lots of things seem like good ideas. Has anyone proven it’s a good idea, or are we to go on a hunch? A hunch with, lest we forget, a half million dollar price tag?”


Articles related to this theme fall into the Level 1 category—how the learners felt about participating in the simulation experiences (“This was the best learning experience in my career—it sure beats listening to the program director talk about this stuff”)—and the Level 2a category—did the experience change how they felt about the importance and relevance of the intervention (“I now realize how many things can go wrong and how aware I have to be at all times to prevent mishaps”). Also in this theme are editorial discussions and descriptive articles about the use of simulators for training and testing, comparing medicine to other high-risk industries such as aviation and the military.





5. Salvation: These are the articles that matter, the Holy Grail of Simulator literature. Yes, it’s great that there are “it stands to reason” articles. A solid logical base for simulators is comforting. “Canary in the mineshaft” articles help too. We are all looking for better ways to teach. Intellectual honesty demands that we probe for our own weaknesses and failings. If the Simulator can tell me where to shore up my teaching, then thank you, Mr. Simulator. “Gee whiz, golly, I belong too” articles merit a place at the table. Simulators are new, and they are expensive. We should ask the hard questions of so pricey a technology. When scholarly detractors speak up, we should listen. These are not Luddites, throwing their wooden shoes into the looms. These are serious educators who want proof that simulators help. Detractors focus on simulator research. If simulator champions take an “us versus them” approach, the simulator debate sinks into a squabble. If simulator champions take a “let’s hear them out” approach, the simulator debate elevates into a living, breathing academic discussion. “Halfway to the station” articles serve as necessary stepping stones. We have to examine simulators in the “in vitro” setting. Lab proof precedes clinical proof, and the simulator is a “lab” of sorts.

But “Salvation” articles are the real deal. Pure gold. Precious. Salvation articles show that simulators made a difference in the real world. Because someone trained in the Simulator, someone else did better.

A patient didn’t have an MI.


A patient didn’t have a stroke.


Someone lived, who would have died. And the Simulator made it happen.


How could you ever design a study to prove that?


That explains why “Salvation” articles don’t fall out of the sky every day. Truth to tell, that explains why there are no real salvation articles. The closest we can come is articles that suggest salvation. And they are rare indeed. But oh man, do they carry some heft.


Articles related to this theme fall into the Level 3a category (did residents actually change their habits after taking a course?) and Level 3b (have any groups changed what they are doing?). Finally, Level 4—does all this really mean anything important? Are patients safer?


So there they are: the major themes of simulator articles. Of course, these articles don’t neatly plop into their pigeonholes. An article’s main idea may be “gee whiz, golly, I belong too,” but you extract a “canary in the mineshaft” idea. So, this classification system is a little arbitrary and whimsical. But what the heck.



Articles Touching on the Theme “It Stands to Reason”


The articles included in this section say “it stands to reason” that simulators are good things. You read them and you just can’t help but blurt it out. “It stands to reason” that a simulator is a good way to teach because you can’t hurt a patient while practicing on it. “It stands to reason” that reproducible scenarios you can “dial in” anytime you want are a good way to train medical professionals.


Then here are the gigantic “leaps of faith” implied by these articles: it stands to reason that it’s a better way—pay tons of money to buy one; it stands to reason that it’s a better way—pay tons of money and devote hundreds of staff-hours to support one.


In a world of infinite resources and infinite money, we wouldn’t even bring up these leaps of faith. But that is not the world we live in. So as you read these articles, ask yourself, “OK, so it stands to reason that simulators are good, but just how good, given the cost and time necessary to keep them afloat?”





If simulators make so much sense, why is their use so recent? Haven’t humans been engaging in risky behavior (risky to themselves or others) since long before the Wright Brothers proved powered flight was possible?


The answer is yes—of course they have. It stands to reason that previous generations of humans must have wanted to practice their skills or to practice protecting themselves. “Historically, whenever the real thing was too expensive, too dangerous, too bulky, too unmanageable, or too unavailable, a stand-in was sought.”


In a comprehensive review of anesthesia simulators as they were available during the late 1980s and early 1990s, Good and Gravenstein (the original developers of the METI Human Patient Simulator at the University of Florida) provide an example of simulators from antiquity.


The field—warfare. The simulator—a quintain. What’s a quintain? Quintains originated as tree stumps upon which soldiers would practice their sword fighting. These were fitted with shields and features to resemble adversaries. By the Middle Ages, quintains were mounted on poles to simulate a joust. The quintain even provided feedback: if the novice failed to attack his “enemy” correctly, a weighted arm on the pole would swing around and smack him on the back. Sometimes we wish we could do this with some of our students and residents. But alas, we live in a kinder, gentler time.


Good and Gravenstein then cite Andrews, who differentiated between simulators and training devices. A simulator “attempts to … represent the exact or near-exact phenomena likely to occur in the real world”; simulators are good for trainee and expert practice but are not necessarily good for systematic learning of new skills and knowledge.


A training device “systematically presents to the trainee only the necessary training stimuli, feedback, reinforcement, remediation, and practice opportunities appropriate to the trainee’s learning level and style. It uses fidelity only as necessary to enhance the learning process.” These are commonly referred to as task trainers.


Just as in aviation, there is a right blend of simulators and training devices—much like tackling dummies and practice scrimmages in football, or a punching bag and a sparring partner in boxing.


The remainder of the article reviews the educational applications of anesthesia simulators and training devices. The following examples of training devices (task trainers) are listed here along with the original citations for further reading:




Simulators


SIM ONE


See below.


CASE


See below.


GAS

See below.


While there is evidence of simulators being used for military training in ancient Rome, their use in medicine did not occur until the mid-sixteenth century. Although it can be argued that Italian physicians such as Mondino de’Luzzi (1275–1326) used “simulators” when they employed cadavers to complement lectures, the idea of using simulation methods to demonstrate rare conditions or a difficult procedure did not arise until the 1540s.


Why then? At the time, many institutions were becoming concerned about the safety of women during childbirth. Although physicians (all men) had the knowledge to deliver babies, it was considered a social taboo for a man to perform a task that was the responsibility of the midwives. However, midwives had no formal training and were graduates of the famous “see one, do one, teach one” university. Initial attempts at formal instruction consisted of lectures with illustrations. This did not improve infant and maternal mortality rates, and more than 100 years later a father-and-son physician team from France did something about it—they developed an obstetric simulator.


The Gregoires’ Simulator was crude by today’s standards—a human skeletal pelvis contained in a wire basket, with oilskin to simulate the genitalia and coarse cloth to simulate the remaining skin. “Real fetuses, likely preserved by some means, were used in conjunction with the manikin.” The simulator could reproduce the birth of a child and some complications that would require a trained person to fix.


And yes—there were complaints regarding its validity and transfer to real patients, but for the first time someone said, “it stands to reason we can do a better job and not allow these poor women and children to die.”


Over the next two centuries, additional obstetric simulators were developed in England and the United States—and they appear to have enjoyed support from laypeople and some other physicians. However, some very familiar factors limited their widespread adoption.





You would think that after 400 years we would have adequately addressed these issues! Even when the majority of students in the late nineteenth century graduated from medical school (there was no such thing as residency) without any direct experience with childbirth, available simulators were not adopted, even though “the use of the simulator would provide medical students with at least some experience with birthing techniques and with some of the related complications.” But no—we would have to wait 80 years before another attempt at simulation for training.










So what was the purpose of this Simulator, built before Neil Armstrong took his famous walk?









So we had a Simulator that could do many of the things modern simulators can do, and Denson and Abrahamson had identified all of the potential benefits of simulators that we are talking about now! They performed one formal study, involving 10 anesthesia residents learning endotracheal intubation (the study is described later, in the Halfway to the Station section).


Over the years, Denson and Abrahamson went on to train many more health care providers, including medical students, interns, inhalation therapists, nurses, nursing students, and ward attendants. In addition to intubation, they provided training in ventilator application, induction of anesthesia, intramuscular injection, recovery room care, and pulse and respiration measurement (HOFFMAN KI, ABRAHAMSON S. The “cost-effectiveness” of Sim One. J Med Educ 1975;50:1127–8).


Although additional simulators were planned, funding dried up and the culture was not ready for this type of training—the old guard was skeptical of technology, and there was no appreciation of the need to reduce medical errors and improve patient safety, although Denson and Abrahamson clearly made a case for it. In the words of Abrahamson, the factor that led to Sim One’s demise was “internal administrative problems,” which means a lack of university support. As a result, “the funding agencies were no longer interested,” and there was the growing “low esteem the academic world was developing for education.” Ouch! (ABRAHAMSON S. Sim One: a patient simulator ahead of its time. Caduceus 1997;13(2):29–41).


What is the legacy of Sim One? As Abrahamson states, “the effectiveness of simulation depends on the instructional method with which the simulation is being compared … if there is no alternative training method available (limited patient availability or restrictions on the use of patients), the effectiveness of a simulation device probably depends on the simple fact that the device provides some kind of learning experience as opposed to none.” Thus, Abrahamson was saying 30 years ago that it stands to reason we should be using these devices if nothing else exists or if traditional training is too dangerous.


What did they think about this Simulator at the time?


“From an anesthesiologist’s point of view, SIM 1 might represent man’s most impressive attempt, thus far, to manufacture himself from something other than sperm and ovum.”


“The appropriateness of the anesthetist’s response to each stress is automatically recorded for his later bemusement and education.”


“The next phase, Sim II, would appear to be an automated trainer to eliminate the need for a flesh-and-blood instructor, and the obvious finale is to simulate the learner as well.”


This was not a community-based practitioner reminiscing about the good old days of ether and a biting stick; this was the official response of the Association of University Anesthesiologists! [HORNBEIN TF. Reports of scientific meetings. Anesthesiology 1968;29:1071–7.]


We would have to wait until the late 1980s to pick up from where these pioneers left off.


✓ Gaba DM, DeAnda A. A comprehensive anesthesia simulation environment: re-creating the operating room for research and teaching. Anesthesiology 1988;69:387–94.


This article describes the rediscovery of full-body simulators for anesthesia training and introduced Gaba as a player in the wild, wooly world of simulation. You will see his name again and again in this bibliography. Based out of Stanford, home of lots of smart people, it comes as no surprise that Gaba, too, is smart and on a mission to see simulators reach their potential.


Way back in 1988, Gaba laid out how to do a simulation, and he made clear the argument that it just plain “stands to reason” that simulation is a good way to train. He described their setup and how they went through simulations. He argues that a “total simulation” requires the complete capabilities for noninvasive and invasive monitoring. Also, other tasks are performed using standard operating room equipment so the scenario recreates the anesthesiologist’s physical as well as mental task environment.


Gaba and DeAnda described a script, actors in the field, “on the fly” decisions by the simulator director, a high-fidelity mannequin—basically all the stuff we do now in the Simulator. They ran 21 people through the Simulator, and all judged the experience highly realistic. This article did not actually report a study; it just laid out how simulations are done and how much the participants liked them. Finally, Gaba proposed that simulation has “major potential for research, training, evaluation, and certification.” Amen to that, Dr. Gaba.


SCHWID HA, O’DONNELL D. The anesthesia simulator-recorder: a device to train and evaluate anesthesiologists’ responses to critical incidents. Anesthesiology 1990;72:191–7.


Dr. Schwid has shown us that simulators come in all shapes, sizes, types, costs, and ranges of feasibility. This multicenter study evaluated the acceptance of a computer-based anesthesia simulator that uses sophisticated mathematical models to respond to user-controlled interventions, including airway management, ventilation control, and fluid and drug administration (53 different agents).


The Simulator also provided detailed feedback that tracked all of the user’s actions and the Simulator’s responses—this could be used for formative feedback during training or for summative evaluation to determine whether the learner had mastered the key critical events. The Simulator was evaluated by 44 residents and attendings at seven institutions. Feedback was very positive: nearly all participants found the patient’s responses to management interventions realistic and judged it a good device for testing anesthesiologists’ responses to critical events. A significant and important finding was that there were no differences in response among the institutions—demonstrating the practical transferability of this training device.
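Purely to illustrate the flavor of a screen-based simulator that models drug responses and records every action for later feedback, here is a toy sketch. The drugs, effect sizes, and vitals are invented for illustration and bear no relation to Schwid’s actual mathematical models:

```python
class ScreenPatient:
    """Toy screen-based 'patient': responds to drug doses and logs everything."""

    # Invented linear effects per mcg/kg -- for illustration only.
    DRUG_EFFECTS = {
        "epinephrine": {"heart_rate": +8.0, "mean_bp": +5.0},
        "dobutamine":  {"heart_rate": +5.0, "mean_bp": +2.0},
    }

    def __init__(self):
        self.vitals = {"heart_rate": 75.0, "mean_bp": 90.0}
        self.log = []  # every intervention and response, for later debriefing

    def give_drug(self, name, dose_mcg_per_kg):
        effect = self.DRUG_EFFECTS.get(name, {})
        for vital, delta in effect.items():
            self.vitals[vital] += delta * dose_mcg_per_kg
        self.log.append((name, dose_mcg_per_kg, dict(self.vitals)))

patient = ScreenPatient()
patient.give_drug("epinephrine", 0.5)
print(patient.log)  # the recorded trace supports formative or summative feedback
```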


It is always tempting to compare this Simulator with the full-body, comprehensive simulator environments developed by Gaba and by Good and Gravenstein. To do so misses the point! A comprehensive training environment depends as much on the faculty facilitator, the debriefing feedback sessions, and the involvement of the “team” as it does on the Simulator.


Schwid’s computer-based Simulator and others similar to it have several advantages.





Finally, the following two extreme cases illustrate the use of these devices.





Anesthesia has consistently looked to aviation as its “model” for training. Well, aviation manufacturers, including Boeing and Airbus, now “equip” pilots with computer-based simulators to master before they attend the full-scale simulator. Rather than comparing one simulator type with another, we should focus on the most effective methods in the best mix for training.


GABA DM. Improving anesthesiologists’ performance by simulating reality [editorial]. Anesthesiology 1992;76:491–4.


Gaba starts out by discussing a screen-based Simulator study by Schwid. Schwid discovered that residents made errors.






Although Gaba never draws the analogy between the simulation and the aforementioned canaries in the mineshaft, we can see how they fulfill this crucial function. If deadly methane gas had seeped out of the coal deposits, the canaries would suffer a severe case of death, alerting miners to the danger. Maybe simulators should be our “canaries.” Instead of waiting for a methane explosion in the mine (a patient catastrophe in the operating room), we should see how the canary’s doing (run residents through the Simulator and uncover their weaknesses).


Usually, we analyze cases retrospectively, after disaster has befallen. This analysis is clouded by incomplete records, failed memories, and, who knows, perhaps a little defensiveness? “I have no idea what went wrong!” So, looking at stuff after the fact isn’t too good.


We could videotape cases as they occur and, in effect, see disasters during the fact. Only problem with that is that most of the time nothing happens. We’d be looking at millions of hours of people sitting on a chair. It would be like watching the medical equivalent of C-SPAN. We might save a few patients that way, but we’ll kill scores of people with boredom. So, looking at stuff during the fact is no good.


How about looking at stuff before the fact? Time travel. Back to the Future instead of C-SPAN. Only the Simulator can provide that kind of time travel. “It stands to reason” that the Simulator is a good idea. You don’t have to wait until a patient is hurt (the retrospective way); you don’t have to wade through miles of stultifying tape (the real-time way); you can “create the problems” without patient harm. You do it ahead of time (the prospective way).


Gaba also reviewed the limits of Simulators, including that, despite their sophistication, they can never create a patient with all of the inherent variables seen in clinical medicine—but so long as they are “reasonable” representations of real patients they could be considered valid by experienced anesthesiologists.


Another limitation is that the trainee is never convinced the simulation is 100% real—leading to hypervigilance in which the poor resident is always worried that something bad is going to happen. This would be okay, except that many errors may result, in reality, from the very boredom and fatigue that occur in real practice. At the other end of the spectrum are the smart alecks who believe that they can do whatever they want because no real patient is at risk.


However, the same is true in other industries, and they have made successful use of simulation. In medicine, the validation of simulation will be even more difficult than in aviation because no two patients are alike (unlike 747s); the effects of training should be measured over years of training and remediation, not after a single training session. Gaba summarized his editorial by making the important point: “No industry in which human lives depend on the skilled performance of responsible operators has waited for unequivocal proof of the benefits of simulation before embracing it.” I say we embrace it too.


✓ Gaba DM. The future vision of simulation in health care. Qual Saf Health Care 2004;13(Suppl 1):i2–10.


In this article, Gaba shows why he is the maven of high-fidelity simulation in health care. He describes a comprehensive framework for future applications of simulations as the key enabling technique for a revolution in health care—one that optimizes safety, quality, and efficiency.


Gaba begins by making the case that simulation addresses current deficiencies of the health care system.






To address these problems, Gaba proposes that Simulators must be integrated into the fabric of health care delivery at all levels, which is much more complex than piling it on top of the existing system. To do so, he outlines 11 dimensions (and gradients within each) that can take us there. Next, Gaba outlines the various social entities, driving forces, and implementation mechanisms that could forward the agenda of simulation in medicine. Finally, he paints two possible scenarios (he has had lots of practice at developing scenarios) for the fate of simulation in health care.


Optimistic scenario








Pessimistic scenario








Although we certainly take the optimistic view, we know it stands to reason that Simulators will have a significant future in medical training because of the dedication and hard work of individuals who will ensure that it happens.


HELMREICH RL, DAVIES JM. Anaesthetic simulation and lessons to be learned from aviation [editorial]. Can J Anaesth 1997;44:907–12.


This editorial points out that simulators have a lot of potential for serving as tests. All the usual arguments hold—you don’t put a patient at risk, and you can reproduce the scene. But the editorial goes on to point out a crucial problem with using a Simulator as a “test vehicle”: the idea of “equifinality”—that is, different techniques can give you the same end result. (The editorial does not mention the following example; we made it up just to illustrate the point.) One anesthesiologist may use epinephrine to achieve a goal, whereas another may use dobutamine. Both achieve the same goal—better cardiac output. So, in the Simulator, what do you do? Grade someone wrong for using epinephrine because the “simulator grade book” says you should use dobutamine? The editorial finishes by saying “there is a need to provide opportunities for practice and assessment until the culture supports the fairness of the assessment process.” In other words, it “stands to reason” that a Simulator is a good way to test, but we haven’t quite gotten there yet.
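One hypothetical way around equifinality is to grade the physiologic end point achieved rather than the particular drug chosen. The agent list and target below are invented for illustration, not taken from the editorial:

```python
# Invented example: several different techniques can reach the same end point.
ACCEPTABLE_INOTROPES = {"epinephrine", "dobutamine"}
TARGET_CARDIAC_OUTPUT = 4.0  # L/min, illustrative threshold only

def outcome_based_pass(drug_given, cardiac_output):
    """Pass any trainee who restored cardiac output with a reasonable agent,
    instead of failing everyone who deviates from one 'grade book' answer."""
    return cardiac_output >= TARGET_CARDIAC_OUTPUT and drug_given in ACCEPTABLE_INOTROPES

print(outcome_based_pass("epinephrine", 4.6))  # True -- same end, different means
print(outcome_based_pass("dobutamine", 4.4))   # True
print(outcome_based_pass("epinephrine", 3.2))  # False -- goal not reached
```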


MURRAY WB, SCHNEIDER AJ, ROBBINS R. The first three days of residency: an efficient introduction to clinical medicine. Acad Med 1998;73:595–6.


Dr. Murray and the fine folks at Penn State (you can almost smell the chocolate from the Hershey factory) describe the first 3 days of their anesthesia residency. Rather than just shoveling a ton of stuff at their residents, they make the learning more active, using (what else) the Simulator. Result—a questionnaire showed “improvement in the residents’ confidence in their ability to carry out clinical tasks.”


So, it “stands to reason” that if a Simulator increases the confidence of a resident, a Simulator must be a good thing. A hard-nosed scientific drudge could look at this and say, “This is not rigorous proof.” A skeptic could look at it and say, “So what, what difference does that make, a little more confidence?” But I’ll bet that to those Penn State residents the added confidence made all the difference in the world when they walked into the OR the first day.


MURRAY DJ. Clinical simulation: technical novelty or innovation in education [editorial]. Anesthesiology 1998;89:1–2.


Dr. Murray is the big cheese in Simulation at Washington University in St. Louis. This is a “do we really need Simulators?” editorial. What did we do in the “B.S. (before simulator) era”? We did residency and did a lot of cases with supervision. We did lectures, one-on-ones with attendings. But why use the past tense? That’s what we are doing right now!


So, do we need to throw Simulators into the mix? Yes. You can use Simulators to teach.





Murray goes on to say that a lot of different groups need to work in the Simulator. Anesthesiologists alone can’t keep the thing humming all the time. A Simulator is a Lamborghini—you bought it, now drive it! Don’t let it sit in the garage all day collecting dust. Get that thing on the road.


ISSENBERG SB, MCGAGHIE WC, HART IR. Simulation technology for health care professional skills training and assessment. JAMA 1999;282:861–6.


Dr. Issenberg, who is one of the authors of this book, oversees the development of “Harvey,” the Cardiology Patient Simulator, at the University of Miami. In this Special Communication, Issenberg et al. touch on all the simulation technologies that were available in 1999: laparoscopy simulators to train surgeons, their own mannequin Harvey to train students in 27 cardiac conditions, flat-screen computer simulators, and finally anesthesia simulators.


What does Dr. Issenberg have to say about the anesthesia simulators? “The high cost and requirements for accompanying equipment, space, and personnel have resulted in research to justify the installation of such devices.” (Hence so many “justification of simulators” articles in this bibliography.) If you look at “intermediate” benefits of simulators, Issenberg points out the following.





So, as study after study comes out hinting that simulators can make us better practitioners, do we have to wait for proof positive? No.


GORDON JA, WILKERSON WM, SHAFFER DW, ARMSTRONG EG. “Practicing” medicine without risk: students’ and educators’ responses to high-fidelity patient simulation. Acad Med 2001;76:469–72.


This is a “feel good” qualitative paper about simulators, pure and simple. Altogether, 27 clinical medical students and clerks and 33 educators went through the Simulator and were asked how they felt about it. The medical students were instructed to evaluate and treat two patients: (1) a trauma patient with hypovolemic shock and a tension pneumothorax and (2) a cardiac patient with marginally stable ventricular tachycardia. The educators, on the other hand, were instructed to care for a patient with anaphylaxis. All participants were debriefed in a case discussion afterward and then completed several evaluations to determine how well they liked the experience.


To get back to the “theme” of this group of articles—It “stands to reason” that an educational method that everyone likes should be an educational method we should use. Everyone likes Simulators. Even better than the statistics (85% loving the Simulator) were the “raw comments” that hammer home just how cool Simulators are.


“I think everyone could benefit from this.” “Every medical student should have the opportunity to learn using this Simulator several times each year.”


How can you argue with that?


This study also demonstrates the benefit of relatively small sample sizes—you can collect more qualitative data so you know not only what they liked but, more importantly, why they liked it.


✓ Gordon JA, Oriol NE, Cooper JB. Bringing good teaching cases “to life”: a simulator-based medical education service. Acad Med 2004;79:23–7.


Based on their successful pilot studies of positive learner reactions to simulation-based education, Dr. Gordon and his colleagues set out to develop a comprehensive on-campus simulation program at Harvard Medical School. They provide a descriptive case study of how to develop a simulator program in an undergraduate medical curriculum. And when the folks at Harvard give free advice—we listen.


The authors outline several initial steps that are critical to get a simulation program off the ground and make sure it lasts.






The authors provide practical tips on integrating simulation into the existing medical school curriculum by using existing material rather than “reinventing the wheel.” Students in every year of medical school can have meaningful education and training using simulation—you don’t need to restrict it to juniors and seniors.


However, what separates this program from all others is the development and implementation of a “medical education service” dedicated to providing “education on demand” for any student who wants to use the Simulators. Faculty members and residents provide the instruction so students can use whatever “down time” they have to hone their skills.


This has become very successful, as evidenced by a group of 15 graduating students who wrote to the dean, “the Simulator stands out as the most important educational adventure at Harvard Medical School.”


What can be better than that?


GREENBERG R, LOYD G, WESLEY G. Integrated simulation experiences to enhance clinical education. Med Educ 2002;36:1109–10.


Dr. Greenberg and her faithful minions from the University of Louisville Patient Simulation Center at the Alumni Center for Medical Education (see? what did we tell you about the importance of having an impressive name for your simulation center?) combined a high-fidelity Simulator with a standardized patient. The ultimate simulation experience—first you talk with an actor pretending to have a condition, then you go to the Simulator as if the actor has now “become” the mannequin. Great idea!


First, students meet a patient (SP—standardized patient, the actor) about to have an appendectomy. Next, the student follows the patient into the OR and participates in anesthetizing the patient (Simulator) throughout the procedure. Then the student returns to the waiting room to discuss the procedure with the patient’s spouse (SP). Finally, the student examines the patient (SP) 2 weeks later when she presents with a fever. Whew! Faculty like exploring new clinical teaching and testing methods, and the students are more engaged in their education.


This is an educational twist that, it “stands to reason,” is a great way to teach. You combine the best of both worlds and give the student a hell of an experience.


EPSTEIN RM, HUNDERT EM. Defining and assessing professional competence. JAMA 2002;287:226–35.


When you think of “medical science” you think of hard data: blood levels of propofol, heart rates that say “beta-blockade achieved,” or gastric emptying times. And even in the “softer” realm of medical education, you still look for “hard data”: test scores, percentage pass rate of a program, and (in our Simulator world) checklists.


This JAMA article takes us even farther into the “soft.” What is competence? How do you assess it? Look at their definition of competence and ask yourself, “Just how could I assess competence?” and, not to be too cagey about it, “Could I use the Simulator to assess competence?”


Competence is “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community.”


OK, genius, just how in blue blazes do you assess that? (For our nefarious purposes, can a couple of hours in the Simulator fill that tall order?) JAMA tells us that the three biggies for assessing competence are:

1. Evaluations by supervising attending physicians
2. Multiple-choice examinations
3. Standardized patient assessments

Note: Simulators are not mentioned. The million dollar question—Should Simulators be included?


OK, our goal is to assess competence, and we currently have three ways of doing it. Are they any good? (By extension, does a budding Simulationologist see any defects in the current system that the Simulator could fill?)





So here we have the current three methods of assessing competence. Look again at the definition of competence and ask yourself if any of these three really hit the nail on the head. Competence is “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community.”


Does an attending physician’s evaluation of a resident assess “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community”? Not really.


Does a multiple choice exam assess “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community”? Not really.


Does a standardized patient assessment evaluate “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community”? Um, closer. I think.


This whole world is murky and quasi-scientific. Go ahead, try to make a bold and sure statement about assessing competence. “The best method for assessing competence is the standardized patient assessment!” Someone asks you, “Prove it.” You say, uh, you say … what do you say?


So wouldn’t it be great if JAMA then said, “The current methods of assessing competence aren’t any good, but putting people through the Simulator fits the bill perfectly!” Well, they didn’t. Too bad. But they did say that we need to develop innovative ways to assess professional competence. And, who are we kidding, that is exactly what we’re trying to do with our Simulators.


DILLON GF, BOULET JR, HAWKINS RE, SWANSON DB. Simulations in the United States Medical Licensing Examination (USMLE). Qual Saf Health Care 2004;13(Suppl 1):i41–5.


This is the article we have been waiting for—the people in charge of providing the assessment requirement for a medical license in the United States predicting the inevitable use of simulators in high-stakes examinations.


They provide a current description of the US medical licensing system and explain how each of its component examinations uses some form of simulation.









There—the folks in charge of testing and therefore education and training (testing drives learning) have just stated what we knew all along. Want to go for a ride?


✓ Seropian MA. General concepts in full scale simulation: getting started. Anesth Analg 2003;97:1695–705.


This article is cited later in this book, where we mention, “If you are thinking of starting a simulation center, and you’re looking for a good ‘how-to’ article, this is the one.” Dr. Seropian pays most attention to the person running the Simulator, not so much the Simulator mannequin itself. It’s the live
