Chapter 1 Essential Components for a Patient Safety Strategy



The safety of operating rooms depends largely on the professional and regulatory requirements that mandate skill levels, documentation standards, appropriate monitoring, and well-maintained equipment. Prescriptive and detailed protocols exist for almost every procedure performed, and although variation based on surgical and anesthesia preference is allowed, overall there is excellent management of the technical aspects. Experienced operating room physicians, nurses, and technicians come to rely on these operating room characteristics to support the delivery of safe care. Most practitioners, however, at some time have had the experience of working in suboptimal operating room conditions. This may be attributed to the procedural complexity of even the simplest operative procedures when that complexity is not matched by the necessary team coordination, leadership engagement, or a departmental perspective that encompasses all the prerequisites for reliable delivery of care. There are many causes for this current state, which include, depending on the country, the mechanisms for reimbursement that impede alignment of interests between physicians and hospitals (Ginsburg et al, 2007); the limited interdisciplinary training of the various disciplines (i.e., surgery, anesthesia, nursing, and surgical technology), which promotes a hierarchical structure and undervalues core team characteristics; and the historical perceptions about the roles of physicians, nurses, and ancillary personnel, which have not kept pace with the changing nature of current health care delivery (Baker et al, 2005).


As far back as 1909, Ernest Amory Codman, a Boston orthopedic surgeon, openly challenged the then-current orthodoxy and proposed that Boston hospitals and physicians publicly share their clinical outcomes, complications, and harm. Wisely, he resigned his hospital position shortly before going public with this request, so he could not be thrown off the staff. Despite that, criticisms of him and his ideas were severe. Today his wishes are being realized across the United States at a rapidly accelerating pace (Mallon, 2000). The 1991 Harvard Medical Practice Study, which evaluated adverse events in 51 hospitals in New York State, ultimately led to the now widely quoted estimate of up to 98,000 unnecessary deaths per year attributed to health care error. The study forced the health care industry to reflect on problems in patient care resulting in patient harm (Brennan et al, 1991; Leape et al, 1991). From these reflections a science of comprehensive patient safety has developed, drawing on disciplines such as engineering, cognitive psychology, and sociology. Combined with the increasing pace of electronic health record deployment, the movement toward demonstrable quality and value in medical care is advancing quickly.



THE CASE FOR SAFE AND RELIABLE HEALTH CARE


The 1991 Harvard Medical Practice Study was the seminal work leading to the 1999 Institute of Medicine (IOM) report, To Err Is Human (Kohn et al, 2000), and that report has generated broad public and business awareness of quality and safety problems in the health care industry. The media have fueled the public’s interest, and businesses have formed advocacy groups, such as Leapfrog (Delbanco, 2004), to focus attention on this critical topic. The U.S. government program Medicare, with approximately $600 billion in annual spending, recently announced it would not pay for care resulting from medical errors (Centers for Medicare & Medicaid Services [CMS], 2008). Large private insurers are quickly following suit. Aetna recently announced it will not pay for care related to the 28 “never events” defined by the National Quality Forum (Aetna Won’t Pay, 2008).


Rapidly developing transparency in the market about safety and quality will be a major driver in health care change. Beth Israel Deaconess Hospital in Boston posts its quality measures on its Web site, including its recent Joint Commission accreditation survey (Beth Israel Deaconess Medical Center, 2008). New York Health and Hospitals, the largest public care system in the United States, has committed to following this example. The State of Minnesota publicly posts on the Internet all its hospitals’ reported never events, such as wrong site surgeries and retained foreign objects during surgery (MDH Division of Health Policy, 2008). Several other states are quickly emulating this practice. Geisinger Clinic in Pennsylvania now offers a warranty on heart surgery (Abelson, 2007), in which specified complications are cared for without charge. Given the impressive care processes Geisinger has developed, this is a logical way to communicate its superior care and compete in the market. The successful hospitals and health systems in this newly transparent market will be the ones that apply systematic solutions to enhance patient safety. Other bright spots include the systematic approaches taken by large care systems such as Kaiser Permanente and Ascension in surgical and obstetric safety, and Institute for Healthcare Improvement (IHI) initiatives such as the 100,000 Lives Campaign and the 5 Million Lives Campaign (IHI, 2008).


There has been a great deal of activity to improve the safety and quality of care since the IOM report. Currently there are pockets of excellence, but broadly there is much more work to do, and there are fundamental gaps in the quality and safety of health care. Well-intentioned projects and efforts to improve patient safety have met with variable results. Overall, however, in the absence of systematic, solutions-based approaches, health care organizations are unlikely to achieve sustained excellence in clinical safety and quality. This chapter describes the necessary elements for a comprehensive program to help ensure safe and reliable care for every patient every day. The surgical environment is an obvious one to which these programs should be applied, and perioperative nursing will play a significant role in shaping the efforts for patient safety and safe nursing practice.




PROCESS STEPS


Consider that each step in the process is an individual and indivisible action. For example, the circulating nurse obtains the patient’s chart before the procedure. The simple act of holding the chart in one’s hands or reviewing the computerized patient record is a step in the process of evaluating a patient before beginning a surgical procedure.


Once a chart is available, there is a series of other steps that include confirming the patient’s identity, reviewing the current history and physical, confirming the correct surgical consent, and checking other essential laboratory and clinical data as determined by the facility. These steps depend on several processes of their own, such as a secretary or assistant placing the chart in a convenient location and checking that the correct information is in the correct place. In the case of an electronic record, that may include steps such as making certain that the person collecting the data enters the information appropriately. Each of the process steps undertaken has its own failure rate and determines whether or not the information is present in the chart when it reaches the nurse.


Viewed from this perspective, any operative procedure performed in any location is made up of dozens to hundreds or thousands of steps. Each step taken has an intrinsic defect rate; some might be single steps, but many also will have associated processes that determine their defect rate. To the degree that each step’s defect rate can be quantified, the safety of a system is measurable. The measure is not only whether the outcome is achieved, but also whether the processes may be replicated over and over again. To a large extent, safety is a system property determined by a system’s reliability.
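The idea that safety is measurable through step-level defect rates can be sketched numerically. The sketch below is illustrative only; the defect rates are hypothetical, and it assumes steps fail independently, which real clinical processes often violate:

```python
# Illustrative sketch (not from the text): end-to-end reliability of a
# multistep process as the product of each step's success probability,
# assuming steps fail independently. Defect rates here are hypothetical.
def process_reliability(defect_rates):
    """Probability that every step in the process succeeds."""
    reliability = 1.0
    for rate in defect_rates:
        reliability *= (1.0 - rate)
    return reliability

# A 100-step procedure in which each step fails 1% of the time succeeds
# end to end only about 37% of the time.
print(round(process_reliability([0.01] * 100), 3))  # 0.366
```

The example makes the chapter's point concrete: even highly reliable individual steps compound into a surprisingly unreliable whole, which is why system-level reliability, not individual diligence alone, determines safety.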



ACHIEVING RELIABILITY IN SYSTEMS


Operating rooms have done a remarkably good job of making themselves reliable and safe, albeit in a health care industry that has been slow to incorporate many key features of reliable systems (Cooper and Gaba, 2002). The Harvard anesthesia practice standards (Eichhorn et al, 1988), generated in the 1980s and adopted across the United States, are a shining example of standardization of anesthesia care that has helped improve the safety of the specialty. These standards identified minimum monitoring expectations now commonly used in every surgical procedure. They affect all of perioperative nursing and influenced the broad adoption of pulse oximetry and capnography.


Another rich source of reliability in operating rooms has been the interoperability of their practitioners. Although one anesthesia provider or nurse may begin a procedure, it has been likely that many other members in a department would be capable of replacing that person and might be called on to do so. This continues to be likely in many departments in which transfers of care occur daily. However, the limitations in interoperability are growing as equipment and surgical specialties become more specialized and require increasingly sophisticated knowledge of technique and machinery. The implications of increased specialization and technical complexity inevitably will influence decisions about caseload and case type regarding timing of cases, after-hours procedures, and, in all likelihood, credentialing of all operating room practitioners.


Reliability is feasible only when six interdependent factors are effectively integrated (Leonard et al, 2004a):



Integration occurs only through concerted effort at multiple levels, starting with a goal of achieving reliability that takes precedence over all others. Organizations and departments that pursue greater reliability find that the end result positively influences patient care and employee satisfaction (Yates et al, 2004); it is obvious even to outside observers. To some extent this applies to all operating room practitioners as they arrive in a location to participate in a procedure. The initial reaction, that gut feeling about the quality of relationships and the safety of the environment, should be taken seriously, for it is likely to be a good barometer of the risk inherent in the environment (CMS, 2008).



AN ENVIRONMENT OF CONTINUOUS LEARNING


One example of a paradigm for a learning environment is Toyota. The company leads the auto industry in size and sales, and the enthusiasm of its car owners is well known. Toyota employees make an average of 46 suggestions per year for improving the work they do, knowing that a significant number of their suggestions will be tested and, if found worthy, adopted and spread. This process of applying the insights of the frontline workers to change and improvement applies not only to the production of Toyota’s cars but also to the fundamental work of improvement itself (Spear, 2004). If a change in a procedure takes 1 month today, Toyota would be seeking ideas so that a year from now it could perform that change in 3 weeks. If Toyota receives 10 useful suggestions daily from a department, then 1 year from now its goal would be to receive 12 or 15. Toyota’s perspective is that improvement is always feasible and there is always waste to be removed from its processes. The fact that in a prior quarter wasted effort and materials decreased as a result of focused improvement efforts is immaterial. There is, unrelentingly, always more to be achieved (Liker, 2004; Liker and Hoseus, 2008).


For decades, physicians and hospitals have had a guild relationship in which single physicians plied their trade within the walls of a hospital but with singularly insular perspectives. In the past 20 years a different health care industry has begun to emerge, built on a flood of hard evidence from randomized controlled clinical trials. Groups of clinicians are now providing service-line delivery across the spectrum of care associated with specific diseases (Abelson, 2007).


An environment of continuous learning in health care requires the presence of certain structural elements and the ability to execute ideas. The most basic structural element is the meeting of the clinical, unit-based leadership to consider information about unreliable events and decide on actions to remedy them (Mohr and Batalden, 2002). Surgical procedures will take place safely only in those clinical units whose leaders are able to orchestrate this process. Nursing must be an integral part of the leadership discussions in that unit. Multidisciplinary staff should meet on a regular basis to examine the straightforward operational issues in units, from items as specific as getting drugs to the right places in each room to the flow of patients through the entire suite.


The information collected at such meetings should be collated and evaluated so that remedies to any problems, potential problems, or concerns may be pursued. As in other industries with reputations for high reliability (Freiberg and Freiberg, 1998), listening to the front line and acting on their concerns is key to ensuring a safe process. This requires an environment or culture that makes it easy to bring problems to light and a teamwork structure that supports this process.



A JUST AND FAIR CULTURE


A just and fair culture in health care is one in which individuals fully appreciate that, although they are accountable for their actions, they will not be held accountable for system flaws (Reason, 1997; Marx, 2001). This culture provides a framework for looking at errors and adverse events to quickly and consistently determine whether an individual nurse or physician involved in the event has behavioral or technical skill problems or whether he or she was set up to fail by a system flaw. This means evaluating the culpability of an individual after an error, accident, or adverse event by using a simple algorithm (Figure 1-1) that asks the following (Reason, 1997):




If the answers are, respectively, no, no, yes, and no, then no personal blame accrues. In this culture the organization believes that a reasonable mechanism exists to evaluate untoward events, regardless of the outcome of the event, and the organization acts accordingly. Implicit in this, and an extension of it, is that actions are evaluated based on what is best for patients and not on who is advocating the actions. Hierarchy, formal or informal, is not material in discussions of this sort.


When medication errors are evaluated, the majority of the time the algorithm identifies capable, conscientious individuals who are working in an unsafe system and on whom no blame should be placed. James Reason, who first articulated this algorithm, is clear that blaming individuals for events beyond their control does not fix a problem or make a system safer, although it might satisfy the patient’s concerns or legal questions about accountability. The model allows individual issues to be separated quickly from system ones. What is critical is creating a safe environment that allows good nurses, physicians, and others to tell us when they make mistakes or have near misses.
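The four-question screen and its "no, no, yes, no" answer pattern can be sketched as a simple decision function. The question labels below are a paraphrase of Reason's culpability tests (harmful intent, impairment, the substitution test, and a history of unsafe acts), not the chapter's exact wording, and are an assumption for illustration:

```python
# Hedged sketch of the four-question culpability screen. Question labels
# paraphrase Reason's tests and are an assumption, not the text's wording:
#   1. Did the individual intend harm?
#   2. Was the individual impaired?
#   3. Would a similarly trained peer have acted the same way? (substitution test)
#   4. Does the individual have a history of unsafe acts?
def personal_blame(intended_harm, impaired, peer_would_act_same, unsafe_history):
    """Return False (no personal blame; the system is at fault) for the
    answer pattern no, no, yes, no; otherwise flag for individual review."""
    no_blame = (not intended_harm
                and not impaired
                and peer_would_act_same
                and not unsafe_history)
    return not no_blame

# A capable clinician set up to fail by an unsafe system: no blame accrues.
print(personal_blame(False, False, True, False))  # False
```

The value of encoding the screen this way is that the "no blame" outcome is the explicit default for competent people caught in flawed systems; blame becomes the exception that must be justified, not the starting assumption.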


Tragic examples highlight the need for this objective and clear evaluation mechanism, such as the overdoses in Indianapolis in 2006 of the blood thinner heparin. After the wrong concentration of heparin, 100 times too concentrated, was put in the automated pharmacy dispensing machine, nine very skilled individuals—six newborn intensive care unit nurses and three neonatologists—mistakenly took the wrong concentration of drug and administered it to very small infants. Three fatalities resulted (WRTV, 2008).


A similar episode occurred in 2007 involving the actor Dennis Quaid and his family in Los Angeles (Fox News, 2008). The media coverage of the Quaids’ cases has highlighted their trauma as patients and the outrage that occurs when patients feel they are not being told the truth. Missing are the processes required to identify the underlying causes and fixes of these errors. They require an engineering and systematic approach that begins with an objective view of the events and from which flow insights about systematic flaws and individual culpability.


Thought leaders on both sides of the Atlantic have developed schema to address this topic. James Reason in the 1990s described his incident analysis tree (Reason, 1997). In the past decade David Marx developed his Just Culture Algorithm for evaluating the choices made by frontline providers, which incorporates and expands on Reason’s work. In both cases the goal is to ensure appropriate accountability and an environment where every decision made by senior leadership and middle management is based on integrity and ethics.


In some serious patient injuries, there are contributing factors that are universally agreed upon. Other individual actions or events require careful analysis, teasing away bias or misconception, to arrive at a fair and just conclusion regarding culpability.


The advantage of promoting, nurturing, and supporting a climate perceived as fair is that it opens the door for discussion about problems and makes it acceptable to explore opportunities for improvement and to disagree and find resolution through testing and the quest for continuous improvement. A culture of fairness is fundamental to the implementation of a safe system. Every time a patient is brought into an operating room, the degree to which a fair and just culture is present in part determines the degree to which the environment supports the safety of the procedure.




TEAM LEADERS: THE CRITICAL ROLE OF LEADERSHIP


The active and committed engagement of executive and clinical leaders in systematically improving safety and quality is essential. One of the greatest challenges is aligning the frequently large number of strategic priorities in an organization with a simple, focused message that resonates with frontline clinicians caring for patients. Alignment and clarity of an organization’s patient safety goals and work is critical. Senior leaders need to clearly communicate the priority of safe and reliable care and model these behaviors on a daily basis. Effective leaders continually reinforce the values and the message that “this is the way we provide care within our organization.” Excellent examples of how to do this well come from (1) the communication at Ascension Healthcare to everyone working in their 71 hospitals: “Care that’s safe, care that’s right, and then we’ll have the resources to take care of the people with no care” (Pryor, 2006), and (2) the long-standing Mayo Clinic motto that goes back to Dr. Mayo himself, “Always in the patient’s best interests” (Davidson, 2002).


In Ascension’s case, every organizational priority and activity filters through and aligns with those three goals, and providers know them through internal activities, internal marketing, and time to reflect. There is real value in every employee’s knowing and working toward a short list of clear goals every day. That is what habitually excellent organizations do.


Leaders also are keepers and drivers of organizational culture. Setting the tone of how the organization values its people—how it treats them and expects them to treat each other—is at the core of organizational excellence, or the lack thereof. The presence of overt disrespect is extremely destructive within a culture. Unfortunately, this behavior is pervasive in most health care systems and creates unacceptable risk: nurses may be hesitant to call certain physicians with patient concerns because of the way they have been treated in the past. Sadly, hesitancy to voice a concern or approach certain individuals is a common factor in serious episodes of avoidable patient harm (Leonard et al, 2004a). Encouragingly, a growing list of leaders and hospitals are now dealing directly with this issue. If they do not, they pay with increased nursing turnover, poorer patient satisfaction, and increased clinical risk.


Team leadership is not an innate skill; it is learned (Heifetz, 1994). Physicians are by definition most frequently the leaders of their teams, and nowhere is this truer than in the environments where surgery is performed. Equally true, however, is that the best decisions about direction and goals—those decisions that are most likely to support reliability and safety—accrue from leadership shared among surgeon, anesthesiologist, nursing, and other team members and are feasible only with forethought, discussion about agreed-on norms of behavior, and practice.



Briefing


One act of good leadership is to take the team through a process called briefing. Unlike the “pause” or “time-out,” briefing is not a static, one-time event. Briefing is an ongoing process that ensures that all team members share a mental model of the team’s plan and presumes that, as the plan changes or needs to change, team members will be informed and engaged in making informed decisions.


Briefings in operating rooms are multistep affairs, ideally beginning with a gathering of the surgical team with the patient in the preoperative area and a discussion that engages the patient and team members in delineating a plan for that procedure. The briefing process might continue after the patient is sedated or asleep in the operating room, at which point a further briefing might ensue about any issues that team members might consider unsettling. These might include, for example, concerns about equipment logistics or a team member’s personal comments about what he or she believes are his or her limitations that day, stated as a request that other team members work more closely with him or her. In the United States a third part of the briefing process occurs just before incision and is the time-out. This is a regulatory requirement by The Joint Commission (2003) to ensure correct laterality of the procedure and identification of the patient and procedure.


A good initial briefing process has four components in which leaders do the following:



Ensure that all team members know the plan.


Assure team members they are operating in an environment of psychological safety (Bohmer and Edmondson, 2001; Roberto et al, 2006) where they may be completely comfortable speaking up about their concerns.


Remind team members of agreed-on norms of conduct (Hackman, 1990) such as specific forms of communication that increase the likelihood of accurate transmission and reception of information.


Expect excellence and excellent performance (Collins, 2001), reminding team members of their responsibility to do their best and remain, throughout, engaged in the performance of the team activity and centered on the plan and team goals.
