Quality Assurance and Risk Management





Jerry A. Cohen

Robert S. Lagasse

John H. Eichhorn





Why Is Quality Management Increasingly Important Now?


▪ RECENT PERSPECTIVES

What has changed most about quality management in recent years is not the methodology used, but rather the recognition of its importance in controlling the cost and efficacy of medical practice by a growing number of hospitals, regulators, and insurers.2 Anesthesiologists have a long history of concern for safety and an appreciation for data-driven decisions and the innovations derived from them. The practice of anesthesiology is now widely regarded as safer and less frightening than it was just a few decades ago.


What Important Historical Events Affected Quality?

The motives and future of contemporary quality improvement are founded on events of the last century. To a large
extent, it is still the product “… of events set in motion by private foundations, organized medicine, and politics acting through government institutions…”3 Our government, medical societies, and politicians, working together (and sometimes at cross-purposes), have produced the current state of quality management. We need not look too far for those responsible; as the comic strip character, Pogo, put it, “We have met the enemy and he is us!”.4 Fortunately, during our search for quality improvement, the standards of performance improvement used by industrial engineering—with their more objective, clear-headed thinking—have been of great value.5 Although medical practice may be more akin to the repair business than the assembly line, there is a message to be gleaned from those who made a success of converting a series of ad hoc solutions into an efficient production pathway.


▪ RISE OF THE JOINT COMMISSION ON ACCREDITATION OF HEALTHCARE ORGANIZATIONS

The current quality and risk management (RM) procedures and the development of outcome measurement have a history reaching back over 100 years. Although the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) is not the only determinant of how institutions measure quality and improve performance, it is undoubtedly the largest single driving force for hospital quality management.

Early in the 20th century, events leading to the establishment of the JCAHO began to unfold. Following the Flexner report in 1910,6 a commentary on the shortcomings of the apprenticeship system of medical education in the early 1900s, E.A. Codman, at the Clinical Congress of the American College of Surgeons (ACS) held in 1912, encouraged the development of an outcomes approach for evaluating the competence of surgeons and introduced the concept of hospital standardization.7 In part, he stated that hospitals should look critically at their outcomes and their strong and weak points; contrast their results with those of other hospitals; base physician credentials on demonstrated ability; and be forthcoming about bad outcomes as a lever for increasing resources. His key concepts, although extremely well developed and the impetus for the current hospital survey system, were left to twist idly in the wind. They were to be rediscovered later by Donabedian8 and then systematically applied to healthcare by George Labovitz through his company, Organizational Dynamics, Inc. A few years after Codman presented his outcome-oriented approach, the ACS adopted it as its official position, leading ultimately to the Hospital Standardization Program (1917), which was the grandparent of the JCAHO.

In 1918, the first audit of almost 700 hospitals was carried out, 87% of which failed to meet the minimum standards.9 Thirty-two years later, the failure rate was less than 6%. Because of the enormous task of surveying approximately 2,500 participating hospitals, the Joint Commission on Accreditation of Hospitals (JCAH) was established in 1951. It refined the survey standards of the ACS and issued its first edition of the Standards for Hospital Accreditation in 1953. When Medicare was created by the Social Security Act of 1965, JCAH accreditation was deemed to qualify hospitals for payment. By 1982, the major portion of the JCAH audit related to quality assurance, with 62% of challenges to accreditation of the hospitals falling in this category.10


▪ LEGISLATIVE AND JUDICIAL HISTORY

In 1965, the statutory standards for participation in the Medicare Program were written. They required JCAH accreditation, and, by 1967, the amendments to the Social Security Act included requirements for utilization review.3 In 1972, the Professional Standards Review Organization (PSRO) program was established by federal law (Public Law 92-603). The practice of medicine changed over the ensuing 20 years in response to a plethora of new regulations that required physicians to: (i) meet national standards of practice, (ii) be credentialed on the basis of their ability to meet these standards, (iii) contain costs, (iv) limit hospital confinement, and (v) establish quality assurance programs to realize these goals. The Social Security Act of 1983 established the concept of reimbursement on the basis of diagnosis-related groups. The Social Security Act of 1986 mandated the establishment of a federal system for reporting quality of care problems on a recurrent basis.


Malpractice Legislation

As a result of the virtually uncontrolled malpractice climate at that time, Florida in 1985 adopted legislation mandating an internal RM program11 for all licensed medical care facilities. This act mandated disclosure to the state of:



  • All adverse occurrences to an institutional risk manager


  • Frequency and cause of problems, and the providers responsible (reported annually)


  • Malpractice claims brought against the institution or its practitioners, and measures taken to reduce risk, including reduction and/or suspension of clinical privileges and


  • Incidents resulting in death or central nervous system damage (report required within 3 days)

When this legislation did not stop the exodus of insurance carriers from the state, the legislature extended the law with the passage of the Medical Incident Recovery Act in 1988.12 In addition to a system for arbitration of and limits on malpractice claims, the Division of Medical Quality Assurance was created within the Division of Professional Regulation (DPR). The DPR was charged with developing a list of adverse incidents that would be deemed reportable under the law. The law held all persons immune from civil liability (including antitrust) who reported incidents of incompetence to DPR, except when
fraudulent or malicious. This step was significant because the statute obligated all practitioners to report adverse incidents that they observed. All documentation involved in this quality assurance process was mandated by state law, and therefore immune to discovery outside the DPR.


Case Law and Corporate Negligence

The concept of corporate negligence, as expressed in the courts, has served to write into case law that which was established by statute. Darling vs. Charleston Community Memorial Hospital13 established that a hospital that extends credentials to a physician is ultimately liable for controlling the quality of care and must therefore maintain a functioning peer review process. In this particular legal case, a leg was amputated after a cast was improperly applied by a physician who did not request orthopedic consultation.

The responsibility of a hospital in determining the capability of its staff was reviewed in Johnson vs. Misericordia Community Hospital.14 The court found the hospital that had appointed the physician in question to its staff liable for failing to discover that he had a long history of malpractice litigation and of privileges revoked at other hospitals.


Liability of Peer Reviewers

Because of the potential legal hazards of the peer review process (mandated by Title 19 of the Social Security Act), the process of review is protected from legal discovery by statute (section 1160 of the Social Security Act). Disclosure of information, except for legitimate peer review purposes, is punishable by a $1,000 fine and up to 6 months incarceration in a federal institution. Nonetheless, the protection extended by federal and state statutes was eroded when a physician sued on the basis that his hospital’s peer review committee violated the Sherman Antitrust Act when it stripped him of staff privileges. In the antitrust suit of Patrick vs. Burget,15 this physician was allowed by the federal district court to obtain confidential peer review documents normally considered protected from legal discovery by Oregon law and was awarded nearly $3 million. The case was remanded for retrial by the Federal Court of Appeals, which ruled that the disclosure was prohibited.

The concept of due process and the various state and federal acts protecting confidential peer review create a potential conflict and expose peer reviewers to substantial liability. Most of the suits impugning the peer review process have been brought by physicians who believe they have been judged improperly by their peers. The only real solution for this problem is scrupulous fairness and the use of review procedures based on written criteria, accepted in writing by the hospital staff, and applied in a uniform manner.


Rise of the Joint Commission

Throughout the 1970s and early 1980s, JCAHO accreditation required the adoption of quality assurance programs based on procedure and diagnosis-related audits. (In 1987, JCAH changed its name to JCAHO to reflect its expanded scope of activities.) The organization was required, on a quarterly basis, to select a problem for evaluation and then eliminate it. The audits were episodic in nature, did not encompass overall practice, and often concluded with the recommendation that the problem should continue to be studied. The audit era was a well-meaning attempt to look at major problems in care and correct them, but it lacked the following: (i) The tools to conceptualize the root causes of problems, (ii) the concept of the relationship of resources and processes to outcome, and, most importantly, (iii) a means of linking episodic improvements to long-term gain. It had no adequate theory of how to evaluate and improve quality. It relied on the “point and shoot” approach—getting rid of problem equipment and problem people to produce a good outcome, rather than making a series of improvements. This JCAHO approach mirrored a management style typical in America and in medicine at that time—strong managers eliminated problems once and for all, and then moved on to the next problem until they achieved perfection.

We still retain more than a hint of this belief system in our morbidity and mortality conferences—that is, the idea that if we can just understand and correct each individual mistake, we will improve. The chief problem with this theory is that it does not work, and for a very good reason: The number of possible mistakes we can make is infinite, and problems tend to recur. What was needed was a concept of how to relate what we do in medical practice, and how we do it, to the type of outcomes that result; this linkage of cause to effect is essential to achieving solutions that produce better outcomes.


What Are the Scientific Tools for Quality Management?


▪ ANALYSIS OF STRUCTURE, PROCESS, AND OUTCOME

Throughout the 1970s, as the JCAHO explored ways to measure and affect quality, the tools for quality improvement slowly matured under the influence of W. Edwards Deming in industry16 and Donabedian in medicine.8,17 In the 1960s, Donabedian established a model for the objective assessment of the quality of care. The concepts were derived from, but different from, industrial quality control, in which the elimination of variation in production through standardization provides the basis for quality improvement.18 Donabedian’s approach to quality evaluation and improvement introduced three enduring interdependent elements that continue to form the core of quality assessment systems today: Structure, process, and outcome.

Each of these elements applies to the quality management activities of administrators, nurses, and physicians
in healthcare organizations. Individual elements specific to each group will tend to receive more emphasis by that group, although all require the collection of verifiable data on the basis of predefined criteria. The goal is to define the causes of adverse outcomes and provide a basis for assessing improvements that translate into reduced risk. Historically, the elements have come into common use in the order listed.


Structural Review

This review validates the presence of adequate structural elements: physical facilities, equipment, personnel, management algorithms (clinical and logistical), safety measures, and expected performance limits. The definition of what constitutes adequate structure is essential for structural review to be useful. For example, expectations need to be identified for staffing levels and expected capacity to move patients along the surgical care pathway, from the operating room (OR) to the nursing unit. If the suitability of equipment is to be validated, the expected purpose, performance standards, and limits of that equipment should be specified.

Often, obvious structural deficiencies go unrecognized as such, even in extensive, well planned studies. Half of the deaths and neurological injuries in a classic study of 198,103 patients in 460 French institutions were caused by hypoventilation during the postoperative period.19 The study concluded that this was a result of the popularity of narcotic anesthesia in France. Similar results were obtained in a retrospective study of over two million anesthetics in North Carolina.20 Although narcotic usage may have been the precipitating factor, the lack of recognition of hypoventilation (an issue of process) may have been the more correctable root cause not addressed in either study.

Even when the objective of the study is to relate structural failure to the process of anesthetic administration, the definition of structure may be drawn too narrowly. Cooper, using critical incident techniques, described a 4% occurrence of critical incidents attributable to equipment failure in 1089 patients.21 Drug administration errors, IV apparatus problems, gas flow errors, anesthesia circuit disconnects, and other factors were defined primarily as errors in the process rather than the structure of providing anesthesia. However, these elements have structural significance as well, an appreciation reflected in the subsequent improvements to anesthesia machines. The study did conclude that many of these incidents could be reduced by changes in monitoring techniques or the adoption of different management algorithms, that is, structure-related changes.

The key to maximizing the use of structure analysis, therefore, is to include within its sphere the identification of errors in the decision-making process that can be modified by structural change, for example, intelligent alarm systems, redundant syringe labeling, or the automatic detection of potentially hazardous combinations of drugs. Elements of process, which can be reduced to algorithms or policies, for example, generally accepted, reproducible standards for monitoring, techniques, or procedures should also be included.22


Process Review

Elements that comprise process review include the proper use of techniques, management strategies and judgments, drugs, blood products, medical records, and surgical procedures according to accepted practice guidelines and standards to produce an acceptable outcome. Differences between process and structure may be reduced when the scientific basis of an action is so well understood and developed that it has defined indications and methods of execution. If medication orders are written with errors in dosage and spelling, a review of either the process or the structure might show that those errors could be eliminated through use of a computer-based order writing system.

A more complex example is the selection of patients for elective tracheal intubation. Assume that reliable evidence shows rapid sequence inductions fail in an unacceptable number of patients with a body mass exceeding a certain index. A policy for performing awake intubations in the entire cohort of these patients might be adopted, thereby converting a problematic decision-making process, by virtue of standardization, into one of structure. As with all standards, such a policy does not restrict the right of the individual anesthesiologist to decide a matter of process; it reduces a known risk, supported by evidence in a cohort, when one cannot determine in advance which member of the cohort is at risk.

For the process of assessment to be successful, the following issues should be defined: (i) Adverse events to be reduced, (ii) ideal outcome, and (iii) highly specific and verifiable changes in management. When these changes are adopted, they should lead to the desired outcome through a reduction in adverse events. Ultimately, it is not necessary to make an absolute distinction between errors in management (process), technical errors (process or structure), and purely equipment problems (structure), as long as the quality assessment process detects the problem and can correct it with the appropriate improvement in outcome.


Outcome Review

These types of evaluations involve endpoints of care, including morbidity and mortality, length of hospital stay, escalation of care including unexpected outpatient admission, and overuse or underuse of blood products, drugs or monitoring techniques. The purpose of the outcome review is to determine when a problem exists that requires corrective action. Because the multiple antecedents of outcome often reinforce or cancel each other, good care and bad care do not always result in proportionally good and bad outcome. Therefore, although outcome is the result of its antecedent causes, inferring these antecedent causes from outcome is not straightforward in medicine.

Episodic outcome assessment has a long tradition in anesthesiology in the form of mortality and morbidity
conferences; they often served before the 1980s as the only form of quality review. This practice was an integral part of surgery, from which anesthesia emerged as a discipline.23 Our use of outcome analysis to point to specific problems of structure and process is more recent and still evolving. Not until 1999 was a structured peer review (SPR) model in anesthesiology introduced that looked at system errors as critically as human errors.24 The measurement of defined indicators of outcome along the care pathway became a central piece of the JCAHO’s Agenda for Change paradigm in the late 1980s and had a major influence on outcome measurement. However, measurable outcome rarely points to the root cause of a problem, only to its existence.

Outcome may be positive or negative, although the terminology most commonly refers to adverse outcomes. Adverse patient-related occurrence (APO) is a relatively old term, but is as useful as any number of other terms—complication, adverse event, untoward outcome, or variance—that refer to negative outcomes related to care. While the occurrence of negative outcomes tends to be most frequently measured, positive outcomes, when expectations are met, are also important. An APO in this chapter designates a negative outcome related to patient care.

Sometimes outcome causes are obscure or multivariate. The usefulness of outcome studies in altering clinical practice depends on our ability to ferret out sometimes complex cause-and-effect relationships. Good outcome does not prove the absence of management errors. Conversely, the absence of patient management errors does not preclude a bad outcome because the mechanisms can be subtle and previously unknown.23 For example, the injudicious use of muscle relaxants may be reflected by the number of patients who require unexpected postoperative ventilation or reintubation in the acute recovery period. Many coexisting variables interact to cause the specific incidence of this problem: (i) The frequency with which relaxants are used determines the population of patients at risk and is inflated by the frequency with which relaxants are used in excess of that needed; (ii) the methods of reversal and testing of adequate reversal of relaxants may be the proximate cause of residual postoperative relaxation; and (iii) the monitoring capabilities and size of the staff available in the postanesthesia recovery area can amplify or reduce the frequency of the problem and may determine its early detection and intervention.

Multiple process and structure variables present the potential for changing the outcome if any one of the variables changes. If there is a high institutional use of muscle relaxants, but the intraoperative and postoperative monitoring and control of these agents is excellent, then the excess use is unlikely to be detected. However, if a shortage of twitch monitors, oximeters, capnographs, or postanesthesia recovery room nurses were to develop, a major increase in respiratory arrests might occur in the postoperative period.

In summary, outcome measurement is useful in evaluating the gross confirmation of successes and failures and results of changes in processes and structures. Its usefulness in evaluating individual performance or pointing to specific processes that need improvement is limited by uncontrollable variables and the difficulty of distinguishing provider-caused events from those caused by process. The same outcome may result from the combination of a competent provider using a flawed process and an incompetent provider using a well designed process. In addition, delayed outcome blurs its relation to the behavior of the provider and the quality of the process.


What Are the Roles of Industry, Statistical Quality Control, and Continuous Quality Improvement?

The origins of continuous quality improvement (CQI) in industry lie in the work of statisticians like Walter Shewhart, W. Edwards Deming, and Joseph Juran,25 all of whom introduced the concept that quality can be measured and analyzed to reduce the incidence of defects and improve product quality. It had little immediate impact in America, but it revolutionized manufacturing in Japan, where the methodology was refined and extended from 1950 to 1980. CQI was reintroduced to America in the early 1980s as industry became aware of its potential to simultaneously improve the quality and reduce the cost of production of goods and services.

The application of statistical process control to the measurement of quality in health care has been discussed extensively,26 including specific examples of its use in defining the quality of perioperative care.24 The JCAHO consciously borrowed from the ideas of George Labovitz in formulating its philosophy of quality improvement and using quality teams to improve performance.

Although Deming’s original writings were highly technical and difficult to read, even when they were intended for the public, the methodology used and its impact on the rise of Japanese industry were captured in a very accessible, well-written book by Walton.16 Today, we use Deming’s principles of statistical quality control to improve practices that work and eliminate those that do not. At the same time, we attempt to improve the efficiency and economy of medical practice.
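Deming-style statistical quality control can be sketched concretely. The following is a minimal illustration, not a method prescribed by this chapter: hypothetical monthly APO and case counts (all invented for this sketch) are used to build a Shewhart proportion (p) chart, computing a center line and 3-sigma control limits and flagging months whose APO rate falls outside them.

```python
# A minimal, hypothetical sketch of Shewhart-style statistical process
# control applied to a quality indicator: the monthly rate of adverse
# patient-related occurrences (APOs). All counts below are invented for
# illustration; none come from this chapter.

def p_chart_limits(event_counts, case_counts):
    """Center line and per-month 3-sigma limits for a proportion (p) chart."""
    p_bar = sum(event_counts) / sum(case_counts)   # overall APO rate
    limits = []
    for n in case_counts:
        sigma = (p_bar * (1 - p_bar) / n) ** 0.5   # binomial standard error
        limits.append((max(0.0, p_bar - 3 * sigma),
                       min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

apo_counts = [4, 6, 3, 5, 14, 4]               # APOs per month (hypothetical)
case_counts = [800, 850, 790, 820, 810, 830]   # anesthetics per month

p_bar, limits = p_chart_limits(apo_counts, case_counts)
for month, (k, n, (lcl, ucl)) in enumerate(zip(apo_counts, case_counts, limits), 1):
    rate = k / n
    status = "investigate" if not (lcl <= rate <= ucl) else "in control"
    print(f"month {month}: rate={rate:.4f}  limits=({lcl:.4f}, {ucl:.4f})  {status}")
```

Under this model, variation inside the limits is treated as common-cause noise; only out-of-limit months trigger a root-cause review, which is the practical difference between CQI and the older "point and shoot" audit approach.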

Eventually the JCAHO began to adopt the methods of quality improvement that were very successful in industry as a central principle of performance improvement. First, it introduced quality screening as a means for populating a statistically valid quality measurement database.27 Subsequently, it incorporated the structure, process, and outcome concepts of Donabedian; the performance limits introduced by Deming; and the methods used by Labovitz to focus on CQI. Responding to criticism that it provided hospitals with no strategic tools to help them survive in difficult times, or even guarantee that they would be accredited by their state inspectors,28 the JCAHO became more collaborative as a consultant for quality management in conjunction with its role as regulator.



How Are Standards Established and Quality Management Programs Operated?

The JCAHO sets forth its theory and regulations in its annually published accreditation manuals. Building on structure, process, and outcome, it transforms quality assurance from an almost useless bureaucratic waste of paper into a reasonably scientific approach to quality assessment and improvement. Part of the transformation of the JCAHO’s accreditation process into a valid indicator of health care quality occurred in response to external criticism.28

Although the principles that will be discussed were first published over 20 years ago,10 they are still used and are essential for an effective quality management program. They represent the foundation of effective standards because they underlie the essential elements of performance improvement.


▪ RELATIONSHIP TO THE QUALITY OF CARE

Reports that rely largely on structure analysis, that is, equipment maintenance, OR utilization or staffing, may influence the quality of care, but they do not suffice as absolute indicators of quality. However, reporting based on process and outcome is essential. In anesthesiology, quality of care is evaluated by relating anesthetic management to outcome. Assessment of the quality of care is based, in part, on measures such as the frequency of APOs in the perioperative period, adherence to standards, efficiency measurements that improve patient throughput and satisfaction, near and actual catastrophic failures, and sentinel events.


▪ CONSENSUS OF THE PRACTITIONERS

The JCAHO has, in general, avoided prescribing specific methods or outcome indicators for monitoring the quality of care, although it insists that institutions consider its Sentinel Event Alerts and National Patient Safety Goals in performance improvement planning.29 Problems monitored by the quality assessment process should be acknowledged as clinically significant by the practitioners affected by them. Because the process requires extensive voluntary reporting—similar to the reporting of income taxes in the United States—faith in the validity of the system is essential.

If practitioners believe the system is unfair, Draconian, or meaningless, they are unlikely to participate in a manner that yields statistically valid solutions. Physician self-reporting is a more reliable method of identifying adverse outcomes than either medical chart review or incident reporting. Reporting by chart reviewers is biased by the severity of outcome and severity of patient illness, whereas incident reports tend to focus on human error. However, when the data may result in improved patient care, all groups feel compelled to report adverse outcomes.30


▪ STATED QUALITY INDICATORS SHOULD RELATE TO PRACTICE

Indicators most commonly consist of a compilation of adverse occurrences that, when absent or kept below a certain frequency, indicate appropriate quality of care. They may also include demographic variables such as the volume of patients anesthetized, divided into categories by anesthetic technique, American Society of Anesthesiologists’ (ASA) physical status classification, provider, or other means. Other indicators include rates of compliance with certain standards of practice, such as satisfying criteria for performing various procedures or employing a minimum complement of monitors such as those defined by the ASA.

In 1986, JCAHO departed from its previous policy that required hospitals and their departments to develop their own quality indicators. It established a major project, The Agenda for Change, the purpose of which was to develop a quality measurement process clearly related to outcome.31 To implement this process nationwide, it developed national standards for indicators of quality care. Seventy previously used indicators of care were reduced to seven major categories, which included mortality, medication errors, complications, and nosocomial infections.

From these seven categories, clinically meaningful subcategories (adjusted for the severity of illness) were developed and related to the profile of services and associated risks of each specialty. Unfortunately, by the time the indicators were distilled down to the essential few that could be agreed upon, only approximately a half-dozen of the worst, most infrequent complications (such as neurological injury) remained for the specialty of anesthesiology. While the concept of the service-risk profile was good, the attempt to determine the indicators by the JCAHO instead of individual healthcare organizations had little potential to actually improve day-to-day practice. Further, a valid service-risk profile requires a larger number of indicators, making it impossible to reduce the numerators and denominators of care to numbers counted on two hands32 (Table 3.1). Twenty years later, some of the indicators being developed for use in current pay-for-performance initiatives33 are equally unlikely to represent a consensus of anesthesiologists’ views of important outcomes measurements.


▪ STATED OBJECTIVES SHOULD BE VERIFIABLE

To be verifiable, the quality improvement objectives should be limited to those that can be confirmed. For
example, if controlling dental trauma is the objective, one should be able to demonstrate that the number of damaged teeth resulting from intubation is at a specific verifiable level through the application of statistical process control. Intraoperative mortality is another clearly verifiable indicator of outcome. As with chipped teeth, however, it does not in itself define causation and, therefore, should be combined with a peer review process and continuously monitored.24
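One simple way to make such an objective verifiable, sketched here with invented numbers rather than any figures from this chapter, is to compare the observed count of damaged teeth against a stated target rate using an exact binomial tail probability.

```python
# A hedged illustration of verifying a stated quality objective: given a
# hypothetical target rate for dental trauma during intubation, an exact
# binomial tail probability indicates whether the observed count is
# plausibly consistent with that target. All numbers are invented.
from math import comb

def binom_upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), via the complement of the lower tail."""
    lower = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))
    return 1.0 - lower

n_intubations = 5000        # intubations performed this year (hypothetical)
observed_cases = 9          # episodes of dental trauma observed (hypothetical)
target_rate = 1 / 1000      # stated objective: no more than 1 per 1,000

p_value = binom_upper_tail(observed_cases, n_intubations, target_rate)
# A small tail probability suggests the true rate exceeds the target;
# a large one means the observed count is consistent with the objective.
print(f"P(X >= {observed_cases} | target rate) = {p_value:.4f}")
```

As the text notes, such a calculation establishes only whether the indicator sits at the stated level; determining why it does not requires pairing the check with peer review and continuous monitoring.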








TABLE 3.1 Anesthesia Service-Risk Profile

1. Type of patient served. Those who require:
   a. Surgical procedures
      (1) Inpatient
      (2) Outpatient
   b. Obstetrical services
   c. Diagnostic studies and special procedures
   d. Pain management
2. Major patient care services
   a. General anesthesia
   b. Regional anesthesia
   c. Monitored anesthesia care
   d. Management of chronic and acute pain
   e. Postoperative recovery from anesthesia and surgery
   f. Intensive care management
3. Major clinical activities
   a. Diagnostic
      (1) Preoperative evaluation and consultation
      (2) Postoperative evaluation and consultation
      (3) Evaluation and consultation of chronic and acute pain syndromes
      (4) Postoperative care
   b. Therapeutic activities
      (1) Preoperative medication and sedation
      (2) Anesthesia (general, regional, or monitored anesthesia care)
      (3) Pain therapy
         (a) acute
         (b) chronic
      (4) Pain management in labor and delivery
         (a) labor and delivery care
         (b) delivery care only
      (5) Sedation and monitoring for special procedures
      (6) Postoperative
         (a) pain management
         (b) general supportive postoperative care
      (7) Intensive care
   c. Preventive
      (1) Diagnosis and recommendation for the treatment of hyperpyrexia (hyperpyrexia clinic)
      (2) Preventive maintenance by Medical Engineering Department
4. Important aspects of care
   a. High-risk (to patient or staff) procedures
      (1) Anesthesia delivery to patients with:
         (a) full stomachs
         (b) fetal distress
         (c) cardiac instability
            i. cardiac disease, congenital or acquired
            ii. volume depletion
            iii. hypertension
         (d) airway or ventilatory compromise
         (e) metabolic derangement
            i. malignant hyperpyrexia
            ii. diabetes
            iii. electrolyte disorders
         (f) contagious disease requiring isolation
            i. hepatitis
            ii. AIDS
         (g) neurological damage
            i. increased intracranial pressure
            ii. evolving stroke
      (2) Blood products administration
      (3) Invasive monitoring or other invasive techniques
         (a) arterial catheters
         (b) central and pulmonary artery catheters
         (c) bronchoscopy for placement of endotracheal or endobronchial tubes
   b. High-volume procedures
      (1) General anesthesia
      (2) Regional anesthesia
      (3) IV access
      (4) Intubations
      (5) Preoperative evaluations
      (6) Postoperative evaluation and discharge
      (7) Obstetric analgesia and anesthesia
      (8) Pain blocks
      (9) Noninvasive monitoring
      (10) Postoperative pain management
   c. Problem-prone aspects of practice
      (1) Rapid sequence inductions
      (2) Nasal intubations
      (3) Invasive monitoring
      (4) Blood transfusions
      (5) Airway management during airway surgery
      (6) Reintubation, postoperatively
      (7) Anesthesia for patients with unstable cardiovascular status, including hypertensive patients
      (8) Nerve blocks
5. Recognized indicators of quality care. Documentation of:
   a. Preoperative evaluation
   b. Recovery room Aldrete scores and discharge note
   c. Postoperative evaluation
   d. Quality assurance reports on individual patients
6. Acceptable criteria used for:
   a. Performing procedures of risk
   b. Evaluating adverse occurrences (“avoidable” vs. “unavoidable”, with care evaluated as “appropriate” or “inappropriate” for the problem)
   c. Selecting cases for detailed review
   d. Evaluating outcome

AIDS, acquired immunodeficiency syndrome; IV, intravenously.


Jul 15, 2016 | Posted by in ANESTHESIA | Comments Off on Quality Assurance and Risk Management
