35. Evidence-Based Practice, Research, and Quality Management

Cheryl Erler and Reneé Semonin Holleran




EVIDENCE-BASED PRACTICE


Worldwide attention has been focused on the need for healthcare providers to be accountable for the delivery of safe, quality care based on scientific information and knowledge of “best practices.” Evidence-based medicine (EBM) is a term that originated in the 1980s at McMaster Medical School, Hamilton, Ontario. Canada, Australia, and Great Britain are credited with the early EBM movement to promote best-practice decisions in healthcare. Evidence-based medicine, as defined by Sackett and colleagues,12 is the conscientious, explicit, and judicious use of current best practice information for making decisions about the care of patients. Evidence-based practice (EBP) includes integration of individual clinical expertise with the best available clinical evidence from systematic research.5 EBP is a problem-solving approach to the delivery of healthcare that combines the best evidence from well-designed studies with a clinician’s expertise and a patient’s preferences and values.1,2 A number of EBP models for nursing care are found in the literature, two of which are the Advancing Research and Clinical Practice Through Close Collaboration (ARCC) model, developed by Melnyk and Fineout-Overholt, and the Iowa model, developed by Titler and colleagues.9

The EBM process involves categorization of clinical practices according to the strength of the evidence. The general categories include systematic reviews, clinical evidence, and clinical practice guidelines. Levels of evidence (Box 35-1) are also referred to as the hierarchy of evidence. Systematic reviews and metaanalyses of randomized clinical trials (RCTs) are considered the strongest level of evidence (level I), and opinions and reports of expert committees are considered the weakest (level VII).9

BOX 35-1
Levels of Evidence

Level I: Systematic reviews and metaanalyses of randomized controlled trials (RCTs); evidence-based clinical practice guidelines based on systematic reviews of RCTs
Level II: Well-designed RCTs (one or more); single nonrandomized trials
Level III: Noncontrolled trials without randomization; systematic reviews of correlational/observational studies
Level IV: Well-designed case-control and cohort studies; single correlational/observational studies
Level V: Systematic reviews of descriptive and qualitative studies
Level VI: Single descriptive or qualitative studies
Level VII: Opinions of authorities or reports of expert committees

From Melnyk BM, Fineout-Overholt E: Evidence-based practice in nursing and healthcare: a guide to best practice, Philadelphia, 2005, Lippincott Williams & Wilkins.
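To make the hierarchy concrete, the ranking in Box 35-1 can be treated as a simple lookup: lower numbers mean stronger evidence, so the "best" study in a collection is the one with the lowest level. The sketch below is our own illustration (the dictionary keys and function name are invented, not part of the chapter or any standard library).

```python
# Illustrative sketch only: representative study designs from Box 35-1
# mapped to their evidence levels (1 = strongest, 7 = weakest).
EVIDENCE_LEVELS = {
    "systematic review of RCTs": 1,
    "well-designed RCT": 2,
    "noncontrolled trial without randomization": 3,
    "case-control or cohort study": 4,
    "systematic review of descriptive/qualitative studies": 5,
    "single descriptive or qualitative study": 6,
    "expert committee report": 7,
}

def strongest_evidence(designs):
    """Return the design carrying the strongest (lowest-numbered) level."""
    return min(designs, key=EVIDENCE_LEVELS.__getitem__)

designs_found = ["expert committee report", "well-designed RCT",
                 "case-control or cohort study"]
print(strongest_evidence(designs_found))  # -> well-designed RCT
```

In practice this prioritization is a judgment made while reading, but the ordering itself is exactly this kind of ranked lookup.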

Evidence-based practice guidelines provide direction for dealing with many common clinical situations. A number of electronic resources provide access to systematic reviews and evidence-based practice guidelines. The following sources disseminate high-quality reviews of healthcare interventions: the Cochrane Database of Systematic Reviews (CDSR); the Joanna Briggs Institute for EBP and Midwifery; and the Database of Abstracts of Reviews of Effects (DARE). A number of professional healthcare organizations publish clinical practice guidelines; these can be accessed through the National Quality Measures Clearinghouse and the Agency for Healthcare Research and Quality (AHRQ).1,5,6,7 Although systematic reviews are considered the highest level of evidence, some have design flaws. The practitioner needs to ask the following questions when evaluating any study:




▪ Is the study of high scientific merit, and does it have an appropriate research design?

▪ Are results consistent from study to study?

▪ Are all clinical outcomes, including benefits and potential harm, addressed?

▪ Does my patient population of interest correspond to the patients described in the studies?5

For identification of best practice information, the clinician needs to formulate an appropriate clinical question that narrows the scope of the literature search. PICO, a model developed by faculty at McMaster University, Ontario, is a strategy used for framing the clinical question. PICO is an acronym in which P stands for patient or population of interest, including diagnosis, age, and gender; I is for intervention, such as a medication or treatment; C is for comparison (not every question has a comparison component); and O is for outcome, which may include a primary and secondary outcome (Box 35-2).2 A sample clinical question might be, “Do patients with traumatic brain injury who are intubated during prehospital transport have fewer metabolic acid-base disturbances than patients who are not intubated?” This question involves a comparison group, but not all questions do. In a search of the literature, limiting electronic searches to articles with the terms “evidence-based” or “systematic reviews” can help narrow your search.13

BOX 35-2
PICO (Patient/Population, Intervention, Comparison, Outcome) Model for Evaluation of a Study’s Clinical Utility

Components of the clinical question, with an example:

Patient: Description of the patient or population of interest. Example: in patients with anxiety
Intervention: Intervention or therapy of interest (e.g., medication, treatment) or prognostic factor. Example: does music therapy
Comparison (optional): Alternative medication, treatment, etc., for comparison. Example: or support groups
Outcome: The clinical outcome, including a timeframe if appropriate. Example: reduced hospital visits
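The four PICO components map naturally onto a small data structure when organizing a literature search. The following Python sketch is our own illustration (the class and method names are invented), populated with the Box 35-2 example; a real search would feed these keywords into a database such as CINAHL or PubMed.

```python
# A minimal sketch of the PICO components as a data structure; the class
# and method names here are our own illustration, not a standard API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PicoQuestion:
    patient: str               # P: patient or population of interest
    intervention: str          # I: intervention, therapy, or prognostic factor
    comparison: Optional[str]  # C: optional comparison
    outcome: str               # O: clinical outcome

    def search_terms(self):
        """Keywords to combine when searching the literature."""
        terms = [self.patient, self.intervention, self.outcome]
        if self.comparison:
            terms.insert(2, self.comparison)
        return terms

# The Box 35-2 example: music therapy vs. support groups for anxiety
q = PicoQuestion("patients with anxiety", "music therapy",
                 "support groups", "reduced hospital visits")
print(" AND ".join(q.search_terms()))
```

The optional comparison component simply drops out of the keyword list when the question has none, mirroring the note above that not every question includes a comparison.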


RESEARCH


Research is systematic inquiry that uses disciplined methods to answer questions or solve problems with an ultimate goal of developing, refining, and expanding a body of knowledge. 10 The goal of healthcare research is validation of existing knowledge or discovery of new knowledge that provides the evidence or direction necessary to guide quality clinical practice. Nurses are responsible for using knowledge gained through research to define and refine critical care transport practices. The field of critical care transport is a relatively new area of research compared with other medicine and nursing specialties. It offers a wide spectrum of potential research questions based on method of transport and patient populations transported. Clinical practice can serve as a rich source of research ideas that include clinical observations, validation of treatment guidelines, testing of new procedures, replication of previous studies, and identification of gaps in existing medical or nursing literature. Additional areas for research in critical care transport include, but are not limited to, staff education, program administration, patient safety, economic concerns, development of a safety culture, healthcare delivery systems, and quality evaluation.

The research process is consistent across all disciplines. However, research studies are often accorded greater credibility when the research team is multidisciplinary. Approaches to research may be qualitative or quantitative. Quantitative research includes systematic collection of numeric information; qualitative research involves systematic collection of subjective data. The selection of one or the other or both methodologies typically depends on the question of interest. Research may also be grossly classified as basic or applied. Basic research is conducted to expand a body of knowledge; applied research is performed to find answers or solutions to existing problems. Applied research tends to be of greater immediate utility for defining best practices. 10

Before initiation of the research process, a fundamental understanding of common terms is essential. Box 35-3 contains definitions of several useful research terms. The initial step in the research process is identification of the question of interest. The question should be clear and specific. For example, what is the incidence rate of hypothermia in patients transported via air? Another research question might focus on identification of the relationship between an independent variable (the intervention variable) and one or more dependent variables (the outcomes measured). Such a question might read, what is the relationship between hypovolemia and hypothermia? An example of a qualitative study question is, how do family members of patients transported via air perceive the level of care provided? Once a specific question has been selected, a literature review can be conducted to identify existing studies and determine their levels of evidence. Pertinent studies then need to be critiqued for research strength and may be evaluated with established critique guidelines that focus on each step of the research process. Some questions to consider include:

BOX 35-3
Definition of Terms







Cluster sampling: A process in which the sample is selected by randomly choosing smaller and smaller subgroups from the main population.


Convenience sampling: A process in which a sample is drawn from conveniently available subjects.


Internal validity: The degree to which the changes or differences in the dependent variable (the outcome) can be attributed to the independent variable (intervention or group differences). This is related to the degree to which extraneous variables are controlled.


Judgmental sampling: Another name for purposive sample.


Population: All subjects of interest to the researcher for the study.


Purposive sampling: A process in which subjects are selected by investigators to meet a specific purpose.


Quota sampling: A process in which the subjects are selected by convenience until the specified number of subjects for a specific subgroup is reached. At this point, subjects are no longer selected for that subgroup, but recruitment continues for subgroups that have not yet reached their quota of subjects.


Sample: The small portion of the population selected for participation in the study.


Sampling: The process used for selecting a sample from the population.


Simple random sampling: A process in which a sample is selected randomly from the population, with each subject having a known and calculable probability of being chosen.


Snowball sampling: A process in which the first subjects are drawn by convenience; these subjects then recruit people they know to participate, and they recruit people they know, etc.


Stratified random sampling: A process in which a population is divided into subgroups and a predetermined portion of the sample is randomly drawn from each subgroup.


Systematic random sample: A process in which a sample is drawn by systematically selecting every nth subject from a list of all subjects in the population. The starting point in the population must be selected randomly.
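Three of the sampling strategies defined above can be sketched in a few lines of Python. This is our own illustration on a toy population of 100 hypothetical subjects (the strata names and sample sizes are invented), with the standard-library random module standing in for a real recruitment process.

```python
# Illustrative sketches of simple random, systematic random, and stratified
# random sampling from Box 35-3; all names and numbers here are invented.
import random

random.seed(1)  # fixed seed so the illustration is reproducible
population = list(range(100))  # subjects identified 0-99

# Simple random sampling: each subject has a known, equal chance.
simple = random.sample(population, 10)

# Systematic random sampling: every nth subject from a random starting point.
n = 10
start = random.randrange(n)  # the starting point must be chosen randomly
systematic = population[start::n]

# Stratified random sampling: a set portion drawn from each subgroup.
strata = {"ground transport": population[:60], "rotor-wing": population[60:]}
stratified = {name: random.sample(group, 5) for name, group in strata.items()}

print(len(simple), len(systematic), sum(len(s) for s in stratified.values()))
# -> 10 10 10
```

Note how the stratified draw guarantees representation from each subgroup, whereas the simple random draw could, by chance, miss a small subgroup entirely.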


Threats to Validity






Assignment of subjects: Changes in the dependent variable are a result of preexisting differences in the subjects before implementation of the intervention.


Biophysiologic measures: Measures of biologic function obtained through use of technology, such as electrocardiographic or hemodynamic monitoring.


Blocking: Assignment of subjects to control and experimental groups based on extraneous variables. Blocking helps to ensure that one group does not get the preponderance of subjects with a specific value on a variable of interest.


Concurrent validity: Criterion-related validity where the measures are obtained at the same time.


Construct validity: A form of validity where the researcher is not as concerned with the values obtained by the instrument but with the abstract match between the true value and the obtained value.


Content validity: Concern with whether the questions asked, or observations made, actually address all of the variables of interest.


Criterion-related validity: The results from the tool of interest are compared with those of another criterion that relates to the variable to be measured.


Determination of stability: Only appropriate when the value for the variable of interest is expected to remain the same over the time period examined.


External validity: The degree to which the results can be applied to others outside the sample used for the study.


Face validity: The instrument looks like it is measuring what it should be measuring.


Hawthorne effect: Subjects respond in a different manner just because they are involved in a study.


History: Natural changes in the outcome variable are the result of another event inside or outside the experimental setting but are attributed to the intervention instead.


Instrumentation: Changes in the dependent variable are the result of the measurement plan rather than the intervention.


Internal consistency: The degree to which items on a questionnaire or psychological scale are consistent with each other.


Interrater reliability: The degree to which two or more evaluators agree on the measurement obtained.


Loss of subjects: Changes in the dependent variable are a result of differential loss of subjects from the intervention or control groups.


Maturation: Changes in the dependent variable are a result of normal changes over time.


Observation: The activity of interest is observed, described, and possibly recorded via audiotape or videotape.


Predictive validity: Criterion-related validity where measurement with one instrument is used to predict the value from another instrument at a future point in time.


Psychological scale: Usually a number of self-report items combined in a questionnaire designed to evaluate the subject on a particular psychological trait, such as self-esteem.


Reliability: The degree of consistency with which an instrument measures the variable it is designed to measure.


Self-report: The variables of interest are measured by asking the subject to report on the perception of the value for the variable.


Stability: Determination of the degree of change in a measure across time.


Validity: How well the tool measures what it is supposed to measure.
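Interrater reliability, defined above as the degree to which two or more evaluators agree on the measurement obtained, can be quantified. The sketch below shows simple percent agreement alongside Cohen's kappa, a standard chance-corrected statistic that is not named in the chapter; the ratings are invented for illustration.

```python
# A sketch of quantifying interrater reliability: percent agreement plus
# Cohen's kappa (a standard statistic, not the chapter's own method).
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of subjects on which the two raters agree."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(rater_a)
    po = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum(counts_a[c] * counts_b[c]
             for c in set(rater_a) | set(rater_b)) / n ** 2
    return (po - pe) / (1 - pe)

# Two hypothetical crew members rating the same six patients
a = ["stable", "stable", "unstable", "stable", "unstable", "stable"]
b = ["stable", "unstable", "unstable", "stable", "unstable", "stable"]
print(round(percent_agreement(a, b), 2), round(cohens_kappa(a, b), 2))
# -> 0.83 0.67
```

Kappa is lower than raw agreement because some matches would occur even if both raters guessed, which is why it is the more conservative check when critiquing a study's measurement plan.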




1. How many subjects are in the study, and how was sample size determined?


2. Was the research design appropriate to answer the research question?


3. What are the results of the study? Are the results valid? Can they be generalized to my setting?


4. Will the results of the study impact clinical care or project design?
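Question 1 above asks how sample size was determined. As a hedged illustration of one standard approach (the normal-approximation formula for comparing two proportions, not a method the chapter prescribes), the calculation can be sketched in Python; the z-value defaults and the example proportions are invented for illustration.

```python
# A sketch of sample-size estimation for comparing two proportions.
# Defaults assume two-sided alpha = 0.05 (z = 1.96) and 80% power
# (z = 0.84); these and the example proportions are our own assumptions.
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group to detect p1 vs. p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = (p1 - p2) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g., detecting a drop in transport hypothermia incidence from 30% to 15%
print(n_per_group(0.30, 0.15))  # -> 118
```

Even a rough calculation like this lets a reviewer judge whether a study with, say, 20 subjects per group was ever likely to detect the effect it set out to find.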