Division of Pulmonary and Critical Care Medicine, Eastern Virginia Medical School, Norfolk, VA, USA
Keywords
Evidence-based medicine · Research · Science · Critical care · Randomized controlled trials

There are in fact two things, science and opinion; the former begets knowledge, the latter ignorance.
—Hippocrates (c460–c377 BCE), Greek physician
Before medicine developed its scientific basis of pathophysiology, clinical practice was learned empirically from the events of daily experience in diagnosing and treating the maladies patients presented. Students learned as apprentices to clinicians, observing the phenomena of disease, the skill of diagnosis and treatment, and the outcomes of different remedies. Sir William Osler’s classic textbook of medicine was based almost entirely on his “personal experience correlated with the general experience of others” [1]. With advances in our understanding of human physiology and the pathophysiologic basis of disease, these empirical remedies fell by the wayside, and treatment became based on modalities shown to interrupt or otherwise modify the disease process. Until recently, it was considered sufficient to understand the disease process in order to prescribe a drug or other form of treatment. However, when these treatment modalities were subjected to randomized, controlled clinical trials (RCTs) examining clinical outcomes rather than physiological processes, the outcome was not always favorable. The RCT has become the reference standard in medicine by which to judge the effect of an intervention on patient outcome, because it provides the greatest justification for a conclusion of causality, is subject to the least bias, and provides the most valid data on which to base all measures of the benefits and risks of particular therapies [2]. Numerous ineffective and harmful therapies have been abandoned as a consequence of RCTs, while others have become integral to the care of patients and are now regarded as the standard of care.
Many RCTs are, however, inconclusive or provide conflicting results. In this situation, systematic reviews based on meta-analysis of published (and unpublished) RCTs are clearly the best strategy for appraising the available evidence. While meta-analyses have many limitations, they provide the best means of determining the significance of the treatment effect from inconclusive or conflicting RCTs (as well as from trials that demonstrate a similar treatment effect). Furthermore, as a result of publication bias, positive studies are more likely to be published, and usually in more prestigious journals, than negative studies. A clinician may base his or her therapeutic decisions on these select RCTs, which may then lead to inappropriate patient care. It is therefore important that common medical interventions be systematically reviewed and the strength of the evidence (either positive or negative) be evaluated. Although over 250,000 RCTs have been performed, for many clinical problems there are no RCTs to which we can refer to answer our questions. In these circumstances, we need to base our clinical decisions on the best evidence available from experimental studies, cohort studies, case series, and systematic reviews.
Alert
Be cautious in the interpretation of retrospective “before-after” studies and small, single-center, unblinded RCTs [3, 4]. The investigators of these studies may have a vested interest in the outcome of the study, resulting in “misrepresentation” of the true data. Generally, blinded studies show less of a treatment effect than unblinded studies evaluating the same intervention; both subconscious and conscious bias influence unblinded studies. Before-after studies are of particularly questionable scientific value if the variables and end-points are not defined before the study commences, the data are collected retrospectively, and there is no control arm (as other factors may influence the outcome). Prospective cluster controlled trials take these factors into account [5]. If a finding is true and valid, it can be reproduced; that is the remarkable thing about scientific exploration. So be very wary of unvalidated single studies, no matter how robust they appear [6].