Controversies abound in the clinical management of pain, and there are enormous geographic variations in care. Lumbar spine surgery rates historically vary fivefold among developed countries, with rates in the United States being highest and rates in the United Kingdom being among the lowest,1 yet patient outcomes appear to be broadly similar across countries. In smaller geographic areas, variations are also striking. Within the United States, rates of lumbar fusion surgery among Medicare enrollees vary more than 20-fold between regions, from 4.6 per 1,000 enrollees in Idaho Falls, Idaho, to 0.2 per 1,000 in Bangor, Maine.2 Within Washington State, county back surgery rates vary more than sevenfold, even after excluding the smallest counties.3
Another problem in pain management is the successive uptake of a series of fads in treatment. Research has eventually discredited many of these, but they enjoyed widespread use, with substantial costs and side effects, before they were found to be ineffective. Examples include sacroiliac joint fusion for the treatment of low back pain, coccygectomy for coccydynia, bed rest and traction for back pain, and many others.4 This phenomenon is prominent in the field of pain medicine but not unique to it. Examples of abandoned therapies from other areas of medicine include internal mammary artery ligation for treating angina pectoris, gastric freezing for duodenal ulcers, and vitamin E and hormone therapy for prevention of cardiovascular events.5,6,7 Promoting such ineffective treatments drains resources from more useful interventions, produces side effects, and eventually damages professional credibility.
Despite welcome breakthroughs in basic science research on pain, increases in knowledge regarding optimal ergonomics of work tasks, and the development and use of more technologically advanced medical therapies, evidence indicates an increasing prevalence of chronic back pain and disability. In the state of North Carolina, the prevalence of chronic, impairing back pain more than doubled from 3.9% in 1992 to 10.2% in 2006.8 A large and steady rise in use of opioids, surgery, and interventional therapies for low back pain has not been associated with improved health status but appears to be an important factor contributing to increases in health care expenditures associated with back pain.9,10,11 Thus, despite impressive gains in our understanding of the molecular and cellular origins of pain, there is an important gap in translating this knowledge into effective clinical management. One reason may be the widespread reliance on inadequate research designs that lead to conflicting, confusing, or misinterpreted results. Biostatistical and epidemiologic methods make it possible to substantially improve this situation, but many key principles are not widely appreciated.
Uncontrolled Studies Paradigm
Historically, much of pain treatment research consisted simply of uncontrolled studies in which clinicians treated a group of patients and then reported mean pain scores or the proportion who improved. Such studies are often referred to as case series, although the alternative terms before-after study or treatment series may help distinguish them from studies that identify “cases” based on an outcome (such as an adverse event) rather than an exposure (such as a medical intervention) and that assess patients at only one point in time.12 The before-after study design remains popular in part because it usually does not require extensive resources, but it is vulnerable to many pitfalls.13
First, many uncontrolled studies are retrospectively reported. After treating a certain number of patients, the clinician looks back at his or her experience and tries to summarize the characteristics, treatments, and outcomes of the patients studied. Unfortunately, in this retrospective approach, there is often incomplete baseline information on patient characteristics. For example, factors such as age, sex, previous surgery, disability compensation, neurologic deficits, psychological comorbidities, and pain duration often have a major influence on the outcomes of back surgery. Yet, in a systematic review of outcome studies on surgery for spinal stenosis, 74 relevant articles were found, but less than 10% mentioned all these patient characteristics.7
Another problem with the retrospective approach is that it can be difficult to identify an “inception cohort” of all patients (or a random sample) who met specified criteria and received the intervention. A systematic review of 72 uncontrolled studies of spinal cord stimulation for chronic low back pain or failed back surgery syndrome found that less than one-quarter clearly described evaluation of a consecutive or representative sample of patients.14 In such studies, it is impossible to know if patients with poorer results were excluded for arbitrary reasons, or how many patients received the treatment but were lost to follow-up. If patients excluded from analysis or lost to follow-up were more likely to experience poor outcomes than those who were followed, this could result in serious overestimates of benefits.
A third problem with uncontrolled studies is that even if the researcher collects data prospectively, there is typically no blinding of patient, therapist, or outcome assessor to the nature of the treatment provided. This allows important unconscious—or conscious—biases to affect assessments. This is particularly important for outcomes related to pain, which by nature are subjective. Most of us would question the reliability of outcomes rated by a surgeon evaluating his or her own patients, and yet, this is the norm in much of the literature.
By definition, uncontrolled studies do not include control groups for comparison. The assumption seems to be that patients with painful conditions, especially chronic pain, will not improve unless effective treatment is given. However, there are many reasons why patients improve in the face of ineffective therapy, some of which are listed in Table 10.1. First, the natural history of many painful conditions is to improve spontaneously. This may be true even for patients with long-standing pain, who sometimes improve for unclear reasons. For acute conditions such as acute low back pain, rapid early improvement is the norm.15 Second are placebo effects, which are not well understood but are consistently underestimated and may be particularly important when assessing pain.7 Several factors may mediate placebo effects, including patient expectations,16 learning and conditioning from previous treatments, reduction of anxiety, and endorphin effects. Placebo effects for pain treatments may be getting larger. In 1996, patients in US clinical trials reported that drugs relieved neuropathic pain 27% more than placebo, but by 2013, the difference had decreased to 9%,17 a trend that appeared due to a stronger placebo response in the setting of stable drug effects.
Another poorly appreciated factor is regression to the mean.18 This term was coined by statisticians who observed that when a group of patients is assembled because of the extreme nature of some clinical condition, the condition tends to return over time to a less severe, more average level. Figure 10.1 shows what we often assume to be the course of chronic pain problems, with a steady level of severity that falls after successful intervention. However, the second panel is more likely to represent the true natural history, with good days and bad days, and fluctuations being the norm.19 Patients seek health care when their symptoms are most extreme. We might easily be misled into believing that improved outcomes are due to the intervention, when in fact random fluctuations are why their symptoms have returned toward a more average level. As Sartwell and Merrell20 pointed out, “the term chronic has a tendency to conjure up ideas of stability and unchangeability … it is changeability and variation, not stability, that is in fact the dominant characteristic of most long-lived conditions.”
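Regression to the mean is easy to demonstrate with a short simulation. The sketch below, written in Python with illustrative numbers rather than data from any study cited here, gives each simulated patient a stable underlying pain level plus day-to-day fluctuation, enrolls only those whose score is high on the day they seek care, and then examines their untreated follow-up scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each patient has a stable underlying pain level plus day-to-day fluctuation.
true_pain = rng.normal(5.0, 1.0, n)          # long-run average on a 0-10 scale
visit_1 = true_pain + rng.normal(0, 1.5, n)  # pain on the day care is sought
visit_2 = true_pain + rng.normal(0, 1.5, n)  # pain at follow-up, with no treatment

# Patients enter the "case series" only when pain is at its worst.
enrolled = visit_1 >= 7.0

print(f"Mean pain at enrollment: {visit_1[enrolled].mean():.2f}")
print(f"Mean pain at follow-up:  {visit_2[enrolled].mean():.2f}")
# Follow-up scores are substantially lower even though nothing was done:
# the extreme enrollment scores partly reflected transient fluctuation.
```

The apparent improvement arises entirely from selecting patients at an extreme moment; no intervention, placebo, or natural healing is involved in the simulation.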
A host of other nonspecific effects also can affect assessments of patient improvement. Increased concern, conviction, enthusiasm, and attention from therapists, researchers, and clinical staff may all have positive but nonspecific effects on patient outcomes.
Table 10.2 shows a potential consequence of all these factors, using data from a clinical trial of patients with chronic low back pain.19 The 31 patients in Table 10.2 had had back pain for an average of 4 years. They received a clinical intervention that resulted in 20% to 44% improvements in pain frequency, severity, and function, all of which were highly statistically significant. However, this seemingly effective treatment for chronic pain was a sham transcutaneous electrical nerve stimulation (TENS) unit, along with hot packs twice a week. This was the control arm of a randomized trial and illustrates the substantial improvements that may occur among those with long-standing pain who receive ineffective treatments.
Finally, an issue that has begun to receive more attention is that uncontrolled studies are highly susceptible to publication bias.21 There is little incentive for clinicians to publicize poor or even average results. Estimates of efficacy from uncontrolled studies that get published will therefore often overrepresent the most positive results.
There is considerable room for improvement in the design and conduct of uncontrolled studies of pain interventions.14,22 However, even when conducted well, the ability of uncontrolled studies to provide reliable information about treatment efficacy will always be limited. Exceptions can occur when the relationship between an intervention and outcomes is obvious, the effects are immediate, and the effects are so dramatic that they cannot be explained by other factors.23 Examples include surgery for appendicitis, eyeglasses for correction of refractive error, and cataract surgery. For nearly all pain conditions, however, there are many plausible alternative explanations for the observed changes in outcomes, and reliable conclusions about treatment efficacy require the use of more rigorous study designs. There is simply too much “noise” to sort out whether outcomes are due to the treatment or to other factors.24
CONTROL GROUPS: AN IMPROVEMENT OVER THE CASE SERIES
Given the variety of factors that may produce improvement with ineffective therapy, it is incumbent on investigators to have a comparison group of subjects with the same likelihood of improvement as the treatment group but who do not receive the active therapy. The goal should be to minimize the potential differences across groups in the effects of the various nonspecific causes for improvement that are listed in Table 10.1. With this goal in mind, the appropriate comparison group is unlikely to be one that receives no care at all. Patients in such a group would not experience placebo effects or the nonspecific effects of clinical concern and enthusiasm. The importance of having an adequate placebo is illustrated by a trial that found acupuncture more effective than no treatment for chronic low back pain but no more effective than sham acupuncture.25 Similarly, using a “waiting list” control group is often suboptimal because these patients experience none of the placebo or nonspecific effects of the intervention group. A preferable control group would be one that receives other credible, appropriate care that does not include the specific treatment under study. This might consist of “usual care” supplemented by a placebo of some sort. The placebo should be difficult to distinguish from the intervention under study so that it is perceived as being as likely to help as the active therapy. This is the reason for providing inactive pills in the control groups of drug trials, but even for nondrug treatments, credible placebos should be provided when possible. Examples include the use of sham TENS units in trials of TENS, the use of sham injections in trials of interventional therapies, the use of subtherapeutic weight in trials of traction, or “misplaced needling” as a control for acupuncture.
In some cases, it may be unethical or impossible to provide a true placebo. Examples include many surgical interventions, psychological therapies, and rehabilitation interventions. In such situations, a reasonable alternative is to provide a control treatment that creates some sense that patients are receiving an additional intervention and attention but is not likely to have a strong effect on outcomes. One example might be a brief educational brochure.26
In addition to choosing an appropriate control intervention, it is also important to make the treatment and control groups as similar to each other as possible in other ways. Confounding is a critical concept that refers to variables associated with both the intervention being evaluated and the observed outcomes. A classic example of confounding is the association between alcohol consumption and lung cancer. This association is confounded by smoking, which is associated with alcohol consumption and is also an independent risk factor for lung cancer. Examples of common confounders in pain research include severity of baseline pain or functional deficits, psychological and medical comorbidities, age, and use of other therapies. The consequence of confounding is that the observed treatment effect is a poor estimate of the true effect. Confounding variables can produce either an overestimate or an underestimate of treatment benefits and can sometimes even result in an apparent positive effect when the true effect is negative (or vice versa).
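The smoking example can be made concrete with a short simulation. In the Python sketch below, the prevalences and risks are invented purely for illustration: smoking drives both alcohol use and lung cancer, while alcohol itself has no effect. A crude comparison nonetheless suggests that drinkers have a higher risk, and the association disappears once the analysis is stratified by the confounder.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Smoking is the confounder: it influences both alcohol use and cancer risk.
smoker = rng.random(n) < 0.3
alcohol = rng.random(n) < np.where(smoker, 0.6, 0.3)    # smokers drink more often
cancer = rng.random(n) < np.where(smoker, 0.02, 0.002)  # only smoking raises risk

def risk(mask):
    """Proportion of people in the group who develop lung cancer."""
    return cancer[mask].mean()

# The crude comparison makes alcohol look like a risk factor...
print(f"Crude risk ratio (drinkers vs. nondrinkers): {risk(alcohol) / risk(~alcohol):.2f}")

# ...but within each stratum of the confounder the association vanishes.
for s, label in ((True, "smokers"), (False, "nonsmokers")):
    stratum = smoker == s
    rr = risk(stratum & alcohol) / risk(stratum & ~alcohol)
    print(f"Risk ratio among {label}: {rr:.2f}")
```

Stratifying (or statistically adjusting) on the confounder recovers the true null association, which is the logic behind the adjustment strategies discussed later in this section.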
Selection of controls to minimize the potential for confounding is often a challenge. Control groups that are convenient to assemble are also unfortunately frequently associated with important pitfalls. For example, it would be unwise to choose patients who did not have adequate insurance coverage for the treatment being provided as a control group because insurance coverage is related to important sociodemographic characteristics. Patients with the best insurance are typically those with the highest salaries and the most satisfying jobs, are happier with their insurance, and are more likely to practice healthy behaviors. Failure to adjust for socioeconomic status in observational studies could have resulted in the subsequently disproven belief in the positive cardiovascular benefits of hormone replacement therapy.27 Similarly, selecting patients nonadherent with intended therapy as a control group is a flawed strategy. In a large-scale study of cholesterol-lowering therapy, control patients were divided among those who took more than 80% of their placebo tablets and those who took less than 80%.28 Even after adjusting for 40 coronary risk factors, there were enormous differences in mortality between the adherent and nonadherent groups. Patients who were adherent with their placebos had a 5-year mortality of only 16%, whereas those who were not adherent had a 5-year mortality rate of 26% (P < .0001). These findings were probably related to important differences between the groups that were not reflected in their coronary risk factors. These may have included other health habits, behaviors, attitudes toward risk, and occupations. Thus, nonadherent patients are often strikingly different from adherent patients, and we cannot assume that any differences in outcome are related only to treatment effects.
Sometimes, the issues of proper selection of control patients and treatments are intertwined. A study that assigned patients with presumed discogenic low back pain to intradiscal electrothermal therapy (IDET) or rehabilitation therapy based on their insurance coverage for IDET reported an average 4.5-point improvement in pain scores.29 Subsequent randomized trials found either no advantage of IDET or only a 1-point difference between IDET and sham treatment.30,31 In addition to potential socioeconomic differences related to differential insurance coverage, patients who were denied IDET probably had lower expectations about the likely benefits of rehabilitation therapy, particularly because some had previously received this treatment but had not responded.
Confounding by indication is particularly important in studies that assess treatment efficacy. It refers to the strong, natural (and appropriate) tendency for clinicians to selectively use therapies in patients most likely to benefit. A striking example of confounding by indication is a study of new users of nonsteroidal anti-inflammatory drugs that found use of ulcer-healing drugs associated with a 10-fold increase in risk of gastrointestinal bleeding or perforations.32 Obviously, ulcer-healing drugs do not cause ulcers. Rather, the increased risk of gastrointestinal complications in patients deemed appropriate for ulcer-healing drugs dwarfed any protective effect of the drugs.
There are ways to minimize or adjust for the effects of confounding. These include matching patients on the variables thought to be the most important potential confounders, restricting enrollment to patients defined by a narrow set of inclusion criteria, and statistically adjusting analyses for known confounders.33 Nonetheless, the effects of confounding can be dramatic even when one or more of these strategies are employed. For example, confounding by indication was strong in the study on ulcer-healing drugs, even though it attempted to restrict enrollment to lower risk patients who had neither a previous ulcer nor a previous prescription for an ulcer-healing drug.32
Matching also may not be enough to overcome effects of confounding. Table 10.3 shows how one might assemble two groups of objects that are well matched on five different characteristics and yet literally be comparing apples and oranges.19 Table 10.4 shows real data from a comparison of outcomes of two groups of Medicare patients who underwent low back surgery. They were matched on diagnosis (all had spinal stenosis), gender, age, insurance (all Medicare), and surgical procedure (all had a laminectomy without fusion). Despite being well matched on these five characteristics, the likelihood of reoperation differed almost fourfold between the two groups. Differences of this magnitude might easily be attributed to some dramatic advantage of the treatment used in group A. However, these groups were intentionally assembled in such a way that group A was composed of African American patients who had not had prior surgery and group B was composed of white patients with prior surgery.19 These two characteristics, which might easily have been overlooked, accounted entirely for the difference in reoperation rates. Unfortunately, it usually is not as simple as matching on a few critical and easily measured variables. The cholesterol-lowering placebo study described earlier shows how even matching (or adjusting) for 40 different risk factors may not capture important differences between two groups of patients.28
If waiting lists, patients with insufficient insurance coverage, nonadherent patients, or even carefully matched patients receiving appropriate placebo treatments make poor control groups, is there a better solution? Fortunately, the concept of random allocation provides an ideal method of establishing a comparison group that is likely to be similar in nearly all respects to an intervention group.
Randomized Allocation of Treatment and Control Groups
The term randomized trial has become familiar among clinicians and yet is often misunderstood. Some assume that a randomized trial is one in which patients are randomly selected from a population of interest. However, just the opposite may be true. Patients may be highly selected from a group of potential candidates based on specific characteristics that make the study treatment safe and likely to succeed. Randomization does not refer to the selection of patients to be studied but rather to the patients’ allocation to the treatment or the control group.
Why is randomization such a desirable way of creating a control group? It is attractive because the problem of confounding is largely eliminated.34 Because it is never possible to completely understand or measure all confounders, residual confounding is always a potential issue in studies that are not randomized.35 With random allocation, we may not even know the important prognostic factors, but they will be equally distributed (given a fair randomization and enough patients) between the treatment and control groups. Effective randomization requires the generation of a truly unpredictable (random) allocation sequence as well as its successful implementation via allocation concealment.36
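This balancing property can be seen in a small simulation. In the Python sketch below, the sample size and the “unmeasured risk” variable are invented for illustration: subjects are randomized with no reference to the prognostic factor, yet the factor ends up nearly identical in the two arms.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# An important prognostic factor that the investigators never measured.
unmeasured_risk = rng.normal(0.0, 1.0, n)

# Fair randomization: each subject has a 50/50 chance of either arm,
# decided without regard to any patient characteristic.
treatment = rng.random(n) < 0.5

print(f"Mean unmeasured risk, treatment arm: {unmeasured_risk[treatment].mean():+.3f}")
print(f"Mean unmeasured risk, control arm:   {unmeasured_risk[~treatment].mean():+.3f}")
# With enough patients, the arms are nearly identical on the factor even
# though it was never recorded and played no role in the allocation.
```

With only a handful of patients, chance imbalances can still occur, which is why the balance produced by randomization is a large-sample property.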
There is sometimes confusion about what constitutes randomization. Randomization requires using a list of random numbers, which may be taken from a published table or generated by a computer program. Each successive subject has an equal likelihood of being assigned to each treatment arm, and the order in which assignments occur is unpredictable. Alternating assignment, in which the first patient is assigned to treatment, the next to placebo, the next to treatment, and so on, is not random because it is predictable. Similarly, assigning patients without conscious bias, or haphazardly, is not the same as random allocation. Using hospital numbers, date of birth, or day of the week is also not randomization. If day of the week is used, a patient could simply come in (or be told to come in) on the day that the desired intervention will be offered. Allocation concealment means that the allocation sequence remains unknown at least until after patients have been assigned to therapy, thus preserving the actual randomization. A traditional method to help preserve allocation concealment is the use of opaque sealed envelopes containing the treatment assignment. An increasingly common alternative is to have an offsite facility keep the random sequence, so that research personnel cannot know the next assignment as a subject is enrolled.37
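As a concrete illustration of a properly generated and concealed allocation sequence, the Python sketch below builds a permuted-block randomization list of the kind such an offsite facility might hold. The block size, arm labels, and function name are illustrative choices, not a description of any specific trial cited here.

```python
import random

def permuted_block_sequence(n_blocks, block_size=4, arms=("treatment", "control")):
    """Build a randomization list from randomly permuted blocks.

    Within each block the arms are balanced, but the order inside a block
    is unpredictable, unlike an alternating assignment scheme.
    """
    rng = random.SystemRandom()       # OS entropy rather than a guessable seed
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm  # e.g., two "treatment" and two "control"
        rng.shuffle(block)            # random permutation within the block
        sequence.extend(block)
    return sequence

# The full list would be held offsite (or in sealed opaque envelopes) and
# revealed one assignment at a time, only after each subject is enrolled.
if __name__ == "__main__":
    print(permuted_block_sequence(n_blocks=3))
```

Because the list is generated from an unpredictable source and released only after enrollment, no one recruiting patients can anticipate the next assignment, which is the essence of allocation concealment.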
A dramatic example of the effects of randomization in pain research is a systematic review of TENS therapy for postoperative pain that found that 15 of 17 randomized trials of efficacy showed no benefit.38 By contrast, 17 of 19 nonrandomized studies showed a substantial positive treatment effect. Some investigators have also quantified the magnitude of bias that occurs when allocation concealment is inadequate. One such study, shown in Table 10.5, compared trials that used randomization with adequate allocation concealment, trials that used randomization with inadequate allocation concealment, and trials with nonrandom allocation of controls.39 The investigators examined a series of treatments for acute myocardial infarction and, as the table shows, demonstrated that maldistribution of prognostic factors was least with randomization with adequate allocation concealment and greatest with nonrandom allocation. Similarly, the likelihood of finding a substantial improvement in case fatality rate rose dramatically, from just 9% of trials with randomization and adequate allocation concealment to almost 60% of trials with nonrandom allocation. Other studies suggest that, on average, inadequate allocation concealment inflates results by about 40% compared to studies with adequate allocation concealment.37,40
Why is allocation concealment so important? There are probably several reasons. Failure to conceal allocation makes it easy to subvert the randomization process. If this occurs, confounding by indication can be as much of a problem as in nonrandomized studies.37 Some overt methods that have been used to bypass randomization include adjusting treatment assignments based on posted allocation sequences or ignoring allocation to treatments perceived as less desirable.41 Inadequate allocation concealment can also have more subtle effects. If the investigator has a bias as to which treatment group is more effective, even a subconscious bias, he or she may approach the next subject differently based on knowledge of what the next treatment assignment will be. This may affect the way in which a clinical trial is presented to a patient, the enthusiasm with which consent is sought, or the rigor with which eligibility criteria are applied.