
Peter S. Staats and Sudhir Diwan

"Doctors are men who give drugs of which they know little, into bodies of which they know less, for diseases of which they know nothing at all." (Voltaire, 1770s)

For thousands of years, physicians blithely administered a variety of concoctions intended to treat pain: a few worked, many eventually fell by the wayside, and others were reluctantly abandoned when they failed to stand up to rigorous therapeutic analysis. Thus, although healers throughout antiquity accurately touted the efficacy of opium, now known to contain the potent analgesic morphine, and of willow bark, the source of aspirin, dusty tomes also contain scores of therapeutic recommendations that have little merit in the management of pain.

The Skillful Physician, the mainstay of seventeenth-century medicine, unequivocally recommends applying hot goose oil to treat sciatica.

The Concept

Evidence-based medicine, the concept that physicians should use the best available data to guide their practice, has become the mainstay of modern medicine. Comparative effectiveness research, which compares two accepted strategies to determine which therapies are most effective, is increasingly considered the standard.

The Flaws and Frustrations

While no one can argue with the principle that physicians should use the best available data to guide practice, the concept is now being misinterpreted and distorted, largely by insurers and other carriers, to deny appropriate care. Years ago, the same week the media reported that the CEO of a major health care insurance company had a compensation package of over a billion dollars, one of the authors (PSS) was called to emergently evaluate a patient in the ICU, covered by that insurer, who had a lumbosacral radiculopathy after previous back surgery. The request was specifically for an epidural lysis of adhesions procedure. After a thorough evaluation it was felt to be a reasonable approach, and the procedure was performed successfully. The next day the patient's pain was under control for the first time in weeks; we facilitated discharge, to great patient and hospital satisfaction, and we saved the insurance company money because the patient left the hospital sooner.

The insurance company never paid for the procedure, claiming that the therapy offered was experimental, in spite of four double-blind randomized controlled trials demonstrating its efficacy. The denial was appealed and reviewed by the insurance company's "appeal committee," which included three physicians: a gynecologist, a neurologist, and a general surgeon, none of whom had heard of an epidural lysis of adhesions procedure. Not surprisingly, the committee upheld the denial, indicating that there was no "evidence-based medicine" supporting the claim. This was patently untrue, but it highlights several problems that can occur when the principles of evidence-based medicine are not judiciously applied.

Problems With Evidence-Based Medicine

  • EBM is limited to clinical research only and does not correlate well with clinical expertise.

  • It presents a "cookbook" approach to practicing medicine.

  • The clinical evidence should be a source of information, not a replacement of individual clinical expertise.

  • The insurance industry uses the concept as a cost-effectiveness (cost-cutting) tool and ignores patients' values and preferences.

  • It promotes an ivory-tower state of mind, whereby the insurers define the care path.

  • There is continuing concern that EBM is being hijacked by purchasers and insurance managers to cut costs.

Many physicians have received similarly frustrating denials from insurance companies claiming that the procedures or medications offered are experimental. We have received such denials for discography, epidural steroid injections, therapeutic occipital nerve blocks, radiofrequency ablation of the facet joints, and spinal cord stimulation, to name a few, with the insurer claiming that each is "experimental." It has become clear that insurers use the rationale of "no evidence-based medicine" to selectively deny high-cost procedures, or procedures they believe have been abused.

Conflicts With Common Sense

  • Quantitative research from randomized controlled trials (RCTs) may not be relevant to all treatments in all situations.

  • EBM is a slow, lengthy, and expensive process; it takes years before evidence is produced and applied to clinical practice.

  • RCTs often exclude under-researched racial minorities and patients with comorbid diseases, limiting the applicability of EBM to these groups.

  • RCTs apply only to the populations included in the studies and do not address individualized treatment plans based on physicians' personal experience and knowledge.

Historical Perspective on Evidence-Based Practice

  • In the 1960s, there were very few double-blind randomized controlled trials demonstrating efficacy of any number of therapies.

  • Medical decisions were largely made on the basis of clinical intuition, pathophysiology, and clinical experience.

  • There were few large studies, and the results of large clinical trials were rarely used to modify or change clinical practice paradigms.

  • In the 1990s, physicians began to realize that a higher standard was required. Evidence-based medicine and evidence-based practice were born.

What Is Evidence-Based Medicine?

Evidence-based practice is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient. It means integrating individual clinical expertise with the best available external clinical evidence from systematic research." (Sackett D, 1996)

  • Evidence-based medicine, and guidelines that apply it, synthesize the available published data to arrive at the most effective approach to care.

  • The available data are graded in a hierarchical fashion.

  • Large double-blind randomized controlled trials receive the highest grade, followed by prospective studies and retrospective reviews; even case reports and the opinions of experts are graded.

  • If an approach has a large number of well-designed randomized controlled trials supporting its use, the approach is given a high grade.

  • If there are no well-designed trials, and the physician's experience is touted as the only rationale for proceeding with a therapy, a low grade is given.

Source and Synthesis of Evidence

  • Basic science and animal research: The very first step in producing evidence.

  • Case reports and case series: Reports of treatment of individual cases, or of a series of cases without control groups; these have little statistical validity.

  • Case-control studies: Patients with a specific condition are compared with people without the condition. These studies are less reliable than randomized controlled trials and cohort studies.

  • Cohort studies: A group of patients receiving a particular treatment is followed for an extended period, and their outcomes are then compared with those of a similar group that did not receive the treatment.

  • Randomized controlled trials: Carefully planned methodologies randomize patients and blind both the researcher and the patient to reduce potential bias while comparing the interventional (treated) and control (untreated) groups. These studies provide the best evidence, with high statistical validity.

  • Systematic reviews: An extensive literature search is conducted to identify studies with sound methodology focused on a specific treatment or procedure. The studies are reviewed for quality, and results are summarized based on predetermined criteria.

  • Meta-analysis: A technique that mathematically combines the results of a number of methodologically sound studies that have used accepted standards of statistical methodology.

Levels of Evidence

The United States Preventive Services Task Force (USPSTF) has developed a system that stratifies evidence by quality when ranking the effectiveness of treatments:

  • Level I: Evidence obtained from at least one properly designed randomized controlled trial

  • Level II-1: Evidence obtained from well-designed controlled trials without randomization

  • Level II-2: Evidence obtained from well-designed cohort or case-control studies, preferably multicenter

  • Level II-3: Evidence obtained from multiple time series with or without the intervention, including dramatic results from uncontrolled trials

  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees
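The hierarchy above can be sketched as a simple lookup. This is an illustrative sketch only; the study-design labels are paraphrased assumptions, not official USPSTF terminology.

```python
# Sketch: map a study design to its USPSTF evidence level.
# The design labels are paraphrased for illustration, not USPSTF wording.
USPSTF_LEVELS = {
    "randomized controlled trial": "I",
    "controlled trial without randomization": "II-1",
    "cohort study": "II-2",
    "case-control study": "II-2",
    "time series / uncontrolled trial": "II-3",
    "expert opinion": "III",
}

def evidence_level(study_design: str) -> str:
    """Return the USPSTF evidence level for a study design, or 'unknown'."""
    return USPSTF_LEVELS.get(study_design.strip().lower(), "unknown")
```

The lookup deliberately returns "unknown" for designs outside the hierarchy (eg, a meta-analysis, which synthesizes rather than generates primary evidence).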

Levels of Recommendations

Based on the risk-to-benefit ratio derived from the evidence available in the literature, the USPSTF uses the following levels of recommendation for clinical services or treatments.

  • Level A: Good scientific evidence to suggest substantial benefits outweigh the potential risks

  • Level B: Fair scientific evidence to suggest the clinical benefits outweigh the potential risks

  • Level C: Fair evidence to suggest clinical benefits, but the ratio of benefits to risks is too close to justify a general recommendation

  • Level D: Fair scientific evidence to suggest that risks of clinical service clearly outweigh the potential benefits

  • Level I: The scientific evidence is lacking, of poor quality, or conflicting, so the balance of risks and benefits cannot be assessed

Problems With Evidence-Based Medicine (EBM)

There are several problems with using evidence-based medicine to guide all care in pain management or to drive guideline development. Studies are frequently funded by industry, either pharmaceutical or medical device companies, because those are the companies with the money to spend on large clinical trials demonstrating efficacy. Such studies carry a potential conflict of interest, but often there is no other funding available to conduct them.

  • There is a shortage of coherent and consistent scientific studies to produce evidence.

  • Insurance companies select evidence to their advantage, citing non–peer-reviewed articles from nonindexed journals while ignoring the good evidence published in indexed journals.

  • The evidence is often reviewed by physicians who have no hands-on experience with the procedure in question, eg, a neurologist who has never performed an epidural steroid injection writing the evidence-based guidelines for epidural steroid injections.

  • Poorly written guidelines produced by "so-called" experts with vested interests create barriers to the practice of high-quality medicine.

Expensive Proposition

Double-blind randomized controlled trials are widely considered the gold standard for study design. The paucity of evidence is largely due to a paucity of good studies, and cost is a major factor.

  • New drug applications to the FDA require multiple studies.

  • It has been estimated to cost close to a billion dollars to get a new drug approved by the FDA.

  • Each study costs millions of dollars to perform.

  • For this reason, few well-designed studies are sponsored by physicians examining older drugs or new approaches to pain.

Companies with significant financial resources, which stand to gain financially if their drug or product is successful, are motivated to fund large-scale clinical trials to demonstrate the efficacy of their product. Older, inexpensive drugs that may be off patent can be just as effective as a new drug, but they will not be studied in large-scale clinical trials and will therefore be given a low score in an EBM approach.

Different Standards

Limitations for Studies Regarding Interventional Procedures

  • Physicians do not have the financial wherewithal to pay for the studies requested by the insurers.

  • Few physicians have the time and expertise to apply for federal funding to perform these studies.

  • Infrequently performed procedures may be cost effective (ie, thoracic epidurals) but have no medical device company funding the studies. There will be a paucity of data supporting their use.

  • For these reasons, many insurers are denying interventional procedures under the guise of "evidence-based medicine."

  • Different standards: Insurers cite the lack of randomized double-blind controlled trials as a rationale for noncoverage, yet allow surgical procedures that have not been subjected to the same rigor as many of the interventional therapies discussed in this book.


Our perspective is that there has been explosive growth in our field, accompanied by abuse. We as a society, together with the payers, need to establish reasonable reimbursement criteria and follow those with poor outcomes, not blanketly deny care.

Many Intricacies in Doing the Procedures

If one physician does not perform a procedure exactly the same way as another, there will be inconsistencies and results will vary. For example, the measured efficacy of epidural steroids will be influenced by the following factors.

  • Blind procedures versus fluoroscopically guided procedures.

  • Transforaminal versus interlaminar approaches.

  • Cervical epidurals versus thoracic epidurals.

  • The quality of pain may vary.

  • The severity of the pain may be poorly controlled for.

  • Coexisting diseases such as obesity or diabetes may influence the outcome.

  • The doses and types of steroids may vary between practitioners.

  • The technique may vary on precisely where the needle is placed.

  • The use or amount of local anesthetics used may vary greatly.

Level of Skills and Experience

Inexperienced physicians may perform a procedure under fluoroscopic guidance but may not have the same expertise in guiding the needle to the exact position as a well-experienced physician who has spent years perfecting the approach. Thus, taking one very simple example, one can see that physicians' outcomes would be expected to vary greatly. Some physicians routinely perform a single procedure while others perform a series of injections.

Varieties in Indications for Procedures

The indications may vary for a variety of techniques being performed for patients with:

  • Herniated disc versus stenosis.

  • Radiculopathy versus axial back pain.

  • Epidural steroids may be used for CRPS, radiculopathy, or postherpetic neuralgia.

  • Differences in rates between traditional insurance and workers' compensation.

Accordingly, many small studies may not represent the exact patient population being treated. While it is important for the physician to remain conversant with the literature, it is equally important to continually individualize therapy for the specific case.

Comparative Effectiveness Research

Comparative effectiveness research (CER) is the direct comparison of existing health care interventions to determine:

  • Which treatment works best for which patients

  • Which treatment poses the greatest benefits and harms

  • The core question of CER is which treatment works best, for whom, and under what circumstances

It is a pragmatic approach that attempts to compare a variety of reasonable interventions to determine the most appropriate strategy.

Cost-Effectiveness Analysis

Cost-effectiveness analysis (CEA) is a form of economic analysis that compares the relative costs and outcomes (effects) of two or more courses of action. CEA is distinct from cost-benefit analysis, which assigns a monetary value to the measure of effect. CEA is often used in the field of health services, where it may be inappropriate to monetize health effect.
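As a minimal sketch of the kind of comparison CEA performs, the incremental cost-effectiveness ratio (ICER) divides the extra cost of one intervention over another by the extra health effect gained, often measured in quality-adjusted life years (QALYs). The function and the dollar and QALY figures below are hypothetical illustrations, not data from any study.

```python
def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of health effect (e.g., dollars per QALY gained)."""
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        # Equal effectiveness: the cheaper option dominates; ICER is undefined.
        raise ValueError("Interventions are equally effective; ICER is undefined.")
    return (cost_new - cost_old) / delta_effect

# Hypothetical example: a new therapy costs $12,000 versus $4,000
# and yields 1.5 versus 1.1 QALYs, i.e., about $20,000 per QALY gained.
print(icer(12_000, 4_000, 1.5, 1.1))
```

Note that the ratio alone does not monetize health: whether $20,000 per QALY is acceptable remains a policy judgment, which is precisely why CEA is preferred over cost-benefit analysis in health services.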

Guideline Development

Guidelines on the appropriate steps in the management of various diseases have become a useful tool for physicians.

  • The guidelines synthesize evidence-based medicine, that is, the well-designed studies that have been done.

  • The more studies there are on a particular therapy or drug, the more likely it is to receive a favorable position in guideline development.

  • Guidelines, which attempt to synthesize evidence-based medicine into clinical paradigms, count the number of evidence-based studies and make recommendations based on the total number of patients and studies in the literature.

  • Those with better financial resources can increase the total number of studies, tilting guidelines toward pharmaceutical or medical device approaches. This weighting favors more expensive therapies, which are frequently funded by pharmaceutical and medical device companies.

  • Guideline development itself may be insidiously influenced by medical device and pharmaceutical companies, as they tend to fund those guidelines that support their products.

  • Some insurers have begun funding the development of guidelines, and these tend to weigh noninterventional therapies more heavily, in spite of a demonstrated lack of efficacy.


This text is an atlas of interventional pain medicine. We espouse Sackett's original tenets of using the best available evidence, as well as the Hippocratic tenet to do no harm. We recognize that the practice of medicine takes an individual approach to the management of pain, and we believe that a rational physician, when faced with limited data, may try therapies that make sense. The text that follows is more of a "how to" approach. The denial of appropriate care by insurers when there is a paucity of data on a specific approach, on the grounds that there is no evidence-based medicine, flies in the face of what evidence-based medicine is about. Evidence-based medicine allows physicians to understand the literature and its pitfalls, and to extrapolate from their clinical experience in determining the most appropriate course of action.

Suggested Reading

Agency for Health Care Policy and Research.
Evidence-based medicine. Wikipedia, the free encyclopedia.
Sackett D. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71–72.
Staats PS. Introduction. In: Aronoff G, ed. The Pharmacologic Management of Pain Task Force Ratings.
