What is Evidence Based Medicine?
Seetaram V. Korgaonkar
Rita S. Korgaonkar
In earlier times, the basic and clinical research results incorporated into medical decision making were subjective, and there was no formal process for determining the extent of research evidence. Many studies conducted from 1960 to 1972 found a lack of controlled trials supporting many practices that had previously been assumed to be effective. Toward the end of the 1980s, it was seen that a large number of procedures performed by physicians were considered inappropriate and not up to medical standards. The absence of comparative research undermined medical decision-making at the level of both individual patients and the population at large, and this paved the way for the introduction of evidence-based medicine (EBM). EBM now has a considerable impact on modern-day healthcare practice.
In modern-day healthcare practice, when clinicians encounter patient care decisions, optimal practice demands knowledge and application of the relevant evidence. Although medicine has a long tradition of basic and clinical research, the way research was incorporated into medical decisions was subjective. This traditional approach depended on each individual physician and was called "clinical judgment." For decisions that applied to populations, guidelines would be determined by committees of experts, but there was still no formal process for determining the extent of research evidence. It was assumed that decision makers would incorporate evidence based on their education, experience, expertise and ongoing study of the applicable literature. Subsequently, several flaws became apparent in this traditional approach to medical decision-making. Many studies conducted between 1960 and 1972 found a lack of controlled trials supporting many practices that had previously been assumed to be effective. This resulted in a significant gap between the available evidence and actual clinical practice.
Prior to the advent of EBM, the inconsistency between evidence and expert recommendations made it difficult to reach rational clinical decisions. Toward the end of the 1980s, it was seen that a large proportion of procedures performed by physicians were considered inappropriate even by the standards set by their own peers. These flaws in medical decision-making, at the level of both individual patients and populations, paved the way for the introduction of EBM.
David M. Eddy first began to use the term "evidence-based" in 1987 in a manual on insurance coverage of new technologies.[1,2] He explicitly described the available evidence that pertained to a policy and connected the policy to that evidence, consciously anchoring policy not to current practice or the beliefs of experts but to experimental evidence. A policy must be consistent with and supported by evidence; the pertinent evidence must be identified, described and analyzed; and policymakers must determine whether the policy is justified by the evidence, with a written rationale.
In the context of medical education, the term "evidence-based medicine" applies to individual patients as well as populations, and is defined as a "set of principles and methods intended to ensure that, to the greatest extent possible, medical decisions, guidelines and other types of policies are based on and consistent with good evidence of effectiveness and benefits."
EBM research evolved along two lines, one based on research involving individual studies and the other based on population studies, and so it spread and progressed rapidly. The American College of Physicians, the American Heart Association, and the BMJ (British Medical Journal) produced many evidence-based guidelines. The Cochrane Collaboration created a network across 13 countries to produce systematic reviews and guidelines. Programs to teach EBM have been created in medical schools across Canada, the United States, the United Kingdom, Australia and other countries.
Many programs were developed to help individuals gain better access to evidence. The Cochrane Centre began publishing evidence reviews in 1993. In 1995 the BMJ launched "Clinical Evidence," a 6-monthly periodical that provided clinicians with brief summaries of the current state of evidence on important clinical questions. These programs made evidence more accessible to practitioners.[5,6]
Evidence quality is assessed according to the source study type, e.g. meta-analyses and systematic reviews of triple-blind randomized clinical trials, as well as other factors such as statistical validity, clinical relevance and peer-review acceptance.[7,8] Several organizations have developed grading systems for assessing the quality of evidence, such as the one given below:
Level I: Evidence obtained from at least one properly designed randomized controlled trial.
Level II(1): Evidence obtained from well-designed controlled trials without randomization.
Level II(2): Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
Level II(3): Evidence obtained from multiple time-series designs with or without interventions. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
Level III: Opinions of respected authorities, based on clinical experience, descriptive studies or reports of expert committees.
Most evidence-ranking schemes grade evidence for treatment and prevention, but not for diagnostic tests, prognostic markers or risks. The Oxford Centre for Evidence-Based Medicine has created levels of evidence for diagnosis, prognosis, treatment risks and benefits, and disease screening as well.
Recommendations for a clinical service (see Table 1) are classified by the balance of risk versus benefit of the service and by the level of evidence on which this information is based, using the following five grades:
Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. The clinician should discuss the service with eligible patients (High Quality Evidence).
Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. The clinician should discuss the service with eligible patients (Moderate Quality Evidence).
Level C: At least fair scientific evidence suggests that the clinical service provides benefits, but the balance between benefits and risks is too close to justify a general recommendation. The clinician need not offer the service unless there are individual considerations (Low Quality Evidence).
Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh the potential benefits. The clinician should not routinely offer the service to asymptomatic patients (Very Low Quality Evidence).
Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk-versus-benefit balance cannot be assessed. The clinician should help the patient understand the uncertainty surrounding the clinical service.
The newer hierarchy of evidence is best expressed in the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework, which provides guidance on evaluating and rating the quality of a body of evidence in healthcare.[9,10,11]
Panelists make strong or weak recommendations based on criteria such as the balance between desirable and undesirable effects (cost, quality of evidence, values and preferences, resource utilization, etc.). Although EBM is regarded as the gold standard of clinical practice, its use has several limitations.
EBM is an approach used both in teaching the practice of medicine and in improving decision making by individual physicians about individual patients. It also includes the use of evidence in designing guidelines and policies that apply to groups of patients and populations. It emphasizes the use of evidence from well-designed and well-conducted research.
- Evidence-Based Medicine Working Group. Evidence-based medicine: A new approach to teaching the practice of medicine. JAMA 1992; 268(17): 2420-5.
- Cochrane AL. Effectiveness and efficiency: Random reflections on health services. Nuffield Provincial Hospitals Trust; 1972.
- Eddy DM. Guidelines for policy statements. JAMA 1990; 263(16): 2239-43.
- Eddy DM. Clinical policies and the quality of clinical practice. N Engl J Med 1982; 307(6): 343-7.
- Eddy DM. Practice policies-where do they come from? JAMA 1990; 263(9):1265-75.
- Eddy DM. Variations in physician practice: The role of uncertainty. Health Aff 1984; 3(2): 74-89.
- Eddy DM. The quality of medical evidence: Implications for quality of care. Health Aff 1988; 7(1): 19-32.
- Eddy DM. Practice policies: Guidelines for methods. JAMA 1990; 263(13): 1839-41.
- Balshem H, Helfand M, et al. GRADE guidelines: Rating the quality of evidence. J Clin Epidemiol 2011; 64(4): 401-6.
- Guyatt GH, Oxman AD, et al. GRADE guidelines: A new series of articles. J Clin Epidemiol 2011; 64:380-2.
- Guyatt GH, Oxman AD et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336: 924-6.
- Shekelle PG, Woolf SH, Eccles M, Grimshaw J. Developing clinical guidelines. West J Med 1999; 170(6): 348-51.