Abstract
The judicious practice of evidence-based medicine improves both the quality of care provided to patients and patient outcomes. It involves the systematic search and evaluation of the literature and the application of these findings to clinical decision-making. Specifically, evidence-based medicine encompasses five steps: (1) formation of a specific clinical question, (2) systematic search of the literature for available evidence, (3) appraisal of evidence, (4) interpretation of findings in the context of the individual patient and application to clinical decision-making, and (5) regular evaluation of one’s own performance. An effective search requires sufficient knowledge of the different categories of evidence and where they can be accessed. Once studies are obtained, they are critically appraised. This step requires an understanding of study designs, their associated biases, and the resulting strength of evidence. Clinicians who do not have time to perform a full systematic search of the literature can target their search to higher levels of evidence. Alternatively, they can begin their search with pre-appraised literature rather than individually published studies. Ultimately, a commitment to evidence-based medicine ensures that every clinical decision is made based on the best available evidence.
Defining Evidence-Based Medicine
During the 1990s, an emerging body of literature called for a shift in the paradigm of medical practice [1, 2]. There was a recognition that the anecdotally informed approach to patient care was insufficient in the face of modern medicine and research. This impetus coincided with key technological advancements, including the creation of online databases that eased accessibility to studies as well as the development and refinement of numerous research methodologies [1]. As part of this paradigm shift, many changes were seen in medical academia. Specifically, journals began to utilize structured abstracts, and practice guidelines became more common; there was also an increasing effort to redesign residency training programs to incorporate competencies surrounding evidence search and appraisal [1]. This movement was eventually termed evidence-based medicine.
In short, evidence-based medicine describes the systematic search and evaluation of literature to make clinical decisions based on current evidence. The consistent and judicious practice of evidence-based medicine ensures that every clinical decision is based on the best available evidence. Indeed, studies have demonstrated that the quality of delivered care and patient outcomes improve when an evidence-based approach is taken [3,4,5,6,7,8,9]. With health economic evaluations becoming more prominent, evidence-based medicine may also lead to more cost-effective care.
The process of evidence-based medicine encompasses five steps [10]: (1) formation of a specific clinical question, (2) systematic search of the literature for available evidence, (3) appraisal of evidence, (4) interpretation of findings in the context of the individual patient and application to clinical decision-making, and (5) regular evaluation of one’s own performance. It is important to note that the final clinical decision is dependent on individual physician expertise as well as the values and preferences of the patient. Therefore, evidence-based medicine represents the intersection of a systematic examination of the literature, clinician experience and expertise, and patient preferences [11]. To become proficient at evidence-based medicine, healthcare providers must be familiar with available resources for evidence, study designs and their implications for the quality of evidence, and, finally, the levels of evidence.
Accessing Evidence
Journal Articles
There are numerous bibliographical resources that can be used by clinicians to access original studies. The Ovid databases, including Medline and Embase, are among the most prominent databases used in medicine. Medline was established by the US National Library of Medicine (NLM) in 1966. It contains over 5600 journals, with over 22 million references [12]. Embase, which began in 1974, is produced by Elsevier Science. It contains over 8500 journals, with over 30 million references [13]. Both Medline and Embase encompass all areas of biomedicine, while Embase also contains journals on health administration, health economics, and forensic sciences [14]. PubMed is another database produced by NLM. In addition to containing all Medline references, PubMed also contains in-process citations, “ahead of print” citations, and articles from Medline journals that are not within the scope of biomedicine, among many other types of citations. Another database that clinicians should be aware of is CINAHL (Cumulative Index to Nursing and Allied Health Literature). While this database focuses primarily on nursing and allied health journals, there is a wide range of subjects including medicine, education, and health administration [15, 16].
Historically, Medline was felt to be the most comprehensive database, and it remains the most commonly used database in meta-analyses [17, 18]. However, more recent studies have shown that searching additional databases, such as Embase and CINAHL, yields more relevant studies than can be found via Medline alone [15, 18,19,20]. For more specific inquiries, sources such as the Cochrane Central Register of Controlled Trials [21] and Health Services Research Queries [22] may be helpful. Research librarians are great resources for clinicians in determining the suitability of different databases for specific questions. They are also helpful in devising search strategies that can optimize both yield and efficiency.
Consolidated Evidence
There are ongoing efforts to consolidate data from individual studies to create resources that are practical and accessible to clinicians [23]. First, there has been a growth in the number of evidence-based medicine journals, whereby individual studies are summarized into abstract form, delivering only the most salient findings from studies that are appraised to be of value. This is an expedited method for clinicians to stay informed of new research. An example of such a journal is Evidence-Based Medicine, which spans all medical specialties. Discipline-specific evidence-based journals exist as well, although no such journal currently exists specific to otolaryngology.
Systematic reviews and meta-analyses represent systematic evaluations of the literature, with syntheses of available evidence to answer specific clinical questions. A systematic review or meta-analysis is helpful for decision-making as it presents all pertinent research surrounding a specific clinical scenario. The Cochrane Library is a good source for quality systematic reviews and meta-analyses. These studies may also be accessed as individual articles from the Medline, Embase, and CINAHL databases.
Evidence-based texts and clinical practice guidelines represent more extensive efforts to systematically review the evidence to make formal recommendations to clinicians. Examples of texts which incorporate a focus on evidence include UpToDate (http://www.uptodate.com), DynaMed (http://www.dynamed.com), and Best Practice (http://bestpractice.bmj.com). Numerous discipline-specific societies and organizations put forth clinical practice guidelines, which are meticulously created over the course of years. In otolaryngology, the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) regularly publishes clinical practice guidelines regarding prominent otolaryngology issues (http://www.entnet.org/content/clinical-practice-guidelines), which are based on data from systematic reviews and high-level clinical trials.
Evidence Appraisal
Critical appraisal of evidence requires sound understanding of study designs, associated biases, and the resulting strength of evidence. Bias is defined as the presence of a systematic error from the design, performance, or analysis of a study [24]. Every study design is prone to its own set of biases. This section presents an overview of commonly encountered study designs in otolaryngology and their associated risks of bias (Table 1.1).
Randomized Controlled Trial
Randomized controlled trials (RCTs) are typically considered the gold standard for evidence [25, 26]. In RCTs, subjects are randomized into two or more groups that are assigned to receive certain interventions [27]. They are then followed for a period of time, and data regarding pre-specified endpoints are collected. The advantage of randomization is that it ensures that there is an even distribution of known and unknown baseline confounders [28]. In a well-designed and well-executed RCT, any difference of effects seen between groups likely represents a true treatment effect. There are numerous RCT designs, depending on the clinical question, patient population, disease and treatment characteristics, and outcomes of interest [29].
When reviewing RCTs, clinicians should be mindful of potential biases that are commonly encountered in this type of study design. The first is selection bias, whereby there are differences in the baseline characteristics of the study groups [30]. To minimize selection bias, there should be an appropriate randomization sequence by which study subjects are assigned to their groups. Furthermore, the sequence should be concealed such that research staff and subjects cannot predict upcoming assignments [30, 31]. Randomization, however, can only be justified when there is true clinical equipoise [32]. When true uncertainty regarding the superiority of an intervention does not exist, an RCT would not be the appropriate study design.
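The logic of a balanced allocation sequence can be illustrated with a short sketch. The example below uses permuted-block randomization, which keeps group sizes balanced while leaving the order of upcoming assignments unpredictable; the function name, block size, and arm labels are illustrative only and are not drawn from any particular trial protocol.

```python
import random

def blocked_randomization(n_subjects, block_size=4, arms=("A", "B")):
    """Generate an allocation sequence in permuted blocks.

    Within each block, the arms appear equally often, so group sizes
    stay balanced; shuffling each block keeps the order unpredictable.
    """
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_subjects:
        block = list(arms) * per_arm   # e.g. ['A', 'B', 'A', 'B']
        random.shuffle(block)          # permute within the block
        sequence.extend(block)
    return sequence[:n_subjects]

seq = blocked_randomization(12)
print(seq)
print(seq.count("A"), seq.count("B"))  # balanced: 6 6
```

In practice the generated sequence would be held by a central service or sealed system so that staff cannot predict the next assignment, which is the concealment requirement described above.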
Performance bias and detection bias are also common biases encountered in RCTs. Together, they lead to ascertainment bias. Both biases stem from a lack of blinding of the study investigators or subjects [31]. Performance bias refers to differences in the care or study conditions received or experienced by the study participants based on their group assignment. Similarly, detection bias refers to differences in the measurement of outcomes among study participants based on their group assignment. In both scenarios, knowledge of the treatment group may affect the outcome rather than the treatment itself. Sufficient blinding of investigators and participants minimizes the risk of both biases. Standardized methods of subject assessment may also circumvent performance and detection bias [33].
Finally, bias in RCTs can occur when data are missing. During the course of a study, missing data may arise as a result of participant withdrawal or missed follow-up appointments. When this occurs, baseline characteristics may no longer be equal between the treatment groups, thereby placing the study at risk for bias. To decrease the risk of bias, RCTs commonly utilize an intention-to-treat analysis [34]. This method ensures that patients are analyzed based on their initial group assignment, even if they did not receive the assigned treatment and/or did not complete the study [35]. Improper handling of missing data may lead to biased estimates of intervention effects. To minimize the amount of missing data, preventative measures can be undertaken during the design and implementation stages of a trial [28]. There are also numerous methods of statistical analysis that can be utilized depending on the nature of the missing data [28]. When reviewing an RCT, the clinician should be critical of the presence of missing data and how this was handled by the investigators.
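The distinction between an intention-to-treat analysis and a per-protocol analysis can be sketched with a toy dataset. All records below are invented for illustration; the point is only that the two analyses group the same subjects differently when crossover occurs.

```python
# Hypothetical trial records: each subject has an assigned arm, the
# treatment actually received, and a binary outcome (1 = improved).
subjects = [
    {"assigned": "new",      "received": "new",      "improved": 1},
    {"assigned": "new",      "received": "new",      "improved": 1},
    {"assigned": "new",      "received": "standard", "improved": 0},  # crossed over
    {"assigned": "standard", "received": "standard", "improved": 0},
    {"assigned": "standard", "received": "standard", "improved": 1},
    {"assigned": "standard", "received": "new",      "improved": 1},  # crossed over
]

def improvement_rate(records, key, arm):
    """Proportion improved among subjects grouped by the given key."""
    group = [r for r in records if r[key] == arm]
    return sum(r["improved"] for r in group) / len(group)

# Intention-to-treat: analyze by the arm subjects were assigned to.
itt_new = improvement_rate(subjects, "assigned", "new")
# Per-protocol: analyze by the treatment actually received.
pp_new = improvement_rate(subjects, "received", "new")
print(itt_new, pp_new)  # the two approaches give different estimates
```

Here the per-protocol estimate for the new treatment is inflated relative to the intention-to-treat estimate because the crossover subjects are regrouped, which is precisely the bias that intention-to-treat analysis guards against.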
Observational Analytic Studies
While RCTs are highly regarded on the evidence pyramid [36], they are not suitable for every clinical scenario and question. In surgery, for example, blinding may not always be achievable; it can be impossible to truly blind the treating surgeon to the intervention [37]. In such instances, observational analytic designs are more appropriate and can yield comparable results to RCTs [38, 39].
Cohort Studies
In a cohort study, a group of subjects is identified based on an exposure of interest. A control group without the exposure is identified as well. These groups are then followed for a period of time, and the outcome(s) of interest are identified, recorded, and analyzed [40]. Cohort studies can be prospective or retrospective. In a prospective design, the exposure groups are identified and followed into the future [41]. The main advantage of this design is that investigators can collect data prospectively, increasing the likelihood of a more complete dataset. However, prospective studies are time-consuming and may be financially burdensome. Alternatively, a retrospective design identifies the groups of interest, and data is collected from a review of established records. It is less time-consuming with lower study costs. The main disadvantage of a retrospective design is the reliance on pre-collected data, which may be incomplete or inaccurate [41].
Cohort studies are susceptible to selection bias, information bias, and confounding [24]. Selection bias occurs when the study groups are not representative of the population from which they are drawn, thereby limiting generalizability of the study findings [24]. Selection bias can also compromise the internal validity of the study, such that estimates may not represent the true treatment effect [42]. In cohort studies, specific causes of selection bias are non-response bias and attrition bias [43]. Non-response bias occurs when there is a systematic difference between subjects who declined to participate in the study and those who agreed. Similarly, attrition bias occurs when there is a systematic difference between subjects lost to follow-up and those who completed the study.
Information bias occurs during data collection. It can come from the investigators, if there are inconsistencies or inaccuracies in exposure or outcome measurement. It may also come from the participants, if there are systematic differences in how exposure or outcomes are reported [24]. Finally, cohort studies are at risk for confounding. Due to the lack of randomization, there will likely always be unknown confounders that are not adjusted for [24].
Case-Control Studies
Case-control studies begin with the identification of a group of subjects with an outcome of interest. The control group is then determined, which consists of subjects of similar characteristics but without the outcome of interest. Data on previous exposures and risk factors is then collected retrospectively, and analysis is performed to determine whether there are significant associations between certain exposures and the outcome of interest [44]. Case-control studies are appropriate for outcomes that are either rare or have a long latency period, where a cohort design would be inefficient, time-consuming, and expensive [41].
Similar to cohort studies, case-control studies are susceptible to selection bias. A form of selection bias that is unique to case-control studies is prevalence-incidence bias, otherwise known as Neyman’s bias [45]. It is based on the tenet that a study that examines prevalent cases may miss either severe disease that led to rapid mortality or mild disease that is subclinical [46]. As a result, survival of the study groups may be biased to be either increased or decreased, leading to inaccurate calculations of mortality [45, 46]. Selection bias can also occur secondary to convenience sampling, which is a common way for subject recruitment in case-control studies [47]. Specifically, participants are often recruited from the same institution, and there may be systematic differences in these patients compared to the population that they theoretically represent. This can compromise both internal and external validity.
Case-control studies are also at risk for assessor bias, recall bias, and confounding. Assessor bias occurs when knowledge of the study affects the measurement of an exposure or outcome by the investigator [47]. Because case-control studies rely on the retrospective collection of data, often based on participant report, recall bias can occur if there is systematic misreporting of exposure status. Lastly, confounding can occur for similar reasons as in cohort studies.
Cross-Sectional Study
In a cross-sectional study, data is collected from the study group at one point in time [48]. It is a commonly utilized method for determining the prevalence of a particular condition in the population [49]. The advantages of a cross-sectional study are its brevity, relatively simple methodology, and lower costs. Furthermore, because cross-sectional studies are typically population-based, the sample group is likely to be representative of the larger population of interest. The main disadvantage of this study design is that it does not allow for causal inference; only associations can be identified. Second, because a cross-sectional study only captures information at one time point, it may not fully capture the truth as there may be variations in conditions over time [49].
Cross-sectional studies are susceptible to non-response bias, volunteer bias, and ascertainment bias [48]. Non-response bias, again, is defined by the presence of systematic differences among those who declined to participate in the study and those who agreed. Alternatively, volunteer bias occurs when there are systematic differences among those who volunteered to participate in the study and those who did not. Ascertainment bias can occur in cross-sectional studies as a result of either the participants (response bias) or investigators (observer bias). In the setting of response bias, participants may intentionally misreport certain exposures if, for example, the exposure is viewed to be embarrassing or unacceptable [48]. Observer bias can occur if there are consistent inaccurate recordings of exposure or outcome status. This can occur if data collection requires subjective interpretation by the individual investigator completing the recording.
Studies of Diagnostic Tests
A daily challenge encountered by clinicians is determining the utility of specific diagnostic tests and applying this to individual patients. A growing body of literature is aimed at assisting clinicians in performing this important task. Studies of diagnostic tests are observational in nature and examine an investigational modality’s key characteristics, such as sensitivity, specificity, and likelihood ratios. A description of these terms is presented in Table 1.2. For a more in-depth review of diagnostic test characteristics, the reader is referred to additional resources [50, 51].
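As a worked illustration of the terms summarized in Table 1.2, the key test characteristics can be computed directly from a 2×2 contingency table. The counts below are hypothetical and chosen only to make the arithmetic transparent.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute test characteristics from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)              # P(test+ | disease present)
    specificity = tn / (tn + fp)              # P(test- | disease absent)
    lr_pos = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
    lr_neg = (1 - sensitivity) / specificity  # likelihood ratio of a negative test
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical screening data: 90 true positives, 10 false negatives,
# 20 false positives, 180 true negatives.
sens, spec, lr_p, lr_n = diagnostic_metrics(tp=90, fp=20, fn=10, tn=180)
print(sens, spec, lr_p, lr_n)  # sensitivity 0.9, specificity 0.9, LR+ ≈ 9, LR- ≈ 0.11
```

A positive likelihood ratio of about 9 means a positive result raises the odds of disease roughly ninefold, which is how these characteristics are applied to an individual patient's pretest probability.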
Many diagnostic tests are used for the purposes of screening and surveillance of malignancies. In evaluating studies pertaining to diagnostic tests for these purposes, the clinician should be cognizant of two important concepts and potential biases: lead time bias and length time bias. Lead time bias refers to a potential artificial increase in survival caused by a test that leads to earlier diagnosis of disease. While rapid identification and timely initiation of treatment in less advanced cases may confer an advantage over later diagnoses of more advanced disease, it does not change the inherent properties of that disease state; however, earlier detection of disease may shift the estimated overall survival, due to cases detected by screening rather than the occurrence of symptoms [52, 53]. Length time bias occurs when there is preferential detection of more indolent and slow-growing tumors. Slow-growing tumors are more likely to be detected by screening given their longer subclinical phase compared to more aggressive tumors. Screening will, therefore, be associated with improved survival outcomes; however, the observed improvement in survival is secondary to the increased detection of more indolent disease rather than a benefit of screening [52].
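Lead time bias can be made concrete with a small numerical sketch. The dates below describe a single hypothetical patient whose disease course, and therefore date of death, is unchanged by screening; measured survival nevertheless appears longer simply because the diagnosis was moved earlier.

```python
from datetime import date

# Hypothetical patient: the date of death is the same either way.
death = date(2030, 1, 1)
diagnosis_symptomatic = date(2026, 1, 1)  # diagnosed when symptoms appear
diagnosis_screened = date(2024, 1, 1)     # diagnosed 2 years earlier by screening

# Survival measured from diagnosis, in years.
survival_symptomatic = (death - diagnosis_symptomatic).days / 365.25
survival_screened = (death - diagnosis_screened).days / 365.25
print(round(survival_symptomatic, 1), round(survival_screened, 1))  # 4.0 6.0
```

The two extra years of apparent survival are entirely lead time: the patient lived no longer, but was simply aware of the disease for longer.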
Observational Nonanalytic Studies
Case Series
In a case series, a group of patients who share a diagnosis or who have undergone the same intervention are followed for a period of time. Outcomes are then measured and recorded. A case series can be performed either retrospectively or prospectively. The main difference between case series and cohort studies is the lack of a control group in the former. As such, case series do not allow for hypothesis testing, and causal relationships cannot be established [54, 55]. Case series are suitable for assessment of disease progression and treatment outcomes. For this reason, they are often performed as the preliminary step to determine whether a hypothesis may be worthy of further and more rigorous testing.
The main biases that clinicians should be cognizant of in case series are selection bias and confounding. Similar to cohort studies, a case series often employs convenience sampling. There are no a priori criteria for who will be included in the study or who will receive a particular intervention. Therefore, participants in a case series may share characteristics that are systematically different from the population that they represent. As a result of convenience sampling and unrepresentative study populations, case series cannot be used to determine the population prevalence or incidence of a particular condition. Second, confounding may occur due to the lack of randomization of study participants.
Systematic Reviews and Meta-Analyses
Systematic reviews involve a meticulous and methodological search of the literature to identify all studies pertaining to a clinical question. These studies are then evaluated for bias, and a summary of the findings is provided to answer the clinical question. Meta-analyses are often reported with systematic reviews and represent quantitative syntheses of the literature. Systematic reviews are preferred to traditional narrative reviews, as the latter can be more prone to bias given the lack of standardized methodology [56].
To critically appraise a systematic review or meta-analysis, the clinician should be familiar with the study design. After defining a clinical question, the a priori protocol of the systematic review is typically described. As part of the protocol, data extraction, primary and secondary outcomes, and analysis methods are outlined. The literature search is performed and should include all relevant bibliographical sources with comprehensive search terms. The review process for the inclusion and exclusion of individual studies should be transparent. The data is then extracted and analyzed, and the overall findings are presented. If a meta-analysis is performed, the method of statistical analysis and the heterogeneity seen in the data are reported. An important feature of systematic reviews and meta-analyses is the assessment of included studies for bias. There should also be an assessment for publication bias. For detailed guides on how to perform and interpret a systematic review and/or meta-analysis, the reader is encouraged to refer to a few suggested resources [57,58,59].
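A quantitative synthesis can be sketched in simplified form. The example below implements inverse-variance fixed-effect pooling, weighting each study's effect estimate by its precision, and reports Cochran's Q and the I² statistic as a measure of heterogeneity; the effect estimates and standard errors are invented for illustration.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2."""
    weights = [1 / se**2 for se in std_errors]   # precision weights
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    # I^2: percentage of variability attributable to heterogeneity.
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled_se, i_squared

# Hypothetical log odds ratios and standard errors from three trials.
pooled, se, i2 = fixed_effect_pool([0.2, 0.35, 0.1], [0.1, 0.15, 0.2])
print(round(pooled, 3), round(se, 3), round(i2, 1))
```

When I² is substantial, a random-effects model is typically reported instead; published meta-analyses generally state which model was used and why.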
By knowing what constitutes sound methodology, clinicians can become critical consumers of systematic reviews and meta-analyses. For every element of a study, the clinician should assess adherence to established systematic review and meta-analysis protocols. Checklists have been established to assist the clinician in their critical assessment of synthesis studies [60, 61]. A study that is conducted with greater rigor is less subject to biases in its search of the literature, the inclusion and exclusion of studies, and its synthesis of the data.
Clinical Practice Guidelines, Clinical Consensus Statements, and Position Statements
With the progression of evidence-based medicine, there has been a rapidly growing body of literature pertaining to all aspects of medicine. It has become increasingly difficult for clinicians to stay current in their fields of practice. For this reason, there has been an effort by numerous professional societies and working groups to distill the vast body of literature into tangible guidelines to assist clinicians in everyday practice. These exist in the form of clinical practice guidelines, clinical consensus statements, and position statements. It is important for clinicians to understand how each category of document is developed and how they should be used.
The Institute of Medicine defines clinical practice guidelines as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” [62]. As the definition suggests, clinical practice guidelines employ a systematic review of the literature at the outset. The working group typically consists of a representative group of medical experts, ancillary healthcare providers, and key stakeholders such as consumer advocates. To minimize the introduction of biases, clinical practice guidelines are developed using rigorous and transparent processes. The guideline presents evidence on both the benefits and harms of the intervention of interest. The evidence should also be graded to give the consuming clinician an estimate of the strength of the evidence. Finally, clinical practice guidelines should be updated in the same systematic fashion as their initial development. The timing of revisions ultimately occurs at the discretion of the professional society and is influenced by the rate of emerging evidence, as well as external recommendations. The AAO-HNS is continuously developing clinical practice guidelines on clinical entities such as otitis media with effusion [63], adult sinusitis [64], and tinnitus [65], among many others (http://www.entnet.org/content/clinical-practice-guidelines). These guidelines are updated approximately every 5 years. Another prominent guideline initiative that is relevant to otolaryngologists is by the American Thyroid Association and their management guidelines for thyroid nodules [66].
Consensus statements are also developed by a panel of medical experts following an evidence-based review of the literature. They are often born of structured conferences on certain topics [62]. They are similar to clinical practice guidelines in that there is often a systematic review of the literature, which is then interpreted and presented in the form of recommendations. Consensus statements may focus more on recent evidence rather than examine the entire body of evidence pertaining to a clinical topic [62]. They are also less centered on action statements than clinical practice guidelines, instead focusing on statements on which the panel can agree. The AAO-HNS publishes consensus statements, which are born out of systematic syntheses of expert opinion (http://www.entnet.org/content/clinical-consensus-statements). Consensus statements may also be developed by interested groups of clinicians who take the initiative to form panels or forums. The methodologies employed may vary. Two examples are a recent consensus statement on the definition and diagnosis of Eustachian tube dysfunction [67] and an extensive and international effort to review and offer recommendations on rhinosinusitis [68].
Position statements represent the least methodologically taxing documents. They are often the product of a professional society committee meeting or conference. Their rigor varies: some are based mainly on the views of a committee or task force whose expert opinions carry weight. Some development groups will begin with a review of the literature, but that is not always mandated. There are also variations in how the final statements are formulated; they may be vetted only by the internal group or by an umbrella or commissioning organization. Position statements present the collective opinions of a group as they pertain to a specific clinical question, but their evidence base may be variable.
Instruments for Quality Assessment
To assist the clinician in assessing the quality of studies, numerous quality rating scales have been developed. In one systematic review of scales developed to assess the quality of RCTs, 21 published scales were identified [69]. Five of these were developed using standardized methodology [70,71,72,73,74]. These scales focus on the key methodological features of an RCT, such as randomization, blinding, outcomes measurement, and management of missing data. The scale developed by Jadad et al. (1996) was found to have the best evidence for validity and reliability [74]. It is also one of the most recognized instruments, with many adaptations since its initial report [69]. The Cochrane Handbook for Systematic Reviews of Interventions also provides a set of criteria by which to evaluate biases in randomized controlled trials [30]. This resource is intended for authors in preparation of Cochrane reviews.
In another systematic review of quality assessment of observational studies, 86 tools were identified, 60 of which were proposed for future use [75]. Many of these tools were created for specific study designs. Unfortunately, only half of the studies described the development of the tool or the assessment of its validity and reliability. Ultimately, the authors concluded that there is no single superior tool for the assessment of observational studies at this time.
For systematic reviews, there are numerous critical appraisal tools for quality assessment [76]. Most tools reported in the literature are not validated as of this writing. To date, the most widely accepted and validated instrument is A MeaSurement Tool to Assess systematic Reviews (AMSTAR) [76, 77]. It was designed to assess the methodological rigor of systematic reviews. Another more recently developed instrument, the Risk Of Bias in Systematic reviews (ROBIS), was designed specifically for the assessment of risk of bias in systematic reviews [76]. A quality assessment tool also exists for meta-analyses, pertaining to the evaluation of methodology [78], but currently, there is no risk of bias assessment tool for meta-analyses.
Clinical practice guidelines also vary in quality, secondary to inconsistent adherence to development protocols. This can result in discrepancies in recommendations. The AGREE II (Appraisal of Guidelines and Research Evaluation) instrument was designed to assist clinicians in developing, reporting, and appraising clinical practice guidelines [79]. The Institute of Medicine also provides a detailed prescriptive document for the development of clinical practice guidelines [62].
A caveat to the rating of study quality is that, as alluded to above, numerous quality rating scales exist for each study design. These scales are developed with varying rigor and, subsequently, are of varying validity. They also emphasize different characteristics of a study in judging its quality. Furthermore, different organizations and journals may employ different methods for assessing the quality of the reviewed studies. The same evidence may receive a number of evaluations, depending on the grading system used [80]. This can be confusing to clinicians, who may not know which rating scale is the most reliable in judging the quality of individual studies or which set of guidelines they should apply to practice. The lack of standardization in quality rating scales hinders the broader goal of quality rating, which is to effectively and efficiently communicate the strength of evidence to the medical community. Another consequence of the heterogeneity of rating scales is that, depending on the scale used, the classification of a study as high or low quality might differ. As such, depending on the rating scales used, meta-analyses of randomized controlled trials may reach different conclusions even if examining the same studies [81]. Some authors argue against the use of quality scores [81]. Instead, they recommend focusing on the key methodological components of any given trial and assessing how the quality of these components might affect study conclusions by performing sensitivity analyses.
Along with the development of quality assessment instruments for the evaluation of individual studies, there has been a movement toward ensuring the quality of study reporting. In 1999, the QUOROM (Quality of Reporting of Meta-analyses) statement was published [82]. Its contents were updated, and it was renamed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) statement in the 2009 publication [83]. Analogous statements exist for randomized trials (CONSORT, Consolidated Standards of Reporting Trials) [84], diagnostic test studies (STARD, Standards for Reporting Diagnostic Accuracy Studies) [85], meta-analyses of observational studies (MOOSE, Meta-analysis of Observational Studies in Epidemiology) [86], and observational epidemiological studies (STROBE, Strengthening the Reporting of Observational Studies in Epidemiology) [87]. It is important to note that these consensus statements pertain only to the reporting of studies, not the design and performance of studies. They are intended for the investigators and authors of studies and should not be used by clinicians for evidence appraisal.
Where to Start
There is a recognized hierarchy of evidence, with systematic reviews of randomized controlled trials, with or without accompanying meta-analyses, at the top. This is followed by individual randomized controlled trials, systematic reviews of cohort studies, individual cohort studies, systematic reviews of case-control studies, individual case-control studies, case series, and expert opinion [88]. This hierarchy can help direct the clinician in their literature review, focusing attention first on studies with higher levels of evidence.
Others have proposed an approach to evidence-based medicine that concentrates on pre-appraised literature rather than individually published studies. In 2001, Haynes proposed the “4S” model of information resource hierarchy to direct clinician efforts when seeking evidence [23]. It was subsequently revised to the “5S” and, most recently, the “6S” model [89, 90]. The S’s of the pyramid represent, in descending hierarchy, systems, summaries, synopses of syntheses, syntheses, synopses of studies, and studies. Systems refers to computerized decision support systems that combine the latest evidence with individual patient information to recommend the most appropriate clinical decision. Summaries refers to clinical practice guidelines and evidence-based texts. Synopses of syntheses refers to summaries of systematic reviews on specific topics. For the sake of efficiency and breadth, the “6S” model suggests that clinicians should start with systems and summaries first, before moving down the pyramid. Based on this model, referring to original studies to answer a clinical question should only be done when no other sources of evidence exist.
Conclusion
Evidence-based medicine is a multistep process that requires familiarity with literature search methods, the knowledge to critically appraise the evidence, and an ability to interpret and apply the evidence to the specific clinical scenario. Clinicians should also be proactive in reviewing their own performance and seek opportunities for improvement. Proficiency in evidence-based medicine has the potential to improve patient care by ensuring that every clinical decision is made based on the synchrony of best evidence, medical expertise, and patient values.
References
Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA: J Am Med Assoc. 1992;268(17):2420–5.
Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310(6987):1122–6.
***Shin JJ, Randolph GW, Rauch SD. Evidence-based medicine in otolaryngology, part 1: the multiple faces of evidence-based medicine. Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2010;142(5):637–46. https://doi.org/10.1016/j.otohns.2010.01.018.
Watts DD, Hanfling D, Waller MA, Gilmore C, Fakhry SM, Trask AL. An evaluation of the use of guidelines in prehospital management of brain injury. Prehosp Emerg Care. 2004;8(3):254–61.
Barr J, Hecht M, Flavin KE, Khorana A, Gould MK. Outcomes in critically ill patients before and after the implementation of an evidence-based nutritional management protocol. Chest. 2004;125(4):1446–57.
Briffa T, Hickling S, Knuiman M, Hobbs M, Hung J, Sanfilippo FM, et al. Long term survival after evidence based treatment of acute myocardial infarction and revascularisation: follow-up of population based Perth MONICA cohort, 1984–2005. BMJ. 2009;338:b36. https://doi.org/10.1136/bmj.b36.
Rasmussen JN, Chong A, Alter DA. Relationship between adherence to evidence-based pharmacotherapy and long-term mortality after acute myocardial infarction. JAMA: J Am Med Assoc. 2007;297(2):177–86. https://doi.org/10.1001/jama.297.2.177.
Shorr AF, Micek ST, Jackson WL Jr, Kollef MH. Economic implications of an evidence-based sepsis protocol: can we improve outcomes and lower costs? Crit Care Med. 2007;35(5):1257–62. https://doi.org/10.1097/01.CCM.0000261886.65063.CC.
Suojaranta-Ylinen RT, Roine RO, Vento AE, Niskanen MM, Salmenpera MT. Improved neurologic outcome after implementing evidence-based guidelines for cardiac surgery. J Cardiothorac Vasc Anesth. 2007;21(4):529–34. https://doi.org/10.1053/j.jvca.2006.12.019.
Sackett DL. Evidence-based medicine. Semin Perinatol. 1997;21(1):3–5.
Masic I, Miokovic M, Muhamedagic B. Evidence based medicine – new approaches and challenges. Acta Inform Med. 2008;16(4):219–25. https://doi.org/10.5455/aim.2008.16.219-225.
U.S. National Library of Medicine. Medline, PubMed, and PMC (PubMed Central): how are they different? 2016. https://www.nlm.nih.gov/pubs/factsheets/dif_med_pub.html. Accessed 11 Dec 2016.
Elsevier. Embase coverage and content 2016. https://www.elsevier.com/solutions/embase-biomedical-research/embase-coverage-and-content. Accessed 11 Dec 2016.
Brazier H, Begley CM. Selecting a database for literature searches in nursing: MEDLINE or CINAHL? J Adv Nurs. 1996;24(4):868–75.
Bahaadinbeigy K, Yogesan K, Wootton R. MEDLINE versus EMBASE and CINAHL for telemedicine searches. Telemed J E Health. 2010;16(8):916–9. https://doi.org/10.1089/tmj.2010.0046.
EBSCO Industries. CINAHL complete. 2016. https://www.ebscohost.com/nursing/products/cinahl-databases/cinahl-complete. Accessed 11 Dec 2016.
Burnham J, Shearer B. Comparison of CINAHL, EMBASE, and MEDLINE databases for the nurse researcher. Med Ref Serv Q. 1993;12(3):45–57. https://doi.org/10.1300/J115V12N04_05.
Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003;56(10):943–55.
Wilkins T, Gillies RA, Davies K. EMBASE versus MEDLINE for family medicine searches: can MEDLINE searches find the forest or a tree? Can Fam Physician. 2005;51:848–9.
Subirana M, Sola I, Garcia JM, Gich I, Urrutia G. A nursing qualitative systematic review required MEDLINE and CINAHL for study identification. J Clin Epidemiol. 2005;58(1):20–5. https://doi.org/10.1016/j.jclinepi.2004.06.001.
Cochrane Library. Cochrane central register of controlled trials (CENTRAL). 2016. http://www.cochranelibrary.com/about/central-landing-page.html. Accessed 11 Dec 2016.
U.S. National Library of Medicine. Health services research (HSR) PubMed queries. 2016. https://www.nlm.nih.gov/nichsr/hedges/search.html. Accessed 11 Dec 2016.
Haynes RB. Of studies, summaries, synopses, and systems: the “4S” evolution of services for finding current best evidence. Evid Based Ment Health. 2001;4(2):37–9.
Sedgwick P. Bias in observational study designs: prospective cohort studies. BMJ. 2014;349:g7731. https://doi.org/10.1136/bmj.g7731.
Gugiu PC, Gugiu MR. A critical appraisal of standard guidelines for grading levels of evidence. Eval Health Prof. 2010;33(3):233–55. https://doi.org/10.1177/0163278710373980.
Brighton B, Bhandari M, Tornetta P 3rd, Felson DT. Hierarchy of evidence: from case reports to randomized controlled trials. Clin Orthop Relat Res. 2003;413:19–24. https://doi.org/10.1097/01.blo.0000079323.41006.12.
Centre for Evidence-Based Medicine. Study designs. 2016. http://www.cebm.net/study-designs/. Accessed 12 Dec 2016.
Dziura JD, Post LA, Zhao Q, Fu Z, Peduzzi P. Strategies for dealing with missing data in clinical trials: from design to analysis. Yale J Biol Med. 2013;86(3):343–58.
Evans SR. Clinical trial structures. J Exp Stroke Transl Med. 2010;3(1):8–18.
Higgins JPT, Green S. Assessment of study quality. Cochrane handbook for systematic reviews of interventions, version 5.1.0; 2011. Available from http://www.cochrane-handbook.org/.
Viera AJ, Bangdiwala SI. Eliminating bias in randomized controlled trials: importance of allocation concealment and masking. Fam Med. 2007;39(2):132–7.
Wallace DK. Evidence-based medicine and levels of evidence. Am Orthopt J. 2010;60:2–5.
Evans SR. Fundamentals of clinical trial design. J Exp Stroke Transl Med. 2010;3(1):19–27.
Lachin JM. Statistical considerations in the intent-to-treat principle. Control Clin Trials. 2000;21(3):167–89.
Akobeng AK. Understanding randomised controlled trials. Arch Dis Child. 2005;90(8):840–4. https://doi.org/10.1136/adc.2004.058222.
Jonas WB. The evidence house: how to build an inclusive base for complementary medicine. West J Med. 2001;175(2):79–80.
Petrisor BA, Keating J, Schemitsch E. Grading the evidence: levels of evidence and grades of recommendation. Injury. 2006;37(4):321–7. https://doi.org/10.1016/j.injury.2006.02.001.
Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. Am J Ophthalmol. 2000;130(5):688.
Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342(25):1887–92. https://doi.org/10.1056/NEJM200006223422507.
Rothman KJ, Greenland S, Lash TL. Cohort studies. Modern epidemiology. 3rd ed. Philadelphia: Lippincott, Williams & Wilkins; 2012.
Song JW, Chung KC. Observational studies: cohort and case-control studies. Plast Reconstr Surg. 2010;126(6):2234–42. https://doi.org/10.1097/PRS.0b013e3181f44abc.
Howe CJ, Cole SR, Lau B, Napravnik S, Eron JJ Jr. Selection bias due to loss to follow up in cohort studies. Epidemiology. 2016;27(1):91–7. https://doi.org/10.1097/EDE.0000000000000409.
Viswanathan MBN, Dryden DM, Hartling L. Assessing risk of bias and confounding in observational studies of interventions or exposures: further development of the RTI item Bank. Methods research report. Rockville: Agency for Healthcare Research and Quality; 2013. Report No.: 13-EHC106-EF
Rothman KJ, Greenland S, Lash TL. Case-control studies. Modern epidemiology. 3rd ed. Philadelphia: Lippincott, Williams & Wilkins; 2012.
Hill G, Connelly J, Hebert R, Lindsay J, Millar W. Neyman’s bias re-visited. J Clin Epidemiol. 2003;56(4):293–6.
Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32(1–2):51–63.
Sedgwick P. Bias in observational study designs: case-control studies. BMJ. 2015;350:h560. https://doi.org/10.1136/bmj.h560.
Sedgwick P. Bias in observational study designs: cross sectional studies. BMJ. 2015;350:h1286. https://doi.org/10.1136/bmj.h1286.
Levin KA. Study design III: cross-sectional studies. Evid Based Dent. 2006;7(1):24–5. https://doi.org/10.1038/sj.ebd.6400375.
Shin JJ, Stinnett S, Page J, Randolph GW. Evidence-based medicine in otolaryngology, part 3: everyday probabilities: diagnostic tests with binary results. Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2012;147(2):185–92. https://doi.org/10.1177/0194599812447750.
Shin JJ, Stinnett SS, Randolph GW. Evidence-based medicine in otolaryngology part 4: everyday probabilities – nonbinary diagnostic tests. Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2013;149(2):179–86. https://doi.org/10.1177/0194599813491070.
Duffy SW, Nagtegaal ID, Wallis M, Cafferty FH, Houssami N, Warwick J, et al. Correcting for lead time and length bias in estimating the effect of screen detection on cancer survival. Am J Epidemiol. 2008;168(1):98–104. https://doi.org/10.1093/aje/kwn120.
Tanner EJ, Chi DS, Eisenhauer EL, Diaz-Montes TP, Santillan A, Bristow RE. Surveillance for the detection of recurrent ovarian cancer: survival impact or lead-time bias? Gynecol Oncol. 2010;117(2):336–40. https://doi.org/10.1016/j.ygyno.2010.01.014.
Carey TS, Boden SD. A critical guide to case series reports. Spine (Phila Pa 1976). 2003;28(15):1631–4. https://doi.org/10.1097/01.BRS.0000083174.84050.E5.
Kooistra B, Dijkman B, Einhorn TA, Bhandari M. How to design a good case series. J Bone Joint Surg Am. 2009;91(Suppl 3):21–6. https://doi.org/10.2106/JBJS.H.01573.
Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126(5):376–80.
Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. 2011. http://handbook.cochrane.org/. Accessed 17 Dec 2016.
Umscheid CA. A primer on performing systematic reviews and meta-analyses. Clin Infect Dis: Off Publ Infect Dis Soc Am. 2013;57(5):725–34. https://doi.org/10.1093/cid/cit333.
Akobeng AK. Understanding systematic reviews and meta-analysis. Arch Dis Child. 2005;90(8):845–8. https://doi.org/10.1136/adc.2004.058230.
*National Institutes of Health. Quality assessment of systematic reviews and meta-analyses. 2014. https://www.nhlbi.nih.gov/health-pro/guidelines/in-develop/cardiovascular-risk-reduction/tools/sr_ma. Accessed 17 Dec 2016.
***Murad MH, Montori VM, Ioannidis JP, Jaeschke R, Devereaux PJ, Prasad K, et al. How to read a systematic review and meta-analysis and apply the results to patient care: users’ guides to the medical literature. JAMA. 2014;312(2):171–9. https://doi.org/10.1001/jama.2014.5559.
Institute of Medicine. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press; 2011.
Rosenfeld RM, Shin JJ, Schwartz SR, Coggins R, Gagnon L, Hackell JM, et al. Clinical practice guideline: otitis media with effusion (update). Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2016;154(1 Suppl):S1–S41. https://doi.org/10.1177/0194599815623467.
Rosenfeld RM, Piccirillo JF, Chandrasekhar SS, Brook I, Ashok Kumar K, Kramper M, et al. Clinical practice guideline (update): adult sinusitis. Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2015;152(2 Suppl):S1–S39. https://doi.org/10.1177/0194599815572097.
Tunkel DE, Bauer CA, Sun GH, Rosenfeld RM, Chandrasekhar SS, Cunningham ER Jr, et al. Clinical practice guideline: tinnitus. Otolaryngol – Head Neck Surg: Off J Am Acad Otolaryngol-Head Neck Surg. 2014;151(2 Suppl):S1–S40. https://doi.org/10.1177/0194599814545325.
Haugen BR, Alexander EK, Bible KC, Doherty GM, Mandel SJ, Nikiforov YE, et al. 2015 American Thyroid Association management guidelines for adult patients with thyroid nodules and differentiated thyroid cancer: the American Thyroid Association guidelines task force on thyroid nodules and differentiated thyroid cancer. Thyroid. 2016;26(1):1–133. https://doi.org/10.1089/thy.2015.0020.
Schilder AG, Bhutta MF, Butler CC, Holy C, Levine LH, Kvaerner KJ, et al. Eustachian tube dysfunction: consensus statement on definition, types, clinical presentation and diagnosis. Clin Otolaryngol: Off J ENT-UK; Off J Neth Soc Oto-Rhino-Laryngol Cervico-Facial Surg. 2015;40(5):407–11. https://doi.org/10.1111/coa.12475.
Orlandi RR, Kingdom TT, Hwang PH, Smith TL, Alt JA, Baroody FM, et al. International consensus statement on allergy and rhinology: rhinosinusitis. Int Forum Allergy Rhinol. 2016;6(Suppl 1):S22–209. https://doi.org/10.1002/alr.21695.
Olivo SA, Macedo LG, Gadotti IC, Fuentes J, Stanton T, Magee DJ. Scales to assess the quality of randomized controlled trials: a systematic review. Phys Ther. 2008;88(2):156–75. https://doi.org/10.2522/ptj.20070147.
Sindhu F, Carpenter L, Seers K. Development of a tool to rate the quality assessment of randomized controlled trials using a Delphi technique. J Adv Nurs. 1997;25(6):1262–8.
Verhagen AP, de Vet HC, de Bie RA, Kessels AG, Boers M, Bouter LM, et al. The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol. 1998;51(12):1235–41.
Bizzini M, Childs JD, Piva SR, Delitto A. Systematic review of the quality of randomized controlled trials for patellofemoral pain syndrome. J Orthop Sports Phys Ther. 2003;33(1):4–20. https://doi.org/10.2519/jospt.2003.33.7.F4.
Yates SL, Morley S, Eccleston C, de C Williams AC. A scale for rating the quality of psychological trials for pain. Pain. 2005;117(3):314–25. https://doi.org/10.1016/j.pain.2005.06.018.
Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12.
Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol. 2007;36(3):666–76. https://doi.org/10.1093/ije/dym018.
Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34. https://doi.org/10.1016/j.jclinepi.2015.06.005.
Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, et al. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One. 2007;2(12):e1350. https://doi.org/10.1371/journal.pone.0001350.
Higgins JP, Lane PW, Anagnostelis B, Anzures-Cabrera J, Baker NF, Cappelleri JC, et al. A tool to assess the quality of a meta-analysis. Res Synth Methods. 2013;4(4):351–66. https://doi.org/10.1002/jrsm.1092.
Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ: Can Med Assoc J = J l’Assoc Med Can. 2010;182(18):E839–42. https://doi.org/10.1503/cmaj.090449.
Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ. 2004;328(7454):1490. https://doi.org/10.1136/bmj.328.7454.1490.
Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA: J Am Med Assoc. 1999;282(11):1054–60.
Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354(9193):1896–900.
**Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62(10):e1–34. https://doi.org/10.1016/j.jclinepi.2009.06.006.
Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA: J Am Med Assoc. 1996;276(8):637–9.
Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med. 2003;138(1):W1–12.
Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA: J Am Med Assoc. 2000;283(15):2008–12.
von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–9. https://doi.org/10.1016/j.ijsu.2014.07.013.
***Centre for Evidence-Based Medicine. Oxford centre for evidence-based medicine – levels of evidence. 2009. http://www.cebm.net/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/. Accessed 18 Dec 2016.
**Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the “5S” evolution of information services for evidence-based healthcare decisions. Evid Based Med. 2006;11(6):162–4. https://doi.org/10.1136/ebm.11.6.162-a.
***Dicenso A, Bayley L, Haynes RB. Accessing pre-appraised evidence: fine-tuning the 5S model into a 6S model. Evid Based Nurs. 2009;12(4):99–101. https://doi.org/10.1136/ebn.12.4.99-b.
To assist the reader in gaining familiarity with available evidence, the following rating system has been used to indicate key references for each chapter’s content:
***: Critical material. Anyone dealing with this condition should be familiar with this reference.
**: Useful material. Important information that is valuable in clinical or scientific practice related to this condition.
*: Optional material. For readers with a strong interest in the chapter content or a desire to study it in greater depth.
© 2018 Springer International Publishing AG, part of Springer Nature
Liu, C., Shin, J. (2018). Evidence-Based Medicine: Key Definitions and Concepts. In: Perkins, J., Balakrishnan, K. (eds) Evidence-Based Management of Head and Neck Vascular Anomalies. Springer, Cham. https://doi.org/10.1007/978-3-319-92306-2_1
DOI: https://doi.org/10.1007/978-3-319-92306-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-92305-5
Online ISBN: 978-3-319-92306-2