Defining Evidence-Based Medicine

During the 1990s, an emerging body of literature called for a shift in the paradigm of medical practice [1, 2]. There was a recognition that the anecdotally informed approach to patient care was insufficient in the face of modern medicine and research. This impetus coincided with key technological advancements, including the creation of online databases that eased access to studies, as well as the development and refinement of numerous research methodologies [1]. As part of this paradigm shift, many changes were seen in medical academia: journals began to utilize structured abstracts, practice guidelines became more common, and there was an increasing effort to redesign residency training programs to incorporate competencies in evidence search and appraisal [1]. This movement was eventually termed evidence-based medicine.

In short, evidence-based medicine describes the systematic search and evaluation of the literature to make clinical decisions based on current evidence. The consistent and judicious practice of evidence-based medicine ensures that every clinical decision is grounded in the best available evidence. Studies have demonstrated that both the quality of delivered care and patient outcomes improve when an evidence-based approach is taken [3,4,5,6,7,8,9]. With health economic evaluations becoming more prominent, evidence-based medicine may also lead to more cost-effective care.

The process of evidence-based medicine encompasses five steps [10]: (1) formation of a specific clinical question, (2) systematic search of the literature for available evidence, (3) appraisal of the evidence, (4) interpretation of findings in the context of the individual patient and application to clinical decision-making, and (5) regular evaluation of one's own performance. It is important to note that the final clinical decision depends on individual physician expertise as well as the values and preferences of the patient. Therefore, evidence-based medicine represents the intersection of a systematic examination of the literature, clinician experience and expertise, and patient preferences [11]. To become proficient at evidence-based medicine, healthcare providers must be familiar with available resources for evidence, study designs and their implications for the quality of evidence, and, finally, the levels of evidence.

Accessing Evidence

Journal Articles

There are numerous bibliographical resources that can be used by clinicians to access original studies. The Ovid databases, including Medline and Embase, are among the most prominent databases used in medicine. Medline was established by the US National Library of Medicine (NLM) in 1966. It contains over 5600 journals, with over 22 million references [12]. Embase, which began in 1974, is produced by Elsevier Science. It contains over 8500 journals, with over 30 million references [13]. Both Medline and Embase encompass all areas of biomedicine, while Embase also contains journals on health administration, health economics, and forensic sciences [14]. PubMed is another database produced by the NLM. In addition to containing all Medline references, PubMed also contains in-process citations, “ahead of print” citations, and articles from Medline journals that are not within the scope of biomedicine, among many other types of citations. Another database that clinicians should be aware of is CINAHL (Cumulative Index to Nursing and Allied Health Literature). While this database focuses primarily on nursing and allied health journals, it covers a wide range of subjects including medicine, education, and health administration [15, 16].

Historically, Medline was felt to be the most comprehensive database, and it remains the most commonly used database in meta-analyses [17, 18]. However, more recent studies have shown that searching additional databases, such as Embase and CINAHL, yields more relevant studies than can be found via Medline alone [15, 18,19,20]. For more specific inquiries, sources such as the Cochrane Central Register of Controlled Trials [21] and Health Services Research Queries [22] may be helpful. Research librarians are great resources for clinicians in determining the suitability of different databases for specific questions. They are also helpful in devising search strategies that optimize both yield and efficiency.

Consolidated Evidence

There are ongoing efforts to consolidate data from individual studies to create resources that are practical and accessible to clinicians [23]. First, there has been growth in the number of evidence-based medicine journals, in which individual studies are summarized in abstract form, delivering only the most salient findings from studies appraised to be of value. This is an expedited way for clinicians to stay informed of new research. An example of such a journal is Evidence-Based Medicine, which spans all medical specialties. Discipline-specific evidence-based journals exist as well, although no journal of this kind currently exists for otolaryngology.

Systematic reviews and meta-analyses represent systematic evaluations of the literature, with syntheses of available evidence to answer specific clinical questions. A systematic review or meta-analysis is helpful for decision-making as it presents all pertinent research surrounding a specific clinical scenario. The Cochrane Library is a good source for quality systematic reviews and meta-analyses. These studies may also be accessed as individual articles from the Medline, Embase, and CINAHL databases.

Evidence-based texts and clinical practice guidelines represent more extensive efforts to systematically review the evidence to make formal recommendations to clinicians. Examples of texts which incorporate a focus on evidence include UpToDate (http://www.uptodate.com), DynaMed (http://www.dynamed.com), and Best Practice (http://bestpractice.bmj.com). Numerous discipline-specific societies and organizations put forth clinical practice guidelines, which are meticulously created over the course of years. In otolaryngology, the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) regularly publishes clinical practice guidelines regarding prominent otolaryngology issues (http://www.entnet.org/content/clinical-practice-guidelines), which are based on data from systematic reviews and high-level clinical trials.

Evidence Appraisal

Critical appraisal of evidence requires a sound understanding of study designs, their associated biases, and the resulting strength of evidence. Bias is defined as a systematic error arising from the design, conduct, or analysis of a study [24]. Every study design is prone to its own set of biases. This section presents an overview of commonly encountered study designs in otolaryngology and their associated risks of bias (Table 1.1).

Table 1.1 Study designs and their associated biases

Randomized Controlled Trial

Randomized controlled trials (RCTs) are typically considered the gold standard for evidence [25, 26]. In RCTs, subjects are randomized into two or more groups that are assigned to receive certain interventions [27]. They are then followed for a period of time, and data regarding pre-specified endpoints are collected. The advantage of randomization is that it promotes an even distribution of known and unknown baseline confounders [28]. In a well-designed and well-executed RCT, any difference in effect seen between groups likely represents a true treatment effect. There are numerous RCT designs, depending on the clinical question, patient population, disease and treatment characteristics, and outcomes of interest [29].

When reviewing RCTs, clinicians should be mindful of potential biases that are commonly encountered in this type of study design. The first is selection bias, whereby there are differences in the baseline characteristics of the study groups [30]. To minimize selection bias, there should be an appropriate randomization sequence by which study subjects are assigned to their groups. Furthermore, the sequence should be concealed such that research staff and subjects cannot predict upcoming assignments [30, 31]. Randomization, however, can only be justified when there is true clinical equipoise [32]. When true uncertainty regarding the superiority of an intervention does not exist, an RCT would not be the appropriate study design.
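
To make allocation concealment concrete, the following minimal sketch (in Python, with hypothetical parameters) illustrates one common approach, permuted-block randomization: the full assignment sequence is generated before enrollment and kept hidden, so group sizes stay balanced without staff being able to predict the next assignment.

```python
import random

def permuted_block_sequence(n_subjects, block_size=4, arms=("A", "B"), seed=2024):
    """Pre-generate a permuted-block allocation sequence.

    Within each block, the arms appear equally often, which keeps group
    sizes balanced; shuffling each block makes upcoming assignments
    unpredictable as long as the sequence itself remains concealed.
    """
    rng = random.Random(seed)  # fixed seed shown only for reproducibility
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_subjects:
        block = list(arms) * per_arm
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_subjects]

# A concealed sequence would be generated once, before enrollment begins.
print(permuted_block_sequence(12))
```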

Performance bias and detection bias are also common biases encountered in RCTs. Together, they lead to ascertainment bias. Both biases stem from a lack of blinding of the study investigators or subjects [31]. Performance bias refers to differences in the care or study conditions received or experienced by the study participants based on their group assignment. Similarly, detection bias refers to differences in the measurement of outcomes among study participants based on their group assignment. In both scenarios, knowledge of the treatment group may affect the outcome rather than the treatment itself. Sufficient blinding of investigators and participants minimizes the risk of both biases. Standardized methods of subject assessment may also circumvent performance and detection bias [33].

Finally, bias in RCTs can occur when there is missing data. During the course of a study, missing data may arise from participant withdrawal or missed follow-up appointments. When this occurs, baseline characteristics may no longer be balanced between the treatment groups, thereby placing the study at risk of bias. To decrease this risk, RCTs commonly utilize an intention-to-treat analysis [34]. This method ensures that patients are analyzed based on their initial group assignment, even if they did not receive the assigned treatment and/or did not complete the study [35]. Improper handling of missing data may lead to biased estimates of intervention effects. To minimize the amount of missing data, preventative measures can be undertaken during the design and implementation stages of a trial [28]. There are also numerous methods of statistical analysis that can be utilized depending on the nature of the missing data [28]. When reviewing an RCT, the clinician should be critical of the presence of missing data and how it was handled by the investigators.
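
As an illustration of why the analysis population matters, the short sketch below (Python, with entirely hypothetical records) contrasts an intention-to-treat tally, which keeps every subject in the arm to which they were randomized, with a per-protocol tally that discards crossovers and non-completers.

```python
# Hypothetical trial records: (assigned arm, arm received, completed, outcome)
subjects = [
    ("treatment", "treatment", True,  1),
    ("treatment", "control",   True,  0),  # crossed over to control
    ("treatment", "treatment", False, 0),  # lost to follow-up
    ("control",   "control",   True,  0),
    ("control",   "control",   True,  1),
    ("control",   "treatment", True,  1),  # crossed over to treatment
]

def success_rate(records):
    """Proportion of records with a successful outcome (outcome == 1)."""
    return sum(r[3] for r in records) / len(records) if records else float("nan")

for arm in ("treatment", "control"):
    itt = [r for r in subjects if r[0] == arm]                  # as randomized
    pp = [r for r in subjects if r[0] == arm == r[1] and r[2]]  # as treated, completed
    print(f"{arm}: ITT {success_rate(itt):.2f}, per-protocol {success_rate(pp):.2f}")
```

On these toy records, the two analyses diverge noticeably, which is precisely the distortion that intention-to-treat analysis guards against.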

Observational Analytic Studies

While RCTs are highly regarded on the evidence pyramid [36], they are not suitable for every clinical scenario and question. In surgery, for example, blinding may not always be achievable; it can be impossible to truly blind the treating surgeon to the intervention [37]. In such instances, observational analytic designs are more appropriate and can yield results comparable to those of RCTs [38, 39].

Cohort Studies

In a cohort study, a group of subjects is identified based on an exposure of interest. A control group without the exposure is identified as well. These groups are then followed for a period of time, and the outcome(s) of interest are identified, recorded, and analyzed [40]. Cohort studies can be prospective or retrospective. In a prospective design, the exposure groups are identified and followed into the future [41]. The main advantage of this design is that investigators can collect data prospectively, increasing the likelihood of a more complete dataset. However, prospective studies are time-consuming and may be financially burdensome. Alternatively, a retrospective design identifies the groups of interest, and data is collected from a review of established records. It is less time-consuming with lower study costs. The main disadvantage of a retrospective design is the reliance on pre-collected data, which may be incomplete or inaccurate [41].

Cohort studies are susceptible to selection bias, information bias, and confounding [24]. Selection bias occurs when the study groups are not representative of the population from which they are drawn, thereby limiting generalizability of the study findings [24]. Selection bias can also compromise the internal validity of the study, such that estimates may not represent the true treatment effect [42]. In cohort studies, specific causes of selection bias are non-response bias and attrition bias [43]. Non-response bias occurs when there is a systematic difference among subjects who declined to participate in the study and those who agreed. Similarly, attrition bias occurs when there is a systematic difference among subjects lost to follow-up and those who completed the study.

Information bias occurs during data collection. It can come from the investigators, if there are inconsistencies or inaccuracies in exposure or outcome measurement. It may also come from the participants, if there are systematic differences in how exposure or outcomes are reported [24]. Finally, cohort studies are at risk for confounding. Due to the lack of randomization, there will likely always be unknown confounders that are not adjusted for [24].

Case-Control Studies

Case-control studies begin with the identification of a group of subjects with an outcome of interest. The control group is then determined, which consists of subjects of similar characteristics but without the outcome of interest. Data on previous exposures and risk factors is then collected retrospectively, and analysis is performed to determine whether there are significant associations between certain exposures and the outcome of interest [44]. Case-control studies are appropriate for outcomes that are either rare or have a long latency period, where a cohort design would be inefficient, time-consuming, and expensive [41].

Similar to cohort studies, case-control studies are susceptible to selection bias. A form of selection bias unique to case-control studies is prevalence-incidence bias, otherwise known as Neyman’s bias [45]. It rests on the premise that a study examining prevalent cases may miss severe disease that led to rapid mortality as well as mild disease that remains subclinical [46]. As a result, the survival of the study groups may be biased upward or downward, leading to inaccurate estimates of mortality [45, 46]. Selection bias can also occur secondary to convenience sampling, a common method of subject recruitment in case-control studies [47]. Specifically, participants are often recruited from the same institution, and these patients may differ systematically from the population they theoretically represent. This can compromise both internal and external validity.

Case-control studies are also at risk for assessor bias, recall bias, and confounding. Assessor bias occurs when the investigator’s knowledge of the study affects the measurement of an exposure or outcome [47]. Because case-control studies rely on the retrospective collection of data, often based on participant report, recall bias can occur if there is systematic misreporting of exposure status. Lastly, confounding can occur for similar reasons as in cohort studies.

Cross-Sectional Study

In a cross-sectional study, data is collected from the study group at one point in time [48]. It is a commonly utilized method for determining the prevalence of a particular condition in the population [49]. The advantages of a cross-sectional study are its brevity, relatively simple methodology, and lower costs. Furthermore, because cross-sectional studies are typically population-based, the sample is likely to be representative of the larger population of interest. The main disadvantage of this study design is that it does not allow for hypothesis testing; only associations, rather than causal inferences, can be made. In addition, because a cross-sectional study captures information at only one point in time, it may not fully reflect the truth, as conditions may vary over time [49].

Cross-sectional studies are susceptible to non-response bias, volunteer bias, and ascertainment bias [48]. Non-response bias, again, is defined by the presence of systematic differences among those who declined to participate in the study and those who agreed. Alternatively, volunteer bias occurs when there are systematic differences among those who volunteered to participate in the study and those who did not. Ascertainment bias can occur in cross-sectional studies as a result of either the participants (response bias) or the investigators (observer bias). In the setting of response bias, participants may intentionally misreport certain exposures if, for example, the exposure is viewed as embarrassing or unacceptable [48]. Observer bias can occur if there is consistently inaccurate recording of exposure or outcome status. This can occur when data collection requires subjective interpretation by the individual investigator performing the recording.

Studies of Diagnostic Results

A daily challenge encountered by clinicians is determining the utility of specific diagnostic tests and applying this to individual patients. A growing body of literature is aimed at assisting clinicians in performing this important task. Studies of diagnostic tests are observational in nature and examine an investigational modality’s key characteristics, such as sensitivity, specificity, and likelihood ratios. A description of these terms is presented in Table 1.2. For a more in-depth review of diagnostic test characteristics, the reader is referred to additional resources [50, 51].

Table 1.2 Characteristics of diagnostic tests
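
As a convenience alongside Table 1.2, the standard definitions of these characteristics can be written in terms of the familiar two-by-two table of test result against true disease status, with TP, FP, FN, and TN denoting true positives, false positives, false negatives, and true negatives:

\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP},
\]
\[
LR^{+} = \frac{\text{Sensitivity}}{1 - \text{Specificity}}, \qquad LR^{-} = \frac{1 - \text{Sensitivity}}{\text{Specificity}}.
\]

A positive likelihood ratio above 1 raises the post-test probability of disease, while a negative likelihood ratio below 1 lowers it.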

Many diagnostic tests are used for the purposes of screening and surveillance of malignancies. In evaluating studies of diagnostic tests used for these purposes, the clinician should be cognizant of two important concepts and potential biases: lead time bias and length time bias. Lead time bias refers to an artificial increase in apparent survival caused by a test that leads to earlier diagnosis of disease. While earlier identification and timely initiation of treatment in less advanced cases may confer an advantage over later diagnosis of more advanced disease, earlier detection does not change the inherent properties of the disease state; it may, however, shift the estimated overall survival simply because cases are detected by screening rather than by the occurrence of symptoms [52, 53]. For example, if screening detects a tumor 2 years before it would have caused symptoms but death occurs at the same time regardless, measured survival from diagnosis lengthens by 2 years without any true benefit. Length time bias occurs when there is preferential detection of indolent, slow-growing tumors. Slow-growing tumors are more likely to be detected by screening given their longer subclinical phase compared with more aggressive tumors. Screening will therefore appear to be associated with improved survival outcomes; however, the observed improvement in survival is secondary to the increased detection of indolent disease rather than a true benefit of screening [52].

Observational Nonanalytic Studies

Case Series

In a case series, a group of patients who share a diagnosis or who have undergone the same intervention is followed for a period of time. Outcomes are then measured and recorded. A case series can be performed either retrospectively or prospectively. The main difference between case series and cohort studies is the lack of a control group in the former. As such, case series do not allow for hypothesis testing, and causal relationships cannot be established [54, 55]. Case series are suitable for assessing disease progression and treatment outcomes and are therefore often performed as a preliminary step to determine whether a hypothesis may be worthy of further, more rigorous testing.

The main biases that clinicians should be cognizant of in case series are selection bias and confounding. Similar to cohort studies, a case series often employs convenience sampling. There are no a priori criteria for who will be included in the study or who will receive a particular intervention. Therefore, participants in a case series may share characteristics that differ systematically from the population they represent. As a result of convenience sampling and unrepresentative study populations, case series cannot be used to determine the population prevalence or incidence of a particular condition. In addition, confounding may occur due to the lack of randomization of study participants.

Systematic Reviews and Meta-Analyses

Systematic reviews involve a meticulous and methodical search of the literature to identify all studies pertaining to a clinical question. These studies are then evaluated for bias, and a summary of the findings is provided to answer the clinical question. Meta-analyses are often reported alongside systematic reviews and represent quantitative syntheses of the literature. Systematic reviews are preferred to traditional narrative reviews, as the latter can be more prone to bias given the lack of standardized methodology [56].

To critically appraise a systematic review or meta-analysis, the clinician should be familiar with the study design. After the clinical question is defined, the a priori protocol of the systematic review is typically described. As part of the protocol, data extraction, primary and secondary outcomes, and analysis methods are outlined. The literature search is then performed; it should include all relevant bibliographical sources and use comprehensive search terms. The review process for the inclusion and exclusion of individual studies should be transparent. The data is then extracted and analyzed, and the overall findings are presented. If a meta-analysis is performed, the method of statistical analysis and the heterogeneity seen in the data are reported. An important feature of systematic reviews and meta-analyses is the assessment of included studies for bias. There should also be an assessment for publication bias. For detailed guides on how to perform and interpret a systematic review and/or meta-analysis, the reader is encouraged to refer to a few suggested resources [57,58,59].
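
As one concrete example of how heterogeneity is commonly reported, the I² statistic describes the percentage of total variation across studies that is due to heterogeneity rather than chance. With Cochran's Q computed over k studies, it is defined as

\[
I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%,
\]

where values near zero suggest that the study results are consistent with one another, and larger values argue for a random-effects model and an exploration of the sources of heterogeneity.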

By knowing what constitutes sound methodology, clinicians can become critical consumers of systematic reviews and meta-analyses. For every element of a study, the clinician should assess adherence to established systematic review and meta-analysis protocols. Checklists have been established to assist the clinician in their critical assessment of synthesis studies [60, 61]. A study that is conducted with greater rigor is less subject to biases in its search of the literature, the inclusion and exclusion of studies, and its synthesis of the data.

Clinical Practice Guidelines, Clinical Consensus Statements, and Position Statements

With the progression of evidence-based medicine, there has been a rapidly growing body of literature pertaining to all aspects of medicine. It has become increasingly difficult for clinicians to stay current in their fields of practice. For this reason, there has been an effort by numerous professional societies and working groups to distill the vast body of literature into tangible guidelines to assist clinicians in everyday practice. These exist in the form of clinical practice guidelines, clinical consensus statements, and position statements. It is important for clinicians to understand how each category of document is developed and how they should be used.

The Institute of Medicine defines clinical practice guidelines as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” [62]. As the definition suggests, clinical practice guidelines employ a systematic review of the literature at the outset. The working group typically consists of a representative group of medical experts, ancillary healthcare providers, and key stakeholders such as consumer advocates. To minimize the introduction of biases, clinical practice guidelines are developed using rigorous and transparent processes. The guideline presents evidence on both the benefits and the harms of the intervention of interest. The evidence should also be graded to give the consuming clinician an estimate of its strength. Finally, clinical practice guidelines should be updated in the same systematic fashion as their initial development. The timing of revisions ultimately occurs at the discretion of the professional society and is influenced by the rate of emerging evidence as well as external recommendations. The AAO-HNS continuously develops clinical practice guidelines on clinical entities such as otitis media with effusion [63], adult sinusitis [64], and tinnitus [65], among many others (http://www.entnet.org/content/clinical-practice-guidelines). These guidelines are updated approximately every 5 years. Another prominent guideline initiative relevant to otolaryngologists is the American Thyroid Association’s management guidelines for thyroid nodules [66].

Consensus statements are also developed by a panel of medical experts following an evidence-based review of the literature. They are often born of structured conferences on certain topics [62]. They are similar to clinical practice guidelines in that there is often a systematic review of the literature, which is then interpreted and presented in the form of recommendations. Consensus statements may focus more on recent evidence rather than examining the entire body of evidence pertaining to a clinical topic [62]. They are also less centered on action statements than clinical practice guidelines, focusing instead on statements on which the panel can agree. The AAO-HNS publishes consensus statements, which are born out of systematic syntheses of expert opinion (http://www.entnet.org/content/clinical-consensus-statements). Consensus statements may also be developed by interested groups of clinicians who take the initiative to form panels or forums; the methodologies employed may vary. Two examples are a recent consensus statement on the definition and diagnosis of Eustachian tube dysfunction [67] and an extensive international effort to review and offer recommendations on rhinosinusitis [68].

Position statements are the least methodologically taxing of these documents. They are often the product of a professional society committee meeting or conference. They exist on a spectrum: some are based mainly on the views of a committee or task force whose expert opinions carry weight, while some development groups begin with a review of the literature, although that is not always mandated. There are also variations in how the final statements are formulated; they may be vetted only by the internal group or by an umbrella or commissioning organization. Position statements present the collective opinions of a group as they pertain to a specific clinical question, but their evidence base may be variable.

Instruments for Quality Assessment

To assist the clinician in assessing the quality of studies, numerous quality rating scales have been developed. In one systematic review of scales developed to assess the quality of RCTs, 21 published scales were identified [69]. Five of these were developed using standardized methodology [70,71,72,73,74]. These scales focus on the key methodological features of an RCT, such as randomization, blinding, outcomes measurement, and management of missing data. The scale developed by Jadad et al. (1996) was found to have the best evidence for validity and reliability [74]. It is also one of the most recognized instruments, with many adaptations since its initial report [69]. The Cochrane Handbook for Systematic Reviews of Interventions also provides a set of criteria by which to evaluate biases in randomized controlled trials [30]; this resource is intended for authors preparing Cochrane reviews.
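
To make the structure of such an instrument concrete, the following sketch (an unofficial Python rendering, not taken from the cited scale's publication) tallies the five commonly described Jadad items: one point each for reported randomization, reported double blinding, and a description of withdrawals and dropouts, with an additional point added, or a point deducted, depending on whether each described method was appropriate.

```python
def jadad_score(randomized, randomization_method,
                double_blind, blinding_method,
                withdrawals_described):
    """Simplified tally of the five-item Jadad scale (final range 0-5).

    The *_method arguments take "appropriate", "inappropriate", or
    "not described"; deductions apply only to described, inappropriate
    methods.
    """
    score = 0
    if randomized:
        score += 1
        score += {"appropriate": 1, "inappropriate": -1}.get(randomization_method, 0)
    if double_blind:
        score += 1
        score += {"appropriate": 1, "inappropriate": -1}.get(blinding_method, 0)
    if withdrawals_described:
        score += 1
    return max(score, 0)

# A trial with adequately described randomization and blinding that also
# reports withdrawals receives the maximum score of 5.
print(jadad_score(True, "appropriate", True, "appropriate", True))
```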

In another systematic review, this one of quality assessment tools for observational studies, 86 tools were identified, 60 of which were proposed for future use [75]. Many of these tools were created for specific study designs. Unfortunately, only half described the development of the tool or an assessment of its validity and reliability. Ultimately, the authors concluded that there is currently no single superior tool for the assessment of observational studies.

For systematic reviews, there are numerous critical appraisal tools for quality assessment [76]. Most tools reported in the literature have not been validated as of this writing. To date, the most widely accepted and validated instrument is A MeaSurement Tool to Assess systematic Reviews (AMSTAR) [76, 77]. It was designed to assess the methodological rigor of systematic reviews. Another, more recently developed instrument, the Risk Of Bias in Systematic reviews (ROBIS), was designed specifically to assess the risk of bias in systematic reviews [76]. A quality assessment tool also exists for meta-analyses, pertaining to the evaluation of methodology [78], but there is currently no risk of bias assessment tool for meta-analyses.

Clinical practice guidelines also vary in quality, secondary to inconsistent adherence to development protocols, which can result in discrepancies in recommendations. The AGREE II (Appraisal of Guidelines for Research and Evaluation) instrument was designed to assist clinicians in developing, reporting, and appraising clinical practice guidelines [79]. The Institute of Medicine also provides a detailed prescriptive document for the development of clinical practice guidelines [62].

A caveat to the rating of study quality is that, as alluded to above, numerous quality rating scales exist for each study design. These scales are developed with varying rigor and, consequently, are of varying validity. They also emphasize different characteristics of a study in judging its quality. Furthermore, different organizations and journals may employ different methods for assessing the quality of the reviewed studies. The same evidence may thus receive a number of different evaluations, depending on the grading system used [80]. This can be confusing to clinicians, who may not know which rating scale is the most reliable for judging the quality of individual studies or which set of guidelines they should apply to practice. The lack of standardization in quality rating scales hinders the broader goal of quality rating, which is to effectively and efficiently communicate the strength of evidence to the medical community. Another consequence of this heterogeneity is that the classification of a study as high or low quality may differ depending on the scale used. As such, meta-analyses of randomized controlled trials may reach different conclusions even when examining the same studies, depending on the rating scales applied [81]. Some authors therefore argue against the use of quality scores [81], recommending instead a focus on the key methodological components of any given trial and an assessment, via sensitivity analyses, of how the quality of these components might affect study conclusions.

Along with the development of quality assessment instruments for the evaluation of individual studies, there has been a movement toward ensuring the quality of study reporting. In 1999, the QUOROM (Quality of Reporting of Meta-analyses) statement was published [82]. Its contents were updated, and it was renamed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement in its 2009 publication [83]. Analogous statements exist for randomized trials (CONSORT, Consolidated Standards of Reporting Trials) [84], diagnostic test studies (STARD, Standards for Reporting Diagnostic Accuracy Studies) [85], meta-analyses of observational studies (MOOSE, Meta-analysis of Observational Studies in Epidemiology) [86], and observational epidemiological studies (STROBE, Strengthening the Reporting of Observational Studies in Epidemiology) [87]. It is important to note that these consensus statements pertain only to the reporting of studies, not their design and performance. They are intended for the investigators and authors of studies and should not be used by clinicians for evidence appraisal.

Where to Start

There is a recognized hierarchy of evidence, with systematic reviews of randomized controlled trials, with or without accompanying meta-analyses, at the top. This is followed by individual randomized controlled trials, systematic reviews of cohort studies, individual cohort studies, systematic reviews of case-control studies, individual case-control studies, case series, and expert opinion [88]. This hierarchy can help direct the clinician in their literature review, focusing attention first on studies with higher levels of evidence.

Others have proposed an approach to evidence-based medicine that concentrates on pre-appraised literature rather than individually published studies. In 2001, Haynes proposed the “4S” model of information resource hierarchy to direct clinician efforts when seeking evidence [23]. It was subsequently revised to “5S” and, most recently, the “6S” model [89, 90]. The S’s of the pyramid represent, in descending hierarchy, systems, summaries, synopses of syntheses, syntheses, synopses of studies, and studies. Systems refers to computerized decision support systems that combine the latest evidence with individual patient information to recommend the most appropriate clinical decision. Summaries refer to clinical practice guidelines and evidence-based texts. Synopses of syntheses refer to summaries of systematic reviews on specific topics. For the sake of efficiency and breadth, the “6S” model suggests that clinicians should start with systems and summaries first, before moving down the pyramid. Based on this model, referring to original studies to answer a clinical question should only be performed when no other sources of evidence exist.

Conclusion

Evidence-based medicine is a multistep process that requires familiarity with literature search methods, the knowledge to critically appraise the evidence, and an ability to interpret and apply the evidence to the specific clinical scenario. Clinicians should also be proactive in reviewing their own performance and seek opportunities for improvement. Proficiency in evidence-based medicine has the potential to improve patient care by ensuring that every clinical decision is based on the integration of best evidence, medical expertise, and patient values.