Introduction

As two social workers who support the process of evidence-based practice (EBP) as outlined by the originators of this model, proposed in the first edition of Evidence-based Medicine (Sackett et al. 1997) and continued through to the 4th edition of this primary sourcebook (Straus et al. 2010), we recognize that the title of this paper may seem surprising. It is, however, quite accurate, and the extent to which it seems surprising to readers reflects the extent to which EBP is misunderstood within the human service community, including among clinical social workers. It is always best to become familiar with any given model’s tenets by reviewing its original source documents. Here is how EBP is defined by its developers:

Evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical expertise, and our patient’s unique values and circumstances.

By best research evidence we mean clinically relevant research, often from the basic sciences, but especially from patient centered clinical research into the accuracy and precision of … tests … and the efficacy and safety of therapeutic, rehabilitative and preventive regimens…

By clinical expertise we mean the ability to use our clinical skills and past experiences to rapidly identify each patient’s unique health state and diagnosis, their individual risks and benefits of potential interventions, and their personal values and expectations.

By patient values we mean the unique preferences, concerns and expectations each patient brings to a clinical encounter and which must be integrated into clinical decisions if they are to serve the patient.

When these three elements are integrated, clinicians and patients form a diagnostic and therapeutic alliance which optimizes clinical outcomes and quality of life. (Straus et al. 2010, p. 1, emphases in original)

We note the evident congruence between the above description of EBP and clinical social work values and practices. Having the practitioner seek out the best scientific evidence about potentially helpful assessment methods and approaches to intervention is certainly congruent with various social work ethical codes, as in the following:

Social workers should base practice on recognized knowledge, including empirically-based knowledge, relevant to social work and social work ethics (National Association of Social Workers 2008, 4.01[c])

The reader will also note the importance of a client’s preferences, values, concerns and expectations, and unique state in this foundational description of evidence-based practice. In many ways, EBP is a very holistic approach to practice, and it differs from prior models by its explicit incorporation of nonscientific considerations into the decision-making process. As illustrated in Fig. 1, EBP is the integration of all of these elements in making a clinical decision, with no one factor having priority over another.

Fig. 1 The essential components of the process model of evidence-based practice

One controversial point concerns what constitutes the best research evidence. EBP embraces all forms of evidence, and certainly recognizes that in many areas of practice the scientific evidentiary foundations may be quite weak. But this does not mean that the model has no applicability. EBP relies on the best available evidence, and yes, there is a preference for certain forms of evidence over others. But this preference is based upon commonly accepted standards of science, standards which have proven their usefulness in drawing legitimate conclusions. Table 1 lists, in approximate order, the types of evidence which may be taken into account when seeking the best research evidence relating to selecting an intervention. Other types of evidence would be considered if the issue dealt with selecting an appropriate diagnostic or assessment measure (e.g., factor analytic studies), or attempting to determine the etiology of a particular psychosocial problem (e.g., epidemiological studies). And within each form of evidence, studies may vary in quality. Usually, for example, a randomized controlled trial (RCT) is seen as stronger if it contains an appropriate number of participants, relative to one which, although published, had so few participants as to be statistically underpowered. A published study which included credible fidelity checks to ensure that the interventions were delivered competently, and without contamination between groups supposedly receiving differing treatments, would be seen as stronger than a similar study lacking such fidelity checks. And a study whose findings have been independently replicated and held up would be seen as more convincing than a study which has never been put to the test again, or which had been put to the test and whose findings were not replicated.

Table 1 Hierarchical forms of evidence that may bear on selecting an intervention (evidence higher on the list is usually seen as more credible than lower forms of evidence)

In the EBP practice model, the findings of the higher forms of evidence, if competently conducted, are usually given greater weight or credibility than forms of evidence lower on the hierarchy. This is because these higher forms are more capable of reducing bias and uncertainty in arriving at conclusions. It is not uncommon for the positive findings obtained from an uncontrolled pretest–posttest group outcome study, e.g., one assessing clients’ psychosocial functioning before and immediately after treatment, which can be diagrammed as an O1–X–O2 design, to be overturned when a comparison group of untreated clients is added to a replication study on the same intervention. Such a comparison group can partially control for confounds such as spontaneous improvement, the passage of time, historical events, or regression to the mean (which may occur because clients tend to seek help when their problems are at their worst). It is also common for positive findings obtained through quasi-experiments, studies using naturally occurring groups of clients, to be overturned when the study is replicated using more rigorous experimental methods, such as randomly assigning clients to differing treatment conditions. At the low end of the hierarchy, the opinions of highly experienced practitioners would usually be given greater credibility than those of the novice clinician, and treatment recommendations derived from a legitimate, widely accepted social work theory (e.g., attachment theory) would be seen as more useful than those derived from some novel and bizarre theory involving completely new concepts and metaphysical forces (e.g., Reich’s Orgone theory). Conclusions derived from a series of replicated narrative case studies would be accorded greater weight than a single such report, and so forth. Practitioners of all theoretical and epistemological persuasions weigh and value evidence, using differing standards, but the consistent use of mainstream scientific methodology affords the field a systematic rationale for making these necessary and inevitable judgments about the credibility and weight of evidence.
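To make the regression-to-the-mean confound concrete, the following is a minimal simulation sketch in Python. All of the numbers are hypothetical (they do not come from any cited study), and the "treatment" is assumed to have zero true effect; the point is only that selecting clients when their observed problems are at their worst produces an apparent pre-post improvement that an untreated comparison group exposes as artifactual.

```python
# Minimal sketch (hypothetical numbers; treatment assumed to have zero true effect)
# of why an uncontrolled O1-X-O2 design can mislead via regression to the mean.
import random

random.seed(1)

def observe(level):
    """One noisy measurement of a client's underlying symptom level."""
    return level + random.gauss(0, 10)

# A population whose underlying symptom levels are stable over the study period.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Clients tend to seek help when their problems are at their worst: only people
# whose observed pretest score (O1) is high enter the study.
screened = [(level, observe(level)) for level in population]
eligible = [(level, o1) for level, o1 in screened if o1 > 65]
treated, comparison = eligible[:200], eligible[200:400]

# The intervention (X) does nothing here; the posttest (O2) is simply a second
# noisy observation of the same underlying symptom level.
mean = lambda xs: sum(xs) / len(xs)
o1_t = mean([o1 for _, o1 in treated])
o2_t = mean([observe(level) for level, _ in treated])
o1_c = mean([o1 for _, o1 in comparison])
o2_c = mean([observe(level) for level, _ in comparison])

print(f"Treated    O1 -> O2: {o1_t:.1f} -> {o2_t:.1f}")
print(f"Comparison O1 -> O2: {o1_c:.1f} -> {o2_c:.1f}")
# Both groups 'improve' by a similar amount even though nothing effective was done:
# the apparent gain is regression to the mean, which the comparison group reveals.
```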

One common misconception of EBP is that it requires the existence of randomized controlled trials, meta-analyses, or systematic reviews in order to be successfully undertaken. This is far from the truth. The reality is that EBP asks the practitioner to locate the best available evidence, and to evaluate its findings and potential applicability, in order to help inform practice decisions. For example, if there are no systematic reviews (SRs), meta-analyses, or RCTs, then one would seek out any pertinent quasi-experiments. If none can be found, one then looks for pre-experimental studies, and so on. In EBP there is always evidence to help inform practice. And sometimes evidence lower on the hierarchy found in Table 1 is indeed the best evidence, and it should be critically evaluated for its potential applicability.
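As a purely schematic illustration of this "best available evidence" fallback, the Python sketch below walks down an ordered list of evidence types until something pertinent is found. The hierarchy labels only loosely follow Table 1, and the search function and toy "literature" are hypothetical stand-ins for real bibliographic searching, not any actual database or API.

```python
# Schematic sketch of the best-available-evidence fallback described above.
# The tiers loosely follow Table 1; the search callable is a hypothetical stand-in.

EVIDENCE_HIERARCHY = [
    "systematic reviews / meta-analyses",
    "randomized controlled trials",
    "quasi-experimental studies",
    "pre-experimental (e.g., uncontrolled pretest-posttest) studies",
    "narrative case studies",
    "practitioner opinion and theory",
]

def best_available_evidence(question, search):
    """Work down the hierarchy until some pertinent evidence is found.

    `search(question, tier)` returns a list of studies (possibly empty).
    EBP never returns 'no evidence'; whatever is found, including lower-tier
    evidence, is then critically appraised (for disconfirmatory evidence and
    evidence of harm as well as for support).
    """
    for tier in EVIDENCE_HIERARCHY:
        studies = search(question, tier)
        if studies:
            return tier, studies
    return EVIDENCE_HIERARCHY[-1], []

# Illustrative use with a toy, made-up 'literature' (no SRs or RCTs available):
toy_literature = {
    ("treatment of compulsive hoarding", "quasi-experimental studies"): ["Hypothetical Study A"],
}
tier, studies = best_available_evidence(
    "treatment of compulsive hoarding",
    lambda question, tier: toy_literature.get((question, tier), []),
)
print(tier, studies)  # falls through the top tiers to the quasi-experimental evidence
```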

An important and often overlooked part of the EBP process of critical evaluation is that, in addition to seeking out the best confirmatory evidence, it is equally important to focus on the best disconfirmatory evidence, including evidence that a particular intervention may have been shown to produce harmful effects in some clients (Gambrill 2011; Lilienfeld 2007). Many of the evidence hierarchies are designed in such a way that they seek out confirmatory evidence to the exclusion of an equally important part of the critical appraisal process, disconfirmatory evidence (Gambrill 2006). For example, a critical analysis of a systematic review on interventions for foster children found that one of the interventions classified as supported and acceptable also had evidence of harm that was being overlooked (Pignotti and Mercer 2007). The EBP critical appraisal process goes beyond simply looking at a hierarchy of confirmatory evidence and critically examines all available evidence. Given the ethical mandate to first do no harm, and the importance of considering client values, even lower-level evidence of harm needs to be taken seriously into consideration.

Another example of the failure to examine disconfirmatory evidence was found in a review of energy therapies (Feinstein 2008) which did not take into account randomized controlled studies that provided disconfirmatory evidence (Pignotti and Thyer 2009). Here, even though these practices have not been shown to do harm, confirmatory studies conducted by those who had a vested interest in the interventions need to be tempered with an examination of disconfirmatory studies showing that the effects are likely due to placebo. This is all part of the process that may easily get lost if we simply pick from a list of “supported” treatments rather than fully engaging in the evidence-based practice process.

Obviously there is little chance of finding a perfect one-to-one correspondence between your client’s characteristics and unique circumstances and published studies involving clients similar to yours, and EBP does not require this. Again, simply look for the best available evidence, including any disconfirmatory evidence. Suppose you have a client who is Asian and is seeking help in overcoming compulsive hoarding, yet when you read the empirical literature on the treatment of hoarding you find that the available RCTs have only involved Caucasian and African American clients. Does the fact that you find nothing involving the treatment of Asian clients who hoard mean you have no evidence to rely on? Obviously not, and in this particular case the best available evidence on the treatment of hoarding involving Caucasian and African American clients would be an excellent place to start. And even if there are obvious demographic similarities between your client and those reported in the published literature, there will remain uncounted differences (religion, socio-economic status, prior treatment histories, etc.) which preclude any simplistic extrapolation from the published evidence to your unique client. But this does not mean you start from scratch. Ideally, as research on psychosocial treatments evolves, the field can identify robust therapies which are effective across a wide array of client groups and service providers (e.g., LCSWs, psychologists, counselors, etc.), and whose effects are not so evanescent that they collapse when applied under conditions slightly different from those in which they were originally shown to be helpful.

One can find various efforts in the social work literature that have attempted to provide benchmarks or standards one can use in evaluating scientific studies (e.g., Thyer 1989, 1991, 2002), but there are two recent contributions to this literature which deserve particular consideration. The first of these is the Journal Article Reporting Standards (JARS) found in the sixth edition of the Publication Manual of the American Psychological Association (APA 2010, pp. 247–253). There are four separate JARS, each building upon the prior ones. The first describes information recommended for inclusion in all research reports involving the collection of data; the second builds upon the first by adding reporting standards for studies involving an experimental manipulation or intervention; the third adds reporting criteria for studies using random or non-random assignment methods to allocate participants to comparison or control groups; and the fourth relates to recommended standards for reporting the results of a meta-analysis. By placing the imprimatur of the American Psychological Association behind such standards, it is hoped that a greater degree of consistency in reporting the details of research studies will facilitate the flow of information and result in the publication of more comprehensively reported and transparent research.

A related influential development has been the creation of the Consolidated Standards of Reporting Trials (CONSORT; see http://www.consort-statement.org/home/). The CONSORT Statement is a template to aid in the write-up and critical analysis of randomized controlled trials. It consists of a 25-item checklist for a reader (or author) to use, indicating the page number of a paper where information pertaining to critical reporting elements of an RCT is included. It addresses the standard elements and content of an outcome study, e.g., Title and Abstract, Introduction, Methods, Results, and Discussion. A more focused extension of the CONSORT Statement pertinent to experiments involving nonpharmacological (e.g., psychotherapy, clinical social work, counseling) interventions has also been developed (Boutron et al. 2008). Both the JARS and the CONSORT guidelines are benchmarks which can be used to evaluate the scientific adequacy of studies on the outcomes of social work practice using group research designs. They possess the virtue of transparency and represent an admirable effort to control for the effects of experimenter/therapist bias and extra-study confounds as much as possible, so as to approximate nature’s truth regarding the efficacy of psychosocial treatments. Their epistemological foundations are clear and, to many, compelling, consisting of an amalgam of positivism, determinism, operationism, falsificationism, empiricism, parsimony, realism, and scientific skepticism (see Thyer 2010). While these principles may be endlessly debated in the social work literature by our philosophical nihilists and sophists, in fact they are widely accepted in the scientific community and put to good use by serious research methodologists in the service of evaluating the outcomes of practice. The results are clear: a slow accumulation of credible knowledge about empirically supported treatments for a wide array of mental health diagnoses, other psychosocial disorders, and problematic interpersonal dysfunctions.

EBP is a process aimed at helping clinicians and clients make important practice decisions. It is not a listing of treatments that have met some evidentiary standard. The five steps of this EBP process are as follows:

  1. Step 1—converting the need for information (about prevention, diagnosis, therapy, causation, etc.) into an answerable question.

  2. Step 2—tracking down the best evidence with which to answer that question.

  3. Step 3—critically appraising that evidence for its validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in our clinical practice).

  4. Step 4—integrating the critical appraisal with our clinical expertise and with our client’s unique…values and circumstances.

  5. Step 5—evaluating our effectiveness and efficiency in executing steps 1–4, and seeking ways to improve them both for next time. (Steps 1–5 quoted from Sackett et al. 1997, pp. 3–4)

If this five-step process of EBP is unfamiliar to the reader, s/he is encouraged to seek out and read the original primary source documents describing this model as outlined by its developers (e.g., Straus et al. 2010), and not to rely on third- or fourth-hand interpretations which have successively deviated from the original description. This is a rampant problem in the social work and other literatures which critique EBP. For example, the first article on this topic ever published in the British Journal of Social Work (Webb 2001) failed to cite a single primary source document about evidence-based practice. The closest it came was to cite a webpage from a social work academic program in England. As a result, this now widely cited article was rife with mischaracterizations and straw-person portrayals, and did a serious disservice to subsequent informed discussion on the topic.

Evidence-Based Practice is Not a Medical Model

Although EBP originated in the field of medicine, its generic process has become widely adopted in many non-medical fields. We contend that this approach is not a medical model because nowhere does it suggest that human problems have a biological etiology (although many medical illnesses do); nowhere is it contended that human problems are best addressed through biologically-based interventions, such as drugs or surgery; and nowhere does it assert that medical doctors should be the primary providers of care. These three elements are the defining features of the medical model. The reality is that EBP is atheoretical with respect to the causes of problems, the proper basis of intervention, or who should deliver services. EBP has been adopted not only within many health care disciplines, but also within many decidedly non-medical fields such as policy practice, community intervention, supervision, management and administration, psychology, and public administration. It is a decidedly scientific model, but this is a well-justified approach to many areas of social work practice. The conceptual origins of a model of practice, in this case within the discipline of medicine, have little bearing on the pragmatic application of the approach in other areas. It is either helpful, or it is not.

Evidence-Based Practice is Not a Collection of Empirically-Supported Treatments

One of the more common misconceptions of EBP, and a major source of resistance to it, is the notion that it consists of clinicians diagnosing a client using formal mental health criteria and then tracking down psychosocial or other interventions that have met some established standards of research evidence. The practitioner is then expected to apply this selected empirically supported treatment (EST) to the client, and if this is not done the clinician is somehow not adhering to so-called ‘best practices’ and may be seen as ethically suspect or even as engaging in malpractice. For example, social workers Mullen and Streiner (2004, p. 113) define EBPs as “any practice that has been established as effective through scientific research according to a clear set of explicit criteria.” This confusion was also evident in a national survey of American social work faculty, wherein Rubin and Parrish (2007) found that about 23% of respondents endorsed the definition of EBP as “… a way to designate certain interventions as empirically supported under certain conditions” (p. 116), about 24% as “a process that includes locating and appraising evidence as a part of practice decisions” (p. 116), and 46% defined EBP as both of the above definitions. In other words, fewer than 25% of American social work faculty identified the definition of EBP correctly (the second choice above).

In part this confusion arises through the conflation of EBP with a parallel initiative undertaken by elements of the American Psychological Association (APA) known as the Empirically Supported Treatments initiative. This, like EBP, began in the early 1990s and consisted of two processes. The first was to come to some consensus regarding how much evidence should be considered necessary in order to designate a given form of psychotherapy as ‘empirically supported’. Once this benchmark was arrived at, researchers scoured the literature to determine which treatments met these standards, and lists of these so-called empirically supported treatments (ESTs) began to appear in the literature. These evidentiary standards and lists of ESTs (now called Research-supported Treatments) can be located via a website maintained by the Division of Clinical Psychology of the APA (http://www.psychology.sunysb.edu/eklonsky-/division12/). One can click on a given mental illness diagnosis and a description of the disorder and its research-supported treatments (and supportive citations) will appear, along with information on how to get trained in that method. Or one can click on a given form of psychotherapy and read about its level of research support. Thyer (2010) reviewed the history of social work’s empirical clinical practice model, psychology’s empirically supported treatments initiative, and medicine’s evidence-based practice movement, noting the similarities (a strong reliance on scientific evidence to guide the selection of treatment) and differences (the greater sophistication of the EBP process model). Nowhere does EBP provide lists of treatments. Instead, individual clinicians are urged to consult and appraise the research evidence themselves, and to integrate this information with the other crucial elements of this model, including client preferences, ethical considerations, one’s own clinical expertise, and the availability of resources. These latter elements are conspicuously absent from the lists of ESTs, and this lacuna is one source of confusion about, and resistance to, EBP, since EBP is so widely seen as referring to ESTs. In fact, nowhere does EBP provide lists of endorsed or approved therapies, or statements to the effect of, for example, “Beck’s Cognitive Therapy is an Evidence-based Practice for Clients with Major Depression.” Any statement like this would virtually ignore two-thirds of the EBP process model. If a client simply refused to engage in cognitive therapy, a practitioner could still remain true to EBP by offering alternative treatments. If a given treatment were seen as unethical, even if strongly supported by research evidence, EBP permits, indeed encourages, allowing ethical considerations to overrule scientific considerations. Suppose a client with major depression also had an intellectual disability and was unable to productively engage in Beck’s Cognitive Therapy. Would it be appropriate to say that Cognitive Therapy was an evidence-based practice for this client? Obviously not.

The most explicit guidance EBP provides in this regard is the commissioning and publication of what are called systematic reviews (SRs) by the Cochrane and Campbell Collaborations, with the former focusing on health care (including mental health and substance abuse) and the latter on social welfare, education, and criminal justice. These SRs (see Littell et al. 2008) are prepared by collaborative teams of clinical and methodological experts summarizing the best available evidence about various interventions related to particular problems (there are also SRs on assessment methods and on the etiology of problems). Their concluding statements are not recommendations about what clinicians should or should not do, but rather careful summaries of what the research evidence has to say about an intervention’s effectiveness. It is up to the individual practitioner to assimilate this information, along with other sources of evidence, and, in conjunction with the client, decide what to do. And sometimes doing nothing (watchful waiting) can be the best option. It would be antithetical, according to EBP, to say that a given treatment should be used with clients with a particular problem, and in real EBP this is not done. Hence the title of this article: “Evidence-based Practices Do Not Exist.” There is the process model of evidence-based practice, which functions more like a verb; there are no scientifically justifiable lists of evidence-based practices (a noun). One cannot decide how to treat a client only by considering the scientific evidence. The other factors comprising the EBP model are also essential to consider. This is why it is incorrect to assert that any given treatment is an evidence-based practice.

We have included a brief table (see Table 2) of the acronyms used in this paper, common to discussions of evidence-based practice, so that the reader may more readily understand their differences and commonalities.

Table 2 Common acronyms related to evidence-based practice