Introduction

Health and behavioral health professionals recognize a critical research-to-practice gap in the provision of community-based services. This gap lies between what is known about effective services developed through careful research and what is typically provided in community-based behavioral health services. Effective services, practices, and programs, defined as evidence-based programs (EBPs), have demonstrated evidence of their effectiveness under controlled research conditions. EBPs were developed with the expectation that professionals would readily adopt services of proven efficacy to improve the quality of outcomes for service recipients. It was believed that good programs would easily find a home in service agencies genuinely interested in using the best interventions for their clients.

Unfortunately, it is now recognized that programs are not adopted readily and there are significant gaps in the translation of EBPs into working programs in the field (Proctor et al., 2009; Urban & Trochim, 2009). Simply providing an effective new program is not sufficient to ensure that it is implemented in the real world.

This inability to translate effective programs into practices in the field has led to an emphasis on implementation science (IS). IS attempts to bridge the gap between research and practice by identifying and accounting for the barriers that prevent effective programs from being easily identified, accepted, and utilized in clinical practice. Variously described as tracing the blue highways, two-way adaptation, research-practice integration, and research translation (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Hoagwood & Johnson, 2003; Urban & Trochim, 2009; Wandersman et al., 2008; Westfall, Mold, & Fagnan, 2007), IS deals with the capacity to move what is known about effective treatment into services (Proctor et al., 2009).

IS encompasses the investigation of methods, variables, interventions, and strategies to promote appropriate adoption, support, and sustainability of EBPs (Titler, Everett, & Adams, 2007). This perspective recognizes the complex problem of ensuring that an effective intervention is adapted and integrated into practice where community acceptability, applicability, organizational and political demands, resources, and cultural differences may compromise program effectiveness and consumer outcomes.

This chapter reviews and discusses research and practice in the fields of behavioral health and public health from the perspective of IS, with an emphasis on the critical questions researchers and practice professionals must address as they attempt to improve services in the community. While a complete discussion of the research-to-practice gap might include the early stages involved in converting basic science findings into human applications and interventions (often labeled translational science), this chapter concentrates on the latter stages concerned with moving programs that have been conceptualized and tested under controlled conditions into clinical practice. We are concerned with the issues that help in moving programs of proven efficacy into programs of ongoing effectiveness in the field. We pay particular attention to the process of implementation and to issues of program fidelity, fit, and adaptation, and we conclude with a discussion of integration and sustainability.

Evidence-Based Programs

As we are concerned with the implementation of evidence-based programs and practices (EBPs), it may be helpful to clarify how we define EBPs. The term “evidence-based practice” has a number of definitions. One definition revolves around evidence-based treatments, practices, and interventions and the related sets of programs or policies that have empirical proof of their effectiveness. Empirical proof, by definition, is based on a demonstration of therapeutic change, an outcome that differs from a no-treatment or treatment-as-usual condition (Kazdin, 2008), and focuses on approaches shown to be effective through research rather than through professional experience or opinion (Guevara & Solomon, 2009).

A second definition of EBPs addresses the practice of clinical service that is based on an evidence-informed philosophy in which services for consumers should emerge from careful consideration of the professional’s clinical expertise and accumulated experience, available research evidence, and the wishes, needs, and preferences of the patient. An EBP then becomes one that integrates these perspectives in the process of making decisions about patient care. Research evidence is just one source of information that helps support an effective patient care process. This broader term is often used by health disciplines including medicine, public health, and psychology (APA Presidential Task Force on Evidence-Based Practice, 2006; Hoagwood & Johnson, 2003; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996) and is a source of confusion among professionals and laypersons alike. Our use of the term EBP aligns with the first definition: those practices, programs, or interventions shown to be empirically efficacious under controlled research conditions.

The emphasis on the use of EBPs has significantly increased in the last three decades. In 1999, the US Surgeon General reported that despite the widespread availability of EBPs, persons with mental illnesses were not actually receiving them (Office of the Surgeon General, 1999).

Of the many programs and services that were in use, only a relatively small number had evidence of their effectiveness (Kazdin, 2000). This led to the President’s New Freedom Commission on Mental Health (2003), which suggested all clinical practice should have a foundation in evidence in order to increase the effectiveness of mental health services. From this emphasis, IS emerged as a key component in the improvement of clinical services.

Barriers to the Use of EBPs

As EBPs are widely available, any discussion of IS must begin with why programs of proven efficacy are not used. The difficulty inherent in the translation of programs into the community does not lie with the lack of effectiveness studies or sufficient evidence to convince skeptics of a program’s utility or value. A large number of evidence-based programs and interventions are available for many behavioral health concerns. Rather, the difficulties rest with the EBP and its fit with a range of issues germane to the service organization and professionals providing services. These include staffing, clientele, political climate, funding limitations, and cultural expectations at both the organizational and community levels (Aarons, 2004, 2006; Green, 2008; Lehman, Greener, & Simpson, 2002).

A number of implementation models suggest six sets of factors are relevant for program implementation success (Chaudoir, Dugan, & Barr, 2013; Damschroder et al., 2009; Durlak & DuPre, 2008; Nilsen, 2015). These factors include (1) characteristics of the EBP itself, (2) characteristics of the professionals providing services, (3) consumer/patient and stakeholder variables, (4) the context and culture of the organization providing services, (5) the community, and (6) the strategies used to facilitate or implement the EBP (see Table 1).

Table 1 Factors relevant for program implementation success

Characteristics of the EBP relevant for successful implementation may include the source of the intervention, the strength of the evidence supporting its use, the relative advantage of its use, and issues of cost, complexity, adaptability, and “trialability” (Damschroder et al., 2009). A program or practice that can be used on a trial basis, adapted to fit the needs or qualifications of current staff, and implemented at little cost is more likely to be adopted than one that cannot. The presence of a standard “manualized” approach is also an important characteristic of the EBP (Stichter, Herzog, Owens, & Malugen, 2016).

Characteristics of the professionals providing services also play a critical role in the successful adoption of new or different services. A fundamental concern for staff is whether they have the qualifications and skills to provide the new service and, if not, whether training is available and readily obtained. The National Implementation Research Network (NIRN) model of implementation suggests the selection, training, and coaching of professional staff are critical drivers of successful implementation (Fixsen et al., 2005). Even with the requisite skills, staff readiness for change and willingness to try a new program may determine if it is implemented successfully (Aarons, 2004). Finally, staff attitudes toward the new effort, their faith in its value, and their trust that the program will be supported all bear on eventual implementation success (Durlak & DuPre, 2008).

Characteristics of the clientele receiving services include considerations of those who will eventually receive the service or program. Even the most effective program will not succeed if it conflicts with the culture, faith, or beliefs of the consumers for whom it is intended (Feldstein & Glasgow, 2008). Patient values and preferences will determine whether they are willing to participate in interventions proposed on their behalf. Culture may trump evidence in the ultimate test of successful implementation. Members of cultural groups that have suffered historical disparities may not trust the program or its purveyors and may refuse to engage in services they did not have a say in developing (Dovidio et al., 2008).

Characteristics of the organization providing services such as organizational type, leadership styles, organizational climate, and the management processes that support the program or practice all contribute to implementation success (Aarons & Sommerfeld, 2012; Aarons, Sommerfeld, & Walrath-Greene, 2009; Durlak & DuPre, 2008). An adaptive leadership style has been proposed as increasing the likelihood of successful program implementation, as have appropriate decision support systems, middle management support, and administrative supports (Fixsen et al., 2005; Tabrizi, 2014).

Another major consideration is the importance of change agents or program champions who may be engaged in the implementation process (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Rogers, 2003). These individuals believe in the purpose and mission of the EBP that their organization is implementing and can assist in creating the organizational culture and climate conducive to accepting innovation. Finally, adequate staffing patterns and supervision may also impact the successful implementation of new services (Walker et al., 2003), as can larger issues of organizational structure such as identifying lines of authority and accountability (Massey, Armstrong, Boroughs, Henson, & McCash, 2005).

Organizations are embedded in broader communities that influence the implementation of new programs and practices. Thus, characteristics of the community also influence successful implementation. Public policies; local, state, and federal laws and regulations; political climate; and realities of funding may all contribute to the utilization of new programs and services (Chaudoir et al., 2013; Durlak & DuPre, 2008). Legal, political, and human capital are often required to ensure successful implementation, and each EBP brings its own set of political, regulatory, and leadership issues (Isett et al., 2007). Damschroder et al. (2009) include communication and social network channels and the resulting community culture that encourages or discourages adoption of new programs and policies.

Lastly, characteristics of the implementation process itself may influence the eventual success of a new program or practice. Damschroder et al. (2009) suggest at least four considerations in how programs are implemented: the processes of planning, engaging, executing, and evaluating programs as they are put into place. Blase, Kiser, and Van Dyke (2013) suggest successful implementation requires consideration of resources, capacity, readiness, and fit as part of the planning and engaging process. As will be discussed later, implementation occurs in stages, with different considerations emerging over time. Much research remains to be done on how to move programs optimally into practice. Crucial questions also remain regarding how much each of these domains weighs in the implementation of new programs and where scarce resources should be placed to maximally encourage successful program innovation.
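To make these factor domains concrete, the sketch below illustrates one way a pre-implementation assessment might organize ratings across the six sets of factors. It is a minimal illustration in Python; the 1-5 rating scale, the domain labels, and the unweighted mean are our own simplifying assumptions rather than elements of any published instrument.

```python
# A minimal, hypothetical sketch of a pre-implementation assessment
# organized around the six factor domains discussed above. The 1-5
# scale and the unweighted mean are illustrative conventions only.
from dataclasses import dataclass, field
from statistics import mean

DOMAINS = [
    "EBP characteristics",
    "Provider characteristics",
    "Consumer/stakeholder variables",
    "Organizational context and culture",
    "Community context",
    "Implementation strategies",
]

@dataclass
class ReadinessAssessment:
    """Holds a 1-5 rating for each of the six factor domains."""
    ratings: dict = field(default_factory=dict)

    def rate(self, domain: str, score: int) -> None:
        if domain not in DOMAINS:
            raise ValueError(f"Unknown domain: {domain}")
        if not 1 <= score <= 5:
            raise ValueError("Ratings must fall between 1 and 5")
        self.ratings[domain] = score

    def overall(self) -> float:
        # Unweighted mean: as noted above, how much each domain should
        # weigh in implementation success remains an open question.
        return mean(self.ratings.values())
```

The unweighted mean is deliberate: absent evidence on the relative weight of each domain, equal weighting serves only as a placeholder.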

Fidelity and Adaptation of EBPs

Given the many barriers to successful implementation, an overarching concern is what must be done to address these challenges to ensure that programs are implemented successfully. Successful program implementation demands a balance between maintaining the fidelity of the program and allowing program adaptations that are required to overcome any barriers to its successful use. The challenge is to resolve the tension between fidelity and fit. This tension deals with the match between programs as developed and the needs, interests, and concerns of populations in the community and may include the degree to which efforts account for cultural, community, and family standards and expectations (Lieberman et al., 2011).

Fidelity has been variously labeled as integrity, implementation fidelity, and treatment fidelity (Allen, Shelton, Emmons, & Linnan, 2018; Carroll et al., 2007; Dane & Schneider, 1998) and defined as the extent to which a program or innovation is implemented as it was originally designed or intended (Allen et al., 2018; Carroll et al., 2007; Durlak & DuPre, 2008). It involves attention to measuring and maintaining the elements of a program or practice that are critical for programmatic impact as the program is brought into the community setting (Bond, Evans, Salyers, Williams, & Kim, 2000; Bruns, 2008; Center for Substance Abuse Treatment, 2007).

The conceptualization and operationalization of fidelity has evolved to include five core elements: (1) adherence, (2) dose or exposure, (3) quality of delivery, (4) participant responsiveness, and (5) program differentiation (Allen et al., 2018; Durlak & DuPre, 2008) (see Table 2).

Table 2 Core elements of fidelity

Adherence refers to the degree to which a program or practice was implemented consistent with the structure, components, and procedures under which it was designed (Carroll et al., 2007). For example, if a substance abuse prevention program delivered in a classroom setting required the teacher to implement the curriculum based on a weekly schedule utilizing an adult learning model, utilizing a biweekly schedule without the adult learning model would reflect poor program adherence.

Dose or exposure refers to the degree to which the amount of a program that participants receive matches the program model as designed (Durlak & DuPre, 2008). While dose in medical terminology is readily defined, in behavioral health settings, dose may correspond to appropriate exposure to program elements, the duration of the program as it was originally prescribed, or even the number of therapeutic sessions attended (Baldwin, Johnson, & Benally, 2009). In an evaluation of a school-based intervention program, Yampolskaya, Massey, and Greenbaum (2006) measured dose as time spent in hours in academic and behavioral programming.
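Operationalized in this way, dose lends itself to straightforward quantification. The minimal Python sketch below computes a dose ratio in the spirit of the hours-based measure used by Yampolskaya et al. (2006); the 80% adequacy threshold is a hypothetical cutoff for illustration, not a value drawn from the literature.

```python
# A minimal sketch of dose as a ratio of program hours received to hours
# prescribed by the program model. The 80% cutoff is hypothetical.
def dose_ratio(hours_received: float, hours_prescribed: float) -> float:
    """Return the proportion of the prescribed program dose received."""
    if hours_prescribed <= 0:
        raise ValueError("Prescribed hours must be positive")
    return hours_received / hours_prescribed

def adequate_dose(hours_received: float, hours_prescribed: float,
                  threshold: float = 0.8) -> bool:
    """Flag participants who received at least `threshold` of the dose."""
    return dose_ratio(hours_received, hours_prescribed) >= threshold

# Example: a participant attended 30 of 40 prescribed program hours.
print(dose_ratio(30, 40))      # 0.75
print(adequate_dose(30, 40))   # False under the hypothetical 80% cutoff
```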

Quality of delivery is the manner in which the implementer (e.g., teacher, clinician, or staff) delivers a program or practice (Allen et al., 2018). This can include how well an implementer answers questions or addresses concerns and how knowledgeable they are about the program model and curriculum. This element is typically measured through observation by a trained rater, using the components included in a fidelity measure or checklist. For example, raters observing a classroom-based substance abuse prevention program may be interested in observing and rating a teacher’s clarity of instruction on how to complete a marijuana myth-busting assignment.

Participant responsiveness refers to how engaged and responsive a participant is to a program or practice, as well as their level of understanding of program materials or the importance of a practice (e.g., deep breathing or adherence to medication) (Allen et al., 2018; Durlak & DuPre, 2008). Although much emphasis has been placed on the examination of adherence and dosage, adherence can be influenced by other elements such as participant responsiveness (Carroll et al., 2007) and may not always be the most significant predictor of participant outcomes.

Program differentiation refers to components that have been identified as unique to a program and without which programmatic success would be impossible (Allen et al., 2018). The identification of these critical common elements of a program or intervention constitutes and defines the program (Chorpita, Daleiden, & Weisz, 2005). Program differentiation may also be important for evaluations of new interventions in order to identify components of the program that are essential for positive outcomes (Carroll et al., 2007). While some researchers suggest that all core elements of fidelity are equally important, others argue that implementers need to prioritize the elements based on the intervention, its purpose, and the resources and personnel available (Allen et al., 2018; Harn, Parisi, & Stoolmiller, 2013).
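Taken together, the five core elements are often summarized in a single fidelity score. The sketch below illustrates one hypothetical way to do so in Python; the 0-1 scoring convention, the example ratings, and the optional weights are our own assumptions, and the weighting option simply mirrors the debate noted above over whether elements should be prioritized or treated as equally important.

```python
# A hypothetical composite fidelity score built from the five core
# elements in Table 2. The 0-1 scores and weights are illustrative.
from statistics import mean

ELEMENTS = ["adherence", "dose", "quality_of_delivery",
            "participant_responsiveness", "program_differentiation"]

def composite_fidelity(scores: dict[str, float],
                       weights: dict[str, float] | None = None) -> float:
    """Combine per-element scores (each 0.0-1.0) into one summary score."""
    missing = set(ELEMENTS) - set(scores)
    if missing:
        raise ValueError(f"Missing fidelity elements: {missing}")
    if weights is None:
        return mean(scores[e] for e in ELEMENTS)  # equal weighting
    total = sum(weights[e] for e in ELEMENTS)
    return sum(scores[e] * weights[e] for e in ELEMENTS) / total

# Hypothetical observer ratings for one program site:
site = {"adherence": 0.9, "dose": 0.75, "quality_of_delivery": 0.8,
        "participant_responsiveness": 0.7, "program_differentiation": 1.0}
print(round(composite_fidelity(site), 2))  # 0.83
```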

Fidelity and Outcomes

There is significant evidence supporting the relationship between fidelity and participant outcomes (cf. Carroll et al., 2007; Durlak & DuPre, 2008), and a thorough evaluation of fidelity is integral to understanding why an intervention succeeds or fails. If fidelity is not monitored and evaluated, it may not be possible to determine whether the failure of an intervention is related to poor implementation, to the shortcomings of the intervention itself (labeled a “type III error”), or to other ancillary variables (Allen et al., 2018; Carroll et al., 2007; Harn et al., 2013). The emphasis on fidelity has resulted in numerous attempts to identify critical elements and standards of programs and to conduct fidelity assessments to measure the degree to which programs maintain these standards (cf. Deschênes, Clark, & Herrygers, 2008; Hernandez, Worthington, & Davis, 2005). For example, Pullmann, Bruns, and Sather (2013) developed a fidelity index that assessed the degree to which providers followed the essential principles of wraparound in their service delivery. The index assesses the degree to which critical components of wraparound such as family participation, strength-based approaches, and cultural competence are present in therapeutic encounters. Thus, fidelity has become the cornerstone of effective implementation (Lendrum, Humphrey, & Greenberg, 2016).

Balanced against the concern for program fidelity is the need for EBPs to fit the communities where they are implemented. This contrasting perspective concerns the relevance of the program for the community and reflects not only the realities of resources and capacity but also culture, family and community preferences, and acceptance by professionals who recognize the unique characteristics and needs of their consumers. Not all EBPs are developed for members of specific communities, nor are all proven interventions appropriate for all communities in need of services. In efforts to ensure the internal validity of research studies, interventions are developed and tested on narrowly defined, homogeneous populations.

The emphasis on internal validity, a critical concern for the development of evidence-based research, comes at the expense of external validity and the effectiveness of interventions across populations (Green, 2008; Green & Glasgow, 2006; Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001). Thus, one difficulty rests with establishing a match between the program developed for a narrow, specifically defined clientele and the diverse clientele residing in the community. A second difficulty rests with the match between the EBPs’ programmatic requirements and the needs, capacity, and constraints operating in community service agencies. Community organizations may simply not have the resources to provide an EBP under the same conditions or at the same level of intensity as the program was developed. Adaptations are then necessary in order to provide an intervention that is effective at the local level (Castro, Barrera Jr, & Martinez Jr, 2004; Harn et al., 2013).

Adaptations can be defined as modifications or changes made to an EBP in order to serve the needs of a particular setting or to increase the fit of a program to a target population. Adaptations typically take place during the adoption and implementation of the intervention. They improve a program’s fit and compatibility with a new setting and the needs of the individual(s) and population(s) of interest (Carvalho et al., 2013; Rabin & Brownson, 2018; Stirman, Miller, Toder, & Calloway, 2013). Client and provider attributes (e.g., language, cultural norms, understanding of the EBP) may also be taken into consideration to enhance the fit between the EBP and consumers (Cabassa & Baumann, 2013).

For example, a study in Zambia sought to adapt trauma-focused cognitive behavioral therapy (TF-CBT) for use with children and adolescents. Murray et al. (2013) discovered it was critical to work collaboratively with local stakeholders and counselors in order to create culturally responsive and high-fidelity adaptations that would increase the “fit” and acceptability of the intervention. The collaborative process by which TF-CBT was selected and adapted helped create strong buy-in from the local community, including the support and recommendation of the Ministry of Health in Zambia (Murray et al., 2013).

Tension exists in the research community over the competing ideas of fidelity versus adaptation (Castro et al., 2004; Morrison et al., 2009). While some argue adaptations are essential in order to meet the needs of a particular setting, others argue a program that has been adapted will be significantly less effective when compared to the original program (Carvalho et al., 2013; Castro et al., 2004; Chambers & Norton, 2016). This distinction rests with the emphasis on ensuring the effectiveness of an intervention under clearly specified conditions versus the emphasis on generalizability and effectiveness in less consistent, real-world settings. While adaptations may threaten internal validity, the intent is to improve external validity and thus enhance outcomes for program participants in the real world (Baumann, Cabassa, & Stirman, 2018).

To address the issues associated with adaptations and fidelity, it is important for consumers (e.g., schools, clinicians, mental health organizations) to identify the core components or “active ingredients” of a program or practice (Chorpita et al., 2005; Harn et al., 2013) in order to preserve them during the adaptation process. Once these core components are defined, frameworks such as the Interactive Systems Framework (Wandersman et al., 2008), the Modification Framework (Stirman et al., 2013), or the Adaptome data platform (Chambers & Norton, 2016) can assist in monitoring adaptations to ensure critical components are left unchanged. If significant program modification does occur, it is incumbent on implementers to conduct rigorous outcome evaluations in order to assess the possible impact the changes may have on intended outcomes (Carvalho et al., 2013) (for a more comprehensive discussion of managing adaptations and fidelity, cf. Cabassa & Baumann, 2013; Castro et al., 2004; Chambers & Norton, 2016; Lee, Altschul, & Mowbray, 2008; Stirman et al., 2013; Wandersman et al., 2008).
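One way to operationalize such monitoring is to log each proposed modification and check it against the protected core components. The Python sketch below illustrates the idea; the component names, the log format, and the accept/reject rule are invented for this example and are not drawn from the Modification Framework or any other published tool.

```python
# A hypothetical adaptation log that rejects changes to core components
# ("active ingredients") while recording permissible surface adaptations.
CORE_COMPONENTS = {"weekly_session_schedule", "adult_learning_model",
                   "skills_practice_homework"}

adaptation_log: list[dict] = []

def propose_adaptation(component: str, change: str, rationale: str) -> bool:
    """Record a proposed change; reject it if it alters a core component."""
    allowed = component not in CORE_COMPONENTS
    adaptation_log.append({"component": component, "change": change,
                           "rationale": rationale, "allowed": allowed})
    return allowed

# Surface adaptations (e.g., translated materials) pass; core changes do not.
propose_adaptation("session_language", "translate materials to Spanish",
                   "fit with the local population")       # returns True
propose_adaptation("weekly_session_schedule", "move to a biweekly schedule",
                   "staff availability")                  # returns False
```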

The question remains as to whether a program reaching optimal fidelity would be sufficient to obtain significant outcomes (Chambers & Norton, 2016). More research is needed to identify the appropriate balance between fidelity and adaptation.

Stages of Implementation

Given the tension between program fidelity and community fit, a natural question is how the implementation process might work. In human service settings, practitioners are typically the ones who deliver a new intervention. As a result, innovations must be built into the practice of thousands of practitioners across multiple organizations that operate under different regulations (e.g., state and federal) and contexts (Fixsen, Blase, Naoom, & Wallace, 2009). It has been suggested that the ultimate success of a program and its sustainability (described below) will depend largely on laying an appropriate foundation for change (Adelman & Taylor, 2003).

To assist in building innovations into community settings, researchers have proposed several models that treat implementation as occurring in stages (Aarons, Hurlburt, & Horwitz, 2011; Fixsen et al., 2005, 2009). The EPIS model (exploration, adoption/preparation, implementation, sustainment) is an example of a four-stage model whose stages span both outer (e.g., sociopolitical) and inner (e.g., organizational characteristics) contexts (Aarons et al., 2011). To provide a concrete example of implementation stages, we review another four-stage model, the National Implementation Research Network’s (NIRN) model, which includes exploration, installation, initial implementation, and full implementation (Fixsen et al., 2005; National Implementation Research Network, 2015).

The National Implementation Research Network (NIRN) Model

The first stage in the NIRN model is exploration. Exploration begins when an organization, community, or an individual within an organization or community decides to make use of a new program or practice. The purpose of this stage is to explore the potential fit between the community and the EBP, the needs of the community, the requirements of the EBP, and the amount of community resources needed and available to implement the new program. This stage helps determine whether the organization should proceed with the innovation. A critical question in this stage is the degree of the organization’s readiness for implementation. Research has shown that taking time for exploration and planning saves time and money and can increase the likelihood of success (Fixsen et al., 2005; National Implementation Research Network, 2015; Saldana, Chamberlain, Wang, & Hendricks Brown, 2012).

The second stage of implementation in the NIRN model is installation. During installation, the resources and structural supports needed to assist the implementation of an EBP are procured. Resources can include selecting staff; finding sources for training and coaching and providing the initial training for staff; ensuring location/space (e.g., classroom or office space) and access to materials or equipment (e.g., computer or projector); finding or developing fidelity tools; and identifying funding streams and human resource strategies. This is the stage where a community or organization prepares its staff for the new innovation (Fixsen et al., 2005; National Implementation Research Network, 2015).

The third stage is initial implementation. This stage involves using the new EBP for the first time. Often referred to as the “initial awkward stage” of implementation, this is where practitioners become familiar with the new program or practice (Fixsen et al., 2005). It is also the most delicate stage of implementation, because organizations and practitioners are changing their normal, comfortable routines and must resist the urge to revert to old ways of working. In order to sustain these changes in practitioners’ routines, it is essential to establish external supports (e.g., coaches, implementation teams, or leadership) at the practice, organization, and system levels (National Implementation Research Network, 2015).

The final stage in the NIRN model is full implementation. Full implementation is achieved when the new ways of providing services have become standard practice for practitioners, staff, and organizational leaders. Concomitant changes in policies and procedures are also standardized. At this point, the anticipated benefits of an EBP are realized, with staff and practitioners skilled in the procedures of their new routine. Achieving and sustaining full implementation is an arduous process and may be enabled by the success of the preceding stages (National Implementation Research Network, 2015). However, research has shown that success in the early stages of implementation does not always guarantee full implementation (Abdinnour-Helm, Lengnick-Hall, & Lengnick-Hall, 2003).
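Because the NIRN stages are sequential, they can be represented as an ordered progression. The Python sketch below encodes the four stages with a simple advancement rule; the goals-met check is our own simplification, and, as the research cited above cautions, real sites may stall or regress even after early success.

```python
# A minimal sketch of the four NIRN stages as an ordered progression.
# The goals-met rule is a simplification for illustration only.
from enum import IntEnum

class NIRNStage(IntEnum):
    EXPLORATION = 1
    INSTALLATION = 2
    INITIAL_IMPLEMENTATION = 3
    FULL_IMPLEMENTATION = 4

def advance(current: NIRNStage, stage_goals_met: bool) -> NIRNStage:
    """Move to the next stage only when the current stage's goals are met."""
    if not stage_goals_met or current is NIRNStage.FULL_IMPLEMENTATION:
        return current  # sites may also stall or regress in practice
    return NIRNStage(current + 1)

stage = NIRNStage.EXPLORATION
stage = advance(stage, stage_goals_met=True)   # -> INSTALLATION
stage = advance(stage, stage_goals_met=False)  # remains INSTALLATION
print(stage.name)                              # INSTALLATION
```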

One of the main benefits of adhering to a theoretical model or conceptual framework is that it allows consumers and researchers to plan for potential barriers and recognize the facilitators of implementation before resources and time are depleted. More examples of and information on other models of implementation can be found elsewhere (cf. Aarons et al., 2011; Damschroder et al., 2009; Rogers, 2003; Saldana, 2014; Saldana et al., 2012).

Sustainability of EBPs

Once a program is in place, the question becomes how to sustain it. Sustainability concerns the continuity and maintenance of programs after implementation and must be a major consideration of IS. Sustainability may be broadly defined to encompass several aspects of the continuity of an EBP, including maintaining procedural processes, commitments, and financing (Fixsen et al., 2005); obtaining resources and gaining visibility, status, and organizational place (Massey et al., 2005); and supporting the continued benefits and positive outcomes of the program effort (Moore, Mascarenhas, Bain, & Straus, 2017). Sustainability may be best thought of as a continuation of the implementation process, where the emphasis shifts from putting a program into place to maintaining the program through ongoing adaptation and continuous quality improvement efforts (Chambers, Glasgow, & Stange, 2013).

While there have been major advances in understanding the adoption, integration, and implementation of EBPs, program sustainability is not always adequately considered (Shelton, Cooper, & Stirman, 2018). This lack of attention can not only lead to economic and resource losses from wasted effort but also limit the likelihood of successful improvements. EBPs that are discontinued or abandoned can result in lower levels of buy-in when a new EBP is proposed for an organization or community and can limit the trust that individuals place in research and in the organizations that conduct it (Shelton et al., 2018).

A number of challenges exist to the sustainability of even well-implemented programs. For example, a systematic review examining the sustainability of health interventions implemented in sub-Saharan Africa found that weak health systems, lack of financial leadership, lack of a consistent workforce, and social and political climates limited an organization’s ability to build capacity and sustain interventions (Iwelunmor et al., 2016).

Those who implement EBPs frequently fail to consider the ongoing changes that happen within communities and organizations (Chambers et al., 2013). Prevention programs implemented within a community or organization evolve over time due to changes in staff and in their level of understanding (i.e., buy-in), feedback from the community or organization, and improvements in the quality of delivery (Shelton et al., 2018). Consistent with the implementation process, research suggests that, among other factors, successful sustainability requires modifiable programs, internal champions, readily perceived benefits, and adequate funding and infrastructure support (Hunter, Han, Slaughter, Godley, & Garner, 2017; Scheirer, 2005). It is also critical to ensure all important stakeholders are included in sustainability planning. For example, failing to include the individuals who deliver the practice or program (e.g., clinicians or teachers) may lead to issues with long-term buy-in (Cooper, Bumbarger, & Moore, 2015).

Planning for sustainability should be an ongoing discussion that begins in the initial exploration stage. Starting early allows time to be dedicated to planning for long-term financing, commitment and organizational support, training and coaching for the workforce, and procedural evaluation and monitoring (Chambers et al., 2013).

Implications for Behavioral Health

IS has clearly defined the difficulties of bringing programs of proven efficacy into the community where they may serve the public interest. For the researcher, it is clear that simply developing a program with the expectation that it will be adopted readily into the field is naïve. While preliminary studies may narrowly focus on exemplary conditions to demonstrate an intervention is effective, it behooves the researcher to move into the community to assess effectiveness as well.

For the practitioner in the field, there is an opportunity to work collaboratively to identify the critical components of interventions and to match those demands to the needs and characteristics of the organization, the community, and the clientele for whom the program is intended. This bi-directional effort linking practitioner and researcher not only strengthens the development of programs and their relevance for the community but also helps identify and build the conditions under which new programs may be maximally effective.

A collaborative process can be established by which consumers, families, practicing clinicians, communities, and cultures develop common agendas for the improvement of service outcomes and actively participate through all stages of program development and implementation (Baumbusch et al., 2008; Gonzales, Ringeisen, & Chambers, 2002; Green, 2008; Hoagwood et al., 2001; McDonald & Viehbeck, 2007). Models for this approach include community-based participatory research (CBPR), which departs from traditional applied research paradigms and strives to incorporate community partnership and action-oriented approaches to behavioral health research (Minkler & Wallerstein, 2013).

In addition, IS training efforts that prepare researchers for program development and implementation may also benefit from expanded opportunities to work in community settings. For example, service-learning placements that put researchers in the settings where programs are implemented offer opportunities to learn the implementation process firsthand and to strengthen cooperation between program implementers and program users (Burton, Levin, Massey, Baldwin, & Williamson, 2016).

The push for policies and regulations requiring EBPs across multiple health services, the lack of buy-in from health practitioners, and poor methods for disseminating evidence remain critical issues in the research-to-practice gap. Estimates suggest it can take up to 17 years for EBPs to make their way from research to practice (Green, Ottoson, Garcia, & Hiatt, 2009). IS addresses this gap by assisting researchers and communities with the translation of research to real-world practice, identifying the implementation factors that are essential for the consistent, sufficient, and effective use of EBPs. IS is an essential driver for ensuring effective and efficacious programs and practices and will lead to significant health benefits for the diverse populations and communities requiring behavioral health services.