Abstract
This chapter reviews and discusses research and practice in the fields of behavioral health and public health from the perspective of implementation science, with an emphasis on critical questions researchers and practice professionals must address as they attempt to improve services in the community. It also focuses on the latter stages of that process, concerned with moving programs that have been conceptualized and tested under controlled conditions into clinical practice, paying particular attention to the process of implementation and to issues of program fidelity, fit, and adaptation, and concludes with a discussion of integration and sustainability.
Introduction
Health and behavioral health professionals recognize a critical research-to-practice gap in the provision of community-based services. This gap lies between what is known about effective services developed through careful research and what is typically provided in community-based behavioral health services. Effective services, practices, and programs, defined as evidence-based programs (EBPs), have demonstrated evidence of their effectiveness under controlled research settings. EBPs were developed with the expectations that professionals would readily adopt services of proven efficacy to improve the quality of outcomes for service recipients. It was believed good programs would easily find a home in service agencies that are genuinely interested in using the best interventions for their clients.
Unfortunately, it is now recognized that programs are not adopted readily and there are significant gaps in the translation of EBPs into working programs in the field (Proctor et al., 2009; Urban & Trochim, 2009). Simply providing an effective new program is not sufficient to ensure that it is implemented in the real world.
This inability to translate effective programs into practices in the field has led to an emphasis on implementation science (IS). IS attempts to bridge the gap between research and practice by identifying and accounting for the barriers that prevent effective programs from being easily identified, accepted, and utilized in clinical practice. Variously described as tracing the blue highways, two-way adaptation, research-practice integration, and research translation (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Hoagwood & Johnson, 2003; Urban & Trochim, 2009; Wandersman et al., 2008; Westfall, Mold, & Fagnan, 2007), IS deals with the capacity to move what is known about effective treatment into services (Proctor et al., 2009).
IS encompasses the investigation of methods, variables, interventions, and strategies to promote appropriate adoption, support, and sustainability of EBPs (Titler, Everett, & Adams, 2007). This perspective recognizes the complex problem of ensuring that an effective intervention is adapted and integrated into practice where community acceptability, applicability, organizational and political demands, resources, and cultural differences may compromise program effectiveness and consumer outcomes.
This chapter reviews and discusses research and practice in the fields of behavioral health and public health from the perspective of IS, with an emphasis on critical questions researchers and practice professionals must address as they attempt to improve services in the community. While a complete discussion of the research-to-practice gap might include the early stages involved with converting basic science findings into human applications and interventions (often labeled translational science), this chapter concentrates on latter stages concerned with moving programs that have been conceptualized and tested under controlled conditions into clinical practices. We are concerned with the issues that help in moving programs of proven efficacy into programs of ongoing effectiveness in the field. We pay particular attention to the process of implementation, issues in program fidelity, fit, and adaptation and conclude with a discussion of integration and sustainability.
Evidence-Based Programs
As we are concerned with the implementation of evidence-based programs and practices (EBPs), it may be helpful to clarify how we define EBPs. The term “evidence-based practice” has a number of definitions. One definition revolves around evidence-based treatments, practices, and interventions and those related sets of programs or policies that have empirical proof of their effectiveness. Empirical proof, by definition, is based on a demonstration of therapeutic change, an outcome that is different from a no-treatment or treatment-as-usual condition (Kazdin, 2008), and focuses on approaches shown to be effective through research rather than through professional experience or opinion (Guevara & Solomon, 2009).
A second definition of EBPs addresses the practice of clinical service that is based on an evidence-informed philosophy in which services for consumers should emerge from careful consideration of the professional’s clinical expertise and accumulated experience, available research evidence, and the wishes, needs, and preferences of the patient. An EBP then becomes one that integrates these perspectives in the process of making decisions about patient care. Research evidence is just one source of information that helps support an effective patient care process. This broader term is often used by health disciplines including medicine, public health, and psychology (APA Presidential Task Force on Evidence-Based Practice, 2006; Hoagwood & Johnson, 2003; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996) and is a source of confusion among professionals and laypersons alike. Our use of the term EBP aligns with the first definition, as in those practices, programs, or interventions shown to be empirically efficacious under controlled research situations.
The emphasis on the use of EBPs has significantly increased in the last three decades. In 1999, the US Surgeon General reported that despite the widespread availability of EBPs, persons with mental illnesses were not actually receiving them (Office of the Surgeon General, 1999).
Of the many programs and services that were in use, only a relatively small number had evidence of their effectiveness (Kazdin, 2000). This led to the President’s New Freedom Commission on Mental Health (2003), which suggested all clinical practice should have a foundation in evidence in order to increase the effectiveness of mental health services. From this emphasis, IS emerged as a key component in the improvement of clinical services.
Barriers to the Use of EBPs
As EBPs are widely available, any discussion of IS must begin with why programs of proven efficacy are not used. The difficulty inherent in the translation of programs into the community does not lie with the lack of effectiveness studies or sufficient evidence to convince skeptics of a program’s utility or value. A large number of evidence-based programs and interventions are available for many behavioral health concerns. Rather, the difficulties rest with the EBP and its fit with a range of issues germane to the service organization and professionals providing services. These include staffing, clientele, political climate, funding limitations, and cultural expectations at both the organizational and community levels (Aarons, 2004, 2006; Green, 2008; Lehman, Greener, & Simpson, 2002).
A number of implementation models suggest six sets of factors are relevant for program implementation success (Chaudoir, Dugan, & Barr, 2013; Damschroder et al., 2009; Durlak & DuPre, 2008; Nilsen, 2015). These factors include (1) characteristics of the EBP itself, (2) characteristics of the professionals providing services, (3) consumer/patient and stakeholder variables, (4) the context and culture of the organization providing services, (5) the community, and (6) the strategies used to facilitate or implement the EBP (see Table 1).
Characteristics of the EBP relevant for successful implementation may include the source of the intervention, the strength of the evidence supporting its use, the advantage of its use, and issues of cost, complexity, adaptability, and “trialability” (Damschroder et al., 2009). A program or practice that can be used on a trial basis, adapted to fit the needs or qualifications of current staff, and costs little to implement is more likely to be adopted than one that does not. The presence of a standard “manualized” approach is also an important characteristic of the EBP (Stichter, Herzog, Owens, & Malugen, 2016).
Characteristics of the professionals providing services also play a critical role in the successful adoption of new or different services. A fundamental concern for staff is if they have the qualifications and skills to provide the new service and, if not, is training available and readily obtained. The National Implementation Research Network (NIRN) model of implementation suggests the selection, training, and coaching of professional staff are critical drivers of successful implementation (Fixsen et al., 2005). Even with the requisite skills, staff readiness for change and willingness to try a new program may determine if it is implemented successfully (Aarons, 2004). Finally, staff attitudes toward the new effort, their faith in its value, and their trust that the program will be supported all bear on eventual implementation success (Durlak & DuPre, 2008).
Characteristics of the clientele receiving services include considerations of those who will eventually receive the service or program. Even the most effective program will not succeed if it conflicts with the culture, faith, or beliefs of the consumers for whom it is intended (Feldstein & Glasgow, 2008). Patient values and preferences will determine if they are willing to participate in interventions proposed on their behalf. Culture may trump evidence in the ultimate test of successful implementation. Members of cultural groups that have suffered historic disparities may not trust the program or its purveyors and may refuse to engage in services they did not have a say in developing (Dovidio et al., 2008).
Characteristics of the organization providing services such as organizational type, leadership styles, organizational climate, and the management processes that support the program or practice all contribute to implementation success (Aarons & Sommerfeld, 2012; Aarons, Sommerfeld, & Walrath-Greene, 2009; Durlak & DuPre, 2008). An adaptive leadership style, together with appropriate decision support systems, middle management support, and administrative supports, has been proposed as increasing the likelihood of successful program implementation (Fixsen et al., 2005; Tabrizi, 2014).
Another major consideration is the importance of change agents or program champions who may be engaged in the implementation process (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004; Rogers, 2003). These individuals believe in the purpose and mission of the EBP that their organization is implementing and can assist in creating the organizational culture and climate conducive to accepting innovation. Finally, adequate staffing patterns and supervision may also impact the successful implementation of new services (Walker et al., 2003), as can larger issues of organizational structure such as identifying lines of authority and accountability (Massey, Armstrong, Boroughs, Henson, & McCash, 2005).
Organizations are embedded in broader communities that influence the implementation of new programs and practices. Thus, characteristics of the community also influence successful implementation. Public policies; local, state, and federal laws and regulations; political climate; and realities of funding may all contribute to the utilization of new programs and services (Chaudoir et al., 2013; Durlak & DuPre, 2008). Legal, political, and human capital are often required to ensure successful implementation, and each EBP brings its own set of political, regulatory, and leadership issues (Isett et al., 2007). Damschroder et al. (2009) include communication and social network channels and the resulting community culture that encourages or discourages adoption of new programs and policies.
Lastly, characteristics of the implementation process itself may influence the eventual success of a new program or practice. Damschroder et al. (2009) suggest at least four considerations in how programs are implemented including the process of planning, engaging, executing, and evaluating programs as they are implemented. Blase, Kiser, and Van Dyke (2013) suggest successful implementation requires consideration of resources, capacity, readiness, and fit as part of the planning and engaging process. As will be discussed later, implementation occurs in stages, with different considerations emerging over time. Much research remains regarding how to move programs optimally into practice. Crucial questions also remain regarding how much each of these domains weighs in the implementation of new programs and where scarce resources should be placed to maximally encourage successful program innovation.
Fidelity and Adaptation of EBPs
Given the many barriers to successful implementation, an overarching concern is what must be done to address these challenges to ensure that programs are implemented successfully. Successful program implementation demands a balance between maintaining the fidelity of the program and allowing program adaptations that are required to overcome any barriers to its successful use. The challenge is to resolve the tension between fidelity and fit. This tension deals with the match between programs as developed and the needs, interests, and concerns of populations in the community and may include the degree to which efforts account for cultural, community, and family standards and expectations (Lieberman et al., 2011).
Fidelity has been variously labeled as integrity, implementation fidelity, and treatment fidelity (Allen, Shelton, Emmons, & Linnan, 2018; Carroll et al., 2007; Dane & Schneider, 1998) and defined as the extent to which a program or innovation is implemented as it was originally designed or intended (Allen et al., 2018; Carroll et al., 2007; Durlak & DuPre, 2008). It involves attention to measuring and maintaining the elements of a program or practice that are critical for programmatic impact as the program is brought into the community setting (Bond, Evans, Salyers, Williams, & Kim, 2000; Bruns, 2008; Center for Substance Abuse Treatment, 2007).
The conceptualization and operationalization of fidelity has evolved to include five core elements: (1) adherence, (2) dose or exposure, (3) quality of delivery, (4) participant responsiveness, and (5) program differentiation (Allen et al., 2018; Durlak & DuPre, 2008) (see Table 2).
Adherence refers to the degree to which a program or practice was implemented consistent with the structure, components, and procedures under which it was designed (Carroll et al., 2007). For example, if a substance abuse prevention program delivered in a classroom setting required the teacher to implement the curriculum based on a weekly schedule utilizing an adult learning model, utilizing a biweekly schedule without the adult learning model would reflect poor program adherence.
Dose or exposure refers to the degree to which the amount of a program participants receive matches the program model as designed (Durlak & DuPre, 2008). While dose in medical terminology is readily defined, in behavioral health settings, dose may correspond to appropriate exposure to program elements, the duration of the program as it was originally prescribed, or even the number of therapeutic sessions attended (Baldwin, Johnson, & Benally, 2009). In an evaluation of a school-based intervention program, Yampolskaya, Massey, and Greenbaum (2006) measured dose as time spent in hours in academic and behavioral programming.
Quality of delivery is the manner in which the implementer (e.g., teacher, clinician, or staff) delivers a program or practice (Allen et al., 2018). This can include how well an implementer answers questions or addresses concerns and how knowledgeable they are of the program model and curriculum. This element is often measured through observation by a trained rater, based on components included in a fidelity measure or checklist. For example, raters observing a classroom-based substance abuse prevention program may be interested in observing and rating a teacher’s clarity of instruction on how to complete a marijuana myth-busting assignment.
Participant responsiveness refers to how engaged and responsive a participant is to a program or practice as well as their level of understanding of program materials or the importance of a practice (e.g., deep breathing or adherence to medication) (Allen et al., 2018; Durlak & DuPre, 2008). Although much emphasis has been put on the examination of adherence and dosage, achieving high levels of adherence can be influenced by other elements like participant responsiveness (Carroll et al., 2007) and may not always be the most significant predictor of participant outcomes.
Program differentiation refers to components that have been identified as unique to a program, without which programmatic success would be impossible (Allen et al., 2018). The identification of the critical common elements of a program or intervention constitutes and defines the program (Chorpita, Daleiden, & Weisz, 2005). Program differentiation may also be important for evaluations of new interventions in order to identify components of the program that are essential for positive outcomes (Carroll et al., 2007). While some researchers suggest that all core elements of fidelity are equally important, others argue those implementing need to prioritize the elements based on the intervention, its purpose, and the resources and personnel that are available (Allen et al., 2018; Harn, Parisi, & Stoolmiller, 2013).
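As a concrete illustration, the five core elements might be combined into a single composite score during a fidelity assessment. The sketch below is a minimal, hypothetical example: the element ratings, the 0.0–1.0 scale, and the equal-weighting default are assumptions for illustration, not features of any published fidelity instrument.

```python
# Hypothetical sketch of a composite fidelity score across the five core
# elements. Ratings, scale, and weights are illustrative assumptions only.

def fidelity_score(ratings, weights=None):
    """Combine element ratings (each 0.0-1.0) into a weighted composite."""
    elements = ["adherence", "dose", "quality_of_delivery",
                "participant_responsiveness", "program_differentiation"]
    if weights is None:
        weights = {e: 1.0 for e in elements}  # equal weighting by default
    total_weight = sum(weights[e] for e in elements)
    return sum(ratings[e] * weights[e] for e in elements) / total_weight

session = {
    "adherence": 0.9,                   # followed the schedule as designed
    "dose": 0.8,                        # e.g., 8 of 10 planned hours delivered
    "quality_of_delivery": 0.7,         # observer checklist rating
    "participant_responsiveness": 0.6,  # engagement rating
    "program_differentiation": 1.0,     # all unique core components present
}

print(round(fidelity_score(session), 2))  # → 0.8 (mean of the five ratings)
```

In practice, an evaluation team might pass non-uniform weights to reflect the prioritization of elements discussed above (e.g., weighting adherence more heavily for a tightly manualized intervention).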
Fidelity and Outcomes
There is significant evidence supporting the relationship between fidelity and participant outcomes (c.f. Carroll et al., 2007; Durlak & DuPre, 2008), and a thorough evaluation of fidelity is integral to understanding why an intervention succeeds or fails. If fidelity is not monitored and evaluated, it may not be possible to determine if the failure of an intervention is related to poor implementation, the shortcomings of the intervention itself (labeled as a “type III error”), or other ancillary variables (Allen et al., 2018; Carroll et al., 2007; Harn et al., 2013). The emphasis on fidelity has resulted in numerous attempts to identify critical elements and standards of programs and to conduct fidelity assessments to measure the degree to which programs maintain these standards (c.f. Deschênes, Clark, & Herrygers, 2008; Hernandez, Worthington, & Davis, 2005). For example, Pullmann, Bruns, and Sather (2013) developed a fidelity index that assessed the degree to which providers followed the essential principles of wraparound in their service delivery. The index assesses the degree to which critical components of wraparound such as family participation, strength-based approaches, and cultural competence are present in therapeutic encounters. Thus, fidelity has become the cornerstone of effective implementation (Lendrum, Humphrey, & Greenberg, 2016).
Balanced against the concern for program fidelity is the need for EBPs to fit the communities where they are implemented. This contrasting perspective may be characterized as the relevance of the program for the community and the realities not only of resources and capacity but also of culture, family and community preferences, and acceptance by professionals who recognize the unique characteristics and needs of their consumers. Not all EBPs are necessarily developed for members of specific communities, nor are all proven interventions appropriate for all communities in need of services. In efforts to ensure the internal validity of research studies, interventions are developed and tested on narrowly defined, homogeneous populations.
The emphasis on internal validity, a critical concern for the development of evidence-based research, comes at the expense of external validity and the effectiveness of interventions across populations (Green, 2008; Green & Glasgow, 2006; Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001). Thus, one difficulty rests with establishing a match between the program developed for a narrow, specifically defined clientele and the diverse clientele residing in the community. A second difficulty rests with the match between the EBPs’ programmatic requirements and the needs, capacity, and constraints operating in community service agencies. Community organizations may simply not have the resources to provide an EBP under the same conditions or at the same level of intensity as the program was developed. Adaptations are then necessary in order to provide an intervention that is effective at the local level (Castro, Barrera Jr, & Martinez Jr, 2004; Harn et al., 2013).
Adaptations can be defined as modifications or changes made to an EBP in order to serve the needs of a particular setting or to increase the fit of a program to a target population. Adaptations typically take place during the adoption and implementation of the intervention. They improve a program’s fit and compatibility with a new setting and the needs of the individual(s) and population(s) of interest (Carvalho et al., 2013; Rabin & Brownson, 2018; Stirman, Miller, Toder, & Calloway, 2013). Client and provider attributes (e.g., language, cultural norms, understanding of the EBP) may also be taken into consideration to enhance the fit between the EBP and consumers (Cabassa & Baumann, 2013).
For example, a study in Zambia sought to adapt adult trauma-focused cognitive behavioral therapy (TF-CBT) for use with children and adolescents. Murray et al. (2013) discovered it was critical to work collaboratively with local stakeholders and counselors in order to create culturally responsive and high-fidelity adaptations to increase “fit” and acceptability of the intervention. The collaborative process by which TF-CBT was selected and adapted assisted in creating strong buy-in from the local community, including the support and recommendation of the Ministry of Health in Zambia (Murray et al., 2013).
Tension exists in the research community over the competing ideas of fidelity versus adaptation (Castro et al., 2004; Morrison et al., 2009). While some argue adaptations are essential in order to meet the needs of a particular setting, others argue a program that has been adapted will be significantly less effective when compared to the original program (Carvalho et al., 2013; Castro et al., 2004; Chambers & Norton, 2016). This distinction rests with the emphasis on ensuring the effectiveness of an intervention under clearly specified conditions versus the emphasis on generalizability and effectiveness in less consistent, real-world settings. While adaptations may threaten internal validity, the intent is to improve external validity and thus enhance outcomes for program participants in the real world (Baumann, Cabassa, & Stirman, 2018).
To address the issues associated with adaptations and fidelity, it is important for consumers (e.g., schools, clinicians, mental health organizations) to identify the core components or “active ingredients” (Chorpita et al., 2005; Harn et al., 2013) of a program or practice in order to preserve them during the adaptation process. Once these core components are defined, frameworks, such as the Interactive Systems Framework (Wandersman et al., 2008), the Modification Framework (Stirman et al., 2013), or the Adaptome data platform (Chambers & Norton, 2016), can assist in monitoring adaptations to ensure critical components are left unchanged. If significant program modification does occur, then it is incumbent on implementers to conduct rigorous outcome evaluations in order to assess the possible impact the changes may have on intended outcomes (Carvalho et al., 2013) (for a more comprehensive discussion of managing adaptations and fidelity, c.f. Cabassa & Baumann, 2013; Castro et al., 2004; Chambers & Norton, 2016; Lee, Altschul, & Mowbray, 2008; Stirman et al., 2013; Wandersman et al., 2008).
The question remains as to whether a program reaching optimal fidelity would be sufficient to obtain significant outcomes (Chambers & Norton, 2016). More research is needed to identify the appropriate balance between fidelity and adaptation.
Stages of Implementation
Given the tension between program fidelity and community fit, a natural question is how the implementation process might work. In human service settings, practitioners are usually the ones who deliver a new intervention. As a result, innovations have to be built into the practice of thousands of practitioners in multiple organizations that operate under different regulations (e.g., state and federal) and contexts (Fixsen, Blase, Naoom, & Wallace, 2009). It has been suggested the ultimate success of a program and its sustainability (described below) will be largely dependent on laying an appropriate foundation for change (Adelman & Taylor, 2003).
To assist in building innovations into community settings, researchers have proposed several models of implementation that emphasize the implementation process as occurring in stages (Aarons, Hurlburt, & Horwitz, 2011; Fixsen et al., 2005, 2009). The EPIS model (exploration, adoption/preparation, implementation, sustainment) is an example of a four-stage model whose stages span outer (e.g., sociopolitical) and inner (e.g., organizational) contexts (Aarons et al., 2011). To provide a concrete example of implementation stages, we review another four-stage model, the National Implementation Research Network’s (NIRN) model, which includes exploration, installation, initial implementation, and full implementation (Fixsen et al., 2005; National Implementation Research Network, 2015).
The National Implementation Research Network (NIRN) Model
The first stage in the NIRN model is exploration. Exploration begins when an organization, community, or an individual within an organization/community decides to make use of a new program or practice. The purpose of this stage is to explore the potential fit between the community and the EBP, the needs of the community, the requirements of the EBP, and the amount of community resources needed and available in order to implement the new program. This stage helps determine whether the organization should proceed with the innovation or not. A critical question in this stage is the degree of an organization’s readiness for implementation. Research has shown that taking time for exploration and planning saves time and money and can increase the likelihood of success (Fixsen et al., 2005; National Implementation Research Network, 2015; Saldana, Chamberlain, Wang, & Hendricks Brown, 2012).
The second stage of implementation in the NIRN model is installation. During installation, the resources and structural supports needed to assist the implementation of an EBP are procured. Resources can include selecting staff, finding sources for training and coaching and providing the initial training for staff, ensuring location/space (e.g., classroom or office space) and access to materials or equipment (e.g., computer or projector), finding or developing fidelity tools, and identifying funding streams and human resource strategies. This is the stage where a community or organization prepares their staff for the new innovation (Fixsen et al., 2005; National Implementation Research Network, 2015).
The third stage is initial implementation. This stage involves using the new EBP for the first time. Often referred to as the “initial awkward stage” of implementation, this is where practitioners become familiar with the new program or practice (Fixsen et al., 2005). It also happens to be the most delicate stage of implementation, because organizations and practitioners are changing their normal, comfortable routines and must resist the urge to revert to old routines. In order to sustain these changes in a practitioner’s routine, it is essential to establish external supports (e.g., coaches, implementation teams, or leadership) at the practice, organization, and system levels (National Implementation Research Network, 2015).
The final stage in the NIRN model is full implementation. Full implementation is achieved when the new ways of providing services have become standard practice with practitioners, staff, and organizational leaders. Concomitant changes in policies and procedures also are standardized. At this point, the anticipated benefits of an EBP are realized, with staff and practitioners skilled in the procedures of their new routine. Achieving and sustaining full implementation is an arduous process and may be enabled by the success of the preceding stages (National Implementation Research Network, 2015). However, research has shown that success in early stages of implementation may not always guarantee full implementation (Abdinnour-Helm, Lengnick-Hall, & Lengnick-Hall, 2003).
One of the main benefits from adhering to a theoretical model or conceptual framework is it allows consumers and researchers to plan for potential barriers and recognize the facilitators of implementation before resources and time are depleted. More examples and information on other models of implementation are found elsewhere (c.f. Aarons et al., 2011; Damschroder et al., 2009; Rogers, 2003; Saldana, 2014; Saldana et al., 2012).
Sustainability of EBPs
Once a program is in place, the question becomes how to sustain it. Sustainability concerns the continuity and maintenance of programs after implementation and must be a major consideration of IS. Sustainability may be broadly defined to encompass several aspects of the continuity of an EBP, including maintaining procedural processes, commitments, and financing (Fixsen et al., 2005); obtaining resources; gaining visibility, status, and organizational place (Massey et al., 2005); and supporting the continued benefits and positive outcomes of the program effort (Moore, Mascarenhas, Bain, & Straus, 2017). Sustainability may be best thought of as a continuation of the implementation process, where the emphasis shifts from putting a program into place to maintaining the program through ongoing adaptation and continuous quality improvement efforts (Chambers, Glasgow, & Stange, 2013).
While there have been major advances in understanding the adoption, integration, and implementation of EBPs, program sustainability is not always adequately considered (Shelton, Cooper, & Stirman, 2018). This lack of attention not only leads to economic and resource losses from wasted effort but also limits the likelihood of successful improvements. EBPs that are discontinued or deserted can result in lower levels of buy-in when a new EBP is proposed for an organization/community and limit the trust that individuals place in research and organizations that conduct research (Shelton et al., 2018).
A number of challenges exist to the sustainability of even well-implemented programs. For example, a systematic review examining the sustainability of health interventions implemented in sub-Saharan Africa found that weak health systems, lack of financial leadership, lack of a consistent workforce, and social and political climates limited an organization’s ability to build capacity and sustain interventions (Iwelunmor et al., 2016).
Those who implement EBPs frequently fail to consider the ongoing changes that happen within communities and organizations (Chambers et al., 2013). Prevention programs implemented within a community or organization evolve over time due to changes and level of understanding of staff (i.e., buy-in), feedback from the community or organization, and improvement in the quality of delivery (Shelton et al., 2018). Consistent with the implementation process, research suggests, among other factors, successful sustainability requires modifiable programs, internal champions, readily perceived benefits, and adequate funding and infrastructure support (Hunter, Han, Slaughter, Godley, & Garner, 2017; Scheirer, 2005). It is also critical to ensure all the important stakeholders are included in the sustainability planning. For example, failing to include the individuals who deliver the practice or program (e.g., clinicians or teachers) may lead to issues with long-term buy-in (Cooper, Bumbarger, & Moore, 2015).
Planning for sustainability should be an ongoing discussion that begins in the initial exploration stage. Starting early allows time to be dedicated to planning for long-term financing, commitment and organizational support, training and coaching for the workforce, and ongoing evaluation and monitoring (Chambers et al., 2013).
Implications for Behavioral Health
IS has clearly defined the difficulties of bringing programs of proven efficacy into the community where they may serve the public interest. For the researcher, it is clear that simply developing a program with the expectation that it will be readily adopted in the field is naïve. While preliminary studies may focus narrowly on exemplary conditions to demonstrate that an intervention is efficacious, it behooves the researcher to move into the community to assess its effectiveness as well.
For the practitioner in the field, there is an opportunity to collaborate in identifying the critical components of interventions and matching those demands to the needs and characteristics of the organization, the community, and the clientele for whom the program is intended. This bi-directional effort linking practitioner and researcher not only strengthens the development of programs and their relevance for the community but also helps identify and build the conditions under which new programs may be maximally effective.
A collaborative process can be established by which consumers, families, practicing clinicians, communities, and cultures develop common agendas for improving service outcomes and actively participate in all stages of program development and implementation (Baumbusch et al., 2008; Gonzales, Ringeisen, & Chambers, 2002; Green, 2008; Hoagwood et al., 2001; McDonald & Viehbeck, 2007). Models for this approach include community-based participatory research (CBPR), which departs from traditional applied research paradigms and incorporates community partnership and action-oriented approaches into behavioral health research (Minkler & Wallerstein, 2013).
In addition, IS training efforts that prepare researchers for program development and implementation may also benefit from expanded opportunities to work in community settings. For example, service-learning placements that put researchers in the settings where programs are implemented offer valuable training for expanding the implementation process and strengthening cooperation between program implementers and program users (Burton, Levin, Massey, Baldwin, & Williamson, 2016).
The push for policy and regulations requiring EBPs across multiple health services, lack of buy-in from health practitioners, and poor methods for disseminating evidence remain critical contributors to the research-to-practice gap. Estimates suggest it can take up to 17 years for EBPs to make their way from research to practice (Green, Ottoson, Garcia, & Hiatt, 2009). IS addresses this gap by assisting researchers and communities in translating research into real-world practice and by identifying the implementation factors essential for consistent, sufficient, and effective use of EBPs. IS is thus an essential driver for ensuring that effective and efficacious programs and practices are delivered, yielding significant health benefits for the diverse populations and communities requiring behavioral health services.
References
Aarons, G. A. (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74. https://doi.org/10.1023/B:MHSR.0000024351.12294.65
Aarons, G. A. (2006). Transformational and transactional leadership: Association with attitudes toward evidence-based practice. Psychiatric Services, 57(8), 1162–1169. https://doi.org/10.1176/ps.2006.57.8.1162
Aarons, G. A., Hurlburt, M., & Horwitz, S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health, 38(1), 4–23. https://doi.org/10.1007/s10488-010-0327-7
Aarons, G. A., & Sommerfeld, D. H. (2012). Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry, 51(4), 423–431. https://doi.org/10.1016/j.jaac.2012.01.018
Aarons, G. A., Sommerfeld, D. H., & Walrath-Greene, C. M. (2009). Evidence-based practice implementation: The impact of public versus private sector organization type on organizational support, provider attitudes, and adoption of evidence-based practice. Implementation Science, 4, 83. https://doi.org/10.1186/1748-5908-4-83
Abdinnour-Helm, S., Lengnick-Hall, M. L., & Lengnick-Hall, C. A. (2003). Pre-implementation attitudes and organizational readiness for implementing an Enterprise Resource Planning system. European Journal of Operational Research, 146(2), 258–273. https://doi.org/10.1016/S0377-2217(02)00548-9
Adelman, H. S., & Taylor, L. (2003). On sustainability of project innovations as systemic change. Journal of Educational and Psychological Consultation, 14(1), 1–25. https://doi.org/10.1207/S1532768XJEPC1401_01
Allen, J. D., Shelton, R. C., Emmons, K. M., & Linnan, L. A. (2018). Fidelity and its relationship to implementation, effectiveness, adaptation, and dissemination. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (2nd ed., pp. 267–284). New York, NY: Oxford University Press.
APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285. https://doi.org/10.1037/0003-066x.61.4.271
Baldwin, J. A., Johnson, J. L., & Benally, C. C. (2009). Building partnerships between indigenous communities and universities: Lessons learned in HIV/AIDS and substance abuse prevention research. American Journal of Public Health, 99(Suppl 1), S77–S82. https://doi.org/10.2105/ajph.2008.134585
Baumann, A. A., Cabassa, L. J., & Stirman, S. W. (2018). Adaptation in dissemination and implementation science. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (2nd ed., pp. 285–300). New York, NY: Oxford University Press.
Baumbusch, J. L., Kirkham, S. R., Khan, K. B., McDonald, H., Semeniuk, P., Tan, E., & Anderson, J. M. (2008). Pursuing common agendas: A collaborative model for knowledge translation between research and practice in clinical settings. Research in Nursing and Health, 31(2), 130–140. https://doi.org/10.1002/nur.20242
Blase, K., Kiser, L., & Van Dyke, M. (2013). The Hexagon Tool: Exploring context. Chapel Hill, NC: National Implementation Research Network, FPG Child Development Institute, University of North Carolina at Chapel Hill. https://www.pbis.org/Common/Cms/files/pbisresources/NIRN-Education-TheHexagonTool.pdf
Bond, G. R., Evans, L., Salyers, M. P., Williams, J., & Kim, H. W. (2000). Measurement of fidelity in psychiatric rehabilitation. Mental Health Services Research, 2(2), 75–87. https://doi.org/10.1023/A:1010153020697
Bruns, E. (2008). Measuring wraparound fidelity. The resource guide to wraparound. Portland, OR: Portland State University, National Wraparound Initiative, Research and Training Center on Family Support and Children’s Mental Health. http://depts.washington.edu/wrapeval/sites/default/files/Bruns-5e.1-%28measuring-fidelity%29.pdf
Burton, D. L., Levin, B. L., Massey, T., Baldwin, J., & Williamson, H. (2016). Innovative graduate research education for advancement of implementation science in adolescent behavioral health. Journal of Behavioral Health Services and Research, 43(2), 172–186. https://doi.org/10.1007/s11414-015-9494-3
Cabassa, L. J., & Baumann, A. A. (2013). A two-way street: Bridging implementation science and cultural adaptations of mental health treatments. Implementation Science, 8, 90. https://doi.org/10.1186/1748-5908-8-90
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40. https://doi.org/10.1186/1748-5908-2-40
Carvalho, M. L., Honeycutt, S., Escoffery, C., Glanz, K., Sabbs, D., & Kegler, M. C. (2013). Balancing fidelity and adaptation: Implementing evidence-based chronic disease prevention programs. Journal of Public Health Management and Practice, 19(4), 348–356. https://doi.org/10.1097/PHH.0b013e31826d80eb
Castro, F. G., Barrera, M., Jr., & Martinez, C. R., Jr. (2004). The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science, 5(1), 41–45. https://doi.org/10.1023/B:PREV.0000013980.12412.cd
Center for Substance Abuse Treatment. (2007). Understanding evidence-based practices for co-occurring disorders: Overview paper 5 (DHHS Publication No. (SMA) 07–4278). Rockville, MD: Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, Center for Substance Abuse Treatment. http://purl.access.gpo.gov/GPO/LPS84453
Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8, 117. https://doi.org/10.1186/1748-5908-8-117
Chambers, D. A., & Norton, W. E. (2016). The adaptome: Advancing the science of intervention adaptation. American Journal of Preventive Medicine, 51(4 Suppl 2), S124–S131. https://doi.org/10.1016/j.amepre.2016.05.011
Chaudoir, S. R., Dugan, A. G., & Barr, C. H. (2013). Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science, 8, 22. https://doi.org/10.1186/1748-5908-8-22
Chorpita, B. F., Daleiden, E. L., & Weisz, J. R. (2005). Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research, 7(1), 5–20. https://doi.org/10.1007/s11020-005-1962-6
Cooper, B. R., Bumbarger, B. K., & Moore, J. E. (2015). Sustaining evidence-based prevention programs: Correlates in a large-scale dissemination initiative. Prevention Science, 16(1), 145–157. https://doi.org/10.1007/s11121-013-0427-1
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50. https://doi.org/10.1186/1748-5908-4-50
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45. https://doi.org/10.1016/S0272-7358(97)00043-3
Deschênes, N., Clark, H. B., & Herrygers, J. (2008). The development of fidelity measures for youth transition programs. In C. Newman, C. Liberton, K. Kutash, & R. M. Friedman (Eds.), The 20th annual research conference proceedings: A system of care for children’s mental health: Expanding the research base (pp. 333–338). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, Research and Training Center for Children’s Mental Health. http://rtckids.fmhi.usf.edu/rtcconference/proceedings/20thproceedings/20thChapter10.pdf
Dovidio, J. F., Penner, L. A., Albrecht, T. L., Norton, W. E., Gaertner, S. L., & Shelton, J. N. (2008). Disparities and distrust: The implications of psychological processes for understanding racial disparities in health and health care. Social Science and Medicine, 67(3), 478–486. https://doi.org/10.1016/j.socscimed.2008.03.019
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350. https://doi.org/10.1007/s10464-008-9165-0
Feldstein, A. C., & Glasgow, R. E. (2008). A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Joint Commission Journal on Quality and Patient Safety, 34(4), 228–243. https://doi.org/10.1016/S1553-7250(08)34030-6
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540. https://doi.org/10.1177/1049731509335549
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network. https://nirn.fpg.unc.edu/sites/nirn.fpg.unc.edu/files/resources/NIRN-MonographFull-01-2005.pdf
Gonzales, J. J., Ringeisen, H. L., & Chambers, D. A. (2002). The tangled and thorny path of science to practice: Tensions in interpreting and applying “evidence”. Clinical Psychology: Science and Practice, 9(2), 204–209. https://doi.org/10.1093/clipsy/9.2.204
Green, L. W. (2008). Making research relevant: If it is an evidence-based practice, where’s the practice-based evidence? Family Practice, 25(Suppl 1), i20–i24. https://doi.org/10.1093/fampra/cmn055
Green, L. W., & Glasgow, R. E. (2006). Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation and the Health Professions, 29(1), 126–153. https://doi.org/10.1177/0163278705284445
Green, L. W., Ottoson, J. M., Garcia, C., & Hiatt, R. A. (2009). Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annual Review of Public Health, 30, 151–174. https://doi.org/10.1146/annurev.publhealth.031308.100049
Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly, 82(4), 581–629. https://doi.org/10.1111/j.0887-378X.2004.00325.x
Guevara, M., & Solomon, E. (2009). Implementing evidence-based policy and practice in community corrections. Washington, DC: U.S. Department of Justice, National Corrections Institute. https://s3.amazonaws.com/static.nicic.gov/Library/024107.pdf
Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79(3), 181–193. https://doi.org/10.1177/001440291307900204
Hernandez, M., Worthington, J., & Davis, C. S. (2005). Measuring the fidelity of service planning and delivery to system of care principles: The system of care practice review (SOCPR) (Making children’s mental health services successful series; FMHI Publication #223-1). Tampa, FL. http://rtckids.fmhi.usf.edu/rtcpubs/SOCPR/SOCPR-Monograph.pdf
Hoagwood, K., Burns, B. J., Kiser, L., Ringeisen, H., & Schoenwald, S. K. (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179–1189. https://doi.org/10.1176/appi.ps.52.9.1179
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41(1), 3–21. https://doi.org/10.1016/s0022-4405(02)00141-3
Hunter, S. B., Han, B., Slaughter, M. E., Godley, S. H., & Garner, B. R. (2017). Predicting evidence-based treatment sustainment: Results from a longitudinal study of the Adolescent-Community Reinforcement Approach. Implementation Science, 12(1), 75. https://doi.org/10.1186/s13012-017-0606-8
Isett, K. R., Burnam, M. A., Coleman-Beattie, B., Hyde, P. S., Morrissey, J. P., Magnabosco, J., … Goldman, H. H. (2007). The state policy context of implementation issues for evidence-based practices in mental health. Psychiatric Services, 58(7), 914–921. https://doi.org/10.1176/ps.2007.58.7.914
Iwelunmor, J., Blackstone, S., Veira, D., Nwaozuru, U., Airhihenbuwa, C., Munodawafa, D., … Ogedegebe, G. (2016). Toward the sustainability of health interventions implemented in sub-Saharan Africa: A systematic review and conceptual framework. Implementation Science, 11, 43. https://doi.org/10.1186/s13012-016-0392-8
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York, NY: Oxford University Press.
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63(3), 146–159. https://doi.org/10.1037/0003-066x.63.3.146
Lee, S. J., Altschul, I., & Mowbray, C. T. (2008). Using planned adaptation to implement evidence-based programs with new populations. American Journal of Community Psychology, 41(3–4), 290–303. https://doi.org/10.1007/s10464-008-9160-5
Lehman, W. E., Greener, J. M., & Simpson, D. D. (2002). Assessing organizational readiness for change. Journal of Substance Abuse Treatment, 22(4), 197–209. https://doi.org/10.1016/S0740-5472(02)00233-7
Lendrum, A., Humphrey, N., & Greenberg, M. (2016). Implementing for success in school-based mental health promotion: The role of quality in resolving the tension between fidelity and adaptation. In R. H. Shute & P. T. Slee (Eds.), Mental health and wellbeing through schools: The way forward (pp. 53–63). London: Routledge.
Lieberman, R., Zubritsky, C., Martínez, K., Massey, O. T., Fisher, S., Kramer, T., … Obrochta, C. (2011, February). Issue brief: Using practice-based evidence to complement evidence-based practice in children’s behavioral health. Atlanta, GA: ICF Macro, Outcomes Roundtable for Children and Families. http://cfs.cbcs.usf.edu/_docs/publications/OutcomesRoundtableBrief.pdf
Massey, O. T., Armstrong, K., Boroughs, M., Henson, K., & McCash, L. (2005). Mental health services in schools: A qualitative analysis of challenges to implementation, operation, and sustainability. Psychology in the Schools, 42(4), 361–372. https://doi.org/10.1002/pits.20063
McDonald, P. W., & Viehbeck, S. (2007). From evidence-based practice making to practice-based evidence making: Creating communities of (research) and practice. Health Promotion Practice, 8(2), 140–144. https://doi.org/10.1177/1524839906298494
Minkler, M., & Wallerstein, N. (2013). Introduction to community-based participatory research: New issues and emphases. In M. Minkler & N. Wallerstein (Eds.), Community-based participatory research for health: From process to outcomes (5th ed., pp. 5–24). San Francisco, CA: Jossey-Bass. http://rbdigital.oneclickdigital.com
Moore, J. E., Mascarenhas, A., Bain, J., & Straus, S. E. (2017). Developing a comprehensive definition of sustainability. Implementation Science, 12(1), 110. https://doi.org/10.1186/s13012-017-0637-1
Morrison, D. M., Hoppe, M. J., Gillmore, M. R., Kluver, C., Higa, D., & Wells, E. A. (2009). Replicating an intervention: The tension between fidelity and adaptation. AIDS Education and Prevention, 21(2), 128–140. https://doi.org/10.1521/aeap.2009.21.2.128
Murray, L. K., Dorsey, S., Skavenski, S., Kasoma, M., Imasiku, M., Bolton, P., … Cohen, J. A. (2013). Identification, modification, and implementation of an evidence-based psychotherapy for children in a low-income country: The use of TF-CBT in Zambia. International Journal of Mental Health Systems, 7(1), 24. https://doi.org/10.1186/1752-4458-7-24
National Implementation Research Network. (2015). Implementation stages. [Web page]. Chapel Hill, NC: University of North Carolina, Chapel Hill, FPG Child Development Institute, The National Implementation Research Network. https://nirn.fpg.unc.edu/learn-implementation/implementation-stages
Nilsen, P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10, 53. https://doi.org/10.1186/s13012-015-0242-0
Office of the Surgeon General. (1999). Mental health: A report of the Surgeon General. Rockville, MD: U. S. Department of Health and Human Services, U.S. Public Health Service. https://profiles.nlm.nih.gov/ps/retrieve/ResourceMetadata/NNBBHS
President’s New Freedom Commission on Mental Health. (2003). Achieving the promise: Transforming mental health care in America: Final report (DHHS publication no.). Rockville, MD: U. S. Department of Health and Human Services. http://www.eric.ed.gov/PDFS/ED479836.pdf
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health, 36(1), 24–34. https://doi.org/10.1007/s10488-008-0197-4
Pullmann, M. D., Bruns, E. J., & Sather, A. K. (2013). Evaluating fidelity to the wraparound service model for youth: Application of item response theory to the Wraparound Fidelity Index. Psychological Assessment, 25(2), 583–598. https://doi.org/10.1037/a0031864
Rabin, B. A., & Brownson, R. C. (2018). Terminology for dissemination and implementation research. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (2nd ed., pp. 19–45). New York, NY: Oxford University Press.
Rogers, E. M. (2003). Diffusion of innovations. New York, NY: Free Press.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71
Saldana, L. (2014). The stages of implementation completion for evidence-based practice: Protocol for a mixed methods study. Implementation Science, 9(1), 43. https://doi.org/10.1186/1748-5908-9-43
Saldana, L., Chamberlain, P., Wang, W., & Hendricks Brown, C. (2012). Predicting program start-up using the stages of implementation measure. Administration and Policy in Mental Health, 39(6), 419–425. https://doi.org/10.1007/s10488-011-0363-y
Scheirer, M. A. (2005). Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation, 26(3), 320–347. https://doi.org/10.1177/1098214005278752
Shelton, R. C., Cooper, B. R., & Stirman, S. W. (2018). The sustainability of evidence-based interventions and practices in public health and health care. Annual Review of Public Health, 39, 55–76. https://doi.org/10.1146/annurev-publhealth-040617-014731
Stichter, J. P., Herzog, M. J., Owens, S. A., & Malugen, E. (2016). Manualization, feasibility, and effectiveness of the School-Based Social Competence Intervention for Adolescents (SCI-A). Psychology in the Schools, 53(6), 583–600. https://doi.org/10.1002/pits.21928
Stirman, S. W., Miller, C. J., Toder, K., & Calloway, A. (2013). Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science, 8, 65. https://doi.org/10.1186/1748-5908-8-65
Tabrizi, B. (2014, October). The key to change is middle management. Harvard Business Review. [Web page]. https://hbr.org/2014/10/the-key-to-change-is-middle-management
Titler, M. G., Everett, L. Q., & Adams, S. (2007). Implications for implementation science. Nursing Research, 56(4 Suppl), S53–S59. https://doi.org/10.1097/01.NNR.0000280636.78901.7f
Urban, J. B., & Trochim, W. (2009). The role of evaluation in research – Practice integration working toward the “golden spike”. American Journal of Evaluation, 30(4), 538–553. https://doi.org/10.1177/1098214009348327
Walker, J. S., Koroloff, N. M., Schutte, K., & Portland State University. Research and Training Center on Family Support and Children’s Mental Health. (2003). Implementing high-quality collaborative individualized service/support planning: Necessary conditions. Portland, OR: Research and Training Center on Family Support and Children’s Mental Health, Portland State University. https://nwi.pdx.edu/NWI-book/Chapters/App-6f-Individualized-Service-And-Support%20Planning.pdf
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., … Saul, J. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41(3–4), 171–181. https://doi.org/10.1007/s10464-008-9174-z
Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-based research – “Blue Highways” on the NIH roadmap. JAMA, 297(4), 403–406. https://doi.org/10.1001/jama.297.4.403
Yampolskaya, S., Massey, O. T., & Greenbaum, P. E. (2006). At-risk high school students in the “Gaining Early Awareness and Readiness” Program (GEAR UP): Academic and behavioral outcomes. Journal of Primary Prevention, 27(5), 457–475. https://doi.org/10.1007/s10935-006-0050-z
© 2020 Springer Nature Switzerland AG
Massey, O.T., Vroom, E.B. (2020). The Role of Implementation Science in Behavioral Health. In: Levin, B.L., Hanson, A. (eds) Foundations of Behavioral Health. Springer, Cham. https://doi.org/10.1007/978-3-030-18435-3_5
Print ISBN: 978-3-030-18433-9
Online ISBN: 978-3-030-18435-3