Introduction

Exposure to potentially traumatic events is highly prevalent among children and youth (i.e., individuals ages 2–17) (Copeland et al. 2007; McLaughlin et al. 2013; Finkelhor et al. 2009; Hillis et al. 2016). For example, global prevalence estimates of violent victimization (e.g., physical violence, emotional violence, sexual violence) are at least 50%, with 1 billion children and youth experiencing past-year violent victimization (Hillis et al. 2016). Only 6–20% of these youth experience symptoms that qualify them for a formal diagnosis of Post-Traumatic Stress Disorder (PTSD); however, many others report a broad array of emotional, behavioral, and functional difficulties that seriously impede the attainment of normal developmental milestones and can endure into adulthood if not proactively addressed (Felitti et al. 1998; Anda et al. 2006; Kahana et al. 2006; Grasso et al. 2015). Furthermore, disparities exist in youth trauma outcomes, with rates of PTSD diagnosis as high as 50% in under-resourced communities (Horowitz et al. 2005).

Fortunately, a number of psychosocial interventions have proven effective for treating trauma-related difficulties experienced by children and youth. Systematic reviews have indicated that certain trauma-focused interventions, most notably those involving cognitive-behavioral therapy (CBT), are linked with positive outcomes (e.g., pre- to post-treatment decline in post-traumatic stress and other trauma-related symptoms) (Silverman et al. 2008; Gillies et al. 2012; Dorsey et al. 2017). In general, these evidence-based trauma-focused interventions involve common elements, some of which can be difficult to deliver, such as psychoeducation, management of stress-related symptoms, trauma narration (gradual exposure), imaginal or in vivo exposure to trauma reminders, and cognitive restructuring of maladaptive thoughts (Amaya-Jackson and DeRosa 2007; Dorsey et al. 2011). Descriptive information and research evidence for trauma-focused interventions are summarized on several websites [e.g., those maintained by the National Child Traumatic Stress Network (NCTSN; https://www.nctsn.org/) and the California Evidence-Based Clearinghouse for Child Welfare (CEBC; http://www.cebc4cw.org/)]. Much like other effective practices in child and adolescent behavioral health (Garland et al. 2010; Kohl et al. 2009; Raghavan et al. 2010; Zima et al. 2005), trauma-focused interventions are underutilized, and even when organizations and systems adopt them, implementation challenges can limit their effectiveness (Allen and Johnson 2012; Powell et al. 2013a).

A thorough understanding of the factors that facilitate or impede effective implementation and the attainment of key implementation outcomes (e.g., adoption, fidelity, penetration, and sustainment) is needed to improve child and family outcomes and optimize the public health impact of trauma-focused interventions. A number of studies have sought to understand barriers and facilitators to implementing evidence-based interventions (Addis et al. 1999; Raghavan et al. 2007; Cook et al. 2009; Forsner et al. 2010; Rapp et al. 2010; Stein et al. 2013; Beidas et al. 2016b; Powell et al. 2013a, 2013b, 2017a), and several conceptual frameworks in the field of implementation science have proposed an array of potential barriers and facilitators across levels (e.g., intervention, individual, team, organization, system, policy) and phases of implementation (e.g., exploration, preparation, implementation, and sustainment) (Aarons et al. 2011; Cane et al. 2012; Damschroder et al. 2009; Flottorp et al. 2013). These empirical and conceptual contributions highlight targets for implementation strategies that can promote the effective integration of evidence-based interventions into community settings (Powell et al. 2015). However, there has not yet been a systematic assessment of determinants for implementing evidence-based psychosocial interventions to address trauma-related symptoms in children, youth, and families.

Purpose and Contribution of this Review

The aim of the current study is to systematically review and summarize empirical studies that identify determinants (i.e., barriers and facilitators) of implementing evidence-based psychosocial interventions that address trauma-related symptoms in children, youth, and families. The purpose of this review is twofold. First, it is intended to inform efforts to implement trauma-focused interventions in community settings by helping relevant stakeholders to anticipate and address barriers and to leverage facilitators, thereby improving implementation and clinical outcomes. Second, this review will inform a research agenda on the implementation of trauma-focused interventions for children, youth, and families by summarizing current knowledge of barriers and facilitators at different levels and across phases of implementation, and by suggesting how future studies might address gaps in the current evidence base.

Guiding Conceptual Frameworks

There are an increasing number of relevant conceptual frameworks that can guide implementation research and practice (Strifler et al. 2018; Tabak et al. 2012). These frameworks serve three main purposes: (1) to facilitate the identification of potential determinants of implementation; (2) to outline processes by which these determinants may be addressed; and (3) to suggest implementation outcomes (Proctor et al. 2011) that serve as indicators of implementation success, proximal indicators of implementation processes, and key intermediate outcomes in relation to service system or clinical outcomes in effectiveness and quality of care research (Nilsen 2015). In this review, we draw upon two frameworks that meet these three main purposes.

The Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (Aarons et al. 2011) was selected to guide the assessment of determinants and implementation processes, because it was developed to inform implementation efforts in public service sectors (e.g., public mental health and child welfare services) and has been used frequently within the field of child and adolescent mental health as well as other formal health care settings (Moullin et al. 2019). The EPIS model provides useful guidance for identifying key determinants and processes within the course of an implementation effort, as it specifies determinants that are internal and external to an organization (inner context and outer context) across the different phases of implementation (exploration, preparation, implementation, and sustainment). EPIS also acknowledges the recursive nature of implementation processes, as organizations and systems may reach one phase (e.g., implementation or sustainment) and then return to a prior phase (e.g., to explore need for clinical intervention adaptation or new services) (Becan et al. 2018). Accordingly, we used EPIS to identify determinants of implementing evidence-based, trauma-focused interventions across the four phases of implementation.

The Implementation Outcomes Framework (Proctor et al. 2011) outlines eight key intermediate outcomes that can serve as indicators of implementation success: acceptability, appropriateness, feasibility, adoption, penetration, fidelity, costs, and sustainment. As we identified determinants in relevant articles, we sought to ensure that each determinant had an explicit or implicit connection to one or more of these implementation outcomes. For example, clinicians’ previous negative experiences with an intervention may reduce acceptability and adoption of that intervention.

Methods

The methods described here were pre-registered on PROSPERO, an international database of protocols for systematic reviews in health and social care (Powell et al. 2017c).

Data Sources and Searches

We searched CINAHL, MEDLINE (via PubMed), and PsycINFO using terms related to trauma, children and youth, psychosocial interventions, and implementation (Online Appendix I) to identify English-language peer-reviewed journal articles published prior to May 17, 2017 that present original research related to the implementation of evidence-based trauma-focused interventions primarily targeting children and youth.
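To make the structure of the search concrete, the sketch below shows how a multi-concept strategy of this kind is typically assembled, with each concept's synonyms joined by OR and the concepts intersected by AND. The terms shown are hypothetical stand-ins, not the registered strategy, which appears in Online Appendix I.

```python
# Illustrative only: hypothetical stand-in terms, not the registered
# search strategy reported in Online Appendix I.
CONCEPT_BLOCKS = {
    "trauma": ["trauma*", "PTSD", "posttraumatic stress"],
    "children and youth": ["child*", "adolescen*", "youth"],
    "psychosocial intervention": ["psychotherap*", "cognitive behavioral therapy", "TF-CBT"],
    "implementation": ["implement*", "dissemin*", "adopt*"],
}

def build_boolean_query(blocks):
    """Join each concept's synonyms with OR, then intersect the concepts with AND."""
    groups = []
    for terms in blocks.values():
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

print(build_boolean_query(CONCEPT_BLOCKS))
# (trauma* OR PTSD OR "posttraumatic stress") AND (child* OR ...) AND ...
```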

Study Selection

Titles and abstracts of identified articles were independently reviewed by two members of the study team, and full-texts of potentially relevant articles were retrieved. If the reviewers disagreed about the potential relevance of an article, we took the conservative approach of pulling the full-text for review. We also hand-searched the reference lists of dually excluded articles that appeared likely to include relevant studies (e.g., systematic reviews) and retrieved relevant full-texts. Full-texts of potentially relevant studies were independently reviewed by two members of the study team. At this level of review, conflicts were resolved through discussion until consensus was reached.

This review focused on interventions, identified as well-established by Dorsey et al. (2017), for children and youth experiencing emotional or behavioral difficulties related to trauma. Criteria for well-established interventions included efficacy demonstrated either by:

  1. statistically significant superiority to pill placebo, psychological placebo, or other active treatment; or

  2. equivalence to an already established treatment;

in at least two independent research settings by two independent research teams, as well as various methodological criteria (i.e., randomized controlled design; use of treatment manuals or their equivalent; treatment of specified problems in a population meeting inclusion criteria; use of reliable and valid outcome measures; and appropriate analyses with a sample size sufficient to detect effects).

Studies of well-established interventions were included if they related an implementation determinant to an implementation outcome (e.g., the impact of staffing or funding on feasibility or sustainability). Determinants were identified according to the EPIS model (Aarons et al. 2011). Outcomes of interest were those included in Proctor et al.'s (2011) taxonomy of implementation outcomes. Specific inclusion criteria are listed in Table 1; inclusion criteria were intentionally broad with respect to study design and research methods to ensure that we could characterize the level of evidence for specific determinants of implementing trauma-focused interventions.

Table 1 Inclusion criteria

Data Extraction and Analysis

Data analysis was driven by a primarily deductive approach guided by qualitative content analysis as described by Forman and Damschroder (2007), which unfolds over three phases: immersion, reduction, and interpretation. In the immersion phase, researchers engaged with the data, reading and re-reading included articles to obtain a sense of “the whole.”

In the reduction phase, the results sections of the included studies were coded based on the EPIS framework using Dedoose mixed methods analysis software (version 7.6.22) to extract any relevant data for analysis (Aarons et al. 2011). Three modifications were made to EPIS for the purposes of this review, with approval from the framework’s developer (G.A.): (1) fidelity monitoring and support could be coded for the inner or outer context; (2) any factor could be coded for any phase of implementation; and (3) implementation determinants that were identified but that did not fit into the factors specified by EPIS were coded as ‘other.’ Each excerpt was coded for the relevant phase of implementation and at least one determinant; a single excerpt could be coded for multiple determinants. A subset of included articles was identified for pilot data abstraction and coding by the research team. The results of this pilot round were discussed to ensure interrater agreement and minimize conflicts. For the remaining included articles, initial data abstraction and coding were verified by a second researcher, with any conflicts resolved through discussion until consensus was reached.
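One way to picture the resulting coding scheme (a minimal sketch under our reading of the procedure; the record fields and example codes are ours, and Dedoose's internal data model is not implied) is as excerpt records that each carry one implementation phase and one or more determinant codes, from which coding densities like those in Fig. 2 can be tallied:

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedExcerpt:
    """Hypothetical representation of a coded excerpt; field names are illustrative."""
    article_id: str
    phase: str                                              # one EPIS phase per excerpt
    determinants: List[str] = field(default_factory=list)   # one or more determinant codes

excerpts = [
    CodedExcerpt("Lang2015", "Implementation",
                 ["outer: fidelity monitoring and support"]),
    CodedExcerpt("Murray2013b", "Implementation",
                 ["outer: client perception", "inner: staffing"]),
]

# Tally (phase, determinant) pairs, i.e., the kind of coding density shown in Fig. 2.
density = Counter((e.phase, d) for e in excerpts for d in e.determinants)
for (phase, determinant), n in density.most_common():
    print(f"{phase:<15} {determinant:<45} {n}")
```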

After applying the codebook to all included articles, implementation determinants coded as ‘other’ were further classified using an inductive approach. The emergent factors were compared to implementation determinants included in the Consolidated Framework for Implementation Research (CFIR) and defined accordingly (Damschroder et al. 2009). All excerpts were then rearranged into code reports to facilitate in-depth exploration of each phase (exploration, preparation, implementation, and sustainment), level (inner and outer context), and construct within the EPIS framework (and its extensions via inductive coding).

Finally, during the interpretation phase, descriptive and interpretive summaries of the data were written that included the main points from the report, sample quotations, and an interpretive narrative.

Quality Assessment

The quality of included studies was assessed using the Mixed Methods Appraisal Tool (MMAT), which provides a single scoring guide across qualitative, quantitative, and mixed methods studies (Pluye et al. 2011). Quality assessment was conducted based on coded data rather than overall study data (e.g., if we only coded qualitative findings from a mixed methods study because there were no quantitative data related to implementation, we only assessed the qualitative aspects of the study design). When multiple methods (i.e., qualitative and quantitative) were used to collect relevant data but their use was independent and not considered “mixed,” quality assessment scores were based on items for the qualitative and appropriate quantitative components without incorporating items for mixed methods studies (Palinkas et al. 2011). Single studies represented in multiple published articles were considered together and assigned a single quality score. Possible quality scores are 25%, 50%, 75%, or 100%, with studies meeting all relevant methodological requirements receiving 100% (total scores for included studies are reported in Table 2, and whether studies met each relevant methodological requirement is reported in Online Appendix II). Each study was initially assessed by one researcher, with all quality assessment scores then verified by a second researcher. Conflicts were resolved through discussion until consensus was reached.
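For readers unfamiliar with the MMAT, the sketch below illustrates the scoring arithmetic under the assumption, per the 2011 version of the tool, that four design-specific criteria apply to each study; the example inputs are hypothetical:

```python
def mmat_score(criteria_met):
    """Quality score as the percentage of the four design-specific
    methodological criteria met (2011 MMAT), yielding 25% steps."""
    if len(criteria_met) != 4:
        raise ValueError("the 2011 MMAT applies four criteria per study design")
    return round(100 * sum(criteria_met) / 4)

# A hypothetical qualitative study meeting three of four criteria scores 75%.
print(mmat_score([True, True, True, False]))  # 75
```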

Table 2 Characteristics of included studies

Results

After initial screening (n = 1393) and full-text review (n = 207), 23 articles were included for data abstraction and coding (Fig. 1). Table 2 lists the characteristics of the included studies. Most studies assessed implementation determinants in community-based mental health settings within the United States; four studies implementing trauma-focused care outside of the United States were included. All included studies examined the implementation of either trauma-focused CBT (TF-CBT; n = 20) or Cognitive Behavioral Intervention for Trauma in Schools (CBITS; n = 3). Most studies focused on determinants during the implementation phase, followed by studies of determinants in the preparation phase; determinants in the exploration and sustainment phases were less common (Fig. 2). We limit our discussion of results to the eight most commonly coded determinants. The quality of the included studies based upon the MMAT varied, with four studies rated at 25%, ten at 50%, eight at 75%, and one at 100%. When study quality was deemed to be low, however, it was largely a result of incomplete reporting of methods rather than methods that were deemed inadequate (Online Appendix II).

Fig. 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram. Our search identified 2029 records, of which 23 articles were included

Fig. 2 Density of codes by EPIS phase and factor. Coding density is shown by factor for each phase of implementation; darker shades indicate a higher density of coding

Outer Context

A total of 70 excerpts from 19 of the included articles were coded for outer context implementation determinants. A majority of these were discussed as part of the implementation phase (74%), followed by the sustainment and preparation phases (13% and 10%, respectively). We summarize findings regarding external fidelity monitoring and support and two determinants stemming from the outer context’s ‘other’ category (client perception and patient needs and resources). The remaining excerpts were coded for sociopolitical context (7 excerpts from 3 articles), interorganizational networks (6 excerpts from 3 articles), funding (6 excerpts from 6 articles), external leadership (5 excerpts from 2 articles), and public-academic collaboration (4 excerpts from 2 articles). No excerpts were coded for client advocacy or intervention developers.

Fidelity Monitoring and Support

This code was applied to excerpts exploring the relationship between external support targeting clinician knowledge of and fidelity to the intervention and implementation outcomes. Ultimately, 15 excerpts from 7 articles were coded as outer context fidelity monitoring and support (Cohen et al. 2016; Ebert et al. 2012b; Gleacher et al. 2011; Lang et al. 2015; Morsette et al. 2012; Nadeem et al. 2011; Sabalauskas et al. 2014).

Data collected from clinicians, staff, and administrators revealed that logistical and clinical supports, hosted in external training and learning collaborative environments by groups other than the intervention developers, generally facilitated implementation. Initial training alone was found to be insufficient in one study, with clinicians and staff expressing a desire for ongoing training and oversight (Sabalauskas et al. 2014). This was echoed by participants in another study, who reported that periodic consultation and site visits were among the most important components of their learning collaborative experience (Lang et al. 2015). One study found that Plan-Do-Study-Act (PDSA) cycles were particularly useful for administrators and supervisors, and less so for clinicians (Ebert et al. 2012b). Improvement metrics (e.g., supervision time spent on TF-CBT, adherence to the treatment model) were useful to administrators and senior leaders, but not to supervisors and clinicians (Ebert et al. 2012b; Lang et al. 2015). For clinicians, one study found intervention checklists to be helpful (Nadeem et al. 2011). Although such approaches to fidelity monitoring and support were generally found to facilitate implementation, the overall time and resources required for various stakeholders to engage in ongoing external supports may serve as a barrier to maximizing their benefits (Gleacher et al. 2011).

Client Perception

Excerpts coded as outer context ‘other’ were further categorized as client perception when stakeholder beliefs about a specific intervention, or about clinical treatment more generally, were related to an implementation outcome, such as the appropriateness of the intervention. Ultimately, 15 excerpts from 8 articles were coded as client perception (Dorsey et al. 2014; Hanson et al. 2014; Murray et al. 2013b, 2014; Nadeem et al. 2011; Nadeem and Ringle 2016; Self-Brown et al. 2016; Wenocur et al. 2016).

Studies of caregivers found that care-seeking and continued engagement in treatment were influenced by previous experiences accessing mental health services and by fit between the family and clinician (Dorsey et al. 2014; Self-Brown et al. 2016; Wenocur et al. 2016). In one study, caregivers’ perceptions of an evidence-based practice’s (EBP) appropriateness influenced their decision to initiate treatment (Murray et al. 2013b). National TF-CBT trainers also raised concerns about caregivers’ perceptions influencing engagement in treatment (Hanson et al. 2014). In a study of CBITS implementation, engaging parents prior to implementation was considered critical to successful delivery of care in schools, but lack of parent engagement in treatment remained a barrier (Nadeem et al. 2011; Nadeem and Ringle 2016). In Zambia, clinicians attributed poor TF-CBT session attendance to families’ familiarity with, and preference for, briefer treatments comprising fewer sessions (Murray et al. 2014).

Patient Needs and Resources

Excerpts coded as outer context ‘other’ were further categorized as patient needs and resources when patient or caregiver characteristics affected treatment engagement and were related to an implementation outcome, such as fidelity to the intervention. Ultimately, 15 excerpts from 8 articles were coded for patient needs and resources (Dorsey et al. 2014; Hoagwood et al. 2007; Murray et al. 2013a, b, 2014; Self-Brown et al. 2016; Wenocur et al. 2016; Woods-Jaeger et al. 2017).

Multiple studies reported logistical barriers that influenced caregiver engagement in treatment, such as limited availability of appointment times and inconvenient appointment locations that were incompatible with caregiver schedules and access to transportation (Dorsey et al. 2014; Murray et al. 2014; Self-Brown et al. 2016; Wenocur et al. 2016). In Zambia, similar logistical barriers were addressed in several ways: shortened sessions were still offered to clients who arrived late, while fewer, longer sessions could be scheduled for clients who had to travel longer distances (Murray et al. 2013a). Limited financial resources were another factor that influenced engagement in treatment (Murray et al. 2013b, 2014; Woods-Jaeger et al. 2017; Self-Brown et al. 2016). Lay counselors in Kenya and Tanzania noticed that their clients were distracted by hunger and recognized the benefits of a referral network of organizations that could help address economic needs outside of treatment; still, one counselor offered snacks to clients prior to their sessions (Woods-Jaeger et al. 2017). One study reported deviating from manual-based interventions to address other patient needs such as client comorbidities and family crises (Hoagwood et al. 2007).

Inner Context

A total of 80 excerpts from 20 of the included articles were coded for inner context implementation determinants. Just over half of these were discussed as part of the implementation phase (53%), followed by the preparation and sustainment phases (24% and 20%, respectively). We summarize findings regarding organizational characteristics, individual adopter characteristics, internal fidelity monitoring and support, staffing, and one determinant stemming from the inner context’s ‘other’ category (adaptability). The remaining excerpts were coded for internal leadership (7 excerpts from 4 articles), innovation-values fit (5 excerpts from 3 articles), and ‘other’ (3 excerpts from 2 articles).

Organizational Characteristics

This code was applied to excerpts exploring the relationship between organizational characteristics such as structure, climate, receptive context, absorptive capacity, and readiness for change and implementation outcomes. Ultimately, 13 excerpts from 8 articles were coded for organizational characteristics (Ebert et al. 2012b; Gleacher et al. 2011; Jensen-Doss et al. 2008; Lang et al. 2015; Murray et al. 2013b; Nadeem et al. 2011; Nadeem and Ringle 2016; Wenocur et al. 2016).

Barriers related to absorptive capacity (especially an organization’s ability to use new knowledge) and receptive context (most often an organization’s ability to minimize competing demands) were common (Ebert et al. 2012b; Jensen-Doss et al. 2008; Lang et al. 2015; Murray et al. 2013b; Nadeem et al. 2011; Nadeem and Ringle 2016; Wenocur et al. 2016). Clinicians reported that the time demands of attending training, meeting productivity requirements, and incorporating new approaches to assessment and treatment were barriers to implementation and sustainment (Ebert et al. 2012b; Jensen-Doss et al. 2008; Lang et al. 2015). Two studies suggested that insufficient organizational capacity to meet patients’ demands led to long waiting lists and decreased treatment initiation and completion (Murray et al. 2013b; Wenocur et al. 2016). Perceived capacity within an organization to implement change was sometimes used as a screening tool to select organizations into interventions (Gleacher et al. 2011). A study on scaling up a school-based intervention suggested that organizational culture and climate were critical to implementation success; in particular, leadership support to build staff buy-in for the EBP, dedicated time, and physical space to support the new practice were facilitators of successful implementation (Nadeem et al. 2011). Having an implementation team in place, comprised of individuals within the organization, was considered one of the most important facilitators of implementation in one study (Ebert et al. 2012b).

Individual Adopter Characteristics

This code was applied to excerpts exploring the relationship between the goals, perceived need to change, and attitudes towards the intervention at the individual level within organizations, and implementation outcomes. Ultimately, 29 excerpts from 14 articles were coded for individual adopter characteristics (Allen et al. 2012; Allen and Johnson 2012; Allen et al. 2014; Beidas et al. 2016a; Cohen et al. 2016; Hanson et al. 2014; Hoagwood et al. 2007; Jensen-Doss et al. 2008; Lang et al. 2015; Morsette et al. 2012; Murray et al. 2014; Nadeem et al. 2011; Nadeem and Ringle 2016; Sigel et al. 2013).

Individuals’ attitudes towards the innovation and perceived need for change influenced decisions to adopt and undergo training, as well as participation in training, in several studies (Nadeem et al. 2011; Murray et al. 2014; Sigel et al. 2013). A strong belief that the innovation was appropriate yet flexible to the context was an adoption driver in one international study (Murray et al. 2014). In a study examining the implementation of CBITS over a four-year period, clinicians with positive attitudes about the EBP, due to positive clinical experiences or improved patient outcomes, were more likely to sustain the practice (Nadeem and Ringle 2016). Several studies suggested that clinician and supervisor buy-in improved with training and as they gained experience with the treatment (Hoagwood et al. 2007; Morsette et al. 2012; Jensen-Doss et al. 2008; Beidas et al. 2016a; Lang et al. 2015; Allen et al. 2014).

In a study examining perceived implementation challenges from the perspective of 19 national TF-CBT trainers, trainers expressed concerns that clinicians’ beliefs about the intervention and their skill levels affected implementation fidelity (Hanson et al. 2014). Further, one study observed implementation challenges stemming from supervisors’ negative perceptions of protocols and clinicians’ attitudes towards manualized treatment (Hoagwood et al. 2007). Clinicians’ orientation prior to training, as well as their experience with the intervention, may affect their perceptions of the intervention’s value, their buy-in, and implementation fidelity (Jensen-Doss et al. 2008; Allen et al. 2012). While one study found no association between clinicians’ professional discipline, age, or years of experience and implementation of all components of TF-CBT, another study reported that fully licensed clinicians trained in TF-CBT were more likely to complete the model with fidelity than non-licensed providers (Allen et al. 2012; Cohen et al. 2016).

Fidelity Monitoring and Support

This code was applied to excerpts exploring the relationship between internal support targeting clinician knowledge of and fidelity to the intervention and implementation outcomes. Ultimately, 8 excerpts from 6 articles were coded for inner context fidelity monitoring and support (Ebert et al. 2012b; Hoagwood et al. 2007; Murray et al. 2013b, 2014; Nadeem et al. 2011; Nadeem and Ringle 2016).

Having an internal fidelity support system in place facilitated clinician buy-in and increased acceptability of the EBP (Hoagwood et al. 2007; Nadeem et al. 2011; Nadeem and Ringle 2016; Murray et al. 2013b). One study used clinical outcome data to gain continued financial support for program sustainability and, later, program expansion (Nadeem et al.). EBP fidelity was positively affected by supportive coaching, supervision, and monitoring of clinical outcome data (Ebert et al. 2012b; Murray et al. 2013b, 2014).

Staffing

This code was applied to excerpts exploring the relationship between hiring, retaining, or replacing employees and implementation outcomes. Ultimately, 10 excerpts from 7 articles were coded for staffing (Ebert et al. 2012b; Hoagwood et al. 2007; Lang et al. 2015; Murray et al. 2013b; Nadeem et al. 2011; Nadeem and Ringle 2016; Wenocur et al. 2016).

Organizational restructuring required to deliver trauma-focused interventions and employee turnover were common challenges to implementation (Ebert et al. 2012b; Murray et al. 2013b; Lang et al. 2015). In Zambia, community volunteers who assessed and referred potential clients were not formally contracted and would sometimes stop working or become unreachable (Murray et al. 2013b). A study in the United States found that issues related to funding contributed to employee turnover (Hoagwood et al. 2007). Senior leaders in one study noted that turnover was particularly concerning with regard to loss of investment in training (Lang et al. 2015). However, one study found that clinicians who changed schools, added a school to their caseload, or experienced a change in school administration did not continue offering the intervention (Nadeem and Ringle 2016). Other studies noted that organizations needed to hire and train more clinicians as demand for treatment increased: a homeless shelter that implemented TF-CBT planned to hire additional clinicians, while a group that implemented CBITS in schools required the schools to begin providing their own clinicians (Wenocur et al. 2016; Nadeem et al. 2011).

Adaptability

Excerpts coded as inner context ‘other’ were further categorized as adaptability when stakeholder perceptions of an intervention’s ability to be modified to meet local needs were related to an implementation outcome, such as adoption of the intervention. Ultimately, 11 excerpts from 5 articles were coded as adaptability (Morsette et al. 2012; Murray et al. 2013a, 2014; Nadeem et al. 2011; Woods-Jaeger et al. 2017).

Three studies discussed the need for cultural adaptations to evidence-based trauma-focused care. In Zambia, TF-CBT was selected for implementation by a group of stakeholders who believed that the core components were appropriate for the local culture but that elements such as examples and activities needed to be modified to be more relevant (Murray et al. 2013a). Lay counselors who were trained in the intervention reported liking both its structure and its flexibility (Murray et al. 2014). This sentiment was echoed by lay counselors in a study of TF-CBT implementation in Kenya and Tanzania, who stressed the importance of connecting the skills taught to cultural norms (Woods-Jaeger et al. 2017). Other modifications made to the delivery of TF-CBT included engaging more family members and sending text messages to remind and encourage clients to stay engaged (Murray et al. 2013a). In the United States, CBITS was adapted for American Indian youth; specifically, tribal elders and healers were invited to participate in the initial treatment session by presenting American Indian perspectives on trauma, and in the final treatment session by conducting ceremonies based on traditional healing practices (Morsette et al. 2012). Another study of CBITS, conducted in Louisiana after Hurricane Katrina, identified contextual modifications as paramount to successful implementation; these included addressing both the broader mental health needs and the limited resources and capacity of a community recovering from disaster (Nadeem et al. 2011).

Discussion

To our knowledge, this study is the first to systematically review the empirical literature on determinants of implementing evidence-based, trauma-focused interventions for children and youth. Systematic reviews of determinants are increasingly common, as they are a means of consolidating the literature in a specific clinical area and alerting stakeholders to potential determinants that they may face in implementation research or practice (Barnett et al. 2018; Pomey et al. 2013; Tricco et al. 2015; Vest et al. 2010). Understanding the contexts in which services are provided is fundamental to improving the quality of trauma-focused care. As noted by Hoagwood and Kolko (2009), “It is difficult and perhaps foolhardy to try to improve what you do not understand. Implementation of effective services in the absence of knowledge about the contexts of their delivery is likely to be impractical, inefficient, and costly” (p. 35). Given the high rates at which children and youth are exposed to trauma (Copeland et al. 2007; McLaughlin et al. 2013; Finkelhor et al. 2009; Hillis et al. 2016), the availability of evidence-based interventions to address trauma-related symptoms (Dorsey et al. 2017), and the scope of efforts to disseminate and implement trauma-focused interventions nationally (Amaya-Jackson et al. 2018; Ebert et al. 2012a), it is critical to take stock of what we currently know about implementation determinants.

Despite few studies having the explicit objective of assessing determinants of implementing trauma-focused interventions and their impact on implementation outcomes, the results of this systematic review highlight the complexity of implementation, with important determinants identified at multiple levels and phases of implementation. Each of the determinants identified is a potential target for implementation strategies (Baker et al. 2015; Powell et al. 2015, 2017a), though some are likely to be more malleable than others. Some determinants may also be more readily addressed by certain types of stakeholders. For example, client-level determinants such as financial insecurity may be difficult for clinicians and organizations to address, and determinants related to the financing of EBPs might be best addressed by policymakers and system leaders. In fact, the goals of effective implementation and sustainment are more likely to be achieved when strong system-level financing strategies are in place (Jaramillo et al. 2018). While there is some evidence that implementation strategies that are prospectively tailored to address determinants are more effective than those that are not (Baker et al. 2015), there is also evidence to suggest that a one-time assessment of determinants prior to an implementation effort may not be sufficient, as determinants are likely to change throughout the implementation process (Wensing 2017). The multilevel, multiphase determinants identified in this review underscore the importance of an ongoing approach to assessing determinants and suggest that implementation strategies may need to be adaptively tailored throughout the implementation process (Powell et al. 2019b). Thus, efforts to prepare organizational leaders and clinicians to apply implementation strategies that match the needs of their organization are essential (Amaya-Jackson et al. 2018; Powell et al. 2019a).

The preponderance of evidence for the influence of implementation determinants in this review is descriptive and based upon qualitative data. Qualitative methods are particularly well-suited to capturing contextual factors (QUALRIS 2018) and the exploratory nature of many of these studies is consistent with the developmental stage of implementation science (Chambers 2012). However, it is also important to move beyond lists of potential determinants and to seek a more robust understanding of causality in the field (Lewis et al. 2018a; Williams and Beidas 2018). To develop a richer understanding of how determinants interact to promote or inhibit implementation, it is recommended that future studies (1) engage a wide range of stakeholders to ensure that their vantage points are represented (Chambers and Azrin 2013); (2) apply well-established conceptual frameworks and theories that can promote comparability across studies (Birken et al. 2017; Proctor et al. 2012); (3) use psychometrically and pragmatically strong measures of implementation determinants (Glasgow and Riley 2013; Lewis et al. 2018b; Powell et al. 2017d; Stanick et al. 2018); and (4) leverage methods that can capture complexity and elucidate causal pathways through which determinants operate to influence implementation and clinical outcomes, including mixed methods (Aarons et al. 2012; Palinkas et al. 2011) and systems science approaches (Hovmand 2014; Zimmerman et al. 2016; Burke et al. 2015).

Consistent with prior research using the EPIS framework, few studies examined the exploration and sustainment phases (Novins et al. 2013; Moullin et al. 2019). The lack of focus on the early and late phases of implementation is problematic, as we have much to learn about the factors that influence clinicians’, organizations’, and systems’ readiness to implement new innovations (Weiner et al. 2008; Weiner 2009) as well as their ability to sustain them over time (Schell et al. 2013; Luke et al. 2014). One way of encouraging research on the earlier and later phases of implementation is through the use of process models and measures, such as the EPIS model and the Stages of Implementation Completion measure, that explicitly address all phases of implementation (Aarons et al. 2011; Saldana 2014). The National Center for Child Traumatic Stress (NCCTS) has articulated implementation science-informed elements of its learning collaborative model across phases of the EPIS model, encouraging attention to each phase from exploration to (planning for) sustainment (Amaya-Jackson et al. 2018). Similarly, the NCTSN has developed functional and translational products that address the need to focus on all phases of implementation, such as a guide for senior leaders that facilitates consideration of factors related to fidelity and sustainment in the early phases of implementation (Landsverk 2012; NCTSN 2015, 2017; Agosti et al. 2016).

Finally, while nearly half of excerpts focused on client-level determinants, these determinants are not represented in detail in many of the leading implementation determinant frameworks (Nilsen 2015), such as EPIS (Aarons et al. 2011) and CFIR (Damschroder et al. 2009). The field of implementation science has primarily focused on provider-level and organizational-level change, and client-level determinants have largely been the focus of clinical intervention developers. There is an opportunity for implementation researchers and practitioners to begin to more thoroughly assess and address client-level determinants, integrate those factors into prevailing conceptual frameworks, and draw upon the existing body of client-engagement research (McKay et al. 2004; Gopalan et al. 2010) more deliberately and consistently.

Limitations and Strengths

A few limitations are worth noting. First, given the quantity, quality, and nature of the included studies, we can say little about which determinants influenced specific implementation outcomes. Such aggregation and more precise linking of determinants to implementation outcomes may be facilitated by coalescing around common conceptual frameworks and theories, as well as by improving methods for assessing and prioritizing determinants (as described above). Second, for efficiency, our data extraction and quality assessment processes were not conducted by two independent researchers; instead, one researcher coded the data and a second reviewed the results to verify accuracy of interpretation. Third, our approach to quality assessment, chosen because it is flexible and yields a single metric for comparing studies of heterogeneous designs, may not have been optimal for every study design included. Additionally, it is important to reiterate that the low quality ratings for many studies were due to incomplete reporting and therefore may or may not reflect methodological shortcomings.

Despite its limitations, this study employed a rigorous review approach: it adhered to a pre-registered protocol, engaged in a comprehensive systematic search of the literature that built upon previous reviews of evidence-based psychosocial treatments for trauma, and relied upon a theory-driven approach guided by widely used determinant and process frameworks (Aarons et al. 2011; Moullin et al. 2019; Proctor et al. 2011).

Conclusion

This study represents the first systematic review of determinants of implementing evidence-based psychosocial interventions for children and youth who experience symptoms as a result of trauma exposure. It advances the field by presenting multilevel and multiphase targets for intervention, allowing stakeholders engaged in implementation efforts to anticipate potential challenges and leverage points. Furthermore, this review suggests that, although the assessment of implementation determinants has become commonplace, we have much to learn about how to pragmatically assess and prioritize determinants; how they interact to influence implementation and clinical outcomes; and how to design, select, and tailor implementation strategies to address them.