Training community providers in evidence-based practices (EBPs) is a critical public health concern (Institute of Medicine 2015). Traditional formats such as one-time workshops are insufficient for creating sustained change in provider practices (Herschell et al. 2010). Therefore, the field is turning toward more comprehensive training models that combine didactic training and supervised or coached practice.

A model popular in medicine is the learning collaborative (LC) approach (for a review see Wells et al. 2017). The goal of LCs is to attain rapid, measurable, and sustainable improvements in practice within a system of agencies. LCs vary in structure, but typically involve bringing together multidisciplinary teams from several provider agencies for a series of workshops focused on establishing provider competency in a practice and ensuring that agency leaders are prepared to support implementation (Nadeem et al. 2013). Workshops are separated by action periods, during which teams return to their home agencies to practice the targeted EBP, with ongoing support from experts through consultation calls. LCs are used throughout the world, including in over 35 states within the United States (Nadeem et al. 2014). They have gained traction in mental health; the Substance Abuse and Mental Health Services Administration (SAMHSA)'s National Child Traumatic Stress Network (NCTSN) has recommended LCs for improving trauma-informed care for youth (Ebert et al. 2012; Hanson et al. 2016), and initial data suggest these efforts have been successful (Bartlett et al. 2016; Lang et al. 2015).

A challenge of more intensive training models like LCs is maintaining participant engagement. Between 47 and 73% of participants who begin comprehensive trainings drop out or do not complete all training requirements within the training period (Beveridge et al. 2015; Gleacher et al. 2011; Olin et al. 2016). Although trainings following the LC model have higher clinician engagement and program completion than other intensive training models (Nadeem et al. 2016), engagement remains a challenge. For example, research indicates that in NCTSN LCs and other similar comprehensive trauma-focused training efforts, only approximately 40–50% of participants attend all learning sessions (Ebert et al. 2012) or meet consultation call and case presentation requirements (Amaya-Jackson et al. 2018; Fritz et al. 2013; Pemberton et al. 2017).

Prior research has identified organizational and process barriers to agencies and providers agreeing to take part in training, including time, cost, lack of fit with client population, and theoretical objections to EBPs (Damschroder et al. 2009; Olin et al. 2015; Stewart et al. 2017; Stewart et al. 2012). However, we know less about the individual or organizational factors that predict retention in training efforts (Fritz et al. 2013); such information could guide selection of training participants and strategies to enhance engagement.

At the individual level, examinations of clinician demographic (e.g., age, race) and professional (e.g., years of experience) predictors of engagement have yielded few significant findings (Garner et al. 2012; Lyon et al. 2011; Olin et al. 2016). Provider attitudes may be one exception, as some studies indicate that clinicians with more positive attitudes are more likely to participate in consultation calls (Fritz et al. 2013; Nelson et al. 2012). However, other studies have failed to find a relationship between attitudes and training engagement (Lyon et al. 2011; Pemberton et al. 2017), so additional research is needed to clarify these mixed findings.

Another potential individual-level predictor of engagement is previous training or experience using EBPs. That is, providers with previous training and experience with similar interventions might be open to further training and more likely to engage. Conversely, those providers might perceive less benefit to additional training. One study found no relationship between prior clinician training in EBP and provider dropout (Olin et al. 2016). However, another found that clinicians who felt they learned more during an initial workshop were more likely to go on to engage in consultation calls (Pemberton et al. 2017).

Researchers have also identified organizational variables as potential predictors of implementation success (Glisson et al. 2008), which may play a role in training engagement. An organization’s implementation climate and leadership’s commitment to implementation appear important in cultivating positive attitudes toward EBPs (Aarons et al. 2014; Ehrhart et al. 2014), which may subsequently facilitate training engagement. Additionally, agencies that value implementation may also dedicate more resources to training. In contrast, researchers have consistently identified limited time available for training and difficulty identifying appropriate training cases as organizational predictors of dropout (Fritz et al. 2013; Garner et al. 2012; Gleacher et al. 2011). However, a multilevel analysis of predictors of dropout from a statewide training on the Managing and Adapting Practice program found that clinician perception of organizational functioning and the extent to which their agencies had previously taken part in EBP trainings were not related to clinician dropout (Olin et al. 2016).

In sum, despite their strong potential to train clinicians in EBPs, intensive training approaches like LCs have low rates of engagement. Although increased understanding of predictors may inform efforts to improve engagement, few studies have examined a wide range of predictors using valid and reliable measures. Moreover, to date, findings are mixed and many studies rely on post hoc explorations. Thus, the purpose of this study was to address these limitations by examining predictors of engagement in a community-based learning collaborative (CBLC) focused on training in trauma-focused cognitive behavioral therapy (TF-CBT).

As described in more detail below, the CBLC model is an LC approach that engages both clinical and “broker” agencies that provide referral and case management, such as child welfare agencies (Hanson et al. 2016; Saunders and Hanson 2014). The goal of a CBLC is to promote community-wide, sustainable, and coordinated implementation of evidence-based practices by focusing both on training clinicians in TF-CBT and on training brokers to provide case management strategies to support and monitor referrals to TF-CBT. Two recent publications examining outcomes for Project BEST, a statewide CBLC implementing TF-CBT in South Carolina, suggest that the model is associated with increased self-reported use of TF-CBT, decreased barriers to trauma-focused treatment, and improved community collaboration (Hanson et al. 2018, 2019). Project BEST conducted several waves of trainings over a 9-year period; similar to other comprehensive training models, fewer than half of participants completed the training during the earlier rounds, although the last 2 years of training had a 68% completion rate.

Although predictors of engagement in the CBLC model have not been previously examined, two of the engagement studies cited above examined similar clinician-focused TF-CBT trainings consisting of workshops, consultation, and completion of training cases. One examined a small number of individual-level predictors (attitudes toward evidence-based practice, perceived barriers to using TF-CBT, confidence using TF-CBT, perceived in-workshop learning) measured after the initial workshop training (Pemberton et al. 2017). The other only explored participation in consultation calls, utilizing survey data collected after the training ended (Fritz et al. 2013).

Guided by implementation theory (Damschroder et al. 2009; Proctor et al. 2011), a battery of baseline measures was administered before the CBLC to examine predictors of engagement. The focus of this study was on individual and agency factors that could inform either selection of training participants or design of training activities. We examined engagement in the individual CBLC components, which included attending in-person learning sessions, participating in consultation calls, seeing training cases (clinician participants only), and completing all training requirements. At the individual provider level, we hypothesized that higher engagement would be associated with more positive attitudes toward EBP (Aarons 2004) and lower professional burnout (West et al. 2009). We also explored whether self-reported prior use of TF-CBT (Deblinger et al. 2005) and knowledge about TF-CBT (Fitzgerald 2012) were associated with clinician engagement. Regarding organization-level predictors, we hypothesized that engagement would be higher for individuals whose agencies had a stronger implementation climate (Ehrhart et al. 2014), stronger implementation leadership (Aarons et al. 2014), and a more trauma-informed environment (Richardson et al. 2012).

Method

Participants

The study included 283 participants from three CBLCs (Site 1 n = 68, Site 2 n = 118, Site 3 n = 97) conducted in three metropolitan settings in the southern US. Thirty-seven agencies participated in the CBLCs, including 14 clinical agencies (e.g., community mental health agencies), eight broker agencies (e.g., child protective services), and 15 agencies providing both types of services (e.g., child advocacy centers providing assessment, referral, and treatment). Participants included 180 clinicians, 69 brokers, and 34 senior leaders. Senior leaders were individuals in administrative positions, such as clinical directors. Participant characteristics are presented in Table 1.

Table 1 Demographic and professional characteristics of sample

Procedure

The three CBLCs followed the same training procedures, except where noted below. All three were funded by grants from SAMHSA. In two communities, the local child advocacy center coordinated the training, and in the third, the training was offered as part of a SAMHSA system of care project. In all three communities, the grant covered the cost of the trainers and staff time to coordinate and evaluate the trainings, but no other financial support was provided to the agencies, which covered the time for their staff to take part in the training activities. Participating agencies were selected by the coordinators based on their role in providing trauma-focused services in the community and their capacity to take part in the training.

The CBLC aimed to increase community-wide access to TF-CBT by focusing on community collaboration around identification and treatment of youth in need of trauma-focused treatment. One track focused on training clinicians to deliver TF-CBT, and another on training service brokers (e.g., case managers, child welfare system caseworkers). The broker track focused on helping brokers understand TF-CBT and use evidence-based case management procedures, with the goal of increasing screening for, referral to, and ongoing monitoring of TF-CBT by brokers. By including both tracks in a training together, the CBLC aimed to improve the referral stream and collaboration between brokers and clinicians. Both front-line staff and agency administrative leaders (i.e., “senior leaders”) participated in the trainings so that senior leaders could facilitate implementation support within agencies. Clinician and broker supervisors participated together with their supervisees in the clinician and broker tracks, respectively.

Prior to the first training session, all participants were required to complete a free, web-based training (focused on TF-CBT for clinicians and on evidence-based treatment planning for brokers) and they completed baseline study measures. All participants completed the same study measures, with the exception of the Evidence-Based Practices Attitudes Scale (EBPAS), TF-CBT Practices, and TF-CBT Knowledge measures (see “Measures” section below), which were only completed by clinician participants actively treating clients at the beginning of the collaborative.

The CBLC training consisted of two (Sites 2 and 3) or three (Site 1) two-day, in-person workshops or “learning sessions,” which included didactic training and opportunities to practice skills (e.g., role plays) focused on TF-CBT and evidence-based case management procedures. Learning sessions were spaced across the year of training. Between learning sessions, participants were asked to apply TF-CBT techniques in their work and attend group consultation calls, which occurred every 2 weeks for clinicians, and monthly for brokers and senior leaders. Clinicians also enrolled TF-CBT training cases during this time.

To successfully complete the training, broker and senior leader participants were required to attend at least six consultation calls and all learning sessions. Clinician participants were required to participate in at least 12 consultation calls, attend all learning sessions, and complete at least two TF-CBT training cases during the collaborative. Brokers were encouraged to refer cases for treatment by the participating clinicians. Participants were informed of these training requirements, and those who met all requirements were listed in a public roster indicating successful completion of the CBLC as an incentive for participation. All study procedures were approved by the [removed for blind review] Institutional Review Board.

Measures

Provider Demographics and Background Information

Participants completed a brief survey of demographic and professional background information (theoretical orientation, degree, licensure status, years of professional experience, average weekly caseload, professional field).

Evidence-Based Practices Attitudes Scale (EBPAS; Aarons 2004)

The EBPAS is a 15-item, psychometrically valid measure of mental health provider attitudes toward adoption of EBPs (Aarons et al. 2012). Respondents rate their agreement with each item using a 5-point scale ranging from 0 (“not at all”) to 4 (“a very great extent”), yielding a total score and four subscale scores. The subscales measure the extent to which a provider (1) would adopt an EBP if it were intuitively appealing (Appeal), (2) would adopt an EBP if it were required by their supervisor, agency, or state (Requirements), (3) is generally open to trying new interventions (Openness), and (4) perceives EBPs as divergent from usual clinical care (Divergence). Higher scores indicate more positive attitudes for all scales except the Divergence subscale, for which higher scores reflect negative attitudes. In the current study, internal consistency of the EBPAS total scale was excellent (α = .91) and ranged from acceptable to excellent for the subscales (Requirements α = .96; Appeal α = .88; Openness α = .90; Divergence α = .75).

Burnout

Provider professional burnout was measured with two single items, which have been found to perform as well as longer professional burnout scales in predicting outcomes among medical providers (West et al. 2012). Respondents rate how often they experience feelings of emotional exhaustion and depersonalization using a seven-point scale ranging from 0 (“never”) to 6 (“every day”).

TF-CBT Practices Scale

Drawn from the Clinical Practices Questionnaire (Deblinger et al. 2005), the TF-CBT Practices Scale is a 40-item measure of treatment techniques that should be used in TF-CBT; it has detected practice changes in previous TF-CBT CBLCs (Hanson et al. 2019). Respondents are instructed to consider the child trauma-focused cases they have seen in the past three months and indicate for what percentage of these cases they used each technique along a six-point scale (0 = none, 1 = 1–20%, 2 = 21–40%, 3 = 41–60%, 4 = 61–80%, and 5 = 81–100%). The TF-CBT Practices Scale includes six subscales: General Clinical Skills (e.g., agenda setting; sample α = .69), Psychoeducation (α = .81), Coping (e.g., cognitive restructuring; α = .92), Gradual Exposure (α = .92), Personal Safety (e.g., discussing ways to stay safe in the future; α = .88), and Behavior Management (e.g., teaching parents to set up a reward system; α = .91).

Boulder IMPACT TF-CBT Knowledge Survey (Fitzgerald 2012)

The Boulder IMPACT TF-CBT Knowledge Survey consists of 17 multiple-choice items assessing clinician knowledge about TF-CBT and trauma. For scoring, the number of correct answers is summed and divided by 17 to calculate the percentage correct.
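Expressed as a formula, the scoring rule is:

$$\text{Percent correct} = \frac{\text{number of correct answers}}{17} \times 100$$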

Implementation Climate Scale (ICS; Ehrhart et al. 2014, Ehrhart et al. 2016)

The ICS is an 18-item measure of the degree to which an organization is supportive of EBP implementation. Responses are indicated using a five-point scale ranging from 0 (“not at all”) to 4 (“very great extent”). It yields a total score and six subscale scores reflecting theoretically derived dimensions of implementation climate. Internal consistency was excellent for the ICS in the current sample (Total scale α = .94; Focus on EBP α = .96; Educational Support for EBP α = .93; Recognition for EBP α = .89; Rewards for EBP α = .90; Selection for EBP α = .93; Selection for Openness α = .97).

Implementation Leadership Scale (ILS; Aarons et al. 2014)

The ILS uses 12 items to assess strategic leadership of EBP implementation. Using a five-point scale ranging from 0 (“not at all”) to 4 (“very great extent”), respondents indicate the degree to which a leader is proactive, knowledgeable, supportive, and perseverant regarding EBP implementation. Providers who identified as staff completed a version of the ILS in which items were framed to report on their supervisor's implementation leadership (e.g., My supervisor is knowledgeable about evidence-based practice). Providers who identified as supervisors completed a version with language adapted to report about themselves (e.g., I am knowledgeable about evidence-based practice). Given that both versions of the ILS contained identical questions with very slight wording changes, supervisor and staff versions were grouped together for analyses in the present study. The total scale and subscales of this combined ILS demonstrated excellent internal consistency (Total scale α = .95; Proactive scale α = .94; Knowledgeable scale α = .98; Supportive scale α = .95; Perseverant scale α = .97).

Trauma-Informed System Change Instrument (TISCI; Richardson et al. 2012)

The TISCI measures the degree to which an agency is “trauma-informed” at the organizational and individual levels. It consists of 19 items rated on a five-point scale ranging from 1 (“not at all true for my agency/me”) to 5 (“completely true for my agency/me”) and has four subscales. Agency Policy assesses the degree to which agency, local, state, and federal policy supports trauma-informed practice. Agency Practice measures the degree to which agency procedures are designed to support trauma-informed care. Integration assesses the degree to which the respondent’s own practices are trauma-informed. The fourth scale, Openness, includes items adapted from EBPAS; as the EBPAS was also included in this study, this scale was not used in the current analyses. The Agency Policy, Agency Practice, and Integration scales demonstrated good internal consistency in this sample (α’s = .91, .91, .87, respectively).

Engagement

Four engagement variables were coded to map onto the CBLC training requirements. Since the number of learning sessions required varied across CBLCs and participants were required to attend all learning sessions, learning session attendance was coded as 1 = attended all learning sessions, 0 = missed some or all sessions. Next, because the number of required calls varied by training track, consultation call attendance was coded as the percent of required consultation calls attended. For clinician participants, we also recorded the number of training cases. Finally, for all participants, a variable was created indicating whether or not they completed all of the training requirements.
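For concreteness, the coding rules above can be written out in a few lines. The sketch below is illustrative only; all column names (sessions_attended, calls_required, track, and so on) are hypothetical placeholders rather than fields from the study's actual records.

```python
# A minimal sketch of the four engagement variables described above.
import pandas as pd

df = pd.read_csv("attendance_records.csv")  # hypothetical input file

# Learning sessions: 1 = attended all required sessions, 0 = missed some or all
df["attended_all_sessions"] = (
    df["sessions_attended"] >= df["sessions_required"]
).astype(int)

# Consultation calls: percent of the track-specific requirement attended
# (12 calls for clinicians; six for brokers and senior leaders)
df["pct_calls_attended"] = 100 * df["calls_attended"] / df["calls_required"]

# Training cases apply to clinicians only; two completed cases were required
case_req_met = (df["track"] != "clinician") | (df["training_cases"] >= 2)

# Completed all training requirements (1 = yes, 0 = no)
df["completed_all"] = (
    (df["attended_all_sessions"] == 1)
    & (df["calls_attended"] >= df["calls_required"])
    & case_req_met
).astype(int)
```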

Data Analysis Plan

Given differences in training requirements and measures between clinicians, brokers, and senior leaders, some study analyses focused on all participants, while others only focused on clinician participants. In the full sample, analyses explored variables related to learning session attendance, consultation call attendance, and completion of all training requirements using the TISCI, ICS, ILS, and two burnout measures. In the clinician sample (n = 161 clinicians who indicated they had active caseloads during the baseline survey), all study measures were used to predict number of training cases, and the EBPAS, TF-CBT Practices Scale, and the Boulder IMPACT TF-CBT Knowledge Survey were used to predict clinician learning session attendance, consultation call attendance, and completion of all training requirements.

Since participants were nested within agencies, Generalized Estimating Equations (GEE; Hanley et al. 2003) were used for all analyses to account for agency-level dependencies. All analyses controlled for CBLC site. To decrease the likelihood of Type I errors, initial analyses focused on measure total scores where available, followed by subscale analyses when total scores were significant predictors. Analyses focused on one predictor at a time, controlling for site; significant individual predictors were then examined simultaneously to determine which were independent predictors. Rates of missing data were very low (all < 5%). Within the clinician sample, these data were missing completely at random [Little's (1988) MCAR test χ2 = 148.38, df = 169, p = .87]. Within the whole sample, the MCAR test was significant (χ2 = 120.87, df = 80, p = .002). Inspection of the pairwise t-tests from this analysis indicated that missingness was unrelated to all study dependent variables, satisfying the requirements for missing at random. Multiple imputation was therefore used to account for missing data; 10 datasets were imputed using the Blimp program for multilevel data (Keller and Enders 2017). Analyses were run using the SPSS 24.0 multiple imputation function for GEE (IBM Corp. 2016).
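To make the modeling concrete, the sketch below shows a comparable GEE specification in Python using statsmodels. This is an illustration under assumptions, not the authors' SPSS/Blimp pipeline: the data file and column names are hypothetical, the exchangeable working correlation is one reasonable choice for agency clustering, and in practice the model would be fit within each imputed dataset and the estimates pooled.

```python
# Illustrative GEE for a binary engagement outcome, clustered by agency,
# with one predictor at a time while controlling for site (as in the text).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cblc_baseline.csv")  # hypothetical analysis file

model = smf.gee(
    "attended_all_sessions ~ ics_total + C(site)",  # predictor plus site control
    groups="agency_id",                             # participants nested in agencies
    data=df,
    family=sm.families.Binomial(),                  # logistic link for a 1/0 outcome
    cov_struct=sm.cov_struct.Exchangeable(),        # within-agency correlation
)
result = model.fit()
print(result.summary())
print(np.exp(result.params))  # exponentiated coefficients = odds ratios
```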

Results

Rates of Engagement

Table 2 lists descriptive statistics for all study measures, including the engagement variables. Overall, half of study participants (50.2%) met all training requirements. Nearly 80% of participants attended all learning sessions. Fewer participants (61.1%) met the consultation call requirement, attending 76% of the required consultation calls on average. Fewer still (48.4% of clinicians) met the two-case requirement, completing 1.27 training cases on average.

Table 2 Descriptive statistics and comparisons to measure development samples

The three CBLC sites did not differ in the proportion of consultation calls attended, number of training cases completed, or rates of completing all training requirements. However, Site 1, which required six days of learning sessions rather than four, had lower rates of attending all learning sessions (64.7%) than the other two sites [81.4% and 86.6%; χ2(2) = 12.21, p = .002]. Analyses also indicated significant differences across the 37 agencies in attending all learning sessions [χ2(36) = 63.16, p = .003, range = 0–100%], percent of consultation calls attended [F(36, 246) = 1.98, p = .001, range = 6.7–100%], number of completed cases [F(28, 132) = 2.30, p = .001, range = 0–3.2], and completion of all training requirements [χ2(36) = 62.11, p = .002, range = 0–100%]. The three training tracks (i.e., clinicians vs. brokers vs. senior leaders) did not differ significantly on learning session or consultation call attendance, but there were significant differences in training completion rates [χ2(2) = 6.10, p = .047; brokers = 62.3%; clinicians = 45.0%; senior leaders = 52.9%].

Figure 1 details participation for each of the three training tracks. Of the brokers and senior leaders who did not complete the training requirements (i.e., attending all learning sessions and at least six consultation calls), approximately one-third did not complete either requirement and approximately two-thirds completed one, but not both, of the requirements. When individuals completed only one requirement, they most often attended all learning sessions but did not attend the required number of consultation calls.

Fig. 1 Engagement flow chart mapping completion of the community-based learning collaborative requirements for each training track. Brokers and senior leaders were required to attend all learning sessions and at least six consultation calls. Clinicians were required to attend all learning sessions, attend at least 12 consultation calls, and complete at least two training cases

Of the clinicians who did not complete the training requirements (i.e., attending all learning sessions, attending at least 12 consultation calls, and completing at least two training cases; n = 99), 29.3% completed two of the three requirements, 31.3% completed one requirement, and 39.4% completed no requirements. Of the clinicians who only completed one requirement (n = 31), 96.8% attended all of the learning sessions, and none completed the required number of training cases. All of the individuals who completed two of the three requirements (n = 29) attended all of the learning sessions and 86.2% attended all of the required consultation calls; only 13.8% completed the two required training cases.

Predictors of Engagement

Table 3 presents the results of the GEE analyses examining predictors of engagement. Two significant findings emerged in the analyses involving all three tracks. Contrary to study hypotheses, higher ICS scores were associated with decreased odds of attending all of the learning sessions (B = −.43, p = .028; OR .65). To facilitate interpretation of this finding, coding for the learning sessions variable was reversed and the analysis was re-run. The resulting odds ratio indicated that a one-point increase on the ICS total score was associated with a 54% increase in the odds that someone would not attend all of the learning sessions (OR 1.54). Consistent with study hypotheses, higher scores on the TISCI Integration scale were associated with higher rates of consultation call attendance (B = 3.87, p = .033); that is, every one-point increase on TISCI Integration was associated with a 3.87 percentage-point increase in consultation calls attended. Finally, within the clinician participants only, a one-point increase on the TF-CBT Practices Scale total score was associated with a 0.33 increase in the number of training cases seen (B = .33, p = .019) and a 58% increase in the odds of meeting all training requirements (B = .46, p = .043; OR 1.58). Engagement was not related to the other TISCI scales, the ILS total score, or the two burnout items.
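As a check on the reversal reported above: an odds ratio is the exponentiated GEE coefficient, and reversing the outcome coding simply inverts it, so the two estimates are algebraically equivalent:

$$\mathrm{OR} = e^{B} = e^{-0.43} \approx 0.65, \qquad \mathrm{OR}_{\mathrm{reversed}} = \frac{1}{\mathrm{OR}} = \frac{1}{0.65} \approx 1.54$$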

Table 3 Predictors of engagement

Follow-up analyses of the two significant total scores indicated that learning session attendance was associated with the ICS Recognition for EBP (B = −.35, p = .020, OR .70) and Rewards for EBP (B = −.45, p < .001, OR .64) scales. Reversing the outcome coding indicated that a one-point increase on the Recognition for EBP scale (OR 1.42) and on the Rewards for EBP scale (OR 1.57) was associated with a 42% and a 57% increase, respectively, in the odds of not attending all learning sessions. For the TF-CBT Practices Scale, one-point increases on the Psychoeducation (B = .16, p = .045), Personal Safety (B = .18, p = .015), and Behavior Management (B = .30, p = .005) scales were respectively associated with seeing .16, .18, and .30 more training cases. One-point increases on the Personal Safety (B = .26, p = .024, OR 1.30) and Behavior Management (B = .43, p = .013, OR 1.53) scales were respectively associated with 30% and 53% increases in the odds of completing all training requirements.

Post Hoc Analyses to Probe Study Findings

Given that few of our study hypotheses were supported, we considered alternate explanations for our findings. One possibility was the presence of ceiling effects due to high scores on the predictor variables. To explore this possibility, we compared sample scores on the predictor variables to available data from the measure development samples (see Table 2). On the ICS, the sample had significantly higher scores on the Recognition for EBP and Selection for Openness scales than the ICS child welfare setting validation sample presented by Ehrhart and colleagues (Ehrhart et al. 2016; p's < .05); however, scores on the Rewards for EBP scale were significantly lower than the Ehrhart et al. mean (p < .05). On the ILS, the sample had significantly higher scores on the Total, Supportive, and Perseverant scales than the ILS validation sample (Aarons et al. 2014; all p's < .001). Finally, on the EBPAS, the sample had significantly more positive attitudes on the Total score and all subscales (all p's < .001).

Comparison samples were not available for the TISCI, burnout items, TF-CBT Practices Scale, or TF-CBT Knowledge measure. The average scores on the TISCI fell between “somewhat true for my agency” and “mostly true for my agency,” suggesting the agencies on average were perceived as moderately trauma-informed. The average scores on the two burnout items, which asked how often individuals felt professional burnout, fell between “a few times a month” and “once a week” for emotional exhaustion and between “once a month or less” and “a few times a month” for depersonalization, indicating relatively low levels of professional burnout. The average TF-CBT Practices Scale total score fell between “41–60%” and “61–80%,” indicating that clinicians felt they were already employing TF-CBT strategies with the majority of their cases prior to beginning the CBLC. However, despite this high level of prior use, the average score on the Boulder IMPACT TF-CBT Knowledge Survey was only 53.43% correct.

Discussion

This paper sought to expand the literature on LCs by examining a wide range of predictors of engagement in LC activities, utilizing data from 37 agencies participating in three CBLCs focused on implementation of TF-CBT. Consistent with prior literature on EBP training efforts (e.g., Beveridge et al. 2015; Gleacher et al. 2011; Olin et al. 2016), only about half of CBLC participants completed all of the training requirements. Examination of each training component indicated that engagement in learning sessions was higher than engagement in ongoing consultation calls and completion of training cases, although engagement in the learning sessions was lower for the one CBLC that required three two-day learning sessions than for the two CBLCs that required only two. Engagement rates varied widely across the 37 participating agencies, and clinicians were somewhat less likely to successfully complete the CBLC requirements than brokers and senior leaders, perhaps because more activities were required of them (i.e., 12 rather than six consultation calls, plus completion of two training cases).

Only a few significant predictors of engagement were found. As hypothesized, in the analyses involving all three training tracks, participants who viewed their own practice as more trauma-informed on the TISCI Integration subscale attended more consultation calls. However, the Integration subscale did not predict other engagement variables, nor did the other TISCI subscales, which measured agency trauma-informed practices and policies. We also found that clinicians who reported higher use of TF-CBT prior to the LC, particularly psychoeducation, personal safety, and behavior management strategies, saw more training cases and were more likely to complete the CBLC training requirements. Taken together, these findings suggest that individuals who started the CBLC with practices that were more consistent with the goals of the CBLC were more likely to engage. It may be that they were more open to a training that reinforced their current practices. Given that completing training cases proved the biggest barrier to meeting the CBLC requirements, it also may be that clinicians who were already using TF-CBT had more access to appropriate training cases, were more willing to take them on, and/or were better able to retain their treatment cases to completion, perhaps due to greater comfort with the techniques.

However, contrary to study hypotheses, implementation leadership, professional burnout, and attitudes toward EBP did not predict engagement, nor did TF-CBT knowledge, our other exploratory predictor. Surprisingly, implementation climate, particularly the ICS Recognition for EBP and Rewards for EBP subscales, was negatively associated with learning session attendance in the analyses involving all three tracks, although it was not associated with the other indicators of engagement. Higher scores on the ICS Recognition for EBP subscale indicate that individuals are positively regarded within their agencies for delivering EBP, while higher scores on the Rewards for EBP subscale suggest that individuals receive financial incentives for using EBP. One possibility is that individuals at agencies with higher ICS scores also have a more solid foundation in EBP, and may subsequently find it less necessary to attend the learning sessions, but would still make an effort to complete other training components. It also may be that agencies that routinely deliver EBPs do not feel the need to recognize or reward individual EBP efforts because such efforts are expected.

One possible explanation for the scarcity of significant findings is that participants in these CBLCs endorsed high scores on many of the predictor variables. Indeed, they scored above the means of published comparison samples on most of the ICS, ILS, and EBPAS scales. Further, clinicians in this sample reported knowing and using many of the TF-CBT strategies before the start of the CBLC, and overall levels of professional burnout were low. As such, it is possible that there was not sufficient variability in these predictors to detect differences. However, the fact that agencies did differ significantly on all aspects of engagement suggests that other organizational factors not measured here likely do play an important role in engagement. Future research on engagement should include other organizational variables such as agency resources and other measures of organizational climate (e.g., Glisson et al. 2012).

This study had several strengths. First, our large sample size across three cities and service systems increases the generalizability of the findings to other EBP training efforts. The study also included an a priori, theory-driven set of predictor variables that was larger than in previous studies (e.g., Fritz et al. 2013; Pemberton et al. 2017). Most predictors were measured using existing measures with psychometric support; the lack of validated implementation measures has been identified as a critical issue facing the implementation science field (Martinez et al. 2014). Analyses focused on various aspects of engagement to understand which parts of the CBLC were most difficult to complete. Studying engagement within the context of the CBLC also expanded the literature beyond clinician samples to include agency leaders and brokers, who play an important role in the child welfare system. Lastly, the sample had very low rates of missing data, strengthening our ability to interpret findings.

Nevertheless, the study should be interpreted in light of several limitations, which suggest future directions for research. First, the predictor variables examined here were measured prior to the beginning of the CBLC. It is possible that ongoing assessment of attitudes and implementation barriers would allow for more sensitive measurement of barriers and facilitators to engagement and measurement of additional variables that cannot be measured well prior to training. For example, previous studies have found that clinicians' perceived match of TF-CBT with their clients predicts engagement in consultation calls (Fritz et al. 2013; Pemberton et al. 2017). In addition, we did not directly ask about perceived barriers to engagement, such as being too busy to engage in consultation or not having access to training cases. Future work would benefit from asking participants about problems they might anticipate so that implementation efforts can more directly address those barriers. The predictors examined in this study also focused primarily on “inner context” factors within agencies, but it would be helpful for future work to examine “outer context” factors such as funder support as potential implementation drivers (Damschroder et al. 2009). Additionally, although we have information about the individuals who participated in the CBLC training efforts, we do not have information from non-participants working in these same agencies or about agencies in these communities that did not participate. It would be interesting to examine how individuals and agencies are selected to participate in EBP training efforts, particularly given that it appears that participants of these CBLCs came from agencies that were very supportive of EBP implementation.

Despite these limitations, the current study has several implications. First, our findings suggest that these trainings may be “preaching to the choir” in that they are reaching agencies with pre-existing strong support for EBP and engaging individuals who were already using many of the practices covered in the training. It is possible that this is an appropriate use of resources, given that these agencies might be more likely to benefit from training efforts than less supportive agencies. However, these findings add to an emerging literature suggesting that efforts to increase access to EBPs may not be reaching the agencies and providers most in need of the training (Chor et al. 2014; Olin et al. 2015; Stewart et al. 2017). Additional research is needed to understand how to engage a broad range of participants in training and what types of agencies and providers are most likely to succeed. A related question is whether individuals who are already using evidence-based strategies need such intensive training, or whether a less time-intensive approach might have been equally successful with these trainees.

Although participants came from agencies with strong implementation support and held positive attitudes toward EBP, only half completed all of the CBLC requirements. In particular, consistent with prior studies (Ebert et al. 2012; Fritz et al. 2013; Hanson et al. 2019; Pemberton et al. 2017), attending consultation calls and completing training cases were the most difficult aspects of the training to complete. Although the inclusion of a senior leadership track in the CBLC was meant to include problem-solving to address training and implementation barriers, future training efforts may need to include a more specific focus on ensuring that participants have time to take part in consultation calls and that a flow of training cases is available.

These findings raise questions for the field about how to reconcile the fact that less intensive training methods, such as one-time workshops, are feasible for participants yet generally ineffective for creating sustained practice change, whereas more intensive, more effective training methods are challenging for many providers to complete. There is a critical need for the field to develop innovative, cost-effective training methods that are feasible and lead to sustained implementation of EBPs. In their multi-wave examination of engagement in a state-wide CBLC, Hanson et al. (2019) found higher rates of engagement in their third wave of trainings than in their previous waves and in this study. They theorized that this increased engagement could be attributed to several factors, including: (1) increased involvement by senior leaders, (2) increased opportunities for participant interaction and active skill practice during training, (3) incentivizing participation through a public “roster” of successful completers, (4) including the brokers to increase community-wide cooperation around trauma-focused services, and (5) an increased awareness of trauma and valuing of TF-CBT resulting from several years of training in the state. The CBLCs in this study used the first four of these approaches, but the potential importance of a state-wide climate that values EBP highlights the role of “outer context” incentives for implementation (Damschroder et al. 2009), such as funder support, as implementation drivers that can decrease barriers to participation.

Given the interest in comprehensive training models like LCs to successfully implement EBPs, additional research is needed to understand why so many participants fail to complete these trainings. Although this study provided an important step by examining a range of theoretically-grounded predictors, future research should incorporate a wider range of organizational variables and “outer context” factors, and assess potential predictors of engagement longitudinally, as it may be that organizational and attitudinal factors change over time as the CBLC unfolds. Such information is crucial to improving the success and public impact of EBP implementation efforts.