Introduction

For evidence-supported psychotherapeutic interventions to achieve their potential public health impact, large numbers of clinicians need to be trained in these therapies. These interventions pose dissemination challenges because they typically require clinicians to develop competence in flexibly delivering a large number of intervention components (e.g., Beidas et al. 2011). This complexity is usually addressed through comprehensive and expensive in-person training programs (Lyon et al. 2011). Unfortunately, the intensive nature and expense of these programs pose scalability problems. In the current study, we evaluated clinician participation in low-cost, scalable training activities designed to assist clinicians in learning one complex psychotherapeutic intervention, trauma-focused cognitive-behavioral therapy.

Background

Glasgow’s RE-AIM model (2006) views public health impact as the product of reach multiplied by effectiveness. Applied to psychotherapy training, this model implies that a large number of clinicians need to be trained in an evidence-based intervention in order to extend the intervention’s reach to as many people as possible. Of course, these clinicians also need to be trained well, to ensure that the intervention is delivered effectively. This joint focus calls for training programs that are both intense and scalable.

Training in complex, multi-component interventions that require multiple hard and soft skills offers substantial dissemination challenges. Even training in far less complex programs has been shown to require extended contact and multiple types of training activities (see e.g., Davis et al. 1995). For mental health interventions, critical reviews of the literature suggest the need for multi-component, active training (Beidas and Kendall 2010; Lyon et al. 2011). Lyon et al. (2011) concluded that, “Successful trainings in complex psychotherapy practices are likely to be time and resource intensive, involve careful attention to clinician engagement, utilize active methods of promoting initial skills acquisition, and provide ongoing supports to solidify skills and strengthen training transfer” (p. 250). These ongoing supports might include observation, feedback, consultation, or coaching (Herschell et al. 2010). Unfortunately, development and testing of such training strategies lag behind the development of the interventions themselves (Lyon et al. 2011).

Two recent studies suggest the presence of a market for effective and efficient training in evidence-based treatments among practicing mental health clinicians (Herschell et al. 2014; Powell et al. 2013). In a survey of community mental health clinicians, respondents indicated interest in clinician training that offers advanced skill development, fits clients’ needs, and offers continuing education credit (Powell et al. 2013). Mental health clinicians in focus groups expressed a desire for relevant, interactive, hands-on training with ample post-training supports (Herschell et al. 2014). Training requirements for using treatment manuals and external supervision, previously reported in the literature as barriers to using evidence-based treatments (e.g., Addis and Krasnow 2000), were not seen as deal breakers by the clinicians in these studies. However, clinicians in both studies wanted low-cost training that did not take them away from their clients and families for long periods of time (Herschell et al. 2014; Powell et al. 2013).

Unfortunately, mental health provider organizations may not be well suited or disposed to provide intensive clinician training. Many operate under severe financial strain (Schoenwald et al. 2008) and increasingly use restrictive compensation systems based on billable hours (Weisz et al. 2013). Adding a considerable number of non-billable hours to a clinician’s schedule for training may be unrealistic for agency-based clinicians and their agencies. For example, one mental health supervisor, commenting on sending more than one clinician to a training event, said, “That’s not always an option for us, unless it is a local one that is only an hour or two long” (Herschell et al. 2014, p. 195). The same may be true for practitioners in private practice and those employed in agencies on a contract basis, both of whom are paid entirely based on billable hours.

Is the mental health field at an impasse if each of the following three conclusions from this literature is true? First, multi-component, high-intensity clinician training is needed for effective implementation. Second, training of this kind is typically expensive and time consuming. And third, high costs and large time commitments are precisely the two things that agencies and clinicians are unable to commit.

A number of scholars have advocated for increased use of technology in mental health clinician training as a means to work around these problems (Beidas et al. 2011; Weingardt 2004; Dimeff et al. 2009; Herschell et al. 2010). In a recent study, a majority of clinicians at in-person training events reported being very or extremely interested in online training, citing the ability to take the course at home, at a convenient time, and at their own pace as motivators (Hubley et al. 2015). When online training is free and well-designed, clinicians appear willing to sign up in large numbers (see, e.g., National Crime Victims Research and Treatment Center 2007). Furthermore, in at least two training trials related to learning skills in delivering dialectical behavior therapy (DBT), online learning performed as well as or better than instructor-led training or reading a treatment manual on measures such as satisfaction, knowledge gains, and application of skills (Dimeff et al. 2009, 2011). Neither of these trials, however, focused on learning the full set of clinical skills necessary to deliver the evidence-supported therapy.

These findings are heartening and suggest that technology may provide a cost-effective means of reaching large numbers of clinicians. Online training can be used to convey information, model skills, create learning communities, observe clinicians in action, conduct supervision, and provide feedback (Beidas et al. 2011). It is also possible to combine learning via technology with other means, including book learning, in-person didactic training, and in-person peer collaboration. One recent trial found that online learning plus a learning community outperformed online learning alone in teaching clinicians exposure therapy skills (Harned et al. 2014). It is possible to mix and match an almost unlimited number of training components in order to determine ideal combinations of intensity and efficiency. Research to date has not tackled this complexity. The field needs multi-component yet efficient and scalable training systems that can deliver clinician proficiency at acceptable costs and time commitments, but it knows very little about such training programs, including whether clinicians find them acceptable.

Research Questions

This study addressed three questions that are crucial in order to design scalable, affordable trainings in evidence-supported interventions for mental health clinicians. First, what training activities will practicing mental health clinicians be willing to use in order to learn a complex intervention? Second, what personal and practice characteristics account for variation in clinicians’ participation in these training activities? Third, how can training activities be modified to increase clinician participation?

Methods

Overview

The quantitative portion of the study encompassed an online pre-training survey and three online follow-up surveys. The qualitative methods were developed using findings from the first follow-up survey and were designed to help explain and expand on the quantitative findings.

The Clinical Intervention Used for Training

Trauma-focused cognitive-behavioral therapy for children and youth who had experienced trauma (TF-CBT; Cohen et al. 2006) was chosen as the evidence-supported psychotherapy for examining clinician training for three reasons. First, the TF-CBT developers had already worked with the Medical University of South Carolina (MUSC) to create an introductory on-line training program in TF-CBT, meaning we would not have to create one for the purposes of this study. Second, like other evidence-supported mental health interventions, TF-CBT is a complex psychotherapy intervention, involving many components and skills. Third, Medicaid claims data on the mental health conditions being treated by clinicians in the state where the research was conducted suggested clinician caseloads included many children who had been traumatized.

Training Components

While no official TF-CBT training requirements or training certification were in place at the time of this study, the developers of TF-CBT recommended the following training components: completion of the online MUSC course (http://tfcbt.musc.edu), reading the treatment manual (Cohen et al. 2006), attending a TF-CBT training with one of the treatment developers or a TF-CBT certified trainer, and ongoing consultation with one of these trainers through at least one learning case. Since we were interested in creating a training program that was as scalable as possible, our initial training package included the following low-cost training components; clinicians were encouraged to participate in as many as possible over the course of 6 months.

  1. Complete the free online introductory training in TF-CBT (http://tfcbt.musc.edu).

  2. Read Treating Trauma and Traumatic Grief in Children and Adolescents (Cohen et al. 2006), the treatment manual for TF-CBT, which we sent to trainees free of charge.

  3. Watch four live (or archived) webinars featuring the treatment developers discussing topics that they usually cover in their in-person trainings.

  4. Read weekly emailed TF-CBT clinical and implementation tips created by the training investigators and the treatment developers.

  5. Participate in an online discussion forum with other trainees and a certified TF-CBT trainer.

  6. Read and use a toolkit of supplementary TF-CBT training materials developed by the research team. The kit included clinical measures, sample treatment plans, handouts for clients, and other clinical tools.

  7. Meet with a learning partner to role play key clinical scenarios and practice TF-CBT skills prior to use with a client. Clinician trainees were assigned a learning partner, another trainee who lived or worked near the trainee, and were provided with case scenarios and suggested role play activities to practice specific TF-CBT clinical skills.

Participants

Participants were recruited from (a) a Practice Based Research Network (PBRN) of community mental health clinicians and (b) mental health agencies operating in one of 84 counties in a Midwestern state that served youths who had been traumatized. PBRN members who lived in one of the 84 counties and had previously indicated that they served children who were traumatized (n = 614) were contacted via email and asked if they were interested in a training program for TF-CBT. In addition, a small number of mental health agencies serving rural catchment areas were also contacted and asked if they had clinicians who would like to be trained in TF-CBT. In total, 301 clinicians responded to these queries to express interest in training. We emailed these clinicians and asked them to enroll in the training by completing an online pre-training survey if they met three additional inclusion criteria: they (1) had access to high-speed internet, (2) had seen three or more children who had been traumatized in the past 12 months, and (3) had submitted therapy claims for reimbursement to the state’s Medicaid authority in the past 12 months. Of the 301 clinicians we emailed, 163 completed the online survey to indicate that they met the inclusion criteria and provided pre-training data.

Procedures

These 163 clinicians were randomized to two conditions, Immediate Training (n = 89) and Delayed Training (n = 74). This randomization allowed us to conduct analyses not included here, comparing trained versus untrained clinicians. Randomization was by person for private practitioners and by agency for clinicians in agency-based practice (in order to permit assignment of a learning partner within their agency). Trainees were sent links to a follow-up survey at the end of the six-month training period, and again 6 and 12 months after that, for a total of four possible surveys (including the pre-training survey). Of the 163 clinicians, 105 (64 %) completed a survey 6 months following the initiation of their training period and 84 completed a survey 12 months following the initiation of their training period. An additional four respondents took a shortened version of the second follow-up survey offered to non-respondents.

Measures

The baseline and follow-up surveys were developed using the Tailored Design Method (Dillman 1999) and were modeled after other recent clinician surveys (e.g., Hawley et al. 2009; Jensen-Doss and Hawley 2011; Powell et al. 2013). The baseline survey, completed by all 163 clinicians, included clinician demographics, practice characteristics, and the Evidence-Based Practices Attitude Scale-50 (EBPAS-50; Aarons et al. 2012). The EBPAS-50 was designed to assess potentially important clinician attitudes towards the adoption and use of clinical innovations and evidence-supported clinical interventions and has shown good psychometric properties in previous studies (Aarons et al. 2012). This version updates the original EBPAS (Aarons 2004), which contained four subscales (the appeal of evidence-based practice, the likelihood of adopting given the requirements of evidence-based practices, openness to new practices, and perceived divergence between the new and current practice), with eight additional subscales (Aarons et al. 2012): limitations of evidence-based practices; perceived fit of an evidence-supported intervention with client needs and clinician values; negative perceptions of monitoring required by the new practice; balancing the science of practice with the art of practice; the burden of learning and using a new intervention; perceived enhancement of job security by learning a new intervention; willingness to learn a new intervention if organizational support were available; and valuing feedback on clinical work. The 12 subscales range from 3 to 7 items in length. In our sample, six subscales had internal consistency coefficients above .8, four more had coefficients above .7, and two had coefficients below that (balance, .43; divergence, .63).
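The internal consistency coefficients reported here are standard Cronbach’s alpha values. For readers who wish to compute such coefficients, a minimal sketch of the standard formula follows (our own illustration, not the study’s analysis code; it assumes a complete respondent-by-item response matrix):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy example: 5 respondents answering a hypothetical 3-item subscale
toy = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]])
print(round(cronbach_alpha(toy), 2))
```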

Since the sample contained very small numbers of Latino, Asian, and Native American clinicians (see Table 1), these groups were combined with African American clinicians into a single category of clinicians of color. Profession was determined by reported licensing status. Clinicians were categorized into agency-based practice, private practice, or both, based on reported employment settings. Clinicians also reported the proportion of their clients whose services were reimbursed through Medicaid, the proportion who had experienced trauma as children, and the proportion who were children in foster care (see Table 1).

Table 1 Information on the clinician participants

The follow-up surveys included self-report of TF-CBT training activities completed. To create a summative count of training activities, trainees were considered to have participated in an activity if they said that they had participated at least somewhat in the activity (a 2 or greater on a 0–4 scale) at any of the follow-up surveys, yielding a count measure ranging from 0 to 7.
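As an illustration, the count measure just described reduces to a simple thresholded sum. The sketch below uses hypothetical activity names and ratings (the actual survey items and data are not reproduced here):

```python
# Hypothetical ratings: each activity's maximum self-reported participation
# (0-4 scale) across the follow-up surveys.
responses = {
    "online_course": 4, "treatment_manual": 3, "webinars": 2, "email_tips": 1,
    "discussion_forum": 0, "toolkit": 2, "learning_partner": 0,
}

# An activity counts as "participated" if rated at least somewhat (2 or greater).
activity_count = sum(1 for rating in responses.values() if rating >= 2)
print(activity_count)  # -> 4 on the 0-7 count measure
```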

Initial Qualitative Interviews

Following completion of the initial follow-up survey, we conducted qualitative interviews with a subsample of the participants in the Immediate Training group. We stratified the trainees by level of participation in the training activities and selected 20 clinicians to interview, five from each of the following strata: respondents who had completed the full training, most of the training, some of the training, or none of the training by the first follow-up survey. We created a scoring system from 0 to 40 points based on answers to questions about the six non-webinar activities and the four webinars. To be classified as full, respondents had to have at least 35 participation points and to have scored a 3 or above on reading the book. To qualify as most, but not full, they had to score over 30 points. To qualify as some, they had to score over 7 points. To qualify as none, they had to score under 7 points. Analyses suggested that we had reached saturation with five interviews per stratum.
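The stratification rule can be summarized as follows (a sketch based on the thresholds above; how each trainee’s 0–40 score is composed from the individual items is described in the text and is not re-derived here):

```python
def participation_stratum(points: int, book_rating: int) -> str:
    """Classify a trainee using the 0-40 point score and 0-4 book rating."""
    if points >= 35 and book_rating >= 3:
        return "full"
    if points > 30:
        return "most"
    if points > 7:
        return "some"
    # The text defines "none" as under 7 points; a score of exactly 7 is not
    # explicitly classified, so this sketch folds it into "none".
    return "none"

print(participation_stratum(36, 4))  # -> "full"
print(participation_stratum(12, 1))  # -> "some"
```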

Qualitative interviews were conducted by doctoral students in clinical psychology trained in qualitative interviewing by the first author. The interview protocol was developed to ask about clinicians’ experiences with the training and to elicit why they did, or did not, participate in certain training activities. For example, we saw from the initial follow-up survey that participation in the online discussion forum was low; we therefore knew to ask specifically about reasons for lack of participation in this training activity. Since our trainees lived across a large geographic area, qualitative interviews were conducted via phone and audio-recorded. Clinicians were paid $50 for these interviews. The audio content was transcribed, cleaned, and entered into qualitative data management software.

An Additional Training Group

We used clinician feedback from the initial round of qualitative interviews to refine and expand upon our training protocol. Following completion of the Immediate and Delayed Training periods, we offered these modified training activities to all already-enrolled participants and to a newly recruited Additional Training group. Participants for the Additional Training group were recruited from the Practice Based Research Network: members who were not part of the Immediate or Delayed Training groups were emailed and asked to respond if interested in receiving training in TF-CBT. The Additional Training group consisted of 38 new trainees; 33 took a follow-up survey but are not included in the quantitative analyses predicting training activity completion because they were offered a different set of training activities. These clinicians were encouraged to complete the seven training activities already described, plus two activities suggested by the initial round of qualitative interviews: a 1-day in-person TF-CBT training (offered on two dates in two different locations to facilitate attendance by clinicians geographically spread across the state) and four group consultation phone calls, all led by a certified TF-CBT trainer. Ten clinicians from the Immediate and Delayed Training groups also participated in one or more of these added training activities.

We subsequently conducted qualitative interviews with seven clinicians from the Additional Training group, each of whom had participated in the in-person training. These additional interviews focused solely on interviewees’ perceptions of the added value of the in-person training and consultation calls.

In total, 201 clinicians participated in some aspect of the study. Twenty-seven clinician participants were interviewed qualitatively: 20 from the Immediate Training group and 7 from the Additional Training group.

Analyses

Non-respondent Analyses

Differences between clinicians who completed a follow-up survey and those who did not were assessed with multivariable logistic regression. Variables were entered in three subsets, starting with demographic information, followed by practice setting and case mix variables, followed by EBPAS-50 variables and knowledge about TF-CBT. Variables were then removed one by one using a p < .10 criterion to reach a final, more parsimonious model.
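A minimal sketch of this kind of backward elimination, written with statsmodels for illustration (the study used different software, and this sketch omits the staged entry of variable subsets):

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, p_cut: float = 0.10):
    """Drop the weakest predictor until all remaining p-values are < p_cut."""
    X = sm.add_constant(X)
    while True:
        fit = sm.Logit(y, X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.empty or pvals.max() < p_cut:
            return fit  # final, more parsimonious model
        X = X.drop(columns=[pvals.idxmax()])
```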

Quantitative Analysis of Training

Analyses to predict the number of training activities completed proceeded in a similar fashion, using zero-inflated negative binomial regression in SAS 9.3 (PROC COUNTREG). Zero-inflated negative binomial regression generates two separate models and then combines them. First, a logit model predicts whether or not a clinician belongs to the group that participated in no training activities (the “certain zeros”). Then, a negative binomial model predicts the number of training activities for those clinicians who are not certain zeros.
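For readers more familiar with open-source tools, a rough equivalent of this two-part model can be fit with statsmodels (the study itself used SAS; the data below are simulated placeholders, not study data):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 105                                    # follow-up analytic sample size
X = rng.normal(size=(n, 2))                # toy predictors (e.g., age, attitudes)
counts = np.clip(rng.poisson(3, n), 0, 7)  # toy 0-7 activity counts
counts[rng.random(n) < 0.2] = 0            # inject some "certain zeros"

model = ZeroInflatedNegativeBinomialP(
    endog=counts,
    exog=sm.add_constant(X),       # negative binomial (count) component
    exog_infl=sm.add_constant(X),  # logit component for the certain zeros
)
print(model.fit(maxiter=200, disp=0).summary())
```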

Qualitative Analyses

Content analysis (Downe-Wamboldt 1992) was chosen as our qualitative analytic approach because we were not attempting to generate emergent theory; rather, we were directly trying to assess why clinicians did or did not participate in different training activities. Content analysis examines the content and intensity of language through the subjective interpretation of classifications, themes, and patterns. Thoughts were used as the unit of analysis. Each author read all transcripts and participated in the identification of themes and the development of code definitions. These codes were then applied to the data by the first author to generate coding reports that aided in more in-depth reading by the analytic team. NVivo 10 was used to support these analyses.

Results

The sample for the first two training groups consisted predominantly of middle-aged, white mental health clinicians, split across private and agency practice and across three primary disciplines: social workers, professional counselors, and psychologists (see Table 1). They reported that the majority of their child clients were trauma survivors and received services covered through the Medicaid system. Based on EBPAS-50 scores, this was a group of clinicians who found evidence-based interventions appealing and who saw few limitations to using them (see Table 1). Compared to the clinicians who participated in the development of the EBPAS-50 (Aarons et al. 2012), this sample appears to perceive fewer limitations to using evidence-supported interventions. This is not unexpected, as the clinicians in this study all volunteered to receive training in an evidence-supported intervention.

Lost to Follow-Up

The backward elimination logistic regression analysis designed to assess differences between clinicians who did and did not complete a follow-up survey on their training activities yielded a model with three variables. Those who were in the Delayed Training group were less likely to complete a follow-up survey than those in the Immediate Training group (OR .43, CI .22, .84, p < .05). The proportion of clients reported by clinicians to be trauma survivors was also positively associated with completing a follow-up survey (OR 1.01; CI 1.003, 1.03, p < .05). The other variable in the model, EBPAS organizational support (would get training if my organization supported it), was not statistically significant at the .05 level (OR 1.88, CI .99, 3.55).
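The odds ratios and confidence intervals above follow from exponentiating logit coefficients in the usual way; the numbers in this sketch are illustrative, not the study’s estimates:

```python
import numpy as np

# Hypothetical logit coefficient and standard error for a one-unit change
b, se = 0.014, 0.0055

odds_ratio = np.exp(b)                                   # exponentiate the coefficient
ci_low, ci_high = np.exp(b - 1.96 * se), np.exp(b + 1.96 * se)
print(f"OR {odds_ratio:.2f} (95% CI {ci_low:.3f}, {ci_high:.3f})")
# -> OR 1.01 (95% CI 1.003, 1.025): a small but reliable positive association
```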

Quantitative Results

What Training Activities were Mental Health Clinicians Willing to Use to Learn a Complex Evidence-Supported Psychotherapy Intervention?

Table 2 shows the rates of participation in the various training opportunities for the 105 Immediate or Delayed Training group clinicians who completed a follow-up survey. Overall, clinicians reported moderate rates of reading the manual or toolkit, completing the online training, or viewing webinars. Low rates of participation were found for the online discussion forum and for role-play training with an assigned learning partner. A substantial number of trainees reported minimal participation across all training activities.

Table 2 Participation in TF-CBT training activities among those completing a follow-up survey following a training period

What Accounts for Clinicians’ Participation or Lack of Participation in These Training Activities?

Zero-inflated models predict both membership in the zero group and count scores. No variables predicted membership in the zero group. We thus re-ran the regression specifying a negative binomial distribution and modeling only the number of training activities. The test of the alpha dispersion parameter fell almost exactly at the .05 level, suggesting that either a negative binomial or a Poisson distribution might be appropriate. Results for the negative binomial regression are shown in Table 3. Participants completed more training activities if they were older, if they were professional counselors (as opposed to psychologists), and if they had higher scores on the EBPAS Job Security subscale, which indicates that they would learn an EBP if it helped them get or keep a job.

Table 3 Negative binomial regression results predicting number of training activities (0–7)

Qualitative Results

Two prominent, intertwined themes emerged from participant comments on the training that help to explain which training activities they participated in and why participation often decreased over time: training flexibility and lack of accountability. Participants appreciated the flexibility the training activities offered them; indeed, for many clinicians, this flexibility is what allowed them to even consider the training opportunity. However, this flexibility may have been a negative for others, who reported that they needed more accountability in order to stick with the training program.

Flexibility

The majority of the qualitative participants (15/20, 75 %) spoke about appreciating the flexibility of the training offerings. They mentioned two kinds. The first was time flexibility; trainees appreciated that the training was always there when they had time in their schedules:

Doing that after hours at home when it was convenient to me, that’s a huge selling point because I had no other hours to choose from.

I really liked the webinars. I liked the flexibility of them. It wasn’t that I had to be at a specific place at a specific time. I could watch them later and I really liked that. I could fit them into my schedule around my clients or my life.

If I had a patient no-show, I could still use the hour in a productive way.

Second, several respondents used the term “at my own pace” to reflect their appreciation that they could do as much or as little as they wanted at any time; the training was not rushed, and they could spend more time on what was new or difficult for them:

Most of the training I get is seminar style and it is rushed so you don’t get a chance to process much. You don’t really get to do it and reflect on it. What I liked about participating in this program over a long period of time was that it was distributed, so you can get into the learning, reflect on it, look at it, come back to it.

I found what I needed to sink my teeth into at my own pace and at my own rate. That was huge. It just, just got me hooked.

Lack of Accountability

The training program’s flexibility, however, was a detriment, even a downfall, for other practitioners, who reported that they needed more accountability mechanisms to help them stick with the training. Accountability issues were raised by 11 of the 20 qualitative respondents (55 %). The range and vitality of comments on this theme were telling. We offer three short examples:

The flexibility of the internet training was my downfall because I could just endlessly put it off.

Just to get it all done takes a lot and not everyone has a lot.

The tyranny of the urgent gets to us and we do all these things that are immediately urgent.

Clinicians offered a number of suggestions about what could have helped their learning stay on track. Some thought the training directions should be more directive (e.g., “Well, if the online part said, ‘OK, now you need to go read chapters one and two and look at module three, blah, blah, blah,’ maybe I would do it”). Others suggested timetables for accomplishing learning components, with some suggesting that continuing education credits be tied to meeting the deadlines (e.g., “The way that advertisers say, ‘For a limited time only.’ …that might be something that applies here too”).

Attraction

We also asked clinicians to comment on what initially attracted them to the training opportunity. Besides the flexibility noted above, the largest category of responses here was related to the fit of the intervention to the problems their clients experienced. Many clinicians reported seeing a lot of children who had experienced trauma, and wanting more training to be able to help those children. Several other types of responses indicated to us a desire to increase professionalism and specialization in their clinical practices. This was supported by comments about wanting to learn something with a strong evidence base, and comments about wanting to improve their clinical skills (e.g., “I thought it was just a nice opportunity to maybe be on the cutting edge of something”). A few saw it as an opportunity to “catch up” on skills that they thought they should have learned by this point. Still others had job-related reasons for participating in the training (e.g., to please a supervisor who wanted clinicians to learn new skills; to join other clinicians who were already applying TF-CBT in their agencies).

Peer Learning Participation

We also used the qualitative interviews to better understand the low rates of use of the learning partner and to learn more about the exceptions, when clinicians did use learning partners to increase their skills in delivering TF-CBT. A number of issues contributed to the low use of learning partners. In general, motivation to participate with a learning partner was very low. This was rarely expressed directly, but became apparent in quotes like the following: “I think that both of us felt like, ‘Are we gonna get in trouble if we don’t do it?’ We felt like it was a forced thing that we had to do as opposed to wanting to do.” Clinicians also reported discomfort with the idea of role playing, and perhaps displaying limited skills, in front of another clinician:

I think it’s just an awkward position to feel like your colleagues are judging you.

It’s probably professional pride. I don’t want to look stupid. I don’t want to, you know, do anything wrong with another colleague.

The following quote was from a clinician who did meet with her learning partner several times, but they still did not conduct the role play exercises that were the purpose of their meeting: “We would get together and not spend the time doing what we were supposed to do. We would, you know, just get off task.”

The few clinicians who did do the role plays with their assigned learning partners, without exception, knew one another prior to being paired, many working in the same agency. This suggests that lack of prior familiarity might have hindered clinicians from contacting one another.

The most successful application of partnered learning occurred in one agency, where a qualitative interviewee reported that a group of trainees came together weekly to learn TF-CBT and to work through the training and various TF-CBT concepts. This group developed its own structure, systematically covered the contents of the TF-CBT manual, and discussed case examples of TF-CBT applications. The group reportedly engendered considerable enthusiasm among the clinicians who participated in it, but even members of this group did not do as many role plays as they had intended or expected. Once again, there seemed to be issues related to needing accountability to overcome a natural hesitancy to perform in front of others: “I think honestly if you make it mandatory, people will do it and they would benefit from it.”

Finally, time and scheduling also appeared to play a role in the low use of learning partners. Participants talked about their own schedules being too busy, their learning partners’ schedule being too busy, difficulty finding a time when both clinicians would be free, and the time requirements if the learning partners were not geographically close.

On-Line Discussion Forum

We also explored reasons for the lack of participation in the online discussion boards in our qualitative interviews. Three main reasons for the low participation emerged in the analyses: technology barriers, the hassle factor, and low awareness. First, several respondents said that this simply was not the way that they interacted with technology:

I’m not as young as some of these other folks. I’m not used to using the internet as a social talking things out. It didn’t come natural.

Just going to talk up on the bulletin board thing on the web? That doesn’t appeal to me.

Second, several participants found it a hassle to remember that the discussion boards were there, remember a password, and log on. One compared it to the webinars: “I found the webinars easy to use because they would send you an email and you would just click on the link. But on the therapy network website, it was kind of a little bit more difficult to access, like in terms of what my login ID is. So I think I just didn’t mess with it.”

Suggestions for Improvement

We also assessed additional suggestions for improving the training activities that might have application beyond TF-CBT. By far, the most frequently mentioned suggestion (15/20, 75 %) was that the training include a live training event that would provide both some accountability and some familiarity to break the ice.

I think having a face-to-face training first would be really good to actually meet people in person…That might even promote people to talk more on the discussion board if they knew who the other people were.

If I had a personal connection to this person, you know. Just someone that called and said, “Do you have any questions? We’re here if you need us.”

It would have been helpful to have a facilitator come like once a month and help us practice with each other the skills in some of those role plays that they gave us.

In-Person Training and Phone Consultation

As noted above, we used this feedback to implement a third training group with the addition of a 1-day in-person workshop (in one of two locations) and four opportunities to call in for an open-line consultation with the trainer who conducted the training. Thirty-eight new trainees enrolled. Thirty of the 38 participated in one of the 1-day in-person workshops. Few, however, called into the open-line phone consultations (e.g., one of the four scheduled calls had no callers).

Seven of the participants who attended the in-person workshop were interviewed qualitatively. Although one person did not find the in-person training helpful, wanting more depth than it provided, the other six were enthusiastic about what the in-person session added to their learning experience. For several, it seemed to cap off their learning. For example, “It just added another element, another learning curve.” And, “I found that the day long training was really helpful in terms of consolidating my learning.” Finally, two explicitly commented on the ability to ask questions (which was also available in the live webinars, but few questions were asked) and two noted the ability to network with other clinicians.

Two of the seven clinicians interviewed qualitatively took part in the phone consultations. Both found them helpful. For example, “She not only helped with cases but she just added more information and more resources.” And, “It was great to hear others’ issues that came up. I thought both [calls] very helpful in a practical sense.” Of the five clinicians interviewed qualitatively who did not use the phone consultations, one said she did not need them, and the others cited scheduling issues.

Discussion

This is the first study to examine mental health clinicians’ willingness to participate in various low-cost, non-classroom clinical training activities designed to increase the scalability of evidence-supported psychotherapies. We focus our discussion around the three study questions, plus additional signs of encouragement and trouble that emerged from the data.

Participation rates were very low for the online discussion board and for partnered in-person practice of clinical skills. We struggle with how to characterize the participation rates for the other activities, such as the static online learning, the webinars, and reading the treatment manual, where about one-half to two-thirds of trainees reported completing some or most of the activity. The literature provides little guidance on what rates of participation to expect for the online portions of clinician training, although our rates are nowhere near as low as those reported for massive open online courses across disciplines (Ho et al. 2014), where completion rates of 5 % appear to be the norm. A subset of clinicians participated wholeheartedly in multiple activities (namely the static online training, the webinars, and the book) but did not complete all suggested activities. Others picked and chose what they would do. Some participated in very little. A quarter of those who signed up for the training and completed a follow-up questionnaire reported not doing any of the training activities. Overall, our participation findings suggest that any group launching open, free clinician training with online components risks substantial non-participation and should carefully consider strategies to fully engage clinicians (Lyon et al. 2011).

The trainees who did participate did so for practical reasons, notably to learn skills needed for their work. This appears to relate to the implementation outcome of appropriateness (Proctor et al. 2011). Most of the EBPAS subscales were not predictive of training participation rates. This may be because only clinicians who indicated an interest in learning TF-CBT were queried about their attitudes toward evidence-based practices; a survey of clinicians more broadly may have yielded a different result. Also, clinician attitudes toward evidence-supported treatments may be changing rapidly, as more clinicians get training in these protocols and as the evidence-based movement matures. The number of training activities completed was, however, associated with the Job Security subscale of the EBPAS, suggesting a strategic element to the choice to participate in additional clinician training. In other words, convenience alone may not be enough to get people to participate in clinical training. Perceived appropriateness of the clinical intervention to be learned will likely be a necessary condition for pursuing training in an evidence-supported intervention.

In addition to the moderate overall participation, we saw three signs of trouble ahead for efforts to use technology to increase access to high-intensity, low-cost clinical training for mental health professionals. While at least one study found that clinicians valued hands-on clinical training (Herschell et al. 2014), in this study clinicians’ reluctance to role play clinical scenarios with other clinicians ran deep. Most hands-on clinician training involves another clinician viewing or listening to a trainee using her new skills, as did the partnered role plays we attempted to get clinicians to try. This may be one area where the encouragement of an in-person trainer is crucial. Participants may agree to role play to garner the approval of a trainer with whom they feel some affinity. Training activities without an in-person component may need to develop alternative means to motivate and encourage clinicians to participate in role play practice activities.

Consistent with the observations of Weisz et al. (2013), our data suggest that many mental health agencies are not well structured to support clinical training for their clinicians, even when it is free to them and offers potential agency benefit. Most participating clinicians ended up doing this training on their own time, as an adjunct to their agency work, rather than as part of it. This might be related to a need to accumulate billable hours (Herschell et al. 2014; Weisz et al. 2013) or perhaps to agency recognition of unreimbursed marginal costs associated with the implementation of evidence-supported interventions (Raghavan et al. 2008).

The third sign of possible trouble ahead for large-scale implementation was clinicians’ desire, and perceived need, for in-person instruction and individualized coaching. This was consistent with the focus group findings of Herschell et al. (2014). It may be possible to create effective individual or small-group coaching over the phone or online, but the costs of these supports may outstrip the willingness of agencies or individual clinicians to pay for them. Furthermore, the small number of clinicians typically certified by treatment developers to provide these supports introduces another barrier to the scalability needed to substantially increase the public health impact of clinician training.

We also identified four signs of encouragement for the use of technology-based clinician training. One is the virtue of self-paced learning, also mentioned by Hubley et al. (2015): it provides opportunity for reflection that other training strategies might not. A second was that one agency organized its own work group around our training activities and, in that work group, we found higher rates of participation and greater enthusiasm. We do not know whether there was something unique in that agency’s organizational culture or climate that contributed to this development, but it suggests that technology-based training within agency-based learning groups may be a promising avenue for training agency-based clinicians. Such work groups can also overcome barriers to participation by building in their own accountability deadlines and mechanisms, and by encouraging and norming participation in phone consultation and other activities such as role plays in front of peers.

A third encouraging sign is that we may be on the verge of identifying dimensions by which online clinical training activities can be tailored. Prior experience in online communities is one such dimension. Groups with minimal experience in online communities may need more encouragement and coaching on how to use these tools. The need for external accountability is another dimension upon which training could be individually tailored. Some clinicians may need a more structured experience with set assignments, reminders and meaningful deadlines whereas other clinicians may be turned off by these externally imposed structures.

Fourth, clinicians were drawn to the training by its perceived relevance to client need and by their eagerness to advance skills and develop expertise. These seem like healthy sources of motivation that future efforts can use to market clinician training in evidence-supported interventions through words like relevance, expertise, and specialization.

The study possessed several strengths. Among them were (a) the ability to follow up the quantitative findings showing low participation in some training activities with qualitative interviews exploring reasons for this finding; (b) well-described training activities and components; (c) the flexibility to add another group of trainees to examine reactions to the addition of in-person training and phone consultation; and (d) the use of practicing community clinicians from both agency and private practice settings.

There were also multiple limitations. The participating clinicians were 90 % white and experienced. Younger clinicians or a more ethnically or racially diverse sample may have had different reactions to the training options. Over seven million young people take online courses through U.S. colleges each year (Allen and Seaman 2014). As these younger people graduate from clinical training programs, they will likely be more receptive to participation in activities like online discussion groups. In addition, by including only clinicians who completed an online survey, we may have overestimated the level of participation possible for online training activities. Another limitation was that this study relied solely on self-report of training activities. Finally, although we designed the training activities with the assistance of treatment developers and by reference to the existing training literature, a different group of researchers may have chosen different training components. What these components should be, how many there should be, and how intense they need to be remain unknown. Our choices to include the activities we did likely influenced results.

We tentatively offer a number of recommendations for future research. First, this was not a test of the effectiveness of a training package, and such studies are still needed. We also need more studies that compare different compositions of training components. We know almost nothing about the thresholds required to achieve differing levels of clinician competence. How much training? At what levels of intensity? How much needs to be in-person? How much practice is needed? There are many such thresholds that future research could address. Our results also suggest that future studies should compare tailored training strategies to non-tailored strategies. Finally, more training studies are needed with different compositions of clinicians to better understand receptivity to technologically delivered clinician learning.

Training in evidence-supported mental health interventions will likely remain an important policy, organizational and clinical issue for the foreseeable future. While the goal of knowing what combinations of training efforts are needed to efficiently and effectively train mental health clinicians in new interventions may remain distant, the mental health services field is rapidly shedding its naiveté about training. This paper contributes to this progress and offers a few paths forward.