Although efficacious treatments for mental health and substance abuse problems (e.g., cognitive-behavioral treatment (CBT) for anxiety disorders) have been identified (see Chambless and Hollon 1998), access to such treatments is undermined by the limited availability of clinicians who provide such services (Williams and Martinez 2008). In an effort to close the research-practice gap, national organizations such as the American Psychological Association (APA) and the American Academy of Child and Adolescent Psychiatry (AACAP) strongly promote the clinical provision of Evidence-Based Practice (EBP; AACAP 2006; APA 2005). Despite these efforts, most individuals in need of EBP do not receive it (President’s New Freedom Commission on Mental Health 2004). To increase widespread adoption and integration of EBP into everyday use in clinical and community health care practice, the National Institutes of Health (NIH) proposes a wide-ranging research agenda (Program Announcement in Dissemination and Implementation Research in Health; http://grants.nih.gov/grants/guide/pa-files/PAR-10-038.html) that explicitly recommends the initiation of empirical study of how best to accomplish training in EBP.

System-wide training efforts such as the ‘Evidence Based Psychotherapy’ initiative in the United States Veterans Health Administration (see McHugh and Barlow 2010; special issue of Implementation Science, Eccles and Graham 2009) and the United Kingdom’s ‘Improving Access to Psychological Therapies’ (IAPT 2008) break important new ground. However, such large-scale efforts require resources that few systems and researchers have. More commonly, resource constraints make research on training and implementation efforts necessarily incremental and small scale, even when the scope of the problem merits sweeping, multi-year efforts. To have maximum impact despite limited resources, we argue that researchers can take systematic, practical steps to design focused projects that contribute meaningfully to a coherent dissemination and implementation (DI) literature. The aim of this paper is to offer recommendations for individuals interested in programmatic research on training clinicians in EBP for mental health and substance abuse.

Four issues require attention when designing training research: (a) aligning with the larger DI literature to consider contextual variables and clearly define terminology, (b) critically examining the implicit assumptions underlying the stage model of psychotherapy development, (c) incorporating research methods from other disciplines that embrace the principles of formative evaluation and iterative review, and (d) thinking about how technology can be used to take training to scale throughout all stages of a training research project. Within each area, we make practical recommendations and identify specific questions for future research. These recommendations are based on a review of the training literature from an interdisciplinary perspective (i.e., medicine, psychology, implementation science). To illustrate, we conclude the article with an extended example showing how these recommendations shape the design of a research study of training in behavioral activation.

1. Align with the Larger DI Literature to Consider Contextual Variables and Clearly Define Terminology

Reviews of training research criticize the inconsistent terminology and constructs used within the literature (Beidas and Kendall 2010; Rakovshik and McManus 2010) and the failure to draw upon established theories of DI in making design decisions (Davies et al. 2010; Rakovshik and McManus 2010). Suggestions to guide future inquiry are elaborated below; a key suggestion is the explicit consideration of contextual variables (Beidas and Kendall 2010).

Use Common Terminology

A widely acknowledged failure of training research is inconsistency in the use of terminology and constructs (Beidas and Kendall 2010; Rakovshik and McManus 2010). This lack of agreement on terms and definitions in relation to EBP (Kendall and Beidas 2007) and implementation science (Damschroder et al. 2009; Graham et al. 2006) is partially due to a literature fragmented by intervention type (e.g., prevention vs. treatment), population (e.g., adult vs. child), treatment target (e.g., substance abuse vs. mental health), and discipline (i.e., psychiatry, nursing, psychology and social work). For example, work to bridge the gap between research and practice is variously referred to as: quality assurance, quality improvement, knowledge translation, knowledge transfer, knowledge translation and exchange, decision support, performance support, technical assistance, implementation research, research utilization, health services research, DI research, and continuing education research (Institute of Health Economics 2008). This fragmentation is also true of the constructs used in studies (e.g., clinician competence and skill both refer to similar but not entirely overlapping concepts), making it difficult to compare and synthesize findings across studies (Fixsen et al. 2005). What Donabedian (1981) said 30 years ago remains true today: “…we have used the words in so many different ways that we no longer clearly understand each other when we say them” (p. 409).

Fortunately, progress has been made in the broader DI literature to empirically identify processes and constructs that can be applied to training research in EBP for mental health and substance abuse. Researchers need to make better use of these existing theories in study design and construct selection (Davies et al. 2010). By rapidly moving to shared terminology and constructs that have a demonstrated evidence base, we can prevent research silos from hardening and leaving the field with data sets embedded in separate models whose varying constructs undermine the synthesis of findings. Structural policy change may be needed to encourage more rapid uptake of shared terminology and constructs (e.g., requirements from funding agencies).

Many frameworks and conceptual models have been proposed to organize the processes and impacts of implementation (cf. Graham et al. 2006; Grol et al. 2007), and promising theories exist within the broader DI literature (e.g., Greenhalgh et al. 2004; Kitson et al. 2008). However, Damschroder and colleagues (2009) state, “a comparison of theories reveals considerable overlap, yet each one is missing one or more key constructs included in other theories” (p. 2). Even within the broader DI literature, a lack of consistency in the operational definitions of constructs across theories has been identified. One meta-theoretical framework to come out of the implementation science literature addresses these critiques.

Consider Using Consolidated Framework for Implementation Research (CFIR) as a Heuristic to Guide Study Design

Damschroder and colleagues (2009) conducted an exhaustive review of published theories in implementation science and created a typology of constructs based on empirical evidence. The CFIR does not specify hypotheses or relationships among constructs; rather, it consolidates and unifies constructs from published theories as a pragmatic list organized within five major domains that reflect the structure of other widely cited implementation theories (e.g., Fixsen et al. 2005; Greenhalgh et al. 2004; Kitson et al. 2008): intervention characteristics, outer setting, inner setting, individual characteristics, and the implementation process (see Table 1 for the components of each domain). Note that as an overall framework, the CFIR has not yet garnered empirical support; however, the individual components within the CFIR are drawn from evidence-based theories.

Table 1 Summary of Consolidated Framework for Implementation Research (CFIR)

We strongly suggest that, at this stage, training researchers consider using this heuristic typology to integrate DI science into their work and to think systematically through the contextual variables relevant to training research, thereby promoting theory development and verification based on clearly defined common constructs. We recommend that researchers accomplish this by systematically considering each of the five domains as they design training studies, ensuring that sufficient attention is given to contextual variables and that the constructs studied align with the larger conceptual and empirical DI literature. We further suggest that research reports consider using the CFIR’s definitions of constructs to describe and discuss results in terms of each of these five domains.

2. Critically Examine the Implicit Assumptions Underlying the Stage Model of Psychotherapy Development

Training research may move forward more rapidly if researchers critically examine the utility of several assumptions inherent in current approaches to developing and implementing EBPs and training interventions. Historically, applying the range and rigor of behavioral science from the origination and initial testing of novel therapies to their DI in community settings has been conceptualized as a three-stage model (e.g., Onken et al. 1996). In Stage I, basic science is translated into clinical applications, pilot testing and feasibility trials begin on new and untested treatments, and treatment manuals, training programs, and fidelity measures are developed. In Stage II, promising treatments are evaluated for efficacy via randomized controlled trials (RCTs) that emphasize internal validity. In Stage III, efficacious treatments are subjected to effectiveness trials and are evaluated with regard to their external validity and transportability to community settings (Rounsaville et al. 2001). This model bears many similarities to the process by which investigational drugs or medical devices are tested and ultimately brought to market.

Unfortunately, the model contains several elements that can create problems when adopted uncritically to guide training research. An untested and often implicit assumption is that the entire treatment package must be transported and used exactly as intended in order to achieve the desired clinical outcomes. When applied to training research, one may uncritically assume that training interventions must be designed to train the entire treatment package or protocol and that the success of a training intervention is defined as community clinicians implementing entire treatment protocols exactly as standardized and performed by research therapists. A further implicit assumption in the stage model is that innovation flows in one direction from researchers to clinicians.

Uncritically adopting these assumptions leads to: (a) an over-commitment to training specific treatment protocols rather than an experimental approach that includes modular training of clinician competencies, (b) an overemphasis on adherence and competence at the expense of understanding flexibility and adaptation during implementation, and (c) an overemphasis on intensive DI strategies (e.g., expensive experts providing training and supervision to reach adherence in an entire treatment protocol as conducted in efficacy research) rather than an approach that includes experimental investigation of extensive strategies (e.g., training clinicians in principles or modules through technology-based training). We discuss each implication and make recommendations to augment dominant research paradigms. See Table 2 for associated research questions.

Table 2 Critically examine the implicit assumptions underlying the stage model of psychotherapy development: Research questions

Consider Units Smaller Than an Entire Treatment Package

Often when considering the DI process of EBP, the question of what is to be transferred has been answered with reference to an empirically-supported treatment protocol via treatment manuals (e.g., Beck et al. 1987; Kendall and Hedtke 2006; Linehan 1993a, b) that outline the procedures session-by-session (Cucciare et al. 2008). While potentially effective in disseminating knowledge, such printed materials may not always be sufficient for professional behavior change. The Cochrane Effective Practice and Organisation of Care (EPOC) group reports that, compared to no intervention, printed educational materials yield a median absolute improvement in process-of-care outcomes of 4.3% for categorical variables and 13.6% for continuous variables but have no effect on patient outcomes (Farmer et al. 2008). Self-guided study via printed educational material such as treatment manuals is unlikely to result in clinician behavior change (Miller et al. 2006) and may not even result in meaningful knowledge acquisition (Beidas et al. 2009).

Just as it is a mistake to over-rely on self-guided study of the treatment manual to change clinician practice, it may be problematic to over-rely on the treatment manual as the primary unit for designing training interventions (i.e., training organized around entire treatment protocols). Instead, we recommend that the research agenda shift to the clinician as the primary unit (Fouad et al. 2009; Kaslow et al. 2009). Competency-based training is a strategy that ensures that clinicians develop the skills and behaviors necessary for a particular task by delineating the important components of that task (Ricciardi 2005). Complex, multi-component interventions are dismantled into component skills based on taxonomies that define the domain’s range of required capabilities (Darken 2009; www.skillsforhealth.org.uk; see Luoma et al. 2007 for an example). For training research, a modular approach (Weisz and Chorpita, in press) fits with training common effective strategies and competencies across protocols (Chorpita and Daleiden 2009), may be more effective and efficient than training clinicians in an EBP for each presenting disorder (Chorpita et al. 2008), and may be better received by clinicians than manualized treatment protocols (Borntrager et al. 2009).

Similar to the distillation of competencies for child therapies (Chorpita and Daleiden 2009), those involved in the IAPT project have distilled competencies by therapy type for adults (see Roth and Pilling 2008). Building modules based on major core components that cut across EBP protocols would allow researchers to recombine and repurpose content when teaching the principles of a treatment approach. For example, clinicians learning CBT can be taught the general rationale and purpose of a thought record, which is then used whenever indicated for a particular presenting problem (e.g., coping with anger, sadness, anxiety). The most important outcome to identify from the use of core modules is whether the intervention (i.e., module) affects the processes that produce treatment outcomes (Greenberg and Pinsof 1986). The use of modules will allow us to understand this in a more systematic way and is akin to dismantling studies of treatment interventions.

While the use of a modular approach to clinical training in EBP is certainly promising, clinicians’ ability to apply the knowledge and skills acquired through modular training, and to combine them appropriately, will require empirical investigation. For example, a study of usual care for disruptive behavior disorders suggests that community clinicians use a breadth of different components of EBP in their every-day practice but do not use any one component in depth (Garland et al. 2010): although affect education is used frequently by community clinicians treating disruptive behavior disorders (81%), the intensity of its implementation is low (10%). If in fact the components must be combined such that some are used in greater depth to change disruptive behavior, the training intervention must not only train the competencies but also the principles that guide the clinician in combining the competencies for each disorder in specific ways. Further, a modular approach does not mean that clinicians can selectively use components of the treatment that they find comfortable (e.g., supportive listening) while avoiding less pleasant components (e.g., exposure to feared situations).

Despite needing further empirical study, the modular approach is well-suited for systematic training research. For example, if one team identifies successful methods for training sub-component competencies that are common across multiple protocols, the knowledge gained could rapidly advance many other teams’ work. The modular approach also aligns with commonly used best practices in instructional design and with the easily adopted technology-based methods of training discussed later. The effectiveness of the modular approach should be a central focus for training research; training researchers can critically examine the assumption that one must train clinicians in entire protocols and that such protocols must be delivered in routine settings exactly as they have been delivered in research investigations to obtain comparable client outcomes.

Assess More Than Adherence and Competence

Similarly, and consistent with Stage II efforts, measuring adherence and competence has become standard procedure for determining treatment integrity in efficacy research (Kendall and Comer 2011; Perepletchikova and Kazdin 2005). Independently rated adherence and competence/skill in delivery of the intervention are the two primary outcomes used to evaluate the effectiveness of training and implementation interventions (Beidas and Kendall 2010). Adherence is the degree to which the clinician follows the procedures of a treatment protocol, whereas competence refers to the level of skill demonstrated by the clinician in the delivery of treatment (Perepletchikova and Kazdin 2005). Generally, adherence and competence are rated by independent evaluators based on in-session clinician behavior. Commonly used measures of adherence and competence vary across treatment modality; illustrative examples include the Cognitive Therapy Scale (Young and Beck 1980) and the Motivational Interviewing Treatment Integrity scale (Moyers et al. 2003).

An emphasis on adherence and competence as the default gold standard, however, is not without criticism. A recent meta-analysis suggests that neither adherence nor competence is significantly related to patient outcomes (Webb et al. 2010). Possible explanations of this finding include: (a) the lack of relationship is due to limited variability in adherence and competence ratings within RCT protocols, because research therapists generally score above criteria on these constructs and those who do not may be excluded from analyses; (b) the relationship between fidelity and outcomes is curvilinear, with both extremely low and extremely high adherence contributing to poorer outcomes and moderate adherence related to the best outcomes; and/or (c) adherence is essential to outcome in treating some disorders or in some modalities (e.g., treatment adherence in multisystemic therapy; Schoenwald et al. 2004) but perhaps not in others. Much is unknown about the causal role of specific treatment interventions in specific outcomes (Morgenstern and McKay 2007), and more dismantling studies are needed to understand the relative contribution of various therapeutic procedures to outcomes. Given the current literature, we believe it premature to conclude either that adherence and competence in specific therapeutic interventions are unimportant, or that adherence and competence should be the sole criteria for defining the success of a training intervention. Instead we advocate an experimental approach so that we do not inadvertently work from untested assumptions that could hinder training research.

An over-concern with adherence and competence may narrow the focus of the outcome variables investigated in training research (Weingardt et al. 2009). Current research paradigms emphasize transfer of training (i.e., finding evidence of the use of skills after the training), so that “to be considered successful, adoption must occur while maintaining intervention fidelity, integrity or adherence. That is, the critical features of the intervention are implemented consistently across adopters and there is no drift from the original validated procedures” (Turner and Sanders 2005, p. 180). By these evaluative criteria, creative and innovative adaptations to suit the clinical context and client population are defined as noise in the data and as failures to adhere.

Alternatively, we suggest that research focus directly on measuring and evaluating “flexibility within fidelity” (Kendall and Beidas 2007; Kendall et al. 2008; Koerner et al. 2007). For example, a CBT training intervention can teach clinicians principles to be used flexibly given a client’s presenting problem. One example of such flexibility is understanding that reinforcement can vary depending on the individual client’s needs. For one anxious child with difficulty separating from her parents, playing a game with someone new could be positively reinforcing following an exposure task, while for another anxious child with social difficulties, this may not be positively reinforcing and would not be an appropriate reward following an exposure task. Another example comes from dialectical behavior therapy (DBT; Linehan 1993a), which recommends after-hours, as-needed phone coaching in DBT skills by the individual therapist. Given that many systems are not set up for this type of care, a flexible adaptation might include training crisis team members to coach clients rather than relying on the individual therapist. Flexible adaptations such as these can result in important empirical investigations. In the CBT example, the principles of reinforcement could be taught and data then collected both on how well therapists generalize correctly across differing clinical presentations and on how this flexibility impacts client outcomes. In the DBT example, one could train individual therapists versus crisis team workers in phone coaching to examine whether learning and performance in both groups is comparable and whether either outperforms the other in helping patients use skills to avoid crises.

One way to look at ‘flexibility within fidelity’ is to experimentally manipulate and assess whether training leads clinicians to understand the key principles of a treatment, apply principles appropriately, and generalize principles across clinical situations; this should become a priority in training research. Both quantitative and qualitative research methods could contribute to such efforts. For example, practitioners can be trained to conduct functional analyses of targeted problem behaviors or trained to design homework assignments to help clients better generalize in-session changes to daily life, either through didactic or experiential methods. One productive research strategy is to study ‘positive deviants,’ individuals who most successfully adopt an innovation despite barriers and often in doing so successfully adapt the innovation in creative ways (Bradley et al. 2009). Training researchers might design naturalistic studies to identify community clinicians who learn and implement a new EBP and study how they are successful, rather than assume that training research means unidirectional flow from expert to practitioner. Then, these lessons learned can be shared to help other community clinicians as they learn and implement the EBP in their local settings.

3. Incorporate Research Methods from Other Disciplines that Embrace the Principles of Formative Evaluation and Iterative Review

A third recommendation for researchers is to augment their research methods with innovative methodology from other disciplines. One inadvertent effect of the stage model’s early emphasis on strategies that maximize outcomes in tightly controlled research environments (i.e., RCTs) is that it tends to produce intensive interventions that are not sustainable after research funds are depleted. Intensive interventions may work against the intervention “being effective in more complex, less advantageous settings with less motivated patients and overworked staff” (Glasgow et al. 1999, p. 1322). Further, moving sequentially from efficacy trials to effectiveness evaluations and only then to dissemination and implementation research defers consideration of crucial factors that influence external validity, clinical utility, and the intervention’s reach, adoption, implementation, and sustainability in routine settings until too late in the development process (Glasgow et al. 1999). The disconnect between the stages of development of training products and the needs of consumers is a ubiquitous problem, common to the diffusion of any innovation in any field of endeavor, from farming practices to consumer products, and is not unique to the DI of scientific products. One solution is to incorporate rigorous testing by stakeholders and consumers earlier in the development sequence, ensuring that the end result maximizes both internal and external validity. See Table 3 for research questions applicable to this recommendation.

Table 3 Incorporate research methods from other disciplines that embrace the principles of formative evaluation and iterative review: Research questions

Consider New Product Development Models

Social processes are essential to the DI of EBP in community settings. An understanding of stakeholder needs and thorough integration of their feedback is crucial from initial concept all the way through DI and sustainability. To accommodate the social nature of such efforts, we suggest that researchers consider contemporary models of managing product innovation, such as New Product Development (NPD; Kahn 2004), New Products Management (NPM; Crawford and Di Benedetto 2010), and “agile” software development (e.g., the Rational Unified Process; Kroll and Kruchten 2003; von Hippel 2005), during the development of training products. These models are widely used in business and engineering to guide the development of diverse consumer products including groceries, electronics, services (Shostack 1982, 1984), and user experiences in health care (Cottam and Leadbeater 2004).

For example, the New Product Development model outlines five phases in the new products process: (a) opportunity identification and selection, (b) concept generation, (c) concept evaluation, (d) development, and (e) launch (Crawford and Di Benedetto 2010). New products are developed with rapid prototyping, in which inexpensive mock-ups of ideas are quickly developed and tested through multiple iterations. At critical junctures in the product’s development, rigorous criteria regarding performance quality, usability, consumer satisfaction, and adequate plans to ensure adoption are specified. Experimental tests and review procedures form decision gates that ensure a new product meets the specified criteria before moving forward.

Many aspects of these models map directly onto our tasks in developing training strategies. In particular, we think that such bidirectional development processes, which treat product users (i.e., trainees in an EBP) as active and engaged participants rather than as passive recipients of information, are consistent with many models of implementation science (e.g., Damschroder et al. 2009; Greenhalgh et al. 2004) and should be integrated into training research. Bidirectional collaboration and engagement allows for the emergence of ‘champions of the intervention,’ an important predictor of implementation success (Forman et al. 2009), and allows key opinion leaders in the community to exert their social influence, which is likely much more effective than researcher influence (Atkins et al. 2008). Such collaboration and relationship-building is best constructed on trust, equality, and continued commitment on the part of both the researcher and the agency (Chorpita and Mueller 2008). For example, an initiative launched by Division 12 of the APA surveyed clinicians to identify problems they have had in implementing an EBP for panic disorder, enabling researchers to attend to important clinician concerns (DeAngelis 2010).

Adopt Formative Evaluation

A practical implication of New Product Development models is that we can borrow tools that help us collect and integrate stakeholder feedback as standard procedure from the very beginning of developing and evaluating the efficacy of training interventions (Atkins et al. 2006). Our current stage model for moving EBP from research to community settings tends to overemphasize internal validity, inadvertently deferring feedback until too late in the development process, particularly when the transfer of EBP relies on social processes. Certain methodological approaches could ameliorate some of the tension between internal and external validity, such as practical clinical trials (e.g., March et al. 2005; Tunis et al. 2003) and hybrid efficacy-effectiveness models (Carroll and Rounsaville 2003; Curran et al. 2010). For example, Dimeff and colleagues (2009) propose a hybrid model that addresses both efficacy (e.g., randomization, assessment, intent-to-train) and effectiveness (e.g., naturalistic environment, limited exclusion criteria, a sample representative of community mental health clinicians).

The iterative process of new product design, which emphasizes user feedback at each stage of development, provides a complementary methodology for addressing threats to external validity. For example, formative evaluation, a rigorous assessment process designed to identify potential and actual influences on the progress and effectiveness of implementation efforts (Stetler et al. 2006), is a further research tool to augment hybrid efficacy-effectiveness models. Unlike summative evaluation designs (e.g., RCTs), which measure the impact of an intervention after delivery, formative evaluation allows researchers to evaluate training interventions during the design, piloting, and implementation stages. Recent efforts highlight the way in which formative evaluation can be used in training clinicians in DBT (Dimeff et al. 2009) and in CBT for substance abuse (Weingardt et al. 2009).

Formative evaluation can be used to address important contextual factors that influence clinician behavior change, factors to which training interventions have to date paid too little attention (Beidas and Kendall 2010). For example, in a study guided by the Theory of Planned Behavior (TPB; Ajzen 1988, 1991), formative evaluation was used to facilitate implementation of an evidence-based assessment instrument (Caspar 2007). TPB states that a person’s behavior is determined by his/her intentions to perform a given behavior and that these intentions are a function of: (a) attitudes toward the behavior, (b) subjective norms, and (c) perceived control over the behavior. A key experimental intervention to come out of TPB is the elicitation exercise, which gathers participants’ attitudes, subjective norms, and perceived control with regard to EBP (Caspar 2007). This type of exercise allows instructors to tailor training content and procedures to strengthen participant intentions to use EBP; in other words, formative evaluation is used to produce a training workshop that best fits participants’ needs. The promising results of using TPB to change practitioner behavior lead us to suggest that such elicitation studies become a standard formative evaluation feature in the design of training research. A comprehensive manual describing how to use elicitation studies for such purposes is available (Francis et al. 2004).

Use Best Practices in Instructional Systems Design

Across both higher education and post-graduate professional continuing education, instructional process is dictated by tradition rather than by data (Twigg 2001). Out of habit, we continue to conceptualize training and continuing professional education as a one-way broadcast from expert to trainee, primarily through didactic lecture, with only minimal feedback from instructors to learners regarding learning outcomes. Yet data repeatedly show that developing expertise requires not didactic lectures or knowledge tests but rather deliberate practice with feedback (Ericsson and Charness 1994). Deliberate practice entails the repetition of behaviors in a focused manner with instructor-guided corrective feedback until mastery levels of skill are demonstrated (McGaghie et al. 2009). For clinicians to become experts in a particular EBP, rather than achieve the minimal gains we tend to see, they must engage deliberately in target clinical behaviors, often and with feedback. Data from the instructional design literature suggest that prompting learners’ strategic knowledge (i.e., self-regulation) during learning can improve outcomes (Sitzmann, Bell et al. 2009). Similarly, increasing learner control (i.e., the degree to which an individual has control over instructional features during self-guided training) has been linked to both training satisfaction and improved learning (Orvis et al. 2009). Peer-led groups (Crossouard 2008) and peer-led coaching and feedback during behavioral rehearsal (Cross et al. 2007) are cost-effective and produce important benefits (Rourke and Anderson 2002). These findings and many others from the instructional design literature can be integrated into training research.

Despite an extensive scientific literature on effective instructional design, few DI scientists, treatment developers, or trainers have training in instructional design. Thus, few researchers in this area develop well-defined, performance-based objectives or design active learning experiences that use important instructional design principles such as deliberate practice with feedback. Consequently, we suggest that training research teams include instructional design experts as consultants. Solid instructional design has the potential to increase learner engagement, which in turn may result in increased knowledge and behavior change, as well as increased adoption and implementation of the intervention (Weingardt 2004).

Emphasize Social Transactional Nature of Knowledge Exchange

Our habitual, unexamined assumptions about how to provide training and implementation support are based more on a metaphor of knowledge as an exchangeable object than as an exchangeable practice. When knowledge is viewed as an object one can possess, something that can be explicitly rendered in words and numbers, documented, and acquired by others, one tends to use technologies that capture perceived knowledge in forms of documentation (e.g., a treatment manual). Within this framework, knowledge sharing activities may include the creation and submission of best-practices documents and other sources of information into a shared repository (Hansen and Avital 2005).

Alternatively, when knowledge is viewed as a practice to be shared and as tacit understanding arising from experience that defies “straightforward articulation or documentation,” then these practices are viewed as “best transferred between individuals through personal interactions” (Hansen and Avital 2005, p. 4). This suggests the use of systems that promote greater contact between knowledgeable individuals and remove barriers so that it is easier to offer one’s time and skills for face-to-face interaction and other forms of direct discussion (e.g., supervision). For example, to bring training to scale, one can create online learning communities such as the learning collaboratives hosted by the National Child Traumatic Stress Network (Markiewicz et al. 2006), which provide access to information on best practices by bringing together treatment experts, adoption experts, and front-line clinicians. Ultimately, the effectiveness of such methods is an empirical question.

Understanding the role of social processes in the DI of EBP is important. Studies of the effects of local opinion leaders (Atkins et al. 2008; Doumit et al. 2007) and ambassador programs (Rashiq et al. 2006) suggest that social processes matter in the natural diffusion of EBP following training. When knowledge is viewed as a practice, understanding how such social networks function, and the role of key opinion leaders and positive deviants (Bradley et al. 2009), becomes an important avenue to consider.

4. Think About How Technology Can Be Used to Take Training to Scale Throughout All Stages of a Training Research Project

How technology can be leveraged to support the widespread implementation of a training intervention should be considered at all stages of a training research project, as the intervention is being developed and tested; too often, scalability is addressed only after the training intervention has been designed and implemented. Given that in-person, instructor-led training has limited impact due to the number of clinicians it can reach, we propose that researchers consider using technology throughout all stages of training research, and we discuss the implications of using technology-based training as an adjunctive or stand-alone tool. See Table 4 for associated research questions.

Table 4 Think about how technology can be used to take training to scale throughout all stages of a training research project: Research questions

Why Technology?

Current efforts to provide training and consultation face challenges because talent and resources can be spread only so far; providing training and consultation to an increasing number of clinicians requires hiring additional trainers and supervisors. In contrast, technology-based methods of training and consultation can be designed to be scalable, so that the system can add data, tasks, or users in a straightforward and relatively cost-effective manner (Bondi 2000). Technology offers the promise of knowledge exchange among researchers, clinicians, clients, and policy makers at an unprecedented scale through models that help users readily adopt new practices via online interaction, without the need for face-to-face proximity (Bradach 2010). Technology-based tools can be used at any time from any location (Cucciare et al. 2008; Kulier et al. 2008; Weingardt et al. 2006), are more cost-effective than face-to-face instruction for training large numbers of clinicians (Cucciare et al. 2008), and can be provided free of charge (e.g., National Crime Victims Research and Treatment Center 2007; http://tfcbt.musc.edu/). While initial funding to develop technology-based training is necessary, little ongoing funding is needed to maintain such efforts. Additionally, a number of agencies provide funding mechanisms for this type of work (e.g., NIH Small Business Innovation Research, Substance Abuse and Mental Health Services Administration).

Technology-based training has been shown to be effective across diverse fields including medicine (Cook et al. 2008), human resources (Bray et al. 2009), police work (Donovant 2009), and substance abuse treatment (Weingardt et al. 2009). A particular strength of technology-based training is that it is well-suited for blended learning, the use of multiple learning methods (e.g., self- and instructor-guided) and procedures (e.g., active learning, deliberate practice) to best promote knowledge and skill acquisition (Cucciare et al. 2008).

Various technologies have been used to deliver training in EBP to clinicians (e.g., Web-based training, DVDs, CD-ROMs, video-, audio-, and telephone-conferencing; Weingardt 2004). Technology for training efforts includes self-guided study (e.g., reading written materials or multi-media on the web; Kendall and Khanna 2008; National Crime Victims Research and Treatment Center 2007; http://tfcbt.musc.edu/; Weingardt et al. 2009), peer-guided study (e.g., peer-mediated online discussion boards; see Moore and Marra 2005; Rourke and Anderson 2002), and instructor-guided study (Beidas et al. 2010). Instructor-guided study includes both synchronous learning, in which the instructor and all students are virtually present from different geographical locations (e.g., a WebEx conference; Weingardt et al. 2009), and asynchronous learning, in which one-to-one or one-to-many communication occurs at a delay via the computer (e.g., recorded MP3s of educational podcasts; discussion boards; Moore and Marra 2005; Weingardt 2004). An emerging area in technology-based training is the use of online platforms to support consultation (Beidas et al. 2010; Weingardt et al. 2009) and peer-to-peer interactions via online communities of practice (e.g., the EBP online community of practice hosted at www.practiceground.org).

Technology-based training is also well-suited for the modular approach discussed earlier. For example, technology allows researchers to use easy-to-use e-learning authoring tools (e.g., Adobe Captivate, Adobe Presenter, and Articulate). Because such tools make it easy to revise content, subject matter experts can strategically develop e-learning versions of training and then test them to ensure they produce the intended learning and clinical outcomes. This allows researchers to use formative evaluation to cycle rapidly through iterations before committing extensive resources. This kind of technology-based training also allows different training methods and procedures to be blended to create the strongest design for clinician training in EBP (i.e., blended learning; Cucciare et al. 2008). Developing such training modules using common technical standards (e.g., the Sharable Content Object Reference Model, SCORM; http://www.adlnet.gov/Pages/Default.aspx) allows researchers to share training materials across studies rather than reinventing the wheel each time.

Additionally, technology-based training provides exactly the tools needed to create and test low-intensity extensive interventions (see Bennett-Levy et al. 2010). Low-intensity extensive interventions are interventions that may be less efficacious but require fewer resources and reach larger numbers of people (i.e., they are scalable), and thus may have a larger overall impact (Glasgow et al. 1999). For example, relatively low-cost strategies such as multiple contacts over time with bite-sized e-learning modules delivered via podcast and/or webinars are consistent with well-supported learning strategies (spaced versus massed practice; Rakovshik and McManus 2010) and may have small but important impacts.

Technology-based training and implementation support can be both top-down and bottom-up. Thus, we suggest that researchers actively explore where technology-based methods can be used to increase access to training and consultation, and make the shift to prioritize such methods when possible.

Technology: An Adjunctive or Stand-alone Tool?

Technology can be utilized as an adjunct to traditional, effective training methodology (Beidas et al. 2010) or as a stand-alone tool (Weingardt et al. 2009), depending on delivery. Technology is merely the delivery method through which training is disseminated to clinicians; the principles that underlie effective training remain important. When used as an adjunct, technology can deliver knowledge effectively (Beidas et al. 2010) via online pre-recorded trainings, websites, and forums. This type of knowledge delivery (e.g., websites) may also be effective in influencing attitudes toward EBP. Consistent with a graded approach (Rakovshik and McManus 2010), clinicians might initially receive technology-based training followed by more intensive in-person training designed to augment specific competencies. For technology-based training to be used as a stand-alone tool, it must include deliberate practice, experiential learning, and continued consultation and implementation support, the hypothesized ingredients of effective training (Rakovshik and McManus 2010). All of these training components can be achieved via technology through virtual conferencing and e-learning formats such as online learning communities; whether stand-alone technology-based training is effective, however, is ultimately an empirical question that must be answered through further research. We believe this to be a very important question given that stand-alone delivery is the most scalable way to provide training in EBP.

The Behavioral Activation Demonstration Project

To illustrate how these recommendations can be implemented, we will describe the process we are using to plan a training project in behavioral activation (Hopko et al. 2003; Jacobson et al. 1996; Kanter et al. 2010), an EBP for individuals suffering from major depressive disorder.

In our example, activity scheduling, one component of behavioral activation, was chosen rather than an entire treatment protocol because it is the core element common to the two primary models of behavioral activation treatment (Kanter et al. 2010; in accordance with recommendation 2: Critically examine the implicit assumptions underlying the stage model of psychotherapy development). First, data are gathered on training as usual to provide a baseline measure of training’s impact on learning and clinical outcomes. The training sessions are recorded, and when the evaluation is complete, the training package and evaluation data go to both the instructional design team and the DI team. Professional instructional designers then work with the behavioral activation experts to specify clear performance-based objectives (e.g., “the clinician orients a new patient to the two-circle model in a manner that increases patient motivation to engage in therapy”). Information from instructional design strategies is then used to design learning activities that influence the desired outcomes and increase learner satisfaction (Incorporate research methods from other disciplines).

Once the performance-based training is designed, it is translated into an e-learning format (Think about how technology can be used to take training to scale) to allow for scalability. Rapid authoring tools allow treatment experts and instructional designers to inexpensively design and test iterations of learning activities to ensure performance objectives for learning, clinical outcomes and learner satisfaction are met. The initial concepts move to product planning, which entails rapidly prototyping and testing each iteration with stakeholders to check whether a prototype fits the intended use within the constraints faced by users. Additionally, product planning ensures that the prototype stays true to the science that bears on the product, is technically feasible, and generates high user satisfaction (Incorporate research methods from other disciplines). In tandem with the instructional design review, a DI team audits the disseminability of the training-as-usual approach using CFIR (Align with the larger DI literature).

Going through the checklist provided by the CFIR framework serves to augment hypotheses, acknowledge contextual factors, and address measurement strategies, ensuring that data about diffusion, application, and utilization, in addition to adherence and competence, are collected (Ottoson 1997; Critically examine the implicit assumptions underlying the stage model of psychotherapy development). With regard to the behavioral activation e-learning, we consider intervention characteristics (e.g., core versus peripheral elements), the outer setting (e.g., political context), the inner setting (e.g., organizational context), characteristics of the individuals involved (e.g., client presenting problem/treatment match), and the process of implementation (e.g., reflecting on the e-learning format). External expert feedback is used to evaluate our assessment, and then a deliberate DI intervention, carefully phrased in terms of common terminology and constructs, is drafted (Align with the larger DI literature). During this process, the research team also drafts the criteria that will be used at each subsequent decision gate to decide whether we should proceed to the next phase of development (Incorporate research methods from other disciplines).

Overlapping with the work of the instructional design and DI teams, the trainer runs a second iteration of the training, this time modeled on Caspar’s (2007) work, in which training begins with elicitation activities (Incorporate research methods from other disciplines). The data on the learning and clinical outcomes of this version of training are then compared to those from training as usual. Additionally, the data gathered from clinicians during the elicitation activities become the first round of formative feedback on barriers and facilitators in implementing the EBP. The work from all three sources (instructional design, the dissemination audit, and formative evaluation from the elicitation study intervention) is integrated to create a new e-learning package, which is then itself tested and iterated until it meets criteria for impact on learning and clinical outcomes. When the training package passes this test, it proceeds in two directions.

First, training researchers can take the optimized e-learning package and experiment with adding different instructional strategies and delivery methods. Using research methods that balance internal and external validity (Curran et al. 2010), we plan to test adding deliberate practice with feedback via web camera (Follette and Callaghan 1995; Scherl and Haley 2000), prompting self-regulation strategies (Sitzmann et al. 2009), and creating and evaluating self-guided study manuals that can be used by local clinical trainers (Incorporate research methods from other disciplines). Second, the e-learning package is used to compare different dissemination strategies. For example, we are interested in the effects of intensive training (e.g., a model that blends the use of e-learning with weekly expert-led web-conference implementation support) versus extensive training (e.g., a model that packages the e-learning module as a subscription podcast series with suggestions for peer-facilitated spaced practice that cycles several times). Data gathered about implementation and clinical outcomes can then guide whether an extensive training strategy generates enough impact on clinical outcomes (and does no harm) to merit its further use, or whether the data suggest that intensive training is needed (Critically examine the implicit assumptions underlying the stage model of psychotherapy development). Using qualitative and quantitative strategies, we plan to examine positive deviants (i.e., those who implement the principles successfully; Bradley et al. 2009) and opinion leaders (Atkins et al. 2008) to understand how they successfully use the training and to integrate the lessons learned into the dissemination strategy.

We are in the midst of trialing these processes ourselves; their merit remains to be determined. We therefore offer this example only as an illustration of how one team is beginning to apply these research suggestions, and we look forward to the iterative process as other researchers develop alternative creative and pragmatic solutions.

Conclusion

The DI of EBP for mental health and substance abuse depends upon a training research agenda that uses methods capable of taking our collective efforts to scale. We suggest four recommendations that training researchers can consider to increase the impact of their work: (a) aligning with the larger DI literature to consider contextual variables and clearly define terminology, (b) critically examining the implicit assumptions underlying the stage model of psychotherapy development, (c) incorporating research methods from other disciplines that embrace the principles of formative evaluation and iterative review, and (d) thinking about how technology can be used to take training to scale throughout all stages of a training research project. Our hope is that researchers will actively consider how they might apply these suggestions to their own programs of research, particularly when exploring areas that may be new to them (e.g., formative evaluation).