For decades, intervention research in special education, school psychology, school social work, and clinical child psychology has focused on testing the efficacy of interventions designed to assist children who struggle with social, emotional, and behavioral challenges (Silverman & Hinshaw, 2008; Thomas & Grimes, 2008). As a result, we can now identify evidence-based practices (EBPs) to ameliorate many childhood problems, including disruptive behavior (Owens, Storer, & Girio, 2011), anxiety (Fox, Herzig-Anderson, Colognori, Stewart, & Masia Warner, 2014), depression (Patel, Stark, Metz, & Banneyer, 2014), bullying and aggression (Leff, Waasdorp, Waanders, & Paskewich, 2014), and academic problems (Swanson & Hoskyn, 1998). However, much of this evidence is based on short-term assessments and on interventions that are delivered in analog educational contexts by highly trained and supervised research staff. Significantly less is known about the implementation of EBPs under typical school conditions and delivered by school-based professionals, or about factors that influence the uptake and sustainment of EBPs following the removal of external resources.

Given the state of the science, it is difficult for school professionals to know which EBPs are most compatible with their context, how to train staff to best implement an EBP with high quality, and how to sustain the intervention over time. Indeed, these gaps may partially explain the limited use of EBPs by school mental health (SMH) providers (Evans, Koch, Brady, Meszaros, & Sadler, 2013; Kelly et al., 2010) as well as the wide variability in the dose and/or integrity with which the EBPs are implemented (Durlak & DuPre, 2008). To address these types of concerns, the field of implementation science (IS) has emerged. IS is defined as “the scientific study of methods to promote the systematic uptake of research findings and other [EBPs] into routine practice and, hence, to improve the quality and effectiveness of health services” (Eccles & Mittman, 2006, p. 1). Several conceptual models for IS research and practice have been articulated (e.g., Cook & Odom, 2013; Damschroder et al., 2009; Eccles & Mittman, 2006; Han & Weiss, 2005; Proctor et al., 2011), and some groups have begun to apply and evaluate components of these models in schools (e.g., Langley, Nadeem, Kataoka, Stein, & Jaycox, 2010; Lyon, Charlesworth-Attie, Vander Stoep, & McCauley, 2011). Recently, the American Psychological Association School Psychology Division’s Working Group on Translating Science to Practice (Forman et al., 2013) provided a valuable explication of the fundamentals of IS and their application to mental health services in schools. The Working Group offered a nine-component research agenda intended to better equip researchers, trainers, and practitioners “with both declarative knowledge [what (EBPs) to use], as well as procedural knowledge [how to implement (EBPs)] in school contexts” (p. 94).

The purpose of the current paper is to support these goals by identifying compelling and timely research questions in three areas of IS: (a) professional development (PD) and coaching for school professionals regarding EBPs; (b) intervention integrity; and (c) intervention sustainment under typical school conditions. These areas were selected from the IS frameworks (e.g., see Durlak, 2013 for review) because they were viewed as essential to the ultimate goal of SMH, which is to provide high-quality school-based services to maximize student success. To achieve this goal, we must identify the program or service to be introduced in the school(s), train providers to implement it with high quality, and sustain high-quality delivery over time. Although this seems straightforward, there are many complex issues within PD, implementation, and sustainment processes that must be better understood if we are to achieve this goal.

Schools are a unique context in which to conduct IS research, particularly with regard to their leadership, the diverse types of professionals working in them, and the unique parameters associated with the school calendar. We begin with a brief overview of these contextual issues that affect IS in schools. Then, we articulate research questions that we believe are central to the next generation of IS research related to PD, intervention integrity, and sustainment of EBPs in schools and possible methods to address these questions.

Contextual Issues

School Organizational Factors

In developing broad conceptual models of IS, researchers have identified organizational factors that affect implementation (Aarons, Hurlburt, & Horwitz, 2011; Damschroder et al., 2009). Some factors are within the inner context in which implementation occurs (e.g., school building or district), whereas others lie within the broader external economic, political, and social contexts in which an organization exists (e.g., federal educational policies, local cultural norms). Principal leadership is an inner-setting characteristic that is particularly relevant to SMH, as it has been found to influence (either directly or indirectly) school climate and student achievement (Hallinger & Heck, 1996), as well as implementation and outcomes of prevention and intervention programs (Kam, Greenberg, & Walls, 2003; Langley et al., 2010). Although other factors, such as financial, personnel, and technological resources, influence SMH implementation, careful analysis of principal leadership prior to implementation may be critically important. It may determine the way in which the inner- and outer-setting challenges are perceived and managed and the extent to which SMH efforts related to PD, integrity monitoring, and sustainment are valued and promoted. Similarly, education policies, such as state-level mandates for PD or federal special education law, are examples of outer-context characteristics that represent the contexts into which SMH interventions must be integrated.

In addition to organizational factors, the organization interacts with characteristics of both the intervention and its implementer (Forman & Barakat, 2011). These interactions are referred to as intervention-setting fit, compatibility, or appropriateness (Proctor et al., 2011; Rogers, 2003). In schools, appropriateness refers to the intersection of a new program with the values, practices, and structures at all organizational levels, each of which may influence PD processes, intervention integrity, and sustainment (Lyon et al., in press).

Diversity of School-Based Professionals

The diversity of professionals who may be involved in SMH implementation initiatives is another factor that makes schools a unique context for IS. Although SMH efforts may be led by school counselors, school social workers, or school psychologists, teachers and other non-mental health professionals (e.g., principals, resource officers, nurses) are additional resources that can be leveraged for successful SMH implementation and sustainment. During the pre-implementation phase, all such professionals can provide information about relevant inner-setting characteristics (e.g., influential school policies, key members of social networks). Teachers also may provide perceptions about intervention adoption and appropriateness and document integrity or student outcomes that can help shape intervention adaptation. Non-mental health professionals also may serve as essential referral sources and gatekeepers for access to SMH services (Williams, Horvath, Wei, Van Dorn, & Jonson-Reid, 2007) and program sustainment. Further, many universal programs (e.g., Sugai & Horner, 2006) and indicated behavioral interventions (e.g., Owens et al., 2012) can be successfully implemented by non-mental health school professionals. These successes, coupled with the often limited number of SMH professionals per building (Lyon, McCauley, & Vander Stoep, 2011), highlight the importance of using a “task-shifting” approach, whereby some healthcare duties are redistributed among professionals within a system (Patel, 2009). This type of workforce realignment has the potential to effectively reduce the workload of SMH professionals and reach more children in need, particularly in schools that do not have access to onsite mental health providers. Yet, it also creates challenges to consider when studying PD processes, intervention integrity, and sustainment (e.g., training professionals with diverse backgrounds; how to leverage diverse resources for sustainment).

School Calendar

Unlike social service agencies that operate on a 12-month calendar, most schools operate on a 9-month academic calendar. This calendar includes distinctive bursts of activity (e.g., state testing, grading periods, holiday and summer breaks) that impose unique demands on students and teachers and impact every aspect of implementation. For example, administrators are challenged to identify time for managing both curricular content and mental health issues. If coaching is a critical component of PD (Han & Weiss, 2005), how do school professionals fit it into their schedule in a meaningful way? Similarly, as we ask teachers and other non-mental health professionals to implement EBPs, document integrity, or assess student outcomes, we must consider the impact of competing demands on their ability to complete such tasks.

In summary, conceptual models of IS indicate that contextual factors must be considered in any implementation effort; several of these factors are unique to IS in schools, including organizational leadership, state and national education policy, the diverse array of professionals working in schools, and the unique calendar of school events. Next, we discuss timely research questions in the areas of PD, integrity, and sustainment. We argue that, in achieving the ultimate goal of SMH, PD is an essential feature of the pre-implementation phase, integrity is an essential feature of the implementation phase, and sustainment issues are critical during the maintenance and enhancement phases. However, PD, integrity, and sustainment are interconnected, each impacting the others, and all are relevant to every implementation phase.

Professional Development

Definition and Rationale

PD is the primary vehicle through which implementers learn the rationale for an intervention, its core components, the mechanisms through which components impact student outcomes, and the skills necessary to implement the components with high integrity. PD can involve “session-oriented” activities, such as workshops that include didactic lectures; exposure to manuals and implementation guides; and active learning activities, such as discussions, observations of models, and participation in role plays. These activities provide exposure to the basic factual knowledge and skills needed to initiate implementation of an EBP. However, this knowledge and initial skill acquisition do not necessarily translate into behavioral proficiency (Beidas & Kendall, 2010). Indeed, there is now substantial empirical evidence that session-oriented PD activities (often referred to as “train-and-hope” models) are insufficient to produce change in implementers’ behavior or in student outcomes (Blank, de las Alas, & Smith, 2008; Farmer et al., 2008; Herschell, Kolko, Baumann, & Davis, 2010). Instead, effective PD incorporates individually focused and ongoing coaching and consultation (henceforth referred to as coaching), constructive performance feedback, encouragement of self-reflection on one’s own performance, and access to problem-solving and supports to refine and develop mastery of new skills. Below, we discuss the state of the science on coaching in SMH and highlight burgeoning research questions for the next generation of research in this IS area.

State of the Science

Given the evidence described above, there are many PD-related issues that are ripe for study. We focus on four issues that, if addressed, would significantly impact implementation of SMH services: (a) coaching dose; (b) coaching strategies; (c) coaching models; and (d) the role of implementer motivation and perceptions. First, it is important to consider the dose of coaching required for providers to become competent in implementing an EBP. That is, how long and with what frequency and intensity does coaching need to continue to achieve sufficient implementation quality? No research has systematically evaluated coaching dose to identify minimally sufficient standards. This is important given that ongoing coaching is one of the most costly implementation-related processes (Olmstead, Carroll, Canning-Ball, & Martino, 2011) and that its implementation has many challenges in schools (e.g., garnering principal support, finding time within the school calendar, coaching professionals from diverse backgrounds).

Second, there is a need for research on which coaching strategies are most effective (Lyon, Stirman, Kerns, & Bruns, 2011). During initial training, modeling and behavioral rehearsal paired with performance feedback can increase provider uptake and intervention integrity (Han & Weiss, 2005). Once implementation has begun, a multifaceted approach to ongoing performance feedback is likely warranted to achieve and sustain high integrity (Joyce & Showers, 2002; Schouten, Hulscher, Everdingen, Huijsman, & Grol, 2008). Such an approach is tailored to the provider’s needs and addresses barriers to implementation. Although this evidence begins to shed light on effective coaching practices, additional work is needed to understand the relative merits of various ongoing coaching strategies in the context of schools. For example, there may be unique benefits to audit and feedback procedures (Ivers et al., 2012) that could be borrowed from the healthcare field and applied to SMH. Similarly, there may be added value in training implementers to self-monitor integrity and compare such ratings to those made by a coach to determine the most effective dose and type of coaching (Masia Warner et al., 2013).

A third issue for future PD research is the ongoing coaching model, more specifically, whether coaching is provided by an external “expert” coach (e.g., program developer; university researcher) or an internal coach who has been trained by the program developer (e.g., via a train-the-trainer approach). Many EBP implementation models have in-house school staff who assume ongoing coaching and quality assurance roles as a way to support program sustainment (e.g., Olweus & Limber, 2010). However, there is some evidence of a positive association between the expertise of an external consultant (someone who has deep and principled knowledge about the EBP and skill in applying it in a variety of situations) and more skilled implementation by trainees and improved youth outcomes (Schoenwald, Sheidow, & Letourneau, 2004). Further, findings from the only study known to us on the relative cost-effectiveness of a train-the-trainer model versus an external “expert” coach suggest that expert involvement may be more effective and economical for training a large number of staff over time (Olmstead et al., 2011). These findings have implications for the model through which ongoing coaching occurs and whether fading and eventually removing outside consultants is desirable.

Fourth, it may be important to consider the role of implementer motivation and perceptions (e.g., perceptions about the therapeutic techniques or coaching process) in mediating the PD processes described above. There is a large theoretical literature on behavior change (e.g., theory of reasoned action, theory of planned behavior, social-cognitive theory) that is critical to understanding individual-level drivers of provider skill uptake (e.g., Ajzen, 1991; Bandura, 1977). There also is emerging evidence that incorporating motivational interviewing techniques into coaching may enhance teacher motivation and confidence related to implementation of classroom behavior management interventions (Frey et al., 2011; Reinke et al., 2012). Lastly, marketing research techniques have been used to group education professionals on the basis of the program and implementation attributes (e.g., cost, effectiveness, choice) by which they are most influenced when deciding to implement an EBP (Cunningham et al., 2009). Taken together, these data suggest that future research should examine the extent to which implementer motivation and perceptions are malleable factors that can be targeted during coaching to enhance intervention adoption, integrity, and student outcomes.

Given the above-described state of the science and the complex school context in which PD is embedded, we argue that future research should focus on enhancing understanding of the role of PD dose, strategies, and models, as well as the role of implementer motivation and perceptions. Below, we articulate research questions and possible methods for examining them. Given the forthcoming section on sustainment, here we focus on the immediate dependent variables of initial intervention integrity and student outcomes.

Critical Research Questions

Question 1:

How much coaching (dose) is needed following initial PD to enable providers to deliver an EBP or its components with high integrity and produce desired student outcomes?

Question 2:

What coaching strategies (or combinations of strategies) are most likely to enhance provider knowledge, skills, and beliefs, intervention integrity, and student outcomes?

Question 3:

What models of coaching (external vs. internal coach) are most likely to be adopted and best promote the skills taught within PD? Can internal coaches be trained to be as effective as coaches associated with an EBP?

Question 4:

To what extent are implementer motivation and perceptions malleable, and does change in these factors facilitate implementer skill development?

Possible Research Methods

Addressing these questions will require manipulation of coaching dose (length or intensity after initial “session-oriented” activities), coaching strategies (e.g., performance feedback with or without motivational interviewing; coaching on component skills or an entire EBP protocol), or coaching model (varying levels of external versus internal coaches). Group-level analyses could examine the relationships between coaching conditions and (a) intervention integrity, (b) change in provider knowledge, skills, and beliefs, and (c) student outcomes. Adaptive designs, in which specified decision rules govern how and when the coaching dose or strategy should change based on an individual’s response, could be an innovative method for identifying optimal sequencing of coaching doses and the optimal timing of transitions from one strategy to the next (Murphy, Lynch, Oslin, McKay, & TenHave, 2007). Additionally, mediation models could be tested to better understand the mechanisms through which a given manipulation is occurring. For example, by obtaining baseline information on implementer motivation (e.g., the importance of the EBP, level of investment in implementing it) and perceptions (e.g., level of confidence in implementing the EBP; its perceived effectiveness) and monitoring change in them and in knowledge and skills over the PD and coaching period, we could begin to model the mechanisms through which the coaching dose, strategy, or model may be producing its effects. Through this process, researchers could also assess reactions to and economic costs of each condition, which would enhance an understanding of the factors that produce optimal outcomes and the costs associated with achieving those results.
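
As a concrete illustration of the mediation models mentioned above, the following sketch simulates data in which coaching dose influences intervention integrity through change in implementer motivation and estimates the indirect effect with a bootstrap. It is a minimal, hypothetical example: the variable names, effect sizes, and the use of Python with statsmodels are our own assumptions rather than procedures drawn from any cited study.

```python
# Hypothetical sketch: coaching dose -> change in motivation -> intervention integrity.
# Simulated data; a product-of-coefficients estimate of the indirect effect with a
# percentile bootstrap confidence interval.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200

coaching_dose = rng.integers(0, 2, n)                                 # 0 = low dose, 1 = high dose
motivation_change = 0.5 * coaching_dose + rng.normal(0, 1, n)         # path a (assumed effect)
integrity = 0.4 * motivation_change + 0.2 * coaching_dose + rng.normal(0, 1, n)  # paths b and c'

def indirect_effect(dose, mediator, outcome):
    """Product-of-coefficients (a * b) estimate of the mediated effect."""
    a = sm.OLS(mediator, sm.add_constant(dose)).fit().params[1]
    X = sm.add_constant(np.column_stack([dose, mediator]))
    b = sm.OLS(outcome, X).fit().params[2]
    return a * b

# Bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(coaching_dose[idx], motivation_change[idx], integrity[idx]))
point = indirect_effect(coaching_dose, motivation_change, integrity)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```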

Intervention Integrity

Definition and Rationale

Intervention integrity (also referred to as fidelity) is a complex construct, with several unique but related dimensions (e.g., adherence, competence, differentiation, participant responsiveness; Southam-Gerow & McLeod, 2013). Adherence and competence are consistently included in models of integrity (Dane & Schneider, 1998; Dusenbury, Brannigan, Falco, & Hansen, 2003; Perepletchikova, 2009). Adherence is defined as the extent to which procedures are implemented as intended. Competence is defined as the quality with which procedures are implemented and whether they are implemented with flexibility and sensitivity to the client. We argue that a research agenda is warranted that facilitates an understanding of the multiple dimensions of integrity, situated in the SMH context with its diverse professionals, and their nuanced relationships to outcomes. Among the many possible issues in this area, we prioritize three and pose timely questions that warrant consideration in IS in SMH.

State of the Science

Historically, integrity research has almost exclusively focused on examining adherence (see Schulte, Easton, & Parker, 2009 for review) and has documented a meaningful positive relationship between adherence and child outcomes for a wide variety of EBPs (Durlak & DuPre, 2008). These findings suggest that low adherence compromises positive student outcomes, leading to the assumption that more integrity is better (DuPaul, 2009). However, this assumption has yet to be tested empirically. Because efforts to sustain high integrity require substantial resources (e.g., support from the principal, time from school professionals, an infrastructure for acquiring and analyzing data), it is prudent to identify how much is “good enough.” There may be a threshold of integrity that produces the desired student outcomes, after which there are diminishing returns on further investments in achieving high integrity. Understanding this “good enough” threshold could provide critical guidance for resource expenditure in SMH.

Although the positive relationship between adherence and child outcomes is reasonably well documented, a linear relationship has not always been found. For example, data from the psychotherapy literature (Barber et al., 2006) indicate that the relationship may be curvilinear, i.e., rigid adherence at the expense of quality implementation or clinician judgment about prudent adaptations for the individual or local context may preclude positive outcomes. Further, the relationship between integrity and outcomes may be more complicated once the multiple dimensions of integrity (e.g., participant responsiveness; Chu & Kendall, 2004) and other related issues (e.g., the alliance between the implementer and the recipient; Barber et al., 2006) are taken into account. Together, these data suggest that different dimensions (e.g., adherence, competence, participant responsiveness; Kutash, Cross, Madias, Duchnowski, & Green, 2012) may be differentially related to or predictive of positive student outcomes. Because of its many dimensions and the multiple perspectives involved (implementer, supervisor, objective observer, service recipient, caregiver), the next wave of integrity research should attend equally to the assessment of each dimension as well as its incremental utility. The many measurement issues involved are beyond the scope of this article and are cogently articulated in Fabiano et al. (2013). Nonetheless, we contend that IS research should seek to illuminate the nuanced relationship between each dimension and outcomes.

A third issue is highlighted by studies indicating that adherence is variable (e.g., Owens, Murphy, Richerson, Girio, & Himawan, 2008) and declines rapidly in the absence of ongoing coaching or accountability for teachers as implementers (e.g., Noell, Witt, Gilbertson, Ranier, & Freeland, 1997) or SMH professionals as implementers (e.g., Masia Warner et al., 2013). Given that this variability and decline are documented under monitored research conditions, these rates likely underrepresent the true variability and decline that occurs in everyday practice. Thus, researchers should examine PD strategies and accountability structures or incentive programs that may mitigate this commonly observed threat to positive student outcomes.

Critical Research Questions

Question 1:

What level of integrity (adherence and competence) is good enough to produce the intended student outcomes?

Question 2:

Are the multiple dimensions of integrity differentially predictive of student outcomes?

Question 3:

What accountability structures (e.g., administration or peer networks) and/or incentive programs (used in combination with PD) reduce variability in integrity?

Possible Research Methods

To address Questions 1 and 2, members of a community-research partnership could select an EBP or its core components and assess integrity and student outcomes under conditions that allow for variations in integrity. The assessment period should be long enough to produce change in the outcomes of interest. Multiple time points could be selected, and at each time point, the relationship between integrity dose and student outcome could be assessed using probit regression techniques, regressing dichotomous improvement in the outcome measure on the level of adherence achieved during that time frame. Such techniques have recently been applied to answer questions about therapeutic dose in SMH interventions (Evans, Schultz, & DeMars, 2013). They also could identify the percentage of students who achieve change at varying levels of integrity and the level of integrity that may cease to produce incremental benefits given the investments. Further, if multiple dimensions of integrity were assessed, the analyses could help to determine which dimensions best promote student outcomes.
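
The sketch below illustrates, with simulated data, the probit approach described above: dichotomous student improvement is regressed on the level of adherence achieved during a time frame, and predicted probabilities are computed at several adherence levels to suggest where a “good enough” threshold might lie. All data, variable names, and effect sizes are hypothetical.

```python
# Hypothetical probit sketch: P(student improvement) as a function of adherence level.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 300
adherence = rng.uniform(0, 100, n)                       # percent of intervention steps implemented
latent = -1.5 + 0.03 * adherence + rng.normal(0, 1, n)   # assumed latent improvement propensity
improved = (latent > 0).astype(int)                      # dichotomous improvement indicator

model = sm.Probit(improved, sm.add_constant(adherence)).fit(disp=False)
print(model.summary())

# Predicted probability of improvement at selected adherence levels; comparing these
# could help locate the point beyond which additional adherence yields little gain.
for level in (40, 60, 80, 100):
    p = norm.cdf(model.params[0] + model.params[1] * level)
    print(f"adherence = {level:>3}%: P(improvement) = {p:.2f}")
```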

For Question 3, researchers could manipulate who conducts monitoring activities (administrator, peer, self) and examine the impact on the dimensions of integrity and their feasibility and reliability. Researchers also could manipulate facets of the accountability systems (e.g., rewards versus consequences for adherence, quality, or outcomes) to determine their relative impact on the dimensions of integrity and educator response to each type of approach. Because such systems are not commonly used in US schools to support quality implementation, there is much to learn quantitatively and qualitatively about accountability structures and incentive programs to facilitate intervention integrity.

Sustainment

Definition and Rationale

Thus far, we have discussed the critical role of PD in achieving intervention integrity and the issues involved in measuring and understanding how to promote integrity to achieve desired student outcomes. However, even when effective PD is provided, intervention integrity is achieved, and improvements in outcomes are well documented, those improvements may be transitory. Given the investments necessary to achieve initial implementation and desired outcomes, there is increasing interest in whether the intervention endures. Sustainment is the maintenance of EBP components and activities in the absence of or following the conclusion of research support (Lyon, Frazier, Mehta, Atkins, & Weisbach, 2011; Scheirer & Dearing, 2011; Schell et al., 2013), which can only be achieved following successful implementation (Fixsen, Blase, Duda, Naoom, & Van Dyke, 2010). It is equally important to consider if and how interventions change as implementation evolves, as well as whether adaptations are appropriate. Although a concern for adherence cautions an implementer to hold fast to the intervention protocol, adaptations (defined as planned modifications to the design or delivery of an intervention) may be associated with sustainment and positive student outcomes (Harn, Parisi, & Stoolmiller, 2013; Stirman, Miller, Toder, & Calloway, 2013), resulting in encouragement of “fidelity with flexibility” in implementation (Cook & Odom, 2013, p. 140). This concept is akin to “reciprocal adaptation” between the practitioner–implementer and the EBP: “the perceived need for the EBP to be adapted and the need for providers to adapt their perceptions and behaviors to accommodate the EBP” (Aarons & Palinkas, 2007, p. 416).

State of the Science

Much of what has been written about sustainment comes from conceptual and theoretical models (e.g., Fixsen et al., 2010; Han & Weiss, 2005; Scheirer & Dearing, 2011). Although qualitative and quantitative data that document the facilitators and barriers to sustainment are emerging, most studies are retrospective, rely on study-specific self-report measures, and either fail to include a definition of sustainment or use a study-specific definition of it (Stirman et al., 2012). As a result, the literature on sustainment has been described as “fragmented and underdeveloped” (Stirman et al., 2012, p. 13). In this context, systematic reviews of sustainment studies can be helpful in showcasing the variables and methods that have been examined to date, but are less useful in identifying factors that promote sustainment. Thus, additional work is needed in two broad areas: (a) developing a consensus definition of sustainment based on a data-driven process and (b) identifying and manipulating predictors and processes, including planned adaptation, that enhance the likelihood of sustainment. Both are discussed below.

The most comprehensive literature reviews of sustainment span the disciplines of public health, medicine, and mental health (e.g., Aarons et al., 2011; Scheirer, 2005; Stirman et al., 2012). A common finding across them is that a lack of sustainment or partial sustainment is more common than full sustainment; in fact, full sustainment of EBPs is achieved less than half the time. Although disappointing, these data encourage questions such as “sustain[ment] of what and to what end?” (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005, p. 42). It is important to consider what is meant by sustainment and what constitutes desired outcomes. Does sustainment mean continued implementation of program activities or continued benefits to program recipients, or both? Is there sustainment if planned or unplanned adaptations have occurred? How sustainment is defined and what subcomponents are measured will affect conclusions drawn. Thus, developing a data-based consensus definition would establish a solid foundation from which other critical questions could be systematically answered.

Specific to SMH, Han and Weiss (2005) reviewed the literature on factors associated with teacher implementation of EBPs and used these data to develop a framework for studying sustainment of SMH interventions. Their framework highlights important variables and processes to consider during the phases of pre-implementation, initial and full implementation, and sustainment, including implementer factors, program factors, training feedback, and implementation monitoring procedures. Forman, Olin, Hoagwood, Crowe, and Saka (2009) interviewed the developers of EBPs about barriers and facilitators to their implementation and sustainment in schools. Critical factors related to program sustainment included support from teachers and principals; financial resources; ongoing high-quality PD; program appropriateness for the school context; visible and relevant program outcomes; and methods to successfully address staff turnover.

With support from the US Department of Education’s Office of Special Education Programs (OSEP) since 2005, researchers within the Model Demonstration Coordination Center (MDCC) at SRI International have been studying implementation experiences, outcomes, and, more recently, sustainment. Data from interviews, focus groups, informal observations, and school district records were synthesized to identify facilitators and barriers to sustainment of model activities. Consistent with Rogers’ (2003) theory of innovation diffusion, the investigators found support for the importance of program effectiveness and program-setting compatibility (including local ownership) in fostering sustainment (Yu, Wagner, & Shaver, 2012), as well as a positive role in sustainment for EBPs that allow for adaptation to the intervention environment (Wagner, Gaylor, Fabricant, & Shaver, 2013). They also found that support from district administrators and systems for integrating progress monitoring and data-driven decision-making into local cultures were common among schools where programming was sustained. Surprisingly, intervention complexity was not reported to be as strongly related to sustainment as expected. With regard to barriers, primary factors identified included the absence of a program champion, budget cuts, staff turnover, and competing initiatives. Given this state of the science, we offer two primary questions below.

Critical Research Questions

Question 1:

What does it mean for an EBP to be sustained in SMH? What are the critical dimensions of sustainment and how are they best measured (definition)?

Question 2:

What are the essential ingredients (e.g., coaching, integrity, adaptation, student outcomes, contextual characteristics) for successful sustainment (predictors/processes)?

Possible Research Methods

To address questions in either of these domains, we make several recommendations. First, consistent with recently developed models for IS (e.g., Aarons et al., 2011), community-research partnerships should begin to think about sustainment during the pre-implementation phase (e.g., defining sustainment goals) and consider sustainment and long-term student outcomes in their decision-making throughout all phases of implementation. Second, we recommend that researchers partner with professionals in school districts where an EBP has been successfully implemented for a designated period (e.g., one academic year) and has achieved positive student outcomes. Such situations set the stage for obtaining data that would inform the definition of sustainment, as well as highlight the contextual issues related to the success achieved (e.g., principal support, implementation within the school calendar, characteristics of training) and the impact of planned and unplanned intervention adaptations. Third, we suggest that research funders require applicants to build on previously funded large-scale effectiveness or implementation projects to create new sustainment studies and advance the state of IS research. This could be particularly fruitful ground for collecting practice-based evidence (PBE), or culling evidence from the typical experiences of practitioners who are implementing interventions in real-world settings (Barkham, Hardy, & Mellor-Clark, 2010). For example, new grant mechanisms could enable community-research partnerships to follow school professionals and their students who have demonstrated a certain level of success during initial implementation. In this context, researchers could use a multi-method, multi-informant approach to systematically examine degrees and dimensions of sustainment in a manner that informs the refinement of the definition and future measurement decisions, and identifies explanatory factors. Alternatively, a small sample of in-depth prospective case studies could be conducted with partnerships that are beginning to plan for program implementation, following these partnerships and their programs prospectively through multiple years and phases of sustainment. Through the latter two processes, researchers could systematically obtain data to inform the development of frameworks that guide (a) the de-adoption of interventions that have minimal empirical support for effectiveness and (b) the integration of multiple EBPs across the continuum of services (e.g., promotion and prevention programs, screening, selected and targeted interventions). Further, these processes may help identify the core intervention components that should be sustained “as is,” thereby informing which adaptations to an EBP can be undertaken without jeopardizing its efficacy (Backer, 2001; Lee, Altschul, & Mowbray, 2008).

With regard to Question 2, all of the questions raised in the sections on PD and integrity could also be asked with program sustainment and long-term student outcomes as the dependent variables (rather than initial implementation and initial student outcomes). Indeed, the answers and conclusions drawn from the previously articulated research questions are only informative to the extent that the positive outcomes achieved (in integrity or student functioning) can be sustained. Thus, we could ask the following questions:

  • What types of ongoing PD are most likely to lead to sustained intervention integrity and continued student success?

  • What level and types of initial integrity are most predictive of sustained implementation and positive student outcomes?

  • What types of organizational infrastructure and accountability systems are needed to sustain and enhance the initial positive outcomes achieved?

An innovative method for exploring some of these questions is the sequential multiple assignment randomized trial (SMART; see Murphy et al., 2007 for discussion). In a SMART design, implementers first could be randomized to one of two doses or models of coaching and then randomized again based on their response or nonresponse to the coaching, as identified through their intervention integrity data. Researchers could then monitor the impact of the various sequences on sustainment of integrity and student outcomes. This approach may help to identify the thresholds of PD and integrity that are “good enough” to produce desired outcomes. Further, by examining factors that predict more immediate versus longer-term outcomes, we may identify those that are shared across these two time frames, as well as those that are differentially predictive. For example, perhaps the highest level of integrity is most predictive of initial student outcomes but does not facilitate longer-term sustainment because of low feasibility and implementer burnout. In contrast, a lower level of integrity that is “good enough” may enhance perceptions of feasibility and increase the likelihood of longer-term sustainment. Clearly, there are many questions to be asked. Thus, we recommend that sustainment become a vanguard issue, heeded at all phases of decision-making.
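
To make the SMART logic concrete, the simplified simulation below randomizes implementers to a first-stage coaching model, classifies them as responders or nonresponders on the basis of an assumed adherence threshold, and re-randomizes nonresponders to an intensified second-stage strategy. The strategy names, threshold, and effects are invented for illustration and are not drawn from the cited studies.

```python
# Hypothetical SMART sketch: stage-1 randomization, response assessment via adherence,
# re-randomization of nonresponders, and a summary of sustained adherence by sequence.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(7)
n_implementers = 60
RESPONSE_THRESHOLD = 0.70   # assumed "good enough" adherence after stage 1

# Stage 1: randomize to internal vs. external coach and observe adherence
stage1_arm = rng.choice(["internal_coach", "external_coach"], size=n_implementers)
stage1_adherence = np.where(stage1_arm == "external_coach",
                            rng.normal(0.75, 0.10, n_implementers),
                            rng.normal(0.65, 0.10, n_implementers))

records = []
for arm, adherence in zip(stage1_arm, stage1_adherence):
    if adherence >= RESPONSE_THRESHOLD:
        # Responders: continue the same coaching model at a maintenance dose
        stage2_arm = arm + "_maintenance"
        final = adherence + rng.normal(0.02, 0.05)
    else:
        # Nonresponders: re-randomize to an intensified second-stage strategy
        stage2_arm = rng.choice(["add_performance_feedback", "add_motivational_interviewing"])
        boost = 0.10 if stage2_arm == "add_performance_feedback" else 0.07
        final = adherence + boost + rng.normal(0, 0.05)
    records.append((arm, stage2_arm, final))

# Summarize sustained adherence by embedded sequence (the quantity a SMART compares)
by_sequence = defaultdict(list)
for arm, stage2_arm, final in records:
    by_sequence[(arm, stage2_arm)].append(final)
for seq, values in sorted(by_sequence.items()):
    print(f"{seq[0]:>15} -> {seq[1]:<30} mean sustained adherence = {np.mean(values):.2f}")
```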

Summary and Conclusions for an Integrated Program of IS Research in SMH

In summary, there are several conceptual frameworks for IS research in SMH. They highlight the phases of implementation, as well as variables and processes to consider within each of them. Through a systematic process, our team identified three priority areas of study to advance IS research in SMH. We argue that PD, intervention integrity, and sustainment are essential considerations in each implementation phase and that their interrelationships need to be understood if we are to provide and sustain high-quality school-based services to children who struggle with social, emotional, and behavioral challenges. Thus, in each area, we reviewed the state of the science, posed research questions whose answers would make a significant contribution to the field, and offered possible methods to address the questions. We conclude by acknowledging several guiding principles for the developing research agenda and by discussing themes cutting across the three areas.

Principles for the Developing Research Agenda

First, the next generation of IS research should move beyond identifying barriers and facilitators to implementation and sustainment and begin to systematically manipulate conditions to identify factors and strategies that help overcome barriers or leverage facilitators (Proctor, Powell, Baumann, Hamilton, & Santens, 2012). Second, we encourage the use of mixed-methods designs (Palinkas et al., 2011) that produce both qualitative and quantitative prospective data. These methods are complementary in that they offer unique insights into successes and failures to consider in the next iteration of program development or research. Third, although many studies focus on important proximal implementation outcomes or mediators (e.g., implementation-setting appropriateness, change in implementer skill, indicators of integrity; Proctor et al., 2009), the priority for SMH studies should be to assess the achievement of positive student outcomes. If implementation efforts change proximal outcomes but do not increase student success, we run the risk of learning an academic lesson with limited public health impact. Thus, we recommend that researchers routinely adopt “hybrid” designs that address both implementation and effectiveness within a single research project (Curran, Bauer, Mittman, Pyne, & Stetler, 2012). Fourth, given the limited resources available for PD and implementation supports, IS researchers in schools should focus on efficiency, working to identify “good enough” approaches to PD and integrity monitoring.

Lastly, multilevel interdisciplinary partnerships that include researchers, educators, mental health providers, youth, and families are likely necessary to achieve the research agenda outlined here and elsewhere (Forman et al., 2013). Leading models for this type of partnered research in schools include community-based participatory research (CBPR; Leff, Costigan, & Power, 2004; Owens, Andrews, Collins, Griffeth, & Mahoney, 2011) and participatory action research (PAR; Nastasi et al., 2000), among others. Given the divergent cultures, goals, values, and demands in academia versus the community (Owens, Dan, Alvarez, Tener, & Oberlin, 2007), a deliberate focus on partnership development, communication, and trust is important at each implementation stage. Using these kinds of models will enhance the likelihood that EBPs are grounded in the realities of the school context and that outcomes are meaningful to school and community stakeholders. Such models also prioritize cross-disciplinary communication and education so that all stakeholders understand and come to value rigorous research (e.g., randomized controlled trials). Although these efforts can be time-consuming, there is promise that the benefits of collaborative and egalitarian relationships among stakeholders outweigh the costs, as such partnerships strategically enhance school and community capacities to identify and address local needs while also advancing science.

Cross-Cutting Themes

In addition to these guiding principles, several cross-cutting themes emerge as researchers begin to design studies examining PD, intervention integrity, and sustainment of SMH practices. The themes detailed below are derived from theories of implementation and highlight important research questions that span our three priority areas for research.

Many fields, including special education, school psychology and social work, and clinical child psychology, have made great strides in developing an arsenal of EBPs for children. While we may bask in these accomplishments, it is important to note that these EBPs may not be fully implemented or sustained without adaptation. In fact, EBP adaptation may be the norm, rather than the exception. Yet, the knowledge base to illuminate which adaptations lead to success versus failure in SMH is sparse. Further, for many EBPs, core components have yet to be identified (Masia Warner et al., 2013). Given that adaptation influences PD and coaching processes (i.e., which content and skills to promote), integrity (i.e., which activities to monitor), and sustainment (i.e., which activities continue), documenting adaptations and examining their impact may become a common requirement of future IS studies. With increasingly systematic data, we may be able to answer questions such as: What types of adaptations are most common and useful at each implementation stage? How do adaptations at different levels (e.g., among intervention developers, implementation teams, individual providers) or at different stages (e.g., pre-implementation, initial and full implementation, sustainment) impact program acceptability, adoption, effectiveness, and sustainment? To what extent does sustainment require a unique set of adaptations? How can PD and coaching support appropriate and helpful adaptation?

Frameworks for systematically tracking and coding types of adaptations are emerging (Stirman et al., 2013). Making use of such frameworks and online technology systems that monitor the delivery of program components in future studies may facilitate standardization and comparison across implementation studies regarding how to make adaptation decisions, which adaptations are most common and most strongly related to positive outcomes, and how these decisions impact the subsequent assessment of integrity. The answers to some of the above questions may substantially enhance processes for future intervention development and implementation, thereby meaningfully narrowing the science-to-practice gap that has been difficult to bridge.
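
As one possible building block for such a tracking system, the sketch below defines a simple record structure capturing who made an adaptation, at what stage, to which component, and why. The field names and example values are illustrative assumptions rather than an established coding scheme.

```python
# Hypothetical sketch of a record an adaptation-tracking system might store.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AdaptationRecord:
    program: str                    # EBP being implemented
    component: str                  # core component that was modified
    adaptation_type: str            # e.g., "content added", "content removed", "delivery format changed"
    level: str                      # e.g., "developer", "implementation team", "individual provider"
    stage: str                      # e.g., "pre-implementation", "initial implementation", "sustainment"
    rationale: str                  # stated reason for the modification
    planned: bool                   # planned versus reactive adaptation
    date_logged: date = field(default_factory=date.today)
    integrity_rating: Optional[float] = None   # adherence/competence observed after the change

# Example usage: logging a provider-level adaptation during initial implementation
record = AdaptationRecord(
    program="classroom behavior management program",
    component="daily report card",
    adaptation_type="delivery format changed",
    level="individual provider",
    stage="initial implementation",
    rationale="shortened to fit the class schedule during state testing",
    planned=True,
    integrity_rating=0.82,
)
print(record)
```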

Another emerging theme relates to resource allocation. Schools have a wide range of professionals as well as constrained financial resources. To meet students’ diverse educational and behavioral health needs, we need to maximize the utility of each professional and each resource, while also considering the context of the school calendar. Thus, community–university partnerships should assess whether the strengths and training of each professional are being maximized and whether resources are being expended on the activities that best promote positive outcomes. In that process, partnerships should consider how task-shifting may optimize resources for SMH efforts and how online resources could be used to extend the timing (e.g., beyond the school calendar) and delivery (e.g., allowing for self-pacing and refresher courses) of PD (Becker, Haak, Domitrovich, Keperling, & Ialongo, 2013). In this context, researchers can empirically explore the relationship between task-shifting and implementation efforts, asking whether different professionals need different types of PD and coaching or how different professionals can be used to promote and monitor integrity. Researchers can also explore whether task-shifting enhances the accessibility of SMH services or comes with additional costs (e.g., asking staff to work beyond the bounds of their professional training may require more intense coaching). A related theme is how to achieve “good enough” efforts. We discussed this issue primarily in the context of finding an appropriate threshold for integrity, but it also applies to several other implementation efforts. What is a “good enough” threshold for PD and coaching? What is “good enough” sustainment of activities to maintain positive student outcomes?

Lastly, the theme of contextualization resonates across all of the IS domains discussed here. Schools are a unique context in which to conduct IS research, particularly with regard to their mission, diverse staff, and unique calendar. Compared with a clinical setting, where the organization’s mission, goals, and functions are devoted to mental health service delivery, the primary mission of schools is student education and, broadly defined, student success. Although student mental health plays a role in fostering student success, many educational leaders have had little or no training in mental health issues and may view maximizing student achievement and promoting student mental health as incompatible or potentially competing goals. Thus, it would be helpful to understand the strategies that best foster an administrator’s initial and ongoing support of and commitment to a SMH agenda. In addition, for every question we pose, we argue that we should consider the impact of competing demands on teachers, other school professionals, and students, as well as the impact of time spent on measurement. Questions arise regarding where PD and ongoing coaching can feasibly fit into a school schedule and how teachers can leverage SMH EBPs to achieve other goals (e.g., increasing time for test preparation by reducing disruptive behavior). Similarly, district and school leaders could consider how technology could be used for integrity monitoring systems that help teachers reflect and build on the results of instruction in a 9-week grading period (e.g., producing graphs depicting the relationship between individual students’ behavioral and academic performance). Finally, we have yet to determine the time frame required to meet the criteria for successfully sustained implementation.

The purpose of this paper was to contribute to a developing blueprint to guide community–research partnerships as well as funding agencies in their efforts to advance IS in SMH and student outcomes. Although it is unlikely that we will soon achieve consensus on this blueprint, a shared understanding of research priorities and a common language for this exciting area of inquiry may increase cohesion and comparability across projects, ultimately leading to a more generalizable IS knowledge base and supporting a greater public health impact as the quality of care received by students increases.