Introduction

In recent years, increasing pressure has been placed on schools to select social–emotional and behavioral interventions for students that have demonstrated effects on target outcomes (e.g., academic, behavioral; http://www.casel.org/policy/). Education and prevention scientists champion this recommendation, as evidence-based interventions (EBIs) should produce intended benefits. Yet, as noted recently in reports of the American Institutes for Research (Osher, Friedman, & Kendziora, 2014) and the US Department of Health and Human Services (Durlak, 2013), whether interventions benefit students depends upon their successful implementation. As conceptualized by Dane and Schneider (1998), implementation involves multiple elements, including dosage or exposure (i.e., duration and frequency of receipt) and adherence (i.e., delivery as intended), among others. Although researchers increasingly document implementation outcomes (Durlak & DuPre, 2008; Gottfredson et al., 2015), and scholars have proposed conceptual models of implementation systems (Aarons, Hurlburt, & Horwitz, 2011; Damschroder et al., 2009; Graczyk, Domitrovich, Small, & Zins, 2006; Wandersman et al., 2008), implementation systems and processes in schools remain understudied (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). Calls have been made to examine implementation as a dynamic process (Berwick, 2008; Hoagwood, Atkins, & Ialongo, 2013; Saunders, Evans, & Joshi, 2005), explore heterogeneity in implementation and effects (Seidman, 2012), and use implementation data to inform program refinements (Cappella, Reinke, & Hoagwood, 2011).

Aligned with these calls, and guided by implementation science scholarship (Fixsen et al., 2005), conceptual models (Damschroder et al., 2009), and school mental health research (e.g., Fazel, Hoagwood, Stephan, & Ford, 2014; Forman, Olin, Hoagwood, Crowe, & Saka, 2009), the current study uses qualitative and quantitative data to illuminate the barriers, opportunities, and processes underlying the implementation of a teacher consultation and coaching model in urban elementary schools—BRIDGE (Cappella et al., 2012; Cappella, Jackson, Bilal, Hamre, & Soulé, 2011). The overall aim is to move education science and policy beyond what constitutes an evidence-based intervention toward an understanding of effective implementation of EBIs as a means to positive outcomes for more students in schools.

BRIDGE Teacher Consultation and Coaching: Intervention Overview

The BRIDGE teacher consultation and coaching intervention is grounded in research on elementary age children’s behavioral difficulties (Reid, Gonzalez, Nordness, Trout, & Epstein, 2004) and the challenges teachers face when working with these children (Reinke, Stormont, Herman, Puri, & Goel, 2011). Children with behavioral difficulties (e.g., inattention, hyperactivity, and/or oppositional behavior) often experience academic underachievement and relational problems in schools (Hughes & Cavell, 1999; Miles & Stipek, 2006). Left unaddressed, these problems can lead to a cycle of maladjustment that worsens over time (Reid et al., 2004). Teachers also struggle to manage and motivate students with behavioral difficulties (Reinke et al., 2011) and refer these students to support services at high rates (Cappella, Frazier, Atkins, Schoenwald, & Glisson, 2008). Most support services involve individual assessment or individual, group, or family treatment outside the classroom (Brener, Weist, Adelman, Taylor, & Vernon-Smiley, 2007; Kutash, Duchnowski, & Lynn, 2006). Rarely are these services complemented with regular and systematic consultation to teachers to assist them in creating a classroom context where children with behavioral difficulties can succeed.

Recent studies provide evidence that effective classrooms, such as those with emotionally supportive (i.e., warm, responsive) and organized (i.e., consistent, structured) teaching practices, can facilitate social–emotional development and academic performance, particularly among children with initial adjustment problems (Brock, Nishida, Chiong, Grimm, & Rimm-Kaufman, 2008; Hamre & Pianta, 2005). This research is rooted in a theory-based lens through which to assess and understand effective classrooms—the Classroom Assessment Scoring System (CLASS: Pianta, La Paro, & Hamre, 2008). According to the CLASS, teachers who create emotional support in the classroom are aware of and responsive to students’ needs, encourage warm and positive interactions, and provide appropriate levels of autonomy. Organized classrooms (i.e., high behavioral support) have clear and productive routines, engaging and varied instructional methods, and positive and proactive behavior management (Pianta, Hamre, & Allen, 2012).

In randomized trials of classroom-focused, social–emotional, and behavioral interventions, researchers find effects on observed classroom emotional support and organization and on children’s social and academic adjustment over time (Brown, Jones, LaRusso, & Aber, 2010; Jones, Brown, & Aber, 2011; Raver et al., 2008; 2011). Intervention effects on child-level outcomes appear to be mediated through improvements to classroom emotional support or organization (McCormick, Cappella, O’Connor, & McClowry, 2015). Though promising, these demonstration trials use externally funded coaches, facilitators, or consultants to provide training and support to teachers. Given the financial costs, it may be difficult for schools in low-income communities to sustain these supports over time.

In response to the research evidence and practical realities, BRIDGE was developed for school-based staff to deliver to teachers as a part of their regular daily activities (see Cappella et al., 2011; Cappella, Jackson, et al., 2012). BRIDGE is based on two models: (a) MyTeachingPartner (MTP: Pianta, Mashburn, Downer, Hamre, & Justice, 2008; Allen, Pianta, Gregory, Mikami, & Lun, 2011), a teacher professional development program, and (b) Links to Learning (L2L: Atkins et al., 2008, 2015; Cappella et al., 2008), a mental health services model. Like MTP, BRIDGE is rooted in the CLASS, a standardized and validated observational tool for understanding and assessing effective classroom practices (Pianta et al., 2008). The CLASS lens enables the teacher and mental health practitioner (i.e., BRIDGE consultant) to view the classroom similarly when reflecting on classroom interactions, and organizes consultation and coaching within a set of empirically based dimensions of effective teaching. Like L2L, BRIDGE operates at two tiers—(1) overall classroom interactions (universal) and (2) specific interactions with children with behavioral difficulties (targeted)—in order to respond to teacher needs to work effectively across the classroom and with students with behavioral challenges. Unlike other interventions, BRIDGE involves a classroom continuous quality improvement cycle (Park, Hironaka, Carver, & Nordstrum, 2013) implemented monthly to individualize intervention to classrooms and students.

Overall, BRIDGE goals are to improve classroom emotional support and organization and promote the academic and psychosocial adjustment of students with and without behavioral difficulties. To reach these goals, school-based mental health staff (i.e., BRIDGE consultants) receive training in (a) the CLASS observation system (Pianta et al., 2008), (b) evidence-based strategies targeting emotional support and classroom organization (Berryhill & Prinz, 2003; Embry, 2004), and (c) approaches to effective consultation and coaching (Miller & Rollnick, 2002; Reinke, Lewis-Palmer, & Merrell, 2008). After training, BRIDGE consultants are paired with teachers in their schools, engage in initial interviews, and conduct baseline classroom observations using an adapted CLASS framework (Pianta et al., 2008) and principles of functional behavior assessment (FBA; Watson & Skinner, 2001). Then, consultant–teacher pairs meet to reflect on classroom and student needs and choose specific dimension(s) of teaching practice within domains of emotional support and organization on which to focus the consultation and coaching cycles. A full cycle involves: (1) classroom and student observation, (2) teacher consultation, and (3) implementation of evidence-based strategies from the BRIDGE Tips and Tools manual (see Table 1). This cycle is repeated monthly to improve teaching practices across the classroom and with students with behavioral difficulties.

Table 1 Examples of links between CLASS domains/dimensions and classwide and targeted evidence-based strategies in BRIDGE

BRIDGE Intervention Effects and Implementation Questions

In the 2010–2011 school year, intent-to-treat effects of BRIDGE on classroom and child outcomes were assessed in a classroom-randomized trial in five urban schools (Cappella et al., 2012). At the classroom level, BRIDGE had a positive impact on observed teaching practices in classrooms with low levels of emotional support at the start of the year. At the student level, children with and without behavioral difficulties in BRIDGE classrooms benefited in terms of their relational closeness to teachers, social experiences with peers, and academic self-concept. In addition, children in BRIDGE classrooms who had behavioral problems at the start of the year were less likely to experience peer victimization at the end of the year than comparable children in control classrooms. No significant effects were detected for teacher practices of classroom organization, students’ aggression, or students’ behavioral regulation.

Across this trial, implementation dosage and adherence were adequate (Cappella et al., 2011; Cappella, Jackson, et al., 2012). On average, teachers participated in one cycle of observation/consultation/implementation (with coaching) during each month of the intervention. With the exception of viewing video segments, consultation sessions included the primary BRIDGE content, and strategy implementation involved use of classroom strategies from the BRIDGE Tips and Tools manual at minimum levels of adherence (for a detailed description, see Cappella et al., 2011; Cappella, Jackson, et al., 2012). However, significant heterogeneity in implementation was observed. This is not unusual in intervention trials in which the intervention is delivered by school-based personnel rather than paid consultants, university personnel, or research staff (e.g., Aber, Jones, Brown, Chaudry, & Samples, 1998; Atkins et al., 2015). Moreover, BRIDGE was designed to match classroom and student need, increasing the likelihood of variation in implementation. We also found moderated effects for students with behavioral difficulties and classrooms with low baseline levels of emotional support, suggesting that baseline levels of classroom and student need mattered.

Motivated by these results and responding to calls to explore implementation processes in real-world contexts (Hoagwood et al., 2013), we aim in the current study to: (1) explore barriers to and facilitators of BRIDGE implementation, and (2) describe variation in implementation dosage and adherence by pre-intervention classroom need. Our inquiry was guided by the Consolidated Framework for Implementation Research (Damschroder et al., 2009), an implementation framework that integrates across published theories to identify common constructs related to effective implementation in healthcare settings. In this model, Damschroder et al. (2009) identify five key domains of implementation: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. We use this model as a guide, considering characteristics of BRIDGE (content, implementation), school organizational factors, classroom contextual factors, and characteristics of teachers and consultants and their relationship (see Fig. 1 for a conceptual model). The overall goals are to expand understanding of implementation variability and processes in school mental health, inform refinements to implementation systems in school-based interventions, and increase our success at enhancing teaching practices and student outcomes among students with and without behavioral problems in urban schools.

Fig. 1 Conceptual model of implementation analysis of teacher focus groups and interviews

Method

Setting and Participants

This study draws on data from five public elementary schools in a large urban center that were invited to participate in an experimental trial of BRIDGE across one school year. The schools were linked to a partner community agency for school-based mental health services on the basis of location (proximal to the agency) and economic disadvantage (free/reduced lunch eligibility: 89–99 %). According to district records, participating schools enrolled mainly Latino and African-American students (87 and 11 %, respectively), with four of the five schools serving mainly Latino students (89–99 %) and one school serving mainly African-American students (69 %).

Participants in the current study were school-based mental health professionals—BRIDGE consultants—and classroom teachers. Consultants were recruited during informational meetings in the agency and schools. Resource mapping procedures (Adelman & Taylor, 2006) were used to determine the school-based mental health staff whose roles and time in the school enabled implementation. Interested individuals met with researchers to determine whether their time and role were sufficient to accommodate the consultation practice, as well as to discuss intervention implementation and study design. Across the five schools, 12 of 18 eligible mental health professionals consented to participate. The consultants were mostly female (82 %) and identified as Latino (33 %), White (50 %), Black (8 %), and mixed/other (8 %). Roles in the school included counselor, social worker, and psychologist. Consultants were employed by the school district (n = 7) or the community agency (n = 5) providing services at the school.

A letter of invitation to participate was distributed to all teachers (N = 154) in the five schools. Given the number of consultants, a total of 48 teachers could be accommodated in this trial. Researchers met with teachers who expressed interest within 2 weeks of receipt of the invitation (n = 44); a total of 36 provided informed consent. After baseline data were collected in the fall, researchers used a random numbers table to randomize teachers within schools to BRIDGE (intervention; n = 18 classrooms) or training only (comparison; n = 18 classrooms) conditions. Consultants were paired with teachers of intervention classrooms based on existing relationships in the school and delivered BRIDGE consultation and coaching as a part of their ongoing school activities.
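
To make the random assignment step concrete, the sketch below shows one way to assign consented teachers to condition within schools. It is a minimal illustration only: the trial used a random numbers table rather than a software random number generator, and the school and teacher identifiers here are hypothetical.

```python
import random

# Hypothetical roster of consented teachers grouped by school (identifiers are illustrative).
consented = {
    "School A": ["T01", "T02", "T03", "T04", "T05", "T06", "T07", "T08"],
    "School B": ["T09", "T10", "T11", "T12", "T13", "T14"],
    # ... remaining schools
}

def randomize_within_schools(rosters, seed=2010):
    """Assign half of each school's consented teachers to BRIDGE and half to comparison."""
    rng = random.Random(seed)
    assignments = {}
    for school, teachers in rosters.items():
        shuffled = teachers[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for teacher in shuffled[:half]:
            assignments[teacher] = "BRIDGE"
        for teacher in shuffled[half:]:
            assignments[teacher] = "comparison"
    return assignments

print(randomize_within_schools(consented))
```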

For the current study, the 18 lead teachers from the classrooms randomly assigned to BRIDGE were included in the analysis. These teachers led regular education (n = 12) or special education/combined classes (n = 6). Similar numbers of younger grade teachers (K–2nd grade: n = 9) and older grade teachers (3rd–5th grade: n = 9) were included. The teachers were primarily female (83 %) and identified as Latino (56 %), White (28 %), or Black (17 %). On average, teachers worked in their current school for 8.82 (SD = 5.47) years.

Procedures and Data

All procedures were approved by the university and school district institutional review boards. Qualitative and quantitative data were collected over one school year to: (1) assess existing implementation barriers and facilitators through the eyes of classroom teachers, and (2) gain insight into the variation in implementation between participating classrooms.

Data included: (a) focus groups (n = 3) and interviews (n = 3) with intervention teachers, (b) pre-intervention classroom observations in BRIDGE classrooms (n = 18), (c) initial teacher interviews (n = 14), (d) fidelity measures (n = 123), and (e) implementation materials (n = 149). Following research board guidelines, schools received small monetary gifts for teacher participation and the mental health agency received a training stipend.

Focus Groups and Semi-structured Interviews

Three teacher focus groups were conducted at the end of the trial (see Morgan, 1998). Trained researchers led the 90-min focus groups to gather feedback on implementation processes and gain insight into perceived barriers and facilitators of intervention implementation. Focus groups were audio-recorded and transcribed. Semi-structured individual interviews were conducted with intervention teachers unable to attend a focus group. Researchers took detailed notes during interviews. Transcribed focus groups and structured interview notes were coded for analyses (see below for codebook development details).

Pre-intervention Classroom Observations

A systematic classroom observation measure—Classroom Assessment Scoring System (CLASS: Pianta et al., 2008)—was used to assess classroom practices. This is the same tool that guides BRIDGE intervention with teachers. Ten dimensions are scored on a seven-point scale ranging from 1 or 2 (low) to 6 or 7 (high). Each dimension contains a detailed overall description, behaviorally anchored scale points, and behavioral indicators (see Mashburn et al., 2008). Each dimension was coded four times per teacher during each of two observational periods. Scores for each dimension were averaged to calculate dimension scores. The dimension scores factor into three domains: Emotional Support, Classroom Organization, and Instructional Support (Pianta et al., 2008).

The current study focused on the CLASS domains underlying BRIDGE: Emotional Support (Positive Climate, Negative Climate—reverse, Teacher Sensitivity, and Regard for Student Perspectives) and Classroom Organization (Behavior Management, Productivity, and Instructional Learning Formats). Current study internal reliabilities were α = .79 (Emotional Support) and α = .86 (Classroom Organization). CLASS domain scores coded prior to intervention were used to assess classroom need at the beginning of implementation.
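
As an illustration of the scoring described above, the sketch below averages already-averaged CLASS dimension scores into the two domain composites used in BRIDGE and computes Cronbach's alpha for each domain. The data values and column names are invented for illustration and are not the study's data; the alpha formula is the standard one, not a reproduction of the authors' analysis.

```python
import pandas as pd

# Hypothetical classroom-level CLASS dimension scores on the 1-7 scale (values are invented).
scores = pd.DataFrame({
    "positive_climate":      [5.0, 3.5, 6.0, 4.5],
    "negative_climate_rev":  [6.0, 4.0, 6.5, 5.0],  # Negative Climate, reverse-scored
    "teacher_sensitivity":   [4.5, 3.0, 5.5, 4.0],
    "regard_student_persp":  [4.0, 2.5, 5.0, 3.5],
    "behavior_management":   [5.5, 3.0, 6.0, 4.5],
    "productivity":          [5.0, 3.5, 6.5, 4.0],
    "instructional_formats": [4.5, 3.0, 5.5, 4.0],
})

EMOTIONAL_SUPPORT = ["positive_climate", "negative_climate_rev",
                     "teacher_sensitivity", "regard_student_persp"]
CLASSROOM_ORG = ["behavior_management", "productivity", "instructional_formats"]

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Domain composites are the mean of their dimension scores, as described above.
scores["emotional_support"] = scores[EMOTIONAL_SUPPORT].mean(axis=1)
scores["classroom_organization"] = scores[CLASSROOM_ORG].mean(axis=1)

print("alpha (Emotional Support):", round(cronbach_alpha(scores[EMOTIONAL_SUPPORT]), 2))
print("alpha (Classroom Organization):", round(cronbach_alpha(scores[CLASSROOM_ORG]), 2))
```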

Initial Teacher Interview

As the first step to implementation, BRIDGE consultants conducted structured individual interviews with their assigned classroom teachers. The dyads met at a convenient time in a quiet space in the school. Consultants first introduced the BRIDGE consultation procedure and reviewed dimensions of quality classroom interactions aligned with the CLASS. Then, consultants asked questions regarding teachers’ views on their overall classroom interactions (behavioral, academic) as well as questions about interactions with specific children with behavioral difficulties. The consultant records from these teacher interviews were included in classroom case studies.

Fidelity Measures

Three checklists were used as research tools to assess implementation dosage and adherence. Approximately once a month, teachers and consultants completed the Monthly Intervention Checklist indicating the CLASS dimension(s) of focus and the specific intervention strategies discussed. Teachers also completed the Weekly Strategy Checklist, which reported the strategies (targeted and classroom-wide) they used in their classrooms each week. Lastly, consultants completed the Post-Consultation Checklist after each consultation session. This checklist indicated what occurred in each consultation session, such as discussing the recent classroom observation, problem-solving challenges to implementing a classwide or targeted strategy, or discussing how to use a new strategy. These fidelity measures were included in the data analyzed for the classroom case studies.

Intervention Materials

Multiple forms were created to facilitate and document intervention delivery for the BRIDGE consultants and their supervisors. Among the provided forms, BRIDGE consultants chose to use the ones they found most helpful for implementation. These forms included tools to assist consultants’ observation of teachers’ classroom practice and implementation of classroom strategies. Specifically, the Classroom and Child Observation Form documented consultants’ observation of classwide and target student interactions (CLASS and FBA lenses) in an open-ended format. The Tips and Tools Implementation Form was used by consultants to observe and provide feedback to teachers about their implementation of the specific classwide and targeted strategies chosen during the consultation sessions.

Other tools were designed to facilitate productive consultation and follow-up. The Consultation Preparation Form was designed to guide consultants in (a) reflecting on what they observed in the classroom, (b) choosing specific CLASS dimensions of focus, and (c) deciding relevant classroom and targeted strategies for teachers to implement. The Consultation Notes Form enabled consultants to record discussions with teachers and next steps for implementation (e.g., modeling of a classwide strategy, and implementation of a targeted strategy). University Consultant Reports were written records completed by university consultants who provided supervision to BRIDGE consultants. Each of these tools was included in the case study analysis.

Analytic Plan

The overall goal was to explore the BRIDGE implementation process, including barriers to and facilitators of implementation and heterogeneity related to pre-intervention classroom need. This goal was achieved through (a) content analysis of teacher focus group and interview data, and (b) case study analysis of BRIDGE implementation materials.

Content Analysis of Focus Groups and Interviews

Teachers’ reflections after the BRIDGE trial were content analyzed using a directed approach (Hsieh & Shannon, 2005) to understand the multilevel barriers to and facilitators of implementation (see Fig. 1). Development of a codebook and qualitative analysis of focus group transcriptions and interview notes were guided by the implementation framework by Damschroder et al. (2009), which characterized key aspects of implementation, including the characteristics of the intervention, the outer setting or organizational context, the inner setting or micro-context, the characteristics of the individuals involved, and the implementation processes.

Two researchers independently reviewed focus group and interview data to identify salient themes within these topics: (a) BRIDGE intervention components/implementation processes, (b) school organization, (c) classroom context, and (d) characteristics of teachers/consultants and their relationship. Following this independent review, researchers created a preliminary coding manual and independently identified discrete units of text that corresponded to specific categories within each theme. Researchers compared independent codes, discussed coding challenges, and resolved discrepancies, a process that led to coding manual revisions.

The two researchers then recoded all text using the revised manual to identify the prevalence of each theme and category, and normative examples. Researchers met with the principal investigator to review all codes and resolve discrepancies. An external auditor who was familiar with the project was trained and independently coded 30 % of the text using the revised manual (e.g., Hill et al., 1997). Over 80 % of the external codes matched the original codes; discrepancies were resolved via discussion.
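
To make the reliability check described above concrete, the following minimal sketch computes percent agreement between the original codes and an auditor's codes for the same text units. The category labels and values are hypothetical and do not reproduce the study's coding data.

```python
def percent_agreement(original_codes, auditor_codes):
    """Proportion of text units assigned the same category by the original coders and the auditor."""
    assert len(original_codes) == len(auditor_codes)
    matches = sum(o == a for o, a in zip(original_codes, auditor_codes))
    return matches / len(original_codes)

# Hypothetical category labels for five coded text units (for illustration only).
original = ["dosage", "content", "support", "school_structure", "buy_in"]
auditor = ["dosage", "content", "support", "school_support", "buy_in"]
print(f"Agreement: {percent_agreement(original, auditor):.0%}")  # 80% in this toy example
```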

Case Studies

Four classrooms were selected to illustrate implementation in practice. These classrooms were selected based on (a) perceived need for intervention, (b) extent of intervention implementation (dosage, adherence), and (c) availability of intervention materials and fidelity measures.

To assess need, classrooms were rank ordered based on the sum of their pre-intervention scores on the CLASS domains of emotional support and classroom organization (scale from 1 to 7). The top tertile of classrooms was classified as “low-need” (i.e., high emotional support and organization), and the bottom tertile was classified as “high-need” (i.e., low emotional support and organization). To assess implementation, a dosage/adherence index was created that summed exposure to consultation sessions (1 = low; 2 = moderate; 3 = high) and exposure to classroom strategies weighted by intensity level of the strategy (1 = low intensity, e.g., “catch ‘em being good,” and 2 = high intensity, e.g., Good Behavior Game; Barrish et al., 1969; see Table 1). Classrooms at the low end of the resulting 0–9 range participated in 1–2 consultation sessions and implemented 1–2 low-intensity classroom and/or targeted strategies (with basic adherence to the BRIDGE model). Classrooms at the high end of the range participated in 4–6 consultation sessions and implemented 2–5 classroom and/or targeted strategies that ranged from low to high intensity (see Fig. 2).
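
To make the need classification and the dosage/adherence index concrete, the sketch below ranks hypothetical classrooms by the sum of their two pre-intervention CLASS domain scores, labels the top and bottom tertiles, and sums a consultation-exposure rating with intensity-weighted strategy counts. All values and classroom identifiers are invented, and the exact scaling that yields the reported 0–9 range is not fully specified in the text, so this is an approximation of the procedure rather than the authors' code.

```python
import pandas as pd

# Hypothetical pre-intervention scores and implementation records (values are invented).
classrooms = pd.DataFrame({
    "classroom": ["C1", "C2", "C3", "C4", "C5", "C6"],
    "emotional_support": [2.8, 4.9, 5.9, 4.8, 3.6, 5.2],
    "classroom_organization": [2.8, 3.5, 6.0, 5.2, 3.2, 5.5],
    "consult_exposure": [2, 3, 2, 3, 1, 2],            # 1 = low, 2 = moderate, 3 = high
    "low_intensity_strategies": [1, 2, 1, 2, 1, 0],    # weighted by 1
    "high_intensity_strategies": [1, 1, 0, 1, 0, 1],   # weighted by 2
})

# Need: rank by the sum of the two CLASS domains; bottom tertile = high-need, top tertile = low-need.
classrooms["class_total"] = classrooms["emotional_support"] + classrooms["classroom_organization"]
classrooms["need"] = pd.qcut(classrooms["class_total"], 3,
                             labels=["high-need", "moderate", "low-need"])

# Dosage/adherence index: consultation exposure plus intensity-weighted strategy exposure.
classrooms["dosage_adherence"] = (
    classrooms["consult_exposure"]
    + 1 * classrooms["low_intensity_strategies"]
    + 2 * classrooms["high_intensity_strategies"]
)

print(classrooms[["classroom", "need", "dosage_adherence"]])
```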

Fig. 2 BRIDGE intervention classrooms’ pre-intervention need and implementation dosage/adherence. Note: Starred classrooms were included in case study analysis

As indicated by a visual analysis of Fig. 2, pre-intervention classroom need and BRIDGE dosage/adherence were unrelated. Some moderate–high (i.e., adequate) dosage/adherence classrooms had low initial CLASS scores (i.e., high-need), whereas others had high initial CLASS scores (i.e., low-need). To explore potential differences in intervention received between classrooms that varied by need, we chose two high-need classrooms and two low-need classrooms for a case study analysis. All four were moderate–high dosage/adherence classrooms, chosen so that sufficient data (intervention materials, fidelity measures) from multiple reporters (consultant, teacher, and supervisor) were available for analysis.

Aligned with developmental evaluation approaches (Patton, 2011), we analyzed these records according to the theoretical framework by Yin (2003). Co-authors reviewed these documents to identify implementation patterns aligned with key components of the conceptual model. Twice monthly meetings over a four-month period were used to discuss patterns and build consensus. Disagreements were largely resolved via discussion. In isolated cases where discussion did not resolve the disagreement, the first author’s perspective was weighed more heavily. Written results were then distributed to co-authors for refinement and reviewed by a subset of BRIDGE consultants for a member check. Lastly, findings were integrated with focus group analysis and interpreted in the context of the overall implementation framework.

Results I: Focus Groups and Interviews

Findings from teacher focus groups and interviews that explored barriers and facilitators of BRIDGE implementation are summarized by major themes aligned with Damschroder et al. (2009): (a) BRIDGE intervention model; (b) school organization; (c) classroom micro-context; and (d) implementer characteristics and relationships. We present the themes and sub-categories within each theme that emerged in our directed content analysis (see Table 2).

Table 2 Themes and categories derived from teacher focus groups and interviews on barriers and facilitators to BRIDGE implementation

BRIDGE Intervention

Several of the teachers spoke about barriers and facilitators associated with components of the BRIDGE intervention (34 % of codes). One category that emerged was the dosage of the intervention received (14 % of codes), with much of the discussion focused on inadequate dosage. For example, one teacher stated: “At that point we didn’t see the consultant until you know maybe a week or within two weeks. You know that’s when the continuity got lost.” The specific content of the BRIDGE intervention was another category within this theme (12 % of codes). One teacher talked about the evidence-based classroom strategies: “The suggestions she gave me were good. And I tried some of them. You know… and for most of the class they worked – except for a few.” Teachers mentioned the CLASS observation framework and the focus of the consultation (i.e., individual or classwide). For example, one suggested: “I think for me, these dimensions cover a lot of what should be going on in classrooms.”

Teacher participants spoke about the personal and professional support associated with the intervention (6 % of codes). One teacher who had experienced many difficulties earlier in the year said: “I felt support, I felt like I’m not alone.” Teachers also discussed the overall goals of the BRIDGE model (2 % of teacher codes). For example, one commented on the difficulty of reaching the goal of self-reflection: “This program points at both the teacher and the child—which is a little sensitive because people don’t want to look in the mirror and see themselves.”

School Organization

Most teachers mentioned the organizational context of the schools in which BRIDGE was implemented (32 % of codes), with specific attention to the barriers within the school structure (12 % of codes) and the support teachers felt they received from the school (10 % of codes). For instance, teachers spoke about the lack of time for consultation to take place. One teacher spoke about changes to her schedule: “It was just time constraints. I think that a few times we wanted to meet and couldn’t meet because prep got changed.” Another teacher spoke of the number of programs within the school as another barrier, stating “They have so many programs that they don’t have time to really implement this well.” The lack of support from the school for the teachers overall was noted in regard to teachers’ experiences within their classrooms. For example, one teacher spoke about her desire for more help saying, “… You know in a kindergarten class I think that you should have an extra set of hands.”

Teachers discussed the school’s lack of buy-in, or lack of incentive to support teachers in implementation, as another factor that affected implementation of BRIDGE (5 % of codes). One teacher suggested the school needed to change priorities in order to fully commit to BRIDGE: “With this school, that’s just not the way it is.… I mean a big change would have to be made and it’s just not a priority.” Another teacher spoke about the importance of the whole school’s “buy-in” for the implementation of an intervention to be successful, stating:

There has to be a more consistent school-wide culture that constantly reinforced what the expected behavior is that doesn’t turn into this wishy-washy thing depending on who the teacher is and all these other factors in some cases excuses, that would also help support the teachers to enforce those expectations.

Teachers also spoke about peer professional support in their school (5 % of codes). One teacher expressed the need for support from colleagues who share similar classroom experiences: “I think I would want someone who has recent experience in the classroom and someone who has demonstrated that they have overcome some of their behavioral issues. I feel like that kind of feedback would be of value.” Yet, most teachers indicated a lack of time for this: “Even at grade level meetings, we already have a set agenda so there’s very little time to discuss other things.”

Classroom Micro-context

Teachers identified barriers and facilitators to implementation related to the classroom micro-context (24 % of codes). Of the categories under this theme, teachers mainly spoke about the whole classroom composition (10 % of codes) and organizational structure (10 % of codes).

In these categories, teachers spoke about difficulties and issues that characterized their classrooms, and particular needs with respect to daily interactions with students. For example, one teacher spoke about her students: “… they don’t start their morning in a good way, you know … they come in with all this baggage they have to deal with.” Another teacher commented that in the beginning of the year, “We’d get one misbehavior and then we’d need a lot of strategies to maintain [the students] and we absolutely had it up to here.” Teachers also spoke about strategies already in place prior to the introduction of BRIDGE (e.g., Stoplight), which helped or hindered implementation by creating complementary or competing organizational systems.

A third category within this theme pertained to target students (4 % of codes). Specifically, teachers discussed the student–teacher relationship and target student characteristics that affected interactions across the classroom. For example, one teacher reported, “Many students need counseling and other services and they are not getting them. Several students have problems reading and interacting with one another… [this] makes the whole class difficult.”

Implementer Characteristics and Relationships

Teacher participants also identified facilitators and barriers related to characteristics of the individual teacher or consultant. Specifically, teachers reflected on the consultants’ competence and experience (3 % of codes). One teacher spoke about her consultant being generally knowledgeable but questioned her classroom experience, commenting, “I have the impression that she’s read the stuff and knows a lot of the stuff but she hasn’t really been in the classroom.” Teachers also spoke about their own professional role and readiness (2 % of codes). One teacher described herself as: “… Very open to have someone who could help me with behavior management. Even give me feedback on my teaching. For me, this is my 4th year teaching, I am still new enough where I can still learn, but old enough that I am getting in the groove of things.” Finally, teachers mentioned the quality of the teacher–consultant relationship (4 % of codes) as a facilitator or a barrier. For instance, one teacher spoke about the level of comfort she had with her consultant, stating “I had a very good relationship with her. I felt very comfortable when I met with her – when we could get a chance to talk.”

Summary of Focus Group and Interview Results

In sum, teachers reflected on implementation barriers and facilitators related to aspects of the BRIDGE intervention, school organization, classroom micro-context, and implementer characteristics and relationships. Each of these themes and their sub-categories emerged through directed content analysis and provide information to consider in refinements of the intervention model and implementation system.

Results II: Case Studies

The intervention materials and fidelity measures collected during implementation provide data for case studies of moderate–high dosage classrooms that varied in level of need for intervention. Students in the four case study classrooms were primarily from low-income Latino families, with a small subset classified as English language learners. Classrooms ranged from 12 to 25 students in kindergarten through grade three. In three of the four classrooms, teachers identified as Latino and spoke Spanish and English; the teachers in the fourth classroom identified as White and spoke English only. Our analysis focuses on the content and pattern of BRIDGE implementation in each of these four classrooms. We first present results from the high-need classrooms and then from low-need classrooms. For sources and relevant dimensions of the specific classwide and targeted strategies described below, see Table 1.

High-Need Classrooms: “How Much is Enough”

Ms. S’s classroom was observed to have a moderate level of emotional support (4.88) and a low level of organization (3.50) prior to the intervention. During the initial interview with her BRIDGE consultant, Ms. S reported difficulties in managing the classroom as a whole. During the initial observations conducted by the BRIDGE consultant, negative interactions were recorded around management of student behaviors (e.g., “[Ms. S is] reactive when it got rowdy”; “lots of kids wandering even when whole group is on rug”) and unproductive transitions (e.g., “lots of waiting and wandering”; “transitions slow because of wandering”).

Ms. O’s classroom, on the other hand, was observed to have low levels of both emotional support (2.75) and classroom organization (2.83) prior to intervention. At the initial interview, Ms. O reported difficulties in managing students’ inattentive and disruptive behaviors. Also in the interview, Ms. O expressed challenges in addressing the range of behavioral and academic needs among students and concerns about the mismatch between their family backgrounds and the peer and school context. At the start of BRIDGE implementation, the consultant noted in an initial classroom observation “yelling” to redirect or punish misbehavior and “many students waiting” for the teacher to deliver instructions or for academic activities to begin.

Specific Dimensions of Practice

Much of the BRIDGE consultation and coaching in these high-need classrooms focused on a small number of CLASS dimensions. Specifically, in Ms. S’s classroom, 71 % of the consultations focused on behavior management and productivity, with implementation of the Good Behavior Game (Barrish et al., 1969; Embry, 2002) as the primary way to improve behavior and maximize learning time. In Ms. O’s classroom, consultation time was focused mainly on positive climate (86 % of the consultations) and productivity (57 % of the consultations), with classwide strategies implemented to encourage positive interactions among students (positive climate) and increase productivity for students who finished activities early (productivity). Although the goal was for teachers and consultants to cycle through more than two of the CLASS dimensions over the course of BRIDGE implementation, this did not occur in the high-need classrooms.

Classwide Needs and Strategies

Implementation data suggest BRIDGE consultants working with high-need classrooms focused primarily on classwide needs and strategies. Ms. S’s consultations involved coaching to implement classwide strategies such as the Good Behavior Game (GBG: Embry, 2002), Think-Pair-Share (Lyman, 1987), and positive peer reporting or “tootling” (Cihak, Kirk, & Boon, 2009; Skinner, Cashwell, & Skinner, 2000). Selected targeted strategies were used by the BRIDGE consultant but not fully integrated into teacher practice. Similarly in Ms. O’s classroom, consultation focused on classwide strategies to improve teacher–student and peer interactions (e.g., “peer tootling”; Lambert, Tingstrom, Sterling, Dufrene, & Lynne, 2015) and maximize learning time and establish routines (e.g., activity box: Greenwood, 1997). The BRIDGE consultant also provided tips for modifying specific students’ behaviors, but these were not the primary focus of consultation and coaching.

Implementation Difficulties

The teachers experienced difficulties implementing new strategies in their classrooms. For example, the BRIDGE consultant observed Ms. S when she implemented the Good Behavior Game (Embry, 2002), and noted that the teacher did not state the rules of the game, did not implement the strategy during key times, and did not reinforce the rules consistently. In Ms. O’s case, the BRIDGE consultant found the teacher sometimes implemented the strategies in unintended ways. For example, the consultant and teacher agreed to implement positive peer reporting to improve positive climate in the classroom. The strategy involves students writing positive statements about what a target student says or does. Ms. O thought the strategy would be useful to teach new vocabulary. This creative appropriation of the strategy may have been an effective instructional tool, but the negative peer interactions remained unaddressed.

Active and Ongoing Coaching

In part due to these implementation difficulties, BRIDGE consultants played an active and ongoing role in coaching and modeling. The teachers expressed relatively low levels of confidence and competence in implementing new strategies without coaching support. Therefore, BRIDGE consultants introduced and implemented strategies the first time they were used. After the introduction, however, teachers continued to request support in implementing strategies, including co-leading classwide strategies (e.g., GBG: Ms. S) and targeted strategies (e.g., behavioral contract: Ms. S. and Ms. O).

Summary of Implementation in High-Need Classrooms

In sum, the intervention materials and fidelity measures from high-need classrooms revealed a focus on a small number of dimensions of effective teaching practice, implementation of classwide strategies rather than targeted student strategies, difficulties in using strategies as intended and with fidelity, and need for active and ongoing coaching support.

Low-Need Classrooms: “Always Can Use Extra Help”

Ms. H’s and Ms. B’s classrooms represent low-need classrooms. In a pre-intervention observation, Ms. H and her co-teacher were observed to have high scores in both emotional support (5.88) and classroom organization (6.00). In the initial observation, the BRIDGE consultant found no significant classwide issues, noting “very positive climate,” “routines in place,” and “children are very engaged.” During the initial interview, Ms. H mentioned two children with inattentiveness and mild behavioral difficulties and one child with significant academic and social challenges who experienced peer victimization. The BRIDGE consultant observed this child “consistently calling for attention, lacking self-awareness.” When this child gave an incorrect answer, it was met with “giggling from other students.”

Similarly, Ms. B’s classroom was positive and well organized prior to intervention, as indicated by moderate-to-high CLASS scores (emotional support = 4.75; classroom organization = 5.17). In an initial observation, the BRIDGE consultant noted need for higher productivity (e.g., “Transition a little lengthy: takes a while for them to settle down and begin work. It is pretty clear, though, what needs to get done”) and teacher sensitivity (“maybe more aware of girls in back of room”). Yet, the consultant found no significant overall difficulties in the classroom. Individual student challenges were noted, however, for three students who displayed inattentive and disruptive behaviors, including “tease other kids,” “not staying in seat,” and “drift off really easily.”

Focus on Target Students

BRIDGE implementation in these low-need classrooms was focused on managing the behaviors of specific targeted students. Because the classrooms had adequate pre-intervention levels of warmth and emotional support with well-established routines and positive and proactive behavioral management practices, most of the consultation and coaching was focused on targeted strategies for students with behavioral difficulties. For example, Ms. H’s classroom had three target students: One child experienced inattention, another was identified as having a poor self-concept and behavioral difficulties, and the third was regularly teased by classmates because of language, social, and developmental challenges. The BRIDGE consultant worked with the teachers to choose targeted strategies, such as a self-monitoring card (Mooney, Ryan, Uhing, Reid, & Epstein, 2005) for inattention, positive performance feedback (Warren et al., 2006) for the student with low self-concept, and positive peer reporting (Skinner et al., 2000) for the student who experienced teasing (to accompany individual treatment). Similarly, Ms. B and the BRIDGE consultant chose targeted strategies, such as self-monitoring, good news notes, and behavioral contracts, to help the three target students stay engaged during academic activities (e.g., Mooney et al., 2005). The goal of the consultation was primarily to determine the most effective strategies for each student.

Teacher and Consultant as Partners

BRIDGE implementation in low-need classrooms was conducted in partnership. Teachers and consultants made joint decisions about strategies to implement for the targeted students, with teachers taking a leadership role in the implementation and consultants taking a supporting role. When Ms. H implemented a targeted strategy, the consultant did not model or co-lead; instead, the consultant monitored the student’s progress and provided feedback on implementation quality. In Ms. B’s case, the teacher asked the consultant to introduce the strategies to target students at the start; then, the teacher led subsequent and ongoing implementation of these strategies. When an initial strategy did not work, Ms. B worked with the consultant to identify additional tools to supplement or replace the strategy.

Effective Modification and Implementation of Strategies

Low-need teachers worked with consultants to tailor BRIDGE strategies to fit the needs of targeted students. For example, in Ms. H’s classroom, the teacher modified the self-monitoring procedure to increase the teacher’s role in co-monitoring students’ attention and off-task behaviors. This was designed to improve its efficacy for a student who experienced difficulty in self-reflection. Similarly, although Ms. B implemented the good news note (Blechman, Taylor, & Schrader, 1981) consistently with target students at the start of the consultation, she shifted to a daily report card (Kelley, 1990) when she and the consultant recognized that parent reinforcement was needed (e.g., “each day the students would take the card home and get it signed by the parent”). These planned modifications occurred in collaboration, maintained the original goal (e.g., improving student behavior), and preserved the effectiveness of the strategy.

Summary of Implementation in Low-Need Classrooms

Overall, these case studies reveal that, in classrooms with moderate-to-high pre-intervention emotional support and classroom organization scores, implementation was primarily focused on targeted student interactions. In addition, consultation sessions involved collaborative decision-making between the teacher and consultant. Lastly, implementation of targeted strategies was perceived to be appropriate and effective.

Discussion

Responding to calls for increased understanding of implementation processes in practice (Hoagwood et al., 2013), the current study used qualitative and quantitative data to illuminate the barriers, opportunities, and processes underlying the implementation of BRIDGE teacher consultation and coaching in urban elementary schools (Cappella et al., 2011; Cappella, Jackson, et al., 2012). Unlike many well-studied, classroom-focused intervention models, BRIDGE is implemented by existing school-based mental health staff and responsive to individual classroom and target student needs. Results from directed content analysis of teacher focus group and interview data suggest that aspects of the BRIDGE intervention model, school organization, classroom contexts, and teachers/consultants and their relationship were relevant as implementation facilitators or barriers. In addition, case study analysis of permanent products from moderate–high implementation classrooms suggests variation in consultation and coaching by initial level of classroom need. Results illuminate the need for implementation research to extend beyond simple indicators of dosage and fidelity to the multiple systems and variation in processes at play across levels of the implementation context.

Implementation Barriers and Facilitators

In the context of a classroom-randomized trial of BRIDGE effects on classrooms and students, we gathered qualitative data from teachers to understand their experience of BRIDGE implementation in their schools. Guided by Damschroder et al.'s (2009) implementation model, we coded five major themes and emergent categories within each theme.

A major focus of teachers’ comments involved specific aspects of the BRIDGE model, including concerns about the frequency and consistency of consultation and coaching, remarks on components that were appealing (e.g., classroom strategies) or challenging (e.g., self-reflection), and the presence or absence of support received from BRIDGE consultants. These themes are not surprising: prior scholarship suggests the intervention itself—including its components, logistics, and content—is a factor in initial engagement and ongoing implementation (Graczyk et al., 2006). Interestingly, given recent discussion as to whether relational support or concrete information is more important to provide in effective teacher consultation (e.g., Knotek & Hylander, 2014), in the current study, teachers spontaneously mentioned both factors when describing their interest and progress. The main barrier cited—inconsistencies in implementation dosage due to logistical constraints—points to the need to better integrate intervention with the regular schedule and daily fabric of schooling.

School organization and classroom contexts were also mentioned as barriers to implementation. Lack of support from the school (e.g., time, peer professional learning interactions, supportive culture) made it difficult for some teachers to implement BRIDGE well. Scholars suggest that a supportive and organized context (e.g., “organizational readiness”; Weiner, Amick, & Lee, 2008) is necessary for consistent and high-quality implementation of EBIs. Our data do not test organizational readiness, but do indicate that school organizational structure and culture may be critical to consider. In addition, classroom composition and practices were mentioned with relative frequency. As seen in other teacher reports (e.g., Reinke et al., 2011), teachers in the current study cited students’ problem behaviors and poverty-related stress as challenges to effective teaching. Some teachers indicated these factors made their classroom ripe for BRIDGE—a two-tier intervention focused on classrooms and target students and responsive to teachers’ strengths, needs, and practices. Others suggested the level of student need made it difficult to fully implement BRIDGE. This may add to the call for linking across all three tiers of intervention (universal, targeted, and intensive), so students with the most significant needs are receiving aligned services (Cappella et al., 2008).

Finally, teachers discussed their own willingness to receive support, their consultants’ competence, and the teacher–consultant relationship. Some research indicates that teacher traits, such as openness and emotional competence, are predictive of uptake and implementation of evidence-based practices (Durlak & DuPre, 2008), which was reflected in our data as well. In addition, and also replicating prior research (e.g., Rohrbach, Grana, Sussman, & Valente, 2006), the extent of the consultant’s expertise was seen as relevant. BRIDGE consultants ranged in their role in the school and relationship with teachers. Consultants with more competence in classroom practices and better relationships with teachers were seen as more effective implementers. This may speak to selecting staff with these characteristics (e.g., lead or mentor teachers), or alternatively, to training staff who are equipped to acquire them (e.g., school mental health personnel with teacher consultation experience).

Implementation Heterogeneity by Classroom Need

We identified four classrooms with moderate–high levels of BRIDGE implementation and varying levels of initial need to serve as the sample for a case study analysis. We examined intervention materials and fidelity tools to determine the content and pattern of implementation across classrooms with different levels of observed need.

Exploring data from high-need classrooms (low initial CLASS scores) versus low-need classrooms (moderate–high initial CLASS scores), we identified several areas of divergence in implementation. First, in high-need classrooms, consultant–teacher dyads focused on a limited number of CLASS dimensions. The goal in BRIDGE was for each dyad to cycle through several dimensions of emotional support and classroom organization over the intervention period, moving to a new dimension once assessments indicated sufficient progress in the current one. High-need classrooms focused on approximately two dimensions (e.g., behavior management, positive climate) for the entire period. Results suggest the difficulties observed by consultants and/or raised by teachers within these dimensions of practice were substantial enough to require multiple cycles of assessment, feedback, and action. Thus, depth of intervention was greater in high-need classrooms and breadth of intervention was greater in low-need classrooms.

Second, high-need classrooms focused on implementation of classwide strategies to prevent behavior problems and promote a positive classroom climate (e.g., Good Behavior Game; positive peer reporting). Low-need classrooms emphasized implementation of targeted strategies to impact the behavior and well-being of specific students with behavioral problems (e.g., daily report card, self-monitoring). This finding is aligned with the public health model in schools (Kutash et al., 2006; Nastasi, 2004) whereby universal intervention is implemented first to strengthen the setting, and targeted or intensive intervention follows to address specific needs. BRIDGE consultants were not instructed to administer intervention in this manner; but it may, in fact, benefit BRIDGE and other interventions to be explicit in these processes and goals in the future.

Third, teachers in high-need classrooms had difficulty using strategies as intended and with fidelity, whereas use of targeted strategies in low-need classrooms was generally appropriate and effective. This may be a reflection of the difficulty of implementation of classwide versus targeted strategies: The Good Behavior Game requires a broader and deeper set of skills to implement well than does a self-monitoring card (see Becker, Bradshaw, Domitrovich, & Ialongo, 2013). Alternatively, it may reflect the level of teacher skills and the prevalence of student behavioral problems in the classroom. Teachers in high-need classrooms are struggling to use their skills to meet classroom demands and thus require more active and ongoing coaching to implement complex classwide strategies well. Teachers in low-need classrooms have the skills needed and/or a more manageable group of students and therefore may be better able to work with their consultant to choose targeted strategies and implement them with minimal support. The BRIDGE model was designed to provide flexible coaching. However, it is clear that the intensity of implementation support (i.e., coaching) is greater for consultants working in high-need classrooms. It may be helpful in the future to consider classroom need prior to creating teacher–consultant pairs in order to ensure that consultants have the time and capability to meet classroom needs.

Limitations and Future Directions

Several limitations are important to consider. First, the sample is small and the study involves one public school system and intervention model. These results are not generalizable beyond this set of schools, or this teacher consultation and coaching model. The current study findings are aligned with other research, and the intervention model is based on a widely used observation system and evidence-based classroom strategies; yet, other research is needed to understand implementation processes across contexts and populations. Second, although this study has a strong foundation in implementation science and school mental health, findings are descriptive and exploratory and should be interpreted as such. Third, due to the need for sufficient data to analyze, the classroom case studies included only classrooms with moderate–high implementation. Future work should use other kinds of data to enable inclusion of low-implementation classrooms in case study analysis. Fourth, both teachers and consultants were critical to BRIDGE implementation; yet, this study focuses solely on teachers. In a future trial, it is critical to incorporate consultant strengths, concerns, and experiences into the analysis. Lastly, given that BRIDGE is based on a specific system of understanding effective classrooms (CLASS), the classrooms in the case studies were split by high and low levels of classroom need as assessed by the CLASS. However, it may be important in future work to define and measure classroom needs differently to determine whether similar implementation patterns emerge.

Still, this study has practical implications for classroom interventions such as BRIDGE and for implementation science. First, the current findings suggest teacher consultation and coaching should be integrated not only into the daily activities of existing school personnel but also into school organizational structures (e.g., meeting times, professional development) and their social and instructional climates. Second, models that focus on both relational support and functional support can be implemented. In fact, these may engage different teachers, perhaps leading to more implementation overall. Third, in high-poverty schools, prevention and intervention scientists need to attend more fully to the third (intensive) tier. Given the levels of student need, linking two-tier programs to the third tier may reduce teacher stress and increase implementation of effective practices. Fourth, classroom interventions may be more efficient when priorities and personnel are explicitly based on initial classroom need. For example, high-need classrooms may benefit from classwide coaching from a lead or mentor teacher; low-need classrooms may benefit from targeted student consultation from a school mental health professional.

Finally, the current study is an example of how to use existing data from intervention trials to illuminate the process of implementation. As a field, we know a great deal about effective programs and practices; researchers increasingly document implementation outcomes such as dosage and adherence. We know less, however, about patterns, processes, and heterogeneity in implementation, particularly for intervention models delivered by existing school and community resources. Conducting systematic research on implementation processes using commonly available data and guided by strong theoretical frameworks will inform future implementation research and installation of evidence-based implementation systems. Taking this step will move the field beyond a basic understanding of implementation outcomes toward a theoretically rich, data-based, and practically relevant understanding of effective implementation of EBIs in schools.