Abstract
Although it is widely recognized that variation in implementation fidelity influences the impact of preventive interventions, little is known about how specific contextual factors may affect the implementation of social and behavioral interventions in classrooms. Theoretical research highlights the importance of multiple contextual influences on implementation, including factors at the classroom and school level (Domitrovich et al., Advances in School Mental Health Promotion, 1, 6–28, 2008). The current study used multi-level modeling to empirically examine the influence of teacher, classroom, and school characteristics on the implementation of classroom-based positive behavior support strategies over the course of 4 years. Data were collected in the context of a 37-school randomized controlled trial examining the effectiveness of school-wide Positive Behavioral Interventions and Supports. Multi-level results identified several school-level contextual factors (e.g., school size, behavioral disruptions) and teacher-level factors (perceptions of school organizational health and grade level taught) associated with variability in the implementation of classroom-based positive behavior supports. Implications for prevention research and practice are discussed.
Introduction
With the increasing emphasis on measuring implementation and the continued formalization of implementation science (Spoth et al. 2013), greater attention is being paid to implementation in research trials. However, this is often done to determine whether attenuated intervention effects could be due to poor implementation (Dane & Schneider 1998; Domitrovich & Greenberg 2000; Gresham et al. 1993). This paper goes beyond describing the implementation process by examining the influence of teacher, classroom, and school characteristics on the implementation of positive behavior support strategies over time within the context of a randomized controlled trial (RCT) testing the effectiveness of school-wide Positive Behavioral Interventions and Supports (SW-PBIS; Horner et al. 2005; Sugai & Horner 2002, 2006). The current study serves as an important test of the multi-level, contextual implementation model proposed by Domitrovich et al. (2008), and thus has important implications for implementation science and the high-quality use of classroom-based supports.
Factors Associated with Implementation Quality
Although it is widely recognized that garnering stakeholder “buy-in” and preparing an organization (e.g., a school) for change by providing time, staff, resources, and materials are critical to the successful implementation of programs (Adelman & Taylor 1997), less is known about how specific baseline contextual elements may impact the implementation of social and behavioral interventions. The literature supports the notion that multiple levels of context (e.g., classroom and school) act as important facilitators of, or obstacles to, implementation and therefore need to be considered when examining intervention effects (Berkel et al. 2011; Domitrovich et al. 2008; Durlak & DuPre 2008). Specifically, Domitrovich et al.’s (2008) theoretical model asserts that there are two important elements to implementation: characteristics of the intervention itself and the support system that promotes implementation. The model asserts that one must consider the core components, standardization, and delivery of both the intervention and the support system (Domitrovich et al. 2008). The extent to which implementers respond to the intervention and support system, combined with the quality with which an intervention is implemented, will be influenced by three hypothesized ecological levels: the individual implementer level (e.g., a teacher), the school level, and the macro (e.g., district policy) level. For example, research suggests a variety of important contextual factors that are concurrently associated with implementation quality, including the following: (a) the school’s climate (Beets et al. 2008; Domitrovich et al. 2009; Glisson & Green 2006), including features of the work environment, relationships between teachers, and principal leadership (Domitrovich et al. 2008; Payne et al. 2006; Ringeisen et al. 2003); (b) school-level composition and student indicators that often signal disorder (e.g., student mobility and school size; Domitrovich et al. 2008; Payne et al. 2006); (c) the attitudes and beliefs of teachers (Beets et al. 2008), including the extent to which teachers’ philosophies on education are consistent with the intervention (Rohrbach et al. 1993), the perception that students’ needs will be met by the intervention (Reimers et al. 1987), and teachers’ perceptions that implementation of the intervention is feasible (Pankratz et al. 2002; Ringwalt et al. 2003); and (d) the level of training provided to school staff (Dane & Schneider 1998; Dusenbury et al. 2003).
Much of the extant research has focused on the associations of these factors with implementation quality at a single time point. It is unclear whether contextual factors will impede or promote growth in implementation over time. For example, school disorder could serve as an obstacle to quality implementation (e.g., as stated by Gottfredson et al. 2002), or it could provide the opportunity for greater growth because there is more room (Bradshaw et al. 2009) or motivation for improvement (Pas & Bradshaw 2012). Furthermore, few studies have examined implementation longitudinally or tested the association of changes in implementation with teachers’ perceptions of their school environment and demographics. The exploration of multiple levels of contextual influence on implementation quality is particularly important when considering the SW-PBIS model, as it targets both the school and classroom context, highlighting the significance of potential influences at both levels.
School-Wide Positive Behavioral Interventions and Supports
The current study focused on a widely used prevention model called SW-PBIS (Sugai & Horner 2002, 2006), which is a non-curricular, school-based approach implemented in over 20,000 schools in the USA. It incorporates behavioral, social learning, and organizational behavioral principles to promote changes in staff behavior as a means for positively impacting student discipline, behavior, and academic outcomes. SW-PBIS is focused on formalizing and improving systems, implementing evidence-based practices, and promoting data-based decision making (Horner et al. 2010). The multi-level prevention model is implemented across classrooms and non-classroom settings with the aim of preventing disruptive behavior and enhancing the school’s organizational climate. SW-PBIS in Maryland utilizes a unique training protocol whereby school teams are trained by the state and then the school-based team trains teachers within the school (see Barrett et al. 2008; Bradshaw & Pas 2011 for additional information on the state infrastructure). Though advantageous in ways, this training model leads to some concerns about gaps in implementation at the classroom level.
Since SW-PBIS implementation occurs at both the school and classroom levels (Barrett et al. 2008), it is important to assess fidelity at both levels. Yet implementation research on SW-PBIS has focused on school-level implementation, with little consideration of implementation at the classroom level. For example, state-wide scale-up research on the influence of school-level factors has demonstrated links between student mobility, enrollment, and the percent of certified teachers and school-level implementation quality (Bradshaw & Pas 2011; Pas & Bradshaw 2012). Moreover, implementation quality generally improves over time. Scale-up research has also shown that schools with more student suspensions are more likely to seek training (Bradshaw & Pas 2011). Some of these same contextual influences may also pertain to implementation quality within the classroom context, though this remains unexamined.
Current Study
The current study draws on data from a randomized controlled effectiveness trial of the SW-PBIS model in 37 elementary schools to examine the growth in implementation of classroom-based positive behavior support strategies. Given the RCT design and the fact that some elements of SW-PBIS are likely implemented in schools which lack formal training in the model (Bradshaw et al. 2008a, b), we have the rare opportunity to explore implementation of program features in both trained and non-trained schools. Previous outcomes-focused studies from this RCT reported a number of significant treatment effects of SW-PBIS, including student suspensions and office referrals (Bradshaw et al. 2010), teacher ratings of students’ aggressive, disruptive, and prosocial behaviors (Bradshaw et al. 2012a, b), and teacher ratings of school climate (Bradshaw et al. 2009). However, there has been limited consideration of SW-PBIS implementation quality in the classroom or implementation variation based on school- and classroom-level contextual factors.
The current study examined the potential influences of teacher-, classroom-, and school-level contextual factors on classroom-based implementation of positive behavior support strategies (Berkel et al. 2011; Domitrovich et al. 2008; Durlak & DuPre 2008). Given the longitudinal design, whereby data from teachers across 4 years were analyzed, we were interested in multi-level contextual influences on changes in implementation over time. Since SW-PBIS is the confluence of a variety of basic behavior management strategies implemented across all school contexts, it is important to examine how implementation might also occur in classrooms where there has been no formal training of the SW-PBIS model (Bradshaw et al. 2008a, b). Therefore, fidelity was assessed in both treatment and comparison schools, and thus, we refer to the fidelity outcome as “positive behavior support strategies.”
Based on the Domitrovich et al. (2008) model, we hypothesized that classroom composition (e.g., student behavior (Kellam et al. 1998) and class size) and teacher perceptions of the school environment, as well as school-level indicators of disorder and support for a behavioral approach would be associated with the implementation of positive behavior support strategies (Domitrovich et al. 2008; Gottfredson et al. 2002; Payne et al. 2006). For example, we expected that classrooms where students were highly disruptive (Kellam et al. 1998) would have poorer baseline implementation of positive behavior support strategies, as classroom-level disorganization may impede initial adoption of positive behavioral support strategies. In addition, we hypothesized that teachers’ more favorable perception of school climate would be associated with better initial implementation (Beets et al. 2008; Domitrovich et al. 2008; Durlak & DuPre 2008), as this may signal a more favorable school environment which is ready for implementation (Adelman & Taylor 1997). At the school level, we hypothesized that school-wide implementation and leadership for SW-PBIS would be positively associated with better classroom-based implementation over time, as this indicates school-level support structures for implementation (e.g., Domitrovich et al. 2008; Ringeisen et al. 2003). In addition, we hypothesized that schools with greater organizational challenges (e.g., high mobility, student-to-teacher ratio, and suspensions) would have a more difficult time with implementation over time (Durlak & DuPre 2008; Gottfredson et al. 2002; Payne et al. 2006). We also explored for potential interactions between intervention status and school- and classroom-level factors to determine if any of the contextual influences were more pronounced in the formally SW-PBIS trained schools.
Method
Design
Data come from 37 elementary schools that were involved in a group randomized controlled trial (Murray 1998) testing the effectiveness of the universal SW-PBIS model. All schools agreed to enroll in the trial and were matched on select baseline demographics (e.g., school enrollment); 21 schools were randomized to the treatment condition and 16 to the comparison condition. The comparison schools were trained in SW-PBIS at the end of the 4-year trial.
Training
Treatment schools formed SW-PBIS teams (i.e., 5–6 members including teachers, administrators) who attended an initial 2-day summer training led by the SW-PBIS Maryland State Leadership Team and co-led by one of the developers of SW-PBIS. The SW-PBIS teams, in turn, trained the teachers and other school-based staff (i.e., using a “training of trainers” model). In each subsequent year, school-based teams attended 2-day booster training events to ensure sustained training and implementation. In addition, monthly on-site support and technical assistance were provided by a behavior support coach (e.g., school psychologist) who was trained by the state and supervised by the district (see Barrett et al. 2008; Bradshaw & Pas 2011; Bradshaw et al. 2012a, b for additional information on the training).
Participants
The participants were 1,056 teachers in 37 elementary schools involved in the RCT testing the effectiveness of SW-PBIS. The vast majority of the teachers were female (92 %) and White (89 %). About half of the teachers were young (20–30 years old; 48 %) and taught the upper (grades 3–5) elementary grades (51 %). The schools had a diverse student population; on average, about 35 % of the students were African American, and about 40 % of students received free and reduced meals. See Table 1 for further detail regarding teachers and schools.
Measures
Classroom-Based Implementation of Positive Behavior Support Strategies
Classroom teachers in all 37 schools completed the classroom systems subscale of the Effective Behavior Support Survey (EBS; referred to as EBS-classroom throughout; Sugai et al. 2000), which is a 12-item scale that measured the use and quality of positive behavioral support strategies (e.g., having positively worded and clear statements describing rules and expectations). This measure has been used as an indicator of classroom management in a number of studies (e.g., Bohanon et al. 2006; Bradshaw et al. 2010; Hagan-Burke et al. 2005; Mitchell & Bradshaw 2013; Safran 2006) and demonstrates adequate internal consistency (Cronbach’s alpha [α] = .83). Prior research confirmed the one-factor structure, and other studies have demonstrated predictive validity with student ratings of school climate and low rates of student discipline problems (Mitchell & Bradshaw 2013). Teachers indicated whether each item was “in-place” within their classroom on a scale of 0–2 (i.e., 0 = not in place, 1 = partially in place, and 2 = in place); the measure was scored by dividing the total earned points by the total possible points. The teachers’ continuous EBS scores (ranging from 0 to 100 %) over the five possible time points were used as the repeated measures outcome variable in this study. Average scores and standard deviations are displayed in Table 1.
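The EBS-classroom scoring rule described above (12 items rated 0–2, with the score computed as total earned points divided by total possible points) can be sketched as follows. This is a minimal illustration; the function name and example ratings are our own, not part of the EBS instrument.

```python
# Sketch of the EBS-classroom scoring rule: each of the 12 items is
# rated 0 (not in place), 1 (partially in place), or 2 (in place), and
# the score is earned points divided by possible points, expressed as
# a percentage. The example ratings below are illustrative.

def score_ebs_classroom(item_ratings, points_per_item=2):
    """Return the percent-implemented score for one teacher."""
    if any(r not in (0, 1, 2) for r in item_ratings):
        raise ValueError("ratings must be 0, 1, or 2")
    earned = sum(item_ratings)
    possible = points_per_item * len(item_ratings)
    return 100.0 * earned / possible

# A teacher endorsing 8 items fully, 3 partially, and 1 not at all
# earns 19 of 24 possible points:
ratings = [2] * 8 + [1] * 3 + [0]
print(round(score_ebs_classroom(ratings), 1))
```

Averaging these continuous percent scores over the five possible time points yields the repeated-measures outcome used in the models.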
School Organizational Health
Teachers completed the 37-item Organizational Health Inventory (OHI; Hoy & Feldman 1987), which assessed five aspects of the functioning of the schools: teacher affiliation (9 items, α = .94), academic emphasis (5 items, α = .87), collegial leadership (10 items, α = .95), resource influence (7 items, α = .89), and institutional integrity (6 items, α = .90). Items were endorsed on a 4-point Likert-type scale ranging from rarely occurs to very frequently occurs. An overall score for the OHI was calculated by averaging the responses on all 37 items. This measure has been used in various studies, and a factor analysis has confirmed the factor structure (Hoy & Tarter 1997; Hoy et al. 1991).
Classroom Indicator of Student Disruptive Behavior
The Teacher Observation of Classroom Adaptation-Checklist (TOCA-C; Koth et al. 2009) was used to assess the baseline level of student disruption in each classroom. The TOCA-C is a checklist version of the TOCA-R used in several prevention trials (e.g., Ialongo et al. 1999; Petras et al. 2004). It included a measure of students’ aggressive and disruptive behaviors (fights; 9 items, α = .92), using a Likert-type scale (1 = never to 6 = almost always). Prior research demonstrated the test-retest and internal consistency of the TOCA-C (for a review, see Koth et al. 2009) and the predictive validity of the aggressive-disruptive behavior subscale (e.g., Petras et al. 2004). Given our interest in classroom disorder in the current study, we created a classroom-level average score on the aggressive and disruptive behaviors subscale.
Number of Students in the Classroom
This was calculated as an indicator of class size.
Teacher Demographics
Classroom teachers completed an information form providing their age, gender, race/ethnicity, grade taught, and number of years teaching in the school.
School-Wide Implementation of Positive Behavior Support Strategies
As described above, the EBS also contained a scale assessing perceptions of school-wide implementation (referred to as EBS-school-wide). Specifically, the EBS-school-wide included 15 items that assessed the degree to which school-wide behavioral supports were in place (e.g., school-wide rules are explicitly stated; problem behaviors are clearly defined; α = .90). Staff indicated whether each item was in-place in their school (i.e., scale of 0–2 for each item), and scores reflected the proportion of earned points. Data from all staff were averaged to the school level and used as a predictor variable in this study. The school average score is presented in Table 1.
External Assessment of School-Level SW-PBIS Fidelity
Annual assessments of SW-PBIS implementation using the validated School-wide Evaluation Tool (SET; Horner et al. 2004; Sugai et al. 2001) were also conducted in all 37 schools by trained external observers, hired and trained by the research team, who were unaware of the schools’ implementation status. This is a widely used measure completed by conducting a structured tour of the school, brief interviews with randomly selected school personnel and students, and a review of school documentation and posted materials (e.g., posting of behavioral expectations). The SET provides an overall score, representing the averaged proportion of elements implemented across the following dimensions: (a) behavioral expectations defined, (b) expectations taught, (c) on-going system for rewarding behavioral expectations, (d) system for responding to behavioral violations, (e) monitoring and decision making, (f) management, and (g) district level support. The overall possible score ranges from 0 to 100 %, with 80 % as the recommended cutoff for fidelity (Bradshaw et al. 2008a, b). The overall score is typically used as an aggregated measure of implementation of SW-PBIS, rather than simultaneously modeling all seven SET subscales as different predictor variables. Data from the current RCT demonstrated high internal consistency (α = .96). See Table 1 for data regarding SET scores.
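The SET scoring described above, an overall score formed by averaging the percent implemented across the seven dimensions, with 80 % as the recommended fidelity cutoff, can be sketched as follows. The dimension names are paraphrased and the subscale values are illustrative, not data from the trial.

```python
# Hedged sketch of the SET overall score: the percent implemented on
# each of the seven dimensions is averaged, and schools at or above
# 80 % are considered to be implementing with fidelity. The subscale
# values below are illustrative, not actual SET data.

SET_DIMENSIONS = (
    "expectations_defined", "expectations_taught", "reward_system",
    "violation_system", "monitoring", "management", "district_support",
)

def set_overall(subscale_percents):
    """Average the seven dimension scores (each on a 0-100 % scale)."""
    if len(subscale_percents) != len(SET_DIMENSIONS):
        raise ValueError("expected one score per SET dimension")
    return sum(subscale_percents) / len(subscale_percents)

def meets_fidelity(overall, cutoff=80.0):
    """Apply the recommended 80 % fidelity cutoff."""
    return overall >= cutoff

scores = [90, 85, 75, 80, 70, 95, 65]  # illustrative dimension scores
overall = set_overall(scores)
print(overall, meets_fidelity(overall))
```

Using the single overall score, rather than modeling all seven subscales as separate predictors, mirrors how the SET is typically used as an aggregate fidelity indicator.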
School Demographics
We obtained publicly available baseline demographic data from the Maryland State Department of Education regarding the percent of African American students in each school, the mobility rate (i.e., number of entrances and withdrawals divided by total enrollment), the student-to-teacher ratio, and the percent of students who were suspended.
Analyses
Longitudinal, three-level hierarchical linear modeling analyses were conducted using the HLM 7.1 software (Raudenbush et al. 2011) to examine the associations of teacher, classroom, and school factors with up to five repeated measures of teacher-reported, classroom-based positive behavior support strategies over the course of 4 years. Only time was modeled as a covariate at level 1. HLM uses full-information maximum likelihood to account for missing data at level 1; therefore, teachers were retained in the analyses even when they were missing data at one or more time points (Raudenbush et al. 2011).
At level 2, teacher characteristics (i.e., age [20–30 years old = 1, 31 or older = 0], gender [female = 1, male = 0], race [White = 1, all other races = 0], grade taught [3rd to 5th grade = 1, kindergarten to 2nd grade = 0], and years of experience in the school), teacher perceptions of organizational health, and classroom composition (i.e., baseline class size and average disruptive behavior TOCA-C score) were modeled both at the intercept and on the slope of time. Dichotomous variables (i.e., age, gender, race, and grade taught) were uncentered; all other (continuous) variables were centered on the grand mean (Luke 2004).
At level 3, school-level characteristics (i.e., mobility, student-to-teacher ratio, percent African American, percent suspensions, and baseline scores on the school-wide EBS and SET) as well as intervention status (SW-PBIS vs. comparison) were modeled on the slope. Finally, we examined the interactive effects of intervention status with school demographics, including the mobility rate, student-to-teacher ratio, percent of African American students, and suspensions on change in the EBS-classroom scores over time. All predictor variables at this level were grand-mean centered, except intervention status (Enders & Tofighi 2007). Model fit indices (AIC and BIC) for the unconditional and conditional models are reported on the tables and are interpreted such that smaller values indicate better fit (Raudenbush et al. 2011).
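The centering scheme described above can be sketched as follows: dichotomous predictors are left uncentered, while continuous predictors are centered on the grand mean so that coefficients are interpreted relative to the average teacher or school. The variable names and values below are illustrative.

```python
# Minimal sketch of the centering rules described above: dichotomous
# predictors (e.g., grade band, intervention status) enter the model
# as-is, while continuous predictors (e.g., OHI scores) are centered
# on the grand mean. Names and values are illustrative.
from statistics import mean

def grand_mean_center(values):
    """Subtract the grand mean so a coefficient is interpreted
    relative to the average unit in the sample."""
    m = mean(values)
    return [v - m for v in values]

ohi = [2.8, 3.1, 3.4, 2.9, 3.3]   # continuous: center on grand mean
grade_3_to_5 = [1, 0, 1, 1, 0]    # dichotomous: leave uncentered

centered_ohi = grand_mean_center(ohi)
print([round(v, 2) for v in centered_ohi])  # deviations from the mean
```

After centering, the values sum to (approximately) zero, so the model intercept reflects a teacher at the sample average on all continuous predictors.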
Prior to analyzing data in HLM, the covariates were examined in SPSS to ensure that collinearity was not a concern (Tabachnick & Fidell 2001). See Table 2 for correlations between the teacher-level variables (i.e., outcomes and level 2 predictors). Once in HLM, the variables were added one at a time to ensure that changes in the direction of variable effects did not occur, which is another means of detecting collinearity (Raudenbush & Bryk 2002).
Results
Variance Accounted for by Model
The fully unconditional model revealed substantial between-school variability in the EBS-classroom (i.e., implementation of positive behavior support strategies). Specifically, 27 % of the variability was between schools (i.e., intra-class correlation [ICC] = .27). The final model accounted for 44 % of the between-school variability, with a final ICC of .15. In contrast, the between-teacher (and between-classroom) variability (i.e., sigma-squared) was relatively unchanged by this model. The AIC and BIC indices demonstrated better fit for the final conditional model than for the unconditional model (see Table 3).
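The variance partitioning reported above follows the standard ICC logic: the ICC is the share of total outcome variance that lies between schools. A minimal sketch, using illustrative variance components chosen to reproduce the reported baseline ICC of .27 rather than actual model estimates:

```python
# Sketch of the intra-class correlation (ICC) computation: the ICC is
# the between-school variance divided by the total (between-school plus
# teacher/classroom-level) variance. The variance components below are
# illustrative values chosen to yield the reported ICC of .27; they are
# not estimates from the fitted model.

def icc(between_school_var, within_school_var):
    """Proportion of total variance that is between schools."""
    return between_school_var / (between_school_var + within_school_var)

tau = 27.0     # between-school variance component (illustrative)
sigma = 73.0   # teacher/classroom-level variance (illustrative)

print(round(icc(tau, sigma), 2))  # 0.27

# Adding school-level predictors shrinks the between-school component,
# lowering the ICC (the final model's ICC was .15).
```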
Changes in EBS Over Time
With regard to the HLM results, when only the time variable was included in the unconditional model, its effect was significant, indicating an increase in EBS scores over time. In the final model, the effect for time was no longer significant.
Effects of Teacher and Classroom Variables
The multi-level analyses indicated significant associations of grade taught and perceptions of the school environment with EBS-classroom scores. Specifically, teachers in grades 3–5 had a lower intercept score for use of positive behavioral strategies (β040 = −2.79, p < .05) than teachers of younger students (i.e., grades K–2; see Table 2). Teachers with a higher-than-average perception of the school environment reported nearly one standard deviation higher scores on the EBS-classroom scale (β080 = 12.15, p < .01). No other teacher or classroom variables were significantly related to the EBS-classroom intercept scores.
Only teachers’ perceptions of the school’s organizational health (i.e., OHI scores) were significantly related to growth in EBS-classroom scores over time. Teachers with more favorable perceptions showed less growth in EBS-classroom scores over time than teachers with poorer perceptions of the school’s organizational health (β180 = −1.27, p < .01); this difference corresponds to one-tenth of a standard deviation in growth each year.
Effects of School-Level Variables
Results indicated that a number of school-level variables were modestly related to teacher implementation of positive behavior support strategies over time. Specifically, a higher student-to-teacher ratio (β107 = .11, p < .01), a higher percent of African American students (β108 = .02, p < .01), and a higher SET score at baseline (β1011 = .02, p < .05) were associated with slightly greater growth in EBS-classroom scores over time. In addition, teachers in schools with a higher suspension rate reported less growth over time (β109 = −.07, p < .05). The EBS-school-wide subscale scores approached significance in relation to EBS-classroom scores. The only school-level variable without a significant main effect on EBS-classroom scores over time was student mobility.
Treatment Main and Interaction Effects
School-level intervention status was also modeled on the slope of time. As expected, the slope term indicated a significant positive intervention effect when controlling for all other variables, such that teachers in SW-PBIS schools experienced significantly greater growth in their classroom-based implementation, by approximately 3.3 points on a 100-point scale at each assessment point, compared to teachers in comparison schools (β101 = 3.27, p < .05). This finding corresponds to a difference of one-quarter of a standard deviation.
Our exploration of interaction effects between intervention status and mobility yielded a non-significant effect; however, the interactions between intervention status and the three other school-level covariates were significant. Specifically, the interactions between intervention status and student-to-teacher ratio (β102 = −.09, p < .05) and percent of African American students (β104 = −.02, p < .01) were significant, indicating that teachers in SW-PBIS schools with more students per teacher and a greater proportion of African American students reported the least growth in implementation over time. The interaction between intervention status and suspension rate was positive (β105 = .09, p < .05), indicating that teachers in SW-PBIS schools with higher suspension rates reported greater growth in EBS-classroom scores over time.
Discussion
There is increasing interest in empirical investigations of contextual influences on the implementation of school-based programs (Domitrovich et al. 2008). The growing number of school-wide prevention models that also include classroom-based components (e.g., SW-PBIS) provides an opportunity for exploring these associations. Yet much of the research examining SW-PBIS has focused on implementation at the school level, with limited attention to the important classroom context. The SW-PBIS implementation framework is a particularly important one to examine within the context of classrooms, given that it uses a “training of trainers” model, whereby the school-based team leads the training of other school staff. Thus, there is potential for slippage between the initial training of the team by expert trainers and the classroom-based implementation by teachers. As such, the outcome of interest in the current study was classroom-based implementation of positive behavior support strategies, which is a unique contribution to the SW-PBIS literature as well as to the broader field of implementation science. Another unique contribution of this study is that implementation of positive behavior strategies was measured in both treatment and comparison schools. This allowed for the examination of the causal effect of training in SW-PBIS on the use of these strategies, as well as the non-experimental examination of how teacher-, classroom-, and school-level factors related to the use of positive behavior strategies in the classroom more broadly.
Controlling for a number of teacher demographics, only grade level taught and perceptions of the school environment were significantly related to the intercept. Results suggest that a higher level of positive behavioral supports was provided to younger elementary students than to students in the upper elementary grades. This could be because there is a developmental expectation in the early elementary grades that teachers will establish school and behavioral readiness skills, allowing more time to be devoted to positive behavioral supports. In addition, state standardized assessments begin in grade 3, which competes with the time available for a behavioral curriculum. Results also showed that teachers with more favorable baseline perceptions of the school’s organizational health had markedly higher scores (i.e., 12 points higher, on a scale of 0 to 100 %) at the intercept. These teachers also exhibited less growth over time. Previous research (Bradshaw et al. 2008a, b) similarly demonstrated that schools which started with higher scores on the OHI showed less improvement in school-wide implementation over time, suggesting they had less room for improvement.
Whereas only two of the teacher and classroom factors were associated with implementation, nearly all of the school-level variables examined were associated with growth in classroom implementation of positive behavior support strategies. At the school level, the baseline SET score was positively related to teachers’ implementation of positive behavior strategies in the classroom over time. Furthermore, training in and implementation of SW-PBIS was significantly related to improved implementation of positive behavior strategies over time; this finding is consistent with previously published findings from this trial (Bradshaw et al. 2010). This indicates that school-wide adoption of these principles may positively impact individual teachers’ behavior in their classrooms. In addition, the student-to-teacher ratio and the percent of African American students were also related to classroom-based implementation over time. Perhaps the teachers in these schools were more motivated to implement behavioral supports.
On the other hand, student discipline appeared to be an obstacle to classroom-based implementation in both comparison and intervention schools. It is possible that teachers in schools with higher suspension rates, suggesting reliance on punitive behavioral responses, were less adept at implementing positive supports within the classroom and may need additional supports for successful implementation of positive behavioral responses. In fact, teachers in SW-PBIS schools with higher suspensions showed greater growth in the implementation of positive behavior strategies over time, supporting this hypothesis, and demonstrating the potentially protective nature of SW-PBIS training. With explicit training in creating behavioral expectations and responses, teachers improved their classroom-based implementation of positive behavior support strategies, whereas teachers in comparison schools showed less improvement, though the magnitude of these effects was small.
Other interaction effects were also small but revealed that teachers in SW-PBIS schools with higher student-to-teacher ratios and higher percentages of African American students showed less growth in their classroom implementation of positive behavioral supports over time as compared to both SW-PBIS schools with lower rates and comparison schools. In the majority of schools trained in SW-PBIS, the EBS-classroom scores improved over time, regardless of the student-to-teacher ratio and the percent of African American students in the school. However, these improvements were significantly smaller than those observed in comparison schools and in other SW-PBIS schools. Further research is needed to tease apart why some potential “obstacles” to implementation (e.g., a higher student-to-teacher ratio) were associated with a positive main effect, but a negative interaction effect, on implementation over time. It should also be noted that while there were significant differences between schools on the EBS-classroom, school averages on the classroom-based EBS were not poor and were in line with earlier findings regarding SET scores (e.g., Bradshaw et al. 2008a, b; 2010).
The current multi-level model predicting classroom-based EBS scores best accounted for between-school variability, suggesting that the school context is important for the classroom-based implementation of evidence-based practices, and perhaps of the interventions themselves. It also indicates that the measured and modeled variables were well specified. This finding is consistent with previous research asserting that consideration of multiple levels is important in promoting classroom-based implementation of evidence-based practices (Domitrovich et al. 2008; Han & Weiss 2005). To ensure that treatment status did not account for all of this explained variance, the ICCs for the final model with and without treatment status were compared; interestingly, the ICC was nearly identical whether treatment status was included or excluded. A reduction in the AIC and BIC from the fully unconditional model to the final model also supports the improved fit that resulted from adding the modeled variables.
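The logic of this ICC comparison can be sketched as follows. The variance components used here are hypothetical illustrations, not the study’s actual estimates; the ICC is simply the share of total variance attributable to between-school differences.

```python
def icc(tau00: float, sigma2: float) -> float:
    """Intraclass correlation coefficient: the proportion of total
    variance attributable to differences between clusters (here, schools).
    tau00 = between-school variance; sigma2 = residual variance."""
    return tau00 / (tau00 + sigma2)

# Hypothetical variance components (NOT the study's estimates):
icc_unconditional = icc(tau00=0.08, sigma2=0.32)  # fully unconditional model
icc_final = icc(tau00=0.03, sigma2=0.30)          # final conditional model

# A smaller ICC in the final model indicates that the added school-level
# predictors absorbed much of the between-school variability.
print(f"{icc_unconditional:.2f} -> {icc_final:.2f}")
```

Comparing this statistic for a final model estimated with and without a treatment-status predictor shows how much of the between-school variance that single variable accounts for.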
Unfortunately, changes in sigma-squared (i.e., between-classroom variability) were less notable, and fewer teacher and classroom variables were significant, implying that more work needs to be done to identify classroom-level factors that affect the use of positive behavior strategies in the classroom. The implementation and sustainability literature (Domitrovich et al. 2008; Han & Weiss 2005) suggests that other variables regarding the teacher (e.g., knowledge, skills, job satisfaction and burnout, openness to innovation) and the classroom (e.g., objective measures of student behavior) are needed to account for variability in implementation quality. These data were not available in this study; future research should devote resources to assessing teacher and classroom variables more explicitly than is typical.
Strengths and Limitations
As noted earlier, a strength of this study is the inclusion of a measure of classroom-based, rather than school-based, implementation of positive behavior strategies. On the other hand, the reliance on teacher self-report is a limitation; a more objective measure of implementation would be preferable. However, given the magnitude of this study (i.e., over 1,000 teachers in 37 schools across 4 years), this is a difficult standard to achieve. The EBS measure has demonstrated convergent validity with the SET (e.g., Bohanon et al. 2006; Hagan-Burke et al. 2005; Safran 2006), which is a strength given that the SET is conducted by external observers. The EBS has also demonstrated predictive validity with student ratings of school climate and student discipline (Mitchell & Bradshaw 2013). Because the RCT was focused on the effects of SW-PBIS training and implementation on student, teacher, and school outcomes, there was a limit to the number of variables that could be collected regarding the teachers and their classrooms. The available variables were therefore selected to ensure proper specification of the statistical models; however, additional data are needed to better account for variability in the use of positive behavior strategies across classrooms. Finally, although HLM is robust to missing data at level 1 when level 1 consists of repeated measures over time, sensitivity analyses were conducted to explore whether teacher turnover influenced the findings; these analyses indicated that teacher turnover did not alter the pattern of findings.
Implications for Prevention Science
The findings from this study support the model by Domitrovich et al. (2008), which drew on the existing evidence base to identify a rich set of teacher-, classroom-, and school-level contextual factors that may influence the implementation of classroom-based supports. Particularly noteworthy is the large number of school-level effects detected relative to teacher-level effects. This suggests that prevention scientists aiming to implement preventive interventions and practices in the classroom need to consider the school context carefully when gauging readiness to implement. These findings also suggest that variability in teacher implementation cannot be anticipated or addressed simply by considering easily attained information such as a teacher’s gender and years of experience. In addition, the finding that school-wide training and supports provided by the school team to the broader population of school staff can affect classroom-based implementation is encouraging, although more research is needed that specifically examines different training models (e.g., direct training of teachers) in relation to school contextual influences.
References
Adelman, H. S., & Taylor, L. (1997). Toward a scale-up model for replicating new approaches to schooling. Journal of Educational & Psychological Consultation, 8, 197–230.
Barrett, S. B., Bradshaw, C. P., & Lewis-Palmer, T. (2008). Maryland statewide PBIS initiative: Systems, evaluation, and next steps. Journal of Positive Behavior Interventions, 10, 105–114.
Beets, M. W., Flay, B. R., Vuchinich, S., Acock, A. C., Li, K.-K., & Allred, C. (2008). School climate and teachers’ beliefs and attitudes associated with implementation of the Positive Action program: A diffusion of innovations model. Prevention Science, 9, 264–275.
Berkel, C., Mauricio, A. M., Schoenfelder, E., & Sandler, I. N. (2011). Putting the pieces together: An integrated model of program implementation. Prevention Science, 12, 23–33.
Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Anderson-Harriss, S., Moroz, K. B., & Pigott, T. D. (2006). Schoolwide application of positive behavior support in an urban high school: A case study. Journal of Positive Behavior Interventions, 8, 131–145.
Bradshaw, C. P., Koth, C. W., Bevans, K. B., Ialongo, N. S., & Leaf, P. J. (2008a). The impact of school-wide Positive Behavioral Interventions and Supports (PBIS) on the organizational health of elementary schools. School Psychology Quarterly, 23, 462–473.
Bradshaw, C. P., Koth, C. W., Thornton, L. A., & Leaf, P. J. (2009). Altering school climate through school-wide Positive Behavioral Interventions and Supports: Findings from a group-randomized effectiveness trial. Prevention Science, 10, 100–115. doi:10.1007/s11121-008-0114-9.
Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of schoolwide Positive Behavioral Interventions and Supports on student outcomes: Results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12, 133–148.
Bradshaw, C. P., & Pas, E. T. (2011). A state-wide scale-up of Positive Behavioral Interventions and Supports (PBIS): A description of the development of systems of support and analysis of adoption and implementation. School Psychology Review, 40, 530–548.
Bradshaw, C. P., Pas, E. T., Barrett, S., Bloom, J., Hershfeldt, P., Alexander, A., & Leaf, P. (2012a). A state-wide partnership to promote safe and supportive schools: The PBIS Maryland Initiative. Administration and Policy in Mental Health and Mental Health Services Research, 39, 225–237. doi:10.1007/s10488-011-0384-6.
Bradshaw, C. P., Reinke, W. M., Brown, L. D., Bevans, K. B., & Leaf, P. J. (2008b). Implementation of school-wide Positive Behavioral Interventions and Supports (PBIS) in elementary schools: Observations from a randomized trial. Education & Treatment of Children 31, 1–26. doi:10.1353/etc.0.0025.
Bradshaw, C. P., Waasdorp, T. E., & Leaf, P. J. (2012b). Effects of school-wide Positive Behavioral Interventions and Supports on child behavior problems. Pediatrics, 130, 1136–1145. doi:10.1542/peds.2012-0243.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45. doi:10.1016/s0272-7358(97)00043-3.
Domitrovich, C. E., Bradshaw, C. P., Poduska, J. M., Hoagwood, K. E., Buckley, J. A., Olin, S., & Ialongo, N. S. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Mental Health Promotion, 1, 6–28.
Domitrovich, C. E., Gest, S. D., Gill, S., Bierman, K. L., Welsh, J. A., & Jones, D. (2009). Fostering high-quality teaching with an enriched curriculum and professional development support: The Head Start REDI Program. American Educational Research Journal, 46, 567–597. doi:10.3102/0002831208328089.
Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational & Psychological Consultation, 11, 193–221.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18, 237–256. doi:10.1093/her/18.2.237.
Enders, C. K., & Tofighi, D. (2007). Centering predictor variables in cross-sectional multilevel models: A new look at an old issue. Psychological Methods, 12, 121–138.
Glisson, C., & Green, P. (2006). The effects of organizational culture and climate on the access to mental health care in child welfare and juvenile justice systems. Administration and Policy in Mental Health and Mental Health Services Research, 33, 433–448.
Gottfredson, G. D., Jones, E. M., & Gore, T. W. (2002). Implementation and evaluation of a cognitive-behavioral intervention to prevent problem behavior in a disorganized school. Prevention Science, 3, 43–56.
Gresham, F. M., Gansle, K. A., Noell, G. H., Cohen, S., & Rosenblum, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22, 254–272.
Hagan-Burke, S., Burke, M. D., Martin, E., Boon, R. T., Fore, C., III, & Kirkendoll, D. (2005). The internal consistency of the school-wide subscales of the Effective Behavioral Support survey. Education and Treatment of Children, 28, 400–413.
Han, S. S., & Weiss, B. (2005). Sustainability of teacher implementation of school-based mental health programs. Journal of Abnormal Child Psychology, 33, 665–679. doi:10.1007/s10802-005-7646-2.
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for schoolwide positive behavior support. Focus on Exceptional Children, 42, 1–14.
Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005). School-wide positive behavior support. In L. Bambara & L. Kern (Eds.), Individualized supports for students with problem behaviors: Designing positive behavior plans (pp. 359–390). New York: Guilford.
Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The school-wide evaluation tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3–12.
Hoy, W. K., & Feldman, J. (1987). Organizational health: The concept and its measure. Journal of Research and Development in Education, 20, 30–38.
Hoy, W. K., & Tarter, C. J. (1997). The road to open and healthy schools: A handbook for change (Elementary ed.). Thousand Oaks: Corwin.
Hoy, W. K., Tarter, J., & Kottkamp, R. (1991). Open schools/healthy schools measuring organizational climate. Columbus: Arlington Writers, Ltd.
Ialongo, N. S., Werthamer, L., Kellam, S. G., Brown, C. H., Wang, S., & Lin, Y. (1999). Proximal impact of two first-grade preventive interventions on the early risk behaviors for later substance abuse, depression, and antisocial behavior. American Journal of Community Psychology, 27, 599–641. doi:10.1023/a:1022137920532.
Kellam, S. G., Ling, X. G., Merisca, R., Brown, C. H., & Ialongo, N. S. (1998). The effect of the level of aggression in the first grade classroom on the course and malleability of aggressive behavior into middle school. Development and Psychopathology, 10, 165–185.
Koth, C. W., Bradshaw, C. P., & Leaf, P. J. (2009). Teacher Observation of Classroom Adaptation-Checklist: Development and factor structure. Measurement and Evaluation in Counseling and Development, 42, 15–30.
Luke, D. (2004). Multilevel modeling. Thousand Oaks: Sage.
Mitchell, M. M., & Bradshaw, C. P. (2013). Examining classroom influences on student perceptions of school climate: The role of classroom management and exclusionary discipline strategies. Journal of School Psychology, 51, 599–610. doi:10.1016/j.jsp.2013.05.005.
Murray, D. M. (1998). Design and analysis of group-randomized trials. New York: Oxford.
Pankratz, M., Hallfors, D., & Cho, H. (2002). Measuring perceptions of innovation adoption: The diffusion of a federal drug prevention policy. Health Education Research, 17, 315–326. doi:10.1093/her/17.3.315.
Pas, E. T., & Bradshaw, C. P. (2012). Examining the association between implementation and outcomes: State-wide scale-up of school-wide Positive Behavior Intervention and Supports. Journal of Behavioral Health Services and Research, 39, 417–433. doi:10.1007/s11414-012-9290-2.
Payne, A. A., Gottfredson, D. C., & Gottfredson, G. D. (2006). School predictors of the intensity of implementation of school-based prevention programs: Results from a national study. Prevention Science, 7, 225–237. doi:10.1007/s11121-006-0029-2.
Petras, H., Chilcoat, H. D., Leaf, P. J., Ialongo, N. S., & Kellam, S. G. (2004). Utility of TOCA-R scores during the elementary school years in identifying later violence among adolescent males. Journal of the American Academy of Child and Adolescent Psychiatry, 43, 88–96.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks: Sage.
Raudenbush, S. W., Bryk, A. S., Cheong, Y. F., Congdon, R., & du Toit, M. (2011). HLM statistical software: Version 7. Lincolnwood: Scientific Software International, Inc.
Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature. School Psychology Review, 16, 212–227.
Ringeisen, H., Henderson, K., & Hoagwood, K. (2003). Context matters: Schools and the ‘research to practice gap’ in children’s mental health. School Psychology Review, 32, 153–168.
Ringwalt, C. L., Ennett, S., Johnson, R., Rohrbach, L. A., Simons-Rudolph, A., Vincus, A., & Thorne, J. (2003). Factors associated with fidelity to substance use prevention curriculum guides in the nation’s middle schools. Health Education & Behavior, 30, 375–391. doi:10.1177/1090198103030003010.
Rohrbach, L. A., Graham, J. W., & Hansen, W. B. (1993). Diffusion of a school-based substance abuse prevention program: Predictors of program implementation. Preventive Medicine, 22, 237–260.
Safran, S. P. (2006). Using the Effective Behavior Supports survey to guide development of schoolwide positive behavior support. Journal of Positive Behavior Interventions, 8, 3–9.
Spoth, R., Rohrbach, L., Greenberg, M., Leaf, P., Brown, C. H., Fagan, A., & Hawkins, J. D. (2013). Addressing core challenges for the next generation of type 2 translation research and systems: The translation science to population impact (TSci Impact) framework. Prevention Science, 14, 319–351. doi:10.1007/s11121-012-0362-6.
Sugai, G., & Horner, R. H. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child & Family Behavior Therapy, 24, 23–50. doi:10.1300/J019v24n01_03.
Sugai, G., & Horner, R. H. (2006). A promising approach for expanding and sustaining school-wide positive behavior support. School Psychology Review, 35, 245–259.
Sugai, G., Lewis-Palmer, T., Todd, A. W., & Horner, R. H. (2001). School-wide Evaluation Tool (SET). Eugene: Center for Positive Behavioral Supports, University of Oregon.
Sugai, G., Todd, A. W., & Horner, R. H. (2000). Effective Behavior Support (EBS) survey: Assessing and planning behavior supports in schools. Eugene: University of Oregon.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Needham Heights: Allyn & Bacon.
Acknowledgments
Support for this project comes from the National Institute of Mental Health (R01MH67948-1A1), the Centers for Disease Control and Prevention (1U49CE000728, K01CE001333-01), and the Institute of Education Sciences (R305A090307).
Pas, E.T., Waasdorp, T.E. & Bradshaw, C.P. Examining Contextual Influences on Classroom-Based Implementation of Positive Behavior Support Strategies: Findings from a Randomized Controlled Effectiveness Trial. Prev Sci 16, 1096–1106 (2015). https://doi.org/10.1007/s11121-014-0492-0