Abstract
Aggressive and defiant behaviors in students are costly to schools, teachers, and students. In this paper, we summarize findings from meta-analyses, systematic reviews, and meta-reviews that examined school-based interventions for aggressive and defiant behaviors in students. Results of the review suggest that school-based interventions produce significant but small positive effects on aggression and defiance, with larger effects for interventions that are implemented with higher quality. Behavioral and cognitive behavioral techniques are key components of nearly all effective school interventions, whether interventions are student-directed or teacher-/environment-directed. Specific interventions with empirical support, as identified using the Blueprints for Healthy Youth Development and “What Works Clearinghouse” databases, are briefly summarized. Finally, recommendations are made for schools considering a school intervention for aggression and defiance, and important priorities for future research are outlined.
Aggression and defiance (AD) present a considerable burden to schools. Aggressive behaviors, including both physical (e.g., hitting, kicking, pushing) and verbal behaviors (e.g., threats of harm, mean-spirited teasing, or name calling), are relatively common in schools. Epidemiological data show that 14% of third graders report being frequently shoved, slapped, hit, or kicked by other students and 8% of high school students report having been in a physical fight on school grounds in the past 12 months (Musu-Gillette, Zhang, Wang, Zhang, & Oudekerk, 2017). Among youth in grades 6 through 10, more than 50% report experiencing verbal aggression in the last 2 months (Wang, Baker, Gao, Raine, & Lozano, 2012). Defiance at school, such as arguing with a teacher or principal or failing to comply with an instruction given by a teacher, also occurs quite frequently. Defiance is the most common reason students are referred to the office for disciplinary action, and defiant and aggressive behaviors together account for almost half of all disciplinary referrals (Predy, McIntosh, Frank, & Hitchcock, 2014). The purpose of this paper is to provide an overview of school-based interventions for AD. We first briefly discuss the importance of AD for schools and then focus the majority of this review on synthesizing the current evidence for interventions to address AD in schools, including a review of factors that impact intervention effectiveness.
Importance of Aggression and Defiance to Schools
Students who display high levels of AD often experience substantial academic and social difficulties. In addition, students displaying high levels of AD miss school because of suspensions, expulsions, and truancy, and some of these youth are placed in restrictive special education settings (Ruhl & Hughes, 1985). About one-third of students engaging in high levels of AD fail to graduate high school on time, a rate twice that of the general population and higher than the risk associated with anxiety, depression, or substance use (Breslau, Miller, Joanie Chung, & Schweitzer, 2011). Later in life, these students are at high risk for job loss, relationship instability, and criminal behavior (Newman et al., 2011).
AD in schools also negatively impacts the victims of these behaviors. As many as 80–90% of students report being the target of serious physical or verbal aggression at some point in school, with 10–20% of children indicating that they are currently the target of aggression at school (Nansel et al., 2001). Observational studies show that elementary students experience more than one aggressive behavior during each hour of recess (Frey & Strong, 2018). Over time, being a victim of peer aggression is a risk factor for low academic achievement (Musu-Gillette et al., 2017), physical and mental health problems (Eslea et al., 2004), and school dropout (Cornell, Gregory, Huang, & Fan, 2013). Victims of aggression are also more likely to become aggressive themselves (Duane & Bierman, 2006), even after victimization desists (Schwartz, McFadyen-Ketchum, Dodge, Pettit, & Bates, 1998).
AD behaviors also negatively impact teachers. About one in ten teachers report having been physically threatened by a student, with 3% (secondary) to 8% (elementary) reporting that they had been physically attacked (Musu-Gillette et al., 2017). Victimized teachers have higher rates of absenteeism, burnout, and job turnover that collectively cost the USA over $2 billion annually (American Psychological Association, 2016). Further, teachers whose classrooms include high levels of AD report higher levels of stress (Shernoff, Mehta, Atkins, Torf, & Spencer, 2011) and experience higher levels of occupational burnout (Aloe, Shisler, Norris, Nickerson, & Rinker, 2014). Nearly half of both regular and special education teachers have contemplated quitting because of their experiences working with students who display AD in the school setting (Westling, 2010).
Research shows that AD behaviors in youth also have high societal costs (Christenson, Crane, Malloy, & Parker, 2016), with school costs among the largest contributors (Beecham, 2014). Annual costs of educating children with AD are three to six times higher than for other children (Foster, Jones, & Conduct Problems Prevention Research Group, 2005). A considerable portion of this cost is due to the discipline problems these children exhibit. State-level estimates suggest 5–13% of students in kindergarten through 12th grade have been suspended or expelled due to serious misbehavior, with AD being the most common cause (Burke & Nishioka, 2014). Suspensions and expulsions are costly to schools—expulsions from school have been estimated to cost $431 per incident (Batton, 2003) and detentions or suspensions have been estimated to cost $71 per incident (Robb et al., 2011)—because they require administrative and teacher time and resources. Further, students who are disciplined (suspended or expelled) even one time are 23.5% more likely to drop out of school before graduation (American Academy of Pediatrics Council on School Health, 2013), which lowers the school’s performance on high-impact metrics (e.g., the school “grade” provided by school evaluation websites). Over time, this may reduce the school’s ability to attract new students and thereby reduce its funding. In addition, students who drop out of school contribute less in taxes and have higher welfare use over their lifetime (Marchbanks III et al., 2014). When these findings are considered together, it becomes clear that AD behaviors substantially strain societal and school resources.
Collectively, these research findings show that AD in school is a serious problem with costly consequences at many levels. Over the past decades, there have been hundreds of studies focusing on reducing AD in school. In fact, there is so much research that meta-reviews (i.e., reviews of reviews) have emerged. This research provides guidance for reducing AD in schools, but also strongly underlines the need for continued research. After briefly discussing types of school interventions, we summarize results of meta-reviews that provide information about the impact of school interventions.
Types of School Interventions
School interventions are often implemented at three levels (Durlak & Wells, 1997; Offord, 2000; Smith, Molina, Massetti, Waschbusch, & Pelham, 2007). Universal interventions (also called Tier 1 interventions) are applied to all students. Targeted interventions (also called Tier 2 interventions) are delivered to a subset of students who do not adequately respond to universal intervention. Indicated interventions (also called Tier 3 interventions) involve even more specialized and intensive interventions that are delivered to students who do not respond adequately to universal and targeted interventions (Jimerson, Burns, & VanDerHeyden, 2007). Typically, 10–25% of students will not respond sufficiently to a universal intervention and will thus require additional targeted services, and about 5% of students will require the most intensive indicated services (Fuchs & Fuchs, 2017; Lewis, Mitchell, Bruntmeyer, & Sugai, 2016). Universal interventions are more widely researched than targeted or indicated interventions, yet they often produce weaker effects, perhaps because they are implemented with a wider range of students (Cook, Gottfredson, & Na, 2010). Often similar intervention techniques are used across the different levels of intervention. For instance, behavior therapy techniques play an important role in most, if not all, school interventions for reducing AD, whether implemented at the universal, targeted, or indicated level (Epstein, Atkins, Cullinan, Kutash, & Weaver, 2008).
All three levels of intervention seek to induce change using student-centered and/or teacher-/environment-centered approaches (Durlak & Wells, 1997; Osher, Bear, Sprague, & Doyle, 2010). Student-centered interventions are delivered directly to the students, often drawing on techniques from clinical psychology, such as cognitive behavioral therapy (e.g., teaching nonviolent problem-solving skills and targeting maladaptive social thought processes such as the hostile attribution bias) or social–emotional learning modules. Teacher-/environment-centered interventions seek to prevent or reduce AD behavior by using adult-driven behavior management programs or by changing the school culture (Epstein et al., 2008). Of course, many interventions use both approaches. For instance, schools may use a student-centered approach by teaching social-emotional skills and use a teacher-centered approach by having teachers reward children when they display newly learned social-emotional skills. Such an approach may seem ideal because the two intervention styles have similar goals, but student-oriented and teacher-/environment-oriented interventions may not be compatible. For instance, the role of contingencies (reward and punishment) in shaping student behavior is central to many teacher-/environment-oriented school interventions, yet may be downplayed or even viewed as unhelpful in some student-oriented interventions (Osher et al., 2010). Such incompatibilities illustrate one reason why multi-component interventions may be less effective than single-component interventions (Matjasko et al., 2012; Park-Higgerson, Perumean-Chaney, Bartolucci, Grimley, & Singh, 2008; Wilson & Lipsey, 2007).
Review of Interventions
Evaluation of Effectiveness
Recent reviews of school interventions for student AD are summarized in Table 1. Several themes emerge from these reviews. First, school interventions produce statistically significant improvements in AD, but the magnitude of improvement is small. Take, for example, the review conducted by Wilson and Lipsey (2007), which is arguably the best-known meta-analytic review on this topic (e.g., 673 citations as of January 2018 according to Google Scholar). They reported a standardized mean difference (d) effect size for school interventions on AD outcomes that was statistically significant but small by conventional standards (d = 0.21). Consistent with this conclusion, the average effect size from the meta-analytic reviews reported in Table 1 is small (d = 0.19), suggesting that school interventions produce positive but small impacts on student AD.
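For readers less familiar with this metric, the standardized mean difference reported throughout these reviews is conventionally computed as the difference between the intervention and control group means divided by the pooled standard deviation:

```latex
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

where n₁ and n₂ are the group sample sizes and s₁ and s₂ the group standard deviations. By Cohen’s widely used benchmarks, d ≈ 0.2 is considered small, 0.5 medium, and 0.8 large, which is why effect sizes such as d = 0.21 are described here as significant but small.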
Second, intervention effects are not uniformly consistent, as judged by the variance in reported effect sizes. Wilson and Lipsey (2007) reported that the effect sizes in their review were significantly heterogeneous, and exploration of this heterogeneity revealed (among other findings) that universally delivered interventions (d = 0.21) and targeted interventions (d = 0.29) were more effective than interventions delivered in specialized schools or classrooms (d = 0.11) and more effective than multi-component interventions (d = 0.05). The results of other reviews (see Table 1) are consistent with this example, as demonstrated by the fact that reported effect sizes range across reviews from a low of 0.09 (Park-Higgerson et al., 2008), indicating a negligible impact, to a high of 0.43 (Sklad, Diekstra, Ritter, Ben, & Gravesteijn, 2012), indicating a moderately sized positive impact.
Third, there is some evidence that behavioral or cognitive behavioral strategies are more effective than other school intervention strategies for reducing AD (see Table 1). For example, Wilson and Lipsey (2007) reported that targeted/indicated interventions, as well as interventions that were implemented in special schools or classes, were more effective at reducing AD if they incorporated behavioral strategies. An example of the central role played by behavior management in school-based intervention can be found in a report titled “Reducing Behavior Problems in the Elementary School Classroom,” which was written as part of the What Works Clearinghouse (Epstein et al., 2008). The intervention strategies outlined in this report include: (1) identify the specifics of the problem behavior and the conditions that prompt and reinforce it, (2) modify the classroom learning environment to decrease problem behavior, (3) teach and reinforce new skills to increase appropriate behavior and preserve a positive classroom climate, (4) draw on relationships with professional colleagues and students’ families for continued guidance and support, and (5) assess whether school-wide behavior problems warrant adopting school-wide strategies or programs and, if so, implement ones shown to reduce negative and foster positive interactions. These steps, which are described in more detail in the cited report, are based on moderate-to-strong empirical research and provide a concise summary of important points for implementing classroom-based approaches for reducing AD. In general, interventions that make rules clear to students and that increase the consistency and fairness of enforcing rules are effective at reducing AD.
Fourth, several intervention characteristics are consistently associated with more positive intervention effects. These characteristics are cogently discussed by Cook, Gottfredson, and Na (2010), who drew several important conclusions about school interventions for AD. First, the composition and organization of schools significantly impact student AD. For example, there is substantial evidence that having fewer students per teacher is associated with more positive student behaviors because it increases the frequency, quality, and consistency of student contact with teachers, which in turn fosters positive relationships between students and teachers and between students and the school as a whole. Second, social-emotional interventions have been shown to significantly reduce AD. These interventions typically rely on instructional techniques to develop student skills that are associated with lowering AD, such as recognizing situations that are likely to get them into trouble, controlling their impulses, anticipating the consequences of their actions, accurately perceiving the feelings or intentions of others, and coping with peer pressure. Third, AD is almost always broadly defined and measured in school intervention studies, but the limited available evidence suggests school interventions may have different effects on different types of antisocial behavior. For instance, one meta-analysis (Alford & Derzon, 2012) reported stronger positive effects of school interventions on physical aggression (d = 0.26) than on broader measures of AD (ds < 0.15). Fourth, child development has some impact on AD behavior and on interventions for AD. Specifically, AD behavior is higher for students in middle school as compared to elementary or high school (Cook et al., 2010), and there is some evidence that younger students may benefit more from school interventions than older students (Metropolitan Area Child Study Research Group, 2002; Wolpert, Humphrey, Belsky, & Deighton, 2013).
It is also worth noting that some interventions are specific to particular developmental levels. For example, keeping sixth grade students in elementary schools as opposed to moving them to middle schools reduces disciplinary infractions (Cook et al., 2010). Finally, as discussed next, quality of implementation is one of the most crucial factors, if not the most crucial, in producing positive intervention effects.
Quality of Implementation
Evidence-based interventions will not produce the desired effects unless they are implemented as designed (Gresham, Cohen, Gansle, Noell, & Rosenblum, 1993; Lipsey, 2009). Four characteristics have been suggested as critical factors in determining whether a school intervention is implemented with high quality: organizational capacity, organizational support, program features, and local integration (Gottfredson & Gottfredson, 2002). Organizational capacity refers to the ability of school personnel to work together to implement the intervention. Organizational capacity is indicated by factors such as staff morale, past history of intervention efforts, and amount of turnover in administration and teaching staff. Organizational support refers to the pragmatic supports for the intervention that are available in the school. This characteristic is indicated by availability of training for the intervention, ongoing supervision during the intervention, and principal/administrative support for each of these. Program features refer to the amount of structure and support built into the intervention in terms of manuals, implementation standards, quality control, feedback mechanisms, and so on. Finally, local integration is the extent to which the program is merged into the daily routine and operation of the school. Interventions that are carried out by regular school employees as part of their typical day are likely to be widely implemented and maintained, as compared to interventions that are carried out by specialized personnel or during non-school times. Relatedly, interventions that are selected by the school and community are likely to have better implementation than interventions perceived as a mandate handed down from individuals outside the school.
Although research has demonstrated that quality plays a significant role in determining effectiveness, relatively little attention has been paid to the quality of school interventions as typically implemented. This research gap was addressed as part of a national survey of US schools (Gottfredson & Gottfredson, 2002). Results showed that the average student-directed intervention involved 31% of the student body, consisted of 27 sessions/lessons delivered once per week, and lasted less than one semester (with some lasting less than 1 month). The average teacher-/environment-directed intervention involved 52% of the student body, was delivered about once per week, and lasted nearly all year. For both types of intervention, inconsistency was a hallmark of implementation; just 61% of interventions were conducted on a regular basis. Results also showed that higher-quality implementation was associated with more organizational support, more local integration, and use of standardized program features for both student-directed and teacher-/environment-directed interventions.
Specific Intervention Programs
To provide additional information to school professionals considering interventions for AD, we next review details of selected school interventions (see Table 2). Specific interventions selected for inclusion in this review were those (1) considered empirically supported in the What Works Clearinghouse maintained by the Institute for Education Science in the US Department of Education, (2) classified as a promising or model program on the Blueprints for Healthy Youth Development hosted by the Center for the Study and Prevention of Violence at the University of Colorado Boulder, and/or (3) supported by at least three studies demonstrating their efficacy. We divided interventions into those that are either universally implemented or have been implemented at multiple levels (i.e., have been implemented as a universal intervention and as a targeted/indicated intervention; 8 interventions) versus those that have been primarily implemented at targeted or indicated levels (3 interventions). As shown, all interventions use behavioral or social-emotional learning techniques, and the majority focus on preschool- through middle school-aged students. Two other specific school interventions for AD—the Families and Schools Together (FAST Track) program and the Positive Behavior Intervention and Support (PBIS) program—are discussed in more detail next because they have been highly influential on researchers and on schools.
FAST Track
FAST Track is a well-known longitudinal preventive intervention study that used methods informed by developmental and clinical psychology research to prevent and treat serious conduct problems in children (Conduct Problems Prevention Research Group, 1992). Participants included 891 children who were in 401 classrooms, including 445 who were randomly assigned to the intervention condition and 446 who were randomly assigned to the control condition, with random assignment conducted at the school level. The FAST Track school intervention incorporated both universal and targeted interventions. The universal intervention used was the Promoting Alternative Thinking Strategies curriculum, in which classroom teachers delivered two to three classes per week on emotional, friendship, self-control, and social problem-solving skills (Greenberg, Kusche, Cook, & Quamma, 1995). The targeted intervention was implemented for children judged to be at high risk for conduct problems and included academic tutoring, social skills groups, peer-pairing (supervised play sessions to practice social skills), and parenting groups, with all except the peer-pairing program conducted after school or on weekends. Quality of implementation was a primary consideration in FAST Track, with intervention schools assigned an educational coordinator who monitored implementation quality and provided behavioral consultation on classroom management to teachers (Conduct Problems Prevention Research Group, 1999). While FAST Track significantly improved numerous outcomes (e.g., reduced rates of internalizing and externalizing psychiatric problems, substance use, and crime at age 25), it did not have a significant impact on academic outcomes in either elementary or middle school (Bierman et al., 2013), nor in overall education attainment up to age 25 (Dodge et al., 2015).
PBIS
PBIS is another school intervention that focuses on reducing AD behaviors. PBIS is a multi-tiered intervention approach with universal, targeted, and indicated interventions (Horner, Sugai, & Anderson, 2010). The universal intervention stresses implementing behavior management practices throughout the school in a manner that is consistent across classrooms. The targeted and indicated interventions are implemented in classrooms and with individuals at risk of or actively demonstrating aggressive and defiant behaviors. PBIS has been widely disseminated, with an estimated 20,000 schools in the USA using PBIS (Yeung et al., 2016). Dozens of open-trial studies have shown that PBIS is associated with reductions in AD behavior (Horner et al., 2010), but to our knowledge there have been just two randomized trials. The first trial included 60 elementary schools in two states, with 30 schools randomly assigned to implement PBIS and 30 randomly assigned to a waitlist condition (Horner et al., 2009). Results showed improvement in the intervention schools on measures of school culture and subjectively reported school safety, but no difference on measures of AD or academic functioning. The second randomized trial (Bradshaw, Mitchell, & Leaf, 2010; Bradshaw, Waasdorp, & Leaf, 2012) was conducted in 37 elementary schools that were matched and then randomized to receive the PBIS intervention (n = 21) or waitlist control (n = 16). Teacher ratings showed that students in intervention schools had lower disruptive behavior problem scores and higher prosocial and empathy scores, with stronger positive effects (prosocial, empathy) for younger students. Mixed evidence was found regarding office discipline referrals and suspensions due to misbehavior: teacher reports showed improvement in office referrals but not suspensions, whereas administrative records showed improvement in suspensions but not office referrals.
Other research shows that higher intervention fidelity by teachers is significantly associated with more improvement of AD behavior, demonstrating that quality of implementation influences the outcomes of PBIS (Benner, Beaudoin, Chen, Davis, & Ralston, 2010).
Recommendations for Schools
What are the implications of existing research for school personnel considering implementing an intervention for AD? First, as noted earlier, behavioral or cognitive behavioral interventions are a key component of school-based intervention for AD because research consistently shows they are effective at reducing AD (Epstein et al., 2008). Although these strategies are not always the central focus of an intervention, virtually all school-based interventions are likely to rely heavily on clearly communicated rules, praise and incentives, and prudent consequences for misbehavior, which are essential components of behavior therapy.
Second, quality of implementation is a key determinant of intervention effectiveness. In order to develop and implement an intervention for AD with high quality, it is important for school staff to proceed in a systematic manner that is tailored to the individual school. Toward that end, we suggest that school personnel who are considering implementing an intervention for AD should proceed in three steps. First, develop and implement a system of measuring AD in a reliable and valid manner. Routinely collecting reliable and valid data about AD, ideally linked to other local, regional, and national data on similar outcomes, provides information about the extent to which AD is (or is not) a problem in a specific school (Benbenishty & Astor, 2007). This is important because principals and teachers often underestimate the extent of AD that occurs in schools (Cook et al., 2010). Systematically measuring AD also provides valuable information for assessing whether an intervention is effective once it is implemented. Monitoring AD across multiple schools in a school district could be used to identify schools that have been especially effective at preventing or reducing AD; those schools could then serve as a local model or support team for other schools.
The second step is to develop broad goals or principles for the intervention. Essentially, school staff should develop a theory of change for the proposed intervention by openly deliberating and deciding on proximal targets of the intervention and deciding on methods for achieving the targets that are both evidence-based and acceptable to the school and the larger community. For example, staff in one school may decide that their students lack social-emotional skills and select direct instruction by the teachers as an acceptable means of delivering this intervention. Staff in another school may decide that the school is chaotic and focus on developing rules that are clear and enforced fairly and consistently. Finding an empirically supported intervention that best fits the values or culture of a school is important because it is likely to increase staff commitment to the intervention, which in turn increases the quality of implementation, and ultimately improves the chances of positive outcomes (Atkins, Rusch, Mehta, & Lakind, 2016; Frazier, Formoso, Birman, & Atkins, 2008). There are many research-supported proximal targets of change that schools could address such as (1) increasing attention to positive behavior, (2) increasing the consistency of applying mild negative consequences for misbehavior, (3) decreasing the severity of harsh and negative consequences for misbehavior, (4) imparting social-emotional or self-control skills to students, (5) improving the teacher–student ratio, (6) improving the sense of connection between students and teachers, (7) improving staff monitoring of students, and/or (8) providing intervention quickly during misbehavior incidents to prevent escalation to more serious misbehavior.
The third step for implementing an intervention is for school staff to decide which specific intervention will best achieve their stated goals. As is apparent from Tables 1 and 2, this is not a straightforward decision because there is no single package that is clearly above all others. Instead, there are a range of interventions with varying levels of evidence to support them. This makes choosing an intervention complex; fortunately, there are resources available to help navigate this decision. The previously mentioned Blueprints for Healthy Youth Development (http://www.colorado.edu/cspv/blueprints) and What Works Clearinghouse (https://ies.ed.gov/ncee/wwc/) are two resources designed to help schools find empirically supported school interventions. These sites evaluate the evidence supporting the effectiveness of school intervention programs and provide the results in a user-friendly manner. The websites are continually updated and provide an excellent starting point for organizations looking to implement an intervention.
Recommendations for Research
We have several recommendations for additional research on school interventions. The first, and most obvious, recommendation is to do more research. There are likely many thousands of schools doing interventions aimed at preventing or treating AD, yet there are surprisingly few definitive conclusions that can be drawn about their effects. Indeed, it is not yet clear which interventions are generally effective, which are effective under specific conditions or with particular students, and which are not effective. Drawing firm conclusions about school interventions is impeded by important methodological weaknesses in available studies, such as failing to conduct multi-level analyses that simultaneously take individual, classroom, and school differences into account. Another important factor holding back research on school interventions is that open-trial studies are common and randomized trials are rare. School administrators could help address this latter shortcoming by incorporating randomization when implementing a new intervention. For instance, school districts could implement new interventions in stages with the first stage consisting of randomly assigning the intervention to half the schools in the district and the other schools serving as controls. Knowledge gained from this effort could be used to determine how or whether to proceed with the intervention.
Future research is also needed to better understand factors that impact the quality of implementation for school-based interventions. Given that just 61% of school-based interventions are implemented on a regular basis (Gottfredson & Gottfredson, 2002), schools could improve the quality of implementation by providing more organizational support (e.g., support by principal and school administrators), local integration (e.g., integrating the intervention into daily routine, developing a local decision making and planning mechanism, using regular school staff to implement the intervention), and choosing program features carefully (e.g., using interventions with program materials and methods that are well developed and easily available), but this is easier asserted than done. Research that helps schools successfully navigate these tasks would represent a meaningful advance.
Third, research is needed on moderators of school intervention effects to help move research beyond answering the relatively simplistic question of “what is the effect of school interventions?” to answering the more useful question of “what interventions work for each type of school, student, and context?” The list of potential moderators of school intervention effects is nearly limitless because interventions might be impacted by student factors (e.g., age, academic ability, level of antisocial behavior), classroom/teacher factors (e.g., teaching style, academic subject taught), or school/community factors (e.g., culture, poverty, crime rate). Indeed, there is evidence that interventions are moderated by at least some of these factors: students with high baseline levels of antisocial behavior often benefit more from interventions than do students who do not have high baseline antisocial behavior (Farrell, Henry, & Bettencourt, 2013; Stoolmiller, Eddy, & Reid, 2000), and low-income, urban youth may benefit less than other students (Atkins et al., 2016; Farahmand, Grant, Polo, & Duffy, 2011).
Fourth, mediators of school interventions are also largely unknown. Mediators provide important information about how interventions make an impact, which in turn helps researchers refine interventions and increase their potency. A meta-analytic review of school-based interventions for aggression implemented in elementary schools reported that mediation was examined in just 10 of 36 studies (Dymnicki, Weissberg, & Henry, 2011). About half of those 10 studies showed that the intervention improved the mediator (student skills, social cognitive style, or classroom behavior management) but that changes in the mediator were not associated with changes in aggression. Another one-fourth of the studies showed that the intervention influenced the mediator, which in turn influenced the outcome.
Fifth, research is needed comparing single-component and multi-component interventions. It seems intuitive that interventions combining multiple components would produce greater benefit than interventions using just one approach. However, three meta-analytic reviews concluded that single-component interventions are more effective than multi-component interventions (Matjasko et al., 2012; Park-Higgerson et al., 2008; Wilson & Lipsey, 2007). The authors of these reviews speculated that multi-component interventions may have unintended negative effects on the fidelity of implementation. In other words, it may be that multi-component interventions lead schools to do many things poorly instead of one thing well—a "jack of all trades but master of none" effect. Research is needed to continue evaluating whether more is better or less is more when it comes to school interventions.
Finally, more effort is needed to distinguish school interventions that are effective from those that are not. Although most interventions are well-meaning, not all well-meaning interventions are effective. For example, metal detectors have been widely introduced in schools as a means of deterring serious antisocial behavior, yet empirical reviews suggest they are ineffective at reducing aggression in schools and possibly even detrimental (Hankin, Hertz, & Simon, 2011). Distinguishing interventions with insufficient evidence to draw conclusions from interventions that have been shown to be ineffective would help educational professionals make better evidence-based decisions about interventions (Waschbusch, Fabiano, & Pelham, 2012).
As is clear from the many questions that remain unanswered, evaluating and implementing school interventions for aggression and defiance remains an important task for researchers and educators. Serious aggressive and defiant behavior by students is a far-reaching problem with long-lasting consequences for students, schools, and society. It will take a concerted effort by teachers, school administrators, and scientists to ensure that the steps schools take to address these problems are effective and acceptable to all parties involved.
Notes
Throughout this paper, we conceptualize AD broadly because this is what is done in school intervention studies. However, we excluded studies that specifically examined bullying or off-task behavior because these are covered in other articles of this special issue.
We use the universal/targeted/indicated framework because this is most common in the reviewed studies.
References
Alford, A. A., & Derzon, J. (2012). Meta-analysis and systematic review of the effectiveness of school-based programs to reduce multiple violent and antisocial behavioral outcomes. In S. R. Jimerson, A. B. Nickerson, M. J. Mayer, & M. J. Furlong (Eds.), Handbook of school violence and school safety: International research and practice (2nd ed., pp. 593–606). New York: Routledge.
Aloe, A. M., Shisler, S. M., Norris, B. D., Nickerson, A. B., & Rinker, T. W. (2014). A multivariate meta-analysis of student misbehavior and teacher burnout. Educational Research Review, 12, 30–44. https://doi.org/10.1016/j.edurev.2014.05.003.
American Academy of Pediatrics Council on School Health. (2013). Out-of-school suspension and expulsion. Pediatrics, 131(3), e1000–e1007. https://doi.org/10.1542/peds.2012-3932.
American Psychological Association. (2016). A silent national crisis: Violence against teachers. Retrieved from http://www.apa.org/ed/schools/cpse/activities/violence-against.aspx.
Atkins, M. S., Rusch, D., Mehta, T. G., & Lakind, D. (2016). Future directions for dissemination and implementation science: Aligning ecological theory and public health to close the research to practice gap. Journal of Clinical Child and Adolescent Psychology, 45(2), 215–226. https://doi.org/10.1080/15374416.2015.1050724.
Barnes, T. N., Smith, S. W., & Miller, M. D. (2014). School-based cognitive-behavioral interventions in the treatment of aggression in the United States: A meta-analysis. Aggression and Violent Behavior, 19(4), 311–321. https://doi.org/10.1016/j.avb.2014.04.013.
Barrish, H. H., Wolf, M. M., & Saunders, M. (1969). Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis, 2(2), 119–124.
Batton, J. (2003). Cost-benefit analysis of CRE programs in Ohio. Conflict Resolution Quarterly, 21(1), 131–133.
Beecham, J. (2014). Annual research review: Child and adolescent mental health interventions—A review of progress in economic studies across different disorders. Journal of Child Psychology and Psychiatry, 55(6), 714–732.
Benbenishty, R., & Astor, R. A. (2007). Monitoring indicators of children’s victimization in school: Linking national-, regional-, and site-level indicators. Social Indicators Research, 84(3), 333–348. https://doi.org/10.1007/s11205-007-9116-4.
Benner, G. J., Beaudoin, K. M., Chen, P.-Y., Davis, C., & Ralston, N. C. (2010). The impact of intensive positive behavioral supports on the behavioral functioning of students with emotional disturbance: How much does fidelity matter? Journal of Behavior Assessment and Intervention in Children, 1(1), 85–100. https://doi.org/10.1037/h0100361.
Bierman, K. L., Coie, J., Dodge, K., Greenberg, M. T., Lochman, J., McMahon, R., et al. (2013). School outcomes of aggressive-disruptive children: Prediction from kindergarten risk factors and impact of the Fast Track prevention program. Poster presented at the Aggressive Behavior.
Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of schoolwide positive behavioral interventions and supports on student outcomes results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12(3), 133–148. https://doi.org/10.1177/1098300709334798.
Bradshaw, C. P., Waasdorp, T. E., & Leaf, P. J. (2012). Effects of school-wide positive behavioral interventions and supports on child behavior problems. Pediatrics, 130(5), e1136–e1145. https://doi.org/10.1542/peds.2012-0243.
Breslau, J., Miller, E., Joanie Chung, W. J., & Schweitzer, J. B. (2011). Childhood and adolescent onset psychiatric disorders, substance use, and failure to graduate high school on time. Journal of Psychiatric Research, 45(3), 295–301. https://doi.org/10.1016/j.jpsychires.2010.06.014.
Burke, A., & Nishioka, V. (2014). Suspension and expulsion patterns in six Oregon school districts (REL 2014-028). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northwest.
Christenson, J. D., Crane, D. R., Malloy, J., & Parker, S. (2016). The cost of oppositional defiant disorder and disruptive behavior: A review of the literature. Journal of Child and Family Studies, 25(9), 2649–2658. https://doi.org/10.1007/s10826-016-0430-9.
Conduct Problems Prevention Research Group. (1992). A developmental and clinical model for the prevention of conduct disorder: The FAST track program. Development and Psychopathology, 4(4), 509–527. https://doi.org/10.1017/S0954579400004855.
Conduct Problems Prevention Research Group. (1999). Initial impact of the fast track prevention trial for conduct problems: I. The high-risk sample. Journal of Consulting and Clinical Psychology, 67(5), 631–647.
Conduct Problems Prevention Research Group. (2002). Evaluation of the first 3 years of the fast track prevention trial with children at high risk for adolescent conduct problems. Journal of Abnormal Child Psychology, 30(1), 19–35.
Cook, P. J., Gottfredson, D. C., & Na, C. (2010). School crime control and prevention. Crime and Justice, 39(1), 313–440. https://doi.org/10.1086/652387.
Cornell, D., Gregory, A., Huang, F., & Fan, X. (2013). Perceived prevalence of teasing and bullying predicts high school dropout rates. Journal of Educational Psychology, 105(1), 138–149. https://doi.org/10.1037/a0030416.
Crean, H. F., & Johnson, D. B. (2013). Promoting alternative thinking strategies (PATHS) and elementary school aged children’s aggression: Results from a cluster randomized trial. American Journal of Community Psychology, 52(1–2), 56–72.
Dodge, K. A., Bierman, K. L., Coie, J. D., Greenberg, M. T., Lochman, J. E., McMahon, R. J., et al. (2015). Impact of early intervention on psychopathology, crime, and well-being at age 25. American Journal of Psychiatry, 172(1), 59–70. https://doi.org/10.1176/appi.ajp.2014.13060786.
Dolan, L. J., Kellam, S. G., Brown, C. H., Werthamer-Larsson, L., Rebok, G. W., Mayer, L. S., et al. (1993). The short-term impact of two classroom-based preventive interventions on aggressive and shy behaviors and poor achievement. Journal of Applied Developmental Psychology, 14(3), 317–345.
Thomas, D. E., & Bierman, K. L. (2006). The impact of classroom aggression on the development of aggressive behavior problems in children. Development and Psychopathology, 18(2), 471–487.
Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405–432.
Durlak, J. A., & Wells, A. M. (1997). Primary prevention mental health programs for children and adolescents: A meta-analytic review. American Journal of Community Psychology, 25(2), 115–152.
Dymnicki, A. B., Weissberg, R. P., & Henry, D. B. (2011). Understanding how programs work to prevent overt aggressive behaviors: A meta-analysis of mediators of elementary school-based programs. Journal of School Violence, 10(4), 315–337.
Epstein, M., Atkins, M., Cullinan, D., Kutash, K., & Weaver, R. (2008). Reducing behavior problems in the elementary school classroom: A practice guide. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Eslea, M., Menesini, E., Morita, Y., O’Moore, M., Mora-Merchan, J. A., Pereira, B., et al. (2004). Friendship and loneliness among bullies and victims: Data from seven countries. Aggressive Behavior, 30(1), 71–83.
Farahmand, F. K., Grant, K. E., Polo, A. J., & Duffy, S. N. (2011). School-based mental health and behavioral programs for low-income, urban youth: A systematic and meta-analytic review. Clinical Psychology: Science and Practice, 18(4), 372–390.
Farrell, A. D., Henry, D. B., & Bettencourt, A. (2013). Methodological challenges examining subgroup differences: Examples from universal school-based youth violence prevention trials. Prevention Science, 14(2), 121–133.
Farrell, A. D., Meyer, A. L., & White, K. S. (2001). Evaluation of responding in peaceful and positive ways (RIPP): A school-based prevention program for reducing violence among urban adolescents. Journal of Clinical Child Psychology, 30(4), 451–463. https://doi.org/10.1207/S15374424jccp3004_02.
Farrington, D. P., Gaffney, H., Lösel, F., & Ttofi, M. M. (2017). Systematic reviews of the effectiveness of developmental prevention programs in reducing delinquency, aggression, and bullying. Aggression and Violent Behavior, 33, 91–106. https://doi.org/10.1016/j.avb.2016.11.003.
Flay, B. R., Allred, C. G., & Ordway, N. (2001). Effects of positive action program on achievement and discipline: Two matched-control comparisons. Prevention Science, 2(2), 71–89.
Foster, E. M., Jones, D. E., & Conduct Problems Prevention Research Group. (2005). The high cost of aggression: Public expenditures resulting from conduct disorder. American Journal of Public Health, 95(10), 1767–1772.
Frazier, S. L., Formoso, D., Birman, D., & Atkins, M. S. (2008). Closing the research to practice gap: Redefining feasibility. Clinical Psychology: Science and Practice, 15(2), 125–129. https://doi.org/10.1111/j.1468-2850.2008.00120.x.
Frey, K. S., Hirschstein, M. K., & Guzzo, B. A. (2000). Second step: Preventing aggression by promoting social competence. Journal of Emotional and Behavioral Disorders, 8(2), 102–112. https://doi.org/10.1177/106342660000800206.
Frey, K. S., & Strong, Z. H. (2018). Aggression predicts changes in peer victimization that vary by form and function. Journal of Abnormal Child Psychology, 46(2), 305–318.
Fuchs, D., & Fuchs, L. S. (2017). Critique of the National Evaluation of Response to Intervention: A case for simpler frameworks. Exceptional Children, 83(3), 255–268. https://doi.org/10.1177/0014402917693580.
Gottfredson, D. C., & Gottfredson, G. D. (2002). Quality of school-based prevention programs: Results from a national survey. Journal of Research in Crime and Delinquency, 39(1), 3–35.
Greenberg, M. T., Kusche, C. A., Cook, E. T., & Quamma, J. P. (1995). Promoting emotional competence in school-aged children: The effects of the PATHS curriculum. Development and Psychopathology, 7(1), 117–136.
Gresham, F. M., Cohen, S., Gansle, K. A., Noell, G. H., & Rosenblum, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22(2), 254–272.
Hankin, A., Hertz, M., & Simon, T. (2011). Impacts of metal detector use in schools: Insights from 15 years of research. Journal of School Health, 81(2), 100–106. https://doi.org/10.1111/j.1746-1561.2010.00566.x.
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1–14.
Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasto, J., Todd, A. W., et al. (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 11(3), 133–144. https://doi.org/10.1177/1098300709332067.
Ialongo, N. S., Werthamer, L., Kellam, S. G., Brown, C. H., Wang, S., & Lin, Y. (1999). Proximal impact of two first-grade preventive interventions for the early risk behaviors for later substance abuse, depression, and antisocial behaviors. American Journal of Community Psychology, 27(5), 599–641.
Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. M. (Eds.). (2007). Handbook of response to intervention: The science and practice of assessment and intervention. New York: Springer.
Lester, S., Lawrence, C., & Ward, C. L. (2017). What do we know about preventing school violence? A systematic review of systematic reviews. Psychology, Health and Medicine, 22(sup1), 187–223.
Lewis, T. J., Mitchell, B. S., Bruntmeyer, D. T., & Sugai, G. (2016). School-wide positive behavior support and response to intervention: System similarities, distinctions, and research to date at the universal level of support. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention (pp. 703–717). Boston, MA: Springer.
Lipsey, M. W. (2009). The primary factors that characterize effective interventions with juvenile offenders: A meta-analytic overview. Victims and Offenders, 4(2), 124–147. https://doi.org/10.1080/15564880802612573.
Lochman, J. E., & Wells, K. C. (2003). Effectiveness of the Coping Power Program and of classroom intervention with aggressive children: Outcomes at 1-year follow-up. Behavior Therapy, 34(4), 493–515. https://doi.org/10.1016/S0005-7894(03)80032-1.
Lochman, J. E., Wells, K. C., & Murray, M. J. (2007). The Coping Power Program: Preventive intervention at the middle school transition. In P. H. Tolan, J. Szapocznik, & S. Sambarno (Eds.), Preventing youth substance abuse: Science-based programs for child and adolescents (pp. 185–210). Washington, DC: American Psychological Association.
Low, S., Cook, C. R., Smolkowski, K., & Buntain-Ricklefs, J. (2015). Promoting social–emotional competence: An evaluation of the elementary version of Second Step®. Journal of School Psychology, 53(6), 463–477.
Marchbanks, M. P., III, Blake, J. J., Smith, D., Seibert, A. L., Carmichael, D., Booth, E. A., et al. (2014). More than a drop in the bucket: The social and economic costs of dropouts and grade retentions associated with exclusionary discipline. Journal of Applied Research on Children: Informing Policy for Children at Risk, 5(2), 17.
Matjasko, J. L., Vivolo-Kantor, A. M., Massetti, G. M., Holland, K. M., Holt, M. K., & Cruz, J. D. (2012). A systematic meta-review of evaluations of youth violence prevention programs: Common and divergent findings from 25 years of meta-analyses and systematic reviews. Aggression and Violent Behavior, 17(6), 540–552.
Metropolitan Area Child Study Research Group. (2002). A cognitive-ecological approach to preventing aggression in urban settings: Initial outcomes for high risk children. Journal of Consulting and Clinical Psychology, 70(1), 179–194. https://doi.org/10.1037//0022-006x.70.1.179.
Meyer, A. L., & Farrell, A. D. (1998). Social skills training to promote resilience in urban sixth-grade students: One product of an action research strategy to prevent youth violence in high-risk environments. Education and Treatment of Children, 21, 461–488.
Musu-Gillette, L., Zhang, A., Wang, K., Zhang, J., & Oudekerk, B. A. (2017). Indicators of school crime and safety: 2016 (NCES 2017-064/NCJ 250650). Washington, DC: National Center for Education Statistics and Bureau of Justice Statistics.
Nansel, T. R., Overpeck, M., Pilla, R. S., Ruan, W. J., Simons-Morton, B., & Scheidt, P. (2001). Bullying behaviors among U.S. youth: Prevalence and association with psychosocial adjustment. Journal of the American Medical Association, 285(16), 2094–2100.
Newman, L., Wagner, M., Knokey, A.-M., Marder, C., Nagle, K., Shaver, D., et al. (2011). The post-high school outcomes of young adults with disabilities up to 8 years after high school: A report from the National Longitudinal Transition Study-2 (NLTS2). Menlo Park, CA: SRI International. Retrieved from http://www.ncser.ed.gov/pubs.
Offord, D. R. (2000). Selection of levels of prevention. Addictive Behaviors, 25(6), 833–842. https://doi.org/10.1016/s0306-4603(00)00132-5.
Osher, D., Bear, G. G., Sprague, J. R., & Doyle, W. (2010). How can we improve school discipline? Educational Researcher, 39(1), 48–58.
Park-Higgerson, H. K., Perumean-Chaney, S. E., Bartolucci, A. A., Grimley, D. M., & Singh, K. P. (2008). The evaluation of school-based violence prevention programs: A meta-analysis. Journal of School Health, 78(9), 465–479. https://doi.org/10.1111/j.1746-1561.2008.00332.x.
Powers, J. D., Bowen, N. K., Webber, K. C., & Bowen, G. L. (2011). Low effect sizes of evidence-based programs in school settings. Journal of Evidence-Based Social Work, 8(4), 397–415. https://doi.org/10.1080/15433714.2011.534316.
Predy, L., McIntosh, K., Frank, J. L., & Hitchcock, J. (2014). Utility of number and type of office discipline referrals in predicting chronic problem behavior in middle schools. School Psychology Review, 43(4), 472–489. https://doi.org/10.17105/spr-13-0043.1.
Reid, J. B., Eddy, J. M., Fetrow, R. A., & Stoolmiller, M. (1999). Description and immediate impacts of a preventive intervention for conduct problems. American Journal of Community Psychology, 27(4), 483–518.
Robb, J. A., Sibley, M. H., Pelham, W. E., Jr., Foster, E. M., Molina, B. S. G., Gnagy, E. M., et al. (2011). The estimated annual cost of ADHD to the U.S. education system. School Mental Health, 3(3), 169–177. https://doi.org/10.1007/s12310-011-9057-6.
Ruhl, K. L., & Hughes, C. A. (1985). The nature and extent of aggression in special education settings serving behaviorally disordered students. Behavioral Disorders, 10, 95–104.
Schwartz, D., McFadyen-Ketchum, S. A., Dodge, K. A., Pettit, G. S., & Bates, J. E. (1998). Peer group victimization as a predictor of children’s behavior problems at home and in school. Development and Psychopathology, 10(1), 87–99.
Shapiro, J. P., Burgoon, J. D., Welker, C. J., & Clough, J. B. (2002). Evaluation of the peacemakers program: School-based violence prevention for students in grades four through eight. Psychology in the Schools, 39(1), 87–100.
Shernoff, E. S., Mehta, T. G., Atkins, M. S., Torf, R., & Spencer, J. (2011). A qualitative study of the sources of stress and impact of stress among urban teachers. School Mental Health, 3(2), 59–69. https://doi.org/10.1007/s12310-011-9051-z.
Shure, M. B. (2001). I can problem solve (ICPS): An interpersonal cognitive problem solving program for children. Residential Treatment for Children and Youth, 18(3), 3–14.
Shure, M. B., & Spivack, G. (1979). Interpersonal cognitive problem solving and primary prevention: Programming for preschool and kindergarten children. Journal of Clinical Child Psychology, 8(2), 89–94. https://doi.org/10.1080/15374417909532894.
Sklad, M., Diekstra, R., Ritter, M. D., Ben, J., & Gravesteijn, C. (2012). Effectiveness of school-based universal social, emotional, and behavioral programs: Do they enhance students’ development in the area of skill, behavior, and adjustment? Psychology in the Schools, 49(9), 892–909.
Smith, B. H., Molina, B. S. G., Massetti, G. M., Waschbusch, D. A., & Pelham, W. E. (2007). School-wide interventions: The foundation of a public health approach to school-based mental health. In S. W. Evans, M. D. Weist, & Z. N. Serpell (Eds.), Advances in school-based mental health interventions: Best practices and program models (Vol. 2, pp. 7-2–7-19). Kingston, NJ: Civic Research Institute.
Stoltz, S., Londen, M. V., Deković, M., Castro, B. O. D., & Prinzie, P. (2012). Effectiveness of individually delivered indicated school-based interventions on externalizing behavior. International Journal of Behavioral Development, 36(5), 381–388. https://doi.org/10.1177/0165025412450525.
Stoolmiller, M., Eddy, J. M., & Reid, J. B. (2000). Detecting and describing preventive intervention effects in a universal school-based randomized trial targeting delinquent and violent behavior. Journal of Consulting and Clinical Psychology, 68(2), 296.
Taylor, R. D., Oberle, E., Durlak, J. A., & Weissberg, R. P. (2017). Promoting positive youth development through school-based social and emotional learning interventions: A meta-analysis of follow-up effects. Child Development, 88(4), 1156–1171.
Wang, P., Baker, L. A., Gao, Y., Raine, A., & Lozano, D. I. (2012). Psychopathic traits and physiological responses to aversive stimuli in children aged 9–11 years. Journal of Abnormal Child Psychology, 40(5), 759–769. https://doi.org/10.1007/s10802-011-9606-3.
Waschbusch, D. A., Fabiano, G. A., & Pelham, W. E. (2012). Evidence-based practice in child and adolescent disorders. In P. Sturmey & M. Hersen (Eds.), Handbook of evidence-based practice in clinical psychology (Vol. 1, pp. 27–49). Hoboken, NJ: Wiley.
Weare, K., & Nind, M. (2011). Mental health promotion and problem prevention in schools: What does the evidence say? Health Promotion International, 26(Suppl 1), i29–i69. https://doi.org/10.1093/heapro/dar075.
Westling, D. L. (2010). Teachers and challenging behavior: Knowledge, views, and practices. Remedial and Special Education, 31(1), 48–63. https://doi.org/10.1177/0741932508327466.
Wilson, S. J., & Lipsey, M. W. (2007). School-based interventions for aggressive and disruptive behavior: Update of a meta-analysis. American Journal of Preventive Medicine, 33(2 Suppl), S130–S143. https://doi.org/10.1016/j.amepre.2007.04.011.
Wolpert, M., Humphrey, N., Belsky, J., & Deighton, J. (2013). Embedding mental health support in schools: Learning from the Targeted Mental Health in Schools (TaMHS) national evaluation. Emotional and Behavioural Difficulties, 18(3), 270–283.
Yeung, A. S., Craven, R. G., Mooney, M., Tracey, D., Barker, K., Power, A., et al. (2016). Positive behavior interventions: The issue of sustainability of positive effects. Educational Psychology Review, 28(1), 145–170.
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest.
Research involving Human Participants and/or Animals
This article does not contain any studies with human participants or animals performed by any of the authors.
Ethical Approval
Ethical approval is not applicable to this article because it is a review paper; neither human participants nor animals were involved.
Informed Consent
Informed consent is not applicable to this article because it is a review paper; human participants were not involved.
Waschbusch, D.A., Breaux, R.P. & Babinski, D.E. School-Based Interventions for Aggression and Defiance in Youth: A Framework for Evidence-Based Practice. School Mental Health 11, 92–105 (2019). https://doi.org/10.1007/s12310-018-9269-0