Abstract
Chronic and increasingly intense behavioral challenges continue to vex educators from pre- through high school. While early intervention and prevention are essential to alter these patterns, schools are still responsible for ensuring student success regardless of students' prior learning histories. Unfortunately, to date, educators have had limited impact with students with emotional and behavioral disabilities. Response-to-intervention research has documented that early identification of high-risk learners coupled with a continuum of empirically validated instructional practices can both reduce the numbers of students requiring intensive supports and lead to improved learning outcomes among students with disabilities. A parallel process of school-wide positive behavior support (SW-PBS) also applies the logic of early identification of risk coupled with research-based supports to reduce the numbers of students requiring intensive supports as well as provide more comprehensive and integrated supports for students with disabilities. This chapter provides an overview of SW-PBS with a specific emphasis on essential features of tier 2/3 systems within the context of a complete multi-tiered system of support. Research to date on both the individual practices as well as integrated tier 2/3 supports is discussed. Implications for practice are also provided.
This chapter was supported in part by a grant from the Office of Special Education Programs, US Department of Education (H326S980003). Opinions expressed herein are those of the authors and do not necessarily reflect the position of the US Department of Education, and such endorsements should not be inferred. Inquiries should be directed to Dr. Barbara Mitchell, 303 Townsend, University of Missouri, Columbia, MO 65211 (mitchellbs@missouri.edu).
Keywords
- Universal Screening
- Behavioral Support
- Progress Monitoring
- Serious Emotional Disturbance
- Behavioral Challenge
The social behavior challenges educators face on a daily basis across American schools have long been established. Overall levels of minor but chronic disrespect, noncompliance, and aggression continue to rise and are linked to later, more intensive behavioral challenges (Benedict et al. 2007; Conroy et al. 2004; Duda et al. 2004; Heaviside et al. 1998; Muscott et al. 2009). Further, for those students who are identified as having an emotional/behavioral disorder (EBD) and receiving specialized instruction, intervention impact has been limited (Bradley et al. 2004; Wagner et al. 2005). Recent data indicate that over half of students receiving special education services under the EBD category fail to graduate with a high school diploma (Wagner et al. 2005), 20% have been arrested at least once while a student (VanAcker 2004), and the majority will require ongoing mental health and social assistance across their lifetimes (Walker et al. 2004).
To date, as described above, the overall impact of specialized supports for students with EBD has not been universally effective in altering significant behavioral challenges. However, services provided under the Individuals with Disabilities Education Act (IDEA) have led to modest improvements (Wagner et al. 2005). A further challenge for educators tasked with providing specialized and intensive individualized supports is the gross under-identification within the IDEA category of "seriously emotionally disturbed" (SED). Since passage of the law, fewer than 1% of students have been identified as having EBD (U.S. Department of Education 2005). Experts within the EBD field estimate that 5–7% of students manifest emotional and behavioral concerns significant enough to warrant special education services (Kauffman 2005; Walker et al. 2004). Simple mathematics indicates that, using 5% as the expected prevalence across the school-age population, an estimated 2,810,149 children who might otherwise be eligible are not receiving services under the Serious Emotional Disturbance (SED) label (U.S. Department of Education 2005). And while outcomes to date indicate improvements are needed in serving this population, the complete absence of specialized supports will surely exacerbate the social, emotional, and behavioral challenges among students at high risk.
The combination of increased overall levels of problem behavior and the large numbers of students at risk for significant emotional and behavioral challenges who currently may not be receiving specialized services has been a driving factor in establishing school-wide positive behavior support (SW-PBS) as both a preventative measure and a universal intervention to support high-risk students (Sugai et al. 2000a). SW-PBS provides educators with a problem-solving framework that matches evidence-based practices to the presenting problems unique to each school setting (see Chap. ## for a more in-depth overview of SW-PBS). An additional feature of SW-PBS is attention to creating school-wide systems that provide the necessary training and technical assistance to all faculty and staff within the school and, ideally, across the school district. School teams apply the problem-solving logic of SW-PBS: (a) using data to identify behavioral concerns and monitor progress, (b) implementing evidence-based practices matched to those data, and (c) implementing systems of support to increase implementation fidelity across a continuum of available interventions, creating a parallel to the academic response-to-intervention (RTI) multi-tiered framework. Universal supports target common social behavioral challenges by explicitly teaching pro-social alternative behaviors, providing multiple opportunities for student practice, and providing high rates of positive, specific feedback on skill use. In addition, universal expectations, instructional strategies, and environmental supports are adapted across structured (e.g., classroom) and unstructured (e.g., playground, cafeteria) school environments. Implementation of universal SW-PBS is monitored through student outcomes as well as annual fidelity checks (e.g., the School-wide Evaluation Tool; Horner et al. 2004).
For those students who are not successful with universal supports alone, a continuum of social, emotional, cognitive, and behavioral interventions is put in place through tier 2, or small-group, interventions and tier 3 individualized strategies.
The remainder of this chapter focuses on building a complete continuum of SW-PBS with emphasis on tiers 2 and 3. It is important to note that practices and systems of support overviewed within this chapter are intended to be part of an interconnected system of behavioral supports to create an integrated multi-tiered system of support (MTSS). Schools should be implementing universal supports with fidelity as a foundation to establishing a complete continuum of supports prior to systemic development of additional connected tiers. Ongoing attention to maintaining universal supports with high fidelity is also a critical prerequisite for successful installation of tier 2 and tier 3 interventions. An additional consideration in the establishment of tiers 2 and 3 is attention to the phases of implementation (see Chap. ##) and matching training and coaching for schools based on their readiness. Finally, within the continuum of SW-PBS, educators should view additional tiers beyond universal not as discrete and separate but rather as intensifying instruction that reinforces the universal support strategies (i.e., teaching, practicing, providing feedback, and modifying environments) provided at tier 1 to match the intensity of student need. Likewise, educators should not view tiers and students as synonymous. In other words, there are no “tier 2 students” but there will be students who will require additional levels of support across the school day to increase the likelihood they are successful.
Essential Features of Tier 2 and 3 SW-PBS Systems
Tier 2 and 3 supports within a multi-tiered system are designed to provide additional instruction and supports for students who are struggling to meet the goals and objectives taught in the core academic and behavioral curriculum (Chard 2013). Within SW-PBS, school teams are encouraged to implement a variety of interventions and supports based on student need. Regardless of the specific tier 2 and 3 supports used, as these interventions likely vary by school, there are several fundamental components associated with the identification of students requiring these levels of support and subsequent documentation of effectiveness. Essential features of tier 2 and 3 systems include: (a) ensuring universal strategies are in place with high fidelity, (b) commitment from school administrators and a specialized behavior support team (tier 2/3 SW-PBS team) to lead intervention efforts, (c) a proactive method to identify in a timely manner the students who require additional supports, (d) a process for matching intervention with student need, (e) selection, adoption, and use of research-based interventions, (f) regular monitoring of student progress to assess performance and rate of improvement, and (g) coordinated decision-making to determine student movement (e.g., more or less intensive supports) within the multi-tiered system (Horner et al. 2010; Crone et al. 2010). Along with these key features, effective tier 2/3 systems also include plans for monitoring fidelity of implementation across interventions and of the overall system, as well as periodic evaluation of impact and effects relative to the goals of improving social and academic outcomes (see Fig. 1 for additional features).
Universal SW-PBS Implemented with Fidelity
Following the logic of RTI research, within schools implementing SW-PBS the process begins with all students having access to a rigorous and relevant academic and social–behavioral curriculum in an organized, effective, and positive classroom environment (Chard 2013; Lewis and Sugai 1999). In terms of behavior and social skills, all staff implement universal strategies that include teaching clearly defined expectations, rules, and procedures; explicit instruction for meeting school-wide behavioral goals; high rates of recognition for social and behavioral success; and consistent responses to incorrect behavior that include reteaching and practice opportunities for students who need them. These universal-level supports are implemented with fidelity by all staff, for all students, across all school settings to ensure each child participates in high-quality instruction before determining that he or she requires additional intervention. In particular, all classrooms consistently implement essential instructional management strategies, so that any additional supports or recommended environmental modifications introduced as part of the tier 2/3 process will be positioned to have maximal impact (Simonsen et al. 2008). Specifically, in the same manner as having expectations and rules that are communicated consistently and taught school-wide across settings (e.g., cafeteria, hallway, restroom, playground), individual teachers also use an instructional approach for promoting desired behaviors and social skills within their classrooms. Productive classroom environments are established by using strategies that structure the learning environment so that a majority of problems are prevented from occurring.
Strategies include using behavior-specific praise to recognize desired behaviors, providing frequent opportunities to respond during instruction, using material that is correctly matched with students' instructional levels, providing choices to students (e.g., order of task completion), alternating easier tasks with those that are more challenging, pacing instruction adequately, implementing clear routines and procedures, and designing physical arrangements of the room that permit active supervision by adults (Kern and Clemens 2007; Simonsen et al. 2008).
Following adequate academic and behavioral instruction, assessment data are gathered and reviewed on a regular basis to evaluate each student's success in core instruction. Typically, students are not considered for additional intervention (i.e., tier 2) until they have had sufficient time to respond to the universal strategies (e.g., approximately six to eight weeks) implemented during core instruction. During this time, it is critical to confirm that universal interventions or core instructional strategies are implemented with fidelity.
Tier 2/3 SW-PBS Team
Initial development of SW-PBS begins with the formation of a team to focus on core instruction for social–behavioral skills. As the need for tier 2/3 intervention emerges, it is recommended that schools maintain the universal support-focused SW-PBS team but form a second group to address tiers 2 and 3. The tier 2/3 team should maintain linkages with the universal SW-PBS team by ensuring that all tier 2/3 strategies are couched within the set of school-wide expectations and that all staff are aware of their roles in supporting tier 2/3 interventions. Overlapping membership, such as including the building administrator on both teams, helps maintain these linkages; the tier 2/3 team should also capitalize on existing behavioral expertise in the school, district, or regional SW-PBS initiative. Tier 2/3 teams often include school psychologists, special educators, behavior specialists, counselors, and one or more content-area specialists (e.g., language arts or mathematics).
Personnel who serve on the tier 2/3 team establish systems and practices for students requiring more intensive social, emotional, and/or behavioral support. Members of this group ensure timely access to interventions, oversee implementation of selected interventions, regularly use data to monitor student progress during intervention, and evaluate overall program outcomes (Crone and Horner 2003). Membership of the team is determined according to ability and interest in fulfilling key roles and completing specific responsibilities. At a minimum, a tier 2/3 team typically includes one or more school administrators who are able to make decisions about staff, student, and school-wide schedules, reallocate resources, and set an overall tone of commitment and importance about the development, implementation, and ongoing monitoring of behavioral interventions. It is also recommended that the tier 2/3 team include staff with specific expertise in academic and behavioral assessment and intervention. While a primary focus of the team is development and implementation of social, emotional, and behavioral supports, many of the students identified for additional interventions will also be at risk for, or already experiencing, poor academic outcomes. The specialized behavior support team needs at least one member who can identify academic skill deficits and match students with appropriate academic intervention in addition to any behavioral supports that may be needed. Finally, the tier 2/3 team should be structured to include general education teacher representation. Teachers are often responsible for making initial referrals for assistance, selecting appropriate strategies to meet student needs, and implementing and monitoring effects of interventions (Debnam et al. 2013).
Regardless of educational role, the more critical aspects when deciding on membership are structuring the team in a way that allows collaborative, data-based decision-making; division of a shared workload; and commitment to an instructional approach to behavior management and discipline (McIntosh et al. 2013).
Student Identification for Tier 2/3 Supports
Once SW-PBS practices, data-based decision-making, and systems of support are established, implemented with fidelity, and have a measurable impact on student outcomes, tier 2/3 teams should develop a proactive process to actively seek out students who either are not responding to the core instruction or are displaying patterns of behavior that warrant additional intervention to lessen risk (e.g., Glover and Albers 2007; Kratochwill et al. 2004; Simmons et al. 2000; Walker and Shinn 2002). To identify students in need of tier 2/3 behavioral interventions, tier 2/3 teams develop a comprehensive system so that all students in a classroom, school, or district have an equal opportunity to be considered against characteristics of risk (e.g., acting out, argumentative, verbally or physically aggressive, overly shy, worried or withdrawn, anxious, unusually sad; see Fig. 2). The identification process is designed to promote early access to readily available interventions before student problems develop to a level that requires intensive intervention. It also includes clear criteria for indicating which students require immediate, intensive assistance (e.g., safety concerns for student or others). In addition, the identification approach is intended to identify students with internalizing (e.g., social withdrawal, anxiety) and externalizing (e.g., disruption, aggression) concerns regardless of the presence or absence of risk factors (Kamphaus et al. 2010).
Office Discipline Referral Data
Within an SW-PBS framework, one commonly used measure of student social behavior performance is an office discipline referral (ODR), which is a standardized record of events of problem behavior. Documentation of disciplinary events is carried out consistently across students in that behavioral infractions are identified by common definitions and a standard set of information about these incidents is recorded and collected (i.e., type of problem, location, time of day, others involved, and possible motivation). Parallel to the tiered public health model that matches level of need with level of support (Gordon 1983), SW-PBS researchers and school teams frequently use number of ODRs accrued to define the level of support each student may require (McIntosh et al. 2009b). Regular review of ODR data is the primary method for determining impact of tier 1, core behavioral instruction. Review of ODRs also serves as a way to identify individual students who continue to display social behavioral problems even with exposure to high-quality core instruction (McIntosh et al. 2009). Considering that ODRs are already commonly collected, easily available and cost-efficient, use of these data for identifying students in need of tier 2 intervention may be more likely than use of other measures that may require additional funding or collection procedures (McIntosh et al. 2009). The following criteria, derived in part from proportions of ODR distributions in a large sample of schools, are currently viewed as a method for monitoring level of support required (Horner et al. 2005). Students receiving zero to one ODR per year are considered as responsive to tier 1. That is, the foundational level of support (i.e., structuring of environment to include instruction for behavioral expectations and recognition for successful demonstrations) provided to all students is sufficiently meeting the needs of students with low or no frequency of disciplinary events. 
Students with documented disciplinary events in the range of two to five incidents are considered for receiving tier 2 supports. Students with six or more ODRs are recommended for tier 3 interventions. Although ODRs offer valuable information about the frequency, topography, location, and potential motivation for behavior, they tend to be more representative of students with externalizing, rather than internalizing, behaviors (Walker et al. 2005).
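The ODR decision rule described above can be expressed as a simple lookup. The sketch below, a minimal illustration in Python, encodes the thresholds reported by Horner et al. (2005); the function name and data fields are hypothetical, and as the text notes, teams should triangulate ODR counts with other data rather than relying on this rule alone.

```python
def odr_support_level(odr_count: int) -> str:
    """Map a student's yearly office discipline referral (ODR) count to a
    suggested SW-PBS support tier, using the thresholds reported by
    Horner et al. (2005). Illustrative only."""
    if odr_count <= 1:    # 0-1 ODRs: responsive to tier 1 (universal) supports
        return "tier 1"
    elif odr_count <= 5:  # 2-5 ODRs: consider tier 2 (small-group) supports
        return "tier 2"
    else:                 # 6+ ODRs: consider tier 3 (individualized) supports
        return "tier 3"

# Example: screen a (hypothetical) roster of yearly ODR counts
yearly_odrs = {"student_a": 0, "student_b": 3, "student_c": 7}
levels = {name: odr_support_level(n) for name, n in yearly_odrs.items()}
```

Because the rule is applied to data the school already collects, a tier 2/3 team can regenerate this kind of summary at each meeting with no additional assessment burden.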
Universal Screening
Another method for systematically identifying students who may require additional support is use of a brief, behavioral screening instrument. Typically, screening instruments require a response to short statements about emotional or behavioral characteristics of a student. These instruments can be used to generate risk scores for all students in a classroom, grade level, building, or district.
There are several potential advantages to developing a systematic identification process that incorporates universal screening. First, screening instruments are generally perceived as a quick, accurate, and respectful process with the capacity to include all children and youth of interest. Second, if an error occurs it is most often on the side of caution, with a tendency to overidentify rather than miss students who are in need of support. Third, screening scores also inform schools about the needs of their particular student population, which can assist with planning and resource mapping by identifying groups of students with common needs. Finally, universal screening is recommended as an evidence-based practice by a number of influential groups associated with educational policy and practice (e.g., President's Commission on Special Education 2002; No Child Left Behind Act 2001; U.S. Department of Health and Human Services 2001).
Unfortunately, there are a number of reasons why universal screening for behavioral risk has not yet become a more common practice. The following list represents concerns that are often expressed (Levitt et al. 2007):
- Behavior is viewed as purposeful rather than as associated with environmental arrangements.
- Historically, schools tend to be reactive rather than proactive with respect to behavior.
- There is a widespread impression that kids will "grow out of" problem behavior displayed during the early years of child development.
- Concerns about profiling or stigmatizing children and youth who meet risk criteria.
- Fear of costs and potential for identifying large numbers of students with EBD.
- General perception that it is easier to screen for vision and hearing concerns because the response falls in the realm of families.
- Political realities of managing parent reactions to behavior screenings and addressing issues of confidentiality.
- Lack of the needed skill set. Educators often are not trained to respond to behavior with the same confidence with which they respond to academic concerns.
Within a tiered framework of support, one important goal is to "catch" students before academic and/or behavioral challenges become severe. Universal screening provides an opportunity for all children to be considered for risk against identified criteria. This approach shifts focus from a traditional "wait to fail" service delivery model toward proactively seeking out children who may be at risk of academic failure and/or behavioral difficulties and who would potentially benefit from specific instruction or intervention (Glover and Albers 2007). Use of a universal screening process also has the potential to minimize the impact of risk and to impede development of more severe problems by identifying students early.
The process for determining social behavioral risk within school settings is perhaps less firmly established than the process for uncovering academic risk. Contemporary thinking in academic screening suggests that multiple concurrent methods are redundant and do not have a value-added effect; instead, academic screening emphasizes gated or filtered screening with intervention trials in between. Yet for behavioral concerns, schools have long relied on teacher nomination (i.e., referral for special education) as the most common approach for identifying problems (Kamphaus et al. 2010). Increasingly, educators are also making use of commonly collected student behavioral data (i.e., ODRs) (McIntosh et al. 2009; Walker et al. 2005). Systematic screening is becoming more prevalent but confronts educators with challenges of instrument selection, interpretation of results, capacity to meet the needs of identified students, and potentially negative perceptions of the process by families and community stakeholders who may view the procedures as invasive and/or stigmatizing (Glover and Albers 2007). The President's Commission on Special Education (United States Department of Education Office of Special Education and Rehabilitative Services 2002), the No Child Left Behind Act of 2001 (NCLB; United States Department of Education 2001), the US Public Health Service (2000), and the National Research Council (NRC; Donovan and Cross 2002) each indicate the need for early identification and intervention, with recommendations to adopt universal, early behavioral screening, yet limited information is available for schools regarding use of these techniques and results (Albers et al. 2007).
The following section provides a brief description and sample items from several different screening questionnaires: (a) the Strengths and Difficulties Questionnaire (SDQ; Goodman 1997), (b) the Behavioral and Emotional Screening System (BASC-2 BESS; Kamphaus and Reynolds 2007), and (c) the Systematic Screening for Behavior Disorders (SSBD; Walker and Severson 1992).
The Strengths and Difficulties Questionnaire
The SDQ is a brief behavioral screening questionnaire for children and youth aged 3–16 years. All versions of the SDQ ask about 25 attributes, some stated positively and others negatively. These 25 items are divided among five scales: (a) emotional symptoms (five items), (b) conduct problems (five items), (c) hyperactivity/inattention (five items), (d) peer relationship problems (five items), and (e) prosocial behavior (five items). Scales (a) through (d) are added together to generate a total difficulties score (based on 20 items).
The same 25 items are included in questionnaires for completion by the parents or teachers of 4–16-year-old children (Goodman 1997). A modified informant-rated version is available for the parents or preschool teachers of 3- and 4-year-old children. In addition, questionnaires for self-completion by adolescents also are available and ask about the same 25 traits, with slightly different wording (Goodman et al. 1998). The self-report format is appropriate for youth aged 11–16. In general population samples, it is recommended to use a three-subscale division of the SDQ into internalizing problems, externalizing problems, and the prosocial scale (Goodman et al. 2010). The SDQ can be administered and scored by hand or by entering scores on-line. Paper copies of the instrument can be downloaded and photocopied at no charge. On-line administration and scoring for the SDQ also are available.
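The arithmetic behind the total difficulties score is straightforward: four of the five scales are summed and the prosocial scale is excluded. The sketch below illustrates that computation in Python, assuming each scale score has already been obtained by summing its five items (SDQ items are rated 0–2, so each scale ranges 0–10); the function name and dictionary layout are hypothetical.

```python
# Scales that contribute to the SDQ total difficulties score;
# the prosocial scale is deliberately excluded.
DIFFICULTY_SCALES = ("emotional", "conduct", "hyperactivity", "peer")

def total_difficulties(scale_scores: dict) -> int:
    """Sum the four difficulty scales (20 items, possible range 0-40).
    `scale_scores` maps scale name -> summed item score (0-10 per scale)."""
    return sum(scale_scores[scale] for scale in DIFFICULTY_SCALES)

# Example: hypothetical scale scores for one student
scores = {"emotional": 4, "conduct": 3, "hyperactivity": 6,
          "peer": 2, "prosocial": 8}
print(total_difficulties(scores))  # prosocial (8) is not counted -> 15
```

Interpretation of the resulting score (e.g., "borderline" or "abnormal" ranges, referenced later in this chapter) is defined by the SDQ's published norms, not by the arithmetic itself.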
The Behavioral and Emotional Screening System (BASC-2 BESS)
The BASC-2 BESS offers a systematic way to determine the level of individual student risk, drawn from ratings of behavioral and emotional strengths and weaknesses for the students being considered. The instrument was designed for children and adolescents in preschool through high school. The process consists of brief forms that can be completed by teachers, parents, or students individually or in any combination. Each rating form ranges from 25 to 30 items, requires no formal training for the raters, and is easy to complete, taking only 5–10 min of administration time per student. The screener assesses behaviors that represent both problems and strengths, including internalizing problems, externalizing problems, school problems, and adaptive skills. It yields one total score with an associated risk classification (i.e., normal, elevated, extremely elevated) that is a reliable and accurate predictor of behavioral and emotional problems.
The Systematic Screening for Behavior Disorders
The Systematic Screening for Behavior Disorders (SSBD; Walker and Severson 1992) incorporates three gates, or stages. The screening takes into consideration both teacher judgments and direct observations in order to identify students at risk for developing ongoing internalizing and externalizing behavior concerns. Stage 1 of the SSBD involves teacher nomination. Stage 2 requires that teachers complete a critical events inventory and a short adaptive and maladaptive behavior checklist for each of the nominated students. Students whose scores on these checklists exceed the established cutoff are then candidates for stage 3. This final stage involves a 15-min interval observation in both the classroom and on the playground to determine a student’s actual performance in social and classroom interactions.
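The defining feature of the SSBD is its gated structure: each stage narrows the candidate pool so that the costliest step, direct observation, is reserved for the few students who pass the earlier, cheaper gates. The Python sketch below illustrates that filtering logic only; the function name, data fields, and cutoff value are all hypothetical (actual SSBD cutoffs come from the instrument's norms).

```python
CHECKLIST_CUTOFF = 30  # placeholder; the SSBD manual defines the actual cutoffs

def ssbd_gates(students):
    """Illustrative three-gate filter in the style of the SSBD.
    `students` is a list of dicts with hypothetical screening fields."""
    # Gate 1: teacher nomination of students of concern
    nominated = [s for s in students if s.get("teacher_nominated")]
    # Gate 2: critical events inventory and adaptive/maladaptive checklists;
    # only students whose scores exceed the cutoff move forward
    exceeded = [s for s in nominated
                if s.get("checklist_score", 0) > CHECKLIST_CUTOFF]
    # Gate 3: 15-min interval observations in classroom and playground settings
    return [s for s in exceeded if s.get("observation_flagged")]
```

The design choice here mirrors the rationale in the text: inexpensive teacher judgment screens everyone, while resource-intensive observation is applied only to the small group the first two gates identify.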
Intervention Matched with Student Need
Once students are identified as nonresponsive to tier 1 instruction, the tier 2/3 team continues the SW-PBS logic by matching students with an appropriate level or tier of support. This decision is based on intensity, chronicity, and nature of the problem, as specific intervention strategies are selected to reflect individual student need (e.g., social skill deficit, self-management issue, emotional concerns). For example, a team may find a student has four ODRs and scored in the borderline range of total difficulties on the SDQ (Goodman 1997), thus indicating the need for a tier 2 intervention. Or, a team may choose to forego tier 2 in favor of tier-3-level intervention for a student with more significant and intense problems (e.g., six or more ODRs, abnormal score on the SDQ). Beyond matching the intensity level (i.e., tier 2 or tier 3) of support based on student data, it is beneficial to consider particular attributes of the problem (e.g., function, location, setting, time of day, behavioral topography, acquisition deficit, performance deficit) prior to selecting an intervention (Hansen et al. 2014).
Accordingly, the tier 2/3 team should gather relevant information in a timely manner so that features of the problem are accurately identified, while still allowing for rapid access to interventions that are readily available. For a majority of identified students, screening results along with existing school data can be used to facilitate decisions that accurately inform intervention selection. Data that are easily accessible and generally useful for pinpointing aspects of social, emotional, or behavioral challenges may include documented disciplinary events (e.g., ODRs), student attendance patterns, grade point average and/or course grades, academic performance scores, and/or frequency of visits to the school nurse or counselor. These data can indicate when, where, and under what condition problem behavior is most likely to occur. For example, data may indicate a student is (a) reading below grade level, (b) often complains of sickness during reading class and thus frequently asks to see the nurse, and (c) when referred to the office for behavioral infractions the teacher consistently indicates “task avoidance” as a possible motivation for the problem behavior. Collectively, these data indicate the student may need both targeted small-group reading instruction, as well as behavioral supports to keep the student engaged during instruction. Similarly, universal screening tools originally used to identify students for tier 2/3 supports may provide relevant information for selecting an intervention. For instance, the SSBD may indicate a student has critical internalizing issues such as extreme anxiety. This information should be coupled with other data to determine the root of the anxiety, and in turn, select an intervention that confidently reflects the student’s needs. 
When tier 2/3 teams triangulate data rather than relying solely on one tool or one statistic, they can more precisely capture the student's abilities and deficits, and thus match intervention accurately (Bruhn et al. 2014; Marchant et al. 2009).
Beyond data collected as part of regular school practices, brief assessments such as the Functional Assessment Checklist for Teachers & Staff (FACTS; March et al. 2000) and the Functional Assessment Screening Tool (FAST; Iwata and DeLeon 1996) can be used to identify possible functions of students’ problem behavior (e.g., access or avoidance of attention or tasks, sensory stimulation), which in turn, can inform the appropriate intervention selection. Although identifying function is generally regarded as part of the tier 3 process, quick tools such as the FACTS and FAST may help guide “function-based” thinking at tier 2. For example, if a student is misbehaving to gain peer attention, the student might be placed in a self-management intervention whereby if the student meets his/her goal of remaining on task they can access free time spent with peers (Briere and Simonsen 2011; Bruhn et al. 2014). Similarly, much of the research on the popular tier 2 intervention, check-in/check-out (CICO), has suggested students who are attention motivated are more likely to respond positively to CICO than students with escape-motivated behavior (Hawken et al. 2011; McIntosh et al. 2009).
For the few students (approximately 5 % of the student population) who display intense and chronic problem behavior, a comprehensive functional behavioral assessment (FBA) should be conducted to create a correspondingly individualized and intensive behavior support plan (BSP). Generally, students needing an FBA have long, complex histories of behavior problems and have been exposed to multiple risk factors (e.g., transiency, academic failure, substance abuse, poor parenting). Ideally, the team conducts interviews with the student's teachers, parents, and the student him/herself; reviews archival student records; and directly observes the student to identify antecedent conditions occasioning the target behavior and the consequences maintaining that behavior (Cooper et al. 2007). Data gleaned from the FBA are then used to design a highly individualized BSP. Tier 3 interventions such as function-based BSPs tend to require more time, effort, and resources than tier 2 interventions because they are acutely tailored to the individual rather than addressing a small group of students with comparable problems through more generalized supports as in tier 2. Function-based interventions have a record of success that began in clinical settings with students who had low-incidence disabilities, but more recently, positive outcomes have been documented across a range of school-based settings for students with persistent behavioral problems (McIntosh et al. 2008; Lane et al. 2009).
Regardless of the interventions the team has to choose from, teams must have adequate time to consider and plan for all students identified as needing support beyond tier 1. Use of a specific format for collecting, reviewing, and discussing relevant student information can keep conversations directed toward existing school-based supports and/or accessible community agencies and programs while avoiding discussion of factors that are beyond the influence of the support team (e.g., home life, community circumstances, previous experiences with related families). Maintaining a problem-solving approach that is focused on alterable indicators of risk through the use of research-based interventions will maximize the time allotted for the behavior support team to address all students identified for additional support.
Research-Based Interventions
Following student identification and data review to determine the appropriate intervention and level of environmental supports, intervention implementation and ongoing supports should be provided. The tier 2/3 team should first determine their capacity to provide a range of interventions and supports, start with those currently in place, and expand to create a full range within each tier of the continuum. Key factors related to intervention implementation within a continuum of supports include: (a) clear alignment with core behavioral instruction and procedures (e.g., addresses school-wide behavioral expectations, uses same or similar language for recognizing or correcting behavior, follows the school-wide reinforcement system but provides a greater frequency of support), (b) a designated coordinator for each tier 2 strategy and designated case managers for students receiving tier 3 supports, (c) ongoing data review to assess implementation fidelity and student progress, and (d) a plan for fading or intensifying supports. Within tier 2, the current research base advocates (a) additional small-group social skill instruction addressing skill and performance deficits (Elliott and Gresham 2008), (b) empirically validated self-management strategies that have demonstrated improvements in social and academic behaviors, such as CICO (Crone et al. 2010), Check and Connect (Christenson et al. 2012), or Check, Connect, and Expect (Cheney et al. 2009), and (c) academic supports through differentiated and supplemental instruction, ideally within an MTSS with progress monitoring (Epstein et al. 2008).
Research studies reporting positive impact of tier 2 interventions are evident. For example, outcomes from use of social skill instructional groups have shown decreases in disruptive behavior, increases in on-task behavior, improved scores on teacher ratings of social behavior and academic competence, along with increases in prosocial play skills, peer interactions, and communication (Kamps et al. 2011; Gresham et al. 2006; Marchant et al. 2007). Evidence for group-oriented self-management programs shows similar results. For example, investigations of the CICO intervention consistently demonstrate decreases in ODRs for student participants (e.g., Hawken et al. 2007; Todd et al. 2008), increases in academic engagement (Hawken and Horner 2003; Campbell and Anderson 2011), and reduced frequency of disruptions or negative social interactions (e.g., Campbell and Anderson 2008; McIntosh et al. 2009). Finally, a recent study documented use of an academic support intervention that showed improvements in "assignment attack" behaviors such as task persistence and organization (Ness et al. 2011).
Tier 3 supports are guided by an FBA whereby an individual support plan is developed to teach a functionally equivalent replacement behavior and environmental modifications are made to reinforce replacement skill use (Gage et al. 2012). Additional mental health, family, and other noneducational supports are also included in tier 3 supports when indicated (e.g., RENEW; Malloy et al. 2010).
In addition to building small-group strategies within tier 2 supports, the tier 2/3 team can also include simple alterations within classroom and other school environments as an additional tier 2 option. It may be the case that a small-group intervention is not necessary to address the presenting concerns; rather, intensifying universal practices in response to the problem may be sufficient to produce behavior change. For example, intensifying universal strategies for targeted behaviors, such as (a) providing more frequent prompts for expected behavior, (b) increasing the rate of specific performance feedback for correct behavior, (c) reteaching classroom rules and routines, or (d) altering the environment to increase supervision or to add effective instructional practices, may be sufficient to meet the student's need.
Monitoring Student Progress
In the same manner that curriculum-based measures are one type of progress monitoring tool to assess academic performance over time, behavioral assessment data within SW-PBS can also be collected and used as the basis for determining students’ RTI. In academic progress monitoring, there are explicit decision rules to guide service delivery and determine student responsiveness (VanDerHeyden et al. 2007). Behavioral progress monitoring, on the other hand, lacks an equivalent standard protocol. Rather, instructional programming decisions may be made based on data collected via systematic direct observation, direct behavior ratings (DBR; Chafouleas 2011), and review of archival data (e.g., documented disciplinary events, time out of instruction due to problem behavior, academic work samples). These are common techniques for monitoring student progress before, during, and after intervention.
Direct observation is certainly the most time- and labor-intensive progress monitoring option, as it requires an individual to watch a student's behavior for a set period of time and record either the numerical (e.g., frequency) or temporal (e.g., duration) dimension of the behavior. This process must be done frequently, and data should be graphed for analysis. Additionally, to establish reliability of direct observation data, another observer may simultaneously, but independently, collect data using the same preestablished procedures. Then, the data may be compared for interobserver agreement (IOA). Although direct observation of both problem and appropriate behavior would provide schools with the best data source to inform instructional decisions, the challenges of (a) observing behavior expected across multiple educational settings, (b) the costs in terms of time to complete, and (c) the expertise required to collect reliable data often limit direct observation as a viable option for progress monitoring. This may be true especially for teachers who are asked to simultaneously teach and collect data, as many teachers view this as an impossible task (Epstein 2010). Instead, teacher perceptions as measured through informal and formal behavioral rating scales, and DBRs, are more common approaches to monitoring behavioral interventions. A DBR is similar to direct observation in that it requires the teacher to directly observe the student. However, observation is not continuous; rather, ratings are accumulated over a set period of time (e.g., first period, circle time, 45 min). The DBR can be used in its pre-populated form, which asks teachers to rate students on a scale of 0–10 on three behaviors: academically engaged, disruptive, and respectful. Or, the teacher can tailor the DBR to reflect the behavior(s) of interest. Researchers have suggested the pre-populated DBR is moderately to highly correlated with frequency and duration data obtained via direct observation (Chafouleas et al. 2009; Riley-Tillman et al. 2008). Thus, DBR may be a more practical option than direct observation for teachers.
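As a concrete illustration of the measures described above, the sketch below computes total-count IOA (conventionally, the smaller of the two observers' counts divided by the larger, expressed as a percentage) and a mean DBR rating across sessions. The function names are our own, not part of any published instrument.

```python
# Illustrative computations for the progress monitoring measures above.

def total_count_ioa(count_a, count_b):
    """Total-count interobserver agreement: smaller / larger * 100."""
    if count_a == count_b:
        return 100.0  # covers the 0-and-0 case without dividing by zero
    return min(count_a, count_b) / max(count_a, count_b) * 100

def mean_dbr(ratings):
    """Average a series of 0-10 DBR ratings for one behavior."""
    return sum(ratings) / len(ratings)
```

For example, if one observer records 5 disruptions and a second records 10 during the same session, total-count IOA is 50 %, a level that would prompt retraining on the observation procedures before the data are used for decision-making.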
Data-Based Decisions for Movement Between Tiers
Whether the team selects direct observation or a DBR, regular review of student data is key to making decisions about movement within the tiered model. For example, in some cases, review of student data will demonstrate a student has consistently met a specific behavioral performance target over a period of several weeks. In this event, the tier 2 team would have sufficient evidence to advocate for a gradual reduction in intervention supports that leads the student to a self-management and maintenance phase (e.g., Campbell and Anderson 2011). Or, regular review of student data might reveal highly variable performance or performance that is consistently below the expected level, representing a questionable RTI. Under these circumstances, the team will first verify fidelity of intervention implementation and then make decisions that could include simple modifications to the existing support or recommendation for a higher intensity intervention (e.g., Campbell and Anderson 2008; Fairbanks et al. 2007). Finally, in some cases, review of student performance data will indicate an overall poor response to the intervention, demonstrated by an increase in frequency or intensity of problems and/or failure to improve at the expected rate or to the desired level of performance after sufficient exposure to an intervention matched with need and implemented with integrity. This type of data can serve as the basis for recommending additional information gathering (i.e., FBA) and development of an individualized support plan in tier 3 (Crone and Horner 2003; March and Horner 2002).
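The decision rules above can be expressed as a simple, hypothetical algorithm. The 80 % points goal, the four-week window, and the cutoff for recommending an FBA are illustrative assumptions only; actual criteria should be set by the tier 2/3 team.

```python
def tier_decision(weekly_pct, goal=80, weeks_to_fade=4, fidelity_ok=True):
    """Illustrative decision rule for movement between tiers.

    weekly_pct: recent weekly percentages of points earned (most recent last).
    Returns 'fade', 'verify_fidelity', 'modify_or_intensify',
    'refer_for_fba', or 'continue'.
    """
    recent = weekly_pct[-weeks_to_fade:]
    if len(recent) == weeks_to_fade and all(p >= goal for p in recent):
        return "fade"                 # consistent responder: begin fading supports
    if not fidelity_ok:
        return "verify_fidelity"      # fix implementation before judging response
    if all(p < goal for p in recent):
        # sustained poor response despite accurate implementation
        return "refer_for_fba" if max(recent) < goal - 20 else "modify_or_intensify"
    return "continue"                 # mixed/variable: keep monitoring
```

Note that the rule checks implementation fidelity before interpreting a poor response, mirroring the sequence described above: a student should not be moved to a more intensive tier on the basis of an intervention that was never delivered accurately.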
Monitoring Fidelity
With respect to fidelity of treatment, the tier 2/3 team has two foci. First, the tier 2/3 team must routinely consider how well or to what extent interventions are being accurately implemented. Second, the team should conduct, at least annually, a review of their SW-PBS tier 2/3 system that includes evaluation of overall effects and impact toward improvement of social and academic outcomes.
Tier 2/3 Intervention Fidelity
Only when an intervention is implemented as designed can conclusions about behavior changes be made accurately (Gansle and Noell 2007; Yeaton and Sechrest 1981). That is, school teams that measure implementation quality and accuracy can have greater confidence in decisions made when reviewing student data. For example, if a student demonstrates limited or poor response during an intervention but the school team has measured and determined fidelity of implementation to be high, then it is likely the student truly is in need of adapted, alternate, or more intensive support. Conversely, if a student did not have the opportunity to benefit from intervention because it was not implemented with fidelity, then the team may need to provide additional training and resources to the interventionist and allow time for the intervention to be implemented with fidelity prior to placing the student in a more intense level of support (Bruhn et al. 2014; Mitchell et al. 2011). Additionally, teams must realize there are multiple factors such as intervention complexity, training adequacy, and skill of the interventionist that affect intervention integrity (Yeaton and Sechrest 1981). Regardless of the reason for low treatment integrity, performance feedback is critical for improvement and accurate implementation (Duhon et al. 2009; Keller-Margulis 2012). Performance feedback should involve the team reviewing fidelity data and participating in a constructive dialogue about setting goals for improving implementation and specific steps to take toward meeting those goals (Keller-Margulis 2012). Determining the extent to which all parts of an intervention are implemented accurately is a key priority for the tier 2/3 team. SW-PBS tier 2/3 teams are challenged with developing a process that (a) provides adequate evidence the intervention was implemented as intended, and (b) can feasibly be conducted on a regular basis. 
Three common methods for measuring accuracy of intervention implementation include: (a) intervention-specific product review (e.g., monitoring forms, rating scales, daily progress report (DPR), self-management records), (b) direct observation of intervention implementation, and (c) self-report measures completed by intervention providers. Each method includes both benefits and potential limitations.
Intervention Product Review
One way to verify delivery of intervention is by completing a review of products associated with the intervention (Crone et al. 2010). For example, many tier 2 behavioral interventions include daily or weekly documentation of student performance, such as the DPR system used in CICO. In these cases, the behavior support team can examine three to five of the most recent progress documents to determine whether specific elements of the intervention occurred (e.g., progress was recorded, progress was calculated and evaluated using specified criteria, data were shared with relevant stakeholders such as the classroom teacher and family as evidenced by stakeholder signatures). When review of student products provides evidence that specified components of the intervention are in place and being delivered consistently, the team can have greater confidence in the accuracy of progress monitoring data and subsequent decisions made from use of those data. Alternately, when review of intervention-related products identifies an area of low implementation, a member of the support team can be designated to provide coaching, remediation of skills, and feedback about intervention delivery to those involved (e.g., student, teacher, and/or parent) as needed. However, one limitation of the product review is that it does not take into account components of the intervention that are not part of the product, or in this case, the DPR. For instance, when students and interventionists use the DPR, it is expected that students will receive verbal feedback from an adult about their performance as well as a predetermined reward for meeting goals. These components, which may be critical to the success of the intervention, may need to be documented via direct observation or self-report (Bruhn, McDaniel, & Kreigh, in press).
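As a minimal sketch of such a product review, the code below checks the most recent DPR records for a handful of required fidelity elements and reports the proportion present. The record format, field names, and five-record window are hypothetical; a team would substitute the elements specified for its own intervention.

```python
# Hypothetical DPR product review. Field names and window are illustrative.
REQUIRED = ("points_recorded", "goal_evaluated", "teacher_signed", "parent_signed")

def dpr_product_review(records, window=5):
    """Fraction of required fidelity elements present in recent DPR records.

    records: list of dicts with boolean fields; most recent last.
    """
    recent = records[-window:]
    checks = [r.get(field, False) for r in recent for field in REQUIRED]
    return sum(checks) / len(checks)
```

A result well below 1.0 would point the team toward the specific missing element (e.g., absent parent signatures) rather than a vague sense that "fidelity is low."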
Direct Observation
A second method for verifying accuracy of intervention integrity is direct observation. In these cases an observation checklist may be especially useful both for documenting features that occurred and for providing feedback to implementers. Direct observation tools can be developed to reflect the essential features of intervention delivery. For example, a tier 2 social skills intervention observation checklist might include: (a) instructor introduced, defined, and discussed skill use and its importance, (b) instructor demonstrated at least two examples and nonexamples of the skill, (c) students correctly demonstrated skill use in prompted role plays, (d) students correctly demonstrated skill use in nonprompted role plays, and (e) instructor reinforced occurrences of the appropriate skill and corrected incorrect skill use (Elliott and Gresham 2008). As a second example, within a tier 3 FBA-BSP intervention, direct observation targets might include: (a) teacher prompted student regarding correct skill use, (b) teacher consistently reinforced desired behavior, and (c) teacher minimized reinforcement of problem behavior. Like direct observation used to monitor student progress, direct observation of fidelity is also time and labor intensive, but is the most accurate method for estimating intervention integrity.
Self-report Measures
A final option for collecting fidelity of implementation data is asking intervention providers to record components they provide and/or self-assess their level of accuracy according to a list of described features. Self-report measures can be organized to collect implementer ratings (e.g., five-point scale) or as a simple “yes” or “no” checklist. Interviews and questionnaires may be used to give informants an opportunity to elaborate beyond the scope of what is covered in a checklist or rating scale. Although self-report measures are often used to assess implementation because they are easy and require little time to complete, teams using this technique should be cautious when reviewing and evaluating data. Some research on self-reporting has indicated implementers tend to overrate their performance and accuracy (Wickstrom et al. 1998). However, more recently, some researchers have demonstrated self-report of fidelity can be an accurate and more efficient alternative to direct observation (Hagermoser Sanetti and Kratochwill 2009). Regardless, a multi-method, multi-informant approach to measuring fidelity that includes both direct and indirect observation can only increase the accuracy of conclusions drawn about intervention effects (Lane et al. 2004).
Tier 2/3 Systems Fidelity
A hallmark of the SW-PBS process is coordinated universal, tier 2, and tier 3 systems of support (Lewis et al. 2010). Similar to the work of VanDerHeyden et al. (2007), who evaluated fidelity of an entire RTI system including not only intervention fidelity but also fidelity to the decision-making process, recent advancements in SW-PBS related to effective tier 2/3 intervention efforts led to the development of instruments designed to assess the fidelity of the overall systems, or process, of implementation. These tools are used both as metrics of fidelity and as data sources for making data-based decisions to continually build and refine tier 2/3 systems. One example, the Benchmarks for Advanced Tiers (BAT; Anderson et al. 2010), was created to answer three main questions: (1) Are the foundational (organizational) elements in place for implementing tier 2 and 3 behavior support practices? (2) Are the essential features of a tier 2 support system in place? and (3) Are the essential features of a tier 3 system in place? The BAT is completed by the tier 2/3 team and reflects the consensus or majority of team member perceptions. A second example, the Individual Student Systems Evaluation Tool (ISSET; Anderson et al. 2012), is also used to measure the implementation status of tier 2/3 systems within a school (e.g., Debnam et al. 2013). The ISSET consists of 35 items and is divided into three parts: foundations, tier 2 interventions, and tier 3 interventions. It is administered and completed by an external evaluator, and a summary score is obtained for each of the three parts. Two data sources are used to score the ISSET: administrator and teacher interviews and a review of permanent products/documented procedures. Results from either instrument are used to identify areas of strength and needed improvements.
Evaluating Impact
Tier 2/3 teams, in concert with the universal SW-PBS team, adhere to the basic logic of SW-PBS (i.e., data–practices–systems) and also conduct informal evaluations of the effects of their efforts. Teams focus on three questions: (a) Did the interventions we put in place lead to improved student outcomes? (b) If interventions were not effective, what else do we need to know to increase the likelihood of success (e.g., what training and technical assistance do we need)? (c) Could we have worked more efficiently? (See Fig. 3 for a more comprehensive list of system evaluation points.) To answer these questions, the tier 2/3 and universal SW-PBS teams determine what change has occurred across the variables or behaviors of interest. In the case of behavioral intervention, programs were likely selected with the expectation of impacting problem behavior and student engagement, which in turn may lead to improvements in academic achievement. Evaluating features of tier 2 and 3 program impact can occur using a variety of methods. For example, in some cases compilation of existing school data provides valuable evidence of program impact and can be used to evaluate system outcomes.
Under other circumstances, when student outcomes are inconsistent, a more formalized assessment of behavior support team member perceptions or measures of system-wide implementation may be warranted (e.g., Self-assessment Survey, SAS; School Safety Survey, SSS; Team Implementation Checklist, TIC). School staff members typically complete the SAS at a minimum annually. After reviewing outcomes from tier 2 or 3 interventions that demonstrated questionable positive impact for multiple students, the team could review SAS results to get a better understanding of staff perceptions of implementation at the school-wide, nonclassroom, classroom, and individual student levels (Safran 2006). SAS results also capture staff perceptions of priorities for improvement. Regular review of these data helps ensure that SW-PBS efforts across all tiers address the needs identified by the staff who completed the survey.
In addition to the outcomes derived from existing school data, teams also gather social validation data. Social validity data typically provide a picture of the extent to which particular stakeholder groups (i.e., students, families, and teachers) value identified practices and outcomes. Social validity data are commonly gathered through use of a survey or by asking personnel to respond to items on a brief questionnaire. Example statements may include: "Overall problem behaviors have decreased for this student during participation in the behavioral intervention program"; "I think this behavioral intervention program may be good for other students in our school"; "Having my child/student in the behavioral intervention program is worth my time and effort." If social validity results are low, it may be difficult to continue implementation of the support "as is." Instead, teams will investigate why the practice is perceived poorly and make adjustments either by providing additional information and technical assistance and/or by making changes to features that perhaps were not feasible to maintain (Fixsen et al. 2009; Guskey 2002).
Finally, timelines for conducting the system evaluation must also be considered. An annual review that occurs near or after the end of each school year may be practical and make sense for many school teams. This time frame allows widespread participation in the system across many staff members, students, and parents throughout the school year, concludes during a period when student data are already commonly collected, and guides decisions the team will make for refining the system the following year. Annual evaluation allows time before the start of the next school year for making adjustments to the existing system, such as improving the communication system to all staff regarding implementation if student outcomes are inconsistent across the school day, or providing additional training for intervention implementers if fidelity is low. Systems evaluation requires thoughtful but realistic consideration. The process should use existing data collection strategies (e.g., student outcomes) and fidelity/planning tools (e.g., the BAT) to ensure teams engage in the process. At the same time, evaluations also should not be so simplistic that valuable outcomes are overlooked or never uncovered.
Empirical Support for Tier 2/3 SW-PBS
To date, there is empirical evidence on the social, emotional, behavioral, and academic impact of tier 1 SW-PBS supports (Barrett et al. 2008; Bradshaw et al. 2010; Bradshaw et al. 2008; Curtis et al. 2010; Horner et al. 2009; Simonsen et al. 2010). Likewise, there is a strong evidence base demonstrating positive effects from use of particular interventions that are commonly implemented within tier 2 and 3 levels of support. For example, social skill instruction, self-management strategies, academic supports, and use of FBA data to guide individual BSPs have been verified as effective treatments for changing behavior (e.g., Bessette and Wills 2007; Christensen et al. 2007; Skinner et al. 2009; Sumi et al. 2013). However, empirical demonstrations of the impact of tier 2 and 3 supports within the context of a complete continuum of SW-PBS to date are limited. The following provides a brief summary of what has been concluded from the existing body of work.
Two recent reviews of tier 2 implementation studies examined questions associated with identification of students, delivery of supports, and overall impact (Bruhn et al. 2014; Mitchell et al. 2011). From the two reviews, several themes emerged. First, on balance, the overall number of studies investigating tier 2 implementation has increased recently. From approximately 2002 through 2012, researchers conducted and published nearly 30 separate investigations concerning tier 2 interventions and their delivery within a tiered framework (Bruhn et al. 2014). Second, demonstrations reporting sufficient fidelity of tier 1 implementation prior to initiation of tier 2 intervention, particularly at the classroom level, were fewer than expected (Bruhn et al. 2014). Third, although there is evidence of reliance on a few commonly collected data sources such as disciplinary events, screening scores, and teacher nominations as primary sources for determining which students will access intervention, wide variation in identification approaches was still clearly evident (Bruhn et al. 2014; Mitchell et al. 2011). Fourth, across the existing body of work, evidence generally indicated high social validity ratings for the variety of interventions that have been included. The remainder of this section provides further detail for some of these findings.
Core Instruction Delivered With Fidelity
Within the SW-PBS literature, several instruments have been developed and used for monitoring implementation efforts. Examples include the TIC (Sugai et al. 2009), the SSS (Sprague et al. 2002), and the Benchmarks of Quality (BoQ; Cohen et al. 2007). Two of the most commonly used tools are the School-Wide Evaluation Tool (SET; Horner et al. 2004) and the Effective Behavior Support/Self Assessment Survey (EBS/SAS; Sugai et al. 2000). The SET is designed to assess features of tier 1 that are in place and evaluate ongoing efforts for developing a school-wide approach to behavior management and discipline.
The SET consists of interviews, observation of the setting, and a review of products such as teaching plans, signage, and data collection. An overall score is calculated and results of 80 % or higher indicate adequate implementation. In some of the initial publications for tier 2/3 interventions, it was common to simply state that features of tier 1, core instruction such as clearly defined expectations and rules, explicit behavioral instruction across settings, use of a recognition system, and consistent documentation of behavioral infractions were in place (e.g., Lane et al. 2002; Lane et al. 2003; Gresham et al. 2006). However, more recent investigations reflect the importance of high-quality core instruction as part of the continuum of supports by attending to and reporting scores from the SET (e.g., Campbell and Anderson 2011; Hawken et al. 2011; Mong et al. 2011; Simonsen et al. 2011). While the SET is a research-validated instrument with strong psychometric properties demonstrating consistency in measurement of tier 1 features, one aspect of tier 1 that is not addressed within the SET is how well the core behavioral instruction is provided within individual classrooms (Horner et al. 2004). Likewise, acknowledgment of students demonstrating expected behaviors and matching instruction with student ability are associated with sustained implementation and positive student outcomes (e.g., ODR) (Matthews et al. 2014). However, the existing literature related to tier 2 has not explicitly reported that scores from the SAS showed integrity of classroom-level implementation prior to delivery of behavioral supports for identified students (Bruhn et al. 2014; Mitchell et al. 2011).
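As an illustration of how the SET's summary criterion described above works, the sketch below converts item scores to an overall percentage of points possible and compares it to the 80 % criterion. The assumption that each item is scored 0–2 is illustrative; consult the instrument itself for exact scoring rules.

```python
# Illustrative SET-style scoring. The 0-2 item scale is an assumption here.

def set_percent(item_scores, max_per_item=2):
    """Overall implementation score as a percentage of points possible."""
    return sum(item_scores) / (len(item_scores) * max_per_item) * 100

def adequate_implementation(item_scores, criterion=80):
    """True when the overall score meets the published 80 % criterion."""
    return set_percent(item_scores) >= criterion
```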
Identification Practices and Tools
The existing evidence also indicates some commonalities among tools that are used to identify students for tier 2 behavioral interventions, but still shows wide variation in decision criteria. For example, Mitchell et al. (2011) reported participation in a tier 2 intervention was based on one or a combination of the following: (a) a nomination process in which a classroom teacher, a parent, or a problem-solving team identified the student as at-risk; (b) use of existing behavioral performance data—typically, ODR information—to indicate that the student was unresponsive to the tier 1 prevention efforts or continuing to demonstrate difficulties meeting social behavioral expectations; or (c) use of a behavioral screening score. Similarly, Bruhn et al. (2014) determined (a) ODR; (b) teacher nomination; (c) academic performance; or (d) other methods such as parent nomination, attendance data, or a specified behavioral function served as tools for identifying student candidates for intervention. Yet, specific details regarding exact criteria differed across studies. Descriptions of several examples follow.
Nomination
Several studies asked teachers to identify, without a quantifiable index, which students needed intervention (Bruhn et al. 2014). In one study, participants were identified for intervention based on teacher nominations because of behavioral difficulties in the classroom and/or existence of a behavior plan (McCurdy 2007). A different example indicated teachers nominated students because of classroom problem behavior and lack of responsiveness to the tier 1 prevention efforts (McIntosh 2009). In a third example, a multi-informant process, which included administrator nomination, teacher verification of problem behavior, parental consent, and student willingness to participate, was used to determine which children would receive intervention (Todd et al. 2008).
Student Data
The existing research base also provides examples for using specified cut points of student academic and/or behavioral data that are commonly collected in school settings to identify intervention candidates. For example, students who met a predetermined threshold of risk such as five or more disciplinary incidents or two or more documented events of problem behavior were selected for intervention (Hawken 2006; Hawken et al. 2007). Other studies have also included use of academic data such as grade point averages (e.g., Robertson and Lane 2007) or Dynamic Indicators of Basic Early Literacy Skills (DIBELS) scores (e.g., Wills et al. 2010) along with the collected behavioral data (Bruhn et al. 2014).
Behavioral Screening
In the most recent comprehensive review of tier 2 within SW-PBS, use of one or more screening instruments to identify students at risk was evident in less than half of all the reviewed studies (i.e., 13 of 28; Bruhn et al. 2014). When a systematic screening process was used, the most commonly employed instruments were the Student Risk Screening Scale (SRSS; Drummond 1994), the SSBD (Walker and Severson 1992), and the Social Skills Rating System (SSRS; Gresham and Elliott 1990). Several examples combined use of the SSBD with the SRSS (e.g., Lane et al. 2008; Lane et al. 2010; Little et al. 2010), which is interesting considering the SSBD has been shown to identify both externalizing and internalizing concerns while, at this time, the SRSS is best known for detecting externalizing attributes alone. One study demonstrated use of a behavioral screener, the SSBD, but also included DIBELS as part of the identification method (Wills et al. 2010). Demonstration of screening as a method for tier 2 identification appears to be increasing in recent years. Among the 13 studies from the Bruhn et al. (2014) review, all that included a behavioral screening process were published within the past 10 years (i.e., 2003–2011), and a majority were published in 2007 or later. While use of screening to identify students at risk in academic domains is well established, teams working within SW-PBS have continued to rely on more of a multi-method approach that often includes use of a psychometrically sound rating instrument but may also review other data such as informal teacher or school personnel perceptions of problems and/or recorded disciplinary events (Bruhn et al. 2014).
Social Validity
A final topic evident within both of the recently published tier 2 literature reviews was the issue of social validity. In a 1976 address presented to the Division of Experimental Analysis of Behavior within the American Psychological Association, Montrose Wolf described, with humor, how Don Baer helped prioritize “social importance” as it relates to the mission of applied behavior analysis and how, subsequently, ABA came to “have a heart” (Wolf 1978). Not quite 40 years later, measurement of behavioral change that is perceived as socially valued or important is more common than not, at least in the area of SW-PBS research. Currently, the majority of studies of tier 2 level interventions have reported social validity data (Bruhn et al. 2014; Mitchell et al. 2011). Most studies that included social validity measurement gathered these data at minimum from participating teachers, who were perhaps best positioned to detect the impact of school-based interventions (e.g., Fairbanks et al. 2007; Hawken et al. 2007). Several studies also measured student beliefs about the interventions that were delivered (e.g., Lane et al. 2002; Lane et al. 2003). Commonly used tools included the Intervention Rating Profile (IRP; Witt and Elliott 1985), the Children’s Intervention Rating Profile (CIRP; Witt and Elliott 1985), and the CICO Program Acceptability Questionnaire (Crone et al. 2010). Results from these assessments across a variety of stakeholders showed generally favorable outcomes for indicators such as ease of implementation, perceived benefit, and potential sustainability of practices over time (Bruhn et al. 2014).
Recommendations for Future Research
There is agreement about the types of practices to include in a full continuum of behavioral supports, and there is understanding of the essential system features that facilitate high-quality implementation. What is needed is a more expansive body of empirical evidence indicating that positive outcomes (e.g., student academic and behavioral indicators, organizational health, school safety and climate) are attained and can be sustained over time when interconnected tiered support is delivered. In addition, research is needed on several factors within the tier 2/3 system, including (a) impact on students with internalizing problem behavior (e.g., depression, anxiety), (b) the ability of teams to effectively and efficiently match intervention to presenting problem using existing, commonly collected school data, (c) the impact of academic and behavioral supports guided by student need, (d) essential implementation steps that lead to sustained outcomes, and (e) the mediating and/or moderating effect of related environmental variables such as school district/state support, community and cultural factors, and prior student learning history. This will require sophisticated investigations demonstrating a clear linkage between tier 1 systems (including classroom practices) implemented with high fidelity and student-need-directed tier 2/3 supports. While the emerging literature on the impact of tier 2/3 supports within the context of SW-PBS is encouraging, the true “value add” has yet to be empirically demonstrated. The logic of nesting tier 2/3 supports within a connected and related instructional environment is built on decades of research on promoting maintenance and generalization of intervention outcomes and on identifying and supporting students at the first sign of risk. Two recent studies provide a starting point for empirically validating both the prevention logic of SW-PBS and the impact of a complete continuum of supports.
First, Reddy and colleagues completed a meta-analytic review of the types of prevention and intervention programs available for students with or at risk for emotional and behavioral disorders (Reddy et al. 2009). Findings indicated that prevention and early intervention programs can be effective for reducing risk of onset and for minimizing the impact of already existing symptoms. Second, Nelson et al. (2009) assessed child outcomes when a full continuum of behavioral supports was provided. The tiered system of supports consisted of Behavior and Academic Support and Enhancement (BASE) at the universal level (Nelson et al. 2002), First Step to Success as the tier 2 level intervention (Walker et al. 1997), and multi-systemic therapy (MST) as the tier 3 treatment (Henggeler et al. 1998), and was tested with 407 children in grades K–3. Effects on problem behavior, social skills, and academic competence across levels of intervention (tier 1, tier 2, and tier 3) were measured. One finding of particular importance was that children who participated in the tier 2 or tier 3 interventions early in their school career showed immediate social behavioral improvements that could be sustained over time with continued implementation of the tier 1 supports (Nelson et al. 2009). However, the results did not demonstrate a similar outcome for teacher ratings of academic competence among students who received the advanced tier interventions. The results offered initial evidence to support the idea that a tiered service delivery model can impact the behavioral performance of children with or at risk for behavioral problems (Nelson et al. 2009).
Implications for Practice
Across this chapter, essential features and examples for building SW-PBS systems at the tier 2/3 level of support have been provided. The limitations of a single chapter and the complexities of both specific intervention strategies and the intensity of support across school environments preclude simple recommendations for practice. The encouraging news, as previously discussed, is that intervention and data collection strategies, along with the necessary systems of support at the universal level, are well established, and comprehensive implementation guides are readily available for practitioners (see pbis.org). In addition, many districts have adopted integrated frameworks to tie in community supports at the individual student level. Unfortunately, schools and districts continue to struggle with establishing effective tier 2 systems of support. While the research base is limited with respect to systems, as noted above there is strong empirical evidence for specific intervention strategies appropriate for tier 2 levels of support (e.g., CICO, social skill instruction).
References
Albers, C. A., Glover, T. A., & Kratochwill, T. R. (2007). Introduction to the special issue: How can universal screening enhance educational and mental health outcomes? Journal of School Psychology, 45(2), 113–116.
Anderson, C., Lewis-Palmer, T., Todd, A., Horner, R., Sugai, G., & Sampson, N. (2008). Individual student systems evaluation tool. Unpublished instrument, Eugene, OR: Educational and Community Supports, University of Oregon. http://www.pbis.org/common/cms/files/pbisresources/ISSET_TOOL_v_3_March_2012.pdf. Accessed 1 March 2015.
Anderson, C. M., Childs, K., Kincaid, D., Horner, R. H., George, H., Todd, A. W., Sampson, N. K., & Spaulding, S. (2010). Benchmarks for advanced tiers (BAT). Unpublished instrument. Eugene OR: Educational and Community Supports. http://www.pbis.org/common/cms/files/pbisresources/BAT_v2.5.pdf. Accessed 1 March 2015.
Anderson, C. M., Lewis-Palmer, T., Todd, A. W., Horner, R. H., Sugai, G., & Sampson, N. K. (2012). Individual student systems evaluation tool. Educational and Community Supports, University of Oregon.
Barrett, S. B., Bradshaw, C. P., & Lewis-Palmer, T. (2008). Maryland statewide PBIS initiative systems, evaluation, and next steps. Journal of Positive Behavior Interventions, 10(2), 105–114.
Benedict, E., Horner, R. H., & Squires, J. (2007). Assessment and implementation of positive behavior support in preschools. Topics in Early Childhood Special Education, 27(3), 174–192.
Bessette, K. K., & Wills, H. P. (2007). An example of an elementary school paraprofessional-implemented functional analysis and intervention. Behavioral Disorders, 32(3), 192–210.
Bradley, R., Henderson, K., & Monfore, D. A. (2004). A national perspective on children with emotional disorders. Behavioral Disorders, 29, 211–223.
Bradshaw, C. P., Reinke, W. M., Brown, L. D., Bevans, K. B., & Leaf, P. J. (2008). Implementation of school-wide Positive Behavioral Interventions and Supports (PBIS) in elementary schools: Observations from a randomized trial. Education and Treatment of Children, 31(1), 1–26.
Bradshaw, C. P., Mitchell, M. M., & Leaf, P. J. (2010). Examining the effects of schoolwide positive behavioral interventions and supports on student outcomes results from a randomized controlled effectiveness trial in elementary schools. Journal of Positive Behavior Interventions, 12(3), 133–148.
Briere, D. E. III, & Simonsen, B. (2011). Self-monitoring interventions for at-risk middle school students: The importance of considering function. Behavioral Disorders, 36, 129–140.
Bruhn, A. L., McDaniel, S., & Kreigh, C. (in press). Self-monitoring interventions for students with behavior problems: A review of current research. Behavioral Disorders.
Bruhn, A. L., Lane, K. L., & Hirsch, S. E. (2014). A review of tier 2 interventions conducted within multitiered models of behavioral prevention. Journal of Emotional and Behavioral Disorders, 22(3), 171–189.
Campbell, A., & Anderson, C. M. (2008). Enhancing effects of check-in/check-out with function-based support. Behavioral Disorders, 33(4), 233–245.
Campbell, A., & Anderson, C. M. (2011). Check-in check-out: A systematic evaluation and component analysis. Journal of Applied Behavior Analysis, 44(2), 315–326.
Chafouleas, S. M. (2011). Direct behavior rating: A review of the issues and research in its development. Education and Treatment of Children, 34, 575–591. doi:10.1353/etc.2011.0034.
Chafouleas, S. M., Kilgus, S. P., & Hernandez, P. (2009). Using direct behavior rating (DBR) to screen for school social risk: A preliminary comparison of methods in a kindergarten sample. Assessment for Effective Intervention, 34, 224–230. doi:10.1177/1534508409333547.
Chard, D. (2013). Systems impact: Issues and trends in improving school outcomes for all learners through multitier instructional models. Interventions in School and Clinic, 48(4), 198–202.
Cheney, D. A., Stage, S. A., Hawken, L. S., Lynass, L., Mielenz, C., & Waugh, M. (2009). A 2-year outcome study of the check, connect, and expect intervention for students at risk for severe behavior problems. Journal of Emotional and Behavioral Disorders, 17(4), 226–243.
Christensen, L., Young, K. R., & Marchant, M. (2007). Behavioral intervention planning: Increasing appropriate behavior of a socially withdrawn student. Education and Treatment of Children, 30(4), 81–103.
Christenson, S. L., Stout, K., & Pohl, A. (2012). Check & connect: A comprehensive student engagement intervention. Minneapolis: University of Minnesota.
Cohen, R., Kincaid, D., & Childs, K. E. (2007). Measuring school-wide positive behavior support implementation development and validation of the benchmarks of quality. Journal of Positive Behavior Interventions, 9(4), 203–213.
Conroy, M. A., Hendrickson, J. M., & Hester, P. (2004). Early identification and prevention of emotional and behavioral disorders. In R. B. Rutherford, M. M. Quinn, & S. R. Mathur (Eds.), Handbook of research in emotional and behavioral disorders (pp. 199–215). New York: Guildford Press.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). New Jersey: Pearson.
Crone, D. A., & Horner, R. H. (2003). Building positive behavior support systems in schools: Functional behavioral assessment. New York: Guilford Press.
Crone, D., Hawken, L. S., & Horner, R. H. (2010). Responding to problem behavior in schools: The behavior education program (2nd ed.). New York: Guilford Press.
Curtis, R., Van Horne, J. W., Robertson, P., & Karvonen, M. (2010). Outcomes of a school-wide positive behavioral support program. Professional School Counseling, 13(3), 159–164.
Debnam, K. J., Pas, E. T., & Bradshaw, C. P. (2013). Factors influencing staff perceptions of administrator support for tier 2 and 3 interventions: A Multilevel perspective. Journal of Emotional and Behavioral Disorders, 21(2), 116–126.
Donovan, M. S., & Cross, C. T. (2002). Minority students in special and gifted education. Washington, DC: National Academy Press.
Dowdy, E., Doane, K., Eklund, K., & Dever, B. V. (2013). A Comparison of teacher nomination and screening to identify behavioral and emotional risk within a sample of underrepresented students. Journal of Emotional and Behavioral Disorders, 21(2), 127–137.
Drummond, T. (1993). The student risk screening scale (SRSS). Grants Pass: Josephine County Mental Health Program.
Duda, M. A., Dunlap, G., Fox, L., Lentini, R., & Clarke, S. (2004). An experimental evaluation of positive behavior support in a community preschool program. Topics in Early Childhood Special Education, 24(3), 143–155.
Duhon, G. J., Mesner, E. M., Gregerson, L., & Witt, J. C. (2009). Effects of public feedback during response to intervention team meetings on teacher implementation integrity and student academic performance. Journal of School Psychology, 47, 19–37.
Elliott, S. N., & Gresham, F. M. (2008). SSIS intervention guide. Minneapolis: NC Pearson.
Epstein, M., Atkins, M., Cullinan, D., Kutash, K., & Weaver, R. (2008). Reducing behavior problems in the elementary school classroom: A practice guide (NCEE #2008–012). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. http://ies.ed.gov/ncee/wwc/publications/practiceguides. Accessed 23 Jan 2013.
Everett, S., Sugai, G., Fallon, L., Simonsen, B., & O’Keeffe, B. (2011). School-wide tier 2 interventions: Check-in/check-out getting started workbook. Center on Positive Behavioral Interventions and Supports Center for Behavioral Education and Research University of Connecticut.
Fairbanks, S., Sugai, G., Guardino, D., & Lathrop, M. (2007). Response to intervention: Examining classroom behavior support in second grade. Exceptional Children, 73(3), 288–310.
Fantuzzo, J. W., Rohrbeck, C. A., & Azar, S. T. (1987). A component analysis of behavioral self-management interventions with elementary school students. Child & Family Behavior Therapy, 9, 33–43.
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540.
Gage, N. A., Lewis, T. J., & Stichter, J. P. (2012). Functional behavioral assessment-based interventions for students with or at risk for emotional and/or behavioral disorders in school: A hierarchical linear modeling meta-analysis. Behavioral Disorders, 37(2), 55–77.
Gansle, K. A., & Noell, G. H. (2007). The fundamental role of intervention implementation in assessing response to intervention. In S. R. Jimerson, M. K. Burns, & A. M. VenDerHeyden. (Eds.) Handbook of response to intervention: The science and practice of assessment and intervention (pp. 244–251). New York: Springer.
Glover, T. A., & Albers, C. A. (2007). Considerations for evaluating universal screening assessments. Journal of School Psychology, 45(2), 117–135.
Goodman, R. (1997). The strengths and difficulties questionnaire: A research note. Journal of Child Psychology and Psychiatry and Allied Disciplines, 38, 581–586.
Goodman, R., Meltzer, H., & Bailey, V. (1998). The strengths and difficulties questionnaire: A pilot study on the validity of the self-report version. European Child and Adolescent Psychiatry, 7, 125–130.
Goodman, A., Lamping, D. L., & Ploubidis, G. B. (2010). When to use broader internalising and externalising subscales instead of the hypothesised five subscales on the Strengths and Difficulties Questionnaire (SDQ): Data from British parents, teachers and children. Journal of Abnormal Child Psychology, 38, 1179–1191.
Gordon, R. S. (1983). An operational classification of disease prevention. Public Health Reports, 98, 107–109.
Gresham, F. M., & Elliott, S. N. (1990). Social skills rating system. Circle Pines: AGS Publishing.
Gresham, F. M., Van, M. B., & Cook, C. R. (2006). Social skills training for teaching replacement behaviors: Remediating acquisition deficits in at-risk students. Behavioral Disorders, 31(4), 363–377.
Guskey, T. R. (2002). Professional development and teacher change. Teachers and Teaching: Theory and Practice, 8(3), 381–391.
Hagermoser Sanetti, L. M., & Kratochwill, T. R. (2009). Treatment integrity assessment in the schools: An evaluation of the treatment integrity planning protocol. School Psychology Quarterly, 24(1), 24–35.
Hansen, B. D., Wills, H. P., Kamps, D. M., & Greenwood, C. R. (2014). The effects of function-based self-management interventions on student behavior. Journal of Emotional and Behavioral Disorders, 22(3), 149–159.
Hawken, L. S. (2006). School psychologists as leaders in the implementation of a targeted intervention: The Behavior Education Program. School Psychology Quarterly, 21(1), 91.
Hawken, L. S., & Horner, R. H. (2003). Evaluation of a targeted intervention within a schoolwide system of behavior support. Journal of Behavioral Education, 12(3), 225–240.
Hawken, L. S., MacLeod, K. S., & Rawlings, L. (2007). Effects of the behavior education program (BEP) on office discipline referrals of elementary school students. Journal of Positive Behavior Interventions, 9(2), 94–101.
Hawken, L. S., O’Neil, R. E., & McLeod, K. S. (2011). An investigation of the impact of function of problem behavior on effectiveness of the behavior education program (BEP). Education and Treatment of Children, 34, 551–574. doi:10.1353/etc.2011.0031.
Heaviside, S., Rowand, C., Williams, C., & Farris, E. (1998). Violence and discipline problems in U.S. Public Schools: 1996–97 (NCES 98-030). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Henderson, J., & Strain, P. S. (2009). Screening for delays and problem behavior. (Roadmap to effective intervention practices). Tampa: University of South Florida.
Henggeler, S. W., Schoenwald, S. K., Borduin, C. M., Rowland, M. D., & Cunningham, P. B. (1998). Multisystemic treatment of antisocial behavior in children and adolescents. New York: Guilford Press.
Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-Wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6(1), 3–12.
Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005). School-wide positive behavior support. Individualized supports for students with problem behaviors: Designing positive behavior plans (pp. 359–390). New York: Guilford Press.
Horner, R. H., Sugai, G., Smolkowski, K., Eber, L., Nakasato, J., Todd, A. W., & Esperanza, J. (2009). A randomized, wait-list controlled effectiveness trial assessing school-wide positive behavior support in elementary schools. Journal of Positive Behavior Interventions, 11(3), 133–144.
Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 2–14.
Iwata, B. A., & DeLeon, I. G. (1996). The functional analysis screening tool (FAST). Gainesville: University of Florida, The Florida Center on Self-Injury.
Kamphaus, R. W., & Reynolds, C. R. (2007). Behavioral and emotional screening system. Manual. Minneapolis: Pearson.
Kamphaus, R. W., DiStefano, C., Dowdy, E., Eklund, K., & Dunn, A. R. (2010). Determining the presence of a problem: Comparing two approaches for detecting youth behavioral risk. School Psychology Review, 39(3), 395–407.
Kamps, D., Wills, H. P., Heitzman-Powell, L., Laylin, J., Szoke, C., Petrillo, T., & Culey, A. (2011). Class-wide function-related intervention teams: Effects of group contingency programs in urban classrooms. Journal of Positive Behavior Interventions, 13(3), 154–167.
Kauffman, J. M. (2005). How we prevent the prevention of emotional and behavioural difficulties in education. In P. Clough, P. Garner, J. T. Pardeck, & F. K. O. Yuen (Eds.), Handbook of emotional and behavioural difficulties (pp. 429–440). London: Sage.
Keller-Margulis, M. A. (2012). Fidelity of implementation framework: A critical need for response to intervention models. Psychology in the Schools, 49, 342–352.
Kern, L., & Clemens, N. H. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44(1), 65–75.
Kratochwill, T. R., Albers, C. A., & Steele Shernoff, E. (2004). School-based interventions. Child and Adolescent Psychiatric Clinics of North America, 13(4), 885–903.
Lane, K. L., & Menzies, H. M. (2010). Reading and writing interventions for students with and at risk for emotional and behavioral disorders: An introduction. Behavioral Disorders, 35(2), 82.
Lane, K. L., Wehby, J. H., Menzies, H. M., Gregg, R. M., Doukas, G. L., & Munton, S. M. (2002). Early literacy instruction for first-grade students at risk for antisocial behavior. Education and Treatment of Children, 25(4), 438–458.
Lane, K. L., Wehby, J. H., Menzies, H. M., Doukas, G. L., Munton, S. M., & Gregg, R. M. (2003). Social skills instruction for students at risk for antisocial behavior: The effects of small-group instruction. Behavioral Disorders, 28(3), 229–248.
Lane, K. L., Bocian, K. M., MacMillan, D. L., & Gresham, F. M. (2004). Treatment integrity: An essential-but often forgotten-component of school-based interventions. Preventing School Failure, 48, 36–43.
Little, M. A., Lane, K. L., Harris, K. R., Graham, S., Story, M., & Sandmel, K. (2010). Self-regulated strategies development for persuasive writing in tandem with schoolwide positive behavioral support: Effects for second-grade students with behavioral and writing difficulties. Behavioral Disorders, 35(2), 157.
Levitt, J. M., Saka, N., Hunter Romanelli, L., & Hoagwood, K. (2007). Early identification of mental health problems in schools: The status of instrumentation. Journal of School Psychology, 45(2), 163–191.
Lane, K. L., Kalberg, J. R., Bruhn, A. L., Mahoney, M. E., & Driscoll, S. A. (2008). Primary prevention programs at the elementary level: Issues of treatment integrity, systematic screening, and reinforcement. Education and Treatment of Children, 31(4), 465–494.
Lane, K. L., Bruhn, A. L., Crnobori, M., & Sewell, A. L. (2009). Designing functional assessment-based interventions using a systematic approach: A promising practice for supporting challenging behavior. In T. E. Scruggs & M. A. Mastropieri (Eds.), Policy and practice: Advances in learning and behavioral disabilities (Vol. 22). Bingley: Emerald.
Lewis, T. J., Jones, S. E., Horner, R. H., & Sugai, G. (2010). School-wide positive behavior support and students with emotional/behavioral disorders: Implications for prevention, identification and intervention. Exceptionality, 18(2), 82–93.
Lewis, T. J., & Sugai, G. (1999). Effective behavior support: A systems approach to proactive schoolwide management. Focus on Exceptional Children, 31(6), 1–24.
Liaupsin, C. J., Umbreit, J., Ferro, J. B., Urso, A., & Upreti, G. (2006). Improving academic engagement through systematic, function-based intervention. Education and Treatment of Children, 29(4), 573–591.
McCurdy, B. L., Kunsch, C., & Reibstein, S. (2007). Secondary prevention in the urban school: Implementing the Behavior Education Program. Preventing School Failure: Alternative Education for Children and Youth, 51(3), 12–19.
McIntosh, K., Horner, R. H., Chard, D. J., Dickey, C. R., & Braun, D. H. (2008). Reading skills and function of problem behavior in typical school settings. Journal of Special Education, 42, 131–147.
McIntosh, K., Campbell, A. L., Carter, D. R., & Dickey, C. R. (2009). Differential effects of a tier two behavior intervention based on function of problem behavior. Journal of Positive Behavior Interventions, 11(2), 82–93.
McIntosh, K., Reinke, W. M., & Herman, K. E. (2009a). School-wide analysis of data for social behavior problems: Assessing outcomes, selecting targets for intervention, and identifying need for support. In G. G. Peacock, R. A. Ervin, E. J. Daly, & K. W. Merrell (Eds.), The practical handbook of school psychology (pp. 135–156). New York: Guilford.
McIntosh, K., Campbell, A. L., Carter, D. R., & Zumbo, B. D. (2009b). Concurrent validity of office discipline referrals and cut points used in schoolwide positive behavior support. Behavioral Disorders, 34(2), 100–113.
McIntosh, K., Frank, J. L., & Spaulding, S. A. (2010). Establishing research-based trajectories of office discipline referrals for individual students. School Psychology Review, 39(3), 380.
McIntosh, K., Mercer, S. H., Hume, A. E., Frank, J. L., Turri, M. G., & Mathews, S. (2013). Factors related to sustained implementation of schoolwide positive behavior support. Exceptional Children, 79(3), 293–311.
Malloy, J. M., Sundar, V., Hagner, D., Pierias, L., & Viet, T. (2010). The efficacy of the RENEW model: Individualized school-to-career services for youth at risk of school dropout. Journal of At-Risk Issues, 15(2), 17–25.
March, R. E., & Horner, R. H. (2002). Feasibility and contributions of functional behavioral assessment in schools. Journal of Emotional and Behavioral Disorders, 10(3), 158–170.
March, R. E., Horner, R. H., Lewis-Palmer, T., Brown, D., Crone, D., Todd, A. W., & Carr, E. (2000). Functional Assessment Checklist for Teachers and Staff (FACTS). Eugene: Educational and Community Supports.
Marchant, M. R., Solano, B. R., Fisher, A. K., Caldarella, P., Young, K. R., & Renshaw, T. L. (2007). Modifying socially withdrawn behavior: A playground intervention for students with internalizing behaviors. Psychology in the Schools, 44(8), 779–794.
Mathews, S., McIntosh, K., Frank, J. L., & May, S. L. (2014). Critical features predicting sustained implementation of school-wide positive behavioral interventions and supports. Journal of Positive Behavior Interventions, 16(3), 168–178.
Mitchell, B. S., Stormont, M., & Gage, N. A. (2011). Tier two interventions implemented within the context of a tiered prevention framework. Behavioral Disorders, 36(4), 241–261.
Mong, M. D., Johnson, K. N., & Mong, K. W. (2011). Effects of check-In/check-out on behavioral indices and mathematics generalization. Behavioral Disorders, 36(4), 225–240.
Muscott, H. S., Pomerleau, T., & Szczesiul, S. (2009). Large-scale implementation of program-wide positive behavioral interventions and supports in early childhood education programs in New Hampshire. NHSA Dialog, 12(2), 148–169.
Nelson, J. R., Martella, R. M., & Marchand-Martella, N. (2002). Maximizing student learning: The effects of a comprehensive school-based program for preventing problem behaviors. Journal of Emotional and Behavioral Disorders, 10(3), 136–148.
Nelson, J. R., Hurley, K. D., Synhorst, L., Epstein, M. H., Stage, S., & Buckley, J. (2009). The child outcomes of a behavior model. Exceptional Children, 76(1), 7–30.
Ness, B. M., Sohlberg, M. M., & Albin, R. W. (2011). Evaluation of a second-tier classroom-based assignment completion strategy for middle school students in a resource context. Remedial and Special Education, 32(5), 406–416.
No Child Left Behind Act of 2001, Pub. L. No. 107–110, 115 Stat. 1425 (2002).
President’s Commission on Excellence in Special Education. (2002). A new era: Revitalizing special education for children and their families. Washington, DC: US Department of Education. www.ed.gov/inits/commissionsboards/whspecialeducation/index.html. Accessed 7 March 2013.
Reddy, L. A., Newman, E., De Thomas, C. A., & Chun, V. (2009). Effectiveness of school-based prevention and intervention programs for children and adolescents with emotional disturbance: A meta-analysis. Journal of School Psychology, 47(2), 77–99.
Riley-Tillman, T. C., Chafouleas, S. M., Sassu, K. A., Chanese, J. A. M., & Glazer, A. D. (2008). Examining the agreement of Direct Behavior Ratings and Systematic Direct Observation for on-task and disruptive behavior. Journal of Positive Behavior Interventions, 10, 136–143. doi:10.1177/1098300707312542.
Robertson, E. J., & Lane, K. L. (2007). Supporting middle school students with academic and behavioral concerns: A methodological illustration for conducting secondary interventions within three-tiered models of support. Behavioral Disorders, 33(1), 5–22.
Safran, S. P. (2006). Using the effective behavior supports survey to guide development of schoolwide positive behavior support. Journal of Positive Behavior Interventions, 8(1), 3–9.
Simmons, D. C., Kuykendall, K., King, K., Cornachione, C., & Kameenui, E. J. (2000). Implementation of a school-wide reading improvement model: “No one ever told us it would be this hard!”. Learning Disabilities Research & Practice, 15(2), 92–100.
Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., Sugai, G. (2008). Evidence-based practices in classroom management: Considerations for research to practice. Education and Treatment of Children, 31(3), 351–380.
Simonsen, B., Britton, L., & Young, D. (2010). School-wide positive behavior support in an alternative school setting: A case study. Journal of Positive Behavior Interventions, 12(3), 180–191.
Simonsen, B., Myers, D., & Briere, D. E. (2011). Comparing a behavioral check-in/check-out (CICO) intervention to standard practice in an urban middle school setting using an experimental group design. Journal of Positive Behavior Interventions, 13(1), 31–48.
Skinner, J. N., Veerkamp, M. B., Kamps, D. M., & Andra, P. R. (2009). Teacher and peer participation in functional analysis and intervention for a first grade student with attention deficit hyperactivity disorder. Education and Treatment of Children, 32(2), 243–266.
Sprague, J., Colvin, G., & Irvin, L. (2002). The school safety survey. The Institute on Violence and Destructive Behavior. University of Oregon College of Education.
Sugai, G., Horner, R. H., Dunlap, G., Hieneman, M., Lewis, T. J., Nelson, C. M., Scott, T., Liaupsin, C., Sailor, W., Turnbull, A. P., Turnbull, H. R., Wickham, D., Reuf, M., & Wilcox, B. (2000a). Applying positive behavioral support and functional behavioral assessment in schools. Journal of Positive Behavioral Interventions, 2, 131–143.
Sugai, G., Horner, R. H., & Todd, A. W. (2000b). Effective behavior support: Self-assessment survey. Eugene: University of Oregon.
Sugai, G., Horner, R. H., & Lewis-Palmer, T. (2009). The team implementation checklist (v. 3.0). Unpublished instrument. Eugene, Oregon: Educational and Community Supports, University of Oregon.
Sumi, W. C., Woodbridge, M. W., Javitz, H. S., Thornton, S. P., Wagner, M., Rouspil, K., Yu, J. W., Seeley, J. R., Walker, H. M., Golly, A., Small, J. W., Feil, E. G., & Severson, H. H. (2013). Assessing the effectiveness of first step to success: Are short-term results the first step to long-term behavioral improvements? Journal of Emotional and Behavioral Disorders, 21(1), 66–78.
Todd, A. W., Campbell, A. L., Meyer, G. G., & Horner, R. H. (2008). The effects of a targeted intervention to reduce problem behaviors: Elementary school implementation of check in—check out. Journal of Positive Behavior Interventions, 10(1), 46–55.
United States Department of Education Office of Special Education and Rehabilitative Services. (2002). A new era: Revitalizing special education for children and their families. Washington, DC: United States Department of Education Office of Special Education and Rehabilitative Services.
United States Public Health Service. (2000). Report of the surgeon general’s conference on children’s mental health: A national action agenda. Washington, DC: Department of Health and Human Services.
U.S. Department of Education. (2005). 27th annual report to Congress on the implementation of the Individuals with Disabilities Education Act, 2004. Washington, DC: Author.
US Department of Health and Human Services. (2001). Report of the Surgeon General’s conference on children’s mental health: A national action agenda. Washington, DC: Author.
Van Acker, R. (2004). Current status of public education and likely future directions for students with emotional and behavioral disorders. In L. M. Bullock & R. A. Gable (Eds.), Quality personnel preparation in emotional/behavioral disorders: Current perspectives and future directions. Denton: Institute for Behavioral and Learning Differences.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.
Wagner, M., Kutash, K., Duchnowski, A. J., Epstein, M. H., & Sumi, W. C. (2005). The children and youth we serve: A national picture of the characteristics of students with emotional disturbances receiving special education. Journal of Emotional and Behavioral Disorders, 13(2), 79–96.
Walker, B., Cheney, D., Stage, S., Blum, C., & Horner, R. H. (2005). Schoolwide screening and positive behavior supports: Identifying and supporting students at risk for school failure. Journal of Positive Behavior Interventions, 7(4), 194–204.
Walker, H. M., & Severson, H. H. (1992). Systematic Screening for Behavior Disorders (SSBD): User’s guide and administration manual. Longmont: Sopris West.
Walker, H. M., & Shinn, M. R. (2002). Structuring school-based interventions to achieve integrated primary, secondary, and tertiary prevention goals for safe and effective schools. In Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 1–25).
Walker, H. M., Kavanagh, K., Stiller, B., Golly, A., Severson, H. H., & Feil, E. G. (1998). First step to success: An early intervention approach for preventing school antisocial behavior. Journal of Emotional and Behavioral Disorders, 6(2), 66–80.
Walker, H. M., Ramsey, E., & Gresham, F. M. (2004). Antisocial behavior in school: Evidence-based practices (2nd ed.). Belmont: Wadsworth.
Wickstrom, K. F., Jones, K. M., LaFleur, L. H., & Witt, J. C. (1998). An analysis of treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 13(2), 141.
Wills, H., Greenwood, C. R., Kamps, D., Heitzman-Powell, L., & Selig, J. (2010). The combined effects of grade retention and targeted small-group intervention on students’ literacy outcomes. Reading & Writing Quarterly, 26(1), 4–25.
Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 4, pp. 251–288). Mahwah: Erlbaum.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214.
Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156–167.
© 2016 Springer Science+Business Media New York
Mitchell, B., Bruhn, A., Lewis, T. (2016). Essential Features of Tier 2 and 3 School-Wide Positive Behavioral Supports. In: Jimerson, S., Burns, M., VanDerHeyden, A. (eds) Handbook of Response to Intervention. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7568-3_31
Print ISBN: 978-1-4899-7567-6
Online ISBN: 978-1-4899-7568-3