What makes an intervention implemented at tier 2 of a response-to-intervention (RTI) model more intensive than instruction received within tier 1? It could be that a tier 2 intervention occurs in small groups whereas tier 1 core instruction is geared toward the entire classroom. However, small-group instruction has become a staple of core teaching practices (Allington 2011), and it is not unusual for a small number of students who are receiving the same or similar tier 3 interventions to be grouped together for efficiency. Perhaps it is not the size of the group that matters; rather, it is the amount of problem analysis needed to identify the appropriate instructional practice that differentiates a tier 2 intervention from tier 1 instruction, and a tier 3 intervention from one delivered at tier 2. Problem analysis is the process by which targeted interventions are identified so that the intervention is directly linked to the nature of the academic need and therefore has a high likelihood of being successful (Tilly 2008). Thus, problem analysis is central to the RTI process and should occur at all three tiers (Christ et al. 2005).

Burns and Gibbons (2012) suggest that the RTI process essentially answers three problem analysis questions: (a) tier 1—Is there a whole-class problem? (b) tier 2—What is the category of the problem? and (c) tier 3—What is the environmental variable that is most closely related to the problem? These three questions are answered with data and are used to identify appropriate interventions. The questions also represent a continuum of intensity because the question for tier 1 (whole-class problem) is answered with group scores (e.g., the median score for a classroom) from universal screening measures, whereas additional data are needed to answer the questions for tiers 2 and 3. The purpose of the current chapter is to describe the problem analysis process for tier 2 in reading and mathematics. First, a review of relevant research is provided, followed by data that support the effectiveness of analyzing problems within tier 2; the chapter concludes with directions for future research.

Research Regarding the Need to Target Tier 2 Interventions

An effective tier 2 is critical to the success of any RTI model. Of course, without quality core instruction, intervention efforts are not likely to be successful, which makes tier 1 the most important component of a school’s RTI implementation model. However, an effective tier 2 intervention can support a relatively large number of students and can prevent students from needing even more intensive services (i.e., tier 3). Moreover, a review of research found only low to moderate support for many of the main components of an RTI model, but strong empirical support for providing systematic interventions that target foundational skills in reading (Gersten et al. 2008). Such interventions are delivered in small groups to students who score below the benchmark standard on universal screening measures.

Interventions implemented for tier 2 often rely on standardized protocols or commercially prepared intervention packages that tend to be comprehensive in nature (i.e., address multiple components of reading; Vaughn et al. 2008), but that has not always been the case. Some of the earliest RTI implementation efforts relied heavily on individualized interventions that were often derived through a problem-solving team process (e.g., Ikeda and Gustafson 2002; Lau et al. 2006; McNamara and Hollinger 2003). Schools tended to struggle to implement the problem-solving process (Burns et al. 2005b) and doing so consumed too many school resources. For example, consider an elementary school with 650 students. Research has found that approximately 20 % of students need support beyond effective core instruction (Burns et al. 2005a), which means that in the hypothetical school with 650 students, approximately 130 of them would need intervention beyond tier 1. If those interventions were developed with a problem-solving team approach, and the problem-solving team met on a weekly basis to discuss two students each time, then there would need to be 65 weeks in the school year to get to all of the students. The 65 weeks would not even include meetings at which the team discusses progress of students who were discussed at previous meetings. There simply is not enough time in the school year to rely on a problem-solving team to develop interventions at tier 2.
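A quick back-of-the-envelope sketch of this capacity problem, using only the figures given above (650 students, a 20 % intervention rate, and two students discussed per weekly meeting):

```python
# Capacity check for developing all tier 2 interventions through a
# problem-solving team, using the chapter's hypothetical figures.
enrollment = 650
intervention_rate = 0.20        # ~20 % of students need support beyond tier 1
students_per_meeting = 2        # students discussed at each weekly meeting

students_needing_tier2 = round(enrollment * intervention_rate)  # 130 students
weeks_needed = students_needing_tier2 / students_per_meeting    # 65 meetings

print(f"{students_needing_tier2} students -> {weeks_needed:.0f} weekly meetings")
# A roughly 36-week school year cannot accommodate 65 initial meetings,
# let alone follow-up reviews of students discussed at earlier meetings.
```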

Another reason that intervention packages and commercially prepared interventions are used for tier 2 is that they offer some assurance of a research base. There are numerous websites that rate how effective various intervention packages are at improving reading (e.g., http://www.intensiveintervention.org/chart/instructional-intervention-tools). Finally, implementation of a packaged or commercially prepared intervention is likely easier than implementation of one developed by school personnel, and such packages likely include materials to assess implementation integrity. However, research suggests that a comprehensive packaged intervention might not be the most effective approach for delivering tier 2 interventions. Below, the research regarding comprehensive packaged interventions is discussed.

Comprehensive Intervention Packages for Reading

As stated above, small-group interventions can be effective for increasing student skills in reading (Gersten et al. 2008). However, recent research that implemented a standardized small-group intervention addressing word recognition, vocabulary, fluency, and comprehension found only small effects for struggling readers ( d = 0.16; Vaughn et al. 2010). One potential reason that the effect size for the Vaughn et al. study was smaller than those reported in other research syntheses could be that the study was conducted with middle-school students, whereas most reading intervention research is conducted with elementary-aged students. However, meta-analytic research with adolescent struggling readers found an average effect size ( g = 0.95; Scammacca et al. 2007) that was much larger than the small effect noted by Vaughn et al. (2010).

Another reason the effect size was small for the Vaughn et al. (2010) study could be that the comprehensive intervention did not adequately target the student needs. Not all types of instruction will be equally beneficial for all students, and in order for students to effectively learn, instruction must be provided that matches their diverse needs (Al Otaiba and Fuchs 2006; Kamps and Greenwood 2005). Juel and Minden-Cupp (2000) found that first-grade students with weaker reading skills benefited more from instruction that was explicit and focused on decoding words, whereas students with higher reading skills benefited more from meaning-based instruction. Similarly, Connor et al. (2009) found that the amount and type of reading instruction necessary for students to achieve proficient reading skills differed for individual students.

Targeted Reading Interventions

A recent meta-analysis compared the effectiveness of comprehensive interventions (those that addressed multiple components of reading; g = 0.35) to targeted interventions (those that addressed one component of reading based on student need; g = 0.65) and found that the latter were more effective than the former (Hall and Burns 2014). Interventions in general were more effective if they targeted the student’s area of need (Burns et al. 2008), but how best to accomplish this for small-group interventions has not been well researched. Burns and colleagues (Burns and Gibbons 2012; VanDerHeyden and Burns 2010) proposed a model for tier 2 that uses a standardized approach, but that also targets interventions based on the National Reading Panel (NRP; National Institute of Child Health and Human Development 2000) areas: phonemic awareness, phonics, fluency, and vocabulary/comprehension. Phonological decoding predicted word reading, and the rate and accuracy of word reading predicted comprehension among students who struggled with reading (Berninger et al. 2006). Although the developmental progression of specific reading skills is not linear in nature, assessing how well a student is progressing through them can provide a useful heuristic for most students who experience difficulties. For example, if a student struggled with comprehension but demonstrated adequate reading fluency, then the intervention would focus on comprehension; however, if a student struggled with comprehension, fluency, and decoding, then decoding would be the intervention target because it is the most basic of the three. The model and supporting data are described in more detail below.

Targeted Interventions for Mathematics

Fluent computation is an important goal for mathematics (National Council of Teachers of Mathematics, NCTM 2000; National Mathematics Advisory Panel 2008) and could be a target for small-group interventions because students with difficulties in mathematics often struggle to quickly recall basic mathematics facts (Geary et al. 2007; Hanich et al. 2001). Moreover, students who are not proficient with more advanced mathematics problems often lack fluency in the basic skills embedded within them (Houchins et al. 2004). Students compute fluently when they solve mathematics problems by recalling the answer rather than performing the necessary mental algorithm, which is faster (Logan et al. 1996). For example, fluent computation occurs when a student can look at 5 × 6 = and quickly recall that the answer is 30 without counting by 5s or performing some other manual computation. Providing additional practice with basic or component skills (e.g., single-digit multiplication) has consistently led to increased performance of the more advanced skills (Dehaene and Akhavein 1995; Singer-Dudek and Greer 2005). Thus, tier 2 interventions for mathematics tend to focus on building fluency with the basic skill.

Problem analysis for tier 2 in mathematics focuses on identifying the correct skill to target for fluency building, which is best accomplished with a series of single-skill survey assessments. The analysis begins by sequencing the skills or objectives within a mathematics curriculum (e.g., single-digit multiplication, then single-digit division, then multi-digit multiplication, etc.). Most mathematics curricula provide this sequence of objectives. Next, a series of single-skill curriculum-based assessments is created to represent each skill or objective. There are many free websites with which single-skill probes can be created, including www.mathfactcafe.com, www.aplusmath.com, and www.interventioncentral.com. After the probes are created, the sequence of the skills within the curriculum determines the order in which the probes are administered. Students are assessed for 2 or 4 min on each probe, depending on the skill being assessed.

After the data are collected, they are converted to digits correct per minute (dcpm) and compared to instructional-level criteria to find the highest skill from the survey assessments at which the student scores within an instructional level; intervention begins with that skill. Deno and Mirkin (1977) provide commonly used instructional-level criteria for mathematics, but those criteria were derived from experience in one school in Minnesota (S. L. Deno, personal communication, April 15, 2005) and were not based on research. Burns et al. (2006) empirically derived instructional-level criteria and found that 14–31 dcpm for second and third graders and 24–49 dcpm for fourth- and fifth-grade students represented an appropriate level of challenge for mathematics. Thus, single-skill assessments are administered, usually in reverse order (i.e., starting with the most advanced skill), until the student scores within the instructional-level range.
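A minimal sketch of this survey-level logic, using the Burns et al. (2006) instructional-level ranges; the skill names and probe scores below are hypothetical illustrations:

```python
# Survey-level assessment sketch: walk backward through the curriculum
# sequence and return the most advanced skill at which the student's score
# falls within the instructional-level range (Burns et al. 2006).

INSTRUCTIONAL_RANGE = {          # digits correct per minute (dcpm)
    "grades_2_3": (14, 31),
    "grades_4_5": (24, 49),
}

def intervention_target(probe_scores, grade_band):
    """probe_scores: list of (skill, dcpm) ordered from basic to advanced."""
    low, high = INSTRUCTIONAL_RANGE[grade_band]
    for skill, dcpm in reversed(probe_scores):    # most advanced skill first
        if low <= dcpm <= high:
            return skill                          # begin intervention here
    return probe_scores[0][0]                     # fall back to the most basic skill

# Hypothetical second grader's survey results:
scores = [("single-digit multiplication", 38),
          ("single-digit division", 27),
          ("multi-digit multiplication", 9)]
print(intervention_target(scores, "grades_2_3"))  # -> "single-digit division"
```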

Research has consistently demonstrated the effectiveness of using survey-level assessment to identify the mathematics objective that represented an instructional level for an individual student and intervening with that skill (Burns et al. 2010; Spicuzza et al. 2001; Ysseldyke et al. 2003). Moreover, previous research used flashcards with student dyads that were matched with instructional-level data in order to build fluency of basic mathematics skills, which resulted in significant gains in mathematics computation skills (VanDerHeyden and Burns 2005).

Effectiveness of Targeting Tier 2 Interventions for Reading

The current data were collected during the second year of a 3-year partnership (Path to Reading Excellence in School Sites; PRESS) among six urban schools, a research university, a statewide service organization, and a national corporation. PRESS is a comprehensive research-based approach to early literacy that was designed to prepare all students to read at grade level by the end of third grade.

Data presented below were taken from four of the six participating PRESS schools that were traditional public schools from one urban district in Minnesota. There were a total of 316 second-grade students and 315 third-grade students across the four elementary schools. The total sample consisted of 51.4 % females and 14 % white students (86 % from a minority background), and 80 % were eligible for the federal free or reduced lunch program. There were 10 students in second grade who received special education services for reading and 12 in third grade. Thus, the total number of students represented in the data below was 306 for second grade and 303 for third.

Measures

Benchmark: Oral Reading Fluency

All second- and third-grade students were assessed with oral reading fluency (ORF) measures from AIMSweb (Pearson 2008) in the fall, winter, and spring of the academic year as a universal screener. Data were recorded as the number of words read correctly (WRC) during each 1-min assessment. The ORF scores ranged from 0 to 181 ( M = 59.68, SD = 47.80) in the fall of second grade, 2 to 194 ( M = 97.72, SD = 44.64) in the spring of second grade, 0 to 232 ( M = 76.63, SD = 44.56) in the fall of third grade, and 7 to 271 ( M = 105.91, SD = 47.88) in the spring of third grade. Students who scored below the seasonal benchmark associated with the system (Pearson 2008) were identified as struggling readers and received a tier 2 intervention. In addition, ORF data were used to assess progress by computing a slope across the three benchmark assessments, regressing scores on weeks with ordinary least squares.

Benchmark Measures of Academic Progress

In addition to the ORF seasonal benchmark screener, students were assessed three times per year with the Measures of Academic Progress (MAP) reading assessment (Northwest Evaluation Association 2003). MAP Reading is a norm-referenced, computer-adaptive test that is designed to measure growth across a year or several years. The second- and third-grade MAP reading measure assesses several areas of reading, including word analysis, vocabulary, literal comprehension, interpretive comprehension, and literary response and analysis. The MAP assessment was therefore used as a global measure of reading comprehension.

Students completed the MAP on a computer in one 40–60-min session. MAP scores are reported in Rasch units (i.e., RIT scores), allowing for comparisons across grades. Students scoring at or below the 25th percentile were identified as struggling with comprehension. The scores ranged from 136 to 213 ( M = 169.28, SD = 17.82) in the fall of second grade, 142 to 223 ( M = 181.05, SD = 16.90) in the spring of second grade, 144 to 234 ( M = 183.51, SD = 18.02) in the fall of third grade, and 139 to 234 ( M = 190.49, SD = 19.12) in the spring of third grade. MAP benchmark scores were converted to a rate of growth using weeks within ordinary least squares.

Progress Monitoring: ORF

Although not presented here, student progress was monitored on a regular basis with ORF measures from the Formative Assessment System for Teachers (FAST; Christ et al. 2011). All students were assessed every other week with grade-level ORF measures. Data were converted to a slope using ordinary least squares to represent average growth per week. The slope estimates were compared to criterion slopes representing the rate of growth needed to reach seasonal benchmark standards; the criterion slopes were computed by applying ordinary least squares to the fall, winter, and spring benchmark criteria. For example, AIMSweb (Pearson 2008) criteria indicate that a student in second grade should read 50 WRC per minute in the fall, 80 in the winter, and 92 in the spring. Assuming that those data are collected in the 2nd, 18th, and 34th weeks of the year, the resulting criterion slope is 1.28 WRC per minute per week; students whose slope was at least 1.28 were considered to be making sufficient progress during the intervention. The third-grade ORF benchmark criteria were 70, 91, and 109, which resulted in a criterion slope of 1.22 WRC per minute per week. Third-grade students who demonstrated a slope of at least 1.22 were considered to be making sufficient progress during intervention.
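The slope computation itself is straightforward. A minimal sketch, assuming hypothetical biweekly progress-monitoring scores; only the second-grade criterion of 1.28 comes from the text:

```python
import numpy as np

def ols_slope(weeks, scores):
    """Ordinary least-squares slope: average WRC-per-minute growth per week."""
    return np.polyfit(weeks, scores, deg=1)[0]

# Hypothetical second grader: FAST ORF scores collected every other week.
weeks = [2, 4, 6, 8, 10, 12, 14]
scores = [48, 52, 53, 57, 61, 62, 66]   # WRC per minute (illustrative)

criterion = 1.28                        # second-grade criterion slope
slope = ols_slope(weeks, scores)
print(f"slope = {slope:.2f} WRC/min/week; sufficient: {slope >= criterion}")
# -> slope = 1.46 WRC/min/week; sufficient: True
```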

Diagnostic Assessment Process

The model articulated by Burns and colleagues (Burns and Gibbons 2012; VanDerHeyden and Burns 2010), in which tier 2 consists of standardized reading interventions targeted toward phonemic awareness, phonics, fluency, or vocabulary/comprehension, was used to select each student’s tier 2 intervention. The diagnostic model is presented in Fig. 1.

Fig. 1 Diagnostic assessment process used to target interventions

All students were screened for reading interventions with MAP and ORF. Students who scored below the 25th percentile on MAP were identified as struggling with comprehension, which was the first step in the diagnostic model. If a student demonstrated low comprehension but adequate reading fluency (an ORF score at or above the seasonal benchmark), then comprehension was the most fundamental skill in which the student struggled, and the student received a comprehension intervention. If the student’s ORF score was below the seasonal benchmark, then the student demonstrated a fluency deficit. However, in the latter scenario, the more fundamental skill of decoding was assessed first and had to be ruled out before fluency became the intervention target.

Decoding was screened by examining the accuracy with which students read the words during the ORF assessments. Reading accuracy is computed as the percentage of words read correctly (number of words read correctly/total number of words), which results in reliable data (Burns et al. 2000) that can be useful for instructional decision-making (Burns 2007; Hosp and Ardoin 2008; Treptow et al. 2007). The percentage of words read correctly is compared to the research-based criterion of 93 % or higher (Gickling and Armstrong 1978; Treptow et al. 2007). Thus, students who read less than 93 % of the words correctly are likely struggling to decode the text and would likely benefit from decoding instruction (Burns and Parker in press). There may be other measures of decoding that work well too, but examining accuracy data is likely sufficient to hypothesize the intervention target for most students. It may be beneficial to further screen the decoding skills of students who score close to 93 % correct with any one of a number of assessments, including a nonsense word fluency measure or a word-attack subtest from a standardized norm-referenced reading assessment.

Phonemic awareness was not routinely screened for students in second and third grade, but was screened for students in kindergarten and first grade with phoneme segmentation fluency from AIMSweb (Pearson 2008). Phonemic awareness is the knowledge that words are made of individual sounds and that those sounds can be manipulated to make new words, and it is usually well developed by second grade (National Institute of Child Health and Human Development 2000). However, the phonemic awareness of second- and third-grade students who were suspected of having a difficulty with this most fundamental of all reading skills was screened with the Quick Phonemic Awareness Assessment (QPAA; PRESS Research Team 2013a). The authors did not have a consistent decision rule to determine whether phonemic awareness should be assessed and relied on either teacher-generated data or reading accuracy scores that were quite low (e.g., less than 80 %). The QPAA is a 20-item assessment that examines rhyming, blending, segmenting, and initial sounds, with five items in each area. Students are asked to generate an answer to each item (e.g., “tell me a word that rhymes with [given word]” or “I am going to say some sounds; what word do you hear when you say those sounds fast?”). The test stops if the student responds incorrectly to two items within one area, and failure to complete the test suggests a phonemic awareness deficit. A student with low decoding skills but sufficient phonemic awareness received a decoding intervention, and those with low phonemic awareness participated in a phonemic awareness intervention.
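The decision logic just described (and depicted in Fig. 1) can be summarized in a short sketch. The function and its inputs are hypothetical illustrations; only the thresholds (25th percentile on MAP, the seasonal ORF benchmark, and 93 % accuracy) come from the text:

```python
# Sketch of the diagnostic decision tree in Fig. 1 (inputs are hypothetical).
def intervention_target(map_percentile, orf_wrc, orf_benchmark,
                        accuracy, passed_pa_screen=True):
    if map_percentile >= 25:
        return "no tier 2 intervention"    # comprehension adequate
    if orf_wrc >= orf_benchmark:
        return "comprehension"             # fluent reader who does not comprehend
    if accuracy >= 0.93:
        return "fluency"                   # accurate but slow word reading
    if passed_pa_screen:
        return "decoding"                  # inaccurate word reading
    return "phonemic awareness"            # most fundamental deficit

# Hypothetical second grader: 18th percentile on MAP, 42 WRC against a
# benchmark of 50, 89 % of words read correctly, adequate phonemic awareness.
print(intervention_target(18, 42, 50, 0.89))   # -> "decoding"
```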

Grade-Level Teams

Student screening and diagnostic data were interpreted by the schools’ grade-level teams (GLTs), each composed of all teachers who taught a particular grade level. The GLT process was based on the professional learning community model (DuFour et al. 2005; Hord 1997) because of its focus on student outcome data and on creating a culture of collaboration to enhance student learning (DuFour 2005). The GLTs met on a weekly basis for 60 or 90 min.

One GLT meeting each in September, January, and May focused on examining universal screening data and relied on an agenda and an analysis-to-action form built around the following questions: (a) Is there a whole-class problem? (b) Who needs a tier 2 intervention? (c) Among students needing a tier 2 intervention, what is the category of the problem for each? and (d) Are there any students for whom we should go immediately to a tier 3 intervention? It should be noted that very few students received a tier 3 intervention across the PRESS project, with fewer than 10 in second and third grade in each of the four schools.

The GLT meetings that examined screening data were facilitated by a data manager who was well trained in the screening and diagnostic process. Each school was supported by a literacy coach who was trained in the PRESS model and by one school psychology graduate student with advanced training in diagnostic and general assessments. The coach or graduate student facilitated the meeting by projecting student data and having the team answer the four questions outlined above using an analysis-to-action form. Each teacher completed an analysis form for his or her students, but the process was completed as a group.

Interventions

There were three conditions in which students participated. A total of 175 second- and third-grade students scored below the seasonal benchmark criterion and received an intervention in a small group from the PRESS program that targeted their reading deficit. An additional 69 students also fell below the benchmark criteria, but received a comprehensive intervention that was delivered by school personnel. Finally, there were 365 students in second and third grade who scored at or above seasonal benchmark criteria and received only core instruction with no additional interventions.

Targeted Intervention

Students in the first group were identified as needing a tier 2 intervention and received a PRESS intervention (PRESS Research Team 2013b) that targeted the most basic skill in which the student struggled. Interventions were delivered in a small group of two to five students for approximately 20 min per day for 4 days per week. The interventionists were graduate students in school psychology, special education, or curriculum and instruction. The interventions are succinctly described below, but further information and implementation protocols/checklists are available at http://www.cehd.umn.edu/reading/PRESS/resources/interventions.html.

Targeted Phonemic Awareness Interventions

Phonemic awareness interventions focused on the isolation and identification of sounds as well as sound manipulation in words. Eight isolation/identification interventions were used with students who were unable to identify individual phonemes in words. Isolation/identification interventions 1 and 2 required students to identify initial sounds in words, interventions 3 and 4 required students to identify final sounds in words, interventions 5 and 6 focused on identification of middle sounds in words, and the final two isolation/identification interventions required students to sort picture cards based on initial, middle, and final sounds.

In addition to the isolation/identification phonemic awareness interventions, five manipulation interventions were implemented with students who could isolate sounds in words but could not manipulate them. In manipulation intervention 1, students deleted initial and final sounds from words. In manipulation intervention 2, students substituted initial sounds in words. Manipulation intervention 3 required students to substitute final sounds, and manipulation intervention 4 required students to substitute medial sounds. The fifth manipulation intervention required students to substitute initial, final, and medial sounds.

Targeted Phonics Interventions

Six phonics interventions were implemented with students who had acquired phonemic awareness but who lacked sufficient decoding skills. In interventions 1 and 2, students were explicitly taught letter–sound correspondence by matching picture cards with letters. Intervention 3 focused on letter–sound correspondence through the use of Elkonin (1971) boxes: students placed magnetic letters in the boxes based on where each sound was heard in a word. The fourth intervention involved word-building activities in which students again used Elkonin boxes, manipulating one grapheme within a word to create a new word. The fifth phonics intervention focused on more advanced phonics skills, such as vowel patterns, through word writing and passage reading: students wrote words on whiteboards, focusing on a specific phonics skill, and then identified words with the targeted grapheme or letter combination while reading a passage. In the final phonics intervention, students analyzed words by sorting word cards into three categories based on graphemes or combinations of letters.

Targeted Fluency Interventions

Two fluency interventions were implemented with students who struggled with speed, accuracy, and/or expression in reading but had mastered phonemic awareness and phonics skills. In the supported cloze procedure (Rasinski 2003), students read an instructional-level passage in pairs with a peer or with a graduate research assistant, alternating every other word. Pairs read the passage three times total, for 1 min each time, and an interventionist provided assistance or error correction as needed.

Students needing additional support in rate and expression participated in the repeated reading intervention (Samuels 1979). Students read a passage three times, for 1 min each time, and the interventionist provided error correction as needed at the conclusion of each reading. After the second and third readings, the interventionist asked the student comprehension questions, including what the passage was mostly about and what the most important information in the passage was. After the three independent readings, the student and interventionist finished reading the story together.

Targeted Comprehension Intervention

The comprehension intervention was based on reciprocal teaching (Palincsar and Brown 1984), which was used to teach the following comprehension strategies: prediction, summarization, question generation, and clarification, along with activating prior knowledge. The interventionists used grade-level texts to teach these skills and had the students complete comprehension questions about the texts to determine whether the skills were being learned.

Comprehensive Intervention

Each of the schools also chose to implement a different, more comprehensive intervention for a portion of students through their school-based supplemental services (e.g., Title 1). The schools used the Fountas and Pinnell Leveled Literacy Intervention System (LLI; Fountas and Pinnell 2011). The LLI is a small-group, supplementary literacy intervention that focuses on comprehension and fluency, with decoding and phonemic awareness embedded throughout the lessons. Examples of strategies taught with LLI include reading aloud, writing, phonics/word study, attention to features of genre, attention to disciplinary reading, literature inquiry, writing about reading, and close reading. LLI was delivered by school personnel three to five times each week, with three to five students in each group. There was no consistently implemented decision rule for how the schools selected the students who received the LLI. For example, in one school students were selected because they were judged to have the strongest skills among those identified as needing support, whereas in another school they were selected based on eligibility for Title 1 and teacher nomination. A total of 38 second-grade students and 31 third-grade students received the comprehensive intervention.

Tier 1

Students who scored at or above the screening benchmark criteria received no supplemental intervention; they participated only in core classroom literacy instruction and served as a control group for this demonstration. A total of 188 second-grade and 177 third-grade students fell within this category.

Core Instruction

The four schools used a balanced literacy approach to reading instruction that included guided reading groups based on the Fountas and Pinnell (1996) reading program. All participating classrooms were observed multiple times throughout the school year with a 54-item observation protocol to assess the quality of the core instruction. The scale included 28 items that addressed the classroom environment (e.g., accessibility of writing tools and varied reading materials) and 26 items that addressed the instructional practice (e.g., explains purpose of the lesson, fosters discussion). Each item was rated on a 0–3 scale with a 3 indicating that the item was implemented at an expert level. The mean rating for the classrooms on the final observation, which occurred in May, was 1.82 for environment items and 2.01 for instructional practice. Thus, overall, the schools seemed to use effective instructional practice within the reading curriculum used for core instruction.

Intervention Integrity

The interventionists were trained in the implementation of the interventions and relevant assessments during one 3-h training at the beginning of the year. Each followed a protocol that included scripted administration procedures for each intervention. Before beginning interventions, all interventionists were assessed on their ability to implement each intervention accurately, and intervention sessions were observed on multiple occasions throughout the year to ensure fidelity. All observed interventions resulted in at least 90 % fidelity to the model. However, no fidelity data were available for the comprehensive (LLI) intervention.

Education graduate students working for the PRESS project collected all data. The ORF administration procedures were evaluated before the graduate students began collecting data. A second assessor recorded WRC, and all words that were consistently rated as correct or incorrect across both observers were counted as agreements. Inconsistent ratings were counted as disagreements. The total number of agreements was divided by the total number of words and multiplied by 100 to obtain interobserver agreement (IOA). All data collectors demonstrated at least 95 % IOA before they began collecting data.
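As a simple illustration of that agreement computation (the word-level judgments below are hypothetical):

```python
# Word-by-word interobserver agreement (IOA) for one ORF administration.
# Each entry is an observer's judgment of whether the word was read correctly.
observer_a = [True, True, False, True, True, True, False, True]
observer_b = [True, True, False, True, False, True, False, True]  # hypothetical

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
ioa = agreements / len(observer_a) * 100
print(f"IOA = {ioa:.1f} %")   # 7 of 8 words in agreement -> 87.5 %
```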

Analyses

Growth was evaluated with a multivariate analysis of covariance (MANCOVA) in which the ORF and MAP slopes across the three benchmark assessments served as the dependent variables. Both measures assess student reading skills, but ORF measures only fluency whereas MAP measures comprehension. The fall ORF benchmark score served as the covariate.
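A sketch of how such a MANCOVA could be specified in Python with statsmodels; the data file and column names are assumptions, not the project’s actual materials:

```python
# Hypothetical MANCOVA: ORF and MAP slopes as dependent variables,
# intervention condition as the factor, fall ORF score as the covariate.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("press_growth.csv")   # hypothetical file, one row per student

model = MANOVA.from_formula(
    "orf_slope + map_slope ~ C(condition) + fall_orf", data=df)
print(model.mv_test())                 # multivariate tests (e.g., Wilks' lambda)
```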

Growth was also compared to a criterion based on 1 year’s worth of growth. Second-grade students with a slope of at least 1.28 WRC per minute per week were identified as having made at least 1 year’s worth of growth, and third graders at or above 1.22 WRC per minute per week were identified as having made 1 year’s worth of growth.

Results

The project examined the differential effectiveness of targeted and comprehensive tier 2 reading interventions. Descriptive statistics regarding the mean ORF and MAP growth for students participating in targeted and comprehensive interventions, as well as students not receiving tier 2 interventions, are presented in Table 1. Second-grade students participating in targeted tier 2 interventions had a mean growth of 1.33 words read correctly per minute (WRCM) per week, whereas second-grade students not receiving a supplemental tier 2 intervention had a mean weekly growth of 1.25 WRCM, and students participating in comprehensive tier 2 interventions (i.e., interventions not directly targeted toward student skill need) had a mean weekly growth of 1.07 WRCM. Third-grade students participating in targeted tier 2 interventions had a mean weekly growth of 1.23 WRCM, compared to 1.03 WRCM for students not participating in tier 2 interventions and 0.94 WRCM for students receiving a comprehensive tier 2 intervention.

Table 1 Mean ORF and MAP growth in average increase per week for second- and third-grade students in targeted intervention, comprehensive intervention, and tier 1

A MANCOVA comparing student ORF growth and MAP growth was conducted to compare the progress of students receiving targeted and comprehensive tier 2 interventions. Second- and third-grade students receiving targeted tier 2 interventions made significantly greater growth than students receiving comprehensive interventions or no tier 2 intervention. Partial η² was used as an estimate of effect size and indicated a moderate to large effect for second grade (η² = 0.12) and a large effect for third grade (η² = 0.16).

The percentages of students making at least 1 year’s reading growth on the ORF and MAP measures are presented in Table 2. A total of 70 % of second-grade students in targeted interventions made at least 1 year of growth compared to 62 % of students receiving no tier 2 intervention and 55 % of students receiving a comprehensive intervention. In third grade, 73 % of students participating in the targeted tier 2 interventions made at least 1 year’s worth of growth compared to 70 % of students receiving no tier 2 intervention and 48 % of students receiving a comprehensive intervention.

Table 2 Percentage of students making 1 year’s growth on CBM-R and/or MAP

Implications for Practice and Research

Students participating in targeted tier 2 interventions made significantly more growth over one school year than students participating in comprehensive tier 2 interventions and students receiving no tier 2 intervention. Thus, targeting interventions directly to students’ area of need resulted in greater growth for both second- and third-grade students, which was consistent with previous research emphasizing the need to target intervention efforts (Burns et al. 2008).

The average rates of growth for the targeted interventions were higher than the criteria based on AIMSweb (Pearson 2008) benchmark standards, but the rate of growth for students not receiving intervention did not represent 1 year’s worth of growth. Thus, the students receiving targeted interventions narrowed the gap with their peers and potentially with grade-level benchmark standards as well. The lower average rate of growth for the comprehensive intervention suggested that those students could have fallen further behind their peers and standards. Further implications for practice are presented below and are summarized in Table 3.

Table 3 Summary of implications for practice

Implications for Practice

Tier 2 interventions are an important part of a comprehensive RTI framework (Marston 2005; Reschly 2008). RTI models are preventative, data-driven service delivery models that provide intervention to students struggling with academic skills. Moreover, RTI models utilize assessment to identify struggling readers and to provide explicit instruction in each student’s area of need (Fuchs and Fuchs 2006; Justice 2006). Thus, providing targeted reading interventions to struggling readers is an effective method for increasing students’ reading skills.

The term “targeted” might have different meanings depending on who is asked. Many schools refer to their tier 2 interventions as “targeted” because they are provided at the targeted level in the universal–targeted–indicated continuum (or tier 3 in the universal–selected–targeted continuum; Weissberg and Greenberg 1998) or because the students identified as below the benchmark have been “targeted” for intervention. However, the authors suggest that “targeted” should refer to more than just how many students receive the intervention and should describe the focus of intervention efforts. The NRP (National Institute of Child Health and Human Development 2000) categories were used to precisely target the area of need for students participating in targeted tier 2 interventions whereas the comprehensive intervention group all received the same intervention regardless of their area of need. Previous research has shown that, for struggling readers, code-based skills such as phonics and fluency are prerequisites to reading comprehension (Berninger et al. 2006). Thus, targeting intervention toward those code-based skills for struggling readers is necessary to ensure that students have the necessary foundational skills.

The model suggested by Burns and colleagues (Burns and Gibbons 2012; VanDerHeyden and Burns 2010) and used here within the PRESS model seems to provide an effective balance between individualizing and standardizing interventions. As stated above, many schools use standardized commercially prepared interventions because they are easy to obtain, are easy to implement, and have a research base. All of the interventions used in this study were implemented with high fidelity and utilized components or intervention approaches with a considerable research base, yet each was identified with a low-level analysis that mostly involved examining existing data. Most school personnel would agree that it would be ideal to individualize interventions for every student who needs support, but schools cannot do so for logistical reasons. The current model allowed for a broad categorical approach to individualizing interventions, which seemed to be more effective than using the same comprehensive intervention for every student.

These data suggest an effective intervention system, but the positive outcomes depend on several factors, such as high implementation integrity and adequate school attendance. Perhaps the most important aspect of the model was the presence of a data manager or coach to facilitate the meetings. The current data were taken from the 2nd year of the 3-year PRESS project. All of the teachers in the participating schools were instructed in screening, diagnostic decisions, and monitoring student progress during the 1st year. New staff were also trained during the 2nd year, and teachers were provided booster sessions regarding the uses of the assessment data. Moreover, the teachers were shown how to implement the interventions during the 2nd year because implementation was turned over to the teachers during the 3rd year of the project. Thus, the teachers were trained in using data to identify students who needed additional support and to determine what those students needed. However, they likely would have struggled to implement those skills three times each year without the support of a guide who was well trained in the process and without an effective data management system.

It is therefore wise to dedicate professional learning time to instructing staff in the use of data and to provide extended time from one person to facilitate these data meetings. The school psychologist may be ideally suited to facilitate the three meetings each year at which screening data are discussed, but it would be difficult for one person to attend all six meetings after each seasonal benchmark (assuming there is one GLT for every grade at an elementary school). Thus, it might be beneficial to have two people with an expert-level understanding of the process facilitate these meetings.

Another important consideration is the quality of the core instruction. The PRESS project involved assessing core instruction and using the data to provide coaching around literacy instruction. Thus, there was some assurance that aspects of quality core instruction occurred. However, there was certainly variability in the quality of the core instruction, which was problematic. The model used here targets reading interventions and focuses intervention efforts on one particular goal, an approach that is effective only if students also receive effective core instruction. In other words, reading can be broken apart and remediated only if it is being reassembled somewhere else with quality balanced instruction. Part of the allure of comprehensive interventions could be that they have the potential to fill holes in core instruction. Our suggestion would be to target the interventions and to provide effective balanced core instruction rather than using interventions to fill holes elsewhere.

Implications for Research

Previous research has shown that targeted reading interventions result in moderate reading growth ( g = 0.52; Piasta and Wagner 2010) and that targeted interventions outperform comprehensive small-group interventions ( g = 0.65 and 0.35, respectively; Hall and Burns 2014). However, those comparisons were made through meta-analytic research, and the current project was not experimental in nature. Thus, additional research is needed to determine whether it is more effective to target an intervention or to use a commercially prepared comprehensive reading intervention.

Although the findings from the PRESS project provide evidence that directly targeting a student’s area of need results in more growth than providing comprehensive interventions, the results should be interpreted within their limitations. First, students were not randomly assigned to the targeted, comprehensive, or no-intervention groups. This was not designed to be an experimental study; it was a demonstration and implementation project with supporting data. Thus, threats to internal validity, such as differences between participants across groups, cannot be excluded, and the influence of extraneous variables on the results cannot be ruled out. Second, implementation fidelity was not assessed for students participating in the comprehensive tier 2 intervention (LLI). Therefore, the smaller growth for students participating in the comprehensive intervention could reflect a potential lack of implementation fidelity.

Future Directions

Because the current study did not utilize random assignment, future research should replicate it with random assignment to groups. Such research would provide stronger evidence regarding the differential effectiveness of targeted and comprehensive tier 2 interventions. The results of the current study suggest that targeting tier 2 interventions to students’ areas of need is effective, but how best to implement such targeted interventions is not well established within the literature. Therefore, future research should examine how best to intervene in small groups.

There is also considerable research needed to determine the most effective approach to conducting a GLT meeting. Many schools use the professional learning community approach (DuFour et al. 2005) to teaming within and across grade levels, but the research supporting that approach is limited. Moreover, it is unknown how well teams actually implement the model that they reportedly use.

The diagnostic assessment model used by PRESS was supported by earlier research, but it has not been tested empirically beyond examining student outcomes. Stronger outcomes in comparison to a control group of students not needing intervention and to a group of students receiving a different intervention make an argument for the validity of the model (Kane 2013; Messick 1995), but additional research is needed. For example, the diagnostic accuracy of the assessment system could be evaluated by comparing its results to other measures of the same construct (e.g., does reading less than 93 % of the words correctly identify the same students as needing decoding help as a different measure of decoding does?).

Conclusion

The current chapter presented a model for targeting tier 2 interventions and data suggesting that doing so is effective. Problem analysis is, by definition, the process of finding interventions with a high likelihood of success (Tilly 2008), but it seems that most school personnel do not engage in this process before beginning tier 2 interventions. Considerable future research remains to be conducted, but the process outlined here was brief, did not require many additional resources, could be embedded entirely within an existing RTI framework, and improved student learning. The model used by the PRESS project is just one approach, and there may be many others; however, few if any others have been evaluated in comparison to an active intervention with such a large group of students. Thus, additional research is needed and seems warranted.