Abstract
The current chapter describes the skill-by-treatment interaction (STI) framework for directing academic interventions, which uses preintervention data on the skill targeted for intervention to identify skill deficits and select interventions with the highest likelihood of success. Poor academic skills place children and youth at extraordinarily high risk for mental health problems during school and later in life, and strengthening academic skills may be among the strongest possible prevention activities for improving mental health. We summarize relevant research and outline specific guidelines for selecting reading and math interventions. The chapter concludes with case studies demonstrating STI in action.
Schools face an ongoing crisis of low academic proficiency: students graduated with lower reading and math skills in 2019 than in 2015, and only 37% were proficient in reading and 24% in math (National Center for Educational Statistics, 2019). School psychologists are trained to intervene with academic skills, given that all major professional standards address intervention in this area (Burns, 2019). However, there is not complete agreement on how best to do so.
Using preintervention measures of achievement to predict intervention effects has been called a skill-by-treatment interaction (STI; Burns et al., 2010) and has been used to identify interventions that were most likely to be successful for individual students. Interventions are developed from an STI paradigm based on student functioning within the skill rather than by assessing assumed underlying aptitudes (Burns et al., 2014). For example, a student with a deficit in reading decoding would respond better to an intervention that addresses decoding than a student for whom that skill is well developed but who struggles in a different aspect of reading such as comprehension. Within the skill of reading decoding, a student who is slow and inaccurate would likely benefit from an intervention that involves high modeling of the skill with immediate corrective feedback, but a student who is accurate and slow might require more practice opportunities with feedback. This chapter will discuss the theoretical underpinnings of STI, the clinical reasoning model for implementing STI, and the implications for prevention and assessment within school psychology. Finally, we will present case studies demonstrating an STI framework in action.
Guiding Framework/Theoretical Approach for Assessment, Prevention, and Intervention
The process of targeting interventions within STI involves identifying the most fundamental skill in which a student struggles. For example, a student who is low in reading fluency and comprehension, but who has acceptable phonemic awareness and reading decoding skills, would likely benefit from a reading fluency intervention because that would be the most fundamental skill in which the student experiences difficulty. In math, if a student struggles with the conceptual understanding of basic computation, then the intervention should focus on understanding the underlying concepts rather than practice completing the computation. We will discuss relevant reading and math development and will provide information about the instructional-level construct because that is a conceptual basis for much of an STI approach. Finally, we will conclude with a discussion of the learning hierarchy because it provides the conceptual framework for advanced decision-making within STI.
Reading Development
The National Reading Panel (2000) identified five critical reading skills that children need to acquire to become functionally independent readers: phonemic awareness (the ability to manipulate the spoken sounds in words), phonics (letter-sound correspondences), fluency (reading speed and accuracy), vocabulary (the lexicon of known words), and comprehension (deriving meaning from print). All five skills are essential features of the reading process and need to be measured systematically to drive information-based decision-making.
Phonological awareness interventions are defined as those that increase children’s awareness of sounds at the word or syllable level (e.g., distinguishing dag, dig, and dog). Phonemic awareness (PA) interventions target awareness of the individual sounds (i.e., phonemes) composing words (e.g., “cat” as /k/-/a/-/t/). Accordingly, PA is more specific to reading because reading often requires decoding words at the phoneme level. Phonics interventions teach associations between phonemes and orthography, thereby differing from PA interventions in that they directly incorporate letters or text. Fluency interventions target the ability to read with speed and accuracy (Therrien, 2004). Reading comprehension interventions provide “specific procedures that guide students to become aware of how well they are comprehending as they read” (National Reading Panel, 2000). Typical activities in reading comprehension interventions include identifying themes, inferential thinking, pictorial cues, prior knowledge, reflection, question generation, summarization, and story structure (Suggate, 2010).
Reading is conceptualized as the combination of all of the skills listed earlier. The Simple View of Reading (SVR) defines reading as the product of decoding and linguistic comprehension (Hoover & Gough, 1990). Simply comparing word reading to comprehension can help target interventions. For example, Vadasy and Sanders (2009) added a word-level intervention to repeated reading with 98 second- and third-grade struggling readers who had low fluency skills, which led to higher scores than just repeated reading alone on measures of letter-sound knowledge (d = 0.41), reading fluency (d = 0.37–0.38), and reading comprehension (d = 0.30–0.31). Interventionists can become more precise in their efforts by examining all of the areas described above. Among struggling readers, phonological decoding predicted word reading, and the rate and accuracy of word reading predicted comprehension (Berninger et al., 2006). Thus, interventions can be targeted according to phonemic awareness, decoding, reading fluency, or vocabulary and comprehension based on the most fundamental skill in which the student struggles, and doing so led to significantly more growth on measures of reading fluency and comprehension (η2 = .12 for second grade and η2 = .16 for third grade) than a control group that used a comprehensive intervention that addressed multiple aspects of reading (Burns et al., 2016).
Math Development
The National Mathematics Advisory Panel (NMAP, 2008) identified three fundamental clusters of skills that are essential to developing proficiency with algebra. Number sense refers to a wide range of abilities, from the capacity to immediately identify quantities to an understanding of the distributive property. Fractions refers to the segmentation of whole numbers represented by traditional fractions, decimals, and percentages, as well as the ability to apply basic arithmetic operations to them. Geometry and measurement involves the ability to calculate the perimeter and area of two- and three-dimensional shapes, as well as the slope of lines and the relationships among shapes.
The National Council of Teachers of Mathematics (NCTM, 2000) identified five components of math proficiency: (1) conceptual understanding, (2) procedural fluency, (3) adaptive reasoning, (4) strategic competence, and (5) productive disposition. These areas of math competence are interwoven and complement one another throughout skill development, but conceptual understanding and procedural fluency are often the first to develop (NCTM, 2000). Conceptual understanding is knowledge of the relations that underlie math problems, and procedural fluency is knowledge of the rules and steps needed to solve them (Hiebert & Lefevre, 1986). Although it is somewhat unclear which type of knowledge develops first, and the sequence may be specific to the domain or the individual (Rittle-Johnson et al., 2001), conceptual understanding may provide the basis for procedural fluency. For example, students with conceptual understanding should be able to apply the relevant concepts to solve familiar problems even if they lack procedural fluency (Burns, 2011).
There is less research on how to target math interventions than on how to target reading interventions. Given that students with math difficulties frequently struggle to quickly solve basic math facts (Burns, 2011; Geary et al., 2007), intervention may be more effective for some students if it focuses on procedural issues such as accurately recalling basic math facts or completing the steps within a problem. For other students, a conceptual intervention might hold the most promise because they lack understanding of the underlying concepts, and teaching the steps to solve the problem would not address that deficit. Burns (Burns, 2011; Burns et al., 2015) used measures of conceptual understanding, demonstrated in Fig. 5.1, to target interventions. Students who demonstrated low procedural fluency but acceptable conceptual understanding received a procedural fluency intervention, and those who were low in both received an intervention focused on conceptual understanding; both approaches led to large effects compared to interventions that did not target the student’s deficit.
Instructional Level
The term instructional level is probably one of the most used and misused terms in education today. We define an instructional level as the appropriate balance between task expectations and student performance, such that the student is challenged enough to learn new information while having enough background knowledge to complete the task (Gravois & Gickling, 2008). An instructional level is conceptually similar to Vygotsky’s (1978) Zone of Proximal Development, in which a student learns the most when taught information that requires some guidance from a skilled partner. Academic difficulties are viewed as the result of a mismatch between a student’s skill and the curriculum or instructional materials (Gravois & Gickling, 2008). A curriculum that is too difficult results in student frustration, and one that is not challenging enough results in student boredom. Material that balances new content and review so that optimal learning occurs represents an instructional level.
The term instructional level came from Betts’s (1946) famous observation that students can read books better if they can read about 95% of the words correctly. That anecdotal observation led to an entire industry of educational assessments used to assess the instructional level, but most of the data generated by published tools did not accurately represent an instructional level when independently evaluated (McCarthy & Christ, 2010; Parker et al., 2015). In 1977, Ed Gickling coined the phrase curriculum-based assessment (Coulter, 1988) to refer to systematic assessment of the “instructional needs of a student based upon the on-going performance within the existing course content in order to deliver instruction as effectively as possible” (Gickling et al., 1989, pp. 344–345). The term evolved to Curriculum-Based Assessment for Instructional Design (CBA-ID; Gickling & Havertape, 1981) to differentiate it from other curriculum-based approaches.
Essentially, an instructional level is determined with CBA-ID by having the student perform the skill with the materials used for instruction and recording the number and percentage of correct responses (e.g., words read correctly, correct letter sounds) to determine appropriately challenging material for intervention. For reading, the student reads from instructional material for 1 min while the assessor records the number of words read correctly and incorrectly, and the number read correctly is divided by the total number of words to find the percentage of words read correctly. For math, the student completes a single-skill math task (e.g., a probe of single-digit multiplication facts) for 2 min, and the number of digits correct per minute is computed. As shown in Table 5.1, an instructional level for reading is material in which the student can read 93–97% of the words correctly (Gickling & Thompson, 1985). Material in which the student reads more than 97% correctly is an independent level and is too easy, whereas less than 93% is a frustration level and is too difficult. An instructional level for math is 14–31 digits correct per minute for second and third graders and 24–49 digits correct per minute for fourth and fifth graders (Burns et al., 2006).
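The CBA-ID scoring rules above can be sketched as two short functions. The thresholds come from Table 5.1 (Gickling & Thompson, 1985; Burns et al., 2006); the function names and grade handling are illustrative assumptions, not part of any published protocol.

```python
def reading_level(words_correct: int, words_total: int) -> str:
    """Classify reading accuracy against CBA-ID criteria (93-97% = instructional)."""
    pct = 100 * words_correct / words_total
    if pct > 97:
        return "independent"    # too easy
    if pct >= 93:
        return "instructional"  # appropriately challenging
    return "frustration"        # too difficult


def math_level(digits_correct: int, minutes: float, grade: int) -> str:
    """Classify digits correct per minute against instructional-level criteria:
    14-31 dcpm for grades 2-3, 24-49 dcpm for grades 4-5 (Burns et al., 2006)."""
    dcpm = digits_correct / minutes
    low, high = (14, 31) if grade in (2, 3) else (24, 49)
    if dcpm > high:
        return "independent"
    if dcpm >= low:
        return "instructional"
    return "frustration"
```

For example, a second grader who reads 95 of 100 words correctly is at the instructional level, as is one who earns 40 digits correct on a 2-min probe (20 digits correct per minute).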
Decades of research have consistently supported the effects of teaching students with instructional-level material. Having struggling readers read passages in which they could read 93–97% of the words increased their time on task, reading comprehension, and reading fluency (Gickling & Armstrong, 1978; Parker et al., 2015; Treptow et al., 2007). Preteaching words to create an instructional level with difficult material has also increased reading and behavioral outcomes (Beck et al., 2009; Burns et al., 2011), and the correlation between reading growth and the number of times that preteaching created an instructional level among students identified with a learning disability in reading was an astounding r = .80 (Burns, 2007).
Learning Hierarchy
The learning hierarchy (also called the instructional hierarchy; Haring & Eaton, 1978) represents the dynamic boundary between instructional activity and student competence (Burns et al., 2006) and can be used to differentiate interventions for students with the most severe learning needs. It is an intervention heuristic that identifies interventions with a high likelihood of success by matching student skill to one of four phases of learning: (a) acquisition, (b) fluency, (c) generalization, and (d) adaptation (Haring & Eaton, 1978).
A student’s performance in the acquisition phase is characterized by low accuracy and consequent dysfluency. Appropriate interventions within this phase include high modeling and frequent cuing (VanDerHeyden & Burns, 2005). Thus, acquisition interventions are those in which students with little or no knowledge of the skill are initially taught or shown the relevant concepts or procedures. After acquiring the skill, the student enters the fluency phase, in which the student is accurate but still dysfluent, and corresponding interventions should build fluency through additional practice, multiple opportunities to respond, and contingent reinforcement. Fluency interventions are those in which students who can accurately complete the skill receive additional practice to become more proficient. Examples of fluency interventions in math include cover-copy-compare (Skinner et al., 1989), timed math fact trials, and incremental rehearsal (Burns, 2005). Once a student can exhibit the skill accurately and fluently, efforts can focus on the later phases of generalization and adaptation. Most academic deficits involve the first two phases, but students operating in the generalization or adaptation phases may require interventions such as guided application of fluent skills under novel conditions and using learned skills to solve more complex or different tasks. Research has consistently supported the positive effects of matching interventions to the phase of the learning hierarchy (Codding et al., 2011; Erion & Hardy, 2019; Szadokierski et al., 2017).
Clinical Reasoning Model for Assessment, Prevention, and Intervention
An STI approach to assessment and intervention follows a four-step process: (1) select skill-based assessments; (2) assess specific domains (Step 2.1) and, as needed, the phase of learning (Step 2.2); (3) select an intervention based on the identified skill deficit; and (4) monitor progress continuously at both the grade level (Step 4.1) and the instructional level (Step 4.2). The overarching premise of this approach is to appropriately select small-group (Tier 2) and individual (Tier 3) interventions based on brief preintervention measures of achievement. Unlike many approaches to data-based decision making, STI rarely uses norm-referenced standardized measures of achievement. STI is not itself an assessment, and norm-referenced measures of specific skills (e.g., word attack) can certainly inform intervention decisions, but data interpreted within an STI framework are usually collected with CBA-ID. This quick and widely accessible assessment allows teachers to pinpoint the skill with which a student is struggling and match it to the skills that an intervention targets, which in turn allows researchers to better predict the outcome of the interventions (Burns et al., 2010; Szadokierski et al., 2017).
Step 1: Select Skill-Based Assessments
What makes STI an effective approach to assessment and intervention is the use of single-skill mastery measurement (SMM) to identify both broad domains for remediation and whether students are still acquiring information or building fluency within specific skills. SMM uses quick probes of narrow domains of reading and math scored against predetermined criteria. For reading, assessments should target phonemic awareness, phonics, fluency, vocabulary, and comprehension (Table 5.2); math assessments (Table 5.3) cover a broader range of developmental skills. Ideally, these assessments are prescriptive in nature, as interventionists and teachers can use SMM to identify the specific skill domain that has not been mastered.
Step 2.1: Specific Domains—Tier 2
The primary problem-analysis question at Tier 2 is: what is the category of the problem? In other words, Tier 2 interventions should target one primary deficit area for each student, but the target is a broad domain such as phonemic awareness, phonics, or single-digit math computation. The first step in an STI framework is to examine the data to find the most fundamental skill in which the student struggles, and the intervention then targets that skill. In reading, the domains are assessed in the following sequence: (1) comprehension, (2) fluency, (3) decoding, and (4) phonemic awareness. The assessment sequence begins with a measure of comprehension (see Table 5.2). If the student demonstrates low comprehension, then the interventionist examines reading fluency in the manner displayed in Fig. 5.2. Once assessment data identify the most fundamental skill in which the student struggles, the intervention is selected to match that deficit.
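The drill-down logic just described can be summarized as a small decision function. This is a hedged sketch of the Fig. 5.2 sequence: each flag represents whether the screening data met the criterion for that domain, and the function name and boolean inputs are illustrative, not part of the STI materials.

```python
def reading_target(comprehension_ok: bool, fluency_ok: bool,
                   decoding_ok: bool, pa_ok: bool):
    """Return the most fundamental deficient reading skill, checking from
    the bottom of the skill sequence up (phonemic awareness is most
    fundamental, comprehension least)."""
    if not pa_ok:
        return "phonemic awareness"
    if not decoding_ok:
        return "decoding"
    if not fluency_ok:
        return "fluency"
    if not comprehension_ok:
        return "comprehension"
    return None  # no deficit identified; no Tier 2 target needed
```

For example, a student with low comprehension and low fluency but adequate decoding and phonemic awareness would receive a fluency intervention, because fluency is the most fundamental skill in which that student struggles.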
Math assessments at Tier 2 follow a framework similar to reading, but the domains are somewhat more specific. As shown in Table 5.4, an STI framework in math requires a known list of objectives that build on one another, and many math curricula provide a list similar to Table 5.4. Once a list of objectives is located or compiled, interventionists can create a series of short assessments for each; websites such as https://www.mathfactcafe.com/ can be used to create free assessment probes. Students are then given 2–3 min to complete each probe, and the data are converted to a digits-correct-per-minute metric and compared to the instructional-level criteria presented in Table 5.1. The lowest skill at which the student demonstrates an instructional level becomes the target for Tier 2 intervention. For example, if a student demonstrates independent-level performance (higher than 31 digits correct per minute) on the first five objectives in Table 5.4 (addition through 20, subtraction through 20, and fact families), but the assessment for two-digit addition without regrouping falls within the instructional level of 14–31 digits correct per minute, then the Tier 2 intervention target for the student would be two-digit addition without regrouping.
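The survey-level logic above can be sketched as follows. The 14–31 digits-correct-per-minute band is the second/third-grade criterion from Table 5.1; the objective names and scores are invented purely for illustration.

```python
# Hypothetical probe results: objectives in curricular order (easiest first),
# each paired with the digits correct per minute the student earned.
results = [
    ("addition through 20", 38),                    # independent (> 31 dcpm)
    ("subtraction through 20", 35),                 # independent
    ("fact families", 33),                          # independent
    ("two-digit addition without regrouping", 22),  # instructional (14-31 dcpm)
    ("two-digit subtraction without regrouping", 9),
]


def tier2_target(results, low=14, high=31):
    """Return the lowest objective at which the student scores in the
    instructional range; that objective becomes the Tier 2 target."""
    for objective, dcpm in results:
        if low <= dcpm <= high:
            return objective
    return None


print(tier2_target(results))  # -> two-digit addition without regrouping
```

Scanning in curricular order means the first objective that falls in the instructional range is also the lowest (most fundamental) one, matching the selection rule in the text.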
Step 2.2: Phase of Learning—Tier 3
Most students respond well to targeted Tier 2 interventions as described in the previous section (Burns et al., 2016). For those who do not, the learning hierarchy can be used to interpret additional data and intensify the intervention. The first question is: has the student acquired the information needed for this skill? If not, accuracy on the assessment will be low. The second question is: is the student proficient with the knowledge? If not, the student will be accurate but slow in completing the assessment. Tables 5.2 and 5.3 outline research-based criteria for matching assessment results to a student’s phase within the learning hierarchy.
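These two questions amount to a simple classification rule. In the sketch below, the 90% accuracy cutoff mirrors the criterion used in the case studies later in the chapter, and the fluency criterion would come from Tables 5.2 and 5.3; the function itself is an illustrative assumption, not a published scoring procedure.

```python
def learning_phase(accuracy_pct: float, rate: float,
                   fluency_criterion: float) -> str:
    """Place a student in the learning hierarchy (Haring & Eaton, 1978)
    from assessment data: low accuracy -> acquisition; accurate but
    slow -> fluency; accurate and fluent -> later phases."""
    if accuracy_pct < 90:          # has not yet acquired the skill
        return "acquisition"
    if rate < fluency_criterion:   # accurate but dysfluent
        return "fluency"
    return "generalization/adaptation"
```

A student at 75% accuracy would receive acquisition-phase supports (modeling, cues, errorless stimuli), whereas a student at 95% accuracy but a slow rate would receive fluency-building practice.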
Step 3: Select the Intervention
Based on information collected through SMM, teachers and interventionists can decide what specific intervention will target the student’s area of need. Tables 5.2 and 5.3 outline specific interventions that can be used at Tiers 2 and 3, but others can also be effective. Readers can find lists of potential interventions on several websites, including the Evidence-Based Intervention Network (https://ebi.missouri.edu/) and the National Center for Intensive Intervention (https://charts.intensiveintervention.org/aintervention). What matters most is that the intervention addresses what the student needs and is implemented with fidelity.
At Tier 3, the learning hierarchy is used to intensify the intervention. If a student is struggling in the acquisition phase of a particular domain of literacy or math, the learning stimuli need to become easier and instruction needs to focus intensely on the core concepts of the domain. This could involve incorporating visual cues, teaching less material per lesson, providing more modeling, or addressing a skill with which the student has demonstrated success. If a student is struggling in the fluency phase, then it is important to incorporate repeated exposure and practice within and across lessons, which can be accomplished with independent practice, timed drills, more frequent yet shorter lessons, and frequently asking students to recall what they have already learned.
Step 4: Continuous Progress Monitoring
Progress monitoring is the process of quantifying rates of improvement and adjusting instructional programs to make them effective and better suited to student needs (National Center on Intensive Intervention, n.d.), and it is critically important to any intervention model (Mellard et al., 2009; Shapiro, 2011; Stecker et al., 2008). An STI approach monitors progress in two ways. First, progress toward broad instructional goals is measured with a general outcome measure (GOM) such as a curriculum-based measure of reading fluency (CBM-R), which is a useful tool to monitor progress for general reading proficiency (Fuchs et al., 2001). However, targeting narrow skills for intervention might reduce the sensitivity of a GOM to show growth (Shapiro, 2011).
Progress monitoring in an STI framework relies on both GOM and SMM data to gauge intervention effects (Ball & Christ, 2012). Previous research has supported the psychometric adequacy of several early literacy measures as progress monitoring tools (McConnell & Wackerle-Hollman, 2016; Oslund et al., 2012), and the distinctions between growth demonstrated by GOM and SMM data are more pronounced for interventions that target more fundamental skills (e.g., phonemic awareness or early phonics patterns) (Van Norman et al., 2018). Growth for GOM data is interpreted with typical frameworks such as national norms or comparisons to the growth needed to obtain proficiency. Well-established frameworks are needed to evaluate growth with GOM data because those are the data used to allocate resources (e.g., move from Tier 2 to Tier 3). Interpretative schemes for SMM data are less well developed, but less specificity is needed because those data supplement GOM data to determine whether a student is making sufficient growth; SMM data are used to modify an intervention rather than to reallocate resources. For example, consider a student who receives a phonics intervention that focuses on early literacy skills (e.g., letter-sound correspondence). GOM data, such as CBM-R, might show a growth rate that is less than expected, but an SMM that examines phonics (e.g., nonsense-word or word-list fluency) could suggest that phonics skills are increasing even though they have not yet adequately affected GOM scores.
Perspectives and Approaches Relative to School Psychology Assessment
Assessment is fundamental to school psychology practice and is included in every published professional standard for the field (Burns, 2019), but STI fundamentally differs from typical approaches to school psychology assessment. School psychologists spend at least 50% of their time engaged in assessment activities to determine eligibility for special education services (Walcott et al., 2018). The Wechsler Intelligence Scale for Children, 5th Edition (WISC-V; Wechsler, 2014), was reportedly used by 80% of school psychologists who responded to a national survey (Benson et al., 2019). Conversely, only 29.3% of the respondents used CBM-R, and although small numbers of respondents reported use of specific skill measures such as early literacy (26.6%), early numeracy (22.8%), and math concepts and application (27.3%), CBA-ID was not included in the survey (Benson et al., 2019). Not including CBA-ID in the recent survey was surprising given that participants in the Shapiro et al. (2004) survey reported that they used CBA with 80% of the students with whom they worked, and 72% used a model that aligned with CBA-ID and STI.
STI is not an assessment tool but an approach to interpreting data. Floyd and Kranzler (2019) discuss how STI differs from more typical approaches to assessment in school psychology. First, STI targets specific skills rather than broad domains of achievement for both assessment and subsequent intervention. Second, STI assessments rarely rely on norm-referenced tools and instead compare student performance to research-based criteria for proficiency/mastery and phase of learning. Finally, the goal of assessment is to drive intervention rather than to classify students into “fixed educational structures” (p. 413). Thus, STI is a different approach to assessment than what is commonly used in school psychology practice, with a focus on determining interventions rather than identifying disabilities.
Perspectives and Approaches to Prevention and Intervention
As stated earlier, 80% of school psychologists reported using the WISC-V, and 95% of respondents reported using a measure of cognitive ability (Benson et al., 2019), which is surprising because only one special education disability (intellectual disability) requires an assessment of cognitive functioning, and special education eligibility remains the most common professional activity for school psychologists (Walcott et al., 2018). Why is there such a disparity between regulatory mandates and actual practice? School psychologists administer measures of cognitive functioning because they believe that doing so will lead to improved outcomes for students (Braden & Shaw, 2009). However, meta-analytic research has consistently shown that measures of cognitive ability do not predict student outcomes (Scholin & Burns, 2012; Stuebing et al., 2009) and have limited utility in identifying interventions for reading and math (Burns et al., 2016; Kearns & Fuchs, 2013; Stuebing et al., 2015). Even efforts to target specific cognitive areas such as working memory and executive function have not improved student learning (Jacob & Parkinson, 2015; Melby-Lervåg & Hulme, 2013).
Most school psychologists are trained in the aptitude × treatment interaction tradition, in which interventions have differential effects based on individual differences in various cognitive skills, despite the lack of an established causal link between measures of cognitive functioning and intervention outcomes (Floyd & Kranzler, 2019). School psychologists would better meet the needs of students if they adopted a prevention framework approach to practice that examined student difficulties through ecological systems theory (Burns, 2011), both of which are consistent with an STI approach to solving problems.
Prevention science is a method to identify and alter targets that will improve important outcomes for children (Herman et al., 2012). The goal of STI is to identify specific skill deficits that are linked to broader skills such as math and reading proficiency (Burns et al., 2010; Szadokierski et al., 2017). Research has consistently found that effective intervention efforts can prevent future learning difficulties (Lembke et al., 2010; VanDerHeyden et al., 2007), and interventions are more effective if they target specific skill deficits (Hall & Burns, 2018). Thus, identifying student deficits and using those data to target intervention efforts is consistent with prevention science and is likely to improve student outcomes.
Perhaps the biggest difference between STI and traditional school psychology is how student failures are interpreted. Ecological Systems Theory (EST) is the study of multiple interconnected environmental systems that influence individual development (Bronfenbrenner, 1977) and, along with prevention science, provides a theoretical foundation for STI. In an EST approach, “disturbance is not viewed as a disease located with the body of the child, but rather a discordance in the system” (p. 89), and dysfunctions occur when there is a mismatch between student skill and environmental demands (Apter & Conoley, 1984). Yet given that 95% of school psychologists use a measure of cognitive functioning within their evaluations (Benson et al., 2019), many practitioners appear to view skill deficits as dysfunctions located within the individual child.
Given that the goal of most school psychological assessments is to identify a disability (Floyd & Kranzler, 2019; Walcott et al., 2018), it is not surprising that the current system fosters student-dysfunction thinking. However, outcomes associated with systems that rely on disability labels do not result in academic or mental health improvement (Algraigray & Boyle, 2017; Kavale & Forness, 2000; Sullivan & Field, 2013). STI requires practitioners to examine student failure as a mismatch between skill and expectation in some important academic area, which is exactly the purpose of CBA-ID. For example, a student who reads less than 93% of the words correctly from the material used for reading instruction will experience a multitude of difficulties, and once that mismatch is corrected, academic and behavioral outcomes increase (Gickling & Armstrong, 1978; Burns & Parker, 2014; Treptow et al., 2007). Moreover, using STI to target interventions to match the specific student deficit increases outcomes in reading and math (Burns et al., 2010; Szadokierski et al., 2017).
Case Studies
STI is not a difficult process to implement but requires an in-depth understanding of the data. Next, we provide two examples of STI, one that was implemented at tier 2 and one at tier 3.
Tier 2
AJ was a second-grader who scored below benchmark standards on the STAR Reading test, which was the school’s universal screener for reading. His teacher assessed his instructional level with a commonly used informal reading inventory (IRI), which resulted in a reading level of E. He was placed into a reading group of students with similar reading levels to read books together.
AJ was not making sufficient progress after several weeks of instruction, so the school psychologist assessed his reading skills with CBA-ID using E-level books produced by the same publisher as the IRI. The percentage of words read correctly ranged from 78% to 88%, which represented a frustration level and suggested that E-level books were too difficult. His score on an oral reading fluency (ORF) assessment fell below the 20th percentile for his age group. Next, the school psychologist assessed AJ’s decoding skills with a list of low-frequency, highly decodable words, and he identified less than 90% of the sounds correctly. As a result, the school psychologist recommended that AJ be placed into a Tier 2 intervention that used Sound Partners (Vadasy, 2005) because he demonstrated low comprehension, fluency, and decoding, and decoding is the most fundamental of the three skills.
AJ’s decoding skills were monitored with weekly letter-sound assessments, and grade-level Aimsweb ORF served as a general outcome measure (GOM) to monitor his overall progress. He quickly met the 90% known criterion on three consecutive assessments, after which the focus switched to practicing the use of letter sounds to make words through various blending activities and connected text. His weekly GOM data also continued to increase at a rate that exceeded that of typical second-grade readers.
Tier 3
Lonnie was a kindergarten student who experienced significant difficulties learning basic letter sounds. He was participating in a Tier 2 intervention that focused on phonemic awareness while teaching basic letter sounds. His progress was monitored with letter-sound fluency, and the data did not suggest an upward trend. The school psychologist examined screening data: Lonnie scored above the proficiency criterion on measures of first-sound fluency and phoneme-segmentation fluency, which are measures of phonemic awareness. Thus, Lonnie demonstrated acceptable phonemic awareness but continued difficulty learning letter sounds. School personnel decided that his lack of growth indicated that Lonnie required a Tier 3 intervention and began teaching him letter-sound correspondences with Incremental Rehearsal (IR), a well-researched intervention (see https://charts.intensiveintervention.org/intervention for a description of the research and https://www.youtube.com/channel/UC0ad1ei6p_HOHHhc-T-JnZg/videos for video demonstrations of IR). However, Lonnie’s scores on letter-sound correspondence did not increase. The school psychologist then observed the intervention and saw that IR was implemented correctly, but at the end of the intervention session, Lonnie could not correctly provide the sound of the letter that he had just been successfully taught.
Lonnie’s data suggested that he was in the acquisition phase of learning (see Table 5.2) because he knew fewer than 90% of the letter-sound correspondences and did not successfully demonstrate the new skill immediately after being taught it. His good phonemic awareness made the continued difficulty puzzling, but it also confirmed that letter-sound correspondence was the correct intervention target. Students in the acquisition phase need intervention stimuli that are salient and teaching procedures that minimize errors. Thus, the school team decided to continue using IR but to use pictures as the known items and to add a picture cue to the unknowns. For example, the letter h was taught with a picture of a hammer with the letter h at the bottom of the card. Lonnie was asked to say “hammer, /h/, h” every time that he saw the card.
During the first intensified Tier 3 intervention, Lonnie was presented with the letter h and was asked what sound it made. He hesitated for a moment, looked at the interventionist, and said /h/. That was the first time in this young student’s educational experience that he stated the correct letter-sound correspondence after being taught the letter a few moments before. Lonnie quickly learned his letter sounds and moved on to more advanced decoding skills with similar approaches and reached proficiency on kindergarten screening measures by the end of the year.
Conclusion
In 1975, Maynard Reynolds warned that “In today’s context the measurement technologies ought to become integral parts of instruction designed to make a difference in the lives of children and not just a prediction about their lives” (p. 15). School psychologists have long sought measures to improve the lives of the children we serve, but improving measurement technology will not be as effective as improving the decisions made with the data. STI provides a framework for examining data that is grounded in sound theory, well researched, and practical. The information provided here may be useful to school psychologists interested in better supporting the academic skills of the students they serve, and researchers may find in it a conceptual framework to drive future studies. Additional research is needed, but given the number of students who struggle with reading and math, and the simplicity of the model presented here, that research seems warranted.
References
Algraigray, H., & Boyle, C. (2017). The SEN label and its effect on special education. Educational & Child Psychology, 34(4), 70–79.
Apter, S. J., & Conoley, J. C. (1984). Childhood behavior disorders and emotional disturbance: An introduction to teaching troubled children. Prentice-Hall.
Ball, C. R., & Christ, T. J. (2012). Supporting valid decision making: Uses and misuses of assessment data within the context of RTI. Psychology in the Schools, 49(3), 231–244. https://doi.org/10.1002/pits.21592
Beck, M., Burns, M. K., & Lau, M. (2009). The effect of preteaching reading skills on the on-task behavior of children identified with behavioral disorders. Behavioral Disorders, 34(2), 91–99. https://doi.org/10.1177/019874290903400203
Benson, N. F., Floyd, R. G., Kranzler, J. H., Eckert, T. L., Fefer, S. A., & Morgan, G. B. (2019). Test use and assessment practices of school psychologists in the United States: Findings from the 2017 National Survey. Journal of School Psychology, 72, 29–48. https://doi.org/10.1016/j.jsp.2018.12.004
Berninger, V. W., Abbott, R. D., Vermeulen, K., & Fulton, C. M. (2006). Paths to reading comprehension in at-risk second-grade readers. Journal of Learning Disabilities, 39(4), 334–351. https://doi.org/10.1177/00222194060390040701
Betts, E. A. (1946). Foundations of reading instruction, with emphasis on differentiated guidance. American Book Company.
Braden, J. P., & Shaw, S. R. (2009). Intervention validity of cognitive assessment: Knowns, unknowables, and unknowns. Assessment for Effective Intervention, 34(2), 106–115. https://doi.org/10.1177/1534508407313013
Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32(7), 513–531. https://doi.org/10.1037/0003-066X.32.7.513
Burns, M. K. (2005). Using incremental rehearsal to increase fluency of single-digit multiplication facts with children identified as learning disabled in mathematics computation. Education and Treatment of Children, 28, 237–249.
Burns, M. K. (2007). Reading at the instructional level with children identified as learning disabled: Potential implications for response-to-intervention. School Psychology Quarterly, 22(3), 297–313. https://doi.org/10.1037/1045-3830.22.3.297
Burns, M. K. (2011). Matching math interventions to students’ skill deficits: A preliminary investigation of a conceptual and procedural heuristic. Assessment for Effective Intervention, 36(4), 210–218. https://doi.org/10.1177/1534508411413255
Burns, M. K. (2019). Introduction to school psychology: Controversies and current practice. In M. K. Burns (Ed.), Introduction to school psychology: Controversies and current practice (pp. 1–14). Oxford University Press.
Burns, M. K., Codding, R. S., Boice, C. H., & Lukito, G. (2010). Meta-analysis of acquisition and fluency math interventions with instructional and frustration level skills: Evidence for a skill-by-treatment interaction. School Psychology Review, 39(1), 69–83. https://doi.org/10.1080/02796015.2010.12087791
Burns, M. K., Hodgson, J., Parker, D. C., & Fremont, K. (2011). Comparison of the effectiveness and efficiency of text previewing and preteaching keywords as small-group reading comprehension strategies with middle-school students. Literacy Research and Instruction, 50(3), 241–252. https://doi.org/10.1080/19388071.2010.519097
Burns, M. K., Maki, K. E., Karich, A. C., Hall, M., McComas, J. J., & Helman, L. (2016). Problem analysis at tier 2: Using data to find the category of the problem. In S. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention (pp. 293–307). Springer.
Burns, M. K., & Parker, D. C. (2014). Curriculum-based assessment for instructional design: Using data to individualize instruction. Guilford.
Burns, M. K., VanDerHeyden, A. M., & Jiban, C. L. (2006). Assessing the instructional level for mathematics: A comparison of methods. School Psychology Review, 35(3), 401–418. https://doi.org/10.1080/02796015.2006.12087975
Burns, M. K., VanDerHeyden, A. M., & Zaslofsky, A. F. (2014). Best practices in delivering intensive academic interventions with a skill-by-treatment interaction. In A. Thomas & P. Harrison (Eds.), Best practices in school psychology VI (pp. 129–142). National Association of School Psychologists.
Burns, M. K., Walick, C., Simonson, G. R., Dominguez, L., Harelstad, L., Kincaid, A., & Nelson, G. S. (2015). Using a conceptual understanding and procedural fluency heuristic to target math interventions with students in early elementary. Learning Disabilities Research & Practice, 30(2), 52–60.
Codding, R. S., Burns, M. K., & Lukito, G. (2011). Meta-analysis of mathematic basic-fact fluency interventions: A component analysis. Learning Disabilities Research & Practice, 26(1), 36–47. https://doi.org/10.1111/j.1540-5826.2010.00323.x
Coulter, W. A. (1988). Curriculum-based assessment: What’s in a name? Communiqué, 18(3), 13.
Erion, J., & Hardy, J. (2019). Parent tutoring, instructional hierarchy, and reading: A case study. Preventing School Failure: Alternative Education for Children and Youth, 63(4), 382–392.
Floyd, R. G., & Kranzler, J. H. (2019). Remediating student learning problems: Aptitude-by-treatment interaction versus skill-by-treatment interaction. In M. K. Burns (Ed.), Introduction to school psychology: Controversies and current practice (pp. 413–434). Oxford University Press.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239–256. https://doi.org/10.1207/S1532799XSSR0503_3
Geary, D. C., Hoard, M. K., Byrd-Craven, J., Nugent, L., & Numtee, C. (2007). Cognitive mechanisms underlying achievement deficits in children with mathematical learning disability. Child Development, 78(4), 1343–1359. https://doi.org/10.1111/j.1467-8624.2007.01069.x
Gickling, E. E., & Armstrong, D. L. (1978). Levels of instructional difficulty as related to on-task behavior, task completion, and comprehension. Journal of Learning Disabilities, 11(9), 559–566. https://doi.org/10.1177/002221947801100905
Gickling, E. E., & Havertape, S. (1981). Curriculum-based assessment (CBA). School Psychology Inservice Training Network.
Gickling, E. E., Shane, R. L., & Croskery, K. M. (1989). Developing math skills in low achieving high school students through curriculum-based assessment. School Psychology Review, 18, 344–356. https://doi.org/10.1080/02796015.1989.12085431
Gickling, E., & Thompson, V. (1985). A personal view of curriculum-based assessment. Exceptional Children, 52, 205–218. https://doi.org/10.1177/001440298505200302
Gravois, T. A., & Gickling, E. E. (2008). Best practices in instructional assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 503–518). National Association of School Psychologists.
Hall, M. S., & Burns, M. K. (2018). Meta-analysis of targeted small-group reading interventions. Journal of School Psychology, 66, 54–66. https://doi.org/10.1016/j.jsp.2017.11.002
Haring, N. G., & Eaton, M. D. (1978). Systematic instructional technology: An instructional hierarchy. In N. G. Haring, T. C. Lovitt, M. D. Eaton, & C. L. Hansen (Eds.), The fourth R: Research in the classroom (pp. 23–40). Merrill.
Herman, K. C., Riley-Tillman, T. C., & Reinke, W. M. (2012). The role of assessment in a prevention science framework. School Psychology Review, 41(3), 306–314. https://doi.org/10.1080/02796015.2012.12087511
Hiebert, J., & Lefevre, P. (1986). Conceptual and procedural knowledge for teaching on student achievement. In J. Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 1–27). Erlbaum.
Hoover, W. A., & Gough, P. B. (1990). The simple view of reading. Reading and Writing, 2(2), 127–160.
Jacob, R., & Parkinson, J. (2015). The potential for school-based interventions that target executive function to improve academic achievement: A review. Review of Educational Research, 85(4), 512–552. https://doi.org/10.3102/0034654314561338
Kavale, K. A., & Forness, S. R. (2000). Policy decisions in special education: The role of meta-analysis. In R. Gersten, E. P. Schiller, & S. R. Vaughn (Eds.), Contemporary special education research: Synthesis of the knowledge base on critical instructional issues (pp. 281–326). Routledge.
Kearns, D. M., & Fuchs, D. (2013). Does cognitively focused instruction improve the academic performance of low-achieving students? Exceptional Children, 79(3), 263–290. https://doi.org/10.1177/001440291307900200
Lembke, E. S., McMaster, K. L., & Stecker, P. M. (2010). The prevention science of reading research within a Response-to-Intervention model. Psychology in the Schools, 47(1), 22–35. https://doi.org/10.1002/pits.20449
McCarthy, A. M., & Christ, T. J. (2010). Test Review: Beaver, J. M., & Carter, M. A. (2006). The developmental reading assessment—second edition (DRA2). Upper Saddle River, NJ: Pearson. Assessment for Effective Intervention, 35(3), 182–185. https://doi.org/10.1177/1534508410363127
McConnell, S., & Wackerle-Hollman, A. (2016). Can we measure the transition to reading? General outcome measures and early literacy development from preschool to early elementary grades. AERA Open, 2, 1–15. https://doi.org/10.1177/2332858416653756
Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta-analytic review. Developmental Psychology, 49(2), 270–291. https://doi.org/10.1037/a0028228
Mellard, D. F., McKnight, M., & Woods, K. (2009). Response to intervention screening and progress-monitoring practices in 41 local schools. Learning Disabilities Research & Practice, 24(4), 186–195. https://doi.org/10.1111/j.1540-5826.2009.00292.x
National Assessment of Educational Progress. (2019). NAEP report card: 2019.
National Center on Intensive Intervention (n.d.). Progress monitoring. Available online at https://intensiveintervention.org/data-based-individualization/progress-monitoring
National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics.
National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. U.S. Department of Education.
National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health.
Oslund, E. L., Hagan-Burke, S., Taylor, A. B., Simmons, D. C., Simmons, L., Kwok, O., et al. (2012). Predicting kindergarteners’ response to early reading intervention: An examination of progress-monitoring measures. Reading Psychology, 33, 78–103. https://doi.org/10.1080/02702711.2012.630611
Parker, D. C., Zaslofsky, A. F., Burns, M. K., Kanive, R., Hodgson, J., Scholin, S. E., & Klingbeil, D. A. (2015). A brief report of the diagnostic accuracy of oral reading fluency and reading inventory levels for reading failure risk among second-and third-grade students. Reading & Writing Quarterly, 31(1), 56–67. https://doi.org/10.1080/10573569.2013.857970
Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362. https://doi.org/10.1037/0022-0663.93.2.346
Scholin, S., & Burns, M. K. (2012). Relationship between pre-intervention data and post-intervention reading fluency and growth: A meta-analysis of assessment data for individual students. Psychology in the Schools, 49, 385–398. https://doi.org/10.1002/pits.21599
Shapiro, E. S. (2011). Academic skills problems: Direct assessment and intervention. Guilford.
Shapiro, E. S., Angello, L. M., & Eckert, T. L. (2004). Has curriculum-based assessment become a staple of school psychology practice? An update and extension of knowledge, use, and attitudes from 1990 to 2000. School Psychology Review, 33(2), 249–257. https://doi.org/10.1080/02796015.2004.12086246
Skinner, C. H., Beatty, K. L., Turco, T. L., & Rasavage, C. (1989). Cover, copy, and compare: A method for increasing multiplication performance. School Psychology Review, 18(3), 412–420. https://doi.org/10.1080/02796015.1989.12085436
Stecker, P. M., Fuchs, D., & Fuchs, L. S. (2008). Progress monitoring as essential practice within response to intervention. Rural Special Education Quarterly, 27(4), 10–17.
Stuebing, K. K., Barth, A. E., Molfese, P. J., Weiss, B., & Fletcher, J. M. (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional Children, 76(1), 31–51. https://doi.org/10.1177/001440290907600102
Stuebing, K. K., Barth, A. E., Trahan, L. H., Reddy, R. R., Miciak, J., & Fletcher, J. M. (2015). Are child cognitive characteristics strong predictors of responses to intervention? A meta-analysis. Review of Educational Research, 85(3), 395–429. https://doi.org/10.3102/0034654314555996
Suggate, S. P. (2010). Why what we teach depends on when: Grade and reading intervention modality moderate effect size. Developmental Psychology, 46(6), 1556–1579. https://doi.org/10.1037/a0020612
Sullivan, A. L., & Field, S. (2013). Do preschool special education services make a difference in kindergarten reading and mathematics skills? A propensity score weighting analysis. Journal of School Psychology, 51(2), 243–260. https://doi.org/10.1016/j.jsp.2012.12.004
Szadokierski, I., Burns, M. K., & McComas, J. J. (2017). Predicting intervention effectiveness from reading accuracy and rate measures through the instructional hierarchy: Evidence for a skill-by-treatment interaction. School Psychology Review, 46(2), 190–200. https://doi.org/10.17105/SPR-2017-0013.V46-2
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading: A meta-analysis. Remedial and Special Education, 25(4), 252–261. https://doi.org/10.1177/07419325040250040801
Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration, instructional, and independent levels: The effects on students’ reading comprehension and time on task. School Psychology Review, 36(1), 159–166. https://doi.org/10.1080/02796015.2007.12087958
Vadasy, P. F. (2005). Sound partners: A supplementary one-to-one tutoring program in phonics-based early reading skills. Voyager Sopris Learning.
Vadasy, P. F., & Sanders, E. A. (2009). Supplemental fluency intervention and determinants of reading outcomes. Scientific Studies of Reading, 13(5), 383–425. https://doi.org/10.1080/10888430903162894
Van Norman, E. R., Maki, K. E., Burns, M. K., McComas, J. J., & Helman, L. (2018). Comparison of progress monitoring data from general outcome measures and specific subskill mastery measures for reading. Journal of School Psychology, 67, 179–189. https://doi.org/10.1016/j.jsp.2018.02.002
VanDerHeyden, A. M., Broussard, C., & Burns, M. K. (2019). Classification agreement for gated screening in mathematics: Subskill mastery measurement and classwide intervention. Assessment for Effective Intervention, 46, 1534508419882484.
VanDerHeyden, A. M., & Burns, M. K. (2005). Using curriculum-based assessment and curriculum-based measurement to guide elementary mathematics instruction: Effect on individual and group accountability scores. Assessment for Effective Intervention, 30(3), 15–31. https://doi.org/10.1177/073724770503000302
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45(2), 225–256. https://doi.org/10.1016/j.jsp.2006.11.004
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Walcott, C. M., Hyson, D., McNamara, K., & Charvat, J. L. (2018). Results from the NASP 2015 membership survey, part one: Demographics and employment conditions. NASP Research Reports, 3(1), 1–17.
Wechsler, D. (2014). Wechsler intelligence scale for children (5th ed.). NCS Pearson.
© 2022 Springer Nature Switzerland AG
Burns, M.K., Duesenberg, M.D., Romero, M.E. (2022). Skill-by-Treatment Interaction: Increasing the Likelihood for Success in Reading and Math. In: Andrews, J.J., Shaw, S.R., Domene, J.F., McMorris, C. (eds) Mental Health Assessment, Prevention, and Intervention. The Springer Series on Human Exceptionality. Springer, Cham. https://doi.org/10.1007/978-3-030-97208-0_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-97207-3
Online ISBN: 978-3-030-97208-0
eBook Packages: Behavioral Science and Psychology (R0)