Introduction

Public schools are continually faced with the challenge of responding to and meeting the demands of an ever-evolving set of societal and global expectations (Friedman, 2005). Recent iterations of this responsiveness came in the wake of the No Child Left Behind Act (NCLB) and the reauthorization of the Individuals with Disabilities Education Improvement Act (IDEA) of 2004, which have converged to emphasize data-based accountability for student achievement in public schools. While many of the professionals working in schools have received training in some areas dealing with instruction, curriculum, behavior, assessment, evaluation, consultation, and data analysis, few have been trained in all of these areas. One profession that does require training in each of these areas is school psychology. The combination of school psychologists' training and skills and schools' evolving need to demonstrate improved student outcomes creates a context for school psychologists to be increasingly active and influential participants on instructional leadership teams. The goal of this chapter is to explore recent developments that are impacting professional practices in schools and to identify ways that school psychologists can use their skills and knowledge to support effective instruction.

Review of Literature

While all students have access to the public education system, data continually confirm that an achievement gap exists between groups of students (NAEP, 2007). For example, the National Assessment of Educational Progress (NAEP) results from 1992 to 2005 show that Caucasian and Asian students have consistently and significantly outperformed African American, Hispanic, and American Indian students in the areas of reading, math, and writing across grade levels. Similarly, students from high-income families have significantly higher scores than students from low-income families. As a result, many lawmakers and student advocates believe that some schools are failing to meet the educational needs of their students. In addition, many believe that the parents of students enrolled in failing schools should have choices to ensure that their children receive a high-quality education. This current state of the United States educational system was the impetus for the proposal and enactment of NCLB with its focus on data-based accountability for all students.

No Child Left Behind

The No Child Left Behind Act (2002) was devised to meet four primary goals: (1) make schools accountable for student achievement, (2) increase flexibility for schools to spend their money, (3) focus resources on proven educational methods, and (4) expand choices for parents if their local school is failing. Since its inception, NCLB has had its supporters and dissenters. The goal of this section is not to evaluate NCLB but to describe what this piece of legislation proposes and, consequently, the context (including both supports and challenges) within which schools presently operate.

Under NCLB, states are required to clearly define standards in reading and math in grades three to eight, define baseline levels of achievement, and set goals for achievement gains, known as adequate yearly progress (AYP). To answer questions about whether schools are meeting these goals, states specify standardized assessments of achievement to measure student performance in the form of state tests. The results of these state tests can then be used to provide feedback about the performance of schools with regard to both students' aggregate achievement levels and the achievement levels of the following subgroups: Caucasian, African American, American Indian, Asian/Pacific Islander, Latino, students with an individualized education program (IEP), students from low-income families (i.e., students receiving free and reduced lunch), and English language learners (ELL).
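
To make the subgroup reporting idea concrete, here is a minimal sketch (with invented student records, subgroup labels, and cut score) of how proficiency rates might be disaggregated; actual state reporting systems differ and are far more elaborate.

```python
# Hypothetical sketch: disaggregating state test results by NCLB subgroup.
# Student records, subgroup labels, and the cut score are illustrative only.
from collections import defaultdict

students = [
    {"id": 1, "subgroups": ["All", "Low income"], "scale_score": 215},
    {"id": 2, "subgroups": ["All", "ELL", "Latino"], "scale_score": 198},
    {"id": 3, "subgroups": ["All", "IEP"], "scale_score": 224},
    {"id": 4, "subgroups": ["All", "Caucasian"], "scale_score": 241},
]
PROFICIENT_CUT = 210  # assumed state cut score for "proficient"

counts = defaultdict(lambda: [0, 0])  # subgroup -> [n_proficient, n_total]
for s in students:
    for g in s["subgroups"]:
        counts[g][1] += 1
        counts[g][0] += s["scale_score"] >= PROFICIENT_CUT

for group, (n_prof, n_total) in sorted(counts.items()):
    print(f"{group}: {n_prof / n_total:.0%} proficient (n={n_total})")
```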

NCLB was created with the vision that schools should have increased flexibility over how to use funds to best meet their needs, allowing schools to use up to 50% of their non-Title 1 funds at their discretion. For example, school A may have a large ELL population, while school B has an 80% free and reduced lunch population. Given the difference in the needs of each school, it makes sense not to tie monies to pre-determined programs but to allow schools to direct funds toward high-priority areas in an effort to increase achievement as efficiently as possible.

NCLB also increased funding to support research-based approaches to improving student achievement. These federal funds are provided to individual states, which then determine how to distribute the resources. In many cases, this means that states channel these monies to schools with significant needs due to poor achievement or the presence of high-risk populations, such as ELL students. Many schools that are meeting their goals do not receive any additional money, as this funding often goes to schools that are not meeting state AYP trajectories. While NCLB purports to provide increased funding and flexibility to schools in need, some question the extent to which funding and support are actually reaching the schools that need assistance.

If a school, or any of the aforementioned subgroups within it, continually fails to meet AYP goals, a variety of options are open to parents of students attending the failing school. These include transferring their child to a higher-achieving school or using Title 1 funds to pay for supplemental services (e.g., outside tutoring, after-school programs, or summer school). NCLB also encouraged the development of charter schools, which may be operated by entities outside traditional K-12 public school systems (such as corporations and universities), as an avenue for innovative new practices.

In summary, policymakers and advocates have sent a clear message to schools that they expect increased achievement results for all students. To help schools accomplish these goals, NCLB has given states and schools increased flexibility to funnel funding to the areas they believe will result in the greatest overall achievement gains. NCLB also increased spending to support research-based academic programs. If schools fail to progressively increase achievement on state tests, repercussions can include intervention from the state, a loss of Title 1 funds to pay for supplementary services, or restructuring, in which governance of the school is taken over by the state or a private entity.

The Individuals with Disabilities Education Act 2004

NCLB was not the only piece of recent legislation to stress the importance of student achievement data. In 2006, the final regulations of the reauthorized Individuals with Disabilities Education Act were published. Two changes in IDEA that will certainly impact the roles and functions of school psychologists are the increased emphasis on eligibility determination based on students' response to intervention (RtI) and the increased amount of IDEA Part B funds available to support early intervention services. These changes encourage the prevention and remediation of student difficulties before special education eligibility determinations are made, and they decrease the importance of de-contextualized (i.e., away from the classroom) assessments using published standardized norm-referenced tests.

The sheer number of students receiving special education services in the learning disabled category underscores the importance of IDEA's emphasis on using an RtI approach to determining special education eligibility. In the past, nearly all state rules and regulations used some form of an IQ/achievement discrepancy to determine special education eligibility for students identified with learning disabilities. With the enactment of IDEA 2004, states can no longer require the use of the discrepancy model and must allow evaluation processes focusing on student RtI. This, combined with a focus on demonstrating that the student received scientifically based instruction in his or her area of need, necessitates the use of assessments explicitly tied to classroom instruction and the measurement of the student's RtI. Instead of assessing to determine within-child strengths and weaknesses, evaluations will primarily gather information about a student's learning rate as it relates to factors such as instruction (i.e., how to teach), curriculum (i.e., what to teach), and the environment (i.e., physical arrangements).

Prior to the reauthorization of IDEA in 2004, schools were allowed to use 5% of Part B funds to assist students not identified for special education but who demonstrated educational need in academic and/or behavioral areas. Uses of these funds might include staff professional development in scientifically based interventions or providing educational and behavioral evaluation, supports, and services to struggling students, as is done in student/teacher assistance teams. With IDEA 2004, schools can now use up to 15% of their Part B funds for these early intervention services. Increased funding for early intervention services, combined with an emphasis on using RtI as a major component in diagnosing learning disabilities, converges to focus school psychologists on collaborating with all school staff concerning instruction across the educational setting.

In the past decade, much discourse has taken place concerning the identification of learning disabilities. While past policy and practice have largely employed a discrepancy model for special education eligibility determination, both NCLB and IDEA emphasize the use of early intervention services coupled with an RtI approach to identify students with learning disabilities. Furthermore, these new provisions place the burden of proof on intervention-based student outcomes and should, consequently, focus discussion on the student-instruction-curriculum match rather than on the aptitudes of the student.

Recent legislation has significantly impacted the way in which schools are structuring themselves to meet the diverse needs of their students. Today's climate of high-stakes testing and data-based accountability places a premium on making sound decisions that result in the efficient use of school resources to increase student achievement, not only for at-risk populations but across the board. To support schools in meeting the lofty demands of NCLB and IDEA 2004, the roles and functions of school psychologists must evolve from determining eligibility and placement to identifying and preventing academic and behavioral difficulties before special education placements become necessary (Reschly & Ysseldyke, 2002). Major components of achieving these goals are (1) properly identifying problems, (2) having knowledge and skills in identifying, constructing, implementing, and evaluating interventions across the school setting, and (3) being able to effectively incorporate these interventions into school systems.

It is evident that schools are progressively moving toward a system in which data drive educational decisions. While many policy makers and community members are focusing on student outcome data, practitioners in the schools are well aware that a variety of factors, in addition to data-based decision making, must be in place if we are to see achievement gains. These include scientifically based instruction, systematic curricular alignment, age-appropriate social and interpersonal skills, and continual systemic improvement. Unfortunately, as popular trends continue to have an impact on practices in the schools, some school personnel may be compelled to follow the latest promises by publishers and/or organizations that focus on a particular method of teaching (e.g., whole language or constructivist approaches to mathematics).

School Psychologists' Role on Instructional Leadership Teams

Given the pressures of NCLB to increase achievement for all students, including those groups that traditionally struggle, administrators are actively seeking quality programs that are effective, efficient, and easy to implement. Unfortunately, one-size-fits-all approaches, or easy answers, are not available. Schools do, however, have a variety of data that they can use to determine which instructional methods result in increased levels and rates of student achievement.

As a practitioner working alongside school districts on school improvement initiatives, I observed that a variety of district-wide assessments (e.g., the state tests) were used to collect data on student achievement. However, it was often unclear whether schools linked data from these assessments to specific questions regarding instructional practices. The primary purpose of collecting such large amounts of data, usually from state tests, seemed to be reporting to the state on AYP goals. While these data could be used to identify groups not reaching AYP, and items could be analyzed in an attempt to hypothesize what skills were missing, the annual administration of these broad-band tests appeared to provide little data to meaningfully inform instruction. Some of the reasons it is difficult to glean this information from broad-band achievement tests include the lack of fluency-based items, the small number of items per skill, and the reliance on selection-type responses; for a more detailed critique, see Marston (1989).

These limitations help explain why school leadership teams may have difficulty using district assessment data to inform decisions about instruction and curriculum. They have general data that can identify a problem, but they often do not have sufficient data to determine why the problem is occurring, to directly link data to intervention strategies, or to efficiently monitor student progress. For example, a district's aggregated subgroup data may show that only 30% of the students receiving free and reduced lunch are meeting AYP in reading, but without the aid of supplemental and targeted assessments, the team could not say whether this is caused by phonemic awareness skill deficiencies, inaccurate decoding (i.e., phonics), a lack of fluency, low vocabulary, or an absence of comprehension strategies. Without this information, or the capacity of the system to provide other sources of data to address these issues, it is difficult for administrators to work with teachers to delineate what needs to be altered instructionally.

Given their training, school psychologists can be a valuable resource, using skills in assessment, instruction, program evaluation, and systems to help schools collect and analyze data that directly link to recommendations for instructional practices and materials targeting identified problems. One approach to assisting schools in accomplishing these goals is to raise the assessment questions inherent to the problem solving model. While schools are currently mandated to gather data that will, once a year, identify problems, these data do very little to efficiently answer questions about why the problem is occurring, what should be done to fix it, and, once a treatment is decided on, whether the treatment is working. The goal of the next section is to discuss ways in which school psychologists can increase their role in supporting schools in organizing and collecting systemic data and aligning these data with the purposes of the problem solving model.

Research to Practice

Deno (1989) emphasized the importance of the marriage of assessment and evaluation and how this can help practitioners answer questions focused on alterable variables in the interaction between the educational environment and student achievement. School psychologists have combined this ecological focus with the problem solving model to successfully make instructional decisions for individual students through RtI, as well as for systems, using a variety of methods (e.g., curriculum-based measurement (CBM) and positive behavioral supports (PBS)). The core ingredient of these approaches is the use of data with the problem solving model to drive decision making. The problem solving model consists of four parts: (1) problem identification; (2) problem analysis; (3) plan implementation; and (4) plan evaluation (Bergan, 1977; Heartland Area Education Agency, 2002). Each of these steps contextualizes a series of educationally relevant questions that need to be answered.

Step 1 – Problem Identification: What is the Problem?

Before problem solving can commence, data need to be identified or collected to set a standard and answer the question, "What is expected?" Without this, the problem cannot be identified, and a system emphasizing data-based accountability will falter. Screening may seem to some a fairly simple task of data collection in which everyone gets tested and their scores are compared, but there are some key elements that must be met. With screening, in order to compare "apples to apples," so to speak, the more standardized the testing materials (e.g., probes) and procedures (e.g., directions, timing), the better. This uniformity in how and what you test allows you to make reliable and valid rank-order comparisons between students in the testing group (Poncy, Skinner, & Axtell, 2005). For example, Shinn (1989) explains different procedures to use depending on whether the goal is to collect data for a district-wide, school-wide, or class-wide norm. Depending on the questions that will eventually be answered with these data, the team will need to decide how broadly to set up their screening assessments, especially in larger school districts that may have multiple elementary, middle, and high school buildings. These data will ultimately determine what types of problems it will be possible to identify and define.
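
As one concrete illustration of the norming step, the sketch below aggregates hypothetical oral reading fluency screening scores (words correct per minute) into a local norm and computes a rank-order comparison; the scores and the simple percentile-rank helper are assumptions for the example, not part of any published screening system.

```python
# Illustrative sketch (not from the chapter): turning fall screening scores
# into class-, school-, or district-wide percentile norms.
import statistics

def percentile_rank(scores, value):
    """Percent of scores at or below value: a simple rank-order comparison."""
    return 100.0 * sum(s <= value for s in scores) / len(scores)

# Hypothetical words-correct-per-minute screening scores for one grade level.
district_grade3 = [38, 46, 55, 61, 72, 80, 84, 91, 97, 104, 112, 118, 126, 133, 140]

print("District median (50th percentile):", statistics.median(district_grade3))
print("Percentile rank of a student reading 46 words correct per minute:",
      round(percentile_rank(district_grade3, 46)))
```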

Steps that accompany the problem definition stage include problem identification, problem definition, and problem validation (Deno, 1989; Heartland Area Education Agency, 2002). A majority of the data used to make these decisions come from the normative data of a standardized test (e.g., a state test or CBM). Traditionally, these data are collected once per year for state tests, while CBM data are usually collected three times over the course of a school year, in the fall, winter, and spring. These data can be presented in the form of a raw score, standard score, and/or percentile rank. They allow for the establishment of an expected criterion (e.g., the 50th percentile) from which to identify whether a problem exists, define the magnitude of the problem (e.g., the expected rate is 120 words correct per minute (wcpm), the student's performance is 46 wcpm, so the problem discrepancy is 74 wcpm), and validate where in the system the problem exists (i.e., at the district, school, classroom, supplemental, or individual student level).
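
A minimal sketch of the problem-definition arithmetic from the example above; the criterion and the student's score come from the text, while the small helper function is purely illustrative.

```python
# Problem magnitude as a discrepancy between expected and observed performance.
def problem_discrepancy(expected_wcpm, observed_wcpm):
    return expected_wcpm - observed_wcpm

expected = 120   # expected words correct per minute (wcpm), e.g., a benchmark
observed = 46    # the student's current performance
print(f"Discrepancy: {problem_discrepancy(expected, observed)} wcpm")  # 74 wcpm
```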

The importance of correctly identifying problems has been emphasized in the literature. Given the finite amount of resources available in schools, coupled with the intense pressure of documenting achievement increases across students, schools need to invest resources efficiently. While school psychologists have traditionally used normative achievement data with individual students to identify and define problems, these same data can also be used to identify and define systemic achievement issues. This is accomplished by looking at patterns in the district-level data to rule out problems that may not be at the child level. For example, in Poncy, Skinner, and O'Mara (2006), a third grade student was referred for low achievement in math; however, before delving into an individual evaluation, the school psychologist reviewed relevant sources of data to rule out the presence of systemic problems, namely low achievement across the classroom and/or grade level. In this case, the low achieving student was one of many, and a class-wide intervention was selected, implemented, and evaluated.

In the previous case, without the collection of a grade level standard, the practitioner might have implemented an individualized intervention due to the misidentification of the problem. School psychologists serving on instructional leadership teams need to discuss the importance of investigating district-wide data to understand what the problem is and, just as importantly, where it is occurring. Answering these questions can aid teams in preventing the inefficient misappropriation of instructional resources to misidentified problems. Depending on patterns in the data, the problem should be identified and defined at its highest level of occurrence (the highest level being the district and the lowest being the individual student). This will allow schools to appropriately match resources (e.g., teachers, money, and instructional time) to the most relevant problem. Once the problem is correctly identified and defined, the instructional team can turn its collective focus to the next question of the problem solving model: why is the problem occurring?
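
The sketch below shows one hypothetical way a team might formalize this "highest level of occurrence" check; the 120 wcpm criterion, the 50% rule, and all of the scores are assumptions chosen only to mirror the class-wide example above, not a validated decision rule.

```python
# Hedged sketch: deciding the broadest level at which a reading problem exists
# before planning an intervention. Cut points and data are hypothetical.
def percent_below(scores, criterion):
    return 100.0 * sum(s < criterion for s in scores) / len(scores)

def highest_problem_level(district, school, classroom, student_score,
                          criterion=120, group_cut=50.0):
    """Return the broadest unit in which a widespread problem appears."""
    if percent_below(district, criterion) >= group_cut:
        return "district-level problem"
    if percent_below(school, criterion) >= group_cut:
        return "school-level problem"
    if percent_below(classroom, criterion) >= group_cut:
        return "classroom-level problem"
    if student_score < criterion:
        return "individual student problem"
    return "no problem identified"

# Hypothetical data: the referred student is one of many low scorers in class.
district = [90, 100, 115, 125, 130, 140, 150, 160]
school = [95, 105, 118, 128, 135, 145, 150, 160]
classroom = [60, 70, 75, 82, 90, 95, 130, 140]
print(highest_problem_level(district, school, classroom, student_score=46))
# -> "classroom-level problem", suggesting a class-wide intervention
```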

Step 2 – Problem Analysis: Why is the Problem Occurring?

The assessment data collected during the problem analysis stage provide information to support inferences about why the identified problem is happening, with the goal of linking data to an intervention (Heartland Area Education Agency, 2002). This process consists of asking educationally relevant questions that focus on the interactions of alterable variables of the educational environment. Heartland recommends focusing on four primary areas: (1) instruction; (2) curriculum; (3) environment; and (4) learner. The instructional domain pertains to how behaviors are taught. The curricular domain focuses on what is taught and addresses the question of the match between what behaviors/skills are expected and the actual skills of the student(s). The environmental domain investigates the physical and affective components of schools and how these can be altered to increase the rate of student skill development (e.g., reinforcement and classroom management). Aspects of the learner include interests and preferences as well as constructs such as task perseverance and self-efficacy. These four domains encompass the crucial facets of the interaction between the student and the educational environment and serve as a tool to organize the alterable variables that impact student achievement.

Howell, Fox, and Morehead (1993) propose that the goal of this stage of problem solving is to link data to a testable hypothesis (e.g., Cale reads slowly but accurately; therefore, he needs repeated practice in instructional level material to build fluent decoding skills). To aid in the development of assessment questions, practitioners should break down skills and curricular objectives into their component parts (i.e., task analyze the target behavior and the student's corresponding skills and/or knowledge). This approach has traditionally been used with individual students but can also be applied to groups. For example, suppose a review of a district's CBM data indicates that 55% of the students are reading below the criterion of 120 wcpm; however, nearly all of these students are making only 1–2 errors. This would suggest that while students have learned appropriate decoding strategies to read grade level material, they need increased repetition with reading. If, on the other hand, 55% of the students were reading below the criterion of 120 wcpm and a majority of these students were making 5+ errors, then the team would likely need to investigate the ability of students to use phonics skills to decode unknown words. Based on the identified error patterns, the team would then explore empirically-validated instructional practices and materials, match these to the identified skill deficiencies (e.g., using re-reading strategies to build fluency), implement an intervention plan, and measure the impact of the treatment on student achievement.
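
The decision logic in this example can be sketched as a simple rule. In the sketch below, the 120 wcpm criterion and the 1–2 versus 5+ error patterns mirror the example in the text, while the student records and the exact cut-off of two errors are invented for illustration.

```python
# Illustrative hypothesis-generation rule for CBM reading data.
def reading_hypothesis(wcpm, errors, criterion=120):
    if wcpm >= criterion:
        return "meets criterion"
    if errors <= 2:
        return "accurate but slow: build fluency (e.g., repeated reading)"
    return "inaccurate: assess and teach decoding/phonics strategies"

# Hypothetical CBM records: (words correct per minute, errors)
students = [(85, 1), (95, 2), (70, 6), (125, 1), (102, 2), (64, 8)]
for wcpm, errors in students:
    print(f"{wcpm} wcpm, {errors} errors -> {reading_hypothesis(wcpm, errors)}")

below = [s for s in students if s[0] < 120]
mostly_accurate = sum(e <= 2 for _, e in below) / len(below)
print(f"{mostly_accurate:.0%} of below-criterion readers are accurate but slow")
```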

Using the Instructional Hierarchy to Aid Hypothesis Generation

This chapter has recommended that school psychologists work with instructional leadership teams to emphasize functional assessment and the alterable variables of the educational setting. Functional assessment seeks to provide an educationally relevant context that reveals patterns in student responding to the academic tasks expected in the classroom. Identifying these patterns (e.g., responding slowly and inaccurately, slowly but accurately, or quickly but only in a fixed context) can be helpful in suggesting instructional strategies to remediate the identified deficit (Howell et al., 1993). One approach that has been used to "functionally" match students' academic responding on classroom-based tasks to intervention selection is the Instructional Hierarchy (IH; Haring & Eaton, 1978).

Haring and Eaton (1978) described a learning hierarchy of skill development that included acquisition (the goal is to increase accuracy), fluency building (the goal is to increase speed and maintenance of accurate responding), and generalization/adaptation (the goal is to enhance discrimination and creative responding). Many traditional assessment procedures (e.g., teacher exams) and intervention procedures (e.g., demonstration) focus on accuracy. More recently, educators and researchers have also become focused on assessing (e.g., CBM, DIBELS, AIMSweb) and increasing fluency, or the speed of accurate responding (AIMSweb Progress Monitoring and Response to Intervention System, 2006; Daly, Chafouleas, & Skinner, 2005; Deno & Mirkin, 1977; Good & Kaminski, 2002). The final two stages are more difficult to define, as they require students to learn other conditions where their skills can be applied (generalization) and/or adapted (creativity). Generalization requires teachers to provide students with numerous opportunities to respond along with feedback about when specific skills can, and cannot, be applied to similar tasks. During this stage of learning, students are given assignments that require the application of multiple skills, thereby providing students with opportunities to learn to discriminate when skills can be applied and, just as importantly, when they cannot be applied. Adaptation is similar, but now the stimuli require students to adapt their skills or engage in novel responding. Both of these advanced and essential stages of skill development are most likely to be successful when students can apply their basic skills rapidly, resulting in numerous discrimination, generalization, and adaptation trials within a short period of time.

The goal of problem analysis is to go beyond knowing what the problem is. Specifically, the goal is to focus on understanding why the problem is occurring. Data about how a student, or students, respond to curriculum-based tasks can be an integral source of information for instructional teams to generate hypotheses about why skill deficiencies exist and what instructional procedures and materials could be implemented to remedy them. For example, suppose a significant group of students (perhaps 30%) are reading slowly and inaccurately. The team would hypothesize that fluency rates were not reached due to inaccurate decoding (i.e., students were spending time breaking down words). Using a combined approach of curriculum-based assessment (CBA), the IH, and the ICEL domains (instruction, curriculum, environment, and learner), the team would use data to hypothesize about what instructional strategies could be used to increase accurate decoding, common error patterns that would guide the selection of curricular objectives, adaptations that could be made to the instructional environment (e.g., increasing reinforcement), and what aspects of the learner(s) could be used to increase the relevance of and interest in instructional materials. While evidence-based programs are available in reading, math, and written expression, teachers and/or interventionists will need to know why and how to differentiate their instructional practices based on student needs. Below we discuss several strategies that have been shown to enhance learning across these levels of skill development. This system linking student responding to instructional strategies should be a valuable heuristic for instructional teams to conceptualize how teaching can be changed to match patterns in student responding.
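
As a rough illustration (not the authors' procedure), the sketch below maps hypothetical accuracy and rate data onto IH stages and example strategies; the 95% accuracy and 120 wcpm cut points are assumptions chosen only to make the heuristic concrete.

```python
# Heuristic sketch: matching patterns of responding to Instructional Hierarchy
# (IH) stages and the kinds of strategies discussed in the following paragraphs.
def ih_stage(accuracy_pct, rate_wcpm, accuracy_goal=95, rate_goal=120):
    if accuracy_pct < accuracy_goal:
        return ("acquisition",
                "modeling, demonstration, immediate corrective feedback")
    if rate_wcpm < rate_goal:
        return ("fluency", "repeated practice with reinforcement for rate")
    return ("generalization/adaptation",
            "varied tasks requiring discrimination and application of skills")

for accuracy, rate in [(88, 45), (97, 60), (98, 130)]:
    stage, strategy = ih_stage(accuracy, rate)
    print(f"{accuracy}% accurate at {rate} wcpm -> {stage}: {strategy}")
```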

When discussing teaching, we often focus on techniques that increase accuracy, such as demonstration, description, modeling, prompting, and immediate corrective feedback (Haring & Eaton, 1978). The teaching behaviors, or events, that occur prior to the student responding (e.g., modeling, demonstration, description) are critical and necessary in that they allow someone who cannot respond accurately to begin responding accurately. However, it is equally important that students receive feedback concerning the accuracy of their responses, as this may reinforce the future use of the response. Providing immediate corrective feedback following errors can prevent students from practicing these errors while ensuring that their last response is accurate (Skinner & Smith, 1992). Finally, errors can be analyzed and used to guide teaching (e.g., a student who subtracts the smallest numeral from the largest in all subtraction problems will need additional demonstrations, descriptions, and models that focus on the skills and concepts of regrouping; Skinner & Schock, 1995).

Skills differ from knowledge in that both speed and accuracy of responding are important (Skinner, 1998). For example, cognitive limitation theories suggest that students who read slowly and accurately will struggle to comprehend what is read because their cognitive resources (e.g., working memory) are being used to decode words (Breznitz, 1987; Daneman & Carpenter, 1980; LaBerge & Samuels, 1974; Perfetti, 1977; Rasinski, 2004; Stanovich, 1986). These theories have been supported by numerous correlational findings showing a relationship between rates of accurate responding and achievement (e.g., Marston, 1989). Additionally, behavioral research suggests that those who respond accurately and slowly will be required to expend more effort and receive poorer reinforcement (e.g., lower rate, less immediate) for their responding (Billington, Skinner, & Cruchon, 2004; Binder, 1996). These variables make it less likely that slow but accurate responders will choose to engage in academic behaviors that require these resources (Skinner, Pappas, & Davis, 2005). These theories have been supported by numerous studies that show a relationship between rates of responding and choosing to engage in assigned academic behaviors (see Skinner, 2002).

Given that demonstration, modeling, and practice with immediate feedback can enhance student accuracy, research has examined whether demonstrations of rapid and accurate responding can enhance the speed of accurate responding (i.e., fluency). Unfortunately, these results suggest that such antecedent procedures are not efficient at increasing fluency (Skinner, Logan, Robinson, & Robinson, 1997). However, researchers investigating opportunities to respond (Greenwood, Delquadri, & Hall, 1984), academic learning time (Berliner, 1984), and learning trial rates (Skinner, Belfiore, Mace, Williams, & Johns, 1997) have applied Ebbinghaus' (1885) independent variable of repeated practice and found that an emphasis on repetition enhances not only the maintenance of accurate responding but also the speed of accurate responding. This suggests that after students have acquired a degree of accurate responding, educators should reduce the time they spend teaching (e.g., demonstrating and describing) and allot time to activities that allow students to engage in high rates of active, accurate, academic responding (AAA responding). Procedures that have enhanced response rates include altering the topography of the response (Skinner et al., 1997), reinforcing more rapid responding (Skinner, Bamberg, Smith, & Powell, 1993), using explicit timing procedures (Rhymer, Skinner, Henington, D'Reaux, & Sims, 1998), and the rapid presentation of new stimuli (Skinner, Fletcher, & Henington, 1996).

Educators can develop and assign work designed to occasion high rates of AAA responding (e.g., homework, worksheets); however, these assignments will only be effective if students choose to engage in those activities. Students who must expend a lot of time and effort to respond accurately are less likely to choose to engage in these assignments. Therefore, the students who most need the additional trials to enhance their fluency and maintenance may be the least likely to choose to engage in assigned academic tasks. These students may need stronger or additional reinforcement for engaging in these activities than other students (Skinner et al., 2005). Interdependent group-oriented contingencies (Popkin & Skinner, 2003; Sharp & Skinner, 2004), altering long assignments into multiple brief assignments (Wallace, Cox, & Skinner, 2003), and interspersing additional brief problems (McCurdy, Skinner, Grantham, Watson, & Hindman, 2001; Skinner, 2002; Skinner, Hurst, Teeple, & Meadows, 2002) have all been shown to be effective procedures for enhancing the probability that students will choose to engage in assigned work by increasing the strength and/or rate of reinforcement for choosing to engage in that work (Skinner, Wallace, & Neddenriep, 2002).

While there is a significant amount of research about the instructional strategies that accompany the acquisition and fluency building stages of the IH, information concerning the generalization/adaptation stages is less abundant. These stages emphasize the application of skills across settings and problem types. An example of this would be a student using his or her fluent computation skills to solve a multi-step story problem. Educators can assist by demonstrating, describing, and modeling strategies and behaviors that enhance discrimination and generalization as well as modeling adaptation attempts. Thus, both acquisition strategies (e.g., demonstration, modeling) and fluency building strategies (high rates of AAA responding) may be needed to enhance skill development during these final two stages. Another way to think about this is to acknowledge that many "problem solving" activities require procedural (i.e., how-to) scripts. While the accurate and fluent use of skills is necessary, this is not sufficient to ensure the generalization and/or adaptation of skills. Therefore, teachers may need to use acquisition building strategies (e.g., modeling and demonstration) to build procedural knowledge and provide ample opportunities for students to use a combination of skills and procedures (e.g., problem solving) to apply skills to novel contexts.

The IH offers a heuristic to link patterns of academic responding to instructional strategies. While it may be easy for instructional teams to become overwhelmed by district-wide problems, the IH can be used to match interventions (e.g., repeated practice and increased reinforcement to build fluency in basic math facts) to patterns of student responding (e.g., students compute basic subtraction facts slowly but accurately). In doing so, the IH gives the instructional leadership team direction in answering the question, why is the problem occurring? The answer, in turn, provides the foundation for the third stage of the problem solving model: what should be done to fix the problem?

Step 3 – Plan Implementation: What Should Be Done to Fix the Problem?

Once problems have been identified, defined, and analyzed, the team is ready to link the assessment data to an intervention plan that will be implemented and evaluated. Teams can take several approaches to arriving at an intervention. Treatments can be created and constructed, published programs can be selected, or current programs can be altered. Each of these approaches has strengths and weaknesses. For example, if the team creates and constructs the intervention, it should be tightly matched to the identified skill deficiencies, but it will also be costly in terms of staff time. Also, this task becomes more difficult the larger and more heterogeneous the group becomes. Published programs can be selected that generally reflect a skill need (e.g., SRA Reading Mastery for beginning phonics) but may still re-teach previously learned skills and be somewhat inefficient. Also, teachers may not be familiar with the program and will need training as well as time to learn how to deliver it. Adapting generic instructional strategies would require the least amount of change to classroom routines but will also be susceptible to poor intervention integrity. Furthermore, differentiating learning needs with broad curricular objectives for heterogeneous groups poses a variety of challenges.

Earlier in the chapter we discussed the intense pressure schools face to increase achievement. While some schools qualify for extra funding through Reading First and other funded programs, a majority of schools are carefully distributing a limited amount of resources across their district. This places a premium on decisions that not only increase achievement, but increase it efficiently. The question every instructional leadership team should ask is, "How can we get the most achievement growth, given our resources?" The problem identification process should allow the team to identify the highest level of the problem (i.e., district, building, classroom, supplemental, or student level). The problem analysis stage should have helped the team collect data to understand why the problem is occurring and use patterns of student responding to drive instructional decisions. Together, these two stages lay the groundwork for confident educational decisions. However, before deciding on an intervention plan, teams will want to discuss a variety of issues to ensure the efficiency of interventions. First, it is important not to spend instructional time teaching students items they have mastered or items that are too advanced. Teaching without a proper instructional match will stifle learning rates and waste valuable instructional time. In line with the argument of knowing how much learning you gain per instructional minute (Poncy, Skinner, & Jaspers, 2007), schools should also investigate student-to-teacher ratios. It makes little sense to teach a group of three students if you can get similar achievement results teaching a group of six. It is simply not enough to do an intervention; instructional teams need to take into account student learning rates in relation to resource expenditures. In other words, we need to get the most for our buck.
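
A back-of-the-envelope sketch of this efficiency reasoning, with entirely invented numbers: if per-student gains hold up reasonably well in a somewhat larger group, the larger group yields more total learning per teacher-minute invested.

```python
# Hypothetical "learning per resource" comparison; numbers are not real data.
def gain_per_teacher_minute(wcpm_gain, minutes_of_instruction, group_size):
    # Total wcpm gained across the group divided by teacher minutes spent.
    return wcpm_gain * group_size / minutes_of_instruction

small_group = gain_per_teacher_minute(wcpm_gain=10, minutes_of_instruction=300, group_size=3)
larger_group = gain_per_teacher_minute(wcpm_gain=9, minutes_of_instruction=300, group_size=6)
print(f"Group of 3: {small_group:.2f} total wcpm gained per teacher-minute")
print(f"Group of 6: {larger_group:.2f} total wcpm gained per teacher-minute")
```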

While these questions may appear overwhelming, educational professionals have continually conducted research to empirically validate specific interventions that increase student learning. In education and other fields, researchers have proposed various criteria that allow one to classify a treatment as empirically validated (Chambless & Hollon, 1998; Drake, Latimer, Leff, McHugo, & Burns, 2004; Kazdin, 2004; Kratochwill & Shernoff, 2004; McCabe, 2004). Although these criteria are not identical, they are designed to meet the same goal. Empirically-validated interventions are those that have gone through a process that increases both researchers' and practitioners' confidence that the intervention altered student behavior (e.g., enhanced learning rates). Evidence that a strategy, procedure, or intervention has worked in a particular instance suggests that it may also be effective in preventing or remedying the presenting problem with other students (i.e., it worked before, so it should work with other students). This common, and foundational, purpose of empirically validating interventions requires researchers to establish the validity of the intervention and practitioners to consider the contextual/pragmatic issues. This information will be crucial to teams who are selecting and/or recommending educational treatments.

An intervention is said to have internal validity when scientific procedures demonstrate a cause-and-effect relationship between the intervention and the change in behavior. Research methods, including design, analysis, and measurement procedures, allow researchers to establish internal validity by ruling out other known and unknown variables (i.e., variables other than the intervention) that may have caused the measured change in the dependent variable (Campbell & Stanley, 1966; Skinner, 2004). Some suggest that designs must be true experiments, meaning that they must include random assignment of participants (subjects) to conditions. However, much of what we know about what works was established using other research procedures, such as single-subject designs.

After establishing that an intervention can enhance learning or skill development, the next issue is whether it will have a similar effect across subjects, target behaviors, and environments. This is referred to as external validity and provides us with confidence that the intervention will work again, and more specifically, with the presenting target behavior(s). One of the most common fallacies associated with large-N statistical research is that using many subjects automatically enhances external validity. In some cases it may, but in other cases a larger number of subjects may not enhance external validity (Michael, 1974). Thus, another criterion commonly required to classify an intervention as empirically validated is replication (Chambless & Hollon, 1998; Kazdin, 2004). Replication studies can provide additional evidence for internal validity while also demonstrating effects across treatment agents (e.g., across teachers), settings (e.g., across classrooms), target behaviors, time, and students.

While establishing the internal and external validity of an intervention is important, from a practitioner's standpoint, it is not enough. For an intervention to benefit educators and students, it also must be pragmatic. Many variables affect whether an intervention can be applied to a presenting problem, including: (a) the amount of training needed to implement it; (b) the degree of implementation precision (e.g., treatment integrity/fidelity) needed for the treatment to be effective; (c) possible negative side effects of the intervention; and (d) the amount of resources needed to carry it out. Finally, the degree to which the intervention can be implemented within the current context of the classroom is an important pragmatic concern. For additional resources, information, and forms supporting the documentation and implementation of interventions, see Heartland Area Education Agency (2002).

The profession of school psychology has continually worked to develop (a) a database of what works with respect to intervention development (Berliner, 1984), (b) numerous theories regarding why these things work, (c) assessment procedures designed to allow us to measure skill development levels and learning strengths and weaknesses, and (d) numerous models of linking assessment to intervention. These data and criteria may allow school psychologists to identify skill deficits (e.g., an acquisition vs. a fluency problem) and develop or select interventions based on the skill characteristics we want to improve (e.g., provide more demonstration and modeling to enhance accuracy vs. reinforcing high rates of AAA responding to enhance fluency). However, the science and practice of school psychology have not developed to the point where data can be collected and used to determine with certainty that intervention X will remedy a presenting problem (Daly et al., 2005). Researchers have offered numerous reasons for this state of affairs, sometimes deriding others for wasting efforts, resources, and time on inappropriate models of linking assessment to intervention.

Our opinion regarding progress in this area is that we have done remarkably well, considering we are attempting to address a phenomenon that is (a) extremely complex (i.e., human behavior change), (b) not stable (i.e., human behavior change is constant), (c) susceptible to many known and many more unknown variables, and (d) difficult to study (just ask those trying to run large-N double-blind studies in educational environments). Regardless, as researchers continue to pursue this line of research, we are faced with the same qualified conclusion that aptitude-treatment interaction researchers have offered: data on student skill development levels may guide educators in forming hypotheses regarding which interventions are more likely to be effective in preventing and remedying skill deficits. While newer methods of linking assessment to intervention have taken a behavioral, and arguably more direct, approach to observing variables that account for student learning, a hypothesis testing model to validate what works for which students continues to be needed. For this reason, intervention evaluation is critical.

Step 4 – Intervention Evaluation: Did the Plan Work?

RtI models have increasingly garnered support due, in part, to the realization that practitioners need to demonstrate how students interact with the classroom environment. In RtI models, many of the same brief assessment procedures used for screening and identification of target behaviors can also be used to evaluate the effects of interventions (Fletcher, Coulter, Reschly, & Vaughn, 2004). These brief measures often allow for repeated assessment of skill development and are sensitive enough to allow researchers and practitioners to evaluate the effects of interventions on learning rates. Thus, educators can conclude not only whether the intervention is working, but also how quickly it is working. These data help educators arrive at many formative decisions, including whether to continue the intervention, adapt the intervention, change interventions, and/or cease remediation procedures (Deno & Mirkin, 1977).

There are at least three ways in which practitioners can assess student progress. Two of these are formative in nature and consist of using general outcome measures (GOMs) and sub-skill mastery measures (SMMs). The third is the use of summative assessment via a pre–post test method. Each is useful for different reasons and is accompanied by strengths and weaknesses. A GOM, such as oral reading fluency, is useful because it correlates strongly with broad tests of achievement, as is seen in reading. Researchers have demonstrated that a student's learning rate in response to an intervention can be estimated in as little as 8 weeks (Christ, 2006). Unfortunately, this still takes approximately 25% of the school year to obtain reliable and valid data to support growth rate estimates. Another option is to use SMMs, such as single-digit subtraction probes or a list of five sight words, to quickly detect immediate gains in student achievement (see Poncy et al., 2007). However, SMMs fail to generalize to broad tests of achievement: although we can be sure that an intervention increased single-digit subtraction fluency, whether this significantly helps the student across mathematical tasks (e.g., algebra) is doubtful.
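
As an illustration of how a GOM growth rate might be estimated, the sketch below fits a least-squares slope to eight weeks of hypothetical weekly oral reading fluency scores; the scores and the 1.5 wcpm/week goal are assumptions, not published standards.

```python
# Minimal sketch: estimating a learning rate (slope) from weekly ORF probes.
def weekly_growth(scores):
    """Least-squares slope of wcpm over weeks (wcpm gained per week)."""
    n = len(scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

orf = [46, 48, 51, 50, 55, 58, 60, 63]  # eight hypothetical weekly wcpm scores
slope = weekly_growth(orf)
print(f"Estimated growth: {slope:.1f} wcpm per week")
print("Meets an assumed goal of 1.5 wcpm/week?", slope >= 1.5)
```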

Both GOMs and SMMs are forms of formative assessment and provide feedback to practitioners while instruction is occurring. Summative assessment, on the other hand, is conducted before and after the intervention and does not provide information about intervention effectiveness until the intervention has concluded. It is our opinion that formative assessment is the better option of the two for evaluating the effects of interventions; however, summative assessments can be useful if the treatment is a systemic change (i.e., a new curriculum). In this case, it would be hoped that the new curriculum would meet the needs of a large portion of the students (e.g., 80%). An example of the use of summative assessment to aid in program evaluation is using state test data to compare and evaluate a curricular change. A district may review past data and determine that student achievement levels and progress are not at a desired level. A team uses data to select a new curriculum and implements it. After a year of the new curriculum, the instructional team can compare achievement gains under the initial curriculum to those under the newly implemented curriculum, evaluating what worked best for their students.
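
A hypothetical summative comparison along these lines might look like the following; the cut score and score distributions are invented, and in practice the comparison involves different cohorts of students, which limits the inferences that can be drawn.

```python
# Invented example: percent proficient on the state test the year before and
# the year after a curriculum change.
def percent_proficient(scores, cut):
    return 100.0 * sum(s >= cut for s in scores) / len(scores)

cut = 200  # assumed proficiency cut score
old_curriculum = [180, 195, 205, 210, 190, 222, 175, 230]
new_curriculum = [190, 205, 215, 212, 198, 228, 185, 236]

before = percent_proficient(old_curriculum, cut)
after = percent_proficient(new_curriculum, cut)
print(f"Proficient before: {before:.0f}%, after: {after:.0f}%, "
      f"change: {after - before:+.0f} points")
```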

Depending on the nature and purpose of the particular program under investigation, an appropriate method to evaluate the success of the intervention will need to be selected. This chapter has attempted to define several approaches to collecting data to address the success of an intervention. For more information on formative assessment practices, see Chap. 6 of this book. This final stage completes a cycle of the problem solving model, which is a process with the end goal of continuous improvement. When schools are successful in remedying problems and improving the system, it is hoped that educators will be continually reflective and search for new and innovative ways to improve student achievement.

Practice Implications

We propose three ways in which school psychologists can meaningfully impact instructional leadership teams: (1) organize and present data in a purposeful and meaningful manner; (2) support school professionals in using data to answer the questions posed in the problem solving model; and (3) assist with the methods and interpretation of evaluating instructional practices. To guide practitioners through this process, we have constructed an assessment matrix that aligns the assessments used in a district with the questions from the problem solving model (see Fig. 10.1). To demonstrate how this tool may be used by psychologists, we present a case study of how an instructional leadership team used this matrix to influence decisions about a building-wide reading model.

Fig. 10.1 The Soap Creek Assessment Matrix: Reading (SCAM-R)

The Sunset Heights Elementary School staff is working hard to increase the reading skills of all our students. The purpose of the current assessment matrix is to define the assessments given to students and how we can use this information to provide answers about how to continually increase reading scores. As building principal, I want to personally applaud the Sunset staff's professionalism and commitment to helping each of our students meet their potential. Below is a quick description of each of the assessments used in our building in the area of reading.

Iowa Test of Basic Skills (ITBS). The ITBS is a standardized broad-band achievement test that investigates several areas of achievement. It reports a percentile rank comparing students to national and state norms. Student progress concerning AYP is reported using the results of the ITBS. Students are considered proficient if they score at or above the 40th percentile when compared to the national norm. This test is given once per year, in the month of February, to both 3rd and 4th grade students.

Measures of Academic Performance (MAP). The MAP test provides data similar to the ITBS and was chosen as an additional measure to report to the state concerning student achievement in the areas of reading, mathematics, and language usage. Students are deemed proficient if they score above the 33rd percentile compared to the national norm. This test is given twice per year to both 3rd and 4th grade students, once in the fall and once in the spring.

Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The DIBELS is a set of standardized achievement tests specifically investigating reading skills such as phonemic awareness, alphabetic understanding (i.e., phonics), and oral reading fluency. The data from these tests are used to screen students a minimum of three times per year, with achievement scores being classified in the benchmark, strategic, or intensive range. Students who score in the strategic and/or intensive range receive increased intervention services focusing on need areas through classroom instruction and Title 1 services. Furthermore, these students are monitored weekly to assess how they respond to intervention services. If a student fails to make significant progress, he or she is referred to the Sunset Success Team, where a multidisciplinary team meets to review data and systematically attempt to solve the problem.

Curriculum-Based Assessment (CBA). While state tests and DIBELS allow teams to identify and define problems and monitor progress, they do little to address why students are failing to meet the task demands of the educational setting. To assist in answering these questions, teachers and other educational professionals are encouraged to summarize data derived from a variety of classroom/curriculum-based assessments. Information can be obtained from reviews of student products; interviews with teachers, parents, and the student; classroom observations; and tests. These data need to be collected to answer pre-specified questions about student performance (e.g., what types of decoding errors does the student make?).

Each of these assessments provides data that can be used to answer questions about how our instruction is impacting student achievement in some form or another. The goal of Sunset Heights is to have a minimum of 80% of our students scoring in the proficient and/or benchmark range of the district-wide assessments (ITBS, MAP, & DIBELS), a maximum of 15% of our students receiving supplemental services on a consistent basis, such as Title 1 reading, and our most intensive services (i.e., special education) reserved for the bottom 5% of our students. The following Reading Assessment Matrix aligns building assessments with the four stages of the problem solving model and places an "X" where an assessment provides data to address the question.
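
One hedged sketch of how a team might check building-level goals of this kind against screening results; the tier counts below are invented for illustration and do not describe any actual building.

```python
# Checking hypothetical tier proportions against an 80/15/5 building goal.
from collections import Counter

tiers = ["benchmark"] * 172 + ["supplemental"] * 20 + ["intensive"] * 8
n = len(tiers)
pct = {tier: 100.0 * count / n for tier, count in Counter(tiers).items()}

print(pct)
print("Core instruction goal met (>= 80% benchmark)?", pct.get("benchmark", 0) >= 80)
print("Supplemental services within 15%?", pct.get("supplemental", 0) <= 15)
print("Intensive services within ~5%?", pct.get("intensive", 0) <= 5)
```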

Soap Creek Assessment Matrix: Reading

The goal of the Assessment Matrix is to identify the assessments administered at a particular school, specify when the assessments are given, and identify the questions that the data from the assessments can answer (i.e., their purpose). It cannot be emphasized enough that the final goal of this process is not to simply document the different assessments, but to specify the purpose of each assessment as well as the questions that can be answered from the collected data. So that this process does not overwhelm school staff, it is suggested that it be broken down by area (e.g., reading, math, written expression); in the forthcoming case study, reading is used. The ultimate goal of the assessment matrix is to present teams with a heuristic to support the understanding of the relationship between the data collected and the educational questions that need to be answered. Once this is organized, the school psychologist, in combination with other qualified team members, should be able to increase the confidence with which the team can make instructional recommendations.
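
One hypothetical way to encode such a matrix is as a simple mapping from each assessment to the problem-solving questions its data can address; the "X" placements below are illustrative only and do not reproduce the actual SCAM-R shown in Fig. 10.1.

```python
# Illustrative encoding of an assessment matrix: assessment -> questions served.
matrix = {
    "ITBS":   {"identify": True,  "analyze": False, "implement": False, "evaluate": True},
    "MAP":    {"identify": True,  "analyze": False, "implement": False, "evaluate": True},
    "DIBELS": {"identify": True,  "analyze": True,  "implement": False, "evaluate": True},
    "CBA":    {"identify": False, "analyze": True,  "implement": True,  "evaluate": True},
}

def assessments_for(question):
    return [name for name, uses in matrix.items() if uses[question]]

for question in ["identify", "analyze", "implement", "evaluate"]:
    print(f"{question}: {', '.join(assessments_for(question))}")
```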

Soap Creek Elementary School is a K-4th grade building that educates approximately 350 students. Student demographics show that approximately 90% of the students are Caucasian, 36% receive free and reduced lunch, 10% are ELL, and 8% receive special education services. The Soap Creek staff consists of 14 general education teachers, two special education teachers, two reading interventionists, one behavioral interventionist, and a half-time position for a math interventionist and ELL teacher. Class sizes are approximately 22 students. To report AYP goals, Soap Creek Elementary uses results from the Iowa Test of Basic Skills (ITBS) for its fourth grade students. In addition, data are collected from the Measures of Academic Performance (MAP), the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), and a variety of CBAs (see Fig. 10.1 for expanded descriptions). These assessments and the questions they address are identified and described on the Soap Creek Assessment Matrix: Reading (SCAM-R; see Fig. 10.1). The completed SCAM-R is meant to provide an example of one method to describe and summarize the alignment of district assessments with the purposes of the problem solving model.

At the end of the 2005–2006 year, the instructional team gathered and organized their reading data using ITBS, MAP, and DIBELS results. The instructional team consisted of the principal, the school psychologist, the Title 1 reading teacher, and one classroom teacher from each grade. The team members were pleased with the outstanding performance of the students on the state tests. In reading, 92% of Soap Creek's third grade students and 90% of its fourth grade students scored at or above the 40th percentile when compared to the national norm on the ITBS. These data placed Soap Creek's achievement results above their AYP and building goals. MAP data also supported these results, with 94% of the fourth grade students scoring in the proficient range, which for the MAP is above the 33rd percentile when compared to the national norm. DIBELS data confirmed exemplary achievement in grades K-1; however, trends in the data suggested that as students progressed through second and third grade, an increasing number of students were falling into the some-risk to at-risk range (see Fig. 10.2). While general achievement rates were extremely high, especially when compared to national norms, DIBELS data showed 90% or more of students scoring in the low-risk range in grades K-1, 80% in second grade, 65% in grade three, and 77% in grade four.

Fig. 10.2 Soap Creek Elementary end-of-the-year DIBELS results
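
The kind of grade-by-grade summary shown in Fig. 10.2 can be produced once each student's DIBELS risk category is recorded. The sketch below uses invented records, not Soap Creek's data, and assumes risk is coded simply as "low," "some," or "at."

```python
# A hypothetical sketch of the grade-by-grade summary the team reviewed: given
# each student's DIBELS risk category, compute the percentage at low risk per
# grade. The sample records below are invented, not Soap Creek's data.

from collections import Counter, defaultdict

records = [            # (grade, risk_category) -- one tuple per student
    ("K", "low"), ("K", "low"), ("K", "some"),
    ("3", "low"), ("3", "some"), ("3", "at"),
]

by_grade = defaultdict(Counter)
for grade, risk in records:
    by_grade[grade][risk] += 1

for grade, counts in sorted(by_grade.items()):
    total = sum(counts.values())
    pct_low = 100 * counts["low"] / total
    print(f"Grade {grade}: {pct_low:.0f}% low risk (n={total})")
```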

The team convened, reviewed the data, and was not surprised by the test results. In the past, given the relatively high achievement rates of the district (usually around 85–90% of students scoring in the proficient range), the team had provided more support services to kindergarten and first grade students. For example, the Title 1 teachers would see anywhere from 30 to 50% of the kindergarten students at some point over the course of the year, some for a short time for a “boost” and others more consistently. This decision to concentrate resources in kindergarten and first grade to prevent reading problems may have been affecting fluency gains for students in second through fourth grades. As the team reviewed these patterns in student achievement, they asked the following question: “Should we lessen our focus on prevention in kindergarten and first grade to bolster instruction in grades two through four, and what would be the most efficient way to deliver the needed interventions for each group?” Ultimately, the team needed to figure out how to keep student achievement rates in kindergarten and first grade from dipping while increasing oral reading fluency rates for students in second, third, and fourth grades.

The Soap Creek Elementary School instructional team established that they wanted a minimum of 85% of their students scoring at benchmark. While approximately 90% of students in kindergarten and first grade were meeting the reading benchmarks, the percentage of students meeting benchmarks in second through fourth grade was declining. This decline was most evident in third grade, where only 65% of the students were meeting the reading benchmark of 110 words/minute. Further assessment using DIBELS and classroom data indicated that a majority of the students not meeting benchmark were reading accurately (i.e., above 95% accuracy) but slowly, and these data were consistent across classrooms. Furthermore, students who were reading slowly and inaccurately were already receiving support from the Title 1 teachers. At this point, the problem was defined (see Fig. 10.3), and patterns in student responding were identified and compared to the IH. These data converged to show decreasing student performance in reading fluency.

Fig. 10.3 Problem definition data
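
Two of the checks described in the problem definition reduce to simple arithmetic: whether each grade meets the team's 85% goal, and whether a given student is accurate but slow. The sketch below reuses the chapter's reported percentages and the 110 words/minute benchmark as inputs; the individual student scores and function names are hypothetical.

```python
# A minimal sketch of the two checks described in the problem definition:
# (1) does each grade meet the team's 85% benchmark goal, and (2) is a student
# accurate but slow (accuracy of 95% or better yet below the 110 words/minute
# benchmark)? The percentages reuse the chapter's reported figures; the student
# scores and names are hypothetical.

GOAL_PCT = 85
THIRD_GRADE_BENCHMARK_WCPM = 110

pct_at_benchmark = {"K": 90, "1": 90, "2": 80, "3": 65, "4": 77}

for grade, pct in pct_at_benchmark.items():
    status = "meets" if pct >= GOAL_PCT else "below"
    print(f"Grade {grade}: {pct}% at benchmark ({status} the {GOAL_PCT}% goal)")

def accurate_but_slow(words_correct: int, errors: int,
                      benchmark: int = THIRD_GRADE_BENCHMARK_WCPM) -> bool:
    """Flag students whose accuracy is high but whose fluency is below benchmark."""
    attempted = words_correct + errors
    accuracy = words_correct / attempted if attempted else 0.0
    return accuracy >= 0.95 and words_correct < benchmark

print(accurate_but_slow(words_correct=82, errors=3))   # True: accurate, not fluent
```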

The team met with the Soap Creek teaching staff with a plan to increase ORF rates. The group decided on a classroom-based set of fluency-building interventions for grades two, three, and four, paired with an intensive professional development program for teachers that emphasized interventions to increase reading fluency. Specifically, the intervention plan combined a variety of fluency-building reading interventions, including repeated reading, emphasized phrasing, readers theater, cooperative learning groups (paired reading and peer-assisted learning), and various options for feedback and reinforcement, such as performance feedback and interdependent group contingencies (Daly et al., 2005; Rasinski, 2003). In addition, the cessation of round robin reading was discussed and the aforementioned replacement strategies were reiterated. These interventions were to be implemented daily in a predetermined 15-minute block. To address the needs of students identified as reading slowly and inaccurately, the Title 1 teachers continued to work on building decoding/phonics skills; in addition, they identified each struggling student's instructional level and selected materials for them to use in the rereading interventions. The plan was to be implemented at the beginning of the next school year, with progress monitored using oral reading fluency scores from DIBELS assessments. The instructional leadership team planned to continue meeting the next year to discuss professional development efforts with the teachers in grades two, three, and four, to review data as they were collected, and to make decisions accordingly.
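
Progress monitoring of a plan like this amounts to tracking each student's weekly ORF probes and judging whether the rate of growth is adequate. The sketch below fits a simple least-squares slope to weekly scores; the scores and the 1.5 words-per-week decision rule are invented for illustration and are not part of the team's plan.

```python
# A hedged sketch of progress monitoring with weekly ORF probes: fit a simple
# least-squares slope (words/minute gained per week) and compare it to a target
# growth rate. The scores and the 1.5 words-per-week target are invented for
# illustration and are not part of the team's plan.

def weekly_growth(scores):
    """Ordinary least-squares slope of ORF scores across consecutive weeks."""
    n = len(scores)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(scores) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

orf_scores = [68, 71, 70, 74, 77, 79, 82]   # invented weekly probe results
slope = weekly_growth(orf_scores)
print(f"Growth: {slope:.1f} words/minute per week")
print("On track" if slope >= 1.5 else "Consider intensifying the intervention")
```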

Future Issues

Participating on instructional decision making teams provides a context for school psychologists to use their skills in data collection and analysis, consultation, learning theory, and program evaluation to influence student achievement on a large scale. However, given the relative newness of this role, there will be a variety of issues for practitioners to consider. As school psychologists transition into this role, two basic issues will need to be initially confronted: (1) role advocation and (2) time allocation.

Role Advocation

With the passage of the Education for All Handicapped Children Act in 1975 (the legislation that would later become IDEA), school psychologists spent a majority of their time in testing activities, using batteries of aptitude and achievement tests to determine whether students were eligible for special education services. As the profession has evolved, researchers and practitioners have emphasized that assessment and intervention activities should be concretely tied to demonstrable increases in student achievement. These principles have been the impetus for the development and implementation of RtI models. While school psychologists have generally embraced this transition, administrators and teachers may still adhere to traditional conceptions of the profession and view school psychologists as “gatekeepers” to special education eligibility.

If administrators and teachers continue to see school psychologists as professionals who diagnose within-student deficits in disabled populations, school psychologists will rarely be asked to join an instructional leadership team that focuses on the entire student population. However, given their breadth of training, many school psychologists possess skills that are often new to such teams and that can help them use the problem solving process to collect and/or analyze data to identify and define problems, figure out why they are occurring, select empirically validated treatments, and evaluate the effects of the interventions on student achievement. As legislation changes to provide a platform for these activities, practitioners and researchers will need to advocate for a shift from traditional roles to new roles, such as participation on instructional decision making teams. Given the importance of data-based accountability under NCLB and the emphasis on documenting improved student achievement, school psychologists' skill sets in data collection and analysis could be a valuable asset to district administrators. However, for this to happen, school personnel need to be aware of how a school psychologist could assist instructional decision making teams.

Time Allocation

As state rules and regulations change to support RtI approaches, school psychologists will likely spend an increased amount of time collecting and interpreting data (e.g., CBA) that are directly linked to the curricular objectives and expectations of the district. In addition, practitioners will likely begin to spend more time consulting with teachers and intervening with students. This exposure and immersion in the instructional milieu will provide school psychologists with knowledge about the scope and sequence of district curricula and data concerning the level, trend, and distribution of student achievement, and will assist in building personal relationships with teachers, staff, and students. However, finding time to conduct these activities will be difficult if school psychologists continue to spend their time administering published norm-referenced aptitude and achievement tests. Therefore, agencies and schools that employ school psychologists will not only need to acknowledge a shift in the role of school psychologists, but also allot the time for the training and implementation of these activities.

Conclusion

As the needs of schools continue to change, school psychologists will face a plethora of new challenges and changing roles. One area in which school psychologists will likely be called upon to participate more actively is instructional decision making. The goal of this chapter was to present readers with methods and ideas for how school psychologists can support schools in aligning their assessments with the problem solving model to drive data-based educational decisions concerning instructional practices and materials. The Assessment Matrix was designed to focus the attention of instructional teams on the specific purposes of the problem solving model. Specifically, teams will be (1) identifying problems; (2) analyzing why they occur; (3) implementing instruction; and (4) evaluating the effects of the intervention. While districts are directed to collect achievement data, these data usually come from broad-band achievement tests that are useful for identifying and defining problems but do little to inform the other questions of the problem solving model: what needs to be done and how well it is working for students. The goal of the Assessment Matrix is to prompt teams to assess with purpose and to collect data that specifically answer the questions of the problem solving model. The systematic application of these principles and methods should direct teams to use data to efficiently match resources to school needs and to demonstrate how the implemented interventions are impacting student learning.

Chapter Competency Checklist

DOMAIN 1 – Enhancing the development of cognitive and academic skills

Foundational

Understand and explain the following:

□ Legal pressures on public education
□ Roles of school psychologists on instructional leadership teams
□ Linking assessment and intervention through problem solving
□ Instructional hierarchy
□ Role of state curricula standards and NCLB
□ How to access state curricula standards in the state in which you practice

Functional

Gain practice:

□ Consulting with individual teachers regarding instruction
□ Consulting with school teams regarding instruction
□ Collecting data via system-wide assessment packages (DIBELS, AIMSweb)
□ Using data to make instructional decisions for individual students
□ Using data to make instructional decisions within an RtI system