Abstract
Technology involves application of research to solve practical problems, and it may include bodies of knowledge and processes as well as tools. Whether technology impacts learning, and if it does, how it may be best used are questions that have been debated for several decades. This chapter discusses the effect technology has had on schooling and argues that any attempt to improve student learning must stand on relevant, well-designed curricula and evidence-based instructional methods. With this in place, technologies can help support response to intervention (RTI) efforts around learning, assessment, teaching, and productivity. The use of technology in instruction and intervention across tiers is discussed, including its affordances, limitations, and barriers, with research and recommendations for the use of technology in reading and mathematics specifically addressed.
What is Educational Technology?
In the first half of the twentieth century, educational technology focused on media, as visual and then audiovisual tools were used to present instruction in forms such as film. This view of educational technology as hardware and software is still common today, and one might first think of educational technology in the form of computers and the educational software that runs on them (Reiser 2012).
Technology, however, involves application of research to solve practical problems and includes processes as well as tools (Clark and Salomon 1986; Twyman 2011). The notion of technology as a process became a focus of educational technology beginning in the 1950s, when educational technology came to be seen as involving the design of solutions for instructional problems and application of science to instructional practices (Reiser 2012).
In 2008, the Association for Educational Communications and Technology revised its definition of the field to include both resources and processes: “Educational technology is the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources” (quoted in Reiser 2012, p. 4). In describing this definition, the authors define technological processes as “the systematic application of scientific or other organized knowledge to accomplish practical tasks” (quoted in Reiser 2012, p. 5), while technological resources refer to the hardware and software that we more typically think of when we think of educational technology (Reiser 2012).
Does Technology Impact Learning?
Whether and how technology impacts learning has been debated for several decades. The focus of this debate has been on whether the media used for instruction have unique effects on learning, apart from instructional methods. For example, Clark (1983, 1985, 1994) argued that media do not influence learning. Rather, it is the instructional method employed that influences learning, with the medium simply being a vehicle for a particular method. Kozma (1991, 1994) argued that different media might influence learning when they have different capabilities. Instructional methods could take advantage of these capabilities, and this interaction between a medium’s capabilities and the methods that utilize them can result in more or different learning.
Any attempt to improve student learning must first stand on relevant, well-designed curricula and evidence-based instructional methods. Good instructional design requires a systematic design process that includes performing content, task, and learner analyses, clearly defining the learning objectives, determining the criterion tests to assess for understanding or mastery, establishing the entry repertoire needed by the student, building the instructional sequences, using performance data to continually adjust instruction, and ensuring student motivation by incorporating both program intrinsic and extrinsic consequences throughout the instructional sequence (Tiemann and Markle 1990; see also Dick and Carey 1996; Smith and Ragan 1999; Twyman et al. 2004). Good instructional delivery requires active learner engagement with frequent opportunities to respond (Rosenshine and Berliner 1978) and immediate, relevant, and contingent feedback (Bardwell 1981; Mory 1992; Shute 2008). Instruction should support the learner in moving forward at his or her own learning pace (Fox 2004) so that new material is not presented until the student has demonstrated mastery or application of current material (Bloom 1968; Keller 1968; Kulik et al. 1990). This requires that the progression of instruction and content be tied to actual measures of student learning, and not dictated by curriculum content chunks such as chapters or units, or the passage of marking periods or calendar years. Any viable educational technology must support, enhance, or provide these critical components.
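The core of this design logic, advancing only on demonstrated mastery so that progression is driven by performance data rather than chapters or calendars, can be sketched in a few lines. The mastery threshold and unit names below are hypothetical illustrations, not drawn from any cited program.

```python
# Hypothetical sketch: advance through an instructional sequence only on
# demonstrated mastery, not on chapters completed or time elapsed.

MASTERY_THRESHOLD = 0.9  # illustrative criterion, e.g. 90% correct

def next_unit(units, scores):
    """Return the first unit the student has not yet mastered.

    units  -- ordered list of unit names (an instructional sequence)
    scores -- dict mapping unit name -> proportion correct on its
              criterion test (absent means not yet attempted)
    """
    for unit in units:
        if scores.get(unit, 0.0) < MASTERY_THRESHOLD:
            return unit          # stay on (or return to) this unit
    return None                  # sequence complete

sequence = ["phonemic awareness", "phonics", "fluency"]
print(next_unit(sequence, {"phonemic awareness": 0.95, "phonics": 0.7}))
# prints "phonics": progression is tied to measured learning, not pacing
```

The point of the sketch is the gate itself: no new material is presented until the criterion measure confirms mastery of the current material.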
Computer-based instruction (CBI) or computer-assisted instruction (CAI) has been the most prevalent form of hardware/software technology introduced into schools over the past half century. After initial fanfare with few tangible results (see AL-Bataineh and Brooks 2003), followed by more pointed empirical questions regarding the impact of CBI (see Shlechter 1991; Siegel 1994), the educational use of technology and CBI is gaining positive traction in the research literature. Improved outcomes have been demonstrated clearly in structured content areas such as mathematics (Valdez et al. 1999) and the social sciences (Kulik and Kulik 1991), and, with the growing role of the Internet and social connection in digital technology, contributions are beginning to be seen in other content areas as well (Redecker et al. 2010). Modern technology tools and applications such as video, interactive whiteboards, student response systems, portable devices, virtual learning, and a 1:1 ratio of computers to students have been found to greatly increase the collection, management, analysis, storage, and communication of educational data (McIntire 2002; Wayman 2005). Numerous meta-analyses of existing research have indicated a range of improvement effects for the use of computers, game-like curricula, and interactive simulations (Niemiec et al. 1987; Vogel et al. 2006). McNeil and Nelson (1990, cited in Hattie 2008) reported great variance in student outcomes due to factors such as instructional methods, learning materials, implementation variables, and the purpose(s) of the media. For example, Blanchard et al. (1999) conducted a meta-analysis of more than ten multimedia/game curriculum implementations for mathematics and language arts instruction in grades 1–5, and found overall low effects for mathematics (d = 0.13) and language arts (d = 0.18), yet higher effects (d = 0.23) when interventions were implemented with quality. Vogel et al. (2006) analyzed studies that reported differences in cognitive gains or attitudinal changes for computer games and simulations versus traditional classroom instruction. They found significantly higher cognitive gains for participants using games and simulations (z = 6.05) as well as a main effect for attitude favoring games and simulations (z = 13.74).
A notable finding from more than two decades of computer-aided and multimedia interactive education is that increased “student control” over learning (such as pacing, sequencing, time allocation for mastery, and choice of practice and review items) has produced equivocal outcomes compared to programs that were heavily or solely teacher directed (Niemiec et al. 1996). Some more recent studies have indicated that student attitudes towards school and subject matter (Roblyer 1989; Roblyer and Edwards 2000), as well as self-image and self-confidence (Alexiou-Ray et al. 2003; Christensen 2002; Roblyer et al. 1988), can be positively affected by technology tools, although the durability of these effects is not yet clearly known.
Educational Technology and Response to Intervention
The purpose of response to intervention (RTI) is to “contribute to more meaningful identification of learning and behavioral problems, improve instructional quality, provide all students with the best opportunities to succeed in school, and assist with the identification of learning disabilities and other disabilities” (National Center on Response to Intervention, n.d., n.p.). A critical feature of RTI is the seamless integration of assessment and intervention to increase student learning and outcomes. RTI commonly employs three levels of intervention: primary prevention, secondary prevention, and tertiary prevention (Fuchs and Fuchs 2009). Primary prevention entails whole-class instruction delivered by a general education teacher. This instruction may or may not be differentiated and may or may not involve small-group and independent activities. The instructional materials used may or may not be empirically validated, but the design of most core programs is based on instructional principles. Secondary prevention involves additional small-group instruction using an evidence-based intervention protocol for those students who are identified as at risk. Reading or mathematics coaches may oversee this instruction, and paraprofessionals may deliver the instruction to small groups of students. Those students who do not make adequate progress in secondary prevention may move into tertiary prevention. This level employs intensive, individualized instruction (Fuchs and Fuchs 2009). The level of progress monitoring may also differentiate tiers, with more frequent progress monitoring often occurring with more intensive intervention (i.e., secondary and tertiary). RTI implementations may adjust the intensity and frequency of supplemental instruction and rate of progress monitoring to best meet the unique needs of the students receiving services, instructional demands of the classroom, and meaningful knowledge of a student’s response to instruction.
Educational technology can help support RTI goals. The 2010 National Educational Technology Plan described five major goals for research and development related to educational technology (US Department of Education 2010). These five goals relate to (1) learning, (2) assessment, (3) teaching, (4) infrastructure, and (5) productivity, and are consistent with an RTI model. For example, the plan calls for research and development exploring how technologies such as simulations, virtual worlds, games, cognitive tutors, and collaboration environments can effectively motivate learners, while assessing complex skills and providing immediate performance feedback and adaptive instruction. Motivation, frequent assessment, feedback, and adaptive instruction are a few of the components of an effective instructional model, and the integration of assessment and intervention described within these goals is also a critical feature of RTI.
Technology also may help facilitate formative assessment and instructional decision-making. While research shows the effectiveness of RTI approaches in enhancing student learning (see Burns et al. 2005), fidelity of implementation is a critical variable in large-scale RTI implementation (Burns n.d.; Gansle and Noell 2007; Ysseldyke 2005). Technology may ease RTI implementation barriers, thus making RTI more likely to occur (Ysseldyke and McLeod 2007).
The promise of technology lies in its affordances. A teacher teaching a class of 30 students may ask an individual student a question, allowing that student a response opportunity. Other students in the class may or may not respond covertly to the question. If the teacher asks all students to respond—for example, by holding up response cards—it is more likely that all students will respond (Gardner et al. 1994; Narayan et al. 1990). If a teacher has a student response system, all students have an opportunity to respond, and the teacher can collect, analyze, and track individual and group responses over time, allowing for a more detailed assessment of student understanding and progress (Penuel et al. 2007). Technologies can provide teachers with a response system in which all students participating in instruction can respond, receive corrective feedback, and experience adjustments to instruction to further accelerate their learning. Supplemental instructional programs providing adaptive instruction and embedded assessment may also be used in all tiers—for example, as a supplement to core instruction in tier 1, for intervention in tier 2, and in a more focused way in tier 3 to help students gain fluency or fill skill gaps (Allsopp et al. 2010). The following sections describe some of the affordances technology can provide across tiers within an RTI model.
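A minimal sketch of this affordance, aggregating every student's response to every item so the teacher sees class-wide accuracy at once, might look like the following. The data layout and the student and item names are illustrative assumptions, not the interface of any actual response system.

```python
# Hypothetical sketch of what a student response system affords: every
# student answers every item, and the teacher sees per-item accuracy
# immediately rather than hearing from one called-on student.

def item_accuracy(responses, answer_key):
    """responses  -- dict: student -> {item: chosen answer}
    answer_key -- dict: item -> correct answer
    Returns dict: item -> proportion of responding students correct."""
    accuracy = {}
    for item, correct in answer_key.items():
        answered = [r[item] == correct for r in responses.values() if item in r]
        accuracy[item] = sum(answered) / len(answered) if answered else 0.0
    return accuracy

clickers = {"s1": {"q1": "b", "q2": "c"},
            "s2": {"q1": "b", "q2": "a"},
            "s3": {"q1": "d", "q2": "c"}}
print(item_accuracy(clickers, {"q1": "b", "q2": "c"}))
# q1 and q2 each come back at 2/3 correct; both may warrant reteaching
```

Because every response is captured, the same structure can be stored per session and tracked over time at the individual or group level.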
Technology in Tier 1 Instruction: Differentiating Instruction and Increasing Response Opportunities for All Learners
Many RTI theorists state that around 75–80% of children achieve adequate levels of competency with the core curriculum alone (i.e., tier 1). Approximately 20% of students are not successful in core instruction despite a good curriculum and generally effective instructional practices (Shapiro n.d.). Because most public schools in America do not have the resources to deliver moderate-to-intensive intervention (i.e., tiers 2 and 3) to more than a quarter of the total student population, a strong core curriculum is foundational to student learning and a successful RTI implementation. Much work has been done in identifying evidence-based core curriculum programs and helping educators make informed decisions about what to use when teaching (especially in reading; see Foorman 2007; National Reading Panel—NRP 2000; Simmons and Kame’enui 2003).
Although instruction in tier 1 typically includes core instruction delivered to the whole class by the general education teacher, this instruction should be differentiated and include peer tutoring and flexible groupings (Lembke et al. 2012). However, in a review of the current state of RTI, Fuchs and Vaughn (2012) list differentiated tier 1 instruction as a continued challenge for classroom teachers. Although differentiated core instruction has been shown to reduce the number of students who require more intensive intervention and result in more proportionate representations of males, minorities, and English language learners in special education (Torgesen 2009; VanDerHeyden et al. 2007), this differentiation requires not only extensive knowledge of the subject matter but also the ability to use appropriate assessment tools to determine students’ needs and effectively vary the type and intensity of instruction based on those needs. Improving instruction for primary prevention requires high instructional quality and opportunities to learn (Gerber 2005). Fuchs and Vaughn (2012) call for research and development of innovative instructional methods to improve tier 1 instruction. Given that the majority of instructional time is spent in a general education classroom, improving differentiated instruction in tier 1 has high potential for improving learning outcomes and decreasing the need for more intensive intervention.
Various forms of instructional technology have been found to influence and improve student learning in core curricula areas and general classroom instruction (Fadel and Lemke 2006; Flecknoe 2002; Gilbert 1996; Spector 2010). For example, student response systems have been found to improve student understanding and engagement, and create a more positive and interactive atmosphere (Caldwell 2007; Kay and LeSage 2009; Poole 2012). Reviews of research have generally agreed that the use of computers can increase student learning in a variety of subject areas and basic skills when combined with traditional instruction, and that students can learn more quickly and with greater retention when learning with computers. Student attitudes towards school and learning are also positively affected by the use of computers, and this use is most promising for at-risk and struggling learners (Fouts 2000).
Adaptive Instruction Across Tiers: Implementing Sound Instructional Methods and Progress Monitoring
Students who are struggling in an area also need explicit, systematic instruction with many opportunities for practice to build both accuracy and fluency. Instruction should also include cumulative and varied practice to promote retention and transfer of skills (Gersten et al. 2009). However, several challenges to implementing these instructional methods exist. For example, explicit instruction in mathematics may include providing a variety of models for problem-solving, verbalizing thought processes in teaching procedures and problem-solving methods, and offering guided practice and corrective feedback. Often, however, instructional materials include models of only one or two simple problems and may not include enough practice and cumulative review (see, for example, Jitendra et al. 1996). Interventionists may also provide less practice than is necessary and may not be expert enough in the mathematics content to supplement a lack of modeling and practice within the materials (Gersten et al. 2009; Ma 1999).
Adaptive instruction within each tier is important in helping students make adequate progress and reducing the number of students who are moved to a more intensive tier. This can save time and resources, while still helping each student to succeed. However, using assessment data to make instructional decisions can be challenging for many schools, and skill in using data and implementing interventions may vary considerably among teachers (Kupzyk et al. 2012).
There are increasingly prevalent research-based, technology-enhanced interventions that assess and analyze current skills, target student deficits, and allow for automated instructional delivery. For example, Burst®: Reading by Wireless Generation® helps teachers continuously match reading interventions to each student’s current ability and changing needs. Wireless Generation reports that smart technology allows learning data to be analyzed behind the scenes, recommends student groupings based on similar needs, and aligns instruction for the group. Similarly, Scholastic’s Read 180® intervention program includes a “Groupinator” tool that suggests optimal small groups for differentiated instruction and then links those groups to appropriate resources. These software tools enhance implementation by automatically translating student data into specific intervention recommendations that teachers can then implement, either in their own teaching or by accessing other technology resources. Teachers can view what skills and concepts students have mastered, how much instruction was required for mastery, what areas might have caused particular difficulty, as well as the amount of time spent in instruction (or on a particular topic). Through analysis of the data, teachers can evaluate what educational materials produced the best outcomes and other behavioral and cognitive information relevant to academic performance over time. Such analytics can help educators determine the best instructional plan for groups of students, or any particular student (West 2011).
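The grouping tools named above are proprietary, but the underlying idea, clustering students whose assessment data show similar needs, can be sketched simply. The skill names and the weakest-skill rule here are illustrative assumptions, not the actual algorithms used by Burst or Read 180.

```python
from collections import defaultdict

# Hypothetical sketch of needs-based grouping: each student is assigned
# to a small group targeting his or her lowest-scoring skill.

def group_by_greatest_need(profiles):
    """profiles -- dict: student -> {skill: score between 0 and 1}
    Returns dict: skill -> list of students weakest in that skill."""
    groups = defaultdict(list)
    for student, skills in profiles.items():
        weakest = min(skills, key=skills.get)   # lowest-scoring skill
        groups[weakest].append(student)
    return dict(groups)

profiles = {"Ana":  {"decoding": 0.4, "fluency": 0.8},
            "Ben":  {"decoding": 0.5, "fluency": 0.9},
            "Cara": {"decoding": 0.9, "fluency": 0.5}}
print(group_by_greatest_need(profiles))
# -> {'decoding': ['Ana', 'Ben'], 'fluency': ['Cara']}
```

A production system would weigh multiple measures and group sizes, but the translation from assessment data to an actionable grouping recommendation is the essential step.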
Motivation
Motivators are also important, as students who have historically had difficulty are less likely to engage in learning and practice opportunities as a result of frequent past failures (Fuchs et al. 2008). Motivators may be program extrinsic, such as awards, points, or badges for mastery or high levels of performance; sites that purport to enhance student motivation through digital badges (such as Badgeville or Mozilla’s Open Badges) and behavior management apps (such as Class Dojo) are just a few examples of motivational technology tools that may augment RTI. Motivators can also be program intrinsic, arising from the instructional sequence itself and from what mastery allows the learner to do in other contexts (Layng et al. 2004). For example, when an instructional sequence begins with a challenging task that a learner can complete successfully, this experience of success may help the student more readily approach learning in that area (Fuchs et al. 2008). Computer-based programs that continually assess and differentiate instruction (and therefore effectively align instruction with students’ skill levels) may thus promote student motivation by allowing for high rates of success in challenging tasks.
Games for learning have gained increasing attention in recent years, although games and “edutainment” have been part of educational technology for several decades. For example, the popular mathematics game Math Blaster® was first introduced in 1987. The structure of rewards in games may be especially effective in increasing motivation for struggling students, and can offer a learning environment in which feedback is less threatening (Shute 2008). Although games such as this are often thought of as offering practice in lower-level skills such as mathematical facts, games can also offer instruction and practice in higher-order thinking skills and problem-solving (Rice 2007).
Technology and Reading Intervention
Research
Research has provided us with relatively clear guidelines about how to effectively teach children to read. In 2000, the NRP concluded a review of more than 100,000 studies that met scientifically based research standards and examined the effectiveness of an instructional approach in early reading that could be generalizable to a large number of students. They identified, based on an extensive body of knowledge, the skills children must learn in order to read well: phonemic awareness (the ability to manipulate individual sounds), phonics (the relationship between individual written letters and individual spoken sounds), fluency (the ability to accurately and quickly read text), vocabulary (the meaning of words), and comprehension (the understanding of what is being read). These skills often form the basis of not only core reading instruction but also the more focused work of a tier 2 or tier 3 reading intervention.
Until recently, relatively few studies thoroughly evaluated new technologies for reading and literacy education. For example, Kamil and Lane (1998) reviewed the research in the two mainstream journals with the highest citation rates for literacy research (Reading Research Quarterly and the Journal of Reading Behavior, since renamed the Journal of Literacy Research) between 1990 and 1995 and found that only 1% of the articles discussed technology issues (see also Kamil et al. 2000). Within the past decade or more, however, there has been a growing number of examples of technology assists for the five critical components of early reading instruction. Wise and Olson (1995) found that elementary students who received computer-assisted instruction in phonological awareness (by reading words in context and completing exercises involving individual words) made significant gains in phoneme awareness and word recognition. Screen-reading software (which converts text to digital speech) has helped improve comprehension, fluency, and accuracy and enhances concentration for special education students (Leong 1992; Lundberg and Olofsson 1993). Hearing a word spoken within the context of a passage helps students build decoding skills, word recognition, and vocabulary (Califee et al. 1991). Text-to-speech (TTS) software has been found to support comprehension by allowing the listener to focus on the meaning of the text without disturbing the text flow, thus increasing the ability to read interesting or grade-level materials while minimizing the need for decoding skills (Wise et al. 2000).
Recommendations
The US Department of Education’s Institute of Education Sciences assists educators in identifying and implementing evidence-based interventions to increase student reading achievement. In their What Works Clearinghouse report Assisting Students Struggling with Reading: Response to Intervention (RTI) and Multi-Tier Intervention in the Primary Grades, they identified five recommendations for effective RTI reading interventions:
- Screen all students for potential reading problems early and midway into the school year. Monitor the progress of students identified as “at risk” for developing reading disabilities.
- Provide evidence-based reading instruction, differentiated for all students based on assessments of current reading levels (tier 1).
- Provide intensive, systematic, and evidence-based instruction on foundational reading skills in smaller groups to students who score below “benchmark” (target score) on screening measures (tier 2).
- Monitor individual tier 2 student progress frequently (at least monthly) and use the data to determine which students still require intervention and may be in need of a tier 3 intervention plan.
- Provide daily, intensive, evidence-based instruction on the necessary components of reading proficiency to students who are far from the benchmark or show minimal progress after small-group instruction (tier 3).
Each of these recommendations can be supported with technology. Efficient, reliable screening measures and assessments may be provided online and evaluated automatically by the software. A database of reading screening scores over time allows for analysis at the student, class, or school level across years. Patterns in scores may allow for the identification of students, teachers, curricula, or systems that increase or decrease reading ability levels. Data analysis tools can help educators make data-driven decisions, or recommend differentiated instruction for students at varied reading proficiency levels. Learning management tools can help teachers plan and schedule time, instructional content, and degree of support and scaffolding based on student needs. Web-based clearinghouses or review sites can assist teachers in identifying various materials that support critical components of reading instruction (such as phonemic awareness, phonics, vocabulary, comprehension, and fluency), and that are prearranged to build skills gradually based on what has been previously taught. Technology programs can offer a high level of student interaction, providing precisely engineered learning opportunities and individualized feedback on responses, often to several students simultaneously. Technology programs can easily collect a student’s response to each interaction, and parse those data based on important characteristics such as error patterns (to identify concepts that may require more instruction) or response latency (which may indicate the concept is not yet firm or fluent).
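As a sketch of that last point, the following separates concepts a student answers incorrectly (which may require more instruction) from concepts answered correctly but slowly (which may not yet be fluent). The latency cutoff and concept names are illustrative assumptions, not values from any cited program.

```python
# Hypothetical sketch: parse logged responses by error pattern and
# response latency. Thresholds and concept names are illustrative only.

SLOW_SECONDS = 5.0  # assumed latency cutoff for "correct but not fluent"

def classify_concepts(log):
    """log -- list of (concept, correct: bool, latency_seconds) tuples.
    Returns (needs_instruction, needs_fluency_practice), each sorted."""
    wrong, slow = set(), set()
    for concept, correct, latency in log:
        if not correct:
            wrong.add(concept)          # error pattern: reteach
        elif latency > SLOW_SECONDS:
            slow.add(concept)           # correct but slow: build fluency
    return sorted(wrong), sorted(slow - wrong)

log = [("short vowels", True, 1.2), ("digraphs", False, 3.0),
       ("blends", True, 7.5), ("digraphs", False, 4.1)]
print(classify_concepts(log))
# -> (['digraphs'], ['blends'])
```

The same per-response records can be rolled up across students or sessions to look for the patterns at the class, school, or curriculum level described above.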
Within the domain of reading, technology assists can be grouped along various dimensions depending on their purpose:
Fundamental Skills
Programs, software, or apps that reinforce fundamental skills such as letter or phoneme identification, phonics, word attack, sentence construction, or symbol recognition. They may teach, review, or practice these types of skills in isolation or as part of a larger reading technology package.
Text-Reading/Text-to-Speech Software
These programs convert written text into spoken words, with more modern technologies providing a more natural voice and cadence than available previously. Words may be highlighted in conjunction with their spoken output, or specific words may be selected for pronunciation or even definition.
Digital Text/Leveled Readers
These resources allow teachers (and students) to identify books based on reading level or interest and may consist of classic literature, book summaries, study guides, picture books, or even interactive activities based on the material read. Many sites maintain a database of what has been read and provide “smart” recommendations (based on a user’s individual choices and ratings), and often offer assistive technology enhancements like the ability to change text size, contrast, words per page, picture supports, and other enhancements.
Technology and Mathematics Intervention
Research
Although mathematics is a foundational skill critical for student success, the research base for effective mathematics instruction and intervention is not as extensive as that for reading (Fuchs et al. 2012; Lembke et al. 2012). For example, while reading studies have identified the critical component skills of phonics, phonemic awareness, fluency, comprehension, and vocabulary, component skills for mathematics have not been similarly identified. While it is possible that there are many more component skills required for mathematics (Fuchs et al. 2012), instruction in mathematics can be categorized in terms of three broad types of learning: conceptual, procedural, and strategic (Fuchs et al. 2008). For example, categorizing a story problem in terms of its type would be conceptual learning, carrying out the procedures to solve the problem would be considered procedural (Rittle-Johnson and Star 2009), and systematically attacking the problem—for example, starting by reading the problem carefully and ending by checking the work—would be considered strategic (Montague 1992; Polya 2004; Whimbey and Lochhead 1984).
Interventions utilizing technology have been shown to increase skills in mathematics. For example, in a review of studies investigating the effects of software programs on mathematics achievement, Kulik (2003) reported that out of 16 controlled studies conducted, 9 had an effect size large enough to be educationally meaningful. In all of the studies, test scores were at least slightly higher for the students engaged in the computer-based programs, and the median effect for all 16 studies was 0.38 standard deviations. Although technology used to increase fluency of basic fact recall is common, technology-based intervention programs can be effectively used for much more. For example, software programs may be particularly suited to delivering instruction in multiple levels of abstraction, such as moving from more concrete representations such as virtual manipulatives to abstract numerical representations, working with interactive simulations, and providing a variety of strategically delivered examples and nonexamples for conceptual learning. Technology can also be used in a supportive role, as when calculators are used to assist students in problem-solving when they may not be fluent in mathematics fact recall (Allsopp et al. 2010).
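For reference, the effect sizes cited above are standardized mean differences (Cohen's d): the difference between two group means divided by a pooled standard deviation. A minimal computation, with illustrative scores invented solely for this example, looks like this:

```python
import statistics

# Reference sketch of Cohen's d, the effect-size metric behind figures
# like "0.38 standard deviations". Scores below are illustrative only.

def cohens_d(treatment, control):
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

cbi_scores  = [82, 75, 90, 68, 88, 79]   # hypothetical computer-based group
ctrl_scores = [74, 70, 85, 62, 80, 73]   # hypothetical comparison group
print(round(cohens_d(cbi_scores, ctrl_scores), 2))
```

By convention, values around 0.2 are considered small and values around 0.5 moderate, which is why Kulik's median of 0.38 is read as educationally meaningful.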
Recommendations
In the primary grades, students who struggle with mathematics often have difficulty with number combinations (particularly in automatic retrieval of mathematical facts) and story problems. Also, although procedural instruction in mathematics is very common, conceptual instruction is often neglected (Fuchs et al. 2008). In the What Works Clearinghouse report Assisting Students Struggling with Mathematics: Response to Intervention (RTI) for Elementary and Middle Schools, eight recommendations were identified for effective RTI mathematics interventions:
1. Screen all students to identify those at risk for potential mathematics difficulties and provide interventions to students identified as at risk.
2. Focus on whole numbers in kindergarten through grade 5 and on rational numbers in grades 4–8.
3. Provide explicit and systematic instruction. This includes providing models of proficient problem-solving, verbalization of thought processes, guided practice, corrective feedback, and frequent cumulative review.
4. Include instruction on solving word problems based on common underlying structures.
5. Include opportunities for students to work with visual representations of mathematical ideas.
6. Devote about 10 minutes in each session to building fluent retrieval of basic arithmetic facts.
7. Monitor the progress of students receiving supplemental instruction and other students who are at risk.
8. Include motivational strategies in tier 2 and tier 3 interventions.
Each of these recommendations can be supported with technology.
Screening and Intervention
Several programs are available for mathematical assessment and intervention, and software increasingly integrates the two. For example, Wireless Generation®’s mCLASS® Mathematics formative assessment tool offers screening, diagnostic interviews, and progress monitoring, as well as guidance on instructional interventions based on assessment results. Dreambox® Learning offers continuously adaptive instruction by tracking each mouse click within the program, using the data to identify student strategies, and adjusting instruction accordingly. As educational data mining and learning analytics continue to advance and grow more robust, continuous assessment and adaptive instruction will likely become increasingly common in educational technology products (Bienkowski et al. 2012).
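The adaptive logic such programs rely on can be illustrated with a minimal sketch (all function names and thresholds here are hypothetical, not drawn from any particular product): difficulty steps up or down based on accuracy over a sliding window of recent responses.

```python
def next_difficulty(current, recent_correct, window=5, step_up=0.8, step_down=0.5):
    """Adjust an integer difficulty level from accuracy over a sliding window.

    current: the learner's present difficulty level (1 = easiest)
    recent_correct: booleans for the learner's most recent responses
    """
    recent = list(recent_correct)[-window:]
    if len(recent) < window:
        return current                      # too little data yet; hold steady
    accuracy = sum(recent) / len(recent)
    if accuracy >= step_up:
        return current + 1                  # criterion met; advance
    if accuracy <= step_down:
        return max(1, current - 1)          # struggling; step back for review
    return current                          # in range; keep practicing here

# Four of the last five items correct (80 %) moves the learner up a level
level = next_difficulty(3, [True, True, False, True, True])  # -> 4
```

Real products, of course, use far richer evidence (strategy use, response latency, error types) than a simple accuracy window, but the decision structure is the same.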
Explicit and Systematic Instruction
Systematic instruction refers to the particular skills that are taught and the order in which they are taught, while explicit instruction refers to how those skills are taught (Kupzyk et al. 2012). Explicit and systematic instruction may include instructional techniques such as modeling, including think-aloud models of problem-solving, guided practice, corrective feedback, and frequent review (Gersten et al. 2009). Systematic instruction teaches component skills before those component skills are used in a more complex skill, building knowledge and skills in a logical order (Kupzyk et al. 2012). The sequence of instruction can also help to minimize learning challenges. For example, Fuchs et al. (2008) described an instructional sequence that began with what students already knew or could easily do for early success and then introduced new concepts and strategies as they became necessary and broadly applicable.
Several challenges exist to providing explicit and systematic instruction, and these are challenges that educational technology can help meet. For example, many instructional materials offer only a few models of problem-solving (Jitendra et al. 1996) and teachers or interventionists may not have the expertise in the subject matter necessary to provide additional models or talk through different strategies that could be used for problem-solving (Ma 1999). In addition, materials may lack appropriate levels of practice and review, particularly for students who are struggling (Gersten et al. 2009).
CBI programs developed from a systematic and thorough analysis of the content, and able to analyze student errors, can support teachers and interventionists by providing clear and varied models, carefully juxtaposed examples and nonexamples, and think-aloud models of strategies, and by assessing student strategy use. Finally, programs built on a mastery framework can provide practice and review based on learner performance, allowing students who have mastered the skills and strategies to move on while giving more practice and review opportunities to those who need them. It is important for companies developing educational software to take these instructional elements into account in the design of the program, and for educators to evaluate potential software programs for these elements.
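The mastery framework can be sketched simply (names and thresholds hypothetical): a learner advances through an instructional sequence only after meeting a mastery criterion on each component skill, so practice is allocated where it is still needed.

```python
def has_mastered(responses, criterion=0.9, min_attempts=10):
    """True once a skill meets the mastery criterion.

    responses: booleans for all attempts on one skill
    criterion: required proportion correct before moving on
    min_attempts: attempts required before mastery can be declared
    """
    if len(responses) < min_attempts:
        return False
    return sum(responses) / len(responses) >= criterion

def next_skill(progress, sequence):
    """Return the first skill in the instructional sequence not yet mastered."""
    for skill in sequence:
        if not has_mastered(progress.get(skill, [])):
            return skill
    return None  # everything mastered; move on to cumulative review

# A learner fluent in counting but at 80 % on addition keeps practicing addition
progress = {"counting": [True] * 10, "addition": [True] * 8 + [False] * 2}
skill = next_skill(progress, ["counting", "addition", "subtraction"])  # -> "addition"
```

The ordering of `sequence` is where the systematic analysis of content lives: component skills must precede the complex skills that depend on them.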
Instruction on Solving Word Problems Based on Common Underlying Structures
Conceptual instruction is an important but often neglected aspect of mathematics instruction. Instruction should teach the underlying structure of different problem types, how to categorize problems based on their structure, and how to solve problems with a particular structure. However, instructional materials may not arrange instruction in a way that allows for classification of problem types, and more complex problems are more difficult to classify (Gersten et al. 2009).
Although studies have shown the importance of conceptual instruction for word problems (see for example, Jitendra et al. 1998; Xin et al. 2005), conceptual instruction is not limited to word problems. For example, in her study comparing mathematics teachers’ content knowledge in China and the USA, Ma (1999) described conceptual foundations of elementary mathematics. A purely procedural approach to subtracting two-digit numbers with regrouping would teach only the steps themselves, such as “borrowing” a ten from the tens column, adding ten ones to the ones column, and then subtracting. A conceptual approach, however, would teach fundamental concepts and principles that underlie the reasoning behind this algorithm, such as the meaning of place value and composing (and decomposing) a higher value unit.
Just as in providing explicit and systematic instruction, CBI programs can be developed to include instruction in a problem’s underlying structure and practice categorizing and solving problems with different structures. In addition, programs can be developed to teach fundamental concepts and principles of mathematics and integrate conceptual and procedural instruction. However, it is important for companies developing programs to include this type of instruction and important for educators to look for these elements when evaluating educational software.
Opportunities for Students to Work with Visual Representations of Mathematical Ideas
Mathematical ideas can be represented in a number of ways. Instructional programs should include work with concrete manipulatives, visual representations, and abstract symbols, and use consistent language across representations. However, working with different representations can be difficult in a classroom, and some interventionists may not have the content expertise to fully understand different representations, particularly for negative numbers, fractions, and proportional reasoning (Gersten et al. 2009). Ma (1999), for example, reported that most teachers she interviewed said they would use manipulatives in teaching subtraction with regrouping. When teachers did not have strong content knowledge, however, their use of manipulatives was not directly related to the concept and therefore was not useful in teaching the skill. For example, two teachers suggested using counters such as beans in learning subtraction with regrouping: if the problem was 23−17, they would start with 23 beans and have students take away 17 beans. Although taking beans away illustrates subtraction, the students already understood subtraction; the goal of the lesson was to teach the process of regrouping. Using manipulatives in this way does not help students understand decomposing a higher-value unit in a base-ten system, which is the key concept underlying regrouping. In fact, showing a child 23 beans and then asking the child to remove 17 makes no sense as a strategy for teaching subtraction with regrouping, because it makes regrouping unnecessary altogether.
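The conceptual step Ma describes, decomposing one ten into ten ones rather than mechanically "borrowing," can be made explicit in a short worked layout of the same 23−17 problem:

```latex
\begin{align*}
23 - 17 &= (10 + 13) - (10 + 7) && \text{decompose 23: one ten becomes ten ones} \\
        &= (10 - 10) + (13 - 7) && \text{subtract tens and ones separately} \\
        &= 0 + 6 = 6
\end{align*}
```

A manipulative activity that mirrors these steps, trading one ten-rod for ten unit cubes before subtracting, targets the regrouping concept directly, where simply removing 17 beans from 23 does not.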
The visual and interactive nature of computer-based programs affords movement among representations and may be able to offer this type of instruction more easily and systematically than a teacher or interventionist. The National Library of Virtual Manipulatives (http://nlvm.usu.edu/en/nav/vlibrary.html) offers a variety of web-based virtual manipulatives in the form of Java applets, and virtual manipulatives are included in more extensive mathematics programs such as Dreambox® Learning.
Fluency Building
Computer-based programs such as MathBlaster® have been offering opportunities for practice to build fluency for decades. Practice opportunities that are “gamified” can help to increase student motivation to practice. ExploreLearning’s Reflex Math, for example, offers adaptive and individualized fluency practice with fact families in a game-playing context.
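Fluency practice of this kind is typically scored as correct responses per minute. A minimal sketch (function name hypothetical) of converting a timed session into per-minute rates:

```python
def fluency_rates(correct, errors, seconds):
    """Convert a timed practice session into per-minute rates,
    the metric fluency-building programs typically track."""
    if seconds <= 0:
        raise ValueError("session must have a positive duration")
    # Scale both counts to a one-minute basis
    return correct * 60.0 / seconds, errors * 60.0 / seconds

# A 90-second session with 30 correct facts and 6 errors
rates = fluency_rates(30, 6, 90)  # -> (20.0, 4.0)
```

Tracking the error rate alongside the correct rate matters: speed gains that come with rising errors signal guessing rather than fluency.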
Limitations and Concerns of Technology-Based Interventions
Technology faces the same array of limitations that has plagued almost every innovation or major change impacting education. Factors such as an understanding of the purpose of the change, properly trained staff, leadership and support for the change, and ongoing funding are fundamental to any successful change in schools. Ertmer (1999) grouped barriers to implementation into two broad categories: first-order barriers, having to do with infrastructure such as access, time, training, support, and resources; and second-order barriers, having more to do with the culture of the school and the individuals within it, such as attitudes, beliefs, practices, history of change, and resistance to change. These barriers and limitations, as well as potential solutions, are not unique to an RTI technology implementation and have been comprehensively addressed elsewhere (see Barron et al. 2003; Earle 2002; Gülbahar 2007; Hope 1997; Leggett and Persichitte 1998; Lumley and Bailey 1993; Sheingold and Hadley 1990). Interested readers are encouraged to consult these resources.
Technology implementations may face additional barriers. Teaching (and learning) is viewed by many as a human, interpersonal endeavor, requiring attention to the quality of the interaction and its attitudinal effects (O’Neal 1991). Technology is prevalent in every aspect of life, especially among youth, perhaps making learners more at ease with its use than teachers. Because technology evolves so rapidly, knowledge and skills learned in one year may be obsolete in three to seven years, the amount of time research has shown it takes for an “implementation” to take hold and reap sustainable rewards (Fixsen et al. 2005). In addition to being facile with current technologies, education policy-makers, curriculum specialists, technology specialists, and school administrators, as well as those who develop technology products and services, must “stay ahead of the curve” in order to properly prepare for changes ahead.
Perhaps the greatest overarching limitation to the successful use of technology in the classroom is educators’ ability to find and effectively use technology that meets their teaching or their students’ learning needs. The number of apps, tools, and resource sites, as well as commercial or enterprise technology programs from established educational publishers, is huge and continues to grow. In just under 2 years, from September 2009 to July 2011, the number of free apps in the Education category of Apple’s® iTunes® Store grew 369 %, from 866 to 3202, and paid educational apps grew 202 %, from 4453 to 9013 (Gammon 2011). In 2012, almost three-quarters (72 %) of the top-selling iTunes apps targeted preschool or elementary-aged children. Within the highly saturated games category, 32 % of apps stated an intended learning objective or made a claim of educational benefit (Shuler 2012). While comparable data are not readily available for the Android/Google app market, a similar growth trend can be expected. Teachers, curriculum specialists, technology specialists, and administrators must become “educated consumers” in the technology marketplace. With so many tools and programs available, sifting through the myriad resources is a daunting task. Rubrics, guides, or checklists of necessary or notable characteristics of good technology can be helpful in determining what to use, when, and with whom. Such tools can help educators judge a technology’s degree of:
- Relevance (Is there a strong connection between the learning goals or needs and the purpose of the technology?)
- Appropriateness (Does the technology fit the age, abilities, and interest level of the learner, or the educator?)
- Feedback (Does the technology let the user know when they are doing well, or provide additional help when needed?)
- Customization (Does the technology offer flexibility to alter content and settings to meet user needs?)
- Personalization (Does the technology adapt to learner needs and interests?)
- Engagement (Does the technology increase efficient instructional time or capture learner interest?)
- Critical thinking (Does the technology encourage or support higher-order thinking skills, including evaluating, analyzing, or creating?)
- Communication (Does the technology support the sharing of information or data?)
Appendices A–F provide examples of evaluation rubrics that may help educators determine the best use of certain technologies or tools. Rubrics may focus on different issues and should be carefully selected based on the context of the technology use. Some are relatively simple, like the “yes/no” checklist used in the Critical Evaluation of an iPad/iPod App (Appendix A), while others present criteria aligned to the Common Core Standards (e.g., Appendix D, Mobile Application Selection Rubric). ievaluate Apps for Special Needs, as the name implies, provides specific criteria for selecting apps to use with students with disabilities (see Appendix E). Evaluating the technology-learning environment is also important, as shown in the Arizona Technology Integration Matrix, designed to help teachers assess their own level of technology integration across learning environments (see Appendix F).
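As a simple illustration of how such a checklist can be tallied (the criterion names below are taken from the list above; any real rubric, such as the ones in the appendices, would define its own items and possibly weights):

```python
# Hypothetical criterion names drawn from the chapter's list; an actual
# rubric (e.g., Appendix A's yes/no checklist) supplies its own items.
CRITERIA = (
    "relevance", "appropriateness", "feedback", "customization",
    "personalization", "engagement", "critical_thinking", "communication",
)

def checklist_score(ratings):
    """Proportion of rubric criteria a technology meets.

    ratings: maps criterion name -> True/False; missing criteria count as unmet.
    """
    met = sum(1 for criterion in CRITERIA if ratings.get(criterion, False))
    return met / len(CRITERIA)

# An app strong only on relevance and feedback meets 2 of 8 criteria
score = checklist_score({"relevance": True, "feedback": True})  # -> 0.25
```

The value of such a tally is less the number itself than the discipline of checking every criterion before adopting a tool.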
Issues for Future Research
Technology is constantly evolving, with new and innovative uses occurring all the time. The rate of technology change is accelerating exponentially (see Kurzweil 1999) across all areas of human endeavor, including education. Because of this rapid change, researchers have described their work with regard to educational technology as akin to chasing a “moving target” (Valdez et al. 1999, p. 1). Even as work is being conducted on the effects or implications of any particular educational technology, that technology itself is changing. While this evolving nature of educational technologies may hamper efforts to predict the success of, and establish guidelines for, subsequent educational practices (Leu 2000), many of the issues related to technology use remain constant. Factors such as the need for properly trained staff, sufficient equipment, and ongoing funding are essential for any successful integration of technology to increase learning.
Teachers, Technology, and Schooling Systems
One of the most pivotal factors in the successful implementation of educational technology is the teacher. Technology integration has moved beyond a handful of barely used, outdated computers in the back of the classroom, or once- or twice-weekly class-wide forays into the computer lab, to almost full-time integration of hardware, software, and Internet access across all activities throughout the school day. When one considers that less than two decades ago barely half of our nation’s teachers had Internet access at home (Becker et al. 1999), the technical savvy required of today’s teachers might seem insurmountable. Expertise must surpass a basic understanding of hardware and software and extend to knowledge of the purposes of various software and productivity tools and how to use them, while following curriculum standards and adopting or maintaining a learner-centered perspective. Sang et al. (2010) found that few teachers feel confident and competent in the goals and use of computer-based education in their classrooms. Even in classroom environments where technology is frequently used, researchers have found more emphasis on giving students access to information outside the classroom or on increasing student motivation, and less focus on how computers could improve specific academic achievement (Jostens Learning Corporation 1997) or be integrated with the curriculum or learning standards (Niess 1991; Trotter 1997).
The US Office of Technology Assessment reports that one of the greatest roadblocks to integrating technology into a school’s curriculum is the lack of teacher training, finding that most school districts spend less than 15 % of their technology budgets on teacher training and development (US Congress, Office of Technology Assessment 1995). Ensuring understanding of the pedagogical implications of technology integration is essential (Gilbert 1995; Watts and Hammons 2002). Instructional or educational technology should not be viewed as an add-on to teaching but as “integral to teaching practice” (Chism 2004, p. 43; see also Bates and Poole 2003; Grasha and Yanbarger-Hicks 2000) and critical to quality implementation (Shields and Behrman 2000). To be educated consumers of technology products, teachers must learn to select technologies that improve “the quality of teaching and learning [and] student motivation” (Gilbert 1996, p. 12), and research suggests that teachers who receive professional development focused on integrating technology into teaching may use it more effectively (Penuel et al. 2007).
In fact, all members of an educational ecosystem would benefit from an increased understanding of effective uses of technology. “Digital media literacy continues its rise in importance as a key skill in every discipline and profession” (Johnson et al. 2011, p. 3). All educators need to be comfortable evaluating technology resources against their students’ and their own needs. They should be able to evaluate content, determining whether it is culturally unbiased, current, appropriate to curriculum standards, and respectful of student interest. Perhaps most important is understanding the educational goal before a technology is selected or implemented, including a clear plan for seamlessly integrating the software into lesson strategies (Roblyer and Edwards 2000). As noted by Fullan (2000):
Technology generates a glut of information, but it has no particular pedagogical wisdom—especially regarding new breakthroughs in cognitive science about how learners must construct their own meaning for deep understanding to occur. This means that teachers must become experts in pedagogical design. It also means that teachers must use the powers of technology, both in the classroom and in sharing with other teachers what they are learning. (p. 582)
Educational Technology Development
Educational interventions, and the companies that design and develop them, should be held to a high standard of quality in terms of student learning and engagement. As previously mentioned, good instructional design requires a systematic design process, one that includes iterative design and development with formative evaluation (Tiemann and Markle 1990; see also Dick and Carey 1996; Smith and Ragan 1999; Twyman et al. 2004). How a product is developed, whether it is based on best practices, evaluated for effectiveness after development (summative evaluation), or empirically tested during the design and development process (formative evaluation), should be considered when evaluating evidence of effectiveness (Twyman and Sota 2008).
Why Technology? What Technology Should Do
By harnessing the power of digital and hardware advances merged with new knowledge and processes, we can further advance student learning and improve school outcomes. Using technology to assist teaching and learning may have started with stone carvings, papyrus, and the quill pen, progressed through the ages with the use of pencils, chalkboards, slide projectors, and TVs, and is now accelerating through the use of personal computers, laptops, tablets, and the power of the Internet and applications that leverage its reach and scale.
As noted, there is a strong literature base suggesting technology can improve instruction. However, the authors of this chapter and others (see Earle 1994; Fullan 2000; Rumph et al. 2007; Skinner 1968) suggest that it is not the “technology” (in this case hardware or software) itself that affects instruction; it is the philosophical underpinnings on which it is based and how it is used that influence its effectiveness. As noted by Wager (1992), “the educational technology that can make the biggest difference to schools and students is not the hardware, but the process of designing effective instruction” (p. 454). At the very least, there is a fairly robust list of teaching and learning strategies with a strong evidence base, across populations and subject matter, that have been shown to reliably improve learner outcomes (see Embry and Biglan 2008; Greer 2002; Hattie 2008; Lovitt 1994; Wolery et al. 1988). Any meaningful use of technology must support, augment, make easier, or make possible the many things that we know empirically make a difference in children’s lives. Any innovative use of technology must enable us to do important things that were not possible before.
Implications for Practice
At the District Level
Carefully consider, design, implement, and frequently review a district-wide technology plan:
- Improve organizational effectiveness by offering district-wide coordination and training to improve communication, planning, and record keeping.
- Purchase technology that supports greater efficiencies (i.e., doing more with less).
- Provide adequate equipment with plans for necessary upgrades.
- Plan for and support the integration of tools across sites.
- Plan for and support a common database.
- Plan for district-level interim assessments that support routine evaluation of instructional programs and that provide credible, actionable data linked to relevant instructional resources.
- Align staff development with the district/school’s technology goals.
At the School Level
- Encourage/support collaborative meetings by grade level or subject matter to discuss planning and outcomes of technology-based instruction.
- Provide training and support in using student data and data systems to make instructional decisions.
- Conduct test runs of technology applications before widespread staff or student use (to identify any roadblocks or problems and avoid wasting valuable learning time).
- Plan for ongoing staff training to make effective use of the technology available.
- Provide professional development opportunities that are individualized to the teacher’s level of expertise and experience and that focus on integrating technology into instruction.
- Identify model classrooms or peer mentors to allow other educators to see how various technologies can be integrated in teaching and learning.
- Provide peer coaching and mentor modeling to help the transition from knowing about (workshop information) to knowing how (classroom application and practice).
- Provide ongoing teacher support and opportunities for teachers to practice what they have learned (or to continue their learning).
At the Classroom/Teacher Level
- Use online tools that provide frequent or ongoing assessment to quickly understand what students know.
- Use frequent or ongoing measurement to tailor instruction to meet individual learning needs.
- Use active student response measurement systems to:
  - Check for real-time student understanding of content being taught
  - Display responses of the group and occasion discussion and reflection
  - Gather formative data to guide instruction
  - Save time in administering and scoring quizzes
- Incorporate individualized adaptive instructional programs as part of whole-class instruction as well as for intervention.
- Use supplemental programs to provide additional practice opportunities.
- Consider using evidence-based educational games to increase student motivation and engagement.
- Select and use educational technology products for their affordances in terms of student interaction, engagement, and assessment (for example, use an interactive whiteboard to increase student interaction and not simply to present information).
At the Parent and Community Level
- Request openness and accountability, with verification of student benefit from expenditures.
At the Teacher/Administrator Pre-Service Level
- Emphasize the integration of technology into teaching, including offering courses on digital media pedagogy and literacy.
Appendix A. Critical Evaluation of an iPad/iPod App: Kathy Schrock
Appendix B. Educational App Evaluation Rubric: Tony Vincent
Appendix C. Educational App Evaluation Checklist: Tony Vincent
Appendix D. Mobile Application Selection Rubric: eSkillsLearning
Appendix E. ievaluate Apps for Special Needs: Jeannette Van Houten
Appendix F. AZ Technology Integration Matrix
References
AL-Bataineh, A., & Brooks, L. (2003). Challenges, advantages, and disadvantages of instructional technology in the community college classroom. Community College Journal of Research and Practice, 27, 473–484.
Alexiou-Ray, J., Wilson, E., Wright, V., & Peirano, A. M. (2003). Changing instructional practice: The impact of technology integration on students, parents, and school personnel. Electronic Journal for the Integration of Technology in Education. http://ejite.isu.edu/Volume2No2/AlexRay.htm.
Allsopp, D. H., McHatton, P. A., & Farmer, J. L. (2010). Technology, mathematics PS/RTI, and students with LD: What do we know, what have we tried, and what can we do to improve outcomes now and in the future? Learning Disability Quarterly, 33, 273–288.
Bardwell, R. (1981). Feedback: How does it function? Journal of Experimental Education, 50, 4–9.
Barron, A. E., Kemker, K., Harmes, C., & Kalaydjian, K. (2003). Large-scale research study on technology in K-12 schools: Technology integration as it relates to the national technology standards. Journal of Research on Technology in Education, 35(4), 489–507.
Bates, A. W., & Poole, G. (2003). Effective teaching with technology in higher education. San Francisco: Jossey-Bass.
Becker, H. J., Ravitz, J. L., & Wong, Y. T. (1999). Teacher and teacher-directed student use of computers and software. Teaching, learning, and computing: 1998 National Survey. Report #3.
Bienkowski, M., Feng, M., & Means, B. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. Washington, DC: U.S. Department of Education, Office of Educational Technology.
Blanchard, J., Stock, W., & Marshall, J. (1999). Meta-analysis of research on a multimedia elementary school curriculum using personal and video-game computers. Perceptual and Motor Skills, 88, 329–336.
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment, 1(2), 1–12.
Burns, M. K. (n.d.). Using technology to enhance RtI implementation. http://www.rtinetwork.org/getstarted/implement/using-technology-to-enhance-rti-implementation.
Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analysis of response-to-intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment, 23, 381–394.
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. CBE—Life Sciences Education, 6(1), 9–20.
Califee, R., Chambliss, M., & Beretz, M. (1991). Organizing for comprehension and composition. In W. Ellis (Ed.), All language and the creation of literacy (pp. 79–93). Baltimore: International Dyslexia Association.
Chism, N. (2004). Using a framework to engage faculty in instructional technologies. Educause Quarterly, 27(2), 39–45.
Christensen, R. (2002). Effects of technology integration education on the attitudes of teachers and students. Journal of Research on Technology in Education, 34(4), 411–433.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 55(4), 445–459.
Clark, R. E. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-analyses. Educational Communications and Technology Journal, 33(4), 249–262.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.
Clark, R. E., & Salomon, G. (1986). Media in teaching. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 464–478). New York: Macmillan.
Dick, W., & Carey, L. (1996). The systematic design of instruction (4th ed.). New York: Harper Collins Publishing.
Earle, R. S. (1994). Instructional design and the classroom teacher: Looking back and moving ahead. Educational Technology, 34(3), 6–10.
Earle, R. S. (2002). The integration of instructional technology into public education: Promises and challenges. ET Magazine, 42(1), 5–13.
Embry D. D., & Biglan A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75–113.
Ertmer, P. (1999). Addressing first- and second-order barriers to change: Strategies for technology implementation. Educational Technology Research and Development, 47(4), 47–61.
Fadel, C., & Lemke, C. (2006). Technology in schools: What the research says? http://www.cisco.com/web/strategy/docs/education/tech_in_schools_what_research_says.pdf.
Fixsen, D., Naoom, S. F., Blase, D. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).
Flecknoe, M. (2002). How can ICT help us to improve education? Innovations in Education and Teaching International, 39(4), 271–279.
Foorman, B. R. (2007). Primary prevention in classroom reading instruction. Teaching Exceptional Children, 39, 24–30.
Fouts, J. T. (2000). Research on computers and education: Past, present, and future. Seattle: Bill and Melinda Gates Foundation.
Fox, E. J. (2004). The personalized system of instruction: A flexible and effective approach to mastery learning. In D. J. Moran & R. W. Malott (Eds.), Evidence based educational methods (pp. 201–221). San Diego: Elsevier.
Fuchs, L. S., & Fuchs, D. (2009). On the importance of a unified model of Response-to-intervention. Child Development Perspectives, 3(1), 41–43.
Fuchs, L. S., & Vaughn, S. (2012). Responsiveness-to-intervention: A decade later. Journal of Learning Disabilities, 45(3), 195–203.
Fuchs, L. S., Fuchs, D., Powell, S. R., Seethaler, P. M., Cirino, P. T., & Fletcher, J. M. (2008). Intensive intervention for students with mathematics disabilities: Seven principles of effective practice. Learning Disability Quarterly, 31, 79–92.
Fuchs, L. S., Fuchs, D., & Compton, D. L. (2012). The early prevention of mathematics difficulty: Its power and limitations. Journal of Learning Disabilities, 45(3), 257–269.
Fullan, M. (2000). The three stories of education reform. Phi Delta Kappan, 81, 581–584. Copyright (c) 2000.
Gammon, R. (2011, July 15). itunes app store educational apps 2011 vs. 2009. http://lh-llc.com/itunes-edu-apps-2011-2009.
Gansle, K. A., & Noell, G. H. (2007). The fundamental role of intervention implementation in assessing resistance to intervention. In S. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 244–254). New York: Springer.
Gardner, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63–71.
Gerber, M. M. (2005). Teachers are still the test: Limitations of response to instruction strategies for identifying children with learning disabilities. Journal of Learning Disabilities, 38, 516–524.
Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star, J. R., & Witzel, B. (2009). Assisting students struggling with mathematics: Response to Intervention (RTI) for elementary and middle schools (NCEE 2009-4060). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. http://ies.ed.gov/ncee/wwc/publications/practiceguides/.
Gilbert, S. (1995). Technology and the changing academy. Change, 27(5), 58–61.
Gilbert, S. (1996). Making the most of a slow revolution. Change, 28(2), 10–23.
Grasha, A., & Yangarber-Hicks, N. (2000). Integrating teaching styles and learning styles with instructional technology. College Teaching, 48(1), 2–10.
Greer, R. D. (2002). Designing teaching strategies: An applied behavior analysis systems approach. New York: Academic.
Gülbahar, Y. (2007). Technology planning: A roadmap to successful technology integration in schools. Computers & Education, 49(4), 943–956.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.
Hope, W. C. (1997). Why technology has not realized its potential in schools. American Secondary Education, 25(4), 2–9.
Jitendra, A., Carnine, D., & Silbert, J. (1996). Descriptive analysis of fifth grade division instructions in basal mathematics programs: Violations of pedagogy. Journal of Behavioral Education, 6(4), 381–403.
Jitendra, A. K., Griffin, C. C., McGoey, K., Gardill, M. C., Bhat, P., & Riley, T. (1998). Effects of mathematical word problem solving by students at risk or with mild disabilities. The Journal of Educational Research, 91(6), 345–355.
Johnson, L., Smith, R., Willis, H., Levine, A., & Haywood, K. (2011). The 2011 Horizon Report. Austin: The New Media Consortium.
Jostens Learning Corporation. (1997). Survey analysis by global strategy group. San Diego: Jostens Learning Corporation.
Kamil, M. L., & Lane, D. (1998). Researching the relationship between technology and literacy: An agenda for the 21st century. In D. Reinking, M. C. McKenna, L. D. Labbo, & R. D. Kieffer (Eds.), Handbook of literacy and technology: Transformations in a post-typographic world (pp. 323–341). Mahwah: Lawrence Erlbaum.
Kamil, M. L., Intrator, S. M., & Kim, H. S. (2000). The effects of other technologies on literacy and literacy learning. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research: Vol. III (pp. 771–788). Mahwah: Lawrence Erlbaum.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education, 53(3), 819–827.
Keller, F. S. (1968). “Good-bye, teacher …” Journal of Applied Behavior Analysis, 1, 79–89.
Kozma, R. B. (1991). Learning with media. Review of Educational Research, 61(2), 179–212.
Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.
Kulik, J. A. (2003). Effects of using instructional technology in elementary and secondary schools: What controlled evaluation studies say. SRI Project Number P10446.001. Arlington: SRI International.
Kulik, C., & Kulik, J. (1991). Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75–94.
Kulik, C., Kulik, J., & Bangert-Drowns, R. (1990). Effectiveness of mastery learning programs: A meta-analysis. Review of Educational Research, 60, 265–299.
Kupzyk, S., Daly, E. J., Ihlo, T., & Young, N. D. (2012). Modifying instruction within tiers in multitiered intervention programs. Psychology in the Schools, 49(3), 219–230.
Kurzweil, R. (1999). The age of spiritual machines. New York: Penguin Books.
Layng, T. V. J., Twyman, J. S., & Stikeleather, G. (2004). Selected for success: How Headsprout Reading Basics™ teaches beginning reading. In D. J. Moran & R. Malott (Eds.), Evidence-based educational methods. St. Louis: Elsevier Science/Academic.
Leggett, W. P., & Persichitte, K. A. (1998). Blood, sweat, and TEARS: 50 years of technology implementation obstacles. Tech Trends, 43(3), 33–36.
Lembke, E. S., Hampton, D., & Beyers, S. J. (2012). Response to intervention in mathematics: Critical elements. Psychology in the Schools, 49(3), 257–272.
Leong, C. K. (1992). Enhancing reading comprehension with text-to-speech (DECtalk) computer system. Reading and Writing: An Interdisciplinary Journal, 4, 205–217.
Leu, D. J. (2000). Literacy and technology: Deictic consequences for literacy education in an information age. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research: Vol. III (pp. 743–770). Mahwah: Lawrence Erlbaum.
Lovitt, T. C. (1994). Tactics for teaching (2nd ed.). Englewood Cliffs: Prentice-Hall.
Lumley, D., & Bailey, G. D. (1993). Planning for technology: A guidebook for school administrators. New York: Scholastic.
Lundberg, I., & Olofsson, A. (1993). Can computer speech support reading comprehension? Computers in Human Behavior, 9, 282–293.
Ma, L. (1999). Knowing and teaching elementary mathematics: Teachers’ understanding of fundamental mathematics in China and the United States. Mahwah: Lawrence Erlbaum.
McIntire, T. (2002). The administrator’s guide to data-driven decision making. Technology & Learning, 22(11), 18–33.
Montague, M. (1992). The effects of cognitive and metacognitive strategy instruction on the mathematical problem solving of middle school students with learning disabilities. Journal of Learning Disabilities, 25(4), 230–248.
Mory, E. H. (1992). The use of informational feedback in instruction: Implications for future research. Educational Technology Research and Development, 40(3), 5–20.
Narayan, J. S., Heward, W. L., Gardner, R., Courson, F. H., & Omness, C. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483–490.
National Center on Response to Intervention. (n.d.). What is RTI? http://www.rti4success.org/whatisrti.
National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read (NIH Publication No. 00-4654). Bethesda: National Institute of Child Health and Human Development, National Institutes of Health.
Niemiec, R., Samson, G., Weinstein, T., & Walberg, H. J. (1987). The effects of computer based instruction in elementary schools: A quantitative synthesis [Abstract]. Journal of Research on Computing in Education, 20(2), 85–103.
Niemiec, R. P., Sikorski, C., & Walberg, H. J. (1996). Learner-control effects: A review of reviews and a meta-analysis. Journal of Educational Computing Research, 15(2), 157–174.
Niess, M. L. (1991). Computer-using teachers in a new decade. Education and Computing, 7(3–4), 151–156.
O’Neal, J. B., Jr. (1991). Proceedings from frontiers in education twenty-first annual conference: Engineering education in a new world order. Raleigh: North Carolina State University.
Penuel, W. R., Boscardin, C. K., Masyn, K., & Crawford, V. M. (2007). Teaching with student response systems in primary and secondary education settings: A survey study. Educational Technology Research & Development, 55, 315–346.
Polya, G. (2004). How to solve it. Princeton: Princeton University Press.
Poole, D. (2012). The impact of anonymous and assigned use of student response systems on student achievement. Journal of Interactive Learning Research, 23(2), 101–112.
Redecker, C., Ala-Mutka, K., & Punie, Y. (2010). Learning 2.0––The impact of social media on learning in Europe. Policy brief. JRC Scientific and Technical Report. EUR JRC56958 EN. http://www.ict-21.ch/com-ict/IMG/pdf/learning-2.0-EU-17pages-JRC56958.pdf.
Reiser, R. A. (2012). What field did you say you were in? Defining and naming our field. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed.). Boston: Pearson Education.
Rice, J. W. (2007). Assessing higher order thinking in video games. Journal of Technology and Teacher Education, 15(1), 87–100.
Rittle-Johnson, B., & Star, J. R. (2009). Compared with what? The effects of different comparisons on conceptual knowledge and procedural flexibility for equation solving. Journal of Educational Psychology, 101(3), 529–544.
Roblyer, M. D. (1989). The impact of microcomputer-based instruction on teaching and learning. A review of recent research. Washington, DC: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED346082).
Roblyer, M. D., & Edwards, J. (2000). Integrating educational technology into teaching and learning (2nd ed.). Upper Saddle River: Prentice-Hall.
Roblyer, M. D., Castine, W. H., & King, F. J. (1988). Assessing the impact of computer-based instruction: A review of recent research. Computers in the Schools, 5, 117–149.
Rosenshine, B. V., & Berliner, D. C. (1978). Academic engaged time. British Journal of Teacher Education, 4(1), 3–16.
Rumph, R., Ninness, C., McCuller, G., Holland, J., Ward, T., & Wilbourn, T. (2007). Stimulus change: Reinforcer or punisher? Reply to Hursh. Behavior and Social Issues, 16(1), 47–49.
Sang, G., Valcke, M., Braak, J. V., & Tondeur, J. (2010). Student teachers’ thinking processes and ICT integration: Predictors of prospective teaching behaviors with educational technology. Computers & Education, 54(1), 103–112.
Shapiro, E. (n.d.). Tiered instruction and intervention in a response-to-intervention model. RTI Action Network. http://www.rtinetwork.org/essential/tieredinstruction/tiered-instruction-and-intervention-rti-model.
Sheingold, K., & Hadley, M. (1990). Accomplished teachers: Integrating computers into classroom practice. New York: Bank Street College of Education, Center for Technology in Education.
Shields, M. K., & Behrman, R. E. (2000). Children and computer technology: Analysis and recommendations. The Future of Children, Children and Computer Technology, 10(2), 1–27.
Shlechter, T. M. (Ed.). (1991). Problems and promises of computer-based training. Norwood: Ablex Publishing.
Shuler, C. (2012). iLearn II: Addendum, an analysis of the games category of the iTunes app store. New York: The Joan Ganz Cooney Center at Sesame Workshop.
Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Siegel, J. (1994). No computer know how? Electronic Learning, 13(5), 58.
Simmons, D. C., & Kame’enui, E. J. (2003). A consumer’s guide to evaluating a core reading program Grades K–3: A critical elements analysis. http://iris.peabody.vanderbilt.edu/rti03_reading/cons_guide_instr.pdf.
Skinner, B. F. (1968). The technology of teaching. New York: Appleton-Century-Crofts.
Smith, P., & Ragan, T. (1999). Instructional design (2nd ed.). New York: Wiley.
Spector, J. M. (2010). An overview of progress and problems in educational technology. Interactive Educational Multimedia, 3, 27–37.
Tiemann, P. W., & Markle, S. M. (1990). Analyzing instructional content: A guide to instruction and evaluation. Seattle: Morningside Press.
Torgesen, J. K. (2009). The response to intervention instructional model: Some outcomes from a large-scale implementation in reading first schools. Child Development Perspectives, 3(1), 38–40.
Trotter, A. (1997). Taking technology’s measure. In Technology counts: Schools and reform in the information age. Education Week, 17(11), 6–11.
Twyman, J. S. (2011). Emerging technologies and behavioural cusps: A new era for behaviour analysis? European Journal of Behavior Analysis, 12(2), 461–482.
Twyman, J. S., & Sota, M. (2008). Identifying research-based practices for RTI: Scientifically-based instruction. Journal of Evidence-Based Practices for Schools, 9(2), 86–97.
Twyman, J. S., Layng, T. V. J., Stikeleather, G., & Hobbins, K. A. (2004). A non-linear approach to curriculum design: The role of behavior analysis in building an effective reading program. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education, Vol. 3 (pp. 55–68). Upper Saddle River: Merrill/Prentice-Hall.
U.S. Congress, Office of Technology Assessment. (1995). Teachers and technology: Making the connection (OTA-EHR-616). Washington, DC: U.S. Government Printing Office. http://www.fas.org/ota/reports/9541.pdf.
U.S. Department of Education, Office of Educational Technology. (2010). Transforming American education: Learning powered by technology. Washington, DC: U.S. Department of Education, Office of Educational Technology.
Valdez, G., McNabb, M., Foertsch, M., Anderson, M., Hawkes, M., & Raack, L. (1999). Computer-based technology and learning: Evolving uses and expectations. Oakbrook: North Central Regional Educational Laboratory.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.
Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34, 229–243.
Wager, W. (1992). Educational technology: A broader vision. Education and Urban Society, 24(4), 454–465.
Watts, G., & Hammons, J. (2002). Professional development: Setting the context. In G. E. Watts (Vol. Ed.), Enhancing community colleges through professional development. New directions for community colleges (number 120, pp. 5–10). San Francisco: Jossey-Bass.
Wayman, J. C. (2005). Involving teachers in data-driven decision-making: Using computer data systems to support teacher inquiry and reflection. Journal of Education for Students Placed At Risk, 10, 295–308.
West, D. (2011). Using technology to personalize learning and assess students in real-time. Washington, DC: Brookings Institution Press.
Whimbey, A., & Lochhead, J. (1984). Beyond problem solving and comprehension: An exploration of quantitative reasoning. Philadelphia: The Franklin Institute Press.
Wise, B. W., & Olson, R. K. (1995). Computer-based phonological awareness and reading instruction. Annals of Dyslexia, 45, 99–122.
Wise, B., Ring, J., & Olson, R. K. (2000). Individual differences in gains from computer-assisted remedial reading. Journal of Experimental Child Psychology, 77, 197–235.
Wolery, M., Bailey, D. B., Jr., & Sugai, G. M. (1988). Effective teaching: Principles and procedures of applied behavior analysis with exceptional students. Boston: Allyn and Bacon, Inc.
Xin, Y. P., Jitendra, A. K., & Deatline-Buchman, A. (2005). Effects of mathematical word problem-solving instruction on middle school students with learning problems. The Journal of Special Education, 39(3), 181–192.
Ysseldyke, J. E. (2005). Assessment and decision making for students with learning disabilities: What if this is as good as it gets? Learning Disability Quarterly, 28, 125–128.
Ysseldyke, J. E., & McLeod, S. (2007). Using technology tools to monitor response to intervention. In S. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 396–407). New York: Springer.
© 2016 Springer Science+Business Media New York
Cite this chapter
Twyman, J., Sota, M. (2016). Educational Technology and Response to Intervention: Affordances and Considerations. In: Jimerson, S., Burns, M., VanDerHeyden, A. (eds) Handbook of Response to Intervention. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7568-3_29
DOI: https://doi.org/10.1007/978-1-4899-7568-3_29
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4899-7567-6
Online ISBN: 978-1-4899-7568-3
eBook Packages: Behavioral Science and Psychology (R0)