Expectations for the role of the computer in modern education have changed since the first computers were developed (Cuban, 1986). Early advocates believed that computers would make learning more efficient, increase student motivation to learn, and ultimately change how teachers teach, how students learn, and the way schools are organized. This belief was based on the computer’s ability to provide individualized instruction, facilitate drill activities, and provide immediate and non-judgmental feedback. Papert (1980) predicted that the computer would revolutionize every aspect of educating students. The computer would provide students with new ways to learn, think, and grow intellectually. Papert believed that by using a computer, students would be able to take control of their own learning, thus transforming the classroom.

Numerous studies were conducted to measure the effectiveness of computer-based instruction in the schools. Meta-analyses have been conducted on studies that compared achievement levels of students who received computer-based instruction (CBI) to those of their peers who did not (Kulik & Kulik, 1991; Kulik, Kulik, & Bangert-Drowns, 1985). The first meta-analysis examined 32 classroom-based studies that quantitatively compared the results of computer-based instruction with those of traditional classroom instruction. The analysis concluded that CBI generally increased the achievement levels of elementary students. A subsequent meta-analysis of 254 studies that looked at the effects of CBI on achievement confirmed the findings that CBI had a positive effect on students (Kulik & Kulik, 1991). The studies included participants ranging from kindergartners to adults. Computer instruction typically yielded higher achievement at all levels. In addition, instruction time was often less with the computer, and students tended to have a more positive attitude toward courses that included computer-based instruction. Sivin-Kachala (1997) reported similar findings in a meta-analysis of 219 studies conducted between 1990 and 1997 examining the effects of the computer on student achievement. Once again, students involved in a technology environment demonstrated increased achievement. Students also reported more favorable attitudes toward subjects when instruction involved the computer.

On the other hand, several studies have challenged claims that CBI contributes to increased achievement. Clark (1983, 1994) argued that the measured differences in student achievement from computer instruction could actually be attributed either to (1) a difference in the instructional method or content of the lesson, or (2) a novelty effect caused by a new medium that disappears as students become familiar with the medium. Clark agreed that certain capabilities of the computer could be used to assist students who were having difficulty in a particular area. However, Clark argued that the instructional methods used by teachers, the attributes of the academic task, and the student were the real causes of any measurable differences in student achievement.

Swenson and Anderson (1982) argued that the greatest educational benefit of CBI could be increased motivation and improved attitudes. Seymour, Sullivan, Story, and Mosley (1987) reported the results of a study designed to measure students’ continuing motivation to perform a future geography task when it was offered on the computer or in paper-and-pencil format. An overwhelming 97% of all participants expressed a desire to do subsequent tasks on the computer rather than in paper-and-pencil format. Those who worked on the computer consistently rated their own performance on the activity higher, found the material to be more interesting, and believed the questions to be easier than those who completed the task on paper. No achievement differences between the groups were found, however, supporting the notion that computer instruction can be beneficial without necessarily increasing student achievement. Kinzie, Sullivan, and Burdel (1992) later reported that a group of ninth-grade students who were given CBI on a science topic indicated a strong preference for instruction on the computer and an increased interest in studying science if the science instruction was to be conducted on the computer. Students who did not receive science instruction on the computer did not show such an increased interest in studying science. When given an option of subjects to study, students consistently chose the subject offered on the computer. These results suggest that CBI may sometimes appeal to students over other forms of instruction.

Early CBI systems emulated the paper-based programmed instruction of the day, with relatively simple explanatory passages, frequent interactions (often of questionable meaningfulness), and simple feedback. Such systems have since been criticized as “drill and kill” instruction, capable of teaching only low-level activities. More recent CBI tutorials, such as those developed by Plato Learning Systems, use instructional models based on current cognitive learning theory with interactive instructional sequences to support mastery of declarative and procedural knowledge (Foshay, 1998). CBI tutorials can assess what an individual student knows about a subject, compare that to what the student needs to know (usually determined by the teacher), and then determine an optimum starting place for each student. Such programs can also track student progress and generate and update student performance profiles. As with the earlier computer tutors, recent generations of CBI provide immediate feedback. Hannafin (1999) and Hannafin and Oppenheimer (2002) found that at-risk high-school students who used a customized CBI curriculum scored higher on state-mandated Mathematics and English tests than their classmates who did not use the CBI-supported curriculum.

Since the early 1990s, the discussion has shifted to how computers should be used and what kinds of learning outcomes should be sought. Many researchers and theorists have argued that until computers are used to support less directive and more student-centered learning environments, they will have minimal impact on learning in K-12 and other environments (e.g., Jonassen, 2000; Kozma, 1994; Means & Olson, 1995). Kozma (1994) has argued that the impact of computers on learning has been limited because they have been used primarily to support lower-level learning activities that emphasize recall and memorization. More recently, Jonassen (2000) has advocated using computers less to deliver instruction and more to support student-centered, open-ended learning environments. However, even though there seems to be general agreement in the research community that computers would have greater impact in schools if used in more constructivist ways, this seems to have had little impact on actual K-12 practice.

Coinciding with this discussion among researchers and theorists was an increasing political emphasis on academic standards and the associated high-stakes testing. According to Hess (2000), at least 46 states have embraced some form of standardized test to ensure that all students are receiving the same education and learning the same information. The stakes are high for students because graduation from high school is frequently dependent upon passing such tests. Stakes are equally high for teachers and administrators because not only are they evaluated on their students’ performance, but school accreditation often hangs in the balance. Schools are faced with the need to pass state-mandated subject-area tests to retain federal and state funding. Students and teachers are feeling pressure to cover more information faster. The discussion about using computers to help learners develop higher-order skills continues, but is often subordinate to helping students pass a test in order to graduate. The introduction of statewide assessment tests has increased the pressure on teachers and administrators to produce results in the form of test scores. Hess (2000) noted that these tests are based on the idea that all students should acquire a specific set of knowledge in a specific grade. The U.S. federal government has designated accountability as the cornerstone of its education program.

The State of Massachusetts administers the Massachusetts Comprehensive Assessment System (MCAS) test to assess performance in grades eight and ten. Students who graduate in 2003 and beyond are required to pass the 10th grade MCAS in English and Mathematics or they will not receive a high school diploma. Teachers quickly learned that their evaluations were tied directly to student performance on the state tests. School funding and accreditation rely on the school’s success in passing the state-mandated tests.

In the fall of 2000, Patriot High School (pseudonym used) adopted a remediation strategy that included a computer-based (PLATO) component in an effort to improve the performance of their 10th grade students on the MCAS. In this case study we investigated the overall effectiveness of the school’s remediation strategies which included: CBI coursework, better alignment with state standards, staff development, improved delivery of traditional instruction, standards-based lesson planning, and helping at-risk students improve study and organizational skills. This discussion focuses more narrowly on the CBI component. No attempts were made to isolate the effect of individual components of PHS’s remediation effort.

Remediation program

Learners

PHS serves a largely white, working middle-class area in a Northeastern resort town with a large number of absentee second-home owners who live there only in the summers. There were approximately 987 students enrolled at PHS during the 2000–2001 academic year, 83% of whom were white, 7% Native American, 6% African American, and 3% Hispanic. Eleven percent of PHS students received a free or reduced-price lunch through the federal lunch program.

Program goal

By the end of the 1998–1999 academic year, MCAS Math scores for PHS 10th graders had fallen to 15th among the 16 high schools on Cape Cod, creating a sense of urgency to reverse this trend. The single unambiguous goal for PHS was to provide instruction to help struggling students pass the MCAS exam.

Program description

Beginning in fall 1999, PHS launched a three-pronged program, with an emphasis on faculty training, to improve MCAS scores for at-risk students. The program included:

  • Curriculum realignment—PHS faculty and administrators systematically reviewed course content and state standards to check for alignment and to emphasize areas of weakness among at-risk students.

  • Staff development—The administration invested in regular training for math faculty, both during the school’s professional development days and in targeted after-school sessions. The training emphasized improving pedagogy and strategies for dealing with underperforming students.

  • Standards-based planning and delivery of traditional instruction—Release time was provided to encourage teachers to be fully engaged in this new effort.

In fall 2000, after a year of a reasonably successful remediation effort, the school administration decided to add a computer-based option to the mix and formed a team of teachers to design the course using Plato Learning Systems. Plato, like many CBI programs, supports a mastery approach to learning. Students work independently and at their own pace through instructional modules, and are not permitted to advance from one module to the next until they achieve a predetermined mastery score (e.g., 80%) on the end-of-module exam (a minimal sketch of this gating logic appears after the list of course goals below). The team specified the content to be covered in CBI modules and recommended that students spend 4 days a week with CBI modules and 1 day with an instructor working on critical study and test-taking skills. This course was required of sophomores who were in danger of failing the Math portion of the MCAS. Students were classified as “at risk” based on their 8th grade MCAS scores. Rising sophomores whose 8th grade Math scores were near or below the passing MCAS scale score of 220 were automatically enrolled in the remediation course in Fall 2000. This course, which met daily for 45 min, used CBI to align the course content with competencies covered on the MCAS exam. The course goals were to:

  • Match course curriculum to target objectives covered in the MCAS and the state standards.

  • Provide individual remediation to help academically at-risk students pass the MCAS.

  • Provide assessment and tracking for individual students, and

  • Improve students’ learning habits.
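
The internal logic of the PLATO software is not documented here, but the mastery gating described above can be illustrated with a minimal sketch. All names and the 80% threshold below are assumptions for illustration only, not details of the actual product:

```python
# Minimal sketch of mastery-based progression as described above (illustrative
# only; names and the 80% cutoff are assumptions, not details of PLATO).
from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.80  # e.g., 80% on the end-of-module exam


@dataclass
class StudentProgress:
    modules: list                     # ordered module titles (e.g., 34 MCAS-aligned modules)
    current: int = 0                  # index of the module the student is working on
    mastered: set = field(default_factory=set)

    def record_exam(self, score: float) -> bool:
        """Record an end-of-module exam score; advance only on mastery."""
        if score >= MASTERY_THRESHOLD:
            self.mastered.add(self.modules[self.current])
            if self.current < len(self.modules) - 1:
                self.current += 1
            return True
        return False                  # the student repeats the module


# Example: 85% masters module 1 and advances; 70% means module 2 is repeated.
progress = StudentProgress(modules=[f"module_{i}" for i in range(1, 35)])
progress.record_exam(0.85)
progress.record_exam(0.70)
print(progress.current, sorted(progress.mastered))  # -> 1 ['module_1']
```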

PHS’s proactive strategy of identifying students early, based on their 8th-grade MCAS results, was grounded in the belief that these students’ best chance to pass the MCAS was on their first attempt. PHS faculty reported that once students failed the MCAS in their sophomore year, they tended to lose motivation, making successful remediation in the junior and senior years more difficult.

Mr. Rob Smith (pseudonym), a certified classroom teacher, was charged with managing the CBI remediation course. In consultation with the PHS Math faculty, Mr. Smith developed a 34-module MCAS-aligned CBI curriculum. All 10th graders who scored marginally on or failed the Math section of the 8th grade MCAS exam were enrolled. Students worked independently but were encouraged to seek one-on-one help from Mr. Smith. Parent and community volunteers also worked in the lab as needed to assist high-need students. Managing the remediation effort required the full-time efforts of Mr. Smith, whose task it was to monitor student progress through the course and assign course grades. Course grades were assigned based on the number of CBI modules completed and several other criteria, such as participation and attendance.

Evaluation design

The evaluation was designed to determine the effectiveness of the remediation program at PHS. While the CBI component was part of a larger remediation effort, the PHS administration had a particular interest in examining the effectiveness of the newly added CBI component. With this in mind, the effort focused on 2001 MCAS test scores, CBI module-mastery data for Fall 2000, and the perspectives of the CBI instructor.

MCAS scores were examined to detect performance differences between students who were enrolled in the CBI course and those who were not. The state reported MCAS scores in a range from 200 to 280 with a passing threshold of 220. Beginning with the class of 2003, which includes the subjects in this study, all students in the state were required to pass the MCAS in English and Math in order to graduate. The state classified student MCAS performance into four categories:

  • Advanced—students who score 260–280

  • Proficient—students who score 240–259

  • Needs Improvement—students who score 220–239

  • Warning/Failing—students who score below 220.
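
These scoring bands reduce to a simple lookup, sketched below (the function name is ours, introduced purely for illustration):

```python
# Maps an MCAS scale score (200-280) to the state's four performance
# categories listed above; the function name is illustrative.
def mcas_category(scale_score: int) -> str:
    if not 200 <= scale_score <= 280:
        raise ValueError("MCAS scale scores range from 200 to 280")
    if scale_score >= 260:
        return "Advanced"
    if scale_score >= 240:
        return "Proficient"
    if scale_score >= 220:
        return "Needs Improvement"
    return "Warning/Failing"


# The passing threshold of 220 falls in the Needs Improvement band.
assert mcas_category(220) == "Needs Improvement"
assert mcas_category(219) == "Warning/Failing"
```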

For the academic year 2000–2001, Mr. Smith provided CBI and MCAS data for 189 10th graders (99 enrolled in the Fall 2000 CBI course, 90 were not). Of these 189 students, 63 were missing either 8th or 10th grade MCAS scores and were not considered in this analysis, reducing the N to 126 (87 enrolled in the CBI course, 39 not). Students were assigned to the CBI course based on failing or marginal 8th grade MCAS scores. The average 8th grade MCAS scale score of the 87 enrolled students was 215.7, while the average score for the non-CBI group was 234.2. To get a sense for how PHS students were performing relative to the rest of the state, scores were compared to the overall state MCAS scores for the same years. Module-mastery data were examined to identify any relationships that may exist between CBI use and student performance on the MCAS exam.

Data analysis

In reporting the interview results, main ideas were summarized and analyzed. For MCAS results, since the gain scores for students who took the test in 8th grade and again in 10th grade were of primary interest, a repeated measures analysis of variance was used to test whether observed gains were significant at the .05 alpha level. The Repeated Measures test allowed for comparison between the two groups, sophomores assigned to the CBI course and their classmates who were not.
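
Because there are only two test dates, the group-by-time interaction in this design is equivalent to comparing gain scores (10th-grade minus 8th-grade) between the two groups. The sketch below illustrates that equivalence with simulated, purely illustrative data, not the study’s data:

```python
# Illustrative sketch of the gain-score equivalence for a 2 (group) x 2 (time)
# repeated-measures design; the data are simulated, not the PHS data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 8th- and 10th-grade scale scores for 87 CBI and 39 non-CBI students.
cbi_8th = rng.normal(216, 10, 87)
cbi_10th = cbi_8th + rng.normal(20, 8, 87)    # larger average gain
non_8th = rng.normal(234, 10, 39)
non_10th = non_8th + rng.normal(11, 8, 39)    # smaller average gain

cbi_gain = cbi_10th - cbi_8th
non_gain = non_10th - non_8th

# The interaction F equals the square of the pooled-variance t on gain scores.
t, p = stats.ttest_ind(cbi_gain, non_gain)
df_error = len(cbi_gain) + len(non_gain) - 2
print(f"interaction: F(1, {df_error}) = {t**2:.2f}, p = {p:.4f}")
```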

Also of interest was whether student CBI use, in terms of the number of modules mastered, was associated with student performance on the MCAS exam. Pearson product-moment bivariate correlations were used to examine the relationship between CBI use and 10th grade MCAS test scores.
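
A minimal sketch of this correlational analysis follows; the arrays are made-up placeholders standing in for the actual module counts and scale scores:

```python
# Sketch of the Pearson correlation between modules mastered and 10th-grade
# MCAS scale scores; the arrays are placeholders, not the study data.
from scipy import stats

modules_mastered = [30, 12, 25, 34, 18, 27, 9, 22]            # hypothetical counts (of 34)
mcas_scores      = [242, 214, 230, 250, 222, 236, 210, 228]   # hypothetical scale scores

r, p = stats.pearsonr(modules_mastered, mcas_scores)
print(f"r = {r:.2f}, p = {p:.4f}")   # the study itself reported r = .53, p < .001
```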

Procedures for data collection

Mr. Smith provided both CBI and MCAS data. State MCAS data were available at the state web site (see http://www.doe.mass.edu/mcas for these data). The first author interviewed Mr. Smith by telephone, using prepared questions to structure the interviews and then allowing the line of inquiry to be guided by Mr. Smith’s concerns and perspectives.

Results

The results are organized into three sections: MCAS Scores, CBI Data, and Interview. The MCAS Scores section examines the MCAS Math performance of all PHS 10th graders who took the test in both the 8th and 10th grades. For analysis purposes, students were grouped into two categories: those who were enrolled in the CBI course and those who were not. The CBI Data section presents module-mastery data for the students who used CBI during Fall 2000 in preparation for taking the May 2001 MCAS exam. Correlations were calculated between students’ CBI module data and their MCAS scores. The Interview section summarizes the interview with Mr. Smith.

MCAS scores

Overall student scores increased significantly from 8th grade to 10th grade, increasing from an average of 221.5 to 239.0, F(1, 124) = 108.64, p < .001 (see Table 1). After the significant univariate effect was found, Tukey’s Wholly Significant Differences post hoc test was applied to examine individual differences across groups. This test established that gain scores across groups larger than 5.68 were significant at the .05 level. Students in both CBI and non-CBI groups achieved gain scores that exceeded that threshold. Further, both treatment groups, CBI and non-CBI, enjoyed a significant increase in MCAS performance from 8th to 10th grade. But of particular interest is the significant interaction, F(1, 124) = 9.08, p = .003, between the treatment groups and the scores recorded on each test date. The CBI group’s improvement (from M = 215.5 to M = 236.1, estimated effect size 1.27) was more dramatic (statistically) than the improvement of the students in the non-CBI group (M = 234.2 to M = 245.4). In other words, while it is true that the non-CBI group average score (M = 245.4) represents a significant difference over their at-risk CBI classmates (M = 234.2), the interaction (graphically depicted in Fig. 1) indicates that the CBI group outperformed the non-CBI group in terms of the MCAS gain scores from 8th grade to 10th grade (a gain of 20.4 vs. 11.2).

Table 1 MCAS math scores for 1999 (8th grade) and 2001 (10th grade) and PLATO usage data for PHS students

Fig. 1 MCAS 2001 scale score treatment by test date interaction

The average 1998–2001 scale scores and percent passing rates for PHS 10th-grade students are compared to the overall state averages in Table 2 and graphically displayed in Fig. 2. PHS’s improvement in scores in 2000 and 2001 is in line with the overall statewide trend, increasing from 219 in 1999 to 237 in 2001, while the overall state averages jumped from 222 to 237. As mentioned earlier, it was Massachusetts’s poor performance in 1999, when 60% of 10th graders failed the MCAS, that prompted the adoption of new strategies to reverse the trend. The pre-CBI strategies employed in 1999–2000 (curriculum alignment with state standards, staff development, etc.) were in fact successful in bringing scores up from 219 to 229. When CBI was added to the mix in 2000–2001, scores further improved from 229 to 237 (bearing in mind that not all PHS students used CBI). Of particular interest (especially if you are a high school student or parent) are the passing rates. In 1999, only 40% of 10th graders passed the MCAS at PHS. The state average was not much better at 47%. In 2000, the passing rate improved to 62% at PHS compared to 55% in the state, and in 2001, when CBI was added, the number of students passing increased to 84% vs. 75% statewide.

Table 2 PHS and statewide 10th grade math MCAS scores and percent passing rates for 1998–2001

Fig. 2 PHS and statewide 10th grade math scale scores and percent passing rates for 1998–2001

That PHS’s 2001 passing rate outpaced the state’s (84% vs. 75%), given identical scale score averages (M = 237), warrants closer inspection. As Table 3 shows, after lagging behind the state average in 1999 (40% vs. 46%), the percentage of PHS students who passed the MCAS exceeded the state’s in 2000 and 2001. In other words, a greater percentage of failing PHS students improved to a passing grade or better in 2000 and 2001 compared with students in the rest of the state. This may seem counterintuitive at first, but it is consistent with the earlier finding that at-risk PHS students outperformed other, higher-ability PHS students, at least on the 2001 MCAS. Further, it is consistent with the stated goal of using CBI with weak students. PHS focused on its failing students, possibly at the expense of MCAS preparation for students expected to pass. Higher-ability students at PHS did in fact pass the MCAS and improved their scores somewhat, but they were outperformed by their higher-ability peers in the rest of the state. As Table 3 shows, PHS lagged behind the state average for the percentage of students in the Advanced category (13% vs. 18%). The more consistent increase in MCAS scores for students of all ability levels statewide might indicate that other high schools implemented remediation strategies that addressed all students rather than the targeted approach used at PHS.

Table 3 Percent of 10th-grade students at PHS and statewide passing MCAS by student performance categories for 1999–2001

In sum, what appears to have happened is that MCAS scores for low-ability PHS students improved more dramatically than those of higher-ability students, while statewide, students of all abilities improved in a more consistent manner. The upshot is that a greater percentage of PHS students passed the MCAS (84%) than students in the rest of the state (75%). One could speculate that if the higher-ability students at PHS had also taken the CBI-supported course, their gain scores might have been closer in magnitude to those of the at-risk students and that, overall, PHS would have outpaced the rest of the state in 2001.

Thirteen of the 87 students who were enrolled in the CBI remediation course (and 6 of the 39 who were not) failed the May 2001 MCAS. As of November 2002, only 3 of these 13 students had not passed the math portion of the MCAS; all 3 were identified as special needs students.

CBI data

To assess the possible contribution of CBI to student success on the MCAS exam, student CBI performance, defined as the number of modules mastered, was correlated with the Spring 2001 MCAS scale scores. All 87 students in the remediation course were assigned to work on the same 34-module CBI curriculum, which was aligned to both the state standards and the MCAS exam. Students completed an average of 23.8 of the 34 possible modules. A significant correlation was found (r = .53, p < .001) between the MCAS scores and the CBI usage data. In other words, greater mastery of the modules was associated with higher MCAS scores on the May 2001 exam.

Interview

Mr. Rob Smith, PLATO Teacher–Coordinator. Mr. Smith is a certified teacher with an endorsement in special education. It was, in fact, the special education department at PHS that first started using CBI. Mr. Smith explained, “I saw its (CBI’s) potential and when Mr. Black (the school’s principal) was looking to adopt PLATO on a bigger scale, I volunteered to be part of it. I just saw tremendous potential to help under-performing students.”

In describing the 34-module CBI curriculum, Mr. Smith explained how it was designed. “We did not start out trying to align it exactly with the state framework. Instead, we customized the course (34 modules) based on our Math teachers’ recommendations. And I leave (dedicate) 1 day a week to teach study skills. For example, students are required to bring agendas and show organizational skills.” He continued, “It is entirely self-paced. Kids work independently—and they like that. They like that they are totally responsible for their work and grade.”

Having students work independently allowed Mr. Smith to focus on the students who were most in need, which was a major benefit, although in the beginning it was very difficult to manage. “I would ask questions but then 5–6 kids’ hands would shoot up right away—and stay up. They all needed help at once. To help me help these kids, I recruited a few community volunteers to help me. They (volunteers) would sit with the ‘high maintenance’ students the whole class, which really helped a lot. But then after a while, we stopped needing the extra help and didn’t need the volunteers any more. I also use the advanced kids to help others. It is really great—you see kids helping each other all over the place.”

In terms of classroom management, Mr. Smith stresses flexibility. “You have to let them talk and interact a little. That’s the only way many of them will learn—they need to move around a little and talk. I allow about 5 min at the beginning of class for some socializing and getting settled down, and 5 min at the end to get organized and pack up. That leaves about 30–35 min for focused work on the computer, which is about all these kids can stand at once. It works really well.”

Mr. Smith believes that CBI has made a big difference. “I have been here 16 years. The Math scores (prior to the CBI strategy) were the lowest I’ve ever seen. Then after we tried PLATO, the passing rate went up. And from all the students who took the test, there are only three left who have yet to pass it—and they are all special needs. One girl—a particularly tough case—asked me ‘Can I do it after school this year?’ I’d reminded her that last year (pre-CBI), you did not want it (to work hard), you never showed up. Are you sure you want to? She answered that yes she ‘really wanted to.’ And she did.”

Discussion

PHS administrators accomplished their goal of improving the passing rate on the 10th grade Math MCAS test. The combination of using a CBI-supported targeted curriculum and other instructional strategies was effective, improving the passing rate from 62% in 1999–2000 to 84% in 2000–2001. The improvement among at-risk students was particularly encouraging.

PHS’s 2-year improvement from 1999 to 2001 is encouraging. But the fact that it was accompanied by a similar increase statewide is reason to pause. The statewide improvement is probably due in part to a concerted response among all high schools to meet the challenge of the MCAS graduation requirement. It is also possible that the exam was easier in 2000 and 2001 than it had been in 1999, although, despite such discussions around the state, there is no evidence that this was the case. Improved instructional strategies are also a plausible explanation for some of the statewide improvement.

According to this analysis, the overall remediation effort at PHS was successful. How much of the improvement is attributable to CBI? Isolating CBI’s contribution to the overall improvement is not feasible in this context and is not easily accomplished in other contexts. We do not know how much of the PHS students’ success was attributable to CBI, how much was the result of the other strategies used at PHS to improve test scores, and how much was possibly due to an easier exam. Notwithstanding Mr. Smith’s strong belief that the CBI course was responsible for helping students pass the MCAS exam, it is premature and unwarranted to try to make too much of these findings. We do not know, for example, how much the additional training (study skills and test taking) helped the students, or whether those effects would have been observed with non-CBI students. Nor do we know the effect of the staff development, the impact of the new standards-based planning, or what was happening in non-CBI courses. In addition, it was not possible to impose such experimental controls as assigning students randomly to groups and establishing a control treatment group. There are, in fact, alternative explanations for the relative improvement in scores.

One plausible explanation is a tendency for scores to regress to the mean. It is also possible that a Hawthorne effect, where subjects improve performance simply because they know they are under study, was at play here. However, it is doubtful that students perceived that they were being studied—at least in a manner that might lead to an expectancy effect. And the fact that the intervention spanned a relatively long period of time would likely diminish the impact of any Hawthorne effect.

It is not our intent to argue that CBI was the only, or the most important component of the overall remediation strategy. We do argue, however, that there is some evidence that CBI played an integral role within the overall plan that resulted in improved MCAS scores in this case. The fact that student success in CBI was related to passing test scores suggests this conclusion. The combination of CBI and the efforts of a skillful and dedicated teacher together made a difference with a group of students who were among the toughest to reach and the most disenfranchised in the system. One outcome—at least according to Mr. Smith—is that many PHS students in the CBI course felt good about experiencing some success. Whether or not improved motivation will be sustained and pay long-term dividends with these at-risk students remains to be seen, and should be investigated.

We believe these findings, with the limitations noted, have practical implications for K-12 educators and administrators. The PHS model seems to be effective, but whether it can be replicated depends on several factors. The specific elements of the CBI program at PHS are, in fact, consistent with the critical success factors identified by Foshay (2000). For example, aligning the curriculum to the MCAS and state standards appears to be critical, as is the ability for students to self-pace in a mastery program. Foshay (2000) argued that having students work regularly over an extended period is generally effective. Finally, strong leadership with clear objectives is a success factor not directly investigated in this study.

The discussion about how best to use computers in schools continues. There is evidence that students can benefit when challenged and stimulated with authentic and complex problems in computer-supported learning environments (Jonassen, 2000). However, educators in public schools have also learned that well-designed direct instruction delivered via computers can efficiently and effectively help students and teachers cover ambitious state content requirements. The fact that PHS’s most challenged students performed so well is particularly encouraging.

Robert D. Hannafin

is a member of the Educational Psychology department at the University of Connecticut. His research interests include technology integration in K-12 and teacher education and open learning environments.

Wellesley R. Foshay

was VP-Instructional Design and Research at PLATO Learning, Inc. while this study was completed. He is now Research Manager at Texas Instruments, Inc.