
Algebra plays a significant role in a student’s academic pathway, from being a graduation requirement to acting as a gatekeeper to the advanced coursework required of science, technology, engineering, and mathematics majors and careers. Yet the magnitude of students’ poor math preparation is staggering at every level of schooling. For example, in the Los Angeles Unified School District, the second largest school district in the United States, the pass rate for the California High School Exit Exam (CAHSEE) in math was 47 % (CDE, 2011), which mirrors the weak performance of eighth graders on the 2009 National Assessment of Educational Progress (NCES, 2010). Poor math preparation has resulted in large numbers of students failing the California State University math entrance exam (16,900, or 35 %, of the fall 2010 first-time freshmen; CSU, 2011). The consequence of poor math preparation is of national importance, as failure to maintain a pipeline of prepared students is diminishing the nation’s competitive technology infrastructure and lead in the global economy (NAE & IOM, 2006; NAS, 2005; NSB, 2010). In our own studies with middle school students, we have found that about 50 % of eighth-grade students did not recognize that \(12\times (1/12)\) equals 1, and about 34 % of these students could not provide a solution to the equation \(3x + 1 = 13\) (Chung et al., 2007). Success in algebra is predicated on students developing foundational math concepts and skills. The National Mathematics Advisory Panel (NMAP, 2008) identified fluency with whole numbers as a critical skill underlying algebra; fluency refers to the ease and automaticity with which learners can manipulate whole numbers.
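For concreteness, the second example requires only two inverse operations; a minimal worked solution (our illustration, not an item from the study) is:

\[
\begin{aligned}
3x + 1 &= 13 \\
3x &= 12 \quad\text{(subtract 1 from both sides)} \\
x &= 4 \quad\text{(divide both sides by 3)}
\end{aligned}
\]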

O’Neil’s cognitive readiness learning model (O’Neil, Lang, Perez, Escalante, & Fox, this volume) comprises a domain-independent set of knowledge, skills, and attributes. The knowledge component is conceptualized as the prerequisite knowledge and the domain-specific knowledge needed to develop cognitive readiness in the domain. The skills component includes adaptability, adaptive problem solving, communication, decision making, and situation awareness. The attributes/competencies component includes adaptive expertise, creative thinking, metacognition, and teamwork.

This chapter examines the prerequisites for cognitive readiness that underlie solving equations, a fundamental topic in algebra. These prerequisites fall under the knowledge component of O’Neil et al.’s cognitive readiness learning model and include fluency with the basic operations (addition, subtraction, multiplication, division), use of properties (e.g., the commutative, associative, and distributive properties), and knowledge of how to apply these operations to problem solving. In the remainder of this chapter we first illustrate the complexity of solving equations by enumerating the set of mathematical operations needed to solve an equation. We then report on an assessment technique we have been investigating to measure cognitive readiness for solving equations.

1 What Knowledge Is Required to Solve an Equation?

In the context of school algebra, being cognitively ready to solve multistep equations suggests that a learner possesses knowledge of the properties of operations on numbers (often referred to as the properties of arithmetic or algebra), skill in applying a particular operation as constrained by its properties, reasoning during the simplification (or transformation) process, and monitoring of the solution. Students also need to be fluent with a variety of fundamental concepts, such as negative numbers and the use of letters as unknowns (Carraher & Schliemann, 2007; Chazan & Yerushalmy, 2003; Vlassis, 2002).

A cognitive task analysis conducted by Chung et al. (2007) identified over 50 concepts related to solving equations. Students need to be facile with a substantial body of knowledge, including the meaning of equality; unit/1; the properties of algebra; rational numbers and integers; theorems and conventions (e.g., −1 × −1 = 1; order of operations); the operations; and factorization.

Table 7.1 lists the general types of knowledge, identified from the cognitive task analysis, that underlie solving equations, and Table 7.2 lists the properties of operations on real numbers. Thus, depending on a student’s knowledge of and fluency with these concepts, solving equations can range from automated and error-free to deliberate and error-prone.

Table 7.1 Sample of knowledge related to solving multistep equations at the middle school level
Table 7.2 Properties of operations on real numbers

2 The Cognitive Demands of Solving Equations

To illustrate the cognitive complexity of solving an equation, we conducted a cognitive task analysis on the equation \(7x-(3x-2)=38\). This equation is typical of what students would receive in an algebra class. While the solution is straightforward, we have observed that many students in algebra, particularly those struggling in math, are unable to solve this equation: less than 15 % of our sample solved it successfully (Chung et al., 2007). What is puzzling about this low performance is that the equation is simple and straightforward, requiring only knowledge of math concepts drawn from Tables 7.1 and 7.2 that presumably have been covered in the elementary math curriculum.

Our analysis focused on uncovering the underlying mathematical operations required to solve \(7x-(3x-2)=38\). Our technique was to solve the equation step by step, such that each transition from one step to the next used only one mathematical operation. Table 7.3 shows one solution path for the equation, in which each step uses a single operation from Table 7.1 or 7.2. Overall, when categorized in terms of the categories listed in Tables 7.1 and 7.2, the steps in Table 7.3 involve nine uses of properties of operations, one theorem, eight arithmetic operations, and two simplifications. Of particular interest are two critical transitions: Step 11 → 12 and Step 15 → 16. These transitions are critical because they are the only ways to simplify the equation. Less obvious examples include Step 13 → 14 and Step 17 → 18, in which the underlying mathematical reason (the identity properties) for the 0 or 1 “vanishing” from the expression is often a mystery to students.
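To convey the flavor of a one-operation-per-step derivation, the opening transitions might look as follows (our illustrative reconstruction; the labels and chunking do not reproduce Table 7.3 exactly):

\[
\begin{aligned}
7x-(3x-2) &= 38 \\
7x+(-1)(3x-2) &= 38 \quad\text{(definition of subtraction)} \\
7x+(-1)(3x)+(-1)(-2) &= 38 \quad\text{(distributive property)} \\
7x+(-3x)+2 &= 38 \quad\text{(multiplication; two products evaluated)} \\
(7+(-3))x+2 &= 38 \quad\text{(distributive property, factoring out } x\text{)} \\
4x+2 &= 38 \quad\text{(addition of integers)}
\end{aligned}
\]

Carrying on in this fashion, with the addition and multiplication properties of equality and each identity step made explicit, yields \(x = 9\).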

Table 7.3 Solution steps for a multistep equation

One advantage of deriving solution steps using only one operation per step is that it reveals all the underlying knowledge that is often chunked when one actually carries out the procedure. For example, while \(7x-(3x-2)=38\) might be solved in four steps (\(7x-(3x-2)=38 \rightarrow 7x-3x+2=38 \rightarrow 4x+2=38 \rightarrow 4x=36 \rightarrow x=9\)), the solution expands to 20 steps as shown in Table 7.3. The large number of steps suggests that solving equations can involve many operations that are routine if the learner has the requisite knowledge. In addition, using a single mathematical operation per step allowed us to standardize the analysis.
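A derivation of this kind can also be checked mechanically by verifying that every transformation preserves the solution set. Below is a minimal sketch using sympy; this tooling is our illustration and was not part of the original study:

```python
# Verify that consecutive steps in a solution path are equivalent,
# i.e., that each transformation preserves the solution set.
from sympy import symbols, Eq, solveset, S, sympify

x = symbols('x')

# The chunked solution path for 7x - (3x - 2) = 38 (one string per step).
steps = [
    "Eq(7*x - (3*x - 2), 38)",
    "Eq(7*x - 3*x + 2, 38)",
    "Eq(4*x + 2, 38)",
    "Eq(4*x, 36)",
    "Eq(x, 9)",
]

equations = [sympify(s) for s in steps]
solutions = [solveset(eq, x, domain=S.Reals) for eq in equations]

for i in range(len(solutions) - 1):
    ok = solutions[i] == solutions[i + 1]
    print(f"step {i + 1} -> {i + 2}: solution set preserved = {ok}")
```

Each adjacent pair of steps should report True; a False would flag a transformation that changed the solution set.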

When the process of solving \(7x-(3x-2)=38\) is examined using O’Neil et al.’s cognitive readiness framework, the complexity of equation solving becomes clearer. The cognitive processes of adaptability and metacognition are particularly relevant.

Adaptability is defined as an effective change in response to an altered situation (O’Neil et al., this volume). A student competent in solving equations can respond effectively to different equation forms involving multiple terms, grouping symbols, number types, operations, and properties, both as the equation is initially presented and during the solution process, as the equation’s form changes through successive transformations (or simplifications). This competency captures the fluency and automaticity identified by the National Mathematics Advisory Panel (NMAP, 2008) as critical to preparation for algebra. As students attain fluency with the symbols and operations required to transform equations, they may advance toward adaptive expertise: a deep understanding of the conceptual structure of the problem and of when and why particular procedures are or are not appropriate. Students who have attained this stage are capable of solving equations across various surface forms (e.g., with the variable on both sides of the equal sign), are facile at iteratively simplifying complex expressions (e.g., nested quantities), and are able to recognize the optimal point during a solution path at which to eliminate terms and factors. Finally, successful students presumably engage in metacognition (planning and self-monitoring), whereby the correctness of each solution step is monitored and the “chunk” size of a step is adjusted to reduce cognitive load and decrease errors.

3 Implications for an Assessment of Cognitive Readiness

An assessment of cognitive readiness for solving equations requires a way of measuring the process of solving equations to determine whether the student can respond appropriately to the various forms of an equation that result from successive transformations. A key innovation we developed was to use the steps in the solution path of a given equation as a source from which to sample items. A step in the solution path is treated as a test item for the participant to solve. By sampling a range of steps from a given equation, the item set inherently captures the process of solving that equation. The complexity of the items systematically decreases because solving an equation transforms it into successively simpler equations. Further, because all the steps flow from the same equation, the steps are internally coherent. Knowing the transitions from one step to the next, that is, the transformation of each step into a simpler equation, is the key competency in solving equations. Solving an equation requires the learner to iteratively identify and execute the appropriate operation given the evolving constraints of the equation. By testing students on each step, the assessment can identify where in the solution path students may be having difficulties. A sketch of this sampling idea appears below.
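As a sketch of the sampling idea, the solution path can be represented as an ordered list of successively simpler equations from which items are drawn (illustrative Python; the equations are the chunked path discussed earlier, and the sampling scheme shown is our assumption):

```python
# Sketch: treat steps of a solution path as candidate test items.
import random

solution_path = [          # successively simpler equations, hardest first
    "7x - (3x - 2) = 38",
    "7x - 3x + 2 = 38",
    "4x + 2 = 38",
    "4x = 36",
    "x = 9",               # terminal step; not useful as an item
]

def sample_items(path, n_items, seed=0):
    """Sample n_items steps (excluding the solved form) as test items."""
    rng = random.Random(seed)
    candidates = path[:-1]                 # drop the terminal step
    chosen = rng.sample(candidates, n_items)
    # keep the sampled items ordered from most to least complex
    return sorted(chosen, key=candidates.index)

for item in sample_items(solution_path, 3):
    print("Solve for x:", item)
```

Because the candidate steps are ordered from most to least complex, the sampled item set spans the range of difficulty by construction.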

The second innovation was inspired by Kalyuga and colleagues (Kalyuga, 2006; Kalyuga & Sweller, 2004), whose research suggested that asking participants only for the next step (vs. a fully worked solution) is predictive of participants’ performance on a fully worked solution. We adopted this procedure because it is highly efficient. Thus, our research question was: To what extent can steps in a solution path (as in Table 7.3), when sampled as assessment items, capture the cognitive complexity of solving the equation from which they were derived?

4 Method

4.1 Participants

Data for 42 participants were analyzed. The sample was drawn from a larger study (Chung et al., 2007). Students were from an urban middle school in southern California and were tested at the end of the first semester. Participants came from two sixth-grade algebra readiness classes (pre-algebra topics) and three eighth-grade algebra 1A classes (pre-algebra and algebra topics). There were 23 males and 17 females; two participants did not report their sex. The students’ ethnicity was diverse: 19 % Latino, 33 % Asian or Pacific Islander, 24 % White, 12 % African American, and 11 % unreported. About 80 % of students reported receiving A’s or B’s in math, and nearly all students agreed or strongly agreed both that they were able to understand their teacher’s explanations in math class and that they were able to read and understand most of the problems and explanations in their math textbook.

4.2 Measures

Pretest. A 27-item selected-response measure assessed the knowledge and skills required to solve \(7x-(3x-2)=38\), as described in Table 7.3. Detailed information on the measure is reported in Chung et al. (2007). Cronbach’s α for this measure was 0.75.
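Cronbach’s α can be computed directly from the participants-by-items score matrix using the standard formula; the following is a minimal sketch, not the study’s analysis code:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of total)
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: participants x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# toy example: 5 participants x 4 dichotomous items
demo = np.array([[1, 1, 1, 0], [1, 0, 1, 0], [0, 0, 1, 0],
                 [1, 1, 1, 1], [0, 0, 0, 0]])
print(round(cronbach_alpha(demo), 2))
```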

Next step scale. An eight-item measure assessed students’ skill at solving \(7x-(3x-2)=38\). Each item was drawn from a step in the solution path shown in Table 7.3. For the next step items, participants were instructed to write down only their next step and not to work through the full solution. Items were scored as correct or incorrect, and their presentation order was randomized on the form. The letters denoting variables and the values of terms and coefficients were changed across items to give the illusion of different equations; however, the structure of the equation was preserved (a generator sketch follows Table 7.4). Cronbach’s α for this scale was 0.69. Table 7.4 lists the items. Note that Table 7.4 also represents the form of the steps in a fully worked solution to \(7x-(3x-2)=38\). In addition, substituting the appropriate values for the coefficients, terms, and variable labels in each item in Table 7.4 would yield the exact solution to \(7x-(3x-2)=38\), which is the innovation of our general approach.

Table 7.4 Next step items
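The surface-variation rule described above (same structure, different variable letters and values) is straightforward to operationalize. The following sketch generates variants of the original equation’s form; the specific value ranges and letter pool are our assumptions:

```python
# Sketch: generate a surface variant of a step while preserving its
# structure (different letter and values, same form).
import random

def make_variant(seed=None):
    """Produce an equation of the form a*x - (b*x - c) = d with b < a,
    chosen so the solution x = (d - c) / (a - b) is a whole number."""
    rng = random.Random(seed)
    letter = rng.choice("nmpqyz")
    b = rng.randint(2, 5)
    a = b + rng.randint(2, 5)          # ensures a - b >= 2
    c = rng.randint(1, 9)
    x = rng.randint(2, 9)              # intended whole-number solution
    d = (a - b) * x + c                # back-solve for the constant
    return f"{a}{letter} - ({b}{letter} - {c}) = {d}"

for i in range(3):
    print(make_variant(seed=i))
```

Back-solving for the constant term guarantees a whole-number solution, keeping the arithmetic demands comparable across variants.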

4.3 Task and Procedure

Participants were administered the measures as part of their normal math instruction. Participants completed the pretest, next step measure, and a background questionnaire. Participants were allowed the entire class period of 50 min to complete the tasks.

5 Results

Our research question focused on the extent to which steps from the solution path of an equation capture the cognitive complexity of solving the equation they were derived from. To address this question, our analysis examined (a) the extent to which an item’s difficulty corresponded to its relative position in the solution path; and (b) the extent to which performance on the next step format predicted performance on a fully worked solution.

5.1 Descriptive Statistics

Means, standard deviations, and intercorrelations among next step performance, knowledge of pre-algebra concepts (pretest), and self-reported math grades are shown in Table 7.5. Performance on the next step assessment, the pretest, and self-reported grades were significantly correlated with each other. Mean performance was 50 % on the next step assessment and 76 % on the pretest.

Table 7.5 Descriptive statistics and intercorrelations (Spearman) on background variables

5.2 To What Extent Does the Difficulty of an Item Correspond to Its Relative Position in the Solution Path?

To answer this question, we examined the overall performance on each item relative to its position in the solution path to the equation \(7x-(3x-2)=38\). Because solving an equation by definition results in a series of simpler and simpler equations, we expected an increase in performance across items that corresponded to simpler steps in the solution path.

The lowest item difficulty (p value, the proportion of students answering correctly) was 0.17, for the most complex equation, and p values generally increased as the items became simpler. The highest p values, in the 0.7 range, were for the three simplest equations. Figure 7.1 shows the p value of each item; higher item numbers indicate simpler equations. As expected, overall performance generally increased across simpler equations. In addition, the items appeared to cluster into three performance levels. Cluster 1, the hardest, comprised the first two items in the solution path: complex multistep equations in which distribution was required to simplify the equation. Cluster 2 comprised three multistep-equation items that did not require distribution. Cluster 3 comprised the last three steps in the solution path, which were generally single-step equations.
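The p values plotted in Fig. 7.1 are simply per-item proportions correct with standard errors; a minimal sketch of the computation on simulated data (not the study’s data):

```python
# Item p values (proportion correct) and their standard errors,
# as would be plotted in a figure like Fig. 7.1.
import numpy as np

def item_p_values(responses: np.ndarray):
    """responses: participants x items matrix of 0/1 scores."""
    responses = np.asarray(responses, dtype=float)
    n = responses.shape[0]
    p = responses.mean(axis=0)            # proportion correct per item
    se = np.sqrt(p * (1 - p) / n)         # standard error of a proportion
    return p, se

rng = np.random.default_rng(0)
demo = (rng.random((42, 8)) < np.linspace(0.17, 0.75, 8)).astype(int)
p, se = item_p_values(demo)
for i, (pi, sei) in enumerate(zip(p, se), start=1):
    print(f"item {i}: p = {pi:.2f} (SE = {sei:.2f})")
```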

Fig. 7.1 p value of items corresponding to steps in the solution. Error bars indicate standard errors around the mean

Because the items were dichotomous and dependent, a nonparametric procedure, Cochran’s Q test (Pett, 1997), was used to test for differences in student performance across the eight items. The omnibus test was significant (Q = 67.34, df = 7, p < 0.001), indicating a difference between at least two items. Follow-up comparisons between items corresponding to adjacent steps in the solution (e.g., 1 vs. 2; 2 vs. 3) showed significant differences between items 2 and 3 (Q = 10.29, df = 1, n = 42, p = 0.001) and between items 5 and 6 (Q = 4.57, df = 1, n = 42, p = 0.03). No other differences were detected, suggesting that the difficulty of the items within each cluster was similar.
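Cochran’s Q is computed directly from the 0/1 response matrix using the standard formula (column totals for items, row totals for participants); the following sketch on simulated data illustrates both the omnibus test and an adjacent-item follow-up:

```python
# Cochran's Q test for differences among k dependent dichotomous items.
import numpy as np
from scipy.stats import chi2

def cochrans_q(x: np.ndarray):
    """x: participants x items matrix of 0/1 scores. Returns (Q, df, p)."""
    x = np.asarray(x, dtype=float)
    k = x.shape[1]
    col = x.sum(axis=0)                  # item (column) totals
    row = x.sum(axis=1)                  # participant (row) totals
    n = x.sum()                          # grand total of correct responses
    q = (k - 1) * (k * (col ** 2).sum() - n ** 2) / (k * n - (row ** 2).sum())
    df = k - 1
    return q, df, chi2.sf(q, df)

rng = np.random.default_rng(1)
demo = (rng.random((42, 8)) < np.linspace(0.2, 0.75, 8)).astype(int)
q, df, p = cochrans_q(demo)
print(f"omnibus: Q = {q:.2f}, df = {df}, p = {p:.4f}")

# Pairwise follow-up between adjacent items applies the same test to 2 columns:
q2, df2, p2 = cochrans_q(demo[:, 1:3])   # items 2 vs. 3
print(f"items 2 vs 3: Q = {q2:.2f}, df = {df2}, p = {p2:.4f}")
```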

An inspection of the items suggested that items within each cluster reflected similar cognitive demands. Cognitive demands refer to the type of cognitive processing required of the student to be successful on an assessment task (Baker, 1997). In this case, the major next step for items 1 and 2 is distribution over subtraction. For items 3–5, the major next step is subtracting terms from both sides of the equation (addition property of equality), and for items 6–8 the major next step is dividing both sides of the equation (multiplication property of equality).

Items within each cluster were summed to form a scale, as student performance on the items within a cluster was similar and the remaining math operations were mainly simplification. Interestingly, Cluster 1 (distribution) was related significantly to Cluster 2 (addition property of equality), \(r_s(41) = 0.39\), p = 0.01, but not to Cluster 3 (multiplication property of equality), while Cluster 2 was related significantly to Cluster 3, \(r_s(41) = 0.32\), p = 0.04. These relationships are consistent with how the cognitive demands unfold while solving the equation: the major next step after Cluster 1 is Cluster 2, and the major next step after Cluster 2 is Cluster 3. In terms of cognitive readiness, the discontinuity after Cluster 1 suggests student difficulties with distribution over subtraction. The discontinuity after Cluster 2 suggests that some students had difficulty with the transformation step, that is, reasoning about when and how to “eliminate” terms and coefficients in an expression to simplify the equation.

5.3 To What Extent Does Asking for Just the “Next Step” Predict Performance on a Fully Worked Solution?

To answer this question, we examined the relation between participants’ performance on three next step items and their performance on three similar items that required participants to solve the items completely. Our analysis approach first examined format differences between the next step and fully worked items. Because we were examining whether the next step format could be used as a substitute for the typical approach of requiring students to solve each equation completely, we examined whether there existed format differences and whether the next step items predicted performance on the fully worked items. Both of these properties would be required if the next step item format were to be used to measure students’ skill at solving equations.

Table 7.6 shows the items and item statistics for the next step and fully worked item formats. The next step format appeared to underestimate students’ skill at actually solving the equation. Cochran’s Q test for differences between formats yielded a significant difference for item pair 1 (Q = 11.0, df = 1, p = 0.001) and item pair 2 (Q = 9.0, df = 1, p = 0.003); there was no difference between the formats for item pair 3. These results point to a potential format effect. The task of writing down only the next step rather than the full solution may have been unfamiliar. In addition, the values of the coefficients differed across formats, which may have contributed to differences in difficulty.

Table 7.6 Comparison between the next step only and fully worked solution items (N = 42)

The next analysis examined how well performance on a next step item predicted performance on the corresponding fully worked item. This analysis could not be performed for item pair 1 because all participants answered the fully worked item correctly. For item pair 2, the prediction of performance on the fully worked problem from next step performance was marginally significant (Somers’ d = 0.21, p = 0.09), and it was significant for item pair 3 (Somers’ d = 0.51, p = 0.03). Somers’ d is a measure of concordance between two ordinal variables and ranges from −1 to +1; values close to −1 or +1 indicate a strong negative or positive relationship, respectively, and values close to 0 indicate little or no relationship between the variables (IBM SPSS, 2009).
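Somers’ d is available in SciPy as scipy.stats.somersd (SciPy 1.7 and later); a minimal sketch on toy dichotomous scores (illustrative data, not the study’s):

```python
# Somers' d: asymmetric ordinal association between a predictor (next step
# score) and an outcome (fully worked score). Toy data for illustration.
from scipy.stats import somersd

next_step    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 0/1 item scores
fully_worked = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]

res = somersd(next_step, fully_worked)           # d of y given x
print(f"Somers' d = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```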

We then summed the items within each format to form scales and examined the correlation between the two scales. The purpose of forming a scale was to create a more general measure spanning the full range of equation-solving steps. The next step scale correlated significantly with the fully worked scale, \(r_s(40) = 0.39\), p = 0.012, although the magnitude of the correlation was lower than that reported in other research (\(r_s\) between 0.7 and 0.9; Kalyuga, 2006; Kalyuga & Sweller, 2004). The difference in magnitude may be due to the small number of items in each scale. These results, while based on a small sample, suggest that the next step item format can be predictive of whether participants will be successful at solving an equation.
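The scale-level relation is a Spearman correlation between summed item scores; a brief sketch on simulated data (illustrative only):

```python
# Spearman correlation between summed next step and fully worked scales.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
next_step_items = (rng.random((42, 3)) < 0.5).astype(int)
fully_worked_items = (rng.random((42, 3)) < 0.7).astype(int)

next_step_scale = next_step_items.sum(axis=1)        # 0-3 per participant
fully_worked_scale = fully_worked_items.sum(axis=1)

rho, p = spearmanr(next_step_scale, fully_worked_scale)
print(f"r_s = {rho:.2f}, p = {p:.3f}")
```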

6 Discussion

In this study we tested a novel assessment technique to measure cognitive readiness for solving equations. Our technique, inspired by Kalyuga and colleagues (Kalyuga, 2006; Kalyuga & Sweller, 2004), sampled steps from the solution path of an equation and used those steps as assessment items. We also examined whether simply asking participants to specify their “next step” captured the complexity of solving the entire equation.

Our first finding was that items drawn from steps in a solution path yielded item difficulties consistent with each step’s relative position in the solution path. Items drawn from the beginning of the solution path were more difficult than items near the end. However, item difficulties were similar among items that differed only in arithmetic complexity (e.g., simple subtraction or division). The major discontinuities in performance occurred at the steps that required operations related to distribution and equality (subtraction and division). This result is consistent with the finding that many students have neither the skills nor a precise understanding of the body of basic mathematical knowledge needed to successfully transform equations (e.g., Demby, 1997; Herscovics & Linchevski, 1994; Kieran, 2007; MacGregor & Stacey, 1997; Pierce & Stacey, 2007).

Our second finding was that the next step item format was apparently more difficult than comparable items that required participants to work out the full solution, which is similar to a finding by Kalyuga (2006), who found that students performed lower on next step items than on items requiring fully worked solutions. However, Kalyuga used word problems as the task, and overall performance was low for both formats. Kalyuga speculated that students’ impoverished problem solving skills led to a lower success rate on the next step items because providing an accurate next step required an existing schema of the general approach, whereas in solving the full problem the solution could be discovered through a variety of approaches. These results, however, were inconsistent with earlier work using a similar approach: in a study that used an equation-solving task, Kalyuga and Sweller (2004, Study 1) found that students solved a higher percentage of the next step items (72 %) than the fully worked items (58 %). While a general trend of format differences appears across all the studies, which format is more difficult is unclear because the studies scored student responses differently. We think the most likely explanation for the differences observed in the current study is that the next step format was too novel, as asking students to write down only their next step is atypical. Despite these format differences, however, performance on next step items predicted performance on items requiring a fully worked solution in the current study, as in Kalyuga (2006) and Kalyuga and Sweller (2004).

Given the amount of knowledge and skill required to solve equations, an important assessment question is what should be measured and how. The use of assessments as diagnostic tools is not new and has taken numerous forms, many of which are clinical and intensive in nature (Black & Wiliam, 2009; Heritage, Kim, Vendlinski, & Herman, 2009; Sadler, 1989; Shepard, 2001; Wiliam & Thompson, 2007).

The objective of an assessment is to understand what the student is doing and to elicit why he or she is doing it. The cognitive demands of solving equations include knowing which operation to apply (e.g., distribution over subtraction) and, just as importantly, knowing when to apply it (e.g., the addition property of equality). In the context of cognitive readiness, the assessment presented in this chapter was designed to measure both the prerequisites for solving equations and the skill itself. In addition, the technique may provide a feasible way to capture the process of solving equations. Our results show that the prerequisites for solving equations can be measured feasibly. Our general approach of sampling from the solution path yields performance differences that point to chokepoints which, in the context of solving equations, map to adaptive expertise and adaptability. That is, successful problem solvers appear to know the conditions under which to apply particular operations (e.g., using the additive identity to isolate terms; using the multiplicative identity to isolate variables).

Finally, the practical use for algebra instruction is straightforward: because the item set is sampled from the derivation of an equation (e.g., \(7x-(3x-2)=38\), as shown in Table 7.3), the performance drop-off can be used to pinpoint where in the solution path students have difficulty. Further, the set of steps in Table 7.3 taps all the properties of algebra and most of the operations. The item clusters shown in Fig. 7.1 suggest that, in our sample, single-step problems requiring division of the coefficient to isolate the unknown were relatively easy compared with multistep equations that required use of the additive inverse. The very low performance on the first item cluster suggests that distribution over subtraction poses a substantial barrier for students. The instructional implications of Fig. 7.1 are clear: students in our sample need support on (a) the use of the additive inverse to isolate the term with the unknown and (b) distribution over subtraction. These implications would not be as straightforward if the items had been developed by other means. Our technique of sampling items from the set of steps in a solution path appears to be a promising approach, combining rapid testing time, breadth of coverage, and diagnostic potential.