1 Introduction

How does it work? Why is that true? Students who generate more explanations when learning new information tend to learn more (Chi, Bassok, Lewis, Reimann, & Glaser, 1989; Renkl, 1999). Further, across a variety of topics and age groups, prompting people to explain new information often leads them to learn more than people who are not prompted to explain (Atkinson, Derry, Renkl, & Wortham, 2000; Wylie & Chi, 2014). Because people generate the explanations themselves and the explanations are directed to themselves, they are called self-explanations (e.g., Chi et al., 1989). Throughout this article, we refer to self-explanations, which stand in contrast to explanations provided by others or generated for others, such as instructional explanations.

Asking students to generate explanations is a recommended study strategy (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013; Pashler et al., 2007) and a recommended instructional practice in mathematics (e.g., Common Core State Standards, 2010). However, there are numerous cases where prompting for self-explanation did not improve, or even harmed, learning (e.g., Berthold, Röder, Knörzer, Kessler, & Renkl, 2011; Matthews & Rittle-Johnson, 2009; Mwangi & Sweller, 1998). Such null and negative results highlight the need for guidelines for effectively promoting self-explanation.

The aim of this meta-analysis and integrative review is to develop evidence-based instructional guidelines for how to harness self-explanation to promote mathematics learning. We begin with a definition of self-explanation and its theoretical underpinnings. Next, we provide an example of a line of studies on prompted self-explanation conducted by our research team. Following this, we present a meta-analysis of experimental research on the effects of self-explanation prompts on mathematics learning, including an exploration of two potential moderators of the effectiveness of self-explanation prompts. Next, we provide four instructional recommendations for mathematics educators for promoting effective self-explanation. Finally, we discuss theoretical and methodological issues for future research.

2 Explanation of the instructional design principle and its theoretical underpinnings

2.1 Defining self-explanation

Self-explanation is defined as generating explanations for oneself in an attempt to make sense of relatively new information (Chi et al., 1994; Rittle-Johnson, 2006). The explanations are inferences by the learner that go beyond the given information. These inferences may focus on the reasoning of experts presented in worked-out examples or text, or on one’s own problem-solving efforts.

Certainly, not all verbalizations are explanations. Re-stating text is not considered self-explanation (Chi et al., 1994), nor is reporting what one’s solution method was (i.e., a strategy report). Strategy reports are typically a “direct articulation of information stored in a language (verbal) code” (Ericsson & Simon, 1980, p. 227), information that is already active in working memory when working on a task, even without overt verbalization. They do not involve inferences and typically do not impact learning (Ericsson & Simon, 1980; Rittle-Johnson & Siegler, 1999).

It is also important to distinguish self-explanations from other types of explanations. Self-explanations are generated by the learner, rather than by an instructor, parent, or other person who already knows the content, and are generated for the learner, not intended to teach the content to other people (Chi et al., 1994). Sometimes self-explanations are told to an experimenter, but without the intent of teaching (e.g., Renkl, Stark, Gruber, & Mandl, 1998; Rittle-Johnson, 2006).

2.2 Theoretical underpinnings: proposed mechanisms

Self-explanation is thought to promote learning via two primary processes. First, self-explanation aids comprehension by promoting knowledge integration (Chi, 2000). In particular, explanations often integrate pieces of new information together or integrate new information with prior knowledge. For example, when studying text with worked-out examples, learners’ explanations often linked solution steps to prior knowledge and/or information in the text (Atkinson, Renkl, & Merrill, 2003; Chi et al., 1989; Renkl, 1997). Further, when new information conflicts with prior knowledge, students who self-explain have multiple opportunities to notice this conflict and attempt to resolve it (Chi, 2000). For example, their explanations sometimes include integration of critical features that were originally overlooked or misinterpreted (Durkin & Rittle-Johnson, 2012).

Second, self-explanation aids comprehension and transfer by guiding attention to structural features over surface features of the to-be-learned content (McEldoon, Durkin, & Rittle-Johnson, 2013; Rittle-Johnson, 2006; Siegler & Chen, 2008). This makes knowledge more generalizable because it is less tied to particular problem features, so it is more likely to be transferred to new problems and situations (e.g., Gick & Holyoak, 1983). For example, generating explanations can make learners more aware of reasons for their own solution steps and more attentive to general characteristics of the solution method that are less tied to particular problem features (Berry, 1983). Similarly, generating explanations can help learners notice key structural features of exemplars and invent rules and solution methods that are less tied to particular surface features of the exemplars (McEldoon et al., 2013; Rittle-Johnson, 2006; Siegler & Chen, 2008). In summary, self-explanation supports knowledge integration and/or knowledge generalization, which should improve future performance.

3 An illustrative line of research on self-explanation

A series of studies by our research team illustrates both benefits of and constraints on learning from prompted self-explanation. In all studies, elementary-school children learned about mathematical equivalence—the idea that the two sides of an equation represent the same amount. Children in this age group typically think the equal sign means “get the total” and solve equations with operations on both sides of the equal sign, such as 5 + 3 + 7 = 5 + __, by adding all the numbers (answering 20) or adding the numbers before the equal sign (answering 15), rather than recognizing that the blank must be 10 so that both sides equal 15 (McNeil & Alibali, 2005).

We measured three knowledge outcomes. Conceptual knowledge is defined as knowledge of concepts, which are abstract and general principles, such as the concept of mathematical equivalence (Rittle-Johnson & Schneider, 2015; Rittle-Johnson, Schneider, & Star, 2015). For example, children were asked to define the equal sign and to evaluate whether closed number sentences such as 8 = 2 + 6 were true or false. Procedural knowledge is often defined as knowledge of procedures (Rittle-Johnson, Siegler, & Alibali, 2001; Star, 2005, 2007). A procedure is a series of steps, or actions, done to accomplish a goal. This knowledge often develops through problem-solving practice, and thus is tied to particular problem types. Procedural transfer is the adaptation and/or integration of procedures to solve problems whose structural as well as surface features differ from those encountered in the learning phase (e.g., problems that require using learned procedures in new combinations) (Atkinson et al., 2003; Wong, Lawson, & Keeves, 2002).

In our studies, self-explanation prompts asked children to consider how and why a given answer was correct and a common incorrect answer was incorrect. In previous research on this topic, children who were prompted to explain both correct and incorrect solutions were better able to solve transfer problems than children who were only prompted to explain correct solutions (Siegler, 2002).

Rittle-Johnson (2006) investigated whether self-explanation prompts were more effective in combination with direct instruction or invention and whether self-explanation prompts led to improvements in knowledge that lasted for several weeks. In a single one-on-one tutoring session, children first solved warm-up problems and were told they had solved the problems incorrectly, to motivate them to learn correct ways to solve the problems. Next, some children were given direct instruction on a procedure for solving the problem (without being told why the procedure worked) or were asked to try to think of a new way to solve the problems on their own (invention). Finally, children solved a set of six problems, and some children were prompted to self-explain (e.g., Why do you think that’s a good way to solve it?) and some children were not prompted.

Prompts to self-explain led to greater procedural knowledge and transfer immediately after the session as well as 2 weeks later, regardless of instructional condition. Self-explanation prompts led children to invent new solution procedures, even if they had already been taught one procedure, to generalize and adapt correct solution procedures to a wider range of transfer problems, and to continue to use the correct procedures after a delay. However, self-explanation prompts did not lead to greater improvements on an independent measure of conceptual knowledge. Children’s self-explanations often described a procedure for solving the problem, rarely included a rationale for why a solution was correct, or were very vague. Thus, self-explanation prompts promoted deeper learning of procedures regardless of whether the procedures were self-generated or taught directly, but did not promote conceptual knowledge in this study. This may be because children were not given instruction on the underlying concept of equivalence.

In Matthews and Rittle-Johnson (2009), we investigated whether the content of instruction affected self-explanation quality and subsequent learning outcomes. In Experiment 1, some children received instruction on the concept of equivalence and others received instruction on a solution procedure (without instruction on why it works), and then all children solved problems with self-explanation prompts. Instruction on the concept led to higher quality explanations. In Experiment 2, all children received instruction on the concept of equivalence; afterwards, some children were prompted to self-explain while solving problems and some were not. Self-explanation prompts did not lead to greater knowledge on any measure, a finding we replicated in DeCaro and Rittle-Johnson (2012). This suggested that instruction on a core concept might sometimes replace the benefits of self-explanation prompts, as the inferences and links that children could make while self-explaining were provided in the instruction on the concept.

Finally, in McEldoon, Durkin, and Rittle-Johnson (2013), we explicitly tested the effectiveness of self-explanation prompts relative to solving additional problems (additional-practice condition, which equated time on task) and to solving the same number of problems (control condition, which equated problem-solving experience), using a more extensive knowledge assessment than in our previous studies. All children first received brief instruction on a correct solution procedure. Compared to the control condition, self-explanation prompts promoted conceptual and procedural knowledge, particularly knowledge of equation structures and procedural transfer. Compared to the additional-practice condition, the benefits of self-explanation were more modest and only apparent on some subscales: the self-explain condition tended to have greater knowledge of equation structures and procedural transfer. Students’ self-explanations indicated that they were often able to describe a correct solution procedure when asked how to find the correct answer, and they continued to focus on procedures when asked why an answer was correct or incorrect. In turn, the frequency of describing correct procedures was related to knowledge of equation structures and procedural transfer. Thus, self-explanation prompts had a focused impact on learning relative to solving additional problems to control for time on task.

Our own research on self-explanation highlights both potential advantages and limitations to prompting for self-explanation as an instructional technique for improving children’s mathematics learning. One potential advantage is promoting procedural knowledge that is robust enough to support solving math problems with novel problem features (procedural transfer) and maintaining this benefit over a delay. This advantage was present when children had not received instruction or were taught a solution procedure (McEldoon et al., 2013; Rittle-Johnson, 2006; Siegler, 2002). A second potential advantage is improving aspects of conceptual knowledge, particularly knowledge of problem structure. This advantage has only been documented in our research when children had been taught a solution procedure (McEldoon et al., 2013). One potential limitation is that instruction on a core concept may replace the benefits of self-explanation prompts for some topics, such as mathematical equivalence. Self-explanation prompts did not improve learning outcomes when children were taught the underlying concept of the equal sign directly (DeCaro & Rittle-Johnson, 2012; Matthews & Rittle-Johnson, 2009). Many children invent appropriate solution procedures for solving math equivalence problems when they have sufficient conceptual knowledge of the equal sign. For procedures that are much harder to invent, such as fraction division, instruction on underlying concepts is unlikely to be sufficient on its own. Self-explanation prompts may be more effective in combination with instruction on concepts in these types of domains. Finally, the effectiveness of self-explanation prompts may be more modest when time-on-task is controlled by having students engage in other active instructional techniques.

4 Review of empirical research

4.1 Effectiveness of prompted self-explanation in general

What do we know about the benefits and constraints on the use of self-explanation prompts to promote learning in a variety of disciplines and contexts? Self-explanation can clearly be an effective instructional technique. Past reviews of the self-explanation literature have focused on particular contexts and documented the generally positive effects of prompting for self-explanation in each context. For example, Dunlosky and colleagues (Dunlosky et al., 2013) were interested in learning techniques that students could implement on their own, without specially designed materials, so they reviewed only studies that used general, content-free prompts to promote self-explanation. They concluded that self-explanation prompts in this context have moderate utility as a learning technique. Other reviews have focused on the benefits of self-explanation prompts when using particular types of materials, such as multimedia learning materials (Wylie & Chi, 2014) or worked examples (Atkinson et al., 2000).

4.2 Meta-analysis for mathematics learning

To better understand when prompted self-explanation does and does not promote mathematics learning in particular, we conducted a meta-analysis of research on prompted self-explanation within mathematics learning. We identified published articles in which (a) self-explanation was experimentally manipulated and compared to a no-prompted-explanation control condition and (b) mathematics learning was the topic of interest. We searched PsycINFO and ERIC (Education Resources Information Center), checked citations in central articles on self-explanation, and browsed psychology and mathematics education journals. Our intent was to capture the range of experimental research on self-explanation, with a focus on research published in journals. Our focus on published research could inflate the ratio of positive effects to non-effects, but inclusion of unpublished research has been shown to increase rather than decrease potential publication bias (Ferguson & Brannick, 2012). Further, given our focus on experimental research, self-explanation was defined by condition (i.e., receiving explanation prompts or not), rather than by the quality of the responses that learners were able to generate. We confirmed that the prompts in each study encouraged explanation (i.e., making inferences that went beyond the given information). Note that learners in the control condition may have self-explained spontaneously, so these studies primarily address the effects of prompted self-explanation, not the effects of spontaneous self-explanation.

Articles included in the meta-analysis are presented in Table 1 with our coding of the studies along key dimensions, including the age group and topic. To help make sense of when prompting for self-explanation benefits learning, we identified a set of potentially important design features. We coded whether the study occurred in a classroom context (i.e., was conducted in classrooms with course-relevant content). We also coded whether self-explanations were scaffolded through training prior to the self-explanation phase or through structured responses, such as selecting an explanation from a list or filling in a partially provided explanation. Scaffolding was provided in about half of the studies. We also coded whether the control condition spent a comparable amount of time studying the target content. When time on task information was not reported, we made a judgment based on available information. If the control condition studied the same fixed amount of material without additional activities, then we inferred that the time on task was not the same. If the control condition engaged in an alternative activity, such as thinking aloud while studying, we assumed the same amount of time on task.

Table 1 Published studies that contrasted prompts to self-explain with a no-prompted-explanation control condition: key design features and standardized mean difference effect sizes (ES) by learning outcome

We coded outcomes into three types: conceptual knowledge, procedural knowledge, and procedural transfer, using the definitions provided in Sect. 3. We also coded whether the outcome was measured immediately (on the same day as the intervention) or after a delay.

We calculated a standardized mean difference effect size (ES) for each outcome reported in a study, as shown in Table 1. When a study included more than one self-explanation or control condition, we used the strongest self-explanation condition (e.g., explain both correct and incorrect information) and the strongest control condition (e.g., one that controlled for time on task).
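
For reference, in its standard (Cohen’s d) form, a standardized mean difference divides the difference between condition means by the pooled standard deviation; a small-sample correction such as Hedges’ g rescales this value slightly without changing its interpretation:

$$\mathrm{ES} = \frac{\bar{X}_{\mathrm{SE}} - \bar{X}_{\mathrm{C}}}{SD_{\mathrm{pooled}}}, \qquad SD_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{SE}} - 1)\,SD_{\mathrm{SE}}^{2} + (n_{\mathrm{C}} - 1)\,SD_{\mathrm{C}}^{2}}{n_{\mathrm{SE}} + n_{\mathrm{C}} - 2}},$$

where SE denotes the self-explanation condition and C the control condition. Positive values indicate an advantage for the self-explanation condition.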

4.2.1 Meta-analysis results

Results are summarized in Table 2. As expected, prompting students to self-explain led to a small to moderate improvement in mathematics learning. In particular, self-explanation prompts promoted greater procedural knowledge (ES = 0.28), conceptual knowledge (ES = 0.33) and procedural transfer (ES = 0.46) when knowledge was assessed immediately after the intervention. In line with the proposed mechanisms of knowledge integration and knowledge generalization, the effects were stronger on the measures that required deeper knowledge.
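
Overall estimates of this kind are typically precision-weighted averages of the study-level effect sizes. A standard random-effects formulation, shown here for reference (the exact pooling model is not spelled out in the text), is

$$\overline{ES} = \frac{\sum_{i} w_i\,ES_i}{\sum_{i} w_i}, \qquad w_i = \frac{1}{v_i + \tau^{2}},$$

where v_i is the sampling variance of study i’s effect size and τ² is the estimated between-study variance, so larger and more precise studies receive greater weight.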

Table 2 Overall standardized mean difference effect sizes of self-explanation

Much less research has investigated whether these effects persist over a delay. Only nine experiments have included a delayed posttest, with the delay ranging from one week to one month. Self-explanation prompts did improve procedural transfer over a delay (ES = 0.32). However, the effect sizes were much lower for procedural knowledge (ES = 0.13) and conceptual knowledge (ES = −0.05) and were far from significant.

Only seven experiments have been conducted in a classroom context, with some limited evidence that prompted self-explanation can promote procedural knowledge when assessed immediately after the intervention (ES = 0.38, p = 0.08), although not conceptual knowledge (ES = 0.15, p = 0.56). Too few experiments in a classroom context have assessed procedural transfer or knowledge after a delay, making meta-analytic techniques inappropriate. Four of the seven experiments conducted in a classroom context were implemented using computer-tutoring systems, and the remaining three were done in college classrooms, either during class time or as homework (Broers & Imbos, 2005; Große & Renkl, 2006; Hodds, Alcock, & Inglis, 2014). Thus, evidence that self-explanation prompts can effectively promote mathematics learning and retention in regular primary- or secondary-education classrooms is quite limited.

Overall, prompting for self-explanation is an effective way to promote deep learning of mathematics content. However, evidence that self-explanation reliably promotes retention of knowledge over a delay or learning within a classroom context is much more limited.

Despite the general benefit of self-explanation prompts for immediate learning outcomes, in some studies self-explanation prompts did not impact learning (as indicated by an effect size near 0, see Table 1) and occasionally even had a negative impact on learning (as indicated by a negative effect size, see Table 1). Indeed, there was substantial heterogeneity of effects (Q(18) = 49.38, p < 0.001 for procedural knowledge; Q(15) = 55.33, p < 0.001 for conceptual knowledge; and Q(8) = 21.27, p = 0.006 for procedural transfer), indicating considerable variability in the effect sizes across studies. This heterogeneity was mainly due to true heterogeneity rather than sampling error (I² ranged from 62.4 to 72.9%).
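
For reference, Cochran’s Q is the weighted sum of squared deviations of the study-level effect sizes from the summary estimate, computed with inverse sampling-variance (fixed-effect) weights w_i = 1/v_i, and I² expresses the share of total variability that reflects true between-study differences rather than sampling error:

$$Q = \sum_{i} w_i\,\left(ES_i - \overline{ES}\right)^{2}, \qquad I^{2} = \max\!\left(0,\ \frac{Q - df}{Q}\right) \times 100\%.$$

For example, for conceptual knowledge, (55.33 − 15)/55.33 ≈ 72.9%, the upper end of the reported range, and for procedural transfer, (21.27 − 8)/21.27 ≈ 62.4%, the lower end.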

To try to identify when prompts to self-explain were most beneficial, we considered two features of the study design that might influence the effectiveness of self-explanation prompts (i.e., moderators of the effectiveness of prompting for explanation on immediate learning outcomes). First, whether studies controlled for time on task did not influence the effectiveness of explanation prompts for procedural knowledge (β = −0.34, p = 0.145), conceptual knowledge (β = −0.22, p = 0.458), or procedural transfer (β = 0.33, p = 0.304). Thus, the size of the self-explanation effect was not significantly smaller when studies controlled for time on task (e.g., by fixing study time). This is in line with the argument that self-explanation promotes effective learning processes, not simply more time on task.
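
The β values reported in this and the next paragraph can be read as meta-regression slopes: each study’s effect size is modeled as a baseline plus a shift associated with the coded design feature. A minimal sketch, assuming the standard mixed-effects form (the exact specification is not given here), is

$$ES_i = \beta_0 + \beta_1 x_i + u_i + e_i,$$

where x_i indicates whether study i had the feature (e.g., controlled for time on task), u_i is a random between-study effect, and e_i is sampling error, so β₁ estimates how much the effect of self-explanation prompts differs between studies with and without the feature.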

However, whether self-explanations were scaffolded did impact the effectiveness of prompting for explanations. In about half of the studies, learners received scaffolding via training prior to the self-explanation phase or via structured self-explanation responses, such as selecting an explanation from a list. In these studies, the effect of self-explanation on conceptual knowledge outcomes was larger than in studies that did not provide scaffolding (β = 0.67, p = 0.004), but providing scaffolding did not influence procedural transfer or procedural knowledge (β = 0.36, p = 0.273 and β = 0.18, p = 0.454, respectively). This was true even when we compared only studies with structured response formats to those with no scaffolding (excluding studies with training on self-explanation): conceptual knowledge was reliably higher with structured response formats (β = 0.55, p = 0.043). Thus, scaffolding self-explanations is particularly helpful for promoting conceptual knowledge. The effects of scaffolding self-explanation on procedural transfer and procedural knowledge are not reliable, so the advantages of providing scaffolding may be less substantial for those types of knowledge.

5 Recommendations for mathematics educators

We propose four evidence-based guidelines for effectively promoting self-explanation. The guidelines are based on the current meta-analysis, on our integrative review of the broader literature on prompted self-explanation (Rittle-Johnson & Loehr, 2016, 2017), and on research that has experimentally tested different self-explanation conditions.

5.1 Guideline 1: Scaffold high-quality explanations via training on self-explanation or structuring self-explanation responses

As noted in the previous section, scaffolding explanations improves the effectiveness of prompting for self-explanation, especially for improving conceptual knowledge. One scaffolding approach is to provide training on self-explanation beforehand. Self-explanation training often includes (1) describing and motivating self-explanation strategies, (2) modeling use of the strategies, and (3) practicing self-explaining (Hodds et al., 2014; Renkl et al., 1998). For example, the instructor can provide a description of specific self-explanation strategies that high-performing students use when studying (e.g., explaining each line in a proof using previous ideas presented in the proof or previous knowledge), highlighting the learning benefits of engaging in self-explanation (Hodds et al., 2014; Kramarski & Dudai, 2009). Next, learners can watch a videotape, listen to an audiotape, or read a transcript of someone modeling use of self-explanation on similar content (Wong et al., 2002). Learners can also practice self-explaining without feedback (Hodds et al., 2014) or with coaching (Renkl et al., 1998). For example, undergraduate mathematics students who received in-class training on self-explanation and later studied two mathematical proofs developed a better understanding of the proofs than students who did not receive self-explanation training (Hodds et al., 2014).

An alternative approach to scaffolding high-quality explanations is to structure the self-explanation response format. Rather than generating free responses to questions, learners fill in blanks in partially complete explanations or select an explanation from a menu or glossary. For example, high-school students could self-explain by selecting the geometry principle from a glossary of principles to justify each solution step (Aleven & Koedinger, 2002). Alternatively, high-school and college students sometimes self-explained by filling in blanks with missing information in partially-provided explanations and other times provided unstructured explanations (Berthold, Eysink, & Renkl, 2009; Berthold & Renkl, 2009). College students who self-explained in this way developed better conceptual knowledge than students who self-explained without structured response formats or who were not prompted to self-explain (Berthold et al., 2009). In part, this was because the structured response format increased the frequency of principle-based explanations.

Overall, structured response formats are a promising way to support explanation quality and learning. It is worth noting that structured self-explanation response formats have been used exclusively in studies implemented in computer tutors. They may be particularly important as an alternative to typing open-ended explanations. They also facilitate providing feedback on the accuracy of the explanations, which is rarely done when responses are not structured. Future research is needed to evaluate the effectiveness of structured response formats on paper-and-pencil assignments, as well as on approaches for transitioning from structured response formats to open-ended response formats.

5.2 Guideline 2: Design explanation prompts so they do not sacrifice attention to other important content

Self-explanation prompts influence the focus of learners’ attention and cognitive effort in particular ways. When prompts focus attention on some types of information, other information can be neglected. In particular, explanation prompts that focused attention on key concepts increased conceptual knowledge of domain principles, but also reduced procedural knowledge, on both a mathematics task (Berthold & Renkl, 2009) and a tax law task (Berthold et al., 2011). The explanation prompts supported more detailed explanations than unguided note taking, including a greater number of elaborations on domain principles. However, the explanation prompts also decreased the number of calculations performed during learning (Berthold et al., 2011). In other words, the prompts focused attention on concepts while detracting attention from solution procedures. A reverse trade-off occurred in Große and Renkl (2006), in which self-explanation prompts harmed conceptual knowledge and had no effect on procedural knowledge; the explanation prompts in this study focused attention on solution procedures. These findings highlight that self-explanation prompts focus attention on particular aspects of the to-be-learned material. There can be hidden costs, drawing attention away from other important information. Self-explanation activities must be designed to carefully balance attention to all of the important content.

5.3 Guideline 3: Prompt learners to explain correct information

In a large majority of experimental studies on self-explanation, learners were prompted to explain correct information, usually worked-out examples or correct answers to math problems. For example, middle-school students learned more when prompted to self-explain how steps in the fraction addition procedure corresponded with graphical representations of the procedure (e.g., on a number line or using pie charts) (Rau, Aleven, & Rummel, 2015), and college students learned more when prompted to self-explain why particular steps are performed when calculating probabilities (Berthold et al., 2009).

Three studies have directly contrasted learning from prompts to explain correct information versus one’s own reasoning prior to feedback, and all have reported better procedural knowledge when learners explained correct information (Calin-Jageman & Ratner, 2005; Siegler, 1995, 2002). For example, 5-year-old children were (a) prompted to explain correct solutions after first attempting to solve each problem, (b) prompted to explain their own solutions prior to feedback on their accuracy, or (c) not prompted to explain at all while solving the problems (Siegler, 1995). Children who explained correct solutions solved substantially more problems correctly than children who explained their own solutions, who in turn did not differ from children who were not prompted to explain. Children’s own solutions were often incorrect, and thus children in the explain-own condition spent time justifying and making inferences about information that was not correct.

Overall, prompting learners to explain correct information, rather than their own reasoning, seems more likely to support learning, at least in part because the explanations are more likely to include correct inferences and generalizations. Prompting learners to explain their own solutions or reasoning is less likely to improve learning if the solutions and inferences are often incorrect. Given the potential risk, we recommend prompting learners to explain known-to-be correct information.

5.4 Guideline 4: Prompt learners to explain why incorrect information is incorrect if there are common errors or misconceptions

Guideline 3 does not mean that learners should never be prompted to explain known-to-be incorrect information. Rather, including prompts to explain why incorrect information is incorrect, as well as why correct information is correct, can improve procedural or conceptual knowledge relative to no explanation (McEldoon et al., 2013; Rittle-Johnson, 2006; Siegler, 2002) or to explanation of only correct information (Booth, Lange, Koedinger, & Newton, 2013; Durkin & Rittle-Johnson, 2012; Siegler, 2002). For example, Algebra I students who were prompted to self-explain incorrect worked examples in a computer tutoring system gained better conceptual knowledge than students who were prompted to explain only correct worked examples (Booth et al., 2013). Prompted self-explanation of correct and incorrect worked-out examples on mathematics assignments and homework is also viable in Algebra classrooms and can be especially effective for students with low prior knowledge (Booth et al., 2015; Lange, Booth, & Newton, 2014).

Self-explaining contrasts between correct and incorrect examples can help learners distinguish correct and incorrect ideas by supporting inferences about their differences (Durkin & Rittle-Johnson, 2012), spark greater attempts to explain than correct examples alone (Legare, Gelman, & Wellman, 2010), and reduce use of incorrect ideas and strategies (Siegler, 2002). Unfortunately, this recommendation is based on a small number of studies, so additional research is needed before making a strong recommendation.

6 Issues for future research

In order to more effectively use self-explanation as an instructional technique, future research is needed in at least three areas.

6.1 Classroom context

Although a majority of research on prompted self-explanation has been conducted with educationally-relevant content, only seven published experiments on mathematics learning have been conducted in realistic classroom contexts (see Table 1). Most research was conducted in a laboratory context under the close supervision of an experimenter. The demand characteristics of a laboratory setting may increase the probability that participants attempt to generate reasonable explanations; in unmonitored settings, learners may put in less effort. At the same time, participants in the control condition are much less likely to engage in their own alternative study strategies when they are not being held accountable for learning the material.

Our meta-analysis indicated that prompted self-explanation in a classroom context does promote procedural knowledge when assessed immediately after the intervention, but evidence for other outcomes is too limited to be confident in its effectiveness in a classroom context. In addition, classroom-based evidence comes from math classes using computer tutors (Aleven & Koedinger, 2002; Rau et al., 2015) or other computer software (Kramarski & Dudai, 2009). Computers allow for individually-paced instruction and immediate feedback, features which are much less common in seatwork or homework. A few studies have tested the effectiveness of self-explanation prompts in classroom contexts without the use of computers. Results are mixed, with one study reporting positive results (Hodds et al., 2014) and two studies reporting no effect (Broers & Imbos, 2005; Große & Renkl, 2006). The Hodds and colleagues (2014) study included training on self-explanation, while the other two did not, suggesting that teachers should provide training on self-explanation when prompting for explanations without the aid of computer software. Clearly, additional research is needed to refine and expand guidelines for effectively promoting self-explanation in classrooms. For example, what are effective and practical scaffolds for helping students generate relevant and useful explanations?

6.2 Time demands and alternative instructional techniques

Self-explanation takes considerable time. For example, in one study, learners who were prompted to self-explain spent about 75 min on the task, compared to 45 min in the control condition (Berthold et al., 2009). To address this additional time on task, some studies have fixed study time across conditions, requiring learners in the control condition to continue studying the material for an equivalent amount of time. Our moderation analysis indicates that the effectiveness of prompted self-explanation is not dependent on whether studies controlled time on task across conditions. Nevertheless, given the substantial time demands of self-explanation, an open question is when alternative activities would more easily or efficiently achieve the same learning outcomes.

Evidence suggests some alternative instructional activities have the potential to be as effective as prompted self-explanation. A few studies have compared prompted self-explanation to receiving instructional explanations for mathematics learning. In these studies, instructional explanations provided justifications for why things worked, linking procedures and concepts. Students who were prompted to self-explain learned a similar amount as those who studied instructional explanations (Gerjets, Scheiter, & Catrambone, 2006; Große & Renkl, 2006), in line with a meta-analysis of studies including a broader range of domains (Wittwer & Renkl, 2010). For example, preschoolers learning about identifying rules in repeating patterns of objects learned a similar amount from self-explanation prompts and instructional explanations (Rittle-Johnson, Fyfe, Loehr, & Miller, 2015). In these studies, learners in the self-explanation condition had difficulty generating correct inferences and generalizations. In addition, generating self-explanations can take considerably more time than receiving instructional explanations (Gerjets et al., 2006). This past research has not included scaffolding of self-explanations. Of course, null results are difficult to interpret. Future research should evaluate whether self-explanation is more consistently beneficial than instructional explanations when self-explanation quality is scaffolded and when time on task is comparable.

Another alternative to self-explaining is solving additional non-routine problems. At least one study found greater procedural knowledge for the prompted self-explanation condition (Aleven & Koedinger, 2002), but another found mixed effects depending on the learning outcome (McEldoon et al., 2013), and still others found comparable conceptual and procedural knowledge when students in the control condition spent an equivalent time solving more problems (and received instruction on the key concepts) (DeCaro & Rittle-Johnson, 2012; Matthews & Rittle-Johnson, 2009). Solving unfamiliar problems can be a constructive learning technique, as it requires responses that go beyond what is provided in the original material (Chi, 2009). Both self-explanation prompts and solving unfamiliar problems can provide opportunities for thinking about correct procedures, including when each procedure is most appropriate. This may be especially true when problem-solving exercises are designed with problems sequenced to support noticing of underlying concepts (Canobi, 2009; McNeil et al., 2012). Too little research has contrasted the effectiveness of prompting for self-explanation relative to spending a comparable amount of time solving unfamiliar problems.

Future research should also consider how self-explanation can be combined with other instructional techniques. For example, exploration prior to direct instruction can lead to better learning than beginning with direct instruction, as exploration seems to prepare students to learn more from the instruction (Kapur, 2010; Schwartz, Chase, Chin, & Oppezzo, 2011). Prompting for self-explanation during the exploration phase could augment the benefits of this phase, but this possibility has rarely been evaluated. In one study, we evaluated this possibility: students who explored problems prior to direct instruction on the concept of equivalence learned more than students who received the same direct instruction followed by solving the problems (DeCaro & Rittle-Johnson, 2012). However, prompting students to self-explain during the problem-solving phase did not lead to greater learning, regardless of whether problem solving occurred before or after direct instruction. Despite this discouraging finding, the topic merits additional research in contexts where self-explanation prompts are more effective (e.g., with scaffolding).

Finally, because prompting for self-explanation may not reliably improve retention of conceptual and procedural knowledge over a delay, additional techniques may be needed to promote retention as well as learning. For example, repeated testing, with or without self-explanation, improved medical students’ retention of conceptual knowledge over a 6-month delay (Roediger & Karpicke, 2006). Overall, future research is needed to better specify when and for whom self-explanation prompts are more effective than alternative instructional techniques.

6.3 Additional scaffolds

Given the importance of self-explanation quality, a third open question concerns additional ways to scaffold self-explanation quality. As noted above, more research is needed on the use of structured self-explanation formats, especially outside of computer tutors. Integrating instructional explanations and self-explanations is another promising avenue for future research. For example, instructional explanations could be used to model high-quality explanations, followed by prompting learners to self-explain. Alternatively, learners could be asked to make inferences and generalizations about instructional explanations (i.e., to self-explain instructional explanations). Fading from instructional explanations or structured self-explanation responses to open-ended self-explanations may be another fruitful technique. Given the importance of the content of explanations, we predict that a variety of ways to scaffold high-quality explanations will be beneficial in many contexts.

Another emerging possibility is having learners generate explanations for someone else. Explanations for someone else can still be considered self-explanations when they are generated by the self and are not generated with the intent to teach. People often produce more detailed and explicit explanations, and justify their ideas more, for other people than for themselves (Krauss, 1987; Loewenthal, 1967). Indeed, in one study, generating explanations for others supported greater procedural transfer than generating explanations for oneself: 4- and 5-year-old children were prompted to explain correct examples to their moms or to themselves, or to restate the correct example without explanation (Rittle-Johnson, Saylor, & Swygert, 2008). Explanations were of higher quality when children explained to their moms, and prompts to explain to mom led to the greatest procedural transfer. Less direct evidence comes from studies indicating that the frequency of generating explanations in small groups positively predicts learning (Webb, 1991). Partner work and peer tutoring are natural contexts for encouraging explanation to others in a classroom setting. Explaining homework to a parent is another familiar context worth exploring (Loehr, Rittle-Johnson, & Rajendran, 2014). Overall, additional research is needed on the constraints of explaining to others as a way to scaffold explanation quality.

7 Conclusion

Self-explanation prompts are a reliable instructional technique that can support procedural knowledge, conceptual knowledge, and procedural transfer in a variety of math topics for learners ranging from preschool age to adulthood. However, evidence that self-explanation reliably promotes learning within a classroom context or retention of knowledge over a delay is much more limited. Further, the effectiveness of self-explanation is not dependent on whether the control condition spends a comparable amount of time on task, but it is increased when self-explanation responses are scaffolded.

We identified four evidence-based guidelines for effectively promoting self-explanation. By scaffolding high-quality explanations via training or structured responses, by designing prompts to carefully balance attention to all of the important content, by prompting learners to explain correct information, and by prompting them to explain why incorrect information is incorrect when appropriate, learners are more likely to benefit from prompts to self-explain. Future research is needed to expand and refine guidelines for when and how to effectively promote self-explanation, especially in classroom contexts and in comparison to alternative instructional techniques.