
Modern learning technology (e.g., hypermedia systems, intelligent tutoring systems, microworlds) usually provides information in various forms such as text, “realistic” pictures, formal graphs, or algebraic equations. In other words, information is presented by multiple external representations (MER). Although these MER can support learning processes in various ways, the integration function is most important (see Ainsworth, 2006). This function refers to the fact that internally representing and integrating different external representations on an abstract level can lead to deeper understanding. Indeed, it is typical of experts to have multiple internal representations (de Jong et al., 1998). Consider the example of linear regression: In order to approach expert-like understanding, a learner has to encode and integrate verbal-conceptual information about the meaning and interpretation of regression analyses, the corresponding equation (e.g., y = a + bx), and typical scatter plots with regression lines.
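To make this example concrete, the algebraic representation and its verbal-conceptual reading can be juxtaposed as follows (standard regression notation; the interpretive glosses are the usual textbook readings, added here for illustration):

```latex
% Simple linear regression: the algebraic representation
\[
  y = a + b\,x
\]
% Verbal-conceptual reading (standard interpretation):
%   a: intercept -- the predicted value of y when x = 0
%   b: slope -- the predicted change in y per one-unit increase in x
% Integrating both with a scatter plot (the regression line's height at
% x = 0 is a; its steepness is b) is the kind of cross-representational
% mapping meant here.
```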

An instructional problem arises from the fact that students very often do not spontaneously integrate different MER and they may not be successful even when trying to do so (e.g., Ainsworth, 2006). As a consequence, although MER presented by learning technology are expected to foster learning, they frequently do not enhance and sometimes even impede learning (e.g., Ainsworth, Bibby, & Wood, 2002). Against this background, learners must be supported to productively use MER.

Typical instructional procedures to support the integration of MER include measures that make salient which elements in one representation correspond to which elements in another representation. For example, text and pictures are often presented in an integrated format, meaning that the two information sources are not provided in separate information boxes; instead, the text parts are located in close proximity to the corresponding parts of the picture (e.g., Chandler & Sweller, 1991). Another possibility is color coding (e.g., Kalyuga, Chandler, & Sweller, 1999), in which the same colors are used for corresponding information elements in different representations. Although such instructional procedures can foster learning, they have the disadvantage of supporting the mapping of different representations only on the surface level (e.g., Berthold & Renkl, 2009; Seufert & Brünken, 2006). They do not directly support the integration of different representations at an abstracted and deep (i.e., semantic) level. For example, Fig. 26.1 provides a multi-representational worked example. By integrating the tree diagram and the equation, it becomes clear why the fractions have to be multiplied. Ideally, the learners would integrate the multiplication sign of the equation and the “points of branching” in the tree diagram in order to understand the underlying structure, that is, that the multiplication sign stands for the inclusion of all possible combinations represented by the 20 branches in the pictorial tree diagram. The employed color coding, however, just hints at “which belongs to what”; it does not convey conceptual information, which has to be inferred by integrating the MER on an abstract, semantic level.
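To spell out the integration target for an example of this kind: assuming a two-stage random experiment with five possible first events and four remaining second events (the structure described for the prompts later in this chapter; the numbers in the actual screenshot may differ), the deep-level mapping reads:

```latex
% Plausible reconstruction of the arithmetic behind a worked example of
% the Fig. 26.1 type (five first events, four remaining second events);
% illustrative only.
\[
  P = \frac{1}{5} \cdot \frac{1}{4} = \frac{1}{20},
  \qquad
  5 \times 4 = 20 \ \text{branches in the tree diagram.}
\]
% The multiplication sign corresponds to the points of branching: each of
% the 5 first branches forks into 4 further branches, so all 20 possible
% combinations are included.
```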

Fig. 26.1 Screenshot from a learning environment with worked examples from the domain of probability

In a series of studies, we investigated two measures, tightly related to metacognition, for fostering the integration of MER in computer-based learning environments at the semantic level: (a) self-explanation prompts and (b) informing the learners about the function of MER. We employed learning environments about mathematics, typically but not exclusively about probability (see Fig. 26.1). The learners could gain conceptual understanding of the domain as well as domain-specific problem-solving skills (i.e., procedural knowledge). The participating learners were typically senior high-school students or university freshmen.

Self-Explanation Prompts

The Self-Explanation Effect

Chi, Bassok, Lewis, Reimann, and Glaser (1989) introduced the “self-explanation effect” by showing that students who engage in actively explaining the solution procedures of worked examples to themselves achieve better learning outcomes; the self in self-explanation, thus, refers both to the agent who provides the explanation and, even more importantly, to the addressee of the explanation. Different concrete learning activities are subsumed under the umbrella of self-explanation depending on the specific authors and, in part, on the specific study (for a recent overview, see Fonseca & Chi, 2011). In any case, self-explanations go beyond the information given. Four very typical types of self-explanations are principle-based self-explanations (i.e., relating solution or problem features to underlying domain principles), goal-operator elaborations (i.e., explicating the subgoals achieved by certain operators), elaborations on the preconditions for applying certain operators, and identifying commonalities and differences between examples or problems (see Chi et al., 1989; Reimann & Neubert, 2000; Renkl, 1997, 2011).

It has since been shown that self-explanations foster knowledge acquisition in a variety of learning settings such as text learning (Chi, de Leeuw, Chiu, & LaVancher, 1994; Ozuru, Briner, Best, & McNamara, 2010) or problem solving (e.g., Aleven & Koedinger, 2002). Roy and Chi (2005) also argued that self-explanations are especially helpful when learning from MER (called multimedia in their chapter); however, they mainly relied on indirect evidence.

An instructional problem is that many learners do not spontaneously engage in effective self-explanation activities (Renkl, 1997). A well-established form of assistance is the use of self-explanation prompts (see Koedinger & Aleven, 2007). Prompts are questions or hints that induce productive learning processes. They are designed to overcome passive or superficial processing by inducing activities that the learners are, in principle, capable of but do not spontaneously demonstrate, or demonstrate only to an unsatisfactory degree (production deficiency; e.g., Pressley et al., 1992). For example, Atkinson, Renkl, and Merrill (2003) showed that prompting principle-based self-explanations in a computer-based learning environment that provided worked examples on probability led to favorable learning outcomes (for similar findings on self-explanation prompts in computer-based learning environments see, e.g., Aleven & Koedinger, 2002; Conati & VanLehn, 2000; Schworm & Renkl, 2007).

Is Self-Explanation Metacognition?

Self-explanation is often considered to be a metacognitive learning strategy (e.g., Aleven & Koedinger, 2002; Conati & VanLehn, 2000). Against the background of the classical notion of metacognition as “cognition about cognition” (e.g., Efklides, 2008; Flavell, 1979; Nelson, 1996), one might argue that self-explanation is “just” a cognitive learning strategy because activities such as justifying solution steps by underlying principles or subgoals to be achieved are not related to cognition but to the learning domain. So, is self-explanation really a metacognitive activity? The answer provided in this chapter is very clear: yes and no. How can such an answer be clear?

Recently, Renkl (2008, 2009) has argued that categorizing certain learning activities into the usual strategy categories is rarely convincing. For example, in Weinstein and Mayer’s (1986) classic taxonomy, “generating an example” would be classified as a cognitive strategy or, more specifically, as an elaboration (i.e., relating new contents to prior knowledge or experiences), but not as a metacognitive strategy. However, learning activities such as “generating an example” can, of course, fulfill several functions. Generating one’s own example not only has an elaborative function but can also tell the learners whether or not they have understood a concept or principle (i.e., generating one’s own example usually requires understanding). Against the background that certain learning activities can very often fulfill different functions, Renkl (2008, 2009) argues that the analysis of learning activities should mainly consider the function of activities, being aware (a) that “superficially” different learning activities can fulfill the same function (e.g., generating an example and self-questioning can both have the function of comprehension monitoring) and (b) that one activity can fulfill different functions (e.g., generating an example can have both the function of comprehension monitoring and that of elaboration).

Under such a functional perspective, a “clear answer” might be to say yes and no when considering self-explanation as metacognition. The typical self-explanation activity of principle-based explanation (i.e., relating a solution or problem feature to a domain principle) elaborates on the learning contents on the one hand. On the other hand, it can lead to metacognitive knowledge about task types and solution strategies (Flavell, 1979). In particular, self-explanations should lead to conditional knowledge (Paris, Lipson, & Wixson, 1983; Schraw, 1998), that is, knowledge about the “when and why” of knowledge, in particular of solution strategies.

Experiments on Prompting Self-Explanation for Processing Multiple Representations

In three experiments, we employed learning environments in the domains of combinatorics and probability. When teaching these closely related domains, it is common to use multiple representations. In addition to text (e.g., problem formulations), there are two typical types of solution methods: an arithmetic solution (relying on an equation) and a pictorial solution (relying on a tree diagram). In all experiments, we tested self-explanation prompts as an instructional support procedure. As the main dependent variables, we assessed conceptual understanding and problem-solving performance (procedural knowledge).

Grosse and Renkl (2006, Exp. 1) analyzed the effects of self-explanation prompts in comparison to instructional explanations or no such support when students learned combinatorics from worked examples with multi-representational solutions (i.e., arithmetic equation and pictorial tree diagram). We tested 170 student teachers at a university of education (mean age approximately 22 years) in a 2 × 3 factorial experiment with the factors (a) type of solutions (multi-representational solutions vs. mono-representational solutions) and (b) instructional support (self-explanation prompts vs. instructional explanations vs. no support).

The learning materials included two pairs of examples (four examples in total). Each example could be solved by two different methods (arithmetic equation or pictorial tree diagram). Each pair contained two structurally identical problems that also shared a number of surface features in order to make the correspondence salient to the learners. The first factor, type of solutions, referred to the number of different solutions presented (multi-representational vs. mono-representational). In the “multi-representational solution” conditions, the two almost identical examples of a pair were solved using different solution methods (i.e., a pictorial tree diagram in one example and an arithmetic equation in the other example). Thus, the participants could learn that more or less the same problem can be solved by two different solution procedures. In the “mono-representational solution” conditions, both examples of a pair were solved with the same solution method; the two examples of one pair included a pictorial tree diagram, and the two examples of the other pair included an arithmetic equation. Thus, two different solution methods were demonstrated in the “mono-representational” conditions as well; however, they were not presented as being interchangeable. The second factor, instructional support, referred to the help the learners received: open self-explanation prompts vs. instructional explanations vs. no support. For the multi-representational solution conditions, self-explanation prompts and instructional explanations concentrated on commonalities between the pictorial and arithmetic solutions and on advantages and disadvantages of these methods depending on the given problem type. The learners in the “self-explanation prompts” condition were asked to answer in written form, for example, the following question for an example pair: “Where do you see commonalities and differences between the two solution methods?” The instructional explanations could be regarded as the answers to the self-explanation prompts. The self-explanation prompts and instructional explanations for the “mono-representational solution” conditions focused on a single solution but were roughly equivalent to those used for the multi-representational solution groups with respect to the number of covered aspects and the time necessary to process them (as determined by pilot studies).

We found that multi-representational solutions fostered conceptual knowledge and procedural knowledge. However, no positive effect was found for instructional support in the form of self-explanation prompts or instructional explanations. Self-explanation prompts actually even led to inferior conceptual understanding when learning with MER compared to having no support at all (there was no negative prompt effect when learning with mono-representational solutions). This finding confirmed recent assumptions that the demand to self-explain complex material (i.e., material including MER) may push cognitive load beyond its limits (Kalyuga, 2010; Sweller, 2006). Thus, even when self-explanations are prompted, they can be ineffective or can even have a detrimental effect with respect to conceptual understanding.

In line with this conclusion, we also found in a pilot study in the domain of probability that learners have difficulties with self-explanation prompts added to complex multi-representational materials (see Berthold, Eysink, & Renkl, 2009). When we used open self-explanation prompts (i.e., open questions inducing self-explanations such as “Why do you calculate the total acceptable outcomes by multiplying?”), the learners had severe difficulties in adequately answering such prompts. Often the learners could not provide the correct explanation. Thus, we assumed that the learners might benefit from stronger instructional support than open self-explanation prompts (cf. Roy & Chi, 2005). We therefore also included a condition with some form of instructional assistance (Koedinger & Aleven, 2007). Hence, in the main study of Berthold et al. (2009), we tested the effects of three conditions: “assisting self-explanation prompts” that directed the learners to integrate the MER on a conceptual level, open self-explanation prompts, and no self-explanation prompts. We presented eight worked examples with multi-representational solutions from the domain of probability in a computer-based learning environment. Participants were 62 psychology students with a mean age of about 25 years. In all conditions, a relating aid consisting of color coding and flashing was included to help learners see which elements in different representations corresponded to each other on a surface level (see Fig. 26.1). Supporting the learners in finding the corresponding parts of different representations was meant to free cognitive capacity for self-explanation and learning.

The experimental variation was realized as follows. Participants in the assisting self-explanation prompts condition received six questions such as “Why do you calculate the total acceptable outcomes by multiplying?” in each worked-out example. In the first worked-out example of each pair of isomorphic examples, the answers were supported in the form of fill-in-the-blank self-explanations (e.g., “There are ___ times ___ branches. Thereby, all possible combinations are included,” see Fig. 26.1). In the isomorphic examples that followed, this support was faded out, and the participants received six open self-explanation prompts (a sketch of this fading logic follows below). The answers had to be typed into corresponding text boxes. In the open self-explanation prompts condition, the learners were provided with six open self-explanation prompts only (e.g., an open answer to “Why do you calculate the total acceptable outcomes by multiplying?”) in each worked-out example. Both the assisting and the open self-explanation prompts emphasized relating the pictorial and arithmetic representations to each other on a structural level. For example, the prompt “Why is there a 4 in the denominator of the second single experiment, even though there are 20 branches in the tree diagram?” referred to the arithmetic representation (“the 4 in the denominator”) and to the pictorial representation (“20 branches in the tree diagram”). To answer this question, the learners had to relate the denominator of the arithmetic equation to the corresponding branches of the pictorial tree diagram. Thereby, they could understand that the 4 stands for the number of remaining events of one initial branch. Because there are five initial branches in the first single experiment, five times four branches, that is, 20 branches, are included.
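The following minimal sketch illustrates this fading logic in code form; all names are hypothetical, and it is not the implementation of the actual learning environment:

```python
# Illustrative sketch (hypothetical names): fading assisting
# self-explanation prompts into open prompts across a pair of
# isomorphic worked examples.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Prompt:
    question: str            # e.g., "Why ... by multiplying?"
    scaffold: Optional[str]  # fill-in-the-blank template; None = open prompt

def prompts_for_example(questions: List[str], scaffolds: List[str],
                        first_of_pair: bool) -> List[Prompt]:
    """First example of a pair: assisted (fill-in-the-blank) prompts;
    second example: support faded out, i.e., open prompts only."""
    if first_of_pair:
        return [Prompt(q, s) for q, s in zip(questions, scaffolds)]
    return [Prompt(q, None) for q in questions]

questions = ["Why do you calculate the total acceptable outcomes "
             "by multiplying?"]
scaffolds = ["There are ___ times ___ branches. Thereby, all possible "
             "combinations are included."]

assisted = prompts_for_example(questions, scaffolds, first_of_pair=True)
faded = prompts_for_example(questions, scaffolds, first_of_pair=False)
```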

In the condition without self-explanation prompts (control condition), the learners studied the same worked examples as presented in the other two conditions. The only difference was that these learners were merely provided with a text box for taking notes; they did not receive any prompts.

Both types of self-explanation prompts fostered conceptual knowledge. Furthermore, assisting self-explanation prompts had additional effects on conceptual understanding in comparison to open self-explanation prompts. The effect on conceptual understanding was mediated by self-explanations that not only relate a solution step to an underlying principle but also explicate the rationale of the principle (e.g., “For the denominator, there are five times four branches. Thus, each of the five first branches of the tree diagram forks out into four further branches, as each of the five first events can occur in combination with one of the four remaining events,” Berthold & Renkl, 2009). With respect to procedural knowledge, the pattern of results shows that either type of prompt was effective; the two prompt types did not differ.

To conclude, both prompt types fostered procedural knowledge. For conceptual knowledge, assisting self-explanation prompts, interleaved with open self-explanation prompts, worked best because they supported the learners in generating self-explanations about the rationale of a principle. The overall pattern of performance indicated that assisting self-explanation prompts best fostered the integration of MER. In particular, for enhancing high-quality self-explanations and conceptual understanding, assisting self-explanation prompts should be provided.

In a further experiment, Berthold and Renkl (2009) took up these findings on the effects of self-explanation prompts. We used a relating aid and assisting self-explanation prompts that were more or less identical to the ones used in Berthold et al. (2009). In a computer-based learning environment that was also almost identical to that of Berthold et al., 170 high-school students (mean age approx. 16 years) learned about probability theory. We varied the type and number of representations (multi-representational solutions vs. mono-representational solutions) and the availability of two support procedures: (a) a relating aid and (b) assisting self-explanation prompts (for details of the complex experimental design of this study, see Berthold & Renkl, 2009). In the multi-representational conditions, the solution steps were provided in the form of both a pictorial tree diagram and an arithmetic equation in each example. In the mono-representational conditions, which we included to have a baseline for evaluating the effects of multiple solutions, the solution steps were presented in the form of either a pictorial tree diagram or an arithmetic equation.

We found that MER per se did not foster conceptual understanding. In contrast, both instructional support procedures enhanced it: The relating aid and assisting self-explanation prompts had additive effects on conceptual understanding. Similar to Berthold et al. (2009), the effects of self-explanation prompts on conceptual knowledge were mediated by self-explanations that not only relate a solution step to an underlying principle but also explicate the rationale of the principle.

Interestingly, there was a relatively small but statistically significant negative effect of self-explanation prompts on procedural knowledge. This detrimental effect was mediated by prompt-induced incorrect self-explanations in terms of mixing up different probability principles. Hence, the assisting prompts had double-edged effects: positive effects on conceptual knowledge, via the elicitation of productive self-explanations, and simultaneously negative effects on procedural knowledge, via the elicitation of incorrect self-explanations (for analogous double-edged effects of self-explanation prompts, see also Berthold, Röder, Knörzer, Kessler, & Renkl, 2011). Note, however, that Berthold et al. (2009) found positive effects for the same type of prompts and the same learning contents on both conceptual and procedural knowledge. The main difference between these experiments was how advanced the participating learners were. Whereas generally positive effects were found for university students in a (selective) psychology program, the double-edged effects were found for high-school students. For the latter learners, the learning materials were more complex in relation to their prior knowledge. Hence, a tentative conclusion is that prompts lose their general effectiveness if learners are heavily loaded by the complexity of the learning materials (Kalyuga, 2010; Sweller, 2006). Prompts added to the learning material may overload these learners or force them to concentrate on selected aspects (i.e., conceptual aspects) in order to prevent overload.

In summary, prompting self-explanation can help learning from MER. However, there are boundary conditions to be considered. Prompts can lead to negative effects if the learners are confronted with learning materials that are very complex in relation to their prior knowledge. In addition, it may depend on the desired learning outcomes whether prompts are effective and whether it is sensible to employ assisting prompts. Assisting prompts are particularly helpful when conceptual understanding should be fostered.

“Instructions for Use” of Multiple Representations

The rationale of employing prompts is to more or less directly activate self-explanations. As previously shown, such an “intrusive” method can have detrimental side effects, presumably by posing overwhelming demands on the learners. Another, more indirect option for inducing effective processing of MER might be to inform the learners about what to do with MER. Note, however, that such an intervention also presupposes that the learners merely have a production deficiency, that is, that they can “produce” the appropriate strategy once they are informed how to use the MER.

Metacognitive Knowledge on How to Use the Affordances of Learning Environments

A typical metacognitive instructional procedure is to inform learners about “what to do with strategies.” In other words, the learners are provided with conditional knowledge about when and why to use certain knowledge such as strategies (Paris, Lipson, & Wixson, 1983). In recent studies, we expanded this idea and informed learners about “what to do” with the instructional affordances of a learning environment (e.g., multiple representations). Although this knowledge is not about strategies or about tasks (i.e., the learning tasks; see Flavell, 1979), it can be considered metacognitive knowledge about the instructional context, that is, about how to use the instructional features of a learning environment.

When instructional designers include certain elements in learning environments, they may rely on certain models, empirical findings, and—in many cases—their intuitive knowledge about what can help learning. For example, when they present information in MER, they have some ideas about how these instructional features should be used. In the case of MER, it is typically expected that the learners relate the different representations to each other in order to gain deeper understanding (e.g., Ainsworth, 2006; Berthold et al., 2009). Often, however, the learners ignore some representations and concentrate on only the one type of representation that seems most useful to them (Ainsworth, 2006). Such behavior can be seen as a strategy deficit on the learners’ side (e.g., a production deficiency); accordingly, prompts that activate effective strategies seem to be a sensible remedy (see Berthold et al., 2009; Berthold & Renkl, 2009). However, one can also ask: How should learners know the instructional designer’s ideas about how the learning environment is to be used? Maybe the deficit of suboptimal use of instructional affordances such as MER is, at least in part, a “deficit” of the instructional designer who has not provided “instructions for use.” Indeed, Schwonke, Berthold, and Renkl (2009) found that learners are hardly aware of any helpful function that MER can have.

Experiments on “Instructions for Use”

Schwonke, Berthold, et al. (2009) used a slightly modified version of the learning environment of Berthold and Renkl (2009; the version without relating aid and prompts). We tested the effects on learning outcomes of informing learners about how to use MER. More specifically, we briefly explained to the learners that there are two solution procedures—tree diagram and arithmetic equation—and that the tree diagram should be used to gain an understanding of how the arithmetic equation is related to the problem formulation. For this purpose, we used the metaphor of a bridge (“… the tree diagrams ‘build’ a bridge between the problem texts and the equations…”). This instruction consisted merely of an enrichment of one introductory screen that oriented the learners to the upcoming type of learning tasks in the form of worked examples. In addition, a line drawing of a bridge briefly popped up between the single worked examples as a reminder.

In this experiment, 30 students of psychology were randomly assigned to the “informed” condition or a control condition (introductory screen without instructions for the use of MER and without “reminding” line drawings). In addition, we collected eye-tracking data in order to gain some insight into learning processes.

The instructions for use led to higher learning outcomes (as assessed by a posttest that included problems tapping conceptual and procedural knowledge) without increasing learning time. In addition, this effect was mediated by altered patterns of attention, as assessed by eye-tracking, in students with different levels of prior knowledge (e.g., the instruction prevented learners with high prior knowledge from neglecting the tree diagrams and led the low-prior-knowledge learners to study the presented examples more efficiently; for details, see Schwonke, Berthold, et al., 2009).

In a nutshell, a lean intervention that provides metacognitive knowledge about the use of MER can lead to substantial learning gains. A limitation of this study might be seen in the fact that our instructions for use concentrated on just one aspect of certain MER. However, complex learning environments pose many problems to learners when they try to optimally use the environments’ multiple information sources and representations. In other words, when learning environments are suboptimally constructed in the sense that they impose manifold integration demands, “instructions for use” as employed by Schwonke, Berthold, et al. (2009) might not work.

Schwonke, Ertelt, Otieno, Renkl, Aleven, and Salden (2013) employed a rather complex learning environment that is widely used in the field: Cognitive Tutor (Koedinger & Aleven, 2007; Koedinger & Corbett, 2006; see also Carnegielearning.com, 2011). This learning environment is an intelligent tutoring system, primarily for mathematics learning. We used a Cognitive Tutor lesson on geometry that included worked solution steps that were gradually faded and replaced by steps to be solved by the learners; this version had proved to be particularly effective in prior studies (e.g., Schwonke, Renkl, et al., 2009). Nevertheless, informal observations showed that many learners had difficulties in handling this generally effective environment. These difficulties are not really surprising given that the geometry lesson involved MER (e.g., problem text, diagrams, and computations) and a number of support facilities such as hints for performance demands, a glossary including the relevant geometry principles, and areas providing an overview of the single subgoals to be achieved when solving the geometry problems at hand; these help devices also included multiple representations. We tested the effects of a cue card providing metacognitive knowledge about what to do with all these elements (Fig. 26.2). The design of the cue card was partly inspired by a help-seeking model developed by Aleven and Koedinger (2000).

Fig. 26.2 Cue card providing metacognitive knowledge about the use of different elements of a Cognitive Tutor lesson (translated from German)
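To give a flavor of the kind of knowledge such a cue card externalizes, the following sketch encodes a few help-use heuristics in the spirit of Aleven and Koedinger’s (2000) help-seeking model; the specific rules and names are invented for illustration and do not reproduce the actual cue card or model:

```python
# Invented illustration: help-use heuristics of the kind a cue card
# might convey for a Cognitive Tutor lesson (not the actual content).
def recommend_action(step_familiar: bool, failed_attempts: int,
                     principle_unclear: bool) -> str:
    """Suggest how to use the tutor's elements at the current step."""
    if principle_unclear:
        return "look up the relevant geometry principle in the glossary"
    if step_familiar and failed_attempts == 0:
        return "try to solve the step yourself before asking for help"
    if failed_attempts >= 2:
        return "request the next, more specific hint"
    return "reread the problem text and check the subgoal overview"

print(recommend_action(step_familiar=False, failed_attempts=2,
                       principle_unclear=False))
```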

In this experiment, 60 high-school students with a mean age of about 14 years were randomly assigned to one of two conditions. Half of the participants worked on a Cognitive Tutor geometry lesson with metacognitive support in the form of the cue card available; the other half worked without a cue card. The length of the lesson was about 1 h. As learning outcomes, we used measures of conceptual and procedural knowledge. Again, eye-tracking data were collected to gain insight into learning processes.

We found that the provision of the cue card reduced learning time by about 20% in comparison to the control condition. With respect to conceptual knowledge, learners with low prior knowledge profited from the cue card; no such positive effect was found for learners with high prior knowledge. With respect to procedural knowledge, we also found that the learners with clearly below-average prior knowledge (lower third) profited from the cue card. Mediation analyses with the eye-tracking data suggested that the cue card effects on conceptual knowledge were due, in particular, to a more focused use of the Cognitive Tutor’s different elements. Low-prior-knowledge students spent less time inspecting the available help facilities while simultaneously achieving better learning outcomes. Evidently, the cue card prevented unfocused and overextended use of the help facilities. In a nutshell, the cue card had positive effects for all learners in terms of reducing learning time. However, only learners with low prior knowledge also gained more knowledge within this reduced learning time.

In summary, the studies by Schwonke, Berthold, et al. (2009) and Schwonke et al. (2013) showed that providing learners with metacognitive knowledge about the affordances of learning environments can be sensible. It is important to note that these interventions were very parsimonious. They did not increase learning time in Schwonke, Berthold, et al. (2009) and even saved time in Schwonke et al. (2013). Nevertheless, we have to admit that the positive effects differed between the two studies: Schwonke, Berthold, et al. (2009) found “generally” enhanced learning outcomes, whereas Schwonke et al. (2013) found “generally” reduced learning times and enhanced outcomes only for low-prior-knowledge learners. In addition, the cue card of Schwonke et al. (2013) included a number of elements that were not related to MER. Hence, it might be that other aspects, and not primarily the information about MER, were effective. Substantial further research is needed to determine the specific effects of the different kinds of metacognitive information about learning environments’ affordances and the boundary conditions (e.g., the learners’ level of prior knowledge) under which these effects occur.

Conclusions and Outlook

In this final section, we outline the most important points that we have learned from our studies and the issues that have to be addressed in further studies. In doing so, we touch on theoretical, methodological, and instructional issues.

1. One important issue relates to the generalizability of our findings. We have gained some knowledge about how to foster the mathematics learning of senior high-school students and university students by self-explanation prompts and by “instructions for use.” Obviously, it is not straightforward to generalize our findings to other learning domains and to younger learners. With respect to generalizability to other age groups, it is important to note that we found striking differences even between university students (Berthold et al., 2009) and senior high-school students (Berthold & Renkl, 2009). Although this difference in educational level may not seem large at first glance, it makes it all the more implausible that the present findings can be generalized to younger learners. Developmental research on metacognition and strategies (for an overview, see Schneider & Bjorklund, 1998) also shows that substantial development can be found up until the age of 16 (i.e., the average age of the participants in Berthold & Renkl, 2009). Hence, younger students might have not only production deficiencies, which can be remedied by prompts or “instructions for use,” but also more profound deficits (e.g., mediation deficiencies, meaning that the learners are not able to execute the relevant strategies appropriately). Successful interventions with younger students might need additional instructional components by which strategies are explicitly taught (e.g., via modeling or worked examples; see Hübner, Nückles, & Renkl, 2010).

2. Although we have shown that self-explanation prompts can be helpful when learning from multiple representations, the pattern of results clearly shows that there are boundary conditions, even for a given educational level. If these conditions are not met, even negative effects can result. As discussed, one important boundary condition seems to be the complexity of the learning materials in relation to the learners’ prior knowledge. Prompts can be useless or even detrimental when necessary prior knowledge prerequisites are missing. A theoretical as well as instructional problem is that we presently lack ways to specify, a priori and in a precise manner, what prior knowledge is “necessary.” It might not be too difficult to determine whether a learner merely has a production deficiency, so that s/he is actually able to provide adequate self-explanations when prompted. However, it might be much more difficult to determine when a learner gets overloaded by prompts. Perhaps recent developments in online measures of cognitive load (Park & Brünken, 2010: deviations in foot-tapping rhythm; Walter, Cierniak, Bogdan, Rosenstiel, & Gerjets, 2010: EEG measures) can help to solve this challenge in the future, at least for research purposes (such measures are still too intrusive for practical use in the field). Prompts could then be automatically omitted if learners show (too) high cognitive load (see the sketch following this list). Nevertheless, it would be desirable to first refine our theoretical models of self-explanation so that precise assumptions can be made about the prior knowledge prerequisites necessary for prompts to be effective.

3. The idea of providing instructions for use with respect to the central affordances of learning environments is relatively new (for similar approaches, see Roll, Aleven, McLaren, & Koedinger, 2007, 2011). This instructional procedure seems promising because it is parsimonious and can even save learning time (i.e., greater learning efficiency). It has to be noted, however, that so far all we have are initial promising studies, and these studies used instructions for use that were rather specific to the respective learning environments. What is presently missing is a general rationale or a set of principles to guide the construction of instructions for use that fit other learning environments. A sound basis for the construction of instructions for use might come from usability studies (Nielsen, 1994) or from learning process data (e.g., thinking-aloud protocols) showing suboptimal use of the learning environment. Such evidence might reveal that learners lack specific knowledge about how to best use the affordances of a given learning environment. However, in practical settings such data are often not available; rather, informal observations and “intuition” have to be used to determine what information might best help the learners to work in a learning environment. On the other hand, we assume that future learning technologies will more easily produce usable log data of student activities so that information about how learners use these environments will be increasingly available.
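Returning to the adaptive-omission idea raised in point 2 above, a minimal sketch of such a gating mechanism could look as follows; the load measure, its scaling, and the threshold are assumptions for illustration, not an existing system:

```python
# Hypothetical sketch: omit a self-explanation prompt when an online
# cognitive-load estimate (assumed to be scaled to 0..1, e.g., derived
# from an EEG-based index) signals overload. Threshold is arbitrary.
from typing import List, Optional

def next_prompt(pending: List[str], load_estimate: float,
                overload_threshold: float = 0.8) -> Optional[str]:
    """Return the next prompt, or None to omit prompting under overload."""
    if load_estimate >= overload_threshold:
        return None  # learner appears (too) highly loaded: skip the prompt
    return pending.pop(0) if pending else None
```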

With respect to instructions for use, it is also an open question when it is best to provide them: in advance (Schwonke, Berthold, et al., 2009) or concurrently with the learning environment (Schwonke et al., 2013)? Both options have advantages and disadvantages. For example, providing a “concurrent” cue card as in Schwonke et al. (2013) might be suboptimal because it creates a type of split-attention problem (i.e., problems in relating the contents of the cue card to the learning environment and distraction from the learning contents). On the other hand, instructions for use provided in advance might be forgotten while working in the learning environment. Further research has to compare different options for presenting such instructions.