1 Introduction

Technology-rich learning environments are learning environments in which application software is used to support learners in achieving instructional goals [1]. A fundamental characteristic of this type of learning environment is that the design and evaluation of the technology are guided by theories of learning and instruction. A case in point is the design of computers as cognitive tools [2–6], a metaphor that conceptualizes the design process as the creation of external representations aligned with the cognitive activities involved in learning. In doing so, the application software may perform several functions, namely, to support logic, memory, or other activities that would otherwise be out of the learner’s reach, and to direct attentional resources to higher-order processes by automating lower-order thinking skills [7].

The use of computers as metacognitive tools [8–11] emerged from this long-standing research tradition. This development placed the emphasis on learners’ efforts to regulate their own learning during the design process. Self-regulation requires a learner to set goals, use strategies to achieve these goals, and monitor their own progress [12–16]. It involves motivation and awareness as well as the capacity to adjust by evaluating one’s own learning. Self-regulation raises an important design challenge for metacognitive tools since learners’ efforts to regulate their own learning involve latent and unobservable processes, which the software application should capture and analyze in an unobtrusive manner [17, 18]. A considerable amount of literature has been published during the last decade on the adaptivity of metacognitive tools and how this type of assessment can improve instruction for learners who have difficulty regulating their own learning [19–21].

A growing body of empirical evidence documents learners’ difficulties in regulating their own learning of complex topics in the basic sciences [22] and social sciences [23, 24]. Researchers have documented different classes of failures that lead to minimal learning, referring to them as instances of dysregulated learning [25]. In studying historical texts, for instance, dysregulated learning may consist of insufficient amounts of planning and monitoring activities, despite the fact that setting goals, in particular, is predictive of declarative knowledge gains [23]. In addition, although learners often summarize texts and take notes, these strategies are typically less effective than engaging in elaborative and inferential activities.

The structure of historical texts is an important antecedent of dysregulated learning. Learners often fail to notice instances of confusion and to offer plausible explanations while reading historical texts that do not mention the causes of events [24]. In such cases, self-regulatory knowledge acts as compensatory processing to infer the most likely causes that led to the occurrence of the event under investigation. Self-regulated learners are able to search across multiple text documents and recall prior knowledge in an effort to build a coherent mental representation of a chain of events.

Given that learners may lack the requisite knowledge, researchers have outlined principled methods to revise the causal structure of historical texts with the aim of facilitating comprehension [26, 27]. This approach assumes that uncertainty is undesirable and should be minimized by providing coherent explanations. This line of research hypothesizes that more coherent texts require less inferential processing; therefore, learners should demonstrate better learning outcomes when texts are revised to make them more coherent. This effect depends on several interacting factors, including the source of the incoherence, the reader’s amount of prior knowledge, and whether learning is assessed in terms of the ability to recall or to understand the relevant material.

On the other hand, others have maintained that confusion can be conducive to learning if it is appropriately induced and resolved with the necessary assistance [28, 29]. This alternative approach maintains that technology-rich learning environments can intentionally induce confusion to promote deep inquiry that benefits learning, provided that learners engage in the requisite activities with the help of software features. This research tradition holds that confusion is beneficial to learning when it occasions the activities associated with the search for a solution, namely, causal reasoning and effortful elaboration. Rather than eliminating potential sources of confusion that may arise in future learning situations, learners should be scaffolded in resolving these issues, which increases the likelihood that they will apply the relevant skills to other situations.

This chapter examines the latter approach by describing the MetaHistoReasoning tool (MHRt), a computer-based learning environment designed to induce confusion that benefits learning through problem-solving within the domain of history [30]. The MHRt induces confusion by omitting any information pertaining to the causes of an event. Learners are expected to attain a coherent understanding of the event by searching and transforming information obtained from authentic source documents in accordance with disciplinary-based practices. Modules embedded in the MHRt target the requisite skills involved in regulating one’s own investigation into the causes of the event. The scope of this chapter is limited to comparing and contrasting assessment mechanisms with respect to different stages of skill development. To do so, an illustrative case study is reviewed to exemplify how the assessment mechanisms adapt instruction to the specific needs of different learners. The next section provides a brief review of the three-phase model of cognitive and metacognitive activities in historical inquiry, the theoretical framework that is used to define the aforementioned skills.

2 The Three-Phase Model of Cognitive and Metacognitive Activities in Historical Inquiry

The existing models of self-regulated learning share several basic assumptions [31, 32]. First, learners are actively involved in making sense of information given the resources that originate from their own cognitive system or the external environment. Second, the notion of phases characterizes learners’ efforts to plan, monitor, control, and evaluate their own learning in an iterative manner. Third, conditions inherent to the learner and the situation constrain the self-regulation of learning, including relevant cognitive, affective, behavioral, and contextual factors. Fourth, knowledge about self-regulation determines skill deployment in response to obstacles and challenges to learning. Fifth, the deployment of these skills mediates learning outcomes. Although self-regulated learning theorists have outlined detailed accounts of these mechanisms [13–16], researchers have recently called for further clarification of the domain-generality or domain-specificity of the relevant constructs [17, 33, 34].

With regard to the domain of history, the three-phase model of cognitive and metacognitive activities in historical inquiry provides a domain-specific account of self-regulated learning [34]. According to the model, history learners regulate their own search for the causes of historical events. The search process is characterized by several phases, spanning from an initial lack of knowledge about the causes of the event to the reinstatement and attainment of a coherent understanding. Theoretical constructs from models of historical reasoning [35–37] and self-regulated learning [15, 38–40] are synthesized in order to account for the regulatory mechanisms that facilitate the learners’ transition across each phase. These mechanisms consist of metacognitive activities that are adaptively and iteratively deployed while investigating the causes of historical events.

Metacognitive monitoring activities involve the comparison of one’s own comprehension of an event against standards for causal coherence. Causation constrains the inquiry process through the need to interpret information obtained from sources in terms of events that logically follow from their antecedents [35, 41]. However, the causal structure of a narrative text is not necessarily conducive to comprehension since relevant information may be missing from the account of an event [27]. Self-regulated learners continually evaluate their understanding of the causes of historical events and take remedial actions when the explanation is unknown or uncertain.

Planning-related activities refer to setting goals that define the desired result of an inquiry into the causes of an event. In the early stages of an investigation, when the exact causes of the event are still unknown, self-regulated learners search for evidence to confirm a potential cause. However, as their understanding of the causes gradually becomes more certain, learners attempt to weigh the likelihood of other potential causes or to anticipate counter-arguments against their own account of the event. In doing so, self-regulated learners reinstate coherence in their understanding of the causes of an event by building an increasingly sophisticated argument.

Metacognitive control activities refer to the disciplinary-based strategies that are involved during the learners’ inquiries. These strategies, also known as historical thinking skills [37], stipulate how to evaluate the trustworthiness of a source document, gather and situate evidence within the time and place of its creation, find corroborating information across other sources, and use substantive concepts pertaining to the event under investigation. Self-regulated learners are able to choose and deploy the strategies appropriately and evaluate the certainty of the resulting argument.

As an example, a typical learner may notice that a text does not explain why an event occurred; for instance, the text describes the circumstances of the 2008 world financial crisis without mentioning its causal factors. Confused as to why investors were pulling their money out of banks, the learner may set the goal of investigating further by attempting to find information that would confirm that financial institutions were highly leveraged. To reach this goal, the learner first formulates a question: “What was the degree of financial leverage of a major financial institution, in particular, Lehman Brothers Holdings Inc., at the end of 2007?” Using a credible source of information, the annual report of the firm, the learner finds a leverage ratio of 31 to 1, suggesting that Lehman was at considerable risk. As such, the learner argues that investor panic was partially attributable to highly leveraged financial institutions, a claim that is corroborated by the fact that shares of Lehman plummeted sharply during the same time period. The learner may also contextualize this information by recalling that investor confidence was lowered by the near collapse of another firm, Bear Stearns, at the beginning of the following year. The learner may then engage in an additional line of inquiry in order to answer a follow-up question: “Did Lehman Brothers and Bear Stearns share a similar investment portfolio?” This example illustrates how the activities involved in regulating one’s own investigation are recursive, as the outcome of the previous search determines the direction of the next.

The three-phase model of cognitive and metacognitive activities in historical inquiry guides the development of the MHRt by decomposing the relevant activities into skill components. These skill components serve as the instructional goals of the MHRt as modules embedded within the system are designed to facilitate skill development. The Training Module implements example-based skill acquisition as an instructional approach, allowing learners to study examples of the requisite skills and to receive help in the form of hints and prompts [42, 43]. The Inquiry Module allows the learner to practice and refine the skills that were acquired in the previous module by performing a structured inquiry-based learning task [44–47]. The following sections describe the design of both modules, and how the system assesses the learners’ progress through each stage of skill development.

3 The MetaHistoReasoning Tool Training Module

3.1 The Design Guidelines of the Training Module

The Training Module supports skill acquisition by providing learners with a set of examples and prompting them to analyze and differentiate each skill. The module is organized as a series of phases, referred to as the training, categorization, and self-explanation phases. The training phase is completed at the beginning of the session, when an instructional video introduces the topic under investigation, the relevant skills, and the interface features of the module. The categorization phase requires the learner to analyze a series of examples by identifying the corresponding skill among a list of options, which includes the correct response. Learners make as many attempts as necessary to choose the correct option. The self-explanation phase starts at pre-determined intervals, when the learner explains how the skills shown in a set of examples contribute to the investigation of the topic. The examples are displayed in the lower left corner of the screen, as shown in Fig. 14.1. An example consists of a brief verbal utterance that resembles a historian talking aloud while analyzing a historical document.

Fig. 14.1 The main interface of the Training Module

Although each example demonstrates a specific skill through a unique utterance, the learner is also provided with sets of examples in order to illustrate how skills are interrelated in the context of an investigation. The sets are delivered in increasing order of complexity, containing one, three, and five examples, respectively. The order of the sets is determined on the basis of the learner’s performance during the categorization phase: four sets of examples, containing three or five examples each, are used to establish a baseline. If the baseline performance is greater than the 70% accuracy threshold, the learner skips the following sets of examples to either solve more complex sets or investigate the event in the Inquiry Module.
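To make the sequencing rule concrete, the sketch below shows one way the skip decision could be computed. Only the 70% threshold and the four-set baseline come from the description above; the function name and data layout are assumptions.

```python
# A minimal sketch of the set-sequencing rule described above; the function
# name and data layout are assumptions, not the MHRt implementation.

ACCURACY_THRESHOLD = 0.70  # baseline accuracy needed to skip ahead (from text)
BASELINE_SETS = 4          # number of sets used to establish the baseline


def next_step(set_accuracies: list[float]) -> str:
    """Decide whether the learner advances past the remaining example sets.

    set_accuracies holds the proportion of correct first-attempt
    categorizations for each completed set of examples.
    """
    if len(set_accuracies) < BASELINE_SETS:
        return "continue_current_sequence"  # baseline not yet established

    baseline = sum(set_accuracies[:BASELINE_SETS]) / BASELINE_SETS
    if baseline > ACCURACY_THRESHOLD:
        # Learner may skip ahead to more complex sets or to the Inquiry Module
        return "skip_to_complex_sets_or_inquiry"
    return "continue_current_sequence"


# Example: a learner averaging 80% across the four baseline sets skips ahead.
print(next_step([0.8, 0.75, 0.85, 0.8]))  # skip_to_complex_sets_or_inquiry
```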

The artificial pedagogical agent is located in the upper right corner of the screen. The agent interacts with the learner by providing definitions, prompts, and feedback. The definition of each skill is provided by the agent at the beginning of the session, when the learner solves the first sets of examples, each one demonstrating a single skill (e.g., “This example shows an historian asking a question. In doing so, the historian begins to search for the most important cause of the Acadian Deportation.”).

The prompts encourage the learner to either categorize an example (e.g., “Which instance of historical thinking does this example show? Choose the option that best describes what the historian says.”) or write a self-explanation regarding the skills shown in a set of examples (e.g., “Explain how each instance of historical thinking relates to the historian’s goal, which is to explain why the Acadian Deportation occurred.”). The feedback is provided by the agent immediately after the learner categorizes an example, and is either positive (e.g., “Your answer is correct.”) or negative (e.g., “Your answer is incorrect, try again.”).

3.2 Modeling Skill Acquisition in the Training Module

The skill acquisition model allows the MHRt to generate learning curves, a representation of the increasing rate of skill acquisition as a function of exposure to several examples of different skills in the context of the Training Module. The rate of skill acquisition is inferred on the basis of performance on the categorization task. A learning curve can be decomposed according to several performance metrics. These metrics include the observed and predicted cumulative percentage of correct attempts, the error ratio, and the time taken to categorize an example.

The cumulative percentage of correct attempts illustrates the rate of correct categorizations for each opportunity. Researchers have outlined several methods to model the rate of skill acquisition on the basis of user interactions with interface features [48, 49]. The skill acquisition model relies on a logistic function to predict whether a categorization attempt is correct or incorrect. The following parameters are included in the model: (1) the elapsed time duration in seconds; (2) the number of attempts; (3) the amount of exposure to examples of a particular skill; (4) the type of skill illustrated by the example.
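As an illustration of how such a model could generate predictions, the sketch below implements a logistic function over the four parameters listed above. The coefficient values are purely illustrative assumptions, not those estimated for the MHRt.

```python
# A minimal sketch of a logistic skill-acquisition model; the coefficients
# are illustrative assumptions, while the four predictors come from the text.
import math


def p_correct(elapsed_s: float, n_attempts: int,
              n_prior_examples: int, skill_weight: float,
              coefs=(-0.5, -0.02, -0.3, 0.25, 1.0)) -> float:
    """Predict the probability that a categorization attempt is correct.

    Predictors: (1) elapsed time in seconds, (2) number of attempts,
    (3) prior exposure to examples of the skill, and (4) a per-skill term
    (here collapsed into a single illustrative weight).
    """
    b0, b_time, b_att, b_exp, b_skill = coefs
    z = (b0 + b_time * elapsed_s + b_att * n_attempts
         + b_exp * n_prior_examples + b_skill * skill_weight)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link


# Example: a quick first attempt after plenty of prior exposure scores high.
print(round(p_correct(elapsed_s=8, n_attempts=1,
                      n_prior_examples=12, skill_weight=0.4), 3))
```

Note that the negative time coefficient reproduces the downward slope discussed below: the longer the learner deliberates, the lower the predicted probability of a correct categorization.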

On the one hand, the benefits of practice can be ascertained by comparing the observed and predicted performance as a function of the increasing number of opportunities to categorize examples. Figure 14.2 shows the cumulative average percentage of correct categorizations obtained by a learner and predicted by the model. The predicted probability value is also plotted across each opportunity to categorize an example. The learning curve fits the predictions of the model well, and the average values remain consistently above the 70% threshold, suggesting that the learner’s rate of skill acquisition is satisfactory.

Fig. 14.2 The cumulative percentage of correct attempts to categorize examples as a function of the number of opportunities

On the other hand, the predicted performance can be plotted for a specific opportunity in order to determine when the system should intervene and provide assistance to the learner. Figure 14.3 shows the predicted percentage of a correct categorization on the learner’s 10th opportunity, plotted across the elapsed time taken to categorize the example. The downward slope of the curve suggests that the probability of correctly categorizing the example decreases as a function of the elapsed time. The system can rely on this information to provide remedial instruction when the learning curve reaches predetermined thresholds. To do so, the pedagogical agent could deliver prompts to elaborate on that specific type of skill or provide the learner with a hint.

Fig. 14.3 The predicted percentage of a correct attempt to categorize an example on a specific opportunity

The error ratio consists of the probability of an incorrect categorization on a first attempt, relative to the probability of a correct categorization. The bar chart shown in Fig. 14.4 shows the error ratios corresponding to each skill, calculated for the learner’s entire session with the Training Module. As an example, the learner correctly categorized a total of 25 examples, and one of these correct categorizations corresponded to the skill of contextualization (i.e., 1/25 = 0.04). The learner incorrectly categorized a total of 7 examples, and 3 of those examples corresponded to the aforementioned skill (i.e., 3/7 = 0.4286). Therefore, the error ratio for contextualizing evidence is 10.714 (i.e., 0.4286/0.04 = 10.714), suggesting that the learner is more likely to incorrectly categorize an example that shows this particular skill. The error ratio is a useful metric for ordering the sequence of examples shown to the learner, since more examples can be delivered to target specific deficiencies in skill acquisition.
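The sketch below reproduces this arithmetic. The data layout and function name are assumptions; the counts match the worked example above.

```python
# A minimal sketch of the error-ratio metric; data layout is an assumption,
# and the counts reproduce the contextualization example from the text.
from collections import Counter


def error_ratios(correct_by_skill: Counter, incorrect_by_skill: Counter) -> dict:
    """Ratio of a skill's share of incorrect first attempts to its share
    of correct first attempts; values above 1 flag problematic skills."""
    total_correct = sum(correct_by_skill.values())
    total_incorrect = sum(incorrect_by_skill.values())
    ratios = {}
    for skill in set(correct_by_skill) | set(incorrect_by_skill):
        p_correct = correct_by_skill[skill] / total_correct
        p_incorrect = incorrect_by_skill[skill] / total_incorrect
        ratios[skill] = p_incorrect / p_correct if p_correct else float("inf")
    return ratios


# Worked example from the text: 1 of 25 correct and 3 of 7 incorrect
# categorizations involved contextualization, giving 0.4286 / 0.04 = 10.71.
correct = Counter({"contextualization": 1, "other_skills": 24})
incorrect = Counter({"contextualization": 3, "other_skills": 4})
print(round(error_ratios(correct, incorrect)["contextualization"], 2))  # 10.71
```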

Fig. 14.4 The error ratios for each type of skill exemplified in the Training Module

The elapsed time taken to study an example is calculated in seconds by summing the durations of each attempt the learner made to categorize the example. The durations can also be plotted separately for examples where the first attempt was correct or incorrect, as shown in Fig. 14.5. The stacked bar chart indicates that although the skill of contextualization was associated with the highest error ratio, the learner nonetheless spent, on average, less time studying and categorizing the relevant examples. The average elapsed time to incorrectly categorize an example of contextualizing evidence was 7.33 s, suggesting that the system should intervene by encouraging the learner to further analyze such examples.
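A similar computation yields the elapsed-time metric. The sketch below averages per-skill study times split by first-attempt correctness; the log entries and field layout are illustrative assumptions.

```python
# A minimal sketch of the elapsed-time metric; the log entries below are
# assumed data chosen to reproduce the 7.33 s average from the text.
from statistics import mean

# (skill, first_attempt_correct, elapsed_seconds) per categorized example
log = [
    ("contextualization", False, 7.0),
    ("contextualization", False, 8.0),
    ("contextualization", False, 7.0),
    ("corroboration", True, 21.0),
]


def mean_time(skill: str, correct: bool) -> float:
    """Average study time for a skill, split by first-attempt correctness."""
    times = [t for s, ok, t in log if s == skill and ok == correct]
    return mean(times) if times else 0.0


print(round(mean_time("contextualization", False), 2))  # 7.33
```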

Fig. 14.5 The average elapsed time taken to categorize each type of skill exemplified in the Training Module

4 The MetaHistoReasoning Tool Inquiry Module

4.1 The Design Guidelines of the Inquiry Module

The Inquiry Module supports the application and refinement of skills by allowing learners to inquire into the causes of historical events. The module facilitates a learner’s investigation through a digital collection of primary and secondary source documents with the help of embedded investigative tools. These tools are either dynamic and interactive (i.e., the pedagogical agent) or static (i.e., a series of instructional videos, the annotation tool, a digital library, and the explanation and evidence palettes), as shown in Fig. 14.6. Using these tools, learners are able to iteratively revise their explanation in light of new evidence and improve their understanding of the event.

Fig. 14.6 The main interface of the Inquiry Module

A new session with the Inquiry Module begins with the learner reading a short narrative text that describes the circumstances surrounding the event. However, the text purposely makes no mention of any causes that would allow the learner to explain why the event occurred. Once the learner is done reading the text, the agent prompts them to monitor their own understanding by highlighting the missing information (i.e., “Read this text and you will notice that it does not explain why Charles Lawrence made the decision to deport the Acadians.”) and asking an appropriate question (i.e., “What was the most important cause of the Acadian Deportation?”).

The task of the learner is to search across the digital collection of source documents in order to answer the question. The system interface is designed to structure the learner’s investigation into a series of steps, each involving the use of a specific skill. For instance, the learner first evaluates the trustworthiness of a source, then gathers evidence from this source, searches across other sources for similar or contradictory information, and situates the evidence within its time period. During the initial line of inquiry, the pedagogical agent guides the learner through each step. As a result of each line of inquiry, the learner’s explanation is revised in light of the new evidence that is obtained.

The static tools embedded in the module are tailored to support learners in using specific skills. Instructional videos are made available to explain how to use each skill within the context of the module. The annotation tool allows the learner to write notes and select listbox items that are constrained to facilitate the following skills: evaluating the credibility of sources and gathering, corroborating, and contextualizing evidence. A digital library allows the learner to use a wide range of substantive concepts corresponding to the time period, including the relevant historical figures (e.g., Governor Charles Lawrence), the broader societal and political context (e.g., the Seven Years’ War), and governmental policies (e.g., the Treaty of Utrecht). The explanation palette enables the learner to formulate an explanation by ranking the likelihood of several causal factors while investigating the event. The evidence palette serves as an external memory aid, allowing the learner to review a record of their own annotations.
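To clarify the kind of data these tools could record, the sketch below outlines plausible structures for an annotation and for the explanation palette’s rankings. All field, option, and cause names are assumptions based on the description above.

```python
# A minimal sketch of plausible data structures behind the annotation tool
# and explanation palette; all names here are assumptions, not the MHRt's.
from dataclasses import dataclass, field

# Listbox options constrained by the annotation tool (names assumed)
SKILLS = ("source_evaluation", "gathering", "corroboration", "contextualization")


@dataclass
class Annotation:
    skill: str        # one of SKILLS
    source_id: str    # document the note refers to
    note: str         # the learner's free-text annotation
    corroborating_sources: int = 0
    refuting_sources: int = 0

    def __post_init__(self):
        assert self.skill in SKILLS, f"unknown skill: {self.skill}"


@dataclass
class ExplanationPalette:
    """Ranks the likelihood of the candidate causes (1 = most likely)."""
    rankings: dict[str, int] = field(default_factory=dict)


# Example: a ranking like the case study's second line of inquiry
palette = ExplanationPalette(rankings={
    "political_figures": 2, "oath_refusal": 1, "economic_situation": 5,
    "ideology": 4, "military_conflict": 3,
})
note = Annotation("corroboration", "doc_12",
                  "Six sources agree that the oath refusal was decisive.",
                  corroborating_sources=6)
```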

4.2 Modeling Skill Practice and Refinement in the Inquiry Module

The skill practice and refinement model allows the system to detect states that are indicative of proficiency while the learner performs inquiries into the causes of historical events in the Inquiry Module. Learner states are classified through a series of decision rules applied to the items selected in the annotation tool and the causes ranked in the explanation palette. This argument-driven approach classifies learner states in terms of the type of goal pursued by the learner and whether strategies are appropriately used to achieve the goal, as shown in Figs. 14.7 and 14.8.

Fig. 14.7 Decision rules featured in the argument-driven model for assessing strategy use

Fig. 14.8 Decision rules featured in the argument-driven model for classifying goal-setting
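As a rough illustration of how such decision rules might operate, the sketch below classifies a line of inquiry from changes in the causal rankings entered in the explanation palette. The actual rules in Figs. 14.7 and 14.8 are more detailed; the rule bodies and thresholds here are simplified assumptions intended only to show the mechanism.

```python
# A heavily simplified sketch of argument-driven goal classification; the
# rule bodies are assumptions, not the rules shown in Figs. 14.7 and 14.8.

def classify_goal(rank_before: dict[str, int], rank_after: dict[str, int]) -> str:
    """Infer the goal of a line of inquiry from changes in the causal
    rankings entered in the explanation palette (1 = most likely cause)."""
    top_before = min(rank_before, key=rank_before.get)
    top_after = min(rank_after, key=rank_after.get)
    tied_top = [c for c, r in rank_after.items() if r == rank_after[top_after]]
    if len(tied_top) > 1:
        # Multiple causes still ranked as most likely: no single goal can be
        # inferred, so the line of inquiry is flagged for agent support.
        return "uncertain"
    if top_after == top_before:
        return "confirm_explanation"
    if rank_after[top_before] <= 2:
        return "weigh_alternative"
    return "rule_out_alternative"


# Example mirroring the case study: two causes remain tied at the top rank,
# so neither line of inquiry maps onto a single goal.
before = {"oath_refusal": 1, "military_conflict": 1, "ideology": 3}
after = {"oath_refusal": 1, "military_conflict": 1, "ideology": 3}
print(classify_goal(before, after))  # uncertain -> agent intervenes
```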

As an example, the learner investigated the causes of the Acadian Deportation, the forceful removal of the French inhabitants of Nova Scotia by the British authorities during the Seven Years’ War. The explanation palette allows the learner to rank the likelihood of five plausible causes at the beginning and end of each line of inquiry. The event may be due to the influence and intentions of political figures, referring to British Governor Charles Lawrence’s discontent towards the Acadians. The deportation might be attributed to the political situation, as the Acadian deputies and communities refused to swear the unconditional oath of allegiance. An alternative is the economic situation at the time, which may have motivated Charles Lawrence to seize the Acadians’ land, property, and livestock. The deportation may have been ordered for ideological reasons, as the Acadians would likely become loyal British subjects if they could be assimilated across the colonies. Charles Lawrence, however, may have wanted to prevent the Acadians from joining their enemies in the conflict between the French and British empires.

At the beginning of the learner’s first line of inquiry, the assimilation of the Acadians and the need to avoid a military conflict were both ranked as the most probable causes of the deportation. However, the learner annotated a source document that was found to support the claim that the deportation was due to Charles Lawrence’s discontent towards the Acadians. This claim was supported by a quote taken from the source document that described an attack on the French army at Fort Beauséjour, which was ordered by Charles Lawrence. Therefore, “it is reasonable to infer that he displays general discontent for their presence and/or refusal to swear oaths and loyalty”. The learner corroborated this piece of evidence, noting that five other source documents mentioned similar information, whereas only three sources refuted the evidence. As a result, the learner’s explanation changed in favour of the Governor’s discontent, the Acadians’ refusal to swear the oath of allegiance, and the need to avoid a military conflict, as shown in Fig. 14.9.

Fig. 14.9 Timeline of changes in the explanation for the event

In the following line of inquiry, the learner argued in favour of the Acadians’ refusal to swear the oath of allegiance as the most likely cause of the event. The annotation referred to two sources, wherein the “Acadians were content with the first treaty with Philipps [past Governor of Nova Scotia Richard Philipps] evident by their letter to Cornwallis”, but the “British clearly weren’t as per the source provided by the library”. This evidence suggests that the refusal to swear an unconditional oath of allegiance meant “a cultural threat against British dominance […] thus it is reasonable to assume that this was the final stimulus amongst many others that finally drove the British to expel the French in order to ensure dominance”. The learner later indicated that the majority of sources agreed with this notion, referring to a total of six source documents that corroborated the evidence.

The skill practice and refinement model classifies both lines of inquiry as inappropriate in terms of achieving the different types of goals stated in the model. At the end of both lines of inquiry, the causal rankings suggest that the learner considered multiple causes in their explanation of the event under investigation. As a result, the model cannot classify either line of inquiry as an attempt to confirm an explanation, weigh an alternative cause, or rule out an alternative. To address the learner’s uncertainty, the pedagogical agent should support the learner in monitoring their own understanding of the event. The agent may challenge the learner’s beliefs by highlighting information obtained from other source documents that either confirms or refutes a piece of evidence, thereby prompting the learner to re-evaluate their own explanation of the event.

5 Discussion

Modularization enables the MHRt to capture and analyze user interactions at several stages of skill development. On the one hand, the Training Module generates learning curves to track skill acquisition as the learner categorizes illustrative examples of these skills and explains their underlying purpose. On the other hand, the Inquiry Module relies on an argument-driven model to characterize how the learner practices and refines the use of skills to investigate the causes of historical events. The learners’ progress is assessed along a trajectory towards competency that is particular to the domain [50]. The modules complement each other as the learning outcomes at a previous stage dictate the learners’ progress through the next stage.

The pedagogical agent is thus capable of facilitating transitions along this trajectory by selecting and delivering the instructional content that is most suitable to the needs of different learners. As evident in the review of the case study, the challenge is to identify the critical moments along the different trajectories of individual learners. First and foremost, the rate of skill acquisition varies greatly from one skill to another, depending on the complexity of the procedure that is applied and the variability of the information that is transformed. As a result, the agent should have an active role in selecting the examples that are delivered to the learner, providing just-in-time hints and prompts, as well as engaging learners in elaborative and evaluative processes. Furthermore, the agent should provide better guidance in relation to learners’ efforts to plan their investigations and evaluate the outcomes of their inquiries into the causes of the event. When a learner is unsure of the most important cause for the event under investigation, the agent could challenge learners’ beliefs by outlining a rebuttal argument or facilitate their search by referring to corroborating evidence obtained from other source documents.

There are several issues to consider in order to improve the adaptive capabilities of the MHRt. One of the most important is to enhance both the quantity and quality of the self-regulation of learning. Although quantity can be strictly defined as the number of lines of inquiry performed by the learner, each line also differs in terms of the number of sources that were consulted and the pieces of evidence that were found to warrant or corroborate a particular claim.

The quality of these activities, however, reflects the depth of processing involved in each line of inquiry. For instance, learners who ruled out alternative explanations in addition to attempting to confirm an explanation built a more persuasive argument. In contextualizing evidence, the amount of elaborated information makes an argument more comprehensible to an audience, but the diversity of aspects that are considered is also critical, such as whether the location of the event was described, a timeline established, and the values of the characters explained. These examples illustrate the importance of improving the assessment capabilities of the system in terms of targeting both the quantity and quality of self-regulation.

The main limitation of the modularization approach is that the quantity and quality of processing during the early stages of skill development determine the level of performance at the later stages. As a case example, when the system detects that learners are engaging in inappropriate strategy use given a particular goal, the learners remain restricted to the affordances of the interface elements embedded in the Inquiry Module. The system does not allow learners to repeat the Training Module and differentiate examples of appropriate and inappropriate strategy use. This is because the modularization approach used in the MHRt relies on static interface elements to structure the self-regulation of learning.

Having discussed the benefits and limitations of modularization, this chapter now moves on to consider ways of improving this approach to the design of metacognitive tools. Improvements are proposed with respect to several areas, including the development of external representations, assessment mechanisms, and pedagogical agents. These are discussed in the context of a computer-based learning environment called the MetaEnquirer, a system under development by the Advanced Instructional Systems and Technologies laboratory at the University of Utah.

5.1 The Role of External Representations

An open-learner model may be defined as a representation that is made visible to a learner and that displays acquired knowledge during task performance [51]. In other words, the content of the learner model is continually updated with the aim of fostering self-reflection. The progress made in knowledge acquisition is inferred on the basis of user interactions logged by the system. What we know about open-learner models is largely based upon empirical studies that have compared the impacts of several characteristics of these representations in order to establish evidence-based design guidelines that are generalizable across systems [52]. Together these studies provide insights into the manner in which the MetaEnquirer should illustrate the learners’ progress in investigating the causes of historical events.

The MetaEnquirer should represent how learners change their arguments as a result of searching for evidence across source documents. The benefit of this approach is that learners become more aware of the outcomes of each line of inquiry, which is hypothesized to improve learners’ planning of their investigation. Besides highlighting how the outcomes of each investigation inform the next, the pedagogical agent embedded in the MetaEnquirer could challenge learners by critiquing weak points in their arguments, only to support them later in searching for clues.

5.2 The Role of Assessment Mechanisms

A novice-expert overlay model is an assessment approach whereby the steps learners take to solve a problem are compared to an ideal solution, typically validated by several domain experts [53]. As such, computer-based learning environments allow learners to visualize the similarities and differences between both solution paths. Lajoie [7] implemented novice-expert overlay models in BioWorld, a computer-based learning environment that allows novices to practice medical diagnostic reasoning. The findings show that experts pursue different paths in solving a case; however, their reasoning can be modeled in the form of commonly identified evidence items that are pertinent to obtaining the correct diagnosis. The model allows the system to track user interactions and compare the evidence items identified as relevant by the novices to those of the experts. Individualized reports are provided to the learners that highlight these similarities and differences as well as explain the correct approach to managing and treating the patient.

The MetaEnquirer stands to improve the quality of the feedback delivered to learners by assessing the self-regulation of learning in accordance with how experts perform inquiries into the causes of historical events. This method builds on previous work with argument-driven models in the context of the MHRt, since user interactions are appraised not only in terms of the requirements of achieving a particular goal, but also for the correctness of written annotations. An expert model of annotations that relies on decision rules to appraise the quality of learners’ investigations into the causes of historical events allows the MetaEnquirer to individualize instruction. Feedback is delivered to the learner in order to distinguish between pieces of evidence that were similar to and different from those obtained by the experts. The same model can also be used by the system to recommend source documents that may corroborate or contradict certain explanations for the event under investigation.
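One way such an overlay comparison could be computed is sketched below: the learner’s evidence items are partitioned against an expert-validated set. The item names and the shape of the feedback report are assumptions.

```python
# A minimal sketch of a novice-expert overlay comparison; the evidence item
# names and report structure are assumptions, not the MetaEnquirer's design.

EXPERT_EVIDENCE = {"fort_beausejour_attack", "oath_refusal_letters",
                   "lawrence_council_minutes"}


def overlay_feedback(learner_evidence: set[str]) -> dict[str, set[str]]:
    """Partition evidence into shared, missed, and extra items so the system
    can individualize feedback on the learner's line of inquiry."""
    return {
        "shared": learner_evidence & EXPERT_EVIDENCE,
        "missed": EXPERT_EVIDENCE - learner_evidence,
        "extra": learner_evidence - EXPERT_EVIDENCE,
    }


report = overlay_feedback({"fort_beausejour_attack", "land_seizure_records"})
print(report["missed"])  # evidence the experts used but the learner did not
```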

5.3 The Role of Pedagogical Agents

Multi-agent intelligent tutoring systems rely on several pedagogical agents that emulate different roles with the aim of achieving an instructional objective. As an example, Betty’s Brain allows learners to teach Betty, an artificial pedagogical agent, and evaluate her understanding of river-ecosystem processes [54]. Mr. Davis supports learners by delivering quiz results, guiding them in their search of the library, and scaffolding their efforts to regulate their own learning. MetaTutor assigns each pedagogical agent to support a construct from the information-processing theory of self-regulated learning in order to scaffold learners in using the relevant skills [55]. These agents include Mary the monitor, Sam the strategizer, and Pam the planner.

The use of multiple agents embedded in a modular system such as the MetaEnquirer stands to address this issue with the current design of the MHRt. Modularization, understood as distinct configurations of interface elements with each set designed for an instructional objective, is limited in terms of its flexibility. However, the dialogue that occurs between the different agents and the learner can be tailored by the system to compensate for this lack of flexibility, while also guiding the learner across each module.

For instance, the role of the first agent may be to support skill acquisition by asking learners to differentiate between examples of appropriate and inappropriate strategy use. A second agent could coach learners in practicing and refining the use of these strategies by suggesting relevant content and highlighting deviations from the experts’ written annotations. Both agents could intervene at appropriate moments, depending on how the system appraises the quality of learners’ investigations. The benefit of this approach is that the revised version of the system would be capable of targeting the specific needs of learners at different stages of skill development.

6 Conclusion

In summary, this chapter compared and contrasted two assessment mechanisms that each targeted different stages of skill acquisition. An illustrative case study of one learner was reviewed as an example of the use of learning curves to model skill acquisition and an argument-driven model to assess skill practice and refinement. The role of these assessment mechanisms was explained in terms of their capacity to adapt instruction in the context of the MHRt. In doing so, the MHRt modules facilitate the regulation of learning while performing inquiries into the causes of historical events in accordance with disciplinary-based practices.

Difficulties arise, however, when modules are completed in a linear manner, as the early stages of skill development are critical to ensuring consolidation during the later stages. Since learners are not allowed to move backward along the trajectory to develop the targeted skills, modularization assumes that the prerequisite knowledge and skills have been gained for future learning to be successful. In reviewing recent advances pertaining to the roles of external representations, assessment mechanisms, and pedagogical agents in the context of metacognitive tools, a set of design principles was outlined to guide the development of the MetaEnquirer, which will address this issue by redefining modules as dynamic components that are delivered to the learner when necessary. Considerably more work will need to be done to determine whether modularization as a mechanism to deliver instruction within metacognitive tools is generalizable.