
1 Introduction

The concept of adaptive training involves the use of Artificial Intelligence (AI) tools and methods to augment a learning experience based on the needs of a given individual or, in future instances, a set of individuals (i.e., a team). This entails tracking what an individual knows and does not know within a domain and using that information to guide the training experience. These automated techniques are intended to provide personalized training in the absence of live instruction through modeling approaches that balance challenge levels and recognize errors and misconceptions during a practice scenario. These models are also used to invoke feedback and guidance strategies designed to aid a learner in overcoming an impasse. Using a simple analogy, adaptive training systems apply AI methods to mimic the interactions an effective human tutor conducts when engaging with a single learner.

Adaptive training programs are commonly referred to as Intelligent Tutoring Systems (ITSs), with ITS examples dating back to the 1970s [1]. There are notable success stories examining the effect ITSs have when compared against more traditional classroom methods [1, 2]. However, many of these success stories are associated with well-defined domains (e.g., physics, algebra, computer programming) and are mostly confined to stove-piped training environments. A current goal of the U.S. Army Research Laboratory (ARL) is to investigate how to extend these technologies into military-relevant domains through the development of the Generalized Intelligent Framework for Tutoring (GIFT) [3]. As such, it is important to conceptualize how ITS methods fit within the context of a unified Army training model that manages skill acquisition by identifying approaches to leverage multiple systems and simulations. This requires approaches that apply ITS methods to track progress and manage interaction at the individual level. In this paper, we present current work surrounding the development of GIFT, a generalized ITS environment in support of a skill acquisition model of training. We present the foundations associated with ITS implementation in a military context and provide a breakdown of the technologies required to create this type of training spectrum.

1.1 Training Foundations

A common research question is: How can ITS tools and methods be applied in a training continuum that incorporates multiple training platforms and events to progress an individual from novice to expert in a single domain? Before a solution can be devised, it is important to discuss the theory and foundations of military-based training. In the context of this paper, we present training foundations as they pertain to simulation-based exercises devised in support of skill development and application. The majority of training practices adhere to commonly known theories of skill acquisition that are organized around phases of application (e.g., the cognitive, associative, and autonomous phases) [4, 5]. In each phase, varying instructional approaches and training exercises are organized to assist an individual in developing an understanding not only of what to do, but also of how to do it. Anderson's ACT-R [6] distinguishes these processes as either containing declarative information associated with the domain, or procedural information associated with performing a task as it adheres to the declarative constraints of the task environment.

The goal of an effective training program is to balance the interplay of declarative and procedural knowledge training, so as to accelerate an individual's progression through the phases of skill acquisition. This interplay is based on a Crawl-Walk-Run (CWR) model of interaction [7, 8], where training programs are designed to: (1) establish the rules and requirements associated with task performance (Crawl), (2) provide opportunities for skill development through scenario exercises with performance monitoring and real-time coaching (Walk), and (3) provide opportunities for trainees to execute tasks based on their ability with little instructor intervention (Run) [8]. Taking this a step further, we propose an additional layer of abstraction to the CWR model by including the sequencing of training exercises across complementary simulation platforms (see Fig. 1). In these instances, CWR pedagogical practices are applied within each event, while those individualized events prepare the trainee for subsequent training interactions that build on one another.

Fig. 1. The CWR model of adaptive interaction applied across multiple simulation platforms

To reduce the cost and manpower associated with military training, simulation-based methods have been developed to complement the training process, especially as they relate to the crawl and walk phases of skill development. The resulting simulation platforms provide cost-effective, safe environments that allow individuals to practice procedures and tasks. These simulation exercises support a drill-and-repeat approach with few time or post-acquisition cost constraints, with the notion that repetitive scenario interactions will prepare an individual for live task execution when the time comes. The limitation of simulation-based training is often linked to an instructor-in-the-loop requirement for monitoring training interactions to make sure errors are corrected and training principles are reinforced. As such, many of these platforms require coordination at the trainee/trainer level to make sure the simulations are used in a way that promotes efficient skill transfer.

With advancements in adaptive training research and ARL's investment in GIFT [3], utilizing ITS tools and methods to automate the CWR model of training is achievable. The goal is to manage an interplay of simulation exercises that enforce different training principles, along with configuring ITS functions in each simulation for the purpose of monitoring performance and providing guidance and adaptation at the individual level. In the following sections, we discuss the utility of ITS technologies to support the CWR model of adaptive interaction in a self-regulated training context. A use case centered on land navigation is included to guide the discussion.

2 Creating a Network of Simulations in Support of Self-regulated Training

To fully implement ITS services in an adaptive CWR context, an architecture is required that enables interoperability across multiple simulation platforms. Each platform can vary in the types of tasks and interactions it supports, so the architecture must account for exercises that train similar concepts at varying phases of the skill acquisition process. In addition, each platform can incorporate data formats and messaging protocols that are unique to that specific application. Therefore, assessing performance in each environment requires specific modeling approaches that account for the data made available by the training environment as it pertains to a set of established concepts and training objectives [9]. Despite these complexities, a CWR instantiation of ITS methods is possible. It requires a network of simulations that build on one another in support of skill development, and a generalized framework to manage the interactions across each experience. For the purposes of this discussion and the defined use case, we focus on existing stand-alone platforms and applications that are used to train land navigation-oriented concepts and skills. These applications are readily available and, in some instances, are already used by instructors in controlled settings with human-in-the-loop monitoring.

To take full advantage of available training simulation tools, we advocate the use of ITS technologies to manage training delivery across simulations based on a self-regulated learning construct. To support adaptive functions in all phases of training, the ITS architecture must be able to integrate with all platforms for the purpose of retrieving the data sources necessary to manage assessment and drive pedagogical decisions. This includes being able to collect, in real time, information produced by the training environment itself and by relevant sensing technologies linked to physiological and behavioral markers. In support of this approach, we are leveraging GIFT's modular architecture to demonstrate the utility of using ITSs to automatically manage multiple training events across disparate simulations.

2.1 Generalized Intelligent Framework for Tutoring (GIFT)

GIFT is a domain-independent framework that serves as a set of best practices for constructing, delivering, and evaluating ITS technologies [9]. GIFT provides an ontological schema for linking observable interactions occurring in a training environment with the concepts and competencies tied to that interaction. Through this mapping, GIFT can monitor what a trainee is doing within an environment, and use that information to assess performance based on established models configured around data-informed thresholds. These thresholds associate with three common modeling techniques: (1) expert models used to identify interaction and behavior that deviates from a desired path, (2) buggy models that map interactions to specific common error types and misconceptions, and (3) event-based models used to trigger situationally relevant training interventions [10]. With established models and threshold values linked to domain concepts, instructional interventions can be enacted based on what a trainee does during the training event.
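To make the threshold idea concrete, the following minimal Python sketch shows how a single numeric measure might be mapped to a performance state using configured cut-offs. The class names, thresholds, and state labels are illustrative assumptions, not GIFT's actual (Java-based) condition-class API.

```python
# Minimal sketch of threshold-based concept assessment, assuming a single
# numeric measure per concept (e.g., distance off a planned route in meters).
# Names and thresholds are illustrative, not GIFT's actual API.
from dataclasses import dataclass
from enum import Enum

class PerformanceState(Enum):
    ABOVE_EXPECTATION = "above expectation"
    AT_EXPECTATION = "at expectation"
    BELOW_EXPECTATION = "below expectation"

@dataclass
class ConceptThresholds:
    concept: str
    at_expectation_max: float      # measures at or below this are fully acceptable
    below_expectation_min: float   # measures above this indicate an error state

def assess(measure: float, t: ConceptThresholds) -> PerformanceState:
    """Map a raw measure (e.g., meters off route) to a performance state."""
    if measure <= t.at_expectation_max:
        return PerformanceState.ABOVE_EXPECTATION
    if measure <= t.below_expectation_min:
        return PerformanceState.AT_EXPECTATION
    return PerformanceState.BELOW_EXPECTATION

if __name__ == "__main__":
    stay_on_path = ConceptThresholds("stay on path",
                                     at_expectation_max=3.0,
                                     below_expectation_min=10.0)
    print(assess(2.0, stay_on_path))   # above expectation
    print(assess(12.5, stay_on_path))  # below expectation -> trigger remediation
```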

GIFT is unique in that its generalizability provides a means for leveraging multiple systems to train common domain concepts and skill sets. The framework supports interoperability across multiple training environments, with persistent modeling components in place to track a set of skills across multiple training instantiations. GIFT also supports a mastery learning approach, where lessons can be structured and delivered in a way that adheres to common learning theory. In this fashion, subsequent lessons depend on the materials and skills trained before them, with criteria defined to determine when an individual may progress to the next lesson/skill set. The architecture comprises the modular components necessary to develop adaptive training functions across a seemingly endless number of domains, with schemas in place to support cognitive, affective, and psychomotor training spaces [11].
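The mastery-learning gating described above can be illustrated with a small, hypothetical sketch in which each lesson declares the concepts it trains and its prerequisite concepts. The lesson names and the gating rule are assumptions for illustration only.

```python
# Conceptual sketch of mastery-based lesson gating, assuming each lesson
# declares the concepts it trains and the concepts it requires first.
# This mirrors the mastery-learning idea described above, not GIFT's internals.
from dataclasses import dataclass, field

@dataclass
class Lesson:
    name: str
    trains: set[str]
    requires: set[str] = field(default_factory=set)

def next_available_lessons(lessons, mastered):
    """Return lessons whose prerequisites are mastered but whose own concepts are not."""
    return [l for l in lessons
            if l.requires <= mastered and not l.trains <= mastered]

if __name__ == "__main__":
    lessons = [
        Lesson("map reading (ARES)", trains={"map reading"}),
        Lesson("route planning (ARES)", trains={"route planning"},
               requires={"map reading"}),
        Lesson("route execution (VBS3)", trains={"route execution"},
               requires={"map reading", "route planning"}),
    ]
    mastered = {"map reading"}
    print([l.name for l in next_available_lessons(lessons, mastered)])
    # -> ['route planning (ARES)']
```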

3 A Real-World Use Case: Land Navigation

The goal of this use case is to establish a set of procedures and interactions that will dictate the training experience an individual receives during a CWR adaptive training event, utilizing GIFT tools. To frame the discussion, we use the domain of land navigation. Land navigation is a useful example to work from, as it incorporates a mix of cognitive and psychomotor task components that are critical to navigating unfamiliar terrain effectively. In establishing the criteria for driving an adaptive training approach, we leverage currently available training platforms that are used to train varying aspects of the land navigation domain. From an adaptive training standpoint, each platform facilitates a specific aspect of the training experience that adheres to the CWR model of interaction.

The notion is to utilize a set of independent training programs to facilitate the CWR phases of skill acquisition (see Fig. 2). In addition, through the incorporation of GIFT, each of these independent programs can be configured to support adaptive training at the individual level. As such, we frame the sub-sections as follows. We start by identifying the tasks prescribed in each phase of the training model, followed by a description of their role in the training process. Next, we highlight the specific concepts each phase targets and the types of assessments required for determining proficiency and competency across each. To round out each sub-section, we present conceptual adaptive training functions that can be configured in each phase based on GIFT capabilities.

Fig. 2. GIFT lesson structure for land navigation training with CWR sequencing

3.1 CRAWL: Terrain Association Exercises

Task Description.

In this phase, the training focus is on the fundamentals of land navigation according to FM 3-25.26 [12]. Initial instruction covers three primary components: (1) map reading, (2) terrain association, and (3) route planning. Each of these areas has specific training objectives that can be taught via simulation. Exercises and interactions in this portion of the training are designated to instill the cognitive components required to perform land navigation procedures. As the primary components of the identified objectives are cognitive in nature, this crawl portion of training centers heavily on the elements related to the planning portion of a navigation exercise.

Training Environment.

The environment used in the crawl session of this use case is ARL's Augmented REality Sandtable (ARES) [13]. ARES is a research testbed that aims to provide a common operating picture at the point of need for military planning. ARES uses low-cost, commercially available technologies (a Microsoft Kinect®, a projector, a large-screen television monitor, a laptop, and a sand table) to project map data and visualizations onto the sand surface. ARES allows customized presentations of terrain for training purposes, including features such as contour display and the ability to place standard military units. Once a particular terrain is configured, a scenario file can be saved for later use. ARES is currently integrated with GIFT through a gateway configuration that allows the delivery and assessment of ARES-configured scenarios, with GIFT driving training prompts and assessments.

How GIFT can Personalize this Interaction.

Through the integration of GIFT with ARES [14], a training event can be created that prompts a user to interact with ARES scenarios targeting land navigation-type concepts. An example would be displaying a map on ARES and having GIFT ask the trainee to plot a waypoint based on specific GPS coordinates. Scenarios can be defined that progress in complexity, with pedagogical practices in place to devise a CWR approach to interaction within this specific environment. The culminating 'Run'-type event would involve an individual devising a planned route to be executed in the 'Walk' training environment, which is described next.
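As a rough illustration of the kind of check such a plotting prompt implies, the sketch below tests whether a plotted waypoint falls within a tolerance of the target coordinates, treating positions as simple map eastings/northings in meters. The function name, coordinates, and 50 m tolerance are illustrative assumptions.

```python
# Sketch of checking a plotted waypoint against target grid coordinates,
# assuming coordinates are expressed as map easting/northing in meters.
import math

def waypoint_plot_correct(plotted, target, tolerance_m=50.0):
    """Return True if the plotted (easting, northing) point is within tolerance of the target."""
    dx = plotted[0] - target[0]
    dy = plotted[1] - target[1]
    return math.hypot(dx, dy) <= tolerance_m

if __name__ == "__main__":
    target = (4500.0, 2300.0)                                # desired waypoint (meters)
    print(waypoint_plot_correct((4520.0, 2310.0), target))   # True: within 50 m
    print(waypoint_plot_correct((4600.0, 2400.0), target))   # False: prompt remediation
```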

During all ARES events, GIFT can correct trainees when they perform erroneous actions based on configured assessment criteria. GIFT can also use information provided by ARES-based assessments to support personalized remediation through responses to GIFT-driven questions. With learner models in place, GIFT can progress a trainee through content and scenarios based on performance outcomes, and can personalize feedback and interactions based on individual differences. As the crawl portion is primarily problem-based, GIFT can sequence a set of ARES scenarios to prepare the trainee for the subsequent 'Walk' portion of training. When all concepts have been deemed mastered through GIFT assessments, this phase of training can be completed and the trainee progressed to a new training environment for further application of land navigation skill sets.

3.2 WALK: Game-Based Interactive Exercises

Task Description.

The next phase of events in the proposed training interaction model involves procedural application of the knowledge learned in the ARES crawl session. The goal is to immerse an individual in a set of interactive exercises that elicit the application of knowledge associated with land navigation, while introducing additional concepts that relate to the psychomotor components of task execution. To facilitate this walk phase of training, a game-based land navigation scenario provides the environment necessary to replicate the decisions and movements associated with performing the domain tasks without subjecting the trainee to the live environment and all of its associated constraints. This allows a trainee to go through the procedural steps of performing land navigation tasks, while having the ability to make critical mistakes without consequence and to replay events to get multiple trials of skill application.

The tasks in this phase of training are inherently designed to mimic a live training event, with the exception of physically walking the course itself. This includes rehearsing the cognitive and psychomotor task components that would be performed in the real world. The cognitive aspects include: plotting a set of waypoints, devising a route based on known location and terrain features, measuring distance between route points for referencing, and calculating azimuths that will guide navigation practices. Each of these factors coincides with the psychomotor aspect of the domain, where the route is executed based on upfront planning. With a plan in place, the psychomotor task components include: shooting an azimuth vector with a compass, walking along that vector while maintaining a pace count to judge distance, and identifying land features to maintain orientation.

Training Environment.

For the purpose of the walk phase of training, we are leveraging Bohemia Interactive's Virtual Battlespace 3 (VBS3) to drive the use case development. VBS3 is well suited to this phase of training, as it mimics real-world tasks through first-person shooter gaming methods. GIFT is integrated with VBS3 through a configured gateway module that can receive and route Distributed Interactive Simulation (DIS) protocol data units. These data units provide real-time game-state information across a number of environment-related variables. This integration is important, as it provides the GIFT framework with valuable interaction data that can be used for assessment purposes. The DIS data types of most interest to this use case are entity information (e.g., location, movement, collision) and known scenario objects that impact task interactions (e.g., waypoints, buildings and structures, terrain features).
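The following simplified sketch illustrates the gateway idea of routing decoded entity-state updates to assessment logic. The EntityState structure here is a stand-in with only a few fields, not the full DIS Entity State PDU record, and all names are illustrative assumptions.

```python
# Illustrative sketch of routing decoded entity-state information (as carried
# in DIS PDUs) to assessment logic. Field names are simplified stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EntityState:
    entity_id: int
    x: float          # world coordinates, meters
    y: float
    z: float
    timestamp: float  # seconds into the scenario

def route_entity_states(states, trainee_id: int,
                        on_update: Callable[[EntityState], None]):
    """Forward only the trainee's entity updates to the assessment callback."""
    for state in states:
        if state.entity_id == trainee_id:
            on_update(state)

if __name__ == "__main__":
    decoded = [EntityState(3, 100.0, 40.0, 0.0, 1.0),
               EntityState(7, 101.5, 41.0, 0.0, 1.0)]   # 7 = trainee avatar
    route_entity_states(decoded, trainee_id=7,
                        on_update=lambda s: print("assess position:", s.x, s.y))
```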

How GIFT can Personalize this Interaction.

GIFT is designed to integrate with game-based applications for the purpose of monitoring player actions and assessing performance against a set of specified objectives [9]. In the context of land navigation in VBS3, GIFT supports concept assessments as they relate to the DIS information produced during run-time. These DIS data units are applied as inputs to condition classes designed to assess the concepts identified above. These condition classes are established to produce measurable metrics from available interaction data (e.g., measuring the distance of an entity to an object). In addition, GIFT provides mechanisms in each condition class to establish configured thresholds that designate performance states based on the associated data types. For example, a condition class might be used to determine the degree to which a player is staying on a path (i.e., a vector). Through such a condition class, the performance state for the concept 'stay on path' can be updated when a player is measured to be more than 10 feet off the path, based on their current location and starting point. With a set of established training concepts, and defined condition classes informing their real-time performance, GIFT can monitor interaction and trigger feedback and scenario adaptations based on changes in performance states.
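A minimal sketch of such a 'stay on path' check is shown below: the trainee's shortest distance to the planned leg is computed and compared against the 10-foot threshold. The geometry and unit conversion are standard; the function names and the meters-based coordinates are assumptions for illustration, not GIFT's condition-class code.

```python
# Sketch of the 'stay on path' condition: perpendicular distance from the
# planned leg (start -> waypoint), compared against a configured threshold.
import math

def distance_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b (2D coordinates)."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab_len_sq = abx * abx + aby * aby
    if ab_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len_sq))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def stay_on_path_state(position, leg_start, leg_end, threshold_ft=10.0):
    off_path_ft = distance_to_segment(position, leg_start, leg_end) * 3.28084  # m -> ft
    return "below expectation" if off_path_ft > threshold_ft else "at expectation"

if __name__ == "__main__":
    print(stay_on_path_state((5.0, 1.0), (0.0, 0.0), (100.0, 0.0)))   # ~3 ft off  -> at expectation
    print(stay_on_path_state((50.0, 6.0), (0.0, 0.0), (100.0, 0.0)))  # ~20 ft off -> below expectation
```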

3.3 RUN: Live Training Exercises

Task Description.

In this phase of training, individuals are tasked with completing a land navigation course in a live training environment. Trainees are asked to conduct all of the tasks required to navigate across a set of waypoints using a map, a compass, and known GPS coordinates. This requires individuals to apply all components of the crawl and walk phases of training, thus providing a metric of training transfer as it relates to going from simulation to live exercise. The main addition at this level of interaction is the full incorporation of psychomotor task components. This includes walking the course based on designated routes, monitoring distance traveled through pace counts, and maintaining appropriate orientation through registered landmarks (e.g., roads and rivers) and observable points (e.g., large hilltops, unique rock formations).

A benefit of the CWR adaptive interaction model is that more complex concepts are integrated as trainees progress through the designated training environments. The assumption is that individuals will have gained all of the basic and procedural knowledge needed to exploit the training benefits of live events by the time the 'Run' exercise is initiated. That is, they are no longer struggling with lower-level concepts and skills that could hinder higher-level integration and transfer of knowledge in highly active training. This results in better skill integration as well as better cost and time effectiveness, since live environments are expensive and are not the place for trainees to struggle with basic elements.

Training Environment.

The training environment for the 'Run' phase of this use case is a live land navigation course. The course consists of designated waypoints that a trainee is asked to navigate between. The trainee is outfitted with a map, a protractor, a pencil, and a compass. Depending on the location, individuals will need to navigate the optimal route based on land features and vegetation. The environment itself is rugged in nature and requires physical endurance to complete the course in a timely fashion.

How GIFT can Personalize this Interaction.

A dependency of personalized training interactions is that assessments are in place that can drive performance determinations and guide pedagogical strategy selections. In GIFT, this requires data sources that can be used to determine what a trainee is doing in relation to a set of defined concepts and objectives. In this phase of training, we are now implementing GIFT modeling techniques in an 'in-the-wild' environment, where simulation data is not conveniently made available. As such, future GIFT-based research is focused on the integration of cellular device data feeds to capture behavioral information as it relates to a known environment. In this instance, GIFT will need to receive real-time location data as informed by GPS and cell network tracking technologies. With real-time location data, we can recreate the assessment conditions performed within the VBS3 land navigation scenario. This requires the ability to author assessments similar to those performed in VBS3, with the main difference being the incorporation of new map data sources and linking those to location data for tracking trainee interactions.
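For instance, a waypoint-arrival check over raw latitude/longitude fixes could look like the sketch below, which uses the standard haversine great-circle distance. The acceptance radius, waypoint coordinates, and function names are illustrative assumptions.

```python
# Sketch of reproducing a VBS3-style location assessment from raw GPS fixes,
# assuming latitude/longitude input from a mobile device.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def waypoint_reached(fix, waypoint, radius_m=15.0):
    """True when the GPS fix falls within the waypoint's acceptance radius."""
    return haversine_m(fix[0], fix[1], waypoint[0], waypoint[1]) <= radius_m

if __name__ == "__main__":
    wp = (38.8895, -77.0353)                               # illustrative coordinates
    print(waypoint_reached((38.88957, -77.03528), wp))     # True: within ~15 m
    print(waypoint_reached((38.8910, -77.0353), wp))       # False: still en route
```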

The benefit of this integration is the ability to apply crawl- and walk-type training interventions in a 'Run'-type training environment. We now have the capacity to enact 'crawl'-based pedagogical interventions in the live environment, with the ability to fade support as a trainee progresses through the run-designated exercise. This allows GIFT to create an interplay between training and transfer, with the latter focusing on performance after training supports and scaffolds are removed.

4 Authoring Adaptive Functions Across Disparate Systems

Integrating all of the land navigation-oriented training environments with GIFT provides the ability to create a unified adaptive training experience that guides individuals through phases of training based on the delivery of customized lessons. These lessons are authored using the GIFT Authoring Tool (GAT; see Fig. 3). At its foundation, a GIFT lesson is created by configuring GAT course objects that drive learner interaction [15]. For the purpose of this paper, we focus on the Adaptive Course Flow object and its associated practice delivery component. This course object manages the delivery of instructional content and guides practice events with available external training applications [16]. A notable authoring burden is building the GIFT logic for managing assessment and pedagogy for all external training environments used for practice exercises. Put simply, without assessment, these systems cannot support ITS instructional practices. In the following sub-sections, we present a relevant problem statement and research targeting the development of authoring tools that support an intuitive approach to authoring assessment logic for training environment applications integrated in GIFT.

Fig. 3. The GIFT Authoring Tool (GAT) with course objects linked to land navigation training in ARES.

4.1 Assessment Authoring Considerations

The GAT primarily addresses the authoring processes associated with creating and organizing course objects that define the sequence of events an individual will experience (e.g., surveys, tests, lesson materials). However, GIFT interactions go well beyond these more conventional training formats. GIFT provides developers with authoring capabilities for creating an adaptive training experience across interactive external training environments that involve actual skill application and practice opportunities. This will increasingly involve the incorporation of both virtual and augmented reality technologies.

Authoring Adaptive Course Flow objects for these external training applications poses a number of challenges. Continuing with the VBS3 land navigation example, suppose a developer has created a land navigation course and would like to assess the learner's ability to navigate across various waypoints within the environment. In addition to authoring and configuring the condition classes associated with this assessment (e.g., the degree to which a player is staying on a path), the developer may also need to edit the VBS3 scenario itself (e.g., add a waypoint to reference for assessment purposes), or perhaps even create the VBS3 land navigation scenario in the first place. This disconnect between the GIFT authoring environment and the VBS3 scenario editor requires users to constantly switch between the two tools, which can be very cumbersome and often leads to increased development time, user error, and frustration from the authoring standpoint.

Beyond the challenges associated with toggling between authoring environments, there are also challenges in creating adaptive assessment logic for a variety of disparate training applications. GIFT uses a Domain Knowledge File (DKF) to configure the adaptive training experience for any training application by associating generalized schemas that map concept ontologies to the condition classes used for assessment, and by linking the outcomes of those assessments with available pedagogical interventions. The DKF Authoring Tool (DAT) is designed to allow developers to create adaptive training across any GIFT-integrated training application (i.e., a training environment with an established gateway). However, due to the wide variety of training applications developers could employ, as well as the unique types of assessments one might use within them, the DAT was designed to be an extremely flexible tool. While this flexibility extended the reach of GIFT to many training applications, it also required technical skills that limited the accessibility of the tool for users without particular expertise.
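An actual DKF is an XML document conforming to GIFT's schema. Purely as a conceptual illustration of the mapping it encodes (concepts tied to condition classes, their input parameters, and follow-on pedagogical actions), the structure can be sketched as a simple configuration; the condition-class names and parameters below are invented for illustration and do not correspond to real GIFT class names.

```python
# Conceptual, language-neutral sketch of the mapping a DKF encodes.
# The real DKF is XML; all class names and parameters here are illustrative.
dkf_sketch = {
    "concepts": {
        "stay on path": {
            "condition_class": "DistanceFromRouteCondition",   # illustrative name
            "inputs": {"threshold_ft": 10.0, "route": "planned_route_1"},
            "on_below_expectation": ["feedback: check azimuth and pace count"],
        },
        "reach waypoint 2": {
            "condition_class": "WaypointProximityCondition",   # illustrative name
            "inputs": {"waypoint": "WP2", "radius_m": 15.0},
            "on_at_expectation": ["advance scenario to next leg"],
        },
    }
}

if __name__ == "__main__":
    for concept, cfg in dkf_sketch["concepts"].items():
        print(concept, "->", cfg["condition_class"], cfg["inputs"])
```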

4.2 GIFT Wrap

ARL is currently investigating new methods to address the authoring challenges described above. This includes the development of GIFT Wrap, a fully integrated, user-friendly tool for authoring adaptive assessment logic and instruction within external training applications.

First, GIFT Wrap seeks to overcome the disconnect between the GIFT authoring environment (in this case the DAT) and a training application's scenario editor through the use of an overlay interface. When configuring a condition class within a DKF, GIFT Wrap provides the ability to manipulate objects in a training application's scenario editor, with relevant information automatically populating the DKF's XML schema. For example, if the user wanted to designate a specific set of waypoints to associate with a scenario, GIFT Wrap would enable the user to establish waypoints in the scenario editor, with those coordinates being referenced in the DKF for assessment configuration. While somewhat linear in nature, this user-friendly interface allows the flexibility to make changes within either tool at any time.

Second, GIFT Wrap addresses the technical skills gap by providing an intuitive user interface for configuring the adaptive training experience. For example, rather than editing a specific condition class and its corresponding assessment parameters within an XML editor, users access the same functionality via the GIFT Wrap user interface. No specific technical skills are required to use the tool. By adopting a user-centered approach, GIFT Wrap will greatly increase access for those seeking to develop and deploy adaptive training.

In general, GIFT Wrap allows instructional designers to designate triggers for when an intervention or tutoring event will take place within a training application. When a trigger is invoked, the corresponding tutoring event is initiated. The available triggers depend upon the training application in use (ARES, VBS3, live environment, etc.) and the assessment techniques supported within it (i.e., existing condition classes based on available data inputs). Once an author designates which application will be used for a lesson/exercise, GIFT Wrap is designed to describe the assessment types that application can support, so the training developer understands what can be assessed. The first development iteration of GIFT Wrap is focused on ARES training events, with VBS3 and live training next in the queue.
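The trigger concept can be summarized with the small sketch below, in which each trigger pairs a predicate over the latest training-application state with the tutoring event to initiate when it first fires. The state fields, trigger names, and events are illustrative assumptions rather than GIFT Wrap's actual data model.

```python
# Sketch of trigger -> tutoring-event logic, assuming periodic state snapshots
# from the training application. All names and fields are illustrative.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]   # evaluated against the latest state snapshot
    tutoring_event: str
    fired: bool = False

def evaluate_triggers(triggers, state: dict[str, Any]):
    """Return tutoring events for triggers that fire on this state snapshot."""
    events = []
    for t in triggers:
        if not t.fired and t.condition(state):
            t.fired = True
            events.append(t.tutoring_event)
    return events

if __name__ == "__main__":
    triggers = [
        Trigger("entered waypoint 1 area",
                lambda s: s["distance_to_wp1_m"] < 20, "prompt: report pace count"),
        Trigger("off path",
                lambda s: s["off_path_ft"] > 10, "coach: re-shoot azimuth"),
    ]
    print(evaluate_triggers(triggers, {"distance_to_wp1_m": 12, "off_path_ft": 4}))
```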

GIFT Wrap in ARES.

In ARES, GIFT Wrap includes features to develop specific tests of knowledge and skill. Two existing assessments have been conceptualized to guide initial development efforts: a check on learning and a terrain layout task (see Fig. 4 for notional terrain layout task configurations). With the check on learning, GIFT Wrap facilitates the development of queries assessing land navigation knowledge, for which a trainee provides answers by selecting icons through the ARES interface. Interaction with these queries may be embedded within a larger ARES tactical scenario. For example, an instructional designer may use GIFT Wrap and the ARES interface to create a map scene in which trainees must evaluate the scenario to identify relevant landmarks. The instructional designer could then create a set of queries, through a GIFT survey item, requesting the trainee to identify specific landmarks.

Fig. 4. GIFT Wrap layout task configuration interface.

With an ARES gateway module in place, GIFT would utilize the established conditions for the check on learning to evaluate the correctness of the answer in real time. Correct answers might result in adaptation of the training through GIFT, such as another test or review of a different map scene. Wrong answers might result in feedback through GIFT highlighting the correct answer, or in instructional material that guides the user through remedial content focused on the underlying concepts assessed within that ARES scenario.

For terrain layout tasks, GIFT Wrap facilitates the development of assessments that evaluate the degree to which users place icons in the correct locations on an ARES-presented map. Here the GIFT Wrap author selects an ARES scenario, designs a query regarding the positioning of elements on the ARES map, and designates the correct positions of those elements via the interface. Trainees taking the test in ARES would position icons, and those positions would be compared to the correct positions to determine skill level. As with check on learning events, feedback or scenario adaptations can be initiated based on performance outcomes. Assessment capabilities will be further extended to support the more sophisticated measures required for route planning-type exercises.
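One plausible scoring rule for such a layout task, assumed here only for illustration, is the proportion of icons placed within a distance tolerance of their designated positions:

```python
# Sketch of scoring a terrain layout task: compare each placed icon to its
# designated correct position. Icon names, coordinates, and the tolerance
# are illustrative assumptions.
import math

def layout_score(placed, correct, tolerance_m=25.0):
    """Fraction of icons placed within tolerance of their correct map position."""
    hits = 0
    for icon, target in correct.items():
        pos = placed.get(icon)
        if pos and math.hypot(pos[0] - target[0], pos[1] - target[1]) <= tolerance_m:
            hits += 1
    return hits / len(correct)

if __name__ == "__main__":
    correct = {"OP1": (120.0, 340.0), "bridge": (400.0, 90.0), "rally point": (60.0, 60.0)}
    placed = {"OP1": (130.0, 335.0), "bridge": (480.0, 90.0)}  # rally point never placed
    print(layout_score(placed, correct))  # ~0.33: only OP1 within 25 m
```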

GIFT Wrap in VBS3.

In VBS3, GIFT Wrap is being conceptualized to support a variety of real-time assessments driven by the data that can be received directly from the game environment. Multiple trigger types, tutoring events, and performance tests are supported. For example, taking an event-based approach, GIFT Wrap can be used to author tutoring events triggered by entity states (e.g., avatar health) or environmental events (e.g., weapons fired). In a land navigation example, tutoring events may be triggered based on user/entity locations within the VBS3 environment as they relate to identified waypoints and scenario objects. When a trainee reaches a GIFT Wrap-specified location, the system may prompt the user to complete a task, or it may automatically begin to assess behaviors. Past research has demonstrated the capability of using triggers (time, location, entity state, etc.) to automatically and intelligently present interventions such as real-time prompts [17]. The prompts may guide users in building metacognitive skills, or they may assess user awareness of specific environmental elements (see Fig. 5, which illustrates the presentation of prompts to trainees during a VBS3 land navigation training scenario). In any case, GIFT Wrap is designed to let authors configure all triggers, associated tests or tasks, and the condition classes used for assessing trainee performance across those varying events.

Fig. 5. VBS3 assessment configurations and notional GIFT training intervention prompt based on a configured trigger.

GIFT Wrap in Live Training.

Conceptually, GIFT Wrap would perform similarly in a live training environment. Authored triggers, however, would be based on data pulled from the live interaction (e.g., trainee location as tracked via GPS-capable technology). Real-time assessment may include automated assessment of physical behaviors (e.g., how long a trainee stayed in a specific location, how many times a trainee "backtracked" to a specific location). Intelligent tutoring could take the form of prompts or coaching messages, as described within the VBS3 example. In this case, the trainee interface would be presented via smartphone or tablet technologies, which would present user tasks and collect data from assessments that require responses to cognitive tasks while trainees complete physical tasks during a land navigation training event. To support this approach to adaptive training, two functions need to be addressed: (1) the ability to configure assessment data based on real-world terrain data, as represented across multiple map data resources (e.g., a Google Maps interface), and (2) a GIFT mobile app that manages the transmission of GPS data to a centralized server for assessment purposes and for the delivery of prompts triggered during training based on GIFT Wrap-oriented assessments.
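As one example of how such behavioral measures might be derived from a GPS track, the sketch below groups consecutive fixes near a point into visits, yielding a dwell time and a simple backtracking indicator. The acceptance radius, coordinates, and function names are illustrative assumptions.

```python
# Sketch of live-environment measures from timestamped GPS fixes: dwell time
# near a point and whether the trainee backtracked to it. Illustrative only.
import math

def dist_m(a, b):
    """Haversine-style great-circle distance between (lat, lon) points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(a[0]), math.radians(b[0])
    dp, dl = math.radians(b[0] - a[0]), math.radians(b[1] - a[1])
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def visits(track, point, radius_m=20.0):
    """Group consecutive fixes within radius of a point into visits; return (start, end) times."""
    spans, start, last = [], None, None
    for t, lat, lon in track:
        inside = dist_m((lat, lon), point) <= radius_m
        if inside and start is None:
            start = t
        if not inside and start is not None:
            spans.append((start, last))
            start = None
        last = t
    if start is not None:
        spans.append((start, last))
    return spans

if __name__ == "__main__":
    wp = (38.8895, -77.0353)
    track = [(0, 38.8895, -77.0353), (30, 38.8896, -77.0353),   # first visit
             (60, 38.8920, -77.0353),                            # moves away
             (120, 38.8895, -77.0354)]                           # backtracks
    v = visits(track, wp)
    print("dwell (s):", v[0][1] - v[0][0], "| backtracked:", len(v) > 1)
```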

5 Conclusions

Developing training to support a CWR adaptive model of interaction in a self-regulated environment requires technology to facilitate the assessment and coaching needed to guide a trainee through the varying phases of skill acquisition. GIFT provides the tools and methods to build intelligent tutoring functions across an array of instructional domains, but mechanisms are still needed to assist a training developer in building a set of lessons that build upon one another and incorporate a sequence of complementary training events and simulations. In this paper, we present a use case showing GIFT's utility for training a set of knowledge and skills across multiple environments that incorporate scenarios intended to progress a trainee from novice to expert. In addition, we show how GIFT can support personalized instruction during each training interaction. Lastly, we present current research surrounding the development of a new generalized tool, GIFT Wrap, to assist training developers in building the assessment logic required to drive these adaptive experiences.