
1 Introduction

The U.S. Army is modernizing the way it employs simulation-based technologies to train today’s Soldier. The program, called the Synthetic Training Environment (STE), aims to leverage recent advancements in gaming, virtual, and augmented reality tools and methods to build immersive and engaging interactions that promote team development and collective readiness. Part of the STE is a set of Training Management Tools (TMT) that assist in the Planning, Preparation, Execution, and Assessment of all scenarios and interactions supported within. Part of that capability set includes the provision of adaptive instructional system (AIS) functions at two levels of interaction. The first aims to optimize the learning gains within a single training exercise by establishing data-derived performance assessments, using those assessments to drive real-time coaching and exercise control, and facilitating in-depth After Action Reviews (AARs) in the absence of expert trainers. The second level of interaction is focused on skill acquisition tracking and establishing persistent representations of performance over time. Through this capability requirement, STE will specify performance data to be captured and stored across all interactions and environments.

What does persistent performance tracking provide? Under a persistent context, skills are built to establish higher-level transferable competencies that associate with measurable requirements for solving a problem or completing a task. To support both AIS interaction levels in STE, two primary challenges need to be addressed: establishing a framework of both individual and team-level competency models across appropriate echelon structures and team configurations, and establishing a workflow for converting raw data captured in a training scenario into meaningful measures of performance and effectiveness that not only drive real-time coaching and exercise control, but also map to the competencies of interest. This supports an evidence-centered design strategy for persistently tracking performance through data-driven methods, but requires careful consideration of how AIS functions are authored for a specific scenario and configured against a long-term competency framework.

In this paper, we describe ongoing work to extend the Generalized Intelligent Framework for Tutoring (GIFT [1]) to fit within a competency development strategy that utilizes an ecosystem of training resources [2]. This involves integrating GIFT’s assessment and pedagogical models with persistent learner modeling functions, and building authoring workflows that map scenario-level performance to a competency framework tracking long-term knowledge and skill development. To support this experiential tracking requirement, GIFT is integrating with functional components from the Advanced Distributed Learning (ADL) Initiative’s Total Learning Architecture (TLA [2]) that enable longitudinal performance tracking across a set of specified competency models. For our purposes, these baseline components are being extended to enable tracking of team-level attributes correlated with performance outcomes and mastery determinations.

To contextualize the discussion, we present a low-level use case based on measurement of infantry squad competencies, showing how training strategies can apply data-driven methods to track progression toward a state of readiness across disparate training interventions and environments (e.g., game-based, immersive simulation, live, etc.). Using this example, we define the extended architecture and the inherent data dependencies created by integrating GIFT and the TLA. We then review the flow of data between architectural components that links trainee interactions to real-time assessments in GIFT, the translation of those assessments into granular and aggregated measures of performance and effectiveness, and how tracked measures map to an infantry squad competency model that tracks both individual- and team-level performance. Based on this data strategy, an authoring and configuration workflow is considered that addresses each level of data inference.

1.1 Competency Development Training Strategy

A competency-driven training strategy is based on carefully designed pedagogical and andragogical experiences that target a set of specified Knowledge, Skills, Abilities, and Attitudes (KSAAs [21]). When developing a training strategy to target specific job roles and functions, these KSAA representations are carefully established within a defined “competency object”. In this instance, a job role or educational topic comprises a competency model (i.e., multiple inter-related competency objects) that associates with the KSAAs required to support domain understanding and the ability to perform domain functions. Within this context, the operational definition for each category is as follows:

  • Knowledge: an understanding of the declarative, procedural and conceptual components of a topic or job function.

  • Skill: context-driven behaviors that can be learned and improved over time.

  • Ability: underlying natural or inbuilt behaviors that are transferable and can be honed and improved upon to some extent (e.g., speed, strength, intelligence, etc.).

  • Attitude: affective and emotional control functions driven by task characteristics/demands and common job/role functions.

When considering the interplay between competency dimensions, the common rule is that ability and knowledge combine to create usable skills [3]. Knowing how to do something, and having the cognitive and physical attributes to apply that knowledge under realistic task conditions, is the foundation from which skill is built. Through this formalization, competency models establish these KSAA inter-dependencies, while training strategies establish a crawl, walk, run approach to development [4, 23] and design a program of instruction based on the competency model’s relational dependencies. As knowledge and ability combine to create skill, the trainee requires multiple opportunities for repetition under realistic conditions, with supporting scaffolds to guide mental model development and deliberate feedback to correct errors and reinforce proper technique [5]. The goal of a competency-based strategy is to achieve a specified level of mastery across the KSAAs [6], so that their application is seamless and accurate when required during operations.
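
To make the competency-object formalization concrete, the sketch below models the knowledge-plus-ability-to-skill relationship as a simple data structure. The class and field names are illustrative only and do not reflect an actual GIFT or CaSS schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a competency object bundling the KSAA dimensions
# described above; field names are invented for illustration.
@dataclass
class CompetencyObject:
    name: str
    knowledge: list = field(default_factory=list)   # declarative/procedural/conceptual items
    skills: list = field(default_factory=list)      # context-driven, trainable behaviors
    abilities: list = field(default_factory=list)   # transferable natural attributes
    attitudes: list = field(default_factory=list)   # affective/emotional control functions
    requires: list = field(default_factory=list)    # prerequisite competency objects

react = CompetencyObject(
    name="React to Direct Fire",
    knowledge=["battle drill steps", "cover vs. concealment"],
    abilities=["speed", "spatial awareness"],
)
move = CompetencyObject(
    name="Move Under Direct Fire",
    skills=["3-5 second rush", "bounding movement"],
    requires=[react],   # knowledge and ability combine to underpin skill
)
```

A competency model is then simply the graph formed by these `requires`-style relations, which a training strategy can traverse in crawl, walk, run order.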

For complex job roles and across military occupational specialties, it is recognized that the development of knowledge and skill is not contained within a single environment or interaction. An ecosystem, like the TLA, establishes a library of resources that are mapped across a set of competency objects. The overall competency model establishes a high-level, abstracted view of measurable data fields across KSAAs that is independent of any deliverable training or educational experience within that ecosystem. This abstracted view of competency objects is then mapped to the available resources, and can ultimately be used to prescribe training and track progress toward expert role standards. The objective is to create a path through the resources (i.e., training aids, devices, simulations, and simulators) that builds the KSAAs and then provides ample opportunities to apply those skills under novel contexts. Implementing a strategy of this nature depends on access to data from the training resources the model subscribes to.

The challenge is properly contextualizing the data across multiple sources and variables. The assumption is that a training or educational experience is designed with problems/events/scenarios that target the application of specific competency objects. Within this paradigm, the scenario context defines the competencies required to meet task objectives, along with context-oriented assessments to inform performance and derive evidence [24]. GIFT’s real-time assessment tools provide the ability to code those context-oriented assessments in its domain-agnostic task model, providing a translation layer that converts raw multi-modal data into key measures of performance and effectiveness. Tracking this recorded evidence over time, across a large number of assessment opportunities, provides a reinforcement update function for adjusting competency assertions and building a high-confidence competency profile. What is critical is establishing profiles that track team competencies and developing models that measure readiness in terms of experience applying core competencies over time, under conditions determined by leadership.

1.2 Team Competencies and Accelerating Readiness

In the arena of military operations, it is well understood that most occupational specialties are not performed in isolation. The majority of individuals are trained to serve a role within a team structure, where skill sets are combined to support complex mission types, often with multiple associated objectives. As such, from an organizational perspective, it is critical not only to track an individual’s ability to apply a set of skills within a unique set of contexts (i.e., can they perform their role functions?), but also to monitor the impact of that function on overarching team performance and outcomes (i.e., can they serve as an optimal teammate while satisfying their role?). In examining the literature, team-level attributes are critical when determining what makes an expert team [7]. Building unit-level competency objects and tracking performance dimensions that correlate with team outcomes is a critical component this research looks to support (see Fig. 1 [8]). By building out these competencies into their constituent KSAAs and establishing assessment techniques to track progress against them, programs of instruction can use these data points to better guide follow-on training events that target recognized weak points. To that point, there is a need for training resources that focus on team development techniques, applying proven sports psychology coaching methods to establish policy-driven andragogical models centered on experiential learning events. Using GIFT to support a training strategy of this nature is an objective we are designing toward. Specifically, authoring and configuration specifications are required to map assessments captured in a scenario to models tracking long-range performance.

Fig. 1. Team competencies: attitudes, behaviors, and cognitions. This figure provides a subset of evidence-based team competencies [8].

2 Architecture

To support a data-driven, competency-based training strategy using AIS applications, an architecture is required that enables data capture across multiple training experiences. In this instance, data capture associates with experiential measures of performance derived from a training environment that are contextualized around a task and an associated competency model. Ultimately, the framework will convert raw data generated during execution of a scenario or problem into meaningful evidence statements, where evidence is used to drive updates to a persistent model that tracks levels of mastery. As described above, a training intervention or exercise is selected based on the foundational assumptions that the content is linked to specific competencies, that the measures generated are realistic and sensitive to the context in which they are derived, and that the measures provide discriminatory evidence against defined standards and criteria for successful application [9].

To support this vision through a proof-of-concept prototype, we are integrating three technology baselines: (1) the Generalized Intelligent Framework for Tutoring (GIFT), which provides data contextualization through its real-time assessment functions; (2) a Learner Record Store (LRS) with xAPI statements, which provides a performance reporting data standard enabling warehousing of performance records across multiple environments; and (3) the Competency and Skill System (CaSS), which provides a data-defined competency model standard that consumes LRS data to build competency profiles based on available evidence. In the following subsections, each technology baseline within the architecture is described. This section is then followed by a use case highlighting the competency-based training strategy applied within the domain of infantry squad development and implemented within the persistent AIS framework.

2.1 Generalized Intelligent Framework for Tutoring (GIFT)

The Generalized Intelligent Framework for Tutoring (GIFT [10, 11]) is an open-source, domain-independent, modular, service-oriented architecture used to build computer-based tutoring systems. The careful design of GIFT provides reusability across domains, training applications, and technologies. It has also proven to be a backbone for the integration of automated and observed real-time assessments across the Live, Virtual, and Constructive (LVC) training system landscape. One unique feature of GIFT is the ability to easily define assessment and pedagogical/andragogical models for individual or team training by creating a Domain Knowledge File (DKF).

A DKF is used by the GIFT architecture (see Fig. 2) to continuously evaluate the learner against scenario-specific measures while executing tasks in the training environment. The structure of the DKF provides a configurable schema to define the measures that are important for contextualizing various streams of raw data into higher-level meaning. GIFT can utilize state information produced by the training environment (e.g., entity locations), physical sensors (e.g., breathing waveform), biographical information, affective state (e.g., excitement), and historical data to determine learner state across performance, cognitive, and affective categories. To make sense of the continuous real-time assessments being calculated, DKF authors organize measures as evidence of performance for each concept being evaluated.

Fig. 2. GIFT real-time assessment components that contextualize data in evidence-based performance statements.

Concepts are user-defined labels given performance assessment values, which can be used by GIFT for updating learner state during training. These learner state updates (e.g., trainee performance transitioned from at-expectation to below-expectation after an observed error) are used to trigger real-time adaptations and to populate After Action Reviews (AARs) as part of historical records. When the same concepts are reused across DKFs for persistent tracking of learner state, the concepts are elevated to course concepts. Course concepts can be used in other parts of a course beyond training applications, such as surveys/quizzes/tests, tagging of content with metadata, identifying which concepts should be taught in a specific portion of a course, and determining the specific concepts for which remediation needs to be delivered.
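
As a rough illustration of how a concept’s assessment value might drive learner-state transitions, the following sketch grades a continuous measure against authored thresholds and reports each state change. The function names, thresholds, and simplified state labels are stand-ins for GIFT’s actual DKF machinery, not its API.

```python
# Minimal sketch (assumed names) of DKF-style concept grading: a raw measure
# is mapped to a performance state, and transitions between states are the
# events that would trigger adaptations and populate an AAR.
def assess(value, at_threshold, above_threshold):
    """Grade a raw measure against authored thresholds."""
    if value >= above_threshold:
        return "AboveExpectation"
    if value >= at_threshold:
        return "AtExpectation"
    return "BelowExpectation"

def transitions(measures, at_threshold=0.6, above_threshold=0.9):
    """Return (old_state, new_state) pairs whenever the assessed state changes."""
    history = [assess(m, at_threshold, above_threshold) for m in measures]
    return [(a, b) for a, b in zip(history, history[1:]) if a != b]

# e.g., a spacing measure sampled during an exercise
print(transitions([0.95, 0.92, 0.7, 0.4]))
```

Each emitted pair corresponds to a learner-state update of the kind described above (at-expectation to below-expectation, and so on).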

During a GIFT course, learner state information is continuously evaluated, updated, and shared between various components of the GIFT architecture. This information can be delivered to external, long-term learner record stores (LRSs) for use in future training experiences by GIFT and other applications. To facilitate this collaboration among disconnected systems, there needs to be an agreed-upon standard by which records can be defined and understood. A widely used standard that provides this shared format for both receiving and sending data is the eXperience Application Programming Interface (xAPI [12, 13]). GIFT currently produces and consumes xAPI statements from a Learner Record Store (LRS).

2.2 eXperience Application Programming Interface (xAPI) and Learner Record Stores (LRS)

GIFT consumes, analyzes, and interprets various forms of raw data, such as training environment state information and historic events. By routing selected information, like learner state, to the LMS module in GIFT, xAPI statements can be written to a Learner Record Store (LRS) for long-term storage and later reuse. LRSs are useful for receiving, storing, and returning information about learning events created by one or more systems in which the event happened or was documented. These events can be anything from high-level experiences (e.g., Alice scored an 89 on the Algebra test) to low-level training environment details (e.g., GPS location of Bravo team). This same LRS can easily be updated or read by other systems the learner may interact with, providing a hub for a learning ecosystem (see Fig. 3 for an xAPI data flow strategy [2]).

Fig. 3. The Advanced Distributed Learning Initiative xAPI data flow (Experience API, 2020).

To support interoperability across platforms that utilize LRSs in their reporting functions, xAPI is being applied in this use case. The Advanced Distributed Learning (ADL) Initiative xAPI effort was created to meet the need to track overall learning experiences. While xAPI is a vast and growing technical specification [14], one basic building block of xAPI relevant to GIFT is the “Statement” object. A Statement consists of syntactical subcomponents, namely Actors, Verbs, Objects, Results, and Contexts (see Fig. 4 for an example).

Fig. 4. An example xAPI Statement [13].

These subcomponents, being bound together by additional syntax rules, provide the specification necessary to create well-formed Statements. When implemented correctly, these well-formed Statements allow disparate systems to communicate about learning experiences and outcomes in a common language. Thus GIFT, being both a consumer and producer of xAPI statements, is able to participate in learner ecosystems.
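
A Statement of this shape can be sketched directly as JSON. The actor, activity ID, and platform value below are placeholders chosen for illustration; only the Actor/Verb/Object/Result/Context structure follows the xAPI specification.

```python
import json

# Illustrative xAPI Statement (placeholder IDs, not GIFT's actual profile)
# echoing the "Alice scored an 89 on the Algebra test" example above.
statement = {
    "actor": {"objectType": "Agent", "name": "Alice",
              "mbox": "mailto:alice@example.mil"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/scored",
             "display": {"en-US": "scored"}},
    "object": {"objectType": "Activity",
               "id": "http://example.mil/activities/algebra-test"},
    "result": {"score": {"raw": 89}},
    "context": {"platform": "GIFT"},
}

# A well-formed statement serializes cleanly for POSTing to an LRS endpoint.
payload = json.dumps(statement)
```

Because both the producer and the consumer agree on this shape, an LRS can store the serialized payload without knowing anything about the system that generated it.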

As GIFT is an ongoing open-source research and development project, the team does not mandate the use of any specific LMS. Instead, GIFT offers a compatible API with options to communicate with any stakeholder-preferred LMS. Thus, GIFT provides a direct AIS subcomponent either as a standalone learning experience with basic LRS options, or as part of a larger instructional system framework such as the Total Learning Architecture (TLA [15]).

It is worth noting that GIFT’s utilization of the xAPI specification has been architected in strict compliance with the official xAPI standards for both producing and consuming data. The open nature of GIFT’s API has allowed for compatibility with the learning community at large, and has allowed GIFT to be integrated with other systems architected in a similar fashion. The next section describes one such framework with which GIFT has been integrated: the Competency and Skill System (CaSS).

2.3 Competency and Skill System (CaSS)

CaSS is a project that began in 2016 under ADL [16]. Like GIFT, CaSS consists of open-source code and was created to satisfy similar high-level goals of improving the sharing of information concerning learners and learning resources. In this section we present a brief introduction to CaSS, the services that CaSS provides, and the relevant integrations with GIFT.

The reader is welcome to visit the CaSS homepage [16] for a full explanation of CaSS; for the purposes of this paper, please reference the CaSS Overview Document [17]. Directly quoted, CaSS provides the following two services:

  • (1) “Define, store, manage, and access objects called competencies that are organized into structured collections called frameworks. Competencies can represent competencies, skills, knowledge, abilities, traits, learning objectives, learning outcomes, and other similar constructs that define performance, mastery, attainment, or capabilities. Frameworks are associated with a knowledge domain, a domain of endeavor, a job, or a task with structure defined by relations among the competencies they (or other frameworks) contain.”

  • (2) “Store assertions about the competencies held by an individual (or team), and compile assertions and other data into profiles that describe a learner’s current state. CaSS is designed to respond to queries from other applications that, for instance, ask whether an individual X holds a competency Y (at performance level Z). CaSS will answer yes or no and might include a number indicating its confidence in the answer, a link to evidence, and an expiry date. In addition, CaSS can collect assertions and other data from multiple sources and apply relations and rules to formulate a response to a query.”
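
The second service can be caricatured in a few lines: assertions are stored with a level, a confidence, and an expiry, and a query of the form “does X hold competency Y at level Z?” is compiled against them. This is a conceptual sketch with invented data and rules, not the CaSS API, which exposes this logic through its own service endpoints.

```python
from datetime import date

# Invented assertion store illustrating the CaSS query pattern quoted above.
assertions = [
    {"agent": "alice", "competency": "Move Under Direct Fire",
     "level": "journeyman", "confidence": 0.8, "expires": date(2026, 1, 1)},
]

LEVELS = {"novice": 0, "journeyman": 1, "expert": 2}

def holds(agent, competency, level, today=date(2021, 1, 1)):
    """Answer 'does agent hold competency at this level?' with a confidence."""
    for a in assertions:
        if (a["agent"] == agent and a["competency"] == competency
                and LEVELS[a["level"]] >= LEVELS[level]
                and a["expires"] > today):
            return True, a["confidence"]
    return False, None

print(holds("alice", "Move Under Direct Fire", "novice"))
print(holds("alice", "Move Under Direct Fire", "expert"))
```

The expiry check mirrors the quoted description: an assertion can carry an expiry date, after which it no longer supports a “yes” answer.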

At this time, GIFT software developers are in the process of formally integrating GIFT capabilities with the first CaSS service, which revolves primarily around the storage and access of competencies. To support mission readiness analysis in the future, the second CaSS service will be integrated to make assertions across individual and team competency structures. As referenced in the introduction, the concept and implementation of a system in support of KSAAs can be directly related to combining software suites like GIFT and CaSS.

Competency Models in CaSS.

CaSS, through a database of Subject Matter Expert (SME)-defined competencies, acts first as a store of expert knowledge concerning the KSAAs necessary to meet job/role requirements. Every competency object in a CaSS database can be accessed as an individual object, and also contains relationships to other objects in that CaSS instance. The competency relationships consist of links such as “requires,” “narrows,” or “is the same as,” to name just a few. When integrated with a fully defined CaSS competency network, other software frameworks such as GIFT can gain access to previously inaccessible historical performance records concerning learner qualifications across the user base.

As referenced above, GIFT contains information in the DKF that is used to analyze learner performance throughout a course. Utilizing knowledge about a learner gained through course/DKF analysis, GIFT is able to contrast that information against competencies defined in a CaSS database. The end result is an aggregated training tool that, combined with xAPI and an LRS, constitutes an AIS capable of real-time performance analysis, tracking and gauging learner skills over time, and mission readiness analysis for both individuals and teams. For a visual representation of the technological components of a GIFT-to-CaSS integration, please see Fig. 5.

Fig. 5. CaSS components, adapted from the CaSS Developer Guide [17].

The second CaSS service, relating to assertions about learner qualifications, was less fully integrated with GIFT at the time of this writing, but is being designed to support a proof-of-concept implementation in the area of infantry squad tactical competencies. After all new GIFT capabilities relating to xAPI, LRS/LMS operations, and integration with a CaSS framework reach technological maturity, the GIFT software suite will be able to utilize an assertion system in conjunction with the DKF, allowing authorized stakeholders to query a learner database on topics such as mission readiness. This capability requires careful consideration of the configuration and mapping workflows, which are conceptually introduced below in the use case example.

With KSAAs, GIFT’s DKF, xAPI, LRS, and CaSS components discussed, it should be evident how training content metadata combined from these disparate systems can help create an adaptive training plan for a learner, utilizing an AIS assessment function resident in GIFT. Furthermore, as a learner consumes more training content in a configured ecosystem, the learner’s performance is tracked and cataloged over time, creating a robust learner profile from which to gauge past performance, current readiness, and future training recommendations related to both individual and mission goals.

3 Infantry Squad Use Case: From Individual to Unit

To drive the discussion from concept to application, in this section we present a high-level use case focused on unit competencies associated with an Army infantry squad, and a proposed data strategy to manage performance tracking across a training cycle. This involves establishing KSAAs at the role level and developing cohesion and expertise at the unit level. From a performance and tactical perspective, an infantry squad must be well trained across a group of Mission Essential Tasks (METs). Among these are a set of commonly encountered situations called Battle Drills (BDs).

BDs are defined as “a collective action rapidly executed without applying a deliberate decision-making process” [18]. Mastering these competency sets requires training cycles that focus on instilling an automated response to mission situations that require this set of common collective actions. Focusing on specific BD-level competencies that target individual and unit KSAA constructs serves as the first use case to support data strategy development and prototyping; in this instance, Move Under Direct Fire. The subsections below discuss the utility of an ecosystem of resources to support unit mastery of a specific BD, and then the system-level requirements to build evidence across those resources using the architecture components described above.

3.1 An Ecosystem of Training Experiences

A strategy leveraging an ecosystem of training resources that combine didactic pedagogical instruction with experiential andragogical interactions can be used to guide unit development. This might involve common courseware to introduce the fundamentals of the BDs (e.g., multi-media, classroom lecture, worked-example walkthroughs, etc.), exposing cognitive decision-making points and assessing tactical understanding through semi-immersive gaming environments, providing fully immersive mixed-reality environments that combine the cognitive and physical task characteristics, and then evaluating application in a live environment with full task fidelity under realistic operational constraints. An approach of this nature was examined under the Squad Overmatch program, which focused on a program of instruction leveraging simulation-based techniques to target the crawl and walk portions of skill development [19]. This research builds on the success of that program and examines data analytic techniques that help refine the utility of training resources based on the needs of a given unit. In addition, the proposed data strategy is designed to scale across team structures and operational domains.

When instituting a training ecosystem, it will initially apply existing Training Aids, Devices, Simulators and Simulations (TADSS [20]) available to the command. For an approach of this nature to meet its intended application, an assumption is that the TADSS at play are enabled to meet the data reporting specifications required to track KSAA development. Thus, an ecosystem is only realized when the systems at play are able to share data. It is important to convey here that GIFT is not required to support the data strategy at the system-of-systems level of resources; however, each TADSS must adhere to the performance reporting requirement through the production of xAPI statements that are used as assertions against the defined squad-level competency model. Following an interaction, a set of experience statements (via xAPI) is required to contextualize the completed resource and provide evidence of observed performance. In the following subsection, we walk through the data flow in which GIFT is utilized to translate data into performance metrics and map those metrics to a competency model established in CaSS.

3.2 Proposed Data Strategy

Below is a set of data inference procedures that specify the data strategy being implemented through the GIFT and CaSS integration at the learning resource level. The flow represents the required analytic processes to contextualize data, regardless of the environment, and defines the data flow for a single instance of training executed in a multi-modal dynamic training environment. Following the list, we examine the roles GIFT and CaSS have in the data strategy.

  1. Training environment PRODUCES multi-modal raw data (e.g., STE Training Simulation Software), comprised of simulation events, behavioral sensors, physiological sensors, verbal and non-verbal communication, etc.

  2. Raw data is CONSUMED by Training Management Tools (e.g., GIFT), which PRODUCE metrics (e.g., task performance, team performance, model features) and states (e.g., fatigue, workload, teamwork, etc.).

     NOTE: These metrics and states are delivered to the GIFT LMS module for long-term storage.

  3. GIFT LMS GENERATES xAPI (experiential statements on specified data context).

     NOTE: The GIFT LMS module, connected to an LRS, translates performance and state information into xAPI statements.

  4. xAPI ROUTED TO LRS/data lakes for storage.

  5. LRS/data lakes ACCESSED BY the Competency and Skill System (CaSS).

  6. CaSS UPDATES the Squad Lethality Competency Profile based on data assertions captured during the training interaction.

  7. FUTURE: Recommender engines linked to CaSS TARGET Soldier development opportunities and GUIDE and/or AUTOMATE plan/prepare scenario generation activities across subsequent training resources.
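
The steps above can be sketched end-to-end as a chain of small functions, with invented stand-ins for the simulation, GIFT, the LRS, and CaSS; none of the names or the toy marksmanship metric reflect real system interfaces.

```python
# End-to-end sketch of steps 1-6 as pure functions (all names illustrative).
def produce_raw(sim_events):                      # step 1: training environment
    return {"events": sim_events}

def contextualize(raw):                           # step 2: GIFT metrics/states
    hits = sum(e == "hit" for e in raw["events"])
    return {"metric": "marksmanship", "value": hits / len(raw["events"])}

def to_xapi(metric):                              # step 3: GIFT LMS -> xAPI
    return {"actor": "squad-1", "verb": "scored",
            "object": metric["metric"], "result": metric["value"]}

lrs = []                                          # step 4: LRS / data lake
def store(stmt):
    lrs.append(stmt)

def update_profile(profile, stmt):                # steps 5-6: CaSS assertion
    profile.setdefault(stmt["object"], []).append(stmt["result"])
    return profile

profile = {}
stmt = to_xapi(contextualize(produce_raw(["hit", "miss", "hit", "hit"])))
store(stmt)
profile = update_profile(profile, stmt)
print(profile)   # {'marksmanship': [0.75]}
```

Step 7, the recommender, would read this accumulated profile rather than any single training event, which is why each stage only passes forward contextualized summaries.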

Building Evidence with GIFT.

In this subsection, we highlight the functions of GIFT that translate data generated during a training exercise into contextualized measures of performance and effectiveness. Specifically, within the DKF, GIFT has been architected according to data encapsulation design patterns. These design patterns, represented as condition classes, are mapped against a designated task model built using GIFT’s DKF authoring environment (see Fig. 6 for an example of the DKF interface with a BD task model established within).

Fig. 6. Real-time assessment Domain Knowledge File task and concept representation for a squad-level battle drill.

As described in the architecture section above, the DKF allows an author to build a customized task model for use within a specified training environment. For a squad-level exercise, the DKF is structured around the tasks a unit can experience within a specified scenario storyboard. In this instance, as seen in Fig. 6, there are three tasks represented in the Virtual Battle Space 3 (VBS3) training environment, each with established performance concepts represented within. Each concept can be graded using data-driven processes supported across the reusable metrics, or an author can specify a human observer to provide the assessment state based on their subjective expertise. From a skill development standpoint, understanding the limitations of VBS3 is critical when linking the experience to a competency-based training ecosystem. There needs to be a way to distinguish similar task models applied in environments of varying fidelity (e.g., semi-immersive vs. fully immersive). Essentially, an instructional designer should ask, “what are the learning objectives of a training resource and how do they support operational performance?”

At run-time, the data produced during interaction is consumed within GIFT’s gateway and passed to the domain module for assessment. There is currently a set of pre-existing, domain-independent condition classes that can be applied to drive concept performance state determination (see Fig. 6 for the list supported in VBS3). Through this mechanism, the DKF task structure creates a scalable and traceable data schema to build performance context within a simulated training environment. Each inferred state (e.g., concept 1 is at-expectation) and transition between states (e.g., concept 1 changed from at-expectation to below-expectation) is reported to the LMS module for long-term storage. Additionally, there are scoring rules to establish a skill state (e.g., novice, journeyman, expert) based on all observed performance states. Fundamentally, the DKF is the GIFT representation of a superset of data, and the evaluation of the actual learning experience is what is translated into xAPI statements and broadcast to other interested systems.
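
As a hedged illustration of what a domain-independent condition class does, the sketch below assesses a concept from entity-state messages. The class name and message format are hypothetical; GIFT’s real condition classes are components driven by training-application traffic through its gateway.

```python
# Hypothetical condition-class sketch: consume entity state, emit a concept
# assessment. Mirrors the idea of a reusable, domain-independent grading rule.
class AvoidLocationCondition:
    """Assess below-expectation when an entity enters a restricted area."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def handle(self, entity_state):
        x, y = entity_state["location"]
        cx, cy = self.center
        inside = (x - cx) ** 2 + (y - cy) ** 2 <= self.radius ** 2
        return "BelowExpectation" if inside else "AtExpectation"

cond = AvoidLocationCondition(center=(0, 0), radius=10)
print(cond.handle({"entity": "SQD1", "location": (3, 4)}))   # inside the area
print(cond.handle({"entity": "SQD1", "location": (20, 0)}))  # outside the area
```

Because the rule is parameterized rather than scenario-specific, the same class can be reused across DKFs, which is the reusability property the design patterns above are meant to provide.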

Establishing Persistence with xAPI and CaSS.

The concept of data persistence brings together all topics discussed in this paper up to this point. The evidence created by GIFT, described in the previous section, forms the initial data element for the analysis that follows. When the GIFT software suite is considered as part of a larger system, what must be examined are the complex relationships among all parts of that system and how those relationships affect data permanence.

First, it is important to understand that internal GIFT data representations, notably the DKF, are not necessarily exposed to outside systems at all times. Fully describing all internal GIFT processes is beyond the scope of this paper; the reader should simply note that internal GIFT data, the DKF, and their associated processes are continuously created and referenced by GIFT during authoring and tutoring user experiences. A complete DKF contains real-time assessment metrics as well as overall assessment rules that define pass/fail grading on selected concepts. When the DKF completes, those selected concepts are graded based on the events that unfolded. The grades are then sent to the LMS module, where they are converted into CaSS-compatible xAPI statements and stored in the LRS connected to GIFT.

At the time of this writing, CaSS uses xAPI statements strictly for its assertion processing logic. The assertion logic is the part of CaSS responsible for comparing learner performance (gleaned from xAPI messages) to the internal framework of CaSS competencies and then returning assessments of readiness/mastery. CaSS can either receive xAPI statements directly or poll an existing xAPI endpoint. It is worth noting that CaSS retains any learner data received through xAPI statements until purged. A good, simple example that explains competency acquisition can be found in [16].
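The assertion idea can be sketched in a few lines: compare each reported performance state against the state a framework entry requires, and emit a boolean assertion per competency. The data shapes and names here are assumptions for illustration, not the actual CaSS schema or API.

```python
# Minimal sketch of assertion-style logic: compare learner performance
# (as would be gleaned from xAPI statements) against a competency
# framework. Field names and required states are invented.

FRAMEWORK = {
    "move-under-direct-fire": {"required_state": "AtExpectation"},
    "react-to-contact": {"required_state": "AtExpectation"},
}

def assert_competencies(statements):
    """Return {competency: True/False} for each framework entry observed."""
    assertions = {}
    for s in statements:
        comp = s["competency"]
        if comp in FRAMEWORK:
            required = FRAMEWORK[comp]["required_state"]
            assertions[comp] = (s["state"] == required)
    return assertions

result = assert_competencies([
    {"competency": "move-under-direct-fire", "state": "AtExpectation"},
    {"competency": "react-to-contact", "state": "BelowExpectation"},
])
```

In the real system, the inputs would arrive as xAPI statements and the outputs would be CaSS assertions of readiness/mastery rather than plain booleans.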

CaSS can then use the information gained from xAPI messages (for instance, Learner “X” “Scored” “At-Expectation” on “Concept 1” in “Course Y”) to respond to queries about the qualifications of the learners in question. In the short term, this is useful as learners gain competencies throughout a GIFT course that pipes out xAPI statements which other systems in an architecture, such as the TLA or STE, may consume. Over the long term, GIFT, instructor dashboards, or LMSs will be able to query the CaSS server against learner profiles to quickly assert which learners possess which competencies, so as to personalize current training content. That long-term store of learner information, combined with the subject matter expert-created CaSS database and GIFT course content tagged against that database, will allow us to analyze which content is most appropriate to deliver to the learner through GIFT to maximize readiness and track individual/squad readiness over time.
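A qualification query of the kind a dashboard might issue can be sketched over a store of assertions. The flat tuple-keyed store and the function name are assumptions for this example; CaSS exposes this capability through its own server API.

```python
# Illustrative sketch of a qualification query against a store of
# competency assertions. The (learner, concept) -> bool layout is an
# assumption, not the CaSS data model.

ASSERTIONS = {
    ("learner-x", "concept-1"): True,
    ("learner-x", "concept-2"): False,
    ("learner-y", "concept-1"): True,
}

def qualified_learners(concept):
    """Return learners with a positive assertion for the given concept."""
    return sorted(learner for (learner, c), ok in ASSERTIONS.items()
                  if c == concept and ok)

who = qualified_learners("concept-1")
```

A dashboard or LMS could use such a query to decide which learners need remediation on a concept before assigning the next training event.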

Linking GIFT to CaSS.

Linking DKF configurations with CaSS frameworks is an important authoring consideration that needs to be addressed, and development is currently underway to connect the two modeling components. Ultimately, we envision two potential paths. In the first, authors start from scratch and explore existing CaSS frameworks, ideally with text search capabilities that make it easier to find relevant entries by key term. If a framework is found that relates to what will be taught, then authors can select specific concepts under that framework to be assessed during the GIFT course. This approach provides a quick, templated format for authoring a specific domain, as well as a possible indirect link to other training assets that can be referenced or imported into the training ecosystem. For example, if the M249 Machine Gun concept in a CaSS framework had assertions or references to a PDF, VBS training content/scenarios, or even other GIFT courses, that material could be re-applied in instructional or remedial capacities.
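The text-search step of this first path amounts to a keyword lookup over framework entries. The entry names and IDs below are invented for illustration; a real implementation would query the CaSS repository rather than an in-memory list.

```python
# Hedged sketch of the first authoring path: find candidate framework
# entries by key term before selecting concepts to assess. Entries are
# invented examples, not an actual CaSS framework.

FRAMEWORK_ENTRIES = [
    {"id": "fw-001", "name": "M249 Machine Gun Employment"},
    {"id": "fw-002", "name": "Move Under Direct Fire"},
    {"id": "fw-003", "name": "React to Indirect Fire"},
]

def search_framework(term):
    """Case-insensitive substring search over framework entry names."""
    term = term.lower()
    return [e for e in FRAMEWORK_ENTRIES if term in e["name"].lower()]

matches = search_framework("fire")
```

An authoring tool would present such matches to the author, who then picks the entries to bind to DKF concepts.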

The other approach is to start by adding the concepts to be taught and then link each one to a concept in one or more CaSS frameworks. If CaSS lacked a relevant entry, that would not prevent GIFT authors from continuing to build out the course and the GIFT assessments. GIFT could continue to store overall assessments in a long-term learner model, with the hope that other systems like CaSS would one day become consumers alongside GIFT. Post-processing logic could even be run on the GIFT LRS to help link xAPI statements to newly created or discovered CaSS frameworks and concepts.
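That post-processing step might look like the following: scan stored statements and attach a framework concept ID wherever the activity name matches a newly discovered entry. The statement and entry shapes are assumptions for this sketch.

```python
# Hedged sketch of LRS post-processing: link stored xAPI-like statements
# to framework concepts by simple name matching. Data shapes are invented.

def link_statements(statements, framework_concepts):
    """Annotate each statement with a matching framework concept ID,
    or None when no framework entry matches the activity name."""
    by_name = {c["name"].lower(): c["id"] for c in framework_concepts}
    return [
        {**s, "framework_id": by_name.get(s["activity"].lower())}
        for s in statements
    ]

linked = link_statements(
    [{"activity": "Move Under Direct Fire", "state": "AtExpectation"},
     {"activity": "Photon Torpedo", "state": "AtExpectation"}],
    [{"id": "fw-002", "name": "move under direct fire"}],
)
```

Statements left with `framework_id = None` would simply wait until a matching framework entry is created or discovered.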

The integrated CaSS framework could also communicate with an LMS, primarily to satisfy the CaSS assertion capability. GIFT stores and reads information about course metadata, learner profiles, and learner assessments. CaSS, by contrast, is a database storing only competencies and the relationships between competency frameworks; it queries other systems for learner performance data and performs the logic that asserts a positive or negative qualification status. Together, GIFT and CaSS are architected in such a way as to encourage communication with a permanent learner data store such as an LRS/LMS. This architecture, visually represented below in Fig. 7, allows each system to perform its specialized function, communicate necessary information in a common xAPI syntax, and reach higher effective levels of training, assessment, and recommendation through a centralized competency profile that is the result of all software suites working together on the same network.

Fig. 7. Centralized Competency Profile as a Persistent Model

4 Future Work and Conclusions

In this paper, we introduced ongoing work that enables GIFT to support competency tracking in a training ecosystem model of skill development. GIFT was integrated with the TLA and CaSS through the xAPI data standard. This integration enables complex skill tracking in dynamic, multi-modal data environments and supports andragogic experiential training. Yet future work is required.

Establishing granular representations of experience is critical to supporting the experiential training approach described in this paper. Generally speaking, at what level in the DKF concept structure do we report performance for competency tracking? We believe the answer traces back to when GIFT course concepts are authored. For example, the author would create a course concept ‘move under direct fire’ mapped to an entry in the CaSS framework with a similar meaning, name, and ID XXX.XX.XXXX. The author can then continue to define the course concepts as they see fit. Perhaps they do not author M249 Machine Gun as a course concept, even though it is in the CaSS framework, because the training environment does not support that weapon system. Or perhaps the author adds photon torpedo as a course concept because they are assessing a space force training environment, even though the CaSS framework has no entry for it yet.

Future work also includes establishing a “headless” version of GIFT that allows other environments to leverage the DKF run-time assessment capabilities without linking them directly to a GIFT course. In this instance, an external system leverages the DKF to consume and contextualize data, but does not associate it with a course concept structure built at the course level. As a result, headless GIFT essentially means that other logic, such as an API, must provide GIFT with information that would normally come from artifacts like the course.xml. In one implementation, the external system initializing a GIFT configuration could provide the course concepts object via an API in the Gateway module. The structure of that object could mirror the structure GIFT stores in the course.xml so that two data models need not be managed.
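The idea of supplying the course-concepts object through an API can be sketched as follows. The nested field names here mimic a hypothetical flattened form of a course-concepts hierarchy; they are an assumption for illustration, not the exact GIFT course.xml schema.

```python
# Sketch of a headless integration: assemble the course-concepts object
# an external system could pass through the Gateway module in place of
# parsing course.xml. Field names are hypothetical.

def build_course_concepts(concept_names):
    """Build a simple concept hierarchy with one root node and one
    child node per named concept."""
    return {
        "conceptNode": {
            "name": "all concepts",
            "conceptNode": [{"name": n} for n in concept_names],
        }
    }

concepts = build_course_concepts(
    ["move under direct fire", "react to contact"])
```

Keeping this object structurally identical to what GIFT already stores in course.xml is the design point of the last sentence above: one data model, two transport paths.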