
1 Introduction

Systems engineering (SE) and technical leadership are multidisciplinary practices that are as much an art as a science. While a traditional model of education can teach the fundamental body of knowledge, a systems engineer develops the insight and wisdom required for proficiency only by putting that knowledge into real-world practice. Due to the exponential advancement of technology, rapidly evolving needs, and increasing system complexity, it is increasingly challenging for educators to meet the growing demand for a workforce able to solve complex systems engineering problems [1,2,3].

Traditional techniques for assessing systems engineering competencies involve reviewing industry experience together with written recommendations, which is time consuming and limited in accuracy. As systems engineering is an art as well as a science, capability is determined not only by knowledge but also by skills and competencies [4, 5]. Assessing a candidate's competencies based solely on written statements and interviews is comparable to licensing drivers on the written test alone, without a road test; such an assessment lacks information about the candidate's real-world performance. A new set of assessment techniques, together with a comprehensive assessment model, is needed to help fill the workforce gap by providing efficient and accurate assessment of systems engineering competencies, both for existing systems engineers and for those who are new to the field. Furthermore, an assessment method needs to be developed for the academic environment to assess systems engineering learning and provide feedback on instructional efficacy.

2 Systems Engineering Experience Accelerator

2.1 Introduction

The Systems Engineering Experience Accelerator (SEEA) project created a new approach to developing the systems engineering workforce, which augments traditional, in-class education methods with educational technologies aimed at accelerating the development of skills and experience through immersive, simulated learning situations that engage learners with problems to be solved. Although educational technology is used in a variety of domains to support learning, the SEEA is one of the few such technologies that support development of the systems engineering workforce.

The SEEA was developed to support a single-person role-playing experience in a digital environment, as well as a specific learning exercise in which a learner plays the role of lead systems engineer for a Department of Defense (DoD) program developing a new unmanned aerial system. This exercise is based on the notion of experiential learning and is therefore referred to as an experiential learning module. The learner engages with the experience (i.e., the simulated world), makes decisions to solve problems, sees the results of those decisions, abstracts lessons learned from what was successful and what was unsuccessful, and then repeats the process in a series of cycles, simulating the evolution of the program over time.
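Conceptually, this loop can be summarized in a few lines of code. The sketch below is a hypothetical illustration of the decide/simulate/reflect cycle, not the SEEA implementation; all names (ProgramState, get_decisions, simulate_phase, reflect) are assumptions introduced for illustration.

```python
# A minimal, hypothetical sketch of the decide/simulate/reflect cycle described
# above. Names such as ProgramState, get_decisions, and simulate_phase are
# illustrative assumptions, not part of the SEEA implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ProgramState:
    phase: int = 0
    metrics: Dict[str, float] = field(default_factory=lambda: {"cost": 0.0, "schedule": 0.0})

def run_experience(num_cycles: int,
                   get_decisions: Callable[[ProgramState], Dict[str, float]],
                   simulate_phase: Callable[[ProgramState, Dict[str, float]], ProgramState],
                   reflect: Callable[[ProgramState], None]) -> ProgramState:
    """Repeat the experiential cycle: decide, simulate forward, observe, reflect."""
    state = ProgramState()
    for _ in range(num_cycles):
        decisions = get_decisions(state)          # learner reviews status and decides
        state = simulate_phase(state, decisions)  # program is simulated into the future
        reflect(state)                            # outcomes shown; learner abstracts lessons
    return state
```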

The SEEA technology provides a graphical user interface allowing the learner to see the program status, interact with nonplayer characters to gain additional program information, and make technical decisions to correct problems. It also provides the capability to simulate the future behavior of the program based on these learner decisions, so that outcomes can be shown to the learner. This cycle of decision and simulation-into-the-future supports the Kolb cycle of experiential learning [6]; the Experience Accelerator uses multiple such cycles operating through the life cycle of the program. In particular, this approach allows communication of the effect of upstream decisions on downstream outcomes in the system life cycle. The SEEA can support a wide variety of systems domains and areas of expertise through changes to the experience. Recently, additional multiplayer technology has been developed to allow live-player support for team-based learning, as well as for a mentor or instructor to provide advice and feedback. The project's problem statement and goal are as follows.

Problem Statement

Traditional systems engineering (SE) education is not adequate to meet the emerging challenges posed by ever-increasing systems and societal demands, the workforce called upon to meet them, and the timeframe in which these challenges need to be addressed.

Project Goal

Transform SE education by creating a new paradigm capable of accelerating the maturation of a systems engineer while providing the skills necessary to address emerging systems challenges.

2.2 SEEA for Learning and Assessment

Learning assessment is a critical component of accelerated learning [7]. It is crucial to understand the learning results and the efficacy of different kinds of learning experiences. This is imperative both in assessing the capabilities of the learner and in improving the efficacy and capabilities of the learning experience. While assessment capabilities are critically important, nothing was found in the literature that was directly applicable to automated assessment of systems engineering skills in the SEEA. Therefore, a new experimental design grounded in the literature will need to be devised, along with a set of tools to facilitate its application.

While the Experience Accelerator (EA) has a broader goal of accelerating the learning of critical SE competencies through an experience-based system, systems thinking skills are a key component of the targeted learning outcomes. Systems thinking is at the core of the targeted EA SE competencies and therefore one of the primary competencies to be assessed in order to evaluate the effectiveness of the EA.

Systems thinking seeks to improve decision making and complex problem solving through deep systemic understanding. Typically, in order to assess learning gains in these areas, three approaches are utilized: measuring performance resulting from decisions (such as a game or simulation score), reviewing decisions and actions that were taken, or measuring learner understanding (the rules and mental operations that lead to decision making) [8, 9]. Measuring learner understanding seeks to verify that improved decision making arises from understanding the system and not simply from trial and error [10]. All of these approaches are valid and can result in worthwhile evaluation. As systems thinking skills are applied in order to understand and solve complex problems, educational research on the assessment of problem-solving skills can be helpful in designing an effective evaluation.

In order to solve an ill-structured problem, students must be able to deconstruct the problem into its constituent parts (e.g., stakeholders, relationships among them, impacts of the problem on them), define the problem in their own words, determine resources to help them understand the problem, determine and pursue learning issues, and develop and test a solution. Research on the evaluation of problem-solving skills tells us that in order to evaluate problem-solving ability, we must assess students’ ability to do each of these steps. The EA seeks to accelerate the learning of novice SEs and advance them more quickly to expert SE performance. Experts use heuristics to skip steps; novices typically are not capable of doing this.

A meta-analysis of the problem-solving assessment literature found that 18 of the 23 studies deemed to be of high quality used cases or simulations as assessment methods [11, 12]. With the EA simulation, we have the means to measure learners' performance within the experience. Learners make decisions within the EA, the simulation determines the results of those decisions, and we are provided with outcomes that we can use to assess the effectiveness of learners' decisions.

In order to assess learners' levels of understanding and to determine if the EA improves learning, a more thorough picture of the thinking behind learners' choices is needed. Therefore, to assess learners' understanding, it is important to elicit their views of the system, the problems they faced, and the thinking behind their decisions to solve these problems. Emerging literature in system dynamics has increasingly sought to assess learners' understanding or mental models rather than performance alone.

Therefore, learner performance can be assessed by analyzing the captured actions and decisions taken by the learner. The EA captures learner approaches to decision making (through verbal protocols), and by using expert choices and protocols as a baseline for "good" decision making, one can assess learner understanding.

The evaluation plan therefore focused on the following (a minimal sketch of the expert-to-novice comparison follows the list):

  • Benchmarking with an objective “score” which is also useful in motivating students

  • Comparing subject matter expert (SME) EA actions and results to novice SE actions and results

  • Comparing SME written (or transcribed verbal) descriptions of their decision-making process during the EA to the corresponding novice SE descriptions in experience 1 and experience 2

  • Tracking learning through changes in the first three items above across a learner's multiple iterations of the experience
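As one hedged illustration of how the action-and-result comparisons above could be operationalized, the sketch below compares a learner's recorded action sequence against an expert baseline using a simple sequence-similarity ratio. The action labels and the choice of similarity measure are assumptions made for illustration; they are not the project's actual analysis method.

```python
# Illustrative only: compare a recorded novice action sequence to an expert
# baseline. Action labels and the similarity measure are hypothetical.
from difflib import SequenceMatcher

expert_actions = ["review_status", "call_aps_lead", "reallocate_weight", "add_staff"]
novice_actions = ["review_status", "call_ccs_lead", "add_staff"]

def action_similarity(expert: list, novice: list) -> float:
    """Return a ratio in [0, 1]; 1.0 means the sequences match exactly."""
    return SequenceMatcher(None, expert, novice).ratio()

print(f"Similarity to expert baseline: {action_similarity(expert_actions, novice_actions):.2f}")
```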

To support this plan, the EA has been instrumented to record information as a learning laboratory. Research will be done to determine the requisite data that need to be recorded, and the EA will be updated accordingly. Prior to completing this research, the following data have been selected and will be collected from the EA:

  • Participant identification

    • Learner’s name and demographic information

    • Team name and other members

    • Instruction name and roles played in experience

  • Experience session information

    • Experience name and version

    • Date of experience start and end

    • Login dates and duration of each session

    • Phases/cycles covered in each login session

    • Elapsed time and number of sessions per phase/cycle

    • Links to past experience information

  • Learner experience inputs and actions

    • Self-assessment

    • Initial recommendation input

    • All subsequent recommendation inputs

    • Workflow sequence with each action recorded with a timestamp

    • Who is called and which questions are asked, in which order

  • Instructor input

    • Feedback provided to learners (dialog, email, etc.)

    • Recommendations accepted/rejected

    • Instructor’s observations

  • Simulation output

    • Last phase/cycle completed

    • Results of schedule, cost, range, and quality

    • Final status charts

    • Final score

  • Reflection

    • Reflection feedback provided to the learner

    • Learner’s reflection input

Next, a set of analysis tools is being developed to analyze this information. Test cases are being created to provide benchmarks to baseline this analysis. Finally, a demonstrable set of learning experiences will be recorded and analyzed to provide feedback on the capabilities of the system.
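As a concrete illustration of how such recorded data might be organized for analysis, the sketch below groups the categories listed above into a single record type. The field names and structure are assumptions made for illustration; they do not describe the EA's actual logging schema.

```python
# Hypothetical record structure for one learner's experience session,
# mirroring the data categories listed above. Not the EA's actual schema.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SessionRecord:
    # Participant identification
    learner_id: str
    team_name: Optional[str] = None
    role: Optional[str] = None
    # Experience session information
    experience_name: str = ""
    experience_version: str = ""
    login_times: List[str] = field(default_factory=list)      # ISO timestamps
    phases_covered: List[int] = field(default_factory=list)
    # Learner experience inputs and actions (each action timestamped)
    actions: List[Dict[str, str]] = field(default_factory=list)
    recommendations: List[str] = field(default_factory=list)
    # Instructor input
    instructor_feedback: List[str] = field(default_factory=list)
    # Simulation output
    final_metrics: Dict[str, float] = field(default_factory=dict)  # schedule, cost, range, quality
    final_score: Optional[float] = None
    # Reflection
    learner_reflection: str = ""
```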

3 The Learning Experience

3.1 Learning with the SEEA

The Systems Engineering Experience Accelerator provides the capability to simulate the program into the future, based on the learner's decisions, so that outcomes can be presented to the learner. This cycle of decision making and simulation-into-the-future supports the Kolb cycle of experiential learning [6]; the Experience Accelerator uses multiple such cycles operating through the life cycle of the program. Specifically, this approach allows illustration of the effect of upstream decisions on downstream outcomes in the system life cycle.

Applied in an academic setting, the SEEA concept provides the possibility for a much broader scope of learning environments than a capstone project or industry internship [13]. These more traditional approaches provide a beneficial learning experience and support integrating the various components of the SE body of knowledge, but are limited by time and domain. The capstone is usually a single project and at most a year in length. If it covers the full life cycle, then it must be a simple project and most likely represents only one domain. An internship is even more limited, given that few companies would assign a student to a significant role or provide much variation of role or domain. The SEEA envisions the ability to provide learning experiences that involve significant decision making at various levels of authority and drawn from many different domains. Neither a capstone nor an internship could likely present the same range of specific challenges and “aha” moments that the SEEA can provide. Whether the SEEA experience is as effective as a truly in vivo experience is part of the research underway, with results from academic and industrial pilots of the SEEA as the primary means of validating effectiveness.

3.2 The Current Learning Experience

The current SEEA learning experience was designed in a defense acquisition program context [12] where an unmanned aerial vehicle (UAV) acquisition program is underway. The learning experience utilizes the following scenario.

The XZ-5 is a sophisticated UAV system being developed for all services for reconnaissance, surveillance, and targeting missions. In this experience, the lead learner assumes the role of lead systems engineer just after the preliminary design review, replacing the previous lead program systems engineer. The XZ-5 project completed a technology development phase and preliminary design review (PDR) in the second fiscal year (FY-2) and entered a cost-plus-fixed-fee (CPFF) contract for the engineering and manufacturing development (EMD) phase after a favorable Milestone B (MSB) decision. The contract budget base is $200 M, with $195 M initially allocated to the performance measurement baseline (PMB). The program is supported by a prime contractor and three major subcontractors. The experience starts at the beginning of FY-3, just after the EMD contract is awarded. The learners' team has just come on board at the XZ-5 government program office. The XZ-5 program manager is counting on the team to establish "ground truth" on the technical status and trajectory of the XZ-5 development and to make recommendations to keep the program on track to enter critical design review (CDR) on time at the end of FY-4.

The current XZ-5 project under development consists of three major subsystems: the airframe and propulsion system is primarily electromechanical, the command and control system is mainly software, and the ground support system is mainly human based. The key performance measures (KPMs) are schedule, quality, range, and cost. Each of the learner's sessions in the experience represents a single day in the program and is estimated to take approximately 1 h to complete, although the learner is free to log in and out any number of times during a session (Fig. 80.1).

Fig. 80.1 Context for the UAV experience [7]
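For reference, the scenario parameters stated above can be collected in a simple configuration structure, as in the sketch below. The dictionary layout is an assumption for illustration only and is not the SEEA's configuration format; only the values come from the scenario description.

```python
# Illustrative encoding of the XZ-5 scenario parameters described above.
# The structure is hypothetical; only the values come from the scenario text.
XZ5_SCENARIO = {
    "contract_type": "CPFF",
    "contract_budget_base_musd": 200,
    "performance_measurement_baseline_musd": 195,
    "subsystems": {
        "airframe_and_propulsion": "primarily electromechanical",
        "command_and_control": "mainly software",
        "ground_support": "mainly human based",
    },
    "key_performance_measures": ["schedule", "quality", "range", "cost"],
    "session_represents": "one program day",
    "estimated_session_hours": 1,
}
```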

3.3 Pilot Use of the Learning Experience in a Project Management Course

More than 30 junior and senior engineering undergraduates at the University of Alabama in Huntsville (UAH) used the SEEA during the 2016 spring semester as a team project. The students were enrolled in the Management Systems Analysis course, which focuses primarily on project management skills. Students were asked to participate in teams of five, with each student in a team playing a different role in the XZ-5 UAV experience. Those roles include lead systems engineer (LSE), airframe and propulsion system (APS) lead, command and control system (CCS) lead, ground stations launch and retrieval system lead (LGLRS), and integration lead (Prime). Each team was tasked with using the SEEA in the UAV scenario in two homework assignments: one near the beginning of the semester and one near the end, to evaluate the students' skill advancement.

Phase 0 introduces the students to the SEEA and the XZ-5 program; phase 1 explains to the students their new assignment; phase 2 requires the students to analyze the current situation just after completion of the preliminary design review (PDR) and make recommendations to keep the program on track, leading to the critical design review (CDR); phase 6 provides the results of the simulation based on the students' performance; and phase 7 gathers information, provides feedback to the students based on the actions they took during the experience, and prompts them to reflect on what they learned. Phases 3, 4, and 5, simulating integration, system test, and limited production and deployment, are currently being updated.

4 Results and Analysis

4.1 Pilot Results

After the pilot course was completed, the performance data of the teams were gathered and compared. Due to technical difficulties in the first run, only the results from the second run of the SEEA are used in this analysis. The performance measures include range, critical software defects, schedule, CDR artifact completion, and budget overrun. The SEEA combines these measures to determine whether the CDR can be achieved successfully and determines the risk of proceeding with the UAV program. During the pilot, each of the seven teams made different decisions, resulting in a range of performances and different program results. Among the seven teams participating, five were able to complete the whole project cycle and reach phase 7 to receive performance feedback from the SEEA. Teams 1, 5, and 6 all finished with a low risk to proceed based on CDR results; team 3 finished with medium risk, and team 2 with high risk. Teams 4 and 7 did not complete the simulation.

The data gathered during the pilot application can be analyzed to provide insights into students' decision making, their capability to discover issues in the system, their ability to prioritize resources, and the outcomes of their decisions. As mentioned in Sect. 80.2.2, many different types of data were gathered by the SEEA system. Participant identification and experience session information are used to identify specific users and their use of the system. Learner experience inputs and actions are valuable data for tracking the learner's actions and behaviors during the experience, which provide insights into the learner's decision-making process. Simulation output data were used to determine the general performance of the learner; they also demonstrate the outcomes of learner decisions. Instructor input and reflections can be used to evaluate the efficacy of the learning and to improve the learning experience.

The performance of the teams is shown in Fig. 80.2. The range of the UAV is affected by weight, drag coefficient, and thrust-specific fuel consumption (TSFC). Team 2 performed very well on range, whereas teams 1, 3, 5, and 6 achieved the requirement. At the beginning of the experience, there were early signs of a range problem caused by weight issues, and most of the teams identified this issue and addressed it by reallocating the weight balance and adding more workforce to the airframe and propulsion team.

Fig. 80.2 XZ-5 UAV range performance

Budget is an important measure of the success of the UAV program, and teams needed to control the budget to succeed in the experience. While team 2 performed well on range, the recommendations it made caused a significant budget overrun. All the successful teams managed the budget and had a budget overrun of less than 15%. Figure 80.3 shows the overall budget overrun performance for the pilot application.

Fig. 80.3 XZ-5 UAV budget overrun performance

The XZ-5 UAV program has an original plan of 27 months between PDR and CDR, and any significant delay will potentially undermine the success of the program. The experts recommend that the schedule not be delayed more than 20 months, while a delay within 10% of the planned period is considered good. Teams that managed the schedule well were likely to pass CDR and proceed with low risk. Teams 3, 5, 6, and 7 managed the schedule well. Team 4 recommended advancing the CDR date by 5 months, which resulted in incomplete work. Teams 1 and 2 performed within an acceptable range (Fig. 80.4).

Fig. 80.4 XZ-5 UAV schedule performance

Another performance measure was software critical defects; this indicator is affected by the mix of senior and junior staff and by the number of software reviews. Fewer than eight critical defects are recommended to pass CDR and proceed with low risk. Teams 1, 5, 6, and 7 kept critical defects quite low, while teams 2 and 3 kept them under control (Fig. 80.5).

Fig. 80.5 XZ-5 UAV software critical defects performance
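The thresholds discussed in this section (a budget overrun under 15%, a schedule slip within roughly 10% of the 27-month plan, and fewer than eight critical software defects) suggest the kind of rule applied when judging CDR readiness. The sketch below is only an assumed approximation of such a check; the SEEA's actual logic for combining the KPMs is not reproduced here.

```python
# Assumed approximation of a CDR readiness check based on the thresholds
# discussed above; not the SEEA's actual risk model.
PLAN_MONTHS = 27  # planned PDR-to-CDR duration

def cdr_risk(budget_overrun_pct: float,
             schedule_slip_months: float,
             critical_defects: int) -> str:
    """Return 'low' if all illustrative thresholds are met, else 'elevated'."""
    within_budget = budget_overrun_pct < 15
    within_schedule = schedule_slip_months <= 0.10 * PLAN_MONTHS
    defects_ok = critical_defects < 8
    return "low" if (within_budget and within_schedule and defects_ok) else "elevated"

# Example: a team with a 12% overrun, a 2-month slip, and 5 critical defects
print(cdr_risk(budget_overrun_pct=12, schedule_slip_months=2, critical_defects=5))  # low
```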

4.2 Pilot Analysis

As mentioned in Sect. 80.4.1, the seven teams performed quite differently throughout the experience. Based on the data gathered from the SEEA, their downstream performance reflected their decision-making capabilities at crucial points in the project. The simulation challenged students to take on a project with existing issues from the previous development phase, requiring them to make changes to the system and project quickly and accurately. The teams that reacted quickly in the right direction generally performed better than the teams that simply observed the situation without making the necessary changes. Table 80.1 shows the performance of the different teams along with their presentation results, decisions, and actions throughout the experience. Team scores were calculated using a weighted system based on the learners' performance on schedule, range, budget, software critical defects, and CDR readiness. The scores were normalized such that a score of zero is equivalent to making no changes to the program and 100 is the best score that experts were able to achieve.

Table 80.1 Students’ input and reflection
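As an illustration of the weighted, normalized scoring described above, the sketch below rescales each KPM so that the no-change baseline maps to 0 and the experts' best result maps to 100, then combines the KPMs with weights. The weights and reference values shown are made up for illustration; the actual weighting used by the SEEA is not published here.

```python
# Hypothetical weighted scoring sketch: 0 = no-change baseline, 100 = expert best.
# Weights and reference values are illustrative, not the SEEA's actual scheme.
from typing import Dict

def normalized_score(kpm_values: Dict[str, float],
                     baseline: Dict[str, float],
                     expert_best: Dict[str, float],
                     weights: Dict[str, float]) -> float:
    """Linearly rescale each KPM between baseline (0) and expert best (100),
    then combine with weights that sum to 1."""
    total = 0.0
    for kpm, weight in weights.items():
        span = expert_best[kpm] - baseline[kpm]
        fraction = (kpm_values[kpm] - baseline[kpm]) / span if span else 0.0
        total += weight * 100.0 * fraction
    return total

# Example with made-up numbers where higher is better for every KPM:
weights = {"schedule": 0.25, "range": 0.25, "budget": 0.25, "defects": 0.25}
baseline = {k: 0.0 for k in weights}
expert_best = {k: 1.0 for k in weights}
team = {"schedule": 0.8, "range": 0.6, "budget": 0.9, "defects": 0.7}
print(round(normalized_score(team, baseline, expert_best, weights), 1))  # 75.0
```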

5 Summary and Future Work

This paper discussed the use of the Systems Engineering Experience Accelerator (SEEA) in the domain of systems engineering (SE) education and learning assessment. During the pilot application of the technology, data were gathered from seven teams of students who participated in the UAV learning experience. The data gathered from the pilot application provided insights into the students' decision making and their understanding of systems engineering and project management. The technical difficulties encountered in the first run of this pilot have been resolved, so in future pilot applications, multiple runs of the SEEA will be performed and compared for performance analysis. While there were technical issues during the pilot application, the SEEA was unanimously praised by the students for providing an opportunity to practice the skills that were illustrated in the classroom.

Future work for this research includes gathering data through a pilot application with a number of systems engineering experts; using the data gathered from expert use of the SEEA to calibrate the experience and the scoring mechanism; comparing students' behavioral data and decision-making processes with the experts'; and conducting pilot applications with two separate runs of the SEEA, before and after the learning, and using the data gathered to assess the efficacy of the learning.

The SEEA will be utilized for another pilot application in a graduate Introduction to Systems Engineering course at UAH in the Fall 2016 semester and will be utilized again in the Management Systems Analysis course in the Spring 2017 semester.