1 Introduction

In this article, we present our concept of self-scaling human-agent cooperation, which is intended to enable a single-seat fighter pilot to perform joint fighter-UCAV missions with multiple UCAVs. In such operations, the pilot’s range of responsibility includes operating the own fighter aircraft as well as guiding and managing multiple cooperating UCAVs in a dynamic air warfare environment. As a consequence, the task of flying the own aircraft conflicts with the operation of the UCAVs. State-of-the-art armed medium-altitude long-endurance unmanned aerial vehicles such as the MQ-9 Reaper (General Atomics) are operated by a two-person team consisting of a pilot and a sensor and weapon operator [1]. Operating multiple UCAVs from aboard a single-seat fighter aircraft therefore requires an appropriate automation concept for effective joint operations, since the UAV-to-operator ratio increases considerably. Our automation integration concept is intended to enable such operations and, beyond this, to balance the operator’s activity-related work demands and to address automation-induced shortfalls in human-system interaction.

In the following section, we first summarize the findings of our previous studies on multi-UCAV operations with highly automated agents and on joint fighter-UCAV operations. Then, we identify key issues of human-machine system design. Chapter 4 gives a detailed overview of our concept. In Chap. 5, we analyze our approach with regard to related international work. Finally, in Chap. 6, we conclude and provide an outlook on open conceptual issues.

2 Background

The Institute of Flight Systems (IFS) investigates joint fighter-UCAV operations in future air warfare scenarios, in which we expect a mix of manned and unmanned combat aircraft to be deployed (compare [2]). In the first phase of our research, we mainly examined the capabilities of highly automated cognitive and cooperative agents operating multiple UCAVs in joint air-to-ground attack missions. For this purpose, a desktop simulation was developed in which artificial cognitive agents autonomously operated the associated UCAVs on the basis of high-level abstract goals, without the need for further human involvement [3]. A key feature of this solution was the agents’ ability to negotiate among each other the allocation of the tasks to be performed during the mission, based on explicit goals for cooperation and rules of coordination and communication.

In the second phase of our research, we focused on the realization of a mixed manned-unmanned fighter-UCAV team. To this end, we replaced the on-board agent of one UCAV with a human pilot, who used a simple desktop human-machine interface to control the aircraft and to communicate with the other agent-guided UCAVs [4]. In human-in-the-loop experiments, we demonstrated the applicability of human-agent cooperation to joint fighter-UCAV operations. For a more realistic evaluation of the human-machine cooperation, in a next step the agents of the desktop simulation were modified and integrated into a generic single-seat fighter cockpit simulator [5–7]. For this setup and the following human-in-the-loop experiments, a high level of automation was chosen for the UCAV agents. This team-based cooperative guidance required only an absolute minimum of interventions by the pilot. The cooperative allocation of tasks between the UCAVs (i.e., suppression of enemy air defense, target reconnaissance, target designation, battle damage assessment, and fighter escort) was performed fully automatically by the agents, based on negotiation mechanisms. This high level of automation created the need for an associative assistant system, which consisted of a Team Coordination Module (TCM) [5] and a Self-Explanation Capability Module (SECM) [7]. The assistant system informed the operator about the behavior of the unmanned team members and thereby ensured a required minimum of Situation Awareness (SA) and trust in the automation. The TCM operated in an associative assistance mode, which provided the pilot with spatial and temporal coordination information on the UCAVs, and in a few use cases also in an alerting assistance mode that directed the pilot’s attention in the case of coordination conflicts. Additionally, the agents routinely explained the UCAVs’ behavior to the pilot and provided further information upon request.

To evaluate the joint fighter-UCAV system, an experimental study with German Air Force pilots was conducted. These experiments comprised air-to-ground missions in which the manned fighter was supported by three UCAVs. The main tasks of these missions were the reconnaissance, designation, and engagement of a high-priority target, the suppression of ground-based enemy air defense, and the protection of the manned fighter in general. To accomplish these tasks, the UCAVs and the manned aircraft had different capabilities and payloads, such as High-Speed Anti-Radiation Missiles (HARM) for Suppression of Enemy Air Defense (SEAD), Laser-Guided Bombs (LGB) for target destruction, cameras for reconnaissance, and laser designators. The manned fighter aircraft could only perform the high-level mission goal, the engagement of high-priority targets, after a visual verification of reconnaissance pictures provided by one of the UCAVs. All other mission-relevant tasks were autonomously anticipated by the UCAVs’ agents according to their capabilities and the dynamic environment. Throughout the mission, the UCAV agents proactively pursued the overall mission goal. The experiments showed that the cognitive agent-controlled UCAVs could adapt effectively to the changing environment and to unforeseen situations (e.g., pop-up threats). The assistant system, consisting of the TCM and SECM, was considered helpful and slightly increased SA. However, a significant increase in trust in the unmanned team members could not be shown [7].
The study also indicated that the constant high level of automation temporarily led to mental under-load of the pilots and lacked the adaptability to balance the operator’s activity and work demands over the course of the mission. The experimental subjects further expressed the desire to be able to assign specific tasks to the UCAVs during mission execution, especially in less demanding situations.

Generally speaking, the experiments confirmed that high-level automation in the form of cooperating cognitive agents is realizable and suitable for multi-UCAV guidance as well as for joint fighter-UCAV operations in demanding future air warfare scenarios. However, the study also showed the need to balance the operator’s workload in both highly and less demanding situations.

3 Key Aspects of Automation in Human-Machine Systems

Due to severe human-induced system errors in highly automated civil aviation aircraft, researchers have invested considerable effort in identifying negative effects associated with automation in complex human-machine systems. In this chapter, we summarize important aspects that may also negatively affect joint fighter-UCAV operations. Afterwards, we derive some basic guidelines for the design of our system.

The first issue, called human-out-of-the-loop performance, is attributed to several factors. These factors include vigilance decrements, complacency, over-reliance, under-reliance, loss of SA, automation surprise, inappropriate feedback, and skill degradation, which are closely coupled to each other. Inappropriate automation in human-machine systems may initiate vigilance decrements or reduced operator alertness and, as a consequence, undetected system failures, especially in systems where the human has a supervising role. Out-of-the-loop issues due to operator vigilance are in many cases closely linked to complacency and over-reliance on automation (compare [8, p. 438, 9, p. 544]). According to [9, p. 544], detection and SA problems, as well as skill degradation, can be negative consequences of automation-induced complacency. In [10, p. 192, 11] the authors also identify over-reliance as a negative effect of automation. At the same time, under-reliance on automation as a result of system unreliability (or automation surprises) may also be a shortfall of automation [9, p. 543, 12]. Another important source of errors in human-machine systems is the loss of SA. According to [13], SA is mainly affected by complacency and vigilance, active-passive role switching, and system feedback. Out-of-the-loop issues related to automation surprise are rooted in automation complexity, where the human operator may not be able to understand the system behavior [9, p. 542, 14]. Another automation-induced out-of-the-loop performance issue is skill degradation. In [13] the authors link skill degradation to changed operator vigilance and complacency. In [8, p. 438, 10, p. 195, 11, 15] the authors also name automation-induced skill degradation as a general drawback of automation.

A second aspect associated with automation problems in human-machine systems is unbalanced mental workload. According to [8, p. 438], automation may increase as well as decrease mental workload, which can cause mental over-load as well as mental under-load. In over-load situations, the human operator is unable to cope with the situation; under-load situations can be linked to the out-of-the-loop issues named above. Another workload-related problem is “clumsy automation” [10, p. 193], which affects the operator’s mental workload counterproductively: in low-workload situations, the workload is reduced even further, whereas in high-workload situations the operator’s workload may even be increased.

Based on these theoretical underpinnings, we identified approaches such as human-centered automation (compare [10]), adaptable and adaptive automation (e.g., [16]), and cognitive and cooperative automation (compare [17]) as key design elements for our concept.

4 The Concept of Self-scaling Human-Agent Cooperation

To achieve the aforementioned capabilities, we suggest a human-automation integration concept that features adaptable and adaptive human-agent cooperation. The concept is based on dual-mode cognitive automation [17], which has already been applied successfully to ground-based UAV guidance [18] and to multi-UAV guidance from aboard a helicopter cockpit (compare [19, 20]).

4.1 Framework for Automation Design in Complex Human-Machine Systems

According to [17, 21], the starting point of sophisticated automation in human-machine systems is the definition of a work process (WProc), the associated work objective (WObj), the associated work process output (WPOut), and the affected work object (WO). Furthermore, other work processes, the environment (Env), supplies (Sup), and information (Inf) are considered. After the WProc has been defined, the physical system running it is considered; this system is called a work system (WSys). Within a WSys, in principle, two roles are distinguished: the Worker role and the Tools role [21] (see Fig. 1).

  • Worker: The Worker knows, understands, and pursues the WObj on its own initiative. By definition, a WSys cannot exist without a human Worker.

  • Tools: The Tools receive tasks from the Worker and only perform them when told to do so. Hence, the Worker has a hierarchical relationship to the Tools.

Fig. 1 Work system with Worker and Tools roles according to [21]

In order to describe highly automated work systems involving human-cognitive agent teaming, the authors of [21] suggest a symbolic language built from a small number of building blocks: the human Worker, the Tool, and the Cognitive Agent. Cognitive Agents are entities providing higher cognitive capabilities and can take on the role of a Worker or a Tool. In addition, hierarchical relationship, heterarchical relationship, and team-building symbols are used to describe the relations between these entities (see Fig. 2).

Fig. 2 Symbolic building blocks following [21]
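For illustration only, the following Python sketch models the building blocks and relations described above. All class names, attributes, and the validation rule are our own simplifications and not part of the symbolic language defined in [21].

```python
# Hypothetical sketch of work-system building blocks (human Worker, Tool,
# Cognitive Agent) and the relations between them; names are illustrative.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Tuple


class Role(Enum):
    WORKER = auto()  # knows, understands, and pursues the WObj on its own initiative
    TOOL = auto()    # receives tasks and performs them only when told to do so


class Relation(Enum):
    HIERARCHICAL = auto()   # delegation of tasks from one entity to another
    HETERARCHICAL = auto()  # cooperation on an equal footing
    TEAM = auto()           # team-building towards a common goal


@dataclass
class Entity:
    name: str
    role: Role
    cognitive: bool = False  # True for Cognitive Agents


@dataclass
class WorkSystem:
    work_objective: str
    entities: List[Entity] = field(default_factory=list)
    relations: List[Tuple[str, str, Relation]] = field(default_factory=list)

    def add_relation(self, src: Entity, dst: Entity, kind: Relation) -> None:
        self.relations.append((src.name, dst.name, kind))

    def validate(self) -> None:
        # By definition, a work system cannot exist without a human Worker.
        if not any(e.role is Role.WORKER and not e.cognitive for e in self.entities):
            raise ValueError("A work system requires at least one human Worker.")


if __name__ == "__main__":
    pilot = Entity("fighter pilot", Role.WORKER)
    assistant = Entity("assistant system", Role.WORKER, cognitive=True)
    ucav_agent = Entity("UCAV agent", Role.TOOL, cognitive=True)

    wsys = WorkSystem("Perform Fighter-UCAV Mission", [pilot, assistant, ucav_agent])
    wsys.add_relation(pilot, assistant, Relation.HETERARCHICAL)
    wsys.add_relation(pilot, ucav_agent, Relation.HIERARCHICAL)
    wsys.validate()
```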

One approach to facilitate automation in complex work systems is dual-mode cognitive automation as suggested by Onken and Schulte [17]. This concept is supposed to overcome the drawbacks of conventional automation by incorporating Cognitive Agents into the WSys. The second key aspect of this approach concerns the design of human-agent relationships. Dual-mode cognitive automation applies two modes of relationship in the form of (1) a hierarchical and (2) a heterarchical link between the human Worker and the Cognitive Agents.

4.2 Work System Design for Joint Fighter-UCAV Operations

To integrate self-scaling human-agent cooperation into the fighter-UCAV system, we define the WProc “Perform Fighter-UCAV Mission”, the WO “Target”, and the highly abstract WObj “Perform Fighter-UCAV Mission”, as presented in Fig. 3. This WProc is initiated by the WProc “Command & Control Operation” of a superior command and control authority (e.g., an Air Operation Center) and is further influenced by the variables Env (e.g., weather conditions in the target area), Sup (e.g., fuel), and Inf (e.g., status of reconnaissance of the target area). By introducing the human Worker (the fighter pilot), the Tools (the manned aircraft and the UCAVs), and Cognitive Agents, the WProc is then materialized as a work system.

Fig. 3 WProc “Perform Fighter-UCAV Mission” with WO “Target”, WPOut “Attack/RECCE”, and WObj “Perform Fighter-UCAV Mission”

As the conceptual basis of our automation integration concept, we choose the aforementioned dual-mode cognitive automation. For the heterarchical relationship, we incorporate a Cognitive Agent in the form of an assistant system that cooperates with the human Worker. Furthermore, we incorporate Cognitive Agents controlling the unmanned aircraft. The human Worker and the assistant system have a hierarchical relationship towards these agents, even though these UCAV agents are supposed to hold a Worker role. The hierarchical relationship is realized in the form of three delegation modes featuring team-based, intent-based, and task-based guidance of the UCAVs. These modes are used to scale the hierarchical relations of the pilot and the assistant system agent towards the UCAVs, as well as the cooperation among the UCAV agents.

  1. Team-based guidance (Fig. 4a): This delegation mode comprises multiple UCAVs pursuing a highly abstract mission goal as a team, e.g., a coordinated target attack. Therefore, each agent needs the capability to anticipate tasks in consultation with the other team members. Furthermore, the team members need to cooperate effectively within the team to keep the team plan and schedule, but also to cooperate with other UCAVs/UCAV teams. This delegation mode may be applied in high-workload situations, in which the pilot is not able to guide and manage individual UCAVs on an intent- or task-based level.

    Fig. 4 Work system configurations. a Team-based guidance, b Multiple intent-based guidance, c Multiple task-based guidance, and d Mixed team + intent-based guidance

  2. (Multiple) intent-based guidance (Fig. 4b): Here, a single UCAV pursues a mid-level goal, e.g., the observation of a target area. Therefore, the associated agent must be able to autonomously plan and schedule its actions towards this mid-level goal. Although the UCAV pursues an individual task, the associated agent cooperates with other UCAVs/UCAV teams to avoid conflicts with regard to the high-level mission goal. This delegation mode is supposed to be appropriate for a balanced workload.

  3. (Multiple) task-based guidance (Fig. 4c): The unmanned team member receives a low-level task, e.g., taking a recce picture. For this delegation mode, the associated UCAV agent must provide low-level task assignment and execution. This mode is supposed to balance low-workload phases by increasing the operator’s delegation tasks. It may also be used for tasks that require immediate action or an ethically responsible decision (e.g., weapon deployment) that shall not be left to the automation.

Besides these modes, the delegation of the UCAVs could also be applied in a mixed manner (Fig. 4d), where, e.g., a sub-team of UCAVs suppresses the enemy air defense in the target area, while one UCAV reconnoiters the egress route.

The delegation modes relate to a two-dimensional definition of automation. As suggested in [22], we take into account the ability of an entity to take care of itself (self-sufficiency) and its freedom from outside control (self-directedness). In our application, the Cognitive Agents operating the UCAVs are supposed to have a high self-sufficiency (high-level skills), but are restricted in their self-directedness according to the delegation modes team-based, intent-based, and task-based. In contrast, the assistant system (the heterarchical component) has a high self-sufficiency and a high self-directedness. For the definition of the operator-assistant system relationship, we introduce a third dimension of automation, which we call assistance. This dimension provides alerting, supporting, and cooperating functionalities, which enable the assistant system to vary between attention guidance, proposal of task modifications, and task adaptations (due to ethical issues, task adaptations may require human approval).
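The following sketch illustrates, under our own assumptions, how the three delegation modes could constrain the UCAV agents’ self-directedness while their self-sufficiency stays high, and how the assistance dimension could be enumerated. The numeric scales and mappings are purely illustrative and not part of [22] or our implementation.

```python
# Illustrative mapping of delegation modes to automation dimensions; the
# 0..1 scales and concrete values are assumptions for this sketch only.
from enum import Enum


class DelegationMode(Enum):
    TEAM_BASED = "team-based"      # highly abstract mission goal, UCAV team
    INTENT_BASED = "intent-based"  # mid-level goal for a single UCAV
    TASK_BASED = "task-based"      # low-level task, e.g., take a recce picture


class AssistanceMode(Enum):
    ALERTING = "alerting"          # attention guidance
    SUPPORTING = "supporting"      # proposal of task modifications
    COOPERATING = "cooperating"    # task adaptations (may require approval)


# Assumed: the more abstract the delegation, the more self-directedness the
# UCAV agent is granted; self-sufficiency remains high in every mode.
SELF_DIRECTEDNESS = {
    DelegationMode.TEAM_BASED: 0.9,
    DelegationMode.INTENT_BASED: 0.6,
    DelegationMode.TASK_BASED: 0.2,
}
SELF_SUFFICIENCY = 0.9  # high-level skills in every mode


def describe(mode: DelegationMode) -> str:
    return (f"{mode.value} guidance: self-sufficiency={SELF_SUFFICIENCY}, "
            f"self-directedness={SELF_DIRECTEDNESS[mode]}")


if __name__ == "__main__":
    for mode in DelegationMode:
        print(describe(mode))
```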

4.3 Self-scaling Capabilities

To enable our system to work effectively in joint fighter-UCAV operations, our automation integration concept is supposed to actively balance the operator’s mental state and to address automation-induced negative effects. Therefore, both the heterarchical and the hierarchical relationship between the human Worker and the Cognitive Agents feature scalability. The hierarchical relationship (the guidance of the UCAVs) is scaled according to the proposed delegation modes, whereas the heterarchical relationship (operator-assistant system cooperation) is scaled by adapting the mode of assistance and the Human-Machine Interface (HMI).

As the assistant system is able to initiate system adaptations autonomously, it represents the key component of our automation concept. To trigger system adaptations, we intend to pursue an operationalization of the pilot’s mental workload similar to the approach in [23]. This operationalization of mental workload considers the operator’s tasks, activities, and related mental resource demands, as well as observable variations in behavior patterns.
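A highly simplified sketch of such a workload operationalization is given below. It merely combines assumed task-related resource demands with an observed deviation from expected behavior patterns; the task names, weights, and aggregation rule are hypothetical and do not reproduce the method of [23].

```python
# Simplified, hypothetical workload operationalization: aggregate assumed
# task demands and combine them with observed behavioral deviation.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class ActiveTask:
    name: str
    resource_demand: float  # assumed demand on mental resources, 0..1


def estimate_workload(tasks: Iterable[ActiveTask],
                      behavior_deviation: float) -> float:
    """Combine task-related demands with observed behavioral deviation.

    behavior_deviation: 0 (behavior as expected) .. 1 (strong deviation),
    e.g., delayed or missing interactions with the HMI.
    """
    demand = min(1.0, sum(t.resource_demand for t in tasks))
    # The 0.7 / 0.3 weighting is an assumption for illustration only.
    return min(1.0, 0.7 * demand + 0.3 * behavior_deviation)


if __name__ == "__main__":
    tasks = [ActiveTask("fly own aircraft", 0.4),
             ActiveTask("monitor threat picture", 0.3),
             ActiveTask("supervise UCAV team", 0.2)]
    print(f"estimated workload: {estimate_workload(tasks, 0.1):.2f}")
```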

Although the assistant system primarily initiates system adaptations (system-initiated adaptation mode), the pilot is able to intervene and to directly assign tasks to the UCAVs in case of faulty system behavior or when such interaction is desired (operator-initiated adaptation mode). This implies that our assistant system allows two modes of cooperation with the human Worker. In case of direct operator task assignment, not only the delegation of the UCAVs is adapted, but also the HMI and the mode of assistance. Nevertheless, the assistant system is still able to propose system adaptations when working in the operator-initiated adaptation mode. If the human operator accepts such a proposal, the assistance, the HMI, and the delegation of the UCAVs are adapted according to the pilot’s mental state and the environmental situation (as in the system-initiated adaptation mode). By monitoring operator inputs and by providing mixed-initiative capabilities, we want to enable the assistant system to resolve conflicts evolving in the operator-initiated adaptation mode interactively with the human operator.
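The sketch below illustrates one possible adaptation logic with the two cooperation modes described above. The workload thresholds, the selection rule, and the handling of a direct operator command are assumptions made for illustration, not the behavior of our assistant system.

```python
# Hypothetical adaptation logic: system-initiated selection of a delegation
# mode from an estimated workload, overridden by operator-initiated commands.
from enum import Enum
from typing import Optional, Tuple


class DelegationMode(Enum):
    TEAM_BASED = "team-based"
    INTENT_BASED = "intent-based"
    TASK_BASED = "task-based"


class AdaptationInitiative(Enum):
    SYSTEM = "system-initiated"
    OPERATOR = "operator-initiated"


def select_delegation(workload: float) -> DelegationMode:
    # High workload -> more automation (team-based); low workload -> more
    # delegation work for the operator (task-based). Thresholds are assumed.
    if workload > 0.7:
        return DelegationMode.TEAM_BASED
    if workload < 0.3:
        return DelegationMode.TASK_BASED
    return DelegationMode.INTENT_BASED


def adapt(workload: float,
          operator_command: Optional[str]) -> Tuple[AdaptationInitiative, DelegationMode]:
    """Return the initiative and delegation mode for the next cycle."""
    if operator_command is not None:
        # A direct task assignment switches to the operator-initiated mode;
        # HMI and assistance mode would be adapted accordingly.
        return AdaptationInitiative.OPERATOR, DelegationMode.TASK_BASED
    return AdaptationInitiative.SYSTEM, select_delegation(workload)


if __name__ == "__main__":
    print(adapt(0.85, None))                    # system keeps team-based guidance
    print(adapt(0.25, "UCAV-2: recce NAI 3"))   # pilot assigns a task directly
```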

4.4 Relations Between Human Worker and Cognitive Agents

In this section, we want to particularize the relations between the Cognitive Agents and the human Worker. We intend to design effective human-agent cooperation by taking account of the four requirements suggested in the joint-activity concept for effective teamwork in [24]: a basic compact (the agreement to work together), maintenance of a common ground, directability, and predictability.

To achieve a basic compact for common-grounding activities, all team members need to understand and accept their role in the work system. In the context of our military application, the Cognitive Agents shall not be able to refuse operator commands, unless ethical reasons do not allow the execution of the mission or of individual subtasks (e.g., the bombing of a target or an air defense position where civilians may be affected).

To maintain a common ground, we intend to use shared knowledge in the form of a blackboard and Cognitive Agents that are enabled to model the human Worker and the other Cognitive Agents. The blackboard is supposed to permanently provide appropriate information on the status, actions, and intentions of all team members. By linking all agents to the blackboard, they are able to observe the other team members. Understanding of this information shall be enabled by incorporating appropriate agent knowledge. The assistant system agent is supposed to pertinently interpret the information on the blackboard and is thereby enabled to manage attention, e.g., by informing the human Worker about conflicts and problems that may negatively influence the mission. In addition, we want to incorporate reasoning capabilities in our agents that take individual goals and the mutual impact of actions into account. Our assistant system is supposed to act in a de-conflicting manner and to enable goal negotiation through mixed-initiative strategies. To reduce coordination costs, we want to build our system on a Multi-Agent System (MAS) that offers sufficient coordination mechanisms.
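A minimal sketch of such a shared blackboard is shown below: every agent posts its status, current action, and intention, and every linked team member can read the entries to maintain a common ground. The data model and entry fields are assumptions for illustration.

```python
# Minimal, hypothetical blackboard: agents post status/action/intention
# entries that all linked team members (including the assistant system) can read.
from dataclasses import dataclass, asdict
from typing import Dict


@dataclass
class AgentEntry:
    status: str      # e.g., "en route", "engaging", "damaged"
    action: str      # currently executed task
    intention: str   # next planned step / pursued goal


class Blackboard:
    def __init__(self) -> None:
        self._entries: Dict[str, AgentEntry] = {}

    def post(self, agent: str, entry: AgentEntry) -> None:
        self._entries[agent] = entry

    def read(self) -> Dict[str, dict]:
        # Every linked agent can observe all other team members via this view.
        return {name: asdict(e) for name, e in self._entries.items()}


if __name__ == "__main__":
    bb = Blackboard()
    bb.post("UCAV-1", AgentEntry("en route", "ingress to target area", "take recce picture"))
    bb.post("UCAV-2", AgentEntry("holding", "loiter at control point", "SEAD on demand"))
    print(bb.read())
```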

Mutual directability shall be assured by applying the aforementioned hierarchical delegation modes; mutual predictability shall be generated with the blackboard, which is accessible to and understandable by each team member.

5 Related Work

In the following, we present selected works that are closely linked to our concept. In the context of multi-UAV guidance, we identified resemblances to the works in [25, 26]. In [25], an assistant system with mixed-initiative planning and execution capabilities was incorporated into a future mountain search and rescue system. Their approach made use of hierarchical delegation modes to enable a co-located operator to guide and manage multiple UAVs on adjustable levels of automation. In [26], a manned fighter aircraft was accompanied by three simulated and one real UCAV in mixed-reality flight tests. In their scenarios, the pilot largely took on a supervisory control role after defining mission goals taken from a “pool” of goals. In their previous work, the authors had recognized the need for team-based task allocation to UCAVs due to operator over-load issues. The self-organized task allocation among the participating UCAVs was realized with a MAS in which four different types of agents cooperated (user, group, specialist planning, and UAV agents). Although both concepts bear a resemblance to our approach, they lack the self-scaling capabilities to autonomously adapt to the operator’s mental state and the environment. Related works with respect to adaptability and/or adjustability are numerous (e.g., [16, 27]). However, such works often apply system- and user-initiated dynamic function allocation, whereas the scaling in our approach aims at adapting human-agent and agent-agent cooperation.

Besides the abovementioned workload operationalization method [23], there exist further approaches used to trigger adaptive automation. The authors of [28] correlated mental workload with eye fixation time in combat management scenarios with varying cognitive task load. Further works are based on EEG (electroencephalography) measurements. In [29], a subject-specific discrimination of two workload levels was achieved with artificial neural networks. The cross-subject training and testing of a hierarchical Bayes model in [30] enabled the separation of three workload levels.
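To illustrate the general idea of discriminating two workload levels with an artificial neural network, the following sketch trains a small classifier on synthetic features. It does not reproduce the experimental setups of [29] or [30]; the synthetic “band power” features, the network size, and the use of scikit-learn are assumptions made purely for illustration.

```python
# Hedged illustration: two-class workload discrimination with a small ANN
# on synthetic features that merely stand in for real EEG band powers.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features, e.g., theta and alpha band power per epoch (assumed).
low = rng.normal(loc=[1.0, 2.0], scale=0.4, size=(200, 2))   # low workload
high = rng.normal(loc=[2.0, 1.2], scale=0.4, size=(200, 2))  # high workload
X = np.vstack([low, high])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```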

6 Conclusion

In this article, we propose a human-automation integration concept, which is supposed to enable a single-seat fighter pilot to guide and manage multiple UCAVs from aboard the cockpit in joint fighter-UCAV operations. In a first step, we define the associated work process, in which we incorporate Cognitive Agents, Tools, and the pilot as a human Worker to instantiate a work system. The allocation of Cognitive Agents in the work system is based on the concept of dual-mode cognitive automation. We make use of an assistant system and of multi-UCAV guidance on the basis of three different delegation modes (team-based, intent-based, and task-based). The conceptual core of our approach is the self-scaling human-agent cooperation, which features human-centricity, adaptivity, and cognitive and cooperative automation. Thereby, we intend to balance the operator’s mental state and to address automation-induced negative effects. In general, system adaptations are initiated by the assistant system on the basis of the operator’s mental state and the dynamic battlefield environment. To allow intervention in case of faulty system behavior or when desired, our approach also supports operator-initiated system adaptations.

Although the named capabilities are supposed to allow effective and efficient joint fighter-UCAV operations, we identified some open issues that require further investigation. The first essential point is the choice of appropriate cognitive (multi-)agent software. Another challenge will be the design of the cockpit Human-Machine Interface, which must enable effective interaction between the human operator and the automation across changing system configurations.

To investigate the proposed concept, we intend to implement a software prototype in our single-seat fighter simulator for experimental studies in the mid-term future.