1 Introduction

Operating in a contested mission environment requires comprehensive operational awareness, with the ability to accurately and rapidly perceive and interpret relevant events and circumstances, in order to provide the context, insight and foresight required for effective decision-making. Complex multi-domain operations are of particular concern; while some operational tasks necessarily employ a human component, other tasks can only be accomplished through non-human intelligent entities, acting autonomously within the socio-technical enterprise. At the conceptual level [1], EDGE Operations comprise (Definition 1):

Definition 1:

The coordination of kinetic and non-kinetic means of power in the physical, information, and cognitive domains, to asymmetrically exploit the adversary’s vulnerabilities, and defeat or render irrelevant adversarial capabilities, structures, systems, and will to fight.

EDGE Operations execute at a level of fluidity and flexibility that matches the degree of variation in the external environment, a principle known as requisite variety [2], proven in a broad spectrum of safety-critical systems, missions, and operating environments. EDGE Operations are conducted by EDGE Capabilities, exhibiting the following overarching characteristics:

  1. Emergent: In EDGE Operations, distributed, interdependent complex adaptive systems create emergent effects – effects that are greater than the sum of the individual effects of the input systems and that cannot be unambiguously attributed to individual observed properties.

  2. Dynamic: EDGE Operations are complex, laborious and dangerous endeavors, requiring resolute and determined action under extreme conditions. EDGE Capabilities accomplish missions successfully under exposure to uncertainty, risk, time-criticalities and resource shortages.

  3. Global: Hybrid Cognitive Systems act in multiple operational domains, integrated in planning and synchronized in execution, with the speed, reach and scale needed to gain advantage and accomplish their mission.

  4. Evolutionary: Heterogeneous, self-learning and adaptive behavior, originating in qualitative, structural change within and between complex system components. EDGE capabilities display three main evolutionary characteristics:

     (a) Adaptive – Ability to perceive, understand and manage change under time-, risk- and resource-critical circumstances.

     (b) Exaptive – Radical re-purposing under conditions of stress, driving an evolving, emergent system characterized by qualitative, structural change.

     (c) Learning – Experience from ongoing and completed campaigns is translated into action, reducing the time from discovery to implementation.

A complex system is any system in which the parts and their interactions together produce a specific behavior that an analysis of the constituent parts alone cannot explain. In such systems, cause and effect cannot always be related, and relationships are non-linear: a small change can have a disproportionate impact. In other words, as Aristotle said, ‘the whole is greater than the sum of its parts’. This requires adaptive and versatile principles and concepts for complex multi-domain operations, along with high-performance human, technological and organizational architectures [3]. Operational success is strongly linked to effective interaction and collaboration within and between the physical, information and cognitive dimensions.

Autonomous systems, different organizational cultures, and people with different backgrounds, education and experience must collectively manage and maintain operational availability, versatility and efficiency. In many situations the desired effects cannot be linearly planned and reliably predicted, but must be anticipated to emerge from shaping the Operational Environment (OE) and influencing the agents operating in it.

Several issues concern the use of mission-specific and contextual information and knowledge for judgment, decision, and choice, as well as the information-coupled activities leading to supervisory control of a complex, partly or completely automated process, and the more obvious control of the involved technological systems [4, 5]. This also concerns the degree of autonomy and the automation functions that are crucial for achieving flexible task execution and resource allocation. These relate to all Human-Machine interaction and management concerns required to execute supervisory control at every organizational level, and to ensure rapid, reliable, autonomous response in routine decision situations:

  • Monitoring and feedback functions,

  • Feedforward functions, and

  • Functions enabling learning and adapting over time.
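These three function groups can be illustrated as a minimal supervisory-control loop. The controller class, its gains and its dynamics below are illustrative assumptions for the sketch, not part of the cited frameworks:

```python
# Minimal sketch of the three supervisory-control function groups:
# feedback (monitoring), feedforward (anticipation), and learning
# (adaptation over time). All names and parameters are illustrative.

class SupervisoryController:
    def __init__(self, gain: float = 0.5):
        self.gain = gain          # feedback gain, adapted over time
        self.last_error = 0.0

    def feedback(self, setpoint: float, measured: float) -> float:
        """Monitoring and feedback: correct based on the observed error."""
        self.last_error = setpoint - measured
        return self.gain * self.last_error

    def feedforward(self, predicted_disturbance: float) -> float:
        """Feedforward: compensate a disturbance before it is felt."""
        return -predicted_disturbance

    def learn(self, rate: float = 0.1) -> None:
        """Learning: raise the feedback gain while large errors persist."""
        self.gain += rate * abs(self.last_error)

    def command(self, setpoint: float, measured: float,
                predicted_disturbance: float = 0.0) -> float:
        """One control cycle combining all three function groups."""
        u = self.feedback(setpoint, measured) \
            + self.feedforward(predicted_disturbance)
        self.learn()
        return u
```

The point of the sketch is the division of labor: feedback reacts to what has already happened, feedforward acts on what is anticipated, and learning changes the controller itself over time.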

DARPA’s Mosaic Warfare concept [6] is an ambitious endeavor into Human-Machine capabilities in extensively, sometimes entirely, autonomous operations.

2 Human-Autonomy Systems: A Definition

Previous research on autonomy has largely focused on understanding how different “levels” of automation change the working conditions for human operators [7, 8]. This view largely prevails today, as can be seen in the development of self-driving cars. Future applications of robotics and autonomous capabilities suggest a world where different robotic or software entities are integrated in society, fulfilling many tasks and even taking on responsibility for different managerial tasks. As described later in this chapter, this calls for technologies that are able to autonomously engage with their environment, without continuous human surveillance. In terms of perspectives that can provide some theoretical context for such a future, this can be seen as a case of a socio-technical system. However, while socio-technical aspects of human-autonomy constellations are important, we also need to direct attention to the cognitive aspects of both autonomous agents and human operators and commanders, in order to better grasp the possibilities and limitations of joint human-autonomy systems in terms of performance and the types of tasks that can be supported.

2.1 A Functional Perspective on Intelligent, Autonomous Collaboration

In the case of Multi-Domain Command in the information and cognitive dimensions, we need to understand how a unit consisting of both humans and autonomous agents can reach its goals, and how control, rather than functions, is allocated in the human-machine system. Further, both humans and autonomous agents are bounded in their rationality, although in different ways, which affects how control should be allocated between them depending on context and current goals.

Such a functional view benefits the discussion, which takes place in a hypothetical zone where the exact technical components cannot be described, as they do not yet exist. However, we can describe what a Human-Autonomy Team (HAT) is or should be in terms of what it can do (its functional properties) [9], which is in line with the Cognitive Systems Engineering (CSE) perspective [10].

Below, we elaborate on why a human-autonomy system can be seen as a cognitive system in its own right, and how the CSE approach can be used to better understand the human-autonomy system in different situations and contexts.

Autonomous systems are systems capable of making decisions independently and functioning without human intervention. One example is a Cyber-Physical System (CPS) [11], in which computing and physical processes are intricately woven together, with data from the environment and the actuators being managed by the computer.
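The interweaving of computation and physical process in a CPS can be illustrated with a minimal sense-compute-actuate cycle; the thermostat scenario, its thresholds and its function names below are purely illustrative assumptions:

```python
# Illustrative sense-compute-actuate cycle of a Cyber-Physical System:
# sensed environmental data drives computation, which drives an actuator.
# The thermostat scenario and all values are assumptions for the sketch.

def cps_step(temperature: float, setpoint: float, heater_on: bool) -> bool:
    """One control cycle: decide the actuator state from sensed data."""
    hysteresis = 0.5  # dead band to avoid rapid on/off switching
    if temperature < setpoint - hysteresis:
        return True    # too cold: turn heater on
    if temperature > setpoint + hysteresis:
        return False   # too warm: turn heater off
    return heater_on   # within the dead band: keep the current state
```

Each cycle closes the loop between the physical process (temperature) and the computing element (the decision rule), which is the defining trait of a CPS.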

The concept of a human-autonomy system is integrated with the central premise of the human operator and decision maker as a capability component, operating symbiotically with technological artifacts [12]. Human operators are constantly collecting and building knowledge about themselves, other agents and the operational environment. They apply skills, rules and heuristics to plan and modify their actions based on that knowledge. Every commander and every human and artificial agent must develop a capability for sensemaking to enable comprehensive, detailed system insight, leading to safe and efficient mission accomplishment [13]. A human-autonomy system and its properties are given in Definition 2 below:

Definition 2:

A Human-Autonomy System (HAS) is a system comprising at least one human operator and one adaptive artificial entity, with the capability to autonomously engage with its environment in direct interaction, involvement and/or interdependency with humans and other artificial entities in order to meet a certain mission objective.

Besides deciding and acting on an individual basis, the human and the artificial entity complement each other’s decision-making processes and actions and jointly solve problems. To do so, they must be able to understand complex ideas (relative to the activity), to adapt effectively to the environment, and to combine task-related with social and team-related skills that enable effective and efficient collaboration. This leads to the following corollary to Definition 2:

Corollary 1:

A Human-Autonomy System (HAS) is capable of creating, sustaining and evolving Comprehensive Situational Awareness.

2.2 Complex Adaptive Systems (CAS)

The research literature describes the broader aspects of defense systems in terms of Complex Adaptive Systems (CAS) [5, 14, 15], in the sense that military or crisis management organizations demonstrate CAS properties, and identifies adaptive mechanisms at the levels of adaptive systems, capability development and collective/society, which adjust through learning, evolutionary development and cultural change to fulfill an externally imposed purpose. A CAS exhibits self-learning, emergence, and evolution among the entities of the complex system, and these entities or agents demonstrate heterogeneous behavior. The key characteristics of a complex adaptive system are:

  • The behavior or output cannot be predicted simply by analyzing the parts and inputs of the system.

  • The behavior of the system is emergent and changes with time. The same input and environmental conditions do not always guarantee the same output.

  • The entities or agents of the system are self-learning and change their behavior based on the outcomes of previous experience.
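These characteristics can be illustrated with a toy agent population; the agents, the payoff rule and all parameters below are illustrative assumptions, not a model from the cited literature:

```python
import random

# Toy illustration of the CAS characteristics above: self-learning agents
# whose collective outcome emerges from their interaction history, so the
# same environmental setup need not produce the same result.

class Agent:
    """A self-learning agent that adapts its disposition from outcomes."""
    def __init__(self, rng: random.Random):
        self.rng = rng
        self.p_cooperate = 0.5  # behavioral disposition, adapted over time

    def act(self) -> bool:
        return self.rng.random() < self.p_cooperate

    def learn(self, reward: float, rate: float = 0.1) -> None:
        # Reinforce or weaken cooperation based on the previous outcome.
        self.p_cooperate = min(1.0, max(0.0, self.p_cooperate + rate * reward))

def run(seed: int, n_agents: int = 10, steps: int = 50) -> float:
    """Return the population's mean disposition after `steps` interactions."""
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for _ in range(steps):
        acts = [a.act() for a in agents]
        # Emergent payoff: cooperation pays only if a majority cooperates,
        # so the outcome depends on the population, not on any one agent.
        reward = 1.0 if sum(acts) > n_agents // 2 else -1.0
        for a in agents:
            a.learn(reward)
    return sum(a.p_cooperate for a in agents) / n_agents
```

Different interaction histories drive the population toward different collective equilibria, which cannot be predicted by analyzing any single agent in isolation.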

3 Cognitive Systems and Autonomous Adaptive Agents

The concept of autonomy is important for human-autonomy systems, as they are assumed to have capabilities for performing their tasks independently or interdependently and to have capabilities for reasoning and interaction that are needed for collaboration. The term “autonomy”, however, needs more clarification as it may be used in multiple ways. Autonomy in relation to robotics is sometimes conflated with automation. An autonomous system, then, “performs its actions without human intervention”. It can be fully pre-programmed and may have no choices about its action execution.

AI researchers have imposed requirements on autonomous systems regarding their internal reasoning and decision-making processes [16]. Furthermore, an autonomous system is not necessarily independent; it may allow external influences (e.g. human guidance), as long as it explicitly accepts these influences. This notion is important in the context of HAS, as it combines social and collaborative capabilities in autonomous systems. Lastly, the autonomy of artificial systems, just as in the case of humans, is context dependent. A flying autonomous system, such as a UAV, may be autonomous in the sense that it can operate without guidance during flight, much like a human being, but it will only be autonomous in certain operational contexts and in relation to specific goals. If these conditions change, the system is no longer “autonomous” in any of the perspectives presented above. From this point of view, the idea of a “cognitive system” actually fits the description of what we in common parlance refer to as “autonomous systems”. In reality, no system considered for military use should be truly autonomous: even when tasked to do something that requires autonomy in a specific situation and context, the autonomous unit should only exercise agency within the frames of the task given to it.

3.1 Cognitive Systems

An autonomous unit like a human-autonomy system fits the definition of a “cognitive system”. A cognitive system operates by using knowledge about itself and its environment to plan and modify its actions based on that knowledge [10]. Hollnagel [17] defines a cognitive system as a system that “can modify its pattern of behavior on the basis of past experience in order to achieve specific anti-entropic ends”. This definition fits any organism or system that is to prevail in a dynamic environment.

The conclusion from this is that a human-autonomy system must possess three fundamental capabilities to act as a Cognitive System (CS), defined by Norlander [11] as cornerstones of modern complex cognitive systems science:

  1. A cognitive system is capable of adaptation to the varying conditions of the surrounding environment;

  2. A cognitive system is capable of prediction of how the surrounding environment evolves over time;

  3. A cognitive system is capable of regulation in order to reach an equilibrium that matches the current conditions of the surrounding environment.
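The three capabilities can be sketched as one closed loop; the class, its exponential-smoothing model and its parameter values below are illustrative assumptions, not a definition from the cited sources:

```python
# Minimal sketch of the three CS capabilities: a system that *predicts*
# its environment with an internal model (exponential smoothing),
# *adapts* the model from prediction error, and *regulates* its own
# state toward an equilibrium matching the estimate. All dynamics and
# parameter values are illustrative.

class CognitiveSystem:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha    # adaptation rate
        self.estimate = 0.0   # internal model of the environment
        self.output = 0.0     # regulated internal state

    def predict(self) -> float:
        """Prediction: expected next environmental value."""
        return self.estimate

    def adapt(self, observed: float) -> None:
        """Adaptation: update the model from the prediction error."""
        self.estimate += self.alpha * (observed - self.estimate)

    def regulate(self, gain: float = 0.5) -> float:
        """Regulation: move the state toward the current estimate."""
        self.output += gain * (self.estimate - self.output)
        return self.output

    def step(self, observed: float) -> float:
        """One perceive-adapt-regulate cycle."""
        self.adapt(observed)
        return self.regulate()
```

Under a stable environment the state converges to an equilibrium matching it; when the environment shifts, adaptation pulls the model, and thus the regulated state, toward the new conditions.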

These capabilities are well in line with the properties of Complex Adaptive Systems outlined previously. If we view the role of Human-Autonomy Systems in the context of Multi-Domain Operations, the agents must be able to apply these capabilities in relation to a multitude of organizational entities; human and artificial operators, sensor systems, communication systems, doctrine and networks are all elements of the total operational system. Analogous to the findings of Conant and Ashby [18], the conclusion is that an artificial cognitive system has to be capable of adapting, predicting and regulating at a level at least on par with human decision-making and action, if the two are to complement each other.

The adaptive capability of cognitive command can be understood in the light of the CS definition provided above. Additionally, recent work by Prof. Tom Malone’s research group on Superminds [19] suggests that human and artificial entities can jointly utilize Artificial Intelligence and Hyperconnectivity to form learning loops, constituting the strategic planning and decision-making capabilities of business corporations, government agencies and global organizations. The conceptual structures of Cognitive Systems, Complex Adaptive Systems, Autonomy and Superminds all support the characterization of human-autonomy systems as Autonomous Adaptive Agents (AAAs), enabling the foundation of a principal concept of Cognitive Command based on the supporting concepts below.

3.2 Autonomous Adaptive Agents (AAAs) Executing High-Risk Missions as Part of High-Reliability Organizations

Besides constituting an autonomous intelligent entity, an Autonomous Adaptive Agent is also designed as a collaborator, meaning that when executing its tasks it is able to complement the human decision-making process and task execution. Because of this, AAAs integrated in human-based teams will be perceived more as team members than as a collection of tools.

In most day-to-day operations, operational reliability, availability and high technical performance at the lowest possible cost are the persistent overall objectives, and risk awareness in the organization is often limited. In contrast, more specialized operational domains, e.g. aviation, space, maritime, intensive care, nuclear power and military systems, require extraordinary risk awareness and risk management. These cases can be classified as complex endeavors, and the costs of incidents, attacks and breakdowns are valued not only in economic terms but also in human lives.

Additionally, the concepts of risk and uncertainty are indivisibly unified with trust. Capabilities that employ joint systems of human operators and AAAs must rely on an organization and doctrine that aims to achieve error-free performance and safety in every procedure, every time, all while operating in complex, high-risk or hazardous environments.

Such organizations have been studied extensively and are defined by [13] as High-Reliability Organizations (HROs). HROs comprise predictable and repeatable capabilities and systems that support consistent operations while identifying and preventing potentially catastrophic incidents before they happen.

4 Recommendations: Towards an Essence of Multi-domain Command

A conflict situation within, or with operational reach into, the information and cognitive dimensions can rapidly escalate or change character in fractions of a second, and this requires adequate response times. This is beyond the ability of humans, hence requiring the use of high-performance, automated cognitive capabilities comprised of multiple, distributed human-autonomy systems. Furthermore, without the appropriate distribution of information, and the necessary decision rights to the AAAs that match their required level of autonomy, the decisions and actions needed for success in Multi-Domain Operations (MDO) [20] will not be achieved in a timely manner. Reduction of response times enables losses of Command and Control (C2) capability to be minimized, or restored more quickly if degraded. This would indicate that command approaches that can respond more rapidly to changes in circumstances (e.g., a loss of communications capability or an unforeseen cross-domain system shock) would be more appropriate for operating in a contested operational environment.

In addition to the ability to act in a timely manner to exploit or manage rapidly changing circumstances, the requirement to interact and collaborate in Joint Systems Operations calls for command approaches that:

  1. Utilize multiple paths for information dissemination,

  2. Adapt their interactions to changing circumstances, and

  3. Dynamically delegate decision rights between AAAs and human agents.

Norlander [21] proposes formulating a future-oriented essence of Multi-Domain Command, with equal relevance and applicability to human operators, AAAs, and the HAS they jointly create and operate. The following is a first attempt, with three overarching conceptual mainstays, each with two corollaries:

Make Uncertainty Your Ally

  1) Command in future security and defense operations will be complex, laborious and in many cases mission-critical, requiring unprecedented vigilance, awareness and determination. Decision-makers and operators will frequently encounter uncertainty, risks, time-criticalities and resource shortages.

  2) Operating in a contested mission environment requires Comprehensive Operational Awareness, with the abilities to accurately and rapidly perceive and interpret relevant events and circumstances in order to provide the context, insight and foresight required for effective decision-making, enabling every commander and operator to develop a wide-ranging appreciation of the situation.

Stagnation Equals Defeat

  3) Operational characteristics will be highly dynamic and non-linear; minor events, decisions and actions may have serious and irreversible consequences for the entire mission. Success in future security and defense operations requires extraordinary capabilities to operate in contested operating environments, and to master the Command challenges of complex systems and interdependencies.

  4) Mission success is strongly linked to effective interaction and collaboration within and between different organizational cultures, between people with different backgrounds, education and experience, and non-human autonomous and intelligent systems, and to managing and maintaining operational availability, versatility and efficiency.

Multi-domain Command is Joint Cognitive Systems Command

  5) Complex operations in a socio-technical enterprise require more than just human service providers; some tasks must be accomplished exclusively by non-human intelligent entities. This requires adaptive and versatile principles and concepts for Joint Cognitive Systems Command, along with high-performance human, technological and organizational architectures: cognitive mission architectures.

  6) The turbulent environment in which Multi-Domain Operations play out further stresses the need for Organizational Agility: to be adaptable and resilient without having to change. The goal is to keep internal operations at a level of fluidity and flexibility that matches the degree of turmoil in external environments, a principle known as requisite variety.

5 Summary and Conclusions

Operating in a contested mission environment requires comprehensive situational awareness, with the ability to accurately and rapidly perceive and interpret mission-relevant events and circumstances, in order to provide the context, insight and foresight required for effective decision-making and action. Complex Multi-Domain Operations are of particular concern; while some operational tasks necessarily would employ a human component, other tasks can only be accomplished through non-human intelligent entities, acting autonomously within the socio-technical enterprise.

The Cognitive Systems body of research was utilized to overcome the duality of traditional human-machine research, focusing on better understanding what people actually do with technology rather than what functions belong to the machine and what functions belong to the human. The Complex Adaptive Systems (CAS) body of research contributed the characteristics of self-learning, emergence, and evolution among the entities of the complex system, demonstrating heterogeneous and adaptive behavior. According to the body of research on Autonomous Adaptive Agents (AAAs), an agent is also viewed as a team member, meaning it is able to autonomously complement human decision-making when executing its tasks. Building cognitive systems and capabilities requires a mental shift – striving towards an Agility mindset that permeates security and defense policy, legal and financial frameworks, science and technology agendas, strategy and operations.

Employing the Cognitive Systems, CAS and AAA paradigms for MDO permits the integration of all capability elements into an adaptive distributed system that can achieve a mission safely and efficiently. Based on these studies, and with support from other fields of study, we devised a number of strategy elements as part of an essence of Cognitive Command and Decision-making in Complex Multi-Domain Operations.