1 Introduction

As research in human-robot teaming (HRT) continues to advance, it is important to consider the role of theory in the field. Beyond the diverse contexts and domains in which HRT occurs, variability in robot design, communication mode, and functionality makes it difficult to identify theories that generalize across different human-robot teams. For this reason, many have questioned the relevance of theory in the field of human-robot teaming [1], and some researchers have turned instead to qualitative methods (e.g., [2]) and case-based usability research for direction in developing robots (e.g., [3]). While these approaches are essential for individual applications, they do less to consolidate and build on the field’s knowledge base than theory-based approaches, which can guide subsequent research efforts. The present work begins with an overview of some applications of theory in HRT research and ends with a discussion of the role of theory in HRT and the implications of expanding that role.

2 Background

2.1 The Role of Theory in Research

Theory is a system of knowledge that depicts generalized relationships about how the world works, enabling predictions to be made. In most fields of study, theory provides a framework for organizing and guiding research. It frames observations and links a single study to a broader, common base of knowledge to which other researchers contribute. Such an organization of knowledge also reveals gaps in understanding, which in turn drive future research [4, 5].

Theories exist at different levels of abstraction. General theories are highly abstract and almost unlimited in scope, while middle-range theories explain a less comprehensive set of phenomena. Theories at a lower level of abstraction are generalized statements that account for a more restricted range of empirical observations and have limited application [6]. Newer areas of research, such as human-robot teaming, are likely to have a greater number of lower-level theories. This is especially true because much of the research in HRT has been driven by expediency [7] and by the need to understand the impact of applying human-robot teams to ever more areas of our work and lives.

2.2 The Field of Human-Robot Teaming

There are several research areas related to the field of HRT, the closest being human-robot interaction (HRI). Domains of application for HRI research include the military, healthcare, and manufacturing, among others. For instance, robots have been deployed by the military in bomb disposal as well as search and rescue missions. In healthcare, there are surgical assistant robots with high-definition 3D vision systems and dexterous robotic arms that help with surgeries [8], and service robots that allow caregivers to monitor and communicate with homebound patients [9]. Robots have also been developed to facilitate rehabilitation regimens and provide therapy [10]. Finally, robots have been used in manufacturing because their ability to manipulate materials and objects with great speed and precision boosts productivity [11].

Apart from HRI, other areas related to HRT include automation, human-computer interaction (HCI), psychology, and neuroscience. Human-robot teaming research has benefited from research on automation (e.g., the unintended effects of inappropriate automation use), while HCI research has informed the design of interfaces that facilitate human-robot teaming (e.g., [12]). Meanwhile, cognitive science, neuroscience, systems theory, control theory, and other fields have informed the design of the architectures and mechanisms underlying human-robot teams (e.g., [13]).

2.3 Non-theoretical Development of Human-Robot Teaming

Non-theoretical techniques for developing robots and their interactions with humans are available and in common use. These techniques take a targeted approach, typically informed by observing interactions and applying tried-and-true rules to the design of interfaces. For example, Clarkson and Arkin developed a heuristic evaluation (HE) for HRT, that is, a set of guidelines evaluators can use to identify human factors issues. They based their HE on previous evaluations used in the HCI and computer-supported cooperative work (CSCW) domains, modifying items from these earlier evaluations and adding new items through brainstorming, consultation with subject matter experts, and other informal techniques. They then validated the list by testing its performance in evaluating HRIs. Similarly, Michaud et al. utilized focus groups and usability test scenarios in the development of a homecare tele-assistive mobile robot [14].

2.4 Applications of Theory in Human-Robot Teaming

Theory can be used to optimize or to enhance human-robot teaming. First, to optimize human-robot interactions, theory can be used to understand how humans perceive, think, and act in relevant situations so that robots can be designed in ways that increase the efficiency of interactions while minimizing errors. This encompasses work on the interface for human-robot communication, including studies on the effects of robot appearance and form, input methods and modalities, displays, and adaptive interfaces and displays on human social behavior and cognitive processes.

Second, theory can be used to enhance human-robot teaming by augmenting the abilities of the robot. Specifically, theories of human perception, cognition, and action can be used to identify and implement advanced capabilities that facilitate human-robot teaming. This area relates to the social or teaming aspect of HRT and includes the capabilities and features of robots that specifically enable them to function as teammates. Research in this area is likely to model HRT after human teams, as humans naturally team with other humans; hence, to develop robots that can team, the robot would need to become more human-like. Studies also address research questions such as how to organize a human-robot team for various tasks and missions.

3 Theories Related to Human-Robot Teaming Research

We review two broad areas of HRT research, identify the relevant theories, and discuss the role of theory within each area. The areas pertain to the research and development of (i) human-robot interfaces and (ii) capabilities that help a robot team.

3.1 Theories Related to Research on Human-Robot Interfaces

Several theories used to optimize HRT originate in older fields such as HCI, and their application is modified to address the unique challenges of HRT. Gillan and colleagues identified three areas of particular importance to HRT: situation awareness (SA), spatial cognition and mental maps, and task switching, which relates to executive functioning [15]. SA relates to how well the robot can support the human teammate’s ability to perceive a situation, interpret it, and project a future state. Spatial cognition relates to the human teammate’s ability to ascertain the robot’s orientation and build a mental map of its environment. Task switching is important because the human teammate must perform their own tasks while keeping track of the robot’s state and location, and it becomes especially important when there are multiple robot teammates. A theoretical understanding of how humans switch tasks can help robot designers ensure that the robot calls for the human teammate’s attention at appropriate times.

The modalities utilized in the human-robot communication interface are also an important factor in HRT. Wickens’ Multiple Resources Theory postulates that human attentional capacity can be thought of as multiple “resource pools”. These “pools” loosely correspond to encoding and response modalities, as well as stages of information processing. In a multi-tasking context, performance on tasks undertaken simultaneously will be better if the tasks draw upon different resource pools than if they require resources from the same pool [16]. The theory predicts that human-robot interfaces that enable tasks to be performed through various modalities would be more advantageous than interfaces with limited modalities. This has largely been supported by research, which has found that interacting with robots that have multimodal interfaces can reduce human cognitive load, especially when multiple tasks must be performed concurrently (e.g., [17–21]).
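To make this prediction concrete, the sketch below (a simplified illustration rather than Wickens’ published computational model) scores the expected interference between two concurrent tasks by the overlap in the resource dimensions they occupy; the task profiles and dimension labels are hypothetical.

```python
# Simplified multiple-resources conflict score: tasks that share more resource
# dimensions are predicted to interfere more when performed concurrently.
from typing import Set


def conflict_score(task_a: Set[str], task_b: Set[str]) -> float:
    """Fraction of resource dimensions shared by two tasks (0 = no overlap, 1 = identical demands)."""
    if not task_a or not task_b:
        return 0.0
    return len(task_a & task_b) / len(task_a | task_b)


# Hypothetical task profiles for a human teleoperating a robot.
drive_robot = {"visual", "spatial", "manual"}
voice_status_query = {"auditory", "verbal", "vocal"}
read_map_display = {"visual", "spatial"}

print(conflict_score(drive_robot, voice_status_query))  # low -> tasks time-share well
print(conflict_score(drive_robot, read_map_display))    # high -> expect interference
```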

Such predictions from theory have contributed to the recent focus on multimodal communication in robots, encompassing visual displays, gestures, speech, non-speech audio, and haptics [22]. Including speech and gesture detectors in interfaces can facilitate human-robot teaming, as these modalities are associated with natural language and do not require translational input devices such as a keyboard or mouse. Such studies on robot interfaces with speech and gesture in teaming have motivated work in speech and gesture recognition and classification (e.g., [23]).

Some theories relate specifically to HRI research. The physical form of a robot can influence the nature of the human-robot interaction. Humans may perceive robot behavior more readily and interact more effectively with robots that have a human-like appearance than with more mechanical-looking ones. They may be more inclined to talk to a robot or smile at it if the robot has a human face or appears to understand speech [17]. In fact, research has shown that a robot’s appearance affects the expectations humans have of its capabilities and function [24]. This finding suggests that humans are more likely to team with a robot that resembles a human, and that the more human-like the robot is in appearance, the more likely it is to be accepted by the human as a teammate. However, as robot appearance becomes nearly human, the trend reverses and the robot becomes repulsive because its appearance and functionality are incongruent. This is the “Uncanny Valley” in Mori’s theory, named for the dip in the curve that plots level of acceptance against anthropomorphism.

Mori’s Uncanny Valley theory was proposed directly from empirical research on human-robot interaction. The theory predicts that a robot’s appearance can impact its practical application [25]. For instance, robots tending to trapped victims were perceived as “creepy” rather than reassuring [26]. Drawing on studies related to the Uncanny Valley theory, [27] argued that a robot with a more human-like appearance and behavior would be more acceptable and would interact with people more effectively, so long as the degree of anthropomorphism stops short of the uncanny valley. The quality of HRI, and by extension performance in HRT, can be jeopardized even with the most efficient and easy-to-use robot if the human teammate dislikes, distrusts, doubts, or resents the robot [28]. The Unified Theory of Acceptance and Use of Technology (UTAUT) combines many of the above themes into a comprehensive account of the factors that bear on acceptance. UTAUT identifies four factors that contribute to technology acceptance: effort expectancy, performance expectancy, social influence, and facilitating conditions [29]. The first two refer to traditional HCI considerations: effort expectancy refers to the ease of use or usability of the robotic system, and performance expectancy refers to the benefits of working with the robot. Social influence involves the approval of the human teammate’s peers of their use of the robot teammate in a given situation. Finally, facilitating conditions refers to the extent to which the human teammate believes that the organizational and technological infrastructure is in place to support their use of the robot teammate.
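As a rough illustration of UTAUT’s structure (not a validated operationalization of the model), the snippet below combines hypothetical construct scores into a single acceptance-intention value; the weights and 1–7 scales are placeholders for quantities that would normally be estimated from survey data.

```python
# Toy UTAUT-style illustration: intention to use a robot teammate as a weighted
# combination of the four constructs named in the text. Scores and weights are
# hypothetical placeholders, not fitted parameters.
def acceptance_intention(effort_expectancy: float,
                         performance_expectancy: float,
                         social_influence: float,
                         facilitating_conditions: float,
                         weights=(0.25, 0.35, 0.2, 0.2)) -> float:
    scores = (effort_expectancy, performance_expectancy,
              social_influence, facilitating_conditions)
    return sum(w * s for w, s in zip(weights, scores))


# Example: a robot that is easy to use and clearly useful, but with weak
# organizational support, yields only a moderate intention score.
print(acceptance_intention(6.0, 6.5, 5.0, 3.0))
```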

3.2 Theories Related to Research on Characteristics that Help Robots Team

Human-robot teaming research departs from HCI and HRI research in that HRT seeks to develop robots with which humans can team. This necessarily entails a human teammate collaborating with the robot to work towards a common goal; in such a situation, the robot is less a tool and more a partner or teammate. Much of the research in this area has drawn upon the factors affecting human-to-human relationships and human teams, on the premise that humans are more likely to team with robots if the robots possess the characteristics that allow humans to team with other humans. These characteristics encompass (i) attributes of the robot that directly facilitate teaming, or (ii) factors that promote emergent characteristics that contribute to teaming.

Attributes that Directly Facilitate Teaming. There is a line of HRT research on the social-cognitive mechanisms and processes required to design robots that can team. Several studies have proposed that robot teammates need to possess theory of mind (ToM), the human attribute that allows humans to cooperate and team with one another (e.g., [30–34]). ToM involves inferring others’ mental states (i.e., thoughts, beliefs, intents) from their behaviors such as speech, actions, facial expressions, and gestures [35]. The cognitive mechanisms implicated in ToM relate to Simulation Theory, which posits that humans infer another person’s mental states by thinking as if they were that person, i.e., by simulating the other’s actions and the stimuli they experience in their own minds, using their own cognitive mechanisms, to predict what the other is thinking [36]. Together with ToM, Simulation Theory has given HRT researchers some direction regarding the social-cognitive mechanisms that robots should possess. For example, a robot should be able to discern where its human teammate’s attention is directed by following the teammate’s eye gaze, a notion that has been investigated in a number of studies (e.g., [37–40]).
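A minimal sketch of this gaze-following capability is shown below, assuming the robot’s perception stack already provides the teammate’s head position, a gaze direction vector, and candidate object locations (all values here are hypothetical); it simply selects the object whose direction from the head best aligns with the gaze ray.

```python
# Minimal gaze-target inference: pick the object whose bearing from the head
# makes the smallest angle with the estimated gaze direction.
import math


def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))


def attended_object(head_pos, gaze_dir, objects):
    """Return the name of the object best aligned with the gaze ray."""
    best_name, best_angle = None, float("inf")
    for name, pos in objects.items():
        to_obj = tuple(p - h for p, h in zip(pos, head_pos))
        ang = angle_between(gaze_dir, to_obj)
        if ang < best_angle:
            best_name, best_angle = name, ang
    return best_name


objects = {"toolbox": (1.0, 0.2, 0.0), "door": (-0.5, 2.0, 0.0)}
print(attended_object((0.0, 0.0, 1.6), (1.0, 0.3, -0.8), objects))  # -> "toolbox"
```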

Another human characteristic that supports teaming is the capacity for joint intent. Joint Intention Theory postulates that teammates need a set of shared beliefs so that they can work together towards a common goal. The theory includes the concepts of joint activity and joint commitment: joint activity results from the sharing of specific mental properties, while joint commitment involves prioritizing the common goal above individual goals and holding a mutual belief about the status of that goal [41]. Ideas from the theory have been investigated in HRT research (e.g., [42–45]) and have been implemented in models and frameworks such as STEAM (Shell for Teamwork) and GRATE* [46]. Another line of research related to Joint Intention Theory is the work on shared mental models (SMMs) between humans and agents or robots. These studies include developing a research approach to measure and evaluate SMMs in human-robot teams [47], testing whether an SMM achieved in the planning phase can benefit teamwork in the execution phase [48], understanding how SMMs can inform the design of agents capable of teaming [49], and specifying requirements for a robot’s computational mental model of the task and teammate [50].
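The sketch below illustrates, in a deliberately minimal form, how a joint commitment of the kind described by Joint Intention Theory might be represented; it is not drawn from STEAM or GRATE*, and the field names and status values are assumptions made for illustration.

```python
# Minimal representation of a joint commitment: a shared goal, each teammate's
# private belief about the goal's status, and the theory's obligation to inform
# teammates when one privately believes the goal is no longer active.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class JointCommitment:
    goal: str
    # Belief values assumed here: "active", "achieved", "unachievable", "irrelevant".
    beliefs: Dict[str, str] = field(default_factory=dict)

    def mutual_belief(self) -> bool:
        """True when every teammate holds the same belief about the goal."""
        return len(set(self.beliefs.values())) == 1

    def must_communicate(self, agent: str) -> bool:
        """An agent that privately believes the goal is finished (achieved,
        unachievable, or irrelevant) while mutual belief is lacking should
        notify its teammates."""
        return self.beliefs.get(agent) != "active" and not self.mutual_belief()


jc = JointCommitment(goal="clear debris from corridor",
                     beliefs={"human": "active", "robot": "achieved"})
print(jc.mutual_belief())            # False
print(jc.must_communicate("robot"))  # True -> the robot should inform the human
```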

Factors that Promote Emergent Characteristics that Contribute to Teaming. A substantial amount of HRT research has focused on trust in human-robot teams. In HRT, trust refers to the human’s “attitude that an agent will help achieve an individual’s (the human’s) goals in a situation characterized by uncertainty and vulnerability” [51]. Unlike communication capabilities and computational mental models, trust cannot be “built” into a robot; it is an emergent property of the human-robot relationship. The level of trust the human has in the robot can determine whether or not the human uses and relies on the robot [52]. A meta-analysis of trust in human-robot teams identified the following classes of factors [53]:

  • Human-related

    • self-confidence [54]

    • inclination to trust [55]

    • expertise [56]

    • familiarity and understanding of robot functioning [57]

  • Robot-related

    • robot reliability, predictability [57]

    • proximity [58]

    • robot personality [59]

    • anthropomorphism

  • Environmental factors

    • culture [60]

Uncertainty Reduction Theory, which postulates that humans act to reduce uncertainty in their interactions, accounts for a few of these factors. For instance, the theory is consistent with the factor of familiarity and understanding of robot functioning: trust in the robot is more likely when the human understands how the robot works, i.e., when the robot’s mechanisms and algorithms are transparent to the human. Research on the effects of robot transparency on trust indicates that transparency is related to perceived predictability [61, 62], another factor that impacts trust. Robot reliability and predictability denote a high degree of consistency in robot performance, which minimizes the uncertainty experienced by the human. Hence, the theory appears to be supported by the results of the meta-analysis.
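One way to see the link between reliability, predictability, and uncertainty reduction is the toy estimator below, which treats trust as the expected reliability of the robot under a Beta-Bernoulli model and the human’s remaining uncertainty as the posterior variance; it is an illustration of the idea, not a validated trust model.

```python
# Toy reliability estimator: consistent robot performance shifts the expected
# reliability (a stand-in for calibrated trust) upward while shrinking the
# posterior variance (a stand-in for the human's remaining uncertainty).
class ReliabilityEstimate:
    def __init__(self, successes: float = 1.0, failures: float = 1.0):
        self.a = successes  # Beta prior pseudo-counts
        self.b = failures

    def observe(self, success: bool) -> None:
        if success:
            self.a += 1
        else:
            self.b += 1

    @property
    def trust(self) -> float:
        """Expected reliability of the robot."""
        return self.a / (self.a + self.b)

    @property
    def uncertainty(self) -> float:
        """Posterior variance; decreases as evidence accumulates."""
        n = self.a + self.b
        return (self.a * self.b) / (n * n * (n + 1))


est = ReliabilityEstimate()
for outcome in [True, True, True, False, True, True]:
    est.observe(outcome)
print(round(est.trust, 2), round(est.uncertainty, 4))
```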

4 Conclusion

The review in the current paper indicates that theory remains an important part of HRT research. In the areas reviewed, there are theories at a middle level of abstraction that can still inform the direction of research and explain certain observations. Findings from HRT research can, in turn, be informative for theory building: as researchers reverse engineer human capabilities in robots, they should be able to test and contribute to theories of human social behavior and cognitive processes.