Abstract
Purpose of Review
Current real-world interaction between humans and robots is extremely limited. We present challenges that, if addressed, will enable humans and robots to collaborate fluently.
Recent Findings
Humans and robots have unique advantages best leveraged in Human-Robot Teams. However, human and robot collaboration is challenging, and creating algorithmic advances to support teaming requires careful consideration. Prior research on Human-Robot Interaction, Multi-Agent Robotics, and Human-Centered Artificial Intelligence is often limited in scope or application due to the unique challenges of combining humans and robots into teams. Identifying the key challenges that apply to a broad range of Human-Robot Teaming applications allows for focused and collaborative progress toward a world where humans and robots can work together in every layer of society.
Summary
To realize the potential of Human-Robot Teaming while avoiding potential societal harm, several key challenges must be addressed: (1) Communication, (2) Modeling Human Behavior, (3) Longer-Term Interactions, (4) Scalability, (5) Safety, (6) Privacy, (7) Ethics, (8) Metrics and Benchmarking, and (9) Human Social and Psychological Wellbeing.
Introduction
Robotics research has made incredible strides in recent years, providing benefits across a wide variety of applications, such as manufacturing, search-and-rescue, and healthcare. However, we lack seamless human-robot team interactions similar to those that appeared in science fiction decades ago, with robots such as R2-D2 or C-3PO working with Luke in Star Wars or Rosey helping her family in The Jetsons. Today, most industries utilizing robots keep them in caged setups or human-free environments to avoid accidents. The few appearances of human collaborative robots (i.e., “cobots”) are in manufacturing [1,2,3], healthcare [4, 5], search and rescue [6], and military [7]. However, even such “collaboration” is highly predefined and substantially constraining (e.g., robots will stop or slow down when humans are in the vicinity), limiting the impact of such technologies [8]. These systems maintain high levels of autonomy [9,10,11] and can perform rigid, predefined behaviors, enabling them to assist humans but not to team with them effectively.
Teaming has been profoundly significant in human history, allowing humans to build at remarkable speed and scale, and ultimately spearheading technological development and cultural growth. As the field of robotics has reached a level of maturity, we are now at a critical point where we can enhance the collaboration between humans and robots, namely human-robot teaming (HRT), which can bring us into a new technological age. HRT will be crucial in increasing efficiency in production lines [8], reducing workload for healthcare professionals by creating healthcare robot aides [4], and saving lives through rapid and coordinated disaster response. However, HRT also brings certain risks, such as manipulation of models for personal gain and misapplication of robots outside their trained context, leading to harm to humans, damage to property, or privacy violations. Effective HRTs require robots and humans to understand and support each other, develop and maintain shared mental models, and generate and dynamically adapt long-term collaboration plans, all while ensuring the safety and privacy of humans and jointly considering the ethical implications of the robot’s actions on different users.
In this paper, we carefully define nine grand challenges to guide the research community toward successful HRT while avoiding potential pitfalls. The challenges are the following: (1) Communication, (2) Modeling Human Behavior, (3) Longer-Term Interactions, (4) Scalability, (5) Safety, (6) Privacy, (7) Ethics, (8) Metrics and Benchmarking, and (9) Human Social and Psychological Wellbeing (Fig. 1).
Communication
Information sharing is crucial for team cooperation and achieving shared goals [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. In human-robot collaboration, effective communication depends on the robot’s autonomy level and the human’s supervisory role [18, 30, 31]. At lower levels of automation, communication in HRTs is necessary for passing task information, while at higher levels, it increases situational awareness. Building a successful HRT requires a sense of partnership where robots work jointly with humans rather than merely following commands [18]. Such an interpretation of HRT requires social dexterity and understanding from both the human and the robot, where they need to reason about their counterparts’ intentions, beliefs, and goals to take appropriate actions at the right time. Such social dexterity can be achieved through communication.
There is extensive literature studying communication in successful human teams [12, 13, 32,33,34,35], but developing effective communication frameworks for HRT remains an open challenge. Although some prior works have explored leveraging communication strategies from human-human teams for human-robot and robot-robot teams [14, 32, 36, 37], core challenges in communication still need to be resolved, including communication modality (i.e., how to communicate), communication frequency (i.e., when to communicate), and communication content (i.e., what to communicate) in HRTs.
Communication Modality
For successful, smooth, and efficient cooperation and collaboration, HRTs need clear communication that is maintained throughout interactions to synchronize goals, task states, and actions [38].
Researchers have studied various forms of communication in HRT, such as two-way dialogue [39], natural language [40], multi-modal (i.e., including gesture, gaze, etc.) communication [18, 41], and visual messages [18]. However, these approaches can increase mental workload [42] and pose challenges to situational awareness (see the “Communication Frequency” section) [30, 43]. To address these issues, discrete and sparse communication channels that preserve the human interpretability of shared information may be a promising direction for high-quality decision-making [44].
In specific domains (e.g., military, underwater, and road signs), humans use gestures or visual interfaces to communicate; however, these require pre-training and knowledge of domain-specific message-spaces. For the general public, natural language appears to be more intuitive [45], but it poses challenges due to its ambiguity, colloquialisms, and context-dependent use [46, 47]. Nonverbal communication, such as hand gestures and visual interfaces (i.e., multi-modal), together with language, can be effective in human-robot interaction and coordination [48, 49].
Communication Frequency
To optimize communication frequency, it is important to consider its impact on workload, situational awareness, and decision-making. Too many messages can be overwhelming and reduce situational awareness, resulting in low-quality decision-making [50,51,52]. Conversely, too few messages can lead to low situational awareness, insufficient knowledge of the world state, and thus, reduced teamwork effectiveness and performance [53, 54].
Previous research recommends sending messages at fixed intervals to create a steady stream of anticipatory information among team members [32, 35]. However, other studies propose event-triggered communication to enhance communication efficiency [55,56,57,58]. Efficient communication in HRTs requires an effective balance between the amount of information conveyed in each message and the frequency of messages sent and received during a task.
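The event-triggered approach described above can be illustrated with a minimal sketch: an agent broadcasts an update only when its true state has drifted from what teammates last received by more than a tolerated error. The function name, state representation, and threshold below are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

def event_triggered_send(local_state, teammate_estimate, threshold=0.5):
    """Event-triggered policy: send an update only when the teammate's
    estimate of our state has drifted beyond a tolerated error."""
    divergence = np.linalg.norm(local_state - teammate_estimate)
    return divergence > threshold

# Example: a robot tracks how stale its last broadcast is and only
# communicates when the divergence grows too large.
last_broadcast = np.array([0.0, 0.0])       # what teammates believe
for state in [np.array([0.1, 0.0]),
              np.array([0.3, 0.2]),
              np.array([0.9, 0.7])]:
    if event_triggered_send(state, last_broadcast):
        last_broadcast = state              # communicate: refresh shared belief
```

Raising the threshold trades lower communication frequency against staler teammate beliefs, which is exactly the workload/situational-awareness balance discussed above.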
Communication Content
In this section, we discuss communication content, i.e., what to send, in HRTs. Both humans and robots need to effectively communicate their world-state, action-intentions, and objectives to their collaborator [59]. HRTs can benefit from such information to establish true human-robot collaboration and shoulder-to-shoulder teamwork (rather than simply following commands), including self- and world-assessment for mutual support, communicating back to support joint activity, and negotiating labor division and task allocation [18].
Sharing state observations has been the central communication technique in prior works [15, 16, 60,61,62,63,64,65], while communicating action-decisions allows for strategic decision-making through theory of mind (ToM) and cognitive hierarchy [17, 66,67,68]. Recently, sharing experiences (i.e., state, action, and task reward trajectories) has also been proposed [69], but with larger teams, such experience-sharing mechanisms can impose high communication and computation overhead (see the “Scalability” section). An effective communication mechanism should efficiently summarize behaviors and include messages rich in essential information for decision-making, while avoiding unnecessary information [59, 70, 71].
Modeling Human Behavior
Teamwork is best achieved when team members understand one another [72, 73]. Prior works in HRT have shown that shared mental models among humans and robots positively correlate with team performance [74,75,76,77]. This section addresses the challenges associated with modeling the goals and capabilities of human teammates.
Modeling user behavior may involve learning the user’s objective [78,79,80,81,82] or learning the user’s policy [83,84,85] to predict how the user will respond in different situations. Prior works have explored model-based and data-driven approaches for modeling human behavior [71, 82, 86,87,88,89,90]. Here, we highlight critical challenges for robots modeling human behavior, namely: Complexity and Suboptimality.
Complexity
The ability to decipher another person’s mental state is known as Theory of Mind (ToM) [91]. Robots with a ToM capability could understand how people behave. Thus, augmenting robot policies with reasoning about human behavior can enhance robot assistance and collaboration with humans. Prior HRT research uses various techniques to model human behavior across different levels of reasoning [92]: first-order models infer human goals or intents solely from human behavior, whereas second-order models learn how humans make inferences about a robot’s objectives [93].
Modeling human behavior poses a significant challenge for robots due to the complex nature of human behaviors. Several internal and external factors often influence human behaviors, including trust in robots [87], stress levels [94], physical capability [95], engagement [96], sleep deprivation and caffeine or alcohol intake [97]. Current computational models of human behavior only explore a subset of these factors at once and often rely on simplistic assumptions about latent dynamics in human behaviors [87, 96, 98]. Hence, we need additional work exploring multi-faceted approaches that can incorporate various interaction effects to model human behavior.
The difficulty in modeling human behavior also increases with higher orders of robotic reasoning. As robots become complex, understanding the robot’s objective will become essential for humans to collaborate effectively. Thus, robots should consider how users understand the robot’s objectives (second-order models) to choose more predictable plans [99] or disambiguate their intentions via communication [100, 101]. The development of second-order models of human behaviors is still nascent [102], and recent works assume that humans and robots in HRT share the same goal or objective [59] which may not be true [103]. Thus, we need further work exploring how to tie in first-order and second-order models of human behavior to enable fluent collaboration.
Suboptimality
Several human modeling works assume that humans exhibit rational behavior, i.e., they choose actions with probability roughly proportional to the reward those actions achieve under their intent or reward function [104,105,106]. However, humans deviate from rational behavior due to cognitive biases, time pressure, or limited processing capabilities. Accounting for such suboptimality can improve human behavior modeling. A few recent works explore incorporating such inconsistencies into human behavior modeling [85, 86, 89, 107, 108]. These approaches mainly address human suboptimality in simple domains over short time horizons, while ideally, robots should model humans for longer interactions (see the “Longer-Term Interactions” section). To overcome this challenge, we require more sophisticated models of human decision-making from other disciplines, such as psychology, cognitive science [109], and behavioral economics [110], for modeling humans in HRT.
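A common way to relax the perfect-rationality assumption is a Boltzmann (softmax) noisy-rationality model, where action probabilities grow with expected reward but worse actions retain nonzero probability. The sketch below is a generic illustration of this idea; the function name and rationality parameter are our own, not from the cited works.

```python
import numpy as np

def boltzmann_policy(q_values, rationality=1.0):
    """Noisily rational human model: action probabilities proportional
    to exp(rationality * Q). rationality -> infinity recovers a perfectly
    rational human; rationality -> 0 yields uniformly random behavior."""
    logits = rationality * np.asarray(q_values, dtype=float)
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# A suboptimal human still picks the worse action with nonzero probability.
q = [1.0, 2.0]                          # action 1 is objectively better
low_rationality = boltzmann_policy(q, rationality=0.5)   # closer to uniform
high_rationality = boltzmann_policy(q, rationality=5.0)  # near-deterministic
```

Fitting the rationality parameter to observed behavior is one simple way a robot can calibrate how much weight to place on a teammate's apparent choices.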
Moreover, modeling suboptimality at the team level remains under-explored in HRT. Suboptimal behaviors in HRT can arise not only from individual agent behaviors but also from the interaction of various entities. For instance, misunderstanding between humans and robots can lead to task redundancy [111], and robot suboptimality can lower human trust and willingness to coordinate with robots, reducing team efficiency [112, 113]. Hence, we need models that account for team dynamics when modeling suboptimality in HRT.
Longer-Term Interactions
Teamwork between humans and robots is not completed in a single moment but rather develops over time [114,115,116] and can last for a variable duration. Furthermore, these teaming interactions may repeat, resulting in an interaction that could last weeks, months, or even years, effectively turning the interaction into a lifelong deployment [117,118,119]. Across the various domains in which HRT will be beneficial, there will be dynamic components of the environment, requiring a robot to intelligently and continuously reason over streams of information [120]. Furthermore, as an interaction proceeds, it is important for the robot to understand a situation by considering both past and current information, as well as to predict the future to create a collaboration plan.
We must enable robots to reason effectively in such longer-term interactions, adapting their behavior to new situations and personalizing to ever-learning users.
In the past, robots have been deployed long-term within applications that require relatively limited interaction and have been shown to provide benefits across cardiac rehabilitation [121], robot therapy for autism [117, 118, 122], and education [123, 124]. However, as we shift to the rich interactions required in HRT, longer-term collaboration is especially challenging as it can involve providing robots with the ability to (1) dynamically learn new concepts and adapt learned behaviors to accomplish objectives and (2) collaborate with a human whose behavior may change over time. Furthermore, evaluating algorithms in these longer-term contexts can prove difficult as these studies require substantial resources, and interactions can vary widely across users [125]. Augmenting robots with the ability to learn and adapt to new contexts and behaviors, understand human behavior, and personalize their actions will enable them to support lifelong HRT in unstructured and dynamic settings.
Continuous Task Learning and Adaptation
To effectively team with humans in long-term interactions, robots must be able to learn new behaviors and adapt current behaviors to new situations [126,127,128,129]. There has been much progress toward the goal of facilitating speedy task learning [130,131,132]. Approaches include creating task-agnostic world or model representations [131,132,133,134,135], development of models that can support continual learning [136, 137], and techniques that minimize the forgetting of previously acquired knowledge by these models [138,139,140]. Other work has studied the learning of sub-skills to allow for reasoning over how to adapt a current set of sub-skills in a new context [141,142,143].
However, these works have not been extended to HRT scenarios, a domain where robots may need to simultaneously team with humans and learn new behaviors, all while a dynamic scenario is evolving. In HRT, human teammates will need to teach robots or correct existing robot behaviors online so that the robot can perform duties essential to the teaming interaction. Addressing this challenge will require creating new paradigms to facilitate human-in-the-loop robot learning [144], and developing techniques so that (1) human teammates can quickly teach/correct robot behaviors [145] and (2) robots are able to update models with minimal exploration (without prolonged training or excessive environment interaction).
Accounting for a Changing Human Teammate
A unique challenge in HRT is that effective reasoning over context requires the robot to understand its human teammate (e.g., a teammate’s intent, latent characteristics, current state, and future behavior), not only addressing the state of the world. The fields of Human-Robot Interaction and Human-Computer Interaction have utilized an understanding of human behavior to personalize robot decision-making (approaches discussed in the “Modeling Human Behavior” section), resulting in benefits across education [146, 147], healthcare [118, 148], and domestic applications [149].
However, many of these models only reason well over behaviors that are well-represented within a dataset. Common assumptions are that the human will maintain a static modality throughout an interaction [150], that humans are rational [78, 151], or that humans have an advanced understanding of the task at hand. In a long-term interaction, such assumptions will be violated at some point, rendering these models unsuitable for HRT. We need to provide robots with the flexibility to understand human teammates in more complex, “in-the-wild” [152] long-term interactions.
Evaluating Longer-Term Interaction
A long-term teaming interaction may last a variable duration and may repeat, resulting in an interaction spanning months or even years. Conducting studies looking into such repeated interactions within an HRT scenario can prove difficult, as robot systems are not currently robust enough for such long-term deployments [153], and interactions can vary widely over time. Some works have begun to deploy robots within homes for longer periods [154], but the interaction between the robot and user is limited and, as such, does not fit our definition of teaming. Thus, it is critically important to begin conducting HRT studies at longer scales of interaction so that research questions for future work can be clearly identified. Furthermore, at these longer scales of interaction, it is important to pivot from episodic measures of teaming, such as minimizing workload or maximizing performance, to longer-term measures that may provide overarching benefits [115].
Scalability
Scalable HRT requires modeling a larger number of diverse team members [155], accounting for changes in constraints or resources [156], and supporting diverse computation methods [157, 158]. The challenges of scaling human teams [159,160,161] and multi-robot teams [162, 163] apply to HRTs for coordination [164,165,166] and collaboration [167] but do not account for the added heterogeneity.
Modeling of Large Heterogeneous Teams
In HRT, modeling, training, and information sharing are challenging due to the diversity of humans and robots.
When modeling heterogeneous teams, one-size-fits-all models [168] or multi-modal approaches [169] are used to account for stochasticity [170], preferences [85], and capabilities [171, 172].
Training HRTs at large scale is hazardous due to robots’ proximity to humans; current scalable training approaches [173] leverage curriculum learning in simulation [174] with fine-tuning at larger scales [175] or calibration with online learning [176]. Training methods for large-scale robot teams [22, 168, 177,178,179,180] may become infeasible in HRT [181] due to the credit assignment problem [182].
As HRT scales, communication overhead [183] and limited large-scale communication [184] become relevant, requiring human [185] or robot [168] supervisors for efficient coordination.
Future work may include developing stochastic, type-independent models for coordination and communication in HRT.
Robustness to Different Conditions
HRTs should be robust enough to handle changes in constraints, distances, and available resources.
A versatile HRT could handle different constraints (e.g., temporal, spatial, motion control) [186,187,188] but current training methods to learn new constraints (e.g., curriculum learning [189], zero-shot transfer [190], multi-task learning [191]) may lead to catastrophic forgetting [192].
Changes in environment scale and distances must be accounted for [193, 194], as units take time to traverse them [195]. HRT algorithms should consider the scalability of the map size and the different failures that may occur.
Dynamic changes in team composition (e.g., breakdown of robots, reassignment of humans) and resource availability can lead to different policies in cooperative planning [196,197,198,199].
Further research is needed to account for human stochasticity in scaling problem domains and to assess the feasibility of transferring methods from multi-robot teams to HRTs, as multi-robot domains are both easier and safer to explore than HRT systems.
Architecture of Solution
Industry 4.0 and IoT have shifted robot decision-making toward a decentralized model [200, 201], with challenges in training, communication, and application.
Robot coordination can be centralized, while humans are inherently decentralized [202]. Scaling optimal central planners is intractable [156, 165, 203], and remote control of robots may lead to communication problems [27, 194, 204,205,206,207,208].
Decentralized HRT is possible through cloud robotics and edge computing [209], and crowdsourcing can be used for training [210,211,212,213]; however, communication in large decentralized systems remains an open challenge [214, 215].
Semi-centralized models leverage a central high-level supervisor with low-level decision-making in sub-teams [27], becoming robust to communication interruptions despite some sub-optimality [216, 217].
Safety
As human-robot collaboration increases, the chance of safety hazards also increases, making safety one of the most important factors of HRT [218]. In industrial settings, where human workers team with strong robots [1,2,3], collisions can result in serious injuries [219], while domestic robots’ proximity to end-users and the broad population they team with poses special challenges in ensuring safety [147]. However, the issue of safety extends beyond these examples and is prevalent across all HRT applications, e.g., assistive driving [220], rescue operations [221], agriculture [222] and supporting astronauts [223].
Multiple safety guarantee frameworks have been proposed in prior work. Control Barrier Functions [224, 225] encode safe and dangerous states and can maintain the system within safe states. Reachability analysis [226, 227] and minimally invasive safe control [228] override the policy to avoid unsafe regions when the agent is on the safe-unsafe boundary. Researchers also constrain Markov Decision Processes such that certain unsafe states and actions cannot be visited [229]. Other safety assurance methods include regulating the control energy, velocity, and force [230,231,232] when humans and robots are in close proximity.
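The barrier-function idea above can be made concrete with a minimal one-dimensional sketch, assuming simple integrator dynamics x' = x + u·dt and a barrier h(x) = x_unsafe − x; the discrete-time condition h(x_next) ≥ (1 − α)·h(x) bounds how fast the system may approach the unsafe boundary. All names and dynamics here are our own illustrative choices, not the formulations of the cited works.

```python
def cbf_safety_filter(x, u_nominal, dt=0.1, alpha=0.5, x_unsafe=1.0):
    """Minimal 1-D control barrier function sketch for x' = x + u*dt.
    Barrier h(x) = x_unsafe - x (safe while h >= 0). Enforcing the
    discrete CBF condition h(x + u*dt) >= (1 - alpha) * h(x) gives
    u <= alpha * h / dt; we clip the nominal control accordingly."""
    h = x_unsafe - x
    u_max = alpha * h / dt
    return min(u_nominal, u_max)
```

The filter is "minimally invasive" in the sense discussed above: far from the boundary the nominal control passes through unchanged, and it is only overridden near the unsafe region.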
Safety is a key bottleneck in achieving effective HRT, as measures such as stopping the robot when humans are close significantly impact teaming fluency [8] and more sophisticated safety modules that are compatible with complex environments and different human partners are required. Two major challenges on the road to effective safety for HRT are (1) adaptation and personalization and (2) human understanding of robotic safety.
Adaptation and Personalization
The International Organization for Standardization (ISO) has provided safe-robot-behavior guidelines for industrial robots [233, 234] and general human-robot collaboration [235] to stop, reduce speed, or limit the applied force when humans enter a robot’s safety region. However, for dynamic and unknown environments, the definition of the safety region itself must be adapted to account for the environment and the task, making the enforcement of ISO standards ambiguous. Most aforementioned approaches only work with fixed unsafe regions, rendering them unsuitable for HRT. Online re-planning methods for dynamic environments [236, 237] are often impractical due to high computation requirements. Brown and Niekum [238] reason conservatively about unknown space but require a large number of user queries about trajectory rankings, similar to [239]. One possible direction for safety in dynamic and unknown environments is to develop interfaces and approaches that allow end-users or domain experts to intuitively specify and adjust the safe/unsafe regions as needed.
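The ISO-style stop/slow behavior described above amounts to a simple speed-and-separation rule, sketched below. The distance thresholds and maximum speed are illustrative placeholders, not values taken from the standards.

```python
def scaled_speed(distance_to_human, stop_dist=0.5, slow_dist=1.5, v_max=1.0):
    """Speed-and-separation sketch: stop inside stop_dist, scale speed
    linearly between stop_dist and slow_dist, and run at full speed
    beyond slow_dist (all thresholds illustrative, in meters)."""
    if distance_to_human <= stop_dist:
        return 0.0
    if distance_to_human >= slow_dist:
        return v_max
    return v_max * (distance_to_human - stop_dist) / (slow_dist - stop_dist)
```

Letting end-users adjust `stop_dist` and `slow_dist` is one concrete form the user-specified safe regions proposed above could take.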
Lasota et al. [240] define two types of robotic safety: physical safety and psychological safety. The previous paragraph focuses on physical safety (human safety and environmental safety). Psychological safety (i.e., subjective safety) is equally essential, as a perceived safe system is key to maximizing team performance. For instance, an experienced worker may regard working closely with a robot as safe and productive. In contrast, a new user who has just unboxed a robot may prefer to keep a distance from it [241]. Enabling users to define preferred safe states and thresholds could also address this challenge. Demonstration-based techniques provide a promising direction to empower end-users to specify their safety boundaries [242]. The robot could also create informative queries to ask human teammates about uncertain states.
Once robots can ensure safety in complex environments and fit different teammates’ needs, the acceptance of HRT systems could significantly increase in various risk-averse applications.
Human Understanding of Robotic Safety
Understanding the robot’s limitations and potential hazards is crucial for humans making decisions in HRT. For example, an elderly-care robot may not be equipped with depth sensing and could fall down stairs. If the human partner knows the robot’s capabilities, they can decide to deploy the robot only on the ground floor. Prior work has explored explaining to humans after a robot fails [207, 243], but few works have considered the best way to inform humans about robots’ limits and potential failures and to proactively prevent unsafe cases from happening. Huang et al. [101] present closely related work in which an autonomous vehicle informs the user about its policy such that the human understands possible safety concerns.
More research on how to best inform the human partner about the robot’s limits could grant the human more confidence to collaborate with the robot. As humans acquire full safety knowledge of robotic teammates, the HRT will become safer, more robust, and more seamless.
Privacy
Social robots are expected to become prevalent in highly privacy-sensitive domains, such as industrial floors [244], healthcare [5, 245, 246], assistive therapy [247], schools [248, 249], homes [250, 251], and workplaces [244, 252]. As these robots become prevalent in day-to-day environments, humans and robots will share workspaces, participate in conversations, and collaborate on tasks, while robots actively manage and utilize sensory information [253, 254]. Such information can include audio and video recordings, personal information, and even biometric data. However, little is known about how robots discern the sensitivity of this information, which has major implications for human-robot trust [254]. Mishandling sensitive information can lead to great harm in government applications (e.g., through leakage of classified information), healthcare (e.g., HIPAA), citizen security and wellbeing (e.g., Illinois BIPA, EU GDPR [255]), and any application involving sensitive populations (e.g., minors, prisoners). We identify two key privacy challenges in HRT: (1) Personalization and Privacy and (2) Following Domain-Specific Regulations.
Personalization and Privacy: Opposing Objectives
Effective personalization requires detailed records of user interactions with the robot and an understanding of their habits and lifestyle to uncover user needs, preferences, and expectations [256, 257]. Downstream, such personalization can enhance a user’s trust and anthropomorphism within the HRT [256, 258]. However, the question remains: can we have personal, trustworthy, and reliable robots without giving away personal data and without compromising users’ privacy?
Some end-users simply rely on the privacy policies and terms and conditions released with a robot by its manufacturer, while more recently, researchers have developed privacy controllers for human-robot interactions to improve privacy awareness and trustworthiness [254]. Creating transparency via Explainable AI techniques (further discussed in the “Transparency to Minimize Overreliance” section) can also help build privacy awareness and support a trustworthy, private relationship in HRT.
Following Domain-Specific Regulations
Robots interacting with humans will capture, store, and transmit information to improve their ability to reason effectively. However, this information can also be misused or mishandled. There has been a string of legislation to avoid such misuse, striving to improve the overall quality of life for citizens of the world. For example, in the United States, the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA) have been passed to protect the transference of sensitive information that can be easily misused. However, such policies do not yet apply to robots, and such misuses have already occurred, both for virtual agents such as Alexa [259] and for robots such as the iRobot Roomba [260].
Collecting data across users can improve the accuracy of data-based techniques but requires sending private information to a centralized server, which may violate laws or user-specific criteria. Federated techniques attempt to address this issue by only sending gradient information back to centralized servers, keeping the benefits of crowdsourcing data while maintaining privacy [261, 262]. Other work in differential privacy [263, 264] and homomorphic encryption [265] adds noise to or encrypts the data to protect individual user information directly. However, even with these techniques, users are not given the ability to control the information sent to the centralized server.
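The differential-privacy idea referenced above can be sketched with the classic Laplace mechanism: clip each user's contribution so one user can shift an aggregate by only a bounded amount (the sensitivity), then add Laplace noise scaled to sensitivity/ε. This is a generic textbook sketch, not the method of the cited works; the function name and bounds are our own.

```python
import numpy as np

def laplace_private_mean(values, epsilon=1.0, lo=0.0, hi=1.0):
    """Differentially private mean via the Laplace mechanism (sketch).
    Clipping each value to [lo, hi] bounds any single user's influence
    on the mean to (hi - lo) / n (the sensitivity); adding Laplace noise
    with scale sensitivity / epsilon yields epsilon-DP."""
    values = np.clip(np.asarray(values, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise
```

Smaller ε gives stronger privacy at the cost of noisier aggregates, which is precisely the utility/privacy trade-off that personalization in HRT must negotiate.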
It is important for a robot to understand its context (e.g., whether it is working with a child in an educational setting or in a hospital) and to use that context to control the transference of information in accordance with current legislation and user-specified preferences. Furthermore, where possible, data should be encrypted or anonymized, and transparent procedures should be in place for data collection, use, and storage to minimize data leakage and misuse.
Ethics
The challenges associated with ethical decision-making in HRT include identifying the responsibilities of system designers, incorporating transparency to improve robot trustworthiness, and the importance of designing robots that promote diversity, equity, and inclusion.
Ethical Decision-Making and Responsibility for Decisions
Ethical decision-making plays a crucial role in HRT, as robots may need to make quick, life-altering decisions [266, 267]. The uncertainty in the information available to robots and the design of algorithmic frameworks that account for ethical issues pose challenges for decision-making in HRT [268, 269].
Challenges in this area go beyond the well-known “Trolley Problem” [270, 271] which does not assume any uncertainty of actions. With robots being involved in emergency services, such as the redistribution of critical medical supplies during COVID-19 or triage care, it is the ethical responsibility of HRT researchers to consider the accountability of each actor [272,273,274,275,276].
The lack of clarity regarding the ethical and legal responsibilities of actors in HRT, as well as the possibility that robots can become autonomous moral agents, further complicates matters [277,278,279]. In such cases, it becomes challenging to decide who is responsible for a failure when robots and humans work together [280, 281].
Transparency to Minimize Overreliance
Successful HRT depends on humans’ trust and willingness to collaborate. However, adopting a solely user-centric approach to building trust in AI and robots risks creating “dark patterns,” i.e., it may improve user trust without ensuring the system is trustworthy [282]. Developing trustworthy robots that empower humans to make informed decisions is crucial. Explainable AI (xAI) aims to address this trustworthiness gap, but developing appropriate explanations that cater to different stakeholders’ expertise and functional roles remains a challenge [282, 283].
Despite these challenges, there is positive evidence highlighting the critical role of transparency in AI decision-making for establishing human trust in human-AI systems [284]. Recently, Paleja et al. demonstrated the effectiveness of user-readable decision trees in increasing situational awareness [150]. However, this increased situational awareness comes at the cost of significant cognitive load, making it impractical for rapid decision-making. Additionally, Miller describes the pitfalls of operationalizing xAI without incorporating relational information about the operator, task, and environment, counter-indicating a one-size-fits-all approach to xAI [285].
Miller describes a lifecycle approach to transparency and trust, building upon prior work in human-human teams [286]. That work found that high-performing teams with high levels of a priori and a posteriori transparency (i.e., displaced transparency) can maintain a high level of trust with very little in-the-moment transparency. Displaced transparency facilitates trust across each of the three tiers defined by [287]: affective, analogic, and analytic. This research indicates the need for AI development approaches that provide explainability accessible to a wide variety of stakeholders while mitigating unfair bias and ensuring the safety and privacy of the individuals involved.
Diversity, Equity and Inclusion
Diversity, equity, and inclusion (DEI) promote fairness and equal opportunity, leading to a more creative workforce that enhances innovation and problem-solving. DEI in HRT should lead to robots that support and enhance diversity rather than perpetuate existing inequalities.
Widespread adoption of automation with human characteristics impacts how people perceive other people. Design choices for voice personal assistants (VPAs) such as Siri, Alexa, and Cortana [288], which default to a female voice, can strengthen gender stereotypes [289]. When designing robots that will interact with all members of society, designers must ensure that the body types, voices, and appearances of these robots do not reinforce negative stereotypes. The effects of widespread usage of these automated systems need to be explored further.
Prior work in HRT has shown that the acceptance of robots depends on many factors, including previous experience or familiarity with robots and technologies, robot predictability, the transparency of the robot’s policy, and the human’s sense of control and trust [290,291,292]. Moreover, research in psychology has shown that people with different cultural backgrounds and personalities have different proxemic preferences [293].
As such, robot designers should either include a personalization module in the system or cater to a specific target population, tailoring the hardware and software design accordingly; in either case, care must be taken to avoid inherent biases and harmful grouping of people.
Metrics and Benchmarking
As the field of HRT continues to expand, measuring interaction quality and success is becoming increasingly important [294,295,296,297]. It is critical to develop reliable metrics to assess the performance of teams and the human experience within these teams [298, 299]. Metrics help quantify and evaluate (1) the performance of the team and (2) the human experience of being part of the team. Performance metrics can include task metrics such as time to complete tasks, operation time, concurrent activity, and accuracy, as well as physiological measures such as heart rate and skin conductance to estimate the current state of the interaction.
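As a toy illustration of one such task metric (a simplification of our own, not a standard definition from the cited works), the fraction of concurrent activity can be computed from logged activity intervals of each teammate:

```python
def concurrent_activity(human_intervals, robot_intervals, task_duration):
    """Fraction of the task during which human and robot act simultaneously.

    Intervals are (start, end) tuples in seconds; a simple per-second
    sweep is used for clarity rather than efficiency.
    """
    def active(t, intervals):
        return any(start <= t < end for start, end in intervals)

    overlap = sum(
        1 for t in range(int(task_duration))
        if active(t, human_intervals) and active(t, robot_intervals)
    )
    return overlap / task_duration
```

Related fluency measures, such as human or robot idle time, can be derived from the same interval logs by counting seconds in which only one teammate is active.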
It is equally crucial to measure the human experience within HRTs, including safety [300, 301], trust [302], workload [303, 304], and acceptance [305]. Measuring these factors over long periods of time and scaling them to multiple humans and robots present significant challenges (see the “Longer-Term Interactions” and “Scalability” sections). This section focuses specifically on challenges in measuring human factors, correlating metrics with team performance, and benchmarking. Overcoming these challenges and improving the metrics will facilitate the development of more effective HRTs that can tackle increasingly complex tasks, contributing to the progress and evolution of the field.
Selecting Metrics
The human-robot interaction community has combined methodologies from psychology, automation, and human factors [301] to quantitatively assess the usability, user experience, and accessibility of the team [306].
Robots, especially social robots, may influence group dynamics when they are active participants and may impact people in the group differently [307]. Applying metrics designed for one-on-one interactions to group settings can miss the group-level social dynamics analyzed in social psychology [308].
Measuring Shared Mental Models and Situational Awareness
Researchers have adapted methods for measuring shared mental models in human-only teams to human-robot teams. These include similarity [309], perceived mutual understanding [310], and situational awareness. However, a robotic teammate cannot easily express its beliefs about its human teammates or the world [311]. Situational awareness metrics (e.g., SAGAT and SART [312]) can be used to directly compare mental models between robots and humans [313]. However, again, these measures capture only the human’s perception of the robot and cannot equivalently measure the robot’s level of situational awareness of humans.
Developing ways to measure shared mental models between humans and robots can lead to a better understanding of team fluency, and creating a standardized methodology can help researchers compare results across task domains. Recent trends in explainable AI (xAI) have explored the issue of black-box models and aim to create systems in which the belief overlap between humans and robots can be measured more easily [314].
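Similarity-based measures such as [309] typically elicit relatedness ratings over pairs of task concepts from each teammate and compare the resulting rating profiles. A minimal sketch of that comparison, assuming each teammate's mental model is represented as a dictionary of concept-pair ratings (our own simplified representation), is:

```python
import math

def mental_model_similarity(ratings_a, ratings_b):
    """Cosine similarity between two teammates' concept-relatedness ratings.

    Each argument maps a concept pair, e.g. ("obstacle", "replan"), to a
    numeric relatedness rating; only pairs rated by both are compared.
    """
    shared = sorted(set(ratings_a) & set(ratings_b))
    a = [ratings_a[p] for p in shared]
    b = [ratings_b[p] for p in shared]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```

For a robot teammate, the ratings could in principle be extracted from an interpretable model of its beliefs, which is precisely what the xAI work discussed above would enable.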
Benchmarking
Benchmarking HRTs allows researchers to quantitatively compare novel approaches with previous ones. Great advances in reinforcement learning [315], computer vision [316], and natural language processing [317] have leveraged competitions and benchmarks to advance the state of the art. However, similar competitions and benchmarks are scarce for evaluating HRTs due to the diversity of tasks and the difficulty of setting up physical experimental testbeds. Robotics competitions serve as an intermediate way to measure performance metrics but commonly do not measure safety or the human-factor aspects of teaming [318].
Recently, simulated cooperative human-agent environments, such as Overcooked [319, 320], Minecraft [321], and Roblox [322], have been used to evaluate the performance of artificial collaborative agents. However, it is not clear that algorithms and methods evaluated in simulated environments or Wizard-of-Oz studies [323, 324] transfer to the real world. Additionally, while physical constraints may be present in simulated environments, aspects such as perceived safety, communication, and physical workload may not transfer between simulated and real environments.
Creating common benchmarks beyond limited assembly tasks [325, 326] has the potential to accelerate progress in designing effective human-robot teams.
Human Social and Psychological Wellbeing
The previous sections focus on the performance of HRTs. However, considering the scale and ubiquity of future HRT applications, we must also consider the social and psychological implications of HRT, as HRT aims to alleviate the physical and mental burden on humans [327]. In this section, we highlight two challenges in ensuring humans’ social and psychological wellbeing in HRTs: robot sociability and human replacement by robots. Tackling these challenges will go a long way toward ensuring that HRT achieves a net positive for society.
Sociability of Robots
HRT can be applied to many applications that require understanding social cues and conventions. For example, a robot receptionist meeting customers can be modeled as an ad hoc HRT: the robot greets the visitor, asks about their visit, and leads the way, during which the robot’s facial expression, eye contact, and appearance all contribute to the success of the HRT [328]. The requirement for robot sociability is further amplified in multi-robot, multi-human team settings, where robots must be capable of understanding human social dynamics to contribute effectively to the team [329]. Despite the importance of sociability, most prior work focuses on the social navigation of robots, an oversimplified version of rich, complex human social behavior [330,331,332,333,334]. More research is needed on empowering robots to understand human social behaviors and equipping them with strategies for various social occasions.
Further, robots must adhere to social norms while collaborating in HRTs. When robots need assistance from human teammates, they must assess when, whom, and how to ask for help [88, 335, 336]. Inappropriate interruptions negatively impact task performance, the user’s social perception of the robot, and the user’s willingness to collaborate in the future [337]. Hence, there is a growing need to develop robots that reason about social and contextual cues in real time when collaborating with humans.
Robot Replacement of Humans
One of the most profound social impacts HRT may cause is the replacement of human jobs by robots [338, 339]. Robot teammates offer multiple benefits, including higher durability, stronger physical capabilities, and arguably better robustness. However, replacing human-only teams with HRTs could have significant social impacts both on the humans replaced by robots and on the humans who team up with robots after the replacement.
It is hypothesized that when robots take over high-risk, physically demanding, repetitive, or tedious jobs, humans gain the flexibility to pursue creative and novel work with fewer physical limitations. However, more work is needed to verify this hypothesis; blindly adopting HRT may result in significant psychological, ethical, and social concerns [340,341,342].
Further, humans teaming with robots after such replacement will require additional training, since they need to understand the robot’s limits, as discussed in the “Safety” section. Mariah et al. [339] showed that humans prefer human partners over robot teammates due to lower perceived team fluency and rapport with robots. Therefore, it is essential to explore the psychological and social consequences HRT may bring before its large-scale deployment.
Conclusion
As technology progresses, effective human-robot collaboration is becoming more feasible. xAI can improve shared mental models and enhance implicit communication, safety, and ethical decision-making. Near-term advancements in personalized algorithms can aid in modeling human behavior, supporting long-term interaction, and improving robot acceptance. Establishing benchmarks and metrics will help assess HRT factors. Longer-term challenges include scaling to larger teams, preserving privacy, and accounting for robot sociability.
HRT has the potential to benefit a multitude of fields and applications. In manufacturing, collaborative robots can work closely with humans, improving the efficiency, productivity, and safety of production lines. In healthcare, assistive robots can aid professionals in patient and elderly care, reducing the burden on healthcare workers and enabling better patient care. Assistive driving systems can create a safer and more ergonomic driving experience for humans. HRT can help us achieve a new level of efficiency and productivity, allowing us to tackle some of the most pressing issues facing our world today.
To realize this vision of HRT, the challenges outlined in this paper must be addressed; doing so will pave the way for a new era of human-robot collaboration, in which robots and humans work together seamlessly to accomplish tasks that were once impossible. We hope this paper will inspire further research and development in this exciting field, and we look forward to the day when human-robot teaming is a reality in all our lives.
References
Krüger J, Lien TK, Verl A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009;58(2):628–646.
Will Knight. 2013. Smart robots can now work right next to auto workers. MIT Technology Review 17.
Liu C, Tomizuka M. Algorithmic safety measures for intelligent industrial co-robots. 2016 IEEE International conference on robotics and automation (ICRA), IEEE; pp 3095–3102; 2016.
2019. Diligent robotics collects $3m seed funding, launches autonomous robot assistants for hospitals. https://www.mobihealthnews.com/news/north-america/diligent-robotics-collects-3m-seed-funding-launches-autonomous-robot-assistants.
Iroju O, Ojerinde OA, Ikono R. 2017. State of the art: a study of human-robot interaction in healthcare.
Nourbakhsh IR, Sycara K, Koes M, Yong M, Lewis M, Burion S. Human-robot teaming for search and rescue. IEEE Pervasive Comput. 2005;4(1):72–79.
Giachetti RE, Marcelli V, Cifuentes J, Rojas JA. An agent-based simulation model of human-robot team performance in military environments. Syst. Eng. 2013;16(1):15–28.
Sanneman L, Fourie C, Shah JA, et al. The state of industrial robotics: emerging technologies, challenges, and key research directions. Foundations and Trends® in Robotics 2021;8(3):225–306.
Endsley MR, Kaber DB. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 1999;42(3):462–92.
Huang H-M, Pavek K, Novak B, Albus JS, Messina E. 2005. A framework for autonomy levels for unmanned systems (alfus).
Parasuraman R, Sheridan TB, Wickens CD. A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybernet Part A Syst Humans: Publication IEEE Syst Man Cybernet Soc 2000;30(3):286–97.
Salas E, Dickinson TL, Converse SA, Tannenbaum SI. 1992. Toward an understanding of team performance and training.
MacMillan J, Entin EE, Serfaty D. 2004. Communication overhead. The hidden cost of team cognition.
Seraj E. Embodied team intelligence in multi-robot systems. AAMAS, pp 1869–1871; 2022.
Das A, Gervet T, Romoff J, Batra D, Parikh D, Rabbat M, Pineau J. Tarmac: targeted multi-agent communication. International conference on machine learning, PMLR. pp 1538–1546; 2019.
Seraj E, Wang Z, Paleja R, Martin D, Sklar M, Patel A, Gombolay M. Learning efficient diverse communication for cooperative heterogeneous teaming. Proceedings of the 21st international conference on autonomous agents and multiagent systems, pp 1173–1182; 2022a.
Konan SG, Seraj E, Gombolay M. Iterated reasoning with mutual information in cooperative and byzantine decentralized teaming. International conference on learning representations; 2022.
Hoffman G, Breazeal C. Collaboration in human-robot teams. AIAA 1st intelligent systems technical conference, p 6434; 2004.
Akyildiz IF, Kasimoglu IH. Wireless sensor and actor networks: research challenges. Ad Hoc Netw 2004;2(4):351–367.
Seraj E, Gombolay M. Coordinated control of uavs for human-centered active sensing of wildfires. 2020 American control conference (ACC), IEEE, p 1845–1852; 2020.
Liu M, Gong H, Wen Y, Chen G, Cao J. The last minute: efficient data evacuation strategy for sensor networks in post-disaster applications. 2011 Proceedings IEEE INFOCOM, IEEE, pp 291–295; 2011.
Seraj E, Silva A, Gombolay M. Multi-uav planning for cooperative wildfire coverage and tracking with quality-of-service guarantees. Auton Agent Multi-Agent Syst 2022b;36(2):39.
Li M, Liu Y, Chen L. Nonthreshold-based event detection for 3d environment monitoring in sensor networks. IEEE Trans Knowl Data Eng 2008;20(12):1699–1711.
Pham HX, La HM, Feil-Seifer D, Deans M. A distributed control framework for a team of unmanned aerial vehicles for dynamic wildfire tracking. 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE; p 6648–6653; 2017.
Bays MJ, Wettergren TA. A solution to the service agent transport problem. 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE; p 6443–6450; 2015.
Mozaffari M, Saad W, Bennis M, Debbah Mérouane. Efficient deployment of multiple unmanned aerial vehicles for optimal wireless coverage. IEEE Commun Lett 2016;20(8):1647–1650.
Seraj E, Azimi V, Abdallah C, Hutchinson S, Gombolay M. Adaptive leader-follower control for multi-robot teams with uncertain network structure. 2021 American control conference (ACC), IEEE; p 1088–1094; 2021a.
Ahmadzadeh A, Buchman G, Cheng P, Jadbabaie A, Keller J, Kumar V, Pappas G. Cooperative control of uavs for search and coverage. Proceedings of the AUVSI conference on unmanned systems, vol 2. Citeseer; 2006.
Xia F, Tian Yu-Chu, Li Y, Sun Y. Wireless sensor/actuator network design for mobile control applications. Sensors 2007;7(10):2157–2173.
Gao F, Cummings ML, Solovey ET. Modeling teamwork in supervisory control of multiple robots. IEEE Trans Hum-Mach Syst 2014;44(4):441–453.
Sheridan TB, Verplank WL. 1978. Human and computer control of undersea teleoperators. Technical report Massachusetts Inst of Tech Cambridge Man-Machine Systems Lab.
Butchibabu A, Sparano-Huiban C, Sonenberg L, Shah J. Implicit coordination strategies for effective team communication. Hum Factors 2016;58(4):595–610.
Mathieu JE, Heffner TS, Goodwin GF, Salas E, Cannon-Bowers JA. The influence of shared mental models on team process and performance. J Appl Psychol 2000;85(2):273.
Taylor H. 2007. The effects of interpersonal communication style on task performance and well being. PhD thesis University of Buckingham.
Butchibabu A. 2016. Anticipatory communication strategies for human robot team coordination. PhD thesis Massachusetts Institute of Technology.
Reed KB, Peshkin MA. Physical collaboration of human-human and human-robot teams. IEEE Trans Haptics 2008;1(2):108–120.
Williams T, Briggs P, Scheutz M. Covert robot-robot communication: human perceptions and implications for human-robot interaction. J Hum-Robot Interact 2015;4(2):24–49.
Laengle T, Hoeniger T, Zhu L. Cooperation in human-robot-teams. ISIE’97 Proceeding of the IEEE international symposium on industrial electronics, IEEE; p 1297–1301; 1997.
Jones H, Rock S. Dialogue-based human-robot interaction for space construction teams. Proceedings, IEEE aerospace conference, vol 7, IEEE;p 7–7; 2002.
Perzanowski D, Schultz AC, Adams W, Marsh E, Bugajska M. Building a multimodal human-robot interface. IEEE Intell Syst 2001;16(1):16–21.
Rickel J, Lewis Johnson W. 2000. Task-oriented collaboration with embodied agents in virtual worlds. Embodied conversational agents, 95–122.
Boyer M, Cummings ML, Spence LB, Solovey ET. Investigating mental workload changes in a long duration supervisory control task. Interact Comput 2015;27(5):512–520.
Nehme CE, Crandall JW, Cummings ML. Using discrete-event simulation to model situational awareness of unmanned-vehicle operators. Virginia modeling, analysis and simulation center capstone conference. Norfolk: Citeseer; 2008.
Karten S, Tucker M, Li H, Kailas S, Lewis M, Sycara K. 2023. Interpretable learned emergent communication for human-agent teams. IEEE Transactions on Cognitive and Developmental Systems.
Torrance MC. 1994. Natural communication with robots. PhD thesis Massachusetts Institute of Technology.
Khurana D, Koli A, Khatter K, Singh S. 2022. Natural language processing: state of the art, current trends and challenges. Multimed Tools Appl. 1–32.
Hameed IA. Using natural language processing (nlp) for designing socially intelligent robots. 2016 Joint IEEE international conference on development and learning and epigenetic robotics (ICDL-EpiRob), IEEE; p 268–269; 2016.
Bonarini A. Communication in human-robot interaction. Current Robotics Reports 2020;1: 279–285.
Saunderson S, Nejat G. How robots influence humans: a survey of nonverbal communication in social human-robot interaction. Int J Soc Robot 2019;11:575–608.
Gawron VJ. 2008. Human performance, workload, and situational awareness measures handbook. CRC Press.
Durso FT, Gronlund SD. Situation awareness. Handb Appl Cogn 1999;283:314.
Wickens CD. Situation awareness and workload in aviation. Curr Direct Psychol Sci 2002;11(4): 128–133.
Stout RJ, Cannon-Bowers JA, Salas E. The role of shared mental models in developing team situational awareness: implications for training. Situational awareness, Routledge; p 287–318; 2017a.
Bui H, Chau VS, Degl’Innocenti M, Leone L, Vicentini F. The resilient organisation: a meta-analysis of the effect of communication on team diversity and team performance. Appl Psychol 2019;68(4):621–657.
Shibata K, Jimbo T, Matsubara T. Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport. 2021 IEEE international conference on robotics and automation (ICRA), IEEE; p 8671–8677 ; 2021.
Tasooji TK, Marquez HJ. Cooperative localization in mobile robots using event-triggered mechanism theory and experiments. IEEE Trans Autom Sci Eng 2021;19(4):3246–3258.
Zuo R, Li Y, Lv M, Dong Z. 2022. Learning-based distributed containment control for hfv swarms under event-triggered communication. IEEE Transactions on Aerospace and Electronic Systems.
Nowzari C, Cortes J, Pappas GJ. 2017. Event-triggered communication and control for multi-agent average consensus. Cooperative Control of Multi-Agent Systems: Theory and Applications, 177–207.
Huang SH, Held D, Abbeel P, Dragan AD. Enabling robots to communicate their objectives. Auton Robot 2019;43:309–326.
Foerster J, Assael IA, Freitas ND, Whiteson S. 2016. Learning to communicate with deep multi-agent reinforcement learning. Advances in neural information processing systems, 29.
Sukhbaatar S, Fergus R, et al. 2016. Learning multiagent communication with backpropagation. Advances in neural information processing systems, 29.
Singh A, Jain T, Sukhbaatar S. Learning when to communicate at scale in multiagent cooperative and competitive tasks. International conference on learning representations; 2018.
Seraj E, Wang Z, Paleja R, Sklar M, Patel A, Gombolay M. 2021b. Heterogeneous graph attention networks for learning diverse communication. 2108.09568.
Niu Y, Paleja RR, Gombolay MC. Multi-agent graph-attention communication and teaming. AAMAS, p 964–973; 2021.
Kim D, Moon S, Hostallero D, Kang WJ, Lee T, Son K, Yi Y. 2019. Learning to schedule communication in multi-agent reinforcement learning. arXiv:1902.01554.
Frith C, Frith U. Theory of mind. Curr Biol 2005;15(17):R644–R645.
Goodie AS, Doshi P, Young DL. Levels of theory-of-mind reasoning in competitive games. J Behav Decis Mak 2012;25(1):95–108.
Jaques N, Lazaridou A, Hughes E, Gulcehre C, Ortega P, Strouse DJ, Leibo JZ, Freitas ND. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. International conference on machine learning, PMLR; p 3040–3049; 2019.
Christianos F, Schäfer L, Albrecht S. Shared experience actor-critic for multi-agent reinforcement learning. Adv Neural Inf Process Syst 2020;33:10707–10717.
Amir D, Amir O. Highlights: summarizing agent behavior to people. Proceedings of the 17th international conference on autonomous agents and multiagent systems, p 1168–1176; 2018.
Chen K, Fong J, Soh H. 2022a. Mirror: differentiable deep social projection for assistive human-robot communication. arXiv preprint arXiv:2203.02877.
Groom V, Nass C. Can robots be teammates?: benchmarks in human–robot teams. Interact Stud Soc Behav Commun Biol Artif Syst 2007;8(3):483–500. ISSN 1572-0373, 1572-0381. http://www.jbe-platform.com/content/journals/10.1075/is.8.3.10gro.
Shah J, Breazeal C. An empirical analysis of team coordination behaviors and action planning with application to human–robot teaming. Hum Factors 2010;52(2):234–245.
Nikolaidis S, Shah J. 2012. Human-robot teaming using shared mental models. ACM/IEEE HRI.
Gervits F, Fong TW, Scheutz M. Shared mental models to support distributed human-robot teaming in space. 2018 aiaa space and astronautics forum and exposition, p 5340; 2018.
Demir M, McNeese NJ, Cooke NJ. Understanding human-robot teams in light of all-human teams: aspects of team interaction and shared cognition. Int J Hum-Comput Stud 2020;140:102436.
Gervits F, Thurston D, Thielstrom R, Fong T, Pham Q, Scheutz M. Toward genuine robot teammates: improving human-robot team performance using robot shared mental models. Proceedings of the 19th international conference on autonomous agents and multiagent systems, p 429–437; 2020.
Abbeel P, Ng A. 2004. Apprenticeship learning via inverse reinforcement learning. Proceedings of the twenty-first international conference on machine learning.
Bradley Knox W, Stone P. Framing reinforcement learning from human reward: reward positivity, temporal discounting, episodicity, and performance. Artif Intell 2015;225:24–50.
Warnell G, Waytowich N, Lawhern V, Stone P. Deep tamer: interactive agent shaping in high-dimensional state spaces. Proceedings of the AAAI conference on artificial intelligence, volume 32; 2018.
Leike J, Krueger D, Everitt T, Martic M, Maini V, Legg S. 2018. Scalable agent alignment via reward modeling: a research direction. arXiv:1811.07871.
Reddy S, Dragan A, Levine S, Legg S, Leike J. Learning human objectives by evaluating hypothetical behavior. International conference on machine learning, PMLR; p 8020–8029; 2020.
Ross Stéphane, Gordon G, Bagnell D. A reduction of imitation learning and structured prediction to no-regret online learning. Proceedings of the fourteenth international conference on artificial intelligence and statistics, JMLR; p 627–635. Workshop and Conference Proceedings; 2011.
Florence P, Lynch C, Zeng A, Ramirez OA, Wahid A, Downs L, Wong A, Lee J, Mordatch I, Tompson J. Implicit behavioral cloning. Conference on robot learning, PMLR; p 158–168; 2022.
Schrum ML, Hedlund-Botti E, Moorman N, Gombolay MC. Mind meld: personalized meta-learning for robot-centric imitation learning. 2022 17th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 157–165; 2022.
Chen L, Paleja R, Gombolay M. Learning from suboptimal demonstration via self-supervised reward regression. Conference on robot learning, PMLR; p 1262–1277; 2021a.
Chen M, Nikolaidis S, Soh H, Hsu D, Srinivasa S. Planning with trust for human-robot collaboration. Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction, p 307–315; 2018.
Nanavati A, Mavrogiannis CI, Weatherwax K, Takayama L, Cakmak M, Srinivasa SS. Modeling human helpfulness with individual and contextual factors for robot planning. Robotics: science and systems; 2021.
Nikolaidis S, Hsu D, Srinivasa S. Human-robot mutual adaptation in collaborative tasks models and experiments. Int J Robot Res 2017;36(5-7):618–634.
Xu A, Dudek G. Optimo: Online probabilistic trust inference model for asymmetric human-robot collaborations. Proceedings of the Tenth Annual ACM/IEEE international conference on human-robot interaction, p 221–228; 2015.
Thomaz A, Hoffman G, Cakmak M, et al. Computational human-robot interaction. Found Trends® Robot 2016;4(2-3):105–223.
Hiatt LM, Narber C, Bekele E, Khemlani SS, Gregory Trafton J. Human modeling for human–robot collaboration. Int J Robot Res 2017;36(5-7):580–596.
Tabrez A, Luebbers MB, Hayes B. A survey of mental modeling techniques in human–robot teaming. Curr Robot Rep 2020;1:259–267.
Gervasi R, Aliev K, Mastrogiacomo L, Franceschini F. User experience and physiological response in human-robot collaboration: a preliminary investigation. J Intell Robotic Syst 2022;106(2):36.
Paleja R, Ghuy M, Arachchige NR, Jensen R, Gombolay M. The utility of explainable ai in ad hoc human-machine teaming. Adv Neural Inf Process Syst 2021;34:610–623.
Ramachandran A, Sebo SS, Scassellati B. Personalized robot tutoring using the assistive tutor pomdp (at-pomdp). Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, p 8050–8057; 2019.
Vasconez JP, Carvajal D, Cheein FA. On the design of a human–robot interaction strategy for commercial vehicle driving based on human cognitive parameters. Adv Mech Eng 2019;11(7):1687814019862715.
Lee J, Fong J, Kok BC, Soh H. Getting to know one another: calibrating intent, capabilities and trust for human-robot collaboration. 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE; p 6296–6303; 2020.
Dragan AD, Lee Kenton CT, Srinivasa SS. Legibility and predictability of robot motion. 2013 8th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 301–308; 2013.
Zhou A, Hadfield-Menell D, Nagabandi A, Dragan AD. Expressive robot motion timing. Proceedings of the 2017 ACM/IEEE international conference on human-robot interaction, p 22–31; 2017.
Huang SH, Bhatia K, Abbeel P, Dragan AD. Establishing appropriate trust via critical states. 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE; p 3929–3936; 2018.
Brooks C, Szafir D. 2019.
Sadigh D, Sastry S, Seshia SA, Dragan AD. Planning for autonomous cars that leverage effects on human actions. Robotics: science and systems, volume 2, p 1-9, Ann Arbor, MI, USA; 2016.
Brown DS, Goo W, Niekum S. Better-than-demonstrator imitation learning via automatically-ranked demonstrations. Conference on robot learning, PMLR; p 330–359; 2020.
Basu C, Yang Q, Hungerman D, Singhal M, Dragan AD. Do you want your autonomous car to drive like you? Proceedings of the 2017 ACM/IEEE international conference on human-robot interaction. p 417–425; 2017.
Osogami T, Otsuka M. 2014. Restricted boltzmann machines modeling human choice. Adv Neural Inf Process Syst. 27.
Tabrez A, Hayes B. Improving human-robot interaction through explainable reinforcement learning. 2019 14th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 751–753; 2019.
Kwon M, Biyik E, Talati A, Bhasin K, Losey DP, Sadigh D. When humans aren’t optimal: robots that collaborate with risk-aware humans. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, p 43–52; 2020.
Simon HA. 1990. Bounded rationality. Utility and probability 15–18.
Young DL, Goodie AS, Hall DB, Wu E. Decision making under time pressure, modeled in a prospect theory framework. Org Behav Hum Dec Process 2012;118(2):179–188.
Marge M, Rudnicky AI. Miscommunication detection and recovery in situated human–robot dialogue. ACM Trans Interact Intell Syst (TiiS) 2019;9(1):1–40.
Desai M, Kaniarasu P, Medvedev M, Steinfeld A, Yanco H. Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 251–258; 2013.
Natarajan M, Gombolay M. Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, HRI ’20. New York: Association for Computing Machinery; 2020a. p. 33–42. ISBN 9781450367462. https://doi.org/10.1145/3319502.3374839.
de Greeff J, Hayes B, Gombolay MC, Johnson M, Neerincx MA, van Diggelen J, Cefkin M, Kruijff-Korbayová I. Workshop on longitudinal human-robot teaming. Companion of the 2018 ACM/IEEE international conference on human-robot interaction; 2018.
Goodrich MA. 2018. Using narrative to enable longitudinal human-robot interactions.
Logacjov A, Kerzel M, Wermter S. 2021. Learning then, learning now, and every second in between: lifelong learning with a simulated humanoid robot. Frontiers in Neurorobotics 15.
Pakkar R, Clabaugh CE, Lee R, Deng E, Matarić MJ. Designing a socially assistive robot for long-term in-home use for children with autism spectrum disorders. 2019 28th IEEE international conference on robot and human interactive communication (RO-MAN), p 1–7; 2019.
Scassellati B, Boccanfuso L, Huang C-M, Mademtzi M, Qin M, Salomons N, Ventola P, Shic F. 2018. Improving social skills in children with ASD using a long-term, in-home social robot. Science Robotics 3.
Wiwatcharakoses C, Berrar DP. 2020. SOINN+, a self-organizing incremental neural network for unsupervised learning from noisy data streams. Expert Syst Appl 143.
Dautenhahn K. Robots we like to live with?! - a developmental perspective on a personalized, life-long robot companion. RO-MAN 2004. 13th IEEE international workshop on robot and human interactive communication (IEEE Catalog No.04TH8759), p 17–22; 2004.
Céspedes N, Irfan B, Senft E, Cifuentes CA, Gutiérrez LF, Rincon-Roncancio M, Belpaeme T, Múnera MC. 2021. A socially assistive robot for long-term cardiac rehabilitation in the real world. Frontiers in Neurorobotics 15.
Rakhymbayeva N, Amirova A, Sandygulova A. 2021. A long-term engagement with a social robot for autism therapy. Frontiers in Robotics and AI 8.
Spain RD, Rowe JP, Goldberg BS, Pokorny RA, Hoffman M, Harrison S, Lester JC. Developing adaptive team coaching in GIFT: a data-driven approach. TTW@AIED; 2021.
Belpaeme T, Kennedy J, Ramachandran A, Scassellati B, Tanaka F. 2018a. Social robots for education: a review. Science Robotics, 3.
Barros PVA, Bloem AC, Hootsmans IM, Opheij LM, Toebosch RHA, Barakova EI, Sciutti A. 2021. You were always on my mind: introducing Chef's Hat and COPPER for personalized reinforcement learning. Frontiers in Robotics and AI 8.
Thrun S, Mitchell TM. Lifelong robot learning. Robot Auton Syst 1993;15:25–46.
Lesort T, Lomonaco V, Stoian A, Maltoni D, Filliat D, Rodríguez ND. Continual learning for robotics: definition, framework, learning strategies, opportunities and challenges. Inf Fusion 2019;58:52–68.
Churamani N, Kalkan S, Gunes H. Continual learning for affective robotics: why, what and how. 2020 29th IEEE international conference on robot and human interactive communication (RO-MAN), p 425–431; 2020.
Chen L, Jayanthi S, Paleja RR, Martin D, Zakharov V, Gombolay MC. 2022b. Fast lifelong adaptive inverse reinforcement learning from demonstrations. arXiv:2209.11908.
Kirk JR, Wray RE, Lindes P, Laird JE. 2022. Evaluating diverse knowledge sources for online one-shot learning of novel tasks. arXiv:2208.09554.
Finn C, Abbeel P, Levine S. Model-agnostic meta-learning for fast adaptation of deep networks. International conference on machine learning; 2017.
He X, Sygnowski J, Galashov A, Rusu AA, Teh YW, Pascanu R. 2019. Task agnostic continual learning via meta learning. arXiv:1906.05201.
Jamal MA, Qi G-J, Shah M. Task agnostic meta-learning for few-shot learning. 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR), p 11711–11719; 2019.
Lampinen AK, McClelland JL. Transforming task representations to perform novel tasks. Proc Natl Acad Sci 2020;117:32970–32981.
Wu P, Escontrela A, Hafner D, Goldberg K, Abbeel P. 2022. DayDreamer: world models for physical robot learning. arXiv:2206.14176.
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–1780.
Parisi GI, Tani J, Weber C, Wermter S. 2018. Lifelong learning of spatiotemporal representations with dual-memory recurrent self-organization. Front Neurorobot. 12.
Kirkpatrick J, Pascanu R, Rabinowitz NC, Veness J, Desjardins G, Rusu AA, Milan K, Quan J, Ramalho T, Grabska-Barwinska A, Hassabis D, Clopath C, Kumaran D, Hadsell R. Overcoming catastrophic forgetting in neural networks. Proc Natl Acad Sci 2016;114:3521–3526.
Strober J, Meeden L, Blank D. 2004. The governor architecture: avoiding catastrophic forgetting in robot learning.
Ayub A, Fendley C. 2022. Few-shot continual active learning by a robot. arXiv:2210.04137.
Konidaris GD, Kuindersma S, Grupen RA, Barto AG. 2011. CST: constructing skill trees by demonstration. International Conference on Machine Learning.
Sutton RS, Precup D, Singh S. Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif Intell 1999;112:181–211.
Riemer M, Liu M, Tesauro G. 2018. Learning abstract options. arXiv:1810.11583.
Hejna J, Sadigh D. 2022. Few-shot preference learning for human-in-the-loop RL. arXiv:2212.03363.
Woodward MP, Finn C. 2017. Active one-shot learning. arXiv:1702.06559.
Leyzberg D, Spaulding S, Scassellati B. Personalizing robot tutors to individuals’ learning differences. 2014 9th ACM/IEEE international conference on human-robot interaction (HRI), p 423–430; 2014.
Belpaeme T, Kennedy J, Ramachandran A, Scassellati B, Tanaka F. Social robots for education: a review. Sci Robot 2018b;3(21):eaat5954.
Tapus A, Tapus C, Matarić MJ. Long term learning and online robot behavior adaptation for individuals with physical and cognitive impairments. International symposium on field and service robotics; 2009a.
Saunders J, Syrdal DS, Koay KL, Burke N, Dautenhahn K. “teach me–show me”—end-user personalization of a smart home and companion robot. IEEE Trans Hum-Mach Syst 2016;46:27–40.
Paleja R, Silva A, Chen L, Gombolay M. Interpretable and personalized apprenticeship scheduling: learning interpretable scheduling policies from heterogeneous user demonstrations. Advances in neural information processing systems, volume 33, Curran Associates, Inc.; p 6417–6428. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, and Lin H, editors; 2020. https://proceedings.neurips.cc/paper/2020/file/477bdb55b231264bb53a7942fd84254d-Paper.pdf.
Ziebart BD, Maas AL, Bagnell JA, Dey AK. Maximum entropy inverse reinforcement learning. AAAI conference on artificial intelligence; 2008.
Chen AS, Nair S, Finn C. 2021b. Learning generalizable robotic reward functions from "in-the-wild" human videos. arXiv:2103.16817.
Gerling KM, Hebesberger D, Dondrup C, Körtner T, Hanheide M. Robot deployment in long-term care. Zeitschrift Fur Gerontologie Und Geriatrie 2016;49:288–297.
Clabaugh CE, Jain S, Thiagarajan B, Shi Z, Mathur L, Mahajan K, Ragusa G, Matarić MJ. Month-long, in-home socially assistive robot for children with diverse needs. International symposium on experimental robotics; 2018.
Cummings R, Ligett K, Radhakrishnan J, Roth A, Wu ZS. Coordination complexity: small information coordinating large populations. Proceedings of the 2016 ACM conference on innovations in theoretical computer science, ITCS ’16. New York: Association for Computing Machinery; 2016. p. 281–290. ISBN 9781450340571. https://doi.org/10.1145/2840728.2840767.
Gombolay MC, Wilcox RJ, Shah JA. Fast scheduling of robot teams performing tasks with temporospatial constraints. IEEE Trans Robot 2018a;34(1):220–239.
Tews AD, Mataric MJ, Sukhatme GS. A scalable approach to human-robot interaction. 2003 IEEE international conference on robotics and automation (Cat. No. 03CH37422), volume 2, IEEE; p 1665–1670; 2003.
Andronas D, Apostolopoulos G, Fourtakas N, Makris S. Multi-modal interfaces for natural human-robot interaction. Proc Manuf 2021;54:197–202.
D’Ambrosio DB, Lehman J, Risi S, Stanley KO. Evolving policy geometry for scalable multiagent learning. Proceedings of the 9th international conference on autonomous agents and multiagent systems: volume 1-Volume 1, Citeseer; p 731–738; 2010.
Raiden AB, Dainty ARJ, Neale RH. Current barriers and possible solutions to effective project team formation and deployment within a large construction organisation. Int J Proj Manag 2004;22(4):309–316.
Hecklau F, Galeitzke M, Flachs S, Kohl H. Holistic approach for human resource management in industry 4.0. Procedia Cirp 2016;54:1–6.
Doriya R, Mishra S, Gupta S. A brief survey and analysis of multi-robot communication and coordination. International conference on computing, communication & automation, IEEE; p 1014–1021; 2015.
Queralta JP, Taipalmaa J, Pullinen BC, Sarker VK, Gia TN, Tenhunen H, Gabbouj M, Raitoharju J, Westerlund T. Collaborative multi-robot search and rescue: planning, coordination, perception, and active vision. IEEE Access 2020;8:191617–191643.
Verma JK, Ranga V. Multi-robot coordination analysis, taxonomy, challenges and future scope. J Intell Robot Syst 2021;102(1):10. ISSN 1573-0409. https://doi.org/10.1007/s10846-021-01378-2.
Wang Z, Liu C, Gombolay M. Heterogeneous graph attention networks for scalable multi-robot scheduling with temporospatial constraints. Auton Robot 2022;46(1):249–268. ISSN 1573-7527. Publisher: Springer.
Yan Z, Jouandeau N, Cherif AA. A survey and analysis of multi-robot coordination. Int J Adv Robot Syst 2013;10(12):399. ISSN 1729-8814. http://journals.sagepub.com/doi/10.5772/57313.
Yasar MS, Iqbal T. A scalable approach to predict multi-agent motion for human-robot collaboration. IEEE Robot Autom Lett 2021;6(2):1686–1693. ISSN 2377-3766. Publisher: IEEE.
Altundas B, Wang Z, Bishop J, Gombolay M. Learning coordination policies over heterogeneous graphs for human-robot teams via recurrent neural schedule propagation. 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS). Kyoto: IEEE; 2022. p. 11679–11686. ISBN 978-1-66547-927-1. https://ieeexplore.ieee.org/document/9981748/.
Liu S, Wang L, Wang XV. Multimodal data-driven robot control for human–robot collaborative assembly. J Manuf Sci Eng 2022;144(5):051012.
Liu R, Natarajan M, Gombolay MC. 2021. Coordinating human-robot teams with dynamic and stochastic task proficiencies. J Hum-Robot Interact. 11(1). https://doi.org/10.1145/3477391.
Ravichandar H, Shaw K, Chernova S. STRATA: unified framework for task assignments in large teams of heterogeneous agents. Auton Agents Multi-Agent Syst 2020a;34(2):38. ISSN 1387-2532, 1573-7454. https://doi.org/10.1007/s10458-020-09461-y.
Bettini M, Shankar A, Prorok A. 2023. Heterogeneous multi-robot reinforcement learning. arXiv:2301.07137.
Kriegman S, Nasab AM, Shah D, Steele H, Branin G, Levin M, Bongard J, Kramer-Bottiglio R. Scalable sim-to-real transfer of soft robot designs. 2020 3rd IEEE international conference on soft robotics (RoboSoft), IEEE; p 359–366; 2020.
Wang W, Yang T, Liu Y, Hao J, Hao X, Hu Y, Chen Y, Fan C, Gao Y. From few to more: large-scale dynamic multiagent curriculum learning. Proceedings of the AAAI conference on artificial intelligence, volume 34, p 7293–7300; 2020.
Lin R, Li Y, Feng X, Zhang Z, Fung XHW, Zhang H, Wang J, Du Y, Yang Y. 2022. Contextual transformer for offline meta reinforcement learning. arXiv:2211.08016.
Meng L, Wen M, Yang Y, Le C, Li X, Zhang W, Wen Y, Zhang H, Wang J, Xu B. 2021. Offline pre-trained multi-agent decision transformer: one big sequence model conquers all StarCraft II tasks. arXiv:2112.02845.
Shalev-Shwartz S, Shammah S, Shashua A. 2016. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv:1610.03295.
Iqbal T, Riek LD. 2019. Human-robot teaming: approaches from joint action and dynamical systems. Humanoid robotics: a reference. p 2293–2312.
Handelman DA, Rivera CG, Amant RS, Holmes EA, Badger AR, Yeh BY. Adaptive human-robot teaming through integrated symbolic and subsymbolic artificial intelligence: preliminary results. Artificial intelligence and machine learning for multi-domain operations applications IV, volume 12113, SPIE; p 145–157; 2022.
Talamadupula K, Kambhampati S, Schermerhorn P, Benton J, Scheutz M. Planning for human-robot teaming. Proceedings of the 21th international conference on automated planning and scheduling (ICAPS 2011), Citeseer; p 82–89; 2011.
Nguyen TT, Silander T, Li Z, Leong T-Y. Scalable transfer learning in heterogeneous, dynamic environments. Artif Intell 2017;247:70–94.
Agogino AK, Tumer K. Unifying temporal and structural credit assignment problems. Autonomous agents and multi-agent systems conference; 2004.
Al-Ani B, Edwards HK. A comparative empirical study of communication in distributed and collocated development teams. 2008 IEEE international conference on global software engineering, IEEE; p 35–44; 2008.
Varakantham P, Yeoh W, Velagapudi P, Sycara K, Scerri P. Prioritized shaping of models for solving Dec-POMDPs. Proceedings of the 11th international conference on autonomous agents and multiagent systems - volume 3, AAMAS '12, p 1269–1270, Richland, SC; 2012. International Foundation for Autonomous Agents and Multiagent Systems. ISBN 0981738133.
Marble JL, Bruemmer DJ, Few DA, Dudenhoeffer DD. Evaluation of supervisory vs. peer-peer interaction with human-robot teams. Proceedings of the 37th annual Hawaii international conference on system sciences, IEEE; 9 pp.; 2004.
Korsah GA, Stentz A, Dias MB. A comprehensive taxonomy for multi-robot task allocation. Int J Robot Res 2013;32(12):1495–1512. ISSN 0278-3649, 1741-3176. http://journals.sagepub.com/doi/10.1177/0278364913496484.
Bajcsy A, Herbert SL, Fridovich-Keil D, Fisac JF, Deglurkar S, Dragan AD, Tomlin CJ. A scalable framework for real-time multi-robot, multi-human collision avoidance. 2019 international conference on robotics and automation (ICRA), IEEE, p 936–943; 2019.
Chen D, Li S, Liao L. A recurrent neural network applied to optimal motion control of mobile robots with physical constraints. Appl Soft Comput 2019;85:105880.
Wang X, Chen Y, Zhu W. A survey on curriculum learning. IEEE Trans Pattern Anal Mach Intell 2021;44(9):4555–4576.
Pham H, Dai Z, Ghiasi G, Liu H, Yu AW, Luong M-T, Tan M, Le QV. 2021. Combined scaling for zero-shot transfer learning. arXiv:2111.10050.
Vats S, Kroemer O, Likhachev M. Synergistic scheduling of learning and allocation of tasks in human-robot teams. 2022 international conference on robotics and automation (ICRA), IEEE; p 2789–2795; 2022.
French RM. Catastrophic forgetting in connectionist networks. Trends Cogn Sci 1999;3(4):128–135.
Seraj E, Wu X, Gombolay M. 2020. FireCommander: an interactive, probabilistic multi-agent environment for heterogeneous robot teams. arXiv:2011.00165.
Kent D, Saldanha C, Chernova S. A comparison of remote robot teleoperation interfaces for general object manipulation. Proceedings of the 2017 ACM/IEEE international conference on human-robot interaction, p 371–379; 2017.
Zacharia PTh, Aspragathos NA. Optimal robot task scheduling based on genetic algorithms. Robot Comput Integr Manuf 2005;21(1):67–79.
Li B, Ouyang Y, Zhang Y, Acarman T, Qi K, Shao Z. Optimal cooperative maneuver planning for multiple nonholonomic robots in a tiny environment via adaptive-scaling constrained optimization. IEEE Robot Autom Lett 2021;6(2):1511–1518.
Kawatsuma S, Fukushima M, Okada T. Emergency response by robots to fukushima-daiichi accident: summary and lessons learned. Indus Robot Int J 2012;39(5):428–435.
Hong A, Igharoro O, Liu Y, Niroui F, Nejat G, Benhabib B. Investigating human-robot teams for learning-based semi-autonomous control in urban search and rescue environments. J Intell Robot Syst 2019;94(3-4):669–686.
Kaupp T, Makarenko A, Durrant-Whyte H. Human–robot communication for collaborative decision making—a probabilistic approach. Robot Auton Syst 2010;58(5):444–456. ISSN 0921-8890. Publisher: Elsevier.
Al Tair H, Taha T, Al-Qutayri M, Dias J. 2015. Decentralized multi-agent POMDPs framework for humans-robots teamwork coordination in search and rescue. p 210–213. IEEE. ISBN 1-4799-8966-5.
Zhang P, Wang H, Bo D, Shang S. Cloud-Based framework for scalable and real-time multi-robot SLAM. 2018 IEEE international conference on web services (ICWS), p 147–154; 2018.
García S, Menghi C, Pelliccione P, Berger T, Wohlrab R. 2018. An architecture for decentralized, collaborative, and autonomous robots. p 75–7509. IEEE. ISBN 1-5386-6398-8.
Zhao X, Wu C. 2021. Large-scale machine learning cluster scheduling via multi-agent graph reinforcement learning. IEEE Transactions on Network and Service Management.
Kent D, Saldanha C, Chernova S. Leveraging depth data in remote robot teleoperation interfaces for general object manipulation. Int J Robot Res 2020;39(1):39–53.
Hu G, Tay WP, Wen Y. Cloud robotics: architecture, challenges and applications. IEEE network 2012;26(3):21–28.
Hale MT, Nedić A, Egerstedt M. Cloud-based centralized/decentralized multi-agent optimization with communication delays. 2015 54th IEEE conference on decision and control (CDC), IEEE; p 700–705; 2015.
Banerjee S, Gombolay M, Chernova S. A tale of two suggestions: action and diagnosis recommendations for responding to robot failure. 2020 29th IEEE international conference on robot and human interactive communication (RO-MAN), IEEE; p 398–405; 2020.
Jiang SD. A study of initiative decision-making in distributed human-robot teams. 2019 Third IEEE international conference on robotic computing (IRC), p 349–356; 2019.
Toris R, Kammerl J, Lu DV, Lee J, Jenkins OC, Osentoski S, Wills M, Chernova S. Robot web tools: efficient messaging for cloud robotics. 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, p 4530–4537; 2015.
Kent D, Behrooz M, Chernova S. Crowdsourcing the construction of a 3d object recognition database for robotic grasping. 2014 IEEE international conference on robotics and automation (ICRA), IEEE, p 4526–4531; 2014.
Ravichandar H, Polydoros AS, Chernova S, Billard A. Recent advances in robot learning from demonstration. Ann Rev Contr Robot Auton Syst 2020b;3:297–330.
Ström N. 2015. Scalable distributed DNN training using commodity GPU cloud computing.
Tolstaya E, Gama F, Paulos J, Pappas G, Kumar V, Ribeiro A. 2020. Learning decentralized controllers for robot swarms with graph neural networks. p 671–682. PMLR. ISSN 2640-3498.
Nesnas IAD, Wright A, Bajracharya M, Simmons R, Estlin T. CLARAty and challenges of developing interoperable robotic software. Proceedings 2003 IEEE/RSJ international conference on intelligent robots and systems (IROS 2003)(Cat. No. 03CH37453), volume 3, IEEE; p 2428–2435; 2003.
Bi Z, Wang G, Xu LD, Thompson M, Mir R, Nyikos J, Mane A, Witte C, Cliff S. 2017. IoT-based system for communication and coordination of football robot team. Internet Research. ISSN 1066-2243, Emerald Publishing Limited.
Johnson L, Ponda S, Choi H-L, How J. Improving the efficiency of a decentralized tasking algorithm for UAV teams with asynchronous communications. AIAA guidance, navigation, and control conference, p 8421; 2010.
Mansouri SA, Nematbakhsh E, Ahmarinejad A, Jordehi AR, Javadi MS, Marzband M. A hierarchical scheduling framework for resilience enhancement of decentralized renewable-based microgrids considering proactive actions and mobile units. Renew Sustain Energy Rev 2022;168:112854.
Peternel L, Tsagarakis N, Ajoudani A. A human–robot co-manipulation approach based on human sensorimotor information. IEEE Trans Neural Syst Rehabil Eng 2017;25(7):811–822.
Ham A, Park M-J. Human–robot task allocation and scheduling: Boeing 777 case study. IEEE Robot Autom Lett 2021;6(2):1256–1263.
Srivastava S. Unifying principles and metrics for safe and assistive AI. Proceedings of the AAAI conference on artificial intelligence, volume 35, p 15064–15068; 2021.
Murphy RR. Human-robot interaction in rescue robotics. IEEE Trans Syst Man Cybernet Part C (Appl Rev) 2004;34(2):138–153.
Benos L, Bechar A, Bochtis D. Safety and ergonomics in human-robot interactive agricultural operations. Biosyst Eng 2020;200:55–72.
Hambuchen K, Marquez J, Fong T. A review of nasa human-robot interaction in space. Curr Robot Rep 2021;2(3):265–272.
Ames AD, Grizzle JW, Tabuada P. Control barrier function based quadratic programs with application to adaptive cruise control. 53rd IEEE conference on decision and control, IEEE; p 6271–6278; 2014.
Ames AD, Coogan S, Egerstedt M, Notomista G, Sreenath K, Tabuada P. Control barrier functions: theory and applications. 2019 18th European control conference (ECC), IEEE; p 3420–3431; 2019.
Bansal S, Chen M, Herbert S, Tomlin CJ. Hamilton-jacobi reachability: a brief overview and recent advances. 2017 IEEE 56th annual conference on decision and control (CDC), IEEE; p 2242–2253; 2017.
Fisac JF, Akametalu AK, Zeilinger MN, Kaynama S, Gillula J, Tomlin CJ. A general safety framework for learning-based control in uncertain robotic systems. IEEE Trans Autom Control 2018;64 (7):2737–2752.
Gillula JH, Hoffmann GM, Huang H, Vitus MP, Tomlin CJ. Applications of hybrid reachability analysis to robotic aerial vehicles. Int J Robot Res 2011;30(3):335–354.
Chou G, Berenson D, Ozay N. Learning constraints from demonstrations. Algorithmic foundations of robotics XIII: proceedings of the 13th workshop on the algorithmic foundations of robotics 13, Springer; p 228–245; 2020.
Laffranchi M, Tsagarakis NG, Caldwell DG. Safe human robot interaction via energy regulation control. 2009 IEEE/RSJ international conference on intelligent robots and systems, IEEE; p 35–41; 2009.
Lasota PA, Rossano GF, Shah JA. Toward safe close-proximity human-robot interaction with standard industrial robots. 2014 IEEE international conference on automation science and engineering (CASE), IEEE; p 339–344; 2014.
Heinzmann J, Zelinsky A. Quantitative safety guarantees for physical human-robot interaction. Int J Robot Res 2003;22(7-8):479–504.
ISO. 2011. ISO 10218-1: Robots and robotic devices—safety requirements for industrial robots—part 1: robots. ISO: Geneva, Switzerland.
ISO. 2011. ISO 10218-2: Robots and robotic devices—safety requirements for industrial robots—part 2: robot systems and integration. International Organization for Standardization: Geneva, Switzerland.
ISO. 2016. ISO/TS 15066: Robots and robotic devices—collaborative robots. International Organization for Standardization.
Kulić D, Croft EA. Safe planning for human-robot interaction. J Robot Syst 2005;22(7): 383–396.
Kulić D, Croft E. Pre-collision safety strategies for human-robot interaction. Auton. Robot. 2007;22:149–164.
Brown DS, Niekum S. 2019.
Fridovich-Keil D, Bajcsy A, Fisac JF, Herbert SL, Wang S, Dragan AD, Tomlin CJ. Confidence-aware motion prediction for real-time collision avoidance. Int J Robot Res 2020;39(2-3):250–265.
Lasota PA, Fong T, Shah JA, et al. A survey of methods for safe human-robot interaction. Foundations and Trends® in Robotics 2017;5(4):261–349.
van Waveren S, Carter EJ, Örnberg O, Leite I. Exploring non-expert robot programming through crowdsourcing. Front Robot AI 2021;8:646002.
Yang Y, Chen L, Gombolay M. 2022. Safe inverse reinforcement learning via control barrier function. arXiv:2212.02753.
Das D, Banerjee S, Chernova S. Explainable ai for robot failures: generating explanations that improve user assistance in fault recovery. Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction, p 351–360; 2021.
Estolatan E, Geuna A, Guerzoni M, Nuccio M, et al. 2018. Mapping the evolution of the robotics industry: a cross country comparison.
Chang W-L, Šabanović S. Studying socially assistive robots in their organizational context: studies with Paro in a nursing home. Proceedings of the Tenth Annual ACM/IEEE international conference on human-robot interaction extended abstracts, p 227–228; 2015.
Yang G-Z, Nelson BJ, Murphy RR, Choset H, Christensen H, Collins SH, Dario P, Goldberg K, Ikuta K, Jacobstein N, et al. 2020. Combating COVID-19—the role of robotics in managing public health and infectious diseases.
Tapus A, Tapus C, Mataric MJ. The use of socially assistive robots in the design of intelligent cognitive therapies for people with dementia. 2009 IEEE international conference on rehabilitation robotics, IEEE; p 924–929; 2009b.
Sharkey AJC. Should we welcome robot teachers? Ethics Inf Technol 2016;18:283–297.
Davison DP, Wijnen FM, Charisi V, Meij Jan van der, Evers V, Reidsma D. Working with a social robot in school: a long-term real-world unsupervised deployment. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, p 63–72; 2020.
de Graaf MMA, Allouch SB, van Dijk JAGM. Long-term evaluation of a social robot in real homes. Interact Stud 2016;17(3):462–491.
de Graaf MMA, Allouch SB, van Dijk JAGM. Why would I use this in my home? a model of domestic social robot acceptance. Hum–Comput Interact 2019;34(2):115–173.
Sauppé A, Mutlu B. The social impact of a robot co-worker in industrial settings. Proceedings of the 33rd annual ACM conference on human factors in computing systems, p 3613–3622; 2015.
Sung J, Grinter RE, Christensen HI. Domestic robot ecology. Int J Soc Robot 2010;2(4):417–429.
Tang B, Sullivan D, Cagiltay B, Chandrasekaran V, Fawaz K, Mutlu B. Confidant: A privacy controller for social robots. 2022 17th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 205–214; 2022.
Voigt P, von dem Bussche A. 2017. The EU General Data Protection Regulation (GDPR): a practical guide, 1st ed. Cham: Springer International Publishing.
Chatzimichali A, Harrison R, Chrysostomou D. Toward privacy-sensitive human–robot interaction: Privacy terms and human–data interaction in the personal robot era. Paladyn, J Behav Robot 2021;12 (1):160–174.
Pagallo U. 2016. The impact of domestic robots on privacy and data protection, and the troubles with legal regulation by design. Data protection on the move: Current developments in ICT and privacy/data protection. p 387–410.
Natarajan M, Gombolay M. Effects of anthropomorphism and accountability on trust in human robot interaction. Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, p 33–42; 2020b.
Esq S, Julia R. Alexa, amazon assistant or government informant? Univ Miami Bus Law Rev 2019;27:301.
Hafner J, Baig EC. 2017. Your roomba already maps your home. now the ceo plans to sell that map. https://www.usatoday.com/story/tech/nation-now/2017/07/25/roomba-plans-sell-maps-users-homes/508578001/.
Bonawitz K, Eichner H, Grieskamp W, Huba D, Ingerman A, Ivanov V, Kiddon C, Konecný J, Mazzocchi S, McMahan HB, Overveldt TV, Petrou D, Ramage D, Roselander J. 2019. Towards federated learning at scale: system design. arXiv:1902.01046.
Abhishek VA, Binny S, Johan TR, Raj N, Thomas V. 2022. Federated learning: collaborative machine learning without centralized training data. International Journal of Engineering Technology and Management Sciences.
Abadi M, Chu A, Goodfellow IJ, McMahan HB, Mironov I, Talwar K, Zhang L. Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC conference on computer and communications security; 2016.
Kim M, Günlü O, Schaefer RF. Federated learning with local differential privacy: trade-offs between privacy, utility, and communication. ICASSP 2021 - 2021 IEEE international conference on acoustics, speech and signal processing (ICASSP), p 2650–2654; 2021.
Zhang C, Li S, Xia J, Wang W, Yan F, Liu Y. BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. USENIX annual technical conference; 2020.
Nyholm S, Smids J. The ethics of accident-algorithms for self-driving cars: an applied trolley problem? Ethical Theory Moral Pract 2016;19(5):1275–1289. ISSN 1572-8447. https://doi.org/10.1007/s10677-016-9745-2.
Holstein T, Dodig-Crnkovic G, Pelliccione P. 2018. Ethical and social aspects of self-driving cars. https://arxiv.org/abs/1802.04103.
Short E, Hart J, Vu M, Scassellati B. No fair!! an interaction with a cheating robot. 2010 5th ACM/IEEE international conference on human-robot interaction (HRI), IEEE; p 219–226; 2010.
Kuipers B. 2022. Trust and cooperation. Frontiers in Robotics and AI. 65.
Hansson SO. A panorama of the philosophy of risk. Handbook of risk theory: epistemology, decision theory, ethics, and social implications of risk. In: Roeser S, Hillerbrand R, Sandin P, and Peterson M, editors. Dordrecht: Springer; 2012. p. 27–54. ISBN 978-94-007-1433-5. https://doi.org/10.1007/978-94-007-1433-5_2.
Karnouskos S. Self-driving car acceptance and the role of ethics. IEEE Trans Eng Manag 2020; 67(2):252–265.
Bednarski BP, Singh AD, Jones WM. On collaborative reinforcement learning to optimize the redistribution of critical medical supplies throughout the COVID-19 pandemic. J Am Med Inform Assoc 2021;28(4):874–878.
Yu L, Halalau A, Dalal B, Abbas AE, Ivascu F, Amin M, Nair GB. Machine learning methods to predict mechanical ventilation and mortality in patients with COVID-19. PLoS One 2021;16(4):e0249285.
Burke RV, Berg BM, Vee P, Morton I, Nager A, Neches R, Wetzel R, Upperman JS. Using robotic telecommunications to triage pediatric disaster victims. J Pediatr Surg 2012;47(1):221–224.
Zemmar A, Lozano AM, Nelson BJ. The rise of robots in surgical environments during COVID-19. Nat Mach Intell 2020;2(10):566–572.
Gombolay M, Yang XJ, Hayes B, Seo N, Liu Z, Wadhwania S, Yu T, Shah N, Golen T, Shah J. Robotic assistance in the coordination of patient care. Int J Robot Res 2018b;37(10): 1300–1316.
Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, Newman P, Parry V, Pegman G, Rodden T, Sorrell T, Wallis M, Whitby B, Winfield A. Principles of robotics: regulating robots in the real world. Connect Sci 2017; 29(2):124–129. ISSN 0954-0091, 1360-0494. https://www.tandfonline.com/doi/full/10.1080/09540091.2016.1271400.
Gless S, Silverman E, Weigend T. If robots cause harm, who is to blame? self-driving cars and criminal liability. N Crim Law Rev 2016;19(3):412–436.
Asaro PM. What should we want from a robot ethic?
Kim T, Hinds P. Who should i blame? effects of autonomy and transparency on attributions in human-robot interaction. ROMAN 2006 - The 15th IEEE international symposium on robot and human interactive communication, p 80–85; 2006.
Macrae C. Learning from the failure of autonomous and intelligent systems: accidents, safety, and sociotechnical sources of risk. Risk Anal 2022;42(9):1999–2025.
Gilpin LH, Paley AR, Alam MA, Spurlock S, Hammond KJ. 2022. "Explanation" is not a technical term: the problem of ambiguity in XAI. arXiv:2207.00007.
de Bruijn H, Warnier M, Janssen M. The perils and pitfalls of explainable ai: strategies for explaining algorithmic decision-making. Govern Inf Quart 2022;39(2):101666.
Miller CA. Trust, transparency, explanation, and planning: why we need a lifecycle perspective on human-automation interaction. Trust in human-robot interaction, Elsevier; p 233–257; 2021.
Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell 2019; 267:1–38.
Entin EE, Serfaty D. Adaptive team coordination. Hum Factor 1999;41(2):312–325.
Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factor 2004; 46(1):50–80.
Loideain NN, Adams R. From Alexa to Siri and the GDPR: the gendering of virtual personal assistants and the role of data protection impact assessments. Comput Law Secur Rev 2020;36:105366. ISSN 02673649. https://linkinghub.elsevier.com/retrieve/pii/S0267364919303772.
Hwang G, Lee J, Oh CY, Lee J. It sounds like a woman: exploring gender stereotypes in South Korean voice assistants. Extended abstracts of the 2019 CHI conference on human factors in computing systems. Glasgow Scotland: ACM; 2019. p. 1–6. ISBN 978-1-4503-5971-9. https://dl.acm.org/doi/10.1145/3290607.3312915.
Zacharaki A, Kostavelis I, Gasteratos A, Dokas I. Safety bounds in human robot interaction: a survey. Safety Sci 2020;127:104667.
Wang W, Chen Y, Li R, Jia Y. Learning and comfort in human–robot interaction: a review. Appl Sci 2019;9(23):5152.
Apraiz A, Lasa G, Mazmela M. 2023. Evaluation of user experience in human–robot interaction: a systematic literature review. Int J Soc Robot. 1–24.
Akalin N, Kristoffersson A, Loutfi A. Do you feel safe with your robot? Factors influencing perceived safety in human-robot interaction based on subjective and objective measures. Int J Hum-Comput Stud 2022;158:102744.
Coronado E, Kiyokawa T, Ricardez GAG, Ramirez-Alpizar IG, Venture G, Yamanobe N. Evaluating quality in human-robot interaction: A systematic search and classification of performance and human-centered factors, measures and metrics towards an industry 5.0. J Manuf Syst 2022;63:392–410. ISSN 0278-6125. https://www.sciencedirect.com/science/article/pii/S0278612522000577.
Marvel JA, Bagchi S, Zimmerman M, Antonishek B. Towards effective interface designs for collaborative HRI in manufacturing: metrics and measures. J Hum-Robot Interact 2020;9(4). https://doi.org/10.1145/3385009.
Mingyue Ma L, Fong T, Micire MJ, Kim YK, Feigh K. Human-robot teaming: concepts and components for design. In: Hutter M, Siegwart R, editors. Field and service robotics, volume 5. Cham: Springer; 2018. p. 649–663. ISBN 978-3-319-67360-8, 978-3-319-67361-5. https://link.springer.com/10.1007/978-3-319-67361-5_42. Series title: Springer Proceedings in Advanced Robotics.
Fong T, Zumbado JR, Currie N, Mishkin A, Akin DL. Space telerobotics: unique challenges to human–robot collaboration in space. Rev Hum Fact Ergonom 2013;9(1):6–56.
Jacoff A, Messina E, Weiss BA, Tadokoro S, Nakagawa Y. Test arenas and performance metrics for urban search and rescue robots. Proceedings 2003 IEEE/RSJ international conference on intelligent robots and systems (IROS 2003) (Cat. No. 03CH37453), volume 4, IEEE; p 3396–3403; 2003a.
Jacoff A, Weiss B, Messina E. Evolution of a performance metric for urban search and rescue robots. Technical report, National Institute of Standards and Technology, Gaithersburg, MD; 2003b.
Feil-Seifer D, Skinner K, Matarić MJ. Benchmarks for evaluating socially assistive robotics. Interact Stud Soc Behav Commun Biol Artif Syst 2007;8(3):423–439. ISSN 1572-0373, 1572-0381. http://www.jbe-platform.com/content/journals/10.1075/is.8.3.07fei.
Steinfeld A, Fong T, Kaber D, Lewis M, Scholtz J, Schultz A, Goodrich M. Common metrics for human-robot interaction. Proceedings of the 1st ACM SIGCHI/SIGART conference on human-robot interaction. Salt Lake City: ACM; 2006. p. 33–40. ISBN 978-1-59593-294-5. https://dl.acm.org/doi/10.1145/1121241.1121249.
Mittu R, Sofge D, Wagner A, Lawless WF, editors. Robust intelligence and trust in autonomous systems. Boston: Springer; 2016. ISBN 978-1-4899-7666-6, 978-1-4899-7668-0. https://link.springer.com/10.1007/978-1-4899-7668-0.
Hart SG. NASA-task load index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting; 2006. p. 904–908.
McKendrick RD, Cherry E. A deeper look at the NASA-TLX and where it falls short. Proceedings of the human factors and ergonomics society annual meeting, volume 62. Los Angeles, CA: SAGE Publications; 2018. p. 44–48.
Hopko S, Wang J, Mehta R. Human factors considerations and metrics in shared space human-robot collaboration: a systematic review. Front Robot AI 2022;9:6.
Ma LM, Ijtsma M, Feigh KM, Pritchett AR. Metrics for human-robot team design: a teamwork perspective on evaluation of human-robot teams. ACM Trans Hum-Robot Interact 2022;11(3):1–36. ISSN 2573-9522, 2573-9522. https://dl.acm.org/doi/10.1145/3522581.
Oliveira R, Arriaga P, Paiva A. Human-robot interaction in groups: methodological and research practices. Multimodal Technol Interact 2021;5(10):59. ISSN 2414-4088. https://www.mdpi.com/2414-4088/5/10/59.
Stangor C. Social groups in action and interaction. 2nd ed. New York: Routledge/Taylor & Francis Group; 2016. ISBN 978-1-84872-692-5 (paperback), 978-1-84872-691-8 (hardcover), 978-1-31567-716-3 (digital). viii, 453 pages.
DeChurch LA, Mesmer-Magnus JR. Measuring shared team mental models: a meta-analysis. Group Dyn Theory Res Pract 2010;14(1):1–14. ISSN 1930-7802, 1089-2699. http://doi.apa.org/getdoi.cfm?doi=10.1037/a0017455.
Burtscher M, Oostlander J. Perceived Mutual Understanding (PMU): development and initial testing of a German short scale for perceptual team cognition. Eur J Psychol Assess 2016;35:1–11.
Andrews RW, Lilly JM, Srivastava D, Feigh KM. The role of shared mental models in human-AI teams: a theoretical review. Theoretical Issues in Ergonomics Science, p 1–47; 2022. ISSN 1463-922X, 1464-536X. https://www.tandfonline.com/doi/full/10.1080/1463922X.2022.2061080.
Endsley MR, Selcon SJ, Hardiman TD, Croft DG. A comparative analysis of SAGAT and SART for evaluations of situation awareness. Proc Hum Factors Ergonom Soc Ann Meet 1998;42(1):82–86. ISSN 2169-5067, 1071-1813. http://journals.sagepub.com/doi/10.1177/154193129804200119.
Stout RJ, Cannon-Bowers JA, Salas E. The role of shared mental models in developing team situational awareness: implications for training. In: Salas E, editor. Situational awareness. 1st ed. Routledge; 2017b. p. 287–318. ISBN 978-1-315-08792-4. https://www.taylorfrancis.com/books/9781351548564/chapters/10.4324/9781315087924-18.
Andreas J, Dragan A, Klein D. 2017. Translating neuralese. https://arxiv.org/abs/1704.06960.
Duan Y, Xi C, Houthooft R, Schulman J, Abbeel P. Benchmarking deep reinforcement learning for continuous control. International conference on machine learning, PMLR; p 1329–1338; 2016.
Deng J, Dong W, Socher R, Li L-J, Li K, Li F-F. ImageNet: a large-scale hierarchical image database. 2009 IEEE conference on computer vision and pattern recognition, IEEE; p 248–255; 2009.
Wang A, Singh A, Michael J, Hill F, Levy O, Bowman SR. GLUE: a multi-task benchmark and analysis platform for natural language understanding. 2018. https://arxiv.org/abs/1804.07461.
Wada K. New robot technology challenge for convenience store. 2017 IEEE/SICE international symposium on system integration (SII), p 1086–1091; 2017. ISSN: 2474-2325.
Sarkar B, Talati A, Shih A, Sadigh D. PantheonRL: a MARL library for dynamic training interactions. Proc AAAI Conf Artif Intell 2022;36(11):13221–13223. ISSN 2374-3468, 2159-5399. https://ojs.aaai.org/index.php/AAAI/article/view/21734.
Fontaine MC, Hsu Y-C, Zhang Y, Tjanaka B, Nikolaidis S. 2021. On the importance of environments in human-robot coordination. arXiv:2106.10853.
Wong M, Ezenyilimba A, Wolff A, Anderson T, Chiou E, Demir M, Cooke N. A remote synthetic testbed for human-robot teaming: an iterative design process. Proc Hum Factor Ergonom Soc Ann Meet 2021;65(1):781–785. ISSN 2169-5067, 1071-1813. http://journals.sagepub.com/doi/10.1177/1071181321651336.
Raimondo FR, Wolff AT, Hehr AJ, Peel MA, Wong ME, Chiou EK, Demir M, Cooke NJ. Trailblazing roblox virtual synthetic testbed development for human-robot teaming studies. Proc Hum Factor Ergonom Soc Ann Meet 2022;66(1):812–816. ISSN 2169-5067, 1071-1813. http://journals.sagepub.com/doi/10.1177/1071181322661470.
Steinfeld A, Jenkins OC, Scassellati B. The Oz of Wizard: simulating the human for interaction research. Proceedings of the 4th ACM/IEEE international conference on human robot interaction, p 101–108; 2009.
Riek LD. Wizard of Oz studies in HRI: a systematic review and new reporting guidelines. J Hum-Robot Interact 2012;1(1):119–136.
Gombolay M, Bair A, Huang C, Shah J. Computational design of mixed-initiative human–robot teaming that considers human factors: situational awareness, workload, and workflow preferences. Int J Robot Res 2017;36(5-7):597–617. ISSN 0278-3649, 1741-3176. http://journals.sagepub.com/doi/10.1177/0278364916688255.
Makrini IE, Merckaert K, Lefeber D, Vanderborght B. Design of a collaborative architecture for human-robot assembly tasks. 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), p 1624–1629; 2017.
Storm FA, Chiappini M, Dei C, Piazza C, André E, Reißner N, Brdar I, Fave AD, Gebhard P, Malosio M, et al. Physical and mental well-being of cobot workers: A scoping review using the software-hardware-environment-liveware-liveware-organization model. Hum Factor Ergonom Manuf Serv Ind 2022;32(5):419–435.
Johanson DL, Ho SA, Sutherland CJ, Brown B, MacDonald BA, Lim JY, Ahn BK, Broadbent E. Smiling and use of first-name by a healthcare receptionist robot: effects on user perceptions, attitudes, and behaviours. Paladyn, J Behav Robot 2020;11(1):40–51.
Breazeal C, Dautenhahn K, Kanda T. Social robotics. In: Springer handbook of robotics; 2016. p. 1935–1972.
Mavrogiannis C, Baldini F, Wang A, Zhao D, Trautman P, Steinfeld A, Oh J. 2021. Core challenges of social robot navigation: a survey. arXiv:2103.05668.
Che Y, Okamura AM, Sadigh D. Efficient and trustworthy social navigation via explicit and implicit robot–human communication. IEEE Trans Robot 2020;36(3):692–707.
Bera A, Randhavane T, Prinja R, Kapsaskis K, Wang A, Gray K, Manocha D. 2019. The emotionally intelligent robot: improving social navigation in crowded environments. arXiv:1903.03217.
Kirby R. Social robot navigation [PhD thesis]. Carnegie Mellon University; 2010.
Charalampous K, Kostavelis I, Gasteratos A. Recent trends in social aware robot navigation: a survey. Robot Auton Syst 2017;93:85–104.
Banerjee S, Silva A, Chernova S. Robot classification of human interruptibility and a study of its effects. ACM Trans Hum-Robot Interact (THRI) 2018;7(2):1–35.
Staffa M, Rossi S. Recommender interfaces: the more human-like, the more humans like. Social robotics: 8th international conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016, Proceedings 8, Springer; p 200–210; 2016.
Chiang Y-S, Chu T-S, Lim CD, Wu T-Y, Tseng S-H, Fu L-C. Personalizing robot behavior for interruption in social human-robot interaction. 2014 IEEE international workshop on advanced robotics and its social impacts, p 44–49; 2014.
Brynjolfsson E, McAfee A. The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company; 2014.
Schrum ML, Neville G, Johnson M, Moorman N, Paleja R, Feigh KM, Gombolay MC. Effects of social factors and team dynamics on adoption of collaborative robot autonomy. Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction. Boulder: ACM; 2021. p. 149–157. ISBN 978-1-4503-8289-2. https://dl.acm.org/doi/10.1145/3434073.3444649.
Granulo A, Fuchs C, Puntoni S. Psychological reactions to human versus robotic job replacement. Nat Hum Behav 2019;3(10):1062–1069.
Nam T. Citizen attitudes about job replacement by robotic automation. Futures 2019;109:39–49.
Paredes D, Fleming-Muñoz D. Automation and robotics in mining: jobs, income and inequality implications. Extractive Indus Soc 2021;8(1):189–193.
Acknowledgements
DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering. Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work.
Ethics declarations
Conflict of Interest
The authors declare no competing interests.
Additional information
Manisha Natarajan, Esmaeil Seraj, Batuhan Altundas, Rohan Paleja, Sean Ye and Letian Chen contributed equally to this work.
About this article
Cite this article
Natarajan, M., Seraj, E., Altundas, B. et al. Human-Robot Teaming: Grand Challenges. Curr Robot Rep 4, 81–100 (2023). https://doi.org/10.1007/s43154-023-00103-1