1 Introduction

Robots have been a driving force in revolutionizing factories and modern manufacturing with their ability to tirelessly perform repetitive, mundane tasks with accuracy and reliability beyond human capability. However, traditional robots do not have the flexibility, creativity, and experience of human workers, and as such have been primarily limited to exact, pre-programmed tasks. A new class of industrial robots is emerging that, unlike its cousins working in isolated safety cages, works alongside people, forming human–robot collaborative teams. In this hybrid approach, people and robots work together and support each other, handing items back and forth and leveraging the specialized abilities of both: people are flexible and adaptive, robots are precise and fast, and people can “teach” (re-program) robots as needed. Such human–robot teams are already being used for simple assembly and electronics manufacturing.

To work with people, these robots are often programmed to interact socially like a person, using human-like language and techniques; this enables human co-workers to use their existing interpersonal skills to work with the robot. These social robots use techniques such as speech, gestures, or gaze [8] to help communicate their state, intent, and uncertainty about their task. As team members, robots can further leverage their sociality and exhibit a synthetic personality to work within and impact team dynamics, for example, to mitigate conflict [23] or modify people’s opinions [40].

An important element of social interaction among people is rapport building: people exhibit particular behaviors and give social cues to develop collegial and harmonious relationships, and a sense of being part of a team, which is important for establishing trust and confidence [2]. In this paper, we explore to what extent people exhibit their rapport-building behaviors toward social robots. This information will be crucial for designing robotic collaborators that ultimately engage in rapport behaviors to become effective team members.

Robots in research settings have informally used rapport building, for example, to appear more friendly or to encourage people to care about them [35, 41]. We formally address the question of rapport by synthesizing human–human work on rapport into a framework targeted at human–robot interaction, and conducting an experiment that qualitatively investigates people’s rapport-building behaviors toward robots. Our results indicate that people indeed do apply such behaviors to their robotic colleagues, and our analysis unpacks the particular behaviors people apply, and in which contexts, for both rapport building and hindering.

The contributions of this paper are:

  • A survey of human–human verbal and non-verbal rapport-building and hindering behaviors, synthesized for human–robot interaction study;

  • A novel human–robot interaction study scenario for industrial human–robot interaction that aims for ecological validity, and induces a range of social interactions (Fig. 1);

  • A formal qualitative study illustrating people applying rapport building and hindering behaviors to a robot collaborator;

  • Analysis of which rapport-relevant behaviors we found participants to exhibit toward robots, information crucial for informing the design of collaborative industrial robots.

2 Related Work

Robots as professional team members is an established research area [20]. Much of this work involves appropriate robot use of low-level social mechanics, for example, proper timing in turn-taking with people [9], using gaze and gestures to facilitate group dialog [6] and collaborative work [16], including establishing people’s roles [33], or approaching people politely when initiating interaction [25]. Robots can also use hesitation gestures to mediate conflict when simultaneously reaching for an object with a person [29], and handing objects back and forth between robots and people has been a topic of interest [46]. We move beyond this wealth of work showing how robots can interact socially, and look at how such behaviors relate to the mechanics of rapport building in human–robot teams.

In addition to these more mechanical social interactions, robots can also take higher-level roles such as establishing task execution plans [43], attempting to moderate conflict [23], shaping interpersonal relations [40] or attempting to anticipate and read human actions for improved planning [18]. These social behaviors can directly impact rapport with a person, for example, a robot showing fear to elicit empathy [41], using humor to increase likability [34], or hindering trust by appearing to cheat [45]. In this paper, we move beyond applications of rapport and more formally investigate the social mechanics behind rapport building in a professional collaborative scenario, as a step toward clear guidelines for rapport in human–robot interaction.

The importance of rapport has been established in affective computing [5], for example, virtual agents can build rapport by eliciting laughter [28] or improving feedback [14]. Researchers have put extensive work into building agents that can take a proactive role in supporting and establishing a positive rapport with people, for example, by using back-channel prediction to improve timing and naturalness of interaction [13, 22]. We build on this work by considering rapport specifically for robots (which demand special consideration over existing agent work, [51]), and target a physical work scenario with participants. Further, a primary aim of our work at this stage is to establish what rapport behaviors people use with robots, even before starting to program robots to detect these and themselves exhibit rapport behaviors.

Robots have used rapport, for example, through engineered friendliness [21] or individually personalized interactions [27]. Others have proposed self-report measures that can be given to participants to measure their rapport with the robot post-interaction [37], and have surveyed the public’s opinion of what they expect from robots in terms of rapport [36]. One similar prior work coded specific rapport-building interactions of a person toward a delivery robot: greeting the robot by name, using flattery, and disclosing personal information [27]. We build on this work, which shows that people do indeed build rapport and investigates how to measure overall opinions of rapport; we aim instead to detail which behaviors people may use to build or hinder rapport, and to move beyond the self-report and hypothetical-situation techniques in the literature toward unpacking specific rapport-building and hindering behaviors with real robots.

We provide improved knowledge of social interactions that people use to build or hinder rapport, which will define both what robots can expect and watch for, and how perhaps robots can likewise act. With this knowledge, roboticists will be able to move beyond measuring rapport with their robot toward engineering rapport, and this paper moves toward building a toolkit that robots can use to both build rapport and understand rapport-relevant behaviors being exhibited by their human counterparts.

Finally, research has demonstrated how social robots may integrate into a broad range of contexts where it will be important to develop a friendly rapport with people, including kiosk-like public supports (e.g., in train stations, [19]), personal assistants (e.g., for shopping, [24]), domestic robots [47], or companion robots in homes and hospitals [44]. As such, we anticipate that our rapport work will be relevant beyond our current target application of industrial team-work robots.

3 Rapport

Rapport is an individual’s experience of harmonious interaction with another person, often described as “clicking” or “having chemistry” [48]. Given the collaborative nature of professional work, and that collaborative groups are often more effective than individuals [2], rapport has an important impact on the overall functioning of a team. Research has shown, for example, how good rapport is essential for effective work, as well as for overall worker satisfaction [31]. Likewise, rapport is an important element of professional interaction between human and robot collaborators. In this section we present a survey of human–human rapport building in professional contexts, placing particular emphasis on identifying the lower-level social techniques people use that can be readily observed and coded in a study.

3.1 Rapport Building Fundamentals in Professional Contexts

Rapport building relies on emphasizing common ground and shared experiences as a basis for a relationship with another person [1]. On a more fundamental level, people mutually express attentiveness and interest in one another, are appropriately responsive to the sentiments of others, and express general positivity [48]. This is a collaborative process, and one can strengthen rapport with solid reciprocation of these techniques [1, 2, 15].

Particularly at the outset of establishing a relationship, but continuing throughout interaction, people exhibit common-grounding behavior in an attempt to discover areas of similarity or mutual interest with the other [2, 15]. For example, people make inquiries or engage in discussion external to the task at hand (e.g., related to one’s social life or hobbies), or share their own personal information [1], which is more potent with a more intimate level of self-disclosure [2]. This goal of developing a sense of familiarity can also be achieved by establishing shared work vocabulary and background knowledge [2].

People build rapport by promoting the in-group and similarity, for example, by using inclusive pronouns (“we,” “let’s,” etc., [10]). A mechanical way to establish similarity and strengthen sense of group is imitation: matching behaviors, voice patterns and tone, posture, or facial expressions [15].

Positivity can be expressed by explicit agreement with others’ ideas or suggestions, providing compliments and encouragement, and thanking others [1]. Politeness and courteousness are examples of positivity, such as taking genuine interest in others [2], listening empathetically and holding a friendly demeanor, responding to thanks and compliments in a positive way [15], or making accommodations in how one interacts or speaks (e.g., slower or louder, [2]). Use of humor is also common for exhibiting friendliness [2]. Being attentive is important, such as maintaining eye contact and physical proximity, and giving appropriate back-channel communications such as nodding and verbal affirmations (“mmhmm,” etc., [15]). Conflict resolution is an additional opportunity to build positivity and rapport, for example, by offering apologies and mitigating criticism.

In the remainder of this section we summarize these points into a list of discrete social interactions and behaviors that can be observed and identified in interaction. This can also be used as an initial template for programming robotic interactions. We approach this from both a verbal behavior and non-verbal behavior perspective.

3.2 Verbal Rapport-Building

Rapport-building language is the use of voice in interaction with the primary purpose of impacting the social elements of interaction to develop intimacy in the relationship [1], in contrast to more practical task-oriented use of language. Concrete examples of rapport-building verbal behaviors are:

  • complimenting the co-worker [1];

  • thanking the co-worker [1, 15];

  • asking the co-worker off-task questions, e.g., personal information [2];

  • responding to questions empathetically and appropriately, e.g., in full sentences [2];

  • freely disclosing personal information when asked, or volunteering personal information [2];

  • use of inclusive, in-group speech, e.g., pronouns such as “we” or “let’s,” [10] or using the robot’s name [2];

  • mitigating response to criticism, such as genuinely apologizing when criticized [1];

  • responding to general (non-criticism) complaints or concerns with agreement and empathy [1, 10, 15].

Verbal behaviors can also hinder rapport building. This includes not only omission of supporting behaviors, but also proactive behaviors that hinder rapport:

  • ignoring co-worker politeness, such as no response when thanked;

  • no mitigation of criticism from a co-worker, such as simply ignoring it or responding in a clearly insincere fashion;

  • unusually brief responses to co-worker questions, without disclosing any personal information or developing common ground;

  • the use of aggressive or derogatory techniques such as an insincere tone, sarcasm, insults, or questioning a co-worker’s abilities [7].

3.3 Non-verbal Rapport-Building Behaviors

Similar to the verbal behaviors above, rapport-building non-verbal behaviors are those that are solely for the purposes of impacting the social interaction, exclusive of the work task at hand. Concrete examples are:

  • displaying an open, inviting, and friendly posture toward the co-worker, e.g., leaning toward or facing them, or having uncrossed arms [48];

  • engaging facial expressions, including smiling and making friendly eye contact while speaking [4, 15, 48];

  • friendly back-channel body language, such as laughing, nodding, waving, etc. [48];

  • maintaining a friendly proximal relation, staying in personal or social space with a co-worker [4].

In addition to avoiding the above rapport-building behaviors, people can hinder rapport development with physical behaviors that display discomfort, distance, or disinterest:

  • a closed posture, e.g., crossed arms, or leaning or facing away from a co-worker;

  • showing disinterest and not engaging a co-worker, e.g., excessive looking around the room or checking a phone, or not looking at a co-worker when they are talking;

  • maintaining a neutral, uninterested facial expression while interacting;

  • maintaining a socially awkward physical distance from a co-worker.
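
Since the lists above are intended partly as an initial template for programming robotic interactions (as noted at the end of Sect. 3.1), one way to operationalize them is as an explicit behavior vocabulary. The following is a minimal Python sketch with hypothetical category names; the authors’ actual coding scheme appears in their Appendix.

```python
# A rough behavior vocabulary distilled from Sects. 3.2-3.3; names are
# hypothetical illustrations, not the authors' published coding scheme.
from enum import Enum

class VerbalBuilding(Enum):
    COMPLIMENT = "complimenting the co-worker"
    THANKING = "thanking the co-worker"
    OFF_TASK_QUESTION = "asking off-task, personal questions"
    FULL_RESPONSE = "responding appropriately, in full sentences"
    SELF_DISCLOSURE = "disclosing or volunteering personal information"
    INCLUSIVE_SPEECH = "in-group pronouns ('we', 'let's') or using names"
    CRITICISM_MITIGATION = "genuinely apologizing when criticized"
    EMPATHETIC_RESPONSE = "agreement/empathy toward complaints or concerns"

class VerbalHindering(Enum):
    IGNORE_POLITENESS = "no response when thanked"
    UNMITIGATED_CRITICISM = "ignoring or insincerely answering criticism"
    BRIEF_RESPONSE = "unusually brief answers, no disclosure"
    AGGRESSION = "insincere tone, sarcasm, insults, questioning abilities"

class NonVerbalBuilding(Enum):
    OPEN_POSTURE = "open, inviting posture; leaning toward; uncrossed arms"
    ENGAGING_FACE = "smiling, friendly eye contact while speaking"
    BACKCHANNEL = "laughing, nodding, waving"
    CLOSE_PROXIMITY = "staying in personal or social space"

class NonVerbalHindering(Enum):
    CLOSED_POSTURE = "crossed arms, leaning or facing away"
    DISINTEREST = "looking around the room, checking a phone"
    NEUTRAL_FACE = "neutral, uninterested expression"
    AWKWARD_DISTANCE = "socially awkward physical distance"
```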

3.4 Gender-Related Considerations

There is a body of evidence suggesting differences in how women and men build rapport in professional settings, which would be relevant for rapport building in human–robot interaction, particularly given the timely importance of addressing issues of gender [50]. For example, women may engage in prosocial interpersonal behaviors more than men [11], including being more friendly or sympathetic to a colleague, being more positive and encouraging [3], and using group-inclusive language such as “we” rather than “I” [32]. There is also evidence suggesting that men may be more engaged with new technologies than women [26], and that women may have more anxiety around them [49]. However, differences in attitude toward technology in workplace settings may be disappearing with younger generations [30]. We include a balanced gender sample and targeted analysis in our study.

4 Professional Production-Line Interaction Scenario

We developed an original human–robot professional collaborative task for the purposes of this study (components of the scenario design have been previously published, [42]). Our goal was to develop a production-line scenario that aims for as high a level of ecological validity as possible within the constraints of a laboratory setting. Simultaneously, we aimed to inject a range of social interactions to create opportunities for rapport building or hindering, without harming the validity of the scenario.

4.1 Approach to Believability and Ecological Validity

Participants will clearly be aware that they are participating in a laboratory study and not engaging in real work; ethics protocols generally require this to be disclosed. However, given that rapport building is embedded within the social elements of interaction, it is important to help participants interact as naturally as possible within these constraints. Using an obvious mock scenario (e.g., sorting colored blocks, or having a tiny robot lift heavy objects) may impact how seriously people treat the robot and task. Therefore, we aim for a task that, as much as possible, feels like a simulation of real work (e.g., assembly, packing, or inspection) to imbue a sincere interaction tone, rather than a toy example.

Avoiding mock work scenarios is difficult, as it is non-trivial to get a robot to do real collaborative work, particularly with many research or prototype models. Even for robots that are capable, the programming overhead can be prohibitive for small-scale research. As such, we aim to use scenario design and storytelling to achieve believability, avoiding onerous programming. Within this approach, we aim to ensure that both the robot and person are necessary for the work, or at the least, that working together is clearly more effective than working alone. In particular, we consider the strengths of both the robot and person, from the perspective of a participant: robots can be highly accurate and precise, are strong and tireless, have specialized sensors, have perfect memory and have access to databases, etc. People, on the other hand, have higher creative ability for unforeseen problems, are more flexible for on-the-fly work changes, have historical knowledge of work, have much more dexterous hands, etc. If the work could be done by either the person or robot alone, the participant may feel the interaction is forced or fake, whereas a convincing collaborative task that leverages both members can help create a believable and engaging task.

Finally, to maintain validity of the social situation we pay particular attention to the social elements of a robot’s interaction to ensure that they do not break the social mood. For example, since our robot has eyes and arms, we ensure that they are used in a socially acceptable manner; a robot staring fixedly at a person or having limp arms while talking may seem awkward, and confound the rapport building.

4.2 A Range of Social Interactions

For the purposes of exploring people’s rapport-related behaviors toward robots, we aim to ensure that our task encompasses a sufficient range of social situations where relevant behaviors may emerge. Based on our rapport explorations described earlier in the paper, we have identified several such situations that should be included in the scenario:

  • In a real interaction, robots will inevitably make mistakes, give incorrect information, change their decisions, and so forth. Our study should ensure that the robot makes mistakes that are clear to the participant.

  • People may engage in small talk with a robot, similar to how they do with other people. We will provide a natural situation for simple discussion to emerge between the person and the robot.

  • Robots in real work will sometimes have to offer negative or positive feedback to the person; we will ensure the robot both praises and criticizes the participant.

  • Touch is a very intimate personal action which can illustrate a person’s opinion toward another; we will include a situation for people to touch the robot.

4.3 Scenario

Our scenario aims for high ecological validity by using a practical and believable collaborative task that requires both the person’s and robot’s skills. Further, it creates a range of opportunities to observe behaviors related to rapport: at various points the robot makes a mistake, criticizes the person, and compliments the person; there is opportunity for small talk; and the person is required to touch the robot.

The person and robot work together on an inspection task where they collaboratively sort laundered squares of cloth (handkerchiefs) depending on whether the cloths have remaining difficult-to-see dirt or not. The robot ostensibly has advanced dirt sensors that enable it to find dirt, and the person has the manual dexterity to grab, show, and fold the pieces of cloth, as well as to spray them with additional cleaner where needed (Figs. 1, 2).

The robot holds a friendly demeanor and uses social cues appropriately, such as using its gaze to look at the cloth (while inspecting) and the person (while speaking). The robot points at the cloth to show where dirt is found and, when not pointing, holds its arms in a casual way and makes micro-gestures while speaking. The robot further casually shifts its weight to increase realism and, while processing, instead of remaining silent, uses conversational fillers such as “hmm, let me see...” The robot does not have an actual dirt sensor; dirt detection is simulated for the purposes of the experiment. Below we describe the components of the experiment.

Fig. 1 A person and robot collaborating on a professional task. The person exhibits rapport-building and rapport-hindering behaviors toward the robot during interaction

4.3.1 Task: Sorting a Cloth

The person takes a cloth from an unsorted bin, un-ruffles it, and holds it up to the robot for inspection. If the robot finds the cloth to be clean, it says so, and the person folds the cloth in half and half again (into a square) and places it in the clean bin. If the robot identifies that the cloth is dirty, it points at the quadrant with the dirt while verbally reporting, for example, “there is dirt on this top corner” (we include small variations in the speech for naturalness). In this case, the participant sprays the region with a cleaner and places the cloth in the dirty bin.
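
As an illustration, a minimal sketch of this per-cloth protocol follows; the helper names and report phrasings are hypothetical, and in the study the verdicts were scripted rather than sensed.

```python
import random

def robot_say(text):           # stub: would route to the robot's text-to-speech
    print("Robot:", text)

def robot_point_at(quadrant):  # stub: would trigger a pointing gesture
    print("Robot points at the", quadrant, "quadrant")

# Small phrasing variations for naturalness, as in the study script.
DIRT_REPORTS = ["there is dirt on this {q} corner",
                "I see dirt near the {q} corner"]

def handle_cloth(verdict, quadrant=None):
    """verdict: 'clean' or 'dirty'; quadrant: e.g. 'top-left' when dirty."""
    if verdict == "clean":
        robot_say("This one is clean.")
        return ["fold in half, and in half again", "place in the clean bin"]
    robot_say(random.choice(DIRT_REPORTS).format(q=quadrant))
    robot_point_at(quadrant)
    return ["spray the {} region with cleaner".format(quadrant),
            "place in the dirty bin"]
```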

4.3.2 Praise or Criticism from the Robot

At fixed points in the scenario, the robot says “You are doing a great job, thanks!” and “Can you please hurry up? You are being really slow.” These statements are not a reflection of the actual work done by the person.

Fig. 2 A participant working collaboratively with a robot to sort pieces of laundered cloth. The participant leverages their manual dexterity to pick up a piece of cloth and hold it in front of the robot, while the robot ostensibly has a dirt sensor that can detect how clean the cloth is [42]

4.3.3 Robot Mistake

At a fixed point, the robot makes a mistake on a dirty cloth: after announcing the result (“There is dirt on this top corner”), it quickly says it was wrong and provides a new answer (“No, I meant over here”), and then after a few seconds it changes its answer again (“No, sorry ⟨pause⟩ it’s actually clean. Can you fold it and put it in the clean box?”). It is clear to participants that the robot makes a mistake, and its abrupt speech cuts off their actions or responses during this sequence.

4.3.4 Casual Interaction

The robot ostensibly overheats at certain points and needs to sit down to rest. During this time, the robot attempts to start casual conversation. Specifically, it starts by asking about the weather, then asks the person if they get paid for this, and if they go to the local university. The robot tightly controls the conversation tree, with minimal flexibility given on the fly based on participant questions.

4.3.5 Touching the Robot

At certain points the robot’s sensors supposedly become dirty, and the person is asked to clean the robot’s hands and face with a provided wet tissue. There is no check here on how well the person cleans the robot.

4.3.6 Implementation and Environment

We use an Aldebaran Nao N25, a small child-sized humanoid robot (22.5 inches, 57 cm tall), capable of gaze and simple gestures. The robot’s actions are remotely controlled, unbeknownst to the participant (Wizard of Oz, Fig. 3), using in-house software based on the NaoQi SDK 2.1 and written in C#. The Wizard follows a strict script, playing pre-defined behaviors and responses to follow a conversation tree; only minor deviations are allowed to bring the interaction back on track. The Wizard can move the robot’s head to follow the person, free-type small amounts of text for the robot to speak, and point correctly at a quadrant of cloth to indicate dirt (with auto-calibration as the person moves).
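
To make the control flow concrete, the following is a minimal sketch of a conversation-tree playback loop of this kind. It uses the NaoQi Python bindings rather than the authors’ in-house C# tooling, and the node names, utterances, and IP address are hypothetical illustrations, not the actual study script.

```python
# A minimal Wizard-of-Oz control sketch (requires the NaoQi Python SDK).
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559   # assumed robot address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)

# Hypothetical fragment of the casual-interaction tree (Sect. 4.3.4):
# each node maps to a pre-defined utterance and the follow-up nodes the
# wizard may choose from next.
TREE = {
    "rest_start": ("Phew, I am overheating. Let me sit for a moment.", ["weather"]),
    "weather":    ("Nice weather today, isn't it?", ["paid"]),
    "paid":       ("Do you get paid for helping me today?", ["university"]),
    "university": ("Do you go to the university here?", []),
}

def play_node(node):
    utterance, next_nodes = TREE[node]
    tts.say(utterance)            # play the pre-defined line
    return next_nodes             # wizard picks the next node from these

def track_head(yaw, pitch):
    # Wizard manually steers the head to keep facing the participant.
    motion.setAngles(["HeadYaw", "HeadPitch"], [yaw, pitch], 0.15)

def free_type(text):
    # Small amounts of free-typed speech for unexpected situations.
    tts.say(text)
```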

4.4 Pilot Study

We conducted an initial pilot study to test the feasibility and believability of our scenario; specifically, whether people would work with the robot to complete the task. We recruited ten participants (4 male, 6 female) from first- and second-year Sociology classes, who were paid $20 for their time. The pilot was approved by our institution’s research ethics board.

Overall, the pilot was a success: participants appeared to believe that they were working with an advanced intelligent humanoid robot prototype, and that the task represented real work. Further, the full breadth of our social interactions was successful in that the interactions appeared to elicit authentic responses; no participant was observed treating the interaction insincerely. All participants engaged the robot and scenario, and no one reported feelings of awkwardness or make-work in our post-test interview.

We noted that all participants engaged the robot socially, giving appropriate responses. For example, when the robot asked the person to do something (e.g., “can you show me a piece of cloth to inspect?”), participants responded verbally (e.g., “sure”) and with body language (e.g., nodding) in addition to doing the work, even though this is not necessary to complete the task. Participants further used socially-appropriate prompts to the robot, similar to prompting a person, for example, saying “is this a clean one?” or “how about this one?” while lifting a cloth. Again, this is not necessary to complete the task. All participants engaged the robot in small talk during the short break. Three participants attempted to shake the robot’s hand at the end of the task, a collegial action typical of work environments.

Overall, our scenario proved to be effective in creating a reasonably realistic production-line task, with a range of social behaviors. We used this task for our full study, outlined below.

5 Study: Rapport Building with a Robot Co-worker

We conducted an exploratory qualitative study to investigate how people exhibit rapport-building or hindering behaviors when interacting with a robotic co-worker. Our hypothesis is that, when working with a social robot on a collaborative task, people will attempt to develop rapport with the robot similar to how they would with a person, as an important element of social interaction among people is rapport building [2]. The aim of this study is to test this hypothesis, and to further describe which rapport behaviors manifest.

Fig. 3 Our Wizard of Oz interface used during the experiment. The various panels and buttons enable the operator to activate a range of pre-defined behaviors, gestures, conversation topics, etc. The operator can also give low-level commands and custom speech for unexpected behavior. Here, the participant is nodding his head when the robot says the piece of cloth is clean

5.1 Measures

Our primary measure was observations of rapport-building or rapport-hindering behaviors toward our robot. We developed a coding scheme based directly on the rapport-building techniques outlined earlier in the paper (an abbreviated coding scheme is attached as “Appendix”). This guideline was used to code videos of the interactions between people and robots, to provide insight into which rapport-related behaviors people exhibited toward our robot. The experiment was recorded with video cameras from two angles that ensured a clear view of both the side profile of the person and robot, and the person’s face (Fig. 4). For analysis, these were combined into a single video for simultaneous observation.

We administered a range of established questionnaires post-test relating to human–human rapport and opinions of the robot as a professional partner, with minor changes such as removing items not relevant to our study, and replacing words such as “system” or “person” with “robot.”

We administered several components of the Unified Theory of Acceptance and Use of Technology model [49]: performance expectancy (the robot would help them do their job better), effort expectancy (low effort required to work with the robot), general attitude toward the robot, and anxiety toward the robot. In addition, we administered general rapport [14] and likability scales [39], and a scale to measure desire to work with a particular co-worker in the future [17]. Overall, we expect that these broad measures will reflect the level of rapport that the participant felt they built with the robot, and provide insight into why or why not (e.g., if the robot was likable).

5.2 Procedure

Participants met the experimenter in a different location and completed an informed consent form before moving to the experiment room. The experiment room layout is shown in Fig. 4. The robot stood up and introduced itself while waving, asking the participant for their name. Participants were told that the robot is highly intelligent (and were not told that it was remote controlled), but that it was not good at highly precise manual tasks such as handling cloth and folding.

Fig. 4 Experimental setup: a robot on a desk with pieces of cloth which will be sorted into the boxes labeled clean and dirty, wet tissues for cleaning the robot’s sensors, and a bottle of water representing detergent to spray onto the dirty pieces of cloth. Two cameras are used: one records the profile of the participant and robot, and the other faces the participant

Fig. 5 A flow chart representing the exact procedure followed for the experiment

The task is explained to the participant (the criticism and praise are omitted), and the participant is led through four examples of sorting cloths and cleaning the robot. The participant is also told that the robot may overheat and require a break. If at any point the participant asks detailed technical questions about the robot or algorithm, they are deflected to the end of the experiment. The experimenter leaves and the participant is alone with the robot during the tasks. The participant sorts all cloths collaboratively with the robot, and at specified times is asked to clean the robot sensors. In addition, the robot praises and criticizes the participant at fixed times, and “overheats” to provide a discussion opportunity.

There are 36 squares of cloth to sort, and the clean or dirty state of each cloth is pre-determined and consistent for all participants. The entire experiment procedure is specified in Fig. 5. After 6 cloths, the robot asks to be cleaned, and after an additional 5 (11 total), the robot either praises or criticizes the participant (order counterbalanced). After 4 more (15 total), the robot makes a mistake (as described earlier), and after 18 total, the robot requires a rest: it states it is overheating and sits down; this break ranged from 30 s to roughly 5 min depending on how conversational the person was. Following this, after 4 more squares (22 total), the robot requests to be cleaned again, and after 8 more (30 total) the robot either praises or criticizes the person (order counterbalanced). After the last 6 squares the experiment is finished. If at any point the participant attempts to engage in off-topic discussion or ignore the work task, the robot says “let’s focus on our task. We can talk later when we are not working.” Once the task is finished, the experimenter returns to administer a post-test questionnaire and give a debriefing interview. The protocol takes about 45 min, approximately 20 of which is spent interacting with the robot.
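
For concreteness, this fixed event schedule can be represented as a simple lookup from cloth count to scripted event; a minimal sketch, with hypothetical event names, follows.

```python
# Fixed event schedule from the procedure above (Sect. 5.2); event names
# are hypothetical labels for the scripted robot behaviors.
SCHEDULE = {
    6:  "request_cleaning",
    11: "praise_or_criticize",   # first of the counterbalanced pair
    15: "make_mistake",
    18: "overheat_and_rest",     # casual-conversation opportunity
    22: "request_cleaning",
    30: "praise_or_criticize",   # second of the counterbalanced pair
    36: "end_of_task",
}

def event_after(cloths_sorted):
    """Return the scripted event triggered at this cloth count, if any."""
    return SCHEDULE.get(cloths_sorted)
```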

We recruited 36 participants (18 men, 18 women) and maintained a gender balance for analysis purposes. This study was approved by our institution’s research ethics board. Participants were compensated $20.

Fig. 6 Post-test questionnaire data averages with 95% confidence intervals

Fig. 7 Histograms of rapport-building and hindering behaviors. Each bar represents one behavior, showing how many participants fall into each category. For example, red represents the number of participants who exhibited that behavior 1–3 times during the interaction

5.3 Results

The post-experiment quantitative scales use Likert-like items on a 1–7 scale, where items are averaged across each scale, giving a numerical representation of participant reaction to and opinion of the robot. The average ratings across participants are presented in Fig. 6. Specifically: performance expectancy mean = 5.2, SD = 0.9; effort expectancy mean = 6.0, SD = 0.8; general attitude toward the robot mean = 5.9, SD = 1.3; sense of rapport mean = 4.7, SD = 0.9; robot likability mean = 5.2, SD = 0.9; would like a robot as a co-worker mean = 5.6, SD = 1.0; and anxiety toward the robot mean = 3.4, SD = 1.1.
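
For reference, a minimal sketch of this scoring: items are averaged within each scale per participant, and participant scores are then summarized with a mean and 95% confidence interval. A normal approximation is assumed here; the exact interval computation behind Fig. 6 is not specified in the text.

```python
import math

def scale_score(item_ratings):
    """Average one participant's 1-7 Likert-like item ratings for a scale."""
    return sum(item_ratings) / float(len(item_ratings))

def mean_and_ci(scores, z=1.96):
    """Mean and 95% CI half-width across participants (normal approx.)."""
    n = len(scores)
    mean = sum(scores) / float(n)
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
    return mean, z * sd / math.sqrt(n)
```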

The remaining results are from our qualitative analyses of video data of participants completing the task with the robot. Two video coders were trained on pilot data and met regularly to ensure mutual understanding. Figure 7 provides a graphical overview of the coding results, where each participant is allocated to one of four bins: 0, no observations of the behavior; 1–3, low incidence; 4–6, moderate incidence; and 7+, high incidence of the behavior. Each bar in the figure represents a histogram of how many participants fall into each bin. These bins were selected as even intervals across the range of data observed, with the 0 bin being particularly important for identifying how often a behavior was completely omitted from interaction.
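
The binning itself is straightforward; a minimal sketch:

```python
def incidence_bin(count):
    """Map a participant's coded-instance count to the Fig. 7 bins."""
    if count == 0:
        return "none (0)"
    if count <= 3:
        return "low (1-3)"
    if count <= 6:
        return "moderate (4-6)"
    return "high (7+)"
```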

Fig. 8 Examples of participant facial expressions. Left, rapport-building smiling and friendly demeanor. Right, rapport-hindering stern gaze while verbally expressing doubt about the robot’s abilities

Fig. 9 Examples of body postures. Left, open and friendly posture with relaxed shoulders and open arms. Right, closed posture with crossed legs, clasped arms, and hunched shoulders

Fig. 10 Proximity to robot during the “overheating” break. Top, rapport-building social distance, leaning in to chat with the robot. Bottom, rapport-hindering social distance, taking the furthest chair from the robot, and rapport-hindering distracted behavior, engaging their cell phone while the robot is attempting conversation

5.3.1 Non-verbal Rapport Building

Overall, most participants exhibited some non-verbal rapport-building behaviors, with a median of 3 coded instances per participant, a mode of 1 coded instance (at 17%), and 5 participants with no coded instances overall. The breakdown of each specific behavior is given in Fig. 7.

Of the non-verbal rapport-building behaviors exhibited, facial expressions and gaze were the most common, with 72% of participants showing such behaviors, and 25% being coded with 7 or more instances. This includes genuine smiling at the robot and appropriately making eye contact with the robot while it was speaking (Fig. 8). Behavioral engagement was also common (exhibited by 64% of people, and commonly shown by 17%), including laughing with the robot, waving back during introductions, and nodding at the robot in understanding while the robot is talking. About half of participants (53%) exhibited open postures some of the time (Fig. 9), including leaning toward the robot with uncrossed arms, and orienting themselves directly toward the robot. Physical proximity was much less evident, with only 22% of participants keeping a closer, socially accepting distance (Fig. 10).

Participants also displayed non-verbal rapport-hindering behaviors; the median was 3 coded instances per participant, a mode of 1 instance at 22% of participants, and 5 participants with no coded instances (these were not the same participants who had no codes in the rapport-building case above).

The most common rapport-hindering behavior was acting distracted in a way that would be considered socially awkward or rude toward a person, with 44% of participants doing this to some degree (Fig. 10). A few participants (8%) did this extensively (7+ times), for example, turning away while the robot is talking, or ignoring it while looking at one’s cell phone. Also, about half of participants (53%) kept a distance from the robot that would be considered socially awkward (Fig. 10).

Twenty-six percent of participants displayed some form of ability-testing behavior toward the robot that would be considered rude if directed toward another person, with some doing this extensively. This code also has verbal components. Examples include trying to trick the robot by purposely doing the task incorrectly (such as spraying a clean cloth) and asking if it noticed, or asking the robot questions simply to see if it could answer, such as “you said this corner?” while pointing to a clearly wrong place.

5.3.2 Verbal Rapport Building

Verbal rapport-building behaviors were observed much more often than non-verbal behaviors. The median coded instances across participants was 15, with a mode of 9 at 11% of participants. No participant had 0 codes. The full breakdown is given in Fig. 7.

The most common instances were providing compliments to the robot (e.g., “you are very interesting, I’ve never met a robot like you,” participant 22), thanking the robot (e.g., “thanks for your help today, I appreciate it,” participant 16), and responding sincerely and in detail to questions. In addition, 72% of participants asked the robot questions external to the task, such as “do you speak any languages other than English?” (participant 12), evidence of common-ground building. About half of participants were observed using in-group language, such as saying “okay buddy” or “for you Nao [robot’s name], anything!” (participant 25), and empathetic speech, such as saying “aww...” when the robot states that it is sad it cannot go outside (participant 28). In addition, 39% of participants gave personal disclosures of information, such as disclosing their major of study or job, or hints about their relationship status. Finally, 58% of participants used criticism-mitigating language, such as apologizing: “oh I’m slow? Sorry” (participant 40).

Verbal rapport-hindering behavior was less often observed, with a median of 3 and a mode of 0 instances at 25%. The most common occurrence was limited responses, such as having no reaction to robot thanks or simple questions, or giving short and abrupt answers such as “yep.” or “okay.” (participant 17), which seemed to signal a lack of interest in discussion. More specifically, 58% of participants, at least once, completely ignored robot politeness such as thanking, and 27% completely ignored criticism from the robot.

Twenty-two percent of participants used aggressive speech or sarcasm. For example, one participant got quite angry and said things including “it doesn’t matter...you are just a machine, you can’t feel anything! ... I don’t have to answer you. Because you are just a machine, you won’t understand anything!” (participant 19). Another participant started criticizing the robot in the same manner they were criticized by the robot, sarcastically saying “you are doing a very good job.” (participant 27).

5.3.3 Gender Analysis

We performed a thorough gender analysis on all of our measures, but failed to find any effects. As the variance, medians, and modes of the data were very similar between the male and female sub-groups, we did not investigate further.

6 Discussion

The quantitative post-test results indicate that participants were positive toward the robot as a potential collaborator and colleague, and felt that they had a positive rapport with the robot. This is an important indicator of how people engaged the robot and the task, and these results suggest that we can accept our task as a reasonable analog of how people may exhibit rapport-related behaviors in an actual work task.

The goal of our study was to investigate if and how people may apply rapport-building or rapport-hindering behaviors to robots in a professional work setting. While the result is a clear indication that people do indeed apply such behaviors as indicated in the relevant prior human–human interaction literature surveyed, what is more important is the breakdown of how these behaviors were applied. If people treated the robots merely as another piece of industrial equipment, then we would expect little rapport-building behavior and to see only rapport-hindering behavior; yet, participants overwhelmingly exhibited positive behaviors such as complimenting the robot, thanking the robot, and giving socially-enriched responses to questions. On the other hand, if people were treating the robot as a human co-worker, we would not expect to see as many rapport-hindering behaviors, such as ignoring the robot by being distracted, or explicitly testing the robot’s abilities. Our results indicate that people’s interactions with robots fall somewhere in the middle: there is clearly enough anthropomorphism happening that people exhibit rapport-building behaviors, but many people do not hesitate to treat robots as cold machines.

This result falls in line with the existing body of work in human–robot interaction that explores the extent of anthropomorphism of robots, but additionally adds the rapport framework to unpack and understand the types of behaviors that people exhibit. That is, taking the rapport perspective helps to explain what impact certain behaviors may have on the human–robot team; for example, a person overtly testing the limits of a robot’s abilities can be seen as a rapport-hindering act, and a person thanking the robot should be seen as a rapport-building act. Further, this rapport focus provides a yardstick that can be used to compare robot and behavior alternatives, which can highlight the deeper team-work impacts of particular design decisions, e.g., if one robot elicits a more welcoming proximity than others.

Overall, the qualitative data paints a detailed picture of how rapport may be built and hindered in human–robot collaborative work. This data itself is an important contribution, detailing the kinds of verbal and non-verbal rapport-relevant behaviors that we found participants to exhibit when working with a robot collaborator. This can serve as a starting point for developing robots that are more rapport-savvy (e.g., similar to Gratch et al.’s virtual agents, [13, 22]), as the data indicates what sorts of behaviors (with explicit social mechanics and examples) a robot’s behavior system can attempt to detect, appropriately respond to, and itself use to shape rapport: for example, the particular use of facial expressions, asking questions external to the task, and using polite and courteous behaviors and words.
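
As a very rough illustration of this direction, a rapport-aware behavior system might map detected behavior codes to appropriate responses. The mapping below is a hypothetical sketch reusing the vocabulary from Sect. 3; robustly detecting these behaviors in the first place remains an open problem.

```python
# Hypothetical response policy for detected rapport-relevant behaviors.
RESPONSES = {
    "THANKING": "You're welcome!",
    "COMPLIMENT": "That is kind of you to say.",
    "OFF_TASK_QUESTION": "handoff_to_small_talk",   # delegate, don't ignore
    "AGGRESSION": "I'm sorry. Let's keep working together.",
}

def respond_to(detected_behavior, say=print):
    reply = RESPONSES.get(detected_behavior)
    if reply == "handoff_to_small_talk":
        say("Good question!")   # then enter a small-talk conversation subtree
    elif reply:
        say(reply)              # e.g., route to the robot's text-to-speech
    # unknown or neutral behaviors: continue the task without comment
```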

Our lack of gender findings is perhaps surprising given the literature suggesting potential gender differences. The low variance between our female and male participants makes us less inclined to believe that the experiment was underpowered for a moderate effect. Instead, we believe that the noted tendency for gender differences to diminish with younger people may be manifesting here [30], a result emerging in the human–robot interaction literature as well [38]. Even recent work that finds gender differences in human–robot interaction finds only a small effect [12]. As such, perhaps our results are another example of how established gender differences in workplaces, and particularly toward new technologies, may be diminishing.

7 Ongoing Directions

While our results provide insight into which rapport-relevant behaviors people may use toward robots, continuing work on rapport with robots should investigate further the impact that this would have on production and worker happiness outcomes. While the literature details the importance of rapport in human–human collaborations for work efficiency and worker satisfaction, we do not yet know how this will translate to people working with robots in an actual factory.

In this study, we saw clear examples of people embracing the robot as a colleague and applying rapport-building behaviors, but we also saw people who were blatantly rude or angry with the robot, and some who ignored its social attempts nearly completely. This should be studied more broadly for impacts on the person: for example, how does a person’s social engagement with a robot impact their work satisfaction and team effectiveness? Could problems be mitigated by improved rapport with a robot co-worker, and would this reap similar work benefits as rapport with another person? Additionally, when a person is rude or aggressive to a human co-worker, there are consequences for the person on the receiving end, with implications for team effectiveness; these effects may not exist when the aggression is toward a robot without fragile emotions and feelings. However, productivity may be lost in new ways: for example, some of our participants slowed productivity by testing the robot’s abilities and trying to trick it, with some doing so out of annoyance with the robot. Moving forward, answering these bigger-picture rapport questions will be crucial for better understanding how a social robot will fit into a workplace, and what role rapport will play.

Through this study we discovered limitations in our scenario. In terms of validity, in retrospect we realized that this scenario has a hierarchical slant to it: the person asks the robot’s opinion and the robot gives direction, but not the reverse. This may feel, to some, like a manager-employee relationship rather than a collegial one. We should aim to include more elements of the participant directing the robot, to improve this balance. Other limitations revolve around opportunities for exploration. While the robot gives feedback to the participant, we did not include opportunities for the participant to give criticism or praise to the robot, which may further hinder the collegiality of the work environment. Another limitation is that there is no opportunity for either the robot or the person to give instructions to the other. People and robots who work collaboratively will have to teach and explain things to each other, an aspect of collaborative assembly-line work we need to be exploring. This may be addressed with advanced artificial intelligence techniques to provide personalized interaction between people and robots.

8 Conclusion

Robots are entering industrial workforces as collaborative team members that will work with human co-workers on professional tasks, and are using human-like social techniques to simplify interaction with people. Particularly with these social robots, it will be important for robot designers to proactively address the human tendency to want to create social relations with co-workers, whether they be human or robot. Designing human–robot interaction in ways that meet human needs and tendencies toward rapport-building may be important for developing productive and effective human–robot collaborative teams.

This paper provides several contributions leading to improved rapport between human and robot co-workers. We synthesized a framework of rapport-building and rapport-hindering behaviors from relevant human–human work, which can be used to direct and explore rapport in human–robot teams. Our original human–robot collaborative scenario is useful for ongoing rapport work or for exploring social aspects of human–robot professional teams in general. Finally, we conducted a formal study that provides detailed insight into how rapport-relevant behaviors manifest in human–robot team work. Overall, this work lays the foundation for ongoing rapport work in professional human–robot teams, both in terms of a framework for exploring and designing interactions, and as an initial data set of rapport in human–robot collaboration.