1 Introduction

The capability of recognizing and expressing emotions is an important feature in human–robot communication. Within the field of social robotics, and especially in contexts of healthcare or education, robots should be able to engage with humans on an emotional level by expressing emotions to a certain degree [20].

Facial expressiveness, together with facial and head micro-movements, is considered of great importance in building and maintaining social relationships [6, 7]. According to Cole [4], the face plays a crucial role in the expression of character and identity. Mehrabian [12] showed that facial expression is the major modality in human face-to-face communication (55 % of affective information is transferred this way, 38 % by paralanguage, and only 7 % by spoken language).

Some social robots are only able to show a discrete set of facial expressions, or move abruptly and unnaturally, in contrast to the smooth, elegant motion displayed by humans and animals [19]. In these robots, the movements of the facial degrees of freedom (DOFs) associated with emotions are hard-coded and platform-dependent. A recent trend in social robotics focuses on platform-independent implementations in which a robot’s movements are coded using parameters. Consequently, the movements of one robot can be transferred to another without reprogramming. Van de Perre et al. [13] developed a generic method to generate upper-body emotional expressions for different social robots. A number of social robots use a valence–arousal emotional space based on the circumplex model of affect defined by Russell [15, 17] to select emotions, e.g. [11, 18].

Fig. 1 Demonstration of the NAO eyebrows. These two pictures are screenshots from the demo video available at youtu.be/UEvnBWidlck

1.1 Expressing Emotion with NAO

The NAO robot, from Aldebaran Robotics, is widely used in human–robot interaction (HRI) studies. As NAO has no means of facial expression other than its RGB LED eyes, some researchers have designed gestures to express emotional states: [3] studied the impact of the head position on the identification of displayed emotions; [8] created different designs using body movements, sounds and eye colors; and [9] used the eye LEDs to express six different emotions. In particular, Haring et al. [8] created and evaluated emotion expressions for the NAO robot using a combination of fixed movements and sounds, and fixed eye colors for discrete emotions, i.e. red for anger, dark violet for sadness, bright yellow for joy, and dark green for fear. In another study, Johnson et al. [9] also assigned fixed colors to discrete emotions, i.e. red for anger, yellow for surprise, green for disgust, blue for sadness, orange for happiness, and cyan for fear.

Since the NAO robot does not have sufficient DOFs in its head, it is difficult to use facial expressions to replicate the more sophisticated scenarios in which other robots have been used. This is the case for Probo, which, with 20 DOFs in its head, is capable of expressing several emotions [18]. Another example is the robot FACE, which maps the major facial muscles with 32 DOFs to simulate realistic facial expressions [14]. Today, the lack of emotion expression is one of the main limitations of NAO for social interaction and studies. Indeed, the robot is unable to perform an action (for example, pointing) while holding a pose to express an emotion. Similarly, because the library proposed by Haring et al. [8] plays sounds during the poses, the robot is also unable to express an emotion while interacting verbally. In this paper, we propose a novel method to provide emotional feedback while the robot performs other tasks, such as playing a game with a user, saving precious time and providing more realistic HRI.

In this paper, we aim to enhance the facial expressivity of the NAO robot using a 3D-printed pluggable mechanism that emulates eyebrows [5]. To our knowledge, this is the first time such a device has been proposed to enhance NAO’s emotion expression. This paper also proposes a general, platform-independent method for social robots to express emotions using eyebrows.

The first part of the paper gives an extensive description of the eyebrow setup (see Fig. 1), focusing on the mechanical design, the electronics, and the integration with Aldebaran’s Choregraphe software.

We then present four experiments on the eyebrows. The first experiment uses an open survey to assess emotion recognition with the eyebrows. The second experiment characterizes the relation between the eyebrow angle and the expressed emotion. The third experiment evaluates different eyebrow shapes to find the most suitable one for social interaction. Finally, the fourth experiment shows that, using the eyebrows, the NAO robot is able to express emotions while performing other tasks.

Fig. 2 Actuation of the eyebrows. The arrows show the conversion between rotation and translation. The white structure is 3D printed and can be clipped onto the robot’s face

2 Method

2.1 Design of the Eyebrows

2.1.1 Mechanical Design

The main difficulty in designing such a device for NAO comes from the lack of space available for the actuators, as the device should not modify the robot’s appearance too much. Therefore, we propose a design where two micro servo motors are placed at the back of NAO’s head, supported by a 3D-printed structure in acrylonitrile butadiene styrene (ABS) which is clipped around the head. The torque needed to move the eyebrows is transmitted from the back to the front of the head through a rigid cable. This cable is sufficiently rigid to act as a push–pull mechanism. As illustrated in Fig. 2, the rotation of the servo is converted into a translation of the cable (shown in red). At the front of the head, this translation is converted back into a rotation of a pulley behind the eyebrow. The eyebrow is clipped directly onto this pulley, which makes it easy to change the shape of the eyebrows (as, for example, in Fig. 7a–f). The micro servo motors actuating the eyebrows are controlled by an Arduino-based board. As the two micro servos and the board have a low power consumption, the board can be directly connected to the NAO robot through the USB port at the back of its head (see Fig. 3a) without any additional power supply.

The eyebrows solution is simple and easy to use. Indeed, the device does not require any extra fixation (e.g. screws, glue) and can be clipped directly onto NAO’s head without affecting the robot’s hardware. Consequently, there is no risk of losing the warranty, as no modification of the robot is required. Finally, when not needed, the eyebrows device can be unclipped and removed the same way it was clipped, in a few seconds.

2.1.2 Electronics

The NAO robot has a USB port behind its head, which opens the possibility of connecting the robot to external hardware, in this case the NAO eyebrows system consisting of two servo motors. Figure 3a illustrates how the NAO eyebrows system connects to the NAO robot. An Arduino-based PCB acts as a bridge, translating commands from the robot into PWM values in order to control the two motors. The Arduino-based PCB measures 46 mm \(\times \) 15 mm and carries a USB connector and headers for the two servo motors, as depicted in Fig. 3a, b.

A firmware uploaded to the ATmega328P microcontroller of the PCB manages the data received from the NAO robot over the USB communication. The data are then translated into PWM values to control the positions of the two servo motors. The firmware is programmed using the serial and servo libraries of Arduino. The PCB design and the Arduino code are available on GitHub: github.com/hoanglongcao/ArduiNao-RMM.
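Conceptually, the firmware’s translation step maps a signed eyebrow angle to a servo pulse width. The following minimal sketch illustrates such a mapping (written in Python for readability; the actual firmware is Arduino C/C++ and uses the Servo library, and the center and scaling values below are assumptions, not the values used on the board).

```python
# Illustrative mapping from a signed eyebrow angle to a hobby-servo pulse width.
# CENTER_US and US_PER_DEGREE are hypothetical values for illustration only;
# the actual firmware relies on the Arduino Servo library.
CENTER_US = 1500       # pulse width (microseconds) at the neutral 0-degree position
US_PER_DEGREE = 10     # assumed scaling between degrees and microseconds

def angle_to_pulse_width(angle_deg):
    """Convert an eyebrow angle in [-40, 40] degrees to a PWM pulse width (us)."""
    angle_deg = max(-40, min(40, angle_deg))   # clamp to the mechanical range
    return CENTER_US + int(angle_deg * US_PER_DEGREE)

print(angle_to_pulse_width(-40))   # 1100
print(angle_to_pulse_width(0))     # 1500 (neutral)
print(angle_to_pulse_width(40))    # 1900
```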

Fig. 3 The design of the NAO eyebrows PCB. Its small dimensions allow an embedded plug-and-play solution. Additionally, the servos are directly powered by the USB port, avoiding the need for an external power supply. a An Arduino-based PCB is connected to the USB port of the NAO robot to control two servo motors. Each servo motor can be controlled separately. b The NAO eyebrows PCB is plugged into the USB port behind the head of the robot

Fig. 4 An example of using the tactile head sensors to control two movements of the eyebrows in Choregraphe. On the top right, the “NAO Eyebrows” box allows users to easily control the positions of the NAO eyebrows

Fig. 5 Pictures used in the first experiment. For each picture, participants had to guess the robot’s emotion. We used poses expressing emotions from [8]. For each pose, the robot was shown either with or without eyebrows. a Angry pose. b Angry pose with eyebrows. c Sad pose. d Sad pose with eyebrows

2.1.3 Choregraphe

In order to control the two DOFs of the NAO eyebrows from Choregraphe, we created a box called “NAO Eyebrows”, as shown in Fig. 4. The functionalities of this box are to set up a serial communication between NAO and the Arduino-based PCB, and to send the desired positions of the two eyebrows to the mechanical system. The serial communication is configured with the following parameters: port name = /dev/ttyUSB0, baud rate = 115200, data bits = 8, parity = none, stop bits = 1. Users are allowed to change these parameters; however, they then have to change the corresponding parameters in the Arduino firmware. The box is programmed in Python using the serialtools library, with two integer inputs for the two positions of the NAO eyebrows. Once the input values are received, the data are processed and sent to the PCB as formatted data, i.e. [“positionL”, int] and [“positionR”, int]. Figure 4 presents an example of toggling the eyebrows between two positions (\(-40^\circ \) and \(40^\circ \)). For users using the NAOqi SDKs, e.g. Python, C++, or Java, the same process is required to communicate between NAO and the NAO eyebrows system.
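For illustration, the sketch below shows how the host-side logic of such a box could look. It assumes a pyserial-style API and a simple text framing of the [“positionL”, int] / [“positionR”, int] commands; the helper name set_eyebrows and the exact serialization are hypothetical, and the reference implementation is the one available in the GitHub repository.

```python
# Minimal host-side sketch (hypothetical framing and helper names; see the
# GitHub repository for the reference "NAO Eyebrows" box implementation).
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # same serial parameters as the "NAO Eyebrows" box
BAUD_RATE = 115200

def set_eyebrows(link, left_deg, right_deg):
    """Send the desired eyebrow angles (in degrees) to the Arduino-based PCB."""
    left_deg = max(-40, min(40, int(left_deg)))     # keep within mechanical range
    right_deg = max(-40, min(40, int(right_deg)))
    # One label/value pair per eyebrow, mirroring ["positionL", int] etc.
    link.write("positionL {}\n".format(left_deg).encode("ascii"))
    link.write("positionR {}\n".format(right_deg).encode("ascii"))

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD_RATE, bytesize=8,
                       parity=serial.PARITY_NONE, stopbits=1) as link:
        time.sleep(2.0)               # give the board time to reset after opening
        set_eyebrows(link, -40, -40)  # angry configuration
        time.sleep(1.0)
        set_eyebrows(link, 40, 40)    # sad configuration
```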

An online video (youtu.be/UEvnBWidlck) illustrates how easy it is to use the device. In this video, we show how the eyebrow device can be clipped directly onto NAO’s face without damaging the robot. We also perform a demo in which the eyebrows and the eyes’ color are synchronized with the other actions of the robot using Choregraphe.

3 Experiments

Experiments are organized to validate the pluggable eyebrows device. We hypothesize that: (1) the eyebrows are able to express the emotions of anger and sadness, (2) the relation between the eyebrow angle and the expressed emotion can be approximated by a linear relation, (3) the size and the shape of the eyebrows influence their likeability, (4) using the eyebrows, the NAO robot can express emotions while performing another, non-expressive task. Consequently, four independent experiments are conducted. The first experiment is an exploratory questionnaire with open questions to ensure that the eyebrows convey the intended emotions. The second experiment focuses on the relation between the output angles of the eyebrows and the robot’s emotion. The third one investigates different eyebrow designs and evaluates the most appropriate one. The last experiment demonstrates the usefulness of the eyebrows for expressing emotions while the robot performs other tasks.

3.1 Experiment 1

The first experiment is interested in the emotion that the robot conveys and the improvements that can be achieved by using the eyebrows. Therefore, we compare the conditions of the NAO robot expressing emotions with and without the eyebrows. We hypothesize that the recognition rates of the anger and sadness emotions as defined by [7] will be higher with this eyebrows device than without it.

3.1.1 Procedure

To assess the functionality of this device, an online questionnaire was filled in by 70 volunteer participants (23 were rejected because they did not answer all questions). All participants were 3rd-year students of a bachelor’s degree in psychology and, as such, had no prior experience with robots. This exploratory questionnaire was made using LimeSurvey [10] and contained eight open questions. Participants were randomly split into two groups: one control group (without eyebrows) and one group with eyebrows. For each group, eight pictures of NAO expressing emotions were presented in a random order. For both groups, the pictures contained emotional expressions from the literature: four pictures using body language [8] (two for anger and two for sadness), two pictures using eye colors (one for anger and one for sadness), and two neutral pictures. Figure 5 shows examples of the pictures used in the study. For each picture, the participant had to write in the questionnaire the emotion that was, according to him/her, expressed by the NAO robot. Participants had to guess the robot’s emotions, as no emotions were suggested during the survey. Consequently, participants’ answers were not influenced by pre-defined choices. Finally, participants were encouraged to answer “I do not know” if that was the case. This was done to ensure that the participants were not answering randomly.

Table 1 Results of experiment 2: one-sample t tests with the hypothesis emotion score \(=\) 4, for each angle

3.1.2 Results and Discussion

Participants were randomly separated into two groups, and the rejection of incomplete answers led to N \(=\) 21 for the group without eyebrows and N \(=\) 26 for the group with eyebrows. A content analysis was performed on the participants’ answers. Each answer was classified into one of the following categories: one of the six basic Ekman emotions (anger, disgust, fear, happiness, sadness, and surprise), others for emotions that did not fit the previous categories, or no answer if the participant answered “I do not know”. Responses were evaluated by three independent raters and were considered correct when they had a meaning similar or close to the targeted emotion. With R [16], the inter-rater reliability was found to be almost perfect, Kappa \(=\) 0.84 (p \(=\) 0). The results revealed that the recognition rate is greatly improved by the eyebrows device. In fact, the recognition rate of sadness increased by 32.7 percentage points (from 5.8 % without eyebrows to 38.5 % with eyebrows). More impressively, the recognition rate of anger improved by 80.6 percentage points (from 14.2 % without eyebrows to 94.8 % with eyebrows). Pre-defined answer choices would probably give even higher recognition rates. It should be noted that we did not perform a qualitative analysis on the neutral pictures because the recognition rate was too low (only one participant). In their study, Haring et al. [8] expressed emotions using sequences of body movements and sound. In this experiment, only pictures representing a single body movement were used, which may explain why the pictures without eyebrows had such a low recognition rate. This first experiment shows that it is therefore possible to express emotions using only the eyebrows.

Fig. 6 Emotion expressed by the NAO robot as a function of the eyebrows’ angle (in degrees). Emotion was rated on a Likert scale from 1 (anger) to 7 (sadness). Dots represent the mean Likert value for each angle. Error bars indicate 95 % CI. The line is the linear regression

3.2 Experiment 2

This experiment focuses on the relation between the eyebrows’ angle and the corresponding expressed emotion. Moreover, we are also interested in the eyebrows’ neutral position. Indeed, the eyebrow device allows NAO to express emotion, but it is also very important that it can refrain from expressing emotion, that is, remain neutral. This study aims to confirm the angle at which no emotion is conveyed by the eyebrows. Consequently, we hypothesize that: (1) the relation between the expressed emotion and the eyebrow angle can be approximated by a linear regression; (2) at an angle of \(0^\circ \) the eyebrows do not express any emotion (neutral).

3.2.1 Procedure

For this study, 40 participants (11 women and 29 men) with a mean age of 29.02 (SD \(=\) 12.17) were recruited using the Prolific Academic website [1]. To access the study, participants had to be aged between 18 and 80. The study, made in LimeSurvey [10], consisted of nine questions and lasted on average three minutes in total. Each participant received a compensation of €0.60 after completing the study. For each question, a picture of NAO with eyebrows was presented and participants had to rate NAO’s emotion. NAO’s eye LEDs were turned off to avoid any emotional bias, and the angle of the eyebrows varied from \(-40^\circ \) to \(40^\circ \) in steps of \(10^\circ \). Each picture was presented once, in a randomized order. To evaluate the robot’s emotion, we asked “What is the emotion expressed by the robot?” using a 7-point Likert scale: very angry (1)—angry (2)—a little bit angry (3)—neutral (4)—a little bit sad (5)—sad (6)—very sad (7).

Fig. 7 Experiment 3 is performed on the eyebrow designs. A total of 6 solutions are explored, using 3 different shapes in two sizes: small (a–c) and big (d–f). Results are presented in (g) and (h). a Small bar, b small comma, c small circumflex, d big bar, e big comma, f big circumflex, g interaction plot of the emotion ratings for the 6 eyebrow designs (error bars indicate 95 % CI), h interaction plot of the likeability ratings for the 6 eyebrow designs (error bars indicate 95 % CI)

3.2.2 Results and Discussion

Using R [16], the questionnaire results indicated a strong positive correlation between the eyebrow angle and the perceived emotion, r(358) \(=\) 0.88, p <.0001. An increase in the eyebrow angle was accompanied by an increase in the Likert score for the expressed emotion. Moreover, a one-sample t test performed on the results obtained for \(\alpha = 0^\circ \) (M \(=\) 3.92, SD \(=\) .57) shows that they are not significantly different from the neutral value (4 on the Likert scale), t(39) \(=\) −0.83, p \(=\) .41. As all other angles are significantly different from the neutral value (except \(\alpha = -10^\circ \), which is only marginally significant), we assume that an angle of \(\alpha = 0^\circ \) can be considered close to neutral. Table 1 summarizes the statistical analysis and Fig. 6 shows the Likert results (emotion rating as a function of the eyebrow angle). These results confirm our hypothesis and show that we can assume a linear relation between the eyebrow angle and its affective interpretation. This approximation, however, does not mean that the relation is truly linear, but only that a linear relation is a sufficient way to describe it. Indeed, an attentive eye might see a slight sigmoid shape in Fig. 6. This would suggest a hint of categorical perception, such that a small angle variation could result in a large change in affective interpretation. Further research should be conducted to answer this question.
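To illustrate how this approximately linear relation could be used in practice, the short sketch below maps a target emotion score on the 7-point scale to an eyebrow angle. The endpoint mapping (1 → \(-40^\circ \), 7 → \(+40^\circ \)) is an assumption made for illustration; in practice the slope and intercept should be taken from the regression fitted on the data of Fig. 6.

```python
# Hypothetical inverse mapping: target emotion score (1 = very angry, 4 = neutral,
# 7 = very sad) -> eyebrow angle, assuming a linear relation with 1 -> -40 deg and
# 7 -> +40 deg. Replace the coefficients with the fitted regression if available.
def emotion_to_angle(score):
    score = max(1.0, min(7.0, float(score)))   # clamp to the Likert range
    return (score - 4.0) * (40.0 / 3.0)        # neutral (4) maps to 0 deg

if __name__ == "__main__":
    for s in (1, 2.5, 4, 5.5, 7):
        print("score {:>4} -> angle {:+6.1f} deg".format(s, emotion_to_angle(s)))
```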

3.3 Experiment 3

This experiment focuses on the look of the eyebrows and raises two questions. As the eyebrows are intended for social interaction, we want to know which design is the most appreciated. However, we are also concerned about the bias that such a design could induce. Indeed, although we want to express emotions, we also want to be able to avoid expressing them at certain moments: the neutral position. New shapes could, however, bias the neutrality of the robot, making it always look angry, for example. We hypothesize that some eyebrow designs will be more appreciated than others. In particular, we expect that shape and size will influence the likeability of the robot. Additionally, the shape and size of the eyebrows could also cancel the robot’s neutral expression by inducing a bias toward anger or sadness.

3.3.1 Procedure

We recruited 40 participants (16 women and 24 men) with a mean age of 27.48 (SD \(=\) 8.14) using Prolific Academic [1]. Participants had to be aged between 18 and 80 and could not have participated in a previous study on the NAO eyebrows. The study is similar to the previous experiment: it consisted of 6 questions, each composed of two sub-questions, and lasted on average 4 minutes in total. Each participant received a compensation of €0.80 after finishing the study. During the survey, 6 pictures were presented one by one in a randomized order. The pictures contained the face of NAO with different eyebrow designs, all in the neutral position (see Fig. 7a–f). In addition, NAO’s eyes were turned off to avoid any bias that could be caused by the eye color. For each picture, participants had to rate the likeability of the eyebrows and the emotion of the robot through two separate Likert scales. For the likeability rating [2], we asked: “Do you like the eyebrows?” on a 5-point Likert scale: not at all (1)—not really (2)—undecided (3)—somewhat (4)—very much (5). For the robot emotion, the question was similar to the one of experiment 2 (Sect. 3.2).

3.3.2 Results and Discussion

We first look at the factors (shape and size) that influence the eyebrows’ neutrality and likeability. Secondly, we select the most appropriate eyebrows, considering first the neutrality and then the likeability.

A repeated-measures ANOVA with shape (bar, comma, circumflex) and size (small, big) as within-subjects factors was conducted on the rated emotion. There was a significant main effect of shape, F(2, 228) \(=\) 4.10, p \(=\) .017. In general, results were higher for the bar shape (M \(=\) 4.156, SD \(=\) 0.96) than for the comma shape (M \(=\) 3.42, SD \(=\) 1.07) and the circumflex shape (M \(=\) 3.06, SD \(=\) 1.12). However, there was no significant effect of size, F(1, 228) \(=\) 1.44, p \(=\) .230. In addition, there was a marginally significant interaction between size and shape, F(2, 228) \(=\) 2.70, p \(=\) .069. Similarly, a repeated-measures ANOVA on likeability reported a significant effect of shape, F(2, 228) \(=\) 3.19, p \(=\) .042. In general, results seemed to be higher for the comma shape (M \(=\) 2.96, SD \(=\) 1.07) than for the circumflex shape (M \(=\) 2.83, SD \(=\) 0.98) and the bar shape (M \(=\) 2.69, SD \(=\) 0.98). There was no effect of size, F(1, 228) \(=\) 1.22, p \(=\) .270, and a marginally significant interaction between the two factors, F(2, 228) \(=\) 2.79, p \(=\) .063.

Interaction plots are presented in Fig. 7g for the rated emotion and in Fig. 7h for the likeability. These results suggest that, when designing eyebrows, shape is an important concern. Surprisingly, and in opposition to our hypothesis, size does not seem to have a direct influence. However, size might interact with shape, suggesting that some shapes look better small, while others look better big. Because the pictures used in this experiment focused on the eyebrows, we believe that these results could be generalized to other social robots.

In the second part of these results, we select the most appropriate shape to be used as a reference in our next studies. We first select the shapes that are not biased in the neutral position. One-sample t tests performed on the expressed emotion show that three of the shapes are significantly different from the neutral value (4): the small comma, t(39) \(= -6.862\), p <.0001, the small circumflex, t(39) \(= -3.1631\), p \(=\) .003, and the big circumflex, t(39) \(= -\)11.00, p <.0001. These shapes are therefore excluded. We then compare the likeability of the three remaining shapes (small bar, big bar and big comma). As t tests report no significant differences, p >.1, this suggests that these three remaining shapes are equally suitable. As a personal choice, we propose to use the big comma shape as the reference design. A possible limitation of this experiment comes from the definition of the neutral position (horizontal). While the horizontal position is quite straightforward for the bar and circumflex shapes, obtaining a horizontal position for the comma shape is not as straightforward, and this could therefore lead to different results.

3.4 Experiment 4

In this experiment, we want to show that it is possible for NAO to express an emotion while performing other tasks. Consequently, we hypothesize that participants watching the NAO robot performing a neutral action, such as pointing or waving, will be able to decode the robot’s emotions through its eyebrows. In addition, we hypothesize that the type of action performed should not influence the emotion rating.

3.4.1 Procedure

For this study, 40 participants (19 women and 21 men) with a mean age of 28.7 (SD \(=\) 9.04) were recruited using Prolific Academic [1]. To access the study, participants had to be aged between 18 and 80 years and could not have participated in a previous study on the NAO eyebrows. The study was made using LimeSurvey [10] and consisted of 6 questions presented in a randomized order. In total, the survey lasted on average 3 minutes and its completion granted €0.60 to each participant. For each question, participants had to watch a 7-second video of the NAO robot performing an action and then rate the emotion expressed by the robot using the Likert scale presented in experiment 2 (Sect. 3.2). In the videos, NAO performed either a hello or a pointing action. In the hello action, NAO waved its hand while saying “hello, my name is NAO”. In the pointing action, NAO pointed to its right side with its arm and head, while saying “hey look!”. For each action, the angle of the eyebrows corresponded to either the anger, neutral, or sadness emotion. The eyes of the robot were colored white in all conditions to avoid interference with the eyebrows.

3.4.2 Results and Discussion

A within-subjects ANOVA was conducted on the perceived robot emotion to compare the effects of the factors action and eyebrow angle. There was a significant main effect of the eyebrow angle, F(2, 228) \(=\) 27.62, p < .0001. In general, angry eyebrows (M \(=\) 2.78, SD \(=\) 0.88) were rated lower than neutral eyebrows (M \(=\) 3.92, SD \(=\) 0.83) and sad eyebrows (M \(=\) 5.10, SD \(=\) 0.96) on the 7-point Likert scale. These results are confirmed by t tests showing that: (1) the angry condition is significantly lower than the neutral condition, t(158) \(=\) −8.36, p <.0001; (2) the sad condition is significantly higher than the neutral condition, t(158) \(=\) 8.23, p <.0001; (3) neutrality is preserved, as the neutral condition is not significantly different from the neutral value (4 on the Likert scale), t(79) \(=\) 0.8, p \(=\) .426. In addition, there was no significant effect of the action, F(1, 228) \(=\) 0.530, p \(=\) .467, and no interaction between the two factors, F(2, 228) \(=\) 0.68, p \(=\) .507. The interaction graph of the study is presented in Fig. 8. This experiment supports our claim and confirms our hypothesis. Indeed, participants were able to correctly recognize the robot’s emotion while it was performing other tasks.

Fig. 8 Experiment 4: interaction plot of the rated emotions for the videos. Error bars indicate 95 % CI. We consider two factors: action performed and emotion of the face

4 Conclusion and Future Work

In this work we have proposed a unique and novel eyebrows device for the NAO robot. The eyebrows device is easy to use and can be directly controlled through the Choregraphe programming environment. In addition, four experiments explored different questions about the eyebrows. First, we showed the interest of the device, as participants’ emotion recognition greatly increased with its addition. Second, we confirmed the linear relation between the angle and the expressed emotion. Third, we investigated design criteria of eyebrows for NAO, and more generally for other robots. Fourth, we showed that with this device the NAO robot is now able to express anger or sadness while performing other tasks.

In the future, we would like to see if it is possible to express other emotions with these eyebrows. Indeed, in this paper, the two eyebrow angles were always equal. However, considering that the left and right eyebrows can be controlled independently, we think that it could be possible to create other emotions, like disgust, where the face becomes asymmetric. Additionally, we would like to explore how small variations of the eyebrows around the neutral position could increase NAO’s impression of aliveness.