
1 Introduction

Over the past few years, artificial intelligence [1] has been advancing by leaps and bounds. To address the shortage of skilled workers [2] caused by demographic change, research on social robots [3,4,5] continues to emerge. One of the main capabilities of a social robot is to establish natural interactions with humans. Given the growing overlap between human and robot domains, how to obtain and use the relationship between the two so that social robots succeed in human-robot interaction (HRI) [6, 7] is becoming increasingly important. In recent years, driven by the growing interest in HRI, robots have appeared not only in industry but also in other fields such as schools [8, 9], homes [10], hospitals [11], and rehabilitation centers [12]. However, these works consider only a single scenario, so the behavioral expression of the robot across different scenarios appears repetitive and monotonous.

Emotional signals have been shown to be important factors in human-robot relationships. As robots' imitation abilities improve, their expressiveness becomes increasingly beneficial for HRI [13,14,15]. Whether and how expressive whole-body movements of real humanoid robots influence cooperative decision-making was investigated in [16]. In [17], facial emotion expression during human-robot interaction was discussed. It was verified in [18] that the head position of a humanoid robot has a certain influence on imitating human emotions. The paper [19] discussed emotional expression and its application in social robots. However, none of these works investigated the differences in robot behavioral expression across multiple aspects, including motion, voice, and facial expression.

This paper designs an emotion block diagram for robot emotion-based behavioral expression. In the diagram, based on social rules, we consider robot personality, internal impacts, and external impacts as the factors that affect robot emotions. According to the resulting emotion value, the robot then chooses the appropriate emotion quantization level and shows the corresponding performance in movement, voice, and facial expression. The main contributions of this paper are as follows.

  1. Different robot characters in different scenarios are designed to achieve differentiated behavioral expression: the robot behaves as a lively partner in the family scene and as a serious guard in the school scene.

  2. Three factors are presented to parameterize the robot's emotional value. Robot personality is preset according to the task. External impacts are introduced to change robot emotions according to user needs. Internal impacts are proposed to keep robot emotions stable around the corresponding character.

  3. Emotion expression is manifested in three aspects: movement, voice, and facial expression. Through the emotion quantization level, the robot can better display emotions and make itself more anthropomorphic when interacting with people.

2 Method

Fig. 1. Emotion block diagram

Figure 1 is the block diagram describing the process of emotion control and expression of the robot. To accord with common social norms, we make the robot behave as a serious guard in school and as a lively partner in the family. The robot's emotions become more flexible by using self-feedback to regulate them within a range. User evaluation and user behavior are used to change the robot's emotions appropriately to cater to people's needs.

Table 1. Processing unit parameters

The diagram contains seven modules: the emotion model, the personality related to the environment scene, body self-feedback, the external impact, the emotion level, the processing unit, and the execution unit. The value of the robot personality p is first determined by the task environment. The robot's emotions are influenced by the internal self-feedback parameter, which regulates the robot itself much like a self-consolation process, and by the external parameter, which imitates social behaviors such as user evaluation and user behavior. Through the adder, we then obtain the value of the emotion model E. We divide emotions into exciting, lively, neutral, serious, and depressed, so the emotion level L selects one of the five emotions according to E. The behavior processing unit uses the parameters saved in advance, as listed in Table 1, to compute and pass the quantized value to the execution unit. According to the current emotion, the execution unit receives and executes the corresponding orders.

2.1 Social Emotion Model

Based on the impacts of personality, user evaluation, user motion, and self-feedback, the social emotion model E of the robot is proposed as follows:

$$\begin{aligned} E= \begin{cases} p+w_a a+w_e e+b, & 0< E< 1\\ 0, & E \le 0\\ 1, & E \ge 1 \end{cases} \end{aligned}$$
(1)

where \(E\in [0,1]\) denotes the emotional value and is positively correlated with the liveliness of the robot; p denotes the initial personality variable in the specific environment; a is the user behavior variable, which is related to the distance d between the person and the robot; \(w_a\) is the weight parameter of the behavior variable a; e is the user evaluation variable, which is related to the satisfaction level of the user and acts as the external influence on robot emotion; \(w_e\) is the weight parameter of the evaluation variable e; and b is the self-feedback regulation variable, which acts as the internal influence on robot emotion.
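
For concreteness, the following is a minimal Python sketch of Eq. (1): the weighted contributions are summed and the result is clamped to [0, 1]. The function name and signature are illustrative, and the default weights \(w_a = w_e = 0.1\) are the values chosen in Sects. 2.3 and 2.4.

```python
def emotion_value(p, a, e, b, w_a=0.1, w_e=0.1):
    """Social emotion model of Eq. (1): weighted sum clamped to [0, 1]."""
    raw = p + w_a * a + w_e * e + b
    return min(max(raw, 0.0), 1.0)
```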

2.2 Robot Personality

The personality of the robot should differ across scenes. For instance, in the family scene, the personality should be lively and chatty, so the personality variable should take a high value to give the robot a high emotional value. In the school scene, the personality should be serious and taciturn, so the personality variable should take a low value to give the robot a low emotional value. Consequently, the personality variable p is proposed to make the robot play different roles and meet people's needs, as follows:

$$\begin{aligned} p= \begin{cases} 0.7, & \text{family scene}\\ 0.3, & \text{school scene}\\ E_p, & \text{otherwise} \end{cases} \end{aligned}$$
(2)

where \( p\in [0,1]\); \(p = 0.7\) denotes a lively character in the family scene, and \(p = 0.3\) denotes a serious character in the school scene. \(E_p\) is the last recorded value of E. Personality is thus the main factor in the robot's emotion, so the robot can better display differentiated personality expression in different scenes.
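
Under one plausible reading of Eq. (2), the personality is preset when a scene is entered and thereafter carries over the last recorded emotion value \(E_p\); the sketch below follows that assumption, and the function name and scene labels are illustrative.

```python
def personality(scene, last_emotion=None):
    """Eq. (2): preset p on scene entry; otherwise carry over E_p (assumed reading)."""
    if last_emotion is not None:
        return last_emotion                   # E_p, the last recorded value of E
    return 0.7 if scene == "family" else 0.3  # lively partner vs. serious guard
```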

2.3 User Behaviour

Taking general social rules into account, emotion during a conversation should change with the distance between the two parties. A closer distance usually indicates a closer relationship, so a livelier emotion should be displayed. We therefore present a functional relation between the variable a and the distance d between the robot and the person to describe this social relationship. The user behavior variable a is defined as follows:

$$\begin{aligned} a=5-5d \end{aligned}$$
(3)

where \(a \in [-1,1]\), and we choose \(d\in [0.8,1.2]\) (in meters) as the common social distance. The distance d is restricted to the range of 0.8 m to 1.2 m so that the robot maintains a safe and normal social distance from humans and better HRI can be achieved safely.

Fig. 2. Behaviour variable

Fig. 3. Evaluation variable

Figure 2 shows that the robot's emotion rises as the social distance decreases; conversely, the robot gradually becomes more serious as the distance increases. We choose d \(=\) 1 m as the median social distance. Since a is a secondary factor, its variation should not dominate the robot's emotions, while its role should not be ignored either. We therefore set the weight parameter of the behavior variable to \(w_a = 0.1\), which constrains the weighted term \(w_a a\) to the range \([-0.1,0.1]\).
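
A minimal sketch of Eq. (3), assuming the measured distance is first clipped to the social range [0.8, 1.2] m; the function name is illustrative.

```python
def behavior_term(d, w_a=0.1):
    """Eq. (3): map interaction distance d (meters) to the weighted behavior term."""
    d = min(max(d, 0.8), 1.2)   # keep d inside the assumed social-distance range
    a = 5.0 - 5.0 * d           # a in [-1, 1]: closer distance -> livelier emotion
    return w_a * a              # weighted contribution in [-0.1, 0.1]
```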

2.4 User Evaluation

For user evaluation, we consider that different users evaluate in different ways: some people like to express themselves directly, while others prefer to suggest indirectly. Taking both into account, we divide e into two parts, a direct evaluation \(e_1\) and an indirect evaluation \(e_2\). The user evaluation e is presented as follows:

$$\begin{aligned} e = e_1 + e_2 \end{aligned}$$
(4)

where \(e \in [-1,2]\). According to the verbal evaluation of the user, the robot should judge the user's satisfaction and adjust promptly. By dividing user reviews into five kinds, the direct evaluation variable \(e_1\) is defined as:

$$\begin{aligned} e_1 = \begin{cases} 1, & \text{very satisfied}\\ 0.5, & \text{satisfied}\\ 0, & \text{generally satisfied}\\ -0.5, & \text{dissatisfied}\\ -1, & \text{very dissatisfied} \end{cases} \end{aligned}$$
(5)

where \(e_1 \in \left\{ -1,-0.5,0,0.5,1\right\} \); because of its immediacy, its value is discrete and can change at any time. The degree of satisfaction, covering the five kinds very satisfied, satisfied, generally satisfied, dissatisfied, and very dissatisfied, is judged by the robot itself according to the user's speech. As for the indirect evaluation, we describe the indirect evaluation variable \(e_2\) through its relationship with the interaction time t. The equation is defined as follows:

$$\begin{aligned} e_2 = \begin{cases} \frac{t}{300}, & 0 \le t \le 300\\ 1, & t \ge 300 \end{cases} \end{aligned}$$
(6)

where \(e_2 \in [0,1]\), and we choose t \(=\) 300 s as the HRI time threshold. The relationship between \(e_2\) and t is shown in Fig. 3: the indirect evaluation value gradually increases with the interaction time. We set the weight parameter of the evaluation variable to \(w_e = 0.1\), which constrains the weighted term \(w_e e\) to the range \([-0.1,0.2]\).
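
A minimal sketch of Eqs. (4)-(6), assuming the robot has already classified the user's speech into one of the five satisfaction levels; the dictionary and function names are illustrative.

```python
# Direct evaluation e1 (Eq. (5)), keyed by the satisfaction level the robot infers
# from the user's speech.
E1_LEVELS = {
    "very satisfied": 1.0,
    "satisfied": 0.5,
    "generally satisfied": 0.0,
    "dissatisfied": -0.5,
    "very dissatisfied": -1.0,
}

def evaluation_term(satisfaction, t, w_e=0.1):
    """Eqs. (4)-(6): combine direct and indirect user evaluation."""
    e1 = E1_LEVELS[satisfaction]
    e2 = min(t / 300.0, 1.0)    # indirect evaluation saturates after 300 s (Eq. (6))
    e = e1 + e2                 # e in [-1, 2] (Eq. (4))
    return w_e * e              # weighted contribution in [-0.1, 0.2]
```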

2.5 Self Feedback Regulation

To prevent the robot's emotions from being dominated by user behavior and losing their own character, we introduce a self-regulating mechanism to compensate for this deficiency. Different emotions need to be maintained in different scenarios, so the self-regulation mechanism must also act differently across scenarios. For example, in the family scene the robot needs to maintain a lively personality, which requires the mechanism to quickly resist depression and slowly suppress excitement. In the school scene the robot needs to maintain a serious personality, which requires the mechanism to slowly resist depression and quickly suppress excitement. The self-feedback regulation variable b is therefore divided into two cases according to the environment. In the family scene, the robot's emotion keeps changing under various environmental impacts. To keep the robot's personality stable, the self-regulatory mechanism needs to maintain the corresponding character in each scene. The self-feedback regulation variable b is defined as follows:

$$\begin{aligned} b = \begin{cases} -7.5E^2+12E-4.8, & 0.8<E \le 1\\ 0, & 0.7\le E \le 0.8\\ -\frac{10}{7}E^2+0.7, & 0 \le E<0.7 \end{cases} \quad \text{(family scene)} \end{aligned}$$
(7)

where \(b \in [-0.3,0.7]\).

The self-feedback regulation b for the family scene is illustrated in Fig. 4. When the emotion parameter drops below 0.7, b increases quickly to keep the robot lively; within the acceptable range, b does not act; and when the emotion parameter exceeds 0.8, b decreases slowly so that the robot does not get too excited. The overall idea is that the robot keeps a lively character in the family scene even when its emotions fluctuate. In the school scene, taking environmental needs into account, the self-feedback regulation b should keep the robot serious to create a formal atmosphere. The equation is defined as follows:

$$\begin{aligned} b = \begin{cases} \frac{10}{7}E^2-\frac{20}{7}E+\frac{51}{70}, & 0.3<E \le 1\\ 0, & 0.2\le E\le 0.3\\ 7.5E^2-3E+0.3, & 0\le E<0.2 \end{cases} \quad \text{(school scene)} \end{aligned}$$
(8)

where \(b \in [-0.7,0.3]\).
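
A minimal sketch of the piecewise regulation in Eqs. (7) and (8); the function name and the scene argument are illustrative.

```python
def self_feedback(E, scene):
    """Self-feedback regulation b of Eqs. (7)-(8)."""
    if scene == "family":          # keep the robot around the lively level
        if E > 0.8:
            return -7.5 * E**2 + 12.0 * E - 4.8
        if E >= 0.7:
            return 0.0             # dead zone: no regulation needed
        return -(10.0 / 7.0) * E**2 + 0.7
    else:                          # school scene: keep the robot around the serious level
        if E > 0.3:
            return (10.0 / 7.0) * E**2 - (20.0 / 7.0) * E + 51.0 / 70.0
        if E >= 0.2:
            return 0.0             # dead zone: no regulation needed
        return 7.5 * E**2 - 3.0 * E + 0.3
```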

Fig. 4. Self-feedback regulation in the family scene

Fig. 5. Self-feedback regulation in the school scene

Figure 5 shows that in the school scene, when the emotion parameter drops below 0.2, the self-feedback regulation b increases slowly so that the robot does not become too serious; within the acceptable range, b does not act; and when the emotion parameter exceeds 0.3, b decreases quickly to keep the robot somewhat serious. The overall idea is that the robot stays serious in the school scene even when its emotions fluctuate. Comparing Figs. 4 and 5, we can see that the robot's emotion is maintained at the desired level under the influence of self-feedback regulation. In the family scene, the robot is more active in facial expressions, voice, and movement, with higher movement speed, richer facial expressions, and fuller language. Conversely, in the school scene, the robot is more solemn in facial expressions, voice, and movement, with normal movement speed, simple facial expressions, and concise language.

2.6 Processing and Execution

Table 2. Emotion level

According to the value of the emotion variable E and Table 2, the robot chooses the appropriate behavior level L. The behavior processing unit selects the parameters saved in advance and quantizes them based on the corresponding emotion level. The execution unit then controls the robot's actuators to perform the corresponding actions according to the received parameters.
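
As an illustration of how L could be selected from E, the sketch below assumes five equal-width bins over [0, 1]; the actual thresholds are given in Table 2 and may differ, so the values here are purely hypothetical.

```python
def emotion_level(E):
    """Map the emotion value E to one of the five emotion levels (thresholds assumed)."""
    if E >= 0.8:
        return "exciting"
    if E >= 0.6:
        return "lively"
    if E >= 0.4:
        return "neutral"
    if E >= 0.2:
        return "serious"
    return "depressed"
```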

3 Results and Discussion

In the family scene, examples of the robot's emotional expression are shown in Figs. 6 and 7. In the school scene, examples are shown in Figs. 8 and 9. Comparing Figs. 6 and 8, when the robot is idle, it turns its body with a lively personality in the family scene and turns its head with a serious personality in the school scene. According to Figs. 7 and 9, the robot's expressions in the family scene are livelier and more vivid than those in the school scene. These results show that the robot performs differently in different scenarios.

Fig. 6. Movement in the family scene

Fig. 7. Facial expressions in the family scene

Fig. 8. Movement in the school scene

Fig. 9. Facial expressions in the school scene

Fig. 10. Emotion value changes with parameter changes

The relevance interpretation for robotic expression is listed in Table 3. The emotion parameter values in the family scene are higher than those in the school scene, which means that the robot in the school scene is more serious. The emotion value changes with interaction distance are shown in Fig. 10(a) for the family scene and Fig. 10(b) for the school scene. It can be seen that the self-feedback b, the user evaluation e, and the user motion a all influence the emotional value; the emotion level L is illustrated by the orange lines. In the family scene, as Fig. 10(a) shows, the robot starts with a lively personality based on the current environment. For the user evaluation e, the emotional value gradually increases while we talk to the robot; when we make a subjective assessment, good evaluations raise the robot's emotion and bad evaluations lower it. For the user motion a, when the interaction distance approaches 0.8 m, the robot's emotion rises and it behaves excitedly in response to the person's interest; when the distance approaches 1.2 m, the robot infers that the person is somewhat uninterested or dissatisfied and becomes calm. With the self-feedback b, the robot avoids seriousness when its emotion drops, so its emotion is maintained around the lively level. In the school scene, as Fig. 10(b) shows, the robot starts with a serious personality based on the current environment. The user evaluation e and the user motion a influence the robot's emotion as described for the family scene, and with the self-feedback b the robot avoids liveliness when its emotion rises, so its emotion is maintained around the serious level.

To verify the actual performance of the robot emotion model, we used a questionnaire asking 50 people to evaluate the robot's performance after watching our demo in the two scenarios. As shown in Table 4, six questions were chosen to assess the rationality, diversity, and acceptability of people's perceptions of the robot's personalities. The first question tests whether the robot's personality matches the current environment. The second tests whether people are willing to interact with the robot. The third checks whether the robot's behavioral expression meets expectations. The fourth tests people's satisfaction with the robot's personality differences across scenarios. The fifth checks how people feel about the robot's emotions. The sixth tests whether the robot exhibits diverse personalities. The results show that most people accept the robot's emotional expression.

Table 3. Relevance interpretation for robotic expression
Table 4. Descriptive statistics for each item of the questionnaire

4 Conclusion

In this paper, a robot emotion diagram was presented to describe the process of robot emotion change and expression in different scenarios. We chose two scenes: family and school. By considering three factors, personality, internal impacts, and external impacts, the robot's emotions could change accordingly to reflect different roles. Motion, voice, and facial expression were then used to make the robot demonstrate anthropomorphic behavioral expression. The resulting emotion-based performances of the robot were shown in the results, and questionnaires were used to demonstrate user satisfaction.