1 Introduction

The ability to autonomously, safely, and reliably navigate through social environments such as homes, offices, museums, airports, shopping malls, and urban spaces at a useful pace is crucial for mobile service robots. If we wish to deploy mobile service robots in such social environments, the first and most important requirement is that the robot must avoid not only regular obstacles but also humans while navigating safely and socially towards a given goal. To achieve this, several human-aware robot navigation systems have been proposed in recent years [8, 18].

Several research directions have been considered for developing human-aware robot navigation systems, which are biased either towards path planning, such as the social costmap-based technique [23] and randomized kinodynamic motion planning [17], or towards motion control, such as the social force model [3] and the velocity obstacles-based technique [2]. The technique proposed in this paper is derived from the social force model and is biased towards motion control, because we wish to design a control system that enables a mobile robot to react quickly to social interactive situations.

Although the existing human-aware robot navigation systems based on the social force model have been developed and verified in real-world environments and have achieved considerable success, they take only human state information such as position, orientation, and velocity into account. Ferrer et al. [3] presented a robot companion that uses human state information and the social force model (SFM) [6] for human-aware mobile robot navigation in an urban environment; in that work, an interactive learning technique is also used to adjust the parameters of the model so that the system works correctly and smoothly. Ratsamee et al. [16] proposed a human–robot collision avoidance technique based on an extended SFM, modified from the conventional SFM [6] by adding human factors including body pose, face orientation, and a personal space definition [4]. Shiomi et al. [20] presented a socially acceptable collision avoidance technique for a mobile robot navigating among pedestrians, in which the modified SFM of Zanlungo et al. [26] was used to model pedestrian motion and develop human-like collision avoidance. Although the robot exhibits safe collision avoidance behaviours towards humans, the technique has only been verified in single-human situations.

The aforementioned human-aware navigation techniques have been implemented and verified in simulation and real-world environments, proving that they are capable of generating socially acceptable behaviours for a mobile robot. However, these techniques suffer from a major drawback in social interactive environments: they only address single-human situations rather than social interactive groups, which are more common in human daily-life activities [10]. To overcome this shortcoming of existing human-aware robot navigation systems, we incorporate the socio-spatio-temporal characteristics of the humans into the SFM to generate a social reactive control that is capable of driving a mobile service robot to navigate socially and safely in human interactive environments.

The remainder of the paper is organized as follows. Section 2 presents the proposed social reactive control algorithm for mobile service robots in social environments. Section 3 presents the experimental results. We discuss and conclude this paper in Section 4.

Fig. 1 An extended navigation scheme for mobile service robots in social environments

2 Social reactive control

To navigate safely and socially in a social environment, the robot must be aware of the social situations of individuals and human groups through the extraction of human socio-spatio-temporal characteristics, and then incorporate that information into the navigation system. In this study, we propose a socially aware robot navigation framework built on top of the conventional robot navigation scheme [21], as shown in Fig. 1, to enable the mobile robot to deal with different social situations of individuals and human groups when navigating in social environments. The proposed framework is composed of four functional blocks: (1) human detection and tracking detects and tracks the humans in the vicinity of the robot; (2) human state extraction extracts human position, orientation, velocity, and posture; (3) social interaction detection estimates groups of interactive humans and human–object interactions; and (4) the social reactive control (SRC) incorporates the information about individuals and social interactions into a conventional social force model. The SRC can be integrated with a conventional path planning technique to form a socially aware motion planning system that enables the mobile robot to navigate towards a given goal while safely, socially, and reactively avoiding individuals, human groups, other robots, and environmental objects in accordance with social norms (i.e. with polite and respectful behaviours, as humans would exhibit).

2.1 Human detection and tracking, and human states extraction

In this study, we used the human detection and tracking system developed in [24]. The basic idea is to fuse the human leg information using the laser rangefinder data proposed by Arras et al. [1] and human body information using the RGB-D data presented by Munaro et al. [11]. A detailed description of the sensor fusion techniques can be found in [24]. Furthermore, to extract the 3D pose and the orientation of a human, we use the work developed in [14].

We assume that there are N people appearing in the vicinity of the robot, \(P =\lbrace {p_1}, {p_2},\ldots , {p_N} \rbrace \), where \(p_i\) is the \(i^{th}\) person. The human states of person \(p_i\) are represented as \(p_i = (x_{i}^p, y_{i}^p, {\theta }_{i}^{p}, v_{i}^{p})\), where \((x_{i}^p, y_{i}^p)\) is the position, \({\theta }_{i}^{p}\) is the orientation, and \(v_{i}^{p}\) is the velocity. This information is then used to estimate the social interaction of human groups and as an input of individual states for the social reactive control.
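For concreteness, the individual state \(p_i\) can be stored in a small record such as the following C++ sketch; the field and method names are our own illustrative choices and do not reflect the interface of the tracking system in [24].

```cpp
#include <cmath>

// Minimal sketch of the per-person state p_i = (x_i^p, y_i^p, theta_i^p, v_i^p)
// used as input to the social reactive control (Sect. 2.3).
struct HumanState {
  double x, y;    // position (x_i^p, y_i^p) [m]
  double theta;   // orientation theta_i^p [rad]
  double v;       // speed v_i^p [m/s]

  // Velocity components implied by the (orientation, speed) pair.
  double vx() const { return v * std::cos(theta); }
  double vy() const { return v * std::sin(theta); }
};
```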

2.2 Social interaction estimation

Findings in Moussaid et al. [10] point out that about \(70\%\) of pedestrians tend to form interactive groups in social environments; thus, detecting interactive human groups plays an essential role in a socially aware robot navigation system in human interactive environments. A number of methods for detecting social groups have recently been proposed [19]. In this paper, we utilize the social group detection algorithm presented by Truong et al. [23]. Let \(G=\lbrace {g_1}, {g_2},\ldots , {g_K} \rbrace \) be the set of detected human groups in the vicinity of the robot; each social group \(g_k\) has a set of parameters \(g_k = (x_{k}^{g}, y_{k}^{g}, {\theta }_{k}^{g}, v_{k}^{g}, r_{k}^{g})\), where \((x_{k}^{g}, y_{k}^{g})\) is the centre point, \({\theta }_{k}^{g}\) is the orientation, \(v_{k}^{g}\) is the velocity, and \(r_{k}^{g}\) is the radius of the human group interaction, as shown in Fig. 2a.

In addition, we pay particular attention to objects that people are interacting with, such as televisions, refrigerators, phones, screens, and paintings. Depending on the social interaction context between humans and objects, the robot needs to estimate the human–object interaction, because this information is the key to defining an interaction space between humans and objects of interest. To detect a human–object interaction, we reuse the algorithm presented by Truong et al. [23]. The set of parameters extracted from a human–object interaction space is \(o_m = (x_{m}^{o}, y_{m}^{o},{\theta }_{m}^{o}, v_{m}^{o},r_{m}^{o})\), where \(o_m\) is the \(m^{th}\) human–object interaction space in the vicinity of the robot, \((x_{m}^{o}, y_{m}^{o})\) is the centre point, \({\theta }_{m}^{o}=0\) is the orientation, \(v_{m}^{o}=0\) is the velocity, and \(r_{m}^{o}\) is the radius of the human–object interaction, as shown in Fig. 2e. Figure 2 shows example results of the social interaction detection algorithm for groups of two and three standing people, groups of two and three moving people, a person interacting with an object, and a group of two people interacting with an object, respectively.
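In the same illustrative spirit as the `HumanState` sketch above, the detected interactions can be represented by records mirroring the parameter tuples \(g_k\) and \(o_m\); the names are ours and do not correspond to the output format of the detection algorithm in [23].

```cpp
// Sketch of the interaction descriptors produced by the social interaction
// estimation. Each record mirrors the tuples g_k and o_m defined in Sect. 2.2.
struct GroupInteraction {
  double x, y;    // centre point (x_k^g, y_k^g) [m]
  double theta;   // group orientation theta_k^g [rad]
  double v;       // group speed v_k^g [m/s]
  double r;       // radius r_k^g of the group interaction space [m]
};

struct HumanObjectInteraction {
  double x, y;    // centre point (x_m^o, y_m^o) [m]
  double theta;   // orientation theta_m^o, fixed to 0
  double v;       // speed v_m^o, fixed to 0 (the interaction space is static)
  double r;       // radius r_m^o of the human-object interaction space [m]
};
```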

Fig. 2 Examples of the human group estimation algorithm: a a group of two standing people, b a group of three standing people, c a group of two moving people, d a group of three moving people, e a person interacting with an object, and f two people interacting with an object

2.3 Social reactive control

The conventional social force model (SFM) [6] uses various attractive and repulsive forces to model both agent–agent and agent–object social force fields. These forces are based on both physical and psychological factors reflecting how agents avoid and approach each other, and how agents interact with their surrounding environment. According to the definition of the SFM for pedestrians presented in [6] and for mobile robots presented in [3], the attractive force to the goal \(\mathbf F _r^\mathrm{goal}\), the repulsive force from humans \(\mathbf F _{r}^h\), and the repulsive force from obstacles \(\mathbf F _{r}^o\) influencing the robot motion are computed as follows:

$$\begin{aligned} \mathbf F _r^{\mathrm{goal}}= & {} {K}_{r}^v ({ \mathbf v _r^0 - \mathbf v _r(t)}) \end{aligned}$$
(1)
$$\begin{aligned} \mathbf F _{r}^h= & {} \sum _{j \ne r} A_r^h \mathrm{e}^{\frac{(r_{r,j}-d_{r,j})}{B_r^h}} \mathbf n _{r,j} \psi _{r,j} \end{aligned}$$
(2)
$$\begin{aligned} \psi _{r,j}= & {} \lambda _r + (1-\lambda _r) \frac{1+\mathrm{cos}(\gamma _{r,j})}{2} \end{aligned}$$
(3)
$$\begin{aligned} \mathbf F _{r}^o= & {} \sum _{o \in O} A_r^o \mathrm{e}^{\frac{(r_{r,o}-d_{r,o})}{B_r^o}} \mathbf n _{r,o} \psi _{r,o} \end{aligned}$$
(4)
$$\begin{aligned} \psi _{r,o}= & {} \lambda _r + (1-\lambda _r) \frac{1+\mathrm{cos}(\gamma _{r,o})}{2} \end{aligned}$$
(5)

where \(({K_r^v})^{-1}\) is the relaxation time; \(\mathbf v ^0_r\) and \(\mathbf v _r(t)\) are the robot’s desired velocity and actual velocity, respectively; \(A_r^h\) and \(B_r^h\) are the strength and the range of the repulsive force from humans, respectively; \(r_{r,j} = r_r+r_j\) is the sum of the radii of the robot and the human j; \(d_{r,j}\) is the Euclidean distance between the robot and the human; and \(\mathbf n _{r,j}\) is the unit vector pointing from the human j to the robot, i.e. the direction in which the robot is pushed away from the human. The influence of the repulsive force is limited to the field of view of the agent; therefore, the anisotropic term \(\psi _{r,j}\) is used, where \(\lambda _r \in [0,1]\) defines the strength of the anisotropy and \(\gamma _{r,j}\) is the angle between the robot’s direction of motion and the direction from the robot r towards the human j. Ultimately, the SFM for the robot r is synthesised from the force \(\mathbf F _r^{\mathrm{goal}}\) attracting it to the goal, the repulsive force \(\mathbf F _{r}^h\) from the humans j, and the repulsive force \(\mathbf F _{r}^o\) from the objects o as follows:

$$\begin{aligned} \mathbf F _{r}^{\mathrm{sfm}} = \mathbf F _r^{\mathrm{goal}} + \mathbf F _{r}^h + \mathbf F _{r}^o \end{aligned}$$
(6)

The primary set of parameters of the conventional SFM is \([K_r^v, A_r^h, B_r^h, A_r^o, B_r^o, \lambda _r]\).
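For illustration, the following C++ sketch evaluates Eqs. (1)–(5) for a single interacting agent; the 2-D vector helper `Vec2` and the function names are our own minimal scaffolding rather than part of any particular SFM library, and the sign conventions follow the definitions of \(\mathbf n_{r,j}\) and \(\gamma_{r,j}\) given above.

```cpp
#include <cmath>

// Minimal 2-D vector helper used throughout the sketches in this section.
struct Vec2 {
  double x, y;
  Vec2 operator+(const Vec2& o) const { return {x + o.x, y + o.y}; }
  Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
  Vec2 operator*(double s)      const { return {x * s, y * s}; }
  double norm() const { return std::hypot(x, y); }
};

// Anisotropic weighting psi (Eqs. 3 and 5): gamma is the angle between the
// robot's direction of motion and the direction towards the other agent.
double anisotropy(double lambda_r, double gamma) {
  return lambda_r + (1.0 - lambda_r) * (1.0 + std::cos(gamma)) / 2.0;
}

// Attractive force to the goal (Eq. 1).
Vec2 goalForce(double K_v, const Vec2& v_desired, const Vec2& v_actual) {
  return (v_desired - v_actual) * K_v;
}

// Repulsive force exerted on the robot by one agent (a human, Eq. 2, or an
// obstacle, Eq. 4), given the sum of radii r_sum = r_r + r_j; the robot
// velocity is used only to evaluate the anisotropy angle gamma.
Vec2 repulsiveForce(double A, double B, double lambda_r, double r_sum,
                    const Vec2& robot_pos, const Vec2& agent_pos,
                    const Vec2& robot_vel) {
  const Vec2 diff = robot_pos - agent_pos;      // points from agent to robot
  const double d  = diff.norm();
  if (d < 1e-6) return {0.0, 0.0};              // degenerate overlap, skip
  const Vec2 n = diff * (1.0 / d);              // unit vector n_{r,j}

  double gamma = 0.0;                           // gamma_{r,j}
  if (robot_vel.norm() > 1e-6) {
    const Vec2 to_agent = n * -1.0;             // direction robot -> agent
    gamma = std::atan2(robot_vel.y, robot_vel.x) -
            std::atan2(to_agent.y, to_agent.x);
  }
  const double magnitude = A * std::exp((r_sum - d) / B);
  return n * (magnitude * anisotropy(lambda_r, gamma));
}
```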

Fig. 3 An example of the extended social force model of a robot: the human body repulsive force \(\mathbf F _r^{h}\); the human group repulsive force \(\mathbf F _r^{\mathrm{hg}}\); the human–object repulsive force \(\mathbf F _r^{\mathrm{ho}}\); the obstacle repulsive force \(\mathbf F _r^{o}\); the attractive force to the goal \(\mathbf F _r^{\mathrm{goal}}\); and the final extended social force \(\mathbf F _r^{\mathrm{esfm}}\)

In the conventional social force models of [6] and [3], the repulsive force from the humans given in Eq. (2) only uses the relative position, orientation, and velocity between the robot and each individual human. However, many other factors, such as human groups and human–object interactions, also influence the motion of the robot in social environments. Hence, this information should be recognized and incorporated into the socially aware navigation framework to ensure human safety and comfort, and to generate socially acceptable behaviours for the mobile robot. In addition to the social forces applied to individuals, we propose a new method that takes information about human–object interactions and human groups into account to develop the social reactive control algorithm. Figure 3 shows an example of the social forces that influence the motion of the mobile robot.

Human–object interaction-based repulsive forces To take the human–object interaction into account in the social reactive control, we place a virtual human at the centre of the human–object interaction space. The repulsive force \(\mathbf F _{r}^{\mathrm{ho}}\) of the human–object interaction can therefore be calculated using Eq. (2). The set of parameters of the repulsive force from the human–object interaction is \([A_r^{h}, B_r^{\mathrm{ho}}, \lambda _r]\), where \(B_r^{\mathrm{ho}}\) is computed as follows:

$$\begin{aligned} B_r^{\mathrm{ho}} = B_r^h \frac{r_m^o}{r_h} \end{aligned}$$
(7)

where \(r_h\) is the radius of the human body, and \(r_m^o\) is computed in Sect. 2.2.

Human group-based repulsive forces Similar to the repulsive force from the human–object interaction, the repulsive force based on a social group of people \(\mathbf F _{r}^{\mathrm{hg}}\) is computed using Eq. (2). The set of parameters of the repulsive force from the human group interaction is \([A_r^{h}, B_r^{\mathrm{hg}}, \lambda _r]\), where \(B_r^{\mathrm{hg}}\) is computed as follows:

$$\begin{aligned} B_r^{\mathrm{hg}} = B_r^h \frac{r_k^g}{r_h} \end{aligned}$$
(8)

where \(r_h\) is the radius of the human body, and \(r_k^g\) is computed in Sect. 2.2.

Ultimately, we combine the attractive force to the goal \(\mathbf F _r^{\mathrm{goal}}\) with all the repulsive forces, including the human repulsive force \(\mathbf F _{r}^{h}\), the object repulsive force \(\mathbf F _{r}^{o}\), the human–object repulsive force \(\mathbf F _{r}^{\mathrm{ho}}\), and the human group repulsive force \(\mathbf F _{r}^{\mathrm{hg}}\), to create the extended social force model as follows:

$$\begin{aligned} \mathbf F _{r}^{\mathrm{esfm}} = \mathbf F _r^{\mathrm{goal}} + \mathbf F _{r}^{h} + \mathbf F _{r}^{o} + w_{\mathrm{ho}} \mathbf F _{r}^{\mathrm{ho}} + w_{\mathrm{hg}} \mathbf F _{r}^{\mathrm{hg}} \end{aligned}$$
(9)

where \(w_{\mathrm{ho}}\) and \(w_{\mathrm{hg}}\) are, respectively, the weight of the human–object interaction and human group interaction. The set of parameters of the extended social force model \(\mathbf F _{r}^{\mathrm{esfm}}\) is \([{K}_{r}^v, A_r^{h}, B_r^{h}, A_r^{o}, B_r^{o}, \lambda _r, w_{\mathrm{ho}}, w_{\mathrm{hg}}]\).
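Continuing the sketch above (reusing `Vec2`, `goalForce`, and `repulsiveForce`, together with the `HumanState`, `GroupInteraction`, and `HumanObjectInteraction` records from Sects. 2.1 and 2.2), the extended social force of Eq. (9) can be assembled as follows; the parameter struct and the treatment of obstacles as zero-radius points are our own simplifying assumptions.

```cpp
#include <vector>

// Parameter set of the extended social force model (Sect. 2.3).
struct SrcParams {
  double K_v;              // inverse relaxation time K_r^v
  double A_h, B_h;         // strength/range of the human repulsive force
  double A_o, B_o;         // strength/range of the obstacle repulsive force
  double lambda;           // anisotropy strength lambda_r
  double w_ho, w_hg;       // weights of human-object and human-group forces
  double r_robot, r_human; // radius of the robot and of a human body [m]
};

// Extended social force (Eq. 9), including the attractive force to the goal.
Vec2 extendedSocialForce(const SrcParams& p,
                         const Vec2& robot_pos, const Vec2& robot_vel,
                         const Vec2& desired_vel,
                         const std::vector<HumanState>& humans,
                         const std::vector<Vec2>& obstacle_points,
                         const std::vector<GroupInteraction>& groups,
                         const std::vector<HumanObjectInteraction>& objects) {
  Vec2 F = goalForce(p.K_v, desired_vel, robot_vel);                 // Eq. (1)

  for (const auto& h : humans)                                       // Eq. (2)
    F = F + repulsiveForce(p.A_h, p.B_h, p.lambda, p.r_robot + p.r_human,
                           robot_pos, {h.x, h.y}, robot_vel);

  for (const auto& o : obstacle_points)                              // Eq. (4)
    F = F + repulsiveForce(p.A_o, p.B_o, p.lambda, p.r_robot,        // zero-radius obstacle
                           robot_pos, o, robot_vel);

  for (const auto& ho : objects) {                                   // Eq. (7)
    const double B_ho = p.B_h * ho.r / p.r_human;   // range scaled by r_m^o / r_h
    F = F + repulsiveForce(p.A_h, B_ho, p.lambda, p.r_robot + p.r_human,
                           robot_pos, {ho.x, ho.y}, robot_vel) * p.w_ho;
  }

  for (const auto& g : groups) {                                     // Eq. (8)
    const double B_hg = p.B_h * g.r / p.r_human;    // range scaled by r_k^g / r_h
    F = F + repulsiveForce(p.A_h, B_hg, p.lambda, p.r_robot + p.r_human,
                           robot_pos, {g.x, g.y}, robot_vel) * p.w_hg;
  }
  return F;                                                          // Eq. (9)
}
```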

Once the extended social force model \(\mathbf F _r^{\mathrm{esfm}}\) has been calculated in Eq. (9), the social reactive control for the mobile robot is computed according to Newton’s second law of motion as follows:

$$\begin{aligned}&\mathbf{a }_r(t) = \frac{\mathbf{F }_r^{\mathrm{esfm}}(t)}{m_r} \end{aligned}$$
(10)
$$\begin{aligned}&\mathbf{v }_r^{\mathrm{new}}(t) = \mathbf{v }_r(t) + \mathbf{a }_r(t) dt \end{aligned}$$
(11)

where \(\mathbf a _r(t)\) and \(m_r\) are the acceleration vector and mass of the robot, respectively; \(\mathbf v _r(t)\) is the current velocity of the robot; dt denotes the time interval; and \(\mathbf v _r^{\mathrm{new}}(t)\) is the velocity command, which is used to control the mobile robot so that it navigates safely and socially in social environments.
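Equations (10) and (11) amount to a single explicit Euler integration step, as in the short sketch below (again reusing the `Vec2` helper above); any additional limiting of the commanded speed to the platform's physical limits would be an implementation-specific addition and is not shown.

```cpp
// Velocity command from the extended social force (Eqs. 10 and 11):
// a_r = F_esfm / m_r, followed by one explicit Euler step of length dt.
Vec2 velocityCommand(const Vec2& F_esfm, const Vec2& v_current,
                     double mass, double dt) {
  const Vec2 a = F_esfm * (1.0 / mass);   // Eq. (10)
  return v_current + a * dt;              // Eq. (11)
}
```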

3 Experiments

3.1 Experimental installation and setup

To examine and validate the feasibility of the developed framework in the real world, we have implemented and tested it on our mobile platform. The PEDSIM library (Footnote 1) and the accompanying software package (Footnote 2) are used as the starting point for developing the proposed system. The software core of the robot is developed on the Robot Operating System (ROS) [15] running on an Intel Core i7 2.2 GHz laptop. The proposed control was implemented in the C++ programming language.

Four essential experiments were conducted to verify the effectiveness of the proposed social reactive control. In each experiment, we used the same start and goal poses of the robot, \(q_{\mathrm{start}}=(0,0,\frac{\pi }{2})\) and \(q_{\mathrm{goal}}=(0,4.8,\frac{\pi }{2})\). The robot was required to move from \(q_{\mathrm{start}}\) to \(q_{\mathrm{goal}}\) while avoiding both individuals and social interactions.

Based on observations from several experiments covering a wide variety of social situations, we empirically set the parameter values of the proposed SRC algorithm as listed in Table 1. Note that automatic parameter tuning, for example using machine learning techniques to adjust the parameters according to human behaviours and the surrounding environment, is beyond the scope of this paper and will be part of our future work.

Table 1 Parameters set in experiments

3.2 System integration

In this study, we choose a two-wheel differential drive mobile robot platform with two additional castors as the representative platform for system integration. We define the state of the robot as \(r(t) = (x_{r}(t), y_{r}(t), \theta _{r}(t))\) at time t, where \((x_{r}(t), y_{r}(t))\) is the position and \(\theta _{r}(t)\) is the orientation. The state of the robot at time \((t+1)\) is governed by the following equations:

$$\begin{aligned} \begin{bmatrix} x_{r}(t+1)\\ y_{r}(t+1)\\ \theta _{r}(t+1) \end{bmatrix} = \begin{bmatrix} x_{r}(t)\\ y_{r}(t)\\ \theta _{r}(t) \end{bmatrix} + \begin{bmatrix} \frac{v_{r}^r+v_{r}^l}{2}\cos (\theta _{r}(t))\mathrm{d}t\\ \frac{v_{r}^r+v_{r}^l}{2}\sin (\theta _{r}(t))\mathrm{d}t\\ \frac{v^{r}_r-v_{r}^l}{L}\mathrm{d}t \end{bmatrix} \end{aligned}$$
(12)

where \(v^{r}_r\) and \(v_{r}^l\) are the linear velocity commands of the right and left wheels of the robot, respectively; and L denotes the wheelbase of the robot.
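A minimal C++ sketch of the discrete update in Eq. (12), assuming the wheel velocities and the wheel separation \(L\) are given:

```cpp
#include <cmath>

struct RobotPose { double x, y, theta; };   // (x_r, y_r, theta_r)

// Discrete differential-drive kinematics (Eq. 12): v_right and v_left are the
// linear velocities of the right and left wheels, L is the wheel separation.
RobotPose integratePose(const RobotPose& q, double v_right, double v_left,
                        double L, double dt) {
  const double v     = 0.5 * (v_right + v_left);   // forward speed
  const double omega = (v_right - v_left) / L;     // yaw rate
  return {q.x + v * std::cos(q.theta) * dt,
          q.y + v * std::sin(q.theta) * dt,
          q.theta + omega * dt};
}
```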

Given the velocity command \(\mathbf v _r^{\mathrm{new}}(t) = (v_r^x(t), v_r^y(t))\) computed in Eq. (11), the preferred orientation of the robot is \(\theta _r^{\mathrm{pref}}(t) = \mathrm{atan}2(v_r^y(t), v_r^x(t))\). The following equations are used to compute \(v^{r}_r\) and \(v_{r}^l\):

$$\begin{aligned} v^r_r= & {} \Vert \mathbf v _r^{\mathrm{new}}(t) \Vert _2 + K_r^{\theta } \frac{L (\theta _r^{\mathrm{pref}}(t)-\theta _{r}(t))}{2} \end{aligned}$$
(13)
$$\begin{aligned} v_r^l= & {} \Vert \mathbf v _r^{\mathrm{new}}(t) \Vert _2 - K_r^{\theta } \frac{L (\theta _r^{\mathrm{pref}}(t)-\theta _{r}(t))}{2} \end{aligned}$$
(14)

where \(\mathbf v _r^{\mathrm{new}}(t)\) is computed in Eq. (11), and \((K_r^{\theta })^{-1}\) is the time interval that the robot needs to adjust its current orientation \(\theta _{r}(t)\) to the preferred orientation \(\theta _r^{\mathrm{pref}}(t)\).
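The conversion from \(\mathbf v_r^{\mathrm{new}}(t)\) to wheel commands in Eqs. (13) and (14) can be sketched as follows; wrapping the heading error to \([-\pi, \pi]\) is a practical addition of ours and is not stated explicitly in the equations.

```cpp
#include <cmath>

struct WheelCommand { double v_right, v_left; };

// Wheel velocity commands from the velocity command v_r^new(t) (Eqs. 13-14).
WheelCommand wheelCommands(double vx, double vy,        // components of v_r^new(t)
                           double theta_current,        // theta_r(t)
                           double K_theta, double L) {
  const double speed      = std::hypot(vx, vy);         // ||v_r^new(t)||_2
  const double theta_pref = std::atan2(vy, vx);         // preferred heading
  double err = theta_pref - theta_current;
  err = std::atan2(std::sin(err), std::cos(err));       // wrap to [-pi, pi]
  const double delta = K_theta * L * err / 2.0;
  return {speed + delta, speed - delta};                // Eqs. (13) and (14)
}
```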

3.3 Experimental results

We examined the social reactive control for socially aware robot navigation in two cases: (1) SFM—the robot is equipped with the conventional social force model, where humans are treated as individuals, and (2) SRC—the robot is equipped with the proposed social reactive control, where both individuals and social interactive groups are taken into account. First, we conducted an experiment to demonstrate that, if the social interactive information of human groups and human–object interactions is not taken into account, there is no difference between the conventional social force model and the proposed SRC algorithm. Second, we took such social interactive information into account and compared the performance of the mobile robot under the conventional social force model and the proposed social reactive control. A video clip of these real-world experiments is available online (Footnote 3).

Fig. 4 The experimental results of avoiding individuals: a third-person view; b the trajectory of the robot equipped with the conventional social force model; and c the trajectory of the robot equipped with the proposed social reactive control

3.3.1 Avoiding individuals

In this experiment, we examined how the mobile robot avoids individuals \(p_1\), \(p_2\) and \(p_3\) when navigating from \(q_{\mathrm{start}}\) to \(q_{\mathrm{goal}}\) in two cases: (1) the robot is equipped with the conventional social force model, and (2) the robot is equipped with the proposed SRC algorithm, but the social interaction detection block in Fig. 1 is deactivated. The third-person view and the experimental results are shown in Fig. 4a–c, respectively. As shown in Fig. 4b, c, in both cases the mobile robot can avoid the individuals, but it does not respect the social interaction of the group formed by persons \(p_1\)–\(p_2\). This indicates that, when the robot equipped with the proposed SRC technique is not aware of the social context, it reacts to individuals in the same way as the robot equipped with the conventional social force model; the difference appears only when the social interactions are detected and taken into account.

Fig. 5 The experimental results with three case studies: (1) a group of two standing people—the first row, (2) a human–object interaction—the second row, and (3) a group of two moving people—the third row. The first column shows the third-person view of the scenario. The second column illustrates the trajectories of the humans and the robot when the robot was equipped with the conventional social force model. The third column shows the trajectories of the humans and the robot when the robot navigation system was equipped with the proposed social reactive control

3.3.2 Avoiding social interactive groups

In these experiments, we aim to compare the performance of the proposed social reactive control method and the conventional social force model when the social interactions are available.

Avoiding a group of two standing people We examined how the proposed social reactive control algorithm drives the mobile robot to avoid a group of two standing people \(p_1\), \(p_2\) when navigating from \(q_{\mathrm{start}}\) to \(q_{\mathrm{goal}}\). Note that the relative distance between persons \(p_1\) and \(p_2\) is about 2.0 [m]. As shown in Fig. 5b, the robot equipped with the conventional SFM crossed through the social interaction space between the two people because it was not aware of this space. Although the robot did not physically collide with the people, they might not feel comfortable when the robot interferes with their social interaction. In contrast, the robot equipped with the proposed SRC technique moved around these people, as shown in Fig. 5c, because it was aware of the social interaction space and took it into account when generating the human repulsive force \(\mathbf F _{r}^{h}\) and the human group repulsive force \(\mathbf F _{r}^{\mathrm{hg}}\) for the proposed social reactive control.

Avoiding human–object interactions We validated how the proposed social reactive control technique enables the mobile robot to avoid a group of two people \(p_1\), \(p_2\) when traversing from \(q_{\mathrm{start}}\) to \(q_{\mathrm{goal}}\). In this case, the two people formed a social interactive group and were also interacting with an object of interest. Note that the relative distances from persons \(p_1\) and \(p_2\) to the object are about 1.7 [m] and 1.8 [m], respectively. The trajectory of the robot equipped with the conventional SFM is illustrated in Fig. 5e: the robot did not physically collide with the people or the object, but it crossed the space in front of the people without respecting the social group interaction and the human–object interaction spaces. In contrast, as shown in Fig. 5f, the robot equipped with the proposed SRC method politely and respectfully avoided the humans without disturbing the interactive group, moving behind them instead of crossing through their social interaction spaces. This is because the robot took into account the individual states of \(p_1\) and \(p_2\), the social interaction space of the group \(p_1\)–\(p_2\), and the human–object interaction spaces of \(p_1\)–\(o\) and \(p_2\)–\(o\) to generate, respectively, the human repulsive forces \(\mathbf F _{r}^{h}\), the human group repulsive force \(\mathbf F _{r}^{\mathrm{hg}}\), and the human–object repulsive forces \(\mathbf F _{r}^{\mathrm{ho}}\) for the social reactive control.

Avoiding a group of two moving people We examined how the proposed social reactive control algorithm allows the mobile robot to avoid a group of two moving people \(p_1\), \(p_2\) when moving from \(q_{\mathrm{start}}\) towards \(q_{\mathrm{goal}}\). Note that the relative distance between the moving persons \(p_1\) and \(p_2\) is about 1.8 [m]. The trajectories of the people and the robot under the SFM and SRC models are illustrated in Fig. 5h, i, respectively. As shown in Fig. 5h, the robot moved through the social interaction space, which is likely to make the humans uncomfortable because their interaction might be interrupted. In contrast, as shown in Fig. 5i, the robot politely and respectfully avoided the social group, because the robot navigation system took the group interaction of the two moving people \(p_1\) and \(p_2\) into account to generate the human group repulsive force \(\mathbf F _{r}^{\mathrm{hg}}\) for the social reactive control.

Overall, the experimental results shown in Fig. 5 demonstrate that our proposed social reactive control is fully capable of enabling mobile robots to avoid both stationary and dynamic human interactive groups, not only providing safety and comfort to the humans but also guaranteeing socially acceptable behaviours for the mobile robots in human interactive environments.

4 Discussions and conclusions

We have presented a social reactive control (SRC) for socially aware robot navigation systems in social environments. The socio-spatio-temporal characteristics of humans including human position, orientation, motion, field of view, human group and human–object interaction information are incorporated into the conventional social force model to develop the SRC algorithm. We have demonstrated the effectiveness of the proposed method through real-world experiments. The experimental results show that the proposed SRC method is capable of enabling a mobile robot to navigate safely and socially around humans, providing socially acceptable manners for the mobile robot in human interactive environments.

In human interactive environments, a mobile robot must detect and identify the social states of individuals, human groups, and human–object interactions, and react accordingly to remain polite and respectful towards human behaviours. Instead of using conventional motion planning techniques that are biased towards path planning rather than motion control, such as randomized kinodynamic planning (RRT) [9], the probabilistic roadmap (PRM) [7], D* [22], and A* [5], we focus on motion control, particularly social reactive control, because the social situations of human interactive groups vary with people's daily activities, and the mobile robot must react quickly to any social interactive situation. Thanks to this approach, the proposed technique can be combined with a conventional path planning technique such as RRT, PRM, D*, or A* to create a robot motion planning system capable of reacting to any social situation in the real-world environment.

In the future, various kinds of social cues and signals introduced in [25] and [13] will be recognized and incorporated into the model. Furthermore, we will also consider using machine learning techniques, such as inverse reinforcement learning [12], to optimize the parameter setting of the proposed social reactive control method.