1 Introduction

Mobile service robots are a very promising application of robotics. They are expected to help people with various tasks. Autonomous movement is their primary capability, so letting the robot know where it is and how to reach its goal is a fundamental problem of mobile robotics. To solve this problem, several simultaneous localization and mapping (SLAM) approaches have been proposed, such as the Kalman filter–based algorithm [1] and the Monte Carlo localization algorithm [2].

The Monte Carlo localization algorithm is a probabilistic method that uses a set of samples (particles) to approximate probability density functions from a Bayesian perspective. This method has been successfully applied in several indoor navigation applications, but its disadvantage is its high computational requirement.

The Kalman filter is an efficient algorithm and has been widely applied to SLAM [3, 4]. However, the regular Kalman filter can only cope with linear problems. Smith, Self, and Cheeseman [5] proposed the extended Kalman filter (EKF), which linearizes the nonlinear system through a Taylor expansion to incrementally estimate the posterior distribution over the robot pose along with the positions of the landmarks.

Obstacle avoidance is another important issue studied by researchers. The artificial potential field method [6], the neural network method [7], the vector field histogram method (VFH) [8], the curvature velocity method (CVM) [9], the lane curvature method (LCM) [10], and the beam curvature method (BCM) [11] are typical techniques for obstacle avoidance. Among these, the beam curvature method, which combines the LCM with the CVM, is a simple and efficient way to find a promising forward path.

In this paper, we combine several methods to implement our algorithm. First, we use the split–merge method to extract the essential features as landmarks. Then, an EKF is used to solve the SLAM problem. Moreover, a modified BCM is adopted to avoid obstacles. Finally, a kinematic control law is used to command the motion of the robot.

2 The Proposed Robot System

2.1 Mobile Service Robot

The main task of the proposed system is to help operators collect external environment information and build an environment map in real time, without collisions throughout the whole process. Our developed mobile robot platform is illustrated in Fig. 1. It is equipped with two sensors, a SICK LMS111 laser range finder and a Kinect, to capture external information. The two wheels are driven by servo motors, and each servo motor is controlled by an ELMO driver that communicates with the host computer through a Kvaser CAN device.

Fig. 1 The developed mobile service robot

3 Simultaneous Localization and Mapping

SLAM is a process that estimates the robot's pose and the landmarks' positions at the same time, i.e., localization and mapping. The fundamental step is to obtain external information and search for proper features (landmarks). After landmark extraction, it is essential to match the observed landmarks with the existing landmarks in the map. The heart of our SLAM process is updating the mean and covariance matrix of the system state through the EKF technique.

3.1 Feature Extraction

In indoor environments there are many line features; accordingly, we choose corners as landmarks. There are several common methods to extract line features [12]. In this paper, we use the split–merge method [13] to extract lines and compute the corners' positions.

The basic idea of split–merge is to repeatedly search for the point \(P\) with the maximum distance \(d_p\) to the line determined by the start and end points of a point set, and to compare \(d_p\) with a threshold \(d_{\text{ threshold }}\). If \(d_p\) is less than \(d_{\text{ threshold }}\), all points of the set are considered to lie on the line; otherwise, the point set is divided into two new sets at point \(P\). After splitting, we use the least squares method to fit the corresponding lines.
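As an illustration, the recursive split step can be sketched in Python as follows (this is our own minimal sketch; the function names are not prescribed by the paper):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def split(points, d_threshold):
    """Recursively split a point set at the point farthest from the line
    through its endpoints; returns a list of point subsets (segments)."""
    a, b = points[0], points[-1]
    d_max, idx = 0.0, -1
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], a, b)
        if d > d_max:
            d_max, idx = d, i
    if d_max <= d_threshold:
        return [points]  # all points are close enough to one line
    # Otherwise split at the farthest point P and recurse on both halves.
    return (split(points[:idx + 1], d_threshold)
            + split(points[idx:], d_threshold))
```

A subsequent merge pass would re-join adjacent segments whose fitted lines are nearly collinear; each surviving segment is then refined with the least squares fit of Sect. 3.1.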

We can treat the fitting problem as a regression problem in polar coordinates using the raw laser data, so a line can be described as

$$\begin{aligned} L: \rho =x{\text{ cos }}(\alpha )+y{\text{ sin }}(\alpha ), \end{aligned}$$
(1)

where \(\rho \) is the perpendicular distance from the origin to the line and \(\alpha \) is the angle between the \(x\) axis and the normal of the line. So the mean-squared error of distance is obtained as follows:

$$\begin{aligned} \sum _{i=1}^{N}{d_i}^2 =\sum _{i=1}^{N}\big (\rho -\left( x_i\cos (\alpha )+y_i\sin (\alpha )\right) \big )^2, \end{aligned}$$
(2)

where \(d_i\) is the distance from point \((x_i,y_i)\) to the line \(L\), and \(x_i\) and \(y_i\) are the Cartesian coordinates of the samples. \(\alpha \) and \(\rho \) can be computed as follows [14]

$$\begin{aligned} \tan (2\alpha )&=\frac{2\sum _{i=1}^{N}{(x_i-{\bar{x}})(y_i-{\bar{y}})}}{\sum _{i=1}^{N}\big [(x_i-{\bar{x}})^2-(y_i-{\bar{y}})^2\big ]}, \end{aligned}$$
(3)
$$\begin{aligned} \rho&={\bar{x}} {\text{ cos }}\alpha + {\bar{y}} {\text{ sin }}\alpha , \end{aligned}$$
(4)
$$\begin{aligned} {\bar{x}}&= \frac{1}{N}\sum _{i=1}^{N} x_i, {\bar{y}} = \frac{1}{N}\sum _{i=1}^{N} y_i. \end{aligned}$$
(5)
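A direct Python transcription of Eqs. (2)–(5) might look as follows (our own sketch; `atan2` is used so that the quadrant chosen for \(2\alpha\) is the root of Eq. (3) that minimizes the squared-distance error of Eq. (2)):

```python
import math

def fit_line_polar(points):
    """Least-squares fit of rho = x*cos(alpha) + y*sin(alpha) to a point
    set, following Eqs. (3)-(5)."""
    n = len(points)
    xb = sum(x for x, _ in points) / n          # Eq. (5)
    yb = sum(y for _, y in points) / n
    sxy = sum((x - xb) * (y - yb) for x, y in points)
    sxx_syy = sum((x - xb) ** 2 - (y - yb) ** 2 for x, y in points)
    # Eq. (3): tan(2*alpha) = 2*sxy / sxx_syy; the sign choice in atan2
    # selects the stationary point that minimizes the error of Eq. (2).
    alpha = 0.5 * math.atan2(-2.0 * sxy, -sxx_syy)
    rho = xb * math.cos(alpha) + yb * math.sin(alpha)   # Eq. (4)
    return alpha, rho
```

Note that \((\alpha, \rho)\) and \((\alpha + \pi, -\rho)\) describe the same line, so results should be compared through the point residuals \(\rho - (x\cos \alpha + y\sin \alpha )\) rather than the raw parameters.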

Combining these two methods, we obtain the extracted line features. By computing the intersection point of every two lines and comparing its position with the laser data, we can decide which intersection points are corners. Moreover, we impose some constraints: for example, a line is not extracted if its length is under 1 m or it contains fewer than 10 points. The extracted results are shown in Figs. 2 and 3.

Fig. 2 The result without constraints

Fig. 3 The result with constraints

In Fig. 2, the blue dots are the raw laser data, the red circle represents the robot, the red lines are the extracted lines, and the green dot is the computed corner.

3.2 Data Association

In SLAM applications, when the robot detects a feature, it must either be associated with a landmark in the existing map or incorporated as a new landmark. Data association is the process that deals with this matching problem between observations and existing landmarks. A typical data association algorithm is composed of two elements: a compatibility test to find potential pairs between observations and landmarks, and a selection rule to choose the best matches among the set of compatible pairings.

Generally, the gated nearest neighbor (NN) algorithm is used for data association. The main step of the NN algorithm is a compatibility test that uses the Mahalanobis distance to determine whether a feature corresponds to a landmark; the pair with the minimum Mahalanobis distance is then selected. However, in this paper, because the landmarks' positions are set exactly in advance, we use a simpler method to determine the correspondence between features and landmarks. We compute the degree of matching as follows:

$$\begin{aligned} {\text{ Match }}(i,j)=|{\text{ error }}_r| {\text{ cos }}({\text{ error }}_b), \end{aligned}$$
(6)

where \({\text{ error }}_r\) and \({\text{ error }}_b\) are the range error and bearing error between feature \(i\) and landmark \(j\), respectively. The feature with the minimum match value is then regarded as the best match for the corresponding landmark.
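In code, the selection rule of Eq. (6) reduces to a few lines. This is a sketch under the assumption that features and landmarks are given as (range, bearing) pairs in the robot frame:

```python
import math

def match_score(error_r, error_b):
    """Eq. (6): degree of matching from the range and bearing errors."""
    return abs(error_r) * math.cos(error_b)

def best_feature(landmark, features):
    """Index of the feature with the minimum match score for the given
    landmark; both are (range, bearing) pairs."""
    return min(range(len(features)),
               key=lambda i: match_score(features[i][0] - landmark[0],
                                         features[i][1] - landmark[1]))
```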

3.3 Extended Kalman Filter

The extended Kalman filter is a repetitive process of prediction and correction to estimate the system state. In our implementation, we estimate the pose of the robot by observing several preset landmarks. The robot pose is defined as

$$\begin{aligned} x_n(k)=\left[ x(k), y(k), \theta (k)\right] ^T \end{aligned}$$
(7)

where \(x_n(k)\) is the pose of the robot, \(x(k)\) and \(y(k)\) are the current robot position coordinates, \(\theta (k)\) is the heading angle, and \(k\) is the current time step. The developed robot platform is a nonholonomic two-wheel-driven mobile robot, and its velocity motion model is shown in Fig. 4, where \(D\) is the distance between the two wheels. The velocity motion model can be described as

Fig. 4 The motion model

$$ g(x,u,k+1)=\left[ {\begin{array} {l}x(k)+\Delta t V(k) \cos (\theta (k)) \\ y(k)+\Delta t V(k) \sin(\theta (k)) \\ \theta (k)+\Delta t \omega (k) \end{array}}\right] $$
(8)

where \(V(k)\) and \(\omega (k)\) are the linear velocity and angular velocity, respectively.
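Eq. (8) translates directly into a one-step pose update (a minimal sketch of our own):

```python
import math

def motion_model(pose, v, omega, dt):
    """Eq. (8): propagate the pose (x, y, theta) over one sampling
    interval dt with linear velocity v and angular velocity omega."""
    x, y, theta = pose
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * omega)
```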

To apply the EKF, we linearize the motion model to acquire the Jacobian matrices \(\frac{{\text{d}}g}{{\text{d}}x}\) and \(\frac{{\text{d}}g}{{\text{d}}u}\) as follows:

$$ \begin{aligned} G_x=\frac{{\text{d}}g}{{\text{d}}x}= \left[ \begin{array}{ccc} 1 & 0 & -\Delta t V(k)\sin (\theta (k)) \\ 0 & 1 & \Delta t V(k)\cos (\theta (k)) \\ 0 & 0 & 1 \end{array} \right] \end{aligned} $$
(9)
$$ \begin{aligned} G_u=\frac{{\text{d}}g}{{\text{d}}u}= \left[ \begin{array}{cc} \frac{R}{2}\Delta t \cos (\theta (k)) & \frac{R}{2}\Delta t \cos (\theta (k)) \\ \frac{R}{2}\Delta t \sin (\theta (k)) & \frac{R}{2}\Delta t \sin (\theta (k)) \\ \frac{R}{2D}\Delta t & -\frac{R}{2D}\Delta t \end{array} \right] , \end{aligned} $$
(10)

where \(R\) is the radius of the wheels.

The global framework of robot is described as shown in Fig. 5.

Fig. 5 The global framework of robot

Then, we can easily obtain the measurement model, shown as

$$ \begin{aligned} h(x)=\left[ \begin{array}{c} \rho \\ \alpha \end{array}\right] =\left[ \begin{array}{c} \sqrt{(x_l-x)^2+(y_l-y)^2} \\ \arctan \left(\frac{y_l-y}{x_l-x}\right)-\left(\theta - \frac{\pi }{2}\right) \end{array}\right] , \end{aligned}$$
(11)

where \(x_l\) and \(y_l\) are the Cartesian coordinates of the landmark, \(x\) and \(y\) are the position of the robot, and \(\theta \) is the heading angle. Similarly, by linearizing the measurement model, we obtain its Jacobian matrix \(H_x\):

$$\begin{aligned} H_x= \left[ \begin{array}{ccc} \frac{x-x_l}{\rho } & \frac{y-y_l}{\rho } & 0\\ \frac{y_l-y}{\rho ^2} & \frac{x-x_l}{\rho ^2} & -1 \end{array} \right] . \end{aligned}$$
(12)
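Putting the pieces together, one EKF predict/correct cycle over the pose state of Eq. (7) might be sketched as follows. This is our own illustration: for brevity, the control noise is passed as a 3×3 state-space covariance `R_state` rather than being mapped through \(G_u\) as in Eq. (10), and only a single known landmark is processed per call.

```python
import numpy as np

def ekf_step(mu, P, v, omega, landmark, z, R_state, Q, dt):
    """One predict/correct cycle: motion model (8), Jacobian G_x (9),
    measurement model (11), and its Jacobian H_x (12)."""
    x, y, th = mu
    # Predict with the motion model, Eq. (8).
    mu_bar = np.array([x + dt * v * np.cos(th),
                       y + dt * v * np.sin(th),
                       th + dt * omega])
    G = np.array([[1.0, 0.0, -dt * v * np.sin(th)],   # Eq. (9)
                  [0.0, 1.0,  dt * v * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_bar = G @ P @ G.T + R_state
    # Expected range/bearing to the landmark, Eq. (11).
    xl, yl = landmark
    dx, dy = xl - mu_bar[0], yl - mu_bar[1]
    rho = np.hypot(dx, dy)
    z_hat = np.array([rho, np.arctan2(dy, dx) - (mu_bar[2] - np.pi / 2)])
    H = np.array([[-dx / rho,    -dy / rho,     0.0],  # Eq. (12)
                  [ dy / rho**2, -dx / rho**2, -1.0]])
    # Kalman gain and correction.
    K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing error
    return mu_bar + K @ innov, (np.eye(3) - K @ H) @ P_bar
```

With an exact measurement of a known landmark, the correction leaves the predicted pose unchanged, which is a convenient sanity check.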

3.4 Motion Plan and Obstacle Avoidance

Collision avoidance has been widely studied for autonomous mobile systems, and several techniques have been proposed to tackle it. In this paper, we use an extended beam curvature method. The details are as follows:

Obtain the environmental information: The laser's range is from \(0^{\circ }\) to \(270^{\circ }\) with \(1^{\circ }\) resolution. As a result, at every step we obtain 271 values, and every value \(d_{i}\) represents the distance of the \(i\)th beam.

Evaluate each beam: To find the best beam, we evaluate the beams through the following equation [11]:

$$\begin{aligned} f_e(d_{i},p_{i})&= \alpha d_{i}\cos (\mid dp_{i} \mid )-(1-\alpha )\mid \frac{dp_{i}}{\pi }\mid , \end{aligned}$$
(13)

where \(p_{i}\) is the angle between the \(X\) axis and the \(i\)th beam, and \(p_{G}\) is the angle between the \(X\) axis and the goal direction in the coordinate system illustrated in Fig. 6. With \(dp_{i} = p_{G} - p_{i}\), the term \(d_{i}\cos (\mid dp_{i} \mid )\) may be regarded as the distance projected onto the goal direction, and \(\alpha \) and \(\beta = 1-\alpha \) are constant weights.
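For illustration, Eq. (13) and the selection of the best beam can be written as follows (our own sketch, with \(\beta = 1-\alpha\) and the experimental value \(\alpha = 0.05\) from Sect. 4 as the default):

```python
import math

def evaluate_beam(d_i, p_i, p_goal, alpha=0.05):
    """Eq. (13): trade off the distance projected toward the goal
    against the angular deviation dp_i = p_G - p_i."""
    dp = p_goal - p_i
    return alpha * d_i * math.cos(abs(dp)) - (1.0 - alpha) * abs(dp / math.pi)

def best_beam(distances, angles, p_goal, alpha=0.05):
    """Index of the beam that maximizes Eq. (13)."""
    return max(range(len(distances)),
               key=lambda i: evaluate_beam(distances[i], angles[i], p_goal,
                                           alpha))
```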

Fig. 6 The beam appraisal

Search for the instant goal candidate and safe path: In order to find a proper instant goal, it is essential to find the potential safe area, which can be obtained by [11]:

$$\begin{aligned} p_{1{\text{ safe }}}&= \mathop {\text{ max }}\limits _{(P_i,d_i)}(P_i + \frac{d_{\text{ safe }}}{d_i}),{-\pi < P_i < P_{\text{ best }}},\end{aligned}$$
(14)
$$\begin{aligned} p_{2{\text{ safe }}}&= \mathop {\text{ min }}\limits _{(P_i,d_i)}(P_i - \frac{d_{\text{ safe }}}{d_i}),{P_{\text{ best }} < P_i < \pi }, \end{aligned}$$
(15)

where \(p_{2{\text{ safe }}}\) and \(p_{1{\text{ safe }}}\) are the top and bottom angles of the safe area, \(P_{\text{ best }}\) is the direction of the best beam, and \(d_{\text{ safe }}\) is a safety threshold tuned in experiments. Then, the heading angle is decided by comparing \(p_0\), the angle between the robot's position and the goal's position, with the limits of the safe area [11]:

$$ \begin{aligned} \theta _{\text{ heading }} =\left\{ \begin{array}{ll} p_0, & p_0\in [p_{1{\text{ safe }}}, p_{2{\text{ safe }}}]\\ p_{1{\text{ safe }}}, & p_0 < p_{1{\text{ safe }}}\\ p_{2{\text{ safe }}}, & p_0 > p_{2{\text{ safe }}} \end{array}\right. \end{aligned} $$
(16)
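The three steps of Eqs. (14)–(16) can be sketched as one function (our own illustration; `beams` holds (angle, distance) pairs, `p_best` is the direction of the best beam, and the default `d_safe` is a placeholder value, since the paper tunes it experimentally):

```python
import math

def safe_heading(beams, p_best, p_0, d_safe=0.5):
    """Eqs. (14)-(16): compute the safe-area limits around the best beam
    and clamp the goal direction p_0 into the safe area."""
    # Eq. (14): bottom limit, pushed inward by obstacles below p_best.
    p1 = max((p + d_safe / d for p, d in beams if -math.pi < p < p_best),
             default=-math.pi)
    # Eq. (15): top limit, pushed inward by obstacles above p_best.
    p2 = min((p - d_safe / d for p, d in beams if p_best < p < math.pi),
             default=math.pi)
    # Eq. (16): head at the goal if it lies inside the safe area,
    # otherwise at the nearer limit.
    return min(max(p_0, p1), p2)
```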

Determine the velocity: According to the instant goal and the safe path, we determine the angular velocity \(\omega \) so that the heading angle of the robot approaches the instant goal.

$$\begin{aligned} \omega = k_{\omega }(\theta _{i}-\theta _{\text{ heading }}) \end{aligned}$$
(17)

where \(k_{\omega }\) is an adjustable constant, \(\theta _{i}\) is the angle between the robot and the instant goal, and \(\theta _{\text{ heading }}\) is the heading angle of the robot. Moreover, to ensure that the robot does not deviate from the safe path suddenly, it is essential to limit the maximum linear speed \(\upsilon _{\text{ MAX }}\). The linear velocity \(\upsilon \) is defined as follows:

$$ \begin{aligned} \upsilon = \left\{ \begin{array}{ll} \upsilon _{\text{ MAX }}-k_{\upsilon }\mid \omega \mid , & \upsilon _{\text{ MAX }}-k_{\upsilon }\mid \omega \mid > 0\\ 0, & \text{otherwise} \end{array} \right. \end{aligned} $$
(18)
$$\begin{aligned} \upsilon _{r}&=\frac{2\upsilon +\omega D}{2}\end{aligned}$$
(19)
$$\begin{aligned} \upsilon _{l}&=\frac{2\upsilon -\omega D}{2}, \end{aligned}$$
(20)

where \(\upsilon \) is the linear velocity of the robot and \(\upsilon _{r}\) and \(\upsilon _{l}\) are the forward velocities of the right and left wheels.
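Eqs. (17)–(20) reduce to a short velocity-command routine. In this sketch, the gains \(k_{\omega }\), \(k_{\upsilon }\) and the wheel base \(D\) default to the experimental values of Sect. 4, while the default \(\upsilon _{\text{ MAX }}\) is a placeholder, since the paper does not state its value:

```python
def velocity_command(theta_i, theta_heading, v_max=0.5,
                     k_omega=0.4, k_v=1.0, D=0.6):
    """Eqs. (17)-(20): angular velocity from the heading error, linear
    velocity reduced when turning (never negative), and the resulting
    right/left wheel velocities of the differential drive."""
    omega = k_omega * (theta_i - theta_heading)   # Eq. (17)
    v = max(0.0, v_max - k_v * abs(omega))        # Eq. (18)
    v_r = (2.0 * v + omega * D) / 2.0             # Eq. (19)
    v_l = (2.0 * v - omega * D) / 2.0             # Eq. (20)
    return v, omega, v_r, v_l
```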

Detect the goal: We detect whether the robot has arrived at the goal position through the distance constraints

$$\begin{aligned} |x - x_{\text{ goal }}|<\sigma _e,\end{aligned}$$
(21)
$$\begin{aligned} |y - y_{\text{ goal }}|<\sigma _e, \end{aligned}$$
(22)

where \(x_{\text{ goal }}\) and \(y_{\text{ goal }}\) are the goal position and \(\sigma _e \) is the error tolerance.

4 Experiments

In the experiments, the parameters of the robot are set as \(R=0.10\) m, \(D=0.6\) m, \(k_{\upsilon }=1.0\), \(k_{\omega }=0.4\), \(\alpha =0.05\), \(\beta =0.95\), and \(\sigma _e=0.2\) m, and the sampling interval is \(\Delta T=0.1\) s.

4.1 Estimate Start Position

Placing the robot at different initial positions, we estimate the start position; the results are shown in Table 1.

Table 1 Estimated result

In the table, \(x\) and \(y\) are the actual positions, \(x_e\) and \(y_e\) are the estimated positions, and \(e_x\) and \(e_y\) are the relative errors. From the table, we can see that the results of the extended Kalman filter algorithm are acceptable.

4.2 Obstacle Avoidance

The encoder outputs 2048 pulses per revolution; since the control input is an angular velocity, the pulse counts are converted into angular velocities. The laser's accuracy is \(0.01\) m in range and \(1^{\circ }\) in bearing.

So we initialize the measurement noise covariance matrix \(Q\) and control noise covariance matrix \(R\) as follows:

$$ \begin{aligned} R= \left[ \begin{array}{cc} (1.0/2048/\pi /\Delta T)^2 & 0\\ 0 & (1.0/2048/\pi /\Delta T)^2 \end{array} \right] , \quad Q= \left[ \begin{array}{cc} (0.01)^2 & 0\\ 0 & (1.0\pi /180)^2 \end{array} \right] . \end{aligned} $$

After initializing these matrices, we place the robot at \((-5.4,1.0)\), set the goal position at \((-3.0,2.1)\), and run our program. Finally, we obtain the map of our experimental environment shown in Fig. 7.

Fig. 7 The built map of SLAM

Fig. 8 The motion trajectory of robot

In Fig. 7, the blue and green circles are the original position and the goal position, respectively, and the red circle represents the robot position. The motion trajectory is shown in Fig. 8. During the overall process, the robot first moves to the left to gain enough clearance to bypass the desk on the right. Then the robot turns right to avoid the desk and moves to the goal. From the motion trajectory and the velocity trajectory, we can see that the whole process is smooth and continuous, and the final errors are 0.1882 and 0.1044 m, which are within \(\sigma _e\).

5 Conclusions

In this paper, a mobile system that integrates SLAM and obstacle avoidance is implemented. In the SLAM part, by comparing the detected features with landmarks set previously in the global coordinate system, we use an EKF to estimate the pose of the robot. Meanwhile, an obstacle avoidance algorithm plans the motion path to ensure that the robot safely arrives at the goal position. The experimental results are satisfactory.