1 Introduction

Simultaneous localization and mapping (SLAM) is a crucial part of robot navigation in unknown environments. It is needed whenever a robot must locate itself and build a map of the environment at the same time [1]. Using landmark measurements and the robot’s odometry, we can estimate the robot’s position and build a map of the environment. There are two popular approaches to SLAM: EKF-SLAM [2] and FastSLAM [3]. EKF-SLAM is based on the extended Kalman filter (EKF): it uses landmark measurements and odometry information to maintain the filter’s covariance matrix and then performs prediction and correction steps to carry out localization and mapping. However, EKF-SLAM converges quickly only when the robot system is linear. Several approaches have been proposed to address this problem for nonlinear systems. For example, Dissanayake et al. describe a strategy for deleting unsuitable landmarks to improve the accuracy of the robot’s position and landmark estimates [4]. Some soft-computing approaches [5–7] also enhance the EKF-SLAM algorithm. Moreover, the size of the covariance matrix grows quadratically with the number of landmarks, which makes EKF-SLAM computationally expensive for large maps.

Section 2 presents an overview of related work. Section 3 presents a detailed description of the ACSLAM algorithm. Empirical results of several realistic environments are shown in Sect. 4. The paper concludes with Sect. 5.

2 Related Work

To speed up EKF-SLAM, FastSLAM algorithms based on Rao-Blackwellized particle filters (RBPF) [8] have been proposed. The FastSLAM algorithm separates the SLAM problem into a localization phase and a mapping phase, where localization uses a particle filter (PF) and mapping uses EKFs. Unlike EKF-SLAM, FastSLAM estimates each landmark with its own small EKF, which avoids the cost of manipulating a large joint covariance matrix. FastSLAM has two versions, FastSLAM 1.0 [3] and FastSLAM 2.0 [1]. FastSLAM 1.0 uses only odometry to estimate the robot’s position, while FastSLAM 2.0 also incorporates the most recent sensor measurements to improve estimation accuracy. Although FastSLAM 2.0 is more accurate than FastSLAM 1.0, its runtime can increase dramatically as the number of landmarks grows.

CESLAM [9] was proposed to solve this problem. By combining the advantages of FastSLAM 1.0 and FastSLAM 2.0, CESLAM improves both estimation accuracy and computational efficiency. Initially, CESLAM uses only odometry information to estimate the robot’s position; it then updates the particle state and landmark information when a measurement has a maximum likelihood. This paper proposes a modified SLAM algorithm derived from CESLAM, called adaptive computation SLAM (ACSLAM), which improves the runtime performance of the algorithm.

3 ACSLAM for Mobile Robots

The number of particles is a crucial parameter in FastSLAM 1.0, FastSLAM 2.0, and CESLAM. Too few particles will result in poor localization accuracy or aliasing problems, but too many particles will result in poor runtime, since the runtime grows linearly with the number of particles.

The proposed ACSLAM generates a probabilistic guess of the robot’s position similar to CESLAM. The main difference between the new ACSLAM algorithm and CESLAM is that ACSLAM tries to reduce the runtime of the algorithm by dynamically changing the number of particles.

To improve the runtime, ACSLAM uses the effective sample size (ESS) [10] to decide the next generation’s particle number. A small ESS means that the differences between the particles’ importance weights are large, so the particle number is increased. A large ESS means that the differences between the particles’ importance weights are small, so the particle number is decreased. Figure 1 shows the flowchart of the proposed ACSLAM. The steps are described in detail below.

Fig. 1 Flowchart of the proposed ACSLAM

3.1 Generate a New Proposal

For each particle m = 1, …, M, the control input \(u_{t}\) is used to generate a probabilistic guess of the robot’s position:

$$x_t^m \sim p\left( x_t \mid x_{t-1}^m, u_t \right)$$
(1)
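As an illustration, the following is a minimal Python/NumPy sketch of this sampling step. The rotate-then-translate parametrization of the control input and the Gaussian noise model are assumptions made for illustration and are not prescribed by the paper.

```python
import numpy as np

def sample_motion_model(x_prev, u, motion_noise, rng=np.random.default_rng()):
    """Draw a pose sample x_t^m ~ p(x_t | x_{t-1}^m, u_t) for one particle (Eq. 1).

    x_prev       : previous pose [x, y, theta] of the particle
    u            : control input [distance, rotation] (assumed parametrization)
    motion_noise : standard deviations [sigma_dist, sigma_rot] of the odometry noise
    """
    dist = u[0] + rng.normal(0.0, motion_noise[0])  # noisy translation
    rot = u[1] + rng.normal(0.0, motion_noise[1])   # noisy rotation
    theta = x_prev[2] + rot                         # rotate first, then translate
    return np.array([x_prev[0] + dist * np.cos(theta),
                     x_prev[1] + dist * np.sin(theta),
                     theta])
```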

3.2 Data Association

The robot’s position and the information of the landmarks k = 1, …, N in particle m are used to generate the estimated measurements:

$$\hat{z}_t^m = g\left( \hat{x}_t^m, \theta_{k,t-1}^m \right)$$
(2)

If an estimated measurement is out of the laser range, its likelihood is set to zero and further calculation for that landmark is skipped. Otherwise, the estimated measurement is used to compute the likelihood:

$$p_k = \frac{1}{\sqrt{\left| 2\pi S_k \right|}} \exp\left[ -\frac{1}{2}\left( z_t - \hat{z}_{k,t} \right)^{T} S_k^{-1} \left( z_t - \hat{z}_{k,t} \right) \right]$$
(3)

where \(z_{t}\) is an actual measurement gathered by a sensor, \(\hat{z}_{k,t}\) is the estimated measurement, and \(S_{k}\) is the innovation covariance matrix:

$$\begin{aligned} H_{\theta_k} &= \partial g\left( x_t^m, \theta_{k,t-1}^m \right) / \partial \theta_{k,t-1}^m \\ S_k &= H_{\theta_k} \Sigma_{k,t-1}^m H_{\theta_k}^{T} + R_t \end{aligned}$$
(4)

If any estimated measurement’s likelihood is larger than the threshold, the landmark with the largest likelihood is used for the update. Otherwise, the landmark has not been observed before, and it is added to the particle as a new landmark.
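A minimal Python/NumPy sketch of this data-association step is given below. The helper functions measure_fn and jacobian_fn, the landmark dictionary layout, and the convention that the first component of a measurement is the range are assumptions introduced for illustration.

```python
import numpy as np

def associate(z, landmarks, R, x, measure_fn, jacobian_fn, p_threshold, max_range):
    """Find the landmark whose predicted measurement best explains z.

    measure_fn(x, mu)  -> estimated measurement g(x, theta), Eq. (2)
    jacobian_fn(x, mu) -> Jacobian H_theta of g w.r.t. the landmark state
    Each landmark is a dict {"mu": mean, "Sigma": covariance} (assumed layout).
    """
    best_k, best_p = None, 0.0
    for k, lm in enumerate(landmarks):
        z_hat = measure_fn(x, lm["mu"])
        if z_hat[0] > max_range:           # out of laser range: likelihood is zero, skip
            continue
        H = jacobian_fn(x, lm["mu"])
        S = H @ lm["Sigma"] @ H.T + R      # innovation covariance, Eq. (4)
        innov = z - z_hat
        p = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) \
            / np.sqrt(np.linalg.det(2.0 * np.pi * S))          # likelihood, Eq. (3)
        if p > best_p:
            best_k, best_p = k, p
    if best_p > p_threshold:
        return best_k, best_p              # update this landmark
    return None, best_p                    # treat z as a new landmark
```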

3.3 Updating the Particle State and the Landmark Estimates

As in CESLAM, a likelihood larger than the threshold means the landmark has already been recorded. Before updating the landmark, we first use the measurement to correct the particle’s state. This ensures that the landmark is not updated from an incorrect particle pose.

$$\begin{aligned} H_{x_t} &= \partial g\left( x_t^m, \theta_{k,t-1}^m \right) / \partial x_t^m \\ \Sigma_{x_t,t}^m &= \left[ H_{x_t}^{T} S_k^{-1} H_{x_t} + \left( \Sigma_{x_t,t-1}^m \right)^{-1} \right]^{-1} \\ \mu_{x_t,t}^m &= \Sigma_{x_t,t}^m H_{x_t}^{T} S_k^{-1} \left( z_t - \hat{z}_t^m \right) + \mu_{x_t,t-1}^m \end{aligned}$$
(5)
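The following Python/NumPy sketch implements Eq. (5); the variable names, and the assumption that mu_x and Sigma_x are the particle’s prior pose mean and covariance, are illustrative.

```python
import numpy as np

def correct_particle_pose(mu_x, Sigma_x, z, z_hat, H_x, S):
    """Correct the particle's pose with the matched measurement, Eq. (5).

    mu_x, Sigma_x : prior pose mean and covariance of the particle
    z, z_hat      : actual and estimated measurements
    H_x           : Jacobian of g w.r.t. the robot pose
    S             : innovation covariance from Eq. (4)
    """
    S_inv = np.linalg.inv(S)
    Sigma_new = np.linalg.inv(H_x.T @ S_inv @ H_x + np.linalg.inv(Sigma_x))
    mu_new = Sigma_new @ H_x.T @ S_inv @ (z - z_hat) + mu_x
    return mu_new, Sigma_new
```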

Next, the corrected particle state is used to calculate a new estimated measurement, and the landmark estimate is updated by the EKF:

$$\begin{aligned} K &= \Sigma_{\hat{n},t-1}^m H_{\theta_{\hat{n}}}^{T} S_{\hat{n}}^{-1} \\ \mu_{\hat{n},t}^m &= \mu_{\hat{n},t-1}^m + K\left( z_t - \hat{z}_t^m \right) \\ \Sigma_{\hat{n},t}^m &= \left( I - K H_{\theta_{\hat{n}}} \right) \Sigma_{\hat{n},t-1}^m \end{aligned}$$
(6)

If no likelihood is larger than the threshold, the measurement is added to the particle as a new landmark:

$$\begin{aligned} N_t^m &= N_{t-1}^m + 1 \\ \mu_{\hat{n},t}^m &= g^{-1}\left( x_t^m, \hat{z}_{\hat{n},t} \right) \\ \Sigma_{\hat{n},t}^m &= \left( H_{\theta_{\hat{n}}}^{T} R_t^{-1} H_{\theta_{\hat{n}}} \right)^{-1} \end{aligned}$$
(7)
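A minimal Python/NumPy sketch of the landmark EKF update of Eq. (6) and the initialization of Eq. (7) follows; the inverse-measurement helper inverse_measure_fn is an assumed name for g^{-1}.

```python
import numpy as np

def update_landmark(mu, Sigma, z, z_hat, H_theta, S):
    """EKF update of a matched landmark, Eq. (6)."""
    K = Sigma @ H_theta.T @ np.linalg.inv(S)             # Kalman gain
    mu_new = mu + K @ (z - z_hat)
    Sigma_new = (np.eye(len(mu)) - K @ H_theta) @ Sigma
    return mu_new, Sigma_new

def init_landmark(x, z, inverse_measure_fn, H_theta, R):
    """Initialize a new landmark from measurement z, Eq. (7).

    inverse_measure_fn(x, z) -> landmark state g^{-1}(x, z) (assumed helper).
    """
    mu = inverse_measure_fn(x, z)
    Sigma = np.linalg.inv(H_theta.T @ np.linalg.inv(R) @ H_theta)
    return mu, Sigma
```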

3.4 Calculating Importance Weight and Effective Sample Size

We calculate the importance weight by the following equation:

$$\begin{aligned} w_t^m &= \frac{\text{target distribution}}{\text{proposal distribution}} \\ &= \frac{p\left( x^{t,m} \mid z^t, u^t, n^t \right)}{p\left( x^{t-1,m} \mid z^{t-1}, u^{t-1}, n^{t-1} \right) p\left( x_t^m \mid x^{t-1,m}, z^t, u^t, n^t \right)} \end{aligned}$$
(8)

Then, we use the importance weight to calculate the effective sample size (ESS).

$$\mathrm{ESS} = \frac{M}{1 + cv_t^2}$$
(9)

where M is the number of particles and \(cv_{t}^{2}\) is the coefficient of variation of the importance weights:

$$cv_t^2 = \frac{1}{M}\sum_{i=1}^{M} \left( M w(i) - 1 \right)^2$$
(10)

According to the value of the ESS, the resampling particle number is decreased when the ESS is large and increased when the ESS is small.
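The sketch below computes the ESS from Eqs. (9)–(10) and maps it to the next particle number. The linear mapping and the bounds of 10 and 20 particles (taken from the simulation setup in Sect. 4) are assumptions; the paper only states that the count decreases for a large ESS and increases for a small ESS.

```python
import numpy as np

def next_particle_count(weights, m_min=10, m_max=20):
    """Compute the ESS from the importance weights (Eqs. 9-10) and
    choose the next generation's particle number.

    m_min, m_max : bounds on the particle number (10 and 20 in Sect. 4).
    The linear mapping from ESS to particle count is an assumption.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalized importance weights
    M = len(w)
    cv2 = np.mean((M * w - 1.0) ** 2)      # coefficient of variation, Eq. (10)
    ess = M / (1.0 + cv2)                  # effective sample size, Eq. (9)
    frac = 1.0 - ess / M                   # small ESS -> more particles
    return int(round(m_min + frac * (m_max - m_min)))
```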

3.5 Resampling

After each particle has been assigned an importance weight and the next generation’s particle number has been determined, a tournament selection [11] mechanism is adopted for resampling.
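A minimal sketch of tournament-selection resampling is given below; the tournament size k = 2 is an assumption, since the paper does not specify it.

```python
import copy
import numpy as np

def tournament_resample(particles, weights, m_next, k=2, rng=np.random.default_rng()):
    """Resample m_next particles by tournament selection.

    For each slot, k candidate particles are drawn uniformly at random and
    the one with the highest importance weight wins.
    """
    weights = np.asarray(weights, dtype=float)
    new_particles = []
    for _ in range(m_next):
        candidates = rng.integers(0, len(particles), size=k)
        winner = candidates[np.argmax(weights[candidates])]
        new_particles.append(copy.deepcopy(particles[winner]))  # copy so duplicates evolve independently
    return new_particles
```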

4 Simulation Results

To evaluate the performance, we compare FastSLAM 1.0 and FastSLAM 2.0 with the proposed method. The software environment is Visual Studio 2010 with the OpenCV library. The map, 500 × 500 pixels in size, is divided into a sparse block and a compressed (dense) block of landmarks. The robot can move 10 pixels or rotate 15° per step. Since real-world odometry is subject to errors, Gaussian noise is added to both the distance and the rotation. The simulated laser range finder covers 180° and generates 181 measurements against the landmarks in each particle; its maximum range is 100 pixels. The robot moves 100 steps along the path in each run.

In the proposed method, the maximum particle number is set to 20 and the minimum to 10. In our runs, the average particle number of ACSLAM is 16, so the particle number of the other algorithms is also set to 16 for a fair comparison.

Figure 2 depicts the simulation results of the three SLAM algorithms. The pink points are the true robot trajectory, and the gray points are the localization of the best particle. The green circles are the particles, and the yellow circle overlapping the converged green particles is the robot. The black dots with brown circles are the landmarks, and the red circles are the landmark estimates. The laser range scan is shown by blue lines. As shown in Fig. 2, the robot localization and landmark estimates of ACSLAM are more accurate than those of FastSLAM 1.0 and FastSLAM 2.0. Figure 3 shows the localization error at each step. In the compressed block, the localization errors of all three algorithms are small up to the forty-fifth step. After the forty-fifth step, as the landmarks become sparse, the proposed method adjusts the particle number according to the environment and therefore outperforms the others.

Fig. 2 Simulation results of different SLAM algorithms: a FastSLAM 1.0, b FastSLAM 2.0, c ACSLAM

Fig. 3 Localization error of the simulation

Table 1 shows the localization results of the different SLAM algorithms in the simulation. ‘Localization error’ is the error between the estimated and true robot positions; ‘localization RMSE’ is the RMSE between the estimated and true robot positions. The table shows that the localization performance of ACSLAM is better than that of FastSLAM 1.0 and FastSLAM 2.0.

Table 1 Simulation localization results of different SLAM algorithms

Table 2 shows the runtime efficiency of the different SLAM algorithms. Because ACSLAM adapts the number of particles and thereby reduces computation, the proposed method outperforms FastSLAM 1.0 and FastSLAM 2.0.

Table 2 Runtime efficiency of different SLAM algorithms

5 Conclusion

In this paper, we propose an extension of CESLAM. To improve accuracy when a particle’s measurement has a maximum likelihood for a known landmark, we update the particle state before updating the landmark estimate. The proposed algorithm also uses the effective sample size to adjust the number of particles, which improves runtime efficiency. As a result, the proposed ACSLAM combines the advantages of FastSLAM 1.0, FastSLAM 2.0, and CESLAM to achieve better estimation accuracy and runtime in the simulations.