1 Introduction

Simultaneous localization and mapping (SLAM) has been a thrust area of research for the past two decades owing to its significant contributions in disaster management, defence, underwater navigation, unmanned aerial vehicles, etc. [1,2,3,4,5,6,7,8,9,10,11]. In the recent past, the combination of an autonomous mobile robot and a SLAM algorithm has played an indispensable role in disaster management. In particular, autonomous robots have been used for rescue and maintenance operations during gas or radioactivity leak accidents. Since the affected area becomes inaccessible, concurrent mapping of the environment and localization of the robot become vital to identify the exact location of the source. SLAM is a chicken-and-egg problem, as the quality of localization depends upon effective mapping and vice versa. The implementation of SLAM using the extended Kalman filter (EKF) is quite challenging owing to the approximation of real stochastic system and sensor noises as Gaussian; improper tuning of the noise covariances may lead to divergence of the filter over time, resulting in instability of the entire system [12]. Alternative algorithms such as Monte Carlo localization [13,14,15], FastSLAM [16], the occupancy grid method [17] and the unscented Kalman filter (UKF) [18] have also been proposed; these are relatively straightforward and can accommodate noise models other than Gaussian, but are computationally intensive.

Recently, EKF was implemented with a Laser Range Finder (LRF) and a wheel-encoder model using lines as features, in which the accuracy of SLAM was improved by incorporating input and output covariance matrices derived through least-squares techniques [6, 11]. However, the implementation of EKF with line features led to increased computational time in addition to higher memory requirements. The authors also concluded that using point features of the environment may be a better way to enhance the computational speed of SLAM. The type of features, their state and their placement play a crucial role in the successful implementation of EKF [1, 19]. Bacca et al [20] proposed a feature stability histogram method, inspired by the human memory system, to identify static or dynamic features through short- and long-term memory models; this approach improved mapping accuracy and scalability. Above all, the effective implementation of SLAM lies in proper data association between two sets of features taken at different time instances. A Joint Compatibility Test (JCT) approach was proposed over the direct nearest-neighbour method to minimize the possibility of wrong association [21].

The classic recursive least-squares algorithm has been widely regarded as a useful tool to reduce the cost function. Hamzah and Toru [22] showed how the measurement innovation matrix can be used to effectively localize a mobile robot through EKF with intermittent measurements, even when the sensor data are partially or fully missing. It was reported that the success of this method lies in effective initialization of the state and measurement noise covariance matrices. To overcome the memory limitation of EKF, a complete hardware implementation of scan-matching genetic SLAM (SMG-SLAM) on a Field Programmable Gate Array (FPGA) was proposed [23].

Among the available algorithms, the simplicity of EKF in predicting the true state of a system amidst noise has made it unique and popular. Hence, in this work, EKF-based SLAM in a structured environment using a range-only sensor, along with the associated tasks of line fitting, feature extraction and data association, has been implemented. Following one of the recommendations in the literature, point features of the environment are used for effective real-time implementation of EKF-based SLAM.

2 EKF-based SLAM and mathematical models

2.1 Mathematical models

EKF is a mathematical-model-based optimal state estimator. Two types of mathematical models, namely the system and sensor models, are established in EKF, as given in the following equations:

$$ \varvec{X}_{k + 1} = f\left( {\varvec{X}_{k} ,\varvec{U}_{k} ,\varvec{\omega}_{k} } \right) $$
$$ \varvec{Z}_{k + 1} = h\left( {\varvec{X}_{k + 1} ,\vartheta_{k + 1} } \right) $$
(1)
$$ \varvec{\omega}_{k } \sim N\left( {0,\varvec{Q}_{k} } \right) $$
$$ \vartheta_{k } \sim N\left( {0,\varvec{R}_{k} } \right) $$
(2)

where \( \varvec{X}_{k + 1} \) is the estimated state vector at the discrete time instant k+1 for a known input \( \varvec{U}_{k} \), with the system noise \( \varvec{\omega}_{k} \) assumed to be zero-mean and Gaussian. \( \varvec{Z}_{k + 1} \) is the estimated measurement vector of the sensor at the (k+1)-th instant for the estimated state, and \( \vartheta_{k} \) is the sensor noise, again zero-mean and Gaussian. The system noise and measurement noise are assumed to be mutually independent. \( \varvec{Q}_{k} \) and \( \varvec{R}_{k} \) are the noise covariances of the system and sensor, respectively. For a more robust formulation, the noise covariances may themselves be treated as functions of the discrete time instant k, or simply as time-invariant.
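As a minimal illustrative sketch (not part of the original implementation), the following Python fragment propagates a state through one discrete time step under the noise assumptions of Eqs. (1) and (2); the placeholder functions f and h, the dimensions and the numerical values are assumptions for illustration only.

```python
import numpy as np

# Sketch of Eqs. (1)-(2): one discrete step with zero-mean Gaussian noises
# w_k ~ N(0, Q_k) and v_k ~ N(0, R_k). f and h are placeholders; the actual
# models are the odometry model of Eq. (4) and the range-bearing model of Eq. (7).
rng = np.random.default_rng(0)

def f(x, u, w):
    return x + u + w          # placeholder nonlinear system model

def h(x, v):
    return x[:2] + v          # placeholder nonlinear sensor model

Q = 0.01 * np.eye(3)          # assumed system noise covariance
R = 0.02 * np.eye(2)          # assumed sensor noise covariance
x_k = np.zeros(3)             # state [x, y, theta]
u_k = np.array([0.1, 0.0, 0.05])

w_k = rng.multivariate_normal(np.zeros(3), Q)
v_k = rng.multivariate_normal(np.zeros(2), R)
x_next = f(x_k, u_k, w_k)     # Eq. (1)
z_next = h(x_next, v_k)       # Eq. (2)
```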

2.2 Robot model

The mobile robot used for the experiments is shown in figure 1. It is a two-wheeled differential-drive robot with a 2.5 GHz Intel Core i7 quad-core processor. An LRF with a 5.6 m range, 0.02 m resolution, a 240° scanning angle and 0.36° angular resolution is used for environmental scanning. Figure 2 depicts a line diagram of the robot.

Figure 1
figure 1

The Corobot CL2 mobile robot used for the experiment.

Figure 2
figure 2

Line diagram of the mobile robot.

The system state vector comprises the rectangular global coordinates \( (x_{k}, y_{k}) \), the bearing of the robot \( \theta_{k} \) and the global feature coordinates \( (x_{{\text{f}}i}, y_{{\text{f}}i}) \), as given in Eq. (3). The basic system model is the velocity model of the mobile robot, which exhibits linear or angular motion depending upon the voltage excitation given to the servos attached to the wheels. A parameter derived from the velocity through the odometer is used as the model in this experiment. Thus, Eq. (1) can be rewritten with the odometer model as given in Eq. (4):

$$ \varvec{X}_{k} = \left[ {\varvec{X}_{k}^{\text{R}} \,\varvec{X}_{k}^{\text{F}} } \right]^{\text{T}} $$
(3)

where \( \varvec{X}_{k}^{\text{R}} = \left[ {x_{k\,} \,y_{k} \,\theta_{k} } \right] \) and \( \varvec{X}_{k}^{\text{F}} = \left[ {x_{{{\text{f}}1}} \,y_{{{\text{f}}1}} \,x_{{{\text{f}}2}} \,y_{{{\text{f}}2}} \ldots x_{{{\text{f}}N}} \,y_{{{\text{f}}N}} } \right] \)

$$ \varvec{X}_{k + 1} = A\varvec{X}_{k} + B\varvec{U}_{k} + \omega_{k} $$
(4)

where A is the system state matrix and B the input matrix, with \( x_{k + 1} = x_{k} +\Delta x \), \( y_{k + 1} = y_{k} -\Delta y \) and \( \theta_{k + 1} = \theta_{k} +\Delta \theta \).
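A minimal sketch of the odometry-based pose prediction of Eq. (4) is given below; the pose increments are assumed to be already derived from the wheel encoders, and the function name and the angle wrapping are additions for illustration.

```python
import numpy as np

def predict_pose(x_k, y_k, theta_k, dx, dy, dtheta):
    """Propagate the robot pose one step using the odometry increments (Eq. (4)).

    dx, dy, dtheta are the pose increments derived from the wheel encoders;
    the map features X^F are not changed by the motion model.
    """
    x_next = x_k + dx
    y_next = y_k - dy            # sign convention as given in the text
    theta_next = theta_k + dtheta
    # keep the bearing in (-pi, pi]
    theta_next = (theta_next + np.pi) % (2.0 * np.pi) - np.pi
    return x_next, y_next, theta_next
```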

Assuming the noise covariances of the odometer and the features to be \( Q_{k}^{\text{R}} \) and \( Q_{k}^{\text{F}} \), respectively, \( \varvec{Q}_{k} \) can be written as

$$ \varvec{Q}_{k} = \left[ {\begin{array}{*{20}l} {Q_{k}^{\text{R}} } \hfill & {Q_{k}^{\text{R}} Q_{k}^{\text{F}} } \hfill \\ {Q_{k}^{\text{F}} Q_{k}^{\text{R}} } \hfill & {Q_{k}^{\text{F}} } \hfill \\ \end{array} } \right]. $$
(5)

Similarly, the error covariance \( \varvec{P}_{k} \) of the robot and the environmental features is given as follows:

$$ \varvec{P}_{k} = \left[ {\begin{array}{*{20}l} {\left( {{\hat{\sigma }}_{x} } \right)^{2} } \hfill & {{\hat{\sigma }}_{x} {\hat{\sigma }}_{y} } \hfill & {{\hat{\sigma }}_{x} {\hat{\sigma }}_{\theta } } \hfill & {} \hfill & {} \hfill & {} \hfill & {} \hfill \\ {{\hat{\sigma }}_{y} {\hat{\sigma }}_{x} } \hfill & {\left( {{\hat{\sigma }}_{y} } \right)^{2} } \hfill & {{\hat{\sigma }}_{y} {\hat{\sigma }}_{\theta } } \hfill & \cdots \hfill & {} \hfill & {} \hfill & {} \hfill \\ {{\hat{\sigma }}_{\theta } {\hat{\sigma }}_{x} } \hfill & {{\hat{\sigma }}_{\theta } {\hat{\sigma }}_{y} } \hfill & {\left( {{\hat{\sigma }}_{\theta } } \right)^{2} } \hfill & {} \hfill & {} \hfill & {} \hfill & {} \hfill \\ {} \hfill & \vdots \hfill & {} \hfill & \ddots \hfill & {} \hfill & \vdots \hfill & {} \hfill \\ {} \hfill & {} \hfill & {} \hfill & {} \hfill & {\left( {{\hat{\sigma }}_{xN} } \right)^{2} } \hfill & {} \hfill & {{\hat{\sigma }}_{xN} {\hat{\sigma }}_{yN} } \hfill \\ {} \hfill & {} \hfill & {} \hfill & \cdots \hfill & {} \hfill & {} \hfill & {} \hfill \\ {} \hfill & {} \hfill & {} \hfill & {} \hfill & {{\hat{\sigma }}_{yN} {\hat{\sigma }}_{xN} } \hfill & {} \hfill & {\left( {{\hat{\sigma }}_{yN} } \right)^{2} } \hfill \\ \end{array} } \right] . $$
(6)
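The block structure of Eq. (6) can be visualised with a short sketch that augments the joint covariance when a new point feature is initialised. This is a generic EKF-SLAM augmentation shown only to illustrate how P grows with the map; the simplification of leaving the cross terms at zero, the function name and the values are assumptions.

```python
import numpy as np

def augment_covariance(P, sigma_xf, sigma_yf):
    """Append one point feature (x_f, y_f) to the joint covariance of Eq. (6).

    Simplified sketch: the new feature block is initialised as a diagonal 2x2
    block and the robot-feature cross terms are left at zero; a full
    implementation would derive them from the measurement Jacobians.
    """
    n = P.shape[0]
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    P_new[n, n] = sigma_xf ** 2
    P_new[n + 1, n + 1] = sigma_yf ** 2
    return P_new

# usage: start from the 3x3 robot-pose covariance and add two features
P = np.eye(3)
P = augment_covariance(P, 0.1, 0.1)   # -> 5x5
P = augment_covariance(P, 0.1, 0.1)   # -> 7x7
```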

2.3 Measurement model

With a known state \( \varvec{X}_{k} \) of the robot at the k-th instant and a given input \( \varvec{U}_{k} \), the (k+1)-th state is estimated using Eq. (4), and the same is used to predict the observable measurement using the model (7) derived from Eq. (1):

$$ \varvec{Z}_{k + 1} = H\varvec{X}_{k + 1} + \vartheta_{k + 1} $$
(7)

where \( \varvec{Z}_{k} \) contains the observable feature states \( r_{i,k} \) and \( \varphi_{i,k} \). For illustration, let us assume two states of the robot, as shown in figure 3. The global map \( G_{M,k} \) at the k-th instant contains the features \( G_{M,k} = \{F_{1}, F_{2}, F_{3}\} \), of which the visible features \( V_{F} \) during state k are \( V_{F,k} = \{F_{1}, F_{2}, F_{3}\} \) and during state k+1 are \( V_{F,k+1} = \{F_{2}, F_{3}, F_{4}\} \). To reduce the computational burden, only the features associated between the current observation (k+1) and the global map at the k-th instant are considered for measurement estimation using Eq. (8). From figure 3, the associated features between the map and the (k+1)-th state are \( A_{F,k} = V_{F,k+1} \cap G_{M,k} = \{F_{2}, F_{3}\} \). The measurement model given by Eq. (7) can be elaborated as follows:

$$ \varvec{Z}_{k} = \left[ {\begin{array}{*{20}c} {r_{1,k} } & {r_{2,k} } & {r_{3,k} } & \cdots & {r_{m,k} } \\ {\varphi_{1,k} } & {\varphi_{2,k} } & {\varphi_{3,k} } & \cdots & {\varphi_{m,k} } \\ \end{array} } \right] $$
(8)
Figure 3
figure 3

Illustration of data variables with respect to local and global frame of reference.
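A sketch of the range-bearing prediction behind Eqs. (7) and (8) follows: for each associated feature, the expected range r and bearing φ are computed from the predicted robot pose and the feature's global coordinates. The helper name and the numerical values in the usage lines are assumptions for illustration.

```python
import numpy as np

def predict_range_bearing(x_k, y_k, theta_k, x_f, y_f):
    """Predicted measurement (r, phi) of one point feature from the robot pose."""
    dx, dy = x_f - x_k, y_f - y_k
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - theta_k
    phi = (phi + np.pi) % (2.0 * np.pi) - np.pi   # wrap bearing to (-pi, pi]
    return r, phi

# usage: build the 2 x m predicted measurement matrix of Eq. (8)
pose = (0.0, 0.0, 0.0)
features = [(1.0, 0.5), (2.0, -0.3)]
Z = np.array([predict_range_bearing(*pose, xf, yf) for xf, yf in features]).T
```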

Assume the noise covariance of the sensor to be \( \varvec{R}_{i,k} = \left[ {\begin{array}{*{20}c} {\left( {\sigma_{i,k}^{r} } \right)^{2} } & 0 \\ 0 & {\left( {\sigma_{i,k}^{\varphi } } \right)^{2} } \\ \end{array} } \right] \), where the noises in r and φ are completely independent and hence the off-diagonal elements are zero. Initialization of the noise covariances \( \varvec{Q}_{k} \) and \( \varvec{R}_{k} \) of the system and measurement, respectively, is very crucial in implementing EKF, as improper initialization may lead to divergence and instability. The values of Q and R were initialized as follows for the experimental studies:

$$ \varvec{Q} = \left[ {\begin{array}{*{20}l} {0.0225} \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {0.0225} \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {0.5} \hfill \\ \end{array} } \right],\,\varvec{R} = \left[ {\begin{array}{*{20}l} {0.0125} \hfill & 0 \hfill \\ 0 \hfill & {0.035} \hfill \\ \end{array} } \right] $$
$$ \varvec{P}_{k} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & {} & {} & {} & {} \\ 0 & 1 & 0 & \cdots & {} & {} & {} \\ 0 & 0 & 1 & {} & {} & {} & {} \\ {} & \vdots & {} & \ddots & {} & \vdots & {} \\ {} & {} & {} & {} & 1 & {} & 0 \\ {} & {} & {} & \cdots & {} & {} & {} \\ {} & {} & {} & {} & 0 & {} & 1 \\ \end{array} } \right] $$

The size of P increases as the number of features identified from the environment increases. The off-diagonal elements change from zero immediately after the first update. The value of P decreases as the innovation reduces, varying in the same direction as the innovation, and the matrix remains symmetric.
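For reference, the experimental initialisation given above can be written directly in code; the 3x3 P shown here corresponds to the robot pose alone and is extended as features are added. This is only a restatement of the values quoted in the text.

```python
import numpy as np

# Noise covariances used in the experiments (values from the text)
Q = np.diag([0.0225, 0.0225, 0.5])    # system noise: x, y, theta
R = np.diag([0.0125, 0.035])          # sensor noise: range, bearing

# Initial error covariance for the robot pose; in the full filter P is
# (3 + 2N) x (3 + 2N) and grows each time a new feature is mapped.
P = np.eye(3)
```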

2.4 State correction using EKF

From Eqs. (4)–(8) it can be seen that the system and sensor models are nonlinear; hence the A and H matrices cannot be obtained directly, as both vary dynamically with respect to k. A first-order Taylor series approximation is used to find the Jacobians \( A_{J,k} \) and \( H_{J,k} \) as follows:

$$ A_{J,k} = \left[ {\begin{array}{*{20}l} {A_{J,k}^{\text{R}} } \hfill & 0 \hfill \\ 0 \hfill & {A_{J,k}^{\text{F}} } \hfill \\ \end{array} } \right] $$

where \( A_{J,k}^{\text{R}} = \frac{{\partial \varvec{f}}}{{\partial \varvec{X}_{k}^{\text{R}} }} = \left[ {\begin{array}{*{20}l} 1 \hfill & 0 \hfill & { -\Delta y_{k} } \hfill \\ 0 \hfill & 1 \hfill & {\Delta x_{k} } \hfill \\ 0 \hfill & 0 \hfill & 1 \hfill \\ \end{array} } \right] \) and \( A_{J,k}^{\text{F}} = \frac{\partial f}{{\partial \varvec{X}_{k}^{\text{F}} }} = \left[ {\begin{array}{*{20}l} {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \hfill & \cdots \hfill & {\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array} } \hfill \\ \vdots \hfill & \ddots \hfill & \vdots \hfill \\ {\begin{array}{*{20}c} 0 & 0 \\ 0 & 0 \\ \end{array} } \hfill & \cdots \hfill & {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \hfill \\ \end{array} } \right]. \)

$$ H_{J,k} = \frac{{\partial \varvec{h}}}{{\partial \varvec{X}_{k} }} = \left[ {\begin{array}{*{20}l} {\frac{{\left( {x_{k} - x_{{{\text{f}}i}} } \right)}}{{r_{i,k} }}} \hfill & {\frac{{\left( {y_{k} - y_{{{\text{f}}i}} } \right)}}{{r_{i,k} }}} \hfill & 0 \hfill \\ {\frac{{\left( {y_{{{\text{f}}i}} - y_{k} } \right)}}{{\left( {r_{i,k} } \right)^{2} }}} \hfill & {\frac{{\left( {x_{k} - x_{{{\text{f}}i}} } \right)}}{{\left( {r_{i,k} } \right)^{2} }}} \hfill & { - 1} \hfill \\ \end{array} } \right]. $$
(9)

The prior error covariance matrix, generated before any observation of the environment through the sensor during state k, is updated using Eq. (10) and is then used to determine the Kalman gain \( \varvec{K}_{k} \) using Eq. (11). The Jacobian \( H_{J,k} \) and covariance \( \varvec{R}_{i,k} \) are calculated for every associated feature during state k:

$$ \varvec{P}_{k + 1}^{ - } = A_{J,k} \varvec{P}_{k}^{ + } A_{J,k}^{T} + \varvec{Q}_{k} $$
(10)
$$ \varvec{K}_{k} = \varvec{P}_{k + 1}^{ - } H_{J,k}^{T} \left( {H_{J,k} \varvec{P}_{k + 1}^{ - } H_{J,k}^{T} + \varvec{R}_{i,k} } \right)^{ - 1} $$
(11)
$$ \varvec{X}_{k + 1}^{ + } = \varvec{X}_{k + 1} + \varvec{K}_{k} \left( {\hat{\varvec{Z}}_{k + 1} - \varvec{Z}_{k + 1} } \right) $$
(12)

where \( \left( {\hat{\varvec{Z}}_{k + 1} - \varvec{Z}_{k + 1} } \right) \) is the innovation \( \varvec{C}_{k} \), \( \varvec{Z}_{k+1} \) is the estimated measurement from Eq. (7) and \( \hat{\varvec{Z}}_{k+1} \) is the observed data at the (k+1)-th instant. The posterior covariance is updated as follows:

$$ \varvec{P}_{k + 1}^{ + } = \left( {I - \varvec{K}_{\varvec{k}} H_{J,k} } \right)\varvec{P}_{k + 1}^{ - } . $$
(13)
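The correction cycle of Eqs. (10)–(13), together with the measurement Jacobian of Eq. (9), can be summarised in a compact sketch for a single associated feature. The indexing of the feature block inside the full state vector is simplified here, and the function name, argument layout and angle wrapping are assumptions; this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def ekf_update_single_feature(x_pred, P_pred, z_obs, feat_idx, R_i):
    """One EKF correction with a single associated point feature.

    x_pred   : predicted (prior) state [x, y, theta, xf1, yf1, ...]
    P_pred   : prior error covariance, Eq. (10) already applied
    z_obs    : observed (r, phi) of the associated feature
    feat_idx : index of the feature block inside the state vector
    R_i      : 2x2 sensor noise covariance of this feature
    """
    x, y, theta = x_pred[0], x_pred[1], x_pred[2]
    xf, yf = x_pred[feat_idx], x_pred[feat_idx + 1]

    dx, dy = xf - x, yf - y
    r = np.hypot(dx, dy)
    z_pred = np.array([r, np.arctan2(dy, dx) - theta])

    # Measurement Jacobian (Eq. (9)) w.r.t. the robot pose and the feature
    n = x_pred.size
    H = np.zeros((2, n))
    H[0, 0], H[0, 1] = -dx / r, -dy / r
    H[1, 0], H[1, 1], H[1, 2] = dy / r**2, -dx / r**2, -1.0
    H[0, feat_idx], H[0, feat_idx + 1] = dx / r, dy / r
    H[1, feat_idx], H[1, feat_idx + 1] = -dy / r**2, dx / r**2

    S = H @ P_pred @ H.T + R_i                  # innovation covariance (Eq. (14))
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain (Eq. (11))
    innovation = z_obs - z_pred                 # C_k in Eq. (12)
    innovation[1] = (innovation[1] + np.pi) % (2.0 * np.pi) - np.pi
    x_post = x_pred + K @ innovation            # state correction (Eq. (12))
    P_post = (np.eye(n) - K @ H) @ P_pred       # posterior covariance (Eq. (13))
    return x_post, P_post
```

In the full filter this update is applied for every associated feature during state k, with the prediction of Eq. (10) performed once per time step.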

3 Line fitting and feature extraction

Two line-fitting approaches are used for the experimental study.

3.1 Scaled split and merge technique (SSM)

The data set U containing the processed data from the LRF is divided into subsets \( S_{i} \), where i varies from 1 to N. Each \( S_{i} \) is created in such a way that it contains consecutive data points from U whose mutual distance is less than or equal to \( E_{d} \) = 30 mm. If the distance between two consecutive points is greater than \( E_{d} \), a new subset \( S_{i} \) is created under the same condition. A subset \( S_{i} \) is valid only if it contains at least \( N_{p} \) = 5 continuous points, assuming a minimum valid line length of 75 mm (15 mm × \( N_{p} \)). A new \( S_{i} \) indicates a break in the continuous line segment, which may suggest the end of a wall, a door opening or invisibility due to the viewing angle of the LRF. Each \( S_{i} \) is further divided into small subsets \( W_{i,j} \) containing five points each, where j varies from 1 to M. A sketch of this segmentation step is given below.
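The following sketch splits an ordered scan into the subsets \( S_{i} \) using the distance and size thresholds quoted above; the function name is an assumption and units are metres.

```python
import numpy as np

def segment_scan(points, e_d=0.030, n_p=5):
    """Split an ordered LRF scan into contiguous subsets S_i.

    points : (N, 2) array of Cartesian scan points in metres, in scan order.
    A new subset starts whenever consecutive points are farther apart than
    e_d (30 mm); subsets with fewer than n_p (5) points are discarded.
    """
    subsets, current = [], [points[0]]
    for prev, curr in zip(points[:-1], points[1:]):
        if np.linalg.norm(curr - prev) <= e_d:
            current.append(curr)
        else:
            if len(current) >= n_p:
                subsets.append(np.array(current))
            current = [curr]
    if len(current) >= n_p:
        subsets.append(np.array(current))
    return subsets
```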

Normal line fitting is done with \( W_{1,1} \), and the perpendicular distances of the points in \( W_{1,2} \) from this line are checked. If at least three points of \( W_{1,2} \) lie within the threshold distance \( T_{d} \) of the line fitted to \( W_{1,1} \), the two sets are combined to form a new line, and this process is repeated with the successive sets. If the threshold condition \( T_{d} \) fails, an edge is detected and a new line fit is started; a sketch of this merging step follows. The algorithm shown in figure 4 explains the line-fitting approach in a simpler way and a schematic of the same is shown in figure 5.
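A sketch of the merging step is given below, assuming the five-point windows \( W_{i,j} \) have already been formed; the line-fitting helper, the function names and the value of the threshold T_d are assumptions for illustration.

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line fit; returns unit normal n and offset c with n.p = c."""
    centroid = pts.mean(axis=0)
    # the smallest right singular vector of the centred points is the line normal
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, n @ centroid

def merge_windows(windows, t_d=0.02):
    """Grow line segments by merging successive 5-point windows W_{i,j} (SSM step).

    A window is merged into the current segment if at least three of its points
    lie within the perpendicular-distance threshold t_d of the fitted line;
    otherwise an edge is declared and a new segment is started.
    """
    segments, current = [], windows[0]
    for w in windows[1:]:
        n, c = fit_line(current)
        dists = np.abs(w @ n - c)              # perpendicular distances to the line
        if np.sum(dists <= t_d) >= 3:
            current = np.vstack([current, w])  # extend the current line
        else:
            segments.append(current)           # edge detected: close the segment
            current = w
    segments.append(current)
    return segments
```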

Figure 4
figure 4

Algorithm for line fitting using SSM technique.

Figure 5
figure 5

Schematic of line fitting approach using SSM technique.

3.2 Split only (SO) technique

The subsets \( S_{i} \) are created from the set U in the same way as in the previous technique. The end points of the subset \( S_{i} \) for i = 1 are joined by a straight line \( L_{1} \), and the point in the same set with the maximum deviation from this line is identified. If this deviation exceeds the allowable deviation \( T_{d} \), \( S_{1} \) is broken at this point into two subsets \( S_{11} \) and \( S_{12} \). Straight-line fitting is done for \( S_{11} \) by the line \( L_{11} \). The initial and final points of \( S_{12} \) are joined by a straight line \( L_{12} \) and the point in \( S_{12} \) with the maximum orthogonal deviation is identified. Again \( S_{12} \) is broken into subsets \( S_{121} \) and \( S_{122} \). Straight-line fitting \( L_{121} \) is done for \( S_{121} \) and the procedure is repeated until all the possible lines are fitted in the set \( S_{1} \). The same is carried out for all \( S_{i} \), with i varying from 1 to N. This is the simplest method for identifying straight lines from a set of LRF points; a sketch is given below. The algorithm used for this approach is shown in figure 6 and a schematic of the same is depicted in figure 7.
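The recursive splitting described above can be sketched as follows for one subset \( S_{i} \); the function name, the return format and the value of the deviation threshold are assumptions for illustration.

```python
import numpy as np

def split_only(points, t_d=0.02):
    """Recursive SO line fitting: split at the point of maximum orthogonal deviation.

    points : (N, 2) ordered scan points of one subset S_i.
    Returns a list of index pairs (start, end) delimiting the fitted line segments.
    """
    segments = []

    def recurse(lo, hi):
        p0, p1 = points[lo], points[hi]
        chord = p1 - p0
        length = np.linalg.norm(chord)
        if length == 0 or hi - lo < 2:
            segments.append((lo, hi))
            return
        # orthogonal distance of every point between lo and hi to the chord p0-p1
        seg = points[lo:hi + 1] - p0
        d = np.abs(chord[0] * seg[:, 1] - chord[1] * seg[:, 0]) / length
        k = int(np.argmax(d))
        if d[k] > t_d:
            recurse(lo, lo + k)       # split at the worst point, as in figure 6
            recurse(lo + k, hi)
        else:
            segments.append((lo, hi))

    recurse(0, len(points) - 1)
    return segments
```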

Figure 6
figure 6

Algorithm for line fitting using SO technique.

Figure 7
figure 7

Schematic of line fitting approach using SO technique.

3.3 Comparison of SSM and SO

A detailed comparison was made between the SSM- and SO-based line-fitting algorithms. Though both techniques identify all the edges, only orthogonal edges were considered as features for localization. The SSM technique is able to identify long lines but fails for short lines, as the subset size has to be reinitialized for short walls. This approach sometimes detects false edges, and end points are wrongly interpreted owing to the restriction on sample size. The SO technique proved to be better for both shorter and longer walls. The manipulation time was considerably lower in the SO approach because line fitting is done only when an edge is detected, unlike in SSM, where line fitting is carried out with every additional subset. The SO technique is primarily used in this application, but the SSM approach is used when a conflict occurs in identifying a feature. The idea behind the SO technique is to keep the identification process close to the way a human would interpret a scene and extract its features.

Figure 8 shows the line fitting done using the SSM method and figure 9 that using the SO method on real-time data. It is evident from figure 8a (SSM) and figure 9a (SO) that both approaches are able to fit straight lines when the laser beam is intercepted by longer walls. However, figure 8b shows that SSM performs poorly in fitting a straight line to short walls compared with figure 9b, where the SO method is employed. Finally, figure 10 shows the features extracted using the SO technique. Table 1 compares the SSM and SO techniques in terms of potential line fitting and feature identification; the % potential wall identification is the ratio of the number of walls identified by the algorithm to the actual number of walls present in the environment. The analysis was based on 500 scans collected at three different places around the experimental arena. It is evident from the analysis that the SO algorithm gives better results than SSM in terms of % potential wall identification per scan for both long and short walls.

Figure 8
figure 8

(a) Line fitting using SSM technique for long walls. (b) Line fitting using SSM technique for short walls.

Figure 9
figure 9

(a) Line fitting using SO technique for long walls, (b) Line fitting using SO technique for short walls.

Figure 10
figure 10

Extracted features after line fitting and noise removal.

Table 1 Comparison of potential wall identification using SSM and SO techniques.

The performance of the SSM algorithm can, however, be improved by adding stricter constraints and an adaptive threshold when selecting the points that form a line, although this comes at the cost of computational time and complexity.

4 Association of features

Associating the features identified at the k-th instant with the global map built up to the (k−1)-th instant is very important for finding the innovation \( \varvec{C}_{k} \). A simple Mahalanobis-distance-based nearest-neighbour method is used for data association. \( \varvec{S}_{k} \), as derived from Eq. (11), is the innovation covariance given as follows:

$$ \varvec{S}_{k} = \varvec{H}_{Ji} \varvec{P}_{k}^{ - } \varvec{H}_{Ji}^{\text{T}} + \varvec{R} . $$
(14)

Once the innovation covariance is obtained, the features observed by the sensor at the k-th instant and the features mapped up to the (k−1)-th instant are passed through a basic pre-validation gate to remove spurious data, which passes only the most probable candidates to the Mahalanobis distance calculation given by Eq. (15). This validation gate significantly reduces the computational overhead of the entire algorithm and also reduces the probability of wrong association.

$$ \varvec{B}_{k} \subset D_{k}^{2} = \left( {\varvec{Z} - \hat{\varvec{Z}}} \right)_{k}^{T} S^{ - 1} \left( {\varvec{Z} - \hat{\varvec{Z}}} \right)_{k} < \chi_{d,\alpha }^{2} $$
(15)

where \( \varvec{B}_{k} \) contains the best possible associated features that clear the chi-square hypothesis test, α is the significance level and d the number of degrees of freedom, which is 2 in this case. The significance level α is set at 0.9; candidate pairings whose Mahalanobis distance exceeds the corresponding chi-square value are rejected as not associated. This value was arrived at after repeated trials under various environmental conditions. A sketch of this gating step is given below.
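The gated nearest-neighbour association of Eqs. (14) and (15) can be sketched as follows for one observation; the function name and argument layout are assumptions, and the chi-square threshold is obtained from SciPy for d = 2 and the significance level quoted above.

```python
import numpy as np
from scipy.stats import chi2

def associate(z_obs, candidates, H_list, P_prior, R, alpha=0.9):
    """Nearest-neighbour association of one observation using the gate of Eq. (15).

    z_obs      : observed (r, phi) of a feature at instant k
    candidates : list of predicted (r, phi) for the mapped features
    H_list     : measurement Jacobians of the corresponding features
    P_prior    : prior error covariance
    R          : 2x2 sensor noise covariance
    """
    gate = chi2.ppf(alpha, df=2)                   # chi-square gate, 2 degrees of freedom
    best, best_d2 = None, np.inf
    for idx, (z_pred, H) in enumerate(zip(candidates, H_list)):
        S = H @ P_prior @ H.T + R                  # innovation covariance (Eq. (14))
        v = np.asarray(z_obs) - np.asarray(z_pred)
        v[1] = (v[1] + np.pi) % (2.0 * np.pi) - np.pi
        d2 = v @ np.linalg.solve(S, v)             # squared Mahalanobis distance (Eq. (15))
        if d2 < gate and d2 < best_d2:
            best, best_d2 = idx, d2
    return best                                    # None if no candidate passes the gate
```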

5 Practical implementation of EKF

The Corobot CL2 mobile robot with skid steering was used for implementing EKF-based SLAM in real time in a structured environment. The environment was designed in such a way that the mobile robot had to twist and turn very frequently, which introduces considerable drift and slip; this allows the effectiveness of the algorithm in predicting the states of the robot amidst slip and other noises to be assessed. The navigated path and the resulting map are shown in figure 11a. Further, to analyse the short-term loop-closing performance, a major criterion for validating the effectiveness of a SLAM algorithm [24], the proposed point-feature-based localization method was tested and the results are shown in figure 11b. The robot travelled a distance of nearly 80 m with a looping length of 18 m. It is evident from figure 11b that the path updated through EKF is consistent and matches the actual robot path, validating the robustness of the implemented SLAM algorithm.

Figure 11
figure 11

(a) Map generated using EKF along with the robot trajectory. (b) Path of the mobile robot depicting loop closure.

The convergence of the filter may be largely affected when the noise covariances are not properly tuned. Hence, the repeatability of the algorithm was analysed through multiple runs, and the deviation in the generated map was found to be less than 2% at all times. With respect to the effectiveness of the feature extraction algorithm, almost all the visible features were identified and associated, with a maximum deviation of ±0.08 m in the X direction and ±0.11 m in the Y direction, as shown in figure 12.

Figure 12
figure 12

(a) The deviation of X coordinates of the identified features from its mean. (b) The deviation of Y coordinates of the identified features from its mean.

The analysis of the EKF algorithm in terms of the variation of the error covariance in X, Y and θ of the mobile robot is quite important. It is seen from figure 13 that at two points the error covariance exceeds the threshold of 0.02; this can be attributed to the absence of associated features in the observed map. This can also be corroborated with the map generated in figure 11, where the wall thickness grows inappropriately at two instances until a feature association is made again. Once a feature association is made, figure 13 shows that the error covariance reduces continuously, validating the performance of the EKF algorithm.

Figure 13
figure 13

Error covariance plot for X, Y and θ (bearing) of the mobile robot.

Figure 14 shows the filter innovations for the distance and angle of the features with respect to the mobile robot during part of the trial. The variations in the filter innovation can be attributed to abnormal variations in the wall thickness of the generated map. Figure 15 illustrates the residuals of the estimated position and angle of the features against the measured values through their correlations, which are white. The position and angle of the mobile robot are parameters derived from these estimates. It can be observed that the auto-correlation shows a single peak at the centre of the plot.

Figure 14
figure 14

Filter innovations for distance and angle of features with respect to the robot.

Figure 15
figure 15

Correlation of estimated positions, angle of the features against measured values.

6 Conclusion and further work

The implementation of EKF considering only point features has been carried out in a real-time environment designed to be complex in terms of feature locations and spacing. Effective tuning of the system and measurement noise covariance values through experiments leads to a highly precise implementation of SLAM with a mapping accuracy of ±0.11 m. The variation in the error covariance matrix is consistent with the variation of the finally predicted state of the mobile robot against its actual position. The entire work was done in a real-time structured indoor environment and can be extended to outdoor implementation. More distinct point features can be identified so that the temporary robot-kidnapping problem, which may occur due to intermittent data-association failure, can be avoided. The robustness of the proposed algorithm, confirmed through short- and long-term loop closure, encourages its application to real-time outdoor environments.