
1 Introduction

This paper deals with the indoor localization of a mobile robot. Localization is one of the main tasks in mobile robotics. A mobile robot navigation system consists of two essential parts. The first is an absolute localization method, which is the main topic of this paper. The second is odometry. The commonly used type of odometry is based on drive encoders, which measure the distance travelled by the robot's wheels. The encoders count the revolutions of each wheel, so the travelled distance depends on the wheel diameter. The distances travelled by the pair of wheels define the trajectory of the mobile robot, which also depends on the type of vehicle chassis. The main problem of odometry is the continuously accumulating random measurement error. This paper presents an experimental odometry approach based on computer vision.

This paper describes specific parts of a localization and mapping system. Figure 1 contains a schematic of the system, whose inputs are image data, the distance measured by a range finder, odometry data and navigation data [4, 5]. The image data are used for mapping the environment: the map is built continuously while the mobile robot moves. Mapping is based on phase correlation [1,2,3, 7] and an image stitching method. Visual odometry has two inputs. The first is the data from the range finder, which measures the distance from the camera to the ceiling. The second is the data from the mapping module, in particular the global shift obtained from the phase correlation. The global shift is expressed in pixels and is converted to millimetres using the known distance to the ceiling and the camera parameters; this converted value is the output of the visual odometry part. The localization part has several inputs. The first is the image data, which are used by the particle filter for re-sampling. The second is the measured step from visual odometry, which initializes the location estimation. The next is the continuously created map of the environment. Another input is the navigation data [4, 5], which determine the vehicle turning angle in each measurement step. The last input is the data from wheel odometry, which initializes the location estimation in the same way as visual odometry; both odometry sources are fused together. The output of the localization part is the position of the mobile robot in the map of the environment. The visual odometry and localization parts are described in this paper.

Fig. 1. Mapping and localization schematic.

This paper is organized as follows: Sect. 2 describes the experimental odometry approach. Section 3 explains the localization method, which is based on a probabilistic approach. Practical experiments are described in Sect. 4. The last section is reserved for conclusions and future work.

2 Visual Odometry

Let us focus on the experimental odometry approach. In contrast to the commonly used approach, where the main sensors are encoders, the main odometry sensor here is a camera. The camera is mounted perpendicular to the ceiling. The presented odometry uses the mapping part, whose global shift is the input to visual odometry. The global shift is computed by the phase correlation [1,2,3, 7] that is part of the mapping module. The other input is the distance measured by the range finder. The global shift is in pixel units and has to be converted to real units of distance. The camera diagonal angle of view can be used for this purpose; it can be determined experimentally or taken from the camera datasheet, if it is stated there. The conversion formula is given in (1), and Table 1 contains the legend for (1). This part is not essential for the whole system, but the current testing platform does not contain any odometry system, as described in the second practical experiment. This part was therefore made to replace the missing odometry on the current testing platform, although it is still considered for the final solution.

Table 1. Variables description in (1).
shift: travelled distance in real units of distance
dist: distance to the ceiling measured by the range finder
angle: camera diagonal angle of view
diagLengthPix: diagonal length of the captured image in pixels
globalShift: global shift from the phase correlation in pixels
$$ shift = \left| \frac{dist \cdot \tan(angle)}{diagLengthPix} \cdot globalShift \right| $$
(1)
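For illustration, a minimal C++ sketch of the conversion in (1) could look as follows; the function name and the units (millimetres for the distance, radians for the angle) are assumptions, not taken from the authors' implementation.

```cpp
#include <cmath>

// Sketch of the conversion in (1): a pixel shift from phase correlation is
// converted to a travelled distance in real units. Units are assumed:
// dist in millimetres, angle in radians, result in millimetres.
double pixelShiftToMillimetres(double dist,            // distance to the ceiling (range finder)
                               double angle,           // camera diagonal angle of view
                               double diagLengthPix,   // image diagonal length in pixels
                               double globalShift)     // global shift in pixels
{
    return std::fabs((dist * std::tan(angle) / diagLengthPix) * globalShift);
}
```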

3 Localization Method

Localization is based on a probabilistic method [6], in particular on particle filters. A simplified schematic of the localization algorithm is shown in Fig. 1. The localization part has several inputs: image data, data from wheel odometry, data from visual odometry, navigation data and the continuously created map of the environment.

A weighted mean is calculated from both odometry sources. Once the previously determined measurement step has been travelled, position estimation is started, as sketched below.
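The following sketch illustrates this fusion step; the equal weights are an assumption made for illustration only, since the paper does not state how the two odometry sources are weighted.

```cpp
// Weighted mean of the two odometry readings (wheel odometry and visual
// odometry) for one measurement; the weights are illustrative, not from the paper.
double fuseOdometry(double wheelStep, double visualStep,
                    double wWheel = 0.5, double wVisual = 0.5)
{
    return (wWheel * wheelStep + wVisual * visualStep) / (wWheel + wVisual);
}
```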

Sample generation is driven by the motion model of the mobile robot, which depends on the type of chassis. A differential chassis was chosen in this case. The motion model is given by (2) and (3).

$$ x = d \cdot \cos (\varphi ) $$
(2)
$$ y = d \cdot \sin (\varphi ) $$
(3)

Here x and y represent the robot position in the environment, d is the distance travelled per step and φ represents the orientation of the mobile robot. Sample generation uses this motion model as described above. Samples are generated by formulas (4) and (5).

$$ x_{i} = d_{N} \cdot \cos (\varphi_{N} + \varphi ) $$
(4)
$$ y_{i} = d_{N} \cdot \sin (\varphi_{N} + \varphi ) $$
(5)

Here x_i and y_i represent the position of the sample (also called particle) with index i. The variables d_N and φ_N were randomly generated from distributions with parameters μ = d, σ_d = 1 and μ = φ, σ_φ = 4, respectively. The variable φ is the current turning angle of the chassis in this step, passed from navigation. The navigation [4, 5] also provides the information whether the mobile robot is moving forward or backward, which is integrated into the localization as well.
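A minimal C++ sketch of the sample generation in (4) and (5) might look as follows; the use of a normal distribution, the angle units (degrees) and the addition of the displacement to the previous position estimate are assumptions for illustration.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, weight; };

// Generates samples around the expected motion according to (4) and (5).
// d ... distance travelled per step, phi ... current turning angle (degrees assumed),
// xPrev, yPrev ... previous position estimate the displacement is added to (assumption).
std::vector<Particle> generateSamples(int count, double xPrev, double yPrev,
                                      double d, double phi, bool forward)
{
    static const double kDegToRad = 3.14159265358979323846 / 180.0;
    std::mt19937 gen(std::random_device{}());
    std::normal_distribution<double> distD(d, 1.0);     // d_N:   mu = d,   sigma_d   = 1
    std::normal_distribution<double> distPhi(phi, 4.0); // phi_N: mu = phi, sigma_phi = 4

    std::vector<Particle> samples;
    samples.reserve(count);
    const double dir = forward ? 1.0 : -1.0;            // forward/backward from navigation
    for (int i = 0; i < count; ++i) {
        const double dN     = dir * distD(gen);
        const double angRad = (distPhi(gen) + phi) * kDegToRad;  // (phi_N + phi) as in (4), (5)
        samples.push_back({xPrev + dN * std::cos(angRad),
                           yPrev + dN * std::sin(angRad),
                           0.0});
    }
    return samples;
}
```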

After that, all samples are placed into the map of the environment. For each sample, the area around the sample position is extracted from the map; its dimensions are the same as the image data dimensions. The currently captured image is compared with each particle's area. Phase correlation [1,2,3, 7] is used for this comparison. It is not the same phase correlation as in the mapping task, but a simplified version of the algorithm, which is able to register shift only.
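As an illustration of this step, the following sketch compares the current image with the map patch around one particle using OpenCV's cv::phaseCorrelate; the patch extraction details and the use of the library function instead of the authors' simplified phase correlation are assumptions.

```cpp
#include <cmath>
#include <limits>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Returns the magnitude of the shift between the current camera image and the
// map area around one particle position. Single-channel (grayscale) map and
// image are assumed.
double particleShift(const cv::Mat &map, const cv::Mat &currentImage,
                     const cv::Point &particlePos)
{
    // Extract the area around the particle, same size as the camera image.
    cv::Rect roi(particlePos.x - currentImage.cols / 2,
                 particlePos.y - currentImage.rows / 2,
                 currentImage.cols, currentImage.rows);
    roi &= cv::Rect(0, 0, map.cols, map.rows);          // clip to the map bounds
    if (roi.size() != currentImage.size())
        return std::numeric_limits<double>::max();      // particle area leaves the map

    cv::Mat patch, image;
    map(roi).convertTo(patch, CV_32F);
    currentImage.convertTo(image, CV_32F);

    cv::Point2d shift = cv::phaseCorrelate(patch, image); // registered translation
    return std::hypot(shift.x, shift.y);
}
```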

The next step is the weight calculation, which is based on the registered shifts. The general rule is: the smaller the shift, the higher the weight.

After that, all weights have to be normalized, which means they are converted to the closed interval [0, 1].
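A sketch of the weighting and normalization is given below, reusing the Particle struct from the sample-generation sketch; the concrete weighting function (inverse of the shift) is an assumption, since the paper only states the general rule.

```cpp
#include <vector>

// Assigns higher weights to particles with smaller registered shifts and
// normalizes the weights so that they sum to one (and thus lie in [0, 1]).
void computeWeights(std::vector<Particle> &samples, const std::vector<double> &shifts)
{
    double sum = 0.0;
    for (size_t i = 0; i < samples.size(); ++i) {
        samples[i].weight = 1.0 / (1.0 + shifts[i]);    // smaller shift -> higher weight
        sum += samples[i].weight;
    }
    for (Particle &p : samples)
        p.weight /= sum;                                // normalization
}
```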

These data are then re-sampled. The chosen samples are preserved; the remaining ones are not considered in the next steps of the algorithm.

The robot position is calculated as the weighted mean of the positions of the re-sampled samples (Fig. 2).
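A sketch of the re-sampling and the final position update follows; the multinomial re-sampling scheme and the returned cv::Point2d type are assumptions, and the Particle struct from the earlier sketch is reused.

```cpp
#include <random>
#include <vector>
#include <opencv2/core.hpp>

// Draws particles in proportion to their weights and returns the weighted
// mean of the drawn positions as the new robot position estimate.
cv::Point2d estimatePosition(const std::vector<Particle> &samples, int keepCount)
{
    std::mt19937 gen(std::random_device{}());
    std::vector<double> weights;
    for (const Particle &p : samples)
        weights.push_back(p.weight);
    std::discrete_distribution<int> pick(weights.begin(), weights.end());

    double x = 0.0, y = 0.0, wSum = 0.0;
    for (int i = 0; i < keepCount; ++i) {
        const Particle &p = samples[pick(gen)];
        x    += p.weight * p.x;
        y    += p.weight * p.y;
        wSum += p.weight;
    }
    return cv::Point2d(x / wSum, y / wSum);
}
```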

Fig. 2. Localization schematic.

4 Practical Experiments

This section is reserved for practical experiments, which demonstrate the functionality of the presented solution. In all experiments, 5 samples were used at the start of the algorithm and 25 samples were used during position estimation. A higher sample count means higher computational demands. All experiments were performed on hardware with the following specifications:

  1. CPU: Intel i7

  2. RAM: 4 GB

  3. Camera: Logitech C930

4.1 Simulation

This experiment deals with verifying the functionality of the visual odometry and localization. No real data were used.

The experiment visualization is shown in Fig. 3. The blue circle on the left represents the starting position of the mobile robot. Green circles are the estimated positions of the mobile robot. The blue circle on the right is the current position of the mobile robot. Red dots represent samples that were not considered in the next steps of the localization algorithm, while green dots represent re-sampled samples. It is evident that the visual odometry works well, and the position estimation works well too. Let us move to the experiment with real data.

Fig. 3. Simulation of localization.

4.2 Localization with Real Data in Online Mode

This experiment was performed on a real mobile robot in online mode; the localization ran in real time. The mobile robot [9] does not contain its own odometry system, so only visual odometry was used in this case. The experiment visualization is shown in Fig. 4. The legend is the same as in the previous case, except for the current mobile robot position. The measurement step was 0.6 m. It is evident that the localization works well. Calculating the position takes 0.5 s, which applies to the 25 samples used in this case. The mapping frame rate is approximately 15 frames per second.

Fig. 4. Real time localization.

5 Conclusions

This paper presented two parts of a mapping and localization system for mobile robots operating inside buildings. The main advantage of this solution is that there is no need to modify the environment in which the robot operates. The visual odometry and the localization system were designed and tested, and the presented solution was verified by a couple of practical experiments. The localization is based on a probabilistic approach, which seems appropriate for this purpose. The system is still in the development phase and many problems remain to be resolved. The source code is completely written in C++ with the support of the OpenCV libraries.

Future work will be intensive, because the system is not yet able to resolve the loop closure [8] problem and the kidnapped robot problem. We are preparing a mobile robot that contains an odometry system based on encoders. A micro PC will be the main control unit for the robot's low-level systems. The image data together with the odometry data will be streamed to a server equipped with strong computing power to reach real-time localization.