Abstract
Because GPS signals penetrate buildings poorly, high-precision indoor positioning that combines multiple technologies has attracted increasing attention from researchers. Building on traditional indoor positioning techniques, this paper proposes a new indoor positioning method that combines vision with inertial sensing. We first evaluate the quality of the inertial and visual positioning results independently, and then fuse them so that their complementary strengths yield high-precision positioning.
This work is supported by the National Natural Science Foundation of China (61771186), University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (UNPYSCT-2017125), Distinguished Young Scholars Fund of Heilongjiang University, and postdoctoral Research Foundation of Heilongjiang Province (LBH-Q15121).
1 Introduction
In recent years, with the continuous development of wireless communication networks and wireless sensor networks, demands for location services in complex indoor environments have grown steadily. Researchers have proposed a series of indoor positioning technologies, such as Wi-Fi positioning, Radio Frequency Identification (RFID) positioning, and Ultra Wideband (UWB) positioning. Each of these technologies has its own advantages and disadvantages.
At present, high-precision indoor positioning research focuses on combining the advantages of different technologies. [1] describes the HiMLoc solution, which combines the strengths of Pedestrian Dead Reckoning (PDR) and WiFi fingerprinting: it relies only on a few building parameters plus the accelerometer, WiFi card and compass of a smartphone. However, because users frequently handle their phones indoors, the inertial sensors cannot accurately extract the corresponding motion trajectory. [2] evaluates the effectiveness of PDR from the rate of pixel change observed by a camera, which effectively improves PDR positioning accuracy. In [3], Ashish Gupta and Alper Yilmaz propose an indoor positioning method that pairs a planar building information model with the multi-directional sensor suite of a smartphone. These studies improve the accuracy of indoor positioning systems to some extent.
Visual indoor positioning and PDR indoor positioning are two common indoor positioning methods. [4,5,6] describe visual indoor positioning, which has the advantages of low power consumption and low cost. However, limitations of the smartphone itself, such as camera resolution, as well as external lighting, affect its imaging. In 2016, Google introduced Tango and launched the first Tango smartphone with Lenovo. Tango is known for area learning: combined with a smartphone, it can quickly and efficiently capture and remember architectural features of indoor spaces, such as corners, walls and protrusions. Tango has its own coordinate system and its own feature-point extraction method, which make its positioning more accurate. In poorly lit indoor areas, however, the accuracy of this positioning method degrades considerably.
In terms of indoor positioning accuracy, Tango visual positioning is highly reliable except for its shortcomings under poor lighting. We therefore need other indoor positioning techniques to compensate for Tango's weaknesses. The smartphone inertial positioning described in [8] is similar to Tango visual positioning in that both PDR and Tango visual positioning are relative positioning techniques that estimate walking direction and walking distance from a known starting position. [9] describes the main problem of PDR: its positioning accuracy gradually decreases over time as errors accumulate.
This paper presents an indoor positioning algorithm that combines visual positioning with inertial sensing. The algorithm combines the advantages of the two methods and achieves better precision and robustness. The algorithm is described in the next section; the third part presents the experimental results, and the fourth part concludes.
2 System Description
The system block diagram of the proposed indoor positioning system is shown in Fig. 1. Tango performs the visual location measurements, while inertial positioning uses the accelerometer and gyroscope inside the smartphone. The two subsystems work independently without affecting each other. The accuracy of their positioning results is then evaluated, and a loosely coupled method is used to fuse them.
2.1 Tango Visual Interior Location Algorithm
The cyclic process of Tango device positioning is screening, identifying feature points, matching feature points, filtering erroneous matches, and coordinate transformation. In Tango positioning, the motion and acceleration direction of the smartphone are measured by the accelerometer and gyroscope in the device, and the measured sensor data are fused to correct the accumulated errors in the motion through area learning, achieving three-dimensional motion tracking. The coordinates Tango uses are defined in a custom virtual frame, so coordinate transformations must be performed to integrate it with PDR. The expressions converting coordinates in the PDR frame into coordinates in the Tango visual frame are (1) and (2):
Conversely, the expressions converting coordinates in the Tango visual frame into coordinates in the PDR frame are given by (3) and (4).
where \( \left( {x_{0} ,y_{0} } \right) \) is the origin of the Tango visual frame expressed in the PDR positioning frame; \( \theta \) is the angle between the \( y \) axis of the Tango visual frame and the \( y \) axis of the PDR positioning frame; \( \left( {x_{P} ,y_{P} } \right) \) is a coordinate in the PDR positioning frame; and \( \left( {x_{T} ,y_{T} } \right) \) is the corresponding coordinate in the Tango visual frame.
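Equations (1)–(4) themselves appear to have been lost in extraction. Given the definitions above, a plausible reconstruction as a planar rotation by \( \theta \) plus the translation \( \left( {x_{0} ,y_{0} } \right) \) is (the sign convention for \( \theta \) is an assumption):

```latex
% Plausible reconstruction; sign convention for \theta assumed.
% PDR frame -> Tango visual frame, Eqs. (1)-(2):
x_{T} = (x_{P} - x_{0})\cos\theta + (y_{P} - y_{0})\sin\theta
y_{T} = -(x_{P} - x_{0})\sin\theta + (y_{P} - y_{0})\cos\theta
% Tango visual frame -> PDR frame, Eqs. (3)-(4):
x_{P} = x_{0} + x_{T}\cos\theta - y_{T}\sin\theta
y_{P} = y_{0} + x_{T}\sin\theta + y_{T}\cos\theta
```

The two pairs are mutual inverses, so a point mapped into the Tango frame by (1)–(2) and back by (3)–(4) returns to its original PDR coordinates.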
The sampling frequency of Tango is \( 100\,{\text{Hz}} \), and the output is a continuous stream of samples. We need to find the zero-velocity points in the output and use them to estimate step length and heading, which can be calculated by (5) and (6):
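The paper does not reproduce the detection procedure; a minimal Python sketch of one common zero-velocity detector (low acceleration variance over a sliding window; the window length and threshold are illustrative assumptions, not values from the paper) is:

```python
import numpy as np

def zero_velocity_points(acc_mag, fs=100, win=0.2, threshold=0.05):
    """Flag samples where the acceleration magnitude is nearly constant
    over a sliding window, i.e. candidate zero-velocity points.
    fs: sampling rate in Hz (Tango outputs 100 Hz); win: window length
    in seconds; threshold: maximum std-dev counted as 'at rest'."""
    half = int(win * fs / 2)
    flags = np.zeros(len(acc_mag), dtype=bool)
    for i in range(half, len(acc_mag) - half):
        window = acc_mag[i - half:i + half + 1]
        flags[i] = np.std(window) < threshold
    return flags
```

Step length and heading estimates, as in (5) and (6), would then be computed between consecutive flagged points.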
2.2 PDR Localization Algorithm
PDR is a relative, cumulative positioning and navigation technique: starting from a known position, the displacements generated along the target's motion trajectory are accumulated. Each displacement can be given as a change in Cartesian coordinates or as a step length plus a change in heading.
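The accumulation step can be sketched as follows; the heading convention (clockwise from north, i.e. the \( y \) axis) is an assumption for illustration:

```python
import math

def pdr_update(x, y, step_length, heading_deg):
    """One PDR step: add the displacement implied by step length and
    heading to the previous position. Heading is assumed measured
    clockwise from north (y axis), a common PDR convention."""
    psi = math.radians(heading_deg)
    return x + step_length * math.sin(psi), y + step_length * math.cos(psi)
```

Repeated calls from a known start position trace out the trajectory; any per-step error in length or heading accumulates, which is exactly the drift discussed in [9].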
According to the inverted pendulum model described in [11], the vertical distance can be converted into the horizontal step length. The inverted pendulum model is shown in Fig. 2.
where \( L \) is the radius of the model, i.e. the target's leg length; \( h \) is the vertical displacement over the interval from heel strike to standing upright. \( h \) can be obtained by (7). The step length \( D \) can then be obtained via the Pythagorean theorem from the first half \( D_{1} \) and the second half \( D_{2} \), as in (8):
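Equation (8) can be sketched as follows, treating each half step as a right triangle with hypotenuse \( L \) and vertical side \( L - h \) (\( h \) is assumed already obtained via (7)):

```python
import math

def half_step(leg_length, h):
    """Horizontal half-step from the inverted pendulum model:
    the leg is the hypotenuse, (leg_length - h) the vertical side."""
    return math.sqrt(leg_length**2 - (leg_length - h)**2)

def step_length(leg_length, h1, h2):
    """Full step D = D1 + D2; h1 and h2 are the vertical
    displacements of the first and second half of the step."""
    return half_step(leg_length, h1) + half_step(leg_length, h2)
```

For example, with a 0.9 m leg and a 5 cm vertical dip on each half, the model yields a step of roughly 0.59 m.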
Using the angular information and the quaternion algorithm of the 6-axis inertial sensor, we can transform coordinates in the pedestrian coordinate system into coordinates in the navigation coordinate system with the coordinate transformation matrix in (9).
The direction of pedestrian can be calculated from (10):
where \( \varPsi \) is defined between \( 0 \) and \( 360^{ \circ } \).
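The quaternion-to-heading step behind (9)–(10) can be sketched as below; the z-up, yaw-about-z axis convention is an assumption, since the paper's matrix (9) is not reproduced here:

```python
import math

def heading_from_quaternion(w, x, y, z):
    """Yaw (heading) from a unit quaternion, mapped to [0, 360) degrees.
    Assumes a z-up frame with yaw about the z axis; the paper's
    transformation matrix (9) may use a different convention."""
    psi = math.degrees(math.atan2(2.0 * (w * z + x * y),
                                  1.0 - 2.0 * (y * y + z * z)))
    return psi % 360.0  # wrap into [0, 360) as required by (10)
```

The modulo keeps \( \varPsi \) in the \( [0, 360^{\circ}) \) range stated above.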
2.3 Fusion Algorithm of Tango and PDR
Compared with PDR positioning, Tango can achieve relatively accurate indoor positioning under good lighting conditions. Lighting sensitivity, however, is Tango's weakness: if the lighting conditions change, for example too much reflection from the indoor floor or too much white on the surrounding walls, Tango's positioning suffers. PDR positioning, although it accumulates error over time, provides a reliable per-point error estimate from the IMU (Inertial Measurement Unit). The IMU is therefore used to judge the validity of each Tango fix: if the fix falls within the error range, the output is valid; otherwise it is invalid. The error range \( \varepsilon \) is shown in Fig. 3.
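The validity gate can be sketched as a simple radius test; \( \varepsilon \) would come from the IMU's per-step error model, and is left here as a parameter:

```python
def tango_point_valid(tango_xy, pdr_xy, eps):
    """Accept a Tango fix only if it lies within the IMU-derived
    error radius eps of the current PDR estimate (the error range
    of Fig. 3). Comparing squared distances avoids a sqrt."""
    dx = tango_xy[0] - pdr_xy[0]
    dy = tango_xy[1] - pdr_xy[1]
    return dx * dx + dy * dy <= eps * eps
```

Invalid fixes are discarded and the fusion system falls back on the PDR-derived quantities for that step.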
The validity of the step size and heading observed by Tango can be expressed as (11) and (12):
\( L^{V} \) and \( L^{I} \) are the step-size estimates of Tango visual positioning and PDR positioning, respectively. \( \theta^{V} \) and \( \theta^{I} \) are the headings observed by the Tango visual positioning system and the PDR positioning system, respectively. The Kalman filter is part of the inertial navigation system. In this paper, the Kalman filter is used for heading fusion: once the inertial navigation system detects a zero-velocity point, it triggers the Kalman filter to reset the vertical velocity to zero. The optimal estimate is then obtained from the measurement of the current state:
(13) above can be expanded as (14):
where \( Z\left( {k - 1} \right) \) is the heading observation from Tango at the previous moment, and \( X\left( {k - 1 |k - 2} \right) \) is the heading estimated by the PDR positioning system at the previous moment. The previous non-zero vertical velocity is fed back to the Kalman filter as compensation, so that the cumulative error is eliminated.
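A minimal scalar Kalman filter illustrating this heading fusion; the noise variances q and r are illustrative assumptions, not values from the paper:

```python
class HeadingKalman:
    """One-dimensional Kalman filter fusing the PDR-predicted heading
    (process model) with Tango's heading observation Z (measurement).
    psi0: initial heading; p0: initial variance; q: process noise;
    r: measurement noise. All values here are illustrative."""
    def __init__(self, psi0=0.0, p0=1.0, q=0.01, r=0.5):
        self.psi, self.p, self.q, self.r = psi0, p0, q, r

    def predict(self, d_psi):
        # PDR supplies the heading increment between steps
        self.psi += d_psi
        self.p += self.q

    def update(self, z):
        # A valid Tango heading observation corrects the prediction
        k = self.p / (self.p + self.r)
        self.psi += k * (z - self.psi)
        self.p *= (1.0 - k)
        return self.psi
```

Each zero-velocity detection would trigger an `update` with the Tango heading, pulling the PDR prediction back toward the observation in proportion to the Kalman gain.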
3 Performance Analysis
The accuracy of the proposed method is evaluated and analyzed through field positioning experiments. The experiments are performed with the Tango smartphone jointly launched by Google and Lenovo, the Lenovo PHAB 2 Pro, on the 7th floor of the physics experiment building of Heilongjiang University. Verification points represent the ground truth for error analysis and for comparison with traditional positioning methods.
3.1 Tango Visual Positioning Evaluation
Figure 4 shows the results of Tango positioning. The red dots indicate the preset checkpoints, the blue curve is the Tango output trajectory, and the red lines are the walls in the floor plan of the experimental building. Tango's output jumps, as shown in the black circle, because the floor reflects light at that point.
3.2 PDR Positioning Evaluation
Figure 5 shows the results of PDR positioning, which deviate greatly from the specified path. As mentioned above, the error grows gradually over time, which shifts the black point in Fig. 5 from \( \left( { - 2.6,1.1} \right) \) to \( \left( { - 0.8,3.8} \right) \).
3.3 Analysis of Fusion Positioning System
Figure 6 shows that the output trajectory of the Tango and PDR fusion positioning system closely follows that of the Tango visual positioning system. However, Fig. 6 also shows that where the lighting changes significantly, Tango's displacement is inconsistent with the true displacement, a discrepancy that stands out against the fusion system's output.
The accuracy of the three positioning systems is compared by computing the median, mean, root-mean-square and 75th-percentile positioning errors over all markers. As Table 1 shows, the accuracy ranking from high to low is: the fusion system, then the Tango visual positioning system, then the PDR positioning system. The median errors of all markers are similar, and the mean error of the fusion system over all markers is lower than that of the Tango positioning system.
Figure 7 shows the error distributions of the two typical positioning methods as cumulative distribution curves: the x axis is the positioning error and the y axis is the cumulative probability. A point \( (x, y) \) on a curve means that a fraction \( y \) of the positioning errors is below \( x \) meters; for example, \( y = 0.9 \) means 90% of the errors are below the corresponding \( x \). Where \( y = 1 \), the corresponding x-axis value is the maximum error of this experiment; conversely, where the curve begins to rise from \( y = 0 \), the corresponding x-axis value is the minimum error.
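The curves in Fig. 7 are empirical cumulative distribution functions, which can be computed from the per-marker errors as:

```python
import numpy as np

def error_cdf(errors):
    """Empirical CDF of positioning errors: sorted error values on the
    x axis, cumulative probability on the y axis, as plotted in Fig. 7."""
    x = np.sort(np.asarray(errors, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
```

Plotting `x` against `y` for each system reproduces the style of curve the figure describes, with the last point at \( y = 1 \) marking the maximum observed error.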
Moreover, the cumulative error percentages of the two positioning methods are shown in Fig. 8: for roughly 80% of the samples, the error of the Tango positioning system matches that of the fusion positioning system, but Tango's error grows sharply once the lighting changes. Overall, the fusion positioning system is more accurate.
4 Conclusion
In this paper, a kind of indoor positioning system is proposed, which integrates the visual positioning system and inertia-based positioning system in a loosely coupled architecture. The measurements obtained by Tango are evaluated using the PDR output. If the step size given by Tango is reliable, it will be directly used in the fusion system; otherwise, the step size will be derived from the inverted pendulum model. The fusion system also uses a Kalman filter for course fusion. Data from the inertial sensor is used for prediction, and Tango provides measurements. Experimental results show that, compared with the traditional PDR or Tango positioning method, we achieve a more accurate indoor positioning system.
References
Radu, V., Marina, M.K.: HiMLoc: indoor smartphone localization via activity aware pedestrian dead reckoning with selective crowdsourced WiFi fingerprinting. In: International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–10. IEEE, Montbeliard-Belfort (2013)
Li, Y., He, Z., Nielsen, J.: Enhancing Wi-Fi based indoor pedestrian dead reckoning with security cameras. In: Fourth International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (UPINLBS), pp. 107–112. IEEE, Shanghai (2016)
Yilmaz, A., Gupta, A.: Indoor positioning using visual and inertial sensors. In: SENSORS, pp. 1–3. IEEE, Orlando (2016)
Werner, M., Hahn, C., Schauer, L.: DeepMoVIPS: visual indoor positioning using transfer learning. In: International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–7. IEEE, Alcala de Henares (2016)
Werner, M., Kessel, M., Marouane, C.: Indoor positioning using smartphone camera. In: International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–6. IEEE, Portugal (2011)
Hile, H., Borriello, G.: Information overlay for camera phones in indoor environments. In: Hightower, J., Schiele, B., Strang, T. (eds.) LoCA 2007. LNCS, vol. 4718, pp. 68–84. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-75160-1_5
Kao, W.W., Huy, B.Q.: Indoor navigation with smartphone-based visual SLAM and bluetooth-connected wheel-robot. In: CACS International Automatic Control Conference (CACS), pp. 395–400. IEEE, Nantou (2013)
Ma, L., Fan, Y., Xu, Y., et al.: Pedestrian dead reckoning trajectory matching method for radio map crowdsourcing building in WiFi indoor positioning system. In: IEEE International Conference on Communications (ICC), pp. 1–6. IEEE, Paris (2017)
Kang, W., Han, Y.: SmartPDR: Smartphone-based pedestrian dead reckoning for indoor localization. IEEE Sens. J. 15(5), 2906–2916 (2015)
Beauregard, S., Haas, H.: Pedestrian dead reckoning: a basis for personal positioning. In: Proceedings of the 3rd Workshop on Positioning, Navigation and Communication, pp. 27–35 (2006)
Wu, D., Peng, A., Zheng, L., et al.: A smart-phone based hand-held indoor tracking system. In: International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1–7. IEEE, Sapporo (2017)
© 2019 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Xu, G., Qin, D., Zhao, M., Guo, R. (2019). Research on Fusion of Multiple Positioning Algorithms Based on Visual Indoor Positioning. In: Han, S., Ye, L., Meng, W. (eds) Artificial Intelligence for Communications and Networks. AICON 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 287. Springer, Cham. https://doi.org/10.1007/978-3-030-22971-9_29
Print ISBN: 978-3-030-22970-2
Online ISBN: 978-3-030-22971-9