Abstract
SLAM technology plays a key role in enabling mobile robots to execute complex tasks such as environment perception, path planning, and human-computer interaction. Because any single sensor has inherent limitations, multi-sensor methods that can effectively improve SLAM performance have become an important research topic. In this paper, we propose a method based on the fusion of Lidar and monocular vision to improve the positioning accuracy of SLAM. Experimental results on the KITTI and ApolloScape datasets show that the proposed method achieves higher positioning accuracy and robustness than existing SLAM methods.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Li, Y., Zhang, R., Li, Q., Lou, L. (2022). An Improved SLAM Based on the Fusion of Lidar and Monocular Vision. In: Jansen, T., Jensen, R., Mac Parthaláin, N., Lin, CM. (eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing, vol 1409. Springer, Cham. https://doi.org/10.1007/978-3-030-87094-2_35
DOI: https://doi.org/10.1007/978-3-030-87094-2_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87093-5
Online ISBN: 978-3-030-87094-2
eBook Packages: Intelligent Technologies and Robotics (R0)