An Improved SLAM Based on the Fusion of Lidar and Monocular Vision

  • Conference paper
  • First Online:
Advances in Computational Intelligence Systems (UKCI 2021)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1409)


Abstract

SLAM technology plays a key role in enabling mobile robots to execute complex tasks such as environment perception, path planning, and human-computer interaction. Because any single sensor has inherent limitations, multi-sensor methods that can efficiently improve SLAM performance have become an important research topic. In this paper, we propose a method based on the fusion of lidar and monocular vision to improve the positioning accuracy of SLAM. Experimental results on the KITTI and ApolloScape datasets show that the proposed method achieves higher positioning accuracy and robustness than existing SLAM methods.
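The abstract does not detail the fusion mechanism, but a common building block in lidar-camera fusion pipelines of this kind is to project lidar points into the camera image so that sparse visual features can be assigned metric depth. The sketch below illustrates that generic step only; it is not the authors' algorithm, and the function names, the pinhole intrinsics `K`, and the nearest-neighbor depth lookup are illustrative assumptions.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D lidar points (N, 3) into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform from lidar frame to camera frame.
    K: 3x3 pinhole intrinsic matrix.
    Returns (uv, depths) for points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # keep points with positive depth
    pts_cam = pts_cam[in_front]
    uv_h = (K @ pts_cam.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]       # perspective division
    return uv, pts_cam[:, 2]

def depth_for_feature(uv_feat, uv_lidar, depths, radius=5.0):
    """Assign a visual feature the depth of the nearest projected lidar
    point within `radius` pixels, or None if no lidar point is close enough."""
    d2 = np.sum((uv_lidar - uv_feat) ** 2, axis=1)
    i = int(np.argmin(d2))
    return float(depths[i]) if d2[i] <= radius ** 2 else None
```

With identity extrinsics and focal length 500, a lidar point at (0, 0, 10) m projects to the principal point, so a visual feature detected nearby inherits its 10 m depth. Real systems typically refine this with interpolation or depth completion (cf. the depth-completion literature this paper builds on) rather than a single nearest neighbor.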



Author information


Correspondence to Lu Lou.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Li, Y., Zhang, R., Li, Q., Lou, L. (2022). An Improved SLAM Based on the Fusion of Lidar and Monocular Vision. In: Jansen, T., Jensen, R., Mac Parthaláin, N., Lin, CM. (eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing, vol 1409. Springer, Cham. https://doi.org/10.1007/978-3-030-87094-2_35
