
A Vision Based Deep Reinforcement Learning Algorithm for UAV Obstacle Avoidance

  • Conference paper

Intelligent Systems and Applications (IntelliSys 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 294)

Abstract

Integration of reinforcement learning with unmanned aerial vehicles (UAVs) to achieve autonomous flight has been an active research area in recent years. An important line of work focuses on obstacle detection and avoidance for UAVs navigating through an environment. Exploration in an unseen environment can be tackled with a Deep Q-Network (DQN). However, value exploration with uniform sampling of actions may lead to redundant states, and such environments often bear inherently sparse rewards. To resolve this, we present two techniques for improving exploration for UAV obstacle avoidance. The first is a convergence-based approach that uses convergence error to iterate through unexplored actions and a temporal threshold to balance exploration and exploitation. The second is a guidance-based approach that uses a Domain Network with a Gaussian mixture distribution to compare previously seen states to a predicted next state and thereby select the next action. Both approaches were implemented and evaluated in multiple 3-D simulation environments of varying complexity. The proposed approach demonstrates a two-fold improvement in average rewards compared to the state of the art.
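The full text is behind the access wall, so the two exploration techniques can only be sketched from the abstract. Below is a minimal, hypothetical Python sketch of the first idea, convergence-based exploration: track how much each action's Q-value still changes per update (its convergence error), prefer the least-converged action until a temporal threshold has elapsed, and act greedily afterwards. All names (`T_THRESH`, `conv_err`) and the single-state, bandit-style setting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5                          # e.g. yaw-left, yaw-right, climb, descend, forward
ALPHA = 0.1                            # Q-learning step size
T_THRESH = 200                         # temporal threshold: explore before, exploit after

q = np.zeros(N_ACTIONS)                # Q estimates for a single (discretised) state
conv_err = np.full(N_ACTIONS, np.inf)  # |dQ| from each action's last update; inf = never tried

def select_action(step: int) -> int:
    """Before the temporal threshold, pick the least-converged action
    (largest recent update magnitude); afterwards, act greedily."""
    if step < T_THRESH:
        return int(np.argmax(conv_err))   # explore: largest convergence error first
    return int(np.argmax(q))              # exploit: greedy on Q

def update(action: int, reward: float) -> None:
    """One-step Q update, recording the chosen action's convergence error."""
    old = q[action]
    q[action] = old + ALPHA * (reward - old)
    conv_err[action] = abs(q[action] - old)

# Toy stand-in for the flight environment: hidden mean reward per action plus noise.
true_means = rng.normal(0.0, 1.0, N_ACTIONS)
for t in range(1000):
    a = select_action(t)
    update(a, true_means[a] + rng.normal(0.0, 0.5))

print("learned Q:", np.round(q, 2), "greedy action:", int(np.argmax(q)))
```

The second idea can be sketched under similar caveats: fit a Gaussian mixture over previously seen states and steer toward the action whose predicted next state is least likely under that mixture, i.e. most novel. The forward model below is a stub (the paper's Domain Network is not reproduced), and `guided_action`, the 8-dimensional state embedding, and the component count are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic replay buffer of previously seen (embedded) states.
seen_states = rng.normal(0.0, 1.0, size=(500, 8))

# Density model of "familiar" regions of the state space.
gmm = GaussianMixture(n_components=4, random_state=0).fit(seen_states)

def predict_next_state(state: np.ndarray, action: int) -> np.ndarray:
    """Stub forward model: each action perturbs the state differently."""
    shift = np.zeros(8)
    shift[action % 8] = 1.0
    return state + shift + rng.normal(0.0, 0.1, 8)

def guided_action(state: np.ndarray, n_actions: int = 5) -> int:
    """Pick the action whose predicted next state has the lowest
    log-likelihood under the mixture of seen states (most novel)."""
    preds = np.stack([predict_next_state(state, a) for a in range(n_actions)])
    return int(np.argmin(gmm.score_samples(preds)))

print("exploratory action:", guided_action(rng.normal(0.0, 1.0, 8)))
```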




Author information


Corresponding author

Correspondence to Ali Jannesari.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Roghair, J., Niaraki, A., Ko, K., Jannesari, A. (2022). A Vision Based Deep Reinforcement Learning Algorithm for UAV Obstacle Avoidance. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, vol 294. Springer, Cham. https://doi.org/10.1007/978-3-030-82193-7_8
