Abstract
Detecting hand gestures provides a useful non-contact means of interacting with machines and systems, and it has been employed in a wide range of applications. Recently, smart glasses and Virtual Reality (VR) headsets have become viable platforms for various training applications, ranging from surgical training in medicine to operator training for heavy equipment. A major challenge in these systems is interacting with the training platform, since the user's view of the real world is blocked. In this paper, we present hand gesture detection using deep learning as a means of interaction with the VR system. Real-world images are streamed by a camera mounted on the VR headset. The user's hand gestures are detected and blended into the virtual images, providing a more immersive and interactive user experience.
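The pipeline the abstract describes (detect the hand in the camera frame, then composite that region into the rendered VR frame) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_hand` is a hypothetical stub standing in for the deep-learning detector, and the frames are synthetic arrays rather than real camera/VR images.

```python
import numpy as np

def detect_hand(frame):
    """Stub standing in for a CNN-based hand detector.
    Returns a hypothetical bounding box (x, y, w, h) and a confidence."""
    return (10, 10, 20, 20, 0.9)

def blend_hand(virtual, real, box, alpha=0.7):
    """Alpha-blend the detected hand region of the real camera frame
    onto the virtual frame, so the hand appears inside the VR scene."""
    x, y, w, h, _conf = box
    out = virtual.astype(np.float32).copy()
    patch = real[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = alpha * patch + (1 - alpha) * out[y:y + h, x:x + w]
    return out.astype(np.uint8)

# Synthetic frames: a dark virtual scene and a bright camera frame.
virtual = np.zeros((64, 64, 3), dtype=np.uint8)
real = np.full((64, 64, 3), 200, dtype=np.uint8)

box = detect_hand(real)
blended = blend_hand(virtual, real, box)
print(blended[15, 15, 0])  # pixel inside the blended hand region -> 140
```

In a real headset setup, `real` would come from the mounted camera each frame and `virtual` from the VR renderer; only the detector and the blending weight would change.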
Acknowledgements
This study was sponsored by the Singapore Ministry of Education under grant number MOE2015-TIF-2-T-039.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Fikret Ercan, M., Liu, A.Q. (2019). Hand Gesture Detection and Its Application to Virtual Reality Systems. In: Zawawi, M., Teoh, S., Abdullah, N., Mohd Sazali, M. (eds) 10th International Conference on Robotics, Vision, Signal Processing and Power Applications. Lecture Notes in Electrical Engineering, vol 547. Springer, Singapore. https://doi.org/10.1007/978-981-13-6447-1_63
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-6446-4
Online ISBN: 978-981-13-6447-1
eBook Packages: Intelligent Technologies and Robotics (R0)