Abstract
To improve the applicability and robustness of three-dimensional tracking in an augmented reality-aided assembly guiding system for mechanical products, a tracking method combining point cloud and visual features is proposed. First, the tracking benchmark coordinate system is defined using a reference model point cloud to determine the position of the virtual assembly guiding information. Then, a camera tracking algorithm combining visual feature matching and point cloud alignment is implemented. To obtain enough visual feature matches in a textureless assembly environment, a novel ORB feature-matching strategy based on the consistency of direction vectors is presented. Experimental results show that the proposed method achieves good robustness and tracking accuracy in an assembly environment that lacks both visual and depth features, while running in real time. Its overall performance exceeds that of the point cloud-based KinectFusion tracking method.
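The direction-vector consistency idea can be illustrated with a short sketch. The Python/OpenCV code below is a hypothetical illustration, not the authors' implementation: it matches ORB features between two frames and keeps only those matches whose 2-D displacement direction agrees with the dominant direction, approximating the consistency constraint the abstract describes. The function name, bin count, and feature count are assumptions chosen for the example.

# Minimal sketch (not the authors' method): filter ORB matches by
# direction-vector consistency, assuming roughly rigid inter-frame motion
# so that correct matches share a dominant displacement direction.
import cv2
import numpy as np

def match_orb_direction_consistent(img1, img2, n_features=1000, n_bins=36):
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return []

    # Brute-force Hamming matching on binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Direction (angle) of each candidate match's displacement vector.
    angles = np.array([
        np.arctan2(kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1],
                   kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0])
        for m in matches
    ])

    # Keep matches falling in the dominant angle-histogram bin and its
    # neighbors; discard directionally inconsistent outliers.
    hist, edges = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    peak = int(np.argmax(hist))
    bin_idx = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    keep = np.isin(bin_idx, [(peak - 1) % n_bins, peak, (peak + 1) % n_bins])
    return [m for m, k in zip(matches, keep) if k]

In a textureless assembly scene, where descriptor distances alone yield many false correspondences, this kind of global direction check retains a larger set of mutually consistent matches than a distance-ratio test alone.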
Acknowledgements
We thank Xiaozhou Dong for experimental assistance and Marc D. Baldwin, PhD, for language editing.
Funding
This study was financially supported by the Fundamental Research Funds for the Central Universities of China (grant no. 3102015BJ(II)MYZ21).
Cite this article
Wang, Y., Zhang, S., Wan, B. et al. Point cloud and visual feature-based tracking method for an augmented reality-aided mechanical assembly system. Int J Adv Manuf Technol 99, 2341–2352 (2018). https://doi.org/10.1007/s00170-018-2575-8