Abstract
Motion segmentation plays an important role in many vision applications, yet it remains a challenging problem in complex scenes. Conditions common in real-world scenarios, such as illumination variations, dynamic backgrounds, and camera shake, degrade segmentation performance. In this paper, we propose a method for robust motion segmentation built from two interrelated models: a normal random model (N-model) and an enhanced random model (E-model). Both models are constructed and updated with spatio-temporal information to adapt to illumination changes and dynamic backgrounds, and they operate under an AdaBoost-like strategy. Extensive experimental evaluations on complex scenes demonstrate that the proposed method outperforms state-of-the-art methods.
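The abstract gives no implementation details for the two models or their combination. As a rough illustration only, the sketch below assumes ViBe-style per-pixel sample models for both the N-model and the E-model (differing only in their match radius, a hypothetical choice) and combines their foreground decisions by a weighted vote as a stand-in for the paper's AdaBoost-like strategy; none of the names or parameters come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

class SampleModel:
    """ViBe-style per-pixel sample model: a pixel is background if its
    current value lies within `radius` of at least `min_matches` of the
    stored samples."""
    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2):
        # Initialize every sample from the first frame (a full implementation
        # would also draw from spatial neighbours).
        self.samples = np.repeat(first_frame[None].astype(np.int16),
                                 n_samples, axis=0)
        self.radius = radius
        self.min_matches = min_matches

    def classify(self, frame):
        # Count, per pixel, how many stored samples match the current value.
        diffs = np.abs(self.samples - frame.astype(np.int16))
        matches = (diffs <= self.radius).sum(axis=0)
        return matches < self.min_matches  # True = foreground

    def update(self, frame, fg_mask, subsample=16):
        # Conservative, randomly subsampled update: refresh one random
        # sample only at pixels classified as background.
        refresh = (~fg_mask) & (rng.integers(0, subsample, frame.shape) == 0)
        idx = rng.integers(0, self.samples.shape[0])
        self.samples[idx][refresh] = frame[refresh]

def segment(models, weights, frame):
    """Weighted vote over the models' foreground decisions (an
    AdaBoost-like combination; the weights here are illustrative)."""
    votes = sum(w * m.classify(frame).astype(float)
                for m, w in zip(models, weights))
    return votes >= sum(weights) / 2
```

In a full system the weights would be learned from each model's error rate, as in AdaBoost, rather than fixed by hand.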
Additional information
Foundation item: Supported by the National Natural Science Foundation of China (61502364), the Key Scientific and Technological Project of Henan Province (132102210246), the Enterprises-Universities-Research Institutes Cooperation Project of Henan Province (142107000022), and the CERNET Innovation Project (NGII20150311)
Biography: FAN Zhihui, male, Ph.D., research direction: computer vision.
Cite this article
Fan, Z., Li, Z., Li, P. et al. Motion segmentation based on dual interrelated models. Wuhan Univ. J. Nat. Sci. 22, 79–84 (2017). https://doi.org/10.1007/s11859-017-1220-y