Abstract
This paper presents a novel online object tracking algorithm that learns effective sparse appearance models within a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that image pixels are corrupted by independent Gaussian noise, the proposed method is based on information theoretical learning and is much less sensitive to corruption: it assigns small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it yields robust estimates without the trivial templates adopted by the previous sparse tracker. At each iteration, a sparse representation of the target candidate is learned by solving a weighted linear least-squares problem with non-negativity constraints; to further improve tracking performance, the target templates are dynamically updated to capture appearance changes. In the template update mechanism, the similarity between templates and target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time and, on challenging sequences, performs favorably against state-of-the-art algorithms in terms of efficiency, accuracy and robustness.
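The coefficient-learning step described in the abstract can be sketched as a half-quadratic-style alternation: solve a weighted non-negative least-squares problem, then refresh per-pixel Gaussian (correntropy-like) weights from the residuals so that occluded pixels and outliers are downweighted. The minimal sketch below is an illustration of that general idea, not the authors' implementation: the function names, the projected-gradient inner solver, the bandwidth `sigma`, the iteration counts, and the toy data are all assumptions. A one-dimensional EMD helper of the kind the template-update step might use is included for completeness.

```python
import numpy as np

def weighted_nnls(T, y, w, n_inner=500):
    """Solve min_c ||sqrt(w) * (T @ c - y)||^2 subject to c >= 0
    by projected gradient descent (a simple stand-in for the
    weighted non-negative least-squares step)."""
    Tw = T * np.sqrt(w)[:, None]               # row-weighted templates
    yw = y * np.sqrt(w)
    G = Tw.T @ Tw
    b = Tw.T @ yw
    eta = 1.0 / (np.linalg.norm(G, 2) + 1e-12) # step size <= 1/Lipschitz
    c = np.zeros(T.shape[1])
    for _ in range(n_inner):
        c = np.maximum(0.0, c - eta * (G @ c - b))  # gradient step + projection
    return c

def robust_sparse_fit(T, y, sigma=0.5, n_outer=10):
    """Alternate between the weighted NNLS solve and re-computing
    correntropy-style Gaussian weights, so pixels with large
    residuals (occlusions, outliers) receive small weights."""
    w = np.ones_like(y)
    c = np.zeros(T.shape[1])
    for _ in range(n_outer):
        c = weighted_nnls(T, y, w)
        r = y - T @ c                           # per-pixel residual
        w = np.exp(-r**2 / (2.0 * sigma**2))    # small weight for outliers
    return c, w

def emd_1d(h1, h2):
    """EMD between two normalized 1-D histograms with unit ground
    distance equals the L1 distance between their CDFs."""
    return float(np.sum(np.abs(np.cumsum(np.asarray(h1) - np.asarray(h2)))))

# Toy demo: a target that is a mixture of templates with one corrupted pixel.
rng = np.random.default_rng(0)
T = rng.uniform(0.0, 1.0, (20, 3))   # 3 templates of 20 "pixels" each
c_true = np.array([0.7, 0.3, 0.0])
y = T @ c_true
y[0] += 5.0                          # simulate an occluded/outlier pixel
c_hat, w = robust_sparse_fit(T, y)   # c_hat ~ c_true, w[0] ~ 0
```

After a few outer iterations the corrupted pixel's weight collapses toward zero, so the fit is effectively computed on the clean pixels only; this is the mechanism by which such a tracker avoids needing trivial templates to absorb occlusion.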
Acknowledgments
This work was supported by the National Basic Research Program of China (973 Program) under Grant no. 2013CB329404, the Major Research Project of the National Natural Science Foundation of China under Grant no. 91230101, the National Natural Science Foundation of China under Grant nos. 61075006 and 11201367, the Key Project of the National Natural Science Foundation of China under Grant no. 11131006, the Research Fund for the Doctoral Program of Higher Education of China under Grant no. 20100201120048, and the Natural Science Fund of Ningxia, China under Grant no. NZ12209.
Cite this article
Ding, W., Zhang, J. Robust visual tracking using information theoretical learning. Ann Math Artif Intell 80, 113–129 (2017). https://doi.org/10.1007/s10472-017-9543-0
Keywords
- Robust visual object tracking
- Information theoretical learning
- Adaptive appearance model
- Particle filtering
- Occlusion and outlier