Abstract
This paper proposes an automatic model selection algorithm for learning Gaussian mixtures. Unlike standard EM, it additionally increases the negative entropy of the posterior over the latent variables, which exerts an indirect effect on model selection. This increase in negative entropy can be interpreted as a competition among mixture components, culminating in the annihilation of those with insufficient supporting data. Importantly, the competition depends only on the data itself. Moreover, parameter estimation and model selection are seamlessly integrated into a single algorithm, which applies to any parametric mixture model fitted by an EM algorithm. Experiments on Gaussian mixtures demonstrate the effectiveness of the approach for model selection.
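The competition-and-annihilation idea from the abstract can be sketched roughly as follows. This is an illustrative toy for a one-dimensional mixture, not the authors' algorithm: the posterior-sharpening exponent `beta` (raising responsibilities to a power above 1 increases their negative entropy), the pruning threshold `prune_tol`, and the function name are all assumptions made for the sketch.

```python
import numpy as np

def competitive_em_gmm(X, k_init=6, beta=1.2, prune_tol=0.02, iters=300, seed=0):
    """Toy EM for a 1-D Gaussian mixture with an entropy-driven competition step.

    Sharpening the posteriors (raising them to a power beta > 1 and
    renormalising) increases their negative entropy; components that lose the
    resulting competition see their mixing weight shrink and are annihilated
    once it drops below prune_tol.  beta and prune_tol are illustrative
    choices, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    mu = rng.choice(X, size=k_init, replace=False)   # init means at data points
    var = np.full(k_init, np.var(X))
    pi = np.full(k_init, 1.0 / k_init)
    for _ in range(iters):
        # E-step: log posterior responsibilities under current parameters.
        logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (X[:, None] - mu) ** 2 / var)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # Competition: sharpen posteriors, raising their negative entropy.
        r = r ** beta
        r /= r.sum(axis=1, keepdims=True)
        # M-step.
        nk = r.sum(axis=0)
        pi = nk / n
        # Annihilate components with insufficient supporting data.
        keep = pi > prune_tol
        if not keep.all():
            r, nk = r[:, keep], nk[keep]
            pi = pi[keep] / pi[keep].sum()
            mu, var = mu[keep], var[keep]
        mu = (r * X[:, None]).sum(axis=0) / nk
        var = (r * (X[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

Starting from a deliberately over-complete `k_init`, redundant components on the same cluster compete for the same responsibility mass; the sharpening step amplifies any asymmetry between them, so losers' mixing weights decay and are pruned, yielding model selection as a side effect of parameter estimation.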
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Liu, G., Tang, X. (2013). Harmonious Competition Learning for Gaussian Mixtures. In: Sun, C., Fang, F., Zhou, ZH., Yang, W., Liu, ZY. (eds) Intelligence Science and Big Data Engineering. IScIDE 2013. Lecture Notes in Computer Science, vol 8261. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-42057-3_49
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-42056-6
Online ISBN: 978-3-642-42057-3
eBook Packages: Computer Science (R0)