Abstract
The Probabilistic RBF (PRBF) network is an adaptation of the RBF network for classification. It extends the typical mixture model by allowing mixture components to be shared among all classes, in contrast to the conventional approach in which each mixture component describes only one class. The standard method for training a PRBF classifier employs the Expectation-Maximization (EM) algorithm, which depends strongly on the initial parameter values. The Greedy EM algorithm is a recently proposed method that overcomes this drawback in the case of density estimation with mixture models. In this work we propose a similar approach for incremental training of the PRBF network for classification. The proposed algorithm starts with a single component and incrementally adds components; after convergence, it splits all the components of the network. The addition of a new component is guided by criteria that detect regions of the data space that are crucial for the classification task. Experimental results on several well-known classification datasets indicate that the incremental method yields solutions of superior classification performance.
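The abstract's key ingredients are a pool of Gaussian components shared by all classes through class-specific mixing weights, EM updates of those weights and of the pooled component parameters, and a greedy loop that adds one component at a time. The sketch below is only an illustrative reconstruction of that scheme, not the authors' algorithm: the isotropic components, the EM update details, and in particular the component-addition rule (placing a new component at the worst-classified training point as a stand-in for the paper's region-detection criteria) are all simplifying assumptions, and the component-splitting step is omitted.

```python
import numpy as np

def gauss(X, mu, var):
    # Isotropic Gaussian density of each row of X under N(mu, var * I).
    d = X.shape[1]
    sq = np.sum((X - mu) ** 2, axis=1)
    return np.exp(-0.5 * sq / var) / (2 * np.pi * var) ** (d / 2)

class PRBF:
    """Toy PRBF-style classifier: one pool of Gaussian components shared by
    all classes; each class has its own mixing weights over the pool."""

    def fit(self, X, y, max_components=4, em_iters=30):
        # y is assumed to hold integer labels 0..K-1.
        K = len(np.unique(y))
        self.mu = [X.mean(axis=0)]          # start with a single component
        self.var = [X.var()]
        self.w = np.full((K, 1), 1.0)       # class-specific mixing weights
        for _ in range(max_components - 1):
            self._em(X, y, K, em_iters)
            # Greedy addition heuristic (an assumption, not the paper's rule):
            # place a new component at the worst-classified training point.
            p = self.predict_proba(X)
            worst = np.argmin(p[np.arange(len(y)), y])
            self.mu.append(X[worst].copy())
            self.var.append(X.var() * 0.5)
            self.w = np.column_stack([self.w, np.full(K, 0.1)])
            self.w /= self.w.sum(axis=1, keepdims=True)
        self._em(X, y, K, em_iters)
        return self

    def _phi(self, X):
        # Component densities at each point: shape (n, M).
        return np.column_stack([gauss(X, m, v)
                                for m, v in zip(self.mu, self.var)])

    def _em(self, X, y, K, iters):
        for _ in range(iters):
            phi = self._phi(X)
            R = np.zeros_like(phi)          # responsibilities, pooled later
            for k in range(K):
                idx = y == k
                r = phi[idx] * self.w[k]
                r /= r.sum(axis=1, keepdims=True) + 1e-12
                R[idx] = r
                self.w[k] = r.mean(axis=0)  # per-class mixing weights
            # Shared components: means/variances pool data from all classes.
            for j in range(len(self.mu)):
                s = R[:, j].sum() + 1e-12
                self.mu[j] = (R[:, j, None] * X).sum(axis=0) / s
                self.var[j] = max(
                    (R[:, j] * np.sum((X - self.mu[j]) ** 2, axis=1)).sum()
                    / (s * X.shape[1]), 1e-6)

    def predict_proba(self, X):
        # Class scores via shared components, assuming uniform class priors.
        scores = self._phi(X) @ self.w.T
        return scores / (scores.sum(axis=1, keepdims=True) + 1e-12)

    def predict(self, X):
        return np.argmax(self.predict_proba(X), axis=1)
```

The point of the sketch is the structural contrast with an ordinary mixture classifier: every component's mean and variance are re-estimated from responsibilities pooled over *all* classes, while only the mixing weights `w` are class-specific.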
© 2004 Springer-Verlag Berlin Heidelberg
Constantinopoulos, C., Likas, A. (2004). Efficient Training Algorithms for the Probabilistic RBF Network. In: Vouros, G.A., Panayiotopoulos, T. (eds) Methods and Applications of Artificial Intelligence. SETN 2004. Lecture Notes in Computer Science(), vol 3025. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24674-9_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21937-8
Online ISBN: 978-3-540-24674-9