Abstract
In this paper, we propose a fast and accurate approximation to the information potential of Information Theoretic Learning (ITL) using the Fast Gauss Transform (FGT). We exemplify here the case of the Minimum Error Entropy criterion to train adaptive systems. The FGT reduces the complexity of the estimation from O(N²) to O(pkN), where p is the order of the Hermite approximation and k is the number of clusters used in the FGT. Further, we show that the FGT converges rapidly to the actual entropy value as the order p increases, unlike the Stochastic Information Gradient, the present O(pN) approximation for reducing the computational complexity in ITL. We test the performance of these FGT methods on system identification with encouraging results.
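The complexity reduction described above comes from replacing the O(N²) double sum of Gaussian kernel evaluations in the quadratic information potential with a truncated Hermite expansion about cluster centers, so each source point contributes to p expansion moments rather than to N pairwise terms. The following is a minimal sketch of that idea for one-dimensional data, using a single expansion center (i.e. k = 1, whereas the paper's FGT clusters the sources, e.g. with farthest-point clustering); the function names `direct_ip` and `fgt_ip` and the choice of center are illustrative, not from the paper.

```python
import numpy as np
from math import factorial, sqrt, pi

def direct_ip(x, sigma):
    """Direct O(N^2) information potential with a Gaussian Parzen kernel.

    Convolving two sigma-width Gaussian kernels gives a sqrt(2)*sigma
    Gaussian, hence the exp(-d^2 / (4 sigma^2)) pairwise terms.
    """
    N = len(x)
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (4 * sigma**2)).sum() / (N**2 * 2 * sqrt(pi) * sigma)

def fgt_ip(x, sigma, p=8):
    """Truncated-Hermite (FGT-style) approximation with one expansion center.

    Uses the Greengard-Strain identity
        exp(-(t_y - t_s)^2) = sum_n (t_s^n / n!) h_n(t_y),
    where h_n are the Hermite functions h_n(t) = (-1)^n d^n/dt^n exp(-t^2).
    Cost is O(pN): one pass over sources for the moments A_n, one pass
    over targets to evaluate the series.
    """
    N = len(x)
    c = x.mean()                       # single expansion center (k = 1 sketch)
    t = (x - c) / (2 * sigma)          # scaled coordinates, sqrt(delta) = 2*sigma
    # Moments A_n = (1/n!) * sum_i t_i^n, accumulated once over all sources
    A = [(t**n).sum() / factorial(n) for n in range(p)]
    # Hermite functions via the recurrence h_{n+1} = 2 t h_n - 2 n h_{n-1}
    h_prev = np.exp(-t**2)             # h_0
    total = A[0] * h_prev
    if p > 1:
        h_cur = 2 * t * h_prev         # h_1
        total = total + A[1] * h_cur
        for n in range(1, p - 1):
            h_prev, h_cur = h_cur, 2 * t * h_cur - 2 * n * h_prev
            total = total + A[n + 1] * h_cur
    return total.sum() / (N**2 * 2 * sqrt(pi) * sigma)
```

The expansion converges quickly when the scaled distances |x − c|/(2σ) are small, which is why the full FGT first partitions the sources into k clusters so every point is close to its own center; the single-center sketch above is accurate only for well-concentrated data.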
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Han, S., Rao, S., Principe, J. (2006). Estimating the Information Potential with the Fast Gauss Transform. In: Rosca, J., Erdogmus, D., Príncipe, J.C., Haykin, S. (eds) Independent Component Analysis and Blind Signal Separation. ICA 2006. Lecture Notes in Computer Science, vol 3889. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11679363_11
Print ISBN: 978-3-540-32630-4
Online ISBN: 978-3-540-32631-1