Abstract
Feature extraction is one of the principal goals of unsupervised learning. In biological systems it is the first stage of the cognitive mechanism, enabling higher-order cognitive processing. Chapters 3 and 4 focus on linear feature extraction, which removes redundancy from the data in a linear fashion.
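The idea of removing redundancy linearly can be sketched with principal component analysis: projecting zero-mean data onto the eigenvectors of its covariance matrix yields uncorrelated features. This is only an illustrative sketch (the function name `linear_features` and the toy data are assumptions, not taken from the chapter):

```python
import numpy as np

def linear_features(x, n_components):
    """Project data (n_samples, n_dims) onto its top principal axes,
    producing decorrelated (redundancy-reduced) features."""
    x = x - x.mean(axis=0)                  # centre the data
    cov = np.cov(x, rowvar=False)           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # reorder descending by variance
    w = eigvecs[:, order[:n_components]]    # top eigenvectors as columns
    return x @ w                            # projected, uncorrelated features

rng = np.random.default_rng(0)
# Redundant toy data: the second coordinate is a noisy copy of the first.
s = rng.normal(size=(1000, 1))
data = np.hstack([s, s + 0.1 * rng.normal(size=(1000, 1))])
features = linear_features(data, 2)
# The feature covariance is (numerically) diagonal: redundancy removed.
c = np.cov(features, rowvar=False)
```

After the projection, the off-diagonal entries of the feature covariance are zero up to floating-point error, which is what "linear redundancy removal" means in the second-order (decorrelation) sense.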
Copyright information
© 1996 Springer-Verlag New York, Inc.
Cite this chapter
Deco, G., Obradovic, D. (1996). Linear Feature Extraction: Infomax Principle. In: An Information-Theoretic Approach to Neural Computing. Perspectives in Neural Computing. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4016-7_3
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-8469-7
Online ISBN: 978-1-4612-4016-7