
Part of the book series: Perspectives in Neural Computing ((PERSPECT.NEURAL))

Abstract

Feature extraction is one of the principal goals of unsupervised learning. In biological systems it is the first stage of the cognitive process, on which higher-order cognitive functions build. Chapter 3 and Chapter 4 focus on the case of linear feature extraction, which removes redundancy from the data in a linear fashion.
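As a rough illustration of what "removing redundancy in a linear fashion" means, the sketch below uses principal component analysis: projecting the data onto eigenvectors of its covariance matrix yields mutually decorrelated outputs, eliminating second-order (linear) redundancy between components. This is an illustrative example under standard PCA assumptions, not the book's Infomax derivation; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 3-D observations: two latent sources mixed linearly, plus noise.
latent = rng.normal(size=(1000, 2))
mixing = np.array([[1.0, 0.5],
                   [0.3, 1.0],
                   [0.8, 0.2]])
x = latent @ mixing.T + 0.05 * rng.normal(size=(1000, 3))

# Centre the data and form the sample covariance matrix.
x = x - x.mean(axis=0)
cov = (x.T @ x) / (len(x) - 1)

# Eigenvectors of the covariance define the linear features.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sort by decreasing variance
w = eigvecs[:, order[:2]]           # keep the two leading features

# The projected outputs are mutually uncorrelated:
# their covariance matrix is diagonal, i.e. linear
# redundancy between the components has been removed.
y = x @ w
out_cov = (y.T @ y) / (len(y) - 1)
print(np.round(out_cov, 3))         # off-diagonals ~ 0
```

Dropping the trailing eigenvector here also discards the direction of least variance, which is the usual dimensionality-reduction step accompanying linear feature extraction.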




Copyright information

© 1996 Springer-Verlag New York, Inc.

About this chapter

Cite this chapter

Deco, G., Obradovic, D. (1996). Linear Feature Extraction: Infomax Principle. In: An Information-Theoretic Approach to Neural Computing. Perspectives in Neural Computing. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4016-7_3

  • DOI: https://doi.org/10.1007/978-1-4612-4016-7_3

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4612-8469-7

  • Online ISBN: 978-1-4612-4016-7

  • eBook Packages: Springer Book Archive
