Abstract
It is widely believed in the pattern recognition field that the number of examples needed to achieve an acceptable level of generalization depends on the number of independent parameters required to specify the network configuration. This paper presents a neural network for the classification of high-dimensional patterns. The proposed architecture uses a layer that extracts the global features of patterns; this layer contains neurons whose weights are induced by a neural subnetwork. The method reduces the number of independent parameters describing the layer to the number of parameters describing the inducing subnetwork.
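The abstract leaves the exact form of the inducing subnetwork to the full text. Below is a minimal NumPy sketch of one plausible reading, in which a small subnetwork maps the normalized coordinates of each input-output connection to a single weight value, so the layer's only free parameters are those of the subnetwork. All names (`induce_weight`, `V`, `u`) and sizes here are illustrative assumptions, not the paper's implementation.

```python
# Sketch of an "induced weights" layer: each weight of a dense layer is
# produced by a tiny subnetwork applied to connection coordinates.
import numpy as np

rng = np.random.default_rng(0)

# Inducing subnetwork: a one-hidden-layer MLP mapping a coordinate pair
# (input position, output position) to a scalar weight. (Hypothetical sizes.)
H = 8                                   # hidden size of the subnetwork
V = rng.normal(scale=0.5, size=(2, H))  # subnetwork input-to-hidden weights
u = rng.normal(scale=0.5, size=(H,))    # subnetwork hidden-to-output weights

def induce_weight(coords):
    """Compute layer weights from (input, output) coordinate pairs."""
    return np.tanh(coords @ V) @ u

n_in, n_out = 784, 10                   # e.g. flattened 28x28 pattern -> 10 classes

# Normalized coordinates for every (input, output) connection.
ii, jj = np.meshgrid(np.linspace(0, 1, n_in),
                     np.linspace(0, 1, n_out), indexing="ij")
coords = np.stack([ii.ravel(), jj.ravel()], axis=1)   # (n_in * n_out, 2)

# The full weight matrix is induced, so the layer's independent parameters
# are only those of the subnetwork: 2*H + H = 24 values instead of 7840.
W = induce_weight(coords).reshape(n_in, n_out)

x = rng.normal(size=(n_in,))            # a dummy high-dimensional pattern
y = np.tanh(x @ W)                      # forward pass through the induced layer
print(W.shape, y.shape)                 # (784, 10) (10,)
```

Under this reading, a 784-by-10 layer is described by 24 subnetwork parameters instead of 7840 direct weights, which illustrates the parameter reduction the abstract claims.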
An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163 .
Keywords
- Multilayer Perceptron
- Large Pattern
- Dimensional Pattern
- Neural Information Processing Systems
- Convolutional Network
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Golak, S. (2005). Induced Weights Artificial Neural Network. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_47
DOI: https://doi.org/10.1007/11550907_47
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28755-1
Online ISBN: 978-3-540-28756-8
eBook Packages: Computer Science (R0)