Abstract
Results obtained by Pineda for supervised learning in arbitrarily structured neural nets (including feedback) are extended to unsupervised learning. For the first time, a single set of three equations is derived that governs the learning dynamics of neural models based on objective functions. A general method for constructing objective functions is outlined, which helps organize the network output according to application-specific constraints. Several well-known learning algorithms are derived as examples within this general framework. The unification leads to economical design of both software and hardware.
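To illustrate the objective-function view in the abstract, the following is a minimal numerical sketch, not the chapter's actual three-equation derivation: a small recurrent network (feedback allowed) is relaxed to a fixed point, an application-specific quadratic objective pins the output units to a target, and the weights are adapted down a numerical gradient of that objective, in the spirit of Pineda's recurrent backpropagation. All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: learning as gradient descent of an objective
# function over the fixed point of a recurrent (feedback) network.
rng = np.random.default_rng(0)
n = 4                                   # number of units (assumed)
w = rng.normal(0.0, 0.1, (n, n))        # recurrent weights, feedback allowed
inp = np.array([1.0, 0.5, 0.0, 0.0])    # constant external input (assumed)
target = np.array([0.8, 0.2])           # desired output activity (assumed)
out = [2, 3]                            # indices of the output units

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def fixed_point(w, steps=200, dt=0.1):
    """Relax the network dynamics x' = -x + sigmoid(w x) + inp."""
    x = np.zeros(n)
    for _ in range(steps):
        x += dt * (-x + sigmoid(w @ x) + inp)
    return x

def objective(x):
    """Application-specific constraint: quadratic penalty on output units."""
    return 0.5 * np.sum((x[out] - target) ** 2)

e0 = objective(fixed_point(w))          # objective before learning

# Adapt the weights down a central-difference gradient of the objective.
eps, eta = 1e-5, 0.5
for _ in range(100):
    grad = np.zeros_like(w)
    for i in range(n):
        for j in range(n):
            w[i, j] += eps
            e_plus = objective(fixed_point(w))
            w[i, j] -= 2.0 * eps
            e_minus = objective(fixed_point(w))
            w[i, j] += eps
            grad[i, j] = (e_plus - e_minus) / (2.0 * eps)
    w -= eta * grad

e_final = objective(fixed_point(w))     # objective after learning
print(e0, e_final)
```

The chapter's unified equations replace the brute-force numerical gradient above with an adjoint relaxation, but the organizing principle, driving the network's fixed-point output toward application-specific constraints by descending an objective function, is the same.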
References
U. Ramacher, “Hardware Concepts for Neural Networks”, in: R. Eckmiller (ed.), Advanced Neural Computers, pp.209–218, North-Holland 1990
M.L. Minsky, S.A. Papert, “Perceptrons: An Introduction to Computational Geometry”, MIT Press, Cambridge, expanded edition 1988
T. Kohonen, “Self-Organization and Associative Memory”, Springer, Heidelberg, Berlin, 3rd edition 1989
D.E. Rumelhart, J.L. McClelland, “Parallel Distributed Processing, Explorations in the Microstructure of Cognition”, Vol. 1, Chap.8, MIT Press, Cambridge, 1986
J.J. Hopfield, “Neurons with graded response have collective computational properties like those of two state neurons”, Proc. Nat. Acad. Sci. USA 81, pp. 3088–3092, 1984
F.J. Pineda, “Generalization of Backpropagation to Recurrent and Higher Order Neural Networks”, in: D.Z. Anderson (ed.), Neural Information Processing Systems, Am. Inst. Phys., pp. 602–611, New York 1988
B. Kosko, “Adaptive Bidirectional Associative Memories”, Applied Optics 26, pp. 4947–4960, 1987
B. Schürmann, “Stability and Adaptation in Artificial Neural Systems”, Phys. Rev. A 40, pp. 2681–2688, 1989
T. Kohonen, “The neural ‘phonetic’ typewriter”, Computer 21, pp. 11–22, 1988
B. Schürmann, J. Hollatz, U. Ramacher, “Adaptive Recurrent Neural Networks and Dynamic Stability”, in: L. Garrido (ed.), Proc. XI Sitges Conf. on Neural Networks, 1990, Springer, Heidelberg, to appear
U. Ramacher et al., “Design of a 1st Generation Neurocomputer”, in: VLSI DESIGN OF NEURAL NETWORKS, edited by U. Ramacher, U. Rückert, Kluwer Academic Publishers, Nov. 1990
Copyright information
© 1991 Springer Science+Business Media Dordrecht
Cite this chapter
Ramacher, U., Schürmann, B. (1991). Unified Description of Neural Algorithms for Time-Independent Pattern Recognition. In: Ramacher, U., Rückert, U. (eds) VLSI Design of Neural Networks. The Springer International Series in Engineering and Computer Science, vol 122. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3994-0_13
DOI: https://doi.org/10.1007/978-1-4615-3994-0_13
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4613-6785-7
Online ISBN: 978-1-4615-3994-0