Abstract
Neural networks are a very successful machine learning technique; at present, deep (multi-layer) neural networks are the most successful of the known machine learning techniques. However, they still have limitations. One of the main limitations is that their learning process is still too slow. A major reason learning in neural networks is slow is that current neural networks are unable to take prior knowledge into account: they simply ignore this knowledge and learn "from scratch". In this paper, we show how neural networks can take prior knowledge into account and thus, hopefully, learn faster.
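The chapter's own construction is not reproduced on this page. As a hedged illustration of the general idea, a standard way to let a model "take a known constraint into account" is to add a penalty for violating that constraint to the training loss; the linear model, the hypothetical constraint w1 + w2 = 1, and the names `fit`, `grad`, and `lam` below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Illustrative sketch (not the chapter's actual construction): fit a linear
# model y ~ w1*x1 + w2*x2, where prior knowledge says w1 + w2 = 1.  The
# constraint is enforced softly by adding lam * (w1 + w2 - 1)^2 to the loss.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([0.3, 0.7])          # note: satisfies w1 + w2 = 1
y = X @ true_w

def grad(w, lam):
    """Gradient of mean-squared error + lam * (w1 + w2 - 1)^2."""
    g_mse = 2.0 * X.T @ (X @ w - y) / len(y)
    g_con = 2.0 * lam * (w.sum() - 1.0) * np.ones(2)
    return g_mse + g_con

def fit(lam, steps=500, lr=0.05):
    """Plain gradient descent; lam = 0 recovers unconstrained training."""
    w = np.zeros(2)
    for _ in range(steps):
        w -= lr * grad(w, lam)
    return w

w_prior = fit(lam=1.0)                  # training with the known constraint
```

With the penalty active, every gradient step receives extra information from the constraint in addition to the data, which is the intuition behind the hoped-for speedup; the chapter's actual analysis of how to incorporate known constraints is more specific than this sketch.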
Acknowledgements
This work was supported in part by NSF grants HRD-0734825, HRD-1242122, and DUE-0926721, and by an award from Prudential Foundation.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Baral, C., Ceberio, M., Kreinovich, V. (2020). How Neural Networks (NN) Can (Hopefully) Learn Faster by Taking into Account Known Constraints. In: Ceberio, M., Kreinovich, V. (eds) Decision Making under Constraints. Studies in Systems, Decision and Control, vol 276. Springer, Cham. https://doi.org/10.1007/978-3-030-40814-5_3
DOI: https://doi.org/10.1007/978-3-030-40814-5_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-40813-8
Online ISBN: 978-3-030-40814-5
eBook Packages: Intelligent Technologies and Robotics (R0)