Artificial Neural Networks

  • Chapter
Machine Learning in Modeling and Simulation

Part of the book series: Computational Methods in Engineering & the Sciences (CMES)

Abstract

To many, artificial neural networks (ANNs) are the archetypal machine learning algorithm; their power and generality, together with a long pedigree, make them difficult to displace from this position. As one of the earliest general machine learners, ANNs were among the first algorithms adopted for data-based engineering problems. This chapter discusses the historical development of ANNs in the context of engineering usage; in that context, it proves useful to divide the history into three main periods: pre-history, the first (MLP) age, and the second (deep) age.



Acknowledgements

The authors would like to acknowledge the support of the UK EPSRC via the Programme Grants EP/R006768/1 and EP/R004900/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY-ND) licence to any Author Accepted Manuscript version arising.

Author information

Correspondence to K. Worden.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Worden, K., Tsialiamanis, G., Cross, E.J., Rogers, T.J. (2023). Artificial Neural Networks. In: Rabczuk, T., Bathe, KJ. (eds) Machine Learning in Modeling and Simulation. Computational Methods in Engineering & the Sciences. Springer, Cham. https://doi.org/10.1007/978-3-031-36644-4_2
