Part of the book series: Perspectives in Neural Computing

Abstract

The purpose of this chapter is to consider various ways in which the fast second-order training algorithms discussed in Chapters 2 and 3 can be modified so that they are more likely to converge to global minima, rather than local minima, in the MLP error surface. Local minima are known to be a serious obstacle to successful training when MLPs are applied to many, though far from all, practical tasks (see the discussion in Section 1.2.2). In this respect, the significance of the benchmark test results presented in Chapter 5 is that they suggest local minima are an even more serious obstacle for certain second-order training methods, notably the quasi-Newton methods of Section 3.3.2 and, to a lesser extent, the conjugate gradient methods of Section 3.3.4, than they are for the conventional backpropagation-related training methods of Section 1.3.
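
As a concrete illustration of the problem the chapter addresses, the sketch below (my own, not taken from the book) trains a tiny sigmoid MLP on XOR with a quasi-Newton (BFGS) optimiser restarted from several random weight vectors, keeping the lowest error found. Multi-start restarting is the simplest baseline global-optimisation device against which more sophisticated modifications of second-order training can be judged; the 2-2-1 architecture, the sum-of-squares error, and all names here are illustrative assumptions.

```python
# A minimal sketch (not the book's code) of multi-start quasi-Newton training:
# run BFGS from several random initial weight vectors and keep the best
# local minimum found.  Task, architecture, and helper names are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# XOR: a classic task whose small-MLP error surface has well-known local minima.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 2                       # 2-2-1 architecture
N_W = N_HID * (N_IN + 1) + (N_HID + 1)   # total weights, including biases

def sse(w):
    """Sum-of-squares error of the 2-2-1 sigmoid MLP with weight vector w."""
    W1 = w[:N_HID * (N_IN + 1)].reshape(N_HID, N_IN + 1)
    W2 = w[N_HID * (N_IN + 1):]
    h = 1.0 / (1.0 + np.exp(-(X @ W1[:, :N_IN].T + W1[:, N_IN])))  # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2[:N_HID] + W2[N_HID])))        # output unit
    return np.sum((y - t) ** 2)

# Multi-start BFGS: each restart converges to some local minimum; keeping the
# best over many restarts raises the chance of finding the global minimum.
# Gradients are finite-differenced by scipy here, purely for brevity.
best = None
for _ in range(20):
    w0 = rng.normal(scale=1.0, size=N_W)
    res = minimize(sse, w0, method="BFGS")
    if best is None or res.fun < best.fun:
        best = res

print(f"best SSE over 20 restarts: {best.fun:.6f}")
```

On a surface like XOR's, single runs frequently stall at a non-global minimum with a visibly higher error; the spread of final errors across restarts gives a rough, informal picture of how hostile a given error surface is to a purely local second-order method.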


Copyright information

© 1997 Springer-Verlag London Limited

Cite this chapter

Shepherd, A.J. (1997). Global Optimisation. In: Second-Order Methods for Neural Networks. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0953-2_6

  • DOI: https://doi.org/10.1007/978-1-4471-0953-2_6

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76100-6

  • Online ISBN: 978-1-4471-0953-2
