Abstract
The purpose of this chapter is to consider various ways in which the fast second-order training algorithms discussed in Chapters 2 and 3 can be modified so that they are more likely to converge to global minima, rather than local minima, in the MLP error surface. Local minima are known to be a serious obstacle to successful training when MLPs are applied to many, though far from all, practical tasks (see the discussion in Section 1.2.2). In this respect, the significance of the benchmark test results presented in Chapter 5 is that they suggest local minima are an even more serious obstacle for certain second-order training methods, notably the quasi-Newton methods of Section 3.3.2 and (to a lesser extent) the conjugate gradient methods of Section 3.3.4, than they are for the conventional backpropagation-related training methods of Section 1.3.
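One classic remedy of the kind this chapter considers is multi-start optimisation: run the local second-order method from several random initial weight vectors and keep the best result. The sketch below is illustrative only, not the chapter's own algorithm; the 2-2-1 network, the XOR task, and the use of SciPy's BFGS routine as the quasi-Newton local search are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# XOR training set: a small task whose MLP error surface contains local minima.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])

def sse(w):
    """Sum-of-squares error of a 2-2-1 MLP with weights packed into w."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output unit
    return np.sum((y - T) ** 2)

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    w0 = rng.normal(scale=0.5, size=9)          # fresh random starting point
    res = minimize(sse, w0, method='BFGS')      # quasi-Newton local search
    if best is None or res.fun < best.fun:      # keep the lowest minimum found
        best = res

print(f"best sum-of-squares error over 20 restarts: {best.fun:.6f}")
```

Because each BFGS run converges only to the nearest local minimum, restarting from many random points raises the probability that at least one run lands in the basin of the global minimum; more sophisticated global strategies refine this basic idea.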
Cite this chapter
Shepherd, A.J. (1997). Global Optimisation. In: Second-Order Methods for Neural Networks. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0953-2_6