Abstract
In this chapter we discuss the quasi-Newton versions of the algorithms presented in Chapters 12, 13, and 15. Just as for unconstrained problems (see § 4.4), the quasi-Newton approach is useful when one does not want to compute second-order derivatives of the functions defining the optimization problem. The motivations are the same as in unconstrained optimization: computing second derivatives may demand too much human effort, their computation may take too long, or the problem dimensions may preclude storing Hessian matrices (in the latter case, limited-memory quasi-Newton methods are appropriate). Generally speaking, quasi-Newton methods require more iterations to converge, but each iteration is cheaper (at least for equality constrained problems).
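As a concrete reminder of the quasi-Newton idea the abstract refers to (curvature information is accumulated from gradient differences, with no Hessian evaluations), here is a minimal sketch of the classical BFGS inverse-Hessian update. This is an illustration of the general technique only, not the chapter's constrained algorithms; the function and variable names are the author's own for this sketch.

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of the inverse Hessian approximation H.

    s = x_{k+1} - x_k (step taken), y = g_{k+1} - g_k (gradient change).
    The updated matrix satisfies the secant equation  H_new @ y = s,
    provided the curvature condition  y^T s > 0  holds.
    Illustrative sketch only, not code from the chapter.
    """
    rho = 1.0 / (y @ s)                    # assumes y^T s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Tiny demo on a quadratic f(x) = 0.5 x^T A x, where the gradient
# change along a step s is exactly y = A s.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
s = np.array([1.0, -0.5])
y = A @ s
H1 = bfgs_update(np.eye(2), s, y)
print(np.allclose(H1 @ y, s))              # secant condition: prints True
```

No second derivatives of the objective ever appear: only the cheap quantities `s` and `y` are needed, which is precisely why each quasi-Newton iteration is cheaper than a Newton iteration, at the price of more iterations overall.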
© 2003 Springer-Verlag Berlin Heidelberg
Cite this chapter
Bonnans, J.F., Gilbert, J.C., Lemaréchal, C., Sagastizábal, C.A. (2003). Quasi-Newton Versions. In: Numerical Optimization. Universitext. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-05078-1_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-00191-1
Online ISBN: 978-3-662-05078-1