The watchdog technique for forcing convergence in algorithms for constrained optimization

Algorithms for Constrained Minimization of Smooth Nonlinear Functions

Part of the book series: Mathematical Programming Studies (volume 16)

Abstract

The watchdog technique is an extension to iterative optimization algorithms that use line searches. Its purpose is to allow some iterations to choose step-lengths that are much longer than the line-search objective (merit) function would normally allow. The technique is worth using because it can give large gains in efficiency when a sequence of steps has to follow a curved constraint boundary, and because it provides some highly useful algorithms with a Q-superlinear rate of convergence. The watchdog technique is described and discussed, and some global and Q-superlinear convergence properties are proved.
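
To make the idea concrete, the following is a minimal sketch, in Python, of a watchdog-style acceptance rule. It is an illustration under assumptions, not the algorithm analysed in this chapter: the helpers step (returning the full trial step at the current iterate), merit (an exact penalty or merit function) and line_search (a conventional search that enforces sufficient merit decrease) are hypothetical placeholders supplied by the caller, and the sufficient-decrease test is simplified relative to the one used in the paper.

    # Minimal watchdog sketch (illustrative; the helper functions are assumptions).
    def watchdog_minimize(x0, step, merit, line_search,
                          max_relaxed=5, sigma=1e-4, max_iter=100, tol=1e-8):
        x = list(x0)
        best_x, best_phi = list(x0), merit(x0)   # lowest-merit point found so far
        relaxed = 0                              # relaxed steps since the last sufficient decrease

        for _ in range(max_iter):
            d = step(x)
            if max(abs(di) for di in d) < tol:   # step is essentially zero: stop
                break
            trial = [xi + di for xi, di in zip(x, d)]
            phi_trial = merit(trial)

            # Simplified sufficient-decrease test (the chapter's test involves the
            # directional derivative of the exact penalty function).
            if phi_trial <= best_phi - sigma * sum(di * di for di in d):
                x, relaxed = trial, 0            # genuine progress: reset the watchdog
                best_x, best_phi = trial, phi_trial
            elif relaxed < max_relaxed:
                # Relaxed iteration: accept the full step even though the merit
                # function has not decreased sufficiently.
                x, relaxed = trial, relaxed + 1
            else:
                # Watchdog fires: return to the best point found so far and enforce
                # a conventional line search that guarantees merit decrease.
                d_best = step(best_x)
                alpha = line_search(best_x, d_best)
                x = [xi + alpha * di for xi, di in zip(best_x, d_best)]
                if merit(x) < best_phi:
                    best_x, best_phi = list(x), merit(x)
                relaxed = 0

        return best_x

The essential design choice in such a sketch is that the full step is always tried first, which preserves the fast local convergence of the underlying method, while the bounded number of relaxed iterations and the fall-back to the best stored point supply the safeguard associated with global convergence.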

Editor information

A. G. Buckley · J.-L. Goffin

Copyright information

© 1982 The Mathematical Programming Society, Inc.

About this chapter

Cite this chapter

Chamberlain, R.M., Powell, M.J.D., Lemarechal, C., Pedersen, H.C. (1982). The watchdog technique for forcing convergence in algorithms for constrained optimization. In: Buckley, A.G., Goffin, J.L. (eds) Algorithms for Constrained Minimization of Smooth Nonlinear Functions. Mathematical Programming Studies, vol 16. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0120945

  • DOI: https://doi.org/10.1007/BFb0120945

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-00812-2

  • Online ISBN: 978-3-642-00813-9

  • eBook Packages: Springer Book Archive
