Abstract
We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea underlying the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength computation is based on a Taylor expansion at two successive iterative points together with a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
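A minimal sketch of such a scheme may clarify the ingredients named in the abstract: a descent direction obtained by approximating the Hessian with a scalar multiple of the identity (a diagonal matrix), an Armijo backtracking line search, and a scalar update fitted from a second-order Taylor model through two successive iterates. The specific update formula for `gamma` below is a hypothetical reconstruction for illustration, not the paper's exact rule.

```python
import numpy as np

def accelerated_gd(f, grad, x0, beta=0.5, sigma=1e-4, tol=1e-6, max_iter=1000):
    """Gradient descent with the Hessian approximated by gamma * I,
    using Armijo backtracking for the steplength.

    The gamma update is a plausible Taylor-expansion-based choice,
    assumed here for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    gamma = 1.0  # initial scalar Hessian approximation
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma  # Newton-like direction with Hessian ~ gamma * I
        # Backtracking (Armijo) line search
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x_new = x + t * d
        # Fit gamma from a second-order Taylor model through the
        # two successive iterates x and x_new:
        #   f(x_new) ~ f(x) + g.s + (gamma/2) ||s||^2,  s = x_new - x
        s = x_new - x
        num = 2.0 * (f(x_new) - f(x) - g.dot(s))
        den = s.dot(s)
        gamma = num / den if num > 0 and den > 0 else 1.0
        x = x_new
    return x
```

For example, on the strictly convex quadratic f(x) = ||x||², the iteration converges to the origin from any starting point.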
Stanimirović, P.S., Miladinović, M.B. Accelerated gradient descent methods with line search. Numer Algor 54, 503–520 (2010). https://doi.org/10.1007/s11075-009-9350-8