Abstract.
We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x^k)} converges at least at the sublinear rate of k^{-ε} for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x^k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
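The sublinear rate k^{-ε} claimed above can be observed numerically. The following sketch (not from the paper; the objective f(x) = x⁴ and step size are illustrative choices) runs plain gradient descent on a strictly convex function whose Hessian degenerates at the minimizer, which is exactly the regime where convergence slows from linear to sublinear: the standard asymptotic here is f(x^k) ≈ C·k^{-2}.

```python
def gradient_descent_quartic(x0=1.0, step=0.01, iters=2000):
    """Gradient descent on f(x) = x**4, whose unique minimizer is 0.

    Because f''(0) = 0, the method converges only sublinearly:
    f(x^k) decays like k^{-2} rather than geometrically.
    Returns the sequence of objective values f(x^0), ..., f(x^iters).
    """
    x = x0
    values = [x ** 4]
    for _ in range(iters):
        x -= step * 4 * x ** 3   # gradient step: f'(x) = 4 x**3
        values.append(x ** 4)
    return values

if __name__ == "__main__":
    fs = gradient_descent_quartic()
    # Sublinear signature: doubling k multiplies f(x^k) by about (1/2)^2,
    # instead of the fixed geometric factor a linearly convergent method shows.
    print(fs[1000], fs[2000], fs[2000] / fs[1000])
```

For this example the ratio f(x^{2k})/f(x^k) settles near 0.25, consistent with a k^{-2} rate; a linearly convergent sequence would drive that ratio toward zero as k grows.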
Received: October 5, 1999 / Accepted: January 1, 2000 / Published online: July 20, 2000
Cite this article
Luo, ZQ. New error bounds and their applications to convergence analysis of iterative algorithms. Math. Program. 88, 341–355 (2000). https://doi.org/10.1007/s101070050020