Abstract
In practical problems, iterative methods can rarely be used without some acceleration of convergence, commonly called preconditioning, which is typically achieved by incorporating some (incomplete or modified) direct algorithm as a part of the iteration. The effectiveness of preconditioned iterative methods increases when the iteration can be stopped as soon as the desired accuracy is reached. This requires, however, incorporating a proper measure of the achieved accuracy as a part of the computation.
The goal of this paper is to describe a simple and numerically reliable estimation of the size of the error in the preconditioned conjugate gradient method. This paper thereby extends results from [Z. Strakoš and P. Tichý, ETNA, 13 (2002), pp. 56–80] and communicates them to practical users of the preconditioned conjugate gradient method.
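The estimate discussed in the cited ETNA paper rests on the identity ||x − x_j||_A² = Σ_{i≥j} γ_i ||r_i||², where γ_i is the CG step length and r_i the residual; truncating the sum after d terms gives a cheap lower bound on the A-norm of the error that remains valid in finite precision. The following sketch illustrates this delay-d bound for plain (unpreconditioned) CG on a small synthetic SPD system; the matrix, the delay d = 4, and all variable names are illustrative choices, not the paper's code.

```python
import numpy as np

def cg_with_error_terms(A, b, maxit=200, tol=1e-12):
    """Plain CG that also records, per iteration, the quantity
    gamma_j * ||r_j||^2 whose partial sums give the delay-d
    lower bound on ||x - x_j||_A^2."""
    x = np.zeros(len(b))
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    iterates, terms = [x.copy()], []
    for _ in range(maxit):
        Ap = A @ p
        gamma = rr / (p @ Ap)          # CG step length gamma_j
        terms.append(gamma * rr)       # gamma_j * ||r_j||^2
        x = x + gamma * p
        r = r - gamma * Ap
        rr_new = r @ r
        iterates.append(x.copy())
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return iterates, terms

# Small synthetic SPD test problem (illustrative data only).
rng = np.random.default_rng(0)
Q = rng.standard_normal((30, 30))
A = Q @ Q.T + 30.0 * np.eye(30)        # well-conditioned SPD matrix
x_true = rng.standard_normal(30)
b = A @ x_true

iterates, terms = cg_with_error_terms(A, b)
d = 4                                   # delay; larger d tightens the bound
for j in range(len(terms) - d):
    estimate = np.sqrt(sum(terms[j:j + d]))   # lower bound on ||x - x_j||_A
    err = iterates[j] - x_true
    true_norm = np.sqrt(err @ A @ err)
    if true_norm > 1e-8:                      # skip rounding-dominated tail
        assert estimate <= true_norm * (1 + 1e-6)
```

For PCG the same construction applies with γ_i (z_i, r_i) in place of γ_i ||r_i||², where z_i = M⁻¹ r_i is the preconditioned residual, yielding a bound on the A-norm of the error of the preconditioned iteration.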
References
M. Arioli, A stopping criterion for the conjugate gradient algorithms in a finite element method framework, Numer. Math., 97 (2004), pp. 1–24.
M. Arioli, J. W. Demmel and I. S. Duff, Solving sparse linear systems with sparse backward error, SIAM J. Matrix Anal. Appl., 10 (1989), pp. 165–190.
M. Arioli, I. Duff and D. Ruiz, Stopping criteria for iterative solvers, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 138–144.
M. Arioli, E. Noulard and A. Russo, Stopping criteria for iterative methods: applications to PDE’s, Calcolo, 38 (2001), pp. 97–112.
O. Axelsson, Iterative Solution Methods, Cambridge University Press, 1994.
O. Axelsson and I. Kaporin, Error norm estimation and stopping criteria in preconditioned conjugate gradient iterations, Numer. Linear Algebra Appl., 8 (2001), pp. 265–286.
I. Babuška, Mathematics of the verification and validation in computational engineering, in Mathematical and Computer Modelling in Science and Engineering, M. Kočandrlová and V. Kelar, eds., pp. 5–12, Union of Czech Mathematicians and Physicists, Prague, 2003.
R. Barrett, M. Berry, T. F. Chan et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, PA, 1994.
P. Benner, Solving large-scale control problems, IEEE Control Syst. Mag., 24 (2004), pp. 44–59.
G. Dahlquist, S. Eisenstat and G. H. Golub, Bounds for the error of linear systems of equations using the theory of moments, J. Math. Anal. Appl., 37 (1972), pp. 151–166.
G. Dahlquist, G. H. Golub and S. G. Nash, Bounds for the error in linear systems, in Proc. Workshop on Semi-Infinite Programming, R. Hettich, ed., pp. 154–172, Springer, Berlin, 1978.
P. Deuflhard, Cascadic conjugate gradient methods for elliptic partial differential equations: algorithm and numerical results, in Domain decomposition methods in scientific and engineering computing (University Park, PA, 1993), Contemp. Math., vol. 180, pp. 29–42, Am. Math. Soc., Providence, RI, 1994.
V. Frayssé, L. Giraud, S. Gratton and J. Langou, A set of GMRES routines for real and complex arithmetics on high performance computers, TR/PA/03/3, CERFACS, Toulouse Cedex, France, 2003.
G. H. Golub and G. Meurant, Matrices, moments and quadrature, in Proc. 15th Dundee Conf., June 1993, D. F. Griffiths and G. A. Watson, eds., pp. 105–156, Longman Sci. Tech. Publ., 1994.
G. H. Golub and G. Meurant, Matrices, moments and quadrature. II. How to compute the norm of the error in iterative methods, BIT, 37 (1997), pp. 687–705.
G. H. Golub and Z. Strakoš, Estimates in quadratic formulas, Numer. Algorithms, 8 (1994), pp. 241–268.
G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, third edn., 1996.
A. Greenbaum, Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences, Linear Algebra Appl., 113 (1989), pp. 7–63.
A. Greenbaum, Estimating the attainable accuracy of recursively computed residual methods, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 535–551.
A. Greenbaum, Iterative methods for solving linear systems, Frontiers in Applied Mathematics, vol. 17, SIAM, Philadelphia, PA., 1997.
A. Greenbaum and Z. Strakoš, Predicting the behavior of finite precision Lanczos and conjugate gradient computations, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 121–137.
M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bureau Stand., 49 (1952), pp. 409–435.
N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1996.
R. Kouhia, Description of the CYLSHELL set, Laboratory of Structural Mechanics, Finland, May 1998. Matrix Market.
Matrix Market, http://math.nist.gov/MatrixMarket/. The Matrix Market is a service of the Mathematical and Computational Sciences Division of the Information Technology Laboratory of the National Institute of Standards and Technology.
G. Meurant, Computer solution of large linear systems, Studies in Mathematics and its Applications, vol. 28, North-Holland Publishing Co., Amsterdam, 1999.
G. Meurant, Numerical experiments in computing bounds for the norm of the error in the preconditioned conjugate gradient algorithm, Numer. Algorithms 22, 3–4 (1999), pp. 353–365.
G. Meurant, Towards a reliable implementation of the conjugate gradient method, Invited plenary lecture at the Latsis Symposium: Iterative Solvers for Large Linear Systems, Zurich, February 2002.
E. Noulard and M. Arioli, Vector stopping criteria for iterative methods: Theoretical tools, Publication No. 956, Istituto di Analisi Numerica, Pavia, Italy, 1995.
W. Oettli and W. Prager, Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides, Numer. Math., 6 (1964), pp. 405–409.
C. C. Paige, Error analysis of the Lanczos algorithm for tridiagonalizing a symmetric matrix, J. Inst. Math. Appl., 18 (1976), pp. 341–349.
C. C. Paige and M. A. Saunders, LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8 (1982), pp. 43–71.
C. C. Paige and Z. Strakoš, Residual and backward error bounds in minimum residual Krylov subspace methods, SIAM J. Sci. Comput., 23 (2002), pp. 1898–1923 (electronic).
Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, PA, second edn., 2003.
R. D. Skeel, Iterative refinement implies numerical stability for Gaussian elimination, Math. Comp., 35 (1980), pp. 817–832.
G. L. G. Sleijpen, H. A. van der Vorst and D. R. Fokkema, BiCGstab(l) and other hybrid Bi-CG methods, Numer. Algorithms, 7 (1994), pp. 75–109.
Z. Strakoš, Theory of Convergence and Effects of Finite Precision Arithmetic in Krylov Subspace Methods, thesis for the degree doctor of science, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, February 2001.
Z. Strakoš and P. Tichý, On error estimation in the conjugate gradient method and why it works in finite precision computations, Electron. Trans. Numer. Anal., 13 (2002), pp. 56–80 (electronic).
Z. Strakoš and P. Tichý, On estimation of the A-norm of the error in CG and PCG, PAMM, 3 (2003), pp. 553–554 (published online).
H. A. van der Vorst, Iterative Krylov methods for large linear systems, Cambridge Monographs on Applied and Computational Mathematics, vol. 13, Cambridge University Press, Cambridge, 2003.
AMS subject classification (2000)
15A06, 65F10, 65F25, 65G50
Strakoš, Z., Tichý, P., Error Estimation in Preconditioned Conjugate Gradients, BIT Numer. Math., 45 (2005), pp. 789–817. https://doi.org/10.1007/s10543-005-0032-1