Abstract
By utilizing the preconditioned Hermitian and skew-Hermitian splitting (PHSS) iteration technique, we establish a non-alternating PHSS (NPHSS) iteration method for solving large sparse non-Hermitian positive definite linear systems. The convergence analysis demonstrates that the iteration sequence produced by the NPHSS method converges to the unique solution of the linear system when the parameters satisfy some moderate conditions. We also give a quasi-optimal parameter that minimizes an upper bound of the spectral radius of the iteration matrix. Moreover, to reduce the computational cost, we establish an inexact variant of the NPHSS (INPHSS) iteration method and study its convergence property. Both theoretical and numerical results validate that the NPHSS method outperforms the PHSS method when the Hermitian part of the coefficient matrix is dominant.
1 Introduction
We consider the system of linear equations

$$\begin{aligned} Ax=b, \end{aligned}$$(1)

where \(A\in \mathbb {C}^{n \times n}\) is a large sparse non-Hermitian and positive definite matrix and \(b\in \mathbb {C}^n\). Such linear systems arise in many problems in scientific computing and engineering applications.
Since the matrix \(A\in \mathbb {C}^{n \times n}\) naturally possesses the Hermitian and skew-Hermitian splitting (HSS)

$$\begin{aligned} A=H+S, \end{aligned}$$(2)

where

$$\begin{aligned} H=\frac{1}{2}(A+A^*)\quad \text {and}\quad S=\frac{1}{2}(A-A^*), \end{aligned}$$

the HSS (Bai et al. 2003) and preconditioned HSS (PHSS) (Bai et al. 2004, 2007a) iteration methods were proposed to compute an approximate solution of the linear system (1). The PHSS method is defined as follows.
Algorithm 1
(The PHSS iteration method) Let \(x^{(0)}\in \mathbb {C}^n\) be an arbitrary initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{x^{(k)}\}_{k=0}^\infty \subset \mathbb {C}^n\) converges, compute the next iterate \(x^{(k+1)}\) according to the following procedure:

$$\begin{aligned} \left\{ \begin{array}{l} (\alpha P+H)x^{(k+1/2)}=(\alpha P-S)x^{(k)}+b,\\ (\alpha P+S)x^{(k+1)}=(\alpha P-H)x^{(k+1/2)}+b, \end{array}\right. \end{aligned}$$

where \(\alpha \) is a given positive constant and \(P \in \mathbb {C}^{n \times n}\) is a prescribed Hermitian positive definite matrix.
We can see that each iterate of the PHSS iteration alternates between \(H\) and \(S\), analogously to the classical alternating direction implicit (ADI) iteration for solving partial differential equations; see Peaceman and Rachford (1955), Douglas (1962). The choice of preconditioner \(P\) is mostly dependent on the structure of the coefficient matrix \(A\). For example, we can choose a preconditioner based on incomplete Cholesky factorization, incomplete LU factorization (Saad 2003), and incremental unknowns (Chen and Temam 1993; Yang and Wu 2014; Yang et al. 2013, 2015); see Benzi (2002), Chen (2005) for a detailed discussion.
In particular, if we choose \(P = I\), the identity matrix, the PHSS iteration method is reduced to the HSS iteration method (Bai et al. 2003). When \(P \ne I\), we can suitably choose \(P\) and \(\alpha \) such that the induced PHSS iteration method achieves fast convergence and high computing efficiency. Theoretical analysis shows that both methods converge unconditionally to the unique solution of the linear system (1). Due to its promising performance and elegant mathematical properties, the HSS iteration has attracted many researchers’ attention, resulting in numerous papers devoted to various aspects of this algorithm; see Bai et al. (2005, 2006, 2007b, 2008, 2010, 2011), Bai (2007, 2009, 2010), Benzi (2009), Bai and Golub (2007), Li et al. (2007a, b), Li et al. (2014a, b), Salkuyeh and Behnejad (2012), Yang et al. (2010), Yin and Dou (2012), Cao et al. (2012), Huang (2014).
However, the challenge of the HSS and PHSS iteration methods lies in solving the shifted skew-Hermitian sub-system of linear equations at each iteration step, which can be as difficult as solving the original problem. In this work, we present a non-alternating PHSS (NPHSS) iteration method for solving non-Hermitian positive definite linear systems (1). The NPHSS iteration method can be described as follows:
Algorithm 2
(The NPHSS iteration method) Let \(x^{(0)}\in \mathbb {C}^n\) be an arbitrary initial guess. For \(\ell =0,1,2,\ldots \) until the sequence of iterates \(\{x^{(\ell )}\}_{\ell =0}^\infty \subset \mathbb {C}^n\) converges, compute the next iterate \(x^{(\ell +1)}\) according to the following procedure:

$$\begin{aligned} (\alpha P+H)x^{(\ell +1)}=(\alpha P-S)x^{(\ell )}+b, \end{aligned}$$(3)

where \(\alpha \) is a given positive constant and \(P \in \mathbb {C}^{n \times n}\) is a prescribed Hermitian positive definite matrix.
Due to the Hermitian positive definiteness of the matrix \(\alpha P+H\), every sub-system in (3) can be solved effectively, either exactly by a sparse Cholesky factorization, or inexactly by a preconditioned conjugate gradient (PCG) scheme (Saad 2003). Theoretical analysis shows that the iteration sequence produced by the NPHSS method converges to the unique solution of the linear system (1) under a loose restriction on the choice of \(\alpha \). The contraction factor of the NPHSS iteration can be bounded by a function that depends only on the choice of \(\alpha \), the smallest eigenvalue of the matrix \(P^{-1}H\), and the maximum modulus of the eigenvalues of the matrix \(P^{-1}S\).
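For concreteness, note that the NPHSS step can be organized in correction form, \(x^{(\ell +1)}=x^{(\ell )}+(\alpha P+H)^{-1}\bigl (b-Ax^{(\ell )}\bigr )\). The following is an illustrative dense-matrix sketch in Python; the function name `nphss` and the direct solve via `numpy.linalg.solve` are our choices for exposition (in practice one would factorize \(\alpha P+H\) once by a sparse Cholesky decomposition):

```python
import numpy as np

def nphss(A, b, P, alpha, tol=1e-6, maxit=1000):
    # Illustrative dense sketch of the NPHSS iteration in correction form:
    #   x_{l+1} = x_l + (alpha*P + H)^{-1} (b - A x_l),
    # where H is the Hermitian part of A.
    H = 0.5 * (A + A.conj().T)                # Hermitian part of A
    M = alpha * P + H                          # Hermitian positive definite
    x = np.zeros(A.shape[0], dtype=A.dtype)
    bnorm = np.linalg.norm(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * bnorm:   # relative residual test
            return x, k
        x = x + np.linalg.solve(M, r)          # sparse Cholesky in practice
    return x, maxit
```

On a small positive definite test matrix, the iterate converges for any \(\alpha \) satisfying the conditions of Theorem 1 below.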
In particular, if we choose \(P = I\), the NPHSS iteration method reduces to the non-alternating HSS (NHSS) iteration method, which was originally proposed in Axelsson et al. (2004) and recently discussed in Li and Wu (2015). Therefore, our method is a preconditioned generalization of the method in Axelsson et al. (2004) and Li and Wu (2015). Moreover, based on the results in Axelsson et al. (2004), a more general framework of the splitting iteration method is established in our paper, and its inexact variant using the preconditioned conjugate gradient method as the inner solver is discussed. The comparison of the convergence speeds of the PHSS and NPHSS iteration methods is also discussed. A recent related work is Axelsson et al. (2014).
This paper is organized as follows. In Sect. 2, we analyze the convergence properties of the NPHSS iteration for linear system (1), including convergence condition, spectral radius of the iterative matrix, the choices of iterative parameter, etc. The comparison of the convergence speeds of PHSS and NPHSS iteration methods is discussed in Sect. 3. In Sect. 4, an inexact variant of the NPHSS (INPHSS) iteration method is presented and its convergence property is discussed. Numerical results are presented in Sect. 5 to illustrate the effectiveness of our methods. Finally, in Sect. 6, we end this work with a brief conclusion.
2 Convergence analysis of the NPHSS iteration method
In this section, we first consider the convergence properties of the NPHSS iteration method. We can reformulate the NPHSS iteration scheme (3) as

$$\begin{aligned} x^{(\ell +1)}=M(P;\alpha )x^{(\ell )}+(\alpha P+H)^{-1}b, \end{aligned}$$

where

$$\begin{aligned} M(P;\alpha )=(\alpha P+H)^{-1}(\alpha P-S). \end{aligned}$$(4)

Here, \(M(P; \alpha )\) is the iteration matrix of the NPHSS iteration method.
Note that \(M(P; \alpha )\) can be reformulated as

$$\begin{aligned} M(P;\alpha )=I-(\alpha P+H)^{-1}A; \end{aligned}$$

then the matrix \(\alpha P+H\) can be viewed as a preconditioner for the coefficient matrix \(A\in \mathbb {C}^{n \times n}\).
Since \(H\) is Hermitian positive definite and \(S\) is skew-Hermitian, all eigenvalues of \(P^{-1}H\) are real and positive, and all eigenvalues of \(P^{-1}S\) are purely imaginary. Here and in the sequel, denote

$$\begin{aligned} \lambda _{\min }=\min _{\lambda \in sp(P^{-1}H)}\lambda ,\quad \lambda _{\max }=\max _{\lambda \in sp(P^{-1}H)}\lambda ,\quad \xi _{\max }=\max _{i\xi \in sp(P^{-1}S)}|\xi |, \end{aligned}$$

where \(sp(X)\) denotes the spectrum of the matrix \(X\) and \(i=\sqrt{-1}\).
The following theorem gives the convergence result of the NPHSS iteration method.
Theorem 1
Let \(A\in \mathbb {C}^{n \times n}\) be a positive definite matrix, let \(H=\frac{1}{2}(A+A^*)\) and \(S=\frac{1}{2}(A-A^*)\) be its Hermitian and skew-Hermitian parts, respectively, and let \(\alpha \) be a positive constant. Let \(P \in \mathbb {C}^{n \times n}\) be a Hermitian positive definite matrix. Then, the spectral radius \(\rho (M(P; \alpha ))\) of the NPHSS iteration matrix (4) satisfies \(\rho (M(P; \alpha ))\le \sigma (\alpha )\), where

$$\begin{aligned} \sigma (\alpha )=\frac{\sqrt{\alpha ^2+\xi _{\max }^2}}{\alpha +\lambda _{\min }}. \end{aligned}$$(5)

Moreover, it holds that
(i) If \(\lambda _{\min }\ge \xi _{\max }\), then \(\sigma (\alpha )<1\) for any \(\alpha > 0\), which means that the NPHSS iteration method is unconditionally convergent;

(ii) if \(\lambda _{\min }<\xi _{\max }\), then \(\sigma (\alpha )<1\) if and only if

$$\begin{aligned} \alpha >\frac{\xi _{\max }^2-\lambda _{\min }^2}{2\lambda _{\min }}, \end{aligned}$$(6)

which means that the NPHSS iteration method is convergent under the condition (6).
Proof
By direct computations, we have

$$\begin{aligned} \rho (M(P;\alpha ))&=\rho \left( (\alpha P+H)^{-1}(\alpha P-S)\right) =\rho \left( (\alpha I+\widetilde{H})^{-1}(\alpha I-\widetilde{S})\right) \\&\le \Vert (\alpha I+\widetilde{H})^{-1}\Vert _2\,\Vert \alpha I-\widetilde{S}\Vert _2=\frac{\sqrt{\alpha ^2+\xi _{\max }^2}}{\alpha +\lambda _{\min }}=\sigma (\alpha ), \end{aligned}$$

where \(\widetilde{H}=P^{-1/2}HP^{-1/2}\) and \(\widetilde{S}=P^{-1/2}SP^{-1/2}\) are, respectively, Hermitian and skew-Hermitian, with \(sp(\widetilde{H})=sp(P^{-1}H)\) and \(sp(\widetilde{S})=sp(P^{-1}S)\) by similarity. Thus the upper bound of \(\rho (M(P; \alpha ))\) given in (5) is obtained.
By simple derivations, \(\sigma (\alpha )<1\) is equivalent to

$$\begin{aligned} \xi _{\max }^2-\lambda _{\min }^2<2\alpha \lambda _{\min }. \end{aligned}$$(7)

If \(\lambda _{\min }\ge \xi _{\max }\), then (7) holds true for any \(\alpha > 0\), i.e., the NPHSS iteration converges to the unique solution of the system of linear equations (1); if \(\lambda _{\min }<\xi _{\max }\), then (7), or \(\sigma (\alpha )<1\), holds if and only if \(\alpha \) satisfies (6). Therefore, in case (ii), the sufficient and necessary condition for the convergence of the NPHSS iteration method is inequality (6). \(\square \)
Theorem 1 gives the convergence conditions of the NPHSS iteration method by analyzing the upper bound \(\sigma (\alpha )\) of the spectral radius of the iteration matrix \(M(P; \alpha )\). Since the optimal parameter \(\alpha \) minimizing the spectral radius \(\rho (M(P; \alpha ))\) is difficult to obtain, we instead give, in the following corollary, the parameter \(\alpha ^\star \) that minimizes the upper bound \(\sigma (\alpha )\) of the spectral radius \(\rho (M(P; \alpha ))\).
Corollary 1
Let the conditions of Theorem 1 be satisfied. Then, the parameter \(\alpha ^\star \) minimizing the upper bound \(\sigma (\alpha )\) of the spectral radius \(\rho (M(P; \alpha ))\) is

$$\begin{aligned} \alpha ^\star =\frac{\xi _{\max }^2}{\lambda _{\min }}, \end{aligned}$$

and

$$\begin{aligned} \sigma (\alpha ^\star )=\frac{\xi _{\max }}{\sqrt{\lambda _{\min }^2+\xi _{\max }^2}}. \end{aligned}$$(8)
Proof
Simple calculation gives

$$\begin{aligned} \sigma '(\alpha )=\frac{\alpha \lambda _{\min }-\xi _{\max }^2}{(\alpha +\lambda _{\min })^2\sqrt{\alpha ^2+\xi _{\max }^2}}. \end{aligned}$$

It is obvious that \(\sigma '(\alpha )>0\) for \(\alpha >\xi _{\max }^2/\lambda _{\min }\) and \(\sigma '(\alpha )<0\) for \(0<\alpha <\xi _{\max }^2/\lambda _{\min }\). Hence, the upper bound \(\sigma (\alpha )\) of the spectral radius \(\rho (M(P; \alpha ))\) achieves its minimum at \(\alpha ^\star =\xi _{\max }^2/\lambda _{\min }\). Substituting \(\alpha ^\star \) into \(\sigma (\alpha )\), the minimum value of \(\sigma (\alpha )\) given in (8) is obtained. \(\square \)
Remark 1
For case (ii) of Theorem 1, i.e., \(\lambda _{\min }<\xi _{\max }\), we see that the \(\alpha ^\star \) given in Corollary 1 satisfies condition (6), since

$$\begin{aligned} \alpha ^\star -\frac{\xi _{\max }^2-\lambda _{\min }^2}{2\lambda _{\min }}=\frac{\xi _{\max }^2+\lambda _{\min }^2}{2\lambda _{\min }}>0. \end{aligned}$$
Remark 2
The parameter \(\alpha ^\star \) in Corollary 1 minimizes only the upper bound \(\sigma (\alpha )\) of the spectral radius of the iteration matrix. However, it is still helpful to us to choose an effective parameter \(\alpha \) for the NPHSS iteration method. We call \(\alpha ^\star \) the theoretical quasi-optimal parameter of the NPHSS iteration method.
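For small matrices, the quantities \(\lambda _{\min }\) and \(\xi _{\max }\), and hence the quasi-optimal parameter \(\alpha ^\star =\xi _{\max }^2/\lambda _{\min }\), can be computed numerically. The following is an illustrative sketch (the helper name `quasi_optimal_alpha` is ours); \(sp(P^{-1}H)\) is obtained from the generalized Hermitian eigenproblem \(Hv=\lambda Pv\):

```python
import numpy as np
from scipy.linalg import eigvals, eigvalsh

def quasi_optimal_alpha(A, P):
    # Compute lambda_min of P^{-1}H, xi_max of P^{-1}S, the quasi-optimal
    # parameter alpha* = xi_max**2 / lambda_min, and the corresponding
    # bound sigma(alpha*) = xi_max / sqrt(lambda_min**2 + xi_max**2).
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    lam_min = eigvalsh(H, P).min()                  # sp(P^{-1}H) is real positive
    # eigenvalues of P^{-1}S are purely imaginary, i*xi
    xi_max = np.abs(eigvals(np.linalg.solve(P, S)).imag).max()
    alpha_star = xi_max**2 / lam_min
    sigma_star = xi_max / np.hypot(lam_min, xi_max)
    return alpha_star, sigma_star
```

For instance, for \(A=\begin{pmatrix}4&1\\-1&4\end{pmatrix}\) and \(P=I\) one gets \(\lambda _{\min }=4\), \(\xi _{\max }=1\), hence \(\alpha ^\star =1/4\).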
3 Comparison of the NPHSS and PHSS methods
We first introduce a lemma briefly reviewing the convergence analysis of the PHSS method established in Bai et al. (2007a).
Lemma 1
Let the conditions of Theorem 1 be satisfied. Then, the spectral radius \(\rho (L(P; \alpha ))\) of the PHSS iteration matrix \(L(P; \alpha )=(\alpha P+S)^{-1}(\alpha P-H)(\alpha P+H)^{-1}(\alpha P-S)\) satisfies \(\rho (L(P; \alpha ))\le \gamma (\alpha )\), where

$$\begin{aligned} \gamma (\alpha )=\max _{\lambda \in sp(P^{-1}H)}\frac{|\alpha -\lambda |}{\alpha +\lambda }. \end{aligned}$$

Thus, it holds that \(\gamma (\alpha )<1\) for any \(\alpha >0\), i.e., the PHSS iteration method converges unconditionally to the unique solution of the linear system (1). Moreover, the minimum point \(\alpha _\star \) and the minimum value \(\gamma (\alpha _\star )\) of the upper bound \(\gamma (\alpha )\) are, respectively,

$$\begin{aligned} \alpha _\star =\sqrt{\lambda _{\min }\lambda _{\max }} \end{aligned}$$

and

$$\begin{aligned} \gamma (\alpha _\star )=\frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}+\sqrt{\lambda _{\min }}}. \end{aligned}$$
Note that the PHSS method is a two-step iteration. For a fair comparison, the NPHSS iteration scheme (3) can be equivalently rewritten as the following two-step iteration

$$\begin{aligned} \left\{ \begin{array}{l} (\alpha P+H)x^{(\ell +1/2)}=(\alpha P-S)x^{(\ell )}+b,\\ (\alpha P+H)x^{(\ell +1)}=(\alpha P-S)x^{(\ell +1/2)}+b, \end{array}\right. \end{aligned}$$(9)

and the iteration matrix of (9) is \(M(P; \alpha )^2\). Then we only need to compare the NPHSS and PHSS methods by analyzing the optimal upper bounds of the spectral radii \(\rho (M(P; \alpha )^2)\) and \(\rho (L(P; \alpha ))\). Using Corollary 1 and Lemma 1, we give the following comparison theorem.
Theorem 2
The respective optimal upper bounds of the spectral radii of the NPHSS and PHSS iteration matrices satisfy

$$\begin{aligned} \sigma ^2(\alpha ^\star )\le \gamma (\alpha _\star ) \end{aligned}$$

if and only if

$$\begin{aligned} \xi _{\max }^2\le \frac{\lambda _{\min }^2\left( \sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}\right) }{2\sqrt{\lambda _{\min }}}. \end{aligned}$$(10)
Proof
From Corollary 1 and Lemma 1, the inequality \(\sigma ^2(\alpha ^\star )\le \gamma (\alpha _\star )\) becomes

$$\begin{aligned} \frac{\xi _{\max }^2}{\lambda _{\min }^2+\xi _{\max }^2}\le \frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}+\sqrt{\lambda _{\min }}}, \end{aligned}$$

which is equivalent to

$$\begin{aligned} 2\sqrt{\lambda _{\min }}\,\xi _{\max }^2\le \lambda _{\min }^2\left( \sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}\right) . \end{aligned}$$

Thus, the condition (10) follows by noticing that \(\lambda _{\max }\ge \lambda _{\min }>0\). \(\square \)
Remark 3
When inequality (10) holds, i.e., when the Hermitian part of the coefficient matrix \(A\) is dominant compared with its skew-Hermitian part, we tend to choose the NPHSS method rather than the PHSS method to solve the linear system (1), and vice versa.
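Remark 3 can be checked numerically. The sketch below assumes the optimal bound values \(\sigma (\alpha ^\star )=\xi _{\max }/\sqrt{\lambda _{\min }^2+\xi _{\max }^2}\) from Corollary 1 and the classical PHSS bound \(\gamma (\alpha _\star )=(\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }})/(\sqrt{\lambda _{\max }}+\sqrt{\lambda _{\min }})\) from Lemma 1 (Bai et al. 2007a); the function name `compare_bounds` is ours:

```python
import numpy as np

def compare_bounds(lam_min, lam_max, xi_max):
    # Returns the two optimal upper bounds compared in Theorem 2:
    # sigma(alpha*)**2 for NPHSS (viewed as a two-step method)
    # and gamma(alpha_star) for PHSS.
    sigma2 = xi_max**2 / (lam_min**2 + xi_max**2)
    gamma = ((np.sqrt(lam_max) - np.sqrt(lam_min))
             / (np.sqrt(lam_max) + np.sqrt(lam_min)))
    return sigma2, gamma
```

With \(\lambda _{\min }=1\), \(\lambda _{\max }=4\) and a small \(\xi _{\max }\) (dominant Hermitian part), the NPHSS bound is the smaller of the two; with a large \(\xi _{\max }\) the comparison reverses, in line with Theorem 2.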
4 Inexact NPHSS iteration method
To further improve computational efficiency of the NPHSS iteration, we develop an inexact NPHSS (INPHSS) iteration, which solves (3) by PCG iterative method. We write the INPHSS iteration scheme in the following algorithm for solving the linear system (1).
Algorithm 3
(The INPHSS iteration method) Given an initial guess \(x^{(0)}\in \mathbb {C}^{n}\), the following procedure computes an approximate solution of the linear system (1):
\(\ell = 0\);
while (not convergent)
\(r^{(\ell )}=b-Ax^{(\ell )}\);
approximately solve \((\alpha P+H)z^{(\ell )}=r^{(\ell )}\) by employing PCG method, such that the residual \(p^{(\ell )}=r^{(\ell )}-(\alpha P+H)z^{(\ell )}\) of the iteration satisfies \(\Vert p^{(\ell )}\Vert \le \eta _\ell \Vert r^{(\ell )}\Vert \);
\(x^{(\ell +1)}=x^{(\ell )}+z^{(\ell )}\);
\(\ell =\ell +1\);
end
Here, \(\{\eta _\ell \}\) is a sequence of prescribed tolerances used to control the accuracy of the inner iterations.
We remark that when \(P=I\), the INPHSS method reduces to the inexact NHSS (INHSS) method.
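The loop in Algorithm 3 might be sketched as follows; this is an illustrative real-arithmetic sketch with a hand-written (unpreconditioned) conjugate gradient inner solver, and the helper names `cg_solve` and `inphss` are ours:

```python
import numpy as np

def cg_solve(M, r, eta):
    # Plain conjugate gradients on the HPD system M z = r, stopped once
    # the inner residual satisfies ||r - M z|| <= eta * ||r||.
    z = np.zeros_like(r)
    res = r.copy()
    p = res.copy()
    rho = res @ res
    tol2 = (eta * np.linalg.norm(r))**2
    while rho > tol2:
        Mp = M @ p
        a = rho / (p @ Mp)
        z += a * p
        res -= a * Mp
        rho_new = res @ res
        p = res + (rho_new / rho) * p
        rho = rho_new
    return z

def inphss(A, b, P, alpha, eta=0.01, tol=1e-6, maxit=1000):
    # Outer NPHSS steps with inexact inner solves (Algorithm 3).
    H = 0.5 * (A + A.T)            # Hermitian part (real symmetric here)
    M_in = alpha * P + H           # HPD inner coefficient matrix
    x = np.zeros(A.shape[0])
    bnorm = np.linalg.norm(b)
    for l in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * bnorm:
            return x, l
        x = x + cg_solve(M_in, r, eta)
    return x, maxit
```

A production implementation would of course use a preconditioned CG routine and sparse data structures; the sketch only mirrors the structure of the outer/inner loops.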
The convergence properties of the inexact HSS (IHSS) method have been carefully studied in Bai et al. (2003, 2008). Analogously, we can establish the following convergence result for the INPHSS iteration method.
Theorem 3
Let the conditions of Theorem 1 be satisfied. If \(\{x^{(\ell )}\}_{\ell =0}^{\infty }\subseteq \mathbb {C}^{n}\) is an iteration sequence generated by the INPHSS iteration method and if \(x^{\star }\in \mathbb {C}^{n}\) is the exact solution of the linear system (1), then it holds that

$$\begin{aligned} \Vert P^{1/2}(x^{(\ell +1)}-x^{\star })\Vert _2\le \left( \sigma (\alpha )+\mu \theta \eta _\ell \right) \Vert P^{1/2}(x^{(\ell )}-x^{\star })\Vert _2, \end{aligned}$$(12)

where the constants \(\mu \) and \(\theta \) are given by

$$\begin{aligned} \mu =\Vert P^{1/2}(\alpha P+H)^{-1}\Vert _2\quad \text {and}\quad \theta =\Vert AP^{-1/2}\Vert _2. \end{aligned}$$

In particular, when

$$\begin{aligned} \eta _{\max }<\frac{1-\sigma (\alpha )}{\mu \theta }, \end{aligned}$$(13)

then the iteration sequence \(\{x^{(\ell )}\}_{\ell =0}^{\infty }\subseteq \mathbb {C}^{n}\) converges to \(x^{\star }\in \mathbb {C}^{n}\), where \(\eta _{\max } = \max _{\ell }\{\eta _\ell \}\).
Proof
From Algorithm 3, we have

$$\begin{aligned} x^{(\ell +1)}=x^{(\ell )}+(\alpha P+H)^{-1}\left( b-Ax^{(\ell )}-p^{(\ell )}\right) . \end{aligned}$$(14)

Because \(x^{\star }\in \mathbb {C}^{n}\) is the exact solution of the linear system (1), it must satisfy

$$\begin{aligned} x^{\star }=x^{\star }+(\alpha P+H)^{-1}\left( b-Ax^{\star }\right) . \end{aligned}$$(15)

By subtracting (15) from (14), we have

$$\begin{aligned} x^{(\ell +1)}-x^{\star }=M(P;\alpha )\left( x^{(\ell )}-x^{\star }\right) -(\alpha P+H)^{-1}p^{(\ell )}. \end{aligned}$$(16)

Taking weighted norms on both sides of the identity (16), and using \(\Vert P^{1/2}M(P;\alpha )P^{-1/2}\Vert _2\le \sigma (\alpha )\) together with \(\Vert p^{(\ell )}\Vert _2\le \eta _\ell \Vert r^{(\ell )}\Vert _2\), we can obtain

$$\begin{aligned} \Vert P^{1/2}(x^{(\ell +1)}-x^{\star })\Vert _2\le \sigma (\alpha )\Vert P^{1/2}(x^{(\ell )}-x^{\star })\Vert _2+\mu \eta _\ell \Vert r^{(\ell )}\Vert _2. \end{aligned}$$(17)

Noticing that

$$\begin{aligned} \Vert r^{(\ell )}\Vert _2=\Vert A(x^{\star }-x^{(\ell )})\Vert _2\le \theta \Vert P^{1/2}(x^{(\ell )}-x^{\star })\Vert _2, \end{aligned}$$

by (17) we can obtain (12) and (13). \(\square \)
We remark that Theorem 3 gives the choice of the tolerance \(\{\eta _\ell \}\) for convergence. In general, to guarantee the convergence of the INPHSS iteration, the tolerance \(\{\eta _\ell \}\) only needs to satisfy the condition (13).
5 Numerical examples
In this section, we examine the efficiency of the NHSS and NPHSS iteration methods and their inexact variants for solving the linear system (1) by comparing the spectral radii, iteration numbers (denoted as IT) and CPU times (in seconds, denoted as CPU) with those of the HSS and PHSS methods.
We consider the three-dimensional convection–diffusion equation

$$\begin{aligned} -(u_{xx}+u_{yy}+u_{zz})+q(u_x+u_y+u_z)=f(x,y,z) \end{aligned}$$

on the unit cube \(\Omega =[0, 1] \times [0, 1]\times [0, 1]\), with constant coefficient \(q\) and subject to Dirichlet-type boundary conditions. Discretizing this equation with the seven-point finite difference scheme, and assuming the number \(m\) of grid points in each of the three directions is the same, we obtain a positive definite system (1) with the coefficient matrix

$$\begin{aligned} A=T_x\otimes I\otimes I+I\otimes T_y\otimes I+I\otimes I\otimes T_z, \end{aligned}$$

where \(\otimes \) denotes the Kronecker product, and \(T_x\), \(T_y\), and \(T_z\) are tridiagonal matrices given by

$$\begin{aligned} T_x=T_y=T_z=\mathrm {tridiag}(t_2,t_1,t_3), \end{aligned}$$

with

$$\begin{aligned} t_1=2,\quad t_2=-1-r,\quad t_3=-1+r, \end{aligned}$$

if the first-order derivatives are approximated by the centered difference scheme, and

$$\begin{aligned} t_1=2+2r,\quad t_2=-1-2r,\quad t_3=-1, \end{aligned}$$

if the first-order derivatives are approximated by the upwind difference scheme. Here \(r = qh/2 \) is the mesh Reynolds number with \(h = 1/( m+1)\) being the step size. For details, we refer to Bai et al. (2003).
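The Kronecker-product construction above can be assembled directly with sparse matrices. The sketch below assumes the centered stencil \(\mathrm {tridiag}(-1-r,\,2,\,-1+r)\) and the upwind stencil \(\mathrm {tridiag}(-1-2r,\,2+2r,\,-1)\) described in Bai et al. (2003); the function name `convection_diffusion_matrix` is ours:

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def convection_diffusion_matrix(m, q, scheme="centered"):
    # Assemble A = T (x) I (x) I + I (x) T (x) I + I (x) I (x) T,
    # the seven-point discretization of the 3D convection-diffusion
    # operator on an m x m x m interior grid, with r = q*h/2, h = 1/(m+1).
    h = 1.0 / (m + 1)
    r = q * h / 2.0
    if scheme == "centered":
        lo, di, up = -1.0 - r, 2.0, -1.0 + r
    else:  # upwind differencing of the first-order terms
        lo, di, up = -1.0 - 2.0 * r, 2.0 + 2.0 * r, -1.0
    T = diags([lo * np.ones(m - 1), di * np.ones(m), up * np.ones(m - 1)],
              offsets=[-1, 0, 1], format="csr")
    I = identity(m, format="csr")
    return kron(kron(T, I), I) + kron(kron(I, T), I) + kron(kron(I, I), T)
```

For \(q=0\) the matrix is symmetric (purely diffusive), while \(q>0\) introduces the skew-symmetric convection part whose weight grows with \(q\).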
In our implementation, the initial guess is chosen to be \(x^{(0)}=\mathbf 0 \) and the iteration is terminated once the current iterate \(x^{(\ell )}\) satisfies \(\Vert b-Ax^{(\ell )}\Vert _2/\Vert b\Vert _2\le 10^{-6}\). In addition, we set \(n=m^3=10^3\), and the right-hand side vector \(b = A\mathbf 1 \), with \(\mathbf 1 \) being the vector of all entries equal to \(1\). For convenience, the preconditioner \(P\) used in the NPHSS and PHSS methods is chosen to be \(P=\text {diag}(a_{11},a_{22},\ldots ,a_{nn})\), where \(a_{11},a_{22},\ldots ,a_{nn}\) are the main diagonal elements of the matrix \(A\).
In IHSS, INHSS, IPHSS and INPHSS iteration methods, we set all the tolerances \(\{\eta _\ell \}=0.01\). We solve the linear systems with coefficient matrices \(\alpha P+H\) iteratively by the CG method, and solve the linear systems with the coefficient matrix \(\alpha P+S\) iteratively by the GMRES method. We denote the average inner iteration numbers of CG and GMRES as IT\(_\mathrm{{CG}}\) and IT\(_\mathrm{{GMRES}}\), respectively.
The comparisons of the spectral radii of the iteration matrices of the HSS, NHSS, PHSS and NPHSS iteration methods with different damping coefficients \(q\) are displayed in Figs. 1, 2, 3 and 4. From Figs. 1, 2, 3 and 4, we find that when \(q\) is small (the Hermitian part of the coefficient matrix is dominant), the spectral radius of the iteration matrix of the NHSS method is much smaller than that of the HSS method, and the spectral radius of the iteration matrix of the NPHSS method is much smaller than that of the PHSS method. As \(q\) becomes large (the skew-Hermitian part is dominant), the HSS and PHSS methods become increasingly competitive.
In Tables 1, 2, 3 and 4, we present iteration numbers and CPU time for HSS, NHSS, PHSS and NPHSS iteration methods with different damping coefficients \(q\). From the results in Tables 1, 2, 3 and 4, we see that when \(q\) is small, NHSS and NPHSS methods, no matter compared with the experimental optimal parameter \(\alpha _\text {exp}\) or compared with the theoretical quasi-optimal parameter \(\alpha ^\star \), perform much better than HSS and PHSS methods both in iteration numbers and in CPU time. As \(q\) becomes large, the superiorities of NHSS and NPHSS methods disappear.
In Tables 5 and 6, numerical results for the IHSS, INHSS, IPHSS and INPHSS iteration methods are listed; here we adopt the iteration parameters in Tables 1 and 2 for convenience, rather than the experimental optimal parameters. From Tables 5 and 6, we can draw the same conclusions as from the tables above.
In addition, if Theorem 2 is applied to this numerical example, then we know that when \(q < 1.606\) in the centered difference scheme or \(q < 1.725\) in the upwind difference scheme, the NPHSS method is faster than PHSS and the NHSS method is faster than HSS. The above numerical results confirm this fact.
Therefore, we tend to use NHSS or NPHSS iteration methods to solve the linear system (1) when the Hermitian part \(H\) is dominant and employ HSS or PHSS if the skew-Hermitian part \(S\) is dominant.
6 Conclusions
In this paper, we have developed a non-alternating PHSS (NPHSS) method and its inexact variant for solving non-Hermitian positive definite linear systems. Theoretical analysis demonstrates that for any initial guess and a wide range of parameter \(\alpha \), NPHSS method converges to the unique solution of the linear system (1). We also derive an upper bound \(\sigma (\alpha )\) for the spectral radius of NPHSS iteration matrix and give the quasi-optimal parameter \(\alpha ^\star \) which minimizes the upper bound \(\sigma (\alpha )\). In addition, both theoretical and numerical results verify that when the Hermitian part of the coefficient matrix is dominant, NPHSS method performs better than HSS and PHSS methods. Hence, our work gives a better choice for solving the linear system (1) when the Hermitian part \(H\) of coefficient matrix \(A\) is dominant.
At last, we should mention that the choice of the optimal iteration parameters for the NPHSS method is an interesting but difficult topic. Some related discussions for the HSS method can be seen in Bai et al. (2006); Huang (2014). The investigation of this point may be considered in future study.
Comments: The result of the recent paper (Li and Wu 2015) is the special case \(P = I\) of our results here.
References
Axelsson O, Bai ZZ, Qiu SX (2004) A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part. Numer Algorithms 35(2–4):351–372
Axelsson O, Neytcheva M, Ahmad B (2014) A comparison of iterative methods to solve complex valued linear algebraic systems. Numer Algorithms 66(4):811–841
Bai ZZ (2007) Splitting iteration methods for non-Hermitian positive definite systems of linear equations. Hokkaido Math J 36(4):801–814
Bai ZZ (2009) Optimal parameters in the HSS-like methods for saddle-point problems. Numer Linear Algebra Appl 16(6):447–479
Bai ZZ (2010) On semi-convergence of Hermitian and skew-Hermitian splitting methods for singular linear systems. Computing 89(3–4):171–197
Bai ZZ, Golub GH (2007) Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J Numer Anal 27(1):1–23
Bai ZZ, Golub GH, Ng MK (2003) Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J Matrix Anal Appl 24(3):603–626
Bai ZZ, Golub GH, Pan JY (2004) Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer Math 98(1):1–32
Bai ZZ, Golub GH, Lu LZ, Yin JF (2005) Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J Sci Comput 26(3):844–863
Bai ZZ, Golub GH, Li CK (2006) Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J Sci Comput 28(2):583–603
Bai ZZ, Golub GH, Li CK (2007a) Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices. Math Comp 76(257):287–298
Bai ZZ, Golub GH, Ng MK (2007b) On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer Linear Algebra Appl 14(4):319–335
Bai ZZ, Golub GH, Ng MK (2008) On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. Linear Algebra Appl 428(2–3):413–440
Bai ZZ, Benzi M, Chen F (2010) Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87(3–4):93–111
Bai ZZ, Benzi M, Chen F (2011) On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer Algorithms 56(2):297–317
Benzi M (2002) Preconditioning techniques for large linear systems: a survey. J Comput Phys 182(2):418–477
Benzi M (2009) A generalization of the Hermitian and skew-Hermitian splitting iteration. SIAM J Matrix Anal Appl 31(2):360–374
Cao Y, Tan WW, Jiang MQ (2012) A generalization of the positive-definite and skew-Hermitian splitting iteration. Numer Algebra Control Optim 2(4):811–821
Chen K (2005) Matrix preconditioning techniques and applications. Cambridge University Press, Cambridge
Chen M, Temam R (1993) Incremental unknowns in finite differences: condition number of the matrix. SIAM J Matrix Anal Appl 14(2):432–455
Douglas J Jr (1962) Alternating direction methods for three space variables. Numer Math 4:41–63
Huang YM (2014) A practical formula for computing optimal parameters in the HSS iteration methods. J Comput Appl Math 255:142–149
Li CX, Wu SL (2015) A single-step HSS method for non-Hermitian positive definite linear systems. Appl Math Lett 44:26–29
Li L, Huang TZ, Liu XP (2007a) Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems. Comput Math Appl 54(1):147–159
Li L, Huang TZ, Liu XP (2007b) Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. Numer Linear Algebra Appl 14(3):217–235
Li X, Yang AL, Wu YJ (2014a) Lopsided PMHSS iteration method for a class of complex symmetric linear systems. Numer Algorithms 66(3):555–568
Li X, Yang AL, Wu YJ (2014b) Parameterized preconditioned Hermitian and skew-Hermitian splitting iteration method for saddle-point problems. Int J Comput Math 91(6):1224–1238
Peaceman DW, Rachford HH Jr (1955) The numerical solution of parabolic and elliptic differential equations. J Soc Ind Appl Math 3:28–41
Saad Y (2003) Iterative methods for sparse linear systems, 2nd edn. Society for Industrial and Applied Mathematics, Philadelphia
Salkuyeh DK, Behnejad S (2012) ‘Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems’ [Numer. Linear Algebra Appl. 14, (2007) 217–235] [Letter to the editor]. Numer Linear Algebra Appl 19(5):885–890
Yang AL, Wu YJ (2014) Preconditioning analysis of the one dimensional incremental unknowns method on nonuniform meshes. J Appl Math Comput 44(1–2):379–395
Yang AL, An J, Wu YJ (2010) A generalized preconditioned HSS method for non-Hermitian positive definite linear systems. Appl Math Comput 216(6):1715–1722
Yang AL, Song LJ, Wu YJ (2013) Algebraic preconditioning analysis of the multilevel block incremental unknowns method for anisotropic elliptic operators. Math Comput Model 57(3–4):512–524
Yang AL, Wu YJ, Huang ZD, Yuan JY (2015) Preconditioning analysis of nonuniform incremental unknowns method for two dimensional elliptic problems. Appl Math Model. doi:10.1016/j.apm.2015.01.009
Yin JF, Dou QY (2012) Generalized preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems. J Comput Math 30(4):404–417
Acknowledgments
The authors would like to thank the referees for their valuable comments and suggestions which greatly improve the presentation of the paper. This work is supported by the National Natural Science Foundation of China (Grant Nos. 11471150, 11401281, 11271173), and the CAPES and CNPq in Brazil.
Communicated by Ya-xiang Yuan.
Wu, YJ., Li, X. & Yuan, JY. A non-alternating preconditioned HSS iteration method for non-Hermitian positive definite linear systems. Comp. Appl. Math. 36, 367–381 (2017). https://doi.org/10.1007/s40314-015-0231-6
Keywords
- Non-Hermitian positive definite linear systems
- Hermitian and skew-Hermitian splitting
- Non-alternating PHSS iteration
- Spectral radius
- Convergence analysis