1 Introduction

We consider the system of linear equations

$$\begin{aligned} Ax=b, \quad A\in \mathbb {C}^{n \times n}, \quad x, b\in \mathbb {C}^n, \end{aligned}$$
(1)

where \(A\in \mathbb {C}^{n \times n}\) is a large, sparse, non-Hermitian positive definite matrix. Such linear systems arise in many problems in scientific computing and engineering applications.

Since the matrix \(A\in \mathbb {C}^{n \times n}\) naturally possesses the Hermitian and skew-Hermitian splitting (HSS)

$$\begin{aligned} A=H+S, \end{aligned}$$

where

$$\begin{aligned} H=\frac{1}{2}(A+A^*) \quad \mathrm{and} \quad S=\frac{1}{2}(A-A^*), \end{aligned}$$

the HSS (Bai et al. 2003) and preconditioned HSS (PHSS) (Bai et al. 2004, 2007a) iteration methods were proposed to compute approximate solutions of the linear system (1). The PHSS method is defined as follows.

Algorithm 1

(The PHSS iteration method) Let \(x^{(0)}\in \mathbb {C}^n\) be an arbitrary initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{x^{(k)}\}_{k=0}^\infty \subset \mathbb {C}^n\) converges, compute the next iterate \(x^{(k+1)}\) according to the following procedure:

$$\begin{aligned} \left\{ \begin{array}{l} (\alpha P+H)x^{(k+\frac{1}{2})}=(\alpha P-S)x^{(k)}+b,\\ (\alpha P+S)x^{(k+1)}=(\alpha P-H)x^{(k+\frac{1}{2})}+b, \end{array}\right. \end{aligned}$$
(2)

where \(\alpha \) is a given positive constant and \(P \in \mathbb {C}^{n \times n}\) is a prescribed Hermitian positive definite matrix.
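For concreteness, a minimal sketch of iteration (2) in Python with NumPy might look as follows; the dense direct solves and the residual-based stopping rule are our illustrative assumptions, not part of the method statement (in practice the two shifted sub-systems would be handled by sparse factorizations or inner iterative solvers).

```python
import numpy as np

def phss(A, b, alpha, P, x0=None, tol=1e-6, maxit=1000):
    """A sketch of the PHSS iteration (2) with dense direct solves."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2  # Hermitian part
    S = (A - A.conj().T) / 2  # skew-Hermitian part
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    for k in range(maxit):
        # first half-step: (alpha*P + H) x_{k+1/2} = (alpha*P - S) x_k + b
        x_half = np.linalg.solve(alpha * P + H, (alpha * P - S) @ x + b)
        # second half-step: (alpha*P + S) x_{k+1} = (alpha*P - H) x_{k+1/2} + b
        x = np.linalg.solve(alpha * P + S, (alpha * P - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```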

Each step of the PHSS iteration alternates between the Hermitian part \(H\) and the skew-Hermitian part \(S\), analogously to the classical alternating direction implicit (ADI) iteration for solving partial differential equations; see Peaceman and Rachford (1955), Douglas (1962). The choice of the preconditioner \(P\) depends mostly on the structure of the coefficient matrix \(A\). For example, we can choose a preconditioner based on incomplete Cholesky factorization, incomplete LU factorization (Saad 2003), or incremental unknowns (Chen and Temam 1993; Yang and Wu 2014; Yang et al. 2013, 2015); see Benzi (2002), Chen (2005) for a detailed discussion.

In particular, if we choose \(P = I\), the identity matrix, the PHSS iteration method reduces to the HSS iteration method (Bai et al. 2003). When \(P \ne I\), we can choose \(P\) and \(\alpha \) suitably so that the induced PHSS iteration method achieves fast convergence and high computational efficiency. Theoretical analysis shows that both methods converge unconditionally to the unique solution of the linear system (1). Due to its promising performance and elegant mathematical properties, the HSS iteration has attracted much attention, resulting in numerous papers devoted to various aspects of this algorithm; see Bai et al. (2005, 2006, 2007b, 2008, 2010, 2011), Bai (2007, 2009, 2010), Benzi (2009), Bai and Golub (2007), Li et al. (2007a, b), Li et al. (2014a, b), Salkuyeh and Behnejad (2012), Yang et al. (2010), Yin and Dou (2012), Cao et al. (2012), Huang (2014).

However, the main challenge of the HSS and PHSS iteration methods lies in solving the shifted skew-Hermitian sub-system of linear equations at each iteration step, which can be as difficult as solving the original problem. In this work, we present a non-alternating PHSS (NPHSS) iteration method for solving the non-Hermitian positive definite linear system (1). The NPHSS iteration method can be described as follows:

Algorithm 2

(The NPHSS iteration method) Let \(x^{(0)}\in \mathbb {C}^n\) be an arbitrary initial guess. For \(\ell =0,1,2,\ldots \) until the sequence of iterates \(\{x^{(\ell )}\}_{\ell =0}^\infty \subset \mathbb {C}^n\) converges, compute the next iterate \(x^{(\ell +1)}\) according to the following procedure:

$$\begin{aligned} (\alpha P+H)x^{(\ell +1)}=(\alpha P-S)x^{(\ell )}+b, \end{aligned}$$
(3)

where \(\alpha \) is a given positive constant and \(P \in \mathbb {C}^{n \times n}\) is a prescribed Hermitian positive definite matrix.

Due to the Hermitian positive definiteness of the matrix \(\alpha P+H\), the sub-system in (3) can be solved effectively at each step, either exactly by a sparse Cholesky factorization, or inexactly by a preconditioned conjugate gradient (PCG) scheme (Saad 2003). Theoretical analysis shows that the iterative sequence produced by the NPHSS iteration method converges to the unique solution of the linear system (1) under a loose restriction on the choice of \(\alpha \). The contraction factor of the NPHSS iteration can be bounded by a function that depends only on the choice of \(\alpha \), the smallest eigenvalue of the matrix \(P^{-1}H\), and the largest modulus of the eigenvalues of the matrix \(P^{-1}S\).
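Because the coefficient matrix \(\alpha P+H\) in (3) does not change from step to step, it can be factorized once and reused throughout. A minimal dense sketch, assuming SciPy's Cholesky routines (the stopping rule is again our illustrative choice):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def nphss(A, b, alpha, P, x0=None, tol=1e-6, maxit=1000):
    """A sketch of the NPHSS iteration (3): one Cholesky factorization
    of the Hermitian positive definite matrix alpha*P + H, reused in
    every step."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    chol = cho_factor(alpha * P + H)   # factorize once, reuse below
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    for k in range(maxit):
        x = cho_solve(chol, (alpha * P - S) @ x + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```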

In particular, if we choose \(P = I\), the NPHSS iteration method reduces to the non-alternating HSS (NHSS) iteration method, which was originally proposed in Axelsson et al. (2004) and recently discussed in Li and Wu (2015). Therefore, our method is a preconditioned generalization of the method in Axelsson et al. (2004) and Li and Wu (2015). Moreover, building on the results in Axelsson et al. (2004), we establish a more general framework for this splitting iteration and discuss its inexact variant, which uses the preconditioned conjugate gradient method as the inner solver. We also compare the convergence speeds of the PHSS and NPHSS iteration methods. A related recent work is Axelsson et al. (2014).

This paper is organized as follows. In Sect. 2, we analyze the convergence properties of the NPHSS iteration for the linear system (1), including the convergence condition, the spectral radius of the iteration matrix, and the choice of the iteration parameter. The convergence speeds of the PHSS and NPHSS iteration methods are compared in Sect. 3. In Sect. 4, an inexact variant of the NPHSS (INPHSS) iteration method is presented and its convergence is analyzed. Numerical results are given in Sect. 5 to illustrate the effectiveness of our methods. Finally, Sect. 6 contains a brief conclusion.

2 Convergence analysis of the NPHSS iteration method

In this section, we first consider the convergence properties of the NPHSS iteration method. We can reformulate the NPHSS iteration scheme (3) as

$$\begin{aligned} x^{(\ell +1)}= M(P; \alpha )x^{(\ell )}+N(P; \alpha )b, \quad \ell =0, 1, 2, \ldots , \end{aligned}$$

where

$$\begin{aligned} M(P; \alpha )=(\alpha P+H)^{-1}(\alpha P-S) \quad \mathrm{and} \quad N(P; \alpha )=(\alpha P+H)^{-1}. \end{aligned}$$
(4)

Here, \(M(P; \alpha )\) is the iteration matrix of the NPHSS iteration method.

Note that \(M(P; \alpha )\) can be reformulated as

$$\begin{aligned} M(P; \alpha )=I-(\alpha P+H)^{-1}A, \end{aligned}$$

so the matrix \(\alpha P+H\) can be viewed as a preconditioner for the coefficient matrix \(A\in \mathbb {C}^{n \times n}\).

Since \(P\) is Hermitian positive definite, \(P^{-1}H\) is similar to the Hermitian positive definite matrix \(P^{-\frac{1}{2}}HP^{-\frac{1}{2}}\) and \(P^{-1}S\) is similar to the skew-Hermitian matrix \(P^{-\frac{1}{2}}SP^{-\frac{1}{2}}\). Hence all eigenvalues of \(P^{-1}H\) are real and positive, and all eigenvalues of \(P^{-1}S\) are purely imaginary. Here and in the sequel, denote

$$\begin{aligned} \lambda _{\max }=\max _{\lambda _j\in sp(P^{-1}H)}\{\lambda _j\},~~ \lambda _{\min }=\min _{\lambda _j\in sp(P^{-1}H)}\{\lambda _j\}~~ \mathrm{and} ~~ \xi _{\max }=\max _{i\xi _j\in sp(P^{-1}S)}\{|\xi _j|\}, \end{aligned}$$

where \(sp(X)\) denotes the spectrum of the matrix \(X\) and \(i=\sqrt{-1}\).
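For moderate \(n\), these quantities can be computed directly: the eigenvalues of \(P^{-1}H\) are those of the Hermitian definite pencil \((H, P)\), which are real, and the eigenvalues of \(P^{-1}S\) are purely imaginary. A small sketch (the function name and interface are ours):

```python
import numpy as np
from scipy.linalg import eigh, eig

def spectral_quantities(H, S, P):
    """Return lambda_max, lambda_min over sp(P^{-1}H) and xi_max over
    sp(P^{-1}S), as defined in the text."""
    lam = eigh(H, P, eigvals_only=True)           # real, in ascending order
    w = eig(np.linalg.solve(P, S), right=False)   # eigenvalues i*xi_j
    return lam[-1], lam[0], np.abs(w.imag).max()
```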

The following theorem gives the convergence result of the NPHSS iteration method.

Theorem 1

Let \(A\in \mathbb {C}^{n \times n}\) be a positive definite matrix, let \(H=\frac{1}{2}(A+A^*)\) and \(S=\frac{1}{2}(A-A^*)\) be its Hermitian and skew-Hermitian parts, respectively, and let \(\alpha \) be a positive constant. Let \(P \in \mathbb {C}^{n \times n}\) be a Hermitian positive definite matrix. Then, the spectral radius \(\rho (M(P; \alpha ))\) of the NPHSS iteration matrix (4) satisfies \(\rho (M(P; \alpha ))\le \sigma (\alpha )\), where

$$\begin{aligned} \sigma (\alpha )=\frac{\sqrt{\alpha ^2+\xi _{\max }^2}}{\alpha +\lambda _{\min }}. \end{aligned}$$
(5)

Moreover, it holds that

(i) If \(\lambda _{\min }\ge \xi _{\max }\), then \(\sigma (\alpha )<1\) for any \(\alpha > 0\), which means that the NPHSS iteration method is unconditionally convergent;

(ii) if \(\lambda _{\min }<\xi _{\max }\), then \(\sigma (\alpha )<1\) if and only if

    $$\begin{aligned} \alpha >\frac{\xi _{\max }^2-\lambda _{\min }^2}{2\lambda _{\min }}, \end{aligned}$$
    (6)

    which means that the NPHSS iteration method is convergent under the condition (6).

Proof

Let \(\widetilde{H}=P^{-\frac{1}{2}}HP^{-\frac{1}{2}}\) and \(\widetilde{S}=P^{-\frac{1}{2}}SP^{-\frac{1}{2}}\); then \(\widetilde{H}\) is Hermitian positive definite, \(\widetilde{S}\) is skew-Hermitian, and they are similar to \(P^{-1}H\) and \(P^{-1}S\), respectively. By direct computations, we have

$$\begin{aligned} \rho (M(P; \alpha ))= & {} \rho \left( (\alpha P+H)^{-1}(\alpha P-S)\right) \\= & {} \rho \left( P^{\frac{1}{2}}(\alpha P+H)^{-1}(\alpha P-S)P^{-\frac{1}{2}}\right) \\= & {} \rho \left( \left( \alpha I+\widetilde{H}\right) ^{-1}\left( \alpha I-\widetilde{S}\right) \right) \\\le & {} \left\| \left( \alpha I+\widetilde{H}\right) ^{-1}\left( \alpha I-\widetilde{S}\right) \right\| _2\\\le & {} \left\| \left( \alpha I+\widetilde{H}\right) ^{-1}\right\| _2\left\| \alpha I-\widetilde{S}\right\| _2\\= & {} \max _{\lambda _j\in sp(P^{-1}H)}\frac{1}{\alpha +\lambda _j}\cdot \max _{i\xi _j\in sp(P^{-1}S)}\left| \alpha -i\xi _j\right| \\= & {} \max _{\lambda _j\in sp(P^{-1}H)}\frac{1}{\alpha +\lambda _j}\cdot \max _{i\xi _j\in sp(P^{-1}S)}\sqrt{\alpha ^2+\xi _j^2}\\= & {} \frac{\sqrt{\alpha ^2+\xi _{\max }^2}}{\alpha +\lambda _{\min }}, \end{aligned}$$

where the last equalities use the facts that \(\alpha I+\widetilde{H}\) is Hermitian positive definite with eigenvalues \(\alpha +\lambda _j\), and that \(\alpha I-\widetilde{S}\) is normal with eigenvalues \(\alpha -i\xi _j\).

Thus the upper bound of \(\rho (M(P; \alpha ))\) given in (5) is obtained.

A simple derivation shows that \(\sigma (\alpha )<1\) is equivalent to

$$\begin{aligned} 2\lambda _{\min }\alpha >\xi _{\max }^2-\lambda _{\min }^2. \end{aligned}$$
(7)

If \(\lambda _{\min }\ge \xi _{\max }\), then (7) holds for any \(\alpha > 0\), i.e., the NPHSS iteration converges to the unique solution of the system of linear equations (1); if \(\lambda _{\min }<\xi _{\max }\), then (7), i.e., \(\sigma (\alpha )<1\), holds if and only if \(\alpha \) satisfies (6). Therefore, in case (ii), inequality (6) is the necessary and sufficient condition for the convergence of the NPHSS iteration method. \(\square \)

Theorem 1 gives the convergence conditions of the NPHSS iteration method by analyzing the upper bound \(\sigma (\alpha )\) of the spectral radius of the iteration matrix \(M(P; \alpha )\). Since the parameter \(\alpha \) minimizing the spectral radius \(\rho (M(P; \alpha ))\) itself is difficult to obtain, we instead give, in the following corollary, the parameter \(\alpha ^\star \) that minimizes the upper bound \(\sigma (\alpha )\).

Corollary 1

Let the conditions of Theorem 1 be satisfied. Then, the parameter \(\alpha ^\star \) minimizing the upper bound \(\sigma (\alpha )\) of the spectral radius \(\rho (M(P; \alpha ))\) is

$$\begin{aligned} \alpha ^\star \equiv \mathrm{arg} \min _{\alpha }\left\{ \sigma (\alpha )\right\} = \mathrm{arg} \min _{\alpha }\left\{ \frac{\sqrt{\alpha ^2+\xi _{\max }^2}}{\alpha +\lambda _{\min }}\right\} =\frac{\xi _{\max }^2}{\lambda _{\min }} \end{aligned}$$

and

$$\begin{aligned} \sigma (\alpha ^\star )=\frac{\xi _{\max }}{\sqrt{\lambda _{\min }^2+\xi _{\max }^2}}. \end{aligned}$$
(8)

Proof

A simple calculation gives

$$\begin{aligned} \sigma '(\alpha )=\frac{\alpha \lambda _{\min } -\xi _{\max } ^2}{(\alpha +\lambda _{\min } )^2 \sqrt{\alpha ^2+\xi _{\max } ^2}}. \end{aligned}$$

It is obvious that \(\sigma '(\alpha )>0\) for \(\alpha >\xi _{\max }^2/\lambda _{\min }\) and \(\sigma '(\alpha )<0\) for \(\alpha <\xi _{\max }^2/\lambda _{\min }\). Hence, the upper bound \(\sigma (\alpha )\) of the spectral radius \(\rho (M(P; \alpha ))\) attains its minimum at \(\alpha ^\star =\xi _{\max }^2/\lambda _{\min }\). Substituting \(\alpha ^\star \) into \(\sigma (\alpha )\) gives the minimum value (8). \(\square \)

Remark 1

For case (ii) of Theorem 1, i.e., \(\lambda _{\min }<\xi _{\max }\), we see that the \(\alpha ^\star \) given in Corollary 1 satisfies condition (6) since

$$\begin{aligned} \alpha ^\star =\frac{\xi _{\max }^2}{\lambda _{\min }}>\frac{\xi _{\max }^2-\lambda _{\min }^2}{\lambda _{\min }}>\frac{\xi _{\max }^2-\lambda _{\min }^2}{2\lambda _{\min }}. \end{aligned}$$

Remark 2

The parameter \(\alpha ^\star \) in Corollary 1 minimizes only the upper bound \(\sigma (\alpha )\) of the spectral radius of the iteration matrix, not the spectral radius itself. Nevertheless, it provides useful guidance for choosing an effective parameter \(\alpha \) for the NPHSS iteration method. We call \(\alpha ^\star \) the theoretical quasi-optimal parameter of the NPHSS iteration method.
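In code, the quasi-optimal parameter and its bound follow directly from Corollary 1 and can be checked against the actual spectral radius of the iteration matrix (4); a sketch under the same assumptions as before (function names are ours):

```python
import numpy as np

def quasi_optimal_alpha(lam_min, xi_max):
    """alpha_star = xi_max^2 / lam_min and sigma(alpha_star) from (8)."""
    alpha_star = xi_max**2 / lam_min
    return alpha_star, xi_max / np.sqrt(lam_min**2 + xi_max**2)

def spectral_radius(H, S, P, alpha):
    """rho(M(P; alpha)) for the NPHSS iteration matrix (4)."""
    M = np.linalg.solve(alpha * P + H, alpha * P - S)
    return np.abs(np.linalg.eigvals(M)).max()
```

By Theorem 1, the value returned by spectral_radius never exceeds \(\sigma (\alpha )\).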

3 Comparison of the NPHSS and PHSS methods

We first recall a lemma that summarizes the convergence analysis of the PHSS method established in Bai et al. (2007a).

Lemma 1

Let the conditions of Theorem 1 be satisfied. Then, the spectral radius \(\rho (L(P; \alpha ))\) of the PHSS iteration matrix \(L(P; \alpha )=(\alpha P+S)^{-1}(\alpha P-H)(\alpha P+H)^{-1}(\alpha P-S)\) satisfies \(\rho (L(P; \alpha ))\le \gamma (\alpha )\), where

$$\begin{aligned} \gamma (\alpha )=\max _{\lambda _j\in sp(P^{-1}H)}\left| \frac{\alpha -\lambda _j}{\alpha +\lambda _j}\right| . \end{aligned}$$

Thus, it holds that

$$\begin{aligned} \rho (L(P; \alpha ))\le \gamma (\alpha )<1, \quad \forall \, \alpha >0. \end{aligned}$$

Moreover, the minimum point \(\alpha _\star \) and the minimum value \(\gamma (\alpha _\star )\) of the upper bound \(\gamma (\alpha )\) are, respectively,

$$\begin{aligned} \alpha _\star \equiv \mathrm{arg} \min _{\alpha }\left\{ \gamma (\alpha )\right\} = \mathrm{arg} \min _{\alpha }\left\{ \max _{\lambda _j\in sp(P^{-1}H)}\left| \frac{\alpha -\lambda _j}{\alpha +\lambda _j}\right| \right\} =\sqrt{\lambda _{\min }\lambda _{\max }} \end{aligned}$$

and

$$\begin{aligned} \gamma (\alpha _\star )=\frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}+\sqrt{\lambda _{\min }}}. \end{aligned}$$

Note that the PHSS method is a two-step iteration. For a fair comparison, the NPHSS iteration scheme (3) can be equivalently rewritten as the two-step iteration

$$\begin{aligned} \left\{ \begin{array}{l} (\alpha P+H)x^{(k+\frac{1}{2})}=(\alpha P-S)x^{(k)}+b,\\ (\alpha P+H)x^{(k+1)}=(\alpha P-S)x^{(k+\frac{1}{2})}+b, \end{array}\right. \end{aligned}$$
(9)

and the iteration matrix of (9) is \(M(P; \alpha )^2\). Since \(\rho (M(P; \alpha )^2)=\rho (M(P; \alpha ))^2\), we can compare the NPHSS and PHSS methods by analyzing the optimal upper bounds of the spectral radii \(\rho (M(P; \alpha )^2)\) and \(\rho (L(P; \alpha ))\). Using Corollary 1 and Lemma 1, we obtain the following comparison theorem.

Theorem 2

The optimal upper bounds of the spectral radii of the NPHSS and PHSS iteration matrices satisfy

$$\begin{aligned} \sigma ^2(\alpha ^\star )\le \gamma (\alpha _\star ) \end{aligned}$$

if and only if

$$\begin{aligned} \xi _{\max }\le \sqrt{\frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{2\sqrt{\lambda _{\min }}}}\cdot \lambda _{\min }. \end{aligned}$$
(10)

Proof

From Corollary 1 and Lemma 1, inequality \(\sigma ^2(\alpha ^\star )\le \gamma (\alpha _\star )\) becomes

$$\begin{aligned} \frac{\xi _{\max }^2}{\lambda _{\min }^2+\xi _{\max }^2}\le \frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}+\sqrt{\lambda _{\min }}}, \end{aligned}$$

which is equivalent to

$$\begin{aligned} \frac{\xi _{\max }^2}{\lambda _{\min }^2}\le \frac{\sqrt{\lambda _{\max }}-\sqrt{\lambda _{\min }}}{2\sqrt{\lambda _{\min }}}. \end{aligned}$$
(11)

Thus, the condition (10) follows by noticing that \(\lambda _{\max }\ge \lambda _{\min }>0\). \(\square \)

Remark 3

When inequality (10) holds, i.e., when the Hermitian part of the coefficient matrix \(A\) dominates its skew-Hermitian part, we tend to choose the NPHSS method rather than the PHSS method to solve the linear system (1), and vice versa.
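Condition (10) thus yields a computable rule of thumb for choosing between the two methods once \(\lambda _{\min }\), \(\lambda _{\max }\) and \(\xi _{\max }\) are available; a minimal sketch (the selector function is our own illustration):

```python
import numpy as np

def prefer_nphss(lam_min, lam_max, xi_max):
    """True if condition (10) holds, i.e. the optimal NPHSS bound
    sigma^2(alpha_star) does not exceed the optimal PHSS bound
    gamma(alpha_star)."""
    threshold = np.sqrt((np.sqrt(lam_max) - np.sqrt(lam_min))
                        / (2.0 * np.sqrt(lam_min))) * lam_min
    return xi_max <= threshold
```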

4 Inexact NPHSS iteration method

To further improve the computational efficiency of the NPHSS iteration, we develop an inexact NPHSS (INPHSS) iteration, which solves the sub-system in (3) approximately by the PCG method. The INPHSS iteration scheme for solving the linear system (1) is described in the following algorithm.

Algorithm 3

(The INPHSS iteration method) Given an initial guess \(x^{(0)}\in \mathbb {C}^{n}\), this algorithm computes an approximate solution of the linear system (1):

\(\ell = 0\);

while (not convergent)

\(r^{(\ell )}=b-Ax^{(\ell )}\);

approximately solve \((\alpha P+H)z^{(\ell )}=r^{(\ell )}\) by employing PCG method, such that the residual \(p^{(\ell )}=r^{(\ell )}-(\alpha P+H)z^{(\ell )}\) of the iteration satisfies \(\Vert p^{(\ell )}\Vert \le \eta _\ell \Vert r^{(\ell )}\Vert \);

\(x^{(\ell +1)}=x^{(\ell )}+z^{(\ell )}\);

\(\ell =\ell +1\);

end

Here, \(\{\eta _\ell \}\) is a sequence of prescribed tolerances used to control the accuracy of the inner iterations.
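A minimal sketch of Algorithm 3, assuming SciPy's conjugate gradient as the inner solver; its relative tolerance plays the role of \(\eta _\ell \) (the keyword is rtol in recent SciPy releases and tol in older ones), a constant tolerance eta mirrors the choice \(\eta _\ell =0.01\) used in Sect. 5, and a preconditioner for the inner solves could be supplied through the optional argument M:

```python
import numpy as np
from scipy.sparse.linalg import cg

def inphss(A, b, alpha, P, eta=0.01, tol=1e-6, maxit=1000):
    """A sketch of the INPHSS iteration: each correction z solves
    (alpha*P + H) z = r inexactly by CG, so that the inner residual
    satisfies ||r - (alpha*P + H) z|| <= eta * ||r||."""
    H = (A + A.conj().T) / 2
    B = alpha * P + H                 # Hermitian positive definite
    x = np.zeros(A.shape[0], dtype=np.result_type(A.dtype, b.dtype))
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z, _ = cg(B, r, rtol=eta)     # inner PCG step with tolerance eta
        x = x + z
    return x, k
```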

We remark that when \(P=I\), the INPHSS method reduces to the inexact NHSS (INHSS) method.

The convergence properties of the inexact HSS (IHSS) method have been carefully studied in Bai et al. (2003, 2008). Analogously, we can establish the following convergence result for the INPHSS iteration method.

Theorem 3

Let the conditions of Theorem 1 be satisfied. If \(\{x^{(\ell )}\}_{\ell =0}^{\infty }\subseteq \mathbb {C}^{n}\) is an iteration sequence generated by the INPHSS iteration method and if \(x^{\star }\in \mathbb {C}^{n}\) is the exact solution of the linear system (1), then it holds that

$$\begin{aligned} \Vert x^{(\ell +1)}-x^{\star }\Vert _2\le (\sigma (\alpha )+\mu \theta \eta _\ell )\Vert x^{(\ell )}-x^{\star }\Vert _2,\quad \ell =0,1,2,\ldots \end{aligned}$$
(12)

where the constants \(\mu \) and \(\theta \) are given by

$$\begin{aligned} \mu =\Vert (\alpha P+H)^{-1}\Vert _2,\quad \theta =\Vert A\Vert _2. \end{aligned}$$

In particular, if

$$\begin{aligned} \sigma (\alpha )+\mu \theta \eta _{\max }<1, \end{aligned}$$
(13)

then the iteration sequence \(\{x^{(\ell )}\}_{\ell =0}^{\infty }\subseteq \mathbb {C}^{n}\) converges to \(x^{\star }\in \mathbb {C}^{n}\), where \(\eta _{\max } = \max _{\ell }\{\eta _\ell \}\).

Proof

From Algorithm 3, we have

$$\begin{aligned} x^{(\ell +1)}= & {} x^{(\ell )}+(\alpha P+H)^{-1}(r^{(\ell )}-p^{(\ell )})\nonumber \\= & {} x^{(\ell )}+(\alpha P+H)^{-1}\left( b-Ax^{(\ell )}-p^{(\ell )}\right) \nonumber \\= & {} \left( I-(\alpha P+H)^{-1}A\right) x^{(\ell )}+(\alpha P+H)^{-1}b-(\alpha P+H)^{-1}p^{(\ell )} \nonumber \\= & {} (\alpha P+H)^{-1}(\alpha P-S)x^{(\ell )}+(\alpha P+H)^{-1}b-(\alpha P+H)^{-1}p^{(\ell )}. \end{aligned}$$
(14)

Because \(x^{\star }\in \mathbb {C}^{n}\) is the exact solution of the linear system (1), it must satisfy

$$\begin{aligned} x^{\star }=(\alpha P+H)^{-1}(\alpha P-S)x^{\star }+(\alpha P+H)^{-1}b. \end{aligned}$$
(15)

By subtracting (15) from (14), we have

$$\begin{aligned} x^{(\ell +1)}-x^{\star }=(\alpha P+H)^{-1}(\alpha P-S)(x^{(\ell )}-x^{\star })-(\alpha P+H)^{-1}p^{(\ell )}. \end{aligned}$$
(16)

Taking norms on both sides of the identity (16), we can obtain

$$\begin{aligned} \begin{array}{ll} &{}\Vert x^{(\ell +1)}-x^{\star }\Vert _2\\ &{}\le \Vert (\alpha P+H)^{-1}(\alpha P-S)\Vert _2\Vert x^{(\ell )}-x^{\star }\Vert _2+\Vert (\alpha P+H)^{-1}\Vert _2\Vert p^{(\ell )}\Vert _2\\ &{}\le \sigma (\alpha )\Vert x^{(\ell )}-x^{\star }\Vert _2+\mu \eta _\ell \Vert r^{(\ell )}\Vert _2. \end{array} \end{aligned}$$
(17)

Noticing that

$$\begin{aligned} \Vert r^{(\ell )}\Vert _2=\Vert b-Ax^{(\ell )}\Vert _2=\Vert A(x^{\star }-x^{(\ell )})\Vert _2\le \theta \Vert x^{(\ell )}-x^{\star }\Vert _2, \end{aligned}$$

by (17) we obtain (12), and the convergence of the iteration sequence under condition (13) follows immediately. \(\square \)

We remark that Theorem 3 indicates how the tolerances \(\{\eta _\ell \}\) should be chosen to guarantee convergence. In general, to guarantee the convergence of the INPHSS iteration, the tolerances \(\{\eta _\ell \}\) only need to satisfy condition (13).
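For a given \(\alpha \), condition (13) can also be evaluated numerically; a sketch that returns the largest admissible constant tolerance (for moderate \(n\); all names are ours):

```python
import numpy as np
from scipy.linalg import eigh, eig

def max_inner_tolerance(A, P, alpha):
    """Largest eta_max with sigma(alpha) + mu*theta*eta_max < 1, per (13);
    a nonpositive result means sigma(alpha) >= 1 for this alpha."""
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    lam_min = eigh(H, P, eigvals_only=True)[0]
    xi_max = np.abs(eig(np.linalg.solve(P, S), right=False).imag).max()
    sigma = np.sqrt(alpha**2 + xi_max**2) / (alpha + lam_min)
    mu = np.linalg.norm(np.linalg.inv(alpha * P + H), 2)  # ||(alpha P+H)^{-1}||_2
    theta = np.linalg.norm(A, 2)                          # ||A||_2
    return (1.0 - sigma) / (mu * theta)
```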

5 Numerical examples

In this section, we examine the efficiency of the NHSS and NPHSS iteration methods and their inexact variants for solving the linear system (1), by comparing their spectral radii, iteration numbers (denoted IT) and CPU times (in seconds, denoted CPU) with those of the HSS and PHSS methods.

We consider the three-dimensional convection–diffusion equation

$$\begin{aligned} -(u_{xx}+u_{yy}+u_{zz})+q(u_{x}+u_{y}+u_{z})=f(x,y,z) \end{aligned}$$
(18)

on the unit cube \(\Omega =[0, 1] \times [0, 1]\times [0, 1]\), with constant coefficient \(q\) and subject to Dirichlet-type boundary conditions. Discretizing this equation with the seven-point finite difference scheme, and assuming that the number \(m\) of grid points is the same in all three directions, we obtain a positive definite system (1) with the coefficient matrix

$$\begin{aligned} A=T_x\otimes I\otimes I+I\otimes T_y\otimes I+I\otimes I\otimes T_z, \end{aligned}$$
(19)

where \(\otimes \) denotes the Kronecker product, and \(T_x\), \(T_y\), and \(T_z\) are tridiagonal matrices given by

$$\begin{aligned} T_x = \text {tridiag}(t_2, t_1, t_3), \quad T_y = \text {tridiag}(t_2, 0, t_3), \quad \text {and} \quad T_z = \text {tridiag}(t_2, 0, t_3), \end{aligned}$$

with

$$\begin{aligned} t_1 = 6, \quad t_2 = -1 -r, \quad t_3 = -1 + r \end{aligned}$$

if the first-order derivatives are approximated by the centered difference scheme, and

$$\begin{aligned} t_1 = 6+6r, \quad t_2 = -1 -2r, \quad t_3 = -1 \end{aligned}$$

if the first-order derivatives are approximated by the upwind difference scheme. Here \(r = qh/2 \) is the mesh Reynolds number with \(h = 1/( m+1)\) being the step size. For details, we refer to Bai et al. (2003).
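The coefficient matrix (19) is straightforward to assemble with Kronecker products. A sketch using SciPy's sparse module, covering both difference schemes (the helper name is ours):

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion_matrix(m, q, scheme="centered"):
    """Seven-point discretization (19) of (18) on an m x m x m grid."""
    h = 1.0 / (m + 1)
    r = q * h / 2.0                       # mesh Reynolds number
    if scheme == "centered":
        t1, t2, t3 = 6.0, -1.0 - r, -1.0 + r
    else:                                  # upwind
        t1, t2, t3 = 6.0 + 6.0 * r, -1.0 - 2.0 * r, -1.0
    ones = np.ones(m)
    Tx = sp.diags([t2 * ones[1:], t1 * ones, t3 * ones[1:]], [-1, 0, 1])
    Ty = sp.diags([t2 * ones[1:], t3 * ones[1:]], [-1, 1])   # T_z = T_y
    I = sp.identity(m)
    return (sp.kron(sp.kron(Tx, I), I)
            + sp.kron(sp.kron(I, Ty), I)
            + sp.kron(sp.kron(I, I), Ty)).tocsr()
```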

In our implementation, the initial guess is chosen as \(x^{(0)}=\mathbf 0 \) and the iteration is terminated once the current iterate \(x^{(\ell )}\) satisfies \(\Vert b-Ax^{(\ell )}\Vert _2/\Vert b\Vert _2\le 10^{-6}\). In addition, we set \(n=m^3=10^3\), and the right-hand side vector \(b = A\mathbf 1 \), with \(\mathbf 1 \) being the vector with all entries equal to \(1\). For convenience, the preconditioner \(P\) used in the NPHSS and PHSS methods is chosen as \(P=\text {diag}(a_{11},a_{22},\ldots ,a_{nn})\), where \(a_{11},a_{22},\ldots ,a_{nn}\) are the diagonal entries of the matrix \(A\).
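Using the assembly sketch above, this setup could be reproduced as follows (convection_diffusion_matrix is the illustrative helper from the previous sketch; the value of \(q\) is an arbitrary example):

```python
import numpy as np
import scipy.sparse as sp

A = convection_diffusion_matrix(10, q=1.0, scheme="centered")  # n = m^3 = 10^3
b = A @ np.ones(A.shape[0])          # right-hand side b = A*1
x0 = np.zeros(A.shape[0])            # initial guess x^(0) = 0
P = sp.diags(A.diagonal())           # P = diag(a_11, ..., a_nn)
```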

In the IHSS, INHSS, IPHSS and INPHSS iteration methods, we set all the tolerances \(\eta _\ell =0.01\). We solve the linear sub-systems with coefficient matrix \(\alpha P+H\) iteratively by the CG method, and the linear sub-systems with coefficient matrix \(\alpha P+S\) iteratively by the GMRES method. We denote the average inner iteration numbers of CG and GMRES by IT\(_\mathrm{{CG}}\) and IT\(_\mathrm{{GMRES}}\), respectively.

The spectral radii of the iteration matrices of the HSS, NHSS, PHSS and NPHSS iteration methods for different convection coefficients \(q\) are compared in Figs. 1, 2, 3 and 4. From these figures, we find that when \(q\) is small (the Hermitian part of the coefficient matrix is dominant), the spectral radius of the iteration matrix of the NHSS method is much smaller than that of the HSS method, and the spectral radius of the iteration matrix of the NPHSS method is much smaller than that of the PHSS method. As \(q\) becomes large (the skew-Hermitian part is dominant), the HSS and PHSS methods perform increasingly better.

Fig. 1 Centered difference scheme: the spectral radii of the iteration matrices of the NHSS and HSS methods versus \(\alpha \)

Fig. 2 Upwind difference scheme: the spectral radii of the iteration matrices of the NHSS and HSS methods versus \(\alpha \)

Fig. 3 Centered difference scheme: the spectral radii of the iteration matrices of the NPHSS and PHSS methods versus \(\alpha \)

Fig. 4 Upwind difference scheme: the spectral radii of the iteration matrices of the NPHSS and PHSS methods versus \(\alpha \)

In Tables 1, 2, 3 and 4, we present iteration numbers and CPU times for the HSS, NHSS, PHSS and NPHSS iteration methods with different convection coefficients \(q\). From the results in Tables 1, 2, 3 and 4, we see that when \(q\) is small, the NHSS and NPHSS methods, whether run with the experimental optimal parameter \(\alpha _\text {exp}\) or with the theoretical quasi-optimal parameter \(\alpha ^\star \), perform much better than the HSS and PHSS methods in both iteration numbers and CPU time. As \(q\) becomes large, the superiority of the NHSS and NPHSS methods disappears.

In Tables 5 and 6, numerical results for the IHSS, INHSS, IPHSS and INPHSS iteration methods are listed; for convenience, we adopt the iteration parameters of Tables 1 and 2 rather than the experimental optimal parameters. From Tables 5 and 6, we can draw the same conclusions as from the tables above.

In addition, applying Theorem 2 to this numerical example, we know that when \(q < 1.606\) in the centered difference scheme or \(q < 1.725\) in the upwind difference scheme, NPHSS is faster than PHSS and NHSS is faster than HSS. The numerical results above also confirm this fact.

Therefore, we tend to use the NHSS or NPHSS iteration method to solve the linear system (1) when the Hermitian part \(H\) is dominant, and to employ HSS or PHSS if the skew-Hermitian part \(S\) is dominant.

Table 1 Numerical results for HSS and NHSS with the experimental optimal iteration parameters
Table 2 Numerical results for PHSS and NPHSS with the experimental optimal iteration parameters
Table 3 Numerical results for HSS and NHSS with the theoretical quasi-optimal iteration parameters
Table 4 Numerical results for PHSS and NPHSS with the theoretical quasi-optimal iteration parameters
Table 5 Numerical results for IHSS and INHSS
Table 6 Numerical results for IPHSS and INPHSS

6 Conclusions

In this paper, we have developed a non-alternating PHSS (NPHSS) method and its inexact variant for solving non-Hermitian positive definite linear systems. Theoretical analysis demonstrates that, for any initial guess and a wide range of the parameter \(\alpha \), the NPHSS method converges to the unique solution of the linear system (1). We also derive an upper bound \(\sigma (\alpha )\) for the spectral radius of the NPHSS iteration matrix and give the quasi-optimal parameter \(\alpha ^\star \) that minimizes this bound. In addition, both theoretical and numerical results show that when the Hermitian part of the coefficient matrix is dominant, the NPHSS method performs better than the HSS and PHSS methods. Hence, our work provides a better choice for solving the linear system (1) when the Hermitian part \(H\) of the coefficient matrix \(A\) is dominant.

Finally, we should mention that the choice of the optimal iteration parameter for the NPHSS method is an interesting but difficult topic. Some related discussions for the HSS method can be found in Bai et al. (2006) and Huang (2014). The investigation of this point is left for future study.

Comments: The results of the recent paper Li and Wu (2015) are the special case \(P = I\) of our results here.