1 Introduction

We consider the system of linear equations

$$\begin{aligned} A x \equiv \left( W+i T\right) x=b, \end{aligned}$$
(1)

where \(W=-W_{1}+W_{2}\), with \(W_{1}, W_{2}, T \in \mathbb {R}^{n \times n}\), and \(i= \sqrt{-1}\) denotes the imaginary unit. We assume that the matrices \(W_{1}\), \(W_{2}\) and T are symmetric positive definite (SPD). Complex symmetric linear systems of this type arise in many problems in scientific computing and engineering applications, such as structural dynamics [7, 8, 15] and the Helmholtz equation [9, 17, 33, 34].

Since the coefficient matrix of the system (1) is often large, iterative methods are preferred for solving it. Based on the Hermitian and skew-Hermitian splitting (HSS) method [6], Bai et al. proposed the modified HSS (MHSS) method in [7]. Applied to Eq. (1), this method can be written as follows.

The MHSS method Let \( x^{(0)}\in \mathbb {C} ^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{ x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)}\) via the following procedure:

$$\begin{aligned} \left\{ \begin{array}{l} \left( \alpha I+W\right) x^{\left( k+\frac{1}{2}\right) }=\left( \alpha I-i T\right) x^{(k)}+b,\\ \left( \alpha I+T\right) x^{(k+1)}= \left( \alpha I+i W\right) x^{\left( k+\frac{1}{2}\right) }-ib, \end{array} \right. \end{aligned}$$
(2)

where \(\alpha \) is a given positive constant and I is the identity matrix.

There are several other methods for solving the system (1). For example, Bai et al. constructed the preconditioned modified HSS (PMHSS) iteration method in [8]. Salkuyeh et al. applied the successive overrelaxation (SOR) method to this system [28] (see also [13, 14, 18, 19]). Hezari et al. presented the scale-splitting (SCSP) method in [19] (see also [20, 21, 27, 29] for some other versions of SCSP). The C-to-R method was presented by Axelsson and Kucherov in [2], and the transformed matrix iteration method by Axelsson and Salkuyeh in [5]. Xie and Li proposed the preconditioned Euler-extrapolated single-step HSS splitting method in [36]. A parameterized splitting iteration method for complex symmetric systems of linear equations was presented by Zhang and Zheng [37].

The authors of [7] proved that the MHSS method converges unconditionally to the unique solution of the system (1) when both of the matrices W and T are symmetric positive semidefinite (SPSD) and at least one of them is positive definite. However, when the matrices \(W_{1}\) and \(W_{2}\) are SPD, the matrix W may be symmetric indefinite. In this case, the matrices \(\alpha I+W\) and \(\alpha W+T\) (or \(\alpha T+W\)) may be indefinite or singular, and then the MHSS and PMHSS methods may fail to converge. The same difficulty affects all the aforementioned methods.

Li and Wu [35] proposed the modified positive/negative stable splitting (MPNS) method for the case that W is symmetric indefinite. A preconditioned version of MPNS was presented by Cui et al. in [12]. In this paper, we present a two-parameter iteration method, called the symmetric positive definite and negative stable splitting (SNSS) method, for solving the complex symmetric linear system (1) in the case that the matrix W is symmetric indefinite. We prove that the method is convergent under suitable conditions. Numerical results show that the SNSS method is more effective than the MPNS method.

Throughout the paper we use the following notation. \(\Vert .\Vert _2\) denotes the Euclidean norm. The spectrum and the spectral radius of a square matrix are denoted by \(\sigma (.)\) and \(\rho (.)\), respectively. The imaginary unit is denoted by i  (\(i=\sqrt{-1}\)). The real and imaginary parts of a complex number (or vector) z are denoted by \(\mathfrak {R}(z)\) and \(\mathfrak {I}(z)\), respectively. We use \(\Vert .\Vert _F\) for the Frobenius norm of a matrix. The Kronecker product is denoted by \(\otimes \).

This paper is organized as follows. In Sect. 2, we present a brief review of the MPNS method. In Sect. 3, we derive the SNSS method and discuss its convergence properties. In Sect. 4, we present an inexact version of the SNSS method. The SNSS preconditioner is introduced in Sect. 5, and a strategy for estimating its parameters is given in Sect. 6. Section 7 is devoted to some numerical experiments. Finally, we present some conclusions in Sect. 8.

2 The MPNS method

In this section, we briefly review the MPNS method. Li and Wu [35] split the coefficient matrix A in (1) as

$$\begin{aligned} \begin{array}{l} A=P+N, \end{array} \end{aligned}$$

where

$$\begin{aligned} \begin{array}{l} P=W_{2} \quad \mathrm{and} \quad N=-W_{1}+iT. \end{array} \end{aligned}$$

This is a positive/negative stable splitting (PNS) because P and N are positive stable and negative stable, respectively. Therefore, the complex symmetric linear system (1) can be written as

$$\begin{aligned} \begin{array}{l} \left( \alpha I+W_{2}\right) x=\left( \alpha I+W_{1}-i T\right) x+b, \end{array} \end{aligned}$$

Multiplying both sides of the system (1) by \(-i\) gives

$$\begin{aligned} \left( iW_{1}-iW_{2}+T\right) x=-ib. \end{aligned}$$

Hence, the system (1) can be expressed as the following fixed-point equation

$$\begin{aligned} \begin{array}{l} \left( \alpha I+i W_{1}+T\right) x=\left( \alpha I +iW_{2}\right) x-ib. \end{array} \end{aligned}$$

This yields the modified positive/negative stable splitting (MPNS) method, which can be written as follows.

The MPNS method Let \( x^{(0)}\in \mathbb {C} ^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)} \) via:

$$\begin{aligned} \left\{ \begin{array}{l} \left( \alpha I+W_{2}\right) x^{\left( k+\frac{1}{2}\right) }=\left( \alpha I+W_{1}-i T\right) x^{(k)}+b, \\ \left( \alpha I+i W_{1}+T\right) x^{(k+1)}=\left( \alpha I+iW_{2} \right) x^{\left( k+\frac{1}{2}\right) }-ib, \end{array}\right. \end{aligned}$$
(3)

where \( \alpha \) is a given positive constant.

To obtain the fixed-point form of the MPNS method, one can compute the vector \(x^{(k+\frac{1}{2})}\) from the first equation in (3) and substitute it into the second equation to obtain

$$\begin{aligned} x^{(k+1)}=P_{\alpha }x^{(k)}+Q_{\alpha }b, \end{aligned}$$

where

$$\begin{aligned} P_{\alpha }=\left( \alpha I+i W_{1}+T\right) ^{-1}\left( \alpha I+iW_{2} \right) \left( \alpha I+W_{2}\right) ^{-1}\left( \alpha I+W_{1}-i T\right) , \end{aligned}$$

and

$$\begin{aligned} Q_{\alpha }=(1-i) \alpha \left( \alpha I+iW_{1}+T\right) ^{-1}\left( \alpha I+W_{2}\right) ^{-1}. \end{aligned}$$

Here \(P_{\alpha }\) is the iteration matrix of the MPNS method. In addition, if we introduce

$$\begin{aligned} \begin{array}{l} E_{\alpha }=\frac{1+i}{2 \alpha }\left( \alpha I+W_{2}\right) \left( \alpha I+i W_{1}+T\right) \quad \mathrm{and} \quad F_{\alpha }=\frac{1+i}{2 \alpha }\left( \alpha I+iW_{2}\right) \left( \alpha I+W_{1}-i T\right) , \end{array} \end{aligned}$$

then

$$\begin{aligned} A=E_{\alpha }-F_{\alpha },\quad \mathrm{and} \quad P_{\alpha }=E_{\alpha } ^{-1} F_{\alpha }. \end{aligned}$$

As a result we have \(P_{\alpha }=I-E_{\alpha } ^{-1} A\). Therefore, the matrix \(E_{\alpha }\) can be used as a preconditioner for the system (1). If the matrix \(T-W_{1}\) is positive semidefinite, then the MPNS method is convergent for every \(\alpha >0\) (see [35, Theorem 3.1]).

3 The SNSS method

Multiplying both sides of Eq. (1) by \(-1\) and then adding \(\left( (\alpha +i) T+ W_{2}\right) x\) to both sides gives

$$\begin{aligned} \left( \alpha T+ W_{1}\right) x=\left( (\alpha +i) T+ W_{2}\right) x-b. \end{aligned}$$
(4)

On the other hand, adding \(i\beta T x\) to both sides of (1) and rearranging results in the equation

$$\begin{aligned} \left( i(\beta +1) T+W_{2}\right) x=\left( i \beta T+W_{1}\right) x+b. \end{aligned}$$
(5)

Now, using Eqs. (4) and (5), we state the SNSS iteration method as follows.

The SNSS method Let \( x^{(0)}\in \mathbb {C}^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{ x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)} \) via:

$$\begin{aligned} \left\{ \begin{array}{l} (\alpha T+W_{1})x^{\left( k+\frac{1}{2}\right) }=\left( (\alpha +i) T+ W_{2}\right) x^{(k)}-b, \\ \left( i(\beta +1) T+W_{2}\right) x^{(k+1)}=(i \beta T + W_{1} )x^{\left( k+\frac{1}{2}\right) }+b, \end{array}\right. \end{aligned}$$
(6)

where \( \alpha \) and \( \beta \) are given positive constants.

In each iteration of the SNSS method, two linear systems with the coefficient matrices \(S_1=\alpha T+ W_{1}\) and \(S_2=i(\beta +1) T+W_{2}\) must be solved. Obviously, the matrix \(S_1\) is SPD. Therefore, the first subsystem in the SNSS method can be solved exactly using the Cholesky factorization or inexactly using the conjugate gradient (CG) method. On the other hand, the matrix \(S_2\) is complex symmetric with SPD real and imaginary parts, so the second subsystem can be solved exactly using the LU factorization or inexactly using an iterative method. There are several Krylov subspace methods for solving the second subsystem (see [16, 31, 32]). We will shortly see that this subsystem can be solved efficiently using the generalized minimal residual (GMRES) method [24, 25] or the Chebyshev acceleration method [24] in conjunction with the preconditioned square block (PRESB) matrix as a preconditioner [4, 5].
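To fix ideas, the following Matlab fragment sketches one SNSS sweep with both subsystems solved directly by backslash. It only illustrates the scheme (6); the inexact solvers actually used in the experiments are described in Sect. 4, and the function name is ours.

    function x = snss_step(W1, W2, T, b, x, alpha, beta)
    % One sweep of the SNSS iteration (6), with direct subsystem solves.
    S1 = alpha*T + W1;                        % SPD
    S2 = 1i*(beta+1)*T + W2;                  % complex symmetric
    xh = S1 \ (((alpha+1i)*T + W2)*x - b);    % first half-step of (6)
    x  = S2 \ ((1i*beta*T + W1)*xh + b);      % second half-step of (6)
    end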

By eliminating the vector \(x^{(k+\frac{1}{2})}\) from Eq. (6) we get

$$\begin{aligned} x^{(k+1)}=G_{\alpha ,\beta } x^{(k)}+H_{\alpha ,\beta }b, \end{aligned}$$

where

$$\begin{aligned} G_{\alpha ,\beta }=\left( i (\beta +1) T+W_{2}\right) ^{-1} \left( i \beta T+ W_{1}\right) \left( \alpha T+ W_{1}\right) ^{-1} \left( (\alpha +i) T+ W_{2}\right) , \end{aligned}$$

and

$$\begin{aligned} H_{\alpha ,\beta }= \left( \alpha -i \beta \right) \left( i (\beta +1) T+W_{2}\right) ^{-1}T\left( \alpha T+W_{1}\right) ^{-1}, \end{aligned}$$

in which \(G_{\alpha ,\beta }\) is the iteration matrix of the SNSS method. The next theorem investigates the convergence of the SNSS iteration method.

Theorem 1

Let \(A=-W_{1}+W_{2}+i T\), where \(W_{1}\), \(W_{2}\) and T are real symmetric positive definite matrices. Suppose that \(\lambda _{\max }\) is the largest eigenvalue of \(\tilde{W}_{1}=T^{-\frac{1}{2}} W_{1} T^{-\frac{1}{2}}\) and \(\mu _{\min }\) is the smallest eigenvalue of \(\tilde{W}_{2}=T^{-\frac{1}{2}} W_{2} T^{-\frac{1}{2}}\). If \(\alpha \) is large enough and

$$\begin{aligned} \beta > \max \left\{ 0, \frac{1}{2} \left( \lambda _{\max } ^2 -\mu _{\min } ^2-1 \right) \right\} , \end{aligned}$$

then \(\rho (G_{\alpha ,\beta }) <1\), where \(G_{\alpha ,\beta }\) is the iteration matrix of the SNSS method. This means that the SNSS iteration method converges to the unique solution of complex symmetric linear system (1).

Proof

First of all, we see that the matrix \(G_{\alpha ,\beta }\) is similar to

$$\begin{aligned} \tilde{G}_{\alpha ,\beta }= \big (i (\beta +1) T+W_{2}\big ) G_{\alpha ,\beta } \big (i (\beta +1) T+W_{2}\big )^{-1}. \end{aligned}$$

Therefore,

$$\begin{aligned} \rho (G_{\alpha ,\beta })^{2}= & {} \rho (\tilde{G}_{\alpha ,\beta })^{2} \nonumber \\= & {} \rho \left( (i \beta T+ W_{1})( \alpha T+ W_{1})^{-1} \big ((\alpha +i) T+W_{2}\big ) \big (i (\beta +1) T+W_{2}\big )^{-1}\right) ^{2} \nonumber \\= & {} \rho \left( T^{\frac{1}{2}} (i \beta I+\tilde{W_{1}}) (\alpha I+\tilde{W_{1}})^{-1} \big ((\alpha +i) I+ \tilde{W_{2}}\big )\big (i(\beta +1) I+\tilde{W_{2}}\big )^{-1} T^{-\frac{1}{2}}\right) ^{2} \nonumber \\= & {} \rho \left( (i \beta I+\tilde{W_{1}}) (\alpha I+\tilde{W_{1}})^{-1} \big ((\alpha +i) I+ \tilde{W_{2}}\big )\big (i(\beta +1) I+\tilde{W_{2}}\big )^{-1} \right) ^{2} \nonumber \\\le & {} \left\| (i \beta I+\tilde{W_{1}}) (\alpha I+\tilde{W_{1}})^{-1} \big ((\alpha +i) I+ \tilde{W_{2}} \big )\big (i(\beta +1) I+\tilde{W_{2}}\big )^{-1} \right\| _{2} ^{2} \nonumber \\\le & {} \left\| (i \beta I+\tilde{W_{1}}) (\alpha I+\tilde{W_{1}})^{-1} \right\| _{2} ^{2} \left\| \big ((\alpha +i) I+ \tilde{W_{2}}\big )\big (i(\beta +1) I+\tilde{W_{2}}\big )^{-1} \right\| _{2} ^{2} \nonumber \\= & {} \max _{\lambda _{i} \in \sigma (\tilde{W_{1}})} \frac{\beta ^{2}+\lambda _{i}^{2}}{(\alpha +\lambda _{i})^{2}} \max _{\mu _{j} \in \sigma (\tilde{W_{2}})} \frac{(\alpha + \mu _{j})^{2}+1}{(\beta +1)^{2}+\mu _{j}^{2}}, \end{aligned}$$
(7)

where

$$\begin{aligned} \tilde{W_{1}}=T^{-\frac{1}{2}} W_{1} T^{-\frac{1}{2}}, \end{aligned}$$

and

$$\begin{aligned} \tilde{W_{2}}=T^{-\frac{1}{2}} W_{2} T^{-\frac{1}{2}}. \end{aligned}$$

Clearly, the matrices \(\tilde{W_{1}}\) and \(\tilde{W_{2}}\) are SPD. Therefore, for every \(\lambda _i\in \sigma (\tilde{W_{1}})\) and \(\mu _j\in \sigma (\tilde{W_{2}})\) we have \(\lambda _i>0\) and \(\mu _j>0\). Hence, there exist \(1 \le k \le n\) and \(1 \le r \le n\) such that

$$\begin{aligned} \max _{\lambda _{i} \in \sigma (\tilde{W_{1}})} \frac{\beta ^{2}+\lambda _{i}^{2}}{(\alpha +\lambda _{i})^{2}}= \frac{\beta ^{2}+\lambda _{k}^{2}}{(\alpha +\lambda _{k})^{2}}, \end{aligned}$$

and

$$\begin{aligned} \max _{\mu _{j} \in \sigma (\tilde{W_{2}})} \frac{(\alpha + \mu _{j})^{2}+1}{(\beta +1)^{2}+\mu _{j}^{2}}=\frac{(\alpha + \mu _{r})^{2}+1}{(\beta +1)^{2}+\mu _{r}^{2}}. \end{aligned}$$

Therefore, it follows from Eq. (7) that

$$\begin{aligned} \rho (G_{\alpha ,\beta })^{2}\le & {} \frac{\beta ^{2}+\lambda _{k}^{2}}{(\alpha +\lambda _{k})^{2}} \frac{(\alpha + \mu _{r})^{2}+1}{(\beta +1)^{2}+\mu _{r}^{2}} \nonumber \\= & {} \frac{\beta ^{2}+\lambda _{k}^{2}}{(\beta +1)^{2}+\mu _{r}^{2}} \frac{(\alpha + \mu _{r})^{2}+1}{(\alpha +\lambda _{k})^{2}} \nonumber \\= & {} f(\beta ) g(\alpha ), \end{aligned}$$
(8)

where

$$\begin{aligned} f (\beta )= \frac{\beta ^{2}+\lambda _{k}^{2}}{(\beta +1)^{2}+\mu _{r}^{2}}, \end{aligned}$$
(9)

and

$$\begin{aligned} 0 < g(\alpha ) = \frac{(\alpha + \mu _{r})^{2}+1}{(\alpha +\lambda _{k})^{2}}. \end{aligned}$$
(10)

It is easy to see that \(f (\beta )<1\) is equivalent to

$$\begin{aligned} \beta > \frac{1}{2} \left( \lambda _{k} ^2 -\mu _{r} ^2-1\right) . \end{aligned}$$

Hence to achieve \(f(\beta )<1\) it is enough to have

$$\begin{aligned} \beta > \max \left\{ 0, \frac{1}{2} \left( \lambda _{\max } ^2 -\mu _{\min } ^2-1 \right) \right\} . \end{aligned}$$
(11)

On the other hand,

$$\begin{aligned} \lim _{\alpha \rightarrow +\infty } \frac{(\alpha + \mu _{r})^{2}+1}{(\alpha +\lambda _{k})^{2}}=1. \end{aligned}$$

Thus, for every \(\epsilon >0\), there exists a \(P>0\) such that for all \( \alpha > P\),

$$\begin{aligned} \left| \frac{(\alpha + \mu _{r})^{2}+1}{(\alpha +\lambda _{k})^{2}}-1 \right| < \epsilon . \end{aligned}$$

So for \(\alpha > P\), we have

$$\begin{aligned} 0\le g(\alpha )=\frac{(\alpha + \mu _{r})^{2}+1}{(\alpha +\lambda _{k})^{2}} <1+\epsilon . \end{aligned}$$

Therefore, from (8) we deduce that

$$\begin{aligned} \rho (G_{\alpha ,\beta })^{2} \le f(\beta ) g(\alpha ) < f(\beta ) (1+\epsilon ). \end{aligned}$$

Hence, to attain convergence we need \(f(\beta ) (1+\epsilon ) <1\), which holds whenever \(\epsilon < \frac{1}{f(\beta )}-1\). Since \(f(\beta )<1\), such an \(\epsilon \) exists, and therefore \(\rho (G_{\alpha ,\beta })<1\) for all sufficiently large \(\alpha \). □
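The statement of Theorem 1 is easy to check numerically. The following Matlab script (an illustrative sanity check only; the problem size, random instance and parameter values are arbitrary choices of ours) builds three SPD matrices, picks \(\beta \) according to (11) and a large \(\alpha \), and verifies that \(\rho (G_{\alpha ,\beta })<1\).

    n = 50; rng(1);
    R = randn(n); W1 = R*R' + n*eye(n);    % SPD
    R = randn(n); W2 = R*R' + n*eye(n);    % SPD
    R = randn(n); T  = R*R' + n*eye(n);    % SPD
    Th = sqrtm(T);
    lmax  = max(eig(Th\W1/Th));            % largest eigenvalue of T^{-1/2} W1 T^{-1/2}
    mumin = min(eig(Th\W2/Th));            % smallest eigenvalue of T^{-1/2} W2 T^{-1/2}
    beta  = max(0, (lmax^2 - mumin^2 - 1)/2) + 0.1;   % satisfies (11)
    alpha = 1e3;                           % "large enough"
    G = (1i*(beta+1)*T + W2) \ ...
        ((1i*beta*T + W1) / (alpha*T + W1) * ((alpha+1i)*T + W2));
    rho = max(abs(eig(G)))                 % expected: rho < 1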

4 Inexact version of SNSS

To improve the computational efficiency of the SNSS iteration method, we can employ iterative methods for solving the two subsystems. This results in the inexact SNSS (ISNSS) iteration method.

Set \(\gamma ^{(k)}= x^{(k+\frac{1}{2})}-x^{(k)}\). So, we have

$$\begin{aligned} x^{(k+\frac{1}{2})}=x^{(k)}+\gamma ^{(k)}. \end{aligned}$$

Substituting \( x^{(k+\frac{1}{2})}\) into the first half-step of (6) gives

$$\begin{aligned} (\alpha T+ W_{1}) \gamma ^{(k)}= & {} \left( -W_{1}+W_{2}+i T\right) x^{(k)} -b \nonumber \\= & {} (W+i T) x^{(k)} -b \nonumber \\= & {} A x^{(k)} -b =: -r^{(k)}. \end{aligned}$$
(12)

Since the coefficient matrix of system (12) is SPD, the vector \(\gamma ^{(k)}\) can be computed using the CG method or its preconditioned version (PCG). Similarly, by setting \(\gamma ^{(k+\frac{1}{2})}= x^{(k+1)}-x^{(k+\frac{1}{2})}\) and substituting

$$\begin{aligned} x^{(k+1)}=\gamma ^{\left( k+\frac{1}{2}\right) }+x^{\left( k+\frac{1}{2}\right) }, \end{aligned}$$

into the second half-step of (6), we get

$$\begin{aligned} \left( i(\beta +1) T+W_{2}\right) \gamma ^{(k+\frac{1}{2})}= & {} (W_{1}-W_{2}-i T) x^{(k+\frac{1}{2})}+b \nonumber \\= & {} (-W-i T) x^{\left( k+\frac{1}{2}\right) } +b \nonumber \\= & {} b- A x^{\left( k+\frac{1}{2}\right) } =: r^{(k+\frac{1}{2})}. \end{aligned}$$
(13)

This system is equivalent to

$$\begin{aligned} \mathcal {A} u \equiv \begin{pmatrix} W_{2}&{} -(\beta +1) T \\ (\beta +1) T &{} W_{2} \end{pmatrix} \begin{pmatrix} \gamma ^{(k+\frac{1}{2})}_{1} \\ \gamma ^{(k+\frac{1}{2})}_{2} \end{pmatrix} = \begin{pmatrix} r^{(k+\frac{1}{2})}_{1} \\ r^{(k+\frac{1}{2})}_{2} \end{pmatrix} \equiv c, \end{aligned}$$
(14)

where \(\gamma ^{(k+\frac{1}{2})}_{1}=\mathfrak {R}(\gamma ^{(k+\frac{1}{2})})\), \(\gamma ^{(k+\frac{1}{2})}_{2}=\mathfrak {I}(\gamma ^{(k+\frac{1}{2})})\), \(r^{(k+\frac{1}{2})}_{1}=\mathfrak {R}(r^{(k+\frac{1}{2})})\), and \(r^{(k+\frac{1}{2})}_{2}=\mathfrak {I}(r^{(k+\frac{1}{2})})\). To solve the above system, we can use the following PRESB matrix as a preconditioner (see [1, 3, 4, 5])

$$\begin{aligned} \mathcal {P}_{PRESB}= \begin{pmatrix} W_{2} &{}-(\beta +1) T \\ (\beta +1)T &{} W_{2}+2(\beta +1) T \end{pmatrix}. \end{aligned}$$
(15)

In [4], Axelsson et al. showed that the eigenvalues of the preconditioned matrix \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are contained in the interval \([\frac{1}{2},1]\). So the Chebyshev acceleration method can be used for solving the system

$$\begin{aligned} \mathcal {P}_{PRESB}^{-1}\mathcal {A} u=\mathcal {P}_{PRESB}^{-1} c. \end{aligned}$$
(16)
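This spectral bound is easy to verify numerically. In the Matlab check below (illustrative only; the random matrices and the value of \(\beta \) are arbitrary choices of ours), the computed eigenvalues of \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) should fall inside \([\frac{1}{2},1]\).

    n = 40; rng(2);
    R = randn(n); W2 = R*R' + n*eye(n);    % SPD
    R = randn(n); T  = R*R' + n*eye(n);    % SPD
    beta = 0.7;  B = (beta+1)*T;
    CalA = [W2, -B;  B, W2];               % the block matrix in (14)
    P    = [W2, -B;  B, W2 + 2*B];         % PRESB preconditioner (15)
    ev = eig(P\CalA);
    [min(real(ev)), max(real(ev))]         % expected within [0.5, 1]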

When applying the preconditioner \(\mathcal {P}_{PRESB}\), in each iteration of the Chebyshev acceleration method a linear system of the form

$$\begin{aligned} \mathcal {P}_{PRESB} \begin{pmatrix} w_{1} \\ w_{2} \end{pmatrix} = \begin{pmatrix} r_{1} \\ r_{2} \end{pmatrix}, \end{aligned}$$

should be solved, which is written as

$$\begin{aligned} \left\{ \begin{array}{l} W_{2}w_{1}-(\beta +1) T w_{2}=r_{1}, \\ (\beta +1) T w_{1}+(W_{2}+2 (\beta +1) T) w_{2}=r_{2}. \end{array}\right. \end{aligned}$$
(17)

Adding the second equation in (17) to the first one gives

$$\begin{aligned} Qs=r_{1}+r_{2}, \end{aligned}$$
(18)

where \(Q=W_{2}+ (\beta +1) T\) and \(s=w_{1}+w_{2}\). Since the matrix Q is SPD, we can solve the system (18) exactly using the Cholesky factorization or inexactly using the PCG method. On the other hand, from the second equation in (17) together with \(w_{1}=s-w_{2}\) we obtain

$$\begin{aligned} Qw_{2}=r_{2}-(\beta +1) T s, \end{aligned}$$
(19)

which can be solved in the same way as Eq. (18). Note that we obtain the vectors s and \(w_{2}\) from Eqs. (18) and (19), respectively. Finally, we can recover the vector \(w_{1}\) via \(w_{1}=s-w_{2}\). Summarizing, one can use the following steps to compute the vector \((w_{1};w_{2})\).

  1. Solve \(Qs=r_{1}+r_{2}\) for s.

  2. Compute \(r=r_{2}-(\beta +1) T s\).

  3. Solve \(Q w_{2}=r\) for \(w_{2}\).

  4. Set \(w_{1}=s-w_{2}\).
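In Matlab, these four steps amount to two SPD solves with the same matrix Q. The sketch below (the function name is ours) applies \(\mathcal {P}_{PRESB}^{-1}\) to a stacked vector \((r_{1};r_{2})\), given a precomputed Cholesky factor of Q.

    function w = presb_apply(RQ, T, beta, r)
    % Solve P_PRESB * [w1; w2] = [r1; r2] via steps 1-4 above.
    % RQ is the Cholesky factor of Q = W2 + (beta+1)*T, i.e. RQ'*RQ = Q.
    n  = size(T,1);
    r1 = r(1:n);  r2 = r(n+1:end);
    s  = RQ \ (RQ' \ (r1 + r2));       % step 1: solve Q s = r1 + r2
    rr = r2 - (beta+1)*(T*s);          % step 2
    w2 = RQ \ (RQ' \ rr);              % step 3: solve Q w2 = rr
    w1 = s - w2;                       % step 4
    w  = [w1; w2];
    end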

As mentioned above, the eigenvalues of the preconditioned matrix \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are contained in the interval \([\frac{1}{2},1]\). Hence, lower and upper bounds for the eigenvalues of \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are \(\eta =\frac{1}{2}\) and \(\zeta =1\), respectively, and we can solve the inner systems by the Chebyshev acceleration method [24]. The Chebyshev acceleration algorithm for solving the preconditioned system (16) reads as follows.

Algorithm 1. The Chebyshev acceleration algorithm for \(\mathcal {P}_{PRESB}^{-1}\mathcal {A} x= \mathcal {P}_{PRESB}^{-1} b\).

  1. Choose an initial guess \(x^{(0)}\) and set \(\theta {:}{=} {(\eta +\zeta )}/{2}\) and \(\delta {:}{=} {(\zeta -\eta )}/{2}\);

  2. \(r^{(0)}{:}{=}b-\mathcal {A} x^{(0)}\), \(z^{(0)} {:}{=} \mathcal {P}_{PRESB}^{-1} r^{(0)} \) and \(\sigma _{1}{:}{=}{\theta }/{\delta }\);

  3. \(\rho ^{(0)} {:}{=} {1}/{\sigma _{1}}\) and \(d^{(0)}{:}{=} {z^{(0)}}/{\theta } \);

  4. For \( k = 0, 1, \ldots ,\) until convergence, Do

  5.       \( x^{(k+1)}= x^{(k)}+d^{(k)}\);

  6.       \(r^{(k+1)}=r^{(k)}-\mathcal {A} d^{(k)}\);

  7.       \(z^{(k+1)}=\mathcal {P}_{PRESB} ^{-1} r^{(k+1)}\);

  8.       \(\rho ^{(k+1)}=(2 \sigma _{1}-\rho ^{(k)})^{-1}\);

  9.       \(d^{(k+1)}=\rho ^{(k+1)} \rho ^{(k)} d^{(k)} +\frac{2 \rho ^{(k+1)}}{\delta } z^{(k+1)}\);

  10. EndDo
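Algorithm 1 translates directly into Matlab. In the sketch below, applyA and applyP are function handles that multiply a vector by \(\mathcal {A}\) and apply \(\mathcal {P}_{PRESB}^{-1}\) (for instance via presb_apply above); the function and variable names, stopping test and argument list are ours.

    function x = cheb_presb(applyA, applyP, b, x, tol, maxit)
    % Preconditioned Chebyshev acceleration (Algorithm 1) with the
    % eigenvalue bounds eta = 1/2 and zeta = 1.
    eta = 0.5;  zeta = 1;
    theta = (eta + zeta)/2;  delta = (zeta - eta)/2;
    r = b - applyA(x);
    z = applyP(r);
    sigma1 = theta/delta;
    rho = 1/sigma1;
    d = z/theta;
    for k = 1:maxit
        x = x + d;                                 % step 5
        r = r - applyA(d);                         % step 6
        if norm(r) <= tol*norm(b), break; end
        z = applyP(r);                             % step 7
        rho_new = 1/(2*sigma1 - rho);              % step 8
        d = rho_new*rho*d + (2*rho_new/delta)*z;   % step 9
        rho = rho_new;
    end
    end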

Now, the ISNSS iteration scheme is described in the following algorithm.

Algorithm 2. The ISNSS iteration method

  1. Choose an initial guess \(x^{(0)}\);

  2. For \(k=0,1,2,\ldots ,\) until convergence, Do

  3.       Compute \(r^{(k)}=b-A x^{(k)}\);

  4.       Solve system (12) approximately for \(\gamma ^{(k)}\) using PCG;

  5.       \( x^{(k+\frac{1}{2})}=x^{(k)}+\gamma ^{(k)}\);

  6.       Compute \(r^{(k+\frac{1}{2})}=b-A x^{(k+\frac{1}{2})}\);

  7.       Solve \(\mathcal {A} u= c\) approximately using Algorithm 1 with the preconditioner \(\mathcal {P}_{PRESB}\) to obtain \(\gamma ^{(k+\frac{1}{2})}\);

  8.       \( x^{(k+1)}=\gamma ^{(k+\frac{1}{2})}+x^{(k+\frac{1}{2})}\);

  9. EndDo
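A possible Matlab realization of one outer ISNSS sweep is sketched below. It assumes \(A=-W_{1}+W_{2}+iT\), an incomplete Cholesky factor LC of \(\alpha T+W_{1}\), a Cholesky factor RQ of \(Q=W_{2}+(\beta +1)T\), and the helpers presb_apply and cheb_presb sketched above; the names, inner tolerances and iteration limits are illustrative choices of ours.

    n = size(A,1);
    r = b - A*x;                                           % step 3
    [gamma,~] = pcg(alpha*T + W1, -r, 1e-2, 100, LC, LC'); % step 4: system (12)
    x = x + gamma;                                         % step 5
    rh = b - A*x;                                          % step 6
    c  = [real(rh); imag(rh)];                             % right-hand side of (14)
    applyA = @(v) [W2*v(1:n) - (beta+1)*(T*v(n+1:end)); ...
                   (beta+1)*(T*v(1:n)) + W2*v(n+1:end)];
    applyP = @(v) presb_apply(RQ, T, beta, v);
    u = cheb_presb(applyA, applyP, c, zeros(2*n,1), 1e-2, 100);  % step 7
    x = x + u(1:n) + 1i*u(n+1:end);                        % step 8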

5 SNSS preconditioner and its implementation issues

In the SNSS iteration method, if we introduce

$$\begin{aligned} B_{\alpha ,\beta }=\frac{1}{ (\alpha -i \beta )}\left( \alpha T+ W_{1}\right) T^{-1} \left( i (\beta +1) T+W_{2} \right) , \end{aligned}$$

and

$$\begin{aligned} C_{\alpha ,\beta }=\frac{1}{ (\alpha -i \beta )}(i \beta T+W_{1}) T^{-1} \left( (\alpha +i) T+W_{2}\right) , \end{aligned}$$
(20)

then

$$\begin{aligned} A=B_{\alpha ,\beta }-C_{\alpha ,\beta }, \quad \mathrm{and} \quad G_{\alpha ,\beta }=B_{\alpha ,\beta }^{-1} C_{\alpha ,\beta }. \end{aligned}$$
(21)

It follows from Eq. (21) that

$$\begin{aligned} B_{\alpha ,\beta }^{-1} A=I-G_{\alpha ,\beta }. \end{aligned}$$

From the latter equation we conclude that if the SNSS method is convergent, then the eigenvalues of the matrix \(B_{\alpha ,\beta }^{-1} A\) are clustered in a disk of radius \(\rho (G_{\alpha ,\beta })<1\) centered at (1, 0). In this case, a Krylov subspace method like GMRES is quite suitable for solving the preconditioned system (see [10])

$$\begin{aligned} B_{\alpha ,\beta }^{-1}A x=B_{\alpha ,\beta }^{-1} b. \end{aligned}$$
(22)

In each iteration of a Krylov subspace method like GMRES applied to the system (22), a vector of the form

$$\begin{aligned} w=B_{\alpha ,\beta }^{-1}z= { (\alpha -i \beta )} \left( i (\beta +1) T+W_{2} \right) ^{-1} T (\alpha T+ W_{1})^{-1}z, \end{aligned}$$

should be computed. To do so, we need to solve a system with the coefficient matrix \(\alpha T+W_{1}\), which can be done exactly using the Cholesky factorization or inexactly using the CG method. We also need to solve a linear system with the coefficient matrix \(i (\beta +1) T+W_{2}\), which can be solved using the GMRES or Chebyshev acceleration methods with the PRESB preconditioner presented in the previous section.
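For illustration, applying the SNSS preconditioner amounts to the following Matlab fragment (direct inner solves shown for clarity; the function name is ours).

    function w = snss_prec(W1, W2, T, alpha, beta, z)
    % Apply w = B_{alpha,beta}^{-1} z with direct inner solves.
    y = (alpha*T + W1) \ z;                                  % SPD solve
    w = (alpha - 1i*beta) * ((1i*(beta+1)*T + W2) \ (T*y));  % complex solve
    end

With exact inner solves, this routine can be passed to Matlab's built-in gmres as a preconditioner function handle, e.g. x = gmres(A, b, [], 1e-6, 200, @(z) snss_prec(W1,W2,T,alpha,beta,z)); when the inner solves are inexact, as in our experiments, the flexible variant FGMRES must be used instead.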

Remark 1

The subsystems in the MPNS method have the same structure as those in the SNSS iteration method. Hence, the subsystems in the MPNS method, as well as the MPNS preconditioner, can be treated similarly to the SNSS iteration.

6 Parameters estimation

In this section, following the idea of Ren and Cao [23], we present a strategy for estimating the iteration parameters \(\alpha \) and \(\beta \) of the SNSS preconditioner. We rewrite the matrix \(C_{\alpha ,\beta }\) in (20) as

$$\begin{aligned} C_{\alpha ,\beta }= & {} \frac{-i^2}{ (\alpha -i \beta )}\left( i \beta T+W_{1}\right) T^{-1} \left( (\alpha +i) T+W_{2}\right) \nonumber \\= & {} \frac{1}{ (i \beta -\alpha )}(- \beta T+i W_{1}) T^{-1} \left( \alpha i T+i W_{2}-T \right) , \end{aligned}$$
(23)

and define the function \(\psi \) as

$$\begin{aligned} \psi (\alpha ,\beta )= & {} \left( - \beta \Vert T\Vert _F+ \Vert i W_{1}\Vert _F\right) \Vert T^{-1}\Vert _F \left( \alpha \Vert i T\Vert _F+ \Vert i W_{2}\Vert _F-\Vert T\Vert _F \right) \\= & {} \left( - \beta \Vert T\Vert _F+ \Vert W_{1}\Vert _F\right) \Vert T^{-1}\Vert _F \left( \alpha \Vert T\Vert _F+ \Vert W_{2}\Vert _F-\Vert T\Vert _F \right) . \end{aligned}$$

To estimate the iteration parameters \(\alpha \) and \(\beta \), we set

$$\begin{aligned} -\beta \Vert T \Vert _{F}+\Vert W_{1}\Vert _{F} =0, \end{aligned}$$

and

$$\begin{aligned} \alpha \Vert T\Vert _{F}+\Vert W_{2} \Vert _{F} -\Vert T\Vert _{F}=0. \end{aligned}$$

Since the matrix T is SPD, we have \(\Vert T\Vert _{F}\ne 0\) and \(\Vert T^{-1}\Vert _{F} \ne 0\). We thus obtain the following estimation formulas for the iteration parameters \(\alpha \) and \(\beta \) of the SNSS preconditioner

$$\begin{aligned} \beta _{est}&=\frac{\Vert W_{1}\Vert _{F}}{\Vert T\Vert _{F}}, \end{aligned}$$
(24)
$$\begin{aligned} \alpha _{est}&=1-\frac{\Vert W_{2}\Vert _{F}}{\Vert T\Vert _{F}}. \end{aligned}$$
(25)

Note that if \(\alpha _{est}<0\), then we apply the above strategy for the function \(-\psi \). In this case, we get

$$\begin{aligned} \alpha _{est}=\frac{\Vert W_{2}\Vert _{F}}{\Vert T\Vert _{F}}-1. \end{aligned}$$

Summarizing the above discussion results in the following estimated parameters:

$$\begin{aligned} (\alpha _{est},\beta _{est})=\left( \left| 1-\frac{\Vert W_{2}\Vert _{F}}{\Vert T\Vert _{F}}\right| ,\frac{\Vert W_{1}\Vert _{F}}{\Vert T\Vert _{F}} \right) . \end{aligned}$$
(26)

In the next section we will see that this strategy often gives suitable results.
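In Matlab, the estimated parameters (26) are computed in a few lines; norm(.,'fro') is cheap even for sparse matrices.

    nT        = norm(T, 'fro');
    beta_est  = norm(W1, 'fro') / nT;         % Eq. (24)
    alpha_est = abs(1 - norm(W2, 'fro')/nT);  % Eq. (26)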

7 Numerical experiments

In this section, we consider the following two examples for our numerical tests.

Example 1

We consider the complex symmetric linear system of equations [7, 8, 35]

$$\begin{aligned} \left[ \left( -\omega ^2 M +K\right) +i \omega C\right] x=b, \end{aligned}$$
(27)

where M and K are the inertia and stiffness matrices, respectively. We take \(C=\omega C_{V}+C_{H}\) where \(C_{V}\) and \(C_{H}\) are the viscous and hysteretic damping matrices, respectively; and \(\omega \) is the driving circular frequency. In our numerical experiments, we set \(M=I\), \(C_{V}=5M\) and \(C_{H}=\mu K\) with a damping coefficient \(\mu =0.02\) and K the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square \( [0,1] \times [0,1] \) with the mesh size \(h={1}/{(m+1)} \). In this case, we have

$$\begin{aligned} K=\left( I \otimes V_{m}+V_{m}\otimes I \right) \in \mathbb {R}^{n \times n}, \end{aligned}$$

with \(V_{m} = h^{-2}\text {tridiag} (-1,2,-1)\in \mathbb {R}^{m \times m}\). Hence, the total number of variables is \(n=m^{2}\). In addition, the right-hand side vector b is taken as \(b=(1+i)Ae\), where \(e=(1,1,\ldots ,1)^T \in \mathbb {R}^{n}\). Note that in both the SNSS and MPNS methods, we consider \(W_{1}=\omega ^{2} M\), \(W_{2}=K\) and \(T=\omega C\).
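For reference, this test problem can be assembled in Matlab as follows (a sketch; the value of \(\omega \) is a placeholder for the values used in the tables).

    m = 128;  n = m^2;  h = 1/(m+1);  omega = pi;  % omega: sample value
    e  = ones(m,1);
    Vm = spdiags([-e 2*e -e], -1:1, m, m) / h^2;
    Im = speye(m);
    K  = kron(Im, Vm) + kron(Vm, Im);              % discrete negative Laplacian
    M  = speye(n);  Cv = 5*M;  Ch = 0.02*K;
    C  = omega*Cv + Ch;
    W1 = omega^2*M;  W2 = K;  T = omega*C;
    A  = -W1 + W2 + 1i*T;
    b  = (1+1i)*(A*ones(n,1));                     % exact solution (1+i)*ones(n,1)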

Example 2

We consider the complex Helmholtz equation in 2-D of the form [7, 8, 30, 35]

$$\begin{aligned} \left\{ \begin{array}{rl} -\varDelta u+\sigma _{1} u+i \sigma _{2} u &{}= f, \quad in \quad \Omega ,\\ u &{} = g, \quad on \quad \partial \Omega , \end{array}\right. \end{aligned}$$
(28)

where

$$\begin{aligned} \varDelta = \sum _{j=1}^{2} \frac{\partial ^2}{\partial x_{j}^2}, \end{aligned}$$

\(\sigma _{1} \in \mathbb {R}\), \(\sigma _{2} \ge 0\) and \(\Omega =[0,1]^2\), \(i=\sqrt{-1}\). The discretization of the equation above in 2-D, using the second order central difference scheme on an \((m+2) \times (m+2)\) grid of \(\Omega \) with mesh-size \(h=1/(m+1)\) leads to a system of linear equations with the coefficient matrix \(A=W+iT \in \mathbb {C}^{n \times n}\), such that \(n=m^2\), and

$$\begin{aligned} W=K+\sigma _{1} h^{2} (I_{m} \otimes I_{m}) \quad \mathrm{and} \quad T=\sigma _{2} h^2 (I_{m} \otimes I_{m}), \end{aligned}$$

with \(K=I_{m} \otimes V_{m} + V_{m} \otimes I_{m}\) and \(V_{m}=\text {tridiag}(-1,2,-1) \in \mathbb {R}^{m \times m}\). In addition, the right-hand side vector b is taken as \(b = (1 + i)Ae\), where \(e = (1,1,\ldots ,1)^T \in \mathbb {R}^{n}\). In our numerical experiments, we set \((\sigma _{1},\sigma _{2})=(-100,100)\) and \((\sigma _{1},\sigma _{2})=(-1000,10)\). Note that in both the SNSS and MPNS methods, we consider \(W_{1}=-\sigma _{1} h^{2} (I_{m} \otimes I_{m})\), \(W_{2}=K\) and \(T=\sigma _{2} h^{2} (I_{m} \otimes I_{m})\).

We present our numerical results in two parts. In the first part, we compare the numerical results of the ISNSS method with those of the inexact version of the MPNS (IMPNS) method. In both the ISNSS and IMPNS methods, the first subsystem is solved using the preconditioned CG (PCG) method with the incomplete Cholesky factorization (drop tolerance \(10^{-2}\)) as a preconditioner. In Matlab notation, the following command can be used to compute the incomplete Cholesky factor

$$\begin{aligned} \texttt {L=ichol(Z,struct('type','ict','droptol',1e-2))}; \end{aligned}$$

where Z is a given SPD matrix. The second subsystem is solved by the Chebyshev acceleration method in conjunction with the PRESB preconditioner. Note that in all of the methods we solve the subsystems (18) and (19) exactly using the sparse Cholesky factorization combined with the symmetric approximate minimum degree permutation; to do so, the symamd.m command of Matlab is applied. We use a zero vector as the initial guess. The outer iteration is stopped as soon as the residual 2-norm is reduced by a factor of \(10^6\), and the inner iteration as soon as it is reduced by a factor of \(10^2\). The maximum number of inner and outer iterations is set to 10000. In the tables, a dagger (\(\dagger \)) indicates that the method did not converge within 10000 iterations. All of the numerical experiments are performed in Matlab R2018b on a laptop with a 2.50 GHz central processing unit (Intel(R) Core(TM) i5-7200U), 6 GB RAM and Windows 10.

Numerical results are presented for Examples 1 and 2 in Tables 1, 2, 3 and 4. In these tables we report the number of outer iterations (“Iters”), elapsed CPU time in seconds (“CPU”) and the following values

$$\begin{aligned} R_k=\frac{\Vert b -A x^{(k)}\Vert _2}{\Vert b\Vert _2}, \quad E_k=\frac{\Vert x^*- x^{(k)}\Vert _2}{\Vert x^{*} \Vert _2}, \end{aligned}$$

to demonstrate the accuracy of the computed solutions, where \( x^{(k)}\) and \( x^{*}\) are the computed solution at iteration k and the exact solution, respectively. We also present the minimum eigenvalues of the matrices W and T, to show whether the matrix is indefinite. For both the ISNSS and IMPNS methods, the optimal values of the parameters (those minimizing the number of iterations) are computed experimentally.

As the numerical results show, the ISNSS method outperforms the IMPNS method in terms of both CPU time and iteration counts. We also observe that the number of ISNSS iterations is h-independent; however, as the parameter \(\omega \) increases, the number of iterations grows moderately.

Table 1 Numerical results of IMPNS and ISNSS for Example 1 and \(m=128~(n=16384)\)
Table 2 Numerical results of IMPNS and ISNSS for Example 1 and \(m=256~(n=65536)\)
Table 3 Numerical results of IMPNS, ISNSS for Example 2 with \(\sigma _{1}=-100\) and \(\sigma _{2}=100\)
Table 4 Numerical results of IMPNS, ISNSS for Example 2 with \(\sigma _{1}=-1000\) and \(\sigma _{2}=10\)

For the second set of our numerical experiments, we present the numerical results for solving Examples 1 and 2 by means of the flexible version of GMRES (FGMRES) [24, 26] incorporated with the SNSS and MPNS preconditioners (hereafter denoted by FGMRES-SNSS and FGMRES-MPNS, respectively). We use a zero vector as the initial guess, and the FGMRES iteration is stopped as soon as the residual 2-norm is reduced by a factor of \(10^6\). The maximum number of iterations is set to 10,000. In the implementation of the preconditioners, the subsystems are solved as in the first set of numerical experiments. Numerical results are presented in Tables 5, 6, 7 and 8. To demonstrate the efficiency of the preconditioners, we also present the numerical results of FGMRES without preconditioning. We use the optimal value of the parameter in the MPNS preconditioner, whereas the estimated parameters given in Eq. (26) are used in the SNSS preconditioner.

As can be seen, both preconditioners greatly accelerate the convergence of the FGMRES method. Numerical results show that the SNSS preconditioner is more efficient than the MPNS preconditioner in terms of both the number of iterations and the elapsed CPU time. We also see that as the mesh is refined, the number of FGMRES-SNSS iterations remains almost constant; however, the number of iterations increases moderately as the value of the parameter \(\omega \) grows.

Table 5 Numerical results of FGMRES, and FGMRES with the SNSS and MPNS preconditioners for Example 1 and \(m=128~(n=16384)\)
Table 6 Numerical results of FGMRES, and FGMRES with the SNSS and MPNS preconditioners for Example 1 and \(m=256~(n=65536)\)
Table 7 Numerical results of FGMRES, and FGMRES with the SNSS and MPNS preconditioners for Example 2 with \(\sigma _{1}=-100\) and \(\sigma _{2}=100\)
Table 8 Numerical results of FGMRES, and FGMRES with the SNSS and MPNS preconditioners for Example 2 with \(\sigma _{1}=-1000\) and \(\sigma _{2}=10\)

8 Conclusion

In this paper, we have presented the symmetric positive definite and negative stable splitting (SNSS) method for solving complex symmetric linear systems with symmetric indefinite real part. Theoretical analysis shows that the SNSS iteration method is convergent under suitable conditions. Numerical results demonstrate that the ISNSS and FGMRES-SNSS methods are efficient when the real part of the coefficient matrix of the system is symmetric indefinite.