Abstract
We present a new iteration method, called the symmetric positive definite and negative stable splitting (SNSS) method, for solving complex symmetric indefinite linear systems. Theoretical analysis shows that the proposed method is convergent under suitable conditions. In each iteration of the method two subsystems must be solved: the first can be solved inexactly using the conjugate gradient method, and the second by the Chebyshev acceleration method in conjunction with the well-known PRESB preconditioner. Numerical experiments are reported to indicate the efficiency of the SNSS method.
1 Introduction
We consider the system of linear equations
where \(W=-W_{1}+W_{2}\), \(W_{1}\), \(W_{2}\) and T \(\in \mathbb {R}^{n \times n}\), and \(i= \sqrt{-1}\) denotes the imaginary unit. We assume that the matrices \(W_{1}\), \(W_{2}\) and T are symmetric positive definite (SPD). This type of complex symmetric linear system arises in many problems in scientific computing and engineering applications, such as structural dynamics [7, 8, 15] and the Helmholtz equation [9, 17, 33, 34].
Since the coefficient matrix of the system (1) is often of large size, iterative methods are preferred for solving it. Based on the Hermitian and skew-Hermitian splitting (HSS) method [6], Bai et al. proposed the modified HSS (MHSS) method in [7]. For solving Eq. (1), this method can be written as follows.
The MHSS method Let \( x^{(0)}\in \mathbb {C} ^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{ x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)}\) via the following procedure:
where \(\alpha \) is a given positive constant and I is the identity matrix.
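For readers who want to experiment, the MHSS scheme of [7] can be sketched as follows. This is a minimal dense-matrix sketch: in practice the two shifted SPD systems are solved inexactly, and the function name `mhss` is ours, not the paper's.

```python
import numpy as np

def mhss(W, T, b, alpha, tol=1e-8, maxit=500):
    """Sketch of the MHSS iteration of [7] for (W + iT)x = b:
    alternate solves with the shifted real matrices alpha*I + W and alpha*I + T."""
    n = b.shape[0]
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # first half-step: (alpha*I + W) x^{k+1/2} = (alpha*I - i*T) x^k + b
        xh = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        # second half-step: (alpha*I + T) x^{k+1} = (alpha*I + i*W) x^{k+1/2} - i*b
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ xh - 1j * b)
        if np.linalg.norm(b - (W + 1j * T) @ x) <= tol * np.linalg.norm(b):
            break
    return x
```

When W and T are both SPD this iteration converges for every \(\alpha>0\); the indefinite case discussed below is precisely where it can break down.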
There are several other methods for solving the system (1). For example in [8] Bai et al. constructed the preconditioned modified HSS (PMHSS) iteration method. Salkuyeh et al. applied the successive overrelaxation (SOR) method for solving this system [28] (see also [13, 14, 18, 19]). Hezari et al. in [19] presented the scale-splitting (SCSP) method (see also [20, 21, 27, 29] for some other versions of SCSP). The C-to-R method was presented by Axelsson and Kucherov in [2]. The transformed matrix iteration method was presented by Axelsson and Salkuyeh in [5]. Xie and Li proposed the preconditioned Euler-extrapolated single-step HSS splitting method in [36]. A parameterized splitting iteration method for complex symmetric system of linear equations was presented by Zhang and Zheng [37].
The authors of [7] proved that the MHSS method converges unconditionally to the unique solution of the system (1) when both W and T are symmetric positive semidefinite (SPSD) with at least one of them positive definite. However, when the matrices \(W_{1}\) and \(W_{2}\) are symmetric positive definite (SPD), the matrix W may be symmetric indefinite. In this case, the matrices \(\alpha I+W\) and \(\alpha W+T\) (or \(\alpha T+W\)) may be indefinite or singular, and then the MHSS and PMHSS methods may fail to converge. A similar difficulty arises for all the aforementioned methods.
Li and Wu in [35] proposed the modified positive/negative stable splitting (MPNS) method when W is symmetric indefinite. A preconditioned version of MPNS was presented by Cui et al. in [12]. In this paper, we present a two-parameter iteration method which is called the symmetric positive definite and negative stable splitting (SNSS) method for solving complex symmetric linear system (1) in the case that matrix W is symmetric indefinite. We prove that the method is convergent under some conditions. Numerical results show that the SNSS method is more effective than the MPNS method.
Throughout the paper we use the following notation. \(\Vert .\Vert _2\) denotes the Euclidean norm. The spectrum and spectral radius of a square matrix are denoted by \(\sigma (.)\) and \(\rho (.)\), respectively. The imaginary unit is denoted by i (\(i=\sqrt{-1}\)). The real and imaginary parts of a complex number (or vector) z are denoted by \(\mathfrak {R}(z)\) and \(\mathfrak {I}(z)\), respectively. We use \(\Vert .\Vert _F\) for the Frobenius norm of a matrix. The Kronecker product is denoted by \(\otimes \).
This paper is organized as follows. In Sect. 2, we present a brief review of the MPNS method. In Sect. 3, we design the SNSS method and discuss its convergence properties. In Sect. 4, we present an inexact version of the SNSS method. The SNSS preconditioner is introduced in Sect. 5, and a strategy for estimating the iteration parameters is given in Sect. 6. Section 7 is devoted to some numerical experiments. Finally, we present some conclusions in Sect. 8.
2 The MPNS method
In this section, we briefly review the MPNS method. Li and Wu [35] split the coefficient matrix A in (1) as
where
This is a positive/negative stable splitting (PNS) because P and N are positive stable and negative stable, respectively. Therefore, the complex symmetric linear system (1) can be written as
Multiplying both sides of the system (1) by \(-i\) gives
Hence, the system (1) can be expressed as the following fixed-point equation
This yields the modified positive/negative stable splitting (MPNS) method, which can be written as follows.
The MPNS method Let \( x^{(0)}\in \mathbb {C} ^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)} \) via:
where \( \alpha \) is a given positive constant.
To obtain the fixed-point form of the MPNS method, one can compute the vector \(x^{(k+\frac{1}{2})}\) from the first equation in (3) and substitute it into the second equation to obtain
where
and
Here \(P_{\alpha }\) is the iteration matrix of the MPNS method. In addition, if we introduce
then
As a result, we have \(P_{\alpha }=I-E_{\alpha } ^{-1} A\). Therefore, the matrix \(E_{\alpha }\) can be used as a preconditioner for the system (1). If the matrix \(T-W_{1}\) is positive semidefinite, then the MPNS method is convergent for every \(\alpha >0\) (see [35, Theorem 3.1]).
3 The SNSS method
Multiplying both sides of Eq. (1) by \(-1\) and then adding \(\alpha T x\) to both sides gives
On the other hand, adding \(i\beta T x\) to both sides of (1) results in the equation
Now, using Eqs. (4) and (5), we state the SNSS iteration method as follows.
The SNSS method Let \( x^{(0)}\in \mathbb {C}^{n} \) be an initial guess. For \(k=0,1,2,\ldots \) until the sequence of iterates \(\{ x^{(k)}\}_{k = 0}^\infty \) converges, compute the next iterate \( x^{(k+1)} \) via:
where \( \alpha \) and \( \beta \) are given positive constants.
In each iteration of the SNSS method, two linear systems with the coefficient matrices \(S_1=\alpha T+ W_{1}\) and \(S_2=i(\beta +1) T+W_{2}\) must be solved. Obviously, the matrix \(S_1\) is SPD. Therefore, the first subsystem in the SNSS method can be solved exactly using the Cholesky factorization or inexactly using the conjugate gradient (CG) method. On the other hand, the matrix \(S_2\) is complex symmetric with SPD real and imaginary parts, so the second subsystem can be solved exactly using the LU factorization or inexactly using an iterative method. There are several Krylov subspace methods for solving the second subsystem (see [16, 31, 32]). We will shortly see that this subsystem can be solved efficiently using the generalized minimal residual (GMRES) method [24, 25] or the Chebyshev acceleration method [24] in conjunction with the preconditioned square block (PRESB) matrix as a preconditioner [4, 5].
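To make one SNSS sweep concrete, the two half-steps can be sketched as below. The half-step equations written in the docstring are our reading of Eqs. (4)–(6); dense direct solves stand in for the CG and PRESB-Chebyshev inner solvers, and the function name is ours.

```python
import numpy as np

def snss_step(W1, W2, T, b, x, alpha, beta):
    """One SNSS iteration for A x = b with A = -W1 + W2 + i*T.
    Half-step 1: (alpha*T + W1) x^{k+1/2} = (W2 + (alpha + i)*T) x^k - b
    Half-step 2: (i*(beta+1)*T + W2) x^{k+1} = (W1 + i*beta*T) x^{k+1/2} + b
    (our reading of Eqs. (4)-(6); direct solves are used for simplicity)."""
    S1 = alpha * T + W1                  # SPD: CG applies
    S2 = 1j * (beta + 1) * T + W2        # complex symmetric: PRESB-Chebyshev applies
    x_half = np.linalg.solve(S1, (W2 + (alpha + 1j) * T) @ x - b)
    return np.linalg.solve(S2, (W1 + 1j * beta * T) @ x_half + b)
```

One can check algebraically that the exact solution of (1) is a fixed point of this sweep, which is the consistency requirement behind the elimination performed below.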
By eliminating the vector \(x^{(k+\frac{1}{2})}\) from Eq. (6) we get
where
and
in which \(G_{\alpha ,\beta }\) is the iteration matrix of the SNSS method. The next theorem investigates the convergence of the SNSS iteration method.
Theorem 1
Let \(A=-W_{1}+W_{2}+i T\), where \(W_{1}\), \(W_{2}\) and T are real symmetric positive definite matrices. Suppose that \(\lambda _{\max }\) is the largest eigenvalue of \(\tilde{W}_{1}=T^{-\frac{1}{2}} W_{1} T^{-\frac{1}{2}}\) and \(\mu _{\min }\) is the smallest eigenvalue of \(\tilde{W}_{2}=T^{-\frac{1}{2}} W_{2} T^{-\frac{1}{2}}\). If \(\alpha \) is large enough and
then \(\rho (G_{\alpha ,\beta }) <1\), where \(G_{\alpha ,\beta }\) is the iteration matrix of the SNSS method. This means that the SNSS iteration method converges to the unique solution of complex symmetric linear system (1).
Proof
First of all, we see that the matrix \(G_{\alpha ,\beta }\) is similar to
Therefore,
where
and
Clearly, the matrices \(\tilde{W}_{1}\) and \(\tilde{W}_{2}\) are SPD. Therefore, for every \(\lambda _i\in \sigma (\tilde{W}_{1})\) and \(\mu _i\in \sigma (\tilde{W}_{2})\) we have \(\lambda _i>0\) and \(\mu _i>0\). There exist \(1 \le k \le n\) and \(1 \le r \le n\) such that
and
Therefore, it follows from Eq. (7) that
where
and
It is easy to see that \(f (\beta )<1\) is equivalent to
Hence to achieve \(f(\beta )<1\) it is enough to have
On the other hand,
Thus, for every \(\epsilon >0\), there exists a \(P>0\) such that for all \( \alpha > P\),
So for \(\alpha > P\), we have
Therefore, from (8) we deduce that
Hence to attain the convergence we need to have \(f(\beta ) (1+\epsilon ) <1\) and this can be achieved if the parameter \(\epsilon \) satisfies \(\epsilon < \frac{1}{f(\beta )}-1\). □
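As a quick numerical sanity check of Theorem 1, the iteration matrix can be formed explicitly for a tiny example and its spectral radius verified to lie below one. The closed form used below, \(G_{\alpha ,\beta }=S_2^{-1}(W_1+i\beta T)S_1^{-1}(W_2+(\alpha +i)T)\), is our reading of the elimination that leads to Eq. (7):

```python
import numpy as np

def snss_iteration_matrix(W1, W2, T, alpha, beta):
    """Form G_{alpha,beta} densely (only sensible for tiny test problems)."""
    S1 = alpha * T + W1
    S2 = 1j * (beta + 1) * T + W2
    M = np.linalg.solve(S1, W2 + (alpha + 1j) * T)
    return np.linalg.solve(S2, (W1 + 1j * beta * T) @ M)

# diagonal example: all matrices commute, so the spectral radius
# is the largest modulus of the entrywise scalar ratios
W1 = np.diag([1.0, 2.0])
W2 = np.diag([2.0, 3.0])
T = np.diag([1.0, 1.0])
G = snss_iteration_matrix(W1, W2, T, alpha=10.0, beta=1.0)
rho = max(abs(np.linalg.eigvals(G)))   # here rho is about 0.67 < 1
```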
4 Inexact version of SNSS
To improve the computational efficiency of the SNSS iteration method, we can employ iteration methods for solving the two subsystems. This results in the inexact version of the SNSS (ISNSS) iteration method.
Set \(\gamma ^{(k)}= x^{(k+\frac{1}{2})}-x^{(k)}\). So, we have
Substituting \( x^{(k+\frac{1}{2})}\) into the first half-step of (6) gives
Since the coefficient matrix of system (12) is SPD, we can compute \(\gamma ^{(k)}\) using the CG method or its preconditioned version. Similarly, by setting \(\gamma ^{(k+\frac{1}{2})}= x^{(k+1)}-x^{(k+\frac{1}{2})}\) and substituting
in the second half-step in (6), we get
This system is equivalent to
where \(\gamma ^{(k+\frac{1}{2})}_{1}=\mathfrak {R}(\gamma ^{(k+\frac{1}{2})})\), \(\gamma ^{(k+\frac{1}{2})}_{2}=\mathfrak {I}(\gamma ^{(k+\frac{1}{2})})\), \(r^{(k+\frac{1}{2})}_{1}=\mathfrak {R}(r^{(k+\frac{1}{2})})\), and \(r^{(k+\frac{1}{2})}_{2}=\mathfrak {I}(r^{(k+\frac{1}{2})})\). To solve the above system, we can use the following PRESB matrix as a preconditioner (see [1, 3,4,5])
In [4], Axelsson et al. showed that the eigenvalues of the preconditioned matrix \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are contained in the interval \([\frac{1}{2},1]\). So the Chebyshev acceleration method can be used for solving the system
In the implementation of the preconditioner \(\mathcal {P}\), in each iteration of the Chebyshev acceleration method a linear system of the form
should be solved, which is written as
Adding the second equation in (17) to the first one gives
where \(Q=W_{2}+ (\beta +1) T\) and \(s=w_{1}+w_{2}\). Since the matrix Q is SPD, we can solve the system (18) exactly using the Cholesky factorization or inexactly using the PCG method. On the other hand, from the second equation in (17) we obtain
and this system can be solved similarly to Eq. (18). Note that we obtain the vectors s and \(w_{2}\) from Eqs. (18) and (19), respectively. Finally, we can calculate the vector \(w_{1}\) via \(w_{1}=s-w_{2}\). According to the above discussion, one can use the following steps for computing the vector \((w_{1};w_{2})\).
1. Solve \(Qs=r_{1}+r_{2}\) for s.
2. Compute \(r=r_{2}-(\beta +1) T s\).
3. Solve \(Q w_{2}=r\) for \(w_{2}\).
4. Set \(w_{1}=s-w_{2}\).
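The four steps above translate directly into code. In this sketch Q is formed once, dense solves stand in for the Cholesky/PCG inner solves, and the function name is ours:

```python
import numpy as np

def apply_presb(W2, T, beta, r1, r2):
    """Apply the PRESB preconditioner via the four steps in the text."""
    Q = W2 + (beta + 1) * T                  # SPD
    s = np.linalg.solve(Q, r1 + r2)          # step 1: solve Q s = r1 + r2
    r = r2 - (beta + 1) * (T @ s)            # step 2
    w2 = np.linalg.solve(Q, r)               # step 3: solve Q w2 = r
    w1 = s - w2                              # step 4
    return w1, w2
```

Since only two solves with the single SPD matrix Q are needed, the cost per application is that of two (P)CG runs or two triangular back-substitutions with a precomputed Cholesky factor.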
As mentioned above, the eigenvalues of the preconditioned matrix \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are contained in the interval \([\frac{1}{2},1]\), so the lower and upper bounds for the eigenvalues of \(\mathcal {P}_{PRESB}^{-1}\mathcal {A}\) are \(\eta =\frac{1}{2}\) and \(\zeta =1\), respectively. Therefore we can solve the inner systems by the Chebyshev acceleration method [24]. The Chebyshev acceleration algorithm for solving the preconditioned system (16) is as follows.
Algorithm 1. The Chebyshev acceleration algorithm for \(\mathcal {P}_{PRESB}^{-1}\mathcal {A} x= \mathcal {P}_{PRESB}^{-1} b\).
1. Choose an initial guess \(x^{(0)}\) and set \(\theta {:}{=} {(\eta +\zeta )}/{2}\) and \(\delta {:}{=} {(\zeta -\eta )}/{2}\);
2. \(r^{(0)}{:}{=}b-\mathcal {A} x^{(0)}\), \(z^{(0)} {:}{=} \mathcal {P}_{PRESB}^{-1} r^{(0)} \) and \(\sigma _{1}{:}{=}{\theta }/{\delta }\);
3. \(\rho ^{(0)} {:}{=} {1}/{\sigma _{1}}\) and \(d^{(0)}{:}{=} {z^{(0)}}/{\theta } \);
4. For \( k = 0, 1, \ldots ,\) until convergence, Do
5. \( x^{(k+1)}= x^{(k)}+d^{(k)}\);
6. \(r^{(k+1)}=r^{(k)}-\mathcal {A} d^{(k)}\);
7. \(z^{(k+1)}=\mathcal {P}_{PRESB} ^{-1} r^{(k+1)}\);
8. \(\rho ^{(k+1)}=(2 \sigma _{1}-\rho ^{(k)})^{-1}\);
9. \(d^{(k+1)}=\rho ^{(k+1)} \rho ^{(k)} d^{(k)} +\frac{2 \rho ^{(k+1)}}{\delta } z^{(k+1)}\);
10. EndDo
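Algorithm 1 translates almost line by line into code. Below is a minimal sketch with the operator and the preconditioner passed as callables, defaulting to \(\eta =1/2\) and \(\zeta =1\) as justified above; the function and parameter names are ours:

```python
import numpy as np

def chebyshev(matvec, prec, b, eta=0.5, zeta=1.0, tol=1e-10, maxit=200):
    """Chebyshev acceleration (Algorithm 1) for P^{-1} A x = P^{-1} b,
    assuming the preconditioned spectrum lies in [eta, zeta]."""
    theta, delta = (eta + zeta) / 2, (zeta - eta) / 2
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = prec(r)
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = z / theta
    for _ in range(maxit):
        x = x + d                                # step 5
        r = r - matvec(d)                        # step 6
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = prec(r)                              # step 7
        rho_new = 1.0 / (2 * sigma1 - rho)       # step 8
        d = rho_new * rho * d + (2 * rho_new / delta) * z   # step 9
        rho = rho_new
    return x
```

For the inner systems of this paper, `matvec` would apply the block matrix \(\mathcal {A}\) of (15) and `prec` would be the four-step PRESB application described above.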
Now, the ISNSS iteration scheme is described in the following algorithm.
Algorithm 2. The ISNSS iteration method
1. Choose an initial guess \(x^{(0)}\);
2. For \(k=0,1,2,\ldots ,\) until convergence, Do
3. Compute \(r^{(k)}=b-A x^{(k)}\);
4. Solve system (12) approximately for \(\gamma ^{(k)}\) using the PCG method;
5. \( x^{(k+\frac{1}{2})}=x^{(k)}+\gamma ^{(k)}\);
6. Compute \(r^{(k+\frac{1}{2})}=b-A x^{(k+\frac{1}{2})}\);
7. Solve \(\mathcal {A} u= c\) approximately for \(\gamma ^{(k+\frac{1}{2})}\) using Algorithm 1 with the preconditioner \(\mathcal {P}_{PRESB}\);
8. \( x^{(k+1)}=\gamma ^{(k+\frac{1}{2})}+x^{(k+\frac{1}{2})}\);
9. EndDo
5 SNSS preconditioner and its implementation issues
In the SNSS iteration method, if we introduce
and
then
It follows from Eq. (21) that
From the latter equation, we conclude that if the SNSS method is convergent, then the eigenvalues of the matrix \(B_{\alpha ,\beta }^{-1} A\) are clustered in a disk of radius 1 centered at (1, 0). In this case, a Krylov subspace method like GMRES is quite suitable for solving the preconditioned system (see [10])
In each iteration of a Krylov subspace method like GMRES for solving the system (22) a vector of the form
should be computed. To do so, we need to solve a system with the coefficient matrix \(\alpha T+W_{1}\), which can be done exactly using the Cholesky factorization or inexactly using the CG method. We also need to solve a linear system with the coefficient matrix \(i (\beta +1) T+W_{2}\), which can be solved using the GMRES or Chebyshev acceleration methods with the PRESB preconditioner presented in the previous section.
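The preconditioning step \(z=B_{\alpha ,\beta }^{-1}v\) thus amounts to one solve with \(\alpha T+W_{1}\) followed by one solve with \(i(\beta +1)T+W_{2}\). The explicit form of \(B_{\alpha ,\beta }^{-1}\) used below is our reading of the splitting behind the SNSS iteration; dense direct solves stand in for the inner iterative solvers, and the function name is ours:

```python
import numpy as np

def apply_snss_prec(W1, W2, T, alpha, beta, v):
    """Compute z = B^{-1} v for the SNSS preconditioner:
    z = S2^{-1} (v - (W1 + i*beta*T) S1^{-1} v)."""
    S1 = alpha * T + W1                  # SPD: CG / Cholesky
    S2 = 1j * (beta + 1) * T + W2        # complex symmetric: PRESB + GMRES/Chebyshev
    y = np.linalg.solve(S1, v)
    return np.linalg.solve(S2, v - (W1 + 1j * beta * T) @ y)
```

A quick consistency check is that one step of the stationary iteration \(x + B^{-1}(b-Ax)\) reproduces the two SNSS half-steps.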
Remark 1
The structure of the subsystems in the MPNS method is the same as that in the SNSS iteration method. So the subsystems in the MPNS method, as well as the MPNS preconditioner, can be treated similarly to those of the SNSS iteration.
6 Parameter estimation
In this section, using the idea of Ren and Cao [23] we present a strategy for estimating the iteration parameters \(\alpha \) and \(\beta \) of the SNSS preconditioner. We rewrite the matrix in (20) as
and define the function \(\psi \) as
To estimate the iteration parameters \(\alpha \) and \(\beta \), we set
and
Since the matrix T is SPD, \(\Vert T\Vert _{F}\) and \(\Vert T^{-1}\Vert _{F}\) are nonzero. We then obtain the following estimation formulas for the iteration parameters \(\alpha \) and \(\beta \) of the SNSS preconditioner
Note that if \(\alpha _{est}<0\), then we apply the above strategy for the function \(-\psi \). In this case, we get
Summarizing the above discussion results in the following estimated parameters:
In the next section we will see that this strategy often gives suitable results.
7 Numerical experiments
In this section, we consider the following two examples for our numerical tests.
Example 1
We consider the complex symmetric linear system of equations [7, 8, 35]
where M and K are the inertia and stiffness matrices, respectively. We take \(C=\omega C_{V}+C_{H}\) where \(C_{V}\) and \(C_{H}\) are the viscous and hysteretic damping matrices, respectively; and \(\omega \) is the driving circular frequency. In our numerical experiments, we set \(M=I\), \(C_{V}=5M\) and \(C_{H}=\mu K\) with a damping coefficient \(\mu =0.02\) and K the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square \( [0,1] \times [0,1] \) with the mesh size \(h={1}/{(m+1)} \). In this case, we have
with \(V_{m} = h^{-2}\text {tridiag} (-1,2,-1)\in \mathbb {R}^{m \times m}\). Hence, the total number of variables is \(n=m^{2}\). In addition, the right-hand side vector b is to be adjusted such that \(b=(1+i)Ae\), where \(e=(1,1,\ldots ,1)^T \in \mathbb {R}^{n}\). Note that in both the SNSS and MPNS methods, we take \(W_{1}=\omega ^{2} M\), \(W_{2}=K\) and \(T=\omega C\).
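Under our reading of this setup (with \(K=I_m\otimes V_m + V_m\otimes I_m\), the standard five-point discrete Laplacian), the Example 1 matrices can be assembled as follows; the function name is ours:

```python
import numpy as np
import scipy.sparse as sp

def example1_matrices(m, omega, mu=0.02):
    """Assemble W1 = omega^2*M, W2 = K, T = omega*C for Example 1,
    with M = I and C = omega*C_V + C_H = 5*omega*M + mu*K."""
    h = 1.0 / (m + 1)
    Vm = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    Im = sp.identity(m)
    K = sp.kron(Im, Vm) + sp.kron(Vm, Im)    # five-point negative Laplacian
    n = m * m
    M = sp.identity(n)
    C = omega * (5.0 * M) + mu * K
    return (omega**2 * M).tocsr(), K.tocsr(), (omega * C).tocsr()
```

For every \(\omega >0\) all three matrices are SPD, while \(W=-W_{1}+W_{2}\) becomes indefinite once \(\omega ^{2}\) exceeds the smallest eigenvalue of K.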
Example 2
We consider the complex Helmholtz equation in 2-D of the form [7, 8, 30, 35]
where
\(\sigma _{1} \in \mathbb {R}\), \(\sigma _{2} \ge 0\), \(\Omega =[0,1]^2\) and \(i=\sqrt{-1}\). Discretizing the above equation using the second-order central difference scheme on an \((m+2) \times (m+2)\) grid of \(\Omega \) with mesh size \(h=1/(m+1)\) leads to a system of linear equations with the coefficient matrix \(A=W+iT \in \mathbb {C}^{n \times n}\), where \(n=m^2\), and
with \(K=I_{m} \otimes V_{m} + V_{m} \otimes I_{m}\) and \(V_{m}=\text {tridiag}(-1,2,-1) \in \mathbb {R}^{m \times m}\). In addition, the right-hand side vector b is to be adjusted such that \(b = (1 + i)Ae\), where \(e = (1,1,\ldots ,1)^T \in \mathbb {R}^{n}\). In our numerical experiments, we set \((\sigma _{1},\sigma _{2})=(-100,100)\) and \((\sigma _{1},\sigma _{2})=(-1000,10)\). Note that in both the SNSS and MPNS methods, we take \(W_{1}=-\sigma _{1} h^{2} (I_{m} \otimes I_{m})\), \(W_{2}=K\).
We present our numerical results in two parts. In the first part, we compare the numerical results of the ISNSS method with those of the inexact version of the MPNS (IMPNS) method. In both the ISNSS and IMPNS methods, the first subsystem is solved using the preconditioned CG (PCG) method incorporated with the incomplete Cholesky factorization with drop tolerance \(10^{-2}\) as a preconditioner. In Matlab notation, the following command can be used for computing the incomplete Cholesky factor
where Z is a given SPD matrix. The second subsystem is solved by the Chebyshev acceleration method in conjunction with the PRESB preconditioner. Note that in all of the methods we solve the subsystems (18) and (19) exactly using the sparse Cholesky factorization incorporated with the symmetric approximate minimum degree permutation; to this end, the symamd.m command of Matlab is applied. We use a zero vector as the initial guess. The outer iteration is stopped as soon as the residual 2-norm is reduced by a factor of \(10^6\), and the inner iteration as soon as it is reduced by a factor of \(10^2\). The maximum numbers of inner and outer iterations are both set to 10000. In the tables, a dagger (\(\dagger \)) shows that the method has not converged within 10000 iterations. All of the numerical experiments were performed in Matlab R2018b on a laptop with a 2.50 GHz central processing unit (Intel(R) Core(TM) i5-7200U), 6 GB RAM and Windows 10.
Numerical results are presented for Examples 1 and 2 in Tables 1, 2, 3 and 4. In these tables we report the number of outer iterations (“Iters”), elapsed CPU time in seconds (“CPU”) and the following values
to demonstrate the accuracy of the computed solutions, where \( x^{(k)}\) and \( x^{*}\) are the computed solution at iteration k and the exact solution, respectively. We also present the minimum eigenvalues of the matrices W and T to show whether the matrix is indefinite. For both the ISNSS and IMPNS methods, the optimal values of the parameters (those yielding the minimum number of iterations) were computed experimentally.
As the numerical results show, the ISNSS method outperforms the IMPNS method in terms of both CPU time and number of iterations. We observe that the ISNSS iteration method is h-independent. However, as the parameter \(\omega \) increases, the number of iterations grows moderately.
For the second set of numerical experiments, we present the results of solving Examples 1 and 2 by means of the flexible version of GMRES (FGMRES) [24, 26] incorporated with the SNSS and MPNS preconditioners (hereafter denoted by FGMRES-SNSS and FGMRES-MPNS, respectively). We use a zero vector as the initial guess, and the FGMRES iteration is stopped as soon as the residual 2-norm is reduced by a factor of \(10^6\). The maximum number of iterations is set to 10000. In the implementation of the preconditioners, the subsystems are solved as described for the first set of experiments. Numerical results are presented in Tables 5, 6, 7 and 8. To demonstrate the efficiency of the preconditioners, we also present the numerical results of FGMRES without preconditioning. We use the optimal value of the parameter in the MPNS method, whereas the estimated parameters given in Eq. (26) are used in the SNSS method.
As we see, both preconditioners greatly expedite the convergence of the FGMRES method. Numerical results show that the SNSS preconditioner is more efficient than the MPNS preconditioner in terms of both the number of iterations and the elapsed CPU time. We also see that as the mesh is refined, the number of iterations of FGMRES-SNSS remains almost constant. However, the number of iterations increases moderately as the value of the parameter \(\omega \) is increased.
8 Conclusion
In this paper, we have presented the symmetric positive definite and negative stable splitting (SNSS) method for solving complex symmetric indefinite linear systems. Theoretical analysis shows that the SNSS iteration method is convergent under suitable conditions. Numerical results show that the ISNSS and FGMRES-SNSS methods are efficient when the real part of the coefficient matrix is symmetric indefinite.
References
Axelsson, O., Boyanova, P., Kronbichler, M., Neytcheva, M., Wu, X.: Numerical and computational efficiency of solvers for two-phase problems. Comput. Math. Appl. 65, 301–314 (2013)
Axelsson, O., Kucherov, A.: Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 7, 197–218 (2000)
Axelsson, O., Neytcheva, M., Ahmad, B.: A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algorithms 66, 811–841 (2014)
Axelsson, O., Neytcheva, M., Liang, Z.-Z.: Parallel solution methods and preconditioners for evolution equations. Math. Model. Anal. 23, 287–308 (2018)
Axelsson, O., Salkuyeh, D.K.: A new version of a preconditioning method for certain two-by-two block matrices with square blocks. BIT Numer. Math. 59, 321–342 (2018)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93–111 (2010)
Bai, Z.-Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 56, 297–317 (2011)
Bayliss, A., Goldstein, C.I., Turkel, E.: An iterative method for Helmholtz equation. J. Comput. Phys. 49, 443–457 (1983)
Benzi, M.: Preconditioning techniques for large linear systems: a survey. J. Comput. Phys. 182, 418–477 (2002)
Bunse-Gerstner, A., Stöver, R.: On a conjugate gradient-type method for solving complex symmetric linear systems. Linear Algbra Appl. 287, 105–123 (1999)
Cui, L.B., Zhang, X.Q., Zheng, Y.T.: A preconditioner based on a splitting-type iteration method for solving complex symmetric indefinite linear systems. Jpn. J. Ind. Appl. Math. (2021). https://doi.org/10.1007/s13160-021-00471-1
Edalatpour, V., Hezari, D., Salkuyeh, D.K.: Two efficient inexact algorithms for a class of large sparse complex linear systems. Mediterr. J. Math. 13, 2301–2318 (2016)
Edalatpour, V., Hezari, D., Salkuyeh, D.K.: Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 20, 37–52 (2015)
Feriani, A., Perotti, F., Simoncini, V.: Iterative system solvers for the frequency analysis of linear mechanical systems. Comput. Methods Appl. Mech. Eng. 190, 1719–1739 (2000)
Freund, R.W.: Conjugate gradient-type methods for linear systems with complex symmetric coefficient matrices. SIAM J. Sci. Stat. Comput. 13(1), 425–448 (1992)
Guo, C.-H.: Incomplete block factorization preconditioner for linear systems arising in the numerical solution of the Helmholtz equation. Appl. Numer. Math. 19, 495–508 (1996)
Hezari, D., Salkuyeh, D.K., Edalatpour, V.: Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations. Numer. Linear Algebra Appl. 22, 761–776 (2015)
Hezari, D., Salkuyeh, D.K., Edalatpour, V.: A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algorithms 73, 927–955 (2016)
Huang, Z.-G.: A new double-step splitting iteration method for certain block two-by-two linear systems. Comput. Appl. Math. 39, 193 (2020)
Huang, Z.-G.: Modified two-step scale-splitting iteration method for solving complex symmetric linear systems. Comput. Appl. Math. 40, 122 (2021)
Li, C.-X., Wu, S.-L.: Modified complex-symmetric and skew-Hermitian splitting iteration method for a class of complex-symmetric indefinite linear systems. Numer. Algorithms 76, 93–107 (2017)
Ren, Z.-R., Cao, Y.: An alternating positive-semidefinite splitting preconditioner for saddle point problems from time-harmonic eddy current models. IMA J. Numer. Anal. 36, 922–946 (2016)
Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2003)
Saad, Y., Schultz, M.H.: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 7, 856–869 (1986)
Saad, Y.: A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 14, 461–469 (1993)
Salkuyeh, D.K.: Two-step scale-splitting method for solving complex symmetric system of linear equations. arXiv:1705.02468 (2017)
Salkuyeh, D.K., Hezari, D., Edalatpour, V.: Generalized SOR iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 92, 802–815 (2015)
Salkuyeh, D.K., Siahkolaei, T.S.: Two-parameter TSCSP method for solving complex symmetric system of linear equations. Calcolo 55, 8 (2018)
Siahkolaei, T.S., Salkuyeh, D.K.: A new double-step method for solving complex Helmholtz equation. Hacet. J. Math. Stat. 49, 1245–1260 (2020)
Sogabe, T., Zhang, S.-L.: A COCR method for solving complex symmetric linear systems. J. Comput. Appl. Math. 199, 297–303 (2007)
Van der Vorst, H.A., Melissen, J.B.M.: A Petrov-Galerkin type method for solving \(Ax=b\), where \(A\) is symmetric complex. IEEE Trans. Mag. 26, 706–708 (1990)
Wu, S.-L., Huang, T.-Z., Li, L., Xiong, L.-L.: Positive stable preconditioners for symmetric indefinite linear systems arising from Helmholtz equations. Phys. Lett. A. 373, 2401–2407 (2009)
Wu, S.-L., Li, C.-X.: A modified SSOR preconditioning strategy for Helmholtz equations. J. Appl. Math. 2012, 365124 (2012)
Wu, S.-L., Li, C.-X.: A splitting method for complex symmetric indefinite linear system. J. Comput. Appl. Math. 313, 343–354 (2017)
Xie, X., Li, H.B.: On preconditioned Euler-extrapolated single-step Hermitian and skew-Hermitian splitting method for complex symmetric linear systems. Jpn. J. Ind. Appl. Math. 38, 503–518 (2021)
Zhang, G.F., Zheng, Z.: A parameterized splitting iteration method for complex symmetric linear systems. Jpn. J. Ind. Appl. Math. 31, 265–278 (2014)
Acknowledgements
The authors would like to thank the referees for their careful reading of the paper and giving several valuable comments and suggestions.
Pourbagher, M., Salkuyeh, D.K. A new two-parameter iteration method for indefinite complex symmetric linear systems. Japan J. Indust. Appl. Math. 39, 145–163 (2022). https://doi.org/10.1007/s13160-021-00479-7