1 Introduction

Many problems in the field of scientific computation can be viewed as solving a large sparse complex symmetric linear system (Axelsson and Kucherov 2000), arising, for example, in quantum mechanics, fluid dynamics, and electromagnetics. We consider a large sparse system of linear equations of the form:

$$\begin{aligned} Ax=b, \quad A\in C^{n\times n},\quad x,b\in C^{n}, \end{aligned}$$
(1.1)

where A is symmetric and nonsingular.

Many iterative methods solve this problem by means of an effective splitting of the coefficient matrix A, such as the classic Jacobi and Gauss–Seidel iteration methods (Hageman and Young 1971; Saad 1996). The generalized Lanczos method (Widlund 1978) splits the matrix A into its Hermitian and skew-Hermitian parts:

$$\begin{aligned} A=W+iT, \end{aligned}$$
(1.2)

where \(W=\frac{1}{2}(A+A^{*})\) and \(iT=\frac{1}{2}(A-A^{*})\), \(i=\sqrt{-1}\) is the imaginary unit, and \(A^{*}\) denotes the conjugate transpose of the matrix A. We assume \(T\ne 0\), which implies that A is a non-Hermitian matrix.

Bai et al. (2003) presented the HSS iteration method in the following form: given an initial guess \(x^{(0)}\), for \(k=0,1,\ldots \) until \(\{x^{(k)}\}\) converges, compute:

$$\begin{aligned} \begin{aligned}&(\alpha I+W)x^{(k+1/2)}=(\alpha I-iT)x^{(k)}+b ,\\&(\alpha I+iT)x^{(k+1)}=(\alpha I-W)x^{(k+1/2)}+b, \end{aligned} \end{aligned}$$
(1.3)

where \(\alpha \) is a given positive constant. Bai et al. (2004) applied this method to the saddle point problem, either directly or as a preconditioner. Bai et al. (2006) analyzed the optimal parameter \(\alpha ^{*}\) that minimizes the spectral radius of the iteration matrix of the HSS method, in order to accelerate convergence. Later, Bai et al. (2010) skillfully designed a modified Hermitian and skew-Hermitian splitting (MHSS) method in which only two linear systems with real symmetric positive definite coefficient matrices need to be solved. Based on the Hermitian positive semidefinite matrix \(-(iT)^{2}=(-iT)(iT)=T^{2}\), Bai (2008) also established the skew-normal splitting (SNS) and skew-scaling splitting (SSS) methods.

Wu (2015) multiplied (1.1) on the left by W to obtain a splitting of the Hermitian-normal equations \(W(W+iT)x=Wb\). The Hermitian-Normal Splitting (HNS) iteration method is written as follows.

The HNS iteration method Given an initial value \(x^{(0)}\) for (1.1), for \(k=0,1,2, \ldots \) until the sequence of iterates \(x^{(k)}\) converges, compute the next iterate \(x^{(k+1)}\) according to the following procedure:

$$\begin{aligned} \left\{ \begin{aligned}&(\alpha I+iW){x^{(k+1/2)}}=(\alpha T-W^{2}){x^{(k)}}+Wb, \\&(\alpha T+W^{2}){x^{(k+1)}}=(\alpha I-iW){x^{(k+1/2)}}+Wb, \end{aligned} \right. \end{aligned}$$
(1.4)

where \(\alpha >0\) and I is the identity matrix. Furthermore, a simplified HNS (SHNS) method is presented.

In this paper, to further generalize the Hermitian-Normal Splitting iteration methods and accelerate their convergence, modified Hermitian-Normal Splitting iteration methods are proposed. In Sect. 2, the modified Hermitian-Normal Splitting iteration method (MHNS) and the modified simplified Hermitian-Normal Splitting iteration method (MSHNS) are described and their convergence properties are discussed. In Sect. 3, the eigenvalue distribution of the preconditioned matrix is analyzed when the modified methods are used as preconditioners. Some implementation aspects are briefly discussed in Sect. 4. The results of numerical experiments are reported in Sect. 5. Finally, in Sect. 6, we offer some conclusions to end the paper.

2 The modified Hermitian-normal splitting iteration methods

2.1 The MHNS iteration method

We first multiply (1.1) on the left by \(-iW\) to obtain the equation:

$$\begin{aligned} -iWAx=(-iW^{2}+WT)x=-iWb, \end{aligned}$$

and then multiply (1.1) on the left by W to obtain another equation:

$$\begin{aligned} WAx=(W^{2}+iWT)x=Wb. \end{aligned}$$

Then, we can rewrite these as the following equivalent forms:

$$\begin{aligned} \begin{aligned} (\alpha T+WT)x=(\alpha T+iW^{2})x-iWb,\\ (\alpha T+W^{2})x=(\alpha T-iWT)x+Wb. \end{aligned} \end{aligned}$$
(2.1)

Now, noting that \(\alpha T+WT=(\alpha I+W)T\) and \(\alpha T-iWT=(\alpha I-iW)T\), and introducing an intermediate vector that plays the role of Tx, we can alternate between the two equations in (2.1) and establish the following modified HNS iteration method.

The MHNS iteration method Let \(x^{(0)}\in C^{n}\) be an initial value. For \(k=0,1,2, \ldots \), we compute the sequence of iterates \(x^{(k)}\) until it converges; the next iterate \(x^{(k+1)}\) is computed according to the following procedure:

$$\begin{aligned} \left\{ \begin{aligned}&(\alpha I+W){x^{(k+1/2)}}=(\alpha T+iW^{2}){x^{(k)}}-iWb, \\&(\alpha T+W^{2}){x^{(k+1)}}=(\alpha I-iW){x^{(k+1/2)}}+Wb, \end{aligned} \right. \end{aligned}$$
(2.2)

where \(\alpha >0\) and I is the identity matrix.
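For concreteness, a minimal Python/NumPy sketch of the MHNS iteration (2.2) with dense direct solves for the two subsystems is given below; the function name `mhns` and the stopping rule are illustrative choices, not part of the original method description.

```python
import numpy as np

def mhns(W, T, b, alpha, tol=1e-6, maxit=500):
    """Minimal sketch of the MHNS iteration (2.2) with dense direct subsystem solves."""
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T                      # coefficient matrix of (1.1)
    Wb = W @ b
    M1 = alpha * I + W                  # subsystem matrix of the first half-step
    M2 = alpha * T + W @ W              # subsystem matrix of the second half-step
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # (alpha*I + W) x^{k+1/2} = (alpha*T + i*W^2) x^k - i*W*b
        x_half = np.linalg.solve(M1, (alpha * T + 1j * W @ W) @ x - 1j * Wb)
        # (alpha*T + W^2) x^{k+1} = (alpha*I - i*W) x^{k+1/2} + W*b
        x = np.linalg.solve(M2, (alpha * I - 1j * W) @ x_half + Wb)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```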

By eliminating the intermediate vector \(x^{(k+1/2)}\), we can rewrite the iteration in the fixed-point form:

$$\begin{aligned} {x^{(k+1)}}=M_{\alpha }{x^{(k)}}+N_{\alpha }Wb,\quad k=0,1,2,\ldots , \end{aligned}$$

where

$$\begin{aligned} M_{\alpha }=(\alpha T+W^{2})^{-1}(\alpha I-iW)(\alpha I+W)^{-1}(\alpha T+iW^{2}) \end{aligned}$$

is the iteration matrix. In addition, we introduce matrices:

$$\begin{aligned} B(\alpha )=\frac{1+i}{2\alpha }(\alpha I+W)(\alpha T+W^{2})\quad \text{ and }\quad C(\alpha )=\frac{1+i}{2\alpha }(\alpha I-iW)(\alpha T+iW^{2}), \end{aligned}$$
(2.3)

it holds that:

$$\begin{aligned} WA=B(\alpha )-C(\alpha )\quad \text{ and }\quad M(\alpha )=B(\alpha )^{-1}C(\alpha ). \end{aligned}$$

The matrix \(B(\alpha )\) can be used as a preconditioning matrix for the complex matrix WA. Next, we analyze the convergence rate of the method and derive an upper bound on the contraction factor.

Theorem 2.1

Let \(A=W+iT \in C^{n\times n}\), with \(W \in R^{n\times n}\) symmetric positive definite and \(T \in R^{n\times n}\) symmetric positive semidefinite, and let \(\alpha \) be a positive constant. Then, the spectral radius \(\rho (M(\alpha ))\) of the MHNS iteration matrix \(M(\alpha )\) satisfies \(\rho (M(\alpha ))\le \sigma (\alpha )<1\), where:

$$\begin{aligned} \sigma (\alpha )\equiv \ \mathop {\text {max}}\limits _{\mu _{i}\in sp(W)}\frac{\sqrt{\alpha ^{2}+\mu _{i}^{2}}}{\alpha +\mu _{i}}, \end{aligned}$$

and sp(W) denotes the spectrum of the matrix W, i.e., the MHNS iteration converges to the unique solution \(x_{\star }\in C^{n}\) of the complex symmetric linear system (1.1) for any initial guess.

Proof

By direct computations, we have:

$$\begin{aligned} \begin{aligned} \rho (M(\alpha ))&=\rho ((\alpha T+W^{2})^{-1}(\alpha I-iW)(\alpha I+W)^{-1}(\alpha T+iW^{2}))\\&=\rho ((\alpha T+W^{2})(\alpha T+W^{2})^{-1}(\alpha I-iW)(\alpha I+W)^{-1}(\alpha T+iW^{2})(\alpha T+W^{2})^{-1})\\&=\rho ((\alpha I-i{W})(\alpha I+W)^{-1}(\alpha T+iW^{2})(\alpha T+W^{2})^{-1})\\&\le \parallel (\alpha I-iW)(\alpha I+W)^{-1}\parallel _{2}\parallel (\alpha T+iW^{2})(\alpha T+W^{2})^{-1}\parallel _{2}\\&=\parallel (\alpha I-iW)(\alpha I+W)^{-1}\parallel _{2}\parallel W(\alpha W^{-1}TW^{-1}+iI)(\alpha W^{-1}TW^{-1}+I)^{-1}W^{-1}\parallel _{2}\\&=\parallel (\alpha I-iW)(\alpha I+W)^{-1}\parallel _{2}\parallel (\alpha W^{-1}TW^{-1}+iI)(\alpha W^{-1}TW^{-1}+I)^{-1}\parallel _{2}. \end{aligned} \end{aligned}$$

Let \(Q=W^{-1}TW^{-1}\). Because \(W\in R^{n\times n}\) is symmetric positive definite and \(T\in R^{n\times n}\) is symmetric positive semidefinite, \(Q\) is also symmetric positive semidefinite, and there exist orthogonal matrices \(U,V\in R^{n\times n}\), such that:

$$\begin{aligned} U^{T}QU=\Lambda _{Q},\quad V^{T}WV=\Lambda _{W}, \end{aligned}$$

where

$$\begin{aligned} \Lambda _{Q}=\text {diag}(\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}) \end{aligned}$$

and

$$\begin{aligned} \Lambda _{W}=\text {diag}(\mu _{1},\mu _{2},\ldots ,\mu _{n}) \end{aligned}$$

with \(\lambda _{i}(1\le i\le n)\) and \(\mu _{i}(1\le i\le n)\) being the eigenvalues of the matrices Q and W, respectively. By our assumption, it holds that:

$$\begin{aligned} \lambda _{i}\ge 0\quad \text{ and }\quad \mu _{i}>0,\quad i=1,2,\ldots ,n. \end{aligned}$$

Now, based on the orthogonal invariance of the Euclidean norm \(\parallel \cdot \parallel _{2}\), we can further bound \(\rho (M(\alpha ))\) by:

$$\begin{aligned} \rho (M(\alpha ))\le \mathop {\text {max}}\limits _{\mu _{i}\in sp(W)}\frac{\sqrt{\alpha ^{2}+\mu _{i}^{2}}}{\alpha +\mu _{i}}\cdot \mathop {\text {max}}\limits _{\lambda _{i}\in sp(Q)}\frac{\sqrt{\alpha ^{2}\lambda _{i}^{2}+1}}{\alpha \lambda _{i}+1}. \end{aligned}$$

For all \(\lambda _{i}\ (1\le i\le n)\), we know that \(\sqrt{\alpha ^{2}\lambda _{i}^{2}+1}\le \alpha \lambda _{i}+1\), so the second factor is at most 1. It follows that:

$$\begin{aligned} \rho (M(\alpha ))\le \mathop {\text {max}}\limits _{\mu _{i}\in sp(W)}\frac{\sqrt{\alpha ^{2}+\mu _{i}^{2}}}{\alpha +\mu _{i}}=\sigma (\alpha )<1, \end{aligned}$$

i.e., the MHNS iteration converges to the unique solution of the complex symmetric linear system (1.1). \(\square \)

Corollary 1

Let the conditions of Theorem 2.1 be satisfied, and let \(\gamma _{\text {min}}\) and \(\gamma _{\text {max}}\) be the smallest and largest eigenvalues of the symmetric positive definite matrix \(W\in R^{n\times n}\), respectively. Then:

$$\begin{aligned} \alpha _{\star }\equiv \text {arg} \ \mathop {\text {min}}\limits _{\alpha }\left\{ \mathop {\text {max}}\limits _{\gamma _{\text {min}}\le \mu \le \gamma _{\text {max}}} \frac{\sqrt{\alpha ^{2}+\mu ^{2}}}{\alpha +\mu }\right\} =\sqrt{\gamma _{\text {min}}\gamma _{\text {max}}} \end{aligned}$$
(2.4)

and

$$\begin{aligned} \sigma (\alpha _{\star })=\frac{\sqrt{\frac{\gamma _{\text {max}}}{\gamma _{\text {min}}}+1}}{\sqrt{\frac{\gamma _{\text {max}}}{\gamma _{\text {min}}}}+1}=\frac{\sqrt{\kappa (W)+1}}{\sqrt{\kappa (W)}+1}, \end{aligned}$$
(2.5)

where \(\kappa (W)=\gamma _{\text {max}}/\gamma _{\text {min}}\) is the spectral condition number of the matrix W.

Proof

Because \(\frac{\sqrt{\alpha ^{2}+\mu ^{2}}}{\alpha +\mu }\) is decreasing in \(\mu \) for \(\mu <\alpha \) and increasing for \(\mu >\alpha \), its maximum over \([\gamma _{\text {min}},\gamma _{\text {max}}]\) is attained at an endpoint, and hence:

$$\begin{aligned} \sigma (\alpha )=\text {max}\left\{ \frac{\sqrt{\alpha ^{2}+\gamma ^{2}_{\text {min}}}}{\alpha +\gamma _{\text {min}}}, \frac{\sqrt{\alpha ^{2}+\gamma ^{2}_{\text {max}}}}{\alpha +\gamma _{\text {max}}}\right\} . \end{aligned}$$

To compute the optimal \(\alpha >0\) that minimizes this upper bound on the convergence factor \(\rho (M(\alpha ))\) of the MHNS iteration, we let:

$$\begin{aligned} \frac{\sqrt{\alpha ^{2}+\gamma ^{2}_{\text {min}}}}{\alpha +\gamma _{\text {min}}}= \frac{\sqrt{\alpha ^{2}+\gamma ^{2}_{\text {max}}}}{\alpha +\gamma _{\text {max}}}. \end{aligned}$$

Then, solving this equation, we obtain:

$$\begin{aligned} \alpha _{\star }=\sqrt{\gamma _{\text {min}}\gamma _{\text {max}}}. \end{aligned}$$

Therefore, substituting \(\alpha _{\star }\) into \(\sigma (\alpha )\), we can easily obtain the formula (2.5). \(\square \)
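As an informal numerical illustration of Theorem 2.1 and Corollary 1, the sketch below builds a small random pair (W, T) with the required definiteness (a construction chosen purely for testing), forms the iteration matrix \(M(\alpha _{\star })\) explicitly, and prints its spectral radius next to the bound \(\sigma (\alpha _{\star })\) from (2.5).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
G = rng.standard_normal((n, n))
W = G @ G.T + n * np.eye(n)                 # symmetric positive definite (illustrative)
H = rng.standard_normal((n, n // 2))
T = H @ H.T                                 # symmetric positive semidefinite (illustrative)

gamma = np.linalg.eigvalsh(W)               # eigenvalues of W, sorted ascending
alpha = np.sqrt(gamma[0] * gamma[-1])       # optimal parameter of Corollary 1

I = np.eye(n)
inner = np.linalg.solve(alpha * I + W, alpha * T + 1j * W @ W)
M = np.linalg.solve(alpha * T + W @ W, (alpha * I - 1j * W) @ inner)   # M(alpha)
rho = np.max(np.abs(np.linalg.eigvals(M)))                             # spectral radius
kappa = gamma[-1] / gamma[0]
sigma_star = np.sqrt(kappa + 1) / (np.sqrt(kappa) + 1)                 # bound (2.5)

print(f"rho(M(alpha*)) = {rho:.4f},  sigma(alpha*) = {sigma_star:.4f}")
```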

2.2 The MSHNS iteration method

To obtain the modified simplified Hermitian-normal equations, we write the linear system (1.1) as:

$$\begin{aligned} \begin{aligned} \left( W+\frac{1}{\alpha }W^{2}\right) x=\left( -iT+\frac{1}{\alpha }W^{2}\right) x+b,\\ \left( iT+\frac{i}{\alpha }W^{2}\right) x=\left( -W+\frac{i}{\alpha }W^{2}\right) x+b. \end{aligned} \end{aligned}$$

By simple computation, we can obtain that:

$$\begin{aligned} \begin{aligned} (\alpha I+W)iWx=(\alpha T+iW^{2})x+i\alpha b, \\ (\alpha T+W^{2})x=(\alpha I-iW)iWx-i\alpha b. \end{aligned} \end{aligned}$$
(2.6)

From (2.6), we can obtain the following iteration form:

$$\begin{aligned} \left\{ \begin{aligned}&(\alpha I+W)iW{x^{(k+1/2)}}=(\alpha T+iW^{2}){x^{(k)}}+i\alpha b, \\&(\alpha T+W^{2}){x^{(k+1)}}=(\alpha I-iW)iW{x^{(k+1/2)}}-i\alpha b. \end{aligned} \right. \end{aligned}$$
(2.7)

Hence, we can establish the modified SHNS iteration method which is described as follows.

The MSHNS iteration method Let \(x^{(0)}\in C^{n}\) be an initial value. For \(k=0,1,2, \ldots \), we compute the sequence of iterates \(x^{(k)}\) until it converges; the next iterate \(x^{(k+1)}\) is computed according to the following procedure:

$$\begin{aligned} \left\{ \begin{aligned}&(\alpha I+W){x^{(k+1/2)}}=(\alpha T+iW^{2}){x^{(k)}}+i\alpha b,\\&(\alpha T+W^{2}){x^{(k+1)}}=(\alpha I-iW){x^{(k+1/2)}}-i\alpha b, \end{aligned} \right. \end{aligned}$$
(2.8)

where \(\alpha >0\) and I is the identity matrix.
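Compared with the MHNS sketch above, an MSHNS step changes only the constant terms. A minimal single-step sketch (with the illustrative helper name `mshns_step`) is:

```python
import numpy as np

def mshns_step(W, T, b, alpha, x):
    """One MSHNS iteration step (2.8): same subsystem matrices as in the MHNS method,
    but with constant terms +i*alpha*b and -i*alpha*b instead of -i*W@b and +W@b."""
    n = W.shape[0]
    I = np.eye(n)
    x_half = np.linalg.solve(alpha * I + W,
                             (alpha * T + 1j * W @ W) @ x + 1j * alpha * b)
    return np.linalg.solve(alpha * T + W @ W,
                           (alpha * I - 1j * W) @ x_half - 1j * alpha * b)
```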

By eliminating the intermediate vector \(x^{(k+1/2)}\), we can rewrite the iteration in the fixed-point form:

$$\begin{aligned} {x^{(k+1)}}=M_{\alpha }{x^{(k)}}+{N_{\alpha }b},\quad k=0,1,2,\ldots , \end{aligned}$$

where

$$\begin{aligned} M_{\alpha }=(\alpha T+W^{2})^{-1}(\alpha I-iW)(\alpha I+W)^{-1}(\alpha T+iW^{2}) \end{aligned}$$

is the iteration matrix. Let

$$\begin{aligned} B(\alpha )=\frac{1+i}{2\alpha }(\alpha I+W)(\alpha T+W^{2}) \text{ and } C(\alpha )=\frac{1+i}{2\alpha }(\alpha I-iW)(\alpha T+iW^{2}), \end{aligned}$$
(2.9)

it holds that

$$\begin{aligned} WA=B(\alpha )-C(\alpha ) \text{ and } M(\alpha )=B(\alpha )^{-1}C(\alpha ). \end{aligned}$$

Matrix \(B(\alpha )\) can be used as a preconditioner matrix for the complex matrix WA.

Comparing the MSHNS method with the MHNS method, the obvious difference lies in the constant vector terms: one involves \(\alpha b\), whereas the other involves Wb, which requires an additional matrix–vector product. Therefore, the MSHNS method is cheaper than the MHNS method. Just like the MHNS method, the MSHNS method converges unconditionally to the unique solution of the linear system (1.1).

Just like the MHNS method, the MSHNS method has its own convergence theorem and corollary, whose proofs are the same as those of Theorem 2.1 and Corollary 1.

Theorem 2.2

Let \(A=W+iT \in C^{n\times n}\), with \(W \in R^{n\times n}\) symmetric positive definite and \(T \in R^{n\times n}\) symmetric positive semidefinite, and let \(\alpha \) be a positive constant. Then, the spectral radius \(\rho (M(\alpha ))\) of the MSHNS iteration matrix \(M(\alpha )\) satisfies \(\rho (M(\alpha ))\le \sigma (\alpha )<1\), where:

$$\begin{aligned} \sigma (\alpha )\equiv \ \mathop {\text {max}}\limits _{\mu _{i}\in sp(W)}\frac{\sqrt{\alpha ^{2}+\mu _{i}^{2}}}{\alpha +\mu _{i}} \end{aligned}$$

and sp(W) denotes the spectrum of the matrix W, i.e., the MSHNS iteration converges to the unique solution \(x_{\star }\in C^{n}\) of the complex symmetric linear system (1.1) for any initial guess.

Corollary 2

Let the conditions of Theorem 2.2 be satisfied, and let \(\gamma _{\text {min}}\) and \(\gamma _{\text {max}}\) be the smallest and largest eigenvalues of the symmetric positive definite matrix W, respectively. Then:

$$\begin{aligned} \alpha _{\star }\equiv \text {arg} \ \mathop {\text {min}}\limits _{\alpha }\left\{ \mathop {\text {max}}\limits _{\gamma _{\text {min}}\le \mu \le \gamma _{\text {max}}} \frac{\sqrt{\alpha ^{2}+\mu ^{2}}}{\alpha +\mu }\right\} =\sqrt{\gamma _{\text {min}}\gamma _{\text {max}}} \end{aligned}$$
(2.10)

and

$$\begin{aligned} \sigma (\alpha _{\star })=\frac{\sqrt{\frac{\gamma _{\text {max}}}{\gamma _{\text {min}}}+1}}{\sqrt{\frac{\gamma _{\text {max}}}{\gamma _{\text {min}}}}+1}=\frac{\sqrt{\kappa (W)+1}}{\sqrt{\kappa (W)}+1}, \end{aligned}$$

where \(\kappa (W)=\gamma _{\text {max}}/\gamma _{\text {min}}\) is the spectral condition number of the matrix W.

Some remarks on Theorems 2.1 and 2.2 and Corollaries 1 and 2 are given as follows:

Remark 1

It is easy to see that the convergence rates of the modified Hermitian-Normal Splitting iteration methods are bounded by \(\sigma (\alpha )\), which depends only on the spectrum of the symmetric positive definite matrix W.

Remark 2

It is worth noting that the parameter \(\alpha _{\star }\) only minimizes the upper bound \(\sigma (\alpha )\) on the spectral radius \(\rho (M(\alpha ))\) of the iteration matrix \(M(\alpha )\), not \(\rho (M(\alpha ))\) itself.

3 The modified methods as preconditioners

From the modified iteration methods, it is easy to see that the splitting matrix \(B_{\alpha }\) can be used as a preconditioning matrix for the complex matrix WA. Note that the multiplicative factor \(\frac{1+i}{2\alpha }\) has no effect on the preconditioned system. Therefore, we take:

$$\begin{aligned} B_{\alpha }=(\alpha I+W)(\alpha T+W^{2}) \end{aligned}$$

as a preconditioning matrix for Krylov subspace methods such as the GMRES method (Saad 1993). Applying the preconditioner within GMRES amounts to solving the preconditioned system \(B_{\alpha }^{-1}WAx=B_{\alpha }^{-1}Wb\), which requires a solve with \(B_{\alpha }\) at each iteration. The factors \(\alpha I+W\) and \(\alpha T+W^{2}\) are symmetric positive definite, so \(B_{\alpha }\) is nonsingular, and a direct conclusion can be obtained about \(B_{\alpha }\) as a preconditioner.
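A minimal SciPy sketch of using \(B_{\alpha }\) as a preconditioner for GMRES applied to \(WAx=Wb\) might look as follows; applying \(B_{\alpha }^{-1}\) amounts to two successive sparse solves with \(\alpha I+W\) and \(\alpha T+W^{2}\). The factorization strategy, restart length, and keyword names (e.g., `rtol`, which is `tol` in older SciPy versions) are illustrative assumptions, not part of the original text.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mhns_preconditioned_gmres(W, T, b, alpha, tol=1e-6, restart=30):
    """Solve W*A*x = W*b by GMRES with the preconditioner B_alpha = (alpha*I + W)(alpha*T + W^2)."""
    n = W.shape[0]
    I = sp.identity(n, format='csc')
    WA = (W @ W + 1j * (W @ T)).tocsc()                            # W*A = W^2 + i*W*T
    Wb = W @ b
    lu1 = spla.splu((alpha * I + W).astype(complex).tocsc())       # factor of alpha*I + W
    lu2 = spla.splu((alpha * T + W @ W).astype(complex).tocsc())   # factor of alpha*T + W^2
    # Applying B_alpha^{-1} r: solve (alpha*I + W) z = r, then (alpha*T + W^2) y = z.
    apply_prec = lambda r: lu2.solve(lu1.solve(r))
    M = spla.LinearOperator((n, n), matvec=apply_prec, dtype=complex)
    x, info = spla.gmres(WA, Wb, rtol=tol, restart=restart, M=M)
    return x, info
```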

Theorem 3.1

Let \(W \in R^{n\times n}\) be symmetric positive definite and \(T \in R^{n\times n}\) be symmetric positive semidefinite. Then, the spectrum of \(B_{\alpha }^{-1}WA\) is clustered around (0, 1).

Proof

From the modified iteration methods, we have:

$$\begin{aligned} WA=B(\alpha )-C(\alpha )\quad \text{ and }\quad M(\alpha )=B(\alpha )^{-1}C(\alpha ). \end{aligned}$$

By direct computations, we have:

$$\begin{aligned} B_{\alpha }^{-1}WA=I-B_{\alpha }^{-1}C_{\alpha }=I-M_{\alpha }. \end{aligned}$$

From Theorems 2.1 and 2.2, we know that the eigenvalues \(\lambda \) of \(M(\alpha )\) satisfy \(|\lambda |\le \sigma (\alpha )<1\). Therefore, the eigenvalues of \(B_{\alpha }^{-1}WA\) are of the form \(1-\lambda \) and satisfy \(|1-(1-\lambda )|=|\lambda |<1\). \(\square \)
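A quick dense check of this clustering, keeping the scalar factor \(\frac{1+i}{2\alpha }\) from (2.3) so that \(B(\alpha )^{-1}WA=I-M(\alpha )\) holds exactly, might look as follows (the random test matrices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
G = rng.standard_normal((n, n))
W = G @ G.T + n * np.eye(n)                      # symmetric positive definite (illustrative)
H = rng.standard_normal((n, n))
T = H @ H.T                                      # symmetric positive semidefinite (illustrative)
gamma = np.linalg.eigvalsh(W)
alpha = np.sqrt(gamma[0] * gamma[-1])

I = np.eye(n)
WA = W @ W + 1j * W @ T
B = (1 + 1j) / (2 * alpha) * (alpha * I + W) @ (alpha * T + W @ W)   # B(alpha) from (2.3)
eigs = np.linalg.eigvals(np.linalg.solve(B, WA))                     # spectrum of B(alpha)^{-1} W A
# maximum distance of the eigenvalues from 1 equals rho(M(alpha)), bounded by sigma(alpha) < 1
print(np.max(np.abs(eigs - 1)))
```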

4 Implementation aspects

In the modified iteration methods, the two half-steps comprising each iteration require the exact solution of two symmetric positive definite systems with coefficient matrices \(\alpha I+W\) and \(\alpha T+W^{2}\). However, this may be very costly and impractical in actual implementations. To improve the computing efficiency of the modified iteration methods, we can employ an inner iterative method, in a manner similar to the inexact HSS method (Bai et al. 2008). Because the coefficient matrices of both subsystems are symmetric positive definite, Krylov subspace methods such as CG (Hestenes and Stiefel 1952) and GMRES (Saad and Schultz 1986) can be utilized. Moreover, if effective preconditioners for the matrices \(\alpha I+W\) and \(\alpha T+W^{2}\) are available, we can use the preconditioned conjugate gradient (PCG) method instead of CG when solving the subsystems, so that the computing efficiency of the inexact modified Hermitian-Normal Splitting methods can be further improved.

Algorithm 1 The inexact modified Hermitian-Normal Splitting iteration method

In Algorithm 1, \(\varepsilon _{k}\) and \(\eta _{k}\) are the tolerances of the inner iterations. In general, different inner tolerances and numbers of inner iteration steps lead to different accuracy of the outer iteration scheme. The larger the inner tolerance is, the more outer iterations are needed to achieve the prescribed accuracy. When the inner tolerances tend to zero as the outer iteration index increases, the convergence rate of the inexact modified Hermitian-Normal Splitting iteration methods approaches that of the exact modified Hermitian-Normal Splitting iteration methods. Therefore, the inexact modified Hermitian-Normal Splitting methods are actually nonstationary iterative methods for solving the linear system (1.1).
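As a rough sketch in the spirit of Algorithm 1 (not a verbatim transcription of it), the two subsystems can be solved approximately by CG; since their coefficient matrices are real and the right-hand sides are complex, the real and imaginary parts are handled separately below. The fixed inner tolerance standing in for \(\varepsilon _{k}\) and \(\eta _{k}\), and the helper names, are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cg_complex(A, rhs, tol):
    """CG for a real SPD sparse matrix with a complex right-hand side."""
    xr, _ = spla.cg(A, rhs.real, rtol=tol)     # keyword is `tol` in older SciPy versions
    xi, _ = spla.cg(A, rhs.imag, rtol=tol)
    return xr + 1j * xi

def inexact_mhns(W, T, b, alpha, tol=1e-6, maxit=200, inner_tol=1e-2):
    """Sketch of an inexact MHNS iteration with CG inner solves
    (a fixed inner tolerance stands in for the tolerances eps_k, eta_k)."""
    n = W.shape[0]
    I = sp.identity(n, format='csr')
    A = (W + 1j * T).tocsr()
    Wb = W @ b
    M1 = (alpha * I + W).tocsr()               # SPD matrix of the first half-step
    M2 = (alpha * T + W @ W).tocsr()           # SPD matrix of the second half-step
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        x_half = cg_complex(M1, (alpha * T + 1j * (W @ W)) @ x - 1j * Wb, inner_tol)
        x = cg_complex(M2, (alpha * I - 1j * W) @ x_half + Wb, inner_tol)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```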

5 Numerical experiments

In this section, we use different types of test problems to assess the feasibility and effectiveness of the MHNS iteration method and the MSHNS iteration method when they are used either as solvers or as preconditioners for the system of linear equations (1.1) with a complex symmetric coefficient matrix. We also compare the MHNS method with the HNS method and the MSHNS method with the SHNS method, both as iterative solvers and as preconditioners for the GMRES method. Using the optimal parameter \(\alpha _{\star }=\sqrt{\gamma _{\text {min}}\gamma _{\text {max}}}\) from Corollaries 1 and 2, we compare the methods in terms of the number of iterations (denoted as IT) and the total CPU time (denoted as CPU).

In addition, all numerical experiments were performed on a personal computer with a 3.20 GHz central processing unit [Intel(R) Core(TM) i5-3470 CPU], 6.0 GB memory, and the Windows 8.1 operating system, using MATLAB R2016a. Here, \({x^{(0)}=0}\) is the initial guess, and the iteration is terminated once the current iterate \(x^{(k)}\) satisfies:

$$\begin{aligned} \frac{\Vert b-Ax^{(k)}\Vert _{2}}{\Vert b\Vert _{2}}\le 10^{-6}. \end{aligned}$$

Example 1

We consider the direct frequency-domain analysis of an n-degree-of-freedom (n-DOF) linear system (Feriani et al. 2000), which can be written in matrix form as:

$$\begin{aligned} M\ddot{q}+C\dot{q}+Kq=p, \end{aligned}$$

where q is the configuration vector, p is the vector of generalized components of the dynamic forces, M and K are the inertia and stiffness matrices, respectively, and C is the viscous damping matrix. Assuming time-harmonic excitation, this leads to the complex symmetric linear system:

$$\begin{aligned}{}[(-\omega ^{2}M+K)+i(\omega C_{V}+C_{H})]x=b, \end{aligned}$$

where \(M=I\), \(C_{V}=10I\), and \(C_{H}=\mu K\). The matrix \(K\in R^{n\times n}\) possesses the tensor-product form \(K=I\otimes V_{m}+V_{m}\otimes I\), with \(V_{m}=h^{-2}\text {tridiag}(-1,2,-1)\in R^{m\times m}\) and mesh size \(h=\frac{1}{m+1}\). In addition, we set \(\omega =\pi \) and \(\mu =0.02\), and the right-hand side vector is \(b=(1+i)Ae\), with e being the vector with all entries equal to 1.
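For reference, one way to assemble the Example 1 system with SciPy sparse Kronecker products might be as follows (the function name and the returned tuple are illustrative):

```python
import numpy as np
import scipy.sparse as sp

def build_example1(m, omega=np.pi, mu=0.02):
    """Assemble Example 1 on an m x m grid, so n = m^2."""
    h = 1.0 / (m + 1)
    Vm = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    Im = sp.identity(m)
    K = sp.kron(Im, Vm) + sp.kron(Vm, Im)        # K = I (x) Vm + Vm (x) I
    n = m * m
    In = sp.identity(n)
    W = (-omega**2 * In + K).tocsr()             # real part: -omega^2 M + K with M = I
    T = (omega * 10 * In + mu * K).tocsr()       # imaginary part: omega*C_V + C_H
    A = (W + 1j * T).tocsr()
    b = (1 + 1j) * (A @ np.ones(n))              # b = (1 + i) A e
    return W, T, A, b
```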

Table 1 IT and CPU for HNS and MHNS methods for Example 1

Example 2

We consider the complex Helmholtz equation (Bertaccini 2004):

$$\begin{aligned} -\Delta u+\sigma _{1}u+i\sigma _{2}u=f, \end{aligned}$$

where \(\sigma _{1}\) and \(\sigma _{2}\) are real coefficient functions and u satisfies Dirichlet boundary conditions on \(D=[0,1]\times [0,1]\). Discretizing by finite differences on an \(m\times m\) grid with mesh size \(h=\frac{1}{m+1}\), we obtain the linear system:

$$\begin{aligned} ((K+\sigma _{1}I)+i\sigma _{2}I)x=b, \end{aligned}$$

where \(K=I\otimes V_{m}+V_{m}\otimes I\) is the discretization of \(-\Delta \) and \(V_{m}=h^{-2}\text {tridiag}(-1,2,-1)\in R^{m\times m}\). The right-hand side vector is \(b = Ae\), with e being the vector with all entries equal to 1. For our numerical tests, we set \(\sigma _{1}=-1\) and \(\sigma _{2}=1\).
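A corresponding sketch for assembling the Example 2 system is given below (again, the function name is illustrative):

```python
import numpy as np
import scipy.sparse as sp

def build_example2(m, sigma1=-1.0, sigma2=1.0):
    """Assemble the discrete complex Helmholtz system of Example 2, n = m^2."""
    h = 1.0 / (m + 1)
    Vm = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    Im = sp.identity(m)
    K = sp.kron(Im, Vm) + sp.kron(Vm, Im)        # five-point discretization of -Laplace
    n = m * m
    In = sp.identity(n)
    W = (K + sigma1 * In).tocsr()                # real part: K + sigma_1 I
    T = (sigma2 * In).tocsr()                    # imaginary part: sigma_2 I
    A = (W + 1j * T).tocsr()
    b = A @ np.ones(n)                           # b = A e
    return W, T, A, b
```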

Table 2 IT and CPU for SHNS and MSHNS methods for Example 2
Table 3 IT and CPU for GMRES methods for Example 1
Table 4 IT and CPU for GMRES methods for Example 2

In Tables 1 and 2, we list the iteration numbers (IT) and CPU times (CPU) for the original and modified methods used as iterative solvers. In Tables 3 and 4, we present results for GMRES and for the methods used as preconditioners of GMRES. Comparing the results, we find that as n increases, IT and CPU increase for all methods, and the difference between the original and modified methods becomes especially obvious for large n (i.e., small mesh size h). In addition, the modified methods are superior to the original methods in terms of both IT and CPU time, whether used as iterative solvers or as preconditioners.

It is well known that the spectrum of the preconditioned matrix is an important indicator of the convergence behavior of Krylov subspace methods. In particular, for symmetric linear systems, we want the number of distinct eigenvalues to be small, so that the method converges quickly. Therefore, it is worthwhile to examine the eigenvalue distribution of the preconditioned matrix \(B_{\alpha }^{-1}WA\) numerically. All matrices tested in Figs. 1, 2, 3, and 4 are of size \(256\times 256\).

Fig. 1 The eigenvalue distribution of the HNS preconditioner for Example 1

Fig. 2 The eigenvalue distribution of the MHNS preconditioner for Example 1

Fig. 3 The eigenvalue distribution of the HNS preconditioner for Example 2

Fig. 4 The eigenvalue distribution of the MHNS preconditioner for Example 2

From Figs. 1, 2, 3, and 4, it is easy to see that the numerical results agree with our previous analysis and that the preconditioners of the modified methods make the spectrum more concentrated near 0. The results validate Theorem 3.1 and indicate that the convergence of the preconditioned method is accelerated.

6 Conclusion

In this paper, a class of modified Hermitian-Normal Splitting methods for complex symmetric linear systems, whose Hermitian part is real symmetric positive definite, has been introduced. Theoretical analysis shows that, for any initial value \(x^{(0)}\) and any positive \(\alpha \), the modified methods converge unconditionally to the unique solution of the system (1.1). We also give an upper bound on the spectral radius of the iteration matrix and derive the parameter \(\alpha _{\star }\) that minimizes this bound. For practical use, inexact versions of these methods and their use as preconditioners for Krylov subspace methods are proposed to reduce computational costs. Numerical results show that the modified Hermitian-Normal Splitting methods perform better in terms of both iteration number and CPU time. These methods are very effective for solving the above linear systems.