1 Introduction

This paper is devoted to the study of the numerical solution of linear systems of the form

$$ \mathcal{A}u=b,\quad \text{with} \quad \mathcal{A}=W+iT, $$
(1.1)

where \(\mathcal {A} \in \mathbb {C}^{n\times n}\) is a complex symmetric matrix, and \(W \in \mathbb {R}^{n\times n}\) and \(T \in \mathbb {R}^{n\times n}\) are large, sparse symmetric positive definite matrices, which implies that \(\mathcal {A}\) is nonsingular. The right-hand side vector \(b \in \mathbb {C}^{n}\) is given, and \(i=\sqrt {-1}\) denotes the imaginary unit. Linear systems of the form (1.1) arise widely in science and engineering, including diffuse optical tomography [1], electrical power system modeling [2], quantum mechanics [3], and molecular scattering [4]; see, e.g., [5,6,7,8] and the references therein for more details concerning the applications of this kind of linear system.

To numerically approximate the solution of the complex symmetric linear systems (1.1), several iteration methods have been considered over the years. A number of efficient techniques based on extensions of Krylov subspace methods have been developed in the literature for solving (1.1), such as the conjugate orthogonal conjugate gradient (COCG) method [9], the quasi-minimal residual (QMR) method [10], the conjugate A-orthogonal conjugate residual (COCR) method [11], and the symmetric complex bi-conjugate gradient (SCBiCG) method [12, 13].

On the basis of the Hermitian and skew-Hermitian splitting (HSS) of the coefficient matrix, Bai et al. [14] established the HSS iteration method for solving non-Hermitian positive definite linear systems. Any non-Hermitian matrix A can be decomposed into its Hermitian and skew-Hermitian parts as A = H + S, where

$$ H=\frac{1}{2}(A+A^{*}), \qquad S=\frac{1}{2}(A-A^{*}). $$
(1.2)

The symbol \(A^{*}\) (or \(A^{T}\)) denotes the conjugate transpose (or the transpose) of \(A\in \mathbb {C}^{n\times n}\) (or \(A\in \mathbb {R}^{n\times n}\)). The construction of the HSS iteration method is based on the matrix splitting (1.2) and is similar in spirit to the classical alternating direction implicit (ADI) method. It is immediate to see that W and iT are the Hermitian and skew-Hermitian parts of the complex symmetric matrix \(\mathcal {A}\), respectively. Then, the HSS method produces the approximate solutions of (1.1) with the following iteration scheme:

The HSS iteration method: :

Let α be a positive constant and I be the identity matrix. Given an initial guess \(u^{(0)}\), for k = 0,1,2,…, until \(\{u^{(k)}\}\) converges, compute

$$ \begin{cases} (\alpha I+W)u^{(k+\frac{1}{2})}=(\alpha I-iT)u^{(k)}+b,&\\ (\alpha I+iT)u^{(k+1)}=(\alpha I-W)u^{(k+\frac{1}{2})}+b. \end{cases} $$
(1.3)
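To make the sweep concrete, the following is a minimal Python/NumPy sketch of one HSS step (1.3); the function name is ours, and the dense solves stand in for the sparse factorizations used in practice.

```python
import numpy as np

def hss_step(W, T, b, u, alpha):
    """One sweep of the HSS iteration (1.3).

    W, T: real SPD (n x n) arrays; b: complex right-hand side;
    u: current iterate; alpha: positive parameter.
    Dense solves stand in for sparse factorizations used in practice.
    """
    I = np.eye(W.shape[0])
    # First half-step: (alpha*I + W) u_half = (alpha*I - i*T) u + b
    u_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ u + b)
    # Second half-step: (alpha*I + i*T) u_new = (alpha*I - W) u_half + b
    return np.linalg.solve(alpha * I + 1j * T, (alpha * I - W) @ u_half + b)
```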

At each iteration step of the HSS method, we need to solve the shifted skew-Hermitian linear subsystem with coefficient matrix αI + iT, which can be problematic in actual implementations of this method. To avoid solving the shifted skew-Hermitian subsystem of linear equations, Bai et al. [15] introduced a modified HSS (MHSS) method, which is much more efficient than the HSS method. To accelerate the convergence rate of the MHSS iteration method, Bai et al. [16] proposed a preconditioned variant of the MHSS (PMHSS) method. The PMHSS iteration scheme is given as follows:

The PMHSS iteration method: :

Let α be a positive constant and \(V\in \mathbb {R}^{n\times n}\) be a symmetric positive definite matrix. Given an initial guess \(u^{(0)}\), for k = 0,1,2,…, until \(\{u^{(k)}\}\) converges, compute

$$ \begin{cases} (\alpha V+W)u^{(k+\frac{1}{2})}=(\alpha V-iT)u^{(k)}+b,&\\ (\alpha V+T)u^{(k+1)}=(\alpha V+iW)u^{(k+\frac{1}{2})}-ib. \end{cases} $$
(1.4)
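Both coefficient matrices in (1.4) are real symmetric positive definite, so each half-step can be carried out in real arithmetic even though the right-hand sides are complex. A minimal sketch along these lines (function names ours; dense Cholesky factorizations for simplicity):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_real_spd(F, rhs):
    # Real Cholesky factor applied to a complex RHS, part by part.
    return cho_solve(F, rhs.real) + 1j * cho_solve(F, rhs.imag)

def pmhss_solve(W, T, V, b, alpha, tol=1e-6, maxit=3000):
    """PMHSS iteration (1.4); V = I recovers the MHSS method."""
    u = np.zeros_like(b, dtype=complex)
    F1 = cho_factor(alpha * V + W)  # factored once, reused every sweep
    F2 = cho_factor(alpha * V + T)
    normb = np.linalg.norm(b)
    for k in range(maxit):
        u_half = solve_real_spd(F1, (alpha * V - 1j * T) @ u + b)
        u = solve_real_spd(F2, (alpha * V + 1j * W) @ u_half - 1j * b)
        if np.linalg.norm(b - (W @ u + 1j * (T @ u))) <= tol * normb:
            return u, k + 1
    return u, maxit
```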

We comment here that if the matrix V is equal to the identity matrix, then the PMHSS method reduces to the MHSS method. To improve the computational efficiency of the MHSS and PMHSS methods, more practical iteration schemes for the complex symmetric linear systems (1.1) have been derived recently. By introducing another parameter in the PMHSS scheme, the generalized PMHSS (GPMHSS) iteration method was developed in [17]; it is a useful variant of the AHSS iteration method initially proposed and discussed in [18] and later in [19]; see also [20]. In addition, by applying the lopsided technique studied in [21], Li et al. [22] proposed the lopsided PMHSS (LPMHSS) iteration method. There are also other variants of the MHSS-like methods; see, e.g., [23,24,25]. Subsequently, more iteration methods have been presented in the literature, such as the skew-normal splitting (SNS) method [26], the Hermitian normal splitting (HNS) method [27], the scale-splitting (SCSP) iteration method [28], the double-step scale splitting (DSS) iteration method [29], and the combination of real part and imaginary part (CRI) iteration method [30]. We also note that an important and effective class of iteration methods, the quasi-HSS (QHSS) iteration methods, was recently designed and analyzed in [31], likewise based on HSS-like iterations. However, all of these iteration methods for solving the linear systems (1.1) involve complex arithmetic. To avoid complex arithmetic, the problem (1.1) can be recast in a real formulation. Let u = x + iy and b = p + iq, where the vectors x,y,p,q are all in \(\mathbb {R}^{n}\). Then, the complex linear systems (1.1) can be expressed in the two-by-two block structure

$$ \mathscr{A}z=\begin{pmatrix}W& -T\\T& W\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}= \begin{pmatrix}p\\q\end{pmatrix}= f, $$
(1.5)

which can be solved in real arithmetic by the HSS method or by Krylov subspace methods such as the GMRES method.
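The passage from (1.1) to (1.5) is mechanical; a small helper of the following kind (a sketch, names ours) assembles the real block form:

```python
import numpy as np

def real_block_form(W, T, b):
    """Assemble (1.5) from A = W + i*T and b = p + i*q: returns the
    real block matrix [[W, -T], [T, W]] and the stacked vector (p; q)."""
    A_block = np.block([[W, -T], [T, W]])
    f = np.concatenate([b.real, b.imag])
    return A_block, f
```

The complex solution is then recovered from the real one as u = x + iy.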

For the block two-by-two linear systems (1.5), some efficient iteration methods have been investigated. For example, Bai et al. [32] developed the PMHSS iteration method for solving the block two-by-two linear systems (1.5) and applied it to distributed control problems. The block PMHSS iteration scheme is algorithmically described as follows:

The block PMHSS iteration method: :

Let \((x^{(0)}; y^{(0)}) \in \mathbb {R}^{2n}\) be an initial guess. For k = 0,1,2,…, until \(\{(x^{(k)};y^{(k)})\}\) converges, compute \(\{(x^{(k+1)};y^{(k+1)})\}\) by the following scheme:

$$ \begin{cases} \begin{pmatrix}\alpha V+W& 0\\0& \alpha V+W\end{pmatrix}\begin{pmatrix}x^{(k+\frac{1}{2})}\\y^{(k+\frac{1}{2})}\end{pmatrix}=\begin{pmatrix}\alpha V& T\\-T& \alpha V\end{pmatrix}\begin{pmatrix}x^{(k)}\\y^{(k)}\end{pmatrix}+ \begin{pmatrix}p\\q\end{pmatrix},\\ \begin{pmatrix}\alpha V+T& 0\\0& \alpha V+T\end{pmatrix}\begin{pmatrix}x^{(k+1)}\\y^{(k+1)}\end{pmatrix}=\begin{pmatrix}\alpha V& -W\\W& \alpha V\end{pmatrix}\begin{pmatrix}x^{(k+\frac{1}{2})}\\y^{(k+\frac{1}{2})} \end{pmatrix}+ \begin{pmatrix}q\\-p\end{pmatrix}, \end{cases} $$
(1.6)

where α is a given positive constant and \(V \in \mathbb {R}^{n\times n}\) is a prescribed symmetric positive definite matrix.

In 2005, Bai et al. [20] first introduced the generalized successive overrelaxation (GSOR) method for solving augmented linear systems and discussed the optimal iteration parameters. Recently, Salkuyeh et al. [33] applied the GSOR method to the linear systems (1.5) and derived the GSOR iteration method for the complex linear systems, which is defined as follows:

The GSOR iteration method: :

Let α be a positive constant and \((x^{(0)};y^{(0)}) \in \mathbb {R}^{2n}\) be an initial guess. For k = 0,1,2,…, until the sequence of iterates \(\{(x^{(k)};y^{(k)})\}\) converges, compute

$$ \begin{cases} Wx^{(k+1)}=(1-\alpha)Wx^{(k)}+\alpha Ty^{(k)} +\alpha p,&\\ Wy^{(k+1)}=-\alpha Tx^{(k+1)} +(1-\alpha)Wy^{(k)}+\alpha q. \end{cases} $$
(1.7)
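Both equations in (1.7) involve only W on the left-hand side, so a single Cholesky factorization of W can be reused throughout; a dense sketch (names and stopping test ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gsor_solve(W, T, p, q, alpha, tol=1e-6, maxit=3000):
    """GSOR iteration (1.7); one Cholesky factorization of W is reused."""
    FW = cho_factor(W)
    x = np.zeros_like(p)
    y = np.zeros_like(q)
    nf = np.hypot(np.linalg.norm(p), np.linalg.norm(q))
    for k in range(maxit):
        x = cho_solve(FW, (1 - alpha) * (W @ x) + alpha * (T @ y) + alpha * p)
        y = cho_solve(FW, -alpha * (T @ x) + (1 - alpha) * (W @ y) + alpha * q)
        # residual of the block system (1.5)
        res = np.hypot(np.linalg.norm(p - W @ x + T @ y),
                       np.linalg.norm(q - T @ x - W @ y))
        if res <= tol * nf:
            return x, y, k + 1
    return x, y, maxit
```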

It is worth mentioning that the iteration scheme (1.7) is a straightforward application of the GSOR iteration method for saddle-point linear systems. Inspired by the ideas of the upper and lower triangular (ULT) iteration method and the parameterized ULT (PULT) iteration method in [34, 35], Li et al. [36] designed the symmetric block triangular splitting (SBTS) iteration method for solving (1.5), which can be described as follows:

The SBTS iteration method: :

Let α be a positive constant and \((x^{(0)};y^{(0)}) \in \mathbb {R}^{2n}\) be an initial guess. For k = 0,1,2,…, until the sequence of iterates \(\{(x^{(k)};y^{(k)})\}\) converges, compute the next iterate according to the following procedure:

$$ \begin{cases} Wx^{(k+\frac{1}{2})}=Ty^{(k)} +p,&\\ \alpha Wy^{(k+\frac{1}{2})}= (\alpha-1)Wy^{(k)}-Tx^{(k+\frac{1}{2})} + q,&\\ \alpha Wy^{(k+1)}=(\alpha-1)Wy^{(k+\frac{1}{2})}-Tx^{(k+\frac{1}{2})}+q,&\\ Wx^{(k+1)}=Ty^{(k+1)} +p. \end{cases} $$
(1.8)
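Each SBTS sweep (1.8) amounts to four solves with the same matrix W; a one-sweep sketch under that observation (names ours; FW is a prefactored Cholesky factor of W):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sbts_sweep(FW, W, T, p, q, x, y, alpha):
    """One sweep of the SBTS iteration (1.8); FW = cho_factor(W)."""
    x_half = cho_solve(FW, T @ y + p)
    y_half = cho_solve(FW, (alpha - 1) * (W @ y) - T @ x_half + q) / alpha
    y_new = cho_solve(FW, (alpha - 1) * (W @ y_half) - T @ x_half + q) / alpha
    x_new = cho_solve(FW, T @ y_new + p)
    return x_new, y_new
```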

By following the idea of the shift-splitting iteration and preconditioning methods [37], Zeng et al. [38] proposed the generalized shift-splitting iteration method for solving the linear systems (1.5). For more iteration methods and additional references, we refer to [39, 40].

The classical accelerated overrelaxation (AOR) method, introduced by Hadjidimos in 1978 [41], is a simple iteration method for solving linear systems of equations and is used in scientific computing and engineering applications. For instance, it has been used to solve augmented linear systems and generalized saddle-point problems [42, 43]. In this paper, we employ the generalized AOR (GAOR) iteration method, a special case of the PIU method in [44] and an extension of the GSOR method in [20], to find the solution of the block two-by-two linear systems (1.5). We also consider the use of the generalized conjugate gradient (GCG) method of Concus and Golub [45] and Widlund [46] for problems of the form (1.5). The remainder of this paper is organized as follows. In Section 2, the ideas of the GAOR and GCG methods are utilized to construct iteration schemes for solving the real equivalent linear systems. In Section 3, numerical experiments are reported to show the performance of these two methods. Section 4 is devoted to some brief conclusions.

2 The GAOR and GCG methods for block two-by-two linear systems

This section consists of two parts. In Section 2.1, the generalized AOR (GAOR) method for solving (1.5) is studied and its convergence properties are stated. The second subsection is concerned with the generalized conjugate gradient (GCG) method, which is applied to solve the linear systems (1.5).

2.1 The GAOR method

We consider the following splitting of the coefficient matrix \({\mathscr{A}}\) in (1.5)

$$ \mathscr{A}=\begin{pmatrix}W& -T\\T& W\end{pmatrix}= \mathscr{D}-\mathscr{L}-\mathscr{U}, $$
(2.1)

where

$$ \mathscr{D}=\begin{pmatrix}W& 0\\0& W+Q\end{pmatrix},\quad \mathscr{L}=\begin{pmatrix}0& 0\\-T& 0\end{pmatrix},\quad \mathscr{U}=\begin{pmatrix}0& T\\0& Q\end{pmatrix}, $$
(2.2)

in which \(Q\in \mathbb {R}^{n\times n}\) is a given symmetric positive definite matrix. Let \(z^{(k)} = (x^{(k)};y^{(k)})\) be the kth approximate solution; then the following iteration scheme can be constructed:

$$ (\mathscr{D}-r\mathscr{L})z^{(k+1)} = [(1-\omega)\mathscr{D}+(\omega-r)\mathscr{L}+\omega \mathscr{U}]z^{(k)}+\omega f. $$
(2.3)

It is not difficult to see that (2.3) can be reformulated as follows

$$ z^{(k+1)} = \mathcal{G}z^{(k)}+\omega(\mathscr{D}-r\mathscr{L})^{-1}f, $$
(2.4)

where

$$ \mathcal{G}=\begin{pmatrix}(1-\omega)I& \omega W^{-1}T\\ {\Phi}& {\Psi} \end{pmatrix}, $$
(2.5)

in which \({\Phi} = \omega(r-1)(W+Q)^{-1}T\), \({\Psi} = -r\omega(W+Q)^{-1}TW^{-1}T + (W+Q)^{-1}[Q+(1-\omega)W]\), and ω, r are two positive parameters. Simple computations show that the iteration scheme (2.4) can be recast as follows:

The GAOR iteration method: :

Let ω and r be positive constants and \((x^{(0)};y^{(0)}) \in \mathbb {R}^{2n}\) be an initial guess. For k = 0,1,2,…, until the iteration sequence \(\{(x^{(k)};y^{(k)})\}\) converges, compute

$$ \begin{cases} x^{(k+1)}=(1-\omega)x^{(k)}+\omega W^{-1}(Ty^{(k)} +p),&\\ y^{(k+1)}=y^{(k)}+(W+Q)^{-1}[T((r-\omega)x^{(k)}-rx^{(k+1)})-\omega Wy^{(k)}+\omega q]. \end{cases} $$
(2.6)
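Each GAOR step thus requires one solve with W and one with W + Q, both real symmetric positive definite; a dense sketch of (2.6) with the two factorizations computed once (names and stopping test ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaor_solve(W, T, Q, p, q, omega, r, tol=1e-6, maxit=3000):
    """GAOR iteration (2.6); Cholesky factors of W and W + Q are reused."""
    FW = cho_factor(W)
    FWQ = cho_factor(W + Q)
    x = np.zeros_like(p)
    y = np.zeros_like(q)
    nf = np.hypot(np.linalg.norm(p), np.linalg.norm(q))
    for k in range(maxit):
        x_new = (1 - omega) * x + omega * cho_solve(FW, T @ y + p)
        y = y + cho_solve(FWQ, T @ ((r - omega) * x - r * x_new)
                               - omega * (W @ y) + omega * q)
        x = x_new
        # residual of the block system (1.5)
        res = np.hypot(np.linalg.norm(p - W @ x + T @ y),
                       np.linalg.norm(q - T @ x - W @ y))
        if res <= tol * nf:
            return x, y, k + 1
    return x, y, maxit
```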

The GAOR method involves two parameters ω, r and one preconditioning matrix Q. The iteration scheme (2.6) reduces to (1.7) when r = ω and Q = 0. It is expected that, with proper choices of the parameters and the preconditioning matrix, the GAOR method attains a faster convergence rate. As noted above, the GAOR method is a special case of the PIU method [44] and of the generalized inexact accelerated overrelaxation (GIAOR) method [20] (obtained by setting P = W and Q := W + Q).

In the following, we state the convergence properties of the GAOR iteration method. The proofs follow the lines of those in [20]; we omit the details here.

Theorem 1

Let W, Q, and T be symmetric positive definite matrices. Assume that λ is an eigenvalue of the iteration matrix \(\mathcal {G}\) and z = (x;y) is the corresponding eigenvector. If r = 1, then λ = 1 − ω is an eigenvalue of \(\mathcal {G}\) with multiplicity at least n, and the other eigenvalues satisfy

$$ \lambda=\frac{(\beta+\gamma)-\omega(\beta+r\alpha)}{\beta+\gamma}, $$
(2.7)

where

$$ \alpha:=\frac{y^{T}TW^{-1}Ty}{y^{T}y},\qquad \beta:=\frac{y^{T}Wy}{y^{T}y}, \qquad \gamma:=\frac{y^{T}Qy}{y^{T}y}. $$

Corollary 1

Let the conditions of Theorem 1 be satisfied. Then, the GAOR iteration method (2.4) is convergent if

$$ 0<\omega <\min\Big\{2,\frac{2(\beta+\gamma)}{\beta+r\alpha}\Big\}. $$
(2.8)

In the following theorem, the case r ≠ 1 is considered for the GAOR iteration method.

Theorem 2

Let W,Q and T be symmetric positive definite matrices. Assume that λ is an eigenvalue of the iteration matrix \(\mathcal {G}\) and z = (x;y) is the corresponding eigenvector. Then λ satisfies the following quadratic equation

$$ \lambda^{2}-\frac{2(\gamma+\beta-\omega\beta)-\omega(\gamma+\alpha r)}{\beta+\gamma}\lambda+\frac{(1-\omega)(\beta-\omega \beta+\gamma)+\omega \alpha(\omega-r)}{\beta+\gamma}=0. $$
(2.9)

Theorem 3

Let W,Q, and T be symmetric positive definite matrices. Then, the GAOR method is convergent if the following condition holds

$$ \frac{(\omega^{2}-2\omega)\beta+\omega^{2}\alpha-\omega\gamma}{\omega\alpha}<r<\frac{(\frac{\omega^{2}}{2}-2\omega+2)\beta+\frac{\omega^{2}}{2}\alpha+(2-\omega)\gamma}{\omega\alpha}. $$
(2.10)

To implement all the methods mentioned above efficiently, an important issue is how to choose the iteration parameters. The optimal values of the iteration parameters that minimize the spectral radii of the iteration matrices of the GSOR and SBTS methods depend on the extreme eigenvalues \(\mu_{max}\) and \(\mu_{min}\) of \(S = W^{-1}T\), which are not easy to obtain when the size of S is large. Hence, good estimates of these eigenvalues are required for the methods to be efficient. Moreover, finding the optimal parameters and the most efficient preconditioning matrix for the GAOR method is difficult, and we leave this as a topic for further research. In the next subsection, we give a brief overview of the generalized CG (GCG) iteration method and use it to solve the linear systems (1.5). The GCG method has the advantage that no a priori information about the spectral radius of the iteration matrix is needed for estimating parameters.

2.2 The GCG method

Concus and Golub [45] and Widlund [46] proposed the GCG iteration method for solving systems of linear equations Ax = b, where A is an n × n real matrix with positive definite symmetric part, i.e., \(M=\frac {1}{2}(A+A^{T})\) is positive definite. The matrix M serves as a preconditioner for an associated conjugate gradient method. The method is attractive when the system Mv = g can be solved with much less computational effort than the original system Ax = b; this requirement limits its range of applicability. In this case, we can write A in the following form:

$$ A=M-N, $$
(2.11)

where

$$ M=\frac{1}{2}(A+A^{T}), \qquad N=\frac{1}{2}(A^{T}-A). $$

Concus and Golub [45] and Widlund [46] considered the following generalized CG procedure

$$ x^{(k+1)}=x^{(k-1)}+\omega_{k+1}(v^{(k)}+x^{(k)}-x^{(k-1)}), $$
(2.12)

where \(v^{(k)} = M^{-1}(b - Ax^{(k)})\) and

$$ \omega_{k+1}= \begin{cases} 1 \quad & \text{if } k=0;\\ \Big[1+\frac{(Mv^{(k)},v^{(k)})}{(Mv^{(k-1)},v^{(k-1)})\omega_{k}}\Big]^{-1} & \text{if } k\neq 0. \end{cases} $$

Therefore, the implementation of the GCG method for the splitting (2.11) is summarized as follows:

  • The GCG iteration method:

  1. Let \(x^{(0)}\) be given and set \(x^{(-1)} = 0\).

  2. For k = 0,1,…, until convergence do

    2.1. Solve \(Mv^{(k)} = b - Ax^{(k)}\);

    2.2. Compute \(\rho_{k} = (Mv^{(k)},v^{(k)})\);

    2.3. Compute \(\omega _{k+1}= \begin {cases} 1 \quad & \text{if } k=0;\\ [1+\frac {\rho _{k}}{\rho _{k-1}\omega _{k}}]^{-1} & \text{if } k\neq 0; \end {cases}\)

    2.4. Compute \(x^{(k+1)} = x^{(k-1)} + \omega_{k+1}(v^{(k)} + x^{(k)} - x^{(k-1)})\).

From the GCG algorithm, we can see that the cost per iteration is one matrix-vector product (with A), one solve of a system of the form Mv = g, and 2n additional multiplications. In the absence of rounding errors, the GCG procedure converges to the true solution in a finite number of iterations. The reader is referred to [46, 47] for further details about this method.
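A direct transcription of steps 2.1–2.4 into Python reads as follows (a sketch; the residual-based stopping test and the dense Cholesky factorization of M are our own additions):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gcg_solve(A, b, tol=1e-6, maxit=3000):
    """GCG method for A x = b with SPD symmetric part M = (A + A^T)/2."""
    M = 0.5 * (A + A.T)
    FM = cho_factor(M)               # M is assumed SPD
    x_old = np.zeros_like(b)         # x^(-1)
    x = np.zeros_like(b)             # x^(0)
    rho_old, omega = 1.0, 1.0
    normb = np.linalg.norm(b)
    for k in range(maxit):
        v = cho_solve(FM, b - A @ x)                                 # step 2.1
        rho = v @ (M @ v)                                            # step 2.2
        omega = 1.0 if k == 0 else 1.0 / (1.0 + rho / (rho_old * omega))  # 2.3
        x, x_old = x_old + omega * (v + x - x_old), x                # step 2.4
        rho_old = rho
        if np.linalg.norm(b - A @ x) <= tol * normb:
            return x, k + 1
    return x, maxit
```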

Now the GCG method is extended to a block version for solving the linear systems (1.5). We consider the following splitting of the coefficient matrix \({\mathscr{A}}\) of (1.5):

$$ \mathscr{A}=\mathscr{M}-\mathscr{N}, $$
(2.13)

where

$$ \mathscr{M}=\frac{1}{2}(\mathscr{A}+\mathscr{A}^{T})=\begin{pmatrix} W& 0\\0& W\end{pmatrix},\qquad \mathscr{N}=\frac{1}{2}(\mathscr{A}^{T}-\mathscr{A})=\begin{pmatrix} 0& T\\-T& 0\end{pmatrix}, $$

in which \({\mathscr{M}}\) and \({\mathscr{N}}\) are the symmetric part and the negative of the skew-symmetric part of \({\mathscr{A}}\), respectively. It is clear that \({\mathscr{M}}\) is positive definite. Hence, the GCG method can be applied to the linear systems (1.5). Based on the splitting (2.13) and the idea of the GCG method, the following iteration scheme can be constructed to solve (1.5):

  • The GCG iteration method for real block linear systems:

  1. Let \(x^{(0)}, y^{(0)}\) be given and set \(x^{(-1)} = y^{(-1)} = 0\).

  2. For k = 0,1,…, until convergence do

    2.1. Solve \({\mathscr{M}}v^{(k)}=\begin {pmatrix}p\\q\end {pmatrix}-{\mathscr{A}} \begin {pmatrix}x^{(k)}\\y^{(k)}\end {pmatrix}\equiv r^{(k)}\);

    2.2. Compute \(\rho _{k}=({\mathscr{M}}v^{(k)},v^{(k)})\);

    2.3. Compute \(\omega _{k+1}= \begin {cases} 1 \quad & \text{if } k=0;\\ [1+\frac {\rho _{k}}{\rho _{k-1}\omega _{k}}]^{-1} & \text{if } k\neq 0; \end {cases}\)

    2.4. Compute \(\begin {pmatrix}x^{(k+1)}\\y^{(k+1)}\end {pmatrix}=\begin {pmatrix}x^{(k-1)}\\y^{(k-1)}\end {pmatrix} +\omega _{k+1}\Bigg (v^{(k)}+\begin {pmatrix}x^{(k)}\\y^{(k)}\end {pmatrix} -\begin {pmatrix}x^{(k-1)}\\y^{(k-1)}\end {pmatrix}\Bigg )\).

From the above algorithm, we see that the GCG method requires one extra inner product per iteration step and the computation of the initial residual vector at the start of the algorithm; the inner products needed for \(\omega_{k+1}\) add a modest cost per iteration. The GCG method is attractive when the system \({\mathscr{M}}v^{(k)}=r^{(k)}\) can be solved effectively at each iteration; this subsystem of linear equations can be solved directly, e.g., by the Cholesky factorization of the coefficient matrix.

3 Numerical experiments

In this section, we discuss the numerical performance of the iteration methods described in the previous sections for solving the linear systems (1.5). Numerical results for the well-known GMRES method with restart value l = 30 and for the preconditioned GMRES(30) methods are also given. In the tables, the number of iteration steps, the elapsed CPU time in seconds, and the relative residual error are denoted by IT, CPU, and RES, respectively. For solving all linear subsystems involved in these iteration methods, we use the Cholesky factorization of the coefficient matrices. In all of the following experiments, the iteration is terminated when \(||f-{\mathscr{A}}z^{(k)}||_{2}/||f||_{2} < 10^{-6}\) or when the number of iteration steps exceeds 3000. The starting vector is \(z^{(0)} = (x^{(0)};y^{(0)}) = (0;0)\). All the numerical experiments were implemented on a 64-bit 1.80 GHz Core i7 processor with 12.00 GB RAM using MATLAB version R2017a. The following examples are taken from [15, 22].

Example 1

Consider the complex symmetric linear systems of (1.1) as follows

$$ \Big [\Big(K+\frac{3 - \sqrt{3}}{\tau} I\Big) + i \Big(K+\frac{3 + \sqrt{3}}{\tau} I \Big)\Big] u = b, $$
(3.1)

where τ is the time step-size and K is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the unit square [0,1] × [0,1] with mesh-size \(h =\frac {1}{m+1}\). The matrix \(K \in \mathbb {R}^{n\times n}\) has the tensor-product form \(K = I_{m}\otimes V_{m} + V_{m}\otimes I_{m}\) with \(V_{m}= h^{-2}\,\mathrm{tridiag}(-1,2,-1) \in \mathbb {R}^{m\times m}\). Hence, K is a block tridiagonal matrix with \(n = m^{2}\). In our tests, we take

$$ W=K+\frac{3 - \sqrt{3}}{\tau} I, \qquad T=K+\frac{3 + \sqrt{3}}{\tau} I, $$

and the vector b is given with its jth entry:

$$ b_{j}=\frac{(1-i)j}{\tau(j+1)^{2}}, \quad j=1,2,\ldots,n. $$

Furthermore, the linear system (3.1) is normalized by multiplying both sides by \(h^{2}\), and we set τ = h.
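For reproducibility, the setup of Example 1 can be sketched with SciPy sparse matrices as follows (function name ours; the \(h^{2}\)-normalization from the text is applied at the end):

```python
import numpy as np
import scipy.sparse as sp

def example1_matrices(m, tau):
    """Build W, T, b of Example 1: K = I (x) V_m + V_m (x) I with
    V_m = h^{-2} tridiag(-1, 2, -1) and h = 1/(m+1); see (3.1)."""
    n = m * m
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    K = sp.kron(I, Vm) + sp.kron(Vm, I)
    W = K + (3 - np.sqrt(3)) / tau * sp.identity(n)
    T = K + (3 + np.sqrt(3)) / tau * sp.identity(n)
    j = np.arange(1, n + 1)
    b = (1 - 1j) * j / (tau * (j + 1) ** 2)
    # normalize both sides by h^2, as in the text
    return (h**2 * W).tocsc(), (h**2 * T).tocsc(), h**2 * b
```

With τ = h, a call such as `W, T, b = example1_matrices(32, 1.0 / 33)` builds the normalized test problem.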

Example 2

Consider the following complex systems

$$ \Big[(-\omega^{2}M+K) + i(\omega C_{V} +C_{H})\Big]u = b, $$
(3.2)

where M and K are the inertia and stiffness matrices, \(C_{V}\) and \(C_{H}\) are the viscous and hysteretic damping matrices, respectively, and ω is a driving circular frequency. In this example, we take M = I, \(C_{V} = 10I\), and \(C_{H} = \mu K\) with damping coefficient μ = 0.02. The matrix K is defined as in Example 1. We also set ω = π, and the right-hand side vector b is taken to be \(b=(1 + i) \mathcal {A }\bold {1}\) with 1 being the vector of all ones. In addition, the complex linear system (3.2) is normalized by multiplying both sides by \(h^{2}\).

Example 3

Consider the linear systems of (1.1) with

$$ T= I\otimes V+ V \otimes I, \qquad W=10(I\otimes V_{c}+ V_{c} \otimes I)+ 9(e_{1}{e_{m}^{T}}+e_{m}{e_{1}^{T}})\otimes I, $$
(3.3)

where \(V = \mathrm{tridiag}(-1, 2 ,-1)\in \mathbb {R}^{m\times m}\), \(V_{c} = V - e_{1}{e_{m}^{T}} -e_{m}{e_{1}^{T}}\in \mathbb {R}^{m\times m}\), and \(e_{1}\) and \(e_{m}\) are the first and last unit vectors in \(\mathbb {R}^{m}\), respectively. We take the right-hand side vector b to be \(b=(1 + i)\mathcal {A }\bold {1}\) with 1 being the vector of all ones. Here, T and W correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square [0,1] × [0,1] with mesh size \(h =\frac {1}{m+1}\).

Example 4

We consider the following complex Helmholtz equation

$$ -{\Delta} u+\sigma_{1}u+i\sigma_{2}u=g, $$
(3.4)

where \(\sigma_{1}\) and \(\sigma_{2}\) are real coefficient functions and u satisfies Dirichlet boundary conditions in D = [0,1] × [0,1]. Discretizing the problem with finite differences on an m × m grid with mesh size \(h = \frac {1}{m+1}\), we obtain the system of linear equations

$$ [(K+\sigma_{1} I)+i \sigma_{2} I]u=b, $$
(3.5)

where K is the five-point centered difference matrix approximating the negative Laplacian operator −Δ; for this problem, K is defined as in Example 1. We also take the right-hand side vector \(b=(1 + i)\mathcal {A }\bold {1}\), with 1 being the vector of all ones, and normalize the system by multiplying both sides by \(h^{2}\). In actual computations, we set \(\sigma_{1} = \sigma_{2} = 100\).

According to [33, 36], the optimal parameters for the GSOR and SBTS methods are selected based on the following formulas

$$ \alpha_{GSOR}=\frac{2}{1+\sqrt{1+\mu_{max}^{2}}}, \qquad \alpha_{SBTS}=\frac{\gamma \pm \sqrt{\gamma^{2}-2\gamma}}{2}, $$
(3.6)

where \(\gamma = 2+\mu _{max}^{2}+\mu _{min}^{2}\), and \(\mu_{max}\), \(\mu_{min}\) are the largest and smallest eigenvalues of \(S = W^{-1}T\). In our implementations, the eigenvalues \(\mu_{max}\) and \(\mu_{min}\) are obtained by means of the power and inverse power methods, respectively. The parameter of the PMHSS iteration method is chosen experimentally so as to minimize the number of iteration steps. The optimal parameters of the PMHSS, GSOR, and SBTS iteration methods are listed in Table 1 for different values of m. The selection of the optimal parameters in the GAOR method, i.e., those that maximize the rate of convergence, is a difficult task. In the GAOR iteration method, the parameters ω and r can be determined by trial and error on a small example and then used with good results on larger problems. In our computations, we take ω = 0.6, r = 0.9 in Examples 1 and 2, ω = 0.3, r = 0.4 in Example 3, and ω = 1, r = 0.95 in Example 4. Moreover, in the block PMHSS and GAOR methods, the preconditioning matrices V and Q are chosen as W and tW + T, respectively, where t is a given positive real constant. The matrix tW + T is clearly symmetric positive definite; in actual computations, we set t = 0.01.
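Since W and T are symmetric positive definite, the eigenvalues of \(S = W^{-1}T\) coincide with those of the generalized symmetric eigenproblem \(Tx = \mu Wx\); the following sketch estimates \(\mu_{max}\) and evaluates \(\alpha_{GSOR}\) from (3.6), using a sparse eigensolver in place of the power method of the text (function name ours):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def alpha_gsor(W, T):
    """Estimate mu_max of S = W^{-1} T and return alpha_GSOR from (3.6)."""
    # Largest generalized eigenvalue of T x = mu W x (W SPD).
    mu_max = eigsh(T, k=1, M=W, which='LM', return_eigenvectors=False)[0]
    return 2.0 / (1.0 + np.sqrt(1.0 + mu_max ** 2))
```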

Table 1 Choice of parameters α for tested methods

In Tables 2, 3, 4, and 5, the numerical results are presented for Examples 1–4. According to the experimental results for all examples, the GAOR method outperforms the PMHSS, GSOR, and SBTS methods in terms of IT, CPU time, and RES, especially as the size of the problem increases (except for Example 3, where the iteration counts for grids m = 16, 32, 64 with the estimated parameters exceed those of the other iteration methods). From the results reported in Tables 2–5, we conclude that the iteration counts of the GAOR method remain almost stable; hence, the GAOR iteration method is almost independent of the problem size. These results show that, if a suitable preconditioning matrix Q and parameters ω, r are chosen, the GAOR iteration method compares favorably with the methods mentioned above. A dash (–) in Table 4 means that the method required excessively many iterations and too much CPU time, so no result is reported.

Table 2 Numerical results for Example 1
Table 3 Numerical results for Example 2
Table 4 Numerical results for Example 3
Table 5 Numerical results for Example 4

We also give the numerical results for the GMRES(30) method, for GMRES(30) preconditioned by the PMHSS, GSOR, and GAOR preconditioners (PGMRES(30)), and for the GCG iteration method in Tables 6, 7, 8, and 9. As seen, the GMRES(30) method converges slowly or even fails to converge without preconditioning. A good preconditioner can considerably improve the convergence rate of this method. The PMHSS and GAOR preconditioners are defined by

$$ \mathscr{P}_{PMHSS} = \frac{\alpha+1}{2\alpha}\begin{pmatrix} I& - I\\I& I\end{pmatrix} \begin{pmatrix} \alpha W+T& 0\\0& \alpha W+T\end{pmatrix},\qquad \mathscr{P}_{GAOR} = \begin{pmatrix} W& 0\\rT& W+Q\end{pmatrix}. $$
(3.7)
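Applying \(\mathscr{P}_{GAOR}\) within GMRES(30) requires only a block forward substitution, i.e., one solve with W and one with W + Q per preconditioner application; an illustrative sketch with SciPy (names ours, not the authors' implementation):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

def gaor_pgmres(W, T, Q, p, q, r, tol=1e-6):
    """GMRES(30) preconditioned by P_GAOR of (3.7), applied through a
    block forward substitution with two sparse LU solves per call."""
    n = W.shape[0]
    A = sp.bmat([[W, -T], [T, W]]).tocsc()
    f = np.concatenate([p, q])
    lu_W = splu(sp.csc_matrix(W))
    lu_WQ = splu(sp.csc_matrix(W + Q))

    def apply_Pinv(v):
        # Solve P z = v with P = [[W, 0], [r*T, W + Q]] lower block triangular.
        z1 = lu_W.solve(v[:n])
        z2 = lu_WQ.solve(v[n:] - r * (T @ z1))
        return np.concatenate([z1, z2])

    M = LinearOperator((2 * n, 2 * n), matvec=apply_Pinv)
    z, info = gmres(A, f, M=M, restart=30, rtol=tol)  # 'tol=' in older SciPy
    return z[:n], z[n:], info
```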

In all cases, we see that the use of these three preconditioners leads to better performance and reduces the number of iterations of GMRES(30). In Table 8, the symbol (‡) means that no convergence was obtained. From Tables 6–9, we observe that the iteration counts of the PMHSS-preconditioned GMRES(30) are stable, while those of the GSOR- and GAOR-preconditioned GMRES(30) grow with increasing m for Examples 1 and 3. We can also see that the GCG method converges to the true solution of (1.5) in an acceptable number of iterations, and its IT and CPU timings are clearly much better than those of GMRES(30), especially for larger problems. In comparison with the PGMRES(30) methods, the GCG method requires fewer iterations and lower computational cost for some of these examples. In general, as the problem size increases, the IT of the GCG method remains almost constant or grows slowly. This shows the advantage of the GCG method in the solution of the linear systems (1.5). Hence, the GCG method can be competitive and effective for solving some problems of this kind. We stress that the GAOR and GCG methods may not be competitive for all test problems considered here, while they may turn out to be useful on other problems.

Table 6 Numerical results of GMRES(30), PGMRES(30), and GCG methods for Example 1
Table 7 Numerical results of GMRES(30), PGMRES(30), and GCG methods for Example 2
Table 8 Numerical results of GMRES(30), PGMRES(30), and GCG methods for Example 3
Table 9 Numerical results of GMRES(30), PGMRES(30), and GCG methods for Example 4

4 Conclusions

In this paper, we have revisited the use of the GAOR and GCG methods for solving the real equivalent formulation of the complex linear systems (1.1). We reported numerical experiments comparing the performance of the GAOR and GCG methods with recently proposed iteration methods from the literature. The experiments show that the GAOR method can be better than the PMHSS, GSOR, and SBTS methods if suitable values of the parameters and the preconditioning matrix are chosen. Furthermore, the GAOR iteration method was applied as a preconditioner to accelerate the convergence of GMRES(30). We also employed the generalized CG (GCG) method for solving the linear systems (1.5). As seen, the GCG method works well when the systems of the form Mv = g are easy to solve, and our experiments indicate that it can perform very well in practice. It appears that the methods described here, possibly with some modification, may be applicable to at least some problems of this kind, and exploring this would be an interesting direction for future research.