Abstract
For solving a class of complex symmetric systems of linear equations, we apply the minimum residual technique to the modified Hermitian and skew-Hermitian splitting (MHSS) iteration scheme and propose an iteration method referred to as the minimum residual MHSS (MRMHSS) iteration method. Compared with the classical MHSS method, the MRMHSS method involves two additional iteration parameters, which can be computed automatically and easily. Some properties of the MRMHSS iteration method are then carefully studied. Finally, we use four examples to test the performance of the MRMHSS iteration method by comparing its numerical results with those of several other iteration methods.
1 Introduction
We consider the following system of linear equations
where A is a complex symmetric matrix of the form
and \(\imath = \sqrt {-1} \) denotes the imaginary unit, and \( W,T \in \mathbb {R}^{n \times n}\) are real symmetric matrices with at least one of them positive definite. Hereafter, without loss of generality, we suppose that W is positive definite and that T≠ 0 is positive semi-definite. Complex symmetric linear systems of this kind arise in many scientific problems, such as the fast Fourier transform (FFT)-based solution of certain time-dependent partial differential equations [9], diffuse optical tomography [1], quantum mechanics [20], molecular scattering [17], and structural dynamics [13]. For more applications of this kind of complex symmetric system, we refer to [8] and the references therein.
Let A = H(A) + S(A) be the Hermitian and skew-Hermitian splitting (HSS) of coefficient matrix A, where
are the Hermitian and the skew-Hermitian parts of matrix A, respectively. Here A∗ denotes the conjugate transpose of matrix A. Since A is a non-Hermitian positive definite matrix, one can straightforwardly employ the HSS iteration method [6] to solve the complex symmetric system of linear equations (1). Nevertheless, as is known, the HSS iteration method suffers a potential difficulty: a shifted skew-Hermitian system of linear equations must be solved at each iteration step, which is time-consuming. To avoid this shortcoming, a modified HSS (MHSS) iteration method was later presented [3] for solving this kind of complex symmetric linear system, in which the solution of the shifted skew-Hermitian linear system is replaced by that of a real symmetric positive definite system of linear equations. The iterative scheme of the MHSS iteration method can be written as
where α is a given positive constant and I is the identity matrix, k = 0,1,2,… and x(0) is an initial guess.
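To make the scheme concrete, one MHSS sweep can be sketched in NumPy. This is a minimal dense sketch, assuming the standard MHSS half-steps of [3]; the dense `solve` calls stand in for the Cholesky factorizations used in practice, and the function name is our own.

```python
import numpy as np

def mhss_step(W, T, b, x, alpha):
    """One MHSS sweep for (W + i*T) x = b (standard form from [3])."""
    n = W.shape[0]
    I = np.eye(n)
    # First half-step: (alpha*I + W) x_half = (alpha*I - i*T) x + b
    x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
    # Second half-step: (alpha*I + T) x_new = (alpha*I + i*W) x_half - i*b
    x_new = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
    return x_new
```

Both shifted coefficient matrices are real symmetric positive definite, which is precisely the advantage of MHSS over HSS noted above.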
The MHSS method is very effective and reliable, and has thus been extended and developed into many new variants, such as the preconditioned MHSS (PMHSS) iteration method [4, 5], the generalized PMHSS method [12, 22], the lopsided PMHSS method [16], and so on. In addition, the MHSS iteration method and its variants have been used to solve complex symmetric nonlinear equations [24] and complex singular linear systems [10, 25], and have also been used as the smoother of multigrid methods [15].
Recently, from another point of view, Yang, Cao and Wu [23] proposed a minimum residual HSS (MRHSS) iteration method to improve the efficiency of the HSS iteration method by applying the minimum residual technique to the HSS iteration scheme. Numerical results show that the MRHSS method is much more effective than the HSS iteration method. However, when it is used to solve the complex symmetric system of linear equations (1), a shifted skew-Hermitian system of linear equations still needs to be solved at each step. To avoid this problem, we further apply the minimum residual technique to the MHSS iteration scheme. A new method, named the minimum residual MHSS (MRMHSS) iteration method, is thus proposed in this work. In its implementation, the shifted skew-Hermitian system of linear equations that needs to be solved is replaced by a symmetric positive definite system of linear equations. Thus, we may expect the MRMHSS iteration method to outperform the MRHSS iteration method.
Throughout this paper, we denote by (x,y) = y∗x the Euclidean inner product for any complex vectors \(\mathbf {x},\mathbf {y}\in \mathbb {C}^{n}\), and denote by \(\| \mathbf {x} \| = \sqrt {(\mathbf {x},\mathbf {x})}\) the Euclidean norm for any complex vector \(\mathbf {x}\in \mathbb {C}^{n}\). For an arbitrary matrix \(X\in \mathbb {C}^{n\times n}\), ∥X∥ denotes the spectral norm of X, which is the matrix norm induced by the Euclidean vector norm. The remainder of this work is organized as follows. In Section 2, based on the residual form of the MHSS method, we first construct the MRMHSS iteration method. Some essential properties of the involved parameters and the convergence property of the MRMHSS method are discussed in Section 3. Numerical experiments are presented in Section 4 to test the efficiency and robustness of the MRMHSS iteration method. Finally, in Section 5, we draw a brief conclusion for this work.
2 The MRMHSS iteration method
In this section, we first rewrite the MHSS method into an equivalent form, and thereafter introduce the MRMHSS iteration method by applying the minimum residual technique to the new form of the MHSS method.
Noting that (αI + W)− 1(αI − ıT) = I − (αI + W)− 1A and (αI + T)− 1(αI + ıW) = I + ı(αI + T)− 1A, and denoting r(k) = b − Ax(k) and \(\mathbf {r}^{(k+\frac {1}{2})} = \mathbf {b} - A\mathbf {x}^{(k+\frac {1}{2})}\), the MHSS iteration scheme (3) can be rewritten as
which is mathematically equivalent to the direct-splitting scheme (3). However, the numerical behaviors of (3) and (4) are different, and the latter generally performs more efficiently than the former; see [7] for details.
Now, denote
(4) can be rewritten as
It is obvious that the step sizes along the two updating directions d(k) and \(\mathbf {d}^{(k+\frac {1}{2})}\) are both equal to one. Naturally, we can place two generalized parameters \(\lambda _{k}, \theta _{k} \in \mathbb {C}\) in front of the updating directions d(k) and \(\mathbf {d}^{(k+\frac {1}{2})}\) and expect them to improve the convergence rate, similarly to other accelerating techniques such as minimum residual smoothing [14, 21, 26]. Then, an improved iteration scheme is derived as
For clarity of the narrative, we define two notations
then the residual form of the iteration scheme (6) can be written as
Our task now is to determine proper values of the parameters λk and 𝜃k by minimizing the residual norms \(\| \mathbf {r}^{(k+\frac {1}{2})} \|\) and ∥r(k+ 1)∥ at each iteration step, respectively. By straightforward calculation, they can be formulated as
and
where Re(⋅) and Im(⋅) represent the real and the imaginary parts of a complex number, and \({\mathcal{H}}(\cdot )\) and \(\mathcal {S}(\cdot )\) denote the Hermitian and the skew-Hermitian parts of a matrix, respectively. It is easily observed that these two norms can be viewed as real-valued convex functions of the two variables Re(λk) and Im(λk), and of Re(𝜃k) and Im(𝜃k), respectively. Then, the minimum point of each function can be directly derived as
and
Although we have thus obtained formulas for the two introduced parameters, we can hardly compute them by (11) and (12) in actual implementations because of the difficulty of calculating the matrix-vector products \({\mathcal{H}}(G_{1})\mathbf {r}^{(k)}\), \(\mathcal {S}(G_{1})\mathbf {r}^{(k)}\), \({\mathcal{H}}(G_{2})\mathbf {r}^{(k+\frac {1}{2})}\) and \(\mathcal {S}(G_{2})\mathbf {r}^{(k+\frac {1}{2})}\) for given vectors r(k) and \(\mathbf {r}^{(k+\frac {1}{2})}\). Thus, we need a practical manner of computing these parameters. Fortunately, they can be reformulated into the following simpler forms
and
Using (7) and (5), the computing formulas (13) and (14) can be further rewritten as
$$ \lambda_{k} = \frac{(\mathbf{r}^{(k)}, A\mathbf{d}^{(k)})}{\| A\mathbf{d}^{(k)}\|^{2}},~~~ \theta_{k} = \imath\frac{(\mathbf{r}^{(k+\frac{1}{2})},A\mathbf{d}^{(k+\frac{1}{2})})}{\| A\mathbf{d}^{(k+\frac{1}{2})}\|^{2}}, $$
where \(\mathbf{d}^{(k)} = (\alpha I + W )^{-1} \mathbf{r}^{(k)}\) and \(\mathbf{d}^{(k+\frac{1}{2})} = (\alpha I + T )^{-1} \mathbf{r}^{(k+\frac{1}{2})}\).
On the basis of the previous analyses of the iteration parameters λk and 𝜃k of the residual-updating iteration scheme (6), the MRMHSS iteration method can be formulated as follows.
Method 1 (The MRMHSS iteration method)
Given an initial guess \(\mathbf {x}^{(0)}\in \mathbb {C}^{n}\), compute r(0) = b − Ax(0). For k = 0,1,2,…, until {x(k)} converges
1.
calculate \(\mathbf {x}^{(k+\frac {1}{2})}\) from the iteration formula \(\mathbf {x}^{(k+\frac {1}{2})} = \mathbf {x}^{(k)} + \lambda _{k} \mathbf {d}^{(k)}\), where
$$ \mathbf{d}^{(k)} = (\alpha I + W )^{-1} \mathbf{r}^{(k)},~~\lambda_{k} = \frac{(\mathbf{r}^{(k)}, A\mathbf{d}^{(k)})}{\| A\mathbf{d}^{(k)}\|^{2}}; $$
2.
calculate x(k+ 1) from the iteration formula \(\mathbf {x}^{(k+1)} = \mathbf {x}^{(k+\frac {1}{2})} - \imath \theta _{k} \mathbf {d}^{(k+\frac {1}{2})}\), where
$$ \mathbf{d}^{(k+\frac{1}{2})} = (\alpha I + T )^{-1} \mathbf{r}^{(k + \frac{1}{2} )},~~\theta_{k} =\imath\frac{(\mathbf{r}^{(k+\frac{1}{2})},A\mathbf{d}^{(k+\frac{1}{2})})}{\| A\mathbf{d}^{(k+\frac{1}{2})}\|^{2}}, $$
where α is a given positive constant and I is the identity matrix.
Remark 1
When we choose λk = 𝜃k ≡ 1 at each iteration step of Method 1, the MRMHSS iteration method immediately reduces to the classical MHSS iteration method.
It is evident that only two real symmetric linear subsystems need to be solved at each step of the MRMHSS iteration method, and their coefficient matrices are exactly the same as those in the MHSS method. Moreover, since \(W\in \mathbb {R}^{n\times n}\) is symmetric positive definite, \(T\in \mathbb {R}^{n\times n}\) is symmetric positive semi-definite, and α is a positive real scalar, both matrices αI + W and αI + T are symmetric positive definite. Thus, the two linear subsystems involved in each step of the MRMHSS iteration can be solved effectively by the Cholesky factorization or the (preconditioned) conjugate gradient (CG) method.
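A compact NumPy sketch of Method 1 follows. It is illustrative only: dense `solve` calls stand in for the Cholesky or CG subsystem solvers, and the function name, signature, and stopping rule are our own choices.

```python
import numpy as np

def mrmhss(W, T, b, alpha, maxit=1000, tol=1e-10):
    """Sketch of Method 1 (MRMHSS) for (W + i*T) x = b; dense solves
    stand in for Cholesky/CG on the SPD shifts alpha*I+W, alpha*I+T."""
    n = W.shape[0]
    A = W + 1j * T
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    r = b - A @ x
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        # Step 1: d = (alpha*I + W)^{-1} r, lambda_k = (r, A d)/||A d||^2,
        # with the paper's inner product (x, y) = y^* x.
        d = np.linalg.solve(alpha * I + W, r)
        Ad = A @ d
        lam = np.vdot(Ad, r) / np.vdot(Ad, Ad)
        x = x + lam * d
        r = r - lam * Ad
        # Step 2: d = (alpha*I + T)^{-1} r, theta_k = i*(r, A d)/||A d||^2,
        # and the update is x <- x - i*theta_k*d.
        d = np.linalg.solve(alpha * I + T, r)
        Ad = A @ d
        theta = 1j * np.vdot(Ad, r) / np.vdot(Ad, Ad)
        x = x - 1j * theta * d
        r = r + 1j * theta * Ad
        if np.linalg.norm(r) <= tol * bnorm:
            break
    return x, np.linalg.norm(r) / bnorm
```

On a small normal test problem (W and T commuting), the residual norms decrease monotonically and the iteration converges, in line with Theorem 2 and Corollary 1 below.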
Remark 2
The MRMHSS iteration method is also applicable to the case where matrix W is symmetric positive semi-definite and T is symmetric positive definite. More generally, if there exist real numbers β and δ such that both matrices \(\tilde {W} := \beta W + \delta T\) and \(\tilde {T} := \delta T - \beta W\) are symmetric positive semi-definite with at least one of them being positive definite, we can first multiply the complex symmetric system (1) by the complex number β − ıδ to obtain the equivalent system
and then employ the MRMHSS iteration method to solve (16) approximately.
3 The convergence property of the MRMHSS iteration method
As we know, the iteration parameters λk and 𝜃k involved in the MRMHSS iteration method are chosen to minimize the corresponding residual norms \(\| \mathbf {r}^{(k+\frac {1}{2})} \|\) and ∥r(k+ 1)∥, respectively. In this section, we will show a closer relationship between λk, 𝜃k and ∥r(k+ 1)∥. Furthermore, the convergence property of the MRMHSS iteration method will also be discussed.
To this end, we first present an important lemma.
Lemma 1 (See [23])
Let \( C = (c_{ij}) \in \mathbb {C}^{n \times n } \) be a Hermitian matrix. For any vector \( \mathbf {z} \in \mathbb {C}^{n} \), we have
where ∂ denotes the partial derivative operator.
Now, by making use of the above lemma, we can obtain the following property of the iteration parameters λk and 𝜃k.
Theorem 1
The quadruple (Re(λk),Im(λk),Re(𝜃k),Im(𝜃k)) determined by (11) and (12) is the minimum point of the function ∥r(k+ 1)∥, which implies that the values of λk and 𝜃k defined by (13) and (14) are optimal in the complex field \(\mathbb {C}\).
Proof 1
From (8) and (10), the residual norm ∥r(k+ 1)∥2 can be viewed as a real-valued function of the complex variables λk and 𝜃k or, equivalently, of the real variables Re(λk), Im(λk), Re(𝜃k) and Im(𝜃k). Let \(\tilde {\mathbf {r}}(\xi _{1},\xi _{2}) := \mathbf {r}^{(k)} - (\xi _{1}+\imath \xi _{2})G_{1} \mathbf {r}^{(k)}\), and define a function φ as
Since \(\| G_{2}\tilde {\mathbf {r}} \|^{2} = (G_{2}^{\ast } G_{2}\tilde {\mathbf {r}}, \tilde {\mathbf {r}})\), using (17) and Lemma 1 we have
Denote by
Then, the four first-order partial derivatives of φ(ξ1,ξ2,η1,η2) can be calculated, which are
It is easy to see, from (11) and (12), that the quadruple (Re(λk),Im(λk),Re(𝜃k), Im(𝜃k)) is the unique stationary point of the function φ defined by (19). Furthermore, in order to verify that this quadruple is the minimum point of the function φ, its second-order partial derivatives are required. For convenience of presentation, we first define four notations as follows:
Then, the second-order partial derivatives of φ(ξ1,ξ2,η1,η2) read
Meanwhile, it is easy to see that Ψ1(Re(λk)) = Ψ2(Im(λk)) = 0 and
Hence, the Hessian matrix of the function φ at the unique stationary point (Re(λk), Im(λk),Re(𝜃k),Im(𝜃k)) reads
which is a symmetric positive definite matrix.
Therefore, the quadruple (Re(λk),Im(λk),Re(𝜃k),Im(𝜃k)) determined by (11) and (12) is the minimum point of the residual norm ∥r(k+ 1)∥. □
Now, we study the convergence property of the MRMHSS iteration method. Denote by \({\mathcal{F}}(M)\) the field of values of the complex matrix \(M\in \mathbb {C}^{n\times n}\), i.e.,
where (⋅,⋅) represents the Euclidean inner product of two vectors in \(\mathbb {C}^{n}\).
Theorem 2
The MRMHSS iteration method used for solving the complex symmetric system of linear equations (1) is convergent for any initial guess \(\mathbf {x}^{(0)}\in \mathbb {C}^{n}\) if and only if
Furthermore, when (20) holds, the residuals satisfy
for any nonnegative integer k, where δ1 and δ2 represent the distances from the origin to \({\mathcal{F}}(A(\alpha I + W)^{-1})\) and \({\mathcal{F}}(A(\alpha I + T)^{-1})\), respectively.
Proof 2
Denote by \(\mathcal {R}(\cdot )\) and the superscript ⊥ the range space of a given matrix and the orthogonal complement of a vector space in \(\mathbb {C}^{n}\), respectively. From the construction of the MRMHSS iteration method in Section 2, each step of the MRMHSS method contains two residual projection procedures, i.e., \(\mathbf {r}^{(k+\frac {1}{2})}\) and r(k+ 1) are the projections of r(k) and \(\mathbf {r}^{(k+\frac {1}{2})}\) onto the subspaces \(\mathcal {R}(A(\alpha I+W)^{-1})^{\bot }\) and \(\mathcal {R}(A(\alpha I+T)^{-1})^{\bot }\), respectively. Hence, \(\| \mathbf {r}^{(k+\frac {1}{2})} \| \leq \| \mathbf {r}^{(k)} \| \), with equality if and only if r(k) is already orthogonal to A(αI + W)− 1r(k), i.e.,
This implies that \(\| \mathbf {r}^{(k+\frac {1}{2})} \| < \| \mathbf {r}^{(k)} \| \) if and only if \(0\notin {\mathcal{F}}((\alpha I+W)^{-\ast } A^{\ast } )\). Since, for any matrix \(M\in \mathbb {C}^{n\times n}\), the field of values of M∗ is exactly the complex conjugate of the field of values of M, we have
Similarly, we can obtain that
Therefore, we can finally obtain that ∥r(k+ 1)∥ < ∥r(k)∥ if and only if
Recalling that the field of values is a closed set, the distances from the origin to \({\mathcal{F}}(A(\alpha I+W)^{-1})\) and \({\mathcal{F}}(A(\alpha I+T)^{-1})\) can be written, respectively, as
Substituting (11) and (12), respectively, into (9) and (10), we obtain
$$ \| \mathbf{r}^{(k+\frac{1}{2})} \| \leq \left( 1 - \frac{\delta_{1}^{2}}{\| A(\alpha I + W)^{-1} \|^{2}} \right)^{\frac{1}{2}} \| \mathbf{r}^{(k)} \| $$
and
$$ \| \mathbf{r}^{(k+1)} \| \leq \left( 1 - \frac{\delta_{2}^{2}}{\| A(\alpha I + T)^{-1} \|^{2}} \right)^{\frac{1}{2}} \| \mathbf{r}^{(k+\frac{1}{2})} \|. $$
Combining the above two inequalities, the result (21) is obtained, which completes the proof of the theorem. □
Based on the above theorem, we now present a specific criterion guaranteeing the convergence of Method 1. To this end, we first give a useful lemma.
Lemma 2
[18, Proposition 1.18] Let \(M\in \mathbb {C}^{n\times n}\). Then the field of values of M is a convex set which contains the convex hull of its spectrum. It is equal to the convex hull of its spectrum when M is normal, i.e., \(\mathcal {F}(M) = Co(\sigma (M))\), where Co(⋅) denotes the convex hull of a given set and σ(⋅) indicates the spectrum of a matrix.
Corollary 1
Assume that the matrix A = W + ıT is normal. Then the MRMHSS iteration method used for solving the complex symmetric system of linear equations (1) is convergent for any initial guess \(\mathbf {x}^{(0)}\in \mathbb {C}^{n}\).
Proof 3
Because \(A=W+\imath T\in \mathbb {C}^{n\times n}\) is normal, straightforward computations show that the real symmetric matrices W and T commute, i.e., WT = TW. Thus, we can easily verify that both matrices A(αI + W)− 1 and A(αI + T)− 1 are normal. From Lemma 2, it follows that
and
Hence, (20) reduces to
while this holds automatically since, in this situation, the spectra of both matrices A(αI + W)− 1 and A(αI + T)− 1 are located in the right half of the complex plane. Therefore, the MRMHSS iteration method used for solving the complex symmetric system of linear equations (1) is convergent for any initial guess \(\mathbf {x}^{(0)}\in \mathbb {C}^{n}\) when the system matrix A is normal. □
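This argument is easy to check numerically: if W and T are polynomials in a common symmetric matrix they commute, and both A(αI + W)− 1 and A(αI + T)− 1 then come out normal with spectra in the right half-plane. The following is a small sanity check, not part of the proof; all names and the particular construction are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))
S = S + S.T                      # common symmetric "generator"
W = S @ S + np.eye(6)            # SPD, a polynomial in S
T = S @ S                        # PSD, commutes with W
alpha = 1.0
A = W + 1j * T
I = np.eye(6)

M1 = A @ np.linalg.inv(alpha * I + W)
M2 = A @ np.linalg.inv(alpha * I + T)

def is_normal(M, tol=1e-8):
    """Check M M^* = M^* M up to roundoff."""
    return np.linalg.norm(M @ M.conj().T - M.conj().T @ M) < tol
```

Both `M1` and `M2` are normal and have eigenvalues with positive real parts, so condition (20) holds, as Corollary 1 asserts.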
Corollary 2
Assume that the matrix A = W + ıT is normal. Then the upper bound on the convergence rate presented in (21) can be sharpened as follows:
where κ1 and κ2 are the condition numbers of the matrices A(αI + W)− 1 and A(αI + T)− 1, respectively, and \(\kappa = \min \limits (\kappa _{1},\kappa _{2})\).
Proof 4
Since A = W + ıT is normal, it holds that WT = TW. Hence, there exists an orthogonal matrix \(Q\in \mathbb {R}^{n\times n}\) that simultaneously diagonalizes the matrices W and T,
where
are diagonal matrices with ξj > 0, γj ≥ 0, j = 1,2,…,n. It follows that
Let \(\tilde {\mathbf {y}} = Q^{T}\mathbf {y}=(\tilde {y}_{1},\tilde {y}_{2},\ldots ,\tilde {y}_{n})\); then it follows that
where μj = (ξj + ıγj)(α + ξj)− 1 and \(\beta _{j} = \tilde {y}_{j}^{2}/{\sum }_{j=1}^{n}\tilde {y}_{j}^{2}\), j = 1,2,…,n. By straightforward computations, we have
Thus, we can obtain δ1 ≥ μmin, where \(\mu _{min} := \underset {j}{min }\{|\mu _{j}|\}\), and then it follows that
where \(\mu _{max} := \underset {j}{max}\{|\mu _{j}|\}\) is equal to the 2-norm of the matrix A(αI + W)− 1 and \(\kappa _{1} = \frac {\mu _{max}}{\mu _{min}}\) denotes the condition number of the matrix A(αI + W)− 1.
Similarly, we can obtain an estimate of the second term on the right-hand side of (21); denoting by κ2 the condition number of the matrix A(αI + T)− 1, we have
Furthermore, both \(({\kappa _{1}^{2}}-1)^{1/2}/\kappa _{1}\) and \(({\kappa _{2}^{2}}-1)^{1/2}/\kappa _{2}\) are less than unity; thus, inequality (23) is obtained. The proof is completed. □
4 Numerical experiments
In this section, we use four examples to illustrate the feasibility and effectiveness of the MRMHSS iteration method.
We compare the numerical results, including the number of iteration steps (denoted by IT) and the elapsed CPU time (in seconds, denoted by CPU), of the MRMHSS iteration method with those of the MHSS, the generalized successive overrelaxation (GSOR) [19], the preconditioned MHSS (PMHSS) [4] and the MRHSS methods. In the tests, the MRMHSS, MRHSS, PMHSS and MHSS iteration methods are used to solve the original complex system (1), while the GSOR method is used to solve the equivalent real system of linear equations. For more details about the equivalent real system, one can refer to [2, 11, 19] and the references therein.
The subsystems of linear equations with coefficient matrices αI + W and αI + ıT at each step of the MRHSS iteration method are solved directly by the Cholesky factorization and the LU factorization, respectively. The subsystems of the MHSS, GSOR, PMHSS and MRMHSS iteration methods are solved directly by the Cholesky factorization, since their coefficient matrices are symmetric positive definite. It is worth mentioning that the symmetric approximate minimum degree (SYMAMD) reordering [18] is applied before the Cholesky or LU factorization.
In actual implementations, we choose the zero vector as the initial guess, and all computations are terminated once the current iterate x(k) satisfies the stopping criterion
or the number of iteration steps exceeds kmax = 1000. In addition, all numerical experiments are carried out in MATLAB [version 9.1.0.441655 (R2016b)] in double precision on a personal laptop with a 2.53-GHz central processing unit (Intel® Core™ i3 M380), 4.87 GB of memory and the Windows 10 operating system.
For the parameter values of the MHSS, MRHSS and MRMHSS methods, we use the experimentally found optimal ones, i.e., those leading to the fewest iteration steps. If the optimal parameter values form an interval, we use the value in this interval that leads to the least CPU time. For the GSOR method, the optimal iteration parameter values are chosen as presented in [19]; for the PMHSS method, they are chosen as presented in [4], except for Example 4 and for the cases m = 512 in Examples 1, 2 and 3, where they are determined in the above-mentioned manner used for the MHSS, MRHSS and MRMHSS methods.
Example 1
Consider the complex symmetric linear system of the form (1) given as follows (cf. [3])
where τ is the time step-size and K is the matrix of a standard five-point centered difference formula approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the two-dimensional unit square [0,1] × [0,1]. We set h = 1/(m + 1) and τ = h, where m is the number of interior grid points in each direction. The matrix \( K \in \mathbb {R}^{n \times n } \) then has the tensor-product form K = Vm ⊗ I + I ⊗ Vm, with \( V_{m} = h^{-2} tridiag(-1,2,-1) \in \mathbb {R}^{m \times m} \). Clearly, K is an n × n block tridiagonal matrix with n = m2. In our tests, we take
and the right-hand side vector b with j-th entry bj of the form
In addition, we normalize the system by multiplying both sides by h2. For further details, we refer to [3].
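The tensor-product matrix K of this example can be assembled directly; the following is a dense sketch for brevity (in practice one would use sparse Kronecker products, e.g. `scipy.sparse.kron`, and the function name is our own). The matrices W and T are then formed from K as in the displayed formulas.

```python
import numpy as np

def build_K(m):
    """K = Vm (x) I + I (x) Vm: five-point negative Laplacian with
    Dirichlet BCs on an m x m interior grid, mesh-size h = 1/(m+1)."""
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    I = np.eye(m)
    return np.kron(Vm, I) + np.kron(I, Vm)
```

The result is an n × n (n = m2) block tridiagonal, symmetric positive definite matrix, as stated above.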
Example 2
Consider the complex symmetric linear system of the form (1) given as follows (cf. [3])
where M is the inertia matrix, which is typically real symmetric and possibly singular; we take M = I. K is the (real symmetric) stiffness matrix. CV is the viscous damping matrix (real, diagonal, “highly singular” and possibly zero), and we choose CV = 10I. CH is the hysteretic damping matrix (also real symmetric), and we take CH = 𝜖K, where 𝜖 is a damping coefficient and ω is the circular frequency. Besides, we assume that K is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the unit square [0,1] × [0,1] with mesh-size h = 1/(m + 1). The matrix \( K \in \mathbb {R}^{n \times n }\) then has the tensor-product form K = Vm ⊗ I + I ⊗ Vm, where \( V_{m} = h^{-2}tridiag(-1,2,-1) \in \mathbb {R}^{m \times m } \) and I is the identity matrix in \( \mathbb {R}^{m \times m } \). Thus, K is an n × n block tridiagonal matrix with n = m2. Moreover, we take ω = π and 𝜖 = 0.02, and the right-hand side vector b as b = (1 + ı)A1, where 1 is the vector of all ones. Furthermore, we normalize (26) by multiplying both sides by h2. For more details, we refer to [3].
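A sketch of the matrices of this example follows, assuming the coefficient matrix of (26) takes the standard frequency-analysis form [(−ω2M + K) + ı(ωCV + CH)]x = b used in [3]; this exact form, and the helper name below, are assumptions of ours.

```python
import numpy as np

def helmholtz_dynamics_matrices(m, omega=np.pi, eps=0.02):
    """Example 2 sketch, ASSUMING (26) reads
    [(-omega^2*M + K) + i*(omega*C_V + C_H)] x = b  (cf. [3]),
    with M = I, C_V = 10*I, C_H = eps*K; both sides scaled by h^2."""
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    K = np.kron(Vm, np.eye(m)) + np.kron(np.eye(m), Vm)
    n = m * m
    I = np.eye(n)
    W = h**2 * (K - omega**2 * I)             # real part, SPD after scaling
    T = h**2 * (omega * 10.0 * I + eps * K)   # imaginary part, SPD
    b = (1 + 1j) * ((W + 1j * T) @ np.ones(n))  # b = (1+i) A 1 (scaled system)
    return W, T, b
```

Under these choices both W and T are symmetric positive definite, so the assumptions of the MRMHSS method are satisfied.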
Example 3
The system of linear equations (1) is of the form (W + ıT)x = b, where (cf. [3])
with \(V=tridiag(-1,2,-1)\in \mathbb {R}^{m\times m},~V_{c} = V - \mathbf {e}_{1}\mathbf {e}_{m}^{T} - \mathbf {e}_{m}\mathbf {e}_{1}^{T}\in \mathbb {R}^{m\times m},\) where e1 and em are the first and the last unit vectors in \(\mathbb {R}^{m}\), respectively. We take the right-hand side vector b to be b = (1 + ı)A1, with 1 being the vector of all ones.
Here T and W correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and with periodic boundary conditions, respectively, on a uniform mesh in the unit square [0,1] × [0,1] with mesh-size h = 1/(m + 1).
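The building blocks V and Vc of this example can be sketched as follows (W and T are then assembled from them as in the displayed formulas; the helper name is illustrative):

```python
import numpy as np

def build_V_Vc(m):
    """V = tridiag(-1,2,-1); Vc = V - e1*em^T - em*e1^T (Example 3)."""
    V = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    e1 = np.zeros(m); e1[0] = 1.0
    em = np.zeros(m); em[-1] = 1.0
    Vc = V - np.outer(e1, em) - np.outer(em, e1)
    return V, Vc
```

Note that Vc is the periodic-boundary (circulant) counterpart of V: it is symmetric positive semi-definite with zero row sums, matching the periodic Laplacian interpretation above.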
Example 4
We consider the complex Helmholtz equation (cf. [9])
where σ1 and σ2 are real coefficient functions and u satisfies Dirichlet boundary conditions in D = [0,1] × [0,1]. We discretize the problem with finite differences on an m × m grid with mesh-size h = 1/(m + 1). This leads to a system of linear equations
where K = I ⊗ Vm + Vm ⊗ I is the discretization of −Δ by means of centered differences, wherein \(V_{m} = h^{-2}tridiag(-1,2,-1)\in \mathbb {R}^{m\times m}\). The right-hand side vector b is taken to be b = (1 + ı)A1, with 1 being the vector of all ones. Furthermore, before solving the system we normalize the coefficient matrix and the right-hand side vector by multiplying both by h2. For the numerical tests we set σ1 = σ2 = 100.
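A sketch of this example follows, assuming the discretization of the complex Helmholtz equation leads to (K + σ1I + ıσ2I)x = b; this exact discrete form and the helper name are assumptions of ours, since the displayed system is only described in words above.

```python
import numpy as np

def complex_helmholtz(m, sigma1=100.0, sigma2=100.0):
    """Example 4 sketch, ASSUMING the discrete system reads
    (K + sigma1*I + i*sigma2*I) x = b; both sides scaled by h^2."""
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    K = np.kron(Vm, np.eye(m)) + np.kron(np.eye(m), Vm)
    n = m * m
    W = h**2 * (K + sigma1 * np.eye(n))  # real part, SPD
    T = h**2 * sigma2 * np.eye(n)        # imaginary part, SPD
    b = (1 + 1j) * ((W + 1j * T) @ np.ones(n))
    return W, T, b
```

With σ1 = σ2 = 100 both W and T are symmetric positive definite, so all the tested splitting methods apply.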
The experimentally found optimal iteration parameters αexp of the MHSS, GSOR, PMHSS, MRHSS and MRMHSS iteration methods for all the numerical examples with different mesh numbers are listed in Table 1. We observe that, for all the iteration methods except the PMHSS method, the optimal parameters αexp decrease as the mesh number increases. In addition, for almost all the numerical examples with different mesh numbers, the optimal parameter values of the MRHSS and MRMHSS iteration methods are much smaller than those of the MHSS, PMHSS and GSOR iteration methods.
In Table 2, we list the numerical results of the tested iteration methods for all four examples. From these results, we see that all the tested iteration methods converge within kmax iteration steps for all four examples with different mesh numbers m. For Examples 1, 2 and 4, the GSOR and MRHSS iteration methods perform much better than the MHSS iteration method in terms of both iteration counts and computing times. However, among the PMHSS, GSOR and MRHSS iteration methods, the MRHSS becomes the worst one for Example 3 when the mesh number is large. Meanwhile, the PMHSS costs more computing time than the MHSS for Example 4, since in this example the matrix W is more complicated to factorize than the matrix T. Finally, we note that the MRMHSS iteration method discussed in this work is always the most efficient one for all four tested examples, as it requires the least computing time and the fewest iteration steps to achieve convergence. Moreover, for Examples 1, 2 and 4, the iteration counts of the MRMHSS, MRHSS and PMHSS methods are nearly h-independent.
In Fig. 1, we plot the computing times of three iteration methods, i.e., MHSS, MRHSS and MRMHSS, against the value of the iteration parameter α for m = 128. The curves clearly illustrate that the MRMHSS iteration method always takes much less computing time than the MHSS and MRHSS iteration methods in all tested cases. In addition, compared with the MHSS method, the MRMHSS and MRHSS methods are not very sensitive to the value of the parameter α. The MRMHSS iteration method always attains its minimum computing time when α is very small, similarly to the MRHSS iteration method.
Therefore, we conclude that the MRMHSS iteration method discussed in this work is very efficient and robust when applied to solve the complex symmetric system of linear equations (1).
5 Conclusions
For the large sparse complex symmetric system of linear equations (1), we proposed an efficient iteration method, named the minimum residual MHSS (MRMHSS) iteration method, by applying the minimum residual technique to the MHSS iteration scheme. Although the MRMHSS method involves two more iteration parameters than the classical MHSS iteration scheme, they can be computed easily and automatically. The convergence property of the MRMHSS iteration method was carefully studied, and the numerical results illustrate that the MRMHSS method is very robust and powerful.
References
Arridge, S.R.: Optical tomography in medical imaging. Inverse Probl. 15(2), R41–R93 (1999)
Axelsson, O., Neytcheva, M., Ahmad, B.: A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algor. 66(4), 811–841 (2014)
Bai, Z.-Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87(3-4), 93–111 (2010)
Bai, Z.-Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 56(2), 297–317 (2011)
Bai, Z.-Z., Benzi, M., Chen, F., Wang, Z.-Q.: Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 33(1), 343–369 (2013)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24(3), 603–626 (2003)
Bai, Z.-Z., Rozložník, M.: On the numerical behavior of matrix splitting iteration methods for solving linear systems. SIAM J. Numer. Anal. 53(4), 1716–1737 (2015)
Benzi, M., Bertaccini, D.: Block preconditioning of real-valued iterative algorithms for complex linear systems. IMA J. Numer. Anal. 28(3), 598–618 (2008)
Bertaccini, D.: Efficient preconditioning for sequences of parametric complex symmetric linear systems. Electron. Trans. Numer. Anal. 18, 49–64 (2004)
Chen, F., Liu, Q.-Q.: On semi-convergence of modified HSS iteration methods. Numer. Algor. 64(3), 507–518 (2013)
Day, D., Heroux, M.A.: Solving complex-valued linear systems via equivalent real formulations. SIAM J. Sci. Comput. 23(2), 480–498 (2001)
Dehghan, M., Dehghani-Madiseh, M., Hajarian, M.: A generalized preconditioned MHSS method for a class of complex symmetric linear systems. Math. Model. Anal. 18(4), 561–576 (2013)
Feriani, A., Perotti, F., Simoncini, V.: Iterative system solvers for the frequency analysis of linear mechanical systems. Comput. Methods Appl. Mech. Engrg. 190 (13–14), 1719–1739 (2000)
Gutknecht, M.H., Rozložník, M.: By how much can residual minimization accelerate the convergence of orthogonal residual methods? Numer. Algor. 27(2), 189–213 (2001)
Li, S.-S., Li, W.-Q.: The analysis of PMHSS-multigrid methods for elliptic problems with smooth complex coefficients. Appl. Math. Comput. 265, 196–206 (2015)
Li, X., Yang, A.-L., Wu, Y.-J.: Lopsided PMHSS iteration method for a class of complex symmetric linear systems. Numer. Algor. 66(3), 555–568 (2014)
Poirier, B.: Efficient preconditioning scheme for block partitioned matrices with structured sparsity. Numer. Linear Algebra Appl. 7(7–8), 715–726 (2000)
Saad, Y.: Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia (2003)
Salkuyeh, D.K., Hezari, D., Edalatpour, V.: Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 92(4), 802–815 (2015)
van Dijk, W., Toyama, F.M.: Accurate numerical solutions of the time-dependent Schrödinger equation. Phys. Rev. E 75(3), 1–10 (2007)
Vecharynski, E., Knyazev, A.: Preconditioned steepest descent-like methods for symmetric indefinite systems. Linear Algebra Appl. 511, 274–295 (2016)
Xu, W.-W.: A generalization of preconditioned MHSS iteration method for complex symmetric indefinite linear systems. Appl. Math. Comput. 219(21), 10510–10517 (2013)
Yang, A.-L., Cao, Y., Wu, Y.-J.: Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems. BIT Numer. Math. 59, 299–319 (2019)
Yang, A.-L., Wu, Y.-J.: Newton-MHSS methods for solving systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algebra Control Optim. 2, 839–853 (2012)
Yang, A.-L., Wu, Y.-J., Xu, Z.-J.: The semi-convergence properties of MHSS method for a class of complex nonsymmetric singular linear systems. Numer. Algor. 66(4), 705–719 (2014)
Zhou, L., Walker, H.F.: Residual smoothing techniques for iterative methods. SIAM J. Sci. Comput. 15(2), 297–312 (1994)
Acknowledgments
The authors would like to thank the referees for comments that lead to improvements of the presentation.
Funding
This work is supported by the National Natural Science Foundation of China (Grant Nos. 11401281, 11471150) and the China Scholarship Council (CSC No. 201806180081).
Zhang, WH., Yang, AL. & Wu, YJ. Minimum residual modified HSS iteration method for a class of complex symmetric linear systems. Numer Algor 86, 1543–1559 (2021). https://doi.org/10.1007/s11075-020-00944-3
Keywords
- Complex symmetric matrix
- Modified Hermitian and skew-Hermitian splitting iteration method
- Minimum residual technique
- Convergence property
- Iteration parameter