Abstract
For large sparse complex symmetric linear systems, we construct a preconditioned triangular splitting (PTS) iteration method based on the preconditioning technique and a triangular splitting of the coefficient matrix. Compared with the two-parameter two-step scale-splitting method established by Salkuyeh and Siahkolaei (Calcolo 55:8, 2018), the PTS iteration method does not involve complex arithmetic. The convergence theory of the PTS iteration method is established and the spectral properties of the PTS-preconditioned matrix are analyzed. In addition, by applying the minimum residual technique to the PTS iteration method, we develop the minimum residual PTS (MRPTS) iteration method to further improve the efficiency of the PTS method, and establish the corresponding convergence theory. An inexact version of the MRPTS iteration method and its convergence properties are also presented. Numerical experiments are reported to verify the effectiveness of the proposed methods.
1 Introduction
In this paper we consider iterative methods for solving complex symmetric linear systems of the form
$$\begin{aligned} Ax\equiv (W+iT)x=b, \end{aligned}$$
(1)
where \(W,T\in {\mathbb {R}}^{n\times {n}}\) are symmetric positive semi-definite matrices with at least one of them, for example, W, being positive definite, \(b\in {\mathbb {R}}^{n}\) and \(i=\sqrt{-1}\).
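For concreteness, the following NumPy sketch builds a small system of this type; the matrices are hypothetical illustrations (not the paper's test problems): W is made symmetric positive definite by construction, and T is symmetric positive semi-definite of rank \(n-1\).

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# W: symmetric positive definite (Gram matrix plus a diagonal shift)
B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)

# T: symmetric positive semi-definite with a zero eigenvalue (rank n - 1)
C = rng.standard_normal((n - 1, n))
T = C.T @ C

A = W + 1j * T                      # coefficient matrix of the complex system
b = rng.standard_normal(n)          # real right-hand side, as assumed above
x = np.linalg.solve(A, b)           # A is nonsingular since W is positive definite

assert np.allclose(A, A.T)                      # complex symmetric, not Hermitian
assert np.all(np.linalg.eigvalsh(W) > 0)        # W is positive definite
assert np.all(np.linalg.eigvalsh(T) > -1e-10)   # T is positive semi-definite
```

Note that \(A\) equals its plain transpose but not its conjugate transpose, which is exactly the complex symmetric (non-Hermitian) structure considered here.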
Linear systems of the form (1) arise frequently from many practical problems in scientific computing and engineering applications, such as FFT-based solution of certain time-dependent PDEs, diffuse optical tomography, molecular scattering, Helmholtz equations [9, 10, 22, 23, 35], and many more, see e.g. [18].
Due to the universal existence and significance of complex symmetric linear systems, there has been a surge of interest in (1), and numerous solution techniques have been proposed for this type of system in recent years. Based on the Hermitian and skew-Hermitian splitting (HSS) of the matrix A in (1): \(A=H+S\) with \(H=\frac{1}{2}(A+A^{*})=W\) and \(S=\frac{1}{2}(A-A^{*})=iT\), Bai et al. [14] initially established the HSS iteration method. Here, \(A^{*}\) denotes the conjugate transpose of the matrix A. However, a shifted skew-Hermitian linear system needs to be solved at each iteration step of the HSS iteration method. To overcome this difficulty, Bai et al. [9] skillfully designed a modified HSS (MHSS) method. Then, Bai et al. [10] proposed a preconditioned version of the MHSS iteration method, called PMHSS, for solving (1), and proved that it converges unconditionally. Due to its excellent properties, the PMHSS iteration method has attracted considerable attention, and several modified and generalized versions of it have been derived recently. Dehghan et al. [21] presented the generalized PMHSS (GPMHSS) iteration method by introducing an additional parameter. After that, the lopsided PMHSS (LPMHSS) iteration method was proposed by Li et al. [26], which outperforms the PMHSS one when the real part of the coefficient matrix is dominant. In addition to the aforementioned methods, there exist other meaningful variants of the MHSS method; see [37, 39] for more details.
Moreover, by multiplying a complex number through both sides of the complex system (1), Hezari et al. [23] designed a scale-splitting (SCSP) iteration method. Using the idea of [23], Zheng et al. [42] proposed a double-step scale splitting (DSS) iteration method. Subsequently, Salkuyeh and Siahkolaei [32] constructed a two-parameter version of the DSS iteration method called the two-parameter two-step SCSP (TTSCSP) method, which is more efficient than the DSS one and some known ones. The TTSCSP iteration method can be described as follows:
The two-parameter two-step SCSP (TTSCSP) iteration method (see [32]) Let \(\alpha \) and \(\beta \) be two positive constants. Given an initial guess \(x^{(0)}\). For \(k=0,1,2,\ldots \), until \(x^{(k)}\) converges, compute
The iteration matrix of the TTSCSP iteration method is
Besides, the CRI (combination of real part and imaginary part) iteration method was derived by Wang et al. [34], where the coefficient matrices of the two sub-linear systems are the same as those in the DSS one. Recently, Xiao and Wang [36] constructed a parameterized variant of the fixed-point iteration adding asymmetric error (PFPAE) for solving (1). To further accelerate the convergence rates of the PFPAE and the DSS iteration methods, Huang et al. [24] used the idea of the PFPAE method together with the scaling technique to design the two-step parameterized (TSP) iteration method. Numerical experiments showed that the TSP method outperforms several existing methods.
As stated in [8], in order to simplify data handling and construct an efficient preconditioning matrix, it is beneficial to transform (1) with \(x=u+iv\), \(b=p+iq\) and \(u,v,p,q\in {\mathbb {R}}^{n}\) into the two-by-two block real equivalent formulation
with u and v being the unknown vectors. This formulation avoids complex arithmetic and can be solved efficiently in real arithmetic by many iteration methods. The linear system (4) can be formally regarded as a special case of the generalized saddle point problem [17] and frequently arises from many important practical problems. As an illustration, the distributed control problem can be transformed into the linear system:
by using the discretize-then optimize method [5], where M and K are the mass matrix and the stiffness matrix, respectively.
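Equation (4) is obtained by separating (1) into real and imaginary parts, which gives \(Wu-Tv=p\) and \(Tu+Wv=q\); we take this standard block two-by-two form as (4) here. The following sketch (hypothetical data) verifies that solving the real block system reproduces the complex solution:

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)        # symmetric positive definite
C = rng.standard_normal((n, n))
T = C @ C.T                         # symmetric positive semi-definite
p = rng.standard_normal(n)
q = rng.standard_normal(n)

# Complex solve of (W + iT)(u + iv) = p + iq
x = np.linalg.solve(W + 1j * T, p + 1j * q)

# Real equivalent block form: W u - T v = p,  T u + W v = q
A_block = np.block([[W, -T], [T, W]])
uv = np.linalg.solve(A_block, np.concatenate([p, q]))
u, v = uv[:n], uv[n:]

assert np.allclose(u, x.real) and np.allclose(v, x.imag)
```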
For saddle point problems, Bai et al. [15] established the generalized successive overrelaxation (GSOR) iteration method. After that, Bai and Wang [16] proposed the parameterized inexact Uzawa (PIU) iteration method, which is a generalized version of the GSOR one. Based on [15], Salkuyeh et al. [31] recently defined the GSOR iteration method for solving the linear system (4). To further improve the efficiency of the GSOR iteration method, Hezari et al. [22] designed the preconditioned GSOR (PGSOR) iteration method for (4), whose optimal convergence factor is smaller than that of the GSOR one. Alternatively, Li et al. [27] derived a symmetric block triangular splitting (SBTS) iteration method. Its preconditioned variant was then proposed by Zhang et al. [41], which performs better than the SBTS one under some restrictions. Recently, Axelsson and Salkuyeh [3] designed a novel method called the transformed matrix iteration (TMIT) method by transforming (4) into
and using the triangular splitting of the corresponding coefficient matrix. And the corresponding transformed matrix preconditioner (TMP) was also given. The authors proved that the proposed method and the preconditioned matrix have tight eigenvalue bounds. Numerical results included in [3] showed that the proposed method and preconditioner outperform some existing ones. For more iteration methods for linear system (4), we refer to [2, 8, 11, 25, 28, 33, 40] and the references therein.
It can be seen from (2) that a potential difficulty with the TTSCSP iteration method is the need to use complex arithmetic, which may increase computing time. To overcome this disadvantage, in the current work we develop a new approach, called the preconditioned triangular splitting (PTS) method, for solving (4). Although its convergence factor is the same as that of the TTSCSP method, the PTS iteration method involves only real arithmetic. It is therefore expected that the PTS iteration method exhibits better numerical performance than the TTSCSP one. Besides serving as a solver, the PTS iteration can also be used as a preconditioner to accelerate Krylov subspace methods such as the generalized minimal residual (GMRES) method. To further improve the convergence behavior of the PTS iteration method, we establish the minimum residual PTS (MRPTS) iteration method by introducing a variable parameter into the PTS iteration. This idea comes essentially from [38], where it was first applied to develop the MRHSS iteration method. In addition, by adopting an approximate solution strategy, we design an inexact MRPTS (IMRPTS) iteration method. Theoretically, we discuss the convergence of the proposed methods and the spectral properties of the PTS-preconditioned matrix.
The organization of this paper is as follows. The next section introduces the PTS iteration method and analyzes its convergence. In Sect. 3, we propose the PTS preconditioner and study the spectral properties of the PTS-preconditioned matrix. We derive the MRPTS iteration method and investigate its convergence properties in Sect. 4. The inexact MRPTS iteration method and its convergence analysis are given in Sect. 5. Section 6 is devoted to numerical results demonstrating that the performance of the proposed methods is better than that of the aforementioned ones. Finally, in Sect. 7 we put forth some concluding remarks to end the paper.
We end this section with an introduction of some notations that will be used in the subsequent analysis. For a square matrix H, \(\rho (H)\), \(\kappa (H)\) and \(\det (H)\) denote the spectral radius, the condition number and the determinant of H, respectively. Moreover, the Euclidean norm of a vector or a matrix is represented by \(\Vert \cdot \Vert _{2}\).
2 The preconditioned triangular splitting (PTS) iteration method
In this section, we establish the preconditioned triangular splitting (PTS) iteration method and its convergence properties.
Firstly, we premultiply (4) with the block matrix
with \(\alpha ,\beta >0\) and obtain
Here, the matrix P is nonsingular owing to the fact that \(\det (P)=\alpha +\beta >0\).
Then we split the coefficient matrix of (5) into
which is the block lower and upper triangular splitting of the matrix \(\bar{\mathcal {A}}\).
Based on the splitting (6), we construct the preconditioned triangular splitting (PTS) iterative scheme:
which results in the following PTS iteration method.
The preconditioned triangular splitting (PTS) iteration method Let \(\alpha ,\beta >{0}\). Given initial vectors \(u^{(0)}\) and \(v^{(0)}\). For \(k=0,1,2,\ldots \), until \((u^{(k)};v^{(k)})\) converges, compute
It is noteworthy that the motivation for constructing the PTS iteration method stems from [3], where Axelsson and Salkuyeh transformed the given matrix to a proper form and designed an efficient method. The idea of using such a simple block two-by-two matrix P in (5) to transform the block two-by-two matrix \(\mathcal {A}\) was first given by Axelsson and Kucherov [1]. This idea was further generalized by Bai [6, 8], who also first employed block Gauss–Seidel and block symmetric Gauss–Seidel splitting techniques on the corresponding transformed matrix and obtained rotated block triangular preconditioners based on PMHSS. Moreover, the idea of introducing different parameters into the two block equations separately dates back to [13, 15, 16].
The iteration scheme (8) indicates that the main costs at each step of the PTS iteration method are solving two linear subsystems with respect to the symmetric positive definite matrices \(\alpha W+T\) and \(\beta T+W\). They can be solved exactly by the Cholesky factorization or inexactly by the conjugate gradient (CG) or the preconditioned CG (PCG) methods.
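The two subsystem solves can be sketched as follows; the matrices and the parameter values \(\alpha ,\beta \) are hypothetical illustrations (in practice W and T are large and sparse). The sketch contrasts an exact solve by Cholesky factorization, which can be computed once and reused at every iteration, with an inexact solve by CG:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
n, alpha, beta = 50, 0.8, 1.2       # illustrative sizes and parameter values
B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)         # symmetric positive definite
C = rng.standard_normal((n, n))
T = C @ C.T                         # symmetric positive semi-definite

M1 = alpha * W + T                  # first subsystem matrix
M2 = beta * T + W                   # second subsystem matrix, handled the same way
np.linalg.cholesky(M2)              # succeeds: M2 is also symmetric positive definite
r = rng.standard_normal(n)

# Exact solve: one Cholesky factorization, reusable every PTS step
f1 = cho_factor(M1)
z_exact = cho_solve(f1, r)

# Inexact solve: conjugate gradients, applicable since M1 is SPD
z_cg, info = cg(M1, r)
assert info == 0                    # CG converged
```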
It follows from (7) that the iteration matrix of the PTS iteration method is
Remark 2.1
Equation (9) implies that \(\rho (\mathcal {G}(\alpha ,\beta ))=\rho (\mathcal {T}(\alpha ,\beta ))\), so the PTS and the TTSCSP iteration methods have the same optimal convergence factors for the same linear systems. However, the PTS iteration method in (8) does not involve complex arithmetic, in contrast to the TTSCSP one in (2). It may therefore cost less time than the TTSCSP method, as illustrated by the numerical experiments.
In what follows, according to Theorem 3 in [32], we obtain the convergence conditions of the PTS iteration method.
Theorem 2.1
[32] Let the matrices W and T be symmetric positive definite and symmetric positive semidefinite, respectively. Then, the PTS iteration method is convergent if \(\alpha \) and \(\beta \) satisfy
where \(\mu _{s}\) and \(\mu _{n}\) are the minimum and maximum nonzero eigenvalues of the matrix \(W^{-\frac{1}{2}}TW^{-\frac{1}{2}}\), respectively.
3 The PTS preconditioner
The present section consists of two main parts. In the first part, the PTS preconditioner and its implementing procedure are presented. The second one is concerned with the spectral properties of the PTS-preconditioned matrix.
It is known that the Krylov subspace methods, such as the GMRES method, are a class of efficient iteration methods for solving large sparse linear system (4) [30]. However, in many practical problems, the coefficient matrices of linear system (4) are ill-conditioned. This makes the Krylov subspace methods often suffer from slow convergence or even stagnation, so preconditioning is in most cases indispensable for the iterative solution of (4).
3.1 The PTS preconditioner and its implementing procedure
It can be observed in (6) that the PTS iteration method is induced by the matrix splitting \(\bar{\mathcal {A}}=\mathcal {M}(\alpha ,\beta )-\mathcal {N}(\alpha ,\beta )\). The splitting matrix \(\mathcal {M}(\alpha ,\beta )\) can be employed to precondition the coefficient matrix of (5) and will be referred to as the PTS preconditioner. The form of the PTS preconditioner is
Note that the PTS preconditioner is used to precondition the linear system (5), not (4).
In order to apply the PTS preconditioner of the form (10) within a Krylov-type method, it is necessary to solve the following linear system at each step:
with \(z_{1},z_{2},r_{1},r_{2}\in {\mathbb {R}}^{n}\). In view of (11), we have the following algorithmic implementation of the PTS iterations:
Algorithm 3.1
For a given vector \(r=(r_{1};r_{2})\), the vector \(z=(z_{1};z_{2})\) of the linear system (11) can be solved according to the following procedure:
-
(1)
solve \(z_{1}\) from the linear subsystem \((\alpha W+T)z_{1}=r_{1}\);
-
(2)
compute \(t_{1}=r_{2}-(T-\beta W)z_{1}\);
-
(3)
solve \(z_{2}\) from the linear subsystem \((\beta T+W)z_{2}=t_{1}\).
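Algorithm 3.1 translates directly into code. The sketch below (hypothetical data) also checks that the three steps amount to a single solve with the block lower triangular matrix \(\mathcal {M}(\alpha ,\beta )\) that can be read off from steps (1)–(3):

```python
import numpy as np

def pts_apply(W, T, alpha, beta, r1, r2):
    """Apply the PTS preconditioner, i.e. solve M z = r by Algorithm 3.1."""
    z1 = np.linalg.solve(alpha * W + T, r1)        # step (1)
    t1 = r2 - (T - beta * W) @ z1                  # step (2)
    z2 = np.linalg.solve(beta * T + W, t1)         # step (3)
    return z1, z2

rng = np.random.default_rng(3)
n, alpha, beta = 5, 0.7, 1.3
B = rng.standard_normal((n, n)); W = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n)); T = C @ C.T
r1, r2 = rng.standard_normal(n), rng.standard_normal(n)

z1, z2 = pts_apply(W, T, alpha, beta, r1, r2)

# The three steps are equivalent to one solve with the block lower
# triangular matrix M = [[alpha*W + T, 0], [T - beta*W, beta*T + W]]
M = np.block([[alpha * W + T, np.zeros((n, n))],
              [T - beta * W, beta * T + W]])
z = np.linalg.solve(M, np.concatenate([r1, r2]))
assert np.allclose(np.concatenate([z1, z2]), z)
```

In practice the two subsystem matrices would be factorized (or approximated) once and reused for every application of the preconditioner.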
Remark 3.1
The form of PTS preconditioner is similar to that of the rotated block lower triangular (RBLT) preconditioner [6], which is structured as
Note that two linear subsystems with different coefficient matrices need to be solved at each step of the PTS iteration, whereas in the RBLT iterations the coefficient matrices of the two linear subsystems are the same. Nevertheless, the PTS preconditioner contains two parameters, and we expect that choosing proper parameters can lead to fewer iteration steps than with the RBLT one.
3.2 Spectral analysis of the PTS-preconditioned matrix
The convergence behavior of Krylov subspace methods is closely related to the eigenvalues and eigenvectors of the coefficient matrix [7] as well as the sizes of the Jordan blocks [4]. In view of this fact, we next study the spectral distribution and the independence of the eigenvectors of the PTS-preconditioned matrix \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\).
Theorem 3.1
Let W be symmetric positive definite and T symmetric positive semi-definite. Denote by \(\mu _{i}\) \((1\le i\le {n})\) the eigenvalues of the matrix \(S=W^{-\frac{1}{2}}TW^{-\frac{1}{2}}\), and by \(\mu _{\max }\) and \(\mu _{\min }\) the maximum and minimum among them, respectively. Then the eigenvalues \(\lambda \) of the PTS-preconditioned matrix are all real, include the eigenvalue 1 with multiplicity at least n, and the remaining ones are of the form:
Their upper and lower bounds are as follows:

(a) If \(\mu _{\max }\le 1\), then
$$\begin{aligned}&\min \left\{ \frac{(\alpha +\beta )(\mu _{\max }^{2}+1)}{\beta \mu _{\max }^{2}+ (\alpha \beta +1)\mu _{\max }+\alpha },\frac{2(\alpha +\beta )\mu _{\min }^{2}}{\beta \mu _{\min }^{2}+(\alpha \beta +1)\mu _{\min }+\alpha }\right\} \nonumber \\&\quad \le \lambda \le \max \left\{ \frac{(\alpha +\beta )(\mu _{\min }^{2}+1)}{\beta \mu _{\min }^{2}+(\alpha \beta +1)\mu _{\min }+\alpha },\frac{2(\alpha +\beta )}{\beta \mu _{\min }^{2}+(\alpha \beta +1)\mu _{\min }+\alpha }\right\} ; \end{aligned}$$
(13)

(b) If \(\mu _{\min }\ge 1\), then
$$\begin{aligned}&\min \left\{ \frac{2(\alpha +\beta )}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max } +\alpha },\frac{(\alpha +\beta )(\mu _{\min }^{2}+1)}{\beta \mu _{\min }^{2}+(\alpha \beta +1) \mu _{\min }+\alpha }\right\} \nonumber \\&\quad \le \lambda \le \max \left\{ \frac{(\alpha +\beta )(\mu _{\max }^{2}+1)}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha },\frac{2(\alpha +\beta )\mu _{\max }^{2}}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha }\right\} ; \end{aligned}$$
(14)

(c) If \(\mu _{\min }=\mu _{1}\le \mu _{2}\le \cdots \le \mu _{k}\le 1\le \mu _{k+1}\le \cdots \le \mu _{n}=\mu _{\max }\), then
$$\begin{aligned}&(\alpha +\beta )\min \left\{ \frac{\mu _{k}^{2}+1}{(\beta \mu _{k}+1)(\alpha +\mu _{k})}, \frac{2\mu _{\min }^{2}}{(\beta \mu _{\min }+1)(\alpha +\mu _{\min })},\right. \\&\qquad \left. \frac{2}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })},\frac{\mu _{k+1}^{2}+1}{(\beta \mu _{k+1}+1)(\alpha + \mu _{k+1})}\right\} \\&\quad \le \lambda \le (\alpha +\beta )\max \left\{ \frac{\mu _{\min }^{2}+1}{( \beta \mu _{\min }+1)(\alpha +\mu _{\min })},\frac{2}{(\beta \mu _{\min }+1)(\alpha + \mu _{\min })},\right. \\&\qquad \left. \frac{\mu _{\max }^{2}+1}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })}, \frac{2\mu _{\max }^{2}}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })}\right\} . \end{aligned}$$
Furthermore, the quasi-optimal parameters of the PTS-preconditioned matrix are
which minimize the upper bound of \(|\lambda -1|\). Then
if the quasi-optimal parameters are adopted.
Proof
It follows from (9) that
with \(S=W^{-\frac{1}{2}}{} { TW}^{-\frac{1}{2}}\), which is similar to the matrix
The above relation confirms that the eigenvalues of the PTS-preconditioned matrix are given by 1 with algebraic multiplicity at least n, and the remaining eigenvalues are those of the matrix \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\). Let \(\mu _{i}\) \((1\le i\le n)\) be the eigenvalues of the matrix S, then the remaining eigenvalues of \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\) are of the form (12). From Lemma 2 of [31], we see that \(\mu _{i}\in {\mathbb {R}}\) and hence \(\lambda \in {\mathbb {R}}\). Straightforward computations reveal that
Now we discuss the eigenvalue distributions of \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\) in the following cases:
- If \(\mu _{\max }\le 1\) and \(\alpha \le \beta \), then \(\mu _{i}^{2}-1\le {0}\) for all \(i=1,2,\ldots ,n\) and hence \(g(\mu _{i})\le {0}\), which leads to
$$\begin{aligned} \frac{(\alpha +\beta )(\mu _{\max }^{2}+1)}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+ \alpha }\le \lambda \le \frac{(\alpha +\beta )(\mu _{\min }^{2}+1)}{\beta \mu _{\min }^{2}+ (\alpha \beta +1)\mu _{\min }+\alpha }; \end{aligned}$$
(19)
If \(\mu _{\max }\le 1\) and \(\alpha >\beta \), then
$$\begin{aligned}&\frac{(\alpha +\beta )(\mu _{i}^{2}+1)}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+ \alpha }\le \frac{2(\alpha +\beta )}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha }\\&\quad \le \,\frac{2(\alpha +\beta )}{\beta \mu _{\min }^{2}+(\alpha \beta +1)\mu _{\min }+\alpha },\frac{(\alpha +\beta )(\mu _{i}^{2}+1)}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+ \alpha }\\&\quad \ge \frac{2\mu _{i}^{2}(\alpha +\beta )}{\beta \mu _{i}^{2}+(\alpha \beta +1) \mu _{i}+\alpha }. \end{aligned}$$
In addition, it is not difficult to verify that
$$\begin{aligned} h(\mu _{i}):=\frac{\partial }{\partial \mu _{i}}\left( \frac{2\mu _{i}^{2}(\alpha +\beta )}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha }\right) =\frac{2(\alpha +\beta )[(\alpha \beta +1)\mu _{i}^{2}+2\alpha \mu _{i}]}{(\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha )^{2}}>0. \end{aligned}$$
(20)
One may deduce the following result:
$$\begin{aligned} \frac{2(\alpha +\beta )\mu _{\min }^{2}}{\beta \mu _{\min }^{2}+(\alpha \beta +1) \mu _{\min }+\alpha }\le \lambda \le \frac{2(\alpha +\beta )}{\beta \mu _{\min }^{2}+ (\alpha \beta +1)\mu _{\min }+\alpha }. \end{aligned}$$
If \(\mu _{\min }\ge 1\) and \(\alpha \ge \beta \), then \(\mu _{i}^{2}-1\ge {0}\) for all \(i=1,2,\ldots ,n\) and hence \(g(\mu _{i})\ge {0}\). As a consequence,
$$\begin{aligned} \frac{(\alpha +\beta )(\mu _{\min }^{2}+1)}{\beta \mu _{\min }^{2}+(\alpha \beta +1) \mu _{\min }+\alpha }\le \lambda \le \frac{(\alpha +\beta )(\mu _{\max }^{2}+1)}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha } \end{aligned}$$is valid; if \(\mu _{\min }\ge 1\) and \(\alpha <\beta \), then we derive
$$\begin{aligned}&\frac{(\alpha +\beta )(\mu _{i}^{2}+1)}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha }\le \frac{2(\alpha +\beta )\mu _{i}^{2}}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha } \\&\quad \le \,\frac{2(\alpha +\beta )\mu _{\max }^{2}}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha },\frac{(\alpha +\beta )(\mu _{i}^{2}+1)}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha }\\&\quad \ge \frac{2(\alpha +\beta )}{\beta \mu _{i}^{2}+(\alpha \beta +1)\mu _{i}+\alpha }\\&\quad \ge \,\frac{2(\alpha +\beta )}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha } \end{aligned}$$ -
If \(\mu _{\min }=\mu _{1}\le \mu _{2}\le \cdots \le \mu _{k}\le 1\le \mu _{k+1}\le \cdots \le \mu _{n}=\mu _{\max }\), then similar to the derivation of (13) and (14), the following inequalities can be obtained
$$\begin{aligned}&\min \left\{ \frac{(\alpha +\beta )(\mu _{k}^{2}+1)}{\beta \mu _{k}^{2}+(\alpha \beta +1) \mu _{k}+\alpha },\frac{2(\alpha +\beta )\mu _{\min }^{2}}{\beta \mu _{\min }^{2}+(\alpha \beta +1)\mu _{\min }+\alpha }\right\} \nonumber \\&\quad \le \,\{\lambda |\mu _{1}\le \mu _{2}\le \cdots \le \mu _{k}\le 1\} \nonumber \\&\quad \le \,\max \left\{ \frac{(\alpha +\beta )(\mu _{\min }^{2}+1)}{\beta \mu _{\min }^{2}+ (\alpha \beta +1)\mu _{\min }+\alpha },\frac{2(\alpha +\beta )}{\beta \mu _{\min }^{2}+ (\alpha \beta +1)\mu _{\min }+\alpha }\right\} \nonumber \\ \end{aligned}$$(21)and
$$\begin{aligned}&\min \left\{ \frac{2(\alpha +\beta )}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha },\frac{(\alpha +\beta )(\mu _{k+1}^{2}+1)}{\beta \mu _{k+1}^{2}+(\alpha \beta +1)\mu _{k+1}+\alpha }\right\} \nonumber \\&\quad \le \,\{\lambda |1\le \mu _{k+1}\le \cdots \le \mu _{n}\}\nonumber \\&\quad \le \,\max \left\{ \frac{(\alpha +\beta )(\mu _{\max }^{2}+1)}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha },\frac{2(\alpha +\beta )\mu _{\max }^{2}}{\beta \mu _{\max }^{2}+(\alpha \beta +1)\mu _{\max }+\alpha }\right\} .\nonumber \\ \end{aligned}$$(22)Combining (21) and (22) yields that
$$\begin{aligned}&(\alpha +\beta )\min \left\{ \frac{\mu _{k}^{2}+1}{(\beta \mu _{k}+1)(\alpha +\mu _{k})},\frac{2\mu _{\min }^{2}}{(\beta \mu _{\min }+1)(\alpha +\mu _{\min })},\right. \\&\qquad \left. \frac{2}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })},\frac{\mu _{k+1}^{2}+1}{(\beta \mu _{k+1}+1)(\alpha +\mu _{k+1})}\right\} \\&\quad \le \lambda \le (\alpha +\beta )\max \left\{ \frac{\mu _{\min }^{2}+1}{(\beta \mu _{\min }+1)(\alpha +\mu _{\min })},\frac{2}{(\beta \mu _{\min }+1)(\alpha +\mu _{\min })},\right. \\&\qquad \left. \frac{\mu _{\max }^{2}+1}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })},\frac{2\mu _{\max }^{2}}{(\beta \mu _{\max }+1)(\alpha +\mu _{\max })}\right\} .\nonumber \\ \end{aligned}$$
Furthermore, it is known that \(\overline{\lambda }=1-\lambda \) holds, where \(\overline{\lambda }\) stands for the eigenvalue of the PTS iteration matrix. By Theorem 1 of [32], we have
Minimizing the upper bound \(\sigma (\alpha ,\beta )\) of \(|\lambda -1|\) in (23) may lead to a clustered spectrum of the PTS-preconditioned matrix. Hence, all we need is to determine the quasi-optimal parameters that minimize \(\sigma (\alpha ,\beta )\). According to Theorem 2 of [32], the quasi-optimal parameters in (15) are obtained. Substituting them into (23) directly yields (16). This completes the proof. \(\square \)
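The eigenvalue relation underlying the proof can be checked numerically. The sketch below (hypothetical W, T, and parameter values) verifies that the eigenvalues of \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\) match the closed form (12), using the factorization \(\beta \mu ^{2}+(\alpha \beta +1)\mu +\alpha =(\beta \mu +1)(\mu +\alpha )\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, beta = 6, 0.9, 1.1
B = rng.standard_normal((n, n)); W = B @ B.T + n * np.eye(n)
C = rng.standard_normal((n, n)); T = C @ C.T

# S = W^{-1/2} T W^{-1/2}, symmetric positive semi-definite
lam, Q = np.linalg.eigh(W)
W_half_inv = Q @ np.diag(lam ** -0.5) @ Q.T
S = W_half_inv @ T @ W_half_inv
I = np.eye(n)

# Matrix carrying the nonunit eigenvalues of the preconditioned matrix
G2 = (alpha + beta) * np.linalg.solve(beta * S + I, S @ S + I) \
     @ np.linalg.inv(alpha * I + S)
computed = np.sort(np.linalg.eigvals(G2).real)

# Closed form (12): lambda = (a+b)(mu^2+1) / (b mu^2 + (ab+1) mu + a)
mu = np.linalg.eigvalsh(S)
formula = np.sort((alpha + beta) * (mu**2 + 1)
                  / (beta * mu**2 + (alpha * beta + 1) * mu + alpha))

assert np.allclose(computed, formula)
```

Since all three factors are rational functions of S, they commute and share its eigenbasis, which is exactly why each eigenvalue is obtained by substituting \(\mu _{i}\) into the scalar expression.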
Theorem 3.2
Let W be symmetric positive definite and T symmetric positive semi-definite. Then the eigenvalues \(\lambda \) of the PTS-preconditioned matrix are 1 or satisfy
- When \(\alpha \rightarrow 0_{+}\) and \(\beta \rightarrow 0_{+}\), then \(\lambda \rightarrow 0_{+}\);
- When \(\alpha \rightarrow +\infty \) and \(\beta \rightarrow +\infty \), then \(\lambda \rightarrow 0_{+}\);
- When \(\alpha \rightarrow +\infty \) and \(\beta \rightarrow 0_{+}\), then \(\mu _{\min }^{2}+1\le \lambda \le \mu _{\max }^{2}+1\);
- When \(\alpha \rightarrow 0_{+}\) and \(\beta \rightarrow +\infty \), then \(\frac{1}{\mu _{\max }^{2}}+1\le \lambda \le \frac{1}{\mu _{\min }^{2}}+1\).
Proof
From Theorem 3.1, it can be seen that \(\lambda =1\) or
When \(\alpha \rightarrow 0_{+}\) and \(\beta \rightarrow 0_{+}\), then \(\lambda \rightarrow 0_{+}\) in view of (24); when \(\alpha \rightarrow +\infty \) and \(\beta \rightarrow +\infty \), we infer that
when \(\alpha \rightarrow +\infty \) and \(\beta \rightarrow 0_{+}\), it follows from straightforward computations that
thus, \(\mu _{\min }^{2}+1\le \lambda \le \mu _{\max }^{2}+1\) holds. At last, when \(\alpha \rightarrow 0_{+}\) and \(\beta \rightarrow +\infty \), it holds that
which yields that \(\frac{1}{\mu _{\max }^{2}}+1\le \lambda \le \frac{1}{\mu _{\min }^{2}}+1\). \(\square \)
Theorem 3.2 implies that the PTS-preconditioned matrix has a clustered spectrum if proper parameters are chosen. Meanwhile, \(\alpha \) and \(\beta \) should not be chosen too small or too large, lest the PTS-preconditioned matrix become nearly singular. Moreover, when \(\alpha \rightarrow +\infty \) and \(\beta \rightarrow 0_{+}\), or \(\alpha \rightarrow 0_{+}\) and \(\beta \rightarrow +\infty \), we get \(\mu _{\min }^{2}+1\le \lambda \le \mu _{\max }^{2}+1\) or \(\frac{1}{\mu _{\max }^{2}}+1\le \lambda \le \frac{1}{\mu _{\min }^{2}}+1\), respectively, in accordance with Theorem 3.2. However, when \(\mu _{\min }\ll \mu _{\max }\), the spectrum of the PTS-preconditioned matrix may then be very scattered. Thus a more proper choice of the parameters in the PTS preconditioner needs to be investigated.
As mentioned in [7], when the coefficient matrix A is nonsymmetric but diagonalizable with eigenvector matrix V, the kth residual of a Krylov subspace method is bounded by
where \(\psi _{k}(x)\) is a polynomial of degree not greater than k. This indicates that, in this case, the Euclidean norm of the residual is determined not only by the eigenvalues but also by the eigenvectors of A. The following theorem analyzes the eigenvector distribution of the PTS-preconditioned matrix.
Theorem 3.3
Let W be symmetric positive definite and T symmetric positive semi-definite. Then the PTS-preconditioned matrix has 2n linearly independent eigenvectors. There are
(1) i linearly independent eigenvectors of the form \(\left( \begin{array}{cc} W^{-\frac{1}{2}} &{}\quad 0 \\ 0 &{}\quad W^{-\frac{1}{2}} \\ \end{array} \right) \left[ \begin{array}{c} I_{i} \\ 0 \\ \end{array} \right] \) that correspond to the eigenvalue 1, where \(I_{i}\) denotes the \(i\times {i}\) \((i\ge {n})\) identity matrix;

(2) \(2n-i\) linearly independent eigenvectors of the form \(\left( \begin{array}{cc} W^{-\frac{1}{2}} &{}\quad 0 \\ 0 &{}\quad W^{-\frac{1}{2}} \\ \end{array} \right) \left[ \begin{array}{c} D\bar{V} \\ \bar{V} \\ \end{array} \right] \) with \(D=\mathrm {diag}(\frac{\beta \mu _{1}+1}{\beta -\mu _{1}},\ldots ,\frac{\beta \mu _{2n-i}+1}{\beta -\mu _{2n-i}})\) that correspond to the eigenvalues \(\frac{(\alpha +\beta )(\mu _{k}^{2}+1)}{\beta \mu _{k}^{2}+(\alpha \beta +1)\mu _{k}+\alpha }\) \((1\le k\le 2n-i)\), where the columns of \(\bar{V}\) are eigenvectors associated with the nonunit eigenvalues of \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\).
Besides, let
be the 2n linearly independent eigenvectors of \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\), then its condition number satisfies
where \(\lambda _{\max }\) and \(\lambda _{\min }\) are the maximum and minimum eigenvalues of the matrix W, respectively, and
is the maximum value of the diagonal elements of D in modulus, with \(\tilde{\mu }_{\max }\) and \(\tilde{\mu }_{\min }\) being the maximum and minimum nonunit eigenvalues of G defined in (28), respectively.
Proof
According to Theorem 3.1, we see that \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\) has the structure of the following form:
Now we first consider the eigenvectors of the matrix
Equation (28) implies that the matrix G has the eigenvalue 1 with multiplicity at least n, and the remaining ones are those of the matrix \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\). We can check easily
Then G has n linearly independent eigenvectors of the form \(\left[ \begin{array}{c} I_{n} \\ 0 \\ \end{array} \right] \) that correspond to the eigenvalue 1. In addition, taking into account that S is a symmetric positive semi-definite matrix, there exists an orthogonal matrix V such that \(S=V\Lambda V^{T}\), where \(\Lambda \) is the diagonal matrix with the eigenvalues of S as its diagonal elements. Let (x; y) be an eigenvector of G corresponding to the eigenvalue \(\lambda \); then we have
We rewrite the above equation in the form
If \(y=0\), then from the first equation of (29) it can be deduced that \((\lambda -1)x=0\), which leads to \(\lambda =1\) since \(x\ne {0}\) in this case. Hence (x; y) belongs to Case (1). Now consider the case \(y\ne {0}\). Notice that \(\lambda \ne {1}\); otherwise (29) implies that \(y=(\beta S+I)^{-1}(S-\beta I)(\alpha I+S)^{-1}(I-\alpha S)y=0\), in contradiction with the assumption that \(y\ne {0}\). The second equation of (29) reveals that \(\lambda \) is an eigenvalue of \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\), and therefore
with \(\mu _{k}\) being the eigenvalues of the matrix S. It follows from \(\lambda _{k}\ne 1\) that \((\mu _{k}-\beta )(1-\alpha \mu _{k})\ne {0}\). For this case, we assume that \(y_{k}=v_{k}\). Then
that is
which can be simplified as
As a result, it holds that \(\left[ \frac{\beta \mu _{k}+1}{\beta -\mu _{k}}v_{k};v_{k}\right] \) is the eigenvector of the matrix G that corresponds to the nonunit eigenvalue \(\frac{(\alpha +\beta )(\mu _{k}^{2}+1)}{\beta \mu _{k}^{2}+(\alpha \beta +1)\mu _{k}+\alpha }\). Since \(v_{k}\) \((1\le k\le 2n-i)\) are linearly independent, there are \(2n-i\) linearly independent eigenvectors corresponding to the nonunit eigenvalues of G. Then the conclusions shown in (1) and (2) follow by the above analysis and Lemma 4.1 of [19], and the independence of the 2n eigenvectors of G is given by the nonsingularity of the matrix H defined as in (26).
In the sequel, we turn to discuss the condition number of the matrix H in (26). It follows from straightforward computations that
By making use of the expression of the matrix H in the above equation we obtain
with \(\bar{\mu }_{\max }\), defined as in (27), being the maximum modulus of the diagonal entries of D. \(\square \)
Remark 3.2
Theorem 3.3 shows that the PTS-preconditioned matrix has 2n linearly independent eigenvectors. As mentioned in [7] and shown in (25), if the matrix formed by the eigenvectors is well-conditioned, the residual bound decreases quickly as the polynomial degree grows by one with each additional iteration, so high accuracy can be reached rapidly. Thus a proper parameter \(\beta \) should be chosen to make the upper bound of \(\kappa _{2}(H)\) in Theorem 3.3 as small as possible.
The GMRES method terminates once the degree of the minimal polynomial is attained. In particular, the degree of the minimal polynomial equals the dimension of the corresponding Krylov subspace [30]. In the sequel, some upper bounds on the dimension of the Krylov subspace \(K(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}},\mathcal {P}_{{ PTS}}^{-1}\bar{b})\) are given.
Theorem 3.4
Let W be symmetric positive definite and T be symmetric positive semi-definite, respectively. Then the degree of the minimum polynomial of the PTS-preconditioned matrix is at most \(n+1\), i.e., \(\dim K(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}},\mathcal {P}_{{ PTS}}^{-1}\bar{b})\le {n+1}\); in particular, if the matrix \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\) has q distinct nonunit eigenvalues, then the degree of the minimum polynomial of the PTS-preconditioned matrix is at most \(q+1\), i.e., \(\dim K(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}},\mathcal {P}_{{ PTS}}^{-1}\bar{b})\le {q+1}\).
Proof
It follows from Theorem 3.3 that \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\) has 2n linearly independent eigenvectors, which means that \(\mathcal {P}_{{ PTS}}^{-1}\bar{\mathcal {A}}\) is diagonalizable, and hence
where \(p_{\min }(A)\) denotes the degree of the minimal polynomial of the matrix A, and p(g(x)) denotes the degree of the polynomial g(x). In addition, if the matrix \((\alpha +\beta )(\beta S+I)^{-1}(S^{2}+I)(\alpha I+S)^{-1}\) has q distinct nonunit eigenvalues, then it has
which completes the proof of this theorem. \(\square \)
4 The minimum residual PTS (MRPTS) iteration method and its convergence properties
In order to further improve the efficiency of the PTS iteration method, we generalize the PTS iteration method and derive a minimum residual PTS (MRPTS) iteration method in this section.
In [38], the authors defined the minimum residual HSS (MRHSS) iteration method by introducing two parameters into the HSS iteration method. These parameters are determined by minimizing the residual norm at each step of the HSS iteration scheme. Numerical experiments in [38] showed that the MRHSS method has advantages over the HSS one.
The PTS iteration method in (7) can be rewritten as
By applying the idea of [38], we introduce a variable \(\beta _{k}\) into (30), which leads to a new iteration scheme of the form
As mentioned in [38], if a proper parameter \(\beta _{k}\) is chosen, the convergence rate of (31) may be faster than that of (7). Next we discuss the optimal value of \(\beta _{k}\) in the MRPTS method. Inasmuch as (5) is a real linear system, it is natural to take \(\beta _{k}\) in the real field \({\mathbb {R}}\) here.
The residual form of the iteration scheme (31) can be written as
where \(r^{(k)}=\bar{b}-\bar{\mathcal {A}}(u^{(k)};v^{(k)})\in \mathbb {R}^{2n}\).
In view of Equation (8) in [38] and (32), \(\Vert r^{(k+1)}\Vert _{2}^{2}\) can be briefly expressed as
As a matter of fact, \((r^{(k)},\bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}r^{(k)})=(\bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}r^{(k)},r^{(k)})\) holds because \(\bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}\) and \(r^{(k)}\) are both real, so that \((r^{(k)})^{T}\bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}r^{(k)}=(\bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}r^{(k)})^{T}r^{(k)}\). One then obtains
by solving the equation \(\frac{\partial \Vert r^{(k+1)}\Vert _{2}^{2}}{\partial \beta _{k}}=0\). Furthermore, we can easily check that
If \(\Vert \bar{\mathcal {A}}\mathcal {M}(\alpha ,\beta )^{-1}r^{(k)}\Vert _{2}^{2}={0}\), then \(r^{(k)}=0\) by the nonsingularity of \(\bar{\mathcal {A}}\) and \(\mathcal {M}(\alpha ,\beta )\). Hence \((u^{(k)};v^{(k)})\) is the exact solution of (4), and there is no need to construct \(\beta _{k}\) to obtain \((u^{(k+1)};v^{(k+1)})\) in this case. We therefore assume that \(r^{(k)}\ne 0\) for \(k=0,1,2,\ldots \) throughout this paper. Under this assumption, we have
which implies that \(\beta _{k}\) defined as in (33) is the minimum point of the function \(\Vert r^{(k+1)}\Vert _{2}\).
Therefore, we can form the following detailed implementation of the MRPTS iteration method:
The minimum residual PTS (MRPTS) iteration method Let \(\alpha ,\beta >{0}\). Given \(\varepsilon >0\), an initial guess \((u^{(0)};v^{(0)})\) and right-hand side vector \(b=(p;q)\) of (4). For \(k=0,1,2,\ldots \), until \((u^{(k)};v^{(k)})\) converges,
-
Step 1: compute \(\bar{r}^{(0)}=b-{\mathcal {A}}(u^{(0)};v^{(0)})\), and divide \(\bar{r}^{(0)}\) into \((\bar{r}_{1}^{(0)};\bar{r}_{2}^{(0)})\) with \(\bar{r}_{1}^{(0)},\bar{r}_{2}^{(0)}\in {\mathbb {R}}^{n}\);
-
Step 2: compute \({r}_{1}^{(0)}=\alpha \bar{r}_{1}^{(0)}+\bar{r}_{2}^{(0)}\) and \({r}_{2}^{(0)}=\bar{r}_{2}^{(0)}-\beta \bar{r}_{1}^{(0)}\), and set \(r^{(0)}=({r}_{1}^{(0)};{r}_{2}^{(0)})\) and \(k=0\);
-
Step 3: if \(\frac{\Vert b-\mathcal {A}(u^{(k)};v^{(k)})\Vert _{2}}{\Vert b\Vert _{2}}<\varepsilon \), then stop; otherwise, continue;
-
Step 4: solve \((\alpha W+T)t_{1}^{(k)}=r_{1}^{(k)}\);
-
Step 5: compute \(t_{2}^{(k)}=r_{2}^{(k)}-(T-\beta W)t_{1}^{(k)}\);
-
Step 6: solve \((\beta T+W)t_{3}^{(k)}=t_{2}^{(k)}\);
-
Step 7: compute \(t_{4}^{(k)}=(\alpha W+T)t_{1}^{(k)}+(W-\alpha T)t_{3}^{(k)}\) and \(t_{5}^{(k)}=(T-\beta W)t_{1}^{(k)}+(\beta T+W)t_{3}^{(k)}\) and set \(t^{(k)}=(t_{4}^{(k)};t_{5}^{(k)})\);
-
Step 8: compute the value of \(\beta _{k}\):
$$\begin{aligned} \beta _{k}=\frac{(r^{(k)},t^{(k)})}{\Vert t^{(k)}\Vert _{2}^{2}}=\frac{(r_{1}^{(k)})^{T}t_{4}^{(k)}+(r_{2}^{(k)})^{T}t_{5}^{(k)}}{\Vert t_{4}^{(k)}\Vert _{2}^{2}+\Vert t_{5}^{(k)}\Vert _{2}^{2}}; \end{aligned}$$ -
Step 9: compute \(u^{(k+1)}=u^{(k)}+\beta _{k}t_{1}^{(k)}\), \(v^{(k+1)}=v^{(k)}+\beta _{k}t_{3}^{(k)}\), \(r_{1}^{(k+1)}=r_{1}^{(k)}-\beta _{k}t_{4}^{(k)}\) and \(r_{2}^{(k+1)}=r_{2}^{(k)}-\beta _{k}t_{5}^{(k)}\); set \(k=k+1\) and return to Step 3.
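The steps above can be sketched in code. The following is a minimal dense illustration, not the authors' implementation: it assumes the preconditioned real block form \(\bar{\mathcal {A}}=\left[ {\begin{matrix} \alpha W+T &{} W-\alpha T\\ T-\beta W &{} \beta T+W \end{matrix}}\right] \) implied by the residual transformation in Steps 1–2, and solves the two shifted subsystems by dense direct solves (`numpy.linalg.solve`) in place of a sparse Cholesky factorization.

```python
import numpy as np

def mrpts(W, T, p, q, alpha, beta, tol=1e-10, maxit=500):
    """Sketch of the MRPTS iteration for (W + iT)(u + iv) = p + iq.

    The preconditioned block system is split into its block lower-
    triangular part M and remainder N; each sweep solves with M and then
    takes the residual-minimizing step length beta_k.
    """
    n = len(p)
    u, v = np.zeros(n), np.zeros(n)
    # residual of the real block form, then its preconditioned version
    rb1, rb2 = p - (W @ u - T @ v), q - (T @ u + W @ v)
    r1, r2 = alpha * rb1 + rb2, rb2 - beta * rb1
    b_norm = np.hypot(np.linalg.norm(p), np.linalg.norm(q))
    for _ in range(maxit):
        # stopping test on the unpreconditioned residual
        res = np.hypot(np.linalg.norm(p - (W @ u - T @ v)),
                       np.linalg.norm(q - (T @ u + W @ v)))
        if res / b_norm < tol:
            break
        t1 = np.linalg.solve(alpha * W + T, r1)           # first solve
        t2 = r2 - (T - beta * W) @ t1
        t3 = np.linalg.solve(beta * T + W, t2)            # second solve
        t4 = (alpha * W + T) @ t1 + (W - alpha * T) @ t3  # t = A_bar * z
        t5 = (T - beta * W) @ t1 + (beta * T + W) @ t3
        bk = (r1 @ t4 + r2 @ t5) / (t4 @ t4 + t5 @ t5)    # step length
        u, v = u + bk * t1, v + bk * t3
        r1, r2 = r1 - bk * t4, r2 - bk * t5
    return u, v
```

A small test matrix (W symmetric positive definite, T = I) suffices to observe the monotone residual reduction guaranteed by the minimum residual step.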
Remark 4.1
When \(\beta _{k}=1\), the MRPTS iteration method automatically reduces to the PTS one. With a suitable choice of the parameter \(\beta _{k}\) the convergence rate of the MRPTS iteration method may be accelerated so as to be faster than the PTS one.
In what follows, the convergence properties of the MRPTS iteration method for solving the complex symmetric linear systems are discussed. To this end, we first demonstrate the following lemma.
Lemma 4.1
Let \(A=\mathrm {diag}(a_{1},\ldots ,a_{n})\) and \(B=\mathrm {diag}(b_{1},\ldots ,b_{n})\) be two \(n\times {n}\) real diagonal matrices, then
Proof
Direct computations give
Note that the nonzero blocks of H are all diagonal matrices, then its eigenvalues are those of the matrices
By direct calculations, we then immediately obtain the desired conclusion. \(\square \)
Theorem 4.1
Let W be symmetric positive definite and T be symmetric positive semi-definite, respectively. Then the MRPTS iteration method is convergent if for any \(\theta \in [0,\frac{\pi }{2}]\), the parameters \(\alpha \) and \(\beta \) satisfy
Proof
Since \(\beta _{k}\) defined as in (33) is the minimum point of the function \(\Vert r^{(k+1)}\Vert _{2}\), it holds that
Straightforward computations reveal that
Therefore, by making use of Lemma 4.1, we can conduct the estimate
with \(\lambda _{\max }\) and \(\lambda _{\min }\) being the maximum and minimum eigenvalues of W, respectively. To get \(\Vert \mathcal {N}(\alpha ,\beta )\mathcal {M}(\alpha ,\beta )^{-1}\Vert _{2}<1\), it is enough to have
for all \(\mu _{i}\) \((1\le i\le {n})\). For notational simplicity we denote by
If \(\bar{c}<g\cos \theta \) and \(\bar{d}<\tan \theta \) hold for any \(\theta \in [0,\frac{\pi }{2}]\), then \(\bar{c}^2(1+\bar{d}^{2})<g^{2}\), which is exactly (36). It follows from straightforward computations that a sufficient condition for guaranteeing \(\bar{c}<g\cos \theta \) and \(\bar{d}<\tan \theta \) is (34). Thus if (34) holds, we deduce that
which results in
i.e., the MRPTS iteration method is convergent. \(\square \)
In the following, we study the choice of the parameters of the MRPTS iteration method that can be used in practical computations. At each step of the MRPTS iteration method, two linear subsystems with the coefficient matrices \(\alpha W+T\) and \(\beta T+W\) need to be solved. By applying the idea of [20], the authors in [24] gave a strategy for choosing the parameters of the TSP iteration method. According to [24], the parameters \(\alpha \) and \(\beta \) should satisfy
which is equivalent to \((\alpha \beta -1)(\mu _{\max }-\mu _{\min })=0\), where \(\mu _{\max }\) and \(\mu _{\min }\) denote the maximum and minimum eigenvalues of \(W^{-\frac{1}{2}}{} { TW}^{-\frac{1}{2}}\), respectively. Since \(\mu _{\max }\ne \mu _{\min }\) holds in many practical problems, we consider adopting parameters \(\alpha \) and \(\beta \) satisfying \(\alpha \beta =1\) together with the conditions of Theorem 3.1 in the MRPTS iteration method. The numerical results in Sect. 6 will illustrate the effectiveness of this strategy.
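The quantities \(\mu _{\min }\) and \(\mu _{\max }\) can be computed from the generalized eigenproblem \(Tx=\mu Wx\), since \(W^{-1}T\) is similar to \(W^{-1/2}TW^{-1/2}\). The sketch below illustrates this; the geometric-mean scaling at the end is a hypothetical choice consistent with \(\alpha \beta =1\), not a rule given in the text.

```python
import numpy as np

def extreme_mu(W, T):
    """Extreme eigenvalues of W^{-1/2} T W^{-1/2}, obtained from the
    similar matrix W^{-1} T (same real spectrum for SPD W, SPSD T)."""
    mu = np.linalg.eigvals(np.linalg.solve(W, T)).real
    return mu.min(), mu.max()

# Illustrative (hypothetical) choice honoring alpha * beta = 1:
# balance the extreme eigenvalues by geometric-mean scaling.
W = np.diag([1.0, 2.0, 4.0])
T = 2.0 * np.eye(3)
mu_min, mu_max = extreme_mu(W, T)
alpha = 1.0 / np.sqrt(mu_min * mu_max)
beta = 1.0 / alpha          # so alpha * beta = 1 by construction
```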
5 The inexact MRPTS iteration method
The two half-steps at each step of the MRPTS iteration require solving linear subsystems with the coefficient matrices \(\alpha W+T\) and \(\beta T+W\). If the problem size is very large, then using direct methods such as the Cholesky factorization for these subsystems is time-consuming. To overcome this disadvantage, we prefer to solve the two subsystems inexactly. More specifically, we may employ the CG or PCG method, because the coefficient matrices \(\alpha W+T\) and \(\beta T+W\) are symmetric positive definite. This yields the inexact MRPTS (IMRPTS) iteration method for solving the linear system (4) as follows.
The inexact MRPTS (IMRPTS) iteration method Given an initial guess \((\bar{u}^{(0)};\bar{v}^{(0)})\), for \(k=0,1,2,\ldots \), until \((\bar{u}^{(k)};\bar{v}^{(k)})\) converges, solve \((\bar{u}^{(k+1)};\bar{v}^{(k+1)})\) approximately from
by employing the CG (or PCG) method to solve the linear subsystems with the coefficient matrices \(\alpha W+T\) and \(\beta T+W\); then compute \((\bar{w}_{1}^{(k)};\bar{w}_{2}^{(k)})\) by
and compute \((\bar{u}^{(k+1)};\bar{v}^{(k+1)})\) by
where \(\alpha \) and \(\beta \) are given positive constants.
To simplify numerical implementation and convergence analysis, we may rewrite the above IMRPTS iteration method as the following equivalent scheme.
Given an initial guess \((\bar{u}^{(0)};\bar{v}^{(0)})\) and right-hand side vector \({b}=({p};{q})\) of (4), for \(k=0,1,2,\ldots \), until \((\bar{u}^{(k)};\bar{v}^{(k)})\) converges,
-
1.
compute \(\bar{r}^{(k)}=b-{\mathcal {A}}(\bar{u}^{(k)};\bar{v}^{(k)})\), and divide \(\bar{r}^{(k)}\) into \((\bar{r}_{1}^{(k)};\bar{r}_{2}^{(k)})\) with \(\bar{r}_{1}^{(k)},\bar{r}_{2}^{(k)}\in {\mathbb {R}}^{n}\);
-
2.
compute \({r}_{1}^{(k)}=\alpha \bar{r}_{1}^{(k)}+\bar{r}_{2}^{(k)}\) and \({r}_{2}^{(k)}=\bar{r}_{2}^{(k)}-\beta \bar{r}_{1}^{(k)}\);
-
3.
approximate the solution of \((\alpha W+T)\bar{z}_{1}^{(k)}={r}_{1}^{(k)}\) by the CG (or PCG) method until \(\bar{z}_{1}^{(k)}\) is such that the residual \(\bar{p}^{(k)}={r}_{1}^{(k)}-(\alpha W+T)\bar{z}_{1}^{(k)}\) satisfies \(\Vert \bar{p}^{(k)}\Vert \le \varepsilon _{k}\Vert \bar{r}_{1}^{(k)}\Vert \);
-
4.
compute \(\bar{t}^{(k)}=r_{2}^{(k)}-(T-\beta W)\bar{z}_{1}^{(k)}\);
-
5.
solve \((\beta T+W)\bar{z}_{2}^{(k)}=\bar{t}^{(k)}\) by the CG (or PCG) method to compute the approximate solution \(\bar{z}_{2}^{(k)}\) until it is such that the residual \(\bar{q}^{(k)}=\bar{t}^{(k)}-(\beta T+W)\bar{z}_{2}^{(k)}\) satisfies \(\Vert \bar{q}^{(k)}\Vert \le \eta _{k}\Vert \bar{t}^{(k)}\Vert \);
-
6.
compute \(\bar{w}_{1}^{(k)}=(\alpha W+T)\bar{z}_{1}^{(k)}+(W-\alpha T)\bar{z}_{2}^{(k)}\) and \(\bar{w}_{2}^{(k)}=(T-\beta W)\bar{z}_{1}^{(k)}+(\beta T+W)\bar{z}_{2}^{(k)}\);
-
7.
compute the value of \(\bar{\beta }_{k}\):
$$\begin{aligned} \bar{\beta }_{k}=\frac{({r}_{1}^{(k)})^{T}\bar{w}_{1}^{(k)}+({r}_{2}^{(k)})^{T} \bar{w}_{2}^{(k)}}{\Vert \bar{w}_{1}^{(k)}\Vert _{2}^{2}+\Vert \bar{w}_{2}^{(k)}\Vert _{2}^{2}}; \end{aligned}$$ -
8.
compute \(\bar{u}^{(k+1)}=\bar{u}^{(k)}+\bar{\beta }_{k}\bar{z}_{1}^{(k)}\), \(\bar{v}^{(k+1)}=\bar{v}^{(k)}+\bar{\beta }_{k}\bar{z}_{2}^{(k)}\), and \((\bar{r}_{1}^{(k+1)};\bar{r}_{2}^{(k+1)})=b-\mathcal {A}(\bar{u}^{(k+1)};\bar{v}^{(k+1)})\).
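The equivalent scheme above can be sketched compactly. The following is an illustrative dense implementation, not the authors' code: the two SPD subsystems are solved by a plain hand-rolled CG (standing in for the PCG-with-ichol solver used in the experiments) to relative tolerances \(\varepsilon \) and \(\eta \), and the step length uses the preconditioned residual as in the exact MRPTS formula.

```python
import numpy as np

def cg(A, b, rtol, maxit=200):
    """Plain conjugate gradient for SPD A, run until the inner residual
    satisfies ||b - A x|| <= rtol * ||b||."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    stop = (rtol * np.linalg.norm(b)) ** 2
    for _ in range(maxit):
        if rs <= stop:
            break
        Ap = A @ p
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def imrpts(W, T, p_rhs, q_rhs, alpha, beta, eps=1e-2, eta=1e-2,
           tol=1e-8, maxit=500):
    """One sweep per loop iteration of the equivalent IMRPTS scheme,
    with inexact inner solves by CG."""
    n = len(p_rhs)
    u, v = np.zeros(n), np.zeros(n)
    b_norm = np.hypot(np.linalg.norm(p_rhs), np.linalg.norm(q_rhs))
    for _ in range(maxit):
        rb1 = p_rhs - (W @ u - T @ v)                  # outer residual
        rb2 = q_rhs - (T @ u + W @ v)
        if np.hypot(np.linalg.norm(rb1), np.linalg.norm(rb2)) / b_norm < tol:
            break
        r1, r2 = alpha * rb1 + rb2, rb2 - beta * rb1   # precondition
        z1 = cg(alpha * W + T, r1, eps)                # inexact solve 1
        t = r2 - (T - beta * W) @ z1
        z2 = cg(beta * T + W, t, eta)                  # inexact solve 2
        w1 = (alpha * W + T) @ z1 + (W - alpha * T) @ z2
        w2 = (T - beta * W) @ z1 + (beta * T + W) @ z2
        bk = (r1 @ w1 + r2 @ w2) / (w1 @ w1 + w2 @ w2) # step length
        u, v = u + bk * z1, v + bk * z2
    return u, v
```

Even with loose inner tolerances of \(10^{-2}\), the outer minimum residual step keeps the sweep convergent on well-conditioned test problems.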
Here, \(\Vert \cdot \Vert \) is a norm of a vector, and \(\{\varepsilon _{k}\}\) and \(\{\eta _{k}\}\) are two prescribed tolerances.
Next, we analyze the convergence of the IMRPTS method. To this end, we consider a vector norm \(|||x|||_{M_{2}}=\Vert M_{2}x\Vert _{2}\) (\(\forall x\in \mathbb {C}^{n}\)) defined as in [14] and let \(\delta _{k}=\max \{\varepsilon _{k},\eta _{k}\}\).
Motivated by Theorem 3.1 of [14], we establish the following result related to the convergence property of the IMRPTS iteration method.
Theorem 5.1
Let \(W,T\in {\mathbb {R}}^{n\times {n}}\) be symmetric positive definite and symmetric positive semi-definite, respectively, and let \(\alpha \) and \(\beta \) be positive constants satisfying the condition (34), and \(\bar{x}^{(k)}=(\bar{u}^{(k)};\bar{v}^{(k)})\). If \(\{\bar{x}^{(k)}\}\) is an iterative sequence generated by the IMRPTS iteration method and if \(x^{*}\in {\mathbb {C}}^{n}\) is the exact solution of the system (4), then
where
In particular, if
then the sequence \(\{\bar{x}^{(k)}\}\) converges to \(x^{*}\), where \(\delta _{k}=\max \{\varepsilon _{k},\eta _{k}\}\).
Proof
It follows from Steps 3 and 5 of the IMRPTS iteration method in the above and the definition of \(\delta _{k}\) that
which along with (37) results in
and
The combination of the above two equations yields the following result
Under the condition (34), we have \(\Phi (\alpha ,\beta ,\lambda _{\max },\lambda _{\min })<1\). If \(\{\delta _{k}\}\) is chosen such that
then the iterative sequence \(\{\bar{x}^{(k)}\}\) converges to the exact solution \(x^{*}\) of (4). Moreover, in view of (35) we deduce that
from which one gives the following result
Then a sufficient condition for guaranteeing \(\psi (k)<1\) is (38). This completes the proof. \(\square \)
Remark 5.1
Theorem 5.1 also applies to the inexact PTS (IPTS) iteration method, since the latter is a special case of the IMRPTS one.
6 Numerical experiments
In this section, we present the results of numerical experiments aimed at validating the effectiveness of the proposed methods and comparing their numerical behaviors with those of some known ones. Further, we exhibit the numerical advantages of the PTS preconditioner over the PMHSS, additive block diagonal (ABD) [12], RBLT and block triangular (BT) [29] ones. Here, the preconditioning matrix V in the PMHSS iteration method is taken as W. For the tested iteration methods, the parameters are the experimentally found optimal ones which minimize the total number of iterations. In all tables, the parameters \(\alpha _{{ pre}}\) and \(\beta _{{ pre}}\) are the ones that satisfy \(\alpha \beta =1\) and minimize the IT of the MRPTS iteration method. Numerical results are compared in terms of both the number of iteration steps (denoted by "IT") and the elapsed CPU time in seconds (denoted by "CPU").
All experiments are carried out using MATLAB (version R2016b) on a personal computer with Intel (R) Pentium (R) CPU G3240T 2.870 GHz, 16.0 GB memory and Windows 10 system. In actual computations, the initial vector is taken to be the zero vector. All iterations are terminated once the current relative residual (denoted by “RES”) norm is reduced by a factor of \(10^{12}\) or the number of iteration steps exceeds 500, or the CPU times are over 3600 s. The latter two are indicated by “–” in the numerical tables.
Example 6.1
We consider the following complex symmetric linear system [9, 10]:
where M and K are the inertia and the stiffness matrices, \(C_{V}\) and \(C_{H}\) are the viscous and the hysteretic damping matrices, respectively, and \(\varpi \) is the driving circular frequency. We take \(C_{H}=\mu K\) with \(\mu \) a damping coefficient, \(M=I\), \(C_{V}=10I\), and K the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square \([0,1]\times [0,1]\) with the mesh-size \(h=1/(m+1)\). The matrix \(K\in {\mathbb {R}}^{n\times {n}}\) possesses the tensor-product form \(K=I\otimes V_{m}+V_{m}\otimes I\), with \(V_{m}=h^{-2}\mathrm {tridiag}(-1,2,-1)\in {\mathbb {R}}^{m\times {m}}\). Hence, K is an \(n\times {n}\) block-tridiagonal matrix, with \(n=m^{2}\). In addition, we set the right-hand side vector b to be \(b=(1+i)A\mathbf {1}\), with \(\mathbf {1}\) being the vector of all entries equal to 1. The linear system is normalized by multiplying both sides with \(h^{2}\).
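The assembly described above can be sketched as follows. This is a dense illustration only: the system is taken in the standard frequency-response form \([(-\varpi ^{2}M+K)+i(\varpi C_{V}+C_{H})]x=b\) of [9, 10], and the values of \(\varpi \) and \(\mu \) below are placeholders, not the test settings of the experiments.

```python
import numpy as np

def example_6_1(m, varpi=np.pi, mu=0.02):
    """Dense assembly of W, T and b for Example 6.1 (illustrative
    varpi and mu; M = I, C_V = 10 I, C_H = mu * K)."""
    h = 1.0 / (m + 1)
    Vm = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    K = np.kron(np.eye(m), Vm) + np.kron(Vm, np.eye(m))  # n x n, n = m^2
    n = m * m
    W = K - varpi**2 * np.eye(n)          # real part: K - varpi^2 * M
    T = mu * K + 10 * varpi * np.eye(n)   # imaginary part: varpi*C_V + C_H
    b = (1 + 1j) * ((W + 1j * T) @ np.ones(n))
    # normalize the system by multiplying both sides with h^2
    return h**2 * W, h**2 * T, h**2 * b
```

For sparse runs at realistic sizes, the same Kronecker structure would be built with sparse matrices instead.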
Example 6.2
Consider the complex Helmholtz equation [22, 36, 39]:
with \(\sigma _{1}\) and \(\sigma _{2}\) being real coefficient functions. Here, u satisfies Dirichlet boundary conditions in the square \(D=[0,1]\times [0,1]\). By discretizing this equation with finite differences on an \(m\times {m}\) grid with mesh size \(h=1/(m+1)\), we obtain a complex linear system
where the matrix \(K\in {\mathbb {R}}^{n\times {n}}\) possesses the tensor-product form
Actually, K is the five-point centered difference matrix approximating the negative Laplacian operator \(L=-\Delta \). In our tests, the right-hand side vector is \(b=(1+i)A\mathbf {1}\), with \(\mathbf {1}\) being the vector with all entries equal to 1. As before, we normalize the complex linear system by multiplying both sides by \(h^{2}\).
Example 6.3
[9, 10] Consider the linear system of equations \((W+iT)x=b\), with
where \(V=\mathrm {tridiag}(-\,1,2,-\,1)\in {\mathbb {R}}^{m\times {m}}\), \(V_{c}=V-e_{1}e_{m}^{T}-e_{m}e_{1}^{T}\in {\mathbb {R}}^{m\times {m}}\) and \(e_{1}\) and \(e_{m}\) are the first and last unit vectors in \(\mathbb {R}^{m}\), respectively. We take the right-hand side vector b to be \(b=(1+i)A\mathbf {1}\), with \(\mathbf {1}\) being the vector of all entries equal to 1.
Here T and W correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square \([0,1]\times [0,1]\) with the mesh-size \(h=\frac{1}{m+1}\).
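The two stencils of this example can be formed in a few lines; the sketch below builds only V and its periodic counterpart \(V_{c}\) as defined above (W and T themselves are then formed from these via tensor products as in [9, 10]).

```python
import numpy as np

def tridiag_and_periodic(m):
    """Build V = tridiag(-1, 2, -1) and V_c = V - e_1 e_m^T - e_m e_1^T
    (the periodic-boundary variant) used in Example 6.3."""
    V = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))
    Vc = V.copy()
    Vc[0, m - 1] -= 1.0   # subtract e_1 e_m^T
    Vc[m - 1, 0] -= 1.0   # subtract e_m e_1^T
    return V, Vc
```

A quick sanity check: each row of \(V_{c}\) sums to zero, as expected for a periodic discrete Laplacian.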
Example 6.4
[9, 10] Consider the linear system of equations
where \(\tau \) is the time step-size and K is the five-point centered difference matrix approximating the negative Laplacian operator \(L=-\Delta \) with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square \([0,1]\times [0,1]\) with the mesh-size \(h=\frac{1}{m+1}\). The matrix \(K\in {\mathbb {R}}^{n\times {n}}\) possesses the tensor-product form \(K=I\otimes V_{m}+V_{m}\otimes I\), with \(V_{m}=h^{-2}\mathrm {tridiag}(-1,2,-1)\in {\mathbb {R}}^{m\times {m}}\). Hence, K is an \(n\times {n}\) block-tridiagonal matrix, with \(n=m^{2}\). We take
and the right-hand side vector b with its jth entry \(b_{j}\) being given by
In our tests, we take \(\tau =h\). Besides, we normalize coefficient matrix and right-hand side by multiplying both by \(h^{2}\).
For all the tested exact iteration methods, we apply the Cholesky factorization incorporated with the symmetric approximate minimum degree reordering [30] for solving the sub-systems. To do so, the symamd.m command of Matlab [32] is used here.
The experimentally found optimal parameters, IT and CPU times of the tested exact iteration methods for Examples 6.1–6.4 with respect to different problem sizes are listed in Tables 1, 2, 3, 4, 5, 6, 7 and 8. From Tables 1, 2, 3 and 4, we observe that the PTS iteration method outperforms the PMHSS, SCSP and CRI ones, and the advantage of the PTS one becomes more pronounced as the system size increases. Comparing the PTS and the TTSCSP iteration methods, we find that their IT are almost the same, owing to the fact that they have the same optimal convergence factors, as remarked in Sect. 2. However, the PTS iteration method requires less CPU time than the TTSCSP one, because the TTSCSP iteration method involves complex arithmetic whereas the PTS one involves only real arithmetic. The MRPTS iteration method performs better than the PMHSS, SCSP, CRI and PTS ones, and its IT remains almost unchanged, or even decreases, as m increases. Meanwhile, the numerical behaviors of the MRPTS, MRTMIT (the minimum residual version of TMIT) and TMIT iteration methods are comparable for Examples 6.1 and 6.2, while the MRPTS one is the most efficient for Examples 6.3 and 6.4. In addition, the experimentally found optimal parameters of the MRPTS method are stable, which brings great convenience in finding those for large systems. As the numerical results of Tables 5, 6, 7 and 8 show, the computing efficiency of the PTS preconditioner is superior to that of the other ones in terms of IT and CPU times.
The numerical performances of the MRPTS iteration method with the parameters \(\alpha _{{ pre}}\) and \(\beta _{{ pre}}\) and with the experimentally found optimal parameters are comparable. Thus, the strategy for choosing the parameters \(\alpha _{{ pre}}\) and \(\beta _{{ pre}}\) of the MRPTS iteration method can be applicable for the practical problems.
Besides, Tables 9, 10, 11 and 12 report the numerical results of the tested inexact iteration methods. We adopt the PCG method as the inner solver for the tested inexact methods. More concretely, we employ the modified incomplete Cholesky factorization (Matlab code: ichol(C,struct('michol','on','type','ict','droptol',1e-1))) as the preconditioner in the PCG method, as in [32]. The stopping tolerance used for the inner PCG iteration method is \(10^{-1}\). Note that the parameters adopted in Table 9 are the ones that minimize the IT of the tested methods while ensuring that no nonpositive diagonal element occurs when applying the Matlab function ichol for Example 6.1.
By comparing the results in Tables 9, 10, 11 and 12, we can see that the IPTS iteration method is more efficient than the ITTSCSP one in most cases from the CPU time point of view. For Examples 6.1 and 6.2, the IMRPTS iteration method returns better numerical results than the other tested ones. The exception is that the IT of the ITMIT and the IPGSOR iteration methods are less than that of the IMRPTS one, while the latter outperforms the ITMIT and the IPGSOR ones with respect to the CPU times. For Examples 6.3 and 6.4, the IMRPTS iteration method performs the best among the tested ones. Another important fact worth pointing out is that the convergence behavior of the IMRPTS iteration method is insensitive to m and almost independent of the problem size.
7 Concluding remarks
In the current paper, we first establish a preconditioned triangular splitting (PTS) iteration method which, in contrast to the TTSCSP one derived in [32], avoids complex arithmetic. Besides, by using the minimum residual technique put forward in [38], we construct a minimum residual PTS (MRPTS) iteration method which exhibits higher computational efficiency than the PTS one. The inexact version of the MRPTS iteration method is also derived. We establish convergence theories for the proposed iteration methods and investigate the spectral properties of the PTS-preconditioned matrix in detail. Finally, numerical experiments validate the effectiveness of the proposed methods and show that they are superior to some known ones in terms of iteration counts and CPU times.
However, in this work the parameters of the proposed iteration methods and of the preconditioner in the numerical experiments are either the experimentally found optimal ones or proper ones satisfying \(\alpha \beta =1\). A theoretical choice of the optimal parameters has not been derived, which deserves further study in our future work.
References
Axelsson, O., Kucherov, A.: Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 7, 197–218 (2000)
Axelsson, O., Neytcheva, M.G., Ahmad, B.: A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algorithms 66, 811–841 (2014)
Axelsson, O., Salkuyeh, D.K.: A new version of a preconditioning method for certain two-by-two block matrices with square blocks. BIT Numer. Math. (2018). https://doi.org/10.1007/s10543-018-0741-x
Bai, Z.-Z.: Sharp error bounds of some Krylov subspace methods for non-Hermitian linear systems. Appl. Math. Comput. 109, 273–285 (2000)
Bai, Z.-Z.: Block preconditioners for elliptic PDE-constrained optimization problems. Computing 91, 379–395 (2011)
Bai, Z.-Z.: Rotated block triangular preconditioning based on PMHSS. Sci. China Math. 56, 2523–2538 (2013)
Bai, Z.-Z.: Motivations and realizations of Krylov subspace methods for large sparse linear systems. J. Comput. Appl. Math. 283, 71–78 (2015)
Bai, Z.-Z.: On preconditioned iteration methods for complex linear systems. J. Eng. Math. 93, 41–60 (2015)
Bai, Z.-Z., Benzi, M., Chen, F.: Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 87, 93–111 (2010)
Bai, Z.-Z., Benzi, M., Chen, F.: On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 56, 297–317 (2011)
Bai, Z.-Z., Benzi, M., Chen, F., Wang, Z.-Q.: Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 33, 343–369 (2013)
Bai, Z.-Z., Chen, F., Wang, Z.-Q.: Additive block diagonal preconditioning for block two-by-two linear systems of skew-Hamiltonian coefficient matrices. Numer. Algorithms 62, 655–675 (2013)
Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Parlett, B.N., Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)
Bai, Z.-Z., Wang, Z.-Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Bertaccini, D.: Efficient preconditioning for sequences of parametric complex symmetric linear systems. Electron. Trans. Numer. Anal. 18, 49–64 (2004)
Cao, Y., Jiang, M.-Q., Zheng, Y.-L.: A splitting preconditioner for saddle point problems. Numer. Linear Algebra Appl. 18, 875–895 (2011)
Chen, F.: On choices of iteration parameter in HSS method. Appl. Math. Comput. 271, 832–837 (2015)
Dehghan, M., Dehghani-Madiseh, M., Hajarian, M.: A generalized preconditioned MHSS method for a class of complex symmetric linear systems. Math. Model. Anal. 18, 561–576 (2013)
Hezari, D., Edalatpour, V., Salkuyeh, D.K.: Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations. Numer. Linear Algebra Appl. 22, 761–776 (2015)
Hezari, D., Salkuyeh, D.K., Edalatpour, V.: A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algorithms 73, 927–955 (2016)
Huang, Z.-G., Wang, L.-G., Xu, Z., Cui, J.-J.: An efficient two-step iterative method for solving a class of complex symmetric linear systems. Comput. Math. Appl. 75, 2473–2498 (2018)
Huang, Z.-G., Wang, L.-G., Xu, Z., Cui, J.-J.: Preconditioned accelerated generalized successive overrelaxation method for solving complex symmetric linear systems. Comput. Math. Appl. 77, 1902–1916 (2019)
Li, X., Yang, A.-L., Wu, Y.-J.: Lopsided PMHSS iteration method for a class of complex symmetric linear systems. Numer. Algorithms 66, 555–568 (2014)
Li, X.-A., Zhang, W.-H., Wu, Y.-J.: On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 79, 131–137 (2018)
Liang, Z.-Z., Zhang, G.-F.: On SSOR iteration method for a class of block two-by-two linear systems. Numer. Algorithms 71, 655–671 (2016)
Liao, L.-D., Zhang, G.-F.: A note on block diagonal and block triangular preconditioners for complex symmetric linear systems. Numer. Algorithms 80, 1143–1154 (2019)
Saad, Y.: Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia (2003)
Salkuyeh, D.K., Hezari, D., Edalatpour, V.: Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 92, 802–815 (2015)
Salkuyeh, D.K., Siahkolaei, T.S.: Two-parameter TSCSP method for solving complex symmetric system of linear equations. Calcolo 55, 8 (2018)
Wang, T., Lu, L.-Z.: Alternating-directional PMHSS iteration method for a class of two-by-two block linear systems. Appl. Math. Lett. 58, 159–164 (2016)
Wang, T., Zheng, Q.-Q., Lu, L.-Z.: A new iteration method for a class of complex symmetric linear systems. J. Comput. Appl. Math. 325, 188–197 (2017)
Wu, S.-L., Li, C.-X.: A splitting method for complex symmetric indefinite linear system. J. Comput. Appl. Math. 313, 343–354 (2017)
Xiao, X.-Y., Wang, X.: A new single-step iteration method for solving complex symmetric linear systems. Numer. Algorithms 78, 643–660 (2018)
Xiao, X.-Y., Wang, X., Yin, H.-W.: Efficient single-step preconditioned HSS iteration methods for complex symmetric linear systems. Comput. Math. Appl. 74, 2269–2280 (2017)
Yang, A.-L., Cao, Y., Wu, Y.-J.: Minimum residual Hermitian and skew-Hermitian splitting iteration method for non-Hermitian positive definite linear systems. BIT Numer. Math. 59, 299–319 (2019)
Zeng, M.-L., Ma, C.-F.: A parameterized SHSS iteration method for a class of complex symmetric system of linear equations. Comput. Math. Appl. 71, 2124–2131 (2016)
Zhang, J.-H., Dai, H.: A new block preconditioner for complex symmetric indefinite linear systems. Numer. Algorithms 74, 889–903 (2017)
Zhang, J.-H., Wang, Z.-W., Zhao, J.: Preconditioned symmetric block triangular splitting iteration method for a class of complex symmetric linear systems. Appl. Math. Lett. 86, 95–102 (2018)
Zheng, Z., Huang, F.-L., Peng, Y.-C.: Double-step scale splitting iteration method for a class of complex symmetric linear systems. Appl. Math. Lett. 73, 91–97 (2017)
Acknowledgements
We would like to express our sincere thanks to the editor and the anonymous reviewer for their valuable suggestions and constructive comments which greatly improved the presentation of this paper.
This research was supported by the National Natural Science Foundation of China (No. 10802068).
Huang, ZG., Xu, Z. & Cui, JJ. Preconditioned triangular splitting iteration method for a class of complex symmetric linear systems. Calcolo 56, 22 (2019). https://doi.org/10.1007/s10092-019-0318-3
Keywords
- Complex symmetric linear systems
- Preconditioned triangular splitting iteration method
- Minimum residual technique
- Convergence properties
- Inexact variant