Abstract
In this paper, we study a triple-parameter modified SSOR (TMSSOR) method for solving singular saddle point problems. We prove the semi-convergence of the TMSSOR method under suitable restrictions on the iteration parameters, and obtain the local optimal parameters which minimize the pseudo-spectral radii of the associated iteration matrices. Finally, numerical experiments demonstrate the effectiveness of the TMSSOR method for solving singular saddle point problems.
1 Introduction
We consider the iterative solutions of a consistent linear system with the following form:
where \(A \in R^{m\times m}\) is positive real, i.e., \(v^TAv>0\) for any nonzero \(v\in R^m\), \( B \in R^{m\times n}\) is rank-deficient, i.e., \(rank(B)<n\), and \( b\in R^m\), \( q\in R^n\) with \( m\ge n\). We use \(B^{T}\) and \(B^*\) to denote the transpose and the conjugate transpose of the matrix B, respectively. Linear system (1.1) is often referred to as a saddle point problem, which is important and arises in a wide variety of scientific and engineering applications such as computational fluid dynamics, optimization, optimal control, constrained least-squares problems, and so on, see [1, 6, 7, 9, 12, 13]. For a wider class of (generalized) saddle point problems, the reader may refer to [10, 14, 15, 17, 18, 20–22].
In the case that A is symmetric positive definite and B is of full column rank, a number of efficient iteration methods as well as their numerical properties have been studied. Bai et al. [3–7] proposed the Hermitian and skew-Hermitian splitting (HSS) iteration method and developed it for solving standard and generalized saddle point problems. Golub et al. [21] presented SOR-like methods for solving the linear system (1.1). Bai et al. [9, 10] developed the SOR-like methods further, presenting the generalized SOR method and the parameterized inexact Uzawa method. Wu et al. [26] proposed the modified symmetric SOR (MSSOR) method. Recently, Saberi Najafi and Edalatpanah [24] proposed a triple-parameter modified SSOR (TMSSOR) method for solving saddle point problems based on a new splitting and new relaxation parameters.
When B is rank-deficient, the coefficient matrix of (1.1) is singular, and we call the linear system (1.1) a singular saddle point problem. Some iteration methods and preconditioning techniques for solving singular saddle point problems have been proposed in the recent literature, see, e.g., [2, 16, 23, 27, 28, 30, 31]. Zheng et al. [30] applied parameterized Uzawa methods to solve singular saddle point problems. Li and Huang [23] investigated the semi-convergence of the generalized SSOR methods. Bai et al. [8, 22] studied constraint preconditioners, and for non-Hermitian singular saddle point problems, Zhang and Shen [28] provided some constraint preconditioners to accelerate GMRES. In this paper, we study the TMSSOR method for solving singular saddle point problems.
The rest of this paper is organized as follows. In Sect. 2, we present the TMSSOR method for solving the singular saddle point problem (1.1). In Sect. 3, we demonstrate the semi-convergence of this new method. In Sect. 4, we give the local optimal parameters of the TMSSOR method under certain restrictions on the parameters involved. In Sect. 5, several numerical experiments are given to show the efficiency of the method. Finally, conclusions are drawn in Sect. 6.
2 The triple-parameter modified SSOR method
For solving the singular saddle point problem (1.1), we adopt the following matrix splitting [24]
where
\(Q \in R^{n\times n}\) is symmetric positive definite, and \( \alpha ,\beta \) are two real parameters. Let \( c=(b^T,-q^T)^T\), and let \(z^{(i)}=((x^{(i)})^T,(y^{(i)})^T)^T\) be the i-th approximate solution to (1.1).
We compute the approximate solution \( z^{(i+1)}\) as follows.
where
Notice
Assume that \(\alpha \ne 0, 1\), \(\beta \ne 1\), and \(\omega \ne 0\). Since A is positive real and Q is symmetric positive definite, we obtain that
and
if and only if \(\beta (1-\alpha \omega )\ne 0 \), \((\beta +\omega -\beta \omega )(1-\omega +\alpha \omega )\ne 0 \), \(\beta \ne 0 \), i.e., \( \omega \notin \left\{ 0, \dfrac{1}{\alpha },\dfrac{1}{1-\alpha },\dfrac{\beta }{\beta -1}\right\} \). From the above equations, we obtain the TMSSOR method as follows:
with
Here, \(T_{\alpha ,\beta ,\omega }\) is the iteration matrix of the TMSSOR iteration, where
and
Let
Then \(C_{\alpha ,\beta ,\omega }=M_{\alpha , \beta , \omega }^{-1}\left( \begin{array}{c} b \\ -q \end{array}\right) \), and it is easy to see that the TMSSOR method can also be induced by the splitting
and
Obviously, \(M_{\alpha ,\beta ,\omega }\) can be regarded as a preconditioner for the linear system (1.1). It is easy to see that the TMSSOR method (2.6) can be written in the following form:
Evidently, when the relaxation parameters satisfy \(\alpha = \frac{1}{2}\), \(\beta =1\), the TMSSOR method reduces to the MSSOR method [26], which coincides with the GMSSOR method [31] when its two parameters \(\tau \) and \(\omega \) are equal.
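To make the algorithmic structure concrete, the following is a minimal sketch (in Python with NumPy; not part of the original derivation) of a stationary iteration induced by a splitting \(\mathscr {A}=M-N\), with \(M_{\alpha ,\beta ,\omega }\) playing the role of \(M\). The dense solve stands in for the structured forward/backward solves one would use in practice, and the function name and stopping rule are illustrative choices.

```python
import numpy as np

def splitting_iteration(M, N, c, z0, tol=1e-6, maxit=5000):
    """Stationary iteration z_{k+1} = M^{-1}(N z_k + c) induced by the
    splitting  A = M - N  (for TMSSOR, M = M_{alpha,beta,omega}).
    Dense sketch: in practice M^{-1} is applied via structured solves,
    never by forming M^{-1} explicitly."""
    A = M - N
    z = z0.copy()
    nrm_c = np.linalg.norm(c)
    for k in range(1, maxit + 1):
        z = np.linalg.solve(M, N @ z + c)          # z <- T z + C
        res = np.linalg.norm(c - A @ z) / nrm_c    # relative residual
        if res <= tol:
            break
    return z, k, res
```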
3 The semi-convergence of the TMSSOR method
In this section, we discuss the semi-convergence of the TMSSOR method. It is well known that if \(\mathscr {A}\) is nonsingular, the iteration (2.6) is convergent when the spectral radius of the iteration matrix is less than 1. When \(\mathscr {A}\) is singular, \(I-T_{\alpha ,\beta ,\omega }\) is singular as well. Then \(T_{\alpha ,\beta ,\omega }\) has an eigenvalue 1, which means its spectral radius cannot be less than 1. So, for the iteration matrix \(T_{\alpha ,\beta ,\omega }\) of the singular linear system (1.1), we need to introduce its pseudo-spectral radius [11] \( \nu (T_{\alpha ,\beta ,\omega })=\max \{|\lambda |: \lambda \in \sigma (T_{\alpha ,\beta ,\omega }),\ \lambda \ne 1\}\).
If the iteration (2.6) converges to a solution of the linear system (1.1), then we say the iteration (2.6) is semi-convergent. It is known that the iteration (2.6) is semi-convergent if and only if the iteration matrix \(T_{\alpha ,\beta ,\omega }\) is semi-convergent. The semi-convergence of the matrix \(T_{\alpha ,\beta ,\omega }\) can be characterized as follows [11]:
(a) The elementary divisors of the iteration matrix \(T_{\alpha ,\beta ,\omega }\) associated with its eigenvalue \(\lambda =1\) are linear, i.e., \(rank((I-T_{\alpha ,\beta ,\omega })^2)=rank(I-T_{\alpha ,\beta ,\omega })\), or equivalently, \(index(I-T_{\alpha ,\beta ,\omega })=1\), where \(index(\cdot )\) denotes the index of the corresponding matrix;

(b) \(\nu (T_{\alpha ,\beta ,\omega })<1 \).
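For moderate problem sizes, both conditions (a) and (b) can be verified numerically from the iteration matrix itself. A minimal sketch (dense NumPy linear algebra; the tolerance used to identify the eigenvalue 1 is an ad hoc choice):

```python
import numpy as np

def is_semiconvergent(T, tol=1e-10):
    """Check (a) index(I - T) = 1 via rank((I-T)^2) = rank(I-T), and
    (b) nu(T) = max{|lam| : lam in sigma(T), lam != 1} < 1."""
    E = np.eye(T.shape[0]) - T
    cond_a = np.linalg.matrix_rank(E @ E) == np.linalg.matrix_rank(E)
    lam = np.linalg.eigvals(T)
    nu = max((abs(l) for l in lam if abs(l - 1.0) > tol), default=0.0)
    return cond_a and (nu < 1.0), nu
```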
Theorem 3.1
Let \(T_{\alpha ,\beta ,\omega }\) be the iteration matrix of the TMSSOR method with \(\omega \ne 0, 2\). For any eigenvalue \( \mu \) of \(Q^{-1}B^{T}A^{-1}B\), if \(\lambda \) satisfies
then \( \lambda \) is an eigenvalue of \(T_{\alpha ,\beta ,\omega }\). Conversely, if \(\lambda \) is an eigenvalue of \(T_{\alpha ,\beta ,\omega }\) and there exists \(\mu \) satisfying Eq. (3.1), then \( \mu \) is an eigenvalue of \(Q^{-1}B^{T}A^{-1}B\).
Proof
Let \(\lambda \) be an eigenvalue of \(T_{\alpha ,\beta ,\omega }\); without loss of generality, we assume that \(\lambda \ne 0\), and let \(z=(x^T,y^T)^T\) be the corresponding eigenvector. By
we have
Then
That is, it holds
If \((1-\lambda )(\beta +\omega -\beta \omega )-\omega (2-\omega )=0\), which means \(\lambda =\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\), then it is easy to see that \(\mu =0\) is an eigenvalue of \(Q^{-1}B^{T}A^{-1}B\) which satisfies (3.1). If \(\lambda \ne \dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\), then it is easy to see that \(y\ne 0\). Substituting \(x\) from (3.2) into Eq. (3.3), we obtain
and
For \(\lambda \ne 0\) and \(\omega \ne 0, 2\), by the above equation it is easy to see that \(\mu \) is an eigenvalue of \(Q^{-1}B^{T}A^{-1}B\) which satisfies (3.1).
Conversely, for any eigenvalue \(\mu \) of \(Q^{-1}B^{T}A^{-1}B\), if \(\lambda \) satisfies (3.1), then it can be proved similarly that \(\lambda \) is an eigenvalue of \(T_{\alpha ,\beta ,\omega }\); the details are omitted. \(\square \)
Remark 3.1
From Theorem 3.1, we know that \(\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\) is an eigenvalue of \(T_{\alpha ,\beta ,\omega }\). In fact, if \(\lambda =\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\ne 0\), then Eq. (3.2) holds with \(y=0\), since then \(A^{-1}By=\mathbf {0}\). Also, we have
Since B is rank-deficient, the equation \(B^{T}x=\mathbf {0}\) has nonzero solutions. Therefore, \(\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\) is an eigenvalue of \(T_{\alpha ,\beta ,\omega }\) with corresponding eigenvector \((x^T,\mathbf {0}^T)^T\).
To obtain the semi-convergence conditions of the TMSSOR method, we introduce some lemmas which will be useful for our discussion.
Lemma 3.1
[9, 10] Both roots of the quadratic equation \(x^{2}-px+q=0\) are less than one in modulus if and only if \(|p-\overline{p}q|<1-|q|^2\), where \(\overline{p}\) is the complex conjugate of p.
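Lemma 3.1 is easy to sanity-check numerically by comparing the criterion with the roots computed directly. A minimal sketch (random complex coefficients; cases numerically too close to the boundary of the criterion are skipped to avoid floating-point ties):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    p = complex(rng.normal(), rng.normal())
    q = complex(rng.normal(), rng.normal())
    margin = (1 - abs(q) ** 2) - abs(p - np.conj(p) * q)   # criterion margin
    if abs(margin) > 1e-8:                                 # skip borderline cases
        inside = all(abs(r) < 1 for r in np.roots([1.0, -p, q]))
        assert inside == (margin > 0)
```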
Lemma 3.2
[31] Let A be positive real, Q be symmetric positive definite and B be rank-deficient. Then for any nonzero eigenvalue \(\mu =\mu _{Re}+\mathbf{i} \mu _{Im}\) of \(Q^{-1}B^TA^{-1}B\), where \(\mu _{Re}\) and \(\mu _{Im}\) are the real part and imaginary part of \(\mu \), respectively, and \(\mathbf{i}=\sqrt{-1} \), it holds \(\mu _{Re}>0\).
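The conclusion of Lemma 3.2 can be illustrated on randomly generated matrices satisfying its hypotheses. A sketch, where the sizes and the particular construction of a positive real A (SPD part plus skew-symmetric part) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 12, 8, 5                                    # r < n: B rank-deficient

S = rng.normal(size=(m, m)); S = S @ S.T + m * np.eye(m)   # SPD part
K = rng.normal(size=(m, m)); K = 0.5 * (K - K.T)           # skew-symmetric part
A = S + K                                             # positive real: v^T A v > 0
Q = rng.normal(size=(n, n)); Q = Q @ Q.T + n * np.eye(n)   # symmetric positive definite
B = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))      # rank(B) = r < n

W = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))   # Q^{-1} B^T A^{-1} B
mu = np.linalg.eigvals(W)
print(all(l.real > 0 for l in mu if abs(l) > 1e-10))  # True, as Lemma 3.2 asserts
```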
Theorem 3.2
Let A be positive real, Q be symmetric positive definite and B be rank-deficient. Then \(\nu (T_{\alpha ,\beta ,\omega })<1\) if the following conditions hold:
(C1) For \(\omega \in (0,1) \cup (2, +\infty )\), it holds \(\dfrac{\omega ^{2}}{2(\omega -1 )}<\beta \); for \(\omega \in (-\infty , 0)\cup (1, 2) \), it holds \(\beta <\dfrac{\omega ^{2} }{2(\omega -1 )}\);

(C2) \(\omega (2-\omega )(1-\alpha \omega )(1-\omega +\alpha \omega )>0\);

(C3) \(\dfrac{\omega (2-\omega )}{\omega ^2-2\omega \beta +2\beta }\mu _{Re}^{2}+\dfrac{\omega ^2-2\omega \beta +2\beta }{\omega (2-\omega )}\mu _{Im}^{2}<\dfrac{2(1-\alpha \omega )(1-\omega +\alpha \omega )}{\omega (2-\omega )}\mu _{Re} \).
Proof
Making use of Eq. (3.1) and by some algebra, we have
From the above equation, we observe that \(\lambda =1\) or \(\lambda =\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega } \) in the case of \(\mu =0\), and \(\lambda \ne 1\) in the case of \(\mu \ne 0\).
Case 1 When \(\lambda =\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\), it holds
Notice when \(\omega =1 \), the above inequality reduces to \(0<1<2\), which holds true trivially. Excluding \(\omega =1 \), the above inequality is equivalent to
By some algebra we have
i.e.,
which means the condition (C1) holds.
Case 2 When \(\lambda \ne \left\{ \dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }, 1\right\} \), denote \(d=\dfrac{\omega (2-\omega )}{\beta +\omega -\beta \omega }\) and \(e=\dfrac{\omega (2-\omega )}{(1-\alpha \omega )(1-\omega +\alpha \omega )}\). Then, by Lemma 3.1, for the nonzero eigenvalue \(\lambda \) of \(T_{\alpha ,\beta ,\omega }\), it holds
The first inequality of (3.8) is equivalent to \(0<d<2\); as in Case 1, this immediately yields the condition (C1).
After some algebra, the second inequality of (3.8) is equivalent to
Notice \(0<d<2\), \(\mu =\mu _{Re}+\mathbf{i}\mu _{Im}\). Then the above inequality is equivalent to
which is also equivalent to
By Lemma 3.2, we have \(\mu _{Re}>0\), and together with \(0<d<2\), it is easy to see \(e=\dfrac{\omega (2-\omega )}{(1-\alpha \omega )(1-\omega +\alpha \omega )}>0\) from (3.9), which is equivalent to the condition (C2).
After some algebra, the inequality (3.9) can be simplified as
i.e.,
which is the condition (C3). \(\square \)
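The three conditions (C1)–(C3) of Theorem 3.2 are straightforward to evaluate for given parameters. A minimal sketch of such a check, one eigenvalue \(\mu \) at a time; the function name and the handling of \(\omega =1\) (which (C1) leaves unrestricted, as Case 1 of the proof shows) are our own choices:

```python
def tmssor_semiconvergence_conditions(alpha, beta, omega, mu):
    """Evaluate the sufficient conditions (C1)-(C3) of Theorem 3.2 for an
    eigenvalue mu of Q^{-1} B^T A^{-1} B (mu may be complex)."""
    w, b = omega, beta
    mre, mim = mu.real, mu.imag
    # (C1): beta versus omega^2 / (2(omega - 1)), depending on omega
    if w == 1.0:
        c1 = True                                   # no restriction at omega = 1
    else:
        t = w * w / (2.0 * (w - 1.0))
        c1 = (t < b) if (0 < w < 1 or w > 2) else (b < t)
    # (C2)
    c2 = w * (2 - w) * (1 - alpha * w) * (1 - w + alpha * w) > 0
    # (C3): d*mu_Re^2 + mu_Im^2/d < 2(1-aw)(1-w+aw)/(w(2-w)) * mu_Re
    d = w * (2 - w) / (w * w - 2 * w * b + 2 * b)
    c3 = d * mre**2 + mim**2 / d < \
        2 * (1 - alpha * w) * (1 - w + alpha * w) / (w * (2 - w)) * mre
    return c1 and c2 and c3
```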
Lemma 3.3
[29] \(Index(I-T_{\alpha ,\beta ,\omega })=1\) if and only if, for any \(\mathbf {0}\ne Y\in \mathscr {R}(\mathscr {A})\), \(Y\notin \mathscr {N}(\mathscr {A}M_{\alpha , \beta , \omega }^{-1})\).
Theorem 3.3
Assume that A is positive real, Q is symmetric positive definite, and B is rank-deficient. Then \(index(I-T_{\alpha ,\beta ,\omega })=1\).
Proof
Let
By Lemma 3.3, to prove \(index(I-T_{\alpha ,\beta ,\omega })=1\), it suffices to prove that \(\mathscr {A}M_{\alpha , \beta , \omega }^{-1}Y\ne \mathbf {0}\) for any \(\mathbf {0}\ne Y\in \mathscr {R}(\mathscr {A})\); we prove this by contradiction. Suppose \(\mathscr {A}M_{\alpha , \beta , \omega }^{-1}Y=\mathbf {0}\). Notice that \(\mathscr {N}(\mathscr {A})=span\left\{ \left( \begin{array}{c} \mathbf {0}\\ \varphi \end{array} \right) \right\} \), where \(B\varphi =\mathbf {0}\).
Then there exists a vector \(\varphi _0\) such that
So
Making use of Eqs. (3.10) and (3.11), we have \(B\varphi _0=-BQ^{-1}B^{T}z_1=\mathbf {0}\). Since \(Q^{-1}\) is symmetric positive definite, it follows that \(B^{T}z_1=\mathbf {0}\), which means \( \varphi _0=-Q^{-1}B^{T}z_1=\mathbf {0}\). Hence,
which contradicts \(Y\ne \mathbf {0}\), so we reach the conclusion that \(index(I-T_{\alpha ,\beta ,\omega })=1\).
The proof is completed. \(\square \)
4 The optimal iteration parameters
In this section, we will discuss the optimal iteration parameters and the corresponding optimal semi-convergence factor for the TMSSOR method. Let \(A\in R^{m\times m}\) and \( Q\in R^{n\times n}\) be symmetric positive definite. Then all the eigenvalues of the matrix \(Q^{-1}B^TA^{-1}B\) are real and nonnegative. Denote the smallest and largest nonzero eigenvalues of the matrix \(Q^{-1}B^TA^{-1}B\) by \(\mu _{min}\) and \(\mu _{max}\), respectively. By the proof of Theorem 3.2, we see that the eigenvalues \(\lambda \) of \(T_{\alpha , \beta , \omega }\) can be \(\lambda =1\), \(\lambda =\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\) or \(\lambda \) can be represented by the following \(\lambda _{1}, \lambda _{2}\):
where
Consider the following two cases:
1. If \({\varDelta }=f(\alpha ,\beta ,\omega ,\mu )^2-4g(\beta ,\omega )\le 0\), then
$$\begin{aligned} |\lambda _{1}(\alpha ,\beta ,\omega ,\mu )|=|\lambda _{2}(\alpha ,\beta ,\omega ,\mu )|=\sqrt{1-\dfrac{\omega (2-\omega )}{\beta +\omega - \beta \omega }}=\sqrt{\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }} \end{aligned}$$ (4.1)
2. If \({\varDelta }>0\), then both \(\lambda _{1}(\alpha ,\beta ,\omega ,\mu )\) and \(\lambda _{2}(\alpha ,\beta ,\omega ,\mu )\) are real, and it holds
$$\begin{aligned} \lambda (\alpha ,\beta ,\omega ,\mu )=\left\{ \begin{array}{l@{\quad }l} \lambda _{1}(\alpha ,\beta ,\omega ,\mu ) &{}\quad if ~f(\alpha ,\beta ,\omega ,\mu )>0 \\ -\lambda _{2}(\alpha ,\beta ,\omega ,\mu ) &{}\quad if ~f(\alpha ,\beta ,\omega ,\mu )\le 0 \end{array}\right. \end{aligned}$$
where \(\lambda (\alpha ,\beta ,\omega ,\mu )=\max \left\{ |\lambda _{1}(\alpha ,\beta ,\omega ,\mu )|,|\lambda _{2}(\alpha ,\beta ,\omega ,\mu )|\right\} \). From Eq. (3.1), we obtain
\(\lambda _{1}(\alpha ,\beta ,\omega ,\mu )\lambda _{2}(\alpha ,\beta ,\omega ,\mu )=1-\dfrac{\omega (2-\omega )}{\beta +\omega -\beta \omega }=\dfrac{(1-\omega )(\beta -\omega )}{\beta +\omega -\beta \omega }\)
So, it is easy to see
Hence, the pseudo-spectral radius of the TMSSOR iteration matrix can be expressed as
\(\nu (T_{\alpha ,\beta ,\omega })=\max \limits _{0\ne \mu \in \sigma (Q^{-1}B^TA^{-1}B) }\{\lambda (\alpha ,\beta ,\omega ,\mu )\}\)
Denote
\(\lambda _{i}(\alpha ,\beta ,\omega )=\max \limits _{0\ne \mu \in \sigma (Q^{-1}B^TA^{-1}B) }\{|\lambda _{i}(\alpha ,\beta ,\omega ,\mu )|\}, \quad i=1, 2\)
Obviously, it holds
By Theorem 3.2, it follows that \(0<\dfrac{\omega (2-\omega )}{\beta +\omega -\beta \omega }<2\) and \(\dfrac{\omega (2-\omega )}{(1-\alpha \omega )(1-\omega +\alpha \omega )}>0\), which means

From the above equations, it holds that \(|\lambda _{1}(\alpha ,\beta ,\omega )|\ge |\lambda _{2}(\alpha ,\beta ,\omega )|\) when \({\varDelta }>0\) and \(f(\alpha ,\beta ,\omega ,\mu )>0\), and \(|\lambda _{2}(\alpha ,\beta ,\omega )|\ge |\lambda _{1}(\alpha ,\beta ,\omega )|\) when \({\varDelta }>0\) and \(f(\alpha ,\beta ,\omega ,\mu )\le 0\). So, we have
Now we consider the optimal parameters which minimize the pseudo-spectral radius of the iteration matrix \(T_{\alpha ,\beta ,\omega }\). In general, it is very difficult to determine the global optimal parameters of the TMSSOR method. So, here, we consider a special case and discuss the local optimal parameters; that is, we assume that the three parameters satisfy
Remark 4.1
By Eqs. (4.1) and (4.2), it is easy to see, to investigate the optimal parameters which minimize \(\nu (T_{\alpha ,\beta ,\omega })\), it suffices to consider the case of \({\varDelta }=f(\alpha ,\beta ,\omega ,\mu )^2-4g(\beta ,\omega )> 0\). So, from now on, we always assume \({\varDelta }> 0\).
Notice
Then it is easy to see there exist two variables \(\mu _{1}\) and \(\mu _{2}\) satisfying the following equations:
where \(0\le \mu _{1}\le \mu _{2}\). Furthermore, by (4.5), it is easy to see that \(\mu _{1}, \mu _{2} \in [\mu _{min},\mu _{max}]\). Indeed, if \(\mu _{1}<\mu _{min} \) or \(\mu _{max}<\mu _{2}\), then \(-2\sqrt{1-\alpha }<f(\alpha ,\beta ,\omega ,\mu )<2\sqrt{1-\alpha }\), which leads to \({\varDelta }<0\), contradicting \({\varDelta }>0\).
From (4.7) and (4.8), by some algebra, it holds
So, \(f(\alpha ,\beta ,\omega ,\mu )\) can be rewritten as
Substituting \(f(\alpha ,\beta ,\omega ,\mu )\) into (4.5), we have
For convenience, let \(\lambda _{1}(\alpha ,\beta ,\omega )=\lambda _{1}(\mu _{1},\mu _{2})\), and \(\lambda _{2}(\alpha ,\beta ,\omega )=\lambda _{2}(\mu _{1},\mu _{2})\). It is easy to see that the following results hold true.
Then by Eqs. (4.3) and (4.13), it holds
By some algebra, it holds that
Hence, we have
In the same way, we obtain
We now show that \(\nu (T_{\alpha ,\beta ,\omega })\) does not attain its minimum when \(\mu _1+\mu _2 \ne \mu _{min}+\mu _{max}\). Consider the following two cases:
1. Assume \(\nu (T_{\alpha ,\beta ,\omega })\) attains its minimum at \(\mu _1+\mu _2 <\mu _{min}+\mu _{max}\). By (4.14), it holds that \(\nu (T_{\alpha ,\beta ,\omega })=\lambda _2(\mu _1, \mu _2)\). Let \(\overline{\mu }=\mu _{min}+\mu _{max}-\mu _2\). Then \(\mu _1<\overline{\mu }\). By the monotonicity of the function \(\lambda _2(\mu _1, \mu _2)\), we have \(\lambda _2(\mu _1, \mu _2)>\lambda _2(\overline{\mu }, \mu _2)\), which contradicts the assumption.

2. Assume \(\nu (T_{\alpha ,\beta ,\omega })\) attains its minimum at \(\mu _1+\mu _2 >\mu _{min}+\mu _{max}\). By (4.14), it holds that \(\nu (T_{\alpha ,\beta ,\omega })=\lambda _1(\mu _1, \mu _2)\). Let \(\overline{\mu }=\mu _{min}+\mu _{max}-\mu _2\). Then \(\mu _1>\overline{\mu }\). By the monotonicity of the function \(\lambda _1(\mu _1, \mu _2)\), we have \(\lambda _1(\mu _1, \mu _2)>\lambda _1(\overline{\mu }, \mu _2)\), which also contradicts the assumption.
So \(\nu (T_{\alpha ,\beta ,\omega })\) may attain its minimum only at \(\mu _1+\mu _2 = \mu _{min}+\mu _{max}\).
Now we give the local optimal parameters and the corresponding optimal semi-convergence factor of the TMSSOR method as follows.
Theorem 4.1
Let A and Q be symmetric positive definite, and B be rank-deficient. Assume Eq. (4.6) is satisfied. Then the optimal parameters of the TMSSOR method are given by
\(\omega _{opt}\) is the root of the equation
where
Furthermore, the corresponding optimal semi-convergence factor of the TMSSOR method is
Proof
From the above analysis, we know that \(\nu (T_{\alpha ,\beta ,\omega })\) may attain its minimum only at \(\mu _1+\mu _2 = \mu _{min}+\mu _{max}\). When \(\mu _1+\mu _2 = \mu _{min}+\mu _{max}\), from (4.13) and (4.14), it holds
By the technique in [19], let \(r=-\mu _1^2+(\mu _{min}+\mu _{max})\mu _1\ge \mu _{min}\mu _{max}\) and define
It is easy to see that g(r) attains its minimum at \(r=\mu _{min}\mu _{max}\), which means that \(\nu (T_{\alpha ,\beta ,\omega })\) attains its minimum \(\dfrac{\sqrt{\mu _{max}}-\sqrt{\mu _{min}}}{\sqrt{\mu _{max}}+\sqrt{\mu _{min}}}\) at \(\mu _1=\mu _{min}\) and \(\mu _2=\mu _{max}\). By Eqs. (4.6), (4.9), (4.10) and together with \(\mu _1=\mu _{min}\), \(\mu _2=\mu _{max}\), after some simple algebra, we obtain the following optimal parameters:
and \(\omega _{opt}\) is the root of the equation
where \(a_{1}=(\sqrt{\mu _{max}}+\sqrt{\mu _{min}})^4\), \(a_{2}=(\sqrt{\mu _{max}}-\sqrt{\mu _{min}})^2\). \(\square \)
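Given the extreme nonzero eigenvalues, the optimal semi-convergence factor of Theorem 4.1 is immediate to evaluate. A dense sketch (assuming \(A\) and \(Q\) are symmetric positive definite, as in this section; the function name and tolerance are illustrative):

```python
import numpy as np

def optimal_factor(A, B, Q, tol=1e-10):
    """Return mu_min, mu_max (extreme nonzero eigenvalues of
    Q^{-1} B^T A^{-1} B) and the optimal semi-convergence factor
    (sqrt(mu_max)-sqrt(mu_min)) / (sqrt(mu_max)+sqrt(mu_min))."""
    W = np.linalg.solve(Q, B.T @ np.linalg.solve(A, B))
    mu = np.linalg.eigvals(W).real            # real and nonnegative here
    mu_nz = mu[mu > tol]
    mu_min, mu_max = mu_nz.min(), mu_nz.max()
    nu = (np.sqrt(mu_max) - np.sqrt(mu_min)) / (np.sqrt(mu_max) + np.sqrt(mu_min))
    return mu_min, mu_max, nu
```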
Remark 4.2
Theorem 4.1 shows that the TMSSOR method has the same asymptotic convergence rate as the GSOR method [9] at their respective optimal iteration parameters. Away from these optimal points, the two methods may exhibit different convergence behavior. Since the GSOR method has a lower computational cost per iteration, it is more effective in practical applications.
5 Numerical experiments
In this section, some numerical experiments are given to compare the performance of the TMSSOR method with the GSOR, MSSOR, MSSOR-like [25], and GMSSOR methods. We denote the number of iteration steps by IT, the elapsed CPU time in seconds by CPU, and the norm of the residual vector by RES.
All the computations are implemented in MATLAB 7.0.19920 (R14) on a PC with a 1.86GHz 64-bit processor and 2GB memory. In actual computations, we choose the right-hand vector \((b^T,-q^T)^T\in R^{m+n}\) such that the exact solution of the augmented linear system (1.1) is \(((x^*)^T,(y^*)^T)^T=(1,1,\ldots ,1)^T\in R^{m+n}\), and all runs are started from the initial vector \(((x^{(0)})^T,(y^{(0)})^T)^T=(0,0,\ldots ,0)^T\in R^{m+n}\). The iteration is stopped once the current iterate satisfies \(RES\le 10^{-6}\), where
with \(((x^{(k)})^T, (y^{(k)})^T)^T\) being the final approximate solution.
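The experimental setup above amounts to the following harness (a sketch: we take RES to be the Euclidean norm of the residual of (1.1), which is an assumption about the paper's exact formula, and `step` stands for one sweep of whichever iteration method is being tested):

```python
import numpy as np

def run_test(Asys, step, tol=1e-6, maxit=10000):
    """Right-hand side chosen so the exact solution is (1,...,1)^T,
    zero initial guess, stop once RES <= 1e-6."""
    c = Asys @ np.ones(Asys.shape[0])     # exact solution: all ones
    z = np.zeros(Asys.shape[0])           # initial vector: all zeros
    for it in range(1, maxit + 1):
        z = step(z, c)                    # one sweep of the tested method
        res = np.linalg.norm(c - Asys @ z)
        if res <= tol:
            break
    return z, it, res                     # reported as IT and RES
```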
Example 5.1
Consider the Oseen equation:
where \({\varOmega }\) is an open bounded domain in \( R^{2}\) or \( R^{3}\), the vector field \(\pmb {\mu }\) represents the velocity in \({\varOmega }\), p denotes the pressure, and the scalar \(\nu \) is the viscosity, which is inversely proportional to the Reynolds number. The test problem is a leaky two-dimensional lid-driven cavity problem in the square domain \({\varOmega }=(0<x<1;\ 0<y<1)\), where \(\pmb {\mu }=(\mu , \upsilon )^{T}\) denotes the velocity field and \(\pmb {\omega }=(a,b)^{T}\) denotes the wind. The boundary conditions are \(\mu =\upsilon =0\) on the three walls \((x=0, y=0, x=1)\), and \(\mu =1, \upsilon =0\) on the moving wall \((y=1)\). We take the constant “wind” \(a=1, b=2\). To discretize (5.1), we use the “marker and cell” (MAC) finite difference scheme [20]. Divide \({\varOmega }\) into a uniform \(l\times l\) grid of cells of width \(h=\frac{1}{l}\). The discrete velocities and pressures are defined on a staggered grid in which the discrete values of \(\mu \) lie in the centers of the cell boundaries orthogonal to the \(x\)-axis, the discrete values of \(\upsilon \) lie in the centers of the cell boundaries orthogonal to the \(y\)-axis, and the discrete pressures lie in the cell centers. Then we obtain the matrix representation of the Oseen equations (5.1):
with the matrix blocks of the following form:
\(A=\left( \begin{array}{c@{\quad }c}F_1&{}0\\ 0&{}F_2\end{array}\right) \in R^{2l(l-1)\times 2l(l-1)}\), \(B=(B_1,B_2)\in R^{2l(l-1)\times l^2}\),
\(F_{i}=\nu A_{i}+N_{i}\in R^{l(l-1)\times l(l-1)}, \quad (i=1,2)\). For convenience, let \(m=2l(l-1)\) and \(n=l^2\) in this example; then A is positive real and \(rank(B)=l^2-1\), with \(\mathscr {N}(B)=span\{e_{n}\}\), \(e_{n}=(1, 1, \ldots , 1)^T\in R^{n}\). To make the (1,1)-block symmetric positive definite, we finally take the test coefficient matrix as follows:
where \(\bar{A}=\frac{1}{2}(A+A^{T})\). For the preconditioner of the TMSSOR method, notice
and \(t=\dfrac{\omega (2-\omega )}{(\beta +\omega -\beta \omega )(1-\omega +\alpha \omega )(1-\alpha \omega )}\). We compute \(\bar{A}^{-1}\) and \(Q^{-1}\) in \(M_{\alpha , \beta , \omega }^{-1}\) by the incomplete LU factorization with drop tolerance 0.001. We take the viscosity \(\nu =0.5\) in our experiments and choose the matrix Q according to Table 1. For this example, we consider the four cases \(l = 8\), \(l = 16\), \(l = 24\), and \(l = 32\).
In Fig. 1, we choose Q as Case II in Table 1 and display the distribution of the eigenvalues of the matrix \(\mathscr {A}\) and of the matrices \(M_{\alpha , \beta , \omega }^{-1}\mathscr {A}\) for different iteration parameters. We find that the eigenvalues of the preconditioned matrices are quite clustered. In Tables 2, 3, 4 and 5, we choose Q as Case III in Table 1 and present the numerical results for the different iteration methods, both as solvers and as preconditioners to accelerate the restarted GMRES(k). In Table 6, we choose Q as Case I and Case II in Table 1, respectively, and list the optimal parameters and the corresponding optimal convergence factors of the different iteration methods. When the optimal parameters are employed, the results reveal that the TMSSOR method is effective; in particular, it requires the smallest number of iterations.
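As an indication of how such methods are used as preconditioners for GMRES(k), the following SciPy sketch applies a user-supplied action \(z\mapsto M_{\alpha ,\beta ,\omega }^{-1}z\) inside restarted GMRES; the incomplete LU factorization with drop tolerance 0.001 (used here for \(\bar{A}^{-1}\) and \(Q^{-1}\)) is indicated in the comments. The function names are illustrative, and in older SciPy versions the keyword is `tol` rather than `rtol`.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def prec_gmres(Asys, c, apply_Minv, k=30, tol=1e-6):
    """Restarted GMRES(k) with preconditioner action z -> M^{-1} z."""
    N = Asys.shape[0]
    M = spla.LinearOperator((N, N), matvec=apply_Minv)
    z, info = spla.gmres(Asys, c, rtol=tol, restart=k, M=M)
    return z, info

# Inside apply_Minv, A-bar^{-1} and Q^{-1} can be approximated by
# incomplete LU factors with drop tolerance 0.001, e.g.:
#   ilu = spla.spilu(sp.csc_matrix(A_bar), drop_tol=0.001)
#   y = ilu.solve(r)        # y approximates A_bar^{-1} r
```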
6 Conclusion
In this paper, we propose a triple-parameter modified SSOR iteration method to solve a class of large sparse singular saddle point problems. We prove the semi-convergence of the TMSSOR method under suitable restrictions on the iteration parameters and obtain the local optimal parameters which minimize the pseudo-spectral radii of the associated iteration matrices. The determination of the global optimal parameters still needs further study.
References
Bai, Z.-Z.: Structured preconditioners for nonsingular matrices of block two-by-two structures. Math. Comput. 75, 791–815 (2006)
Bai, Z.-Z.: On semi-convergence of Hermitian and skew-Hermitian splitting methods for singular linear systems. Computing 89, 171–197 (2010)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Golub, G.H., Pan, J.-Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 98, 1–32 (2004)
Bai, Z.-Z., Golub, G.H., Li, C.-K.: Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J. Sci. Comput. 28, 583–603 (2006)
Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.-Z.: Optimal parameters in the HSS-like methods for saddle-point problems. Numer. Linear Algebra Appl. 16, 447–479 (2009)
Bai, Z.-Z., Ng, M.K., Wang, Z.-Q.: Constraint preconditioners for symmetric indefinite matrices. SIAM J. Matrix Anal. Appl. 31, 410–433 (2009)
Bai, Z.-Z., Parlett, B.N., Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)
Bai, Z.-Z., Wang, Z.-Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. Academic Press, New York (1979); reprinted by SIAM, Philadelphia (1994)
Betts, J.T.: Practical methods for optimal control using nonlinear programming. SIAM, Philadelphia (2001)
Brezzi, F., Fortin, M.: Mixed and hybrid finite element methods. Springer, New York, London (1991)
Benzi, M., Golub, G.H.: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20–41 (2005)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Cao, Z.-H.: Comparison of performance of iterative methods for singular and nonsingular saddle point linear systems arising from Navier-Stokes equations. Appl. Math. Comput. 174, 630–642 (2006)
Cao, Z.-H.: A note on spectrum distribution of constraint preconditioned generalized saddle point matrices. Numer. Linear Algebra Appl. 16, 503–516 (2009)
Cao, Z.-H.: A note on spectrum analysis of augmentation block preconditioned generalized saddle point matrices. J. Comput. Appl. Math. 238, 109–115 (2013)
Chao, Z., Zhang, N.-M., Lu, Y.-Z.: Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 266, 52–60 (2014)
Elman, H.C.: Preconditioning for the steady-state Navier-Stokes equations with low viscosity. SIAM J. Sci. Comput. 20, 1299–1316 (1999)
Golub, G.H., Wu, X., Yuan, J.-Y.: SOR-like methods for augmented systems. BIT 41, 71–85 (2001)
Keller, C., Gould, N.I.M., Wathen, A.J.: Constraint preconditioning for indefinite linear systems. SIAM J. Matrix Anal. Appl. 21, 1300–1317 (2000)
Li, J.-L., Huang, T.-Z.: The semi-convergence of generalized SSOR method for singular augmented systems. In: High performance computing and applications, lecture notes in computer science, vol. 5938, pp. 230–235. Springer, New York (2010)
Saberi Najafi, H., Edalatpanah, S.A.: A new modified SSOR iteration method for solving augmented linear systems. Int. J. Comput. Math. 91, 539–552 (2014)
Wen, C., Huang, T.-Z.: Modified SSOR-like method for augmented systems. Math. Model. Anal. 16, 475–487 (2011)
Wu, S.-L., Huang, T.-Z., Zhao, X.-L.: A modified SSOR iterative method for augmented systems. J. Comput. Appl. Math. 228, 424–433 (2009)
Zhang, N.-M.: A note on preconditioned GMRES for solving singular linear equations. BIT 50, 207–220 (2010)
Zhang, N.-M., Shen, P.: Constraint preconditioners for solving singular saddle point problems. J. Comput. Appl. Math. 238, 116–125 (2013)
Zhang, N.-M., Wei, Y.-M.: On the convergence of general stationary iterative methods for Range-Hermitian singular linear systems. Numer. Linear Algebra Appl. 17, 139–154 (2010)
Zheng, B., Bai, Z.-Z., Yang, X.: On semi-convergence of parameterized Uzawa methods for singular saddle point problems. Linear Algebra Appl. 431, 808–817 (2009)
Zhou, L.-J., Zhang, N.-M.: Semi-convergence analysis of GMSSOR methods for singular saddle point problems. Comput. Math. Appl. 68, 596–605 (2014)
Acknowledgments
The authors are grateful to the referees and the editor for their very detailed comments and suggestions, which significantly improved this paper.
Communicated by Zhong-Zhi Bai.
This work is supported by National Natural Science Foundation of China under Grant No. 61572018 and Zhejiang Provincial Natural Science Foundation of China under Grant No: LY15A010016.