Abstract
For nonsymmetric saddle point problems, Pan et al. (Appl Math Comput 172:762–771, 2006) proposed a deteriorated positive-definite and skew-Hermitian splitting (DPSS) preconditioner. In this paper, a variant of the DPSS preconditioner is proposed to accelerate the convergence of the associated Krylov subspace methods. The new preconditioner is much closer to the coefficient matrix than the DPSS preconditioner. The spectral properties of the new preconditioned matrix are analyzed, and a theorem that bounds the dimension of the Krylov subspace for the preconditioned matrix is obtained. Numerical experiments on a model Navier–Stokes problem are presented to illustrate the effectiveness of the new preconditioner.
1 Introduction
Consider the solution of the large and sparse saddle point linear system

$$\mathscr{A}u\equiv \begin{bmatrix} A &{} B^{T}\\ -B &{} 0 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix}=\begin{bmatrix} f\\ g \end{bmatrix}\equiv b, \qquad (1.1)$$
where \(A\in {\mathbb {R}}^{n\times n}\) is nonsymmetric and positive definite (i.e., its symmetric part is positive definite), \(B\in {\mathbb {R}}^{m\times n}\) has full row rank, \(B^{T}\) is the transpose of the matrix B, \(x,f\in {\mathbb {R}}^n\), \(y, g\in {\mathbb {R}}^m\) and \(m\le n\). Under these conditions, \({\mathscr {A}}\) is nonsingular so that the solution of (1.1) exists and is unique. The system (1.1) is important and arises in a variety of scientific and engineering applications, such as computational fluid dynamics, constrained optimization, mixed or hybrid finite element approximations of second-order elliptic problems, and so on. See [2, 15] and the references therein for a detailed discussion.
Since the coefficient matrix of (1.1) is large and sparse, iterative methods for solving (1.1) are more attractive than direct methods in terms of storage requirements and computing time. Recently, many effective iterative methods have been proposed to solve the saddle point problem (1.1), which include, to name just a few of them, Uzawa method [1, 15], SOR-like and GSOR iteration methods [4, 11, 12, 24], HSS (Hermitian and skew-Hermitian splitting) based methods [5, 8–10]. See also [2] and [15] for a comprehensive survey and detailed study. However, some iterative methods, such as Krylov subspace methods, often suffer from slow convergence or even stagnation in some special engineering problems. In order to accelerate the convergence of the associated Krylov subspace method, the preconditioning technique is often used [13, 30, 32].
In the past few years, in light of the special structure of problem (1.1), numerous preconditioners have been proposed. These include block diagonal and block triangular preconditioners [2, 3, 7, 18, 25, 27, 29] (which are based on matrix splittings or matrix factorizations of the coefficient matrix \({\mathscr {A}}\)), constraint preconditioners [14, 21, 22, 26, 29] (which are seldom used when the coefficient matrix \({\mathscr {A}}\) is nonsymmetric), HSS-based preconditioners [6, 8, 10, 17, 33], dimensional splitting (DS) preconditioners [16, 19] (which have difficulties dealing with low-viscosity problems on stretched grids), and so on.
In this paper, for the nonsymmetric saddle point problem (1.1), a new preconditioner based on the DPSS preconditioner in [28] will be proposed, which is referred to as a variant of the deteriorated positive-definite and skew-Hermitian splitting (VDPSS) preconditioner. After a brief introduction of the DPSS preconditioner, we present the new preconditioner in Sect. 2. The spectral properties of the preconditioned matrix are derived in Sect. 3. In that section, we also investigate the impact on the convergence of the corresponding Krylov subspace method. In Sect. 4, numerical experiments are presented to illustrate the effectiveness of our preconditioner, including comparisons with other preconditioners. Finally, some conclusions are given in Sect. 5.
For convenience, we briefly explain some of the terminology used in this paper. \({{\mathbb {R}}}^l\) denotes the usual l-dimensional Euclidean space. For \(P\in {\mathbb {R}}^{l\times l}\), \(P^T\) and \(P^{-1}\) indicate the transpose and the inverse of the matrix P, respectively. \(\lambda (P)\) denotes an eigenvalue of P. \(u^T\), \(u^*\) and \(\parallel u \parallel _2\) are the transpose, the conjugate transpose and the 2-norm of a vector u, respectively. \(I_n\) denotes the identity matrix of size n.
2 A variant of the DPSS preconditioner
2.1 The DPSS preconditioner
For the nonsymmetric saddle point problem (1.1), Pan, Ng and Bai in [28] proposed the DPSS preconditioner, which is induced by a deteriorated positive-definite and skew-Hermitian splitting (DPSS) iterative method [3]. Here we give a brief introduction to the DPSS preconditioner; readers can consult [28] for more details. In [28], in light of the special property of the (1,1)-block matrix A, the authors split the coefficient matrix \({\mathscr {A}}\) into
$$\mathscr{A}=\mathscr{M}+\mathscr{N}, \qquad \text{where}\quad \mathscr{M}=\begin{bmatrix} A &{} 0\\ 0 &{} 0 \end{bmatrix}, \quad \mathscr{N}=\begin{bmatrix} 0 &{} B^{T}\\ -B &{} 0 \end{bmatrix}.$$
Then we have two splittings of \({\mathscr {A}}\), i.e.,

$$\mathscr{A}=(\alpha I+\mathscr{M})-(\alpha I-\mathscr{N})=(\alpha I+\mathscr{N})-(\alpha I-\mathscr{M}),$$
where I is the identity matrix and \(\alpha \) is a given positive parameter. Since \({\mathscr {M}}\) is positive semi-definite and \({\mathscr {N}}\) is skew-Hermitian, \(\alpha I+{\mathscr {M}}\) and \(\alpha I+{\mathscr {N}}\) are both nonsingular. Motivated by the classical ADI iteration technique, the following alternating splitting iteration is proposed:

$$\begin{cases} (\alpha I+\mathscr{M})u^{k+\frac{1}{2}}=(\alpha I-\mathscr{N})u^{k}+b,\\ (\alpha I+\mathscr{N})u^{k+1}=(\alpha I-\mathscr{M})u^{k+\frac{1}{2}}+b. \end{cases}$$
Thus, the deteriorated positive-definite and skew-Hermitian splitting (DPSS) iteration can be obtained:

$$u^{k+1}=(\alpha I+\mathscr{N})^{-1}(\alpha I-\mathscr{M})(\alpha I+\mathscr{M})^{-1}(\alpha I-\mathscr{N})u^{k}+(\alpha I+\mathscr{N})^{-1}\left[ I+(\alpha I-\mathscr{M})(\alpha I+\mathscr{M})^{-1}\right] b. \qquad (2.1)$$
It is easy to observe that (2.1) can also be induced by the splitting \({\mathscr {A}}={{\mathscr {P}}}_\text {DPSS}-{{\mathscr {Q}}}_\text {DPSS}\) (with \({{\mathscr {P}}}_\text {DPSS}\) nonsingular), which yields

$$u^{k+1}={{\mathscr {P}}}_\text {DPSS}^{-1}{{\mathscr {Q}}}_\text {DPSS}\,u^{k}+{{\mathscr {P}}}_\text {DPSS}^{-1}b, \qquad (2.2)$$
where

$${{\mathscr {P}}}_\text {DPSS}=\frac{1}{2\alpha }(\alpha I+\mathscr{M})(\alpha I+\mathscr{N}), \qquad {{\mathscr {Q}}}_\text {DPSS}=\frac{1}{2\alpha }(\alpha I-\mathscr{M})(\alpha I-\mathscr{N}).$$
Then the DPSS preconditioner \({{\mathscr {P}}}_\text {DPSS}\) can be obtained. In [20], Cao et al. proved that the DPSS iterative method (2.1) is unconditionally convergent. For the DPSS preconditioner, the eigenvalue distribution of the preconditioned matrix \({{\mathscr {P}}}_\text {DPSS}^{-1} {\mathscr {A}}\) is discussed in [28]. It is demonstrated in [28] that the eigenvalues of the preconditioned matrix gather into two clusters, i.e., around (0, 0) and (2, 0), as \(\alpha \) tends to \(0_{+}\).
2.2 A variant of the DPSS preconditioner
Since the factor 1/2 has no effect on the preconditioned system, we can take any other constant instead of it; in this paper, we omit it for convenience. Then the DPSS preconditioner can be written as

$${{\mathscr {P}}}_\text {DPSS}=\frac{1}{\alpha }\begin{bmatrix} \alpha I+A &{} 0\\ 0 &{} \alpha I \end{bmatrix}\begin{bmatrix} \alpha I &{} B^{T}\\ -B &{} \alpha I \end{bmatrix}=\begin{bmatrix} \alpha I+A &{} B^{T}+\frac{1}{\alpha }AB^{T}\\ -B &{} \alpha I \end{bmatrix}. \qquad (2.3)$$
From (1.1) and (2.3), we get the difference between the preconditioner \({{\mathscr {P}}}_\text {DPSS}\) and the coefficient matrix \({\mathscr {A}}\):

$${{\mathscr {P}}}_\text {DPSS}-{\mathscr {A}}=\begin{bmatrix} \alpha I &{} \frac{1}{\alpha }AB^{T}\\ 0 &{} \alpha I \end{bmatrix}. \qquad (2.4)$$
This shows that as \(\alpha \) tends to \(0_+\), the two diagonal blocks of the matrix \({{\mathscr {P}}}_\text {DPSS}-{\mathscr {A}}\) tend to 0, while the norm of the nonzero off-diagonal block tends to \(+\infty \). Hence, the choice of \(\alpha \) needs to balance these two effects.
A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix \({\mathscr {A}}\), such that the preconditioned matrix has a clustered spectrum (away from 0) [15]. Therefore, by replacing the shift term \(\alpha I\) in the (1,1) block of the first matrix in (2.3) with the zero matrix, we get a variant of the DPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) as follows:

$${{\mathscr {P}}}_\text {VDPSS}=\frac{1}{\alpha }\begin{bmatrix} A &{} 0\\ 0 &{} \alpha I \end{bmatrix}\begin{bmatrix} \alpha I &{} B^{T}\\ -B &{} \alpha I \end{bmatrix}=\begin{bmatrix} A &{} \frac{1}{\alpha }AB^{T}\\ -B &{} \alpha I \end{bmatrix}. \qquad (2.5)$$
The difference between \({{\mathscr {P}}}_\text {VDPSS}\) and \({\mathscr {A}}\) is

$${{\mathscr {P}}}_\text {VDPSS}-{\mathscr {A}}=\begin{bmatrix} 0 &{} \frac{1}{\alpha }AB^{T}-B^{T}\\ 0 &{} \alpha I \end{bmatrix}. \qquad (2.6)$$
Although the (1,2)-block matrix is different from that in (2.4), the (1,1)-block matrix in (2.6) now vanishes, which indicates that \({{\mathscr {P}}}_\text {VDPSS}\) gives a better approximation to the coefficient matrix \({\mathscr {A}}\) for the same \(\alpha \). Therefore, \({{\mathscr {P}}}_\text {VDPSS}\) is expected to be a better preconditioner than \({{\mathscr {P}}}_\text {DPSS}\). Furthermore, the structure of (2.6) facilitates the analysis of the eigenvalue distribution of the preconditioned matrix; see the discussion in the next section.
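To make the comparison concrete, the following NumPy sketch (an illustrative random example with hypothetical sizes \(n=8\), \(m=4\) and \(\alpha =1\); the convention \({\mathscr {A}}=\bigl [{\begin{matrix} A &{} B^T \\ -B &{} 0 \end{matrix}}\bigr ]\) is assumed) forms both preconditioners from their factored definitions and inspects the error blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 1.0
A = rng.random((n, n)) + n * np.eye(n)        # nonsymmetric, symmetric part positive definite
B = rng.random((m, n))                        # full row rank (generically)
Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])

I_n, I_m = np.eye(n), np.eye(m)
P2 = np.block([[alpha * I_n, B.T], [-B, alpha * I_m]])
P_dpss = np.block([[alpha * I_n + A, np.zeros((n, m))],
                   [np.zeros((m, n)), alpha * I_m]]) @ P2 / alpha   # factor 1/2 dropped
P_vdpss = np.block([[A, np.zeros((n, m))],
                    [np.zeros((m, n)), alpha * I_m]]) @ P2 / alpha  # alpha*I removed from (1,1)

E_dpss, E_vdpss = P_dpss - Amat, P_vdpss - Amat
print(np.linalg.norm(E_dpss[:n, :n]))    # norm of alpha*I, strictly positive
print(np.linalg.norm(E_vdpss[:n, :n]))   # zero up to roundoff: the (1,1) error block vanishes
```

For any fixed \(\alpha \), only the (1,2) error blocks differ between the two preconditioners, while VDPSS annihilates the (1,1) error block entirely.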
In fact, the VDPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) can also be obtained by the following splitting of the coefficient matrix \({\mathscr {A}}\):

$${\mathscr {A}}={{\mathscr {P}}}_\text {VDPSS}-{\mathscr {R}}, \qquad {\mathscr {R}}={{\mathscr {P}}}_\text {VDPSS}-{\mathscr {A}}=\begin{bmatrix} 0 &{} \frac{1}{\alpha }AB^{T}-B^{T}\\ 0 &{} \alpha I \end{bmatrix},$$
which can be recast as a fixed-point iteration, i.e., the VDPSS iteration

$$u^{k+1}=\varGamma u^{k}+\varTheta , \qquad k=0,1,2,\ldots ,$$
with \(u^0=((x^0)^T,(y^0)^T)^T\) being an initial vector, \(\varGamma ={{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {R}}\) and \(\varTheta ={{\mathscr {P}}}_\text {VDPSS}^{-1}\mathbf {b}\). Note, however, that the VDPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) no longer corresponds to an alternating iteration method.
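Independently of whether this stationary iteration converges, the exact solution \(u^{*}={\mathscr {A}}^{-1}b\) is a fixed point of the map \(u\mapsto \varGamma u+\varTheta \). A small NumPy sketch (hypothetical random data; the convention \({\mathscr {A}}=\bigl [{\begin{matrix} A &{} B^T \\ -B &{} 0 \end{matrix}}\bigr ]\) is assumed) checks this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 1.0
A = rng.random((n, n)) + n * np.eye(n)        # nonsymmetric, positive definite
B = rng.random((m, n))
Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])
b = rng.random(n + m)

P = np.block([[A, A @ B.T / alpha], [-B, alpha * np.eye(m)]])   # P_VDPSS
R = P - Amat                                                    # splitting A = P - R
Gamma = np.linalg.solve(P, R)        # Gamma = P^{-1} R
Theta = np.linalg.solve(P, b)        # Theta = P^{-1} b

u_star = np.linalg.solve(Amat, b)
fixed_point_gap = np.linalg.norm(u_star - (Gamma @ u_star + Theta))
print(fixed_point_gap)               # of the order of machine precision
```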
3 Eigenvalue analysis of the preconditioned matrix
In this section, we examine the spectral properties of the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}\), whose eigenvalue distribution influences the convergence of an iterative method. In particular, it is desirable that the number of distinct eigenvalues, or at least the number of clusters, be small, since the rate of convergence is then rapid. To be more precise, if there are only a few distinct eigenvalues, then the iterative method will terminate within a small number of steps.
We first give a lemma about the explicit expression of \({{\mathscr {P}}}_\text {VDPSS}^{-1}\).
Lemma 3.1
Let

$${{\mathscr {P}}}_\text {VDPSS}=\frac{1}{\alpha }{\mathscr {P}}_1{\mathscr {P}}_2, \qquad {\mathscr {P}}_1=\begin{bmatrix} A &{} 0\\ 0 &{} \alpha I \end{bmatrix}, \quad {\mathscr {P}}_2=\begin{bmatrix} \alpha I &{} B^{T}\\ -B &{} \alpha I \end{bmatrix}. \qquad (3.1)$$
Here \({\mathscr {P}}_2\) has the block-triangular factorization

$${\mathscr {P}}_2=\begin{bmatrix} \alpha I &{} 0\\ -B &{} {\tilde{A}} \end{bmatrix}\begin{bmatrix} I &{} \frac{1}{\alpha }B^{T}\\ 0 &{} I \end{bmatrix}$$
with \({\tilde{A}}=\alpha I+\displaystyle \frac{1}{\alpha }BB^{T}\). Then we have

$${{\mathscr {P}}}_\text {VDPSS}^{-1}=\begin{bmatrix} \left( I-\frac{1}{\alpha }B^{T}{\tilde{A}}^{-1}B\right) A^{-1} &{} -\frac{1}{\alpha }B^{T}{\tilde{A}}^{-1}\\ {\tilde{A}}^{-1}BA^{-1} &{} {\tilde{A}}^{-1} \end{bmatrix}. \qquad (3.2)$$
Theorem 3.1
Let the VDPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) be defined as in (2.5). Then the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}\) has the eigenvalue 1 with algebraic multiplicity at least n. The real part of the remaining eigenvalues \(\lambda \) of the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) satisfies

$$\frac{\alpha \sigma _m^2\lambda _{\min }({\tilde{H}})}{\alpha ^2+\sigma _1^2}\le \mathrm{Re}(\lambda )\le \frac{\alpha \sigma _1^2\lambda _{\max }({\tilde{H}})}{\alpha ^2+\sigma _m^2},$$
where \(\sigma _m\) and \(\sigma _1\) are the smallest and largest singular values of matrix B, \({\tilde{H}}=(A^{-1}+(A^{-1})^T)/2\) is the symmetric part of matrix \(A^{-1}\).
Proof
From Lemma 3.1 we have

$${{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}=\begin{bmatrix} I_n &{} S_1\\ 0 &{} S_2 \end{bmatrix}, \qquad (3.3)$$
where \(S_1=-(\displaystyle \frac{1}{\alpha } B^{T}-A^{-1}B^T-\displaystyle \frac{1}{\alpha ^2}B^T{{\tilde{A}}}^{-1}BB^T +\frac{1}{\alpha }B^T{{\tilde{A}}}^{-1}BA^{-1}B^T-B^T{{\tilde{A}}}^{-1})\), \(S_2={\tilde{A}}^{-1}BA^{-1}B^T\).
Therefore, the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) has the eigenvalue 1 with algebraic multiplicity at least n. The remaining non-unit eigenvalues of \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) are the solutions of the eigenvalue problem

$$S_2u={\tilde{A}}^{-1}BA^{-1}B^{T}u=\lambda u. \qquad (3.4)$$
Since \({\tilde{A}}=\alpha I+\displaystyle \frac{1}{\alpha }BB^{T},\) (3.4) is equivalent to

$$BA^{-1}B^{T}u=\lambda \left( \alpha I+\frac{1}{\alpha }BB^{T}\right) u. \qquad (3.5)$$
It is obvious that \(u \ne 0\). Without loss of generality, we assume \(\parallel u\parallel _2=1\). Multiplying equation (3.5) by \(u^*\) yields

$$u^{*}BA^{-1}B^{T}u=\lambda \left( \alpha +\frac{1}{\alpha }u^{*}BB^{T}u\right) . \qquad (3.6)$$
Let \(v=B^Tu\); then (3.6) reduces to

$$v^{*}A^{-1}v=\lambda \left( \alpha +\frac{1}{\alpha }\parallel v\parallel _2^{2}\right) . \qquad (3.7)$$
The conjugate transpose of (3.7) is

$$v^{*}(A^{-1})^{T}v={\bar{\lambda }}\left( \alpha +\frac{1}{\alpha }\parallel v\parallel _2^{2}\right) . \qquad (3.8)$$
Since \({\tilde{H}}=(A^{-1}+(A^{-1})^T)/2\) is the symmetric part of matrix \(A^{-1}\), from (3.7) and (3.8), it is easy to obtain

$$\mathrm{Re}(\lambda )=\frac{v^{*}{\tilde{H}}v}{\alpha +\frac{1}{\alpha }\parallel v\parallel _2^{2}}. \qquad (3.9)$$
According to the Courant–Fischer theorem,

$$\lambda _{\min }({\tilde{H}})\parallel v\parallel _2^{2}\le v^{*}{\tilde{H}}v\le \lambda _{\max }({\tilde{H}})\parallel v\parallel _2^{2} \quad \text{and} \quad \sigma _m^{2}\le \parallel v\parallel _2^{2}=u^{*}BB^{T}u\le \sigma _1^{2},$$
where, \(\sigma _m\) and \(\sigma _1\) are the smallest and largest singular values of matrix B, respectively.
Then, we have

$$\frac{\alpha \sigma _m^2\lambda _{\min }({\tilde{H}})}{\alpha ^2+\sigma _1^2}\le \mathrm{Re}(\lambda )\le \frac{\alpha \sigma _1^2\lambda _{\max }({\tilde{H}})}{\alpha ^2+\sigma _m^2}.$$
Thus, the proof of the theorem is completed. \(\square \)
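The block structure used in this proof is easy to verify numerically. The sketch below (NumPy, hypothetical random data with \(n=8\), \(m=4\); the convention \({\mathscr {A}}=\bigl [{\begin{matrix} A &{} B^T \\ -B &{} 0 \end{matrix}}\bigr ]\) is assumed) checks that \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) is block upper triangular with an identity (1,1) block and \(S_2={\tilde{A}}^{-1}BA^{-1}B^{T}\) in the (2,2) block:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 1.0
A = rng.random((n, n)) + n * np.eye(n)
B = rng.random((m, n))
Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])

P = np.block([[A, A @ B.T / alpha], [-B, alpha * np.eye(m)]])   # P_VDPSS
T = np.linalg.solve(P, Amat)                                    # preconditioned matrix

Atil = alpha * np.eye(m) + B @ B.T / alpha
S2 = np.linalg.solve(Atil, B @ np.linalg.solve(A, B.T))

ok_11 = np.allclose(T[:n, :n], np.eye(n))   # identity (1,1) block
ok_21 = np.allclose(T[n:, :n], 0)           # zero (2,1) block
ok_22 = np.allclose(T[n:, n:], S2)          # S2 in the (2,2) block
n_unit = int(np.sum(np.abs(np.linalg.eigvals(T) - 1) < 1e-6))
print(ok_11, ok_21, ok_22, n_unit)          # n_unit is at least n
```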
Remark 3.1
From (3.7), the non-unit eigenvalues of the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) satisfy

$$\lambda =\frac{v^{*}A^{-1}v}{\alpha +\frac{1}{\alpha }\parallel v\parallel _2^{2}}.$$
Letting \(\alpha \rightarrow 0_+\) or \(\alpha \rightarrow +\infty \), the non-unit eigenvalues satisfy \(\lambda _i\rightarrow 0\), since the denominator \(\alpha +\frac{1}{\alpha }\parallel v\parallel _2^2\) tends to \(+\infty \) in both limits while the numerator remains bounded. This will be confirmed by the figures in the next section.
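Both the real-part bounds of Theorem 3.1 and the limiting behavior described above can be checked on a small hypothetical example (NumPy, illustrative random data only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
A = rng.random((n, n)) + n * np.eye(n)        # symmetric part positive definite
B = rng.random((m, n))                        # full row rank (generically)
Ainv = np.linalg.inv(A)
Htil = (Ainv + Ainv.T) / 2                    # symmetric part of A^{-1}
lam_min, lam_max = np.linalg.eigvalsh(Htil)[[0, -1]]
sig = np.linalg.svd(B, compute_uv=False)
sig1, sigm = sig[0], sig[-1]                  # largest / smallest singular values of B

def nonunit_eigs(alpha):
    # eigenvalues of S2 = (alpha*I + B B^T / alpha)^{-1} B A^{-1} B^T
    Atil = alpha * np.eye(m) + B @ B.T / alpha
    return np.linalg.eigvals(np.linalg.solve(Atil, B @ Ainv @ B.T))

alpha = 1.0
w = nonunit_eigs(alpha)
lo = alpha * sigm**2 * lam_min / (alpha**2 + sig1**2)
hi = alpha * sig1**2 * lam_max / (alpha**2 + sigm**2)
in_bounds = lo <= w.real.min() and w.real.max() <= hi
decay = max(np.abs(nonunit_eigs(1e-6)).max(), np.abs(nonunit_eigs(1e6)).max())
print(in_bounds, decay)   # bounds hold; non-unit eigenvalues vanish in both limits
```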
It has been mentioned above that preconditioning attempts to improve the spectral properties, such that the total number of iterations required to solve the system to within some tolerance will indeed be decreased. Among the iterative methods currently available, Krylov subspace methods are the most important for solving the underlying system. An iterative method with an optimality property, such as the GMRES method, terminates when the degree of the minimal polynomial is attained [31]. In particular, the degree of the minimal polynomial is equal to the dimension of the corresponding Krylov subspace [30]. The next theorem provides an estimate of the dimension of the Krylov subspace \({\mathscr {K}}({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}},b)\).
Theorem 3.2
Let the VDPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) be defined as in (2.5). Then the dimension of the Krylov subspace \({\mathscr {K}}({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}, b)\) is at most \(m+1\). In particular, if the matrix \(S_2={\tilde{A}}^{-1}BA^{-1}B^T\) has k (\(1\le k \le m\)) distinct eigenvalues \(\mu _i\) (\(1\le i\le k\)) of respective multiplicity \(\theta _i\), where \(\sum \nolimits _{i=1}^k\theta _i=m\), then the dimension of the Krylov subspace \({\mathscr {K}}( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}, b)\) is at most \(k+1\).
Proof
According to the form of \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) in (3.3) and the eigenvalue distribution described in Theorem 3.1, it is evident that the characteristic polynomial of the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) is

$$p(\lambda )=(\lambda -1)^{n}\prod _{i=1}^{m}(\lambda -\lambda _i),$$

where the \(\lambda _i\) (\(1\le i\le m\)) are the eigenvalues of \(S_2\).
Expanding the polynomial \(( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}-I)\prod \nolimits _{i=1}^{m}({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}-\lambda _iI)\) of degree \(m+1\), we have

$$( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}-I)\prod _{i=1}^{m}({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}-\lambda _iI)=\begin{bmatrix} 0 &{} S_1\prod \nolimits _{i=1}^{m}(S_2-\lambda _iI)\\ 0 &{} (S_2-I)\prod \nolimits _{i=1}^{m}(S_2-\lambda _iI) \end{bmatrix}.$$
Since the \(\lambda _i\)'s are the eigenvalues of the matrix \(S_2\), the Cayley–Hamilton theorem gives

$$\prod _{i=1}^{m}(S_2-\lambda _iI)=0,$$

so the above matrix vanishes.
Therefore, the degree of the minimal polynomial of \( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) is at most \(m+1\). Consequently, the dimension of the corresponding Krylov subspace \({\mathscr {K}}( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}, b)\) is at most \(m+1\).
Consider now the case when the matrix \(S_2={\tilde{A}}^{-1}BA^{-1}B^T\) has k (\(1\le k \le m\)) distinct eigenvalues \(\mu _i\) of respective multiplicity \(\theta _i\). We write the characteristic polynomial of the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) as

$$p(\lambda )=(\lambda -1)^{n}\prod _{i=1}^{k}(\lambda -\mu _i)^{\theta _i}.$$
Let \(\varPhi =( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}-I) \left[ \displaystyle \prod _{i=1}^{k}( {{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}-\mu _iI)\right] \); then

$$\varPhi =\begin{bmatrix} 0 &{} S_1\prod \nolimits _{i=1}^{k}(S_2-\mu _iI)\\ 0 &{} (S_2-I)\prod \nolimits _{i=1}^{k}(S_2-\mu _iI) \end{bmatrix}.$$
Since \(\prod \nolimits _{i=1}^{k}(S_2-\mu _iI)=0\), it is obvious that \(\varPhi \) is a zero matrix. Therefore, the dimension of the Krylov subspace \({\mathscr {K}}( {{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}, b)\) is at most \(k+1\). Thus, the proof of the theorem is completed. \(\square \)
Remark 3.2
Theorem 3.2 indicates that if a Krylov subspace method (such as GMRES) preconditioned by the VDPSS preconditioner is used for solving (1.1), then it will terminate in at most \(m+1\) steps within some tolerance; in the special case described above, it will terminate in at most \(k+1\) (\(1\le k \le m\)) iterations. This theorem reveals the strong acceleration effect of the VDPSS preconditioner, which will also be confirmed in the section on numerical experiments.
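This termination property can be probed numerically: the matrix whose columns span \({\mathscr {K}}_{m+2}({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}},b)\) cannot have rank exceeding \(m+1\). A NumPy sketch with hypothetical random data (\(n=8\), \(m=4\); assumed convention \({\mathscr {A}}=\bigl [{\begin{matrix} A &{} B^T \\ -B &{} 0 \end{matrix}}\bigr ]\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 1.0
A = rng.random((n, n)) + n * np.eye(n)
B = rng.random((m, n))
Amat = np.block([[A, B.T], [-B, np.zeros((m, m))]])
b = rng.random(n + m)

P = np.block([[A, A @ B.T / alpha], [-B, alpha * np.eye(m)]])
T = np.linalg.solve(P, Amat)          # preconditioned matrix

# Krylov matrix with m+2 normalized columns: b, Tb, ..., T^{m+1} b
K = np.empty((n + m, m + 2))
v = b / np.linalg.norm(b)
for j in range(m + 2):
    K[:, j] = v
    v = T @ v
    v = v / np.linalg.norm(v)

s = np.linalg.svd(K, compute_uv=False)
krylov_dim = int(np.sum(s > 1e-8 * s[0]))   # numerical dimension of the Krylov subspace
print(krylov_dim)                           # at most m + 1
```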
Before the end of this section, we shall also analyze some implementation aspects of the preconditioner \({{\mathscr {P}}}_\text {VDPSS}\). At each step of the VDPSS iteration, or when applying the VDPSS preconditioner \({{\mathscr {P}}}_\text {VDPSS}\) within a Krylov subspace method, a linear system with \({{\mathscr {P}}}_\text {VDPSS}\) as the coefficient matrix needs to be solved. That is to say, a linear system of the form

$${{\mathscr {P}}}_\text {VDPSS}\,z=r \qquad (3.10)$$
needs to be solved for a given vector r at each step, where \(z=[z^T_1,z^T_2]^T\), \(r=[r^T_1,r^T_2]^T\), \(z_1,r_1\in {\mathbb {R}}^n\), \(z_2,r_2\in {\mathbb {R}}^m\). Based on the matrix factorization of \({{\mathscr {P}}}_\text {VDPSS}\) in Lemma 3.1, we have

$$z=\alpha \begin{bmatrix} I &{} \frac{1}{\alpha }B^{T}\\ 0 &{} I \end{bmatrix}^{-1}\begin{bmatrix} \alpha I &{} 0\\ -B &{} {\tilde{A}} \end{bmatrix}^{-1}\begin{bmatrix} A &{} 0\\ 0 &{} \alpha I \end{bmatrix}^{-1}\begin{bmatrix} r_1\\ r_2 \end{bmatrix}. \qquad (3.11)$$
Hence, the following algorithmic version of the VDPSS iterative method can be derived.
Algorithm 3.1
For a given \(r=[r^T_1,r^T_2]^T\), we can compute the vector \(z=[z^T_1,z^T_2]^T\) by (3.11) from the following steps:
-
(i)
solve \(Ap_1=r_1\);
-
(ii)
solve \((\alpha I+\frac{1}{\alpha }BB^{T})z_2=Bp_1+r_2\);
-
(iii)
compute \(z_1=p_1-\frac{1}{\alpha }B^Tz_2\).
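A direct transcription of these steps (dense NumPy for illustration only, with hypothetical random data) confirms that they indeed apply \({{\mathscr {P}}}_\text {VDPSS}^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 8, 4, 1.0
A = rng.random((n, n)) + n * np.eye(n)    # nonsymmetric, positive definite
B = rng.random((m, n))
Atil = alpha * np.eye(m) + B @ B.T / alpha

def apply_vdpss_inv(r1, r2):
    """Solve P_VDPSS z = r by the three steps of Algorithm 3.1."""
    p1 = np.linalg.solve(A, r1)                    # (i)   A p1 = r1
    z2 = np.linalg.solve(Atil, B @ p1 + r2)        # (ii)  (aI + BB^T/a) z2 = B p1 + r2
    z1 = p1 - B.T @ z2 / alpha                     # (iii) back-substitution
    return z1, z2

r1, r2 = rng.random(n), rng.random(m)
z1, z2 = apply_vdpss_inv(r1, r2)

# check against the assembled preconditioner P_VDPSS = [[A, A B^T/a], [-B, a I]]
P = np.block([[A, A @ B.T / alpha], [-B, alpha * np.eye(m)]])
residual = np.linalg.norm(P @ np.concatenate([z1, z2]) - np.concatenate([r1, r2]))
print(residual)    # of the order of machine precision
```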
Remark 3.3
From Algorithm 3.1, it is required to solve two sub-linear systems with coefficient matrices A and \({\tilde{A}}=\alpha I+\frac{1}{\alpha }BB^{T}\) at each iteration. According to the assumption given in Sect. 1, the matrix A is positive definite, and \({\tilde{A}}=\alpha I+\frac{1}{\alpha }BB^{T}\) is symmetric positive definite. Therefore, in an inexact implementation, we can employ the GMRES method and the conjugate gradient (CG) method to solve the two sub-linear systems, respectively. Inexact solvers can be used to reduce the cost of each iteration, but they also lead to somewhat slower convergence. Alternatively, the two sub-linear systems can be solved exactly: the system with the coefficient matrix A can be solved with a sparse LU factorization, and the system with the coefficient matrix \({\tilde{A}}=\alpha I+\frac{1}{\alpha }BB^{T}\) can be solved with a sparse Cholesky factorization.
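For large sparse problems, the point of this remark is that both factorizations can be computed once and reused at every Krylov iteration. A sketch of this pattern with SciPy (illustrative random sparse data; SciPy itself has no built-in sparse Cholesky, so a sparse LU stands in for the \({\tilde{A}}\) solve here, and a dedicated sparse Cholesky package could be substituted):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, LinearOperator

rng = np.random.default_rng(0)
n, m, alpha = 200, 80, 100.0
A = (sp.random(n, n, density=0.05, random_state=0) + n * sp.eye(n)).tocsc()  # sym. part PD
B = (sp.random(m, n, density=0.05, random_state=1) + sp.eye(m, n)).tocsc()   # full row rank

# factor once ...
lu_A = splu(A)                                            # sparse LU of A
Atil = (alpha * sp.eye(m) + B @ B.T / alpha).tocsc()
lu_Atil = splu(Atil)                                      # \tilde A is SPD; LU used for simplicity

# ... and reuse at every application of the preconditioner
def psolve(r):
    p1 = lu_A.solve(r[:n])                                # step (i)
    z2 = lu_Atil.solve(B @ p1 + r[n:])                    # step (ii)
    return np.concatenate([p1 - B.T @ z2 / alpha, z2])    # step (iii)

M = LinearOperator((n + m, n + m), matvec=psolve)   # can be passed to a Krylov solver
z = psolve(np.ones(n + m))
print(z.shape)
```

Wrapping `psolve` in a `LinearOperator` lets the same cached factorizations serve as the preconditioner for every matrix-vector solve inside a restarted GMRES run.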
Remark 3.4
The RDPSS preconditioner proposed in [20] is similar to the VDPSS preconditioner, differing only in the (2,2) block. In [20], the shift term \(\alpha I\) in the (2,2) block of the RDPSS preconditioner also vanishes. Consequently, two sub-linear systems with coefficient matrices A and \(\frac{1}{\alpha }BB^{T}\) need to be solved at each iteration when applying the RDPSS preconditioner within a Krylov subspace method. It is easy to observe that the condition number of \(\alpha I+ \frac{1}{\alpha } BB^{T}\) is better than that of \(\frac{1}{\alpha }BB^{T}\) for large values of \(\alpha \), which brings about a substantial improvement in the number of iterations. However, this does not mean that we should use an unduly large \(\alpha \) when applying the VDPSS preconditioner. In fact, experiments indicate that \(\alpha =100\) is appropriate, which is also the reason why we choose \(\alpha =100\) in Sect. 4. This attractive merit of the VDPSS preconditioner will be stressed by the numerical results in the next section.
4 Numerical experiments
In this section, we carry out some numerical experiments to illustrate the effectiveness of the VDPSS preconditioner for the nonsymmetric saddle point problem (1.1). The DPSS preconditioner [28] and the RDPSS preconditioner [20] are adopted for comparison with the proposed preconditioner in terms of eigenvalue distribution, iteration counts and CPU time. Unless otherwise specified, we use left preconditioning with the restarted GMRES method [31] with restarting frequency 30, i.e., GMRES(30). The zero vector is adopted as the initial vector. The iteration stops when \(\parallel r^k\parallel _2/\parallel r^0\parallel _2<10^{-6}\) or the prescribed maximum number of restarts \(k_\text {max}=1000\) is exceeded, where \(r^k\) is the residual at the kth iteration. The main criteria used for comparison are the total number of iteration steps and the CPU time, abbreviated as Iter and CPU, respectively. All experiments are run on a PC using MATLAB 2008 under the Windows 7 operating system.
Here we consider the Oseen problem, which is obtained from the linearization of the steady-state Navier–Stokes equations with suitable boundary conditions on \(\partial \varOmega \):

$$\begin{cases} -\nu \varDelta u+(w\cdot \nabla )u+\nabla p=f &{} \text{in } \varOmega ,\\ \mathrm{div}\, u=0 &{} \text{in } \varOmega , \end{cases}$$

where \(w\) denotes a given divergence-free vector field (the velocity approximation from the previous Picard step), and
where \(\nu >0\) represents the kinematic viscosity, div is the divergence, \(\varDelta \) is the vector Laplace operator, \(\nabla \) is the gradient, and u, p stand for the velocity and pressure of the fluid, respectively. We mainly focus on the “regularized” two-dimensional lid-driven cavity problem discretized by the Q2-P1 finite element on both uniform and stretched grids. The IFISS software package [23] developed by Elman, Ramage and Silvester is used to generate the test problems. In each case, we consider three viscosity values, \(\nu =1\), \(\nu =0.1\) and \(\nu =0.01\), to generate linear systems corresponding to \(16\times 16\), \(32\times 32\), \(64\times 64\) and \(128\times 128\) meshes. The resulting linear systems have the form

$$\begin{bmatrix} A &{} B^{T}\\ -B &{} 0 \end{bmatrix}\begin{bmatrix} u\\ p \end{bmatrix}=\begin{bmatrix} f\\ g \end{bmatrix},$$
where \(A\in {\mathbb {R}}^{n\times n}\) corresponds to the discretization of the convection-diffusion term, the rectangular matrix \(B^T\in {\mathbb {R}}^{n\times m}\) represents the discrete gradient operator, and \(B\in {\mathbb {R}}^{m\times n}\) represents the divergence operator.
It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods. In Matlab, this corresponds to computing the Cholesky or LU factorization in combination with AMD or column AMD reordering.
4.1 Discretizing with Q2-P1 finite element on uniform grids
We first consider discretizing the Oseen problem with the Q2-P1 finite element on uniform grids. Although the rate of convergence of nonsymmetric Krylov iterations preconditioned by a parameter-dependent preconditioner depends on the particular choice of the parameter, the analysis of the optimal parameter appears to be quite a difficult problem. In our numerical experiments, the parameters are determined experimentally. In Remark 3.1, we analyzed theoretically that the non-unit eigenvalues tend to 0 when \(\alpha \rightarrow 0_+\) or \(\alpha \rightarrow +\infty \). In order to verify this property, and also to justify Remark 3.4, \(\alpha =0.001\), \(\alpha =1\), \(\alpha =10\) and \(\alpha =100\) are adopted as samples to investigate the behavior of the new preconditioner \({{\mathscr {P}}}_\text {VDPSS}\). Here, \(\alpha =1\) is also the value considered in [20].
Figure 1 demonstrates the eigenvalue distribution of the new preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}\) and the DPSS preconditioned matrix \({{\mathscr {P}}}_\text {DPSS}^{-1} {\mathscr {A}}\) for \(\alpha =0.001\), \(\alpha =1\) and \(\alpha =100\) on \(32\times 32\) uniform grids with \(\nu =0.1\). Numerical results for different choices of \(\nu \) and \(\alpha \) on the uniform grids are tabulated in Tables 1, 2, 3. We compare the three preconditioners in terms of the iteration steps and CPU time. It should be noted that in these tables, the symbol “–” means that the corresponding algorithm will not be convergent within the prescribed number of restarts \(k_\text {max}=1000\).
From the figures and tables, we can draw the following conclusions:
-
(i)
Figure 1 echoes Theorem 3.1 in that the preconditioned matrix \({{\mathscr {P}}}_\text {VDPSS}^{-1} {\mathscr {A}}\) has at least n eigenvalues equal to 1. In fact, there are 2178 (the dimension n) eigenvalues of value 1 in subplots (b), (d) and (f) of Fig. 1, as compared to none in subplots (a), (c) and (e) (using the DPSS preconditioner). From the figures, we find that the non-unit eigenvalues converge to 0 when \(\alpha \rightarrow 0_+\) and \(\alpha \rightarrow +\infty \). This phenomenon confirms the statement in Remark 3.1.
-
(ii)
Tables 1, 2, 3 indicate that using the VDPSS preconditioner often leads to much better performance than using the DPSS and RDPSS preconditioners in terms of iteration counts and CPU time. Especially for large values of \(\alpha \), the VDPSS preconditioner yields a significant reduction in the iteration counts, while the other two preconditioners are difficult to implement efficiently; the RDPSS preconditioner even sometimes fails to converge. Note, however, that the new preconditioner is not competitive for small \(\alpha \). Apart from that, the new preconditioner seems to have difficulty in dealing with low viscosity, that is, the iteration counts increase as the kinematic viscosity decreases.
4.2 Discretizing with Q2-P1 finite element on stretched grids
To further investigate the VDPSS preconditioner, we also consider discretizing the Oseen problem with Q2-P1 finite element on stretched grids. Similar to the uniform grids, we also test four values of the parameter \(\alpha \), that is, \(\alpha =0.001\), \(\alpha =1\), \(\alpha =10\) and \(\alpha =100\) with the kinematic viscosity \(\nu =0.01\), \(\nu =0.1\) and \(\nu =1\).
Figure 2 depicts the eigenvalue distribution of the preconditioned matrices \({{\mathscr {P}}}_\text {DPSS}^{-1}{\mathscr {A}}\) and \({{\mathscr {P}}}_\text {VDPSS}^{-1}{\mathscr {A}}\) for \(\alpha =0.001\), \(\alpha =1\) and \(\alpha =100\) on \(32\times 32\) stretched grids with \(\nu =0.1\). Tables 4, 5, 6 demonstrate the numerical results for \(\nu =1\), \(\nu =0.1\) and \(\nu =0.01\) on the stretched grids, where the default stretch factors are provided by IFISS.
The conclusions obtained from Fig. 2 and Tables 4, 5, 6 are similar to those given in Sect. 4.1. The VDPSS preconditioner leads to much better performance than the DPSS and RDPSS preconditioners for large values of \(\alpha \). The new preconditioner also has difficulty in dealing with low-viscosity problems on stretched grids.
5 Conclusion
In this paper, we present a variant of the deteriorated positive-definite and skew-Hermitian splitting (VDPSS) preconditioner for nonsymmetric saddle point problem (1.1). The proposed preconditioner is based on the deteriorated positive-definite and skew-Hermitian splitting (DPSS) preconditioner, but is much closer to the coefficient matrix. The eigenvalue distribution and the acceleration effect (as a preconditioner) on GMRES(30) for solving the Oseen problems have been investigated. Numerical experiments have shown that the new preconditioner is effective. However, how to deal with low-viscosity problems still requires further in-depth studies. Meanwhile, future work should also focus on the choice of the optimal parameter \(\alpha \).
References
Arrow, K.J., Hurwicz, L., Uzawa, H.: Studies in Linear and Non-Linear Programming. Stanford University Press, Stanford (1958)
Bai, Z.-Z.: Structured preconditioners for nonsingular matrices of block two-by-two structures. Math. Comput. 75, 791–815 (2006)
Bai, Z.-Z., Golub, G.H., Lu, L.-Z., Yin, J.-F.: Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 26, 844–863 (2005)
Bai, Z.-Z.: A class of modified block SSOR preconditioners for symmetric positive definite systems of linear equations. Adv. Comput. Math. 10, 169–186 (1999)
Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)
Bai, Z.-Z., Golub, G.H., Pan, J.-Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math. 98, 1–32 (2004)
Bai, Z.-Z., Ng, M.K.: On inexact preconditioners for nonsymmetric matrices. SIAM J. Sci. Comput. 26, 1710–1724 (2005)
Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)
Bai, Z.-Z.: Optimal parameters in the HSS-like methods for saddle-point problems. Numer. Linear Algebra Appl. 16, 447–479 (2009)
Bai, Z.-Z., Golub, G.H., Li, C.-K.: Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices. Math. Comput. 76, 287–298 (2007)
Bai, Z.-Z., Parlett, B.N., Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)
Bai, Z.-Z., Wang, Z.-Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
Bai, Z.-Z.: Motivations and realizations of Krylov subspace methods for large sparse linear systems. J. Comput. Appl. Math. 283, 71–78 (2015)
Bai, Z.-Z., Ng, M.K., Wang, Z.-Q.: Constraint preconditioners for symmetric indefinite matrices. SIAM J. Matrix Anal. Appl. 31, 410–433 (2009)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Benzi, M., Guo, X.-P.: A dimensional split preconditioner for Stokes and linearized Navier-Stokes equations. Appl. Numer. Math. 61, 66–76 (2011)
Benzi, M., Golub, G.H.: A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 26, 20–41 (2004)
Cao, Z.-H.: Positive stable block triangular preconditioners for symmetric saddle point problems. Appl. Numer. Math. 57, 899–910 (2007)
Cao, Y., Yao, L.-Q., Jiang, M.-Q.: A modified dimensional split preconditioner for generalized saddle point problems. J. Comput. Appl. Math. 250, 70–82 (2013)
Cao, Y., Dong, J.-L., Wang, Y.-M.: A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation. J. Comput. Appl. Math. 273, 41–60 (2015)
Dollar, H.S., Wathen, A.J.: Approximate factorization constraint preconditioners for saddle-point matrices. SIAM J. Sci. Comput. 27, 1555–1572 (2006)
Dollar, H.S.: Constraint-style preconditioners for regularized saddle-point problems. SIAM J. Matrix Anal. Appl. 29, 672–684 (2007)
Elman, H.C., Ramage, A., Silvester, D.J.: IFISS: a Matlab toolbox for modelling incompressible flow. ACM Trans. Math. Software 33, Article 14 (2007)
Golub, G.H., Wu, X., Yuan, J.-Y.: SOR-like methods for augmented systems. BIT 41, 71–85 (2001)
Ipsen, I.C.F.: A note on preconditioning nonsymmetric matrices. SIAM J. Sci. Comput. 23, 1050–1051 (2001)
Keller, C., Gould, N.I.M., Wathen, A.J.: Constraint preconditioning for indefinite linear systems. SIAM J. Matrix Anal. Appl. 21, 1300–1317 (2000)
Murphy, M.F., Golub, G.H., Wathen, A.J.: A note on preconditioning for indefinite linear systems. SIAM J. Sci. Comput. 21, 1969–1972 (2000)
Pan, J.-Y., Ng, M.K., Bai, Z.-Z.: New preconditioners for saddle point problems. Appl. Math. Comput. 172, 762–771 (2006)
Sturler, E.D., Liesen, J.: Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems. SIAM J. Sci. Comput. 26, 1598–1619 (2005)
Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2003)
Saad, Y., Schultz, M.H.: GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 7, 856–869 (1986)
Saad, Y.: A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 14, 461–469 (1993)
Zhang, J.-L., Gu, C.-Q., Zhang, K.: A relaxed positive-definite and skew-Hermitian splitting preconditioner for saddle point problems. Appl. Math. Comput. 249, 468–479 (2014)
Communicated by Zhong-Zhi Bai.
The work is supported by National Natural Science Foundation (11371243), Innovation major project of Shanghai Municipal Education Commission (13ZZ068), Key Disciplines of Shanghai Municipality (S30104), and by Foundation of Zhejiang Educational Committee (Y201431769).
Zhang, J., Gu, C. A variant of the deteriorated PSS preconditioner for nonsymmetric saddle point problems. Bit Numer Math 56, 587–604 (2016). https://doi.org/10.1007/s10543-015-0590-9