Abstract
Because of its indefiniteness and poor spectral properties, the linear algebraic system arising from mixed finite element discretizations of the vector Laplacian is hard to solve. A block diagonal preconditioner has been developed and shown to be effective by Arnold et al. (Acta Numer 15:1–155, 2006). The purpose of this paper is to propose alternative and effective block diagonal and approximate block factorization preconditioners for solving these saddle point systems. A variable V-cycle multigrid method with the standard point-wise Gauss–Seidel smoother is proved to be a good preconditioner for the discrete vector Laplacian operator. The major benefit of our approach is that the point-wise Gauss–Seidel smoother is more algebraic and can be easily implemented as a black-box smoother. This multigrid solver will further be used to build preconditioners for the saddle point systems of the vector Laplacian. Furthermore, it is shown that Maxwell’s equations with the divergence-free constraint can be decoupled into one vector Laplacian and one scalar Laplacian equation.
1 Introduction
Discretization of the vector Laplacian in spaces \(\varvec{H}_{0}(\mathrm{curl\,})\) and \(\varvec{H}_{0}({\text {div}})\) by mixed finite element methods is well-studied in [1, 3]. The discretized linear algebraic system is ill-conditioned and in saddle point form, which leads to slow convergence of classical iterative methods as the size of the system becomes large. In [1], a block diagonal preconditioner has been developed and shown to be effective. The purpose of this paper is to present alternative and effective block diagonal and approximate block factorization preconditioners for solving these saddle point systems.
Due to the similarity of the problems arising from spaces \(\varvec{H}_{0}(\mathrm{curl\,})\) and \(\varvec{H}_{0}({\text {div}})\), we use the mixed formulation of the vector Laplacian in \(\varvec{H}_{0}(\mathrm{curl\,})\) as an example to illustrate our approach. Choosing appropriate finite element spaces \(S_{h} \subset H_{0}^{1}\) (a vertex element space) and \( \varvec{U}_{h} \subset \varvec{H}_{0}(\mathrm{curl\,})\) (an edge element space), the mixed formulation is: Find \(\sigma _h \in S_{h}, \varvec{u}_h\in \varvec{U}_{h}\) such that
The corresponding matrix formulation is
Here \(M_v\) and \(M_f\) are mass matrices of the vertex element and the face element, respectively, \(B^{\intercal }\) corresponds to a scaled \(\mathrm{grad\,}\) operator, and C corresponds to the \(\mathrm{curl\,}\) operator.
Based on the stability of (1) in \(H_0^1\times \varvec{H}_0(\mathrm{curl\,})\) norm, in [1], a block diagonal preconditioner in the form
with \(G = M_e^{-1}B^{\intercal }\) and \(M_e\) the mass matrix of the edge element, is proposed, and the preconditioned Krylov space method is shown to converge with optimal complexity. To compute the inverse operators in the diagonal, one can use multigrid methods based on additive or multiplicative overlapping Schwarz smoothers [2], multigrid methods based on Hiptmair smoothers [18, 19], or the HX auxiliary space preconditioner [21]. To achieve a mesh independent condition number, a special smoother taking care of the large kernel of the \(\mathrm{curl\,}\) (or \({\text {div}}\)) differential operator is needed, which requires more information about the underlying mesh.
In contrast, we shall apply multigrid methods with the standard point-wise Gauss–Seidel (G–S) smoother to the Schur complement of the (1, 1) block
which is a matrix representation of the following identity of the vector Laplacian
In (2), the inverse of the mass matrix, i.e., \(M^{-1}_v\) is dense. To be practical, the exact Schur complement can be replaced by an approximation
with \(\tilde{M}_v\) an easy-to-invert matrix, e.g., the diagonal or a mass lumping of \(M_v\).
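As a minimal illustration of the lumping step, the sketch below uses small hypothetical matrices standing in for \(M_v\) and the scaled gradient \(B\) (toy data, not an actual finite element assembly). It shows that the row-sum lumping \(\tilde{M}_v\) is a trivially invertible diagonal matrix, so \(B^{\intercal }\tilde{M}_v^{-1}B\) inherits the sparsity of B, whereas \(B^{\intercal }M_v^{-1}B\) would involve the dense inverse of \(M_v\):

```python
import numpy as np

def lump_mass(M):
    """Row-sum mass lumping: replace M by the diagonal matrix of its row sums."""
    return np.diag(M.sum(axis=1))

# Hypothetical 3x3 vertex mass matrix (SPD, P1-like pattern), toy data only
Mv = np.array([[2.0, 1.0, 0.0],
               [1.0, 4.0, 1.0],
               [0.0, 1.0, 2.0]]) / 6.0
Mv_tilde = lump_mass(Mv)                  # diagonal, hence trivially invertible

# With a (hypothetical) scaled gradient matrix B, the lumped term
# B^T Mv_tilde^{-1} B is as sparse as B^T B, unlike B^T Mv^{-1} B.
B = np.array([[-1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0]])                # vertices x edges, toy sizes
grad_part = B.T @ np.diag(1.0 / np.diag(Mv_tilde)) @ B
```

Row-sum lumping also preserves the total mass, i.e. the sum of all entries of \(M_v\).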
We shall prove that a variable V-cycle multigrid method using the standard point-wise Gauss–Seidel smoother is a good preconditioner for the Schur complement A or its approximation \(\tilde{A}\). The major benefit of our approach is that the point-wise Gauss–Seidel smoother is more algebraic and can be easily implemented as a black-box smoother without geometric information. The block smoothers proposed in [2] for the \(\varvec{H}(\mathrm{curl\,})\) and \(\varvec{H}({\text {div}})\) problems, however, require more geometric information and the solution of local problems on small patches.
Although the finite element spaces are nested and A is symmetric positive definite, due to the inverse of the mass matrix, the bilinear forms on the coarse grids are non-inherited from the fine one. To overcome this difficulty, we shall follow the multigrid framework developed by Bramble et al. [4]. In this framework, we only need to verify two conditions: (1) the regularity and approximation assumption; (2) the smoothing property. Since A is symmetric and positive definite, the smoothing property of the Gauss–Seidel smoother is well known, see e.g. [5]. To prove the approximation property, we make use of the \(L^2\)-error estimates of mixed finite element methods established in [2] and thus have to assume the full regularity of elliptic equations. Numerically, our method works well even in cases where full regularity does not hold. With the approximation and smoothing properties, we show that one V-cycle with variable smoothing steps is an effective preconditioner. As noticed in [4], a W-cycle with fixed smoothing steps, or two V-cycles with fixed or variable smoothing steps, may not be a valid preconditioner, as the corresponding operator may not be positive definite. In other words, the proposed multigrid method for the Schur complement may not be used as an iterative method, but one variable V-cycle can be used as an effective preconditioner.
The multigrid preconditioner for \(\tilde{A}\) will be used to build preconditioners for the saddle point system (1). We propose a block diagonal preconditioner and an approximate block factorization preconditioner:
The action of \(M_{v}^{-1}\) can be further approximated by \(\tilde{M}_v^{-1}\), and that of \(\tilde{A}^{-1}\) by one V-cycle multigrid. Following the framework of [23], we prove that the system preconditioned by these two preconditioners has a uniformly bounded condition number, by establishing a new stability result for the saddle point system (1) in the \(\Vert \cdot \Vert \times \Vert \cdot \Vert _A\) norm.
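To make the role of the block diagonal preconditioner concrete, here is a small self-contained sketch. The blocks below are randomly generated stand-ins with the same structure (an SPD mass block, a coupling block, and an SPD Schur complement); the sign layout of the saddle point system is illustrative, and exact inverses stand in for the \(\tilde{M}_v^{-1}\) and one-V-cycle approximations used in practice:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n, m = 8, 12                           # toy sizes for the two blocks
M = sp.diags(1.0 + rng.random(n))      # stand-in SPD mass block
Bt = sp.csr_matrix(rng.random((n, m))) # stand-in coupling block B^T
D = (Bt.T @ Bt + sp.identity(m)).tocsr()  # stand-in curl-curl block (SPD here)

K = sp.bmat([[M, Bt], [Bt.T, -D]]).tocsc()  # symmetric saddle point system
A = (Bt.T @ sp.diags(1.0 / M.diagonal()) @ Bt + D).toarray()  # Schur complement

def apply_P(r):
    """Block diagonal preconditioner diag(M, A)^{-1}; in practice M^{-1} is
    replaced by a lumped mass inverse and A^{-1} by one variable V-cycle."""
    return np.concatenate([r[:n] / M.diagonal(), np.linalg.solve(A, r[n:])])

P = LinearOperator((n + m, n + m), matvec=apply_P)
b = rng.random(n + m)
x, info = minres(K, b, M=P, maxiter=200)   # MINRES needs an SPD preconditioner
```

The approximate block factorization preconditioner is applied in the same spirit, with triangular block solves in place of the two independent diagonal solves.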
As an application we further consider a prototype of the Maxwell equations with a divergence-free constraint
A regularized system obtained by the augmented Lagrangian method [16] has the form
where \(\tilde{A}\) is the vector Laplacian defined in (3). We can factorize the system (5) as
where \(A_p = BG\) is a scalar Laplacian operator. We invert \(\tilde{A}\) and \(A_p\) by the preconditioned conjugate gradient method with one V-cycle multigrid. That is, we can solve the Maxwell equation by inverting two Laplacian operators, with no need to solve the saddle point system. Our method is new and different from the solvers proposed in [13, 14].
Our results can be easily generalized to the mixed discretization of the Hodge Laplacian in discrete differential forms [1, 3]. We keep the concrete setting of \(\varvec{H}(\mathrm{curl\,})\) and \(\varvec{H}({\text {div}})\) conforming finite element spaces for ease of access to these results.
The paper is organized as follows. In Sect. 2, we introduce the discretization of the mixed formulation of the vector Laplacian and prove stability results. In Sect. 3, we consider multigrid methods for the discrete vector Laplacian and verify the approximation and smoothing properties. In Sect. 4, we propose the uniform preconditioners for the vector Laplacian and apply them to the Maxwell equation in saddle point form. Finally, we support our theoretical results with numerical experiments.
2 Discretization
In this section, we first recall the function spaces and finite element spaces, and then present discrete formulations of the vector Laplacian problems. We shall define a new norm using the Schur complement and present corresponding Poincaré inequalities and inverse inequalities.
We assume that \({\varOmega }\) is a bounded and convex polyhedron in \(\mathbb {R}^3\), and it is triangulated into a mesh \({\mathcal {T}}_h\) with size h. We assume that the mesh \({\mathcal {T}}_h\) belongs to a shape regular and quasi-uniform family.
2.1 Function Spaces and Finite Element Spaces
Denote by \(L^2({\varOmega })\) the space of all square integrable scalar or vector functions on \({\varOmega }\), by \((\cdot ,\cdot )\) both the scalar and vector \(L^2\)-inner product, and by \(\Vert \cdot \Vert \) both the scalar and vector \(L^2\) norm. Given a differential operator \({\mathcal {D}} = \mathrm{grad\,}, \mathrm{curl\,},\) or \({\text {div}}\), introduce the Sobolev space \(H({\mathcal {D}},{\varOmega }) = \{v\in L^2({\varOmega }), {\mathcal {D}} v\in L^2({\varOmega })\}\). For \({\mathcal {D}}=\mathrm{grad\,}, H(\mathrm{grad\,},{\varOmega })\) is the standard \(H^1({\varOmega })\). For simplicity, we will suppress the domain \({\varOmega }\) in the notation. Let \(\varvec{n}\) be the outward unit normal vector of \(\partial {\varOmega }\). We further introduce the following Sobolev spaces on the domain \({\varOmega }\) with homogeneous traces:
Then, recall the following finite element spaces:
-
\(S_h\subset H^1_0\) is the Lagrange element space, i.e., continuous piecewise polynomials,
-
\(\varvec{U}_h\subset \varvec{H}_0(\mathrm{curl\,})\) is the edge element space [26, 27],
-
\(\varvec{V}_h\subset \varvec{H}_0({\text {div}})\) is the face element space [6,7,8, 26, 27, 29],
-
\(W_h\subset L^2_0\) is the discontinuous and piecewise polynomial space.
To discretize the vector Laplacian problem posed in \(\varvec{H}_{0}({\text {div}})\) or \(\varvec{H}_{0}(\mathrm{curl\,})\), we start from the following de Rham complex
Choose appropriate degrees and types of finite element spaces such that the discrete de Rham complex holds
An important example is: \(S_{h}\) is the linear Lagrange element; \(\varvec{U}_{h}\) is the lowest order Nédélec edge element; \(\varvec{V}_{h}\) is the lowest order Raviart–Thomas element, and \(W_{h}\) is the piecewise constant space. By our assumption on \({\varOmega }\), both (7) and (8) are exact, i.e., \(\ker (\mathrm{curl\,}) = \mathrm{img\,}(\mathrm{grad\,})\) and \(\ker ({\text {div}}) = \mathrm{img\,}(\mathrm{curl\,})\).
We now define weak differential operators and introduce the following exact sequence in the reversed ordering:
The weak divergence \({\text {div}}_{h}: \varvec{U}_{h} \rightarrow S_h\) is defined as the adjoint of \(-\mathrm{grad\,}\) operator in the \(L^2\)-inner product, i.e., \({\text {div}}_{h}\varvec{w}_h \in S_h\), s.t.,
Weak operator \(\mathrm{curl\,}_{h}\) and weak operator \(\mathrm{grad\,}_{h}\) are defined similarly. For a given \(\varvec{w}_{h} \in \varvec{V}_{h}\), define \(\mathrm{curl\,}_{h}\varvec{w}_{h}\in \varvec{U}_{h}\) as
For a given \(w_{h} \in W_{h}\), define \(\mathrm{grad\,}_{h}w_{h}\in \varvec{V}_{h}\) as
The exactness of (9) can be easily verified by the definition of weak differential operators and the exactness of (8). Note that the inverse of mass matrices will be involved when computing the weak differential operators and thus they are global operators.
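To see the global nature of the weak operators, consider a one-dimensional toy analogue, in which the matrices G, Mv, Me below are hypothetical stand-ins for the discrete gradient and the vertex and edge mass matrices: computing \({\text {div}}_{h}\) amounts to a mass-matrix solve.

```python
import numpy as np

h = 0.25
G = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0, 1.0]]) / h   # toy discrete gradient: edges x vertices
Mv = np.diag([0.5, 1.0, 1.0, 0.5]) * h      # toy (lumped) vertex mass matrix
Me = np.eye(3) * h                          # toy edge mass matrix

def weak_div(w):
    """div_h w is defined by (div_h w, tau) = -(w, grad tau) for all tau,
    i.e. the mass-matrix solve Mv d = -G^T Me w; the inverse of Mv makes
    div_h a global (non-local) operator."""
    return np.linalg.solve(Mv, -G.T @ (Me @ w))
```

By construction, the discrete adjoint identity \((\mathrm{grad\,}p, \varvec{w}) = -(p, {\text {div}}_h \varvec{w})\) holds exactly for this sketch.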
We introduce the null space of differential operators:
and the null space of weak differential operators
Similar notation \(Z^c, Z^d\) will be used for the null spaces in the continuous level when the subscript h is skipped. The spaces \(K^c\) and \(K^d\) are defined as
The superscript \(^c\) or \(^d\) indicates the ambient space \(H(\mathrm{curl\,})\) or \(H({\text {div}})\), respectively.
According to the exact sequence (8), we have the discrete Hodge decompositions [1]:
The notation \(\oplus ^{\bot }\) stands for the \(L^2\) orthogonal decomposition. These discrete versions of the Hodge decomposition play an important role in the analysis.
We update the exact sequences as:
The space at the head of each arrow is the range of the operator, while the space at the tail is its actual domain; the two are isomorphic under the corresponding differential operator.
2.2 Discrete Formulations of the Vector Laplacian
On continuous level, the mixed formulation of the vector Laplacian in space \(\varvec{H}_{0}(\mathrm{curl\,})\) is: Find \(\sigma \in H_{0}^{1}, \varvec{u}\in \varvec{H}_{0}(\mathrm{curl\,})\) such that
The problem (14) on the discrete level is: Find \(\sigma _h \in S_{h},\varvec{u}_h\in \varvec{U}_{h}\) such that
Note that the first equation of (15) can be interpreted as \(\sigma _h = -{\text {div}}_{h}\varvec{u}_h\) and in the second equation of (15) the term \((\mathrm{grad\,}\sigma _h,\varvec{v}_h) = - (\sigma _h, {\text {div}}_{h}\varvec{v}_h)\). After eliminating \(\sigma _h\) from the first equation, we can write the discrete vector Laplacian for edge elements as
which is a discretization, for smooth \(\varvec{u}\), of the identity \(-\varDelta \varvec{u} = -\mathrm{grad\,}{\text {div}}\varvec{u} + \mathrm{curl\,}\mathrm{curl\,}\varvec{u}\).
Choosing appropriate bases for the finite element spaces, we can represent the spaces \(S_{h}\) and \(\varvec{U}_{h}\) by \(\mathbb R^{\dim S_{h}}\) and \(\mathbb R^{\dim \varvec{U}_{h}}\), respectively. In the following, we shall use the same notation for the vector representation of a function if no ambiguity arises. Then we have the corresponding operator and matrix formulations as: \(\mathcal L_{h}^{c} : S_{h}\times \varvec{U}_{h} \rightarrow S_{h}'\times \varvec{U}_{h}'\)
Here \(M_v, M_e\) and \(M_f\) are mass matrices of the vertex element, edge element and the face element, respectively, \(B^{\intercal } = M_e G\) corresponds to a scaling of the \(\mathrm{grad\,}\) operator G, and C to the \(\mathrm{curl\,}\) operator. We follow the convention of Stokes equations to reserve B for the (negative) divergence operator. Note that to form the corresponding matrices of weak derivative operators, the inverse of mass matrices will be involved. The Schur complement
is the matrix representation of discrete vector Laplacian (16). The system (17) can be reduced to the Schur complement equation
Similarly, the mixed formulation of the vector Laplacian in space \(\varvec{H}_{0}({\text {div}})\) is: Find \(\varvec{\sigma }\in \varvec{H}_{0}(\mathrm{curl\,}),\varvec{u}\in \varvec{H}_{0}({\text {div}})\) such that
The corresponding discrete mixed formulation is: Find \(\varvec{\sigma }_{h} \in \varvec{U}_{h},\varvec{u}_{h}\in \varvec{V}_{h}\) such that
Eliminating \(\varvec{\sigma }_{h}\) from the first equation of (21), we have the discrete vector Laplacian for face elements as
and the operator and matrix formulations are: \(\mathcal L_{h}^{d} : \varvec{U}_{h}\times \varvec{V}_{h} \rightarrow \varvec{U}_{h}'\times \varvec{V}_{h}'\)
where \(M_t\) denotes the mass matrix of the discontinuous element. The Schur complement \(A^{d} = CM_{e}^{-1}C^{\intercal } + B^{\intercal }M_t B\) is the matrix representation of discrete vector Laplacian (22). Similarly, the reduced equation of (23) is
We shall consider multigrid methods for solving (19) and (24) and use them to construct efficient preconditioners for the corresponding saddle point systems (17) and (23), respectively.
2.3 Discrete Poincaré Inequality and Inverse Inequality
In this subsection, we define the norms associated with the discrete vector Laplacian, and prove discrete Poincaré and inverse inequalities.
Definition 1
For \( \varvec{u}_{h} \in \varvec{U}_{h}\), define \(\Vert \varvec{u}_{h}\Vert _{A^{c}_{h}}^{2} =a^{c}_{h}(\varvec{u}_{h},\varvec{u}_{h})\), where the bilinear form \(a^{c}_{h}(\cdot , \cdot )\) is defined as
Similarly, for \(\varvec{u}_{h} \in \varvec{V}_{h}\), define \(\Vert \varvec{u}_{h}\Vert _{A^{d}_{h}}^{2} =a^{d}_{h}(\varvec{u}_{h}, \varvec{u}_{h})\), where the bilinear form \(a^{d}_{h}(\cdot , \cdot )\) is defined as
Lemma 1
(Discrete Poincaré Inequality) We have the following discrete Poincaré inequalities:
Proof
We prove the first inequality (25) and refer to [10] for a proof of (26). From the discrete Hodge decomposition, we have: for any \(\varvec{u}_h \in \varvec{U}_h\), there exist \(\rho \in S_{h}\) and \(\varvec{\phi }\in Z_h^d\) such that
Applying \(-{\text {div}}_{h}\) to (27), we have \(-{\text {div}}_{h}\varvec{u}_h = -{\text {div}}_{h} \mathrm{grad\,}\rho \), thus
which leads to
To control the other part, we first prove a discrete Poincaré inequality in the form
By the exactness of the complex (13), there exists \( \varvec{v} \in K_h^c\) such that \(\varvec{\phi }= \mathrm{curl\,}\varvec{v}\). We recall another Poincaré inequality [20, 25]
Then we have
Canceling one \(\Vert \varvec{\phi }\Vert \), we obtain the desired inequality (29).
Applying \(\mathrm{curl\,}\) to the Hodge decomposition (27) and using the inequality (29), we have \(\mathrm{curl\,}\varvec{u}_h = \mathrm{curl\,}\mathrm{curl\,}_{h} \varvec{\phi }\), thus
which leads to the inequality
Combine inequalities (28) and (30), we have proved that
The proof is thus complete. \(\square \)
A general version of the discrete Poincaré inequality for differential forms can be found in [11, Theorem 5].
Lemma 2
(Inverse Inequality)
Proof
It suffices to prove that
since, for the conforming cases, the inverse inequalities
are well known.
For any \(\varvec{u}_{h} \in \varvec{U}_{h}\), let \(\sigma _{h} = -{\text {div}}_{h} \varvec{u}_{h}\), then
which implies (31). The proof of (32) is analogous. \(\square \)
3 Multigrid Methods for Discrete Vector Laplacian
In this section, we describe a variable V-cycle multigrid algorithm to solve the Schur complement equations (19) and (24), and prove that it is a good preconditioner.
3.1 Problem Setting
Let us assume that nested tetrahedral partitions of \({\varOmega }\) are given as
and the corresponding \(H_0^1\), \(\varvec{H}_0(\mathrm{curl\,})\) and \(\varvec{H}_0({\text {div}})\) finite element spaces are
For a technical reason, we assume that the edge element space and the face element space contain the full space of linear polynomials, which rules out only the lowest order case. When no ambiguity can arise, we replace the subscript h by the level index k for \(k=1,2,\ldots , J\).
The discretization (14) of the mixed formulation of the vector Laplacian in space \(\varvec{H}_{0}(\mathrm{curl\,})\) based on \({\mathcal {T}}_k\), for \(k=1,2,\ldots , J\), can be written as
Eliminating \(\sigma _k\) from (33), we get the reduced Schur complement equation
The discretization (20) of the mixed formulation of vector Laplacian in space \(\varvec{H}_{0}({\text {div}})\) on \({\mathcal {T}}_k\), for \(k=1,2,\ldots , J\), can be written as
and the reduced Schur complement equation is
We are interested in preconditioning the Schur complement equations (34) and (36) in the finest level, i.e., \(k=J\).
Notice that, for \(k<J\), \(A_{k}^{c}\) and \(A_{k}^{d}\) are defined by the discretization of the vector Laplacian on the triangulation \({\mathcal {T}}_{k}\), but not by the Galerkin projection of \(A_{J}^{c}\) or \(A_{J}^{d}\), since the inverse of a mass matrix is involved. In other words, \(A_{k}^{c}\) and \(A_{k}^{d}\) are non-inherited from \(A_{J}^{c}\) or \(A_{J}^{d}\) for \(k<J\).
When necessary, notation without the superscript c and d is used to unify the discussion. The notation \(\mathcal V_{k}\) is used to represent both \(\varvec{U}_{k}\) and \(\varvec{V}_{k}\) spaces.
3.2 A Variable V-cycle Multigrid Method
We introduce some operators first. Let \(R_{k}\) denote a smoothing operator on level k, which is assumed to be symmetric and convergent. Let \(I^{k}\) denote the prolongation operator from level \(k-1\) to level k, which is the natural inclusion since finite element spaces are nested. The transpose \(Q_{k-1} = (I^k)^{\intercal }\) represents the restriction from level k to level \(k-1\). The Galerkin projection \(P_{k-1}\), which is from level k to level \(k-1\), is defined as: for any given \(\varvec{u}_{k} \in \mathcal V_{k}, P_{k-1}\varvec{u}_k\in \mathcal V_{k-1}\) satisfies
The variable V-cycle multigrid algorithm is as follows.
Algorithm 1. Multigrid Algorithm: \(\varvec{u}_{k}^{MG} = MG_{k}(\varvec{f}_{k}; \varvec{u}_{k}^{0}, m_{k})\)

Set \(MG_{1} = A_1^{-1}\).

For \(k\ge 2\), assume that \(MG_{k-1}\) has been defined. Define \(MG_{k}(\varvec{f}_{k}; \varvec{u}_{k}^{0}, m_{k})\) as follows:

- Pre-smoothing: Define \(\varvec{u}_{k}^{l}\) for \(l=1, 2, \ldots , m_{k}\) by

  \( \varvec{u}_k^l = \varvec{u}_k^{l-1} + R_k( \varvec{f}_k - A_k \varvec{u}_k^{l-1}).\)

- Coarse-grid correction: Define \(\varvec{u}_k^{m_{k}+1} = \varvec{u}_k^{m_{k}} + I^{k}\varvec{e}_{k-1}\), where

  \( \varvec{e}_{k-1} = MG_{k-1}(Q_{k-1}(\varvec{f}_k - A_k \varvec{u}_k^{m_{k}}); 0, m_{k-1}).\)

- Post-smoothing: Define \(\varvec{u}_{k}^{l}\) for \(l=m_{k}+ 2, \ldots , 2m_{k}+1\) by

  \( \varvec{u}_k^l = \varvec{u}_k^{l-1} + R_k(\varvec{f}_k - A_k \varvec{u}_k^{l-1}).\)

Define \(\varvec{u}_{k}^{MG} = \varvec{u}_{k}^{2m_{k}+1}\).
In this algorithm, \(m_{k}\) is a positive integer which may vary from level to level, and determines the number of smoothing iterations on the k-th level, see [4, 5].
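Algorithm 1 can be sketched as follows for a model problem. The 1D Laplacian hierarchy, the linear interpolation operator, and the smoothing schedule below are illustrative stand-ins, not the paper's discrete vector Laplacian; the schedule doubles \(m_k\) on each coarser level, as in a variable V-cycle.

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps):
    """Point-wise forward Gauss-Seidel: the 'black-box' smoother R_k."""
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

def vcycle(level, b, x0, m, As, Ps):
    """One variable V-cycle MG_k(b; x0, m_k) following Algorithm 1."""
    A = As[level]
    if level == 0:
        return np.linalg.solve(A, b)                  # MG_1 = A_1^{-1}
    x = gauss_seidel(A, b, x0.copy(), m[level])       # pre-smoothing
    P = Ps[level]                                     # prolongation I^k
    e = vcycle(level - 1, P.T @ (b - A @ x),          # coarse-grid correction
               np.zeros(P.shape[1]), m, As, Ps)
    x += P @ e
    return gauss_seidel(A, b, x, m[level])            # post-smoothing

def laplacian(n, h):                                  # 1D model stiffness matrix
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

def prolongation(nc):                                 # linear interpolation
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j, j] += 0.5; P[2*j + 1, j] = 1.0; P[2*j + 2, j] += 0.5
    return P

J = 4
ns = [2**(k + 2) - 1 for k in range(J)]               # 3, 7, 15, 31 unknowns
As = [laplacian(n, 1.0/(n + 1)) for n in ns]          # rediscretized on each level
Ps = [None] + [prolongation(ns[k - 1]) for k in range(1, J)]
m = [0] + [2**(J - k) for k in range(1, J)]           # m_k doubles when coarsening
```

For this model problem the V-cycle also converges as an iteration; recall that for the non-inherited vector Laplacian hierarchy only its use as a preconditioner is justified.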
3.3 Multigrid Analysis Framework
We employ the multigrid analysis framework developed in [4]. Denote by \(\lambda _{k}\) the largest eigenvalue of \(A_{k}\). For the multigrid algorithm to be a good preconditioner for \(A_k\), we need to verify the following assumptions:
-
(A.1)
“Regularity and approximation assumption”: For some \(0<\alpha \le 1\),
$$\begin{aligned} \left| a_k((I-P_{k-1})\varvec{u}_k, \varvec{u}_k)\right| \le C_A\left( \frac{\Vert A_k\varvec{u}_k\Vert ^2}{\lambda _k}\right) ^{\alpha }a_k( \varvec{u}_k, \varvec{u}_k)^{1-\alpha }\qquad \text {for all }\varvec{u}_k\in \mathcal V_k, \end{aligned}$$holds with constant \(C_{A}\) independent of k;
-
(A.2)
“Smoothing property”:
$$\begin{aligned} \frac{\Vert \varvec{u}_k\Vert ^2}{\lambda _k}\le C_R (R_k \varvec{u}_k, \varvec{u}_k)\qquad \text {for all } \varvec{u}_k\in \mathcal V_k, \end{aligned}$$holds with constant \(C_{R}\) independent of k.
By standard arguments, one can show that the largest eigenvalue \(\lambda _k\) of \(A_k\) satisfies \(\lambda _k \eqsim h_k^{-2}\) for \(k=1,2,\ldots , J\).
The symmetric Gauss–Seidel (SGS) or a properly weighted Jacobi iteration both satisfy the smoothing property (A.2), a proof of which can be found in [5].
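For symmetric A, the SGS sweep has the explicit matrix form \(R = (D+L^{\intercal })^{-1}D(D+L)^{-1}\), where \(A = L + D + L^{\intercal }\). The following sanity check, on a small model matrix rather than the operator \(A_k\), verifies that R is symmetric positive definite and the induced iteration convergent:

```python
import numpy as np

def sgs_matrix(A):
    """R for symmetric Gauss-Seidel: one sweep is x <- x + R (b - A x), with
    R = (D+U)^{-1} D (D+L)^{-1}; here A is symmetric, so U = L^T."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    I = np.eye(len(A))
    return np.linalg.solve(D + L.T, D @ np.linalg.solve(D + L, I))

A = 2*np.eye(6) - np.eye(6, k=1) - np.eye(6, k=-1)   # small model SPD matrix
R = sgs_matrix(A)
```

The smoothing property (A.2) then amounts to a uniform lower bound on the smallest eigenvalue of \(\lambda _k R_k\).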
3.4 Regularity Results
In this subsection, we develop an \(H^2\) regularity result for Maxwell’s equation based on a regularity assumption on the intersection space \(\varvec{H}_0({\text {div}}; {\varOmega })\cap \varvec{H}(\mathrm{curl\,}; {\varOmega })\). We first present a classical \(H^1\)-regularity result. Recall that we assume \({\varOmega }\) is a bounded and convex polyhedron throughout this paper.
Lemma 3
(Theorems 3.7 and 3.9 in [17]) The spaces \(\varvec{H}({\text {div}}; {\varOmega })\cap \varvec{H}_0(\mathrm{curl\,}; {\varOmega })\) and \(\varvec{H}_0({\text {div}}; {\varOmega })\cap \varvec{H}(\mathrm{curl\,}; {\varOmega })\) are continuously embedded in \(\varvec{H}^1({\varOmega })\) and
for all functions \(\varvec{\phi }\in \varvec{H}({\text {div}}; {\varOmega })\cap \varvec{H}_0(\mathrm{curl\,}; {\varOmega })\) or \(\varvec{H}_0({\text {div}}; {\varOmega })\cap \varvec{H}(\mathrm{curl\,}; {\varOmega })\).
To prove the \(H^2\) regularity of Maxwell’s equation, we require the following regularity assumption.
Assumption 1
Assume that \({\varOmega }\) in \(\mathbb R^3\) is a bounded and convex polyhedron domain. For any function \(\varvec{\xi }\in \varvec{M}_0 =\{\varvec{v} \in \varvec{H}_0({\text {div}}; {\varOmega })\cap \varvec{H}(\mathrm{curl\,}; {\varOmega });\ \mathrm{curl\,}\varvec{v}\in \varvec{H}^1({\varOmega }),\ {\text {div}}\varvec{v} = 0\}\), there holds \(\varvec{\xi }\in \varvec{H}^2({\varOmega })\) and
It should be pointed out that such a result holds on \({\mathcal {C}}^{2,1}\) domains, see [16, Corollary 3.7], but we are not able to adapt it to convex polyhedra. We are now in a position to present the following \(H^2\) regularity of Maxwell’s equation.
Lemma 4
For any \(\varvec{\psi }\in K^c\), define \(\varvec{\zeta }\in K^c\) to be the solution of
Then \(\mathrm{curl\,}\varvec{\zeta }\in \varvec{H}^2({\varOmega })\) and
Proof
The problem (38) is well-posed due to the following Poincaré inequality (see Corollary 4.4 in [20] or Corollary 3.51 in [25])
Then \(\mathrm{curl\,}\varvec{\zeta }\in \varvec{H}_0({\text {div}}; {\varOmega })\) with \({\text {div}}\mathrm{curl\,}\varvec{\zeta }= 0\). Taking \(\varvec{\theta }\in (K^c)^\bot \), by the exactness of the sequence, \(\varvec{\theta }\in Z^c\) and \(\mathrm{curl\,}\varvec{\theta }= 0\). Thus (38) implies
Therefore,
The desired \(H^1\) regularity (39) of \(\mathrm{curl\,}\varvec{\zeta }\) then follows from Lemma 3.
As \(\varvec{\psi }\in K^c\), we have
Again applying Lemma 3 to \(\mathrm{curl\,}\mathrm{curl\,}\varvec{\zeta }\in \varvec{H}_0(\mathrm{curl\,}) \cap \varvec{H}({\text {div}})\), it holds
The desired result (40) is then obtained by Assumption 1. \(\square \)
3.5 Error Estimate of Several Projection Operators
We define several projection operators to the null space \(K_h^{{\mathcal {D}}}\). Given \(u\in H({\mathcal {D}})\), define \(P_h^{{\mathcal {D}}} u \in K_h^{{\mathcal {D}}}\) such that
Equation (41) determines \(P_h^{{\mathcal {D}}} u\) uniquely since \(({\mathcal {D}} \cdot , {\mathcal {D}} \cdot )\) is an inner product on the subspace \(K_h^{{\mathcal {D}}}\) which can be proved using the Poincaré inequality (Lemma 1). For \({\mathcal {D}} = \mathrm{grad\,}\), we understand \(K_h^{\mathrm{grad\,}}\) as \(S_h\).
Lemma 5
(Theorem 2.4 in Monk [24]) Suppose that \(\varvec{u} \in \varvec{H}^{k+1}\) and let \(\varvec{u}_h = P_h^c \varvec{u}\) be its projection onto \(\varvec{U}_h\), which contains polynomials of degree less than or equal to k. Then we have the error estimate
We are also interested in the estimate of projections between two consecutive finite element spaces. Following the convention of the multigrid community, for any \(2< k\le J\), let \({\mathcal {T}}_H = {\mathcal {T}}_{k-1}\) and \({\mathcal {T}}_h = {\mathcal {T}}_k\). Notice that the ratio \(H/h \le C\).
The following error estimates are obtained in [2].
Lemma 6
Given \(\varvec{u}_h\in K_h^c\), let \(\varvec{u}_H = P_H^c\varvec{u}_h\). Then
Lemma 7
Given \(\varvec{v}_h\in K_h^d\), let \(\varvec{v}_H = P_H^d\varvec{v}_h\). Then
We now introduce \(L^2\) projections to \(K^c\) and \(K^c_h\). Let \(Q_{K}^c: \varvec{L}^2 \rightarrow K^c\) be the \(L^2\)-projection to \(K^c\). Notice that for \(\varvec{u}\in \varvec{L}^2, \displaystyle Q_K^c \varvec{u} = \varvec{u} - \nabla p\) where \(p\in H_0^1\) is determined by the Poisson equation \(\displaystyle (\nabla p, \nabla q) = (\varvec{u}, \nabla q)\) for all \(q\in H_0^1\). Therefore \(\mathrm{curl\,}Q_K^c \varvec{u} = \mathrm{curl\,}\varvec{u}\). Similarly we define \(Q_h^c: \varvec{L}^2 \rightarrow K_h^c\) as \(Q_h^c \varvec{u} = \varvec{u} - \nabla p\) where \(p\in S_h\) is determined by the Poisson equation \((\nabla p, \nabla q) = (\varvec{u}, \nabla q)\) for all \(q\in S_h\). We have the following error estimate, cf. [2, 31].
Lemma 8
For \(\varvec{u}_h \in K_h^c\), we have
For \(\varvec{u}_H \in K_H^c\), we have
In the estimates (42) and (43), we lift a function in a coarse space to a fine space, while in Lemma 6 we estimate the projection from a fine space to a coarse space. The \(L^2\)-projection \(Q_h^c: K_H^c \rightarrow K_h^c\) can be thought of as a prolongation between the non-nested spaces \(K_H^c\) and \(K_h^c\).
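In matrix form, \(Q_h^c\) subtracts a discrete gradient obtained from a Poisson solve. The following algebraic sketch uses a random SPD matrix and a full-column-rank matrix as hypothetical stand-ins for the edge mass matrix \(M_e\) and the gradient matrix G:

```python
import numpy as np

rng = np.random.default_rng(1)
ne, nv = 10, 4                       # toy numbers of edge / vertex unknowns
X = rng.random((ne, ne))
Me = X @ X.T + ne * np.eye(ne)       # stand-in SPD edge mass matrix
G = rng.random((ne, nv))             # stand-in gradient matrix (full column rank)

def Qc(u):
    """L2 projection onto K_h^c: u -> u - G p, where p solves the discrete
    Poisson problem (grad p, grad q) = (u, grad q), i.e. G^T Me G p = G^T Me u."""
    p = np.linalg.solve(G.T @ Me @ G, G.T @ Me @ u)
    return u - G @ p
```

Since any discrete curl matrix C satisfies \(CG = 0\), this projection leaves the curl unchanged, in accordance with \(\mathrm{curl\,}Q_h^c \varvec{u} = \mathrm{curl\,}\varvec{u}\).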
3.6 Approximation Property of Edge Element Spaces
Let \(\varvec{u}_h\in \varvec{U}_{h}\) be the solution of equation
and \(\varvec{u}_H\in \varvec{U}_{H}\subset \varvec{U}_{h}\) be the solution of equation
Recall the Hodge decomposition
We use the Hodge decomposition to decompose \(\varvec{u}_H = \mathrm{grad\,}\phi _H \bigoplus ^\bot \tilde{\varvec{u}}_{0,H}\) first, and then define \(\varvec{u}_{0,H} = P_H^c \varvec{u}_{0,h}\) and \(\varvec{e}_H = \tilde{\varvec{u}}_{0,H} - \varvec{u}_{0,H}\) to get
Then by Lemma 6, we immediately get the following estimate.
Lemma 9
Let \(\varvec{u}_{0,h} \) and \(\varvec{u}_{0,H}\) be defined as in equations (46) and (48). It holds
Now we turn to the estimate of \(\varvec{e}_H\) defined in equation (48).
Lemma 10
Let \(\varvec{e}_H\in K_H^c\) be defined as in equation (48). It holds
Proof
By equations (44) and (45), we have
where \(g_h\) and \(\varvec{q}_h\) are defined in equation (47). Then
Let \(\varvec{e}_h = Q_h^c\varvec{e}_H\), then \({\text {div}}_h\varvec{e}_h = 0\) and by Lemma 8, we have
Thus it holds
which implies
Using the fact that \({\text {div}}_h\varvec{e}_h = 0\), the inverse inequality and the above inequality, we immediately get
The desired result then follows. \(\square \)
We now explore the relation between \(\phi _h, \phi _H\), and \(g_h\) defined in equations (46)–(47).
Lemma 11
Let \(\phi _h\in S_h\) and \(\phi _H\in S_H\) be defined as in equations (46) and (48). It holds
Proof
For equation (44), test with \(\varvec{v}_{h} \in \mathrm{grad\,}S_{h}\) to get
which implies \(-{\text {div}}_{h} \mathrm{grad\,}\phi _{h}=-{\text {div}}_{h} \varvec{u}_{h} = g_{h}\), i.e.,
From equation (50), we can see that \(\phi _h\) is the Galerkin projection of \(\phi \) to \(S_h\), where \(\phi \in H_0^1({\varOmega })\) satisfies the Poisson equation:
Therefore by the standard error estimate of finite element methods, we have
For equation (45), choosing \(\varvec{v}_{H} = \mathrm{grad\,}\psi _H \in \mathrm{grad\,}S_{H}\), we have
which implies \(-{\text {div}}_{H} \mathrm{grad\,}\phi _{H} = P_{H}^{g}g_{h}\), i.e.,
From equation (51), we can see that \(\phi _H\) is the Galerkin projection of \(\tilde{\phi }\) to \(S_H\), where \(\tilde{\phi }\in H_0^1({\varOmega })\) satisfies the Poisson equation:
The \(H^1\)-projection \(P_H^g\) is not stable in the \(L^2\)-norm. When applied to functions in \(S_h\), however, we can recover stability as follows
In the last step, we have used the fact that the ratio of the mesh size between consecutive levels is bounded, i.e., \(H/h\le C\).
We then have
And by the triangle inequality and the stability of the projection operator \(P_H^{g}\)
Using the error estimate of negative norms and the inverse inequality, we have
Here we use the \(H^{-1}\)-norm estimate, which requires \(S_{H}\) to contain polynomials of degree greater than or equal to 2. Noticing that \(g_h = {\text {div}}_h \varvec{u}_h\), we thus get
\(\square \)
As a summary of the above results, we have the following approximation result.
Theorem 2
Condition (A.1) holds with \(\alpha = \frac{1}{2}\), i.e., for any \(\varvec{u}_k\in \varvec{U}_k\), there holds
Proof
We use h to denote level k and H for level \(k-1\). Let \(\varvec{u}_h\), \(\varvec{u}_H\), and \(\varvec{f}_h\) be defined in equations (44)–(45) which have Hodge decompositions (46), (47), and (48), respectively. The definitions of \(\varvec{u}_h\) and \(\varvec{u}_H\) (44)–(45) imply that
which means \(\varvec{u}_H = P_H\varvec{u}_h\) by the definition of the projection \(P_H\). Let \(\delta _1 =\varvec{u}_{0,h} - \varvec{u}_{0,H}\) and \(\delta _2 = \mathrm{grad\,}\phi _h - \mathrm{grad\,}\phi _H\). By Lemmas 9, 10 and 11, it holds
\(\square \)
3.7 Approximation Property of Face Element Spaces
Let \(\varvec{u}_h\in \varvec{V}_{h}\) be the solution of equation
and \(\varvec{u}_H\in \varvec{V}_{H}\subset \varvec{V}_{h}\) be the solution of equation
We can easily see that \(\varvec{f}_h = A_h^d\varvec{u}_h\).
By the Hodge decomposition, we have
We use the Hodge decomposition to decompose \(\varvec{u}_H = \mathrm{curl\,}\varvec{\phi }_H \bigoplus ^\bot \tilde{\varvec{u}}_{0,H}\) first, and then define \(\varvec{u}_{0,H} = P_H^d \varvec{u}_{0,h}\) and \(\varvec{e}_H = \tilde{\varvec{u}}_{0,H} - \varvec{u}_{0,H}\) to get
By Lemma 7, we immediately have the following result.
Lemma 12
Let \(\varvec{u}_{0,h}\in \mathrm{grad\,}_h W_h\) and \(\varvec{u}_{0,H}\in \mathrm{grad\,}_H W_H\) be defined as in equations (56) and (58). It holds
The estimate of \(\varvec{e}_H\in K_H^d\) defined in equation (58) can be proved analogously to Lemma 10 and thus will be skipped.
Lemma 13
Assume that \(\varvec{e}_H \in \mathrm{grad\,}_H W_H\) is defined as in equation (58). Then it holds
The relation between \(\varvec{\phi }_h,\ \varvec{\phi }_H\) and \(\varvec{g}_h\) defined in equations (56)–(58) is more involved.
Lemma 14
Assume that \(\varvec{\psi }_h\in K_h^c\). Let \(\varvec{\zeta }_h\in K_h^c\) be the solution of equation
and let \(\varvec{\zeta }\in K^c\) be the solution of equation
Then, it holds
Proof
Let \(\tilde{\varvec{\zeta }} _h = P_h^c \varvec{\zeta }\). By Lemma 5, we have
Notice that \(\varvec{\zeta }_h\ne \tilde{\varvec{\zeta }}_h\). Indeed by the definition of \(\tilde{\varvec{\zeta }}_h\), we have
The facts that \(Z_h^c \subset Z^c\) and \(\varvec{U}_h \subset \varvec{H}_0(\mathrm{curl\,})\) imply that \(K_h^c\) has components in both \(Z^c\) and \(K^c\); therefore, it holds
together with the definition of \(\varvec{\zeta }_h\), we have
Thus, with \(\delta _h = \varvec{\zeta }_h - \tilde{\varvec{\zeta }}_h\), we have
The desired result follows by canceling one \(\Vert \mathrm{curl\,}\delta _h\Vert \) and the triangle inequality. \(\square \)
We are now in a position to estimate \(\varvec{\phi }_h\) and \(\varvec{\phi }_H\).
Lemma 15
Let \(\varvec{\phi }_h\in \varvec{U}_h\) and \(\varvec{\phi }_H\in \varvec{U}_H\) be defined as in equations (56) and (58). It holds
Proof
Choose the test function \(\varvec{v}_h = \mathrm{curl\,}\varvec{w}_h\) with \(\varvec{w}_h \in \varvec{U}_h\) in equation (54) to simplify the left hand side of (54) as
and the right hand side becomes
Denote \(\varvec{\tau }_h = \mathrm{curl\,}_h\mathrm{curl\,}\varvec{w}_h\in K_h^c\). We get
Let \(\varvec{\phi }\in K^c\) satisfy the Maxwell equation:
By Lemma 14, we have
When moving to the coarse space, the left hand side of equation (55) can still be simplified to \((\mathrm{curl\,}\varvec{\phi }_H,\mathrm{curl\,}\varvec{\tau }_H)\). But the right hand side becomes
Projecting \(\varvec{g}_h\) to the coarse space, we arrive at the equation
Let \(\tilde{\varvec{\phi }}\in K^c\) satisfy the Maxwell equation:
By Lemma 14, it holds
By the triangle inequality, it remains to estimate \(\Vert \mathrm{curl\,}(\varvec{\phi }- \tilde{\varvec{\phi }})\Vert \). We first write out the error equation for \(\varvec{\phi }- \tilde{\varvec{\phi }}\)
We then apply the standard duality argument. Let \(\varvec{\zeta }\in K^c\) satisfy
Then
where in the last step we have used the \(H^2\)-regularity of Maxwell's equations, cf. Lemma 4. Then choose \(\varvec{\psi }= \varvec{\phi }- \tilde{\varvec{\phi }}\) and cancel one \(\Vert \mathrm{curl\,}(\varvec{\phi }- \tilde{\varvec{\phi }})\Vert \) to get
The estimate of \(\Vert \mathrm{curl\,}\varvec{\phi }_h - \mathrm{curl\,}\varvec{\phi }_H\Vert \) follows from the triangle inequality. \(\square \)
As a summary of the above results, we have the following theorem.
Theorem 3
Condition (A.1) holds with \(\alpha = \frac{1}{2}\), i.e., for any \(\varvec{u}_k\in \varvec{V}_k\), the following estimates hold
3.8 Results
According to the framework in [4], we conclude that the variable V-cycle multigrid algorithm is a good preconditioner for the Schur complement equations (19) and (24). We summarize the result in the following theorem.
Theorem 4
Let \(V_k\) denote the operator of one variable V-cycle of \(MG_k\) in Algorithm 1. Assume the smoothing steps \(m_k\) satisfy
Here we assume that \(\beta _0\) and \(\beta _1\) are constants which are greater than one and independent of k. Then the condition number of \(V_JA_J\) is \(\mathcal O(1)\).
Remark 1
As noticed in [4], the W-cycle or the V-cycle with a fixed number of smoothing steps may not be a valid preconditioner, as the corresponding operator may not be positive definite. In other words, the proposed multigrid method for the Schur complement may not be a convergent iterative method, but one variable V-cycle can be used as an effective preconditioner. \(\square \)
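To make the variable V-cycle concrete, the following is a minimal sketch with a point-wise Gauss–Seidel smoother, applied to a 1D model Laplacian rather than the paper's Schur complement operator; the transfer operators, the model problem, and all function names are illustrative assumptions, not the paper's Algorithm 1.

```python
import math
import numpy as np

def laplacian_1d(n):
    """Model SPD operator: 1D Dirichlet Laplacian on n interior nodes, h = 1/(n+1)."""
    return ((n + 1) ** 2) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def gauss_seidel(A, b, x, steps):
    """Point-wise (forward) Gauss-Seidel sweeps for A x = b; purely algebraic."""
    for _ in range(steps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def restrict(r):
    """Full-weighting restriction from 2m+1 to m interior nodes."""
    return 0.5 * r[1::2] + 0.25 * (r[:-1:2] + r[2::2])

def prolong(xc, n):
    """Linear interpolation from m to n = 2m+1 interior nodes."""
    x = np.zeros(n)
    x[1::2] = xc
    x[2:-1:2] = 0.5 * (xc[:-1] + xc[1:])
    x[0], x[-1] = 0.5 * xc[0], 0.5 * xc[-1]
    return x

def vcycle(levels, k, b, m):
    """One variable V-cycle: m[k] Gauss-Seidel sweeps on level k, direct solve on level 0."""
    A = levels[k]
    if k == 0:
        return np.linalg.solve(A, b)
    x = gauss_seidel(A, b, np.zeros_like(b), m[k])           # pre-smoothing
    x += prolong(vcycle(levels, k - 1, restrict(b - A @ x), m), len(b))
    return gauss_seidel(A, b, x, m[k])                        # post-smoothing

J = 4
levels = [laplacian_1d(2 ** (k + 1) - 1) for k in range(J + 1)]
m = [math.ceil(1.5 ** (J - k) * 2) for k in range(J + 1)]     # more smoothing on coarser levels
```

One application of `vcycle` on the finest level plays the role of the preconditioner \(V_J\) in Theorem 4; for this model problem it is also convergent as a stand-alone iteration.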
4 Uniform Preconditioners
In this section, we will show that the multigrid solver for the Schur complement equations can be used to build efficient preconditioners for the mixed formulations of vector Laplacian (17) and (23). We also apply the multigrid preconditioner of the vector Laplacian to the Maxwell equation discretized as a saddle point system. We prove that the preconditioned systems have condition numbers independent of mesh parameter h.
4.1 A Stability Result
Following the framework in [23], to develop a good preconditioner, it suffices to prove the boundedness of the operators \({\mathcal {L}}_{h}^{c}\) and \({\mathcal {L}}_{h}^{d}\) and their inverses in appropriate norms. In the sequel, to unify the notation, we use M for the mass matrix and A for the vector Laplacian. When necessary, we use the superscript \(^c\) or \(^d\) on A to distinguish the \(\varvec{H}(\mathrm{curl\,})\) and \(\varvec{H}({\text {div}})\) cases, and the subscripts \(_v, _e, _f\) on M to indicate the mass matrices associated with vertices, edges, and faces, respectively. The inverse of the mass matrix can be thought of as the matrix representation of the Riesz representation induced by the \(L^2\)-inner product, and the inverse of A is the Riesz representation in the A-inner product. The Riesz representation of the \(L^2\times A\)-inner product will give an effective preconditioner. We clarify the norm notation using M and A as follows:
- \(\Vert \cdot \Vert _{M}\): \(\Vert \sigma _{h}\Vert _{M}^{2} = \langle M \sigma _{h}, \sigma _{h}\rangle \);
- \(\Vert \cdot \Vert _{A}\): \(\Vert u_{h}\Vert _{A}^{2} = \langle A u_{h}, u_{h}\rangle \);
- \(\Vert \cdot \Vert _{M^{-1}}\): \(\Vert g_{h}\Vert _{M^{-1}}^{2} = \langle M^{-1}g_{h}, g_{h}\rangle \);
- \(\Vert \cdot \Vert _{A^{-1}}\): \(\Vert f_{h}\Vert _{A^{-1}}^{2} = \langle A^{-1} f_{h}, f_{h}\rangle \).
Here, A and M are matrices, \(\sigma _h,\ u_h,\ g_h,\ f_h\) are vectors, and \(\langle \cdot ,\cdot \rangle \) denotes the \(l^2\) inner product.
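The norm notation can be checked on small matrices; the 2x2 matrices below are hypothetical stand-ins for M and A, not the paper's finite element matrices. The sketch also verifies the duality identity \(\Vert Au\Vert _{A^{-1}} = \Vert u\Vert _{A}\), which underlies the norm pairs above.

```python
import numpy as np

# Hypothetical SPD stand-ins for the mass matrix M and the discrete Laplacian A.
M = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
A = np.array([[2.0, -1.0], [-1.0, 2.0]])

def norm_H(H, v):
    """||v||_H = <H v, v>^{1/2} for an SPD matrix H and the l2 inner product."""
    return float(np.sqrt(v @ H @ v))

u = np.array([1.0, 2.0])
# Duality between the A-norm and the A^{-1}-norm: ||A u||_{A^{-1}} equals ||u||_A.
```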
In most places, we prove only the \(\varvec{H}_0(\mathrm{curl\,})\) case, as the proof for \(\varvec{H}_0({\text {div}})\) is simply a change of notation. Again, the results and proofs can be unified using the language of discrete differential forms [1, 3]. We keep the concrete form for easy access to these results.
The following lemma gives a bound of the Schur complement \(BA^{-1}B^{\intercal }\) similar to the corresponding result of the Stokes equation.
Lemma 16
We have the inequality
Proof
Let \(\varvec{v}_{h} = (A^{c})^{-1} B^{\intercal } \phi _{h}\). Then
Now we identify \(\varvec{v}_{h}\in \varvec{V}_{h}'\) by the Riesz map in the A-inner product, and have
In the last step, we have used identity (18) which implies \(\Vert B\varvec{u}_{h}\Vert _{M^{-1}}\le \Vert \varvec{u}_{h}\Vert _{A}\). The desired result (60) then follows easily. \(\square \)
We present a stability result for the mixed formulation of the vector Laplacian which is different from that established in [1].
Theorem 5
The operators \({\mathcal {L}}_{h}^{c}, {\mathcal {L}}_{h}^{d}\) and their inverses are bounded operators:
are bounded, with bounds independent of h, from \((\Vert \cdot \Vert _{M^{-1}}, \Vert \cdot \Vert _{A^{-1}})\) to \((\Vert \cdot \Vert _{M}, \Vert \cdot \Vert _{A})\), and
are bounded, with bounds independent of h, from \((\Vert \cdot \Vert _{M}, \Vert \cdot \Vert _{A})\) to \((\Vert \cdot \Vert _{M^{-1}}, \Vert \cdot \Vert _{A^{-1}})\).
Proof
Let \((\sigma _{h}, \varvec{u}_{h}) \in S_{h}\times \varvec{U}_{h}\) and \((g_{h}, \varvec{f}_{h}) \in S_{h}'\times \varvec{U}_{h}'\) be given by the relation with
To prove \(\Vert {\mathcal {L}}_{h}^{c}\Vert _{\mathrm{L}( S_{h}\times \varvec{U}_{h}, S_{h}'\times \varvec{U}_{h}')} \lesssim 1\), it is sufficient to prove
From (61), we have \(g_{h}= -M_{v}\sigma _{h} + B \varvec{u}_{h}\) and \(\varvec{f}_{h} = A^{c}\varvec{u}_{h} - B^{\intercal }M_{v}^{-1}g_{h}\). The norm of \(g_h\) is easy to bound as follows
To bound the norm of \(\varvec{f}_h\), we first have
Let \(\phi _{h} = M_{v}^{-1} g_{h}\), by Lemma 16, we have
Thus we get
Then the desired inequality (62) follows from the bound of \(\Vert g_h\Vert _{M^{-1}}\) and \(\Vert \varvec{f}_h\Vert _{A^{-1}}\).
To prove \( \Vert ({\mathcal {L}}_{h}^{c})^{-1}\Vert _{\mathrm{L}( S_{h}'\times \varvec{U}_{h}', S_{h}\times \varvec{U}_{h})} \lesssim 1\), we need to prove
From (61), we have \(\varvec{u}_{h} =( A^{c})^{-1}( f_{h} + B^{\intercal }M_{v}^{-1}g_{h})\). Then
We also have \(\sigma _{h}= M_{v}^{-1}(B \varvec{u}_{h}-g_{h})\) and thus
Combining with the bound for \(\Vert \varvec{u}_h\Vert _A\), we obtain the stability (63). \(\square \)
Remark 2
Note that here \(B^{\intercal }\) is the matrix form defined as \(\langle B^\intercal \tau _h,\varvec{v}_h\rangle _{M_e^{-1}} = (\mathrm{grad\,}\tau _h ,\varvec{v}_h )\), by the second equation of (15) we have
Therefore, we can obtain an additional stability
Namely we can control not only the \(L^2\)-norm but also the energy norm of \(\sigma _h\). \(\square \)
4.2 Block Diagonal Preconditioners
The inverses of the mass matrices, \(M^{-1}_v\) and \(M_{e}^{-1}\), are in general dense. To be practical, the exact Schur complement can be replaced by an approximation
with \(\tilde{M}_v\) and \(\tilde{M}_{e}\) easy-to-invert matrices, e.g., diagonal or mass lumping of \(M_v\) and \(M_{e}\), respectively. In this way, we actually change the \(L^{2}\)-inner product into a discrete \(L^{2}\) inner product. We can define the adjoint operators with respect to the discrete \(L^{2}\)-inner product. For example, define \(\widetilde{{\text {div}}}_{h} \varvec{w}_h \in S_h\), s.t.,
where \(\langle \cdot ,\cdot \rangle _h\) is the discrete \(L^2\)-inner product defined by \(\tilde{M}_v\).
It is not hard to see that the modification of the \(L^{2}\)-inner product will not bring any essential difficulty to the proof of the previous results. We can easily reproduce all the results that we have proved in the previous sections. For example, by a simple change of notation in the proof of Lemma 16, we have:
To simplify the presentation, for any two symmetric matrices \(H_1\) and \(H_2\), \(H_1\le H_2\) means \(H_2 - H_1\) is a positive semi-definite matrix and \(H_1 < H_2\) means \(H_2 - H_1\) is a positive definite matrix.
The following proposition can be easily proved, and a proof can be found in [10].
Proposition 1
Assume that there exist positive constants \(c_1\) and \(c_2\) which are independent of h such that
Then
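The spectral equivalence constants \(c_1, c_2\) of Proposition 1 can be computed explicitly for a toy lumping; the P1 mass matrix below on a uniform 1D mesh is an illustrative assumption, not one of the paper's vertex or edge mass matrices.

```python
import numpy as np

# Hypothetical P1 mass matrix on a uniform 1D mesh with three interior nodes (h = 1/4).
h = 0.25
M = h / 6.0 * (4.0 * np.eye(3) + np.eye(3, k=1) + np.eye(3, k=-1))

def lump(M):
    """Row-sum mass lumping: an easy-to-invert diagonal approximation of M."""
    return np.diag(M.sum(axis=1))

Mt = lump(M)
# Generalized eigenvalues of (M, Mt) give the equivalence constants c1, c2
# in c1*Mt <= M <= c2*Mt, which are mesh-size independent for this family.
lam = np.linalg.eigvals(np.linalg.inv(Mt) @ M).real
c1, c2 = lam.min(), lam.max()
```

For this example \(c_2 = 1\) and \(c_1 = 7/15\), both independent of h.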
Finally we introduce our block diagonal preconditioner \({\mathcal {D}}^{-1}\) with:
where \(\tilde{A}_\mathrm{mg}\) is one variable V-cycle multigrid approximation for \(\tilde{A}\). By Theorem 4, there exist positive constants \(\kappa _1,\kappa _2\) independent of h, such that
We shall use this block diagonal preconditioner in the minimum residual method (MINRES). Theorem 5 and the spectral equivalence (70) imply that \({\mathcal {D}}^{-1}\mathcal L\) has a uniformly bounded condition number. Therefore, we have the following uniform convergence result.
Theorem 6
Under the assumption (68), the MINRES method for the preconditioned system \({\mathcal {D}}^{-1} \mathcal L\) is uniformly convergent.
We skip the proof of Theorem 6 here; for a detailed convergence analysis of MINRES, we refer to [28, 30].
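The structure of the block diagonal preconditioned MINRES iteration can be sketched as follows. This is a model computation only: the matrices are a 1D Laplacian and a random full-rank constraint standing in for the paper's discretization, and exact block solves replace both the variable V-cycle approximation of \(\tilde{A}^{-1}\) and the lumped mass matrix.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, diags
from scipy.sparse.linalg import LinearOperator, minres, splu

# Hypothetical model saddle-point system [[A, B^T], [B, 0]].
n, m = 16, 4
A = (diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1])
     * (n + 1) ** 2).tocsc()
B = csr_matrix(np.random.default_rng(0).standard_normal((m, n)))
K = bmat([[A, B.T], [B, None]]).tocsc()

# Block diagonal preconditioner D^{-1} = diag(A^{-1}, M^{-1}); an exact sparse
# factorization stands in for the V-cycle, and the identity for the mass block.
Ainv = splu(A)
def apply_D(r):
    return np.concatenate([Ainv.solve(r[:n]), r[n:]])

D = LinearOperator((n + m, n + m), matvec=apply_D, dtype=float)
b = np.ones(n + m)
x, info = minres(K, b, M=D)   # MINRES accepts the indefinite K with SPD D
```

MINRES only requires the preconditioner to be SPD, which \({\mathcal {D}}^{-1}\) is; the system matrix itself may stay indefinite.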
4.3 Approximate Block Factorization Preconditioner
In this subsection, we will construct approximate block factorization preconditioners for systems (17) and (23). Let
We have the block decomposition
We then approximate \(\tilde{A}^{-1}\) by one variable V-cycle. Define
and the operator \(\mathcal G: S_h'\times \varvec{U}_h' \rightarrow S_h \times \varvec{U}_h\) as
From the definition and (70), it is straightforward to verify that \(\mathcal G^{-1} \) is spectrally equivalent to \( \widetilde{{\mathcal {L}}}\), and thus we conclude that the preconditioned system is uniformly bounded.
Theorem 7
\(\mathcal G\) is a uniform preconditioner for \({\mathcal {L}}\), i.e., the corresponding operator norms
are bounded and independent of the parameter h.
Since the operators used in this preconditioner are not SPD, we apply the generalized minimal residual method (GMRES) to \(\mathcal G{\mathcal {L}}\). To prove the convergence of GMRES, we need to show that the field of values of \(\mathcal G\mathcal L\) is bounded, see [15, 30] and [22, Algorithm 2.2], which will be explored in our future work [12].
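A block factorization preconditioner of this flavor amounts to two triangular block solves per application. The sketch below uses a generic block lower-triangular preconditioner with exact solves on a toy saddle-point system; the matrices and the exact Schur complement are illustrative assumptions replacing the paper's V-cycle and lumped-mass approximations.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

# Hypothetical model saddle-point system [[A, B^T], [B, 0]].
n, m = 16, 4
A = (diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1])
     * (n + 1) ** 2).tocsc()
B = csr_matrix(np.random.default_rng(1).standard_normal((m, n)))
K = bmat([[A, B.T], [B, None]]).tocsc()

Ainv = splu(A)
S = B @ Ainv.solve(B.T.toarray())        # small dense Schur complement
Sinv = np.linalg.inv(S)

def apply_G(r):
    """Forward substitution with the block triangular factor [[A, 0], [B, -S]]."""
    u = Ainv.solve(r[:n])                # (1,1) block solve (V-cycle in practice)
    s = Sinv @ (B @ u - r[n:])           # Schur complement solve
    return np.concatenate([u, s])

G = LinearOperator((n + m, n + m), matvec=apply_G, dtype=float)
b = np.ones(n + m)
x, info = gmres(K, b, M=G, restart=20)   # non-symmetric preconditioner -> GMRES
```

With exact block solves the preconditioned matrix is the identity plus a nilpotent part, so GMRES converges in two iterations; with inexact V-cycle solves one instead relies on the field-of-values bound discussed above.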
4.4 Maxwell Equations with Divergence-Free Constraint
We consider a prototype of Maxwell equations with divergence-free constraint
The solution \(\varvec{u}\) is approximated in the edge element space \(\varvec{U}_{h}\). The divergence-free constraint can then be understood in the weak sense, i.e., \({\text {div}}_h \varvec{u}_h = 0\). By introducing a Lagrange multiplier \(p\in S_{h}\), the matrix form is
We apply the augmented Lagrangian method [16], by adding \(B^{\intercal }\tilde{M}_{v}^{-1}B\) to the first equation, to get an equivalent matrix equation
Now the (1, 1) block \(\tilde{A}=C^{\intercal }M_{f}C + B^{\intercal }\tilde{M}_{v}^{-1}B\) in (74) is a discrete vector Laplacian and the whole system (74) is of Stokes type.
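The effect of the augmented Lagrangian term can be seen on a toy discrete complex. The incidence matrices below, for a single square cell with identity mass matrices, are hypothetical stand-ins for the paper's finite element matrices: the curl–curl block alone is singular (gradients lie in its kernel), while the augmented block is SPD.

```python
import numpy as np

# Hypothetical incidence matrices on one square cell (4 nodes, 4 edges, 1 face):
# G maps node values to signed edge differences (gradient); C sums signed edge
# values around the face (curl). Identity mass matrices are assumed.
G = np.array([[-1, 1, 0, 0],
              [ 0, -1, 1, 0],
              [ 0, 0, 1, -1],
              [-1, 0, 0, 1]], float)
C = np.array([[1, 1, -1, -1]], float)

A_curl = C.T @ C            # curl-curl block alone: singular
A_aug = C.T @ C + G @ G.T   # augmented (1,1) block: a discrete vector Laplacian, SPD
```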
We can thus use the following diagonal preconditioner.
Theorem 8
The following block-diagonal matrix
is a uniform preconditioner for \(\begin{pmatrix} \tilde{A} &{} B^{\intercal }\\ B &{} 0 \end{pmatrix},\) and \( \begin{pmatrix} C^{\intercal }M_{f}C &{} B^{\intercal }\\ B &{} 0 \end{pmatrix}. \)
Proof
Following the proof of Theorem 5, it suffices to prove that the Schur complement \(S = B\tilde{A}^{-1}B^{\intercal }\) is spectrally equivalent to \(M_v\). The inequality \((Sp,p) \le (M_vp,p)\) for all \(p\in S_h\) has been proved in Lemma 16. To prove the reverse inequality, it suffices to prove the inf-sup condition: there exists a constant \(\beta \) independent of h such that
Given \(p_h \in S_h\), we solve the Poisson equation \({\varDelta } \phi = p_h\) with homogeneous Dirichlet boundary condition and let \(\varvec{v} = \mathrm{grad\,}\phi \). Then \(\varvec{v}\in \varvec{H}_0(\mathrm{curl\,})\) and \({\text {div}}\varvec{v} = p_h\) holds in \(L^2\). We define \(\varvec{v}_h = Q_h \varvec{v}\) where \(Q_h: \varvec{H}_0(\mathrm{curl\,}) \rightarrow \varvec{U}_h\) is the \(L^2\) projection. Then \(({\text {div}}_h \varvec{v}_h, q_h) = -(\varvec{v}_h, \mathrm{grad\,}q_h) = -(\varvec{v}, \mathrm{grad\,}q_h) = ({\text {div}}\varvec{v}, q_h) = (p_h, q_h)\), i.e., \({\text {div}}_h \varvec{v}_h = p_h\). To control the norm of \(\mathrm{curl\,}\varvec{v}_h\), we denote by \(\varvec{v}_0\) the piecewise constant projection of \(\varvec{v}\). Then
In the last step, we have used the \(H^2\)-regularity result of the Poisson equation.
In summary, given \(p_h \in S_h\), we have found a \(\varvec{v}_h \in \varvec{U}_h\) such that \(\langle B\varvec{v}_h, p_h\rangle = \Vert p_h\Vert ^2\) while \(\Vert \varvec{v}_h\Vert _A^2 = \Vert {\text {div}}_h \varvec{v}_h\Vert ^2 + \Vert \mathrm{curl\,}\varvec{v}_h\Vert ^2 \lesssim \Vert p_h\Vert ^2\). Therefore the inf-sup condition (76) is proved, which implies the inequality \(\langle Sp,p\rangle \ge \beta ^{2}\langle M_vp,p\rangle \). \(\square \)
Solving (73) and solving (74) are mathematically equivalent. When using a Krylov subspace method, however, formulation (73) is more efficient since \(C^{\intercal }M_fC\) is sparser than \(\tilde{A}\). Next we present an even faster solver.
We can decouple the saddle point system (74) by considering the following block factorization
where
The key observation is that G is the matrix representation of the gradient operator \(S_{h}\rightarrow \varvec{U}_{h}\) and thus \(CG = \mathrm{curl\,}\mathrm{grad\,}= 0\). Therefore we can easily solve (73) by inverting two Laplacian operators: one is a vector Laplacian on the edge element space and the other is a scalar Laplacian on the Lagrange element space:
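The decoupling can be sketched on a toy complex: since \(CG=0\), applying \(G^{\intercal}\) to the first block equation eliminates \(\varvec{u}\), leaving a scalar Laplacian for the multiplier, after which one vector Laplacian solve recovers \(\varvec{u}\). The incidence matrices and identity mass matrices below are illustrative assumptions, not the paper's matrices.

```python
import numpy as np

# Hypothetical single-cell complex: gradient G (edges x nodes), curl C (faces x edges).
G = np.array([[-1, 1, 0, 0],
              [ 0, -1, 1, 0],
              [ 0, 0, 1, -1],
              [-1, 0, 0, 1]], float)
C = np.array([[1, 1, -1, -1]], float)
A_t = C.T @ C + G @ G.T          # augmented vector Laplacian (SPD here)
f = np.array([1.0, 2.0, -1.0, 0.5])

# Step 1: scalar Laplacian solve for the multiplier. On this closed complex the
# scalar Laplacian is singular (constants in the kernel), so use least squares.
Ap = G.T @ G
p = np.linalg.lstsq(Ap, G.T @ f, rcond=None)[0]

# Step 2: vector Laplacian solve for u; the constraint G^T u = 0 then holds
# automatically because C G = 0.
u = np.linalg.solve(A_t, f - G @ p)
```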
5 Numerical Examples
In this section, we show the efficiency and robustness of the proposed diagonal and approximate block factorization preconditioners. We perform the numerical experiments using the iFEM package [9].
Example 1
(Two Dimensional Vector Laplacian using Edge Elements) We first consider the mixed system (17) arising from the lowest order discretization of the vector Laplace equation in \(\varvec{H}_0(\mathrm{curl\,})\) space.
We consider three domains in two dimensions: the unit square \((0,1)^2\), the L-shape domain \((-1,1)^2\backslash \left\{ [0,1]\times [-1,0]\right\} \), and the crack domain \(\{|x|+|y|<1\}\backslash \{0\le x\le 1, y=0\}\); see Fig. 1. Note that the crack domain is non-Lipschitz.
We use the diagonal preconditioner \({\mathcal {D}}^{-1}\) in the MINRES method and the approximate block factorization preconditioner (72) in GMRES (with restart step 20) to solve (17). In these preconditioners, one and only one variable V-cycle is used to approximate \(\tilde{A}^{-1}\). In the variable V-cycle, we choose \(m_J = 2\) and \(m_{k} = \lceil 1.5^{J - k}m_J \rceil \) for \(k=J,\ldots , 1\), and a random initial value. We stop the Krylov subspace iteration when the relative residual is less than or equal to \(10^{-8}\). Iteration steps and CPU time are summarized in Tables 1, 2, and 3.
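The smoothing-step schedule used in the experiments can be sketched as follows; the function name is illustrative.

```python
import math

def smoothing_steps(J, mJ=2, beta=1.5):
    """m_k = ceil(beta^(J-k) * m_J) for k = J, ..., 1: geometric growth toward
    coarser levels, satisfying the assumption of Theorem 4."""
    return {k: math.ceil(beta ** (J - k) * mJ) for k in range(J, 0, -1)}
```

For example, with \(J=4\) and \(m_J=2\) the schedule is \(m_4=2,\ m_3=3,\ m_2=5,\ m_1=7\).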
Example 2
(Three Dimensional Vector Laplacian using Edge Elements) We then consider the three dimensional case, still with the lowest order discretization of the vector Laplace equation in the \(\varvec{H}_0(\mathrm{curl\,})\) space. We use almost the same setting except \(m_J = 3\), for which the performance is more robust.
We consider two domains: the unit cube \((0,1)^3\), for which the full regularity assumption holds, and an L-shaped domain \((-1,1)^3\backslash \left\{ (-1,0)\times (0,1)\times (0,1)\right\} \), which violates the full regularity assumption; see Fig. 2. Iteration steps and CPU time are summarized in Tables 4 and 5.
Based on these tables, we present some discussion on our preconditioners.
1. Both the diagonal and the approximate block factorization preconditioners perform very well. The one based on the approximate block factorization is more robust and efficient.
2. The diagonal preconditioner is more sensitive to the elliptic regularity result, as the iteration counts increase slowly; this is more evident in the three dimensional case, see the third column of Tables 4 and 5. For general domains, \(\varvec{H}_0(\mathrm{curl\,})\cap \varvec{H}({\text {div}})\) is a strict subspace of \(\varvec{H}^1\) and thus the approximation property may fail. On the other hand, the numerical effectiveness even in the partial regularity cases is probably due to the fact that the full regularity of elliptic equations always holds in the interior of the domain. Additional smoothing in the near-boundary region might compensate for the loss of full regularity.
3. Only the lowest order element is tested, while our theory assumes that the finite element space contains full linear polynomials to ensure the approximation property. This violation may also contribute to the slow increase of the iteration counts. We do not test the second type of edge element due to the complication of the prolongation operators. The lowest order edge element is the most popular edge element. For high order edge elements, we prefer to use the V-cycle for the lowest order element plus additional Gauss–Seidel smoothing on the finest level to construct preconditioners.
Example 3
(Three dimensional Maxwell's equations with divergence-free constraint) We consider the lowest order discretization of the Maxwell equations in the saddle point form (73). As mentioned before, we can solve the regularized formulation (74) by inverting two Laplacians. We use one variable V-cycle with the same setting as in Example 2 to precondition the vector Laplacian operator \(\tilde{A}\) and use the preconditioned conjugate gradient method to invert \(\tilde{A}\) (abbr. PCG-\(\tilde{A}\)). Inverting \(A_p\) is standard, and its computation time is negligible. We also test the block-diagonal preconditioner (75) for solving the original formulation (73) using the same setting as in Example 2. We report the iteration steps and corresponding CPU times in Tables 6 and 7.
From these results, we conclude that both the decoupled solver and the block-diagonal preconditioner work well. The iteration counts may increase, but only very slowly. In terms of CPU time, the decoupled solver is more than twice as fast as the diagonal preconditioner.
References
Arnold, D., Falk, R., Winther, R.: Finite element exterior calculus, homological techniques, and applications. Acta Numer. 15, 1–155 (2006)
Arnold, D.N., Falk, R.S., Winther, R.: Multigrid in H(div) and H(curl). Numer. Math. 85, 197–218 (2000)
Arnold, D.N., Falk, R.S., Winther, R.: Finite element exterior calculus: from Hodge theory to numerical stability. Bull. Am. Math. Soc. 47(2), 281–354 (2010)
Bramble, J.H., Pasciak, J.E., Xu, J.: The analysis of multigrid algorithms with nonnested spaces or noninherited quadratic forms. Math. Comput. 56, 1–34 (1991)
Bramble, J.H., Pasciak, J.E.: The analysis of smoothers for multigrid algorithms. Math. Comput. 58, 467–488 (1992)
Brezzi, F., Douglas, J., Duran, R., Fortin, M.: Mixed finite elements for second order elliptic problems in three variables. Numer. Math. 51, 237–250 (1987)
Brezzi, F., Douglas, J., Marini, L.D.: Two families of mixed finite elements for second order elliptic problems. Numer. Math. 47(2), 217–235 (1985)
Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer, Berlin (1991)
Chen, L.: \(i\)FEM: an integrated finite element methods package in matlab, Technical report, University of California at Irvine (2009)
Chen, L., Wang, M., Zhong, L.: Convergence analysis of triangular MAC schemes for two dimensional Stokes equations. J. Sci. Comput. https://doi.org/10.1007/s10915-014-9916-z
Chen, L., Wu, Y.: Convergence of adaptive mixed finite element methods for the Hodge Laplacian equations: without harmonic forms. SIAM J. Numer. Anal. 55(6), 2905–2929 (2017)
Chen, L., Wu, Y.: Convergence Analysis for A Class of Iterative Methods for Solving Saddle Point Systems, arXiv:1710.03409 [math.NA]
Chen, J., Xu, Y., Zou, J.: An adaptive inverse iteration for Maxwell eigenvalue problem based on edge elements. J. Comput. Phys. 229(7), 2649–2658 (2010)
Ciarlet, P., Wu, H., Zou, J.: Edge element methods for Maxwell's equations with strong convergence for Gauss' laws. SIAM J. Numer. Anal. 53(4), 2350–2372 (2015)
Elman, H. C.: Iterative Methods for Large Sparse Non-Symmetric Systems of Linear Equations, Ph.D. thesis, Yale University, New Haven, CT (1982)
Fortin, M., Glowinski, R.: Augmented Lagrangian Methods, Applications to the Numerical Solution of Boundary Value Problems. North-Holland Publishing Co., Amsterdam (1983)
Girault, V., Raviart, P.: Finite Element Methods for Navier–Stokes Equations. Springer, New York (1986)
Hiptmair, R.: Multigrid method for H(div) in three dimensions. Electron. Trans. Numer. Anal. 6, 133–152 (1997)
Hiptmair, R.: Multigrid method for Maxwell’s equations. SIAM J. Numer. Anal. 36(1), 204–225 (1999)
Hiptmair, R.: Finite elements in computational electromagnetism. Acta Numer. 11, 237–339 (2002)
Hiptmair, R., Xu, J.: Nodal auxiliary space preconditioning in H(curl) and H(div) spaces. SIAM J. Numer. Anal. 45(6), 2483–2509 (2007)
Loghin, D., Wathen, A.J.: Analysis of preconditioners for saddle-point problems. SIAM J. Sci. Comput. 25, 2029–2049 (2004)
Mardal, K.A., Winther, R.: Uniform preconditioners for the time dependent Stokes problem. Numer. Math. 98, 305–327 (2004)
Monk, P.: Analysis of a finite element method for Maxwell’s equations. SIAM J. Numer. Anal. 29, 714–729 (1992)
Monk, P.: Finite Element Methods for Maxwell’s Equations. Oxford University Press, Oxford (2003)
Nédélec, J.C.: Mixed finite elements in \(R^{3}\). Numer. Math. 35, 315–341 (1980)
Nédélec, J.C.: A new family of mixed finite elements in \(R^{3}\). Numer. Math. 50, 57–81 (1986)
Olshanskii, M.A., Tyrtyshnikov, E.E.: Iterative Methods for Linear Systems: Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia (2014)
Raviart, P.A., Thomas, J.M.: A mixed finite element method for 2nd order elliptic problems. In: Galligani, I., Magenes, E. (eds.) Mathematical Aspects of Finite Element Methods. Lecture Notes in Mathematics, pp. 292–315. Springer, Berlin (1977)
Saad, Y.: Iterative Methods for Sparse Linear Systems. PWS, Boston (1996)
Zhou, J., Hu, X., Zhong, L., Shu, S., Chen, L.: Two-grid methods for Maxwell eigenvalue problem. SIAM J. Numer. Anal. 52(4), 2027–2047 (2014)
L. Chen was supported by the National Science Foundation (NSF) DMS-1418934 and in part by the Sea Poly Project of Beijing Overseas Talents. Y. Wu was supported by the National Natural Science Foundation of China (11501088), the Fundamental Research Funds for the Central Universities of China (ZYGX2015J097, UESTC) and partially supported by NSF Grant DMS-1115961. L. Zhong was supported by NSF Grant DMS-1115961 and DMS-1418934. J. Zhou was supported by the National Natural Science Foundation of China (11501485) and the doctoral research project of Xiangtan University (15QDZ08).
Chen, L., Wu, Y., Zhong, L. et al. MultiGrid Preconditioners for Mixed Finite Element Methods of the Vector Laplacian. J Sci Comput 77, 101–128 (2018). https://doi.org/10.1007/s10915-018-0697-7