1 Introduction

Mathematical fields such as exterior calculus and mesh generation play an important role in finite element analysis. In 2006, Arnold et al. [1] introduced the finite element exterior calculus; applying Hodge theory and De Rham cohomology, they stated conditions under which a stable discrete finite element space can be obtained [2]. In 2011, Arnold et al. [3] gave a different description of the serendipity spaces together with a geometric decomposition of them. They also obtained the serendipity finite element subcomplex of the De Rham complex and showed that these spaces satisfy the basic hypotheses of the finite element exterior calculus and hence may be used for the stable discretization of partial differential equations [5].

Mesh generation is also an important part of finite element analysis, and the accuracy of the discretization depends on the mesh size. The h-adaptive finite element method refines the mesh size while keeping the polynomial degree fixed.

In this paper, we derive computational bases for h-adaptive finite element spaces \( \mathcal {S}_{2}\Lambda ^{0} (\mathcal {T}_{h})\) and \( \mathcal {S}_{r}\Lambda ^{1}(\mathcal {T}_{h}) \) in two dimensions. \( \mathcal {S}_{2}\Lambda ^{0}(\mathcal {T}_{h}) \) is the space of serendipity elements of superlinear degree 2, and \( \mathcal {S}_{r}\Lambda ^{1} (\mathcal {T}_{h})\) is the space of 1-forms that coincides with the rectangular Brezzi, Douglas, and Marini (BDM) space [7]. We also consider the mixed finite element method for solving a Hodge Laplacian equation and use the stable pair \(\mathcal {S}_{r+1}\Lambda ^{0}(\mathcal {T}_{h})\times \mathcal {S}_{r}\Lambda ^{1}(\mathcal {T}_{h})\) for its discretization.

Usually, the linear algebraic system obtained by discretizing the Hodge Laplacian equation is ill-conditioned and hard to solve. Arnold et al. [1] introduced block diagonal preconditioners for these problems and showed that they are effective. Alternative block diagonal and block triangular preconditioners were proposed by Chen et al. [10]. Our goal is to use the modified generalized shift-splitting (MGSS) preconditioner [13] for solving the saddle point matrix of the Hodge Laplacian equation with the discretization \(\mathcal {S}_{2}\Lambda ^{0}(\mathcal {T}_{h})\times \mathcal {S}_{1}\Lambda ^{1}(\mathcal {T}_{h})\).

The outline of this paper is as follows. In Sect. 2, we derive the computational bases for \(\mathcal {S}_{r}\Lambda ^{1}(\mathcal {T}_{h})\) in \( \mathbb {R}^{2} \). In Sect. 3, we study the Hodge Laplacian equation. In Sect. 4, the h-adaptive serendipity basis functions of superlinear degree 2 and the h-adaptive rectangular BDM elements are derived. Section 5 deals with the MGSS preconditioner. In Sect. 6, the numerical approximation of the Hodge Laplacian equation by the mixed finite element method is presented.

2 Computational Bases for \( \mathcal {S}_{_{r}}\Lambda ^{^{1}}(\Omega )\)

The definition and properties of the spaces \( \mathcal {S}_{r}\Lambda ^{k} \) are discussed in detail in [5]. For \(k=1\) and \( \Omega \subseteq \mathbb {R}^{2} \), according to equation (17) in [5], we have

$$\begin{aligned} \mathcal {S}_{r}\Lambda ^{1}(\Omega )= \mathcal {P}_{r}\Lambda ^{1}(\Omega )\oplus \left\{ {{\mathrm{curl}}}\left( x^{r+1}y\right) \right\} \oplus \left\{ {{\mathrm{curl}}}\left( x y^{r+1}\right) \right\} . \end{aligned}$$
(2.1)
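For concreteness, using the convention \( {{\mathrm{curl}}}\sigma =\left( \frac{\partial \sigma }{\partial y}, -\frac{\partial \sigma }{\partial x}\right) \) adopted later in Sect. 3 and identifying a vector field \( (a,b) \) with the 1-form \( a\,{{\mathrm{\mathrm {d}\!}}}x+b\,{{\mathrm{\mathrm {d}\!}}}y \), the two extra generators in (2.1) are

$$\begin{aligned} {{\mathrm{curl}}}\left( x^{r+1}y\right) = x^{r+1}\,{{\mathrm{\mathrm {d}\!}}}x-(r+1)x^{r}y\,{{\mathrm{\mathrm {d}\!}}}y, \qquad {{\mathrm{curl}}}\left( x y^{r+1}\right) = (r+1)x y^{r}\,{{\mathrm{\mathrm {d}\!}}}x-y^{r+1}\,{{\mathrm{\mathrm {d}\!}}}y, \end{aligned}$$

neither of which belongs to \( \mathcal {P}_{r}\Lambda ^{1}(\Omega ) \).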

As mentioned before, this space coincides with the rectangular Brezzi, Douglas, and Marini (BDM) space [5]. The BDM finite element spaces approximate the \( H({{\mathrm{div}}}) \) space and are used in the mixed finite element method [8]. These spaces are defined on both triangular and rectangular meshes. For triangular meshes, BDM elements are formulated by

$$\begin{aligned} BDM_{r}(\Omega )= \left( \mathcal {P}_{r}(\Omega )\right) ^{2}, \end{aligned}$$
(2.2)

and the computational bases for them are derived in [11]. In this section, we derive the computational bases for the space \(\mathcal {S}_{r}\Lambda ^{1}(\Omega )\), which satisfies (2.1), on rectangular meshes. The formulation of the proposed bases is quite different from that of the bases defined on triangular meshes in [11].

Let \( n_{_{k}}, 1\le k \le 4,\) denote the outward unit normals to the four edges of the square \( I^{^{2}}=[-1,1]^{2} \) (Fig. 1). By equation (21) in [5], the basis functions associated with the points \( a^{^{s,[t]}}_{_{(0,0)}}\) on the edges must have the following property:

$$\begin{aligned} \varphi ^{^{p,[q]}}_{_{(0,0)}}\left( a^{^{s,[t]}}_{_{(0,0)}}\right) \cdot n_{_{k}}=\delta _{qk}\delta _{qt}\delta _{ps},\quad 1 \le k , q , t\le 4\quad \text {and} \quad 1\le p,s \le 2, \end{aligned}$$
(2.3)

and the basis functions for the interior points must satisfy the following relation:

$$\begin{aligned} \varphi ^{^{[q]}}_{_{(0,0)}}\cdot n_{_{k}}=0, \qquad 1 \le k \le 4 \quad \text {and} \quad q> 4. \end{aligned}$$
(2.4)

Fig. 1 Reference square \( I^{^{2}} \)

Now for \( v_{_{1}},v_{_{2}}\in \mathbb {R}\), we define
$$\begin{aligned} \tilde{f}_{_{1}}\left( v_{_{1}},v_{_{2}}\right)&=\frac{1}{v_{_{1}}-v_{_{2}}} \left( \frac{1}{4} \left( 1-x^{2}\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{(y-1)}{2}\left( x-v_{_{2}}\right) {{\mathrm{\mathrm {d}\!}}}y\right) ,\nonumber \\ \tilde{f}_{_{2}}\left( v_{_{1}},v_{_{2}}\right)&=\frac{1}{v_{_{1}}-v_{_{2}}} \left( \frac{1+x}{2} \left( -v_{_{2}}+y\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{1}{4}\left( 1-y^{2}\right) {{\mathrm{\mathrm {d}\!}}}y \right) , \nonumber \\ \tilde{f}_{_{3}}\left( v_{_{1}},v_{_{2}}\right)&=\frac{1}{v_{_{1}}-v_{_{2}}} \left( \frac{1}{4} \left( 1-x^{2}\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{( 1+y)}{2}\left( x-v_{_{2}}\right) {{\mathrm{\mathrm {d}\!}}}y \right) ,\nonumber \\ \tilde{f}_{_{4}}\left( v_{_{1}},v_{_{2}}\right)&=\frac{1}{v_{_{1}}-v_{_{2}}} \left( \frac{1-x}{2} \left( v_{_{2}}-y\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{1}{4}\left( 1-y^{2}\right) {{\mathrm{\mathrm {d}\!}}}y \right) , \nonumber \\ \tilde{f}_{_{5}}\left( v_{_{1}},v_{_{2}}\right)&=\dfrac{(1+y)}{v_{_{1}}-v_{_{2}}} \left( \frac{1}{4}\left( 1-x^{2}\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{y-1}{2} \left( x-v_{_{2}}\right) {{\mathrm{\mathrm {d}\!}}}y \right) ,\nonumber \\ \tilde{f}_{_{6}}\left( v_{_{1}},v_{_{2}}\right)&=\dfrac{(1-x)}{v_{_{1}}-v_{_{2}}} \left( \frac{1+x}{2} \left( y-v_{_{2}}\right) {{\mathrm{\mathrm {d}\!}}}x + \frac{1}{4}\left( 1-y^{2}\right) {{\mathrm{\mathrm {d}\!}}}y \right) . \end{aligned}$$
(2.5)

It is clear that \(\tilde{ f}_{_{i}}\, ,\, 1\le i\le 6 \) are linearly independent.

2.1 Basis Functions for \( \mathcal {S}_{1}\Lambda ^{1}(I^{2})\)

Let \( g_{_{1}}=-\frac{1}{\sqrt{3}} \) and \( g_{_{2}}=\frac{1}{\sqrt{3}} \) denote the two Gaussian quadrature points on the interval \( [-1,1] \). Consider the eight points

$$\begin{aligned} a^{^{p,[q]}}_{_{(0,0)}}=\left\{ \begin{array}{lll} \left( g_{_{p}},(-1)^{k}\right) ,&{} q=2k-1,&{}k=1,2,\\ &{}&{}\\ \left( (-1)^{k+1}, g_{_{p}}\right) , &{} q=2k,&{}k=1,2, \end{array}\right. \end{aligned}$$

as shown in Fig. 1. The basis functions for \( 1\le q\le 4 \) are

$$\begin{aligned} \varphi ^{^{p,[q]}}_{_{(0,0)}}(x,y)=\left\{ \begin{array}{ccc} \tilde{f}_{_{q}}(g_{_{1}},g_{_{2}}),&{}&{}p=1,\\ &{}&{}\\ \tilde{f}_{_{q}}(g_{_{2}},g_{_{1}}),&{}&{}p=2. \end{array}\right. \end{aligned}$$

Note that for all \( 1\le p\le 2 \quad \text {and} \quad 1\le q\le 4, \; \varphi ^{^{p,[q]}}_{_{(0,0)}}(x,y)\in \mathcal {S}_{1}{\Lambda }^{1}(I^{2}) \) and they satisfy the property (2.3).
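The duality between these edge basis functions and the degrees of freedom can be checked numerically. The following minimal NumPy sketch (ours, not from the paper) evaluates \( \tilde{f}_{_{q}}(g_{_{1}},g_{_{2}}) \) and \( \tilde{f}_{_{q}}(g_{_{2}},g_{_{1}}) \) at the eight points and verifies \( \varphi ^{^{p,[q]}}_{_{(0,0)}}\big (a^{^{s,[t]}}_{_{(0,0)}}\big )\cdot n_{_{t}}=\delta _{qt}\delta _{ps} \); it assumes the edge numbering suggested by the node definitions (edge 1: \( y=-1 \), edge 2: \( x=1 \), edge 3: \( y=1 \), edge 4: \( x=-1 \)).

```python
import numpy as np

g = np.array([-1.0, 1.0]) / np.sqrt(3.0)  # Gauss points g_1, g_2 on [-1, 1]

def f_tilde(q, v1, v2):
    """f~_q(v1, v2) of (2.5), returned as a callable vector field (u1, u2) ~ u1 dx + u2 dy."""
    c = 1.0 / (v1 - v2)
    if q == 1:
        return lambda x, y: c * np.array([(1 - x**2) / 4, (y - 1) / 2 * (x - v2)])
    if q == 2:
        return lambda x, y: c * np.array([(1 + x) / 2 * (y - v2), (1 - y**2) / 4])
    if q == 3:
        return lambda x, y: c * np.array([(1 - x**2) / 4, (1 + y) / 2 * (x - v2)])
    return lambda x, y: c * np.array([(1 - x) / 2 * (v2 - y), (1 - y**2) / 4])  # q == 4

def node(s, t):
    """Point a^{s,[t]} on edge t of I^2."""
    gs = g[s - 1]
    return {1: (gs, -1.0), 2: (1.0, gs), 3: (gs, 1.0), 4: (-1.0, gs)}[t]

normal = {1: np.array([0.0, -1.0]), 2: np.array([1.0, 0.0]),
          3: np.array([0.0, 1.0]), 4: np.array([-1.0, 0.0])}

for q in range(1, 5):
    for p in (1, 2):
        v = (g[0], g[1]) if p == 1 else (g[1], g[0])
        phi = f_tilde(q, *v)                     # phi^{p,[q]}_{(0,0)}
        for t in range(1, 5):
            for s in (1, 2):
                val = phi(*node(s, t)) @ normal[t]
                assert abs(val - (q == t) * (p == s)) < 1e-12
print("edge degrees of freedom are dual to the S_1 Lambda^1 basis functions")
```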

2.2 The General Case \( \mathcal {S}_{r}\Lambda ^{1}(I^{^{2}})\)

Let \( g_{_{n}}, 1 \le n\le r+1, \) denote the \( r+1 \) Gaussian quadrature points on the interval \( [-1,1] \), extended cyclically by \( g_{_{r+n+1}}=g_{_{n}}\), and let

$$\begin{aligned} L_{_{p}}(t)=\prod _{_{k=p+2}}^{^{p+r}}\dfrac{t-g_{_{k}}}{g_{_{p}}-g_{_{k}}}, \end{aligned}$$

denote the Lagrange polynomials of degree \( r-1 \). The basis functions associated with the edge points are then

$$\begin{aligned}&\varphi ^{^{p,[q] }}_{_{(0,0)}}(x,y)=L_{_{p}}(x)\, \tilde{f}_{_{q}}\left( g_{_{p}},g_{_{p+1}}\right) , \quad 1 \le p \le r+1, \quad q=1,3,\\&\varphi ^{^{p,[q] }}_{_{(0,0)}}(x,y)=L_{_{p}}(y)\, \tilde{f}_{_{q}}\left( g_{_{p}},g_{_{p+1}}\right) , \quad 1 \le p \le r+1, \quad q=2,4. \end{aligned}$$

For the interior points, it is easy to see that the number of interior degrees of freedom of \( \mathcal {S}_{r}\Lambda ^{1}(I^{2})\) is \( r(r-1)\). Now, let

$$\begin{aligned} \left\{ \nu _{_{i}}(x,y)\, :\, 1 \le i\le \frac{r(r-1)}{2}\right\} \end{aligned}$$

denote a set of basis functions for \( P_{_{r-2}}(I^{^{2}}) \). We define \( r(r-1) \) independent basis functions:

$$\begin{aligned}&\varphi ^{^{i,[5]}}_{_{(0,0)}}(x,y)= \nu _{_{i}}(x,y) \tilde{f}_{_{5}}(g_{_{1}},g_{_{2}}), \quad 1 \le i\le \frac{r(r-1)}{2},\\&\varphi ^{^{i,[6]}}_{_{(0,0)}}(x,y)=\nu _{_{i}}(x,y) \tilde{f}_{_{6}}(g_{_{1}},g_{_{2}}), \quad 1 \le i\le \frac{r(r-1)}{2}. \end{aligned}$$
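As a consistency check (ours, not from [5]), the number of basis functions constructed above matches the dimension of the space in (2.1):

$$\begin{aligned} \dim \mathcal {S}_{r}\Lambda ^{1}(I^{2})= \dim \mathcal {P}_{r}\Lambda ^{1}(I^{2})+2=(r+1)(r+2)+2= \underbrace{4(r+1)}_{\text {edge functions}}+\underbrace{r(r-1)}_{\text {interior functions}}. \end{aligned}$$

In particular, for \( r=1 \) there are the eight edge functions of Sect. 2.1 and no interior functions.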

3 Hodge Laplacian Equation

In this section, we consider the Hodge Laplacian equation for 1-forms on a domain \(\Omega \subseteq \mathbb {R}^{2} \). The discretization method can be found in [1, 2].

Let \(\Omega \subseteq \mathbb {R}^{2} \) be a simply connected domain with the Lipschitz continuous boundary \( \partial \Omega \). The mixed formulation is: find \( (\sigma , u)\in H^{1}(\Omega )\times H_{{{\mathrm{div}}}}(\Omega ) \) such that

$$\begin{aligned} \begin{aligned} \sigma&={{\mathrm{rot}}}u, \quad {{\mathrm{curl}}}{{\mathrm{rot}}}u - {{\mathrm{grad}}}{{\mathrm{div}}}u = f \qquad \text {in}\quad \Omega \\ u\cdot s&=0, \quad \quad {{\mathrm{div}}}\ u =0 \qquad \text {on}\quad \partial \Omega , \end{aligned} \end{aligned}$$
(3.1)

where \({{\mathrm{rot}}}u=\frac{\partial u_{_{2}}}{\partial x}- \frac{\partial u_{_{1}}}{\partial y} \) and \({{\mathrm{curl}}}\sigma =\left( \frac{\partial \sigma }{\partial y}, -\frac{\partial \sigma }{\partial x}\right) \). We use the Galerkin method to discretize the mixed formulation of the Hodge Laplacian equation. First, we obtain the variational (weak) formulation by multiplying both sides of equation (3.1) by test functions \( \tau \) and \( \nu \) and integrating by parts. We must then find \( (\sigma ,u) \in H^{1}(\Omega )\times H_{{{\mathrm{div}}}}(\Omega )\) such that

$$\begin{aligned} \begin{aligned} \left\langle \sigma , \tau \right\rangle -\left\langle u, {{\mathrm{curl}}}\tau \right\rangle&=0, \qquad \qquad \tau \in H^{1}(\Omega ),\\ \left\langle {{\mathrm{curl}}}\sigma ,\nu \right\rangle + \left\langle {{\mathrm{div}}}u , {{\mathrm{div}}}\nu \right\rangle&=\left\langle f, \nu \right\rangle , \quad \nu \in H_{{{\mathrm{div}}}}(\Omega ). \end{aligned} \end{aligned}$$
(3.2)
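The passage from (3.1) to (3.2) uses the boundary conditions through integration by parts; a brief sketch of this standard step, with \( s \) denoting the unit tangent on \( \partial \Omega \), is

$$\begin{aligned} \left\langle {{\mathrm{rot}}}u, \tau \right\rangle&=\left\langle u, {{\mathrm{curl}}}\tau \right\rangle +\int _{\partial \Omega } (u\cdot s)\, \tau \,\mathrm {d}s=\left\langle u, {{\mathrm{curl}}}\tau \right\rangle ,\\ \left\langle -{{\mathrm{grad}}}{{\mathrm{div}}}u, \nu \right\rangle&=\left\langle {{\mathrm{div}}}u, {{\mathrm{div}}}\nu \right\rangle -\int _{\partial \Omega } ({{\mathrm{div}}}u)\, (\nu \cdot n) \,\mathrm {d}s=\left\langle {{\mathrm{div}}}u, {{\mathrm{div}}}\nu \right\rangle , \end{aligned}$$

since \( u\cdot s=0 \) and \( {{\mathrm{div}}}u =0 \) on \( \partial \Omega \); the term \( \left\langle {{\mathrm{curl}}}\sigma ,\nu \right\rangle \) is kept in strong form.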

To perform the discretization, Arnold et al. [4] chose the finite element spaces \( \Sigma _{h} \) of Lagrange elements and \( V_{h} \) of Raviart–Thomas elements. Here, we consider another possible choice of stable discretization, \( \mathcal {S}_{r+1}\Lambda ^{0}(\Omega )\subset H^{1}(\Omega ) \) and \(\mathcal {S}_{r}\Lambda ^{1}(\Omega )\subset H_{{{\mathrm{div}}}}(\Omega )\), and determine \( \sigma _{h}\in \mathcal {S}_{r+1}\Lambda ^{0}(\Omega )\) and \( u_{h}\in \mathcal {S}_{r}\Lambda ^{1}(\Omega ) \) such that

$$\begin{aligned} \begin{aligned} \left\langle \sigma _{h} , \tau \right\rangle -\left\langle u_{h}, {{\mathrm{curl}}}\tau \right\rangle&=0, \qquad \qquad \tau \in \mathcal {S}_{r+1}\Lambda ^{0}(\Omega ),\\ \left\langle {{\mathrm{curl}}}\sigma _{h},\nu \right\rangle + \left\langle {{\mathrm{div}}}u_{h} , {{\mathrm{div}}}\nu \right\rangle&=\left\langle f, \nu \right\rangle , \;\quad \nu \in \mathcal {S}_{r}\Lambda ^{1}(\Omega ). \end{aligned} \end{aligned}$$
(3.3)

Let \( {{\mathrm{dim}}}\mathcal {S}_{r+1}\Lambda ^{0}(\Omega )=l \), \( {{\mathrm{dim}}}\mathcal {S}_{r}\Lambda ^{1}(\Omega )=n \), and let \( \{\varphi _{i}\}_{i=1}^{n} \), \( \{\psi _{j}\}_{j=1}^{l} \) be the sets of basis functions for \(\mathcal {S}_{r}\Lambda ^{1}(\Omega ) \) and \(\mathcal {S}_{r+1}\Lambda ^{0}(\Omega ) \), respectively. Let \( u_{h}=\sum \nolimits _{i=1}^{n} c_{i} \varphi _{i} \) and \( \sigma _{h}=\sum \nolimits _{j=1}^{l} m_{j} \psi _{j} \), where \( \left\{ c_{i}\right\} _{i=1}^{n} \) and \( \left\{ m_{j}\right\} _{j=1}^{l} \) are the unknown coefficients to be determined. By substituting \( u_{h} \) and \( \sigma _{h} \) into Eq. (3.3) and using \( \psi _{1}, \ldots , \psi _{l} \) and \( \varphi _{1}, \ldots ,\varphi _{n} \) as test functions, we obtain the following saddle point linear system:

$$\begin{aligned} \mathcal {A}u= \begin{pmatrix} A&{}B^{T}\\ -B&{}C \end{pmatrix} \begin{pmatrix} m\\ c \end{pmatrix} = \begin{pmatrix} 0\\ g \end{pmatrix} \equiv b, \end{aligned}$$
(3.4)

where \( A=\Big [ \left\langle \psi _i ,\psi _j\right\rangle \Big ] _{l\times l} \) is symmetric positive definite,

$$\begin{aligned} C=\Big [ \left\langle {{\mathrm{div}}}\varphi _i ,{{\mathrm{div}}}\varphi _j\right\rangle \Big ] _{n\times n}, \end{aligned}$$

is symmetric positive semidefinite,

$$\begin{aligned} B=\Big [ -\left\langle {{\mathrm{curl}}}\psi _j ,\varphi _i\right\rangle \Big ] _{n\times l} \quad \text {and} \quad g= \Big [ \left\langle f ,\varphi _i\right\rangle \Big ]_{n\times 1}. \end{aligned}$$
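The block structure of (3.4) translates directly into sparse block assembly. The following minimal SciPy sketch (illustrative only; the routines that assemble the blocks \( A \), \( B \), \( C \) and the load vector \( g \) are assumed to exist) builds and solves the system with a direct solver; Sect. 5 replaces the direct solve by preconditioned GMRES for large meshes.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_saddle_point(A, B, C, g):
    """Assemble and solve (3.4): [[A, B^T], [-B, C]] (m; c) = (0; g)."""
    l, n = A.shape[0], C.shape[0]      # l = dim S_{r+1} Lambda^0, n = dim S_r Lambda^1
    K = sp.bmat([[A, B.T], [-B, C]], format="csc")
    rhs = np.concatenate([np.zeros(l), np.asarray(g).ravel()])
    mc = spla.spsolve(K, rhs)          # direct sparse solve of the saddle point system
    return mc[:l], mc[l:]              # coefficients m of sigma_h and c of u_h
```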
Fig. 2 The h-adaptive finite element method for eight-node serendipity elements

Fig. 3 Eight nodes on \(A_{(i_{n},j_{n})}\)

4 h-Adaptive Method

The h-adaptive method generates a new mesh by refining the mesh size and therefore increasing the number of elements (see Fig. 2). The method itself is described in [14]; in this section we formulate the h-adaptive serendipity basis functions of superlinear degree 2 and the h-adaptive basis functions of the rectangular BDM spaces calculated in Sect. 2. These formulas are useful and efficient for computing the mass matrix and the stiffness matrix when solving partial differential equations at each step of the h-adaptive method. Let us first divide the square \( I^{2}=[-1,1]^{2}\subseteq \mathbb {R}^{2} \) into \( 2^{n+1}\times 2^{n+1} \) subsquares and set

$$\begin{aligned} \mathbf {A_{n}}=\left\{ (i_{n},j_{n}) \Big |\ i_{n} , j_{n}=\pm \dfrac{2k+1}{2^{n+1}}, \quad k=0,\ldots , 2^{n}-1\right\} . \end{aligned}$$
(4.1)

For each \( (i_{n},j_{n})\in \mathbf {A_{n}},\, n\ge 0 \), we define the square \( A_{(i_{n},j_{n})} \) as shown in Fig. 3b.

Let \(\, x_{i_{n}}=2^{n}\left( x-i_{n}\right) , y_{j_{n}}=2^{n}\left( y-j_{n} \right) \). The serendipity basis functions corresponding to each node in square \(A_{(i_{n},j_{n})}\) are obtained by the following equation:

$$\begin{aligned}&\psi ^{^{kl}}_{_{(i_{n},j_{n})}}(x,y)\nonumber \\&\quad =\left\{ \begin{array}{ll} 2( -1) ^{k+1}\left( x_{i_{n}}+(-1)^{k}\, \dfrac{1}{2} \right) \left( y_{j_{n}}+(-1)^{l}\dfrac{1}{2} \right) \left( ( -1) ^{k+l+1}x_{i_{n}}-y_{j_{n}}+( -1) ^{l}\dfrac{1}{2} \right) , &{} \quad k\ne 3 , l\ne 3,\\ 4\left( x_{i_{n}} +\dfrac{1}{2}\right) \left( x_{i_{n}}-\dfrac{1}{2} \right) \left( y_{j_{n}}+(-1)^{l}\dfrac{1}{2} \right) , &{} \quad k=3, l\ne 3,\\ 4\left( y_{j_{n}} +\dfrac{1}{2}\right) \left( y_{j_{n}}-\dfrac{1}{2} \right) \left( x_{i_{n}}+(-1 )^{k}\dfrac{1}{2} \right) , &{} \quad k\ne 3, l =3.\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(4.2)

and the BDM elements in the square \(A_{(i_{n},j_{n})}\) are

$$\begin{aligned}&\varphi ^{^{k,[l]}}_{_{(i_{n},j_{n})}}(x,y)\nonumber \\&\quad =\left\{ \begin{array}{ll} ( -1) ^{k+1} \frac{\sqrt{3}}{4}\left[ \left( 2x_{i_{n}}^{2}-\dfrac{1}{2} \right) {{\mathrm{\mathrm {d}\!}}}x -\left( 2x_{i_{n}}+(-1)^{k}\dfrac{\sqrt{3}}{3} \right) \left( 2y_{j_{n}}+(-1)^{[\frac{l}{2}]+1} \right) {{\mathrm{\mathrm {d}\!}}}y\right] , &{} \quad k=1,2, \, l=1,3,\\ ( -1) ^{k}\frac{\sqrt{3}}{4}\left[ \left( 2x_{i_{n}}-(-1)^{[\frac{l}{2}]} \right) \left( 2y_{j_{n}}+(-1)^{k}\dfrac{\sqrt{3}}{3} \right) {{\mathrm{\mathrm {d}\!}}}x- \left( 2y_{j_{n}}^{2}-\dfrac{1}{2} \right) {{\mathrm{\mathrm {d}\!}}}y \right] , &{} \quad k=1,2, \,l=2,4.\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(4.3)
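A direct Python transcription of (4.2) (ours; the 1-form elements (4.3) can be coded analogously) makes the role of the local coordinates \( x_{i_{n}}, y_{j_{n}} \) explicit:

```python
def serendipity_psi(k, l, i_n, j_n, n):
    """Serendipity basis function psi^{kl}_{(i_n, j_n)} of (4.2); k, l in {1, 2, 3}, not both 3."""
    def psi(x, y):
        X, Y = 2**n * (x - i_n), 2**n * (y - j_n)   # local coordinates on A_{(i_n, j_n)}
        if k != 3 and l != 3:                       # first case of (4.2)
            return (2 * (-1)**(k + 1) * (X + (-1)**k / 2) * (Y + (-1)**l / 2)
                    * ((-1)**(k + l + 1) * X - Y + (-1)**l / 2))
        if k == 3:                                  # case k = 3, l != 3
            return 4 * (X + 0.5) * (X - 0.5) * (Y + (-1)**l / 2)
        return 4 * (Y + 0.5) * (Y - 0.5) * (X + (-1)**k / 2)   # case k != 3, l = 3
    return psi
```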
Fig. 4 Non-conformal mesh

To preserve the continuity of \( \varphi \cdot n \) across the interface \(l_{B}=A\cap B\) (Fig. 4), we have

$$\begin{aligned} \varphi ^{A}\cdot n_{_{A}}\big | _{l_{B}}+\varphi ^{B}\cdot n_{_{B}}\big | _{l_{B}}=0, \end{aligned}$$
(4.4)

which means

$$\begin{aligned} c_{1}\varphi ^{^{1,[2]}}_{_{(i_{n},j_{n})}}(x,y)+ c_{2}\varphi ^{^{2,[2]}}_{_{(i_{n},j_{n})}}(x,y)+b_{1}\varphi ^{^{1,[4]}}_{_{(i_{n+1},j_{n+1})}}(x,y)+ b_{2}\varphi ^{^{2,[4]}}_{_{(i_{n+1},j_{n+1})}}(x,y)=0;\nonumber \\ \end{aligned}$$
(4.5)

therefore,

$$\begin{aligned} c_{1}=\frac{\sqrt{3}+1}{2}b_{1}-\frac{\sqrt{3}+3}{2}b_{2},\quad c_{2}=\frac{\sqrt{3}-3}{2}b_{1}+\frac{-\sqrt{3}+1}{2}b_{2}. \end{aligned}$$

4.1 Mass Matrix and Stiffness Matrix

Let \( \Psi _{(i_{n},j_{n})}^{^{r}} \) be the set of serendipity basis functions of superlinear degree r in the square \( A_{_{(i_{n},j_{n}) }}\). For \( \psi _{_{(i_{n},j_{n})}}^{^{\gamma }} \in \Psi ^{^{r}} _{_{(i_{n},j_{n})}}, \; n\ge 1 , \; 1\le \gamma \le \dim \mathcal {S}_{r}\Lambda ^{0}(I^{2})\), we have the following results:

Lemma 4.1

$$\begin{aligned}&\mathrm{{(i)}}\iint \limits _{ A_{_{(i_{n},j_{n})}}}\psi _{_{(i_{n},j_{n})}}^{^{\alpha }}\cdot \psi _{_{(i_{n},j_{n})}}^{^{\beta }} {{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y=\iint \limits _{ A_{_{(i^{\prime }_{n},j^{\prime }_{n})}}}\psi _{_{(i^{\prime }_{n}, j^{\prime }_{n})}}^{^{\alpha }}\cdot \psi _{_{(i^{\prime }_{n},j^{\prime }_{n})}}^{^{\beta }} {{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y, \\&\mathrm{{(ii)}} \iint \limits _{ A_{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}}\psi _{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}^{^{\alpha }} \cdot \psi _{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}^{^{\beta }} {{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y =\frac{1}{4}\iint \limits _{ A_{_{(i_{n},j_{n})}}}\psi _{_{(i_{n},j_{n})}}^{^{\alpha }}\cdot \psi _{_{(i_{n},j_{n})}}^{^{\beta }} {{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y, \\&\mathrm{{(iii)}} \iint \limits _{ A_{_{(i_{n},j_{n})}}}{{\mathrm{div}}}\psi _{_{(i_{n},j_{n})}}^{^{\alpha }}\cdot {{\mathrm{div}}}\psi _{_{(i_{n},j_{n})}}^{^{\beta }} {{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y=\frac{1}{4}. \end{aligned}$$

Proof

The proof is straightforward. \(\square \)

Now, let \( M_{_{A_{(i_{n},j_{n})}}}^{^{r}} = \left[ m_{\alpha \beta } \right] _{\alpha ,\beta =1,\ldots ,\dim \mathcal {S}_{r}\Lambda ^{0}(I^{2})}\) be the mass matrix of the serendipity basis functions in the square \( A_{_{(i_{n},j_{n})}} \). Then we have:

Theorem 4.2

$$\begin{aligned}&\mathrm{{(i)}} \quad M_{_{A_{(i_{n},j_{n})}}}^{^{r}} =M_{_{A_{(i^{\prime }_{n},j^{\prime }_{n})}}}^{^{r}},\\&\mathrm{{(ii)}}\quad M_{_{A_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}}^{^{r}} =\frac{1}{4} M_{_{A_{(i_{n},j_{n})}}}^{^{r}}. \end{aligned}$$

Proof

The proof is a consequence of Lemma 4.1. \(\square \)

Let \( \Phi _{_{(i_{n},j_{n})}}^{^{r}} \) be the set of basis functions of space \( \mathcal {S}_{r}\Lambda ^{1} \left( A_{_{(i_{n},j_{n}) }}\right) \). We have the following theorem:

Theorem 4.3

If \( \varphi _{_{(i_{n},j_{n})}} \in \Phi ^{^{r}} _{(i_{n},j_{n})} \) and \( \psi _{_{(i_{n},j_{n})}}\in \Psi ^{^{r}} _{(i_{n},j_{n})} \) then

$$\begin{aligned} \int _{ A_{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}}{{\mathrm{curl}}}\psi _{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}} \wedge \star \varphi _{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}}= \frac{1}{2}\int _{ A_{_{(i_{n},j_{n})}}}{{\mathrm{curl}}}\psi _{_{(i_{n},j_{n})}}\wedge \star \varphi _{_{(i_{n},j_{n})}}. \end{aligned}$$

Proof

The basis functions on the square \( A_{_{(i^{\prime }_{n+1},j^{\prime }_{n+1}) }}\) are obtained by scaling and translating the basis functions on the square \( A_{_{(i_{n},j_{n}) }}\). \(\square \)

Theorem 4.4

If \( \mathcal {A}_{n}=\left[ \begin{array}{ccc} A_{n}&{}&{}B^{^{T}}_{n}\\ -B_{n}&{}&{}C_{n}\\ \end{array} \right] \) and \( \mathcal {A}_{n+1}=\left[ \begin{array}{ccc} A_{n+1}&{}&{}B^{^{T}}_{n+1}\\ -B_{n+1}&{}&{}C_{n+1}\\ \end{array} \right] \) are the stiffness matrices of Eq. (3.4) in the squares \( A_{_{(i_{n},j_{n})}} \) and \( A_{_{(i^{\prime }_{n+1} ,j^{\prime }_{n+1})}} \), respectively, then

(i) \(A_{n+1}=\frac{1}{4} A_{n}, \)

(ii) \(B_{n+1}=\frac{1}{2} B_{n}, \)

(iii) \(C_{n+1}=C_{n}. \)
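Theorem 4.4 is stated without proof; the following sketch (our summary of the scaling argument behind Lemma 4.1 and Theorems 4.2–4.3) indicates where the constants come from. In passing from level \( n \) to level \( n+1 \), the basis functions are composed with an affine map that halves the square, so each differentiation (\( {{\mathrm{curl}}} \) or \( {{\mathrm{div}}} \)) contributes a factor 2, while the area element contributes a factor \( \frac{1}{4} \):

$$\begin{aligned} (A_{n+1})_{ij}&=\iint \psi _{i}\, \psi _{j} \,{{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y =\tfrac{1}{4}\,(A_{n})_{ij},\\ (B_{n+1})_{ij}&=-\iint ({{\mathrm{curl}}}\psi _{j})\cdot \varphi _{i} \,{{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y =2\cdot \tfrac{1}{4}\,(B_{n})_{ij}=\tfrac{1}{2}\,(B_{n})_{ij},\\ (C_{n+1})_{ij}&=\iint {{\mathrm{div}}}\varphi _{i}\, {{\mathrm{div}}}\varphi _{j} \,{{\mathrm{\mathrm {d}\!}}}x {{\mathrm{\mathrm {d}\!}}}y =2^{2}\cdot \tfrac{1}{4}\,(C_{n})_{ij}=(C_{n})_{ij}, \end{aligned}$$

where the integrals on the left-hand sides are taken over \( A_{_{(i^{\prime }_{n+1},j^{\prime }_{n+1})}} \) and those on the right-hand sides over \( A_{_{(i_{n},j_{n})}} \).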

5 Modified Generalized Shift-Splitting (MGSS) Preconditioner

The generalized shift-splitting (GSS) preconditioner was initially proposed by Bai et al. [6]. Then it was used by Cao et al. [9] to solve nonsingular saddle point problems of the form

$$\begin{aligned} \mathcal {A}= \begin{pmatrix} A&{}B^{T}\\ -B&{} C \end{pmatrix}, \end{aligned}$$
(5.1)

where A is positive definite, B is of full rank, and \( C=0 \). For positive definite A, full-rank B, and symmetric positive semidefinite C, it was extended to the modified generalized shift-splitting (MGSS) preconditioner by Salkuyeh et al. [13].

The purpose of this section is to show that the MGSS preconditioner is effective for the saddle point matrix of the Hodge Laplacian equation with discretization \(\mathcal {S}_{2}\Lambda ^{0}(\mathcal {T}_{h})\times \mathcal {S}_{1}\Lambda ^{1}(\mathcal {T}_{h})\). In this kind of problem, \( A\in \mathbb {R}^{n\times n} \) is symmetric positive definite (SPD), \( B \in \mathbb {R}^{m\times n }\) with \( m\le n \) is rank deficient, and

$$\begin{aligned} \exists 1\le j \le n \quad \text {such that}\quad \sum _{i=1}^{m}b_{ij}\ne 0. \end{aligned}$$
(5.2)

We also have

$$\begin{aligned} C=\alpha \left[ \begin{array}{ccc} 1&{}\ldots &{}1\\ \vdots &{}\vdots &{}\vdots \\ 1&{}\ldots &{}1\\ \end{array} \right] \ne 0. \end{aligned}$$
(5.3)

Now, let

$$\begin{aligned} \mathcal {A} =\mathcal {M}_{\alpha ,\beta }-\mathcal {N}_{\alpha ,\beta }=\frac{1}{2} \begin{pmatrix} \alpha \mathcal {I}+A&{}B^{T}\\ -B&{}\beta \mathcal {I}+C \end{pmatrix}-\frac{1}{2} \begin{pmatrix} \alpha \mathcal {I}-A&{}-B^{T}\\ B&{}\beta \mathcal {I}-C \end{pmatrix}, \end{aligned}$$
(5.4)

where \( \alpha , \beta \) are two positive real parameters and \(\mathcal { I} \) is the identity matrix. In the modified generalized shift-splitting iterative method [13], the iteration matrix is

$$\begin{aligned} \Gamma _{\alpha ,\beta }= \begin{pmatrix} \alpha \mathcal {I}+A&{}B^{T}\\ -B&{}\beta \mathcal {I}+C \end{pmatrix}^{-1} \begin{pmatrix} \alpha \mathcal {I}-A&{}-B^{T}\\ B&{}\beta \mathcal {I}-C \end{pmatrix}, \end{aligned}$$
(5.5)

and the MGSS preconditioner is

$$\begin{aligned} \mathcal {P}_\mathrm{MGSS}=\frac{1}{2} \begin{pmatrix} \alpha \mathcal {I}+A&{}B^{T}\\ -B&{}\beta \mathcal {I}+C \end{pmatrix}. \end{aligned}$$
(5.6)
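In practice, \( \mathcal {P}_{_\mathrm{MGSS}} \) is factorized once and its inverse is applied at every Krylov iteration. A minimal SciPy sketch (ours; it assumes the sparse blocks \( A \), \( B \), \( C \) and the right-hand side come from the assembly in Sect. 3):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mgss_gmres(A, B, C, b, alpha=1e-3, beta=1e-3, restart=30):
    """Solve [[A, B^T], [-B, C]] u = b by GMRES(restart) preconditioned with (5.6)."""
    l, n = A.shape[0], C.shape[0]
    K = sp.bmat([[A, B.T], [-B, C]], format="csc")                 # saddle point matrix (3.4)
    P = 0.5 * sp.bmat([[A + alpha * sp.eye(l), B.T],
                       [-B, C + beta * sp.eye(n)]], format="csc")  # MGSS preconditioner (5.6)
    P_lu = spla.splu(P)                                            # factorize once, reuse it
    M = spla.LinearOperator(K.shape, matvec=P_lu.solve)            # action of P^{-1}
    return spla.gmres(K, b, M=M, restart=restart)                  # (solution, info flag)
```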

Let \( \rho (\Gamma _{\alpha , \beta }) \) denote the spectral radius of \( \Gamma _{\alpha , \beta } \). The modified generalized shift-splitting iterative method converges for every initial guess \( u^{0} \) if and only if \( \rho (\Gamma _{\alpha ,\beta }) <1\). Let \( \lambda \) be an eigenvalue of \( \Gamma _{\alpha ,\beta } \) and \( \left[ \begin{array}{c} x\\ y \end{array} \right] \) be the corresponding eigenvector. Then we have

$$\begin{aligned} \begin{pmatrix} \alpha \mathcal {I}-A&{}-B^{T}\\ B&{}\beta \mathcal {I}-C \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} \alpha \mathcal {I}+A&{}B^{T}\\ -B&{}\beta \mathcal {I}+C \end{pmatrix} \begin{pmatrix} \lambda x\\ \lambda y \end{pmatrix}, \end{aligned}$$
(5.7)

or equivalently

$$\begin{aligned} (\alpha \mathcal {I} -A)x-B^{T}y&=\lambda (\alpha \mathcal {I}+A)x +\lambda B^{T}y, \end{aligned}$$
(5.8)
$$\begin{aligned} Bx+(\beta \mathcal {I}-C)y&=-\lambda B x+\lambda (\beta \mathcal {I}+C)y. \end{aligned}$$
(5.9)

Table 1 \(L^2\) errors and convergence rates for the mixed finite element approximation of the Hodge Laplacian equation

Table 2 The condition number of the matrix \( \mathcal {P}_{_\mathrm{MGSS}} ^{-1}\mathcal {A} \) on different meshes

Table 3 Numerical results of the preconditioned GMRES method for the Hodge Laplacian problem in step n of the h-adaptive method

Fig. 5 Eigenvalue distribution of the saddle point matrix \(\mathcal {A}\) (left) and the preconditioned matrix \(\mathcal {P}^{-1}_{\mathrm{MGSS}}\mathcal {A}\) with \( \alpha =\beta =0.001 \) (right) in steps 1, 2, and 3 of the h-adaptive method

Lemma 5.1

[13] Let \( \alpha , \beta >0 \). If \( \lambda \) is an eigenvalue of the matrix \( \Gamma _{\alpha ,\beta } \), then \( \lambda \ne \pm 1 \).

Theorem 5.2

If \( \lambda \) is an eigenvalue of the matrix \( \Gamma _{\alpha , \beta } \), then \( \mid \lambda \mid < 1 \).

Proof

Suppose \( x=0 \). Then (5.8) and Lemma 5.1 imply that \( B^T y=0 \). By (5.9) we have \( C\, y =\frac{1-\lambda }{1+\lambda }\, \beta \, y \), and together with (5.3) this forces \( y_{1}=y_{2}=\cdots =y_{n} \). If \( y=0 \), then \( (x,y) \) is not an eigenvector; if \( y\ne 0 \), then (5.2) implies \( B^{T}y\ne 0 \), a contradiction. Therefore, \( x\ne 0 \).

The rest of the proof is similar to the proof of Theorem 1 in [13]. \(\square \)

Theorem 5.3

If we consider the stable pair \(\mathcal {S}_{2}\Lambda ^{0}(A_{_{(i_{n},j_{n})}})\times \mathcal {S}_{1}\Lambda ^{1}(A_{_{(i_{n},j_{n})}})\) in the discretization of the Hodge Laplacian equation (3.1), then for all n (i.e., at each step of the h-adaptive method) the MGSS iterative method for the linear system (3.4) is convergent.

Proof

It follows from Theorems 5.2 and 4.4. \(\square \)

6 Numerical Results

For the numerical experiments, we considered the vector field

$$\begin{aligned} f= \frac{-\pi ^2}{2}\sin \left( \frac{\pi x}{2}\right) \cos \left( \frac{\pi y}{2}\right) {{\mathrm{\mathrm {d}\!}}}x -(\pi ^2)\cos \left( \frac{\pi x}{2}\right) \sin \left( \frac{\pi y}{2}\right) {{\mathrm{\mathrm {d}\!}}}y \end{aligned}$$

on the domain \( \Omega =I^{^{2}}\). We chose \( \mathcal {S}_{2}\Lambda ^{0}(\Omega )\subset H^{1}(\Omega ) \) and \(\mathcal {S}_{1}\Lambda ^{1}(\Omega )\subset H_{{{\mathrm{div}}}}(\Omega )\) and determined \( \sigma _{h}\in \mathcal {S}_{2}\Lambda ^{0}(\Omega )\) and \( u_{h}\in \mathcal {S}_{1}\Lambda ^{1}(\Omega ) \) such that they satisfy the linear system (3.4). The meshes were obtained by dividing the square \(I^2\) into \(2^{n+1}\times 2^{n+1}\) subsquares according to (4.1). The results presented in Table 1 are the \( L^2 \) errors and convergence rates of the numerical solution of Eq. (3.4) for \( n=5,6,7,8 \). All the numerical experiments presented in this section were computed using MATLAB on a laptop with an Intel Core i7 CPU (1.6 GHz) and 4 GB of RAM.
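The convergence rates in Table 1 can be computed in the usual way from the errors on successively refined meshes; a small sketch (ours), assuming err holds the \( L^2 \) errors for \( n=5,6,7,8 \) in order:

```python
import numpy as np

def convergence_rates(err):
    """Observed orders from errors on successively halved meshes (h -> h/2)."""
    err = np.asarray(err, dtype=float)
    return np.log2(err[:-1] / err[1:])
```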

The condition numbers of \( \mathcal {A} \) on different meshes are reported in Table 3. The results show that the matrix \( \mathcal {A}\) is ill-conditioned. Next, we used the modified generalized shift-splitting (MGSS) preconditioner \( \mathcal {P}_{_\mathrm{MGSS}} \), as described in the previous section. The condition numbers of the matrix \( \mathcal {P}_{_\mathrm{MGSS}} ^{-1}\mathcal {A} \) on different meshes are reported in Table 2. The eigenvalue distributions of the matrices \( \mathcal {A} \) and \( \mathcal {P}_{_\mathrm{MGSS}} ^{-1}\mathcal {A} \) with \( \alpha =\beta =0.001 \) are displayed in Fig. 5. For further investigation, we took the MHSS-II preconditioner from [12] to compare with the proposed MGSS preconditioner:

$$\begin{aligned} \mathcal {P}_{_{\mathrm{MHSS}\hbox {-}\mathrm{II}}}=\frac{1}{2\alpha } \begin{pmatrix} 2\alpha \mathcal {I} &{}0\\ 0&{}\alpha \mathcal {I}+C \end{pmatrix} \begin{pmatrix} A&{}B^{T}\\ -B&{}\alpha \mathcal {I} \end{pmatrix}, \end{aligned}$$
(6.1)

where \( \alpha >0 \) is a parameter. We combined these two preconditioners with the GMRES(m) algorithm.
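For completeness, a minimal SciPy sketch (ours) of how the comparison preconditioner (6.1) can be applied; the sparse blocks \( A \), \( B \), \( C \) are the same assumed quantities as in the sketch of Sect. 5, and the returned operator can be passed as the preconditioner to GMRES(m) in the same way.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mhss2_preconditioner(A, B, C, alpha):
    """Build P_MHSS-II of (6.1) and return a LinearOperator applying its inverse."""
    l, n = A.shape[0], C.shape[0]
    D = sp.bmat([[2 * alpha * sp.eye(l), None],
                 [None, alpha * sp.eye(n) + C]], format="csc")   # block diagonal factor
    S = sp.bmat([[A, B.T], [-B, alpha * sp.eye(n)]], format="csc")
    P = (D @ S) / (2 * alpha)                                    # P_MHSS-II of (6.1)
    P_lu = spla.splu(P.tocsc())
    return spla.LinearOperator(P.shape, matvec=P_lu.solve)
```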

In Table 3, the numerical results of the MHSS-II- and MGSS-preconditioned GMRES methods are reported. The advantage of the MGSS-preconditioned GMRES method over the MHSS-II-preconditioned GMRES method, in terms of the condition number and the relative residual, can be observed.