1 Introduction

In this paper, we investigate superconvergence properties of the direct discontinuous Galerkin (DDG) method with interface corrections (DDGIC) [19] and the symmetric DDG [28] method for linear diffusion equations.

The DDG methods are a class of discontinuous Galerkin (DG) methods for solving diffusion problems. The original DDG method was proposed by Liu and Yan [18], where the numerical flux \(\widehat{(u_h)_x}\) was introduced to approximate the solution’s spatial derivative \(u_x\) at element interfaces. Different from the local DG (LDG) method, where auxiliary variables are introduced for the solution’s spatial derivatives and the original equation is rewritten as a first-order system, the DDG method is based on the direct weak formulation of diffusion equations. The original DDG method suffers from the difficulty of identifying suitable coefficients for higher-order (\(\geqslant 4\)) numerical flux terms and from accuracy loss on nonuniform meshes. The DDGIC method proposed in [19] modified the original DDG method by adding interface correction terms to balance the solution and test function in the bi-linear form, which guarantees optimal convergence and improves the capability of the DDG method; it is so far the most effective solver for time-dependent diffusion equations. The symmetric DDG method [28] is another variation of the original DDG method: it introduces a numerical flux for the derivative of the test function so that the \(L^2(L^2)\) error estimate can be carried out, resulting in a solver better suited to elliptic-type equations.

Superconvergence properties of DG and LDG methods for hyperbolic and parabolic problems have been intensively studied in the literature via different approaches, including treating the problem as an initial or boundary value problem [1,2,3,4, 16], establishing negative norm estimates [14, 17, 27], introducing special projections to decompose the error and manipulating test functions in the weak formulation [7, 8, 10,11,12, 22, 32, 33], applying the Fourier analysis technique [13, 15, 24, 25, 31, 39], and constructing special correction functions [20, 21, 30], etc. In recent years, the superconvergence of the DDG methods has been studied for diffusion equations. The authors in [6] proved that, under suitable choices of numerical fluxes, the DDG solution with \(P^k\) polynomial approximation is superconvergent of order \(k+2\) to the Gauss-Lobatto projection of the exact solution. The authors in [38] established the superconvergence of moment errors for the DDGIC and symmetric DDG methods via the Fourier analysis approach for \(P^2\) polynomial approximations. The authors in [23] investigated the superconvergence properties of the original DDG method and its variations (the DDGIC, symmetric, and nonsymmetric DDG methods) via the Fourier analysis approach for both \(P^2\) and \(P^3\) approximations. It is worth mentioning that the \(P^3\) case is more challenging for Fourier-type analysis.

The Fourier analysis is a powerful technique to study the stability and error estimates of DG methods, especially when standard finite element techniques cannot be applied. Besides the superconvergence studies mentioned above, this technique was applied to provide a sufficient condition for the instability of “bad” schemes in [34] and to demonstrate the optimal convergence in [35,36,37], etc. Although the Fourier analysis is restricted to linear problems with periodic boundary conditions and uniform meshes, it can serve as guidance for problems in more general settings.

In this paper, we continue to study superconvergence properties of the DDGIC and symmetric DDG methods for the one-dimensional linear diffusion equation by the Fourier analysis approach based on the eigen-structure of the amplification matrix. Our work is motivated by the superconvergence properties of DDG methods at shifted Lobatto points in [23] and extends the eigen-structure-based Fourier analysis of superconvergence for the DG and LDG methods [15] to DDG methods. We first choose basis functions as Lagrange polynomials based on shifted Lobatto points and rewrite the DDG finite element methods as finite difference schemes. Then we carry out the Fourier analysis and symbolically compute the eigenvalues and corresponding eigenvectors of the amplification matrices of the DDGIC and symmetric DDG methods. We consider the coefficients in the numerical fluxes with both settings \(\beta _1\ne \frac{1}{2k(k+1)}\) and \(\beta _1=\frac{1}{2k(k+1)}\) for \(P^k \,\,(k=2,3)\) polynomial approximations. We observe the following properties.

  • The amplification matrices of the DDGIC and symmetric DDG methods are diagonalizable with \(k+1\) distinct eigenvalues, among which one is physically relevant and approximates the analytical wave propagation speed with a dissipation error of order 2k, while the others are non-physical, of order \(\frac{1}{{\Delta {x}}^2}\), and have negative real coefficients.

  • The amplification matrices of both DDG methods have \(k+1\) corresponding eigenvectors. For \(\beta _1 \ne \frac{1}{2k(k+1)}\), the eigenvector corresponding to the physically relevant eigenvalue approximates the wave function with order \(k+1\) for \(P^2\) polynomial approximations, and with order \(k+2\) for \(P^3\) polynomial approximations. For \(\beta _1=\frac{1}{2k(k+1)}\), it approximates the wave function with order \(k+2\) for both \(P^2\) and \(P^3\) polynomial approximations.

Following the eigen-structure analysis of the amplification matrices, we establish error estimates for the DDGIC and symmetric DDG methods, which can be decomposed into three parts. The first part is the dissipation error of the physically relevant eigenvalue, which grows linearly in time and is superconvergent of order 2k for \(P^k\ (k=2,3)\) with any admissible \(\beta _1\). The second part is the projection error related to the eigenvector corresponding to the physically relevant eigenvalue. This part of the error decreases in time and is superconvergent of order \(k+2\) for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and for \(P^3\) approximations with any admissible \(\beta _1\); it degrades to the optimal (\(k+1\))-th order for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\). The third part consists of the dissipation errors of the non-physically relevant eigenvalues, which decay exponentially with respect to \(\Delta {x}\). Therefore, the error between the numerical solution and the exact solution decreases in time initially, and is superconvergent of order \(k+2\) for the \(P^2\) case with \(\beta _1=\frac{1}{12}\) and the \(P^3\) case with any admissible \(\beta _1\), while it is only optimal of order \(k+1\) for the \(P^2\) case with \(\beta _1\ne \frac{1}{12}\). As time increases to \({\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-2}}\right)\) for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and \(P^3\) approximations with any admissible \(\beta _1\), or to \({\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-1}}\right)\) (longer time simulation) for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\), the error grows linearly in time and is superconvergent of order 2k. We also provide an alternative way to check the long-time behaviour of the numerical solution. Numerical experiments are provided to demonstrate the theoretical results.

The rest of the paper is organized as follows. We briefly review the scheme formulation of the DDGIC and symmetric DDG methods in Sect. 2. Section 3 is devoted to the superconvergence study of both DDG methods with the Fourier analysis procedure presented in Sect. 3.1, the eigen-structure analysis of amplification matrices carried out in Sect. 3.2, and error estimates shown in Sect. 3.3. Numerical experiments are presented in Sect. 4 to validate the theoretical results. Conclusions are given in Sect. 5.

2 DDGIC and Symmetric DDG Methods

In this section, we present the algorithm formulation of the DDGIC and symmetric DDG methods for the one-dimensional linear diffusion problem

$$\begin{aligned}&u_t-u_{xx}=0, \quad x\in [0,2\uppi ], \,\,t>0 \end{aligned}$$
(1)

with the initial condition \(u(x,0)=\sin x\) and the periodic boundary condition. The exact solution is

$$\begin{aligned} u(x,t)=\text{e}^{-t}\sin x. \end{aligned}$$
(2)

To define DDG methods for this model problem, we first uniformly divide \([0,2\uppi ]\) into N cells with the mesh size \(\Delta {x}=\frac{2\uppi }{N}\). We denote the cell by \(I_j=[x_{j-1/2}, x_{j+1/2}]\), where

$$\begin{aligned} 0=x_{\frac{1}{2}}<x_{\frac{3}{2}}<\cdots <x_{N+\frac{1}{2}}=2\uppi , \end{aligned}$$

and further denote the cell center by \(x_j=\frac{1}{2} \left( x_{j-1/2}+x_{j+1/2}\right)\), for \(j=1,\cdots , N\). The finite element approximation space is defined by

$$\begin{aligned} {\mathbb {V}}_h^k :=\{v_h\in L^2[0,2\uppi]{:} v_h|_{I_j} \in P^k(I_j), ~~j=1,\cdots , N\}, \end{aligned}$$

where \(P^k(I_j)\) denotes the set of polynomials of degree up to k defined in the cell \(I_j\).

For \(v_h \in {\mathbb {V}}^k_{h}\), we denote by \(v_h^-\) and \(v_h^+\) the left and right limits of \(v_h\) at the cell interface, respectively, and denote the jump and average of \(v_h\) at the cell interface as

$$\begin{aligned} \llbracket v_h\rrbracket =v_h^+ - v_h^-, \quad \{\!\{ v_h \}\!\} =\frac{v_h^+ + v_h^-}{2}. \end{aligned}$$
(3)

Now we are ready to define DDGIC and symmetric DDG methods for (1).

2.1 DDG Method with Interface Correction

Before introducing the DDGIC method for solving the model (1), we first review the original DDG method [18], which is defined as follows: find the solution \(u_h \in {\mathbb {V}}^k_{h}\), such that for any test function \(v_h \in {\mathbb {V}}^k_{h}\), we have

$$\begin{aligned} \int _{I_j}(u_h)_tv_h{{\rm d}}x-\widehat{(u_h)_x}(v_h)^-_{j+\frac{1}{2}} + \widehat{(u_h)_x}(v_h)^+_{j-\frac{1}{2}} +\int _{I_j}(u_h)_x(v_h)_x{{\rm d}}x=0. \end{aligned}$$
(4)

This weak formulation is obtained by multiplying both sides of the model (1) by test functions in \({\mathbb {V}}_h^k\) and performing integration by parts in the cell \(I_j\). Here \(\widehat{(u_h)_x}\) is the so-called numerical flux, which approximates the derivative of the solution \((u_h)_x\) at the cell interfaces \(x_{j\pm \frac{1}{2}}\), \(j=1,\cdots ,N\), since \(u_h \in {\mathbb {V}}^k_{h}\) is discontinuous at cell interfaces. The numerical flux \(\widehat{(u_h)_x}\) is uniquely defined at the cell interface as

$$\begin{aligned} \widehat{(u_h)_x}=\beta _0\frac{\llbracket u_h \rrbracket }{\Delta {x}}+ \{\!\{ (u_h)_x \}\!\} +\beta _{1}\Delta {x} \llbracket (u_h)_{xx}\rrbracket + \beta _2 (\Delta {x})^3 \llbracket (u_h)_{{xxxx}}\rrbracket +\cdots , \end{aligned}$$
(5)

which involves the jump of the numerical solution, the average of the first derivative, as well as the jumps of even-order derivatives at the cell interface, and is consistent with \(u_x\).

Although there exists a large group of admissible coefficient pairs \((\beta _0, \beta _{1})\) that ensure the stability and convergence of the DDG method, it is challenging to identify suitable coefficients for the higher-order terms in the numerical flux; see [18] for more details. To guarantee the optimal convergence and improve the capability of the DDG method, the DDGIC method [19] was introduced by adding interface correction terms to the original scheme (4) to balance the solution and test functions in the bi-linear form.

The DDGIC method for solving (1) is defined as follows: find the solution \(u_h \in {\mathbb {V}}^k_{h}\), such that for any test function \(v_h \in {\mathbb {V}}^k_{h}\), we have

$$\begin{aligned} \int _{I_j}(u_h)_tv_h{{\rm d}}x-\left. \widehat{(u_h)_x}v_h\right| ^{j+\frac{1}{2}}_{j-\frac{1}{2}}+\int _{I_j}(u_h)_x(v_h)_x{{\rm d}}x +\frac{(v_h)_x^-}{2}\llbracket u_h \rrbracket _{j+\frac{1}{2}}+\frac{(v_h)_x^+}{2}\llbracket u_h \rrbracket _{j-\frac{1}{2}}=0, \end{aligned}$$
(6)

where

$$\begin{aligned} \left. \widehat{(u_h)_x}v_h \right| ^{j+\frac{1}{2}}_{j-\frac{1}{2}}\; :=\widehat{(u_h)_x}(v_h)^-_{j+\frac{1}{2}} - \widehat{(u_h)_x}(v_h)^+_{j-\frac{1}{2}}. \end{aligned}$$

The numerical flux is given by

$$\begin{aligned} \widehat{(u_h)_x}=\beta _0 \frac{\llbracket u_h \rrbracket }{\Delta {x}}+ \{\!\{ (u_h)_x \}\!\} +\beta _{1}\Delta {x} \llbracket (u_h)_{xx}\rrbracket , \end{aligned}$$
(7)

involving only the jump of the solution, the average of the first derivative, and the jump of the second derivative. Here the jumps of higher order (\(\geqslant 4\)) derivatives are dropped from (5).
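To make the formula concrete, the following Python sketch evaluates the numerical flux (7) at a single interface. It assumes the local pieces of \(u_h\) on the two neighboring cells are stored as numpy `Polynomial` objects in the global coordinate; the function name `ddg_flux` and the toy check are ours, not from [19].

```python
import numpy as np
from numpy.polynomial import Polynomial

def ddg_flux(u_left, u_right, x_face, dx, beta0, beta1):
    """Evaluate the DDG numerical flux (7) at the interface x_face.

    u_left, u_right: Polynomial pieces of u_h on the cells to the left and
    right of the interface, expressed in the global coordinate x.
    """
    jump_u = u_right(x_face) - u_left(x_face)                             # [[u_h]]
    avg_ux = 0.5 * (u_left.deriv(1)(x_face) + u_right.deriv(1)(x_face))   # {{(u_h)_x}}
    jump_uxx = u_right.deriv(2)(x_face) - u_left.deriv(2)(x_face)         # [[(u_h)_xx]]
    return beta0 * jump_u / dx + avg_ux + beta1 * dx * jump_uxx

# consistency check: for a globally smooth u_h all jumps vanish and the
# flux reduces to the exact derivative at the interface
p = Polynomial([0.0, 1.0, 2.0])                                 # u(x) = x + 2 x^2
print(ddg_flux(p, p, 0.5, 0.1, beta0=2.0, beta1=1.0 / 12.0))    # 3.0
print(p.deriv(1)(0.5))                                          # 3.0
```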

With a suitable coefficient pair \((\beta _0,\beta _1)\), the DDGIC method was proved to be stable and optimally accurate in [19]. It is worth mentioning that for lower order piecewise constant (\(k=0\)) and linear (\(k=1\)) approximations, the second derivative jump term \(\llbracket (u_h)_{xx}\rrbracket\) has no contribution to the numerical flux (7), and the DDGIC method degenerates to the classical interior penalty DG (IPDG) method [5, 29]. For higher approximations (\(k\geqslant 2\)), the DDGIC method has a few advantages over the IPDG method. The DDGIC solution satisfies a strict maximum principle with at least third order of accuracy [9], while only second order can be obtained for the IPDG method. The DDGIC solution is proved to be superconvergent in its approximation to the solution’s spatial derivative \(u_x\) [38] with the Fourier analysis technique, while no such superconvergence result is observed for the IPDG method. It was shown in [23] that the DDGIC method is superconvergent of order \((k+2)\) at shifted Lobatto points with both \(P^2\) and \(P^3\) polynomial approximations, while the IPDG method is superconvergent of order \((k+2)\) only with \(P^3\) polynomial approximations. For \(P^2\) approximations, the IPDG method is superconvergent of order \((k+2)\) at the cell center, but converges only with the optimal order of \((k+1)\) at the other two Lobatto points.

2.2 Symmetric DDG Method

In this section, we present the symmetric DDG method [28], which is also a variation of the original DDG method. It introduces a numerical flux for the test function’s derivative \((v_h)_x\) and is defined as follows: find the solution \(u_h \in {\mathbb {V}}^k_{h}\), such that for any test function \(v_h \in {\mathbb {V}}^k_{h}\), we have

$$\begin{aligned} \int _{I_j}(u_h)_tv_h{{\rm d}}x-\left. \widehat{(u_h)_x}v_h\right| ^{j+\frac{1}{2}}_{j-\frac{1}{2}}+ \int _{I_j}(u_h)_x(v_h)_x{{\rm d}}x + \widetilde{(v_h)_x}\llbracket u_h \rrbracket _{j+\frac{1}{2}}+ \widetilde{(v_h)_x}\llbracket u_h \rrbracket _{j-\frac{1}{2}}=0 \end{aligned}$$
(8)

with the numerical fluxes of the solution \(u_h\) and test function \(v_h\) given by

$$\left\{\begin{array}{ll} \widehat{(u_h)_x}&=\beta _{0u}\frac{\llbracket u_h \rrbracket }{\Delta {x}}+ \{\!\{ (u_h)_x \}\!\} +\beta _1\Delta {x}\llbracket (u_h)_{xx} \rrbracket ,\\ \widetilde{(v_h)_x}&=\beta _{0v}\frac{\llbracket v_h \rrbracket }{\Delta {x}}+ \{\!\{ (v_h)_x \}\!\} +\beta _1\Delta {x}\llbracket (v_h)_{xx} \rrbracket . \end{array}\right.$$
(9)

In fact, it follows by summing up (8) over all cells \(I_j\) that

$$\begin{aligned} \int _{0}^{2\uppi }(u_h)_tv_h{{\rm d}}x+{\mathbb {B}}(u_h,v_h)=0 \end{aligned}$$
(10)

with the bi-linear form \({\mathbb {B}}(u_h,v_h)\) given by

$$\begin{aligned} {\mathbb {B}}(u_h,v_h)= \sum _{j=1}^N\int _{I_j}(u_h)_x(v_h)_x{{\rm d}}x+\sum _{j=1}^N\Big (\widehat{(u_h)_x}\llbracket v_h \rrbracket +\widetilde{(v_h)_x}\llbracket {u_h} \rrbracket \Big )_{j+\frac{1}{2}}. \end{aligned}$$

Clearly, \({\mathbb {B}}(u_h,v_h)\) = \({\mathbb {B}}(v_h, u_h)\), i.e., the bi-linear form \({\mathbb {B}}(u_h,v_h)\) is symmetric.

Denote \(\beta _0=\beta _{0u}+\beta _{0v}\). It was proved in [28] that, when the coefficient pair \((\beta _0,\beta _1)\) satisfies a certain quadratic condition, the numerical fluxes (9) are admissible and the optimal accuracy is guaranteed. Similar to the DDGIC method, the symmetric DDG method also degenerates to the IPDG method for lower order (\(k\leqslant 1\)) approximations. As shown in [23], the symmetric DDG method is also superconvergent of order \((k+2)\) at shifted Lobatto points with both \(P^2\) and \(P^3\) polynomial approximations.

3 Superconvergence Study via Eigen-structure Analysis

In this section, we study the superconvergence properties of the DDGIC and symmetric DDG methods via the Fourier analysis approach based on the eigen-structure of the amplification matrices.

3.1 Fourier Analysis Procedure

In this section, we present in detail the Fourier analysis procedure for the DDGIC and symmetric DDG methods.

We first present the details of rewriting the DDGIC scheme (6) and the symmetric DDG scheme (8) as finite difference schemes. By choosing a local basis of \(P^k(I_j)\), denoted as \(\phi ^l_j\left( x\right)\), \(l=1,\cdots ,k+1\), we can express the numerical solution as

$$\begin{aligned} u_h|_{I_j}=\sum _{l=1}^{k+1}u_j^l\phi _j^l\left( x\right) ,\quad x\in I_j. \end{aligned}$$
(11)

After substituting (11) into the DDGIC scheme (6) and the symmetric DDG scheme (8), and inverting a local \((k+1)\times (k+1)\) mass matrix, the DDGIC method (6) and the symmetric DDG method (8) can be rewritten in the form of

$$\begin{aligned} \frac{\hbox{d}\varvec{u_j}}{\hbox{d}t}=A{\varvec{u_{j-1}}}+B{\varvec{u_j}}+C{\varvec{u_{j+1}}}, \end{aligned}$$
(12)

where \(\varvec{u_j}=\left( u^1_{j},u^2_{j},\cdots ,u^{k+1}_{j}\right) ^{\rm T}\), and \(A, \ B\), and C are \((k+1)\times (k+1)\) constant matrices. They depend on the coefficients \((\beta _0,\beta _1)\) in the numerical fluxes (7) and (9).

In particular, as in [23], to reveal the superconvergence properties of DDG methods at Lobatto points, the basis functions \(\{\phi _j^l\}\) are chosen to be the Lagrange polynomials based on the following \(k+1\) shifted Lobatto points in the cell \(I_j\):

$$\begin{aligned} x^l_{j}=x_j+\frac{\zeta _l}{2}\Delta {x},\quad l=1,\ 2, \ \cdots ,\ k+1, \end{aligned}$$

where \(\{\zeta _l\}_{l=1}^{k+1}\) are the roots of the polynomial \(\left( 1-x^2 \right) {P_k}'\left( x \right) =0\) with \(P_k(x)\) being the Legendre polynomial of degree k. With such a basis, the coefficient vector \({\textbf{u}}_j\) of the solution \(u_h\) in the cell \(I_j\) has length \(k + 1\) and contains the values of the solution at the shifted Lobatto points in \(I_j\). In this way, the DDG schemes (6) and (8) become finite difference schemes. However, they are not standard finite difference schemes, since each of the \(k + 1\) points belonging to the cell \(I_j\) obeys a different update formula. We refer to [23] for the explicit expressions of the matrices \(A,\,B\), and C in (12) for the DDGIC and symmetric DDG methods with \(P^2\) and \(P^3\) approximations.
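For reference, the shifted Lobatto points can be generated directly from this definition; a minimal Python sketch, where the helper names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def lobatto_points(k):
    """The k+1 roots of (1 - x^2) P_k'(x) on [-1, 1]."""
    Pk = legendre.Legendre.basis(k)                 # Legendre polynomial P_k
    interior = Pk.deriv().roots()                   # the k-1 roots of P_k'
    return np.sort(np.concatenate(([-1.0], interior, [1.0])))

def shifted_lobatto_points(xj, dx, k):
    """The points x_j^l = x_j + zeta_l * dx / 2 inside the cell I_j."""
    return xj + 0.5 * dx * lobatto_points(k)

print(lobatto_points(2))   # [-1, 0, 1]
print(lobatto_points(3))   # [-1, -1/sqrt(5), 1/sqrt(5), 1]
```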

Now we apply the standard Fourier analysis technique to solve (12). It is worth mentioning that this analysis depends heavily on the assumptions of a uniform mesh and periodic boundary conditions. Assume

$$\begin{aligned} \varvec{u}_j(t) = \hat{\textbf{u}}(t) \hbox{e}^{\textrm{i}x_j}, \end{aligned}$$
(13)

where \(\textrm{i}\) is the imaginary unit satisfying \(\textrm{i}^2=-1\). It follows from substituting (13) into (12) that the coefficient vector \(\hat{\textbf{u}}\) satisfies the following ODE system:

$$\begin{aligned} \frac{\hbox{d}}{\hbox{d}t}\hat{\textbf{u}}(t)=G(\Delta {x})\hat{\textbf{u}}(t), \end{aligned}$$
(14)

where \(G(\Delta {x})\) is the amplification matrix, given by

$$\begin{aligned} G(\Delta {x})=A\hbox{e}^{-\textrm{i}\Delta {x}}+B+C\hbox{e}^{\textrm{i}\Delta {x}} \end{aligned}$$
(15)

with the matrices AB, C defined in (12). If we denote the eigenvalues of G as \(\lambda _{1},\lambda _2, \cdots , \lambda _{k+1}\), and the corresponding eigenvectors as \({\tilde{V}}_{1}, {\tilde{V}}_{2},\cdots , {\tilde{V}}_{k+1}\), then the general solution of the ODE system (14) is

$$\begin{aligned} \hat{\textbf{u}}(t)=a_1\hbox{e}^{\lambda _1t}{\tilde{V}}_{1}+a_2\hbox{e}^{\lambda _2t} {\tilde{V}}_{2}+\cdots +a_{k+1}\hbox{e}^{\lambda _{k+1}t}{\tilde{V}}_{k+1}, \end{aligned}$$
(16)

where the coefficients \(a_1,a_2,\cdots ,a_{k+1}\) are determined by the initial condition

$$\begin{aligned} \hat{\textbf{u}}(0)=\left( \hbox{e}^{\textrm{i}\zeta _1\Delta {x}/2},\hbox{e}^{\textrm{i}\zeta _2\Delta {x}/2},\cdots ,\hbox{e}^{\textrm{i}\zeta _{k+1}\Delta {x}/2}\right)^{\rm T}. \end{aligned}$$

Thus, the explicit expression of the coefficient vector can be written as

$$\begin{aligned} \hat{\textbf{u}}(t)=\hbox{e}^{\lambda _1t}V_1+\hbox{e}^{\lambda _2t}V_2+\cdots +\hbox{e}^{\lambda _{k+1}t}V_{k+1} \end{aligned}$$
(17)

by letting \(V_l\) = \(a_l\tilde{V_l}\), which, combined with (13), yields the explicit expression of the DDG solution \(\textbf{u}_j\) and further allows the error estimate to be carried out by comparison with the exact solution.
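The whole procedure can be summarized in a few lines of Python. The matrices `A`, `B`, `C` from (12) are assumed to be given (their explicit entries for the DDGIC and symmetric DDG methods are listed in [23]), so the sketch below is a template rather than a complete implementation.

```python
import numpy as np

def eigen_expansion(A, B, C, dx, u0_hat):
    """Return (lam, V) such that u_hat(t) = sum_l exp(lam[l] * t) * V[:, l], cf. (16)-(17).

    A, B, C: (k+1) x (k+1) matrices of the finite difference form (12).
    u0_hat : initial coefficient vector u_hat(0).
    """
    G = A * np.exp(-1j * dx) + B + C * np.exp(1j * dx)   # amplification matrix (15)
    lam, Vtilde = np.linalg.eig(G)                       # eigenvalues and eigenvectors
    a = np.linalg.solve(Vtilde, u0_hat)                  # coefficients a_l from u_hat(0)
    return lam, Vtilde * a                               # scaled eigenvectors V_l = a_l * Vtilde_l

def u_hat(t, lam, V):
    """Evaluate u_hat(t) from the expansion (17)."""
    return V @ np.exp(lam * t)

# usage for P^2 (zeta = (-1, 0, 1)): u0_hat = np.exp(1j * np.array([-1.0, 0.0, 1.0]) * dx / 2)
```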

3.2 Eigen-structure of the Amplification Matrix G

In this section, we analyze the eigen-structure of the amplification matrix G defined in (15) for the DDGIC and symmetric DDG methods with the basis functions taken as the Lagrange polynomials based on the shifted Lobatto points. It is worth emphasizing that the amplification matrix G depends on the choice of basis functions in the DDG scheme. However, the eigenvalues of G are the same for all choices of basis functions, since DG methods are independent of the choice of basis, while the eigenvectors do change with the basis.

The amplification matrix G involves the matrices \(A,\ B\), and C defined in (12), which depend on the coefficients \((\beta _0,\beta _1)\) given in the numerical flux (7) for the DDGIC scheme and on the coefficients (\(\beta _0=\beta _{0u}+\beta _{0v}, \beta _1\)) in the numerical flux (9) for the symmetric DDG scheme. We investigate both coefficient settings \(\beta _1\ne \frac{1}{2k(k+1)}\) and \(\beta _1=\frac{1}{2k(k+1)}\) for \(P^2\) and \(P^3\) polynomial approximations. The coefficient settings used throughout this paper are listed in Table 1. It is worth mentioning that the results hold for other admissible coefficients. Moreover, it was shown in [23] that the errors of the DDG methods stay the same for different choices of \(\beta _0\), while the errors are sensitive to \(\beta _1\) for \(P^2\) approximations. In particular, the error is superconvergent with \(\beta _1=\frac{1}{12}\) (\(\beta _1=\frac{1}{2k(k+1)}\)) for \(P^2\) polynomial approximations, while the superconvergence property is not sensitive to the choice of \(\beta _1\) for the \(P^3\) case. We also refer to [6, 23, 38] for related studies regarding the dependence of the superconvergence property on \(\beta _1\) and its independence of \(\beta _0\).

Table 1 The coefficient settings (\(\beta _0,\beta _1\)) for the DDGIC and symmetric DDG methods

Proposition 1

(eigenvalues of G) Consider solving the model problem (1) with periodic boundary conditions and a uniform mesh using the DDGIC scheme (6) or the symmetric DDG scheme (8) with \(P^k \ (k=2,3)\) polynomial approximations, where the basis functions are taken as the Lagrange polynomials based on the shifted Lobatto points and the coefficients \((\beta _0,\beta _1)\) in the numerical fluxes are set as in Table 1. The amplification matrix G defined in (15) is diagonalizable with \(k+1\) distinct eigenvalues, denoted as \(\lambda _1,\cdots ,\lambda _{k+1}\), among which \(\lambda _1\) is the physically relevant eigenvalue, approximating \(-1\) with a dissipation error of order 2k, while the non-physically relevant eigenvalues \(\lambda _2, \cdots , \lambda _{k+1}\) are of order \(\frac{1}{{\Delta {x}}^2}\) with negative real coefficients.

Proof

We carry out symbolic computations via Mathematica, and list the eigenvalues of G for the DDGIC and symmetric DDG methods in Tables 2 and 3, respectively.

Table 2 Symbolic analysis of G’s eigenvalues for the DDGIC method
Table 3 Symbolic analysis of G’s eigenvalues for the symmetric DDG method

It follows from the results that for \(k=2,3\), the eigenvalues of the amplification matrix G of both DDG methods satisfy

$$\begin{aligned} \lambda _1=-1+{\mathcal {O}}({\Delta {x}}^{2k}), ~~\lambda _l=-\frac{C}{{\Delta {x}}^2}+{\mathcal {O}}(1), ~~l=2, \cdots , k+1 \end{aligned}$$

with both settings of \(\beta _1=\frac{1}{2k(k+1)}\) and \(\beta _1\ne \frac{1}{2k(k+1)}\) in numerical fluxes. Here C is a positive constant independent of \(\Delta {x}\).

According to Proposition 1, the non-physically relevant eigenvalues \(\{\lambda _l\}_{l=2}^{k+1}\) are real, negative, and of order \(\frac{1}{\Delta {x}^{2}}\). Therefore, the corresponding terms in the explicit representation (17) are damped out exponentially with respect to \(\Delta {x}\) over time, while the term with the physically relevant eigenvalue \(\lambda _1\) dominates the numerical solution. The symbolic computation shows that the eigenvalues of the amplification matrices G of both DDG methods are real for \(k = 2, 3\), although a rigorous proof of this fact appears difficult.
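The orders claimed in Proposition 1 can also be confirmed numerically by measuring \(|\lambda _1+1|\) on a sequence of meshes; in the sketch below, `build_ABC(k, dx, beta0, beta1)` is a hypothetical user-supplied routine returning the matrices of (12) (whose entries are given in [23]).

```python
import numpy as np

def physical_eigenvalue(A, B, C, dx):
    """The eigenvalue of G(dx) closest to the analytical value -1."""
    G = A * np.exp(-1j * dx) + B + C * np.exp(1j * dx)
    lam = np.linalg.eigvals(G)
    return lam[np.argmin(np.abs(lam + 1.0))]

def dissipation_order(build_ABC, k, beta0, beta1, Ns=(20, 40, 80, 160)):
    """Fit the order of |lambda_1 + 1|; Proposition 1 predicts 2k."""
    dxs = np.array([2.0 * np.pi / N for N in Ns])
    errs = np.array([abs(physical_eigenvalue(*build_ABC(k, dx, beta0, beta1), dx) + 1.0)
                     for dx in dxs])
    return np.polyfit(np.log(dxs), np.log(errs), 1)[0]   # log-log slope
```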

Proposition 2

(eigenvectors of G) With the same assumptions as in Proposition 1, denote the \(k+1\) eigenvectors of G as \(V_1, V_2, \cdots , V_{k+1}\). Let \(\Vert \cdot \Vert\) be any vector norm. Then,

  • for \(P^2\) approximations with \(\beta _1= \frac{1}{2k(k+1)}=\frac{1}{12}\), and for \(P^3\) approximations with any admissible \(\beta _1\),

    $$\begin{aligned} \Vert V_1-\hat{\textbf{u}}(0) \Vert = {\mathcal {O}}({\Delta {x}}^{k+2}), \qquad \Vert V_{l} \Vert = {\mathcal {O}}({\Delta {x}}^{k+2}), ~~l=2, \cdots , k+1; \end{aligned}$$
  • for \(P^2\) approximations with \(\beta _1\ne \frac{1}{2k(k+1)}\),

    $$\begin{aligned} \Vert V_1-\hat{\textbf{u}}(0) \Vert = {\mathcal {O}}({\Delta {x}}^{k+1}), \qquad \Vert V_{l} \Vert = {\mathcal {O}}({\Delta {x}}^{k+1}), ~~l=2, \cdots , k+1. \end{aligned}$$

Proof

We carry out symbolic computations via Mathematica, and list the eigenvectors of G for the DDGIC and symmetric DDG methods with \(P^2\) polynomials in Table 4 and with \(P^3\) polynomials in Table 5, respectively. We obtain the following observations.

  • For \(P^2\) approximations with \(\beta _1= \frac{1}{2k(k+1)}=\frac{1}{12}\), and \(P^3\) approximations with both settings of \(\beta _1= \frac{1}{2k(k+1)}\) and \(\beta _1\ne \frac{1}{2k(k+1)}\), the eigenvector \(V_1\) corresponding to the physically relevant eigenvalue \(\lambda _1\) approximates \(\hat{\textbf{u}}(0)\) in (17) with order \(k+2\) at all the shifted Lobatto points. The non-physically relevant eigenvectors \(V_2, \cdots , V_{k+1}\) are of order at least \(k+2\) at all the shifted Lobatto points.

  • For \(P^2\) approximations with \(\beta _1\ne \frac{1}{2k(k+1)}\), the eigenvector \(V_1\) approximates \(\hat{\textbf{u}}(0)\) with order \(k+2\) at the cell center and with order \(k+1\) at the other two shifted Lobatto points. The eigenvectors \(V_2, \cdots , V_{k+1}\) are of order at least \(k+1\) at all three shifted Lobatto points.

These observations complete the proof.

Table 4 Symbolic analysis of G’s eigenvectors for \(P^2\) approximations
Table 5 Symbolic analysis of G’s eigenvectors for \(P^3\) approximations
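Similarly, the quantities appearing in Proposition 2 can be measured numerically; the sketch below again assumes the matrices from (12) and the initial vector \(\hat{\textbf{u}}(0)\) are supplied, and the helper name is ours.

```python
import numpy as np

def eigenvector_errors(A, B, C, dx, u0_hat):
    """Return ||V_1 - u_hat(0)|| and max_{l>=2} ||V_l||, cf. Proposition 2."""
    G = A * np.exp(-1j * dx) + B + C * np.exp(1j * dx)
    lam, Vtilde = np.linalg.eig(G)
    V = Vtilde * np.linalg.solve(Vtilde, u0_hat)      # V_l = a_l * Vtilde_l
    phys = np.argmin(np.abs(lam + 1.0))               # physically relevant mode
    others = [l for l in range(lam.size) if l != phys]
    return (np.linalg.norm(V[:, phys] - u0_hat),
            max(np.linalg.norm(V[:, l]) for l in others))
```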

3.3 Error Estimates Based on the Eigen-structure of G

In this section, we carry out error estimates based on the eigen-structure of the amplification matrix G discussed in Sect. 3.2 and investigate superconvergence properties of the DDGIC and symmetric DDG methods.

Theorem 1

(error estimate) With the same assumptions as in Proposition 1, let \({\textbf{u}}(T)\) = \(\hat{ {\textbf{u}}}(0)\exp (\textrm{i}x_j-T )\) and \({\textbf{u}}_{h}(T)\) = \(\hat{ {\textbf{u}}}(T)\exp (\textrm{i}x_j )\) be the point values of the exact solution and the numerical solution at the shifted Lobatto points in the cell \(I_{j}\), respectively. For \(T>0\), the error vector \(\varvec{\varepsilon }(T)\) = \({\textbf{u}}(T)-\textbf{u}_{h}(T)\) satisfies

  • for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and \(P^3\) approximations with any admissible \(\beta _1\),

    $$\begin{aligned} \Vert \varvec{\varepsilon }(T) \Vert \leqslant C_{1} T {\Delta {x}}^{2k} + C_{2}\exp (-T){\Delta {x}}^{k+2} + C_{3}\exp \left( -\frac{CT}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+2}; \end{aligned}$$
    (18)
  • for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\),

    $$\begin{aligned} \Vert \varvec{\varepsilon }(T) \Vert \leqslant C_{1} T {\Delta {x}}^{2k} + C_{2}\exp (-T){\Delta {x}}^{k+1} + C_{3}\exp \left( -\frac{CT}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+1}. \end{aligned}$$
    (19)

Here C, \(C_{1}\), \(C_{2}\), and \(C_{3}\) are positive constants independent of \(\Delta {x}\), and \(\Vert \cdot \Vert\) can be any vector norm.

Proof

It follows from (17) that \(\hat{\textbf{u}}(0)=\sum \nolimits _{l=1}^{k+1} V_l\) and

$$\begin{aligned} \begin{aligned} \Vert \varvec{\varepsilon }(T) \Vert&= \Vert {\textbf{u}}(T)-{\textbf{u}}_{h}(T) \Vert \\&=\Vert \exp (-T)\hat{ {\textbf{u}}}(0)-\hat{ {\textbf{u}}}(T) \Vert \\&= \left\Vert \exp (-T) \sum \limits _{l=1}^{k+1} V_l-\sum \limits _{l=1}^{k+1} \exp (\lambda _l T)V_l \right\Vert \\&\leqslant \Vert \left( \exp (-T)-\exp (\lambda _1 T)\right) V_1 \Vert +|\exp (-T) |\left\Vert \sum \limits _{l=2}^{k+1} V_l \right\Vert +\sum \limits _{l=2}^{k+1}\Vert \exp (\lambda _l T)V_l\Vert \\&\leqslant |(\exp (-T)-\exp (\lambda _1 T)) |\Vert V_1 \Vert + \exp (-T)\Vert \hat{ {\textbf{u}}}(0)-V_1 \Vert +\sum \limits _{l=2}^{k+1} |\exp (\lambda _l T) |\Vert V_l\Vert , \end{aligned} \end{aligned}$$

which completes the proof by combining Propositions 1 and 2 with the fact that \(\Vert V_1 \Vert\) is of order 1 (a consequence of Proposition 2).

It can be seen from (18) and (19) that, under the assumption of a uniform mesh, the errors of the DDGIC and symmetric DDG solutions for the model problem (1) can be decomposed into three parts.

  (i) Dissipation error of the physically relevant eigenvalue. This part of the error grows linearly in time and is superconvergent of order 2k.

  (ii) Projection error \(\Vert \textbf{u}^*-\textbf{u} \Vert\), where \(\textbf{u}^*\) is a special projection of the solution, defined by

    $$\begin{aligned} \textbf{u}^*(T)|_{I_j}=P_h^* \textbf{u}(T)|_{I_j}=\exp (\textrm{i}x_j-T)V_1,\quad j=1,\cdots ,N. \end{aligned}$$
    (20)

    This part of the error is closely related to \(\Vert V_1 - \hat{ \textbf{u}}(0) \Vert\) via

    $$\begin{aligned} \Vert \textbf{u}^*-\textbf{u} \Vert = \Vert \exp (\textrm{i}x_j-T)V_1 - \hat{ {\textbf{u}}}(0)\exp (\textrm{i}x_j-T ) \Vert =\exp (-T)\Vert \hat{ {\textbf{u}}}(0)- V_1 \Vert , \end{aligned}$$

    and decreases in time. It follows from Proposition 2 that such a projection approximates the exact solution at the shifted Lobatto points with the superconvergent order \(k+2\) for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and for \(P^3\) approximations with any admissible \(\beta _1\), and with the optimal order \(k+1\) for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\).

  (iii) Dissipation errors of the non-physically relevant eigenvalues. This part of the error decays exponentially with respect to \(\Delta {x}\) over time.

Moreover, the numerical solution is much closer to the special projection of the exact solution (\(\Vert \textbf{u}^*-\textbf{u}_h \Vert ={\mathcal {O}}(\Delta {x}^{2k})\)) than to the exact solution itself. In fact, similar to the proof of Theorem 1, we have

$$\begin{aligned} \begin{aligned} \Vert \textbf{u}^*-\textbf{u}_h \Vert&= \Vert \exp (\textrm{i}x_j-T)V_1 - \hat{ {\textbf{u}}}(T)\exp (\textrm{i}x_j ) \Vert \\&= \left\Vert \exp (-T)V_1 - \sum \limits _{l=1}^{k+1} \exp (\lambda _l T)V_l \right\Vert \\&\leqslant \Vert (\exp (-T)-\exp (\lambda _1 T))V_1 \Vert + \sum \limits _{l=2}^{k+1}\Vert \exp (\lambda _l T)V_l\Vert \\&\leqslant C_{1} T {\Delta {x}}^{2k} + C_{2}\exp \left( -\frac{CT}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+2}, \end{aligned} \end{aligned}$$

where C, \(C_{1}\), and \(C_{2}\) are positive constants independent of \(\Delta {x}\). It is worth mentioning that this paper focuses on the eigenvector analysis of this special projection, and its analytical form is subject to future investigation.
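This observation can be checked directly from the eigen-expansion; a small sketch, assuming `lam` and `V` have been computed as in Sect. 3.1 (the factor \(\exp (\textrm{i}x_j)\), common to all terms, is dropped).

```python
import numpy as np

def error_split(lam, V, u0_hat, T):
    """Compare ||u - u_h|| and ||u* - u_h|| at time T within one cell."""
    phys = np.argmin(np.abs(lam + 1.0))      # physically relevant mode
    uh = V @ np.exp(lam * T)                 # numerical solution values, cf. (17)
    u = u0_hat * np.exp(-T)                  # exact solution values
    ustar = V[:, phys] * np.exp(-T)          # special projection (20)
    return np.linalg.norm(u - uh), np.linalg.norm(ustar - uh)
```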

We now investigate the time evolution of the error between the DDG solutions and the exact solution based on the error estimates in Theorem 1.

  • For short times T, the second terms in (18) and (19), which are related to the projection error, dominate. The error \(\Vert \varvec{\varepsilon }\Vert\) decreases in time at the rate \(\hbox{e}^{-T}\) and is superconvergent of order \(k+2\) for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and \(P^3\) approximations with any admissible \(\beta _1\), while it is optimal of order \(k+1\) for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\).

  • As T increases to \({\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-2}}\right)\) for \(P^2\) approximations with \(\beta _1=\frac{1}{12}\) and \(P^3\) approximations with any admissible \(\beta _1\), or to \({\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-1}}\right)\) (longer time simulation) for \(P^2\) approximations with \(\beta _1\ne \frac{1}{12}\), the first terms in (18) and (19) dominate. The error \(\Vert \varvec{\varepsilon }\Vert\) grows linearly in time and is superconvergent of order 2k.

It is usually challenging to check the long-time behaviour of the DDG solutions numerically. We propose the following corollary as a way to numerically assess the theoretical results above.

Corollary 1

Let \({\textbf{u}}_{h}\) be the numerical solution obtained by the DDGIC or symmetric DDG method with \(P^k\ (k=2,3)\) approximations on a uniform mesh for the model problem (1). Let \(T\geqslant t>0\), and denote \(\tilde{\varvec{\varepsilon }}(T;\,t)={\textbf{u}}_{h}(T)-\textbf{u}_{h}(t)\exp (-(T-t))\). Then,

$$\begin{aligned} \Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert =\Vert \textbf{u}_{h}(T)-{\textbf{u}}_{h}(t)\exp (-(T-t)) \Vert \leqslant C_{1} (T-t) {\Delta {x}}^{2k} + C_{2}\exp \left( -\frac{Ct}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+1}, \end{aligned}$$
(21)

where \(C_{1}\) and \(C_{2}\) are positive constants independent of \(\Delta {x}\).

Proof

It follows from the explicit expression of the numerical solution in (13) with (17) as well as Propositions 1 and 2 that

$$\begin{aligned} \begin{aligned}&\Vert {\textbf{u}}_{h}(T)-{\textbf{u}}_{h}(t)\exp (-(T-t)) \Vert \\& =\left \Vert \sum \limits _{l=1}^{k+1} \exp (\lambda _l T)V_l-\sum \limits _{l=1}^{k+1} \exp (\lambda _l t-(T-t))V_l \right \Vert \\& \leqslant |\exp (\lambda _1 T)-\exp (\lambda _1 t-(T-t)) |\Vert V_1 \Vert +\sum \limits _{l=2}^{k+1} |\exp (\lambda _l T)-\exp (\lambda _l t-(T-t)) |\Vert V_l\Vert \\& \leqslant |\exp (\lambda _1 (T-t))-\exp (-(T-t)) ||\exp (\lambda _1 t) |\Vert V_1\Vert + C_{2}\exp \left( -\frac{Ct}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+1} \\& \leqslant C_{1} (T-t) {\Delta {x}}^{2k} + C_{2}\exp \left( -\frac{Ct}{{\Delta {x}}^{2}}\right) {\Delta {x}}^{k+1}, \end{aligned} \end{aligned}$$

where \(C_{1}\) and \(C_{2}\) are positive constants independent of \(\Delta {x}\). Again, we have applied the fact that \(\Vert V_1\Vert\) is of order 1 and \(\{\lambda _l\}_{l = 2}^{k+1}\) are negative real with order \(\frac{1}{\Delta {x}^2}\).

It is worth emphasizing that (21) holds for \(P^k\ (k=2,3)\) with any admissible \(\beta _1\), since only the optimal order \(k+1\) is used for \(\Vert V_l\Vert ,\ l=2,\cdots ,k+1\). In fact, for \(t={\mathcal {O}}(1)\), the second term on the right-hand side of (21) decays exponentially with respect to \(\Delta {x}\). Once this term is damped out, the first term on the right-hand side of (21) dominates; it grows linearly with \((T-t)\) and is superconvergent of order 2k.
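In computations, \(\tilde{\varvec{\varepsilon }}(T;\,t)\) only requires storing the numerical solution at the two times; a minimal sketch, where `uh_T` and `uh_t` are assumed to hold the point values of \(u_h\) at times T and t on the same mesh.

```python
import numpy as np

def corollary_error(uh_T, uh_t, T, t):
    """Point values of epsilon_tilde(T; t) = u_h(T) - u_h(t) * exp(-(T - t)), cf. (21)."""
    return uh_T - uh_t * np.exp(-(T - t))

# any vector norm can then be applied, e.g.
# err = corollary_error(uh_T, uh_t, 3.0, 1.0)
# print(np.max(np.abs(err)))     # the L-infinity norm
```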

We end this section with the relation between the error \(\varvec{\varepsilon }(T)\) in Theorem 1 and \(\tilde{\varvec{\varepsilon }}(T;\,t)\) in Corollary 1. Recalling that the exact solution satisfies \({\textbf{u}}(T)\) = \(\hat{ \textbf{u}}(0)\exp (\textrm{i}x_j-T )\), we have

$$\begin{aligned} \begin{aligned} \Vert \varvec{\varepsilon }(T) \Vert&= \Vert {\textbf{u}}(T) - {\textbf{u}}_{h}(T) \Vert \\&= \Vert {\textbf{u}}(t)\exp (-(T-t)) - {\textbf{u}}_{h}(T) \Vert \\&\leqslant \Vert {\textbf{u}}_{h}(T)- {\textbf{u}}_{h}(t)\exp (-(T-t)) \Vert + |\exp (-(T-t)) |\Vert {\textbf{u}}(t)- {\textbf{u}}_{h}(t) \Vert \\&\leqslant \Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert + \Vert \varvec{\varepsilon }(t) \Vert . \end{aligned} \end{aligned}$$

For \(t={\mathcal {O}}(1)\), \(\Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert\) grows linearly with T and is of order 2k by Corollary 1. According to Theorem 1, \(\Vert \varvec{\varepsilon }(t) \Vert\) is superconvergent of order \(k+2\) for \(P^2\) case with \(\beta _1=\frac{1}{12}\) and \(P^3\) case with any admissible \(\beta _1\), while it is optimal of order \(k+1\) for \(P^2\) case with \(\beta _1\ne \frac{1}{12}\). We conclude that \(\Vert \varvec{\varepsilon }(T) \Vert\) will not grow in time until \(T={\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-2}}\right)\) for \(P^2\) case with \(\beta _1=\frac{1}{12}\) and \(P^3\) case with any admissible \(\beta _1\) and until \(T={\mathcal {O}}\left( \frac{1}{{\Delta {x}}^{k-1}}\right)\) for \(P^2\) case with \(\beta _1\ne \frac{1}{12}\).

4 Numerical Results

In this section, we provide numerical experiments to demonstrate the theoretical results presented in Sect. 3.

We numerically solve (1) with both the DDGIC and symmetric DDG methods for the spatial discretization. For the temporal discretization, we apply the third-order strong-stability-preserving (SSP) Runge-Kutta (RK) method [26] for Example 1 and the classical fourth-order RK method for Example 2. To make the temporal error negligible compared with the spatial error, we take \(CFL = 0.001\) and set \({\Delta t} = CFL\,{\Delta {x}}^2\). We investigate different settings of the coefficients \((\beta _0,\beta _1)\) in the numerical fluxes for \(P^2\) and \(P^3\) polynomials. The coefficients \((\beta _0,\beta _1)\) used in the numerical experiments are given in Table 1 for both examples.
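For completeness, a minimal sketch of the third-order SSP RK step (in the Shu-Osher form of [26]) applied to the semi-discrete system \(\frac{\hbox{d}\textbf{u}}{\hbox{d}t}=L(\textbf{u})\) is given below; the spatial operator `L` (the assembled DDG right-hand side) and the numpy solution vector are assumptions of the sketch.

```python
def ssp_rk3_step(u, L, dt):
    """One step of the third-order SSP Runge-Kutta method (Shu-Osher form)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

def advance(u0, L, dx, T, cfl=0.001):
    """March the solution to time T with dt = cfl * dx**2, as in the experiments."""
    u, t, dt = u0.copy(), 0.0, cfl * dx ** 2
    while t < T:
        step = min(dt, T - t)          # do not overshoot the final time
        u = ssp_rk3_step(u, L, step)
        t += step
    return u
```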

Example 1

This example concerns the model problem (1). We examine two types of error measures. One is \(\varvec{\varepsilon }(T) = {\textbf{u}}(T)-{\textbf{u}}_{h}(T)\), i.e., the regular error between the numerical solution and the exact solution. The other is \(\tilde{\varvec{\varepsilon }}(T;\,t)= {\textbf{u}}_{h}(T)-\textbf{u}_{h}(t)\exp (-(T-t))\) as discussed in Corollary 1. In this paper, we do not show the errors \(\varvec{\varepsilon }(T)\) at the shifted Lobatto points, as they have been well documented in [23]. Instead, we show the time evolution of the regular errors \(\varvec{\varepsilon }(T)\) over a short time interval. We use a uniform mesh of 40 cells for the \(P^2\) approximation and 20 cells for the \(P^3\) approximation. Figure 1 plots the time evolution of the \(L^2\)-norm of \(\varvec{\varepsilon }(T)\) for \(T\in [2,20]\) in semi-log scale. It can be observed that the errors decay exponentially in time T, as expected from Theorem 1: the dominating term in \(\varvec{\varepsilon }(T)\) for short times is the projection error, which decreases at the rate \(\text{e}^{-T}\).

Fig. 1

Time evolution of the \(L^2\)-norm of \(\varvec{\varepsilon }(T)\) for the DDGIC (left) and symmetric DDG (right) methods. 40 cells for the \(P^2\) case and 20 cells for the \(P^3\) case

We then investigate the error measure \(\tilde{\varvec{\varepsilon }}(T;\,t)\) as an alternative way to check the long-time behaviour of the DDG solutions, as discussed in Corollary 1. We list the \(L^2\)- and \(L^\infty\)-norms of the errors \(\tilde{\varvec{\varepsilon }}(2;\,1)\), \(\tilde{\varvec{\varepsilon }}(3;\,2)\), and \(\tilde{\varvec{\varepsilon }}(3;\,1)\) and their orders of accuracy in Tables 6, 7, 8, and 9 for the DDGIC and symmetric DDG methods with \(P^2\) and \(P^3\) approximations. Again, different coefficients in the numerical fluxes are considered. It can be observed that both DDG solutions achieve the 2k-th order of accuracy in the error measure \(\Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert\) for \(P^k\ (k=2,3)\) approximations with both settings \(\beta _1 \ne \frac{1}{2k(k+1)}\) and \(\beta _1 = \frac{1}{2k(k+1)}\) in the numerical fluxes, as expected from Corollary 1. It is also observed that \(\Vert \tilde{\varvec{\varepsilon }}(3;1)\Vert \approx 2 \,\Vert \tilde{\varvec{\varepsilon }}(3; 2)\Vert\), which is consistent with Corollary 1, since the dominating term of \(\Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert\) in (21) grows linearly with \(T-t\) for \(t={\mathcal {O}}(1)\). Moreover, a simple check shows that \(\text{e}^{-1}\Vert \tilde{\varvec{\varepsilon }}(2;\,1)\Vert \approx \Vert \tilde{\varvec{\varepsilon }}(3;\,2)\Vert\), which is consistent with the fact that the dominating term of \(\Vert \tilde{\varvec{\varepsilon }}(T;\,t)\Vert\) is

$$\begin{aligned} \exp (\lambda _1 t) (T-t) \Delta {x}^{2k}\approx\,\hbox{e}^{-t}(T-t)\Delta {x}^{2k}, \end{aligned}$$

according to the proof of Corollary 1.

Table 6 The \(L^2\)- and \(L^{\infty }\)-norms and orders of \(\tilde{\varvec{\varepsilon }}(T;\,t)\) for the DDGIC method with \(P^2\)
Table 7 The \(L^2\)-and \(L^{\infty }\)-norms and orders of \(\tilde{\varvec{\varepsilon}}(T;\,t)\) for the DDGIC method with \(P^3\)
Table 8 The \(L^2\)- and \(L^{\infty }\)-norms and orders of \(\tilde{\varvec{\varepsilon }}(T;\,t)\) for the symmetric DDG method with \(P^2\)
Table 9 The \(L^2\)- and \(L^{\infty }\)-norms and orders of \(\tilde{\varvec{\varepsilon }}(T;\,t)\) for the symmetric DDG method with \(P^3\)

We further compare the error measures \(\varvec{\varepsilon}(T)\) and \(\tilde{\varvec{\varepsilon}}(T;\,t)\). We obtain almost the same results for the numerical fluxes with \(\beta _1 \ne \frac{1}{2k(k+1)}\) and \(\beta _1=\frac{1}{2k(k+1)}\), and thus we only show the results obtained with \(\beta _1=\frac{1}{2k(k+1)}\). Figures 2 and 3 plot the point values of \(\varvec{\varepsilon }(2)\) and \(\tilde{\varvec{\varepsilon }}(2;1)\) for the DDGIC and symmetric DDG methods with piecewise \(P^2\) and \(P^3\) polynomials. We take 20 points (shifted Lobatto points included) in each cell. It can be observed that the regular errors \(\varvec{\varepsilon }(2)\) of the DDG solutions are highly oscillatory, while the errors \(\tilde{\varvec{\varepsilon }}(2;1)\) are non-oscillatory. Moreover, the magnitude of \(\tilde{\varvec{\varepsilon }}(2;1)\) is much smaller than that of \(\varvec{\varepsilon }(2)\).

Fig. 2

\(\varvec{\varepsilon }(2)\) (left) and \(\tilde{\varvec{\varepsilon }}(2;1)\) (right) for the DDGIC (top) and symmetric DDG (bottom) methods with \(P^2\) polynomials. The y-axis shows the errors in logarithmic scale

Fig. 3
figure 3

\(\varvec{\varepsilon }(2)\) (left) and \(\tilde{\varvec{\varepsilon }}(2;1)\) (right) for the DDGIC (top) and symmetric DDG (bottom) methods with \(P^3\) polynomials. The y-axis shows the errors in logarithmic scale

Example 2

This example considers the following convection-diffusion equation:

$$\begin{aligned} u_t+(\alpha u)_x=u_{xx}, \quad x \in [0,2\uppi ],\quad t>0 \end{aligned}$$
(22)

with the initial condition \(u(x,0)=\text{e}^{\sin (x)/2}\) and periodic boundary conditions, where the variable coefficient \(\alpha =1+\cos (x-t)/2\). The exact solution is

$$\begin{aligned} u=\text{e}^{\sin (x-t)/2}. \end{aligned}$$

For the convection term \((\alpha u)_x\), the numerical flux is taken to be the upwind flux. We examine the error measure \(\varvec{\varepsilon }(T) = {\textbf{u}}(T)-{\textbf{u}}_{h}(T)\) and list the \(L^2\)- and \(L^\infty\)-norms of \(\varvec{\varepsilon }(0.3)\) and their orders of accuracy in Tables 10 and 11 for the DDGIC and symmetric DDG methods with \(P^2\) and \(P^3\) approximations. For \(\beta _1=\frac{1}{2k(k+1)}\), the errors are superconvergent of order \((k + 2)\) for both the \(P^2\) and \(P^3\) cases. For \(\beta _1\ne \frac{1}{2k(k+1)}\), the errors are of order \((k + 1)\) for \(P^2\) approximations and of order \((k + 2)\) for \(P^3\) approximations. These results agree well with those in [6].
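Since the variable coefficient satisfies \(\alpha (x,t)=1+\cos (x-t)/2\geqslant \frac{1}{2}>0\), the wind blows from left to right everywhere, and a natural realization of the upwind flux at the interface \(x_{j+\frac{1}{2}}\) (the form we assume here; the precise implementation may differ) is

$$\begin{aligned} \widehat{\alpha u}\Big |_{j+\frac{1}{2}}=\alpha \left( x_{j+\frac{1}{2}},t\right) (u_h)^-_{j+\frac{1}{2}}. \end{aligned}$$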

Table 10 The \(L^2\)- and \(L^{\infty }\)-norms and orders of \(\varvec{\varepsilon }(0.3)\) for the DDGIC method
Table 11 The \(L^2\)- and \(L^{\infty }\)-norms and orders of \(\varvec{\varepsilon }(0.3)\) for the symmetric DDG method

5 Conclusion

In this paper, we discuss superconvergence properties of the DDGIC and symmetric DDG methods for the one-dimensional linear diffusion equation. Under the assumption of the uniform mesh and periodic boundary conditions, we carry out the Fourier analysis for both DDG methods with \(P^2\) and \(P^3\) polynomials. We also investigate different choices of the coefficient pairs \((\beta _0,\beta _1)\) in numerical fluxes.

We analyze the eigen-structure of the amplification matrices associated with the Lagrange basis functions based on shifted Lobatto points and conclude the following. (i) The eigenvalues are not sensitive to \(\beta _1\): the physically relevant eigenvalue approximates the value \(-1\) with a dissipation error of order 2k, while the non-physically relevant eigenvalues are real, negative, and of order \(\frac{1}{{\Delta {x}}^2}\), so the corresponding parts of the solution decay exponentially with respect to \(\Delta {x}\). (ii) The eigenvectors are sensitive to \(\beta _1\) in the \(P^2\) case: the eigenvector corresponding to the physically relevant eigenvalue approximates the wave function with the superconvergent order \(k+2\) for the \(P^2\) case with \(\beta _1=\frac{1}{12}\) and for the \(P^3\) case with any admissible \(\beta _1\).

Based on the eigen-structure analysis of the amplification matrices, we establish error estimates for the DDG solutions which can be decomposed into three parts: (i) the dissipation error of the physically relevant eigenvalue, which is superconvergent of order 2k and grows linearly in time (we also propose an error measure to verify this superconvergence numerically); (ii) the projection error, which is superconvergent of order \(k+2\) for \(P^2\) polynomials with \(\beta _1=\frac{1}{12}\) and for \(P^3\) polynomials with any admissible \(\beta _1\); (iii) the dissipation errors of the non-physically relevant eigenvalues, which decay exponentially with respect to \(\Delta {x}\).