1 Introduction

In this paper we consider methods for the numerical solution of hyperbolic conservation laws in one space dimension of the form

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + \frac{\partial }{\partial x} (f(u(x,t))) =0. \end{aligned}$$
(1)

Time-dependent hyperbolic differential equations and nonlinear conservation laws model many physical problems. Their numerical solution is therefore of great importance in fields such as meteorology, chemical engineering, aeronautics, astrophysics, financial modeling and environmental sciences. The main difficulty in the numerical solution of (1) is the appearance of shocks or discontinuities even if the initial condition is smooth. Discretization of the spatial derivatives with the method of lines (MOL) gives rise to a system of ordinary differential equations (ODEs). In recent years discontinuous Galerkin (DG) methods have become a popular choice for the spatial discretization. This approach, introduced by Reed and Hill [22], has been considered in various applications, e.g. [16, 17, 20, 21, 23, 28]. The ODE system generated by DG spatial discretizations is often solved in time by explicit Runge–Kutta (RK) methods. These methods, developed by Cockburn and Shu [5], are known as RKDG methods.

The time steps of RKDG methods must satisfy the Courant–Friedrichs–Lewy (CFL) condition. For hyperbolic differential equations there are two kinds of restrictions on the time step: first, linear stability to ensure convergence for smooth solutions and, second, some form of nonlinear stability, e.g. total variation (TV) stability, to prevent non-physical oscillations of the numerical solution around discontinuities or shocks. To achieve the latter, a so-called generalized slope limiter is applied, by which RKDG methods become total variation diminishing (TVD) or total variation bounded (TVB) in the means under a suitable CFL restriction [5]. The restriction on the time step that ensures TV stability is given by the so-called strong stability preserving (SSP) coefficient.

There are many investigations in the field of SSP RK methods aiming to optimize the SSP coefficient; an overview is given in [8]. However, Kubatko et al. [18] observed that for RK methods the condition for linear stability is more restrictive than the condition for nonlinear stability. This means that methods optimized with respect to the SSP property are, in general, not optimal for schemes resulting from DG spatial discretizations. Hence, in [18] SSP RK methods were constructed that are optimal for DG spatial discretizations, i.e. optimal with respect to both linear and SSP stability. In this paper we follow this approach for explicit peer methods. We develop new methods with favourable properties with respect to both linear and TV stability for DG spatial discretizations.

Explicit peer methods, introduced by Weiner et al. [29], are a special class of general linear methods (GLM), see [1, 12]. Constantinescu and Sandu [6] constructed optimal SSP GLM up to order four. Izzo and Jackiewicz [11] give SSP coefficients for GLM of order \(p\le 4\) and stage order \(q\le p\). Explicit peer methods are considered in several papers [2, 19, 25, 30]. For explicit peer methods the stage order is equal to the order of consistency. These methods produce excellent results, especially when applied to nonstiff ordinary differential equations with step size control. Horváth et al. [10] proved a sufficient condition for the SSP property of explicit peer methods and constructed SSP methods up to order 13.

The outline of the paper is as follows: In Sect. 2 we discuss the discretization of the spatial operator of a hyperbolic conservation law in one space dimension with the discontinuous Galerkin method. An overview of explicit peer methods is given in Sect. 3. We recall important properties such as consistency, zero stability and convergence. A sufficient condition for strong stability preserving explicit peer methods is given and the TVDM (TVD in the means) property for explicit peer methods is also stated. The construction of explicit peer methods optimized with respect to both linear and SSP stability is detailed in Sect. 4. In Sect. 5 numerical tests are presented. We apply the peer methods to a linear transport equation and the inviscid Burgers equation and compare the results with RK methods. We show numerically the advantage of high stage order. Finally, we draw some conclusions and discuss future work in Sect. 6.

The coefficients of the optimized explicit peer methods can be found in an Appendix.

2 Discontinuous Galerkin Spatial Discretization

We begin by considering a time-dependent hyperbolic conservation law in one dimension in space

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + \frac{\partial }{\partial x} (f(u(x,t))) =0,\quad x \in \Omega =(a,b)\subseteq \mathbb {R},\quad t>0 \end{aligned}$$
(2)

with periodic boundary conditions \(u(a,t)=u(b,t)\) and initial condition \(u_0(x)=u(x,0)\). The function f is called the flux function.

For the discontinuous Galerkin spatial discretization we first define a partition of \(\Omega \)

$$\begin{aligned} a=x_0< x_1< \cdots < x_N=b \end{aligned}$$

with

$$\begin{aligned} \Omega _j=[x_{j-1},x_j],\quad \Delta x_j =x_j - x_{j-1},\quad \text {for all }j=1,\ldots ,N. \end{aligned}$$

The weak formulation of problem (2) is obtained by multiplying with a sufficiently smooth test function \(v:\Omega \rightarrow \mathbb {R}\) and integrating over each \(\Omega _j\), so we have

$$\begin{aligned} \begin{aligned} 0&= \int _{\Omega _j} \frac{\partial }{\partial t} u(x,t) v(x)\mathrm {d}x + \int _{\Omega _j} \frac{\partial }{\partial x} (f(u(x,t))) v(x)\mathrm {d}x \\&= \int _{\Omega _j} \frac{\partial }{\partial t} u(x,t) v(x)\mathrm {d}x + {[f(u(x,t)) v(x)]}_{x_{j-1}}^{x_j} \\&\quad -\, \int _{\Omega _j} f(u(x,t)) \frac{\mathrm {d}}{\mathrm {d}x} v(x)\mathrm {d}x,\quad j=1,\ldots ,N. \end{aligned} \end{aligned}$$

The boundary flux terms are denoted by \(f_j(t):=f(u(x_j,t))\), which leads to

$$\begin{aligned} \begin{aligned} 0&= \int _{\Omega _j} \frac{\partial }{\partial t} u(x,t) v(x)\mathrm {d}x - \int _{\Omega _j} f(u(x,t)) \frac{\mathrm {d}}{\mathrm {d}x} v(x)\mathrm {d}x \\&\quad +\, f_j(t) v(x_j)- f_{j-1}(t) v(x_{j-1}),\quad j=1,\ldots ,N. \end{aligned} \end{aligned}$$
(3)

Second, we replace the exact solution u and the test function v in (3) by the discrete functions \(u_h \in V_h^k\) and \(v_h \in V_h^k\), where the finite-dimensional space of functions is given by

$$\begin{aligned} V_h^k := \{v:v|_{\Omega _j} \in \Pi _k(\Omega _j)\}. \end{aligned}$$

Here \(\Pi _k(\Omega _j)\) denotes the space of polynomials over \(\Omega _j\) of degree at most k. With this choice, continuity is not enforced at the boundaries of the elements \(\Omega _j\). We denote the left-hand value of \(v_h \in V_h^k\) at \(x_j\), taken from \(\Omega _j\), by \(v_h(x_j^-)\) and the right-hand value at \(x_j\), taken from \(\Omega _{j+1}\), by \(v_h(x_j^+)\), see Fig. 1. We replace the boundary fluxes \(f_j\) by numerical fluxes

$$\begin{aligned} \widehat{f}_j(t)&=\widehat{f}_j\left( u_h\left( x_j^-,t\right) ,u_h\left( x_j^+,t\right) \right) . \end{aligned}$$
(4)

These substitutions lead to the discrete weak formulation of problem (2) for all \(v_h \in V_h^k\) in the form

$$\begin{aligned} \begin{aligned} 0&= \int _{\Omega _j} \frac{\partial }{\partial t} u_h(x,t) v_h(x)\mathrm {d}x - \int _{\Omega _j} f(u_h(x,t)) \frac{\mathrm {d}}{\mathrm {d}x} v_h(x)\mathrm {d}x +\widehat{f_j}(t) v_h(x_j^{-}) \\&\quad -\widehat{f}_{j-1}(t) v_h(x_{j-1}^{+}),\quad j=1,\ldots ,N. \end{aligned} \end{aligned}$$
(5)
Fig. 1

Illustration of discontinuities of \(v_h \in V_h^k\) at boundaries of \(\Omega _j\)

Let a set of basis functions \(\Phi =(\varphi _{0,j},\ldots ,\varphi _{k,j})^{\top }\) of the finite-dimensional space \(V_h^k\) over \(\Omega _j\) be given. The discrete solution \(u_h\) over \(\Omega _j\) can be expressed as

$$\begin{aligned} u_h|_{\Omega _j} = \sum _{i=0}^k y_{i,j}(t) \varphi _{i,j}(x). \end{aligned}$$

With \(\mathbf {y}_j =(y_{0,j}(t),\ldots ,y_{k,j}(t))^{\top }\) and the notation

$$\begin{aligned} F_j(\varphi _{i,j}) = \int _{\Omega _j} f(u_h(x,t)) \frac{\mathrm {d}}{\mathrm {d}x} \varphi _{i,j}(x)\mathrm {d}x -\widehat{f}_j(t) \varphi _{i,j}\left( x_j^{-}\right) +\widehat{f}_{j-1}(t) \varphi _{i,j}\left( x_{j-1}^{+}\right) \end{aligned}$$

and

$$\begin{aligned} \mathbf {M}_j = (M_{il})_{i,l=0}^k = \left( \int _{\Omega _j}\!\varphi _{i,j}(x) \varphi _{l,j}(x)\mathrm {d}x\right) _{i,l=0}^k,\quad \mathbf {F}_j = ( F_j(\varphi _{0,j}),\ldots ,F_j(\varphi _{k,j}))^{\top } \end{aligned}$$

the discrete weak formulation (5) can be written as

$$\begin{aligned} \mathbf {M}_j \frac{\mathrm {d}}{\mathrm {d}t} \mathbf {y}_j = \mathbf {F}_j,\quad j=1,\ldots ,N. \end{aligned}$$

The DG approach is also applied to the initial condition. By inverting the element mass matrices \(\mathbf {M}_j\) we obtain with

$$\begin{aligned} \mathbf {y} = \left( \mathbf {y}_1^{\top },\ldots ,\mathbf {y}_N^{\top }\right) ^{\top },\quad \mathbf {L} = \left[ \left( \mathbf {M}_1^{-1} \mathbf {F}_1\right) ^{\top },\ldots ,\left( \mathbf {M}_N^{-1} \mathbf {F}_N\right) ^{\top }\right] ^{\top } \end{aligned}$$

a system of ordinary differential equations in the form

$$\begin{aligned} \begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \mathbf {y}&= \mathbf {L}(\mathbf {y}) \\ \mathbf {y}(0)&= \mathbf {y}_0 \in \mathbb {R}^{(k+1)N}, \end{aligned} \end{aligned}$$
(6)

where \(\mathbf {L}\) is the DG spatial operator. In the case of a linear flux function \(f(u)=cu,\ c\in \mathbb {R}\), problem (6) becomes a linear initial value problem, i.e. we have \(\mathbf {L}(\mathbf {y}) =\mathbf {L}\mathbf {y}\). In the linear case the integrals appearing in the DG approach can be evaluated exactly. Otherwise, they can be computed by suitable numerical quadrature rules.
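To make the semidiscretization concrete, the following sketch assembles the matrix \(\mathbf {L}\) for the linear flux \(f(u)=cu\) with \(c\ge 0\), an upwind flux and a uniform periodic mesh, using the Legendre polynomials \(P_0,\ldots ,P_k\) as basis on each element. This is a minimal illustration in Python, not the authors' Matlab code; the function name dg_linear_advection_operator is ours.

```python
import numpy as np

def dg_linear_advection_operator(N, k, c=1.0, dx=2.0 * np.pi / 50):
    """Assemble L in y' = L y for u_t + c u_x = 0 (c >= 0): DG(k+1) with
    Legendre basis P_0,...,P_k per element, upwind flux, periodic mesh."""
    # element mass matrix: int_{Omega_j} P_i P_l dx = dx/(2i+1) * delta_{il}
    Minv = np.diag((2.0 * np.arange(k + 1) + 1.0) / dx)
    # S[i, l] = int_{-1}^{1} P_l(xi) P_i'(xi) dxi = 2 if l < i and i + l odd
    S = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        for l in range(i):
            if (i + l) % 2 == 1:
                S[i, l] = 2.0
    right = np.ones(k + 1)                      # P_i(+1) = 1
    left = (-1.0) ** np.arange(k + 1)           # P_i(-1) = (-1)^i
    # contribution of the element itself and of its left neighbour (upwind)
    D_own = c * Minv @ (S - np.outer(right, right))
    D_nbr = c * Minv @ np.outer(left, right)
    n = (k + 1) * N
    L = np.zeros((n, n))
    for j in range(N):
        rj = slice(j * (k + 1), (j + 1) * (k + 1))
        cj = slice(((j - 1) % N) * (k + 1), ((j - 1) % N) * (k + 1) + k + 1)
        L[rj, rj] = D_own
        L[rj, cj] = D_nbr
    return L
```

The eigenvalues of the assembled matrix, np.linalg.eigvals(L), can serve as the eigenvalue set used for the optimization in Sect. 4.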

3 Strong Stability Preserving Explicit Peer Methods

3.1 Explicit Peer Methods

We consider a system of ODEs in the form

$$\begin{aligned} y^{\prime } = f(t,y),\quad y(t_0)=y_0 \in \mathbb {R}^n,\quad t \in [t_0,t_e]. \end{aligned}$$
(7)

Explicit peer methods for an initial value problem (7) as introduced in [29] are given by

$$\begin{aligned} \begin{aligned} U_{m,i}&=\sum _{j=1}^s b_{ij}U_{m{-}1,j}+ h_m\sum _{j=1}^s a_{ij}f(t_{m{-}1,j},U_{m{-}1,j})\\&\quad +\, h_m \sum _{j=1}^{i-1} r_{ij}f(t_{m,j},U_{m,j}),\quad i=1,\dots ,s. \end{aligned} \end{aligned}$$
(8)

Here \(b_{ij}\), \(a_{ij}\), \(r_{ij}\) and the nodes \(c_i\), \(i,j=1,\dots , s\), are the parameters of the method. In each step s stage values \(U_{m,i}\), \(i=1,\dots ,s\), are computed. They approximate the exact solution \(y(t_{m,i})\), where \(t_{m,i}=t_m+c_i h_m\). The nodes \(c_i\) are assumed to be pairwise distinct and we always assume \(c_s=1\). The coefficients of the method (8) depend, in general, on the step size ratio \(\sigma _m=h_m/h_{m-1}\). Defining matrices \(B_m=(b_{ij})_{i,j=1}^s\), \(A_m=(a_{ij})\), \(R_m=(r_{ij})\) and vectors \(U_m=(U_{m,i})_{i=1}^s\in \mathbb {R}^{sn}\) and \(F_m=(f(t_{m,i},U_{m,i}))_{i=1}^s\) leads to the compact form

$$\begin{aligned} U_m=(B_m\otimes I)U_{m-1}+h_m(A_m\otimes I)F_{m-1}+h_m(R_m\otimes I) F_m, \end{aligned}$$

where \(R_m\) is strictly lower triangular. Like multistep methods, peer methods also need s starting values \(U_{0,i}\). A sketch of one step in this compact form is given below; afterwards we collect some results from [29].
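Since \(R_m\) is strictly lower triangular, the stages can be computed one after another; for constant step size one step amounts to a few lines of linear algebra. A minimal sketch (our naming, not the authors' code):

```python
import numpy as np

def peer_step(U_prev, F_prev, f, t_m, c, h, A, B, R):
    """One step of an explicit peer method (8) with constant step size h.
    U_prev, F_prev: arrays of shape (s, n) holding the previous stages
    U_{m-1,i} and their function values; returns the new stages and values."""
    s = U_prev.shape[0]
    U_new = np.empty_like(U_prev)
    F_new = np.empty_like(F_prev)
    for i in range(s):                  # R is strictly lower triangular
        U_new[i] = (B[i] @ U_prev + h * (A[i] @ F_prev)
                    + h * (R[i, :i] @ F_new[:i]))
        F_new[i] = f(t_m + c[i] * h, U_new[i])
    return U_new, F_new
```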

Conditions for the order of consistency of explicit peer methods can be derived by considering the residuals \(\Delta _{m,i}\) obtained when the exact solution is put into the method

$$\begin{aligned} \begin{aligned} \Delta _{m,i}&:=y(t_{m,i})-\sum _{j=1}^sb_{ij}y(t_{m-1,j}) -h_m\sum _{j=1}^s a_{ij}y'(t_{m-1,j})\\&\quad -\,h_m\sum _{j=1}^{i-1}r_{ij}y'(t_{m,j}),\quad i=1,\dots ,s. \end{aligned} \end{aligned}$$

Definition 1

A peer method (8) is consistent of order p if

$$\begin{aligned} \Delta _{m,i}=\mathcal {O}\left( h_m^{p+1}\right) ,\quad i=1,\dots ,s. \end{aligned}$$

\(\square \)

Note that the stage order of peer methods is equal to the order of consistency, so that order reduction is avoided. Taylor series expansion yields, see e.g. [29], the following result.

Theorem 1

A peer method (8) has order of consistency p iff

$$\begin{aligned} \begin{aligned} AB_i(l)&:= c_{i}^{l}-\sum _{j=1}^{s}b_{ij}\frac{(c_{j}-1)^{l}}{\sigma _m^{l}}-l\sum _{j=1}^{s}a_{ij} \frac{(c_{j}-1)^{l-1}}{\sigma _m^{l-1}}-l\sum _{j=1}^{i-1}r_{ij}c_{j}^{l-1}=0, \quad i=1,\ldots ,s \end{aligned} \end{aligned}$$
(9)

is satisfied for all \(l=0,\dots ,p\). \(\square \)

We set \(\mathbb {AB}(l):=(AB_i(l))_{i=1}^s\). The condition (9) can also be written in the form

$$\begin{aligned} \exp (c \sigma _{\! m} z)-B_m\exp (z(c-\mathbb {1}))-A_m\sigma _{\! m} z \exp (z(c-\mathbb {1}))-R_m\sigma _{\! m} z\exp (c\sigma _{\! m} z)\!=\!\mathcal {O}(z^{p+1}), \end{aligned}$$

where \(\mathbb {1}=(1,\ldots ,1)^{\top }\). The exponentials are defined componentwise. The condition (9) for order \(l=0\) is referred to as preconsistency. It takes the form

$$\begin{aligned} B_m \mathbb {1} = \mathbb {1}. \end{aligned}$$
(10)
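For constant step size (\(\sigma _m=1\)) the order conditions (9) are easy to check numerically for a given coefficient set; a small helper of the following kind (our naming) is convenient when constructing methods:

```python
import numpy as np

def order_residuals(A, B, R, c, p):
    """Residuals AB_i(l), i = 1,...,s, l = 0,...,p, of the order
    conditions (9) for constant step size (sigma_m = 1); the method has
    order of consistency p if all residuals vanish."""
    s = len(c)
    res = np.zeros((p + 1, s))
    for l in range(p + 1):
        for i in range(s):
            val = c[i] ** l - sum(B[i, j] * (c[j] - 1.0) ** l
                                  for j in range(s))
            if l >= 1:              # terms with factor l vanish for l = 0
                val -= l * sum(A[i, j] * (c[j] - 1.0) ** (l - 1)
                               for j in range(s))
                val -= l * sum(R[i, j] * c[j] ** (l - 1) for j in range(i))
            res[l, i] = val
    return res
```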

Definition 2

A peer method (8) is zero stable if there is a constant \(K>0\) such that for all \(m,k \ge 0\)

$$\begin{aligned} \Vert B_{m+k} \cdots B_{m+1} B_m\Vert \le K. \end{aligned}$$

\(\square \)

Consistency and zero stability are necessary for convergence of peer methods. One has [30]

Theorem 2

Let a peer method (8) be given. If the method is consistent of order p and zero stable and the starting values satisfy \(U_{0,i}-y(t_{0,i})=\mathcal {O}(h_0^p)\), then the peer method is convergent of order p. \(\square \)

Let the scalar test equation [7]

$$\begin{aligned} y^{\prime }(t) = \lambda y(t),\quad \lambda \in \mathbb {C},\quad \mathfrak {R}\lambda \le 0 \end{aligned}$$

be given. The application of a peer method (8) with constant step size h leads to

$$\begin{aligned} U_m = M(z) U_{m-1},\quad z=\lambda h, \end{aligned}$$

where \(M:\mathbb {C} \rightarrow \mathbb {C}^{s,s}\) with \(M(z)=(I-zR)^{-1}(B+zA)\) denotes the stability matrix [29]. The stability domain of a peer method (8) is defined by

$$\begin{aligned} S=\left\{ z \in \mathbb {C}:\varrho (M(z))\le 1; \text { eigenvalues } \lambda _z \text { of } M(z) \text { with } \left| \lambda _z\right| =1 \text { are simple}\right\} , \end{aligned}$$

where \(\varrho (\cdot )\) is the spectral radius.
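The stability matrix and a (rough) membership test for S translate directly into code; a minimal sketch with our own helper names, which ignores the requirement that eigenvalues of modulus one be simple:

```python
import numpy as np

def stability_matrix(z, A, B, R):
    """M(z) = (I - zR)^{-1} (B + zA) for an explicit peer method (8)."""
    s = A.shape[0]
    return np.linalg.solve(np.eye(s) - z * R, B + z * A)

def in_stability_domain(z, A, B, R, tol=1e-12):
    """Rough test for z in S: spectral radius of M(z) at most one."""
    eig = np.linalg.eigvals(stability_matrix(z, A, B, R))
    return np.max(np.abs(eig)) <= 1.0 + tol
```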

Explicit peer methods were tested successfully with step size control. Methods of order of consistency \(p=s\) and order of convergence \(p=s+1\) were constructed and applied to nonstiff ODE systems in [30]. In the following investigations we always consider a constant time step size.

3.2 SSP Property for Explicit Peer Methods

Strong stability preserving (SSP) explicit peer methods are investigated in [10]. Methods up to order 13 are constructed and tested on semidiscretized hyperbolic equations. We summarize here some results from [10].

Definition 3

Let an autonomous initial value problem (7) be given and let there exist \(\Delta t_{\text {FE}}>0\) so that

$$\begin{aligned} \Vert y+\Delta t f(y)\Vert \le \Vert y\Vert ,\quad \text {for all } y \in \mathbb {R}^n\quad \text {and}\quad 0\le \Delta t \le \Delta t_{\text {FE}} \end{aligned}$$

holds true, where \(\Vert \,\!\cdot \,\!\Vert \) is a norm or a convex functional. An explicit peer method (8) is strong stability preserving with SSP coefficient \(\mathcal {C}>0\) if for all \(0\le \Delta t \le \mathcal {C} \Delta t_{\text {FE}}\) the condition

$$\begin{aligned} \max \limits _{i=1,\ldots ,s} \Vert U_{m,i}\Vert \le \max \limits _{i=1,\ldots ,s} \Vert U_{{m-1},i}\Vert \end{aligned}$$

is satisfied. \(\square \)

We state the following sufficient condition for the SSP property of explicit peer methods [10].

Theorem 3

Let an explicit peer method (8) be given. Assume that

$$\begin{aligned} \mathcal {C}&= \max _{r\in \mathbb {R}^+} \left\{ r:g(r)=(I+rR)^{-1}[R,A,B-rA] \ge 0 \right\} > 0 \end{aligned}$$
(11)

holds true, where the inequalities are meant componentwise. Then this explicit peer method is strong stability preserving with SSP coefficient \(\mathcal {C}\). \(\square \)

The relation of (11) to general conditions for GLM given in [27] is discussed in [10].
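For a given method, (11) can be evaluated numerically by bisection; the sketch below (our naming) only checks Theorem 3 for fixed coefficients, it does not optimize them, and it assumes that the feasible values of r form an interval \([0,\mathcal {C}]\).

```python
import numpy as np

def ssp_coefficient(A, B, R, r_hi=100.0, tol=1e-10):
    """Approximate the SSP coefficient C of Theorem 3: the largest r with
    g(r) = (I + rR)^{-1} [R, A, B - rA] >= 0 componentwise (bisection)."""
    s = A.shape[0]

    def feasible(r):
        G = np.linalg.solve(np.eye(s) + r * R, np.hstack([R, A, B - r * A]))
        return np.all(G >= -tol)

    if not feasible(0.0):          # r -> 0 requires A, B, R >= 0
        return 0.0
    if feasible(r_hi):             # C is at least r_hi
        return r_hi
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo
```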

3.3 TVDM Property for Explicit Peer Methods

We consider a system of ODEs in the form (6) resulting from the DG spatial discretization. Applying the forward Euler method in time and a generalized slope limiter [3] yields a scheme that is TVDM (total variation diminishing in the means) under the condition [5]

$$\begin{aligned} \Delta t \le \Delta t_{\text {FE}} = \frac{\min _j\Delta x_j}{2(L_1+L_2)}. \end{aligned}$$
(12)

Here \(L_1\) and \(L_2\) denote the Lipschitz constants of the numerical flux \(\widehat{f}\) with respect to its first and second argument. Generalized slope limiters enforce stability without degrading the accuracy achieved by the space and time discretizations. For an overview of slope limiters and TVDM we refer to [3] and [5].

In the following we consider the linear case of the hyperbolic equation (2)

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + c\frac{\partial }{\partial x} u(x,t)&=0 \end{aligned}$$

with periodic boundary conditions. Considering a uniform mesh of width \(\Delta x\) and numerical upwind flux

$$\begin{aligned} \widehat{f}\left( u_h\left( x_j^-,t\right) ,u_h\left( x_j^+,t\right) \right)&={\left\{ \begin{array}{ll} cu_h\left( x_j^-,t\right) ,&{}\quad c\ge 0,\\ cu_h\left( x_j^+,t\right) ,&{}\quad c< 0 \end{array}\right. } \end{aligned}$$
(13)

results in the linear system of ordinary differential equations

$$\begin{aligned} y'&=\mathbf {L}y. \end{aligned}$$
(14)

Since for the upwind flux (13) the Lipschitz constants satisfy \(L_1+L_2=|c|\), condition (12) gives rise to

$$\begin{aligned} |c| \frac{\Delta t}{\Delta x} \le \frac{1}{2}. \end{aligned}$$

This carries over to high order RK schemes [5]. Under the assumptions above, an RKDG method is TVDM under the condition [5]

$$\begin{aligned} |c| \frac{\Delta t}{\Delta x} \le \nu (\mathcal {C}) = \frac{1}{2} \min \limits _{i,j} \frac{\alpha _{ij}}{\beta _{ij}} = \frac{1}{2} \mathcal {C}, \end{aligned}$$
(15)

where \(\mathcal {C}\) is referred to as the SSP coefficient of the RK method. Moreover, \(\alpha _{ij}\) and \(\beta _{ij}\) are the coefficients of the RK method in Shu–Osher representation [14]. Note that the ratio \(\frac{\alpha _{ij}}{\beta _{ij}}\) is taken to be infinite if \(\beta _{ij}=0\).

A similar result can be proved for peer methods. Considering again the assumptions above, these schemes are TVDM under the condition

$$\begin{aligned} |c| \frac{\Delta t}{\Delta x} \le \nu (\mathcal {C}) = \frac{1}{2} \max _{r\in \mathbb {R}^+} \left\{ r:(I+rR)^{-1} [R,A,B-rA] \ge 0 \right\} = \frac{1}{2} \mathcal {C}, \end{aligned}$$
(16)

where \(\mathcal {C}\) is the SSP coefficient of the peer method, cf. Theorem 3. A proof and more details are stated in [15].

Besides TV-stability properties, numerical methods for hyperbolic problems must satisfy linear stability properties in order to guarantee convergence. When polynomials of degree \(k>0\) are used in the DG discretization, the forward Euler method becomes unstable for all time step sizes [4]. Higher order RKDG schemes or DG peer methods are linearly stable under a condition of the form

$$\begin{aligned} \Delta t \lambda \in S\quad \text {for all } \lambda \in \Lambda , \end{aligned}$$

where \(\Lambda \) denotes the set of eigenvalues of the DG spatial operator \(\mathbf {L}\) in (14). This leads to a restriction of the form

$$\begin{aligned} |c| \frac{\Delta t}{\Delta x}&\le \mu (k,S), \end{aligned}$$
(17)

where \(\mu \) depends on the degree k of the DG spatial discretization and the absolute stability region S of the method.
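For a given method, the bound \(\mu (k,S)\) in (17) can be approximated by bisection on the time step, reusing the helper in_stability_domain sketched in Sect. 3.1 (again our naming; we assume the stable time steps form an interval):

```python
import numpy as np

def linear_cfl_bound(A, B, R, lam, c, dx, dt_hi=10.0, tol=1e-8):
    """Approximate mu(k, S): the largest nu = |c| dt / dx such that
    dt * lambda lies in S for all eigenvalues lambda in lam of the DG
    spatial operator L from (14). dt_hi is assumed to be unstable."""
    def stable(dt):
        return all(in_stability_domain(dt * z, A, B, R) for z in lam)

    lo, hi = 0.0, dt_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stable(mid) else (lo, mid)
    return abs(c) * lo / dx
```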

SSP methods with optimal SSP coefficient have been studied for many applications; the results are summarized, e.g., in [8] for RK methods and in [10] for peer methods. In the context of methods applied to ODEs resulting from a DG spatial discretization, however, the condition

$$\begin{aligned} |c| \frac{\Delta t}{\Delta x} \le \kappa = \min (\mu (k,S),\nu (\mathcal {C})) \end{aligned}$$
(18)

must be satisfied to guarantee both linear and SSP stability.

Kubatko et al. [18] observed that for RKDG methods (17) is more restrictive than (15). They constructed SSP RKDG methods with optimal stability up to order four. To this end they first optimized the stability function of the RKDG method. The free parameters of the RK scheme are then used to optimize the SSP stability. For all stages and orders considered, a method with \(\nu \ge \mu _{\text {opt}}\) could be found, i.e. condition (18) is successfully optimized with \(\kappa =\mu _{\text {opt}}\).

4 DGSSP-Optimized Explicit Peer Methods

In this section, we present an approach of constructing SSP explicit peer methods that are suitable for discontinuous Galerkin spatial discretizations.

The construction of DG-optimized SSP RK methods is split into two main steps. First, the coefficients of the stability function are optimized with respect to linear stability, depending on the degree of the DG spatial discretization. This procedure is described in detail in [13]. Second, the coefficients of the RK method are determined subject to the stability polynomial found in the previous step, with the goal of maximizing the SSP coefficient, see [14].

This principle cannot be applied to peer methods directly: since a stability matrix must be considered instead of a scalar stability function, the optimization can only be done iteratively. We therefore proceed as follows:

Step 1 Optimization of \(\mu \):

We compute a numerical approximation \(\widetilde{\mu }\) of \(\mu (k,S)\). Let an explicit peer method (8) of order p and a DG spatial operator \(\mathbf {L}\) (14) be given. We denote by \(\Lambda \subseteq \mathbb {C}\) a finite set of eigenvalues of the DG spatial operator; in our computations we chose \(|\Lambda |\approx 150\) eigenvalues. Then the optimization problem can be written as

(19)

Analogously to [13] we reformulate problem (19) as a least deviation problem and apply a bisection approach. This is summarized in Algorithm 1. The output \(\Delta t_{\varepsilon }\) of Algorithm 1 gives \(\widetilde{\mu }\) by [cf. (17)]

$$\begin{aligned} \widetilde{\mu }= |c| \frac{\Delta t_{\varepsilon }}{\Delta x}. \end{aligned}$$

Here \(\varepsilon \) denotes the tolerance used in the stopping criterion of Algorithm 1; we used \(\varepsilon =10^{-6}\). The internal numerical optimization in lines 8 and 9 of Algorithm 1 is done with fmincon from the Matlab optimization toolbox. The algorithm also returns the coefficients of the corresponding method, which, however, are not used for Step 2. Note that Algorithm 1 is a modification of the algorithm from [18] for peer methods. It works well in practice, despite the fact that a stability matrix has to be considered instead of a scalar stability function. In all cases considered the algorithm converged to a bound for linear stability.
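Since Algorithm 1 is only shown as a figure in the original, the following fragment sketches its structure under our own simplifications: an outer bisection on \(\Delta t\) and, for each candidate \(\Delta t\), an inner numerical optimization (fmincon in the paper, scipy.optimize.minimize here) over a coefficient vector x that is decoded into A, B, R, c by a user-supplied function unpack. The least deviation reformulation and a good parametrization of the coefficients are omitted; stability_matrix and order_residuals are the helpers sketched in Sect. 3.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def algorithm1_sketch(p, lam, unpack, x0, dt_hi, eps=1e-6):
    """Bisection on dt (cf. Algorithm 1): dt is accepted if coefficients
    satisfying the order conditions exist whose worst spectral radius of
    M(dt * lambda) over lam does not exceed one."""

    def worst_radius(x, dt):
        A, B, R, c = unpack(x)
        return max(np.max(np.abs(np.linalg.eigvals(
            stability_matrix(dt * z, A, B, R)))) for z in lam)

    def order_defect(x):
        A, B, R, c = unpack(x)
        return order_residuals(A, B, R, c, p).ravel()

    cons = [NonlinearConstraint(order_defect, 0.0, 0.0)]

    def feasible(dt):
        res = minimize(worst_radius, x0, args=(dt,), constraints=cons,
                       method="trust-constr", options={"maxiter": 500})
        return res.fun <= 1.0 + 1e-8

    lo, hi = 0.0, dt_hi
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo                   # dt_eps, i.e. mu_tilde = |c| dt_eps / dx
```

In practice one would warm-start each inner optimization from the previous solution; the sketch omits this.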

Step 2 Optimization of \(\mathcal {C}\):

We set \(\mu =\widetilde{\mu }\) and search for coefficients \(A,B,R\) and nodes c of a peer method with respect to the following optimization problem [9, 10]:

(20)

To apply a bisection approach we rewrite problem (20) for a given r in the form

$$\begin{aligned} \begin{aligned}&\min _{A,B,R,c}\ \max (\max (-g(r))) \\&\text {subject to }\ \ \ \mathbb {AB}(l)=0,\quad l=0,\ldots ,p, \end{aligned} \end{aligned}$$
(21)

where \(\max (\max (\cdot ))\) denotes the largest entry of a matrix. In order to respect the linear stability bound, we add for a given \(\mu \) the constraint

$$\begin{aligned} \max \limits _{\lambda \in \Lambda } \varrho (M(\lambda \mu )) \le 1. \end{aligned}$$

For practical computations it is useful to include constraints for the nodes c. We take [29, 30]

$$\begin{aligned} -s \le c_i < 1,\quad i=1,\ldots ,s-1,\quad c_s=1. \end{aligned}$$

Our approach is given by Algorithm 2, where we used \(\varepsilon =10^{-6}\) and \(\delta =s\). The output \(r_{\varepsilon }\) gives the approximation \(\nu (\mathcal {C})=r_{\varepsilon }/2\). Again, we use fmincon for the internal optimization; a sketch of the overall structure is given below.

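Analogously to Step 1, Algorithm 2 can be sketched as a bisection on r: for fixed r the inner problem (21) minimizes the largest entry of \(-g(r)\) subject to the order conditions, the linear stability constraint at the prescribed \(\mu \) and the node bounds. The sketch below is again our simplification (lam is assumed to contain the suitably scaled eigenvalues, unpack must fix \(c_s=1\), and feasibility is assumed to be monotone in r); it reuses the helpers from Sect. 3.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def algorithm2_sketch(p, lam, mu, unpack, x0, delta, r_hi, eps=1e-6):
    """Bisection on r (cf. Algorithm 2); returns r_eps with nu(C) = r_eps / 2."""

    def neg_g_max(x, r):            # objective of (21): max(max(-g(r)))
        A, B, R, c = unpack(x)
        s = A.shape[0]
        G = np.linalg.solve(np.eye(s) + r * R, np.hstack([R, A, B - r * A]))
        return np.max(-G)

    def order_defect(x):
        A, B, R, c = unpack(x)
        return order_residuals(A, B, R, c, p).ravel()

    def spectral_radius(x):         # linear stability at the prescribed mu
        A, B, R, c = unpack(x)
        return max(np.max(np.abs(np.linalg.eigvals(
            stability_matrix(mu * z, A, B, R)))) for z in lam)

    def nodes(x):
        return unpack(x)[3][:-1]    # c_1, ..., c_{s-1}; c_s = 1 is fixed

    cons = [NonlinearConstraint(order_defect, 0.0, 0.0),
            NonlinearConstraint(spectral_radius, -np.inf, 1.0),
            NonlinearConstraint(nodes, -delta, 1.0)]

    def feasible(r):                # g(r) >= 0 attainable for this r
        res = minimize(neg_g_max, x0, args=(r,), constraints=cons,
                       method="trust-constr", options={"maxiter": 500})
        return res.fun <= 1e-8

    lo, hi = 0.0, r_hi
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo
```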

The optimization in Step 2 is successful if the condition \(\nu (\mathcal {C})\ge \mu \) is satisfied. In contrast to RK methods, this was not achieved with \(\widetilde{\mu }\) from Step 1 for the methods studied. Therefore we reduce \(\mu \) stepwise and repeat Algorithm 2 until the condition \(\nu (\mathcal {C})\ge \mu \) is satisfied. In this way Algorithms 1 and 2 determine methods with \(\kappa =\min (\mu ,\nu )=\mu \) and \(\nu \approx \mu \). The methods are neither DG-optimal nor SSP-optimal, but they are optimized with respect to condition (18). Hence we call our new explicit peer methods DGSSP-optimized.

Remark 1

We cannot guarantee that our optimization in Algorithms 1 and 2 converges to the global extremum. However, the obtained values for \(\mu \) and \(\nu \) are quite satisfactory and in general larger than those for corresponding RK methods, cf. Tables 1 and 2.

We denote an s-stage explicit peer method of order p by SSPEP(s, p) and, accordingly, an RK method by SSPRK(s, p). The DG spatial discretization of degree k is referred to as DG(\(k+1\)). We consider the following explicit peer methods in our search:

  • SSPEP(s, 2) with DG(2) spatial operator, \(s=2,3,4,5,6\) and

  • SSPEP(s, 3) with DG(3) spatial operator, \(s=3,4,5\).

The results of our optimizations are presented in Tables 1 and 2. For comparison we include the CFL restrictions of the DG-optimized [18] and SSP-optimized [14] RK methods and of the SSP-optimized explicit peer methods [10]. In Tables 1 and 2 the third column shows the value \(\nu _{\text {opt}}\) obtained for SSP stability of the SSP-optimized methods and the second column lists the corresponding CFL restriction \(\mu \) for linear stability. The fourth column gives \(\kappa =\min (\mu ,\nu _{\text {opt}})\), cf. condition (18). The maximal value \(\widetilde{\mu }\) obtained for linear stability is given in the fifth column; to keep the same notation as for RK methods we denote it by \(\mu _{\text {opt}}\), too. For RK methods \(\nu \ge \mu _{\text {opt}}\) holds, see [18], i.e. \(\kappa =\mu _{\text {opt}}\). For peer methods we have to decrease the value \(\widetilde{\mu }\) to find a corresponding method with \(\nu \ge \mu \); this value \(\mu \) is given in column six. Nevertheless, the resulting \(\kappa \) of the DGSSP-optimized peer methods is, except for the case \(s=p=2\) with DG(2) spatial operator, larger than for the DG-optimized RK methods. Whereas for \(p=2\) there is only a small advantage, for \(p=3\) the difference between the \(\kappa \)-values of the peer and RK methods is more pronounced. The percentage improvement in the CFL restriction \(\kappa \) is denoted by \(\varepsilon \kappa \,(\%)\). In all cases considered we observe an improvement in the CFL restrictions compared to the SSP explicit peer methods of [10]. For peer methods we indicate by \(\mu /\mu _{\text {opt}}\, (\%)\) the percentage of \(\mu \) relative to \(\mu _{\text {opt}}\). Note that \(\mu /\mu _{\text {opt}}\, (\%)\) increases with larger s; the method with \(s=6,\ p=2\) and DG(2) spatial operator has nearly \(\mu =\mu _{\text {opt}}\).

Table 1 CFL restrictions for linear stability \(\mu \) and SSP stability \(\nu \) of SSP-optimized and DGSSP-optimized explicit peer methods (top) and SSP-optimized and DG-optimized RK methods (bottom) of order \(p=2\) with DG(2) spatial operator

Figures 2 and 3 illustrate the stability regions and the eigenvalues of the DG spatial operator scaled with the maximum linearly stable time step size. They show that the stability domains of the new methods are better suited to the set of eigenvalues, which allows using a larger time step size.

All peer methods are consistent of order p. Due to the SSP property we have \(B\ge 0\), cf. Theorem 3. Together with preconsistency (10) this is sufficient for zero stability, i.e. the peer methods are convergent of order p, see Theorem 2.

The parameters of the new DGSSP-optimized explicit peer methods can be found in the Appendix, Tables 5, 6, 7, 8, 9, 10, 11, 12.

Table 2 CFL restrictions for linear stability \(\mu \) and SSP stability \(\nu \) of SSP-optimized and DGSSP-optimized explicit peer methods (top) and SSP-optimized and DG-optimized RK methods (bottom) of order \(p=3\) with DG(3) spatial operator
Fig. 2

Stability domains (black) of the SSP-optimized SSPEP methods (left) and DGSSP-optimized SSPEP methods (right) of order \(p=2\), together with the eigenvalues of the DG(2) spatial operator scaled by the maximum linearly stable time step size (blue, dotted), \(s=2,\ldots ,6\) (Color figure online)

5 Numerical Tests

In this section, we test explicit SSP peer methods and the new DGSSP-optimized peer methods and compare them with RK methods. All numerical tests are performed in Matlab. We always consider uniform meshes in space and constant time steps in our experiments. Explicit peer methods require additional starting values, which are computed with ode45 from the Matlab ODE suite [26] with tolerances \(atol=rtol=5.e{-}14\).

5.1 Test Case 1: Linear Transport Equation

We consider the linear advection equation in one dimension in space [18]

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + \frac{\partial }{\partial x} (c u(x,t)) =0,\quad x \in [-\pi ,\pi ],\quad t \in (0,t_e] \end{aligned}$$
(22)

with periodic boundary conditions and initial condition

$$\begin{aligned} u(x,0)=u_0(x)=\sin (2\pi x/D). \end{aligned}$$
(23)

Here, \(D=2\pi \) is the length of the domain. The exact solution of the problem (22) with initial condition (23) is given by

$$\begin{aligned} u(x,t)=\sin (x-ct). \end{aligned}$$
(24)

We use the advection speed \(c=1\) and integrate the problem up to the final time \(t_e=315\). We consider spatial meshes of \(N=50,100,200,400,800\) elements, i.e. we have \(\Delta x = 2\pi /50,2\pi /100,2\pi /200,2\pi /400,2\pi /800\). The \(L^2\)-error is calculated from the solution at \(t_e=315\).
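A possible driver for this test combines the sketches dg_linear_advection_operator and peer_step from the previous sections (all names ours). Unlike in the experiments above, the starting stages are taken from an L2 projection of the exact solution instead of from ode45, and no slope limiter is applied.

```python
import numpy as np

def project_exact(N, k, dx, t, c_adv, a=-np.pi):
    """L2 projection of the exact solution u(x, t) = sin(x - c t) onto
    V_h^k (Legendre basis), used to supply starting stage values."""
    xg, wg = np.polynomial.legendre.leggauss(k + 3)
    y = np.zeros((N, k + 1))
    for j in range(N):
        x = a + (j + 0.5) * dx + 0.5 * dx * xg   # Gauss points in element j
        u = np.sin(x - c_adv * t)
        for i in range(k + 1):
            Pi = np.polynomial.legendre.legval(xg, np.eye(k + 1)[i])
            y[j, i] = 0.5 * (2 * i + 1) * np.sum(wg * u * Pi)
    return y.ravel()

def run_linear_transport(A, B, R, c_nodes, kappa, N=100, k=1,
                         c_adv=1.0, t_end=315.0):
    """Integrate (22) with DG(k+1) in space and an explicit peer method
    in time at CFL number kappa; returns the coefficients at t_end."""
    dx = 2.0 * np.pi / N
    L = dg_linear_advection_operator(N, k, c=c_adv, dx=dx)
    f = lambda t, y: L @ y
    dt = kappa * dx / abs(c_adv)
    nsteps = int(np.ceil(t_end / dt))
    dt = t_end / nsteps                          # hit t_end exactly
    # starting stages at t_{0,i} = (c_i - 1) dt, so that the stage with
    # c_s = 1 after the last step approximates the solution at t_end
    U = np.array([project_exact(N, k, dx, (ci - 1.0) * dt, c_adv)
                  for ci in c_nodes])
    F = np.array([f((ci - 1.0) * dt, u) for ci, u in zip(c_nodes, U)])
    for m in range(nsteps):
        U, F = peer_step(U, F, f, m * dt, c_nodes, dt, A, B, R)
    return U[-1]
```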

Fig. 3

Stability domains (black) of the SSP-optimized SSPEP methods (left) and DGSSP-optimized SSPEP methods (right) of order \(p=3\), together with the eigenvalues of the DG(3) spatial operator scaled by the maximum linearly stable time step size (blue, dotted), \(s=3,4,5\) (Color figure online)

First, we test the SSP-optimized and the DGSSP-optimized explicit peer method with \(s=3,\ p=2\) and DG(2) spatial operator. We use the maximal CFL number \(\kappa =0.6237\) of the DGSSP-optimized method, which is stable with respect to both stability properties, cf. Table 1. Note that the SSP-optimized peer method does not meet this CFL restriction for linear stability. We compute both without a slope limiter and with the modified generalized slope limiter, using the parameter \(M=1\) as suggested in [5]. The results are given in Table 3. Here \(p_{\text {num}}\) denotes the numerically observed order of convergence. The DGSSP-optimized peer method shows the expected order. Even though a slope limiter is applied, the SSP-optimized method displays only first order. This test case emphasizes the importance of the linear stability requirement.

Table 3 SSP and DGSSP explicit peer method with \(s=3\) and \(p=2\), paired with DG(2) spatial operator, for the linear transport equation (22), without application of a slope limiter (top) and with application of the modified generalized slope limiter (bottom)
Table 4 Methods used in our numerical tests

Next, for an order test we apply the new DGSSP-optimized explicit peer methods and compare them with the DG-optimized RK methods from [18] and the forward Euler method. The tested methods are listed in Table 4. We consider \(\Delta x=2\pi /N\) with \(N=20,40,80,160,320\), use the maximum stable time step \(\kappa \) for each method and run to the final time \(t_e=2\pi \). The results are illustrated in Fig. 4a. All methods show the expected order.

The larger CFL bounds allow the peer methods to use a larger time step. Hence, the tested peer methods need fewer function evaluations than the RK methods for a similar accuracy, see Fig. 4b.

Fig. 4

Order test for the linear transport equation (22), accuracy versus space step size (a) and accuracy versus number of function evaluations (b)

5.2 Test Case 2: Burgers Equation

We consider a hyperbolic conservation law with a nonlinear flux function, namely the inviscid Burgers equation [17]

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + \frac{\partial }{\partial x} \left( \frac{1}{2}u(x,t)^2\right) =0,\quad x \in [0,200],\quad t \in (0,t_e] \end{aligned}$$
(25)

with periodic boundary conditions and the initial condition (23) with \(D=200\). The exact solution to the problem (25) is given by the implicit formula

$$\begin{aligned} u(x,t)=\sin (2\pi /D (x-t u(x,t))). \end{aligned}$$

The exact solution forms a shock at \(x=100\) at time \(t=100/\pi \). We take the local Lax–Friedrichs flux as the numerical flux \(\widehat{f}\), see, e.g., [5].
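The local Lax–Friedrichs flux has the standard form; for Burgers' flux \(f(u)=\tfrac{1}{2}u^2\) a small sketch (our helper, not the authors' code) reads:

```python
import numpy as np

def local_lax_friedrichs(u_minus, u_plus):
    """Local Lax-Friedrichs flux for f(u) = u^2 / 2:
    f_hat = (f(u-) + f(u+)) / 2 - alpha / 2 * (u+ - u-),
    with alpha = max(|u-|, |u+|) the local maximum wave speed |f'(u)|."""
    f = lambda u: 0.5 * u ** 2
    alpha = np.maximum(np.abs(u_minus), np.abs(u_plus))
    return 0.5 * (f(u_minus) + f(u_plus)) - 0.5 * alpha * (u_plus - u_minus)
```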

First, we test the performance of the new explicit peer method dgpeer32. A spatial mesh of \(N=100\) elements is used together with the maximum allowable time step size, \(\kappa =0.6237\), cf. Table 1. To work out the advantage of the modified generalized slope limiter, we compute the solution both with and without the limiter. As proposed in [5], the parameter \(M=\pi ^2/10000\) is used. Figure 5a shows the results for problem (25) at time \(t_e=24\), before a shock wave forms. There are no spurious oscillations in the numerical solution. The results at time \(t_e=35\), after the shock wave has built up, are shown in Fig. 5b. We observe oscillations around the discontinuity in the numerical solution.

Fig. 5

Comparison of the numerical peer solution (DGSSP-optimized, \(s=3,\ p=2\), with DG(2) spatial operator; blue dashed lines, blue crosses) with the exact solution (black solid lines) for problem (25) with \(\kappa =0.6237\) at time \(t_e=24\) (a) and at time \(t_e=35\) (b), without application of a slope limiter (top) and with application of the modified generalized slope limiter (bottom) (Color figure online)

For an order test we apply the methods given in Table 4. Analogously to the linear test case, spatial meshes of \(N=50,100,200,400,800\) elements are considered. We integrate to \(t_e=24\), well before a shock wave forms, and always use the maximum stable time step. The results are presented in Fig. 6. The expected orders can be seen. We clearly observe the benefit of the larger maximum stable time step of the new DGSSP-optimized peer methods in comparison with the RK methods.

Fig. 6

Order test for the Burgers equation (25), accuracy versus space step size (left) and accuracy versus number of function evaluations (right)

5.3 Test Case 3: Test of Order Reduction

The stage order of explicit RK methods is only one, whereas for explicit peer methods the stage order is equal to the order of consistency. In the next test, we show the advantage of the higher stage order. For that, we consider a hyperbolic conservation law with source term of the form [6]

$$\begin{aligned} \frac{\partial }{\partial t} u(x,t) + \frac{\partial }{\partial x}(c u(x,t)) =b(x,t),\quad x \in [0,1],\quad t \in (0,t_e], \end{aligned}$$
(26)

where the right-hand side of the problem (26) is given by

$$\begin{aligned} b(x,t)=\frac{t-x}{(1+t)^2}. \end{aligned}$$

The initial condition \(u_0(x)=u(x,0)\) and the right boundary condition are taken from the exact solution \(u(x,t)=(1+x)/(1+t)\). Sanz-Serna et al. [24] show that explicit RK methods of order \(p \ge 3\) applied to problems of the form (26) suffer from order reduction. We test again the methods listed in Table 4.
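As a quick check, inserting the exact solution into (26) confirms the given right-hand side; note that this requires the advection speed \(c=1\), which is our reading of the test problem:

$$\begin{aligned} \frac{\partial }{\partial t}\frac{1+x}{1+t} + c\,\frac{\partial }{\partial x}\frac{1+x}{1+t} = -\frac{1+x}{(1+t)^2} + \frac{c}{1+t} = \frac{c(1+t)-(1+x)}{(1+t)^2} \overset{c=1}{=} \frac{t-x}{(1+t)^2} = b(x,t). \end{aligned}$$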

First, we consider a fixed uniform mesh in space with \(N=8\) elements, i.e. \(\Delta x=1/8\). We decrease the time step size h; more precisely, we take \(h=1/nstep\) with \(nstep=32,48,64,\ldots ,240\). The \(L^{\infty }\)-errors are calculated from the solution at \(t_e=1\). The results are presented in Fig. 7a. All methods show the classical order of convergence.

Next, we decrease the space step size \(\Delta x\) and the time step size h simultaneously, i.e. we choose \(N=8,12,16,\ldots ,60\) elements in space and, as above, \(nstep=32,48,64,\ldots ,240\) steps in time. Figure 7b shows that the RK method dgrk43 clearly suffers from order reduction, while there is no order reduction for the peer methods.

Fig. 7

Numerical tests for problem (26), order test (a) and test of order reduction (b)

6 Conclusions

In this paper, we have constructed new SSP explicit peer methods for discontinuous Galerkin spatial discretizations which are optimized with respect to both linear and SSP stability. In general, the CFL restrictions for linear stability of the peer methods are larger than those of comparable RK methods. The methods were tested successfully on linear and nonlinear examples and showed a good performance compared with the SSP-optimized explicit peer methods. The advantage of high stage order was verified numerically.

New methods up to order three were constructed. The optimization of higher order methods and the investigation of explicit DGSSP peer methods with variable step sizes will be topics of future work.