1 Introduction

In this paper, we are concerned with the numerical integration of the system of the form

$$ \left\{ \begin{array}{l} y^{\prime}(t)+Ay(t)=g(y(t)),\quad t\in[t_{0},t_{end}],\cr\noalign{\vskip1truemm} y(t_{0})=y_{0}, \end{array} \right. $$
(1.1)

where \(g: R^{m}\rightarrow R^{m}\) is an analytic function and \(A\in R^{m\times m}\) is symmetric positive definite or skew-Hermitian with eigenvalues of large modulus. In this case, \(e^{-tA}\) denotes the matrix exponential and \(\|e^{-tA}\|\leq C\) for t ≥ 0. Such problems frequently arise in different fields of applied science such as fluid mechanics, quantum mechanics, and the spatial semi-discretization of semilinear parabolic problems. They can be solved with methods adapted to the special structure of the problem (see, e.g., [7, 23, 27]). The idea of exponential integrators is not a new one and has been discussed by many authors [14, 20, 24, 25]. It is widely recognized that exponential Runge–Kutta (ERK) methods possess advantages over the classical Runge–Kutta methods for the numerical integration of (1.1).

Generally, the problem (1.1) may have the following equivalent form

$$ \left\{ \begin{array}{l} y^{\prime}(t)=f(y(t)),\quad t\in[t_{0},t_{end}],\cr\noalign{\vskip1truemm} y(t_{0})=y_{0}, \end{array} \right. $$
(1.2)

with f(y) = −Ay + g(y), and many well-known numerical methods have been presented for (1.2) in the scientific literature. Compared with multistep methods, whose implementation requires a series of starting values, Runge–Kutta (RK) methods are favorable because only the initial value is needed to start them [17,18,19]. Runge–Kutta methods were initially proposed for (1.2) by Runge [21] and Kutta [15]. Since then, many Runge–Kutta methods with special structure have been investigated, such as implicit methods with strong stability, exponential methods for stiff problems, trigonometrically fitted methods for oscillatory problems, and symplectic/energy-preserving methods for Hamiltonian systems [8, 9].

With the development of parallel computers, several kinds of parallel RK-type methods have been designed and studied in [3,4,5,6]. Parallel methods consume less time than sequential methods of the same order of accuracy. Cong et al. presented a general class of explicit pseudo two-step Runge–Kutta (EPTSRK) methods and explicit pseudo two-step Runge–Kutta–Nyström (EPTSRKN) methods for the numerical integration of first-order and second-order differential equations, respectively. In [16], Li et al. constructed extended explicit pseudo two-step RKN methods for second-order oscillatory systems and derived error bounds for the new methods.

In this paper, we focus on the construction of explicit pseudo two-step exponential Runge–Kutta (EPTSERK) methods adapted to the special structure of (1.1). The paper is organized as follows: Section 2 recalls some basic results on exponential quadrature rules and introduces the explicit pseudo two-step exponential Runge–Kutta methods. The local truncation error and order conditions are analyzed in Section 3. In Section 4, we present the global error analysis of the new methods. Several practical numerical methods are constructed in Section 5, and numerical experiments showing their efficiency are reported in Section 6. Concluding remarks are given in the last section.

2 Explicit pseudo two-step exponential Runge–Kutta methods

As is known, the solution of the semilinear problem (1.1) satisfies the variation-of-constants formula

$$ y(t_{n}+h)=e^{-hA}y(t_{n})+{{\int}_{0}^{h}}e^{-(h-\tau)A}g(y(t_{n}+\tau))d\tau, $$
(2.1)

which motivates the following numerical method

$$ y_{n+1}=e^{z}y_{n}+h\sum\limits_{i=1}^{s}b_{i}(z)g(y(t_{n}+c_{i}h)), $$

with z = −hA. We call this an exponential quadrature rule with weights bi(z) and nodes ci [14, 20]. The conditions for order p are given by

$$ \sum\limits_{i=1}^{s}b_{i}(z)\frac{c_{i}^{j-1}}{(j-1)!}=\varphi_{j}(z), j=1,\cdots,p, $$

where φj(z) is determined by

$$ \varphi_{j}(z)={{\int}_{0}^{1}}e^{(1-\theta)z}\frac{\theta^{j-1}}{(j-1)!}d\theta, j\geq 1, $$

which have the following property:

$$ \varphi_{j+1}(z)=\frac{\varphi_{j}(z)-\varphi_{j}(0)}{z}, \varphi_{0}(z)=e^{z}. $$
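For scalar arguments, the φ-functions can also be evaluated from their Taylor series \(\varphi_{j}(z)=\sum_{k\geq 0}z^{k}/(k+j)!\), which follows from the integral definition above. The following minimal sketch (the function name and truncation length are our own choices) checks the recurrence numerically:

```python
from math import exp, factorial

def phi(j, z, terms=40):
    """phi_j(z) = sum_{k>=0} z^k / (k + j)!, truncated Taylor series (scalar z)."""
    return sum(z**k / factorial(k + j) for k in range(terms))

# phi_0(z) = e^z, and the recurrence phi_{j+1}(z) = (phi_j(z) - phi_j(0)) / z
# with phi_j(0) = 1/j! can be verified numerically:
z = -0.7
assert abs(phi(0, z) - exp(z)) < 1e-12
for j in range(4):
    assert abs(phi(j + 1, z) - (phi(j, z) - 1.0 / factorial(j)) / z) < 1e-12
```

For matrix arguments z = −hA the same series applies with matrix powers, although in practice one would prefer scaling-and-squaring or Krylov techniques for large A.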

We approximate the integral in (2.1) by a quadrature formula and obtain the following explicit pseudo two-step exponential Runge–Kutta methods.

Definition 1

An s-stage explicit pseudo two-step exponential Runge–Kutta method for the numerical integration of (1.1) is defined as

$$ \left\{ \begin{array}{l} Y_{i}^{[n]}=e^{c_{i}z}y_{n}+h\sum\limits_{j=1}^{s}a_{ij}(z)g(Y_{j}^{[n-1]}), \ i=1,\cdots,s\cr\noalign{\vskip4truemm} y_{n+1}=e^{z}y_{n}+h\sum\limits_{i=1}^{s}b_{i}(z)g(Y_{i}^{[n]}), \end{array} \right. $$
(2.2)

where aij(z), bi(z) with i,j = 1,⋯ ,s are matrix-valued functions of z = −hA and \(Y_{i}^{[n]}\approx y(t_{n}+c_{i}h)\) for i = 1,⋯ ,s. The method (2.2) can be displayed in the following Butcher tableau

$$ \begin{array}{c|c} c & A(z)\\ \hline & b^{T}(z) \end{array} $$

Since g(y) = f(y) + Ay, the classical explicit pseudo two-step Runge–Kutta methods considered in [3] are recovered as \(A\rightarrow 0\). It should be noted that EPTSERK is a new kind of ERK method which needs only s function evaluations per step, and these s function evaluations can be computed in parallel on s processors. Therefore, the s-stage EPTSERK method requires only one sequential function evaluation per step when implemented on an s-processor computer.
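The step structure of (2.2), and in particular the fact that the stages depend only on data from the previous step, can be sketched for the scalar test problem y′ + λy = g(y) (m = 1, so \(e^{c_{i}z}\) is an ordinary exponential). This is an illustrative sketch, not a production implementation; the coefficient callables `a` and `b` are placeholders to be supplied, e.g., from the formulas of Section 5:

```python
import math

def eptserk_step(y_n, Y_prev, h, lam, g, c, a, b):
    """One step of (2.2) for the scalar problem y' + lam*y = g(y), with z = -h*lam.

    Y_prev holds the stage values of the previous step; a(i, j, z) and b(i, z)
    are the coefficient functions.  The s stage values depend only on the
    previous step, so they could be evaluated in parallel on s processors.
    """
    z = -h * lam
    s = len(c)
    Y = [math.exp(c[i] * z) * y_n
         + h * sum(a(i, j, z) * g(Y_prev[j]) for j in range(s))
         for i in range(s)]
    y_next = math.exp(z) * y_n + h * sum(b(i, z) * g(Y[i]) for i in range(s))
    return y_next, Y

# Sanity check with g = 0: one step reproduces the exact decay e^{-h*lam} y_n.
y1, _ = eptserk_step(1.0, [1.0, 1.0], 0.1, 2.0, lambda y: 0.0,
                     [0.5, 1.0], lambda i, j, z: 0.0, lambda i, z: 0.0)
assert abs(y1 - math.exp(-0.2)) < 1e-14
```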

3 Order conditions of the EPTSERK methods

Definition 2

Suppose that the numerical solution of (2.2) at the n-th step satisfies yn = y(tn) and \(Y_{i}^{[n-1]}=y(t_{n-1}+c_{i}h)+\mathcal {O}(h^{p_{1}+1})\). Then the method is said to be of step order p = p2 and stage order \(q=\min \limits \{p_{1},p_{2}\}\) if

$$ y(t_{n}+c_{i}h)-Y_{i}^{[n]}=\mathcal{O}(h^{p_{1}+1}),\quad y(t_{n}+h)-y_{n+1}=\mathcal{O}(h^{p_{2}+1}). $$

The main task in this section is to derive the order conditions of the method (2.2). Inserting the exact solution into (2.2) yields

$$ \begin{array}{@{}rcl@{}} &&\displaystyle y(t_{n}+c_{i}h)=e^{-c_{i}hA}y(t_{n})+h\sum\limits_{j=1}^{s}a_{ij}(z)\hat g(t_{n}+(c_{j}-1)h)+{\Delta}_{ni},\\ &&\displaystyle y(t_{n}+h)=e^{-hA}y(t_{n})+h\sum\limits_{i=1}^{s}b_{i}(z)\hat g(t_{n}+c_{i}h)+\delta_{n+1}. \end{array} $$
(3.1)

in which \(\hat g(t)=g(y(t))\) and the expression of Δni can be given by

$$ \begin{array}{l} \displaystyle {\Delta}_{ni}=h{\int}_{0}^{c_{i}}e^{-(c_{i}-\tau)hA}\hat g(t_{n}+h\tau)d\tau-h\sum\limits_{j=1}^{s}a_{ij}(z)\hat g(t_{n}+(c_{j}-1)h)\cr\noalign{\vskip4truemm} \displaystyle=c_{i}h{{\int}_{0}^{1}}e^{-c_{i}(1-\tau)hA}\hat g(t_{n}+c_{i}h \tau)d\tau-h\sum\limits_{j=1}^{s}a_{ij}(z)\hat g(t_{n}+(c_{j}-1)h) \end{array} $$

Substituting \(\hat g(t_{n}+c_{i}h\tau )\) and \(\hat g(t_{n}+(c_{j}-1)h)\) with their Taylor series gives

$$ \begin{array}{@{}rcl@{}} {\Delta}_{ni}&=&c_{i}h{{\int}_{0}^{1}}e^{-c_{i}(1-\tau)hA}\sum\limits_{m=0}^{\infty} \frac{{c_{i}^{m}}h^{m}\tau^{m}}{m!}\hat g^{(m)}(t_{n})d\tau - h\sum\limits_{j=1}^{s}a_{ij}(z)\sum\limits_{m=0}^{\infty} \frac{(c_{j} - 1)^{m}h^{m}}{m!}\hat g^{(m)}(t_{n})\\ &=&\sum\limits_{m=0}^{\infty}h^{m+1}\Big(c_{i}^{m+1}{{\int}_{0}^{1}}e^{-c_{i}(1-\tau)hA}\frac{\tau^{m}}{m!}d\tau-\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}\Big)\hat g^{(m)}(t_{n})\\ &=&\sum\limits_{m=0}^{\infty} h^{m+1}\Big(c_{i}^{m+1}\varphi_{m+1}(c_{i}z)-\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}\Big)\hat g^{(m)}(t_{n}). \end{array} $$
(3.2)

Likewise, we have

$$ \begin{array}{@{}rcl@{}} \displaystyle \delta_{n+1}&=&h{{\int}_{0}^{1}}e^{-(1-\tau)hA}\sum\limits_{m=0}^{\infty}\frac{h^{m}\tau^{m}}{m!}\hat g^{(m)}(t_{n})d\tau-h\sum\limits_{i=1}^{s}b_{i}(z)\sum\limits_{m=0}^{\infty}\frac{{c_{i}^{m}}h^{m}}{m!}\hat g^{(m)}(t_{n})\\ &=&\sum\limits_{m=0}^{\infty} h^{m+1}\Big({{\int}_{0}^{1}}e^{-(1-\tau)hA}\frac{\tau^{m}}{m!}d\tau-\sum\limits_{i=1}^{s}b_{i}(z)\frac{{c_{i}^{m}}}{m!}\Big)\hat g^{(m)}(t_{n})\\ &=&\sum\limits_{m=0}^{\infty} h^{m+1}\Big(\varphi_{m+1}(z)-\sum\limits_{i=1}^{s}b_{i}(z)\frac{{c_{i}^{m}}}{m!}\Big)\hat g^{(m)}(t_{n}). \end{array} $$
(3.3)

It can be observed that for any positive integers p1 and p2, the errors satisfy \({\Delta }_{ni}=\mathcal {O}(h^{p_{1}+1})\) and \(\delta _{n+1}=\mathcal {O}(h^{p_{2}+1})\) if

$$ \begin{array}{@{}rcl@{}} &&c_{i}^{m+1}\varphi_{m+1}(c_{i}z)-\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}=\textbf{0}, m=0,\cdots,p_{1}-1\\ &&\varphi_{m+1}(z)-\sum\limits_{i=1}^{s}b_{i}(z)\frac{{c_{i}^{m}}}{m!}=\textbf{0}, m=0,\cdots,p_{2}-1. \end{array} $$
(3.4)

We will discuss the relation between (3.4) and the order of the EPTSERK method (2.2) in the next theorem.

Theorem 1

Suppose that the function g(y) is Lipschitz continuous in y and the coefficients of the EPTSERK method (2.2) satisfy (3.4). Then the EPTSERK method (2.2) is of step order \(p=\min \limits \{p_{1}+1,p_{2}\}\) and stage order \(q=\min \limits \{p_{1},p_{2}\}\).

Proof

Suppose the numerical solution of the EPTSERK method (2.2) satisfies yn = y(tn) and \(Y_{i}^{[n-1]}=y(t_{n-1}+c_{i}h)+\mathcal {O}(h^{p_{1}+1})\). We estimate the stage error as follows:

$$ \begin{array}{@{}rcl@{}} y(t_{n}+c_{i}h)-Y_{i}^{[n]}&=&y(t_{n}+c_{i}h)-e^{-c_{i}hA}y(t_{n})-h\sum\limits_{j=1}^{s}a_{ij}(z)g(y(t_{n-1}+c_{j}h))\\ &&+ h\sum\limits_{j=1}^{s}a_{ij}(z)\big(g(y(t_{n-1}+c_{j}h))-g(Y_{j}^{[n-1]})\big)\\ &=&{\Delta}_{ni}+\mathcal{O}(h)\sum\limits_{j=1}^{s}\big(y(t_{n-1}+c_{j}h)-Y_{j}^{[n-1]}\big)\\ &=&\mathcal{O}(h^{p_{1}+1})+\mathcal{O}(h^{p_{1}+2})=\mathcal{O}(h^{p_{1}+1}). \end{array} $$
(3.5)

In a similar way, we have

$$ \begin{array}{@{}rcl@{}} y(t_{n}+h)-y_{n+1}&=&y(t_{n}+h)-e^{-hA}y(t_{n})-h\sum\limits_{i=1}^{s}b_{i}(z)g(y(t_{n}+c_{i}h))\\ &&+ h\sum\limits_{i=1}^{s}b_{i}(z)\big(g(y(t_{n}+c_{i}h))-g(Y_{i}^{[n]})\big)\\ &=&\delta_{n+1}+\mathcal{O}(h)\sum\limits_{i=1}^{s}\big(y(t_{n}+c_{i}h)-Y_{i}^{[n]}\big)\\ &=&\mathcal{O}(h^{p_{2}+1})+\mathcal{O}(h^{p_{1}+2}). \end{array} $$
(3.6)

In light of (3.5), (3.6), and Definition 2, we prove the theorem. □

4 Global error analysis

In this section, we will conduct the global error analysis of the new methods which satisfy the conditions (3.4). In what follows, we will use the Euclidean norm and its induced matrix norm which will be denoted by ∥⋅∥. For the error analysis, the Gronwall inequality (see, e.g., [13]) is useful.

Lemma 1

Let α,ϕ,φ, and χ be nonnegative functions defined for t = nΔt,n = 0,1,…,N and assume that χ is nondecreasing. If

$$ \phi_{k}+\varphi_{k}\leq\chi_{k}+{\Delta}_{t}\sum\limits_{n=1}^{k-1}\alpha_{n}\phi_{n}, k=1,2,{\ldots},N, $$

and if there is a positive constant \(\hat c\) such that \({\Delta }_{t}\sum \limits _{n=1}^{k-1}\alpha _{n}\leq \hat c,\) then

$$ \phi_{k}+\varphi_{k}\leq\chi_{k}e^{\hat c k{\Delta}_{t}}, k=1,2,{\ldots},N, $$

where the subscript indices k and n denote the values of function at tk = kΔt and tn = nΔt, respectively.
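As a quick numerical illustration of Lemma 1, one can generate a sequence that satisfies the hypothesis with equality (taking φ_k = 0) and check it against the stated bound. The data α ≡ 1.5, χ ≡ 2 and all names below are our own choices:

```python
import math

dt, N = 0.01, 200
alpha = [1.5] * N          # nonnegative
chi = [2.0] * N            # nondecreasing
ph = [0.0] * N             # ph[k] plays the role of phi_k (we take varphi_k = 0)
for k in range(1, N):
    ph[k] = chi[k] + dt * sum(alpha[n] * ph[n] for n in range(1, k))

c_hat = dt * sum(alpha[1:N])   # dt * sum_{n=1}^{k-1} alpha_n <= c_hat for all k
for k in range(1, N):
    assert ph[k] <= chi[k] * math.exp(c_hat * k * dt) + 1e-9
```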

To begin the global error analysis, we introduce the notations

$$ e_{n}=y(t_{n})-y_{n},\quad E_{ni}=y(t_{n}+c_{i}h)-Y^{[n]}_{i}, $$

which satisfy the following error recursions

$$ E_{ni}=e^{c_{i}z}e_{n}+h\sum\limits_{j=1}^{s}a_{ij}(z)\Big(g\big(y(t_{n}+(c_{j}-1)h)\big)-g\big(Y^{[n-1]}_{j}\big)\Big)+{\Delta}_{ni},\ i=1,{\ldots},s, $$
(4.1)

and

$$ e_{n+1}=e^{z}e_{n}+h\sum\limits_{i=1}^{s}b_{i}(z)\Big(g\big(y(t_{n}+c_{i}h)\big)-g\big(Y^{[n]}_{i}\big)\Big)+\delta_{n+1}. $$
(4.2)

Theorem 2

Let \(\|E_{n}\|=\max \limits _{1\leq i\leq s}\|E_{ni}\|\) and \(\|{\Delta }_{n}\|=\max \limits _{1\leq i\leq s}\|{\Delta }_{ni}\|\), suppose that the function g(y) is Lipschitz continuous in y, and let A in the initial value problem (1.1) be symmetric positive definite or skew-Hermitian. If, moreover, \(\|g^{\prime }(y)\|\) is uniformly bounded, then for the EPTSERK method (2.2) we have

$$ \|E_{n}\|\leq\sum\limits_{k=1}^{n}\Big((C_{1}h)^{n-k}\big(\|e_{k}\|+\|{\Delta}_{k}\|\big)\Big)+(hC_{1})^{n}\|E_{0}\|, $$
(4.3)

where C1 depends on \(g^{\prime }(y)\) but is independent of ∥A∥.

Proof

Applying the mean value theorem yields

$$ \|g\big(y(t_{n}+(c_{j}-1)h)\big)-g\big(Y^{[n-1]}_{j}\big)\|\leq C_{f}\|E_{n-1,j}\|, $$

where Cf depends on \(g^{\prime }(y)\) but is independent of ∥A∥. It follows from (4.1) that

$$ \|E_{ni}\|\leq\|e_{n}\|+h\sum\limits_{j=1}^{s}\|a_{ij}(z)\|C_{f}\|E_{n-1,j}\|+\|{\Delta}_{ni}\|, i=1,{\ldots},s. $$

Therefore, we obtain

$$ \begin{array}{@{}rcl@{}} \|E_{n}\|&\leq&\|e_{n}\|+hC_{1}\|E_{n-1}\|+\|{\Delta}_{n}\|\\ &\leq &\|e_{n}\|+hC_{1}(\|e_{n-1}\|+hC_{1}\|E_{n-2}\|+\|{\Delta}_{n-1}\|)+\|{\Delta}_{n}\|\\ &\leq& (\|e_{n}\|+hC_{1}\|e_{n-1}\|+\cdots+(hC_{1})^{n-1}\|e_{1}\|)+(hC_{1})^{n}\|E_{0}\|\\ &&+(\|{\Delta}_{n}\|+hC_{1}\|{\Delta}_{n-1}\|+\cdots+(hC_{1})^{n-1}\|{\Delta}_{1}\|)\\ &=&\sum\limits_{k=1}^{n}\Big((C_{1}h)^{n-k}\big(\|e_{k}\|+\|{\Delta}_{k}\|\big)\Big)+(hC_{1})^{n}\|E_{0}\|. \end{array} $$

Theorem 3

Under the assumptions of Theorem 2, consider the s-stage EPTSERK method (2.2) with the coefficients given by (3.4) and a sufficiently small stepsize. If

$$ \|y(t_{1})-y_{1}\|\leq c_{0} h^{p}, \quad \|y(t_{0}+c_{i}h)-Y_{i}^{[0]}\|\leq c_{0} h^{p}, $$

with \(p=\min \limits \{p_{1}, p_{2}\}\), then we have

$$ \|y(t_{n})-y_{n}\|\leq C h^{p},\ 2\leq n\leq \frac{t_{end}-t_{0}}{h}, $$

where the constant C depends on \(s, t_{end}, \|g^{\prime }(y)\|\) but is independent of ∥A∥ and n.

Proof

It follows from (4.2) that

$$ \begin{array}{@{}rcl@{}} \|e_{n}\|&\leq&\|e_{n-1}\|+h\sum\limits_{i=1}^{s}\|b_{i}(z)\|C_{f}\|E_{n-1,i}\|+\|\delta_{n}\|\\ &\leq&\|e_{n-1}\|+hC_{2}\|E_{n-1}\|+\|\delta_{n}\|\\ &\leq&\|e_{n-2}\|+C_{2}h\|E_{n-2}\|+\|\delta_{n-1}\|+hC_{2}\|E_{n-1}\|+\|\delta_{n}\|\\ &\leq&\|e_{1}\|+C_{2}h\Big(\sum\limits_{j=1}^{n-1}\|E_{j}\|\Big)+\sum\limits_{j=1}^{n-1}\|\delta_{j+1}\|, \end{array} $$
(4.4)

in which C2 depends on \(\|g^{\prime }(y)\|\) but is independent of ∥A∥. For sufficiently small h > 0, we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{n-1}\|E_{j}\|&\leq&\sum\limits_{j=1}^{n-1}\sum\limits_{k=1}^{j}\Big((C_{1}h)^{j-k}\big(\|e_{k}\|+\|{\Delta}_{k}\|\big)\Big)+\sum\limits_{j=1}^{n-1}(hC_{1})^{j}\|E_{0}\|\\ &\leq &\sum\limits_{j=1}^{n-1}\sum\limits_{k=0}^{n-1-j}(hC_{1})^{k}\big(\|e_{j}\|+\|{\Delta}_{j}\|\big)+\sum\limits_{j=1}^{n-1}(hC_{1})^{j}\|E_{0}\|\\ &\leq &\sum\limits_{j=1}^{n-1}\bar C\big(\|e_{j}\|+\|{\Delta}_{j}\|\big)+\bar{\bar C}\|E_{0}\|. \end{array} $$
(4.5)

Substituting (4.5) into (4.4) yields

$$ \|e_{n}\|\leq\sum\limits_{j=1}^{n-1}C_{3}h\|e_{j}\|+C_{4}h^{p}. $$
(4.6)

According to the computations in (4.4), (4.5), and (4.6), it is clear that C3 and C4 depend on s, \(\|g^{\prime }(y)\|\), and \(t_{end}\) but are independent of n and ∥A∥. In Lemma 1, we set

$$ \phi_{n}=\|e_{n}\|,\ \varphi_{n}=0,\ \chi_{n}=C_{4}h^{p},\ \alpha_{n}=C_{3}. $$

We then have

$$ h\sum\limits_{j=1}^{n-1}\alpha_{j}=h\sum\limits_{j=1}^{n-1}C_{3}\leq\hat C. $$

Applying the Gronwall lemma to (4.6) gives

$$ \|e_{n}\|\leq C_{4}h^{p}e^{\hat Cnh}\leq Ch^{p}, $$

where C depends on \(s, t_{end}, \|g^{\prime }(y)\|\) but is independent of n and ∥A∥. □

5 Construction of the methods

Next, we construct some EPTSERK methods (2.2) based on the condition (3.4). Suppose that the nodes ci, i = 1,…,s, are distinct and take p1 = p2 = s in (3.4); then we have

$$ \begin{array}{@{}rcl@{}} &&\displaystyle c_{i}^{m+1}\varphi_{m+1}(c_{i}z)-\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}=\textbf{0}, m=0,\cdots,s-1\\ &&\displaystyle\varphi_{m+1}(z)-\sum\limits_{i=1}^{s}b_{i}(z)\frac{{c_{i}^{m}}}{m!}=\textbf{0}, m=0,{\ldots},s-1. \end{array} $$
(5.1)

The coefficients aij(z) and bi(z) can be derived from (5.1) with the errors

$$ {\Delta}_{ni}=\mathcal{O}(h^{s+1}),\quad \delta_{n+1}=\mathcal{O}(h^{s+1}). $$

Therefore, the method is of step order s and stage order s.

To express the coefficient vector b(z) and the coefficient matrix A(z) compactly in terms of the collocation vector c, we introduce the notation

$$ \begin{array}{@{}rcl@{}} &&P_{1}:=\big(c\varphi_{1}(cz),c^{2}\varphi_{2}(cz),\cdots,c^{s}\varphi_{s}(cz)\big),\quad Q_{1}:=\big(e,c-e,\cdots,\frac{(c-e)^{s-1}}{(s-1)!}\big),\\ &&P_{2}:=\big(\varphi_{1}(z),\varphi_{2}(z),\cdots,\varphi_{s}(z)\big),\quad Q_{2}:=\big(e,c,\cdots,\frac{c^{s-1}}{(s-1)!}\big), \end{array} $$

with e = (1,⋯ ,1)T. Then, the order conditions (5.1) have the following equivalent form

$$ A(z)(Q_{1}\otimes I)=P_{1},\quad b^{T}(z)(Q_{2}\otimes I)=P_{2}, $$
(5.2)

where I is the identity matrix of the same order as z. Since the nodes ci, i = 1,…,s, are distinct, the matrices Q1 and Q2 are nonsingular. It follows from (5.2) that

$$ A(z)=P_{1}(Q_{1}\otimes I)^{-1},\quad b^{T}(z)=P_{2}(Q_{2}\otimes I)^{-1}. $$
(5.3)
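For a scalar z and s = 2, the second relation in (5.2) reduces to the 2 × 2 linear system b1 + b2 = φ1(z), b1c1 + b2c2 = φ2(z), which can be solved directly. A minimal sketch (with φ evaluated by a truncated Taylor series; all names are our own):

```python
from math import factorial

def phi(j, z, terms=40):
    """phi_j(z) by truncated Taylor series, scalar z."""
    return sum(z**k / factorial(k + j) for k in range(terms))

def weights(c1, c2, z):
    """Solve b1 + b2 = phi_1(z), b1*c1 + b2*c2 = phi_2(z) for scalar z."""
    b2 = (phi(2, z) - c1 * phi(1, z)) / (c2 - c1)
    b1 = phi(1, z) - b2
    return b1, b2

b1, b2 = weights(0.5, 1.0, -0.3)
assert abs(b1 + b2 - phi(1, -0.3)) < 1e-12
assert abs(0.5 * b1 + 1.0 * b2 - phi(2, -0.3)) < 1e-12
```

In the limit z → 0 this reproduces the weights of a classical quadrature rule at the same nodes.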

Although (5.3) provides a way to determine the coefficients of method (2.2), solving (5.3) is very complicated for large s and large system dimension. The following theorem provides an alternative way to obtain coefficients of the EPTSERK method (2.2) that satisfy the order conditions (5.1).

Theorem 4

The following coefficients

$$ \begin{array}{@{}rcl@{}} &&\displaystyle a_{ij}(z)={\int}_{1}^{1+c_{i}}e^{(1+c_{i}-\tau)z}l_{j}(\tau)d\tau,\\ &&\displaystyle b_{i}(z)={{\int}_{0}^{1}}e^{(1-\tau)z}l_{i}(\tau)d\tau, \end{array} $$
(5.4)

with the Lagrange interpolation function \(l_{i}(\tau )=\prod \limits _{j=1,j\neq i}^{s}\frac {\tau -c_{j}}{c_{i}-c_{j}}\), satisfy order conditions (5.1).

Proof

Since the coefficients are determined uniquely by (5.1) for distinct nodes ci, we only need to verify that the coefficients given in (5.4) satisfy (5.1). For m = 0,…,s − 1, we obtain

$$ \begin{array}{@{}rcl@{}} &&\displaystyle\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}=\sum\limits_{j=1}^{s}\left( {\int}_{1}^{1+c_{i}}e^{(1+c_{i}-\tau)z}l_{j}(\tau)d\tau\right)\frac{(c_{j}-1)^{m}}{m!}\\ &&\displaystyle={\int}_{1}^{1+c_{i}}e^{(1+c_{i}-\tau)z}\frac{1}{m!}\sum\limits_{j=1}^{s}l_{j}(\tau)(c_{j}-1)^{m}d\tau\\ &&\displaystyle={\int}_{1}^{1+c_{i}}e^{(1+c_{i}-\tau)z}\frac{1}{m!}(\tau-1)^{m}d\tau. \end{array} $$

Let τ = 1 + uci, and we have

$$ \begin{array}{@{}rcl@{}} &&\displaystyle\sum\limits_{j=1}^{s}a_{ij}(z)\frac{(c_{j}-1)^{m}}{m!}={\int}_{1}^{1+c_{i}}e^{(1+c_{i}-\tau)z}\frac{1}{m!}(\tau-1)^{m}d\tau\\ &&\displaystyle=c_{i}^{m+1}{{\int}_{0}^{1}}e^{(1-u)c_{i}z}\frac{u^{m}}{m!}du=c_{i}^{m+1}\varphi_{m+1}(c_{i}z). \end{array} $$

This implies that (5.1) holds for aij(z). Similarly, one can prove that bi(z) satisfies (5.1). □

It should be noted that the conditions (5.1) define a method of order at least s for distinct ci. Order beyond s can be obtained under additional orthogonality assumptions, which leads to the following theorem.
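For a scalar z, the coefficients (5.4) can be evaluated by elementary numerical quadrature and checked against the order conditions (5.1). A sketch using a crude composite midpoint rule, adequate for these smooth integrands (all helper names are our own):

```python
import math

def lagrange(i, c, tau):
    """Lagrange basis polynomial l_i(tau) for the nodes c."""
    return math.prod((tau - cj) / (c[i] - cj) for j, cj in enumerate(c) if j != i)

def quad(f, lo, hi, n=4000):
    """Composite midpoint rule."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

def a_coef(i, j, z, c):   # a_ij(z) from (5.4), scalar z
    return quad(lambda t: math.exp((1 + c[i] - t) * z) * lagrange(j, c, t),
                1.0, 1.0 + c[i])

def b_coef(i, z, c):      # b_i(z) from (5.4), scalar z
    return quad(lambda t: math.exp((1 - t) * z) * lagrange(i, c, t), 0.0, 1.0)

def phi(j, z, terms=40):  # phi_j(z) by truncated Taylor series
    return sum(z**k / math.factorial(k + j) for k in range(terms))

c, z, s = [0.5, 1.0], -0.3, 2
for m in range(s):        # the order conditions (5.1)
    lhs = sum(b_coef(i, z, c) * c[i]**m / math.factorial(m) for i in range(s))
    assert abs(lhs - phi(m + 1, z)) < 1e-6
    for i in range(s):
        lhs = sum(a_coef(i, j, z, c) * (c[j] - 1)**m / math.factorial(m)
                  for j in range(s))
        assert abs(lhs - c[i]**(m + 1) * phi(m + 1, c[i] * z)) < 1e-6
```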

Theorem 5

For any distinct collocation points ci, the s-stage EPTSERK method (2.2) is of order p = s and stage order q = s. It has order p = s + 1 provided the conditions

$$ {{\int}_{0}^{1}}\xi^{j-1}\prod\limits_{i=1}^{s}(\xi-c_{i})d\xi=0,\quad j=1,{\ldots},l, $$

are satisfied.

Proof

The first conclusion follows from Theorem 1. We next show the second one. Using (3.1), we have

$$ \begin{array}{@{}rcl@{}} \delta_{n+1}&=&h{{\int}_{0}^{1}}e^{(1-\tau)z}\hat g(t_{n}+h\tau)d\tau-h\sum\limits_{i=1}^{s}b_{i}(z)\hat g(t_{n}+c_{i}h)\\ &=&h{{\int}_{0}^{1}}e^{(1-\tau)z}\hat g(t_{n}+h\tau)d\tau-h\sum\limits_{i=1}^{s}{{\int}_{0}^{1}}e^{(1-\tau)z}l_{i}(\tau)d\tau\hat g(t_{n}+c_{i}h)\\ &=&h{{\int}_{0}^{1}}e^{(1-\tau)z}\left( \hat g(t_{n}+h\tau)-\sum\limits_{i=1}^{s}l_{i}(\tau)\hat g(t_{n}+c_{i}h)\right)d\tau. \end{array} $$
(5.5)

Inserting the Taylor series of \(\hat g(t_{n}+h\tau )\) and \(\hat g(t_{n}+c_{i}h)\) at tn into (5.5) yields

$$ \begin{array}{@{}rcl@{}} \delta_{n+1}&=&\sum\limits_{m=0}^{\infty}\frac{h^{m+1}}{m!}\left( {{\int}_{0}^{1}}e^{(1-\tau)z}\Big(\tau^{m}-\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}\Big)d\tau\right)\hat g^{(m)}(t_{n})\\ &=&\sum\limits_{m=s}^{\infty}\frac{h^{m+1}}{m!}\left( {{\int}_{0}^{1}}e^{(1-\tau)z}\Big(\tau^{m}-\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}\Big)d\tau\right)\hat g^{(m)}(t_{n}). \end{array} $$
(5.6)

For s ≤ m ≤ s + l − 1, we can write

$$ \tau^{m}=r_{m-s}(\tau)\prod\limits_{i=1}^{s}(\tau-c_{i})+\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}, $$
(5.7)

where r_{m−s} is a polynomial of degree m − s. Substituting (5.7) into (5.6) yields

$$ \begin{array}{@{}rcl@{}} &&{{\int}_{0}^{1}}e^{(1-\tau)z}\Big(\tau^{m}-\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}\Big)d\tau={{\int}_{0}^{1}}e^{(1-\tau)z}r_{m-s}(\tau)\prod\limits_{i=1}^{s}(\tau-c_{i})d\tau\\ &=&{{\int}_{0}^{1}}\sum\limits_{k=0}^{\infty}\frac{(1-\tau)^{k}z^{k}}{k!}r_{m-s}(\tau)\prod\limits_{i=1}^{s}(\tau-c_{i})d\tau\\ &=&\sum\limits_{k=0}^{n}\frac{z^{k}}{k!}{{\int}_{0}^{1}}(1-\tau)^{k}r_{m-s}(\tau)\prod\limits_{i=1}^{s}(\tau-c_{i})d\tau+\mathcal{O}(h^{n+1}), \end{array} $$
(5.8)

where n satisfies l − 1 ≤ n + m − s ≤ l. Hence, we obtain

$$ {\displaystyle{\int}_{0}^{1}}e^{(1-\tau)z}\Big(\tau^{m}-\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}\Big)d\tau=\mathcal{O}(h^{l-m+s}), $$
(5.9)

where s ≤ m ≤ s + l − 1. Combining (5.6) and (5.9), we have

$$ \begin{array}{@{}rcl@{}} &&\displaystyle\delta_{n+1}=\sum\limits_{m=s}^{s+l-1}\frac{h^{m+1}}{m!}{{\int}_{0}^{1}}e^{(1-\tau)z}\Big(\tau^{m}-\sum\limits_{i=1}^{s}l_{i}(\tau){c_{i}^{m}}\Big)d\tau\hat g^{(m)}(t_{n})+\mathcal{O}(h^{s+l+1})\\ &&=\mathcal{O}(h^{s+l+1})+\mathcal{O}(h^{s+l+1})=\mathcal{O}(h^{s+l+1}). \end{array} $$
(5.10)

It can then be concluded from Theorem 1 that the step order is \(p=\min \limits \{s+1,s+l\}=s+1\). □

On the basis of Theorem 5, we can construct EPTSERK methods of arbitrarily high algebraic order. In this paper, we construct EPTSERK methods and compare them with other exponential Runge–Kutta methods for first-order differential equations. We therefore consider EPTSERK methods (2.2) with the coefficients aij(z), bi(z) given by Theorem 4. As an example, we take a family of 2-stage EPTSERK methods with the collocation vectors c :

$$ \displaystyle\left( \frac{1}{2},1\right),\quad \left( \frac{3-\sqrt{3}}{6},\frac{3+\sqrt{3}}{6}\right), $$
(5.11)

of orders 2 and 3, respectively. The resulting methods are denoted EPTSERK2 and EPTSERK3. For higher accuracy requirements, we present a 3-stage EPTSERK method of order four with the collocation vector c :

$$ \displaystyle\left( \frac{5-\sqrt{15}}{10},\frac 1 2, \frac{5+\sqrt{15}}{10}\right), $$
(5.12)

and we denote this method as EPTSERK4.
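The orthogonality condition of Theorem 5 can be checked for these collocation vectors by exact integration of the node polynomial. A small sketch (helper names are our own):

```python
import math

def node_poly(c):
    """Coefficients, in ascending powers, of prod_i (x - c_i)."""
    p = [1.0]
    for r in c:
        q = [0.0] * (len(p) + 1)
        for k, a in enumerate(p):
            q[k + 1] += a          # contribution of x * a x^k
            q[k] -= r * a          # contribution of -r * a x^k
        p = q
    return p

def moment(c, j):
    """Exact value of int_0^1 x^{j-1} prod_i (x - c_i) dx."""
    return sum(a / (k + j) for k, a in enumerate(node_poly(c)))

gauss2 = [(3 - math.sqrt(3)) / 6, (3 + math.sqrt(3)) / 6]
gauss3 = [(5 - math.sqrt(15)) / 10, 0.5, (5 + math.sqrt(15)) / 10]
assert abs(moment(gauss2, 1)) < 1e-14             # second vector in (5.11): order 3
assert abs(moment(gauss3, 1)) < 1e-14             # vector (5.12): order 4
assert abs(moment([0.5, 1.0], 1) - 1/12) < 1e-14  # first vector in (5.11): order 2 only
```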

6 Numerical experiments

In this section, we will show the numerical performance of the new methods compared with some existing codes proposed in the scientific literature. The criterion used in the numerical comparisons is the decimal logarithm of the maximum global error (LOG10 (ERROR)) versus the computational effort measured by the number of function evaluations (NUMBER OF FUNCTION EVALUATIONS) required by each method. The methods for comparison are denoted as:

  • EERK2: the explicit exponential Runge–Kutta method of order two given in [25].

  • EERK3: the explicit exponential Runge–Kutta method of order three given in [25].

  • EPTSRK2: the explicit pseudo two-step Runge–Kutta methods of order two derived in [3].

  • EPTSRK3: the explicit pseudo two-step Runge–Kutta methods of order three derived in [3].

  • EPTSERK2: the explicit pseudo two-step exponential Runge–Kutta methods of order two presented in this paper.

  • EPTSERK3: the explicit pseudo two-step exponential Runge–Kutta methods of order three presented in this paper.

  • EPTSERK4: the explicit pseudo two-step exponential Runge–Kutta methods of order four presented in this paper.

Problem 1

We study the Hénon-Heiles problem

$$ \left( \begin{array}[c]{c} y_{1}\\ y_{2}\\ y_{3}\\ y_{4} \end{array}\right)'+\left( \begin{array}[c]{cccc} 0 &0& -1& 0\\ 0& 0&0&-1\\ 1&0&0&0\\ 0&1&0&0 \end{array}\right)\left( \begin{array}[c]{c} y_{1}\\ y_{2}\\ y_{3}\\ y_{4} \end{array}\right)=\left( \begin{array}[c]{c} 0\\ 0\\ -2y_{1}y_{2}\\ -{y_{1}^{2}}+{y_{2}^{2}} \end{array}\right) $$
(6.1)

which can be used to describe stellar motion [2, 12]. We select the initial values

$$ \left( y_{1}(0), y_{2}(0), y_{3}(0), y_{4}(0)\right)^{T}=\left( \sqrt{\frac{11}{96}},0,0,\frac 1 4\right)^{T}. $$

The problem is solved on the interval [0,50] and the numerical results are given in Fig. 1 with the stepsize \(h=1/2^{i}\) for EERK3, \(h=1/(3\cdot 2^{i-1})\) for EERK2, and \(h=1/(3\cdot 2^{i})\) for EPTSRK2, EPTSRK3, EPTSERK2, EPTSERK3, and EPTSERK4, where i = 1,…,5.

Fig. 1

The efficiency curves of problem 1

Problem 2

We consider the Fermi-Pasta-Ulam problem in [12, 22]. The Hamiltonian function is given by

$$ \begin{array}{l} \displaystyle H(x,y)=\displaystyle\frac{1}{2}\sum\limits_{i=1}^{2m}{y_{i}^{2}}+\displaystyle\frac{\omega^{2}}{2}\sum\limits_{i=1}^{m}x_{m+i}^{2} +\displaystyle\frac{1}{4}\Big[(x_{1}-x_{m+1})^{4}\cr\noalign{\vskip1truemm} \hskip1.5cm+\displaystyle\sum\limits_{i=1}^{m-1}(x_{i+1}-x_{m+i+1}-x_{i}-x_{m+i})^{4}+(x_{m}+x_{2m})^{4}\Big], \end{array} $$

which leads to a nonlinear oscillatory system

$$ \left( \begin{array}[c]{c} x\\ y \end{array}\right)'+\left( \begin{array}[c]{cc} \textbf{0}_{2m\times2m} &-I_{2m\times 2m}\\ M& \textbf{0}_{2m\times2m} \end{array}\right)\left( \begin{array}[c]{c} x\\ y \end{array}\right)=\left( \begin{array}[c]{c} 0\\ -\nabla U(x) \end{array}\right) $$
(6.2)

where

$$ M=\left( \begin{array}[c]{cc} {\bf0}_{m\times m} &{\bf0}_{m\times m}\\ {\bf0}_{m\times m}&\omega^{2}I_{m\times m} \end{array}\right), $$

and

$$ U(x)=\frac{1}{4}\left[(x_{1}-x_{m+1})^{4}+\sum\limits_{i=1}^{m-1}(x_{i+1}-x_{m+i+1}-x_{i}-x_{m+i})^{4}+(x_{m}+x_{2m})^{4}\right]. $$

In this problem, we choose

$$ m=3, \omega=50, x_{1}(0)=1, y_{1}(0)=1, x_{4}(0)=\frac 1 \omega, y_{4}(0)=1 $$

and set the remaining initial values to zero. The problem is solved on the interval [0,10] and the numerical results are plotted in Fig. 2 with the stepsize \(h=1/2^{i+5}\) for EERK3, \(h=1/(3\cdot 2^{i+4})\) for EERK2, and \(h=1/(3\cdot 2^{i+5})\) for EPTSRK2, EPTSRK3, EPTSERK2, EPTSERK3, and EPTSERK4, where i = 1,…,5.

Fig. 2

The efficiency curves of problem 2

Problem 3

We consider the Allen-Cahn equation [1, 10]

$$ \left\{ \begin{array}{l} \frac{\partial u}{\partial t}-\epsilon\frac{\partial^{2}u}{\partial x^{2}}=u-u^{3}, \quad -1<x<1,\ t>0,\cr\noalign{\vskip1truemm} u(x,0)=0.53x+0.47\sin(-1.5\pi x),\quad -1\leq x \leq 1, \end{array}\right. $$

with the periodic boundary condition u(− 1,t) = u(1,t). The Allen-Cahn equation was presented by Allen and Cahn in [1] to describe the motion of anti-phase boundaries in crystalline solids. We approximate the spatial derivatives with the classical second-order central difference operator and obtain

$$ \frac{dU}{dt}+MU=F(t,U), \quad t\in[0,t_{\text{end}}], $$

where U(t) = (u1(t),⋯ ,uN(t))T with ui(t) ≈ u(xi,t), xi = − 1 + iΔx, i = 1,…,N and Δx = 2/N. The matrix M has the form

$$ M=\frac{\epsilon}{\Delta x^{2}} \left( \begin{array} [c]{ccccc} 2 & -1 & & &-1 \\ -1 & 2& -1 & & \\ & {\ddots} & {\ddots} & {\ddots} & \\ & & -1 & 2 & -1\\ -1& & & -1&2 \end{array} \right) $$

and \(F(t,U)=(u_{1}-{u_{1}^{3}},\cdots ,u_{N}-{u_{N}^{3}})^{T}.\) We select the parameters 𝜖 = 0.01, N = 64 and solve this problem on the interval [0,20]. The numerical results are given in Fig. 3 with the stepsize \(h=1/2^{i+4}\) for EERK3, \(h=1/(3\cdot 2^{i+3})\) for EERK2, and \(h=1/(3\cdot 2^{i+4})\) for EPTSRK2, EPTSRK3, EPTSERK2, EPTSERK3, and EPTSERK4, where i = 1,…,4.
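The spatial discretization of Problem 3 can be sketched in plain Python (no external libraries). Here M is assembled positive semidefinite (diagonal 2, off-diagonal −1, with periodic wrap-around), so that A = M is consistent with the positive-definiteness assumption on A in (1.1); the helper names are our own:

```python
import math

def build_M(N, eps):
    """Periodic second-difference matrix M of size N x N, dx = 2/N."""
    dx = 2.0 / N
    M = [[0.0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = 2.0 * eps / dx**2
        M[i][(i - 1) % N] = -eps / dx**2    # periodic wrap-around
        M[i][(i + 1) % N] = -eps / dx**2
    return M

def F(U):
    """Nonlinearity F(t, U) = (u_i - u_i^3)."""
    return [u - u**3 for u in U]

def initial(N):
    """u(x, 0) = 0.53 x + 0.47 sin(-1.5 pi x) at x_i = -1 + i dx, i = 1, ..., N."""
    dx = 2.0 / N
    return [0.53 * x + 0.47 * math.sin(-1.5 * math.pi * x)
            for x in (-1.0 + i * dx for i in range(1, N + 1))]

M = build_M(64, 0.01)
assert all(abs(sum(row)) < 1e-9 for row in M)   # rows sum to zero: constants in kernel
assert len(initial(64)) == 64
```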

Fig. 3

The efficiency curves of problem 3

Problem 4

We study the sine-Gordon equation with periodic boundary conditions [11, 26]

$$ \left\{\begin{array}{l} \displaystyle\frac{\partial^{2}u}{\partial t^{2}}=\displaystyle\frac{\partial^{2}u}{\partial x^{2}}-\sin(u), \quad -1<x<1, t>0,\cr\noalign{\vskip1truemm} u(-1,t)=u(1,t).\cr\noalign{\vskip1truemm} \end{array}\right. $$

By using the semi-discretization on the spatial variable with second-order symmetric differences, we have the following first-order differential system

$$ \frac{d}{dt}\left( \begin{array}[c]{c} U^{\prime}\\ U \end{array}\right)+\left( \begin{array}[c]{cc} \textbf{0}&M\\ -I&\textbf{0} \end{array}\right)\left( \begin{array}[c]{c} U^{\prime}\\ U \end{array}\right)=\left( \begin{array}[c]{c} -\sin(U)\\ \textbf{0} \end{array}\right), \quad t\in[0,t_{\text{end}}], $$

where U(t) = (u1(t),⋯ ,uN(t))T with ui(t) ≈ u(xi,t) for i = 1,…,N,

$$ M=\displaystyle\frac{1}{\Delta x^{2}} \left( \begin{array} [c]{ccccc} 2 & -1 & & &-1 \\ -1 & 2& -1 & & \\ & {\ddots} & {\ddots} & {\ddots} & \\ & & -1 & 2 & -1\\ -1& & & -1&2 \end{array} \right) $$

with Δx = 2/N, xi = −1 + iΔx, and \(F(t,U)=-(\sin (u_{1}),\cdots ,\sin (u_{N}))^{T}.\) The initial conditions for this test are

$$ U(0)=(\pi)^{N}_{i=1}, \ U^{\prime}(0)=\sqrt{N}\left( 0.01+\sin(\displaystyle\frac{2\pi i}{N})\right)_{i=1}^{N} $$

with N = 64, and the problem is solved on the interval [0,10]. The numerical results are shown in Fig. 4 with the stepsize \(h=1/(40i)\) for EERK3, \(h=1/(60i)\) for EERK2, and \(h=1/(60i)\) for EPTSRK2, EPTSRK3, EPTSERK2, EPTSERK3, and EPTSERK4, where i = 1,…,4.

Fig. 4

The efficiency curves of problem 4

From Figs. 1, 2, 3, and 4, we can conclude that the explicit pseudo two-step exponential Runge–Kutta methods show clear advantages over both the explicit pseudo two-step Runge–Kutta methods and the explicit exponential Runge–Kutta methods in the literature.

In order to show the convergence order of the constructed methods, in Figs. 5, 6, 7, and 8 we plot the global errors against the stepsize for the methods EPTSERK2, EPTSERK3, and EPTSERK4 when solving Problems 1–4. The slopes of the dotted lines indicate the theoretical orders two (red), three (blue), and four (green), respectively. From Figs. 5, 6, 7, and 8, we find that the numerical solution of the EPTSERK2 method shows second-order convergence, as expected. Surprisingly, the numerical solutions of the EPTSERK3 and EPTSERK4 methods exceed their theoretical orders three and four, respectively. The analysis of the convergence of explicit pseudo two-step exponential Runge–Kutta methods is an interesting and challenging topic for future work.

Fig. 5

The global error against the stepsize for problem 1

Fig. 6

The global error against the stepsize for problem 2

Fig. 7

The global error against the stepsize for problem 3

Fig. 8

The global error against the stepsize for problem 4

7 Conclusion

In this paper, we investigate the construction of explicit pseudo two-step exponential Runge–Kutta methods for first-order differential equations. The global error analysis is carried out and error bounds are given. The order conditions are discussed, and practical numerical methods of orders two and three (with two stages) and order four (with three stages) are obtained. Numerical experiments are provided to verify the efficiency and convergence of the new methods. From the numerical results, we can see that, for first-order differential equations of the form (1.1), the new methods work better than the classical exponential Runge–Kutta methods as well as the explicit pseudo two-step Runge–Kutta methods in the literature. The construction and analysis of higher-order methods and variable-stepsize methods of this type are in progress.