
1 Introduction

We consider the Richardson extrapolation algorithms for solving the following fractional order differential equation

$$\begin{aligned}&^C_0 D_{t}^{\alpha } y (t) = \beta y(t) + f(t), \quad 0 \le t \le 1, \end{aligned}$$
(1)
$$\begin{aligned}&y(0) = y_{0}, \end{aligned}$$
(2)

where \(\beta <0\) and f is a given function on [0, 1].

Extrapolation can be used to accelerate the convergence of a given sequence [1, 2, 13]. Its applicability rests on the fact that the sequence of approximate solutions of the problem possesses an asymptotic expansion. Diethelm [3] introduced an algorithm for solving the above linear differential equation of fractional order with \(0 < \alpha <1\), and Diethelm and Walz [12] proved that the approximate solution of the numerical algorithm in [3] has an asymptotic expansion. See [5–11] for numerical methods for general nonlinear fractional differential equations.
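
As a minimal illustration of the principle (a Python sketch with names of our own choosing, not taken from [1, 2, 13]): if \(A(h) = A + C h^{p} + o(h^{p})\), then one extrapolation step combines \(A(h)\) and \(A(h/2)\) so that the leading \(h^{p}\) term cancels.

```python
# One Richardson extrapolation step (illustrative sketch only):
# if A(h) = A + C*h**p + o(h**p), the combination below removes the h**p term.

def richardson_step(a_h, a_h2, p):
    """Combine A(h) and A(h/2) to eliminate the leading O(h**p) error term."""
    return (2.0**p * a_h2 - a_h) / (2.0**p - 1.0)

# Toy check with A(h) = 1 + h**2 (exact value 1, p = 2):
print(richardson_step(1 + 0.1**2, 1 + 0.05**2, 2))  # prints 1.0
```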

Recently, Yan, Pal and Ford [14] extended the numerical method in [3], obtained a high order numerical method for solving (1) and (2), and proved that the approximate solution has an asymptotic expansion. In this paper, we give numerical results showing that the approximate solutions of the proposed numerical methods have the expected asymptotic expansions.

The paper is organized as follows: in Sect. 2, we introduce the numerical method for solving (1) and (2) and discuss how to approximate the starting values and the starting integrals that appear in the numerical method. In Sect. 3, we give numerical examples showing that the approximate solutions of the proposed numerical methods possess the expected asymptotic expansions.

2 Higher Order Numerical Method

In this section we consider a higher order numerical method for solving (1) and (2). It is well known that, for \(0 < \alpha <1\), (1) and (2) are equivalent to

$$\begin{aligned} _0^R D_{t}^{\alpha } [ y(t) - y_{0}] = \beta y(t) + f(t), \quad 0 \le t \le 1, \end{aligned}$$
(3)

where \(_0^R D_{t}^{\alpha } y(t)\) denotes the Riemann-Liouville fractional derivative defined by, with \(0 < \alpha < 1\),

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t)= \frac{1}{\varGamma (1-\alpha )} \frac{d}{dt} \int _{0}^{t} (t- u)^{-\alpha } y(u) \, d u. \end{aligned}$$
(4)

By using the Hadamard finite-part integral, \(_0^R D_{t}^{\alpha } y(t)\) can be written as

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t)= \frac{1}{\varGamma (-\alpha )} \oint _{0}^{t} (t- u)^{-1-\alpha } y(u) \, d u. \end{aligned}$$
(5)

Here the integral \( \oint \) denotes a Hadamard finite-part integral [3].
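
For example, for \(0< \alpha <1\), the finite part of the divergent integral \(\int _{0}^{1} w^{-1-\alpha } \, dw\) is obtained by integrating over \([\varepsilon , 1]\) and discarding the term that diverges as \(\varepsilon \rightarrow 0\):

$$ \oint _{0}^{1} w^{-1-\alpha } \, dw = \lim _{\varepsilon \rightarrow 0} \Big ( \int _{\varepsilon }^{1} w^{-1-\alpha } \, dw - \frac{\varepsilon ^{-\alpha }}{\alpha } \Big ) = - \frac{1}{\alpha }. $$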

Yan, Pal and Ford [14] extended the numerical method of Diethelm and Walz [12] and obtained a high order numerical method for solving (1) and (2) for \(0 < \alpha <1\). Let M be a fixed positive integer, let \(0 = t_{0} < t_{1} < t_{2} < \dots < t_{2j}< t_{2j+1} <\dots < t_{2M} =1\) be a uniform partition of [0, 1], and let \(h = \frac{1}{2M}\) be the step size. At the even nodes \(t_{2j} = \frac{2j}{2M}\), the solution of (1) and (2) satisfies

$$\begin{aligned} _0^R D_{t}^{\alpha } [ y(t_{2j}) - y_{0}] = \beta y(t_{2j}) + f(t_{2j}), \quad j=1,2,\dots , M, \end{aligned}$$

and at the odd nodes \(t_{2j+1} = \frac{2j+1}{2M}\), it satisfies

$$\begin{aligned} _0^R D_{t}^{\alpha } [ y(t_{2j+1}) - y_{0}] = \beta y(t_{2j+1}) + f(t_{2j+1}), \quad j=0, 1, 2,\dots , M-1. \end{aligned}$$
(6)

Note that

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t_{2j})= \frac{1}{\varGamma (-\alpha )} \oint _{0}^{t_{2j}} (t_{2j}- \tau )^{-1-\alpha } y(\tau ) \, d \tau = \frac{t_{2j}^{-\alpha }}{\varGamma (-\alpha )} \oint _{0}^{1} w^{-1-\alpha } y(t_{2j} - t_{2j} w) \, dw. \end{aligned}$$
(7)

For every j, we denote \(g(w) = y(t_{2j}- t_{2j} w)\) and approximate the integral \( \oint _{0}^{1} w^{-1-\alpha } g(w) \, d w\) by \( \oint _{0}^{1} w^{-1-\alpha } g_{2}(w) \, d w\), where \(g_{2}(w)\) is the piecewise quadratic interpolation polynomial of g on the nodes \(w_{l}= l/(2j), \; l=0, 1,2, \dots , 2j\). More precisely, we have, for \(k=1,2, \dots , j\),

$$\begin{aligned} g_{2} (w)&= \frac{(w- w_{2k-1} )(w- w_{2k} )}{( w_{2k-2} - w_{2k-1} )( w_{2k-2} - w_{2k})} g ( w_{2k-2} ) \\&\quad + \frac{(w- w_{2k-2} )(w- w_{2k} )}{( w_{2k-1} - w_{2k-2} )( w_{2k-1} - w_{2k} )} g ( w_{2k-1} ) \\&\quad + \frac{(w- w_{2k-2})(w- w_{2k-1} )}{( w_{2k} - w_{2k-2})( w_{2k}- w_{2k-1})} g ( w_{2k}), \quad \text{ for } \; w \in [ w_{2k-2}, w_{2k} ]. \end{aligned}$$
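
For instance, one panel of \(g_{2}\) can be evaluated as follows (a direct transcription of the three-term Lagrange form above into Python; the function name is ours):

```python
def quad_interp(w, w0, w1, w2, g0, g1, g2):
    """Quadratic Lagrange interpolation through (w0, g0), (w1, g1), (w2, g2),
    i.e. g_2(w) above on one panel [w_{2k-2}, w_{2k}]."""
    return (g0 * (w - w1) * (w - w2) / ((w0 - w1) * (w0 - w2))
            + g1 * (w - w0) * (w - w2) / ((w1 - w0) * (w1 - w2))
            + g2 * (w - w0) * (w - w1) / ((w2 - w0) * (w2 - w1)))
```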

Thus

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t_{2j})&= \frac{t_{2j}^{-\alpha }}{\varGamma (-\alpha )} \oint _{0}^{1} w^{-1-\alpha } y(t_{2j} - t_{2j} w) \, dw \nonumber \\&= \frac{t_{2j}^{-\alpha }}{\varGamma (-\alpha )} \Big ( \sum _{k=1}^{j} \oint _{w_{2k-2}}^{w_{2k}} w^{-1-\alpha } g_{2} (w) \, dw + R_{2j} (g) \Big ) \nonumber \\&= \frac{t_{2j}^{-\alpha }}{\varGamma (-\alpha )} \Big ( \sum _{k=0}^{2j} \alpha _{k, 2j} y(t_{2j-k}) + R_{2j} (g) \Big ) \nonumber \end{aligned}$$

where \(R_{2j}(g)\) is the remainder term and \(\alpha _{l, 2j}, \; l=0, 1,2, \dots , 2j\), are the weights given by

$$\begin{aligned} (-\alpha ) (- \alpha +1) (-\alpha +2) (2j)^{-\alpha } \, \alpha _{l, 2j} = {\left\{ \begin{array}{ll} 2^{-\alpha } ( \alpha +2), &{} \text{ for } \; l=0, \\ (-\alpha ) 2^{2-\alpha }, &{} \text{ for } \; l=1, \\ (-\alpha )(-2^{-\alpha } \alpha ) + \frac{1}{2} F_{0}(2), &{} \text{ for } \; l=2, \\ -F_{1}(k), &{} \text{ for } \; l=2k-1, \quad k=2,3, \dots , j, \\ \frac{1}{2}\big (F_{2}(k) +F_{0}(k+1)\big ), &{} \text{ for } \; l=2k, \quad k=2,3, \dots , j-1, \\ \frac{1}{2} F_{2}(j), &{} \text{ for } \; l=2j. \end{array}\right. } \end{aligned}$$

Here

$$\begin{aligned} F_{0}(k) =&(2k-1)(2k) \Big ( (2k)^{-\alpha } - (2k-2)^{-\alpha } \Big )(-\alpha +1)(-\alpha +2) \nonumber \\&-\Big ((2k-1) + 2k \Big ) \Big ((2k)^{-\alpha +1} -(2k-2)^{-\alpha +1} \Big ) (-\alpha )(-\alpha +2) \nonumber \\&+ \Big ( (2k)^{-\alpha +2} - (2k-2)^{-\alpha +2} \Big ) (-\alpha ) (-\alpha +1), \nonumber \end{aligned}$$
$$\begin{aligned} F_{1}(k) =&(2k-2)(2k) \Big ( (2k)^{-\alpha } - (2k-2)^{-\alpha } \Big )(-\alpha +1)(-\alpha +2) \nonumber \\&-\Big ((2k-2) + 2k \Big ) \Big ((2k)^{-\alpha +1} -(2k-2)^{-\alpha +1} \Big ) (-\alpha )(-\alpha +2) \nonumber \\&+ \Big ( (2k)^{-\alpha +2} - (2k-2)^{-\alpha +2} \Big ) (-\alpha ) (-\alpha +1), \nonumber \end{aligned}$$

and

$$\begin{aligned} F_{2}(k) =&(2k-2)(2k-1) \Big ( (2k)^{-\alpha } - (2k-2)^{-\alpha } \Big )(-\alpha +1)(-\alpha +2) \nonumber \\&-\Big ((2k-2) + (2k-1) \Big ) \Big ((2k)^{-\alpha +1} -(2k-2)^{-\alpha +1} \Big ) (-\alpha )(-\alpha +2) \nonumber \\&+ \Big ( (2k)^{-\alpha +2} - (2k-2)^{-\alpha +2} \Big ) (-\alpha ) (-\alpha +1). \nonumber \end{aligned}$$
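
The functions \(F_{0}, F_{1}, F_{2}\) differ only in the pair of node indices that multiply the first two brackets, so the weights can be transcribed compactly. The following Python sketch (our own naming, offered without verification beyond the consistency check noted below; a stands for \(\alpha \)) assumes \(j \ge 1\):

```python
def F(k, a, r, s):
    """Common pattern behind F_0, F_1, F_2; r and s are the two node indices."""
    lo, hi = 2*k - 2, 2*k
    out = (-(r + s) * (hi**(1 - a) - lo**(1 - a)) * (-a) * (2 - a)
           + (hi**(2 - a) - lo**(2 - a)) * (-a) * (1 - a))
    if r * s != 0:  # skip the first bracket when its prefactor vanishes (k = 1)
        out += r * s * (hi**(-a) - lo**(-a)) * (1 - a) * (2 - a)
    return out

def weights(j, a):
    """Return [alpha_{0,2j}, ..., alpha_{2j,2j}] for j >= 1 and 0 < a < 1."""
    F0 = lambda k: F(k, a, 2*k - 1, 2*k)
    F1 = lambda k: F(k, a, 2*k - 2, 2*k)
    F2 = lambda k: F(k, a, 2*k - 2, 2*k - 1)
    c = (-a) * (1 - a) * (2 - a) * (2*j)**(-a)  # factor on the left-hand side
    w = [0.0] * (2*j + 1)
    w[0] = 2**(-a) * (a + 2)
    w[1] = (-a) * 2**(2 - a)
    if j >= 2:
        w[2] = (-a) * (-2**(-a) * a) + 0.5 * F0(2)
    for k in range(2, j + 1):
        w[2*k - 1] = -F1(k)
    for k in range(2, j):
        w[2*k] = 0.5 * (F2(k) + F0(k + 1))
    w[2*j] = 0.5 * F2(j)
    return [wl / c for wl in w]
```

Since the interpolation reproduces constants, the weights should satisfy \(\sum _{l=0}^{2j} \alpha _{l,2j} = \oint _{0}^{1} w^{-1-\alpha } \, dw = -1/\alpha \); checking `sum(weights(j, a)) + 1/a` against zero is a convenient test of the transcription.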

Hence, by (3), the exact solution satisfies, for \(j=1,2, \dots , M\),

$$\begin{aligned} y(t_{2j}) = \frac{1}{\alpha _{0, 2j} - t_{2j}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{2j}^{\alpha } \varGamma (-\alpha ) f(t_{2j}) - \sum _{k=1}^{2j} \alpha _{k,2j} y(t_{2j-k}) + y_{0} \sum _{k=0}^{2j} \alpha _{k, 2j} -R_{2j}(g) \Big ]. \end{aligned}$$
(8)

At the nodes \(t_{2j+1} = \frac{2j+1}{2M}, \; j=0, 1, 2, \dots , M-1\), we have

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t_{2j+1})&= \frac{1}{\varGamma (-\alpha )} \oint _{0}^{t_{2j+1}} (t_{2j+1}- \tau )^{-1-\alpha } y(\tau ) \, d \tau \nonumber \\&= \frac{1}{\varGamma (-\alpha )} \oint _{0}^{t_{1}} (t_{2j+1}- \tau )^{-1-\alpha } y(\tau ) \, d \tau \nonumber \\&\quad + \frac{t_{2j+1}^{-\alpha }}{\varGamma (-\alpha )} \oint _{0}^{\frac{2j}{2j+1}} w^{-1-\alpha } y(t_{2j+1} - t_{2j+1} w) \, dw. \nonumber \end{aligned}$$

For every j, we denote \(g(w) = y(t_{2j+1}- t_{2j+1} w)\) and approximate the integral \( \oint _{0}^{\frac{2j}{2j+1}} w^{-1-\alpha } g(w) \, d w\) by \( \oint _{0}^{\frac{2j}{2j+1}} w^{-1-\alpha } g_{2}(w) \, d w\), where \(g_{2}(w)\) is the piecewise quadratic interpolation polynomial of g on the nodes \(w_{l}= \frac{l}{2j+1}, \; l=0, 1,2, \dots , 2j\). We then get

$$\begin{aligned} _0^R D_{t}^{\alpha } y(t_{2j+1})&= \frac{1}{\varGamma (-\alpha )} \int _{0}^{t_{1}} (t_{2j+1} - \tau )^{-1-\alpha } y(\tau ) \, d \tau \nonumber \\&\quad + \frac{t_{2j+1}^{-\alpha }}{\varGamma (-\alpha )} \Big ( \sum _{k=1}^{j} \oint _{w_{2k-2}}^{w_{2k}} w^{-1-\alpha } g_{2} (w) \, dw + R_{2j+1} (g) \Big ) \nonumber \\&= \frac{1}{\varGamma (-\alpha )} \int _{0}^{t_{1}} (t_{2j+1} - \tau )^{-1-\alpha } y(\tau ) \, d \tau \nonumber \\&\quad + \frac{t_{2j+1}^{-\alpha }}{\varGamma (-\alpha )} \Big ( \sum _{k=0}^{2j} \alpha _{k, 2j+1} y(t_{2j+1-k}) + R_{2j+1} (g) \Big ) \nonumber \end{aligned}$$

where \(R_{2j+1}(g)\) is the remainder term and \(\alpha _{k, 2j+1} = \alpha _{k, 2j}, \; k=0, 1,2, \dots , 2j\). Hence

$$\begin{aligned} y(t_{2j+1})&= \frac{1}{\alpha _{0,2j+1} - t_{2j+1}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{2j+1}^{\alpha } \varGamma (-\alpha ) f(t_{2j+1}) - \sum _{k=1}^{2j} \alpha _{k,2j+1} y(t_{2j+1-k}) \nonumber \\&+ y_{0} \sum _{k=0}^{2j} \alpha _{k,2j+1} - R_{2j+1} (g) -t_{2j+1}^{\alpha } \int _{0}^{t_{1}} (t_{2j+1} - \tau )^{-1-\alpha } y(\tau ) \, d \tau \Big ]. \end{aligned}$$
(9)

Here \(\alpha _{0, l} - t_{l}^{\alpha } \varGamma (- \alpha ) \beta < 0, \; l=2j, 2j+1\), which follows from \(\varGamma (- \alpha ) <0, \, \beta <0\) and \( \alpha _{0, 2j+1} = \alpha _{0, 2j} <0\); in particular, the denominators in (8) and (9) do not vanish.

Let \(y_{2j} \approx y(t_{2j})\) and \(y_{2j+1} \approx y(t_{2j+1})\) denote the approximate solutions of \(y(t_{2j})\) and \(y(t_{2j+1})\), respectively. We define the following numerical methods for solving (1) and (2), with \(j=1,2, \dots , M\),

$$\begin{aligned} y_{2j} = \frac{1}{\alpha _{0, 2j} - t_{2j}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{2j}^{\alpha } \varGamma (-\alpha ) f(t_{2j}) - \sum _{k=1}^{2j} \alpha _{k,2j} y_{2j-k} + y_{0} \sum _{k=0}^{2j} \alpha _{k, 2j} \Big ], \end{aligned}$$
(10)

and, with \(j=1,2, \dots , M-1\),

$$\begin{aligned} y_{2j+1}&= \frac{1}{\alpha _{0, 2j+1} - t_{2j+1}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{2j+1}^{\alpha } \varGamma (-\alpha ) f(t_{2j+1}) - \sum _{k=1}^{2j} \alpha _{k,2 j+1} y_{2j+1-k} \nonumber \\&+ y_{0} \sum _{k=0}^{2j} \alpha _{k, 2j+1} -t_{2j+1}^{\alpha } \int _{0}^{t_{1}} (t_{2j+1}-\tau )^{-1-\alpha } y(\tau ) \, d \tau \Big ]. \end{aligned}$$
(11)
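
A sketch of the resulting time-stepping loop, as a direct (unverified) transcription of (10) and (11) into Python, using the `weights` function from the sketch above. The starting value \(y_{1}\) and the starting integral are left to the caller, as in the paper:

```python
import math

def solve(M, a, beta, f, y0, y1, start_integral):
    """Sketch of scheme (10)-(11) on [0, 1] with step size h = 1/(2M).

    y1 and start_integral(j), an approximation of
    int_0^{t_1} (t_{2j+1} - tau)**(-1 - a) * y(tau) dtau, must be supplied
    with sufficient accuracy by the caller (cf. Theorem 1)."""
    g = math.gamma(-a)                      # Gamma(-a) < 0 for 0 < a < 1
    y = [0.0] * (2*M + 1)
    y[0], y[1] = y0, y1
    for j in range(1, M + 1):
        w = weights(j, a)                   # alpha_{k,2j} = alpha_{k,2j+1}
        t = 2*j / (2*M)                     # even node, scheme (10)
        rhs = (t**a * g * f(t)
               - sum(w[k] * y[2*j - k] for k in range(1, 2*j + 1))
               + y0 * sum(w))
        y[2*j] = rhs / (w[0] - t**a * g * beta)
        if j <= M - 1:                      # odd node, scheme (11)
            t = (2*j + 1) / (2*M)
            rhs = (t**a * g * f(t)
                   - sum(w[k] * y[2*j + 1 - k] for k in range(1, 2*j + 1))
                   + y0 * sum(w) - t**a * start_integral(j))
            y[2*j + 1] = rhs / (w[0] - t**a * g * beta)
    return y
```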

Yan, Pal and Ford [14] proved the following theorem.

Theorem 1

(Theorem 2.1 in [14]). Let \( 0 < \alpha < 1 \) and let M be a positive integer. Let \(0 = t_{0} < t_{1} < t_{2}< \dots < t_{2j} < t_{2j+1} < \dots < t_{2M} =1\) be a partition of [0, 1] with step size \(h = \frac{1}{2M}\). Let \(y(t_{2j}), y(t_{2j+1})\) and \(y_{2j}, y_{2j+1}\) be the exact and the approximate solutions in (8)–(11), respectively. Assume that \( y \in C^{m+2}[0, 1], \; m \ge 3\). Further assume that the starting value \(y_{1}\) and the starting integral \( \int _{0}^{t_{1}} (t_{2j+1} - \tau )^{-1-\alpha } y(\tau ) \, d \tau \) in (11) can be approximated by some numerical methods to the required accuracy. Then there exist coefficients \(c_{\mu } = c_{\mu } (\alpha )\) and \(c_{\mu }^{*} = c_{\mu }^{*} (\alpha )\) such that the sequence \(\{ y_{l} \}, l=0,1, 2, \dots , 2M\), possesses an asymptotic expansion of the form

$$ y(t_{2M}) - y_{2M} = \sum _{\mu =3}^{m+1} c_{\mu } (2M)^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} (2M)^{-2 \mu } + o ( (2M)^{\alpha -m-1}), \quad \text{ for } \; M \rightarrow \infty , $$

that is,

$$ y(t_{2M}) - y_{2M} = \sum _{\mu =3}^{ m+1 } c_{\mu } h^{\mu - \alpha } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} h^{2 \mu } + o ( h^{ m+1-\alpha }), \quad \text{ for } \; h \rightarrow 0, $$

where \(\mu ^{*}\) is the integer satisfying \( 2 \mu ^{*} < m+1 - \alpha < 2 ( \mu ^{*} +1), \) and \(c_{\mu }\) and \(c_{\mu }^{*}\) are certain coefficients that depend on y.
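
By Theorem 1, the leading error terms are \(h^{3-\alpha }, h^{4-\alpha }, h^{4}, \dots \), so a Romberg tableau can eliminate them in turn, each step combining results computed with step sizes \(h\) and \(h/2\). A sketch of the tableau construction (names ours):

```python
def extrapolation_tableau(approx, exponents):
    """approx holds the basic approximations on step sizes h, h/2, h/4, ...;
    exponents lists the error powers to cancel in order, e.g.
    [3 - a, 4 - a, 4] for the expansion in Theorem 1."""
    rows = [list(approx)]
    for p in exponents:
        prev, r = rows[-1], 2.0**p
        if len(prev) < 2:
            break
        rows.append([(r * prev[i + 1] - prev[i]) / (r - 1)
                     for i in range(len(prev) - 1)])
    return rows
```

For \(\alpha = 0.3\) one passes the exponents [2.7, 3.7, 4], so the successive columns should converge as \(h^{2.7}\), \(h^{3.7}\) and \(h^{4}\), which is what the experiments below exhibit.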

3 Numerical Simulations

Example 1

Consider the following example in [9], with \(0 < \alpha <1\),

$$\begin{aligned}&_0^C D_{t}^{\alpha } y(t) + y(t) = 0, \quad t \in [0, 1], \end{aligned}$$
(12)
$$\begin{aligned}&y(0) = 1. \end{aligned}$$
(13)

It is well known that the exact solution is

$$ y(t) = E_{\alpha } (- t^{\alpha }), $$

where

$$ E_{\alpha }(z) = \sum _{k=0}^{\infty } \frac{z^{k}}{\varGamma (\alpha k +1)} $$

is the Mittag-Leffler function of order \(\alpha \). In the notation of (1), this example has \(\beta = -1\) and \(f=0\), so the given function f is smooth.
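
For reference values of the exact solution one may sum this series directly. The following naive Python sketch is adequate here because \(z = -t^{\alpha } \in [-1, 0]\) on [0, 1]; it is not a robust evaluator for large \(|z|\):

```python
import math

def mittag_leffler(a, z, n_terms=100):
    """Truncated Mittag-Leffler series E_a(z); fine for moderate |z|."""
    return sum(z**k / math.gamma(a*k + 1) for k in range(n_terms))

# Exact solution of (12)-(13) at t = 1 with alpha = 0.3:
y_ref = mittag_leffler(0.3, -1.0)
```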

Table 1. Errors for Eqs. (12) and (13) with \(\alpha =0.3\), taken at \(t=1\).
Table 2. Orders (“EOC”) for Eqs. (12) and (13) with \(\alpha =0.3\), taken at \(t=1\).

Choose the step size \(h=1/10\). In Table 1 we display the errors of the algorithms (10) and (11) at \(t=1\), together with the errors of the first two extrapolation steps in the Romberg tableau, for \(\alpha = 0.3\); Table 2 lists the corresponding experimental orders of convergence (“EOC”). We observe that the first column (the errors of the basic algorithm, without extrapolation) converges as \(h^{3- \alpha }\), the second column (one extrapolation step) converges as \(h^{4-\alpha }\), and the last column (two extrapolation steps) converges as \(h^{4}\). We also considered other values of \(\alpha \in (0, 1)\): when \(\alpha \) is close to 1, the convergence appears slightly faster than predicted, while when \(\alpha \) is close to 0 it is slightly slower than expected, which is consistent with the numerical observations in [12] for the lower order method.
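
The orders in Table 2 can be obtained in the standard way from the errors at successive step-size halvings, \(\mathrm{EOC} = \log _{2} \big ( e(h)/e(h/2) \big )\); a one-line sketch:

```python
import math

def eoc(errors):
    """EOC from errors e(h), e(h/2), e(h/4), ...: log2(e(h) / e(h/2))."""
    return [math.log2(errors[i] / errors[i + 1])
            for i in range(len(errors) - 1)]
```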

Example 2

Consider the following example in [4], with \(0 < \alpha <1\),

$$\begin{aligned}&_0^C D_{t}^{\alpha } y(t) + y(t) = t^4 - \frac{1}{2} t^3 - \frac{3}{\varGamma (4-\alpha )} t^{3- \alpha } + \frac{24}{\varGamma (5-\alpha )} t^{4-\alpha }, \quad t \in [0, 1], \end{aligned}$$
(14)
$$\begin{aligned}&y(0) = 0, \end{aligned}$$
(15)

whose exact solution is given by \(y(t)=t^4- \frac{1}{2} t^3\).
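
In the notation of (1), this example has \(\beta = -1\) and f equal to the whole right-hand side of (14); transcribed directly (a for \(\alpha \)):

```python
import math

# Data of Example 2, transcribed from (14)-(15); beta = -1 in (1).
def f(t, a):
    return (t**4 - 0.5 * t**3
            - 3 / math.gamma(4 - a) * t**(3 - a)
            + 24 / math.gamma(5 - a) * t**(4 - a))

def y_exact(t):
    return t**4 - 0.5 * t**3
```

In the solve sketch of Sect. 2 this right-hand side would be passed as `lambda t: f(t, a)`.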

Choose the step size \(h=1/10\). In Table 3 we display the errors of the algorithms (10) and (11) at \(t=1\), together with the errors of the first two extrapolation steps in the Romberg tableau, for \(\alpha = 0.3\); Table 4 lists the corresponding orders. We observe that the first column converges as \(h^{3- \alpha }\), the second column as \(h^{4- \alpha }\), and the last column as \(h^{4}\). As in Example 1, for other values of \(\alpha \in (0, 1)\) the convergence appears slightly faster when \(\alpha \) is close to 1 and slightly slower than expected when \(\alpha \) is close to 0, consistent with the numerical observations in [12] for the lower order method.

Table 3. Errors for Eqs. (14) and (15) with \(\alpha =0.3\), taken at \(t=1\).
Table 4. Orders (“EOC”) for Eqs. (14) and (15) with \(\alpha =0.3\), taken at \(t=1\).