Abstract
This paper considers an extrapolation algorithm for solving linear fractional differential equations, based on a direct discretization of the fractional differential operator. Numerical results show that the approximate solutions of this numerical method have the expected asymptotic expansions.
Keywords
- Fractional Differential Equations
- Approximate Solution
- Direct Discretization
- Asymptotic Expansion
- Extrapolation Algorithm
1 Introduction
We consider the Richardson extrapolation algorithms for solving the following fractional order differential equation
\( {}_{0}^{C} D_{t}^{\alpha } y(t) = \beta y(t) + f(t), \quad 0 \le t \le 1, \qquad (1) \)
\( y(0) = y_{0}, \qquad (2) \)
where \(0 < \alpha < 1\), \({}_{0}^{C} D_{t}^{\alpha }\) denotes the Caputo fractional derivative, \(\beta <0\) and f is a given function on [0, 1].
Extrapolation can be used to accelerate the convergence of a given sequence [1, 2, 13]. Its applicability depends on the fact that the sequence of approximate solutions of the problem possesses an asymptotic expansion. Diethelm [3] introduced an algorithm for solving the above linear differential equation of fractional order with \(0 < \alpha <1\), and Diethelm and Walz [12] proved that the approximate solution of the numerical algorithm in [3] has an asymptotic expansion. See [5–11] for numerical methods for general nonlinear fractional differential equations.
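The mechanism can be illustrated on a synthetic sequence, unrelated to the scheme of this paper: if \(A(h) = A + c\, h^{\gamma } + O(h^{\delta })\) with \(\delta > \gamma \), one Richardson step with the known exponent \(\gamma \) cancels the leading error term. A minimal Python sketch, assuming step sizes halved between entries:

```python
def richardson_step(vals, gamma):
    """One Richardson extrapolation step for approximations computed with
    step sizes h, h/2, h/4, ...: cancels the leading error term c*h**gamma."""
    return [(2**gamma * b - a) / (2**gamma - 1) for a, b in zip(vals, vals[1:])]

# Synthetic sequence with the known expansion A(h) = A + h**0.7 + h**2
A = 2.0
hs = [2.0**-k for k in range(1, 8)]
approx = [A + h**0.7 + h**2 for h in hs]

extrap = richardson_step(approx, 0.7)      # eliminates the h**0.7 term
err_before = [abs(a - A) for a in approx]
err_after = [abs(a - A) for a in extrap]   # remaining error decays like h**2
```

Because the synthetic expansion has no terms beyond \(h^{2}\), the extrapolated errors decrease by exactly a factor of 4 per halving of h.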
Recently, Yan, Pal and Ford [14] extended the numerical method in [3], obtained a high order numerical method for solving (1) and (2), and proved that the approximate solution has an asymptotic expansion. In this paper, we give numerical results showing that the approximate solutions of the proposed numerical methods have the expected asymptotic expansions.
The paper is organized as follows. In Sect. 2, we introduce the numerical method for solving (1) and (2) and discuss how to approximate the starting values and the starting integrals that appear in the numerical method. In Sect. 3, we give numerical examples showing that the approximate solutions of the proposed numerical methods have the expected asymptotic expansions.
2 Higher Order Numerical Method
In this section we will consider a higher order numerical method for solving (1) and (2). It is well known that, when \(0 < \alpha <1\), (1) and (2) are equivalent to
\( {}_{0}^{R} D_{t}^{\alpha } \big ( y(t) - y(0) \big ) = \beta y(t) + f(t), \quad 0 \le t \le 1, \)
where \(_0^R D_{t}^{\alpha } y(t)\) denotes the Riemann-Liouville fractional derivative defined by, with \(0 < \alpha < 1\),
\( {}_{0}^{R} D_{t}^{\alpha } y(t) = \frac{1}{\Gamma (1- \alpha )} \frac{d}{dt} \int _{0}^{t} (t- \tau )^{- \alpha } y(\tau ) \, d \tau . \)
By using the Hadamard finite-part integral, \(_0^R D_{t}^{\alpha }\) can be written as
\( {}_{0}^{R} D_{t}^{\alpha } y(t) = \frac{1}{\Gamma (- \alpha )} \oint _{0}^{t} (t- \tau )^{-1-\alpha } y(\tau ) \, d \tau . \)
Here the integral \( \oint \) denotes a Hadamard finite-part integral [3].
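As a concrete illustration (our own example, not taken from [3]), for monomials the finite-part value has the closed form \( \oint _{0}^{1} w^{-1-\alpha } w^{m} \, d w = \frac{1}{m - \alpha }\); for \(m \ge 1\) this is an ordinary convergent integral, while for \(m = 0\) the divergent boundary term at \(w = 0\) is discarded, leaving the finite part \(-1/\alpha \). A short Python sketch:

```python
def finite_part_monomial(m, alpha):
    """Hadamard finite part of the integral of w^(-1-alpha) * w^m over [0, 1].
    Equals 1/(m - alpha); for m = 0 the divergent boundary term at w = 0 is
    discarded, leaving the finite part -1/alpha."""
    return 1.0 / (m - alpha)

def finite_part_poly(coeffs, alpha):
    """Finite part for a polynomial p(w) = sum_m coeffs[m] * w^m, by linearity."""
    return sum(c * finite_part_monomial(m, alpha) for m, c in enumerate(coeffs))

# For m >= 1 the value agrees with ordinary quadrature of w^(m-1-alpha) on [0, 1]
alpha, m, n = 0.3, 2, 20000
midpoint = sum(((k + 0.5) / n)**(m - 1 - alpha) for k in range(n)) / n
```

The cross-check with the composite midpoint rule works only for \(m \ge 1\); the \(m = 0\) case is precisely where the finite-part interpretation is needed.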
Yan, Pal and Ford [14] extended the numerical method in Diethelm and Walz [12] and obtained a high order numerical method for solving (1) and (2) for \(0 < \alpha <1\). Let M be a fixed positive integer, let \(0 = t_{0} < t_{1} < t_{2} < \dots < t_{2j}< t_{2j+1} <\dots < t_{2M} =1\) be a partition of [0, 1] and let h be the step size. At the nodes \(t_{2j} = \frac{2j}{2M}\), the Eqs. (1) and (2) satisfy
\( \frac{1}{\Gamma (- \alpha )} \oint _{0}^{t_{2j}} (t_{2j}- \tau )^{-1-\alpha } \big ( y(\tau ) - y(0) \big ) \, d \tau = \beta y(t_{2j}) + f(t_{2j}), \qquad (3) \)
and at the nodes \(t_{2j+1} = \frac{2j+1}{2M}\), the Eqs. (1) and (2) satisfy
\( \frac{1}{\Gamma (- \alpha )} \oint _{0}^{t_{2j+1}} (t_{2j+1}- \tau )^{-1-\alpha } \big ( y(\tau ) - y(0) \big ) \, d \tau = \beta y(t_{2j+1}) + f(t_{2j+1}). \qquad (4) \)
Note that, with the substitution \(\tau = t_{2j} - t_{2j} w\),
\( \oint _{0}^{t_{2j}} (t_{2j}- \tau )^{-1-\alpha } y(\tau ) \, d \tau = t_{2j}^{- \alpha } \oint _{0}^{1} w^{-1-\alpha } y(t_{2j}- t_{2j} w) \, d w. \)
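This change of variables can be checked numerically for an integrable exponent \(p > -1\) (for the exponent \(-1-\alpha < -1\) itself, the same substitution is applied to the finite-part integral instead). A small sketch with an assumed test integrand \(y(\tau ) = \cos \tau \):

```python
import math

def midpoint(func, a, b, n):
    """Composite midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(func(a + (k + 0.5) * h) for k in range(n))

t, n = 0.8, 100000
y = math.cos

# Substitution tau = t - t*w:
#   integral of (t-tau)^p y(tau) over [0, t]
#     = t^(p+1) * integral of w^p y(t - t*w) over [0, 1]
p = -0.3
lhs = midpoint(lambda tau: (t - tau)**p * y(tau), 0.0, t, n)
rhs = t**(p + 1) * midpoint(lambda w: w**p * y(t - t * w), 0.0, 1.0, n)

# Sanity check against a closed form for p = 1:
#   integral of (t-tau)*cos(tau) over [0, t] = 1 - cos(t)
check = midpoint(lambda tau: (t - tau) * y(tau), 0.0, t, n)
exact = 1.0 - math.cos(t)
```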
For every j, we denote \(g(w) = y(t_{2j}- t_{2j} w)\) and approximate the integral \( \oint _{0}^{1} w^{-1-\alpha } g(w) \, d w\) by \( \oint _{0}^{1} w^{-1-\alpha } g_{2}(w) \, d w\), where \(g_{2}(w)\) is the piecewise quadratic interpolation polynomial of g on the nodes \(w_{l}= l/2j, \; l=0, 1,2, \dots , 2j\). More precisely, we have, for \(k=1,2, \dots , j\),
Thus
where \(R_{2j}(g)\) is the remainder term and \(\alpha _{k, 2j}, k=0, 1,2, \dots , 2j\) are weights given by
Here
and
Hence, for \(j=1,2, \dots , M\), (3) takes the form
At the nodes \(t_{2j+1} = \frac{2j+1}{2M}, \; j=0, 1, 2, \dots , M-1\), we have
For every j, we denote \(g(w) = y(t_{2j+1}- t_{2j+1} w)\) and approximate the integral \( \oint _{0}^{\frac{2j}{2j+1}} w^{-1-\alpha } g(w) \, d w\) by \( \oint _{0}^{\frac{2j}{2j+1}} w^{-1-\alpha } g_{2}(w) \, d w\), where \(g_{2}(w)\) is the piecewise quadratic interpolation polynomial of g on the nodes \(w_{l}= \frac{l}{2j+1}, \; l=0, 1,2, \dots , 2j\). We then get
where \(R_{2j+1}(g)\) is the remainder term and \(\alpha _{k, 2j+1} = \alpha _{k, 2j}, \; k=0, 1,2, \dots , 2j\). Hence
Here \(\alpha _{0, l} - t_{l}^{\alpha } \varGamma (- \alpha ) \beta < 0, \; l=2j, 2j+1\), which follows from \(\varGamma (- \alpha ) <0, \, \beta <0\) and \( \alpha _{0, 2j+1} = \alpha _{0, 2j} <0\).
Let \(y_{2j} \approx y(t_{2j})\) and \(y_{2j+1} \approx y(t_{2j+1})\) denote the approximate solutions of \(y(t_{2j})\) and \(y(t_{2j+1})\), respectively. We define the following numerical methods for solving (1) and (2), with \(j=1,2, \dots , M\),
and, with \(j=1,2, \dots , M-1\),
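The quadratic-interpolation weights \(\alpha _{k, 2j}\) are not reproduced above. To illustrate the structure of this family of schemes, the following Python sketch implements the lower order variant of [3, 12], which uses piecewise linear interpolation of g and converges with order \(2-\alpha \). The weight formula and the test problem (exact solution \(y(t)=t^{2}\), \(\beta =-1\), f constructed accordingly) are our own illustrative assumptions, not the higher order method (8)–(11):

```python
import math

def diethelm_weights(j, alpha):
    """Quadrature weights a[k] with FP-integral of w^(-1-alpha) g(w) on [0,1]
    approximated by sum_k a[k] * g(k/j), using piecewise linear interpolation
    of g on the nodes w_k = k/j (lower-order variant, cf. [3, 12])."""
    c = j**alpha / (alpha * (1 - alpha))
    a = [0.0] * (j + 1)
    a[0] = -c
    for k in range(1, j):
        a[k] = c * (2 * k**(1 - alpha) - (k - 1)**(1 - alpha) - (k + 1)**(1 - alpha))
    a[j] = c * ((alpha - 1) * j**(-alpha) - (j - 1)**(1 - alpha) + j**(1 - alpha))
    return a

def solve_fode(alpha, beta, f, y0, M):
    """Solve D^alpha y = beta*y + f(t) on [0, 1], y(0) = y0, on the grid
    t_j = j/M, via the finite-part discretization with linear interpolation."""
    g = math.gamma(-alpha)          # Gamma(-alpha) < 0 for 0 < alpha < 1
    y = [y0] + [0.0] * M
    for j in range(1, M + 1):
        t = j / M
        a = diethelm_weights(j, alpha)
        rhs = (-sum(a[k] * y[j - k] for k in range(1, j + 1))
               + sum(a) * y0 + g * t**alpha * f(t))
        y[j] = rhs / (a[0] - g * t**alpha * beta)   # denominator < 0, never zero
    return y

# Test problem (our construction): exact solution y(t) = t^2 with beta = -1,
# so f(t) = D^alpha t^2 + t^2 = 2 t^(2-alpha)/Gamma(3-alpha) + t^2.
alpha, beta = 0.5, -1.0
f = lambda t: 2 * t**(2 - alpha) / math.gamma(3 - alpha) + t**2
err = [abs(solve_fode(alpha, beta, f, 0.0, M)[-1] - 1.0) for M in (10, 20, 40)]
```

The denominator \(a[0] - \varGamma (-\alpha ) t^{\alpha } \beta \) is negative by the same sign argument as above, so each step is well defined.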
Yan, Pal and Ford [14] proved the following theorem.
Theorem 1
(Theorem 2.1 in [14]). Let \( 0 < \alpha < 1 \) and M be a positive integer. Let \(0 = t_{0} < t_{1} < t_{2}< \dots < t_{2j} < t_{2j+1} < \dots < t_{2M} =1\) be a partition of [0, 1] and h the step size. Let \(y(t_{2j})\) and \(y(t_{2j+1})\) be the exact solutions of (1) and (2), and let \(y_{2j}\) and \(y_{2j+1}\) be the approximate solutions defined by (8)–(11). Assume that \( y \in C^{m+2}[0, 1], \; m \ge 3\). Further assume that we can approximate the starting value \(y_{1}\) and the starting integral \( \int _{0}^{t_{1}} (t_{2j+1} - \tau )^{-1-\alpha } y(\tau ) \, d \tau \) in (11) by some numerical method to the required accuracy. Then there exist coefficients \(c_{\mu } = c_{\mu } (\alpha )\) and \(c_{\mu }^{*} = c_{\mu }^{*} (\alpha )\) such that the sequence \(\{ y_{l} \}, l=0,1, 2, \dots , 2M\), possesses an asymptotic expansion of the form
that is,
where \(\mu ^{*}\) is the integer satisfying \( 2 \mu ^{*} < m+1 - \alpha < 2 ( \mu ^{*} +1), \) and \(c_{\mu }\) and \(c_{\mu }^{*}\) are certain coefficients that depend on y.
3 Numerical Simulations
Example 1
Consider the following example in [9], with \(0 < \alpha <1\),
\( {}_{0}^{C} D_{t}^{\alpha } y(t) = \beta y(t), \quad 0 \le t \le 1, \quad y(0) = y_{0}. \)
It is well known that the exact solution is
\( y(t) = y_{0} \, E_{\alpha } ( \beta t^{\alpha } ), \)
where
\( E_{\alpha }(z) = \sum _{k=0}^{\infty } \frac{z^{k}}{\Gamma (\alpha k + 1)} \)
is the Mittag-Leffler function of order \(\alpha \). Here the given function f is smooth; indeed \(f=0\) (Table 1).
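For moderate arguments the series can be evaluated directly. A simple truncated-series sketch, using lgamma so that large values of \(\Gamma (\alpha k + 1)\) do not overflow (dedicated algorithms are needed for large \(|z|\)):

```python
import math

def mittag_leffler(alpha, z, n_terms=60):
    """Truncated series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    Works in log space via lgamma to avoid overflow; adequate for moderate |z|."""
    if z == 0.0:
        return 1.0
    total = 1.0                                   # k = 0 term
    for k in range(1, n_terms):
        mag = math.exp(k * math.log(abs(z)) - math.lgamma(alpha * k + 1))
        total += mag if z > 0 or k % 2 == 0 else -mag
    return total
```

Two easy checks: \(E_{1}\) reduces to the exponential, and \(E_{2}(z) = \cosh \sqrt{z}\).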
Choose the step size \(h=1/10\). In Table 2, we display the errors of the algorithms (10) and (11) at \(t=1\), together with the first two extrapolation steps in the Romberg tableau, for \(\alpha = 0.3\). We observe that the first column (the errors of the basic algorithm without extrapolation) converges as \(h^{3- \alpha }\), the second column (errors after one extrapolation step) converges as \(h^{4-\alpha }\), and the last column (two extrapolation steps) converges as \(h^{4}\). We also considered other values of \(\alpha \in (0, 1)\): when \(\alpha \) is close to 1, the convergence seems to be even a bit faster, but when \(\alpha \) is close to 0, the convergence is a bit slower than expected, which is consistent with the numerical observations in [12] for the lower order method.
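The behaviour of these columns can be reproduced on synthetic data: a sequence carrying exactly the expansion \(A + c_{1} h^{3-\alpha } + c_{2} h^{4-\alpha } + c_{3} h^{4}\) (an assumption mimicking the reported orders, not output of the solver itself), extrapolated with the known non-integer exponents:

```python
def extrapolation_column(vals, gamma):
    """One extrapolation step for step sizes halved between entries:
    eliminates the error term c*h**gamma from the sequence."""
    return [(2**gamma * b - a) / (2**gamma - 1) for a, b in zip(vals, vals[1:])]

alpha, A = 0.3, 1.0
hs = [2.0**-(k + 1) for k in range(6)]
col0 = [A + h**(3 - alpha) + h**(4 - alpha) + h**4 for h in hs]

col1 = extrapolation_column(col0, 3 - alpha)   # removes the h^(3-alpha) term
col2 = extrapolation_column(col1, 4 - alpha)   # removes the h^(4-alpha) term
err2 = [abs(v - A) for v in col2]              # remaining error is pure h^4
```

Since the synthetic expansion has no terms beyond \(h^{4}\), the second extrapolated column decreases by exactly a factor of 16 per halving of h, matching the \(h^{4}\) rate seen in the table.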
Example 2
Consider the following example in [4], with \(0 < \alpha <1\),
whose exact solution is given by \(y(t)=t^4- \frac{1}{2} t^3\).
Choose the step size \(h=1/10\). In Table 3, we display the errors of the algorithms (10) and (11) at \(t=1\), together with the first two extrapolation steps in the Romberg tableau, for \(\alpha = 0.3\). We observe that the first column converges as \(h^{3- \alpha }\), the second column converges as \(h^{4- \alpha }\) and the last column converges as \(h^{4}\). We also considered other values of \(\alpha \in (0, 1)\): when \(\alpha \) is close to 1, the convergence seems to be even a bit faster, but when \(\alpha \) is close to 0, the convergence is a bit slower than expected, which is consistent with the numerical observations in [12] for the lower order method (Table 4).
References
[1] Brezinski, C.: A general extrapolation algorithm. Numer. Math. 35, 175–187 (1980)
[2] Brezinski, C., Redivo Zaglia, M.: Extrapolation Methods, Theory and Practice. North-Holland, Amsterdam (1992)
[3] Diethelm, K.: Generalized compound quadrature formulae for finite-part integrals. IMA J. Numer. Anal. 17, 479–493 (1997)
[4] Diethelm, K.: An algorithm for the numerical solution of differential equations of fractional order. Electron. Trans. Numer. Anal. 5, 1–6 (1997)
[5] Diethelm, K.: The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type. Lecture Notes in Mathematics, vol. 2004. Springer, Heidelberg (2010)
[6] Diethelm, K.: Monotonicity results for a compound quadrature method for finite-part integrals. J. Inequal. Pure Appl. Math. 5(2), Article 44 (2004)
[7] Diethelm, K., Ford, N.J.: Analysis of fractional differential equations. J. Math. Anal. Appl. 265, 229–248 (2002)
[8] Diethelm, K., Ford, N.J., Freed, A.D.: Detailed error analysis for a fractional Adams method. Numer. Algorithms 36, 31–52 (2004)
[9] Diethelm, K., Ford, N.J., Freed, A.D.: A predictor-corrector approach for the numerical solution of fractional differential equations. Nonlinear Dynam. 29, 3–22 (2002)
[10] Diethelm, K., Luchko, Y.: Numerical solution of linear multi-term initial value problems of fractional order. J. Comput. Anal. Appl. 6, 243–263 (2004)
[11] Dimitrov, Y.: Numerical approximations for fractional differential equations. J. Frac. Calc. Appl. 5(3S), 1–45 (2014)
[12] Diethelm, K., Walz, G.: Numerical solution of fractional order differential equations by extrapolation. Numer. Algorithms 16, 231–253 (1997)
[13] Walz, G.: Asymptotics and Extrapolation. Akademie-Verlag, Berlin (1996)
[14] Yan, Y., Pal, K., Ford, N.J.: Higher order numerical methods for solving fractional differential equations. BIT Numer. Math. 54, 555–584 (2014). doi:10.1007/s10543-013-0443-3
Acknowledgements
We wish to express our sincere gratitude to Professor Neville J. Ford for his encouragement, discussions and valuable criticism during the research of this work.
© 2015 Springer International Publishing Switzerland
Pal, K., Liu, F., Yan, Y. (2015). Numerical Solutions of Fractional Differential Equations by Extrapolation. In: Dimov, I., Faragó, I., Vulkov, L. (eds.) Finite Difference Methods, Theory and Applications. FDM 2014. Lecture Notes in Computer Science, vol. 9045. Springer, Cham. https://doi.org/10.1007/978-3-319-20239-6_32
Print ISBN: 978-3-319-20238-9
Online ISBN: 978-3-319-20239-6