Introduction

Today, differential equations play an essential role in the advancement of science and engineering. Among them, fractional differential equations have found many applications in various sciences in recent decades. Mathematicians and scientists have applied fractional calculus in a variety of fields, such as diffusion processes, electrochemistry, chaos, finance, plasma physics, medicine, biomathematics, probability, scattering theory, rheology, potential theory, and transport theory [4, 17, 19, 32], as well as thermoelasticity [39], nuclear reactor dynamics [44], mechanical vibrations [16], and biological tissues [12, 31].

Many phenomena in different sciences are analyzed through fractional models. The rheological properties of some polymers are studied via fractional differential models [6, 40]. Further examples include modeling the spread of river blindness disease [8], nonlinear modeling of interpersonal relationships [27, 37], a fractional model for Lassa hemorrhagic fever [7], quantum fractional differential models [47, 51], modeling the spread of computer viruses [38], and cancer tumor modeling [24, 45]. Ordinary and partial nonlinear fractional differential equations have many applications in various fields of engineering, physics, and medicine [26, 35, 36].

Mathematicians have proposed many numerical methods for solving fractional differential equations, such as Bayesian inversion [25], Bernoulli wavelets [42, 43], Bernstein polynomials [5], Boubaker polynomials [41], Chebyshev polynomials [9, 49], the Chebyshev wavelet method [33], the finite element method [11, 14, 15, 52], finite differences [11, 15, 50], the Galerkin method [29, 50], fractional-order Lagrange polynomials [46], and Jacobi polynomials [10, 34, 48]. Some fractional ordinary differential equations can be solved by the analytical methods described in [28].

In this paper, we present a new method for the numerical solution of the following fractional initial–value problem:

$$\begin{aligned} \left\{ \begin{array}{ll} {}_CD_{0,t}^\alpha u(t) = f(t,u(t)), &{} m - 1 < \alpha \le m,\ m \in {\mathbb {Z}}^+, \\ u^{(j)}(0) = u_0^j\in {\mathbb {R}}, &{} j = 0,1, \ldots ,m - 1, \end{array} \right. \end{aligned}$$
(1)

in the Caputo sense for \(t\in [0,T]\), where \(f \ne 0\).

Basic Concepts

There are many types of fractional derivatives and integrals, suggested by Riemann, Liouville, Riesz, Letnikov, Grünwald, Weyl, Marchaud, and Caputo. In this paper, we consider the fractional initial value problem in the Caputo sense.

Definition 1

The Caputo fractional derivative of order \(\alpha \) of a function g is written as:

$$\begin{aligned} {}_CD_{a,t}^\alpha g(t) = \frac{1}{\Gamma (m - \alpha )}\int _a^t \frac{g^{(m)}(\xi )}{(t - \xi )^{\alpha + 1 - m}}\,d\xi , \end{aligned}$$
(2)

where \(\Gamma \) is Euler’s Gamma function (or Euler’s integral of the second kind), and \(m - 1 < \alpha \le m,\ m \in {\mathbb {Z}}^+\).
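Because the method developed below approximates the solution by polynomials, it is worth recording the closed form of the Caputo derivative of a monomial: for an integer \(k \ge m\), \({}_CD_{0,t}^\alpha t^k = \frac{\Gamma (k+1)}{\Gamma (k+1-\alpha )}t^{k-\alpha }\), while monomials of degree below m are annihilated. The following minimal Python sketch applies this rule term by term (the paper's computations were done in Mathematica; `caputo_poly` is a hypothetical helper name reused in the later sketches):

```python
import numpy as np
from scipy.special import gamma

def caputo_poly(coeffs, alpha, t):
    """Caputo derivative {}_C D_{0,t}^alpha of sum_k coeffs[k] * t**k.

    Applies {}_C D^alpha t^k = Gamma(k+1)/Gamma(k+1-alpha) * t**(k-alpha)
    for k >= m = ceil(alpha); monomials with k < m are annihilated,
    since their m-th classical derivative vanishes.
    """
    m = int(np.ceil(alpha))
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k, c in enumerate(coeffs):
        if k >= m and c != 0.0:
            out += c * gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)
    return out
```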

Definition 2

The two-parameter Mittag-Leffler function, which plays an important role in fractional calculus, is defined as

$$\begin{aligned} E_{\alpha ,\beta }(z) = \sum _{j = 0}^\infty \frac{z^j}{\Gamma (\alpha j + \beta )},\ \ \alpha ,\beta \in {\mathbb {R}}^ + ,z \in {\mathbb {C}}. \end{aligned}$$
(3)
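For the moderate arguments that occur in Example 2 below, a direct truncation of the series (3) is sufficient to evaluate \(E_{\alpha ,\beta }\); the sketch below does exactly that (`mittag_leffler` is a hypothetical helper name, and a robust evaluation for large \(|z|\) would require a different algorithm):

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(alpha, beta, z, n_terms=100):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z),
    evaluated by truncating the series (3) after n_terms terms."""
    z = np.asarray(z, dtype=complex)
    return sum(z ** j / gamma(alpha * j + beta) for j in range(n_terms))
```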

Several theorems establish the existence and uniqueness of the solution of problem (1); two important ones are listed below.

Theorem 1

(Existence) For some \({\chi ^ * } > 0\) and \(K>0\), define \(D=\{(t,u):t\in [0,\chi ^*],\ |u-\sum _{j=0}^{m-1}t^j u_0^j/j!|\le K\}\), let \(f:D \rightarrow {\mathbb {R}}\) be continuous, and set \(\chi := \min \left\{ \chi ^*,\ \left( K\Gamma (\alpha + 1)/\left\| f \right\| _\infty \right) ^{1/\alpha } \right\} \). Then there exists a function \(u:[0,\chi ] \rightarrow {\mathbb {R}}\) that solves the fractional initial value problem (1).

Proof 1

See Theorem 6.1 in [18]. \(\square \)

Theorem 2

(Uniqueness) Under the hypotheses of the previous theorem, let the function \(f:D \rightarrow {\mathbb {R}}\) be continuous and satisfy a Lipschitz condition with respect to the second variable, i.e.

$$\begin{aligned} \left| {f(t,w_1) - f(t,w_2)} \right| \le L\left| {w_1 - w_2} \right| , \end{aligned}$$

for some constant \(L>0\) independent of \(t,\ w_1,\) and \(w_2\). Then there exists a unique function \(u:[0,\chi ] \rightarrow {\mathbb {R}}\) that solves the fractional initial value problem (1).

Proof 2

See Theorem 6.5 in [18]. \(\square \)

As is well known, polynomials are very useful for finding solutions of initial value problems, especially in engineering applications, because they are simple to apply. Recently, a new class of polynomials equipped with an auxiliary parameter was introduced by the first author in [1], and some of its applications are shown in [2, 3, 21, 22]. This class is introduced below.

Definition 3

[1] The a-polynomial functions are defined as follows:

$$\begin{aligned} A_0(t) = 1, \qquad A_n(t) = at\,U_{n - 1}(t) + U_n(t), \quad n \ge 1, \end{aligned}$$

where \(U_n(\cdot )\) is the Chebyshev polynomial of the second kind and a is an auxiliary real parameter.

The following equations are also established:

$$\begin{aligned}&{A_{n + 1}}(t) = 2t{A_n}(t) - {A_{n - 1}}(t),\ \ n\ge 1, \end{aligned}$$
(4)
$$\begin{aligned}&A_n(t) = \left( 1 + \frac{a}{2}\right) U_n(t) + \frac{a}{2}\,U_{n - 2}(t), \quad n \ge 2, \end{aligned}$$
(5)

see [1, 21, 22] for more properties.
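Definition 3 and the identities (4)-(5) are easy to spot-check numerically with SciPy's Chebyshev polynomials of the second kind; a small sketch (`a_poly` is a hypothetical helper reused in the later sketches):

```python
import numpy as np
from scipy.special import chebyu  # U_n, Chebyshev polynomials of the second kind

def a_poly(n, a):
    """A_n as a numpy polynomial: A_0 = 1, A_n = a*t*U_{n-1} + U_n for n >= 1."""
    if n == 0:
        return np.poly1d([1.0])
    return a * np.poly1d([1.0, 0.0]) * chebyu(n - 1) + chebyu(n)

a, t, n = 0.7, np.linspace(-1.0, 1.0, 5), 4
# recurrence (4): A_{n+1} = 2 t A_n - A_{n-1}
assert np.allclose(a_poly(n + 1, a)(t),
                   2 * t * a_poly(n, a)(t) - a_poly(n - 1, a)(t))
# identity (5): A_n = (1 + a/2) U_n + (a/2) U_{n-2}
assert np.allclose(a_poly(n, a)(t),
                   (1 + a / 2) * chebyu(n)(t) + (a / 2) * chebyu(n - 2)(t))
```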

Proposition 3

\(U_n\) is an eigenfunction of the singular Sturm–Liouville problem:

$$\begin{aligned} \left[ (1 - t^2)^{-1/2}\frac{d}{dt}\left( (1 - t^2)^{3/2}\frac{d}{dt}\right) + n(n + 2)\right] U_n(t) = 0, \end{aligned}$$
(6)

for \(n=0,1,2,\dots \).

Proposition 4

Assume that \(\omega (t) = \sqrt{1 - {t^2}} \), then

$$\begin{aligned}&{\left( {{U_n},{U_m}} \right) _\omega } = \int _{ - 1}^1 {{U_n}(t)} {U_m}(t)\omega (t)dt = \frac{\pi }{2}{\delta _{n,m}}, \end{aligned}$$
(7)
$$\begin{aligned}&\int _{ - 1}^1 \frac{dU_n(t)}{dt}\,\frac{dU_m(t)}{dt}\,\omega ^3(t)dt = \frac{\pi }{2}\,n(n + 2)\,\delta _{n,m}. \end{aligned}$$
(8)

Remark 5

Assume that \(\omega (t) = \sqrt{1 - t^2}\); then, for \(n \ge 3\),

$$\begin{aligned}&\int _{-1}^1 {{\omega ^3}(t)\frac{{d{U_n}(t)}}{{dt}}dt = 0,} \end{aligned}$$
(9)
$$\begin{aligned}&\int _{-1}^1 {{\omega ^3}(t){U_n}(t)dt = 0.} \end{aligned}$$
(10)
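Both statements can be verified by quadrature; the sketch below checks (7) and (8) for \(n=3\), \(m=5\), indices large enough that Remark 5 applies as well:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import chebyu

w = lambda t: np.sqrt(1.0 - t * t)
n, m = 3, 5

# (7): orthogonality of U_n with weight w
val, _ = quad(lambda t: chebyu(n)(t) * chebyu(m)(t) * w(t), -1, 1)
assert abs(val) < 1e-10
val, _ = quad(lambda t: chebyu(n)(t) ** 2 * w(t), -1, 1)
assert abs(val - np.pi / 2) < 1e-10

# (8): orthogonality of the derivatives with weight w^3
dUn, dUm = chebyu(n).deriv(), chebyu(m).deriv()
val, _ = quad(lambda t: dUn(t) * dUm(t) * w(t) ** 3, -1, 1)
assert abs(val) < 1e-10
val, _ = quad(lambda t: dUn(t) ** 2 * w(t) ** 3, -1, 1)
assert abs(val - np.pi / 2 * n * (n + 2)) < 1e-8
```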

The Implementation Method

In this method, the solution of (1) is approximated by the following truncated series form:

$$\begin{aligned} y(t) \approx {y_N}(t) = \sum \limits _{k = 0}^N {{c_k}{A_k}(t),} \end{aligned}$$
(11)

where \(c_k,\ k = 0,1, \ldots ,N\) are the unknown coefficients for \(\alpha \in (0,1]\). The collocation points of the present method on the interval [0, T] are defined as \(t_j = j\,\Delta t\) for \(j=0,1,\ldots ,N\), where \(\Delta t = \frac{T}{N}\). We then enforce the collocation conditions on the residual of (1) at the points \(t_j\), \(j=0,1,\ldots ,N\). Therefore, the unknown coefficients \(c_k,\ k = 0,1, \ldots ,N\) and the unknown auxiliary parameter a are obtained by solving the following nonlinear system of equations:

$$\begin{aligned} \begin{array}{ll} Res(t_j) = 0, &{} j = 0,1, \ldots ,N, \\ y(0) = 0. &{} \end{array} \end{aligned}$$

This is a nonlinear system with \(N+2\) equations and \(N+2\) unknowns. To solve it, we used Mathematica 12.0 and its FindMinimum command.
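A compact Python sketch of this first stage is given below; it reuses the hypothetical helpers `caputo_poly` and `a_poly` from the earlier sketches and substitutes SciPy's `least_squares` for FindMinimum, so it is a sketch of the idea rather than the paper's exact implementation:

```python
import numpy as np
from scipy.optimize import least_squares
# relies on caputo_poly and a_poly from the earlier sketches

def solve_collocation(f, alpha, T, N, y0=0.0):
    """First stage for 0 < alpha <= 1: find c_0..c_N and the parameter a.

    Collocates Res(t) = {}_C D^alpha y_N(t) - f(t, y_N(t)) at t_j = j*T/N,
    j = 0..N, and appends y_N(0) = y0: N+2 equations in N+2 unknowns.
    """
    t = np.linspace(0.0, T, N + 1)

    def residual(x):
        c, a = x[:-1], x[-1]
        poly = sum(ck * a_poly(k, a) for k, ck in enumerate(c))  # y_N as np.poly1d
        mono = poly.coeffs[::-1]                # ascending monomial coefficients
        res = caputo_poly(mono, alpha, t) - f(t, poly(t))
        return np.append(res, poly(0.0) - y0)

    sol = least_squares(residual, np.full(N + 2, 0.1))
    return sol.x[:-1], sol.x[-1]                # coefficients c_k and parameter a
```

For instance, Example 1 below fits this template with \(f(t,y) = f_1(t) - y^2\) once the nonlinear term is moved to the right-hand side, where \(f_1\) denotes the source term given there.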

Remark 6

For \(1 < \alpha \le 2\), the solution of (1) is approximated by the following truncated series form:

$$\begin{aligned} y(t) \approx {y_N}(t) = \sum \limits _{k = 0}^{N+1} {{c_k}{A_k}(t),} \end{aligned}$$
(12)

where \(c_k,\ k = 0,1, \ldots ,N+1\) are the unknown coefficients. We enforce the collocation conditions on the residual of (1) at the points \(t_j\), \(j=0,1,\ldots ,N\), as before. Therefore, the unknown coefficients \(c_k,\ k = 0,1, \ldots ,N+1\) and the unknown auxiliary parameter a are obtained by solving the following nonlinear system of \(N+3\) equations in \(N+3\) unknowns:

$$\begin{aligned} \begin{array}{ll} Res(t_j) = 0, &{} j = 0,1, \ldots ,N, \\ y(0) = 0, &{} \\ y'(0) = 0. &{} \end{array} \end{aligned}$$

We can rewrite the same algorithm for \(\alpha \in (2,3]\), and so on.
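Under the same assumptions as the sketch above, the changes for \(1 < \alpha \le 2\) are small: the basis runs to \(N+1\) and the condition \(y'(0)=0\) is appended. A hypothetical residual for this case:

```python
import numpy as np
# relies on caputo_poly and a_poly from the earlier sketches

def residual2(x, f, alpha, t, y0=0.0, dy0=0.0):
    """Residual for 1 < alpha <= 2 (Remark 6): x holds c_0..c_{N+1} and a,
    and the extra condition y'(0) = dy0 is appended (N+3 equations)."""
    c, a = x[:-1], x[-1]
    poly = sum(ck * a_poly(k, a) for k, ck in enumerate(c))
    res = caputo_poly(poly.coeffs[::-1], alpha, t) - f(t, poly(t))
    return np.concatenate([res, [poly(0.0) - y0, poly.deriv()(0.0) - dy0]])
```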

The Convergence Theorem

Let \(\Lambda = \left[ -1,1 \right] \) and let \(L_\omega ^2(\Lambda )\) be the weighted Hilbert space of functions with the inner product

$$\begin{aligned} {(f,g)_\omega } = \int _{ - 1}^1 \omega (t)f(t)g(t)dt, \end{aligned}$$

where \(\omega (t)=\sqrt{1 - t^2}\) is a positive weight function and \(\Vert \cdot \Vert _\omega ^2=(\cdot ,\cdot )_\omega \). For a positive integer N, we consider the subspace of \(L_\omega ^2(\Lambda )\) spanned by the second-kind Chebyshev polynomials,

$$\begin{aligned} {S_N} = span\left\{ {{U_0},{U_1}, \ldots ,{U_N}} \right\} . \end{aligned}$$

We define the \(L_\omega ^2(\Lambda )\)-orthogonal projection as follows:

$$\begin{aligned} \begin{array}{l} P_N :L_\omega ^2(\Lambda ) \rightarrow S_N, \\ (P_N v)(t) = \sum \limits _{i = 0}^N c_i U_i(t), \end{array} \end{aligned}$$

such that \((P_N v - v,\varphi )_\omega = 0\) for all \(\varphi \in S_N\). To estimate \(\left\| P_N v - v \right\| _\omega \), we introduce the space

$$\begin{aligned} H_{\omega ,R}^r(\Lambda ) = \left\{ v \mid v \text { is measurable and } \left\| v \right\| _{r,\omega ,R} < \infty \right\} , \end{aligned}$$

where \(r>0\) is any real number, and

$$\begin{aligned} {\left\| v \right\| _{r,\omega ,R}} = {\left( \sum \limits _{i = 0}^r {\left\| {{{(t + 2)}^{\frac{r}{2} + i}}\frac{{{d^i}v}}{{d{t^i}}}} \right\| _\omega ^2} \right) ^{1/2}}. \end{aligned}$$
(13)

We define the Sturm–Liouville operator of the second-kind Chebyshev polynomials, R, as

$$\begin{aligned} R({U_n}(t)) = - {\omega ^{ - 1}(t)}\frac{d}{{dt}}({\omega ^3(t)}\frac{d}{{dt}}{U_n}(t)), \end{aligned}$$
(14)

see [23, Chapter 5].

Proposition 7

\({R^m}\) is a continuous mapping from \(H_{\omega ,R}^{2m}(\Lambda )\) to \(L_\omega ^2(\Lambda )\).

Proof 3

To show this, we prove that

$$\begin{aligned} {R^m}v(t) = \sum \limits _{k = 1}^{2m} {{{(t + 2)}^{m + k}}{q_k}(t)\frac{{{d^k}v(t)}}{{d{t^k}}}}, \end{aligned}$$
(15)

where each \(q_k\) is a rational function that is uniformly bounded on the whole interval \(\Lambda \). The proof is by induction. For \(m=1\), we have

$$\begin{aligned} \begin{aligned} Rv(t)&= 3t\frac{dv}{dt} - (1 - t^2)\frac{d^2v}{dt^2} \\&= (t+2)^2\left( \frac{3t}{(t+2)^2} \right) \frac{dv}{dt} + (t+2)^3\left( \frac{t^2-1}{(t+2)^3} \right) \frac{d^2v}{dt^2}. \\ \end{aligned} \end{aligned}$$

Suppose that for \(m \le n\) the relation (15) is satisfied. One can easily prove that this relation is established for \(m=n+1\). \(\square \)

Proposition 8

For any real \(r \ge 0\) and \(v \in H_{\omega ,R}^r(\Lambda )\) with \(v = \sum \limits _{n = 0}^\infty {\hat{v}}_n U_n(t)\), we have

$$\begin{aligned} {\left\| {{P_N}v - v} \right\| _\omega } \le c{N^{ - r}}{\left\| v \right\| _{r,\omega ,R}}, \end{aligned}$$
(16)

for some real constant c.

Proof 4

First, suppose that \(r=2m\). By (6), (7), (14) and integration by parts,

$$\begin{aligned} \begin{aligned} {\hat{v}}_n&= \frac{2}{\pi }\int _\Lambda v(t)U_n(t)\omega (t)dt = \frac{2}{\pi n(n + 2)}\int _\Lambda v(t)RU_n(t)\omega (t)dt \\&= - \frac{2}{\pi n(n + 2)}\int _\Lambda v(t)\frac{d}{dt}\left( \omega ^3(t)\frac{d}{dt}U_n(t)\right) dt \\&= \frac{2}{\pi n(n + 2)}\int _\Lambda \omega ^3(t)\frac{d}{dt}v(t)\left( \frac{d}{dt}U_n(t)\right) dt \\&= - \frac{2}{\pi n(n + 2)}\int _\Lambda \frac{d}{dt}\left( \omega ^3(t)\frac{d}{dt}v(t)\right) U_n(t)dt \\&= \frac{2}{\pi n(n + 2)}\int _\Lambda Rv(t)U_n(t)\omega (t)dt \\&= \cdots = \frac{2}{\pi n^m(n + 2)^m}\int _\Lambda R^m v(t)U_n(t)\omega (t)dt. \\ \end{aligned} \end{aligned}$$
(17)

Now, according to (15), (17) and the definition of \(H_{\omega ,R}^r(\Lambda )\), we have:

$$\begin{aligned} \begin{aligned} \left\| P_N v - v \right\| _\omega ^2&= \sum \limits _{n = N + 1}^\infty {\hat{v}}_n^2 \left\| U_n \right\| _\omega ^2 \\&\le cN^{-4m}\sum \limits _{n = N + 1}^\infty \left( \frac{\int _\Lambda R^m v(t)U_n(t)\omega (t)dt}{\left\| U_n \right\| _\omega ^2} \right) ^2 \left\| U_n \right\| _\omega ^2 \\&\le cN^{-4m}\left\| R^m v \right\| _\omega ^2 \le cN^{-4m}\left\| v \right\| _{r,\omega ,R}^2. \\ \end{aligned} \end{aligned}$$

Next, we put \(r=2m+1\). By (9), (6) and integration by parts, we have:

$$\begin{aligned} \begin{aligned} {\hat{v}}_n&= \frac{2}{\pi n^m(n + 2)^m}\int _\Lambda R^m v(t)U_n(t)\omega (t)dt \\&= - \frac{2}{\pi n^{m + 1}(n + 2)^{m + 1}}\int _\Lambda R^m v(t)\frac{d}{dt}\left( \omega ^3(t)\frac{d}{dt}U_n(t)\right) dt \\&= \frac{2}{\pi n^{m + 1}(n + 2)^{m + 1}}\int _\Lambda \frac{d}{dt}\left( R^m v(t)\right) \frac{d}{dt}U_n(t)\,\omega ^3(t)dt . \\ \end{aligned} \end{aligned}$$
(18)

Now using (8) and (15), the following inequality is obtained:

$$\begin{aligned} \left\| {{P_N}v - v} \right\| _\omega ^2= & {} \sum \limits _{n = N + 1}^\infty {{\hat{v}}_n^2} \left\| {{U_n}} \right\| _\omega ^2 \\= & {} \sum \limits _{n = N + 1}^\infty {\frac{4}{{{\pi ^2}{{(n(n + 2))}^{2m + 2}}}}} {\left( {\int _\Lambda {\frac{d}{{dt}}({R^m}v(t))\frac{d}{{dt}}{U_n}(t){\omega ^3}(t)dt} } \right) ^2} \\= & {} \sum \limits _{n = N + 1}^\infty {\frac{4}{{{\pi ^2}{{(n(n + 2))}^{2m + 2}}}}} {\left( {\frac{{\int _\Lambda {\frac{d}{{dt}}({R^m}v(t))\frac{d}{{dt}}{U_n}(t){\omega ^3}(t)dt} }}{{\left\| {\frac{d}{{dt}}{U_n}} \right\| _{{\omega ^3}}^2}}} \right) ^2}\left\| {\frac{d}{{dt}}{U_n}} \right\| _{{\omega ^3}}^2 \\\le & {} c{N^{ - 2(2m + 1)}}\sum \limits _{n = N + 1}^\infty {{{\left( {\frac{{\int _\Lambda {\frac{d}{{dt}}({R^m}v(t))\frac{d}{{dt}}{U_n}(t){\omega ^3}(t)dt} }}{{\left\| {\frac{d}{{dt}}{U_n}} \right\| _{{\omega ^3}}^2}}} \right) }^2}\left\| {\frac{d}{{dt}}{U_n}} \right\| _{{\omega ^3}}^2} \\\le & {} c{N^{ - 2(2m + 1)}}\left\| {\frac{d}{{dt}}({R^m}v)} \right\| _{{\omega ^3}}^2 \le c{N^{ - 2(2m + 1)}}\left\| {\frac{d}{{dt}}({R^m}v){{(t + 2)}^{7/2}}} \right\| _\omega ^2 \\\le & {} c{N^{ - 2(2m + 1)}}\left\| v \right\| _{r,\omega ,R}^2. \\ \end{aligned}$$

The general result follows from the previous results and space interpolation. \(\square \)

Theorem 9

For any real \(r > 0\) and \(y \in H_{\omega ,R}^r(\Lambda )\), we have:

$$\begin{aligned} \left\| y_N - y \right\| _\omega \le {\tilde{a}}c\,(N - 2)^{-r}\left\| y \right\| _{r,\omega ,R}. \end{aligned}$$
(19)

Proof 5

Using Eq. (5) and Proposition 8, we get the following inequality:

$$\begin{aligned} \begin{aligned} \left\| y_N - y \right\| _\omega&= \left\| \sum \limits _{i = N + 1}^\infty c_i A_i(t) \right\| _\omega = \left\| \sum \limits _{i = N + 1}^\infty c_i \bigl (\underbrace{(1 + \tfrac{a}{2})U_i(t) + \tfrac{a}{2}U_{i - 2}(t)}_{Eq.\,(5)}\bigr ) \right\| _\omega \\&\le \left| 1 + \frac{a}{2} \right| \left\| \sum \limits _{i = N + 1}^\infty c_i U_i(t) \right\| _\omega + \left| \frac{a}{2} \right| \left\| \sum \limits _{i = N + 1}^\infty c_i U_{i - 2}(t) \right\| _\omega \\&\le \left| 1 + \frac{a}{2} \right| \underbrace{c'N^{-r}\left\| y \right\| _{r,\omega ,R}}_{Eq.\,(16)} + \left| \frac{a}{2} \right| \underbrace{c''(N - 2)^{-r}\left\| y \right\| _{r,\omega ,R}}_{Eq.\,(16)} \le {\tilde{a}}c\,(N - 2)^{-r}\left\| y \right\| _{r,\omega ,R}, \\ \end{aligned} \end{aligned}$$

where \({\tilde{a}}=\max \{|1+\frac{a}{2}|,|\frac{a}{2}|\},\ c=\max \{c',c''\}\). \(\square \)

This theorem shows that the a-polynomial approximation converges spectrally: for a smooth solution y, the error decays faster than any fixed power of N. Similar theorems for the Chebyshev polynomials of the first kind are proved in [20].

Numerical Examples

In each example in this section, we first set the auxiliary parameter a in the polynomials \(\left\{ A_n(t) \right\} _{n = 0}^\infty \) to the value reported in the corresponding tables. With the parameter fixed, the polynomials \(\left\{ A_n(t) \right\} _{n = 0}^\infty \) are fully determined, and we can restart and repeat the previous algorithm to approximate the solution of the given fractional differential equation. So the solution of (1) is approximated by the following function:

$$\begin{aligned} y(t) \approx {y_N}(t) = \sum \limits _{k = 0}^N {{c_k}{A_k}(t),} \end{aligned}$$

where \(c_k,\ k = 0,1, \ldots ,N\) are the unknown coefficients. The collocation points of the present method on the interval [0, T] are defined as \(t_j = j\,\Delta t\) for \(j=0,1,\ldots ,N\), where \(\Delta t = \frac{T}{N}\). Therefore, the unknown coefficients \(c_k,\ k = 0,1, \ldots ,N\) are obtained by solving the following nonlinear system of equations:

$$\begin{aligned} \begin{array}{ll} Res(t_j) = 0, &{} j = 1, \ldots ,N, \\ y(0) = 0. &{} \end{array} \end{aligned}$$

Then we have a nonlinear system with \(N+1\) equations and \(N+1\) unknowns. The following example codes were written in Mathematica 12.0.
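A sketch of this restarted stage, under the same assumptions as before (the hypothetical helpers `caputo_poly` and `a_poly`, with SciPy's `least_squares` standing in for FindMinimum); the parameter a is now passed in as a fixed number:

```python
import numpy as np
from scipy.optimize import least_squares
# relies on caputo_poly and a_poly from the earlier sketches

def solve_restarted(f, alpha, T, N, a_fixed, y0=0.0):
    """Restarted stage: a is fixed, only c_0..c_N are unknown
    (collocation at t_1..t_N plus y(0) = y0: N+1 equations, N+1 unknowns)."""
    t = np.linspace(0.0, T, N + 1)

    def residual(c):
        poly = sum(ck * a_poly(k, a_fixed) for k, ck in enumerate(c))
        res = caputo_poly(poly.coeffs[::-1], alpha, t[1:]) - f(t[1:], poly(t[1:]))
        return np.append(res, poly(0.0) - y0)

    return least_squares(residual, np.full(N + 1, 0.1)).x
```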

Table 1 Comparison of the absolute errors at \(T=1\) for Example 1
Table 2 Comparison of the absolute errors and order of convergence at \(T=1\) for Example 1

Example 1

Let us consider

$$\begin{aligned} \left\{ \begin{array}{ll} {}_CD_{0,t}^\alpha y(t) + y^2(t) = f(t), &{} 0< \alpha < 1,\ t > 0, \\ y(0) = 0, &{} \end{array} \right. \end{aligned}$$
(20)

where

$$\begin{aligned} f(t) = \frac{\Gamma (6)}{\Gamma (6 - \alpha )}t^{5 - \alpha } - \frac{3\Gamma (5)}{\Gamma (5 - \alpha )}t^{4 - \alpha } + \frac{2\Gamma (4)}{\Gamma (4 - \alpha )}t^{3 - \alpha } + (t^5 - 3t^4 + 2t^3)^2, \end{aligned}$$

and the exact solution is \(y(t) = t^5 - 3t^4 + 2t^3\). In this example, the present method is implemented for different values of \(\alpha \). Table 1 shows the absolute errors at different values of N. According to the obtained results, increasing the number of collocation points reduces the absolute error for each fixed value of \(\alpha \), confirming the convergence theorem proved in the previous section. The comparison with other methods is tabulated further below. Table 2 shows the absolute errors and the order of convergence,

$$\begin{aligned} \text {OC}= \frac{\log \left( error_{new}/error_{old}\right) }{\log \left( N_{old}/N_{new}\right) }, \end{aligned}$$

at \(t=1\). Because the amount of computation grows with N, round-off effects make the computed order of convergence negative for some values of N. Figure 1 shows the logarithm of the error for different values of \(\alpha \); it decreases as N increases.
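The order-of-convergence formula above is a one-line computation; a small sketch with a hypothetical helper name:

```python
import numpy as np

def order_of_convergence(err_old, err_new, n_old, n_new):
    """Empirical order p, assuming error ~ C * N**(-p)."""
    return np.log(err_new / err_old) / np.log(n_old / n_new)

# halving the error when N doubles corresponds to first order:
print(order_of_convergence(1e-2, 5e-3, 5, 10))  # -> 1.0
```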

Restarted collocation method: The results in Table 3 are better than those in Table 1; the absolute errors are significantly reduced. In Tables 4 and 5, the absolute errors of the present method are compared with the Adams and forward Euler methods [30]. There, the Adams and forward Euler methods use 640 and \(10^5\) points, respectively, whereas the present method uses 5 collocation points. The absolute errors of the present method are much smaller than those of the other two methods, which shows its noticeable superiority. In Fig. 2a, the exact and numerical solutions are plotted for \(\alpha =0.6\) and \(N=5\); Fig. 2b shows the corresponding absolute error. In Fig. 3a and b, the numerical and exact solutions and the absolute error are plotted at \(N=5\), \(\alpha =0.6\), \(T=2\) and \(a=0.49320\), respectively.

Fig. 1 Logarithm error at different values of \(\alpha \) for Example 1

Table 3 Comparison of the absolute errors at \(T=1\) for Example 1
Table 4 Comparison of the absolute errors at \(T=1\) for Example 1
Table 5 Comparison of the absolute errors at \(T=1\) for Example 1
Fig. 2 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 1 at \(N=5\), \(\alpha =0.6\), \(T=1\) and \(a=0.49320\)

Fig. 3 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 1 at \(N=5\), \(\alpha =0.6\), \(T=2\) and \(a=0.49320\)

Example 2

Let us consider

$$\begin{aligned} \left\{ \begin{array}{ll} {}_CD_{0,t}^\alpha y(t) + y(t) = f(t), &{} 0< \alpha < 1,\ t > 0, \\ y(0) = 0, &{} \end{array} \right. \end{aligned}$$
(21)
Table 6 Comparison of the absolute errors at \(T=1\) for Example 2
Table 7 Comparison of the absolute errors and order of convergence at \(T=1\) for Example 2

where

$$\begin{aligned} f(t) = \frac{{{t^{4 - \alpha }}}}{{\Gamma (5 - \alpha )}}, \end{aligned}$$

and the exact solution is \(y(t) = t^4 E_{\alpha ,5}(-t^\alpha )\), where \(E_{\alpha ,\beta }(z)\) is the two-parameter Mittag–Leffler function. The behaviour of this example can be analyzed by the method described in [28]. Table 6 shows the absolute errors at different values of N. According to the obtained results, increasing the number of collocation points reduces the absolute error for each fixed value of \(\alpha \), confirming the convergence theorem proved in the previous section. The comparison with other methods is tabulated further below. Table 7 shows the absolute errors and the order of convergence at \(T=1\). As before, round-off effects make the computed order of convergence negative for some values of N. Figure 4 shows the logarithm of the error for different values of \(\alpha \).
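The exact solution can be evaluated with the `mittag_leffler` sketch given after Definition 2; at \(\alpha = 1\) it reduces to an elementary expression, which provides a convenient sanity check:

```python
import numpy as np
# relies on the mittag_leffler sketch given after Definition 2

def y_exact(t, alpha):
    """Exact solution of Example 2: y(t) = t^4 * E_{alpha,5}(-t^alpha)."""
    t = np.asarray(t, dtype=float)
    return t ** 4 * mittag_leffler(alpha, 5.0, -t ** alpha).real

# at alpha = 1: y(t) = t^4 E_{1,5}(-t) = exp(-t) - 1 + t - t^2/2 + t^3/6
t = 0.7
assert np.isclose(y_exact(t, 1.0), np.exp(-t) - 1 + t - t ** 2 / 2 + t ** 3 / 6)
```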

Fig. 4 Logarithm error at different values of \(\alpha \) for Example 2

Restarted collocation method: The results in Table 8 are better than those in Table 6; the absolute errors are significantly reduced. In this example, the present method is implemented for different values of \(\alpha \). In Tables 9 and 10, the absolute errors of the present method are compared with the Adams and forward Euler methods [30]. There, the Adams and forward Euler methods use 640 and \(10^5\) points, respectively, whereas the present method uses 20 collocation points. The absolute errors of the present method are much smaller than those of the other two methods, which shows its noticeable superiority. In Fig. 5a, the exact and numerical solutions are plotted for \(\alpha =0.9\) and \(N=20\); Fig. 5b shows the corresponding absolute error. In Fig. 6a and b, the numerical and exact solutions and the absolute error are plotted at \(N=10\), \(\alpha =0.4\), \(T=3\) and \(a=2.41145 \times 10^{-7}\), respectively.

Table 8 Comparison of the absolute errors at \(T=1\) for Example 2
Table 9 Comparison of the absolute errors at \(T=1\) for Example 2
Table 10 Comparison of the absolute errors at \(T=1\) for Example 2

Example 3

Let us consider

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {_CD_{0,t}^\alpha y(t) + y(t) = f(t),}&{}{0< \alpha \le 2,t > 0,} \\ {y(0) = 0,}&{}{(0< \alpha \le 2),} \\ {y'(0) = 0,}&{}{(1 < \alpha \le 2),} \end{array}} \right. \end{aligned}$$
(22)

where

$$\begin{aligned} f(t) = \frac{{\Gamma (4 + \alpha )}}{6}{t^3} + {t^{3 + \alpha }}, \end{aligned}$$

and the exact solution is \(y(t) = t^{3 + \alpha }\). A complete description of the behavior of this example can also be given by the method of [28]. In this example, the present method is used in two regimes, \(0 < \alpha \le 1\) and \(1 < \alpha \le 2\). Table 11 shows the absolute errors at different values of N. According to the obtained results, increasing the number of collocation points reduces the absolute error for each fixed value of \(\alpha \), confirming the convergence theorem proved in the previous section. The comparison with other methods is tabulated further below. Table 12 shows the absolute errors and the order of convergence at \(T=1\). As before, round-off effects make the computed order of convergence negative for some values of N. Figure 7 shows the logarithm of the error for different values of \(\alpha \), which decreases with increasing N; it is observed that the error for integer values of \(\alpha \) is smaller than for non-integer values.

Fig. 5 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 2 at \(N=20\), \(\alpha =0.9\), \(T=1\) and \(a=2.41145 \times {10^{ - 7}}\)

Fig. 6 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 2 at \(N=10\), \(\alpha =0.4\), \(T=3\) and \(a=2.41145 \times {10^{ - 7}}\)

Restarted collocation method: In this example, to compare the present method with the high-order scheme of [13], the maximum absolute error \(\max _i \left| y(t_i) - y_N(t_i) \right| \) is used. In Table 13, we assume that \(0 < \alpha \le 1\) and report the results for \(\alpha = 0.2, 0.5, 1\). The present method uses fewer collocation points than the high-order scheme and obtains better results.

In Table 14, we assume that \(1 < \alpha \le 2\) and report the results for \(\alpha = 1.5, 2\). Again, the present method uses fewer collocation points than the high-order scheme and obtains better results. In Fig. 8a, the exact and numerical solutions are plotted for \(\alpha =1.5\) and \(N=20\); Fig. 8b shows the corresponding absolute error. In Fig. 9a and b, the numerical and exact solutions and the absolute error are plotted at \(N=10\), \(\alpha =1.3\), \(T=4\) and \(a=0.57594\), respectively. In Fig. 10a and b, the numerical and exact solutions and the absolute error are plotted at \(N=40\), \(\alpha =2\), \(T=3\) and \(a=0.03264\), respectively. It can be observed that the error decreases as the number of collocation points increases.

Table 11 Comparison of the maximum absolute errors for Example 3
Table 12 Comparison of the maximum absolute errors and order of convergence for Example 3
Fig. 7 Logarithm error at different values of \(\alpha \) for Example 3

Example 4

Assume the following nonlinear one–dimensional fractional Bratu equation:

$$\begin{aligned} \left\{ \begin{array}{ll} {}_CD_{0,t}^\alpha y(t) + e^{y(t)} = 0, &{} 1 < \alpha \le 2, \\ y(0) = y'(0) = 0, &{} \end{array} \right. \end{aligned}$$
(23)

where the exact solution is \(y(t) = 2\ln \left( {\text {sech}} \left( \frac{t}{\sqrt{2}} \right) \right) \). Table 15 shows the maximum absolute errors at different values of N for \(\alpha =2\). According to the obtained results, increasing the number of collocation points reduces the maximum absolute error.

Restarted collocation method: In Table 16, the maximum absolute errors obtained after fixing the parameter a are compared at different values of N for \(\alpha =2\). In Fig. 11, the numerical and exact solutions are shown at different values of \(\alpha \) and \(N=15\). In Fig. 12a and b, the numerical and exact solutions and the absolute error are plotted at \(N=30\), \(\alpha =2\) and \(a=0.38308\), respectively. In Fig. 13a and b, the numerical and exact solutions and the absolute error are plotted at \(N=30\), \(\alpha =2\), \(T=5\) and \(a=2.83486\), respectively. In Fig. 14a and b, the numerical and exact solutions and the absolute error are plotted at \(N=40\), \(\alpha =2\), \(T=5\) and \(a=2.28497\), respectively. It can be observed that the error decreases as the number of collocation points increases.

Table 13 Comparison of the maximum absolute errors for Example 3
Table 14 Comparison of the maximum absolute errors for Example 3 at \(\alpha =1.5\) and \(\alpha =2\)

Example 5

Assume the following nonlinear multi–order fractional differential equation:

$$\begin{aligned} \begin{array}{*{20}{l}} {{}_CD_{0,t}^\alpha y(t) + {}_CD_{0,t}^\beta y(t) + {}_CD_{0,t}^\gamma y(t) + {y^3}(t) = f(t),} \\ {y(0) = 0,\quad y(1) = \frac{1}{3},\quad y'\left( \frac{3}{5}\right) = \frac{9}{{25}},} \end{array} \end{aligned}$$
(24)

where \(y(t) = \frac{t^3}{3}\) is the exact solution and the function f is obtained from the exact solution at \(\alpha = \frac{11}{5},\ \beta = \frac{3}{4},\ \gamma = \frac{5}{4}\); a sketch of the corresponding residual is given below. Table 17 shows the maximum absolute errors at different values of N for \(\alpha = \frac{11}{5},\ \beta = \frac{3}{4},\ \gamma = \frac{5}{4}\). According to the obtained results, increasing the number of collocation points reduces the maximum absolute error.
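A sketch of the corresponding multi-order residual, reusing the hypothetical helpers from the earlier sketches; note that the side conditions of (24) replace the plain initial condition:

```python
import numpy as np
# relies on caputo_poly and a_poly from the earlier sketches

def residual_multi(x, f, t, alpha=11/5, beta=3/4, gam=5/4):
    """Residual of (24) at the points t, plus the three side conditions."""
    c, a = x[:-1], x[-1]
    poly = sum(ck * a_poly(k, a) for k, ck in enumerate(c))
    mono = poly.coeffs[::-1]                   # ascending monomial coefficients
    res = (caputo_poly(mono, alpha, t) + caputo_poly(mono, beta, t)
           + caputo_poly(mono, gam, t) + poly(t) ** 3 - f(t))
    # side conditions from (24): y(0) = 0, y(1) = 1/3, y'(3/5) = 9/25
    side = [poly(0.0), poly(1.0) - 1.0 / 3.0, poly.deriv()(0.6) - 0.36]
    return np.concatenate([res, side])
```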

Restarted collocation method: In Table 18, the maximum absolute errors obtained after fixing the parameter a are compared at different values of N for \(\alpha = \frac{11}{5},\ \beta = \frac{3}{4},\ \gamma = \frac{5}{4}\). In Fig. 15a and b, the numerical and exact solutions and the absolute error are plotted at \(N=20\), \(\alpha = \frac{11}{5},\ \beta = \frac{3}{4},\ \gamma = \frac{5}{4}\) and \(a=-0.18988\), respectively. In Fig. 16a and b, the numerical and exact solutions and the absolute error are plotted at \(N=15\), \(\alpha = \frac{11}{5},\ \beta = \frac{3}{4},\ \gamma = \frac{5}{4}\), \(T=5\) and \(a=2.09970 \times 10^{-2}\), respectively.

Fig. 8 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 3 at \(N=20\), \(\alpha =1.5\) and \(T=1\)

Fig. 9 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 3 at \(N=10\), \(\alpha =1.3\), \(T=4\) and \(a=0.57594\)

Fig. 10 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 3 at \(N=40\), \(\alpha =2\), \(T=3\) and \(a=0.03264\)

Table 15 Comparison of the maximum absolute errors and order of convergence for Example 4 at \(\alpha =2\)
Table 16 Comparison of the maximum absolute errors for Example 4 at \(\alpha =2\)
Fig. 11 Graphs of numerical and exact solution at different values of \(\alpha \) for Example 4 at \(N=15\)

Fig. 12 Graphs of numerical and exact solution (a), Graph of maximum absolute error (b) for Example 4 at \(N=30\), \(\alpha =2\) and \(a=0.38308\)

Fig. 13 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 4 at \(N=30\), \(\alpha =2\), \(T=5\) and \(a=2.83486\)

Fig. 14 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 4 at \(N=40\), \(\alpha =2\), \(T=5\) and \(a=2.28497\)

Table 17 Comparison of the maximum absolute errors and order of convergence for Example 5 at \(\alpha = \frac{{11}}{5},\beta = \frac{3}{4},\gamma = \frac{5}{4}\)
Table 18 Comparison of the maximum absolute errors for Example 5 at \(\alpha = \frac{{11}}{5},\beta = \frac{3}{4},\gamma = \frac{5}{4}\)
Fig. 15 Graphs of numerical and exact solution (a), Graph of maximum absolute error (b) for Example 5 at \(N=20\), \(\alpha = \frac{{11}}{5},\beta = \frac{3}{4},\gamma = \frac{5}{4}\) and \(a=-0.18988\)

Fig. 16 Graphs of numerical and exact solution (a), Graph of absolute error (b) for Example 5 at \(N=15\), \(\alpha = \frac{{11}}{5},\beta = \frac{3}{4},\gamma = \frac{5}{4}\), \(T=5\) and \(a=2.09970 \times {10^{ - 2}}\)

Conclusion

The present method is, to the best of our knowledge, new. In this method, the optimal value of the parameter a is computed first, and the polynomials with this known value of a are then used to approximate the solution of the differential equation. As the examples show, the present method produces more accurate results than the two methods it was compared with, while using a much smaller number of collocation points. The tables and figures also show that the method achieves better results for integer values of \(\alpha \); this is expected, since for integer orders the Caputo derivative reduces to the classical derivative. The convergence of the method is guaranteed by the theorem proved above, and the method is easily implemented for fractional ordinary differential equations. The ease of applying fractional derivatives to the a-polynomials is another advantage, since it reduces the complexity of the solution process. Finally, the difference between the errors obtained before and after the restarted step highlights the significant role of the auxiliary parameter in the a-polynomials.