Introduction

Since the last century, fractional differential equations have played an important role in solving many physical problems, because they can describe numerous natural phenomena, engineering problems, and economic and commercial models [1,2,3,4]. Some of these phenomena are complex and very difficult to understand and solve; however, they can be analysed and solved far more easily when they are described by fractional ordinary differential equations.

Because exact solutions to such problems are rarely available, the numerical treatment of fractional differential equations has become widespread and has flourished over the last two decades. Pedas and Tamme [5] investigated the numerical solution of fractional differential equations with initial values by piecewise polynomial collocation methods; they studied the order of convergence and established a superconvergence effect for a special choice of collocation points. Yan et al. [6] introduced an accurate numerical technique for solving differential equations of fractional order.

In this work we present two approaches, called the differentiation Galerkin method and the integration Galerkin method, which are applied to linear and nonlinear equations using two families of approximating functions. The first is based on a generalized fractional form of the Laguerre function, and the second on the generalized fractional Mittag–Leffler function, both tailored to the fractional differential problems under discussion. The Galerkin method is an approximation technique for obtaining numerical solutions of differential problems and has several advantages for this class of problems: the unknown coefficients can be obtained easily with standard numerical software, so the method is efficient and fast in producing results [7, 8].

The outline of the paper is as follows. In Sect. 2, we recall the definitions of fractional calculus. The properties of the generalized fractional Mittag–Leffler function are explained in Sect. 3.1. The Laguerre function and the generalized fractional Laguerre function are reformulated for approximation purposes in Sects. 3.2 and 3.3. In Sect. 4, we explain how to use the Galerkin method with linear and nonlinear equations and present four numerical methods for the given problem, namely the Mittag differential Galerkin method (MDGM), the Mittag integral Galerkin method (MIGM), the Laguerre differential Galerkin method (LDGM), and the Laguerre integral Galerkin method (LIGM). In Sect. 5, we state the theorems used to estimate the error. In Sect. 6, we include several examples that illustrate the simplicity and capability of the proposed methods. Finally, conclusions are presented in Sect. 7.

Preliminaries

In this section, some basic properties of fractional-order derivatives and integrals are recalled. The widely used Caputo definition of the fractional derivative is:

Definition 1

[9] The Caputo fractional derivative \( D_C^\mu \) of order \(\mu >0\), \(n \in \mathbb {N}\) is given by:

$$\begin{aligned} D_C^\mu f(x)= \left\{ \begin{array}{ll} \frac{1}{\varGamma (n-\mu )} \int _{a}^{x} (x-\tau )^{n-\mu -1}f^{(n)}(\tau ) d\tau ,~~ x \ge a ,~~ &{} n-1<\mu < n, \\ \frac{d^{(n)}f(x)}{dx^n},&{} \mu =n. \\ \end{array} \right. \end{aligned}$$
(1)
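
To make Definition 1 concrete, the following sketch (an illustration in Python, not part of the original derivation) evaluates the integral in Eq. (1) numerically for \(f(x)=x^{2}\) and \(\mu =0.5\) and compares it with the closed form \(\varGamma (3)/\varGamma (3-\mu )\,x^{2-\mu }\) obtained from Eq. (5) below; all variable names are illustrative.

```python
# Numerical check of Definition 1 (a sketch, not from the paper): the Caputo
# derivative of f(x) = x**2 of order mu = 0.5 computed from Eq. (1) agrees with
# the closed form Gamma(3)/Gamma(3 - mu) * x**(2 - mu) given later in Eq. (5).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

mu, x = 0.5, 0.7
n = 1                                        # n - 1 < mu < n
# integrate f^{(n)}(tau) = 2*tau against the algebraic weight (x - tau)**(n - mu - 1)
val, _ = quad(lambda t: 2.0 * t, 0.0, x, weight="alg", wvar=(0.0, n - mu - 1.0))
caputo_numeric = val / gamma(n - mu)
caputo_closed = gamma(3.0) / gamma(3.0 - mu) * x ** (2.0 - mu)
assert np.isclose(caputo_numeric, caputo_closed)
```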

The Caputo fractional derivative operator satisfies the following properties [10]: For constants \( \zeta _k,k=1,2,\ldots ,n, \) we have:

$$\begin{aligned} D_C^\mu \sum _{k=1}^n \zeta _k f_k(x)= \sum _{k=1}^n \zeta _k ~~D_C^\mu f_k(x), \end{aligned}$$
(2)

then

$$\begin{aligned}&D_C^\mu \big [ \zeta _1 f_1(x)+\zeta _2 f_2(x)+\cdots +\zeta _n f_n(x) \big ] \nonumber \\&\quad =\big [ \zeta _1~ {D_C^\mu } f_1(x)+\zeta _2~ {D_C^\mu } f_2(x)+\cdots +\zeta _n~ {D_C^\mu } f_n(x) \big ]. \end{aligned}$$
(3)

In fact, if \(\mu \) is an integer, the Caputo differential operator coincides with the usual differential operator. For the power function \((x-a)^{n}\) we have:

$$\begin{aligned} D_C^\mu (x-a)^{n}= \left\{ \begin{array}{ll} \frac{\varGamma (n+1)}{\varGamma (n-\mu +1)} (x-a)^{n-\mu } , &{} n \ne 0,\\ 0,&{} n=0. \\ \end{array} \right. \end{aligned}$$
(4)

If the function is \(x^{n}\), then the Caputo fractional derivative is:

$$\begin{aligned} D_C^\mu x^n= \left\{ \begin{array}{ll}\frac{\varGamma (n+1)}{\varGamma (n+1-\mu )} x^{n-\mu }, &{} n>\mu -1,\\ n ~x^{n-1}&{} \mu =n. \\ \end{array} \right. \end{aligned}$$
(5)

The fractional integral of \(x^{n}\) is defined by

$$\begin{aligned} I^\mu x^n= \left\{ \begin{array}{ll} \frac{\varGamma (n+1)}{\varGamma (n+1+\mu )}x^{n+\mu }, &{} x>0,~~ n>\mu -1, \\ \frac{x^{n+1}}{n+1}, &{} x>0,~~ \mu =n, \\ \xi _{n+1}, &{} x=0, \\ \end{array} \right. \end{aligned}$$
(6)

To illustrate the definitions of fractional differentiation and integration, we use MATLAB, take \(n = 2\) as an example, and vary \(\mu \) from 0 to 3, as shown in Fig. 1.

Fig. 1 a Fractional derivative of \(x^2\) in Eq. (5). b Fractional integral of \(x^2\) in Eq. (6)
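
The curves of Fig. 1 can be reproduced directly from the closed forms (5) and (6); the sketch below (in Python rather than the MATLAB used above, with illustrative function names) evaluates both expressions for \(n=2\) and several values of \(\mu \in (0,3)\).

```python
# Sketch reproducing the data behind Fig. 1 from Eqs. (5) and (6) for f(x) = x**2.
import numpy as np
from scipy.special import gamma

def caputo_deriv_power(x, n, mu):
    """Caputo derivative of x**n of order mu, first branch of Eq. (5) (n > mu - 1)."""
    return gamma(n + 1) / gamma(n + 1 - mu) * x ** (n - mu)

def rl_integral_power(x, n, mu):
    """Fractional integral of x**n of order mu, first branch of Eq. (6) (x > 0)."""
    return gamma(n + 1) / gamma(n + 1 + mu) * x ** (n + mu)

x = np.linspace(0.01, 1.0, 100)
for mu in (0.25, 0.5, 1.0, 1.5, 2.0, 2.5):
    d = caputo_deriv_power(x, 2, mu)     # panel (a): fractional derivative of x^2
    i = rl_integral_power(x, 2, mu)      # panel (b): fractional integral of x^2
```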

Some Properties of Approximating Polynomials

Generalized Fractional Mittag–Leffler Function

The one-parameter Mittag–Leffler function is defined as [11]:

$$\begin{aligned} E^\beta (x)= \sum _{k=0}^\infty \frac{x^k}{\varGamma (\beta k+1)},~~\beta >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(7)

The two-parameter Mittag–Leffler function is given by [11]:

$$\begin{aligned} E^{\beta , \gamma } (x)= \sum _{k=0}^\infty \frac{x^k}{\varGamma (\beta k+\gamma )},~~\beta ,~\gamma >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(8)

As special cases, we have \(E^{\beta ,1} (x)=E^\beta (x)\) and \(E^{1,1} (x)=E^1(x)=e^x\). The finite (truncated) version of the two-parameter Mittag–Leffler function, for any integer n, is given by [12]:

$$\begin{aligned} E_n^{\beta , \gamma } (x)= \sum _{k=0}^n \frac{x^k}{\varGamma (\beta k+\gamma )},~~\beta ,~\gamma >0,~~ x\in {\mathbb {R}}, \end{aligned}$$
(9)

that is

$$\begin{aligned} E_n^{\beta , \gamma } (x)=\frac{x^n}{\varGamma (\beta n+\gamma )}+ \frac{x^{n-1}}{\varGamma (\beta (n-1)+\gamma )}+ \cdots +\frac{x}{\varGamma (\beta +\gamma )}+\frac{1}{\varGamma (\gamma )}, \end{aligned}$$
(10)

so, we can write

$$\begin{aligned} E_n^{\beta , \gamma } (x)=\frac{x^n}{\varGamma (\beta n+\gamma )}+\text {function of lower degrees}. \end{aligned}$$
(11)

The generalized Mittag–Leffler function is defined as:

$$\begin{aligned} E_{\alpha } (x)= \sum _{k=0}^{\infty } \frac{x^ {\alpha k}}{\varGamma (\alpha k+1)},~~\alpha >0,~~ x\in {\mathbb {R}}, \end{aligned}$$
(12)

and its modified (truncated) form is

$$\begin{aligned} E_{\alpha } (x)= \sum _{k=0}^{\lceil \alpha \rceil } \frac{x^ {\alpha k}}{\varGamma (\alpha k+1)},~~\alpha >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(13)

From Eq. (13), we obtain the following modifications of the generalized Mittag–Leffler function (GMLF):

$$\begin{aligned}&E_{\alpha }^{\gamma } (x)= \sum _{k=0}^{\lceil \alpha \rceil } \frac{x^{\alpha k}}{\varGamma (\alpha k+\gamma )},~~ \alpha ,~\gamma >0,~~ x\in {\mathbb {R}}, \end{aligned}$$
(14)
$$\begin{aligned}&E_{\alpha }^{\beta , \gamma } (x)= \sum _{k=0}^{\lceil \alpha \rceil } \frac{x^{\frac{\alpha }{\lceil \alpha \rceil } k}}{\varGamma (\beta k+\gamma )},~~\alpha ,~\beta ,~ \text {and}~ \gamma >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(15)

The fractional two-parameter Mittag–Leffler function in Eq. (15) generalizes the usual finite Mittag–Leffler function in Eq. (9): when \(\alpha =n\) is an integer, Eqs. (9) and (15) are identical.

The fractional order derivative of Eq. (15) is defined by:

$$\begin{aligned} D_C^{\mu }E_{\alpha }^{\beta , \gamma } (x)= \sum _{k=0}^{\lceil \alpha \rceil } \frac{\varGamma (\frac{\alpha }{\lceil \alpha \rceil } k+1)}{\varGamma (\frac{\alpha }{\lceil \alpha \rceil } k-\mu +1)\varGamma (\beta k+\gamma )}x^{\frac{\alpha }{\lceil \alpha \rceil } k-\mu },~~\alpha ,~\beta ,~ \text {and}~ \gamma >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(16)

The fractional order integral of Eq. (15) is defined by:

$$\begin{aligned} I^{\mu }E_{\alpha }^{\beta , \gamma } (x)= \sum _{k=0}^{\lceil \alpha \rceil } \frac{\varGamma (\frac{\alpha }{\lceil \alpha \rceil } k+1)}{\varGamma (\frac{\alpha }{\lceil \alpha \rceil } k+\mu +1)\varGamma (\beta k+\gamma )}x^{\frac{\alpha }{\lceil \alpha \rceil } k+\mu },~~\alpha ,~\beta ,~ \text {and}~ \gamma >0,~~ x\in {\mathbb {R}}. \end{aligned}$$
(17)
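
For later use, the sums (15)–(17) can be evaluated directly; the following sketch (with illustrative function names, implementing the sums exactly as written) is one possible way to do so.

```python
# Sketch (illustrative names) of the generalized fractional Mittag-Leffler function
# of Eq. (15) and the operators (16)-(17), implemented exactly as the sums are written.
import math
import numpy as np
from scipy.special import gamma

def _exponents(alpha):
    k = np.arange(math.ceil(alpha) + 1)
    return k, (alpha / math.ceil(alpha)) * k        # powers (alpha/ceil(alpha)) * k

def gmlf(x, alpha, beta, gam):
    """E_alpha^{beta,gamma}(x), Eq. (15)."""
    k, p = _exponents(alpha)
    return np.sum(np.asarray(x, float)[..., None] ** p / gamma(beta * k + gam), axis=-1)

def gmlf_caputo(x, alpha, beta, gam, mu):
    """Caputo derivative of order mu, Eq. (16); evaluate for x > 0 (the k = 0 term has exponent -mu)."""
    k, p = _exponents(alpha)
    c = gamma(p + 1) / (gamma(p - mu + 1) * gamma(beta * k + gam))
    return np.sum(c * np.asarray(x, float)[..., None] ** (p - mu), axis=-1)

def gmlf_integral(x, alpha, beta, gam, mu):
    """Fractional integral of order mu, Eq. (17)."""
    k, p = _exponents(alpha)
    c = gamma(p + 1) / (gamma(p + mu + 1) * gamma(beta * k + gam))
    return np.sum(c * np.asarray(x, float)[..., None] ** (p + mu), axis=-1)
```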

Laguerre Function

The generalized Laguerre polynomials \(L_{\mu , n} (x)\), \(n=0, 1, 2, \ldots \), with \(\mu >-1\), are defined on the interval \([0, \infty )\) by the recurrence [13]:

$$\begin{aligned} (n+1)L_{\mu , n+1} (x)=(2n+\mu +1-x)L_{\mu , n} (x)-(n+\mu )L_{\mu , n-1} (x),~~ x\in {\mathbb {R}}, \end{aligned}$$
(18)

where \(L_{\mu , 0} (x)=1\) and \(L_{\mu , 1} (x)=1+\mu -x.\) The explicit formula of generalized Laguerre polynomials is given by:

$$\begin{aligned} L_{\mu , n} (x)= \sum _{k=0}^n (-1)^k \frac{\varGamma (n+\mu +1) x^k}{\varGamma (k+\mu +1)\varGamma (n-k +1)\varGamma (k+1)},~~ x\in {\mathbb {R}}. \end{aligned}$$
(19)
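
As a quick consistency check (not taken from the paper), the recurrence (18) and the explicit formula (19) can be compared numerically; the sketch below does so for the illustrative values \(\mu =0.5\) and \(n=4\).

```python
# Consistency check (a sketch): the recurrence (18) and the explicit formula (19)
# produce the same generalized Laguerre polynomials.
import numpy as np
from scipy.special import gamma

def laguerre_explicit(x, mu, n):
    """L_{mu,n}(x) from Eq. (19)."""
    k = np.arange(n + 1)
    c = (-1.0) ** k * gamma(n + mu + 1) / (gamma(k + mu + 1) * gamma(n - k + 1) * gamma(k + 1))
    return np.sum(c * np.asarray(x, float)[..., None] ** k, axis=-1)

def laguerre_recurrence(x, mu, n):
    """L_{mu,n}(x) from the three-term recurrence (18)."""
    Lm1, L = np.ones_like(x), 1.0 + mu - x                 # L_{mu,0} and L_{mu,1}
    if n == 0:
        return Lm1
    for j in range(1, n):                                   # builds L_{mu,j+1}
        Lm1, L = L, ((2 * j + mu + 1 - x) * L - (j + mu) * Lm1) / (j + 1)
    return L

x = np.linspace(0.0, 5.0, 7)
assert np.allclose(laguerre_explicit(x, 0.5, 4), laguerre_recurrence(x, 0.5, 4))
```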

Generalized Fractional Laguerre Functions

Here, we give the representation of the fractional-order Laguerre-type functions \(L_{\alpha }^{\beta ,\gamma }(x)\). Recall that the Laguerre Rodrigues’ formula [14] is:

$$\begin{aligned} L_{n}^{\beta ,\gamma }(x)= \frac{x^{-\beta } e^{\gamma x}}{\varGamma (n+1)} D^{n}(x^{n+\beta } e^{-\gamma x}), \end{aligned}$$
(20)

and the fractional-order version of the Laguerre polynomial (20) is:

$$\begin{aligned} L_{\alpha }^{\beta ,\gamma }(x)= \frac{x^{-\beta } e^{\gamma x}}{\varGamma (\alpha +1)} D^{\alpha }(x^{\alpha +\beta } e^{-\gamma x}). \end{aligned}$$
(21)

Using the Leibniz rule for fractional derivatives [15], which states:

$$\begin{aligned} D^{\alpha }(u(x)v(x)) = \sum _{k=0}^{\lceil \alpha \rceil } \Big (^{\alpha }_{k}\Big )D^{\alpha -k}u(x)D^{k}v(x), \end{aligned}$$
(22)

Eq. (21), by means of the Leibniz rule (22), becomes:

$$\begin{aligned} L_{\alpha }^{\beta ,\gamma }(x)= \frac{x^{-\beta } e^{\gamma x}}{\varGamma (\alpha +1)}\sum _{k=0}^{\lceil \alpha \rceil } \Big (^{\alpha }_{k}\Big ) D^{\alpha -k}x^{\alpha +\beta } D^{k}e^{-\gamma x}. \end{aligned}$$
(23)

By using Eqs. (4) and (23) we get:

$$\begin{aligned}&L_{\alpha }^{\beta ,\gamma }(x)= \frac{x^{-\beta } e^{\gamma x}}{\varGamma (\alpha +1)}\sum _{k=0}^{\lceil \alpha \rceil } \Big (^{\alpha }_{k}\Big ) \frac{\varGamma (\alpha +\beta +1)}{\varGamma (k+\beta +1)} x^{k+\beta } (-\gamma )^{k} e^{-\gamma x}, \nonumber \\&\quad = \sum _{k=0}^{\lceil \alpha \rceil } (-\gamma )^{k} \frac{\varGamma (\alpha +\beta +1)}{\varGamma (k+1) \varGamma (\alpha -k+1) \varGamma (k+\beta +1)} x^{k} . \end{aligned}$$
(24)

Lemma 1

Let \(L_{\alpha }^{\beta ,\gamma }(x)\) be a generalized fractional Laguerre function. Then its fractional-order derivative is given by:

$$\begin{aligned} D_C^\mu L_{\alpha }^{\beta ,\gamma }(x)=\sum _{k=1}^{\lceil \alpha \rceil } (-\gamma )^k \frac{\varGamma (\alpha +\beta +1)}{\varGamma (k-\mu +1) \varGamma (\alpha -k+1) \varGamma (k+\beta +1)} x^{k-\mu }, \end{aligned}$$
(25)

where    \(x\in \mathbb {R},~~ \mu >0\)  and  \(\alpha ,~ \beta ,~ \gamma >0.\)

Lemma 2

Let \(L_{\alpha }^{\beta ,\gamma }(x)\) be a generalized fractional Laguerre function. Then its fractional-order integral is given by:

$$\begin{aligned} I^\mu L_{\alpha }^{\beta ,\gamma }(x)=\sum _{k=0}^{\lceil \alpha \rceil } (-\gamma )^k \frac{\varGamma (\alpha +\beta +1)}{\varGamma (k+\mu +1) \varGamma (\alpha -k+1) \varGamma (k+\beta +1)} x^{k+\mu }, \end{aligned}$$
(26)

where    \(x\in \mathbb {R},~~ \mu >0\)  and  \(\alpha ,~ \beta ,~ \gamma >0.\)
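
A corresponding sketch of Eq. (24) and of the operators in Lemmas 1 and 2 is given below; again, the function names are only illustrative.

```python
# Sketch of L_alpha^{beta,gamma}(x) from Eq. (24), its Caputo derivative from
# Lemma 1 and its fractional integral from Lemma 2 (illustrative names).
import math
import numpy as np
from scipy.special import gamma

def _coef(k, alpha, beta, gam, shift):
    # common coefficient, with Gamma(k + shift + 1) in place of Gamma(k + 1)
    return (-gam) ** k * gamma(alpha + beta + 1) / (
        gamma(k + shift + 1) * gamma(alpha - k + 1) * gamma(k + beta + 1))

def gfl(x, alpha, beta, gam):
    """L_alpha^{beta,gamma}(x), Eq. (24)."""
    k = np.arange(math.ceil(alpha) + 1)
    return np.sum(_coef(k, alpha, beta, gam, 0.0) * np.asarray(x, float)[..., None] ** k, axis=-1)

def gfl_caputo(x, alpha, beta, gam, mu):
    """D_C^mu L_alpha^{beta,gamma}(x), Eq. (25); the constant k = 0 term drops out."""
    k = np.arange(1, math.ceil(alpha) + 1)
    return np.sum(_coef(k, alpha, beta, gam, -mu) * np.asarray(x, float)[..., None] ** (k - mu), axis=-1)

def gfl_integral(x, alpha, beta, gam, mu):
    """I^mu L_alpha^{beta,gamma}(x), Eq. (26)."""
    k = np.arange(math.ceil(alpha) + 1)
    return np.sum(_coef(k, alpha, beta, gam, mu) * np.asarray(x, float)[..., None] ** (k + mu), axis=-1)
```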

Methods of Numerical Solution

Galerkin Method

To introduce the Galerkin technique, let us begin with an abstract problem posed as a weak formulation on the Hilbert space \(L^2~ [a,b]\), equipped with the inner product:

$$\begin{aligned} (u,v)=\int _{a}^{b} u(x)~v(x)~dx,~~~u(x),~v(x)\in L^2~ [a,b]. \end{aligned}$$
(27)

Now, consider the differential problem \(F(u)=0\) and let

$$\begin{aligned} u_a (x)=\sum _{k=0}^{n} a_k \varphi _k (x), \end{aligned}$$
(28)

be an approximate solution; the Galerkin method then finds the unknown coefficients by solving the system:

$$\begin{aligned} (F(u_a ),\varphi _k )=0,~~~k=0,1,\ldots ,n. \end{aligned}$$
(29)

This is known as the weak Galerkin formulation [16].
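
As a tiny concrete instance (an illustrative sketch, not from the paper), for the trivial problem \(F(u)=u-g=0\) the conditions (29) reduce to the \(L^2\) projection of g onto \(\mathrm {span}\{\varphi _0,\ldots ,\varphi _n\}\); discretizing the inner product (27) by the trapezoidal rule gives:

```python
# L2 projection via Eqs. (27)-(29) for F(u) = u - g = 0 (illustrative example:
# monomial basis, g(x) = exp(x); the trapezoidal weights reappear in Sect. 4.2).
import numpy as np

x = np.linspace(0.0, 1.0, 401)
w = np.ones_like(x); w[0] = w[-1] = 0.5; w *= x[1] - x[0]      # trapezoidal rule
inner = lambda u, v: np.sum(w * u * v)                         # inner product, Eq. (27)

phi = np.vstack([x ** k for k in range(4)])                    # basis phi_k(x) = x**k
g = np.exp(x)
A = np.array([[inner(pi, pj) for pj in phi] for pi in phi])    # (phi_i, phi_j)
b = np.array([inner(g, pk) for pk in phi])                     # (g, phi_k), Eq. (29)
a = np.linalg.solve(A, b)
u_approx = a @ phi                                             # u_a from Eq. (28)
```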

Differential Galerkin Method

Linear Equation

Consider the linear multi-order fractional differential equation with initial conditions:

$$\begin{aligned}&A_{r}(x) D_C^{\mu _r}y(x)+\sum _{k=1}^{r-1}A_{k}(x)D_C^{\mu _{ k}} y(x)+y(x)=f(x),~~~~~ x \in [0,1], \end{aligned}$$
(30)
$$\begin{aligned}&y^{(i)} (0)=d_{i},~~~~~ i=0,1,2,\ldots ,\lceil \mu _{r} \rceil -1, \end{aligned}$$
(31)

where \(0<\mu _{1}<\cdots<\mu _{r-1}<\mu _{r}, ~~ A_{k}(x),~~k=1,2,\ldots ,r,~~f(x)\) are known continuous functions on [0, 1] and \(d_{i},~~i=0,1,2,\ldots ,\lceil \mu _r \rceil -1,\) are given constants.

Let

$$\begin{aligned} y(x)=\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x),~~~ D_C^{\mu _r}y(x)=\sum _{i=1}^{n} a_{i}~ D_C^{\mu _r} M^{\alpha _{i}}(x), \end{aligned}$$
(32)

where \(M^{\alpha _{i}}(x) \in \{E_{\alpha _{i}}^{\beta ,\gamma }(x), L_{\alpha _{i}}^{\beta ,\gamma }(x)\}\) and \(a_{i},~~ i=0,1,2,\ldots ,n,\) are unknown constants.

Substituting from Eq. (32) into Eqs. (30) and (31) we obtain:

$$\begin{aligned}&A_{r}(x)\sum _{i=1}^{n} a_{i}~ D_C^{\mu _r} M^{\alpha _{i}}(x)+\sum _{k=1}^{r-1} A_{k}(x) \sum _{i=1}^{n} a_{i} ~D_C^{\mu _{k}} M^{\alpha _{i}}(x)+ \sum _{i=1}^{n} a_{i}M^{\alpha _{i}}(x) - f(x) = 0,\nonumber \\ \end{aligned}$$
(33)
$$\begin{aligned}&\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) = d_{0}. \end{aligned}$$
(34)

Then

$$\begin{aligned}&R_{j}(a_1,a_2,\ldots ,a_n)=A_{r}(x_j)\sum _{i=1}^{n} a_{i}~ D_C^{\mu _r} M^{\alpha _{i}}(x_j)+\sum _{k=1}^{r-1} A_{k}(x_j)* \nonumber \\&\quad \sum _{i=1}^{n} a_{i} ~D_C^{\mu _{k}} M^{\alpha _{i}}(x_j) + \sum _{i=1}^{n} a_{i}M^{\alpha _{i}}(x_j) - f(x_j) = 0, \end{aligned}$$
(35)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} R_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}=0. \end{aligned}$$
(36)

Equations (35) and (36) in view of the Galerkin Eq. (29) become:

$$\begin{aligned}&G_{j}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [ A_{r}(x_j)\sum _{i=1}^{n} a_{i}~ D_C^{\mu _r} M^{\alpha _{i}}(x_j)+\sum _{k=1}^{r-1} A_{k}(x_j)* \nonumber \\&\quad \sum _{i=1}^{n} a_{i} ~D_C^{\mu _{k}} M^{\alpha _{i}}(x_j) + \sum _{i=1}^{n} a_{i}M^{\alpha _{i}}(x_j) - f(x_j) \bigg ]M^{\alpha _{k}}(x_j) = 0, \end{aligned}$$
(37)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} G_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)=0, \end{aligned}$$
(38)

with \(w_0=w_n=1/2, w_k=1, k=1,2,\ldots ,n-1\). So we can construct an unconstrained optimization problem with objective function as:

$$\begin{aligned}&r(a_1,a_2,\ldots ,a_n)= \sum _{j=0}^{n} G_{j}^{2}(a_1,a_2,\ldots ,a_n) = \sum _{j=0}^{n} \Bigg (\sum _{k=1}^{n} w_k \bigg [ A_{r}(x_j)* \nonumber \\&\quad \sum _{i=1}^{n} a_{i}~ D_C^{\mu _r} M^{\alpha _{i}}(x_j)+\sum _{k=1}^{r-1} A_{k}(x_j) \sum _{i=1}^{n} a_{i} ~D_C^{\mu _{k}} M^{\alpha _{i}}(x_j) + \sum _{i=1}^{n} a_{i}M^{\alpha _{i}}(x_j) \nonumber \\&\quad - f(x_j) \bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}+\Bigg (\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}. \end{aligned}$$
(39)

The solution of Eq. (39) determines the unknown coefficients \(a_{i},~~ i=1,2,\ldots ,n,\) and so the numerical solution y(x) is defined by Eq. (32).
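
To make the workflow above concrete, the following sketch applies it to the single-highest-term case of Eq. (30) with \(r=1\) and \(A_{r}(x)\equiv 1\), i.e. the test equation of Problem 1 in Sect. 6. Everything not fixed by the text is an assumption made for illustration only: the basis is the generalized fractional Laguerre function of Eq. (24) with \(\alpha _{i}=i\), \(\beta =0.5\), \(\gamma =1.5\); Eqs. (37)–(38) are read as projecting the pointwise residual (35) onto each basis function with the trapezoidal weights \(w_k\); and the objective (39) is minimized with a general-purpose optimizer.

```python
# Sketch (not the authors' code) of the differential Galerkin method for
# A_r(x) D^mu y + y = f, y(0) = d0 (Eq. (30) with r = 1), using the generalized
# fractional Laguerre basis of Eq. (24) with the assumed choice alpha_i = i.
import math
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize

def gfl(x, alpha, beta, gam, mu=0.0):
    """L_alpha^{beta,gamma}(x) for mu = 0 (Eq. (24)), or D_C^mu of it for mu > 0 (Eq. (25))."""
    k = np.arange(1 if mu > 0 else 0, math.ceil(alpha) + 1)
    c = (-gam) ** k * gamma(alpha + beta + 1) / (
        gamma(k - mu + 1) * gamma(alpha - k + 1) * gamma(k + beta + 1))
    return np.sum(c * np.asarray(x, float)[..., None] ** (k - mu), axis=-1)

def solve_linear_dgm(A_r, f, mu, d0, n=5, beta=0.5, gam=1.5, m=21):
    xj = np.linspace(0.0, 1.0, m)                       # quadrature nodes on [0, 1]
    w = np.ones(m); w[0] = w[-1] = 0.5                  # trapezoidal weights of Sect. 4.2
    Phi  = np.column_stack([gfl(xj, i, beta, gam)     for i in range(n + 1)])
    DPhi = np.column_stack([gfl(xj, i, beta, gam, mu) for i in range(n + 1)])
    Phi0 = np.array([float(gfl(0.0, i, beta, gam)) for i in range(n + 1)])

    def objective(a):                                   # Eq. (39)
        R = A_r(xj) * (DPhi @ a) + Phi @ a - f(xj)      # pointwise residual, Eq. (35)
        G = Phi.T @ (w * R)                             # Galerkin conditions, Eq. (37)
        G_ic = Phi0 @ a - d0                            # initial condition, Eq. (38)
        return np.sum(G ** 2) + G_ic ** 2

    a = minimize(objective, np.zeros(n + 1), method="BFGS",
                 options={"gtol": 1e-10}).x
    return lambda x: np.column_stack(
        [gfl(np.atleast_1d(x), i, beta, gam) for i in range(n + 1)]) @ a

# Demo on Problem 1 of Sect. 6: D^0.9 y + y = f, y(0) = 0, exact solution x**3.
mu, rho = 0.9, 3.0
f = lambda x: gamma(rho + 1) / gamma(rho + 1 - mu) * x ** (rho - mu) + x ** rho
y = solve_linear_dgm(A_r=lambda x: np.ones_like(x), f=f, mu=mu, d0=0.0)
xs = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(y(xs) - xs ** rho)))                # maximum absolute error
```

For the nonlinear and integral variants below, only the residual and the basis operators change; the projection and minimization steps remain the same.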

Nonlinear Equation

Consider the nonlinear multi-order fractional differential equation with initial conditions:

$$\begin{aligned}&A_{r}(x)~D_C^{\mu _{r}}y(x)+\sum _{k=1}^{r-1}A_{k}(x)y(x)~D_C^{\mu _{ k}} y(x)+y(x)=f(x),~~~~~ x \in [0,1], \end{aligned}$$
(40)
$$\begin{aligned}&y^{(i)} (0)=d_{i},~~~~~ i=0,1,2,\ldots ,\lceil \mu _{r} \rceil -1, \end{aligned}$$
(41)

where \(0<\mu _{1}<\cdots<\mu _{r-1}<\mu _{r},~~ A_{k}(x),~~k=1,2,\ldots ,r,~~f(x)\) are known continuous functions on [0, 1] and \(d_{i},~~i=0,1,2,\ldots ,\lceil \mu _{r} \rceil -1,\) are given constants.

Let

$$\begin{aligned} y(x)=\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x),~~~ D_C^{\mu _{r}}y(x)=\sum _{i=1}^{n} a_{i} D_C^{\mu _{r}} M^{\alpha _{i}}(x), \end{aligned}$$
(42)

where \(M^{\alpha _{i}}(x) \in \{E_{\alpha _{i}}^{\beta ,\gamma }(x), L_{\alpha _{i}}^{\beta ,\gamma }(x)\}\) and \(a_{i},~~ i=0,1,2,\ldots ,n,\) are unknown constants.

Substituting from Eq. (42) into Eqs. (40) and (41) we obtain:

$$\begin{aligned}&A_{r}(x)\sum _{i=1}^{n} a_{i}~ D_C^{\mu _{r}} M^{\alpha _{i}}(x) + \sum _{k=1}^{r-1} A_{k}(x) \Big ( \sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x)\Big )* \nonumber \\&\quad \Big (\sum _{i=1}^{n} a_{i}~D_C^{\mu _{k}} M^{\alpha _{i}}(x)\Big ) +\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x) - f(x) = 0, \end{aligned}$$
(43)
$$\begin{aligned}&\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0)= d_{0}. \end{aligned}$$
(44)

Then

$$\begin{aligned}&R_{j}(a_1,a_2,\ldots ,a_n)=A_{r}(x_{j})\sum _{i=1}^{n} a_{i}~ D_C^{\mu _{r}} M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1} A_{k}(x_{j}) \Big (\sum _{i=0}^{n} a_{i}* \nonumber \\&\quad M^{\alpha _{i}}(x_{j})\Big ) \Big (\sum _{i=1}^{n} a_{i}~D_C^{\mu _{k}} M^{\alpha _{i}}(x_{j})\Big )+\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x_{j}) - f(x_{j}) = 0, \end{aligned}$$
(45)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} R_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}=0. \end{aligned}$$
(46)

Equations (45) and (46) in view of the Galerkin Eq. (29) become:

$$\begin{aligned}&G_{j}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [A_{r}(x_{j})\sum _{i=1}^{n} a_{i}~ D_C^{\mu _{r}} M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1} A_{k}(x_{j})* \nonumber \\&\quad \Big (\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x_{j})\Big ) \Big (\sum _{i=1}^{n} a_{i}~D_C^{\mu _{k}} M^{\alpha _{i}}(x_{j})\Big )+\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x_{j}) - f(x_{j}) \bigg ]* \nonumber \\&\quad M^{\alpha _{k}}(x_j) = 0, \end{aligned}$$
(47)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} G_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)=0, \end{aligned}$$
(48)

with \(w_0=w_n=1/2, w_k=1, k=1,2,\ldots ,n-1\). So we can construct an unconstrained optimization problem with objective function as:

$$\begin{aligned}&r(a_1,a_2,\ldots ,a_n)= \sum _{j=0}^{n} G_{j}^{2}(a_1,a_2,\ldots ,a_n) =\sum _{j=0}^{n}\Bigg ( \sum _{k=1}^{n} w_k \bigg [A_{r}(x_{j})\sum _{i=1}^{n} a_{i}D_C^{\mu _{r}} M^{\alpha _{i}}(x_{j})+ \nonumber \\&\quad \sum _{k=1}^{r-1} A_{k}(x_{j}) * \Big (\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x_{j})\Big )\Big (\sum _{i=1}^{n} a_{i}~D_C^{\mu _{k}} M^{\alpha _{i}}(x_{j})\Big )+\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(x_{j}) - \nonumber \\&\quad f(x_{j}) \bigg ] M^{\alpha _{k}}(x_j) \Bigg )^{2}+\Bigg (\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} a_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}. \end{aligned}$$
(49)

The solution of Eq. (49) determines the unknown coefficients \(a_{i},~~ i=1,2,\ldots ,n,\) and so the numerical solution y(x) is defined by Eq. (42).
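
Compared with the linear case, only the residual changes: the lower-order term is multiplied by the current approximation itself. A compact sketch of the residual (45) for a single lower-order term (\(r=2\); the array names are illustrative) that can replace the linear residual in the earlier sketch is:

```python
# Nonlinear residual of Eq. (45) for one lower-order term (r = 2).  Phi, DPhi_r and
# DPhi_1 are (points x basis) arrays of basis values and Caputo derivatives of
# orders mu_r and mu_1 (e.g. from Eq. (25)); A_r, A_1 and f are grid values.
import numpy as np

def nonlinear_residual(a, Phi, DPhi_r, DPhi_1, A_r, A_1, f):
    y = Phi @ a                                   # current approximation, Eq. (42)
    return A_r * (DPhi_r @ a) + A_1 * y * (DPhi_1 @ a) + y - f
```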

Integral Galerkin Method

Linear Equation

Consider the linear multi-order fractional differential equation with initial conditions:

$$\begin{aligned}&A_{r}(x) D_C^{\mu _r}y(x)+\sum _{k=1}^{r-1}A_{k}(x)D_C^{\mu _{ k}} y(x)+y(x)=f(x),~~~~~ x \in [0,1], \end{aligned}$$
(50)
$$\begin{aligned}&y^{(i)} (0)=d_{i},~~~~~ i=0,1,2,\ldots ,\lceil \mu _{r} \rceil -1, \end{aligned}$$
(51)

where \(0<\mu _{1}<\cdots<\mu _{r-1}<\mu _{r}, ~~ A_{k}(x),~~k=1,2,\ldots ,r,~~f(x)\) are known continuous functions on [0, 1] and \(d_{i},~~i=0,1,2,\ldots ,\lceil \mu _r \rceil -1,\) are given constants.

First, we apply the integral operator \(I^{\mu _r}\) to Eq. (50), which becomes:

$$\begin{aligned} A_{r}(x) y(x)+\sum _{k=1}^{r-1}A_{k}(x) I^{\mu _r-{\mu _{ k}}} y(x) +~ I^{\mu _r} y(x) =~ I^{\mu _r} f(x). \end{aligned}$$
(52)

Assume that the solution of the fractional linear differential Eq. (50) can be written as:

$$\begin{aligned} y(x)=\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x),~ I^{\mu _r} y(x) =\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x), \end{aligned}$$
(53)

where \(M^{\alpha _{i}}(x) \in \{E_{\alpha _{i}}^{\beta ,\gamma }(x), L_{\alpha _{i}}^{\beta ,\gamma }(x)\}\) and \(b_{i},~~ i=0,1,2,\ldots ,n\) are unknown constants.

Substituting from Eq. (53) into Eq. (52) we have:

$$\begin{aligned}&A_{r}(x)\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x) +\sum _{k=1}^{r-1}A_{k}(x)\sum _{i=0}^{n} b_{i}~ I^{\mu _r-{\mu _{ k}}} M^{\alpha _{i}}(x) \nonumber \\&\quad +\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x) =~I^{\mu _r}f(x). \end{aligned}$$
(54)

Then

$$\begin{aligned}&R_{j}(a_1,a_2,\ldots ,a_n)=A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1}A_{k}(x_{j})* \nonumber \\&\quad \sum _{i=0}^{n} b_{i}~ I^{\mu _r-{\mu _{ k}}} M^{\alpha _{i}}(x_{j})+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) -~I^{\mu _r}f(x_{j}) = 0, \end{aligned}$$
(55)
$$\begin{aligned}&R_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}=0. \end{aligned}$$
(56)

Equations (55) and (56) in view of the Galerkin Eq. (29) become:

$$\begin{aligned}&G_{j}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [ A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1}A_{k}(x_{j})* \nonumber \\&\quad \sum _{i=0}^{n} b_{i}~ I^{\mu _r-{\mu _{ k}}} M^{\alpha _{i}}(x_{j})+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) -~I^{\mu _r}f(x_{j}) \bigg ]M^{\alpha _{k}}(x_j) = 0, \end{aligned}$$
(57)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} G_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)=0, \end{aligned}$$
(58)

with \(w_0=w_n=1/2, w_k=1, k=1,2,\ldots ,n-1\).

So we can construct an unconstrained optimization problem with objective function as:

$$\begin{aligned}&r(a_1,a_2,\ldots ,a_n)= \sum _{j=0}^{n} G_{j}^{2}(a_1,a_2,\ldots ,a_n) = \sum _{j=0}^{n} \Bigg (\sum _{k=1}^{n} w_k \bigg [ A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) \nonumber \\&\quad + \sum _{k=1}^{r-1}A_{k}(x_{j})*\sum _{i=0}^{n} b_{i}~ I^{\mu _r-{\mu _{ k}}} M^{\alpha _{i}}(x_{j})+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) \nonumber \\&\quad -I^{\mu _r}f(x_{j}) \bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}+\Bigg (\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}. \end{aligned}$$
(59)

The solution of Eq. (59) determines the unknown coefficients \(b_{i},~~ i=1,2,\ldots ,n,\) and so the numerical solution y(x) is defined by Eq. (53).
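
Relative to the differential version, the new ingredients are the fractional integrals of the basis and of the right-hand side. The sketch below (illustrative names) shows the residual (55) for \(r=1\); for the test problems of Sect. 6, f is a combination of power functions, so \(I^{\mu _r}f\) follows directly from Eq. (6).

```python
# Residual of Eq. (55) for r = 1 (illustrative names): Phi and IPhi are
# (points x basis) arrays of basis values and of their fractional integrals of
# order mu_r (Eq. (17) or Eq. (26)); If is I^{mu_r} f on the grid.
import numpy as np
from scipy.special import gamma

def power_integral(x, n, mu):
    """I^mu x**n, first branch of Eq. (6), used to build If for power-type f."""
    return gamma(n + 1) / gamma(n + 1 + mu) * np.asarray(x, float) ** (n + mu)

def integral_residual(b, Phi, IPhi, A_r, If):
    return A_r * (Phi @ b) + IPhi @ b - If        # Eq. (55) with r = 1
```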

Nonlinear Equation

Consider the nonlinear multi-order fractional differential equation with initial conditions:

$$\begin{aligned}&A_{r}(x)~D_C^{\mu _{r}}y(x)+\sum _{k=1}^{r-1}A_{k}(x)y(x)~D_C^{\mu _{ k}} y(x)+y(x)=f(x),~~~~~ x \in [0,1], \end{aligned}$$
(60)
$$\begin{aligned}&y^{(i)} (0)=d_{i},~~~~~ i=0,1,2,\ldots ,\lceil \mu _{r} \rceil -1, \end{aligned}$$
(61)

where \(0<\mu _{1}<\cdots<\mu _{r-1}<\mu _{r}, ~~ A_{k}(x),~~k=1,2,\ldots ,r,~~f(x)\) are known continuous functions on [0, 1] and \(d_{i},~~i=0,1,2,\ldots ,\lceil \mu _r \rceil -1,\) are given constants.

First, we apply the integral operator \(I^{\mu _r}\) to Eq. (60), which becomes:

$$\begin{aligned} A_{r}(x) y(x)+\sum _{k=1}^{r-1}A_{k}(x) I^{\mu _r}(y(x)~I^{-\mu _{ k}} y(x)) +~ I^{\mu _r} y(x) =~ I^{\mu _r} f(x). \end{aligned}$$
(62)

Assume that the solution of Eq. (60) can be written as:

$$\begin{aligned} y(x)=\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x),~ I^{\mu _r} y(x) =\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x), \end{aligned}$$
(63)

where \(M^{\alpha _{i}}(x) \in \{E_{\alpha _{i}}^{\beta ,\gamma }(x), L_{\alpha _{i}}^{\beta ,\gamma }(x)\}\) and \(b_{i},~~ i=0,1,2,\ldots ,n\) are unknown constants.

Substituting Eq. (63) into Eq. (62) we get:

$$\begin{aligned}&A_{r}(x)\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x) +\sum _{k=1}^{r-1}A_{k}(x)I^{\mu _r}\Big (\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x)\sum _{i=0}^{n} b_{i}~ I^{-\mu _k} M^{\alpha _{i}}(x) \Big ) \nonumber \\&\quad +\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x) -~I^{\mu _r}f(x)=0. \end{aligned}$$
(64)

Then

$$\begin{aligned}&R_{j}(a_1,a_2,\ldots ,a_n)=A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1}A_{k}(x_{j})I^{\mu _r}\Big (\sum _{i=0}^{n} b_{i} * \nonumber \\&\quad M^{\alpha _{i}}(x_{j}) \sum _{i=0}^{n} b_{i}~ I^{-\mu _k} M^{\alpha _{i}}(x_{j}) \Big )+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) -~I^{\mu _r}f(x_{j})=0, \end{aligned}$$
(65)
$$\begin{aligned}&R_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}=0. \end{aligned}$$
(66)

Equations (65) and (66) in view of the Galerkin Eq. (29) become:

$$\begin{aligned}&G_{j}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) + \sum _{k=1}^{r-1}A_{k}(x_{j})I^{\mu _r} \nonumber \\&\quad \Big (\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x_{j}) \sum _{i=0}^{n} b_{i}~ I^{-\mu _k} M^{\alpha _{i}}(x_{j}) \Big )+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) - \nonumber \\&\quad I^{\mu _r}f(x_{j}) \bigg ] M^{\alpha _{k}}(x_j) = 0, \end{aligned}$$
(67)

where \(j=1,2,\ldots ,n,\) and

$$\begin{aligned} G_{n+1}(a_1,a_2,\ldots ,a_n)=\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)=0, \end{aligned}$$
(68)

with \(w_0=w_n=1/2, w_k=1, k=1,2,\ldots ,n-1\). So we can construct an unconstrained optimization problem with objective function as:

$$\begin{aligned}&r(a_1,a_2,\ldots ,a_n)= \sum _{j=0}^{n} G_{j}^{2}(a_1,a_2,\ldots ,a_n) =\sum _{j=0}^{n}\Bigg ( \sum _{k=1}^{n} w_k \bigg [A_{r}(x_{j})\sum _{i=0}^{n} b_{i}M^{\alpha _{i}}(x_{j}) \nonumber \\&\quad + \sum _{k=1}^{r-1}A_{k}(x_{j})I^{\mu _r}\Big (\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(x_{j})*\sum _{i=0}^{n} b_{i}~ I^{-\mu _k} M^{\alpha _{i}}(x_{j}) \Big )+\sum _{i=0}^{n} b_{i}~ I^{\mu _r} M^{\alpha _{i}}(x_{j}) \nonumber \\&\quad - I^{\mu _r}f(x_{j}) \bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}+\Bigg (\sum _{k=1}^{n} w_k \bigg [\sum _{i=0}^{n} b_{i} M^{\alpha _{i}}(0) - d_{0}\bigg ]M^{\alpha _{k}}(x_j)\Bigg )^{2}. \end{aligned}$$
(69)

The solution of Eq. (69) determines the unknown coefficients \(b_{i},~~ i=1,2,\ldots ,n,\) and so the numerical solution y(x) is defined by Eq. (63).

The Proposed Methods

We present four methods of numerical solution for the given problem, namely the Mittag differential Galerkin method (MDGM), the Mittag integral Galerkin method (MIGM), the Laguerre differential Galerkin method (LDGM), and the Laguerre integral Galerkin method (LIGM).

Error Analysis

Theorem 1

[12]: Let \(y(x) \in C^\infty [0,1]\) and let \(y_n (x)\) be its approximation given by Eq. (32). Then for every \(x\in [0,1]\), there exists \(\varpi \in [0,1]\) such that:

$$\begin{aligned} y(x)-y_n (x)=\frac{\varGamma \big (\beta (n+1) +\gamma \big )}{(n+1)!}E_{n+1}^{\beta , \gamma } (x)y^{(n+1)}(\varpi ), \end{aligned}$$
(70)

and the estimated error is:

$$\begin{aligned} ||y(x)-y_n (x)|| \le \frac{\varGamma \big (\beta (n+1)+\gamma \big )}{(n+1)!}E_{n+1}^{\beta , \gamma } (x) \mathbf{Max} _{\varpi \in [0, 1]}||y^{(n+1)}(\varpi )||. \end{aligned}$$
(71)
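
As a worked instance of the bound (71) (a sketch under assumed data: the test function \(y(x)=e^{x}\), for which \(\max |y^{(n+1)}|=e\) on [0, 1], and illustrative parameter values), the right-hand side can be evaluated directly:

```python
# Evaluating the error bound (71) at x = 1 for an assumed test function y(x) = exp(x);
# E_{n+1}^{beta,gamma} is the finite two-parameter Mittag-Leffler function of Eq. (9).
import numpy as np
from scipy.special import gamma
from math import factorial, e

def E_fin(x, n, beta, gam):
    """Finite two-parameter Mittag-Leffler function, Eq. (9)."""
    k = np.arange(n + 1)
    return np.sum(np.asarray(x, float)[..., None] ** k / gamma(beta * k + gam), axis=-1)

n, beta, gam = 8, 0.5, 1.5
bound = gamma(beta * (n + 1) + gam) / factorial(n + 1) * E_fin(1.0, n + 1, beta, gam) * e
print(bound)   # value of the bound at x = 1, where E_{n+1}^{beta,gamma} is largest on [0, 1]
```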

Theorem 2

[12]: Let \(y(x)\in C^\infty [0,1]\) satisfy Eq. (30) and be approximated by Eq. (32). Then for every \(x\in [0,1]\), there exists \(\varpi \in [0,1]\) such that the residual is estimated by:

$$\begin{aligned} R(x) \le \frac{\varGamma \big (\beta (n+1)+\gamma \big )}{(n+1)!}E_{n+1}^{\beta , \gamma } (x) \bigg [\sum _{k=1}^\nu c_k(x)D^{\alpha _k} E_{n+1}^{\beta , \gamma }\bigg ] \mathbf{Max} _{\varpi \in [0, 1]}||y^{(n+1)}(\varpi )||. \end{aligned}$$
(72)

Theorem 3

[17]: Suppose that y(x) and its derivatives up to order \(\mu -1\) are absolutely continuous in \([0,\infty )\) and satisfy:

$$\begin{aligned}&e^{\frac{-x}{2}}x^{1+k+\alpha }y^{(k)} (x)\rightarrow 0~~ \text { as }~~x \rightarrow \infty ,~~ k=0,1,\ldots , \mu , \text { and } \\&\quad V^2=\int _{0}^{\infty }x^{1+\alpha +\mu }e^{-x}\big [y^{(\mu +1)}(x)\big ]^2dx~~ \text { is finite},~~ \mu \ge 1, \end{aligned}$$

then for the Mittag expansion from Eq. (32), we have:

$$\begin{aligned} |a_k| \le \frac{V}{\sqrt{k(k-1) \ldots (n-\mu )}} \frac{\sqrt{\varGamma (k+1)}}{\sqrt{{\varGamma (1+k+\alpha )}}}. \end{aligned}$$
(73)

Theorem 4

[8]: Assume that y(x) satisfies the hypothesis of Theorem 3. If y(x) and \(y_n(x)\) are expressed by Eq. (32), then \(y_n(x)\) converges to y(x) as \(n \rightarrow \infty \).

Theorem 5

[8]: Under the hypotheses of Theorems 3 and 4, the approximate fractional-order derivative \(D_C^{\mu }~ y_n(x)\) converges to \(D_C^{\mu } ~ y(x)\) as \(n \rightarrow \infty \).

Theorem 6

Under the hypotheses of the previous theorems, the approximate fractional-order integral \(I^{\mu }~ y_n(x)\) converges to \(I^{\mu } ~ y(x)\) as \(n \rightarrow \infty \).

Proof

As in [8]. \(\square \)

Numerical Experiments

Problem 1

We consider the problem [12]:

$$\begin{aligned} D^{\mu }y(x)+y(x) =\frac{\varGamma (\rho +1)}{\varGamma (\rho +1-\mu )} x^{\rho -\mu }+ x^{\rho }, x \in [0,1], \mu \in (0,1], \end{aligned}$$
(74)

with \(y(0) = 0\). The exact solution of this problem is \(y(x) = x^{\rho }\).

Figure 2 presents a comparison between MIGM and LIGM with \(\mu =0.9, \rho =3\).

Figure 3 presents a comparison between the fractional and integral orders of a numerical solution by using MIGM.

Table 1 presents the numerical solution of this problem by using the integration Galerkin method with \(\alpha =3.5\) and different fractional orders \(\mu \).

Table 2 presents a comparison between the fractional and integral orders of a numerical solution by using LIGM.

Fig. 2 Comparison of a numerical solution of problem 1 between MIGM and LIGM, with \(\alpha =0.8, \rho =3\)

Fig. 3 Comparison between fractional and integral orders of a numerical solution of problem 1 by using MIGM

Table 1 Numerical solution of problem 1, with \(\alpha =3.5\) and different fractional orders \(\mu \) by using the integration Galerkin method
Table 2 Numerical solution of problem 1, with \(\mu =0.9,~\beta =0.5 \) and \(\gamma =1.5\) by using LIGM in Eq. (24)

Problem 2

Consider the equation [18]:

$$\begin{aligned} D^{1.5}y(x)=x^{1.5}y(x)+4\sqrt{\frac{x}{\pi }}-x^{3.5},~~ 0<x\le 1, \end{aligned}$$
(75)

with the following initial conditions \(y(0)=0,~~ y^{'}(0)=0\) and exact solution \(y(x)=x^{2}\).
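
A quick check based on Eq. (5) confirms the exact solution: \(D^{1.5}x^{2}=\varGamma (3)/\varGamma (1.5)\,x^{0.5}=4\sqrt{x/\pi }\), which equals the right-hand side of Eq. (75) when \(y(x)=x^{2}\). In sketch form:

```python
# Verifying (a sketch) that y(x) = x**2 satisfies Eq. (75), using Eq. (5) for D^{1.5} x^2.
import numpy as np
from scipy.special import gamma

x = np.linspace(0.01, 1.0, 50)
lhs = gamma(3.0) / gamma(1.5) * x ** 0.5                      # D^{1.5} y for y = x^2
rhs = x ** 1.5 * x ** 2 + 4 * np.sqrt(x / np.pi) - x ** 3.5   # right side of Eq. (75)
assert np.allclose(lhs, rhs)
```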

Figure 4 presents the numerical solution of this problem by using MDGM and LDGM with \(\alpha =10\) and fractional order \(\mu =1.5\); we compare our results with those obtained in [18] by the BPs method.

Table 3 presents a comparison between the fractional and integral orders of this problem by using LDGM when \(\alpha =1.5\) and \(\rho =2.0\).

Fig. 4 Comparison of a numerical solution of problem 2 between MDGM, LDGM and BPs [18], with \(\mu =1.5 \)

Table 3 Numerical solution of problem 2, with \(\mu =1.5,~\rho =2.0,~ \beta =0.5 \) and \(\gamma =1.5\) by using LDGM in Eq. (24)

Problem 3

Consider the problem [12]:

$$\begin{aligned} D^{\mu }y(x) - x^{3}y^{2}(x) =\frac{\varGamma (\rho +1)}{\varGamma (\rho +1-\mu )}x^{\rho -\mu }-x^{2\rho +3}, x \in [0,1], \mu \in (0,1], \end{aligned}$$
(76)

with \(y(0) = 0\). The exact solution of this problem is \(y(x) = x^\rho \). A simple case of this problem is solved in [19].

Table 4 presents the numerical solution of this problem with \(\alpha =2.5\) and different fractional orders \(\mu \), obtained by the differentiation Galerkin method.

In Fig. 5, the numerical solution of this problem is compared between MDGM and LDGM, by using fractional and integral orders.

Fig. 5 Comparison of numerical solution of problem 3 between LDGM in (a, b) and MDGM in (c, d), by using fractional and integral orders

Table 4 Numerical solution of problem 3, with \(\alpha =2.5\) and different fractional orders \(\mu \) by using the differentiation Galerkin method

Problem 4

We consider the following fractional differential equation with initial condition [20]:

$$\begin{aligned} D^{\mu }y(x)+2y(x)y'(x)+y'(x)=f(x),~~0< x \le 1,~~0 < \mu \le 1,~~y(0)=0. \end{aligned}$$
(77)

We take \(f(x)=2\rho x^{2\rho -1}+ \rho x^{\rho -1}+ \frac{\varGamma (\rho +1)}{\varGamma (\rho -\mu +1)} x^{\rho -\mu } \). The exact solution of this problem is \(y(x)=x^\rho \).

Table 5 presents the numerical solution of problem 4, with \(\alpha =2.8\) and different fractional orders of \(\mu \), by using the integration Galerkin method.

Table 6 presents a comparison between the fractional and integral orders of this problem by using MIGM with \(\mu =0.9,~ \beta =0.5 \) and \(\gamma =0.5\).

Table 7 presents a comparison between the fractional and integral orders of this problem by using LIGM with \(\mu =0.9,~\beta =0.5 \) and \(\gamma =0.5\).

Table 5 Numerical solution of problem 4, with \(\alpha =2.8\) and different fractional orders \(\mu \)
Table 6 Numerical solution of problem 4, with \(\mu =0.9,~ \beta =0.5 \) and \(\gamma =0.5\) by using MIGM in Eq. (15)
Table 7 Numerical solution of problem 4, with \(\mu =0.9,~\beta =0.5 \) and \(\gamma =0.5\) by using LIGM in Eq. (24)

Conclusion

In this paper, four numerical methods based on the Mittag–Leffler and Laguerre functions combined with the differential and integral Galerkin methods (MDGM, LDGM, MIGM, and LIGM) are developed and applied to linear and nonlinear fractional differential equations. The approximations are built from the generalized fractional Mittag–Leffler function and the generalized fractional Laguerre function, and both are used with the differential Galerkin method and the integral Galerkin method.

  • We find that the methods give good results (see Figs. 2, 4 and Tables 1, 4, and 5).

  • The numerical solutions obtained with fractional orders are better than those obtained with integer orders (see Figs. 3, 5, and Table 6).

  • When we apply the generalized fractional Laguerre Galerkin method, the difference between the results for fractional and integer orders is small, because the powers of the generalized fractional Laguerre function do not depend on \(\alpha \) (see Tables 2, 3, 7 and Fig. 5c, d). On the contrary, the difference for the generalized fractional Mittag–Leffler Galerkin method is large, because its powers do depend on \(\alpha \) (see Figs. 3 and 5a, b).