1 Introduction

Many phenomena in engineering, physics and other sciences can be modeled successfully by using mathematical tools inspired by fractional calculus, that is, the theory of derivatives and integrals of non-integer order [1–19]. This allows one to describe physical phenomena more accurately. In this line of thought, FDEs have emerged as an interdisciplinary area of research in recent years. The non-local nature of fractional derivatives can be exploited to model accurately a variety of natural phenomena exhibiting long memory [20–35].

In recent years, there has been considerable interest in employing spectral methods for numerically solving many types of integral and differential equations, due to their flexibility of implementation over finite and infinite intervals [36–47]. Spectral methods are among the most accurate discretization techniques for differential equations: for smooth solutions they exhibit exponential rates of convergence and a high level of accuracy. Spectral methods can be classified into three types, namely the collocation [48–52], tau [53–56] and Galerkin [57–59] methods. A spectral method employs linear combinations of orthogonal polynomials as basis functions and therefore often leads to very accurate approximate solutions [60, 61]. Spectral methods based on polynomial systems such as Bernstein polynomials, Jacobi polynomials and their special cases are only available on bounded domains for the approximation of FDEs [62, 63]. On the other hand, several problems in finance, plasma physics, porous media, dynamical processes and engineering are set on unbounded domains. Spectral methods based on orthogonal systems such as the modified generalized Laguerre polynomials and their special cases are available on unbounded domains for the approximation of FDEs [64–67].

From the numerical point of view, several numerical techniques have been adapted for approximating the solution of FDEs on bounded domains. Saadatmandi and Dehghan [68] presented a Legendre tau scheme, combined with the Caputo fractional operational matrix of Legendre polynomials, for the numerical solution of multi-term FDEs. Doha et al. [69] formulated and derived the Jacobi operational matrix of the Caputo fractional derivative, which was applied in conjunction with the spectral tau scheme, using Jacobi polynomials as basis functions, for solving linear multi-term FDEs. The Chebyshev [53] and Legendre [68] operational matrices can be obtained as special cases of the Jacobi operational matrix [69]. Recently, Kazem et al. [70] defined new orthogonal functions, based on Legendre polynomials, to obtain an efficient spectral technique for multi-term FDEs. The authors of [71] extended this definition and presented the operational matrices of fractional derivatives and integrals for such functions to construct a new tau technique for solving two-dimensional FDEs. Moreover, Ahmadian et al. [72] adopted the operational matrix of fractional derivatives for Legendre polynomials, which was applied with the tau method, for solving a class of fuzzy FDEs. Indeed, with a few notable exceptions, only limited work has been devoted to the use of spectral methods on unbounded domains for these important classes of FDEs.

The operational matrices of fractional derivatives and fractional integrals of generalized Laguerre polynomials were investigated for solving multi-term FDEs on a semi-infinite interval [66]. The generalized Laguerre spectral tau and collocation techniques were studied in [66] for solving linear and nonlinear FDEs on the half line. These spectral techniques were developed and generalized by means of the modified generalized Laguerre polynomials in [67]. Furthermore, the authors of [73, 74] presented a Caputo fractional extension of the classical Laguerre polynomials and proposed new C-Laguerre functions.

There are different techniques for solving FDEs, fractional integro-differential equations and fractional optimal control problems, such as the methods denoted as variational iteration [75, 76], Adomian decomposition [77], operational matrix of B-spline functions [78], operational matrix of Jacobi polynomials [69, 79], Jacobi collocation [80], operational matrix of Chebyshev polynomials [81], Legendre collocation [82, 83], pseudo-spectral [60], operational matrix of Laguerre polynomials [84] and others [85–89].

The objective of this article is to present a broad survey of recently proposed spectral methods for solving FDEs on bounded and unbounded domains. The operational matrices of fractional derivatives and integrals for some orthogonal polynomials on bounded and unbounded domains are presented. These operational matrices are employed in combination with spectral tau and collocation schemes for solving several kinds of linear and nonlinear FDEs. Moreover, we present the construction of the shifted Legendre operational matrix (SLOM), shifted Chebyshev operational matrix (SCOM), shifted Jacobi operational matrix (SJOM), Laguerre operational matrix (LOM), modified generalized Laguerre operational matrix (MGLOM) and Bernstein operational matrix (BOM) of fractional derivatives and integrals, which are employed with the tau method to provide efficient numerical schemes for solving linear FDEs. We also introduce a Bernstein operational matrix (BOM) of fractional derivatives combined with the collocation method for solving linear FDEs. Finally, we present the shifted Jacobi collocation (SJC) and the modified generalized Laguerre collocation (MGLC) methods for solving fractional initial and boundary value problems of order \(\nu >0\) with nonlinear terms, in which the nonlinear FDE is collocated at the \(N\) zeros of the corresponding orthogonal polynomials. Several illustrative examples are implemented to confirm the high accuracy and effectiveness of the use of operational matrices combined with spectral techniques for solving FDEs on bounded and unbounded domains.

The remainder of this paper is organized as follows: Sect. 2 introduces some relevant definitions of the fractional calculus theory. Section 3 is devoted to orthogonal polynomials and polynomial approximations. Sections 4 and 5 present the SLOM, SCOM, SJOM, LOM, MGLOM and BOM of fractional derivatives in the Caputo sense and the SLOM, SCOM, SJOM, LOM, MGLOM and BOM of Riemann–Liouville fractional integrals, respectively. Section 6 employs the spectral methods, based on shifted Jacobi, modified generalized Laguerre and Bernstein polynomials in combination with the SJOM, MGLOM and BOM, for solving FDEs including linear and nonlinear terms. Finally, Sect. 7 presents several examples to illustrate the main ideas of this survey.

2 Preliminaries and notations

In this section, we recall some fundamental definitions and properties of fractional calculus theory which are used in the sequel.

Definition 1

The Riemann–Liouville fractional integral \(J^{\nu }f(x)\) of order \(\nu \) is defined by

$$\begin{aligned} J^{\nu }f(x)= & {} \frac{1}{\varGamma (\nu )}\ \int _{0}^{x}{(x-t)}^{\nu -1} f(t) \hbox {d}t, \quad x>0,\nonumber \\ J^{0}f(x)= & {} f(x). \end{aligned}$$
(2.1)

Definition 2

The Caputo fractional derivative of order \(\nu >0\) is defined by

$$\begin{aligned} D^{\nu }f(x)= & {} J^{m-\nu } D^{m}f(x)\nonumber \\= & {} \frac{1}{\varGamma (m-\nu )} \int _{0}^{x}{(x-t)}^{m-\nu -1}\nonumber \\&\times \frac{\hbox {d}^m}{\hbox {d}t^m}f(t) \hbox {d}t, \quad x>0, \end{aligned}$$
(2.2)

where \(m-1<\nu \le m\), \(m\in N^{+}\), and \(\varGamma (\cdot )\) denotes the Gamma function.

The fractional integral and derivative operators satisfy

$$\begin{aligned}&J^\nu x^\beta = \dfrac{\varGamma {(\beta +1)}}{\varGamma {(\beta +1+\nu )}}\ x^{\beta +\nu }, \end{aligned}$$
(2.3)
$$\begin{aligned}&D^\nu x^\beta = {\left\{ \begin{array}{ll} 0, &{} \text {for}\ \beta \in N_0 \ \text {and}\ \beta <\lceil \nu \rceil ,\\ \dfrac{\varGamma {(\beta +1)}}{\varGamma {(\beta +1-\nu )}}\ x^{\beta -\nu }, &{} \text {for}\ \beta \in N_0\ \text {and}\ \beta \ge \lceil \nu \rceil \\ &{}\quad \text {or}\ \beta \not \in N\ \text {and}\ \beta > \lfloor \nu \rfloor , \end{array}\right. }\nonumber \\ \end{aligned}$$
(2.4)

where \(\lfloor \nu \rfloor \) and \(\lceil \nu \rceil \) are the floor and ceiling functions respectively, while \(N=\{1,2,\ldots \}\) and \(N_0=\{0,1,2,\ldots \}\).

Caputo fractional differentiation is a linear operation,

$$\begin{aligned} D^\nu {(\lambda f{(x)}+\mu g{(x)})}=\lambda D^\nu f{(x)}+\mu D^\nu g{(x)}, \end{aligned}$$
(2.5)

where \(\lambda ,\ \mu \in R\).

Lemma 1

If \(m-1<\nu \le m,\ m\in N,\) then

$$\begin{aligned}&D^\nu J^\nu f(x)=f(x),\qquad \nonumber \\&J^\nu D^\nu f(x) =f(x)-\sum \limits _{i=0}^{m-1}f^{(i)}(0^+)\frac{x^i}{i!},\quad x>0. \end{aligned}$$
(2.6)
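
To make the preceding rules concrete, the following minimal Python sketch (standard library only; the choices \(\nu =1/2\) and \(\beta =2\) are arbitrary test values) applies (2.3) and (2.4) to the monomial \(x^{\beta }\) and checks the first identity of Lemma 1 on it:

```python
from math import gamma

nu, beta = 0.5, 2.0   # fractional order and monomial exponent

# Riemann-Liouville integral of x**beta, Eq. (2.3): coefficient and new exponent
cJ, eJ = gamma(beta + 1) / gamma(beta + 1 + nu), beta + nu

# Caputo derivative of x**eJ, Eq. (2.4): here eJ = 2.5 is non-integer and > floor(nu)
cD, eD = gamma(eJ + 1) / gamma(eJ + 1 - nu), eJ - nu

# D^nu J^nu x^beta should return x^beta, i.e. coefficient 1 and exponent beta
print(cJ * cD, eD)   # -> 1.0 2.0
```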

3 Orthogonal polynomials and polynomial approximations

Orthogonal polynomials play a central role in spectral methods and, therefore, it is necessary to highlight their relevant properties. This section is devoted to the study of the properties of general orthogonal polynomials and briefly reviews fundamental results on polynomial approximations [90, 91].

3.1 Legendre polynomials

The Legendre polynomials \(L_{i}(z)\) are defined on the interval \([-1,1]\). In order to use these polynomials on the interval \(x\in [0,1]\), we define the so-called shifted Legendre polynomials by introducing the change of variable \(z=2x-1\).

Let the shifted Legendre polynomials \(L_i{(2x-1)}\) be denoted by \(P_{i}{(x)}\). Then, \(P_{i}{(x)}\) can be obtained with the aid of the following recurrence formula:

$$\begin{aligned}&(i+1)P_{i+1}{(x)}\!=\!(2i+1)(2x\!-\!1)P_{i}{(x)} -iP_{i-1}{(x)},\nonumber \\&\quad \qquad \qquad i=1,2,\ldots , \end{aligned}$$
(3.1)

where \(P_{0}{(x)}=1\) and \(P_{1}{(x)}=2x-1\).

The analytic form of the shifted Legendre polynomials \(P_{i}{(x)}\) of degree \(i\) is given by

$$\begin{aligned} P_{i}{(x)}=\sum \limits _{k=0}^{i }{(-1)}^{i+k}\frac{ (i+k)!}{{(i-k)}! \ ({k}!)^2\ }\ x^k , \end{aligned}$$
(3.2)

where \(P_{i}{(0)}={(-1)}^i\) and \(P_{i}{(1)}=1\).

The orthogonality condition is

$$\begin{aligned} \int _{0}^{1} P_{j} (x)P_{k}(x)w(x)\hbox {d}x ={\left\{ \begin{array}{ll}\dfrac{1}{2k+1}, \quad &{} k=j, \\ 0, \quad &{} k \ne j. \end{array}\right. } \end{aligned}$$
(3.3)

where \(w(x)=1\).

The special values

$$\begin{aligned} P^{(q)}_{i}(0)=(-1)^{(i-q)}\frac{ (i+q)!}{(i-q)!\ q! }, \end{aligned}$$
(3.4)

will be of important use later.
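
A minimal numerical sketch (assuming NumPy is available) that evaluates the recurrence (3.1), checks the endpoint values, and verifies the orthogonality relation (3.3) by Gauss–Legendre quadrature mapped to \([0,1]\):

```python
import numpy as np

def shifted_legendre(i, x):
    """Evaluate P_i(x) on [0,1] via the recurrence (3.1)."""
    x = np.asarray(x, dtype=float)
    P_prev, P_curr = np.ones_like(x), 2*x - 1
    if i == 0:
        return P_prev
    for k in range(1, i):
        P_prev, P_curr = P_curr, ((2*k + 1)*(2*x - 1)*P_curr - k*P_prev) / (k + 1)
    return P_curr

# endpoint values: P_i(0) = (-1)^i and P_i(1) = 1
print(shifted_legendre(4, 0.0), shifted_legendre(4, 1.0))

# orthogonality (3.3): Gauss-Legendre nodes on [-1,1] mapped to [0,1]
t, w = np.polynomial.legendre.leggauss(20)
x, w = (t + 1) / 2, w / 2
print(np.dot(w, shifted_legendre(3, x) * shifted_legendre(3, x)))  # expect 1/7
print(np.dot(w, shifted_legendre(3, x) * shifted_legendre(5, x)))  # expect ~0
```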

3.2 Chebyshev polynomials

The Chebyshev polynomials are defined on the interval \([-1,1]\) and can be determined with the aid of the following recurrence formula:

$$\begin{aligned} T_{i+1}{(t)}=2tT_i{(t)}-T_{i-1}{(t)},\quad i=1,2,\ldots , \end{aligned}$$

where \(T_0{(t)}=1\) and \(T_1{(t)}=t\). In order to use these polynomials on the interval \(x\in [0,L]\), we define the so-called shifted Chebyshev polynomials by introducing the change of variable \(t=\dfrac{2x}{L}-1\).

Let the shifted Chebyshev polynomials \(T_i{\left( \dfrac{2x}{L}\!-\!1\right) }\) be denoted by \(T_{L,i}{(x)}\). Then, \(T_{L,i}{(x)}\) can be obtained as follows:

$$\begin{aligned} T_{L,i+1}{(x)}= & {} 2\left( \frac{2x}{L}-1\right) T_{L,i}{(x)}-T_{L,i-1}{(x)},\nonumber \\&i=1,2,\ldots , \end{aligned}$$
(3.5)

where \(T_{L,0}{(x)}=1\) and \(T_{L,1}{(x)}=\dfrac{2x}{L}-1\). The analytic form of the shifted Chebyshev polynomials \(T_{L,i}{(x)}\) of degree \(i\) is given by

$$\begin{aligned} T_{L,i}{(x)}=i\sum \limits _{k=0}^{i }{(-1)}^{i-k}\frac{ {(i+k-1)}!\ 2^{2k}}{ {(i-k)}! \ {(2k)}!\ L^k}\ x^k , \end{aligned}$$
(3.6)

where \(T_{L,i}{(0)}={(-1)}^i\) and \(T_{L,i}{(L)}=1\).

The orthogonality condition is

$$\begin{aligned} \int _{0}^{L} T_{L,j} (x)T_{L,k}(x)w_L(x)\hbox {d}x =h_j , \end{aligned}$$
(3.7)

where \(w_L(x)=\dfrac{1}{\sqrt{L x-x^2}}\ \text {and}\ h_j = {\left\{ \begin{array}{ll} \dfrac{b_j}{2}\pi , &{} k=j, \\ 0, &{} k \ne j, \end{array}\right. }\qquad b_0=2,\ b_j=1,\ j\ge 1.\)

The special values

$$\begin{aligned} T^{(q)}_{L,i}(0)\!=\!(-1)^{(i-q)}\frac{i\ (i+q-1)!}{\varGamma {(q\!+\!\frac{1}{2})}\ (i-q)!\ L^q}\ \sqrt{\pi },\quad q\!\le \! i,\nonumber \\ \end{aligned}$$
(3.8)

will be of important use later.

3.3 Jacobi polynomials

The Jacobi polynomials are defined on the interval \([-1,1]\) and can be generated with the aid of the following recurrence formula:

$$\begin{aligned} P^{(\alpha ,\beta )}_{i}{(t)}= & {} \frac{(\alpha +\beta +2i-1)\{\alpha ^2-\beta ^2+ t(\alpha +\beta +2i)(\alpha +\beta +2i-2)\}}{2i(\alpha +\beta +i)(\alpha +\beta +2i-2)}P^{(\alpha ,\beta )}_{i-1}{(t)}\\&-\frac{(\alpha +i-1)(\beta +i-1)(\alpha +\beta +2i)}{i(\alpha +\beta +i)(\alpha +\beta +2i-2)}P^{(\alpha ,\beta )}_{i-2}{(t)},\quad \qquad \quad i=2,3,\ldots , \end{aligned}$$

where \(\alpha ,\ \beta >-1\) and

$$\begin{aligned} P^{(\alpha ,\beta )}_0{(t)}\!=\!1\quad \text {and} \quad P^{(\alpha ,\beta )}_1{(t)}\!=\!\frac{\alpha +\beta +2}{2}t+\frac{\alpha \!-\!\beta }{2}. \end{aligned}$$

In order to use these polynomials in the interval \(x\in [0,L]\), we define the so-called shifted Jacobi polynomials by introducing the change of variable \(t=\dfrac{2x}{L}-1\).

Let the shifted Jacobi polynomials \(P^{(\alpha ,\beta )}_i{\left( \dfrac{2x}{L}-1\right) }\) be denoted by \(P^{(\alpha ,\beta )}_{L,i}{(x)}\). Then, \(P^{(\alpha ,\beta )}_{L,i}{(x)}\) can be generated from:

$$\begin{aligned} P^{(\alpha ,\beta )}_{L,i}{(x)}= & {} \frac{(\alpha +\beta +2i-1)\left\{ \alpha ^2-\beta ^2+ (\dfrac{2x}{L}-1)(\alpha +\beta +2i)(\alpha +\beta +2i-2)\right\} }{2i(\alpha +\beta +i)(\alpha +\beta +2i-2)}\nonumber \\&\times P^{(\alpha ,\beta )}_{L,i-1}{(x)}-\frac{(\alpha +i-1)(\beta +i-1)(\alpha +\beta +2i)}{i(\alpha +\beta +i)(\alpha +\beta +2i-2)} P^{(\alpha ,\beta )}_{L,i-2}{(x)}\nonumber \\&\qquad \qquad \qquad i=2,3,\ldots , \end{aligned}$$
(3.9)

where

$$\begin{aligned}&P^{(\alpha ,\beta )}_{L,0}{(x)}=1\quad \text {and}\\&P^{(\alpha ,\beta )}_{L,1}(x)=\frac{\alpha +\beta +2}{2} \left( \dfrac{2x}{L}-1\right) +\frac{\alpha -\beta }{2}. \end{aligned}$$

The analytic form of the shifted Jacobi polynomials \(P^{(\alpha ,\beta )}_{L,i}{(x)}\) of degree \(i\) is given by

$$\begin{aligned}&P^{(\alpha ,\beta )}_{L,i}{(x)}=\sum \limits _{k=0}^{i }{(-1)}^{i-k}\nonumber \\&\quad \times \frac{\varGamma {(i+\beta +1)}\varGamma {(i+k+\alpha +\beta +1)}}{\varGamma (k+\beta +1)\varGamma (i+\alpha +\beta +1)(i-k)!k!L^k} x^k ,\nonumber \\ \end{aligned}$$
(3.10)

where

$$\begin{aligned}&P^{(\alpha ,\beta )}_{L,i}(0)={(-1)}^i\frac{\varGamma (i+\beta +1)}{\varGamma (\beta +1)\ i!}, \qquad \\&P_{L,i}^{(\alpha ,\beta )}(L)=\frac{\varGamma (i+\alpha +1)}{\varGamma (\alpha +1)\ i!}. \end{aligned}$$

The orthogonality condition of shifted Jacobi polynomials is

$$\begin{aligned} \int _{0}^{L} P^{(\alpha ,\beta )}_{L,j} (x)P^{(\alpha ,\beta )}_{L,k}(x) w^{(\alpha ,\beta )}_L(x)\hbox {d}x = h_k, \end{aligned}$$
(3.11)

where \(w^{(\alpha ,\beta )}_L(x)=x^\beta (L-x)^\alpha \ \) and

$$\begin{aligned} h_k ={\left\{ \begin{array}{ll} \dfrac{L^{\alpha +\beta +1}\varGamma (k+\alpha +1)\varGamma (k+\beta +1)}{(2k+\alpha +\beta +1) k!\varGamma (k+\alpha +\beta +1)}, \quad &{} j=k, \\ 0, \quad &{} j \ne k . \end{array}\right. } \end{aligned}$$

3.4 Laguerre polynomials

Let \(\varLambda = (0, \infty )\), let \(w(x)=\hbox {e}^{-x}\) be the weight function, and let \(L_\ell (x)\) be the Laguerre polynomial of degree \(\ell \), defined by

$$\begin{aligned} L_{\ell }{(x)}=\frac{1}{\ell !} \hbox {e}^x \partial ^\ell _x(x^\ell \, \hbox {e}^{-x}),\quad \ell =0,1,\ldots . \end{aligned}$$
(3.12)

They satisfy the equations

$$\begin{aligned} \partial _x(x\, \hbox {e}^{-x} \partial _x L_{\ell }{(x)} )+\ell \hbox {e}^{-x}L_{\ell }{(x)}=0\, \quad x\in \varLambda , \end{aligned}$$

and

$$\begin{aligned} L_{\ell }(x) = \partial _x L_{\ell }{(x)}-\partial _x L_{\ell +1}{(x)}, \quad \ell \ge 0. \end{aligned}$$

The set of Laguerre polynomials is the \(L^2_w(\varLambda )\)-orthogonal system:

$$\begin{aligned} \int _{\varLambda } L_{j} (x)L_{k}(x)w(x)\hbox {d}x =\delta _{jk}, \quad \forall j,k \ge 0, \end{aligned}$$
(3.13)

where \(\delta _{jk}\) is the Kronecker delta function.

The special value

$$\begin{aligned} D^{q}L_{i}(0)=(-1)^{q}\sum \limits _{j=0}^{i-q}\dfrac{(i-j-1)!}{(q-1)!(i-j-q)!} , \end{aligned}$$
(3.14)

where \(q\) is a positive integer, will be of important use later.

3.5 Modified generalized Laguerre polynomials

Let \(\varLambda =(0,\infty )\) and \(w^{(\alpha ,\beta )}(x)=x^{\alpha }\hbox {e}^{-\beta x}\) be a weight function on \(\varLambda \) in the usual sense. Define

$$\begin{aligned}&L^2_{w^{(\alpha ,\beta )}}(\varLambda )\\&\quad =\{v\ |\ v \ \text {is measurable on}\ \varLambda \ \text {and}\ \ ||v||_{w^{(\alpha ,\beta )}}<\infty \}, \end{aligned}$$

equipped with the following inner product and norm

$$\begin{aligned}&(u, v)_{w^{(\alpha ,\beta )}} = \int _{\varLambda } u(x)\ v(x)\ w^{(\alpha ,\beta )}(x)\ \hbox {d}x,\\&||v||_{w^{(\alpha ,\beta )}}=(u, v)_{w^{(\alpha ,\beta )}}^{\frac{1}{2}}. \end{aligned}$$

The modified generalized Laguerre polynomial \(L^{(\alpha ,\beta )}_{i}{(x)}\) of degree \(i\), for \(\alpha >-1\) and \(\beta >0\), is defined by

$$\begin{aligned} L^{(\alpha ,\beta )}_{i}{(x)}=\frac{1}{i!}\, x^{-\alpha } \hbox {e}^{\beta x}\partial ^{i}_{x}(x^{i+\alpha }\hbox {e}^{-\beta x}),\quad i=1,2,\ldots . \end{aligned}$$

For \(\alpha >-1\) and \(\beta >0\), these polynomials satisfy the recurrence relation

$$\begin{aligned} L^{(\alpha ,\beta )}_{i+1}{(x)}=\frac{1}{i+1}\left[ (2i+\alpha +1-\beta x)\, L^{(\alpha ,\beta )}_{i}{(x)}-(i+\alpha )\, L^{(\alpha ,\beta )}_{i-1}{(x)}\right] ,\quad i=1,2,\ldots , \end{aligned}$$

where \(L^{(\alpha ,\beta )}_0{(x)}=1\) and \(L^{(\alpha ,\beta )}_1{(x)}=-\beta x+\frac{\varGamma (\alpha +2)}{\varGamma (\alpha +1)}\).

The set of modified generalized Laguerre polynomials is the \(L^2_{w^{(\alpha ,\beta )}}(\varLambda )\)-orthogonal system, namely

$$\begin{aligned} \int _{0}^{\infty } L^{(\alpha ,\beta )}_{j} (x)L^{(\alpha ,\beta )}_{k}(x)w^{(\alpha ,\beta )}(x)\hbox {d}x =h_{k} \delta _{jk} , \end{aligned}$$
(3.15)

where \(\delta _{jk}\) is the Kronecker delta function and \(h_{k}=\frac{\varGamma (k+\alpha +1)}{\beta ^{\alpha +1}k!}\).

The analytic form of the modified generalized Laguerre polynomial of degree \(i\) on the interval \(\varLambda \) is given by

$$\begin{aligned} L^{(\alpha ,\beta )}_{i}{(x)}= & {} \sum \limits _{k=0}^{i }{(-1)}^{k}\frac{ \varGamma (i+\alpha +1)\beta ^{k}}{ \varGamma (k+\alpha +1)\ (i-k)! \ {k}!}\ x^k,\nonumber \\&\quad i=0,1,\ldots \end{aligned}$$
(3.16)

where \(L^{(\alpha ,\beta )}_{i}{(0)}= \frac{\varGamma (i+\alpha +1)}{\varGamma (\alpha +1)\varGamma (i+1)}\).

The special value

$$\begin{aligned} D^{q}L^{(\alpha ,\beta )}_{i} (0)=\frac{(-1)^{q}\ \beta ^{q} \varGamma (i+\alpha +1)}{(i-q)!\varGamma (q+\alpha +1)}, \quad i \ge q,\nonumber \\ \end{aligned}$$
(3.17)

will be of important use later.

Corollary 1

In particular, the Laguerre polynomials are obtained directly by taking \(\alpha =0\) and \(\beta =1\) in the modified generalized Laguerre polynomials; they are denoted by \(L_i(x)\).
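
From the analytic form (3.16), \(L^{(\alpha ,\beta )}_{i}(x)\) coincides with the classical generalized Laguerre polynomial \(L^{(\alpha )}_{i}\) evaluated at \(\beta x\), which gives an easy numerical cross-check. A minimal sketch (assuming SciPy is available; the test values of \(i\), \(\alpha \), \(\beta \) and \(x\) are arbitrary choices):

```python
from math import gamma, factorial
from scipy.special import eval_genlaguerre

def mod_gen_laguerre(i, alpha, beta, x):
    """Evaluate L_i^{(alpha,beta)}(x) from the analytic form (3.16)."""
    return sum((-1)**k * gamma(i + alpha + 1) * beta**k
               / (gamma(k + alpha + 1) * factorial(i - k) * factorial(k)) * x**k
               for k in range(i + 1))

i, alpha, beta, x = 4, 1.5, 2.0, 0.7
print(mod_gen_laguerre(i, alpha, beta, x))
print(eval_genlaguerre(i, alpha, beta * x))   # same value: L_i^{(alpha)}(beta*x)
print(mod_gen_laguerre(i, 0.0, 1.0, x))       # alpha=0, beta=1: classical L_i(x)
```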

3.6 Bernstein polynomials

The Bernstein polynomials of degree \(n\) are defined on the interval \([0,1]\) by (see [62])

$$\begin{aligned} B_{i,n}(x)=\left( {\begin{array}{c}n\\ i\end{array}}\right) x^{i} (1-x)^{n-i} ,\quad i=0,\dots ,n, \end{aligned}$$
(3.18)

These Bernstein polynomials form a complete basis over the interval \([0,1]\). A recursive definition can also be used to generate these polynomials:

$$\begin{aligned}&B_{i,n}(x)= (1-x)B_{i,n-1}(x)+ x B_{i-1,n-1}(x),\nonumber \\&\quad i=0,\dots ,n, \end{aligned}$$
(3.19)

where \(B_{-1,n-1}(x)=0\) and \(B_{n,n-1}(x)=0\).

Since the power basis \(\{1,x,x^2,\dots ,x^n\}\) forms a basis for the space of polynomials of degree less than or equal to \(n\), any Bernstein polynomial of degree \(n\) can be written in terms of the power basis. Using the binomial expansion of \((1-x)^{n-i}\), one can show that

$$\begin{aligned}&B_{i,n}(x)=\sum \limits _{j=i}^{n}(-1)^{j-i}\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-i\end{array}}\right) x^{j},\nonumber \\&\quad i=0,\dots ,n. \end{aligned}$$
(3.20)

The fact that the Bernstein polynomials are not orthogonal turns out to be a disadvantage when they are used in least squares approximation. As mentioned in [62], one approach to direct least squares approximation by polynomials in Bernstein form relies on the construction of the basis \(\{D_{0,n}(x),D_{1,n}(x),D_{2,n}(x),\dots ,D_{n,n}(x)\}\) that is “dual” to the Bernstein basis of degree \(n\) on \(x\in [0,1]\). This dual basis is characterized by the property

$$\begin{aligned} \int _{0}^{1} B_{i,n}(x) D_{j,n}(x) \hbox {d}x = {\left\{ \begin{array}{ll} 1, &{} i=j, \\ 0, &{} i \ne j, \end{array}\right. } \end{aligned}$$
(3.21)

for \(i,j=0,1,2,\dots ,n.\)

Theorem 1

The \(q\)th derivative of the Bernstein polynomials is given by

$$\begin{aligned}&D^{q} B_{i,n}(x)\nonumber \\&\quad \!=\! \frac{n!}{(n-q)!} \sum _{k=max(0,i+q-n)}^{min(i,q)} (-1)^{k+q} \left( {\begin{array}{c}q\\ k\end{array}}\right) B_{i-k,n-q}(x).\nonumber \\ \end{aligned}$$
(3.22)

(For the proof, see [62]).
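
A minimal sketch (standard library only; the test case \(B_{2,5}\), \(q=1\), \(x=0.3\) is an arbitrary choice) that compares Eq. (3.22) with a central finite difference:

```python
from math import comb, factorial

def bernstein(i, n, x):
    """B_{i,n}(x) from Eq. (3.18)."""
    return comb(n, i) * x**i * (1 - x)**(n - i)

def bernstein_deriv(i, n, q, x):
    """q-th derivative of B_{i,n} via Theorem 1, Eq. (3.22)."""
    s = sum((-1)**(k + q) * comb(q, k) * bernstein(i - k, n - q, x)
            for k in range(max(0, i + q - n), min(i, q) + 1))
    return factorial(n) // factorial(n - q) * s

# compare the first derivative of B_{2,5} at x = 0.3 with a central difference
n, i, q, x, h = 5, 2, 1, 0.3, 1e-6
print(bernstein_deriv(i, n, q, x))
print((bernstein(i, n, x + h) - bernstein(i, n, x - h)) / (2 * h))
```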

4 Operational matrices of Caputo fractional derivatives

In this section, we introduce the operational matrices of Caputo fractional derivatives for some orthogonal polynomials on finite and infinite intervals.

4.1 SLOM of fractional derivatives

A function \(u(x)\), square integrable in \([0,1]\), may be expressed in terms of shifted Legendre polynomials as

$$\begin{aligned} u{(x)}=\sum \limits _{j=0}^{\infty }c_j P_{j}{(x)}, \end{aligned}$$

where the coefficients \(c_j\) are given by

$$\begin{aligned} c_j= \dfrac{1}{h_j}\int _{0}^{1}u (x)P_{j}(x)w(x)\hbox {d}x, \quad j=0,1,2,\ldots .\nonumber \\ \end{aligned}$$
(4.1)

In practice, only the first \((N+1)\) shifted Legendre polynomials are considered. Hence, \(u(x)\) can be expressed in the form

$$\begin{aligned} u_N{(x)}\simeq \sum \limits _{j=0}^{N}c_j P_{j}{(x)}= C^\mathrm{T} \phi (x), \end{aligned}$$
(4.2)

where the shifted Legendre coefficient vector \(C\) and the shifted Legendre vector \( \phi (x)\) are given by

$$\begin{aligned}&C^\mathrm{T}=[c_0,c_1,\ldots ,c_N],\qquad \nonumber \\&\phi (x)=[P_{0}(x), P_{1}(x),\ldots ,P_{N}(x)]^\mathrm{T}. \end{aligned}$$
(4.3)

Lemma 2

Let \(P_{i}(x)\) be a shifted Legendre polynomial. Then

$$\begin{aligned} D^{\nu }{P}_{i}(x)=0, \quad i=0,1,\ldots ,\lceil \nu \rceil -1,\quad \nu >0. \end{aligned}$$
(4.4)

In the following theorem, the operational matrix of fractional derivatives for the shifted Legendre vector is given (see [68]).

Theorem 2

Let \(\phi (x)\) be the shifted Legendre vector defined in Eq. (4.3) and suppose \(\nu > 0\). Then

$$\begin{aligned} D^\nu \phi (x)\simeq \mathbf D ^{(\nu )} \phi (x), \end{aligned}$$
(4.5)

where \( \mathbf D ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of derivatives of order \(\nu \) in the Caputo sense and is defined as follows:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \sum \limits _{k=\lceil \nu \rceil }^{\lceil \nu \rceil }\theta _{\lceil \nu \rceil ,0,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{\lceil \nu \rceil }\theta _{\lceil \nu \rceil ,1,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{\lceil \nu \rceil }\theta _{\lceil \nu \rceil ,2,k} &{} \ldots &{} \sum \limits _{k=\lceil \nu \rceil }^{\lceil \nu \rceil }\theta _{\lceil \nu \rceil ,N,k} \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \sum \limits _{k=\lceil \nu \rceil }^{i}\theta _{i,0,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{i}\theta _{i,1,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{i}\theta _{i,2,k} &{} \ldots &{} \sum \limits _{k=\lceil \nu \rceil }^{i}\theta _{i,N,k} \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \sum \limits _{k=\lceil \nu \rceil }^{N}\theta _{N,0,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{N}\theta _{N,1,k} &{} \sum \limits _{k=\lceil \nu \rceil }^{N}\theta _{N,2,k} &{} \ldots &{} \sum \limits _{k=\lceil \nu \rceil }^{N}\theta _{N,N,k} \\ \end{array} \right) \nonumber \\ \end{aligned}$$
(4.6)

where

$$\begin{aligned}&\theta _{i,j,k}\nonumber \\&\quad \!=\! \sum \limits _{\ell =0}^{j}\dfrac{(-1)^{i+j+k+\ell }\ (2j+1)\ (i+k)!\ (\ell +j)!}{ (i-k)!\ k!\ \varGamma (k\!-\!\nu \!+\!1)\ (j-\ell )!\ (\ell !)^2\ (k\!+\!\ell \!-\!\nu \!+\!1) }. \end{aligned}$$

Note that in \(\mathbf D ^{(\nu )}\), the first \(\lceil \nu \rceil \) rows are all zero.

(For the proof, see [68]).
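
A minimal numerical sketch of Theorem 2 (assuming NumPy, with \(L=1\) so the interval is \([0,1]\); the parameters \(N=6\), \(\nu =1/2\) are arbitrary test choices): it assembles \(\mathbf D ^{(\nu )}\) from the entries \(\theta _{i,j,k}\) and compares one row of \(\mathbf D ^{(\nu )}\phi (x)\) with the Caputo derivative of \(P_{i}\) computed directly from (3.2) and (2.4); the two printed values should be close, and the agreement improves as \(N\) grows:

```python
import numpy as np
from math import gamma, factorial, ceil

N, nu = 6, 0.5
m = ceil(nu)

def P(i, x):
    """Shifted Legendre P_i(x) on [0,1], analytic form (3.2)."""
    return sum((-1)**(i + k) * factorial(i + k)
               / (factorial(i - k) * factorial(k)**2) * x**k for k in range(i + 1))

def caputo_P(i, nu, x):
    """Caputo derivative of P_i, obtained directly from (3.2) and (2.4)."""
    return sum((-1)**(i + k) * factorial(i + k)
               / (factorial(i - k) * factorial(k) * gamma(k - nu + 1)) * x**(k - nu)
               for k in range(ceil(nu), i + 1))

def theta(i, j, k, nu):
    """Entry theta_{i,j,k} of Theorem 2 (sum over ell)."""
    return sum((-1)**(i + j + k + l) * (2*j + 1) * factorial(i + k) * factorial(j + l)
               / (factorial(i - k) * factorial(k) * gamma(k - nu + 1)
                  * factorial(j - l) * factorial(l)**2 * (k + l - nu + 1))
               for l in range(j + 1))

# assemble the SLOM: rows 0,...,ceil(nu)-1 stay zero
D = np.zeros((N + 1, N + 1))
for i in range(m, N + 1):
    for j in range(N + 1):
        D[i, j] = sum(theta(i, j, k, nu) for k in range(m, i + 1))

x = 0.4
phi = np.array([P(j, x) for j in range(N + 1)])
print(D[3] @ phi, caputo_P(3, nu, x))   # approximate vs. exact Caputo derivative of P_3
```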

4.2 SCOM for fractional derivatives

A function \(u(x)\), square integrable in \([0,L]\), may be expressed in terms of shifted Chebyshev polynomials as

$$\begin{aligned} u{(x)}=\sum \limits _{j=0}^{\infty }c_j T_{L,j}{(x)}, \end{aligned}$$

where the coefficients \(c_j\) are given by

$$\begin{aligned} c_j\!=\! \dfrac{1}{h_j}\int _{0}^{L}u (x)T_{L,j}(x)w_L(x)\hbox {d}x, \quad j\!=\!0,1,2,\ldots .\qquad \nonumber \\ \end{aligned}$$
(4.7)

In practice, only the first \((N+1)\) shifted Chebyshev polynomials are considered. Hence, we can write

$$\begin{aligned} u_N{(x)}\simeq \sum \limits _{j=0}^{N}c_j T_{L,j}{(x)}= C^\mathrm{T} \varphi (x), \end{aligned}$$
(4.8)

where the shifted Chebyshev coefficients vector \(C\) and the shifted Chebyshev vector \( \varphi (x)\) are given by:

$$\begin{aligned}&C^\mathrm{T}=[c_0,c_1,\ldots ,c_N],\qquad \nonumber \\&\varphi (x)=[T_{L,0}(x), T_{L,1}(x),\ldots ,T_{L,N}(x)]^\mathrm{T}. \end{aligned}$$
(4.9)

Lemma 3

Let \(T_{L,i}(x)\) be a shifted Chebyshev polynomial. Then

$$\begin{aligned} D^{\nu }{T}_{L,i}(x)=0, \quad i=0,1,\ldots ,\lceil \nu \rceil -1,\quad \nu >0.\nonumber \\ \end{aligned}$$
(4.10)

In the following theorem, the operational matrix of fractional derivatives for the shifted Chebyshev vector is given (see [53]).

Theorem 3

Let \(\varphi (x)\) be the shifted Chebyshev vector defined in Eq. (4.9) and suppose \(\nu > 0\). Then

$$\begin{aligned} D^\nu \varphi (x)\simeq \mathbf D ^{(\nu )} \varphi (x), \end{aligned}$$
(4.11)

where \( \mathbf D ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of derivatives of order \(\nu \) in the Caputo sense and is defined as follows:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ S_{\nu }(\lceil \nu \rceil ,0) &{}S_{\nu }(\lceil \nu \rceil ,1) &{} S_{\nu }(\lceil \nu \rceil ,2) &{} \ldots &{} S_{\nu }(\lceil \nu \rceil ,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ S_{\nu }(i,0) &{} S_{\nu }(i,1) &{} S_{\nu }(i,2) &{} \ldots &{} S_{\nu }(i,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ S_{\nu }(N,0) &{} S_{\nu }(N,1) &{} S_{\nu }(N,2) &{} \ldots &{} S_{\nu }(N,N) \\ \end{array} \right) \nonumber \\ \end{aligned}$$
(4.12)

where

$$\begin{aligned}&S_{\nu }(i,j)\nonumber \\&\quad \!=\! \sum \limits _{k=\lceil \nu \rceil }^{i}\dfrac{(-1)^{i-k}\ 2i\ (i\!+\!k\!-\!1)!\ \varGamma (k\!-\!\nu \!+\!\frac{1}{2}) }{b_j\ L^\nu \ \varGamma (k\!+\!\frac{1}{2})\ (i\!-\!k)!\ \varGamma (k\!-\!\nu \!-\!j\!+\!1)\ \varGamma (k\!+\!j\!-\!\nu \!+\!1) }, \end{aligned}$$

in which \(b_0=2\) and \(b_j=1\) for \(j\ge 1\), as in Eq. (3.7).

Note that in \(\mathbf D ^{(\nu )}\), the first \(\lceil \nu \rceil \) rows are all zero.

(For the proof, see [53]).

4.3 SJOM for fractional derivatives

Let \(u(x)\) be a polynomial of degree \(N\). Then, it may be expressed in terms of shifted Jacobi polynomials as

$$\begin{aligned} u{(x)}=\sum \limits _{j=0}^{N }c_j P^{(\alpha ,\beta )}_{L,j}{(x)}=C^\mathrm{T} \varPhi (x), \end{aligned}$$
(4.13)

where the coefficients \(c_j\) are given by

$$\begin{aligned}&c_j=\frac{1}{h_j}\ \int _{0}^{L}w^{(\alpha ,\beta )}_L(x) u(x)P^{(\alpha ,\beta )}_{L,j}(x)\,\hbox {d}x\nonumber \\&\quad j=0,1,\ldots . \end{aligned}$$
(4.14)

If the shifted Jacobi coefficient vector \(C\) and the shifted Jacobi vector \( \varPhi (x)\) are written as

$$\begin{aligned}&C^\mathrm{T}=[c_0,c_1,\ldots ,c_N],\qquad \nonumber \\&\varPhi (x)=\left[ P^{(\alpha ,\beta )}_{L,0}(x), P^{(\alpha ,\beta )}_{L,1}(x),\ldots ,P^{(\alpha ,\beta )}_{L,N}(x)\right] ^\mathrm{T},\nonumber \\ \end{aligned}$$
(4.15)

respectively, then the first-order derivative of the vector \( \varPhi (x)\) can be expressed by

$$\begin{aligned} \frac{\hbox {d}\varPhi (x)}{\hbox {d}x}=\mathbf D ^{(1)} \varPhi (x), \end{aligned}$$
(4.16)

where \(\mathbf D ^{(1)}\) is the \( (N+1)\times (N+1)\) operational matrix of derivative given by

$$\begin{aligned} \mathbf D ^{(1)}=(d_{ij})={\left\{ \begin{array}{ll} C_1(i,j), \quad &{} i > j, \\ 0 \quad &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned}&C_{1}(i, j)\nonumber \\&\quad = \frac{L^{\alpha +\beta }(i+\alpha +\beta +1)(i+\alpha +\beta +2)_{j} (j+\alpha +2)_{i-j-1} }{(i-j-1)!\ \varGamma (2j+\alpha +\beta +1)}\nonumber \\&\qquad \times \varGamma (j+\alpha +\beta +1)\ _{3}F_{2}\nonumber \\&\qquad \times \begin{pmatrix} -i+1+j, &{} i+j+\alpha +\beta +2, &{} j+\alpha +1 &{} \\ &{} &{} &{};1 \\ j+\alpha +2, &{} \qquad 2j+\alpha +\beta +2 \end{pmatrix}. \end{aligned}$$

(For the proof, see [92, 93], and for the general definition of a generalized hypergeometric series and special \(_3 F_2\), see [94]).

For example, for even \(N\) we have

$$\begin{aligned} \mathbf D ^{(1)}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 &{} 0 \\ C_1(1,0) &{} 0 &{} 0 &{} \ldots &{} 0 &{} 0 \\ C_1(2,0) &{} C_1(2,1) &{} 0 &{} \ldots &{} 0 &{} 0 \\ C_1(3,0) &{} C_1(3,1) &{} C_1(3,2) &{} \ldots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots &{} \vdots \\ C_1(N,0) &{} C_1(N,1) &{} C_1(N,2) &{} \ldots &{} C_1(N,N-1) &{} 0 \\ \end{array}\right) . \end{aligned}$$

The main objective of this section is to generalize the SJOM of derivatives for fractional calculus. By using (4.16), it is clear that

$$\begin{aligned} \frac{\hbox {d}^n\varPhi (x)}{\hbox {d}x^n}=(\mathbf D ^{(1)})^n \varPhi (x), \end{aligned}$$
(4.17)

where \(n \in N\) and the superscript in \(\mathbf D ^{(1)}\), denotes matrix powers. Thus

$$\begin{aligned} \mathbf D ^{(n)}=(\mathbf D ^{(1)})^n ,\quad \quad n=1,2,\ldots . \end{aligned}$$
(4.18)

Corollary 2

In the case \(\alpha =\beta =0\), the SJOM of derivatives for integer calculus is in complete agreement with the Legendre operational matrix of derivatives for integer calculus obtained by Saadatmandi and Dehghan (see [68], Eq. (11)).

Corollary 3

In the case \(\alpha =\beta =-\frac{1}{2}\), the SJOM of derivatives for integer calculus is in complete agreement with the Chebyshev operational matrix of derivatives for integer calculus obtained by Doha et al. (see [53], Eq. (3.2)).

Lemma 4

Let \(P^{(\alpha ,\beta )}_{L,i}(x)\) be a shifted Jacobi polynomial. Then

$$\begin{aligned} D^{\nu }P^{(\alpha ,\beta )}_{L,i}(x)=0 \quad i=0,1,2,\ldots ,\lceil \nu \rceil -1,\quad \nu >0.\nonumber \\ \end{aligned}$$
(4.19)

Theorem 4

Let \(\varPhi (x)\) be the shifted Jacobi vector defined in Eq. (4.15) and let \(\nu > 0\). Then

$$\begin{aligned} D^{\nu }\varPhi (x)\simeq \mathbf D ^{(\nu )} \varPhi (x), \end{aligned}$$
(4.20)

where \( \mathbf D ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of derivatives of order \(\nu \) in the Caputo sense and is defined by:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \varDelta _{\nu }(\lceil \nu \rceil ,0) &{}\varDelta _{\nu }(\lceil \nu \rceil ,1) &{} \varDelta _{\nu }(\lceil \nu \rceil ,2) &{} \ldots &{} \varDelta _{\nu }(\lceil \nu \rceil ,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \varDelta _{\nu }(i,0) &{} \varDelta _{\nu }(i,1) &{} \varDelta _{\nu }(i,2) &{} \ldots &{} \varDelta _{\nu }(i,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \varDelta _{\nu }(N,0) &{} \varDelta _{\nu }(N,1) &{} \varDelta _{\nu }(N,2) &{} \ldots &{} \varDelta _{\nu }(N,N) \\ \end{array} \right) ,\nonumber \\ \end{aligned}$$
(4.21)

where

$$\begin{aligned}\varDelta _{\nu }(i,j)= \sum \limits _{k=\lceil \nu \rceil }^{i}\delta _{ijk}, \end{aligned}$$

and \(\delta _{ijk}\) is given by

$$\begin{aligned} \delta _{ijk}= & {} \frac{(-1)^{i-k}\ L^{\alpha +\beta -\nu +1} \varGamma (j+\beta +1) \varGamma (i+\beta +1)\ \varGamma (i+k+\alpha +\beta +1)}{h_j\ \varGamma (j+\alpha +\beta +1) \varGamma (k+\beta +1)\varGamma (i+\alpha +\beta +1)\varGamma (k-\nu +1)(i-k)!}\nonumber \\&\times \sum \limits _{l=0}^{j}\frac{(-1)^{j-l}\varGamma (j+l+\alpha +\beta +1)\varGamma (\alpha +1) \varGamma (l+k+\beta -\nu +1)}{\varGamma (l+\beta +1)\varGamma (l+k+\alpha +\beta -\nu +2)(j-l)!\ l! }. \end{aligned}$$
(4.22)

Note that in \(\mathbf D ^{(\nu )}\), the first \(\lceil \nu \rceil \) rows, are all zeros.

Proof

The analytic form of the shifted Jacobi polynomials \(P^{(\alpha ,\beta )}_{L,i}{(x)}\) of degree \(i\) is given by (3.10). Using Eqs. (2.4) and (2.5) in Eq. (3.10) we have

$$\begin{aligned}&D^{\nu }P^{(\alpha ,\beta )}_{L,i}(x)\nonumber \\&\quad =\sum \limits _{k=0}^{i} \frac{(-1)^{i-k}\ \varGamma (i+\beta +1) \varGamma (i+k+\alpha +\beta +1)}{\varGamma (k+\beta +1)\ \varGamma (i+\alpha +\beta +1)\ (i-k)!\ k!\ L^k}\ D^{\nu }{x^k}\nonumber \\&\quad =\sum \limits _{k=\lceil \nu \rceil }^{i}\frac{(-1)^{i-k} \varGamma (i+\beta +1)\ \varGamma (i\!+\!k\!+\!\alpha \!+\!\beta \!+\!1)\ \ x^{k-\nu }}{\varGamma (k\!+\!\beta \!+\!1) \varGamma (i\!+\!\alpha \!+\!\beta +1)(i\!-\!k)! \varGamma (k\!-\!\nu \!+\!1)\ L^k } ,\nonumber \\&\qquad i=\lceil \nu \rceil ,\lceil \nu \rceil +1,\ldots . \end{aligned}$$
(4.23)

Now, approximating \(x^{k-\nu }\) by \((N+1)\) terms of the shifted Jacobi series leads to

$$\begin{aligned} x^{k-\nu }\simeq \sum \limits _{j=0}^{N}b_{k,j} P^{(\alpha ,\beta )}_{L,j}{(x)}, \end{aligned}$$
(4.24)

where \(b_{kj}\) is given from (4.14) with \(u(x)=x^{k-\nu }\). This gives

$$\begin{aligned} b_{k,j}= & {} \frac{L^{\alpha +\beta +k-\nu +1} \varGamma (j+\beta +1)}{h_j\ \varGamma (j+\alpha +\beta +1)} \nonumber \\&\times \sum \limits _{l=0}^{j} \frac{(-1)^{j-l}\varGamma (j+l+\alpha +\beta +1)\varGamma (\alpha +1)\varGamma (l+k+\beta -\nu +1)}{\varGamma (l+\beta +1)(j-l)!\ l!\ \varGamma (l+k+\alpha +\beta -\nu +2)}. \end{aligned}$$
(4.25)

Employing Eqs. (4.23)–(4.25), it yields

$$\begin{aligned} D^{\nu }P^{(\alpha ,\beta )}_{L,i}(x)= & {} \sum \limits _{j=0}^{N}\varDelta _{\nu }(i,j)P^{(\alpha ,\beta )}_{L,j}(x),\nonumber \\&\qquad i=\lceil \nu \rceil ,\lceil \nu \rceil +1,\ldots ,N, \end{aligned}$$
(4.26)

where \(\varDelta _{\nu }(i,j)\) is given in Eq. (4.22).

Accordingly, rewriting Eq. (4.26) as a vector form gives

$$\begin{aligned}&D^{\nu }P^{(\alpha ,\beta )}_{L,i}(x)\nonumber \\&\quad \simeq \Big [ \varDelta _{\nu }(i,0), \varDelta _{\nu }(i,1), \varDelta _{\nu }(i,2), \ldots , \varDelta _{\nu }(i,N)\Big ]\varPhi (x),\nonumber \\&\qquad i=\lceil \nu \rceil ,\lceil \nu \rceil +1,\ldots ,N. \end{aligned}$$
(4.27)

Also, according to Lemma 4, one can write

$$\begin{aligned} D^{\nu }P^{(\alpha ,\beta )}_{L,i}(x)\simeq & {} \Big [0, 0, 0, \ldots , 0\Big ]\varPhi (x),\nonumber \\&i=0,1,\ldots ,\lceil \nu \rceil -1. \end{aligned}$$
(4.28)

A combination of Eqs. (4.27) and (4.28) leads to the desired result. \(\square \)

Corollary 4

If \(\alpha =\beta =0\) and \(L=1\), then \(\delta _{ijk}\) is given as follows:

$$\begin{aligned}&\delta _{ijk}=\frac{(-1)^{i-k}\ \varGamma (j+1) \varGamma (i+1)\ \varGamma (i+k+1)}{h_j\ \varGamma (j+1) \varGamma (k+1)\varGamma (i+1)\varGamma (k-\nu +1)(i-k)!}\\&\quad \times \sum \limits _{l=0}^{j}\frac{(-1)^{j-l}\varGamma (j+l+1) \varGamma (l+k-\nu +1)}{\varGamma (l+k-\nu +2)(j-l)!\ l! }. \end{aligned}$$

With the aid of properties of shifted Jacobi polynomials, and after some analytical manipulations, we have

$$\begin{aligned}&\delta _{ijk}=\theta _{ijk}=(2j+1)\sum \limits _{l=0}^{j}\\&\quad \times \dfrac{(-1)^{i+j+k+l}\ (i+k)!\ (j+l)!}{(i-k)! (k)! \varGamma (k-\nu +1) (j-l)! (l!)^2 (k+l-\nu +1)}. \end{aligned}$$

Then, one can show that

$$\begin{aligned} \varDelta _{\nu }(i,j)= \sum \limits _{k=\lceil \nu \rceil }^{i}\theta _{ijk}, \end{aligned}$$

where \(\theta _{ijk}\) is given as in Theorem 2.

It is clear that the SJOM of derivatives for fractional calculus, with \(\alpha =\beta =0,\) is in complete agreement with Legendre operational matrix of derivatives for fractional calculus obtained by Saadatmandi and Dehghan (see [68] Eq. 14).

Corollary 5

If \(\alpha =\beta =-\frac{1}{2}\), then \(\delta _{ijk}\) is given as follows:

$$\begin{aligned} \delta _{ijk}= & {} \frac{(-1)^{i-k}\ L^{-\nu } \varGamma (j+\frac{1}{2}) \varGamma (i+\frac{1}{2})\ \varGamma (i+k)}{h_j\ \varGamma (j) \varGamma (k+\frac{1}{2})\varGamma (i)\varGamma (k-\nu +1)(i-k)!}\\\times & {} \sum \limits _{l=0}^{j}\frac{(-1)^{j-l}\varGamma (j+l)\varGamma (\frac{1}{2}) \varGamma (l+k-\nu +\frac{1}{2})}{\varGamma (l+\frac{1}{2})\varGamma (l+k-\nu +1)(j-l)!\ l! }. \end{aligned}$$

With the aid of properties of shifted Jacobi polynomials and (3.5), and after some manipulations, we have

$$\begin{aligned}&\delta _{ijk}\nonumber \\&=\dfrac{(-1)^{i-k}\ 2i\ (i+k-1)!\ \varGamma (k-\nu +\frac{1}{2}) }{b_j\ L^\nu \ \varGamma (k+\frac{1}{2})\ (i-k)!\ \varGamma (k-\nu -j+1)\ \varGamma (k+j-\nu +1)},\nonumber \\&j=0,1,\ldots ,N. \end{aligned}$$

Then one can show that

$$\begin{aligned} \varDelta _{\nu }(i,j)= S_{\nu }(i,j), \end{aligned}$$

where \(S_{\nu }(i,j)\) is given as in Theorem 3.

It is clear that the SJOM of derivatives for fractional calculus with \(\alpha =\beta =-\frac{1}{2},\) is in complete agreement with Chebyshev operational matrix of derivatives for fractional calculus obtained by Doha et al. (see [53]).

4.4 LOM of fractional derivatives

Let \(u(x)\in L^2_{w}(\varLambda )\). Then \(u(x)\) may be expressed in terms of Laguerre polynomials as

$$\begin{aligned}&u{(x)}=\sum \limits _{j=0}^{\infty }a_j L_{j}{(x)}, \nonumber \\&a_j=\int _{0}^{\infty }u (x)L_{j}(x)w(x)\hbox {d}x, \quad j=0,1,2,\ldots .\nonumber \\ \end{aligned}$$
(4.29)

In practice, only the first \((N+1)\) Laguerre polynomials are considered. Then, we have

$$\begin{aligned} u_N{(x)}=\sum \limits _{j=0}^{N}a_j L_{j}{(x)}= C^\mathrm{T} \emptyset (x), \end{aligned}$$
(4.30)

where the Laguerre coefficient vector \(C\) and the Laguerre vector \(\emptyset \) are given by

$$\begin{aligned}&C^\mathrm{T}=[a_0,a_1,\ldots ,a_N],\nonumber \\&\emptyset (x)=[L_0(x), L_1(x),\ldots ,L_N(x)]^\mathrm{T}. \end{aligned}$$
(4.31)

Lemma 5

Let \(L_{i}(x)\) be the Laguerre polynomial. Then

$$\begin{aligned} D^{\nu }{L}_{i}(x)=0, \quad i=0,1,\ldots ,\lceil \nu \rceil -1,\quad \nu >0.\nonumber \\ \end{aligned}$$
(4.32)

In the following theorem, the operational matrix of fractional derivatives for the Laguerre vector is given (see [84]).

Theorem 5

Let \(\emptyset (x)\) be the Laguerre vector defined in Eq. (4.31) and suppose \(\nu > 0\). Then

$$\begin{aligned} D^\nu \emptyset (x)\simeq \mathbf D ^{(\nu )} \emptyset (x), \end{aligned}$$
(4.33)

where \( \mathbf D ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of derivatives of order \(\nu \) in the Caputo sense and is defined as follows:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \mathfrak {R}_{\nu }(\lceil \nu \rceil ,0) &{}\mathfrak {R}_{\nu }(\lceil \nu \rceil ,1) &{} \mathfrak {R}_{\nu }(\lceil \nu \rceil ,2) &{} \ldots &{} \mathfrak {R}_{\nu }(\lceil \nu \rceil ,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \mathfrak {R}_{\nu }(i,0) &{} \mathfrak {R}_{\nu }(i,1) &{} \mathfrak {R}_{\nu }(i,2) &{} \ldots &{} \mathfrak {R}_{\nu }(i,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \mathfrak {R}_{\nu }(N,0) &{} \mathfrak {R}_{\nu }(N,1) &{} \mathfrak {R}_{\nu }(N,2) &{} \ldots &{} \mathfrak {R}_{\nu }(N,N) \\ \end{array} \right) \nonumber \\ \end{aligned}$$
(4.34)

where

$$\begin{aligned} \mathfrak {R}_{\nu }(i,j)= \sum \limits _{k=\lceil \nu \rceil }^{i}\frac{(-1)^{k}\ i!\ \varGamma (j-k+\nu )}{j!\ (i-k)!\ k!\ \varGamma (-k+\nu )}. \end{aligned}$$

Note that in \(\mathbf D ^{(\nu )}\), the first \(\lceil \nu \rceil \) rows are all zero.

(For the proof, see [84]).

4.5 Modified generalized Laguerre operational matrix of fractional derivatives

If \(u(x)\in L^2_{w^{(\alpha ,\beta )}}(\varLambda )\), then \(u(x)\) may be expressed in terms of modified generalized Laguerre polynomials as

$$\begin{aligned}&u{(x)}=\sum \limits _{j=0}^{\infty }a_j L^{(\alpha ,\beta )}_{j}{(x)},\nonumber \\&a_j=\frac{1}{h_{j}}\int _{0}^{\infty }u (x)L^{(\alpha ,\beta )}_{j}(x)w^{(\alpha ,\beta )}(x)\hbox {d}x, \nonumber \\&\quad j=0,1,2,\ldots . \end{aligned}$$
(4.35)

In practice, only the first \((N+1)\) modified generalized Laguerre polynomials are considered. Then, we have

$$\begin{aligned} u_N{(x)}=\sum \limits _{j=0}^{N}a_j L^{(\alpha ,\beta )}_{j}{(x)}= C^\mathrm{T} \psi (x), \end{aligned}$$
(4.36)

where the modified generalized Laguerre coefficient vector \(C\) and the modified generalized Laguerre vector \(\psi (x)\) are given by

$$\begin{aligned}&C^\mathrm{T}=[a_0,a_1,\ldots ,a_N],\nonumber \\&\psi (x)=[L^{(\alpha ,\beta )}_0(x), L^{(\alpha ,\beta )}_1(x),\ldots ,L^{(\alpha ,\beta )}_N(x)]^\mathrm{T},\nonumber \\ \end{aligned}$$
(4.37)

Lemma 6

Let \(L^{(\alpha ,\beta )}_{i}(x)\) be a modified generalized Laguerre polynomial. Then

$$\begin{aligned} D^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)=0, \quad i=0,1,\ldots ,\lceil \nu \rceil -1,\quad \nu >0.\nonumber \\ \end{aligned}$$
(4.38)

Theorem 6

Let \(\psi (x)\) be the modified generalized Laguerre vector defined in Eq. (4.37) and suppose \(\nu > 0\). Then

$$\begin{aligned} D^\nu \psi (x)\simeq \mathbf D ^{(\nu )} \psi (x), \end{aligned}$$
(4.39)

where \( \mathbf D ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional derivative of order \(\nu \) in the Caputo sense and is defined as follows:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \varOmega _{\nu }(\lceil \nu \rceil ,0) &{}\varOmega _{\nu }(\lceil \nu \rceil ,1) &{} \varOmega _{\nu }(\lceil \nu \rceil ,2) &{} \ldots &{} \varOmega _{\nu }(\lceil \nu \rceil ,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \varOmega _{\nu }(i,0) &{} \varOmega _{\nu }(i,1) &{} \varOmega _{\nu }(i,2) &{} \ldots &{} \varOmega _{\nu }(i,N) \\ \vdots &{} \vdots &{} \vdots &{} \ldots &{} \vdots \\ \varOmega _{\nu }(N,0) &{} \varOmega _{\nu }(N,1) &{} \varOmega _{\nu }(N,2) &{} \ldots &{} \varOmega _{\nu }(N,N) \\ \end{array} \right) \nonumber \\ \end{aligned}$$
(4.40)

where \(\varOmega _{\nu }(i,j)\) is given in Eq. (4.45) below.

Note that in \(\mathbf D ^{(\nu )}\), the first \(\lceil \nu \rceil \) rows are all zero.

Proof

The analytic form of the modified generalized Laguerre polynomials \(L_{i}^{(\alpha ,\beta )}{(x)}\) of degree \(i\) is given by (3.16). Using Eqs. (2.4), (2.5) and (3.16) we have

$$\begin{aligned}&D^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)=\sum \limits _{k=0}^{i}\ (-1)^{k}\ \frac{\beta ^{k}\ \varGamma (i+\alpha +1)\ }{(i-k)!\ k!\ \varGamma (k+\alpha +1)}\ D^\nu x^k\nonumber \\&\quad =\sum \limits _{k=\lceil \nu \rceil }^{i}(-1)^{k}\ \frac{\beta ^{k}\ \varGamma (i+\alpha +1) }{(i-k)!\ \varGamma (k-\nu +1)\ \varGamma (k+\alpha +1)}\ x^{k-\nu },\nonumber \\&\qquad i=\lceil \nu \rceil ,\ldots ,N. \end{aligned}$$
(4.41)

Now, approximating \(x^{k-\nu }\) by \((N+1)\) terms of the modified generalized Laguerre series, we have

$$\begin{aligned} x^{k-\nu }= \sum \limits _{j=0}^{N}b_{j}L_{j}^{(\alpha ,\beta )}{(x)}, \end{aligned}$$
(4.42)

where \(b_{j}\) is given from (4.35) with \(u(x)=x^{k-\nu }\), and

$$\begin{aligned} b_{j}=\sum \limits _{\ell =0}^{j}\ (-1)^{\ell }\ \frac{\beta ^{-k+\nu } j!\ \varGamma (k-\nu +\alpha +\ell +1)\ }{(j-\ell )!\ (\ell )!\ \varGamma (\ell +\alpha +1)}.\nonumber \\ \end{aligned}$$
(4.43)

Employing Eqs. (4.41)–(4.43) yields

$$\begin{aligned}&D^{\nu }{L}_{i}^{(\alpha ,\beta )}(x)= \sum \limits _{j=0}^{N}\varOmega _{\nu }(i,j)L_{j}^{(\alpha ,\beta )}(x) ,\nonumber \\&\quad i=\lceil \nu \rceil ,\ldots ,N, \end{aligned}$$
(4.44)

where

$$\begin{aligned}&\varOmega _{\nu }(i,j)=\sum \limits _{k=\lceil \nu \rceil }^{i} \sum \limits _{\ell =0}^{j}\nonumber \\&\quad \times \dfrac{(-1)^{k+\ell }\ \beta ^{\nu }\ j!\ \varGamma (i+\alpha +1)\ \varGamma (k-\nu +\alpha +\ell +1)}{ (i-k)!\ (j-\ell )!\ \ell !\ \varGamma (k-\nu +1)\ \varGamma (k+\alpha +1)\ \varGamma (\alpha +\ell +1)}.\nonumber \\ \end{aligned}$$
(4.45)

Accordingly, Eq. (4.44) can be written in vector form as follows:

$$\begin{aligned}&D^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)\nonumber \\&\quad \simeq \Big [ \varOmega _{\nu }(i,0), \varOmega _{\nu }(i,1), \varOmega _{\nu }(i,2), \ldots , \varOmega _{\nu }(i,N)\Big ]\psi (x),\nonumber \\&\quad i=\lceil \nu \rceil ,\ldots ,N. \end{aligned}$$
(4.46)

Also, according to Lemma 6, we can write

$$\begin{aligned}&D^{\nu }{L}^{(\alpha ,\beta )}_{i}(x) \simeq \Big [0, 0, 0, \ldots , 0\Big ]\psi (x),\nonumber \\&\quad i=0,1,\ldots ,\lceil \nu \rceil -1. \end{aligned}$$
(4.47)

A combination of Eqs. (4.46) and (4.47) leads to the desired result. \(\square \)

4.6 Bernstein operational matrix of fractional derivatives

A function \(f(x)\), square integrable in \([0,1]\), may be expressed in terms of the Bernstein basis [95]. In practice, only the first \((n+1)\) Bernstein polynomials are considered. Hence, if we write

$$\begin{aligned} f(x) \simeq \sum \limits _{j=0}^{n} c_j B_{j,n}(x)= C^\mathrm{T} B(x), \end{aligned}$$
(4.48)

where the Bernstein coefficient vector \(C\) and the Bernstein vector \(B(x)\) are given by

$$\begin{aligned}&C^\mathrm{T}=[c_0,c_1,\ldots ,c_n],\nonumber \\&\quad B(x)=[B_{0,n}(x), B_{1,n}(x),\ldots ,B_{n,n}(x)]^\mathrm{T}, \end{aligned}$$
(4.49)

then

$$\begin{aligned} c_j\!=\!\int _{0}^{1} f(x)\ D_{j,n}(x)\hbox {d}x, \qquad j=0,1,2,\ldots ,n.\nonumber \\ \end{aligned}$$
(4.50)

The authors of [62] derived the explicit representations

$$\begin{aligned} D_{j,n}(x) = \sum _{k=0}^{n} \lambda _{j,k} B_{k,n}(x),\quad j = 0,1, \dots , n, \end{aligned}$$
(4.51)

for the dual basis functions, defined by the coefficients

$$\begin{aligned} \lambda _{j,k}= & {} \frac{(-1)^{j+k}}{ \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}n\\ k\end{array}}\right) } \sum _{i=0}^{min(j,k)} (2i+1) \nonumber \\&\times \left( {\begin{array}{c}n+i+1\\ n-j\end{array}}\right) \left( {\begin{array}{c}n-i\\ n-j\end{array}}\right) \left( {\begin{array}{c}n+i+1\\ n-k\end{array}}\right) \left( {\begin{array}{c}n-i\\ n-k\end{array}}\right) ,\nonumber \\&\qquad \qquad \qquad \qquad j,k = 0, 1, \dots , n. \end{aligned}$$
(4.52)
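
The coefficients (4.52) can be checked directly against the duality relation (3.21), since the Bernstein Gram integrals are known in closed form, \(\int _{0}^{1} B_{i,n}(x)B_{k,n}(x)\hbox {d}x=\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n\\ k\end{array}}\right) \big /\big [(2n+1)\left( {\begin{array}{c}2n\\ i+k\end{array}}\right) \big ]\). A minimal sketch (assuming NumPy; the degree \(n=4\) is an arbitrary choice):

```python
import numpy as np
from math import comb

n = 4

def lam(j, k):
    """Dual-basis coefficient lambda_{j,k} from Eq. (4.52)."""
    s = sum((2*i + 1) * comb(n + i + 1, n - j) * comb(n - i, n - j)
            * comb(n + i + 1, n - k) * comb(n - i, n - k)
            for i in range(min(j, k) + 1))
    return (-1)**(j + k) * s / (comb(n, j) * comb(n, k))

# Gram matrix of the Bernstein basis: int_0^1 B_{i,n} B_{k,n} dx in closed form
G = np.array([[comb(n, i) * comb(n, k) / ((2*n + 1) * comb(2*n, i + k))
               for k in range(n + 1)] for i in range(n + 1)])
L = np.array([[lam(j, k) for k in range(n + 1)] for j in range(n + 1)])

# duality (3.21): int_0^1 B_{i,n} D_{j,n} dx = delta_{ij}
print(np.round(G @ L.T, 10))   # should print the identity matrix
```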

Theorem 7

Let \(B(x)\) be the Bernstein vector defined in Eq. (4.49) and suppose that \(\nu > 0\). Then

$$\begin{aligned} D^\nu B(x)\simeq \mathbf D ^{(\nu )} B(x), \end{aligned}$$
(4.53)

where \( \mathbf D ^{(\nu )}\) is the \((n+1) \times (n+1)\) operational matrix of fractional derivative of order \(\nu \) in the Caputo sense and is defined as follows:

$$\begin{aligned} \mathbf D ^{(\nu )}=\left( \begin{array}{ccccc} \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{0,0,j} &{} \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{0,1,j} &{} \ldots &{} \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{0,n,j} \\ \vdots &{} \vdots &{} \ldots &{} \vdots \\ 0 &{} 0 &{} \ldots &{} 0 \\ \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{i,0,j} &{}\sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{i,1,j} &{} \ldots &{} \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{i,n,j} \\ \vdots &{} \vdots &{}\ldots &{} \vdots \\ \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{n,0,j} &{}\sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{n,1,j} &{} \ldots &{} \sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{n,n,j}\\ \end{array} \right) .\nonumber \\ \end{aligned}$$
(4.54)

Here \(\omega _{i,\ell ,j}\) is given by

$$\begin{aligned} \omega _{i,\ell ,j}= & {} (-1)^{j-i} \left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-i\end{array}}\right) \dfrac{\varGamma (j+1)}{\varGamma (j+1-\nu )}\nonumber \\&\times \sum \limits _{k=0}^{n} \lambda _{\ell ,k} \ \mu _{k,j}, \end{aligned}$$
(4.55)

where \(\lambda _{\ell ,k}\) is given in Eq. (4.52) and

$$\begin{aligned} \mu _{k,j}=\sum \limits _{s=k}^{n} (-1)^{s-k} \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( {\begin{array}{c}n-k\\ s-k\end{array}}\right) \dfrac{1}{j-\nu +s+1}. \end{aligned}$$

Proof

Using Eqs. (2.4), (2.5) and (3.20), we have

$$\begin{aligned}&D^{\nu }B_{i,n}(x)=\sum \limits _{j=i}^{n}(-1)^{j-i}\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-i\end{array}}\right) D^{\nu } x^{j}\nonumber \\&\quad =\sum \limits _{j=\lceil \nu \rceil }^{n}(-1)^{j-i}\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-i\end{array}}\right) \frac{\varGamma (j+1)}{\varGamma (j+1-\nu )} x^{j-\nu },\nonumber \\&\qquad \qquad i=0,\dots ,n. \end{aligned}$$
(4.56)

Now, approximating \(x^{j-\nu }\) by Bernstein polynomials, we obtain

$$\begin{aligned} x^{j-\nu }\simeq \sum \limits _{\ell =0}^{n}b_{\ell j}B_{\ell ,n}(x), \end{aligned}$$
(4.57)

where

$$\begin{aligned} b_{\ell j}= & {} \int _{0}^{1} x^{j-\nu }\ D_{\ell ,n}(x)\hbox {d}x=\sum _{k=0}^{n} \lambda _{\ell ,k} \int _{0}^{1} x^{j-\nu }B_{k,n}(x)\hbox {d}x\\= & {} \sum _{k=0}^{n} \lambda _{\ell ,k}\sum \limits _{s=k}^{n}(-1)^{s-k}\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( {\begin{array}{c}n-k\\ s-k\end{array}}\right) \int _{0}^{1} x^{j-\nu +s}\hbox {d}x\\= & {} \sum _{k=0}^{n} \lambda _{\ell ,k}\sum \limits _{s=k}^{n}(-1)^{s-k}\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( {\begin{array}{c}n-k\\ s-k\end{array}}\right) \frac{1}{(j-\nu +s+1)}\\= & {} \sum \limits _{k=0}^{n} \lambda _{\ell ,k} \mu _{k,j}. \end{aligned}$$

Employing Eqs. (4.56) and (4.57) we get

$$\begin{aligned} D^{\nu }B_{i,n}(x)= & {} \sum \limits _{\ell =0}^{n}\sum \limits _{j=\lceil \nu \rceil }^{n}(-1)^{j-i}\left( {\begin{array}{c}n\\ i\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-i\end{array}}\right) \nonumber \\&\times \frac{\varGamma (j+1)}{\varGamma (j+1-\nu )}b_{\ell j}B_{\ell ,n}(x)\nonumber \\= & {} \sum \limits _{\ell =0}^{n}\bigg (\sum \limits _{j=\lceil \nu \rceil }^{n}\omega _{i,\ell ,j}\bigg )B_{\ell ,n}(x), \end{aligned}$$
(4.58)

where \(\omega _{i,\ell ,j}\) is given in Eq. (4.55). Rewriting Eq. (4.58) in vector form results in

$$\begin{aligned}&D^{\nu }B_{i,n}(x)\nonumber \\&\quad \simeq \Big [\sum \limits _{j=\lceil \nu \rceil }^{n} \omega _{i,0,j}, \sum \limits _{j=\lceil \nu \rceil }^{n} \omega _{i,1,j}, \sum \limits _{j=\lceil \nu \rceil }^{n} \omega _{i,2,j}, \ldots , \sum \limits _{j=\lceil \nu \rceil }^{n} \omega _{i,n,j}\Big ]B(x),\nonumber \\&\qquad i=\lceil \nu \rceil ,\ldots ,n. \end{aligned}$$
(4.59)

This leads to the desired result. \(\square \)

5 Operational matrices of Riemann–Liouville fractional integrals

In this section, we present the operational matrices of Riemann–Liouville fractional integrals for some orthogonal polynomials on finite and infinite intervals.

5.1 SCOM of fractional integration

The main objective of this subsection is to derive an operational matrix of fractional integration for shifted Chebyshev vector \(\varphi (x)\).

Theorem 8

Let \(\varphi (x)\) be the shifted Chebyshev vector defined in Eq. (4.9) and suppose that \(\nu > 0\). Then

$$\begin{aligned} J^\nu \varphi (x)\simeq \mathbf P ^{(\nu )} \varphi (x), \end{aligned}$$
(5.1)

where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) SCOM of order \(\nu \) in the Riemann–Liouville sense and is defined as follows:

$$\begin{aligned} \mathbf P ^{(\nu )}=\left( \begin{array}{ccccccc} \varTheta _{\nu }(0,0)&{}\varTheta _{\nu }(0,1)&{}\cdots &{}\varTheta _{\nu }(0,N)\\ \varTheta _{\nu }(1,0)&{}\varTheta _{\nu }(1,1)&{}\cdots &{}\varTheta _{\nu }(1,N)\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varTheta _{\nu }(i,0)&{}\varTheta _{\nu }(i,1)&{}\cdots &{}\varTheta _{\nu }(i,N)\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varTheta _{\nu }(N,0)&{}\varTheta _{\nu }(N,1)&{}\cdots &{}\varTheta _{\nu }(N,N)\\ \end{array} \right) \end{aligned}$$
(5.2)

and

$$\begin{aligned}&\varTheta _{\nu }(i,j)\\&\quad = \sum \limits _{k=0}^{i}\dfrac{(-1)^{i-k}\ 2i\ L^{\nu }\ (i+k-1)!\ \varGamma (k+\nu +\frac{1}{2}) }{b_j\ \varGamma (k+\frac{1}{2})\ (i-k)!\ \varGamma (k+\nu -j+1)\ \varGamma (k+j+\nu +1)}. \end{aligned}$$

(For the proof, see [81]).

5.2 LOM of fractional integration

The main objective of this subsection is to find the fractional integration of the Laguerre vector in the Riemann–Liouville sense.

Theorem 9

Let \(\emptyset (x)\) be the Laguerre vector and \(\nu > 0\). Then

$$\begin{aligned} J^\nu \emptyset (x)\simeq \mathbf P ^{(\nu )} \emptyset (x), \end{aligned}$$
(5.3)

where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional integration of order \(\nu \) in the Riemann–Liouville sense and is defined as follows:

$$\begin{aligned} \mathbf P ^{(\nu )}=\left( \begin{array}{ccccccc} \mathfrak {I}_{\nu }(0,0)&{}\mathfrak {I}_{\nu }(0,1)&{} \cdots &{}\mathfrak {I}_{\nu }(0,N)\\ \mathfrak {I}_{\nu }(1,0)&{}\mathfrak {I}_{\nu }(1,1)&{} \cdots &{}\mathfrak {I}_{\nu }(1,N)\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \mathfrak {I}_{\nu }(i,0)&{}\mathfrak {I}_{\nu }(i,1)&{} \cdots &{}\mathfrak {I}_{\nu }(i,N)\\ \vdots &{} \vdots &{} \cdots &{} \vdots \\ \mathfrak {I}_{\nu }(N,0)&{}\mathfrak {I}_{\nu }(N,1)&{} \cdots &{}\mathfrak {I}_{\nu }(N,N)\\ \end{array} \right) \end{aligned}$$
(5.4)

where

$$\begin{aligned} \mathfrak {I}_{\nu }(i,j)= \sum \limits _{k=0}^{i}\sum \limits _{r=0}^{j}\dfrac{(-1)^{k+r}\ i!\ r!\varGamma (k+\nu +r+1)}{(i{-}k)!\,k! (j{-}r)!\, (r!)^2 \varGamma (k{+}\nu {+}1)}. \end{aligned}$$

5.3 SJOM of fractional integration

The main objective of this subsection is to present an operational matrix of fractional integration for the shifted Jacobi vector \(\varPhi (x)\).

Theorem 10

Let \(\varPhi (x)\) be the shifted Jacobi vector and \(\nu > 0\). Then

$$\begin{aligned} J^\nu \varPhi (x)\simeq \mathbf P ^{(\nu )} \varPhi (x), \end{aligned}$$
(5.5)

where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional integration of order \(\nu \) in the Riemann–Liouville sense and is defined as follows:

$$\begin{aligned} \mathbf P ^{(\nu )}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \varUpsilon _{\nu }(0,0,\alpha ,\beta )&{}\varUpsilon _{\nu }(0,1,\alpha ,\beta ) &{}\cdots &{}\varUpsilon _{\nu }(0,N,\alpha ,\beta )\\ \varUpsilon _{\nu }(1,0,\alpha ,\beta )&{}\varUpsilon _{\nu }(1,1,\alpha ,\beta ) &{}\cdots &{}\varUpsilon _{\nu }(1,N,\alpha ,\beta )\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varUpsilon _{\nu }(i,0,\alpha ,\beta )&{}\varUpsilon _{\nu }(i,1,\alpha ,\beta )&{}\cdots &{}\varUpsilon _{\nu }(i,N,\alpha ,\beta )\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varUpsilon _{\nu }(N,0,\alpha ,\beta )&{}\varUpsilon _{\nu }(N,1,\alpha ,\beta ) &{}\cdots &{}\varUpsilon _{\nu }(N,N,\alpha ,\beta )\\ \end{array} \right) \nonumber \\ \end{aligned}$$
(5.6)

and

$$\begin{aligned}&\varUpsilon _{\nu }(i,j,\alpha ,\beta ) \nonumber \\&\quad = \sum \limits _{k=0}^{i}\dfrac{(-1)^{i-k}\ \varGamma {(i+\beta +1)}\ \varGamma {(i+k+\alpha +\beta +1)}}{\varGamma {(k+\beta +1)}\ \varGamma {(i+\alpha +\beta +1)}(i-k)!\ \varGamma (k+\nu +1)} \nonumber \\&\qquad \times \sum \limits _{f=0}^{j}\dfrac{(-1)^{j-f}\ \varGamma {(j+f+\alpha +\beta +1)}\ \varGamma {(\alpha +1)} }{\varGamma {(j+\alpha +1)}\ \varGamma {(f+\beta +1)}(j-f)!\ f! } \nonumber \\&\qquad \times \dfrac{\varGamma {(f+k+\nu +\beta +1)}\ (2j+\alpha +\beta +1)\ j!\ L^{\nu }}{\varGamma (f+k+\alpha +\beta +\nu +2)}.\nonumber \\ \end{aligned}$$
(5.7)

Proof

For the proof see [79]. \(\square \)

Remark 1

It is worthwhile to mention here that the operational matrices of fractional integration, in the Riemann–Liouville sense, for shifted Legendre and shifted Chebyshev polynomials can be obtained as special cases of the operational matrix of fractional integration for shifted Jacobi polynomials.

5.4 MGLOM of fractional integration

The main objective of this subsection is to derive an operational matrix of fractional integration for the modified generalized Laguerre vector.

Theorem 11

Let \(\psi (x)\) be the modified generalized Laguerre vector and \(\nu > 0\). Then

$$\begin{aligned} J^\nu \psi (x)\simeq \mathbf P ^{(\nu )} \psi (x), \end{aligned}$$
(5.8)

where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional integration of order \(\nu \) in the Riemann–Liouville sense and is defined as follows:

$$\begin{aligned} \mathbf P ^{(\nu )}=\left( \begin{array}{ccccccc} \varPsi _{\nu }(0,0)&{}\varPsi _{\nu }(0,1)&{} \cdots &{}\varPsi _{\nu }(0,N)\\ \varPsi _{\nu }(1,0)&{}\varPsi _{\nu }(1,1)&{} \cdots &{}\varPsi _{\nu }(1,N)\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varPsi _{\nu }(i,0)&{}\varPsi _{\nu }(i,1)&{} \cdots &{}\varPsi _{\nu }(i,N)\\ \vdots &{} \vdots &{}\cdots &{} \vdots \\ \varPsi _{\nu }(N,0)&{}\varPsi _{\nu }(N,1)&{} \cdots &{}\varPsi _{\nu }(N,N)\\ \end{array} \right) \end{aligned}$$
(5.9)

where

$$\begin{aligned}&\varPsi _{\nu }(i,j)= \sum \limits _{k=0}^{i}\sum \limits _{r=0}^{j}\nonumber \\&\quad \times \dfrac{(-1)^{k+r}\ \beta ^{(j-\nu -1)} j!\ \varGamma (i+\alpha +1)\ \varGamma (k\!+\!\nu \!+\!\alpha \!+\!r\!+\!1)}{ (i-k)!\ (j-r)!\ r!\ \varGamma (k+\nu +1)\ \varGamma (k\!+\!\alpha \!+\!1) \varGamma (\alpha +r+1)}. \end{aligned}$$

Proof

Using the analytic form (3.16) of the modified generalized Laguerre polynomials \(L^{(\alpha ,\beta )}_{i}{(x)}\) of degree \(i\), together with (2.3), we obtain

$$\begin{aligned}&J^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)=\sum \limits _{k=0}^{i } (-1)^{k}\ \frac{ \beta ^{k} \varGamma (i+\alpha +1)\ }{(i-k)!\ k!\ \varGamma (k+\alpha +1)}\ J^\nu x^k\nonumber \\&\quad =\sum \limits _{k=0}^{i}(-1)^{k}\ \frac{\beta ^{k} \varGamma (i+\alpha +1) }{(i-k)!\ \varGamma (k+\nu +1)\ \varGamma (k+\alpha +1)}\ x^{k+\nu },\nonumber \\&\qquad i=0,1,\ldots ,N. \end{aligned}$$
(5.10)

Now, approximating \(x^{k+\nu }\) by \(N+1\) terms of the modified generalized Laguerre series, we have

$$\begin{aligned} x^{k+\nu }= \sum \limits _{j=0}^{N}c_{j}L^{(\alpha ,\beta )}_{j}{(x)}, \end{aligned}$$
(5.11)

where \(c_{j}\) is given from (4.35) with \(u(x)=x^{k+\nu }\), that is

$$\begin{aligned} c_{j}= & {} \sum \limits _{r=0}^{j}(-1)^{r}\ \frac{ \beta ^{j-k-\nu -1}j!\ \varGamma (k+\nu +\alpha +r+1)\ }{(j-r)!\ r!\ \varGamma (r+\alpha +1)} ,\nonumber \\&j= 0,1,\ldots ,N. \end{aligned}$$
(5.12)

By virtue of (5.10) and (5.11), we get

$$\begin{aligned}&J^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)\nonumber \\&\quad = \sum \limits _{j=0}^{N}\varPsi _{\nu }(i,j)L^{(\alpha ,\beta )}_{j}(x) ,\quad \! i\!=\!0,1,\ldots ,N, \end{aligned}$$
(5.13)

where

$$\begin{aligned}&\varPsi _{\nu }(i,j)= \sum \limits _{k=0}^{i}\sum \limits _{r=0}^{j}\nonumber \\&\quad \times \dfrac{(-1)^{k+r}\ \beta ^{(j-\nu -1)} j!\ \varGamma (i+\alpha +1)\ \varGamma (k+\nu +\alpha +r+1)}{ (i-k)!\ (j-r)!\ r!\ \varGamma (k+\nu +1)\ \varGamma (k+\alpha +1)\ \varGamma (\alpha +r+1)},\nonumber \\&j=0,1,\ldots ,N. \end{aligned}$$
(5.14)

Accordingly, Eq. (5.13) can be written in vector form as follows:

$$\begin{aligned}&J^{\nu }{L}^{(\alpha ,\beta )}_{i}(x)\nonumber \\&\quad \simeq \Big [ \varPsi _{\nu }(i,0), \varPsi _{\nu }(i,1), \varPsi _{\nu }(i,2), \ldots , \varPsi _{\nu }(i,N)\Big ] \psi (x),\nonumber \\&\qquad i=0,1,\ldots ,N. \end{aligned}$$
(5.15)

Equation (5.15) leads to the desired result. \(\square \)
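For readers who wish to reproduce the matrix numerically, a minimal Python sketch is given below; it codes the entry \(\varPsi _{\nu }(i,j)\) exactly as printed in (5.14) and assembles \(\mathbf P ^{(\nu )}\) of Theorem 11. The function names are ours, and the printed formula is assumed as stated.

```python
import numpy as np
from math import gamma, factorial

def Psi(nu, i, j, alpha, beta):
    # Entry Psi_nu(i, j) of the MGLOM of fractional integration,
    # coded directly from (5.14); the printed formula is assumed as is.
    s = 0.0
    for k in range(i + 1):
        for r in range(j + 1):
            s += ((-1) ** (k + r) * beta ** (j - nu - 1) * factorial(j)
                  * gamma(i + alpha + 1) * gamma(k + nu + alpha + r + 1)
                  / (factorial(i - k) * factorial(j - r) * factorial(r)
                     * gamma(k + nu + 1) * gamma(k + alpha + 1)
                     * gamma(alpha + r + 1)))
    return s

def P_matrix(nu, N, alpha, beta):
    # (N+1) x (N+1) operational matrix P^(nu) of Theorem 11.
    return np.array([[Psi(nu, i, j, alpha, beta)
                      for j in range(N + 1)] for i in range(N + 1)])
```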

6 Spectral methods for FDEs

In this section, we introduce different ways to approximate linear FDEs using the tau method, based on the presented operational matrices of fractional differentiation and integration, so that they can be implemented efficiently. We also present the collocation method, based on the presented operational matrices, for solving nonlinear FDEs on bounded and unbounded domains.

6.1 Applications of the operational matrix of fractional derivatives

In this subsection, in order to show the fundamental importance of the operational matrices of fractional derivatives, we apply the spectral tau method based on these operational matrices to solve multi-term FDEs.

6.1.1 Shifted Jacobi tau operational matrix formulation method

Consider the linear FDE

$$\begin{aligned} D^{\nu }u(x)= & {} \sum \limits _{j=1}^{k}\gamma _j D^{\mu _j}u(x)+\gamma _{k+1}u(x)+g(x),\nonumber \\&\quad x\in I=[0,L], \end{aligned}$$
(6.1)

with initial conditions

$$\begin{aligned} u^{(i)}(0)=d_i, \quad \quad \quad i=0,1,\ldots ,m-1, \end{aligned}$$
(6.2)

where \(\gamma _j\ (j=1,\ldots ,k+1) \) are real constant coefficients and \(m-1<\nu \le m,\ 0<\mu _1 <\mu _2 <\cdots <\mu _k <\nu \). Moreover, \(D^{\nu } u(x)\equiv u^{(\nu )}(x)\) denotes the Caputo fractional derivative of order \(\nu \) for \(u(x)\), the values of \(d_i\ (i=0, \ldots ,m-1)\) describe the initial state of \(u(x)\), and \(g(x)\) is a given source function.

The existence, uniqueness and continuous dependence of the solution of this problem are discussed in [96]. In order to solve the initial value problem (6.1)–(6.2), we approximate \(u(x)\) and \(g(x)\) by means of the shifted Jacobi polynomials as

$$\begin{aligned}&u_{N}(x)\simeq \sum \limits _{i=0}^{N}c_i P^{(\alpha ,\beta )}_{L,i}(x)=C^\mathrm{T} \varPhi (x), \end{aligned}$$
(6.3)
$$\begin{aligned}&g(x)\simeq \sum \limits _{i=0}^{N}g_i P^{(\alpha ,\beta )}_{L,i}(x)=G^\mathrm{T} \varPhi (x), \end{aligned}$$
(6.4)

where the vector \(G=[g_0,\ldots , g_N]^\mathrm{T}\) is known and \(C=[c_0,\ldots ,c_N]^\mathrm{T}\) is an unknown vector.

Using Theorem 4 (relation (4.20)) together with (6.3) yields

$$\begin{aligned}&D^\nu u_{N}(x)\simeq C^\mathrm{T} D^\nu \varPhi (x)= C^\mathrm{T} \mathbf D ^{(\nu )} \varPhi (x),\qquad \qquad \qquad \quad \end{aligned}$$
(6.5)
$$\begin{aligned}&D^{\mu _j}u_{N}(x)\simeq C^\mathrm{T} D^{\mu _j} \varPhi (x)= C^\mathrm{T} \mathbf D ^{(\mu _j)} \varPhi (x),\nonumber \\&\quad j=1,\ldots ,k. \end{aligned}$$
(6.6)

Employing Eqs. (6.3)–(6.6), the residual \(R_N(x)\) for Eq. (6.1) can be written as

$$\begin{aligned}&R_N(x)\nonumber \\&\quad =\,\left( C^\mathrm{T} \mathbf D ^{(\nu )}-C^\mathrm{T} \sum \limits _{j=1}^{k}\gamma _j\mathbf D ^{(\mu _j)}-\gamma _{k+1}C^\mathrm{T} -G^\mathrm{T}\right) \varPhi (x).\nonumber \\ \end{aligned}$$
(6.7)

As in a typical tau method (see [79]), we generate \(N-m+1 \) linear equations by applying

$$\begin{aligned}&\langle R_N(x),P^{(\alpha ,\beta )}_{L,j}(x)\rangle = \int _{0}^{L}R_N(x)P^{(\alpha ,\beta )}_{L,j}(x)\hbox {d}x=0\nonumber \\&\quad j=0,1,\ldots ,N-m. \end{aligned}$$
(6.8)

The substitution of Eqs. (4.17) and (6.3) into Eq. (6.2) yields

$$\begin{aligned} u^{(j)}(0)=C^\mathrm{T} \mathbf D ^{(j)}\varPhi (0)=d_j, \quad \! j=0,1,\ldots ,m-1.\nonumber \\ \end{aligned}$$
(6.9)

Equations (6.8) and (6.9) generate \(N-m+1\) and \(m\) linear equations, respectively. These equations can be solved for the unknown coefficients of the vector \(C\). Consequently, \(u_{N}(x)\) given in Eq. (6.3) can be obtained, which approximates the solution of Eq. (6.1) subject to the initial conditions (6.2).
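For concreteness, the following Python sketch assembles and solves the tau system just described; it assumes that the operational matrices \(\mathbf D ^{(\nu )}\), \(\mathbf D ^{(\mu _j)}\) and \(\mathbf D ^{(i)}\) and the coefficient vector \(G\) have already been computed, and that the test functions are orthogonal under the inner product used in (6.8), so that the tau conditions reduce to equations on the expansion coefficients. All names are ours; the same assembly carries over verbatim to the Laguerre-basis tau method of Sect. 6.1.2.

```python
import numpy as np

def tau_solve(D_nu, D_mu_list, gam, gam_last, G, D_int_list, phi0, d):
    # Sketch of the tau system of Sect. 6.1.1 (names are ours).
    #   D_nu       : operational matrix D^(nu) of the highest derivative
    #   D_mu_list  : matrices D^(mu_j), j = 1..k;  gam = [gamma_1..gamma_k]
    #   gam_last   : gamma_{k+1};  G = expansion coefficients of g(x)
    #   D_int_list : integer-order matrices D^(i), i = 0..m-1 (D^(0) = I)
    #   phi0       : basis vector Phi(0);  d = [d_0..d_{m-1}]
    Np1, m = D_nu.shape[0], len(d)
    A = D_nu - sum(g * Dm for g, Dm in zip(gam, D_mu_list)) - gam_last * np.eye(Np1)
    # Orthogonality turns <R_N, P_j> = 0, j = 0..N-m, into (A^T C)_j = G_j.
    rows = [A.T[j] for j in range(Np1 - m)]
    rhs = [G[j] for j in range(Np1 - m)]
    for i in range(m):                       # initial conditions (6.9)
        rows.append(D_int_list[i] @ phi0)    # row enforcing C^T D^(i) Phi(0) = d_i
        rhs.append(d[i])
    return np.linalg.solve(np.array(rows), np.array(rhs))   # coefficients C
```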

Remark 2

To solve Eq. (6.1) subject to the following boundary conditions (when \(m\) is even):

$$\begin{aligned} u^{(i)}(0)=a_i, \quad \! u^{(i)}(L)=b_i, \quad \! i=0,1,\ldots ,\frac{m}{2}-1.\nonumber \\ \end{aligned}$$
(6.10)

We apply the same technique as described above, but the \(m\) linear equations resulting from (6.9) are replaced by

$$\begin{aligned}&u^{(i)}(0)=C^\mathrm{T} \mathbf D ^{(i)}\varPhi (0)=a_i, \nonumber \\&u^{(i)}(L)=C^\mathrm{T} \mathbf D ^{(i)}\varPhi (L)=b_i,\nonumber \\&i=0,1,\ldots ,\frac{m}{2}-1. \end{aligned}$$
(6.11)

Equations (6.8) and (6.11) generate a system of \(N+1\) linear equations. This system can be solved to determine the unknown coefficients of the vector \(C\).

6.1.2 Modified generalized Laguerre tau operational matrix formulation method

Consider the linear FDE

$$\begin{aligned}&D^{\nu }u(x)=\sum \limits _{j=1}^{k}\gamma _j D^{\zeta _j}u(x)+\gamma _{k+1}u(x)+g(x),\nonumber \\&\quad \text {in} \ \varLambda =(0,\infty ), \end{aligned}$$
(6.12)

with initial conditions

$$\begin{aligned} u^{(i)}(0)=d_i, \quad \quad \quad i=0,\ldots ,m-1, \end{aligned}$$
(6.13)

where \(\gamma _j\ (j=1,\ldots ,k+1)\) are real constant coefficients and \(m-1<\nu \le m,\ 0<\zeta _1 <\zeta _2 <\cdots <\zeta _k <\nu \). Moreover, \(D^{\nu } u(x)\equiv u^{(\nu )}(x)\) denotes the Caputo fractional derivative of order \(\nu \) for \(u(x)\), the values of \(d_{i}\ (i=0, \ldots ,m-1)\) describe the initial state of \(u(x)\), and \(g(x)\) is a given source function.

Let \(w^{(\alpha ,\beta )}(x)=x^{\alpha }\hbox {e}^{-\beta x}\). Then, we denote by \(L_{w^{(\alpha ,\beta )}}^{2}(\varLambda )(\varLambda :=(0,\infty ))\) the weighted \(L^2\) space with inner product:

$$\begin{aligned} (u,v)_{w^{(\alpha ,\beta )}} = \int _{\varLambda } w^{(\alpha ,\beta )}(x) u(x) v(x) \hbox {d}x, \end{aligned}$$

and the associated norm \(\Vert u\Vert _{w^{(\alpha ,\beta )}}=(u,u)_{w^{(\alpha ,\beta )}}^{\frac{1}{2}}\). It is known that \(\{L_{i}^{(\alpha ,\beta )}(x): i \ge 0 \}\) forms a complete orthogonal system in \(L_{w^{(\alpha ,\beta )}}^2(\varLambda )\).
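As an aside, the weighted inner product above is convenient to evaluate numerically with generalized Gauss–Laguerre quadrature after the substitution \(t=\beta x\); a minimal sketch, assuming SciPy's `roots_genlaguerre` rule and our own function name, reads:

```python
import numpy as np
from scipy.special import roots_genlaguerre

def inner_product(u, v, alpha, beta, n=64):
    # Approximates (u, v)_{w^(alpha,beta)} = int_0^inf x^alpha e^{-beta x} u(x) v(x) dx.
    # With t = beta*x this becomes beta^{-(alpha+1)} * int_0^inf t^alpha e^{-t} u v dt,
    # which an n-point generalized Gauss-Laguerre rule handles directly.
    t, w = roots_genlaguerre(n, alpha)
    x = t / beta
    return beta ** (-(alpha + 1)) * np.sum(w * u(x) * v(x))
```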

To solve the fractional initial value problem (6.12)–(6.13), we approximate \(u(x)\) and \(g(x)\) by modified generalized Laguerre polynomials as

$$\begin{aligned}&u(x)\simeq \sum \limits _{i=0}^{N}c_i L^{(\alpha ,\beta )}_{i}(x)=C^\mathrm{T} \psi (x), \end{aligned}$$
(6.14)
$$\begin{aligned}&g(x)\simeq \sum \limits _{i=0}^{N}g_i L^{(\alpha ,\beta )}_{i}(x)=G^\mathrm{T} \psi (x), \end{aligned}$$
(6.15)

where the vector \(G=[g_0,\ldots , g_N]^\mathrm{T}\) is known and \(C=[c_0,\ldots ,c_N]^\mathrm{T}\) is an unknown vector.

By using Theorem 6 (relation (4.39)) and Eq. (6.14), we have

$$\begin{aligned}&D^\nu u(x)\simeq C^\mathrm{T} D^\nu \psi (x) = C^\mathrm{T} \mathbf D ^{(\nu )} \psi (x), \end{aligned}$$
(6.16)
$$\begin{aligned}&D^{\zeta _j}u(x)\simeq C^\mathrm{T} D^{\zeta _j} \psi (x) = C^\mathrm{T} \mathbf D ^{(\zeta _j)} \psi (x),\nonumber \\&\quad j=1,\ldots ,k. \end{aligned}$$
(6.17)

After employing Eqs. (6.14)–(6.17), the residual \(R_N(x)\) for Eq. (6.12) can be written as

$$\begin{aligned}&R_N(x)\nonumber \\&\quad =\left( C^\mathrm{T} \mathbf D ^{(\nu )}-C^\mathrm{T} \sum \limits _{j=1}^{k}\gamma _j\mathbf D ^{(\zeta _j)}\!-\!\gamma _{k+1}C^\mathrm{T} \!-\!G^\mathrm{T}\right) \psi (x).\nonumber \\ \end{aligned}$$
(6.18)

As in a typical tau method (see [67, 69]), we generate \(N-m+1\) linear equations by applying

$$\begin{aligned}&\langle R_N(x),L^{(\alpha ,\beta )}_{j}(x)\rangle \!=\!\int _{\varLambda } w^{(\alpha ,\beta )} R_N(x)L^{(\alpha ,\beta )}_{j}(x)\,\hbox {d}x\!=\!0,\nonumber \\&\quad \ j=0,1,\ldots ,N-m. \end{aligned}$$
(6.19)

Also, by substituting Eq. (6.14) into Eq. (6.13), we get

$$\begin{aligned} u^{(i)}(0)=C^\mathrm{T} \mathbf D ^{(i)}\psi (0)=d_i, \quad i=0,1,\ldots ,m-1,\nonumber \\ \end{aligned}$$
(6.20)

Equations (6.19) and (6.20) generate \(N-m+1\) and \(m\) linear equations, respectively. These linear equations can be solved for the unknown coefficients of the vector \(C\). Consequently, \(u(x)\) given in Eq. (6.14) can be calculated, which gives the approximate solution of the initial value problem (6.12)–(6.13).

6.2 Applications of the operational matrix of fractional integration

The proposed multi-order FDE is integrated \(\nu \) times in the Riemann–Liouville sense, where \(\nu \) is the highest fractional order, and use is made of the formula relating the expansion coefficients of the fractional integrals appearing in this integrated form to the orthogonal polynomials themselves. In this section, we present the tau method, based on the SJOM and MGLOM of fractional integration, for solving FDEs on bounded and unbounded intervals, respectively.

6.2.1 Tau method based on SJOM of fractional integration

In order to show the fundamental importance of SJOM of fractional integration, we apply it to solve the following multi-order FDE:

$$\begin{aligned}&D^{\nu }u(x)=\sum \limits _{i=1}^{k}\gamma _i D^{\beta _i}u(x)+\gamma _{k+1}u(x)+f(x),\nonumber \\&\quad \text {in} \ I=(0,L), \end{aligned}$$
(6.21)

with initial conditions

$$\begin{aligned} u^{(i)}(0)=d_i, \quad \quad \quad i=0,\ldots ,m-1, \end{aligned}$$
(6.22)

where \(\gamma _i\ (i=1,2,\ldots ,k+1)\) are real constant coefficients and \(m-1<\nu \le m,\ 0<\beta _1 <\beta _2 <\cdots <\beta _k <\nu \).

Applying the Riemann–Liouville integral of order \(\nu \) to (6.21) and using (2.6), we obtain the integrated form of (6.21):

$$\begin{aligned}&u(x)-\sum \limits _{j=0}^{m-1}u^{(j)}(0^+)\frac{x^j}{j!}\nonumber \\&\quad =\sum \limits _{i=1}^{k}\gamma _i I^{\nu -\beta _i}\Big [u(x)-\sum \limits _{j=0}^{m_i-1}u^{(j)}(0^+)\frac{x^j}{j!}\Big ]+\gamma _{k+1}I^{\nu }u(x) \nonumber \\&\qquad +\,I^{\nu }f(x), \quad \ u^{(i)}(0)=d_i,\quad \ i=0,\ldots ,m-1,\nonumber \\ \end{aligned}$$
(6.23)

where \(m_i-1<\beta _i\le m_i,\ m_i\in N\). This leads to

$$\begin{aligned}&u(x)=\sum \limits _{i=1}^{k}\gamma _i I^{\nu -\beta _i}u(x)+\gamma _{k+1}I^{\nu }u(x)+g(x),\nonumber \\&u^{(i)}(0)=d_i,\quad \quad i=0,\ldots ,m-1, \end{aligned}$$
(6.24)

where

$$\begin{aligned} g(x)= & {} I^{\nu }f(x)+\sum \limits _{j=0}^{m-1}d_j\dfrac{x^j}{j!}\\&+\sum \limits _{i=1}^{k}\gamma _i I^{\nu -\beta _i}\left( \sum \limits _{j=0}^{m_i-1}d_j\dfrac{x^j}{j!}\right) . \end{aligned}$$

In order to use the tau method with SJOM for solving the fully integrated problem (6.24) with initial conditions (6.22), we approximate \(u(x)\) and \(g(x)\) by means of the shifted Jacobi polynomials:

$$\begin{aligned}&u_N(x)\simeq \sum \limits _{i=0}^{N}c_i P_{L,i}^{(\alpha ,\beta )}(x)=C^\mathrm{T} \varPhi (x), \end{aligned}$$
(6.25)
$$\begin{aligned}&g(x)\simeq \sum \limits _{i=0}^{N}g_i P_{L,i}^{(\alpha ,\beta )}(x)=G^\mathrm{T} \varPhi (x), \end{aligned}$$
(6.26)

where the vector \(G=[g_0, g_1, \ldots , g_N]^\mathrm{T}\) is given but \(C=[c_0, c_1, \ldots ,c_N]^\mathrm{T}\) is an unknown vector.

Now, the Riemann–Liouville integral of orders \(\nu \) and \( \nu -\beta _j \) of the approximate solution (6.25), after making use of Theorem 10 (relation (5.5)), can be written as

$$\begin{aligned} I^\nu u_N(x)\simeq C^\mathrm{T} I^\nu \varPhi (x)\simeq C^\mathrm{T} \mathbf P ^{(\nu )} \varPhi (x), \end{aligned}$$
(6.27)

and

$$\begin{aligned} I^{\nu -\beta _j}u_N(x)\simeq & {} C^\mathrm{T} I^{\nu -\beta _j} \varPhi (x)\simeq C^\mathrm{T} \mathbf P ^{(\nu -\beta _j)} \varPhi (x),\nonumber \\&\quad j=1,\ldots ,k, \end{aligned}$$
(6.28)

respectively, where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional integration of order \(\nu \).

After employing Eqs. (6.25)–(6.28), the residual \(R_N(x)\) of Eq. (6.24) can be written as

$$\begin{aligned}&R_N(x)\nonumber \\&\quad =\left( C^\mathrm{T} -C^\mathrm{T} \sum \limits _{j=1}^{k}\gamma _j\mathbf P ^{(\nu -\beta _j)}-\gamma _{k+1}C^\mathrm{T}\mathbf P ^{(\nu )} -G^\mathrm{T}\right) \varPhi (x).\nonumber \\ \end{aligned}$$
(6.29)

As in a typical tau method (see [67, 69]), we generate \( N-m+1 \) linear algebraic equations by applying

$$\begin{aligned} \langle R_N(x),P_{L,j}^{(\alpha ,\beta )}(x)\rangle= & {} \int _{0}^{L} R_N(x)P_{L,j}^{(\alpha ,\beta )}(x)\hbox {d}x=0,\nonumber \\&j=0,1,\ldots ,N-m. \end{aligned}$$
(6.30)

Also, substituting Eqs. (3.2) and (6.25) into Eq. (6.22) yields

$$\begin{aligned} u^{(i)}(0)= & {} \sum \limits _{j=0}^{N}c_j D^{(i)}P_{L,j}^{(\alpha ,\beta )}(0)=d_i,\nonumber \\&i=0,1,\ldots ,m-1. \end{aligned}$$
(6.31)

Equations (6.30) and (6.31) generate \(N-m+1\) and \(m\) linear equations, respectively. These linear equations can be solved for the unknown coefficients of the vector \(C\). Consequently, \(u_N(x)\) given in Eq. (6.25) can be calculated, which approximates the solution of Eq. (6.21) subject to the initial conditions (6.22).
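A short Python sketch of the resulting algebraic system may be helpful; it only forms the coefficient matrix of the integrated-form residual (6.29), the remaining rows being supplied by the initial conditions (6.31) exactly as in the sketch of Sect. 6.1.1. The naming is ours, the matrices \(\mathbf P ^{(\nu )}\) and \(\mathbf P ^{(\nu -\beta _j)}\) are assumed to be computed from (5.6)–(5.7), and the same assembly is reused for the Laguerre basis of Sect. 6.2.2.

```python
import numpy as np

def integrated_tau_matrix(P_nu, P_sub_list, gam, gam_last):
    # Coefficient matrix of the integrated-form residual (6.29):
    #   A = I - sum_j gamma_j P^(nu - beta_j) - gamma_{k+1} P^(nu).
    # The tau conditions then read (A^T C)_j = G_j for j = 0..N-m, and the
    # system is completed with the m initial conditions (6.31).
    Np1 = P_nu.shape[0]
    return (np.eye(Np1)
            - sum(g * P for g, P in zip(gam, P_sub_list))
            - gam_last * P_nu)
```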

6.2.2 Tau method based on MGLOM of fractional integration

The modified generalized Laguerre tau method based on the operational matrix is proposed for the numerical solution of FDEs. The basic idea of this technique is as follows: (i) The FDE is converted to a fully integrated form via fractional integration in the Riemann–Liouville sense. (ii) Subsequently, the solution and the source term of the integrated form are approximated as linear combinations of modified generalized Laguerre polynomials. (iii) Finally, the integrated form is converted into a system of algebraic equations by introducing the operational matrix of fractional integration of the modified generalized Laguerre polynomials.

Consider the following multi-order FDE:

$$\begin{aligned} D^{\nu }u(x)= & {} \sum \limits _{i=1}^{k}\gamma _i D^{\beta _i}u(x)+\gamma _{k+1}u(x)+f(x),\nonumber \\&\text {in} \ \varLambda =(0,\infty ), \end{aligned}$$
(6.32)

with initial conditions

$$\begin{aligned} u^{(i)}(0)=d_i, \quad \quad \quad i=0,\ldots ,m-1. \end{aligned}$$
(6.33)

Applying the Riemann–Liouville integral of order \(\nu \) to (6.32) and making use of (2.6), we get the integrated form of (6.32):

$$\begin{aligned}&u(x)-\sum \limits _{j=0}^{m-1}u^{(j)}(0^+)\frac{x^j}{j!}\nonumber \\&\quad = \sum \limits _{i=1}^{k}\gamma _i J^{\nu -\beta _i}\left[ u(x)\!-\!\sum \limits _{j=0}^{m_i-1}u^{(j)}(0^+)\frac{x^j}{j!}\right] \!+\!\gamma _{k+1}J^{\nu }u(x)\nonumber \\&\qquad +\,J^{\nu }f(x),\quad \ u^{(i)}(0)\!=\!d_i,\quad \ i\!=\!0,\ldots ,m-1,\qquad \end{aligned}$$
(6.34)

where \(m_i-1<\beta _i\le m_i,\ m_i\in N.\) This implies that

$$\begin{aligned}&u(x)=\sum \limits _{i=1}^{k}\gamma _i J^{\nu -\beta _i}u(x)+\gamma _{k+1}J^{\nu }u(x)+g(x),\nonumber \\&u^{(i)}(0)=d_i,\quad \quad i=0,\ldots ,m-1, \end{aligned}$$
(6.35)

where

$$\begin{aligned} g(x)= & {} J^{\nu }f(x)\\&+\sum \limits _{j=0}^{m-1}d_j\dfrac{x^j}{j!}+\sum \limits _{i=1}^{k}\gamma _i J^{\nu -\beta _i}\left( \sum \limits _{j=0}^{m_i-1}d_j\dfrac{x^j}{j!}\right) . \end{aligned}$$

Let us express the approximate solution \(u(x)\) and \(g(x)\) in terms of the modified generalized Laguerre polynomials

$$\begin{aligned}&u_N(x)\simeq \sum \limits _{i=0}^{N}c_i L^{(\alpha ,\beta )}_{i}(x)=C^\mathrm{T} \psi (x), \end{aligned}$$
(6.36)
$$\begin{aligned}&g(x)\simeq \sum \limits _{i=0}^{N}g_i L^{(\alpha ,\beta )}_{i}(x)=G^\mathrm{T} \psi (x), \end{aligned}$$
(6.37)

where the vector \(G=[g_0,\ldots , g_N]^\mathrm{T}\) is given but \(C=[c_0,\ldots ,c_N]^\mathrm{T}\) is an unknown vector.

The Riemann–Liouville integral of orders \(\nu \) and \(\nu -\beta _j \) of the approximate solution (6.36), after employing Theorem 11 (relation (5.8)), can be written as

$$\begin{aligned} J^\nu u_N(x)\simeq C^\mathrm{T} J^\nu \psi (x)\simeq C^\mathrm{T} \mathbf P ^{(\nu )} \psi (x), \end{aligned}$$
(6.38)

and

$$\begin{aligned}&J^{\nu -\beta _j}u_N(x)\nonumber \\&\quad \simeq C^\mathrm{T} J^{\nu -\beta _j} \psi (x)\simeq C^\mathrm{T} \mathbf P ^{(\nu -\beta _j)} \psi (x),\nonumber \\&\quad \quad j=1,\ldots ,k, \end{aligned}$$
(6.39)

respectively, where \( \mathbf P ^{(\nu )}\) is the \((N+1) \times (N+1)\) operational matrix of fractional integration of order \(\nu \). Employing Eqs. (6.36)–(6.39), the residual \(R_N(x)\) of Eq. (6.35) can be written as

$$\begin{aligned}&R_N(x)\nonumber \\&\quad =\left( C^\mathrm{T} -C^\mathrm{T} \sum \limits _{j=1}^{k}\gamma _j\mathbf P ^{(\nu -\beta _j)}-\gamma _{k+1}C^\mathrm{T}\mathbf P ^{(\nu )} -G^\mathrm{T}\right) \psi (x).\nonumber \\ \end{aligned}$$
(6.40)

As in a typical tau method, we generate \( N-m+1 \) linear algebraic equations by applying

$$\begin{aligned}&\langle R_N(x),L^{(\alpha ,\beta )}_{j}(x)\rangle \nonumber \\&\quad =\int _{\varLambda }R_N(x)w^{(\alpha ,\beta )}(x)L^{(\alpha ,\beta )}_{j}(x)\hbox {d}x=0,\nonumber \\&\quad j=0,1,\ldots ,N-m. \end{aligned}$$
(6.41)

Also, by substituting Eqs. (4.35) and (6.36) in Eq. (6.33), we get

$$\begin{aligned} u^{(i)}(0)= & {} C^\mathrm{T} \mathbf D ^{(i)}\psi (0)=d_i, \quad \quad \ i=0,1,\ldots ,m-1.\nonumber \\ \end{aligned}$$
(6.42)

Equations (6.41) and (6.42) generate \(N-m+1\) and \(m\) linear equations, respectively. These linear equations can be solved for the unknown coefficients of the vector \(C\). Consequently, \(u_N(x)\) given in Eq. (6.36) can be calculated, which approximates the solution of Eq. (6.32) subject to the initial conditions (6.33).

6.3 Collocation method for nonlinear FDEs in finite interval

Here, we apply the collocation method, based on the SJOM and BOM of fractional derivatives, to nonlinear FDEs in a finite interval subject to initial and boundary conditions.

6.3.1 Collocation method based on SJOM

In order to present the implementation of the Jacobi collocation method based on SJOM of fractional derivative, we consider the nonlinear FDE

$$\begin{aligned} D^\nu u(x)=F(x,u(x), D^{\mu _1}u(x),\ldots ,D^{\mu _k}u(x)),\nonumber \\ \end{aligned}$$
(6.43)

with initial conditions (6.2), where \(F\) can be nonlinear in general.

In order to use the SJOM for this problem, we first approximate \(u(x), D^\nu u(x)\) and \(D^{\mu _j}u(x)\ (j=1,\ldots ,k)\) as in Eqs. (6.3), (6.5) and (6.6), respectively. By substituting these equations in Eq. (6.43), we get

$$\begin{aligned} C^\mathrm{T} \mathbf D ^{(\nu )}\varPhi (x)\simeq & {} F(x,C^\mathrm{T} \varPhi (x),C^\mathrm{T} \mathbf D ^{(\mu _1)}\varPhi (x),\ldots ,\nonumber \\&C^\mathrm{T} \mathbf D ^{(\mu _k)}\varPhi (x)). \end{aligned}$$
(6.44)

Also, by substituting Eqs. (6.3) and (4.17) in Eq. (6.2), we obtain

$$\begin{aligned} u^{(i)}(0)= & {} C^\mathrm{T} \mathbf D ^{(i)}\varPhi (0)=d_i, \quad \quad i=0,1,\ldots ,m-1.\nonumber \\ \end{aligned}$$
(6.45)

To find the approximate solution \(u_{N}(x)\), we first collocate Eq. (6.44) at \( N-m+1 \) points. We choose the \(N-m+1\) shifted Jacobi polynomial roots as the collocation points. These equations together with Eq. (6.45) generate \( N+1 \) nonlinear equations which can be solved using Newton’s iterative method. Consequently, the approximate solution \(u_{N}(x)\) can be obtained.
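The following Python sketch illustrates one possible realization of this collocation step; the residual of (6.44) is enforced at the chosen nodes, the initial conditions (6.45) are appended, and the resulting nonlinear system is handed to a Newton-type solver (here SciPy's `fsolve` as a stand-in). The basis evaluator, the operational matrices and all names are assumptions of ours; the same sketch applies to the Laguerre-based collocation of Sect. 6.4.

```python
import numpy as np
from scipy.optimize import fsolve

def collocate(F, D_nu, D_mu_list, D_int_list, phi, nodes, phi0, d, C0):
    # Nonlinear collocation sketch for (6.44)-(6.45); names are ours.
    #   phi(x) returns the basis vector Phi(x); nodes holds the N-m+1
    #   collocation points; d holds the m initial data; C0 is a starting guess.
    def residuals(C):
        eqs = []
        for x in nodes:                              # collocation equations (6.44)
            Px = phi(x)
            derivs = [C @ (Dm @ Px) for Dm in D_mu_list]
            eqs.append(C @ (D_nu @ Px) - F(x, C @ Px, *derivs))
        for i, di in enumerate(d):                   # initial conditions (6.45)
            eqs.append(C @ (D_int_list[i] @ phi0) - di)
        return eqs
    return fsolve(residuals, C0)                     # Newton-type iteration for C
```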

Remark 3

For dealing with the nonlinear FDE (6.43) with boundary conditions (6.10), we apply the same technique described in this subsection, but Eq. (6.45) should be changed to (6.11). After using the collocation method with the aid of SJOM for fractional derivatives at the \( N-m+1 \) nodes, we obtain a system of \( N+1 \) nonlinear algebraic equations which may be solved by Newton’s iterative method.

6.3.2 Collocation method based on BOM

In order to show the high importance of the BOM of fractional derivatives, we apply it to solve the multi-order fractional differential equation

$$\begin{aligned} F\left( x,u(x), D^{\beta _1}u(x),\ldots ,D^{\beta _k}u(x)\right) =0, \end{aligned}$$
(6.46)

with boundary or supplementary conditions

$$\begin{aligned} H_{i}\left( u(\xi _i),u^{\prime }(\xi _i),\ldots ,u^{(p)}(\xi _i)\right) =d_i, \quad i=0,\ldots ,p,\nonumber \\ \end{aligned}$$
(6.47)

where \(0\le p<\max \{{\beta _{i},i=1,\ldots ,k}\}\le p+1\), \(\xi _{i}\in [0,1]\), \(i=0,\ldots ,p\), the \(H_i\) are linear combinations of \(u(\xi _i),u^{\prime }(\xi _i),\ldots ,u^{(p)}(\xi _i)\), and \(u(x)\in L^{2}[0,1]\). It should be noted that in general \(F\) can be nonlinear.

We approximate \(u(x)\) by Bernstein polynomials as

$$\begin{aligned} u(x) \simeq \sum \limits _{i=0}^{N} c_i B_{i,N}(x)= C^\mathrm{T} B(x), \end{aligned}$$
(6.48)

where the vector \(C=[c_0,\ldots ,c_N]^{T}\) is an unknown vector. Using Eqs. (4.53) and (6.48) we have

$$\begin{aligned} D^{\beta _{j}}u(x)\simeq & {} C^\mathrm{T} D^{\beta _{j}} B(x)\simeq C^\mathrm{T} \mathbf D ^{(\beta _{j})} B(x),\nonumber \\&\quad j=1,\ldots ,k. \end{aligned}$$
(6.49)

By substituting these equations in Eq. (6.46), we get

$$\begin{aligned}&F\left( x,C^\mathrm{T} B(x), C^\mathrm{T} D^{(\beta _{1})} B(x),\ldots ,C^\mathrm{T} D^{(\beta _{k})} B(x)\right) \!=\!0\nonumber \\ \end{aligned}$$
(6.50)

Similarly, substituting Eq. (6.48) in Eq. (6.47) yields

$$\begin{aligned}&H_{i}\left( C^\mathrm{T} B(\xi _{i}), C^\mathrm{T} D^{(1)} B(\xi _{i}),\ldots ,C^\mathrm{T} D^{(p)}B(\xi _{i})\right) =d_i, \nonumber \\&\quad i=0,\ldots ,p. \end{aligned}$$
(6.51)

To find the solution \(u(x)\), we first collocate Eq. (6.50) at \( N-p \) points. For suitable collocation points, we use

$$\begin{aligned} x_{i}= \frac{1}{2} \left( \cos \left( \frac{i\pi }{N}\right) +1\right) , \quad i=1,\ldots ,N-p.\nonumber \\ \end{aligned}$$
(6.52)

These equations together with Eq. (6.51) generate \( N+1 \) algebraic equations which can be solved to find \(c_i,\ i=0,\ldots ,N\). Consequently, the unknown function \(u(x)\) given in Eq. (6.48) can be calculated.
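For completeness, the collocation points (6.52) are trivial to generate; a short helper (our own naming) is:

```python
import numpy as np

def bom_nodes(N, p):
    # Collocation points (6.52) on [0,1] used with the Bernstein operational matrix.
    i = np.arange(1, N - p + 1)
    return 0.5 * (np.cos(i * np.pi / N) + 1.0)
```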

6.4 Collocation method for nonlinear FDEs in a semi-infinite interval

In this subsection, in order to show the high importance of the MGLOM of fractional derivatives, we apply it to solve nonlinear multi-order FDEs. For nonlinear multi-order fractional initial value problems on the interval \(\varLambda \), we propose a spectral modified generalized Laguerre collocation method based on the MGLOM to find the approximate solution \(u_N (x)\).

Consider the nonlinear FDE

$$\begin{aligned} D^\nu u(x)= & {} F(x,u(x), D^{\beta _1}u(x),\ldots ,D^{\beta _k}u(x)),\nonumber \\&\quad \text {in} \ \varLambda =(0,\infty ), \end{aligned}$$
(6.53)

with initial conditions (6.13), where \(F\) can be nonlinear in general.

In order to use the modified generalized Laguerre polynomials for this problem, we first approximate \(u(x)\), \(D^\nu u(x)\) and \(D^{\beta _j}u(x)\), for \(j=1,\ldots ,k\), as in Eqs. (6.14), (6.16) and (6.17), respectively. Therefore, Eq. (6.53) can be written as

$$\begin{aligned}&C^\mathrm{T} \mathbf D ^{(\nu )}\psi (x)\simeq F\left( x,C^\mathrm{T} \psi (x),\right. \nonumber \\&\left. C^\mathrm{T} \mathbf D ^{(\beta _1)} \psi (x),\ldots ,C^\mathrm{T} \mathbf D ^{(\beta _k)} \psi (x)\right) . \end{aligned}$$
(6.54)

The numerical treatment of the initial conditions as given in Eq. (6.13) yields

$$\begin{aligned} u^{(i)}(0)= & {} C^\mathrm{T} \mathbf D ^{(i)}\psi (0)=d_i, \quad \quad i=0,1,\ldots ,m-1,\nonumber \\ \end{aligned}$$
(6.55)

To find the solution \(u(x)\), we first collocate Eq. (6.54) at \(N-m+1\) points. For suitable collocation points, we use \(N-m+1\) roots of the modified generalized Laguerre polynomials \(L^{(\alpha ,\beta )}_{i}(x)\). These equations together with Eq. (6.55) generate \(N+1\) nonlinear equations which can be solved using Newton’s iterative method. Consequently, the approximate solution \(u(x)\) can be obtained.

Corollary 6

In particular, the special case of the generalized Laguerre polynomials may be obtained directly by taking \(\beta =1\) in the modified generalized Laguerre polynomials; these are denoted by \(L^{(\alpha )}_i(x)\) (see [66]).

7 Fractional generalized Laguerre functions for systems of FDEs

The fractional-order generalized Laguerre functions (FGLFs) can be defined by applying the change of variable \( t=x^\lambda \), with \(\beta =1\) and \(\lambda >0\), to the modified generalized Laguerre polynomials. The FGLFs \(L^{(\alpha )}_i{(x^\lambda )}\) are denoted by \( L^{(\alpha ,\lambda )}_{i}{(x)}\).
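Since the FGLFs are just classical generalized Laguerre polynomials composed with \(x\mapsto x^\lambda \) (with \(\beta =1\)), they are straightforward to evaluate; a minimal sketch using SciPy's `genlaguerre`, with our own helper name, is given below.

```python
import numpy as np
from scipy.special import genlaguerre

def fglf(i, alpha, lam):
    # Fractional-order generalized Laguerre function
    # L^{(alpha,lambda)}_i(x) = L^{(alpha)}_i(x^lambda), with beta = 1.
    L = genlaguerre(i, alpha)
    return lambda x: L(np.asarray(x, dtype=float) ** lam)

# Example: evaluate L^{(0,1/2)}_3 at a few points of the half line.
values = fglf(3, 0.0, 0.5)(np.array([0.0, 1.0, 4.0]))
```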

We use the fractional-order generalized Laguerre collocation (FGLC) method (see [97]) to numerically solve the general form of systems of nonlinear FDEs, namely

$$\begin{aligned} D^{\nu _i}u_i(x)= & {} f_i\big (x,u_1(x),u_2(x), \ldots , u_n(x)\big ),\quad x\in \varLambda , \nonumber \\&\quad i=1,\ldots ,n, \end{aligned}$$
(7.1)

with initial conditions

$$\begin{aligned} u_i(0)=u_{i0},\quad \quad \quad \quad i=1,\ldots ,n, \end{aligned}$$
(7.2)

where \(0<\nu _i\le 1 \).

Let

$$\begin{aligned} u_{iN}(x)=\sum \limits _{j=0}^{N}a_{ij} L^{(\alpha ,\lambda )}_j(x), \end{aligned}$$
(7.3)

The fractional derivatives \(D^{\nu _i}u_{iN}(x)\) can be expressed in terms of the expansion coefficients \(a_{ij}\) using (4.39) with \(\beta =1\). The fractional-order generalized Laguerre collocation method for solving (7.1)–(7.2) is to find \(u_{iN}(x) \in S_N(\varLambda )\) such that

$$\begin{aligned}&D^{\nu _{i}}u_{iN}(x)=f_i\big (x,u_{1N}(x),u_{2N}(x),\ldots , u_{nN}(x)\big ),\nonumber \\&\quad x\in \varLambda , \end{aligned}$$
(7.4)

is satisfied exactly at the collocation points \(x^{(\alpha , \lambda )}_{i, N,k},\ k=0,1, \ldots ,N-1\), \(i=1, \ldots ,n,\) which immediately yields

$$\begin{aligned}&\sum \limits _{j=0}^{N}a_{ij} D^{\nu _i}L^{(\alpha ,\lambda )}_j{(x^{(\alpha ,\lambda )}_{i,N,k})}\nonumber \\&\quad = f_i\big (x^{(\alpha ,\lambda )}_{i,N,k},\sum \limits _{j=0}^{N}a_{1j} L^{(\alpha ,\lambda )}_j{(x^{(\alpha , \lambda )}_{1,N,k})},\nonumber \\&\qquad \sum \limits _{j=0}^{N}a_{2j} L^{(\alpha ,\lambda )}_j{(x^{(\alpha ,\lambda )}_{2,N,k})},\ldots , \sum \limits _{j=0}^{N}a_{nj} L^{(\alpha ,\lambda )}_j{(x^{(\alpha ,\lambda )}_{n,N,k})}\big ),\nonumber \\ \end{aligned}$$
(7.5)

with (7.2) written in the form

$$\begin{aligned} \sum \limits _{j=0}^{N}a_{ij}L^{(\alpha ,\lambda )}_j(0) = u_{i0},\quad \quad i=1, \ldots ,n. \end{aligned}$$
(7.6)

This means the system (7.1) with its initial conditions has been reduced to a system of \(n(N+1)\) nonlinear algebraic equations (7.5)–(7.6), which may be solved by using any standard iteration technique.

8 Applications and numerical results

This section presents some numerical results obtained by using the algorithms presented in the previous sections. Comparisons of the spectral methods with those obtained by other methods reveal that spectral methods are very effective and convenient.

Example 1

Consider the inhomogeneous Bagley–Torvik equation (see [68])

$$\begin{aligned}&D^2 u(x) +D^{\frac{3}{2}} u(x)+u(x)=g(x),\nonumber \\&\quad u(0)=1 , \quad u^\prime (0)=1,\quad x\in [0,L]. \end{aligned}$$
(8.1)

where \(g(x)=1+x\).

The exact solution of this problem is \(u(x)=1+x\).

By applying the technique described in Sect. 6.1.1 with \(N=2\), we may write the approximate solution and the right-hand side in the forms

$$\begin{aligned}&u(x)= \sum \limits _{i=0}^{2}c_iP^{(\alpha ,\beta )}_{L,i}(x)=C^\mathrm{T} \varPhi (x),\\&g(x)\simeq \sum \limits _{i=0}^{2}g_i P^{(\alpha ,\beta )}_{L,i}(x)=G^\mathrm{T} \varPhi (x). \end{aligned}$$

Here, the operational matrices corresponding to Eq. (8.1) can be written as follows

$$\begin{aligned}&\mathbf D ^{(2)} =\left( \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ \varDelta _2(2,0)&{}\varDelta _2(2,1)&{}\varDelta _2(2,2)\\ \end{array}\right) ,\\&\mathbf D ^{\left( \frac{3}{2}\right) } =\left( \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ \varDelta _{\frac{3}{2}}(2,0) &{} \varDelta _{\frac{3}{2}}(2,1) &{} \varDelta _{\frac{3}{2}}(2,2) \\ \end{array}\right) ,\\&G= \left( \begin{array}{c} g_0 \\ g_1 \\ g_2 \\ \end{array} \right) , \end{aligned}$$

where \(g_j\ \text {and}\ \varDelta _{\nu }(i,j)\) are computed from Eqs. (4.14) and (4.21), respectively.

Firstly, applying the tau method to (8.1) (see Eq. (6.8)) gives

$$\begin{aligned} c_0+ \left( \varDelta _2(2,0)+\varDelta _{\frac{3}{2}}(2,0)\right) c_2-g_0=0. \end{aligned}$$
(8.2)

Secondly, the use of Eq. (6.9) in the initial conditions yields

$$\begin{aligned}&c_0-(\beta +1)c_1+\frac{(\beta +1)(\beta +2)}{2}c_2-1=0, \end{aligned}$$
(8.3)
$$\begin{aligned}&(\alpha +\beta +2)c_1-(\beta +2)(\alpha +\beta +3)c_2-L=0.\nonumber \\ \end{aligned}$$
(8.4)
Table 1 \(c_0,\ c_1\ \text {and}\ c_2\) for different values of \(\alpha \) and \(\beta \), in Example 1

Finally, solving the linear algebraic equations (8.2)–(8.4), the approximate solution can be written as

$$\begin{aligned} u(x)=\left( \begin{array}{cccc} c_0 &{} c_1 &{} c_2 \\ \end{array} \right) \left( \begin{array}{c} P^{(\alpha ,\beta )}_{L,0}(x) \\ P^{(\alpha ,\beta )}_{L,1}(x) \\ P^{(\alpha ,\beta )}_{L,2}(x) \end{array} \right) =x+1, \end{aligned}$$

which is the exact solution of the problem.

Table 1 exhibits the 3 unknown coefficients \(c_0,\ c_1\ \text {and}\ c_2\) for various choices of \(\alpha \) and \(\beta \). We observe that for each choice of the Jacobi parameters \(\alpha \) and \(\beta \), the exact solution is achieved.

Remark 4

In the case \(\alpha =\beta =0\) and \(L=1\), the previous result is in complete agreement with the result obtained by Saadatmandi and Dehghan (see [68], Example 1).

Example 2

Consider the FDE (see [68])

$$\begin{aligned}&u^{(\nu )}(x)+ u(x)=0,\quad 0<\nu <2,\ \quad u(0)=1,\quad \nonumber \\&u'(0)=0 \quad x\in (0,1), \end{aligned}$$
(8.5)

where the second initial condition applies for \(\nu >1\) only.

Table 2 Maximum absolute errors at \(\nu =0.85\) for different values of \( \alpha ,\ \beta \ \text {and}\ N,\) in Example 2

The exact solution is (see [24])

$$\begin{aligned} u(x)=E_{\nu ,1}(-x^{\nu }), \end{aligned}$$
(8.6)

where

$$\begin{aligned} E_{\delta ,\epsilon }(x)=\sum \limits _{r=0}^{\infty }\frac{x^r}{\varGamma {(\delta r+\epsilon )}}, \end{aligned}$$
(8.7)

is the generalized Mittag-Leffler function.

The solution of this problem is obtained by applying the technique described in Sect. 6.1.1. The maximum absolute errors for \(\nu =0.85\) and various choices of \(N\), \(\alpha \) and \(\beta \) are shown in Table 2. Table 2 shows that a good approximation to the exact solution is achieved by using only a few terms of the shifted Jacobi polynomials. Also, the maximum absolute errors for \(N=10\) and different values of \(\nu \), \(\alpha \) and \(\beta \) are shown in Table 3.
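When tabulating the errors, the exact solution (8.6) can be evaluated from the truncated series (8.7); a minimal sketch, with a truncation level chosen by us that is ample on \([0,1]\), is:

```python
import math

def mittag_leffler(z, delta, eps=1.0, terms=80):
    # Truncated series (8.7) for the generalized Mittag-Leffler function.
    return sum(z ** r / math.gamma(delta * r + eps) for r in range(terms))

# Exact solution of Example 2 at x = 0.5 for nu = 0.85: u(x) = E_{nu,1}(-x**nu).
u_exact = mittag_leffler(-(0.5 ** 0.85), 0.85)
```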

Table 3 Maximum absolute error for different values of \(\nu , \alpha \ \text {and}\ \beta \) at \(N=10\), in Example 2

Remark 5

In the case \(\alpha =\beta =0\) and \(L=1\), this result is in complete agreement with the result obtained by Saadatmandi and Dehghan (see [68]).

Example 3

Consider the equation

$$\begin{aligned}&D^2 u(x)-2 D^{\frac{5}{3}} u(x) +D^{\frac{2}{3}} u(x)+u(x)\nonumber \\&\quad = x^3+6x+\frac{16}{5\sqrt{\pi }}x^{2.5},\nonumber \\&u(0)=0,\quad u'(0)=0,\quad x\in \varLambda , \end{aligned}$$
(8.8)

whose exact solution is given by \(u(x)=x^3.\)

By applying the technique described in Sect. 6.1.2 with \(N=3\) and \(x\in \varLambda \), we approximate the solution as

$$\begin{aligned} u(x)= \sum \limits _{i=0}^{3}c_iL^{(\alpha ,\beta )}_{i}(x)=C^\mathrm{T} \psi (x). \end{aligned}$$

Here, we have

$$\begin{aligned}&\mathbf D ^{(2)}=\beta ^{2}\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0 &{}0 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}0 \\ 1 &{}0 &{}0 &{}0 \\ 2 &{}1 &{}0 &{}0 \\ \end{array}\right) ,\\&\mathbf D ^{\left( \frac{5}{3}\right) } =\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0 &{}0 &{}0 &{}0\\ 0 &{}0 &{}0 &{}0\\ \varOmega _{\frac{5}{3}}(2,0) &{}\varOmega _{\frac{5}{3}}(2,1) &{}\varOmega _{\frac{5}{3}}(2,2) &{}\varOmega _{\frac{5}{3}}(2,3)\\ \varOmega _{\frac{5}{3}}(3,0) &{}\varOmega _{\frac{5}{3}}(3,1) &{}\varOmega _{\frac{5}{3}}(3,2) &{}\varOmega _{\frac{5}{3}}(3,3)\\ \end{array}\right) ,\\&\mathbf D ^{\left( \frac{2}{3}\right) } =\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0 &{}0 &{}0 &{}0\\ \varOmega _{\frac{2}{3}}(1,0) &{}\varOmega _{\frac{2}{3}}(1,1) &{}\varOmega _{\frac{2}{3}}(1,2) &{}\varOmega _{\frac{2}{3}}(1,3)\\ \varOmega _{\frac{2}{3}}(2,0) &{}\varOmega _{\frac{2}{3}}(2,1) &{}\varOmega _{\frac{2}{3}}(2,2) &{}\varOmega _{\frac{2}{3}}(2,3)\\ \varOmega _{\frac{2}{3}}(3,0) &{}\varOmega _{\frac{2}{3}}(3,1) &{}\varOmega _{\frac{2}{3}}(3,2) &{}\varOmega _{\frac{2}{3}}(3,3)\\ \end{array}\right) ,\\&G= \left( \begin{array}{c} g_0 \\ g_1 \\ g_2 \\ g_3 \\ \end{array}\right) , \end{aligned}$$

where \(g_{j}\) and \(\varOmega _{\nu }(i,j)\) are computed from Eqs. (4.35) and (4.40), respectively.

Therefore, using Eq. (6.19), we obtain

$$\begin{aligned}&c_0+ [1\!+\!\varOmega _{\frac{2}{3}}(1,0)]c_1+[1\!-\!2\varOmega _{\frac{5}{3}}(2,0)\!+\! \varOmega _{\frac{2}{3}}(2,0)]c_2\nonumber \\&\quad +\,[2-2\varOmega _{\frac{5}{3}}(3,0)]c_3- g_0=0, \end{aligned}$$
(8.9)
$$\begin{aligned}&[1+\varOmega _{\frac{2}{3}}(1,1)]c_1+[\varOmega _{\frac{2}{3}}(2,1)- 2\varOmega _{\frac{5}{3}}(2,1)]c_2\nonumber \\&\quad +\,[1-2\varOmega _{\frac{5}{3}}(3,1)+ \varOmega _{\frac{2}{3}}(3,1)]c_3- g_1=0. \end{aligned}$$
(8.10)

Now, by applying Eq. (6.20) we have

$$\begin{aligned}&C^\mathrm{T}\psi (0)=c_0+(\alpha +1)c_1+\dfrac{(\alpha +1)(\alpha +2)}{2} c_2\nonumber \\&\qquad \qquad \qquad +\,\dfrac{(\alpha +1)(\alpha +2)(\alpha +3)}{6}c_3=0,\nonumber \\&C^\mathrm{T}\mathbf D ^{(1)}\psi (0)=-\beta c_1-\beta (\alpha +2)c_2\nonumber \\&\qquad \qquad \qquad \qquad -\,\dfrac{\beta (\alpha +3)(\alpha +2)}{2}c_3=0. \end{aligned}$$
(8.11)

Finally, by solving Eqs. (8.9)–(8.11), we obtain the \(4\) unknown coefficients for various choices of \(\alpha \) and \(\beta \), which are given in Table 4. Then, we get

$$\begin{aligned} c_0= & {} \frac{\alpha ^{3}+6\alpha ^{2}+11\alpha +6}{\beta ^{3}},\quad \!\! c_1{=}\frac{-3\alpha ^{2}-15\alpha -18}{\beta ^{3}} ,\\ c_2= & {} \frac{6\alpha +18}{\beta ^{3}},\quad c_3=\frac{-6}{\beta ^{3}}. \end{aligned}$$

Thus, we can write

$$\begin{aligned}u(x)=\left( \begin{array}{cccc} c_{0} &{} c_{1} &{} c_{2} &{} c_{3} \\ \end{array}\right) \left( \begin{array}{c} L^{(\alpha ,\beta )}_{0}(x) \\ L^{(\alpha ,\beta )}_{1}(x) \\ L^{(\alpha ,\beta )}_{2}(x) \\ L^{(\alpha ,\beta )}_{3}(x) \\ \end{array}\right) =x^3. \end{aligned}$$
Table 4 \(c_0,\ c_1,\ c_2 \ \text {and}\ c_3 \) for different values of \(\alpha \) and \(\beta \) in Example 3

Example 4

Consider the following linear boundary value problem

$$\begin{aligned}&4(x+1)D^{\frac{5}{2}} u(x)+4 D^{\frac{3}{2}} u(x) +\frac{1}{\sqrt{x+1}}u(x)\nonumber \\&\quad =\sqrt{x}+\sqrt{\pi },\nonumber \\&u(0)=\sqrt{\pi },\quad u'(0)=\frac{\sqrt{\pi }}{2}, \quad u(1)=\sqrt{2\pi }. \end{aligned}$$
(8.12)

The exact solution of this problem is \(u(x)= \sqrt{\pi (x+1)}\).

Table 5 Maximum absolute error for different values of \(N\) in Example 4

Now, we apply the collocation technique based on the BOM, which is described in Sect. 6.3.2, for solving Eq. (8.12). The \(L_{\infty }\) and \(L_2\) errors are presented in Table 5 for different values of \(N\). Also, in Table 5, a comparison is made between the presented method and the method based on linear B-spline functions (see [78]). The method of [78] requires the solution of a rather large system of algebraic equations to obtain accuracy of comparable order. Indeed, in the BOM method, we obtain \(N+1\) algebraic equations, while the method of [78] requires \(2^N+1\) algebraic equations, which increases the computational time.

Example 5

Consider the following initial value problem

$$\begin{aligned}&D^{\frac{3}{2}} u(x)+3 u(x)= 3x^3+\frac{8}{\varGamma (0.5)}x^{1.5},\nonumber \\&\quad u(0)=0,\ u'(0)=0,\quad x\in [0,L], \end{aligned}$$
(8.13)

whose exact solution is given by \(u(x)=x^3.\)

By applying the technique described in Sect. 6.2.1 using the SJOM of fractional integration with \(N=3\), we may write the approximate solution and the right-hand side in the form

$$\begin{aligned} u(x)= & {} \sum \limits _{i=0}^{3}c_iP_{L,i}^{(\alpha ,\beta )}(x)=C^\mathrm{T} \varPhi (x),\quad \text {and}\quad \\ g(x)\simeq & {} \sum \limits _{i=0}^{3}g_i P_{L,i}^{(\alpha ,\beta )}(x)=G^\mathrm{T} \varPhi (x). \end{aligned}$$

From Eq. (5.6) one can write

$$\begin{aligned}&\mathbf P ^{\left( \frac{3}{2}\right) } =\left( \begin{array}{ccccc} \varUpsilon _{\frac{3}{2}}(0,0,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(0,1,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(0,2,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(0,3,\alpha ,\beta )\\ \varUpsilon _{\frac{3}{2}}(1,0,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(1,1,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(1,2,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(1,3,\alpha ,\beta )\\ \varUpsilon _{\frac{3}{2}}(2,0,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(2,1,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(2,2,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(2,3,\alpha ,\beta ) \\ \varUpsilon _{\frac{3}{2}}(3,0,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(3,1,\alpha ,\beta ) &{} \varUpsilon _{\frac{3}{2}}(3,2,\alpha ,\beta )&{} \varUpsilon _{\frac{3}{2}}(3,3,\alpha ,\beta ) \\ \end{array}\right) ,\\&{ G= \left( \begin{array}{c} g_0 \\ g_1 \\ g_2 \\ g_3 \\ \end{array} \right) ,} \end{aligned}$$

where \(\varUpsilon _{\frac{3}{2}}(i,j,\alpha ,\beta )\) is given in Eq. (5.7) and

$$\begin{aligned} g_j= & {} \frac{ (2j+\alpha +\beta +1) j! }{L^{\alpha +\beta +1}\varGamma (j+\alpha +1) } \\&\times \sum \limits _{f=0}^{j}\frac{(-1)^{j-f} \varGamma (f+j+\alpha +\beta +1)}{L^{f}f!\ (j-f)!\ \varGamma (f+\beta +1)}\\&\times \int _{0}^{L}\left( \frac{64 x^{9/2}}{105 \sqrt{\pi }}+x^3\right) x^{\beta +f} (L-x)^\alpha \hbox {d}x. \end{aligned}$$

Making use of (6.28) and (6.30) yields

$$\begin{aligned}&3\varUpsilon _{\frac{3}{2}}(0,2,\alpha ,\beta )c_0+ 3\varUpsilon _{\frac{3}{2}}(1,2,\alpha ,\beta )c_1\nonumber \\&\quad +\, 3\varUpsilon _{\frac{3}{2}}(2,2,\alpha ,\beta )c_2+ 3\varUpsilon _{\frac{3}{2}}(3,2,\alpha ,\beta )c_3+c_2-g_2=0, \end{aligned}$$
(8.14)
$$\begin{aligned}&3\varUpsilon _{\frac{3}{2}}(0,3,\alpha ,\beta )c_0+ 3\varUpsilon _{\frac{3}{2}}(1,3,\alpha ,\beta )c_1\nonumber \\&\quad +\, 3\varUpsilon _{\frac{3}{2}}(2,3,\alpha ,\beta )c_2+ 3\varUpsilon _{\frac{3}{2}}(3,3,\alpha ,\beta )c_3 +c_3-g_3=0. \end{aligned}$$
(8.15)

Applying Eq. (6.31) for the initial conditions gives

$$\begin{aligned}&C^\mathrm{T}\varPhi (0)=c_0-(\beta +1)c_1+\frac{(\beta +1)(\beta +2)}{2}c_2\nonumber \\&\qquad \qquad \qquad \ -\frac{(\beta +1)(\beta +2)(\beta +3)}{6}c_3=0,\nonumber \\&C^\mathrm{T} \mathbf D ^{(1)}\varPhi (0)=\frac{(\alpha +\beta +2)}{L}c_1- \frac{(\beta +2)(\alpha +\beta +3)}{L}c_2\nonumber \\&\qquad \qquad \qquad \qquad \ +\,\frac{(\beta +2)(\beta +3)(\alpha +\beta +4)}{2L}c_3=0.\nonumber \\ \end{aligned}$$
(8.16)

Finally, by solving Eqs. (8.14)–(8.16), we get the approximate solution. In particular, the special cases of the ultraspherical basis (\(\alpha =\beta \), with each replaced by \(\alpha -\frac{1}{2}\)) and of the Chebyshev bases of the first, second, third and fourth kinds may be obtained directly by taking \(\alpha =\beta =\mp \frac{1}{2}\) and \(\alpha =-\beta =\pm \frac{1}{2}\), respectively, and the shifted Legendre basis by taking \(\alpha =\beta =0\). Below, we present some of these special cases.

Case 1. If \(\alpha =\beta =0\), then

$$\begin{aligned} c_0=\frac{L^3}{4},\quad \quad c_1=\frac{9L^3}{20},\quad \quad c_2=\frac{L^3}{4},\quad \quad c_3=\frac{L^3}{20}. \end{aligned}$$

and the approximate solution is given by

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{(0,0)}(x) =x^3, \end{aligned}$$

which is the exact solution.

Case 2. If we choose \(\alpha =-\frac{1}{2}, \beta =\frac{1}{2}\), then

$$\begin{aligned} c_0=\frac{35L^3}{64},\quad c_1=\frac{21L^3}{32},\quad c_2=\frac{7L^3}{24},\quad c_3=\frac{L^3}{20}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{\left( -\frac{1}{2},\frac{1}{2}\right) }(x) =x^3, \end{aligned}$$

which is the exact solution.

Case 3. In the case of \(\alpha =\frac{1}{2}, \beta =-\frac{1}{2}\), we have

$$\begin{aligned} c_0=\frac{5L^3}{64},\quad c_1=\frac{9L^3}{32},\quad c_2=\frac{5L^3}{24},\quad c_3=\frac{L^3}{20}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{\left( \frac{1}{2},-\frac{1}{2}\right) }(x) =x^3, \end{aligned}$$

which is the exact solution.

Example 6

Consider the equation

$$\begin{aligned}&D^2 u(x)-2 D u(x) +D^{\frac{1}{2}} u(x)+u(x)\nonumber \\&\quad = x^3-6x^2+6x+\frac{16}{5\sqrt{\pi }}x^{2.5},\nonumber \\&u(0)=0,\quad u'(0)=0,\quad x\in [0,L], \end{aligned}$$
(8.17)

whose exact solution is given by \(u(x)=x^3.\)

Now, we can apply the technique described in Sect. 6.2.1 using the SJOM of fractional integration with \(N=3\). The approximate solutions obtained by using the proposed method for some special cases of \(\alpha \) and \(\beta \) are listed in the following cases:

Table 6 Maximum absolute errors for \(\gamma =0.01\) and different values of \(\alpha \), \(\beta \) and \(N\) in Example 7
Table 7 Maximum absolute errors for \(N=10\) and different values of \(\gamma \), \(\alpha \) and \(\beta \) in Example 7

Case 1. If \(\alpha =\beta =0\), then

$$\begin{aligned} c_0=\frac{L^3}{4},\quad \quad c_1=\frac{9L^3}{20},\quad \quad c_2=\frac{L^3}{4},\quad \quad c_3=\frac{L^3}{20}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{(0,0)}(x) =x^3, \end{aligned}$$

which is the exact solution.

Case 2. If \(\alpha =-\frac{1}{2}, \beta =\frac{1}{2}\), then

$$\begin{aligned} c_0=\frac{35L^3}{64},\quad c_1=\frac{21L^3}{32},\quad c_2=\frac{7L^3}{24},\quad c_3=\frac{L^3}{20}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{\left( -\frac{1}{2},\frac{1}{2}\right) }(x) =x^3, \end{aligned}$$

which is the exact solution.

Case 3. If \(\alpha =\frac{1}{2}, \beta =-\frac{1}{2}\), then

$$\begin{aligned} c_0=\frac{5L^3}{64},\quad c_1=\frac{9L^3}{32},\quad c_2=\frac{5L^3}{24},\quad c_3=\frac{L^3}{20}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{\left( \frac{1}{2},-\frac{1}{2}\right) }(x) =x^3, \end{aligned}$$

which is the exact solution.

Case 4. If \(\alpha =\beta =-\frac{1}{2}\), then

$$\begin{aligned} c_0=\frac{5L^3}{16},\quad c_1=\frac{15L^3}{32},\quad c_2=\frac{3L^3}{16},\quad c_3=\frac{L^3}{32}, \end{aligned}$$

and

$$\begin{aligned} u_N(x)=\sum \limits _{i=0}^{3}c_iP_{L,i}^{\left( -\frac{1}{2},-\frac{1}{2}\right) }(x) =x^3, \end{aligned}$$

which is the exact solution.

Example 7

Consider the following fractional initial value problem

$$\begin{aligned}&D^{\frac{3}{2}} u(x)+3 u(x)= \gamma ^{\frac{3}{2}} \hbox {e}^{\gamma x}+3\hbox {e}^{\gamma x},\nonumber \\&\qquad u(0)=1,\quad u'(0)=\gamma ,\qquad x\in (0,20), \end{aligned}$$
(8.18)

whose exact solution is given by \(u(x)=\hbox {e}^{\gamma x}.\)

The solution of this problem is obtained by applying the technique described in Sect. 6.2.2 using the MGLOM of fractional integration. The maximum absolute errors for \(\gamma = 0.01\) and various choices of \(N\), \(\alpha \) and \(\beta \) are shown in Table 6. Table 6 shows that a good approximation to the exact solution is achieved by using only a few terms of the modified generalized Laguerre polynomials. Also, the maximum absolute errors for \(N = 10\) and different values of \(\gamma \), \(\alpha \) and \(\beta \) are displayed in Table 7.

Example 8

Consider the following initial value problem of multi-term nonlinear FDE

$$\begin{aligned}&D^\zeta u(x)+D^{\eta }u(x)\cdot D^{\theta }u(x)+u^2(x)=x^6\\&\quad +\,\frac{6x^{3-\zeta }}{\varGamma {(4-\zeta )}}+\frac{36x^{6-\eta -\theta }}{\varGamma {(4-\eta )}\varGamma {(4-\theta )}},\\&\zeta \in (2,3),\quad \eta \in (1,2),\quad \theta \in (0,1),\\&u(0)=u^\prime (0)= u^{\prime \prime }(0)=0. \end{aligned}$$

The exact solution of this problem is \(u(x) = x^3\).

In Table 8, we introduce the maximum absolute errors, using the collocation technique based on the SJOM described in Sect. 6.3.1, at \(\zeta =2.5,\ \eta =1.5,\ \theta =0.9\) with various choices of \(\alpha ,\ \beta \ \text {and}\ N\). Also, the maximum absolute errors for four different choices of \( N,\ \zeta ,\ \eta ,\ \theta \) and \( \alpha =\beta =1.5\) are shown in Table 9. From Table 9, we see that as \(\zeta ,\ \eta ,\ \theta \) approach their integer values, the solution of the FDE approaches that of the integer-order differential equation and, accordingly, the approximate solutions become more accurate.

Table 8 Maximum absolute errors for \(\zeta =2.5, \eta =1.5, \theta =0.9\) and different choices of \(\alpha ,\ \beta \ \text {and}\ N\), in Example 8
Table 9 Maximum absolute errors for \(\alpha =\beta =1.5\) and different choices of \(\zeta ,\ \eta ,\ \theta \ \text {and}\ N\), in Example 8

Example 9

Consider the following nonlinear initial value problem

$$\begin{aligned}&D^2u(x)+D^{\nu }u(x)+u^2(x)=g(x),\nonumber \\&\quad \quad u(0)=1,\quad u^\prime (0)=0, \quad \quad x\in (0,20), \end{aligned}$$
(8.19)

where

$$\begin{aligned} g(x)= & {} \cos ^{2}(\gamma x)-\gamma ^2 \cos (\gamma x)\\&+\frac{1}{\varGamma (-\nu )}\int _{0}^{x}{(x-t)}^{-\nu -1} u(t) \hbox {d}t \end{aligned}$$

and the exact solution is given by \(u(x)=\cos (\gamma x).\)

The solution of this problem is obtained by applying the technique described in Sect. 6.2.2 to Eq. (8.19) with \(\alpha =0\) and \(\beta =1\), i.e., using the LOM. The maximum absolute errors for \(\gamma = \frac{1}{30}\) and \(\gamma = \frac{1}{100}\) with various choices of \(N\) and \(\nu \) are shown in Tables 10 and 11, respectively.

Table 10 Maximum absolute errors for \(\gamma =\frac{1}{30}\) and different values of \(\nu \) and \(N\) in Example 9
Table 11 Maximum absolute errors for \(\gamma =\frac{1}{100}\) and different values of \(\nu \) and \(N\) in Example 9

Example 10

Finally, we consider the following nonlinear boundary value problem

$$\begin{aligned}&D^2u(x)+\varGamma {\left( \frac{4}{5}\right) } \root 5 \of {x^6}\ D^{\frac{6}{5}}u(x)\nonumber \\&\quad +\,\frac{11}{9}\ \varGamma {\left( \frac{6}{5}\right) } \root 6 \of {x}D^{\frac{1}{6}}u(x)+(u'(x))^2=2+\frac{1}{10}x^2,\nonumber \\&u(0)=1,\ u(1)=2. \end{aligned}$$
(8.20)

The exact solution of this problem is \(u(x)=x^2+1\).

We apply the method presented in Sect. 6.3.2 in which we use the collocation method based on BOM of fractional derivative. In Table 12, we compare the \(L_\infty (0,1)\) and \(L_2(0,1)\) errors of the BOM algorithm with the method proposed in [78].

Example 11

Consider the FDE

$$\begin{aligned}&D^{2} u(x) + D^{\frac{3}{2}}u(x) + u(x) =x^2 + 2+\dfrac{\varGamma {(3)}}{\varGamma {\left( \dfrac{3}{2}\right) }}x^{\frac{1}{2}} , \nonumber \\&\quad \quad u(0)=0,\quad \quad u'(0)=0, \end{aligned}$$
(8.21)

the exact solution is given by \( u(x)=x^2 \).

We convert Eq. (8.21) into a system of FDEs by the change of variable \(u_1(x)=u(x)\) and get

$$\begin{aligned} D^{\frac{1}{2}} u_1(x)= & {} u_2 (x) \nonumber \\ D^{\frac{1}{2}} u_2(x)= & {} u_3 (x) \nonumber \\ D^{\frac{1}{2}} u_3(x)= & {} u_4(x) \\ D^{\frac{1}{2}} u_4 (x)= & {} -u_4(x)-u_1(x)+ x^2 + 2+\dfrac{\varGamma {(3)}}{\varGamma {(1.5)}}x^{\frac{1}{2}},\nonumber \end{aligned}$$
(8.22)

with initial conditions

$$\begin{aligned}&u_1(0)=u(0), \quad \quad \quad u_2(0)=0, \quad \quad \quad u_3(0)=u'(0) ,\nonumber \\&u_4(0)=0. \end{aligned}$$
(8.23)
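For concreteness, the right-hand sides of the half-order system (8.22), written in the form \(D^{\frac{1}{2}}u_i(x)=f_i(x,u_1,\ldots ,u_4)\) consumed by the FGLC method of Sect. 7, may be coded as follows (a sketch with our own naming):

```python
import math

# Right-hand sides f_i of the half-order system (8.22): D^{1/2} u_i = f_i(x, u_1..u_4).
def f1(x, u1, u2, u3, u4):
    return u2

def f2(x, u1, u2, u3, u4):
    return u3

def f3(x, u1, u2, u3, u4):
    return u4

def f4(x, u1, u2, u3, u4):
    return -u4 - u1 + x**2 + 2 + (math.gamma(3) / math.gamma(1.5)) * math.sqrt(x)
```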

The maximum absolute errors for \(u(x)=u_1(x)\) using the FGLC method at \(N=4\) and various choices of \(\alpha \) are shown in Table 13. It is clear that the approximate solutions are in complete agreement with the exact solution.

Table 12 Maximum absolute errors for different values of \(N\), in Example 10
Table 13 Maximum absolute error using FGLC method with various choices of \(\alpha \) at \(N=4\) for Example 11

9 Conclusion

In this article, we have presented a broad discussion of spectral techniques based on operational matrices of fractional derivatives and integrals for some orthogonal polynomials, such as the Legendre, Chebyshev, Jacobi, Bernstein, Laguerre, generalized Laguerre and modified generalized Laguerre polynomials, and their use with numerical techniques for solving fractional differential equations on finite and semi-infinite intervals.

Efficient numerical integration processes for FDEs were investigated based on spectral methods in combination with operational matrices. Comparisons of the approximate solutions obtained by the spectral methods with the exact solutions of the problems, and with the approximate solutions achieved by other methods, were presented to confirm the validity and applicability of spectral techniques based on operational matrices. The proposed methods can be extended to solve time-dependent FDEs.