1 Introduction

Fractional calculus is the branch of mathematical analysis in which the order of derivatives and integrals is extended from integer to non-integer values. It is, in fact, a classical mathematical development: since the concept first appeared in the theory of derivatives and integrals, mathematicians such as Euler, Laplace, Riemann, and Liouville have contributed to it. The basic concepts of the theory of fractional calculus can be found in [27].

Applications of the concepts of fractional calculus arise in various fields such as viscoelastic damping [1], robotics and control [16], signal processing [29], and electric circuits [28].

In general, fractional differential equations do not always have exact solutions, or an analytical solution is difficult to obtain. For this reason, the study of numerical solutions of this type of equation has grown in recent years. Methods used to solve fractional differential equations include Laplace transforms [6], methods based on operational matrices [2], the variational iteration method [25], the finite difference method [22], the Legendre wavelets method [10], the Haar wavelet method [36], the Bernoulli polynomials method [11], the Chebyshev wavelets method [15], the fractional-order Bernoulli wavelets method [33], and so on.

Time-delay systems have received much attention in the last few decades, because they appear in many systems and branches of science such as engineering, chemistry, physics, hydraulic networks, long transmission lines, disease models [46], traffic control [42], etc.

In 1949, Myshkis introduced the theory of a general class of differential equations with delayed arguments [24]. Krasovski [12], Bellman and Cooke [3], El’sgol’c and Norkin [7], and Hale [9] also carried out research in this field.

Indeed, a strong motivation for studying fractional differential equations with time delay comes from the fact that these equations efficiently describe anomalous diffusion on fractals, physical objects of fractional dimension (such as some amorphous semiconductors or strongly porous materials), fractional random walks, etc.

Other applications of this kind of equation occur in the following fields: fluid flow, viscoelasticity, control theory of dynamical systems, diffusive transport akin to diffusion, electrical networks, probability and statistics, dynamical processes in self-similar and porous structures, electrochemistry of corrosion, optics and signal processing, rheology, etc.

In recent years, several numerical methods have been presented for solving delay differential equations of integer and non-integer order, such as the hybrid of block-pulse functions and Taylor series [19], the Legendre wavelet method [39], Chebyshev polynomials [41], the method in [45], the fractional-order Bernoulli wavelet method [32], and so on.

Lagrange interpolation is a valuable analytical tool, and the Lagrange approach is in most cases the method of choice for polynomial interpolation [5]. Lagrange polynomials are used to solve several types of equations numerically; for example, they have been applied to integral equations [34], differential equations [44], delay differential equations [20], delay optimal control problems [18], and so on. Interpolation scaling functions have also been used to solve differential equations [14, 35], optimal control problems [8], and so on.

Recently, in [37], we introduced fractional-order Lagrange polynomials (FLPs), applied these new functions to solve fractional differential equations, and showed that FLPs are well suited to the approximation of smooth functions.

In this paper, a new set of fractional basis functions, constructed from FLPs, is presented. Using these new functions, we solve a class of fractional differential equations and fractional delay differential equations. We demonstrate that FGLSFs are appropriate for the approximation of smooth and piecewise smooth functions. Notice that FLPs are a special case of FGLSFs obtained by taking \(k=1\).

By taking the zeros of orthogonal polynomials (such as Legendre polynomials, Chebyshev polynomials, etc.) as the nodes of the Lagrange polynomials, orthogonal Lagrange polynomials are constructed [43]. Here we obtain the FGLSFs, the fractional integration operational matrix of FGLSFs, and the delay operational matrix of FGLSFs in full generality, without fixing the nodes of the Lagrange polynomials in advance.

As a result, by choosing different nodes for the Lagrange polynomials, we obtain either orthogonal or non-orthogonal Lagrange scaling functions. This is the most important advantage of FGLSFs over the fractional-order Bernoulli wavelets given in Ref. [33].

Another advantage of FGLSFs over the Bernoulli wavelets introduced in Ref. [32] is the presence of the fractional parameter \(\alpha \). In the examples presented in Sect. 7, the influence of this parameter on the solution of fractional differential equations and fractional delay differential equations can be seen.

The rest of the paper is organized as follows. In Sect. 2, some necessary definitions and mathematical preliminaries required for our subsequent development are given. In Sect. 3, we recall the fractional-order Lagrange polynomials and then propose general Lagrange scaling functions and fractional-order general Lagrange scaling functions. In Sect. 4, we derive the FGLSF operational matrices of fractional-order integration and delay. Section 5 is devoted to the numerical method for solving fractional differential equations and fractional delay differential equations. In Sect. 6, the error analysis is given. In Sect. 7, numerical results are reported, which show the effectiveness of the method. Also, in this section, we employ the method for the numerical solution of the Pieroux model describing the effect of noise on light in a laser device.

2 Preliminaries

In this section, we give some basic definitions and properties of fractional calculus theory which are used in this paper.

Definition 1

Let \(f:\;[a,\;b] \rightarrow R\) be a function, \(\nu > 0\) a real number, and \(n= \lceil \nu \rceil \), where \(\lceil \nu \rceil \) denotes the smallest integer greater than or equal to \(\nu \). The Riemann–Liouville integral of fractional order \(\nu \) is defined as [21]

$$\begin{aligned} I^{\nu }f(x) = \left\{ \begin{array}{ll} \frac{1}{\varGamma ( \nu )} \int _{0}^{x} (x-t)^{ \nu -1}f (t) dt = \frac{1}{\varGamma (\nu )} x^{ \nu -1} * f(x) &{}\quad \nu >0,\\ f(x) &{}\quad \nu =0, \end{array}\right. \end{aligned}$$

where \(x^{ \nu -1} * f(x)\) is the convolution product of \(x^{\nu -1}\) and f(x).

For the Riemann–Liouville fractional integral, we have [21]

$$\begin{aligned} I^{\nu } x^{n} = \frac{\varGamma (n+1)}{\varGamma (n+1+ \nu )} x^{\nu + n},\;\;\;n> -1, \end{aligned}$$
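
For illustration only (this numerical check is not part of the derivation; it assumes SciPy is available, and the weight='alg' option of scipy.integrate.quad is used to handle the weakly singular kernel \((x-t)^{\nu -1}\)), the power rule above can be verified as follows:

```python
import math
from scipy.integrate import quad

def rl_integral(f, x, nu):
    """Riemann-Liouville integral I^nu f(x) = (1/Gamma(nu)) int_0^x (x-t)^(nu-1) f(t) dt."""
    # weight='alg' integrates f(t) * (t-0)^0 * (x-t)^(nu-1) accurately near t = x
    val, _ = quad(f, 0.0, x, weight='alg', wvar=(0.0, nu - 1.0))
    return val / math.gamma(nu)

nu, n, x = 0.5, 2, 0.7
numeric = rl_integral(lambda t: t**n, x, nu)
exact = math.gamma(n + 1) / math.gamma(n + 1 + nu) * x**(n + nu)
print(numeric, exact)   # the two values agree to quadrature accuracy
```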

Definition 2

Caputo’s fractional derivative of order \(\nu \) is defined as [21]

$$\begin{aligned} D^ \nu f(x) = \frac{1}{\varGamma (m- \nu )}\int _{0}^{x} \frac{f^{(m)}(t)}{(x-t)^{\nu -m+1}} d t, \end{aligned}$$

for \(m-1 < \nu \le m,\;m \in N\), \(x > 0\). For the Caputo derivative we have [33]:

$$\begin{aligned} D^ \nu x^k = \left\{ \begin{array}{ll} 0,&{}\quad k \in N_{0},\; k < \lceil \nu \rceil ,\\ \frac{\varGamma (k+1)}{\varGamma (k- \nu +1)} x^{k- \nu }, &{}\quad \textit{otherwise} \end{array}\right. \end{aligned}$$

and \(D^ \nu \lambda =0,\) where \(\lambda \) is a constant.

Also, this derivative can be expressed using the Riemann–Liouville integration as

$$\begin{aligned} D^{\nu }f(x) = \left\{ \begin{array}{ll} I^{m- \nu }f^{(m)}(x), &{}\quad m-1 < \nu \le m,\;m \in N, \\ \frac{d^{m}f(x)}{dx^{m}}, &{}\quad \nu = m \end{array}\right. \end{aligned}$$
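
In the same illustrative spirit (again assuming SciPy), the representation \(D^{\nu }f = I^{m-\nu }f^{(m)}\) can be checked against the power rule for \(D^{\nu }x^{k}\):

```python
import math
from scipy.integrate import quad

def caputo(f_m, x, nu, m):
    """Caputo derivative D^nu f(x) = I^(m-nu) f^(m)(x), with f_m the m-th derivative of f."""
    # the algebraic weight handles the kernel (x-t)^(m-nu-1)
    val, _ = quad(f_m, 0.0, x, weight='alg', wvar=(0.0, m - nu - 1.0))
    return val / math.gamma(m - nu)

nu, k, x, m = 0.5, 2, 0.8, 1
numeric = caputo(lambda t: 2.0 * t, x, nu, m)                     # f(t) = t^2, f'(t) = 2t
exact = math.gamma(k + 1) / math.gamma(k - nu + 1) * x**(k - nu)  # power rule
print(numeric, exact)
```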

For Caputo’s derivative, the following properties are established:

$$\begin{aligned}&(D^{\nu } I^{\nu } f)(x) = f(x),\nonumber \\&(I^{\nu }D^{\nu }f)(x) = f(x) - \sum _{i=0}^{\lceil \nu \rceil -1} f^{(i)}(0) \frac{x^{i}}{i!}. \end{aligned}$$
(2.1)

Definition 3

(Generalized Taylor’s formula) [26] Suppose that \(D^{k \alpha }f(x) \in C (0, 1]\) for \({k=0, 1, \ldots , n+1}\). Then, we have

$$\begin{aligned} f(x) = \sum _{k=0}^{n} \frac{x^{k \alpha }}{\varGamma (k \alpha +1)} D^{k \alpha }f(0^{+}) + \frac{x^{(n+1) \alpha }}{\varGamma ((n+1) \alpha +1)} D^{(n+1) \alpha }f (\xi ), \end{aligned}$$

with \(0 < \xi \le x,\;\forall x \in (0, \;1]\). Also, one has:

$$\begin{aligned} \left| f(x) - \sum _{k=0}^{n} \frac{x^{k \alpha }}{\varGamma (k \alpha +1)} D^{k \alpha }f(0^{+}) \right| \le M_{\alpha } \frac{x^{(n+1) \alpha }}{\varGamma ((n+1) \alpha +1)}, \end{aligned}$$

where \(M_{\alpha } \ge \sup _{\xi \in (0,\;1]} \vert D^{(n+1) \alpha }f (\xi ) \vert \).

3 Fractional-order general Lagrange scaling functions

3.1 Lagrange polynomials

Suppose that the set of nodes is given by \(x_{i},\;i=0, 1, \ldots , n\). For any fixed non-negative integer n, the Lagrange interpolating polynomials are defined as follows:

$$\begin{aligned} L_{i}(x) := \prod _{{\begin{array}{c} j=0 \\ j \ne i \end{array}}}^{n} \frac{(x-x_{j})}{(x_{i} -x_{j})}. \end{aligned}$$

Also, these polynomials are characterized by the Kronecker property

$$\begin{aligned} L_{i}(x_{l}) = \delta _{il}, \end{aligned}$$

where

$$\begin{aligned} \delta _{il} = \left\{ \begin{array}{ll} 1, &{}\quad i=l, \\ 0, &{}\quad i \ne l\end{array}\right. \end{aligned}$$

It should be mentioned that the points \(x_{i}\) are not fixed by any explicit formula; they may be chosen arbitrarily.

Let \(L_i(x),\;i=0, 1, \ldots , n\), be the Lagrange polynomials on the set of nodes \(x_i\). In terms of these nodes, the Lagrange polynomials can be written as [37]

$$\begin{aligned} L_{i}(x) = \sum _{s=0}^{n} \beta _{is} x^{n-s},\quad i=0, 1, \ldots , n. \end{aligned}$$
(3.1)

where

$$\begin{aligned} \beta _{i0}= & {} \frac{1}{\prod _{{\begin{array}{c} j=0 \\ j \ne i \end{array}}}^{n}(x_{i}-x_{j})}\\ \beta _{is}= & {} \frac{(-1)^{s}}{\prod _{{\begin{array}{c} j=0\\ j \ne i \end{array}}}^{n}(x_{i}-x_{j})} \sum _{k_{s}=k_{s-1}+1}^{n} \ldots \sum _{k_{1}=0}^{n-s+1} \prod _{r=1}^{s} x_{k_{r}}, \end{aligned}$$

and \(i \ne k_{1} \ne \cdots \ne k_{s},\;s=1, 2, \ldots , n\).
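
Up to the sign \((-1)^{s}\) and the common denominator, the \(\beta _{is}\) are the elementary symmetric functions of the nodes \(x_{j}\), \(j \ne i\). The following sketch (an illustrative check, assuming NumPy) compares this symmetric-sum formula with the coefficients obtained by expanding the product \(\prod _{j \ne i}(x-x_{j})\) directly:

```python
import numpy as np
from itertools import combinations

nodes = np.array([0.1, 0.4, 0.7, 1.0]); n = 3; i = 1
others = np.delete(nodes, i)
denom = np.prod(nodes[i] - others)

# beta_{is} from expanding prod_{j != i}(x - x_j): np.poly gives the coefficients,
# highest power first, so index s corresponds to x^{n-s}
beta_poly = np.poly(others) / denom

# beta_{is} from the formula above: (-1)^s times the s-th elementary symmetric sum
beta_sym = np.array([(-1)**s * sum(np.prod(c) for c in combinations(others, s))
                     for s in range(n + 1)]) / denom
print(np.max(np.abs(beta_poly - beta_sym)))   # both routes give the same coefficients
```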

Lemma 1

Let \(L_{i}(x),\;i=0, 1,\ldots , n\), be Lagrange polynomials. Then these polynomials satisfy the following formula:

$$\begin{aligned} \int _{0}^{1} L_{i}(x)L_{j}(x)dx= \sum _{s_{1}=0}^{n} \sum _{s_{2}=0}^{n} \frac{ \beta _{is_{1}} \beta _{js_{2}}}{2n-s_{1}-s_{2}+1} \end{aligned}$$
(3.2)

Proof

Using Eq. (3.1), we have

$$\begin{aligned} \int _{0}^{1} L_{i}(x)L_{j}(x)dx= & {} \sum _{s_{1}=0}^{n} \sum _{s_{2}=0}^{n} \beta _{is_{1}} \beta _{js_{2}} \int _{0}^{1}x^{n-s_{1}}x^{n-s_{2}} dx\\= & {} \sum _{s_{1}=0}^{n} \sum _{s_{2}=0}^{n} \frac{ \beta _{is_{1}} \beta _{js_{2}}}{2n-s_{1}-s_{2}+1}. \end{aligned}$$

\(\square \)
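
A quick numerical check of this lemma (illustrative only, assuming NumPy) compares the right-hand side of Eq. (3.2) with a direct quadrature of \(L_{i}L_{j}\) on \([0,1]\):

```python
import numpy as np

nodes = np.array([0.0, 0.5, 1.0]); n = 2          # example nodes

def beta_row(i):
    # coefficients beta_{i0},...,beta_{in} of L_i(x) = sum_s beta_{is} x^{n-s}
    others = np.delete(nodes, i)
    return np.poly(others) / np.prod(nodes[i] - others)

beta = np.array([beta_row(i) for i in range(n + 1)])
s = np.arange(n + 1)

def rhs(i, j):
    # right-hand side of Eq. (3.2)
    return np.sum(np.outer(beta[i], beta[j]) / (2 * n - np.add.outer(s, s) + 1))

xs = np.linspace(0.0, 1.0, 20001)
direct = np.trapz(np.polyval(beta[0], xs) * np.polyval(beta[1], xs), xs)
print(rhs(0, 1), direct)    # the two values agree
```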

3.2 Fractional-order Lagrange polynomials

The fractional-order Lagrange polynomials are defined as follows [37]

$$\begin{aligned} L_{i}^{\alpha }(x) = \sum _{s=0}^{n} \beta _{is} x^{\alpha (n-s)},\;\;i=0, 1, 2, \ldots , n. \end{aligned}$$
(3.3)

where \(0< \alpha \le 1\) and

$$\begin{aligned} \beta _{i0}= & {} \frac{1}{\prod _{{\begin{array}{c} j=0 \\ j \ne i \end{array}}}^{n}(x_{i}-x_{j})},\\ \beta _{is}= & {} \frac{(-1)^{s}}{\prod _{{\begin{array}{c} j=0 \\ j \ne i \end{array}}}^{n}(x_{i}-x_{j})} \sum _{k_{s}=k_{s-1}+1}^{n} \ldots \sum _{k_{1}=0}^{n-s+1} \prod _{r=1}^{s} x_{k_{r}}, \end{aligned}$$

and \(i \ne k_{1} \ne \cdots \ne k_{s},\;s=1, 2, \ldots , n\).

These fractional functions can be derived on arbitrary nodal points, so we have different choices of Lagrange polynomials. For example, the fractional-order Lagrange polynomials for \(n=2,\;x_{i} = \frac{i}{n}\) are as follows:

$$\begin{aligned} L_{0}^{\alpha }(x)= 1-3x^{\alpha }+2x^{2 \alpha },\;\;L_{1}^{\alpha }(x)=4x^{\alpha }-4x^{2 \alpha },\;\;L_{2}^{\alpha }(x)=-x^{\alpha }+2x^{2 \alpha }. \end{aligned}$$

Also, when the \(x_{i}\) are the zeros of the shifted Legendre polynomial of degree \(n+1\) and \(n=2\), these polynomials are

$$\begin{aligned} L_{0}^{\alpha }(x)= & {} 1.47883 -4.62433x^{\alpha }+3.33333x^{2 \alpha },\\ L_{1}^{\alpha }(x)= & {} -0.666667+6.66667x^{\alpha }-6.66667x^{2 \alpha },\\ L_{2}^{\alpha }(x)= & {} 0.187836 - 2.04234 x^{\alpha }+3.33333x^{2 \alpha }. \end{aligned}$$
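
The following sketch (illustrative, assuming NumPy) evaluates \(L_{i}^{\alpha }(x)=L_{i}(x^{\alpha })\) directly from the nodes and reproduces both sets of polynomials above:

```python
import numpy as np

def flp(i, nodes, alpha, x):
    # fractional-order Lagrange polynomial L_i^alpha(x) = L_i(x^alpha)
    others = np.delete(nodes, i)
    return np.prod((x**alpha - others) / (nodes[i] - others))

alpha, x = 0.5, 0.3

# equispaced nodes x_i = i/n with n = 2: compare with 1 - 3 x^alpha + 2 x^{2 alpha}
nodes = np.array([0.0, 0.5, 1.0])
print(flp(0, nodes, alpha, x), 1 - 3*x**alpha + 2*x**(2*alpha))

# nodes at the zeros of the shifted Legendre polynomial of degree 3
leg_nodes = 0.5 * (np.polynomial.legendre.leggauss(3)[0] + 1.0)
print(flp(0, leg_nodes, alpha, x),
      1.47883 - 4.62433*x**alpha + 3.33333*x**(2*alpha))
```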

3.3 Fractional-order general Lagrange scaling functions

3.3.1 General Lagrange scaling functions

Now, we define the general Lagrange scaling functions (GLSFs). These functions are introduced for arbitrary nodes of the Lagrange polynomials.

General Lagrange scaling functions are as follows:

$$\begin{aligned} \psi _{ji}(x)= \left\{ \begin{array}{ll} 2^{\frac{k-1}{2}}\tilde{L}_{i}(2^{k-1}x-\tilde{j}),&{}\quad \frac{\tilde{j}}{2^{k-1}} \le x < \frac{\tilde{j}+1}{2^{k-1}}, \\ 0, &{}\quad otherwise \end{array} \right. \end{aligned}$$
(3.4)

with

$$\begin{aligned} \tilde{L}_{i}(x)= \frac{1}{\sqrt{w_{i}}}L_{i}(x), \end{aligned}$$
(3.5)

where the \(w_{i}\) are obtained from Eq. (3.2) (taking \(j=i\)) and \(\tilde{j}=j-1,\;j=1, 2,\ldots , 2^{k-1},\; i=0, 1,\ldots , n\).

It is worth mentioning that if we take the \(x_{i}\) to be the roots of Legendre polynomials, we obtain a special case of GLSFs called interpolation scaling functions [8, 14, 35].

3.3.2 Fractional-order general Lagrange scaling functions

In the following, we introduce a new set of fractional-order basis functions, called fractional-order general Lagrange scaling functions and denoted by \(\psi _{ji}^{\alpha }(x)\). These basis functions are constructed using FLPs and GLSFs.

Using Eqs. (3.4) and (3.5), \(\psi _{ji}^{\alpha }(x)\) is given as follows

$$\begin{aligned} \psi ^{\alpha }_{ji}(x)= \left\{ \begin{array}{ll} 2^{\frac{k-1}{2}}\tilde{L}_{i}(2^{k-1}x^{\alpha }-\tilde{j}),&{}\quad \frac{\tilde{j}}{2^{k-1}} \le x^{\alpha } < \frac{\tilde{j}+1}{2^{k-1}}, \\ 0, &{}\quad \textit{otherwise} \end{array} \right. \end{aligned}$$
(3.6)

with

$$\begin{aligned} \tilde{L}_{i}(2^{k-1}x^{\alpha }-\tilde{j})= \frac{1}{\sqrt{w_{i}}}L_{i}(2^{k-1}x^{\alpha }-\tilde{j}), \end{aligned}$$
(3.7)

where \(\tilde{j}=j-1,\;j=1, 2, \ldots , 2^{k-1},\;i=0, 1, \ldots , n\). For example, for \(x_{i}=\frac{i}{n},\;k=2,\;n=1\), we have

For \(0 \le x^{\alpha } < \frac{1}{2}\)

$$\begin{aligned} \begin{array}{ll} \psi ^{\alpha }_{10}(x) = \sqrt{2}\tilde{L}_{0}(2x^{\alpha }) = \sqrt{6}(1-2x^{\alpha }), \\ \psi ^{\alpha }_{11}(x) = \sqrt{2}\tilde{L}_{1}(2x^{\alpha }) = \sqrt{6}(2x^{\alpha }), \end{array} \end{aligned}$$

for \( \frac{1}{2} \le x^{\alpha } <1\)

$$\begin{aligned} \begin{array}{ll} \psi ^{\alpha }_{20}(x) = \sqrt{2}\tilde{L}_{0}(2x^{\alpha }-1) = \sqrt{6}(2-2x^{\alpha }), \\ \psi ^{\alpha }_{21}(x) = \sqrt{2}\tilde{L}_{1}(2x^{\alpha }-1) = \sqrt{6}(2x^{\alpha }-1). \end{array} \end{aligned}$$
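
For illustration, a short Python sketch (assuming NumPy, and taking \(w_{i}=\int _{0}^{1}L_{i}^{2}(x)dx\) as the normalisation) evaluates \(\psi ^{\alpha }_{ji}(x)\) for arbitrary nodes and reproduces \(\psi ^{\alpha }_{10}(x)=\sqrt{6}(1-2x^{\alpha })\):

```python
import numpy as np

def fglsf_factory(nodes, k, alpha):
    """Return a callable psi(j, i, x) for the FGLSFs of Eq. (3.6)."""
    n = len(nodes) - 1
    def L(i, y):                                   # ordinary Lagrange polynomial L_i(y)
        others = np.delete(nodes, i)
        y = np.atleast_1d(np.asarray(y, float))
        return np.prod((y[:, None] - others) / (nodes[i] - others), axis=1)
    xs = np.linspace(0.0, 1.0, 4001)
    w = [np.trapz(L(i, xs)**2, xs) for i in range(n + 1)]   # w_i = int_0^1 L_i^2 dx (assumed)

    def psi(j, i, x):
        jt = j - 1
        y = 2**(k - 1) * np.asarray(x, float)**alpha - jt
        inside = (y >= 0.0) & (y < 1.0)            # i.e. jt/2^{k-1} <= x^alpha < (jt+1)/2^{k-1}
        return np.where(inside, 2**((k - 1) / 2) / np.sqrt(w[i]) * L(i, y), 0.0)
    return psi

# k = 2, n = 1, equispaced nodes: psi^alpha_{10}(x) = sqrt(6) (1 - 2 x^alpha)
psi = fglsf_factory(np.array([0.0, 1.0]), k=2, alpha=0.5)
x = np.array([0.01, 0.09, 0.2])
print(psi(1, 0, x))
print(np.sqrt(6) * (1 - 2 * x**0.5))
```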

Moreover, Figs. 1 and 2 show graphs of FGLSFs for \(n=2,\;k=2,x_{i} =\frac{i}{n}\) and various values of \(\alpha \).

Fig. 1 Graph of the FGLSFs with \(n=2,\;k=2,\;\alpha =\frac{1}{2}\)

Fig. 2 Graph of the FGLSFs with \(n=2,\;k=2,\;\alpha =\frac{1}{4}\)

3.4 Function approximation

A function f defined on \([0,\;1)\) can be expanded in terms of the fractional-order general Lagrange scaling functions as

$$\begin{aligned} f(x) \simeq \sum _{j=1}^{2^{k-1}} \sum _{i=0}^{n} c_{ji}\psi _{ji}^{\alpha }(x) = C^{T}\varPsi ^{\alpha }(x), \end{aligned}$$

where C and \(\varPsi ^{\alpha }(x)\) are \(2^{k-1}(n+1) \times 1\) vectors given by

$$\begin{aligned} C= & {} [c_{10}, c_{11}, \ldots , c_{1n}, c_{20}, c_{21}, \ldots , c_{2n}, \ldots , c_{2^{k-1}0}, \ldots , c_{2^{k-1}n}]^{T}\\= & {} [c_{0}, c_1, \ldots , c_{n}, c_{n+1}, \ldots , c_{2^{k-1}n}]^{T} \end{aligned}$$

and T indicates transposition.

$$\begin{aligned} \varPsi ^{\alpha }(x)= & {} [\psi _{10}^{\alpha }(x), \ldots , \psi _{1n}^{\alpha }(x), \psi _{20}^{\alpha }(x), \ldots , \psi _{2n}^{\alpha }(x), \ldots , \psi _{2^{k-1}0}^{\alpha }(x), \ldots , \psi _{2^{k-1}n}^{\alpha }(x)]^{T} \nonumber \\= & {} [\psi _{0}^{\alpha }(x), \psi _{1}^{\alpha }(x), \ldots , \psi _{n}^{\alpha }(x), \psi _{n+1}^{\alpha }(x), \ldots , \psi _{2^{k-1}n}^{\alpha }(x)]^{T}. \end{aligned}$$
(3.8)

The coefficient vector C can be obtained as follows

$$\begin{aligned} C^{T} = F^{T} D^{-1}, \end{aligned}$$

where

$$\begin{aligned} D= & {} \langle \varPsi ^{\alpha },\;\varPsi ^{\alpha } \rangle = \int _{0}^{1} \varPsi ^{\alpha }(x)\varPsi ^{\alpha T}(x)x^{\alpha -1} dx,\\ F= & {} [f_{10}, f_{11}, \ldots , f_{1n}, f_{20}, f_{21}, \ldots , f_{2n}, \ldots , f_{2^{k-1}0}, \ldots , f_{2^{k-1}n}]^{T}, \end{aligned}$$

and

$$\begin{aligned} f_{ji} = \langle f, \psi _{ji}^{\alpha } \rangle = \int _{0}^{1} f(x) \psi _{ji}^{\alpha }(x)x^{\alpha -1} dx,\;\;j=1, \ldots ,\;2^{k-1},\;\;\;i=0, 1, \ldots , n. \end{aligned}$$
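
These formulas can be evaluated directly. The sketch below (illustrative, assuming NumPy; restricted to \(k=1\) for brevity, so that \(\psi _{i}^{\alpha }(x)=L_{i}(x^{\alpha })/\sqrt{w_{i}}\)) uses the substitution \(u=x^{\alpha }\), under which the weighted inner products become ordinary polynomial integrals, to build D and F and recover C:

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 4); n = 3; alpha = 0.5     # k = 1, equispaced nodes
f = lambda x: np.sqrt(x) + x            # non-smooth at 0 in x, but polynomial in u = x^alpha

def beta_row(i):
    others = np.delete(nodes, i)
    return np.poly(others) / np.prod(nodes[i] - others)

beta = np.array([beta_row(i) for i in range(n + 1)])
s = np.arange(n + 1)
w = np.array([np.sum(np.outer(beta[i], beta[i]) / (2*n - np.add.outer(s, s) + 1))
              for i in range(n + 1)])
c = beta / np.sqrt(w)[:, None]          # psi_i(x) = sum_s c[i,s] x^{alpha (n-s)}

# D_{ij} = <psi_i, psi_j> = (1/alpha) int_0^1 phi_i(u) phi_j(u) du, phi_i(u) = psi_i(u^{1/alpha})
D = np.einsum('ia,jb,ab->ij', c, c, 1.0 / (2*n - np.add.outer(s, s) + 1)) / alpha
# F_i = <f, psi_i> = (1/alpha) int_0^1 f(u^{1/alpha}) phi_i(u) du, by quadrature
u = np.linspace(0.0, 1.0, 4001)
Phi = np.array([np.polyval(c[i], u) for i in range(n + 1)])
F = np.trapz(f(u ** (1.0 / alpha)) * Phi, u, axis=1) / alpha
C = np.linalg.solve(D, F)

x = np.linspace(0.05, 1.0, 5)
approx = C @ np.array([np.polyval(c[i], x**alpha) for i in range(n + 1)])
print(np.abs(approx - f(x)))            # small: sqrt(x) + x lies in the span for alpha = 1/2
```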

4 Operational matrices of delay and fractional integration

In this section, we derive the FGLSFs operational matrices of delay and fractional integration. We obtain these matrices directly, without transformation to FLPs.

4.1 The fractional integration operational matrix of FGLSFs

The fractional integration operator of order \(\nu >0\) of the vector \(\varPsi ^{\alpha }(x) \) can be expressed by

$$\begin{aligned} I^{\nu }\varPsi ^{\alpha }(x) \simeq P^{(\nu ,\;\alpha )} \varPsi ^{\alpha }(x), \end{aligned}$$

Using the definition of the operator \(I^{\nu }\), we have

$$\begin{aligned} I^{\nu }\psi _{ji}^{\alpha }(x) = \frac{1}{\varGamma (\nu )}x^{\nu -1} * \psi _{ji}^{\alpha }(x),\;\;i=0, 1, \ldots , n,\;\;\;j=1, 2, \ldots , 2^{k-1}. \end{aligned}$$
(4.1)

Now, by taking the Laplace transform of both sides of Eq. (4.1), we obtain

$$\begin{aligned} L\left[ I^{\nu }\psi _{ji}^{\alpha }(x)\right] = L\left[ \frac{1}{\varGamma (\nu )}x^{\nu -1}\right] L[ \psi ^{\alpha }_{ji}(x)],\;\;\;i=0, 1, \ldots , n\;\;\;j=1, 2, \ldots , 2^{k-1}. \end{aligned}$$
(4.2)

where

$$\begin{aligned} L\left[ \frac{1}{\varGamma (\nu )}x^{\nu -1}\right] = r^{-\nu } \end{aligned}$$
(4.3)

Also, for \(\psi _{ji}^{\alpha }(x)\), we have

$$\begin{aligned} \psi _{ji}^{\alpha }(x)= & {} 2^{\frac{k-1}{2}} \biggl ( \mu _{\left( \frac{\tilde{j}}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x) \tilde{L}_{i}(2^{k-1}x^{\alpha }-\tilde{j})-\mu _{\left( \frac{\tilde{j}+1}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x)\tilde{L}_{i}(2^{k-1}x^{\alpha }-\tilde{j}) \biggl ) \nonumber \\= & {} \frac{2^{\frac{k-1}{2}}}{\sqrt{\omega _{i}}} L_{i}(2^{k-1}x^{\alpha }-\tilde{j}) \biggl ( \mu _{\left( \frac{\tilde{j}}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x) -\mu _{\left( \frac{\tilde{j}+1}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x)\biggl ) \nonumber \\= & {} \frac{2^{\frac{k-1}{2}}}{\sqrt{\omega _{i}}} \sum _{s=0}^{n} \beta _{is}(2^{k-1}x^{\alpha }-\tilde{j})^{n-s}\biggl ( \mu _{\left( \frac{\tilde{j}}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x)-\mu _{\left( \frac{\tilde{j}+1}{2^{k-1}}\right) ^{\frac{1}{\alpha }}}(x) \biggl ) \end{aligned}$$
(4.4)

where \(\mu _{c}(x)\) is the unit step function defined as

$$\begin{aligned} \mu _{c}(x) = \left\{ \begin{array}{ll} 1 &{}\quad x \ge c, \\ 0 &{}\quad x<c. \end{array} \right. \end{aligned}$$

Now, for every \(i,\;j,\;\alpha \), Eq. (4.4) expresses \(\psi _{ji}^{\alpha }(x)\) as a known function, so its Laplace transform can be obtained. Hence, according to Eqs. (4.2)–(4.4), the Laplace transform of \(I^{\nu }\psi _{ji}^{\alpha }(x)\) can also be obtained. Taking the inverse Laplace transform of \(L(I^{\nu } \psi _{ji}^{\alpha }(x))\) yields

$$\begin{aligned} I^{\nu }\psi _{ji}^{\alpha }(x) = \tilde{\varphi }^{(\nu ,\alpha )}_{ji}(x). \end{aligned}$$
(4.5)

We can expand \(\tilde{\varphi }^{(\nu ,\alpha )}_{ji}(x)\) in terms of FGLSFs as

$$\begin{aligned} \tilde{\varphi }^{(\nu , \alpha )}_{ji}(x) \simeq \sum \limits _{\tau =1}^{2^{k-1}} \sum \limits _{\rho = 0}^{n} \tilde{c}_{\tau \rho } \psi _{\tau \rho }^{\alpha }(x)= \tilde{C}_{j,\;i}^{T} \varPsi ^{\alpha }(x). \end{aligned}$$
(4.6)

Therefore, we have

$$\begin{aligned} P^{(\nu , \alpha )}= [\tilde{C}_{j,\;i}],\;\;\;j=1, \ldots , 2^{k-1},\;\;i=0, 1, \ldots , n. \end{aligned}$$

For example, for \(k = 2,n = 2\) and \(\alpha =\nu = 1,\;x_{i} = \frac{i}{n}\), the FGLSFs operational matrix can be expressed as

$$\begin{aligned} P^{(1,1)} =\left[ {\begin{array}{ccccccc} 0.0166667 &{} 0.208333 &{} 0.0666667 &{}0&{}0 &{}0 \\ -0.0166667 &{} 0.166667 &{} 0.183333 &{}0&{} 0 &{}0\\ 0.0166667 &{} -0.0416667 &{} 0.0666667 &{} 0 &{} 0 &{}0\\ 0 &{}0 &{}0&{} 1.6 &{} 3.375 &{} 1.65 \\ 0 &{} 0 &{}0 &{} -0.85 &{} -1.5 &{} -0.65\\ 0 &{} 0 &{}0 &{} 0.6 &{} 1.125 &{} 0.65\\ \end{array}} \right] . \end{aligned}$$
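
As an alternative to the Laplace-transform route, the coefficients \(\tilde{C}_{j,i}\) can also be computed by a direct projection with the weighted inner product of Sect. 3.4. The sketch below (illustrative, assuming NumPy; \(k=1\), so every integrand is a power function with a closed-form integral) builds \(P^{(\nu ,\alpha )}\) in this way and checks it at a sample point:

```python
import numpy as np
from math import gamma

nodes = np.array([0.0, 0.5, 1.0]); n = 2; alpha = 0.5; nu = 0.5   # k = 1 for brevity

def beta_row(i):
    others = np.delete(nodes, i)
    return np.poly(others) / np.prod(nodes[i] - others)

beta = np.array([beta_row(i) for i in range(n + 1)])
s = np.arange(n + 1)
w = np.array([np.sum(np.outer(beta[i], beta[i]) / (2*n - np.add.outer(s, s) + 1))
              for i in range(n + 1)])
c = beta / np.sqrt(w)[:, None]                 # psi_i(x) = sum_s c[i,s] x^{alpha (n-s)}
p = alpha * (n - s)                            # fractional exponents alpha (n-s)
g = np.array([gamma(q + 1) / gamma(q + 1 + nu) for q in p])   # I^nu x^q = g_q x^{q+nu}

# <psi_i, psi_j> and <I^nu psi_i, psi_j> with weight x^{alpha-1}: closed forms,
# since int_0^1 x^a x^b x^{alpha-1} dx = 1/(a + b + alpha)
D = np.einsum('ia,jb,ab->ij', c, c, 1.0 / (np.add.outer(p, p) + alpha))
F = np.einsum('ia,jb,ab->ij', c * g, c, 1.0 / (np.add.outer(p + nu, p) + alpha))
P = F @ np.linalg.inv(D)                       # row i holds the expansion of I^nu psi_i

# sanity check at a point: P Psi(x) approximates I^nu Psi(x)
x = 0.6
Psi_x  = c @ x**p
IPsi_x = (c * g) @ x**(p + nu)
print(np.max(np.abs(P @ Psi_x - IPsi_x)))      # small but nonzero projection error
```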

4.2 Delay operational matrix of FGLSFs

In what follows, we obtain the delay operational matrix of the FGLSFs.

Using Eq. (3.1), the vector of Lagrange polynomials can be written as

$$\begin{aligned} L(x) = \varLambda T_{n}(x), \end{aligned}$$
(4.7)

where

$$\begin{aligned} T_{n}(x)=[1, x, x^2, \ldots , x^n]^{T},\quad L(x)=[L_{0}(x), L_{1}(x), \ldots , L_{n}(x)]^{T} \end{aligned}$$

and \(\varLambda =(\gamma _{i,j})_{i,j=0}^{n}\) is a matrix of order \((n + 1) \times (n + 1)\) with \(\gamma _{i,j} = \beta _{i,n-j}\).

Also, for Taylor polynomials, we have [31]

$$\begin{aligned} T_{n}(x- \xi ) = \theta (\xi ) T_{n}(x) \end{aligned}$$

where \(\theta (\xi )\) is the following matrix

$$\begin{aligned} \theta (\xi )=\left[ {\begin{array}{lllll} 1 &{} 0 &{} \ldots &{}0 \\ - \xi &{} 1&{} \ldots &{}0\\ (-\xi )^{2} &{}-2 \xi &{} \ldots &{}0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ (- \xi )^{n} &{} \left( \begin{array}{ll} \quad n \\ n-1 \end{array}\right) (-\xi )^{n-1} &{} \ldots &{}1\\ \end{array}} \right] . \end{aligned}$$

Using Eq. (4.7), we obtain

$$\begin{aligned} L(x- \xi ) = \varLambda \theta ( \xi ) T_{n}(x) = \varLambda \theta (\xi ) \varLambda ^{-1} L(x) \end{aligned}$$

In addition, we can write

$$\begin{aligned} L(x^{\alpha } - \xi )= \varLambda \theta (\xi ) \varLambda ^{-1} L(x^{\alpha }) \end{aligned}$$

and

$$\begin{aligned} L^{\alpha }(x - \xi )= \varLambda \theta (\xi ) \varLambda ^{-1} L^{\alpha }(x), \end{aligned}$$

where, \(L^{\alpha }(x)= [L_{0}^{\alpha }(x), L_{1}^{\alpha }(x), \ldots , L_{n}^{\alpha }(x)]^{T}\).

Let \(M_{\xi } =\varLambda \theta (\xi ) \varLambda ^{-1}\) and

$$\begin{aligned} W=\left[ {\begin{array}{cccc} \frac{1}{\sqrt{\omega _{0}}} &{}0&{}\ldots &{}0 \\ 0 &{} \frac{1}{\sqrt{\omega _{1}}} &{} \ldots &{}0\\ \vdots &{}\vdots &{}\ddots &{} \vdots \\ 0 &{} 0&{}\ldots &{}\frac{1}{\sqrt{\omega _{n}}} \\ \end{array}} \right] , \end{aligned}$$

Moreover, we know that

$$\begin{aligned} L(2^{k-1}(x^{\alpha }- \xi ) - \tilde{j} )= M_{2^{k-1}\xi } L(2^{k-1}x^{\alpha }- \tilde{j}) \end{aligned}$$

Then, we achieve

$$\begin{aligned} \varPsi ^{\alpha }(x-\xi ) = \varOmega _{\xi } \varPsi ^{\alpha }(x), \end{aligned}$$

where

$$\begin{aligned} \varOmega _{\xi }= diag[\underbrace{\tilde{ \varOmega }_{\xi },\; \tilde{\varOmega }_{\xi },\; \ldots ,\; \tilde{\varOmega }_{\xi }}_{2^{k-1}}] \end{aligned}$$

and

$$\begin{aligned} \tilde{\varOmega }_{\xi } = W M_{2^{k-1}\xi } W^{-1} \end{aligned}$$

\(\varOmega _{\xi }\) is the delay operational matrix of the FGLSFs.
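
A short sketch of this construction (illustrative, assuming NumPy; \(n=2\) with equispaced nodes) builds \(\varLambda \), \(\theta (\xi )\) and one diagonal block \(\tilde{\varOmega }_{\xi }=WM_{2^{k-1}\xi }W^{-1}\), and checks the shift identity \(L(x-\xi )=\varLambda \theta (\xi )\varLambda ^{-1}L(x)\) numerically:

```python
import numpy as np
from math import comb

nodes = np.array([0.0, 0.5, 1.0]); n = 2; xi = 0.1; k = 1

def beta_row(i):
    others = np.delete(nodes, i)
    return np.poly(others) / np.prod(nodes[i] - others)     # beta_{is}, highest power first

beta = np.array([beta_row(i) for i in range(n + 1)])
Lam = beta[:, ::-1]               # gamma_{ij} = beta_{i,n-j}, so L(x) = Lam @ [1, x, ..., x^n]^T

def theta(t):
    # T_n(x - t) = theta(t) T_n(x): theta[i, j] = C(i, j) (-t)^{i-j} for j <= i
    T = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i + 1):
            T[i, j] = comb(i, j) * (-t)**(i - j)
    return T

M = Lam @ theta(xi) @ np.linalg.inv(Lam)          # L(x - xi) = M L(x)

# numerical check of the shift identity at one point
x = 0.73
Tn = lambda t: t ** np.arange(n + 1)
print(Lam @ Tn(x - xi))            # L(x - xi) evaluated directly
print(M @ (Lam @ Tn(x)))           # via the delay matrix

# one diagonal block of Omega_xi: W M_{2^{k-1} xi} W^{-1}
s = np.arange(n + 1)
w = np.array([np.sum(np.outer(beta[i], beta[i]) / (2*n - np.add.outer(s, s) + 1))
              for i in range(n + 1)])
W = np.diag(1.0 / np.sqrt(w))
Omega_block = W @ (Lam @ theta(2**(k - 1) * xi) @ np.linalg.inv(Lam)) @ np.linalg.inv(W)
print(Omega_block)
```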

5 Numerical method

The matrices presented in the previous section were obtained in a general form, so different choices of the nodes of the Lagrange polynomials are possible.

In this paper, we choose \(x_{i}=\frac{i}{n}\) as the nodes of the Lagrange polynomials. Therefore, we have a set of non-orthogonal polynomials.

We consider the following problems:

Problem 1

Fractional differential equations

$$\begin{aligned} \left\{ \begin{array}{ll} F(x, y(x), D^{\nu }y(x))=0,&{}\quad 0 \le x \le 1,\;\;m-1 < \nu \le m,\\ y^{(i)}(0)= \lambda _{i},&{}\quad i=0, 1, \ldots , m-1, \end{array} \right. \end{aligned}$$
(5.1)

Problem 2

Fractional delay differential equations

$$\begin{aligned} \left\{ \begin{array}{ll} F(x, y(x), y(x-\xi ), D^{\nu }y(x))=0,&{}\quad 0 \le x \le 1,\;\;m-1< \nu \le m,\;0< \xi<1,\\ y^{(i)}(0)= \lambda _{i}, &{}\quad i=0, 1, \ldots , m-1,\;\;m \in N \\ y(x)=\varPhi (x), &{}\quad x <0. \end{array} \right. \end{aligned}$$
(5.2)

We approximate \(D^{\nu }y(x)\) in these problems by the FGLSFs as

$$\begin{aligned} D^{\nu }y(x) \simeq C^{T}\varPsi ^{\alpha }(x), \end{aligned}$$
(5.3)

so, using Eq. (2.1), the operational matrix of fractional integration, and the initial conditions of Problems 1 and 2, we get

$$\begin{aligned} y(x) \simeq I^{\nu } (C^{T} \varPsi ^{\alpha }(x)) +\sum _{k=0}^{m-1} \frac{x^{k}}{k!} \lambda _{k} \simeq C^{T} P^{(\nu ,\alpha )} \varPsi ^{\alpha }(x) + E^{T} \varPsi ^{\alpha }(x) \end{aligned}$$
(5.4)

where

$$\begin{aligned} E(x)= & {} \sum _{k=0}^{m-1} \frac{x^{k}}{k!} y^{(k)}(0)=\sum _{k=0}^{m-1} \frac{x^{k}}{k!}\lambda _{k},\\ E(x)\simeq & {} E^{T}\varPsi ^{\alpha }(x). \end{aligned}$$

For Problem 1, substituting Eqs. (5.3) and (5.4) into Eq. (5.1) gives an algebraic equation with \(2^{k-1}(n+1)\) unknowns. Collocating this equation at \(x_{p}=\frac{p}{n2^{k-1}},\;{p=0, 1, 2, \ldots , n2^{k-1}}\), yields a system of algebraic equations, which can be solved for the unknown vector C by Newton's iterative method.

For Problem 2, using Eq. (5.4), we get

$$\begin{aligned} y(x- \xi ) \simeq \left\{ \begin{array}{ll} A^T \varPsi ^{\alpha }(x) &{}\quad 0 \le x \le \xi , \\ C^{T} P^{(\nu ,\alpha )} \varOmega _{\xi } \varPsi ^{\alpha }(x) + E^{T} \varOmega _{\xi } \varPsi ^{\alpha }(x) &{}\quad \xi < x \le 1 \end{array} \right. \end{aligned}$$
(5.5)

where, \(\varPhi (x- \xi ) \simeq A^T \varPsi ^{\alpha }(x)\).

Now, substituting Eqs. (5.3), (5.4) and (5.5) into Eq. (5.2), we get an algebraic equation. Then, using the collocation method and Newton's iterative method, this problem can be solved.

6 Error analysis

Theorem 1

Let \(D^{i \alpha }f \in C(0, 1]\) for \(i=0, 1, \ldots , n+1\), suppose \((2n+2)\alpha + \alpha \ge 1\), and set \(\widehat{n}=2^{k-1}(n+1)\) and \(Y_{n}^{\alpha }=span \{L_{0}^{\alpha }(x), L_{1}^{\alpha }(x), \ldots , L_{n}^{\alpha }(x) \}\). If \(f_{n}(x) =A^{T} L^{\alpha }(x)\) is the best approximation of f(x) out of \(Y_{n}^{\alpha }\) on each interval \([\frac{j-1}{2^{k-1}}, \frac{j}{2^{k-1}}]\), then for the approximate solution \(f_{\widehat{n}}(x)\) obtained using FGLSFs on [0, 1] we have

$$\begin{aligned} \Vert f- f_{\widehat{n}} \Vert _{2} \le \frac{sup_{x \in [0, 1]} \vert D^{(n+1) \alpha } f(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }}. \end{aligned}$$
(6.1)

Proof

Define

$$\begin{aligned} f_{1}(x)= \sum _{i=0}^{n} \frac{x^{i \alpha }}{\varGamma (i \alpha +1)} D^{i \alpha }f(0^{+}). \end{aligned}$$

Using the generalized Taylor’s formula, we get

$$\begin{aligned} \vert f(x)- f_{1}(x) \vert \le \frac{x^{(n+1) \alpha }}{\varGamma ((n+1)\alpha +1)} sup_{x \in I_{k, j}} \vert D^{(n+1)\alpha } f(x) \vert , \end{aligned}$$

where \(I_{k, j}=[\frac{j-1}{2^{k-1}}, \frac{j}{2^{k-1}}]\).

Given that \(f_{n}(x) =A^{T} L^{\alpha }(x)\) is the best approximation of f(x) out of \(Y_{n}^{\alpha }\) on the interval \(I_{k,\;j}\), and \(f_{1}(x) \in Y_{n}^{\alpha }\), then

$$\begin{aligned} \Vert f- f_{\widehat{n}} \Vert _{L^{2}[0, 1]}^{2}= & {} \Vert f- C^{T} \varPsi ^{\alpha } \Vert _{L^{2}[0, 1]}^{2} = \sum _{j=1}^{2^{k-1}} \Vert f - A^{T} L^{\alpha } \Vert _{L^{2}[\frac{j-1}{2^{k-1}},\;\frac{j}{2^{k-1}}]}^{2} \\\le & {} \sum _{j=1}^{2^{k-1}} \Vert f-f_{1} \Vert _{L^{2}[\frac{j-1}{2^{k-1}},\;\frac{j}{2^{k-1}}]}^{2} \\\le & {} \sum _{j=1}^{2^{k-1}} \int _{I_{k, j}} \biggl [\frac{x^{(n+1)\alpha }}{\varGamma ((n+1) \alpha +1)} sup_{x \in I_{k, j}} \vert D^{(n+1)\alpha }f(x) \vert \biggl ]^{2} x^{\alpha -1} dx \\\le & {} \int _{0}^{1} \biggl [\frac{x^{(n+1)\alpha }}{\varGamma ((n+1) \alpha +1)} sup_{x \in [0, 1]} \vert D^{(n+1)\alpha }f(x) \vert \biggl ]^{2} x^{\alpha -1} dx \\\le & {} \frac{1}{\varGamma (n \alpha + \alpha +1)^{2} ((2n+2) \alpha +\alpha )} (sup_{x \in [0, 1]} \vert D^{(n+1)\alpha }f(x) \vert )^{2}, \end{aligned}$$

by taking the square root, the proof is complete. Therefore, the FGLSF approximations of f(x) are convergent. \(\square \)

Theorem 2

Let H be a Hilbert space and Y a closed subspace of H such that \(dim Y < \infty \), and let \(y_{1}, y_{2}, \ldots , y_{n}\) be a basis for Y. Let z be an arbitrary element of H and \(y^{*}\) the unique best approximation to z out of Y. Then [13]

$$\begin{aligned} \Vert z-y^{*} \Vert _{2}^{2} = \frac{G(z, y_{1}, y_{2}, \ldots , y_{n})}{G(y_{1}, y_{2}, \ldots , y_{n})} \end{aligned}$$

where

$$\begin{aligned} G( x, y_{1}, y_{2}, \ldots , y_{n}) = \left| \begin{array}{cccc} \langle x, x\rangle &{} \langle x, y_{1}\rangle &{} \ldots &{} \langle x, y_{n}\rangle \\ \langle y_{1}, x\rangle &{} \langle y_{1}, y_{1}\rangle &{} \ldots &{} \langle y_{1}, y_{n}\rangle \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ \langle y_{n}, x\rangle &{} \langle y_{n}, y_{1}\rangle &{} \ldots &{} \langle y_{n}, y_{n}\rangle \\ \end{array}\right| . \\ \end{aligned}$$

Lemma 2

Let \(g \in L^{2}[0,\;1]\) be approximated by FLPs as

$$\begin{aligned} g(x) \simeq g_{n}(x)= A^{T}L^{\alpha }(x), \end{aligned}$$

so, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } e_{n}(g) =0. \end{aligned}$$

where

$$\begin{aligned} e_{n}(g)= \int _{0}^{1}[g(x)-g_{n}(x)]^{2} dx. \end{aligned}$$

In the following, we find an upper bound for the error vector of fractional integration operational matrix.

Suppose \(E_{I}^{(\nu )}\) is the error vector of the operational matrix \(P^{(\nu , \alpha )}\). We consider this vector as follows

$$\begin{aligned} E_{I}^{(\nu )} = P^{(\nu , \alpha )}\varPsi ^{\alpha }(x)-I^{\nu }\varPsi ^{\alpha }(x),\quad E_{I}^{(\nu )} =\left[ \begin{array}{l} e_{I0} \\ e_{I1} \\ \vdots \\ e_{In}\\ \end{array}\right] , \end{aligned}$$
(6.2)

From Eq. (4.6), \(\tilde{\varphi }_{ji}^{(\nu , \alpha )}(x)\) is approximated as

$$\begin{aligned} \tilde{\varphi }_{ji}^{(\nu , \alpha )}(x) \simeq \sum _{\tau =1}^{2^{k-1}}\sum _{\rho =0}^{n}\tilde{c}_{\tau \rho } \psi ^{\alpha }_{\tau \rho }(x), \end{aligned}$$

where the coefficients \(\tilde{c}_{\tau \rho }\) are obtained as the best approximation. From Theorem 2, we have:

$$\begin{aligned} \left\| \tilde{\varphi }_{ji}^{(\nu , \alpha )}(x) - \sum _{\tau =1}^{2^{k-1}}\sum _{\rho =0}^{n}\tilde{c}_{\tau \rho } \psi ^{\alpha }_{\tau \rho }(x) \right\| _{2} = \biggl (\frac{G(\tilde{\varphi }_{ji}^{(\nu , \alpha )}(x),\;\psi _{0}^{\alpha }, \psi _{1}^{\alpha }, \ldots , \psi _{2^{k-1}n}^{\alpha })}{G(\psi _{0}^{\alpha }, \psi _{1}^{\alpha }, \ldots , \psi _{2^{k-1}n}^{\alpha })} \biggl )^{\frac{1}{2}}. \end{aligned}$$

Then, from Eqs. (4.2)–(4.6), we achieve

$$\begin{aligned} \Vert e_{Ii} \Vert _{2}= & {} \left\| I^{\nu }\psi _{ji}^{\alpha }(x) - \sum _{\tau =1}^{2^{k-1}}\sum _{\rho =0}^{n}\tilde{c}_{\tau \rho } \psi ^{\alpha }_{\tau \rho }(x) \right\| \\\le & {} \left\| \tilde{\varphi }_{ji}^{(\nu , \alpha )}(x) - \sum _{\tau =1}^{2^{k-1}}\sum _{\rho =0}^{n}\tilde{c}_{\tau \rho } \psi ^{\alpha }_{\tau \rho }(x) \right\| \\\le & {} \biggl (\frac{G(\tilde{\varphi }_{ji}^{(\nu , \alpha )}(x), \psi _{0}^{\alpha }, \psi _{1}^{\alpha }, \ldots , \psi _{2^{k-1}n}^{\alpha })}{G(\psi _{0}^{\alpha },\;\psi _{1}^{\alpha }, \ldots , \psi _{2^{k-1}n}^{\alpha })} \biggl )^{\frac{1}{2}}. \end{aligned}$$

From the above discussion and Theorem 1, we see that as the number of FGLSFs increases, the error vector \(E_{I}^{(\nu )}\) tends to zero.

We can also show the convergence of the present method for Problem 1. For simplicity, we rewrite this problem in the following form:

$$\begin{aligned} D^{\nu }y(x)=a(x)y(x)+b(x),\;\;\;m-1 \le \nu < m, \end{aligned}$$
(6.3)

where \(a(x)\) and \(b(x)\) are known functions.

Theorem 3

Suppose y(x) is the analytic solution of Eq. (6.3). Then the error of our method, \(\Vert E_{y_{\widehat{n}}} \Vert _{2}\), satisfies

$$\begin{aligned} \Vert E_{y_{\widehat{n}}} \Vert _{2}\le & {} \Vert a \Vert _{2} \biggl (C^{T} E_{I}^{\nu } + \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } E(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \biggl ) \\&+ \Vert y_{\widehat{n}} \Vert _{2} \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } a(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }}\\&+ \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } b(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \end{aligned}$$

Proof

We define

$$\begin{aligned} E_{y_{\widehat{n}}}(x)= D^{\nu } y(x)-D^{\nu } y_{\widehat{n}}(x), \end{aligned}$$

then, from Eq. (6.3), we achieve

$$\begin{aligned} E_{y_{\widehat{n}}}(x) = (a(x) y(x)- a_{\widehat{n}}(x) y_{\widehat{n}}(x) )+(b(x)-b_{\widehat{n}}(x)). \end{aligned}$$

Using Eqs. (6.1) and (6.2), we have

$$\begin{aligned} \Vert y-y_{\widehat{n}} \Vert _{2}\le & {} \Vert C^{T}I^{\nu } \varPsi ^{\alpha } - C^{T} P^{(\nu ,\;\alpha )} \varPsi ^{\alpha } \Vert _{2} + \Vert E- E^{T} \varPsi ^{\alpha } \Vert _{2} \\\le & {} C^{T} E_{I}^{\nu } + \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } E(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \end{aligned}$$

Moreover, we have

$$\begin{aligned}&\Vert ay-a y_{\widehat{n}} \Vert _{2} \le \Vert a \Vert _{2} \Vert y- y_{\widehat{n}} \Vert _{2} \nonumber \\&\quad \le \Vert a \Vert _{2} \biggl (C^{T} E_{I}^{\nu } + \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } E(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \biggl ) \end{aligned}$$
(6.4)
$$\begin{aligned}&\Vert ay_{\widehat{n}}-a_{\widehat{n}} y_{\widehat{n}} \Vert _{2} \le \Vert y_{\widehat{n}} \Vert _{2} \Vert a- a_{\widehat{n}} \Vert _{2} \le \Vert y_{\widehat{n}} \Vert _{2} \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } a(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \end{aligned}$$
(6.5)

and

$$\begin{aligned} \Vert b- b_{\widehat{n}} \Vert _{2} \le \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } b(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \end{aligned}$$
(6.6)

Using Eqs. (6.4)–(6.6), we achieve

$$\begin{aligned} \Vert E_{y_{\widehat{n}}} \Vert _{2}\le & {} \Vert a \Vert _{2} \biggl (C^{T} E_{I}^{\nu } + \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } E(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \biggl ) \\&+ \Vert y_{\widehat{n}} \Vert _{2} \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } a(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }}\\&+ \frac{sup_{x \in [0,\;1]} \vert D^{(n+1) \alpha } b(x) \vert }{\varGamma (n \alpha + \alpha +1) \sqrt{(2n+3) \alpha }} \end{aligned}$$

For Problem 2, it can be shown similarly that our method is convergent. \(\square \)

7 Illustrative test problems

In this section, we apply the proposed method to solve the following test examples.

7.1 Fractional differential equations

Example 7.1

Consider the fractional differential equation [4]

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\nu }y(x)+y(x)=\frac{\varGamma (\lambda +1)}{\varGamma ( \lambda -\nu +1)}x^{\lambda - \nu }+x^{\lambda }, &{}\quad 0 \le \nu \le \lambda \le 1, x \in [0, 1]\\ y(0)=0.&{} \end{array} \right. \end{aligned}$$

The exact solution of this equation is \(y(x)=x^{\lambda }\). Applying the method described in Sect. 5, the above equation is transformed into

$$\begin{aligned} C^{T}\varPsi ^{\alpha }(x)+C^{T}P^{(\nu , \alpha )} \varPsi ^{\alpha }(x) = E^{T}\varPsi ^{\alpha }(x), \end{aligned}$$

where \(\frac{\varGamma (\lambda +1)}{\varGamma ( \lambda -\nu +1)}x^{\lambda - \nu }+x^{\lambda } \simeq E^{T}\varPsi ^{\alpha }(x)\). By collocating the above equation at \(x_{p} = \frac{p}{n2^{k-1}},\;p=0, 1, \ldots , n2^{k-1}\), and using Newton's iterative method, we obtain the unknown vector C.
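
A compact, self-contained sketch of this computation (illustrative, assuming NumPy; for brevity \(k=1\), \(n=2\) and \(\nu =\lambda =\alpha =\frac{1}{2}\), with the operational matrix built by projection as in Sect. 4.1) is given below. In this setting \(x^{\lambda }\) lies in the span of the basis, so the error is at the level of roundoff:

```python
import numpy as np
from math import gamma

nodes = np.array([0.0, 0.5, 1.0]); n = 2; alpha = nu = lam = 0.5   # k = 1

def beta_row(i):
    others = np.delete(nodes, i)
    return np.poly(others) / np.prod(nodes[i] - others)

beta = np.array([beta_row(i) for i in range(n + 1)])
s = np.arange(n + 1)
w = np.array([np.sum(np.outer(beta[i], beta[i]) / (2*n - np.add.outer(s, s) + 1))
              for i in range(n + 1)])
c = beta / np.sqrt(w)[:, None]
p = alpha * (n - s)
g = np.array([gamma(q + 1) / gamma(q + 1 + nu) for q in p])
D = np.einsum('ia,jb,ab->ij', c, c, 1.0 / (np.add.outer(p, p) + alpha))
F = np.einsum('ia,jb,ab->ij', c * g, c, 1.0 / (np.add.outer(p + nu, p) + alpha))
P = F @ np.linalg.inv(D)                          # operational matrix P^{(nu,alpha)}

Psi = lambda x: c @ x ** p                        # Psi^alpha(x)
rhs = lambda x: gamma(lam + 1) / gamma(lam - nu + 1) * x**(lam - nu) + x**lam

# collocation: C^T Psi(x_p) + C^T P Psi(x_p) = rhs(x_p)  (linear in C, y(0) = 0)
xs = np.array([q / n for q in range(n + 1)])
A = np.array([Psi(x) + P @ Psi(x) for x in xs])
b = np.array([rhs(x) for x in xs])
C = np.linalg.solve(A, b)

y = lambda x: C @ (P @ Psi(x))                    # y(x) = C^T P^{(nu,alpha)} Psi^alpha(x)
print(max(abs(y(x) - x**lam) for x in [0.2, 0.5, 0.9]))   # near machine precision here
```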

Table 1 displays the absolute errors between the approximate solutions and the exact solution for \(k = 2,\;n = 2\) and various values of \(\nu =\lambda \). Also, Fig. 3 shows the absolute error between the approximate solution and the exact solution for \(k = 2,\;n=2\) with \(\alpha =\nu =\lambda \).

Thus, Table 1 and Fig. 3 demonstrate the validity and effectiveness of our method for this problem.

Table 1 The absolute error with \(k=2,\;n=2\) and various values of \(\alpha \) in Example 7.1
Fig. 3 The absolute error between the approximate solution and the exact solution for \(k = 2,\;n=2\) with (a) \(\alpha =\nu =1\) and (b) \(\alpha =\nu =\frac{1}{2}\), in Example 7.1

Example 7.2

Consider the following nonlinear initial value problem

$$\begin{aligned} D^{\frac{1}{2}}y(x)+y(x)D^{\frac{1}{2}}y(x)= f(x),\;\;\;\; 0 \le x \le 1, \end{aligned}$$

subject to

$$\begin{aligned} y(0)=1. \end{aligned}$$

where

$$\begin{aligned} f(x)= \left\{ \begin{array}{ll} \frac{\sqrt{\pi }}{2}+\frac{\sqrt{\pi }}{2}(1+\sqrt{x}), &{}\quad 0 \le x < \frac{1}{4}\\ \frac{\sqrt{\pi }}{2} + \frac{8\sqrt{x}}{\sqrt{\pi }}+\frac{\sqrt{\pi } \sqrt{x}}{2} + \frac{8x}{\sqrt{\pi }}+2\sqrt{\pi }x+\frac{32x^{\frac{3}{2}}}{\sqrt{\pi }}&{}\quad \frac{1}{4} \le x \le 1, \end{array} \right. \end{aligned}$$

whose exact solution is given by

$$\begin{aligned} y(x) = \left\{ \begin{array}{ll} \sqrt{x}+1, &{}\quad 0 \le x < \frac{1}{4},\\ 4x+ \sqrt{x}, &{}\quad \frac{1}{4} \le x \le 1. \end{array} \right. \end{aligned}$$

We employ the proposed method to solve this problem for \(k=2,\;n=4\) and \(\alpha =1, \frac{1}{2}\). Table 2 reports the absolute errors between our numerical results for \(\alpha =1, \frac{1}{2}\) and the exact solution at various values of x, and demonstrates the validity and effectiveness of our method for this problem.

Table 2 Comparison of the absolute errors of our method for \(\alpha =1, \frac{1}{2}\) in Example 7.2
Fig. 4 The comparison of y(x) for \(k=2,\;n=4\) and various values of \(\alpha =\nu \) with the exact solution, in Example 7.3

Example 7.3

Consider the following nonlinear fractional equation [10]

$$\begin{aligned} D^{\nu }u(x) = -u^{2}(x)+1,\;\;0< \nu \le 1, \end{aligned}$$

where

$$\begin{aligned} u(0)=0. \end{aligned}$$

The exact solution for \(\nu = 1\) is given by

$$\begin{aligned} u(x)= \frac{e^{2x}-1}{e^{2x}+1}. \end{aligned}$$

By employing our method, the problem reduces to

$$\begin{aligned} C^{T}\varPsi ^{\alpha }(x)=-(C^{T}P^{(\nu ,\alpha )}\varPsi ^{\alpha }(x))(C^{T}P^{(\nu ,\alpha )}\varPsi ^{\alpha }(x))^{T}+E^{T}\varPsi ^{\alpha }(x), \end{aligned}$$

and

$$\begin{aligned} 1 \simeq E^{T}\varPsi ^{\alpha }(x). \end{aligned}$$

We apply the present approach to solve this problem with \(n=4,\;k=1\). The approximate solution of this problem obtained by the Legendre wavelet method with \(k = 1,\; M = 25,\;\nu =1\) is plotted in [10]. Comparing that figure with Fig. 4, we see that we obtain a good approximation to the exact solution. In addition, Fig. 4 shows the approximate solutions obtained with the FGLSF scheme for \(n=4,\;k=2\) and different values of \(\nu =\alpha \). From these results, it is seen that the approximate solutions converge to the exact solution.

Exact solutions for \(\nu \ne 1\) are not available. Therefore, as in [37], to indicate the effectiveness of our method for this example, we consider the norm of the residual error defined as follows

$$\begin{aligned} \Vert Res_{n} \Vert ^{2} = \int _{0}^{1} Res_{n}^{2}(x) dx. \end{aligned}$$

where

$$\begin{aligned} Res_{n}(x)=C^{T}\varPsi ^{\alpha }(x)+(C^{T}P^{(\nu ,\alpha )}\varPsi ^{\alpha }(x))(C^{T}P^{(\nu ,\alpha )}\varPsi ^{\alpha }(x))^{T} -E^{T}\varPsi ^{\alpha }(x), \end{aligned}$$

Table 3 reports \(\Vert Res_{n} \Vert ^{2}\) for several values of \(n,\;k\) and various values of \(\nu = \alpha \). The numerical results display the advantage of the proposed technique for this nonlinear problem.

Table 3 The \(\Vert Res_{n} \Vert ^{2}\) with various values of \(\nu = \alpha \) for Example 7.3

7.2 Fractional delay differential equations

Example 7.4

Consider the following fractional delay differential equation

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\nu }y(x)=y(x - \xi )+f(x), &{}\quad 0 \le x \le 1, \\ y(x)=\sqrt{x}, &{}\quad -\xi \le x \le 0 \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} f(x)= \left\{ \begin{array}{ll} \frac{\sqrt{\pi }}{2}+\sqrt{x - \xi },&{}\quad 0 \le x < \frac{1}{4}, \\ \xi - \frac{2 \sqrt{x}}{\sqrt{\pi }}-x+ \frac{8x^{\frac{3}{2}}}{3 \sqrt{\pi }}+(x- \xi )^{2}, &{}\quad \frac{1}{4} \le x \le 1 \end{array} \right. \end{aligned}$$

The analytic solution of this problem for \(\nu = \frac{1}{2}, \xi = 0.000001\) is

$$\begin{aligned} y(x)= \left\{ \begin{array}{ll} \sqrt{x},&{} \quad 0 \le x < \frac{1}{4}, \\ x^{2}-x, &{}\quad \frac{1}{4} \le x \le 1 \end{array} \right. \end{aligned}$$

We apply the present method to solve this problem for \(k=2,\;n=4\) and different values of \(\alpha \). Table 4 reports the absolute errors between the numerical results for \(\alpha =1, \frac{1}{2}\) and the exact solution at various values of x, and shows the validity and effectiveness of the present method for this problem.

Table 4 Comparison of the absolute errors of our method for \(\alpha =1, \frac{1}{2}\) in Example 7.4

Example 7.5

Consider the following fractional delay differential equation for \(0< \nu \le 1\) [32]

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\nu }y(x)=y(x - \xi )- y(x)+ \frac{2}{\varGamma (3- \nu )} x^{2- \nu } - \frac{1}{\varGamma (2- \nu )}x^{1- \nu } +2 \xi x - \xi ^2- \xi ,&{}\quad 0 \le x \le 1, \\ y(x)=0, &{} \quad x\le 0 \end{array} \right. \end{aligned}$$

The exact solution of this problem is \(y(x)=x^2-x\), when \(\nu =1\).

We employ this method for \(\nu =\alpha =1\), \(k=2,\;n=2\), and various choices of \(\xi \). Figure 5 shows the approximate solutions obtained for different values of \(\alpha =\nu \) together with the exact solution for \( \xi =0.01\).

Table 5 shows the efficiency and accuracy of FGLSFs in solving this fractional delay differential equation. From these results, it is seen that the approximate solutions converge to the exact solution.

Fig. 5 The comparison of our results for \(n=2,\;k=2,\;\xi =0.01\) with \(\alpha =\nu \) and the exact solution in Example 7.5

Table 5 The absolute errors of our approximation in \(k=2,\;n=2\) with various values of \(\xi \) for Example 7.5

Example 7.6

Consider the following fractional delay differential equation

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\nu }y(x)=-y(x)- y(x-0.3)+e^{-x+0.3},&{}\quad 0 \le x \le 1,\;\;2< \nu \le 3, \\ y(0)=1, \quad y^{'}(0)=-1, \quad y^{''}(0)=1,\quad y(x)=e^{-x}, &{} \quad x\le 0 \end{array} \right. \end{aligned}$$

The analytic solution of this problem, for \(\nu =3\), is \(y(x)=e^{-x}\).

Table 6 shows the numerical results obtained for different values of x using our method with \(k=1,\;n=6,\;\alpha =1\) (i.e. \(\widehat{n}=7\)), the Hermite wavelet method with \(\widehat{n}=25\) [40], the Bernoulli wavelets method with \(k=2,\;M=7\) (i.e. \(\widehat{n}=14\)) [32], and the exact solution. Also, Fig. 6 displays the approximate solutions obtained for various values of \(\nu \) together with the exact solution for \(n=6,\;k=1,\;\alpha =1\).

Table 6 Comparison of the approximate solution with the exact solution in Example 7.6
Fig. 6 The comparison of y(x) for \(k=1,\;n=6,\;\nu =3\) and various values of \(\alpha \) with the exact solution, in Example 7.6

Example 7.7

We consider the fractional delay differential equation [17]

$$\begin{aligned} D^{\nu }y(x)=x y(x-\xi _{1})+x^{2}y^{2}(x-\xi _{2})+g(x), \quad \quad 0 \le x \le 1,\;0< \nu \le 1, \end{aligned}$$

where \(y(0)=1\) and

$$\begin{aligned} g(x)= & {} \left\{ \begin{array}{ll} x, &{}\quad 0 \le x \le \frac{1}{2}, \\ \frac{1}{2}, &{}\quad \frac{1}{2} \le x \le 1, \end{array} \right. \\ \xi _{1}= & {} \left\{ \begin{array}{ll} \frac{1}{2}, &{}\quad 0 \le x \le \frac{3}{4}, \\ \frac{1}{4}, &{}\quad \frac{3}{4} \le x \le 1, \end{array} \right. \\ \xi _{2}= & {} \left\{ \begin{array}{ll} \frac{1}{4}, &{}\quad 0 \le x \le \frac{1}{2}, \\ \frac{3}{4}, &{}\quad \frac{1}{2} \le x \le 1. \end{array} \right. \end{aligned}$$

The exact solution of this problem for \(\nu =1\) is

$$\begin{aligned} y(x)= \left\{ \begin{array}{ll} 1+\frac{1}{2}x^2, &{}\quad 0 \le x< \frac{1}{4}, \\ \frac{20{,}373}{20{,}480}+\frac{1}{2}x^{2}+\frac{11}{32}x^3-\frac{1}{16}x^4+\frac{1}{10}x^5, &{}\quad \frac{1}{4} \le x< \frac{1}{2}, \\ \frac{48{,}191}{61{,}440}+\frac{1}{2}x+\frac{9}{16}x^2-\frac{1}{6}x^3+\frac{1}{8}x^4, &{}\quad \frac{1}{2} \le x <\frac{3}{4}, \\ \frac{1{,}289{,}743}{1{,}966{,}080}+\frac{1}{2}x+\frac{14{,}287}{40{,}960}x^{2}+\frac{187}{384}x^{3}-\frac{1}{256}x^{4}+\frac{1}{24}x^{5}+\frac{1}{48}x^{6}, &{}\quad \frac{3}{4} \le x \le 1. \end{array} \right. \end{aligned}$$

By employing our method with \(k=3,\;n=7\) and \(\alpha =1\), we obtain the exact solution.

Example 7.8

This example is a model, introduced by Pieroux [30], of the effect of noise on light reflected from a laser to a mirror. Figure 7 shows this model, taken from Ref. [23].

$$\begin{aligned} \left\{ \begin{array}{ll} D^{\nu }y(x)=-\frac{1}{\epsilon } y(x)+\frac{1}{\epsilon } y(x)y(x- \xi ), &{}\quad 0 \le x \le 1, \\ y(x)=0.9, &{}\quad - \xi \le x \le 0 \end{array} \right. \end{aligned}$$

The exact solution of this equation is not available [23]. Therefore, to display the efficiency of the proposed method for this problem, we define the norm of the residual error as follows

$$\begin{aligned} Res_{n}(x)= & {} C^{T}\varPsi ^{\alpha }(x)+\frac{1}{\epsilon }(C^{T}P^{(\nu ,\alpha )} \varPsi ^{\alpha }(x)+0.9) \\&-\frac{1}{\epsilon }(C^{T}P^{(\nu ,\alpha )}\varPsi ^{\alpha }(x)+0.9)(C^{T}P^{(\nu ,\alpha )} \varOmega _{\xi } \varPsi ^{\alpha }(x)+0.9)^{T},\\ \Vert Res_{n} \Vert ^{2}= & {} \int _{0}^{1} Res_{n}^{2}(x) dx. \end{aligned}$$
Fig. 7 Semiconductor laser subject to an optoelectronic feedback. The figure illustrates the optoelectronic device used by Saboureau et al. [38]. The feedback operates on the pump of the laser by using part of the output light, which is injected into a photodetector connected to the pump. The delay of the feedback is controlled by changing the length of the optical path [23]

Table 7 reports \(\Vert Res_{n} \Vert ^{2}\) with \(\epsilon =0.1,\;\xi =0.3\), \(n=6,\;k=1\), and various values of \(\nu = \alpha \). This table displays the advantage of the present technique for solving this nonlinear problem.

Table 7 The \(\Vert Res_{n} \Vert ^{2}\) with various values of \(\nu ,\;\alpha \) for Example 7.8

8 Conclusion

The purpose of this study has been to provide a new set of basis functions for solving two classes of fractional equations. First, we introduced the general Lagrange scaling functions (GLSFs). These functions are constructed from the Lagrange polynomials; in other words, they are defined without fixing the nodes of the Lagrange polynomials in advance. Then, using FLPs, we introduced a new set of functions called fractional-order general Lagrange scaling functions (FGLSFs). Next, the fractional integration operational matrix and the delay operational matrix of the FGLSFs were presented. The operational matrix of fractional integration is calculated using the Laplace transform, and it is obtained directly, without transformation to FLPs. These matrices, together with the collocation method, are used to solve two classes of problems: fractional differential equations and fractional delay differential equations. Numerical examples demonstrate the ability and effectiveness of our method and show that it is convergent.