We now analyze Cauchy type problems for differential equations of fractional order with Hilfer and Hilfer-Prabhakar derivative operators. Existence and uniqueness theorems for n-term nonlinear fractional differential equations with Hilfer fractional derivatives of arbitrary orders and types will be proved. Cauchy type problems for integro-differential equations of Volterra type with the generalized Mittag-Leffler function in the kernel will be considered as well. Using the operational method of Mikusinski, the solution of a Cauchy type problem for a linear n-term fractional differential equation with Hilfer fractional derivatives will be obtained. We will show the utility of the operational method for solving Cauchy type problems for a wide class of integro-differential equations with variable coefficients, involving the Prabhakar integral operator and Laguerre derivatives. For this purpose, following some recent works, we choose examples which, by means of fractional derivatives, generalize well-known ordinary and partial differential equations related to time fractional heat equations, the free electron laser equation, some evolution and boundary value problems, and finally some Cauchy type problems for the generalized fractional Poisson process.

3.1 Ordinary Fractional Differential Equations: Existence and Uniqueness Theorems

An important issue in the theory of ordinary fractional differential equations is the existence and uniqueness of solutions. Several authors have considered a “model” nonlinear fractional differential equation with the R-L fractional derivative \(\left ( D_{a+}^{\mu }y\right ) \left (x\right ) \) of order \(\Re \left (\mu \right )>0\) on a finite interval \(\left [a,b\right ]\) of the real axis R:

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu}y\right)\left(x\right)=f\left[x,y\left(x\right)\right] \quad \left(\Re\left(\mu\right)>0; \quad x>a\right),{} \end{aligned} $$
(3.1)

with initial values

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu-k}y\right)\left(a+\right)=b_{k}, \quad b_{k}\in\mathrm{C} \quad \left(k=1,2,\dots,n\right),{} \end{aligned} $$
(3.2)

where \(n=\left[\Re \left (\mu \right )\right]+1\) for μ∉N and n = μ for μ ∈N. When \(0<\Re \left (\mu \right )<1\), the problem takes the form

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu}y\right)\left(x\right)=f\left[x,y\left(x\right)\right], \quad \left(I_{a+}^{1-\mu}y\right)\left(a+\right)=b \quad \left(b\in\mathrm{C}\right){} \end{aligned} $$
(3.3)

and can be rewritten as the weighted Cauchy type problem

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu}y\right)\left(x\right)=f\left[x,y\left(x\right)\right], \quad \underset{x\rightarrow a+}{\lim}\left(x-a\right)^{1-\mu}y\left(x\right)=b \quad \left(b\in\mathrm{C}\right).{} \end{aligned} $$
(3.4)

In this chapter we investigate the above-mentioned problems by reducing them to the nonlinear Volterra integral equation of the second kind [39]:

$$\displaystyle \begin{aligned} y\left(x\right)=\sum_{j=1}^{n}\frac{b_{j}}{\varGamma\left(\mu-j+1\right)}\left(x-a\right)^{\mu-j}+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t \quad \left(x>a\right). \end{aligned} $$
(3.5)
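The reduction (3.5) also suggests a direct numerical treatment. The following sketch is our illustration, not part of [39]; the names `rl_weights` and `solve_volterra` are ours. It discretizes the weakly singular kernel by a product-rectangle rule for the case n = 1, a = 0, with f frozen at the left endpoint of each cell:

```python
import math

def rl_weights(i, h, mu):
    # mu * (integral of (x_i - t)^(mu-1) over [t_k, t_{k+1}]), t_k = a + k*h;
    # the kernel is integrated exactly, so the weak singularity causes no loss
    return [((i - k) * h) ** mu - ((i - k - 1) * h) ** mu for k in range(i)]

def solve_volterra(f, mu, b1, h, steps, a=0.0):
    """Explicit product-rectangle scheme for Eq. (3.5) with n = 1:
    y(x) = b1*(x-a)^(mu-1)/Gamma(mu) + (1/Gamma(mu)) * int_a^x (x-t)^(mu-1) f(t, y(t)) dt."""
    xs = [a + i * h for i in range(steps + 1)]
    ys = [0.0] * (steps + 1)   # ys[0] is only a placeholder (y may be singular at a)
    for i in range(1, steps + 1):
        acc = sum(w * f(xs[k], ys[k])
                  for k, w in enumerate(rl_weights(i, h, mu)))
        ys[i] = acc / (mu * math.gamma(mu))
        if b1:
            ys[i] += b1 * (xs[i] - a) ** (mu - 1) / math.gamma(mu)
    return xs, ys
```

For \(f\left(x,y\right)=1\) and \(b_{1}=0\) the scheme reproduces \(y\left(x\right)=x^{\mu}/\varGamma\left(\mu+1\right)\) exactly, since the quadrature weights come from exact integration of the kernel; for y-dependent f the left-endpoint rule is only first-order accurate, and a nonzero \(b_{1}\) makes y singular at a, which would require special handling of the first cell.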

Pitcher and Sewell [30] in 1938 first considered the nonlinear fractional differential equation with 0 < μ < 1, provided that \(f\left (x,y\right )\) is bounded in a special region G lying in R ×R and satisfies the Lipschitz condition with respect to y:

$$\displaystyle \begin{aligned} \left\vert f\left(x,y_{1}\right)-f\left(x,y_{2}\right)\right\vert\leq A\left\vert y_{1}-y_{2}\right\vert, \end{aligned} $$
(3.6)

where the constant A does not depend on x. They proved the existence of a continuous solution \(y\left (x\right )\) of the corresponding nonlinear integral equation of the form (3.5) with 0 < μ < 1, n = 1 and \(b_{1}=0\). The work of Pitcher and Sewell [30] already contained the idea of reducing the solution of a fractional differential equation to that of a Volterra integral equation. Existence and uniqueness results were formulated, without proof, by Al-Bassam [1] for the more general Cauchy type problem with real μ > 0:

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu}y\right)\left(x\right)=f\left[x,y\left(x\right)\right] \quad \left(n-1<\mu\leq n;\, n\in\mathrm{N}\right),{} \end{aligned} $$
(3.7)
$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu-k}y\right)\left(a+\right)=b_{k}, \quad b_{k}\in\mathrm{R} \quad \left( k=1,2,\dots,n\right).{} \end{aligned} $$
(3.8)

In this regard, see the survey paper by Kilbas and Trujillo [19], Sections 4 and 5. Kilbas and Marzan [18] considered the Cauchy type problem for nonlinear fractional differential equations with μ ∈C \(\left (\Re \left (\mu \right )>0\right )\):

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu}y\right)\left(x\right)=f\left[x,y\left(x\right),\left(D_{a+}^{\mu_{1}}y\right)\left(x\right),\left(D_{a+}^{\mu_{2}}y\right)\left(x\right),\dots,\left(D_{a+}^{\mu_{m-1}}y\right)\left(x\right)\right], \end{aligned} $$
(3.9)

where \(0<\Re \left (\mu _{1}\right )<\Re \left (\mu _{2}\right )<\dots <\Re \left (\mu _{m-1}\right )<\Re \left (\mu \right )\) and m ≥ 2.

In what follows, a general nonlinear model with the composite fractional derivative [39]:

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu,\nu}y\right)\left(x\right)=f\left[x,y\left(x\right) \right] \quad \left(n-1<\mu\leq n;\, n\in\mathrm{N}, \, 0\leq\nu\leq1\right) \end{aligned} $$
(3.10)
$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim }\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{\left(n-\mu \right)\left(1-\nu\right)}y\right)\left(x\right)=c_{k}, \quad c_{k}\in\mathrm{R} \quad \left(k=0,1,2,\dots,n-1\right), \end{aligned} $$
(3.11)

and a particular case of this nonlinear model, given by:

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu,\nu}y\right)\left(x\right)=f\left[x,y\left(x\right)\right] \quad \left(0<\mu\leq 1;\, 0\leq\nu\leq1\right) \end{aligned} $$
(3.12)
$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim}\left(I_{a+}^{\left(1-\mu\right)\left(1-\nu\right)}y\right)\left(x\right)=c, \quad c\in\mathrm{R}, \end{aligned} $$
(3.13)

will be considered.
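Recall that the composite (Hilfer) derivative interpolates between the two classical fractional derivatives, which is what makes (3.12)–(3.13) a genuine two-parameter family of problems: for ν = 0 one recovers the R-L problem (3.3), while for ν = 1 the derivative becomes the Caputo derivative and, since \(I_{a+}^{0}\) is the identity operator, the initial condition becomes a classical one:

$$\displaystyle \begin{aligned} \nu=0: \quad D_{a+}^{\mu,0}=D_{a+}^{\mu}, \quad \left(I_{a+}^{1-\mu}y\right)\left(a+\right)=c; \qquad \nu=1: \quad D_{a+}^{\mu,1}={}^{C}D_{a+}^{\mu}, \quad y\left(a+\right)=c. \end{aligned}$$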

3.2 Equivalence of Cauchy Type Problem and the Volterra Integral Equation

Proposition 3.1 ([39])

Let \(y\in L\left (a,b\right )\), n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, \(I_{a+}^{\left (n-\mu \right )\left (1-\nu \right )}y\in AC^{k}\left [a,b\right ]\). Then the R-L fractional integral \(I_{a+}^{\mu }\)and the generalized fractional derivative \(D_{a+}^{\mu ,\nu }\)are connected by the relation:

$$\displaystyle \begin{aligned} \left(I_{a+}^{\mu}D_{a+}^{\mu,\nu}y\right)\left(x\right)=y\left(x\right)-y_{\mu,\nu}\left(x\right), \quad x>a, \end{aligned} $$
(3.14)

where

$$\displaystyle \begin{aligned} y_{\mu,\nu}\left(x\right)=\sum_{k=0}^{n-1}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}\underset{x\rightarrow a+}{\lim}\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{\left(n-\mu\right)\left(1-\nu\right) }y\right)\left(x\right). \end{aligned} $$
(3.15)

Proof

Using the composition properties of the Hilfer derivative one gets

$$\displaystyle \begin{aligned} \left(I_{a+}^{\mu}D_{a+}^{\mu,\nu}y\right)\left(x\right)&=\left(I_{a+}^{\mu}I_{a+}^{\nu\left(n-\mu\right)}D_{a+}^{\mu+\nu n-\mu\nu}y\right)\left(x\right)=\left(I_{a+}^{\mu+\nu\left(n-\mu\right)}D_{a+}^{\mu+\nu\left(n-\mu\right)}y\right)\left(x\right)\\ &=y\left(x\right) -\sum_{k=0}^{n-1}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu \right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}\\ & \quad \times \underset{x\rightarrow a+}{\lim}\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{\left(n-\mu\right)\left(1-\nu\right)}y\right)\left(x\right). \end{aligned} $$
(3.16)

Proposition 3.2 ([39])

Let G be an open set in R and let \(f:\left [a,b\right ]\times G\rightarrow \mathrm {R}\)be a function such that \(f\left (x,y\right )\in L\left (a,b\right )\). If \(y\in L\left (a,b\right )\), n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, \(I_{a+}^{\left (n-\mu \right )\left (1-\nu \right )}y\in AC^{k}\left [a,b\right ]\), 0 ≤ k ≤ n − 1, then \(y\left (x\right )\)satisfies a.e. the relations (3.10) and (3.11) if and only if \(y\left (x\right )\)satisfies a.e. the integral equation

$$\displaystyle \begin{aligned} y\left(x\right)=\sum_{k=0}^{n-1}c_{k}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t. \end{aligned} $$
(3.17)

In particular, if 0 < μ < 1, then \(y\left (x\right )\)satisfies a.e. these relations if and only if \(y\left (x\right )\)satisfies a.e. the integral equation

$$\displaystyle \begin{aligned} y\left(x\right)=c\frac{\left(x-a\right)^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t. \end{aligned} $$
(3.18)

Proof (Necessity)

Let \(y\left (x\right )\in L\left (a,b\right )\) satisfy a.e. the relations (3.10) and (3.11). Since \(f\left (x,y\right )\in L\left (a,b\right )\), it follows from (3.10) that the fractional derivative \(\left (D_{a+}^{\mu ,\nu }y\right )\left (x\right )\in L\left (a,b\right )\) exists a.e. on \(\left [a,b\right ]\). By Lemma 2.1, the integral \(I_{a+}^{\mu }f\left [t,y\left (t\right )\right ]\in L\left (a,b\right )\) exists a.e. on \(\left [a,b\right ]\). Applying the integral operator \(I_{a+}^{\mu }\) to both sides of (3.10) and using the relation (3.16), Eq. (3.17) is obtained; hence the necessity is proved. Now we prove the sufficiency. Let \(y\left (x\right )\in L\left (a,b\right )\) satisfy a.e. Eq. (3.17). Using the relation

$$\displaystyle \begin{aligned} \left[D_{a+}^{\mu,\nu}\left(t-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}\right]\left(x\right)=0 \end{aligned}$$

for 0 ≤ k ≤ n − 1, and applying the operator \(D_{a+}^{\mu ,\nu }\) to both sides of (3.17), one obtains

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu,\nu}y\right)\left(x\right)&=\sum_{k=0}^{n-1}c_{k}\frac{\left[D_{a+}^{\mu,\nu}\left(t-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}\right]\left(x\right)}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}+\left(D_{a+}^{\mu,\nu}I_{a+}^{\mu}f\left[t,y\left(t\right)\right]\right)\left(x\right)\\&=f\left(x,y\left(x\right)\right). \end{aligned} $$
(3.19)

Next we show that relation (3.11) also holds. By applying the operator \(I_{a+}^{\left (n-\mu \right )\left (1-\nu \right )}\) to both sides of (3.17), one obtains:

$$\displaystyle \begin{aligned} \left(I_{a+}^{\left(n-\mu\right)\left(1-\nu\right)}y\right)\left(x\right)&=\sum_{k=0}^{n-1}c_{k}\frac{\left[I_{a+}^{\left(n-\mu\right)\left(1-\nu\right)}\left(t-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}\right]\left(x\right)}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}\\ &\quad +\left(I_{a+}^{\left(n-\mu\right)\left(1-\nu\right)}I_{a+}^{\mu}f\left[t,y\left(t\right)\right]\right)\left(x\right)=\sum_{j=0}^{n-1}\frac{c_{j}}{j!}\left(x-a\right)^{j}\\ &\quad +\left(I_{a+}^{n-n\nu+\mu\nu}f\left[t,y\left(t\right)\right]\right)\left(x\right). \end{aligned} $$
(3.20)

If 0 ≤ k ≤ n − 1, then

$$\displaystyle \begin{aligned} \frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{(n-\mu)(1-\nu)}y\right)\left(x\right)&=\sum_{j=k}^{n-1}\frac{c_{j}}{\left(j-k\right)!}\left(x-a\right)^{j-k}\\ &\quad +\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{n-n\nu+\mu\nu}f\left[t,y\left(t\right)\right]\right)\left(x\right)\\ &=\sum_{j=k}^{n-1}\frac{c_{j}}{\left(j-k\right)!}\left(x-a\right)^{j-k}+\left(I_{a+}^{n-n\nu+\mu\nu-k}f\left[t,y\left(t\right)\right]\right)\left(x\right)\\ &=\sum_{j=k}^{n-1}\frac{c_{j}}{\left(j-k\right)!}\left(x-a\right)^{j-k}\\ &\quad +\frac{1}{\varGamma\left(n-n\nu+\mu\nu-k\right)}\int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-n+n\nu-\mu\nu+k}}\,\mathrm{d}t. \end{aligned} $$
(3.21)

Taking the limit x → a +  in (3.21), the relations in (3.11) are obtained. Thus the sufficiency is proved, which completes the proof of the theorem.

Theorem 3.1 ([39])

Let G be an open set in R and let \(f:\left [a,b\right ]\times G\rightarrow \mathrm {R}\)be a function such that \(f\left (x,y\right )\in L\left (a,b\right )\)for any y ∈ G and the Lipschitzian-type condition (3.6) is satisfied. If n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, \(I_{a+}^{\left (n-\mu \right )\left (1-\nu \right )}y\in AC^{k}\left [a,b\right ]\), 0 ≤ k ≤ n − 1, then there exists a unique solution \(y\left (x\right )\)to the Cauchy type problem (3.10)–(3.11) in the space \(L_{a+}^{\mu ,\nu }\left (a,b\right )\). In particular, if 0 < μ < 1, then there exists a unique solution \(y\left (x\right )\)to the Cauchy type problem (3.12)–(3.13) in the space \(L_{a+}^{\mu ,\nu }\left (a,b\right )\).

Proof

In order to prove the existence of a unique solution \(y\left (x\right )\in L\left (a,b\right )\), according to Proposition 3.2 it is sufficient to prove the existence of a unique solution \(y\left (x\right ) \in L\left (a,b\right )\) of the nonlinear Volterra integral equation (3.17). Following the standard method for nonlinear Volterra integral equations, one first proves the result on a part of the interval \(\left [a,b\right ]\). Equation (3.17) makes sense in any interval \(\left [a,x_{1}\right ]\subset \left [a,b\right ]\) \(\left (a<x_{1}<b\right )\). Choose \(x_{1}\) such that the inequality

$$\displaystyle \begin{aligned} A\frac{\left(x_{1}-a\right)^{\mu}}{\varGamma\left(\mu +1\right)}<1 \end{aligned} $$
(3.22)

holds, and then prove the existence of a unique solution \(y\left (x\right )\in L\left (a,x_{1}\right )\) to Eq. (3.17) on the interval \(\left [a,x_{1}\right ]\). The integral equation (3.17) can be rewritten in the form \(y\left (x\right )=\left (Ty\right )\left (x\right )\), where

$$\displaystyle \begin{aligned} \left(Ty\right)\left(x\right)=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t \end{aligned} $$
(3.23)
$$\displaystyle \begin{aligned} y_{0}\left(x\right)=\sum_{k=0}^{n-1}c_{k}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)} \end{aligned} $$
(3.24)

and then one applies the Banach fixed point theorem for the complete metric space \(L\left (a,x_{1}\right )\). First, one has to prove the following:

  (i) If \(y\left (x\right )\in L\left (a,x_{1}\right )\), then \(\left (Ty\right )\left (x\right )\in L\left (a,x_{1}\right )\).

  (ii) For all \(y_{1},y_{2}\in L\left (a,x_{1}\right )\), the following inequality holds:

    $$\displaystyle \begin{aligned} \left\Vert Ty_{1}-Ty_{2}\right\Vert {}_{1}\leq\omega\left\Vert y_{1}-y_{2}\right\Vert {}_{1}, \quad \omega=A\frac{\left(x_{1}-a\right)^{\mu}}{\varGamma\left(\mu+1\right)}. \end{aligned} $$
    (3.25)

Indeed, since \(y_{0}\left (x\right )\in L\left (a,x_{1}\right )\) and \(f\left (x,y\right )\in L\left (a,x_{1}\right )\), the integral on the right-hand side of (3.23) also belongs to \(L\left (a,x_{1}\right )\), and hence \(\left (Ty\right )\left (x\right )\in L\left (a,x_{1}\right )\). Next, we prove the estimate (3.25): one obtains

$$\displaystyle \begin{aligned} \left\Vert Ty_{1}-Ty_{2}\right\Vert {}_{L\left(a,x_{1}\right)}&=\left\Vert I_{a+}^{\mu}f\left[t,y_{1}\left(t\right)\right]-I_{a+}^{\mu}f\left[t,y_{2}\left( t\right)\right]\right\Vert {}_{L\left(a,x_{1}\right)}\\&=\left\Vert I_{a+}^{\mu}\left\{f\left[t,y_{1}\left(t\right)\right]-f\left[t,y_{2}\left(t\right)\right]\right\}\right\Vert {}_{L\left(a,x_{1}\right)}\\&\leq A\left\Vert I_{a+}^{\mu}\left[y_{1}\left(t\right)-y_{2}\left(t\right)\right] \right\Vert {}_{L\left(a,x_{1}\right)}\\&\leq A\frac{\left(x_{1}-a\right)^{\mu}}{\varGamma\left(\mu+1\right)}\left\Vert y_{1}\left(x\right)-y_{2}\left(x\right)\right\Vert {}_{L\left(a,x_{1}\right)}. \end{aligned} $$
(3.26)

Since 0 < ω < 1, there exists a unique solution \(y^{\ast }\left (x\right )\in L\left (a,x_{1}\right )\) to Eq. (3.17) on the interval \(\left [a,x_{1}\right ]\). The solution \(y^{\ast }\left (x\right )\) is obtained as the limit of the convergent sequence \(\left (T^{m}y_{0}^{\ast }\right )\left (x\right )\):

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert T^{m}y_{0}^{\ast}-y^{\ast}\right\Vert {}_{L\left(a,x_{1}\right)}=0, \end{aligned} $$
(3.27)

where \(y_{0}^{\ast }\left (x\right )\in L\left (a,b\right )\). If at least one c k ≠ 0 in the initial values (3.11), we can take \(y_{0}^{\ast }\left (x\right )=y_{0}\left (x\right )\) with \(y_{0}\left (x\right )\) defined by (3.24). By (3.23) we define a recursion formula:

$$\displaystyle \begin{aligned} \left(T^{m}y_{0}^{\ast}\right)\left(x\right)=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x}\frac{f\left[t,\left(T^{m-1}y_{0}^{\ast}\right)\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t \quad \left(m=1,2,3,\dots\right). \end{aligned} $$
(3.28)

If we denote \(y_{m}\left (x\right ) =\left (T^{m}y_{0}^{\ast }\right )\left (x\right ) \), then the last equation takes the following form:

$$\displaystyle \begin{aligned} y_{m}\left(x\right)=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x}\frac{f\left[t,y_{m-1}\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t \quad \left(m=1,2,3,\dots\right) \end{aligned} $$
(3.29)

and hence (3.27) can be rewritten as follows:

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert y_{m}-y^{\ast}\right\Vert {}_{L\left(a,x_{1}\right)}=0. \end{aligned} $$
(3.30)
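The iteration (3.29) can also be carried out numerically. The following sketch is our illustration (the names `frac_integral` and `picard` are ours, not from [39]): it runs the successive approximations on a grid, using a left-endpoint product-rectangle quadrature for \(I_{a+}^{\mu}\). For \(f\left(x,y\right)=-y\) (so A = 1) and the Caputo-type case ν = 1, where \(y_{0}\) is the constant c, the discrete L1 differences contract at least at the rate ω of (3.25):

```python
import math

def frac_integral(vals, h, mu):
    """Discrete R-L integral (I_{0+}^mu g)(x_i), left-endpoint product-rectangle rule."""
    out = [0.0] * len(vals)
    for i in range(1, len(vals)):
        s = sum((((i - k) * h) ** mu - ((i - k - 1) * h) ** mu) * vals[k]
                for k in range(i))
        out[i] = s / (mu * math.gamma(mu))
    return out

def picard(f, y0_vals, h, mu, iters):
    """Successive approximations y_m = y_0 + I^mu f(., y_{m-1}) of Eq. (3.29);
    returns the last iterate and the discrete L1 norms ||y_m - y_{m-1}||."""
    xs = [i * h for i in range(len(y0_vals))]
    y, diffs = list(y0_vals), []
    for _ in range(iters):
        fy = [f(x, v) for x, v in zip(xs, y)]
        y_new = [y0 + g for y0, g in zip(y0_vals, frac_integral(fy, h, mu))]
        diffs.append(h * sum(abs(u - v) for u, v in zip(y_new, y)))
        y = y_new
    return y, diffs

# contraction factor of (3.22)/(3.25) on [0, x1] with x1 = 0.2, mu = 0.6, A = 1
mu, x1, n = 0.6, 0.2, 100
omega = x1 ** mu / math.gamma(mu + 1)      # about 0.43 < 1
_, diffs = picard(lambda x, v: -v, [1.0] * (n + 1), x1 / n, mu, 12)
```

With these parameters the successive differences shrink geometrically, mirroring the Banach fixed point argument; a telescoping bound on the quadrature weights shows the discrete ratios never exceed ω.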

This means that the method of successive approximations has been applied to find the unique solution \(y^{\ast }\left (x\right )\) of the integral equation (3.17) on \(\left [a,x_{1}\right ]\). Next we consider the interval \(\left [x_{1},x_{2}\right ]\), where \(x_{2}=x_{1}+h_{1}\), with \(h_{1}>0\) such that \(x_{2}<b\). Rewrite Eq. (3.17) in the form

$$\displaystyle \begin{aligned} y\left(x\right)=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x_{1}}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{x_{1}}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t. \end{aligned} $$
(3.31)

Since the function \(y\left (t\right )\) is uniquely defined on the interval \(\left [a,x_{1}\right ]\), the integral over \(\left [a,x_{1}\right ]\) can be considered as a known function, and then

$$\displaystyle \begin{aligned} y\left(x\right)=y_{01}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{x_{1}}^{x}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t, \end{aligned} $$
(3.32)

where

$$\displaystyle \begin{aligned} y_{01}\left(x\right)=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\int\limits_{a+}^{x_{1}}\frac{f\left[t,y\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t \end{aligned} $$
(3.33)

is a known function. Using the same arguments as above, it follows that there exists a unique solution \(y^{\ast }\left (x\right )\in L\left (x_{1},x_{2}\right )\) of Eq. (3.17) on the interval \(\left [x_{1},x_{2}\right ]\). Taking the next interval \(\left [x_{2},x_{3}\right ]\), where \(x_{3}=x_{2}+h_{2}\), \(h_{2}>0\), \(x_{3}<b\), and repeating the process, one concludes that there exists a unique solution \(y^{\ast }\left (x\right )\in L\left (a,b\right )\) of (3.17). Thus, there exists a unique solution \(y\left (x\right )=y^{\ast }\left (x\right )\in L\left (a,b\right )\) to the Volterra integral equation (3.17) and hence to the Cauchy type problem. To complete the proof of the theorem, one must show that this unique solution \(y\left (x\right )\in L\left (a,b\right )\) belongs to the space \(L_{a+}^{\mu ,\nu }\left (a,b\right )\). It is sufficient to prove that \(\left (D_{a+}^{\mu ,\nu }y\right )\left (x\right )\in L\left (a,b\right )\). By the above proof, the solution \(y\left (x\right )\in L\left (a,b\right )\) is the limit of the sequence \(y_{m}\left (x\right )\in L\left (a,b\right )\):

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert y_{m}-y\right\Vert {}_{L\left(a,b\right)}=0, \end{aligned} $$
(3.34)

with the choice of certain y m on each \(\left [a,x_{1}\right ]\), \(\left [x_{1},x_{2}\right ]\), …, \(\left [x_{L-1},b\right ]\). Since

$$\displaystyle \begin{aligned} \left\Vert D_{a+}^{\mu,\nu}y_{m}-D_{a+}^{\mu,\nu}y\right\Vert {}_{1}=\left\Vert f\left(x,y_{m}\right)-f\left(x,y\right)\right\Vert {}_{1}\leq A\left\Vert y_{m}-y\right\Vert {}_{1}, \end{aligned} $$
(3.35)

by (3.34), one obtains

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert D_{a+}^{\mu,\nu}y_{m}-D_{a+}^{\mu,\nu}y\right\Vert {}_{1}=0, \end{aligned} $$
(3.36)

and hence \(\left (D_{a+}^{\mu ,\nu }y\right )\left (x\right )\in L\left (a,b\right )\). This completes the proof of the theorem.
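For orientation, condition (3.22) can be made explicit: it restricts how far the first interval may reach, and the admissible length shrinks as the Lipschitz constant grows. Indeed,

$$\displaystyle \begin{aligned} A\frac{\left(x_{1}-a\right)^{\mu}}{\varGamma\left(\mu+1\right)}<1 \iff x_{1}-a<\left(\frac{\varGamma\left(\mu+1\right)}{A}\right)^{1/\mu}, \end{aligned}$$

so for A = 2 and μ = 1∕2 one may take any \(x_{1}-a<\left(\varGamma\left(3/2\right)/2\right)^{2}=\pi/16\approx0.196\), after which the construction is continued piecewise as in the proof above.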

3.3 Generalized Cauchy Type Problems

Here we study a Cauchy type problem for general n-term nonlinear fractional differential equations with generalized fractional derivatives of arbitrary orders and types [39]:

$$\displaystyle \begin{aligned} \left(D_{a+}^{\mu,\nu}y\right)\left(x\right)=f\left[x,y\left(x\right), \left( D_{a+}^{\mu_{1},\nu_{1}}y\right)\left(x\right), \left(D_{a+}^{\mu_{2},\nu_{2}}y\right)\left(x\right),\dots,\left(D_{a+}^{\mu_{n-1},\nu_{n-1}}y\right)\left(x\right)\right], \end{aligned} $$
(3.37)

with n initial values:

$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim}\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\left(I_{a+}^{\left(n-\mu \right)\left(1-\nu\right)}y\right)\left(x\right)=c_{k}, \quad c_{k}\in\mathrm{R} \quad \left( k=0,1,2,\dots,n-1\right). \end{aligned} $$
(3.38)

As a special case, we consider the fractional differential equation with the initial value

$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim}\left(I_{a+}^{\left(n-\mu\right)\left(1-\nu \right)}y\right)\left(x\right)=c, \quad c\in\mathrm{R}. \end{aligned} $$
(3.39)

Proposition 3.3 ([39])

Let 0 ≤ ν ≤ 1, 0 ≤ ν_i ≤ 1 and μ, μ_i ∈R, n − 1 < μ ≤ n, n ∈N, n_i − 1 < μ_i ≤ n_i, i = 1, 2, …, n − 1, be such that 0 < μ_1 < μ_2 < ⋯ < μ_{n−1} < μ, n ≥ 2. Let G be an open set in R^n and let \(f:\left (a,b\right ]\times G\rightarrow \mathrm {R}\)be a function such that \(f\left (x,y,y_{1},y_{2},\dots ,y_{n-1}\right )\in L\left (a,b\right )\)for any \(\left (y,y_{1},y_{2},\dots ,y_{n-1}\right )\in G\). If \(y\left (x\right )\in L\left (a,b\right )\), \(I_{a+}^{\left (n-\mu \right )\left (1-\nu \right )}y\in AC^{k}\left [a,b\right ]\), 0 ≤ k ≤ n − 1, then \(y\left (x\right )\)satisfies a.e. the relations (3.37) and (3.38) if and only if \(y\left (x\right )\)satisfies a.e. the integral equation

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{k=0}^{n-1}c_{k}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}+\frac{1}{\varGamma\left(\mu\right)}\\ &\quad \times \int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right),\left(D_{a+}^{\mu _{1},\nu_{1}}y\right)\left(t\right),\left(D_{a+}^{\mu_{2},\nu_{2}}y\right)\left(t\right),\dots,\left(D_{a+}^{\mu_{n-1},\nu_{n-1}}y\right)\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t, \end{aligned} $$
(3.40)

x > a. In particular, if 0 < μ < 1, then \(y\left (x\right )\)satisfies a.e. the relations (3.37) and (3.39) if and only if \(y\left (x\right )\)satisfies a.e. the integral equation

$$\displaystyle \begin{aligned} y\left(x\right)&=c\frac{\left(x-a\right)^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}+\frac{1}{\varGamma\left(\mu\right)}\\ &\quad \times \int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right),\left(D_{a+}^{\mu_{1},\nu_{1}}y\right)\left(t\right),\left(D_{a+}^{\mu_{2},\nu_{2}}y\right)\left(t\right),\dots,\left(D_{a+}^{\mu_{n-1},\nu_{n-1}}y\right)\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t, \end{aligned} $$
(3.41)

x > a.

Theorem 3.2 ([39])

Let the conditions of the previous proposition be valid, and let the function \(f\left (x,y,y_{1},y_{2},\dots ,y_{n-1}\right )\) satisfy the Lipschitzian-type condition:

$$\displaystyle \begin{aligned} \left\vert f\left(x,y,y_{1},y_{2},\dots,y_{n-1}\right)-f\left(x,Y,Y_{1},Y_{2},\dots,Y_{n-1}\right)\right\vert\leq A\sum_{j=0}^{n-1}\left\vert y_{j}-Y_{j}\right\vert \end{aligned} $$
(3.42)

for all \(x\in \left (a,b\right ]\)and \(\left (y,y_{1},y_{2},\dots ,y_{n-1}\right ),\left (Y,Y_{1},Y_{2},\dots ,Y_{n-1}\right )\in G\) (with the convention \(y_{0}=y\), \(Y_{0}=Y\)), where A > 0 does not depend on \(x\in \left (a,b\right ]\). Let

$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim}\frac{\mathrm{d}^{k_{i}}}{\mathrm{d}x^{k_{i}}}\left(I_{a+}^{\left(n-\mu_{i}\right)\left( 1-\nu_{i}\right)}y\right)\left(x\right)=b_{k_{i}}, \quad \left(k_{i}=0,1,\dots,n_{i}-1;\ i=1,2,\dots,n-1\right), \end{aligned} $$
(3.43)

be fixed numbers, where \(n_{i}=\left [\mu _{i}\right ]+1\)for μ_i∉N and n_i = μ_i for μ_i ∈N. Then there exists a unique solution \(y\left (x\right )\)to the Cauchy type problem (3.37)–(3.38) in the space \(L_{a+}^{\mu ,\nu }\left (a,b\right )\). In particular, if 0 < μ < 1 and

$$\displaystyle \begin{aligned} \underset{x\rightarrow a+}{\lim}\left(I_{a+}^{\left(n-\mu_{i}\right)\left(1-\nu_{i}\right)}y\right)\left(x\right)=b_{i}, \quad i=1,2,\dots,n-1, \end{aligned}$$

are fixed numbers, then there exists a unique solution \(y\left (x\right )\in L_{a+}^{\mu ,\nu }\left (a,b\right )\)to the Cauchy type problem (3.37), (3.39).

Proof

This theorem can be proved in a way similar to the proof of Theorem 3.1. By Proposition 3.3 it is sufficient to establish the existence of a unique solution \(y\left ( x\right ) \in L\left ( a,b\right )\) to the integral equation (3.40). We choose \(x_{1}\in \left ( a,b\right ) \) such that the condition

$$\displaystyle \begin{aligned} A\sum_{j=0}^{n-1}\left[\frac{\left(x_{1}-a\right)^{\mu-\mu_{j}}}{\varGamma\left(\mu-\mu_{j}+1\right)}\right]<1 \quad \left(\mu_{0}:=0\right) \end{aligned} $$
(3.44)

holds and apply the Banach fixed point theorem to prove the existence of a unique solution \(y\left (x\right )=y^{\ast }\left (x\right )\in L\left (a,x_{1}\right )\). We use the space \(L\left (a,b\right )\) and rewrite Eq. (3.40) in the form \(y\left (x\right )=\left (Ty\right ) \left (x\right )\), where

$$\displaystyle \begin{aligned} \left(Ty\right)\left(x\right)&=y_{0}\left(x\right)+\frac{1}{\varGamma\left(\mu\right)}\\&\quad \times \int\limits_{a+}^{x}\frac{f\left[t,y\left(t\right),\left(D_{a+}^{\mu_{1},\nu_{1}}y\right)\left(t\right),\left(D_{a+}^{\mu_{2},\nu_{2}}y\right)\left(t\right),\dots,\left(D_{a+}^{\mu_{n-1},\nu_{n-1}}y\right)\left(t\right)\right]}{\left(x-t\right)^{1-\mu}}\,\mathrm{d}t, \end{aligned} $$
(3.45)

and

$$\displaystyle \begin{aligned} y_{0}\left(x\right)=\sum_{k=0}^{n-1}c_{k}\frac{\left(x-a\right)^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)} \end{aligned} $$
(3.46)

By the Lipschitzian condition (3.42), we obtain

$$\displaystyle \begin{aligned} &\left\vert f\left(x,y_{1},D_{a+}^{\mu_{1},\nu_{1}}y_{1},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y_{1}\right)-f\left(x,y_{2},D_{a+}^{\mu_{1},\nu_{1}}y_{2},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y_{2}\right)\right\vert\\ &\quad \leq A\left(\left\vert y_{1}-y_{2}\right\vert+\sum_{j=1}^{n-1}\left\vert\left(D_{a+}^{\mu_{j},\nu_{j}}y_{1}\right)\left(x\right)-\left(D_{a+}^{\mu_{j},\nu_{j}}y_{2}\right)\left(x\right)\right\vert\right). \end{aligned} $$
(3.47)

By the assumptions of the theorem,

$$\displaystyle \begin{aligned} \frac{\mathrm{d}^{k_{j}}}{\mathrm{d}x^{k_{j}}}\left(I_{a+}^{\left(1-\nu_{j}\right)\left(n-\mu_{j}\right)}\left(y_{1}\right)\right)\left(a+\right)=\frac{\mathrm{d}^{k_{j}}}{\mathrm{d}x^{k_{j}}}\left(I_{a+}^{\left(1-\nu_{j}\right)\left(n-\mu_{j}\right)}\left(y_{2}\right)\right)\left(a+\right), \end{aligned}$$

and hence, for any \(x\in \left [a,b\right ]\),

$$\displaystyle \begin{aligned} &\left\vert\left\{I_{a+}^{\mu}\left[f\left(x,y_{1},D_{a+}^{\mu_{1},\nu _{1}}y_{1},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y_{1}\right) \right.\right.\right. \\ & \qquad \left.\left.\left.-f\left(x,y_{2},D_{a+}^{\mu_{1},\nu_{1}}y_{2},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y_{2}\right)\right]\right\}\left(x\right)\right\vert\\ &\quad \leq A\sum_{j=0}^{n-1}\left(I_{a+}^{\mu-\mu_{j}}\left\vert y_{1}-y_{2}\right\vert\right)\left(x\right) \quad \left(\mu_{0}:=0\right). \end{aligned} $$
(3.48)

Using this relation with x = x 1 and applying Lemma 2.1 with b = x 1, we derive the estimate:

$$\displaystyle \begin{aligned} \left\Vert\left(Ty_{1}\right)\left(x\right)-\left(Ty_{2}\right)\left(x\right)\right\Vert {}_{L\left(a,x_{1}\right)}\leq\omega\left\Vert y_{1}-y_{2}\right\Vert {}_{L\left(a,x_{1}\right)}, \end{aligned} $$
(3.49)
$$\displaystyle \begin{aligned} \omega=A\sum_{j=0}^{n-1}\left[\frac{\left(x_{1}-a\right)^{\mu-\mu_{j}}}{\varGamma\left(\mu-\mu_{j}+1\right)}\right] \quad \left(\mu_{0}:=0\right), \end{aligned}$$

which yields the existence of a unique solution \(y^{\ast }\left (x\right )\) to Eq. (3.40) in \(L\left (a,x_{1}\right )\). This solution is obtained as a limit of the convergent sequence \(\left (T^{m}y_{0}^{\ast }\right )\left (x\right )=y_{m}\left (x\right )\), for which the relations

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert T^{m}y_{0}^{\ast}-y^{\ast}\right\Vert {}_{L\left(a,x_{1}\right)}=0, \end{aligned} $$
(3.50)

and

$$\displaystyle \begin{aligned} \underset{m\rightarrow\infty}{\lim}\left\Vert y_{m}-y^{\ast}\right\Vert {}_{L\left(a,x_{1}\right)}=0 \end{aligned} $$
(3.51)

hold. We can show also that there exists a unique solution \(y\left (x\right )\in L\left (a,b\right )\) to the integral equation (3.40), i.e., to the Cauchy type problem (3.37)–(3.38) such that \(\left (D_{a+}^{\mu ,\nu }y\right )\left (x\right )\in L\left (a,b\right )\). Namely,

$$\displaystyle \begin{aligned} \left\Vert D_{a+}^{\mu,\nu}y_{m}-D_{a+}^{\mu,\nu}y^{\ast}\right\Vert {}_{1}&=\left\Vert f\left(x,y_{m},D_{a+}^{\mu_{1},\nu_{1}}y_{m},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y_{m}\right)\right.\\ &\left.\qquad -f\left(x,y^{\ast},D_{a+}^{\mu_{1},\nu_{1}}y^{\ast},\dots,D_{a+}^{\mu_{n-1},\nu_{n-1}}y^{\ast}\right)\right\Vert {}_{1}\\ & \leq\omega\left\Vert y_{m}-y^{\ast}\right\Vert {}_{1}\rightarrow0, \quad m\rightarrow\infty. \end{aligned} $$
(3.52)

In particular, if 0 < μ < 1, then there exists a unique solution \(y\left (x\right )\in L_{a+}^{\mu ,\nu }\left (a,b\right )\) to the Cauchy type problem (3.37)–(3.39).

3.4 Equations of Volterra Type

Many authors have applied methods of fractional integro-differentiation to construct solutions of ordinary differential equations of fractional order, to investigate integro-differential equations, and to obtain a unified theory of special functions. The methods and results in these fields are presented by Samko et al. [33], Kiryakova [21], Kilbas et al. [20], etc. We also mention the paper by Tuan and Al-Saqabi [41], where, using an operational method, they solved a fractional integro-differential equation of Volterra type of the form

$$\displaystyle \begin{aligned} \left(D_{0+}^{\alpha}f\right)\left(x\right)+\frac{a}{\varGamma\left(\nu\right)}\int\limits_{0}^{x}\left(x-t\right)^{\nu-1}f\left(t\right)\,\mathrm{d}t=g\left(x\right), \end{aligned} $$
(3.53)

\(\Re \left (\alpha \right )>0\), \(\Re \left (\nu \right )>0\), a ∈C, \(g\in L\left [0,b\right ]\).
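To see where the Mittag-Leffler structure of such solutions comes from, a formal Laplace-transform computation is instructive (a sketch under the additional assumptions 0 < α ≤ 1, real parameters, and a single initial value \(b=\left(I_{0+}^{1-\alpha}f\right)\left(0+\right)\)): since \(\mathcal{L}\left\{D_{0+}^{\alpha}f\right\}\left(s\right)=s^{\alpha}F\left(s\right)-b\) and \(\mathcal{L}\left\{I_{0+}^{\nu}f\right\}\left(s\right)=s^{-\nu}F\left(s\right)\), Eq. (3.53) gives

$$\displaystyle \begin{aligned} F\left(s\right)=\frac{s^{\nu}}{s^{\alpha+\nu}+a}\left(G\left(s\right)+b\right), \end{aligned}$$

and inverting with \(\mathcal{L}\left\{x^{\beta-1}E_{\rho,\beta}\left(-ax^{\rho}\right)\right\}\left(s\right)=s^{\rho-\beta}/\left(s^{\rho}+a\right)\), here with ρ = α + ν and β = α, yields

$$\displaystyle \begin{aligned} f\left(x\right)=b\,x^{\alpha-1}E_{\alpha+\nu,\alpha}\left(-ax^{\alpha+\nu}\right)+\int\limits_{0}^{x}\left(x-t\right)^{\alpha-1}E_{\alpha+\nu,\alpha}\left(-a\left(x-t\right)^{\alpha+\nu}\right)g\left(t\right)\,\mathrm{d}t, \end{aligned}$$

which matches the form of the operational solution obtained in [41].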

Kilbas et al. [20] established an explicit solution of the Cauchy type problem for the equation

$$\displaystyle \begin{aligned} \left(D_{a+}^{\alpha}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\rho,\alpha}^{\omega;\gamma,1}y\right)\left(x\right)+f\left(x\right), \end{aligned} $$
(3.54)
$$\displaystyle \begin{aligned} \left(0<x\leq b,\alpha\in\mathrm{C},\Re\left(\alpha\right)>0,\lambda,\gamma,\rho,\omega\in\mathrm{C}\right) \end{aligned}$$

under the initial values

$$\displaystyle \begin{aligned} \left(D_{a+}^{\alpha-k}y\right)\left(a+\right)=b_{k}, \quad b_{k}\in\mathrm{C} \quad \left(k=1,2,\dots,n\right), \end{aligned} $$
(3.55)

where \(n=\left[\Re \left (\alpha \right )\right]+1\) for α∉N and n = α for α ∈N, in terms of the generalized Mittag-Leffler functions. The homogeneous equation, corresponding to the case \(f\left (x\right )=0\), is a generalization of the equation which describes the unsaturated behavior of the free electron laser. In Ref. [37], Srivastava and Tomovski, using the Laplace transform method, gave an explicit solution in the space \(L\left (0,b\right ]\) of the following Cauchy type problem with a = 0 and \(\varphi \left (x\right )=1\), \(x\in \left (0,b\right ]\)

$$\displaystyle \begin{aligned} \left(D_{0+}^{\mu,\nu}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta}^{\omega;\gamma,\mathbf{k}}1\right)\left(x\right)+f\left(x\right) \quad \left(0<x\leq b\right) \end{aligned} $$
(3.56)
$$\displaystyle \begin{aligned} \left(\alpha,\beta,\gamma,\omega\in\mathrm{C}, \Re\left(\alpha\right)>\max\left\{0,\Re\left(\mathbf{k}\right)-1\right\},\min\left\{\Re\left(\beta\right),\Re\left(\gamma\right),\Re\left(\mathbf{k}\right)\right\}>0\right) \end{aligned}$$

under the initial values

$$\displaystyle \begin{aligned} \left(I_{0+}^{\left(1-\mu\right)\left(1-\alpha\right)}y\right)\left(0+\right)=c. \end{aligned} $$
(3.57)

Here, by using the method of successive approximations (and later by the Laplace transform method), we shall give an explicit solution, in the space \(L\left (0,b\right ]\), of a Cauchy type problem for a fractional differential equation more general than (3.56), which contains the composite fractional derivative operator (2.14). This problem was proposed as an open problem by Srivastava and Tomovski in Ref. [37].

Theorem 3.3 ([39])

The following fractional integro-differential equation

$$\displaystyle \begin{aligned} \left(D_{0+}^{\mu,\nu}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta}^{\omega;\gamma,\mathbf{k}}y\right)\left(x\right)+f\left(x\right) \quad \left(0<x\leq b\right) \end{aligned} $$
(3.58)

α, β, γ, ω ∈C, \(\Re \left (\alpha \right )>\max \left \{0,\Re \left (\mathbf {k}\right )-1\right \}\), \(\min \left \{\Re \left (\beta \right ),\Re \left (\gamma \right ), \Re \left (\mathbf {k}\right )\right \}>0\), \(f\in L\left [0,b\right ]\), with the initial values (3.11) with a = 0, and n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}\left(\underset{m-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\&\quad +\sum_{m=1}^{\infty}\lambda^{m}\sum_{k=0}^{n-1}c_{k}\,\{\underset{m-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\left[x^{\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k}\right.\\&\quad \left.\times E_{\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma,\mathbf{k}}\left(\omega x^{\alpha}\right)\right]\}\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+\sum_{k=0}^{n-1}\frac{c_{k}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{k-\left(n-\mu\right)\left(1-\nu\right)}, \end{aligned} $$
(3.59)

\(\left \vert \lambda \right \vert <1/\mathbf {M}\), where M is a positive constant given by (2.113) with a = 0. In particular, if 0 < μ < 1, under the initial values (3.57), Eq. (3.58) has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}\left(\underset{m-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\&\quad +\frac{c}{\varGamma\left(\mu+\nu-\mu\nu\right)}\sum_{m=1}^{\infty}\lambda^{m}\left\{\underset{m-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}x^{\left(\mu-1\right)\left(1-\nu\right)}\right\}\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}, \quad \left(\left\vert\lambda\right\vert<1/\mathbf{M}\right). \end{aligned} $$
(3.60)

where c is an arbitrary constant.

Proof

To prove this theorem we apply Proposition 3.2, according to which the Cauchy type problem (3.58)–(3.11) with a = 0 is equivalent to a Volterra integral equation of the second kind. By Proposition 3.1, we get:

$$\displaystyle \begin{aligned} y\left(x\right)&=\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)\\&\quad +\sum_{i=0}^{n-1}\frac{c_{i}}{\varGamma\left(i-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{i-\left(n-\mu\right)\left(1-\nu\right)}. \end{aligned} $$
(3.61)

By the theory of Volterra integral equations of the second kind, such an integral equation has a unique solution \(y\left (x\right )\in L\left (0,b\right ]\). To find the exact solution we apply the method of successive approximation. We consider the sequence \(\left \{y_{m}\left (x\right )\right \}_{m=0}^{\infty }\) defined by

$$\displaystyle \begin{aligned} y_{0}\left(x\right)=\sum_{i=0}^{n-1}\frac{c_{i}}{\varGamma\left(i-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{i-\left(n-\mu\right)\left(1-\nu\right)}, \end{aligned} $$
(3.62)
$$\displaystyle \begin{aligned} y_{m}\left(x\right)=y_{0}\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{m-1}\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right) \quad \left(m=1,2,3,\dots\right)\\ \end{aligned} $$
(3.63)

For m = 1,

$$\displaystyle \begin{aligned} y_{1}\left(x\right)=y_{0}\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{0}\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right). \end{aligned} $$
(3.64)

Here \(y_{2}\left (x\right )\) is

$$\displaystyle \begin{aligned} y_{2}\left(x\right)=y_{0}\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{1}\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right), \end{aligned} $$
(3.65)

and

$$\displaystyle \begin{aligned} y_{2}\left(x\right)&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\sum_{k=0}^{n-1}\frac{c_{k}x^{k-\left(n-\mu\right)\left(1-\nu\right)}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}\right)\\&\quad +\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\right)\left(\lambda\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{0}\right)\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}I_{0+}^{\mu}f\right)\left(x\right)\\&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+\lambda\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}D_{0+}^{\left(n-\mu\right)\left(1-\nu\right)}x^{k}\right)\\&\quad +\lambda^{2}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}D_{0+}^{\left(n-\mu\right)\left(1-\nu\right)}x^{k}\right)\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+\lambda\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu -\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right)\\&\quad +\lambda\left(\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)+\lambda^{2}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right). \end{aligned} $$
(3.66)

Similarly, for m = 3, we have

$$\displaystyle \begin{aligned} y_{3}\left(x\right)&= y_{0}\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{2}\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)\\ & =y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+\lambda\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}D_{0+}^{\left(n-\mu\right)\left(1-\nu\right)}x^{k}\right) \\ &\quad +\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}I_{0+}^{\mu}f\right)\left(x\right)\\ &\quad +\lambda^{2}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right)\left(x\right) \\ &\quad +\lambda^{2}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\ &\quad +\lambda^{3}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\underset{2}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right)\left(x\right)\\ & =y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+\lambda\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right) \\ &\quad +\lambda\left(\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\ &\quad +\lambda^{2}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right)\\ &\quad 
+\lambda^{2}\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\ &\quad +\lambda^{3}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\underset{2}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right)\left(x\right). \end{aligned} $$
(3.67)

Continuing this process, we obtain

$$\displaystyle \begin{aligned} y_{m}\left(x\right)&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)\\ &\quad +\sum_{j=1}^{m-1}\lambda ^{j}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)+\sum_{j=1}^{m}\lambda^{j}\\ &\quad \times\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right), \end{aligned} $$
(3.68)

for all m ∈N.
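The structure of these iterations is the classical Picard scheme for a Volterra equation of the second kind. As a purely numerical illustration (with the trivial kernel in place of the Prabhakar-type operator \(\mathcal{E}\), and with hypothetical helper names), the following Python sketch iterates \(y_{m}=y_{0}+\lambda \int _{0}^{x}y_{m-1}\left (t\right )\mathrm{d}t\) for the model equation whose exact solution is \(e^{\lambda x}\):

```python
import math

def picard_iterates(lam, b=1.0, n_grid=200, n_iter=25):
    """Successive approximations y_m = y_0 + lam * (cumulative integral of y_{m-1})
    for the model Volterra equation y(x) = 1 + lam * int_0^x y(t) dt,
    whose exact solution is exp(lam * x).  Trapezoidal rule on a uniform grid."""
    h = b / n_grid
    xs = [i * h for i in range(n_grid + 1)]
    y = [1.0] * (n_grid + 1)                      # y_0(x) = 1
    for _ in range(n_iter):
        cum = [0.0] * (n_grid + 1)                # cumulative trapezoidal integral
        for i in range(1, n_grid + 1):
            cum[i] = cum[i - 1] + 0.5 * h * (y[i - 1] + y[i])
        y = [1.0 + lam * cum[i] for i in range(n_grid + 1)]
    return xs, y

xs, y = picard_iterates(lam=0.5)
err = max(abs(yi - math.exp(0.5 * xi)) for xi, yi in zip(xs, y))
```

After a few dozen iterations the iterates agree with the fixed point of the discretized equation, and the remaining error is only that of the trapezoidal rule.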

The series

$$\displaystyle \begin{aligned} \sum\limits_{j=1}^{\infty}\lambda^{j}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta +\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right) \end{aligned}$$

for all \(x\in \left (0,b\right ]\) and \(\left \vert \lambda \right \vert <1/\mathbf {M}\) is convergent, which can be verified as follows. We have the estimate

$$\displaystyle \begin{aligned} &\left\Vert\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right\Vert {}_{1}\\&\quad \leq\mathbf{M}\left\Vert\underset{j-2}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right\Vert {}_{1}\\&\quad \leq{\mathbf{M}}^{2}\left\Vert\underset{j-3}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right\Vert {}_{1}\leq\dots\\&\quad \leq{\mathbf{M}}^{j-1}\left\Vert\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right\Vert {}_{1}\leq{\mathbf{M}}^{j}\left\Vert f\right\Vert {}_{1}. \end{aligned} $$
(3.69)

By applying the Weierstrass M-test we obtain that the series

$$\displaystyle \begin{aligned} \sum_{j=1}^{\infty}\lambda^{j}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right) \end{aligned} $$

converges uniformly for all \(x\in \left (0,b\right ]\) and \(\left \vert \lambda \right \vert <\frac {1}{\mathbf {M}}\), where M is the constant given by the series (2.113). Analogously, we can verify that the series

$$\displaystyle \begin{aligned} \sum_{j=1}^{\infty}\lambda^{j}\sum_{k=0}^{n-1}\frac{c_{k}}{k!}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\right) \end{aligned} $$

also converges uniformly for all \(x\in \left (0,b\right ]\), since the numerical series

$$\displaystyle \begin{aligned} \sum\limits_{j=1}^{\infty}\left(\left\vert\lambda\right\vert\mathbf{M}\right)^{j}\sum\limits_{k=0}^{n-1}\frac{\left\vert c_{k}\right\vert}{k!}\left\Vert x\right\Vert {}_{1}^{k} \end{aligned} $$

is convergent for all \(\left \vert \lambda \right \vert <\frac {1}{\mathbf {M}}\). Letting m → ∞ in (3.68) and applying the formula [37, p. 203, Eq. (2.22)]

$$\displaystyle \begin{aligned} &\mathcal{E}_{0+;\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\omega;\gamma,\mathbf{k}}x^{k}\\&\quad =\int\limits_{0}^{x}\left(x-t\right)^{\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)-1}t^{k}E_{\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)}^{\gamma,\mathbf{k}}\left(\omega\left(x-t\right)^{\alpha}\right)\,\mathrm{d}t\\&\quad =\varGamma \left( k+1\right) x^{\beta +\mu -\left( n-\mu \right) \left(1-\nu \right) +k}\ E_{\alpha ,\beta +\mu -\left( n-\mu\right) \left( 1-\nu \right) +k+1}^{\gamma ,\mathbf{k}}\left( \omega x^{\alpha }\right) \end{aligned} $$
(3.70)

we obtain the following representation for the solution \(y\left (x\right )\):

$$\displaystyle \begin{aligned} y\left(x\right)&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)\\ &\quad +\sum_{j=1}^{\infty}\lambda^{j}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\ &\quad +\sum_{j=1}^{\infty}\lambda^{j}\sum_{k=0}^{n-1}c_{k}\\ &\quad \times\left\{\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\right. \\ &\quad \times\left.\left[x^{\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k}E_{\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma,\mathbf{k}}\left(\omega x^{\alpha}\right)\right]\right\}. \end{aligned} $$
(3.71)

In particular, if 0 < μ < 1 one has

$$\displaystyle \begin{aligned} y\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)+c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}. \end{aligned} $$
(3.72)

We consider the sequence \(y_{m}\left (x\right )\):

$$\displaystyle \begin{aligned} y_{m}\left(x\right)=y_{0}\left(x\right)+\lambda\left(\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}y_{m-1}\right)\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right), \quad \left(m=1,2,3,\dots\right)\\ \end{aligned} $$
(3.73)

where

$$\displaystyle \begin{aligned} y_{0}\left(x\right)=c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}. \end{aligned}$$

Following the above process of successive approximations, we obtain:

$$\displaystyle \begin{aligned} y_{m}\left(x\right)&=y_{0}\left(x\right)+\left(I_{0+}^{\mu}f\right)\left(x\right)\\&\quad +\sum_{j=1}^{m-1}\lambda ^{j}\left(\underset{j-1}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}\mathcal{E}_{0+;\alpha,\beta+2\mu}^{\omega;\gamma,\mathbf{k}}f\right)\left(x\right)\\&\quad +\frac{c}{\varGamma\left(\mu+\nu-\mu\nu\right)}\\&\quad \times\sum_{j=1}^{m}\lambda^{j}\left(\underset{j}{\underbrace{\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}\dots\mathcal{E}_{0+;\alpha,\beta+\mu}^{\omega;\gamma,\mathbf{k}}}}x^{\left(\mu-1\right)\left(1-\nu\right)}\right) \end{aligned} $$
(3.74)

Letting m → ∞ in the last sequence, we obtain the solution (3.60), which completes the proof of the theorem.

By applying the integral formula (2.109) we obtain the following theorem (the case k = 1):

Theorem 3.4 ([39])

The following fractional integro-differential equation

$$\displaystyle \begin{aligned} \left(D_{0+}^{\mu,\nu}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta}^{\omega;\gamma}y\right)\left(x\right)+f\left(x\right), \quad \left(0<x\leq b\right) \end{aligned} $$
(3.75)

α, β, γ, ω, λ ∈C, \(\Re \left (\alpha \right ),\Re \left (\beta \right )>0\), with the initial values (3.11) and n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}\left(\mathcal{E}_{0+;\alpha,\left(\beta+\mu\right)m+\mu}^{\omega;\gamma m}f\right)\left(x\right)+\sum_{m=1}^{\infty}\lambda^{m}\sum_{k=0}^{n-1}c_{k}\\&\quad \times\left\{\mathcal{E}_{0+;\alpha,\left(\beta+\mu\right)\left(m-1\right)}^{\omega;\gamma\left(m-1\right)}\left[x^{\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k}E_{\alpha,\beta+\mu-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma}\left(\omega x^{\alpha}\right)\right]\right\} \\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+\sum_{k=0}^{n-1}\frac{c_{k}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{k-\left(n-\mu\right)\left(1-\nu \right)}, \end{aligned} $$
(3.76)

i.e.,

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}\left(\mathcal{E}_{0+;\alpha,\left(\beta+\mu\right)m+\mu}^{\omega;\gamma m}f\right)\left(x\right)\\&\quad +\sum_{m=1}^{\infty }\lambda^{m}\sum_{k=0}^{n-1}c_{k}\left[x^{\left(\beta+\mu\right)m-\left( n-\mu\right)\left(1-\nu\right)+k}E_{\alpha,\left(\beta+\mu\right)m-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma m}\left(\omega x^{\alpha}\right)\right]\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+\sum_{k=0}^{n-1}\frac{c_{k}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{k-\left(n-\mu\right)\left(1-\nu\right)}, \\ & \qquad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime }\right). \end{aligned} $$
(3.77)

In particular, if 0 < μ < 1, under the initial value (3.57), Eq. (3.75) has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}\left(\mathcal{E}_{0+;\alpha,\left(\beta+\mu\right)m+\mu}^{\omega;\gamma m}f\right)\left(x\right)\\&\quad +c\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right)m-\left(1-\mu\right)\left(1-\nu\right)}E_{\alpha,\left(\beta+\mu\right)m-\left(1-\mu\right)\left(1-\nu\right)+1}^{\gamma m}\left(\omega x^{\alpha}\right)\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}, \quad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime}\right), \end{aligned} $$
(3.78)

where c is an arbitrary constant and M′ is a positive constant given by

$$\displaystyle \begin{aligned} {\mathbf{M}}^{\prime}=b^{\Re\left(\beta\right)}\sum_{n=0}^{\infty}\frac{\left\vert\left(\gamma\right)_{n}\right\vert}{\left\{\Re\left(\alpha\right)n+\Re\left(\beta\right)\right\}\left\vert\varGamma\left(\alpha n+\beta\right)\right\vert}\frac{\left\vert\omega b^{\Re\left(\alpha\right)}\right\vert^{n}}{n!}. \end{aligned} $$
(3.79)
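For real parameters, both the generalized Mittag-Leffler function and the constant M′ of (3.79) are plain power series and can be evaluated by direct truncation. A minimal Python sketch (function names are ours; real positive parameters assumed, no convergence safeguards):

```python
import math

def prabhakar_E(alpha, beta, gamma, z, terms=60):
    """Truncated series for the three-parameter Mittag-Leffler function
    E^gamma_{alpha,beta}(z) = sum_{n>=0} (gamma)_n z^n / (Gamma(alpha*n+beta) n!)."""
    s, poch, fact = 0.0, 1.0, 1.0
    for n in range(terms):
        s += poch * z ** n / (math.gamma(alpha * n + beta) * fact)
        poch *= gamma + n          # (gamma)_{n+1} = (gamma)_n * (gamma + n)
        fact *= n + 1
    return s

def bound_M(alpha, beta, gamma, omega, b, terms=60):
    """The majorant M' of (3.79), truncated, for real positive parameters."""
    s, poch, fact = 0.0, 1.0, 1.0
    for n in range(terms):
        s += (abs(poch) * abs(omega * b ** alpha) ** n
              / ((alpha * n + beta) * abs(math.gamma(alpha * n + beta)) * fact))
        poch *= gamma + n
        fact *= n + 1
    return b ** beta * s
```

For γ = 1, α = β = 1 the series collapses to the exponential, \(E_{1,1}^{1}\left (z\right )=e^{z}\), which gives a quick sanity check; the solution series (3.77)–(3.78) converge precisely in the regime |λ| M′ < 1 stated in the theorem.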

If we put \(f\left (t\right )=t^{\epsilon -1}E_{\alpha ,\epsilon }^{\sigma }\left (\omega t^{\alpha }\right )\) in (3.75) and apply the formula (2.109), we get the following particular case of the solutions (3.77) and (3.78).

Corollary 3.1 ([39])

The following fractional integro-differential equation

$$\displaystyle \begin{aligned} \left(D_{0+}^{\mu,\nu}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta}^{\omega;\gamma}y\right)\left(x\right)+x^{\epsilon-1}E_{\alpha,\epsilon}^{\sigma}\left(\omega x^{\alpha}\right) \quad \left(0<x\leq b\right) \end{aligned} $$
(3.80)

α, β, γ, 𝜖, σ, ω, λ ∈C, \(\Re \left (\alpha \right ),\Re \left (\beta \right ),\Re \left (\epsilon \right )>0\), with the initial values (3.11) and n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right) m+\mu+\epsilon-1}E_{\alpha,\left(\beta+\mu\right)m+\mu+\epsilon}^{\gamma m+\sigma}\left(\omega x^{\alpha}\right)\\&\quad +\sum_{m=1}^{\infty}\lambda^{m}\sum_{k=0}^{n-1}c_{k}\left[x^{\left(\beta+\mu\right)m-\left(n-\mu\right)\left(1-\nu\right)+k}E_{\alpha,\left(\beta+\mu\right)m-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma m}\left(\omega x^{\alpha}\right)\right] \\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+\sum_{k=0}^{n-1}\frac{c_{k}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{k-\left(n-\mu\right)\left(1-\nu\right)}, \\ &\qquad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime}\right). \end{aligned} $$
(3.81)

In particular, if 0 < μ < 1, under the initial value (3.57), Eq. (3.80) has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right) m+\mu+\epsilon-1}E_{\alpha,\left(\beta+\mu\right)m+\mu+\epsilon}^{\gamma m+\sigma}\left(\omega x^{\alpha}\right)\\&\quad +c\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu \right)m-\left(1-\mu\right)\left(1-\nu\right)}E_{\alpha,\left(\beta+\mu\right)m-\left(1-\mu\right)\left(1-\nu\right)+1}^{\gamma m}\left(\omega x^{\alpha}\right)\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}, \quad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime}\right), \end{aligned} $$
(3.82)

where c is an arbitrary constant and M′ is a positive constant given by (3.79).

If we put \(f\left (t\right )=t^{\epsilon -1}\) in (3.75) and apply the formula (2.110), we get the following particular case of the solutions (3.77) and (3.78).

Corollary 3.2 ([39])

The following fractional integro-differential equation

$$\displaystyle \begin{aligned} \left(D_{0+}^{\mu,\nu}y\right)\left(x\right)=\lambda\left(\mathcal{E}_{0+;\alpha,\beta}^{\omega;\gamma}y\right)\left(x\right)+x^{\epsilon-1} \quad \left(0<x\leq b\right) \end{aligned} $$
(3.83)

α, β, γ, 𝜖, ω, λ ∈C, \(\Re \left (\alpha \right ),\Re \left (\beta \right ),\Re \left (\epsilon \right )>0\), with the initial values (3.11) and n − 1 < μ ≤ n, n ∈N, 0 ≤ ν ≤ 1, has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\varGamma\left(\epsilon\right)\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right)m+\mu+\epsilon}E_{\alpha,\left(\beta+\mu\right)m+\mu+\epsilon+1}^{\gamma m}\left(\omega x^{\alpha}\right)\\ &\quad +\sum_{m=1}^{\infty}\lambda^{m}\sum_{k=0}^{n-1}c_{k}\left[x^{\left(\beta+\mu\right)m-\left(n-\mu\right)\left(1-\nu\right)+k}E_{\alpha,\left(\beta+\mu\right)m-\left(n-\mu\right)\left(1-\nu\right)+k+1}^{\gamma m}\left(\omega x^{\alpha}\right)\right]\\ &\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+\sum_{k=0}^{n-1}\frac{c_{k}}{\varGamma\left(k-\left(n-\mu\right)\left(1-\nu\right)+1\right)}x^{k-\left(n-\mu\right)\left(1-\nu\right)}, \\ &\qquad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime}\right). \end{aligned} $$
(3.84)

In particular, if 0 < μ < 1, under the initial value (3.57), Eq. (3.83) has its solution in the space \(L\left (0,b\right ]\) given by

$$\displaystyle \begin{aligned} y\left(x\right)&=\varGamma\left(\epsilon\right)\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right)m+\mu+\epsilon}E_{\alpha,\left(\beta+\mu\right)m+\mu+\epsilon+1}^{\gamma m}\left(\omega x^{\alpha}\right)\\&\quad +c\sum_{m=1}^{\infty}\lambda^{m}x^{\left(\beta+\mu\right)m-\left(1-\mu\right)\left(1-\nu\right)}E_{\alpha,\left(\beta+\mu\right)m-\left(1-\mu\right)\left(1-\nu\right)+1}^{\gamma m}\left(\omega x^{\alpha}\right)\\&\quad +\left(I_{0+}^{\mu}f\right)\left(x\right)+c\frac{x^{\left(\mu-1\right)\left(1-\nu\right)}}{\varGamma\left(\mu+\nu-\mu\nu\right)}, \quad \left(\left\vert\lambda\right\vert<1/{\mathbf{M}}^{\prime}\right), \end{aligned} $$
(3.85)

where c is an arbitrary constant and M′ is a positive constant given by (3.79).

3.5 Operational Method for Solving Fractional Differential Equations

In the 1950s, Jan Mikusiński proposed a new approach to develop an operational calculus for the operator of differentiation [28]. This algebraic approach was based on the interpretation of the Laplace convolution as a multiplication in the ring of the continuous functions on the real half-axis. The Mikusiński operational calculus was successfully used in ordinary differential equations, integral equations, partial differential equations and in the theory of the special functions. It is worth mentioning that the Mikusiński scheme was extended by several mathematicians to develop operational calculi for differential operators with variable coefficients [7, 8, 27]. These operators are all particular cases of the so-called hyper-Bessel differential operator

$$\displaystyle \begin{aligned} (B\, y)(x) = x^{-\beta }\prod_{i=1}^n\left(\gamma_i +{1\over \beta }x{d\over dx}\right)\, y(x). \end{aligned} $$
(3.86)
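On power functions the operator (3.86) acts by elementary bookkeeping: each factor \(\gamma _i+\frac {1}{\beta }x\frac {d}{dx}\) multiplies \(x^{p}\) by \(\gamma _i+p/\beta \), and the prefactor \(x^{-\beta }\) lowers the exponent. A small Python sketch of this action (the helper name is ours; real parameters assumed):

```python
def hyper_bessel_power(beta, gammas, p):
    """Apply the hyper-Bessel operator (3.86) to x^p.
    Each factor (gamma_i + (1/beta) x d/dx) maps x^p to (gamma_i + p/beta) x^p;
    the prefactor x^{-beta} then shifts the exponent.  Returns (coef, power)
    such that (B x^p)(x) = coef * x**power."""
    coef = 1.0
    for g in gammas:
        coef *= g + p / beta
    return coef, p - beta

# beta = 1, n = 1, gamma_1 = 0 reduces B to d/dx: B x^3 = 3 x^2
```

Taking β = 1 with a single factor γ₁ = 0 recovers the ordinary derivative, which is the simplest sanity check on the formula.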

An operational calculus for the operator (3.86) was constructed in [6]. New results in the field of operational calculus have been presented by Luchko et al. in Refs. [13, 23, 24], where operational calculi for the R-L, the Caputo, and the more general multiple Erdélyi-Kober fractional derivatives have been constructed and applied to the solution of fractional differential equations and integral equations of the Abel type.

3.5.1 Properties of the Generalized Fractional Derivative with Types

The R-L, Caputo, and the composite fractional derivatives are defined as certain compositions of the R-L fractional integral and ordinary derivatives. It is clear that these operators play an important role in the development of the corresponding operational calculi and there should be some coinciding elements in the operational calculi for all three fractional derivatives.

We begin by defining the function space C γ, γ ∈ R, which was introduced for the first time in Ref. [6], devoted to the operational calculus for the hyper-Bessel differential operator.

Definition 3.1

A real or complex-valued function y is said to belong to the space C γ, γ ∈ R, if there exists a real number p, p > γ, such that

$$\displaystyle \begin{aligned} y(t) = t^py_1(t), \quad t>0 \end{aligned}$$

with a function y 1 ∈ C[0, ∞).

Clearly, C γ is a vector space and the set of spaces C γ is ordered by inclusion according to

$$\displaystyle \begin{aligned} C_\gamma \subset C_\delta \Leftrightarrow \gamma \ge \delta . \end{aligned} $$
(3.87)

Theorem 3.5 ([25])

The R-L fractional integral \(I_{0+}^{\alpha },\alpha \ge 0\), is a linear map of the space C γ, γ ≥−1, into itself, that is,

$$\displaystyle \begin{aligned} I_{0+}^{\alpha}:C_\gamma\to C_{\alpha+\gamma} \subset C_{\gamma}. \end{aligned}$$

For the proof of the theorem, see Ref. [25].

It is well known that the operator \(I_{0+}^{\alpha }\), α > 0 has a convolution representation in the space C γ, γ ≥−1:

$$\displaystyle \begin{aligned} (I_{0+}^{\alpha} y)(x) = (h_{\alpha} \circ y)(x), \ h_{\alpha}(x)=x^{\alpha -1}/\varGamma(\alpha), \quad y\in C_\gamma. \end{aligned} $$
(3.88)

Here

$$\displaystyle \begin{aligned} (g\circ f)(x)=\int_0^x g(x-t)f(t)\,\mathrm{d}t, \quad x>0 \end{aligned}$$

is the Laplace convolution. From the semi-group property (2.3) it follows

$$\displaystyle \begin{aligned} (\underbrace{I_{0+}^{\alpha} \ldots I_{0+}^{\alpha}}_n y)(x) = (I_{0+}^{n\alpha} y)(x), \quad y\in C_\gamma, \, \gamma\ge -1, \, \alpha\ge 0, \, n\in \mathrm{N}. \end{aligned} $$
(3.89)
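On power functions the R-L integral acts in closed form, \(I_{0+}^{\alpha }t^{p}=\frac {\varGamma (p+1)}{\varGamma (p+\alpha +1)}t^{p+\alpha }\), which makes the semi-group property (3.89) easy to check. A Python sketch under these assumptions (helper name is ours; real α ≥ 0, p > −1):

```python
import math

def rl_integral_power(alpha, p):
    """I^alpha maps t^p to coef * t^{p+alpha}; returns (coef, power)."""
    return math.gamma(p + 1) / math.gamma(p + alpha + 1), p + alpha

# semi-group property (3.89) on y(t) = t^p: I^alpha (I^alpha y) = I^{2 alpha} y
alpha, p = 0.4, 1.5
c1, p1 = rl_integral_power(alpha, p)            # first application
c2, p2 = rl_integral_power(alpha, p1)           # second application
coef_twice, power_twice = c1 * c2, p2
coef_direct, power_direct = rl_integral_power(2 * alpha, p)
```

The intermediate gamma factors telescope, Γ(p+α+1) cancelling between the two applications, which is exactly the mechanism behind (2.3).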

The composite fractional derivative \(D_{0+}^{\alpha ,\beta }\) is not defined on the whole space C γ. Here let us introduce a subspace of C γ, which is suitable for dealing with \(D_{0+}^{\alpha ,\beta }\).

Definition 3.2 ([17])

A function y ∈ C −1 is said to be in the space \( \varOmega _{-1}^\mu ,\ \mu \ge 0\) if \(D_{0+}^{\alpha ,\beta } y \in C_{-1}\) for all 0 ≤ α ≤ μ, 0 ≤ β ≤ 1.

For β = 0, i.e. for the R-L fractional derivative, the space \(\varOmega _{-1}^\mu \) coincides with the function space introduced in Ref. [25].

Obviously, \(\varOmega _{-1}^\mu \) is a vector space and \(\varOmega _{-1}^0\equiv C_{-1}\). The space \(\varOmega _{-1}^\mu \) contains in particular all functions z that can be represented in the form z(x) = x γy(x) with γ ≥ μ and y being an analytical function on the real half-axis.

The following result plays a very important role for the applications of the operational calculus for D α, β to the solution of differential equations with these generalized derivatives.

Theorem 3.6 ([17])

Let \(y\in \varOmega _{-1}^\alpha , \ n-1<\alpha \le n \in N\) . Then the R-L fractional integral and the generalized composite fractional derivative are connected by the relation

$$\displaystyle \begin{aligned} (I_{0+}^\alpha D_{0+}^{\alpha,\beta} y)(x) = y(x)- y_{\alpha,\beta}(x),\ x>0, \end{aligned} $$
(3.90)

where

$$\displaystyle \begin{aligned} y_{\alpha,\beta}(x):=\sum_{k=0}^{n-1} { x^{ k - n +\alpha - \beta \alpha +\beta n } \over \varGamma( k - n +\alpha - \beta \alpha +\beta n +1 )} \lim_{x\to 0+} {\mathrm{d}^k\over \mathrm{d}x^k} (I_{0+}^{(1-\beta)(n-\alpha)} y)(x),\ x>0. \end{aligned} $$
(3.91)

Proof

For n − 1 < α ≤ n ∈ N and 0 ≤ β ≤ 1, the generalized derivative can be represented as a composition of the R-L fractional integral and the R-L fractional derivative (2.17), therefore

$$\displaystyle \begin{aligned} (D_{0+}^{\alpha,\beta} y)(x) = \left(I_{0+}^{\beta(n-\alpha)} {\mathrm{d}^n\over \mathrm{d}x^n} (I_{0+}^{(1-\beta)(n-\alpha)} y)\right)(x) = (I_{0+}^{\beta(n-\alpha)} {{}_{RL}}D_{0+}^{\alpha +\beta n - \alpha \beta} y)(x). \end{aligned} $$
(3.92)

Using the formula (2.3) one obtains

$$\displaystyle \begin{aligned} (I_{0+}^\alpha D_{0+}^{\alpha,\beta} y)(x) = (I_{0+}^\alpha I_{0+}^{\beta(n-\alpha)}{{}_{RL}}D_{0+}^{\alpha +\beta n - \alpha \beta} y)(x) = (I_{0+}^{\alpha + \beta n-\alpha \beta} {{}_{RL}}D_{0+}^{\alpha +\beta n - \alpha \beta} y)(x). \end{aligned}$$

The formula (3.90) follows now from the known formula for the composition of the Riemann-Liouville fractional integral and the Riemann-Liouville fractional derivative (see the formula from Proposition 3.1 with a = 0).

3.5.2 Operational Calculus for Fractional Derivatives with Types

The formula (3.90) shows that the generalized derivative of order α and type β always corresponds to the R-L fractional integral of order α. The type β influences the form of the initial values that should appear while formulating the initial-value problems for the differential equations. That is why the main part of the operational calculus for \(D_{0+}^{\alpha ,\beta }\) follows the lines of the construction of the operational calculus for the Riemann-Liouville or for the Liouville-Caputo fractional derivatives presented in Ref. [13].

As in the case of the Mikusiński type operational calculus for the Riemann-Liouville or for the Liouville-Caputo fractional derivatives, we have the following theorem:

Theorem 3.7 ([17])

The space C −1 with the operations of the Laplace convolution ∘ and ordinary addition becomes a commutative ring (C −1, ∘, +) without divisors of zero.

This ring can be extended to the field \(\mathcal {M}_{-1}\) of convolution quotients by following the lines of the classical Mikusiński operational calculus [28]:

$$\displaystyle \begin{aligned} \mathcal{M}_{-1} := C_{-1} \times (C_{-1} \setminus \{0\})/\sim, \end{aligned}$$

where the equivalence relation (∼) is defined, as usual, by

$$\displaystyle \begin{aligned} (f,g)\sim (f_1,g_1) \Leftrightarrow (f\circ g_1)(t) = (g\circ f_1)(t). \end{aligned}$$

For the sake of convenience, the elements of the field \(\mathcal {M}_{-1}\) can be formally considered as convolution quotients f∕g. The operations of addition and multiplication are then defined in \(\mathcal {M}_{-1}\) as usual:

$$\displaystyle \begin{aligned} {f\over g}+{f_1\over g_1}:={ f\circ g_1+g\circ f_1 \over g\circ g_1} \end{aligned} $$
(3.93)

and

$$\displaystyle \begin{aligned} {f\over g}\cdot {f_1\over g_1}:={f\circ f_1\over g\circ g_1}. \end{aligned} $$
(3.94)

Theorem 3.8 ([17])

The space \(\mathcal {M}_{-1}\)with the operations of addition (3.93) and multiplication (3.94) becomes a commutative field \((\mathcal {M}_{-1},\cdot ,+)\).

The ring C −1 can be embedded into the field \(\mathcal {M}_{-1}\) by the map (α > 0):

$$\displaystyle \begin{aligned} f\mapsto {h_{\alpha}\circ f\over h_{\alpha}}, \end{aligned}$$

with, by (3.88), \(h_\alpha(x) = x^{\alpha-1}/\varGamma(\alpha)\).

In the field \(\mathcal {M}_{-1}\), the operation of multiplication with a scalar λ from the field R (or C) can be defined by the relation \(\lambda {f\over g}:={\lambda f\over g},\ {f\over g}\in \mathcal {M}_{-1}\). Because the space C −1 is a vector space, the space \(\mathcal {M}_{-1}\) can be shown to be a vector space, too. Since the constant function f(x) ≡ λ, x > 0 belongs to the space C −1, we have to distinguish the operation of multiplication with a scalar in the vector space \(\mathcal {M}_{-1}\) and the operation of multiplication with a constant function in the field \(\mathcal {M}_{-1}\). In this last case we get

$$\displaystyle \begin{aligned} \{\lambda\}\cdot {f\over g} = {\lambda h_{\alpha +1}\over h_\alpha}\cdot{f\over g} = \{ 1\}\cdot {\lambda f\over g}\, . \end{aligned} $$
(3.95)

Whereas the space C −1 consists of conventional functions, most elements of the field \(\mathcal {M}_{-1}\) cannot be reduced to functions from the ring C −1 and, consequently, can be regarded as generalized functions, or so-called hyper-functions. In particular, let us consider the element \(I={h_\alpha \over h_\alpha }\) of the field \(\mathcal {M}_{-1}\), which is the identity of this field with respect to the operation of multiplication:

$$\displaystyle \begin{aligned} I \cdot {f\over g} = {h_{\alpha}\circ f \over h_{\alpha}\circ g} = {f\over g}. \end{aligned}$$

The last formula shows that the identity element I of the field \(\mathcal {M}_{-1}\) plays the role of the Dirac δ-function in the conventional theory of the generalized functions.

Another hyper-function, i.e. an element of the field \(\mathcal {M}_{-1}\) that cannot be represented as a conventional function from the space C −1, plays an important role in the applications of the operational calculus for the generalized fractional derivative. It is given by

Definition 3.3 ([23])

The algebraic inverse of the R-L fractional integral \(I_{0+}^{\alpha }\) is said to be the element \(S_\alpha\) of the field \(\mathcal {M}_{-1}\), which is reciprocal to the element h α in the field \(\mathcal {M}_{-1}\), that is,

$$\displaystyle \begin{aligned} S_\alpha = {I\over h_{\alpha }} \equiv {h_{\alpha}\over h_{\alpha}\circ h_{\alpha}} \equiv {h_{\alpha}\over h_{2\alpha}}, \end{aligned} $$
(3.96)

where (and in what follows) \(I={h_{\alpha }\over h_{\alpha }}\) denotes the identity element of the field \(\mathcal {M}_{-1}\) with respect to the operation of multiplication.

The R-L fractional integral \(I_{0+}^\alpha \) can be represented as a multiplication (convolution) in the ring C −1 (with the function h α, see (3.88)). Since the ring C −1 is embedded into the field \(\mathcal {M}_{-1}\) of convolution quotients, this fact can be rewritten as follows:

$$\displaystyle \begin{aligned} (I_{0+}^{\alpha}y)(x)={I\over S_\alpha}\cdot y. \end{aligned} $$
(3.97)

As to the generalized fractional derivative \(D^{\alpha,\beta}\), there exists no convolution representation for it in the ring C −1, but it reduces to an operator of multiplication in the field \(\mathcal {M}_{-1}\).

Theorem 3.9 ([17])

Let a function y be from the space \(\varOmega _{-1}^\alpha , \ n-1<\alpha \le n,\ n\in N\) . Then the generalized fractional derivative \(D_{0+}^{\alpha ,\beta }y\) can be represented as multiplication in the field \(\mathcal {M}_{-1}\) of convolution quotients:

$$\displaystyle \begin{aligned} (D_{0+}^{\alpha,\beta}y)(x)=S_\alpha\cdot y-S_\alpha\cdot y_{\alpha,\beta}, \end{aligned} $$
(3.98)
$$\displaystyle \begin{aligned} \begin{array}{rcl} {} y_{\alpha,\beta}(x)&\displaystyle =&\displaystyle \sum_{k=0}^{n-1}{x^{k-n+\alpha-\beta\alpha+\beta n}\over \varGamma(k-n+\alpha-\beta\alpha+\beta n+1)}\\ &\displaystyle &\displaystyle \times \lim_{x\to0+}{\mathrm{d}^k\over \mathrm{d}x^k} (I_{0+}^{(1-\beta)(n-\alpha)}y)(x), \quad x>0. \end{array} \end{aligned} $$
(3.99)

Proof

To prove the formula (3.98), we just use the embedding of the ring C −1 into the field \(\mathcal {M}_{-1}\) and then multiply the relation (3.90) with the algebraic inverse of the Riemann-Liouville fractional integral operator—the element S α. The obtained relation is exactly the formula (3.98).

The formula (3.89) means that for α > 0, n ∈ N

$$\displaystyle \begin{aligned} h_\alpha^n(x):=\underbrace{h_\alpha\circ\ldots\circ h_\alpha}_n=h_{n\alpha}(x). \end{aligned}$$

This relation can be extended to an arbitrary positive real power exponent:

$$\displaystyle \begin{aligned} h_\alpha^\lambda(x)=h_{\lambda\alpha}(x), \quad \lambda > 0. \end{aligned} $$
(3.100)

For any λ > 0, the inclusion \(h_\alpha ^\lambda \in C_{-1}\) holds true and the following relations can be easily proved (β > 0, γ > 0):

$$\displaystyle \begin{aligned} h_\alpha^\beta\circ h_\alpha^\gamma = h_{\alpha\beta}\circ h_{\alpha\gamma}=h_{(\beta+\gamma)\alpha}=h_\alpha^{\beta +\gamma}, \end{aligned} $$
(3.101)
$$\displaystyle \begin{aligned} h_{\alpha_1}^\beta=h_{\alpha_2}^\gamma \Leftrightarrow\ \alpha_1\beta = \alpha_2\gamma. \end{aligned} $$
(3.102)
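Relation (3.101) can also be checked numerically. The following Python sketch (an illustration, not part of the original development; the parameter values, step count, and tolerance are ad hoc assumptions) approximates the Laplace convolution \(h_\alpha\circ h_\beta\) by a midpoint Riemann sum and compares it with \(h_{\alpha+\beta}\):

```python
import math

def h(alpha, x):
    # power kernel h_alpha(x) = x^(alpha-1)/Gamma(alpha), cf. (3.88)
    return x**(alpha - 1) / math.gamma(alpha)

def laplace_conv(f, g, x, n=4000):
    # midpoint-rule approximation of (f o g)(x) = int_0^x f(x-t) g(t) dt
    dt = x / n
    return sum(f(x - (i + 0.5) * dt) * g((i + 0.5) * dt) for i in range(n)) * dt

# h_alpha o h_beta = h_(alpha+beta); exponents > 1 keep the integrand bounded
a, b, x0 = 1.5, 2.0, 1.0
numeric = laplace_conv(lambda t: h(a, t), lambda t: h(b, t), x0)
exact = h(a + b, x0)
print(numeric, exact)
```

The exponents are chosen larger than 1 so that the integrand stays bounded at both endpoints of the quadrature interval.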

The above relations motivate the following definition of a power function of the element S α with an arbitrary real power exponent λ:

$$\displaystyle \begin{aligned} S_\alpha^\lambda=\left\{ \begin{array}{ll} h_{\alpha}^{-\lambda}, & \lambda < 0, \\ I, & \lambda =0, \\ {I\over h_{\alpha}^\lambda}, & \lambda >0. \end{array} \right. \end{aligned} $$
(3.103)

For any β, γ ∈ R, it follows from this definition and the relations (3.101) and (3.102) that

$$\displaystyle \begin{aligned} S_\alpha^\beta\cdot S_\alpha^\gamma = S_\alpha^{\beta +\gamma}, \end{aligned} $$
(3.104)
$$\displaystyle \begin{aligned} S_{\alpha_1}^\beta = S_{\alpha_2}^\gamma\ \Leftrightarrow\ \alpha_1\beta = \alpha_2\gamma. \end{aligned} $$
(3.105)

For the application of the operational calculus to the solution of differential equations with composite fractional derivatives, it is important to identify the hyper-functions from the field \(\mathcal {M}_{-1}\) which can be represented as conventional functions, i.e. as elements of the ring C −1.

One useful class of such representations is given by the following theorem (see, e.g., Refs. [23, 24]):

Theorem 3.10 ([23, 24])

Let the multiple power series

$$\displaystyle \begin{aligned} \sum^{\infty}_{i_1,\ldots,i_n=0}a_{i_1,\ldots,i_n}z_{1}^{i_1}\times\cdots\times z_{n}^{i_n}, \quad z_1,\dots,z_n\in {C}, \ a_{i_1,\ldots,i_n}\in{C} \end{aligned}$$

be convergent at a point z 0 = (z 10, …, z n0) with all z k0 ≠ 0, k = 1, …, n. Then the hyper-function

$$\displaystyle \begin{aligned} z(S_\alpha):= S_\alpha^{-\beta}\sum^{\infty }_{i_1,\ldots,i_n=0}a_{i_1,\ldots,i_n}(S_\alpha^{-\alpha_1})^{i_1}\times\cdots\times (S_\alpha^{-\alpha_n})^{i_n} \end{aligned}$$

with β > 0, α i > 0, i = 1, …, n can be represented as an element of the ring C −1:

$$\displaystyle \begin{aligned} z(S_\alpha)=\sum^{\infty }_{i_1,\ldots,i_n=0}a_{i_1,\ldots,i_n} h_{(\beta+\alpha_1i_1+\cdots+\alpha_ni_n)\alpha}(x), \end{aligned}$$

where h α(x) is given by (3.88).

This theorem is the source of a number of important operational relations, which will be used in the further discussions (for more operational relations, we refer to Refs. [13, 25]):

$$\displaystyle \begin{aligned} {I\over S_\alpha-\rho}=x^{\alpha -1}E_{\alpha,\alpha}(\rho x^{\alpha}), \end{aligned} $$
(3.106)

where ρ ∈ R (or ρ ∈ C) and E α,β(z) is the two parameter M-L function. The relation can formally be obtained by expanding a geometric series:

$$\displaystyle \begin{aligned} \begin{array}{rcl} {I\over S_\alpha-\rho }&\displaystyle =&\displaystyle {I\over {I\over h_\alpha}-\rho }={h_\alpha\over I-\rho h_\alpha} =\sum_{k=0}^\infty \rho^k h_\alpha^{k+1}\\ &\displaystyle =&\displaystyle \sum_{k=0}^\infty {\rho^k x^{(k+1)\alpha -1}\over \varGamma(\alpha k +\alpha)}=x^{\alpha -1}E_{\alpha,\alpha}(\rho x^{\alpha}). \end{array} \end{aligned} $$

The m-fold convolution of the right-hand side of the relation (3.106) gives the following operational relation:

$$\displaystyle \begin{aligned} {I\over (S_\alpha-\rho )^m}=x^{\alpha m-1}E_{\alpha,m\alpha}^m(\rho x^\alpha), \ m\in N, \end{aligned} $$
(3.107)

where \(E_{\alpha ,\beta }^{\delta }(z)\) is the three parameter M-L function.
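The operational relation (3.107) lends itself to a direct numerical experiment. In the Python sketch below (arbitrarily chosen parameters and truncation levels, not part of the original text), the two and three parameter M-L functions are evaluated by truncated series, and the 2-fold convolution of the right-hand side of (3.106) is compared with the right-hand side of (3.107) for m = 2:

```python
import math

def ml2(a, b, z, n=80):
    # two parameter Mittag-Leffler function E_{a,b}(z), truncated series
    return sum(z**k / math.gamma(a * k + b) for k in range(n))

def ml3(a, b, d, z, n=80):
    # three parameter (Prabhakar) function E^d_{a,b}(z), truncated series
    return sum(math.gamma(d + k) / math.gamma(d) * z**k
               / (math.factorial(k) * math.gamma(a * k + b)) for k in range(n))

# u represents I/(S_alpha - rho), the function on the right of (3.106)
a, rho, x0 = 1.5, 0.7, 1.0
def u(t):
    return t**(a - 1) * ml2(a, a, rho * t**a)

# the 2-fold convolution of u should equal x^(2a-1) E^2_{a,2a}(rho x^a), cf. (3.107)
n = 2000
dt = x0 / n
conv = sum(u(x0 - (i + 0.5) * dt) * u((i + 0.5) * dt) for i in range(n)) * dt
exact = x0**(2 * a - 1) * ml3(a, 2 * a, 2, rho * x0**a)
print(conv, exact)
```

The agreement of the two numbers illustrates, for one parameter choice, that the m-fold convolution indeed reproduces the three parameter M-L function.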

Let β > 0, α i > 0, i = 1, …, n. Then

$$\displaystyle \begin{aligned} {S_\alpha^{-\beta}\over I-\sum_{i=1}^n \lambda_i S_\alpha^{-\alpha_i}}=x^{\beta\alpha -1}E_{(\alpha_1\alpha,\ldots,\alpha_n\alpha),\beta\alpha}(\lambda_1x^{\alpha_1\alpha},\ldots,\lambda_n x^{\alpha_n\alpha}) \end{aligned} $$
(3.108)

with the multinomial M-L function.

3.5.3 Fractional Differential Equations with Types

Here, the presented operational calculus is applied for solving linear fractional differential equations with generalized derivatives and constant coefficients.

First, some simple fractional differential equations are considered. We begin with the initial value problem (n − 1 < α ≤ n, n ∈ N, 0 ≤ β ≤ 1, λ ∈ R) [17]

$$\displaystyle \begin{aligned} &(D_{0+}^{\alpha, \beta} y)(x) -\lambda y(x) = g(x),\\ &\lim_{x\to 0+}\frac{\mathrm{d}^k}{\mathrm{d}x^k}(I_{0+}^{(1-\beta)(n-\alpha)}y)(x)=c_k\in {R}, \quad k=0,\dots,n-1. \end{aligned} $$
(3.109)

The function g is assumed to lie in C −1 and the unknown function y is to be determined in the space \(\varOmega _{-1}^\alpha \).

Making use of the relation (3.98), the initial value problem (3.109) can be reduced to the following algebraic equation in the field \(\mathcal {M}_{-1}\) of convolution quotients:

$$\displaystyle \begin{aligned} S_\alpha\cdot y-\lambda y&=S_\alpha\cdot y_{\alpha,\beta}+g,\\ y_{\alpha,\beta}(x)&=\sum_{k=0}^{n-1}c_k{x^{k-n+\alpha-\beta\alpha+\beta n} \over\varGamma(k-n+\alpha-\beta\alpha+\beta n+1)}. \end{aligned} $$

This linear equation can be easily solved in the field \(\mathcal {M}_{-1}\):

$$\displaystyle \begin{aligned} y = y_g + y_h = {I\over S_\alpha -\lambda}\cdot g + {S_\alpha \over S_\alpha -\lambda}\cdot y_{\alpha,\beta}. \end{aligned}$$

The right-hand side of this relation can be interpreted as a function from the space \(\varOmega _{-1}^\alpha \), that is, as a classical solution of the initial value problem (3.109).

It follows from the operational relation (3.106) and the embedding of the ring C −1 into the field \(\mathcal {M}_{-1}\), that the first term of this relation, y g (solution of the inhomogeneous fractional differential equation (3.109) with zero initial values), can be represented in the form

$$\displaystyle \begin{aligned} y_g(x) = \int_0^x (x-t)^{\alpha -1}E_{\alpha,\alpha}(\lambda(x-t)^\alpha)g(t)\,\mathrm{d}t=\left({\mathbf{E}}_{\alpha,\alpha,\lambda,0+}^{1}g\right)(x). \end{aligned} $$
(3.110)

As to the second term, y h, it is a solution of the homogeneous fractional differential equation (3.109) with the given initial values and we have

$$\displaystyle \begin{aligned} y_h(x) = \sum_{k=0}^{n-1}c_ku_k(x),\ u_k(x)={S_\alpha \over S_\alpha -\lambda}\cdot \left\{ { x^{ k - n +\alpha - \beta \alpha +\beta n } \over \varGamma( k - n +\alpha - \beta \alpha +\beta n +1 )}\right\}. \end{aligned} $$
(3.111)

Making use of the relation

$$\displaystyle \begin{aligned} {x^{k-n+\alpha-\beta\alpha+\beta n}\over \varGamma(k-n+\alpha-\beta\alpha+\beta n+1)}&=h_{k-n+\alpha-\beta\alpha+\beta n+1}(x)=h_{\alpha}^{(k-n+\alpha-\beta\alpha+\beta n+1)/\alpha}(x)\\& ={I\over S_\alpha^{(k-n+\alpha-\beta\alpha+\beta n+1)/\alpha}}, \end{aligned} $$
(3.112)

the formula (3.104), and the operational relation (3.108), we get the representation of the functions u k(x), k = 0, …, n − 1 in terms of the two parameter M-L function:

$$\displaystyle \begin{aligned} u_k(x)&={S_\alpha\over S_\alpha-\lambda}\cdot \left\{{x^{k-n+\alpha-\beta\alpha+\beta n}\over \varGamma(k-n+\alpha-\beta\alpha+\beta n+1)}\right\}\\&={S_\alpha^{-(k-n+\alpha-\beta\alpha+\beta n+1)/\alpha}\over I-\lambda S_\alpha^{-1}}=x^{k-(1-\beta)(n-\alpha)} E_{\alpha,k+1-(1-\beta)(n-\alpha)}(\lambda x^\alpha). \end{aligned} $$

Putting now the two parts of the solution together, we get the final form of the solution of the initial-value problem (3.109):

$$\displaystyle \begin{aligned} y(x)&= y_g(x)+y_h(x)\\&=\int_0^x (x-t)^{\alpha -1}E_{\alpha,\alpha}(\lambda (x-t)^\alpha)g(t)\,\mathrm{d}t\\&\quad +\sum_{k=0}^{n-1}c_k x^{k-(1-\beta)(n-\alpha)}E_{\alpha,k+1-(1-\beta)(n-\alpha)}(\lambda x^\alpha). \end{aligned} $$
(3.113)

The proof of the fact that the solution y belongs to the space \(\varOmega _{-1}^\alpha \) is straightforward: it follows the lines of the proof from Ref. [24] and we omit it here.

Whereas the solution of the inhomogeneous fractional differential equation (3.109) with zero initial values—the function y g—only depends on the order α of the derivative, the solution of the homogeneous equation—the function y h—looks different for different values of the type β of the derivative. In particular, the part y h of the solution takes the form

$$\displaystyle \begin{aligned} y_h(x) = \sum_{k=0}^{n-1}c_k u_k(x),\ u_k(x)= x^k\, E_{\alpha, k+1}(\lambda x^\alpha) \end{aligned}$$

and

$$\displaystyle \begin{aligned} y_h(x) = \sum_{k=0}^{n-1}c_k u_k(x),\ u_k(x)= x^{k-n+\alpha}\,E_{\alpha, k+1-n+\alpha}(\lambda x^\alpha) \end{aligned}$$

for the Liouville-Caputo fractional derivative (β = 1) and for the R-L fractional derivative (β = 0), respectively.
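As a plausibility check of (3.113), note that for α = 1 (so n = 1) the derivative reduces, for every type β, to the classical first derivative, and the solution must collapse to the usual variation-of-constants formula. The following Python sketch (parameter values are arbitrary assumptions, not part of the original text) verifies this for the homogeneous part and for the particular part with g ≡ 1:

```python
import math

def ml2(a, b, z, n=80):
    # two parameter Mittag-Leffler function E_{a,b}(z), truncated series
    return sum(z**k / math.gamma(a * k + b) for k in range(n))

lam, x0, c0 = 0.9, 1.2, 2.0

# homogeneous part of (3.113) for n = 1, alpha = 1: u_0(x) = E_{1,1}(lam x)
y_h = c0 * ml2(1.0, 1.0, lam * x0)

# particular part (3.110) with g = 1 and alpha = 1:
# int_0^x E_{1,1}(lam (x-t)) dt, which should equal (exp(lam x) - 1)/lam
n = 2000
dt = x0 / n
y_g = sum(ml2(1.0, 1.0, lam * (x0 - (i + 0.5) * dt)) for i in range(n)) * dt

print(y_h, c0 * math.exp(lam * x0))
print(y_g, (math.exp(lam * x0) - 1.0) / lam)
```

Both printed pairs agree, reflecting that E 1,1(z) = e^z.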

Next, we consider the linear differential equation [17]

$$\displaystyle \begin{aligned} \sum_{i=1}^{n}\lambda _{i}\left( D_{0+}^{\alpha _{i},\beta_{i}}y\right)\left( x\right) -\lambda y\left( x\right) =g\left( x\right) \end{aligned} $$
(3.114)

with initial values

$$\displaystyle \begin{aligned} \lim_{x\rightarrow 0+}{\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}}(I_{0+}^{(1-\beta_{i})(n-\alpha_{i})}y)(x)=c_{k}\in {\mathrm{R}} \end{aligned} $$
(3.115)

where i = 1, 2, …, n; k = 0, …, n − 1; n − 1 < α i ≤ n, n ∈N; 0 ≤ β i ≤ 1; λ, λ i ∈R; and the ordering α 1 > α 2 > ⋯ > α n > 0 is assumed without loss of generality. Then the following algebraic equation in the field \(\mathcal {M}_{-1}\) of convolution quotients is obtained

$$\displaystyle \begin{aligned} \sum_{i=1}^{n}\lambda _{i}\left( S^{\alpha _{i}}y-S^{\alpha_{i}}y_{\alpha _{i},\beta _{i}}\right) -\lambda y=g. \end{aligned} $$
(3.116)

This linear equation can be easily solved in the field \(\mathcal {M}_{-1}\):

$$\displaystyle \begin{aligned} y&=y_{g}+Y=\frac{I}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}g+\frac{\sum_{j=1}^{n} \lambda_{j}S^{\alpha_{j}}y_{\alpha_{j},\beta_{j}}}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}\\& =\frac{I}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}g\\&\quad +\sum_{j=1}^{n}\lambda_{j} \frac{S^{\alpha_{j}}}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda} \left[\sum_{k=0}^{n-1}c_{k}{\frac{x^{k-n+\alpha_{j}-\beta_{j}\alpha_{j}+\beta_{j}n}} {\varGamma(k-n+\alpha_{j}-\beta_{j}\alpha_{j}+\beta_{j}n+1)}}\right]. \end{aligned} $$

On the other hand, one gets

$$\displaystyle \begin{aligned} \frac{I}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}&=\frac{S^{-\alpha_{1}}}{\lambda_{1}+\sum_{i=2}^{n}\lambda_{i}S^{\alpha_{i}-\alpha_{1}}-\lambda S^{-\alpha_{1}}}\\& =\frac{1}{\lambda_{1}}\frac{S^{-\alpha_{1}}}{I-\sum_{i=2}^{n}\left(-\frac{\lambda_{i}}{\lambda_{1}}\right)S^{\alpha_{i}-\alpha_{1}}-\frac{\lambda}{\lambda_{1}}S^{-\alpha_{1}}}\\& =\frac{1}{\lambda_{1}}x^{\alpha_{1}-1}E_{\left(\alpha_{1}-\alpha_{2},\alpha_{1}-\alpha_{3},\dots,\alpha_{1}-\alpha_{n},\alpha_{1}\right),\alpha_{1}}\left( -\frac{\lambda_{2}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{2}},\dots,\right.\\&\quad \left.-\frac{\lambda_{n}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{n}},\frac{\lambda }{\lambda_{1}}x^{\alpha_{1}}\right). \end{aligned} $$

Hence,

$$\displaystyle \begin{aligned} y_{g}&=\frac{1}{\lambda_{1}}\int\limits_{0}^{x}\left(x-t\right)^{\alpha_{1}-1}E_{\left(\alpha_{1}-\alpha_{2},\alpha_{1}-\alpha_{3},\dots,\alpha _{1}-\alpha_{n},\alpha_{1}\right),\alpha _{1}}\left(-\frac{\lambda_{2}}{\lambda _{1}}\left(x-t\right)^{\alpha_{1}-\alpha_{2}},\dots,\right.\\&\quad \left.-\frac{\lambda_{n}}{\lambda_{1}}\left(x-t\right)^{\alpha_{1}-\alpha_{n}},\frac{\lambda}{\lambda_{1}}\left(x-t\right)^{\alpha _{1}}\right)g(t)\,\mathrm{d}t. \end{aligned} $$

Applying the relations (3.108) and (3.112) we get

$$\displaystyle \begin{aligned} Y&=\sum_{j=1}^{n}\lambda_{j}\frac{S^{\alpha_{j}}}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}\left[\sum_{k=0}^{n-1}c_{k}{\frac{x^{k-n+\alpha _{j}-\beta_{j}\alpha_{j}+\beta_{j}n}}{\varGamma(k-n+\alpha_{j}-\beta_{j}\alpha_{j}+\beta_{j}n+1)}}\right]\\& =\sum_{j=1}^{n}\lambda_{j}\frac{S^{\alpha_{j}}}{\sum_{i=1}^{n}\lambda _{i}S^{\alpha _{i}}-\lambda}\left(\sum_{k=0}^{n-1}c_{k}S^{-(k-n+\alpha_{j}-\beta_{j}\alpha_{j}+\beta_{j}n+1)}\right)\\& =\sum_{j=1}^{n}\sum_{k=0}^{n-1}\lambda_{j}c_{k}\frac{S^{-(k-n-\beta_{j}\alpha_{j}+\beta_{j}n+1)}}{\sum_{i=1}^{n}\lambda_{i}S^{\alpha_{i}}-\lambda}\\& =\frac{1}{\lambda_{1}}\sum_{j=1}^{n}\sum_{k=0}^{n-1}\lambda_{j}c_{k}\frac{S^{-(k-n-\beta_{j}\alpha_{j}+\alpha_{1}+\beta_{j}n+1)}}{I-\sum_{i=2}^{n}\left(-\frac{\lambda_{i}}{\lambda_{1}}\right)S^{\alpha _{i}-\alpha_{1}}-\frac{\lambda }{\lambda_{1}}S^{-\alpha_{1}}}\\& =\frac{1}{\lambda_{1}}\sum_{j=1}^{n}\sum_{k=0}^{n-1}\lambda_{j}c_{k}x^{k-n-\beta_{j}\alpha_{j}+\alpha_{1}+\beta_{j}n}\\&\quad \times E_{\left(\alpha_{1}-\alpha_{2},\alpha_{1}-\alpha_{3},\dots,\alpha_{1}-\alpha_{n},\alpha_{1}\right),(k-n-\beta_{j}\alpha_{j}+\alpha_{1}+\beta_{j}n+1)}\left( -\frac{\lambda_{2}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{2}},\dots,\right.\\&\quad \left.-\frac{\lambda_{n}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{n}},\frac{\lambda}{\lambda_{1}}x^{\alpha_{1}}\right). \end{aligned} $$

If β j = 0, j = 1, 2, …, n, the solution coincides with the solution of the linear n-term differential equation with the R-L fractional derivatives

$$\displaystyle \begin{aligned}y=y_{g}+Y_{0}\end{aligned}$$

where

$$\displaystyle \begin{aligned} Y_{0}&=\frac{1}{\lambda_{1}}\sum_{j=1}^{n}\sum_{k=0}^{n-1}\lambda_{j}c_{k}x^{k-n+\alpha_{1}}\\&\quad \times E_{\left(\alpha_{1}-\alpha_{2},\alpha_{1}-\alpha_{3},\dots,\alpha_{1}-\alpha_{n},\alpha_{1}\right),(k-n+\alpha_{1}+1)} \left(-\frac{\lambda_{2}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{2}},\dots,\right.\\ &\quad \left.-\frac{\lambda_{n}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{n}},\frac{\lambda}{\lambda_{1}}x^{\alpha_{1}}\right). \end{aligned} $$

If β j = 1, j = 1, 2, …, n, the solution coincides with the solution of the linear n-term differential equation with the Caputo fractional derivatives

$$\displaystyle \begin{aligned}y=y_{g}+Y_{1}\end{aligned} $$

where

$$\displaystyle \begin{aligned} Y_{1}&=\frac{1}{\lambda_{1}}\sum_{j=1}^{n}\sum_{k=0}^{n-1}\lambda_{j}c_{k}x^{k+\alpha_{1}-\alpha_{j}}\\& \quad \times E_{(\alpha_{1}-\alpha_{2},\alpha_{1}-\alpha_{3},\dots,\alpha_{1}-\alpha_{n},\alpha_{1}),(k+\alpha_{1}-\alpha_{j}+1)} \left(-\frac{\lambda_{2}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{2}},\dots,\right.\\& \quad \left.-\frac{\lambda_{n}}{\lambda_{1}}x^{\alpha_{1}-\alpha_{n}},\frac{\lambda}{\lambda_{1}}x^{\alpha_{1}}\right). \end{aligned} $$
  (i)

    If α i = α, i = 1, 2, …, n, we consider the following special case of the above linear n-term differential equation with the generalized fractional derivatives:

    $$\displaystyle \begin{aligned} &\sum_{i=1}^{n}\lambda _{i}\left( D_{0+}^{\alpha,\beta_{i}}y\right)\left( x\right) -\lambda y\left( x\right) =g\left( x\right)\\&\lim_{x\rightarrow 0+}{\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}}(I_{0+}^{(1-\beta _{i})(n-\alpha)}y)(x)=c_{k}\in {\mathrm{R}},\quad i=1,2,\ldots,n;\ k=0,\dots ,n-1; \end{aligned} $$
    (3.117)
    $$\displaystyle \begin{aligned} \left( 0<\alpha <1,\quad 0\leq\beta_{i}\leq1,\, \lambda,\lambda_{i}\in\mathbf{R}, \quad i=1,2,\dots,n, \, \varLambda=\sum_{i=1}^{n}\lambda_{i}\neq 0\right). \end{aligned} $$
    (3.118)

    Hence we get the following algebraic equation in the field \(\mathcal {M}_{-1}\) of convolution quotients:

    $$\displaystyle \begin{aligned}\sum\limits_{i=1}^{n}\lambda _{i}\left( S^{\alpha }y-S^{\alpha }y_{\alpha,\beta _{i}}\right) -\lambda y=g.\end{aligned}$$

    This linear equation can be easily solved in the field \(\mathcal {M}_{-1}\):

    $$\displaystyle \begin{aligned} \begin{array}{rcl} y=y_{g}^{\ast}+Y^{\ast}=\frac{I}{\varLambda S^{\alpha}-\lambda}g+\frac{S^{\alpha}}{\varLambda S^{\alpha}-\lambda}\sum_{j=1}^{n}\lambda _{j}y_{\alpha,\beta _{j}}. \end{array} \end{aligned} $$

    Since

    $$\displaystyle \begin{aligned}\frac{I}{\varLambda S^{\alpha }-\lambda }=\frac{1}{\varLambda }x^{\alpha-1}E_{\alpha ,\alpha }\left( \frac{\lambda }{\varLambda }x^{\alpha }\right),\end{aligned} $$

    one gets

    $$\displaystyle \begin{aligned}y_{g}^{\ast}=\frac{1}{\varLambda}\int\limits_{0}^{x}(x-t)^{\alpha-1}E_{\alpha,\alpha}\left(\frac{\lambda}{\varLambda}(x-t)^{\alpha}\right)g\left(t\right) \,\mathrm{d}t.\end{aligned} $$

    On the other hand,

    $$\displaystyle \begin{aligned} Y^{\ast}&=\frac{S^{\alpha }}{\varLambda S^{\alpha }-\lambda } \sum_{i=1}^{n}\sum_{k=0}^{n-1}\lambda _{i}c_{k}{\frac{x^{k-n+\alpha -\beta _{i}\alpha +\beta _{i}n}}{\varGamma (k-n+\alpha -\beta _{i}\alpha +\beta_{i}n+1)}}\\& =\sum_{i=1}^{n}\sum_{k=0}^{n-1}\lambda_{i}c_{k}\frac{S^{-\left(k-n-\beta _{i}\alpha +\beta _{i}n+1\right) }}{\varLambda S^{\alpha }-\lambda }\\&=\sum_{i=1}^{n}\sum_{k=0}^{n-1}\frac{\lambda_{i}}{\varLambda }c_{k}\frac{S^{-\left( k-n-\beta _{i}\alpha +\beta _{i}n+\alpha +1\right) }}{I-\frac{\lambda }{\varLambda }S^{-\alpha }}\\& =\frac{1}{\varLambda}\sum_{i=1}^{n}\sum_{k=0}^{n-1}\lambda_{i}c_{k}x^{k-n-\beta _{i}\alpha +\beta _{i}n+\alpha }E_{\alpha ,k-n-\beta_{i}\alpha +\beta _{i}n+\alpha +1}\left( \frac{\lambda }{\varLambda }x^{\alpha}\right). \end{aligned} $$
  (ii)

    Let \(\alpha _{i}=\left (n-i\right )\alpha \), i = 1, 2, …, n where 0 < α < 1. Then the solution can be represented in terms of the three parameter M-L function

    $$\displaystyle \begin{aligned}y_{g}=\frac{I}{\sum\limits_{i=1}^{n}\lambda_{i}S_{\alpha}^{n-i}-\lambda}g=\left[\sum_{j=1}^{p}\sum\limits_{m=1}^{n_{j}}\frac{c_{jm}}{\left(S_{\alpha}-\gamma_{j}\right)^{m}}\right]g,\end{aligned}$$
    $$\displaystyle \begin{aligned}n_{1}+n_{2}+\dots+n_{p}=n.\end{aligned} $$

    Operational relation (3.107) gives us the representation

    $$\displaystyle \begin{aligned}y_{g}=\int\limits_{0}^{t}u_{\delta}\left(\tau\right)g\left(t-\tau\right)\,\mathrm{d}\tau,\end{aligned}$$

    where

    $$\displaystyle \begin{aligned}u_{\delta}\left(t\right)=\sum\limits_{j=1}^{p}\sum\limits_{m=1}^{n_{j}}c_{jm}t^{\alpha m-1}E_{\alpha,\alpha m}^{m}\left(\gamma_{j}t^{\alpha}\right).\end{aligned}$$

3.6 Fractional Equations Involving Laguerre Derivatives

In this section we show the utility of operational methods to solve a wide class of integro-differential equations involving Prabhakar operators, also with variable coefficients.

We start from the analysis of the following equation [40]

$$\displaystyle \begin{aligned} \frac{\partial}{\partial t}t\frac{\partial}{\partial t}f(x,t)=\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x f(x,t), \end{aligned} $$
(3.119)

where

$$\displaystyle \begin{aligned} \left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma,1}\right)_x=\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x \end{aligned}$$

stands for the Prabhakar integral with respect to the variable x, with ω, α, β, γ ∈ R +.

The operator

$$\displaystyle \begin{aligned} D_{Lt} =\frac{\mathrm{d}}{\mathrm{d}t}t\frac{\mathrm{d}}{\mathrm{d}t} \end{aligned}$$

is also known in the literature as the Laguerre derivative. It is well known that the eigenfunction of the Laguerre derivative is given by the function

$$\displaystyle \begin{aligned} C_0(t)= \sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}, \end{aligned} $$
(3.120)

i.e., the Tricomi function of order zero. This means that

$$\displaystyle \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}t\frac{\mathrm{d}}{\mathrm{d}t}C_0(\lambda t)=\lambda C_0(\lambda t). \end{aligned}$$
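This eigenvalue relation can be confirmed directly from the series (3.120). The Python sketch below (truncation level and test point are ad hoc choices, not part of the original text) evaluates the Laguerre derivative of f(t) = C 0(λt) via the product rule, d∕dt (t f′(t)) = f′(t) + t f″(t), and compares it with λC 0(λt):

```python
import math

def c0(t, n=60):
    # Tricomi function of order zero: C0(t) = sum_k t^k / (k!)^2, cf. (3.120)
    return sum(t**k / math.factorial(k)**2 for k in range(n))

def c0_d1(t, n=60):
    # termwise first derivative of C0
    return sum(k * t**(k - 1) / math.factorial(k)**2 for k in range(1, n))

def c0_d2(t, n=60):
    # termwise second derivative of C0
    return sum(k * (k - 1) * t**(k - 2) / math.factorial(k)**2 for k in range(2, n))

# Laguerre derivative of f(t) = C0(lam t):
# D_L f = f'(t) + t f''(t), with f'(t) = lam C0'(lam t), f''(t) = lam^2 C0''(lam t)
lam, t0 = 1.3, 0.8
lhs = lam * c0_d1(lam * t0) + t0 * lam**2 * c0_d2(lam * t0)
rhs = lam * c0(lam * t0)
print(lhs, rhs)
```

The two values agree to machine precision, since the identity holds termwise in the series.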

We now apply this result to the fractional integro-differential equation with variable coefficients (3.119).

Theorem 3.11 ([40])

Consider the following initial value problem

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial t}t\frac{\partial}{\partial t}f(x,t)=\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x f(x,t),\\ f(x,0)=g(x), \end{cases} \end{aligned} $$
(3.121)

in the half plane x > 0, with analytic initial value g(x). The operational solution of Eq.(3.121) is given by:

$$\displaystyle \begin{aligned} f(x,t)=C_0\left(t\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x\right)g(x)=\sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}\left(\mathcal{E}_{0^+;\alpha,k\beta}^{\omega;k\gamma}\right)_x g(x). \end{aligned} $$
(3.122)

The operational solution (3.122) becomes an effective solution when the series converges, and this depends on the actual form of the initial value g(x). We remark that this operational approach cannot be applied to the more general operator \(\mathcal {E}_{0^+;\alpha ,\beta }^{\omega ;\gamma ,{\kappa }}\). The reason is that the validity of the semigroup property for this operator is still an open problem, as discussed above. On the other hand, for \(\mathcal {E}_{0^+;\alpha ,\beta }^{\omega ;\gamma }\), we have

$$\displaystyle \begin{aligned} \left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)^k=\underbrace{\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\cdot\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\dots\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}}_{k\times}=\mathcal{E}_{0^+;\alpha,k\beta}^{\omega;k\gamma}. \end{aligned} $$
(3.123)

Then we have that

$$\displaystyle \begin{aligned} C_0\left(t\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x\right)g(x)&=\sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)^k_x g(x)\\ &=\sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}\left(\mathcal{E}_{0^+;\alpha,k\beta}^{\omega;k\gamma}\right)_x g(x), \end{aligned} $$
(3.124)

as claimed.

Example 3.1

As a first concrete example we consider the following initial value problem [40]

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial t}t\frac{\partial}{\partial t}f(x,t)=\left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_x f(x,t)\\f(x,0)=g(x)=x^{\delta-1}, \quad \delta>0, \Re(\alpha),\Re(\beta)>0. \end{cases} \end{aligned} $$
(3.125)

By application of relation (2.110), i.e.,

$$\displaystyle \begin{aligned} \mathcal{E}_{0+;\alpha ,\beta }^{\omega ;\gamma}x^{\delta-1}=\varGamma(\delta)x^{\beta+\delta-1}E^{\gamma}_{\alpha,\beta+\delta}(\omega x^{\alpha}), \end{aligned}$$

one has

$$\displaystyle \begin{aligned} \left(\mathcal{E}_{0+;\alpha,k\beta}^{\omega;k\gamma}\right)_x g(x)=\left(\mathcal{E}_{0+;\alpha,k\beta}^{\omega;k\gamma}\right)_x x^{\delta-1}=\varGamma(\delta)x^{k\beta+\delta-1}E^{k\gamma}_{\alpha,k\beta+\delta}(\omega x^{\alpha}), \end{aligned}$$

so that the solution of problem (3.125) is given by

$$\displaystyle \begin{aligned} f(x,t)=\varGamma(\delta)\sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}x^{k\beta+\delta-1}E^{k\gamma}_{\alpha,k\beta+\delta}(\omega x^{\alpha}). \end{aligned} $$
(3.126)
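The key step above, relation (2.110), can itself be checked by direct quadrature of the Prabhakar integral. A Python sketch follows (the parameter values, truncation level, and step count are ad hoc assumptions, not part of the original text):

```python
import math

def ml3(a, b, d, z, n=40):
    # Prabhakar (three parameter M-L) function E^d_{a,b}(z), truncated series
    return sum(math.gamma(d + k) / math.gamma(d) * z**k
               / (math.factorial(k) * math.gamma(a * k + b)) for k in range(n))

# Prabhakar integral of a power function by midpoint quadrature:
# (E^{w;g}_{0+;a,b} t^(d-1))(x) = int_0^x (x-t)^(b-1) E^g_{a,b}(w (x-t)^a) t^(d-1) dt
a, b, g, w, d, x0 = 0.8, 1.3, 0.6, 0.5, 1.4, 1.0
n = 4000
dt = x0 / n
quad = sum((x0 - t)**(b - 1) * ml3(a, b, g, w * (x0 - t)**a) * t**(d - 1)
           for t in ((i + 0.5) * dt for i in range(n))) * dt

# relation (2.110): the result should be Gamma(d) x^(b+d-1) E^g_{a,b+d}(w x^a)
exact = math.gamma(d) * x0**(b + d - 1) * ml3(a, b + d, g, w * x0**a)
print(quad, exact)
```

For these parameters the quadrature and the closed form agree to within the quadrature error.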

We observe that boundary value problems for equations involving Laguerre spatial derivatives can be studied by operational methods in a similar way, as we are going to show with the following example.

Example 3.2

Consider the following boundary value problem [40]

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial x}x\frac{\partial}{\partial x}f(x,t)= \left(\mathcal{E}_{0^+;\alpha ,\beta }^{\omega ;\gamma}\right)_t f(x,t)\\f(0,t)=t^{\delta-1}E^{\sigma}_{\alpha,\delta}(\omega t^{\alpha}), \quad \delta>1,\Re(\sigma),\Re(\alpha)>0, \end{cases} \end{aligned}$$

in the half plane x ≥ 0. Here we use relation (2.109), which, written with the parameters of the present problem, reads

$$\displaystyle \begin{aligned} \mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}t^{\delta-1}E^{\sigma}_{\alpha,\delta}(\omega t^{\alpha})=t^{\beta+\delta-1}E^{\gamma+\sigma}_{\alpha,\beta+\delta}(\omega t^{\alpha}). \end{aligned}$$

Then, the solution is given by

$$\displaystyle \begin{aligned} f(x,t)=\sum_{k=0}^{\infty}\frac{x^k}{(k!)^2}t^{k\beta+\delta-1}E^{k\gamma+\sigma}_{\alpha,k\beta+\delta}(\omega t^{\alpha}). \end{aligned}$$

By similar reasoning, we can also treat in a simple way integro-differential equations involving both Laguerre derivatives, i.e., equations with variable coefficients, and R-L integrals. Indeed, it is well known that R-L integrals satisfy the semigroup property.
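The semigroup property \(I^{a}I^{b}=I^{a+b}\), on which this remark rests, is easy to confirm numerically on power functions, using \(I^{\mu}t^{\gamma}=\frac{\varGamma(\gamma+1)}{\varGamma(\gamma+\mu+1)}t^{\gamma+\mu}\). A small Python sketch (illustrative parameters, not from the original text):

```python
import math

def rl_pow(mu, gam, x):
    # closed form: (I^mu t^gam)(x) = Gamma(gam+1)/Gamma(gam+mu+1) x^(gam+mu)
    return math.gamma(gam + 1) / math.gamma(gam + mu + 1) * x**(gam + mu)

def rl_quad(mu, f, x, n=4000):
    # midpoint-rule R-L integral (I^mu f)(x) = 1/Gamma(mu) int_0^x (x-t)^(mu-1) f(t) dt
    dt = x / n
    s = sum((x - t)**(mu - 1) * f(t) for t in ((i + 0.5) * dt for i in range(n)))
    return s * dt / math.gamma(mu)

# semigroup check: I^a applied (by quadrature) to I^b t^gam equals I^(a+b) t^gam
a, b, gam, x0 = 1.5, 0.7, 1.0, 1.0
lhs = rl_quad(a, lambda t: rl_pow(b, gam, t), x0)
rhs = rl_pow(a + b, gam, x0)
print(lhs, rhs)
```

One application is done in closed form and the other by quadrature, so the agreement genuinely exercises the composition of the two integrals.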

Theorem 3.12 ([40])

Consider the following initial value problem

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial t}t\frac{\partial}{\partial t}f(x,t)= \left(I_{0^+}^{\alpha}\right)_x f(x,t),\quad \alpha > 0,\\f(x,0)=g(x), \end{cases} \end{aligned} $$
(3.127)

in the half plane x > 0, with analytic initial value g(x). The operational solution of Eq.(3.127) is given by:

$$\displaystyle \begin{aligned} f(x,t)=C_0\left(t\left(I_{0^+}^{\alpha}\right)_x\right)g(x)=\sum_{k=0}^{\infty}\frac{t^k}{(k!)^2}\left(I_{0^+}^{k\alpha}\right)_x g(x). \end{aligned} $$
(3.128)

Example 3.3

Consider the following initial value problem

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial t}t\frac{\partial}{\partial t}f(x,t)= \left(I_{0^+}^{\alpha}\right)_x f(x,t),\\f(x,0)=x^{\gamma}, \quad \gamma >0, \alpha>0, \end{cases} \end{aligned}$$

By applying the previous theorem, its solution is given by

$$\displaystyle \begin{aligned} f(x,t)&=\sum_{k=0}^{\infty}\frac{t^k\left(I_{0^+}^{k\alpha}\right)_x}{(k!)^2} x^{\gamma}=\varGamma(\gamma+1)x^{\gamma}\sum_{k=0}^{\infty}\frac{(x^{\alpha}t)^k}{(k!)^2\varGamma(\alpha k+\gamma+1)}\\ &=\varGamma(\gamma+1)x^{\gamma}\, {{}_0}\varPsi_{3}\left[ \begin{array}{c} -; \\ \left(1,1\right), \left(1,1\right),\left(\gamma+1,\alpha\right); \end{array} x^{\alpha}t\right] \end{aligned} $$

where we used relation (2.4).
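For readers who wish to experiment numerically, the truncated series above is straightforward to evaluate. The following Python sketch is ours, not part of the original text: the function name, the truncation depth K, and the parameter values in the example are illustrative choices. At t = 0 only the k = 0 term survives, so the initial value x^γ is recovered.

```python
import math

def f_series(x, t, alpha, gam, K=50):
    """Truncated series solution
    f(x,t) = Gamma(gam+1) * x**gam * sum_k (x**alpha * t)**k / ((k!)**2 * Gamma(alpha*k + gam + 1))."""
    s = 0.0
    fact2 = 1.0  # running value of (k!)^2
    for k in range(K):
        if k > 0:
            fact2 *= k * k
        arg = alpha * k + gam + 1
        if arg > 170:  # avoid math.gamma overflow; the tail is negligible here
            break
        s += (x**alpha * t)**k / (fact2 * math.gamma(arg))
    return math.gamma(gam + 1) * x**gam * s

# initial condition check: f(x, 0) = x**gam
print(f_series(2.0, 0.0, 0.5, 1.5))  # ≈ 2**1.5 ≈ 2.8284
```

Since all terms of the series are positive for x, t > 0, the solution grows monotonically in t from the initial profile.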

Theorem 3.13 ([40])

Let \(\varOmega _x\) be a linear differential operator with respect to x, and let ψ(x) be an eigenfunction of \(\varOmega _x\), such that

$$\displaystyle \begin{aligned} \varOmega_x \psi(\lambda x)=\lambda \psi(\lambda x), \quad \psi(0)=1, \end{aligned} $$
(3.129)

then the evolution problem

$$\displaystyle \begin{aligned} \begin{cases} &\varOmega_x f(x,t)=\left(\mathcal{E}_{0^+;\alpha ,\beta}^{\omega;\gamma}\right)_t f(x,t), \quad t>0,\\ & f(0,t)=g(t), \end{cases} \end{aligned} $$
(3.130)

with an analytic function g(t) as boundary condition, admits an operational solution

$$\displaystyle \begin{aligned} f(x,t)=\psi\left(x\left(\mathcal{E}_{0^+;\alpha ,\beta}^{\omega;\gamma}\right)_t\right)g(t). \end{aligned} $$
(3.131)

This theorem highlights the utility of operational methods to solve, in a simple way, linear integro-differential equations involving Prabhakar integral operators.

Example 3.4

Let us consider the following boundary value problem [40]

$$\displaystyle \begin{aligned} \begin{cases} \frac{\partial}{\partial x}f(x,t)= \left(\mathcal{E}_{0^+;\alpha,\beta}^{\omega;\gamma}\right)_t f(x,t),\\ f(0,t)= g(t)=t^{\delta-1}, \quad \delta>0, \end{cases} \end{aligned}$$

By applying Theorem 3.13, its analytic solution is given by

$$\displaystyle \begin{aligned} f(x,t)=\varGamma(\delta)\sum_{k=0}^{\infty}\frac{x^k}{k!}t^{k\beta+\delta-1}E^{k\gamma}_{\alpha,k\beta+\delta}(\omega t^{\alpha}). \end{aligned} $$
(3.132)

We observe that the convergence of the series (3.126) was proved by Sandev et al. in [34].
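A minimal numerical sketch of (3.132) can be built by truncating both the outer series and the defining series of the three parameter M-L function \(E^{\gamma }_{\alpha ,\beta }\). The Python code below is ours (function names, truncation depths, and parameter values are illustrative, not from [40]). The boundary condition f(0, t) = t^{δ−1} is recovered because only the k = 0 term survives at x = 0 and \(E^{0}_{\alpha ,\delta }(z)=1/\varGamma (\delta )\).

```python
import math

def ml3(alpha, beta, g, z, terms=60):
    """Truncated series of the three parameter (Prabhakar) M-L function
    E^g_{alpha,beta}(z) = sum_j (g)_j z**j / (j! * Gamma(alpha*j + beta)),
    adequate for moderate |z|."""
    acc, poch, fact = 0.0, 1.0, 1.0
    for j in range(terms):
        if j > 0:
            poch *= g + j - 1   # Pochhammer symbol (g)_j built iteratively
            fact *= j
        arg = alpha * j + beta
        if arg > 170:           # math.gamma overflows beyond ~171
            break
        acc += poch * z**j / (fact * math.gamma(arg))
    return acc

def f_sol(x, t, alpha=0.7, beta=0.9, g=0.5, delta=0.8, omega=0.3, K=40):
    """Truncated version of the solution (3.132)."""
    total, fact = 0.0, 1.0
    for k in range(K):
        if k > 0:
            fact *= k
        total += (x**k / fact) * t**(k*beta + delta - 1) \
                 * ml3(alpha, k*beta + delta, k*g, omega * t**alpha)
    return math.gamma(delta) * total
```

For x = 0 this returns t^{δ−1} up to rounding, which is exactly the imposed boundary value; as a further check, `ml3(1, 1, 1, z)` reproduces e^z.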

3.7 Applications of Hilfer-Prabhakar Derivatives

In what follows we give some applications of Hilfer-Prabhakar derivatives in mathematical physics and probability.

First we consider a generalization of the time fractional heat equation by Hilfer-Prabhakar derivatives, involving the non-regularized operator \(\mathcal {D}^{\gamma , \mu , \nu }_{\rho , \omega ,0+}\).

Theorem 3.14 ([11])

The solution to the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} \mathcal{D}_{\rho ,\omega ,0+}^{\gamma,\mu,\nu}u(x,t)=K\frac{\partial^{2}}{\partial x^{2}}u(x,t), & t>0,\:x\in \mathrm{R}, \\ \left({\mathbf{E}}_{\rho,(1-\nu)(1-\mu),\omega,0^{+}}^{-\gamma(1-\nu)}u(x,t)\right)_{t=0+}=g(x), & \\ \lim_{x\rightarrow\pm\infty}u(x,t)=0, & \end{cases} \end{aligned} $$
(3.133)

with μ ∈ (0, 1), ν ∈ [0, 1], ω ∈R, K, ρ > 0, γ ≥ 0, is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} u(x,t)&\displaystyle =&\displaystyle \int_{-\infty}^{+\infty}e^{-\imath kx}\tilde{g}(k)\frac{1}{2\pi}\sum_{n=0}^{\infty}\left(-K\right)^{n}t^{\mu\left(n+1\right)-\nu(\mu-1)-1}\\&\displaystyle &\displaystyle \qquad \times E_{\rho,\mu\left(n+1\right)-\nu(\mu-1)}^{\gamma\left(n+1-\nu\right)}(\omega t^{\rho})k^{2n}\,\mathrm{d}k. \end{array} \end{aligned} $$
(3.134)

Proof

By taking the Fourier-Laplace transform of (3.133), where we use \(\hat {u}(x,s)=\mathcal {L}\left [u(x,t)\right ](s)\) and \(\tilde {u}(k,t)=\mathcal {F}\left [u(x,t)\right ](k)\), and by using formula (2.60), one has

$$\displaystyle \begin{aligned} s^{\mu}(1-\omega s^{-\rho})^{\gamma}\tilde{\hat{u}}(k,s)-s^{\nu(\mu-1)}(1-\omega s^{-\rho})^{\gamma\nu}\tilde{g}(k)=-Kk^{2}\tilde{\hat{u}}(k,s), \end{aligned} $$
(3.135)

so that

$$\displaystyle \begin{aligned} \tilde{\hat{u}}(k,s)&=\frac{s^{\nu(\mu-1)}(1-\omega s^{-\rho})^{\gamma\nu}\tilde{g}(k)}{s^{\mu}(1-\omega s^{-\rho})^{\gamma}+Kk^{2}}\\ &=s^{-\mu+\nu(\mu-1)}(1-\omega s^{-\rho})^{-\gamma(1-\nu)}\tilde{g}(k)\left(1+\frac{Kk^{2}}{s^{\mu}(1-\omega s^{-\rho})^{\gamma}}\right)^{-1}\\&=\sum_{n=0}^{\infty }\left(-Kk^{2}\right)^{n}s^{-\mu\left(n+1\right)+\nu(\mu-1)}(1-\omega s^{-\rho})^{-\gamma\left(n+1-\nu\right)}\tilde{g}(k), \end{aligned} $$
(3.136)

for \(\left \vert \frac {Kk^{2}}{s^{\mu }(1-\omega s^{-\rho })^{\gamma }}\right \vert <1\). The inverse Laplace transform yields

$$\displaystyle \begin{aligned} \tilde{u}(k,t)=\sum_{n=0}^{\infty}\left(-K\right)^{n}t^{\mu\left(n+1\right)-\nu(\mu-1)-1}E_{\rho,\mu\left(n+1\right)-\nu(\mu-1)}^{\gamma\left(n+1-\nu\right)}(\omega t^{\rho})k^{2n}\tilde{g}(k). \end{aligned} $$
(3.137)

Note that, for each k, the term-by-term inversion of the Laplace transform is always possible in view of Theorem 30.1 in Ref. [9], provided a sufficiently large abscissa (dependent on k) is chosen for the inversion integral, and by recalling that the generalized M-L function is defined by an absolutely convergent series. The convergence of (3.137), and in general of series of the same form (see below), can be proved by using the same technique as in Appendix C of Ref. [34]. Finally, by applying the inverse Fourier transform to (3.137) one finishes the proof of the theorem.

Theorem 3.15 ([11])

The solution to the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} {}_C\mathcal{D}_{\rho,\omega,0^{+}}^{\gamma,\mu}u(x,t)=K\frac{\partial^{2}}{\partial x^{2}}u(x,t), & t>0,\:x\in \mathrm{R}, \\ u(x,0^+)=g(x), & \\ \lim_{x\rightarrow\pm\infty}u(x,t)=0, & \end{cases} \end{aligned} $$
(3.138)

with μ ∈ (0, 1), ω ∈R, K, ρ > 0, γ ≥ 0, is given by

$$\displaystyle \begin{aligned} u(x,t)=\int_{-\infty}^{+\infty}e^{-\imath kx}\tilde{g}(k)\frac{1}{2\pi}\sum_{n=0}^{\infty}\left(-Kt^{\mu}\right)^{n}E_{\rho,\mu n+1}^{\gamma n}\left(\omega t^{\rho}\right)k^{2n}\,\mathrm{d}k. \end{aligned} $$
(3.139)

Proof

Taking the Fourier–Laplace transform of (3.138), by formula (2.64), we have that

$$\displaystyle \begin{aligned} s^{\mu}(1-\omega s^{-\rho})^{\gamma}\tilde{\hat{u}}(k,s)-s^{\mu-1}(1-\omega s^{-\rho })^{\gamma}\tilde{g}(k)=-Kk^{2}\tilde{\hat{u}}(k,s), \end{aligned} $$
(3.140)

so that

$$\displaystyle \begin{aligned} \tilde{\hat{u}}(k,s)&=\frac{s^{\mu-1}(1-\omega s^{-\rho})^{\gamma}\tilde{g}(k)}{s^{\mu}(1-\omega s^{-\rho})^{\gamma}+Kk^{2}}=s^{-1}\tilde{g}(k)\left(1+\frac{Kk^{2}}{s^{\mu}(1-\omega s^{-\rho})^{\gamma}}\right)^{-1}\\&=\sum_{n=0}^{\infty}\left(-Kk^{2}\right)^{n}s^{-\mu n-1}(1-\omega s^{-\rho})^{-\gamma n}\tilde{g}(k), \end{aligned} $$
(3.141)

for \(\left \vert \frac {Kk^{2}}{s^{\mu }(1-\omega s^{-\rho })^{\gamma }}\right \vert <1\). The inverse Laplace transform yields

$$\displaystyle \begin{aligned} \tilde{u}(k,t)=\sum_{n=0}^{\infty}\left(-Kt^{\mu}\right)^{n}E_{\rho,\mu n+1}^{\gamma n}(\omega t^{\rho})k^{2n}\tilde{g}(k). \end{aligned} $$
(3.142)

By applying the inverse Fourier transform the proof of the theorem is finished.

As an additional example we consider the free electron laser (FEL) integro-differential equation for the complex amplitude y(x), introduced by Dattoli et al. [5]:

$$\displaystyle \begin{aligned} \begin{cases} \frac{dy(x)}{dx}=-\imath \pi g\int_0^x (x-t)e^{\imath\eta(x-t)}y(t)dt,\qquad g,\eta\in\mathrm{R}, \: x\in (0,1], \\y(0)=1. \end{cases} \end{aligned} $$
(3.143)

Here g is the gain coefficient, and η is the detuning parameter. This equation has been generalized to a fractional free electron laser equation in Ref. [20]. Here we analyze the free electron laser equation involving the Hilfer-Prabhakar derivative [11]

$$\displaystyle \begin{aligned} \begin{cases} \mathcal{D}^{\gamma,\mu,\nu}_{\rho,\omega,0^+}y(x)=\lambda{\mathbf{E}}^{\varpi}_{\rho,\mu,\omega,0^+}y(x)+f(x), & x\in(0,\infty), \,\, f(x)\in L^1[0,\infty), \\ \left({\mathbf{E}}^{-\gamma(1-\nu)}_{\rho,(1-\nu)(1-\mu),\omega,0^+}y(x)\right)_{x=0^+}=\kappa, & \kappa \ge 0, \end{cases}\\ \end{aligned} $$
(3.144)

where μ ∈ (0, 1), ν ∈ [0, 1], ω, λ ∈C, ρ > 0, γ, ϖ ≥ 0. This generalizes the problem studied in Ref. [20], corresponding to ν = γ = 0. Here f(x) is a given function. The original FEL equation is then retrieved for γ = 0, ν = 0, μ → 1, f ≡ 0, λ = −iπg, ω = iη, ρ = ϖ = κ = 1.

Theorem 3.16 ([11])

The solution to the Cauchy problem (3.144) is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(x)&\displaystyle =&\displaystyle \kappa \sum_{k=0}^\infty \lambda^k x^{\nu(1-\mu)+\mu+2\mu k-1} E_{\rho,\nu(1-\mu)+\mu+2k\mu}^{\gamma+k(\varpi +\gamma)-\gamma\nu}(\omega x^\rho)\\&\displaystyle &\displaystyle +\sum_{k=0}^{\infty}\lambda^{k}{\mathbf{E}}_{\rho,\mu(2k+1),\omega,0^{+}}^{\gamma+k(\varpi+\gamma)}f(x). \end{array} \end{aligned} $$
(3.145)

Proof

By Laplace transform of (3.144) (see (2.60)) one gets

$$\displaystyle \begin{aligned} &s^{\mu}(1-\omega s^{-\rho})^{\gamma}\mathcal{L}[y(x)](s)-\kappa s^{-\nu(1-\mu)}(1-\omega s^{-\rho})^{\gamma\nu}\\&\quad =\lambda \mathcal{L}[x^{\mu-1}E_{\rho,\mu}^{\varpi}(\omega x^{\rho})](s)\cdot \mathcal{L}[y(x)](s)+\mathcal{L}[f(x)](s), \end{aligned} $$
(3.146)

from where

$$\displaystyle \begin{aligned} \mathcal{L}[y(x)](s)&=\frac{\kappa s^{-\nu(1-\mu)-\mu}(1-\omega s^{-\rho})^{\gamma \nu-\gamma}}{1-\lambda s^{-2\mu}(1-\omega s^{-\rho})^{-\varpi-\gamma}}\\&\quad + \frac{s^{-\mu}(1-\omega s^{-\rho})^{-\gamma}}{1-\lambda s^{-2\mu}(1-\omega s^{-\rho})^{-\varpi-\gamma}}\mathcal{L}[f(x)](s)\\&=\kappa \sum_{k=0}^\infty\lambda^k s^{-\nu(1-\mu)-\mu-2\mu k}(1-\omega s^{-\rho})^{\gamma\nu-\gamma-k(\varpi+\gamma)}\\&\quad +\sum_{k=0}^\infty \lambda^k s^{-\mu(2k +1)}(1-\omega s^{-\rho})^{-\gamma-k(\varpi+\gamma)} \mathcal{L}[f(x)](s). \end{aligned} $$
(3.147)

By inverse Laplace transform and by using the convolution theorem of the Laplace transform, the claimed result follows.

Example 3.5 ([11])

Let us consider the Cauchy problem (3.144) with κ = 0, \(f(x)=x^{\mathfrak {m}-1}\). By using relation (2.110) one has

$$\displaystyle \begin{aligned} {\mathbf{E}}_{\rho,\mu(2k+1),\omega,0+}^{\gamma+k(\varpi+\gamma)}x^{\mathfrak{m}-1}=\varGamma(\mathfrak{m})\,x^{\mu(2k+1)+\mathfrak{m}-1}E_{\rho,\mu(2k+1)+\mathfrak{m}}^{\gamma+k(\varpi+\gamma)}(\omega x^{\rho}), \end{aligned} $$
(3.148)

and, therefore, the solution of the Cauchy problem is given by

$$\displaystyle \begin{aligned} y(x)=\varGamma(\mathfrak{m})\,x^{\mu+\mathfrak{m}-1}\sum_{k=0}^{\infty}(\lambda x^{2\mu})^{k}E_{\rho,\mu(2k+1)+\mathfrak{m}}^{\gamma+k(\varpi+\gamma)}(\omega x^{\rho}). \end{aligned} $$
(3.149)
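Series of the form (3.149) are easy to evaluate numerically. The Python sketch below is ours (names, truncations, and parameter values are illustrative). As a consistency check we use the classical limit γ = 0, ϖ = ρ = μ = 𝔪 = 1, ω = 0, λ = 1, in which the Prabhakar integral reduces to the R-L integral of order one, the Cauchy problem becomes y′(x) = ∫₀ˣ y(u) du + 1 with y(0) = 0, and the solution is sinh x.

```python
import math

def ml3(alpha, beta, g, z, terms=60):
    """Truncated three parameter (Prabhakar) M-L series, adequate for moderate |z|."""
    acc, poch, fact = 0.0, 1.0, 1.0
    for j in range(terms):
        if j > 0:
            poch *= g + j - 1
            fact *= j
        arg = alpha * j + beta
        if arg > 170:  # avoid math.gamma overflow
            break
        acc += poch * z**j / (fact * math.gamma(arg))
    return acc

def y_fel(x, lam=1.0, mu=1.0, m=1.0, rho=1.0, omega=0.0, gam=0.0, varpi=1.0, K=40):
    """Truncated series (3.149): solution of (3.144) with kappa=0, f(x)=x**(m-1)."""
    total = 0.0
    for k in range(K):
        beta = mu * (2*k + 1) + m
        if beta > 170:
            break
        total += (lam * x**(2*mu))**k * ml3(rho, beta, gam + k*(varpi + gam), omega * x**rho)
    return math.gamma(m) * x**(mu + m - 1) * total

print(y_fel(0.7))  # ≈ sinh(0.7) ≈ 0.7586
```

With the default (classical-limit) parameters the upper M-L index becomes k and ω = 0, so each summand collapses to x^{2k+1}∕(2k + 1)!, i.e., the Taylor series of sinh x.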

Example 3.6 ([11])

Let us consider the Cauchy problem (3.144) with κ = 0, \(f(x)=x^{\mathfrak {m}-1}E_{\rho ,\mathfrak {m}}^{\sigma }(\omega x^{\rho })\). From relation (2.110), one has

$$\displaystyle \begin{aligned} {\mathbf{E}}_{\rho,\mu(2k+1),\omega,0{+}}^{\gamma+k(\varpi+\gamma)}x^{\mathfrak{m}-1}E_{\rho,\mathfrak{m}}^{\sigma}(\omega x^{\rho})=x^{\mu(2k+1)+\mathfrak{m}-1}E_{\rho,\mu(2k+1)+\mathfrak{m}}^{\gamma+k(\varpi+\gamma)+\sigma}(\omega x^{\rho}), \end{aligned} $$
(3.150)

thus, the solution is given by

$$\displaystyle \begin{aligned} y(x)=x^{\mu+\mathfrak{m}-1}\sum_{k=0}^{\infty}(\lambda x^{2\mu})^{k}E_{\rho,\mu(2k+1)+\mathfrak{m}}^{\gamma+k(\varpi+\gamma)+\sigma}(\omega x^{\rho}). \end{aligned} $$
(3.151)

3.7.1 Fractional Poisson Processes

Here we present a generalization of the homogeneous Poisson process for which the governing equations contain the regularized Hilfer–Prabhakar differential operator in time [11]. The considered model generalizes the time-fractional Poisson process. The state probabilities of the classical Poisson process and of its time-fractional generalization can be found by solving an infinite system of difference-differential equations. Since the zero state probability of a renewal process coincides with the residual time probability, the process can be characterized by its waiting time distribution. The M-L function appeared as the residual waiting time between events in renewal processes obtained by properly scaled thinning of the sequence of events in a power-law renewal process [4, 12, 26, 29, 32, 36, 42]. Such a process is a fractional Poisson process. Gnedenko and Kovalenko carried out their analysis only in the Laplace domain, and Balakrishnan [2] also found this Laplace transform to be highly relevant for the analysis of time fractional diffusion processes. Later, Hilfer and Anton [16] were the first to explicitly introduce the M-L waiting-time density \(f_{\mu }(t)=-\frac {\mathrm {d}}{\mathrm {d}t}E_{\mu }(-t^{\mu })=t^{\mu -1}E_{\mu ,\mu }(-t^\mu )\), 0 < μ < 1, into continuous time random walk theory. They showed that the waiting time probability density function that leads to the time fractional diffusion equation for the probability density function has the M-L form. In the next section we will pay special attention to the importance of M-L functions in continuous time random walk theory.

In what follows we will demonstrate the importance of the M-L functions related to the fractional Poisson processes. We consider the following Cauchy problem involving the regularized operator \({ }_C\mathcal {D}^{\gamma ,\mu }_{\rho , \omega , 0^+}\).

Definition 3.4 (Cauchy Problem for the Generalized Fractional Poisson Process [11])

$$\displaystyle \begin{aligned} \begin{cases} {}_C\mathcal{D}^{\gamma,\mu}_{\rho,-\phi,0^+}p_k(t)=-\lambda p_k(t)+\lambda p_{k-1}(t), & k\ge0, \: t>0, \: \lambda>0, \\ p_k(0) = \begin{cases} 1, \quad k=0, \\ 0, \quad k \geq 1, \end{cases} & \end{cases} \end{aligned} $$
(3.152)

where ϕ > 0, γ ≥ 0, 0 < ρ ≤ 1, 0 < μ ≤ 1. We also require 0 < μ⌈γ⌉∕γ − rρ < 1, ∀ r = 0, …, ⌈γ⌉, if γ≠0.

These ranges for the parameters are needed to ensure non-negativity of the solution. Multiplying both sides of (3.152) by \(v^k\) and summing over all k, we obtain the fractional Cauchy problem for the probability generating function

$$\displaystyle \begin{aligned} G(v,t)=\sum_{k=0}^\infty v^k p_k(t) \end{aligned}$$

of the counting number N(t), t ≥ 0,

$$\displaystyle \begin{aligned} \begin{cases} {}_C\mathcal{D}^{\gamma,\mu}_{\rho,-\phi,0^+}G(v,t)=-\lambda(1-v)G(v,t), & |v| \le1, \\ G(v,0)=1. & \end{cases} \end{aligned} $$
(3.153)

Theorem 3.17 ([11])

The solution of Eq. (3.153) is given by

$$\displaystyle \begin{aligned} G(v,t)=\sum_{k=0}^{\infty}(-\lambda t^{\mu})^k(1-v)^k E_{\rho,\mu k+1}^{\gamma k}(-\phi t^{\rho}), \quad |v|\le1. \end{aligned} $$
(3.154)

Proof

In view of Lemma 2.5, we have

$$\displaystyle \begin{aligned} s^{\mu}[1+\phi s^{-\rho}]^{\gamma}\mathcal{L}[G](v,s)-s^{\mu-1}[1+\phi s^{-\rho}]^{\gamma}=-\lambda(1-v)\mathcal{L}[G](v,s), \end{aligned} $$
(3.155)

so that

$$\displaystyle \begin{aligned} \mathcal{L}[G](v,s)&=\frac{s^{\mu-1}[1+\phi s^{-\rho}]^{\gamma}}{s^{\mu}[1+\phi s^{-\rho}]^{\gamma}+\lambda(1-v)}=\frac{1}{s} \left(1+\frac{\lambda(1-v)}{s^{\mu}[1+\phi s^{-\rho}]^{\gamma}}\right)^{-1} \\ &=\frac{1}{s}\sum_{k=0}^{\infty}\left[-\frac{\lambda(1-v)}{s^{\mu}[1+\phi s^{-\rho}]^{\gamma}}\right]^k\\&=\sum_{k=0}^{\infty}(-\lambda(1-v))^k s^{-\mu k-1}[1+\phi s^{-\rho}]^{-k\gamma}, \end{aligned} $$
(3.156)

where \(\left \vert \lambda (1-v)/\left [s^{\mu }(1+\phi s^{-\rho })^{\gamma }\right ]\right \vert <1\). By using (2.61) we can invert the Laplace transform (3.156), obtaining the claimed result.

Remark 3.1

Observe that for γ = 0, we retrieve the classical result obtained, for example, in [22]. Indeed, from the fact that, see Eq. (1.15),

$$\displaystyle \begin{aligned} E_{\rho,\mu k+1}^{0}(-\phi t^{\rho})=\sum_{r=0}^\infty\frac{(-\phi t^\rho)^r \varGamma(r)}{r!\varGamma(\rho r+\mu k+1)\varGamma(0)}=\frac{1}{\varGamma(\mu k+1)}, \end{aligned} $$
(3.157)

Eq. (3.154) becomes

$$\displaystyle \begin{aligned} G(v,t)&=\sum_{k=0}^{\infty}\frac{(-\lambda t^{\mu})^k(1-v)^k}{\varGamma(\mu k+1)}= E^1_{\mu,1}(-\lambda(1-v)t^{\mu})\\&=E_{\mu}(-\lambda(1-v)t^{\mu}), \end{aligned} $$
(3.158)

that coincides with equation (23) in Ref. [22].

From the probability generating function (3.154), we are now able to find the probability distribution at fixed time t of N(t), t ≥ 0, governed by (3.152). Indeed, a simple binomial expansion leads to

$$\displaystyle \begin{aligned} G(v,t)=\sum_{k =0}^{\infty}v^k\sum_{r=k}^{\infty}(-1)^{r-k}\binom{r}{k}(\lambda t^{\mu})^r E_{\rho,\mu r+1}^{\gamma r}(-\phi t^{\rho}). \end{aligned} $$
(3.159)

Therefore,

$$\displaystyle \begin{aligned} p_k(t)=\sum_{r=k}^{\infty}(-1)^{r-k}\binom{r}{k}(\lambda t^{\mu})^r E_{\rho,\mu r+1}^{\gamma r}(-\phi t^{\rho}), \quad k\ge0,\,\, t\ge0. \end{aligned} $$
(3.160)

We observe that, for γ = 0,

$$\displaystyle \begin{aligned} p_k(t)&=\sum_{r=k}^{\infty}(-1)^{r-k}\binom{r}{k}\frac{(\lambda t^{\mu})^r}{\varGamma(\mu r+1)}=(\lambda t^\mu)^k E_{\mu, \mu k+1}^{k+1}(-\lambda t^\mu)\\&=\frac{(\lambda t^\mu)^k}{k!} E^{(k)}_{\mu,1}(-\lambda t^\mu), \quad k\geq 0,\,\, t\ge0, \end{aligned} $$
(3.161)

The first expression of (3.161) coincides with equation (1.4) in Ref. [3]. The third one is a convenient representation involving the kth derivative of the two parameter M-L function evaluated at \(-\lambda t^{\mu }\). It is immediate to note, from (3.154), by inserting v = 1, that \(\sum _{k=0}^\infty p_k(t)=1\). From (3.152), one can evaluate the mean value of N(t) by differentiating Eq. (3.153) with respect to v and setting v = 1. That is,

$$\displaystyle \begin{aligned} \begin{cases} {}_C\mathcal{D}^{\gamma,\mu}_{\rho,-\phi,0^+}\langle N(t)\rangle=\lambda, & t>0,\\ \langle N(t)\rangle\big|{}_{t=0}=0, & \end{cases} \end{aligned} $$
(3.162)

whose solution is given by

$$\displaystyle \begin{aligned} \langle N(t)\rangle=\lambda t^{\mu}E^{\gamma}_{\rho,1+\mu}(-\phi t^{\rho}),\quad t\ge0. \end{aligned} $$
(3.163)
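For γ = 0 these formulas can be verified numerically: the state probabilities (3.160) reduce to the time-fractional Poisson distribution, which must sum to one and, by (3.163), have mean λt^μ∕Γ(1 + μ). The Python sketch below is ours; the truncation depths and the value z = λt^μ = 0.8 are illustrative choices.

```python
import math

def p_k(k, z, mu, R=80):
    """Truncated (3.160) with gamma = 0, where z = lambda * t**mu:
    p_k = sum_{r>=k} (-1)**(r-k) * C(r,k) * z**r / Gamma(mu*r + 1)."""
    s = 0.0
    for r in range(k, R):
        arg = mu * r + 1
        if arg > 170:  # avoid math.gamma overflow; remaining terms are negligible
            break
        s += (-1)**(r - k) * math.comb(r, k) * z**r / math.gamma(arg)
    return s

z, mu = 0.8, 0.7   # z = lambda * t**mu, illustrative values
total = sum(p_k(k, z, mu) for k in range(30))
mean = sum(k * p_k(k, z, mu) for k in range(30))
print(total)                          # ≈ 1.0
print(mean, z / math.gamma(1 + mu))   # the two values agree
```

The alternating series converges without significant cancellation for moderate z; for large λt^μ a stable M-L evaluation routine would be preferable.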

3.7.1.1 Subordination Representation

An alternative representation for the fractional Poisson process N(t), t ≥ 0 [11] can be given as follows. Let us consider the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} {}_C\mathcal{D}^{\gamma,\mu}_{\rho,-\phi,0^+}h(x,t)=-\frac{\partial}{\partial x}h(x,t), & t>0, \: x\ge0, \\ h(x,0^+)=\delta(x). & \end{cases} \end{aligned} $$
(3.164)

The Laplace–Laplace transform of h(x, t) is given by

$$\displaystyle \begin{aligned} \tilde{\tilde{h}}(z,s)=\frac{s^{\mu-1}(1+\phi s^{-\rho})^{\gamma}}{s^\mu (1+\phi s^{-\rho})^{\gamma}+z}, \quad s>0, \: z>0. \end{aligned} $$
(3.165)

Therefore one has

$$\displaystyle \begin{aligned} s^\mu(1+\phi s^{-\rho})^\gamma\tilde{\tilde{h}}(z,s)-s^{\mu-1}(1+\phi s^{-\rho})^\gamma=-z\, \tilde{\tilde{h}}(z,s), \end{aligned} $$
(3.166)

which immediately leads to (3.165). Consider now the stochastic process, given as a finite sum of subordinated independent subordinators

$$\displaystyle \begin{aligned} \mathfrak{V}_t=\sum_{r=0}^{\lceil\gamma\rceil} {}_r V_{\varPhi(t)}^{\mu\frac{\lceil\gamma\rceil}{\gamma}-r\rho}, \quad t\ge0, \end{aligned} $$
(3.167)

where ⌈γ⌉ represents the ceiling of γ. Furthermore, we consider a sum of independent stable subordinators of different indices, and the random time change here is defined by

$$\displaystyle \begin{aligned} \varPhi(t)=\binom{\lceil\gamma\rceil}{r}V_t^{\frac{\gamma}{\lceil\gamma\rceil}}, \quad t\ge0, \end{aligned} $$
(3.168)

where \(V_t^{\frac {\gamma }{\lceil \gamma \rceil }}\) is a further stable subordinator, independent of the others. Note that, in order for the above process \(\mathfrak {V}_t\), t ≥ 0, to be well defined, the constraint 0 < μ⌈γ⌉∕γ − rρ < 1 must hold for each r = 0, 1, …, ⌈γ⌉. The next step is to define its hitting time. This can be done as

$$\displaystyle \begin{aligned} \mathfrak{E}_t=\inf\{s\ge0\colon\mathfrak{V}_s>t\}, \quad t\ge0. \end{aligned} $$
(3.169)

Theorem 2.2 of Ref. [10] ensures that the law \(\Pr \{\mathfrak {E}_t\in dx \}/dx\) is the solution to the Cauchy problem (3.164) and, therefore, that its Laplace–Laplace transform is exactly that in (3.165).

Theorem 3.18 ([11])

Let \(\mathfrak {E}_t\), t ≥ 0, be the hitting-time process presented in formula (3.169). Furthermore let \(\mathcal {N}(t)\), t ≥ 0, be a homogeneous Poisson process of parameter λ > 0, independent of \(\mathfrak {E}_t\). The equality

$$\displaystyle \begin{aligned} N(t)=\mathcal{N}(\mathfrak{E}_t), \quad t\ge0, \end{aligned} $$
(3.170)

holds in distribution.

Proof

The result can be proved by writing the probability generating function related to the time changed process \(\mathcal {N}(\mathfrak {E}_t)\) as

$$\displaystyle \begin{aligned} \sum_{k=0}^\infty v^k \Pr(\mathcal{N}(\mathfrak{E}_t)=k)=\int_0^\infty e^{-\lambda(1-v)y} \Pr(\mathfrak{E}_t\in \mathrm{d}y). \end{aligned} $$
(3.171)

Therefore, by taking the Laplace transform with respect to time one obtains

$$\displaystyle \begin{aligned} \int_0^\infty \int_0^\infty e^{-\lambda(1-v)y-st}\Pr(\mathfrak{E}_t\in \mathrm{d}y)\,\mathrm{d}t=\frac{s^{\mu-1}(1+\phi s^{-\rho})^{\gamma}}{s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda(1-v)}. \end{aligned} $$
(3.172)

By inverse Laplace transform one finds

$$\displaystyle \begin{aligned} \sum_{k=0}^\infty v^k \Pr(\mathcal{N}(\mathfrak{E}_t)=k)=\sum_{k=0}^\infty(-\lambda(1-v))^k t^{\mu k} E_{\rho,k\mu+1}^{k\gamma}(-\phi t^\rho), \end{aligned} $$
(3.173)

which coincides with Eq. (3.154).

3.7.1.2 Renewal Process

The generalized fractional Poisson process N(t), t ≥ 0, can be constructed as a renewal process with specific waiting times [11]. Let us consider k i.i.d. random variables \(T_j\), j = 1, …, k, representing the inter-event waiting times and having probability density function

$$\displaystyle \begin{aligned} f_{T_j}(t_j)=\lambda t_j^{\mu-1}\sum_{r=0}^\infty(-\lambda t_j^\mu)^r E_{\rho,\mu r+\mu}^{\gamma r+\gamma}(-\phi t_j^\rho), \quad t\ge0,\,\, \mu\in(0,1), \end{aligned} $$
(3.174)

and Laplace transform

$$\displaystyle \begin{aligned} \langle e^{-sT_j}\rangle & =\lambda\sum_{r=0}^\infty(-\lambda)^r s^{-\mu r-\mu} (1+\phi s^{-\rho})^{-\gamma r-\gamma}\\& =\frac{\lambda s^{-\mu}(1+\phi s^{-\rho})^{-\gamma}}{1+\lambda s^{-\mu}(1+\phi s^{-\rho})^{-\gamma}}, \qquad \left|-\lambda s^{-\mu}(1+\phi s^{-\rho})^{-\gamma}\right|<1 \notag \\ & =\frac{\lambda}{s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda}. \end{aligned} $$
(3.175)

Let \(\mathcal {T}_m=T_1+T_2+\dots +T_m\) denote the waiting time of the mth renewal event. The probability distribution \(\Pr (N(t)=k)\) can be written making the renewal structure explicit. By Laplace transform of Eq. (3.160) one finds

$$\displaystyle \begin{aligned} \mathcal{L}[p_k](s)&=\sum_{r=k}^\infty(-1)^{r-k}\binom{r}{k}\lambda^r s^{-\mu r-1}(1+\phi s^{-\rho})^{-\gamma r}\\ &=s^{-1}\sum_{r=0}^\infty(-1)^r\binom{r+k}{k}\left(\frac{\lambda}{s^\mu(1+\phi s^{-\rho})^{\gamma}}\right)^{r+k} \notag \\&=s^{-1}\lambda^k s^{-\mu k}(1+\phi s^{-\rho})^{-\gamma k}\sum_{r=0}^\infty \binom{-k-1}{r} \left(\frac{\lambda}{s^\mu(1+\phi s^{-\rho})^{\gamma}}\right)^r \notag \\&=s^{-1}\lambda^k s^{-\mu k}(1+\phi s^{-\rho})^{-\gamma k}\left(1+\frac{\lambda}{s^\mu(1+\phi s^{-\rho})^{\gamma}}\right)^{-k-1} \notag \\&=\frac{\lambda^k s^{\mu-1}(1+\phi s^{-\rho})^{\gamma}}{[s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda]^{k+1}}. \end{aligned} $$
(3.176)

On the other hand, one has [11]

$$\displaystyle \begin{aligned} \mathcal{L}[p_k](s)&=\int_0^\infty e^{-st}\left(\Pr(\mathcal{T}_k<t)-\Pr(\mathcal{T}_{k+1}<t)\right)\,\mathrm{d}t\\&=\int_0^\infty e^{-st}\left[\int_0^t\Pr(\mathcal{T}_k\in \mathrm{d}y)-\int_0^t\Pr(\mathcal{T}_{k+1}\in \mathrm{d}y)\right]\,\mathrm{d}t \notag \\&=\int_0^\infty \Pr(\mathcal{T}_k\in dy)\int_y^\infty e^{-st}\,\mathrm{d}t-\int_0^\infty \Pr(\mathcal{T}_{k+1}\in \mathrm{d}y)\int_y^\infty e^{-st}\,\mathrm{d}t \notag\\&=s^{-1}\left[ \int_0^\infty e^{-sy} \Pr(\mathcal{T}_k \in \mathrm{d}y)-\int_0^\infty e^{-sy}\Pr (\mathcal{T}_{k+1}\in \mathrm{d}y)\right] \notag \\&=s^{-1}\left[\left( \frac{\lambda}{s^\mu (1+\phi s^{-\rho})^{\gamma}+\lambda}\right)^k-\left( \frac{\lambda}{s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda}\right)^{k+1}\right] \notag \\&=s^{-1}\left[\frac{\lambda^k[s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda]-\lambda^{k+1}}{[s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda]^{k+1}} \right] \notag \\&=\frac{\lambda^k s^{\mu-1}(1+\phi s^{-\rho})^{\gamma}}{[s^\mu(1+\phi s^{-\rho})^{\gamma}+\lambda]^{k+1}}, \end{aligned} $$
(3.177)

which coincides with (3.176). Therefore, considering the renewal structure of the process, one can find the probability of the residual waiting time as [11]

$$\displaystyle \begin{aligned} \mathrm{P}(T_1>t)=p_0(t)=\sum_{r=0}^\infty(-\lambda t^\mu)^r E_{\rho,\mu r+1}^{\gamma r}(-\phi t^\rho). \end{aligned} $$
(3.178)

In order to prove the non-negativity of the probability density function (3.174) (and therefore of \(p_k(t)\)) one can use the properties of completely monotone and Bernstein functions. Let us consider the case γ≠0 (the case γ = 0 is studied in Ref. [22]). By the Bernstein theorem (see, e.g., Ref. [35], Theorem 1.4), in order to show the non-negativity of the probability density function, it is sufficient to show that its Laplace transform (3.175) is a completely monotone function. The function z → 1∕(z + λ) is completely monotone for any positive λ, and 1∕(g(z) + λ) is completely monotone if g(z) is a Bernstein function. Thus, one should prove that the function

$$\displaystyle \begin{aligned} s^\mu(1+\phi s^{-\rho})^{\gamma}=\left(s^{\mu/\gamma}+\phi s^{\mu/\gamma-\rho}\right)^{\gamma} \end{aligned} $$
(3.179)

is a Bernstein function. We have

$$\displaystyle \begin{aligned} \left(s^{\mu/\gamma}+\phi s^{\mu/\gamma-\rho}\right)^{\gamma}&=\left[\left(s^{\mu/\gamma}+\phi s^{\mu/\gamma-\rho}\right)^{\lceil\gamma\rceil}\right]^{\gamma/\lceil\gamma\rceil} \\&=\left(\sum_{r=0}^{\lceil\gamma\rceil}\binom{\lceil\gamma\rceil}{r}\phi^r s^{\mu\lceil\gamma\rceil/\gamma-\rho r}\right)^{\gamma/\lceil\gamma\rceil}. \end{aligned} $$
(3.180)

Since the space of Bernstein functions is closed under composition and linear combinations [35], it follows that (3.180) is a Bernstein function for 0 < μ⌈γ⌉∕γ − rρ < 1, ∀ r = 0, …, ⌈γ⌉, which coincides with the constraints derived in Sect. 3.7.1.1. The same restrictions will be obtained in the next section, within the continuous time random walk theory.

3.7.1.3 Fractional Poisson Process Involving Three Parameter M-L Function

At the end of this chapter we consider a fractional Poisson process defined by a discrete probability distribution in terms of the three parameter M-L function [31]. The telegraph process, which represents a finite-velocity one-dimensional random motion, has been generalized to a fractional one. In the fractional extension of the telegraph process \(\{\mathcal {T}_\alpha (t) \colon t \geq 0\}\), the changes of direction are related to the fractional Poisson process \(\{ \mathcal {N}_\alpha (t) \colon t \geq 0\}\) having distribution [3]

$$\displaystyle \begin{aligned} \mathbb P(\mathcal{N}_\alpha(t) = k) = \frac{\lambda^k}{E_\alpha(\lambda t^\alpha)} \frac{t^{\alpha k}}{\varGamma(\alpha k+1)}, \quad k \in \mathbb{N}_{0}:=\mathbb{N}\cup \{0\}, \quad t\geq 0. \end{aligned}$$

The fractional Poisson process \(\{ \mathcal {N}_{\alpha , \beta }(t) \colon t\geq 0\}\), defined with the two parameter M-L function \(E_{\alpha ,\beta }(\lambda t^\alpha )\), was studied in [15]. The related distribution is

$$\displaystyle \begin{aligned} \mathbb P(\mathcal{N}_{\alpha, \beta}(t) = k) = \frac{\lambda^k}{E_{\alpha, \beta}(\lambda t^\alpha)} \frac{t^{\alpha k}}{\varGamma (\alpha k + \beta)},\quad k\in \mathbb{N}_{0}, \quad t\geq 0, \end{aligned}$$

for which the related raw moments are obtained in terms of the Bell polynomials [15].

As a generalization of the previous ones, the more general fractional Poisson process \(\{\mathcal {N}_{\alpha , \beta }^\gamma (t) \colon t\geq 0\}\) defined by the three parameter M-L function \(E_{\alpha , \beta }^\gamma (\lambda t^\alpha )\) has distribution [31]

$$\displaystyle \begin{aligned} \mathbb P(\mathcal{N}_{\alpha, \beta}^{\,\gamma}(t) = k) = \frac{\lambda^k}{E_{\alpha, \beta}^\gamma(\lambda t^\alpha)}\frac{(\gamma)_k\, t^{\alpha k}}{k!\, \varGamma(\alpha k+\beta)}, \quad k \in \mathbb N_0, \quad t \geq 0. \end{aligned} $$
(3.181)

From here one can conclude that there is a correspondence between the non-homogeneous Poisson process \(\{ \mathcal {N}(t) \colon t \geq 0\}\) with intensity function \(\lambda \alpha t^{\alpha -1}\),

$$\displaystyle \begin{aligned} \mathbb P( \mathcal{N}(t) = k) = e^{-\lambda t^\alpha} \, \frac{(\lambda t^\alpha)^k}{\varGamma(k+1)}, \quad \lambda, \alpha>0, \quad k \in \mathbb N_0,\end{aligned}$$

and the fractional Poisson process \(\{\mathcal {N}_{\alpha , \beta }^\gamma (t) \colon t\geq 0\}\).

Proposition 3.4 ([31])

Let \(\min \{ \alpha , \beta , \gamma , \lambda \}>0\) and t ≥ 0. Then

$$\displaystyle \begin{aligned} \mathbb P(\mathcal{N}_{\alpha, \beta}^{\,\gamma}(t) = k) = \dfrac{\dfrac{(\gamma)_k}{\varGamma(\alpha k + \beta)}\,\mathbb P( \mathcal{N}(t) = k)}{\sum\limits_{n \geq 0} \frac{(\gamma)_n}{\varGamma(\alpha n + \beta)}\,\mathbb P( \mathcal{N}(t) = n)}, \quad k \in \mathbb N_0,\end{aligned}$$

where \(\mathcal {N}(t)\) is a non-homogeneous Poisson process with intensity function \(\lambda \alpha t^{\alpha -1}\).

Proof

Rewriting (3.181) as

$$\displaystyle \begin{aligned} \mathbb P(\mathcal{N}_{\alpha, \beta}^{\,\gamma}(t) = k) = \frac{\dfrac{(\gamma)_k}{\varGamma(\alpha k + \beta)}\, \dfrac{(\lambda t^\alpha)^k}{k!}\, e^{-\lambda t^\alpha}}{\sum\limits_{n \geq 0} \dfrac{(\gamma)_n}{\varGamma(\alpha n + \beta)}\, \dfrac{(\lambda t^\alpha)^n}{n!}\, e^{-\lambda t^\alpha}},\end{aligned}$$

one obtains the result in this Proposition.
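By construction, the weights in (3.181) are non-negative and sum to one, since the normalizing constant is exactly the series defining \(E^{\gamma }_{\alpha ,\beta }(\lambda t^{\alpha })\). A quick Python check (ours; truncated series, illustrative parameter values):

```python
import math

def ml3(alpha, beta, g, z, terms=60):
    """Truncated three parameter M-L series, adequate for moderate |z|."""
    acc, poch, fact = 0.0, 1.0, 1.0
    for j in range(terms):
        if j > 0:
            poch *= g + j - 1
            fact *= j
        acc += poch * z**j / (fact * math.gamma(alpha * j + beta))
    return acc

def pmf(k, alpha, beta, g, z):
    """P(N^gamma_{alpha,beta}(t) = k) from (3.181), with z = lambda * t**alpha."""
    poch = 1.0
    for i in range(k):
        poch *= g + i
    return poch * z**k / (math.factorial(k) * math.gamma(alpha * k + beta)
                          * ml3(alpha, beta, g, z))

alpha, beta, g, z = 0.6, 1.1, 1.4, 0.5   # illustrative parameters
probs = [pmf(k, alpha, beta, g, z) for k in range(40)]
print(sum(probs))   # ≈ 1.0
```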

In what follows, for simplicity, we set λ = 1. For a non-negative random variable X on a standard probability space \((\varOmega , \mathcal {F}, \mathbb P)\) having the fractional Poisson-type distribution

$$\displaystyle \begin{aligned} \mathbb P_{\alpha, \beta}^\gamma(k) = \mathbb P(X = k) = \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha) }\frac{(\gamma)_k \,t^{\alpha k}}{k!\, \varGamma(\alpha k+\beta)}, \quad k \in \mathbb N_0,\,\, t \geq 0, \end{aligned}$$

with \(\min \{\alpha , \beta , \gamma \} >0\). Since \(\sum _{k \geq 0} \mathbb P_{\alpha , \beta }^\gamma (k) = 1\), the random variable X is well defined. In the sequel we denote this correspondence by X ∼ML(α, β, γ).

The factorial moment of the random variable X of order \(s \in \mathbb N\) is given by

$$\displaystyle \begin{aligned} \varPhi_s = \langle X(X-1) \cdots (X-s+1)\rangle = (-1)^s\, \langle(-X)_s\rangle = \frac{\mathrm{d}^s}{\mathrm{d}t^s}\langle t^X\rangle\Big|{}_{t=1},\end{aligned}$$

provided the moment generating function \(M_X(t) = \langle t^X\rangle \) exists in some neighborhood of t = 1 together with all its derivatives up to the order s. By virtue of the Viète-Girard formulae for expanding X(X − 1)⋯(X − s + 1) one obtains

$$\displaystyle \begin{aligned} \varPhi_s = \sum _{r=1}^s (-1)^{s-r} \, e_r\, \langle X^r\rangle\end{aligned}$$

where \(e_r\) are the elementary symmetric polynomials:

$$\displaystyle \begin{aligned} e_r = e_r(\ell_1, \cdots, \ell_r) = \sum_{1 \le \ell_1< \cdots <\ell_r \le s-1}\ell_1 \cdots \ell_r, \quad r = \overline{0,s-1}.\end{aligned}$$

Theorem 3.19 ([31])

For all \(\min \{ \alpha , \beta , \gamma \}>0\), the s-th raw moment of the random variable X ∼ML(α, β, γ) is given by

$$\displaystyle \begin{aligned} \langle X^s \rangle=\frac{1}{E_{\alpha,\beta}^\gamma(t^\alpha)}\sum_{j=0}^s (\gamma)_j\,{s \brace j} \,t^{\alpha j}\, E_{\alpha, \alpha j+\beta}^{\gamma+j}(t^\alpha), \quad s \in \mathbb N_0, \quad t \geq 0. \end{aligned} $$
(3.182)

Moreover, the s-th factorial moment is given by

$$\displaystyle \begin{aligned} \varPhi_s = \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha) } \sum _{r=1}^s (-1)^r \, e_r\,\sum_{j=0}^r (\gamma)_j\,{r \brace j} \,t^{\alpha j}\, E_{\alpha, \alpha j+\beta}^{\gamma+j}(t^\alpha), \end{aligned} $$
(3.183)

where the curly braces denote the Stirling numbers of the second kind.

Proof

From the connection between the raw and the factorial moments of a random variable:

$$\displaystyle \begin{aligned} \langle X^s \rangle= \sum_{j=0}^s (-1)^j\,{s \brace j} \langle (-X)_j\rangle, \quad {s \brace j} = \frac 1{j!} \sum_{m=0}^j (-1)^{j-m} \binom{j}{m} m^s,\end{aligned}$$

one finds

$$\displaystyle \begin{aligned} \langle X^s \rangle&= \sum_{j=0}^s (-1)^j\, {s \brace j} \langle (-X)_j\rangle = \sum_{j=0}^s (-1)^j\,{s \brace j} \sum_{k \geq 0} (-k)_j \, \mathbb P_{\alpha, \beta}^\gamma(k) \\ & = \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha) } \sum_{j=0}^s (-1)^j\,{s \brace j} \sum_{k \geq 0}\frac{(-k)_j \,(\gamma)_k \,t^{\alpha k}}{k!\, \varGamma(\alpha k+\beta)} \\ &= \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha) \, \varGamma(\gamma)} \sum_{j=0}^s {s \brace j}\, t^{\alpha j}\,\sum_{k \geq j} \frac{\varGamma(\gamma+(k-j)+j) \,t^{\alpha(k-j)}}{(k-j)!\,\varGamma(\alpha (k-j)+ \alpha j+\beta )} \\ &= \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha) } \sum_{j=0}^s \frac{\varGamma(\gamma+j)}{\varGamma(\gamma)} \, {s \brace j} \,t^{\alpha j} \,\sum_{k \geq 0} \frac{(\gamma+j)_k \,t^{\alpha k}} {k!\, \varGamma(\alpha k+ \alpha j+\beta)}, \end{aligned} $$

which is the statement (3.182). The derivation of (3.183) is now straightforward.
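Formula (3.182) can be checked numerically by comparing it against the raw moments computed directly from the ML(α, β, γ) probability mass function \(\mathbb P_{\alpha ,\beta }^\gamma (k)\). The following Python sketch is an illustration only (the function names and the truncation level of the series are our choices, not part of the original text):

```python
from math import gamma, comb, factorial

def poch(x, n):
    """Rising factorial (Pochhammer symbol) (x)_n = x (x+1) ... (x+n-1)."""
    p = 1.0
    for i in range(n):
        p *= x + i
    return p

def ml3(alpha, beta, gam, z, terms=80):
    """Three-parameter (Prabhakar) Mittag-Leffler function E^gam_{alpha,beta}(z),
    truncated to `terms` summands (ample for moderate z > 0)."""
    return sum(poch(gam, k) * z**k / (factorial(k) * gamma(alpha * k + beta))
               for k in range(terms))

def stirling2(s, j):
    """Stirling number of the second kind, via the finite sum used in the proof;
    the alternating sum is exactly divisible by j!."""
    return sum((-1)**(j - m) * comb(j, m) * m**s for m in range(j + 1)) // factorial(j)

def raw_moment_direct(alpha, beta, gam, t, s, terms=80):
    """<X^s> summed directly against the pmf P(k) ∝ (gam)_k t^{alpha k} / (k! Gamma(alpha k + beta))."""
    norm = ml3(alpha, beta, gam, t**alpha)
    return sum(k**s * poch(gam, k) * t**(alpha * k)
               / (factorial(k) * gamma(alpha * k + beta)) for k in range(terms)) / norm

def raw_moment_closed(alpha, beta, gam, t, s):
    """<X^s> from the closed form (3.182)."""
    norm = ml3(alpha, beta, gam, t**alpha)
    return sum(poch(gam, j) * stirling2(s, j) * t**(alpha * j)
               * ml3(alpha, alpha * j + beta, gam + j, t**alpha)
               for j in range(s + 1)) / norm
```

For example, with α = 0.7, β = 1, γ = 2, t = 0.9 the two evaluations agree to machine precision for the first few orders s = 0, 1, 2, 3.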

In order to obtain the fractional order moments one needs the so-called extended Hurwitz-Lerch Zeta (HLZ) function \(\varPhi ^{(\rho , \sigma , \kappa )}_{\lambda , \mu ; \nu }(z, s, a)\) introduced in Refs. [14, 38] as

$$\displaystyle \begin{aligned} \varPhi^{(\rho, \sigma, \kappa)}_{\lambda, \mu; \nu}(z, s, a) = \sum_{n \geq 0} \frac{(\lambda)_{\rho n}\, (\mu)_{\sigma n}}{n!\, (\nu)_{\kappa n}}\, \dfrac{z^n}{(n+a)^s}, \end{aligned} $$
(3.184)

where \(\lambda , \mu \in \mathbb C\), \(a, \nu \in \mathbb C \setminus \mathbb Z_0^-\) and ρ, σ, κ > 0; moreover, κ − ρ − σ + 1 > 0 when \(s, z \in \mathbb C\); κ − ρ − σ = −1 and \(s \in \mathbb C\) when \(|z| < \delta = \rho ^{-\rho }\,\sigma ^{-\sigma }\,\kappa ^{\kappa }\); while κ − ρ − σ = −1 and \(\Re (s + \nu - \lambda - \mu ) >1\) when |z| = δ. By setting σ → 0 in (3.184) one obtains the generalized HLZ function

$$\displaystyle \begin{aligned} \varPhi^{(\rho, 0, \kappa)}_{\lambda, \mu; \nu}(z, s, a) \equiv \varPhi^{(\rho, \kappa)}_{\,\,\lambda; \nu}(z, s, a). \end{aligned}$$

Theorem 3.20 ([31])

Let X ∼ ML(α, β, γ). For all \(\min \{\alpha , \beta , \gamma \}>0\) and for all s ≥ 0 one gets

$$\displaystyle \begin{aligned} \langle X^s \rangle= \frac{\gamma\,t^\alpha}{E_{\alpha, \beta}^\gamma(t^\alpha)\, \varGamma(\alpha+\beta)}\, \varPhi^{(1, \alpha)}_{\gamma+1; \alpha+\beta}(t^\alpha, 1-s, 1). \end{aligned} $$
(3.185)

Proof

By definition, for all s > 0 it follows that

$$\displaystyle \begin{aligned} \langle X^s\rangle = \frac 1{E_{\alpha, \beta}^\gamma(t^\alpha)} \sum_{n \geq 1} n^s \, \frac{(\gamma)_n \,t^{\alpha n}}{n!\, \varGamma(\alpha n+\beta)}, \end{aligned}$$

since the zeroth term vanishes. Therefore,

$$\displaystyle \begin{aligned} \langle X^s\rangle &=\frac 1{E_{\alpha, \beta}^\gamma(t^\alpha)} \sum_{n \geq 1} \frac{n^{s-1}(\gamma)_n\, t^{\alpha n}}{(n-1)!\, \varGamma(\alpha n+\beta)}\\& =\frac{\gamma\,t^\alpha}{E_{\alpha, \beta}^\gamma(t^\alpha)} \sum_{n \geq 0} \frac{(\gamma+1)_n\, t^{\alpha n}}{n!\, \varGamma(\alpha n+\alpha+\beta)\,(n+1)^{1-s}} \\ &= \frac{\gamma\,t^\alpha}{E_{\alpha, \beta}^\gamma(t^\alpha)\, \varGamma(\alpha+\beta)}\, \varPhi^{(1, \alpha)}_{\gamma+1; \alpha+\beta}(t^\alpha, 1-s, 1). \end{aligned} $$

Setting λ = γ + 1, ν = α + β, \(z = t^\alpha \), s↦1 − s, ρ = 1, κ = α and a = 1, and applying the convergence constraints for \(\varPhi ^{(\rho , \kappa )}_{\,\,\lambda ; \nu }(z, s, a)\) stated after (3.184), one completes the proof.
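Since (3.185) holds for fractional orders s as well, it can be checked numerically by comparing the HLZ representation with the moment summed directly against the pmf (the n = 0 term vanishes). The sketch below is ours, with the series truncations chosen for illustration:

```python
from math import gamma, factorial

def poch(x, n):
    """Rising factorial (Pochhammer symbol) (x)_n."""
    p = 1.0
    for i in range(n):
        p *= x + i
    return p

def ml3(alpha, beta, gam, z, terms=80):
    """Prabhakar Mittag-Leffler function E^gam_{alpha,beta}(z)."""
    return sum(poch(gam, k) * z**k / (factorial(k) * gamma(alpha * k + beta))
               for k in range(terms))

def hlz(lam, nu, kappa, z, s, a, terms=80):
    """Generalized HLZ function Phi^{(1,kappa)}_{lam;nu}(z, s, a),
    with (nu)_{kappa n} = Gamma(nu + kappa n) / Gamma(nu)."""
    return sum(poch(lam, n) * gamma(nu) / (factorial(n) * gamma(nu + kappa * n))
               * z**n / (n + a)**s for n in range(terms))

def frac_moment_direct(alpha, beta, gam, t, s, terms=80):
    """<X^s>, s possibly fractional, summed directly against the pmf."""
    norm = ml3(alpha, beta, gam, t**alpha)
    return sum(n**s * poch(gam, n) * t**(alpha * n)
               / (factorial(n) * gamma(alpha * n + beta))
               for n in range(1, terms)) / norm

def frac_moment_hlz(alpha, beta, gam, t, s):
    """<X^s> from the HLZ representation (3.185)."""
    norm = ml3(alpha, beta, gam, t**alpha)
    return (gam * t**alpha / (norm * gamma(alpha + beta))
            * hlz(gam + 1, alpha + beta, alpha, t**alpha, 1 - s, 1))
```

With, e.g., α = 0.7, β = 1.2, γ = 1.5 and t = 0.8, the two evaluations agree for integer and non-integer s alike.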

Remark 3.2

For the raw integer-order moments of the two-parameter M-L distributed random variable, it was found in [15] that

$$\displaystyle \begin{aligned} \langle Y^n\rangle = \frac 1{E_{\alpha, \beta}(t)}\, \left( t\, \frac{\mathrm{d}}{\mathrm{d}t}\right)^n\, E_{\alpha, \beta}(t), \quad n \in \mathbb N_0.\end{aligned}$$

This case corresponds to the Y ∼ ML(α, β, 1) distribution. Indeed, taking \(s=1\), \(\gamma =1\), \(t \mapsto t^{\frac 1\alpha }\) in (3.182) we have

$$\displaystyle \begin{aligned} \langle X\rangle = t\, \frac{E_{\alpha, \alpha+\beta}^2(t)}{E_{\alpha, \beta}(t)}.\end{aligned}$$

On the other hand, since

$$\displaystyle \begin{aligned} \left(E_{\alpha, \beta}(t)\right)' = \sum_{n \geq 1} \frac{n\, t^{n-1}}{\varGamma(\alpha n+\beta)} = \sum_{n \geq 1} \frac{(2)_{n-1} t^{n-1}}{(n-1)!\,\varGamma(\alpha (n-1) + \alpha + \beta)} = E_{\alpha, \alpha+\beta}^2(t),\end{aligned}$$

one concludes that 〈X〉 ≡ 〈Y〉. By setting s = 1, γ = 1, \(t \mapsto t^{\frac {1}{\alpha }}\), relation (3.185) becomes

$$\displaystyle \begin{aligned} \langle X\rangle = t\,\frac{\varPhi^{(1, \alpha)}_{2; \alpha+\beta}(t, 0, 1) }{E_{\alpha, \beta}(t)\, \varGamma(\alpha+\beta)} = \frac{t}{E_{\alpha, \beta}(t)\, \varGamma(\alpha+\beta)}\,\sum_{n \geq 0} \frac{(2)_n t^n}{n!\,(\alpha+\beta)_{\alpha n}}.\end{aligned}$$
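Both ingredients of this remark, the derivative identity \(\left (E_{\alpha ,\beta }(t)\right )' = E_{\alpha ,\alpha +\beta }^2(t)\) and the resulting mean \(\langle X\rangle = t\,E_{\alpha ,\alpha +\beta }^2(t)/E_{\alpha ,\beta }(t)\), lend themselves to a quick numerical check: a central finite difference of \(E_{\alpha ,\beta }\) against the closed form, and the closed-form mean against direct summation over the pmf. A minimal sketch, with our own choice of parameters and step size:

```python
from math import gamma, factorial

def poch(x, n):
    """Rising factorial (Pochhammer symbol) (x)_n."""
    p = 1.0
    for i in range(n):
        p *= x + i
    return p

def ml3(alpha, beta, gam, z, terms=80):
    """Prabhakar Mittag-Leffler function E^gam_{alpha,beta}(z)."""
    return sum(poch(gam, k) * z**k / (factorial(k) * gamma(alpha * k + beta))
               for k in range(terms))

alpha, beta, t, h = 0.6, 1.1, 0.7, 1e-6

# (E_{alpha,beta})'(t) via central difference vs. the closed form E^2_{alpha,alpha+beta}(t)
deriv_fd = (ml3(alpha, beta, 1.0, t + h) - ml3(alpha, beta, 1.0, t - h)) / (2 * h)
deriv_cf = ml3(alpha, alpha + beta, 2.0, t)

# mean of Y ~ ML(alpha, beta, 1) after t -> t^{1/alpha}: pmf P(k) ∝ t^k / Gamma(alpha k + beta)
mean_closed = t * deriv_cf / ml3(alpha, beta, 1.0, t)
mean_direct = sum(k * t**k / gamma(alpha * k + beta)
                  for k in range(1, 80)) / ml3(alpha, beta, 1.0, t)
```

The finite-difference and closed-form derivatives coincide up to the discretization error, and the two mean expressions agree to machine precision.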

Corollary 3.3 ([31])

For all \(\min \{\alpha , \beta , \gamma \}>0\) and for all \(s \in \mathbb N_0\) we have

$$\displaystyle \begin{aligned} \varPhi^{(1, \alpha)}_{\gamma+1; \alpha+\beta}(t^\alpha, 1-s, 1) = \frac{\varGamma(\alpha+\beta)}{\gamma\,t^\alpha}\,\sum_{j=0}^s (\gamma)_j\,{s \brace j} \,t^{\alpha j}\, E_{\alpha, \alpha j+\beta}^{\gamma+j}(t^\alpha), \quad t >0.\end{aligned}$$