The analysis of fractional differential equations, carried out by means of fractional calculus and integral transforms (Laplace, Fourier), leads to certain special functions of Mittag-Leffler (M-L) and Wright types. These useful special functions are investigated systematically as relevant cases of the general class of functions which are popularly known as Fox H-functions, after Charles Fox, who initiated a detailed study of these functions as symmetrical Fourier kernels [5]. Definitions, some properties, relations, asymptotic expansions, and Laplace transform formulas for the M-L type functions and the Fox H-function are given in this Chapter.

At the beginning of the twentieth century, the Swedish mathematician Gösta Mittag-Leffler introduced a generalization of the exponential function, today known as the M-L function [28]. The properties of the M-L function and its generalizations were largely ignored by the scientific community for a long time, since no applications in the sciences were known. In 1930 Hille and Tamarkin solved the Abel-Volterra integral equation in terms of the M-L function [19]. The basic properties and relations of the M-L function appeared in the third volume of the Bateman project, in Chapter XVIII: Miscellaneous Functions [4]. More detailed analyses of the M-L function and its generalizations, as well as of fractional derivatives and integrals, were published later, and it has been shown that they are of great interest for modeling anomalous diffusion and relaxation processes.

Similarly, the Fox H-function, introduced by Fox [5], is of great importance in solving fractional differential equations and in the analysis of anomalous diffusion processes. The Fox H-function has been used to express the fundamental solution of the fractional diffusion equation obtained from a continuous time random walk model. Therefore, in this Chapter we give the most important definitions, relations, and asymptotic expansions of these functions, which represent a basis for the investigation of anomalous diffusion and non-exponential relaxation in different complex systems.

1.1 Mittag-Leffler Functions

The standard one parameter M-L function, introduced by Mittag-Leffler, is defined by [28]:

$$\displaystyle \begin{aligned} E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\varGamma(\alpha k+1)}, \end{aligned} $$
(1.1)

where \((z \in \mathrm {C}; \Re (\alpha )>0)\), and Γ is the Gamma function [4]. It generalizes the exponential, trigonometric, and hyperbolic functions since

$$\displaystyle \begin{aligned} E_{1}(\pm z)&=\sum_{k=0}^{\infty}\frac{(\pm z)^k}{\varGamma(k+1)}=\sum_{k=0}^{\infty}\frac{(\pm z)^k}{k!}=e^{\pm z},\\ E_{2}(-z^2)&=\sum_{k=0}^{\infty}\frac{(-z^2)^k}{\varGamma(2 k+1)}=\sum_{k=0}^{\infty}\frac{(-1)^k z^{2k}}{(2k)!}=\cos{}(z), \\ E_{2}(z^2)&=\sum_{k=0}^{\infty}\frac{z^{2k}}{\varGamma(2 k+1)}=\sum_{k=0}^{\infty}\frac{z^{2k}}{(2k)!}=\cosh(z). \end{aligned} $$

The case with α = 1∕2 yields

$$\displaystyle \begin{aligned} E_{\frac{1}{2}}\left(\pm z^{\frac{1}{2}}\right)=\sum_{k=0}^{\infty}\frac{\left(\pm z^{\frac{1}{2}}\right)^{k}}{\varGamma\left(\frac{k}{2}+1\right)}=\mathrm{e}^z\left[ 1+ \mathrm{ erf}\left(\pm z^{\frac{1}{2}}\right)\right], \end{aligned}$$

where

$$\displaystyle \begin{aligned} \mathrm{erf}(z) = \frac 2{\sqrt{\pi}}\int_0^z \mathrm{e}^{-x^2}\, \mathrm{d}x \end{aligned}$$

is the error function. The one parameter M-L function (1.1) is an entire function of order \(\rho =1/\Re (\alpha )\) and type 1.
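
The defining series (1.1) is straightforward to evaluate numerically for moderate arguments. The following minimal Python sketch (the function name mittag_leffler and the truncation level are illustrative choices, not part of the original text) sums the truncated series and checks the reductions to the exponential, cosine, and hyperbolic cosine listed above.

```python
import math

def mittag_leffler(alpha, z, n_terms=80):
    """Truncated series (1.1): E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).

    Adequate for moderate |z|; for large |z| the alternating series loses accuracy
    and the asymptotic expansions given later in the chapter should be used instead.
    """
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

z = 1.3
print(mittag_leffler(1.0, z), math.exp(z))          # E_1(z)    = e^z
print(mittag_leffler(2.0, -z ** 2), math.cos(z))    # E_2(-z^2) = cos z
print(mittag_leffler(2.0, z ** 2), math.cosh(z))    # E_2(z^2)  = cosh z
```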

A special form of the one parameter M-L function, which has many applications, is given by (see Fig. 1.1)

$$\displaystyle \begin{aligned} e_\alpha(t;\lambda)=E_\alpha(-\lambda t^\alpha) \quad \big(\alpha>0;\,\lambda \in \mathrm{C}\big). \end{aligned} $$
(1.2)

Its Laplace transform

$$\displaystyle \begin{aligned} \mathcal{L}[f(t)](s)=\int_{0}^{\infty}e^{-st}f(t)\,\mathrm{d}t, \end{aligned}$$

reads [29]

$$\displaystyle \begin{aligned} \mathcal{L}\left[e_{\alpha}(t;\mp\lambda)\right](s)=\frac{s^{\alpha-1}}{s^\alpha\mp\lambda}, \end{aligned} $$
(1.3)

where \( \Re (s)>|\lambda |{ }^{1/\alpha }\). The function (1.2) is an eigenfunction of the fractional boundary value problem \(_{C}D_{0+}^{\alpha }f(t)+\lambda f(t)=0\) (see the next section for the definition of the fractional derivative \(_{C}D_{0+}^{\alpha }\)), in the same way as the exponential function \(e^{-\lambda t}\) is an eigenfunction of the ordinary boundary value problem \(\frac {df(t)}{dt}+\lambda f(t)=0\).
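
The Laplace transform pair (1.3) can also be checked numerically. The sketch below (assuming SciPy is available; the cutoff T = 25 is an illustrative choice beyond which the integrand is negligible for the chosen s) compares a quadrature of the left-hand side with the closed form on the right.

```python
import math
from scipy.integrate import quad

def mittag_leffler(alpha, z, n_terms=150):
    """Truncated series (1.1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

alpha, lam, s = 0.7, 1.0, 1.2
# numerical Laplace transform of e_alpha(t; lam) = E_alpha(-lam t^alpha), cut at T = 25
lhs, _ = quad(lambda t: math.exp(-s * t) * mittag_leffler(alpha, -lam * t ** alpha), 0, 25.0)
rhs = s ** (alpha - 1) / (s ** alpha + lam)   # right-hand side of (1.3)
print(lhs, rhs)
```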

Fig. 1.1

One parameter M-L function (1.2) for α = 1∕2 (blue line), α = 1—exponential function (red line), α = 3∕2 (green line)

The two parameter M-L function defined by Agarwal [1], Erdélyi et al. [4], Kilbas et al. [20], and Podlubny [29]

$$\displaystyle \begin{aligned} E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\varGamma(\alpha k+\beta)}, \end{aligned} $$
(1.4)

where \((z, \beta \in \mathrm {C}; \Re (\alpha )>0)\), was introduced and investigated later. In Ref. [4] this function is called the generalized M-L function. Note that

$$\displaystyle \begin{aligned} E_{\alpha,1}(z)=E_{\alpha}(z), \end{aligned}$$

and

$$\displaystyle \begin{aligned} E_{\alpha,0}(z)=\sum_{k=1}^{\infty}\frac{z^{k}}{\varGamma(\alpha k)}=z\sum_{k=0}^{\infty}\frac{z^{k}}{\varGamma(\alpha k+\alpha)}=zE_{\alpha,\alpha}(z). \end{aligned}$$

It relates to some elementary functions, i.e.,

$$\displaystyle \begin{aligned} E_{1,2}(z)&=\sum_{k=0}^{\infty}\frac{z^{k}}{\varGamma(k+2)}=\sum_{k=0}^{\infty}\frac{z^{k}}{(k+1)!}=\frac{e^{z}-1}{z},\\ E_{2,2}(z)&=\sum_{k=0}^{\infty}\frac{z^{k}}{\varGamma(2 k+2)}=\sum_{k=0}^{\infty}\frac{z^{k}}{(2 k+1)!}=\frac{\sinh(\sqrt{z})}{\sqrt{z}}. \end{aligned} $$

The two parameter M-L function (1.4) is an entire function of order \(\rho =1/\Re (\alpha )\) and type 1. The following function

$$\displaystyle \begin{aligned} e_{\alpha,\beta}(t;\lambda)=t^{\beta-1}\,E_{\alpha, \beta}(-\lambda t^\alpha) \quad \big(\alpha,\beta>0;\,\lambda \in \mathrm{C}\big) \end{aligned} $$
(1.5)

plays an important role in the theory of fractional differential equations (see Fig. 1.2). The Laplace transform of the two parameter M-L function (1.5) reads [29]

$$\displaystyle \begin{aligned} \mathcal{L}\left[e_{\alpha,\beta}(t;\mp\lambda)\right](s)=\frac{s^{\alpha-\beta}}{s^\alpha\mp\lambda}, \end{aligned} $$
(1.6)

where \( \Re (s)>|\lambda |{ }^{1/\alpha }\). The following integrals

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac 1{1-z}=\int_0^\infty \mathrm{e}^{-x}\, E_\alpha(x^\alpha\, z)\,\mathrm{d}x=\int_0^\infty \mathrm{e}^{-x}\,x^{\beta-1}\, E_{\alpha, \beta}(x^\alpha\, z)\,\mathrm{d}x, \end{array} \end{aligned} $$

are fundamental in the evaluation of the Laplace transforms of the functions E α(−λx α) and E α,β(−λx α) when α, β > 0 and λ ∈C. Both of these functions play key rôles in fractional calculus and its application to differential equations.

Fig. 1.2

Two parameter M-L function (1.5) for α = 1∕2, β = 3∕4 (blue line), α = 1, β = 1∕2 (red line), α = β = 3∕2 (green line)

For the two parameter M-L function the following formulas hold true [12, 17]:

$$\displaystyle \begin{aligned} E_{\alpha,\beta}(z)=z\,E_{\alpha,\alpha+\beta}(z)+\frac{1}{\varGamma(\beta)}. \end{aligned} $$
(1.7)
$$\displaystyle \begin{aligned} E_{\alpha,\beta}(z)=\beta E_{\alpha,\beta+1}(z)+\alpha z\frac{\mathrm{d}}{\mathrm{d}z}E_{\alpha,\beta+1}(z), \end{aligned} $$
(1.8)
$$\displaystyle \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}z}E_{\alpha,\beta}(z)=\frac{E_{\alpha,\beta-1}(z)-(\beta-1)E_{\alpha,\beta}(z)}{\alpha z}, \end{aligned} $$
(1.9)
$$\displaystyle \begin{aligned} \frac{\mathrm{d}^{n}}{\mathrm{d}z^{n}}\left[z^{\beta-1}E_{\alpha,\beta}(az^{\alpha})\right]=z^{\beta-n-1}E_{\alpha,\beta-n}(az^{\alpha}), \end{aligned} $$
(1.10)
$$\displaystyle \begin{aligned} n\in\mathrm{N}, \end{aligned}$$

as well as

$$\displaystyle \begin{aligned} & \int_{0}^{x}t^{\gamma-1}E_{\alpha,\gamma}(-at^\alpha)(x-t)^{\beta-1}E_{\alpha,\beta}(-b(x-t)^\alpha)\,\mathrm{d}t\\ & \quad =\frac{bE_{\alpha,\beta+\gamma}(-bx^\alpha)-aE_{\alpha,\beta+\gamma}(-ax^\alpha)}{b-a}x^{\beta+\gamma-1}, \quad (a\neq b), \end{aligned} $$
(1.11)

from which it follows that

$$\displaystyle \begin{aligned} &\int_{0}^{x}t^{\alpha-1}E_{\alpha,\alpha}(-at^\alpha)(x-t)^{\beta-1}E_{\alpha,\beta}(-b(x-t)^\alpha)\,\mathrm{d}t \\ & \quad =\frac{E_{\alpha,\beta}(-bx^\alpha)-E_{\alpha,\beta}(-ax^\alpha)}{a-b}x^{\beta-1}, \quad (a\neq b), \end{aligned} $$
(1.12)

and

$$\displaystyle \begin{aligned} \int_{0}^{x}t^{\alpha-1}E_{\alpha,\alpha}\left(-at^{\alpha}\right)(x-t)^{\beta-1}E_{\alpha,\beta}\left(-a(x-t)^{\alpha}\right)\,\mathrm{d}t=x^{\alpha+\beta-1}E_{\alpha,\alpha+\beta}^{2}\left(-ax^{\alpha}\right). \end{aligned} $$
(1.13)

Here \(E_{\alpha,\alpha+\beta}^{2}\) denotes the three parameter M-L function defined in (1.14) below. For more useful relations and properties of these M-L functions, we refer to the literature [12, 17]. Moreover, the two parameter M-L function with a negative first parameter α has been studied in Ref. [15].
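
Relation (1.7) lends itself to a quick numerical sanity check; the short sketch below (with illustrative parameter values) evaluates both sides using a truncated series for the two parameter function (1.4).

```python
import math

def ml2(a, b, z, n_terms=150):
    """Truncated two parameter M-L series (1.4)."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(n_terms))

alpha, beta, z = 0.8, 1.3, -2.1
print(ml2(alpha, beta, z))                                      # E_{a,b}(z)
print(z * ml2(alpha, alpha + beta, z) + 1 / math.gamma(beta))   # z E_{a,a+b}(z) + 1/Gamma(b), Eq. (1.7)
```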

Furthermore, the three parameter M-L (or Prabhakar) function is defined by Prabhakar [34]:

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^\gamma(z)=\sum_{k=0}^{\infty}\frac{(\gamma)_k}{\varGamma(\alpha k+\beta)}\frac{z^k}{k!}, \end{aligned} $$
(1.14)

where β, γ, z ∈C, \(\Re (\alpha )>0\), (γ)k is the Pochhammer symbol

$$\displaystyle \begin{aligned} (\gamma)_{0}=1, \quad (\gamma)_{k}=\frac{\varGamma(\gamma+k)}{\varGamma(\gamma)}, \quad (0)_{0}:=1. \end{aligned}$$

This function is also an entire function of order \(\rho =1/\Re (\alpha )\) and type 1. By definition, it follows that

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^1(z)&=E_{\alpha,\beta}(z), \\ E_{\alpha,1}^1(z)&=E_{\alpha}(z), \end{aligned} $$

as well as

$$\displaystyle \begin{aligned} E_{\alpha,n}^0(z)=\begin{cases} \frac{1}{\varGamma(n)}, & n\in N, \\ 0, & n=0. \end{cases} \end{aligned} $$
(1.15)
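
A direct numerical evaluation of the series (1.14) is again straightforward for moderate arguments and γ > 0. The following minimal sketch (names, truncation levels, and parameter values are illustrative) checks the reductions \(E_{\alpha,\beta}^1=E_{\alpha,\beta}\) and \(E_{\alpha,1}^1=E_{\alpha}\).

```python
import math

def ml2(a, b, z, n_terms=150):
    """Truncated two parameter series (1.4)."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(n_terms))

def ml3(a, b, g, z, n_terms=150):
    """Truncated three parameter (Prabhakar) series (1.14), assuming g > 0 and moderate |z|."""
    return sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
               * z ** k / math.factorial(k) for k in range(n_terms))

alpha, beta, z = 0.7, 1.2, -1.8
print(ml3(alpha, beta, 1.0, z), ml2(alpha, beta, z))   # E^1_{a,b}(z) = E_{a,b}(z)
print(ml3(alpha, 1.0, 1.0, z), ml2(alpha, 1.0, z))     # E^1_{a,1}(z) = E_a(z)
```
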
Fig. 1.3

Three parameter M-L function (1.16) for α = 3∕4, β = 1, γ = 1∕2 (blue line), α = γ = 5∕4, β = 3∕2 (red line), α = 5∕4, β = 1∕4, γ = 1∕2 (green line)

The following function (see Fig. 1.3)

$$\displaystyle \begin{aligned} e_{\alpha, \beta}^{\gamma}(t;\lambda)= t^{\beta-1}\,E_{\alpha, \beta}^{\gamma}(-\lambda t^\alpha) \qquad \big( \min\{\alpha, \beta, \gamma\}>0;\, \lambda \in \mathrm{R}\big) \end{aligned} $$
(1.16)

is related to the three parameter M-L function. The Laplace transform of the three parameter M-L function (1.16) reads [20, 34]

$$\displaystyle \begin{aligned} \mathcal{L}\left[e_{\alpha, \beta}^{\gamma}(t;\mp\lambda)\right](s)=\frac{s^{\alpha\gamma-\beta}}{(s^\alpha\mp\lambda)^\gamma}, \end{aligned} $$
(1.17)

where \(|\lambda s^{-\alpha}| < 1\). For the three parameter M-L function (1.14), the following Laplace transform formula also holds true [45]

$$\displaystyle \begin{aligned} \frac{s^{\mu(\alpha-1)}}{s^\alpha\pm\lambda\left[\frac{s^{\rho\gamma-\alpha}}{\left(s^\rho+\nu\right)^\gamma}\right]}&=\frac{s^{\mu(\alpha-1)-\alpha}}{1\pm\lambda\left[\frac{s^{\rho\gamma-2\alpha}}{\left(s^\rho+\nu\right)^\gamma}\right]}=\sum_{k=0}^{\infty}(\mp\lambda)^{k}\frac{s^{(\rho\gamma-2\alpha)k+\mu(\alpha-1)-\alpha}}{\left(s^{\rho}+\nu\right)^{\gamma k}}\\ &=\mathcal{L}\left[\sum_{k=0}^{\infty}(\mp\lambda)^{k}x^{2\alpha k+\alpha+\mu-\mu\alpha-1}E_{\rho,2\alpha k+\alpha+\mu-\mu\alpha}^{\gamma k}\left(-\nu x^\rho\right)\right](s), \end{aligned} $$
(1.18)

where we apply relation (1.17).

Another formula which is used in solving fractional differential equations is [12]

$$\displaystyle \begin{aligned} &\left(\frac{\mathrm{d}}{\mathrm{d}z}\right)^{p}\left[z^{\beta-1}E_{\alpha,\beta}^{\gamma}(az^{\alpha})\right]=\left(\frac{\mathrm{d}}{\mathrm{d}z}\right)^{p}\sum_{k=0}^{\infty}\frac{(\gamma)_{k}}{\varGamma(\alpha k+\beta)}\frac{a^{k}z^{\alpha k+\beta-1}}{k!}\\ & \quad =\sum_{k=0}^{\infty}(\gamma)_{k}\frac{a^{k}z^{\alpha k+\beta-p-1}}{k!}\frac{(\alpha k+\beta-1)(\alpha k+\beta-2)\dots(\alpha k+\beta-p)}{\varGamma(\alpha k+\beta)}\\ & \quad =z^{\beta-p-1}\sum_{k=0}^{\infty}\frac{(\gamma)_{k}}{\varGamma(\alpha k+\beta-p)}\frac{\left(az^{\alpha}\right)^{k}}{k!}=z^{\beta-p-1}E_{\alpha,\beta-p}^{\gamma}(az^{\alpha}), \end{aligned} $$
(1.19)

where \(\Re (\beta -p)>0\), p ∈N, \(\Re (\gamma )>0\), a ∈C. In a similar way one obtains the n-th derivative of the three parameter M-L function [6]:

$$\displaystyle \begin{aligned} \left(\frac{\mathrm{d}}{\mathrm{d}z}\right)^{n}E_{\alpha,\beta}^{\gamma}(z)=\gamma(\gamma+1)\dots(\gamma+n-1)\,E_{\alpha,\beta+\alpha n}^{\gamma+n}(z), \end{aligned} $$
(1.20)

from where for γ = 1 one obtains the connection between the n-th derivative of the two parameter M-L function and the three parameter M-L function [20]

$$\displaystyle \begin{aligned} \left(\frac{\mathrm{d}}{\mathrm{d}z}\right)^{n}E_{\alpha,\beta}(z)=n!\,E_{\alpha,\beta+\alpha n}^{n+1}(z), \quad n\in N. \end{aligned} $$
(1.21)
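
Relation (1.21) can be verified numerically by comparing a central finite difference of the two parameter function with the corresponding three parameter function; the sketch below does this for n = 1 (all parameter values are illustrative).

```python
import math

ml2 = lambda a, b, z, N=150: sum(z ** k / math.gamma(a * k + b) for k in range(N))
ml3 = lambda a, b, g, z, N=150: sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
                                    * z ** k / math.factorial(k) for k in range(N))

alpha, beta, z, h = 0.9, 1.1, -0.6, 1e-5
numeric = (ml2(alpha, beta, z + h) - ml2(alpha, beta, z - h)) / (2 * h)   # d/dz E_{a,b}(z)
exact = math.factorial(1) * ml3(alpha, beta + alpha, 2.0, z)              # Eq. (1.21) with n = 1
print(numeric, exact)
```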

For the three parameter M-L function the following recurrence relations hold true [30]:

$$\displaystyle \begin{aligned} \alpha\gamma z\,E_{\alpha,\alpha+\beta+1}^{\gamma+1}(z)=E_{\alpha,\beta}^{\gamma}(z)-\beta E_{\alpha,\beta+1}^{\gamma}(z),\end{aligned} $$
(1.22)
$$\displaystyle \begin{aligned} \alpha^{2}\gamma(\gamma+1) z^{2}\,E_{\alpha,2\alpha+\beta+2}^{\gamma+2}(z)&=E_{\alpha,\beta}^{\gamma}(z)-(\alpha+2\beta+1)E_{\alpha,\beta+1}^{\gamma}(z)\\ &\quad +(\alpha+\beta+1)(\beta+1)E_{\alpha,\beta+2}^{\gamma}(z), \end{aligned} $$
(1.23)

for all \(\min \{\alpha ,\beta ,\gamma \}>0\), and z > 0.

Here we also give the following relation which appears in the anomalous diffusion modeling [36]

$$\displaystyle \begin{aligned} z^{\alpha_1}E_{\alpha_2-\alpha_1,\alpha_1+1}^{-1}\left(-z ^{\alpha_2-\alpha_1}\right)=\frac{z^{\alpha_1}}{\varGamma\left(1+\alpha_1\right)}+\frac{z^{\alpha_2}}{\varGamma\left(1+\alpha_2\right)}. \end{aligned} $$
(1.24)

It can be directly obtained from the following general formula [6]

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{-j}\left(z\right)=\sum_{k=0}^{j}(-1)^{k}\left(\begin{array}{l l} j\\ k \end{array}\right)\frac{z^{k}}{\varGamma(\alpha k+\beta)}, \quad j\in N. \end{aligned} $$
(1.25)

The infinite series in three parameter M-L functions can be represented in terms of one and two parameter M-L functions as follows [41]:

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty}(-xy)^{n}E_{\alpha,2\alpha n+\beta}^{n+1}\left(x+y\right)=\frac{xE_{\alpha,\beta}\left(x\right)-yE_{\alpha,\beta}\left(y\right)}{x-y} \end{aligned} $$
(1.26)

for x ≠ y, and

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty}\left(-x^{2}\right)^{n}E_{\alpha,2\alpha n+\beta}^{n+1}\left(2x\right)=E_{\alpha,\beta}\left(x\right)+x\frac{\mathrm{d}}{\mathrm{d}x}E_{\alpha,\beta}\left(x\right). \end{aligned} $$
(1.27)

In Chap. 7 we demonstrate the application of relations (1.26) and (1.27) in the theory of fractional generalized Langevin equation.
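
Both series identities converge quickly for small arguments, so they can be checked directly; the sketch below verifies (1.26) with a handful of outer terms (the truncation levels and parameter values are illustrative).

```python
import math

ml2 = lambda a, b, z, N=120: sum(z ** k / math.gamma(a * k + b) for k in range(N))
ml3 = lambda a, b, g, z, N=80: sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
                                   * z ** k / math.factorial(k) for k in range(N))

alpha, beta, x, y = 0.8, 1.0, 0.4, 0.15
lhs = sum((-x * y) ** n * ml3(alpha, 2 * alpha * n + beta, n + 1, x + y) for n in range(20))
rhs = (x * ml2(alpha, beta, x) - y * ml2(alpha, beta, y)) / (x - y)
print(lhs, rhs)   # both sides of (1.26)
```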

The asymptotic behavior of the three parameter M-L function for z ≫ 1 can be obtained by using the series expansion of the three parameter M-L function around z = ∞ [6] (see also [36])

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma}(-z)\simeq\frac{z^{-\gamma}}{\varGamma(\gamma)}\sum_{n=0}^{\infty}\frac{\varGamma(\gamma+n)}{\varGamma(\beta-\alpha(\gamma+n))}\frac{(-z)^{-n}}{n!}, \quad z>1. \end{aligned} $$
(1.28)

for 0 < α < 2. Thus, for large z one obtains

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma}(-z)\simeq\frac{z^{-\gamma}}{\varGamma(\beta-\alpha\gamma)}, \quad z\gg1, \end{aligned} $$
(1.29)

from which the following asymptotic behavior follows,

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma}(-z^{\alpha})\simeq\frac{z^{-\alpha\gamma}}{\varGamma(\beta-\alpha\gamma)}, \quad z\gg1, \end{aligned} $$
(1.30)

for large argument z. Furthermore, in the case z → 0, the three parameter M-L function has the behavior [36]

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma}(-z^{\alpha})\simeq\frac{1}{\varGamma(\beta)}-\gamma\frac{z^{\alpha}}{\varGamma(\alpha+\beta)}\simeq\frac{1}{\varGamma(\beta)}\exp\left(-\gamma\frac{\varGamma(\beta)}{\varGamma(\alpha+\beta)}z^{\alpha}\right), \quad z\ll1. \end{aligned} $$
(1.31)

For the case with 0 < α < 1 this behavior is called stretched exponential since it is a function whose decay with z is faster than that of the ordinary exponential function for 0 < z < 1 but slower afterwards [33]. On the contrary, for the case with 1 < α < 2 this behavior is called compressed exponential since it is a function whose decay with z is slower than the one of the ordinary exponential function for 0 < z < 1 but faster afterwards [33]. These behaviors of the three parameter M-L function are used in the description of anomalous diffusion and non-exponential relaxation processes. Graphical representation of the three parameter M-L function and its asymptotics is given in Fig. 1.4.
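
The two asymptotic regimes can be visualized by comparing the truncated series with (1.30) and (1.31); a minimal sketch (with illustrative parameters) is given below. For small z the series follows the stretched exponential, while for z ≫ 1 it crosses over to the power law.

```python
import math

ml3 = lambda a, b, g, z, N=150: sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
                                    * z ** k / math.factorial(k) for k in range(N))

alpha, beta, gam = 0.75, 1.0, 0.5
for z in (0.2, 0.5, 5.0, 15.0):
    series = ml3(alpha, beta, gam, -z ** alpha)
    stretched = math.exp(-gam * math.gamma(beta) / math.gamma(alpha + beta) * z ** alpha) / math.gamma(beta)  # (1.31)
    power_law = z ** (-alpha * gam) / math.gamma(beta - alpha * gam)                                          # (1.30)
    print(z, series, stretched, power_law)
```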

Fig. 1.4

Three parameter M-L function (1.14) for α = 3∕4, β = 1, γ = 1∕2 (blue line). The stretched exponential asymptotic (1.31) (red line) and the power-law asymptotic (1.30) (green line) are plotted for the same values of parameters. Reprinted figure with permission from T. Sandev, A.V. Chechkin, N. Korabel, H. Kantz, I.M. Sokolov and R. Metzler, Phys. Rev. E, 92, 042117 (2015). Copyright (2015) by the American Physical Society

For γ → 1, the series (1.28) reduces to the asymptotic expansion of the two parameter M-L function

$$\displaystyle \begin{aligned} E_{\alpha,\beta}(-z)\simeq-\sum_{n=1}^{\infty}\frac{(-z)^{-n}}{\varGamma(\beta-\alpha n)}, \quad z>1, \end{aligned} $$
(1.32)

and for one parameter M-L function it reads

$$\displaystyle \begin{aligned} E_{\alpha}(-z)\simeq-\sum_{n=1}^{\infty}\frac{(-z)^{-n}}{\varGamma(1-\alpha n)}, \quad z>1. \end{aligned} $$
(1.33)

The four parameter M-L function is defined by Srivastava and Tomovski [42]:

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma,\kappa}(z)=\sum_{n=0}^{\infty}\frac{(\gamma)_{\kappa n}}{\varGamma(\alpha n+\beta)}\cdot \frac{z^n}{n!}, \end{aligned} $$
(1.34)

where \((z, \alpha , \beta , \gamma , \kappa \in \mathrm {C};\Re [\alpha ]>\max \{0,\Re [\kappa ]-1\};\Re [\kappa ]>0)\), (γ)κn is the Pochhammer symbol. The four parameter M-L function is an entire function of order \(\rho =\frac {1}{\Re (\alpha -\kappa )+1}\) and type \(\sigma =\frac {1}{\rho }\left (\frac {[\Re (\kappa )]^{\Re (\kappa )}}{[\Re (\alpha )]^{\Re (\alpha )}}\right )\). It is a generalization of the three parameter M-L function \(E_{\alpha ,\beta }^\gamma (z)\), i.e.,

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\gamma,1}(z)=E_{\alpha,\beta}^\gamma(z). \end{aligned}$$

As further extensions of the M-L functions, we would like to draw attention to the multinomial M-L functions defined by Hilfer et al. [18]:

$$\displaystyle \begin{aligned} E_{\left(\alpha_{1},\alpha_{2},\dots,\alpha_{n}\right),\beta}\left(z_{1},z_{2},\dots,z_{n}\right)&=\sum_{k=0}^{\infty}\sum_{l_{1}\geq0,l_{2}\geq 0,\dots,l_{n}\geq0}^{l_{1}+l_{2}+\dots+l_{n}=k}\left(\begin{array}{c l} k\\ l_{1},\dots,l_{n} \end{array}\right)\\ & \quad \times \frac{\prod_{i=1}^{n}z_{i}^{l_{i}}}{\varGamma\left(\beta+\sum_{i=1}^{n}\alpha_{i}l_{i}\right)}, \end{aligned} $$
(1.35)

where

$$\displaystyle \begin{aligned} \left(\begin{array}{c l} k\\ l_{1},\dots,l_{n} \end{array}\right)=\frac{k!}{l_{1}!l_{2}!\ldots l_{n}!} \end{aligned} $$

are the so-called multinomial coefficients. Luchko and Gorenflo [21] called this function the multivariate M-L function, but it was later renamed the multinomial M-L function [18]. The following function

$$\displaystyle \begin{aligned} &e_{(\alpha_1,\alpha_2,\dots,\alpha_n),\beta}\left(t;\lambda_1,\lambda_2,\dots,\lambda_n\right)\\ & \quad =t^{\beta-1}E_{\left(\alpha_{1},\alpha_{2},\dots,\alpha_{n}\right),\beta}\left(-\lambda_1 t^{\alpha_1},-\lambda_2 t^{\alpha_2},\dots,-\lambda_n t^{\alpha_n}\right), \end{aligned} $$
(1.36)

has been shown to have applications in the description of various anomalous diffusion-wave models. Its Laplace transform reads [18]

$$\displaystyle \begin{aligned} \mathcal{L}\left[e_{(\alpha_1,\alpha_2,\dots,\alpha_n),\beta}\left(t;\mp\lambda_1,\mp\lambda_2,\dots,\mp\lambda_n\right)\right](s)=\frac{s^{-\beta}}{1\mp\sum_{j=1}^{n}\lambda_{j}s^{-\alpha_{j}}}. \end{aligned} $$
(1.37)

Here we note that for α 1 = α, λ 1 = λ and λ 2 = ⋯ = λ n = 0 the multinomial M-L function reduces to the two parameter M-L function (1.5),

$$\displaystyle \begin{aligned} e_{(\alpha),\beta}\left(t;\lambda\right)=\mathcal{L}^{-1}\left[\frac{s^{-\beta}}{1+\lambda s^{-\alpha}}\right]=t^{\beta-1}E_{\alpha,\beta}\left(-\lambda t^{\alpha}\right). \end{aligned} $$
(1.38)

Moreover, for λ 1 ≠ 0, λ 2 ≠ 0, λ 3 = ⋯ = λ n = 0, one obtains that the multinomial M-L function can give infinite series in three parameter M-L functions, i.e.,

$$\displaystyle \begin{aligned} e_{(\alpha_1,\alpha_2),\beta}\left(t;\lambda_1,\lambda_2\right)&=\mathcal{L}^{-1}\left[\frac{s^{-\beta}}{1+\lambda_1 s^{-\alpha_1}+\lambda_2 s^{-\alpha_2}}\right]\\&=\mathcal{L}^{-1}\left[\frac{s^{-\beta}}{1+\lambda_1 s^{-\alpha_1}}\frac{1}{1+\lambda_2\frac{ s^{-\alpha_2}}{1+\lambda_1s^{-\alpha_1}}}\right]\\&=\mathcal{L}^{-1}\left[\sum_{k=0}^{\infty}\left(-\lambda_2\right)^{k}\frac{s^{-\left(\alpha_2-\alpha_1\right)k+\alpha_1-\beta}}{\left(s^{\alpha_1}+\lambda_1\right)^{k+1}}\right]\\&=\sum_{k=0}^{\infty}\left(-\lambda_2\right)^{k}t^{\alpha_2 k+\beta-1}E_{\alpha_1,\alpha_2 k+\beta}^{k+1}\left(-\lambda_1 t^{\alpha_1}\right), \end{aligned} $$
(1.39)

where we apply the Laplace transform formula (1.17).
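
Representation (1.39) can be checked against a direct evaluation of the double sum (1.35) for two variables; the sketch below (with illustrative names and parameters, and truncations chosen for moderate arguments) compares the two routes.

```python
import math

def ml3(a, b, g, z, n_terms=100):
    """Truncated Prabhakar series (1.14)."""
    return sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
               * z ** k / math.factorial(k) for k in range(n_terms))

def multinomial_ml2(a1, a2, beta, z1, z2, n_terms=60):
    """Direct double sum of the two-variable case of (1.35)."""
    return sum(math.factorial(l1 + l2) / (math.factorial(l1) * math.factorial(l2))
               * z1 ** l1 * z2 ** l2 / math.gamma(beta + a1 * l1 + a2 * l2)
               for l1 in range(n_terms) for l2 in range(n_terms))

a1, a2, beta, lam1, lam2, t = 0.4, 0.9, 1.0, 0.5, 0.7, 0.8
direct = t ** (beta - 1) * multinomial_ml2(a1, a2, beta, -lam1 * t ** a1, -lam2 * t ** a2)
via_prabhakar = sum((-lam2) ** k * t ** (a2 * k + beta - 1)
                    * ml3(a1, a2 * k + beta, k + 1, -lam1 * t ** a1) for k in range(30))
print(direct, via_prabhakar)
```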

Fig. 1.5

Multinomial M-L function (1.36) for λ 1 = λ 2 = λ 3 = 1∕3, α 1 = 1∕4, α 2 = 1∕2, β = 7∕8 and α 3 = 3∕4 (blue line), α 3 = 5∕4, (red line), α 3 = 7∕4 (green line)

Graphical representation of the multinomial M-L function \(e_{(\alpha _1,\alpha _2,\alpha _3),\beta } \left (t;\lambda _1,\lambda _2,\lambda _3\right )\) (1.36) is given in Fig. 1.5. In the short time limit it behaves as \(t^{\beta -1}/\varGamma (\beta )\), and in the long time limit as \(t^{\beta -\alpha _3-1}/\varGamma (\beta -\alpha _3)\). The crossover behavior depends on all parameters. Therefore, by tuning the parameters one may fit different crossover behaviors, which makes the multinomial M-L function suitable for the description of complex behaviors of the mean square displacement (MSD) observed in different physical and biological systems.

1.2 Fox H-Function

The Fox H-function (or simply the H-function) is defined by the following Mellin-Barnes type integral [5, 26, 43]

$$\displaystyle \begin{aligned} H_{p,q}^{m,n}(z)&=H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_1,A_1),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_q,B_q) \end{array}\right.\right]\\&=H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right]=\frac{1}{2\pi\imath}\int_{\varOmega}\theta(s)z^{s}\mathrm{d}s, \end{aligned} $$
(1.40)

where

$$\displaystyle \begin{aligned} \theta(s)=\frac{\prod_{j=1}^{m}\varGamma(b_j-B_js)\prod_{j=1}^{n}\varGamma(1-a_j+A_js)}{\prod_{j=m+1}^{q}\varGamma(1-b_j+B_js)\prod_{j=n+1}^{p}\varGamma(a_j-A_js)}, \end{aligned}$$

0 ≤ n ≤ p, 1 ≤ m ≤ q, a_i, b_j ∈C, A_i, B_j ∈R+, i = 1, …, p, j = 1, …, q. The integration contour Ω runs from c − ı∞ to c + ı∞, separating the poles of the functions Γ(b_j − B_j s), j = 1, …, m, from those of the functions Γ(1 − a_i + A_i s), i = 1, …, n. The H-function plays an important role in the theory of fractional differential equations, enabling closed form representations of the solutions of fractional diffusion-wave equations. It is a very general function which contains many well-known special functions as special cases.

Series expansion of the H-function (1.40) is given by Mathai and Saxena [26]

$$\displaystyle \begin{aligned} &H_{p,q}^{m,n}\left[z\left|\begin{array}{l} (a_1,A_1),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_q,B_q) \end{array}\right.\right]\\& \quad =\sum_{h=1}^{m}\sum_{k=0}^{\infty}\frac{\prod_{j=1, j\neq h}^{m}\varGamma\left(b_j-B_j\frac{b_h+k}{B_h}\right)\prod_{j=1}^{n}\varGamma\left(1-a_j+A_j\frac{b_h+k}{B_h}\right)}{\prod_{j=m+1}^{q}\varGamma\left(1-b_j+B_j\frac{b_h+k}{B_h}\right)\prod_{j=n+1}^{p}\varGamma\left(a_j-A_j\frac{b_h+k}{B_h}\right)}\\ & \qquad \cdot\frac{(-1)^kz^{(b_h+k)/B_h}}{k!B_h}. \end{aligned} $$
(1.41)
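
For numerical work, the series (1.41) can be summed directly whenever the poles of the Gamma functions Γ(b_j − B_j s), j = 1, …, m, are simple and do not coincide. The following Python sketch (the function name and truncation level are illustrative) implements the truncated series and checks it against the elementary case H^{1,0}_{0,1}[z | —; (0,1)] = e^{−z}.

```python
import math

def fox_h_series(m, n, a, A, b, B, z, k_max=80):
    """Truncated series expansion (1.41) of the Fox H-function, assuming z > 0.

    a, A and b, B are the lists of upper and lower parameter pairs; the poles of
    Gamma(b_j - B_j s), j = 1..m, are assumed simple (no coinciding poles).
    """
    p, q = len(a), len(b)
    total = 0.0
    for h in range(m):
        for k in range(k_max):
            s = (b[h] + k) / B[h]
            num = 1.0
            for j in range(m):
                if j != h:
                    num *= math.gamma(b[j] - B[j] * s)
            for j in range(n):
                num *= math.gamma(1 - a[j] + A[j] * s)
            den = 1.0
            for j in range(m, q):
                den *= math.gamma(1 - b[j] + B[j] * s)
            for j in range(n, p):
                den *= math.gamma(a[j] - A[j] * s)
            total += num / den * (-1) ** k * z ** s / (math.factorial(k) * B[h])
    return total

# elementary check: H^{1,0}_{0,1}[z | - ; (0,1)] = exp(-z)
print(fox_h_series(1, 0, [], [], [0.0], [1.0], 0.7), math.exp(-0.7))
```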

The H-function has the following properties [26]:

$$\displaystyle \begin{aligned} &H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_1,A_1),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_{q-1},B_{q-1}),(a_1,A_1) \end{array}\right.\right]\\& \quad =H_{p-1,q-1}^{m,n-1}\left[z\left|\begin{array}{c l} (a_2,A_2),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_{q-1},B_{q-1}) \end{array}\right.\right], \end{aligned} $$
(1.42)

where n ≥ 1, q > m,

$$\displaystyle \begin{aligned} & H_{p,q}^{m,n}\left[z^{\delta}\left|\begin{array}{l} (a_1,A_1),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_q,B_q) \end{array}\right.\right]\\ & \quad =\frac{1}{\delta}\cdot H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_1,A_1/\delta),\ldots ,(a_p,A_p/\delta)\\ (b_1,B_1/\delta),\ldots ,(b_q,B_q/\delta) \end{array}\right.\right],\,\, \delta>0,\end{aligned} $$
(1.43)
$$\displaystyle \begin{aligned} H_{p+1,q+1}^{m,n+1}\left[z\left|\begin{array}{c l} (0,\alpha), (a_p,A_p)\\ (b_q,B_q), (r,\alpha) \end{array}\right.\right]=(-1)^{r} H_{p+1,q+1}^{m+1,n}\left[z\left|\begin{array}{c l} (a_p,A_p), (0,\alpha)\\ (r,\alpha), (b_q,B_q) \end{array}\right.\right],\end{aligned} $$
(1.44)
$$\displaystyle \begin{aligned} z^{\sigma}H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right]=H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_p+\sigma A_p,A_p)\\ (b_q+\sigma B_q,B_q) \end{array}\right.\right],\end{aligned} $$
(1.45)
$$\displaystyle \begin{aligned} H_{p,q}^{m,n}\left[z\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right]=H_{q,p}^{n,m}\left[z^{-1}\left|\begin{array}{c l} (1-b_q,B_q)\\ (1-a_p,A_p) \end{array}\right.\right]. \end{aligned} $$
(1.46)

The k-th derivative (k ∈N) of H-function is given by Srivastava et al. [43]

$$\displaystyle \begin{aligned} & \frac{\mathrm{d}^k}{\mathrm{d}z^k}\left\{z^\alpha H_{p,q}^{m,n}\left[(az)^\beta\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right]\right\}\\ & \quad =z^{\alpha-k}H_{p+1,q+1}^{m,n+1}\left[(az)^\beta\left|\begin{array}{c l} (-\alpha,\beta),(a_p,A_p)\\ (b_q,B_q), (k-\alpha,\beta) \end{array}\right.\right], \end{aligned} $$
(1.47)

where β > 0. All these properties and relations have been used for simplification of the obtained solutions of fractional diffusion and Fokker-Planck equations.

The Laplace transform of the Fox H-function reads [26, 43]

$$\displaystyle \begin{aligned} \mathcal{L}\left[t^{\rho-1}H_{p+1,q}^{m,n}\left[zt^{-\sigma}\left|\begin{array}{c l} (a_p,A_p),(\rho,\sigma)\\ (b_q,B_q) \end{array}\right.\right]\right]=s^{-\rho}H_{p,q}^{m,n}\left[zs^{\sigma}\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right], \end{aligned} $$
(1.48)

where σ > 0, \(\Re (s)>0\), \(\Re \left (\rho +\sigma \max _{1\leq j\leq n}\left (\frac {1-a_j}{A_j}\right )\right )>0\), \(|\arg (z)|<\pi \theta _1/2\), θ 1 > 0, θ 1 = θ − σ, with θ defined as in (1.50) below. The Mellin transform of the Fox H-function yields

$$\displaystyle \begin{aligned} \int_{0}^{\infty}x^{\xi-1}H_{p,q}^{m,n}\left[ax\left|\begin{array}{l} (a_1,A_1),\ldots ,(a_p,A_p)\\ (b_1,B_1),\ldots ,(b_q,B_q) \end{array}\right.\right]\mathrm{d}x=a^{-\xi}\theta(-\xi), \end{aligned} $$
(1.49)

where

$$\displaystyle \begin{aligned} \theta(-\xi)=\frac{\prod_{j=1}^{m}\varGamma(b_j+B_j\xi)\prod_{j=1}^{n}\varGamma(1-a_j-A_j\xi)}{\prod_{j=m+1}^{q}\varGamma(1-b_j-B_j\xi)\prod_{j=n+1}^{p}\varGamma(a_j+A_j\xi)}. \end{aligned}$$

The Mellin transform will be used to obtain the fractional moments of the fundamental solutions of fractional diffusion equations. Furthermore, the cosine Mellin transform of the Fox H-function reads [26, 35, 43]

$$\displaystyle \begin{aligned} &\int_{0}^{\infty}k^{\rho-1}\cos{}(kx)H_{p,q}^{m,n}\left[ak^{\delta}\left|\begin{array}{c l} (a_p,A_p)\\ (b_q,B_q) \end{array}\right.\right]\mathrm{d}k\\&\quad =\frac{\pi}{x^\rho}H_{q+1,p+2}^{n+1,m}\left[\frac{x^\delta}{a}\left|\begin{array}{c l} (1-b_q,B_q), (\frac{1+\rho}{2}, \frac{\delta}{2})\\ (\rho,\delta), (1-a_p,A_p), (\frac{1+\rho}{2},\frac{\delta}{2}) \end{array}\right.\right], \end{aligned} $$
(1.50)

where \(\Re \left (\rho +\delta \min _{1\leq j\leq m}\left (\frac {b_j}{B_j}\right )\right )>1\), x δ > 0, \(\Re \left (\rho +\delta \max _{1\leq j\leq n}\left (\frac {a_j-1}{A_j}\right )\right )<\frac {3}{2}\), \(|\arg (a)|<\pi \theta /2\), θ > 0, \(\theta =\sum _{j=1}^{n}A_j-\sum _{j=n+1}^{p}A_j+\sum _{j=1}^{m}B_j-\sum _{j=m+1}^{q}B_j\). The application of these transformation formulas will be demonstrated later in solving different fractional diffusion and Fokker-Planck equations.

The three parameter M-L function is a special case of the H-function [26]

$$\displaystyle \begin{aligned} E_{\alpha,\beta}^{\delta}(-z)=\frac{1}{\varGamma(\delta)}H_{1,2}^{1,1}\left[z\left|\begin{array}{l} (1-\delta,1)\\ (0,1),(1-\beta,\alpha) \end{array}\right.\right]. \end{aligned} $$
(1.51)
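
Relation (1.51) can be checked by summing the series (1.41) for this particular H-function, which reduces to a single sum; the sketch below (with illustrative parameters) compares it, divided by Γ(δ), with the defining Prabhakar series (1.14).

```python
import math

def prabhakar(a, b, d, z, n_terms=150):
    """Truncated Prabhakar series (1.14)."""
    return sum(math.gamma(d + k) / math.gamma(d) / math.gamma(a * k + b)
               * z ** k / math.factorial(k) for k in range(n_terms))

def h_1_1_2(a, b, d, z, n_terms=150):
    """Series (1.41) applied to H^{1,1}_{1,2}[z | (1-d,1); (0,1),(1-b,a)]."""
    return sum(math.gamma(d + k) / math.gamma(b + a * k)
               * (-1) ** k * z ** k / math.factorial(k) for k in range(n_terms))

alpha, beta, delta, z = 0.7, 0.9, 1.3, 0.8
print(prabhakar(alpha, beta, delta, -z))                   # E^delta_{alpha,beta}(-z)
print(h_1_1_2(alpha, beta, delta, z) / math.gamma(delta))  # right-hand side of (1.51)
```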

Thus, by using relations (1.51) and (1.50), the cosine transform of the two parameter M-L function is given in terms of the H-function, i.e.,

$$\displaystyle \begin{aligned} \int_{0}^{\infty}\cos{}(kx)E_{\alpha,\beta}\left(-ak^2\right)\mathrm{d}k&=\frac{\pi}{x}H_{3,3}^{2,1}\left[\frac{x^2}{a}\left|\begin{array}{c l} (1,1), (\beta,\alpha), (1,1)\\ (1,2), (1,1), (1,1) \end{array}\right.\right]\\&=\frac{\pi}{x}H_{1,1}^{1,0}\left[\frac{x^2}{a}\left|\begin{array}{c l} (\beta,\alpha)\\ (1,2) \end{array}\right.\right]. \end{aligned} $$
(1.52)

This relation will be used later to solve the mono-fractional diffusion equation.

The asymptotic expansion of the H-function \(H_{p,q}^{m,0}(z)\) for large z is [26, 40]

$$\displaystyle \begin{aligned} H_{p,q}^{m,0}(z)\simeq Bz^{(1-\alpha)/m^{*}}\exp\left(-m^{*}C^{1/m^{*}}z^{1/m^{*}}\right),\end{aligned} $$
(1.53)
$$\displaystyle \begin{aligned} \alpha=\sum_{k=1}^{p}a_{k}-\sum_{k=1}^{q}b_{k}+\frac{1}{2}(q-p+1),\end{aligned} $$
(1.54)
$$\displaystyle \begin{aligned} m^{*}=\sum_{j=1}^{q}B_{j}-\sum_{j=1}^{p}A_{j}>0,\end{aligned} $$
(1.55)
$$\displaystyle \begin{aligned} C=\prod_{k=1}^{p}\left(A_{k}\right)^{A_{k}}\prod_{k=1}^{q}\left(B_{k}\right)^{-B_{k}},\end{aligned} $$
(1.56)
$$\displaystyle \begin{aligned} B=(2\pi)^{-\frac{m-p-1}{2}}C^{(1-\alpha)/m^{*}}\left(m^{*}\right)^{-1/2}\prod_{k=1}^{p}\left(A_{k}\right)^{-a_{k}+1/2}\prod_{k=1}^{m}\left(B_{k}\right)^{b_{k}-1/2}. \end{aligned} $$
(1.57)

This asymptotic formula, as we will see in the next chapters, is very important in the analysis of the asymptotic behaviors of the fundamental solutions of fractional diffusion and Fokker-Planck equations.

The Fox-Wright function is defined by Mathai and Saxena [26]

$$\displaystyle \begin{aligned} {}_{p}\varPsi_{q}(z)={}_{p}\varPsi_{q}\left[\begin{array}{c l} (a_1,A_1),\ldots ,(a_p,A_p);\\ (b_1,B_1),\ldots ,(b_q,B_q); \end{array}z \right]=\sum_{k=0}^{\infty}\frac{\prod_{j=1}^{p}\varGamma(a_j+A_j k)}{\prod_{j=1}^{q}\varGamma(b_j+B_j k)}\cdot\frac{z^k}{k!}, \end{aligned} $$
(1.58)

where a_j, A_j ∈ C, \(\Re [A_j]>0\), for j = 1, …, p, and b_j, B_j ∈ C, \(\Re [B_j]>0\), for j = 1, …, q, \(1+\Re \left (\sum _{j=1}^{q}B_{j}-\sum _{j=1}^{p}A_j\right )\geq 0\). For a special case of the Wright function (p = 0, q = 1, b 1 = β, B 1 = α), the following notation is used [26]:

$$\displaystyle \begin{aligned} \varphi(\alpha,\beta;z)=\sum_{n=0}^{\infty}\frac{1}{\varGamma(\alpha n+\beta)}\frac{z^{n}}{n!} =H_{0,2}^{1,0}\left[-z\left|\begin{array}{c l} - \\ (0,1),(1-\beta,\alpha) \end{array}\right.\right], \end{aligned} $$
(1.59)

where \(\Re (\alpha )>-1\), β ∈ C.

It is easily seen from the definition that

$$\displaystyle \begin{aligned} E_{\alpha, \beta}^{\gamma}(z)= \frac{1}{\varGamma(\gamma)}\; \; _{1}\varPsi _{1}\left[ \begin{array}{rr} \left(\gamma,1\right); \\ \\ \left(\beta,\alpha\right); \end{array} z\right]. \end{aligned} $$
(1.60)

The Laplace transform of the four parameter M-L function can be represented in terms of the Fox-Wright function [42]

$$\displaystyle \begin{aligned} \mathcal{L}\left[t^{\rho-1}E_{\alpha,\beta}^{\gamma,\kappa}(\omega t^{\sigma})\right](s)=\frac{s^{-\rho}}{\varGamma(\gamma)}{}_{2}\varPsi_{1}\left[\begin{array}{c l} (\rho,\sigma),(\gamma,\kappa);\\ {\quad \quad }(\beta,\alpha); \end{array}\frac{\omega}{s^{\sigma}} \right]. \end{aligned} $$
(1.61)

The auxiliary functions of the Wright type (used by Mainardi) are defined by

$$\displaystyle \begin{aligned} M_{\alpha}(y)=\sum_{n=0}^{\infty}\frac{1}{\varGamma(-\alpha n+1-\alpha)}\frac{(-y)^{ n}}{n!}. \end{aligned} $$
(1.62)

The relation to the Fox H-function is as follows [25]:

$$\displaystyle \begin{aligned} M_{\alpha}(y)=H_{1,1}^{1,0}\left[\left.y\right.\left|\begin{array}{l}(1-\alpha, \alpha)\\(0,1)\end{array}\right.\right]. \end{aligned} $$
(1.63)

The one-sided Lévy stable probability density L α(y) can be represented through the M α(y) as [11]

$$\displaystyle \begin{aligned} L_{\alpha}(t)=\frac{\alpha}{t^{\alpha+1}}M_{\alpha}\left(\frac{1}{t^{\alpha}}\right), \end{aligned} $$
(1.64)

which has the Laplace transform

$$\displaystyle \begin{aligned} L_{\alpha}(t)=\mathcal{L}^{-1}\left[e^{-s^{\alpha}}\right]. \end{aligned} $$
(1.65)
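
Since the series (1.62) runs through poles of the Gamma function (where the corresponding terms vanish), a numerical evaluation needs the reciprocal Gamma function; the sketch below (with illustrative names and values) does this and checks the classical special case M_{1/2}(y) = exp(−y²/4)/√π together with the Lévy-Smirnov density obtained from (1.64) for α = 1/2.

```python
import math

def rgamma(x):
    """1/Gamma(x), set to 0 at the poles x = 0, -1, -2, ... of the Gamma function."""
    if x <= 0 and abs(x - round(x)) < 1e-12:
        return 0.0
    return 1.0 / math.gamma(x)

def mainardi_m(alpha, y, n_terms=80):
    """Truncated series (1.62) for the Mainardi function M_alpha(y)."""
    return sum((-y) ** n / math.factorial(n) * rgamma(-alpha * n + 1 - alpha)
               for n in range(n_terms))

y = 1.2
print(mainardi_m(0.5, y), math.exp(-y * y / 4) / math.sqrt(math.pi))   # M_{1/2}(y)

t = 0.7   # L_{1/2}(t) = t^{-3/2} exp(-1/(4t)) / (2 sqrt(pi)) via (1.64)
print(0.5 / t ** 1.5 * mainardi_m(0.5, 1 / t ** 0.5),
      math.exp(-1 / (4 * t)) / (2 * math.sqrt(math.pi) * t ** 1.5))
```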

All these properties and relations are of great importance in the theory of fractional differential equations, and will be applied in the next chapters.

1.3 Some Results Related to the Complete Monotonicity of the Mittag-Leffler Functions

In this part we analyze the complete monotonicity of the function \( e_{\alpha , \beta }^{\gamma }(t; \lambda )\). In this respect we recall the Prabhakar formula:

$$\displaystyle \begin{aligned} \mathcal{L}_s\big[ e_{\alpha, \beta}^{\gamma}(t; \lambda)\big] = \frac{s^{\alpha\gamma-\beta}}{(s^\alpha+\lambda)^\gamma} \qquad \big( s>|\lambda|{}^{\frac 1\alpha} \big). \end{aligned} $$
(1.66)

For simplicity we use λ = 1. This convention does not restrict the generality of our considerations.

In Ref. [3], the authors treated the case 0 < α, β, γ ≤ 1 with αγ ≤ β. They discussed complete monotonicity of the function \(e_{\alpha ,\beta }^{\gamma }\) by invoking a theorem given by Gripenberg et al. [14]. This theorem gives conditions for the complete monotonicity of a function f in terms of properties of its Laplace transform. Here we use the method of the Bernstein theorem which relates the complete monotonicity of a function f to the non-negativity of its inverse Laplace transform. We also note that the complete monotonicity of the M-L functions has been investigated and discussed in several works [6, 7, 13, 16, 22,23,24, 27, 32, 39].

We first show that, under certain conditions to be made precise later, the function

$$\displaystyle \begin{aligned} e_{\alpha,\beta}^{\gamma}(t) \equiv e_{\alpha, \beta}^{\gamma}(t;1) \end{aligned}$$

is the Laplace transform of a non-negative function [46]. For this purpose, we will bend the Bromwich path of the Laplace inversion formula into the Hankel path, using the Cauchy residue theorem to take into account the singularities which we sweep over.

A function φ(s) which has a pole of order n at s_0 possesses the residue at this point given by

$$\displaystyle \begin{aligned}\mathrm{Res}[\varphi(s); s_0] = \frac{1}{(n-1)!}\lim_{s \to s_0} \frac{\mathrm{d}^{n-1}}{\mathrm{d}s^{n-1}} \left\{\varphi(s)(s-s_0)^n\right\}.\end{aligned}$$

This last formula gives the coefficient of (s − s_0)^{−1} in the Laurent series expansion of φ(s) around s_0 (see [37]).

Lemma 1.1 ([46])

Let

$$\displaystyle \begin{aligned} \psi(s)=\frac{s^p}{(1+s^q)^{n}}\quad (p,q>0;\;n\in\mathrm{N}). \end{aligned}$$

Then the following assertion holds true:

$$\displaystyle \begin{aligned} Res\left[\psi(s);e^{\pm\imath\frac{\pi}{q}}\right]=\frac{e^{\pm \imath \frac{\pi}q (p+q+1)}(-p)_{n-1}}{q^n\,(n-1)!}\sum_{k = 0}^{n-1}\frac{(1-n)_k}{(p-n+2)_k}\,c_k, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} c_\ell = (-1)^{l}\sum_{ \begin{array}{l} j_1+ \cdots + j_n = \ell\\ (0 \leq j_1, \cdots, j_n \leq \ell) \end{array}} b_{j_1}^*\cdots\,b_{j_n}^* \quad \left(\ell\in\mathrm{N}_0\right) \end{array} \end{aligned} $$
(1.67)

with the coefficients \(b^*_j\) given by

$$\displaystyle \begin{aligned} b_j^* = b_j^*(q) = \delta_{0j} + q^{-j}\big(1-\delta_{0j}\big)\,\begin{vmatrix} \tbinom q2 & q & 0 & \cdots & 0 \\ \tbinom q3 & \tbinom q2 & q & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \tbinom qj & \tbinom q{j-1} & \tbinom q{j-2} & \ddots & q \\ \tbinom q{j+1} & \tbinom qj & \tbinom q{j-1} & \cdots & \tbinom q2 \end{vmatrix} \qquad \left(j\in\mathrm{N}_0\right).\end{aligned} $$
(1.68)

Proof

In order to compute \(Res[\psi (s);e^{\imath \frac {\pi }{q}}]\) let us transform s q + 1 (q > 0) as follows:

$$\displaystyle \begin{aligned} 1+s^q &= 1+ \big( s - e^{\imath\pi/q} + e^{\imath\pi/q}\big)^q\\ &= 1- \sum_{k=0}^\infty \binom qk e^{-\imath(\pi/q)k} \big(s- e^{\imath\pi/q}\big)^k\\ &= \mathrm{e}^{-\mathrm{i}\pi} \sum_{k=1}^\infty \binom qk e^{-\imath(\pi/q)k} \big(s- e^{\imath\pi/q}\big)^k\\ &= \mathrm{e}^{-\mathrm{\imath} \frac \pi q (q+1)}\big(s-e^{\imath\pi/q}\big) \sum_{k=0}^\infty \binom q{k+1} \big(e^{-\imath\pi/q} s- 1\big)^k.\end{aligned} $$
(1.69)

For all p > 0 and n ∈N, by using (1.69), one has

$$\displaystyle \begin{aligned} \psi(s) \big(s- e^{\imath\frac{\pi}{q}}\big)^n &= e^{\imath \frac{n\pi}q (q+1)} s^p \left[\sum_{k=0}^\infty \binom q{k+1} \big(e^{-\imath\pi/q} s- 1\big)^k \right]^{-n} \\ &=\frac{\mathrm{e}^{\mathrm{\imath} \frac{n\pi}q (q+1)} s^p}{q^n}\Bigg[1+\cdots+\frac 1q\binom{q}{\ell+1} \big(e^{-\imath\pi/q}s-1\big)^\ell+\cdots\Bigg]^{-n}.\end{aligned} $$
(1.70)

The next step is to invert the power series

$$\displaystyle \begin{aligned} \sum_{k=0}^\infty a_k X_q^k,\end{aligned} $$

where

$$\displaystyle \begin{aligned} a_j = \frac 1q \binom q{j+1} \qquad \big(X_q = e^{-\imath\pi/q} s- 1\big) . \end{aligned}$$

By the well-known procedure, it can be found that

$$\displaystyle \begin{aligned} \left(\sum_{j=0}^\infty a_j X_q^j\right)^{-1} = \sum_{j=0}^\infty b_jX_q^j,\end{aligned} $$

where the unknown coefficients b j are given by the following system:

$$\displaystyle \begin{aligned} \sum_{m=0}^j \binom q{m+1}\,b_{j-m} = q \delta_{0j}, \qquad \left(j \in \mathrm{N}_0\right).\end{aligned} $$

Adapting the solution of the general Hessenberg type system considered in Ref. [8] to the above system in b j, Eq. (1.68) is obtained. Indeed, since [8, p. 738, Theorem 3.1]

$$\displaystyle \begin{aligned} b_j^* = (-1)^j b_j = \begin{vmatrix} 1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & \frac{1}{q}\tbinom q2 & 1 & 0 & \cdots & 0 \\ 0 & \frac{1}{q}\tbinom q3 & \frac{1}{q}\tbinom q2 & 1 & \ddots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \frac{1}{q}\tbinom qj & \frac{1}{q}\tbinom q{j-1} & \frac{1}{q}\tbinom q{j-2} & \ddots & 1 \\ 0 & \frac{1}{q}\tbinom q{j+1} & \frac{1}{q}\tbinom qj & \frac{1}{q}\tbinom q{j-1} & \cdots & \frac{1}{q}\tbinom q2 \end{vmatrix},\end{aligned} $$

one has \(b^*_0 = 1\), and the expansion along the first column yields (1.68).

We now look for the power series in X q, which is equal to the n-th power of

$$\displaystyle \begin{aligned} \sum_{j=0}^\infty b_jX_q^j,\end{aligned} $$

that is,

$$\displaystyle \begin{aligned} \sum_{\ell=0}^\infty c_\ell X_q^\ell = \Big( \sum_{j=0}^\infty b_jX_q^j\Big)^n,\end{aligned} $$

so that

$$\displaystyle \begin{aligned} c_\ell &=\sum_{\begin{array}{l} j_1+\ldots +j_n=\ell \\ (0\leq j_1,\ldots , j_n \leq \ell) \end{array}} b_{j_1}\cdots b_{j_n}\\ &=\sum_{ \begin{array}{l} j_1+ \dots + j_n = \ell \\ (0 \leq j_1, \dots, j_n \leq \ell) \end{array}}(-1)^{j_1+\dots+j_n}b_{j_1}^*\cdots\,b_{j_n}^*. \end{aligned} $$

Some fairly obvious steps would now give us the asserted form of the coefficients c_ℓ. Thus, for instance, one has c_0 = 1, c_1 = 1 − q, and so on. By this simplification, Eq. (1.70) becomes

$$\displaystyle \begin{aligned} \psi(s) \big(s-e^{\imath\frac{\pi}{q}})^n = s^p \sum_{\ell=0}^\infty c_\ell X_q^\ell.\end{aligned}$$

Next, by using the chain rule, one calculates the limit of the derivative as follows:

$$\displaystyle \begin{aligned} &\lim_{s \to e^{\imath\frac{\pi}{q}}} \left[ \psi(s)\left(s-e^{\imath\frac{\pi}{q}}\right)^n \right]^{(n-1)} \\ & \quad = e^{\imath\frac\pi q(p-n+1)}\, \sum_{k = 0}^{n-1} \tbinom{n-1}k\, k!\, c_k \,(-1)^{n-1-k} (-p)_{n-1-k}\\ & \quad = e^{\imath\frac\pi q(p-n+1)}\,(-1)^{n-1}\, \sum_{k = 0}^{n-1} (-1)^{k} (1-n)_k\,(-p)_{n-1-k}\, c_k. \end{aligned} $$
(1.71)

Since

$$\displaystyle \begin{aligned} (b)_{n+m} = (b)_n \cdot (b+n)_m, \end{aligned}$$

upon replacing n by n − 1 and setting m = −k, it is obtained

$$\displaystyle \begin{aligned} (-p)_{n-1-k} = (-p)_{n-1} \cdot (-p+n-1)_{-k}.\end{aligned}$$

On the other hand, it is easily observed that

$$\displaystyle \begin{aligned} (c)_{-n} = \dfrac{(-1)^{n}}{(1-c)_n}.\end{aligned}$$

Therefore,

$$\displaystyle \begin{aligned} (-p)_{n-1-k} = \frac{(-1)^k\, (-p)_{n-1}}{(p-n+2)_k},\end{aligned}$$

which can be used in Eq. (1.71) to get

$$\displaystyle \begin{aligned} \lim_{s \to e^{\imath\frac{\pi}{q}}} \left[ \psi(s)\left(s-e^{\imath\frac{\pi}{q}}\right)^n \right]^{(n-1)} = e^{\imath\frac\pi q(p-n+1)}\,(-1)^{n-1}\,(-p)_{n-1} \sum_{k = 0}^{n-1} \frac{(1-n)_k}{(p-n+2)_k}\, c_k. \end{aligned}$$

Hence

$$\displaystyle \begin{aligned}Res\left[\psi(s);e^{\imath\frac{\pi}{q}}\right] = \frac{e^{\imath \frac{\pi}q (p+q+1)}(-p)_{n-1}}{q^n\,(n-1)!} \sum_{k = 0}^{n-1} \frac{(1-n)_k}{(p-n+2)_k}\, c_k.\end{aligned}$$

Similarly, one finds

$$\displaystyle \begin{aligned}Res\left[\psi(s);e^{-\imath\frac{\pi}{q}}\right] = \frac{e^{-\imath \frac{\pi}q (p+q+1)}(-p)_{n-1}}{q^n\,(n-1)!} \sum_{k = 0}^{n-1} \frac{(1-n)_k}{(p-n+2)_k}\, c_k,\end{aligned}$$

which completes the proof of the Lemma.

Theorem 1.1 ([46])

Let \({Br}_{\sigma _0}\) denote the integration path

$$\displaystyle \begin{aligned} \{s= \sigma+{\imath}\tau \colon \sigma = \sigma_0\qquad \textit{and} \qquad \tau \in \mathrm{R}\} \end{aligned}$$

in the upward direction. Then, for all α ∈ (0, 1], β > 0, γ > 0 and for all t > 0,

$$\displaystyle \begin{aligned} e_{\alpha, \beta}^{\gamma}(t) = L_{t}^{-1}\left[ \frac{s^{\alpha\gamma-\beta}}{(s^\alpha+1)^\gamma} \right] = \frac 1{2\pi\imath}\,\int_{{Br}_0} {e}^{st}\,\frac{s^{\alpha\gamma-\beta}}{(s^\alpha+1)^\gamma}\, {d}s = L_t \left[ K_{\alpha, \beta}^{\gamma} \right] \end{aligned} $$
(1.72)

and

$$\displaystyle \begin{aligned} K_{\alpha, \beta}^{\gamma}(r) = \frac{r^{\alpha\gamma-\beta}}\pi\, \frac{\sin\left[ \gamma\, \arctan \left(\dfrac{r^\alpha\,\sin{}(\pi\,\alpha)}{r^{\alpha}\,\cos{}(\pi\, \alpha) + 1}\right) + \pi(\beta-\alpha\gamma)\right]} {\left[r^{2\alpha}+2r^\alpha\, \cos{}(\pi\, \alpha) + 1\right]^{\frac\gamma2}}. \end{aligned} $$
(1.73)

Moreover, for all α ∈ (1, 2], β > 0 and γ = n ∈N,

$$\displaystyle \begin{aligned} e_{\alpha, \beta}^{n}(t)&= L_{t}^{-1}\left[ \frac{s^{\alpha\,n-\beta}}{(s^\alpha+1)^n}\right] + \frac{2(-1)^{n-1}}{\alpha^n\,(n-1)!}\,\,{e}^{t \cos\left(\frac\pi\alpha\right)}\\ & \quad \times \cos \left[ t\sin \left(\tfrac\pi\alpha\right) - \tfrac\pi\alpha(\beta-1)\right]\, \sum_{\ell=0}^{n-1} \frac{(1-n)_\ell\,c_\ell}{(\alpha n-\beta-n+2)_\ell}, \end{aligned} $$
(1.74)

where

$$\displaystyle \begin{aligned}L_{t}^{-1}\left[ \frac{s^{\alpha\,n-\beta}}{(s^\alpha+1)^n}\right] = \frac 1{2\pi{\imath}}\,\int_{{Br}_0} {e}^{st}\,\frac{s^{\alpha\,n-\beta}}{(s^\alpha+1)^n}\, {d}s = L_t \left[ K_{\alpha, \beta}^{n} \right] ,\end{aligned}$$

and c_ℓ (ℓ ∈{0, 1, 2, ⋯ , n − 1}) and \(b_j^* = b_j^*(\alpha )\;\;(j \in \mathrm {N}_0)\) are given by (1.67) and (1.68), respectively.

Proof

By employing Prabhakar’s formula (1.66), one derives \(e_{\alpha , \beta }^{\gamma }(t)\) as the following inverse Laplace transform:

$$\displaystyle \begin{aligned} e_{\alpha, \beta}^{\gamma}(t) = \frac 1{2\pi {\imath}} \int_{\mathrm{Br}} \mathrm{e}^{st}\,\frac{s^{\alpha\gamma-\beta}} {(s^\alpha + 1)^\gamma}\, \mathrm{d}s \quad (0<\alpha\leq 2)\end{aligned}$$

without detouring on the general theory of the M-L functions in the complex plane.

For transparency reasons, two cases (1) α ∈ (0, 1] and (2) α ∈ (1, 2] are considered separately. For all non-integer values of α, the power s α is given by

$$\displaystyle \begin{aligned} s^\alpha = |s|{}^\alpha\,\mathrm{e}^{\mathrm{\imath}\alpha\arg(s)} \quad \big(|\arg (s)|<\pi\big), \end{aligned}$$

that is, in the complex s-plane cut along the negative real axis.

The essential step consists of decomposing \(e_{\alpha , \beta }^{\gamma }(t)\) into a sum of two terms, bending the Bromwich path of integration Br into the equivalent Hankel path Ha(ρ), a loop which starts from − along the lower side of the negative real half-axis, encircles the circular disk \(|s| \leq \rho ^{\frac {1}{\alpha }} = 1\) in the positive sense, and terminates at − along the upper side of the negative real half-axis. Hence

$$\displaystyle \begin{aligned} e_{\alpha, \beta}^{\gamma}(t) = f_{\alpha, \beta}^{\gamma}(t) + g_{\alpha, \beta}^{\gamma}(t) \quad \left(t \geq 0\right) \end{aligned} $$
(1.75)

with

$$\displaystyle \begin{aligned} f_{\alpha, \beta}^{\gamma}(t) = \frac 1{2\pi {\imath}} \int_{-\mathrm{Ha}(\epsilon)} \mathrm{e}^{st}\,\frac{s^{\alpha\gamma-\beta}} {(s^\alpha + 1)^\gamma}\, \mathrm{d}s, \end{aligned} $$
(1.76)

where the path −Ha(𝜖) has the opposite orientation with respect to Ha(𝜖), with vanishing 𝜖 → 0, and

$$\displaystyle \begin{aligned} g_{\alpha, \beta}^{\gamma}(t) = \sum_{j} \mathrm{e}^{s_jt}\, \mathrm{Res} \left[ s^{\alpha\gamma-\beta}(s^\alpha+1)^{-\gamma}; s_j\right]\, ,\end{aligned}$$

where s j are the relevant poles of the integrand in (1.76).

Let γ = n. In fact, in this case, the poles of order n turn out to be

$$\displaystyle \begin{aligned} s_j = \exp\left( {\imath}(2j+1)\frac{\pi}{\alpha}\right)\qquad \big(|\arg(s_j)|<\pi\big).\end{aligned} $$
  1. 1.

    If α ∈ (0, 1], there are no such poles, since (for all integers j) we have

    $$\displaystyle \begin{aligned} |2j+1|\pi \geq \alpha \pi. \end{aligned}$$

    Consequently, for all t ≥ 0, the function \(g_{\alpha , \beta }^{\gamma }(t)\) vanishes. So, in view of (1.76), the display (1.75) becomes

    $$\displaystyle \begin{aligned} e_{\alpha, \beta}^{\gamma}(t) = \frac 1{2\pi {\imath}} \int_{-\mathrm{Ha}(\epsilon)} \mathrm{e}^{st}\,\frac{s^{\alpha\gamma-\beta}} {(s^\alpha + 1)^\gamma}\, {d}s = L_t\left[ K_{\alpha, \beta}^{\gamma}\right], \end{aligned}$$

    where, by the fact that the values of the integrand below and above the cut along the negative real half-line are complex conjugates of each other (or, alternatively, by the Titchmarsh formula [44]), one obtains the stated formula (1.73):

    $$\displaystyle \begin{aligned} K_{\alpha, \beta}^{\gamma}(r) &= - \frac 1\pi\, \Im\left( \frac{r^{\alpha\gamma-\beta}\, \mathrm{e}^{\mathrm{\imath}\pi(\alpha\gamma-\beta)}}{(r^\alpha\,\mathrm{e}^{{\imath}\pi\alpha}+1)^\gamma}\right) \\ &= -\frac{r^{\alpha\gamma-\beta}}\pi\, \frac{\sin \left[ \pi(\alpha\gamma-\beta)-\gamma\, \arctan \left(\frac{\sin{}(\pi\alpha)} {\cos{}(\pi\alpha) + r^{-\alpha}}\right) \right]} {\left[r^{2\alpha} + 2r^\alpha\, \cos{}(\pi\alpha) + 1\right]^{\frac\gamma2}}, \end{aligned} $$

    which establishes the first part of the theorem.

  2. 2.

    If α ∈ (1, 2], there exist two relevant poles given by

    $$\displaystyle \begin{aligned} s_{\pm 1} = \exp \{\pm {\imath}\tfrac\pi\alpha\} \end{aligned}$$

    of order n located in the left half-plane for the function s ↦ s^{αγ−β}(s^α + 1)^{−n}. Then, by (1.67) and (1.68), one has

    $$\displaystyle \begin{aligned} p = \alpha \cdot n - \beta\qquad \text{and} \qquad q = \alpha. \end{aligned}$$

    One thus concludes that

    $$\displaystyle \begin{aligned} g_{\alpha, \beta}^{\gamma}(t) & = \mathrm{e}^{s_{-1}t}\; \mathrm{Res}\left[ \frac{s^{\alpha\gamma-\beta}} {(s^\alpha+1)^n}; s_{-1}\right] + \mathrm{e}^{s_{1}t}\; \mathrm{Res}\left[ \frac{s^{\alpha\gamma-\beta}}{(s^\alpha+1)^n}; s_1\right] \\ &= \exp\left(t\mathrm{e}^{{\imath}\frac\pi\alpha}\right)\, \frac{\exp\left[{\imath}\pi \left(n+1- \frac{\beta-1}\alpha\right)\right]\,(-\alpha n+\beta)_{n-1}}{\alpha^n\, (n-1)!}\, \\ & \quad \times \sum_{k=0}^{n-1} \frac{(1-n)_k\, c_k}{\big((\alpha -1) n-\beta+2\big)_k} \\ & \quad + \exp\left(t\mathrm{e}^{-{\imath}\frac\pi\alpha}\right)\, \frac{\exp\left[-{\imath}\pi \left(n+1 - \frac{\beta-1}\alpha\right)\right]\,(-\alpha n+\beta)_{n-1}}{\alpha^n\, (n-1)!}\, \\ & \quad \times \sum_{k=0}^{n-1} \frac{(1-n)_k\, c_k}{\big((\alpha -1) n-\beta+2\big)_k} \\ &= \frac{2(-1)^{n+1}\,\mathrm{e}^{t \,\cos\left(\frac\pi\alpha\right)}}{\alpha^n\,(n-1)!} \, \cos \left[ t\,\sin \left(\tfrac\pi\alpha\right) - \tfrac\pi\alpha(\beta-1)\right] \, \\ & \quad \times \sum_{k=0}^{n-1} \frac{(1-n)_k\, c_k}{\big((\alpha -1) n-\beta+2\big)_k} . \end{aligned} $$

    Therefore, by using (1.75), one deduces the assertion (1.74) of the theorem.
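
For α ∈ (0, 1] the representation (1.72)–(1.73) can be checked numerically: integrating e^{−rt} K^γ_{α,β}(r) over r should reproduce t^{β−1} E^γ_{α,β}(−t^α). The sketch below (assuming SciPy is available; the parameters are illustrative and chosen with αγ = β so that the kernel has no singularity at r = 0) performs this comparison.

```python
import math
from scipy.integrate import quad

def ml3(a, b, g, z, n_terms=150):
    """Truncated Prabhakar series (1.14)."""
    return sum(math.gamma(g + k) / math.gamma(g) / math.gamma(a * k + b)
               * z ** k / math.factorial(k) for k in range(n_terms))

def spectral_k(alpha, beta, gamma, r):
    """Spectral function (1.73) for lambda = 1."""
    phase = (gamma * math.atan(r ** alpha * math.sin(math.pi * alpha)
                               / (r ** alpha * math.cos(math.pi * alpha) + 1))
             + math.pi * (beta - alpha * gamma))
    denom = (r ** (2 * alpha) + 2 * r ** alpha * math.cos(math.pi * alpha) + 1) ** (gamma / 2)
    return r ** (alpha * gamma - beta) / math.pi * math.sin(phase) / denom

alpha, beta, gamma, t = 0.5, 0.6, 1.2, 2.0
lhs = t ** (beta - 1) * ml3(alpha, beta, gamma, -t ** alpha)   # e^gamma_{alpha,beta}(t)
rhs, _ = quad(lambda r: math.exp(-r * t) * spectral_k(alpha, beta, gamma, r), 0, math.inf)
print(lhs, rhs)
```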

Remark 1.1

For γ = 1 and β = γ = 1, the expression in (1.73) reduces, respectively, to the following well-known results:

$$\displaystyle \begin{aligned} K_{\alpha, \beta}(r) = K_{\alpha, \beta}^{1}(r) = \frac{r^{\alpha-\beta}}\pi\, \frac{r^\alpha\,\sin (\pi\beta) + \sin[\pi(\beta-\alpha)]} {r^{2\alpha} + 2\cos{}(\pi\alpha)\,r^\alpha + 1} \quad \left(0<\alpha < \beta\leq 1\right)\end{aligned}$$

for the two parameter kernel [9, 10], and

$$\displaystyle \begin{aligned} K_{\alpha}(r) = K_{\alpha, 1}^{1}(r) = \frac{r^{\alpha-1}}{\pi}\,\frac{\sin{}(\pi\alpha)} {r^{2\alpha} + 2\cos{}(\pi\alpha)\,r^\alpha + 1} \quad \left(0<\alpha \leq 1\right) \end{aligned} $$
(1.77)

for the one parameter kernel (see, for example, [9, 10]).

Now, putting β = n = 1 in (1.74), we are led to the following:

Corollary 1.1

For all α ∈ (1, 2] and t > 0, the following assertion holds true: 

$$\displaystyle \begin{aligned} e_{\alpha}(t) = \int_0^\infty \mathrm{e}^{-rt} K_{\alpha}(r)\, \mathrm{d}r + \frac 2\alpha\,\mathrm{e}^{t \cos{}(\frac\pi\alpha)} \,\cos\left[t\sin \left(\tfrac\pi\alpha\right)\right]. \end{aligned} $$
(1.78)

Moreover, for all α ∈ (1, 2], β > 0 and t > 0,

$$\displaystyle \begin{aligned} e_{\alpha, \beta}(t) = \int_0^\infty \mathrm{e}^{-rt} K_{\alpha, \beta}(r)\, \mathrm{d}r + \frac 2\alpha\,\mathrm{e}^{t \cos{}(\frac\pi\alpha)} \,\cos \left[ t\sin \left(\tfrac\pi\alpha\right) - \tfrac\pi\alpha(\beta-1)\right].\end{aligned}$$

Since \(\lim _{t \to 0^+} e_\alpha (t) =1\), from (1.72) and (1.78) one concludes:

Corollary 1.2

The following integral holds true:

$$\displaystyle \begin{aligned} \int_0^\infty K_{\alpha}\left( r\right) \mathrm{d}r = \begin{cases} 1, & 0<\alpha \leq 1 \\ 1-\frac 2\alpha,&1<\alpha \leq2 \end{cases}\, . \end{aligned}$$

Remark 1.2

Corollary 1.1 deserves a comment on its meaning in applications. In the earlier works [9, 10], the authors explained and gave illustrative examples for the formula (1.78). Therein, the first term on the right-hand side is negative and, by sign inversion, we get complete monotonicity. We could call such behavior completely monotone from below. For t tending to infinity, it goes to zero slowly, namely, like a power of t with negative exponent. This can be shown with the aid of the well-known Watson lemma (see, e.g., [2]). However, the second term oscillates, but with exponentially decaying amplitude. So, clearly, we have e α(0+) = 1 and then a superposition of a negative function tending slowly to zero and a cosine-like oscillation with rapidly decaying amplitude. As a consequence, e α(t) has only finitely many zeros, a special type of oscillation (see the discussions and illustrations in the aforementioned works [9, 10]).

It is important to note also that, for the function e α,β(t), one has the same qualitative behavior by following the same reasons.

Definition 1.1 ([38])

A given function f : [0, ∞) → [0, ∞) is said to be completely monotone if f is continuous on [0, ∞), infinitely differentiable on (0, ∞), and satisfies (−1)^n f^{(n)}(x) ≥ 0 for x > 0 and n = 0, 1, 2, …. According to the Bernstein characterization theorem, the completely monotone functions appear as Laplace transforms of non-negative locally integrable functions K(t), t > 0, called spectral functions, for which \(f(s) = \int _0^\infty K(t)e^{-st}\, dt\).

As was shown above, the function \(e^\gamma _{\alpha ,\beta }(t)\) is completely monotone whenever α ∈ (0, 1], 0 < αγ ≤ β ≤ 1 [46], and therefore, by the Bernstein theorem [38], the spectral function \(K_{\alpha ,\beta }^\gamma (r)\) is non-negative for the same range of the parameters. Furthermore, the following results hold true.

Theorem 1.2 ([31])

One has that

$$\displaystyle \begin{aligned} \int_0^\infty K_{\alpha,1}^\gamma(r) \, \mathit{\mbox{d}}r = \begin{cases} 1, & \alpha \in (0,1],\: \gamma > 0, \\ 1-\frac{2\left( -1\right)^{n-1}}{\alpha^{n}\left( n-1\right) !} \sum_{l=0}^{n-1}\frac{\left( 1-n\right) _{l}c_{l}}{\left( n\left( \alpha -1\right) +1\right) _{l}}, &\alpha \in \left( 1,2\right], \: \gamma =n\in \mathbb{N}. \end{cases} \end{aligned} $$
(1.79)

Proof

By letting t → 0+, one obtains

$$\displaystyle \begin{aligned} 1 = \lim_{t\to 0^+} e_{\alpha,1}^{\gamma}(t) = \int_0^\infty K_{\alpha,1}^\gamma(r) \, \mbox{d}r, \qquad \alpha \in (0,1],\: \gamma > 0, \end{aligned} $$
(1.80)

and

$$\displaystyle \begin{aligned} 1 = \int_0^\infty K_{\alpha,1}^\gamma(r) \, \mbox{d}r +\frac{2\left( -1\right)^{n-1}}{\alpha^{n}\left( n-1\right) !}\sum_{l=0}^{n-1}\frac{\left( 1-n\right) _{l}c_{l}}{\left( n\left(\alpha -1\right) +1\right) _{l}}, \quad \alpha \in \left( 1,2\right],\: \gamma =n\in \mathbb{N}, \end{aligned} $$
(1.81)

where c l are coefficients given by (1.67). From this, the claim easily follows.

The kernel K α(r) has been studied in Ref. [10], and the general spectral function \(K_{\alpha ,\beta }^\gamma (r)\) has been extensively analyzed in Ref. [24].

One concludes by emphasizing that, if α ∈ (0, 1], 0 < αγ ≤ 1, r > 0, the kernel

$$\displaystyle \begin{aligned} K_{\alpha,1}^\gamma(r)= \frac{r^{\alpha \gamma -1}}{\pi} \frac{\sin \left( \gamma \arctan \left( \frac{r^\alpha \sin (\pi \alpha)}{r^\alpha \cos (\pi \alpha)+1} \right) + \pi (1-\alpha \gamma) \right)}{\left( r^{2\alpha} + 2r^\alpha\cos (\pi \alpha) +1 \right)^{\gamma/2}}, \end{aligned} $$
(1.82)

is the density of a probability measure concentrated on the positive real line. Graphical representation of \(K_{\alpha ,1}^\gamma (r)\) is given in Figs. 1.6 and 1.7. For additional graphical representations of the function (1.82), we refer to Ref. [31].
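
The normalization in (1.79) for α ∈ (0, 1] is easy to confirm numerically; the short sketch below (assuming SciPy is available, with the illustrative choice α = 0.5, γ = 2) integrates the kernel (1.82) and returns a value close to 1.

```python
import math
from scipy.integrate import quad

def kernel(alpha, gamma, r):
    """Kernel (1.82), i.e., K^gamma_{alpha,1}(r)."""
    phase = (gamma * math.atan(r ** alpha * math.sin(math.pi * alpha)
                               / (r ** alpha * math.cos(math.pi * alpha) + 1))
             + math.pi * (1 - alpha * gamma))
    denom = (r ** (2 * alpha) + 2 * r ** alpha * math.cos(math.pi * alpha) + 1) ** (gamma / 2)
    return r ** (alpha * gamma - 1) / math.pi * math.sin(phase) / denom

val, _ = quad(lambda r: kernel(0.5, 2.0, r), 0, math.inf)
print(val)   # approximately 1, in agreement with (1.79)
```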

Fig. 1.6

Graphical representation of the function (1.82) for α = 0.5 and different values of γ (blue, red, green, and pink lines); (a) linear scale, (b) log-log scale

Fig. 1.7

Graphical representation of the function (1.82) for γ = 2, and α = 0.1 (blue line), α = 0.2 (red line), α = 0.3 (green line), α = 0.4 (pink line); (a) linear scale, (b) log-log scale