3.1 Introduction

In the previous chapter we addressed the problem of defining the fractional derivative and proposed the use of the Grünwald–Letnikov derivative, in particular the forward and backward derivatives. These choices were motivated by five main reasons; they:

  • do not need superfluous derivative computations,

  • do not insert unwanted initial conditions,

  • are more flexible,

  • allow sequential computations,

  • are more general, in the sense that they can be applied to a large class of functions.

We also presented the Liouville derivatives, which we deduced from the convolution property of the Laplace transform.

However, there are other integral representations, mainly when working in a complex setting. It is well known that the formulation in the complex plane is given by the generalised Cauchy derivative. So we need a coherent mathematical argument connecting the GL formulation and the generalised Cauchy derivative. We are going to present it.

In facing this problem, we take as starting point the definitions of direct and reverse fractional differences and present their integral representations. From these representations, and using the asymptotic properties of the Gamma function, we will obtain the generalised Cauchy integral as a unified formulation for derivatives of any order in the complex plane. As we will see:

The generalised Cauchy derivative of analytic functions is equal to the Grünwald–Letnikov fractional derivative.

When trying to compute the Cauchy integral using the Hankel contour, we conclude that:

  • The integral has two terms: one corresponds to a derivative and the other to a primitive.

  • The exact computation leads to a regularised integral, generalising the well-known concept of pseudo-function, but without rejecting any infinite part.

  • The definition implies causality.

The forward and backward derivatives emerge again as very special cases. We will study them for functions with Laplace transforms. This leads us to obtain once again the causal and anti-causal fractional linear differintegrators, both with transfer function equal to \( s^{\alpha } \), α ∈ R, in agreement with the mathematical development presented in Chap. 2.

3.2 Integral Representations for the Differences

In Chap. 2, we presented the general descriptions of fractional differences, based on a study by Diaz and Osler [1]. They proposed an integral formulation for the differences and conjectured about the possibility of using it to define a fractional derivative. This problem was also discussed in a round table held at the International Conference on “Transform Methods & Special Functions”, Varna’96, as stated by Kiryakova [2]. The validity of the conjecture was proved [3, 4] and used to obtain the Cauchy integrals from the differences and simultaneously generalise them to the fractional case. Those integral formulations for the fractional differences are presented in the following. We start with the integer order case.

3.2.1 Positive Integer Order

We return to Sect. 2.2.1 and recover the formulae for the differences presented there. Consider first the positive integer order case, (2.11) and (2.12). Assume that f(z) is analytic inside and on a closed integration path that includes the points t = z − kh in the direct case and t = z + kh in the corresponding reverse case, with k = 0, 1, …, N (see Fig. 3.1) and Re(h) > 0.

Fig. 3.1 Integration paths and poles for the integral representation of integer order differences

The results stated in (2.11) and (2.12) can be interpreted in terms of the residue theorem. In fact they can be considered as \( {\frac{1}{2\pi j}}\sum {R_{i} } \), where R_i, i = 1, 2, …, are the residues in the computation of the integral of a function with poles at t = z − kh and t = z + kh, k = 0, 1, 2, …. As can be seen by direct verification, we have:

$$ \sum\limits_{k = 0}^{N} {( - 1)^{k} \left( {\begin{array}{*{20}c} N \\ k \\ \end{array} } \right)f(z - kh) = } {\frac{N!}{2\pi jh}}\int\limits_{{C_{d} }} {{\frac{f(w)}{{\prod\nolimits_{k = 0}^{N} {\left( {{\frac{w - z}{h}} + k} \right)} }}}{\text{d}}w} $$
(3.1)

and

$$ \sum\limits_{k = 0}^{N} {( - 1)^{k} \left( {\begin{array}{*{20}c} N \\ k \\ \end{array} } \right)f(z + kh)} = {\frac{N!}{ - 2\pi jh}}\int\limits_{{C_{r} }} {{\frac{f(w)}{{\prod\nolimits_{k = 0}^{N} {\left( {{\frac{z - w}{h}} + k} \right)} }}}{\text{d}}w} $$
(3.2)

We must remark that the binomial coefficients appear naturally when computing the residues. These formulations are more general than those proposed by Diaz and Osler, because they considered only the h = 1 case.
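As a numerical sanity check of (3.1), one can evaluate the contour integral with the trapezoidal rule on a circle enclosing the poles w = z − kh, k = 0, …, N, and compare it with the finite-difference sum on the left-hand side. The sketch below is only illustrative: the function f, the values of N, h and z, and the contour radius are arbitrary choices (f is taken entire, so any sufficiently large circle works).

```python
import cmath
import math

# Left-hand side of (3.1): the N-th order direct difference of f at z
def direct_difference(f, z, h, N):
    return sum((-1)**k * math.comb(N, k) * f(z - k*h) for k in range(N + 1))

# Right-hand side of (3.1): N!/(2*pi*j*h) times the contour integral of
# f(w) / prod_{k=0}^{N} ((w - z)/h + k), evaluated with the trapezoidal
# rule on a circle that encloses all the poles w = z - k*h, k = 0..N.
def contour_integral(f, z, h, N, radius=2.0, M=2048):
    center = z - N*h/2                      # circle centred amid the poles
    total = 0j
    for m in range(M):
        theta = 2*math.pi*m/M
        w = center + radius*cmath.exp(1j*theta)
        dw = 1j*radius*cmath.exp(1j*theta)*(2*math.pi/M)
        denom = 1.0
        for k in range(N + 1):
            denom *= (w - z)/h + k
        total += f(w)/denom*dw
    return math.factorial(N)/(2j*math.pi*h)*total

f = cmath.exp                               # arbitrary entire test function
lhs = direct_difference(f, 0.0, 0.2, 3)
rhs = contour_integral(f, 0.0, 0.2, 3)
print(lhs, rhs)
```

Since the integrand is analytic in a neighbourhood of the circle, the periodic trapezoidal rule converges geometrically, so the two values agree to high accuracy.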

The product in the denominator of the above formulae is called a shifted factorial and is usually represented by the Pochhammer symbol. With it we can express the differences in the following integral formulations:

$$ \Updelta_{d}^{N} f(z) = {\frac{N!}{2\pi jh}}\int\limits_{{C_{d} }} {{\frac{f(w)}{{\left( {{\frac{w - z}{h}}} \right)_{N + 1} }}}{\text{d}}w} $$
(3.3)

and

$$ \Updelta_{r}^{N} f(z) = {\frac{{( - 1)^{N + 1} N!}}{2\pi jh}}\int\limits_{{C_{r} }} {{\frac{f(w)}{{\left( {{\frac{z - w}{h}}} \right)_{N + 1} }}}{\text{d}}w} $$
(3.4)

Using the relation between the Pochhammer symbol and the Gamma function,

$$ \Upgamma (z + n) = (z)_{n} \Upgamma (z) $$
(3.5)

we can write:

$$ \Updelta_{d}^{N} f(z) = {\frac{N!}{2\pi jh}}\int\limits_{{C_{d} }} {f(w){\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + N + 1} \right)}}}{\text{d}}w} $$
(3.6)

and

$$ \Updelta_{r}^{N} f(z) = {\frac{{( - 1)^{N + 1} N!}}{2\pi jh}}\int\limits_{{C_{r} }} {f(w){\frac{{\Upgamma \left( {{\frac{z - w}{h}}} \right)}}{{\Upgamma \left( {{\frac{z - w}{h}} + N + 1} \right)}}}{\text{d}}w} $$
(3.7)

This is correct and coherent with the difference definitions, because the Gamma function Γ(z) has poles at the non-positive integers z = −n, with corresponding residues equal to (−1)^n/n!. Although both Gamma functions have infinitely many poles, outside the contour they cancel out and the integrand is analytic. We should also remark that the direct and reverse differences are not equal.
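Relation (3.5) is easy to check numerically. A minimal sketch (the values of z and n below are arbitrary) computing the shifted factorial as a plain product:

```python
import math

def pochhammer(z, n):
    # shifted factorial (z)_n = z (z+1) ... (z+n-1)
    result = 1.0
    for i in range(n):
        result *= z + i
    return result

z, n = 2.7, 5                       # arbitrary test values
lhs = math.gamma(z + n)             # Γ(z + n)
rhs = pochhammer(z, n) * math.gamma(z)   # (z)_n Γ(z)
print(lhs, rhs)
```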

3.2.2 Fractional Order

Consider the fractional order differences defined in (2.13) and (2.14). It is not hard to see that we are in a situation similar to the positive integer case, except that there are now infinitely many poles. So we have to use an integration path that encircles all of them. This can be done with a U-shaped contour like those shown in Fig. 3.2. We use (3.6) and (3.7) with suitable adaptations, obtaining:

$$ \Updelta_{d}^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {f(w){\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + 1} \right)}}}{\text{d}}w} $$
(3.8)

and

$$ \Updelta_{r}^{\alpha } f(z) = {\frac{{( - 1)^{\alpha + 1} \Upgamma (\alpha + 1)}}{2\pi jh}}\int\limits_{{C_{r} }} {f(w){\frac{{\Upgamma \left( {{\frac{z - w}{h}}} \right)}}{{\Upgamma \left( {{\frac{z - w}{h}} + \alpha + 1} \right)}}}{\text{d}}w} $$
(3.9)
Fig. 3.2 Integration paths and poles for the integral representation of fractional order differences

Remark that one turns into the other under the substitution h → −h. We can use the residue theorem to confirm the correctness of the above formulae.

3.2.3 Two Properties

In the following, we shall be concerned with the fractional order case. We will consider the direct case. The other is similar.

3.2.3.1 Repeated Differencing

We are going to study the effect of a sequential application of the difference operator Δ. We have:

$$ \Updelta_{d}^{\beta } [\Updelta_{d}^{\alpha } f(z)] = {\frac{\Upgamma (\beta + 1)\Upgamma (\alpha + 1)}{{(2\pi jh)^{2} }}}\int\limits_{{C_{d} }} {\int\limits_{{C_{d} }} {f(w)\,{\frac{{\Upgamma \left( {{\frac{w - s}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - s}{h}} + \alpha + 1} \right)}}}\,{\text{d}}w\,{\frac{{\Upgamma \left( {{\frac{s - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{s - z}{h}} + \beta + 1} \right)}}}{\text{d}}s} } $$
(3.10)

Permuting the integrations, we obtain

$$ \Updelta_{d}^{\beta } [\Updelta_{d}^{\alpha } f(z)] = {\frac{\Upgamma (\beta + 1)\Upgamma (\alpha + 1)}{{(2\pi jh)^{2} }}}\int\limits_{{C_{d} }} {f(w)\int\limits_{{C_{d} }} {{\frac{{\Upgamma \left( {{\frac{w - s}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - s}{h}} + \alpha + 1} \right)}}}{\frac{{\Upgamma \left( {{\frac{s - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{s - z}{h}} + \beta + 1} \right)}}}{\text{d}}s\,{\text{d}}w} } $$
(3.11)

By the residue theorem

$$ \begin{gathered} {\frac{\Upgamma (\beta + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {{\frac{{\Upgamma \left( {{\frac{w - s}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - s}{h}} + \alpha + 1} \right)}}}{\frac{{\Upgamma \left( {{\frac{s - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{s - z}{h}} + \beta + 1} \right)}}}{\text{d}}s} \hfill \\ = \frac{1}{h}\sum\limits_{n = 0}^{\infty } {{\frac{{( - 1)^{n} }}{n!}}\;{\frac{{\Upgamma (\beta + 1) \cdot \Upgamma \left( {{\frac{w - z}{h}} + n} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + 1 + n} \right)\Upgamma (\beta - n + 1)}}}} \hfill \\ = \frac{1}{h}{\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + 1} \right)}}}\sum\limits_{n = 0}^{\infty } {{\frac{{\left( {{\frac{w - z}{h}}} \right)_{n} ( - \beta )_{n} }}{{\left( {{\frac{w - z}{h}} + \alpha + 1} \right)_{n} }}}} \hfill \\ = \frac{1}{h}{\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + 1} \right)}}}{}_{2}F_{1} \left( {{\frac{w - z}{h}}; - \beta ;{\frac{w - z}{h}} + \alpha + 1;1} \right) \hfill \\ \end{gathered} $$
(3.12)

where \( {}_{2}F_{1} \) is the Gauss hypergeometric function. If α + β + 1 > 0, Gauss's summation theorem evaluates the series at unit argument, and we have:

$$ {\frac{\Upgamma (\beta + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {{\frac{{\Upgamma \left( {{\frac{w - s}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - s}{h}} + \alpha + 1} \right)}}}{\frac{{\Upgamma \left( {{\frac{s - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{s - z}{h}} + \beta + 1} \right)}}}} {\text{d}}s = \frac{1}{h}{\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)\Upgamma (\alpha + \beta + 1)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + \beta + 1} \right)\Upgamma (\alpha + 1)}}} $$
(3.13)

leading to the conclusion that:

$$ \Updelta_{d}^{\beta } [\Updelta_{d}^{\alpha } f(z)] = {\frac{\Upgamma (\alpha + \beta + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {f(w){\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} + \alpha + \beta + 1} \right)}}}{\text{d}}w} $$
(3.14)

and

$$ \Updelta_{d}^{\beta } [\Updelta_{d}^{\alpha } f(z)] = \Updelta_{d}^{\alpha + \beta } f(z) $$
(3.15)

provided that α + β + 1 > 0. It is not difficult to see that the above operation is commutative. The condition α + β + 1 > 0 is restrictive, since it may happen that β ≤ −α − 1. However, we must remark that (3.8) and (3.9) are valid for every α ∈ R. The same happens with α + β in (3.15). This means that we can use (3.15) for every pair (α, β) ∈ R². It should be stressed that, at least in principle, we must not mix the two differences, because they use different integration paths. If we decide to do it anyway, we have to use a doubly opened integration path. The result does not seem to have any interest here. Later we will use such a path when dealing with the centred derivatives.
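In terms of the Grünwald–Letnikov coefficients of Chap. 2, property (3.15) amounts to the Vandermonde convolution for generalised binomial coefficients: the coefficient of f(z − jh) in the composed difference is (−1)^j Σ_{m=0}^{j} C(β, m) C(α, j − m) = (−1)^j C(α + β, j). A numerical sketch of this identity, with arbitrarily chosen fractional orders:

```python
import math

def frac_binom(a, k):
    # generalised binomial coefficient C(a, k) = a(a-1)...(a-k+1)/k!
    result = 1.0
    for i in range(k):
        result *= (a - i) / (i + 1)
    return result

alpha, beta = 0.5, 0.3              # arbitrary fractional orders
for j in range(10):
    conv = sum(frac_binom(beta, m) * frac_binom(alpha, j - m)
               for m in range(j + 1))
    assert abs(conv - frac_binom(alpha + beta, j)) < 1e-12
print("Vandermonde convolution verified for j = 0..9")
```

Setting β = −α makes the right-hand side C(0, j), which vanishes for j > 0: this is the coefficient form of the inversion property (3.16).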

3.2.3.2 Inversion

Putting β = −α in (3.15), we obtain:

$$ \begin{aligned} \Updelta_{d}^{ - \alpha } [\Updelta_{d}^{\alpha } f(z)] & = \Updelta_{d}^{\alpha } [\Updelta_{d}^{ - \alpha } f(z)] \\ & = {\frac{1}{2\pi j}}\int\limits_{{C_{d} }} {f(w){\frac{1}{w - z}}{\text{d}}w = f(z)} \\ \end{aligned} $$
(3.16)

as we would expect. So the operation of differencing is invertible. This means that we can write:

$$ f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {\Updelta_{d}^{\alpha } f(w){\frac{{\Upgamma \left( {{\frac{w - z}{h}}} \right)}}{{\Upgamma \left( {{\frac{w - z}{h}} - \alpha + 1} \right)}}}{\text{d}}w} $$
(3.17)

in the direct case. In the reverse case, we will have:

$$ f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi jh}}\int\limits_{{C_{r} }} {\Updelta_{r}^{\alpha } f(w){\frac{{\Upgamma \left( {{\frac{z - w}{h}}} \right)}}{{\Upgamma \left( {{\frac{z - w}{h}} - \alpha + 1} \right)}}}{\text{d}}w} $$
(3.18)

according to (3.9).

3.3 Obtaining the Generalized Cauchy Formula

The ratio of two gamma functions \( {\frac{\Upgamma (s + a)}{\Upgamma (s + b)}} \) has an interesting expansion [5]:

$$ {\frac{\Upgamma (s + a)}{\Upgamma (s + b)}} = s^{a - b} \left[ {1 + \sum\limits_{k = 1}^{N} {C_{k} s^{ - k} } + O(s^{ - N - 1} )} \right] $$
(3.19)

as |s| → ∞, uniformly in every sector that excludes the negative real half-axis. The coefficients C_k in the series can be expressed in terms of Bernoulli polynomials; their explicit form is not important here.
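The leading behaviour in (3.19) is easy to observe numerically with log-gamma arithmetic: the ratio Γ(s + a)/Γ(s + b) divided by s^{a−b} tends to 1, with the error shrinking like 1/s. A sketch (the values of a and b are arbitrary):

```python
import math

def ratio_error(s, a, b):
    # | Gamma(s+a)/Gamma(s+b) / s^(a-b)  -  1 |
    log_ratio = math.lgamma(s + a) - math.lgamma(s + b)
    return abs(math.exp(log_ratio - (a - b) * math.log(s)) - 1.0)

a, b = 0.3, 0.9                     # arbitrary parameters
errors = [ratio_error(s, a, b) for s in (10.0, 100.0, 1000.0)]
print(errors)  # decreases roughly by a factor of 10 at each step
```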

Consider (3.8) and (3.9) again. Let |h| < ε, where ε is a small positive real number. This allows us to write:

$$ \Updelta_{d}^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi jh}}\int\limits_{{C_{d} }} {f(w)} {\frac{1}{{\left( {{\frac{w - z}{h}}} \right)^{\alpha + 1} }}}{\text{d}}w + g_{1} (h) $$
(3.20)

and

$$ \Updelta_{r}^{\alpha } f(z) = {\frac{{( - 1)^{\alpha + 1} \Upgamma (\alpha + 1)}}{2\pi jh}}\int\limits_{{C_{r} }} {f(w){\frac{1}{{\left( {{\frac{z - w}{h}}} \right)^{\alpha + 1} }}}{\text{d}}w + g_{2} (h)} $$
(3.21)

where C_d and C_r are the contours represented in Fig. 3.2. The g_1(h) and g_2(h) terms are proportional to \( h^{\alpha + 2} \). So the fractional incremental ratios are, apart from terms proportional to h, given by:

$$ {\frac{{\Updelta_{d}^{\alpha } f(z)}}{{h^{\alpha } }}} = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{{C_{d} }} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} $$
(3.22)

and

$$ {\frac{{\Updelta_{r}^{\alpha } f(z)}}{{h^{\alpha } }}} = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{{C_{r} }} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} $$
(3.23)

Letting h → 0, we obtain the direct and reverse generalised Cauchy derivatives:

$$ D_{d}^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{{C_{d} }} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} $$
(3.24)

and

$$ D_{r}^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{{C_{r} }} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} $$
(3.25)

If α = N, both derivatives are equal and coincide with the usual Cauchy definition. In the fractional case we have different solutions, since we are using different integration paths. Remark that (3.24) and (3.25) are formally the same: they differ only in the integration path. This means that we can use a general procedure, as we did in Chap. 2.

Definition 3.1

We define the generalised Cauchy derivative by

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{{C_{\theta } }} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} $$
(3.26)

where \( C_{\theta } \) is any U-shaped path encircling the branch cut line and making an angle θ + π with the positive real half-axis. As we will see next, the particular cases θ = 0 and θ = π lead to new ways of representing the forward and backward derivatives.
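For integer α = N the path can be closed and (3.26) reduces to the classical Cauchy formula, which is cheap to verify numerically: the trapezoidal rule on a circle around z reproduces the N-th derivative of an analytic f essentially to machine precision. The sketch below checks only this integer-order consistency (f, z and N are arbitrary choices); the genuinely fractional case needs the U-shaped contour of Fig. 3.2 and is not as easy to discretise.

```python
import cmath
import math

def cauchy_derivative(f, z, N, radius=1.0, M=512):
    # (N!/(2*pi*j)) * integral of f(w)/(w - z)^(N+1) dw over a circle
    # around z, evaluated with the (spectrally accurate) trapezoidal rule
    total = 0j
    for m in range(M):
        theta = 2 * math.pi * m / M
        w = z + radius * cmath.exp(1j * theta)
        dw = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / M)
        total += f(w) / (w - z) ** (N + 1) * dw
    return math.factorial(N) / (2j * math.pi) * total

# second derivative of exp at z = 0.3 must again be exp(0.3)
d2 = cauchy_derivative(cmath.exp, 0.3, 2)
print(d2.real, math.exp(0.3))
```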

3.4 Analysis of Cauchy Formula

3.4.1 General Formulation

Consider the generalised Cauchy formula (3.26) and rewrite it in a more convenient form, obtained by a simple translation:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{C} {f(w + z){\frac{1}{{w^{\alpha + 1} }}}{\text{d}}w} $$
(3.27)

Here we choose C to be a special integration path: the Hankel contour represented in Fig. 3.3. It consists of two straight lines and a small circle, and we assume that it surrounds the selected branch cut line, described by x·e^{jθ}, with x ∈ R⁺ and θ ∈ [0, 2π). The circle has radius ρ, small enough to stay inside the region of analyticity of f(z). If α is a negative integer, the integral along the circle is zero and we are led to the well-known repeated integration formula [6–8], as we will see later. In the general α case we need both terms. Let us decompose the above integral using the Hankel contour. To shorten the derivation, we assume from the start that the two straight lines are infinitely close. We have, then:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\left[ {\int\limits_{{C_{1} }} { + \int\limits_{{C_{2} }} + \int\limits_{{C_{3} }} {} } } \right]f(w + z){\frac{1}{{w^{\alpha + 1} }}}{\text{d}}w $$
(3.28)
Fig. 3.3 The Hankel contour used in computing the derivative defined in Eq. 3.27

Over C_1 we have w = x·e^{j(θ−π)}, while over C_3 we have w = x·e^{j(θ+π)}, with x ∈ R⁺; over C_2 we have w = ρe^{jφ}, with φ ∈ (θ − π, θ + π). We can write, at last:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) & = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\left[ {\int\limits_{\infty }^{\rho } {f(x \cdot {\text{e}}^{j(\theta - \pi )} + z){\frac{{{\text{e}}^{ - j\alpha (\theta - \pi )} }}{{x^{\alpha + 1} }}}{\text{d}}x} + \int\limits_{\rho }^{\infty } {f(x \cdot {\text{e}}^{j(\theta + \pi )} + z){\frac{{{\text{e}}^{ - j\alpha (\theta + \pi )} }}{{x^{\alpha + 1} }}}} {\text{d}}x} \right] \\ & \quad + {\frac{\Upgamma (\alpha + 1)}{2\pi j}}{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } + z){\text{e}}^{ - j\alpha \varphi } j\,{\text{d}}\varphi } \\ \end{aligned} $$
(3.29)

For the first term, we have:

$$ \begin{gathered} \int\limits_{\infty }^{\rho } {f(x \cdot {\text{e}}^{j(\theta - \pi )} + z){\frac{{{\text{e}}^{ - j\alpha (\theta - \pi )} }}{{x^{\alpha + 1} }}}} {\text{d}}x + \int\limits_{\rho }^{\infty } {f(x \cdot {\text{e}}^{j(\theta + \pi)} + z){\frac{{{\text{e}}^{{ - j\alpha \left( {\theta + \pi } \right)}} }}{{x^{\alpha + 1} }}}{\text{d}}x} \hfill \\ = [ - {\text{e}}^{ - j\alpha (\theta - \pi )} + {\text{e}}^{ - j\alpha (\theta + \pi )} ]\int\limits_{\rho }^{\infty } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{1}{{x^{\alpha + 1} }}}{\text{d}}x} \hfill \\ = - {\text{e}}^{ - j\alpha \theta } \cdot [{\text{e}}^{j\pi \alpha } - {\text{e}}^{ - j\pi \alpha } ]\int\limits_{\rho }^{\infty } {f( - x \cdot {\text{e}}^{j\theta } + z)} {\frac{1}{{x^{\alpha + 1} }}}{\text{d}}x \hfill \\ = - {\text{e}}^{ - j\alpha \theta } \cdot 2j \cdot { \sin }(\alpha \pi )\int\limits_{\rho }^{\infty } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{1}{{x^{\alpha + 1} }}}{\text{d}}x} \hfill \\ \end{gathered} $$
(3.30)

where we used the equality f(x·e^{j(θ−π)} + z) = f(x·e^{j(θ+π)} + z) = f(−x·e^{jθ} + z), valid because f(z) is analytic.

For the second term, we begin by noting that the analyticity of the function f(z) allows us to write:

$$ f( - x \cdot {\text{e}}^{j\theta } + z) = \sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}( - 1)^{n} x^{n} {\text{e}}^{jn\theta } } $$
(3.31)

for x < r, where r ∈ R⁺ is the radius of convergence. We have, then:

$$ j{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } } + z){\text{e}}^{ - j\alpha \varphi } \,{\text{d}}\varphi = j{\frac{1}{{\rho^{\alpha } }}}\sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}\rho^{n} } \int\limits_{\theta - \pi }^{\theta + \pi } {{\text{e}}^{j(n - \alpha )\varphi } {\text{d}}\varphi } $$
(3.32)

Performing the integration, we have:

$$ \begin{aligned} j{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } } + z){\text{e}}^{ - j\alpha \varphi } {\text{d}}\varphi & = - j\sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}\rho^{n - \alpha } {\text{e}}^{j(n - \alpha )\theta } {\frac{{2 \cdot { \sin }[(n - \alpha )\pi ]}}{(n - \alpha )}}} \\ & = 2j \cdot {\text{e}}^{ - j\alpha \theta } { \sin }(\alpha \pi )\sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}\;{\frac{{{\text{e}}^{jn\theta } ( - 1)^{n} \rho^{n - \alpha } }}{(n - \alpha )}}} \\ \end{aligned} $$
(3.33)

But the summation in the last expression can be written in another interesting format:

$$ \sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}{\frac{{{\text{e}}^{jn\theta } ( - 1)^{n} \rho^{n - \alpha } }}{(n - \alpha )}}} = \left[ { - \sum\limits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}( - 1)^{n} {\text{e}}^{jn\theta } \int\limits_{\rho }^{\infty } {x^{n - \alpha - 1} {\text{d}}x + \sum\limits_{N + 1}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}{\frac{{( - 1)^{n} {\text{e}}^{jn\theta } \rho^{n - \alpha } }}{(n - \alpha )}}} } } } \right] $$

where N = ⌊α⌋. Substituting this in (3.33) and joining it with (3.30), we can write:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) & = K \cdot \int\limits_{\rho }^{\infty } {{\frac{{\left[ {f( - x \cdot {\text{e}}^{j\theta } + z) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}} ( - 1)^{n} {\text{e}}^{jn\theta } x^{n} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x \\ & \quad - K\sum\nolimits_{N + 1}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}} ( - 1)^{n} {\text{e}}^{jn\theta } {\frac{{\rho^{n - \alpha } }}{(n - \alpha )}} \\ \end{aligned} $$
(3.34)

If α < 0, we make the inner summation equal to zero. Using the reflection formula of the Gamma function

$$ {\frac{1}{\Upgamma (\beta )\Upgamma (1 - \beta )}} = {\frac{{{ \sin }(\pi \beta )}}{\pi }} $$

we obtain for K

$$ K = - {\frac{{\Upgamma (\alpha + 1){\text{e}}^{ - j\theta \alpha } }}{\pi }}{ \sin }(\alpha \pi ) = {\frac{{{\text{e}}^{ - j\theta \alpha } }}{\Upgamma ( - \alpha )}} $$
(3.35)

Now let ρ go to zero in (3.34). The second term on the right-hand side goes to zero and we obtain:

$$ D_{\theta }^{\alpha } f(z) = {\frac{{{\text{e}}^{ - j\theta \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{0}^{\infty } {{\frac{{\left[ {f( - x \cdot {\text{e}}^{j\theta } + z) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}( - 1)^{n} {\text{e}}^{jn\theta } x^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.36)

which is valid for any α ∈ R.

It is interesting to remark that (3.36) is nothing other than a generalisation of the “pseudo-function” notion [9, 10], valid for an analytic function in a non-compact region of the complex plane. On the other hand, we did not have to reject any infinite part, as Hadamard did. Relation (3.36) represents a regularised fractional derivative that has some similarities with the Marchaud derivative [5]: for 0 < α < 1, they are equal.
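As a concrete check of (3.36), take θ = 0 and f(z) = e^{az} with a > 0 and 0 < α < 1 (so N = 0). The regularised integral then reads (e^{az}/Γ(−α)) ∫₀^∞ (e^{−ax} − 1) x^{−α−1} dx, which should equal a^α e^{az}. The sketch below evaluates the integral with a midpoint rule after the substitution x = u² (which removes the integrable singularity at the origin), plus an analytic estimate of the slowly decaying tail; all numerical parameters are arbitrary choices.

```python
import math

def regularized_derivative_exp(a, alpha, z, U=8.0, steps=200_000):
    """Forward regularised derivative (3.36) of f(t) = exp(a*t), theta = 0,
    0 < alpha < 1 (so N = 0), via the substitution x = u**2."""
    du = U / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du          # midpoint rule on (0, U]
        # expm1 avoids cancellation in exp(-a*u*u) - 1 near the origin
        total += 2.0 * math.expm1(-a * u * u) * u ** (-2 * alpha - 1) * du
    # analytic tail beyond x = U**2: the -1 part dominates, exp part negligible
    total += -(U * U) ** (-alpha) / alpha
    return math.exp(a * z) / math.gamma(-alpha) * total

a, alpha, z = 1.0, 0.5, 0.0         # arbitrary test values (a > 0, 0 < alpha < 1)
approx = regularized_derivative_exp(a, alpha, z)
exact = a ** alpha * math.exp(a * z)    # a^alpha * e^(a z)
print(approx, exact)
```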

If one puts w = x·e^{jθ}, we can write:

$$ D_{\theta }^{\alpha } f(z) = {\frac{1}{\Upgamma ( - \alpha )}}{\text{e}}^{ - j\theta \alpha } \int\limits_{{\gamma_{\theta } }} {{\frac{{\left[ {f(z - w) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)( - 1)^{n} }}{n!}}w^{n} } } \right]}}{{w^{\alpha + 1} }}}} {\text{d}}w $$
(3.37)

where γ_θ is a half straight line starting at w = 0 and making an angle θ with the positive real axis. As we can conclude, there are infinitely many ways of computing the derivative of a given function, one for each choice of branch cut line. However, this does not mean that we have infinitely many different derivatives: it is not hard to see that all the branch cut lines belonging to a given region of analyticity of the function are equivalent and lead to the same result, unless the integral diverges because the function increases without bound.

3.5 Examples

3.5.1 The Exponential Function

To illustrate the previous assertions we are going to consider the case of the exponential function.

Let f(z) = e^{az}, with a ∈ R. Inserting it into (3.36), we obtain:

$$ D_{\theta }^{\alpha } f(z) = {\frac{1}{\Upgamma ( - \alpha )}}{\text{e}}^{ - j\theta \alpha } {\text{e}}^{az} \int\limits_{0}^{\infty } {{\frac{{\left[ {{\text{e}}^{{ - ax \cdot {\text{e}}^{j\theta } }} - \sum\nolimits_{0}^{N} {{\frac{{a^{n} }}{n!}}{\text{e}}^{jn\theta } ( - 1)^{n} x^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$

With the change of variable τ = axe^{jθ}, the above equation gives:

$$ D_{\theta }^{\alpha } f(z) = {\frac{1}{\Upgamma ( - \alpha )}}a^{\alpha } {\text{e}}^{az} \int\limits_{0}^{{\infty \cdot a{\text{e}}^{j\theta } }} {{\frac{{\left[ {{\text{e}}^{ - \tau } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau $$
(3.38)

where the integration path is a half straight line making an angle θ with the positive real axis, in agreement with (3.36). The integral in (3.38) is almost the generalised Gamma function definition [11, 12] and is a generalisation of the Euler integral representation of the Gamma function. But this requires integration along the positive real axis. However, the integration can be done along any ray with an angle in the interval [0, π/2) [13]. To obtain convergence of this integral we must have Re(ae^{jθ}) > 0. This means that a must necessarily be positive. This is coherent with what was said in Chap. 2: the forward derivative (|θ| ∈ [0, π/2)) of an exponential exists only if the function behaves like a “right” function, going to zero when z goes to −∞. Returning to the above integral, we can write:

$$ D_{\theta }^{\alpha } f(z) = {\frac{1}{\Upgamma ( - \alpha )}}a^{\alpha } {\text{e}}^{az} \int\limits_{0}^{\infty } {{\frac{{\left[ {{\text{e}}^{ - \tau } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau $$
(3.39)

The integral defines the value of the Gamma function Γ(−α). In fact [11, 12] we have

$$ \Upgamma (z) = \int\limits_{0}^{\infty } {\tau^{z - 1} } \left[ {{\text{e}}^{ - \tau } - \sum\limits_{0}^{N} {{\frac{{( - 1)^{n} }}{n!}}} \tau^{n} } \right]{\text{d}}\tau $$
(3.40)

if we maintain the convention made before: when z > 0 the summation is zero. We obtain then:

$$ D_{\theta }^{\alpha } \left[ {{\text{e}}^{az} } \right] = a^{\alpha } {\text{e}}^{az} $$
(3.41)

as expected. In the particular limiting case a → 0⁺, we obtain D^α 1 = 0 provided that α > 0. If α < 0, the limit is infinite. The Grünwald–Letnikov definition allowed us to reach the same conclusions, as we have seen. Now consider the case a < 0. To obtain convergence in (3.38) we must have θ ∈ [π/2, 3π/2). This means that the exponential must go to zero when z goes to +∞: it is what we called a “left” function. The derivative is also expressed by (3.41), but the branch cut line used to define the power is now a half straight line in the right half complex plane, in particular the positive real half-axis. This is the same problem we found in Sect. 2.7.3. We conclude, then, that (3.41) is the result given by the forward derivative if a > 0 and by the backward derivative if a < 0. This has very important consequences, and we must be careful in using (3.41). At first glance, we could be led to use it to compute the derivatives of functions like sin(z), cos(z), sinh(z) and cosh(z). But keeping the above reasoning in mind, we conclude immediately that those functions do not have finite derivatives for z ∈ C.

In fact they use simultaneously the exponentials e^z and e^{−z}, whose derivatives cannot exist simultaneously, as we have just seen. However, we can conclude that functions expressed by Dirichlet series \( f(t) = \sum_{0}^{\infty } {a_{k} {\text{e}}^{{\lambda_{k} t}} } \), with all the Re(λ_k) positive or all negative, have finite derivatives given by \( f^{(\alpha )} (t) = \sum_{0}^{\infty } {a_{k} (\lambda_{k} )^{\alpha } {\text{e}}^{{\lambda_{k} t}} } \). In particular, functions whose Laplace transform has a region of convergence in the right or left half plane have fractional derivatives.
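Result (3.41) can also be confirmed with the Grünwald–Letnikov forward sum of Chap. 2: for f(t) = e^{at} with a > 0, the sum Σ_k (−1)^k C(α, k) e^{a(t−kh)} / h^α equals e^{at}(1 − e^{−ah})^α / h^α exactly, which tends to a^α e^{at} as h → 0. A rough sketch (step size and truncation point are arbitrary):

```python
import math

def gl_forward_exp(a, t, alpha, h=0.01, K=5000):
    # truncated GL forward derivative of exp(a*t), a > 0;
    # weights w_k = (-1)^k C(alpha, k) via w_k = w_{k-1} * (k - 1 - alpha) / k
    w = 1.0
    total = math.exp(a * t)
    for k in range(1, K):
        w *= (k - 1 - alpha) / k
        total += w * math.exp(a * (t - k * h))
    return total / h ** alpha

a, t, alpha = 1.0, 0.0, 0.5              # arbitrary test values
approx = gl_forward_exp(a, t, alpha)
exact = a ** alpha * math.exp(a * t)     # (3.41)
print(approx, exact)
```

The discrepancy is O(h): halving h halves the error, consistent with the incremental-ratio argument of Sect. 3.3.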

Another interesting case is the cisoid f(t) = e^{jωt}, ω ∈ R⁺. Inserting it into (3.36) again, we obtain:

$$ D_{\theta }^{\alpha } f(t) = {\frac{1}{\Upgamma ( - \alpha )}}{\text{e}}^{ - j\theta \alpha } {\text{e}}^{j\omega t} \int\limits_{0}^{\infty } {{\frac{{\left[ {{\text{e}}^{{j\omega x.{\text{e}}^{j\theta } }} - \sum\nolimits_{0}^{N} {{\frac{{(j\omega )^{n} }}{n!}}{\text{e}}^{jn\theta } x^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.42)

With θ = π/2, jωe^{jθ} = −ω and we easily obtain:

$$ D_{f}^{\alpha } f(t) = (j\omega )^{\alpha } {\text{e}}^{j\omega t} $$

It is not difficult to see that (3.42) remains valid if ω < 0, provided that we remember that the branch cut line is the negative real half-axis. We only have to put θ = −π/2:

$$ D_{f}^{\alpha } f(t) = ( - j\omega )^{\alpha } {\text{e}}^{ - j\omega t} $$

We can conclude then that:

$$ D_{f}^{\alpha } { \cos }(\omega t) = \omega^{\alpha } { \cos }(\omega t + \alpha \pi /2) $$
(3.43)

This procedure corresponds to extending the validity of the forward derivative and agrees with the results we presented in Sect. 2.7.4. For sin(ωt), the procedure is similar, leading to

$$ D_{f}^{\alpha } \,{ \sin }(\omega t) = \omega^{\alpha } \,{ \sin }(\omega t + \alpha \pi /2) $$
(3.44)

When α = 1, we recover the usual formulae. The backward case would lead to the results obtained in Sect. 2.7.3; we will not consider it again.
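Formula (3.43) can be checked against a truncated Grünwald–Letnikov forward sum, Σ_k (−1)^k C(α, k) cos(ω(t − kh)) / h^α. The binomial weights decay only like k^{−α−1}, so many terms are needed and convergence is slow; the sketch below (step size and truncation point are arbitrary) is therefore only a rough numerical confirmation.

```python
import math

def gl_forward_cos(omega, t, alpha, h=0.01, K=100_000):
    # truncated GL forward derivative of cos(omega*t);
    # weights w_k = (-1)^k C(alpha, k) via w_k = w_{k-1} * (k - 1 - alpha) / k
    w = 1.0
    total = math.cos(omega * t)
    for k in range(1, K):
        w *= (k - 1 - alpha) / k
        total += w * math.cos(omega * (t - k * h))
    return total / h ** alpha

omega, t, alpha = 1.0, 1.0, 0.5          # arbitrary test values
approx = gl_forward_cos(omega, t, alpha)
exact = omega ** alpha * math.cos(omega * t + alpha * math.pi / 2)  # (3.43)
print(approx, exact)
```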

3.5.2 The Power Function

Let f(z) = z^β, with β ∈ R. We will show that D^α[z^β], defined for every z ∈ C, does not exist unless β is a positive integer, because the integral in (3.36) is divergent for every θ ∈ [−π, π). This has an important consequence: we cannot compute the derivative of a given function by using its Taylor series and differentiating term by term.

Let us see what happens for non-integer values of β. The branch cut line needed to define the function must be chosen outside the integration region; equivalently, the two branch cut lines cannot intersect. To use (3.36), we compute the successive integer order derivatives of this function, which are given by:

$$ D^{n} z^{\beta } = ( - 1)^{n} ( - \beta )_{n} z^{\beta - n} $$
(3.45)

Now, we have:

$$ D_{\theta }^{\alpha } z^{\beta } = {\frac{{{\text{e}}^{ - j\theta \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{0}^{\infty } {{\frac{{\left[ {( - x \cdot {\text{e}}^{j\theta } + z)^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} z^{\beta - n} }}{n!}}} {\text{e}}^{jn\theta } x^{n} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.46)

With the substitution τ = x·e^{jθ}/z, we obtain:

$$ D_{\theta }^{\alpha } z^{\beta } = {\frac{1}{\Upgamma ( - \alpha )}}z^{\beta - \alpha } \int\limits_{0}^{{\infty \,{\text{e}}^{j\theta } /z}} {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau $$
(3.47)

To simplify the analysis, let us assume that θ = 0 and z ∈ R⁺. We obtain:

$$ D_{f}^{\alpha } z^{\beta } = {\frac{1}{\Upgamma ( - \alpha )}}z^{\beta - \alpha } \int\limits_{0}^{\infty } {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau $$
(3.48)

Let us decompose the integral

$$ \begin{aligned} \int\limits_{0}^{\infty } {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau = & \int\limits_{0}^{1} {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau \\ & + \int\limits_1^{\infty } {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}} \tau^{n} } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau \\ \end{aligned} $$

As shown in Ortigueira [14], the first integral is a generalised version of the Beta function B(−α, β + 1), valid for α ∈ R and β > −1. But the second is divergent. We conclude that the power function defined in C does not have fractional derivatives.

3.5.3 The Derivatives of Real Functions

As we are mainly interested in functions of a real variable, we are going to obtain the formulae suitable for this case. Now we only have two possibilities: θ = 0 or θ = π.

3.5.3.1 θ = 0: Forward Derivative

This corresponds to choosing the real negative half axis as branch cut line. Substituting θ = 0 into (3.36), we have:

$$ D_{f}^{\alpha } f(z) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{\infty } {{\frac{{\left[ {f(z - x) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}} ( - x)^{n} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.49)

As this integral uses the left hand values of the function, we will again call this the forward or direct derivative, in agreement with Sect. 2.3.

3.5.3.2 θ = π: Backward Derivative

This corresponds to choosing the real positive half axis as branch cut line. Substituting θ = π into (3.36) and performing the change x → −x, we have:

$$ D_{b}^{\alpha } f(z) = {\frac{{{\text{e}}^{ - j\pi \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{0}^{\infty } {{\frac{{\left[ {f(x + z) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}} x^{n} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.50)

As this integral uses the right hand values of the function, we will call this the backward or reverse derivative, again in agreement with Sect. 2.3.

3.5.4 Derivatives of Some Causal Functions

We are going to study the causal power function and the causal exponential. Although we could do it using the LT, as we will see in the next section, we are going to do it here using relation (3.49). Let f(t) = t^β u(t). As seen above:

$$ D^{n} t^{\beta } u(t) = ( - 1)^{n} ( - \beta )_{n} t^{\beta - n} u(t) $$

that inserted in (3.49) gives

$$ D_{f}^{\alpha } f(t) = t^{\beta } u(t){\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{t} {{\frac{{\left[ {(1 - x/t)^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} t^{ - n} }}{n!}}( - x)^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$

which is converted into the next expression through the substitution τ = x/t:

$$ D_{f}^{\alpha } t^{\beta } u(t) = t^{\beta - \alpha } u(t){\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{1} {{\frac{{\left[ {(1 - \tau )^{\beta } - \sum\nolimits_{0}^{N} {{\frac{{( - 1)^{n} ( - \beta )_{n} }}{n!}}( - \tau )^{n} } } \right]}}{{\tau^{\alpha + 1} }}}} {\text{d}}\tau . $$

The above integral is a representation of the Beta function B(−α, β + 1), for β > −1 {see [14]}. But

$$ B( - \alpha ,\beta + 1) = {\frac{\Upgamma ( - \alpha )\Upgamma (\beta + 1)}{\Upgamma (\beta - \alpha + 1)}} $$

and then

$$ D_{f}^{\alpha } t^{\beta } = {\frac{\Upgamma (\beta + 1)}{\Upgamma (\beta - \alpha + 1)}}t^{\beta - \alpha } u(t) $$
(3.51)

that coincides with the result obtained in (2.76).
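Result (3.51) can be checked numerically. The sketch below is our own illustration, not part of the original derivation: it compares the closed form with a direct evaluation of the forward Grünwald–Letnikov sum. The helper name `gl_forward`, the step `h` and the test values are arbitrary choices:

```python
import math

def gl_forward(f, t, alpha, h=1e-3):
    """Forward Grünwald-Letnikov derivative of a causal function at t > 0.

    The series is truncated at k = floor(t/h), which is exact here because
    f vanishes for negative arguments."""
    n = int(t / h)
    acc, coef = 0.0, 1.0          # coef = (-1)^k * binom(alpha, k)
    for k in range(n + 1):
        acc += coef * f(t - k * h)
        coef *= (k - alpha) / (k + 1)   # recursion for the next coefficient
    return acc / h ** alpha

alpha, beta, t = 0.5, 2.0, 1.0
numeric = gl_forward(lambda x: x ** beta if x > 0 else 0.0, t, alpha)
exact = math.gamma(beta + 1) / math.gamma(beta - alpha + 1) * t ** (beta - alpha)
print(numeric, exact)
```

The GL sum converges to the Gamma-ratio closed form as h decreases; first-order accuracy in h is typical for this truncation.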

Now let us try the exponential function, f(t) = e^{at} u(t). Inserting it in (3.49) we obtain

$$ D_{f}^{\alpha } {\text{e}}^{at} u(t) = - e^{at} u(t){\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{0}^{t} {{\frac{{\left[ {\sum\nolimits_{N + 1}^{\infty } {{\frac{{( - a)^{n} x^{n} }}{n!}}} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x $$

Assuming that the series converges uniformly, we easily get

$$ D_{f}^{\alpha } {\text{e}}^{at} u(t) = - {\text{e}}^{at} u(t){\frac{1}{\Upgamma ( - \alpha )}}\left[ {\sum\limits_{N + 1}^{\infty } {{\frac{{( - a)^{n} t^{n - \alpha } }}{n!(n - \alpha )}}} } \right] $$
(3.52)

Alternatively, we can use the causal part of the Maclaurin series and compute the derivative of each term. This does not contradict our earlier statement, because the terms of the series are causal powers.
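The term-by-term procedure can be sketched numerically. Assuming the standard term-by-term result D^α t^n u(t) = n!/Γ(n + 1 − α) t^{n−α} u(t), the series Σ aⁿ t^{n−α}/Γ(n + 1 − α) should agree with a direct GL evaluation; the function names below are our own:

```python
import math

def gl_forward(f, t, alpha, h=1e-4):
    # truncated Grünwald-Letnikov sum for a causal function (see Sect. 2.3)
    n = int(t / h)
    acc, coef = 0.0, 1.0
    for k in range(n + 1):
        acc += coef * f(t - k * h)
        coef *= (k - alpha) / (k + 1)
    return acc / h ** alpha

def frac_deriv_exp_series(a, t, alpha, terms=60):
    """Term-by-term derivative of the causal Maclaurin series of e^{at} u(t)."""
    return sum(a ** n * t ** (n - alpha) / math.gamma(n + 1 - alpha)
               for n in range(terms))

a, alpha, t = 1.0, 0.5, 1.0
series = frac_deriv_exp_series(a, t, alpha)
numeric = gl_forward(lambda x: math.exp(a * x) if x > 0 else 0.0, t, alpha)
print(series, numeric)
```

For a = 1, t = 1, α = 1/2 both values should approach e·erf(1) + 1/√π ≈ 2.8549.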

3.6 Derivatives of Functions with Laplace Transform

Consider now the special class of functions with Laplace transform. Let f(t) be such a function and F(s) its LT, with a suitable region of convergence R_c. This means that we can write

$$ f(t) = {\frac{1}{2\pi j}}\int\limits_{a - j\infty }^{a + j\infty } {F(s){\text{e}}^{st} {\text{d}}s} $$
(3.53)

where a is a real number inside the region of convergence. Inserting (3.53) into (3.49) and permuting the integration symbols, we obtain:

$$ D_{f}^{\alpha } f(z) = {\frac{1}{2\pi j\Upgamma ( - \alpha )}}\int\limits_{a - j\infty }^{a + j\infty } {F(s){\text{e}}^{sz} } \int\limits_{0}^{\infty } {{\frac{{\left[ {{\text{e}}^{ - sx} - \sum\nolimits_{0}^{N} {{\frac{{( - sx)^{n} }}{n!}}} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x\,{\text{d}}s $$
(3.54)

Now we are going to use the results presented above in Sect. 3.5. If Re(s) > 0, the inner integral is equal to Γ(−α)·s^α; if Re(s) < 0, it is divergent. We conclude that:

$$ LT[D_{f}^{\alpha } f(t)] = s^{\alpha } F(s)\quad {\text{for}}\quad Re (s) > 0 $$
(3.55)

a well known result.
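The key step, the value Γ(−α)·s^α of the inner integral for Re(s) > 0, can be verified numerically for 0 < α < 1 (so that N = 0 and only the constant term is subtracted). The sketch below is our own check: it hardcodes α = 1/2, uses the substitution x = u² to remove the endpoint singularity, and adds an analytic tail estimate; the truncation U and step count are illustrative:

```python
import math

def inner_integral_half(s, U=50.0, steps=100000):
    """Evaluate I = int_0^inf [e^{-s x} - 1] x^{-3/2} dx  (alpha = 1/2, N = 0).

    With x = u^2 this becomes 2 * int_0^inf (e^{-s u^2} - 1) u^{-2} du, whose
    integrand is smooth at u = 0 (limit -s); beyond U the exponential is
    negligible and the tail integrates analytically to -1/U."""
    h = U / steps
    def g(u):
        return -s if u == 0.0 else (math.exp(-s * u * u) - 1.0) / (u * u)
    total = 0.5 * (g(0.0) + g(U))     # trapezoidal rule on [0, U]
    for i in range(1, steps):
        total += g(i * h)
    return 2.0 * (total * h - 1.0 / U)

s, alpha = 2.0, 0.5
numeric = inner_integral_half(s)
exact = math.gamma(-alpha) * s ** alpha
print(numeric, exact)
```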

Now, insert (3.53) into (3.50) and permute again the integration symbols to obtain

$$ D_{b}^{\alpha } f(z) = {\frac{{{\text{e}}^{ - j\pi \alpha } }}{2\pi j\Upgamma ( - \alpha )}}\int\limits_{a - j\infty }^{a + j\infty } {F(s){\text{e}}^{sz} } \int\limits_{0}^{\infty } {{\frac{{\left[ {{\text{e}}^{sx} - \sum\nolimits_{0}^{N} {{\frac{{(sx)^{n} }}{n!}}} } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x\,{\text{d}}s $$
(3.56)

If Re(s) < 0, and considering the result obtained in Sect. 3.5 {see Sect. 2.7 also}, the inner integral is equal to Γ(−α)·(−s)^α; if Re(s) > 0 it is divergent. We conclude that:

$$ LT[D_{b}^{\alpha } f(t)] = s^{\alpha } F(s)\quad {\text{for}}\quad Re (s) < 0 $$
(3.57)

We confirmed the results obtained in Chap. 2, extending the applicability of the well-known property of the Laplace transform of the derivative. The presence of the factor e^{−jπα} may seem strange, but it is a consequence of assuming that H(s) = s^α is the common expression for the transfer function of both the causal and the anti-causal differintegrator. We must be careful because, in the current literature, that factor has been removed and the resulting derivative is called the "right" derivative. According to the development we carried out, that factor must be retained. It is interesting to note that this was already done by Liouville.

3.7 Generalized Caputo and Riemann–Liouville Derivatives for Analytic Functions

The most known and popular fractional derivatives are almost surely the Riemann–Liouville (RL) and the Caputo (C) derivatives [5, 8, 15]. Without considering the reservations expressed before [14], we are going to face two related questions:

  • Can we formulate those derivatives in the complex plane?

  • Is there a coherent relation between those derivatives and the incremental ratio based Grünwald–Letnikov (GL)?

As expected, attending to what we wrote in Chap. 2 about these derivatives, the answers to those questions are positive. We proceed by constructing formulations in the complex plane obtained from the GL derivative, as we did in Sect. 3.5.

3.7.1 RL and C Derivatives in the Complex Plane

As we showed in Sect. 2.6, the generalised GL derivative verifies

$$ D_{\theta }^{\alpha } \left[ {D_{\theta }^{\beta } f(t)} \right] = D_{\theta }^{\beta } \left[ {D_{\theta }^{\alpha } f(t)} \right] = D_{\theta }^{\alpha + \beta } f(t) $$
(3.58)

provided that both derivatives (of orders α and β) exist. This is what we called before the semigroup property, which is important and is not enjoyed by other derivatives.
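On causal powers, where (3.51) gives a closed form, the semigroup property reduces to a telescoping identity between Gamma-function ratios, which is easy to check numerically. An illustrative sketch with arbitrarily chosen orders:

```python
import math

def d_frac_power(beta, order, t):
    """Closed form (3.51): D^order of t^beta u(t), for t > 0."""
    return math.gamma(beta + 1) / math.gamma(beta - order + 1) * t ** (beta - order)

beta, t, a, b = 2.0, 1.5, 0.3, 0.4
# two steps: order b first, then order a applied to the resulting power t^(beta-b)
c_b = math.gamma(beta + 1) / math.gamma(beta - b + 1)
two_step = c_b * d_frac_power(beta - b, a, t)
# one step of order a + b
one_step = d_frac_power(beta, a + b, t)
print(two_step, one_step)
```

The intermediate Γ(β − b + 1) factors cancel, so both paths give Γ(β + 1)/Γ(β − a − b + 1)·t^{β−a−b}.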

In particular we can put n ∈ Z⁺ and ε = n − α > 0, and we are led to the following expressions

$$ D_{\theta }^{\alpha } f(z) = D^{n} \left[ {{\text{e}}^{j\theta \varepsilon } \mathop { \lim }\limits_{\left| h \right| \to 0} \sum\limits_{k = 0}^{\infty } {( - 1)^{k} \binom{ - \varepsilon }{k}f(z - kh)\left| h \right|^{\varepsilon } } } \right] $$
(3.59)

and

$$ D_{\theta }^{\alpha } f(z) = {\text{e}}^{j\theta \varepsilon } \mathop { \lim }\limits_{\left| h \right| \to 0} \sum\limits_{k = 0}^{\infty } {( - 1)^{k} \binom{ - \varepsilon }{k}f^{(n)} (z - kh)\left| h \right|^{\varepsilon } } $$
(3.60)

that we can call mixed GL–RL and GL–C derivatives.

According to what we showed in Sect. 3.3, the GL derivative leads to the generalised Cauchy derivative for analytic functions, which obviously also verifies the semigroup property. So, we can write:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma (\alpha - \beta + 1)}{2\pi j}}\int\limits_{C} {f^{(\beta )} } (w + z){\frac{1}{{w^{\alpha - \beta + 1} }}}{\text{d}}w $$
(3.61)

Let us choose again β = n ∈ Z⁺ and ε = n − α > 0. We obtain:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\int\limits_{C} {f^{(n)} (w + z)w^{\varepsilon - 1} {\text{d}}w} $$
(3.62)

or

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\int\limits_{{C_{d} }} {f^{(n)} (w)(w - z)^{\varepsilon - 1} {\text{d}}w} $$
(3.63)

that can be considered as a Caputo–Cauchy derivative, provided the integral exists. The integration paths C and C_d are U-shaped lines, as shown in Fig. 3.2. The representation (3.60) is valid because f(z) is analytic and we assumed that the GL derivative exists; so (3.61) and (3.62) are valid too.

Consider again the integration path in Fig. 3.3. As before, we can decompose (3.61) into three integrals along the two half-straight lines and the circle. We have, then:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\left[ {\int\limits_{{C_{1} }} { + \int\limits_{{C_{2} }} + } \int\limits_{{C_{3} }} {} } \right]f^{(n)} (w + z)w^{\varepsilon - 1} {\text{d}}w $$

We do not need to continue, because we can use (3.36); this is valid since, f(z) being analytic, its nth order derivative is analytic as well. However, it is worth pursuing the computation, as it reveals some interesting details. Thus we continue.

Over C1 we have w = x·e^{j(θ−π)}, while over C3 we have w = x·e^{j(θ+π)}, with x ∈ R⁺; over C2 we have w = ρe^{jφ}, with φ ∈ (θ − π, θ + π). We can write, at last:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) & = {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\int\limits_{\infty }^{\rho } {f^{(n)} ( - x \cdot {\text{e}}^{j\theta } + z)\,{\text{e}}^{j\varepsilon (\theta - \pi )} x^{\varepsilon - 1} {\text{d}}x} \\ & \quad + {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\int\limits_{\rho }^{\infty } {f^{(n)} ( - x \cdot {\text{e}}^{j\theta } + z)\,{\text{e}}^{j\varepsilon (\theta + \pi )} x^{\varepsilon - 1} {\text{d}}x} \\ & \quad + {\frac{\Upgamma ( - \varepsilon + 1)}{2\pi j}}\rho^{\varepsilon } \int\limits_{\theta - \pi }^{\theta + \pi } {f^{(n)} (\rho \cdot {\text{e}}^{j\varphi } + z)\,{\text{e}}^{j\varepsilon \varphi } j\,{\text{d}}\varphi } \\ \end{aligned} $$

For the first and second terms, we have:

$$ \int\limits_{\infty }^{\rho } {f^{(n)} ( - x \cdot {\text{e}}^{j\theta } } + z){\text{e}}^{j\varepsilon (\theta - \pi )} x^{\varepsilon - 1} {\text{d}}x + \int\limits_{\rho }^{\infty } {f^{(n)} ( - x \cdot {\text{e}}^{j\theta } } + z){\text{e}}^{j\varepsilon (\theta + \pi )} x^{\varepsilon - 1} {\text{d}}x = {\text{e}}^{j\varepsilon \theta } \cdot 2j \cdot { \sin }(\varepsilon \pi )\int\limits_{\rho }^{\infty } {f^{(n)} ( - x \cdot {\text{e}}^{j\theta } } + z)x^{\varepsilon - 1} {\text{d}}x $$

For the third term, we begin by noting that the analyticity of the function f(z) allows us to write:

$$ f^{(n)} ( - x \cdot {\text{e}}^{j\theta } + z) = \sum\limits_{k = n}^{\infty } {{\frac{{( - 1)^{n} ( - k)_{n} f^{(k)} (z)}}{k!}}} ( - 1)^{k} x^{k - n} {\text{e}}^{jk\theta } $$
(3.64)

for x < r ∈ R⁺. We then have

$$ j\rho^{\varepsilon } \int\limits_{\theta - \pi }^{\theta + \pi } {f^{(n)} (\rho \cdot {\text{e}}^{j\varphi } + z)} {\text{e}}^{j\varepsilon \varphi } \,{\text{d}}\varphi = j\rho^{\varepsilon } \sum\limits_{k = n}^{\infty } {{\frac{{( - 1)^{n} ( - k)_{n} f^{(k)} (z)}}{k!}}} ( - 1)^{k} \rho^{k - n} \int\limits_{\theta - \pi }^{\theta + \pi } {{\text{e}}^{j(k + \varepsilon )\varphi } {\text{d}}\varphi } $$

Performing the integration, we have:

$$ \begin{aligned} j\rho^{\varepsilon } \int\limits_{\theta - \pi }^{\theta + \pi } {f^{(n)} (\rho \cdot {\text{e}}^{j\varphi } + z)} {\text{e}}^{j\varepsilon \varphi } {\text{d}}\varphi & = 2j \cdot {\text{e}}^{j\varepsilon \theta } { \sin }(\varepsilon \pi )\\ &\quad \times \sum\limits_{k = n}^{\infty } {{\frac{{( - 1)^{n} ( - k)_{n} f^{(k)} (z)}}{k!}}} ( - 1)^{k} {\frac{{{\text{e}}^{jk\theta } \rho^{k - n + \varepsilon } }}{(k + \varepsilon )}}\\ \end{aligned} $$

As ρ decreases to zero, the summation in the last expression goes to zero. This means that, when ρ → 0,

$$ D_{\theta }^{\alpha } f(z) = {\text{e}}^{j\varepsilon \theta } {\frac{1}{\Upgamma (\varepsilon )}}\int\limits_{0}^{\infty } {f^{(n)} } ( - x \cdot {\text{e}}^{j\theta } + z)x^{\varepsilon - 1} {\text{d}}x $$
(3.65)

This can be considered as a generalised Caputo derivative. In fact, with θ = 0, we obtain:

$$ D_{f}^{\alpha } f(z) = {\frac{1}{\Upgamma (\varepsilon )}}\int\limits_{0}^{\infty } {f^{(n)} } (z - x) \, x^{\varepsilon - 1} {\text{d}}x = {\frac{1}{\Upgamma (\varepsilon )}}\int\limits_{ - \infty }^{z} {f^{(n)} } (\tau ) \, (z - \tau )^{\varepsilon - 1} {\text{d}}\tau $$
(3.66)

that is the forward Caputo derivative in R.
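A quick numerical sketch of this Caputo formulation for a causal function (so that the integration effectively starts at 0): for f(t) = t² u(t) and α = 1/2 (hence n = 1, ε = 1/2) the Caputo integral should reproduce Γ(3)/Γ(5/2) t^{3/2} from (3.51). The substitution t − τ = u², used below to tame the endpoint singularity, is our own implementation choice:

```python
import math

def caputo_half(df, t, steps=20000):
    """(1/Gamma(1/2)) * int_0^t f'(tau) (t - tau)^{-1/2} dtau.

    The substitution t - tau = u^2 removes the endpoint singularity,
    leaving a smooth integrand on [0, sqrt(t)] (trapezoidal rule)."""
    U = math.sqrt(t)
    h = U / steps
    total = 0.5 * (df(t) + df(0.0))
    for i in range(1, steps):
        u = i * h
        total += df(t - u * u)
    return 2.0 * total * h / math.gamma(0.5)

t = 1.0
numeric = caputo_half(lambda tau: 2.0 * tau, t)        # f(t) = t^2, f'(tau) = 2 tau
exact = math.gamma(3.0) / math.gamma(2.5) * t ** 1.5   # (3.51) with beta = 2
print(numeric, exact)
```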

Now, return to (3.61) and put β = 0 and α = n − ε, with ε > 0, again:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) &= {\frac{\Upgamma (\alpha + 1)}{2\pi j}} \int\limits_{{C_{d} }} {f(w){\frac{1}{{(w - z)^{n - \varepsilon + 1} }}}{\text{d}}w} \\& = {\frac{\Upgamma (\alpha + 1)}{2\pi j}} \int\limits_{C} {f(w)(w - z)^{\varepsilon - n - 1} } {\text{d}}w \\ \end{aligned} $$
(3.67)

But, as

$$ (w - z)^{\varepsilon - n - 1} = {\frac{1}{{(1 - \varepsilon )_{n} }}}D_{z}^{n} (w - z)^{\varepsilon - 1} $$
(3.68)

we obtain, by commuting the operations of derivative and integration

$$ D_{\theta}^{\alpha } f(z) = D^{n} \left[ {{\frac{\Upgamma (-\varepsilon + 1)}{2\pi j}}\int\limits_{C} {f(w) \, (w - z)^{\varepsilon - 1} {\text{d}}w} } \right] $$
(3.69)

We may wonder about the validity of the above commutation. We remark that the resulting integrand has better behaviour than the original one, ensuring that we gain something by performing that operation. The formula (3.69) is the complex version of the Riemann–Liouville derivative, which we can write in the format

$$ D_{\theta }^{\alpha } f(z) = D^{n} \left[ {{\frac{\Upgamma (-\varepsilon + 1)}{2\pi j}}\int\limits_{C} {f(w + z) \, w^{\varepsilon - 1} {\text{d}}w} } \right] $$
(3.70)

Using the Hankel integration path again, we easily obtain:

$$ D_{\theta }^{\alpha } f(z) = {\text{e}}^{j\varepsilon \theta } D^{n} \left[ {{\frac{1}{\Upgamma (\varepsilon )}}\int\limits_{0}^{\infty } {f( - x \cdot {\text{e}}^{j\theta } } + z)x^{\varepsilon - 1} {\text{d}}x} \right] $$
(3.71)

that is a generalised RL derivative. With θ = 0, we obtain the usual "left" formulation of the RL derivative in R. With θ = π, we obtain, apart from a factor, the "right" RL derivative.
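The RL recipe, differentiating after fractionally integrating, can be sketched in the same spirit as the Caputo check above; here the outer integer-order derivative is approximated by a central difference, which is purely an implementation shortcut of ours:

```python
import math

def frac_integral_half(f, t, steps=20000):
    """(1/Gamma(1/2)) * int_0^t f(tau) (t - tau)^{-1/2} dtau, via t - tau = u^2."""
    U = math.sqrt(t)
    h = U / steps
    total = 0.5 * (f(t) + f(0.0))
    for i in range(1, steps):
        u = i * h
        total += f(t - u * u)
    return 2.0 * total * h / math.gamma(0.5)

def rl_half(f, t, delta=1e-4):
    """RL derivative of order 1/2: d/dt applied to the half-order integral,
    approximated by a central difference."""
    return (frac_integral_half(f, t + delta)
            - frac_integral_half(f, t - delta)) / (2.0 * delta)

t = 1.0
numeric = rl_half(lambda tau: tau, t)                  # f(t) = t u(t)
exact = math.gamma(2.0) / math.gamma(1.5) * t ** 0.5   # (3.51) with beta = 1
print(numeric, exact)
```

For smooth causal f the RL and Caputo values coincide here, consistent with the semigroup argument in the text.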

3.7.2 Half Plane Derivatives

Let us assume that f(z) ≡ 0 for Re(z) < 0. In this case, the summation in (2.32) {see (3.59) and (3.60)} goes only to K = ⌊Re(z)/Re(h)⌋ and the integration path in (3.27) is finite, closed and completely inside the right half complex plane. In Fig. 3.4 we assumed that z and h are real.

Fig. 3.4

The contour used in computing the half plane derivatives

Consider a sequence h_n (n = 1, 2, 3, …) going to zero. The number of poles inside the integration path is K but, in the limit, the quotient of two gamma functions gives rise to a multivalued expression that forces us to insert a branch cut line that starts at z and ends at −∞. Over this line the integrand is not continuous. So, we obtain:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) & = {\frac{\Upgamma (\alpha + 1)}{2\pi j}} \int\limits_{C} {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} \\ & \quad + {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{\gamma } {f(w){\frac{1}{{(w - z)^{\alpha + 1} }}}{\text{d}}w} \\ \end{aligned} $$
(3.72)

where C is an open contour that encircles the branch cut line and γ is a small line passing through w = 0 whose length we will reduce to zero {see Fig. 3.5}. However, we prefer to use the analogue of the Hankel contour. The contour γ is a short straight line over the imaginary axis. Although the integrand is not continuous there, the phase has a 2π(α + 1) jump, so the second integral above is zero. To compute the other, we are going to perform a translation to obtain an integral similar to the one used above.

Fig. 3.5

The Hankel contour used in computing the derivative defined in Eq. 3.73

As before, and again to reduce the number of steps, we will assume from the start that the straight lines are infinitely close to each other. We have, then:

$$ D_{\theta }^{\alpha } f(z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\left[ {\int\limits_{{C_{1} }} { + \int\limits_{{C_{2} }} + } \int\limits_{{C_{3} }} {} } \right]f(w + z){\frac{1}{{w^{\alpha + 1} }}}{\text{d}}w $$
(3.73)

Over C1 we have w = x·e^{j(θ−π)}, while over C3 we have w = x·e^{j(θ+π)}, with x ∈ R⁺; over C2 we have w = ρe^{jφ}, with φ ∈ (θ − π, θ + π).

Let ζ = |z|. We can write, at last:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z) & = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{\zeta }^{\rho } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{{{\text{e}}^{ - j\alpha (\theta - \pi )} }}{{x^{\alpha + 1} }}}{\text{d}}x} \\ & \quad + {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{\rho }^{\zeta } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{{{\text{e}}^{ - j\alpha (\theta + \pi )} }}{{x^{\alpha + 1} }}}} {\text{d}}x \\ & \quad + {\frac{\Upgamma (\alpha + 1)}{2\pi j}}{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } + z){\text{e}}^{ - j\alpha \varphi } j\,{\text{d}}\varphi } \\ \end{aligned} $$
(3.74)

For the first and second terms, we have:

$$ \begin{aligned} &\int\limits_{\zeta }^{\rho } {f( - x \cdot {\text{e}}^{j\theta } + z)} {\frac{{{\text{e}}^{ - j\alpha (\theta - \pi )} }}{{x^{\alpha + 1} }}}{\text{d}}x + \int\limits_{\rho }^{\zeta } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{{{\text{e}}^{ - j\alpha (\theta + \pi )} }}{{x^{\alpha + 1} }}}} {\text{d}}x = {\text{e}}^{ - j\alpha \theta } \cdot [{\text{e}}^{ - j\pi \alpha } - {\text{e}}^{j\pi \alpha } ]\int\limits_{\rho }^{\zeta } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{1}{{x^{\alpha + 1} }}}} {\text{d}}x \\ & = - {\text{e}}^{ - j\alpha \theta } \cdot 2j \cdot \sin (\alpha \pi )\int\limits_{\rho }^{\zeta } {f( - x \cdot {\text{e}}^{j\theta } + z){\frac{1}{{x^{\alpha + 1} }}}} {\text{d}}x \\ \end{aligned} $$
(3.75)

For the third term, we have

$$ j{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } + z){\text{e}}^{ - j\alpha \varphi } {\text{d}}\varphi = j{\frac{1}{{\rho^{\alpha } }}}} \sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}\rho^{n} \int\limits_{\theta - \pi }^{\theta + \pi } {{\text{e}}^{j(n - \alpha )\varphi } } }\, {\text{d}}\varphi $$
(3.76)

Performing the integration, we have:

$$ j{\frac{1}{{\rho^{\alpha } }}}\int\limits_{\theta - \pi }^{\theta + \pi } {f(\rho \cdot {\text{e}}^{j\varphi } + z){\text{e}}^{ - j\alpha \varphi } {\text{d}}\varphi = { - }2j \cdot {\text{e}}^{ - j\alpha \theta } { \sin }(\alpha \pi )} \sum\limits_{0}^{\infty } {( - 1)^{n} {\frac{{f^{(n)} (z)}}{n!}}} {\frac{{{\text{e}}^{jn\theta } \rho^{n - \alpha } }}{(n - \alpha )}} $$
(3.77)

As before:

$$ \sum\limits_{0}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}} \cdot {\frac{{{\text{e}}^{jn\theta } \rho^{n - \alpha } }}{(n - \alpha )}} = \left[ { - \sum\limits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}} {\text{e}}^{jn\theta } \int\limits_{\rho }^{\infty } {x^{n - \alpha - 1} {\text{d}}x} + \sum\limits_{N + 1}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}} {\frac{{{\text{e}}^{jn\theta } \rho^{n - \alpha } }}{(n - \alpha )}}} \right] $$

Substituting this in (3.77) and joining it to (3.75), we can write:

$$ \begin{aligned} D_{\theta }^{\alpha } f(z)&= K \cdot \int\limits_{\rho }^{\zeta } {{\frac{{\left[ {f( - x \cdot {\text{e}}^{j\theta } + z) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}{\text{e}}^{jn\theta } x^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x \\ & \quad - K \cdot \sum\limits_{N + 1}^{\infty } {{\frac{{f^{(n)} (z)}}{n!}}( - 1)^{n} {\frac{{\rho^{n - \alpha } }}{(n - \alpha )}}} + \Uptheta \\ \end{aligned} $$
(3.78)

with

$$ \Uptheta = - \sum\limits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}{\text{e}}^{jn\theta } \int\limits_{\zeta }^{\infty } {x^{n - \alpha - 1} {\text{d}}x} } = z^{ - \alpha } \sum\limits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}{\frac{{z^{n} }}{n - \alpha }}} $$

If α < 0, we make the three summations equal to zero. Using the reflection formula of the gamma function, we obtain for K:

$$ K = - {\frac{{\Upgamma (\alpha + 1){\text{e}}^{ - j\theta \alpha } }}{\pi }}{ \sin }(\alpha \pi ) = {\frac{{{\text{e}}^{ - j\theta \alpha } }}{\Upgamma ( - \alpha )}} $$
(3.79)

Now let ρ go to zero. The second term on the right-hand side of (3.78) goes to zero and we obtain:

$$ D_{\theta }^{\alpha } f(z) = K \cdot \int\limits_{0}^{\zeta } {{\frac{{\left[ {f( - x \cdot {\text{e}}^{j\theta } + z) - \sum\nolimits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}{\text{e}}^{jn\theta } x^{n} } } \right]}}{{x^{\alpha + 1} }}}} {\text{d}}x + \sum\limits_{0}^{N} {{\frac{{f^{(n)} (z)}}{n!}}{\frac{{z^{n - \alpha } }}{n - \alpha }}} $$
(3.80)

This result shows that, in this situation and with α > 0, we have a regularised integral plus an additional term. This means that it is somewhat difficult to compute the fractional derivative by using (3.80): a simple expression obtained from the general GL derivative

$$ D_{\theta }^{\alpha } f(z) = {\text{e}}^{ - j\theta \alpha} \mathop { \lim }\limits_{\left| h \right| \to 0} {\frac{{\sum\nolimits_{k = 0}^{{\left\lfloor {\zeta /h} \right\rfloor }} {( - 1)^{k} \binom{\alpha }{k}f(z - kh)} }}{{\left| h \right|^{\alpha } }}} $$
(3.81)

leads to the somewhat complicated formulation in (3.80). However, if α < 0 we obtain:

$$ D_{\theta }^{\alpha } f(z) = K \cdot \int\limits_{0}^{\zeta } {{\frac{{f( - x \cdot {\text{e}}^{j\theta } + z)}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.82)
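For α < 0 the above expression is an ordinary (convergent) integral and can be checked directly. Taking θ = 0, f(t) = t u(t) and α = −1/2, the integral can even be evaluated in closed form and compared with (3.51); this small check is our own illustration:

```python
import math

alpha, z = -0.5, 1.0
K = 1.0 / math.gamma(-alpha)          # theta = 0, so K = 1/Gamma(-alpha)

# closed form of int_0^z (z - x) x^{-(alpha+1)} dx for f(t) = t u(t):
integral = z ** (1 - alpha) / (-alpha) - z ** (1 - alpha) / (1 - alpha)
numeric = K * integral
# (3.51): D^alpha of t u(t) is Gamma(2)/Gamma(2 - alpha) * t^(1 - alpha)
exact = math.gamma(2.0) / math.gamma(2.0 - alpha) * z ** (1.0 - alpha)
print(numeric, exact)
```

For α = −1/2 this is just the half-order anti-derivative of t, namely 4 z^{3/2}/(3√π).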

So, we must avoid (3.80). To do that, remark first that, from (3.58), we have:

$$ D_{\theta }^{\alpha } f(z) = D_{\theta }^{n} \left[ {D_{\theta }^{ - \varepsilon } f(z)} \right] = D_{\theta }^{ - \varepsilon } \left[ {D_{\theta }^{n} f(z)} \right] $$
(3.83)

This means that we can compute the α order derivative in two steps. As one of the steps is a fractional anti-derivative, we avoid (3.80) and use (3.82). The order of the steps, that is, computing the integer order derivative before or after the anti-derivative, leads to

$$ D_{\theta }^{\alpha } f(z) = K \cdot \int\limits_{0}^{\zeta } {{\frac{{f^{(n)} ( - x \cdot {\text{e}}^{j\theta } + z)}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.84)

and

$$ D_{\theta }^{\alpha } f(z) = K \cdot D^{n} \int\limits_{0}^{\zeta } {{\frac{{f( - x \cdot {\text{e}}^{j\theta } + z)}}{{x^{\alpha + 1} }}}} {\text{d}}x $$
(3.85)

that are the C and RL formulations in the complex plane. However, from (3.83) we can also write:

$$ D_{\theta }^{\alpha } f(z) = {\text{e}}^{ - j\theta \alpha } \mathop { \lim }\limits_{\left| h \right| \to 0} {\frac{{\sum\nolimits_{k = 0}^{{\left\lfloor {\zeta /h} \right\rfloor }} {( - 1)^{k} \binom{ - \varepsilon }{k}f^{(n)} (z - kh)} }}{{\left| h \right|^{\alpha } }}} $$
(3.86)

and

$$ D_{\theta }^{\alpha } f(z) = \left[ {{\text{e}}^{ - j\theta \alpha } \mathop { \lim }\limits_{\left| h \right| \to 0} {\frac{{\sum\nolimits_{k = 0}^{{\left\lfloor {\zeta /h} \right\rfloor }} {( - 1)^{k} \binom{ - \varepsilon }{k}f{(z - kh)}} }}{{\left| h \right|^{\alpha } }}}} \right]^{(n)} $$
(3.87)

These results mean that we can easily define C–GL (3.86) and RL–GL (3.87) derivatives. Attending to the way we followed in going from GL to C and RL, we can conclude that, in the case of analytic functions, the existence of the RL or C derivatives ensures the existence of the corresponding GL derivative. The reverse may not be true, since the commutation of limit and integration in (3.22) and (3.23) may not be valid. It is a simple task to obtain other decompositions of α leading to valid definitions. One possibility is the Miller–Ross sequential differintegration [7]:

$$ D^{\alpha } x(t) = \left[ {\prod\limits_{i = 1}^{N} {D^{\sigma }} } \right]x(t) $$
(3.88)

with α = Nσ. This is a special case of the multi-step scheme proposed by Samko et al. [5], based on the Riemann–Liouville definition:

$$ D^{\alpha } x(t) = \left[ {\prod\limits_{i = 1}^{N} {D^{{\sigma_{i} }} } } \right]x(t) $$
(3.89)

with

$$ \alpha = \left[ {\sum\limits_{i = 1}^{N} {\sigma_{i} } } \right] - 1\quad {\text{and}}\quad 0 < \sigma_{i} \le 1 $$
(3.90)

These definitions suggest that, to compute an α order derivative, we have infinitely many ways, depending on the steps we follow to go from 0 (or −∞) to α; that is, we express α as a summation of N reals σ_i (i = 0, …, N − 1), with the σ_i not necessarily less than or equal to one.
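On causal powers this multi-step idea is easy to illustrate: composing N steps of order σ through the closed form (3.51) telescopes to the single-step result of order Nσ. The sketch below, our own illustration, only demonstrates the power-function case; for general functions the intermediate steps require more care:

```python
import math

def d_frac_power(beta, order, t):
    """Closed form (3.51): D^order of t^beta u(t), for t > 0."""
    return math.gamma(beta + 1) / math.gamma(beta - order + 1) * t ** (beta - order)

alpha, N, beta, t = 1.2, 4, 3.0, 2.0
sigma = alpha / N
# N sequential steps of order sigma, tracking coefficient and exponent
coef, b = 1.0, beta
for _ in range(N):
    coef *= math.gamma(b + 1) / math.gamma(b - sigma + 1)
    b -= sigma
sequential = coef * t ** b
direct = d_frac_power(beta, alpha, t)
print(sequential, direct)
```

The intermediate Gamma factors cancel in pairs, leaving Γ(β + 1)/Γ(β − Nσ + 1), exactly the single-step coefficient.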

3.8 Conclusions

We started from the Grünwald–Letnikov derivative and obtained an integral formulation representing the summation. From it, by permuting the limit and the integration, we obtained the general Cauchy derivative. From this, using the Hankel contour as integration path, we arrived at a regularised derivative. This is similar to the Hadamard definition, but was obtained without rejecting any infinite part. With the presented methodology we could also obtain the Riemann–Liouville and Caputo derivatives, and we showed that they can be computed with the Grünwald–Letnikov definition.