
2.1 Introduction

2.1.1 A Brief Historical Overview

Fractional calculus is a mathematical discipline that is about 300 years old. In fact, some time after the publication of his studies on Differential Calculus, where he introduced the notation \( \frac{\text{d}^{n} y}{\text{d}x^{n}} ,\) Leibniz received a letter from Bernoulli posing a question about the meaning of a non-integer derivative order. He also received a similar enquiry from L’Hôpital: What if n is ½? Leibniz’s reply was prophetic: It will lead to a paradox, a paradox from which one day useful consequences will be drawn, because there are no useless paradoxes. It was the beginning of a discussion about the theme that involved other mathematicians like Euler and Fourier. Euler suggested in 1730 a generalisation of the rule used for computing the derivative of the power function, and used it to obtain derivatives of order 1/2. Nevertheless, we can say that the XVIII century was not fruitful as regards the development of Fractional Calculus. Only in the early XIX century did interesting developments start being published. Laplace proposed an integral formulation (1812), but it was Lacroix who used the designation “derivative of arbitrary order” for the first time (1819). Using the gamma function he could define the fractional derivative of the power function, but he did not go further. In 1822, Fourier presented the following generalization:

$$ \frac{\text{d}^{\nu } f(t)}{\text{d}t^{\nu } } = \frac{1}{2\pi }\int\limits_{ - \infty }^{ + \infty } f(\tau )\,\text{d}\tau \int\limits_{ - \infty }^{ + \infty } u^{\nu } \cos \left( ut - u\tau + \frac{\nu \pi }{2} \right)\,\text{d}u $$

and stated that it was valid for any \( \nu ,\) positive or negative. However, we can place the true beginning of fractional calculus in the works of Liouville and Abel. Abel solved the integral equation that appears in the solution of the tautochrone problem:

$$ \int\limits_{a}^{t} \frac{\varphi (\tau )}{(t - \tau )^{\mu } }\,\text{d}\tau = f(t),\quad t > a,\;0 < \mu < 1 $$

which represents an operation of fractional integration of order \( 1 - \mu .\) However, it seems he was completely unaware of the fractional derivative or integral concepts. Liouville made several attempts in 1832. In the first he took exponentials as the starting point for introducing the fractional derivative. With them he generalised the usual formula for the derivative of an exponential and applied it to the computation of derivatives of functions represented by series of exponentials (later called Dirichlet series). In another attempt he presented a formula for fractional integration similar to the above:

$$ D^{ - p} \varphi (t) = {\frac{1}{{( - 1)^{p} \Upgamma (p)}}}\int\limits_{0}^{\infty } {\varphi (t + \tau )\tau^{p - 1} \,{\text{d}}\tau, \quad - \infty < t < + \infty ,\,\text{Re} (p) > 0} $$
(2.1)

where \( \Upgamma (p) \) is the gamma function. To this integral, frequently with the factor \( ( - 1)^{p} \) omitted, we give the name of Liouville’s fractional integral. It is important to note that Liouville was the first to consider the solution of fractional differential equations. In other papers, Liouville went ahead with the development of ideas concerning this theme, presenting a generalization of the notion of incremental ratio to define a fractional derivative [1]. This idea was recovered later by Grünwald (1867) and Letnikov (1868). It is interesting to note that Liouville was the first to observe the difference between forward and backward derivatives (somewhat different from the concepts of left and right). In a paper published in 1892 (after his death) Riemann arrived at an expression similar to (2.1) for the fractional integral

$$ D^{ - \alpha } \varphi (t) = {\frac{1}{\Upgamma (\alpha )}}\int\limits_{0}^{t} {{\frac{\varphi (\tau )}{{(t - \tau )^{1 - \alpha } }}}\,{\text{d}}\tau ,\quad t > 0} $$
(2.2)

which, together with (2.1), became the most important basis for fractional integration. It is worth noting that both Liouville and Riemann dealt with the so-called “complementary” functions that would appear when treating the differentiation of order \( \alpha \) as an integration of order \( - \alpha .\) Holmgren (1865/66) and Letnikov (1868/74) discussed that problem when looking for the solution of differential equations, correctly stating fractional differentiation as the inverse operation of fractional integration. Besides, Holmgren gave a rigorous proof of Leibniz’s rule for the fractional derivative of the product of two functions, published before by Liouville, first, and Hargrave, later (1848). At the beginning of the XX century, Hadamard proposed a fractional differentiation method based on differentiating term by term the Taylor series associated with the function. Weyl (1917) defined a fractional integration suitable for periodic functions, using the integrals

$$ I_{ + }^{\alpha } (t) = {\frac{1}{\Upgamma (\alpha )}}\int\limits_{ - \infty }^{t} {{\frac{\varphi (\tau )}{{(t - \tau )^{1 - \alpha } }}}\,{\text{d}}\tau ,\quad - \infty < t < + \infty, \, 0 < \alpha < 1} $$
(2.3)

and

$$ I_{ - }^{\alpha } (t) = {\frac{1}{\Upgamma (\alpha )}}\int\limits_{t}^{\infty } {{\frac{\varphi (\tau )}{{(\tau - t)^{1 - \alpha } }}}\,{\text{d}}\tau, \quad - \infty < t < + \infty ,\,0 < \alpha < 1} $$
(2.4)

These are particular cases of the Liouville and Riemann integrals, but they have been a basis for fractional integration in R. An interesting contribution to fractional differentiation was given by Marchaud (1927), who presented a new formulation

$$ D_{ + }^{\alpha } f(t) = c \cdot \int\limits_{0}^{\infty } {{\frac{{\Updelta_{\tau }^{k} f(t)}}{{\tau^{1 + \alpha } }}}} \,{\text{d}}\tau ,\quad \alpha > 0 $$
(2.5)

where \( \Updelta_{\tau }^{k} f(t) \) is the finite difference of order \( k > \alpha ,\) k = 1, 2, 3, …, and c is a normalization constant. This definition coincides with

$$ D^{\alpha } f(t) = \frac{1}{\Upgamma (n - \alpha )}\frac{{{\text{d}}^{n} }}{{{\text{d}}t^{n} }}\int\limits_{ - \infty }^{t} {{\frac{f(\tau )}{{(t - \tau )^{\alpha - n + 1} }}}\,{\text{d}}\tau }, \quad n = \left\lfloor \alpha \right\rfloor + 1 $$
(2.6)

for sufficiently “good” functions. It is important to remark that the construction (2.5) has an advantage relative to (2.6): it can be applied to functions with “bad” behaviour at infinity, since they are allowed to grow as \( t \to + \infty .\)

A different approach was proposed by Heaviside (1892): the so-called operational calculus, which was not readily accepted by mathematicians until the works of Carson, Bromwich, and Doetsch, which validated his procedure with the help of the Laplace transform (LT).

In modern times, the unified formulation of integration and differentiation—called differintegration—based on the Cauchy integral

$$ f^{(\alpha )} (z) = {\frac{\Upgamma (\alpha + 1)}{2\pi j}}\int\limits_{c} {{\frac{f(\tau )}{{(\tau - z)^{1 + \alpha } }}}\,{\text{d}}\tau } $$
(2.7)

gained great popularity. This approach can be traced back to Sonin (1869), but only with Laurent (1884) did it obtain a coherent formulation. In the 1980s this approach evolved with the works of several people, such as Nishimoto (who published a sequence of four books), Campos, Srivastava, Kalla, Riesz, and Osler. The book of Samko et al. [1] was the culmination of several steps of this development. However, the main part of that book is devoted not to the Cauchy integral formulation, but to the so-called Riemann–Liouville derivative. This formulation appeared first in a paper by Sonin (1869) and joins the Riemann and Liouville integral formulations together with the integer order derivative. Essentially it is a multi-step procedure that performs an integer order derivative after a fractional integration. Caputo, in the 1960s, inverted the procedure: one starts with an integer order derivative and afterwards performs a fractional integration. We must also mention a different form of fractional differentiation introduced by Riesz: the so-called Riesz and Riesz–Feller potentials.

Since the early 1990s, Fractional Calculus has attracted the attention of an increasing number of mathematicians, physicists, and engineers, who have been supporting its development, originating several new formulations, and mainly using it to explain natural and engineering phenomena and to develop new engineering applications.

2.1.2 Current Formulations

In Tables 2.1 and 2.2, the best known definitions of Fractional Integrals and Derivatives are presented.

Table 2.1 Fractional integral definitions \( (\alpha > 0) \)
Table 2.2 Fractional derivative definitions \( (\alpha > 0) \)

As can be seen, there are clear differences among some kinds of definitions. On the other hand, there are definitions that impose causality, and the relation

$$ {\text{LT}}\left[ {D^{\alpha } f(t)} \right] = s^{\alpha } F(s),\quad \alpha \in R $$
(2.8)

is not always valid. However, the major inconvenience of most definitions is that they incorporate properties of the signal. Although we can speak of derivatives or integrals of functions defined on a given sub-interval of R, we do not find it correct to incorporate that property into the definition. This means that only the definitions with R as domain may be valid definitions. This assumption brings an important consequence: the integral and derivative are inverse operations and commute:

$$ D^{\alpha } \{ D^{\beta } \} = D^{\alpha + \beta } = D^{\beta } \{ D^{\alpha } \}, \quad \alpha ,\beta \in R $$

Later we will return to this subject.

2.1.3 A Signal Processing Point of View

In recent years fractional calculus has been rediscovered by scientists and engineers and applied in an increasing number of fields, namely in the areas of electromagnetism, control engineering, and signal processing. The increasing number of physical and engineering processes that are best described by fractional differential equations has motivated its study. This led to an enrichment of Fractional Calculus with new approaches, which, however, contributed to a somewhat chaotic state of the art. As seen, there are several definitions that lead to different results, making it difficult to establish a systematic theory of fractional linear systems in agreement with current practice. Although from a purely mathematical point of view it is legitimate to accept and even use one or all of them, from the point of view of applications the situation is different. We should accept only the definitions that might lead to a fractional systems theory coherent with the usual practice and with accepted notions and concepts such as the impulse response and transfer function. The use of the Grünwald–Letnikov forward and backward derivatives leads to a correct generalization of the current linear systems theory. Moreover, this choice is motivated by other reasons:

  • It does not need superfluous derivative computations.

  • It does not insert unwanted initial conditions.

  • It is more flexible.

  • It allows sequential computations.

  • It leads to the other definitions.

On the other hand, it does not assume any bound on the domain of the signals to be used. In general we will assume it is the real line. If we want to use a bounded interval we will use the Heaviside unit step function. As a consequence, we will use the bilateral (two-sided) Laplace transform. We will not use the one-sided LT, for several reasons:

  • It forces us to use only causal signals.

  • Some of its properties lose symmetry, e.g. the translation and the derivation/integration properties.

  • It does not treat easily the case of impulses located at t = 0 [2].

  • In the fractional case, it imposes on us the same set of initial conditions as the Riemann–Liouville case that can be a constraint.

2.1.4 Overview

This chapter has three main parts corresponding to the fractional derivative definitions, their properties, and generalisations. We present a general formula and the forward and backward derivatives as special cases valid for real functions. We treat the case of functions with Laplace transform and obtain integral formulae named Liouville differintegrations, since they were first proposed by Liouville. These are suitable for fractional linear system studies, since they allow a generalisation of known concepts without meaningful changes. We will show that these derivatives impose causality: one is causal and the other anti-causal. For the general derivative we will prove its semi-group properties and deduce some other interesting features. We will show that it is compatible with the classic derivative, which appears here as a special case. We will compute the derivatives of some useful functions. In particular, we obtain derivatives of exponentials, causal exponentials, causal powers, and logarithms.

2.1.5 Cautions

We deal with the multivalued expression \( z^{\alpha } .\) As is well known, to define a function we have to fix a branch cut line and choose a branch (Riemann surface). It is common procedure to choose the negative real half-axis as branch cut line. In what follows we will adopt the principal branch and assume that the obtained function is continuous from above on the branch cut line. With this, we will write \( ( - 1)^{\alpha } = {\text{e}}^{j\alpha \pi } .\)

In the following, and unless otherwise stated, we will assume to be in the context of generalised functions (distributions). We always assume that they are either of exponential order or tempered distributions.

Unless otherwise stated, our working domain will be the entire R, not \( R^{ + } \), or, when stated, C.

As mentioned before, “our” Laplace transform (LT) will be the two-sided Laplace transform.

2.2 From the Classic Derivative to the Fractional

2.2.1 On the Grünwald–Letnikov Derivative

In the prehistory of Fractional Calculus, Liouville (1832) was the first to look for a definition of fractional derivative through the generalization to the fractional case of the incremental ratio used for integer order derivatives [1, 3]. However, he did not pursue this idea. Greer (1859) treated the order ½ case. Grünwald (1867) and mainly Letnikov (1868, 1872) studied the fractional derivative obtained by this generalization and its properties. Here we will present a more general vision of the incremental ratio derivative and deduce its properties. The most important is the semi-group property, which created great difficulties in the past. In fact, it seems to have been considered first by Peacock under the “principle of the permanence of equivalent forms”. However, he did not convince anybody. The same happened to Kelland (1846) and later to Heaviside in the 1890s. However, Heaviside got interesting results with his operational calculus, which contributed to its acceptance in several scientific domains. But in Fractional Calculus, the group property has only been accepted in the integral case. We will show that it is valid in the general case and maintained in the generalized functions case. The main point is the use of the same formula for both the derivative case (positive orders) and the integral case (negative orders). In this case, we do not have to care about any integration constant or about initial conditions.

2.2.2 Difference Definitions

Let f(z) be a function of a complex variable and introduce \( \Updelta_{d} \) and \( \Updelta_{r} \) as finite “direct” and “reverse” differences defined by:

$$ \Updelta_{d} f\left( z \right) = f\left( z \right)-f\left( {z - h} \right) $$
(2.9)

and

$$ \Updelta_{r} f\left( z \right) = f\left( {z + h} \right)-f\left( z \right) $$
(2.10)

with \( h \in C \) and, for reasons that will become apparent later, we assume that Re(h) > 0, or Re(h) = 0 with Im(h) > 0. The repeated use of the above definitions leads to

$$ \Updelta_{d}^{N} f(z) = \sum\limits_{k = 0}^{N} {( - 1)^{k} \left( {_{k}^{N} } \right)f(z - kh)} $$
(2.11)

and

$$ \Updelta_{r}^{N} f(z) = ( - 1)^N\sum\limits_{k = 0}^{N} {( - 1)^{k} } (_{k}^{N} )f(z + kh) $$
(2.12)

where \( (_{k}^{N} ) \) are the binomial coefficients. These definitions are readily extended to the fractional order case [4]:

$$ \Updelta_{d}^{\alpha }\; f(z) = \sum\limits_{k = 0}^{\infty } {( - 1)^{k} } (_{k}^{\alpha } )f(z - kh) $$
(2.13)

and

$$ \Updelta_{r}^{\alpha} f(z) = ( - 1)^{\alpha} \sum\limits_{k = 0}^{\infty} (-1)^k (_{k}^{\alpha } )f(z + kh) $$
(2.14)

where we assume that \( \alpha \in R .\) This formulation remains valid in the negative integer case. Let \( \alpha = - N \) (N a positive integer). As is well known from Z transform theory, the following relation holds for \( k \ge 0 \):

$$ ZT[(n + 1)_{k} u(n)] = {\frac{k!}{{(1 - q^{ - 1} )^{k + 1} }}}\quad {\text{for}}\,|q| > 1 $$
(2.15)

where u(n) is the discrete time unit step:

$$ u(n) = \left\{ \begin{array}{ll} 1 & n \ge 0\\ 0 & n < 0 \end{array} \right. $$

Introducing the Pochhammer symbol,

$$ \left( a \right)_{k} = \, a\left( {a + 1} \right)\left( {a + 2} \right) \cdots \left( {a + k - 1} \right) $$

and putting k = N − 1, we easily obtain:

$$ (1 - q^{ - 1} )^{ - N} = \sum\limits_{n = 0}^{\infty } {{\frac{{(n + 1)_{N - 1} }}{(N - 1)!}}} q^{ - n} \quad {\text{for}}\,|q| > 1 $$
(2.16)

Interpreting \( q^{ - 1} \) as a delay, as is commonly done in Digital Signal Processing, (2.16) leads to

$$ \mathop \Updelta \nolimits_{d}^{ - N} f(z) = \sum\limits_{n = 0}^{\infty } {{\frac{{(n + 1)_{N - 1} }}{(N - 1)!}}} \,f(z - nh) $$
(2.17)

For the reverse case, we have:

$$ \mathop \Updelta \nolimits_{r}^{ - N} f(z) = (q - 1)^{ - N} f(z) = ( - 1)^{N} \sum\limits_{n = 0}^{\infty } {{\frac{{(n + 1)_{N - 1} }}{(N - 1)!}}} f(z + nh) $$
(2.18)

As

$$ (n + 1)_{N - 1} = {\frac{(n + N - 1)!}{n!}} = {\frac{{(N - 1)!(N)_{n} }}{n!}} $$
(2.19)

and

$$ {\frac{{( - a)_{n} }}{n!}} = ( - 1)^{n} (_{n}^{a} ) $$
(2.20)

we have:

$$ (n + 1)_{N - 1} = (N - 1)!( - 1)^{n} (_{n}^{ - N} ) $$
(2.21)

So, we can write:

$$ \mathop \Updelta \nolimits_{d}^{ - N} f(z) = \sum\limits_{n = 0}^{\infty } {( - 1)^{n} } (_{n}^{ - N} )f(z - nh) $$
(2.22)

For the anti-causal case, we have:

$$ \mathop \Updelta \nolimits_{r}^{ - N} f(z) = ( - 1)^{N} \sum\limits_{n = 0}^{\infty } {( - 1)^{n} } (_{n}^{ - N} )f(z + nh) $$
(2.23)

As can be seen, these expressions are the ones we obtain by putting \( \alpha = - N \) into (2.13) and (2.14), which emerge here as representations of the differences of any order.
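The coefficient pattern in (2.13) and (2.22) is easy to check numerically. Below is a minimal sketch (Python; the helper name gl_coeffs is ours, not from any library) that generates the weights \( ( - 1)^{k} (_{k}^{\alpha } ) \) by a one-term recursion and shows that they terminate for a positive integer order and reproduce (2.22) for a negative integer order:

```python
def gl_coeffs(alpha, n_terms):
    """Weights (-1)^k C(alpha, k) of the fractional difference (2.13),
    generated by the recursion c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = [1.0]
    for k in range(1, n_terms):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

# alpha = 3: the series terminates, giving the classic weights 1, -3, 3, -1
print(gl_coeffs(3, 6))
# alpha = -2: the weights are n + 1, in agreement with (2.22)
print(gl_coeffs(-2, 6))
```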

2.2.3 Integer Order Derivatives

The normal way of introducing the derivative of a continuous function is through the limits of the incremental ratio:

$$ \mathop f\nolimits_{d}^{(1)} (z) = \mathop {\lim }\limits_{h \to 0} {\frac{f(z) - f(z - h)}{h}} $$
(2.24)

and

$$ \mathop f\nolimits_{r}^{(1)} (z) = \mathop {\lim }\limits_{h \to 0} {\frac{f(z + h) - f(z)}{h}} $$
(2.25)

These incremental ratios are very important in the continuous-to-discrete conversion of systems defined by differential equations (linear or nonlinear). It is also well known that the first is better than the second for stability reasons. The use of the LT in the above definitions allows us to obtain the transfer functions of the differentiators, which are equal both in analytical expression and in domain (the whole complex plane):

$$ s = \mathop {\lim }\limits_{h \to 0} {\frac{{(1 - {\text{e}}^{ - sh} )}}{h}} = \mathop {\lim }\limits_{h \to 0} {\frac{{({\text{e}}^{sh} - 1)}}{h}} $$
(2.26)

This is the reason why the above derivatives give the same result, whenever they exist and f(z) is a continuous function. We must stress also that \( h \in R .\) Later we will see that in the general case h is a value on a half straight line in the complex plane.

In most books on Mathematical Analysis we are told that to compute the high order derivatives we must proceed sequentially by repeating the application of formulae (2.24) or (2.25). This means that, if we want to compute the fourth order derivative, we have to compute first \( f^{\prime}(z) \), \( f^{\prime\prime}(z) \), and \( f^{(3)}(z) \). However, we have an alternative, as we will see next. Assume that we want to compute the second order derivative from the first. We have

$$ \begin{aligned} \mathop f\nolimits_{d}^{(2)} (z) = \mathop {\lim }\limits_{h \to 0} {\frac{{f^{(1)} (z) - f^{(1)} (z - h)}}{h}} & = \mathop {\lim }\limits_{h \to 0} {\frac{{\mathop {\lim }\nolimits_{h \to 0} {\frac{f(z) - f(z - h)}{h}} - \mathop {\lim }\nolimits_{h \to 0} {\frac{f(z - h) - f(z - 2h)}{h}}}}{h}} \\ & = \mathop {\lim }\limits_{h \to 0} {\frac{{\mathop {\lim }\nolimits_{h \to 0} {\frac{f(z) - 2f(z - h) + f(z - 2h)}{h}}}}{h}} \\ & = \mathop {\lim }\nolimits_{h \to 0} {\frac{f(z) - 2f(z - h) + f(z - 2h)}{{h^{2} }}} \\ \end{aligned} $$
(2.27)

As seen, we obtained an expression that allows us to obtain the second order derivative directly from the function. It is not a difficult task to repeat the procedure for successively increasing orders to obtain a general expression:

$$ f_d^{(n)} (z) = \mathop{\lim}\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh)} }}{{h^{n} }}} $$
(2.28)

which allows us to obtain the nth order derivative directly without “passing” through the intermediate derivatives. To see that this is correct, assume that we want to compute the (n + 1)th order derivative of f(z) by taking the first order derivative of (2.28). We have:

$$ \begin{aligned} \mathop f\nolimits_{d}^{(n + 1)} (z) & = \mathop {\lim }\limits_{h \to 0} {\frac{{\mathop {\lim }\nolimits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh)} }}{{h^{n} }}} - \mathop {\lim }\nolimits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh - h)} }}{{h^{n} }}}}}{h}} \\ & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh)} - \sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh - h)} }}{{h^{n + 1} }}} \\ & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z - kh)} + \sum\nolimits_{k = 1}^{n + 1} {( - 1)^{k} (_{k - 1}^{n} )f(z - kh)} }}{{h^{n + 1} }}} \\ & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n + 1} {( - 1)^{k} \left[ {(_{k}^{n} )} \right. + \left. {(_{k - 1}^{n} )} \right]f(z - kh)} }}{{h^{n + 1} }}} \\ \end{aligned} $$
(2.29)

As \( ( - 1)! = \infty \) and 0! = 1, we easily conclude that

$$ (_{k}^{n} ) + (_{k - 1}^{n} ) = (_{k}^{n + 1} ) $$

which, together with (2.29), confirms the validity of relation (2.28). Proceeding similarly with (2.25) we obtain

$$ \mathop f\nolimits_{r}^{(n)} (z) = ( - 1)^{n} \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{n} {( - 1)^{k} (_{k}^{n} )f(z + kh)} }}{{h^{n} }}} $$
(2.30)

Expressions (2.28) and (2.30) allow us to compute the nth order derivative of a given function directly. Considering (2.11) and (2.12), we see that the above derivatives are limits of incremental ratios. As before, it is a simple task to use the LT to obtain the transfer function of the differentiator

$$ H\left( s \right) = s^{n} $$
(2.31)

valid for \( s \in C .\) From this result we conclude that the differentiator is a linear system that does not impose causality. It is what we may call an acausal system. This is a very important point that will no longer hold when generalising the derivative concept.
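A quick numerical illustration of the direct formula (2.28), assuming a smooth test function (the helper name direct_derivative is ours):

```python
import math

def direct_derivative(f, z, n, h=1e-3):
    """n-th order forward derivative computed directly by (2.28),
    without evaluating the intermediate derivatives."""
    s = sum((-1)**k * math.comb(n, k) * f(z - k * h) for k in range(n + 1))
    return s / h**n

# Sanity check with f(z) = z^4: the exact second derivative at z = 1 is 12.
print(direct_derivative(lambda z: z**4, 1.0, 2))   # approx 12 (first order in h)
```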

2.3 Definition of Fractional Derivative

To generalize the notion of derivative to any order, we start from the above derivatives and introduce the general formulation of the incremental ratio, valid for any order, real or complex, obtained from the fractional order differences (2.13) and (2.14).

Definition 2.1

We define fractional derivative of f(z) by the limit of the fractional incremental ratio

$$ \mathop D\nolimits_{\theta }^{\alpha } f(z) = {\text{e}}^{ - j\theta \alpha } \mathop {\lim }\limits_{|h| \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )f(z - kh)} }}{{|h|^{\alpha } }}} $$
(2.32)

where \( h = \left| h \right|{\text{e}}^{j\theta } \) is a complex number, with \( \theta \in ( - \pi ,\pi ] .\) This derivative is a general incremental ratio based derivative that extends the classic derivatives and the Grünwald–Letnikov fractional derivatives to the whole complex plane. We will retain this name and refer to it as the GL derivative in what follows.

To understand and give an interpretation to the above formula, assume that z is a time variable and that h is real, that is, \( \theta = 0 \) or \( \theta = \pi .\) If \( \theta = 0 ,\) only the present and past values are used, while, if \( \theta = \pi ,\) only the present and future values are used. This means that if we look at (2.32) as a linear system, the first case is causal, while the second is anti-causal [5, 6].

In general, if \( \theta = 0 ,\) we call (2.32) the forward Grünwald–Letnikov derivative:

$$ \mathop D\nolimits_{f}^{\alpha } f(z) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )f(z - kh)} }}{{h^{\alpha } }}} $$
(2.33)

If \( \theta = \pi ,\) we put \( h = - |h| \) to obtain the backward Grünwald–Letnikov derivative:

$$ \mathop D\nolimits_{b}^{\alpha } f(z) = \mathop {\lim }\limits_{h \to 0 + } {\text{e}}^{ - j\pi \alpha } {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )f(z + kh)} }}{{h^{\alpha } }}} $$
(2.34)

The exponential factor in this formula makes it different from the so-called right GL derivative found in current literature.
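For causal functions the series in (2.33) has only finitely many nonzero terms, so the forward GL derivative can be approximated by truncating at \( k = \lfloor t/h \rfloor \) with a small h. The following sketch (our own helper name, not a library routine) applies it to the causal ramp \( t\,u(t) \); the exact half-derivative, anticipated from (2.76) below, is \( \sqrt{t}/\Upgamma (1.5) \):

```python
import math

def gl_forward(f, t, alpha, h=1e-3):
    """Truncated forward GL derivative (2.33) of a causal function f.
    The weights (-1)^k C(alpha, k) are generated recursively."""
    c, s = 1.0, f(t)
    for k in range(1, int(t / h) + 1):
        c *= (k - 1 - alpha) / k
        s += c * f(t - k * h)
    return s / h**alpha

t = 2.0
ramp = lambda x: x if x > 0 else 0.0
print(gl_forward(ramp, t, 0.5))         # numerical half-derivative
print(math.sqrt(t) / math.gamma(1.5))   # exact value, approx 1.596
```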

2.4 Existence

It is not a simple task to formulate the weakest conditions that ensure the existence of the fractional derivatives (2.32), (2.33) and (2.34), although we can give some necessary conditions for their existence. To study the existence conditions we must care about the behaviour of the function along the half straight-line z ± nh, with \( n \in Z^{ + } .\) If the function is zero for Re(z) < \( a \in R \) (resp. Re(z) > a), the forward (backward) derivative exists at every finite point of f(z). In the general case, we must keep in mind the behaviour of the binomial coefficients. They verify

$$ |(_{k}^{\alpha } )| \le {\frac{A}{{k^{\alpha + 1} }}} $$
(2.35)

meaning that the general term \( (_{k}^{\alpha } )f(z - kh) \) must decrease at least as fast as \( {\frac{A}{{k^{|\alpha | + 1} }}} \) when k goes to infinity.

For instance, let us consider the forward case. If \( \alpha > 0 \), it is enough to ensure that f(z) is bounded in the left half plane; but if \( \alpha < 0 \), f(z) must decrease to zero in order to obtain a convergent series. This suggests that the behaviour for Re(z) < 0 or Re(z) > 0 should be adopted for defining right and left functions. We say that f(z) is a right [left] function if \( f( - \infty ) = 0\;[f( + \infty ) = 0] .\) In particular, these derivatives should be used for functions such that f(z) = 0 for Re(z) < 0 and f(z) = 0 for Re(z) > 0, respectively. This is very interesting, since we conclude that the existence of the fractional derivative depends only on what happens in one half complex plane, left or right. Consider \( f(z) = z^{\beta } ,\) with \( \beta \in R ,\) and a suitable branch cut line. If \( \beta > \alpha ,\) we conclude immediately that \( D^{\alpha } [z^{\beta } ] \), defined for every \( z \in C \), does not exist, unless \( \alpha \) is a positive integer, because the summation in (2.32) is divergent.

2.5 Properties

We now present the main properties of the derivative introduced above.

Linearity The linearity property of the fractional derivative is evident from the above formulae. In fact, we have

$$ \mathop D\nolimits_{\theta }^{\alpha } \left[ {f(z) + g(z)} \right] = \mathop D\nolimits_{\theta }^{\alpha } f(z) + \mathop D\nolimits_{\theta }^{\alpha } g(z) $$
(2.36)

Causality The causality property was already mentioned above and can also be obtained easily. We must be aware that it only makes sense if we are using the forward or backward derivatives and that \( z = t \in R .\) Assume that f(t) = 0 for t < 0. We conclude immediately that \( D_{\theta }^{\alpha } \;f(t) = 0 \) for t < 0. For the anti-causal case, the situation is similar.

Scale change Let f(z) = g(az), where a is a constant. Let \( h = \left| h \right|{\text{e}}^{j\theta } \) and \( a = \left| a \right|{\text{e}}^{j\varphi } .\) From (2.32), we have:

$$ \begin{aligned} D_{\theta }^{\alpha } g(az) & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^k (_{k}^{\alpha } )g(az - kah)} }}{{h^{\alpha } }}} \\ & = a^{\alpha } \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^k (_{k}^{\alpha } )g(az - kah)} }}{{(ah)^{\alpha } }}} \\ & = a^{\alpha } D_{\theta + \varphi }^{\alpha } g(\tau )|_{\tau = az} \\ \end{aligned} $$
(2.37)

Time reversal If z is a time variable and f(z) = g(−z), we obtain from the property just deduced that:

$$ \begin{aligned} D_{\theta }^{\alpha } g( - z) & = ( - 1)^{\alpha } \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^k(_{k}^{\alpha } )g( - z + kh)} }}{{( - h)^{\alpha } }}} \\ & = ( - 1)^{\alpha }D_{\theta }^{\alpha } g(\tau )|_{\tau = - z} \\ \end{aligned} $$
(2.38)

in agreement with (2.33) and (2.34). This means that the time reversal converts the forward derivative into the backward and vice versa.

Shift invariance The derivative operator is shift invariant:

$$ D_{\theta }^{\alpha } f(z - a) = D_{\theta }^{\alpha } f(\tau )|_{\tau = z - a} $$
(2.39)

as it can be easily verified.

Derivative of a product We are going to compute the derivative of the product of two functions, \( f(t) = \varphi (t)\cdot \psi (t) \), assumed for simplicity to be defined for \( t \in R ,\) although the result we will obtain is valid for \( t \in C ,\) except over a possible branch cut line. Assume that one of them is analytic in a given region. From (2.32) and working with increments, we can write

$$ \Updelta^{\alpha } f(z) = {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )\varphi (z - kh)\psi (z - kh)} }}{{h^{\alpha } }}} $$
(2.40)

But, as

$$ \Updelta^{N} f(z) = \sum\limits_{k = 0}^{N} {( - 1)^{k} (_{k}^{N} )f(z - kh)} $$
(2.41)

we can obtain

$$ f(z - kh) = \sum\limits_{i = 0}^{k} {( - 1)^{i} } (_{i}^{k} )\Updelta^{i} f(z) $$
(2.42)

that inserted in (2.40) leads to

$$ \Updelta^{\alpha } f(z) = {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )\varphi (z - kh)\sum\nolimits_{i = 0}^{k} {( - 1)^{i} } (_{i}^{k} )\Updelta^{i} \psi (z)} }}{{h^{\alpha } }}} $$
(2.43)

that can be transformed into:

$$ \Updelta^{\alpha } f(z) = {\frac{{\sum\nolimits_{i = 0}^{\infty } {( - 1)^{i} \Updelta^{i} \psi (z)\sum\nolimits_{k = i}^{\infty } {( - 1)^{k} } (_{i}^{k} )(_{k}^{\alpha } )\varphi (z - kh)} }}{{h^{\alpha } }}} $$
(2.44)

But

$$ ( - 1)^{k + i} (_{i}^{k + i} )(_{k + i}^{\alpha } ) = \frac{{( - \alpha )_{i} }}{i!}\frac{{( - \alpha + i)_{k} }}{k!}= {\frac{{( - \alpha )_{i} }}{i!}}( - 1)^{k} (_{k}^{\alpha - i} ) $$

that substituted into the above relation gives

$$ \Updelta^{\alpha } f(z) = {\frac{{\sum\nolimits_{i = 0}^{\infty } {(_{i}^{\alpha } )\Updelta^{i} \psi (z)} \sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha - i} )\varphi (z - kh - ih)} }}{{h^{\alpha } }}} $$
(2.45)

and

$$ \Updelta^{\alpha } f(z) =\sum\limits_{i = 0}^{\infty}\frac{(_{i}^{\alpha})\Updelta^{i} \psi (z)}{h^{i}}\frac{\sum\limits_{k = 0}^{\infty} {( - 1)^{k} (_{k}^{\alpha - i} )\varphi (z - kh - ih)}}{h^{\alpha - i}}$$
(2.46)

Computing the limit as \( h \to 0 ,\) we obtain the derivative of the product:

$$ D_{\theta }^{\alpha } [\varphi (t )\psi (t )] = \sum\limits_{n = 0}^{\infty } {(_{n}^{\alpha } )} \varphi^{(n)} (t)\psi^{ (\alpha - n)} (t) $$
(2.47)

which is the generalized Leibniz rule. This formula was first obtained by Liouville [7]. We must realize that the above formula is commutative if both functions are analytic; if only one of them is analytic, it is not commutative. The noncommutativity of this rule seems natural, since we only require analyticity of one function. It is a situation very similar to the one we find when defining the product of generalized functions and its derivatives.

The deduction of (2.47) presented here differs from others found in the literature [1]. As is clear, when \( \alpha = N \in Z^{ + } \) we obtain the classic Leibniz rule. When \( \alpha = - 1 \), we obtain a very interesting formula for computing the primitive of the product of two functions, generalizing integration by parts:

$$ D^{ - 1} [\varphi (t)\psi (t)] = \sum\limits_{n = 0}^{\infty } {( - 1)^{n} } \varphi^{(n)} (t)\psi^{( - n - 1)} (t) $$
(2.48)

This formula can be useful in computing the LT or the Fourier transform (FT). We only have to choose \( \varphi (t) \) or \( \psi (t) \) equal to \( {\text{e}}^{ - st} \) in the LT case and equal to \( {\text{e}}^{ - j\omega t} \) in the FT case.

To exemplify the use of this formula, let \( g\left( t \right) = {\frac{{t^{n} }}{n!}} \cdot {\text{e}}^{at} .\) Put \( \varphi \left( t \right) = {\frac{{t^{n} }}{n!}} \) and \( \psi \left( t \right) = {\text{e}}^{at} .\) Then \( \varphi^{\left( k \right)} \left( t \right) = {\frac{{t^{n - k} }}{(n - k)!}} \) for k ≤ n, and \( \psi^{{\left( { - k - 1} \right)}} \left( t \right) = a^{ - k - 1} {\text{e}}^{at} .\) Then:

$$ D^{ - 1} g(t) = {\text{e}}^{at} \sum\limits_{k = 0}^{n} {( - 1)^{k} {\frac{{t^{n - k} }}{(n - k)!}}} a^{ - k - 1} $$

From this formula, we can compute the LT of \( {\frac{{t^{n} }}{n!}}u\left( t \right) .\) If we put \( \varphi \left( t \right) = f\left( t \right) \) and \( \psi \left( t \right) = 1 ,\) \( t \in R ,\) we have:

$$ D^{ - 1} [f(t)] = \sum\limits_{n = 0}^{\infty} {( - 1)^{n} f^{(n)} (t){\frac{{t^{n + 1} }}{(n + 1)!}}} $$
(2.49)

similar to the McLaurin formula.
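The worked example above can be verified symbolically: differentiating the anti-derivative produced by the truncated series (2.48) must return the original product. A small sympy sketch, with n fixed to 3 for concreteness (variable names are ours):

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
n = 3                                     # any fixed positive integer

g = t**n / sp.factorial(n) * sp.exp(a * t)
# Anti-derivative from (2.48); the series stops at k = n since phi^(k) = 0 beyond
G = sp.exp(a * t) * sum((-1)**k * t**(n - k) / sp.factorial(n - k) * a**(-k - 1)
                        for k in range(n + 1))
# Differentiating G recovers g, confirming the formula
print(sp.simplify(sp.diff(G, t) - g))     # prints 0
```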

Integration by parts. The so-called integration by parts relates both causal and anti-causal derivatives and can be stated as:

$$ \int\limits_{ - \infty }^{ + \infty } {g(t)D_{f}^{\alpha } } f(t)\,{\text{d}}t = ( - 1)^{\alpha } \int\limits_{ - \infty }^{ + \infty } {f(t)D_{b}^{\alpha } g(t)\,} {\text{d}}t $$
(2.50)

where we assume that both integrals exist. To obtain this formula, we only have to use (2.33) inside the integral and perform a variable change

$$ \int\limits_{ - \infty }^{ + \infty } {\mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )f(t - kh)g(t)} }}{{h^{\alpha } }}}\,} {\text{d}}t = \int\limits_{ - \infty }^{ + \infty } {\mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} (_{k}^{\alpha } )g(t + kh)f(t)} }}{{h^{\alpha } }}}\,{\text{d}}t} $$

leading immediately to (2.50) if we use (2.32). This result is slightly different from the one found in the current literature, due to our definition of the backward derivative.

2.6 Group Structure of the Fractional Derivative

2.6.1 Additivity and Commutativity of the Orders

Additivity We are going to apply (2.32) twice for two orders. We have

$$ D_{\theta }^{\alpha } [D_{\theta }^{\beta } f(t)] = D_{\theta }^{\beta } [D_{\theta }^{\alpha } f(t)] = D_{\theta }^{\alpha + \beta } f(t) $$
(2.51)

To prove this statement we start from (2.32) and write:

$$ \begin{aligned} D_{\theta }^{\alpha } \left[ {D_{\theta }^{\beta } f(t)} \right] & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {(_{k}^{\alpha } )( - 1)^{k} \left[ {\sum\nolimits_{n = 0}^{\infty } {(_{n}^{\beta } )( - 1)^{n} f[t - (k + n)h]} } \right]} }}{{h^{\alpha } h^{\beta } }}} \\ & = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{n = 0}^{\infty } {(_{n}^{\beta } )( - 1)^{n} \left[ {\sum\nolimits_{k = 0}^{\infty } {(_{k}^{\alpha } )( - 1)^{k} f[t - (k + n)h]} } \right]} }}{{h^{\alpha } h^{\beta } }}} \\ \end{aligned} $$

for any \( \alpha ,\beta \in R ,\) or even \( \in C .\) With a change of summation variable (m = k + n), we obtain:

$$ D_{\theta }^{\alpha } \left[ {D_{\theta }^{\beta } f(t)} \right] = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{m = 0}^{\infty } {\left[ {\sum\nolimits_{n = 0}^{m} {(_{m - n}^{\alpha } )(_{n}^{\beta } )} } \right]} ( - 1)^{m} f(t - mh)}}{{h^{\alpha + \beta } }}} $$

Since, according to Samko et al. [1],

$$ \sum\limits_{n = 0}^{m} {(_{m - n}^{\alpha } )(_{n}^{\beta } )} = (_{m}^{\alpha + \beta } ), $$
(2.52)
$$ D_{\theta }^{\alpha } \left[ {D_{\theta }^{\beta } f(t)} \right] = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{m = 0}^{\infty } {(_{m}^{\alpha + \beta } )} ( - 1)^{m} f(t - mh)}}{{h^{\alpha + \beta } }}} = D_{\theta }^{\alpha + \beta } f(t) $$
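Identity (2.52) is Vandermonde's convolution, valid for arbitrary real (or complex) orders; a quick numerical confirmation for fractional values (the helper name binom is ours):

```python
def binom(a, k):
    """Generalized binomial coefficient C(a, k) for real a."""
    c = 1.0
    for i in range(k):
        c *= (a - i) / (i + 1)
    return c

alpha, beta, m = 0.5, -1.3, 7
lhs = sum(binom(alpha, m - n) * binom(beta, n) for n in range(m + 1))
print(lhs, binom(alpha + beta, m))   # the two values agree
```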

Associativity This property comes easily from the above results. In fact, it is easy to show that

$$ D_{\theta }^{\gamma } \left[ {D_{\theta }^{\alpha + \beta } f(t)} \right] = D_{\theta }^{\gamma + \alpha + \beta } f(t) = D_{\theta }^{\alpha + \beta + \gamma } f(t) = D_{\theta }^{\alpha } \left[ {D_{\theta }^{\beta + \gamma } f(t)} \right] $$
(2.53)

Neutral element

If we put \( \beta = - \alpha \) in (2.51) we obtain:

$$ D_{\theta }^{\alpha } \left[ {D_{\theta }^{ - \alpha } f(t)} \right] = D_{\theta }^{0}\; f(t) = f(t) $$
(2.54)

or using it again

$$ D_{\theta }^{ - \alpha } \left[ {D_{\theta }^{\alpha } f(t)} \right] = D_{\theta }^{0}\; f(t) = f(t) $$
(2.55)

This is very important because it establishes the existence of the inverse.

Inverse element From the last result we conclude that there is always an inverse element: for every \( \alpha \) order derivative, there is always a \( - \alpha \) order derivative. This seems to contradict our knowledge from classic calculus, where the Nth order derivative has N primitives. To understand the situation we must note that the inverse is given by (2.32), and it does not provide any primitivation constant. This forces us to be consistent and careful with the language used. So, when \( \alpha \) is positive we will speak of a derivative. When \( \alpha \) is negative, we will use the term anti-derivative (not primitive or integral). This clarifies the situation, solves the problem created by Liouville and Riemann when they introduced the complementary polynomials, and shows how to obtain the “proper primitives” of Krempl [8]. The problem can be well understood if we realize that currently we do not have a direct way of computing integrals: we reverse the rules of differentiation. This fact leads us to include the primitivation constants, which are inserted artificially. In fact, with autonomous rules for primitivation that did not go through the rules of differentiation, the primitivation constant would not appear. This is what happens when we use the GL derivative (2.32) with negative orders. Also, when performing a numerical computation, we do not have to care about such a constant. Besides, we must note that this has no relation with the initial conditions that appear in the solution of differential equations. Later we will return to this problem.
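The additivity (2.51) and the existence of the inverse can be observed numerically. In the sketch below (our helper, with the same truncation as before) we apply \( D^{1/2} \) to the half-derivative of the causal ramp, \( \sqrt{t}/\Upgamma (1.5)\,u(t) \); the result should be the order one derivative of \( t\,u(t) \), i.e. 1 for t > 0:

```python
import math

def gl_forward(f, t, alpha, h=1e-3):
    """Truncated forward GL derivative (2.32)/(2.33) of a causal function."""
    c, s = 1.0, f(t)
    for k in range(1, int(t / h) + 1):
        c *= (k - 1 - alpha) / k
        s += c * f(t - k * h)
    return s / h**alpha

# D^{1/2} of the half-derivative of t u(t): should give D^1 [t u(t)] = 1
half = lambda x: math.sqrt(x) / math.gamma(1.5) if x > 0 else 0.0
print(gl_forward(half, 2.0, 0.5))   # approx 1.0
```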

2.7 Simple Examples

2.7.1 The Exponential

Let us apply the above definitions to the function f(z) = \( {\text{e}}^{sz} .\) The convergence of (2.33) depends on s and on h. Let h > 0; the series in (2.33) becomes

$$ {\text{e}}^{sz} \sum\limits_{k = 0}^{\infty } {( - 1)^k\left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right){\text{e}}^{ - ksh} } $$

As it is well known, the binomial series

$$ \sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} {\text{e}}^{ - ksh} $$

is convergent to the main branch of

$$ g(s) = (1-{\text{e}}^{ - sh} )^{\alpha } $$

provided that \( |{\text{e}}^{ - sh} | < 1 \), that is, if Re(s) > 0. This means that the branch cut line of g(s) must be in the left half complex plane. Then

$$ D_{f}^{\alpha } f(z) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{(1 - {\text{e}}^{ - sh} )^{\alpha } }}{{h^{\alpha } }}}{\text{e}}^{sz} = \mathop {\lim }\limits_{h \to 0 + } \left( {{\frac{{1 - {\text{e}}^{ - sh} }}{h}}} \right)^{\alpha } {\text{e}}^{sz} = |s|^{\alpha } {\text{e}}^{j\theta \alpha } {\text{e}}^{sz} $$
(2.56)

iff \( \theta = \arg (s) \in ( - \pi /2,\pi /2) ,\) which corresponds to working with the principal branch of the power function, \( (\cdot)^{\alpha } \), and assuming a branch cut line in the left half complex plane.
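Numerically, the binomial series above converges for Re(s) > 0 and its limit matches \( s^{\alpha } \). A sketch with an assumed test value of s (the function name is ours):

```python
import cmath

def gl_exp_factor(s, alpha, h, n_terms=20_000):
    """Partial sum of the series (1 - exp(-s h))^alpha / h^alpha from (2.56)."""
    c, acc = 1.0, 1.0 + 0j
    for k in range(1, n_terms):
        c *= (k - 1 - alpha) / k           # (-1)^k C(alpha, k)
        acc += c * cmath.exp(-s * k * h)
    return acc / h**alpha

s, alpha = 1.5 + 2.0j, 0.5                 # Re(s) > 0, so the series converges
print(gl_exp_factor(s, alpha, h=1e-3))     # approx s**alpha
print(s**alpha)                            # principal branch power
```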

Now, consider the series in (2.34) with f(z) = esz. Proceeding as above, we obtain another binomial series:

$$ \sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right){\text{e}}^{ksh} } $$

that is convergent to the main branch of

$$ f(s) = (1-{\text{e}}^{sh} )^{\alpha } $$

provided that Re(s) < 0. This means that the branch cut line of f(s) must be in the right half complex plane. We will assume that we work on the principal branch and that f(s) is continuous from above. Here we must remark that in \( (\cdot)^{\alpha } \) we are again on the principal branch, but we are assuming a branch cut line in the right half complex plane.

We obtain directly:

$$ D_{f}^{\alpha } f(z) = |s|^{\alpha } {\text{e}}^{j\theta \alpha } {\text{e}}^{sz} $$

with \( |\theta | < \pi /2 ,\) and

$$ D_{b}^{\alpha } f(z) = |s|^{\alpha } {\text{e}}^{j\theta \alpha } {\text{e}}^{sz} $$

valid iff \( \theta \in ( \pi /2,3\pi /2) .\)

We must be careful in using the above results. At first glance, we could be led to use them for computing the derivatives of functions like sin(z), cos(z), sinh(z) and cosh(z). But if we keep our reasoning in mind, we conclude immediately that those functions do not have finite derivatives if \( z \in C .\) In fact, they use simultaneously the exponentials \( {\text{e}}^{z} \) and \( {\text{e}}^{ - z} \), whose derivatives cannot exist simultaneously, as we have just seen.

2.7.2 The Constant Function

We start by computing the fractional derivative of the constant function. Let f(t) = 1 for every \( t \in R \) and \( \alpha \in R\backslash Z .\) From (2.32) we have:

$$ D_{\theta }^{\alpha } f(t) = \mathop {\lim }\limits_{h \to 0} {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} } \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)}}{{h^{\alpha } }}} = \left\{ {\begin{array}{*{20}c} 0 & {\alpha > 0} \\ \infty & {\alpha < 0} \\ \end{array} } \right. $$
(2.57)

To prove it, we are going to consider the partial sum of the series

$$ \sum\limits_{k = 0}^{n} {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} = ( - 1)^{n} \left( {\begin{array}{*{20}c} {\alpha - 1} \\ n \\ \end{array} } \right) = \frac{1}{\Upgamma (1 - \alpha )}\frac{\Upgamma ( - \alpha + n + 1)}{\Upgamma (n + 1)} $$

As \( n \to \infty \), the quotient of the two gamma functions has a known limiting behaviour [1] that allows us to show that

$$ \frac{1}{\Upgamma (1 - \alpha )}\frac{\Upgamma ( - \alpha + n + 1)}{\Upgamma (n + 1)} \to \frac{1}{\Upgamma (1 - \alpha )}\frac{1}{{n^{\alpha } }} $$

leading to the limits shown in (2.57). So, the \( \alpha \) order fractional derivative of 1 is the null function. If \( \alpha < 0 \), the limit is infinite, so there is no fractional “primitive” of a constant. However, this does not happen if \( \alpha \) is a negative integer. To exemplify, consider the case \( \alpha = - 1 .\) From (2.32), we have:

$$ D^{ - 1} 1 = \mathop {\lim }\limits_{L \to \infty } \sum\limits_{n = 0}^{L} {t/L} $$
(2.58)

where L is the integer part of t/h. We have

$$ D^{ - 1} 1 = t $$
(2.59)

It should be stressed that the “primitivation constant” does not appear, as expected. This means that, when working in the context defined by (2.32), two functions with the same fractional derivative are equal.
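The limiting behaviour used in the proof is easy to observe: the partial sums equal \( ( - 1)^{n} (_{n}^{\alpha - 1}) \sim n^{-\alpha }/\Upgamma (1 - \alpha ) \), decaying for \( \alpha > 0 \) and growing without bound for \( \alpha < 0 \). A short sketch:

```python
def partial_sum(alpha, n):
    """Partial sum of sum_k (-1)^k C(alpha, k), k = 0..n."""
    c, s = 1.0, 1.0
    for k in range(1, n + 1):
        c *= (k - 1 - alpha) / k
        s += c
    return s

for n in (10, 100, 1000, 10000):
    # alpha = 0.5: tends to 0 (D^{1/2} 1 = 0); alpha = -0.5: diverges
    print(n, partial_sum(0.5, n), partial_sum(-0.5, n))
```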

The example we just treated allows us to obtain an interesting result:

There are no fractional derivatives of the power function defined in R (or C).

In fact, suppose that there is a fractional derivative of \( t^{n} \), \( t \in R \), \( n \in Z^{ + } .\) We must have:

$$ D^{\alpha } t^{n} = n!\,D^{\alpha } D^{ - n} 1 = n!\,D^{ - n} D^{\alpha } 1 $$

This means that we must be careful when trying to generalise the Taylor series. We also conclude that we cannot compute the fractional derivative of a function by directly using its Taylor expansion. The same result could be obtained directly from (2.32): it is enough to remark that a power function tends to infinity when the argument tends to \( - \infty .\) The Taylor expansions can be used provided that we consider the causal (right) or anti-causal (left) parts only.

2.7.3 The LT of the Fractional Derivative

The above results can be used to generalize a well known property of the Laplace transform. If we return to Eq. 2.33 and apply the bilateral Laplace transform

$$ F(s) = \int\limits_{ - \infty }^{ + \infty } {f(t){\text{e}}^{ - st} \,{\text{d}}t} $$
(2.60)

to both sides and use the result in (2.56), we conclude that:

$$ {\text{LT}}\left[ {D_{f}^{\alpha } f(t)} \right] = s^{\alpha } F(s)\quad {\text{for}}\,\text{Re} (s) > 0 $$
(2.61)

where for \( s^{\alpha } \) we assume the principal branch and a cut line in the left half plane. With Eq. 2.34 we obtain:

$$ {\text{LT}}\left[ {D_{b}^{\alpha } f(t)} \right] = s^{\alpha } F(s)\quad {\text{for}}\;\text{Re} (s) < 0 $$
(2.62)

where now the branch cut line is in the right half plane. These results have a system interpretation:

There are two systems (differintegrators) with the same expression for the transfer function H(s) = s ?, but with different regions of convergence: one is causal; the other is anti-causal.

This must be contrasted with the classic integer order case, as mentioned before. We will not compute the impulse responses here; it will be done later.

2.7.4 The Complex Sinusoid

Now, we are going to see if the above results can be extended to functions with Fourier transform. We note that the multivalued expression H(s) = \( s^{\alpha } \) becomes an analytic function (as soon as we fix a branch cut line) in the whole complex plane except on the branch cut line. The computation of the derivative of functions with Fourier transform depends on the way used to define \( (j\omega )^{\alpha } .\) Assume that H(s) is a transfer function with region of convergence defined by Re(s) > 0. This means that we have to choose a branch cut line in the left half complex plane. To obtain the correct definition of \( (j\omega )^{\alpha } \) we have to perform the limit as \( s \to j\omega \) from the right. We have

$$ (j\omega )^{\alpha } = |\omega |^{\alpha } \cdot \left\{ {\begin{array}{*{20}c} {{\text{e}}^{j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega > 0} \hfill \\ {{\text{e}}^{ - j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega < 0} \hfill \\ \end{array} } \right. $$
(2.63)

If the region of convergence is given by Re(s) < 0, we have to choose a branch cut line in the right half complex plane. If we perform the limit as \( s \to j\omega \) from the left, we conclude that

$$ (j\omega )^{\alpha } = |\omega |^{\alpha } \cdot \left\{ {\begin{array}{*{20}c} {{\text{e}}^{j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega > 0} \hfill \\ {{\text{e}}^{j3\alpha \pi /2} } \hfill & {{\text{if}}\;\omega < 0} \hfill \\ \end{array} } \right. $$
(2.64)

We are going to see what consequences Eqs. 2.63 and 2.64 impose. They mean that the forward and backward derivatives of a cisoid are given by

$$ D_{f}^{\alpha } {\text{e}}^{j\omega t} = {\text{e}}^{j\omega t} |\omega |^{\alpha } \cdot \left\{ {\begin{array}{*{20}c} {{\text{e}}^{j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega > 0} \hfill \\ {{\text{e}}^{ - j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega < 0} \hfill \\ \end{array} } \right. $$
(2.65)

and

$$ D_{b}^{\alpha } {\text{e}}^{j\omega t} = {\text{e}}^{j\omega t} |\omega |^{\alpha } \cdot \left\{ {\begin{array}{*{20}c} {{\text{e}}^{j\alpha \pi /2} } \hfill & {{\text{if}}\;\omega > 0} \hfill \\ {{\text{e}}^{j3\alpha \pi /2} } \hfill & {{\text{if}}\;\omega < 0} \hfill \\ \end{array} } \right. $$
(2.66)

As the cisoid can be considered as having symmetric behaviour, in the sense that the segment for t < 0 is indistinguishable from the corresponding one for t > 0, we might expect the forward and backward derivatives to be equal. This does not happen, and the result (2.66) is somewhat strange. To see the consequences of this result, assume that we want to compute the derivative of a function defined by:

$$ f(t) = {\frac{1}{2\pi }}\int\limits_{ - \infty }^{ + \infty } {F(j\omega ){\text{e}}^{j\omega t} \,{\text{d}}\omega } $$
(2.67)

we obtain two different inverse transforms, meaning that we have different transforms with the same domain, and only one of them corresponds to the generalization of a classic property of the Fourier transform: the one obtained with the forward derivative. To reinforce this point, let us try to compute the derivative of a real sinusoid: \( x(t) = \cos (\omega_{0} t) .\) From (2.65) we obtain:

$$ D_{f}^{\alpha } \cos (\omega_{0} t) = |\omega_{0} |^{\alpha } \cos (\omega_{0} t + \alpha \pi /2) $$
(2.68)

while, from (2.66) we have:

$$ D_{b}^{\alpha } \cos (\omega_{0} t) = |\omega_{0} |^{\alpha } [{\text{e}}^{j\omega_{0} t} {\text{e}}^{j\alpha \pi /2} + {\text{e}}^{ - j\omega_{0} t} {\text{e}}^{j3\alpha \pi /2} ]/2 $$
(2.69)

that is in general a complex function.

These results force us to conclude that:

To compute the fractional derivative of a sinusoid we have to use only the forward derivative.

The Fourier transform of the fractional derivative is computed from the Laplace transform of the forward derivative by computing the limit as s goes to \( j\omega .\)
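This conclusion can be checked numerically: truncating the forward GL series (2.33) for \( \cos (\omega_{0} t) \) reproduces (2.68). A sketch with assumed values of t, \( \omega_{0} \), and \( \alpha \) (the helper name is ours):

```python
import math

def gl_forward_cos(t, w0, alpha, h=1e-3, n_terms=200_000):
    """Truncated forward GL derivative (2.33) applied to cos(w0 t);
    the series converges slowly, hence the large number of terms."""
    c, s = 1.0, math.cos(w0 * t)
    for k in range(1, n_terms):
        c *= (k - 1 - alpha) / k
        s += c * math.cos(w0 * (t - k * h))
    return s / h**alpha

t, w0, alpha = 0.3, 1.0, 0.5
print(gl_forward_cos(t, w0, alpha))                       # numerical value
print(abs(w0)**alpha * math.cos(w0*t + alpha*math.pi/2))  # formula (2.68)
```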

We may ask what happens with the frequency response of a given fractional linear system. From the conclusions we have just presented, we can say that, for a causal fractional linear system with transfer function H(s), the frequency response must be computed from:

$$ H(j\omega ) = \mathop {\lim }\limits_{s \to j\omega } H(s) $$
(2.70)

This is in agreement with other known results. For example, if the input to the system is white noise, with unit power, the output spectrum is given by:

$$ S(\omega ) = \mathop {\lim }\limits_{s \to j\omega } H(s)H( - s) $$
(2.71)

2.8 Starting from the Transfer Function

To obtain the Grünwald–Letnikov fractional derivative we used a heuristic approach, generalising the integer order derivative defined through the incremental ratio. Here we will show that we can also obtain it from the Laplace transform. To do so, it is enough to start from the transfer function H(s) = \( s^{\alpha } \) and express it as a limit, as shown in Fig. 2.1.

Fig. 2.1 Obtaining the GL derivatives from the transfer function \( s^{\alpha } \)

It is not hard to see that \( s^{\alpha } \) can be considered as the limit, when \( h \in R^{ + } \) tends to zero, of the left hand sides of the following expressions:

$$ {\frac{{(1 - {\text{e}}^{ - sh} )^{\alpha } }}{{h^{\alpha } }}} = {\frac{1}{{h^{\alpha } }}}\sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} {\text{e}}^{ - shk} \quad \text{Re} (s) > 0 $$
(2.72)

and

$$ {\frac{{({\text{e}}^{ sh} - 1)^{\alpha } }}{{h^{\alpha } }}} = {\frac{{( - 1)^{\alpha } }}{{h^{\alpha } }}}\sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} {\text{e}}^{shk} \quad \quad \text{Re} (s) < 0 $$
(2.73)

The right hand sides converge for Re(s) > 0 and Re(s) < 0, respectively. This means that the first leads to a causal derivative, while the second leads to the anti-causal one. These expressions, when inverted back into the time domain, lead, respectively, to

$$ d_{f}^{(\alpha )} (t) = \mathop {\lim }\limits_{h \to 0 + } {\frac{1}{{h^{\alpha } }}}\sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} \delta (t - kh) $$
(2.74)

and

$$ d_{b}^{(\alpha )} (t) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{( - 1)^{\alpha } }}{{h^{\alpha } }}}\sum\limits_{k = 0}^{\infty } {( - 1)^{k} \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)} \delta (t + kh) $$
(2.75)

Let f(t) be a bounded function and \( \alpha > 0 .\) The convolution of (2.74) and (2.75) with f(t) leads to the Grünwald–Letnikov forward and backward derivatives.
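In discrete time, (2.74) is just an FIR-like filter whose taps are the binomial weights divided by \( h^{\alpha } \). A minimal sketch (our own helper names) that applies this filter to samples of \( t^{2} u(t) \) and compares with the exact half-derivative \( 2t^{1.5}/\Upgamma (2.5) \), anticipated from (2.76) below:

```python
import math

def gl_taps(alpha, h, n_taps):
    """Sampled impulse response of the causal differintegrator (2.74):
    w_k = (-1)^k C(alpha, k) / h**alpha."""
    w = [1.0]
    for k in range(1, n_taps):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return [v / h**alpha for v in w]

h, alpha, n = 1e-3, 0.5, 3000
w = gl_taps(alpha, h, n)
f = [(k * h)**2 for k in range(n)]               # samples of t^2 u(t)
t = (n - 1) * h
y = sum(w[k] * f[n - 1 - k] for k in range(n))   # causal convolution at t
print(y, 2 * t**1.5 / math.gamma(2.5))           # both approx 7.8
```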

2.9 The Fractional Derivative of Generalized Functions

The results obtained above are valid for analytic functions. Here we will combine the theory developed there with distribution theory to generalize those results to other functions. We will consider functions defined on R and, for simplicity, will treat only the forward derivative case, omitting the subscript. However, we will deal with functions that are not only non-analytic but may even be discontinuous. This leads us to the theory of distributions (generalized functions). We will use the results of the axiomatic theory due to its simplicity. Combining both sets of results, we will show that the main interesting properties of the above derivative obtained for analytic functions remain valid in a distributional context.

The above properties are valid provided that all the involved derivatives exist. This may not happen in many situations, for example, for the power function, causal or not. As these functions are very important, we will consider them in detail. Meanwhile, let us see how we can enlarge the validity of the above formulae. Let us consider formula (2.33), since the others can be treated directly from it.

Consider a function f(t) such that \( D^{\beta } f(t) \) exists but is not continuous. In principle, we cannot assure that we can apply (2.33) to obtain \( D^{\alpha + \beta } f(t) .\) To solve the problem, we will use a suitable definition of distribution. Due to its simplicity, we adopt here the definition underlying the axiomatic theory of distributions [9–12]. It states that a distribution is an integer order derivative of a continuous function.

Consider then that f(t) = \( D^{n} g(t) \), where n is a positive integer and g(t) is continuous and has a continuous fractional derivative of order \( \alpha + \beta .\) In this case, we can write:

$$ D^{\alpha + \beta } f(t) = D^{\alpha + \beta } D^{n} g(t) = D^{n} D^{\alpha + \beta } g(t) $$

So, we obtain the desired derivative by computing an integer order derivative of a fractional derivative. The other properties are consequences of this one. Some examples will clarify the situation.

2.9.1 The Causal Power Function

The results obtained in the previous section allow us to obtain the derivative of any order of the function p(t) = \( t^{\beta } u(t) \), with \( \beta > 0 .\) It is a continuous function, and thus indefinitely (integer order) differentiable in the distributional sense. To compute the fractional derivative of p(t), the easiest way is to use the Laplace transform (LT). As is well known, the LT of p(t) is \( P(s) = {\frac{\Upgamma (\beta + 1)}{{s^{\beta + 1} }}} ,\) for Re(s) > 0. The transform of the fractional derivative of order \( \alpha \) is given by \( s^{\alpha } {\frac{\Upgamma (\beta + 1)}{{s^{\beta + 1} }}} .\) So,

$$ D_{f}^{\alpha } t^{\beta } u(t) = {\frac{\Upgamma (\beta + 1)}{\Upgamma (\beta - \alpha + 1)}}t^{\beta - \alpha } u(t) $$
(2.76)

which generalizes the integer order formula, for \( \alpha ,\beta \in R^{ + } \) and \( \beta > \alpha \) [1, 13, 14]. In fact, if \( \alpha = N ,\) we obtain

$$ D_{f}^{N} t^{\beta } u(t) = (\beta )_{N} t^{\beta - N} u(t) $$
(2.77)

which is the result we obtain by N successive order one derivations. To obtain it, we use the rule for the derivative of the product: the derivative of u(t) is \( \delta (t) \), which appears multiplied by a power that is zero at t = 0 [9]

$$ Dt^{\beta } u(t) = \beta t^{\beta - 1} u(t) + t^{\beta } \delta (t) = \beta t^{\beta - 1} u(t) $$

Equation 2.76 can be considered valid for \( \beta - \alpha = - 1 \) provided that we write

$$ D_{f}^{\alpha } t^{\alpha - 1} u(t) = \delta (t) $$
(2.78)

If \( \beta = N \in Z^{ + } ,\) we have:

$$ D_{f}^{\alpha } [t^{N} u(t)] = {\frac{N!}{\Upgamma (N + 1 - \alpha )}}[t^{N - \alpha } u(t)] $$
(2.79)

if N > \( \alpha .\) This result shows that the derivative of a given causal function can be computed from the causal McLaurin series by computing the derivatives of the series terms. This means that we can obtain the derivative of the causal exponential \( {\text{e}}^{at} u(t) \) by computing the derivative of each term of its McLaurin series. However, the resulting series is not easily related to the exponential. This will be done later.

Let us return to our initial objective: to enlarge the validity of (2.76). With the assumed values for \( \alpha \) and \( \beta \), (2.76) represents a continuous function. So, we can compute the Nth order derivative. Again, products of powers and \( \delta (t) \) will appear, but now the powers assume an infinite value at t = 0. We remove this term to obtain what is normally called the finite part of the distribution. If \( \gamma < 0 \), we have

$$ Dt^{\gamma } u(t) = \gamma t^{\gamma - 1} u(t) + t^{\gamma } \delta (t) $$

The second term on the right is removed to give us the finite part [9, 12]

$$ F_{p} [Dt^{\gamma } u(t)] = \gamma t^{\gamma - 1} u(t) $$

In the following, we will assume that we are working with the finite part and will omit the symbol \( F_{p} .\) With these considerations we conclude that (2.76) remains valid provided that \( \alpha \in R \) and \( \beta \in R\backslash Z^{ - } .\) In particular, we have:

$$ D_{f}^{\alpha } u(t) = {\frac{1}{\Upgamma (1 - \alpha )}}t^{ - \alpha } u(t) $$
(2.80)

To confirm the validity of the procedure, we are going to obtain this result directly from (2.33). For t < 0, D^α u(t) is zero; for t > 0, we have

$$ D_{f}^{\alpha } u(t) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{L} {( - 1)^{k} } \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)}}{{h^{\alpha } }}} $$
(2.81)

where L is the integer part of t/h: \( L = \left\lfloor {t/h} \right\rfloor \), and α is a positive non-integer real (otherwise it leads to δ(t) and its derivatives). We are going to make some manipulations to obtain the required result. If h is very small, L ≈ t/h and we have:

$$ D_{f}^{\alpha } u(t) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{L} {{\frac{{( - \alpha )_{k} }}{k!}}} }}{{h^{\alpha } }}} = \mathop {\lim }\limits_{L \to \infty } t^{ - \alpha } L^{\alpha } \sum\limits_{k = 0}^{L} {{\frac{{( - \alpha )_{k} }}{k!}}} $$
(2.82)

To go further, we use an interesting property of the Gauss hypergeometric function ([15]; Wolfram.com):

$$ _{2} F_{1} ( - n,b, - m,z) = \sum\limits_{0}^{n} {{\frac{{( - n)_{k} (b)_{k} z^{k} }}{{( - m)_{k} k!}}}} $$

where n, m ∈ Z^+, with m ≥ n. Putting z = 1, b = −α, and m = n, we can write:

$$ D_{f}^{\alpha } u(t) = \mathop {\lim }\limits_{L \to \infty } t^{ - \alpha } L^{\alpha } \,{}_{2}F_{1} ( - L, - \alpha , - L,1) $$
(2.83)

But

$$ _{2} F_{1} (a,b,c,1) = {\frac{\Upgamma (c)\Upgamma (c - a - b)}{\Upgamma (c - a)\Upgamma (c - b)}} $$

if Re(c − a − b) > 0. The application of this formula leads to an indeterminate form

$$ {}_{2}F_{1} ( - L, - \alpha , - L,1) = {\frac{\Upgamma ( - L)\Upgamma (\alpha )}{\Upgamma (\alpha - L)\Upgamma (0)}}\quad \alpha > 0 $$

However, attending to the residues of the gamma function at its poles, we can write:

$$ {\frac{\Upgamma ( - L)}{\Upgamma (0)}} = {\frac{{( - 1)^{L} }}{L!}} $$

and

$$ {}_{2}F_{1} ( - L, - \alpha , - L,1) = \frac{{( - 1)^{L} }}{L!}\frac{\Upgamma (\alpha )}{\Upgamma (\alpha - L)} = {\frac{{(1 - \alpha )_{L} }}{L!}} = {\frac{{( - \alpha )_{L + 1} }}{ - \alpha L!}} $$
(2.84)

With this result we can write (2.83) as

$$ D_{f}^{\alpha } u(t) = t^{ - \alpha } \mathop {\lim }\limits_{L \to \infty } L^{\alpha } {\frac{{( - \alpha )_{L + 1} }}{ - \alpha L!}} $$
(2.85)

On the right-hand side we recognize the gamma function [16], leading to the expected result. If α is a negative integer, −N, we know that

$$ D_{f}^{ - N} u(t) = {\frac{{t^{N} }}{N!}}u(t) $$
(2.86)

and attending to (2.86) we conclude that (2.76) is valid for any α ∈ R. Equation 2.80 allows us to obtain the interesting result

$$ D_{f}^{\alpha } \delta (t) = {\frac{{t^{ - \alpha - 1} }}{\Upgamma ( - \alpha )}}u(t) $$
(2.87)

valid for positive non-integer orders. In terms of linear system theory, (2.87) tells us that the fractional forward differintegrator (a current terminology) is a linear system with impulse response equal to the right-hand side. As the output is given by the convolution of the input and the impulse response, we obtain [6, 17]

$$ D_{f}^{\alpha } f(t) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{ - \infty }^{t} {f(\tau )(t - \tau )^{ - \alpha - 1} \,{\text{d}}\tau } $$
(2.88)

that we will call the forward Liouville derivative.
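The limit argument (2.82)–(2.85) can also be checked directly. In the sketch below (ours, not the author's), the ratio (1 − α)_L / L! = (−α)_{L+1}/(−α L!) is accumulated multiplicatively, and L^α times it is seen to approach 1/Γ(1 − α), confirming (2.80):

```python
from math import gamma

alpha = 0.3
for L in (10, 100, 1000, 10000):
    ratio = 1.0                          # accumulates (1 - alpha)_L / L!
    for k in range(1, L + 1):
        ratio *= (k - alpha) / k
    print(L, L ** alpha * ratio)         # slowly approaches the limit printed below
print("1/Gamma(1 - alpha) =", 1 / gamma(1 - alpha))
```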

Using (2.33) and (2.87), we can obtain a very curious result

$$ {\frac{{t^{ - \alpha - 1} }}{\Upgamma ( - \alpha )}}u(t) = \mathop {\lim }\limits_{h \to 0 + } {\frac{{\sum\nolimits_{k = 0}^{\infty } {( - 1)^{k} } \left( {\begin{array}{*{20}c} \alpha \\ k \\ \end{array} } \right)\delta (t - kh)}}{{h^{\alpha } }}} $$
(2.89)

meaning that the power function is obtained by joining infinitely many impulses with infinitesimal amplitudes modulated by the binomial coefficients.

A similar procedure would lead us to obtain the impulse response of the backward differintegrator that is given by [6, 17]

$$ D_{b}^{\alpha } \delta (t) = - {\frac{{t^{ - \alpha - 1} u( - t)}}{\Upgamma ( - \alpha )}} $$
(2.90)

that allows us to obtain the backward Liouville derivative

$$ D_{b}^{\alpha } f(t) = - {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{t}^{\infty } {f(\tau ) \cdot (t - \tau )^{ - \alpha - 1} \,{\text{d}}\tau } $$
(2.91)

With a change of variable it leads to

$$ D_{b}^{\alpha } f(t) = - {\frac{{( - 1)^{ - \alpha } }}{\Upgamma ( - \alpha )}}\int\limits_{0}^{\infty } {f(t + \tau ) \cdot \tau^{ - \alpha - 1} \,{\text{d}}\tau } $$
(2.92)

which shows the anti-causal character: it depends on the future values of the function.

Both integral formulations were introduced in exactly this format by Liouville. Unfortunately, in the common literature the factor (−1)^{−α} in (2.92) has been removed, and the result is called the Weyl derivative [18]. Although the above results were obtained for functions with Laplace transform, their validity can be extended to other functions.

Integrals (2.88) and (2.91) are not very useful in practice, since they are hypersingular integrals [1, 13].
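For negative α, however, (2.88) is an ordinary convergent integral: a fractional integral of order −α. A minimal check, assuming SciPy is available (our sketch, not the source's), applies it to f = u(t) and compares with (2.80) continued to α < 0:

```python
from math import gamma
from scipy.integrate import quad

alpha, t = -0.5, 2.0   # negative order: (2.88) converges despite the endpoint singularity

# Forward Liouville integral (2.88) for f = u(t); the lower limit becomes 0.
# quad copes with the integrable singularity at tau = t.
val, _ = quad(lambda tau: (t - tau) ** (-alpha - 1), 0, t)
val /= gamma(-alpha)

exact = t ** (-alpha) / gamma(1 - alpha)   # Eq. 2.80 with alpha < 0
print(val, exact)                          # both equal 2*sqrt(2/pi) here
```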

2.9.2 The Causal Exponential

The derivative of the causal exponential can be obtained from the Maclaurin series, as we said above. We have:

$$ D^{\alpha } [{\text{e}}^{at} u(t)] = \sum\limits_{k = 0}^{\infty } {{\frac{{a^{k} t^{k - \alpha } }}{\Upgamma (k - \alpha + 1)}}u(t)} $$
(2.93)

However, this function is not very interesting, since it does not represent the fractional generalization of the causal exponential. To obtain it, put β = nα in (2.76) and rewrite it in the format:

$$ D^{\alpha } {\frac{{t^{n\alpha } u(t)}}{\Upgamma (n\alpha + 1)}} = {\frac{1}{\Upgamma ((n - 1)\alpha + 1)}}t^{(n - 1)\alpha } u(t) $$
(2.94)

We are led to the Mittag–Leffler function:

$$ g(t) = \sum\limits_{n = 0}^{\infty } {{\frac{{t^{n\alpha } }}{\Upgamma (n\alpha + 1)}}u(t)} $$
(2.95)

It is not hard to show, using (2.94), that

$$ D^{\alpha } g(t) = \sum\limits_{n = 0}^{\infty } {{\frac{{t^{n\alpha } }}{\Upgamma (n\alpha + 1)}} \cdot u(t) + {\frac{{t^{ - \alpha } }}{\Upgamma ( - \alpha + 1)}}u(t)} $$
(2.96)

or with (2.80)

$$ D^{\alpha } [g(t) - u(t)] = g(t) $$

This means that g(t) is the solution of the fractional differential equation

$$ D^{\alpha } g(t) - g(t) = 0 $$
(2.97)

under the initial condition g(0) = 1.
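A quick numerical illustration of (2.96) and (2.97), under the same assumptions as before (our hypothetical `gl_derivative` helper and a truncated Mittag–Leffler series): the GL derivative of the causal g(t) should reproduce g(t) plus the derivative of the step, t^{−α}/Γ(1 − α).

```python
from math import gamma

def ml(alpha, t, terms=200):
    """Mittag-Leffler series (2.95): sum of t**(n*alpha) / Gamma(n*alpha + 1)."""
    return sum(t ** (n * alpha) / gamma(n * alpha + 1) for n in range(terms))

def gl_derivative(f, alpha, t, h=1e-3):
    total, coeff = 0.0, 1.0
    for k in range(int(t / h) + 1):
        total += coeff * f(max(t - k * h, 0.0))   # max() guards float round-off
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

alpha, t = 0.5, 1.0
g = lambda x: ml(alpha, x)
print(gl_derivative(g, alpha, t))                 # D^alpha [g(t) u(t)]
print(g(t) + t ** (-alpha) / gamma(1 - alpha))    # g(t) + D^alpha u(t), per (2.96)
```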

2.9.3 The Causal Logarithm

We are going to study the causal logarithm \( \lambda (t) = \log (t) \cdot u(t) \). We could use Eq. 2.32, but the computations are somewhat involved. Instead, we start from relation (2.76) and compute the derivative of both sides relative to \( \beta \) to obtain:

$$ D^{\alpha } [t^{\beta } \log (t)u(t)] = {\frac{\Upgamma (\beta + 1)}{\Upgamma (\beta - \alpha + 1)}}t^{\beta - \alpha } u(t)[\log (t) + \psi (\beta + 1) - \psi (\beta - \alpha + 1)] $$
(2.98)

where we represent by ψ the logarithmic derivative of the gamma function:

$$ \psi (t) = D[\log \Upgamma (t)] = \Upgamma^{\prime } (t)/\Upgamma (t) $$

Putting β = 0 in (2.98) we obtain the derivative of the causal logarithm

$$ D^{\alpha } [\log (t)u(t)] = {\frac{1}{\Upgamma ( - \alpha + 1)}}t^{ - \alpha } u(t)[\log (t) - \gamma - \psi ( - \alpha + 1)] $$
(2.99)

where \( \gamma = - \psi(1) = - \Upgamma^{\prime}(1)\) is the Euler–Mascheroni constant.
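Equation (2.99) can likewise be checked with the GL sum. In this sketch (ours; SciPy's `digamma` is assumed available, and log is cut off near zero since the endpoint term has negligible weight):

```python
from math import gamma, log
from scipy.special import digamma

def gl_derivative(f, alpha, t, h=1e-4):
    total, coeff = 0.0, 1.0
    for k in range(int(t / h) + 1):
        total += coeff * f(max(t - k * h, 0.0))   # max() guards float round-off
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

alpha, t = 0.5, 2.0
euler_gamma = -digamma(1.0)                       # Euler-Mascheroni constant
numeric = gl_derivative(lambda x: log(x) if x > 1e-12 else 0.0, alpha, t)
exact = t ** (-alpha) / gamma(1 - alpha) * (log(t) - euler_gamma - digamma(1 - alpha))
print(numeric, exact)                             # Eq. 2.99
```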

Another interesting result can be obtained by integer order derivation of both sides in (2.99). As

$$ D^{N} [\log (t)u(t)] = ( - 1)^{N - 1} (N - 1)!t^{ - N} u(t),\quad N = 1,2, \ldots $$
(2.100)

we have:

$$ \begin{aligned} D^{\alpha } t^{ - N} u(t) & = {\frac{{D^{N + \alpha } [\log (t)u(t)]}}{{( - 1)^{N - 1} (N - 1)!}}} \\ & = {\frac{1}{{( - 1)^{N - 1} (N - 1)!\Upgamma ( - \alpha - N + 1)}}}t^{ - \alpha - N} u(t)[\log (t) - \gamma - \psi ( - \alpha - N + 1)] \\ \end{aligned} $$
(2.101)

From the properties of the gamma function, we have:

$$ \Upgamma ( - \alpha - N + 1) = ( - 1)^{N} {\frac{\Upgamma (1 - \alpha )}{{(\alpha )_{N} }}} $$

and using repeatedly the recurrence property of the digamma we obtain:

$$ \psi ( - \alpha - N + 1) = \psi ( - \alpha + 1) - \sum\limits_{n = 1}^{N} {{\frac{1}{1 - \alpha - n}}} $$

leading to

$$ D^{\alpha } t^{ - N} u(t) = - {\frac{{(\alpha )_{N} }}{(N - 1)!\Upgamma ( - \alpha + 1)}}t^{ - \alpha - N} u(t)\left[ {\log (t) - \gamma - \psi ( - \alpha + 1) - \sum\limits_{n = 1}^{N} {{\frac{1}{1 - \alpha - n}}} } \right] $$
(2.102)

If α = M is a positive integer, the gamma function Γ(−α + 1) becomes infinite and we have:

$$ D^{M} t^{ - N} u(t) = \mathop {\lim }\limits_{\alpha \to M} \frac{\psi ( - \alpha + 1)}{\Upgamma ( - \alpha + 1)}\frac{{(\alpha )_{N} }}{(N - 1)!}t^{ - \alpha - N} u(t) $$
(2.103)

But

$$ \Upgamma (x) \approx ( - 1)^{n} {\frac{1}{n!(x + n)}} $$

near the pole at x = −n. Then

$$ \mathop {\lim }\limits_{\alpha \to M} {\frac{\psi ( - \alpha + 1)}{\Upgamma ( - \alpha + 1)}} = ( - 1)^{M} (M - 1)! $$

and, finally

$$ D^{M} t^{ - N} u(t) = ( - 1)^{M} (N)_{M} t^{ - M - N} u(t) $$
(2.104)

which is the classic result. It is not very difficult to obtain this result from (2.76), provided that we take care with the indeterminate form. To conclude:

$$ D_{f}^{\alpha } t^{\beta } u(t) = \left\{ {\begin{array}{*{20}c} {{\frac{\Upgamma (\beta + 1)}{\Upgamma (\beta - \alpha + 1)}}t^{\beta - \alpha } u(t)} \hfill & {\beta \in R - Z^-} \hfill \\ { - {\frac{{(\alpha )_{N} }}{(N - 1)!\Upgamma ( - \alpha + 1)}}t^{ - \alpha - N} u(t)\left[ {\log (t) - \gamma - \psi ( - \alpha + 1) - \sum\limits_{n = 1}^{N} {{\frac{1}{1 - \alpha - n}}} } \right]} \hfill & { - N = \beta \in Z^-} \hfill \\ \end{array} } \right. $$
(2.105)

We must remark that these results are more general than those obtained by integral derivative formulations like the Riemann–Liouville or Caputo derivatives [13, 14].
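As a summary, (2.105) can be packaged into a single routine. The sketch below is ours (SciPy's `digamma` is assumed, and the poles of Γ, e.g. the δ(t) case β − α = −1 of (2.78), are deliberately not handled):

```python
from math import gamma, log
from scipy.special import digamma

def causal_power_derivative(alpha, beta, t):
    """D_f^alpha [t**beta * u(t)] following the two branches of (2.105)."""
    if t <= 0:
        return 0.0
    if beta > -1 or beta != int(beta):                  # beta not a negative integer
        return gamma(beta + 1) / gamma(beta - alpha + 1) * t ** (beta - alpha)
    N = int(-beta)                                      # beta = -N with N in Z^+
    poch = gamma(alpha + N) / gamma(alpha)              # Pochhammer (alpha)_N
    bracket = (log(t) + digamma(1.0) - digamma(1 - alpha)
               - sum(1.0 / (1 - alpha - n) for n in range(1, N + 1)))
    return -poch / (gamma(N) * gamma(1 - alpha)) * t ** (-alpha - N) * bracket

print(causal_power_derivative(0.5, 1.5, 2.0))   # first branch, cf. (2.76)
print(causal_power_derivative(0.5, -2, 2.0))    # second branch, cf. (2.102)
```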

2.9.4 Consequences in the Laplace Transform Domain

Consider that we are working in the context of the Laplace transform. With the two-sided LT we use, there is no problem in defining the LT of \( \delta \left( t \right) \):

$$ {\text{LT}}[\delta (t)] = 1 $$

Attending to (2.61) and (2.87) we conclude that

$$ {\text{LT}}\left[ {{\frac{{t^{ - \alpha - 1} }}{\Upgamma ( - \alpha )}}u(t)} \right] = s^{\alpha } ,\quad \text{Re} (s) > 0 $$

that generalizes a well known result for any α ∈ R − Z^+. Essentially, we prolonged the sequence:

$$ \ldots s^{ - n} \ldots ,s^{ - 2} ,s^{ - 1} ,1,s^{1} ,s^{2} , \ldots ,s^{n} \ldots $$

in order to include other kinds of exponents: rational, real, or even complex. In agreement with what we said before, there are two ways of obtaining the extension, depending on the choice made for the region of convergence of the LT: the left or the right half-plane. This has some implications for the study of fractional linear systems, as we will see later.
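A direct numerical check of this LT pair, for a convergent (negative order) case and with SciPy assumed available, is sketched below:

```python
from math import exp, gamma
from scipy.integrate import quad

alpha, s = -0.5, 2.0    # fractional integration order; real s in the region of convergence
lhs, _ = quad(lambda t: t ** (-alpha - 1) / gamma(-alpha) * exp(-s * t), 0, 50)
print(lhs, s ** alpha)  # both approximately 0.7071
```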

From (2.76) we easily obtain the LT of the Mittag–Leffler function

$$ G(s) = \sum\limits_{n = 0}^{\infty } {{\frac{1}{{s^{n\alpha + 1} }}} = {\frac{{s^{\alpha - 1} }}{{s^{\alpha } - 1}}}} ;\quad \text{Re} (s) > 1 $$
(2.106)

Relation (2.106) expresses a special case of the more general result known as Hardy’s theorem [16] that states:

Let the series

$$ F(s) = \sum\limits_{0}^{ + \infty } {a_{n} \Upgamma (\alpha + n + 1) \cdot s^{ - \alpha - n - 1} } $$
(2.107)

be convergent for some Re(s) > s_0 > 0 and \( \alpha > - 1 \). The series

$$ f(t) = \sum\limits_{0}^{ + \infty } {a_{n} t^{\alpha + n} } $$
(2.108)

converges for all t > 0 and F(s) = LT[f(t)].

In agreement with the results in Sect. 2.9.1, the validity of Hardy's theorem can be extended to values of α < −1, provided α is not an integer.
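A small numerical illustration of Hardy's theorem (ours, not from the source): for f(t) = t^{1/2} e^{−t} we have a_n = (−1)^n/n! and α = 1/2, and the closed-form transform is Γ(3/2)(s + 1)^{−3/2}; the series (2.107) reproduces it for s inside the region of convergence:

```python
from math import gamma

s, alpha = 3.0, 0.5
partial, an = 0.0, 1.0                    # an holds a_n = (-1)**n / n!
for n in range(60):
    partial += an * gamma(alpha + n + 1) * s ** (-alpha - n - 1)   # Eq. 2.107
    an *= -1.0 / (n + 1)                  # a_{n+1} = -a_n / (n + 1)
print(partial, gamma(1.5) * (s + 1) ** -1.5)
```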

2.10 Riemann–Liouville and Caputo Derivatives

The Riemann–Liouville and Caputo derivatives are multistep derivatives that combine several integer order derivatives with a fractional integration [1, 13, 14, 19]. Although they create some difficulties, as we will see later, we are going to describe them, since they are widely used, even if with questionable results. To present them, we use (2.87) and (2.90) to obtain the following distributions:

$$ \delta_{ \pm }^{( - \nu )} (t) = \pm {\frac{{t^{\nu - 1} }}{\Upgamma (\nu )}}u( \pm t),\quad 0 < \nu < 1 $$

and

$$ \delta_{ \pm }^{(n)} (t) = \left\{ {\begin{array}{*{20}c} { \pm {\frac{{t^{ - n - 1} }}{( - n - 1)!}}u( \pm t)} \hfill & {{\text{for}}\,n < 0} \hfill \\ {\delta^{(n)} (t)} \hfill & {{\text{for}}\,n \ge 0} \hfill \\ \end{array} } \right. $$
(2.109)

where n ∈ Z. With them we define two differintegrations, usually classified as left and right sided, respectively:

$$ f_{l}^{(\alpha )} (t) = [f(t)u(t - a)]*\delta_{ + }^{(n)} (t)*\delta_{ + }^{( - \nu )} (t) $$
(2.110)
$$ f_{r}^{(\alpha )} (t) = [f(t)u(b - t)]*\delta_{ + }^{(n)} ( - t)*\delta_{ + }^{( - \nu )} ( - t) $$
(2.111)

with a, b ∈ R and a < b. The orders are given by α = n − ν, n being the least integer greater than \( \alpha \) and 0 < ν < 1. In particular, if α is integer, then ν = 0.Footnote 9 From different orders of commutation and association in the double convolution we can obtain distinct formulations. For example, from (2.110) we obtain the left Riemann–Liouville and Caputo derivatives:

$$ f_{RL + }^{(\alpha )} (t) = \delta_{ + }^{(n)} (t)*\left\{ {[f(t)u(t - a)]* \delta_{ + }^{( - \nu )} (t)} \right\} $$
(2.112)
$$ f_{C + }^{(\alpha )} (t) = \left\{ {[f(t)u(t - a)]*\delta_{ + }^{(n)} (t)} \right\}*\delta_{ + }^{( - \nu )} (t) $$
(2.113)

For the right side derivatives the procedure is similar.

We are going to study more carefully the characteristics of these derivatives. Consider (2.112). Let

$$ \varphi^{( - \nu )} (t) = \left\{ {[f(t)u(t - a)]*\delta_{ + }^{( - \nu )} (t)} \right\}. $$

We have:

$$ \varphi^{( - \nu )} (t) = \left\{ {\begin{array}{*{20}c} {{\frac{1}{\Upgamma (\nu )}}\int\limits_{a}^{t} {f(\tau ) \cdot (t - \tau )^{\nu - 1} \,{\text{d}}\tau } } \hfill & {{\text{if}}\,t > a} \hfill \\ 0 \hfill & {{\text{if}}\,t < a} \hfill \\ \end{array} } \right. $$

So, in general, when doing the second convolution in (2.112) we are computing the integer order derivative of a function with a jump. The “jump formula” [9, 10]Footnote 10 leads to

$$ f_{RL + }^{(\alpha )} (t) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{a}^{t} {f(\tau ) \cdot (t - \tau )^{ - \alpha - 1} \,{\text{d}}\tau } - \sum\limits_{i = 0}^{n - 1} {f^{(\alpha - 1 - i)} (a)\delta^{(i)} (t)} $$
(2.114)

The appearance of the “initial conditions” \(f^{(\alpha-1-i)}(a+)\) has caused some confusion, because they were used as initial conditions of linear systems. This is not correct in general. They represent what we need to add to the Riemann–Liouville derivative to obtain the Liouville derivative (2.88). We will return to this subject when we study the initial condition problem in linear fractional systems. Now let us do a similar analysis for the Caputo derivative. The expression

$$ \left\{ {[f(t)u(t - a)]*\delta_{ + }^{(n)} (t)} \right\} $$

represents the integer order derivative of the function \({f(t)\cdot u(t-a)}\). Again the jump formula gives

$$ y^{(n)} (t) \cdot u(t - a) = [y(t) \cdot u(t - a)]^{(n)} - \sum\limits_{i = 0}^{n - 1} {y^{(n - 1 - i)} (a)\delta^{(i)} (t)} $$
(2.115)

that leads to:

$$ f_{C + }^{(\alpha )} (t) = {\frac{1}{\Upgamma ( - \alpha )}}\int\limits_{a}^{t} {f(\tau ) \cdot (t - \tau )^{ - \alpha - 1} \,} {\text{d}}\tau - \sum\limits_{i = 0}^{n - 1} {f^{(n - 1 - i)} (a)\delta^{(i - \nu )} (t)} $$
(2.116)

In this case, we can draw conclusions similar to those of the Riemann–Liouville case. Relation (2.116) explains why sometimes the first n terms of the Taylor series of f(t) are subtracted from it before computing a fractional derivative. It works like a regularization.
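For 0 < α < 1 (so n = 1) and t > a = 0, the delta terms vanish and the analysis above reduces to the familiar relation RL = Caputo + f(0) t^{−α}/Γ(1 − α). The sketch below is ours, with SciPy assumed; the Mittag–Leffler reference values for f = e^t are standard results, not taken from the source:

```python
from math import exp, gamma
from scipy.integrate import quad

alpha, t = 0.5, 1.0                 # 0 < alpha < 1, so n = 1 and nu = 1 - alpha
f = df = exp                        # f(t) = e^t, f'(t) = e^t, f(0) = 1

# Caputo: fractional integral of order 1 - alpha applied to f'
caputo, _ = quad(lambda u: df(u) * (t - u) ** (-alpha), 0, t)
caputo /= gamma(1 - alpha)

# Riemann-Liouville adds the initial-value term f(0) * t**(-alpha) / Gamma(1 - alpha)
rl = caputo + 1.0 * t ** (-alpha) / gamma(1 - alpha)

# Reference: Caputo D^a e^t = t^{1-a} E_{1,2-a}(t), RL D^a e^t = t^{-a} E_{1,1-a}(t)
E = lambda b: sum(t ** k / gamma(k + b) for k in range(80))
print(caputo, t ** (1 - alpha) * E(2 - alpha))
print(rl, t ** (-alpha) * E(1 - alpha))
```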

2.11 The Fractional Derivative of Periodic Signals

2.11.1 Periodic Functions

Let us consider a function defined by Fourier series

$$ f(t) = \sum\limits_{ - \infty }^{ + \infty } {F_{n} {\text{e}}^{j(2\pi /T)nt} } $$
(2.117)

For simplicity only, we will assume for now that the series is uniformly convergent almost everywhere on the real line. As is well known, it defines an almost everywhere continuous periodic function, f(t)

$$ f(t) = f(t + T),\quad t \in R $$
(2.118)

where T is the period and we can write

$$ f(t) = \sum\limits_{ - \infty }^{ + \infty } {f_{b} (t - nT)} $$
(2.119)

where f_b(t) is an almost everywhere continuous function with bounded support. It is a simple task to see that it can be expressed by the convolution

$$ f(t) = f_{b} (t)*\sum\limits_{ - \infty }^{ + \infty } {\delta (t - nT)} $$
(2.120)

We have obtained three different ways of representing a periodic function.

The results presented in Sect. 2.7.4 show that we can obtain the derivative of a periodic function from its Fourier series by computing the derivative term by term. We obtain:

$$ D_{f}^{\alpha } f(t) = \left( {{\frac{2\pi j}{T}}} \right)^{\alpha } \sum\limits_{ - \infty }^{ + \infty } {n^{\alpha } } F_{n} {\text{e}}^{j(2\pi /T)nt} $$
(2.121)

It would be interesting to study the convergence of this series, but that is not important here. We can use the procedure followed in the study of the comb signal [20], which was an adaptation of the theory presented in [9].

We conclude that, if a periodic function is defined by its Fourier series, it can be fractionally differentiated term by term and the derivative is also periodic with the same period.

Now we go back to (2.119). As the derivative is a linear operator, we can apply it to each term of the series. Besides, f_b(t) is a function with bounded support, which implies that the summation in (2.32) is done over a finite number of terms. So, it is convergent for each h and has a fractional derivative. Another way of concluding this is by using the Laplace transform and (2.49). This leads to

$$ D_{f}^{\alpha } f(t) = \sum\limits_{ - \infty }^{ + \infty } {D_{f}^{\alpha } } f_{b} (t - nT) $$
(2.122)

However, the fractional derivative of a bounded support function does not have bounded support. This means that the series of the derivatives exhibits what in signal processing is called aliasing, which may prevent its convergence. Anyway, the fractional derivative of a periodic function defined by (2.119) exists (at least formally) and is a periodic function with the same period.

Finally, let us consider the representation (2.120). The fractional derivative will be given by the convolution of f_b(t) and the fractional derivative of the comb function [20]. Attending to (2.87), we conclude that the derivative of the comb is

$$ D_{f}^{\alpha } \sum\limits_{ - \infty }^{ + \infty } {\delta (t - nT)} = {\frac{1}{\Upgamma ( - \alpha )}}\sum\limits_{ - \infty }^{ + \infty } {(t - nT)^{ - \alpha - 1} u(t - nT)} $$
(2.123)

which, convolved with f_b(t), leads again to (2.122).

We are now going to study the causal periodic case and show that the fractional-order derivative of a causal periodic function with a given period cannot be a periodic function with the same period. Consider relation (2.120) again and assume that the support of f_b(t) is the interval (0, T). It is immediate to verify that the causal periodic function is given by

$$ g(t) = \sum\limits_{0}^{ + \infty } {f_{b} (t - nT)} $$
(2.124)

or equivalently:

$$ g(t) = f(t) \cdot u(t) $$
(2.125)

Assume that 0 < α < 1. We are going to use the Caputo derivative, for illustration purposes only. The same result could be obtained with the GL or Liouville derivatives. In a first step, we obtain:

$$ g^{\prime}(t) = f^{\prime}(t) \cdot u(t) + f(0) \cdot \delta (t) $$
(2.126)

and

$$ D^{\alpha } g(t) = {\frac{1}{\Upgamma (1 - \alpha )}}\left[ {\int\limits_{0}^{t} {f^{\prime}(\tau ) \cdot (t - \tau )^{ - \alpha } \,{\text{d}}\tau } + f(0) \cdot t^{ - \alpha } } \right]u(t) $$
(2.127)

We conclude that \( D^{\alpha } g\left( t \right) \) cannot be periodic, because the second term destroys any hypothesis of periodicity. However, we can always remove a constant without affecting the periodicity. So, without losing generality, let us assume that f(0) = 0, for example \( f(t) = \sin (\omega_{0} t) .\) We can write:

$$ \begin{aligned} {\frac{1}{\Upgamma (1 - \alpha )}}\int\limits_{0}^{t} {{\text{e}}^{{ - j\omega_{0} \tau }} } \cdot \tau^{ - \alpha } \,{\text{d}}\tau & = {\frac{1}{\Upgamma (1 - \alpha )}}\int\limits_{0}^{\infty } {{\text{e}}^{{ - j\omega_{0} \tau }} } \cdot \tau^{ - \alpha } \,{\text{d}}\tau \\&\quad - {\frac{1}{\Upgamma (1 - \alpha )}}\int\limits_{t}^{\infty } {{\text{e}}^{{ - j\omega_{0} \tau }} } \cdot \tau^{ - \alpha } \,{\text{d}}\tau\\ \end{aligned} $$
(2.128)

The first term is the Fourier transform of t^{−α} u(t). Using the results of Sect. 2.7.4, it assumes the value \((j \omega_0)^{\alpha -1}.\) We obtain from (2.61)

$$ D^{\alpha } g(t) = |\omega_{0} |^{\alpha } \sin (\omega_{0} t + \alpha \pi /2)u(t) - {\frac{{2\omega_{0} }}{\Upgamma (1 - \alpha )}}\text{Re} \left\{ {{\text{e}}^{{j\omega_{0} t}} \int\limits_{t}^{\infty } {{\text{e}}^{{ - j\omega_{0} \tau }} } \cdot \tau^{ - \alpha } \,{\text{d}}\tau } \right\} $$
(2.129)

We conclude that D^α g(t) is not causal periodic, because there is a transient term. We could also obtain this result by applying the generalized Leibniz rule to g(t).
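The transient is easy to see numerically. In the sketch below (our hypothetical `gl_derivative` helper once more), the GL derivative of sin(ω₀t)u(t) is compared with the steady-state |ω₀|^α sin(ω₀t + απ/2); the mismatch, a transient plus O(h) truncation error, shrinks as t grows:

```python
from math import pi, sin

def gl_derivative(f, alpha, t, h=1e-4):
    total, coeff = 0.0, 1.0
    for k in range(int(t / h) + 1):
        total += coeff * f(max(t - k * h, 0.0))   # max() guards float round-off
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

alpha, w0 = 0.5, 2 * pi
f = lambda x: sin(w0 * x)
for t in (0.5, 2.0, 8.0):
    numeric = gl_derivative(f, alpha, t)
    steady = abs(w0) ** alpha * sin(w0 * t + alpha * pi / 2)
    print(t, numeric - steady)                    # decays with growing t
```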

2.12 Fractional Derivative Interpretations

There have been several attempts to give an interpretation (normally geometric) to the fractional derivative. The most interesting were proposed by Podlubny [21] and by Machado [22, 23]. The first proposed geometric interpretations for the integral formulations of the fractional derivative. The second gave a probabilistic interpretation of the GL derivative. These attempts, although interesting, were not widely accepted and did not give rise to new problem-solving methodologies.

2.13 Conclusions

We approached the fractional derivative definition through a generalized Grünwald–Letnikov formulation. We presented the most interesting properties it enjoys, namely causality and the group property. We deduced integral formulations and obtained the so-called Riemann–Liouville and Caputo derivatives. We also considered the derivative of periodic signals.