1 Introduction

Fractional calculus plays an important role in many different areas, and has proven to be a truly multidisciplinary subject [20, 30]. It is a mathematical field as old as the calculus itself. In a letter dated 30th September 1695, Leibniz posed the following question to L’Hopital: “Can the meaning of derivative be generalized to derivatives of non-integer order?” Since then, several mathematicians have investigated Leibniz’s challenge, prominent among them Liouville, Riemann, Weyl, and Letnikov. There are many applications of fractional calculus, for example, in viscoelasticity, electrochemistry, diffusion processes, control theory, heat conduction, electricity, mechanics, chaos and fractals, and signals and systems [12, 22].

Several methods to solve fractional differential equations are available, using Laplace and Fourier transforms, truncated Taylor series, and numerical approximations. In Almeida and Torres [7] a new direct method to find exact solutions of fractional variational problems is proposed, based on a simple but powerful idea introduced by Leitmann that does not involve solving (fractional) differential equations [32]. By an appropriate coordinate transformation, we rewrite the initial problem as an equivalent simpler one; knowing the solution of the new equivalent problem, and since there is a one-to-one correspondence between the minimizers (or maximizers) of the new problem and those of the original one, we determine the desired solution. For a modern account on Leitmann’s direct method see [25, 26].

The calculus of variations is a field of mathematics that deals with extremizing functionals [33]. The variational functionals are often formed as definite integrals involving unknown functions and their derivatives. The fundamental problem consists in finding functions y(x), x ∈ [a, b], that extremize a given functional when subject to boundary conditions \(y(a) = y_{a}\) and \(y(b) = y_{b}\). Since this can be a hard task, one wishes to study necessary and sufficient optimality conditions. The simplest example is the following one: what is the shape of the curve y(x), x ∈ [a, b], joining two fixed points \(y_{a}\) and \(y_{b}\), that has the minimum possible length? The answer is obviously the straight line joining \(y_{a}\) and \(y_{b}\). One can obtain it by solving the corresponding Euler–Lagrange necessary optimality condition. If the boundary condition \(y(b) = y_{b}\) is not fixed, that is, if we are only interested in the minimum length, the answer is the horizontal straight line \(y(x) = y_{a}\), x ∈ [a, b] (free endpoint problem). In this case we need to complement the Euler–Lagrange equation with an appropriate natural boundary condition. For a general account on Euler–Lagrange equations and natural boundary conditions, we refer the reader to [23, 24] and references therein. Another important family of variational problems is the isoperimetric one [5]. The classical isoperimetric problem consists in finding a continuously differentiable function y = y(x), x ∈ [a, b], satisfying given boundary conditions \(y(a) = y_{a}\) and \(y(b) = y_{b}\), which minimizes (or maximizes) a functional

$$\begin{array}{rcl} I(y) ={ \int \nolimits \nolimits }_{a}^{b}L(x,y(x),y'(x))\,\mathrm{d}x& & \\ \end{array}$$

subject to the constraint

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x,y(x),y'(x))\,\mathrm{d}x = l.& & \\ \end{array}$$

The most famous isoperimetric problem can be posed as follows. Amongst all closed curves with a given length, which one encloses the largest area? The answer, as we know, is the circle. The general method to solve such problems involves an Euler–Lagrange equation obtained via the concept of Lagrange multiplier (see, e.g., [4]).
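
To make the shortest-curve example above concrete (a standard computation, recalled here only for illustration), the length of the graph of y is

$$ \ell(y) = \int_{a}^{b}\sqrt{1 + {(y'(x))}^{2}}\,\mathrm{d}x, $$

and the classical Euler–Lagrange equation \(\frac{\mathrm{d}}{\mathrm{d}x}\frac{y'(x)}{\sqrt{1+{(y'(x))}^{2}}} = 0\) forces y′ to be constant, so the extremals are straight lines. In the free endpoint case, the natural boundary condition \(y'(b)/\sqrt{1+{(y'(b))}^{2}} = 0\) gives y′(b) = 0, and the extremal is the horizontal line \(y(x) = y_{a}\).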

The fractional calculus of variations is a recent field, initiated in 1997, in which classical variational problems are considered in the presence of some fractional derivative or fractional integral [31]. In the past few years, increasing interest has been devoted to finding necessary conditions of optimality for variational problems with Lagrangians involving fractional derivatives [1, 9, 10, 11, 16, 17, 18, 19, 27, 28], fractional derivatives and fractional integrals [3, 6, 15], classical and fractional derivatives [29], as well as fractional difference operators [13, 14]. A good introduction to the subject is given in the monograph of Klimek [21]. Here we consider unconstrained and constrained fractional variational problems via Caputo operators.

2 Preliminaries and Notations

There exist several definitions of fractional derivatives and fractional integrals, for example, Riemann–Liouville, Caputo, Riesz, Riesz–Caputo, Weyl, Grünwald–Letnikov, Hadamard, and Chen. Here we review only some basic features of Caputo’s fractional derivative. For proofs and more on the subject, we refer the reader to [20, 30].

Let f : [a, b] → ℝ be an integrable function, α > 0, and Γ be the Euler gamma function. The left and right Riemann–Liouville fractional integral operators of order α are defined by

$$ \begin{array}{rcl} {}_{a}{I}_{x}^{\alpha }[f] := x\mapsto \frac{1} {\Gamma (\alpha )}{\int \nolimits \nolimits }_{a}^{x}{(x - t)}^{\alpha -1}f(t)\mathrm{d}t& & \\ \end{array}$$

and

$$ \begin{array}{rcl} {}_{x}{I}_{b}^{\alpha }[f] := x\mapsto \frac{1} {\Gamma (\alpha )}{\int \nolimits \nolimits }_{x}^{b}{(t - x)}^{\alpha -1}f(t)\mathrm{d}t,& & \\ \end{array}$$

respectively. The left and right Riemann–Liouville fractional derivative operators of order α are, respectively, defined by

$$ \begin{array}{rcl} {}_{a}{D}_{x}^{\alpha } := \frac{{\mathrm{d}}^{n}} {\mathrm{d}{x}^{n}} {\circ }_{a}{I}_{x}^{n-\alpha }& & \\ \end{array}$$

and

$$ \begin{array}{rcl}{ }_{x}{D}_{b}^{\alpha } := {(-1)}^{n} \frac{{\mathrm{d}}^{n}} {\mathrm{d}{x}^{n}} {\circ }_{x}{I}_{b}^{n-\alpha },& & \\ \end{array}$$

where \(n = [\alpha ] + 1\) and \([\alpha ]\) denotes the integer part of α. Interchanging the composition of operators in the definition of the Riemann–Liouville fractional derivatives, we obtain the left and right Caputo fractional derivatives of order α:

$$ \begin{array}{rcl}{ }_{a}^{C}{D}_{ x}^{\alpha } :{= }_{ a}{I}_{x}^{n-\alpha } \circ \frac{{\mathrm{d}}^{n}} {\mathrm{d}{x}^{n}}& & \\ \end{array}$$

and

$$ \begin{array}{rcl} {}_{x}^{C}{D}_{ b}^{\alpha } :{= }_{ x}{I}_{b}^{n-\alpha } \circ {(-1)}^{n} \frac{{\mathrm{d}}^{n}} {\mathrm{d}{x}^{n}}.& & \\ \end{array}$$
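
For orientation, two standard computations illustrate these definitions. For 0 < α < 1 and the power function \(f(x) = {(x - a)}^{\gamma }\) with γ > 0, one has

$$ {}_{a}{I}_{x}^{\alpha }[1](x) = \frac{{(x - a)}^{\alpha }}{\Gamma (\alpha + 1)}, \qquad {}_{a}^{C}{D}_{x}^{\alpha }[f](x) = \frac{\Gamma (\gamma + 1)}{\Gamma (\gamma + 1 - \alpha )}\,{(x - a)}^{\gamma -\alpha }, $$

while for a constant c one has \({}_{a}^{C}{D}_{x}^{\alpha }[c] = 0\) but \({}_{a}{D}_{x}^{\alpha }[c](x) = c\,{(x - a)}^{-\alpha }/\Gamma (1 - \alpha )\), which highlights a well-known difference between the Caputo and Riemann–Liouville derivatives.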

Theorem 9.1.

Assume that f is of class \({C}^{n}\) on [a,b]. Then its left and right Caputo derivatives are continuous on the closed interval [a,b].

One of the most important results for the proof of necessary optimality conditions is the integration by parts formula. For Caputo derivatives the following relations hold.

Theorem 9.2.

Let α > 0, and let f,g : [a,b] → ℝ be functions of class \({C}^{n}\). Then

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ a}^{C}{D}_{ x}^{\alpha }[f](x)\mathrm{d}x =& {\int \nolimits \nolimits }_{ a}^{b}f(x) {\cdot }_{ x}{D}_{b}^{\alpha }[g](x)\mathrm{d}x & \\ & +{\sum \nolimits }_{j=0}^{n-1}{\left [{}_{x}{D}_{b}^{\alpha +j-n}[g](x) {\cdot }_{x}{D}_{b}^{n-1-j}[f](x)\right ]}_{a}^{b}& \\ \end{array}$$

and

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ x}^{C}{D}_{ b}^{\alpha }[f](x)\mathrm{d}x =& {\int \nolimits \nolimits }_{ a}^{b}f(x) {\cdot }_{ a}{D}_{x}^{\alpha }[g](x)\mathrm{d}x & \\ & +{\sum \nolimits }_{j=0}^{n-1}{\left [{(-1){}^{n+j}}_{a}{D}_{x}^{\alpha +j-n}[g](x) {\cdot }_{a}{D}_{x}^{n-1-j}[f](x)\right ]}_{a}^{b},& \\ \end{array}$$

where \({}_{a}{D}_{x}^{k} {= }_{a}{I}_{x}^{-k}\) and \({}_{x}{D}_{b}^{k} {= }_{x}{I}_{b}^{-k}\) whenever k < 0.

In the particular case when 0 < α < 1, we get from Theorem 9.2 that

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ a}^{C}{D}_{ x}^{\alpha }[f](x)\mathrm{d}x ={ \int \nolimits \nolimits }_{a}^{b}f(x) {\cdot }_{ x}{D}_{b}^{\alpha }[g](x)\mathrm{d}x +{ \left [{}_{ x}{I}_{b}^{1-\alpha }[g](x) \cdot f(x)\right ]}_{ a}^{b}& & \\ \end{array}$$

and

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ x}^{C}{D}_{ b}^{\alpha }[f](x)\mathrm{d}x ={ \int \nolimits \nolimits }_{a}^{b}f(x) {\cdot }_{ a}{D}_{x}^{\alpha }[g](x)\mathrm{d}x -{\left [{}_{ a}{I}_{x}^{1-\alpha }[g](x) \cdot f(x)\right ]}_{ a}^{b}.& & \\ \end{array}$$

In addition, if f is such that \(f(a) = f(b) = 0\), then

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ a}^{C}{D}_{ x}^{\alpha }[f](x)\mathrm{d}x ={ \int \nolimits \nolimits }_{a}^{b}f(x) {\cdot }_{ x}{D}_{b}^{\alpha }[g](x)\mathrm{d}x& & \\ \end{array}$$

and

$$\begin{array}{rcl} {\int \nolimits \nolimits }_{a}^{b}g(x) {\cdot }_{ x}^{C}{D}_{ b}^{\alpha }[f](x)\mathrm{d}x ={ \int \nolimits \nolimits }_{a}^{b}f(x) {\cdot }_{ a}{D}_{x}^{\alpha }[g](x)\mathrm{d}x.& & \\ \end{array}$$

Throughout the text, we denote by \({\partial }_{i}L\), i = 1, …, m (m ∈ ℕ), the partial derivative of a function \(L : {\mathbb{R}}^{m} \rightarrow \mathbb{R}\) with respect to its ith argument. For convenience of notation, we introduce the operator \({}_{\alpha }^{C}{[\cdot ]}_{\beta }\) defined by

$$ \begin{array}{rcl} {}_{\alpha }^{C}{[y]}_{ \beta } := x\mapsto \left (x,y(x){,\,}_{a}^{C}{D}_{ x}^{\alpha }[y](x){,\,}_{ x}^{C}{D}_{ b}^{\beta }[y](x)\right )\!,& & \\ \end{array}$$

where α, β ∈ (0, 1).
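
For example (with a Lagrangian chosen only to illustrate the notation), if \(L(x,y,v,w) = {y}^{2} + {v}^{2} + {w}^{2}\), then

$$ \left(L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right)(x) = {(y(x))}^{2} + {\left({}_{a}^{C}{D}_{x}^{\alpha }[y](x)\right)}^{2} + {\left({}_{x}^{C}{D}_{b}^{\beta }[y](x)\right)}^{2}. $$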

3 Euler–Lagrange Equations

The fundamental problem of the fractional calculus of variations is addressed in the following way: find functions y ∈ ℰ,

$$\begin{array}{rcl} \mathcal{E} := \left \{y \in {C}^{1}([a,b])\,\vert \,y(a) = {y}_{ a}\mbox{ and }y(b) = {y}_{b}\right \},& & \\ \end{array}$$

that maximize or minimize the functional

$$\begin{array}{rcl} J(y) ={ \int \nolimits \nolimits }_{a}^{b}\left (L {\circ }_{ \alpha }^{C}{[y]}_{ \beta }\right )(x)\mathrm{d}x.& &\end{array}$$
(9.1)

As usual, the Lagrange function L is assumed to be of class \({C}^{1}\) in all its arguments. We also assume that \({\partial }_{3}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\) has continuous right Riemann–Liouville fractional derivative of order α and \({\partial }_{4}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\) has continuous left Riemann–Liouville fractional derivative of order β for y ∈ ℰ.

In [1] a necessary condition of optimality for such functionals is proved. We remark that although functional (9.1) contains only Caputo fractional derivatives, the fractional Euler–Lagrange equation also contains Riemann–Liouville fractional derivatives.

Theorem 9.3 (Euler–Lagrange equation for (9.1)). 

If y is a minimizer or a maximizer of J on ℰ, then y is a solution of the fractional differential equation

$$\begin{array}{rcl} \left ({\partial }_{2}L {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right )(x) {+ }_{x}{D}_{b}^{\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right ](x) {+ }_{a}{D}_{x}^{\beta }\left [{\partial }_{ 4}L {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right ](x) = 0& &\end{array}$$
(9.2)

for all x ∈ [a,b].

Proof.

Given | ε | ≪ 1, consider h ∈ V where

$$\begin{array}{rcl} V := \left \{h \in {C}^{1}([a,b])\,\vert \,h(a) = 0\mbox{ and }h(b) = 0\right \},& & \\ \end{array}$$

and a variation of the function y of the form y + εh. Define the real-valued function j(ε) by

$$\begin{array}{rcl} j(\epsilon ) = J(y + \epsilon h) ={ \int \nolimits \nolimits }_{a}^{b}\left (L {\circ }_{ \alpha }^{C}{[y + \epsilon h]}_{ \beta }\right )(x)\mathrm{d}x.& & \\ \end{array}$$

Since ε = 0 is a minimizer or a maximizer of j, we have j′(0) = 0. Thus,

$$\begin{array}{rcl} & {\int \nolimits \nolimits }_{a}^{b}\Bigl [\left ({\partial }_{2}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right )(x) \cdot h(x) + \left ({\partial }_{3}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right)(x) {\cdot }_{a}^{C}{D}_{x}^{\alpha }[h](x)& \\ & \qquad + \left ({\partial }_{4}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right )(x) {\cdot }_{x}^{C}{D}_{b}^{\beta }[h](x)\Bigr ]\mathrm{d}x = 0. & \\ \end{array}$$

Equality (9.2) now follows by integrating by parts and applying the classical fundamental lemma of the calculus of variations [33].
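
In more detail (a routine step, spelled out here for the reader's convenience): since h(a) = h(b) = 0, the integration by parts formulas of Sect. 2 applied to the last two terms of the previous equality give

$$ \int_{a}^{b}\Bigl[\left({\partial }_{2}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right)(x) + {}_{x}{D}_{b}^{\alpha }\left[{\partial }_{3}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right](x) + {}_{a}{D}_{x}^{\beta }\left[{\partial }_{4}L {\circ }_{\alpha }^{C}{[y]}_{\beta }\right](x)\Bigr]h(x)\,\mathrm{d}x = 0, $$

and the arbitrariness of h in V, together with the fundamental lemma, yields (9.2).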

We remark that, when α → 1, functional (9.1) reduces to the classical functional

$$\begin{array}{rcl} J(y) ={ \int \nolimits \nolimits }_{a}^{b}f(x,y(x),y'(x))\,\mathrm{d}x,& & \\ \end{array}$$

and the fractional Euler–Lagrange equation (9.2) gives the standard one:

$$\begin{array}{rcl}{ \partial }_{2}f(x,y(x),y'(x)) - \frac{\mathrm{d}} {\mathrm{d}x}{\partial }_{3}f(x,y(x),y'(x)) = 0.& & \\ \end{array}$$

Solutions to equation (9.2) are said to be extremals of (9.1).
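
As a simple illustration of Theorem 9.3 (an example chosen here only to exhibit the structure of (9.2)), take \(L(x,y,v,w) = \frac{1}{2}{v}^{2}\), which depends neither on y nor on the right Caputo derivative. Then \({\partial }_{2}L = {\partial }_{4}L = 0\), \({\partial }_{3}L = v\), and (9.2) reduces to

$$ {}_{x}{D}_{b}^{\alpha }\left[{}_{a}^{C}{D}_{x}^{\alpha }[y]\right](x) = 0, \qquad x \in [a,b], $$

making explicit the remark above: a right Riemann–Liouville derivative appears in the Euler–Lagrange equation even though only a left Caputo derivative enters the functional.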

4 The Isoperimetric Problem

The fractional isoperimetric problem is stated in the following way: find the minimizers or maximizers of the functional J as in (9.1), over all functions y ∈ ℰ satisfying the fractional integral constraint

$$\begin{array}{rcl} I(y) ={ \int \nolimits \nolimits }_{a}^{b}\left (g {\circ }_{ \alpha }^{C}{[y]}_{ \beta }\right )(x)\mathrm{d}x = l.& & \\ \end{array}$$

As for L, the function g is assumed to be of class \({C}^{1}\) with respect to all its arguments, \({\partial }_{3}g {\circ }_{\alpha }^{C}{[y]}_{\beta }\) is assumed to have continuous right Riemann–Liouville fractional derivative of order α, and \({\partial }_{4}g {\circ }_{\alpha }^{C}{[y]}_{\beta }\) continuous left Riemann–Liouville fractional derivative of order β for y ∈ ℰ. A necessary optimality condition for the fractional isoperimetric problem is given in [8].

Theorem 9.4.

Let y be a minimizer or maximizer of J on ℰ, when restricted to the set of functions z ∈ℰ such that I(z) = l. In addition, assume that y is not an extremal of I. Then, there exists a constant λ such that y is a solution of

$$\begin{array}{rcl} \left ({\partial }_{2}F {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right )(x) {+ }_{x}{D}_{b}^{\alpha }[{\partial }_{ 3}F {\circ }_{\alpha }^{C}{[y]}_{ \beta }](x) {+ }_{a}{D}_{x}^{\beta }[{\partial }_{ 4}F {\circ }_{\alpha }^{C}{[y]}_{ \beta }](x) = 0& &\end{array}$$
(9.3)

for all x ∈ [a,b], where \(F = L + \lambda g\).

Proof.

Given \({h}_{1},{h}_{2} \in V\), \(\vert {\epsilon }_{1}\vert \ll 1\) and \(\vert {\epsilon }_{2}\vert \ll 1\), consider

$$\begin{array}{rcl} j({\epsilon }_{1},{\epsilon }_{2}) ={ \int \nolimits \nolimits }_{a}^{b}\left (L {\circ }_{ \alpha }^{C}{[y + {\epsilon }_{ 1}{h}_{1} + {\epsilon }_{2}{h}_{2}]}_{\beta }\right )(x)\mathrm{d}x& & \\ \end{array}$$

and

$$\begin{array}{rcl} i({\epsilon }_{1},{\epsilon }_{2}) ={ \int \nolimits \nolimits }_{a}^{b}\left (g {\circ }_{ \alpha }^{C}{[y + {\epsilon }_{ 1}{h}_{1} + {\epsilon }_{2}{h}_{2}]}_{\beta }\right )(x)\mathrm{d}x - l.& & \\ \end{array}$$

Since y is not an extremal of I, there exists a function \({h}_{2}\) such that

$$\begin{array}{rcl}{ \left. \frac{\partial i} {\partial {\epsilon }_{2}}\right \vert }_{(0,0)}\neq 0,& & \\ \end{array}$$

and by the implicit function theorem, there exists a \({C}^{1}\) function \({\epsilon }_{2}(\cdot )\), defined in some neighborhood of zero, such that

$$\begin{array}{rcl} i({\epsilon }_{1},{\epsilon }_{2}({\epsilon }_{1})) = 0.& & \\ \end{array}$$

Applying the Lagrange multiplier rule (see, e.g., [33, Theorem 4.1.1]), there exists a constant λ such that

$$\begin{array}{rcl} \nabla (j + \lambda i)(0,0) = \mathbf{0}.& & \\ \end{array}$$

Differentiating j and i at (0, 0), and integrating by parts, we prove the theorem.
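
Spelling out this last step (under the assumptions of the theorem): writing \(F = L + \lambda g\), the condition \(\nabla (j + \lambda i)(0,0) = \mathbf{0}\), the arbitrariness of \({h}_{1} \in V\), and an integration by parts as in the proof of Theorem 9.3 give

$$ \int_{a}^{b}\Bigl[\left({\partial }_{2}F {\circ }_{\alpha }^{C}{[y]}_{\beta }\right)(x) + {}_{x}{D}_{b}^{\alpha }\left[{\partial }_{3}F {\circ }_{\alpha }^{C}{[y]}_{\beta }\right](x) + {}_{a}{D}_{x}^{\beta }\left[{\partial }_{4}F {\circ }_{\alpha }^{C}{[y]}_{\beta }\right](x)\Bigr]{h}_{1}(x)\,\mathrm{d}x = 0, $$

from which (9.3) follows by the fundamental lemma of the calculus of variations.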

Example 9.1.

Let \(\overline{y}(x) = {E}_{\alpha }({x}^{\alpha })\), x ∈ [0, 1], where \({E}_{\alpha }\) is the Mittag-Leffler function. Then \({}_{0}^{C}{D}_{x}^{\alpha }[\overline{y}] = \overline{y}\). Consider the following fractional variational problem:

$$\begin{array}{clll} J(y) ={ \int \nolimits \nolimits }_{0}^{1}{\left ({}_{ 0}^{C}{D}_{ x}^{\alpha }[y](x)\right )}^{2}\,\mathrm{d}x\rightarrow \mathrm{extr},& \\ I(y) ={ \int \nolimits \nolimits }_{0}^{1}\overline{y}{(x)\,}_{ 0}^{C}{D}_{ x}^{\alpha }[y](x)\,\mathrm{d}x = l,& \\ y(0) = 1\,,\quad y(1) = {y}_{1}, & \end{array}$$

with \(l :={ \int \nolimits \nolimits }_{0}^{1}{(\overline{y}(x))}^{2}\mathrm{d}x\) and \({y}_{1} := {E}_{\alpha }(1)\). In this case the function F of Theorem 9.4 is

$$\begin{array}{rcl} F(x,y,v,w) = {v}^{2} + \lambda \overline{y}(x)\,v& & \\ \end{array}$$

and the fractional Euler–Lagrange equation (9.3) is

$$ \begin{array}{rcl} {}_{x}{D}_{1}^{\alpha }[{2\,}_{ 0}^{C}{D}_{ x}^{\alpha }[y] + \lambda \overline{y}](x) = 0.& & \\ \end{array}$$

A solution to this problem is \(\lambda = -2\) and \(y(x) = \overline{y}(x)\), x ∈ [0, 1].
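
Indeed, the verification is immediate: since \({}_{0}^{C}{D}_{x}^{\alpha }[\overline{y}] = \overline{y}\), the choice \(y = \overline{y}\) and \(\lambda = -2\) gives

$$ 2\,{}_{0}^{C}{D}_{x}^{\alpha }[\overline{y}](x) + \lambda \overline{y}(x) = 2\overline{y}(x) - 2\overline{y}(x) = 0, $$

so the fractional Euler–Lagrange equation (9.3) is trivially satisfied, while the boundary conditions and the isoperimetric constraint hold by the definitions of \({y}_{1}\) and l.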

The case when y is an extremal of I is also included in the results of [8].

Theorem 9.5.

If y is a minimizer or a maximizer of J on ℰ, subject to the isoperimetric constraint I(y) = l, then there exist two constants \({\lambda }_{0}\) and λ, not both zero, such that

$$\begin{array}{rcl} \left ({\partial }_{2}K {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right )(x) {+ }_{x}{D}_{b}^{\alpha }\left [{\partial }_{ 3}K {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right ](x) {+ }_{a}{D}_{x}^{\beta }\left [{\partial }_{ 4}K {\circ }_{\alpha }^{C}{[y]}_{ \beta }\right ](x) = 0& & \\ \end{array}$$

for all x ∈ [a,b], where \(K = {\lambda }_{0}L + \lambda g\).

Proof.

The same as the proof of Theorem 9.4, but now using the abnormal Lagrange multiplier rule (see, e.g., [33, Theorem 4.1.3]).

5 Transversality Conditions

We now give the natural boundary conditions (also known as transversality conditions) for problems where both the terminal point of integration and the terminal value y(T) are free.

Let

$$\begin{array}{rcl} \mathcal{F} := \left \{(y,T) \in {C}^{1}([a,b]) \times [a,b]\,\vert \,y(a) = {y}_{a}\right \}.& & \\ \end{array}$$

The type of functional we consider now is

$$\begin{array}{rcl} J(y,T) ={ \int \nolimits \nolimits }_{a}^{T}\left (L {\circ }_{ \alpha }^{C}[y]\right )(x)\,\mathrm{d}x,& &\end{array}$$
(9.4)

where the operator α C[ ⋅] is defined by

$$ \begin{array}{rcl} {}_{\alpha }^{C}[y] := x\mapsto \left (x,y(x){,\,}_{ a}^{C}{D}_{ x}^{\alpha }[y](x)\right ).& & \\ \end{array}$$

These problems are investigated in [1] and more general cases in [2].

Theorem 9.6.

Suppose that (y,T) ∈ℱ minimizes or maximizes J defined by (9.4) on ℱ. Then

$$\begin{array}{rcl} \left ({\partial }_{2}L {\circ }_{\alpha }^{C}[y]\right )(x) {+ }_{ x}{D}_{T}^{\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}[y]\right ](x) = 0& &\end{array}$$
(9.5)

for all x ∈ [a,T]. Moreover, the following transversality conditions hold:

$$\begin{array}{rcl} \left (L {\circ }_{\alpha }^{C}[y]\right )(T) = 0\,,{\quad }_{ x}{I}_{T}^{1-\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}[y]\right ](T) = 0.& & \\ \end{array}$$

Proof.

The result is obtained by considering variations y + εh of the function y and variations T + εΔT of T as well, and then applying the Fermat theorem, integration by parts, Leibniz’s rule, and using the arbitrariness of h and ΔT.
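
A condensed version of this computation (a sketch only, for 0 < α < 1 and h with h(a) = 0): after integrating by parts, the vanishing of the first variation of (9.4) reads

$$ \int_{a}^{T}\Bigl[\left({\partial }_{2}L {\circ }_{\alpha }^{C}[y]\right)(x) + {}_{x}{D}_{T}^{\alpha }\left[{\partial }_{3}L {\circ }_{\alpha }^{C}[y]\right](x)\Bigr]h(x)\,\mathrm{d}x + {}_{x}{I}_{T}^{1-\alpha }\left[{\partial }_{3}L {\circ }_{\alpha }^{C}[y]\right](T)\,h(T) + \left(L {\circ }_{\alpha }^{C}[y]\right)(T)\,\Delta T = 0. $$

Choosing h with h(T) = 0 and ΔT = 0 gives (9.5); the arbitrariness of h(T) and of ΔT then yields the two transversality conditions.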

Transversality conditions for several other situations can be easily obtained. Some important examples are:

  • If T is fixed but y(T) is free, then besides the Euler–Lagrange equation (9.5) one obtains the transversality condition

    $$\begin{array}{rcl} {}_{x}{I}_{T}^{1-\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}[y]\right ](T) = 0.& & \\ \end{array}$$
  • If y(T) is given but T is free, then the transversality condition is

    $$\begin{array}{rcl} \left (L {\circ }_{\alpha }^{C}[y]\right )(T) - y'(T) {\cdot }_{ x}{I}_{T}^{1-\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}[y]\right ](T) = 0.& & \\ \end{array}$$
  • If y(T) is not given but is restricted to take values on a certain given curve ψ, that is, y(T) = ψ(T), then

    $$\begin{array}{rcl} \left (\psi '(T) - y'(T)\right ) {\cdot }_{x}{I}_{T}^{1-\alpha }\left [{\partial }_{ 3}L {\circ }_{\alpha }^{C}[y]\right ](T) + \left (L {\circ }_{ \alpha }^{C}[y]\right )(T) = 0.& & \\ \end{array}$$
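
In the classical limit α → 1 these conditions reduce to the familiar ones: since \({}_{x}{I}_{T}^{1-\alpha }\) tends to the identity operator, the condition in the first item becomes

$$ {\partial }_{3}L\left(T,y(T),y'(T)\right) = 0, $$

the standard natural boundary condition, while the condition in the second item becomes \(L - y'\,{\partial }_{3}L = 0\) at x = T, the classical transversality condition for a free terminal time.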