1 Introduction

The theory of time scales is a relatively new area, which was introduced in 1988 by Stefan Hilger in his Ph.D. thesis [36,37,38]. It bridges, generalizes and extends the traditional discrete theory of dynamical systems (difference equations) and the theory for continuous dynamical systems (differential equations) [13] and the various dialects of q-calculus [29, 48] into a single unified theory [13, 14, 43].

The calculus of variations on time scales was introduced in 2004 by Martin Bohner [11] (see also [1, 39]) and has been developing rapidly in the past ten years, mostly due to its great potential for applications, e.g., in biology [13], economics [3, 7, 8, 31, 50] and mathematics [16, 32, 34, 62]. In order to deal with nontraditional applications in economics, where the system dynamics are described on a time scale partly continuous and partly discrete, or to accommodate nonuniformly sampled systems, one needs to work with variational problems defined on a time scale [6, 8, 23, 27].

This survey is organized as follows. In Sect. 2 we review the basic notions of the time-scale calculus: the concepts of delta derivative and delta integral (Sect. 2.1); the analogous backward concepts of nabla differentiation and nabla integration (Sect. 2.2); and the relation between delta/forward and nabla/backward approaches (Sect. 2.3). Then, in Sect. 3, we review the central results of the recent and powerful calculus of variations on time scales. Both delta and nabla approaches are considered (Sects. 3.1 and 3.2, respectively). Our results begin with Sect. 4, where we investigate inverse problems of the calculus of variations on time scales. To the best of our knowledge, and in contrast with the direct problem, which is already well studied in the framework of time scales [62], the inverse problem has not been studied before. Its investigation is part of the Ph.D. thesis of the first author [23]. For the moment, and just for simplicity, let the time scale \(\mathbb {T}\) be the set \(\mathbb {R}\) of real numbers. Given a Lagrangian function L, in the ordinary/direct fundamental problem of the calculus of variations one wants to find extremal curves \(y : [a, b] \rightarrow \mathbb {R}^n\) giving stationary values to the action integral (functional)

$$ \mathscr {I}(y)=\int \limits _{a}^{b} L(t,y(t),y'(t)) dt $$

with respect to variations of y with fixed boundary conditions \(y(a)=y_{a}\) and \(y(b)=y_{b}\). Thus, if in the direct problem we start with a Lagrangian and end up with extremal curves, then one might expect the inverse problem to start with extremal curves and search for a Lagrangian. Such an inverse problem is considered, in the general context of time scales, in Sect. 4.1: we describe a general form of a variational functional having an extremum at a given function \(y_{0}\) under Euler–Lagrange and strengthened Legendre conditions (Theorem 16). In Corollary 2 the form of the Lagrangian L in the particular case of an isolated time scale is presented, and we end Sect. 4.1 with some concrete cases and examples. We proceed with a more common inverse problem of the calculus of variations in Sect. 4.2. Indeed, normally the starting point is not the extremal curves but, instead, the Euler–Lagrange equations that such curves must satisfy:

$$\begin{aligned} \frac{\partial L}{\partial y} - \frac{d}{dt} \frac{\partial L}{\partial y'} = 0 \Leftrightarrow \frac{\partial L}{\partial y} - \frac{\partial ^2 L}{\partial t \partial y'} - \frac{\partial ^2 L}{\partial y \partial y'} y' - \frac{\partial ^2 L}{\partial y' \partial y'} y'' = 0 \end{aligned}$$
(1)

(we are still keeping, for illustrative purposes, \(\mathbb {T} = \mathbb {R}\)). This is what is usually known as the inverse problem of the calculus of variations: start with a second order ordinary differential equation and determine a Lagrangian L (if it exists) whose Euler–Lagrange equation is the same as the given equation. The problem of the variational formulation of differential equations (or the inverse problem of the calculus of variations) dates back to the 19th century. It seems to have been posed by Helmholtz in [35], followed by several results from [20, 21, 40, 54, 64].

There are, however, two different types of inverse problems, depending on the meaning of the phrase “is the same as”. Do we require the equations to be the same, or do we allow multiplication by functions to obtain new but equivalent equations? The first case is often called Helmholtz’s inverse problem: find conditions under which a given differential equation is an Euler–Lagrange equation. The second case is often called the multiplier problem: given \(f(t,y,y',y'') = 0\), does a function \(r(t,y,y')\) exist such that the equation \(r(t,y,y') f(t,y,y',y'') = 0\) is the Euler–Lagrange equation of some functional? In this work we are interested in Helmholtz’s problem. For \(\mathbb {T}= \mathbb {R}\) the answer is classical and well known: the complete solution to Helmholtz’s problem is found in the celebrated 1941 paper of Douglas [22]. Let O be a second order differential operator. Then, the differential equation \(O(y) = 0\) is a second order Euler–Lagrange equation if and only if the Fréchet derivatives of O are self-adjoint.

The following simple example illustrates the difference between the two inverse problems. Consider the second order differential equation

$$\begin{aligned} m y'' + h y' + k y = f. \end{aligned}$$
(2)

This equation is not self-adjoint and, as a consequence, there is no variational problem with such an Euler–Lagrange equation. However, if we multiply the equation by \(p(t) = \exp (ht/m)\), then

$$\begin{aligned} m \frac{d}{dt} \left[ \exp (ht/m) y'\right] + k \, \exp (ht/m) y = \exp (ht/m) f \end{aligned}$$
(3)

and now a variational formulation is possible: the Euler–Lagrange equation (1) associated with

$$ \mathscr {I}(y)=\int \limits _{t_{0}}^{t_{1}} \exp (ht/m) \left[ \frac{1}{2} m y'^2 - \frac{1}{2} k y^2 + f y \right] dt $$

is precisely (3). A recent theory of the calculus of variations allowing one to obtain (2) directly has been developed, but it involves Lagrangians depending on fractional (noninteger) order derivatives [4, 47, 49]. For a survey on the fractional calculus of variations, which is not our subject here, we refer the reader to [56].

For the time scale \(\mathbb {T}= \mathbb {Z}\), available results on the inverse problem of the calculus of variations are more recent and scarcer. In this case Helmholtz’s inverse problem can be formulated as follows: find conditions under which a second order difference equation is a second order discrete Euler–Lagrange equation. Available results in the literature go back to the works of Crăciun and Opriş (1996) and Albu and Opriş (1999) and, more recently, to the works of Hydon and Mansfield (2004) and Bourdin and Cresson (2013) [2, 17, 19, 41].

The main difficulty in obtaining analogues of the classical continuous results in the discrete (or, more generally, in the time-scale) setting is the lack of a chain rule. This is easily seen with a simple example. Let \(f,g: \mathbb {Z} \rightarrow \mathbb {Z}\) be defined by \(f\left( t\right) =t^{3}\) and \(g\left( t\right) =2t\). Then, \(\varDelta \left( f\circ g\right) (t) = \varDelta \left( 8t^{3}\right) =8\left( 3t^{2}+3t+1\right) =24t^{2}+24t+8\), while \(\varDelta f\left( g\left( t\right) \right) \cdot \varDelta g\left( t\right) =\left( 12t^{2}+6t+1\right) 2=24t^{2}+12t+2\). Therefore, \(\varDelta \left( f\circ g\right) (t) \ne \varDelta f\left( g\left( t\right) \right) \varDelta g\left( t\right) \). The difficulties caused by the lack of a chain rule on a general time scale \(\mathbb {T}\), in the context of the inverse problem of the calculus of variations, are discussed in Sect. 4.3.

To deal with the problem, our approach to the inverse problem of the calculus of variations uses an integral perspective instead of the classical differential point of view. As a result, we obtain a useful tool to identify integro-differential equations which are not Euler–Lagrange equations on an arbitrary time scale \(\mathbb {T}\). More precisely, we define the notion of self-adjointness of a first order integro-differential equation (Definition 15) and its equation of variation (Definition 16). Using such property, we prove a necessary condition for an integro-differential equation on an arbitrary time scale \(\mathbb {T}\) to be an Euler–Lagrange equation (Theorem 17). In order to illustrate our results we present Theorem 17 in the particular time scales \(\mathbb {T}\in \left\{ \mathbb {R},h\mathbb {Z},\overline{q^{\mathbb {Z}}}\right\} \), \(h > 0\), \(q>1\) (Corollaries 3–5). Furthermore, we discuss equivalences between (i) the integro-differential equation (20) and the second order differential equation (29) (Proposition 1), and (ii) their equations of variation on an arbitrary time scale \(\mathbb {T}\) ((21) and (30), respectively). As a result, we show that it is impossible to prove the latter equivalence due to the lack of a general chain rule on an arbitrary time scale [12, 13].

In Sect. 5 we address the direct problem of the calculus of variations on time scales by considering a variational problem which may often be found in economics (see [45] and references therein).
We extremize a functional of the calculus of variations that is the composition of a certain scalar function with the delta and nabla integrals of a vector-valued field, possibly subject to boundary conditions and/or isoperimetric constraints. In Sect. 5.1 we provide general Euler–Lagrange equations in integral form (Theorem 18); transversality conditions are given in Sect. 5.2; and Sect. 5.3 considers necessary optimality conditions for isoperimetric problems on an arbitrary time scale. Interesting corollaries and examples are presented in Sect. 5.4. We end with Sect. 6 of conclusions and open problems.

2 Preliminaries

A time scale \(\mathbb {T}\) is an arbitrary nonempty closed subset of \(\mathbb {R}\). The set of real numbers \(\mathbb {R}\), the integers \(\mathbb {Z}\), the natural numbers \(\mathbb {N}\), the nonnegative integers \(\mathbb {N}_{0}\), a union of closed intervals \([0,1]\cup [2,7]\) or the Cantor set are examples of time scales, while the set of rational numbers \(\mathbb {Q}\), the irrational numbers \(\mathbb {R}\setminus \mathbb {Q}\), the complex numbers \(\mathbb {C}\) or an open interval like (0, 1) are not time scales. Throughout this survey we assume that for \(a,b\in \mathbb {T}\), \(a<b\), all intervals are time scale intervals, i.e., \([a,b]=[a,b]_{\mathbb {T}}:=[a,b]\cap \mathbb {T} =\left\{ t\in \mathbb {T}: a\le t\le b\right\} \).

Definition 1

(See Sect. 1.1 of [13]) Let \(\mathbb {T}\) be a time scale and \(t\in \mathbb {T}\). The forward jump operator \(\sigma :\mathbb {T} \rightarrow \mathbb {T}\) is defined by \(\sigma (t):=\inf \left\{ s\in \mathbb {T}: s>t\right\} \) for \(t\ne \sup \mathbb {T}\) and \(\sigma (\sup \mathbb {T}) := \sup \mathbb {T}\) if \(\sup \mathbb {T}<+\infty \). Accordingly, we define the backward jump operator \(\rho :\mathbb {T} \rightarrow \mathbb {T}\) by \(\rho (t):=\sup \left\{ s\in \mathbb {T}: s<t\right\} \) for \(t\ne \inf \mathbb {T}\) and \(\rho (\inf \mathbb {T}) := \inf \mathbb {T}\) if \(\inf \mathbb {T}>-\infty \). The forward graininess function \(\mu :\mathbb {T} \rightarrow [0,\infty )\) is defined by \(\mu (t):=\sigma (t)-t\) and the backward graininess function \(\nu :\mathbb {T} \rightarrow [0,\infty )\) by \(\nu (t):=t-\rho (t)\).

Example 1

The two classical time scales are \(\mathbb {R}\) and \(\mathbb {Z}\), representing the continuous and the purely discrete time, respectively. Other standard examples are the periodic numbers, \(h\mathbb {Z}=\left\{ hk: h>0, k\in \mathbb {Z}\right\} \), and the q-scale

$$ \overline{q^{\mathbb {Z}}} :=q^{\mathbb {Z}}\cup \left\{ 0\right\} =\left\{ q^{k}:q>1, k\in \mathbb {Z}\right\} \cup \left\{ 0\right\} . $$

Sometimes one considers also the time scale \(q^{\mathbb {N}_{0}}=\left\{ q^{k}:q>1, k\in \mathbb {N}_{0}\right\} \). The following time scale is common: \(\mathbb {P}_{a,b}=\bigcup \limits _{k=0}^{\infty } [k(a+b),k(a+b)+a]\), \(a,b>0\).

Table 1 and Example 2 present different forms of jump operators \(\sigma \) and \(\rho \), and graininess functions \(\mu \) and \(\nu \), in specified time scales.

Table 1 Examples of jump operators and graininess functions on different time scales

Example 2

(See Example 1.2 of [65]) Let \(a,b>0\) and consider the time scale

$$ \mathbb {P}_{a,b}=\bigcup \limits _{k=0}^{\infty } [k(a+b),k(a+b)+a]. $$

Then,

$$ \sigma (t)= {\left\{ \begin{array}{ll} t &{}\text {if } t\in A_{1}, \\ t+b &{}\text {if } t\in A_{2}, \end{array}\right. } \quad \quad \quad \rho (t)= {\left\{ \begin{array}{ll} t-b &{}\text {if } t\in B_{1}, \\ t &{}\text {if } t\in B_{2} \end{array}\right. } $$

(see Fig. 1) and

$$ \mu (t)= {\left\{ \begin{array}{ll} 0 &{}\text {if } t\in A_{1}, \\ b &{}\text {if } t\in A_{2}, \end{array}\right. } \quad \quad \quad \nu (t)= {\left\{ \begin{array}{ll} b &{}\text {if } t\in B_{1}, \\ 0 &{}\text {if } t\in B_{2}, \end{array}\right. } $$

where

$$ \bigcup \limits _{k=0}^{\infty } [k(a+b),k(a+b)+a]=A_{1}\cup A_{2}=B_{1}\cup B_{2} $$

with

$$\begin{aligned} A_{1}=\bigcup \limits _{k=0}^{\infty } [k(a+b),k(a+b)+a), \quad B_{1}=\bigcup \limits _{k=0}^{\infty } \left\{ k(a+b)\right\} , \end{aligned}$$
$$\begin{aligned} A_{2}=\bigcup \limits _{k=0}^{\infty } \left\{ k(a+b)+a\right\} , \quad B_{2}=\bigcup \limits _{k=0}^{\infty } (k(a+b),k(a+b)+a]. \end{aligned}$$
Fig. 1 Jump operators of the time scale \(\mathbb {T}=\mathbb {P}_{a,b}\)

In the time-scale theory the following classification of points is used:

  • A point \(t\in \mathbb {T}\) is called right-scattered or left-scattered if \(\sigma (t)>t\) or \(\rho (t)<t\), respectively.

  • A point t is isolated if \(\rho (t)<t<\sigma (t)\).

  • If \(t<\sup \mathbb {T}\) and \(\sigma (t)=t\), then t is called right-dense; if \(t>\inf \mathbb {T}\) and \(\rho (t)=t\), then t is called left-dense.

  • We say that t is dense if \(\rho (t)=t=\sigma (t)\).

Definition 2

(See Sect. 1 of [55]) A time scale \(\mathbb {T}\) is said to be an isolated time scale provided that, given any \(t \in \mathbb {T}\), there is a \(\delta > 0\) such that \((t - \delta , t+\delta ) \cap \mathbb {T} = \{t\}\).

Definition 3

(See [9]) A time scale \(\mathbb {T}\) is said to be regular if the following two conditions are satisfied simultaneously for all \(t\in \mathbb {T}\): \(\sigma (\rho (t))=t\) and \(\rho (\sigma (t))=t\).

2.1 The Delta Derivative and the Delta Integral

If \(f:\mathbb {T}\rightarrow \mathbb {R}\), then we define \(f^{\sigma }:\mathbb {T}\rightarrow \mathbb {R}\) by \(f^{\sigma }(t):=f(\sigma (t))\) for all \(t\in \mathbb {T}\). The delta derivative (or Hilger derivative) of a function \(f:\mathbb {T}\rightarrow \mathbb {R}\) is defined for points in the set \(\mathbb {T}^{\kappa }\), where

$$\begin{aligned} \mathbb {T}^{\kappa } := {\left\{ \begin{array}{ll} \mathbb {T}\setminus \left\{ \sup \mathbb {T}\right\} &{} \text { if } \rho (\sup \mathbb {T})<\sup \mathbb {T}<\infty ,\\ \mathbb {T}&{} \text { otherwise. } \end{array}\right. } \end{aligned}$$

Let us define the sets \(\mathbb {T}^{\kappa ^n}\), \(n\ge 2\), inductively: \(\mathbb {T}^{\kappa ^1} :=\mathbb {T}^\kappa \) and \(\mathbb {T}^{\kappa ^n} := \left( \mathbb {T}^{\kappa ^{n-1}}\right) ^\kappa \), \(n\ge 2\). We define delta differentiability in the following way.

Definition 4

(Sect. 1.1 of [13]) Let \(f:\mathbb {T}\rightarrow \mathbb {R}\) and \(t\in \mathbb {T}^{\kappa }\). We define \(f^{\varDelta }(t)\) to be the number (provided it exists) with the property that given any \(\varepsilon >0\), there is a neighborhood U (\(U=(t-\delta , t+\delta )\cap \mathbb {T}\) for some \(\delta >0\)) of t such that

$$\begin{aligned} \left| f^{\sigma }(t)-f(s)-f^{\varDelta }(t)\left( \sigma (t)-s\right) \right| \le \varepsilon \left| \sigma (t)-s\right| \text{ for } \text{ all } s\in U. \end{aligned}$$

A function f is delta differentiable on \(\mathbb {T}^{\kappa }\) provided \(f^{\varDelta }(t)\) exists for all \(t\in \mathbb {T}^{\kappa }\). Then, \(f^{\varDelta }:\mathbb {T}^{\kappa }\rightarrow \mathbb {R}\) is called the delta derivative of f on \(\mathbb {T}^{\kappa }\).

Theorem 1

(Theorem 1.16 of [13]) Let \(f:\mathbb {T} \rightarrow \mathbb {R}\) and \(t\in \mathbb {T}^{\kappa }\). The following hold:

1.

    If f is delta differentiable at t, then f is continuous at t.

2.

    If f is continuous at t and t is right-scattered, then f is delta differentiable at t with

    $$ f^{\varDelta }(t)=\frac{f^\sigma (t)-f(t)}{\mu (t)}. $$
3.

    If t is right-dense, then f is delta differentiable at t if and only if the limit

    $$ \lim \limits _{s\rightarrow t}\frac{f(t)-f(s)}{t-s} $$

    exists as a finite number. In this case,

    $$ f^{\varDelta }(t)=\lim \limits _{s\rightarrow t}\frac{f(t)-f(s)}{t-s}. $$
4.

    If f is delta differentiable at t, then

    $$f^\sigma (t)=f(t)+\mu (t)f^{\varDelta }(t).$$

The next example is a consequence of Theorem 1 and presents different forms of the delta derivative on specific time scales.

Example 3

Let \(\mathbb {T}\) be a time scale.

1.

    If \(\mathbb {T}=\mathbb {R}\), then \(f:\mathbb {R} \rightarrow \mathbb {R}\) is delta differentiable at \(t\in \mathbb {R}\) if and only if

    $$\begin{aligned} f^\varDelta (t)=\lim \limits _{s\rightarrow t}\frac{f(t)-f(s)}{t-s} \end{aligned}$$

    exists, i.e., if and only if f is differentiable (in the ordinary sense) at t and in this case we have \(f^{\varDelta }(t)=f'(t)\).

2.

    If \(\mathbb {T}=h\mathbb {Z}\), \(h > 0\), then \(f:h\mathbb {Z} \rightarrow \mathbb {R}\) is delta differentiable at \(t\in h\mathbb {Z}\) with

    $$\begin{aligned} f^{\varDelta }(t)=\frac{f(\sigma (t))-f(t)}{\mu (t)}=\frac{f(t+h)-f(t)}{h}=:\varDelta _{h}f(t). \end{aligned}$$

    In the particular case \(h=1\) we have \(f^{\varDelta }(t)=\varDelta f(t)\), where \(\varDelta \) is the usual forward difference operator.

3.

    If \(\mathbb {T}=\overline{q^{\mathbb {Z}}}\), \(q>1\), then for a delta differentiable function \(f:\overline{q^{\mathbb {Z}}}\rightarrow \mathbb {R}\) we have

    $$\begin{aligned} f^{\varDelta }(t)=\frac{f(\sigma (t))-f(t)}{\mu (t)}=\frac{f(qt)-f(t)}{(q-1)t}=:\varDelta _{q}f(t) \end{aligned}$$

    for all \(t\in \overline{q^{\mathbb {Z}}}\setminus \left\{ 0\right\} \), i.e., we get the usual Jackson derivative of quantum calculus [42, 48].

Now we formulate some basic properties of the delta derivative on time scales.

Theorem 2

(Theorem 1.20 of [13]) Let \(f,g:\mathbb {T} \rightarrow \mathbb {R}\) be delta differentiable at \(t\in \mathbb {T^{\kappa }}\). Then,

1.

    the sum \(f+g:\mathbb {T} \rightarrow \mathbb {R}\) is delta differentiable at t with

    $$\begin{aligned} (f+g)^{\varDelta }(t)=f^{\varDelta }(t)+g^{\varDelta }(t); \end{aligned}$$
2.

    for any constant \(\alpha \), \(\alpha f:\mathbb {T}\rightarrow \mathbb {R}\) is delta differentiable at t with

    $$\begin{aligned} (\alpha f)^{\varDelta }(t)=\alpha f^{\varDelta }(t); \end{aligned}$$
3.

    the product \(fg:\mathbb {T}\rightarrow \mathbb {R}\) is delta differentiable at t with

    $$\begin{aligned} (fg)^{\varDelta }(t)=f^{\varDelta }(t)g(t)+f^{\sigma }(t)g^{\varDelta }(t) =f(t)g^{\varDelta }(t)+f^{\varDelta }(t)g^{\sigma }(t); \end{aligned}$$
4.

    if \(g(t)g^{\sigma }(t)\ne 0\), then f / g is delta differentiable at t with

    $$\begin{aligned} \left( \frac{f}{g}\right) ^{\varDelta }(t) =\frac{f^{\varDelta }(t)g(t)-f(t)g^{\varDelta }(t)}{g(t)g^{\sigma }(t)}. \end{aligned}$$

Now we introduce the theory of delta integration on time scales. We start by defining the associated class of functions.

Definition 5

(Sect. 1.4 of [14]) A function \(f:\mathbb {T}\rightarrow \mathbb {R}\) is called rd-continuous provided it is continuous at right-dense points in \(\mathbb {T}\) and its left-sided limits exist (finite) at all left-dense points in \(\mathbb {T}\).

The set of all rd-continuous functions \(f:\mathbb {T} \rightarrow \mathbb {R}\) is denoted by \(C_{rd}=C_{rd}(\mathbb {T})=C_{rd}(\mathbb {T},\mathbb {R})\). The set of functions \(f:\mathbb {T}\rightarrow \mathbb {R}\) that are delta differentiable and whose derivative is rd-continuous is denoted by \(C^{1}_{rd}=C_{rd}^{1}(\mathbb {T})=C^{1}_{rd}(\mathbb {T},\mathbb {R})\).

Definition 6

(Definition 1.71 of [13]) A function \(F:\mathbb {T} \rightarrow \mathbb {R}\) is called a delta antiderivative of \(f:\mathbb {T} \rightarrow \mathbb {R}\) provided \(F^{\varDelta }(t)=f(t)\) for all \(t\in \mathbb {T}^{\kappa }\).

Definition 7

Let \(\mathbb {T}\) be a time scale and \(a,b\in \mathbb {T}\). If \(f:\mathbb {T}^{\kappa } \rightarrow \mathbb {R}\) is an rd-continuous function and \(F:\mathbb {T} \rightarrow \mathbb {R}\) is a delta antiderivative of f, then the Cauchy delta integral is defined by

$$\begin{aligned} \int \limits _{a}^{b} f(t)\varDelta t := F(b)-F(a). \end{aligned}$$

Theorem 3

(Theorem 1.74 of [13]) Every rd-continuous function f has an antiderivative F. In particular, if \(t_{0}\in \mathbb {T}\), then F defined by

$$\begin{aligned} F(t):=\int \limits _{t_{0}}^{t} f(\tau )\varDelta \tau , \quad t\in \mathbb {T}, \end{aligned}$$

is an antiderivative of f.

Theorem 4

(Theorem 1.75 of [13]) If \(f\in C_{rd}\), then \(\displaystyle \int \limits _{t}^{\sigma (t)}f(\tau )\varDelta \tau =\mu (t)f(t), t\in \mathbb {T}^{\kappa }\).

Let us see two special cases of the delta integral.

Example 4

Let \(a,b\in \mathbb {T}\) and \(f:\mathbb {T} \rightarrow \mathbb {R}\) be rd-continuous.

1.

    If \(\mathbb {T}=\mathbb {R}\), then

    $$\begin{aligned} \int \limits _{a}^{b}f(t)\varDelta t=\int \limits _{a}^{b}f(t)dt, \end{aligned}$$

    where the integral on the right hand side is the usual Riemann integral.

2.

    If \([a,b]\) consists of only isolated points, then

    $$\begin{aligned} \int \limits _{a}^{b} f(t)\varDelta t= {\left\{ \begin{array}{ll} \sum \limits _{t\in [a,b)}\mu (t)f(t), &{} \hbox { if } a<b, \\ 0, &{} \hbox { if } a=b,\\ -\sum \limits _{t\in [b,a)}\mu (t)f(t), &{} \hbox { if } a>b. \end{array}\right. } \end{aligned}$$

Now we present some useful properties of the delta integral.

Theorem 5

(Theorem 1.77 of [13]) If \(a,b,c\in \mathbb {T}\), \(a< c < b\), \(\alpha \in \mathbb {R}\), and \(f,g \in C_{rd}(\mathbb {T}, \mathbb {R})\), then:

1.

    \(\int \limits _{a}^{b}(f(t)+g(t))\varDelta t =\int \limits _{a}^{b} f(t)\varDelta t+\int \limits _{a}^{b}g(t)\varDelta t\),

2.

    \(\int \limits _{a}^{b}\alpha f(t)\varDelta t=\alpha \int \limits _{a}^{b} f(t)\varDelta t\),

3.

    \(\int \limits _{a}^{b}f(t)\varDelta t =-\int \limits _{b}^{a}f(t)\varDelta t\),

4.

    \(\int \limits _{a}^{b} f(t)\varDelta t =\int \limits _{a}^{c} f(t)\varDelta t +\int \limits _{c}^{b} f(t)\varDelta t\),

5.

    \(\int \limits _{a}^{a} f(t)\varDelta t=0\),

6.

    \(\int \limits _{a}^{b}f(t)g^{\varDelta }(t)\varDelta t =\left. f(t)g(t)\right| ^{t=b}_{t=a} -\int \limits _{a}^{b} f^{\varDelta }(t)g^\sigma (t)\varDelta t\),

7.

    \(\int \limits _{a}^{b} f^\sigma (t) g^{\varDelta }(t)\varDelta t =\left. f(t)g(t)\right| ^{t=b}_{t=a} -\int \limits _{a}^{b}f^{\varDelta }(t)g(t)\varDelta t\).

2.2 The Nabla Derivative and the Nabla Integral

The nabla calculus is similar to the delta one of Sect. 2.1. The difference is that the backward jump operator \(\rho \) takes the role of the forward jump operator \(\sigma \). For a function \(f:\mathbb {T}\rightarrow \mathbb {R}\) we define \(f^{\rho }:\mathbb {T}\rightarrow \mathbb {R}\) by \(f^{\rho }(t):=f(\rho (t))\). If \(\mathbb {T}\) has a right-scattered minimum m, then we define \(\mathbb {T}_{\kappa }:=\mathbb {T}-\left\{ m\right\} \); otherwise, we set \(\mathbb {T}_{\kappa }:=\mathbb {T}\):

$$\begin{aligned} \mathbb {T}_{\kappa } := {\left\{ \begin{array}{ll} \mathbb {T}\setminus \left\{ \inf \mathbb {T}\right\} &{} \text { if } -\infty<\inf \mathbb {T}<\sigma (\inf \mathbb {T}),\\ \mathbb {T}&{} \text { otherwise}. \end{array}\right. } \end{aligned}$$

Let us define the sets \(\mathbb {T}_{\kappa ^{n}}\), \(n\ge 2\), inductively: \(\mathbb {T}_{\kappa ^1} := \mathbb {T}_\kappa \) and \(\mathbb {T}_{\kappa ^n} := (\mathbb {T}_{\kappa ^{n-1}})_\kappa \), \(n\ge 2\). Finally, we define \(\mathbb {T}_{\kappa }^{\kappa } := \mathbb {T}_{\kappa } \cap \mathbb {T}^{\kappa }\). The definition of the nabla derivative of a function \(f:\mathbb {T}\rightarrow \mathbb {R}\) at a point \(t\in \mathbb {T}_{\kappa }\) is similar to the delta case (cf. Definition 4).

Definition 8

(Sect. 3.1 of [14]) We say that a function \(f:\mathbb {T} \rightarrow \mathbb {R}\) is nabla differentiable at \(t\in \mathbb {T}_{\kappa }\) if there is a number \(f^{\nabla }(t)\) such that for all \(\varepsilon >0\) there exists a neighborhood U of t (i.e., \(U=(t-\delta , t+\delta )\cap \mathbb {T}\) for some \(\delta >0\)) such that

$$\begin{aligned} |f^{\rho }(t)-f(s)-f^{\nabla }(t)(\rho (t)-s)| \le \varepsilon |\rho (t)-s| \, \text{ for } \text{ all } s\in U. \end{aligned}$$

We say that \(f^{\nabla }(t)\) is the nabla derivative of f at t. Moreover, f is said to be nabla differentiable on \(\mathbb {T}\) provided \(f^{\nabla }(t)\) exists for all \(t\in \mathbb {T}_{\kappa }\).

The main properties of the nabla derivative are similar to those given in Theorems 1 and 2, and can be found, respectively, in Theorems 8.39 and 8.41 of [13].

Example 5

If \(\mathbb {T}=\mathbb {R}\), then \(f^{\nabla }(t)=f'(t)\). If \(\mathbb {T}=h\mathbb {Z}\), \(h>0\), then

$$ f^{\nabla }(t)=\frac{f(t)-f(t-h)}{h} =: \nabla _{h} f(t). $$

For \(h=1\) the operator \(\nabla _{h}\) reduces to the standard backward difference operator \(\nabla f(t) = f(t) - f(t-1)\).

We now briefly recall the theory of nabla integration on time scales. As in the delta case, we first define a suitable class of functions.

Definition 9

(Sect. 3.1 of [14]) Let \(\mathbb {T}\) be a time scale and \(f:\mathbb {T}\rightarrow \mathbb {R}\). We say that f is ld-continuous if it is continuous at left-dense points \(t\in \mathbb {T}\) and its right-sided limits exist (finite) at all right-dense points.

Remark 1

If \(\mathbb {T}=\mathbb {R}\), then f is ld-continuous if and only if f is continuous. If \(\mathbb {T}=\mathbb {Z}\), then any function is ld-continuous.

The set of all ld-continuous functions \(f:\mathbb {T}\rightarrow \mathbb {R}\) is denoted by \(C_{ld}=C_{ld}(\mathbb {T})=C_{ld}(\mathbb {T},\mathbb {R})\); the set of all nabla differentiable functions with ld-continuous derivative by \(C^{1}_{ld}=C^{1}_{ld}(\mathbb {T})=C^{1}_{ld}(\mathbb {T},\mathbb {R})\). The definition of the nabla integral on time scales follows.

Definition 10

(Definition 8.42 of [13]) A function \(F : \mathbb {T} \rightarrow \mathbb {R}\) is called a nabla antiderivative of \(f : \mathbb {T} \rightarrow \mathbb {R}\) provided \(F^\nabla (t) = f(t)\) for all \(t \in \mathbb {T}_\kappa \). In this case we define the nabla integral of f from a to b (\(a, b \in \mathbb {T}\)) by

$$ \int _a^b f(t) \nabla t := F(b) - F(a). $$

Theorem 6

(Theorem 8.45 of [13] or Theorem 11 of [44]) Every ld-continuous function f has a nabla antiderivative F. In particular, if \(a \in \mathbb {T}\), then F defined by

$$ F(t) = \int _a^t f(\tau ) \nabla \tau , \quad t \in \mathbb {T}, $$

is a nabla antiderivative of f.

Theorem 7

(Theorem 8.46 of [13]) If \(f:\mathbb {T}\rightarrow \mathbb {R}\) is ld-continuous and \(t\in \mathbb {T}_{\kappa }\), then

$$\int _{\rho (t)}^{t}f(\tau )\nabla \tau =\nu (t)f(t).$$

Properties of the nabla integral, analogous to the ones of the delta integral given in Theorem 5, can be found in Theorem 8.47 of [13]. Here we give two special cases of the nabla integral.

Theorem 8

(See Theorem 8.48 of [13]) Assume \(a,b\in \mathbb {T}\) and \(f:\mathbb {T}\rightarrow \mathbb {R}\) is ld-continuous.

1.

    If \(\mathbb {T}=\mathbb {R}\), then

    $$ \int \limits _{a}^{b}f(t)\nabla t=\int \limits _{a}^{b} f(t)dt, $$

    where the integral on the right hand side is the Riemann integral.

2.

    If \(\mathbb {T}\) consists of only isolated points, then

    $$\begin{aligned} \int \limits _{a}^{b}f(t)\nabla t = {\left\{ \begin{array}{ll} \sum \limits _{t\in (a,b]}\nu (t)f(t), &{} \hbox { if } a<b, \\ 0, &{} \hbox { if } a=b,\\ -\sum \limits _{t\in (b,a]}\nu (t)f(t), &{} \hbox { if } a>b. \end{array}\right. } \end{aligned}$$

2.3 Relation Between Delta and Nabla Operators

It is possible to relate the approach of Sect. 2.1 with that of Sect. 2.2.

Theorem 9

(See Theorems 2.5 and 2.6 of [5]) If \(f:\mathbb {T}\rightarrow \mathbb {R}\) is delta differentiable on \(\mathbb {T}^{\kappa }\) and if \(f^{\varDelta }\) is continuous on \(\mathbb {T}^{\kappa }\), then f is nabla differentiable on \(\mathbb {T}_{\kappa }\) with

$$\begin{aligned} f^{\nabla }(t)=\left( f^{\varDelta }\right) ^{\rho }(t) \, \text { for all } t\in \mathbb {T}_{\kappa }. \end{aligned}$$

If \(f:\mathbb {T}\rightarrow \mathbb {R}\) is nabla differentiable on \(\mathbb {T}_{\kappa }\) and if \(f^{\nabla }\) is continuous on \(\mathbb {T}_{\kappa }\), then f is delta differentiable on \(\mathbb {T}^{\kappa }\) with

$$\begin{aligned} f^{\varDelta }(t)=\left( f^{\nabla }\right) ^{\sigma }(t) \, \text { for all } t\in \mathbb {T}^{\kappa }. \end{aligned}$$
(4)

Theorem 10

(Proposition 17 of [44]) If function \(f:\mathbb {T}\rightarrow \mathbb {R}\) is continuous, then for all \(a,b\in \mathbb {T}\) with \(a<b\) we have

$$\begin{aligned} \int \limits _{a}^{b}f(t)\varDelta t =\int \limits _{a}^{b}f^{\rho }(t)\nabla t,\\ \int \limits _{a}^{b}f(t)\nabla t =\int \limits _{a}^{b}f^{\sigma }(t)\varDelta t. \end{aligned}$$

For a more general theory relating delta and nabla approaches, we refer the reader to the duality theory of Caputo [18].

3 Direct Problems of the Calculus of Variations on Time Scales

There are two available approaches to the (direct) calculus of variations on time scales. The first one, the delta approach, is widely described in the literature (see, e.g., [11, 13,14,15, 30, 31, 39, 44, 51, 62, 65]). The second one, the nabla approach, was introduced mainly due to its applications in economics (see, e.g., [5,6,7,8]). It has been shown that these two types of calculus of variations are dual [18, 33, 46].

3.1 The Delta Approach to the Calculus of Variations

In this section we present the basic information about the delta calculus of variations on time scales. Let \(\mathbb {T}\) be a given time scale with at least three points, and \(a,b\in \mathbb {T}\), \(a<b\), \(a=\min \mathbb {T}\) and \(b=\max \mathbb {T}\). Consider the following variational problem on the time scale \(\mathbb {T}\):

$$\begin{aligned} \mathscr {L}[y]=\int \limits _{a}^{b} L\left( t, y^{\sigma }(t),y^{\varDelta }(t)\right) \varDelta t \longrightarrow \min \end{aligned}$$
(5)

subject to the boundary conditions

$$\begin{aligned} y(a)=y_{a}, \quad y(b)=y_{b}, \quad y_{a},y_{b}\in \mathbb {R}^{n}, \quad n\in \mathbb {N}. \end{aligned}$$
(6)

Definition 11

A function \(y\in C_{rd}^{1}(\mathbb {T},\mathbb {R}^{n})\) is said to be an admissible path (function) to problem (5)–(6) if it satisfies the given boundary conditions \(y(a)=y_{a}\), \(y(b)=y_{b}\).

In what follows the Lagrangian L is understood as a function \(L:\mathbb {T}\times \mathbb {R}^{2n}\rightarrow \mathbb {R}\), \((t,y,v) \rightarrow L(t,y,v)\), and by \(L_y\) and \(L_v\) we denote the partial derivatives of L with respect to y and v, respectively. Similar notation is used for second order partial derivatives. We assume that \(L(t,\cdot ,\cdot )\) is differentiable in (y, v); \(L(t,\cdot ,\cdot )\), \(L_{y}(t,\cdot ,\cdot )\) and \(L_{v}(t,\cdot ,\cdot )\) are continuous at \(\left( y^{\sigma }(t),y^{\varDelta }(t)\right) \) uniformly in t and rd-continuous in t for any admissible path y. Let us consider the following norm in \(C_{rd}^{1}\):

$$ \Vert y\Vert _{C^{1}_{rd}} = \sup _{t\in [a,b]}\Vert y(t)\Vert +\sup _{t\in [a,b]^{\kappa }}\Vert y^{\varDelta }(t)\Vert , $$

where \(\Vert \cdot \Vert \) is the Euclidean norm in \(\mathbb {R}^n\).

Definition 12

We say that an admissible function \(\hat{y}\in C^{1}_{rd}(\mathbb {T};\mathbb {R}^{n})\) is a local minimizer (respectively, a local maximizer) to problem (5)–(6) if there exists \(\delta >0\) such that \(\mathscr {L}[\hat{y}] \le \mathscr {L}[y]\) (respectively, \(\mathscr {L}[\hat{y}]\ge \mathscr {L}[y]\)) for all admissible functions \(y\in C^{1}_{rd}(\mathbb {T}; \mathbb {R}^{n})\) satisfying the inequality \(||y-\hat{y}||_{C_{rd}^{1}}<\delta \).

Local minimizers (or maximizers) to problem (5)–(6) fulfill the delta differential Euler–Lagrange equation.

Theorem 11

(Delta differential Euler–Lagrange equation – see Theorem 4.2 of [11]) If \(\hat{y}\in C^{2}_{rd}(\mathbb {T}; \mathbb {R}^{n})\) is a local minimizer to (5)–(6), then the Euler–Lagrange equation (in the delta differential form)

$$\begin{aligned} L^{\varDelta }_{v}\left( t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) =L_{y}\left( t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) \end{aligned}$$

holds for \(t\in [a,b]^{\kappa }\).

The next theorem provides the delta integral Euler–Lagrange equation.

Theorem 12

(Delta integral Euler–Lagrange equation – see [30, 39]) If \(\hat{y}(t)\in C_{rd}^{1}(\mathbb {T}; \mathbb {R}^{n})\) is a local minimizer of the variational problem (5)–(6), then there exists a vector \(c\in \mathbb {R}^{n}\) such that the Euler–Lagrange equation (in the delta integral form)

$$\begin{aligned} L_{v}\left( t, \hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) =\int \limits _{a}^{t}L_{y}(\tau , \hat{y}^{\sigma }(\tau ),\hat{y}^{\varDelta }(\tau ))\varDelta \tau +c^{T} \end{aligned}$$
(7)

holds for \(t\in [a,b]^{\kappa }\).

In the proof of Theorems 11 and 12 a time scale version of the Dubois–Reymond lemma is used.

Lemma 1

(See [11, 31]) Let \(f\in C_{rd}\), \(f:[a,b]\rightarrow \mathbb {R}^{n}\). Then

$$ \int \limits _{a}^{b}f^{T}(t)\eta ^{\varDelta }(t)\varDelta t=0 $$

holds for all \(\eta \in C^{1}_{rd}([a,b],\mathbb {R}^{n})\) with \(\eta (a)=\eta (b)=0\) if and only if there exists \(c\in \mathbb {R}^{n}\) such that \(f(t) = c\) for all \(t\in [a,b]^{\kappa }\).

The next theorem gives a second order necessary optimality condition for problem (5)–(6).

Theorem 13

(Legendre condition – see Result 1.3 of [11]) If \(\hat{y}\in C^{2}_{rd}(\mathbb {T}; \mathbb {R}^{n})\) is a local minimizer of the variational problem (5)–(6), then

$$\begin{aligned} A(t)+\mu (t)\left\{ C(t)+C^{T}(t)+\mu (t)B(t) +(\mu (\sigma (t)))^{\dag }A(\sigma (t))\right\} \ge 0, \end{aligned}$$
(8)

\(t\in [a,b]^{\kappa ^{2}}\), where

$$\begin{aligned} \begin{aligned}&A(t)=L_{vv}\left( t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) ,\\&B(t)=L_{yy}\left( t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) ,\\&C(t)=L_{yv}\left( t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t)\right) \end{aligned} \end{aligned}$$

and where \(\alpha ^{\dag }=\frac{1}{\alpha }\) if \(\alpha \in \mathbb {R}\setminus \lbrace 0 \rbrace \) and \(0^{\dag }=0\).

Remark 2

If (8) holds with the strict inequality “>”, then it is called the strengthened Legendre condition. Note that for \(\mathbb {T}=\mathbb {R}\), where \(\mu (t)\equiv 0\), condition (8) reduces to the classical Legendre condition \(A(t)\ge 0\), \(t\in [a,b]\).

3.2 The Nabla Approach to the Calculus of Variations

In this section we consider a problem of the calculus of variations that involves a functional with a nabla derivative and a nabla integral. The motivation to study such variational problems comes from applications, in particular from economics [7, 8]. Let \(\mathbb {T}\) be a given time scale, which has sufficiently many points in order for all calculations to make sense, and let \(a,b\in \mathbb {T}\), \(a<b\). The problem consists of minimizing or maximizing

$$\begin{aligned} \mathscr {L}[y]=\int \limits _{a}^{b} L\left( t, y^{\rho }(t), y^{\nabla }(t)\right) \nabla t \end{aligned}$$
(9)

in the class of functions \(y\in C^{1}_{ld}(\mathbb {T};\mathbb {R}^{n})\) subject to the boundary conditions

$$\begin{aligned} y(a)=y_{a}, \quad y(b)=y_{b}, \quad y_{a}, y_{b}\in \mathbb {R}^{n}, \quad n\in \mathbb {N}. \end{aligned}$$
(10)

Definition 13

A function \(y\in C_{ld}^{1}(\mathbb {T},\mathbb {R}^{n})\) is said to be an admissible path (function) to problem (9)–(10) if it satisfies the given boundary conditions \(y(a)=y_{a}\), \(y(b)=y_{b}\).

As before, the Lagrangian L is understood as a function \(L:\mathbb {T}\times \mathbb {R}^{2n}\rightarrow \mathbb {R}\), \((t,y,v) \rightarrow L(t,y,v)\). We assume that \(L(t,\cdot ,\cdot )\) is differentiable in (y, v); \(L(t,\cdot ,\cdot )\), \(L_{y}(t,\cdot ,\cdot )\) and \(L_{v}(t,\cdot ,\cdot )\) are continuous at \(\left( y^{\rho }(t),y^{\nabla }(t)\right) \) uniformly in t and ld-continuous in t for any admissible path y. Let us consider the following norm in \(C_{ld}^{1}\):

$$ \Vert y\Vert _{C^{1}_{ld}} = \sup _{t\in [a,b]}\Vert y(t)\Vert +\sup _{t\in [a,b]_{\kappa }}\Vert y^{\nabla }(t)\Vert $$

with \(\Vert \cdot \Vert \) the Euclidean norm in \(\mathbb {R}^n\).

Definition 14

(See [3]) We say that an admissible function \(\hat{y}\in C^{1}_{ld}(\mathbb {T};\mathbb {R}^{n})\) is a local minimizer (respectively, a local maximizer) for the variational problem (9)–(10) if there exists \(\delta >0\) such that \(\mathscr {L}[\hat{y}]\le \mathscr {L} [y]\) (respectively, \(\mathscr {L}[\hat{y}]\ge \mathscr {L} [y]\)) for all admissible \(y\in C^{1}_{ld}(\mathbb {T}; \mathbb {R}^{n})\) satisfying the inequality \(||y-\hat{y}||_{C^{1}_{ld}} <\delta \).

As a first order necessary optimality condition for the nabla variational problem (9)–(10) on time scales, the Euler–Lagrange equation takes the following form.

Theorem 14

(Nabla Euler–Lagrange equation – see [62]) If a function \(\hat{y}\in C_{ld}^{1}(\mathbb {T};\mathbb {R}^{n})\) provides a local extremum to the variational problem (9)–(10), then \(\hat{y}\) satisfies the Euler–Lagrange equation (in the nabla differential form)

$$\begin{aligned} L^{\nabla }_{v}\left( t, \hat{y}^{\rho }(t), \hat{y}^{\nabla }(t)\right) =L_{y}\left( t, \hat{y}^{\rho }(t), \hat{y}^{\nabla }(t)\right) \end{aligned}$$

for all \(t\in [a,b]_{\kappa }\).

Now we present the fundamental lemma of the nabla calculus of variations on time scales.

Lemma 2

(See [50]) Let \(f\in C_{ld}([a,b],\mathbb {R}^{n})\). If

$$ \int \limits _{a}^{b}f(t)\eta ^{\nabla }(t)\nabla t=0 $$

for all \(\eta \in C^{1}_{ld}([a,b],\mathbb {R}^{n})\) with \(\eta (a)=\eta (b)=0\), then there exists \(c\in \mathbb {R}^{n}\) such that \(f(t)=c\) for all \(t\in [a,b]_{\kappa }\).

For a good survey on the direct calculus of variations on time scales, covering both delta and nabla approaches, we refer the reader to [62].

4 Inverse Problems of the Calculus of Variations on Time Scales

This section is devoted to inverse problems of the calculus of variations on an arbitrary time scale. To the best of our knowledge, the inverse problem had not been studied before 2014 [23, 26, 28] in the framework of time scales, in contrast with the direct problem, which establishes dynamic equations of Euler–Lagrange type for time-scale variational problems and has now been investigated for ten years, since 2004 [11]. To begin (Sect. 4.1), we consider an inverse extremal problem associated with the following fundamental problem of the calculus of variations: to minimize

$$\begin{aligned} \mathscr {L}[y]=\int \limits _{a}^{b} L\left( t,y^{\sigma }(t),y^{\varDelta }(t)\right) \varDelta t \end{aligned}$$
(11)

subject to the boundary conditions \(y(a)=y_{0}(a)\), \(y(b)=y_{0}(b)\) on a given time scale \(\mathbb {T}\). The Euler–Lagrange equation and the strengthened Legendre condition are used in order to describe a general form of a variational functional (11) that attains an extremum at a given function \(y_0\). Then, in Sect. 4.2, we introduce a completely different approach to the inverse problem of the calculus of variations, using an integral perspective instead of the classical differential point of view [17, 21]. We present a sufficient condition of self-adjointness for an integro-differential equation (Lemma 3). Using this property, we prove a necessary condition for an integro-differential equation on an arbitrary time scale \(\mathbb {T}\) to be an Euler–Lagrange equation (Theorem 17), related to a property of self-adjointness (Definition 15) of its equation of variation (Definition 16).

4.1 A General Form of the Lagrangian

The problem under our consideration is to find a general form of the variational functional

$$\begin{aligned} \mathscr {L}[y]=\int \limits _{a}^{b} L\left( t,y^{\sigma }(t),y^{\varDelta }(t)\right) \varDelta t \end{aligned}$$
(12)

subject to the boundary conditions \(y(a)=y(b)=0\), possessing a local minimum at zero, under the Euler–Lagrange and the strengthened Legendre conditions. We assume that \(L(t,\cdot ,\cdot )\) is a \(C^{2}\)-function with respect to (y, v) uniformly in t, and L, \(L_{y}\), \(L_{v}\), \(L_{vv}\in C_{rd}\) for any admissible path \(y(\cdot )\). Observe that under our assumptions, by Taylor’s theorem, we may write L, with the big O notation, in the form

$$\begin{aligned} L(t,y,v)=P(t, y) +Q(t, y) v +\frac{1}{2} R(t, y,0)v^{2} + O(v^3), \end{aligned}$$
(13)

where

$$\begin{aligned} \begin{aligned} P(t, y)&= L(t, y,0),\\ Q(t, y)&= L_{v}(t, y,0),\\ R(t, y,0)&= L_{vv}(t, y,0). \end{aligned} \end{aligned}$$
(14)

Let \(R(t, y, v) = R(t, y,0) + O(v)\). Then, one can write (13) as

$$\begin{aligned} L(t, y,v)=P(t, y) +Q(t, y) v +\frac{1}{2} R(t, y, v) v^{2}. \end{aligned}$$

Now the idea is to find general forms of \(P(t, y^{\sigma }(t))\), \(Q(t, y^{\sigma }(t))\) and \(R(t, y^{\sigma }(t), y^{\varDelta }(t))\) using the Euler–Lagrange (7) and the strengthened Legendre (8) conditions with notation (14). We use the Euler–Lagrange equation (7) and choose an arbitrary function \(P(t,y^{\sigma }(t))\) such that \(P(t,\cdot )\in C^{2}\) with respect to the second variable, uniformly in t, with P and \(P_y\) rd-continuous in t for all admissible y. We can write the general form of Q as

$$\begin{aligned} Q(t,y^{\sigma }(t))=C+\int \limits _{a}^{t}P_{y}(\tau ,0)\varDelta \tau +q(t,y^{\sigma }(t))-q(t,0), \end{aligned}$$

where \(C\in \mathbb {R}\) and q is an arbitrary function such that \(q(t,\cdot )\in C^{2}\) with respect to the second variable, uniformly in t, and q and \(q_y\) are rd-continuous in t for all admissible y. From the strengthened Legendre condition (8), with notation (14), we set

$$\begin{aligned} R(t,0,0)+\mu (t)\left\{ 2Q_{y}(t,0)+\mu (t)P_{yy}(t,0) +\left( \mu ^{\sigma }(t)\right) ^{\dag }R(\sigma (t),0,0)\right\} = p(t) \end{aligned}$$
(15)

with \(p\in C_{rd}([a,b])\), \(p(t)>0\) for all \(t\in [a,b]^{\kappa ^{2}}\), chosen arbitrarily, where \(\alpha ^{\dag }=\frac{1}{\alpha }\) if \(\alpha \in \mathbb {R}\setminus \lbrace 0 \rbrace \) and \(0^{\dag }=0\). We obtain the following theorem, which presents a general form of the integrand L for functional (12).

Theorem 15

Let \(\mathbb {T}\) be an arbitrary time scale. If functional (12) with boundary conditions \(y(a)=y(b)=0\) attains a local minimum at \(\hat{y}(t)\equiv 0\) under the strengthened Legendre condition, then its Lagrangian L takes the form

$$\begin{aligned} \begin{aligned}&L\left( t,y^{\sigma }(t),y^{\varDelta }(t)\right) = P\left( t,y^{\sigma }(t)\right) \\&+\left( C+\int \limits _{a}^{t}P_{y}(\tau ,0)\varDelta \tau +q(t,y^{\sigma }(t))-q(t,0)\right) y^{\varDelta }(t)\\&+\Biggl (p(t)-\mu (t)\left\{ 2Q_{y}(t,0)+\mu (t) P_{yy}(t,0) +\left( \mu ^{\sigma }(t)\right) ^{\dag }R(\sigma (t),0,0)\right\} \\&\qquad +w(t,y^{\sigma }(t),y^{\varDelta }(t)) -w(t,0,0)\Biggr )\frac{(y^{\varDelta }(t))^{2}}{2}, \end{aligned} \end{aligned}$$

where R(t, 0, 0) is a solution of equation (15), \(C\in \mathbb {R}\), \(\alpha ^{\dag }=\frac{1}{\alpha }\) if \(\alpha \in \mathbb {R}\setminus \lbrace 0 \rbrace \) and \(0^{\dag }=0\). Functions P, p, q and w are arbitrary functions satisfying:

(i)

    \(P(t,\cdot ),q(t,\cdot )\in C^{2}\) with respect to the second variable uniformly in t; P, \(P_y\), q, \(q_y\) are rd-continuous in t for all admissible y; \(P_{yy}(\cdot ,0)\) is rd-continuous in t; \(p\in C^{1}_{rd}\) with \(p(t)>0\) for all \(t\in [a,b]^{\kappa ^{2}}\);

(ii)

    \(w(t,\cdot ,\cdot )\in C^2\) with respect to the second and the third variable, uniformly in t; \(w, w_y, w_v, w_{vv}\) are rd-continuous in t for all admissible y.

Proof

See [28].

Now we consider the general situation when the variational problem consists in minimizing (12) subject to arbitrary boundary conditions \(y(a)=y_{0}(a)\) and \(y(b)=y_{0}(b)\), for a certain given function \(y_{0}\in C_{rd}^{2}([a,b])\).

Theorem 16

Let \(\mathbb {T}\) be an arbitrary time scale. If the variational functional (12) with boundary conditions \(y(a)=y_{0}(a)\), \(y(b)=y_{0}(b)\), attains a local minimum for a certain given function \(y_{0}(\cdot )\in C^{2}_{rd}([a,b])\) under the strengthened Legendre condition, then its Lagrangian L has the form

$$\begin{aligned} \begin{aligned}&L\left( t,y^{\sigma }(t),y^{\varDelta }(t)\right) = P\left( t,y^{\sigma }(t)-y^{\sigma }_{0}(t)\right) + \left( y^{\varDelta }(t)-y_{0}^{\varDelta }(t)\right) \\&\times \left( C+\int \limits _{a}^{t}P_{y}\left( \tau ,-y_{0}^{\sigma }(\tau )\right) \varDelta \tau +q\left( t,y^{\sigma }(t)-y^{\sigma }_{0}(t)\right) -q\left( t,-y_{0}^{\sigma }(t)\right) \right) +\frac{1}{2}\Biggl (p(t)\\&-\mu (t) \left\{ 2Q_{y}(t,-y_{0}^{\sigma }(t))+\mu (t) P_{yy}(t,-y_{0}^{\sigma }(t)) +\left( \mu ^{\sigma }(t)\right) ^{\dag }R(\sigma (t),-y_{0}^{\sigma }(t),-y_{0}^{\varDelta }(t))\right\} \\&+w(t,y^{\sigma }(t)-y^{\sigma }_{0}(t),y^{\varDelta }(t)-y_{0}^{\varDelta }(t)) -w\left( t,-y_{0}^{\sigma }(t),-y_{0}^{\varDelta }(t)\right) \Biggr ) \left( y^{\varDelta }(t)-y_{0}^{\varDelta }(t)\right) ^{2}, \end{aligned} \end{aligned}$$

where \(C\in \mathbb {R}\) and functions P, p, q, w satisfy conditions (i) and (ii) of Theorem 15.

Proof

See [28].

For the classical situation \(\mathbb {T}=\mathbb {R}\), Theorem 16 gives a recent result of [57, 58].

Corollary 1

(Theorem 4 of [57]) If the variational functional

$$ \mathscr {L}[y]=\int \limits _{a}^{b} L(t,y(t),y'(t))dt $$

attains a local minimum at \(y_{0}(\cdot )\in C^{2}[a,b]\) when subject to boundary conditions \(y(a)=y_{0}(a)\) and \(y(b)=y_{0}(b)\) and the classical strengthened Legendre condition

$$ R(t,y_{0}(t),y'_{0}(t))>0, \quad t\in [a,b], $$

then its Lagrangian L has the form

$$\begin{aligned} \begin{aligned}&L(t,y(t),y'(t))=P(t,y(t)-y_{0}(t))\\&+(y'(t)-y'_{0}(t))\left( C+\int \limits _{a}^{t} P_{y}(\tau ,-y_{0}(\tau ))d\tau +q(t,y(t)-y_{0}(t))-q(t,-y_{0}(t))\right) \\&+\frac{1}{2}\left( p(t)+w(t,y(t)-y_{0}(t),y'(t)-y'_{0}(t)) -w(t,-y_{0}(t),-y'_{0}(t))\right) (y'(t)-y'_{0}(t))^{2}, \end{aligned} \end{aligned}$$

where \(C\in \mathbb {R}\).

In the particular case of an isolated time scale, where \(\mu (t) \ne 0\) for all \(t\in \mathbb {T}\), we get the following corollary.

Corollary 2

Let \(\mathbb {T}\) be an isolated time scale. If functional (12) subject to the boundary conditions \(y(a)=y(b)=0\) attains a local minimum at \(\hat{y}(t) \equiv 0\) under the strengthened Legendre condition, then the Lagrangian L has the form

$$\begin{aligned} \begin{aligned}&L\left( t,y^{\sigma }(t),y^{\varDelta }(t)\right) = P\left( t,y^{\sigma }(t)\right) \\&+\left( C+\int \limits _{a}^{t}P_{y}(\tau ,0)\varDelta \tau +q(t,y^{\sigma }(t))-q(t,0)\right) y^{\varDelta }(t)\\&+\left( e_{r}(t,a)R_{0} +\int \limits _{a}^{t}e_{r}(t,\sigma (\tau ))s(\tau )\varDelta \tau +w(t,y^{\sigma }(t),y^{\varDelta }(t)) -w(t,0,0)\right) \frac{(y^{\varDelta }(t))^{2}}{2}, \end{aligned} \end{aligned}$$

where \(C,R_{0}\in \mathbb {R}\) and r(t) and s(t) are given by

$$\begin{aligned} r(t) := -\frac{1+\mu (t)(\mu ^{\sigma }(t))^{\dag }}{\mu ^{2}(t)(\mu ^{\sigma }(t))^{\dag }}, \quad s(t) := \frac{p(t) -\mu (t)[2Q_{y}(t,0)+\mu (t)P_{yy}(t,0)]}{\mu ^{2}(t)(\mu ^{\sigma }(t))^{\dag }} \end{aligned}$$
(16)

with \(\alpha ^{\dag } = \frac{1}{\alpha }\) if \(\alpha \in \mathbb {R}\setminus \lbrace 0 \rbrace \) and \(0^{\dag }=0\), and functions P, p, q, w satisfy assumptions of Theorem 15.

Based on Corollary 2, we present the form of the Lagrangian L on the periodic time scale \(\mathbb {T}=h\mathbb {Z}\).

Example 6

Let \(\mathbb {T}=h\mathbb {Z}\), \(h > 0\), and \(a, b\in h\mathbb {Z}\) with \(a<b\). Then \(\mu (t) \equiv h\). Consider the variational functional

$$\begin{aligned} \mathscr {L}[y]=h\sum _{k=\frac{a}{h}}^{\frac{b}{h}-1} L\left( kh,y(kh+h),\varDelta _h y(kh)\right) \end{aligned}$$
(17)

subject to the boundary conditions \(y(a)=y(b)=0\), which attains a local minimum at \(\hat{y}(kh)\equiv 0\) under the strengthened Legendre condition

$$ R(kh,0,0)+2hQ_{y}(kh,0)+h^{2}P_{yy}(kh,0)+R(kh+h,0,0)>0, $$

\(kh\in [a,b-2h]\cap h\mathbb {Z}\). Functions r(t) and s(t) (see (16)) have the following form:

$$\begin{aligned} r(t) \equiv -\frac{2}{h}, \quad s(t)=\frac{p(t)}{h} -\left( 2Q_{y}(t,0) + h P_{yy}(t,0)\right) . \end{aligned}$$

Since on \(\mathbb {T}=h\mathbb {Z}\) the exponential function associated with a constant r is \(e_{r}(t,s)=(1+hr)^{\frac{t-s}{h}}\), here \(e_{r}(t,a)=(-1)^{\frac{t-a}{h}}\). Hence,

$$ \int \limits _{a}^{t}P_{y}(\tau ,0)\varDelta \tau =h\sum \limits _{i=\frac{a}{h}}^{\frac{t}{h}-1}P_{y}(ih,0), $$
$$\begin{aligned} \int \limits _{a}^{t}e_{r}(t,\sigma (\tau ))s(\tau )\varDelta \tau =\sum _{i=\frac{a}{h}}^{\frac{t}{h}-1}(-1)^{\frac{t}{h}-i-1} \left( p(ih)-2hQ_{y}(ih,0)-h^{2}P_{yy}(ih,0)\right) . \end{aligned}$$

Thus, the Lagrangian L of the variational functional (17) on \(\mathbb {T}=h\mathbb {Z}\) has the form

$$\begin{aligned} \begin{aligned} L&\left( kh,y(kh+h),\varDelta _h y(kh)\right) =P\left( kh,y(kh+h)\right) \\&+\left( C+\sum \limits _{i=\frac{a}{h}}^{k-1}hP_{y}(ih,0) +q(kh,y(kh+h))-q(kh,0)\right) \varDelta _h y(kh)\\&+\frac{1}{2}\Biggl ((-1)^{k-\frac{a}{h}}R_{0} +\sum _{i=\frac{a}{h}}^{k-1}(-1)^{k-i-1} \left( p(i h)-2hQ_{y}(ih,0)-h^{2}P_{yy}(ih,0)\right) \\&\qquad \,\, +w(kh,y(kh+h),\varDelta _h y(kh))-w(kh,0,0)\Biggr ) \left( \varDelta _h y(kh)\right) ^{2}, \end{aligned} \end{aligned}$$

where functions P, p, q, w are arbitrary but satisfy assumptions of Theorem 15.

4.2 Necessary Condition for an Euler–Lagrange Equation

This section provides a necessary condition for an integro-differential equation on an arbitrary time scale to be an Euler–Lagrange equation (Theorem 17). For that, the notions of self-adjointness (Definition 15) and equation of variation (Definition 16) are essential.

Definition 15

(First order self-adjoint integro-differential equation) A first order integro-differential dynamic equation is said to be self-adjoint if it has the form

$$\begin{aligned} Lu(t)=const, \hbox { where } Lu(t)=p(t)u^{\varDelta }(t)+\int \limits _{t_{0}}^{t}\left[ r(s)u^{\sigma }(s)\right] \varDelta s \end{aligned}$$
(18)

with \(p,r\in C_{rd}\), \(p(t)\ne 0\) for all \(t\in \mathbb {T}\), and \(t_{0}\in \mathbb {T}\).

Let \(\mathbb {D}\) be the set of all functions \(y:\mathbb {T}\rightarrow \mathbb {R}\) such that \(y^{\varDelta }:\mathbb {T}^{\kappa }\rightarrow \mathbb {R}\) is continuous. A function \(y\in \mathbb {D}\) is said to be a solution of (18) provided \(Ly(t)=const\) holds for all \(t\in \mathbb {T^{\kappa }}\). For simplicity, we use the operators \([\cdot ]\) and \(\langle \cdot \rangle \) defined as

$$\begin{aligned}{}[y](t):=(t, y^{\sigma }(t), y^{\varDelta }(t)), \quad \quad \langle y\rangle (t):=(t, y^{\sigma }(t), y^{\varDelta }(t), y^{\varDelta \varDelta }(t)), \end{aligned}$$
(19)

and the partial derivatives of a function \((t,y,v,z)\rightarrow L(t,y,v,z)\) are denoted by \(\partial _{2}L=L_{y}\), \(\partial _{3}L=L_{v}\), \(\partial _{4}L=L_{z}\).

Definition 16

(Equation of variation) Let

$$\begin{aligned} H[y](t)+\int \limits _{t_{0}}^{t}G[y](s)\varDelta s = const \end{aligned}$$
(20)

be an integro-differential equation on time scales with \(H_{v}\ne 0\), \(t\rightarrow F_{y}[y](t)\), \(t\rightarrow F_{v}[y](t) \in C_{rd}(\mathbb {T},\mathbb {R})\) along every curve y, where \(F\in \lbrace G,H\rbrace \). The equation of variation associated with (20) is given by

$$\begin{aligned} H_{y}[u](t)u^{\sigma }(t)+H_{v}[u](t) u^{\varDelta }(t) +\int \limits _{t_{0}}^{t}\left[ G_{y}[u](s)u^{\sigma }(s)+G_{v}[u](s) u^{\varDelta }(s)\right] \varDelta s=0. \end{aligned}$$
(21)

Lemma 3

(Sufficient condition of self-adjointness) Let (20) be a given integro-differential equation. If

$$\begin{aligned} H_{y}[y](t)+G_{v}[y](t)=0, \end{aligned}$$

then its equation of variation (21) is self-adjoint.

Proof

See [26].

Now we provide an answer to the general inverse problem of the calculus of variations on time scales.

Theorem 17

(Necessary condition for an Euler–Lagrange equation in integral form) Let \(\mathbb {T}\) be an arbitrary time scale and

$$\begin{aligned} H(t,y^{\sigma }(t),y^{\varDelta }(t))+\int \limits _{t_{0}}^{t}G(s,y^{\sigma }(s),y^{\varDelta }(s))\varDelta s=const \end{aligned}$$
(22)

be a given integro-differential equation. If (22) is to be an Euler–Lagrange equation, then its equation of variation (21) is self-adjoint in the sense of Definition 15.

Proof

See [26].

Remark 3

In practical terms, Theorem 17 is useful to identify equations that are not Euler–Lagrange: if the equation of variation (21) of a given dynamic equation (20) is not self-adjoint, then we conclude that (20) is not an Euler–Lagrange equation.

Now we present an example of a second order differential equation on time scales which is not an Euler–Lagrange equation.

Example 7

Let us consider the following second-order linear oscillator dynamic equation on an arbitrary time scale \(\mathbb {T}\):

$$\begin{aligned} y^{\varDelta \varDelta }(t)+y^{\varDelta }(t)-t=0. \end{aligned}$$
(23)

We may write Eq. (23) in integro-differential form (20):

$$\begin{aligned} y^{\varDelta }(t)+\int \limits _{t_{0}}^{t}\left( y^{\varDelta }(s)-s\right) \varDelta s=const, \end{aligned}$$
(24)

where \(H[y](t)=y^{\varDelta }(t)\) and \(G[y](t)=y^{\varDelta }(t)-t\). Because

$$\begin{aligned} H_{y}[y](t)=G_{y}[y](t)=0, \quad H_{v}[y](t)=G_{v}[y](t)=1, \end{aligned}$$

the equation of variation associated with (24) is given by

$$\begin{aligned} u^{\varDelta }(t)+\int \limits _{t_{0}}^{t}u^{\varDelta }(s)\varDelta s=0 \iff u^{\varDelta }(t)+u(t)=u(t_{0}). \end{aligned}$$
(25)

We may notice that Eq. (25) cannot be written in the form (18); hence, it is not self-adjoint. Following Theorem 17 (see Remark 3), we conclude that Eq. (23) is not an Euler–Lagrange equation.

Now we consider the particular case of Theorem 17 when \(\mathbb {T}=\mathbb {R}\) and \(y\in C^{2}([t_{0},t_{1}];\mathbb {R})\). In this case the operator \([\cdot ]\) of (19) has the form

$$ [y](t)=(t,y(t),y'(t))=:[y]_{\mathbb {R}}(t), $$

while condition (18) can be written as

$$\begin{aligned} p(t)u'(t)+\int \limits _{t_{0}}^{t}r(s)u(s)ds=const. \end{aligned}$$
(26)

Corollary 3

If a given integro-differential equation

$$ H(t,y(t),y'(t))+\int \limits _{t_{0}}^{t}G(s,y(s),y'(s))ds=const $$

is to be the Euler–Lagrange equation of the variational problem

$$\begin{aligned} \mathscr {I}[y]=\int \limits _{t_{0}}^{t_{1}} L(t,y(t),y'(t))dt \end{aligned}$$

(cf., e.g., [63]), then its equation of variation

$$ H_{y}[u]_{\mathbb {R}}(t)u(t)+H_{v}[u]_{\mathbb {R}}(t)u'(t) +\int \limits _{t_{0}}^{t}\left[ G_{y}[u]_{\mathbb {R}}(s)u(s)+G_{v}[u]_{\mathbb {R}}(s)u'(s)\right] ds=0 $$

must be self-adjoint, in the sense of Definition 15 with (18) given by (26).

Proof

Follows from Theorem 17 with \(\mathbb {T}=\mathbb {R}\).

Now we consider the particular case of Theorem 17 when \(\mathbb {T}=h\mathbb {Z}\), \(h>0\). In this case the operator \([\cdot ]\) of (19) has the form

$$ [y](t)=(t,y(t+h),\varDelta _{h} y(t))=:[y]_{h}(t), $$

where

$$ \varDelta _{h}y(t)=\frac{y(t+h)-y(t)}{h}. $$

For \(\mathbb {T}=h\mathbb {Z}\), \(h>0\), condition (18) can be written as

$$\begin{aligned} p(t)\varDelta _{h}u(t)+\sum \limits _{k=\frac{t_{0}}{h}}^{\frac{t}{h}-1}hr(kh)u(kh+h)=const. \end{aligned}$$
(27)

Corollary 4

If a given difference equation

$$ H(t,y(t+h),\varDelta _{h} y(t)) +\sum \limits _{k=\frac{t_{0}}{h}}^{\frac{t}{h}-1}hG(kh,y(kh+h),\varDelta _{h} y(kh))=const $$

is to be the Euler–Lagrange equation of the discrete variational problem

$$\begin{aligned} \mathscr {I}[y]=\sum \limits _{k=\frac{t_{0}}{h}}^{\frac{t_{1}}{h}-1}hL(kh,y(kh+h),\varDelta _{h} y(kh)) \end{aligned}$$

(cf., e.g., [10]), then its equation of variation

$$\begin{aligned} H_{y}[u]_{h}(t)u(t+h)&+H_{v}[u]_{h}(t)\varDelta _{h}u(t)\\&\qquad +h\sum \limits _{k=\frac{t_{0}}{h}}^{\frac{t}{h}-1}\left( G_{y}[u]_{h}(kh)u(kh+h)+G_{v}[u]_{h}(kh)\varDelta _{h}u(kh)\right) =0 \end{aligned}$$

is self-adjoint, in the sense of Definition 15 with (18) given by (27).

Proof

Follows from Theorem 17 with \(\mathbb {T}=h\mathbb {Z}\).

Finally, let us consider the particular case of Theorem 17 when \(\mathbb {T}=\overline{q^{\mathbb {Z}}}=q^{\mathbb {Z}}\cup \left\{ 0\right\} \), where \(q^{\mathbb {Z}}=\left\{ q^{k}: k\in \mathbb {Z}\right\} \) with \(q>1\). In this case the operator \([\cdot ]\) of (19) has the form

$$ [y]_{\overline{q^{\mathbb {Z}}}}(t)=(t,y(qt),\varDelta _{q} y(t))=:[y]_{q}(t), $$

where

$$ \varDelta _{q}y(t)=\frac{y(qt)-y(t)}{(q-1)t}. $$

For \(\mathbb {T}=\overline{q^{\mathbb {Z}}}\), \(q>1\), condition (18) can be written as (cf., e.g., [60]):

$$\begin{aligned} p(t)\varDelta _{q}u(t)+ (q-1)\sum \limits _{s\in [t_{0},t) \cap \mathbb {T}}sr(s)u(qs)=const. \end{aligned}$$
(28)

Corollary 5

If a given q-equation

$$ H(t,y(qt),\varDelta _{q} y(t))+(q-1)\sum \limits _{s\in [t_{0},t) \cap \mathbb {T}}sG(s,y(qs),\varDelta _{q}y(s))=const, $$

\(q>1\), is to be the Euler–Lagrange equation of the variational problem

$$\begin{aligned} \mathscr {I}[y]=(q-1)\sum \limits _{t\in [t_{0},t_{1}) \cap \mathbb {T}}tL(t,y(qt),\varDelta _{q}y(t)), \end{aligned}$$

\(t_{0}, t_{1}\in \overline{q^{\mathbb {Z}}}\), then its equation of variation

$$\begin{aligned} H_{y}[ u]_{q}(t)u(qt)&+H_{v}[u]_{q}(t)\varDelta _{q}u(t)\\&\qquad +(q-1)\sum \limits _{s\in [t_{0},t)\cap \mathbb {T}} s\left( G_{y}[u]_{q}(s)u(qs)+G_{v}[u]_{q}(s)\varDelta _{q}u(s) \right) =0 \end{aligned}$$

is self-adjoint, in the sense of Definition 15 with (18) given by (28).

Proof

Choose \(\mathbb {T}=\overline{q^{\mathbb {Z}}}\) in Theorem 17.

More information about Euler–Lagrange equations for q-variational problems may be found in [30, 48, 52] and references therein.
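The operators \(\varDelta _{h}\) and \(\varDelta _{q}\) are easy to experiment with numerically. A minimal sketch (our own, plain Python) using \(y(t)=t^{2}\), for which \(\varDelta _{h}y(t)=2t+h\) and \(\varDelta _{q}y(t)=(q+1)t\), so that both operators recover \(y'(t)=2t\) in the limits \(h\rightarrow 0\) and \(q\rightarrow 1\):

```python
# Forward h-difference and q-difference operators (our own sketch).
def delta_h(y, t, h):
    """(y(t+h) - y(t)) / h, the delta derivative on T = hZ."""
    return (y(t + h) - y(t)) / h

def delta_q(y, t, q):
    """(y(qt) - y(t)) / ((q-1) t), the delta derivative on T = q^Z, t != 0."""
    return (y(q * t) - y(t)) / ((q - 1) * t)

y = lambda t: t ** 2              # test function, y'(t) = 2t
print(delta_h(y, 3.0, 0.5))       # 6.5 = 2t + h
print(delta_q(y, 3.0, 2.0))       # 9.0 = (q + 1) t
print(delta_h(y, 3.0, 1e-8))      # ~6.0 as h -> 0
print(delta_q(y, 3.0, 1 + 1e-8))  # ~6.0 as q -> 1
```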

4.3 Discussion

On an arbitrary time scale \(\mathbb {T}\), we can easily show the equivalence between the integro-differential equation (20) and the second-order differential equation (29) below (Proposition 1). However, when we consider the corresponding equations of variation, it is not possible to prove such an equivalence on an arbitrary time scale. The main reason for this impossibility, even on the discrete time scale \(\mathbb {Z}\), is the absence of a general chain rule on an arbitrary time scale (see, e.g., Example 1.85 of [13]). On \(\mathbb {T}=\mathbb {R}\), however, the equivalence does hold (Proposition 2).

Proposition 1

(See [26]) The integro-differential equation (20) is equivalent to a second order delta differential equation

$$\begin{aligned} W\left( t,y^{\sigma }(t), y^{\varDelta }(t), y^{\varDelta \varDelta }(t)\right) =0. \end{aligned}$$
(29)

Let \(\mathbb {T}\) be a time scale such that \(\mu \) is delta differentiable. The equation of variation of the second-order differential equation (29) is given by

$$\begin{aligned} W_{z}\langle u\rangle (t)u^{\varDelta \varDelta }(t) +W_{v}\langle u\rangle (t) u^{\varDelta }(t) +W_{y}\langle u\rangle (t) u^{\sigma }(t)=0. \end{aligned}$$
(30)

On an arbitrary time scale it is impossible to prove the equivalence between the equations of variation (21) and (30). Indeed, after differentiating both sides of Eq. (21) and using the product rule given by Theorem 2, one has

$$\begin{aligned} H_{y}[u](t)u^{\sigma \varDelta }(t)+H_{y}^{\varDelta }[u](t)u^{\sigma \sigma }(t)&+H_{v}[u](t)u^{\varDelta \varDelta }(t)+H_{v}^{\varDelta }[u](t)u^{\varDelta \sigma }(t)\nonumber \\&\qquad +G_{y}[u](t)u^{\sigma }(t)+G_{v}[u](t)u^{\varDelta }(t)=0. \end{aligned}$$
(31)

The direct calculations

  • \(H_{y}[u](t)u^{\sigma \varDelta }(t)=H_{y}[u](t)(u^{\varDelta }(t) +\mu ^{\varDelta }(t)u^{\varDelta }(t)+\mu ^{\sigma }(t)u^{\varDelta \varDelta }(t))\),

  • \(H_{y}^{\varDelta }[u](t)u^{\sigma \sigma }(t) =H_{y}^{\varDelta }[u](t)(u^{\sigma }(t)+\mu ^{\sigma }(t)u^{\varDelta }(t) +\mu (t)\mu ^{\sigma }(t)u^{\varDelta \varDelta }(t))\),

  • \(H_{v}^{\varDelta }[u](t)u^{\varDelta \sigma }(t) =H_{v}^{\varDelta }[u](t)(u^{\varDelta }(t)+\mu u^{\varDelta \varDelta }(t))\),

and the fourth item of Theorem 1, allow us to write Eq. (31) in the form

$$\begin{aligned}&u^{\varDelta \varDelta }(t)\left[ \mu (t)H_{y}[u](t)+H_{v}[u](t)\right] ^{\sigma }\nonumber \\&\qquad \quad +u^{\varDelta }(t)\left[ H_{y}[u](t)+(\mu (t)H_{y}[u](t))^{\varDelta } +H_{v}^{\varDelta }[u](t)+G_{v}[u](t)\right] \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \quad +u^{\sigma }(t)\left[ H_{y}^{\varDelta }[u](t)+G_{y}[u](t)\right] =0. \end{aligned}$$
(32)

We are not able to prove that the coefficients of (32) coincide, respectively, with those of (30). This is due to the fact that we cannot determine the partial derivatives of (29), that is, \(W_{z}\langle u\rangle (t)\), \(W_{v}\langle u\rangle (t)\) and \(W_{y}\langle u\rangle (t)\), from Eq. (30), because of the lack of a general chain rule on an arbitrary time scale [12]. The equivalence, however, holds for \(\mathbb {T}=\mathbb {R}\). In this case the operator \(\left\langle \cdot \right\rangle \) has the form \(\left\langle y\right\rangle (t)=(t, y(t), y'(t), y''(t)) =: \left\langle y\right\rangle _{\mathbb {R}} (t)\).

Proposition 2

(See [26]) The equation of variation

$$\begin{aligned} H_{y}[u]_{\mathbb {R}}(t)u(t)+H_{v}[u]_{\mathbb {R}}(t)u'(t) +\int \limits _{t_{0}}^{t}G_{y}[u]_{\mathbb {R}}(s)u(s) +G_{v}[u]_{\mathbb {R}}(s)u'(s)ds=0 \end{aligned}$$

is equivalent to the second order differential equation

$$\begin{aligned} W_{z}\langle u\rangle _{\mathbb {R}}(t)u''(t) +W_{v}\langle u\rangle _{\mathbb {R}}(t) u'(t) +W_{y}\langle u\rangle _{\mathbb {R}}(t)u(t)=0. \end{aligned}$$

Proposition 2 allows us to obtain the classical result of [21, Theorem II] as a corollary of our Theorem 17. The absence of a chain rule on an arbitrary time scale (even for \(\mathbb {T}=\mathbb {Z}\)) implies that the classical approach [21] fails on time scales. This is the reason why we use here a completely different approach to the subject based on the integro-differential form. The case \(\mathbb {T}=\mathbb {Z}\) was recently investigated in [17]. However, similarly to [21], the approach of [17] is based on the differential form and cannot be extended to general time scales.

5 The Delta-Nabla Calculus of Variations for Composition Functionals

The delta-nabla calculus of variations was introduced in [44]. Here we investigate more general problems of the time-scale calculus of variations for a functional that is the composition of a certain scalar function with the delta and nabla integrals of a vector-valued field. We begin by proving general Euler–Lagrange equations in integral form (Theorem 18). Then we consider the cases when initial or terminal boundary conditions are not specified, obtaining the corresponding transversality conditions (Theorems 19 and 20). Furthermore, we prove necessary optimality conditions for general isoperimetric problems given by the composition of delta-nabla integrals (Theorem 21). Finally, some illustrative examples are presented (Sect. 5.4).

5.1 The Euler–Lagrange Equations

Let us begin by defining the class of functions \(C_{k,n}^{1}([a,b];\mathbb {R})\), which contains delta and nabla differentiable functions.

Definition 17

By \(C_{k,n}^{1}([a,b];\mathbb {R})\), \(k,n\in \mathbb {N}\), we denote the class of functions \(y:[a,b]\rightarrow \mathbb {R}\) such that: if \(k\ne 0\) and \(n\ne 0\), then \(y^{\varDelta }\) is continuous on \([a,b]^{\kappa }_{\kappa }\) and \(y^{\nabla }\) is continuous on \([a,b]_{\kappa }^{\kappa }\), where \([a,b]^{\kappa }_{\kappa }:=[a,b]^{\kappa }\cap [a,b]_{\kappa }\); if \(n=0\), then \(y^{\varDelta }\) is continuous on \([a,b]^{\kappa }\); if \(k=0\), then \(y^{\nabla }\) is continuous on \([a,b]_{\kappa }\).

Our aim is to find a function y which minimizes or maximizes the following variational problem:

$$\begin{aligned} \mathscr {L}[y]&=H\left( \int \limits _{a}^{b}f_{1}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t, \ldots ,\int \limits _{a}^{b}f_{k}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t,\right. \nonumber \\&\qquad \left. \int \limits _{a}^{b}f_{k+1}(t,y^{\rho }(t),y^{\nabla }(t))\nabla t,\ldots , \int \limits _{a}^{b}f_{k+n}(t,y^{\rho }(t),y^{\nabla }(t))\nabla t\right) , \end{aligned}$$
(33)
$$\begin{aligned} (y(a)=y_{a}), \quad (y(b)=y_{b}). \end{aligned}$$
(34)

The parentheses in (34), around the end-point conditions, mean that those conditions may or may not be present (one or both of y(a) and y(b) may be free). A function \(y\in C_{k,n}^{1}\) is said to be admissible provided it satisfies the boundary conditions (34) (if any are given). For \(k = 0\), problem (33)–(34) becomes a nabla problem (neither the delta integral nor the delta derivative is present); for \(n = 0\), problem (33)–(34) reduces to a delta problem (neither the nabla integral nor the nabla derivative is present). For simplicity, we use the operators \([\cdot ]\) and \(\lbrace \cdot \rbrace \) defined by

$$\begin{aligned}{}[y](t):=(t,y^{\sigma }(t),y^{\varDelta }(t)),\quad \lbrace y\rbrace (t):=(t,y^{\rho }(t),y^{\nabla }(t)). \end{aligned}$$

We assume that:

  1. 1.

    the function \(H:\mathbb {R}^{n+k}\rightarrow \mathbb {R}\) has continuous partial derivatives with respect to its arguments, which we denote by \(H_{i}^{'}\), \(i=1, \ldots , n+k\);

  2. 2.

    functions \((t,y,v)\rightarrow f_{i}(t,y,v)\) from \([a,b]\times \mathbb {R}^{2}\) to \(\mathbb {R}\), \(i=1,\ldots , n+k\), have continuous partial derivatives with respect to y and v, uniformly in \(t \in [a,b]\), which we denote by \(f_{iy}\) and \(f_{iv}\);

  3. 3.

    \(f_{i}\), \(f_{iy}\), \(f_{iv}\) are rd-continuous on \([a,b]^{\kappa }\), \(i=1,\ldots ,k\), and ld-continuous on \([a,b]_{\kappa }\), \(i=k+1,\ldots ,k+n\), for all \(y\in C_{k,n}^{1}\).

Definition 18

(Cf. [44]) We say that an admissible function \(\hat{y}\in C_{k,n}^{1}([a,b];\mathbb {R})\) is a local minimizer (respectively, local maximizer) to problem (33)–(34), if there exists \(\delta >0\) such that \(\mathscr {L}[\hat{y}]\le \mathscr {L} [y]\) (respectively, \(\mathscr {L}[\hat{y}]\ge \mathscr {L}[y]\)) for all admissible functions \(y\in C_{k,n}^{1}([a,b];\mathbb {R})\) satisfying the inequality \(|| y-\hat{y}||_{1,\infty }<\delta \), where

$$\begin{aligned} ||y||_{1,\infty }:=||y^{\sigma }||_{\infty }+||y^{\varDelta }||_{\infty } +||y^{\rho }||_{\infty }+||y^{\nabla }||_{\infty } \end{aligned}$$

with \(||y||_{\infty }:= \sup _{t\in [a,b]_{\kappa }^{\kappa }} |y(t)|\).

For brevity, in what follows we omit the argument of \(H_{i}^{'}\). Precisely,

$$ H_{i}^{'}:=\frac{\partial H}{\partial \mathscr {F}_{i}}(\mathscr {F}_{1}(y),\ldots ,\mathscr {F}_{k+n}(y)), \quad i=1,\ldots ,n+k, $$

where

$$\begin{aligned} \begin{aligned} \mathscr {F}_{i}(y)&=\int \limits _{a}^{b} f_{i}[y](t)\varDelta t, \hbox { for } i=1,\ldots ,k,\\ \mathscr {F}_{i}(y)&=\int \limits _{a}^{b}f_{i}\lbrace y\rbrace (t)\nabla t, \hbox { for } i=k+1,\ldots ,k+n. \end{aligned} \end{aligned}$$
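On a finite isolated time scale, the delta and nabla integrals defining \(\mathscr {F}_{i}\) reduce to finite sums weighted by the graininess functions \(\mu (t)=\sigma (t)-t\) and \(\nu (t)=t-\rho (t)\). A minimal sketch (our own; the helper names are ours):

```python
# Exact delta and nabla integrals on a finite isolated time scale ts
# (our own sketch; ts is the increasing list of points of [a, b]).
def delta_integral(f, ts):
    """int_a^b f(t) Delta t = sum over t in [a,b) of (sigma(t) - t) * f(t)."""
    return sum((ts[i + 1] - ts[i]) * f(ts[i]) for i in range(len(ts) - 1))

def nabla_integral(f, ts):
    """int_a^b f(t) Nabla t = sum over t in (a,b] of (t - rho(t)) * f(t)."""
    return sum((ts[i] - ts[i - 1]) * f(ts[i]) for i in range(1, len(ts)))

ts = [0, 0.5, 1]                        # the time scale {0, 1/2, 1}
print(delta_integral(lambda t: t, ts))  # 0.25 = int_0^1 t Delta t
print(nabla_integral(lambda t: t, ts))  # 0.75 = int_0^1 t Nabla t
```

Sums of exactly this kind appear in the hand computations of Sect. 5.4.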

Depending on the given boundary conditions, we can distinguish four different problems. The first one is problem \((P_{ab})\), where both boundary conditions are specified. To solve it we need an Euler–Lagrange necessary optimality condition, which is given by Theorem 18 below. The next two problems — denoted by \((P_{a})\) and \((P_{b})\) — occur when y(a) is given and y(b) is free (problem \((P_{a})\)), and when y(a) is free and y(b) is specified (problem \((P_{b})\)). To solve both of them we need an Euler–Lagrange equation and one proper transversality condition. The last problem — denoted by (P) — occurs when neither boundary condition is present. To find a solution to such a problem we need an Euler–Lagrange equation and two transversality conditions (one at each end-point, a and b).

Theorem 18

(The Euler–Lagrange equations in integral form) If \(\hat{y}\) is a local solution to problem (33)–(34), then the Euler–Lagrange equations (in integral form)

$$\begin{aligned}&\sum \limits _{i=1}^{k}H_{i}^{'}\cdot \left( f_{iv}[\hat{y}](\rho (t))-\int \limits _{a}^{\rho (t)} f_{iy}[\hat{y}](\tau )\varDelta \tau \right) \nonumber \\&\qquad \qquad \qquad +\sum \limits _{i=k+1}^{k+n}H_{i}^{'}\cdot \left( f_{iv}\lbrace \hat{y}\rbrace (t)-\int \limits _{a}^{t} f_{iy}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) = c, \quad t\in \mathbb {T}_{\kappa }, \end{aligned}$$
(35)

and

$$\begin{aligned}&\sum \limits _{i=1}^{k}H_{i}^{'}\cdot \left( f_{iv}[\hat{y}](t)-\int \limits _{a}^{t} f_{iy}[\hat{y}](\tau )\varDelta \tau \right) \nonumber \\&\qquad \qquad +\sum \limits _{i=k+1}^{k+n}H_{i}^{'}\cdot \left( f_{iv}\lbrace \hat{y}\rbrace (\sigma (t))-\int \limits _{a}^{\sigma (t)} f_{iy}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) = c, \quad t\in \mathbb {T}^{\kappa }, \end{aligned}$$
(36)

hold.

Proof

See [25].

For regular time scales (Definition 3), the Euler–Lagrange equations (35) and (36) coincide; on a general time scale, they are different. Such a difference is illustrated in Example 8. For that purpose, let us define \(\xi \) and \(\chi \) by

$$\begin{aligned} \begin{aligned} \xi (t):=\sum \limits _{i=1}^{k}H_{i}^{'}\cdot \left( f_{iv}[\hat{y}](t)-\int \limits _{a}^{t} f_{iy}[\hat{y}](\tau )\varDelta \tau \right) ,\\ \chi (t):=\sum \limits _{i=k+1}^{k+n}H_{i}^{'}\cdot \left( f_{iv}\lbrace \hat{y}\rbrace (t)-\int \limits _{a}^{t} f_{iy}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) . \end{aligned} \end{aligned}$$
(37)

Example 8

Let us consider the irregular time scale \(\mathbb {T}=\mathbb {P}_{1,1}=\bigcup \limits _{k=0}^{\infty }\left[ 2k,2k+1\right] \). We show that for this time scale there is a difference between the Euler–Lagrange equations (35) and (36). The forward and backward jump operators are given by

$$ \sigma (t)= {\left\{ \begin{array}{ll} t,\quad t\in \bigcup \limits _{k=0}^{\infty }[2k,2k+1),\\ t+1, \quad t\in \bigcup \limits _{k=0}^{\infty }\left\{ 2k+1\right\} , \end{array}\right. } \quad \rho (t)= {\left\{ \begin{array}{ll} t,\quad t\in \bigcup \limits _{k=0}^{\infty }(2k,2k+1],\\ t-1, \quad t\in \bigcup \limits _{k=1}^{\infty }\left\{ 2k\right\} ,\\ 0, \quad t = 0. \end{array}\right. } $$

For \(t = 0\) and \(t\in \bigcup \limits _{k=0}^{\infty }\left( 2k,2k+1\right) \), Eqs. (35) and (36) coincide. We can distinguish between them for \(t\in \bigcup \limits _{k=0}^{\infty }\left\{ 2k+1\right\} \) and \(t\in \bigcup \limits _{k=1}^{\infty }\left\{ 2k\right\} \). In what follows we use the notations (37). If \(t\in \bigcup \limits _{k=0}^{\infty }\left\{ 2k+1\right\} \), then we obtain from (35) and (36) the Euler–Lagrange equations \(\xi (t) + \chi (t) = c\) and \(\xi (t) + \chi (t+1) = c\), respectively. If \(t\in \bigcup \limits _{k=1}^{\infty }\left\{ 2k\right\} \), then the Euler–Lagrange equation (35) has the form \(\xi (t-1) + \chi (t) = c\) while (36) takes the form \(\xi (t) + \chi (t) = c\).
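The jump operators of \(\mathbb {P}_{1,1}\) are straightforward to implement, which makes it easy to locate the points where (35) and (36) differ. A small sketch (our own, plain Python):

```python
# Jump operators on P_{1,1} = union of [2k, 2k+1] (our own sketch).
import math

def sigma(t):
    """Forward jump: identity on [2k, 2k+1), t + 1 at t = 2k + 1."""
    k = math.floor(t / 2)
    return t + 1 if t == 2 * k + 1 else t

def rho(t):
    """Backward jump: identity on (2k, 2k+1], t - 1 at t = 2k > 0, rho(0) = 0."""
    if t == 0:
        return 0
    k = math.floor(t / 2)
    return t - 1 if t == 2 * k else t

# Eqs. (35) and (36) can differ only at the scattered points t = 2k+1, t = 2k:
for t in [0, 0.5, 1, 1.5, 2, 3, 4]:
    print(t, sigma(t), rho(t))
```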

5.2 Natural Boundary Conditions

In this section we minimize or maximize the variational functional (33) when the initial and/or terminal boundary conditions y(a) and/or y(b) are not specified. In what follows we obtain the corresponding transversality conditions.

Theorem 19

(Transversality condition at the initial time \(t = a\)) Let \(\mathbb {T}\) be a time scale for which \(\rho (\sigma (a))=a\). If \(\hat{y}\) is a local extremizer to (33) with y(a) not specified, then

$$\begin{aligned} \sum \limits _{i=1}^{k}H_{i}^{'} \cdot f_{iv}[\hat{y}](a) +\sum \limits _{i=k+1}^{k+n}H_{i}^{'}\cdot \left( f_{iv}\lbrace \hat{y}\rbrace (\sigma (a)) - \int \limits ^{\sigma (a)}_{a}f_{iy}\lbrace \hat{y}\rbrace (t)\nabla t \right) = 0 \end{aligned}$$

holds together with the Euler–Lagrange equations (35) and (36).

Proof

See [25].

Theorem 20

(Transversality condition at the terminal time \(t = b\)) Let \(\mathbb {T}\) be a time scale for which \(\sigma (\rho (b))=b\). If \(\hat{y}\) is a local extremizer to (33) with y(b) not specified, then

$$\begin{aligned} \sum \limits _{i=1}^{k}H_{i}^{'}\cdot \left( f_{iv}[\hat{y}](\rho (b)) + \int \limits _{\rho (b)}^{b}f_{iy}[\hat{y}](t)\varDelta t \right) +\sum \limits _{i=k+1}^{k+n}H_{i}^{'} \cdot f_{iv}\lbrace \hat{y}\rbrace (b) = 0 \end{aligned}$$

holds together with the Euler–Lagrange equations (35) and (36).

Proof

See [25].

Several interesting results can be immediately obtained from Theorems 18–20. An example of such results is given by Corollary 6.

Corollary 6

If \(\hat{y}\) is a solution to the problem

$$\begin{aligned} \mathscr {L}[y] =\frac{\int \limits _{a}^{b}f_{1}(t,y^{\sigma }(t),y^{\varDelta }(t)) \varDelta t}{\int \limits _{a}^{b}f_{2}(t,y^{\rho }(t),y^{\nabla }(t))\nabla t} \longrightarrow \mathrm {extr},\\ (y(a)=y_{a}), \quad (y(b)=y_{b}), \end{aligned}$$

then the Euler–Lagrange equations

$$ \frac{1}{\mathscr {F}_{2}} \left( f_{1v}[\hat{y}](\rho (t))-\int \limits _{a}^{\rho (t)} f_{1y}[\hat{y}](\tau )\varDelta \tau \right) - \frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} \left( f_{2v}\lbrace \hat{y}\rbrace (t)-\int \limits _{a}^{t} f_{2y}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) = c, $$

\(t\in \mathbb {T}_{\kappa }\), and

$$ \frac{1}{\mathscr {F}_{2}} \left( f_{1v}[\hat{y}](t)-\int \limits _{a}^{t} f_{1y}[\hat{y}](\tau )\varDelta \tau \right) -\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} \left( f_{2v}\lbrace \hat{y}\rbrace (\sigma (t)) -\int \limits ^{\sigma (t)}_{a} f_{2y}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) =c, $$

\(t\in \mathbb {T}^{\kappa }\), hold, where

$$ \mathscr {F}_{1}:={\int \limits _{a}^{b} f_{1}(t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t))\varDelta t} \quad \text { and } \quad \mathscr {F}_{2}:={\int \limits _{a}^{b} f_{2}(t,\hat{y}^{\rho }(t),\hat{y}^{\nabla }(t))\nabla t}. $$

Moreover, if y(a) is free and \(\rho (\sigma (a))=a\), then

$$ \frac{1}{\mathscr {F}_{2}} f_{1v}[\hat{y}](a) -\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} \left( f_{2v}\lbrace \hat{y}\rbrace (\sigma (a))-\int \limits _{a}^{\sigma (a)} f_{2y}\lbrace \hat{y}\rbrace (t)\nabla t\right) =0; $$

if y(b) is free and \(\sigma (\rho (b))=b\), then

$$ \frac{1}{\mathscr {F}_{2}} \left( f_{1v}[\hat{y}](\rho (b))+\int \limits ^{b}_{\rho (b)} f_{1y}[\hat{y}](t)\varDelta t\right) -\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} f_{2v}\lbrace \hat{y}\rbrace (b)=0. $$

5.3 Isoperimetric Problems

Let us now consider the general delta–nabla composition isoperimetric problem on time scales subject to boundary conditions. The problem consists of extremizing

$$\begin{aligned} \mathscr {L}[y]&=H\left( \int \limits _{a}^{b}f_{1}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t, \ldots ,\int \limits _{a}^{b}f_{k}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t,\right. \nonumber \\&\qquad \qquad \,\,\,\left. \int \limits _{a}^{b}f_{k+1}(t,y^{\rho }(t),y^{\nabla }(t))\nabla t,\ldots , \int \limits _{a}^{b}f_{k+n}(t,y^{\rho }(t),y^{\nabla }(t)) \nabla t \right) \end{aligned}$$
(38)

in the class of functions \(y\in C^1_{k+m,n+p}\) satisfying given boundary conditions

$$\begin{aligned} y(a)=y_{a},\quad y(b)=y_{b}, \end{aligned}$$
(39)

and a generalized isoperimetric constraint

$$\begin{aligned} \mathscr {K}[y]&=P\left( \int \limits _{a}^{b}g_{1}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t, \ldots ,\int \limits _{a}^{b}g_{m}(t,y^{\sigma }(t),y^{\varDelta }(t))\varDelta t,\right. \nonumber \\&\qquad \left. \int \limits _{a}^{b}g_{m+1}(t,y^{\rho }(t),y^{\nabla }(t))\nabla t,\ldots , \int \limits _{a}^{b}g_{m+p}(t,y^{\rho }(t),y^{\nabla }(t)) \nabla t \right) =d, \end{aligned}$$
(40)

where \(y_{a},y_{b},d\in \mathbb {R}\). We assume that:

  1. 1.

    the functions \(H:\mathbb {R}^{n+k}\rightarrow \mathbb {R}\) and \(P:\mathbb {R}^{m+p}\rightarrow \mathbb {R}\) have continuous partial derivatives with respect to all their arguments, which we denote by \(H_{i}^{'}\), \(i{=}1,\ldots ,n+k\), and \(P_{i}^{'}\), \(i=1,\ldots ,m+p\);

  2. 2.

    functions \((t,y,v)\rightarrow f_{i}(t,y,v)\), \(i=1,\ldots , n+k\), and \((t,y,v)\rightarrow g_{j}(t,y,v)\), \(j=1,\ldots ,m+p\), from \([a,b]\times \mathbb {R}^{2}\) to \(\mathbb {R}\), have continuous partial derivatives with respect to y and v, uniformly in \(t\in [a,b]\), which we denote by \(f_{iy}\), \(f_{iv}\), and \(g_{jy}\), \(g_{jv}\);

  3. 3.

    for all \(y\in C_{k+m,n+p}^{1}\), \(f_{i}\), \(f_{iy}\), \(f_{iv}\) and \(g_{j},g_{jy}\), \(g_{jv}\) are rd-continuous in \(t\in [a,b]^{\kappa }\), \(i=1,\ldots ,k\), \(j=1,\ldots ,m\), and ld-continuous in \(t\in [a,b]_{\kappa }\), \(i=k+1,\ldots ,k+n\), \(j=m+1,\ldots ,m+p\).

A function \(y\in C^{1}_{k+m, n+p}\) is said to be admissible provided it satisfies the boundary conditions (39) and the isoperimetric constraint (40). For brevity, we omit the argument of \(P_{i}^{'}\): \(P_{i}^{'}:=\frac{\partial P}{\partial \mathscr {G}_{i}}(\mathscr {G}_{1}(\hat{y}), \ldots ,\mathscr {G}_{m+p}(\hat{y}))\) for \(i=1,\ldots ,m+p\), with

$$ \mathscr {G}_{i}(\hat{y})=\int \limits _{a}^{b} g_{i}(t,\hat{y}^{\sigma }(t),\hat{y}^{\varDelta }(t))\varDelta t, \quad i=1,\ldots ,m, $$

and

$$ \mathscr {G}_{i}(\hat{y})=\int \limits _{a}^{b} g_{i}(t,\hat{y}^{\rho }(t),\hat{y}^{\nabla }(t))\nabla t, \quad i=m+1,\ldots ,m+p. $$

Definition 19

We say that an admissible function \(\hat{y}\) is a local minimizer (respectively, a local maximizer) to the isoperimetric problem (38)–(40), if there exists a \(\delta >0\) such that \(\mathscr {L}[\hat{y}]\leqslant \mathscr {L}[y]\) (respectively, \(\mathscr {L}[\hat{y}]\geqslant \mathscr {L}[y]\)) for all admissible functions \(y\in C_{k+m,n+p}^{1}\) satisfying the inequality \(||y-\hat{y}||_{1,\infty }<\delta \).

Let us define u and w by

$$\begin{aligned} \begin{aligned} u(t):= \sum \limits _{i=1}^{m}P_{i}^{'}\cdot \left( g_{iv}[\hat{y}](t)-\int \limits _{a}^{t} g_{iy}[\hat{y}](\tau )\varDelta \tau \right) ,\\ w(t):= \sum \limits _{i=m+1}^{m+p}P_{i}^{'}\cdot \left( g_{iv}\lbrace \hat{y}\rbrace (t)-\int \limits _{a}^{t} g_{iy}\lbrace \hat{y}\rbrace (\tau )\nabla \tau \right) . \end{aligned} \end{aligned}$$
(41)

Definition 20

An admissible function \(\hat{y}\) is said to be an extremal for \(\mathscr {K}\) if \(u(t) + w(\sigma (t)) = const\) and \(u(\rho (t)) + w(t) = const\) for all \(t\in [a,b]_\kappa ^\kappa \). An extremizer (i.e., a local minimizer or a local maximizer) to problem (38)–(40) that is not an extremal for \(\mathscr {K}\) is said to be a normal extremizer; otherwise (i.e., if it is an extremal for \(\mathscr {K}\)), the extremizer is said to be abnormal.

Theorem 21

(Optimality condition to the isoperimetric problem (38)–(40)) Let \(\chi \) and \(\xi \) be given as in (37), and u and w be given as in (41). If \(\hat{y}\) is a normal extremizer to the isoperimetric problem (38)–(40), then there exists a real number \(\lambda \) such that

  1. 1.

    \(\xi ^{\rho }(t)+\chi (t)-\lambda \left( u^{\rho }(t)+w(t)\right) = const\);

  2. 2.

    \(\xi (t)+\chi ^{\sigma }(t)-\lambda \left( u^{\rho }(t)+w(t)\right) = const\);

  3. 3.

    \(\xi ^{\rho }(t)+\chi (t)-\lambda \left( u(t)+w^{\sigma }(t)\right) = const\);

  4. 4.

    \(\xi (t)+\chi ^{\sigma }(t)-\lambda \left( u(t)+w^{\sigma }(t)\right) = const\);

for all \(t\in [a,b]^{\kappa }_{\kappa }\).

Proof

See proof of Theorem 3.9 in [25].

5.4 Illustrative Examples

In this section we consider three examples which illustrate the results of Theorems 18 and 21. We begin with a nonautonomous problem.

Example 9

Consider the problem

$$\begin{aligned} \begin{aligned} \mathscr {L}[y]= \frac{\int \limits _{0}^{1} t y^{\varDelta }(t) \varDelta t}{\int \limits _{0}^{1}(y^{\nabla }(t))^{2}\nabla t} \longrightarrow \min , \\ y(0)=0, \quad y(1)=1. \end{aligned} \end{aligned}$$
(42)

If y is a local minimizer to problem (42), then the Euler–Lagrange equations of Corollary 6 must hold, i.e.,

$$ \frac{1}{\mathscr {F}_{2}}\rho (t)-2\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} y^{\nabla }(t)=c, \quad t \in \mathbb {T}_{\kappa }, $$

and

$$ \frac{1}{\mathscr {F}_{2}}t-2\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}} y^{\nabla }(\sigma (t))=c, \quad t \in \mathbb {T}^{\kappa }, $$

where \(\mathscr {F}_{1}:=\mathscr {F}_{1}(y)=\int \limits _{0}^{1}t y^{\varDelta }(t)\varDelta t\) and \(\mathscr {F}_{2}:=\mathscr {F}_{2}(y)=\int \limits _{0}^{1}(y^{\nabla }(t))^{2}\nabla t\). Let us consider the second equation. Using (4) of Theorem 9, it can be written as

$$\begin{aligned} \frac{1}{\mathscr {F}_{2}}t-2\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}^{2}}y^{\varDelta }(t)=c, \quad t \in \mathbb {T}^{\kappa }. \end{aligned}$$
(43)

Solving (43) subject to the boundary conditions \(y(0)=0\) and \(y(1)=1\) gives

$$\begin{aligned} y(t)= \frac{1}{2Q}\int \limits _{0}^{t}\tau \varDelta \tau -t\left( \frac{1}{2Q}\int \limits _{0}^{1}\tau \varDelta \tau -1\right) , \quad t \in \mathbb {T}^{\kappa }, \end{aligned}$$
(44)

where \(Q:=\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}}\). Therefore, the solution depends on the time scale. Let us consider two examples: \(\mathbb {T}=\mathbb {R}\) and \(\mathbb {T}=\left\{ 0,\frac{1}{2},1\right\} \). On \(\mathbb {T}=\mathbb {R}\), from (44) we obtain

$$\begin{aligned} y(t)=\frac{1}{4Q}t^{2}+\frac{4Q-1}{4Q}t, \quad \quad y^{\varDelta }(t) = y^{\nabla }(t) = y'(t)=\frac{1}{2Q}t+\frac{4Q-1}{4Q} \end{aligned}$$
(45)

as solution of (43). Substituting (45) into \(\mathscr {F}_{1}\) and \(\mathscr {F}_{2}\) gives \(\mathscr {F}_{1}=\frac{12Q+1}{24Q}\) and \(\mathscr {F}_{2}=\frac{48Q^{2}+1}{48Q^{2}}\), that is,

$$\begin{aligned} Q=\frac{2Q(12Q+1)}{48Q^{2}+1}. \end{aligned}$$
(46)

Solving Eq. (46), we get \(Q\in \left\{ \frac{3-2\sqrt{3}}{12},\frac{3+2\sqrt{3}}{12}\right\} \). Because (42) is a minimization problem, we select \(Q=\frac{3-2\sqrt{3}}{12}\) and obtain the extremal

$$\begin{aligned} y(t)=-(3+2\sqrt{3}) t^{2} + (4 + 2 \sqrt{3}) t. \end{aligned}$$
(47)

If \(\mathbb {T}=\left\{ 0,\frac{1}{2},1\right\} \), then from (44) we obtain \(y(t)=\frac{1}{8Q}\sum \limits _{k=0}^{2t-1}k+\frac{8Q-1}{8Q}t\), that is,

$$\begin{aligned} y(t)= {\left\{ \begin{array}{ll} 0, &{} \text { if } t=0,\\ \frac{8Q-1}{16Q}, &{} \text { if } t=\frac{1}{2},\\ 1, &{} \text { if } t=1. \end{array}\right. } \end{aligned}$$

Direct calculations show that

$$\begin{aligned} \begin{aligned} y^{\varDelta }(0)=\frac{y(\frac{1}{2})-y(0)}{\frac{1}{2}}=\frac{8Q-1}{8Q}, \quad y^{\varDelta }\left( \frac{1}{2}\right) =\frac{y(1)-y(\frac{1}{2})}{\frac{1}{2}}=\frac{8Q+1}{8Q},\\ y^{\nabla }\left( \frac{1}{2}\right) =\frac{y(\frac{1}{2})-y(0)}{\frac{1}{2}}=\frac{8Q-1}{8Q}, \quad y^{\nabla }(1)=\frac{y(1)-y(\frac{1}{2})}{\frac{1}{2}} =\frac{8Q+1}{8Q}. \end{aligned} \end{aligned}$$
(48)

Substituting (48) into the integrals \(\mathscr {F}_{1}\) and \(\mathscr {F}_{2}\) gives

$$\begin{aligned} \mathscr {F}_{1}=\frac{8Q+1}{32Q}, \quad \mathscr {F}_{2}=\frac{64Q^{2}+1}{64Q^{2}}, \quad Q=\frac{\mathscr {F}_{1}}{\mathscr {F}_{2}}=\frac{2Q(8Q+1)}{64Q^{2}+1}. \end{aligned}$$

Thus, we obtain the equation \(64Q^{2}-16Q-1=0\). The solutions to this equation are \(Q\in \left\{ \frac{1-\sqrt{2}}{8}, \frac{1+\sqrt{2}}{8}\right\} \). We are interested in the minimum value of Q, so we select \(Q = \frac{1+\sqrt{2}}{8}\) to get the extremal

$$\begin{aligned} y(t) ={\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ 1-\frac{\sqrt{2}}{2}, &{} \hbox { if } t=\frac{1}{2},\\ 1, &{} \hbox { if }t=1. \end{array}\right. } \end{aligned}$$
(49)

Note that the extremals (47) and (49) are different: for (47) one has \(y(1/2) = \frac{5}{4} + \frac{\sqrt{3}}{2}\).
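A quick numerical cross-check of Example 9 (our own sketch): clearing denominators in (46) and discarding the spurious root \(Q=0\) gives \(48Q^{2}-24Q-1=0\) on \(\mathbb {T}=\mathbb {R}\), while on \(\mathbb {T}=\left\{ 0,\frac{1}{2},1\right\} \) we obtained \(64Q^{2}-16Q-1=0\); in both cases the quadratic formula reproduces the closed-form roots given above.

```python
# Cross-check of the two quadratics arising in Example 9 (our own sketch).
import math

# T = R: roots of 48Q^2 - 24Q - 1 = 0 versus (3 -+ 2 sqrt(3))/12
d = math.sqrt(24 ** 2 + 4 * 48)
print((24 - d) / 96, (3 - 2 * math.sqrt(3)) / 12)   # both ~ -0.03868
print((24 + d) / 96, (3 + 2 * math.sqrt(3)) / 12)   # both ~  0.53868

# T = {0, 1/2, 1}: roots of 64Q^2 - 16Q - 1 = 0 versus (1 -+ sqrt(2))/8
d = math.sqrt(16 ** 2 + 4 * 64)
print((16 - d) / 128, (1 - math.sqrt(2)) / 8)       # both ~ -0.05178
print((16 + d) / 128, (1 + math.sqrt(2)) / 8)       # both ~  0.30178
```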

In the previous example, the variational functional is given by the ratio of a delta and a nabla integral. Now we discuss a variational problem where the composition is expressed by the product of three time-scale integrals.

Example 10

Consider the problem

$$\begin{aligned} \begin{array}{c} \mathscr {L}[y]= \left( \int \limits _{0}^{3} t y^{\varDelta }(t) \varDelta t\right) \left( \int \limits _{0}^{3} y^{\varDelta }(t)\left( 1+t\right) \varDelta t\right) \left( \int \limits _{0}^{3}\left[ \left( y^{\nabla }(t)\right) ^{2}+t\right] \nabla t\right) \longrightarrow \min ,\\ y(0)=0,\quad y(3)=3. \end{array} \end{aligned}$$
(50)

If y is a local minimizer to problem (50), then the Euler–Lagrange equations must hold, and we can write that

$$\begin{aligned} \left( \mathscr {F}_{1}\mathscr {F}_{3}+\mathscr {F}_{2}\mathscr {F}_{3}\right) t +\mathscr {F}_{1}\mathscr {F}_{3}+2\mathscr {F}_{1}\mathscr {F}_{2} y^{\nabla }(\sigma (t))=c, \quad t \in \mathbb {T}^{\kappa }, \end{aligned}$$
(51)

where c is a constant, \(\mathscr {F}_{1}:=\mathscr {F}_{1}(y) =\int \limits _{0}^{3} t y^{\varDelta }(t) \varDelta t\), \(\mathscr {F}_{2}:=\mathscr {F}_{2}(y) =\int \limits _{0}^{3} y^{\varDelta }(t)\left( 1+t\right) \varDelta t\), and \(\mathscr {F}_{3}:=\mathscr {F}_{3}(y) =\int \limits _{0}^{3}\left[ \left( y^{\nabla }(t)\right) ^{2}+t\right] \nabla t\). Using relation (4), we can write (51) as

$$\begin{aligned} \left( \mathscr {F}_{1}\mathscr {F}_{3}+\mathscr {F}_{2}\mathscr {F}_{3}\right) t +\mathscr {F}_{1}\mathscr {F}_{3}+2\mathscr {F}_{1}\mathscr {F}_{2}y^{\varDelta }(t)=c, \quad t \in \mathbb {T}^{\kappa }. \end{aligned}$$
(52)

Using the boundary conditions \(y(0)=0\) and \(y(3)=3\), from (52) we get that

$$\begin{aligned} y(t)=\left( 1+\frac{Q}{3}\int \limits _{0}^{3}\tau \varDelta \tau \right) t -Q \int \limits _{0}^{t}\tau \varDelta \tau , \quad t \in \mathbb {T}^{\kappa }, \end{aligned}$$
(53)

where \(Q=\frac{\mathscr {F}_{1}\mathscr {F}_{3} +\mathscr {F}_{2}\mathscr {F}_{3}}{2\mathscr {F}_{1}\mathscr {F}_{2}}\). Therefore, the solution depends on the time scale. Let us consider \(\mathbb {T}=\mathbb {R}\) and \(\mathbb {T} =\left\{ 0,\frac{1}{2},1,\frac{3}{2},2,\frac{5}{2},3\right\} \). On \(\mathbb {T}=\mathbb {R}\), expression (53) gives

$$\begin{aligned} y(t)=\left( \frac{2+3Q}{2}\right) t - \frac{Q}{2}t^{2}, \quad y^{\varDelta }(t)= y^{\nabla }(t) = y'(t) = \frac{2+3Q}{2}-Qt \end{aligned}$$
(54)

as solution of (52). Substituting (54) into \(\mathscr {F}_{1}\), \(\mathscr {F}_{2}\) and \(\mathscr {F}_{3}\) gives:

$$ \mathscr {F}_{1}=\frac{18-9Q}{4}, \quad \mathscr {F}_{2}=\frac{30-9Q}{4}, \quad \mathscr {F}_{3}=\frac{9Q^{2}+30}{4}. $$

Solving equation \(9Q^{3} - 36 Q^{2} + 45 Q - 40=0\), one finds the solution

$$ Q = \frac{1}{27}\left[ 36+ \root 3 \of {24786-729\sqrt{1155}} +9\root 3 \of {34+\sqrt{1155}}\right] \approx 2.7755 $$

and the extremal \(y(t)=5.16325t-1.38775t^{2}\).

Let us consider now the time scale \(\mathbb {T} =\left\{ 0,\frac{1}{2},1,\frac{3}{2},2,\frac{5}{2},3\right\} \). From (53), we obtain

$$\begin{aligned} y(t)=\left( \frac{4+5Q}{4}\right) t-\frac{Q}{4} \sum \limits _{k=0}^{2t-1}k = {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ \frac{4+5Q}{8}, &{} \hbox { if } t=\frac{1}{2},\\ 1+Q, &{} \hbox { if } t=1,\\ \frac{12+9Q}{8}, &{} \hbox { if } t=\frac{3}{2},\\ 2+Q, &{} \hbox { if } t=2,\\ \frac{20+5Q}{8}, &{} \hbox { if } t=\frac{5}{2},\\ 3, &{} \hbox { if } t=3 \end{array}\right. } \end{aligned}$$
(55)

as solution of (52). Substituting (55) into \(\mathscr {F}_{1}\), \(\mathscr {F}_{2}\) and \(\mathscr {F}_{3}\), yields

$$\begin{aligned} \mathscr {F}_{1}=\frac{60-35Q}{16}, \quad \mathscr {F}_{2}=\frac{108-35Q}{16}, \quad \mathscr {F}_{3}=\frac{35Q^{2}+132}{16}. \end{aligned}$$

Solving the equation \(245Q^{3}-882Q^{2}+1110Q-\frac{5544}{5}=0\), we get \(Q\approx 2.5139\) and the extremal

$$\begin{aligned} y(t)= {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ 2.0711875, &{} \hbox { if } t=\frac{1}{2},\\ 3.5139, &{} \hbox { if } t=1,\\ 4.3281375, &{} \hbox { if } t=\frac{3}{2},\\ 4.5139, &{} \hbox { if } t=2,\\ 4.0711875, &{} \hbox { if } t=\frac{5}{2},\\ 3, &{} \hbox { if } t=3 \end{array}\right. } \end{aligned}$$
(56)

for problem (50) on \(\mathbb {T}=\left\{ 0,\frac{1}{2},1,\frac{3}{2},2,\frac{5}{2},3\right\} \).

In order to illustrate the difference between compositions of mixed delta-nabla integrals and the pure delta or pure nabla situations, we now consider two variants of problem (50): (i) the first consists of delta operators only:

$$\begin{aligned} \begin{aligned} \mathscr {L}[y]= \left( \int \limits _{0}^{3} t y^{\varDelta }(t) \varDelta t\right) \left( \int \limits _{0}^{3} y^{\varDelta }(t)\left( 1+t\right) \varDelta t\right) \left( \int \limits _{0}^{3}\left[ \left( y^{\varDelta }(t)\right) ^{2}+t\right] \varDelta t\right) \longrightarrow \min ; \end{aligned} \end{aligned}$$
(57)

(ii) the second of nabla operators only:

$$\begin{aligned} \begin{aligned} \mathscr {L}[y]= \left( \int \limits _{0}^{3} t y^{\nabla }(t) \nabla t\right) \left( \int \limits _{0}^{3} y^{\nabla }(t)\left( 1+t\right) \nabla t\right) \left( \int \limits _{0}^{3}\left[ \left( y^{\nabla }(t)\right) ^{2}+t\right] \nabla t\right) \longrightarrow \min . \end{aligned} \end{aligned}$$
(58)

Both problems (i) and (ii) are subject to the same boundary conditions as in (50):

$$\begin{aligned} y(0)=0,\quad y(3)=3. \end{aligned}$$
(59)

All three problems, (50), (57) with (59), and (58) with (59), coincide on \(\mathbb {T}=\mathbb {R}\). Consider, as before, the time scale \(\mathbb {T}=\left\{ 0,\frac{1}{2},1,\frac{3}{2},2,\frac{5}{2},3\right\} \). Recall that problem (50) has the extremal (56). (i) First, let us consider the delta problem (57), (59). We obtain

$$\begin{aligned} \mathscr {F}_{1}=\frac{60-35Q}{16}, \quad \mathscr {F}_{2}=\frac{108-35Q}{16}, \quad \mathscr {F}_{3}=\frac{35Q^{2}+108}{16} \end{aligned}$$

and the equation \(245Q^{3}-882Q^{2}+1026Q-\frac{4536}{5}=0\). Its numerical solution \(Q\approx 2.5216\) entails the extremal

$$\begin{aligned} y(t)= {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ 2.076, &{} \hbox { if } t=\frac{1}{2},\\ 3.5216, &{} \hbox { if } t=1,\\ 4.3368, &{} \hbox { if } t=\frac{3}{2},\\ 4.5216, &{} \hbox { if } t=2,\\ 4.076, &{} \hbox { if } t=\frac{5}{2},\\ 3, &{} \hbox { if } t=3. \end{array}\right. } \end{aligned}$$

(ii) In the latter nabla problem (58), (59) we have

$$\begin{aligned} \mathscr {F}_{1}=\frac{84-35Q}{16}, \quad \mathscr {F}_{2}=\frac{132-35Q}{16}, \quad \mathscr {F}_{3}=\frac{35Q^{2}+132}{16} \end{aligned}$$

and the equation \(175Q^{3}-810Q^{2}+1122Q-\frac{7128}{7}=0\). Using its numerical solution \(Q\approx 3.1907\) we get the extremal

$$\begin{aligned} y(t)= {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ 2.4942, &{} \hbox { if } t=\frac{1}{2},\\ 4.1907, &{} \hbox { if } t=1,\\ 5.0895, &{} \hbox { if } t=\frac{3}{2},\\ 5.1907, &{} \hbox { if } t=2,\\ 4.4942, &{} \hbox { if } t=\frac{5}{2},\\ 3, &{} \hbox { if } t=3. \end{array}\right. } \end{aligned}$$
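Each of the three cubic equations obtained in this example has exactly one real root, and a numerical solver reproduces the values of Q used above (a sketch of ours, using numpy):

```python
# Real roots of the three cubics of Example 10 (our own cross-check).
import numpy as np

cubics = {
    "mixed (50)": [245, -882, 1110, -5544 / 5],
    "delta (57)": [245, -882, 1026, -4536 / 5],
    "nabla (58)": [175, -810, 1122, -7128 / 7],
}
for label, coeffs in cubics.items():
    real = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
    print(label, [round(r, 4) for r in real])
# -> mixed (50) [2.5139], delta (57) [2.5216], nabla (58) [3.1907]
```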

Finally, we apply the results of Sect. 5.3 to an isoperimetric problem.

Example 11

Let us consider the problem of extremizing

$$\begin{aligned} \mathscr {L}[y]= \frac{ \int \limits _{0}^{1}(y^{\varDelta }(t))^{2}\varDelta t}{\int \limits _{0}^{1} ty^{\nabla }(t)\nabla t} \end{aligned}$$

subject to the boundary conditions \(y(0)=0\) and \(y(1)=1\) and the isoperimetric constraint

$$ \mathscr {K}[y]=\int \limits _{0}^{1} ty^{\nabla }(t)\nabla t=1. $$

Applying Theorem 21, we get the nabla differential equation

$$\begin{aligned} \frac{2}{\mathscr {F}_{2}}y^{\nabla }(t) - \left( \lambda + \frac{\mathscr {F}_{1}}{(\mathscr {F}_{2})^{2}}\right) t = c, \quad t \in \mathbb {T}^{\kappa }_{\kappa }. \end{aligned}$$
(60)

Solving this equation, we obtain

$$\begin{aligned} y(t)=\left( 1-Q\int \limits _{0}^{1}\tau \nabla \tau \right) t +Q\int \limits _{0}^{t}\tau \nabla \tau , \end{aligned}$$
(61)

where \(Q=\frac{\mathscr {F}_{2}}{2}\left( \frac{\mathscr {F}_{1}}{(\mathscr {F}_{2})^{2}}+\lambda \right) \). Therefore, the solution of equation (60) depends on the time scale. Let us consider \(\mathbb {T}=\mathbb {R}\) and \(\mathbb {T}=\left\{ 0,\frac{1}{2},1\right\} \).

On \(\mathbb {T}=\mathbb {R}\), from (61) we obtain that \(y(t)=\frac{2-Q}{2}t+\frac{Q}{2}t^{2}\). Substituting this expression for y into the integrals \(\mathscr {F}_{1}\) and \(\mathscr {F}_{2}\) gives \(\mathscr {F}_{1}=\frac{Q^{2}+12}{12}\) and \(\mathscr {F}_{2}=\frac{Q+6}{12}\). Using the given isoperimetric constraint, we obtain \(Q=6\), \(\lambda =8\), and \(y(t)=3t^{2}-2t\). Let us consider now the time scale \(\mathbb {T}=\left\{ 0,\frac{1}{2},1\right\} \). From (61), we have

$$ y(t)=\frac{4-3Q}{4}t+Q\sum \limits _{k=1}^{2t}\frac{k}{4} = {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t=0,\\ \frac{4-Q}{8}, &{} \hbox { if } t=\frac{1}{2},\\ 1, &{} \hbox { if } t=1. \end{array}\right. } $$

Simple calculations show that

$$\begin{aligned} \begin{aligned}&\mathscr {F}_{1}=\sum \limits _{k=0}^{1}\frac{1}{2} \left( y^{\varDelta }\left( \frac{k}{2}\right) \right) ^{2} =\frac{1}{2}\left( y^{\varDelta }(0)\right) ^{2} +\frac{1}{2}\left( y^{\varDelta }\left( \frac{1}{2}\right) \right) ^{2}=\frac{Q^{2}+16}{16}, \\&\mathscr {F}_{2}=\sum \limits _{k=1}^{2}\frac{1}{4}k y^{\nabla }\left( \frac{k}{2}\right) =\frac{1}{4} y^{\nabla }\left( \frac{1}{2}\right) +\frac{1}{2}y^{\nabla }(1)=\frac{Q+12}{16} \end{aligned} \end{aligned}$$

and \(\mathscr {K}(y)=\frac{Q+12}{16}=1\). Therefore, \(Q=4\), \(\lambda =6\), and we have the extremal

$$ y(t)= {\left\{ \begin{array}{ll} 0, &{} \hbox { if } t \in \left\{ 0, \frac{1}{2}\right\} ,\\ 1, &{} \hbox { if } t=1. \end{array}\right. } $$
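The continuous case of Example 11 can be confirmed symbolically: with \(y(t)=3t^{2}-2t\) one indeed gets \(\mathscr {F}_{1}=4\) and \(\mathscr {F}_{2}=1\) (so the isoperimetric constraint holds), and the left-hand side of (60) is constant with \(\lambda =8\). A minimal sympy sketch (our own):

```python
# Symbolic check of Example 11 on T = R (our own sketch).
import sympy as sp

t = sp.symbols('t')
y = 3 * t ** 2 - 2 * t                              # candidate extremal

F1 = sp.integrate(sp.diff(y, t) ** 2, (t, 0, 1))    # int_0^1 (y')^2 dt
F2 = sp.integrate(t * sp.diff(y, t), (t, 0, 1))     # int_0^1 t y' dt = K[y]
print(F1, F2)                                       # 4, 1

lam = 8
# left-hand side of (60), with y^nabla = y' on T = R: must be constant
print(sp.simplify(2 / F2 * sp.diff(y, t) - (lam + F1 / F2 ** 2) * t))  # -4
```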

6 Conclusions

In this survey we collected some of our recent research on direct and inverse problems of the calculus of variations on arbitrary time scales. For infinite horizon variational problems on time scales we refer the reader to [24, 53]. We started by studying inverse problems of the calculus of variations, which had not been studied before in the time-scale framework. First we derived a general form of a variational functional that attains a local minimum at a given function \(y_{0}\) under Euler–Lagrange and strengthened Legendre conditions (Theorem 16). Next we considered a new approach to the inverse problem of the calculus of variations, using an integral perspective instead of the classical differential point of view. In order to solve the problem, we introduced new definitions: (i) self-adjointness of an integro-differential equation, and (ii) equation of variation. We obtained a necessary condition for an integro-differential equation to be an Euler–Lagrange equation on an arbitrary time scale \(\mathbb {T}\) (Theorem 17). The question of sufficiency remains open. Finally, we developed the direct calculus of variations by considering functionals that are a composition of a certain scalar function with delta and nabla integrals of a vector-valued field. For such problems we obtained delta-nabla Euler–Lagrange equations in integral form (Theorem 18), transversality conditions (Theorems 19 and 20) and necessary optimality conditions for isoperimetric problems (Theorem 21). To consider such general mixed delta-nabla variational problems on unbounded time scales (infinite horizon problems) also remains an open direction of research. Another interesting open direction consists in studying delta-nabla inverse problems of the calculus of variations for composition functionals and their conservation laws [61].