
The essence of mathematics lies in its freedom.

Georg Cantor (1845–1918).

As for everything else, so for a mathematical theory: beauty can be perceived but not explained.

Arthur Cayley (1821–1895).

From a modeling point of view, it is realistic to model a phenomenon by a dynamic system that incorporates both continuous and discrete time, namely, time as an arbitrary closed set of reals. It is natural to ask whether one can provide a framework that handles both kinds of dynamic systems simultaneously, so as to gain insight into, and a better understanding of, the subtle differences between them. The theory of “dynamic systems on time scales,” or dynamic systems on measure chains (by a measure chain we mean a union of disjoint closed intervals of \(\mathbb{R}\)), offers the desired unified approach.

This chapter contains some preliminaries, definitions, and concepts concerning time scale calculus. The results in this chapter will cover delta, nabla, and diamond-α derivatives and integrals.

1.1 Delta Calculus

For the notions used below we refer the reader to the books [51, 52], which summarize and organize much of time scale calculus. A time scale is an arbitrary nonempty closed subset of the real numbers. Throughout the book, we denote the time scale by the symbol \(\mathbb{T}\). For example, the real numbers \(\mathbb{R}\), the integers \(\mathbb{Z}\), and the natural numbers \(\mathbb{N}\) are time scales. For \(t \in \mathbb{T}\), we define the forward jump operator \(\sigma: \mathbb{T} \rightarrow \mathbb{T}\) by \(\sigma (t):=\inf \{ s \in \mathbb{T}: s > t\}\), and the mapping \(\mu: \mathbb{T} \rightarrow \mathbb{R}^{+} = [0,\infty )\) defined by \(\mu (t):=\sigma (t) - t\) is called the graininess. A time scale \(\mathbb{T}\) equipped with the order topology is metrizable and is a \(K_{\sigma }\)-space; i.e., it is a union of at most countably many compact sets. The metric on \(\mathbb{T}\) which generates the order topology is given by \(d(r,s):= \left \vert \mu (r,s)\right \vert \), where \(\mu (\cdot ) =\mu (\cdot \,,\tau )\) for a fixed \(\tau \in \mathbb{T}\).

When \(\mathbb{T} = \mathbb{R}\), we see that σ(t) = t and \(\mu (t) \equiv 0\) for all \(t \in \mathbb{T}\) and when \(\mathbb{T} = \mathbb{N}\), we have that \(\sigma (t) = t + 1\) and μ(t) ≡ 1 for all \(t \in \mathbb{T}\). The backward jump operator \(\rho: \mathbb{T} \rightarrow \mathbb{T}\) is defined by \(\rho (t):=\sup \{ s \in \mathbb{T}: s < t\}\). The mapping \(\nu: \mathbb{T} \rightarrow \mathbb{R}_{0}^{+}\) such that \(\nu (t) = t -\rho (t)\) is called the backward graininess. If σ(t) > t, we say that t is right-scattered , while if ρ(t) < t, we say that t is left-scattered . Also, if \(t <\sup \mathbb{T}\) and σ(t) = t, then t is called right-dense , and if \(t >\inf \mathbb{T}\) and ρ(t) = t, then t is called left-dense . If \(\mathbb{T}\) has a left-scattered maximum m, then \(\mathbb{T}^{k} = \mathbb{T} -\{ m\}\). Otherwise \(\mathbb{T}^{k} = \mathbb{T}\). In summary,

$$\displaystyle{ \mathbb{T}^{k}=\left \{\begin{array}{ll} \mathbb{T}\setminus (\rho (\sup \mathbb{T}),\sup \mathbb{T}], &\ \ \text{if} \ \ \sup \mathbb{T} < \infty, \\ \mathbb{T}, &\ \ \text{if} \ \ \sup \mathbb{T} = \infty. \end{array} \right. }$$

Likewise, \(\mathbb{T}_{k}\) is defined as the set \(\mathbb{T}_{k}= \mathbb{T}\setminus [\inf \mathbb{T},\sigma (\inf \mathbb{T}))\) if \(\left \vert \inf \mathbb{T}\right \vert < \infty \), and \(\mathbb{T}_{k}= \mathbb{T}\) if \(\inf \mathbb{T} = -\infty \).
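On a concrete finite time scale these operators can be computed directly from the definitions. The following Python sketch (the helper names `sigma`, `rho`, `mu`, `nu` are ours, not from the book) uses the usual conventions \(\sigma (\max \mathbb{T}) =\max \mathbb{T}\) and \(\rho (\min \mathbb{T}) =\min \mathbb{T}\):

```python
# Minimal sketch (helper names sigma, rho, mu, nu are ours) of the jump
# operators and graininess functions on a finite time scale, with the usual
# conventions sigma(max T) = max T and rho(min T) = min T.

def sigma(T, t):
    """Forward jump: inf{s in T : s > t}, or t itself at the maximum."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(T, t):
    """Backward jump: sup{s in T : s < t}, or t itself at the minimum."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

def mu(T, t):
    """Forward graininess mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

def nu(T, t):
    """Backward graininess nu(t) = t - rho(t)."""
    return t - rho(T, t)

# A time scale mixing closely spaced and isolated points.
T = [0.0, 0.5, 1.0, 3.0, 4.0]
print(sigma(T, 1.0), mu(T, 1.0))   # 3.0 2.0 : 1.0 is right-scattered
print(rho(T, 0.5), nu(T, 0.5))     # 0.0 0.5 : 0.5 is left-scattered
```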

For a function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we define the derivative f Δ as follows. Let \(t \in \mathbb{T}\). If there exists a number \(\alpha \in \mathbb{R}\) such that for all \(\varepsilon > 0\) there exists a neighborhood U of t with

$$\displaystyle{ \vert f(\sigma (t)) - f(s) -\alpha (\sigma (t) - s)\vert \leq \varepsilon \vert \sigma (t) - s\vert, }$$

for all s ∈ U, then f is said to be differentiable at t, and we call α the delta derivative of f at t and denote it by f Δ(t). For example, if \(\mathbb{T} = \mathbb{R}\), then

$$\displaystyle{ f^{\varDelta }(t) = f^{\prime}(t) =\lim \limits _{\varDelta t\rightarrow 0}\frac{f(t +\varDelta t) - f(t)} {\varDelta t} \text{, for all }\ t \in \mathbb{T}. }$$

If \(\mathbb{T} = \mathbb{N}\), then \(f^{\varDelta }(t) = f(t + 1) - f(t)\) for all \(t \in \mathbb{T}\). For a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) the (delta) derivative is defined by

$$\displaystyle{ f^{\varDelta }(t) = \frac{f(\sigma (t)) - f(t)} {\sigma (t) - t}, }$$

if f is continuous at t and t is right-scattered. If t is not right-scattered, then the derivative is defined by

$$\displaystyle{ f^{\varDelta }(t) =\lim _{s\rightarrow t}\frac{f(\sigma (t)) - f(s)} {\sigma (t) - s} =\lim _{s\rightarrow t}\frac{f(t) - f(s)} {t - s}, }$$

provided this limit exists. A useful formula is

$$\displaystyle{ f^{\sigma } = f +\mu f^{\varDelta }\quad \mbox{ where}\quad f^{\sigma }:= f \circ \sigma. }$$

A function \(f: [a,b] \rightarrow \mathbb{R}\) is said to be right-dense continuous (rd-continuous) if it is right continuous at each right-dense point and has a finite left limit at each left-dense point, and f is said to be differentiable if its derivative exists. The space of rd-continuous functions is denoted by \(C_{rd}(\mathbb{T}, \mathbb{R})\). A time scale \(\mathbb{T}\) is said to be regular if the following two conditions are satisfied simultaneously:

  1. (a).

    For all \(t \in \mathbb{T}\), σ(ρ(t)) = t.

  2. (b).

    For all \(t \in \mathbb{T}\), ρ(σ(t)) = t.

Remark 1.1.1

If \(\mathbb{T}\) is a regular time scale, then both operators are invertible with \(\sigma ^{-1} =\rho\) and \(\rho ^{-1} =\sigma\) .

The following theorem gives the product and quotient rules for the derivative of the product fg and the quotient f∕g (where \(gg^{\sigma }\neq 0\)) of two delta differentiable functions f and g.

Theorem 1.1.1

Assume \(f,g: \mathbb{T} \rightarrow \mathbb{R}\) are delta differentiable at \(t \in \mathbb{T}\) . Then

$$\displaystyle\begin{array}{rcl} (fg)^{\varDelta } = f^{\varDelta }g + f^{\sigma }g^{\varDelta } = fg^{\varDelta } + f^{\varDelta }g^{\sigma },& &{}\end{array}$$
(1.1.1)
$$\displaystyle\begin{array}{rcl} \left (\frac{f} {g}\right )^{\varDelta } = \frac{f^{\varDelta }g - fg^{\varDelta }} {gg^{\sigma }}.& &{}\end{array}$$
(1.1.2)

By using the product rule, we see that the derivative of \(f(t) = (t-\alpha )^{m}\) for \(m \in \mathbb{N},\) and \(\alpha \in \mathbb{T}\) can be calculated as

$$\displaystyle{ f^{\varDelta }(t) = \left ((t-\alpha )^{m}\right )^{\varDelta } =\sum _{ \nu =0}^{m-1}\left (\sigma (t)-\alpha \right )^{\nu }(t-\alpha )^{m-\nu -1}. }$$
(1.1.3)

As a special case when α = 0, we see that the derivative of f(t) = t m for \(m \in \mathbb{N}\) can be calculated as

$$\displaystyle{ \left (t^{m}\right )^{\varDelta } =\sum _{ \gamma =0}^{m-1}\sigma ^{\gamma }(t)t^{m-\gamma -1}. }$$
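Since on \(\mathbb{T} = \mathbb{Z}\) we have \(\sigma (t) = t + 1\) and \(f^{\varDelta }(t) = f(t + 1) - f(t)\), the formula above can be checked numerically. The short Python sketch below (the function name is ours, not from the book) verifies it for several values of t and m:

```python
# Numerical check on T = Z of the special case alpha = 0 of (1.1.3):
# (t^m)^Delta = sum_{gamma=0}^{m-1} sigma(t)^gamma * t^(m-gamma-1),
# where sigma(t) = t + 1 and f^Delta(t) = f(t+1) - f(t) on Z.
# The function name is ours, not from the book.

def delta_of_power(t, m):
    """Right-hand side of the formula on T = Z."""
    return sum((t + 1)**g * t**(m - g - 1) for g in range(m))

for t in range(-3, 4):
    for m in range(1, 6):
        # on Z the left-hand side is just the forward difference of t^m
        assert delta_of_power(t, m) == (t + 1)**m - t**m
print("formula verified on T = Z")
```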

Note that when \(\mathbb{T} = \mathbb{R}\), we have

$$\displaystyle{ \sigma (t) = t,\quad \mu (t) = 0,\quad f^{\varDelta }(t) = f^{\prime}(t). }$$

When \(\mathbb{T} = \mathbb{Z}\), we have

$$\displaystyle{ \sigma (t) = t + 1,\text{ }\mu (t) = 1,\text{ }f^{\varDelta }(t) =\varDelta f(t). }$$

When \(\mathbb{T} =h\mathbb{Z}\), h > 0, we have \(\sigma (t) = t + h,\) μ(t) = h, 

$$\displaystyle{ f^{\varDelta }(t) =\varDelta _{h}f(t) = \frac{(f(t+h)-f(t))} {h}. }$$

When \(\mathbb{T} =\{ q^{k}: k \in \mathbb{N}_{0}\}\), q > 1, we have σ(t) = qt, \(\mu (t) = (q - 1)t,\)

$$\displaystyle{ f^{\varDelta }(t) =\varDelta _{q}f(t) = \frac{(f(q\,t) - f(t))} {(q - 1)\,t}. }$$

When \(\mathbb{T} = \mathbb{N}_{0}^{2} =\{ t^{2}: t \in \mathbb{N}\},\) we have \(\sigma (t) = (\sqrt{t} + 1)^{2}\) and

$$\displaystyle{ \mu (t) = 1 + 2\sqrt{t},\quad f^{\varDelta }(t) =\varDelta _{0}f(t) = \frac{f((\sqrt{t} + 1)^{2}) - f(t)} {1 + 2\sqrt{t}}. }$$

When \(\mathbb{T} = \mathbb{T}_{n} =\{ t_{n}: n \in \mathbb{N}_{0}\}\), where \((t_{n})\) are the harmonic numbers defined by \(t_{0} = 0\) and \(t_{n} =\sum _{ k=1}^{n}\frac{1} {k}\), \(n \in \mathbb{N}\), we have

$$\displaystyle{ \sigma (t_{n}) = t_{n+1},\quad \mu (t_{n}) = \frac{1} {n + 1},\quad f^{\varDelta }(t_{n}) = (n + 1)\varDelta f(t_{n}) = (n + 1)\left [f(t_{n+1}) - f(t_{n})\right ]. }$$

When \(\mathbb{T}_{2}=\{\sqrt{n}: n \in \mathbb{N}\},\) we have \(\sigma (t) = \sqrt{t^{2 } + 1},\)

$$\displaystyle{ \mu (t) = \sqrt{t^{2 } + 1} - t,\text{ }f^{\varDelta }(t) =\varDelta _{2}f(t) = \frac{(f(\sqrt{t^{2 } + 1}) - f(t))} {\sqrt{t^{2 } + 1} - t}. }$$

When \(\mathbb{T}_{3}=\{\root{3}\of{n}: n \in \mathbb{N}\},\) we have \(\sigma (t) = \root{3}\of{t^{3} + 1}\) and

$$\displaystyle{ \mu (t) = \root{3}\of{t^{3} + 1} - t,\,f^{\varDelta }(t) =\varDelta _{ 3}f(t) = \frac{(f(\root{3}\of{t^{3} + 1}) - f(t))} {\root{3}\of{t^{3} + 1} - t}. }$$
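The q-difference quotient appearing in the example \(\mathbb{T} =\{ q^{k}: k \in \mathbb{N}_{0}\}\) is easy to evaluate directly. As a small illustration (the function name and choices are ours), for \(f(t) = t^{2}\) one gets \(\varDelta _{q}f(t) = (q + 1)t\), which tends to \(2t = f^{\prime}(t)\) as \(q \rightarrow 1\):

```python
# Hedged illustration (names ours): the q-derivative on T = {q^k : k in N_0}.
# For f(t) = t^2: Delta_q f(t) = (f(qt) - f(t)) / ((q - 1) t) = (q + 1) t.

def q_derivative(f, t, q):
    """Delta_q f(t) = (f(q t) - f(t)) / ((q - 1) t)."""
    return (f(q * t) - f(t)) / ((q - 1) * t)

f = lambda t: t**2
t = 8.0                              # t = 2^3 lies in the time scale 2^{N_0}
print(q_derivative(f, t, 2.0))       # (q + 1) * t = 24.0
print(q_derivative(f, t, 1.001))     # close to f'(t) = 2t = 16
```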

For \(a,b \in \mathbb{T},\) and a delta differentiable function f, the Cauchy integral of f Δ is defined by

$$\displaystyle{ \int _{a}^{b}f^{\varDelta }(t)\varDelta t = f(b) - f(a). }$$

Theorem 1.1.2

Let \(f,g \in C_{rd}([a,b], \mathbb{R})\) , \(a,b,c \in \mathbb{T}\) and \(\alpha,\beta \in \mathbb{R}\) . Then, the following are true:

  1. 1.

    \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\varDelta t =\alpha \int \nolimits _{ a}^{b}f(t)\varDelta t +\beta \int \nolimits _{ a}^{b}g(t)\varDelta t\),

  2. 2.

    \(\int \nolimits _{a}^{b}f(t)\varDelta t = -\int \nolimits _{b}^{a}f(t)\varDelta t\),

  3. 3.

    \(\int \nolimits _{a}^{c}f(t)\varDelta t =\int \nolimits _{ a}^{b}f(t)\varDelta t +\int \nolimits _{ b}^{c}f(t)\varDelta t\),

  4. 4.

    \(\left \vert \int _{a}^{b}f(t)\varDelta t\right \vert \leq \int _{a}^{b}\left \vert f(t)\right \vert \varDelta t\) .

An integration by parts formula reads

$$\displaystyle{ \int _{a}^{b}f(t)g^{\varDelta }(t)\varDelta t = \left.f(t)g(t)\right \vert _{ a}^{b} -\int _{ a}^{b}f^{\varDelta }(t)g^{\sigma }(t)\varDelta t, }$$
(1.1.4)

and improper integrals are defined as

$$\displaystyle{ \int _{a}^{\infty }f(t)\varDelta t =\lim _{ b\rightarrow \infty }\int _{a}^{b}f(t)\varDelta t. }$$

Note that when \(\mathbb{T} = \mathbb{R}\), we have

$$\displaystyle{ \int _{a}^{b}f(t)\varDelta t =\int _{ a}^{b}f(t)dt. }$$

When \(\mathbb{T} = \mathbb{Z}\), we have

$$\displaystyle{ \int _{a}^{b}f(t)\varDelta t =\sum _{ t=a}^{b-1}f(t). }$$

When \(\mathbb{T} =h\mathbb{Z}\), h > 0, we have

$$\displaystyle{ \int _{a}^{b}f(t)\varDelta t =\sum _{ k=0}^{\frac{b-a-h} {h} }f(a + kh)h. }$$

When \(\mathbb{T} =\{ t: t = q^{k}\), \(k \in \mathbb{N}_{0}\), q > 1}, we have

$$\displaystyle{ \int _{t_{0}}^{\infty }f(t)\varDelta t =\sum _{ k=0}^{\infty }f(q^{k})\mu (q^{k}). }$$

Note that the integration formula on a discrete time scale is defined by

$$\displaystyle{ \int _{a}^{b}f(t)\varDelta t =\sum _{ t\in [a,b)}f(t)\mu (t). }$$

It is well known that rd-continuous functions possess antiderivatives. If f is rd-continuous and F Δ = f, then

$$\displaystyle{ \int _{t}^{\sigma (t)}f(s)\varDelta s = F(\sigma (t)) - F(t) =\mu (t)F^{\varDelta }(t) =\mu (t)f(t). }$$
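On a time scale consisting of isolated points, these integral formulas reduce to finite weighted sums. The following Python sketch (helper names ours) computes \(\int _{a}^{b}f(t)\varDelta t =\sum _{t\in [a,b)}f(t)\mu (t)\) on a small time scale and checks the single-step identity \(\int _{t}^{\sigma (t)}f(s)\varDelta s =\mu (t)f(t)\):

```python
# Sketch (helper names ours): on a time scale of isolated points the delta
# integral is the finite weighted sum  sum_{t in [a, b)} f(t) * mu(t).

def delta_integral(T, f, a, b):
    """int_a^b f(t) Delta t on a finite, purely isolated time scale T."""
    def sigma(t):
        later = [s for s in T if s > t]
        return min(later) if later else t
    return sum(f(t) * (sigma(t) - t) for t in T if a <= t < b)

T = [0, 1, 3, 4, 7]
f = lambda t: t + 1
print(delta_integral(T, f, 0, 7))   # 1*1 + 2*2 + 4*1 + 5*3 = 24
# single-step identity: int_t^{sigma(t)} f(s) Delta s = mu(t) f(t)
print(delta_integral(T, f, 3, 4))   # mu(3) * f(3) = 1 * 4 = 4
```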

Now, we give the definition of the generalized exponential function and its derivative. We say that \(p: \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is regressive provided 1 +μ(t)p(t) ≠ 0 for all \(t \in \mathbb{T}^{\kappa }\). We denote by \(\mathcal{R}\) the set of all regressive and rd-continuous functions, and we define the set \(\mathcal{R}^{+}\) of all positively regressive elements of \(\mathcal{R}\) by \(\mathcal{R}^{+} =\{ p \in \mathcal{R}: 1 +\mu (t)p(t) > 0,\) for all \(t \in \mathbb{T}\}\). The set of all regressive functions on a time scale \(\mathbb{T}\) forms an Abelian group under the addition ⊕ defined by \(p \oplus q:= p + q +\mu pq\). If \(p \in \mathcal{R}\), then we can define the exponential function by

$$\displaystyle{ e_{p}(t,s) =\exp \left (\int _{s}^{t}\xi _{ \mu (\tau )}(p(\tau ))\varDelta \tau \right ),\ \text{ for }t \in \mathbb{T},s \in \mathbb{T}^{k}, }$$
(1.1.5)

where ξ h (z) is the cylinder transformation , which is defined by

$$\displaystyle{ \xi _{h}(z) = \left \{\begin{array}{c} \frac{\log (1+hz)} {h},\quad h\neq 0,\\ z, \quad h = 0. \end{array} \right. }$$

If \(p \in \mathcal{R}\), then e p (t, s) is real-valued and nonzero on \(\mathbb{T}\). If \(p \in \mathcal{R}^{+},\) then \(e_{p}(t,t_{0})\) is always positive. Note that if \(\mathbb{T} = \mathbb{R}\), then

$$\displaystyle{ e_{a}(t,t_{0}) =\exp (\int _{t_{0}}^{t}a(s)ds), }$$

if \(\mathbb{T} = \mathbb{N}\), then

$$\displaystyle{ e_{a}(t,t_{0}) =\mathop{ \prod }\limits _{s=t_{0}}^{t-1}(1 + a(s)), }$$

and  if \(\mathbb{T} =q^{\mathbb{N}_{0}}\), then

$$\displaystyle{ e_{a}(t,t_{0}) =\mathop{ \prod }\limits _{s=t_{0}}^{t-1}(1 + (q - 1)sa(s)). }$$
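For \(\mathbb{T} = \mathbb{Z}\) the cylinder transformation collapses and the exponential function becomes the finite product displayed above. The Python sketch below (function names ours) builds this product and checks that it solves \(x^{\varDelta } = p(t)x\), \(x(t_{0}) = 1\):

```python
# Sketch (function names ours): on T = Z formula (1.1.5) reduces to the
# product e_p(t, t0) = prod_{s=t0}^{t-1} (1 + p(s)).  We check that it
# solves the initial value problem x^Delta = p(t) x, x(t0) = 1.

def exp_Z(p, t, t0):
    """e_p(t, t0) on T = Z for t >= t0."""
    val = 1.0
    for s in range(t0, t):
        val *= 1.0 + p(s)
    return val

p = lambda t: 0.5 / (1 + t)        # positively regressive on N_0
t0 = 0
assert exp_Z(p, t0, t0) == 1.0     # initial condition x(t0) = 1
for t in range(t0, 10):
    lhs = exp_Z(p, t + 1, t0) - exp_Z(p, t, t0)   # x^Delta(t) on Z
    assert abs(lhs - p(t) * exp_Z(p, t, t0)) < 1e-12
print("x^Delta = p x verified for t = 0..9")
```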

If \(p: \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is rd-continuous and regressive, then the exponential function \(e_{p}(t,t_{0})\) is for each fixed \(t_{0} \in \mathbb{T}^{\kappa }\) the unique solution of the initial value problem \(x^{\varDelta } = p(t)x,\;x(t_{0}) = 1,\) for all t ∈  \(\mathbb{T}\). We will use the following definition to present the properties of the exponential function e p (t, s). If p, \(q \in \mathcal{R}\), then we define \(\ominus p(t) = -p(t)/(1 +\mu (t)p(t))\) and \((p \oplus q)(t) = p(t) + q(t) +\mu (t)p(t)q(t)\) for all \(t \in \mathbb{T}^{\kappa }\). The following properties are proved in [51].

Theorem 1.1.3

If \(p,q \in \mathcal{R}\) and \(t_{0} \in \mathbb{T}\) , then

  • \(e_{p}(t,t) \equiv 1\quad \mbox{ and }\quad e_{0}(t,s) \equiv 1\) ;

  • \(e_{p}(\sigma (t),s) = (1 +\mu (t)p(t))e_{p}(t,s)\) ;

  • \(\frac{1} {e_{p}(t,s)} = e_{\ominus p}(t,s) = e_{p}(s,t)\) ;

  • \(\frac{e_{p}(t,s)} {e_{q}(t,s)} = e_{p\ominus q}(t,s)\) ;

  • e p (t,s)e q (t,s) = e p⊕q (t,s);

  • if \(p \in \mathcal{R}^{+}\) , then e p (t,t 0 ) > 0 for all \(t \in \mathbb{T}\) .

  • \(e_{p}^{\varDelta }(t,t_{0}) = p(t)e_{p}(t,t_{0})\) .

  • \(\left ( \frac{1} {e_{p}(\cdot,s)}\right )^{\varDelta } = - \frac{p(t)} {e_{p}^{\sigma }(\cdot,s)}\) .

Lemma 1.1.1

For a nonnegative \(\varphi\) with \(-\varphi \in \mathcal{R}^{+}\) , we have the inequalities

$$\displaystyle{ 1 -\int _{s}^{t}\varphi (u)\varDelta u \leq e_{ -\varphi }(t,s) \leq \exp \left \{-\int _{s}^{t}\varphi (u)\varDelta u\right \}\text{ for all }t \geq s. }$$

If \(\varphi\) is rd-continuous and nonnegative, then

$$\displaystyle{ 1 +\int _{ s}^{t}\varphi (u)\varDelta u \leq e_{\varphi }(t,s) \leq \exp \left \{\int _{ s}^{t}\varphi (u)\varDelta u\right \}\text{ for all }t \geq s. }$$
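The second chain of inequalities follows from \(1 + x \leq e^{x}\) applied factorwise. On \(\mathbb{T} = \mathbb{Z}\), where \(e_{\varphi }(t,s)\) is a product of factors \(1 +\varphi (u)\), it can be checked numerically (helper names ours):

```python
# Numerical check (helper names ours) of the second chain of inequalities on
# T = Z, where e_phi(t, s) is the product of the factors 1 + phi(u):
#   1 + sum phi(u)  <=  e_phi(t, s)  <=  exp(sum phi(u)),   t >= s.
import math

def e_phi(phi, t, s):
    """e_phi(t, s) on T = Z for t >= s."""
    val = 1.0
    for u in range(s, t):
        val *= 1.0 + phi(u)
    return val

phi = lambda u: 0.3 + 0.1 * u      # nonnegative on N_0
s = 0
for t in range(s, 8):
    total = sum(phi(u) for u in range(s, t))
    assert 1 + total <= e_phi(phi, t, s) <= math.exp(total) + 1e-12
print("bounds hold for t = 0..7")
```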

Remark 1.1.2

If \(\lambda \in \mathcal{R}^{+}\) and λ(r) < 0 for all \(r \in [s,t)_{\mathbb{T}}\) , then

$$\displaystyle{ 0 < e_{\lambda }(t,s) \leq \exp \left (\int _{s}^{t}\lambda (r)\varDelta r\right ) < 1. }$$

Theorem 1.1.4

If \(p \in \mathcal{R}\) and \(a,b,c \in \mathbb{T}\) , then

$$\displaystyle{ \int _{a}^{b}p(t)e_{ p}(c,\sigma (t))\varDelta t = -\int _{a}^{b}\left (e_{ p}(c,\cdot )\right )^{\varDelta }(t)\varDelta t = e_{ p}(c,a) - e_{p}(c,b). }$$

Theorem 1.1.5

If \(a,b \in \mathbb{T}\) and \(f \in C_{rd}(\mathbb{T}, \mathbb{R})\) is such that f(t) ≥ 0 for all a ≤ t < b, then

$$\displaystyle{ \int _{a}^{b}f(t)\varDelta t \geq 0. }$$

Lemma 1.1.2

Let \(v \in C_{rd}^{1}(\mathbb{T}, \mathbb{R})\) be strictly increasing and \(\tilde{\mathbb{T}} = v(\mathbb{T})\) be a time scale. If \(f \in C_{rd}(\mathbb{T}, \mathbb{R})\) , then for \(a,b \in \mathbb{T}\),

$$\displaystyle{ \int _{a}^{b}f(x)v^{\varDelta }(x)\varDelta x =\int _{ v(a)}^{v(b)}f(v^{-1}(y))\tilde{\varDelta }y. }$$
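On \(\mathbb{T} = \mathbb{Z}\) with the strictly increasing map \(v(t) = 2t\), the image time scale is \(2\mathbb{Z}\) with graininess 2, and the substitution formula becomes an identity between two finite sums; the sketch below (the particular f, v, a, b are our choices) confirms this:

```python
# Check (choices ours) of the substitution formula on T = Z with v(t) = 2t:
# the image time scale is 2Z with graininess 2 and v^Delta(x) = 2.

v = lambda t: 2 * t                # strictly increasing on Z
f = lambda x: x**2 + 1
a, b = 1, 6

# left side: int_a^b f(x) v^Delta(x) Delta x on Z
lhs = sum(f(x) * (v(x + 1) - v(x)) for x in range(a, b))
# right side: int_{v(a)}^{v(b)} f(v^{-1}(y)) Delta~ y on 2Z (graininess 2)
rhs = sum(f(y // 2) * 2 for y in range(v(a), v(b), 2))
assert lhs == rhs
print(lhs)   # 2 * (f(1) + ... + f(5)) = 120
```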

Throughout the book, we will use the following facts:

$$\displaystyle{ \int \limits _{t_{0}}^{\infty }\frac{\varDelta s} {s^{\nu }} = \infty,\text{ if }0 \leq \nu \leq 1,\text{ and }\int \limits _{t_{0}}^{\infty }\frac{\varDelta s} {s^{\nu }} < \infty,\text{ if }\nu > 1, }$$

and without loss of generality, we assume that \(\sup \mathbb{T} = \infty \), and define the time scale interval \([a,b]_{\mathbb{T}}\) by \([a,b]_{\mathbb{T}}:= [a,b] \cap \mathbb{T}\). The following results are adapted from [52].

Lemma 1.1.3

Let \(f: \mathbb{R} \rightarrow \mathbb{R}\)  be continuously differentiable and suppose \(g: \mathbb{T} \rightarrow \mathbb{R}\)  is delta differentiable. Then \(f \circ g: \mathbb{T} \rightarrow \mathbb{R}\)  is delta differentiable and

$$\displaystyle{ (f \circ g)^{\varDelta }(t) = f^{\prime}(g(\zeta ))g^{\varDelta }(t),\ \text{ for some } \ \ \zeta \in [t,\sigma (t)]. }$$
(1.1.6)

Lemma 1.1.4

Let \(f: \mathbb{R} \rightarrow \mathbb{R}\)  be continuously differentiable and suppose \(g: \mathbb{T} \rightarrow \mathbb{R}\)  is delta differentiable. Then \(f \circ g: \mathbb{T} \rightarrow \mathbb{R}\)  is delta differentiable and the formula

$$\displaystyle{ (f \circ g)^{\varDelta }(t) = \left \{\int _{0}^{1}f^{\prime}(g(t) + h\mu (t)g^{\varDelta }(t))dh\right \}g^{\varDelta }(t), }$$
(1.1.7)

holds.
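On \(\mathbb{T} = \mathbb{Z}\) (so \(\mu \equiv 1\)) formula (1.1.7) can be tested numerically by approximating the integral over h with a midpoint rule; the sketch below (the function names and the particular f, g are ours) does this for \(f(x) = x^{3}\) and \(g(t) = 2t + 1\):

```python
# Numerical check (names and the particular f, g ours) of formula (1.1.7) on
# T = Z, where mu(t) = 1 and g^Delta(t) = g(t+1) - g(t); the integral over h
# is approximated with a midpoint rule.

def chain_rule_rhs(fprime, g, t, n=100_000):
    """{ int_0^1 f'(g(t) + h * mu(t) * g^Delta(t)) dh } * g^Delta(t), mu = 1."""
    gD = g(t + 1) - g(t)
    integral = sum(fprime(g(t) + (k + 0.5) / n * gD) for k in range(n)) / n
    return integral * gD

f = lambda x: x**3
fprime = lambda x: 3 * x**2
g = lambda t: 2 * t + 1

t = 4
lhs = f(g(t + 1)) - f(g(t))               # (f o g)^Delta(t) on Z = 602
assert abs(lhs - chain_rule_rhs(fprime, g, t)) < 1e-6
print(lhs)
```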

Lemma 1.1.5

Assume the continuous mapping \(f: [r,s]_{\mathbb{T}} \rightarrow \mathbb{R}\) , r, \(s \in \mathbb{T}\) , satisfies f(r) < 0 < f(s). Then there is a \(\tau \in [r,s)_{\mathbb{T}}\)  with \(f(\tau )f(\sigma (\tau )) \leq 0\) .

Lemma 1.1.6

Let the mapping \(f: \mathbb{T} \rightarrow \mathbb{R},\) \(g: \mathbb{T} \rightarrow \mathbb{R}\)  be differentiable and assume that

$$\displaystyle{ \vert f^{\varDelta }(t)\vert \leq g^{\varDelta }(t). }$$

Then for r,  \(s \in \mathbb{T},\) r ≤ s,

$$\displaystyle{ \vert f(s) - f(r)\vert \leq g(s) - g(r). }$$

If \(g: \mathbb{T} \rightarrow \mathbb{R}\)  is differentiable and g Δ (t) ≥ 0, then g is nondecreasing.

Definition 1.1.1

We say a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is right-increasing (right-decreasing) at \(t_{0} \in \mathbb{T}^{k}\) provided that

  1. (i)

    if σ(t 0 ) > t 0 , then \(f(\sigma (t_{0})) > f(t_{0}),(f(\sigma (t_{0})) < f(t_{0})),\)

  2. (ii)

    if σ(t 0 ) = t 0 , then there is a neighborhood U of t 0 such that \(f(t) > f(t_{0}),(f(t) < f(t_{0})),\) for all t ∈ U, t > t 0 .

Definition 1.1.2

We say a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) assumes its local right-maximum (local right-minimum) at \(t_{0} \in \mathbb{T}^{k}\) provided that:

  1. (i)

    if σ(t 0 ) > t 0 , then \(f(\sigma (t_{0})) \leq f(t_{0}),(f(\sigma (t_{0})) \geq f(t_{0})),\)

  2. (ii)

    if σ(t 0 ) = t 0 , then there is a neighborhood U of t 0 such that \(f(t) \leq f(t_{0}),(f(t) \geq f(t_{0})),\) for all t ∈ U, t > t 0 .

Theorem 1.1.6

If \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and \(f^{\varDelta }(t_{0}) > 0,\) \((f^{\varDelta }(t_{0}) < 0),\) then f is right-increasing, (right-decreasing), at t 0 .

Theorem 1.1.7

If \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and if \(f^{\varDelta }(t_{0}) > 0\,(f^{\varDelta }(t_{0}) < 0),\) then f assumes a local right-minimum (local right-maximum), at t 0 .

Theorem 1.1.8

Suppose \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and assumes its local right-minimum (local right-maximum) at t 0 . Then \(f^{\varDelta }(t_{0}) \geq 0(f^{\varDelta }(t_{0}) \leq 0)\) .

Theorem 1.1.9

Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that is Δ-differentiable on [a,b) (the differentiability at a is understood as right-sided), and satisfies f(a) = f(b). Then there exist \(\zeta,\tau \in [a,b)_{\mathbb{T}}\) such that

$$\displaystyle{ f^{\varDelta }(\tau ) \leq 0 \leq f^{\varDelta }(\zeta ). }$$

Corollary 1.1.1

Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that is Δ-differentiable on [a,b). If f Δ (t) = 0 for all \(t \in [a,b)_{\mathbb{T}},\) then f is a constant function on \([a,b]_{\mathbb{T}}\) .

Corollary 1.1.2

Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that is Δ-differentiable on [a,b). Then f is increasing, decreasing, nondecreasing, and nonincreasing on \([a,b]_{\mathbb{T}}\) if \(f^{\varDelta }(t) > 0\), \(f^{\varDelta }(t) < 0\), f Δ (t) ≥ 0, and f Δ (t) ≤ 0 for all \(t \in [a,b)_{\mathbb{T}},\) respectively.

Theorem 1.1.10

Let f and g be continuous functions on [a,b] that are Δ-differentiable on \([a,b)_{\mathbb{T}}\) . Suppose g Δ (t) > 0 for all   \(t \in [a,b)\) . Then there exist ζ, \(\tau \in [a,b)_{\mathbb{T}}\) such that

$$\displaystyle{ \frac{f^{\varDelta }(\tau )} {g^{\varDelta }(\tau )} \leq \dfrac{f(b) - f(a)} {g(b) - g(a)} \leq \frac{f^{\varDelta }(\zeta )} {g^{\varDelta }(\zeta )}. }$$

1.2 Nabla Calculus

The corresponding theory for nabla derivatives has also been studied extensively. The results in this section are adapted from [27].

Let \(\mathbb{T}\) be a time scale. The backward jump operator \(\rho: \mathbb{T} \rightarrow \mathbb{T}\) is defined by \(\rho (t):=\sup \{ s \in \mathbb{T}: s < t\}\), and the mapping \(\nu: \mathbb{T} \rightarrow \mathbb{R}_{0}^{+}\) such that \(\nu (t) = t -\rho (t)\) is called the backward graininess. The function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is called nabla differentiable at \(t \in \mathbb{T}\), if there exists an \(a \in \mathbb{R}\) with the following property: for any ε > 0, there exists a neighborhood U of t such that

$$\displaystyle{ \left \vert f(\rho (t)) - f(s) - a[\rho (t) - s]\right \vert \leq \epsilon \left \vert \rho (t) - s\right \vert }$$

for all s ∈ U; we write \(a = f^{\nabla }(t)\). For \(\mathbb{T} = \mathbb{R}\), we have \(f^{\nabla }(t) = f^{{\prime}}(t)\), and for \(\mathbb{T} = \mathbb{Z}\), we have the backward difference operator \(f^{\nabla }(t) = \nabla f(t) = f(t) - f(t - 1)\).

A function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is left-dense continuous or ld-continuous provided it is continuous at left-dense points in \(\mathbb{T}\) and its right-sided limits exist (finite) at right-dense points in \(\mathbb{T}\). If \(\mathbb{T} = \mathbb{R}\), then f is ld-continuous if and only if f is continuous. If \(\mathbb{T} = \mathbb{Z}\), then any function is ld-continuous. The following theorem gives several properties of the nabla derivative.

Theorem 1.2.1

Assume \(f: \mathbb{T} \rightarrow \mathbb{R}\) is a function and let \(t \in \mathbb{T}\) . Then we have the following:

  1. 1.

    If f is nabla differentiable at t, then f is continuous at t.

  2. 2.

    If f is continuous at t and t is left scattered, then f is nabla differentiable at t with

    $$\displaystyle{ f^{\nabla }(t) = \frac{f(t) - f(\rho (t))} {\nu (t)}. }$$
  3. 3.

If t is left-dense, then f is nabla differentiable at t if and only if the limit \(\lim _{s\rightarrow t}\frac{f(t)-f(s)} {t-s}\) exists as a finite number, and in this case

    $$\displaystyle{ f^{\nabla }(t) =\lim _{ s\rightarrow t}\frac{f(t) - f(s)} {t - s}. }$$
  4. 4.

    If f is nabla differentiable at t, then \(f(\rho (t))=f(t)-\nu (t)f^{\nabla }(t)\) .

Theorem 1.2.2

Assume f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) are nabla differentiable at \(t \in \mathbb{T}\) . Then:

  1. (i)

    The product \(fg: \mathbb{T} \rightarrow \mathbb{R}\) is nabla differentiable at t, and we get the product rule

    $$\displaystyle{ (fg)^{\nabla }(t) = f^{\nabla }(t)g(t) + f^{\rho }(t)g^{\nabla }(t) = f(t)g^{\nabla }(t) + f^{\nabla }(t)g^{\rho }(t). }$$
  2. (ii)

    If g(t)g ρ (t) ≠ 0, then f∕g is nabla differentiable at t, and we get the quotient rule

    $$\displaystyle{ \left (\frac{f} {g}\right )^{\nabla }(t) = \frac{f^{\nabla }(t)g(t) - f(t)g^{\nabla }(t)} {g(t)g^{\rho }(t)}. }$$
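On \(\mathbb{T} = \mathbb{Z}\), where \(\rho (t) = t - 1\) and \(f^{\nabla }(t) = f(t) - f(t - 1)\), the product rule can be verified exactly; the Python sketch below (names ours) does so for a pair of polynomials:

```python
# Exact check (names ours) of the nabla product rule on T = Z, where
# rho(t) = t - 1 and f^nabla(t) = f(t) - f(t - 1).

f = lambda t: t**2 + 1
g = lambda t: 3 * t - 2
nab = lambda F, t: F(t) - F(t - 1)        # nabla derivative on Z

for t in range(-4, 5):
    lhs = nab(lambda s: f(s) * g(s), t)
    rhs = nab(f, t) * g(t) + f(t - 1) * nab(g, t)   # f^nabla g + f^rho g^nabla
    assert lhs == rhs
print("nabla product rule verified on Z")
```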

A function \(F: \mathbb{T} \rightarrow \mathbb{R}\) is called a nabla antiderivative of \(f: \mathbb{T} \rightarrow \mathbb{R}\) provided \(F^{\nabla }(t) = f(t)\) holds for all \(t \in \mathbb{T}\). We then define the nabla integral of f by

$$\displaystyle{ \int _{a}^{t}f(s)\nabla s = F(t) - F(a),\text{ for all }\ t \in \mathbb{T}. }$$

If f and its nabla derivative \(f^{\nabla }\) (with respect to the first variable) are continuous, then

$$\displaystyle{ \left (\int _{a}^{t}f(t,s)\nabla s\right )^{\nabla } = f(\rho (t),t) +\int _{ a}^{t}f^{\nabla }(t,s)\nabla s. }$$

One can easily see that every ld-continuous function has a nabla antiderivative . As in the case of the delta derivative we see that if \(f: \mathbb{T} \rightarrow \mathbb{R}\) is ld-continuous and \(t \in \mathbb{T}\), then

$$\displaystyle{ \int _{\rho (t)}^{t}f(s)\nabla s =\nu (t)f(t). }$$

Theorem 1.2.3

If \(a,b,c \in \mathbb{T}\) and \(\alpha,\beta \in \mathbb{R}\) , and \(f,g: \mathbb{T} \rightarrow \mathbb{R}\) are ld-continuous, then

  1. 1.

    \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\nabla t =\alpha \int \nolimits _{ a}^{b}f(t)\nabla t +\beta \int \nolimits _{ a}^{b}g(t)\nabla t\),

  2. 2.

    \(\int \nolimits _{a}^{b}f(t)\nabla t = -\int \nolimits _{b}^{a}f(t)\nabla t\),

  3. 3.

    \(\int \nolimits _{a}^{c}f(t)\nabla t =\int \nolimits _{ a}^{b}f(t)\nabla t +\int \nolimits _{ b}^{c}f(t)\nabla t\),

  4. 4.

    \(\left \vert \int _{a}^{b}f(t)\nabla t\right \vert \leq \int _{a}^{b}\left \vert f(t)\right \vert \nabla t,\)

  5. 5.

    \(\int _{a}^{b}f(t)g^{\nabla }(t)\nabla t = \left.f(t)g(t)\right \vert _{a}^{b} -\int _{a}^{b}f^{\nabla }(t)g^{\rho }(t)\nabla t,\)

  6. 6.

    \(\int _{a}^{b}f^{\rho }(t)g^{\nabla }(t)\nabla t = \left.f(t)g(t)\right \vert _{a}^{b} -\int _{a}^{b}f^{\nabla }(t)g(t)\nabla t\) .

The relations between delta and nabla derivatives can be summarized as follows. Assume that \(f: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable on \(\mathbb{T}^{k}\). Then f is nabla differentiable at t and

$$\displaystyle{ f^{\nabla }(t) = f^{\varDelta }(\rho (t)), }$$

for \(t \in \mathbb{T}_{k}\) such that σ(ρ(t)) = t. If, in addition, \(f^{\varDelta }\) is continuous on \(\mathbb{T}^{k}\), then f is nabla differentiable at t and \(f^{\nabla }(t) = f^{\varDelta }(\rho (t))\) holds for any \(t \in \mathbb{T}_{k}\). Assume that \(f: \mathbb{T} \rightarrow \mathbb{R}\) is nabla differentiable on \(\mathbb{T}_{k}\). Then f is delta differentiable at t and

$$\displaystyle{ f^{\varDelta }(t) = f^{\nabla }(\sigma (t)), }$$

for \(t \in \mathbb{T}^{k}\) such that ρ(σ(t)) = t. If, in addition, \(f^{\nabla }\) is continuous on \(\mathbb{T}_{k}\), then f is delta differentiable at t and \(f^{\varDelta }(t) = f^{\nabla }(\sigma (t))\) holds for any \(t \in \mathbb{T}^{k}\).
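Both relations are transparent on \(\mathbb{T} = \mathbb{Z}\), where every point is isolated and \(\sigma (\rho (t)) =\rho (\sigma (t)) = t\); the sketch below (names ours) checks them exactly:

```python
# Exact check (names ours) of the delta-nabla relations on T = Z, where every
# point is isolated and sigma(rho(t)) = rho(sigma(t)) = t.

f = lambda t: t**3 - t
dD = lambda t: f(t + 1) - f(t)    # f^Delta on Z
dN = lambda t: f(t) - f(t - 1)    # f^nabla on Z

for t in range(-5, 6):
    assert dN(t) == dD(t - 1)     # f^nabla(t) = f^Delta(rho(t))
    assert dD(t) == dN(t + 1)     # f^Delta(t) = f^nabla(sigma(t))
print("delta-nabla relations verified on Z")
```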

We now give the definition of the generalized nabla exponential function. Assume that \(p: \mathbb{T} \rightarrow \mathbb{R}\) is ld-continuous and ν-regressive, i.e., 1 − p(t)ν(t) ≠ 0 for \(t \in \mathbb{T}_{k}\). We denote by \(\mathcal{R}_{\nu }\) the set of all ν-regressive and ld-continuous functions, and we define the set \(\mathcal{R}_{\nu }^{+}\) of all positively regressive elements of \(\mathcal{R}_{\nu }\) by \(\mathcal{R}_{\nu }^{+} =\{ p \in \mathcal{R}_{\nu }: 1 -\nu (t)p(t) > 0,\) for all \(t \in \mathbb{T}\}\). The set of all ν-regressive functions on a time scale \(\mathbb{T}\) forms an Abelian group under the addition \(\oplus _{\nu }\) defined by \(p \oplus _{\nu }q:= p + q -\nu pq\). The explicit nabla exponential function is given by

$$\displaystyle{ \check{e}_{p}(t,s) =\exp \left (\int _{s}^{t}\overline{\xi }_{ \nu (\tau )}(p(\tau ))\nabla \tau \right ),\ \text{ for }\ t \in \mathbb{T},\text{ }s \in \mathbb{T}_{k}, }$$
(1.2.1)

where \(\overline{\xi }_{h}(z)\) is the cylinder transformation , which is defined by

$$\displaystyle{ \overline{\xi }_{h}(z) = \left \{\begin{array}{ll} -\frac{\log (1-hz)} {h},&\quad h\neq 0,\\ z, &\quad h = 0. \end{array} \right. }$$

For \(t \in \mathbb{T}\), \(s \in \mathbb{T}_{k},\) the exponential function \(\check{e}_{p}(t,s)\) is the solution of the initial value problem

$$\displaystyle{ x^{\nabla }(t) = p(t)x(t), \ \ t \in \mathbb{T}_{ k}\text{ with }x(s) = 1. }$$

The following theorem gives the properties of the exponential function \(\check{e}_{p}(t,s)\). The theorem is adapted from Bohner and Peterson [52].

Theorem 1.2.4

If \(p,q \in \mathcal{R}_{\nu }\) and \(t_{0} \in \mathbb{T}\) , then

  • \(\check{e}_{p}(t,t) \equiv 1,\quad \mbox{ and }\quad \check{e}_{0}(t,s) \equiv 1\) ;

  • \(\check{e}_{p}(\rho (t),s) = (1 -\nu (t)p(t))\check{e}_{p}(t,s)\) ;

  • \(\frac{1} {\check{e}_{p}(t,s)} =\check{ e}_{\ominus \nu p}(t,s)\) ;

  • \(\frac{\check{e}_{p}(t,s)} {\check{e}_{q}(t,s)} =\check{ e}_{p\ominus \nu q}(t,s)\) ;

  • \(\check{e}_{p}(t,s)\check{e}_{q}(t,s) =\check{ e}_{p\oplus \nu q}(t,s)\) ;

  • if \(p \in \mathcal{R}_{\nu }^{+},\) then \(\check{e}_{p}(t,t_{0}) > 0\) for all \(t \in \mathbb{T}\) .

  • \(\check{e}_{p}^{\nabla }(t,t_{0}) = p(t)\check{e}_{p}(t,t_{0})\) .

  • \(\left ( \frac{1} {\check{e}_{p}(\cdot,s)}\right )^{\nabla } = - \frac{p(t)} {\check{e}_{p}^{\rho }(\cdot,s)}\) .

1.3 Diamond-α Calculus

Now we introduce the diamond-α dynamic derivative and diamond-α dynamic integration. The comprehensive development of the calculus of the diamond-α derivative and diamond-α integration is given in [140]. Let \(\mathbb{T}\) be a time scale and f(t) be differentiable on \(\mathbb{T}\) in the Δ and ∇ sense. For \(t \in \mathbb{T}\), we define the diamond-α derivative \(f^{\diamond _{\alpha }}(t)\) by

$$\displaystyle{ f^{\diamond _{\alpha }}(t) =\alpha f^{\varDelta }(t) + (1-\alpha )f^{\nabla }(t), \ \ 0 \leq \alpha \leq 1. }$$

Thus f is diamond-α differentiable if and only if f is Δ and ∇ differentiable. The diamond-α derivative reduces to the standard Δ-derivative for α = 1, or the standard ∇ derivative for α = 0. It represents a weighted dynamic derivative for α ∈ (0, 1).
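On \(\mathbb{T} = \mathbb{Z}\) the diamond-α derivative is simply the convex combination of the forward and backward differences; the sketch below (names ours) illustrates this, with \(\alpha = 1/2\) giving the symmetric (central) difference:

```python
# Sketch (names ours) of the diamond-alpha derivative on T = Z: a convex
# combination of the forward and backward differences.

def diamond_alpha(f, t, alpha):
    delta = f(t + 1) - f(t)       # f^Delta(t) on Z
    nabla = f(t) - f(t - 1)       # f^nabla(t) on Z
    return alpha * delta + (1 - alpha) * nabla

f = lambda t: t**2
print(diamond_alpha(f, 3, 1.0))   # delta derivative: 16 - 9 = 7.0
print(diamond_alpha(f, 3, 0.0))   # nabla derivative:  9 - 4 = 5.0
print(diamond_alpha(f, 3, 0.5))   # symmetric difference (f(4)-f(2))/2 = 6.0
```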

Theorem 1.3.1

Let f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) be diamond-α differentiable at \(t \in \mathbb{T}\) . Then

  1. (i).

    \(f + g: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\) , with

    $$\displaystyle{ (f + g)^{\diamond _{\alpha }}(t) = f^{\diamond _{\alpha }}(t) + g^{\diamond _{\alpha }}(t). }$$
  2. (ii).

    \(fg: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\) , with

    $$\displaystyle{ (fg)^{\diamond _{\alpha }}(t) = f^{\diamond _{\alpha }}(t)g(t) +\alpha f^{\sigma }(t)g^{\varDelta }(t) + (1-\alpha )f^{\rho }(t)g^{\nabla }(t). }$$
  3. (iii).

    For \(g(t)g^{\sigma }(t)g^{\rho }(t)\neq 0\), \(f/g: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\) , with

    $$\displaystyle{ \left (\frac{f} {g}\right )^{\diamond _{\alpha }}(t) = \frac{f^{\diamond _{\alpha }}(t)g^{\sigma }(t)g^{\rho }(t) -\alpha f^{\sigma }(t)g^{\rho }(t)g^{\varDelta }(t) - (1-\alpha )f^{\rho }(t)g^{\sigma }(t)g^{\nabla }(t)} {g(t)g^{\sigma }(t)g^{\rho }(t)}. }$$

Theorem 1.3.2

Let f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) be diamond-α differentiable at \(t \in \mathbb{T}\) . Then the following hold:

  1. (i).

    \((f)^{\diamond _{\alpha }\varDelta }(t) =\alpha f^{\varDelta \varDelta }(t) + (1-\alpha )f^{\nabla \varDelta }(t),\)

  2. (ii).

    \((f)^{\diamond _{\alpha }\nabla }(t) =\alpha f^{\varDelta \nabla }(t) + (1-\alpha )f^{\nabla \nabla }(t),\)

  3. (iii).

    \((f)^{\varDelta \diamond _{\alpha }}(t) =\alpha f^{\varDelta \varDelta }(t) + (1-\alpha )f^{\varDelta \nabla }(t)\neq (f)^{\diamond _{\alpha }\varDelta }(t),\)

  4. (iv).

    \((f)^{\nabla \diamond _{\alpha }}(t) =\alpha f^{\nabla \varDelta }(t) + (1-\alpha )f^{\nabla \nabla }(t)\neq (f)^{\diamond _{\alpha }\nabla }(t),\)

  5. (v).

    \((f)^{\diamond _{\alpha }\diamond _{\alpha }}(t) =\alpha ^{2}f^{\varDelta \varDelta }(t) +\alpha (1-\alpha )[f^{\varDelta \nabla }(t) + f^{\nabla \varDelta }(t)]\)

    \(+(1-\alpha )^{2}f^{\nabla \nabla }(t)\neq \alpha ^{2}f^{\varDelta \varDelta }(t) + (1-\alpha )^{2}f^{\nabla \nabla }(t)\) .

Theorem 1.3.3

(Mean Value Theorem) . Suppose that f is a continuous function on \([a,b]_{\mathbb{T}}\) and has a diamond-α derivative at each point of \([a,b)_{\mathbb{T}}\) . Then there exist points η, \(\eta ^{\prime} \in [a,b)_{\mathbb{T}}\) such that

$$\displaystyle{ f^{\diamond _{\alpha }}(\eta ^{\prime})(b - a) \leq f(b) - f(a) \leq f^{\diamond _{\alpha }}(\eta )(b - a). }$$

When f(a) = f(b), we have that

$$\displaystyle{ f^{\diamond _{\alpha }}(\eta ^{\prime}) \leq 0 \leq f^{\diamond _{\alpha }}(\eta ). }$$

Corollary 1.3.1

Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that has a diamond-α derivative at each point of \([a,b)_{\mathbb{T}}\) . Then f is increasing if \(f^{\diamond _{\alpha }}(t) > 0\) , decreasing if \(f^{\diamond _{\alpha }}(t) < 0\), nonincreasing if \(f^{\diamond _{\alpha }}(t) \leq 0\), and nondecreasing if \(f^{\diamond _{\alpha }}(t) \geq 0\) on \([a,b]_{\mathbb{T}}\) .

Theorem 1.3.4

Let a, t \(\in \mathbb{T}\) , and \(h: \mathbb{T} \rightarrow \mathbb{R}\) . Then, the diamond-α integral from a to t of h is defined by

$$\displaystyle{ \int _{a}^{t}h(s)\diamond _{\alpha }s =\alpha \int _{ a}^{t}h(s)\varDelta s + (1-\alpha )\int _{ a}^{t}h(s)\nabla s, \ 0 \leq \alpha \leq 1, }$$

provided that the delta and nabla integrals of h exist on \(\mathbb{T}\) .

In general, we do not have

$$\displaystyle{ \left (\int _{a}^{t}h(s)\diamond _{\alpha }s\right )^{\diamond _{\alpha }} = h(t)\text{, for }t \in \mathbb{T}. }$$

Example 1.3.1 ([31])

Let \(\mathbb{T} =\{ 0,1,2\}\) , a = 0 and \(h(t) = t^{2}\) for \(t \in \mathbb{T}\) . This gives us that

$$\displaystyle{ \left.\left (\int _{a}^{t}h(s)\diamond _{\alpha }s\right )^{\diamond _{\alpha }}\right \vert _{ t=1} = 1 + 2\alpha (1-\alpha ), }$$

so that the equality above holds only when α = 1 (i.e., ♢ α = Δ) or α = 0 (i.e., ♢ α = ∇).
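The arithmetic in this example is easy to reproduce. The sketch below (illustrative helper names of my choosing, exact rational arithmetic via `fractions`) builds the delta, nabla, and combined integrals on \(\mathbb{T} =\{ 0,1,2\}\) and evaluates the ♢ α -derivative of the integral at t = 1:

```python
from fractions import Fraction

# Time scale T = {0, 1, 2}, h(t) = t^2, a = 0 (Example 1.3.1).
T = [0, 1, 2]
h = lambda t: t * t

def delta_int(f, ts, a, b):
    """Delta integral on a discrete time scale: sum of mu(t)*f(t) over [a, b)."""
    pts = [t for t in ts if a <= t < b]
    return sum((ts[ts.index(t) + 1] - t) * f(t) for t in pts)

def nabla_int(f, ts, a, b):
    """Nabla integral on a discrete time scale: sum of nu(t)*f(t) over (a, b]."""
    pts = [t for t in ts if a < t <= b]
    return sum((t - ts[ts.index(t) - 1]) * f(t) for t in pts)

def diamond_int(f, ts, a, b, alpha):
    return alpha * delta_int(f, ts, a, b) + (1 - alpha) * nabla_int(f, ts, a, b)

def diamond_deriv_at_1(alpha):
    """Diamond-alpha derivative, at t = 1, of F(t) = integral of h from 0 to t."""
    F = lambda t: diamond_int(h, T, 0, t, alpha)
    delta = F(2) - F(1)   # F^Delta(1), since mu(1) = 1
    nabla = F(1) - F(0)   # F^nabla(1), since nu(1) = 1
    return alpha * delta + (1 - alpha) * nabla

for alpha in [Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(1)]:
    assert diamond_deriv_at_1(alpha) == 1 + 2 * alpha * (1 - alpha)
```

The assertions confirm the value 1 + 2α(1 − α), which differs from h(1) = 1 unless α ∈ {0, 1}.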

Theorem 1.3.5

Let \(a,b,c \in \mathbb{T}\), \(\alpha,\beta \in \mathbb{R}\) , and f and g be continuous functions on \([a,b]_{\mathbb{T}}\) . Then the following properties hold:

  1. (1).

    \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\diamond _{\alpha }t =\alpha \int \nolimits _{ a}^{b}f(t)\diamond _{\alpha }t +\beta \int \nolimits _{ a}^{b}g(t)\diamond _{\alpha }t\),

  2. (2).

    \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t = -\int \nolimits _{b}^{a}f(t)\diamond _{\alpha }t\),

  3. (3).

    \(\int \nolimits _{a}^{c}f(t)\diamond _{\alpha }t =\int \nolimits _{ a}^{b}f(t)\diamond _{\alpha }t +\int \nolimits _{ b}^{c}f(t)\diamond _{\alpha }t\) .

Example 1.3.2

If we let \(\mathbb{T} = \mathbb{R}\) , then we obtain

$$\displaystyle{ \int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t =\int \nolimits _{ a}^{b}f(t)dt, \ \ \text{where }a,\text{ }b \in \mathbb{R}, }$$

and if we let \(\mathbb{T} = \mathbb{Z}\) , and m < n, then we obtain

$$\displaystyle{ \int \nolimits _{m}^{n}f(t)\diamond _{\alpha }t =\sum \limits _{ i=m}^{n-1}\left [\alpha f(i) + (1-\alpha )f(i + 1)\right ], \ \ \text{for }m,\text{ }n \in \mathbb{N}_{ 0}. }$$
(1.3.1)

Example 1.3.3

If we let \(\mathbb{T} = q^{\mathbb{N}}\) , for q > 1 and m < n, then we obtain

$$\displaystyle{ \int \nolimits _{q^{m}}^{q^{n} }f(t)\diamond _{\alpha }t = (q-1)\sum \limits _{i=m}^{n-1}q^{i}\left [\alpha f(q^{i})+(1-\alpha )f(q^{i+1})\right ]\ \text{, for }m,\text{ }n \in \mathbb{N}_{ 0}, }$$
(1.3.2)

and if we let \(\mathbb{T} =\{ t_{i}: i \in \mathbb{N}_{0}\}\) such that t i < t i+1 and m < n, then we obtain the general case (which includes (1.3.1) and (1.3.2))

$$\displaystyle{ \int \nolimits _{t_{m}}^{t_{n} }f(t)\diamond _{\alpha }t =\sum \limits _{ i=m}^{n-1}(t_{ i+1} - t_{i})\left [\alpha f(t_{i}) + (1-\alpha )f(t_{i+1})\right ]\text{, for }m,\text{ }n \in \mathbb{N}_{0}. }$$
(1.3.3)
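Formula (1.3.3) is straightforward to implement for any finite increasing sequence of points; a quick numerical check (the function name is mine) confirms that it reduces to (1.3.1) when t i = i and to (1.3.2) when t i = q i :

```python
from fractions import Fraction

def diamond_int_points(f, pts, alpha):
    """Formula (1.3.3): diamond-alpha integral of f from t_m to t_n over the
    discrete time scale {t_i}, with t_i < t_{i+1}."""
    return sum((pts[i + 1] - pts[i]) * (alpha * f(pts[i]) + (1 - alpha) * f(pts[i + 1]))
               for i in range(len(pts) - 1))

alpha = Fraction(1, 3)
f = lambda t: t * t

# t_i = i recovers the T = Z formula (1.3.1).
zs = list(range(0, 5))
assert diamond_int_points(f, zs, alpha) == \
    sum(alpha * f(i) + (1 - alpha) * f(i + 1) for i in range(0, 4))

# t_i = q^i recovers the q-time-scale formula (1.3.2).
q = 2
qs = [q ** i for i in range(0, 5)]
assert diamond_int_points(f, qs, alpha) == \
    (q - 1) * sum(q ** i * (alpha * f(q ** i) + (1 - alpha) * f(q ** (i + 1)))
                  for i in range(0, 4))
```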

Remark 1.3.1

Note that if f(t) ≥ 0 for all t ∈ \([a,b]_{\mathbb{T}}\) , then \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t \geq 0\) . If f(t) ≥ g(t) ≥ 0 for all t ∈ \([a,b]_{\mathbb{T}}\) , then \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t \geq \int \nolimits _{a}^{b}g(t)\diamond _{\alpha }t \geq 0\) , and, for continuous f ≥ 0, f(t) = 0 for all t ∈ \([a,b]_{\mathbb{T}}\) if and only if \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t = 0\) .

Corollary 1.3.2

Let \(t \in \mathbb{T}_{\kappa }^{\kappa }\) and \(f: \mathbb{T} \rightarrow \mathbb{R}\) . Then

$$\displaystyle{ \int _{t}^{\sigma (t)}f(s)\diamond _{\alpha }s =\mu (t)[\alpha f(t) + (1-\alpha )f^{\sigma }(t)], }$$

and

$$\displaystyle{ \int _{\rho (t)}^{t}f(s)\diamond _{\alpha }s =\nu (t)[\alpha f^{\rho }(t) + (1-\alpha )f(t)]. }$$

Recall that a function p: \(\mathbb{T} \rightarrow \mathbb{R}\) is called regressive provided 1 +μ(t)p(t) ≠ 0 for all \(t \in \mathbb{T}^{k}\); \(\mathcal{R}\) denotes the set of all regressive and rd-continuous functions on \(\mathbb{T}\). Similarly, a function \(q: \mathbb{T} \rightarrow \mathbb{R}\) is called ν-regressive provided 1 −ν(t)q(t) ≠ 0 for all \(t \in \mathbb{T}_{k}\); \(\mathcal{R}_{\nu }\) denotes the set of all ν-regressive and ld-continuous functions on \(\mathbb{T}\). For \(p \in \mathcal{R}\cap \mathcal{R}_{\nu }\) and \(\alpha \in \left [0,1\right ]\), we define the combined exponential function

$$\displaystyle{ E_{p,\alpha }(t,t_{0}) =\alpha e_{p}(t,t_{0}) + (1-\alpha )\check{e}_{p}(t,t_{0}), \ \ \text{for }t \in \mathbb{T}, }$$

where e p (. , t 0) and \(\check{e}_{p}(.,t_{0})\) are the delta and nabla exponential functions defined as in (1.1.5) and (1.2.1), respectively.

Example 1.3.4 ([31])

Consider the time scale \(\mathbb{T} = \mathbb{Z}\) and the constant function \(p(t) = 1/2\) . Take t 0 = 0. Then \(e_{p}(t,0) = (3/2)^{t}\) is the solution of the initial value problem \(y^{\varDelta }(t) = (1/2)y(t),\) y(t 0 ) = 1. Moreover, \(\check{e}_{p}(t,0) = 2^{t}\) is the unique solution of \(y^{\nabla }(t) = (1/2)y(t)\) , y(t 0 ) = 1. Now \(E_{p,\alpha }(t,0) =\alpha (3/2)^{t} + (1-\alpha )2^{t}\) , for \(t \in \mathbb{Z}\) .
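A short script (illustrative names only, exact arithmetic via `fractions`) verifies both initial value problems and assembles the combined exponential of this example:

```python
from fractions import Fraction

half = Fraction(1, 2)

def e_delta(t):   # delta exponential e_p(t, 0) = (3/2)^t on T = Z with p = 1/2
    return Fraction(3, 2) ** t

def e_nabla(t):   # nabla exponential: 2^t
    return Fraction(2) ** t

for t in range(0, 6):
    # y^Delta(t) = y(t+1) - y(t) = (1/2) y(t)
    assert e_delta(t + 1) - e_delta(t) == half * e_delta(t)
    # y^nabla(t+1) = y(t+1) - y(t) = (1/2) y(t+1)
    assert e_nabla(t + 1) - e_nabla(t) == half * e_nabla(t + 1)

def E(alpha, t):
    """Combined exponential E_{p,alpha}(t, 0) = alpha (3/2)^t + (1-alpha) 2^t."""
    return alpha * e_delta(t) + (1 - alpha) * e_nabla(t)

assert E(Fraction(1), 3) == Fraction(27, 8) and E(Fraction(0), 3) == 8
```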

Remark 1.3.2

The combined exponential \(E_{p,\alpha }\) cannot really be called an exponential function: it fails the most important property of an exponential function, namely, in general it is not the solution of an appropriate initial value problem.

Next we give direct formulas for the ♢  α -derivatives of the exponential functions \(e_{p}(.,t_{0})\) and \(\check{e}_{p}(.,t_{0})\).

Theorem 1.3.6

Let \(\mathbb{T}\) be a regular time scale. Assume that t, \(t_{0} \in \mathbb{T}\) and \(p \in \mathcal{R}\cap \mathcal{R}_{\nu }\) . Then

$$\displaystyle{ e_{p}^{\diamond _{\alpha }}(t,t_{ 0}) = \left [\alpha p(t) + \frac{\left (1-\alpha \right )p^{\rho }(t)} {1 +\nu (t)p^{\rho }(t)}\right ]e_{p}(t,t_{0}), }$$
$$\displaystyle{ \check{e}_{p}^{\diamond _{\alpha }}(t,t_{ 0}) = \left [ \frac{\alpha p^{\sigma }(t)} {1 -\mu (t)p^{\sigma }(t)} + \left (1-\alpha \right )p(t)\right ]\check{e}_{p}(t,t_{0}), }$$

where e p (.,t 0 ) is a solution of the initial value problem \(y^{\diamond _{\alpha }}(t) = q(t)y(t)\) , y(t 0 ) = 1, where \(q(t) =\alpha p(t) + \frac{\left (1-\alpha \right )p^{\rho }(t)} {1+\nu (t)p^{\rho }(t)}\) .

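The first formula of Theorem 1.3.6 can be checked numerically on \(\mathbb{T} = \mathbb{Z}\), where μ = ν = 1 and ρ(t) = t − 1. The sketch below (helper names are mine) compares the directly computed ♢ α -derivative of \(e_{p}(t,0) = (3/2)^{t}\), for constant p = 1∕2, with the coefficient given by the theorem:

```python
from fractions import Fraction

# T = Z, constant p = 1/2, e_p(t, 0) = (3/2)^t; mu = nu = 1 and rho(t) = t - 1,
# so p^rho(t) = p and the theorem's coefficient is alpha*p + (1-alpha)*p/(1+p).
p = Fraction(1, 2)
e = lambda t: Fraction(3, 2) ** t

for t in range(1, 6):
    for alpha in [Fraction(0), Fraction(1, 3), Fraction(1)]:
        # e_p^{diamond_alpha}(t) = alpha * e^Delta(t) + (1 - alpha) * e^nabla(t)
        lhs = alpha * (e(t + 1) - e(t)) + (1 - alpha) * (e(t) - e(t - 1))
        rhs = (alpha * p + (1 - alpha) * p / (1 + p)) * e(t)
        assert lhs == rhs
```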
1.4 Taylor Monomials and Series

Here we define Taylor monomials and Taylor expansions of functions corresponding to delta and nabla derivatives. To define these functions, we need some basic definitions from the calculus of functions of two variables on time scales. Let \(\mathbb{T}_{1}\) and \(\mathbb{T}_{2}\) be two time scales with at least two points and consider the time scale intervals \(\varOmega _{1} = [t_{0},\infty ) \cap \mathbb{T}_{1}\) and \(\varOmega _{2} = [s_{0},\infty ) \cap \mathbb{T}_{2}\) for \(t_{0} \in \mathbb{T}_{1}\) and \(s_{0} \in \mathbb{T}_{2}\). Let σ 1, ρ 1, Δ 1 and σ 2, ρ 2, Δ 2 denote the forward jump operators, backward jump operators, and delta differentiation operators, respectively, on \(\mathbb{T}_{1}\) and \(\mathbb{T}_{2}\). We say that a real-valued function f on \(\mathbb{T}_{1} \times \mathbb{T}_{2}\) has a Δ 1 partial derivative \(f^{\varDelta _{1}}(t,s)\) with respect to t at \((t,s) \in \varOmega \equiv \varOmega _{1} \times \varOmega _{2}\) if for each ε > 0 there exists a neighborhood U t of t such that

$$\displaystyle{ \vert f(\sigma _{1}(t),s)-f(\eta,s)-f^{\varDelta _{1}}(t,s)[\sigma _{1}(t)-\eta ]\vert \leq \epsilon \vert \sigma _{1}(t)-\eta \vert,\ \mbox{ for all}\ \eta \in U_{t}. }$$

In this case, we say \(f^{\varDelta _{1}}(t,s)\) is the (partial delta) derivative of f(t, s) at t. We say that a real valued function f on \(\mathbb{T}_{1} \times \mathbb{T}_{2}\) at (t, s) ∈ Ω 1 ×Ω 2 has a Δ 2 partial derivative \(f^{\varDelta _{2}}(t,s)\) with respect to s if for each ε > 0 there exists a neighborhood U s of s such that

$$\displaystyle{ \vert f(t,\sigma _{2}(s))-f(t,\xi )-f^{\varDelta _{2}}(t,s)[\sigma _{2}(s)-\xi ]\vert \leq \epsilon \vert \sigma _{2}(s)-\xi \vert,\ \mbox{ for all}\ \xi \in U_{s}. }$$

In this case, we say \(f^{\varDelta _{2}}(t,s)\) is the (partial delta) derivative of f(t, s) at s. The function f is called rd-continuous in t if for every \(\alpha _{2} \in \mathbb{T}_{2}\) the function f(t, α 2) is rd-continuous on \(\mathbb{T}_{1}\). The function f is called rd-continuous in s if for every \(\alpha _{1} \in \mathbb{T}_{1}\) the function f(α 1, s) is rd-continuous on \(\mathbb{T}_{2}\).

Theorem 1.4.1

Let \(t_{0} \in \mathbb{T}^{\kappa }\) and assume \(k: \mathbb{T} \times \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is continuous at (t,t), where \(t \in \mathbb{T}^{\kappa }\) with t > t 0 . Also assume that k(t,⋅) is rd-continuous on [t 0 ,σ(t)]. Suppose that for each ε > 0 there exists a neighborhood U of t, independent of \(\tau \in [t_{0},\sigma (t)]\) , such that

$$\displaystyle{ \vert k(\sigma (t),\tau )-k(s,\tau )-k^{\varDelta }(t,\tau )(\sigma (t)-s)\vert \leq \epsilon \vert \sigma (t) - s\vert,\ \mbox{ for all}\ s \in U, }$$

where k Δ denotes the derivative of k with respect to the first variable. Then

$$\displaystyle{ g(t):=\int _{ t_{0}}^{t}k(t,\tau )\varDelta \tau,\quad \mbox{ implies }\quad g^{\varDelta }(t) =\int _{ t_{0}}^{t}k^{\varDelta }(t,\tau )\varDelta \tau + k(\sigma (t),t). }$$

The Taylor monomials \(h_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), are defined recursively as follows. The function h 0 is defined by

$$\displaystyle{ h_{0}(t,s) = 1\ \text{, for all }s,t \in \mathbb{T}\text{,} }$$

and given h k for \(k \in \mathbb{N}_{0}\), the function h k+1 is defined by

$$\displaystyle{ h_{k+1}(t,s) =\int _{ s}^{t}h_{ k}(\tau,s)\varDelta \tau \ \text{, for all }s,\text{ }t \in \mathbb{T}. }$$

If we let \(h_{k}^{\varDelta }(t,s)\) denote, for each fixed \(s \in \mathbb{T}\), the derivative of \(h_{k}(t,s)\) with respect to t, then

$$\displaystyle{ h_{k}^{\varDelta }(t,s) = h_{ k-1}(t,s), \ \ k \in \mathbb{N}, \ t \in \mathbb{T}\text{, } }$$

for each fixed \(s \in \mathbb{T}\). The above definition obviously implies

$$\displaystyle{ h_{1}(t,s) = t - s\ \text{, for all }s,\text{ }t \in \mathbb{T}\text{. } }$$

In the following, we give some formulas for h k (t, s) as determined in [51]. In the case when \(\mathbb{T} = \mathbb{R}\), we have

$$\displaystyle{ h_{k}(t,s) = \frac{(t - s)^{k}} {k!} \ \text{, for all }s,t \in \mathbb{R}. }$$
(1.4.1)

In the case when \(\mathbb{T} = \mathbb{N}\), we see that

$$\displaystyle{ h_{k}(n,s):= \frac{(n - s)^{(k)}} {k!},\text{ }k = 0,1,2,\ldots, \ \ n > s, }$$
(1.4.2)

where \(t^{(k)} = t(t - 1)\cdots (t - k + 1)\) is the so-called falling factorial function (see [100]). When \(\mathbb{T} =\{ q^{n}: n \in \mathbb{N}\}\) , q > 1, we have that

$$\displaystyle{ h_{k}(t,s) =\mathop{ \prod }\limits _{m=0}^{k-1}\frac{t - q^{m}s} {\mathop{\sum }\limits _{j=0}^{m}q^{j}}\ \text{, for all }s,t \in \mathbb{T}. }$$
(1.4.3)

If \(\mathbb{T} = h\mathbb{N}\) , h > 0, we see that

$$\displaystyle{ h_{k}(t,s) = \frac{\mathop{\prod }\limits _{i=0}^{k-1}(t - ih - s)} {k!} \ \text{, for all }s,t \in \mathbb{T}\text{, }t > s. }$$
(1.4.4)

In general for t ≥ s, we have that h k (t, s) ≥ 0, and

$$\displaystyle{ h_{k}(t,s) \leq \frac{(t - s)^{k}} {k!} , \ \ \text{for all }t > s, \ k \in \mathbb{N}_{0}. }$$
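On \(\mathbb{T} = \mathbb{Z}\) the recursion for h k is a plain nested sum, so both the closed form (1.4.2) and the bound above can be tested directly (helper names are illustrative):

```python
from fractions import Fraction
from math import factorial

def h(k, t, s):
    """Delta Taylor monomial h_k(t, s) on T = Z, computed from the recursion
    h_{k+1}(t, s) = sum_{tau = s}^{t-1} h_k(tau, s)  (delta integral, mu = 1)."""
    if k == 0:
        return Fraction(1)
    return sum(h(k - 1, tau, s) for tau in range(s, t))

def falling(t, k):
    """Falling factorial t^(k) = t (t-1) ... (t-k+1)."""
    out = Fraction(1)
    for i in range(k):
        out *= t - i
    return out

for k in range(0, 5):
    for t in range(3, 8):
        # Closed form (1.4.2): h_k(t, s) = (t - s)^(k) / k!  (here s = 0)
        assert h(k, t, 0) == falling(t, k) / factorial(k)
        # General bound: h_k(t, s) <= (t - s)^k / k!  for t > s
        assert h(k, t, 0) <= Fraction(t ** k, factorial(k))
```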

We also consider the Taylor monomials \(g_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), which are defined recursively. The function g 0 is defined by

$$\displaystyle{ g_{0}(t,s) = 1\ \text{, for all }s,t \in \mathbb{T}\text{,} }$$

and given g k for \(k \in \mathbb{N}_{0}\), the function g k+1 is defined by

$$\displaystyle{ g_{k+1}(t,s) =\int _{ s}^{t}g_{ k}(\sigma (\tau ),s)\varDelta \tau \ \text{, for all }s,t \in \mathbb{T}. }$$

If we let \(g_{k}^{\varDelta }(t,s)\) denote, for each fixed \(s \in \mathbb{T}\), the derivative of \(g_{k}(t,s)\) with respect to t, then

$$\displaystyle{ g_{k}^{\varDelta }(t,s) = g_{ k-1}(\sigma (t),s), \ \ k \in \mathbb{N}, \ t \in \mathbb{T}\text{, } }$$

for each fixed \(s \in \mathbb{T}\). One can see that

$$\displaystyle{ h_{k}(t,s) = (-1)^{k}g_{ k}(s,t). }$$
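The symmetry \(h_{k}(t,s) = (-1)^{k}g_{k}(s,t)\) can be checked on \(\mathbb{T} = \mathbb{Z}\) by implementing both recursions, taking care to orient the integrals when t < s (a minimal sketch with names of my choosing):

```python
from fractions import Fraction

def h(k, t, s):
    """h_{k+1}(t, s) = int_s^t h_k(tau, s) Delta tau on T = Z (mu = 1)."""
    if k == 0:
        return Fraction(1)
    if t >= s:
        return sum(h(k - 1, tau, s) for tau in range(s, t))
    return -sum(h(k - 1, tau, s) for tau in range(t, s))  # reversed orientation

def g(k, t, s):
    """g_{k+1}(t, s) = int_s^t g_k(sigma(tau), s) Delta tau, sigma(tau) = tau + 1 on Z."""
    if k == 0:
        return Fraction(1)
    if t >= s:
        return sum(g(k - 1, tau + 1, s) for tau in range(s, t))
    return -sum(g(k - 1, tau + 1, s) for tau in range(t, s))

# Verify h_k(t, s) = (-1)^k g_k(s, t) on a small grid.
for k in range(0, 5):
    for t in range(0, 6):
        for s in range(0, 6):
            assert h(k, t, s) == (-1) ** k * g(k, s, t)
```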

We denote by \(C_{rd}^{(n)}(\mathbb{T})\) the space of all functions \(f \in C_{rd}(\mathbb{T})\) such that \(f^{\varDelta ^{i}} \in C_{rd}(\mathbb{T})\) for i = 0,1,2,…,n, where \(n \in \mathbb{N}\). For the function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we consider the second derivative \(f^{\varDelta ^{2}}\) provided \(f^{\varDelta }\) is delta differentiable on \(\mathbb{T}\) with derivative \(f^{\varDelta ^{2}} = (f^{\varDelta })^{\varDelta }\). Similarly, we define the nth order derivative \(f^{\varDelta ^{n}} = (f^{\varDelta ^{n-1}})^{\varDelta }\). Now, we give the definition of generalized polynomials as follows:

$$\displaystyle{ h_{n}(t,s):= \left \{\begin{array}{@{}l@{\quad }l@{}} 1, \quad &n = 0 \\ \int _{s}^{t}h_{ n-1}(\xi,s)\varDelta \xi,\quad &n \in \mathbb{N} \end{array} \right. }$$
(1.4.5)

and

$$\displaystyle{ g_{n}(t,s):= \left \{\begin{array}{@{}l@{\quad }l@{}} 1, \quad &n = 0 \\ \int _{s}^{t}g_{ n-1}(\sigma (\xi ),s)\varDelta \xi,\quad &n \in \mathbb{N}\text{, } \end{array} \right. }$$

for all s, \(t \in \mathbb{T}\).

Property. Using induction it is easy to see that h n (t, s) ≥ 0 holds for all \(n \in \mathbb{N}\) and s, \(t \in \mathbb{T}\) with t ≥ s and \((-1)^{n}h_{n}(t,s) \geq 0\) holds for all \(n \in \mathbb{N}\) and s, \(t \in \mathbb{T}\) with t ≤ s. Moreover, h n (t, s) is increasing with respect to its first component for all t ≥ s.   ■ 

Recall the following result (see [52]).

Lemma 1.4.1

For \(n \in \mathbb{N}\) and \(t \in \mathbb{T}\) , we have g n (t,s) = 0 for all \(s \in [\rho ^{n-1}(t),t]_{\mathbb{T}}\) .

Lemma 1.4.2

For \(n \in \mathbb{N}\), \(t \in \mathbb{T}\) and \(s \in \mathbb{T}^{\kappa ^{n} }\) , we have \(h_{n}(t,s) = (-1)^{n}g_{n}(s,t)\) .

From Lemmas 1.4.1 and 1.4.2 we have the following result.

Lemma 1.4.3

For \(n \in \mathbb{N}\) and \(t \in \mathbb{T}\) , we have h n (t,s) = 0 for all \(s \in [\rho ^{n-1}(t),t]_{\mathbb{T}}\) .

Theorem 1.4.2

Let \(n \in \mathbb{N}\) and \(f \in \mathrm{ C}_{\mathrm{rd}}^{n}(\mathbb{T}, \mathbb{R})\) be an n times differentiable function. For \(s \in \mathbb{T}^{\kappa ^{n-1} }\) , we have

$$\displaystyle{ f(t) =\sum _{ j=0}^{n-1}h_{ j}(t,s)f^{\varDelta ^{j} }(s) +\int _{ s}^{\rho ^{n-1}(t) }h_{n-1}(t,\sigma (\xi ))f^{\varDelta ^{n} }(\xi )\varDelta \xi,\text{ for all}\ t \in \mathbb{T}. }$$

Theorem 1.4.3

Assume that \(f \in C_{rd}^{(n)}(\mathbb{T})\) and \(s \in \mathbb{T}\) . Then

$$\displaystyle{ f(t) =\mathop{ \sum }\limits _{k=0}^{n-1}f^{\varDelta ^{k} }(s)h_{k}(t,s) +\int _{ s}^{t}h_{ n-1}(t,\sigma (\tau ))f^{\varDelta ^{n} }(\tau )\varDelta \tau. }$$
(1.4.6)

As a special case if m < n, then

$$\displaystyle{ f^{\varDelta ^{m}}(t) =\mathop{ \sum }\limits _{k=0}^{n-m-1}f^{\varDelta ^{k+m}}(s)h_{k}(t,s) +\int _{ s}^{t}h_{n-m-1}(t,\sigma (\tau ))f^{\varDelta ^{n}}(\tau )\varDelta \tau. }$$
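Taylor's formula (1.4.6) is an exact identity, which makes it easy to test on \(\mathbb{T} = \mathbb{Z}\): delta derivatives become iterated forward differences, h k is the falling factorial divided by k!, and the remainder integral becomes a finite sum (helper names are mine):

```python
from fractions import Fraction
from math import factorial

def h(k, t, s):
    """h_k(t, s) = (t - s)^(k) / k! on T = Z."""
    out = Fraction(1, factorial(k))
    for i in range(k):
        out *= t - s - i
    return out

def dforward(f, k):
    """k-th delta derivative on Z: iterated forward difference."""
    if k == 0:
        return f
    g = dforward(f, k - 1)
    return lambda t: g(t + 1) - g(t)

f = lambda t: Fraction(t) ** 3 + 2 * t   # an arbitrary test function
s, n = 1, 3

for t in range(s, s + 6):
    series = sum(dforward(f, k)(s) * h(k, t, s) for k in range(n))
    # Remainder: int_s^t h_{n-1}(t, sigma(tau)) f^{Delta^n}(tau) Delta tau
    remainder = sum(h(n - 1, t, tau + 1) * dforward(f, n)(tau) for tau in range(s, t))
    assert f(t) == series + remainder
```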

Now, we define the Taylor expansions of the functions corresponding to the nabla derivative. The generalized polynomials that will be used in describing these expansions are \(\hat{h}_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), which are defined recursively as follows. The function \(\hat{h}_{0}\) is defined by

$$\displaystyle{ \hat{h}_{0}(t,s) = 1\ \text{, for all }s,\text{ }t \in \mathbb{T}\text{,} }$$

and given \(\hat{h}_{k}\) for \(k \in \mathbb{N}_{0}\), the function \(\hat{h}_{k+1}\) is defined by

$$\displaystyle{ \hat{h}_{k+1}(t,s) =\int _{ s}^{t}\hat{h}_{ k}(\tau,s)\nabla \tau \ \text{, for all }s,\text{ }t \in \mathbb{T}. }$$
(1.4.7)

Note that the functions \(\hat{h}_{k}\) are all well defined. If we let \(\hat{h}_{k}^{\nabla }(t,s)\) denote, for each fixed \(s \in \mathbb{T}\), the nabla derivative of \(\hat{h}_{k}(t,s)\) with respect to t, then

$$\displaystyle{ \hat{h}_{k}^{\nabla }(t,s) =\hat{ h}_{ k-1}(t,s), \ \ k \in \mathbb{N}, \ t \in \mathbb{T}_{k}\text{, } }$$

for each fixed \(s \in \mathbb{T}\). The above definition obviously implies

$$\displaystyle{ \hat{h}_{1}(t,s) = t - s\ \text{, for all }s,\text{ }t \in \mathbb{T}\text{. } }$$

Finding the \(\hat{h}_{k}\) for k > 1 is not an easy task in general. However for a particular given time scale it might be easy to find these functions. We will consider some examples first before we present Taylor’s formula in general. In the case when \(\mathbb{T} = \mathbb{R}\), then \(\rho (t) = t\) and

$$\displaystyle{ \hat{h}_{k}(t,s) = \frac{(t - s)^{k}} {k!} \ \text{, for all }s,\text{ }t \in \mathbb{R}. }$$
(1.4.8)

In the case when \(\mathbb{T} = \mathbb{N}\), we see that \(\rho (t) = t - 1,\) ν(t) = 1, \(y^{\nabla }(t) = y(t) - y(t - 1),\) and

$$\displaystyle{ \hat{h}_{k}(t,s):= \frac{(t - s)^{\overline{k}}} {k!},\text{ }k = 0,1,2,\ldots, \ \ t > s, }$$
(1.4.9)

where \(t^{\overline{k}} = t(t + 1)\cdots (t + k - 1)\) is the so-called rising factorial function (see [100]). Noting that \(\nabla (t - s)^{\overline{k}} = k(t - s)^{\overline{k-1}}\), we see that

$$\displaystyle{ \hat{h}_{k+1}(t,s):=\int _{ s}^{t}\frac{(\tau -s)^{\overline{k}}} {k!} \nabla \tau =\sum \limits _{ r=s+1}^{t}\frac{(r - s)^{\overline{k}}} {k!} = \frac{(t - s)^{\overline{k+1}}} {(k + 1)!}, }$$
(1.4.10)

for k = 0, 1, 2, …,  t > s. In the case when \(\mathbb{T} =\{ q^{n}: n \in \mathbb{N}\}\) , q > 1, we have \(\rho (t) = t/q,\) \(\nu (t) = (q - 1)t/q,\) and

$$\displaystyle{ \hat{h}_{k}(t,s) =\mathop{ \prod }\limits _{m=0}^{k-1}\frac{q^{m}t - s} {\mathop{\sum }\limits _{j=0}^{m}q^{j}}\ \text{, for all }s,t \in \mathbb{T}. }$$
(1.4.11)

In general for t ≥ s, we have that \(\hat{h}_{k}(t,s) \geq 0,\) and

$$\displaystyle{ \hat{h}_{k}(t,s) \geq \frac{(t - s)^{k}} {k!} , \ \ \text{for all }t > s, \ k \in \mathbb{N}_{0}. }$$
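On \(\mathbb{T} = \mathbb{Z}\) the recursion (1.4.7) is again a nested sum (ν = 1), and one can verify numerically that the nabla derivative of \(\hat{h}_{k}\) is \(\hat{h}_{k-1}\) (a minimal sketch with illustrative names):

```python
from fractions import Fraction

def hhat(k, t, s):
    """Nabla Taylor monomial on T = Z from (1.4.7):
    hhat_{k+1}(t, s) = sum_{tau = s+1}^{t} hhat_k(tau, s)  (nabla integral, nu = 1)."""
    if k == 0:
        return Fraction(1)
    return sum(hhat(k - 1, tau, s) for tau in range(s + 1, t + 1))

# Defining property: the nabla derivative of hhat_k is hhat_{k-1}.
for k in range(1, 5):
    for t in range(1, 7):
        assert hhat(k, t, 0) - hhat(k, t - 1, 0) == hhat(k - 1, t, 0)

# e.g. hhat_2(3, 0) = 1 + 2 + 3 = 6
assert hhat(2, 3, 0) == 6
```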

We may also relate the functions \(\hat{h}_{k}\) and \(\hat{g}_{k}\) for the nabla derivative to the functions h k and g k for the delta derivative.

Definition 1.4.1

For t, \(s \in \mathbb{T}\) define the functions

$$\displaystyle{ h_{0}(t,s) = g_{0}(t,s) =\hat{ h}_{0}(t,s) =\hat{ g}_{0}(t,s) = 1\text{, } }$$

and given h n ,g n, \(\hat{h}_{n}\) and  \(\hat{g}_{n}\) for \(n \in \mathbb{N}_{0}\),

$$\displaystyle\begin{array}{rcl} h_{n+1}(t,s)& =& \int _{s}^{t}h_{ n}(\tau,s)\varDelta \tau \text{, }g_{n+1}(t,s) =\int _{ s}^{t}g_{ n}(\sigma (\tau ),s)\varDelta \tau \text{,} {}\\ \hat{h}_{n+1}(t,s)& =& \int _{s}^{t}\hat{h}_{ n}(\tau,s)\nabla \tau \text{, }\hat{g}_{n+1} =\int _{ s}^{t}\hat{g}_{ n}(\rho (\tau ),s)\nabla \tau, {}\\ \end{array}$$

we have that

$$\displaystyle{ \hat{h}_{n}(t,s) = g_{n}(t,s) = (-1)^{n}h_{ n}(s,t) = (-1)^{n}\hat{g}_{ n}(s,t). }$$

We denote by \(C_{ld}^{(n)}(\mathbb{T})\) the space of all functions \(f \in C_{ld}(\mathbb{T})\) such that \(f^{\nabla ^{i}} \in C_{ld}(\mathbb{T})\) for i = 0,1,2,…,n, where \(n \in \mathbb{N}\). For the function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we consider the second derivative \(f^{\nabla ^{2}}\) provided \(f^{\nabla }\) is nabla differentiable on \(\mathbb{T}\) with derivative \(f^{\nabla ^{2}} = (f^{\nabla })^{\nabla }\). Similarly, we define the nth order nabla derivative \(f^{\nabla ^{n}} = (f^{\nabla ^{n-1}})^{\nabla }\).

Theorem 1.4.4

Let \(n \in \mathbb{N}\) . Suppose that the function f is such that \(f^{\nabla ^{n+1}}\) is ld-continuous on \(\mathbb{T}_{\kappa ^{n+1}}\) . Let \(s \in \mathbb{T}_{\kappa ^{n}}\), \(t \in \mathbb{T}\) , and define

$$\displaystyle{ \hat{h}_{0}(t,s) = 1\text{, }\hat{h}_{k+1}(t,s) =\int _{ s}^{t}\hat{h}_{ k}(\tau,s)\nabla \tau\ \text{, for all }s,\text{ }t \in \mathbb{T}\text{ and }k \in \mathbb{N}_{0}. }$$

Then, we have

$$\displaystyle{ f(t) =\sum _{ k=0}^{n}\hat{h}_{ k}(t,s)f^{\nabla ^{k} }(s) +\int _{ s}^{t}\hat{h}_{ n}(t,\rho (\xi ))f^{\nabla ^{n+1} }(\xi )\nabla \xi. }$$

We end this section with the time scale version of L’Hôpital’s rule. We present the rule for delta and nabla derivatives.

Theorem 1.4.5

Assume that f and g are Δ-differentiable on \(\mathbb{T}\) and let \(t_{0} \in \mathbb{T} \cup \{\infty \}\) . If \(t_{0} \in \mathbb{T}\) , assume that t 0 is left-dense. Furthermore, assume that \(\lim _{t\rightarrow t_{0}^{-}}f(t) =\lim _{t\rightarrow t_{0}^{-}}g(t) = 0,\) and suppose that there exists \(\varepsilon > 0\) with g(t)g Δ (t) > 0 for all \(t \in L_{\varepsilon }(t_{0}) =\{ t \in \mathbb{T}: 0 < t_{0} - t <\varepsilon \}\) . Then

$$\displaystyle{ \lim \inf _{t\rightarrow t_{0}^{-}}\frac{f^{\varDelta }(t)} {g^{\varDelta }(t)} \leq \lim \inf _{t\rightarrow t_{0}^{-}}\frac{f(t)} {g(t)} \leq \lim \sup _{t\rightarrow t_{0}^{-}}\frac{f(t)} {g(t)} \leq \lim \sup _{t\rightarrow t_{0}^{-}}\frac{f^{\varDelta }(t)} {g^{\varDelta }(t)}. }$$

Theorem 1.4.6

Assume that f and g are ∇-differentiable on \(\mathbb{T}\) and let \(t_{0} \in \mathbb{T} \cup \{-\infty \}\) . If \(t_{0} \in \mathbb{T}\) , assume that t 0 is right-dense. Furthermore, assume that \(\lim _{t\rightarrow t_{0}^{+}}f(t) =\lim _{t\rightarrow t_{0}^{+}}g(t) = 0,\) and suppose that there exists \(\varepsilon > 0\) with \(g(t)g^{\nabla }(t) > 0\) for all \(t \in R_{\varepsilon }(t_{0}) =\{ t \in \mathbb{T}: 0 < t - t_{0} <\varepsilon \}\) . Then

$$\displaystyle{ \lim \inf _{t\rightarrow t_{0}^{+}}\frac{f^{\nabla }(t)} {g^{\nabla }(t)} \leq \lim \inf _{t\rightarrow t_{0}^{+}}\frac{f(t)} {g(t)} \leq \lim \sup _{t\rightarrow t_{0}^{+}}\frac{f(t)} {g(t)} \leq \lim \sup _{t\rightarrow t_{0}^{+}}\frac{f^{\nabla }(t)} {g^{\nabla }(t)}. }$$