Abstract
From a modeling point of view it is realistic to model a phenomenon by a dynamic system which incorporates both continuous and discrete times, namely, time as an arbitrary closed set of reals. It is natural to ask whether it is possible to provide a framework which allows us to handle both dynamic systems simultaneously so that we can get some insight and a better understanding of the subtle differences of these two systems.
The essence of mathematics lies in its freedom.
Georg Cantor (1845–1915).
As for everything else, so for a mathematical theory: beauty can be perceived but not explained.
Arthur Cayley (1821–1895).
From a modeling point of view, it is realistic to model a phenomenon by a dynamic system which incorporates both continuous and discrete times, namely, time as an arbitrary closed set of reals. It is natural to ask whether it is possible to provide a framework which allows us to handle both types of dynamic systems simultaneously, so that we can gain insight into, and a better understanding of, the subtle differences between these two systems. The recently developed theory of "dynamic systems on time scales," or dynamic systems on measure chains (by a measure chain we mean the union of disjoint closed intervals of \(\mathbb{R}\)), offers the desired unified approach.
This chapter contains some preliminaries, definitions, and concepts concerning time scale calculus. The results in this chapter will cover delta, nabla, and diamond-α derivatives and integrals.
1.1 Delta Calculus
For the notions used below we refer the reader to the books [51, 52], which summarize and organize much of time scale calculus. A time scale is an arbitrary nonempty closed subset of the real numbers. Throughout the book, we denote the time scale by the symbol \(\mathbb{T}\). For example, the real numbers \(\mathbb{R}\), the integers \(\mathbb{Z}\), and the natural numbers \(\mathbb{N}\) are time scales. For \(t \in \mathbb{T}\), we define the forward jump operator \(\sigma: \mathbb{T} \rightarrow \mathbb{T}\) by \(\sigma (t):=\inf \{ s \in \mathbb{T}: s > t\}\). A time scale \(\mathbb{T}\) equipped with the order topology is metrizable and is a \(K_{\sigma }\)-space; i.e., it is a union of at most countably many compact sets. The metric on \(\mathbb{T}\) which generates the order topology is given by \(d(r,s):= \left \vert \mu (r,s)\right \vert\), where \(\mu (\cdot ) =\mu (\cdot,\tau )\) for a fixed \(\tau \in \mathbb{T}\). The mapping \(\mu: \mathbb{T} \rightarrow \mathbb{R}^{+} = [0,\infty )\) such that \(\mu (t):=\sigma (t) - t\) is called the graininess.
When \(\mathbb{T} = \mathbb{R}\), we see that σ(t) = t and \(\mu (t) \equiv 0\) for all \(t \in \mathbb{T}\), and when \(\mathbb{T} = \mathbb{N}\), we have that \(\sigma (t) = t + 1\) and μ(t) ≡ 1 for all \(t \in \mathbb{T}\). The backward jump operator \(\rho: \mathbb{T} \rightarrow \mathbb{T}\) is defined by \(\rho (t):=\sup \{ s \in \mathbb{T}: s < t\}\). The mapping \(\nu: \mathbb{T} \rightarrow \mathbb{R}_{0}^{+}\) such that \(\nu (t) = t -\rho (t)\) is called the backward graininess. If σ(t) > t, we say that t is right-scattered, while if ρ(t) < t, we say that t is left-scattered. Also, if \(t <\sup \mathbb{T}\) and σ(t) = t, then t is called right-dense, and if \(t >\inf \mathbb{T}\) and ρ(t) = t, then t is called left-dense. If \(\mathbb{T}\) has a left-scattered maximum m, then \(\mathbb{T}^{k} = \mathbb{T}\setminus \{m\}\). Otherwise \(\mathbb{T}^{k} = \mathbb{T}\). In summary,
$$\mathbb{T}^{k} = \left \{\begin{array}{ll} \mathbb{T}\setminus (\rho (\sup \mathbb{T}),\sup \mathbb{T}]&\mbox{ if }\sup \mathbb{T} < \infty, \\ \mathbb{T} &\mbox{ if }\sup \mathbb{T} = \infty.\end{array} \right.$$
Likewise, \(\mathbb{T}_{k}\) is defined as the set \(\mathbb{T}_{k} = \mathbb{T}\setminus [\inf \mathbb{T},\sigma (\inf \mathbb{T}))\) if \(\left \vert \inf \mathbb{T}\right \vert < \infty \), and \(\mathbb{T}_{k} = \mathbb{T}\) if \(\inf \mathbb{T} = -\infty \).
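On a concrete finite time scale these operators are easy to compute; the following sketch, with helper names of our choosing, implements σ, ρ, and μ for a time scale given as a sorted list of points:

```python
# Sketch: jump operators and graininess on a finite time scale,
# represented as a sorted list of points (all helper names are ours).
def sigma(T, t):
    """Forward jump: smallest point of T strictly greater than t (t itself if none)."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(T, t):
    """Backward jump: largest point of T strictly less than t (t itself if none)."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

def mu(T, t):
    """Forward graininess mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

T = [0, 1, 2, 2.5, 4]      # a finite "time scale"
print(sigma(T, 2))         # -> 2.5 (t = 2 is right-scattered)
print(mu(T, 2.5))          # -> 1.5
print(rho(T, 0))           # -> 0 (the minimum is mapped to itself)
```

Note that a maximum is mapped to itself by σ, matching the convention \(\sigma(t) = t\) when the set \(\{s \in \mathbb{T}: s > t\}\) is empty.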
For a function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we define the derivative \(f^{\varDelta }\) as follows. Let \(t \in \mathbb{T}\). If there exists a number \(\alpha \in \mathbb{R}\) such that for all \(\varepsilon > 0\) there exists a neighborhood U of t with
$$\left \vert f(\sigma (t)) - f(s) -\alpha \left (\sigma (t) - s\right )\right \vert \leq \varepsilon \left \vert \sigma (t) - s\right \vert$$
for all s ∈ U, then f is said to be differentiable at t, and we call α the delta derivative of f at t and denote it by \(f^{\varDelta }(t)\). For example, if \(\mathbb{T} = \mathbb{R}\), then \(f^{\varDelta }(t) = f^{{\prime}}(t)\), the usual derivative.
If \(\mathbb{T} = \mathbb{N}\), then \(f^{\varDelta }(t) = f(t + 1) - f(t)\) for all \(t \in \mathbb{T}\). For a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) the (delta) derivative is defined by
$$f^{\varDelta }(t) = \frac{f(\sigma (t)) - f(t)} {\mu (t)},$$
if f is continuous at t and t is right-scattered. If t is not right-scattered, then the derivative is defined by
$$f^{\varDelta }(t) =\lim _{s\rightarrow t}\frac{f(t) - f(s)} {t - s},$$
provided this limit exists. A useful formula is
$$f^{\sigma }(t):= f(\sigma (t)) = f(t) +\mu (t)f^{\varDelta }(t).$$
A function \(f: [a,b] \rightarrow \mathbb{R}\) is said to be right-dense continuous (rd-continuous) if it is right continuous at each right-dense point and there exists a finite left limit at all left-dense points, and f is said to be differentiable if its derivative exists. The space of rd-continuous functions is denoted by \(C_{rd}(\mathbb{T}, \mathbb{R})\). A time scale \(\mathbb{T}\) is said to be regular if the following two conditions are satisfied simultaneously:
(a) For all \(t \in \mathbb{T}\), σ(ρ(t)) = t;
(b) For all \(t \in \mathbb{T}\), ρ(σ(t)) = t.
Remark 1.1.1
If \(\mathbb{T}\) is a regular time scale, then both operators are invertible with \(\sigma ^{-1} =\rho\) and \(\rho ^{-1} =\sigma\) .
The following theorem gives the product and quotient rules for the derivative of the product fg and the quotient f∕g (where \(gg^{\sigma }\neq 0\)) of two delta differentiable functions f and g.
Theorem 1.1.1
Assume \(f,g: \mathbb{T} \rightarrow \mathbb{R}\) are delta differentiable at \(t \in \mathbb{T}\) . Then
$$\displaystyle{ (fg)^{\varDelta }(t) = f^{\varDelta }(t)g(t) + f^{\sigma }(t)g^{\varDelta }(t) = f(t)g^{\varDelta }(t) + f^{\varDelta }(t)g^{\sigma }(t), }$$
and, if \(g(t)g^{\sigma }(t)\neq 0\),
$$\displaystyle{ \left (\frac{f} {g}\right )^{\varDelta }(t) = \frac{f^{\varDelta }(t)g(t) - f(t)g^{\varDelta }(t)} {g(t)g^{\sigma }(t)}. }$$
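On \(\mathbb{T} = \mathbb{Z}\), where \(\sigma(t) = t + 1\) and \(f^{\varDelta }(t) = f(t+1) - f(t)\), both forms of the product rule can be checked numerically; a minimal sketch (all names are ours):

```python
# Sketch: checking both forms of the delta product rule on T = Z
# (all names are ours); on Z, sigma(t) = t + 1 and mu(t) = 1.
delta = lambda F, t: F(t + 1) - F(t)   # delta derivative on Z

f = lambda t: t ** 2
g = lambda t: 2 ** t

t = 4
lhs = delta(lambda s: f(s) * g(s), t)
# (fg)^Delta = f^Delta g + f(sigma) g^Delta = f g^Delta + f^Delta g(sigma)
assert lhs == delta(f, t) * g(t) + f(t + 1) * delta(g, t)
assert lhs == f(t) * delta(g, t) + delta(f, t) * g(t + 1)
print(lhs)   # -> 544
```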
By using the product rule, we see that the derivative of \(f(t) = (t-\alpha )^{m}\) for \(m \in \mathbb{N},\) and \(\alpha \in \mathbb{T}\) can be calculated as
As a special case when α = 0, we see that the derivative of f(t) = t m for \(m \in \mathbb{N}\) can be calculated as
Note that when \(\mathbb{T} = \mathbb{R}\), we have
When \(\mathbb{T} = \mathbb{Z}\), we have
When \(\mathbb{T} =h\mathbb{Z}\), h > 0, we have \(\sigma (t) = t + h,\) μ(t) = h,
When \(\mathbb{T} =\{ t: t = q^{k}\), \(k \in \mathbb{N}_{0}\), q > 1}, we have σ(t) = qt, \(\mu (t) = (q - 1)t,\)
When \(\mathbb{T} = \mathbb{N}_{0}^{2} =\{ t^{2}: t \in \mathbb{N}\},\) we have \(\sigma (t) = (\sqrt{t} + 1)^{2}\) and
When \(\mathbb{T} = \mathbb{T}_{n} =\{ t_{n}: n \in \mathbb{N}\}\) where (t n ) are the harmonic numbers that are defined by t 0 = 0 and \(t_{n} =\sum _{ k=1}^{n}\frac{1} {k},n \in \mathbb{N}_{0},\) and we have
When \(\mathbb{T}_{2}=\{\sqrt{n}: n \in \mathbb{N}\},\) we have \(\sigma (t) = \sqrt{t^{2 } + 1},\)
When \(\mathbb{T}_{3}=\{\sqrt[3]{n}: n \in \mathbb{N}\},\) we have \(\sigma (t) = \sqrt[3]{t^{3} + 1}\) and
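The scattered-point formula \(f^{\varDelta }(t) = (f(\sigma (t)) - f(t))/\mu (t)\) reproduces the closed-form derivatives of \(f(t) = t^{2}\) on the discrete time scales above; a small sketch (helper names are ours):

```python
# Sketch: the delta derivative of f(t) = t^2 at a right-scattered point,
# f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t), on three discrete time scales.
def delta_derivative(f, t, sigma_t):
    return (f(sigma_t) - f(t)) / (sigma_t - t)

f = lambda t: t ** 2

# T = Z: sigma(t) = t + 1, so f^Delta(t) = 2t + 1
assert delta_derivative(f, 3, 3 + 1) == 2 * 3 + 1

# T = hZ with h = 0.5: sigma(t) = t + h, so f^Delta(t) = 2t + h
h = 0.5
assert delta_derivative(f, 3.0, 3.0 + h) == 2 * 3.0 + h

# T = q^{N_0} with q = 2: sigma(t) = q t, so f^Delta(t) = (q + 1) t
q, t = 2, 8          # t = 2^3 is a point of q^{N_0}
assert delta_derivative(f, t, q * t) == (q + 1) * t
```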
For \(a,b \in \mathbb{T}\) and a delta differentiable function f, the Cauchy integral of \(f^{\varDelta }\) is defined by
$$\int _{a}^{b}f^{\varDelta }(t)\varDelta t = f(b) - f(a).$$
Theorem 1.1.2
Let \(f,g \in C_{rd}([a,b], \mathbb{R})\) be rd-continuous functions, \(a,b,c \in \mathbb{T}\) and \(\alpha,\beta \in \mathbb{R}\) . Then, the following are true:
1. \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\varDelta t =\alpha \int \nolimits _{ a}^{b}f(t)\varDelta t +\beta \int \nolimits _{ a}^{b}g(t)\varDelta t\);
2. \(\int \nolimits _{a}^{b}f(t)\varDelta t = -\int \nolimits _{b}^{a}f(t)\varDelta t\);
3. \(\int \nolimits _{a}^{c}f(t)\varDelta t =\int \nolimits _{ a}^{b}f(t)\varDelta t +\int \nolimits _{ b}^{c}f(t)\varDelta t\);
4. \(\left \vert \int _{a}^{b}f(t)\varDelta t\right \vert \leq \int _{a}^{b}\left \vert f(t)\right \vert \varDelta t\).
An integration by parts formula reads
$$\int \nolimits _{a}^{b}f(t)g^{\varDelta }(t)\varDelta t = \left.f(t)g(t)\right \vert _{a}^{b} -\int \nolimits _{a}^{b}f^{\varDelta }(t)g^{\sigma }(t)\varDelta t,$$
and infinite integrals are defined as
$$\int \nolimits _{a}^{\infty }f(t)\varDelta t =\lim _{b\rightarrow \infty }\int \nolimits _{a}^{b}f(t)\varDelta t.$$
Note that when \(\mathbb{T} = \mathbb{R}\), we have
$$\int _{a}^{b}f(t)\varDelta t =\int _{a}^{b}f(t)dt.$$
When \(\mathbb{T} = \mathbb{Z}\), we have
$$\int _{a}^{b}f(t)\varDelta t =\sum _{t=a}^{b-1}f(t),\quad a < b.$$
When \(\mathbb{T} =h\mathbb{Z}\), h > 0, we have
$$\int _{a}^{b}f(t)\varDelta t =\sum _{k=a/h}^{b/h-1}f(kh)h,\quad a < b.$$
When \(\mathbb{T} =\{ t: t = q^{k}\), \(k \in \mathbb{N}_{0}\), q > 1}, we have
$$\int _{a}^{b}f(t)\varDelta t = (q - 1)\sum _{t\in [a,b)\cap \mathbb{T}}tf(t),\quad a < b.$$
Note that the integration formula on a discrete time scale is given by
$$\int _{a}^{b}f(t)\varDelta t =\sum _{t\in [a,b)\cap \mathbb{T}}\mu (t)f(t),\quad a < b.$$
It is well known that rd-continuous functions possess antiderivatives. If f is rd-continuous and \(F^{\varDelta } = f\), then
$$\int _{a}^{b}f(t)\varDelta t = F(b) - F(a).$$
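On a purely discrete time scale the delta integral is the finite sum \(\sum \mu (t)f(t)\) over \([a,b)\); a minimal sketch of this formula (names are ours):

```python
# Sketch: the delta integral on a discrete time scale as the finite sum
# of mu(t) * f(t) over the points of T in [a, b) (names are ours).
def delta_integral(T, f, a, b):
    pts = sorted(T)
    total = 0.0
    for i, t in enumerate(pts[:-1]):
        if a <= t < b:
            total += (pts[i + 1] - t) * f(t)   # mu(t) * f(t)
    return total

T = list(range(0, 11))                        # T = {0, 1, ..., 10}, a subset of Z
print(delta_integral(T, lambda t: t, 0, 5))   # -> 10.0, i.e. 0+1+2+3+4
```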
Now, we give the definition of the generalized exponential function and its derivative. We say that \(p: \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is regressive provided 1 +μ(t)p(t) ≠ 0 for all \(t \in \mathbb{T}^{\kappa }\). We denote by \(\mathcal{R}\) the set of all regressive and rd-continuous functions. We define the set \(\mathcal{R}^{+}\) of all positively regressive elements of \(\mathcal{R}\) by \(\mathcal{R}^{+} =\{ p \in \mathcal{R}: 1 +\mu (t)p(t) > 0\ \mbox{for all}\ t \in \mathbb{T}\}\). The set of all regressive functions on a time scale \(\mathbb{T}\) forms an Abelian group under the addition ⊕ defined by \(p \oplus q:= p + q +\mu pq\). If \(p \in \mathcal{R}\), then we can define the exponential function by
$$e_{p}(t,s) =\exp \left (\int _{s}^{t}\xi _{\mu (\tau )}(p(\tau ))\varDelta \tau \right )\quad \mbox{for}\ t,s \in \mathbb{T},$$
where \(\xi _{h}(z)\) is the cylinder transformation, which is defined by
$$\xi _{h}(z) = \left \{\begin{array}{ll} \frac{\mathrm{Log}(1 + hz)} {h},&h\neq 0, \\ z, &h = 0.\end{array} \right.$$
If \(p \in \mathcal{R}\), then \(e_{p}(t,s)\) is real-valued and nonzero on \(\mathbb{T}\). If \(p \in \mathcal{R}^{+},\) then \(e_{p}(t,t_{0})\) is always positive. Note that if \(\mathbb{T} = \mathbb{R}\), then
$$e_{p}(t,s) =\exp \left (\int _{s}^{t}p(\tau )d\tau \right );$$
if \(\mathbb{T} = \mathbb{N}\), then
$$e_{p}(t,s) =\prod _{\tau =s}^{t-1}\left (1 + p(\tau )\right );$$
and if \(\mathbb{T} =q^{\mathbb{N}_{0}}\), then
$$e_{p}(t,s) =\prod _{\tau \in [s,t)\cap \mathbb{T}}\left (1 + (q - 1)\tau p(\tau )\right ).$$
If \(p: \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is rd-continuous and regressive, then the exponential function \(e_{p}(t,t_{0})\) is for each fixed \(t_{0} \in \mathbb{T}^{\kappa }\) the unique solution of the initial value problem \(x^{\varDelta } = p(t)x,\;x(t_{0}) = 1,\) for all t ∈ \(\mathbb{T}\). We will use the following definition to present the properties of the exponential function e p (t, s). If p, \(q \in \mathcal{R}\), then we define \(\ominus p(t) = -p(t)/(1 +\mu (t)p(t))\) and \((p \oplus q)(t) = p(t) + q(t) +\mu (t)p(t)q(t)\) for all \(t \in \mathbb{T}^{\kappa }\). The following properties are proved in [51].
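On a discrete time scale the delta exponential reduces to the product \(\prod (1 +\mu (s)p(s))\) over \([t_{0},t)\); the sketch below (names are ours) checks this against \(e_{1/2}(t,0) = (3/2)^{t}\) on \(\mathbb{Z}\) and against the dynamic equation \(x^{\varDelta } = p(t)x\):

```python
# Sketch: the delta exponential on a discrete time scale as the product
# of (1 + mu(s) p(s)) over points s in [t0, t) (names are ours).
def e_p(T, p, t, t0):
    pts = sorted(T)
    prod = 1.0
    for i, s in enumerate(pts[:-1]):
        if t0 <= s < t:
            prod *= 1 + (pts[i + 1] - s) * p(s)   # 1 + mu(s) p(s)
    return prod

T = list(range(0, 11))     # T = {0, 1, ..., 10}, a subset of Z
p = lambda t: 0.5
assert abs(e_p(T, p, 4, 0) - 1.5 ** 4) < 1e-12   # on Z, e_{1/2}(t, 0) = (3/2)^t
# e_p solves x^Delta = p(t) x: on Z, e_p(t+1, 0) - e_p(t, 0) = p(t) e_p(t, 0)
t = 4
assert abs(e_p(T, p, t + 1, 0) - e_p(T, p, t, 0) - p(t) * e_p(T, p, t, 0)) < 1e-12
```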
Theorem 1.1.3
If \(p,q \in \mathcal{R}\) and \(t_{0} \in \mathbb{T}\) , then
1. \(e_{p}(t,t) \equiv 1\quad \mbox{ and }\quad e_{0}(t,s) \equiv 1\);
2. \(e_{p}(\sigma (t),s) = (1 +\mu (t)p(t))e_{p}(t,s)\);
3. \(\frac{1} {e_{p}(t,s)} = e_{\ominus p}(t,s) = e_{p}(s,t)\);
4. \(\frac{e_{p}(t,s)} {e_{q}(t,s)} = e_{p\ominus q}(t,s)\);
5. \(e_{p}(t,s)e_{q}(t,s) = e_{p\oplus q}(t,s)\);
6. if \(p \in \mathcal{R}^{+}\), then \(e_{p}(t,t_{0}) > 0\) for all \(t \in \mathbb{T}\);
7. \(e_{p}^{\varDelta }(t,t_{0}) = p(t)e_{p}(t,t_{0})\);
8. \(\left ( \frac{1} {e_{p}(\cdot,s)}\right )^{\varDelta } = - \frac{p(t)} {e_{p}^{\sigma }(\cdot,s)}\).
Lemma 1.1.1
For a nonnegative \(\varphi\) with \(-\varphi \in \mathcal{R}^{+}\) , we have the inequalities
If \(\varphi\) is rd-continuous and nonnegative, then
Remark 1.1.2
If \(\lambda \in \mathcal{R}^{+}\) and λ(r) < 0 for all \(r \in [s,t)_{\mathbb{T}}\) , then
Theorem 1.1.4
If \(p \in \mathcal{R}\) and \(a,b,c \in \mathbb{T}\) , then
Theorem 1.1.5
If \(a,b \in \mathbb{T}\) and \(f \in C_{rd}(\mathbb{T}\), \(\mathbb{R})\) is such that f(t) ≥ 0 for all a ≤ t < b, then
Lemma 1.1.2
Let \(v \in C_{rd}^{1}(\mathbb{T}\), \(\mathbb{R})\) be strictly increasing and \(\tilde{\mathbb{T}} = v(\mathbb{T})\) be a time scale. If \(f \in C_{rd}(\mathbb{T}\), \(\mathbb{R})\) , then for \(a,b \in \mathbb{T}\),
Throughout the book, we will use the following facts:
and without loss of generality, we assume that \(\sup \mathbb{T} = \infty \), and define the time scale interval \([a,b]_{\mathbb{T}}\) by \([a,b]_{\mathbb{T}}:= [a,b] \cap \mathbb{T}\). The following results are adapted from [52].
Lemma 1.1.3
Let \(f: \mathbb{R} \rightarrow \mathbb{R}\) be continuously differentiable and suppose \(g: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable. Then \(f \circ g: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable and
Lemma 1.1.4
Let \(f: \mathbb{R} \rightarrow \mathbb{R}\) be continuously differentiable and suppose \(g: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable. Then \(f \circ g: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable and the formula
holds.
Lemma 1.1.5
Assume the continuous mapping \(f: [r,s]_{\mathbb{T}} \rightarrow \mathbb{R}\) , r, \(s \in \mathbb{T}\) , satisfies f(r) < 0 < f(s). Then there is a \(\tau \in [r,s)_{\mathbb{T}}\) with \(f(\tau )f(\sigma (\tau )) \leq 0\) .
Lemma 1.1.6
Let the mapping \(f: \mathbb{T} \rightarrow \mathbb{R},\) \(g: \mathbb{T} \rightarrow \mathbb{R}\) be differentiable and assume that
Then for r, \(s \in \mathbb{T},\) r ≤ s,
If \(g: \mathbb{T} \rightarrow \mathbb{R}\) is differentiable and \(g^{\varDelta }(t) \geq 0\) for all t, then g is nondecreasing.
Definition 1.1.1
We say a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is right-increasing (right-decreasing) at \(t_{0} \in \mathbb{T}^{k}\) provided that
(i) if \(\sigma (t_{0}) > t_{0}\), then \(f(\sigma (t_{0})) > f(t_{0})\) \((f(\sigma (t_{0})) < f(t_{0}))\);
(ii) if \(\sigma (t_{0}) = t_{0}\), then there is a neighborhood U of \(t_{0}\) such that \(f(t) > f(t_{0})\) \((f(t) < f(t_{0}))\) for all t ∈ U, t > \(t_{0}\).
Definition 1.1.2
We say a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) assumes its local right-maximum (local right-minimum) at \(t_{0} \in \mathbb{T}^{k}\) provided that:
(i) if \(\sigma (t_{0}) > t_{0}\), then \(f(\sigma (t_{0})) \leq f(t_{0})\) \((f(\sigma (t_{0})) \geq f(t_{0}))\);
(ii) if \(\sigma (t_{0}) = t_{0}\), then there is a neighborhood U of \(t_{0}\) such that \(f(t) \leq f(t_{0})\) \((f(t) \geq f(t_{0}))\) for all t ∈ U, t > \(t_{0}\).
Theorem 1.1.6
If \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and \(f^{\varDelta }(t_{0}) > 0,\) \((f^{\varDelta }(t_{0}) < 0),\) then f is right-increasing, (right-decreasing), at t 0 .
Theorem 1.1.7
If \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and if \(f^{\varDelta }(t_{0}) > 0\,(f^{\varDelta }(t_{0}) < 0),\) then f assumes a local right-minimum (local right-maximum), at t 0 .
Theorem 1.1.8
Suppose \(f: \mathbb{T} \rightarrow \mathbb{R}\) is Δ-differentiable at \(t_{0} \in \mathbb{T}^{k}\) and assumes its local right-minimum (local right-maximum) at t 0 . Then \(f^{\varDelta }(t_{0}) \geq 0(f^{\varDelta }(t_{0}) \leq 0)\) .
Theorem 1.1.9
Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that is Δ-differentiable on [a,b) (the differentiability at a is understood as right-sided), and satisfies f(a) = f(b). Then there exist \(\zeta,\tau \in [a,b)_{\mathbb{T}}\) such that
Corollary 1.1.1
Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that is Δ-differentiable on [a,b). If f Δ (t) = 0 for all \(t \in [a,b)_{\mathbb{T}},\) then f is a constant function on \([a,b]_{\mathbb{T}}\) .
Corollary 1.1.2
Let f be a continuous function on [a,b] that is Δ-differentiable on [a,b). Then f is increasing, decreasing, nondecreasing, and nonincreasing on \([a,b]_{\mathbb{T}}\) if \(f^{\varDelta }(t) > 0\), \(f^{\varDelta }(t) < 0\), \(f^{\varDelta }(t) \geq 0\), and \(f^{\varDelta }(t) \leq 0\) for all \(t \in [a,b)_{\mathbb{T}},\) respectively.
Theorem 1.1.10
Let f and g be continuous functions on [a,b] that are Δ-differentiable on \([a,b)_{\mathbb{T}}\) . Suppose g Δ (t) > 0 for all \(t \in [a,b)\) . Then there exist ζ, \(\tau \in [a,b)_{\mathbb{T}}\) such that
1.2 Nabla Calculus
The corresponding theory for nabla derivatives was also studied extensively. The results in this section are adapted from [27].
Let \(\mathbb{T}\) be a time scale. The backward jump operator \(\rho: \mathbb{T} \rightarrow \mathbb{T}\) is defined by \(\rho (t):=\sup \{ s \in \mathbb{T}: s < t\}\), and the mapping \(\nu: \mathbb{T} \rightarrow \mathbb{R}_{0}^{+}\) such that \(\nu (t) = t -\rho (t)\) is called the backward graininess. The function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is called nabla differentiable at \(t_{0} \in \mathbb{T}\) if there exists an \(a \in \mathbb{R}\) with the following property: for any ε > 0, there exists a neighborhood U of \(t_{0}\) such that
$$\left \vert f(\rho (t_{0})) - f(s) - a\left (\rho (t_{0}) - s\right )\right \vert \leq \varepsilon \left \vert \rho (t_{0}) - s\right \vert$$
for all s ∈ U; we write \(a = f^{\nabla }(t_{0})\). For \(\mathbb{T} = \mathbb{R}\), we have \(f^{\nabla }(t) = f^{{\prime}}(t)\), and for \(\mathbb{T} = \mathbb{Z}\), we have the backward difference operator \(f^{\nabla }(t) = \nabla f(t) = f(t) - f(t - 1)\).
A function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is left-dense continuous or ld-continuous provided it is continuous at left-dense points in \(\mathbb{T}\) and its right-sided limits exist (finite) at right-dense points in \(\mathbb{T}\). If \(\mathbb{T} = \mathbb{R}\), then f is ld-continuous if and only if f is continuous. If \(\mathbb{T} = \mathbb{Z}\), then any function is ld-continuous. The following theorem gives several properties of the nabla derivative.
Theorem 1.2.1
Assume \(f: \mathbb{T} \rightarrow \mathbb{R}\) is a function and let \(t \in \mathbb{T}\) . Then we have the following:
1. If f is nabla differentiable at t, then f is continuous at t.
2. If f is continuous at t and t is left-scattered, then f is nabla differentiable at t with
$$\displaystyle{ f^{\nabla }(t) = \frac{f(t) - f(\rho (t))} {\nu (t)}. }$$
3. If t is left-dense, then f is nabla differentiable at t if and only if the limit \(\lim _{s\rightarrow t}\frac{f(t)-f(s)} {t-s}\) exists as a finite number, and in this case
$$\displaystyle{ f^{\nabla }(t) =\lim _{ s\rightarrow t}\frac{f(t) - f(s)} {t - s}. }$$
4. If f is nabla differentiable at t, then \(f(\rho (t))=f(t)-\nu (t)f^{\nabla }(t)\).
Theorem 1.2.2
Assume f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) are nabla differentiable at \(t \in \mathbb{T}\) . Then:
(i) The product \(fg: \mathbb{T} \rightarrow \mathbb{R}\) is nabla differentiable at t, and we get the product rule
$$\displaystyle{ (fg)^{\nabla }(t) = f^{\nabla }(t)g(t) + f^{\rho }(t)g^{\nabla }(t) = f(t)g^{\nabla }(t) + f^{\nabla }(t)g^{\rho }(t). }$$
(ii) If g(t)g ρ (t) ≠ 0, then f∕g is nabla differentiable at t, and we get the quotient rule
$$\displaystyle{ \left (\frac{f} {g}\right )^{\nabla }(t) = \frac{f^{\nabla }(t)g(t) - f(t)g^{\nabla }(t)} {g(t)g^{\rho }(t)}. }$$
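On \(\mathbb{T} = \mathbb{Z}\), where \(f^{\nabla }(t) = f(t) - f(t-1)\) and \(f^{\rho }(t) = f(t-1)\), the nabla product rule can be verified directly; a minimal sketch (names are ours):

```python
# Sketch: verifying the nabla product rule on T = Z, where
# f^nabla(t) = f(t) - f(t-1) and f^rho(t) = f(t-1) (names are ours).
nabla = lambda F, t: F(t) - F(t - 1)

f = lambda t: t ** 2
g = lambda t: 3 ** t

t = 5
lhs = nabla(lambda s: f(s) * g(s), t)
rhs = nabla(f, t) * g(t) + f(t - 1) * nabla(g, t)   # f^nabla g + f^rho g^nabla
assert lhs == rhs == f(t) * nabla(g, t) + nabla(f, t) * g(t - 1)  # second form
print(lhs)   # -> 4779
```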
A function \(F: \mathbb{T} \rightarrow \mathbb{R}\) is called a nabla antiderivative of \(f: \mathbb{T} \rightarrow \mathbb{R}\) provided \(F^{\nabla }(t) = f(t)\) holds for all \(t \in \mathbb{T}\). We then define the nabla integral of f by
$$\int _{a}^{b}f(t)\nabla t = F(b) - F(a)\quad \mbox{for}\ a,b \in \mathbb{T}.$$
If f and f ∇ are continuous, then
One can easily see that every ld-continuous function has a nabla antiderivative. As in the case of the delta derivative, we see that if \(f: \mathbb{T} \rightarrow \mathbb{R}\) is ld-continuous and \(t \in \mathbb{T}\), then
$$\int _{\rho (t)}^{t}f(s)\nabla s =\nu (t)f(t).$$
Theorem 1.2.3
If \(a,b,c \in \mathbb{T}\) and \(\alpha,\beta \in \mathbb{R}\) , and \(f,g: \mathbb{T} \rightarrow \mathbb{R}\) are ld-continuous, then
1. \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\nabla t =\alpha \int \nolimits _{ a}^{b}f(t)\nabla t +\beta \int \nolimits _{ a}^{b}g(t)\nabla t\);
2. \(\int \nolimits _{a}^{b}f(t)\nabla t = -\int \nolimits _{b}^{a}f(t)\nabla t\);
3. \(\int \nolimits _{a}^{c}f(t)\nabla t =\int \nolimits _{ a}^{b}f(t)\nabla t +\int \nolimits _{ b}^{c}f(t)\nabla t\);
4. \(\left \vert \int _{a}^{b}f(t)\nabla t\right \vert \leq \int _{a}^{b}\left \vert f(t)\right \vert \nabla t\);
5. \(\int _{a}^{b}f(t)g^{\nabla }(t)\nabla t = \left.f(t)g(t)\right \vert _{a}^{b} -\int _{a}^{b}f^{\nabla }(t)g^{\rho }(t)\nabla t\);
6. \(\int _{a}^{b}f^{\rho }(t)g^{\nabla }(t)\nabla t = \left.f(t)g(t)\right \vert _{a}^{b} -\int _{a}^{b}f^{\nabla }(t)g(t)\nabla t\).
The relations between delta and nabla derivatives can be summarized as follows. Assume that \(f: \mathbb{T} \rightarrow \mathbb{R}\) is delta differentiable on \(\mathbb{T}^{k}\). Then f is nabla differentiable at t and
for \(t \in \mathbb{T}^{k}\) such that σ(ρ(t)) = t. If, in addition, f Δ is continuous on \(\mathbb{T}^{k}\), then f is nabla differentiable at t and \(f^{\nabla }(t) = f^{\varDelta }(\rho (t))\) holds for any \(t \in \mathbb{T}_{k}\). Assume that \(f: \mathbb{T} \rightarrow \mathbb{R}\) is nabla differentiable on \(\mathbb{T}_{k}\). Then f is delta differentiable at t and
for \(t \in \mathbb{T}_{k}\) such that ρ(σ(t)) = t. If, in addition, f ∇ is continuous on \(\mathbb{T}_{k}\), then f is delta differentiable at t and \(f^{\varDelta }(t) = f^{\nabla }(\sigma (t))\) holds for any \(t \in \mathbb{T}^{k}\).
We now give the definition of the generalized nabla exponential function. Assume that \(p: \mathbb{T} \rightarrow \mathbb{R}\) is ld-continuous and ν-regressive, i.e., 1 − p(t)ν(t) ≠ 0 for \(t \in \mathbb{T}_{k}\). We denote by \(\mathcal{R}_{\nu }\) the set of all ν-regressive and ld-continuous functions. We define the set \(\mathcal{R}_{\nu }^{+}\) of all positively regressive elements of \(\mathcal{R}_{\nu }\) by \(\mathcal{R}_{\nu }^{+} =\{ p \in \mathcal{R}_{\nu }: 1 -\nu (t)p(t) > 0\ \mbox{for all}\ t \in \mathbb{T}\}\). The set of all ν-regressive functions on a time scale \(\mathbb{T}\) forms an Abelian group under the addition \(\oplus _{\nu }\) defined by \(p \oplus _{\nu }q:= p + q -\nu pq\). The explicit nabla exponential function is given by
$$\check{e}_{p}(t,s) =\exp \left (\int _{s}^{t}\overline{\xi }_{\nu (\tau )}(p(\tau ))\nabla \tau \right )\quad \mbox{for}\ t,s \in \mathbb{T},$$
where \(\overline{\xi }_{h}(z)\) is the cylinder transformation, which is defined by
$$\overline{\xi }_{h}(z) = \left \{\begin{array}{ll} -\frac{\mathrm{Log}(1 - hz)} {h},&h\neq 0, \\ z, &h = 0.\end{array} \right.$$
For \(t \in \mathbb{T}\), \(s \in \mathbb{T}_{k},\) the exponential function \(\check{e}_{p}(t,s)\) is the solution of the initial value problem
$$y^{\nabla } = p(t)y,\quad y(s) = 1.$$
The following theorem gives the properties of the exponential function \(\check{e}_{p}(t,s)\). The theorem is adapted from Bohner and Peterson [52].
Theorem 1.2.4
If \(p,q \in \mathcal{R}_{\nu }\) and \(t_{0} \in \mathbb{T}\) , then
1. \(\check{e}_{p}(t,t) \equiv 1\quad \mbox{ and }\quad \check{e}_{0}(t,s) \equiv 1\);
2. \(\check{e}_{p}(\rho (t),s) = (1 -\nu (t)p(t))\check{e}_{p}(t,s)\);
3. \(\frac{1} {\check{e}_{p}(t,s)} =\check{ e}_{\ominus _{\nu }p}(t,s)\);
4. \(\frac{\check{e}_{p}(t,s)} {\check{e}_{q}(t,s)} =\check{ e}_{p\ominus _{\nu }q}(t,s)\);
5. \(\check{e}_{p}(t,s)\check{e}_{q}(t,s) =\check{ e}_{p\oplus _{\nu }q}(t,s)\);
6. if \(p \in \mathcal{R}_{\nu }^{+}\), then \(\check{e}_{p}(t,t_{0}) > 0\) for all \(t \in \mathbb{T}\);
7. \(\check{e}_{p}^{\nabla }(t,t_{0}) = p(t)\check{e}_{p}(t,t_{0})\);
8. \(\left ( \frac{1} {\check{e}_{p}(\cdot,s)}\right )^{\nabla } = - \frac{p(t)} {\check{e}_{p}^{\rho }(\cdot,s)}\).
1.3 Diamond-α Calculus
Now we introduce the diamond-α dynamic derivative and diamond-α dynamic integration. The comprehensive development of the calculus of the diamond-α derivative and diamond-α integration is given in [140]. Let \(\mathbb{T}\) be a time scale and f(t) be differentiable on \(\mathbb{T}\) in the Δ and ∇ sense. For \(t \in \mathbb{T}\), we define the diamond-α derivative \(f^{\diamond _{\alpha }}(t)\) by
$$f^{\diamond _{\alpha }}(t) =\alpha f^{\varDelta }(t) + (1-\alpha )f^{\nabla }(t),\quad 0 \leq \alpha \leq 1.$$
Thus f is diamond-α differentiable if and only if f is Δ and ∇ differentiable. The diamond-α derivative reduces to the standard Δ-derivative for α = 1, or the standard ∇ derivative for α = 0. It represents a weighted dynamic derivative for α ∈ (0, 1).
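On \(\mathbb{T} = \mathbb{Z}\) the diamond-α derivative is simply the weighted average of the forward and backward differences, and α = 1∕2 gives the classical central difference; a small sketch (names are ours):

```python
# Sketch: the diamond-alpha derivative on T = Z as the weighted average
# alpha * f^Delta + (1 - alpha) * f^nabla (names are ours).
def diamond(f, t, alpha):
    fwd = f(t + 1) - f(t)        # delta derivative on Z
    bwd = f(t) - f(t - 1)        # nabla derivative on Z
    return alpha * fwd + (1 - alpha) * bwd

f = lambda t: t ** 2
assert diamond(f, 5, 1.0) == 11   # pure delta: 2t + 1
assert diamond(f, 5, 0.0) == 9    # pure nabla: 2t - 1
assert diamond(f, 5, 0.5) == 10   # alpha = 1/2: the central difference 2t
```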
Theorem 1.3.1
Let f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) be diamond-α differentiable at \(t \in \mathbb{T}\) . Then
(i) \(f + g: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\), with
$$\displaystyle{ (f + g)^{\diamond _{\alpha }}(t) = f^{\diamond _{\alpha }}(t) + g^{\diamond _{\alpha }}(t). }$$
(ii) \(fg: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\), with
$$\displaystyle{ (fg)^{\diamond _{\alpha }}(t) = f^{\diamond _{\alpha }}(t)g(t) +\alpha f^{\sigma }(t)g^{\varDelta }(t) + (1-\alpha )f^{\rho }(t)g^{\nabla }(t). }$$
(iii) For \(g(t)g^{\sigma }(t)g^{\rho }(t)\neq 0\), \(f/g: \mathbb{T} \rightarrow \mathbb{R}\) is diamond-α differentiable at \(t \in \mathbb{T}\), with
$$\displaystyle{ \left (\frac{f} {g}\right )^{\diamond _{\alpha }}(t) = \frac{f^{\diamond _{\alpha }}(t)g^{\sigma }(t)g^{\rho }(t) -\alpha f^{\sigma }(t)g^{\rho }(t)g^{\varDelta }(t) - (1-\alpha )f^{\rho }(t)g^{\sigma }(t)g^{\nabla }(t)} {g(t)g^{\sigma }(t)g^{\rho }(t)}. }$$
Theorem 1.3.2
Let f, \(g: \mathbb{T} \rightarrow \mathbb{R}\) be diamond-α differentiable at \(t \in \mathbb{T}\) . Then the following hold:
(i) \((f)^{\diamond _{\alpha }\varDelta }(t) =\alpha f^{\varDelta \varDelta }(t) + (1-\alpha )f^{\nabla \varDelta }(t)\);
(ii) \((f)^{\diamond _{\alpha }\nabla }(t) =\alpha f^{\varDelta \nabla }(t) + (1-\alpha )f^{\nabla \nabla }(t)\);
(iii) \((f)^{\varDelta \diamond _{\alpha }}(t) =\alpha f^{\varDelta \varDelta }(t) + (1-\alpha )f^{\varDelta \nabla }(t)\neq (f)^{\diamond _{\alpha }\varDelta }(t)\);
(iv) \((f)^{\nabla \diamond _{\alpha }}(t) =\alpha f^{\nabla \varDelta }(t) + (1-\alpha )f^{\nabla \nabla }(t)\neq (f)^{\diamond _{\alpha }\nabla }(t)\);
(v) \((f)^{\diamond _{\alpha }\diamond _{\alpha }}(t) =\alpha ^{2}f^{\varDelta \varDelta }(t) +\alpha (1-\alpha )[f^{\varDelta \nabla }(t) + f^{\nabla \varDelta }(t)] + (1-\alpha )^{2}f^{\nabla \nabla }(t)\neq \alpha ^{2}f^{\varDelta \varDelta }(t) + (1-\alpha )^{2}f^{\nabla \nabla }(t)\).
Theorem 1.3.3
(Mean Value Theorem) . Suppose that f is a continuous function on \([a,b]_{\mathbb{T}}\) and has a diamond-α derivative at each point of \([a,b)_{\mathbb{T}}\) . Then there exist points η, \(\eta ^{\prime}\) such that
When f(a) = f(b), then we have that
Corollary 1.3.1
Let f be a continuous function on \([a,b]_{\mathbb{T}}\) that has a diamond-α derivative at each point of \([a,b)_{\mathbb{T}}\). Then f is increasing if \(f^{\diamond _{\alpha }}(t) > 0\), decreasing if \(f^{\diamond _{\alpha }}(t) < 0\), nonincreasing if \(f^{\diamond _{\alpha }}(t) \leq 0\), and nondecreasing if \(f^{\diamond _{\alpha }}(t) \geq 0\) on \([a,b]_{\mathbb{T}}\).
Theorem 1.3.4
Let \(a,t \in \mathbb{T}\) and \(h: \mathbb{T} \rightarrow \mathbb{R}\). Then the diamond-α integral from a to t of h is defined by
$$\int _{a}^{t}h(s)\diamond _{\alpha }s =\alpha \int _{a}^{t}h(s)\varDelta s + (1-\alpha )\int _{a}^{t}h(s)\nabla s,\quad 0 \leq \alpha \leq 1,$$
provided the delta and nabla integrals of h on \(\mathbb{T}\) exist.
In general, we do not have
$$\left (\int _{a}^{t}h(\tau )\diamond _{\alpha }\tau \right )^{\diamond _{\alpha }} = h(t),\quad t \in \mathbb{T}.$$
Example 1.3.1 ([31])
Let \(\mathbb{T} =\{ 0,1,2\}\), \(a = 0\), and \(h(t) = t^{2}\) for \(t \in \mathbb{T}\) . This gives us that
so that the equality above holds only when ♢ α = Δ or ♢ α = ∇.
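The computation in this example amounts to forming the two one-sided sums and taking their convex combination; a minimal sketch (names are ours):

```python
# Sketch: the diamond-alpha integral of h(t) = t^2 over T = {0, 1, 2}
# as the convex combination of the delta and nabla integrals (names are ours).
T = [0, 1, 2]
h = lambda t: t ** 2

delta_int = sum((T[i + 1] - T[i]) * h(T[i]) for i in range(len(T) - 1))  # mu * h at left points
nabla_int = sum((T[i] - T[i - 1]) * h(T[i]) for i in range(1, len(T)))   # nu * h at right points
diamond_int = lambda alpha: alpha * delta_int + (1 - alpha) * nabla_int

print(delta_int, nabla_int)   # -> 1 5: the two one-sided integrals differ
print(diamond_int(0.5))       # -> 3.0
```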
Theorem 1.3.5
Let \(a,b,c \in \mathbb{T}\), \(\alpha,\beta \in \mathbb{R}\) , and f and g be continuous functions on \([a,b]_{\mathbb{T}}\) . Then the following properties hold:
(1) \(\int \nolimits _{a}^{b}\left [\alpha f(t) +\beta g(t)\right ]\diamond _{\alpha }t =\alpha \int \nolimits _{ a}^{b}f(t)\diamond _{\alpha }t +\beta \int \nolimits _{ a}^{b}g(t)\diamond _{\alpha }t\);
(2) \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t = -\int \nolimits _{b}^{a}f(t)\diamond _{\alpha }t\);
(3) \(\int \nolimits _{a}^{c}f(t)\diamond _{\alpha }t =\int \nolimits _{ a}^{b}f(t)\diamond _{\alpha }t +\int \nolimits _{ b}^{c}f(t)\diamond _{\alpha }t\).
Example 1.3.2
If we let \(\mathbb{T} = \mathbb{R}\) , then we obtain
and if we let \(\mathbb{T} = \mathbb{Z}\) , and m < n, then we obtain
Example 1.3.3
If we let \(\mathbb{T} = q^{\mathbb{N}}\) , for q > 1 and m < n, then we obtain
and if we let \(\mathbb{T} =\{ t_{i}: i \in \mathbb{N}_{0}\}\) such that t i < t i+1 and m < n, then we obtain the general case (which includes (1.3.1) and (1.3.2))
Remark 1.3.1
Note that if f(t) ≥ 0 for all t ∈ \([a,b]_{\mathbb{T}}\) , then \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t \geq 0\) . If f(t) ≥ g(t) for all t ∈ \([a,b]_{\mathbb{T}}\) , then \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t \geq \int \nolimits _{a}^{b}g(t)\diamond _{\alpha }t\). Moreover, for continuous f(t) ≥ 0, f ≡ 0 if and only if \(\int \nolimits _{a}^{b}f(t)\diamond _{\alpha }t = 0\) .
Corollary 1.3.2
Let \(t \in \mathbb{T}_{k}^{k}\) and \(f: \mathbb{T} \rightarrow \mathbb{R}\) . Then
and
Recall that a function \(p: \mathbb{T} \rightarrow \mathbb{R}\) is called regressive provided 1 +μ(t)p(t) ≠ 0 for all \(t \in \mathbb{T}^{k}\), and \(\mathcal{R}\) denotes the set of all regressive and rd-continuous functions on \(\mathbb{T}\). Similarly, a function \(q: \mathbb{T} \rightarrow \mathbb{R}\) is called ν-regressive provided 1 −ν(t)q(t) ≠ 0 for all \(t \in \mathbb{T}_{k}\), and \(\mathcal{R}_{\nu }\) denotes the set of all ν-regressive and ld-continuous functions on \(\mathbb{T}\). We consider two functions, \(E_{p,\alpha }\) and \(e_{p,\alpha }\), where \(p \in \mathcal{R}\cap \mathcal{R}_{\nu }\) and \(\alpha \in \left [0,1\right ]\). For \(p \in \mathcal{R}\cap \mathcal{R}_{\nu }\) and \(\alpha \in \left [0,1\right ],\) we define
$$E_{p,\alpha }(t,t_{0}) =\alpha e_{p}(t,t_{0}) + (1-\alpha )\check{e}_{p}(t,t_{0}),$$
where e p (. , t 0) and \(\check{e}_{p}(.,t_{0})\) are the delta and nabla exponential functions defined as in (1.1.5) and (1.2.1), respectively.
Example 1.3.4 ([31])
Consider the time scale \(\mathbb{T} = \mathbb{Z}\) and the constant function \(p(t) = 1/2\) . Take t 0 = 0. Then, \(e_{p}(t,0) = (3/2)^{t}\) is the solution of the initial value problem \(y^{\varDelta }(t) = (1/2)y(t),\) y(t 0 ) = 1. Moreover \(\check{e}_{p}(t,0) = 2^{t}\) is the unique solution of \(y^{\nabla }(t) = (1/2)y(t)\) , y(t 0 ) = 1. Now \(E_{p,\alpha }(t;0) =\alpha (3/2)^{t} + (1-\alpha )(2)^{t}\) , for \(t \in \mathbb{Z}\) .
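The claims of this example can be checked with exact rational arithmetic; a small sketch (names are ours):

```python
# Sketch: checking Example 1.3.4 on T = Z with p(t) = 1/2, using exact
# rational arithmetic (names are ours).
from fractions import Fraction

half = Fraction(1, 2)
e = lambda t: Fraction(3, 2) ** t   # delta exponential e_p(t, 0) = (3/2)^t
E = lambda t: Fraction(2, 1) ** t   # nabla exponential = 2^t

assert e(0) == 1 and E(0) == 1      # initial conditions
for t in range(0, 6):
    assert e(t + 1) - e(t) == half * e(t)   # y^Delta(t) = (1/2) y(t)
    assert E(t) - E(t - 1) == half * E(t)   # y^nabla(t) = (1/2) y(t)
```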
Remark 1.3.2
A combined exponential cannot really be called an exponential function: it fails the most important property of an exponential function, namely, it is not the solution of an appropriate first-order initial value problem.
Next we give direct formulas for the ♢ α -derivatives of the exponential functions \(e_{p}(\cdot,t_{0})\) and \(\check{e}_{p}(\cdot,t_{0})\).
Theorem 1.3.6
Let \(\mathbb{T}\) be a regular time scale. Assume that t, \(t_{0} \in \mathbb{T}\) and \(p \in \mathcal{R}\cap \mathcal{R}_{\nu }\) . Then
where e p (.,t 0 ) is a solution of the initial value problem \(y^{\diamond _{\alpha }}(t) = q(t)y(t)\) , y(t 0 ) = 1, where \(q(t) =\alpha p(t) + \frac{\left (1-\alpha \right )p^{\rho }(t)} {1+\nu (t)p^{\rho }(t)}\) .
1.4 Taylor Monomials and Series
Here we define Taylor monomials and Taylor expansions of functions corresponding to delta and nabla derivatives. To define these functions, we need some basic definitions about the calculus of functions of two variables on time scales. Let \(\mathbb{T}_{1}\) and \(\mathbb{T}_{2}\) be two time scales with at least two points, and consider the time scale intervals \(\varOmega _{1} = [t_{0},\infty ) \cap \mathbb{T}_{1}\) and \(\varOmega _{2} = [s_{0},\infty ) \cap \mathbb{T}_{2}\) for \(t_{0} \in \mathbb{T}_{1}\) and \(s_{0} \in \mathbb{T}_{2}\). Let \(\sigma _{1},\rho _{1},\varDelta _{1}\) and \(\sigma _{2},\rho _{2},\varDelta _{2}\) denote the forward jump operators, backward jump operators, and delta differentiation operators, respectively, on \(\mathbb{T}_{1}\) and \(\mathbb{T}_{2}\). We say that a real-valued function f on \(\mathbb{T}_{1} \times \mathbb{T}_{2}\) at \((t,s) \in \varOmega \equiv \varOmega _{1} \times \varOmega _{2}\) has a \(\varDelta _{1}\) partial derivative \(f^{\varDelta _{1}}(t,s)\) with respect to t if for each ε > 0 there exists a neighborhood \(U_{t}\) of t such that
In this case, we say \(f^{\varDelta _{1}}(t,s)\) is the (partial delta) derivative of f(t, s) at t. We say that a real valued function f on \(\mathbb{T}_{1} \times \mathbb{T}_{2}\) at (t, s) ∈ Ω 1 ×Ω 2 has a Δ 2 partial derivative \(f^{\varDelta _{2}}(t,s)\) with respect to s if for each ε > 0 there exists a neighborhood U s of s such that
In this case, we say \(f^{\varDelta _{2}}(t,s)\) is the (partial delta) derivative of f(t, s) at s. The function f is called rd-continuous in t if for every \(\alpha _{2} \in \mathbb{T}_{2}\) the function f(t, α 2) is rd-continuous on \(\mathbb{T}_{1}\). The function f is called rd-continuous in s if for every \(\alpha _{1} \in \mathbb{T}_{1}\) the function f(α 1, s) is rd-continuous on \(\mathbb{T}_{2}\).
Theorem 1.4.1
Let \(t_{0} \in \mathbb{T}^{\kappa }\) and assume \(k: \mathbb{T} \times \mathbb{T}^{\kappa }\mapsto \mathbb{R}\) is continuous at (t,t), where \(t \in \mathbb{T}^{\kappa }\) with \(t > t_{0}\) . Also assume that k(t,⋅) is rd-continuous on \([t_{0},\sigma (t)]\). Suppose that for each ε > 0 there exists a neighborhood U of t, independent of \(\tau \in [t_{0},\sigma (t)]\) , such that
where k Δ denotes the derivative of k with respect to the first variable. Then
The Taylor monomials \(h_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), are defined recursively as follows. The function h 0 is defined by
$$h_{0}(t,s) \equiv 1\quad \mbox{for all}\ s,t \in \mathbb{T},$$
and given h k for \(k \in \mathbb{N}_{0}\), the function h k+1 is defined by
$$h_{k+1}(t,s) =\int _{s}^{t}h_{k}(\tau,s)\varDelta \tau \quad \mbox{for all}\ s,t \in \mathbb{T}.$$
If we let \(h_{k}^{\varDelta }(t,s)\) denote for each fixed \(s \in \mathbb{T}\), the derivative of h(t, s) with respect to t, then
for each fixed \(s \in \mathbb{T}\). The above definition obviously implies
In the following, we give some formulas of h k (t, s) as determined in [51]. In the case when \(\mathbb{T} = \mathbb{R}\), then
\[h_{k}(t,s) = \frac{(t - s)^{k}}{k!}.\]
In the case when \(\mathbb{T} = \mathbb{N}\), we see that
\[h_{k}(t,s) = \frac{(t - s)^{(k)}}{k!},\]
where \(t^{(k)} = t(t - 1)\cdots (t - k + 1)\) is the so-called falling function (see [100]). When \(\mathbb{T} =\{t: t = q^{n},\ n \in \mathbb{N},\ q > 1\}\), we have that
\[h_{k}(t,s) =\prod _{\nu =0}^{k-1} \frac{t - q^{\nu }s}{\sum _{\mu =0}^{\nu }q^{\mu }}.\]
If \(\mathbb{T} = h\mathbb{N}\), h > 0, we see that
\[h_{k}(t,s) = \frac{\prod _{\nu =0}^{k-1}(t - s -\nu h)}{k!}.\]
In general for t ≥ s, we have that h k (t, s) ≥ 0, and
\[h_{k}(t,s) \leq \frac{(t - s)^{k}}{k!}.\]
We also consider the Taylor monomials \(g_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), which are defined recursively. The function g 0 is defined by
\[g_{0}(t,s) = 1\quad \text{for all } s,t \in \mathbb{T},\]
and given g k for \(k \in \mathbb{N}_{0}\), the function g k+1 is defined by
\[g_{k+1}(t,s) =\int _{s}^{t}g_{k}(\sigma (\tau ),s)\,\varDelta \tau \quad \text{for all } s,t \in \mathbb{T}.\]
If we let \(g_{k}^{\varDelta }(t,s)\) denote, for each fixed \(s \in \mathbb{T}\), the derivative of g k (t, s) with respect to t, then
\[g_{k}^{\varDelta }(t,s) = g_{k-1}(\sigma (t),s)\quad \text{for } k \in \mathbb{N},\]
for each fixed \(s \in \mathbb{T}\). One can see that
\[g_{1}(t,s) = t - s\quad \text{for all } s,t \in \mathbb{T}.\]
We denote by \(C_{rd}^{(n)}(\mathbb{T})\) the space of all functions \(f \in C_{rd}(\mathbb{T})\) such that \(f^{\varDelta ^{i}} \in C_{rd}(\mathbb{T})\) for i = 0, 1, 2, …, n, where \(n \in \mathbb{N}\). For the function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we consider the second derivative \(f^{\varDelta ^{2}}\) provided \(f^{\varDelta }\) is delta differentiable on \(\mathbb{T}\) with derivative \(f^{\varDelta ^{2}} = (f^{\varDelta })^{\varDelta }\). Similarly, we define the nth order derivative \(f^{\varDelta ^{n}} = (f^{\varDelta ^{n-1}})^{\varDelta }\). Now, we give the definition of the generalized polynomials as follows:
\[h_{0}(t,s) = g_{0}(t,s) = 1,\qquad h_{n+1}(t,s) =\int _{s}^{t}h_{n}(\tau,s)\,\varDelta \tau,\]
and
\[g_{n+1}(t,s) =\int _{s}^{t}g_{n}(\sigma (\tau ),s)\,\varDelta \tau,\]
for all s, \(t \in \mathbb{T}\).
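To make the recursive definitions concrete, the following sketch (illustrative, not from the text) evaluates \(h_{k}\) and \(g_{k}\) on \(\mathbb{T} = \mathbb{Z}\), where the delta integral from s to t is a sum over τ = s, …, t − 1 and σ(τ) = τ + 1; the values of \(h_{k}\) are compared against the falling-factorial closed form:

```python
from math import factorial

def h(k, t, s):
    """h_{k+1}(t,s) = int_s^t h_k(tau,s) Delta tau; on Z this is a sum
    over tau = s, ..., t-1 (t >= s assumed)."""
    if k == 0:
        return 1
    return sum(h(k - 1, tau, s) for tau in range(s, t))

def g(k, t, s):
    """g_{k+1}(t,s) = int_s^t g_k(sigma(tau), s) Delta tau, sigma(tau) = tau + 1."""
    if k == 0:
        return 1
    return sum(g(k - 1, tau + 1, s) for tau in range(s, t))

def falling(x, k):
    """Falling factorial x^(k) = x(x-1)...(x-k+1)."""
    r = 1
    for i in range(k):
        r *= x - i
    return r

# closed form on Z: h_k(t,s) = (t-s)^(k) / k!
for k in range(5):
    for t in range(2, 9):
        assert h(k, t, 2) == falling(t - 2, k) // factorial(k)

print(h(3, 7, 2), g(3, 7, 2))  # 10 35
```

Note that \(g_{k}\) shifts the integration variable by σ, so it differs from \(h_{k}\) as soon as k ≥ 2.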
Property. Using induction it is easy to see that h n (t, s) ≥ 0 holds for all \(n \in \mathbb{N}\) and s, \(t \in \mathbb{T}\) with t ≥ s and \((-1)^{n}h_{n}(t,s) \geq 0\) holds for all \(n \in \mathbb{N}\) and s, \(t \in \mathbb{T}\) with t ≤ s. Moreover, h n (t, s) is increasing with respect to its first component for all t ≥ s. ■
Recall the following result (see [52]).
Lemma 1.4.1
For \(n \in \mathbb{N}\) and \(t \in \mathbb{T}\) , we have g n (t,s) = 0 for all \(s \in [\rho ^{n-1}(t),t]_{\mathbb{T}}\) .
Lemma 1.4.2
For \(n \in \mathbb{N}\), \(t \in \mathbb{T}\) and \(s \in \mathbb{T}^{\kappa ^{n} }\) , we have \(h_{n}(t,s) = (-1)^{n}g_{n}(s,t)\) .
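Lemma 1.4.2 can be checked numerically on \(\mathbb{T} = \mathbb{Z}\). The sketch below (illustrative, not from the text) encodes the delta integrals as sums, using \(\int _{s}^{t} = -\int _{t}^{s}\) when t < s so that both orderings of the arguments are covered:

```python
def h(k, t, s):
    """Delta Taylor monomial h_k on Z, valid for t >= s and t < s."""
    if k == 0:
        return 1
    if t >= s:
        return sum(h(k - 1, tau, s) for tau in range(s, t))
    return -sum(h(k - 1, tau, s) for tau in range(t, s))

def g(k, t, s):
    """Monomial g_k on Z, using sigma(tau) = tau + 1 inside the integral."""
    if k == 0:
        return 1
    if t >= s:
        return sum(g(k - 1, tau + 1, s) for tau in range(s, t))
    return -sum(g(k - 1, tau + 1, s) for tau in range(t, s))

# Lemma 1.4.2: h_n(t,s) = (-1)^n g_n(s,t)
for n in range(5):
    for t in range(0, 6):
        for s in range(0, 6):
            assert h(n, t, s) == (-1) ** n * g(n, s, t)
print("Lemma 1.4.2 holds on the sampled grid of Z")
```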
From Lemmas 1.4.1 and 1.4.2 we have the following result.
Lemma 1.4.3
For \(n \in \mathbb{N}\) and \(t \in \mathbb{T}\) , we have h n (t,s) = 0 for all \(s \in [\rho ^{n-1}(t),t]_{\mathbb{T}}\) .
Theorem 1.4.2
Let \(n \in \mathbb{N}\) and \(f \in \mathrm{ C}_{\mathrm{rd}}^{n}(\mathbb{T}, \mathbb{R})\) be an n times differentiable function. For \(s \in \mathbb{T}^{\kappa ^{n-1} }\) , we have
\[f(t) =\sum _{k=0}^{n-1}h_{k}(t,s)f^{\varDelta ^{k}}(s) +\int _{s}^{\rho ^{n-1}(t)}h_{n-1}(t,\sigma (\tau ))f^{\varDelta ^{n}}(\tau )\,\varDelta \tau.\]
Theorem 1.4.3
Assume that \(f \in C_{rd}^{(n)}(\mathbb{T})\) and \(s \in \mathbb{T}\) . Then
As a special case if m < n, then
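As a sanity check on the Taylor expansion (illustrative, not from the text), take \(\mathbb{T} = \mathbb{Z}\), where \(f^{\varDelta }\) is the forward difference. For a polynomial of degree < n the remainder involves \(f^{\varDelta ^{n}} = 0\), so the n-term expansion is exact:

```python
from math import factorial

def delta(f):
    """Delta derivative on T = Z: the forward difference operator."""
    return lambda t: f(t + 1) - f(t)

def h(k, t, s):
    """h_k(t,s) = (t-s)^(k)/k! on Z (falling factorial), t >= s."""
    num = 1
    for i in range(k):
        num *= t - s - i
    return num // factorial(k)

f = lambda t: t**3 - 2 * t + 1   # degree 3, so f^{Delta^4} = 0
s, n = 2, 4

derivs = [f]                      # f, f^Delta, f^{Delta^2}, f^{Delta^3}
for _ in range(n - 1):
    derivs.append(delta(derivs[-1]))

for t in range(s, s + 6):
    taylor = sum(h(k, t, s) * derivs[k](s) for k in range(n))
    assert taylor == f(t)         # expansion is exact: remainder vanishes
print("Taylor expansion exact for a cubic on Z")
```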
Now, we define the Taylor expansions of the functions corresponding to the nabla derivative. The generalized polynomials that will be used in describing these expansions are \(\hat{h}_{k}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}\), \(k \in \mathbb{N}_{0} = \mathbb{N}\cup \{0\}\), which are defined recursively as follows. The function \(\hat{h}_{0}\) is defined by
\[\hat{h}_{0}(t,s) = 1\quad \text{for all } s,t \in \mathbb{T},\]
and given \(\hat{h}_{k}\) for \(k \in \mathbb{N}_{0}\), the function \(\hat{h}_{k+1}\) is defined by
\[\hat{h}_{k+1}(t,s) =\int _{s}^{t}\hat{h}_{k}(\tau,s)\,\nabla \tau \quad \text{for all } s,t \in \mathbb{T}.\]
Note that the functions \(\hat{h}_{k}\) are all well defined. If we let \(\hat{h}_{k}^{\nabla }(t,s)\) denote, for each fixed \(s \in \mathbb{T}\), the nabla derivative of \(\hat{h}_{k}(t,s)\) with respect to t, then
\[\hat{h}_{k}^{\nabla }(t,s) =\hat{h}_{k-1}(t,s)\quad \text{for } k \in \mathbb{N},\]
for each fixed \(s \in \mathbb{T}\). The above definition obviously implies
\[\hat{h}_{1}(t,s) = t - s\quad \text{for all } s,t \in \mathbb{T}.\]
Finding the \(\hat{h}_{k}\) for k > 1 is not an easy task in general. However, for a particular given time scale it might be easy to find these functions. We will consider some examples first before we present Taylor’s formula in general. In the case when \(\mathbb{T} = \mathbb{R}\), then \(\rho (t) = t\) and
\[\hat{h}_{k}(t,s) = \frac{(t - s)^{k}}{k!}.\]
In the case when \(\mathbb{T} = \mathbb{N}\), we see that \(\rho (t) = t - 1\), ν(t) = 1, \(y^{\nabla }(t) = \nabla y(t) = y(t) - y(t - 1)\), and
\[\hat{h}_{k}(t,s) = \frac{(t - s)^{\overline{k}}}{k!},\]
where \(t^{\overline{k}} = t(t + 1)\cdots (t + k - 1)\) is the so-called rising function (see [100]). Noting that \(\nabla t^{\overline{k}} = k\,t^{\overline{k-1}}\), we see that
\[\hat{h}_{k}^{\nabla }(t,s) =\hat{h}_{k-1}(t,s)\]
for k = 1, 2, …, t > s. In the case when \(\mathbb{T} =\{t: t = q^{n},\ n \in \mathbb{N},\ q > 1\}\), we have \(\rho (t) = t/q\) and \(\nu (t) = (q - 1)t/q\).
In general for t ≥ s, we have that \(\hat{h}_{k}(t,s) \geq 0\), and
\[\hat{h}_{k}(t,s) \geq \frac{(t - s)^{k}}{k!}.\]
We may also relate the functions \(\hat{h}_{k}\) and \(\hat{g}_{k}\) for the nabla derivative to the functions h k and g k for the delta derivative.
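On \(\mathbb{T} = \mathbb{Z}\) the nabla integral from s to t is a sum over τ = s + 1, …, t, so the recursion for \(\hat{h}_{k}\) can be evaluated directly and compared with the rising-factorial closed form \(\hat{h}_{k}(t,s) = (t-s)^{\overline{k}}/k!\) (the closed form here is the standard formula for \(\mathbb{T} = \mathbb{Z}\), stated as an assumption for this sketch):

```python
from math import factorial

def hhat(k, t, s):
    """Nabla monomial on Z: hhat_{k+1}(t,s) = int_s^t hhat_k(tau,s) nabla tau,
    i.e. a sum over tau = s+1, ..., t (t >= s assumed)."""
    if k == 0:
        return 1
    return sum(hhat(k - 1, tau, s) for tau in range(s + 1, t + 1))

def rising(x, k):
    """Rising factorial x(x+1)...(x+k-1)."""
    r = 1
    for i in range(k):
        r *= x + i
    return r

# assumed closed form on Z: hhat_k(t,s) = (t-s)(t-s+1)...(t-s+k-1) / k!
for k in range(5):
    for s in range(0, 4):
        for t in range(s, s + 6):
            assert hhat(k, t, s) == rising(t - s, k) // factorial(k)
print("nabla monomials match the rising-factorial closed form on Z")
```

Contrast with the delta case: the delta monomials on \(\mathbb{Z}\) use the falling factorial, the nabla monomials the rising one.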
Definition 1.4.1
For t, \(s \in \mathbb{T}\) define the functions
\[\hat{h}_{0}(t,s) =\hat{g}_{0}(t,s) = 1,\]
and given h n , g n , \(\hat{h}_{n}\) and \(\hat{g}_{n}\) for \(n \in \mathbb{N}_{0}\), defined by the recursions
\[\hat{h}_{n+1}(t,s) =\int _{s}^{t}\hat{h}_{n}(\tau,s)\,\nabla \tau,\qquad \hat{g}_{n+1}(t,s) =\int _{s}^{t}\hat{g}_{n}(\rho (\tau ),s)\,\nabla \tau,\]
we have that
\[\hat{h}_{n}(t,s) = (-1)^{n}h_{n}(s,t)\quad \text{and}\quad \hat{g}_{n}(t,s) = (-1)^{n}g_{n}(s,t).\]
We denote by \(C_{ld}^{(n)}(\mathbb{T})\) the space of all functions \(f \in C_{ld}(\mathbb{T})\) such that \(f^{\nabla ^{i}} \in C_{ld}(\mathbb{T})\) for i = 0, 1, 2, …, n, where \(n \in \mathbb{N}\). For the function \(f: \mathbb{T} \rightarrow \mathbb{R}\), we consider the second derivative \(f^{\nabla ^{2} }\) provided \(f^{\nabla }\) is nabla differentiable on \(\mathbb{T}\) with derivative \(f^{\nabla ^{2}} = (f^{\nabla })^{\nabla }\). Similarly, we define the nth order nabla derivative \(f^{\nabla ^{n} } = (f^{\nabla ^{n-1} })^{\nabla }\).
Theorem 1.4.4
Let \(n \in \mathbb{N}\) . Suppose that the function f is such that \(f^{\nabla ^{n+1} }\) is ld-continuous on \(\mathbb{T}_{\kappa ^{n+1}}\) . Let \(s \in \mathbb{T}_{\kappa ^{n}}\), \(t \in \mathbb{T}\) , and let \(\hat{h}_{k}\) be the nabla Taylor monomials defined above. Then, we have
\[f(t) =\sum _{k=0}^{n}\hat{h}_{k}(t,s)f^{\nabla ^{k}}(s) +\int _{s}^{t}\hat{h}_{n}(t,\rho (\tau ))f^{\nabla ^{n+1}}(\tau )\,\nabla \tau.\]
We end this section with the time scale version of L’Hôpital’s rule . We present the rule for delta and nabla derivatives.
Theorem 1.4.5
Assume that f and g are Δ-differentiable on \(\mathbb{T}\) and let \(t_{0} \in \mathbb{T} \cup \{\infty \}\) . If \(t_{0} \in \mathbb{T}\) , assume that t 0 is left-dense. Furthermore, assume that \(\lim _{t\rightarrow t_{0}^{-}}f(t) =\lim _{t\rightarrow t_{0}^{-}}g(t) = 0,\) and suppose that there exists \(\varepsilon > 0\) with g(t)g Δ (t) > 0 for all \(t \in L_{\varepsilon }(t_{0}) =\{ t \in \mathbb{T}: 0 < t_{0} - t <\varepsilon \}\) . Then
\[\lim _{t\rightarrow t_{0}^{-}}\frac{f(t)}{g(t)} =\lim _{t\rightarrow t_{0}^{-}}\frac{f^{\varDelta }(t)}{g^{\varDelta }(t)},\]
provided the limit on the right-hand side exists.
Theorem 1.4.6
Assume that f and g are ∇-differentiable on \(\mathbb{T}\) and let \(t_{0} \in \mathbb{T} \cup \{-\infty \}\) . If \(t_{0} \in \mathbb{T}\) , assume that t 0 is right-dense. Furthermore, assume that \(\lim _{t\rightarrow t_{0}^{+}}f(t) =\lim _{t\rightarrow t_{0}^{+}}g(t) = 0,\) and suppose that there exists \(\varepsilon > 0\) with \(g(t)g^{\nabla }(t) > 0\) for all \(t \in R_{\varepsilon }(t_{0}) =\{ t \in \mathbb{T}: 0 < t - t_{0} <\varepsilon \}\) . Then
\[\lim _{t\rightarrow t_{0}^{+}}\frac{f(t)}{g(t)} =\lim _{t\rightarrow t_{0}^{+}}\frac{f^{\nabla }(t)}{g^{\nabla }(t)},\]
provided the limit on the right-hand side exists.
Bibliography
D. R. Anderson, J. Bullock, L. Erbe, A. Peterson, and H. Tran, Nabla dynamic equations on time scales, Panamer. Math. J., 13:1, 1–47, (2003).
N. Atasever, On Diamond-Alpha Dynamic Equations and Inequalities, (2011). Georgia Southern University, USA, Electronic Theses & Dissertations. Paper 667
M. Bohner and A. Peterson. Dynamic equations on time scales. Birkhäuser Boston Inc., Boston, MA, 2001.
M. Bohner and A. Peterson. Advances in dynamic equations on time scales. Birkhäuser, Boston, 2003.
W. G. Kelley and A. C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, New York, 1991.
Q. Sheng, M. Fadag, J. Henderson and J. M. Davis, An exploration of combined dynamic derivatives on time scales and their applications, Nonlinear Anal. Real World Appl. 7, 395–413, (2006).
© 2014 Springer International Publishing Switzerland

Agarwal, R., O’Regan, D., Saker, S. (2014). Preliminaries. In: Dynamic Inequalities On Time Scales. Springer, Cham. https://doi.org/10.1007/978-3-319-11002-8_1. Print ISBN: 978-3-319-11001-1; Online ISBN: 978-3-319-11002-8.