Abstract
This chapter focuses on what we call the calculus on a mixed time scale, whose elements we will define in terms of a point α and two linear functions. There has been recent interest in mixed time scales; see Auch [37, 38], Auch et al. [39], Estes [34, 78], Erbe et al. [76], and Mert [145].
5.1 Introduction
5.2 Basic Mixed Time Scale Calculus
In this section, we introduce some fundamental concepts and properties concerning what we will call a mixed time scale. Throughout this chapter we assume a, b are constants satisfying
We will use two linear functions to define our so-called mixed time scale. First we let \(\sigma: \mathbb{R} \rightarrow \mathbb{R}\) be defined by
Then we define the linear function ρ to be the inverse function of \(\sigma\), that is
We call \(\sigma\) the forward jump operator and \(\rho\) the backward jump operator. For these two functions only, we use the following notation. For n ≥ 1 we define the function \(\sigma ^{n}\) recursively by
where \(\sigma ^{0}(t):= t\), and
where \(\rho ^{0}(t):= t\). We now define our mixed time scale \(\mathbb{T}_{\alpha }\), where for simplicity we always assume α ≥ 0:
By Exercise 5.1, we have that
Usually the domains of \(\sigma\) and ρ will be either \(\mathbb{R}\) or \(\mathbb{T}_{\alpha }.\)
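The construction above can be made concrete with a short numerical sketch. The code below assumes \(\sigma (t) = at + b\) and \(\rho =\sigma ^{-1}\), with the sample values a = 2, b = 1, α = 1 taken from Example 5.39; the helper names are ours, not part of the text.

```python
# A minimal sketch, assuming sigma(t) = a*t + b and rho = sigma^{-1}
# (sample parameters from Example 5.39: a = 2, b = 1 give sigma(t) = 2t + 1).
a, b, alpha = 2.0, 1.0, 1.0          # assumed sample parameters

def sigma(t):                        # forward jump operator (assumed form)
    return a * t + b

def rho(t):                          # backward jump operator, the inverse of sigma
    return (t - b) / a

def T_alpha(num_forward=5, num_backward=3):
    """Return a few points of the mixed time scale T_alpha around alpha."""
    pts = [alpha]
    t = alpha
    for _ in range(num_forward):     # sigma^n(alpha), n = 1, 2, ...
        t = sigma(t)
        pts.append(t)
    t = alpha
    for _ in range(num_backward):    # rho^n(alpha), n = 1, 2, ...
        t = rho(t)
        pts.insert(0, t)
    return pts

# For a = 2, b = 1, alpha = 1 this prints
# [-0.75, -0.5, 0.0, 1.0, 3.0, 7.0, 15.0, 31.0, 63.0]
print(T_alpha())
```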
Theorem 5.1.
If a > 1, then
Proof.
Since \(\sigma ^{n}(\alpha ) \geq 0> \frac{b} {1-a}\) for all n ≥ 0, it remains to show that \(\rho ^{n}(\alpha )> \frac{b} {1-a}\) for all n ≥ 1. We prove this by induction. First, for the base case note that
Now assume n ≥ 1 and \(\rho ^{n}(\alpha )> \frac{b} {1-a}\). Then it follows that
□
Note that the above theorem does not hold if a = 1. Also note that when a > 1, \(\mathbb{T}_{\alpha }\) is not a closed set (see Theorem 5.6 (iii)).
Definition 5.2.
For \(c,d \in \mathbb{T}_{\alpha }\) such that d ≥ c, we define
We define \(\mathbb{T}_{(c,d)}, \mathbb{T}_{(c,d]},\) and \(\mathbb{T}_{[c,d)}\) similarly. Additionally, we may use the notation \(\mathbb{T}_{c}^{d}\), where \(\mathbb{T}_{c}^{d}:= \mathbb{T}_{[c,d]}.\)
Definition 5.3.
We define a forward graininess function, μ, by
In the following theorem we give some properties of the graininess function μ.
Theorem 5.4.
For \(t \in \mathbb{T}_{\alpha }\) and \(n \in \mathbb{N}_{0}\) , the following hold:
-
(i)
μ(t) > 0;
-
(ii)
\(\mu (\sigma ^{n}(t)) = a^{n}\mu (t);\)
-
(iii)
\(\mu (\rho ^{n}(t)) = a^{-n}\mu (t).\)
Proof.
We just prove (iii) and leave the rest of the proof (see Exercise 5.2) to the reader. To see that (iii) holds consider for \(t \in \mathbb{T}_{\alpha }\) the base case
Now assume n ≥ 1 and \(\mu (\rho ^{n}(t)) = a^{-n}\mu (t)\) for \(t \in \mathbb{T}_{\alpha }.\) Then using the induction assumption we get for \(t \in \mathbb{T}_{\alpha }\)
Hence, (iii) holds. □
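Properties (ii) and (iii) are easy to check numerically. The sketch below again assumes \(\sigma (t) = at + b\) and takes \(\mu (t) =\sigma (t) - t\); both are assumptions carried over from the sample parameters of Example 5.39.

```python
# Numerical check of Theorem 5.4 (ii)-(iii), assuming sigma(t) = a*t + b
# and mu(t) = sigma(t) - t (both assumed forms, with sample parameters a = 2, b = 1).
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def rho(t):   return (t - b) / a
def mu(t):    return sigma(t) - t           # forward graininess (assumed)

def iterate(f, t, n):
    for _ in range(n):
        t = f(t)
    return t

t, n = 3.0, 4                                # t = sigma(alpha) for alpha = 1
print(mu(iterate(sigma, t, n)), a**n * mu(t))    # property (ii): 64.0  64.0
print(mu(iterate(rho, t, n)),   a**(-n) * mu(t)) # property (iii): 0.25  0.25
```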
Theorem 5.5 (Properties of Forward Jump Operator).
Given \(m,n \in \mathbb{N}_{0}\) and \(t \in \mathbb{T}_{\alpha }\) , the following properties hold:
-
(i)
for n ≥ 1, \(\sigma ^{n}(t) = a^{n}t + b\sum \limits _{j=0}^{n-1}a^{j};\)
-
(ii)
if m > n, \(\sigma ^{m}(t)>\sigma ^{n}(t);\)
-
(iii)
if t > 0, \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty.\)
Proof.
We will only prove (i) and (iii) here. First we will prove property (i) by an induction argument. The base case clearly holds. Assume that n ≥ 1 and \(\sigma ^{n}(t) = a^{n}t +\sum \limits _{ j=0}^{n-1}a^{j}b\). It follows that
This completes the proof of (i).
Next we prove that (iii) holds. First, consider the subcase in which a > 1. Then
Since t > 0 and a > 1, we have that \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty\). Next, consider the case in which a = 1. Then b > 0, and
Since b > 0, we have that \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty\). This completes the proof of (iii). □
Theorem 5.6 (Properties of Backward Jump Operator).
Given positive integers m, n, and \(t \in \mathbb{T}_{\alpha }\) , the following properties hold:
-
(i)
\(\rho ^{n}(t) = a^{-n}\left (t -\sum \limits _{j=0}^{n-1}a^{j}b\right );\)
-
(ii)
if m > n, then \(\rho ^{m}(t) <\rho ^{n}(t);\)
-
(iii)
\(\lim \limits _{n\rightarrow \infty }\rho ^{n}(t) = -\infty\) if a = 1 and \(= \frac{b} {1-a}\) if a > 1.
Proof.
We will just prove (i) holds (see Exercise 5.3 for parts (ii) and (iii)). So, first we note that
Assume that n ≥ 1 and \(\rho ^{n}(t) = a^{-n}\left (t -\sum \limits _{j=0}^{n-1}a^{j}b\right )\) holds. Then
This completes the proof of (i) by induction. □
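The closed forms in part (i) of Theorems 5.5 and 5.6 can be compared against direct iteration, and the limit in Theorem 5.6 (iii) observed, with a brief sketch (same assumed σ and sample parameters as before).

```python
# Checking the closed forms in Theorems 5.5 (i) and 5.6 (i), under the
# assumption sigma(t) = a*t + b, rho(t) = (t - b)/a, with a = 2, b = 1.
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def rho(t):   return (t - b) / a

def sigma_n(t, n):                 # closed form from Theorem 5.5 (i)
    return a**n * t + b * sum(a**j for j in range(n))

def rho_n(t, n):                   # closed form from Theorem 5.6 (i)
    return a**(-n) * (t - b * sum(a**j for j in range(n)))

t = 7.0
for n in range(1, 5):
    s, r = t, t
    for _ in range(n):
        s, r = sigma(s), rho(r)
    assert abs(s - sigma_n(t, n)) < 1e-9 and abs(r - rho_n(t, n)) < 1e-9

# Theorem 5.6 (iii): for a > 1, rho^n(t) decreases to b/(1 - a)
print(rho_n(t, 40), b / (1 - a))   # both approximately -1.0
```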
We now define a function N(t, s) whose value, when \(s,t \in \mathbb{T}_{\alpha }\) with s ≤ t, gives the cardinality, \(card(\mathbb{T}_{[s,t)})\), of the set \(\mathbb{T}_{[s,t)}\) (see Theorem 5.8).
Definition 5.7.
For a > 1, we define the function \(N: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{Z}\) by
For simplicity, we will use the notation N(t): = N(t, α), for \(t \in \mathbb{T}_{\alpha }.\)
As presented in Estes [34, 78], some properties of the function N are given in the following theorem.
Theorem 5.8.
Assume a > 1 and \(t,s,r \in \mathbb{T}_{\alpha }\) . Then the following hold:
-
(i)
N(t,t) = 0;
-
(ii)
\(N(t,s) = card(\mathbb{T}_{[s,t)})\) , if s ≤ t;
-
(iii)
\(N(s,t) = -N(t,s);\)
-
(iv)
\(N(t,s) = N(t,r) + N(r,s).\)
Proof.
Since
we have that (i) holds. To see that (ii) holds, let \(s,t \in \mathbb{T}_{\alpha }\) with s ≤ t. If \(k = card(\mathbb{T}_{[s,t)})\), then \(t =\sigma ^{k}(s)\), and so we have that
To see that (iii) holds, consider
The proof of (iv) is Exercise 5.4. □
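Since \(\mathbb{T}_{[s,t)}\) is a finite set, N(t, s) can be computed for s ≤ t simply by iterating σ. The following sketch does exactly that and spot-checks the additivity property (iv); the function name N and the parameter values are illustrative only.

```python
# A sketch of N(t, s) = card(T_[s, t)), counted by iterating sigma(t) = a*t + b
# (assumed form, sample parameters a = 2, b = 1).
a, b = 2.0, 1.0

def sigma(t): return a * t + b

def N(t, s):
    """Number of points of T_alpha in [s, t) when s <= t (Theorem 5.8 (ii))."""
    count = 0
    while s < t - 1e-9:
        s = sigma(s)
        count += 1
    return count

alpha = 1.0
r = sigma(sigma(alpha))            # r = sigma^2(alpha) = 7
t = sigma(sigma(r))                # t = sigma^2(r) = 31
print(N(t, alpha), N(t, r) + N(r, alpha))   # Theorem 5.8 (iv): both equal 4
```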
5.3 Discrete Difference Calculus
In this section, we define a difference operator on our mixed time scale \(\mathbb{T}_{\alpha }\) and study its properties. Note that if a = q > 1 and b = 0, then D is the q-difference operator (see Chap. 4), and if \(a = b = 1\), then D is the forward difference operator.
Definition 5.9.
Given \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), the mixed time scale difference operator is defined by
Theorem 5.10 (Properties of Difference Operator).
Let \(f,g: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(\alpha \in [0,\infty )\) be given. Then for \(t \in \mathbb{T}_{\alpha }\) the following hold:
-
(i)
Dα = 0;
-
(ii)
\(D(\alpha f(t)) =\alpha Df(t);\)
-
(iii)
\(D(f(t) + g(t)) = Df(t) + Dg(t);\)
-
(iv)
\(D(f(t)g(t)) = f(\sigma (t))Dg(t) + (Df(t))g(t);\)
-
(v)
\(D(f(t)g(t)) = f(t)Dg(t) + (Df(t))g(\sigma (t));\)
-
(vi)
\(D\left (\dfrac{f(t)} {g(t)}\right ) = \dfrac{g(t)Df(t) - (Dg(t))f(t)} {g(t)g(\sigma (t))}\) if \(g(t)g(\sigma (t))\neq 0\) .
Proof.
Since \(D\alpha = \dfrac{\alpha -\alpha } {\mu (t)} = 0\) we have that (i) holds. Also
so (ii) holds. To see that (iii) holds note that
The proof of property (iv) is left to the reader. Property (v) follows from (iv) by interchanging f(t) and g(t). Finally, property (vi) follows from the following:
And this completes the proof. □
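A quick numerical spot check of the product rule (iv) is given below; it assumes \(Df(t) = \frac{f(\sigma (t)) - f(t)}{\mu (t)}\), which is consistent with the computation \(D\alpha = \frac{\alpha -\alpha } {\mu (t)} = 0\) in the proof above.

```python
# Spot check of the product rule, Theorem 5.10 (iv), with the assumed difference
# operator Df(t) = (f(sigma(t)) - f(t)) / mu(t) and sigma(t) = a*t + b.
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def D(f):
    return lambda t: (f(sigma(t)) - f(t)) / mu(t)

f = lambda t: t * t
g = lambda t: 3.0 * t + 2.0

t = 7.0
lhs = D(lambda u: f(u) * g(u))(t)
rhs = f(sigma(t)) * D(g)(t) + D(f)(t) * g(t)
print(lhs, rhs)                    # both equal 1181.0
```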
5.4 Discrete Delta Integral
In this section, we will define the integral of a function defined on the mixed time scale \(\mathbb{T}_{\alpha }\). We will develop several properties of this integral, including the two fundamental theorems for the calculus on mixed time scales.
Definition 5.11.
Let \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c,d \in \mathbb{T}_{\alpha }\) be given. Then
Theorem 5.12 (Properties of Integral).
Given \(f,g: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c,d,l \in \mathbb{T}_{\alpha }\) , the following properties hold:
-
(i)
\(\int _{c}^{d}f(t)Dt = -\int _{d}^{c}f(t)Dt;\)
-
(ii)
\(\int _{c}^{d}\alpha f(t)Dt =\alpha \int _{ c}^{d}f(t)Dt;\)
-
(iii)
\(\int _{c}^{d}(f(t) + g(t))Dt =\int _{ c}^{d}f(t)Dt +\int _{ c}^{d}g(t)Dt;\)
-
(iv)
\(\int _{c}^{c}f(t)Dt = 0;\)
-
(v)
\(\int _{c}^{d}f(t)Dt =\int _{ c}^{l}f(t)Dt +\int _{ l}^{d}f(t)Dt;\)
-
(vi)
if d ≥ c, then \(\left \vert \int _{c}^{d}f(t)Dt\right \vert \leq \int _{c}^{d}\vert f(t)\vert Dt;\)
-
(vii)
if d ≥ c and \(f(t) \geq g(t)\) for \(t \in \mathbb{T}_{[c,d)}\), then \(\int _{c}^{d}f(t)Dt \geq \int _{c}^{d}g(t)Dt\).
Proof.
These properties follow from properties of summations. As an example, we will just prove property (vi). To this end, we note that
□
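As the proof notes, these properties reduce to properties of finite sums. The sketch below evaluates the integral as the weighted sum \(\sum _{t\in \mathbb{T}_{[c,d)}}\mu (t)f(t)\) for c ≤ d (an assumed, Riemann-sum reading of Definition 5.11, whose display is not restated here) and illustrates property (vi).

```python
# A sketch of the delta integral as a finite weighted sum over T_[c, d),
# assuming int_c^d f(t) Dt = sum of mu(t)*f(t), and sigma(t) = a*t + b.
import math

a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def integral(f, c, d):
    if d < c:
        return -integral(f, d, c)          # property (i)
    total, t = 0.0, c
    while t < d - 1e-9:
        total += mu(t) * f(t)
        t = sigma(t)
    return total

f = lambda t: math.cos(t)                  # a sign-changing integrand
c, d = 1.0, 31.0
# property (vi): |int f| <= int |f|
print(abs(integral(f, c, d)), integral(lambda t: abs(f(t)), c, d))
```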
Definition 5.13.
Assume \(c,d \in \mathbb{T}_{\alpha }\) with c < d, and let \(f: \mathbb{T}_{[c,d]} \rightarrow \mathbb{R}\) be given. We say F is an antidifference of f on \(\mathbb{T}_{[c,d]}\) provided DF(t) = f(t) for all \(t \in \mathbb{T}_{[c,\rho (d)]}\).
The following theorem shows that every function \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) has an antidifference on \(\mathbb{T}_{\alpha }.\)
Theorem 5.14 (Fundamental Theorem of Difference Calculus: Part II).
Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c \in \mathbb{T}_{\alpha }\) . If we define \(F: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) by \(F(t) =\int _{ c}^{t}f(s)Ds\) , then F is an antidifference of f on \(\mathbb{T}_{\alpha }.\)
Proof.
Let F be defined as in the statement of this theorem. Then for \(t \in \mathbb{T}_{\alpha },\)
which is what we wanted to show. □
Theorem 5.15.
Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and F is an antidifference of f on \(\mathbb{T}_{\alpha }\) . Then a general antidifference of f on \(\mathbb{T}_{\alpha }\) is given by
where C is an arbitrary constant.
Proof.
Let F be an antidifference of f on \(\mathbb{T}_{\alpha }\). Set \(G(t) = F(t) + C\) for \(t \in \mathbb{T}_{\alpha },\) where C is a constant. Then
Conversely, assume G(t) is any antidifference of f on \(\mathbb{T}_{\alpha }.\) Then
From Exercise 5.6, there is a constant C so that
Hence,
as desired. □
Definition 5.16.
We define the indefinite integral as follows:
where F(t) is any antidifference of f(t).
Theorem 5.17 (Fundamental Theorem of Difference Calculus: Part I).
Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c,d \in \mathbb{T}_{\alpha }\) . Then, if F is any antidifference of f on \(\mathbb{T}_{\alpha }\) , it follows that
Proof.
Put
By Theorem 5.14 G(t) is an antidifference of f(t) on \(\mathbb{T}_{\alpha }\). Let F(t) be any fixed antidifference of f(t) on \(\mathbb{T}_{\alpha }\). Then by Theorem 5.15 we have that
It follows that
□
Remark 5.18.
Note that the Fundamental Theorem of Calculus tells us that given \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), a point \(t_{0} \in \mathbb{T}_{\alpha }\), and a real number C, the unique solution of the IVP
is given by \(y(t) =\int _{ t_{0}}^{t}f(s)Ds + C\), for \(t \in \mathbb{T}_{\alpha }.\)
The integration by parts formulas in the next theorem are very useful.
Theorem 5.19 (Integration by Parts).
Given two functions \(u,v: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) , if \(c,d \in \mathbb{T}_{\alpha }\) , c < d, then
and
Proof.
By the product rule
Using the fundamental theorem of calculus, we get
It follows that
This proves the first integration by parts formula. Interchanging u(t) and v(t) leads to the second integration by parts formula. □
5.5 Falling and Rising Functions
In this section, we define the falling and rising functions for the mixed time scale \(\mathbb{T}_{\alpha }\), which are analogous to the falling and rising functions for the delta calculus in Chap. 1. Several properties of these functions will be given, including the appropriate power rule.
First we define the appropriate rising and falling functions for the mixed time scale calculus.
Definition 5.20.
Assume \(n \in \mathbb{N}\). We define the rising function, \(t^{\overline{n}}\), read “t to the n rising,” by
for \(t \in \mathbb{R}.\) We also define the falling function, \(t^{\underline{n}},\) read “t to the n falling,” by
for \(t \in \mathbb{R}.\)
Definition 5.21.
For \(n \in \mathbb{Z}\), we define the a-square bracket of n by
Theorem 5.22 (Properties of a-Square Bracket of n).
For \(n \in \mathbb{Z}\) , and a ≥ 1,
-
(i)
\([0]_{a} = 0;\)
-
(ii)
\([1]_{a} = 1;\)
-
(iii)
\([n]_{a} + a^{n} = [n + 1]_{a};\)
-
(iv)
\(a[n]_{a} + 1 = [n + 1]_{a};\)
-
(v)
\([-n]_{a} = -\dfrac{[n]_{a}} {a^{n}}.\)
Proof.
To see that (iii) holds for \(a> 1\), note that
Also (iii) trivially holds for a = 1.
To see that (iv) holds for a > 1 note that
Also for a = 1 we have that
Property (v) holds for a > 1, since
Furthermore, (v) is trivially true for a = 1. □
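For a > 1 the recursions in Theorem 5.22 are satisfied by the closed form \([n]_{a} =\sum _{j=0}^{n-1}a^{j} = \frac{a^{n}-1}{a-1}\) (compare Theorem 5.5 (i) and Theorem 5.23 (i)), and by Theorem 5.26 (iii) below one then has \(\{n\}_{a} = [n]_{a}/a^{n-1}\). The sketch below uses these closed forms, treated here as assumptions, and checks properties (iii)–(v).

```python
# Closed forms (assumed) for the a-square bracket and a-bracket:
#   [n]_a = (a^n - 1)/(a - 1) for a > 1 (= n when a = 1),  {n}_a = [n]_a / a^(n-1).
def sq_bracket(n, a):
    if a == 1:
        return float(n)
    return (a**n - 1.0) / (a - 1.0)

def bracket(n, a):
    return sq_bracket(n, a) / a**(n - 1)

a = 3.0
for n in range(0, 6):
    assert abs(sq_bracket(n, a) + a**n - sq_bracket(n + 1, a)) < 1e-9      # (iii)
    assert abs(a * sq_bracket(n, a) + 1 - sq_bracket(n + 1, a)) < 1e-9     # (iv)
    assert abs(sq_bracket(-n, a) + sq_bracket(n, a) / a**n) < 1e-9         # (v)
print(sq_bracket(4, a), bracket(4, a))   # [4]_3 = 40.0, {4}_3 = 40/27
```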
We may use the a-square bracket function to simplify the expressions that we found for the forward and backward jump operators.
Theorem 5.23.
For \(n \in \mathbb{N}\) the following hold:
-
(i)
\(\sigma ^{n}(t) = a^{n}t + [n]_{a}b;\)
-
(ii)
\(\rho ^{n}(t) = a^{-n}t + [-n]_{a}b;\)
-
(iii)
\(\sigma ^{n}(t) - t = [n]_{a}\mu (t).\)
Proof.
In order to prove property (i), we have by part (i) of Theorem 5.5 that
Similarly part (i) of Theorem 5.6 gives us that (ii) holds. Finally, using property (i) we have that
and hence (iii) holds. □
Next we prove a power rule.
Theorem 5.24 (Power Rule).
For \(n \in \mathbb{N}\) the following holds:
Proof.
For \(t \in \mathbb{R}\) we have that
where in the last step we used part (iii) of Theorem 5.23. □
Definition 5.25.
For \(n \in \mathbb{Z}\) and a ≥ 1, we define the a-bracket of n by
The following theorem gives us several properties of the a-bracket function.
Theorem 5.26.
The following hold:
-
(i)
\(\{0\}_{a} = 0;\)
-
(ii)
\(\{1\}_{a} = 1;\)
-
(iii)
\(\{n\}_{a} = \dfrac{[n]_{a}} {a^{n-1}};\)
-
(iv)
\(\{n\}_{a} = -a[-n]_{a};\)
-
(v)
\(\sigma (t) -\rho ^{n-1}(t) =\{ n\}_{a}\mu (t).\)
Proof.
We will just prove part (iv) holds when a > 1. This follows from
where the first equality is by part (iii) of this theorem and the second equality is by part (v) of Theorem 5.22. □
Theorem 5.27 (Power Rule).
For \(n \in \mathbb{N}\) the following holds:
Proof.
To establish the result, we calculate
with the last equality by part (v) of Theorem 5.26. □
5.6 Discrete Taylor’s Theorem
In this section we will develop Taylor’s Theorem for functions on \(\mathbb{T}_{\alpha }\). First we define the Taylor monomials for the mixed time scale \(\mathbb{T}_{\alpha }\) as follows.
Definition 5.28.
We define the Taylor monomials for the mixed time scale \(\mathbb{T}_{\alpha }\) as follows. First put \(h_{0}(t,s) = 1\) for \(t,s \in \mathbb{T}_{\alpha }\). Then for each \(n \in \mathbb{N}_{1}\) we recursively define \(h_{n}(t,s)\), for each fixed \(s \in \mathbb{T}_{\alpha }\), to be the unique solution (see Remark 5.18) of the IVP
In the next theorem we derive a formula for \(h_{n}(t,s)\) (see Erbe et al. [76]).
Theorem 5.29.
The Taylor monomials, \(h_{n}(t,s)\), \(n \in \mathbb{N}_{0}\) , for the mixed time scale \(\mathbb{T}_{\alpha }\) are given by
for \(t,s \in \mathbb{T}_{\alpha }.\)
Proof.
For \(n \in \mathbb{N}_{0}\), let
We prove by induction on n, for \(n \in \mathbb{N}_{0},\) that \(f_{n}(t,s) = h_{n}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). By our convention on products \(f_{0}(t,s) = 1 = h_{0}(t,s),\) and it is easy to see that \(f_{1}(t,s) = t - s = h_{1}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). Assume n ≥ 1 and \(f_{k}(t,s) = h_{k}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\), 0 ≤ k ≤ n. It remains to show that \(f_{n+1}(t,s) = h_{n+1}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). First, note that
by the induction hypothesis. Fix \(s \in \mathbb{T}_{\alpha }\), then using the product rule
Since, for each fixed s, \(y(t) = f_{n+1}(t,s)\) solves the IVP
we have by Remark 5.18 that \(h_{n+1}(t,s) = f_{n+1}(t,s)\) for \(t \in \mathbb{T}_{\alpha }\). Finally, notice that since \(s \in \mathbb{T}_{\alpha }\) is arbitrary we conclude that \(h_{n+1}(t,s) = f_{n+1}(t,s)\) for all \(t,s \in \mathbb{T}_{\alpha }.\) □
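Definition 5.28 also gives a direct way to compute \(h_{n}(t,s)\) for t ≥ s without any closed form: repeatedly integrate, since \(Dh_{n+1}(\cdot,s) = h_{n}(\cdot,s)\) and \(h_{n+1}(s,s) = 0\) force \(h_{n+1}(t,s) =\int _{s}^{t}h_{n}(\tau,s)D\tau\). The sketch below does this with the assumed σ and sum-integral from the earlier sketches.

```python
# Recursive sketch of the Taylor monomials h_n(t, s) for t >= s, using only
# Definition 5.28; sigma(t) = a*t + b and the Riemann-sum integral are assumptions.
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def integral(f, c, d):
    total, t = 0.0, c
    while t < d - 1e-9:
        total += mu(t) * f(t)
        t = sigma(t)
    return total

def h(n, t, s):
    if n == 0:
        return 1.0
    return integral(lambda u: h(n - 1, u, s), s, t)

s = 1.0
t = sigma(sigma(sigma(s)))         # t = sigma^3(s) = 15
print(h(1, t, s), t - s)           # h_1(t, s) = t - s = 14.0
print(h(2, t, s))                  # h_2(t, s) = 56.0, computed recursively
```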
Definition 5.30.
For \(n \in \mathbb{N}_{0}\), we define the a-falling-bracket (of n) factorial, denoted by \(\{n\}_{a}!\), recursively by \(\{0\}_{a}! = 1\) and for \(n \in \mathbb{N}_{1}\)
Definition 5.31.
For \(n \in \mathbb{N}_{0}\), we define the a-rising-bracket (of n) factorial, denoted by \([n]_{a}!\), recursively by \([0]_{a}! = 1\) and for \(n \in \mathbb{N}_{1}\)
The following theorem is a generalization of the binomial expansion of \((t - t)^{n} = 0\), \(n \in \mathbb{N}_{1}.\)
Theorem 5.32 (Estes [34, 78]).
Assume n ≥ 1 and \(t \in \mathbb{T}_{\alpha }\) . Then
Proof.
For \(n \in \mathbb{N}_{1}\), consider
We will prove by induction on n that f n (t) = 0. For the base case n = 1 we have
Assume \(n \in \mathbb{N}_{1}\) and f n (t) = 0. It remains to show \(f_{n+1}(t) = 0\). Using the product rule
Since \(Df_{n+1}(t) = 0\), we have that \(f_{n+1}(t) = C\), with \(t \in \mathbb{T}_{\alpha }\), for some constant C. Note that \(f_{n+1}(t)\) can be expanded to a polynomial in t, and that each term of the sum
is divisible by t. Thus, the polynomial expansion of \(f_{n+1}(t)\) has no constant term and by the polynomial principle, C = 0. We have shown that \(f_{n+1}(t) = 0\). This completes the proof by induction. □
Next we prove an alternate formula for the Taylor monomials due to Estes [34, 78].
Theorem 5.33.
Assume \(n \in \mathbb{N}_{0}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then
Proof.
Fix s and let
We will show by induction that \(f_{n}(t,s) = h_{n}(t,s)\). The base case \(f_{0}(t,s) = h_{0}(t,s)\) follows from the definitions. Assume that \(n \in \mathbb{N}_{1}\) and \(f_{n}(t,s) = h_{n}(t,s)\). From Theorem 5.32, we know that \(f_{n}(s,s) = 0\). Also
Hence, for each fixed \(s \in \mathbb{T}_{\alpha }\), \(y(t) = f_{n+1}(t,s)\) solves the IVP
So, by the uniqueness of solutions to IVPs (see Remark 5.18), we have that \(f_{n+1}(t,s) = h_{n+1}(t,s)\) for each fixed \(s \in \mathbb{T}_{\alpha }.\) This completes the proof by induction. □
Later we will need to take the mixed time scale difference of \(h_{n}(t,s)\) with respect to its second variable. To do this we now introduce the type-two Taylor monomials, \(g_{n}(t,s)\), \(n \in \mathbb{N}_{0}.\)
Definition 5.34.
We define the type-two Taylor monomials \(g_{n}: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), for \(n \in \mathbb{N}_{0}\), recursively as follows:
and for each \(n \in \mathbb{N}_{1}\), \(g_{n}(t,s)\), for each fixed \(s \in \mathbb{T}_{\alpha }\), is the unique solution (see Remark 5.18) of the IVP
In the next theorem we give two different formulas for the type-two Taylor monomials.
Theorem 5.35.
The type-two Taylor monomials are given by
for \(t,s \in \mathbb{T}_{\alpha }.\)
Proof.
We prove by induction on n that \(h_{n}(s,t) = g_{n}(t,s),\) \(t,s \in \mathbb{T}_{\alpha }\), \(n \in \mathbb{N}_{0}\). Obviously this holds for n = 0. Assume n ≥ 0 and h n (s, t) = g n (t, s) for \(t,s \in \mathbb{T}_{\alpha }\). Fix \(s \in \mathbb{T}_{\alpha }\) and consider
for \(t \in \mathbb{T}_{\alpha }.\) Also, by Theorem 5.32, \(h_{n+1}(s,s) = 0.\) So, \(y(t) = h_{n+1}(s,t)\) satisfies for each fixed s the same IVP
as \(g_{n+1}(t,s)\). Hence, by the uniqueness (see Remark 5.18) of solutions to IVPs
for \(t \in \mathbb{T}_{\alpha }\). □
We now can state our power rule as follows.
Theorem 5.36.
Assume \(n \in \mathbb{N}_{0}\) . Then for each fixed \(s \in \mathbb{T}_{\alpha }\)
and
for \(t \in \mathbb{T}_{\alpha }.\)
Proof.
By the definition (Definition 5.28) of h n (t, s) we have for each fixed \(s \in \mathbb{T}_{\alpha }\) that \(Dh_{n+1}(t,s) = h_{n}(t,s)\) for all \(t \in \mathbb{T}_{\alpha }.\) To see that \(Dh_{n+1}(s,t) = -h_{n}(\sigma (t),s)\) for \(t \in \mathbb{T}_{\alpha }\), note that by Theorem 5.33
which completes the proof. □
Theorem 5.37 (Taylor’s Formula).
Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) , \(n \in \mathbb{N}_{0}\) and \(s \in \mathbb{T}_{\alpha }\) . Then
where the n-th degree Taylor polynomial, \(p_{n}(t,s)\), based at s, is given by
and the remainder term, \(R_{n}(t,s)\), based at s, is given by
Proof.
We prove Taylor’s Formula by induction. For n = 0 we have
Solving for f(t) we get the desired result
Now assume that n ≥ 0 and \(f(t) = p_{n}(t,s) + R_{n}(t,s)\), for \(t \in \mathbb{T}_{\alpha }\). Then integrating by parts we obtain
Solving for f(t) we obtain the desired result
This completes the proof by induction. □
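As a concrete spot check of Taylor's Formula, take \(f(t) = t^{2}\): one computes \(Df(t) =\sigma (t) + t\) and \(D^{2}f(t) = a + 1\), so \(D^{3}f = 0\), the remainder \(R_{2}\) vanishes, and \(f(t) = p_{2}(t,s)\) on \(\mathbb{T}_{\alpha }\). The sketch below verifies this numerically, reusing the hedged helpers (assumed σ, difference operator, sum-integral) from the earlier sketches.

```python
# Spot check of Taylor's Formula for f(t) = t^2: since D^3 f = 0, f = p_2(., s).
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t
def D(f):     return lambda t: (f(sigma(t)) - f(t)) / mu(t)

def integral(f, c, d):
    total, t = 0.0, c
    while t < d - 1e-9:
        total += mu(t) * f(t)
        t = sigma(t)
    return total

def h(n, t, s):
    return 1.0 if n == 0 else integral(lambda u: h(n - 1, u, s), s, t)

f = lambda t: t * t
s = 1.0
coeffs = [f(s), D(f)(s), D(D(f))(s)]              # D^k f(s), k = 0, 1, 2
t = s
for _ in range(5):
    t = sigma(t)
    p2 = sum(coeffs[k] * h(k, t, s) for k in range(3))
    assert abs(p2 - f(t)) < 1e-6                  # f(t) = p_2(t, s) exactly
print("degree-2 Taylor polynomial reproduces t^2 on T_alpha")
```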
We can now use Taylor’s Theorem to prove the following variation of constants formula.
Theorem 5.38 (Variation of Constants Formula).
Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(s \in \mathbb{T}_{\alpha }\) . Then the unique solution of the IVP
where \(C_{k}\), 0 ≤ k ≤ n − 1, are given constants, is given by
for \(t \in \mathbb{T}_{\alpha }.\)
Proof.
It is easy to see that the given IVP has a unique solution y(t). By Taylor’s Formula (see Theorem 5.37) applied to y(t) with n replaced by n − 1, we get
for \(t \in \mathbb{T}_{\alpha }.\) □
Example 5.39.
Consider the mixed time scale where α = 1 and \(\sigma (t) = 2t + 1\) (so a = 2 and b = 1). Use the variation of constants formula in Theorem 5.38 to solve the IVP
By the variation of constants formula in Theorem 5.38, we have that
Integrating by parts we calculate
for \(t \in \mathbb{T}_{1}.\) The reader can check this result by integrating both sides of the equation \(D^{2}y(t) = t - 1\) twice from 1 to t.
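The suggested check can also be carried out numerically: integrate the right-hand side twice from 1 to t and confirm that \(D^{2}y(t) = t - 1\). In the sketch below the initial values C0 = y(1) and C1 = Dy(1) are hypothetical placeholders, since the example's initial conditions are not restated here; α = 1, a = 2, b = 1 as in Example 5.39.

```python
# Double integration check for D^2 y(t) = t - 1 on T_1 (a = 2, b = 1 assumed).
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def integral(f, c, d):
    total, t = 0.0, c
    while t < d - 1e-9:
        total += mu(t) * f(t)
        t = sigma(t)
    return total

C0, C1 = 0.0, 0.0                                  # hypothetical initial conditions
f = lambda t: t - 1.0                              # right-hand side D^2 y(t) = t - 1
Dy = lambda t: C1 + integral(f, 1.0, t)            # first integration
y  = lambda t: C0 + integral(Dy, 1.0, t)           # second integration

t = 15.0                                           # = sigma^3(1)
check = (Dy(sigma(t)) - Dy(t)) / mu(t)             # D^2 y(t)
print(check, f(t))                                 # both equal 14.0
```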
5.7 Exponential Function
In this section we define the exponential function on a mixed time scale and give several of its properties. First we define the set of regressive functions by
Definition 5.40.
The mixed time scale exponential function based at \(s \in \mathbb{T}_{\alpha }\), denoted by \(e_{p}(t,s)\), where \(p \in \mathcal{R}\), is defined to be the unique solution, y(t), of the initial value problem
In the next theorem we give a formula for \(e_{p}(t,s)\).
Theorem 5.41.
Assume \(p \in \mathcal{R}\) and \(s \in \mathbb{T}_{\alpha }.\) Then
Proof.
It suffices to show (see Remark 5.18) that y(t) = e p (t, s) as defined in the statement of this theorem satisfies the IVP
It is clear that e p (s, s) = 1. It remains to show that \(De_{p}(t,s) = p(t)e_{p}(t,s)\). Consider the case that \(t =\sigma ^{k}(s)\) for k ≥ 1. Then
Consider the case when t = s. Then,
Consider the case when t = ρ(s). In this case it follows that
Finally, consider the case when \(t =\rho ^{k}(s)\) for k ≥ 2. In this final case it then holds that
which completes the proof. □
Next we define an addition on \(\mathcal{R}.\)
Definition 5.42.
We define the circle plus addition, ⊕, on the set of regressive functions \(\mathcal{R}\) on the mixed time scale \(\mathbb{T}_{\alpha }\) by
Similar to the proof of Theorem 4.16, we can prove the following theorem.
Theorem 5.43.
The set of regressive functions \(\mathcal{R}\) with the addition ⊕ is an Abelian group.
As in Chap. 4, the additive inverse of a function \(p \in \mathcal{R}\) is given by
We then define the circle minus subtraction, ⊖, on \(\mathcal{R}\) by
It follows that
In the following theorem we give several properties of the exponential function \(e_{p}(t,s)\).
Theorem 5.44.
Let \(t,s,r \in \mathbb{T}_{\alpha }\) and \(p,l \in \mathcal{R}\) . Then the following properties hold:
-
(i)
\(e_{0}(t,s) = 1;\)
-
(ii)
\(e_{p}(s,s) = 1;\)
-
(iii)
\(De_{p}(t,s) = p(t)e_{p}(t,s);\)
-
(iv)
\(e_{p}(\sigma (t),s) = \left [1 + p(t)\mu (t)\right ]e_{p}(t,s);\)
-
(v)
\(e_{p}(\rho (t),s) = \dfrac{e_{p}(t,s)} {\left [1 + p\left (\rho (t)\right )\mu \left (\rho (t)\right )\right ]};\)
-
(vi)
\(\mbox{ if $1 + p(t)\mu (t)> 0$ for all $t \in \mathbb{T}_{\alpha }$ then }e_{p}(t,s)> 0;\)
-
(vii)
\(e_{p}(t,s)e_{p}(s,r) = e_{p}(t,r);\)
-
(viii)
\(e_{p}(s,t) = \dfrac{1} {e_{p}(t,s)} = e_{\ominus p}(t,s);\)
-
(ix)
\(e_{p}(t,s)e_{l}(t,s) = e_{p\oplus l}(t,s);\)
-
(x)
\(\dfrac{e_{l}(t,s)} {e_{p}(t,s)} = e_{l\ominus p}(t,s).\)
Proof.
The proof of this theorem is very similar to the proof of Theorem 4.18. Here we will just prove the first half of part (viii). Consider the case t > s. Then
When t = s, it follows that \(e_{p}(s,s) = \frac{1} {e_{p}(s,s)} = 1.\) Finally, consider the case when t < s. Then
which was to be shown. □
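Properties (ii) and (iv) already determine \(e_{p}(t,s)\) for t ≥ s by recursion, which gives a simple way to experiment with these identities. The sketch below builds \(e_{p}\) that way and checks property (ix), assuming the usual circle plus \((p \oplus l)(t) = p(t) + l(t) + p(t)l(t)\mu (t)\) (an assumption on our part for Definition 5.42, whose display is not restated here).

```python
# e_p(t, s) built from e_p(s, s) = 1 and property (iv):
# e_p(sigma(t), s) = (1 + p(t) mu(t)) e_p(t, s).  sigma(t) = a*t + b is assumed.
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def exp_ts(p, t, s):
    """e_p(t, s) for t >= s in T_alpha, p a function with 1 + p*mu != 0."""
    val, u = 1.0, s
    while u < t - 1e-9:
        val *= 1.0 + p(u) * mu(u)
        u = sigma(u)
    return val

p = lambda t: 0.1
l = lambda t: 0.05
s = 1.0
t = sigma(sigma(sigma(s)))
# property (ix): e_p * e_l = e_{p (+) l}, with the assumed circle-plus formula
plus = lambda t: p(t) + l(t) + p(t) * l(t) * mu(t)
print(exp_ts(p, t, s) * exp_ts(l, t, s), exp_ts(plus, t, s))   # equal
```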
Next we define the scalar dot multiplication, \(\odot\), on the set of positively regressive functions \(\mathcal{R}^{+}:=\{ p \in \mathcal{R}: 1 +\mu (t)p(t)> 0\text{, }t \in \mathbb{T}_{\alpha }\}.\)
Definition 5.45.
We define the scalar dot multiplication, \(\odot\), on \(\mathcal{R}^{+}\) by
Similar to the proof of Theorem 4.21 we can prove the following theorem.
Theorem 5.46.
If \(\alpha \in \mathbb{R}\) and \(p \in \mathcal{R}^{+}\) , then
for \(t \in \mathbb{T}_{\alpha }.\)
Then similar to the proof of Theorem 4.22, we get the following result.
Theorem 5.47.
The set of positively regressive functions \(\mathcal{R}^{+}\) on a mixed time scale with the addition ⊕ and the scalar multiplication \(\odot\) is a vector space.
5.8 Trigonometric Functions
In this section, we use the exponential function defined in the previous section to define the hyperbolic and trigonometric functions for the mixed time scale.
Definition 5.48.
For \(\pm p \in \mathcal{R}\), we define the mixed time scale hyperbolic cosine function \(\cosh _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by
Definition 5.49.
Likewise we define the mixed time scale hyperbolic sine function \(\sinh _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by
Similar to the proof of Theorem 4.24 one can prove the following theorem.
Theorem 5.50.
Assume \(\pm p \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then the following properties hold:
-
(i)
\(\cosh _{p}(s,s) = 1;\)
-
(ii)
\(\sinh _{p}(s,s) = 0;\)
-
(iii)
\(\cosh _{-p}(t,s) =\cosh _{p}(t,s);\)
-
(iv)
\(\sinh _{-p}(t,s) = -\sinh _{p}(t,s);\)
-
(v)
\(D\cosh _{p}(t,s) = p(t)\sinh _{p}(t,s);\)
-
(vi)
\(D\sinh _{p}(t,s) = p(t)\cosh _{p}(t,s);\)
-
(vii)
\(\cosh _{p}^{2}(t,s) -\sinh _{p}^{2}(t,s) = e_{-\mu p^{2}}(t,s).\)
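A numerical sanity check of property (vii) is possible if one takes \(\cosh _{p} = \frac{e_{p}+e_{-p}}{2}\) and \(\sinh _{p} = \frac{e_{p}-e_{-p}}{2}\), as in Chap. 4; we treat these expressions for Definitions 5.48 and 5.49 as assumptions here, since the displays are not restated, but they are consistent with the identity in (vii).

```python
# Hyperbolic functions from e_p (assumed definitions), checking property (vii).
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def exp_ts(p, t, s):
    val, u = 1.0, s
    while u < t - 1e-9:
        val *= 1.0 + p(u) * mu(u)
        u = sigma(u)
    return val

p = lambda t: 0.1
cosh_p = lambda t, s: 0.5 * (exp_ts(p, t, s) + exp_ts(lambda u: -p(u), t, s))
sinh_p = lambda t, s: 0.5 * (exp_ts(p, t, s) - exp_ts(lambda u: -p(u), t, s))

s = 1.0
t = sigma(sigma(sigma(s)))
lhs = cosh_p(t, s) ** 2 - sinh_p(t, s) ** 2
rhs = exp_ts(lambda u: -mu(u) * p(u) ** 2, t, s)
print(lhs, rhs)                     # property (vii): the two agree
```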
Definition 5.51.
Assume \(\pm ip \in \mathcal{R}\). Then we define the mixed time scale cosine function \(\cos _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by
Definition 5.52.
We define the mixed time scale sine function \(\sin _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by
Similar to the proof of Theorem 4.27 one can prove the following theorem.
Theorem 5.53.
Assume \(\pm ip \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }.\) Then the following properties hold:
-
(i)
\(\cos _{p}(s,s) = 1;\)
-
(ii)
\(\sin _{p}(s,s) = 0;\)
-
(iii)
\(\cos _{-p}(t,s) =\cos _{p}(t,s);\)
-
(iv)
\(\sin _{-p}(t,s) = -\sin _{p}(t,s);\)
-
(v)
\(D\cos _{p}(t,s) = -p(t)\sin _{p}(t,s);\)
-
(vi)
\(D\sin _{p}(t,s) = p(t)\cos _{p}(t,s);\)
-
(vii)
\(\cos _{p}^{2}(t,s) +\sin _{ p}^{2}(t,s) = e_{\mu p^{2}}(t,s).\)
Similar to the proof of Theorem 4.26 one can prove the following theorem.
Theorem 5.54.
Assume \(\pm ip \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then the following properties hold:
-
(i)
\(\sin _{ip}(t,s) = i\sinh _{p}(t,s);\)
-
(ii)
\(\cos _{ip}(t,s) =\cosh _{p}(t,s);\)
-
(iii)
\(\sinh _{ip}(t,s) = i\sin _{p}(t,s);\)
-
(iv)
\(\cosh _{ip}(t,s) =\cos _{p}(t,s),\)
for \(t \in \mathbb{T}_{\alpha }.\)
It is easy to prove the following theorem.
Theorem 5.55.
If \(p \in \mathcal{R}\) , then a general solution of
is given by
Theorem 5.56.
Assume \(t,s \in \mathbb{T}_{\alpha }\) and p is a constant. Then the following Taylor series converge on \(\mathbb{T}_{[s,\infty )}\).
-
(i)
\(e_{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{n}h_{n}(t,s),\quad \mbox{ if}\quad p \in \mathcal{R};\)
-
(ii)
\(\sin _{p}(t,s) =\sum \limits _{ n=0}^{\infty }(-1)^{n}p^{2n+1}h_{2n+1}(t,s),\quad \mbox{ if}\quad \pm ip \in \mathcal{R};\)
-
(iii)
\(\cos _{p}(t,s) =\sum \limits _{ n=0}^{\infty }(-1)^{n}p^{2n}h_{2n}(t,s),\quad \mbox{ if}\quad \pm ip \in \mathcal{R};\)
-
(iv)
\(\sinh _{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{2n+1}h_{2n+1}(t,s),\quad \mbox{ if}\quad \pm p \in \mathcal{R};\)
-
(v)
\(\cosh _{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{2n}h_{2n}(t,s),\quad \mbox{ if}\quad \pm p \in \mathcal{R}.\)
Proof.
Fix \(t \in \mathbb{T}_{[s,\infty )}\). Then for some k ≥ 0, it holds that \(t =\sigma ^{k}(s) \geq s\). Let \(M =\max \{ \left \vert e_{p}(\tau,s)\right \vert: \tau \in \mathbb{T}_{[s,t]}\}\). Then for n ≥ 1 we have
Now if m ≥ k, we have that
Note that since m ≥ k, the product in the above expression contains the factor
Thus, for all m ≥ k,
Hence, by Taylor’s Formula (Theorem 5.37) the Taylor series for \(e_{p}(t,s)\) converges for any \(t \in \mathbb{T}_{[s,\infty )}\). The remainder of this theorem follows from the fact that the functions \(\cos _{p}(t,s)\), \(\sin _{p}(t,s)\), \(\cosh _{p}(t,s)\), and \(\sinh _{p}(t,s)\) are defined in terms of appropriate exponential functions. □
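For a constant p and a point \(t =\sigma ^{k}(s)\) the series in (i) actually terminates, since \(h_{n}(t,s) = 0\) once n > k, so the identity can be checked exactly. The sketch below does so with the assumed helpers from the earlier sketches.

```python
# Theorem 5.56 (i) for constant p: sum_n p^n h_n(t, s) equals e_p(t, s) on T_[s, inf).
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def integral(f, c, d):
    total, t = 0.0, c
    while t < d - 1e-9:
        total += mu(t) * f(t)
        t = sigma(t)
    return total

def h(n, t, s):
    return 1.0 if n == 0 else integral(lambda u: h(n - 1, u, s), s, t)

def exp_ts(p, t, s):                                 # p a constant here
    val, u = 1.0, s
    while u < t - 1e-9:
        val *= 1.0 + p * mu(u)
        u = sigma(u)
    return val

p, s = 0.05, 1.0
t = sigma(sigma(sigma(s)))                           # t = sigma^3(s)
series = sum(p**n * h(n, t, s) for n in range(6))    # terms with n > 3 vanish
print(series, exp_ts(p, t, s))                       # partial sum equals e_p(t, s)
```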
Theorem 5.57.
Fix \(s \in \mathbb{T}_{\alpha }\) . Then the Taylor series for each of the functions in Theorem 5.56 converges on \(\mathbb{T}_{(-\infty,s)}\) when \(\vert p\vert <\frac{a} {\mu (s)}\) .
Proof.
Let \(s \in \mathbb{T}_{\alpha }\) be fixed. We will first prove that if \(\vert p\vert <\frac{a} {\mu (s)}\), then the Taylor series for \(e_{p}(t,s)\) converges for each \(t \in \mathbb{T}_{(-\infty,s)}.\) Fix \(t \in \mathbb{T}_{(-\infty,s)}.\) We claim that for each a ≥ 1
First we prove (5.4) for a = 1. This follows from the following calculations:
Next we prove (5.4) for a > 1. To this end consider
Now consider the remainder term
It follows that
If we let
then
Using (5.4) and \(\vert p\vert <\frac{a} {\mu (s)}\), there is a number r and a positive integer N so that
It follows that
Therefore, by Taylor’s Formula,
for \(t \in \mathbb{T}_{(-\infty,s)}.\)
The remainder of this theorem follows from the fact that the functions \(\cos _{p}(t,s)\), \(\sin _{p}(t,s)\), \(\cosh _{p}(t,s)\), and \(\sinh _{p}(t,s)\) are defined in terms of appropriate exponential functions. □
Theorem 5.58.
For fixed \(t,s \in \mathbb{T}_{\alpha }\) , the power series
converges for \(\vert x\vert <\dfrac{a} {\mu (s)}.\)
Proof.
First, consider the power series
We will perform the ratio test with this series:
Since \(\lim \limits _{n\rightarrow \infty } \dfrac{t} {[n + 1]_{a}} = 0\), we have that
So, A(x) converges when \(\vert x\vert <\dfrac{a} {b}\). Next, consider the power series
Again, we perform the ratio test. Then
So B(x) converges when \(\vert x\vert <\dfrac{a} {\mu (s)}\). Note that μ(s) > b, so \(\dfrac{a} {\mu (s)} <\dfrac{a} {b}\) for all s. Now, f(x) = A(x)B(x). So, f(x) converges when \(\vert x\vert <\dfrac{a} {\mu (s)}\). □
5.9 The Laplace Transform
Most of the results in this section are due to Auch et al. [39]. In this chapter when discussing Laplace transforms we assume that \(r \in \mathbb{T}_{\alpha }\) satisfies r ≥ α ≥ 0, and we let \(\mathbb{T}_{r} =\{ t \geq r: t \in \mathbb{T}_{\alpha }\}.\) Also we let \(\mathcal{R}^{c}\) denote the set of regressive complex constants.
Definition 5.59.
If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\), then we define the discrete Laplace transform of f based at \(r \in \mathbb{T}_{\alpha }\) by
where \(\mathcal{L}_{r}\{f\}: \mathcal{R}^{c} \rightarrow \mathbb{C}\).
Definition 5.60.
We say that a function \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) is of exponential order k > 0 if for every fixed \(r \in \mathbb{T}_{\alpha }\), there exists a constant M > 0 such that
for all sufficiently large \(t \in \mathbb{T}_{r}\).
Theorem 5.61.
Suppose \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0. Then \(\mathcal{L}_{r}\{f\}(s)\) exists for |s| > k.
Proof.
Since f is of exponential order k > 0, there is a constant M > 0 and a \(T \in \mathbb{T}_{r}\) such that \(\vert f(t)\vert \leq Me_{k}(t,r)\) for t ≥ T. Pick \(N \in \mathbb{N}_{0}\) such that \(t =\sigma ^{N}(r)\). Then we have
We will show that this sum converges absolutely for | s | > k by the ratio test. We have
Hence the sum converges absolutely when | s | > k, and therefore \(\mathcal{L}_{r}\{f\}(s)\) converges if | s | > k. □
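The transform can also be explored numerically. The sketch below assumes the kernel form \(\mathcal{L}_{r}\{f\}(s) =\int _{r}^{\infty }e_{\ominus s}(\sigma (t),r)f(t)Dt\) (our reading of Definition 5.59, whose display is not restated here but which is consistent with the computation in the proof of Theorem 5.65) and checks \(\mathcal{L}_{r}\{c\}(s) = c/s\) from Theorem 5.72 by truncating the integral.

```python
# Numerical sketch of the discrete Laplace transform under the assumed kernel
# e_{(-)s}(sigma(t), r) = prod of 1/(1 + s*mu(.)) over the points up to and including t.
a, b = 2.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

def laplace(f, s, r, terms=60):
    """Truncated sum approximating L_r{f}(s) (assumed kernel form)."""
    total, t, kernel = 0.0, r, 1.0
    for _ in range(terms):
        kernel /= 1.0 + s * mu(t)          # e_{(-)s}(sigma(t), r) at the current point
        total += kernel * f(t) * mu(t)
        t = sigma(t)
    return total

r, s = 1.0, 2.0
print(laplace(lambda t: 3.0, s, r), 3.0 / s)   # both approximately 1.5
```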
Theorem 5.62 (Linearity).
Suppose \(f,g: \mathbb{T}_{r} \rightarrow \mathbb{R}\) are of exponential order k > 0, and \(c,d \in \mathbb{R}\) . Then
for |s| > k.
Proof.
The result follows easily from the linearity of the delta integral. We have, for | s | > k, that
which completes the proof. □
The following lemma will be useful for computing Laplace transforms of various functions.
Lemma 5.63.
If \(p,q \in \mathcal{R}^{c}\) and |p| < |q|, then for \(t \in \mathbb{T}_{r}\) we have
Proof.
Let \(p,q \in \mathcal{R}^{c}\) with | p | < | q | . First, note that
This implies that
Thus, \(\lim _{t\rightarrow \infty }e_{p\ominus q}(t,r) = 0.\) □
Remark 5.64.
In particular, note that if s > 0, then
Theorem 5.65.
Let \(p \in \mathcal{R}^{c}\) . Then for |s| > |p|, we have
Proof.
First, note that e p (t, r) is of exponential order | p | since
Thus, if | s | > | p | , we have
Then note both that \(De_{p\ominus s}(t,r) = (p \ominus s)(t)\,e_{p\ominus s}(t,r)\) and that \((p \ominus s)(t) = \frac{p-s} {1+s\mu (t)}.\) This gives us
since \(\lim _{t\rightarrow \infty }e_{p\ominus s}(t,r) = 0\) by Lemma 5.63. □
The following results describe the relationship between the Laplace transform and the delta difference.
Lemma 5.66.
If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0, then Df is also of exponential order k > 0.
Proof.
Let \(\vert f(t)\vert \leq Me_{k}(t,r)\) for sufficiently large t. We will prove this lemma by showing that
First, consider
Thus, we have
Then, for any ε > 0 and t sufficiently large,
Therefore, Df is of exponential order k > 0. □
Corollary 5.67.
If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0, then D n f is also of exponential order k > 0 for every \(n \in \mathbb{N}\) .
To see that the corollary holds, apply Lemma 5.66 repeatedly (i.e., use induction on n).
Theorem 5.68.
If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0 and \(n \in \mathbb{N}\) , then
for |s| > k.
Proof.
We will proceed by induction on n. First consider the base case n = 1. Using integration by parts we have
Now assume the statement is true for some n ≥ 1. Then by the base case we have
as desired. □
To show that the Laplace transform is injective (Theorem 5.70) and therefore invertible we will use the following lemma.
Lemma 5.69.
Let \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) be of exponential order k > 0. Then
Proof.
Let \(t_{n} =\sigma ^{n}(r).\) Since f is of exponential order k, there is a constant M > 0 and a positive integer N such that \(\vert f(t_{n})\vert \leq Me_{k}(t_{n},r)\) for all n ≥ N. Then we have
We now show that
for \(\mathfrak{R}(s),\,\mathfrak{I}(s)> k\) and for all integers m ≥ 0. To this end, we prove that
holds by writing \(s = \mathfrak{R}(s) +\mathrm{ i}\mathfrak{I}(s)\) and showing
and
Rewriting these inequalities yields
and
respectively, which are true by assumption. Thus for sufficiently large | s | , we have
as \(\vert s\vert \rightarrow \infty\). □
Theorem 5.70 (Injectivity).
If \(f,g: \mathbb{T}_{r} \rightarrow \mathbb{R}\) and \(\mathcal{L}_{r}\{f\}(s) = \mathcal{L}_{r}\{g\}(s)\) , then f(t) = g(t) for all t ≥ r.
Proof.
We will first prove that \(\mathcal{L}_{r}\{f\}(s) = 0\) implies f(t) = 0 for all t ≥ r. First, note that by Lemma 5.69, we have
for any \(r \in \mathbb{T}_{\alpha }\). We will show that \(f(\sigma ^{n}(r)) = 0\) for all n ≥ 0 by induction. We first prove the case n = 0. To this end, we observe that if
then it follows that
And from this it follows that
whence
Consequently, it holds that
from which it follows that
Taking the limit as \(s \rightarrow \infty\) yields
and so, it holds that
hence
For the inductive step, assume \(f(\sigma ^{i}(r)) = 0\) for all i < n. Then it follows that
from which we obtain
Thus,
So, we deduce that
All in all, we conclude that
and so,
Taking the limit as \(s \rightarrow \infty\) yields
Therefore,
Hence, we deduce that
So, f(t) = 0 for all t ≥ r.
Thus, \(\mathcal{L}_{r}\{f\}(s) = 0\) if and only if f = 0. Now let g be an arbitrary function such that \(\mathcal{L}_{r}\{f\}(s) = \mathcal{L}_{r}\{g\}(s)\). Then by linearity, we have \(\mathcal{L}_{r}\{f - g\}(s) = 0\), which implies \(f - g = 0\). Hence, f(t) = g(t) for all t ≥ r. □
Theorem 5.71 (Shifting).
Suppose \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0. Then for |s| > k, we have:
Proof.
For part (i), we will proceed by induction on n. For the base case, consider n = 1:
Note that
Therefore, we have
Note that
We can thus further simplify the expression to
which proves the base case.
For the inductive step, assume the hypothesis is true for some n ≥ 1. Then by the base case, we have
We now prove part (ii) similarly. For the base case, consider n = 1. We obtain
proving the base case. For the inductive step, assume the hypothesis is true for some n ≥ 1. Then by the base case, we have
And this completes the proof. □
5.10 Laplace Transform Formulas
Theorem 5.72.
If \(c \in \mathbb{R}\) , then \(\mathcal{L}_{r}\{c\}(s) = \frac{c} {s}\) for |s| > 0.
Proof.
Note c = ce 0(t, r) and apply Theorems 5.65 and 5.62. □
Definition 5.73.
If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\), then for any \(n \in \mathbb{N}\), we define the n-th antidifference of f based at r by
Theorem 5.74.
Let \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) . Then for any \(n \in \mathbb{N}\) ,
Proof.
Consider the initial value problem
It is easy to see that the unique solution to this system is given by \(D_{r}^{-n}f(t)\).
However, by Taylor’s Theorem, \(y(t) = P_{n-1}(t,r) + R_{n-1}(t,r)\) where
Then we have
which implies that R n−1(t, r) is also a solution. Thus we have
since the solution is unique. □
The following results are used to obtain the exponential order of the Taylor monomial, \(h_{n}(t,r)\), and give its Laplace transform.
Lemma 5.75.
Let f(t) be of exponential order k > 0. Then \(D_{r}^{-1}f(t)\) is also of exponential order k > 0.
Proof.
Let \(\vert f(t)\vert \leq Me_{k}(t,r)\) for all t ≥ x, and let \(C = \left \vert \int _{r}^{x}f(u)\,Du\right \vert\). Then for t ≥ x, we have
As this demonstrates that \(D_{r}^{-1}f\) is of exponential order, the proof of the lemma is complete. □
Corollary 5.76.
Let f(t) be of exponential order k > 0. Then \(D_{r}^{-n}f(t)\) is also of exponential order k > 0 for all \(n \in \mathbb{N}\) .
To see that the corollary holds, apply Lemma 5.75 repeatedly (i.e., use induction on n).
Theorem 5.77.
Let f be of exponential order k > 0. Then for |s| > k,
Proof.
We will proceed by induction on n. For the base case, consider n = 1. Then using integration by parts, we have
Thus, \(\mathcal{L}_{r}\{D_{r}^{-1}f(t)\}(s) = \frac{1} {s}\mathcal{L}_{r}\{f\}(s)\).
For the inductive step, assume the statement is true for some integer n > 0. Then we have
as was to be shown. □
Lemma 5.78.
The n-th Taylor monomial \(h_{n}(t,r)\) is of exponential order k = 1.
Proof.
We will prove this result for a > 1 (leaving the case a = 1 to the reader) by induction on n. Consider the base case n = 1. We have
Thus, for sufficiently large t and any ε > 0,
For the inductive step, assume \(h_{n}(t,r)\) is of exponential order 1 for some n. Then, since \(D_{r}^{-1}h_{n}(t,r) = h_{n+1}(t,r)\), applying Lemma 5.75 implies \(h_{n+1}(t,r)\) is of exponential order 1. □
Theorem 5.79.
Let |s| > 1. Then
Proof.
The base case, n = 0, is trivial since \(\mathcal{L}_{r}\{1\}(s) = \frac{1} {s}\). Note that
Thus, \(\mathcal{L}_{r}\{h_{n+1}(t,r)\}(s) = \frac{1} {s}\mathcal{L}_{r}\{h_{n}(t,r)\}(s)\). Suppose that the theorem holds for some n. Then it follows that
which completes the induction step and thus proves the result. □
Lemma 5.80.
The discrete trigonometric functions, \(\sin _{p}\) and \(\cos _{p}\), and the hyperbolic trigonometric functions, \(\sinh _{p}\) and \(\cosh _{p}\), are all of exponential order |p|.
Proof.
Let p be such that \(\pm p \in \mathcal{R}^{c}.\) Then for sufficiently large t, we have
The proof for \(\cosh _{p}(t,r)\) is analogous.
For \(\cos _{p}\), we can use the identity \(\cos _{p}(t,r) =\cosh _{ip}(t,r)\) to obtain
The proof for \(\sin _{p}(t,r)\) is analogous. □
Theorem 5.81.
For |s| > |p| and \(\pm p \in \mathcal{R}^{c}\) , we have
-
(i)
\(\mathcal{L}_{r}\{\cosh _{p}(t,r)\}(s) = \frac{s} {s^{2}-p^{2}}\) ;
-
(ii)
\(\mathcal{L}_{r}\{\sinh _{p}(t,r)\}(s) = \frac{p} {s^{2}-p^{2}}\) .
Proof.
To see that (i) holds, note that
The proof of (ii) is similar. □
Theorem 5.82.
For |s| > |p| and \(\pm ip \in \mathcal{R}^{c}\) , we have
-
(i)
\(\mathcal{L}_{r}\{\cos _{p}(t,r)\}(s) = \frac{s} {s^{2}+p^{2}}\) ;
-
(ii)
\(\mathcal{L}_{r}\{\sin _{p}(t,r)\}(s) = \frac{p} {s^{2}+p^{2}}.\)
Proof.
To see that (i) holds, recall that \(\cos _{p}(t,r) =\cosh _{ip}(t,r)\) and thus,
The proof of (ii) is analogous. □
Lemma 5.83.
For \(p,q \in \mathcal{R}^{c}\) and \(t,r \in \mathbb{T}_{\alpha }\) , let \(k(t) = \frac{q} {1+p\mu (t)}.\) Then the following functions are of exponential order |p| + |q|:
-
(i)
\(e_{p}(t,r)\cosh _{k}(t,r);\)
-
(ii)
\(e_{p}(t,r)\sinh _{k}(t,r);\)
-
(iii)
\(e_{p}(t,r)\cos _{k}(t,r);\)
-
(iv)
\(e_{p}(t,r)\sin _{k}(t,r).\)
Proof.
We will prove the result for (i). First, note that
Therefore,
where
Thus,
for some M > 0, and so, \(e_{p}(t,r)\cosh _{k}(t,r)\) is of exponential order | p | + | q | . The proofs of (ii)–(iv) are analogous. □
Theorem 5.84.
Let \(k(t) = \frac{q} {1+p\mu (t)}\) for \(p,q \in \mathcal{R}^{c}\) . Then for |s| > |p| + |q|, we have
-
(i)
\(\mathcal{L}_{r}\{e_{p}(t,r)\cosh _{k}(t,r)\}(s) = \frac{s-p} {(s-p)^{2}-q^{2}};\)
-
(ii)
\(\mathcal{L}_{r}\{e_{p}(t,r)\sinh _{k}(t,r)\}(s) = \frac{q} {(s-p)^{2}-q^{2}};\)
-
(iii)
\(\mathcal{L}_{r}\{e_{p}(t,r)\cos _{k}(t,r)\}(s) = \frac{s-p} {(s-p)^{2}+q^{2}};\)
-
(iv)
\(\mathcal{L}_{r}\{e_{p}(t,r)\sin _{k}(t,r)\}(s) = \frac{q} {(s-p)^{2}+q^{2}}.\)
Proof.
To prove (i), first note that
as stated above. Therefore,
The proof of (ii) is similar.
To see that (iii) holds, recall that \(\cos _{q}(t,r) =\cosh _{iq}(t,r)\) and replace q by iq in the result of the proof of (i) to get
The proof of (iv) is analogous. □
5.11 Solving IVPs Using the Laplace Transform
In this section we will demonstrate how the discrete Laplace transform can be applied to solve difference equations on \(\mathbb{T}_{r}\).
Example 5.85.
Solve:
We will take the Laplace transform of both sides of the equation and use the initial conditions to solve this problem. We begin with
from which it follows that
Using partial fractions, we obtain
Therefore, by the injectivity of the Laplace transform,
Example 5.86.
Solve the following IVP:
To solve the above problem, we first take the Laplace transform of both sides. This yields
from which it follows that
We then solve for \(\mathcal{L}_{0}\{y\}\) and invert by writing
from which it follows that
Thus,
5.12 Green’s Functions
In this section we will consider boundary value problems on a mixed time scale with Sturm–Liouville type boundary conditions for a > 1. We will find a Green's function for a boundary value problem on a mixed time scale with Dirichlet boundary conditions and investigate some of its properties. Many of the results in this section can be viewed as analogues of results for the continuous case given in Kelley and Peterson [137].
Theorem 5.87.
Let \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) and \(A,B,E,F \in \mathbb{R}\) be given. Then the homogeneous boundary value problem (BVP)
has only the trivial solution if and only if
Proof.
A general solution of \(-D^{2}y(t) = 0\) is given by
Using the boundary conditions, we have
and
Thus, we have the following system of equations
which has only the trivial solution if and only if
It follows that
as claimed. □
Lemma 5.88.
Assume \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) and \(A_{1},A_{2} \in \mathbb{R}\) . Then the boundary value problem
has the solution
Proof.
A general solution to the mixed difference equation \(D^{2}y(t) = 0\) is given by
Using the first boundary condition, we get
Using the second boundary condition, we have that
Solving for \(c_{1}\) we get
Hence,
□
Theorem 5.89.
Assume \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}\) and \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) . Then the unique solution of the BVP
is given by
for \(t \in \mathbb{T}_{\alpha }^{\beta }\) , where \(G: \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )} \rightarrow \mathbb{R}\) is called the Green’s function for the homogeneous BVP
and is defined by
where for \((t,s) \in \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )}\)
and
Proof.
Note that for the γ defined in Theorem 5.87, with \(A = E = 1\) and \(B = F = 0\), we have
Hence, by Exercise 5.13, the BVP (5.5), (5.6) has a unique solution y(t). Using the variation of constants formula (Theorem 5.38 with n = 2) we have that
Using the first boundary condition, we get
and using the second boundary condition, we have that
Solving for \(c_{1}\) yields
Thus,
for G(t, s) defined as in the statement of this theorem. □
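Because \(\mathbb{T}_{\alpha }^{\beta }\) is a finite set, the BVP (5.5), (5.6) can also be solved directly as a small linear system, which gives an independent check on the Green's function representation above. The sketch below sets up that system with numpy; the grid size K and the right-hand side are illustrative choices, and σ(t) = at + b is assumed as in the earlier sketches.

```python
# Solve -D^2 y(t) = f(t), y(alpha) = y(beta) = 0 as a linear system on T_alpha^beta.
import numpy as np

a, b, alpha = 2.0, 1.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return sigma(t) - t

K = 5                                               # beta = sigma^K(alpha)
pts = [alpha]
for _ in range(K):
    pts.append(sigma(pts[-1]))                      # T_alpha^beta = {alpha, ..., beta}

f = lambda t: 1.0                                   # sample right-hand side

# Unknowns y(pts[0..K]); equations: two boundary conditions plus -D^2 y(t) = f(t)
# for t in T_alpha^{rho^2(beta)}, i.e., for pts[0], ..., pts[K-2].
A = np.zeros((K + 1, K + 1))
rhs = np.zeros(K + 1)
A[0, 0] = 1.0                                       # y(alpha) = 0
A[K, K] = 1.0                                       # y(beta) = 0
for i in range(0, K - 1):
    t = pts[i]
    # -D^2 y(t) expanded with Dy(t) = (y(sigma(t)) - y(t)) / mu(t)
    A[i + 1, i]     += -1.0 / (mu(t) * mu(t))
    A[i + 1, i + 1] +=  1.0 / (mu(t) * mu(t)) + 1.0 / (mu(t) * mu(sigma(t)))
    A[i + 1, i + 2] += -1.0 / (mu(t) * mu(sigma(t)))
    rhs[i + 1] = f(t)
y = np.linalg.solve(A, rhs)
print(dict(zip(pts, np.round(y, 4))))               # nonnegative, zero at both ends
```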
Theorem 5.90.
The Green’s function for the BVP (5.7) , (5.8) , satisfies
and
Proof.
First, note both that
and that
Now we will show that DG(t, s) ≥ 0 for t ≤ s, DG(t, s) ≤ 0 for s < t, and \(G(\sigma (s),s) \geq G(s,s)\). So first consider the domain 0 ≤ K(t) ≤ K(s) ≤ K(β) − 1:
Now consider the domain \(0 \leq K(s) \leq K(t) - 1 \leq K(\beta ) - 1\):
since \(\beta -\sigma (s) \leq \beta -\alpha\). Now, since G is increasing for t ≤ s and decreasing for s < t, we need to see which is larger: \(G(\sigma (s),s)\) or G(s, s). So consider
which implies that \(\max _{t\in \mathbb{T}_{\alpha }^{\beta }}G(t,s) = G(\sigma (s),s)\). Also, since DG(t, s) ≥ 0 for \(t \in \mathbb{T}_{[\alpha,s]}\), DG(t, s) ≤ 0 for \(t \in \mathbb{T}_{(s,\beta )}\), and \(G(\alpha,s) = 0 = G(\beta,s)\), we have G(t, s) ≥ 0 on its domain. □
Remark 5.91.
Note that in the above proof we have DG(t, s) > 0 for t ≤ s < ρ(β), and DG(t, s) < 0 for α < s < t.
In the next theorem we give some more properties of the Green’s function for the BVP (5.7), (5.8).
Theorem 5.92.
Let G(t,s), u(t,s), and v(t,s) be as defined in Theorem 5.89 . Then the following hold:
-
(i)
\(G(\alpha,s) = 0 = G(\beta,s),\quad s \in \mathbb{T}_{\alpha }^{\rho (\beta )};\)
-
(ii)
for each fixed \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )},\) \(-D^{2}u(t,s) = 0 = -D^{2}v(t,s)\) for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) };\)
-
(iii)
\(v(t,s) = u(t,s) + h_{1}(t,\sigma (s)),\quad (t,s) \in \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )};\)
-
(iv)
\(u(\sigma (s),s) = v(\sigma (s),s),\quad s \in \mathbb{T}_{\alpha }^{\rho (\beta )};\)
-
(v)
\(-D^{2}G(t,s) = \frac{\delta _{ts}} {\mu (s)},\quad (t,s) \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \times \mathbb{T}_{\alpha }^{\rho (\beta )},\) where \(\delta _{ts}\) is the Kronecker delta, i.e., \(\delta _{ts} = 1\) for t = s and \(\delta _{ts} = 0\) for t ≠ s.
Proof.
In the proof of Theorem 5.90 we proved (i). The proofs of the properties (ii)–(iv) are straightforward and left to the reader (see Exercise 5.15). We now use these properties to prove (v). It follows that for t < s,
and for t > s,
Finally, when t = s, we have using Exercise 5.5
Therefore,
for \((t,s) \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \times \mathbb{T}_{\alpha }^{\rho (\beta )}\). □
The following theorem along with Exercise 5.13 is a uniqueness result for the Green’s function for the BVP (5.7), (5.8).
Theorem 5.93.
There is a unique function \(G: \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )} \rightarrow \mathbb{R}\) such that \(G(\alpha,s) = 0 = G(\beta,s)\) , for each \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) , and that \(-D^{2}G(t,s) = \frac{\delta _{ts}} {\mu (s)}\) , for each fixed \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) .
Proof.
Fix \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\). Then by Theorem 5.89 with \(f(t) = \frac{\delta _{ts}} {\mu (s)},\) \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) },\) the BVP
has a unique solution on \(\mathbb{T}_{\alpha }^{\beta }\). Hence for each fixed \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\), G(t, s) is uniquely determined for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) Since \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) is arbitrary, G(t, s) is uniquely determined on \(\mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )}.\) □
Theorem 5.94.
Assume \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}\) . Then the unique solution of the BVP
is given by
where u(t) solves the BVP
and G(t,s) is the Green’s function for the BVP (5.7) , (5.8) .
Proof.
By Exercise 5.13 the given BVP has a unique solution y(t). By Theorem 5.89
where \(z(t):=\int _{ \alpha }^{\beta }G(t,s)f(s)Ds\) is by Theorem 5.89 the solution of the BVP
It follows that
and
Furthermore,
for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) }.\) □
We now prove a comparison theorem for solutions of boundary value problems of the type treated by Theorem 5.94.
Theorem 5.95 (Comparison Theorem).
If \(u,v: \mathbb{T}_{\alpha }^{\beta } \rightarrow \mathbb{R}\) satisfy
then u(t) ≥ v(t) on \(\mathbb{T}_{\alpha }^{\beta }\) .
Proof.
Let \(w(t):= u(t) - v(t)\), for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) Then for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) }\)
If \(A_{1}:= u(\alpha ) - v(\alpha ) \geq 0\) and \(A_{2}:= u(\beta ) - v(\beta ) \geq 0\), then w(t) solves the boundary value problem
Thus, by Theorem 5.94
where G(t, s) is the Green’s function defined earlier and y(t) is the solution of
Since \(-D^{2}y(t) = 0\) has the general solution
and both y(α), y(β) ≥ 0, we have y(t) ≥ 0. By Theorem 5.90, G(t, s) ≥ 0, and, thus, we have
for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) □
5.13 Exercises
5.1. Show that the points in \(\mathbb{T}_{\alpha }\) satisfy
5.2. Prove part (ii) of Theorem 5.4.
5.3. Prove parts (ii) and (iii) of Theorem 5.6.
5.4. Prove part (iv) of Theorem 5.8.
5.5. Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\). Show that
5.6. Assume \(c,d \in \mathbb{T}_{\alpha }\) with c < d. Prove that if \(f: \mathbb{T}_{[c,d]} \rightarrow \mathbb{R}\) and Df(t) = 0 for \(t \in \mathbb{T}_{[c,\rho (d)]}\), then f(t) = C for all \(t \in \mathbb{T}_{[c,d]}\), where C is a constant.
5.7. Show that if \(n \in \mathbb{N}_{1}\) and a ≥ 1, then
Then use this formula to prove parts (iii)–(v) of Theorem 5.22.
5.8. Prove part (ii) of Theorem 5.23.
5.9. Assume \(f: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{R}.\) Derive the Leibniz formula
for \(t \in \mathbb{T}_{\alpha }.\)
5.10. Consider the mixed time scale where α = 2 and \(\sigma (t) = 3t + 2\) (so a = 3 and b = 2). Use the variation of constants formula in Theorem 5.38 to solve the IVP
5.11. Use the Leibniz formula in Exercise 5.9 to prove the variation of constants formula (Theorem 5.38).
5.12. Prove Theorem 5.43.
5.13. Assume \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\), \(A,B,E,F \in \mathbb{R}\), and \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}.\) Show that the nonhomogeneous BVP
where the constants C 1, C 2 are given, has a unique solution if and only if the corresponding homogeneous BVP
has only the trivial solution.
5.14. Show that for the BVP
the γ in Theorem 5.87 satisfies γ = 0. Then show that the given BVP has infinitely many solutions.
5.15. Prove parts (ii)–(iv) of Theorem 5.92.
5.16. Use Theorem 5.92 to prove directly that the function
for \(t \in \mathbb{T}_{\alpha }^{\beta }\), where G(t, s) is the Green’s function for the BVP (5.7), (5.8), solves the BVP (5.5), (5.6).
Bibliography
Atici, F.M., Eloe, P.W.: Initial value problems in discrete fractional calculus. Proc. Am. Math. Soc. 137, 981–989 (2009)
Auch, T.: Development and application of difference and fractional calculus on discrete time scales. PhD Dissertation, University of Nebraska-Lincoln (2013)
Auch, T., Lai, J., Obudzinski, E., Wright, C.: Discrete q-calculus and the q-Laplace transform. Panamer. Math. J. 24, 1–19 (2014)
Auch, T.: Discrete calculus on a scaled number line. Panamer. Math. J., to appear
Erbe, L., Mert, R., Peterson, A., Zafer, A.: Oscillation of even order nonlinear delay dynamic equations on time scales. Czech. Math. J. 63, 265–279 (2013)
Estes, A.: Discrete calculus. Undergraduate Honors Thesis, University of Nebraska-Lincoln (2013)
Kelley, W.G., Peterson, A.C.: The Theory of Differential Equations: Classical and Qualitative, Second Edition. Universitext, Springer, New York (2010)
Mert, R.: Oscillation of higher-order neutral dynamic equations on time scales. Adv. Differ. Equ. (68), 11 pp. (2012)