
5.1 Introduction

This chapter focuses on what we call the calculus on a mixed time scale whose elements we will define in terms of a point α and two linear functions. There has been recent interest in mixed time scales by Auch [37, 38], Auch et al. [39], Estes [34, 78], Erbe et al. [76], and Mert [145].

5.2 Basic Mixed Time Scale Calculus

In this section, we introduce some fundamental concepts and properties concerning what we will call a mixed time scale. Throughout this chapter we assume a, b are constants satisfying

$$\displaystyle{a \geq 1,\quad b \geq 0,\quad a + b> 1.}$$

We will use two linear functions to define our so-called mixed time scale. First we let \(\sigma: \mathbb{R} \rightarrow \mathbb{R}\) be defined by

$$\displaystyle{\sigma (t) = at + b,\quad t \in \mathbb{R}.}$$

Then we define the linear function ρ to be the inverse function of \(\sigma\), that is

$$\displaystyle{\rho (t) = \frac{t - b} {a},\quad t \in \mathbb{R}.}$$

We call \(\sigma\) the forward jump operator and ρ the backward jump operator. For these two functions only, we use the following notation. For n ≥ 1 we define the function \(\sigma ^{n}\) recursively by

$$\displaystyle{\sigma ^{n}(t) =\sigma (\sigma ^{n-1}(t)),\quad t \in \mathbb{R},}$$

where \(\sigma ^{0}(t):= t\), and

$$\displaystyle{\rho ^{n}(t) =\rho (\rho ^{n-1}(t)),\quad t \in \mathbb{R},}$$

where \(\rho ^{0}(t):= t\). We now define our mixed time scale \(\mathbb{T}_{\alpha }\), where for simplicity we always assume α ≥ 0:

$$\displaystyle{ \mathbb{T}_{\alpha }:=\{ \cdots \,,\rho ^{2}(\alpha ),\rho (\alpha ),\alpha,\sigma (\alpha ),\sigma ^{2}(\alpha ),\cdots \,\}. }$$

By Exercise 5.1, we have that

$$\displaystyle{\cdots <\rho ^{2}(\alpha ) <\rho (\alpha ) <\alpha <\sigma (\alpha ) <\sigma ^{2}(\alpha ) <\cdots \,.}$$

Usually the domains of \(\sigma\) and ρ will be either \(\mathbb{R}\) or \(\mathbb{T}_{\alpha }.\)
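The construction above is easy to experiment with numerically. The following Python sketch is a minimal illustration, assuming the sample values a = 2, b = 1, α = 1 (the parameter choices and helper names are ours, not from the text); it generates a window of \(\mathbb{T}_{\alpha }\) around α and checks the strict ordering of Exercise 5.1 together with the lower bound of Theorem 5.1 below.

```python
# A minimal numerical sketch; the parameters a, b, alpha are illustrative.
a, b, alpha = 2.0, 1.0, 1.0        # must satisfy a >= 1, b >= 0, a + b > 1

sigma = lambda t: a * t + b        # forward jump operator
rho = lambda t: (t - b) / a        # backward jump operator, inverse of sigma

# Build {rho^3(alpha), ..., rho(alpha), alpha, sigma(alpha), ..., sigma^3(alpha)}.
points = [alpha]
for _ in range(3):
    points.insert(0, rho(points[0]))
    points.append(sigma(points[-1]))

print(points)                      # [-0.75, -0.5, 0.0, 1.0, 3.0, 7.0, 15.0]
assert all(s < t for s, t in zip(points, points[1:]))  # Exercise 5.1
assert all(t > b / (1 - a) for t in points)            # Theorem 5.1 (a > 1)
```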

Theorem 5.1.

If a > 1, then

$$\displaystyle{t> \frac{b} {1 - a},\quad t \in \mathbb{T}_{\alpha }.}$$

Proof.

Since \(\sigma ^{n}(\alpha ) \geq 0> \frac{b} {1-a}\) for all n ≥ 0, it remains to show that \(\rho ^{n}(\alpha )> \frac{b} {1-a}\) for all n ≥ 1. We prove this by induction. First, for the base case note that

$$\displaystyle{\rho (\alpha ) = \frac{\alpha -b} {a}> \frac{b} {1 - a}.}$$

Now assume n ≥ 1 and \(\rho ^{n}(\alpha )> \frac{b} {1-a}\). Then it follows that

$$\displaystyle\begin{array}{rcl} \rho ^{n+1}(\alpha ) =\rho \left (\rho ^{n}(\alpha )\right ) = \frac{\rho ^{n}(\alpha ) - b} {a}> \frac{ \frac{b} {1-a} - b} {a} = \frac{b} {1 - a}.& & {}\\ \end{array}$$

 □ 

Note that the above theorem does not hold if a = 1. Also note that when a > 1, \(\mathbb{T}_{\alpha }\) is not a closed set (see Theorem 5.6 (iii)).

Definition 5.2.

For \(c,d \in \mathbb{T}_{\alpha }\) such that d ≥ c, we define

$$\displaystyle{ \mathbb{T}_{[c,d]}:= \mathbb{T}_{\alpha } \cap [c,d] =\{ c,\sigma (c),\sigma ^{2}(c),\ldots,\rho (d),d\}. }$$

We define \(\mathbb{T}_{(c,d)}, \mathbb{T}_{(c,d]},\) and \(\mathbb{T}_{[c,d)}\) similarly. Additionally, we may use the notation \(\mathbb{T}_{c}^{d}\), where \(\mathbb{T}_{c}^{d}:= \mathbb{T}_{[c,d]}.\)

Definition 5.3.

We define a forward graininess function, μ, by

$$\displaystyle{ \mu (t):=\sigma (t) - t = (at + b) - t = (a - 1)t + b. }$$

In the following theorem we give some properties of the graininess function μ.

Theorem 5.4.

For \(t \in \mathbb{T}_{\alpha }\) and \(n \in \mathbb{N}_{0}\) , the following hold:

  1. (i)

    μ(t) > 0;

  2. (ii)

    \(\mu (\sigma ^{n}(t)) = a^{n}\mu (t);\)

  3. (iii)

    \(\mu (\rho ^{n}(t)) = a^{-n}\mu (t).\)

Proof.

We just prove (iii) and leave the rest of the proof (see Exercise 5.2) to the reader. To see that (iii) holds consider for \(t \in \mathbb{T}_{\alpha }\) the base case

$$\displaystyle\begin{array}{rcl} \mu (\rho (t)) = (a - 1)\rho (t) + b = (a - 1)\frac{t - b} {a} + b = \frac{(a - 1)t + b} {a} = a^{-1}\mu (t).& & {}\\ \end{array}$$

Now assume n ≥ 1 and \(\mu (\rho ^{n}(t)) = a^{-n}\mu (t)\) for \(t \in \mathbb{T}_{\alpha }.\) Then using the induction assumption we get for \(t \in \mathbb{T}_{\alpha }\)

$$\displaystyle\begin{array}{rcl} \mu (\rho ^{n+1}(t)) =\mu (\rho ^{n}(\rho (t))) = a^{-n}\mu (\rho (t)) = a^{-(n+1)}\mu (t).& & {}\\ \end{array}$$

Hence, (iii) holds. □ 
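A quick numerical check of Theorem 5.4 (ii) and (iii) can be run as follows; the parameter values and helper names are illustrative assumptions, not part of the text.

```python
# Sketch: verify mu(sigma^n(t)) = a^n mu(t) and mu(rho^n(t)) = a^{-n} mu(t).
a, b = 2.0, 1.0                    # assumed sample parameters
sigma = lambda t: a * t + b
rho = lambda t: (t - b) / a
mu = lambda t: (a - 1) * t + b     # forward graininess of Definition 5.3

t = 3.0                            # a point of T_1 = {..., 0, 1, 3, 7, 15, ...}
s_n, r_n = t, t
for n in range(1, 6):
    s_n, r_n = sigma(s_n), rho(r_n)
    assert abs(mu(s_n) - a**n * mu(t)) < 1e-9      # property (ii)
    assert abs(mu(r_n) - a**(-n) * mu(t)) < 1e-9   # property (iii)
```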

Theorem 5.5 (Properties of Forward Jump Operator).

Given \(m,n \in \mathbb{N}_{0}\) and \(t \in \mathbb{T}_{\alpha }\) , the following hold:

  1. (i)

    for n ≥ 1, \(\sigma ^{n}(t) = a^{n}t + b\sum \limits _{j=0}^{n-1}a^{j};\)

  2. (ii)

    if m > n, \(\sigma ^{m}(t)>\sigma ^{n}(t);\)

  3. (iii)

    if t > 0, \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty.\)

Proof.

We will only prove (i) and (iii) here. First we will prove property (i) by an induction argument. The base case clearly holds. Assume that n ≥ 1 and \(\sigma ^{n}(t) = a^{n}t +\sum \limits _{ j=0}^{n-1}a^{j}b\). It follows that

$$\displaystyle\begin{array}{rcl} \sigma ^{n+1}(t) =\sigma (\sigma ^{n}(t))& =& \sigma \left (a^{n}t + b\sum \limits _{ j=0}^{n-1}a^{j}\right ) {}\\ & =& a\left (a^{n}t + b\sum \limits _{ j=0}^{n-1}a^{j}\right ) + b = a^{n+1}t + \left (b\sum \limits _{ j=0}^{n}a^{j}\right ). {}\\ \end{array}$$

This completes the proof of (i).

Next we prove that (iii) holds. First, consider the subcase in which a > 1. Then

$$\displaystyle\begin{array}{rcl} \sigma ^{n}(t) = a^{n}t +\sum \limits _{ j=0}^{n-1}a^{j}b \geq a^{n}t.& & {}\\ \end{array}$$

Since t > 0 and a > 1, we have that \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty\). Next, consider the case in which a = 1. Then b > 0, and

$$\displaystyle\begin{array}{rcl} \sigma ^{n}(t) = a^{n}t +\sum \limits _{ j=0}^{n-1}a^{j}b = t +\sum \limits _{ j=0}^{n-1}b = t + nb \geq nb.& & {}\\ \end{array}$$

Since b > 0, we have that \(\lim \limits _{n\rightarrow \infty }\sigma ^{n}(t) = \infty\). This completes the proof of (iii). □ 

Theorem 5.6 (Properties of Backward Jump Operator).

Given positive integers m, n, and \(t \in \mathbb{T}_{\alpha }\) , the following properties hold:

  1. (i)

    \(\rho ^{n}(t) = a^{-n}\left (t -\sum \limits _{j=0}^{n-1}a^{j}b\right );\)

  2. (ii)

    if m > n, then \(\rho ^{m}(t) <\rho ^{n}(t);\)

  3. (iii)

    \(\lim \limits _{n\rightarrow \infty }\rho ^{n}(t) = -\infty\) if a = 1, and \(\lim \limits _{n\rightarrow \infty }\rho ^{n}(t) = \frac{b} {1-a}\) if a > 1.

Proof.

We will just prove (i) holds (see Exercise 5.3 for parts (ii) and (iii)). So, first we note that

$$\displaystyle\begin{array}{rcl} \rho ^{1}(t) = \dfrac{t - b} {a} = a^{-1}\left (t -\sum \limits _{ j=0}^{0}a^{j}b\right ).& & {}\\ \end{array}$$

Assume that n ≥ 1 and \(\rho ^{n}(t) = a^{-n}\left (t -\sum \limits _{j=0}^{n-1}a^{j}b\right )\) holds. Then

$$\displaystyle\begin{array}{rcl} \rho ^{n+1}(t)& =& \rho (\rho ^{n}(t)) {}\\ & =& \rho \left (a^{-n}\left [t -\sum \limits _{ j=0}^{n-1}a^{j}b\right ]\right ) {}\\ & =& \dfrac{a^{-n}\left [t -\sum \limits _{j=0}^{n-1}a^{j}b\right ] - b} {a} {}\\ & =& a^{-n-1}\left (t -\sum \limits _{ j=0}^{n-1}a^{j}b - a^{n}b\right ) {}\\ & =& a^{-(n+1)}\left (t -\sum \limits _{ j=0}^{n}a^{j}b\right ). {}\\ \end{array}$$

This completes the proof of (i) by induction. □ 

We now define a function N(t, s) whose value, when \(s,t \in \mathbb{T}_{\alpha }\) with s ≤ t, gives the cardinality, \(card(\mathbb{T}_{[s,t)})\), of the set \(\mathbb{T}_{[s,t)}\) (see Theorem 5.8).

Definition 5.7.

For a > 1, we define the function \(N: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{Z}\) by

$$\displaystyle{ N(t,s):=\log _{a}\left ( \frac{\mu (t)} {\mu (s)}\right ). }$$

For simplicity, we will use the notation \(N(t):= N(t,\alpha )\), for \(t \in \mathbb{T}_{\alpha }.\)

As presented in Estes [34, 78], some properties of the function N are given in the following theorem.

Theorem 5.8.

Assume a > 1 and \(t,s,r \in \mathbb{T}_{\alpha }\) . Then the following hold:

  1. (i)

    N(t,t) = 0;

  2. (ii)

    \(N(t,s) = card(\mathbb{T}_{[s,t)})\) , if s ≤ t;

  3. (iii)

    \(N(s,t) = -N(t,s);\)

  4. (iv)

    \(N(t,s) = N(t,r) + N(r,s).\)

Proof.

Since

$$\displaystyle{N(t,t) =\log _{a}\left (\frac{\mu (t)} {\mu (t)}\right ) =\log _{a}1 = 0,}$$

we have that (i) holds. To see that (ii) holds, let \(s,t \in \mathbb{T}_{\alpha }\) with s ≤ t. If \(k = card(\mathbb{T}_{[s,t)})\), then \(t =\sigma ^{k}(s)\), and so we have that

$$\displaystyle\begin{array}{rcl} N(t,s) =\log _{a}\left [ \frac{\mu (t)} {\mu (s)}\right ] =\log _{a}\left [\frac{\mu (\sigma ^{k}(s))} {\mu (s)} \right ] =\log _{a}\left [\frac{a^{k}\mu (s)} {\mu (s)} \right ] =\log _{a}a^{k} = k.& & {}\\ \end{array}$$

To see that (iii) holds, consider

$$\displaystyle\begin{array}{rcl} N(t,s) =\log _{a}\left [ \frac{\mu (t)} {\mu (s)}\right ] = -\log _{a}\left [\frac{\mu (s)} {\mu (t)} \right ] = -N(s,t).& & {}\\ \end{array}$$

The proof of (iv) is Exercise 5.4. □ 
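For a > 1 the function N is easy to compute directly. The sketch below (assumed parameters and helper names, not from the text) compares the logarithmic formula of Definition 5.7 with a direct count of forward jumps, per Theorem 5.8 (ii) and (iii).

```python
import math

a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b

def N(t, s):
    # N(t,s) = log_a(mu(t)/mu(s)); rounded since it is an integer in theory
    return round(math.log(mu(t) / mu(s), a))

s = 1.0
t = s
for k in range(5):                 # here t = sigma^k(s)
    assert N(t, s) == k            # property (ii): N(t,s) = card(T_[s,t))
    t = sigma(t)

assert N(s, sigma(sigma(s))) == -2 # property (iii): N(s,t) = -N(t,s)
```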

5.3 Discrete Difference Calculus

In this section, we define a difference operator on our mixed time scale \(\mathbb{T}_{\alpha }\) and study its properties. Note that if a = q > 1 and b = 0, then D is the q-difference operator (see Chap. 4), and if \(a = b = 1\), then D is the forward difference operator.

Definition 5.9.

Given \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), the mixed time scale difference operator is defined by

$$\displaystyle\begin{array}{rcl} Df(t):= \dfrac{f(\sigma (t)) - f(t)} {\mu (t)},\quad t \in \mathbb{T}_{\alpha }.& & {}\\ \end{array}$$

Theorem 5.10 (Properties of Difference Operator).

Let \(f,g: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and a constant \(c \in \mathbb{R}\) be given. Then for \(t \in \mathbb{T}_{\alpha }\) the following hold:

  1. (i)

    Dc = 0;

  2. (ii)

    D(cf(t)) = cDf(t);

  3. (iii)

    \(D(f(t) + g(t)) = Df(t) + Dg(t);\)

  4. (iv)

    \(D(f(t)g(t)) = f(\sigma (t))Dg(t) + (Df(t))g(t);\)

  5. (v)

    \(D(f(t)g(t)) = f(t)Dg(t) + (Df(t))g(\sigma (t));\)

  6. (vi)

    \(D\left (\dfrac{f(t)} {g(t)}\right ) = \dfrac{g(t)Df(t) - (Dg(t))f(t)} {g(t)g(\sigma (t))}\) if \(g(t)g(\sigma (t))\neq 0\) .

Proof.

Since \(Dc = \dfrac{c - c} {\mu (t)} = 0\) we have that (i) holds. Also

$$\displaystyle\begin{array}{rcl} D(cf(t)) = \dfrac{cf(\sigma (t)) - cf(t)} {\mu (t)} = c\left (\dfrac{f(\sigma (t)) - f(t)} {\mu (t)} \right ) = cDf(t),& & {}\\ \end{array}$$

so (ii) holds. To see that (iii) holds note that

$$\displaystyle\begin{array}{rcl} D(f(t) + g(t))& =& \dfrac{[f(\sigma (t)) + g(\sigma (t))] - [f(t) + g(t)]} {\mu (t)} {}\\ & =& \dfrac{f(\sigma (t)) - f(t)} {\mu (t)} + \dfrac{g(\sigma (t)) - g(t)} {\mu (t)} = Df(t) + Dg(t). {}\\ \end{array}$$

The proof of property (iv) is left to the reader. Property (v) follows from (iv) by interchanging f(t) and g(t). Finally, property (vi) follows from the following:

$$\displaystyle\begin{array}{rcl} D\left (\dfrac{f(t)} {g(t)}\right )& =& \dfrac{\left (\dfrac{f(\sigma (t))} {g(\sigma (t))}\right ) -\left (\dfrac{f(t)} {g(t)}\right )} {\mu (t)} {}\\ & =& \dfrac{f(\sigma (t))g(t) - g(\sigma (t))f(t)} {g(t)g(\sigma (t))\mu (t)} {}\\ & =& \dfrac{f(\sigma (t))g(t) - f(t)g(t) + f(t)g(t) - g(\sigma (t))f(t)} {g(\sigma (t))g(t)\mu (t)} {}\\ & =& \dfrac{g(t)\left (\dfrac{f(\sigma (t)) - f(t)} {\mu (t)} \right ) - f(t)\left (\dfrac{g(\sigma (t)) - g(t)} {\mu (t)} \right )} {g(t)g(\sigma (t))} {}\\ & =& \dfrac{g(t)Df(t) - (Dg(t))f(t)} {g(t)g(\sigma (t))}. {}\\ \end{array}$$

And this completes the proof. □ 
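The operator D and the product rule (iv) can also be checked numerically. The following sketch uses assumed sample parameters and sample functions f and g of our own choosing.

```python
a, b = 2.0, 1.0                    # assumed sample parameters
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b

def D(f):
    # mixed time scale difference operator of Definition 5.9
    return lambda t: (f(sigma(t)) - f(t)) / mu(t)

f = lambda t: t * t
g = lambda t: 3.0 * t + 2.0

lhs = D(lambda t: f(t) * g(t))
rhs = lambda t: f(sigma(t)) * D(g)(t) + D(f)(t) * g(t)   # Theorem 5.10 (iv)

for t in [1.0, 3.0, 7.0, 15.0]:    # points of T_1
    assert abs(lhs(t) - rhs(t)) < 1e-9
```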

5.4 Discrete Delta Integral

In this section, we will define the integral of a function defined on the mixed time scale \(\mathbb{T}_{\alpha }\). We will develop several properties of this integral, including the two fundamental theorems for the calculus on mixed time scales.

Definition 5.11.

Let \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c,d \in \mathbb{T}_{\alpha }\) be given. Then

$$\displaystyle\begin{array}{rcl} \int _{c}^{d}f(t)Dt:= \left \{\begin{array}{@{}l@{\quad }l@{}} \sum \limits _{j=0}^{N(d,c)-1}f(\sigma ^{j}(c))\mu (\sigma ^{j}(c)) \quad &\mbox{ if $c <d$} \\ \quad \quad 0 \quad &\mbox{ if $c = d$} \\ -\sum \limits _{j=0}^{N(c,d)-1}f(\sigma ^{j}(d))\mu (\sigma ^{j}(d))\quad &\mbox{ if $c> d$}. \end{array} \right.& & {}\\ \end{array}$$

Theorem 5.12 (Properties of Integral).

Given \(f,g: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) , \(k \in \mathbb{R}\) , and \(c,d,l \in \mathbb{T}_{\alpha }\) , the following properties hold:

  1. (i)

    \(\int _{c}^{d}f(t)Dt = -\int _{d}^{c}f(t)Dt;\)

  2. (ii)

    \(\int _{c}^{d}kf(t)Dt = k\int _{ c}^{d}f(t)Dt;\)

  3. (iii)

    \(\int _{c}^{d}(f(t) + g(t))Dt =\int _{ c}^{d}f(t)Dt +\int _{ c}^{d}g(t)Dt;\)

  4. (iv)

    \(\int _{c}^{c}f(t)Dt = 0;\)

  5. (v)

    \(\int _{c}^{d}f(t)Dt =\int _{ c}^{l}f(t)Dt +\int _{ l}^{d}f(t)Dt;\)

  6. (vi)

    if d ≥ c, then \(\left \vert \int _{c}^{d}f(t)Dt\right \vert \leq \int _{c}^{d}\vert f(t)\vert Dt;\)

  7. (vii)

    if d ≥ c and \(f(t) \geq g(t)\) for \(t \in \mathbb{T}_{[c,d)}\) , then \(\int _{c}^{d}f(t)Dt \geq \int _{c}^{d}g(t)Dt\) .

Proof.

These properties follow from properties of summations. As an example, we will just prove property (vi). To this end, we note that

$$\displaystyle\begin{array}{rcl} \left \vert \int _{c}^{d}f(t)Dt\right \vert & =& \left \vert \sum _{ j=0}^{N(d,c)-1}f(\sigma ^{j}(c))\mu (\sigma ^{j}(c))\right \vert {}\\ &\leq & \sum _{j=0}^{N(d,c)-1}\left \vert f(\sigma ^{j}(c))\mu (\sigma ^{j}(c))\right \vert {}\\ & =& \int _{c}^{d}\vert f(t)\vert Dt. {}\\ \end{array}$$

 □ 
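Definition 5.11 translates directly into code. The sketch below (assumed parameters and helpers, not from the text) evaluates the integral as the finite sum it is, and checks the telescoping identity \(\int _{c}^{d}1\,Dt = d - c\).

```python
import math

a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
N = lambda t, s: round(math.log(mu(t) / mu(s), a))

def integral(f, c, d):
    # delta integral of Definition 5.11
    if c == d:
        return 0.0
    if c > d:
        return -integral(f, d, c)
    total, t = 0.0, c
    for _ in range(N(d, c)):       # sum of f(sigma^j(c)) mu(sigma^j(c))
        total += f(t) * mu(t)
        t = sigma(t)
    return total

# the graininesses telescope, so the integral of 1 over [1, 15) is 15 - 1
assert abs(integral(lambda t: 1.0, 1.0, 15.0) - 14.0) < 1e-9
```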

Definition 5.13.

Assume \(c,d \in \mathbb{T}_{\alpha }\) and c < d. Let \(f: \mathbb{T}_{[c,d]} \rightarrow \mathbb{R}\) be given. We say F is an antidifference of f on \(\mathbb{T}_{[c,d]}\) provided DF(t) = f(t) for all \(t \in \mathbb{T}_{[c,\rho (d)]}\).

The following theorem shows that every function \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) has an antidifference on \(\mathbb{T}_{\alpha }.\)

Theorem 5.14 (Fundamental Theorem of Difference Calculus: Part II).

  Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c \in \mathbb{T}_{\alpha }\) . If we define \(F: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) by \(F(t) =\int _{ c}^{t}f(s)Ds\) , then F is an antidifference of f on \(\mathbb{T}_{\alpha }.\)

Proof.

Let F be defined as in the statement of this theorem. Then for \(t \in \mathbb{T}_{\alpha },\)

$$\displaystyle\begin{array}{rcl} DF(t)& =& \frac{\int _{c}^{\sigma (t)}f(s)Ds -\int _{c}^{t}f(s)Ds} {\mu (t)} = \frac{\int _{t}^{\sigma (t)}f(s)Ds} {\mu (t)} = \frac{f(t)\mu (t)} {\mu (t)} = f(t), {}\\ \end{array}$$

which is what we wanted to show. □ 

Theorem 5.15.

Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and F is an antidifference of f on \(\mathbb{T}_{\alpha }\) . Then a general antidifference of f on \(\mathbb{T}_{\alpha }\) is given by

$$\displaystyle{G(t) = F(t) + C,\quad t \in \mathbb{T}_{\alpha },}$$

where C is an arbitrary constant.

Proof.

Let F be an antidifference of f on \(\mathbb{T}_{\alpha }\). Set \(G(t) = F(t) + C\) for \(t \in \mathbb{T}_{\alpha },\) where C is a constant. Then

$$\displaystyle{DG(t) = D[F(t) + C] = f(t) + 0 = f(t),\quad \mbox{ for}\quad t \in \mathbb{T}_{\alpha }.}$$

Conversely, assume G(t) is any antidifference of f on \(\mathbb{T}_{\alpha }.\) Then

$$\displaystyle{D[G(t) - F(t)] = DG(t) - DF(t) = f(t) - f(t) = 0,\quad t \in \mathbb{T}_{\alpha }.}$$

From Exercise 5.6, there is a constant C so that

$$\displaystyle{G(t) - F(t) = C,\quad t \in \mathbb{T}_{\alpha }.}$$

Hence,

$$\displaystyle{G(t) = F(t) + C,\quad t \in \mathbb{T}_{\alpha },}$$

as desired. □ 

Definition 5.16.

We define the indefinite integral as follows:

$$\displaystyle\begin{array}{rcl} \int f(t)Dt = F(t) + C,& & {}\\ \end{array}$$

where F(t) is any antidifference of f(t).

Theorem 5.17 (Fundamental Theorem of Difference Calculus: Part I).

Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(c,d \in \mathbb{T}_{\alpha }\) . Then, if F is any antidifference of f on \(\mathbb{T}_{\alpha }\) , it follows that

$$\displaystyle{\int _{c}^{d}f(t)Dt =\int _{ c}^{d}DF(t)Dt = F(d) - F(c).}$$

Proof.

Put

$$\displaystyle{G(t):=\int _{ c}^{t}f(s)Ds,\quad t \in \mathbb{T}_{\alpha }.}$$

By Theorem 5.14 G(t) is an antidifference of f(t) on \(\mathbb{T}_{\alpha }\). Let F(t) be any fixed antidifference of f(t) on \(\mathbb{T}_{\alpha }\). Then by Theorem 5.15 we have that

$$\displaystyle{F(t) = G(t) + A,\quad \mbox{ where $A$ is a constant}.}$$

It follows that

$$\displaystyle\begin{array}{rcl} F(d) - F(c) = \left [G(d) + A\right ] -\left [G(c) + A\right ] = G(d) - G(c) = G(d) =\int _{ c}^{d}f(s)Ds,& & {}\\ \end{array}$$

since \(G(c) =\int _{ c}^{c}f(s)Ds = 0.\)

 □ 

Remark 5.18.

Note that the Fundamental Theorem of Calculus tells us that given \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), a point \(t_{0} \in \mathbb{T}_{\alpha }\), and a real number C, the unique solution of the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& f(t) {}\\ y(t_{0})& =& C {}\\ \end{array}$$

is given by \(y(t) =\int _{ t_{0}}^{t}f(s)Ds + C\), for \(t \in \mathbb{T}_{\alpha }.\)

The integration by parts formulas in the next theorem are very useful.

Theorem 5.19 (Integration by Parts).

Given two functions \(u,v: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) , if \(c,d \in \mathbb{T}_{\alpha }\) , c < d, then

$$\displaystyle\begin{array}{rcl} \int _{c}^{d}u(t)Dv(t)Dt = u(t)v(t)\Big\vert _{ c}^{d} -\int _{ c}^{d}v(\sigma (t))Du(t)Dt& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \int _{c}^{d}u(\sigma (t))Dv(t)Dt = u(t)v(t)\Big\vert _{ c}^{d} -\int _{ c}^{d}v(t)Du(t)Dt.& & {}\\ \end{array}$$

Proof.

By the product rule

$$\displaystyle\begin{array}{rcl} D[u(t)v(t)] = v(\sigma (t))Du(t) + (Dv(t))u(t).& & {}\\ \end{array}$$

Using the fundamental theorem of calculus, we get

$$\displaystyle\begin{array}{rcl} \int _{c}^{d}\left [u(t)Dv(t) + v(\sigma (t))Du(t)\right ]Dt = u(d)v(d) - u(c)v(c).& & {}\\ \end{array}$$

It follows that

$$\displaystyle{ \int _{c}^{d}u(t)Dv(t)Dt = \left.u(t)v(t)\right \vert _{ c}^{d} -\int _{ c}^{d}v(\sigma (t))Du(t)Dt. }$$

This proves the first integration by parts formula. Interchanging u(t) and v(t) leads to the second integration by parts formula. □ 
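The first integration by parts formula can be sanity-checked numerically as well; the sketch below re-declares the assumed helpers from the earlier sketches so that it runs on its own.

```python
import math

a, b = 2.0, 1.0                    # assumed sample parameters
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
N = lambda t, s: round(math.log(mu(t) / mu(s), a))
D = lambda f: (lambda t: (f(sigma(t)) - f(t)) / mu(t))

def integral(f, c, d):             # Definition 5.11 for c <= d
    total, t = 0.0, c
    for _ in range(N(d, c)):
        total += f(t) * mu(t)
        t = sigma(t)
    return total

u = lambda t: t
v = lambda t: t * t
c, d = 1.0, 15.0                   # points of T_1

lhs = integral(lambda t: u(t) * D(v)(t), c, d)
rhs = u(d) * v(d) - u(c) * v(c) - integral(lambda t: v(sigma(t)) * D(u)(t), c, d)
assert abs(lhs - rhs) < 1e-9       # first formula of Theorem 5.19
```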

5.5 Falling and Rising Functions

In this section, we define the falling and rising functions for the mixed time scale \(\mathbb{T}_{\alpha }\), which are analogous to the falling and rising functions for the delta calculus in Chap. 1. Several properties of these functions will be given, including the appropriate power rules.

First we define the appropriate rising and falling functions for the mixed time scale calculus.

Definition 5.20.

Assume \(n \in \mathbb{N}\). We define the rising function, \(t^{\overline{n}}\), read “t to the n rising,” by

$$\displaystyle{ t^{\overline{n}}:=\prod _{ j=0}^{n-1}\sigma ^{j}(t),\quad t^{\overline{0}}:= 1, }$$

for \(t \in \mathbb{R}.\) We also define the falling function, \(t^{\underline{n}},\) read “t to the n falling,” by

$$\displaystyle{ t^{\underline{n}}:=\prod _{ j=0}^{n-1}\rho ^{j}(t),\quad t^{\underline{0}}:= 1, }$$

for \(t \in \mathbb{R}.\)

Definition 5.21.

For \(n \in \mathbb{Z}\), we define the a-square bracket of n by

$$\displaystyle\begin{array}{rcl} [n]_{a}&:= \left \{\begin{array}{@{}l@{\quad }l@{}} \dfrac{a^{n} - 1} {a - 1} \quad &\mbox{ for $a> 1$} \\ \quad n \quad &\mbox{ for $a = 1$} \end{array} \right..& {}\\ & & {}\\ \end{array}$$

Theorem 5.22 (Properties of a-Square Bracket of n).

For \(n \in \mathbb{Z}\) and a ≥ 1, the following hold:

  1. (i)

    \([0]_{a} = 0;\)

  2. (ii)

    \([1]_{a} = 1;\)

  3. (iii)

    \([n]_{a} + a^{n} = [n + 1]_{a};\)

  4. (iv)

    \(a[n]_{a} + 1 = [n + 1]_{a};\)

  5. (v)

    \([-n]_{a} = -\dfrac{[n]_{a}} {a^{n}}.\)

Proof.

To see that (iii) holds for \(a> 1\), note that

$$\displaystyle{ [n]_{a} + a^{n} = \frac{a^{n} - 1} {a - 1} + a^{n} = \frac{a^{n+1} - 1} {a - 1} = [n + 1]_{a}. }$$

Also (iii) trivially holds for a = 1. 

To see that (iv) holds for a > 1 note that

$$\displaystyle\begin{array}{rcl} a[n]_{a} + 1& =& a\frac{a^{n} - 1} {a - 1} + 1 = \frac{a^{n+1} - a} {a - 1} + \frac{a - 1} {a - 1} {}\\ & =& \frac{a^{n+1} - 1} {a - 1} = [n + 1]_{a}. {}\\ \end{array}$$

Also for a = 1 we have that

$$\displaystyle{a[n]_{a} + 1 = n + 1 = [n + 1]_{a}.}$$

Property (v) holds for a > 1, since

$$\displaystyle\begin{array}{rcl} [-n]_{a} = \frac{a^{-n} - 1} {a - 1} = -a^{-n}\frac{a^{n} - 1} {a - 1} = -\frac{[n]_{a}} {a^{n}}.& & {}\\ \end{array}$$

Furthermore, (v) is trivially true for a = 1. □ 

We may use the a-square bracket function to simplify the expressions that we found for the forward and backward jump operators.

Theorem 5.23.

For \(n \in \mathbb{N}\) the following hold:

  1. (i)

    \(\sigma ^{n}(t) = a^{n}t + [n]_{a}b;\)

  2. (ii)

    \(\rho ^{n}(t) = a^{-n}t + [-n]_{a}b;\)

  3. (iii)

    \(\sigma ^{n}(t) - t = [n]_{a}\mu (t).\)

Proof.

In order to prove property (i), we have by part (i) of Theorem 5.5 that

$$\displaystyle{ \sigma ^{n}(t) = a^{n}t + b\sum \limits _{ j=0}^{n-1}a^{j} = a^{n}t + b\left (\dfrac{a^{n} - 1} {a - 1} \right ) = a^{n}t + [n]_{ a}b. }$$

For a = 1, property (i) also holds, since then \([n]_{a} = n\) and \(\sigma ^{n}(t) = t + nb\). Similarly part (i) of Theorem 5.6 gives us that (ii) holds. Finally, using property (i) we have that

$$\displaystyle\begin{array}{rcl} \sigma ^{n}(t) - t& =& a^{n}t + [n]_{ a}b - t = (a^{n} - 1)t + \left (\dfrac{a^{n} - 1} {a - 1} \right )b {}\\ & =& \left (\dfrac{a^{n} - 1} {a - 1} \right )\left [(a - 1)t + b\right ] = [n]_{a}\mu (t), {}\\ \end{array}$$

and hence (iii) holds. □ 
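Theorem 5.23 (i) gives a closed form for \(\sigma ^{n}\) that avoids iterating. The following sketch (assumed sample parameters, not from the text) compares the two.

```python
a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
bracket = lambda n: (a**n - 1) / (a - 1) if a > 1 else float(n)  # [n]_a

t, s_n = 3.0, 3.0
for n in range(1, 8):
    s_n = sigma(s_n)               # sigma^n(t) by iteration
    assert abs(s_n - (a**n * t + bracket(n) * b)) < 1e-9  # Theorem 5.23 (i)
```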

Next we prove a power rule.

Theorem 5.24 (Power Rule).

For \(n \in \mathbb{N}\) the following holds:

$$\displaystyle\begin{array}{rcl} Dt^{\overline{n}} = [n]_{a}(\sigma (t))^{\overline{n - 1}},\quad \mbox{ for}\quad t \in \mathbb{R}.& & {}\\ \end{array}$$

Proof.

For \(t \in \mathbb{R}\) we have that

$$\displaystyle\begin{array}{rcl} Dt^{\overline{n}} = \dfrac{\prod \limits _{j=0}^{n-1}\sigma ^{j}(\sigma (t)) -\prod \limits _{j=0}^{n-1}\sigma ^{j}(t)} {\mu (t)} & =& \frac{[\sigma ^{n}(t) - t]} {\mu (t)} \prod \limits _{j=1}^{n-1}\sigma ^{j}(t) {}\\ & =& \frac{[\sigma ^{n}(t) - t]} {\mu (t)} \prod \limits _{j=0}^{n-2}\sigma ^{j}(\sigma (t)) {}\\ & =& [n]_{a}\left [\sigma (t)\right ]^{\overline{n - 1}}, {}\\ \end{array}$$

where in the last step we used part (iii) of Theorem 5.23. □ 

Definition 5.25.

For \(n \in \mathbb{Z}\) and a ≥ 1, we define the a-bracket of n by

$$\displaystyle\begin{array}{rcl} \{n\}_{a}&:= \left \{\begin{array}{@{}l@{\quad }l@{}} \dfrac{a^{n} - 1} {(a - 1)a^{n-1}}\quad &\mbox{ for $a> 1$} \\ \quad n \quad &\mbox{ for $a = 1$.} \end{array} \right.& {}\\ \end{array}$$

The following theorem gives us several properties of the a-bracket function.

Theorem 5.26.

The following hold:

  1. (i)

    \(\{0\}_{a} = 0;\)

  2. (ii)

    \(\{1\}_{a} = 1;\)

  3. (iii)

    \(\{n\}_{a} = \dfrac{[n]_{a}} {a^{n-1}};\)

  4. (iv)

    \(\{n\}_{a} = -a[-n]_{a};\)

  5. (v)

    \(\sigma (t) -\rho ^{n-1}(t) =\{ n\}_{a}\mu (t).\)

Proof.

We will just prove part (iv) holds when a > 1. This follows from

$$\displaystyle{ \{n\}_{a} = \frac{[n]_{a}} {a^{n-1}} = -a[-n]_{a}, }$$

where the first equality is by part (iii) of this theorem and the second equality is by part (v) of Theorem 5.22. □ 

Theorem 5.27 (Power Rule).

For \(n \in \mathbb{N}\) the following holds:

$$\displaystyle{ Dt^{\underline{n}} =\{ n\}_{ a}t^{\underline{n-1}},\quad \mbox{ for}\quad t \in \mathbb{R}. }$$

Proof.

To establish the result, we calculate

$$\displaystyle\begin{array}{rcl} Dt^{\underline{n}} = \dfrac{[\sigma (t)]^{\underline{n}} - t^{\underline{n}}} {\mu (t)} & =& \dfrac{\prod \limits _{j=0}^{n-1}\rho ^{j}(\sigma (t)) -\prod \limits _{j=0}^{n-1}\rho ^{j}(t)} {\mu (t)} {}\\ & =& \dfrac{\sigma (t)\prod \limits _{j=1}^{n-1}\rho ^{j-1}(t) -\rho ^{n-1}(t)\prod \limits _{j=1}^{n-1}\rho ^{j-1}(t)} {\mu (t)} {}\\ & =& \dfrac{\sigma (t) -\rho ^{n-1}(t)} {\mu (t)} \prod \limits _{j=0}^{n-2}\rho ^{j}(t) {}\\ & =& \{n\}_{a}t^{\underline{n-1}}, {}\\ \end{array}$$

with the last equality by part (v) of Theorem 5.26. □ 
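Both power rules are easy to confirm numerically. The sketch below (assumed parameters and helper names) computes the rising and falling functions of Definition 5.20 and checks Theorems 5.24 and 5.27 at a sample point.

```python
a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
rho = lambda t: (t - b) / a
mu = lambda t: (a - 1) * t + b
bra = lambda n: (a**n - 1) / (a - 1)        # [n]_a
cur = lambda n: bra(n) / a**(n - 1)         # {n}_a, Theorem 5.26 (iii)

def rising(t, n):                  # t * sigma(t) * ... * sigma^{n-1}(t)
    p = 1.0
    for _ in range(n):
        p, t = p * t, sigma(t)
    return p

def falling(t, n):                 # t * rho(t) * ... * rho^{n-1}(t)
    p = 1.0
    for _ in range(n):
        p, t = p * t, rho(t)
    return p

n, t = 3, 3.0
Drise = (rising(sigma(t), n) - rising(t, n)) / mu(t)
Dfall = (falling(sigma(t), n) - falling(t, n)) / mu(t)
assert abs(Drise - bra(n) * rising(sigma(t), n - 1)) < 1e-9   # Theorem 5.24
assert abs(Dfall - cur(n) * falling(t, n - 1)) < 1e-9         # Theorem 5.27
```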

5.6 Discrete Taylor’s Theorem

In this section we will develop Taylor’s Theorem for functions on \(\mathbb{T}_{\alpha }\). We begin with the Taylor monomials for the mixed time scale \(\mathbb{T}_{\alpha }\).

Definition 5.28.

We define the Taylor monomials for the mixed time scale \(\mathbb{T}_{\alpha }\) as follows. First put \(h_{0}(t,s) = 1\) for \(t,s \in \mathbb{T}_{\alpha }\). Then for each \(n \in \mathbb{N}_{1}\) we recursively define \(h_{n}(t,s)\), for each fixed \(s \in \mathbb{T}_{\alpha }\), to be the unique solution (see Remark 5.18) of the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& h_{n-1}(t,s),\quad t \in \mathbb{T}_{\alpha } {}\\ y(s)& =& 0. {}\\ \end{array}$$

In the next theorem we derive a formula for \(h_{n}(t,s)\) (see Erbe et al. [76]).

Theorem 5.29.

The Taylor monomials, \(h_{n}(t,s)\) , \(n \in \mathbb{N}_{0}\) , for the mixed time scale \(\mathbb{T}_{\alpha }\) are given by

$$\displaystyle{h_{n}(t,s) =\prod _{ k=1}^{n}\frac{t -\sigma ^{k-1}(s)} {[k]_{a}} }$$

for \(t,s \in \mathbb{T}_{\alpha }.\)

Proof.

For \(n \in \mathbb{N}_{0}\), let

$$\displaystyle{f_{n}(t,s):=\prod _{ k=1}^{n}\frac{t -\sigma ^{k-1}(s)} {[k]_{a}},\quad \mbox{ for}\quad t,s \in \mathbb{T}_{\alpha }.}$$

We prove by induction on n, for \(n \in \mathbb{N}_{0},\) that \(f_{n}(t,s) = h_{n}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). By our convention on products \(f_{0}(t,s) = 1 = h_{0}(t,s),\) and it is easy to see that \(f_{1}(t,s) = t - s = h_{1}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). Assume n ≥ 1 and \(f_{k}(t,s) = h_{k}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\), 0 ≤ k ≤ n. It remains to show that \(f_{n+1}(t,s) = h_{n+1}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). First, note that

$$\displaystyle\begin{array}{rcl} f_{n+1}(t,s) =\prod _{ k=1}^{n+1}\frac{t -\sigma ^{k-1}(s)} {[k]_{a}} & =& \frac{t -\sigma ^{n}(s)} {[n + 1]_{a}}\prod _{k=1}^{n}\frac{t -\sigma ^{k-1}(s)} {[k]_{a}} {}\\ & =& \frac{t -\sigma ^{n}(s)} {[n + 1]_{a}}\;f_{n}(t,s) {}\\ & =& \frac{t -\sigma ^{n}(s)} {[n + 1]_{a}}\;h_{n}(t,s) {}\\ \end{array}$$

by the induction hypothesis. Fix \(s \in \mathbb{T}_{\alpha }\), then using the product rule

$$\displaystyle\begin{array}{rcl} & & Df_{n+1}(t,s) = D\left (\frac{t -\sigma ^{n}(s)} {[n + 1]_{a}}\;h_{n}(t,s)\right ) {}\\ & & = \frac{\sigma (t) -\sigma ^{n}(s)} {[n + 1]_{a}} \;h_{n-1}(t,s) + \frac{h_{n}(t,s)} {[n + 1]_{a}} {}\\ & & = \frac{(at + b - a^{n}s - [n]_{a}b)} {[n + 1]_{a}} f_{n-1}(t,s) + \frac{f_{n}(t,s)} {[n + 1]_{a}}\quad \;\mbox{ using Theorem 5.23, (i)} {}\\ & & = \frac{a(t - a^{n-1}s - b[n - 1]_{a})} {[n + 1]_{a}} f_{n-1}(t,s) + \frac{f_{n}(t,s)} {[n + 1]_{a}}\;\mbox{ using Theorem 5.22, (iv)} {}\\ & & = \frac{a(t -\sigma ^{n-1}(s))} {[n + 1]_{a}} f_{n-1}(t,s) + \frac{f_{n}(t,s)} {[n + 1]_{a}}\;\quad \quad \quad \quad \;\mbox{ using Theorem 5.23, (i)} {}\\ & & = \frac{a[n]_{a}f_{n}(t,s)} {[n + 1]_{a}} + \frac{f_{n}(t,s)} {[n + 1]_{a}}\; {}\\ & & = \frac{a[n]_{a} + 1} {[n + 1]_{a}} \;f_{n}(t,s) {}\\ & & = f_{n}(t,s)\;\quad \quad \quad \mbox{ using Theorem 5.22, (iv)} {}\\ & & = h_{n}(t,s). {}\\ \end{array}$$

Since, for each fixed s, \(y(t) = f_{n+1}(t,s)\) solves the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& h_{n}(t,s),\quad t \in \mathbb{T}_{\alpha } {}\\ y(s)& =& 0, {}\\ \end{array}$$

we have by Remark 5.18 that \(h_{n+1}(t,s) = f_{n+1}(t,s)\) for \(t \in \mathbb{T}_{\alpha }\). Finally, notice that since \(s \in \mathbb{T}_{\alpha }\) is arbitrary we conclude that \(h_{n+1}(t,s) = f_{n+1}(t,s)\) for all \(t,s \in \mathbb{T}_{\alpha }.\) □ 
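The product formula of Theorem 5.29 is straightforward to evaluate. The sketch below (assumed sample parameters, not from the text) also checks the defining IVP: \(Dh_{n+1}(t,s) = h_{n}(t,s)\) and \(h_{n+1}(s,s) = 0\).

```python
a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
bra = lambda k: (a**k - 1) / (a - 1)        # [k]_a

def h(n, t, s):
    # Taylor monomial: product of (t - sigma^{k-1}(s)) / [k]_a, k = 1..n
    prod, sk = 1.0, s
    for k in range(1, n + 1):
        prod *= (t - sk) / bra(k)
        sk = sigma(sk)
    return prod

n, t, s = 2, 7.0, 1.0
Dh = (h(n + 1, sigma(t), s) - h(n + 1, t, s)) / mu(t)
assert abs(Dh - h(n, t, s)) < 1e-9          # Dy(t) = h_n(t,s)
assert h(n + 1, s, s) == 0.0                # y(s) = 0
```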

Definition 5.30.

For \(n \in \mathbb{N}_{0}\), we define the a-falling-bracket (of n) factorial, denoted by \(\{n\}_{a}!\) , recursively by \(\{0\}_{a}! = 1\) and for \(n \in \mathbb{N}_{1}\)

$$\displaystyle\begin{array}{rcl} \{n\}_{a}! =\{ n\}_{a}\left (\{n - 1\}_{a}!\right ).& & {}\\ \end{array}$$

Definition 5.31.

For \(n \in \mathbb{N}_{0}\), we define the a-rising-bracket (of n) factorial, denoted by \([n]_{a}!\) , recursively by \([0]_{a}! = 1\) and for \(n \in \mathbb{N}_{1}\)

$$\displaystyle\begin{array}{rcl} [n]_{a}! = [n]_{a}\left ([n - 1]_{a}!\right ).& & {}\\ \end{array}$$

The following theorem is a generalization of the binomial expansion of \((t - t)^{n} = 0\), \(n \in \mathbb{N}_{1}.\)

Theorem 5.32 (Estes [34, 78]).

Assume n ≥ 1 and \(t \in \mathbb{T}_{\alpha }\) . Then

$$\displaystyle\begin{array}{rcl} \sum \limits _{i=0}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} = 0.& & {}\\ \end{array}$$

Proof.

For \(n \in \mathbb{N}_{1}\), consider

$$\displaystyle\begin{array}{rcl} f_{n}(t):=\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!}.& & {}\\ \end{array}$$

We will prove by induction on n that \(f_{n}(t) = 0\). For the base case n = 1 we have

$$\displaystyle\begin{array}{rcl} f_{1}(t) = t - t = 0.& & {}\\ \end{array}$$

Assume \(n \in \mathbb{N}_{1}\) and \(f_{n}(t) = 0\). It remains to show \(f_{n+1}(t) = 0\). Using the product rule

$$\displaystyle\begin{array}{rcl} Df_{n+1}(t)& =& D\left (\sum \limits _{i=0}^{n+1}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} \right ) {}\\ & =& D\left (\dfrac{(-1)^{n+1}t^{\overline{n + 1}}} {[n + 1]_{a}!} +\sum \limits _{ i=1}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} + \dfrac{t^{\underline{n+1}}} {\{n + 1\}_{a}!}\right ) {}\\ & =& \sum \limits _{i=1}^{n}\dfrac{(-1)^{i}\left (\sigma (t)\right )^{\overline{i - 1}}\left (\sigma (t)\right )^{\underline{n+1-i}}} {[i - 1]_{a}!\{n + 1 - i\}_{a}!} {}\\ & & \quad \quad +\sum \limits _{ i=1}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} + \dfrac{(-1)^{n+1}\left (\sigma (t)\right )^{\overline{n}}} {[n]_{a}!} + \dfrac{t^{\underline{n}}} {\{n\}_{a}!} {}\\ & =& \sum \limits _{i=1}^{n+1}\dfrac{(-1)^{i}\left (\sigma (t)\right )^{\overline{i - 1}}\left (\sigma (t)\right )^{\underline{n+1-i}}} {[i - 1]_{a}!\{n + 1 - i\}_{a}!} +\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} {}\\ & =& -\sum \limits _{i=0}^{n}\dfrac{(-1)^{i}\left (\sigma (t)\right )^{\overline{i}}\left (\sigma (t)\right )^{\underline{n-i}}} {[i]_{a}!\{n - i\}_{a}!} +\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} {}\\ & =& -f_{n}(\sigma (t)) + f_{n}(t) {}\\ & =& 0. {}\\ \end{array}$$

Since \(Df_{n+1}(t) = 0\), we have that \(f_{n+1}(t) = C\) for \(t \in \mathbb{T}_{\alpha }\), for some constant C. Note that \(f_{n+1}(t)\) can be expanded to a polynomial in t, and that each term of the sum

$$\displaystyle\begin{array}{rcl} f_{n+1}(t) =\sum \limits _{ i=0}^{n+1}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (t^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} & & {}\\ \end{array}$$

is divisible by t. Thus, the polynomial expansion of \(f_{n+1}(t)\) has no constant term and by the polynomial principle, C = 0. We have shown that \(f_{n+1}(t) = 0\). This completes the proof by induction. □

Next we prove an alternate formula for the Taylor monomials due to Estes [34, 78].

Theorem 5.33.

Assume \(n \in \mathbb{N}_{0}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then

$$\displaystyle\begin{array}{rcl} h_{n}(t,s) =\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (s^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!}.& & {}\\ \end{array}$$

Proof.

Fix s and let

$$\displaystyle\begin{array}{rcl} f_{n}(t,s):=\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (s^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!}.& & {}\\ \end{array}$$

We will show by induction that \(f_{n}(t,s) = h_{n}(t,s)\). The base case \(f_{0}(t,s) = h_{0}(t,s)\) follows from the definitions. Assume that \(n \in \mathbb{N}_{0}\) and \(f_{n}(t,s) = h_{n}(t,s)\). From Theorem 5.32 (with n replaced by n + 1), we know that \(f_{n+1}(s,s) = 0\). Also

$$\displaystyle\begin{array}{rcl} Df_{n+1}(t,s)& =& D\left (\sum \limits _{i=0}^{n+1}\dfrac{(-1)^{i}\left (s^{\overline{i}}\right )\left (t^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} \right ) {}\\ & =& \sum \limits _{i=0}^{n}\dfrac{(-1)^{i}\left (s^{\overline{i}}\right )\left (t^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} {}\\ & =& f_{n}(t,s) = h_{n}(t,s). {}\\ \end{array}$$

Hence, for each fixed \(s \in \mathbb{T}_{\alpha }\), \(y(t) = f_{n+1}(t,s)\) solves the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& h_{n}(t,s),\quad t \in \mathbb{T}_{\alpha } {}\\ y(s)& =& 0. {}\\ \end{array}$$

So, by the uniqueness of solutions to IVPs (see Remark 5.18), we have that \(f_{n+1}(t,s) = h_{n+1}(t,s)\) for each fixed \(s \in \mathbb{T}_{\alpha }.\) This completes the proof by induction.  □ 

Later we will see that we need to take the mixed time scale difference of \(h_{n}(t,s)\) with respect to its second variable. To do this we now introduce the type-two Taylor monomials, \(g_{n}(t,s)\), \(n \in \mathbb{N}_{0}.\)

Definition 5.34.

We define the type-two Taylor monomials \(g_{n}: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\), for \(n \in \mathbb{N}_{0}\), recursively as follows:

$$\displaystyle{g_{0}(t,s) = 1\quad \mbox{ for}\quad t,s \in \mathbb{T}_{\alpha },}$$

and for each \(n \in \mathbb{N}_{1}\), \(g_{n}(t,s)\), for each fixed \(s \in \mathbb{T}_{\alpha }\), is the unique solution (see Remark 5.18) of the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& -g_{n-1}(\sigma (t),s),\quad t \in \mathbb{T}_{\alpha } {}\\ y(s)& =& 0. {}\\ \end{array}$$

In the next theorem we give two different formulas for the type-two Taylor monomials.

Theorem 5.35.

The type-two Taylor monomials are given by

$$\displaystyle\begin{array}{rcl} g_{n}(t,s) = h_{n}(s,t) =\sum \limits _{ i=0}^{n}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (s^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} =\prod _{ k=1}^{n}\frac{s -\sigma ^{k-1}(t)} {[k]_{a}},& & {}\\ \end{array}$$

for \(t,s \in \mathbb{T}_{\alpha }.\)

Proof.

We prove by induction on n that \(h_{n}(s,t) = g_{n}(t,s),\) \(t,s \in \mathbb{T}_{\alpha }\), \(n \in \mathbb{N}_{0}\). Obviously this holds for n = 0. Assume n ≥ 0 and \(h_{n}(s,t) = g_{n}(t,s)\) for \(t,s \in \mathbb{T}_{\alpha }\). Fix \(s \in \mathbb{T}_{\alpha }\) and consider

$$\displaystyle\begin{array}{rcl} Dh_{n+1}(s,t)& =& D\left (\sum \limits _{i=0}^{n+1}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (s^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} \right ) {}\\ & =& \sum \limits _{i=1}^{n+1}\dfrac{(-1)^{i}\left (\sigma (t)\right )^{\overline{i - 1}}\left (s^{\underline{n+1-i}}\right )} {[i - 1]_{a}!\{n + 1 - i\}_{a}!} \quad \quad \quad \quad \mbox{ by Theorem 5.24} {}\\ & =& -\sum \limits _{i=0}^{n}\dfrac{(-1)^{i}\left (\sigma (t)\right )^{\overline{i}}\left (s^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} {}\\ & =& -h_{n}(s,\sigma (t)) = -g_{n}(\sigma (t),s)\quad \mbox{ by the induction hypothesis} {}\\ \end{array}$$

for \(t \in \mathbb{T}_{\alpha }.\) Also, by Theorem 5.32, \(h_{n+1}(s,s) = 0.\) So, \(y(t) = h_{n+1}(s,t)\) satisfies for each fixed s the same IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& -g_{n}(\sigma (t),s) {}\\ y(s)& =& 0 {}\\ \end{array}$$

as g n+1(t, s). Hence, by the uniqueness (see Remark 5.18) of solutions to IVPs

$$\displaystyle{g_{n+1}(t,s) = h_{n+1}(s,t) =\sum \limits _{ i=0}^{n+1}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (s^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!},}$$

for \(t \in \mathbb{T}_{\alpha }\). □ 

We can now state our power rule as follows.

Theorem 5.36.

Assume \(n \in \mathbb{N}_{0}\) . Then for each fixed \(s \in \mathbb{T}_{\alpha }\)

$$\displaystyle{Dh_{n+1}(t,s) = h_{n}(t,s)}$$

and

$$\displaystyle{Dh_{n+1}(s,t) = -h_{n}(s,\sigma (t))}$$

for \(t \in \mathbb{T}_{\alpha }.\)

Proof.

By the definition (Definition 5.28) of \(h_{n}(t,s)\) we have for each fixed \(s \in \mathbb{T}_{\alpha }\) that \(Dh_{n+1}(t,s) = h_{n}(t,s)\) for all \(t \in \mathbb{T}_{\alpha }.\) To see that \(Dh_{n+1}(s,t) = -h_{n}(s,\sigma (t))\) for \(t \in \mathbb{T}_{\alpha }\), note that by Theorem 5.33

$$\displaystyle\begin{array}{rcl} Dh_{n+1}(s,t)& =& D\sum \limits _{i=0}^{n+1}\dfrac{(-1)^{i}\left (t^{\overline{i}}\right )\left (s^{\underline{n+1-i}}\right )} {[i]_{a}!\{n + 1 - i\}_{a}!} {}\\ & =& \sum \limits _{i=1}^{n+1}\dfrac{(-1)^{i}\left ([\sigma (t)]^{\overline{i - 1}}\right )\left (s^{\underline{n+1-i}}\right )} {[i - 1]_{a}!\{n + 1 - i\}_{a}!} {}\\ & =& -\sum _{i=0}^{n}\dfrac{(-1)^{i}\left ([\sigma (t)]^{\overline{i}}\right )\left (s^{\underline{n-i}}\right )} {[i]_{a}!\{n - i\}_{a}!} {}\\ & =& -h_{n}(s,\sigma (t)), {}\\ \end{array}$$

which completes the proof. □ 

Theorem 5.37 (Taylor’s Formula).

Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) , \(n \in \mathbb{N}_{0}\) and \(s \in \mathbb{T}_{\alpha }\) . Then

$$\displaystyle{ f(t) = p_{n}(t,s) + R_{n}(t,s),\quad t \in \mathbb{T}_{\alpha }, }$$

where the n-th degree Taylor polynomial, \(p_{n}(t,s)\) , based at s, is given by

$$\displaystyle{ p_{n}(t,s) =\sum _{ k=0}^{n}D^{k}f(s)h_{ k}(t,s),\quad t \in \mathbb{T}_{\alpha }, }$$

and the remainder term, \(R_{n}(t,s)\) , based at s, is given by

$$\displaystyle{ R_{n}(t,s) =\int _{ s}^{t}h_{ n}(t,\sigma (\tau ))D^{n+1}f(\tau )D\tau,\quad t \in \mathbb{T}_{\alpha }. }$$

Proof.

We prove Taylor’s Formula by induction. For n = 0 we have

$$\displaystyle{ R_{0}(t,s) =\int _{ s}^{t}h_{ 0}(t,\sigma (\tau ))Df(\tau )D\tau =\int _{ s}^{t}Df(\tau )D\tau = f(t) - f(s). }$$

Solving for f(t) we get the desired result

$$\displaystyle{f(t) = f(s) + R_{0}(t,s) = p_{0}(t,s) + R_{0}(t,s),\quad t \in \mathbb{T}_{\alpha }.}$$

Now assume that n ≥ 0 and \(f(t) = p_{n}(t,s) + R_{n}(t,s)\), for \(t \in \mathbb{T}_{\alpha }\). Then integrating by parts we obtain

$$\displaystyle\begin{array}{rcl} R_{n+1}(t,s)& =& \int _{s}^{t}h_{ n+1}(t,\sigma (\tau ))D^{n+2}f(\tau )D\tau {}\\ & =& h_{n+1}(t,\tau )D^{n+1}f(\tau )\Big\vert _{\tau =s}^{t} +\int _{ s}^{t}h_{ n}(t,\sigma (\tau ))D^{n+1}f(\tau )D\tau {}\\ & =& -h_{n+1}(t,s)D^{n+1}f(s) +\int _{ s}^{t}h_{ n}(t,\sigma (\tau ))D^{n+1}f(\tau )D\tau {}\\ & =& -h_{n+1}(t,s)D^{n+1}f(s) + R_{ n}(t,s) {}\\ & =& -h_{n+1}(t,s)D^{n+1}f(s) + f(t) - p_{ n}(t,s) {}\\ & =& -p_{n+1}(t,s) + f(t). {}\\ \end{array}$$

Solving for f(t) we obtain the desired result

$$\displaystyle{f(t) = p_{n+1}(t,s) + R_{n+1}(t,s),\quad t \in \mathbb{T}_{\alpha }.}$$

This completes the proof by induction. □ 

We can now use Taylor’s Theorem to prove the following variation of constants formula.

Theorem 5.38 (Variation of Constants Formula).

Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) and \(s \in \mathbb{T}_{\alpha }\) . Then the unique solution of the IVP

$$\displaystyle\begin{array}{rcl} D^{n}y(t)& =& f(t),\quad t \in \mathbb{T}_{\alpha } {}\\ D^{k}y(s)& =& C_{ k},\quad 0 \leq k \leq n - 1, {}\\ \end{array}$$

where \(C_{k}\) , 0 ≤ k ≤ n − 1, are given constants, is given by

$$\displaystyle{y(t) =\sum _{ k=0}^{n-1}C_{ k}h_{k}(t,s) +\int _{ s}^{t}h_{ n-1}(t,\sigma (\tau ))f(\tau )D\tau,}$$

for \(t \in \mathbb{T}_{\alpha }.\)

Proof.

It is easy to see that the given IVP has a unique solution y(t). By Taylor’s Formula (see Theorem 5.37) applied to y(t) with n replaced by n − 1, we get

$$\displaystyle\begin{array}{rcl} y(t)& =& \sum _{k=0}^{n-1}D^{k}y(s)h_{ k}(t,s) +\int _{ s}^{t}h_{ n-1}(t,\sigma (\tau ))D^{n}y(\tau )D\tau {}\\ & =& \sum _{k=0}^{n-1}C_{ k}h_{k}(t,s) +\int _{ s}^{t}h_{ n-1}(t,\sigma (\tau ))f(\tau )D\tau, {}\\ \end{array}$$

for \(t \in \mathbb{T}_{\alpha }.\) □ 

Example 5.39.

Consider the mixed time scale where α = 1 and \(\sigma (t) = 2t + 1\) (so a = 2 and b = 1). Use the variation of constants formula in Theorem 5.38 to solve the IVP

$$\displaystyle\begin{array}{rcl} D^{2}y(t)& =& t - 1,\quad t \in \mathbb{T}_{ 1} {}\\ y(1)& =& 2,\;\;Dy(1) = 0. {}\\ \end{array}$$

By the variation of constants formula in Theorem 5.38, we have that

$$\displaystyle\begin{array}{rcl} y(t)& =& 2h_{0}(t,1) + 0h_{1}(t,1) +\int _{ 1}^{t}h_{ 1}(t,\sigma (s))(s - 1)Ds {}\\ & =& 2 +\int _{ 1}^{t}h_{ 1}(t,\sigma (s))h_{1}(s,1)Ds. {}\\ \end{array}$$

Integrating by parts we calculate

$$\displaystyle\begin{array}{rcl} y(t)& =& 2 + h_{1}(t,s)h_{2}(s,1)\Big\vert _{s=1}^{t} +\int _{ 1}^{t}h_{ 2}(s,1)Ds {}\\ & =& 2 + h_{3}(s,1)\Big\vert _{s=1}^{t} = 2 + h_{ 3}(t,1) {}\\ & =& 2 + \frac{1} {21}(t - 1)(t - 3)(t - 7), {}\\ \end{array}$$

for \(t \in \mathbb{T}_{1}.\) The reader could check this result by integrating both sides of the equation \(D^{2}y(t) = t - 1\) twice from 1 to t.
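A numerical check of this example (with assumed helper names, not from the text) confirms that the closed form satisfies the IVP at the first few points of \(\mathbb{T}_{1}\):

```python
a, b = 2.0, 1.0                    # the example's parameters: sigma(t) = 2t + 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
D = lambda f: (lambda t: (f(sigma(t)) - f(t)) / mu(t))

y = lambda t: 2.0 + (t - 1) * (t - 3) * (t - 7) / 21.0
D2y = D(D(y))

assert abs(y(1.0) - 2.0) < 1e-9 and abs(D(y)(1.0)) < 1e-9  # initial conditions
for t in [1.0, 3.0, 7.0, 15.0]:
    assert abs(D2y(t) - (t - 1.0)) < 1e-9                  # D^2 y(t) = t - 1
```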

5.7 Exponential Function

In this section we define the exponential function on a mixed time scale and give several of its properties. First we define the set of regressive functions by

$$\displaystyle{\mathcal{R} =\{ p: \mathbb{T}_{\alpha } \rightarrow \mathbb{C}\;\;\mbox{ such that}\;\;1 + p(t)\mu (t)\neq 0\;\;\mbox{ for}\;\;t \in \mathbb{T}_{\alpha }\}.}$$

Definition 5.40.

The mixed time scale exponential function based at \(s \in \mathbb{T}_{\alpha }\), denoted by \(e_{p}(t,s)\), where \(p \in \mathcal{R}\), is defined to be the unique solution, y(t), of the initial value problem

$$\displaystyle{ Dy(t) = p(t)y(t), }$$
(5.1)
$$\displaystyle{ \quad y(s) = 1. }$$
(5.2)

In the next theorem we give a formula for \(e_{p}(t,s)\).

Theorem 5.41.

Assume \(p \in \mathcal{R}\) and \(s \in \mathbb{T}_{\alpha }.\) Then

$$\displaystyle\begin{array}{rcl} e_{p}(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} \prod \limits _{j=0}^{N(t,s)-1}\left [1 + p(\sigma ^{j}(s))\mu (\sigma ^{j}(s))\right ],\quad &\text{ if }t> s \\ \quad 1, \quad &\text{ if }\ t = s \\ \prod \limits _{j=1}^{-N(t,s)} \dfrac{1} {\left [1 + p(\rho ^{j}(s))\mu (\rho ^{j}(s))\right ]},\quad &\text{ if }t <s. \end{array} \right.& & {}\\ \end{array}$$

Proof.

It suffices to show (see Remark 5.18) that \(y(t) = e_{p}(t,s)\) as defined in the statement of this theorem satisfies the IVP

$$\displaystyle\begin{array}{rcl} Dy(t)& =& p(t)y(t),\quad t \in \mathbb{T}_{\alpha } {}\\ y(s)& =& 1. {}\\ \end{array}$$

It is clear that \(e_{p}(s,s) = 1\). It remains to show that \(De_{p}(t,s) = p(t)e_{p}(t,s)\). Consider the case that \(t =\sigma ^{k}(s)\) for k ≥ 1. Then

$$\displaystyle\begin{array}{rcl} De_{p}(t,s)& =& \dfrac{1} {\mu (t)}\prod \limits _{j=0}^{N(\sigma (t),s)-1}\left [1 + p\left (\sigma ^{j}(s)\right )\mu \left (\sigma ^{j}(s)\right )\right ] {}\\ & & \quad \quad - \dfrac{1} {\mu (t)}\prod \limits _{j=0}^{N(t,s)-1}\left [1 + p\left (\sigma ^{j}(s)\right )\mu \left (\sigma ^{j}(s)\right )\right ] {}\\ & =& \dfrac{p(\sigma ^{N(t,s)}(s))\mu \left (\sigma ^{N(t,s)}(s)\right )\prod \limits _{j=0}^{N(t,s)-1}\left [1 + p\left (\sigma ^{j}(s)\right )\mu \left (\sigma ^{j}(s)\right )\right ]} {\mu (t)} {}\\ & =& p(t)e_{p}(t,s). {}\\ \end{array}$$

Consider the case when t = s. Then,

$$\displaystyle\begin{array}{rcl} De_{p}(s,s) = \dfrac{[1 + p(s)\mu (s)] - 1} {\mu (s)} = p(s) = p(s)e_{p}(s,s).& & {}\\ \end{array}$$

Consider the case when t = ρ(s). In this case it follows that

$$\displaystyle\begin{array}{rcl} De_{p}(\rho (s),s)& =& \dfrac{1 - \dfrac{1} {1 + p\left (\rho (s)\right )\mu \left (\rho (s)\right )}} {\mu \left (\rho (s)\right )} {}\\ & =& \dfrac{p\left (\rho (s)\right )} {1 + p\left (\rho (s)\right )\mu \left (\rho (s)\right )} {}\\ & =& p\left (\rho (s)\right )e_{p}(\rho (s),s). {}\\ \end{array}$$

Finally, consider the case when \(t =\rho ^{k}(s)\) for k ≥ 2. In this final case it then holds that

$$\displaystyle\begin{array}{rcl} De_{p}(t,s)& =& \dfrac{1} {\mu (t)}\prod \limits _{j=1}^{-N(\sigma (t),s)} \dfrac{1} {1 + p\left (\rho ^{j}(s)\right )\mu \left (\rho ^{j}(s)\right )} {}\\ & -& \dfrac{1} {\mu (t)}\prod \limits _{j=1}^{-N(t,s)} \dfrac{1} {1 + p\left (\rho ^{j}(s)\right )\mu \left (\rho ^{j}(s)\right )} {}\\ & =& \dfrac{1} {\mu (t)}\left [1 - \dfrac{1} {1 + p\left (\rho ^{-N(t,s)}(s)\right )\mu \left (\rho ^{-N(t,s)}(s)\right )}\right ] {}\\ & & \quad \times \prod \limits _{j=1}^{-N(\sigma (t),s)} \dfrac{1} {1 + p\left (\rho ^{j}(s)\right )\mu \left (\rho ^{j}(s)\right )} {}\\ & =& \dfrac{p(t)} {1 + p(t)\mu (t)}\prod \limits _{j=1}^{-N(\sigma (t),s)} \dfrac{1} {1 + p\left (\rho ^{j}(s)\right )\mu \left (\rho ^{j}(s)\right )} {}\\ & =& p(t)e_{p}(t,s), {}\\ \end{array}$$

which completes the proof. □ 
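The case t > s of Theorem 5.41 is just a finite product, as the following sketch illustrates (assumed parameters and a sample regressive p of our own choosing), together with a check that \(De_{p}(t,s) = p(t)e_{p}(t,s)\).

```python
import math

a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
N = lambda t, s: round(math.log(mu(t) / mu(s), a))

def e(p, t, s):                    # Theorem 5.41, case t >= s only
    prod, tau = 1.0, s
    for _ in range(N(t, s)):
        prod *= 1.0 + p(tau) * mu(tau)
        tau = sigma(tau)
    return prod

p = lambda t: 1.0 / (1.0 + t)      # sample p with 1 + p(t)mu(t) != 0 at the points used
s = 1.0
for t in [1.0, 3.0, 7.0]:
    De = (e(p, sigma(t), s) - e(p, t, s)) / mu(t)
    assert abs(De - p(t) * e(p, t, s)) < 1e-9
```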

Next we define an addition on \(\mathcal{R}.\)

Definition 5.42.

We define the circle plus addition, ⊕, on the set of regressive functions \(\mathcal{R}\) on the mixed time scale \(\mathbb{T}_{\alpha }\) by

$$\displaystyle{(p \oplus r)(t):= p(t) + r(t) +\mu (t)p(t)r(t),\quad t \in \mathbb{T}_{\alpha }.}$$

Similar to the proof of Theorem 4.16, we can prove the following theorem.

Theorem 5.43.

The set of regressive functions \(\mathcal{R}\) with the addition ⊕ is an Abelian group.

As in Chap. 4, the additive inverse of a function \(p \in \mathcal{R}\) is given by

$$\displaystyle{\ominus p:= \frac{-p} {1 +\mu p}.}$$

We then define the circle minus subtraction, ⊖, on \(\mathcal{R}\) by

$$\displaystyle{p \ominus r:= p \oplus (\ominus r).}$$

It follows that

$$\displaystyle{p \ominus r = \frac{p - r} {1 +\mu r}.}$$

In the following theorem we give several properties of the exponential function \(e_{p}(t,s)\).

Theorem 5.44.

Let \(t,s,r \in \mathbb{T}_{\alpha }\) and \(p,l \in \mathcal{R}\) . Then the following properties hold:

  1. (i)

    \(e_{0}(t,s) = 1;\)

  2. (ii)

    \(e_{p}(s,s) = 1;\)

  3. (iii)

    \(De_{p}(t,s) = p(t)e_{p}(t,s);\)

  4. (iv)

    \(e_{p}(\sigma (t),s) = \left [1 + p(t)\mu (t)\right ]e_{p}(t,s);\)

  5. (v)

    \(e_{p}(\rho (t),s) = \dfrac{e_{p}(t,s)} {\left [1 + p\left (\rho (t)\right )\mu \left (\rho (t)\right )\right ]};\)

  6. (vi)

    \(\mbox{ if $1 + p(t)\mu (t)> 0$ for all $t \in \mathbb{T}_{\alpha }$ then }e_{p}(t,s)> 0;\)

  7. (vii)

    \(e_{p}(t,s)e_{p}(s,r) = e_{p}(t,r);\)

  8. (viii)

    \(e_{p}(s,t) = \dfrac{1} {e_{p}(t,s)} = e_{\ominus p}(t,s);\)

  9. (ix)

    \(e_{p}(t,s)e_{l}(t,s) = e_{p\oplus l}(t,s);\)

  10. (x)

    \(\dfrac{e_{l}(t,s)} {e_{p}(t,s)} = e_{l\ominus p}(t,s).\)

Proof.

The proof of this theorem is very similar to the proof of Theorem 4.18. Here we will just prove the first half of part (viii). Consider the case t > s. Then

$$\displaystyle\begin{array}{rcl} e_{p}(s,t)& =& \prod \limits _{j=1}^{N(t,s)} \dfrac{1} {1 + p\left (\rho ^{j}(t)\right )\mu \left (\rho ^{j}(t)\right )} {}\\ & =& \prod \limits _{j=1}^{N(t,s)} \dfrac{1} {1 + p\left (\rho ^{j}(\sigma ^{N(t,s)}(s))\right )\mu \left (\rho ^{j}(\sigma ^{N(t,s)}(s))\right )} {}\\ & =& \prod \limits _{j=1}^{N(t,s)} \dfrac{1} {1 + p\left (\sigma ^{N(t,s)-j}(s)\right )\mu \left (\sigma ^{N(t,s)-j}(s)\right )} {}\\ & =& \prod \limits _{j=0}^{N(t,s)-1} \dfrac{1} {1 + p\left (\sigma ^{j}(s)\right )\mu \left (\sigma ^{j}(s)\right )} = \frac{1} {e_{p}(t,s)}. {}\\ \end{array}$$

When t = s, it follows that \(e_{p}(s,s) = \frac{1} {e_{p}(s,s)} = 1.\) Finally, consider the case when t < s. Then

$$\displaystyle\begin{array}{rcl} e_{p}(s,t)& =& \prod \limits _{j=0}^{N(s,t)-1}\left [1 + p\left (\sigma ^{j}(t)\right )\mu \left (\sigma ^{j}(t)\right )\right ] {}\\ & =& \prod \limits _{j=0}^{N(s,t)-1}\left [1 + p\left (\rho ^{N(s,t)-j}(s)\right )\mu (\rho ^{N(s,t)-j}(s))\right ] {}\\ & =& \prod \limits _{j=1}^{N(s,t)}\left [1 + p\left (\rho ^{j}(s)\right )\mu \left (\rho ^{j}(s)\right )\right ] = \frac{1} {e_{p}(t,s)}, {}\\ \end{array}$$

which was to be shown. □ 
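Property (ix) is a pointwise identity of the factors in the product formula, which makes a numerical check immediate; the sketch below re-declares the assumed helpers from the previous sketch.

```python
import math

a, b = 2.0, 1.0                    # assumed sample parameters with a > 1
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
N = lambda t, s: round(math.log(mu(t) / mu(s), a))

def e(p, t, s):                    # t >= s case of Theorem 5.41
    prod, tau = 1.0, s
    for _ in range(N(t, s)):
        prod *= 1.0 + p(tau) * mu(tau)
        tau = sigma(tau)
    return prod

p = lambda t: 0.5                  # sample regressive functions
l = lambda t: 1.0 / t
p_plus_l = lambda t: p(t) + l(t) + mu(t) * p(t) * l(t)   # circle plus

s, t = 1.0, 15.0
assert abs(e(p, t, s) * e(l, t, s) - e(p_plus_l, t, s)) < 1e-9   # (ix)
```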

Next we define the scalar dot multiplication, \(\odot\), on the set of positively regressive functions \(\mathcal{R}^{+}:=\{ p \in \mathcal{R}: 1 +\mu (t)p(t)> 0\text{, }t \in \mathbb{T}_{\alpha }\}.\)

Definition 5.45.

We define the scalar dot multiplication, \(\odot\), on \(\mathcal{R}^{+}\) by

$$\displaystyle{(\alpha \odot p)(t):= \frac{[1 +\mu (t)p(t)]^{\alpha } - 1} {\mu (t)},\quad t \in \mathbb{T}_{\alpha }.}$$

Similar to the proof of Theorem 4.21 we can prove the following theorem.

Theorem 5.46.

If \(\alpha \in \mathbb{R}\) , \(p \in \mathcal{R}^{+}\) , and \(s \in \mathbb{T}_{\alpha }\) , then

$$\displaystyle{e_{p}^{\alpha }(t,s) = e_{\alpha \odot p}(t,s)}$$

for \(t \in \mathbb{T}_{\alpha }.\)

Then similar to the proof of Theorem 4.22, we get the following result.

Theorem 5.47.

The set of positively regressive functions \(\mathcal{R}^{+}\) on a mixed time scale with the addition ⊕ and the scalar multiplication \(\odot\) is a vector space.

5.8 Trigonometric Functions

In this section, we use the exponential function defined in the previous section to define the hyperbolic and trigonometric functions for the mixed time scale.

Definition 5.48.

For \(\pm p \in \mathcal{R}\), we define the mixed time scale hyperbolic cosine function \(\cosh _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by

$$\displaystyle\begin{array}{rcl} \cosh _{p}(t,s):= \dfrac{e_{p}(t,s) + e_{-p}(t,s)} {2},\quad t \in \mathbb{T}_{\alpha }.& & {}\\ \end{array}$$

Definition 5.49.

Likewise we define the mixed time scale hyperbolic sine function \(\sinh _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by

$$\displaystyle\begin{array}{rcl} \sinh _{p}(t,s):= \dfrac{e_{p}(t,s) - e_{-p}(t,s)} {2},\quad t \in \mathbb{T}_{\alpha }.& & {}\\ \end{array}$$

Similar to the proof of Theorem 4.24 one can prove the following theorem.

Theorem 5.50.

Assume \(\pm p \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then the following properties hold:

  1. (i)

    \(\cosh _{p}(s,s) = 1;\)

  2. (ii)

    \(\sinh _{p}(s,s) = 0;\)

  3. (iii)

    \(\cosh _{-p}(t,s) =\cosh _{p}(t,s);\)

  4. (iv)

    \(\sinh _{-p}(t,s) = -\sinh _{p}(t,s);\)

  5. (v)

    \(D\cosh _{p}(t,s) = p(t)\sinh _{p}(t,s);\)

  6. (vi)

    \(D\sinh _{p}(t,s) = p(t)\cosh _{p}(t,s);\)

  7. (vii)

    \(\cosh _{p}^{2}(t,s) -\sinh _{p}^{2}(t,s) = e_{-\mu p^{2}}(t,s).\)

Definition 5.51.

Assume \(\pm ip \in \mathcal{R}\). Then we define the mixed time scale cosine function \(\cos _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by

$$\displaystyle\begin{array}{rcl} \cos _{p}(t,s):= \dfrac{e_{ip}(t,s) + e_{-ip}(t,s)} {2},\quad t \in \mathbb{T}_{\alpha }.& & {}\\ \end{array}$$

Definition 5.52.

We define the mixed time scale sine function \(\sin _{p}(\cdot,s)\) based at \(s \in \mathbb{T}_{\alpha }\) by

$$\displaystyle\begin{array}{rcl} \sin _{p}(t,s):= \dfrac{e_{ip}(t,s) - e_{-ip}(t,s)} {2i},\quad t \in \mathbb{T}_{\alpha }.& & {}\\ \end{array}$$

Similar to the proof of Theorem 4.27 one can prove the following theorem.

Theorem 5.53.

Assume \(\pm ip \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }.\) Then the following properties hold:

  1. (i)

    \(\cos _{p}(s,s) = 1;\)

  2. (ii)

    \(\sin _{p}(s,s) = 0;\)

  3. (iii)

    \(\cos _{-p}(t,s) =\cos _{p}(t,s);\)

  4. (iv)

    \(\sin _{-p}(t,s) = -\sin _{p}(t,s);\)

  5. (v)

    \(D\cos _{p}(t,s) = -p(t)\sin _{p}(t,s);\)

  6. (vi)

    \(D\sin _{p}(t,s) = p(t)\cos _{p}(t,s);\)

  7. (vii)

    \(\cos _{p}^{2}(t,s) +\sin _{ p}^{2}(t,s) = e_{\mu p^{2}}(t,s).\)

Similar to the proof of Theorem 4.26 one can prove the following theorem.

Theorem 5.54.

Assume \(\pm ip \in \mathcal{R}\) and \(t,s \in \mathbb{T}_{\alpha }\) . Then the following properties hold:

  1. (i)

    \(\sin _{ip}(t,s) = i\sinh _{p}(t,s);\)

  2. (ii)

    \(\cos _{ip}(t,s) =\cosh _{p}(t,s);\)

  3. (iii)

    \(\sinh _{ip}(t,s) = i\sin _{p}(t,s);\)

  4. (iv)

    \(\cosh _{ip}(t,s) =\cos _{p}(t,s),\)

for \(t \in \mathbb{T}_{\alpha }.\)

It is easy to prove the following theorem.

Theorem 5.55.

If \(p \in \mathcal{R}\) , then a general solution of

$$\displaystyle{Dy(t) = p(t)y(t),\quad t \in \mathbb{T}_{\alpha }}$$

is given by

$$\displaystyle{y(t) = ce_{p}(t,\alpha ),\quad t \in \mathbb{T}_{\alpha },}$$

where c is an arbitrary constant.

Theorem 5.56.

Assume \(t,s \in \mathbb{T}_{\alpha }\) and p is a constant. Then the following Taylor series converge on \(\mathbb{T}_{[s,\infty )}\).

  1. (i)

    \(e_{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{n}h_{n}(t,s),\quad \mbox{ if}\quad p \in \mathcal{R};\)

  2. (ii)

    \(\sin _{p}(t,s) =\sum \limits _{ n=0}^{\infty }(-1)^{n}p^{2n+1}h_{2n+1}(t,s),\quad \mbox{ if}\quad \pm ip \in \mathcal{R};\)

  3. (iii)

    \(\cos _{p}(t,s) =\sum \limits _{ n=0}^{\infty }(-1)^{n}p^{2n}h_{2n}(t,s),\quad \mbox{ if}\quad \pm ip \in \mathcal{R};\)

  4. (iv)

    \(\sinh _{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{2n+1}h_{2n+1}(t,s),\quad \mbox{ if}\quad \pm p \in \mathcal{R};\)

  5. (v)

    \(\cosh _{p}(t,s) =\sum \limits _{ n=0}^{\infty }p^{2n}h_{2n}(t,s),\quad \mbox{ if}\quad \pm p \in \mathcal{R}.\)

Proof.

Fix \(t \in \mathbb{T}_{[s,\infty )}\). Then for some k ≥ 0, it holds that \(t =\sigma ^{k}(s) \geq s\). Let \(M =\max \{ \left \vert e_{p}(\tau,s)\right \vert:\tau \in \mathbb{T}_{[s,t]}\}\). Then for n ≥ 1 we have

$$\displaystyle\begin{array}{rcl} \vert R_{n}(t,s)\vert & =& \left \vert \int _{s}^{t}h_{ n}(t,\sigma (\tau ))D^{n+1}e_{ p}(\tau,s)D\tau \right \vert \\ & =& \left \vert \int _{s}^{t}h_{ n}(t,\sigma (\tau ))p^{n+1}e_{ p}(\tau,s)D\tau \right \vert \\ &\leq & M\vert p\vert ^{n+1}\left \vert \int _{ s}^{t}h_{ n}(t,\sigma (\tau ))D\tau \right \vert \\ & =& M\vert p\vert ^{n+1}\left \vert h_{ n+1}(t,s)\right \vert. {}\end{array}$$
(5.3)

Now if m ≥ k, we have that

$$\displaystyle\begin{array}{rcl} \vert R_{m}(t,s)\vert \leq M\vert p\vert ^{m+1}\left \vert h_{ m+1}(t,s)\right \vert = M\vert p\vert ^{m+1}\prod \limits _{ i=1}^{m+1}\dfrac{t -\sigma ^{i-1}(s)} {[i]_{a}}.& & {}\\ \end{array}$$

Note that since m ≥ k, the product in the above expression contains the factor

$$\displaystyle\begin{array}{rcl} \dfrac{t -\sigma ^{k}(s)} {[k + 1]_{a}} = \dfrac{t - t} {[k + 1]_{a}} = 0.& & {}\\ \end{array}$$

Thus, for all m ≥ k,

$$\displaystyle{ R_{m}(t,s) = 0. }$$

Hence, by Taylor’s Formula (Theorem 5.37), the Taylor series for \(e_{p}(t,s)\) converges for any \(t \in \mathbb{T}_{[s,\infty )}\). The remainder of this theorem follows from the fact that the functions \(\cos _{p}(t,s)\), \(\sin _{p}(t,s)\), \(\cosh _{p}(t,s)\), and \(\sinh _{p}(t,s)\) are defined in terms of appropriate exponential functions. □
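On \(\mathbb{T}_{[s,\infty )}\) the series actually terminates, since \(h_{n}(t,s) = 0\) once n exceeds the number of jumps from s to t. The sketch below (assumed parameters and a constant regressive p, not from the text) compares the partial sums with the product formula of Theorem 5.41.

```python
a, b, p = 2.0, 1.0, 0.25           # assumed parameters; constant regressive p
sigma = lambda t: a * t + b
mu = lambda t: (a - 1) * t + b
bra = lambda k: (a**k - 1) / (a - 1)

def h(n, t, s):                    # Taylor monomial, Theorem 5.29
    prod, sk = 1.0, s
    for k in range(1, n + 1):
        prod *= (t - sk) / bra(k)
        sk = sigma(sk)
    return prod

def e_series(t, s, terms=30):      # Theorem 5.56 (i); terms beyond N(t,s) vanish
    return sum(p**n * h(n, t, s) for n in range(terms))

def e_product(t, s):               # closed form of Theorem 5.41 for t >= s
    prod, tau = 1.0, s
    while tau < t:
        prod *= 1.0 + p * mu(tau)
        tau = sigma(tau)
    return prod

s = 1.0
for t in [1.0, 3.0, 7.0, 15.0]:
    assert abs(e_series(t, s) - e_product(t, s)) < 1e-9
```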

Theorem 5.57.

Fix \(s \in \mathbb{T}_{\alpha }\) . Then the Taylor series for each of the functions in Theorem 5.56 converges on \(\mathbb{T}_{(-\infty,s)}\) when \(\vert p\vert <\frac{a} {\mu (s)}\) .

Proof.

Let \(s \in \mathbb{T}_{\alpha }\) be fixed. We will first prove that if \(\vert p\vert <\frac{a} {\mu (s)}\), then the Taylor series for \(e_{p}(t,s)\) converges for each \(t \in \mathbb{T}_{(-\infty,s)}.\) Fix \(t \in \mathbb{T}_{(-\infty,s)}.\) We claim that for each a ≥ 1

$$\displaystyle{ \lim _{n\rightarrow \infty }\frac{\sigma ^{n-1}(s) - t} {[n]_{a}} = \frac{\mu (s)} {a}. }$$
(5.4)

First we prove (5.4) for a = 1. This follows from the following calculations:

$$\displaystyle\begin{array}{rcl} \lim _{n\rightarrow \infty }\frac{\sigma ^{n-1}(s) - t} {[n]_{a}} & =& \lim _{n\rightarrow \infty }\frac{a^{n-1}s + [n - 1]_{a}b - t} {[n]_{a}} {}\\ & =& \lim _{n\rightarrow \infty }\frac{s + [n - 1]_{1}b - t} {[n]_{1}} {}\\ & =& \lim _{n\rightarrow \infty }\frac{s + (n - 1)b - t} {n} = \frac{b} {1} = \frac{\mu (s)} {a}. {}\\ \end{array}$$

Next we prove (5.4) for a > 1. To this end consider

$$\displaystyle\begin{array}{rcl} & & \lim \limits _{n\rightarrow \infty }\frac{\sigma ^{n-1}(s) - t} {[n]_{a}} =\lim \limits _{n\rightarrow \infty }\frac{a^{n-1}s + [n - 1]_{a}b - t} {[n]_{a}} \quad \quad \mbox{ by Theorem 5.5, (i)} {}\\ & & =\lim \limits _{n\rightarrow \infty }\frac{a^{n-1}s + \frac{a^{n-1}-1} {a-1} b - t} {\frac{a^{n}-1} {a-1} } \quad \mbox{ by the definition of $[n]_{a}$} {}\\ & & =\lim \limits _{n\rightarrow \infty }\frac{(a - 1)a^{n-1}s + (a^{n-1} - 1)b - (a - 1)t} {a^{n} - 1} {}\\ & & = \frac{(a - 1)s + b} {a} {}\\ & & = \frac{\mu (s)} {a}. {}\\ \end{array}$$

Now consider the remainder term

$$\displaystyle{ R_{n}(t,s) =\int _{ s}^{t}h_{ n}(t,\sigma (\tau ))D^{n+1}e_{ p}(\tau,s)D\tau =\int _{ s}^{t}h_{ n}(t,\sigma (\tau ))p^{n+1}e_{ p}(\tau,s)D\tau. }$$

It follows that

$$\displaystyle{\vert R_{n}(t,s)\vert \leq \vert p\vert ^{n+1}\int _{ t}^{s}\vert h_{ n}(t,\sigma (\tau ))\vert \vert e_{p}(\tau,s)\vert D\tau.}$$

If we let

$$\displaystyle{M:=\max \{ \vert e_{p}(\tau,s)\vert:\tau \in \mathbb{T}_{[t,\rho (s)]}\},}$$

then

$$\displaystyle\begin{array}{rcl} \vert R_{n}(t,s)\vert & \leq & M\vert p\vert ^{n+1}\int _{ t}^{s}\vert h_{ n}(t,\sigma (\tau ))\vert D\tau {}\\ & =& M\vert p\vert ^{n+1}\int _{ t}^{s}\left \vert \prod _{ k=1}^{n}\frac{t -\sigma ^{k-1}(\tau )} {[k]_{a}} \right \vert D\tau {}\\ & =& M\vert p\vert ^{n+1}\int _{ t}^{s}(-1)^{n}h_{ n}(t,\sigma (\tau ))D\tau {}\\ & =& M\vert p\vert ^{n+1}\left [(-1)^{n+1}h_{ n+1}(t,\tau )\right ]_{\tau =t}^{\tau =s} {}\\ & =& M\vert p\vert ^{n+1}(-1)^{n+1}h_{ n+1}(t,s) {}\\ & =& M\vert p\vert ^{n+1}\prod _{ k=1}^{n+1}\frac{\sigma ^{k-1}(s) - t} {[k]_{a}}. {}\\ \end{array}$$

Using (5.4) and \(\vert p\vert <\frac{a} {\mu (s)}\), there is a number r and a positive integer N so that

$$\displaystyle\begin{array}{rcl} 0 \leq \vert p\vert \dfrac{\sigma ^{n-1}(s) - t} {[n]_{a}} \leq r <1,\quad \mbox{ for}\quad n \geq N.& & {}\\ \end{array}$$

It follows that

$$\displaystyle{\lim _{n\rightarrow \infty }\vert R_{n}(t,s)\vert \leq \lim _{n\rightarrow \infty }M\prod _{k=1}^{n+1}\vert p\vert \frac{\sigma ^{k-1}(s) - t} {[k]_{a}} = 0.}$$

Therefore, by Taylor’s Formula,

$$\displaystyle{e_{p}(t,s) =\sum _{ n=0}^{\infty }p^{n}h_{ n}(t,s)}$$

for \(t \in \mathbb{T}_{(-\infty,s)}.\)

The remainder of this theorem follows from the fact that the functions \(\cos _{p}(t,s)\), \(\sin _{p}(t,s)\), \(\cosh _{p}(t,s)\), and \(\sinh _{p}(t,s)\) are defined in terms of appropriate exponential functions. □ 
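
The limit (5.4) is easy to probe numerically. The following Python sketch is not part of the formal development; the values a = 2, b = 3, α = 1, s = σ 2(α), and t = α are arbitrary test choices.

```python
# Numerical sanity check of the limit (5.4): (sigma^{n-1}(s) - t)/[n]_a -> mu(s)/a.
# The values a = 2, b = 3, alpha = 1 are arbitrary test choices.

a, b, alpha = 2.0, 3.0, 1.0

def mu(t):                        # graininess: mu(t) = (a - 1)t + b
    return (a - 1) * t + b

def bracket(n):                   # [n]_a = (a^n - 1)/(a - 1) for a > 1, and n for a = 1
    return float(n) if a == 1 else (a**n - 1) / (a - 1)

def sigma_pow(n, t):              # sigma^n(t) = a^n t + [n]_a b  (Theorem 5.5 (i))
    return a**n * t + bracket(n) * b

s = sigma_pow(2, alpha)           # s = sigma^2(alpha)
t = alpha                         # a point of T_alpha below s

for n in (1, 5, 10, 20, 40):
    print(n, (sigma_pow(n - 1, s) - t) / bracket(n))
print("mu(s)/a =", mu(s) / a)     # the ratios above approach this value
```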

Theorem 5.58.

For fixed \(t,s \in \mathbb{T}_{\alpha }\) , the power series

$$\displaystyle\begin{array}{rcl} f(x) =\sum \limits _{ n=0}^{\infty }h_{ n}(t,s)x^{n}& & {}\\ \end{array}$$

converges for \(\vert x\vert <\dfrac{a} {\mu (s)}.\)

Proof.

First, consider the power series

$$\displaystyle{ A(x) =\sum \limits _{ n=0}^{\infty } \dfrac{t^{\underline{n}}} {\{n\}_{a}!}x^{n}. }$$

We will perform the ratio test with this series:

$$\displaystyle\begin{array}{rcl} \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{a_{n+1}} {a_{n}} \right \vert & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{t^{\underline{n+1}}} {\{n + 1\}_{a}t^{\underline{n}}}\right \vert {}\\ & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{\rho ^{n}(t)} {\{n + 1\}_{a}}\right \vert {}\\ & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{a^{-n}t + [-n]_{a}b} {\dfrac{[n + 1]_{a}} {a^{n}} } \right \vert =\lim \limits _{n\rightarrow \infty }\left \vert \dfrac{t} {[n + 1]_{a}} - \dfrac{[n]_{a}b} {[n + 1]_{a}}\right \vert. {}\\ \end{array}$$

Since \(\lim \limits _{n\rightarrow \infty } \dfrac{t} {[n + 1]_{a}} = 0\), we have that

$$\displaystyle\begin{array}{rcl} \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{a_{n+1}} {a_{n}} \right \vert & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{[n]_{a}b} {[n + 1]_{a}}\right \vert {}\\ & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{(a^{n} - 1)b} {a^{n+1} - 1}\right \vert = \dfrac{b} {a}. {}\\ \end{array}$$

So, A(x) converges when \(\vert x\vert <\dfrac{a} {b}\). Next, consider the power series

$$\displaystyle\begin{array}{rcl} B(x) =\sum \limits _{ n=0}^{\infty }\dfrac{(-1)^{n}s^{\overline{n}}} {[n]_{a}!} x^{n}.& & {}\\ \end{array}$$

Again, we perform the ratio test. Then

$$\displaystyle\begin{array}{rcl} \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{a_{n+1}} {a_{n}} \right \vert & =& \lim \limits _{n\rightarrow \infty }\left \vert \dfrac{\sigma ^{n}(s)} {[n + 1]_{a}}\right \vert =\lim \limits _{n\rightarrow \infty }\dfrac{a^{n}s + [n]_{a}b} {[n + 1]_{a}} {}\\ & =& \lim \limits _{n\rightarrow \infty }\left ( \dfrac{a^{n}s} {[n + 1]_{a}} + \dfrac{[n]_{a}b} {[n + 1]_{a}}\right ) {}\\ & =& \lim \limits _{n\rightarrow \infty }\left (\dfrac{a^{n}(a - 1)s} {a^{n+1} - 1} + \dfrac{[n]_{a}b} {[n + 1]_{a}}\right ) {}\\ & =& \dfrac{(a - 1)s} {a} + \dfrac{b} {a} = \dfrac{\mu (s)} {a}. {}\\ \end{array}$$

So B(x) converges when \(\vert x\vert <\dfrac{a} {\mu (s)}\). Note that \(\mu (s) \geq b\), so \(\dfrac{a} {\mu (s)} \leq \dfrac{a} {b}\) for all s. Now, f(x) = A(x)B(x). So, f(x) converges when \(\vert x\vert <\dfrac{a} {\mu (s)}\). □ 
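
The threshold a∕μ(s) in Theorem 5.58 can also be observed numerically: the terms h n (t,s)x n decay for |x| just below a∕μ(s) and grow for |x| just above it. A minimal Python sketch, with h n computed from the product form used in the proof of Theorem 5.56 and all parameter values arbitrary test choices:

```python
# Terms h_n(t,s) * x^n of the power series in Theorem 5.58, for |x| just below
# and just above the claimed radius a/mu(s).  All values are arbitrary test choices.

a, b, alpha = 2.0, 3.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b
def bracket(i): return float(i) if a == 1 else (a**i - 1) / (a - 1)

def h(n, t, s):                   # h_n(t,s) = prod_{i=1}^n (t - sigma^{i-1}(s))/[i]_a
    val, point = 1.0, s
    for i in range(1, n + 1):
        val *= (t - point) / bracket(i)
        point = sigma(point)
    return val

t, s = alpha, sigma(sigma(alpha))
radius = a / mu(s)
for x in (0.9 * radius, 1.1 * radius):
    print([abs(h(n, t, s) * x**n) for n in (10, 30, 60)])
# the first row decays toward 0; the second grows without bound
```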

5.9 The Laplace Transform

Most of the results in this section are due to Auch et al. [39]. In this chapter when discussing Laplace transforms we assume that \(r \in \mathbb{T}_{\alpha }\) satisfies r ≥ α ≥ 0, and we let \(\mathbb{T}_{r} =\{ t \geq r: t \in \mathbb{T}_{\alpha }\}.\) Also we let \(\mathcal{R}^{c}\) denote the set of regressive complex constants.

Definition 5.59.

If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\), then we define the discrete Laplace transform of f based at \(r \in \mathbb{T}_{\alpha }\) by

$$\displaystyle{\mathcal{L}_{r}\{f\}(s):=\int _{ r}^{\infty }e_{ \ominus s}(\sigma (t),r)f(t)\,Dt,}$$

where \(\mathcal{L}_{r}\{f\}(s)\) is defined for those \(s \in \mathcal{R}^{c}\) for which the integral converges.
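
Since the integral over \([r,\infty )\) is simply a sum over the points \(t_{n} =\sigma ^{n}(r)\) weighted by \(\mu (t_{n})\), the transform is easy to approximate numerically. The following Python sketch truncates the sum at N terms; the parameter values are arbitrary test choices.

```python
# The transform of Definition 5.59 as a truncated sum over t_n = sigma^n(r):
#   L_r{f}(s) ~ sum_{n=0}^{N-1} e_{(-s)}(sigma^{n+1}(r), r) f(t_n) mu(t_n),
# where e_{(-s)}(t, r) = 1/e_s(t, r).  All parameter values are arbitrary.

a, b = 2.0, 3.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

def laplace(f, s, r, N=120):
    total, t, e_s = 0.0, r, 1.0
    for _ in range(N):
        e_s *= 1.0 + s * mu(t)    # extend e_s(sigma^{n+1}(r), r) by one factor
        total += f(t) * mu(t) / e_s
        t = sigma(t)
    return total

print(laplace(lambda t: 1.0, s=2.0, r=1.0))   # ~ 0.5 = 1/s; cf. Theorem 5.72
```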

Definition 5.60.

We say that a function \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\) is of exponential order k > 0 if for every fixed \(r \in \mathbb{T}_{\alpha }\), there exists a constant M > 0 such that

$$\displaystyle{\left \vert f(t)\right \vert \leq Me_{k}(t,r),}$$

for all sufficiently large \(t \in \mathbb{T}_{r}\).

Theorem 5.61.

Suppose \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0. Then \(\mathcal{L}_{r}\{f\}(s)\) exists for |s| > k.

Proof.

Since f is of exponential order k > 0, there is a constant M > 0 and a \(T \in \mathbb{T}_{r}\) such that | f(t) | ≤ Me k (t, r) for t ≥ T. Pick \(N \in \mathbb{N}_{0}\) such that \(T =\sigma ^{N}(r)\). Then we have

$$\displaystyle\begin{array}{rcl} \left \vert \int _{T}^{\infty }e_{ \ominus s}(\sigma (t),r)f(t)\,Dt\right \vert & \leq & \int _{T}^{\infty }\left \vert e_{ \ominus s}(\sigma (t),r)f(t)\,\right \vert Dt {}\\ & \leq & M\int _{T}^{\infty }\left \vert e_{ \ominus s}(\sigma (t),r)e_{k}(t,r)\,\right \vert Dt {}\\ & =& M\int _{T}^{\infty }\left \vert \frac{1} {1 + s\mu (t)}e_{k\ominus s}(t,r)\,\right \vert Dt {}\\ & =& M\sum _{i=N}^{\infty }\left \vert \frac{\mu (\sigma ^{i}(r))} {1 + s\mu (\sigma ^{i}(r))}e_{k\ominus s}(\sigma ^{i}(r),r)\,\right \vert {}\\ & =& M\sum _{i=N}^{\infty }\left \vert \frac{a^{i}\mu (r)} {1 + sa^{i}\mu (r)}e_{k\ominus s}(\sigma ^{i}(r),r)\,\right \vert. {}\\ \end{array}$$

We will show that this sum converges absolutely for | s | > k by the ratio test. We have

$$\displaystyle\begin{array}{rcl} & & \lim _{i\rightarrow \infty }\left \vert \left (\frac{a^{i+1}\mu (r)e_{k\ominus s}(\sigma ^{i+1}(r),r)} {1 + sa^{i+1}\mu (r)} \,\right )\left ( \frac{1 + sa^{i}\mu (r)} {a^{i}\mu (r)e_{k\ominus s}(\sigma ^{i}(r),r)}\,\right )\right \vert {}\\ & & =\lim _{i\rightarrow \infty }\left \vert \left (\frac{ae_{k\ominus s}(\sigma ^{i+1}(r),r)} {1 + a^{i+1}s\mu (r)} \right )\left ( \frac{1 + sa^{i}\mu (r)} {e_{k\ominus s}(\sigma ^{i}(r),r)}\right )\right \vert {}\\ & & =\lim _{i\rightarrow \infty }\left \vert \left ( \frac{ae_{k\ominus s}(\sigma ^{i}(r),r)(1 + ka^{i}\mu (r))} {(1 + a^{i+1}s\mu (r))(1 + sa^{i}\mu (r))}\right )\left ( \frac{1 + sa^{i}\mu (r)} {e_{k\ominus s}(\sigma ^{i}(r),r)}\right )\right \vert {}\\ & & =\lim _{i\rightarrow \infty }\left \vert \frac{a + ka^{i+1}\mu (r)} {1 + sa^{i+1}\mu (r)} \right \vert {}\\ & & =\lim _{i\rightarrow \infty }\left \vert \frac{\frac{1} {a^{i}} + k\mu (r)} { \frac{1} {a^{i+1}} + s\mu (r)}\right \vert = \frac{k} {\vert s\vert }. {}\\ \end{array}$$

Hence the sum converges absolutely when | s | > k, and therefore \(\mathcal{L}_{r}\{f\}(s)\) converges if | s | > k. □ 

Theorem 5.62 (Linearity).

Suppose \(f,g: \mathbb{T}_{r} \rightarrow \mathbb{R}\) are of exponential order k > 0, and \(c,d \in \mathbb{R}\) . Then

$$\displaystyle{\mathcal{L}_{r}\{cf + dg\}(s) = c\mathcal{L}_{r}\{f\}(s) + d\mathcal{L}_{r}\{g\}(s),}$$

for |s| > k.

Proof.

The result follows easily from the linearity of the delta integral. We have, for | s | > k, that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{cf + dg\}(s)& =& \int _{r}^{\infty }(cf(t) + dg(t))e_{ \ominus s}(\sigma (t),r)\,Dt {}\\ & =& c\int _{r}^{\infty }f(t)e_{ \ominus s}(\sigma (t),r)\,Dt + d\int _{r}^{\infty }g(t)e_{ \ominus s}(\sigma (t),r)\,Dt {}\\ & =& c\mathcal{L}_{r}\{f\}(s) + d\mathcal{L}_{r}\{g\}(s), {}\\ \end{array}$$

which completes the proof. □ 

The following lemma will be useful for computing Laplace transforms of various functions.

Lemma 5.63.

If \(p,q \in \mathcal{R}^{c}\) and |p| < |q|, then, taking the limit along \(t \in \mathbb{T}_{r}\) , we have

$$\displaystyle{\lim _{t\rightarrow \infty }e_{p\ominus q}(t,r) = 0.}$$

Proof.

Let \(p,q \in \mathcal{R}^{c}\) with | p | < | q | . First, note that

$$\displaystyle\begin{array}{rcl} \left \vert \lim _{t\rightarrow \infty }\frac{1 + p\mu (t)} {1 + q\mu (t)}\right \vert & =& \left \vert \lim _{t\rightarrow \infty }\frac{ \frac{1} {\mu (t)} + p} { \frac{1} {\mu (t)} + q}\right \vert {}\\ & =& \left \vert \frac{p} {q}\right \vert <1,\quad \mbox{ since }\quad \vert p\vert <\vert q\vert. {}\\ \end{array}$$

This implies that

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow \infty }\left \vert e_{p\ominus q}(t,r)\right \vert =\lim _{t\rightarrow \infty }\left \vert \frac{e_{p}(t,r)} {e_{q}(t,r)}\right \vert =\prod _{ i=0}^{\infty }\left \vert \frac{1 + p\mu (\sigma ^{i}(r))} {1 + q\mu (\sigma ^{i}(r))}\right \vert = 0.& & {}\\ \end{array}$$

Thus, \(\lim _{t\rightarrow \infty }e_{p\ominus q}(t,r) = 0.\) □ 

Remark 5.64.

In particular, note that if s > 0, then

$$\displaystyle{\lim _{t\rightarrow \infty }e_{\ominus s}(t,r) = 0.}$$

Theorem 5.65.

Let \(p \in \mathcal{R}^{c}\) . Then for |s| > |p|, we have

$$\displaystyle{\mathcal{L}_{r}\{e_{p}(t,r)\}(s) = \frac{1} {s - p}.}$$

Proof.

First, note that e p (t, r) is of exponential order | p | since

$$\displaystyle\begin{array}{rcl} \vert e_{p}(t,r)\vert & =& \prod _{i=0}^{K(t,r)-1}\left \vert 1 + p\mu (\sigma ^{i}(r))\right \vert {}\\ &\leq & \prod _{i=0}^{K(t,r)-1}\left (1 + \left \vert p\right \vert \mu (\sigma ^{i}(r))\right ) = e_{ \vert p\vert }(t,r). {}\\ \end{array}$$

Thus, if | s | > | p | , we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{e_{p}(t,r)\}(s)& =& \int _{r}^{\infty }e_{ \ominus s}(\sigma (t),r)e_{p}(t,r)Dt {}\\ & =& \int _{r}^{\infty }e_{ p\ominus s}(t,r) \frac{1} {1 + s\mu (t)}Dt {}\\ & =& \frac{1} {p - s}\int _{r}^{\infty }e_{ p\ominus s}(t,r) \frac{p - s} {1 + s\mu (t)}Dt. {}\\ \end{array}$$

Then note both that \(De_{p\ominus s}(t,r) = (p \ominus s)(t)\,e_{p\ominus s}(t,r)\) and that \((p \ominus s)(t) = \frac{p-s} {1+s\mu (t)}.\) This gives us

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{e_{p}(t,r)\}(s)& =& \frac{1} {p - s}\int _{r}^{\infty }De_{ p\ominus s}(t,r)Dt {}\\ & =& \frac{1} {p - s}\left [\lim _{t\rightarrow \infty }e_{p\ominus s}(t,r) - e_{p\ominus s}(r,r)\right ] {}\\ & =& \frac{1} {s - p}, {}\\ \end{array}$$

since \(\lim _{t\rightarrow \infty }e_{p\ominus s}(t,r) = 0\) by Lemma 5.63. □ 
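As a numerical illustration of Theorem 5.65 (a sketch, not part of the proof; the values of a, b, r, p, and s below are arbitrary test choices with |s| > |p|), the truncated defining sum agrees with 1∕(s − p):

```python
# Numerical check of Theorem 5.65: L_r{e_p(t,r)}(s) = 1/(s - p).
# The values a, b, r, p, s are arbitrary test choices with |s| > |p|.

a, b, r, p, s = 2.0, 3.0, 1.0, 1.5, 4.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

total, t, e_p, e_s = 0.0, r, 1.0, 1.0
for _ in range(40):
    e_s *= 1.0 + s * mu(t)        # e_s(sigma^{n+1}(r), r)
    total += e_p * mu(t) / e_s    # e_p(t_n, r) e_{(-s)}(t_{n+1}, r) mu(t_n)
    e_p *= 1.0 + p * mu(t)        # advance e_p(t_n, r) to the next point
    t = sigma(t)

print(total, 1.0 / (s - p))       # both ~ 0.4
```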

The following results describe the relationship between the Laplace transform and the delta difference.

Lemma 5.66.

If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0, then Df is also of exponential order k > 0.

Proof.

Let | f(t) | ≤ Me k (t, r) for sufficiently large t. We will prove this lemma by showing that

$$\displaystyle{\lim _{t\rightarrow \infty }\left \vert \frac{Df(t)} {e_{k}(t,r)}\right \vert \leq Mk <\infty.}$$

First, consider

$$\displaystyle\begin{array}{rcl} \left \vert \frac{Df(t)} {e_{k}(t,r)}\right \vert & =& \left \vert \frac{f(\sigma (t)) - f(t)} {\mu (t)e_{k}(t,r)} \right \vert {}\\ &\leq & \frac{\vert f(\sigma (t))\vert + \vert f(t)\vert } {\vert \mu (t)e_{k}(t,r)\vert } {}\\ &\leq & \frac{Me_{k}(\sigma (t),r) + Me_{k}(t,r)} {\left \vert \mu (t)e_{k}(t,r)\right \vert } {}\\ &\leq & \frac{Me_{k}(t,r)(1 + k\mu (t)) + Me_{k}(t,r)} {\mu (t)e_{k}(t,r)} {}\\ & \leq & \frac{Me_{k}(t,r)(2 + k\mu (t))} {\mu (t)e_{k}(t,r)} {}\\ & \leq & \frac{2M} {\mu (t)} + Mk. {}\\ \end{array}$$

Thus, we have

$$\displaystyle{\lim _{t\rightarrow \infty }\left \vert \frac{Df(t)} {e_{k}(t,r)}\right \vert \leq Mk.}$$

Then, for any ε > 0 and t sufficiently large,

$$\displaystyle{\vert Df(t)\vert \leq (Mk+\epsilon )e_{k}(t,r).}$$

Therefore, Df is of exponential order k > 0. □ 

Corollary 5.67.

If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0, then D n f is also of exponential order k > 0 for every \(n \in \mathbb{N}\) .

To see that the corollary holds, apply Lemma 5.66 repeatedly, using induction on n.

Theorem 5.68.

If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0 and \(n \in \mathbb{N}\) , then

$$\displaystyle{\mathcal{L}_{r}\{D^{n}f(t)\}(s) = s^{n}\mathcal{L}_{ r}\{f\}(s) -\sum _{i=0}^{n-1}s^{n-1-i}D^{i}f(r),}$$

for |s| > k.

Proof.

We will proceed by induction on n. First consider the base case n = 1. Using integration by parts we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{Df(t)\}(s)& =& \int _{r}^{\infty }e_{ \ominus s}(\sigma (t),r)Df(t)\,Dt {}\\ & =& e_{\ominus s}(t,r)f(t)\biggr |_{t=r}^{t\rightarrow \infty } +\int _{ r}^{\infty } \frac{s} {1 + s\mu (t)}e_{\ominus s}(t,r)f(t)\,Dt {}\\ & =& s\int _{r}^{\infty }e_{ \ominus s}(\sigma (t),r)f(t)\,Dt - \frac{f(r)} {e_{s}(r,r)} {}\\ & =& s\mathcal{L}_{r}\left \{f(t)\right \}(s) - f(r). {}\\ \end{array}$$

Now assume the statement is true for some n ≥ 1. Then by the base case we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{D^{n+1}f(t)\}(s)& =& s\mathcal{L}_{ r}\{D^{n}f(t)\}(s) - D^{n}f(r) {}\\ & =& s\left [s^{n}\mathcal{L}_{ r}\{f\}(s) -\sum _{i=0}^{n-1}s^{n-1-i}D^{i}f(r)\right ] - D^{n}f(r) {}\\ & =& s^{n+1}\mathcal{L}_{ r}\{f\}(s) -\sum _{i=0}^{n-1}s^{n-i}D^{i}f(r) - D^{n}f(r) {}\\ & =& s^{n+1}\mathcal{L}_{ r}\{f\}(s) -\sum _{i=0}^{n}s^{n-i}D^{i}f(r), {}\\ \end{array}$$

as desired. □ 
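
The n = 1 case of Theorem 5.68 can be checked numerically. In the Python sketch below, the test function f(t) = t 2 and all parameter values are arbitrary choices:

```python
# Numerical check of Theorem 5.68 with n = 1: L_r{Df}(s) = s L_r{f}(s) - f(r).
# The test function f(t) = t^2 and all parameter values are arbitrary choices.

a, b, r, s = 2.0, 3.0, 1.0, 5.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

def laplace(g, N=120):
    total, t, e_s = 0.0, r, 1.0
    for _ in range(N):
        e_s *= 1.0 + s * mu(t)
        total += g(t) * mu(t) / e_s
        t = sigma(t)
    return total

f  = lambda t: t**2
Df = lambda t: (f(sigma(t)) - f(t)) / mu(t)   # the delta derivative of f

print(laplace(Df))                # left-hand side
print(s * laplace(f) - f(r))      # right-hand side; the two agree
```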

To show that the Laplace transform is injective (Theorem 5.70) and therefore invertible we will use the following lemma.

Lemma 5.69.

Let \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) be of exponential order k > 0. Then

$$\displaystyle{\lim _{s\rightarrow \infty }\mathcal{L}_{r}\{f\}(s) = 0.}$$

Proof.

Let \(t_{n} =\sigma ^{n}(r).\) Since f is of exponential order k, there is a constant M > 0 and a positive integer N such that \(\vert f(t_{n})\vert \leq Me_{k}(t_{n},r)\) for all n ≥ N. Then we have

$$\displaystyle\begin{array}{rcl} & & \lim _{s\rightarrow \infty }\left \vert \mathcal{L}_{r}\{f\}(s)\right \vert \leq \lim _{s\rightarrow \infty }\sum _{n=0}^{\infty }\left \vert f(t_{ n})\,e_{\ominus s}(t_{n+1},r)\right \vert \ \mu (t_{n}) {}\\ & & =\lim _{s\rightarrow \infty }\sum _{n=0}^{N-1}\vert f(t_{ n})\vert \vert e_{\ominus s}(t_{n+1},r)\vert \mu (t_{n}) {}\\ & & \quad \ +\lim _{s\rightarrow \infty }\sum _{n=N}^{\infty }\vert e_{ \ominus s}(t_{n+1},r)\vert \vert f(t_{n})\vert \mu (t_{n}) {}\\ & & \leq M\lim _{s\rightarrow \infty }\sum _{n=N}^{\infty }e_{ k}(t_{n},r)\vert e_{\ominus s}(t_{n+1},r)\vert \mu (t_{n}) {}\\ & & \leq M\lim _{s\rightarrow \infty }\sum _{n=N}^{\infty }e_{ k}(t_{n},r)\vert [1 + \ominus s(t_{n})\mu (t_{n})]e_{\ominus s}(t_{n},r)\vert \mu (t_{n}) {}\\ & & = M\lim _{s\rightarrow \infty }\sum _{n=N}^{\infty }\Big\vert \frac{e_{k}(t_{n},r)} {[1 + s\mu (t_{n})]e_{s}(t_{n},r)}\Big\vert \mu (t_{n}) {}\\ & & \leq M\lim _{s\rightarrow \infty }\sum _{n=N}^{\infty }\Big\vert \frac{e_{k}(t_{n},r)} {e_{s}(t_{n},r)}\Big\vert \mu (t_{n}). {}\\ \end{array}$$

We now show that

$$\displaystyle{\left \vert \frac{1 + k\mu (r)} {1 + s\mu (r)}\right \vert \geq \left \vert \frac{1 + k\mu (t_{m})} {1 + s\mu (t_{m})}\right \vert }$$

for \(\mathfrak{R}(s),\,\mathfrak{I}(s)> k\) and for all integers m ≥ 0. To this end, we prove that

$$\displaystyle{\left \vert (1 + k\mu (r))(1 + s\mu (t_{m}))\right \vert \geq \left \vert (1 + k\mu (t_{m}))(1 + s\mu (r))\right \vert }$$

holds by writing \(s = \mathfrak{R}(s) +\mathrm{ i}\mathfrak{I}(s)\) and showing

$$\displaystyle{(1 + k\mu (r))(1 + \mathfrak{R}(s)\mu (t_{m})) \geq (1 + k\mu (t_{m}))(1 + \mathfrak{R}(s)\mu (r))}$$

and

$$\displaystyle{\mathfrak{I}(s)\mu (t_{m}) + k\mathfrak{I}(s)\mu (r)\mu (t_{m}) \geq \mathfrak{I}(s)\mu (r) + k\mathfrak{I}(s)\mu (t_{m})\mu (r).}$$

Rewriting these inequalities yields

$$\displaystyle{(\mathfrak{R}(s) - k)\mu (t_{m}) \geq (\mathfrak{R}(s) - k)\mu (r)}$$

and

$$\displaystyle{\mathfrak{I}(s)\mu (t_{m}) \geq \mathfrak{I}(s)\mu (r),}$$

respectively, which are true by assumption. Thus for sufficiently large | s | , we have

$$\displaystyle\begin{array}{rcl} \left \vert \mathcal{L}_{r}\{f\}(s)\right \vert & \leq & \sum _{n=N}^{\infty }\left \vert \frac{e_{k}(t_{n},r)} {e_{s}(t_{n},r)}\mu (t_{n})\right \vert {}\\ & =& \sum _{n=N}^{\infty }\mu (t_{ n})\prod _{m=0}^{n-1}\left \vert \frac{1 + k\mu (t_{m})} {1 + s\mu (t_{m})}\right \vert {}\\ &\leq & \sum _{n=N}^{\infty }\mu (r)\,a^{n}\left \vert \frac{1 + k\mu (r)} {1 + s\mu (r)}\right \vert ^{n} {}\\ & =& \mu (r) \frac{\left \vert \frac{a+ak\mu (r)} {1+s\mu (r)} \right \vert ^{N}} {1 -\left \vert \frac{a+ak\mu (r)} {1+s\mu (r)} \right \vert } \rightarrow 0 {}\\ \end{array}$$

as \(\vert s\vert \rightarrow \infty\). □ 

Theorem 5.70 (Injectivity).

If \(f,g: \mathbb{T}_{r} \rightarrow \mathbb{R}\) and \(\mathcal{L}_{r}\{f\}(s) = \mathcal{L}_{r}\{g\}(s)\) , then f(t) = g(t) for all t ≥ r.

Proof.

We will first prove that \(\mathcal{L}_{r}\{f\}(s) = 0\) implies f(t) = 0 for all t ≥ r. First, note that by Lemma 5.69, we have

$$\displaystyle{\lim _{s\rightarrow \infty }\mathcal{L}_{r}\{f\}(s) = 0,}$$

for any \(r \in \mathbb{T}_{\alpha }\). We will show that \(f(\sigma ^{n}(r)) = 0\) for all n ≥ 0 by induction. We first prove the case n = 0. To this end, we observe that if

$$\displaystyle{\mathcal{L}_{r}\{f\}(s) = 0,}$$

then it follows that

$$\displaystyle{\int _{r}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),r)\,Dt = 0.}$$

And from this it follows that

$$\displaystyle{e_{s}(\sigma (r),r)\int _{r}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),r)\,Dt = 0,}$$

whence

$$\displaystyle{\int _{r}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),\sigma (r))\,Dt = 0.}$$

Consequently, it holds that

$$\displaystyle{f(r)\mu (r) +\int _{ \sigma (r)}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),\sigma (r))\,Dt = 0,}$$

from which it follows that

$$\displaystyle{f(r)\mu (r) + \mathcal{L}_{\sigma (r)}\{f\}(s) = 0.}$$

Taking the limit as \(s \rightarrow \infty\) yields

$$\displaystyle{f(r)\mu (r) +\lim _{s\rightarrow \infty }\mathcal{L}_{\sigma (r)}\{f\}(s) = 0,}$$

and so, it holds that

$$\displaystyle{f(r)\mu (r) = 0,}$$

hence

$$\displaystyle{f(r) = 0.}$$

For the inductive step, assume \(f(\sigma ^{i}(r)) = 0\) for all i < n. Then it follows that

$$\displaystyle{\mathcal{L}_{r}\{f\}(s) = 0,}$$

from which we obtain

$$\displaystyle{\int _{\sigma ^{n}(r)}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),r)\,Dt = 0.}$$

Thus,

$$\displaystyle{e_{s}(\sigma ^{n+1}(r),r)\int _{\sigma ^{ n}(r)}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),r)\,Dt = 0.}$$

So, we deduce that

$$\displaystyle{\int _{\sigma ^{n}(r)}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),\sigma ^{n+1}(r))\,Dt = 0.}$$

All in all, we conclude that

$$\displaystyle{f(\sigma ^{n}(r))\mu (\sigma ^{n}(r)) +\int _{ \sigma ^{ n+1}(r)}^{\infty }f(t)\,e_{ \ominus s}(\sigma (t),\sigma ^{n+1}(r))\,Dt = 0,}$$

and so,

$$\displaystyle{f(\sigma ^{n}(r))\mu (\sigma ^{n}(r)) + \mathcal{L}_{\sigma ^{ n+1}(r)}\{f\}(s) = 0.}$$

Taking the limit as \(s \rightarrow \infty\) yields

$$\displaystyle{f(\sigma ^{n}(r))\mu (\sigma ^{n}(r)) +\lim _{ s\rightarrow \infty }\mathcal{L}_{\sigma ^{n+1}(r)}\{f\}(s) = 0.}$$

Therefore,

$$\displaystyle{f(\sigma ^{n}(r))\mu (\sigma ^{n}(r)) = 0.}$$

Hence, we deduce that

$$\displaystyle{f(\sigma ^{n}(r)) = 0.}$$

So, f(t) = 0 for all t ≥ r.

Thus, \(\mathcal{L}_{r}\{f\}(s) = 0\) if and only if f = 0. Now let g be an arbitrary function such that \(\mathcal{L}_{r}\{f\}(s) = \mathcal{L}_{r}\{g\}(s)\). Then by linearity, we have \(\mathcal{L}_{r}\{f - g\}(s) = 0\), which implies \(f - g = 0\). Hence, f(t) = g(t) for all t ≥ r. □ 

Theorem 5.71 (Shifting).

Suppose \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) is of exponential order k > 0. Then for |s| > k, we have:

$$\displaystyle\begin{array}{rcl} \mbox{ i)}\,\mathcal{L}_{\sigma ^{n}(r)}\{f\}(s)& =& e_{s}(\sigma ^{n}(r),r)\mathcal{L}_{ r}\{f\}(s) {}\\ & & \quad -\int _{r}^{\sigma ^{n}(r) }e_{s}(\sigma ^{n}(r),\sigma (t))f(t)\,Dt; {}\\ \mbox{ ii)}\,\mathcal{L}_{\rho ^{n}(r)}\{f\}(s)& =& e_{\ominus s}(r,\rho ^{n}(r))\mathcal{L}_{ r}\{f\}(s) {}\\ & & \quad +\int _{ \rho ^{n}(r)}^{r}e_{ \ominus s}(\sigma (t),\rho ^{n}(r))f(t)\,Dt. {}\\ \end{array}$$

Proof.

For part (i), we will proceed by induction on n. For the base case, consider n = 1: 

$$\displaystyle{\mathcal{L}_{\sigma (r)}\{f\}(s) =\int _{ \sigma (r)}^{\infty }e_{ \ominus s}(\sigma (t),\sigma (r))f(t)\,Dt.}$$

Note that

$$\displaystyle\begin{array}{rcl} e_{\ominus s}(\sigma (t),\sigma (r))& =& \frac{(1 + s\mu (r))} {(1 + s\mu (r))e_{s}(\sigma (t),\sigma (r))} {}\\ & =& \frac{(1 + s\mu (r))} {e_{s}(\sigma (t),r)} = (1 + s\mu (r))e_{\ominus s}(\sigma (t),r). {}\\ \end{array}$$

Therefore, we have

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{\sigma (r)}\{f\}(s) {}\\ & & =\int _{ \sigma (r)}^{\infty }(1 + s\mu (r))e_{ \ominus s}(\sigma (t),r)f(t)\,Dt {}\\ & & =\sum _{ j=0}^{\infty }\left [(1 + s\mu (r))e_{ \ominus s}(\sigma ^{j+1}(\sigma (r)),r)f(\sigma ^{j}(\sigma (r)))\mu (\sigma ^{j}(\sigma (r)))\right ] {}\\ & & =\sum _{ j=0}^{\infty }\left [(1 + s\mu (r))e_{ \ominus s}(\sigma ^{j+2}(r),r)f(\sigma ^{j+1}(r))\mu (\sigma ^{j+1}(r))\right ] {}\\ & & =\sum _{ j=0}^{\infty }\left [(1 + s\mu (r))e_{ \ominus s}(\sigma ^{j+1}(r),r)f(\sigma ^{j}(r))\mu (\sigma ^{j}(r))\right ] {}\\ & & \quad \quad \quad - (1 + s\mu (r))e_{\ominus s}(\sigma (r),r)f(r)\mu (r). {}\\ \end{array}$$

Note that

$$\displaystyle{(1 + s\mu (r))e_{\ominus s}(\sigma (r),r) = \frac{(1 + s\mu (r))} {e_{s}(\sigma (r),r)} = \frac{(1 + s\mu (r))} {(1 + s\mu (r))e_{s}(r,r)} = 1.}$$

We can thus further simplify the expression to

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{\sigma (r)}\{f\}(s)& =& \int _{r}^{\infty }(1 + s\mu (r))e_{ \ominus s}(\sigma (t),r)f(t)\,Dt - f(r)\mu (r) {}\\ & =& (1 + s\mu (r))\mathcal{L}_{r}\{f\}(s) - f(r)\mu (r) {}\\ & =& e_{s}(\sigma (r),r)\mathcal{L}_{r}\{f\}(s) -\int _{r}^{\sigma (r)}e_{ s}(\sigma (r),\sigma (t))f(t)\,Dt, {}\\ \end{array}$$

which proves the base case.

For the inductive step, assume the hypothesis is true for some n ≥ 1. Then by the base case, we have

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{\sigma ^{n+1}(r)}\{f\}(s) {}\\ & & = e_{s}(\sigma ^{n+1}(r),\sigma ^{n}(r))\mathcal{L}_{\sigma ^{ n}(r)}\{f\}(s) {}\\ & & \quad \ -\int _{\sigma ^{n}(r)}^{\sigma ^{n+1}(r) }e_{s}(\sigma ^{n+1}(r),\sigma (t))f(t)\,Dt {}\\ & & = e_{s}(\sigma ^{n+1}(r),\sigma ^{n}(r))\Bigg[e_{ s}(\sigma ^{n}(r),r)\mathcal{L}_{ r}\{f\}(s) {}\\ & & \quad \ -\int _{r}^{\sigma ^{n}(r) }e_{s}(\sigma ^{n}(r),\sigma (t))f(t)\,Dt\Bigg] -\int _{\sigma ^{ n}(r)}^{\sigma ^{n+1}(r) }e_{s}(\sigma ^{n+1}(r),\sigma (t))f(t)\,Dt {}\\ & & = e_{s}(\sigma ^{n+1}(r),r)\mathcal{L}_{ r}\{f\}(s) -\int _{r}^{\sigma ^{n}(r) }e_{s}(\sigma ^{n+1}(r),\sigma (t))f(t)\,Dt {}\\ & & \quad \ -\int _{\sigma ^{n}(r)}^{\sigma ^{n+1}(r) }e_{s}(\sigma ^{n+1}(r),\sigma (t))f(t)\,Dt {}\\ & & = e_{s}(\sigma ^{n+1}(r),r)\mathcal{L}_{ r}\{f\}(s) -\int _{r}^{\sigma ^{n+1}(r) }e_{s}(\sigma ^{n+1}(r),\sigma (t))f(t)\,Dt. {}\\ \end{array}$$

We now prove part (ii) similarly. For the base case, consider n = 1. We obtain

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{\rho (r)}\{f\}(s)& =& \int _{\rho (r)}^{\infty }e_{ \ominus s}(\sigma (t),\rho (r))f(t)\,Dt {}\\ & =& \int _{\rho (r)}^{\infty }\frac{e_{\ominus s}(\sigma (t),r)} {1 + s\mu (\rho (r))} f(t)\,Dt {}\\ & =& \frac{1} {1 + s\mu (\rho (r))}\mathcal{L}_{r}\{f\}(s) + \frac{f(\rho (r))\mu (\rho (r))} {1 + s\mu (\rho (r))} {}\\ & =& e_{\ominus s}(r,\rho (r))\mathcal{L}_{r}\{f\}(s) +\int _{ \rho (r)}^{r}e_{ \ominus s}(\sigma (t),\rho (r))f(t)\,Dt, {}\\ \end{array}$$

proving the base case. For the inductive step, assume the hypothesis is true for some n ≥ 1. Then by the base case, we have

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{\rho ^{n+1}(r)}\{f\}(s) {}\\ & & = e_{\ominus s}(\rho ^{n}(r),\rho ^{n+1}(r))\mathcal{L}_{\rho ^{ n}(r)}\{f\}(s) +\int _{ \rho ^{n+1}(r)}^{\rho ^{n}(r) }e_{\ominus s}(\sigma (t),\rho ^{n+1}(r))f(t)\,Dt {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & & = e_{\ominus s}(\rho ^{n}(r),\rho ^{n+1}(r))\Bigg[e_{ \ominus s}(r,\rho ^{n}(r))\mathcal{L}_{ r}\{f\}(s) {}\\ & & \quad \ +\int _{ \rho ^{n}(r)}^{r}e_{ \ominus s}(\sigma (t),\rho ^{n}(r))f(t)\,Dt\Bigg] +\int _{ \rho ^{ n+1}(r)}^{\rho ^{n}(r) }e_{\ominus s}(\sigma (t),\rho ^{n+1}(r))f(t)\,Dt {}\\ & & = e_{\ominus s}(r,\rho ^{n+1}(r))\mathcal{L}_{ r}\{f\}(s) +\int _{ \rho ^{n}(r)}^{r}e_{ \ominus s}(\sigma (t),\rho ^{n+1}(r))f(t)\,Dt {}\\ & & \quad \ +\int _{ \rho ^{n+1}(r)}^{\rho ^{n}(r) }e_{\ominus s}(\sigma (t),\rho ^{n+1}(r))f(t)\,Dt {}\\ & & = e_{\ominus s}(r,\rho ^{n+1}(r))\mathcal{L}_{ r}\{f\}(s) +\int _{ \rho ^{n+1}(r)}^{r}e_{ \ominus s}(\sigma (t),\rho ^{n+1}(r))f(t)\,Dt. {}\\ \end{array}$$

And this completes the proof. □ 

5.10 Laplace Transform Formulas

Theorem 5.72.

If \(c \in \mathbb{R}\) , then \(\mathcal{L}_{r}\{c\}(s) = \frac{c} {s}\) for |s| > 0.

Proof.

Note c = ce 0(t, r) and apply Theorems 5.65 and 5.62. □ 

Definition 5.73.

If \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\), then for any \(n \in \mathbb{N}\), we define the n-th antidifference of f based at r by

$$\displaystyle{D_{r}^{-n}f(t) =\int _{ r}^{t}\int _{ r}^{x_{n} }\cdots \int _{r}^{x_{3} }\int _{r}^{x_{2} }f(x_{1})\,Dx_{1}\cdots Dx_{n}.}$$

Theorem 5.74.

Let \(f: \mathbb{T}_{r} \rightarrow \mathbb{R}\) . Then for any \(n \in \mathbb{N}\) ,

$$\displaystyle{D_{r}^{-n}f(t) =\int _{ r}^{t}h_{ n-1}(t,\sigma (s))f(s)\,Ds.}$$

Proof.

Consider the initial value problem

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} D^{n}y(t) = f(t), \quad \\ D^{k}y(r) = 0\mbox{ for }0 \leq k \leq n - 1.\quad \end{array} \right.}$$

It is easy to see that the unique solution to this system is given by \(D_{r}^{-n}f(t)\).

On the other hand, by Taylor’s Formula (Theorem 5.37), \(y(t) = P_{n-1}(t,r) + R_{n-1}(t,r)\), where

$$\displaystyle\begin{array}{rcl} & & \quad P_{n-1}(t,r) =\sum _{ k=0}^{n-1}D^{k}y(r)h_{ k}(t,r), {}\\ & & \mbox{ and } {}\\ & & \quad R_{n-1}(t,r) =\int _{ r}^{t}h_{ n-1}(t,\sigma (s))D^{n}y(s)\,Ds. {}\\ \end{array}$$

Then we have

$$\displaystyle\begin{array}{rcl} f(t) = D^{n}y(t) = D^{n}P_{ n-1}(t,r) + D^{n}R_{ n-1}(t,r) = D^{n}R_{ n-1}(t,r),& & {}\\ \end{array}$$

which implies that R n−1(t, r) is also a solution. Thus we have

$$\displaystyle\begin{array}{rcl} D_{r}^{-n}f(t)& =& R_{ n-1}(t,r) {}\\ & =& \int _{r}^{t}h_{ n-1}(t,\sigma (s))D^{n}y(s)\,Ds {}\\ & =& \int _{r}^{t}h_{ n-1}(t,\sigma (s))D^{n}D_{ r}^{-n}f(s)\,Ds {}\\ & =& \int _{r}^{t}h_{ n-1}(t,\sigma (s))f(s)\,Ds, {}\\ \end{array}$$

since the solution is unique. □ 
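
Theorem 5.74 is convenient computationally, since it collapses n nested sums into a single one. The following Python sketch compares the two sides for n = 2; the test function f(t) = t and all parameter values are arbitrary choices.

```python
# Numerical check of Theorem 5.74 with n = 2: the iterated integral D_r^{-2}f(t)
# equals int_r^t h_1(t, sigma(s)) f(s) Ds, where h_1(t, s) = t - s.
# The test function f(t) = t and all parameter values are arbitrary choices.

a, b, r = 2.0, 3.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

def integral(g, lo, hi):          # int_lo^hi g Ds = sum over {lo, sigma(lo), ...} < hi
    total, t = 0.0, lo
    while t < hi:
        total += g(t) * mu(t)
        t = sigma(t)
    return total

f = lambda t: t
T = sigma(sigma(sigma(r)))        # evaluate both sides at t = sigma^3(r)

nested = integral(lambda x2: integral(f, r, x2), r, T)
single = integral(lambda s: (T - sigma(s)) * f(s), r, T)
print(nested, single)             # both 736.0 for these test values
```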

The following results are used to obtain the exponential order of the Taylor monomial, h n (t, r), and give its Laplace transform.

Lemma 5.75.

Let f(t) be of exponential order k > 0. Then \(D_{r}^{-1}f(t)\) is also of exponential order k > 0.

Proof.

Let | f(t) | ≤ Me k (t, r) for all t ≥ x, and let \(C = \left \vert \int _{r}^{x}f(u)\,Du\right \vert\). Then for t ≥ x, we have

$$\displaystyle\begin{array}{rcl} \left \vert D_{r}^{-1}f(t)\right \vert & =& \left \vert \int _{ r}^{t}f(u)\,Du\right \vert {}\\ &\leq & \left \vert \int _{r}^{x}f(u)\,Du\right \vert + \left \vert \int _{ x}^{t}f(u)\,Du\right \vert {}\\ &\leq & C +\int _{ x}^{t}\left \vert f(u)\right \vert \,Du {}\\ & \leq & C + M\int _{x}^{t}e_{ k}(u,r)\,Du \leq C + M\int _{r}^{t}e_{ k}(u,r)\,Du {}\\ & =& C + \frac{M} {k} \left (e_{k}(u,r)\right )\biggr |_{u=r}^{u=t} {}\\ & =& C + \frac{M} {k} (e_{k}(t,r) - 1) {}\\ & \leq & C + \frac{M} {k} e_{k}(t,r) {}\\ & \leq & \left (C + \frac{M} {k} \right )e_{k}(t,r). {}\\ \end{array}$$

As this demonstrates that \(D_{r}^{-1}f\) is of exponential order, the proof of the lemma is complete. □ 

Corollary 5.76.

Let f(t) be of exponential order k > 0. Then \(D_{r}^{-n}f(t)\) is also of exponential order k > 0 for all \(n \in \mathbb{N}\) .

To see that the corollary holds, apply Lemma 5.75 repeatedly, using induction on n.

Theorem 5.77.

Let f be of exponential order k > 0. Then for |s| > k,

$$\displaystyle{\mathcal{L}_{r}\{D_{r}^{-n}f(t)\}(s) = \frac{1} {s^{n}}\mathcal{L}_{r}\{f\}(s).}$$

Proof.

We will proceed by induction on n. For the base case, consider n = 1. Then using integration by parts, we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{f\}(s)& =& \int _{r}^{\infty }e_{ \ominus s}(\sigma (t),r)f(t)\,Dt {}\\ & =& e_{\ominus s}(t,r)D_{r}^{-1}f(t)\biggr |_{ t=r}^{t\rightarrow \infty } + s\int _{ r}^{\infty }e_{ \ominus s}(\sigma (t),r)D_{r}^{-1}f(t)\,Dt {}\\ & =& s\mathcal{L}_{r}\{D_{r}^{-1}f(t)\}(s). {}\\ \end{array}$$

Thus, \(\mathcal{L}_{r}\{D_{r}^{-1}f(t)\}(s) = \frac{1} {s}\mathcal{L}_{r}\{f\}(s)\).

For the inductive step, assume the statement is true for some integer n > 0. Then we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{D_{r}^{-n-1}f(t)\}(s)& =& \frac{1} {s}\mathcal{L}_{r}\{D_{r}^{-n}f(t)\} {}\\ & =& \frac{1} {s}\left [ \frac{1} {s^{n}}\mathcal{L}_{r}\{f\}(s)\right ] {}\\ & =& \frac{1} {s^{n+1}}\mathcal{L}_{r}\{f\}(s), {}\\ \end{array}$$

as was to be shown. □ 

Lemma 5.78.

The n-th Taylor monomial h n (t,r) is of exponential order k = 1.

Proof.

We will prove this result for a > 1 (leaving the case a = 1 to the reader) by induction on n. Consider the base case n = 1. We have

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow \infty }\left \vert \frac{h_{1}(t,r)} {e_{1}(t,r)} \right \vert & =& \lim _{t\rightarrow \infty }\left \vert \frac{t - r} {e_{1}(t,r)}\right \vert {}\\ &\leq & \lim _{t\rightarrow \infty }\left \vert \frac{t} {e_{1}(t,r)}\right \vert +\lim _{t\rightarrow \infty }\left \vert \frac{-r} {e_{1}(t,r)}\right \vert {}\\ &\leq & \lim _{t\rightarrow \infty }\left \vert \frac{t} {1 +\mu (\rho (t))}\right \vert {}\\ & =& \lim _{t\rightarrow \infty }\left \vert \frac{t} {1 + a^{-1}\mu (t)}\right \vert {}\\ & =& \lim _{t\rightarrow \infty }\left \vert \frac{at} {a + (a - 1)t + b}\right \vert {}\\ & =& \lim _{t\rightarrow \infty }\left \vert \frac{a} {(a - 1) + \frac{a+b} {t} }\right \vert {}\\ & =& \frac{a} {a - 1}. {}\\ \end{array}$$

Thus, for sufficiently large t and any ε > 0,

$$\displaystyle{\vert h_{1}(t,r)\vert \leq \left ( \frac{a} {a - 1}+\epsilon \right )e_{1}(t,r).}$$

For the inductive step, assume h n (t, r) is of exponential order 1 for some n. Then, since \(D_{r}^{-1}h_{n}(t,r) = h_{n+1}(t,r)\), applying Lemma 5.75 implies h n+1 is of exponential order 1. □ 

Theorem 5.79.

Let |s| > 1. Then

$$\displaystyle{\mathcal{L}_{r}\{h_{n}(t,r)\}(s) = \frac{1} {s^{n+1}}.}$$

Proof.

The base case, n = 0, is trivial since \(\mathcal{L}_{r}\{1\}(s) = \frac{1} {s}\) by Theorem 5.72. Note that

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{r}\{h_{n}(t,r)\}(s) {}\\ & & \qquad \quad =\int _{ r}^{\infty }e_{ \ominus s}(\sigma (t),r)h_{n}(t,r)\,Dt {}\\ & & \qquad \quad = \left.e_{\ominus s}(t,r)h_{n+1}(t,r)\right \vert _{t=r}^{t\rightarrow \infty } +\int _{ r}^{\infty } \frac{s} {1 + s\mu (t)}e_{\ominus s}(t,r)h_{n+1}(t,r)\,Dt {}\\ & & \qquad \quad =\int _{ r}^{\infty } \frac{s} {1 + s\mu (t)}e_{\ominus s}(t,r)h_{n+1}(t,r)\,Dt {}\\ & & \qquad \quad = s\mathcal{L}_{r}\{h_{n+1}(t,r)\}(s). {}\\ \end{array}$$

Thus, \(\mathcal{L}_{r}\{h_{n+1}(t,r)\}(s) = \frac{1} {s}\mathcal{L}_{r}\{h_{n}(t,r)\}(s)\). Suppose that the theorem holds for some n. Then it follows that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{h_{n+1}(t,r)\}(s) = \frac{1} {s}\mathcal{L}_{r}\{h_{n}(t,r)\}(s) = \frac{1} {s} \frac{1} {(s^{n+1})} = \frac{1} {s^{n+2}},& & {}\\ \end{array}$$

which completes the induction step and thus proves the result. □ 
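
A numerical check of Theorem 5.79 (a sketch only; n = 3 and the other values are arbitrary test choices, and h n is computed from its product form):

```python
# Numerical check of Theorem 5.79: L_r{h_n(t,r)}(s) = 1/s^{n+1}, here with n = 3.
# h_n is computed from its product form, and all values are arbitrary test choices.

a, b, r, s, n = 2.0, 3.0, 1.0, 4.0, 3

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b
def bracket(i): return float(i) if a == 1 else (a**i - 1) / (a - 1)

def h(n, t, r):                   # h_n(t,r) = prod_{i=1}^n (t - sigma^{i-1}(r))/[i]_a
    val, point = 1.0, r
    for i in range(1, n + 1):
        val *= (t - point) / bracket(i)
        point = sigma(point)
    return val

total, t, e_s = 0.0, r, 1.0
for _ in range(120):
    e_s *= 1.0 + s * mu(t)
    total += h(n, t, r) * mu(t) / e_s
    t = sigma(t)

print(total, 1.0 / s**(n + 1))    # both ~ 0.00390625
```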

Lemma 5.80.

The discrete trigonometric functions, \(\sin _{p}\) and \(\cos _{p}\) , and the hyperbolic trigonometric functions, \(\sinh _{p}\) and \(\cosh _{p}\) , are all of exponential order |p|.

Proof.

Let p be such that \(\pm p \in \mathcal{R}^{c}.\) Then for sufficiently large t, we have

$$\displaystyle\begin{array}{rcl} \vert \sinh _{p}(t,r)\vert & =& \frac{1} {2}\left \vert e_{p}(t,r) - e_{-p}(t,r)\right \vert {}\\ &\leq & \frac{1} {2}\vert e_{p}(t,r)\vert + \frac{1} {2}\vert e_{-p}(t,r)\vert {}\\ &\leq & e_{\vert p\vert }(t,r). {}\\ \end{array}$$

The proof for \(\cosh _{p}(t,r)\) is analogous.

For \(\cos _{p}\), we can use the identity \(\cos _{p}(t,r) =\cosh _{ip}(t,r)\) to obtain

$$\displaystyle\begin{array}{rcl} \vert \cos _{p}(t,r)\vert & =& \vert \cosh _{ip}(t,r)\vert {}\\ &\leq & e_{\vert ip\vert }(t,r) {}\\ & =& e_{\vert p\vert }(t,r). {}\\ \end{array}$$

The proof for \(\sin _{p}(t,r)\) is analogous. □ 

Theorem 5.81.

For |s| > |p| and \(\pm p \in \mathcal{R}^{c}\) , we have

  1. (i)

    \(\mathcal{L}_{r}\{\cosh _{p}(t,r)\}(s) = \frac{s} {s^{2}-p^{2}}\) ;

  2. (ii)

    \(\mathcal{L}_{r}\{\sinh _{p}(t,r)\}(s) = \frac{p} {s^{2}-p^{2}}\) .

Proof.

To see that (i) holds, note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{\cosh _{p}(t,r)\}(s)& =& \frac{1} {2}\left [\mathcal{L}_{r}\{e_{p}(t,r)\}(s) + \mathcal{L}_{r}\{e_{-p}(t,r)\}(s)\right ] {}\\ & =& \frac{1} {2} \frac{1} {(s - p)} + \frac{1} {2} \frac{1} {(s + p)} {}\\ & =& \frac{s} {s^{2} - p^{2}}. {}\\ \end{array}$$

The proof of (ii) is similar. □ 

Theorem 5.82.

For |s| > |p| and \(\pm ip \in \mathcal{R}^{c}\) , we have

  1. (i)

    \(\mathcal{L}_{r}\{\cos _{p}(t,r)\}(s) = \frac{s} {s^{2}+p^{2}}\) ;

  2. (ii)

    \(\mathcal{L}_{r}\{\sin _{p}(t,r)\}(s) = \frac{p} {s^{2}+p^{2}}.\)

Proof.

To see that (i) holds, recall that \(\cos _{p}(t,r) =\cosh _{ip}(t,r)\) and thus,

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{\cos _{p}(t,r)\}(s)& =& \mathcal{L}_{r}\{\cosh _{ip}(t,r)\}(s) {}\\ & =& \frac{1} {2} \frac{1} {(s - ip)} + \frac{1} {2} \frac{1} {(s + ip)} {}\\ & =& \frac{s} {s^{2} + p^{2}}. {}\\ \end{array}$$

The proof of (ii) is analogous. □ 

Lemma 5.83.

For \(p,q \in \mathcal{R}^{c}\) and \(t,r \in \mathbb{T}_{\alpha }\) , let \(k(t) = \frac{q} {1+p\mu (t)}.\) Then the following functions are of exponential order |p| + |q|:

  1. (i)

    \(e_{p}(t,r)\cosh _{k}(t,r);\)

  2. (ii)

    \(e_{p}(t,r)\sinh _{k}(t,r);\)

  3. (iii)

    \(e_{p}(t,r)\cos _{k}(t,r);\)

  4. (iv)

    \(e_{p}(t,r)\sin _{k}(t,r).\)

Proof.

We will prove the result for (i). First, note that

$$\displaystyle\begin{array}{rcl} p \oplus \frac{q} {1 + p\mu (t)}& =& p + \frac{q} {1 + p\mu (t)} + \frac{pq\mu (t)} {1 + p\mu (t)} {}\\ & =& p + \frac{q(1 + p\mu (t))} {1 + p\mu (t)} {}\\ & =& p + q. {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \vert e_{p}(t,r)\cosh _{k}(t,r)\vert & =& \left \vert e_{p}(t,r)\frac{e_{k}(t,r) + e_{-k}(t,r)} {2} \right \vert {}\\ & =& \left \vert \frac{e_{p\oplus k}(t,r) + e_{p\oplus (-k)}(t,r)} {2} \right \vert {}\\ &\leq & \frac{\vert e_{p\oplus k}(t,r)\vert + \vert e_{p\oplus (-k)}(t,r)\vert } {2} {}\\ &\leq & \left \vert e_{s}(t,r)\right \vert, {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} s& =& \max \{\vert p \oplus k\vert,\vert p \oplus (-k)\vert \} {}\\ & =& \max \{\vert p + q\vert,\vert p - q\vert \} {}\\ &\leq & \vert p\vert + \vert q\vert. {}\\ \end{array}$$

Thus,

$$\displaystyle{\vert e_{p}(t,r)\cosh _{k}(t,r)\vert \leq \vert e_{s}(t,r)\vert \leq Me_{s}(t,r),}$$

for some M > 0, and so, \(e_{p}(t,r)\cosh _{k}(t,r)\) is of exponential order | p | + | q | . The proofs of (ii)–(iv) are analogous. □ 

Theorem 5.84.

Let \(k(t) = \frac{q} {1+p\mu (t)}\) for \(p,q \in \mathcal{R}^{c}\) . Then for |s| > |p| + |q|, we have

  1. (i)

    \(\mathcal{L}_{r}\{e_{p}(t,r)\cosh _{k}(t,r)\}(s) = \frac{s-p} {(s-p)^{2}-q^{2}};\)

  2. (ii)

    \(\mathcal{L}_{r}\{e_{p}(t,r)\sinh _{k}(t,r)\}(s) = \frac{q} {(s-p)^{2}-q^{2}};\)

  3. (iii)

    \(\mathcal{L}_{r}\{e_{p}(t,r)\cos _{k}(t,r)\}(s) = \frac{s-p} {(s-p)^{2}+q^{2}};\)

  4. (iv)

    \(\mathcal{L}_{r}\{e_{p}(t,r)\sin _{k}(t,r)\}(s) = \frac{q} {(s-p)^{2}+q^{2}}.\)

Proof.

To prove (i), first note that

$$\displaystyle{p \oplus k = p + q,}$$

as stated above. Therefore,

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{r}\{e_{p}(t,r)\cosh _{k}(t,r)\}(s) {}\\ & & \qquad \qquad = \frac{1} {2}\left [\mathcal{L}_{r}\left \{e_{p\oplus \frac{q} {1+p\mu (t)} }(t,r)\right \}(s) + \mathcal{L}_{r}\left \{e_{p\oplus \frac{-q} {1+p\mu (t)} }(t,r)\right \}(s)\right ] {}\\ & & \qquad \qquad = \frac{1} {2}\left [\mathcal{L}_{r}\left \{e_{p+q}(t,r)\right \}(s) + \mathcal{L}_{r}\left \{e_{p-q}(t,r)\right \}(s)\right ] {}\\ & & \qquad \qquad = \frac{1} {2}\left [ \frac{1} {s - (p + q)} + \frac{1} {s - (p - q)}\right ] {}\\ & & \qquad \qquad = \frac{s - p} {(s - p)^{2} - q^{2}}. {}\\ \end{array}$$

The proof of (ii) is similar.

To see that (iii) holds, recall that \(\cos _{k}(t,r) =\cosh _{ik}(t,r)\) and replace q by iq in the result of the proof of (i) to get

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{r}\{e_{p}(t,r)\cos _{k}(t,r)\}(s) = \frac{s - p} {(s - p)^{2} - (iq)^{2}} = \frac{s - p} {(s - p)^{2} + q^{2}}.& & {}\\ \end{array}$$

The proof of (iv) is analogous. □ 
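
Part (i) of Theorem 5.84 can be checked numerically by building \(\cosh _{k}\) directly from the variable function k(t), without invoking the identity \(p \oplus k = p + q\). A Python sketch with arbitrary test values satisfying |s| > |p| + |q|:

```python
# Numerical check of Theorem 5.84 (i) with cosh_k built directly from
# k(t) = q/(1 + p*mu(t)).  Arbitrary test values with |s| > |p| + |q|.

a, b, r, p, q, s = 2.0, 3.0, 1.0, 1.0, 0.5, 4.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

total, t = 0.0, r
e_p, e_k, e_mk, e_s = 1.0, 1.0, 1.0, 1.0
for _ in range(40):
    e_s *= 1.0 + s * mu(t)
    cosh_k = 0.5 * (e_k + e_mk)            # cosh_k(t_n, r) = (e_k + e_{-k})/2
    total += e_p * cosh_k * mu(t) / e_s
    k = q / (1.0 + p * mu(t))              # the time-varying function k(t)
    e_p  *= 1.0 + p * mu(t)
    e_k  *= 1.0 + k * mu(t)
    e_mk *= 1.0 - k * mu(t)
    t = sigma(t)

print(total, (s - p) / ((s - p)**2 - q**2))   # both ~ 0.34286
```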

5.11 Solving IVPs Using the Laplace Transform

In this section we will demonstrate how the discrete Laplace transform can be applied to solve difference equations on \(\mathbb{T}_{r}\).

Example 5.85.

Solve:

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} D^{2}f(t) - 2Df(t) - 8f(t) = 0,\quad \\ Df(r) = 0,\;\;f(r) = -\frac{3} {2}. \quad \end{array} \right.}$$

We will take the Laplace transform of both sides of the equation and use the initial conditions to solve this problem. We begin with

$$\displaystyle\begin{array}{rcl} 0& =& \mathcal{L}_{r}\{D^{2}f(t) - 2Df(t) - 8f(t)\}(s) {}\\ & =& \mathcal{L}_{r}\{D^{2}f(t)\}(s) - 2\mathcal{L}_{ r}\{Df(t)\}(s) - 8\mathcal{L}_{r}\{f\}(s) {}\\ & =& \left [s^{2}\mathcal{L}_{r}\{f\}(s) - sf(r) - Df(r)\right ] - 2\left [s\mathcal{L}_{r}\{f\}(s) - f(r)\right ] - 8\mathcal{L}_{r}\{f\}(s) {}\\ & =& \left [s^{2}\mathcal{L}_{r}\{f\}(s) - s\left (-\frac{3} {2}\right )\right ] - 2\left [s\mathcal{L}_{r}\{f\}(s) -\left (-\frac{3} {2}\right )\right ] - 8\mathcal{L}_{r}\{f\}(s) {}\\ & =& (s^{2} - 2s - 8)\mathcal{L}_{r}\{f\}(s) + \frac{3} {2}s - 3, {}\\ \end{array}$$

from which it follows that

$$\displaystyle{\mathcal{L}_{r}\{f\}(s) = \frac{3 -\frac{3} {2}s} {s^{2} - 2s - 8}.}$$

Using partial fractions, we obtain

$$\displaystyle{\mathcal{L}_{r}\{f\}(s) = \frac{3 -\frac{3} {2}s} {s^{2} - 2s - 8} = \frac{-\frac{1} {2}} {s - 4} + \frac{-1} {s + 2}.}$$

Therefore, by the injectivity of the Laplace transform,

$$\displaystyle{f(t) = -\frac{1} {2}e_{4}(t,r) - e_{-2}(t,r).}$$
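
One can verify this solution numerically on a concrete mixed time scale. In the Python sketch below, the values a = 2, b = 3, and r = 1 are arbitrary test choices:

```python
# A numerical check of Example 5.85: f(t) = -(1/2)e_4(t,r) - e_{-2}(t,r) should
# satisfy D^2 f - 2 Df - 8 f = 0 with f(r) = -3/2 and Df(r) = 0.
# The values a = 2, b = 3, r = 1 are arbitrary test choices.

a, b, r = 2.0, 3.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

def e(p, t):                      # e_p(t, r) via its product form, for t in T_r
    val, point = 1.0, r
    while point < t:
        val *= 1.0 + p * mu(point)
        point = sigma(point)
    return val

f   = lambda t: -0.5 * e(4.0, t) - e(-2.0, t)
Df  = lambda t: (f(sigma(t)) - f(t)) / mu(t)
D2f = lambda t: (Df(sigma(t)) - Df(t)) / mu(t)

t = sigma(sigma(r))               # an arbitrary point of T_r
print(D2f(t) - 2 * Df(t) - 8 * f(t))   # 0.0 (up to roundoff)
print(f(r), Df(r))                # -1.5 and 0.0, the initial conditions
```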

Example 5.86.

Solve the following IVP:

$$\displaystyle{ \begin{array}{ll} D^{2}y(t) + 4y(t)& = 0 \\ y(0) & = 1 \\ Dy(0) & = 1. \end{array} }$$

To solve the above problem, we first take the Laplace transform of both sides. This yields

$$\displaystyle{\mathcal{L}_{0}\{D^{2}y(t) + 4y(t)\} = \mathcal{L}_{ 0}\{0\},}$$

from which it follows that

$$\displaystyle{s^{2}\mathcal{L}_{ 0}\{y\} - sy(0) - Dy(0) + 4\mathcal{L}_{0}\{y\} = s^{2}\mathcal{L}_{ 0}\{y\} - (s + 1) + 4\mathcal{L}_{0}\{y\} = 0.}$$

We then solve for \(\mathcal{L}_{0}\{y\}\) and invert by writing

$$\displaystyle{(s^{2} + 4)\mathcal{L}_{ 0}\{y\} = s + 1,}$$

from which it follows that

$$\displaystyle{\mathcal{L}_{0}\{y\} = \frac{s} {(s^{2} + 4)} + \frac{1} {2} \frac{2} {(s^{2} + 4)}.}$$

Thus,

$$\displaystyle{y(t) =\cos _{2}(t,0) + \frac{1} {2}\sin _{2}(t,0).}$$
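
As with the previous example, the solution can be checked numerically; here the complex exponentials \(e_{\pm 2i}\) are used to build \(\cos _{2}\) and \(\sin _{2}\). In the sketch below, a = 2 and b = 3 are arbitrary choices, with α = 0 so that the base point 0 lies in the time scale:

```python
# A numerical check of Example 5.86: y(t) = cos_2(t,0) + (1/2)sin_2(t,0) should
# satisfy D^2 y + 4y = 0 with y(0) = Dy(0) = 1.  Here a = 2 and b = 3 are
# arbitrary, with alpha = 0 so that the base point 0 lies in the time scale.

a, b = 2.0, 3.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

def e(p, t, r=0.0):               # e_p(t, r) via its product form (p may be complex)
    val, point = 1.0 + 0.0j, r
    while point < t:
        val *= 1.0 + p * mu(point)
        point = sigma(point)
    return val

cos2 = lambda t: ((e(2j, t) + e(-2j, t)) / 2).real
sin2 = lambda t: ((e(2j, t) - e(-2j, t)) / 2j).real

y   = lambda t: cos2(t) + 0.5 * sin2(t)
Dy  = lambda t: (y(sigma(t)) - y(t)) / mu(t)
D2y = lambda t: (Dy(sigma(t)) - Dy(t)) / mu(t)

t = sigma(sigma(0.0))
print(D2y(t) + 4 * y(t))          # 0.0 (up to roundoff)
print(y(0.0), Dy(0.0))            # 1.0 and 1.0
```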

5.12 Green’s Functions

In this section we will consider boundary value problems on a mixed time scale with Sturm–Liouville type boundary value conditions for a > 1. We will find a Green’s function for a boundary value problem on a mixed time scale with Dirichlet boundary conditions, and investigate some of its properties. Many of the results in this section can be viewed as analogues to results for the continuous case given in Kelley and Peterson [137].

Theorem 5.87.

Let \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) and \(A,B,E,F \in \mathbb{R}\) be given. Then the homogeneous boundary value problem (BVP)

$$\displaystyle{ \left \{\begin{array}{lll} & - D^{2}y(t) = 0, &t \in \mathbb{T}_{\alpha } \\ & Ay(\alpha ) - BDy(\alpha ) = 0 \\ &Ey(\beta ) + FDy(\beta ) = 0\end{array} \right. }$$

has only the trivial solution if and only if

$$\displaystyle{ \gamma:= AE(\beta -\alpha ) + BE + AF\neq 0. }$$

Proof.

A general solution of \(-D^{2}y(t) = 0\) is given by

$$\displaystyle{ y(t) = c_{0} + c_{1}h_{1}(t,\alpha ). }$$

Using the boundary conditions, we have

$$\displaystyle{ Ay(\alpha ) - BDy(\alpha ) = Ac_{0} - Bc_{1} = 0 }$$

and

$$\displaystyle{ Ey(\beta ) + FDy(\beta ) = E[c_{0} + c_{1}(\beta -\alpha )] + Fc_{1} = 0. }$$

Thus, we have the following system of equations

$$\displaystyle\begin{array}{rcl} c_{0}A - c_{1}B& =& 0 {}\\ c_{0}E + c_{1}[E(\beta -\alpha ) + F]& =& 0, {}\\ \end{array}$$

which has only the trivial solution if and only if

$$\displaystyle{ \gamma:= \left \vert \begin{array}{cc} A& - B\\ E&E(\beta -\alpha ) + F \end{array} \right \vert \neq 0. }$$

It follows that

$$\displaystyle\begin{array}{rcl} \gamma & =& A[E(\beta -\alpha ) + F] + BE {}\\ & =& AE(\beta -\alpha ) + BE + AF, {}\\ \end{array}$$

as claimed. □ 

Lemma 5.88.

Assume \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) and \(A_{1},A_{2} \in \mathbb{R}\) . Then the boundary value problem

$$\displaystyle\begin{array}{rcl} -D^{2}y(t)& =& 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } {}\\ y(\alpha )& =& A_{1},\quad y(\beta ) = A_{2} {}\\ \end{array}$$

has the solution

$$\displaystyle{ y(t) = A_{1} + \frac{A_{2} - A_{1}} {\beta -\alpha } (t-\alpha ). }$$

Proof.

A general solution to the mixed difference equation D 2 y(t) = 0 is given by

$$\displaystyle{ y(t) = c_{0} + c_{1}h_{1}(t,\alpha ) = c_{0} + c_{1}(t-\alpha ). }$$

Using the first boundary condition, we get

$$\displaystyle{ y(\alpha ) = c_{0} = A_{1}. }$$

Using the second boundary condition, we have that

$$\displaystyle\begin{array}{rcl} y(\beta ) = A_{1} + c_{1}(\beta -\alpha ) = A_{2}.& & {}\\ \end{array}$$

Solving for c 1 we get

$$\displaystyle{c_{1} = \frac{A_{2} - A_{1}} {\beta -\alpha }.}$$

Hence,

$$\displaystyle{ y(t) = A_{1} + \frac{A_{2} - A_{1}} {\beta -\alpha } (t-\alpha ). }$$

 □ 

Theorem 5.89.

Assume \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}\) and \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\) . Then the unique solution of the BVP

$$\displaystyle\begin{array}{rcl} - D^{2}y(t) = f(t),\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) }& &{}\end{array}$$
(5.5)
$$\displaystyle\begin{array}{rcl} y(\alpha ) = 0 = y(\beta ),& &{}\end{array}$$
(5.6)

is given by

$$\displaystyle{ y(t) =\int _{ \alpha }^{\beta }G(t,s)f(s)Ds =\sum _{ j=0}^{K(\beta )-1}G(t,\sigma ^{j}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )), }$$

for \(t \in \mathbb{T}_{\alpha }^{\beta }\) , where \(G: \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )} \rightarrow \mathbb{R}\) is called the Green’s function for the homogeneous BVP

$$\displaystyle{ -D^{2}y(t) = 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } }$$
(5.7)
$$\displaystyle{ y(\alpha ) = 0 = y(\beta ), }$$
(5.8)

and is defined by

$$\displaystyle{ G(t,s):= \left \{\begin{array}{lllll} &u(t,s),&&0 \leq K(s) \leq K(t) - 1 \leq K(\beta ) - 1\\ &v(t, s), & &0 \leq K(t) \leq K(s) \leq K(\beta ) - 1, \end{array} \right. }$$

where for \((t,s) \in \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )}\)

$$\displaystyle{u(t,s):= \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} h_{1}(t,\alpha ) - h_{1}(t,\sigma (s))}$$

and

$$\displaystyle{v(t,s):= \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} h_{1}(t,\alpha ).}$$

Proof.

Note for the γ defined in Theorem 5.87 we have that for \(A = E = 1\), \(B = F = 0\),

$$\displaystyle{ \gamma = AE(\beta -\alpha ) + BE + AF = (\beta -\alpha )\neq 0. }$$

Hence, by Exercise 5.13, the BVP (5.5), (5.6) has a unique solution y(t). Using the variation of constants formula (Theorem 5.38 with n = 2) we have that

$$\displaystyle\begin{array}{rcl} y(t)& =& c_{0}h_{0}(t,\alpha ) + c_{1}h_{1}(t,\alpha ) -\int _{\alpha }^{t}h_{ 1}(t,\sigma (s))f(s)\,Ds {}\\ & =& c_{0} + c_{1}h_{1}(t,\alpha ) -\sum _{j=0}^{K(t)-1}h_{ 1}(t,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) {}\\ & =& c_{0} + c_{1}h_{1}(t,\alpha ) -\sum _{j=0}^{K(t)-1}h_{ 1}(t,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))a^{j}\mu (\alpha ). {}\\ \end{array}$$

Using the first boundary condition, we get

$$\displaystyle\begin{array}{rcl} y(\alpha )& =& c_{0} + c_{1}h_{1}(\alpha,\alpha ) -\int _{\alpha }^{\alpha }h_{1}(t,\sigma (s))f(s)Ds {}\\ & =& c_{0} {}\\ & =& 0, {}\\ \end{array}$$

and using the second boundary condition, we have that

$$\displaystyle\begin{array}{rcl} y(\beta ) = c_{1}h_{1}(\beta,\alpha ) -\sum _{j=0}^{K(\beta )-1}h_{ 1}(\beta,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) = 0.& & {}\\ \end{array}$$

Solving for c 1 yields

$$\displaystyle\begin{array}{rcl} c_{1} = \frac{\sum _{j=0}^{K(\beta )-1}h_{1}(\beta,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha ))} {h_{1}(\beta,\alpha )}.& & {}\\ \end{array}$$

Thus,

$$\displaystyle\begin{array}{rcl} y(t)& =& \frac{\sum _{j=0}^{K(\beta )-1}h_{1}(\beta,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha ))} {h_{1}(\beta,\alpha )} h_{1}(t,\alpha ) {}\\ & & \quad \ -\sum _{j=0}^{K(t)-1}h_{ 1}(t,\sigma ^{j+1}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) {}\\ & =& \sum _{j=0}^{K(t)-1}\left [\frac{h_{1}(\beta,\sigma ^{j+1}(\alpha ))} {h_{1}(\beta,\alpha )} h_{1}(t,\alpha ) - h_{1}(t,\sigma ^{j+1}(\alpha ))\right ]f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) {}\\ & & \quad \ +\sum _{ j=K(t)}^{K(\beta )-1}\frac{h_{1}(\beta,\sigma ^{j+1}(\alpha ))} {h_{1}(\beta,\alpha )} h_{1}(t,\alpha )f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) {}\\ & =& \sum _{j=0}^{K(\beta )-1}G(t,\sigma ^{j}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )) {}\\ & =& \int _{\alpha }^{\beta }G(t,s)f(s)Ds, {}\\ \end{array}$$

for G(t, s) defined as in the statement of this theorem. □ 
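
The construction in Theorem 5.89 is concrete enough to implement directly. The following Python sketch builds G(t,s) on a small mixed time scale (all parameter values and the test function are arbitrary choices) and verifies, using the second-difference formula of Exercise 5.5, that \(y(t) =\int _{\alpha }^{\beta }G(t,s)f(s)Ds\) solves the BVP (5.5), (5.6):

```python
# A numerical sketch of Theorem 5.89: build G(t,s) on a small mixed time scale
# and verify that y(t) = int_alpha^beta G(t,s) f(s) Ds solves -D^2 y = f with
# y(alpha) = y(beta) = 0.  All values (a = 2, b = 3, alpha = 1,
# beta = sigma^4(alpha)) and the test function f(t) = t are arbitrary choices.

a, b, alpha = 2.0, 3.0, 1.0

def sigma(t): return a * t + b
def mu(t):    return (a - 1) * t + b

pts = [alpha]
for _ in range(4):
    pts.append(sigma(pts[-1]))    # T_alpha^beta = {alpha, sigma(alpha), ..., sigma^4(alpha)}
beta = pts[-1]

def G(t, s):                      # h_1(t,s) = t - s, so u and v are as in Theorem 5.89
    v = (beta - sigma(s)) / (beta - alpha) * (t - alpha)
    return v - (t - sigma(s)) if t > s else v   # u = v - h_1(t, sigma(s)) when t > s

f = lambda t: t
y = lambda t: sum(G(t, s) * f(s) * mu(s) for s in pts[:-1])

for t in pts[:-2]:                # points where D^2 y(t) is defined
    D2y = (y(sigma(sigma(t))) * mu(t) - y(sigma(t)) * (mu(t) + mu(sigma(t)))
           + y(t) * mu(sigma(t))) / (mu(t)**2 * mu(sigma(t)))   # Exercise 5.5
    print(-D2y, f(t))             # the two columns agree
print(y(alpha), y(beta))          # both 0.0
```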

Theorem 5.90.

The Green’s function for the BVP (5.7) , (5.8) , satisfies

$$\displaystyle{G(t,s) \geq 0,\quad (t,s) \in \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )}}$$

and

$$\displaystyle{ \max _{t\in \mathbb{T}_{\alpha }^{\beta }}G(t,s) = G(\sigma (s),s),\quad s \in \mathbb{T}_{\alpha }^{\rho (\beta )}. }$$

Proof.

First, note both that

$$\displaystyle{ G(\alpha,s) = \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} h_{1}(\alpha,\alpha ) = \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} (\alpha -\alpha ) = 0 }$$

and that

$$\displaystyle{ G(\beta,s) = \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} h_{1}(\beta,\alpha ) - h_{1}(\beta,\sigma (s)) = 0. }$$

Now we will show that DG(t, s) ≥ 0 for t ≤ s, DG(t, s) ≤ 0 for s < t, and \(G(\sigma (s),s) \geq G(s,s)\). So first consider the domain 0 ≤ K(t) ≤ K(s) ≤ K(β) − 1:

$$\displaystyle\begin{array}{rcl} DG(t,s)& =& \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} Dh_{1}(t,\alpha ) {}\\ & =& \frac{\beta -\sigma (s)} {\beta -\alpha } h_{0}(t,\alpha ) {}\\ & =& \frac{\beta -\sigma (s)} {\beta -\alpha } {}\\ &\geq & 0. {}\\ \end{array}$$

Now consider the domain \(0 \leq K(s) \leq K(t) - 1 \leq K(\beta ) - 1\):

$$\displaystyle\begin{array}{rcl} DG(t,s)& =& \frac{h_{1}(\beta,\sigma (s))} {h_{1}(\beta,\alpha )} Dh_{1}(t,\alpha ) - Dh_{1}(t,\sigma (s)) {}\\ & =& \frac{\beta -\sigma (s)} {\beta -\alpha } h_{0}(t,\alpha ) - h_{0}(t,\sigma (s)) {}\\ & =& \frac{\beta -\sigma (s)} {\beta -\alpha } - 1 {}\\ & \leq & 0, {}\\ \end{array}$$

since \(\beta -\sigma (s) \leq \beta -\alpha\). Now, since G is increasing for t ≤ s and decreasing for s < t, we need to see which is larger: \(G(\sigma (s),s)\) or G(s, s). So consider

$$\displaystyle\begin{array}{rcl} & & G(\sigma (s),s) - G(s,s) {}\\ & & \quad \quad = \frac{\beta -\sigma (s)} {\beta -\alpha } (\sigma (s)-\alpha ) - (\sigma (s) -\sigma (s)) -\frac{\beta -\sigma (s)} {\beta -\alpha } (s-\alpha ) {}\\ & & \quad \quad = \frac{\beta -\sigma (s)} {\beta -\alpha } [\sigma (s) -\alpha -s+\alpha ] {}\\ & & \quad \quad = \frac{\beta -\sigma (s)} {\beta -\alpha } (\sigma (s) - s) {}\\ & & \quad \quad \geq 0, {}\\ \end{array}$$

which implies that \(\max _{t\in \mathbb{T}_{\alpha }^{\beta }}G(t,s) = G(\sigma (s),s)\). Also, since DG(t, s) ≥ 0 for \(t \in \mathbb{T}_{[\alpha,s]}\), DG(t, s) ≤ 0 for \(t \in \mathbb{T}_{(s,\beta )}\), and \(G(\alpha,s) = 0 = G(\beta,s)\), we have G(t, s) ≥ 0 on its domain. □ 

Remark 5.91.

Note that in the above proof, we in fact have DG(t, s) > 0 for t ≤ s < ρ(β) and DG(t, s) < 0 for α < s < t.

In the next theorem we give some more properties of the Green’s function for the BVP (5.7), (5.8).

Theorem 5.92.

Let G(t,s), u(t,s), and v(t,s) be as defined in Theorem 5.89 . Then the following hold:

  1. (i)

    \(G(\alpha,s) = 0 = G(\beta,s),\quad s \in \mathbb{T}_{\alpha }^{\rho (\beta )};\)

  2. (ii)

    for each fixed \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )},\) \(-D^{2}u(t,s) = 0 = -D^{2}v(t,s)\) for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) };\)

  3. (iii)

    \(v(t,s) = u(t,s) + h_{1}(t,\sigma (s)),\quad (t,s) \in \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )};\)

  4. (iv)

    \(u(\sigma (s),s) = v(\sigma (s),s),\quad s \in \mathbb{T}_{\alpha }^{\rho (\beta )};\)

  5. (v)

    \(-D^{2}G(t,s) = \frac{\delta _{ts}} {\mu (s)},\quad (t,s) \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \times \mathbb{T}_{\alpha }^{\rho (\beta )},\) where δ ts is the Kronecker delta, i.e., δ ts = 1 for t = s and δ ts = 0 for t ≠ s.

Proof.

In the proof of Theorem 5.90 we proved (i). The proofs of the properties (ii)–(iv) are straightforward and left to the reader (see Exercise 5.15). We now use these properties to prove (v). It follows that for t < s,

$$\displaystyle\begin{array}{rcl} -D^{2}G(t,s) = -D^{2}u(t,s) = 0 = \frac{\delta _{ts}} {\mu (s)}& & {}\\ \end{array}$$

and for t > s,

$$\displaystyle\begin{array}{rcl} -D^{2}G(t,s) = -D^{2}v(t,s) = 0 = \frac{\delta _{ts}} {\mu (s)}.& & {}\\ \end{array}$$

Finally, when t = s, note that \(G(\sigma ^{2}(s),s) = u(\sigma ^{2}(s),s)\), \(G(\sigma (s),s) = u(\sigma (s),s)\), and \(G(s,s) = v(s,s)\). Hence, using Exercise 5.5 together with properties (ii) and (iii), we have

$$\displaystyle\begin{array}{rcl} & & D^{2}G(t,s) {}\\ & & \quad = \frac{G(\sigma ^{2}(t),s)\mu (t) - G(\sigma (t),s)[\mu (t) +\mu (\sigma (t))] + G(t,s)\mu (\sigma (t))} {[\mu (t)]^{2}\mu (\sigma (t))} {}\\ & & \quad = \frac{u(\sigma ^{2}(s),s)\mu (s) - u(\sigma (s),s)[\mu (s) +\mu (\sigma (s))] + v(s,s)\mu (\sigma (s))} {[\mu (s)]^{2}\mu (\sigma (s))} {}\\ & & \quad = \frac{u(\sigma ^{2}(s),s)\mu (s) - u(\sigma (s),s)[\mu (s) +\mu (\sigma (s))] + u(s,s)\mu (\sigma (s))} {[\mu (s)]^{2}\mu (\sigma (s))} {}\\ & & \quad \quad + \frac{h_{1}(s,\sigma (s))\mu (\sigma (s))} {[\mu (s)]^{2}\mu (\sigma (s))} {}\\ & & \quad = D^{2}u(s,s) + \frac{h_{1}(s,\sigma (s))} {[\mu (s)]^{2}} {}\\ & & \quad = \frac{s -\sigma (s)} {[\mu (s)]^{2}} {}\\ & & \quad = - \frac{1} {\mu (s)}. {}\\ \end{array}$$

Therefore,

$$\displaystyle{ -D^{2}G(t,s) = \frac{\delta _{ts}} {\mu (s)}, }$$

for \((t,s) \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \times \mathbb{T}_{\alpha }^{\rho (\beta )}\). □ 

The following theorem along with Exercise 5.13 is a uniqueness result for the Green’s function for the BVP (5.7), (5.8).

Theorem 5.93.

There is a unique function \(G: \mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )} \rightarrow \mathbb{R}\) such that \(G(\alpha,s) = 0 = G(\beta,s)\) , for each \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) , and that \(-D^{2}G(t,s) = \frac{\delta _{ts}} {\mu (s)}\) , for each fixed  \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) .

Proof.

Fix \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\). Then by Theorem 5.89 with \(f(t) = \frac{\delta _{ts}} {\mu (s)},\) \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) },\) the BVP

$$\displaystyle\begin{array}{rcl} & & -D^{2}y(t) = \frac{\delta _{ts}} {\mu (s)},\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } {}\\ & & \quad \quad y(\alpha ) = 0 = y(\beta ), {}\\ \end{array}$$

has a unique solution on \(\mathbb{T}_{\alpha }^{\beta }\). Hence for each fixed \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\), G(t, s) is uniquely determined for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) Since \(s \in \mathbb{T}_{\alpha }^{\rho (\beta )}\) is arbitrary, G(t, s) is uniquely determined on \(\mathbb{T}_{\alpha }^{\beta } \times \mathbb{T}_{\alpha }^{\rho (\beta )}.\) □ 

Theorem 5.94.

Assume \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}\) . Then the unique solution of the BVP

$$\displaystyle\begin{array}{rcl} & & -D^{2}y(t) = f(t),\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } {}\\ & & y(\alpha ) = A_{1},\quad y(\beta ) = A_{2} {}\\ \end{array}$$

is given by

$$\displaystyle\begin{array}{rcl} y(t)& =& u(t) +\int _{ \alpha }^{\beta }G(t,s)f(s)Ds = u(t) {}\\ & & \quad +\sum _{ j=0}^{K(\beta )-1}G(t,\sigma ^{j}(\alpha ))f(\sigma ^{j}(\alpha ))\mu (\sigma ^{j}(\alpha )), {}\\ \end{array}$$

where u(t) solves the BVP

$$\displaystyle{ \left \{\begin{array}{ll} & - D^{2}y(t) = 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \\ & y(\alpha ) = A_{1},\quad y(\beta ) = A_{2} \end{array} \right. }$$

and G(t,s) is the Green’s function for the BVP (5.7) , (5.8) .

Proof.

By Exercise 5.13 the given BVP has a unique solution y(t). By Theorem 5.89

$$\displaystyle\begin{array}{rcl} y(t)& =& u(t) +\int _{ \alpha }^{\beta }G(t,s)f(s)Ds {}\\ & =& u(t) + z(t), {}\\ \end{array}$$

where \(z(t):=\int _{ \alpha }^{\beta }G(t,s)f(s)Ds\) is by Theorem 5.89 the solution of the BVP

$$\displaystyle{-D^{2}z(t) = f(t),\quad z(\alpha ) = 0 = z(\beta ).}$$

It follows that

$$\displaystyle\begin{array}{rcl} y(\alpha ) = u(\alpha ) + z(\alpha ) = A_{1}& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} y(\beta ) = u(\beta ) + z(\beta ) = A_{2}.& & {}\\ \end{array}$$

Furthermore,

$$\displaystyle\begin{array}{rcl} -D^{2}y(t) = -D^{2}u(t) - D^{2}z(t) = 0 + f(t) = f(t)& & {}\\ \end{array}$$

for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) }.\) □ 

We now prove a comparison theorem for solutions of boundary value problems of the type treated by Theorem 5.94.

Theorem 5.95 (Comparison Theorem).

If \(u,v: \mathbb{T}_{\alpha }^{\beta } \rightarrow \mathbb{R}\) satisfy

$$\displaystyle\begin{array}{rcl} D^{2}u(t)& \leq & D^{2}v(t),\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } {}\\ u(\alpha )& \geq & v(\alpha ), {}\\ u(\beta )& \geq & v(\beta ). {}\\ \end{array}$$

Then u(t) ≥ v(t) on \(\mathbb{T}_{\alpha }^{\beta }\) .

Proof.

Let \(w(t):= u(t) - v(t)\), for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) Then for \(t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) }\)

$$\displaystyle{ f(t):= -D^{2}w(t) = -D^{2}u(t) + D^{2}v(t) \geq 0. }$$

Setting \(A_{1}:= u(\alpha ) - v(\alpha ) \geq 0\) and \(A_{2}:= u(\beta ) - v(\beta ) \geq 0\), we see that w(t) solves the boundary value problem

$$\displaystyle{ \left \{\begin{array}{ll} & - D^{2}w(t) = f(t),\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \\ & w(\alpha ) = A_{1},\quad w(\beta ) = A_{2}. \end{array} \right. }$$

Thus, by Theorem 5.94

$$\displaystyle\begin{array}{rcl} w(t) = y(t) +\int _{ \alpha }^{\beta }G(t,s)f(s)Ds\quad t \in \mathbb{T}_{\alpha }^{\beta },& & {}\\ \end{array}$$

where G(t, s) is the Green’s function defined earlier and y(t) is the solution of

$$\displaystyle{ \left \{\begin{array}{ll} & - D^{2}y(t) = 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \\ & y(\alpha ) = A_{1},\quad y(\beta ) = A_{2}. \end{array} \right. }$$

Since \(-D^{2}y(t) = 0\) has the general solution

$$\displaystyle{ y(t) = c_{0} + c_{1}h_{1}(t,\alpha ) = c_{0} + c_{1}(t-\alpha ), }$$

and both y(α), y(β) ≥ 0, we have y(t) ≥ 0. By Theorem 5.90, G(t, s) ≥ 0, and, thus, we have

$$\displaystyle{ w(t) = y(t) +\int _{ \alpha }^{\beta }G(t,s)f(s)Ds \geq 0, }$$

for \(t \in \mathbb{T}_{\alpha }^{\beta }.\) □ 

5.13 Exercises

5.1. Show that the points in \(\mathbb{T}_{\alpha }\) satisfy

$$\displaystyle{\cdots <\rho ^{2}(\alpha ) <\rho (\alpha ) <\alpha <\sigma (\alpha ) <\sigma ^{2}(\alpha ) <\cdots \,.}$$

5.2. Prove part (ii) of Theorem 5.4.

5.3. Prove parts (ii) and (iii) of Theorem 5.6.

5.4. Prove part (iv) of Theorem 5.8.

5.5. Assume \(f: \mathbb{T}_{\alpha } \rightarrow \mathbb{R}\). Show that

$$\displaystyle{D^{2}f(t) = \frac{f(\sigma ^{2}(t))\mu (t) - f(\sigma (t))[\mu (t) +\mu (\sigma (t))] + f(t)\mu (\sigma (t))} {[\mu (t)]^{2}\mu (\sigma (t))}.}$$

5.6. Assume \(c,d \in \mathbb{T}_{\alpha }\) with c < d. Prove that if \(f: \mathbb{T}_{[c,d]} \rightarrow \mathbb{R}\) and Df(t) = 0 for \(t \in \mathbb{T}_{[c,\rho (d)]}\), then f(t) = C for all \(t \in \mathbb{T}_{[c,d]}\), where C is a constant.

5.7. Show that if \(n \in \mathbb{N}_{1}\) and a ≥ 1, then

$$\displaystyle{[n]_{a} =\sum _{ k=0}^{n-1}a^{k}.}$$

Then use this formula to prove parts (iii)–(v) of Theorem 5.22.

5.8. Prove part (ii) of Theorem 5.23.

5.9. Assume \(f: \mathbb{T}_{\alpha } \times \mathbb{T}_{\alpha } \rightarrow \mathbb{R}.\) Derive the Leibniz formula

$$\displaystyle{D\int _{a}^{t}f(t,s)Ds =\int _{ a}^{t}Df(t,s)Ds + f(\sigma (t),t)}$$

for \(t \in \mathbb{T}_{\alpha }.\)

5.10. Consider the mixed time scale where α = 2 and \(\sigma (t) = 3t + 2\) (so a = 3 and b = 2). Use the variation of constants formula in Theorem 5.38 to solve the IVP

$$\displaystyle\begin{array}{rcl} D^{2}y(t)& =& 2t - 4,\quad t \in \mathbb{T}_{ 2} {}\\ y(2)& =& 0,\;\;Dy(2) = 0. {}\\ \end{array}$$

5.11. Use the Leibniz formula in Exercise 5.9 to prove the Variation of Constants Theorem (Theorem 5.37).

5.12. Prove Theorem 5.43.

5.13. Assume \(\beta \in \mathbb{T}_{\sigma ^{2}(\alpha )}\), \(A,B,E,F \in \mathbb{R}\), and \(f: \mathbb{T}_{\alpha }^{\rho ^{2}(\beta ) } \rightarrow \mathbb{R}.\) Prove that the nonhomogeneous BVP

$$\displaystyle\begin{array}{rcl} & & -D^{2}y(t) = f(t),\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta )} {}\\ & & \ \ Ay(\alpha ) - BDy(\alpha ) = C_{1} {}\\ & & Ey(\beta ) + FDy(\beta ) = C_{2}, {}\\ \end{array}$$

where the constants C 1, C 2 are given, has a unique solution if and only if the corresponding homogeneous BVP

$$\displaystyle\begin{array}{rcl} & & -D^{2}y(t) = 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta )} {}\\ & & Ay(\alpha ) - BDy(\alpha ) = 0 {}\\ & & Ey(\beta ) + FDy(\beta ) = 0 {}\\ \end{array}$$

has only the trivial solution.

5.14. Show that for the BVP

$$\displaystyle\begin{array}{rcl} & & -D^{2}y(t) = 0,\quad t \in \mathbb{T}_{\alpha }^{\rho ^{2}(\beta )} {}\\ & & Dy(\alpha ) = 0 = Dy(\beta ), {}\\ \end{array}$$

the γ in Theorem 5.87 satisfies γ = 0. Then show that the given BVP has infinitely many solutions.

5.15. Prove parts (ii)–(iv) of Theorem 5.92.

5.16. Use Theorem 5.92 to prove directly that the function

$$\displaystyle{y(t):=\int _{ \alpha }^{\beta }G(t,s)f(s)Ds,}$$

for \(t \in \mathbb{T}_{\alpha }^{\beta }\), where G(t, s) is the Green’s function for the BVP (5.7), (5.8), solves the BVP (5.5), (5.6).