
3.1 Introduction

As mentioned in the previous chapter and demonstrated on numerous occasions, a disadvantage of the discrete delta fractional calculus is the shifting of domains as one passes from the domain of a function to the domain of its delta fractional difference. As noted by Atici and Eloe, this problem is not as pronounced for the fractional nabla difference. In this chapter we study the discrete fractional nabla calculus. We then define the corresponding nabla Laplace transform, motivated by the general definition of the delta Laplace transform first given by Bohner and Peterson [62], and derive several of its properties. Fractional nabla Taylor monomials are defined and formulas for their nabla Laplace transforms are presented. Then the discrete nabla version of the Mittag–Leffler function and its nabla Laplace transform are obtained. Finally, a variation of constants formula for an initial value problem for a ν-th, 0 < ν < 1, order nabla fractional difference equation is given along with some applications. Much of the work in this chapter comes from the results in Hein et al. [119], Holm [123–125], Brackins [64], Ahrendt et al. [3, 4], and Baoguo et al. [49, 52].

3.2 Preliminary Definitions

We first introduce some notation and state elementary results concerning the nabla calculus, which we will use in this chapter. As in Chaps. 1 and 2, for \(a \in \mathbb{R}\) the sets \(\mathbb{N}_{a}\) and \(\mathbb{N}_{a}^{b}\), where \(b - a\) is a positive integer, are defined by

$$\displaystyle{\mathbb{N}_{a}:=\{ a,a + 1,a + 2,\ldots \},\quad \mathbb{N}_{a}^{b}:=\{ a,a + 1,a + 2,\ldots,b\}.}$$

For an arbitrary function \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) we define the nabla operator (backward difference operator), ∇, by

$$\displaystyle{(\nabla f)(t):= f(t) - f(t - 1),\quad t \in \mathbb{N}_{a+1}.}$$

For convenience, we adopt the convention that ∇f(t): = (∇f)(t). Sometimes it is useful to use the relation

$$\displaystyle{ \nabla f(t) = \Delta f(t - 1) }$$
(3.1)

to get results for the nabla calculus from the delta calculus and vice versa. Since many readers will be interested only in the nabla calculus, we want this chapter to be self-contained. So we will not use the formula (3.1) in this chapter. The operator ∇n is defined recursively by \(\nabla ^{n}f(t):= \nabla \big(\nabla ^{n-1}f(t)\big)\) for \(t \in \mathbb{N}_{a+n}\), \(n \in \mathbb{N}_{1}\), where ∇0 is the identity operator defined by ∇0 f(t) = f(t). We define the backward jump operator, \(\rho: \mathbb{N}_{a+1} \rightarrow \mathbb{N}_{a}\), by

$$\displaystyle{\rho (t) = t - 1.}$$

Also we let \(f^{\rho }\) denote the composition function \(f\circ \rho \). It is easy (Exercise 3.1) to see that if \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and ∇f(t) = 0 for \(t \in \mathbb{N}_{a+1},\) then

$$\displaystyle{f(t) = C,\quad t \in \mathbb{N}_{a},\quad \mbox{ where $C$ is a constant.}}$$
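The backward difference is easy to experiment with numerically. Below is a minimal Python sketch (the helper name `nabla` is ours, not from the text) of the operator, together with a spot check that a function with vanishing nabla difference takes a constant value.

```python
def nabla(f):
    """Backward difference: (nabla f)(t) = f(t) - f(t - 1), for t >= a + 1."""
    return lambda t: f(t) - f(t - 1)

# Example: f(t) = t^2, so (nabla f)(t) = t^2 - (t - 1)^2 = 2t - 1.
f = lambda t: t ** 2
df = nabla(f)
print(df(5))                                 # 9

# A function with vanishing nabla difference is constant: here f(t) = 7.
g = lambda t: 7
print([nabla(g)(t) for t in range(1, 5)])    # [0, 0, 0, 0]
```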

The following theorem gives several properties of the nabla difference operator.

Theorem 3.1.

Assume \(f,g: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and \(\alpha,\beta \in \mathbb{R}\) . Then for \(t \in \mathbb{N}_{a+1},\)

  1. (i)

    ∇α = 0;

  2. (ii)

    ∇αf(t) = α∇f(t);

  3. (iii)

    \(\nabla \left (f(t) + g(t)\right ) = \nabla f(t) + \nabla g(t);\)

  4. (iv)

    if α ≠ 0, then \(\nabla \alpha ^{t+\beta } = \frac{\alpha -1} {\alpha } \alpha ^{t+\beta };\)

  5. (v)

    \(\nabla \left (f(t)g(t)\right ) = f(\rho (t))\nabla g(t) + \nabla f(t)g(t);\)

  6. (vi)

    \(\nabla \left (\frac{f(t)} {g(t)} \right ) = \frac{g(t)\nabla f(t)-f(t)\nabla g(t)} {g(t)g(\rho (t))},\quad \mbox{ if}\;\;g(t)\neq 0,\quad t \in \mathbb{N}_{a+1}\) .

Proof.

We will just prove (iv) and (v) and leave the proof of the other parts to the reader. To see that (iv) holds assume that α ≠ 0 and note that

$$\displaystyle\begin{array}{rcl} \nabla \alpha ^{t+\beta }& =& \alpha ^{t+\beta } -\alpha ^{t-1+\beta } {}\\ & =& [\alpha -1]\alpha ^{t-1+\beta } {}\\ & =& \frac{\alpha -1} {\alpha } \alpha ^{t+\beta }. {}\\ \end{array}$$

Next we prove the product rule (v). For \(t \in \mathbb{N}_{a+1}\), consider

$$\displaystyle\begin{array}{rcl} \nabla [f(t)g(t)]& =& f(t)g(t) - f(t - 1)g(t - 1) {}\\ & =& f(t - 1)[g(t) - g(t - 1)] + [f(t) - f(t - 1)]g(t) {}\\ & =& f(\rho (t))\nabla g(t) + \nabla f(t)g(t), {}\\ \end{array}$$

which is the desired result. □ 

Next we define the rising function.

Definition 3.2.

Assume n is a positive integer and \(t \in \mathbb{R}\). Then we define the rising function, \(t^{\overline{n}},\) read “t to the n rising,” by

$$\displaystyle{t^{\overline{n}}:= t(t + 1)\cdots (t + n - 1).}$$

Readers familiar with the Pochhammer function may recognize this notation in its alternative form, \((k)_{n}\). See Knuth [139].
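For concreteness, here is a direct Python sketch of the rising function for a positive integer n (the helper name `rising` is ours):

```python
def rising(t, n):
    """t to the n rising: t * (t + 1) * ... * (t + n - 1)."""
    result = 1
    for k in range(n):
        result *= t + k
    return result

print(rising(3, 4))   # 3 * 4 * 5 * 6 = 360
print(rising(1, 5))   # 1 * 2 * 3 * 4 * 5 = 120, i.e. 5!
```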

The rising function is defined this way so that the following power rule holds.

Theorem 3.3 (Nabla Power Rule).

For \(n \in \mathbb{N}_{1}\) , \(\alpha \in \mathbb{R}\) ,

$$\displaystyle{\nabla (t+\alpha )^{\overline{n}} = n\;(t+\alpha )^{\overline{n - 1}},}$$

for \(t \in \mathbb{R}\) .

Proof.

We simply write

$$\displaystyle\begin{array}{rcl} \nabla (t+\alpha )^{\overline{n}}& =& (t+\alpha )^{\overline{n}} - (t - 1+\alpha )^{\overline{n}} {}\\ & =& [(t+\alpha )(t +\alpha +1)\cdots (t +\alpha +n - 1)] {}\\ & \quad -& [(t +\alpha -1)(t+\alpha )\cdots (t +\alpha +n - 2)] {}\\ & =& (t+\alpha )(t +\alpha +1)\cdots (t +\alpha +n - 2) {}\\ & \quad \cdot & [(t +\alpha +n - 1) - (t +\alpha -1)] {}\\ & =& n\;(t+\alpha )^{\overline{n - 1}}. {}\\ \end{array}$$

This completes the proof. □ 

Note that for \(n \in \mathbb{N}_{1}\),

$$\displaystyle\begin{array}{rcl} t^{\overline{n}}&:=& t(t + 1)\cdots (t + n - 1) {}\\ & =& (t + n - 1)(t + n - 2)\cdots (t + 1) \cdot t {}\\ & =& \frac{(t + n - 1)(t + n - 2)\cdots t \cdot \Gamma (t)} {\Gamma (t)} {}\\ & =& \frac{\Gamma (t + n)} {\Gamma (t)},\quad t\notin \{0,-1,-2,\cdots \,\}, {}\\ \end{array}$$

where \(\Gamma \) is the gamma function (Definition 1.6). Motivated by this we next define the (generalized) rising function as follows.

Definition 3.4.

The (generalized) rising function is defined by

$$\displaystyle{ t^{\overline{r}} = \frac{\Gamma (t + r)} {\Gamma (t)}, }$$
(3.2)

for those values of t and r so that the right-hand side of equation (3.2) is sensible. Also, we use the convention that if t is a nonpositive integer, but t + r is not a nonpositive integer, then \(t^{\overline{r}}:= 0\).
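The gamma-quotient form (3.2), together with the stated convention, can be coded as follows (a sketch with names of our choosing; values of t and r for which neither side of (3.2) is sensible are left out):

```python
from math import gamma

def rising(t, r):
    """t^{overline{r}} = Gamma(t + r) / Gamma(t), with the convention that
    the value is 0 when t is a nonpositive integer but t + r is not."""
    t_pole = t == int(t) and t <= 0              # Gamma has a pole at t
    tr_pole = (t + r) == int(t + r) and (t + r) <= 0
    if t_pole and not tr_pole:
        return 0.0
    return gamma(t + r) / gamma(t)

print(rising(3, 4))      # 720 / 2 = 360.0
print(rising(-2, 3.5))   # 0.0, by the convention
```

Note that the guard matters in practice: `math.gamma` raises an error at nonpositive integers, exactly the points the convention handles.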

We then get the following generalized power rules.

Theorem 3.5 (Generalized Nabla Power Rules).

The formulas

$$\displaystyle{ \nabla (t+\alpha )^{\overline{r}} = r\;(t+\alpha )^{\overline{r - 1}}, }$$
(3.3)

and

$$\displaystyle{ \nabla (\alpha -t)^{\overline{r}} = -r(\alpha -\rho (t))^{\overline{r - 1}}, }$$
(3.4)

hold for those values of t, r, and α so that the expressions in equations (3.3) and (3.4) are sensible. In particular, \(t^{\overline{0}} = 1,\) \(t\neq 0,-1,-2,\cdots \) .

Proof.

Consider that

$$\displaystyle\begin{array}{rcl} \nabla (t+\alpha )^{\overline{r}}& =& (t+\alpha )^{\overline{r}} - (t - 1+\alpha )^{\overline{r}} {}\\ & =& \frac{\Gamma (t +\alpha +r)} {\Gamma (t+\alpha )} -\frac{\Gamma (t +\alpha +r - 1)} {\Gamma (t +\alpha -1)} {}\\ & =& [(t +\alpha +r - 1) - (t +\alpha -1)]\frac{\Gamma (t +\alpha +r - 1)} {\Gamma (t+\alpha )} {}\\ & =& r\frac{\Gamma (t +\alpha +r - 1)} {\Gamma (t+\alpha )} {}\\ & =& r(t+\alpha )^{\overline{r - 1}}. {}\\ \end{array}$$

Hence, (3.3) holds. Next we prove (3.4). To see this, note that

$$\displaystyle\begin{array}{rcl} \nabla (\alpha -t)^{\overline{r}}& =& (\alpha -t)^{\overline{r}} - (\alpha -t + 1)^{\overline{r}} {}\\ & =& \frac{\Gamma (\alpha -t + r)} {\Gamma (\alpha -t)} -\frac{\Gamma (\alpha -t + 1 + r)} {\Gamma (\alpha -t + 1)} {}\\ & =& [(\alpha -t) - (\alpha -t + r)]\frac{\Gamma (\alpha -t + r)} {\Gamma (\alpha -t + 1)} {}\\ & =& -\,r\ \frac{\Gamma (\alpha -t + r)} {\Gamma (\alpha -t + 1)} {}\\ & =& -\,r\ \frac{\Gamma (\alpha -\rho (t) + r - 1)} {\Gamma (\alpha -\rho (t))} {}\\ & =& -\,r(\alpha -\rho (t))^{\overline{r - 1}}. {}\\ \end{array}$$

This completes the proof. □ 
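Both power rules can be spot-checked numerically. The sketch below evaluates each side of (3.3) and (3.4) at sample points chosen so that all the gamma quotients are defined (the helper `rising` is our own name for the generalized rising function):

```python
from math import gamma, isclose

def rising(t, r):
    """Generalized rising function Gamma(t + r) / Gamma(t)."""
    return gamma(t + r) / gamma(t)

t, alpha, r = 5, 0.3, 2.5

# (3.3): nabla (t + alpha)^{r} = r (t + alpha)^{r - 1}
lhs = rising(t + alpha, r) - rising(t - 1 + alpha, r)
rhs = r * rising(t + alpha, r - 1)
print(isclose(lhs, rhs))          # True

# (3.4): nabla (alpha - t)^{r} = -r (alpha - rho(t))^{r - 1}, rho(t) = t - 1
beta = 20                          # plays the role of alpha in (3.4)
lhs = rising(beta - t, r) - rising(beta - t + 1, r)
rhs = -r * rising(beta - t + 1, r - 1)
print(isclose(lhs, rhs))          # True
```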

3.3 Nabla Exponential Function

In this section we study the nabla exponential function, which plays a role in the nabla calculus similar to the one the exponential function \(e^{pt}\) plays in the continuous calculus. Motivated by the fact that when p is a constant, \(x(t) = e^{pt}\) is the unique solution of the initial value problem

$$\displaystyle{x' = px,\quad x(0) = 1,}$$

we define the nabla exponential function, \(E_{p}(t,s)\), based at \(s \in \mathbb{N}_{a}\), where the function p is in the set of (nabla) regressive functions

$$\displaystyle{\mathcal{R}:=\{ p: \mathbb{N}_{a+1} \rightarrow \mathbb{R}: \quad 1 - p(t)\neq 0,\quad \mbox{ for}\quad t \in \mathbb{N}_{a+1}\},}$$

to be the unique solution of the initial value problem

$$\displaystyle{ \nabla y(t) = p(t)y(t),\quad t \in \mathbb{N}_{a+1} }$$
(3.5)
$$\displaystyle{ y(s) = 1. }$$
(3.6)

After reading the proof of the next theorem one sees why this IVP has a unique solution. In the next theorem we give a formula for the exponential function \(E_{p}(t,s)\).

Theorem 3.6.

Assume \(p \in \mathcal{R}\) and \(s \in \mathbb{N}_{a}\) . Then

$$\displaystyle{ E_{p}(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} \prod _{\tau =s+1}^{t} \frac{1} {1-p(\tau )}\text{, } \quad &t \in \mathbb{N}_{s} \\ \prod _{\tau =t+1}^{s}[1 - p(\tau )]\text{, }\quad &t \in \mathbb{N}_{a}^{s-1}. \end{array} \right. }$$
(3.7)

Here it is understood that \(\prod _{\tau =t+1}^{t}h(\tau ) = 1\) for any function h.

Proof.

First we find a formula for E p (t, s) for t ≥ s + 1 by solving the IVP (3.5), (3.6) by iteration. Solving the nabla difference equation (3.5) for y(t) we obtain

$$\displaystyle\begin{array}{rcl} y(t) = \frac{1} {1 - p(t)}y(t - 1),\quad t \in \mathbb{N}_{a+1}.& &{}\end{array}$$
(3.8)

Letting \(t = s + 1\) in (3.8) we get

$$\displaystyle{y(s + 1) = \frac{1} {1 - p(s + 1)}y(s) = \frac{1} {1 - p(s + 1)}.}$$

Then letting \(t = s + 2\) in (3.8) we obtain

$$\displaystyle{y(s + 2) = \frac{1} {1 - p(s + 2)}y(s + 1) = \frac{1} {\left [1 - p(s + 1)\right ]\left [1 - p(s + 2)\right ]}.}$$

Proceeding in this manner we get by mathematical induction that

$$\displaystyle{E_{p}(t,s) =\prod _{ \tau =s+1}^{t} \frac{1} {1 - p(\tau )},}$$

for \(t \in \mathbb{N}_{s+1}\). By our convention on products we get

$$\displaystyle{E_{p}(s,s) =\prod _{ \tau =s+1}^{s}[1 - p(\tau )] = 1}$$

as desired. Now assume a ≤ t < s. Solving the nabla difference equation (3.5) for y(t − 1) we obtain

$$\displaystyle\begin{array}{rcl} y(t - 1) = [1 - p(t)]y(t),\quad t \in \mathbb{N}_{a+1}.& &{}\end{array}$$
(3.9)

Letting t = s in (3.9) we get

$$\displaystyle{y(s - 1) = [1 - p(s)]y(s) = [1 - p(s)].}$$

If s − 2 ≥ a, we obtain by letting \(t = s - 1\) in (3.9)

$$\displaystyle{y(s - 2) = [1 - p(s - 1)]y(s - 1) = \left [1 - p(s)\right ]\left [1 - p(s - 1)\right ].}$$

By mathematical induction we arrive at

$$\displaystyle{E_{p}(t,s) =\prod _{ \tau =t+1}^{s}[1 - p(\tau )],\quad \mbox{ for}\quad t \in \mathbb{N}_{ a}^{s}.}$$

Hence, E p (t, s) is given by (3.7). □ 

Theorem 3.6 gives us the following example.

Example 3.7.

If \(s \in \mathbb{N}_{a}\) and \(p(t) \equiv p_{0}\), where \(p_{0}\neq 1\) is a constant, then

$$\displaystyle{E_{p}(t,s) = (1 - p_{0})^{s-t},\quad t \in \mathbb{N}_{ a}.}$$
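Formula (3.7) translates directly into code. The sketch below (the helper `E` and its signature are our own, not from the text) implements both branches of (3.7) and confirms Example 3.7 for a constant regressive function:

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7); p maps integers to reals."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

# Example 3.7: constant p(t) = p0 gives E_p(t, s) = (1 - p0)^(s - t).
p0 = 0.5
print(all(abs(E(lambda u: p0, t, 0) - (1 - p0) ** (-t)) < 1e-12
          for t in range(-3, 4)))   # True
```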

We now set out to prove properties of the exponential function \(E_{p}(t,s)\). To motivate some of these properties, consider, for \(p,q \in \mathcal{R}\), the product

$$\displaystyle\begin{array}{rcl} & & E_{p}(t,a)E_{q}(t,a) =\prod _{ \tau =a+1}^{t} \frac{1} {1 - p(\tau )}\prod _{\tau =a+1}^{t} \frac{1} {1 - q(\tau )} {}\\ & & \quad \quad \quad =\prod _{ \tau =a+1}^{t} \frac{1} {\left [1 - p(\tau )\right ]\left [1 - q(\tau )\right ]} {}\\ & & \quad \quad \quad =\prod _{ \tau =a+1}^{t} \frac{1} {1 -\left [p(\tau ) + q(\tau ) - p(\tau )q(\tau )\right ]} {}\\ & & \quad \quad \quad =\prod _{ \tau =a+1}^{t} \frac{1} {1 - (p \boxplus q)(\tau )}\quad \mbox{ if $(p \boxplus q)(t):= p(t) + q(t) - p(t)q(t)$} {}\\ & & \quad \quad \quad = E_{p\boxplus q}(t,a) {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a}\).

Hence, we deduce that the nabla exponential function satisfies the law of exponents

$$\displaystyle{E_{p}(t,a)E_{q}(t,a) = E_{p\boxplus q}(t,a),\quad t \in \mathbb{N}_{a},}$$

if we define the box plus addition \(\boxplus \) on \(\mathcal{R}\) by

$$\displaystyle{(p \boxplus q)(t):= p(t) + q(t) - p(t)q(t),\quad t \in \mathbb{N}_{a+1}.}$$

We now give an important result concerning the box plus addition \(\boxplus \).

Theorem 3.8.

If we define the box plus addition, \(\boxplus \) , on \(\mathcal{R}\) by

$$\displaystyle{p \boxplus q:= p + q - pq,}$$

then \((\mathcal{R},\boxplus )\) is an Abelian group.

Proof.

First, to see that the closure property is satisfied, note that if \(p,q \in \mathcal{R}\), then 1 − p(t) ≠ 0 and 1 − q(t) ≠ 0 for \(t \in \mathbb{N}_{a+1}\). It follows that

$$\displaystyle{ 1 - (p \boxplus q)(t) = 1 -\left [p(t) + q(t) - p(t)q(t)\right ] = (1 - p(t))(1 - q(t))\neq 0, }$$

for \(t \in \mathbb{N}_{a+1}\), and hence the function \(p \boxplus q \in \mathcal{R}\).

Next, notice that the zero function, 0, is in \(\mathcal{R}\), since the regressivity condition \(1 - 0 = 1\neq 0\) holds. Also

$$\displaystyle{0 \boxplus p = 0 + p - 0 \cdot p = p,\quad \mbox{ for all $p \in \mathcal{R}$},}$$

so the zero function 0 is the identity element in \(\mathcal{R}\).

We now show that every element in \(\mathcal{R}\) has an additive inverse. Let \(p \in \mathcal{R}\), set \(q = \frac{-p} {1-p}\), and note that since

$$\displaystyle{1 - q(t) = 1 - \frac{-p(t)} {1 - p(t)} = \frac{1} {1 - p(t)}\neq 0,\quad t \in \mathbb{N}_{a+1}}$$

we have that \(q \in \mathcal{R},\) and we also have that

$$\displaystyle{p \boxplus q = p \boxplus \frac{-p} {1 - p} = p + \frac{-p} {1 - p} - \frac{-p^{2}} {1 - p} = 0,}$$

so q is the additive inverse of p. For \(p \in \mathcal{R},\) we use the following notation for the additive inverse of p:

$$\displaystyle{ \boxminus p:= \frac{-p} {1 - p}. }$$
(3.10)

The fact that the addition \(\boxplus \) is associative and commutative is Exercise 3.4. □ 
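The closure identity behind Theorem 3.8 is exactly the law of exponents derived above, and the inverse formula (3.10) can be checked pointwise. A numerical sketch (helper names `E`, `boxplus`, and `boxminus_of` are ours):

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7)."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

boxplus = lambda p, q: (lambda t: p(t) + q(t) - p(t) * q(t))
boxminus_of = lambda p: (lambda t: -p(t) / (1 - p(t)))   # inverse, per (3.10)

p = lambda t: 0.05 * t      # regressive on the sample range: 1 - p(t) != 0
q = lambda t: 0.2

# Law of exponents: E_p E_q = E_{p boxplus q}
print(all(abs(E(p, t, 0) * E(q, t, 0) - E(boxplus(p, q), t, 0)) < 1e-9
          for t in range(0, 10)))                                      # True
# p boxplus (boxminus p) is the zero function
print(all(abs(boxplus(p, boxminus_of(p))(t)) < 1e-12
          for t in range(1, 10)))                                      # True
```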

We can now define box minus subtraction, \(\boxminus,\) on \(\mathcal{R}\) in a standard manner as follows.

Definition 3.9.

We define box minus subtraction on \(\mathcal{R}\) by

$$\displaystyle{p \boxminus q:= p \boxplus [\boxminus q].}$$

By Exercise 3.5 we have that if \(p,q \in \mathcal{R}\), then

$$\displaystyle{(p \boxminus q)(t) = \frac{p(t) - q(t)} {1 - q(t)},\quad t \in \mathbb{N}_{a+1}.}$$

In addition, we define the set of (nabla) positively regressive functions, \(\mathcal{R}^{+},\) by

$$\displaystyle{\mathcal{R}^{+} =\{ p: \mathbb{N}_{a+1} \rightarrow \mathbb{R},\quad \mbox{ such that}\quad 1 - p(t) > 0\quad \mbox{ for}\quad t \in \mathbb{N}_{a+1}\}.}$$

The proof of the following theorem is left as an exercise (see Exercise 3.8).

Theorem 3.10.

The set of positively regressive functions, \(\mathcal{R}^{+}\) , with the addition \(\boxplus \) , is a subgroup of \(\mathcal{R}\) .

In the next theorem we give several properties of the exponential function \(E_{p}(t,s)\).

Theorem 3.11.

Assume \(p,q \in \mathcal{R}\) and \(s,r \in \mathbb{N}_{a}\) . Then

  1. (i)

    E 0 (t,s) = 1, \(t \in \mathbb{N}_{a};\)

  2. (ii)

    E p (t,s) ≠ 0, \(t \in \mathbb{N}_{a};\)

  3. (iii)

    if \(p \in \mathcal{R}^{+}\) , then E p (t,s) > 0, \(t \in \mathbb{N}_{a};\)

  4. (iv)

    ∇E p (t,s) = p(t)E p (t,s), \(t \in \mathbb{N}_{a+1},\) and E p (t,t) = 1, \(t \in \mathbb{N}_{a};\)

  5. (v)

    \(E_{p}(\rho (t),s) = [1 - p(t)]E_{p}(t,s)\) , \(t \in \mathbb{N}_{a+1};\)

  6. (vi)

    E p (t,s)E p (s,r) = E p (t,r), \(t \in \mathbb{N}_{a};\)

  7. (vii)

    \(E_{p}(t,s)E_{q}(t,s) = E_{p\boxplus q}(t,s),\) \(t \in \mathbb{N}_{a};\)

  8. (viii)

    \(E_{\boxminus p}(t,s) = \frac{1} {E_{p}(t,s)},\) \(t \in \mathbb{N}_{a};\)

  9. (ix)

    \(\frac{E_{p}(t,s)} {E_{q}(t,s)} = E_{p\boxminus q}(t,s),\) \(t \in \mathbb{N}_{a}\) .

Proof.

Using Example 3.7, we have that

$$\displaystyle{E_{0}(t,s) = (1 - 0)^{s-t} = 1}$$

and thus (i) holds.

To see that (ii) holds, note that since \(p \in \mathcal{R}\), it follows that 1 − p(t) ≠ 0, and hence we have that for \(t \in \mathbb{N}_{s}\)

$$\displaystyle{E_{p}(t,s) =\prod _{ \tau =s+1}^{t} \frac{1} {1 - p(\tau )}\neq 0}$$

and for \(t \in \mathbb{N}_{a}^{s-1}\)

$$\displaystyle{E_{p}(t,s) =\prod _{ \tau =t+1}^{s}[1 - p(\tau )]\neq 0.}$$

Hence, (ii) holds. The proof of (iii) is similar to the proof of (ii), whereas property (iv) follows from the definition of E p (t, s).

Since, for \(t \in \mathbb{N}_{s},\)

$$\displaystyle\begin{array}{rcl} E_{p}(\rho (t),s)& =& \prod _{\tau =s+1}^{t-1} \frac{1} {1 - p(\tau )} {}\\ & =& [1 - p(t)]\prod _{\tau =s+1}^{t} \frac{1} {1 - p(\tau )} {}\\ & =& [1 - p(t)]E_{p}(t,s) {}\\ \end{array}$$

we have that (v) holds for \(t \in \mathbb{N}_{s+1}\). Next assume \(t \in \mathbb{N}_{a+1}^{s-1}\). Then

$$\displaystyle\begin{array}{rcl} E_{p}(\rho (t),s)& =& \prod _{\tau =\rho (t)+1}^{s}\left [1 - p(\tau )\right ] {}\\ & =& \prod _{\tau =t}^{s}\left [1 - p(\tau )\right ] {}\\ & =& [1 - p(t)]\prod _{\tau =t+1}^{s}\left [1 - p(\tau )\right ] {}\\ & =& [1 - p(t)]E_{p}(t,s). {}\\ \end{array}$$

Hence, (v) holds for \(t \in \mathbb{N}_{a+1}^{s-1}\). It is easy to check that \(E_{p}(\rho (s),s) = \left [1 - p(s)\right ]E_{p}(s,s)\). This completes the proof of (v).

We will just show that (vi) holds when s ≥ r ≥ a. First consider the case \(t \in \mathbb{N}_{s}\). In this case

$$\displaystyle\begin{array}{rcl} E_{p}(t,s)E_{p}(s,r)& =& \prod _{\tau =s+1}^{t} \frac{1} {1 - p(\tau )}\prod _{\tau =r+1}^{s} \frac{1} {1 - p(\tau )} {}\\ & =& \prod _{\tau =r+1}^{t} \frac{1} {1 - p(\tau )} {}\\ & =& E_{p}(t,r). {}\\ \end{array}$$

Next, consider the case \(t \in \mathbb{N}_{r}^{s-1}\). Then

$$\displaystyle\begin{array}{rcl} E_{p}(t,s)E_{p}(s,r)& =& \prod _{\tau =t+1}^{s}\left [1 - p(\tau )\right ]\prod _{\tau =r+1}^{s} \frac{1} {1 - p(\tau )} {}\\ & =& \prod _{\tau =r+1}^{t} \frac{1} {1 - p(\tau )} {}\\ & =& E_{p}(t,r). {}\\ \end{array}$$

Finally, consider the case \(t \in \mathbb{N}_{a}^{r-1}\). Then

$$\displaystyle\begin{array}{rcl} E_{p}(t,s)E_{p}(s,r)& =& \prod _{\tau =t+1}^{s}\left [1 - p(\tau )\right ]\prod _{\tau =r+1}^{s} \frac{1} {1 - p(\tau )} {}\\ & =& \prod _{\tau =t+1}^{r}\left [1 - p(\tau )\right ] {}\\ & =& E_{p}(t,r). {}\\ \end{array}$$

This completes the proof of (vi) for the special case s ≥ r ≥ a. The case a ≤ s ≤ r is left to the reader (Exercise 3.9). The proof of the law of exponents (vii) is Exercise 3.10. To see that (viii) holds, note that for \(t \in \mathbb{N}_{s}\)

$$\displaystyle\begin{array}{rcl} E_{\boxminus p}(t,s)& =& \prod _{\tau =s+1}^{t} \frac{1} {1 - (\boxminus p)(\tau )} {}\\ & =& \prod _{\tau =s+1}^{t}\left [1 - p(\tau )\right ] {}\\ & =& \frac{1} {E_{p}(t,s)}. {}\\ \end{array}$$

Also, if \(t \in \mathbb{N}_{a}^{s-1}\)

$$\displaystyle\begin{array}{rcl} E_{\boxminus p}(t,s)& =& \prod _{\tau =t+1}^{s}\left [1 - (\boxminus p)(\tau )\right ] {}\\ & =& \prod _{\tau =t+1}^{s} \frac{1} {1 - p(\tau )} {}\\ & =& \frac{1} {E_{p}(t,s)}. {}\\ \end{array}$$

Hence (viii) holds for \(t \in \mathbb{N}_{a}\). Finally, using (viii) and then (vii), we have that

$$\displaystyle{ \frac{E_{p}(t,s)} {E_{q}(t,s)} = E_{p}(t,s)E_{\boxminus q}(t,s) = E_{p\boxplus [\boxminus q]}(t,s) = E_{p\boxminus q}(t,s), }$$

from which it follows that (ix) holds. □ 
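Several parts of Theorem 3.11 are easy to verify numerically; the sketch below spot-checks (v) and (viii) for a nonconstant regressive function (all helper names are ours):

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7)."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

p = lambda t: 0.05 * t + 0.1   # regressive on the sample range
s = 3

# (v): E_p(rho(t), s) = [1 - p(t)] E_p(t, s)
print(all(abs(E(p, t - 1, s) - (1 - p(t)) * E(p, t, s)) < 1e-9
          for t in range(1, 12)))                                  # True

# (viii): E_{boxminus p}(t, s) = 1 / E_p(t, s)
bm = lambda t: -p(t) / (1 - p(t))
print(all(abs(E(bm, t, s) - 1 / E(p, t, s)) < 1e-9
          for t in range(0, 12)))                                  # True
```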

Next we define the scalar box dot multiplication, \(\boxdot \).

Definition 3.12.

For \(\alpha \in \mathbb{R},\) \(p \in \mathcal{R}^{+}\) the scalar box dot multiplication, \(\alpha \boxdot p\), is defined by

$$\displaystyle{\alpha \boxdot p = 1 - (1 - p)^{\alpha }.}$$

It follows that for \(\alpha \in \mathbb{R},\) \(p \in \mathcal{R}^{+}\)

$$\displaystyle\begin{array}{rcl} 1 - (\alpha \boxdot p)(t)& =& 1 -\left \{1 -\left [1 - p(t)\right ]^{\alpha }\right \} {}\\ & =& \left [1 - p(t)\right ]^{\alpha } > 0 {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). Hence \(\alpha \boxdot p \in \mathcal{R}^{+}\).

Now we can prove the following law of exponents.

Theorem 3.13.

If \(\alpha \in \mathbb{R}\) and \(p \in \mathcal{R}^{+}\) , then

$$\displaystyle{E_{p}^{\alpha }(t,a) = E_{\alpha \boxdot p}(t,a)}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

Consider that, for \(t \in \mathbb{N}_{a}\),

$$\displaystyle\begin{array}{rcl} E_{p}^{\alpha }(t,a)& =& \left [\prod _{\tau =a+1}^{t} \frac{1} {1 - p(\tau )}\right ]^{\alpha } {}\\ & =& \prod _{\tau =a+1}^{t} \frac{1} {[1 - p(\tau )]^{\alpha }} {}\\ & =& \prod _{\tau =a+1}^{t} \frac{1} {1 - [1 - (1 - p(\tau ))^{\alpha }]} {}\\ & =& \prod _{\tau =a+1}^{t} \frac{1} {1 - [\alpha \boxdot p](\tau )} {}\\ & =& E_{\alpha \boxdot p}(t,a). {}\\ \end{array}$$

This completes the proof. □ 
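This law of exponents is also straightforward to confirm numerically (a sketch; helper names are ours):

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7)."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

boxdot = lambda alpha, p: (lambda t: 1 - (1 - p(t)) ** alpha)

p = lambda t: 0.3            # positively regressive: 1 - p(t) > 0
alpha = 2.5

# Theorem 3.13: E_p^alpha(t, a) = E_{alpha boxdot p}(t, a)
print(all(abs(E(p, t, 0) ** alpha - E(boxdot(alpha, p), t, 0)) < 1e-9
          for t in range(0, 8)))    # True
```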

Theorem 3.14.

The set of positively regressive functions \(\mathcal{R}^{+}\) , with the addition \(\boxplus \) and the scalar multiplication \(\boxdot \) , is a vector space.

Proof.

From Theorem 3.10 we know that \(\mathcal{R}^{+}\) with the addition \(\boxplus \) is an Abelian group. The four remaining nontrivial properties of a vector space are the following:

  1. (i)

    \(1 \boxdot p = p;\)

  2. (ii)

    \(\alpha \boxdot (p \boxplus q) = (\alpha \boxdot p) \boxplus (\alpha \boxdot q);\)

  3. (iii)

    \(\alpha \boxdot (\beta \boxdot p) = (\alpha \beta ) \boxdot p;\)

  4. (iv)

    \((\alpha +\beta ) \boxdot p = (\alpha \boxdot p) \boxplus (\beta \boxdot p),\)

where \(\alpha,\beta \in \mathbb{R}\) and \(p,q \in \mathcal{R}^{+}\). We will prove properties (i)–(iii) and leave property (iv) as an exercise (Exercise 3.12).

Property (i) follows immediately from the following:

$$\displaystyle{1 \boxdot p = 1 - (1 - p)^{1} = p.}$$

To prove (ii) consider

$$\displaystyle\begin{array}{rcl} & & (\alpha \boxdot p) \boxplus (\alpha \boxdot q) {}\\ & & \qquad \qquad =\alpha \boxdot p +\alpha \boxdot q - (\alpha \boxdot p)(\alpha \boxdot q) {}\\ & & \qquad \qquad = [1 - (1 - p)^{\alpha }] + [1 - (1 - q)^{\alpha }] - [1 - (1 - p)^{\alpha }][1 - (1 - q)^{\alpha }] {}\\ & & \qquad \qquad = 1 - (1 - p)^{\alpha }(1 - q)^{\alpha } {}\\ & & \qquad \qquad = 1 - (1 - p - q + pq)^{\alpha } {}\\ & & \qquad \qquad = 1 - (1 - p \boxplus q)^{\alpha } {}\\ & & \qquad \qquad =\alpha \boxdot (p \boxplus q). {}\\ \end{array}$$

Hence, (ii) holds. Finally, consider

$$\displaystyle\begin{array}{rcl} \alpha \boxdot (\beta \boxdot p)& =& 1 - (1 -\beta \boxdot p)^{\alpha } {}\\ & =& 1 -\bigg [1 -\left [1 - (1 - p)^{\beta }\right ]\bigg]^{\alpha } {}\\ & =& 1 - (1 - p)^{\alpha \beta } {}\\ & =& (\alpha \beta ) \boxdot p. {}\\ \end{array}$$

Hence, property (iii) holds. □ 

3.4 Nabla Trigonometric Functions

In this section we introduce the discrete nabla hyperbolic sine and cosine functions and the discrete nabla sine and cosine functions, and we give some of their properties. First we define the nabla hyperbolic sine and cosine functions.

Definition 3.15.

Assume \(p,-p \in \mathcal{R}\). Then the generalized nabla hyperbolic sine and cosine functions are defined as follows:

$$\displaystyle{\mbox{ Cosh}_{p}(t,a):= \frac{E_{p}(t,a) + E_{-p}(t,a)} {2},\quad \mbox{ Sinh}_{p}(t,a):= \frac{E_{p}(t,a) - E_{-p}(t,a)} {2} }$$

for \(t \in \mathbb{N}_{a}\).

The following theorem gives various properties of the nabla hyperbolic sine and cosine functions.

Theorem 3.16.

Assume \(p,-p \in \mathcal{R}\) . Then

  1. (i)

    \(\mbox{ Cosh}_{p}(a,a) = 1,\quad \mbox{ Sinh}_{p}(a,a) = 0;\)

  2. (ii)

    \(\mbox{ Cosh}_{p}^{2}(t,a) -\mbox{ Sinh}_{p}^{2}(t,a) = E_{p^{2}}(t,a),\quad t \in \mathbb{N}_{a};\)

  3. (iii)

    \(\nabla \mbox{ Cosh}_{p}(t,a) = p(t)\;\mbox{ Sinh}_{p}(t,a),\quad t \in \mathbb{N}_{a+1};\)

  4. (iv)

    \(\nabla \mbox{ Sinh}_{p}(t,a) = p(t)\;\mbox{ Cosh}_{p}(t,a),\quad t \in \mathbb{N}_{a+1};\)

  5. (v)

    \(\mbox{ Cosh}_{-p}(t,a) = \mbox{ Cosh}_{p}(t,a),\quad t \in \mathbb{N}_{a};\)

  6. (vi)

    \(\mbox{ Sinh}_{-p}(t,a) = -\mbox{ Sinh}_{p}(t,a),\quad t \in \mathbb{N}_{a}\) .

Proof.

Parts (i), (v), (vi) follow immediately from the definitions of the nabla hyperbolic sine and cosine functions. To see that (ii) holds, note that

$$\displaystyle\begin{array}{rcl} & & \mathrm{Cosh}_{p}^{2}(t,a) -\mathrm{ Sinh}_{ p}^{2}(t,a) {}\\ & & = \frac{\left [E_{p}(t,a) + E_{-p}(t,a)\right ]^{2} -\left [E_{p}(t,a) - E_{-p}(t,a)\right ]^{2}} {4} {}\\ & & = E_{p}(t,a)E_{-p}(t,a) {}\\ & & = E_{p\boxplus (-p)}(t,a) {}\\ & & = E_{p^{2}}(t,a). {}\\ \end{array}$$

To see that (iii) holds, consider

$$\displaystyle\begin{array}{rcl} \nabla \mbox{ Cosh}_{p}(t,a)& =& \frac{1} {2}\nabla E_{p}(t,a) + \frac{1} {2}\nabla E_{-p}(t,a) {}\\ & =& \frac{1} {2}\;[pE_{p}(t,a) - pE_{-p}(t,a)] {}\\ & =& p\;\frac{E_{p}(t,a) - E_{-p}(t,a)} {2} {}\\ & =& p\;\mbox{ Sinh}_{p}(t,a). {}\\ \end{array}$$

The proof of (iv) is similar (Exercise 3.13). □ 
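Identity (ii) of Theorem 3.16 can be checked numerically for a constant p (a sketch built on our own helper `E` implementing (3.7)):

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7)."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

p0, a = 0.4, 0
Cosh = lambda t: (E(lambda u: p0, t, a) + E(lambda u: -p0, t, a)) / 2
Sinh = lambda t: (E(lambda u: p0, t, a) - E(lambda u: -p0, t, a)) / 2

# (ii): Cosh_p^2 - Sinh_p^2 = E_{p^2}
print(all(abs(Cosh(t) ** 2 - Sinh(t) ** 2 - E(lambda u: p0 ** 2, t, a)) < 1e-9
          for t in range(0, 10)))    # True
```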

Next, we define the nabla sine and cosine functions.

Definition 3.17.

Assume \(ip,-ip \in \mathcal{R}\). Then we define the nabla sine and cosine functions as follows:

$$\displaystyle{\mbox{ Cos}_{p}(t,a) = \frac{E_{ip}(t,a) + E_{-ip}(t,a)} {2},\quad \mbox{ Sin}_{p}(t,a) = \frac{E_{ip}(t,a) - E_{-ip}(t,a)} {2i} }$$

for \(t \in \mathbb{N}_{a}\).

Using the definitions of Cos p (t, a) and Sin p (t, a) we get immediately Euler’s formula

$$\displaystyle{ E_{ip}(t,a) = \mbox{ Cos}_{p}(t,a) + i\mbox{ Sin}_{p}(t,a),\quad t \in \mathbb{N}_{a} }$$
(3.11)

provided \(ip,-ip \in \mathcal{R}\).
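Euler's formula (3.11) can be confirmed with complex arithmetic; the product formula (3.7) goes through unchanged for complex-valued p (helper names are ours):

```python
def E(p, t, s):
    """Nabla exponential E_p(t, s) from (3.7); works for complex p as well."""
    if t >= s:
        prod = 1.0
        for tau in range(s + 1, t + 1):
            prod *= 1.0 / (1.0 - p(tau))
        return prod
    prod = 1.0
    for tau in range(t + 1, s + 1):
        prod *= 1.0 - p(tau)
    return prod

p0, a = 0.5, 0
Cos = lambda t: (E(lambda u: 1j * p0, t, a) + E(lambda u: -1j * p0, t, a)) / 2
Sin = lambda t: (E(lambda u: 1j * p0, t, a) - E(lambda u: -1j * p0, t, a)) / (2j)

# Euler's formula (3.11): E_{ip} = Cos_p + i Sin_p
print(all(abs(E(lambda u: 1j * p0, t, a) - (Cos(t) + 1j * Sin(t))) < 1e-12
          for t in range(0, 8)))    # True
```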

The following theorem gives some relationships between the nabla trigonometric functions and the nabla hyperbolic trigonometric functions.

Theorem 3.18.

Assume p is a constant. Then the following hold:

  1. (i)

    Sin ip (t,a) = iSinh p (t,a), if  p ≠ ± 1;

  2. (ii)

    Cos ip (t,a) = Cosh p (t,a), if  p ≠ ± 1;

  3. (iii)

    Sinh ip (t,a) = iSin p (t,a), if  p ≠ ± i;

  4. (iv)

    Cosh ip (t,a) = Cos p (t,a), if  p ≠ ± i,

for \(t \in \mathbb{N}_{a}\) .

Proof.

To see that (i) holds, note that

$$\displaystyle\begin{array}{rcl} \mbox{ Sin}_{ip}(t,a)& =& \frac{1} {2i}[E_{i^{2}p}(t,a) - E_{-i^{2}p}(t,a)] {}\\ & =& \frac{1} {2i}[E_{-p}(t,a) - E_{p}(t,a)] {}\\ & =& i\frac{E_{p}(t,a) - E_{-p}(t,a)} {2} {}\\ & =& i\;\mbox{ Sinh}_{p}(t,a). {}\\ \end{array}$$

The proof of parts (ii), (iii), and (iv) are similar. □ 

The following theorem gives various properties of the generalized sine and cosine functions.

Theorem 3.19.

Assume \(ip,-ip \in \mathcal{R}\) . Then

  1. (i)

    \(\mbox{ Cos}_{p}(a,a) = 1,\quad \mbox{ Sin}_{p}(a,a) = 0;\)

  2. (ii)

    \(\mbox{ Cos}_{p}^{2}(t,a) + \mbox{ Sin}_{p}^{2}(t,a) = E_{-p^{2}}(t,a),\quad t \in \mathbb{N}_{a};\)

  3. (iii)

    \(\nabla \mbox{ Cos}_{p}(t,a) = -p(t)\;\mbox{ Sin}_{p}(t,a),\quad t \in \mathbb{N}_{a+1};\)

  4. (iv)

    \(\nabla \mbox{ Sin}_{p}(t,a) = p(t)\;\mbox{ Cos}_{p}(t,a),\quad t \in \mathbb{N}_{a+1};\)

  5. (v)

    \(\mbox{ Cos}_{-p}(t,a) = \mbox{ Cos}_{p}(t,a),\quad t \in \mathbb{N}_{a};\)

  6. (vi)

    \(\mbox{ Sin}_{-p}(t,a) = -\mbox{ Sin}_{p}(t,a),\quad t \in \mathbb{N}_{a}\) .

Proof.

The proof of this theorem follows from Theorems 3.16 and 3.18. □ 

3.5 Second Order Linear Equations with Constant Coefficients

The nonhomogeneous second order linear nabla difference equation is given by

$$\displaystyle{ \nabla ^{2}y(t) + p(t)\nabla y(t) + q(t)y(t) = f(t),\quad t \in \mathbb{N}_{ a+2}, }$$
(3.12)

where we assume \(p,q,f: \mathbb{N}_{a+2} \rightarrow \mathbb{R}\) and \(1 + p(t) + q(t)\neq 0\) for \(t \in \mathbb{N}_{a+2}\). In this section we will see that we can easily solve the corresponding second order linear homogeneous nabla difference equation with constant coefficients

$$\displaystyle{ \nabla ^{2}y(t) + p\nabla y(t) + qy(t) = 0,\quad t \in \mathbb{N}_{ a+2}, }$$
(3.13)

where we assume the constants \(p,q \in \mathbb{R}\) satisfy \(1 + p + q\neq 0\).

First we prove an existence-uniqueness theorem for solutions of initial value problems (IVPs) for (3.12).

Theorem 3.20.

Assume that \(p,q,f: \mathbb{N}_{a+2} \rightarrow \mathbb{R}\) , \(1 + p(t) + q(t)\neq 0\) , \(t \in \mathbb{N}_{a+2}\) , \(A,B \in \mathbb{R},\) and \(t_{0} \in \mathbb{N}_{a+1}\) . Then the IVP

$$\displaystyle{ \nabla ^{2}y(t) + p(t)\nabla y(t) + q(t)y(t) = f(t),\;\;t \in \mathbb{N}_{ a+2}, }$$
(3.14)
$$\displaystyle{ y(t_{0} - 1) = A,\quad y(t_{0}) = B, }$$
(3.15)

has a unique solution y(t) on \(\mathbb{N}_{a}\) .

Proof.

Since \(1 + p(t) + q(t)\neq 0\), we may expand equation (3.14) and solve first for y(t) and then for y(t − 2) to obtain

$$\displaystyle\begin{array}{rcl} y(t)& =& \frac{2 + p(t)} {1 + p(t) + q(t)}y(t - 1) \\ & & - \frac{1} {1 + p(t) + q(t)}y(t - 2) + \frac{f(t)} {1 + p(t) + q(t)}{}\end{array}$$
(3.16)

and

$$\displaystyle{ y(t - 2) = -[1 + p(t) + q(t)]y(t) + [2 + p(t)]y(t - 1) + f(t). }$$
(3.17)

If we let \(t = t_{0} + 1\) in (3.16), then equation (3.14) holds at \(t = t_{0} + 1\) iff

$$\displaystyle\begin{array}{rcl} y(t_{0} + 1)& =& \frac{[2 + p(t_{0} + 1)]B} {1 + p(t_{0} + 1) + q(t_{0} + 1)} - \frac{A} {1 + p(t_{0} + 1) + q(t_{0} + 1)} {}\\ & & + \frac{f(t_{0} + 1)} {1 + p(t_{0} + 1) + q(t_{0} + 1)}. {}\\ \end{array}$$

Hence, the solution of the IVP (3.14), (3.15) is uniquely determined at \(t_{0} + 1\). Using equation (3.16) evaluated at \(t = t_{0} + 2\), we see that the unique values of the solution at \(t_{0}\) and \(t_{0} + 1\) uniquely determine the value of the solution at \(t_{0} + 2\). By induction we get that the solution of the IVP (3.14), (3.15) is uniquely determined on \(\mathbb{N}_{t_{0}-1}\). On the other hand, if \(t_{0} \geq a + 2\), then using equation (3.17) with \(t = t_{0}\), we have that

$$\displaystyle{y(t_{0} - 2) = -[1 + p(t_{0}) + q(t_{0})]B + [2 + p(t_{0})]A + f(t_{0}).}$$

Hence the solution of the IVP (3.14), (3.15) is uniquely determined at t 0 − 2. Similarly, if t 0 − 3 ≥ a, then the value of the solution at t 0 − 2 and at t 0 − 1 uniquely determines the value of the solution at t 0 − 3. Proceeding in this manner we have by mathematical induction that the solution of the IVP (3.14), (3.15) is uniquely determined on \(\mathbb{N}_{a}^{t_{0}-1}\). Hence the result follows. □ 
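The existence proof is constructive: iterating (3.16) forward computes the solution step by step (the backward iteration via (3.17) is analogous and omitted here). A minimal Python sketch of the forward iteration, tested on a constant-coefficient equation whose solution can be verified by hand:

```python
def solve_forward(p, q, f, t0, A, B, t_end):
    """Iterate (3.16): (1 + p + q) y(t) = (2 + p) y(t-1) - y(t-2) + f(t),
    starting from the initial values y(t0 - 1) = A, y(t0) = B."""
    y = {t0 - 1: A, t0: B}
    for t in range(t0 + 1, t_end + 1):
        d = 1 + p(t) + q(t)          # nonzero by assumption
        y[t] = ((2 + p(t)) * y[t - 1] - y[t - 2] + f(t)) / d
    return y

# nabla^2 y + 2 nabla y - 8 y = 0 is solved by y(t) = (-1)^t, i.e. E_2(t, 0).
y = solve_forward(lambda t: 2, lambda t: -8, lambda t: 0, 1, 1, -1, 10)
print(all(abs(y[t] - (-1) ** t) < 1e-9 for t in range(0, 11)))   # True
```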

Remark 3.21.

Note that the so-called initial conditions in (3.15)

$$\displaystyle{y(t_{0} - 1) = A,\quad y(t_{0}) = B}$$

hold iff the equations \(y(t_{0}) = C:= B\), \(\nabla y(t_{0}) = D:= B - A\) are satisfied. Because of this we also say that

$$\displaystyle{y(t_{0}) = C,\quad \nabla y(t_{0}) = D}$$

are initial conditions for solutions of equation (3.12). In particular, Theorem 3.20 holds if we replace the conditions (3.15) by the conditions

$$\displaystyle{y(t_{0}) = C,\quad \nabla y(t_{0}) = D.}$$

Remark 3.22.

From Exercise 3.21 we see that if \(1 + p(t) + q(t)\neq 0\), \(t \in \mathbb{N}_{a+2}\), then the general solution of the linear homogeneous equation

$$\displaystyle{\nabla ^{2}y(t) + p(t)\nabla y(t) + q(t)y(t) = 0}$$

is given by

$$\displaystyle{y(t) = c_{1}y_{1}(t) + c_{2}y_{2}(t),\quad t \in \mathbb{N}_{a},}$$

where y 1(t), y 2(t) are any two linearly independent solutions of (3.13) on \(\mathbb{N}_{a}\).

Next we show we can solve the second order linear nabla difference equation with constant coefficients (3.13). We say the equation

$$\displaystyle{\lambda ^{2} + p\lambda + q = 0}$$

is the characteristic equation of the nabla linear difference equation (3.13) and the solutions of this characteristic equation are called the characteristic values of (3.13).

Theorem 3.23 (Distinct Roots).

Assume \(1 + p + q\neq 0\) and \(\lambda _{1}\neq \lambda _{2}\) (possibly complex) are the characteristic values of (3.13) . Then

$$\displaystyle{y(t) = c_{1}E_{\lambda _{1}}(t,a) + c_{2}E_{\lambda _{2}}(t,a)}$$

is a general solution of (3.13) on \(\mathbb{N}_{a}\) .

Proof.

Since \(\lambda _{1}\), \(\lambda _{2}\) satisfy the characteristic equation for (3.13), we have that the characteristic polynomial for (3.13) is given by

$$\displaystyle{ (\lambda -\lambda _{1})(\lambda -\lambda _{2}) =\lambda ^{2} - (\lambda _{ 1} +\lambda _{2})\lambda +\lambda _{1}\lambda _{2} }$$

and hence

$$\displaystyle{p = -\lambda _{1} -\lambda _{2},\quad q =\lambda _{1}\lambda _{2}.}$$

Since

$$\displaystyle{1 + p + q = 1 + (-\lambda _{1} -\lambda _{2}) +\lambda _{1}\lambda _{2} = (1 -\lambda _{1})(1 -\lambda _{2})\neq 0,}$$

we have that \(\lambda _{1},\lambda _{2}\neq 1\) and hence \(E_{\lambda _{1}}(t,a)\) and \(E_{\lambda _{2}}(t,a)\) are well defined. Next note that

$$\displaystyle\begin{array}{rcl} & & \nabla ^{2}E_{\lambda _{ i}}(t,a) + p\;\nabla E_{\lambda _{i}}(t,a) + q\;E_{\lambda _{i}}(t,a) {}\\ & & \qquad \qquad \quad = [\lambda _{i}^{2} + p\lambda _{ i} + q]E_{\lambda _{i}}(t,a) {}\\ & & \qquad \qquad \quad = 0, {}\\ \end{array}$$

for i = 1, 2. Hence \(E_{\lambda _{i}}(t,a)\), i = 1, 2 are solutions of (3.13). Since \(\lambda _{1}\neq \lambda _{2}\), these two solutions are linearly independent on \(\mathbb{N}_{a}\), and by Remark 3.22,

$$\displaystyle{y(t) = c_{1}E_{\lambda _{1}}(t,a) + c_{2}E_{\lambda _{2}}(t,a)}$$

is a general solution of (3.13) on \(\mathbb{N}_{a}\). □ 

Example 3.24.

Solve the nabla linear difference equation

$$\displaystyle{\nabla ^{2}y(t) + 2\nabla y(t) - 8y(t) = 0,\quad t \in \mathbb{N}_{ a+2}.}$$

The characteristic equation is

$$\displaystyle{\lambda ^{2} + 2\lambda - 8 = (\lambda -2)(\lambda +4) = 0}$$

and the characteristic roots are

$$\displaystyle{\lambda _{1} = 2,\quad \lambda _{2} = -4.}$$

Note that \(1 + p + q = -5\neq 0\), so we can apply Theorem 3.23. Then we have that

$$\displaystyle\begin{array}{rcl} y(t)& =& c_{1}E_{\lambda _{1}}(t,a) + c_{2}E_{\lambda _{2}}(t,a) {}\\ & =& c_{1}E_{2}(t,a) + c_{2}E_{-4}(t,a) {}\\ & =& c_{1}(-1)^{a-t} + c_{ 2}5^{a-t} {}\\ \end{array}$$

is a general solution on \(\mathbb{N}_{a}\).
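The general solution above is easy to confirm numerically. The following Python sketch (an illustration, not part of the text) uses the representation \(E_{p}(t,a) = (1 - p)^{a-t}\) for constant p, takes \(c_{1} = c_{2} = 1\) and a = 0 for concreteness, and checks that the resulting y satisfies the difference equation exactly in rational arithmetic.

```python
from fractions import Fraction

def E(p, t, a):
    # nabla exponential for constant p != 1:  E_p(t, a) = (1 - p)^(a - t)
    return Fraction(1 - p) ** (a - t)

def nabla(y, t):
    # backward difference:  (nabla y)(t) = y(t) - y(t - 1)
    return y(t) - y(t - 1)

def nabla2(y, t):
    # nabla^2 y(t) = y(t) - 2 y(t - 1) + y(t - 2)
    return y(t) - 2 * y(t - 1) + y(t - 2)

a = 0
y = lambda t: E(2, t, a) + E(-4, t, a)   # c1 = c2 = 1 for the check

for t in range(a + 2, a + 10):
    assert nabla2(y, t) + 2 * nabla(y, t) - 8 * y(t) == 0
```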

Usually, we want to find all real-valued solutions of (3.13). When a characteristic value \(\lambda _{1}\) of (3.13) is complex, \(E_{\lambda _{1}}(t,a)\) is a complex-valued solution. In the next theorem we show how to use this complex-valued solution to find two linearly independent real-valued solutions on \(\mathbb{N}_{a}\).

Theorem 3.25 (Complex Roots).

Assume the characteristic values of (3.13) are \(\lambda =\alpha \pm i\beta\) , β > 0 and α ≠ 1. Then a general solution of (3.13) is given by

$$\displaystyle{y(t) = c_{1}E_{\alpha }(t,a)\mbox{ Cos}_{\gamma }(t,a) + c_{2}E_{\alpha }(t,a)\mbox{ Sin}_{\gamma }(t,a),}$$

where \(\gamma:= \frac{\beta } {1-\alpha }\) .

Proof.

Since the characteristic roots are \(\lambda =\alpha \pm i\beta\), β > 0, we have that the characteristic equation is given by

$$\displaystyle{\lambda ^{2} - 2\alpha \lambda +\alpha ^{2} +\beta ^{2} = 0.}$$

It follows that \(p = -2\alpha\) and \(q =\alpha ^{2} +\beta ^{2}\), and hence

$$\displaystyle{1 + p + q = (1-\alpha )^{2} +\beta ^{2}\neq 0.}$$

Hence, Remark 3.22 applies. By the proof of Theorem 3.23, we have that \(y(t) = E_{\alpha +i\beta }(t,a)\) is a complex-valued solution of (3.13). Using

$$\displaystyle{\alpha +i\beta =\alpha \boxplus i \frac{\beta } {1-\alpha } =\alpha \boxplus i\gamma,}$$

where \(\gamma = \frac{\beta } {1-\alpha }\), α ≠ 1, we get that

$$\displaystyle{y(t) = E_{\alpha +i\beta }(t,a) = E_{\alpha \boxplus i\gamma }(t,a) = E_{\alpha }(t,a)E_{i\gamma }(t,a)}$$

is a nontrivial solution. It follows from Euler’s formula (3.11) that

$$\displaystyle\begin{array}{rcl} y(t)& =& E_{\alpha }(t,a)E_{i\gamma }(t,a) {}\\ & =& E_{\alpha }(t,a)[\mbox{ Cos}_{\gamma }(t,a) + i\mbox{ Sin}_{\gamma }(t,a)] {}\\ & =& y_{1}(t) + iy_{2}(t) {}\\ \end{array}$$

is a solution of (3.13). But since p and q are real, we have that the real part, \(y_{1}(t) = E_{\alpha }(t,a)\mbox{ Cos}_{\gamma }(t,a)\), and the imaginary part, \(y_{2}(t) = E_{\alpha }(t,a)\mbox{ Sin}_{\gamma }(t,a)\), of y(t) are solutions of (3.13). But \(y_{1}(t)\), \(y_{2}(t)\) are linearly independent on \(\mathbb{N}_{a}\), so we get that

$$\displaystyle{y(t) = c_{1}E_{\alpha }(t,a)\mbox{ Cos}_{\gamma }(t,a) + c_{2}E_{\alpha }(t,a)\mbox{ Sin}_{\gamma }(t,a)}$$

is a general solution of (3.13) on \(\mathbb{N}_{a}\). □ 

Example 3.26.

Solve the nabla difference equation

$$\displaystyle{ \nabla ^{2}y(t) + 2\nabla y(t) + 2y(t) = 0,\quad t \in \mathbb{N}_{ a+2}. }$$
(3.18)

The characteristic equation is

$$\displaystyle{\lambda ^{2} + 2\lambda + 2 = 0,}$$

and so, the characteristic roots are \(\lambda = -1 \pm i\). Note that \(1 + p + q = 5\neq 0\). So, applying Theorem 3.25, we find that

$$\displaystyle{y(t) = c_{1}E_{-1}(t,a)\mbox{ Cos}_{\frac{1} {2} }(t,a) + c_{2}E_{-1}(t,a)\mbox{ Sin}_{\frac{1} {2} }(t,a)}$$

is a general solution of (3.18) on \(\mathbb{N}_{a}\).

The previous theorem (Theorem 3.25) excluded the case when the characteristic roots of (3.13) are 1 ± i β, where β > 0. The next theorem considers this case.

Theorem 3.27.

If the characteristic values of (3.13) are 1 ± iβ, where β > 0, then a general solution of (3.13) is given by

$$\displaystyle{y(t) = c_{1}\beta ^{a-t}\cos \left [ \frac{\pi } {2}(t - a)\right ] + c_{2}\beta ^{a-t}\sin \left [ \frac{\pi } {2}(t - a)\right ],}$$

\(t \in \mathbb{N}_{a}\) .

Proof.

Since 1 − i β is a characteristic value of (3.13), we have that \(y(t) = E_{1-i\beta }(t,a)\) is a complex-valued solution of (3.13). Now

$$\displaystyle\begin{array}{rcl} y(t)& =& E_{1-i\beta }(t,a) {}\\ & =& (i\beta )^{a-t} {}\\ & =& \left (\beta e^{i \frac{\pi }{2} }\right )^{a-t} {}\\ & =& \beta ^{a-t}e^{i \frac{\pi }{2} (a-t)} {}\\ & =& \beta ^{a-t}\left \{\cos \left [ \frac{\pi } {2}(a - t)\right ] + i\sin \left [ \frac{\pi } {2}(a - t)\right ]\right \} {}\\ & =& \beta ^{a-t}\cos \left [ \frac{\pi } {2}(t - a)\right ] - i\beta ^{a-t}\sin \left [ \frac{\pi } {2}(t - a)\right ]. {}\\ & & {}\\ \end{array}$$

It follows that

$$\displaystyle{y_{1}(t) =\beta ^{a-t}\cos \left [ \frac{\pi } {2}(t - a)\right ],\quad y_{2}(t) =\beta ^{a-t}\sin \left [ \frac{\pi } {2}(t - a)\right ]}$$

are solutions of (3.13). Since these solutions are linearly independent on \(\mathbb{N}_{a}\), we have that

$$\displaystyle{y(t) = c_{1}\beta ^{a-t}\cos \left [ \frac{\pi } {2}(t - a)\right ] + c_{2}\beta ^{a-t}\sin \left [ \frac{\pi } {2}(t - a)\right ]}$$

is a general solution of (3.13). □ 

Example 3.28.

Solve the nabla linear difference equation

$$\displaystyle{\nabla ^{2}y(t) - 2\nabla y(t) + 5y(t) = 0,\quad t \in \mathbb{N}_{ 2}.}$$

The characteristic equation is \(\lambda ^{2} - 2\lambda + 5 = 0\), so the characteristic roots are \(\lambda = 1 \pm 2i\). It follows from Theorem 3.27 that

$$\displaystyle{y(t) = c_{1}2^{-t}\cos \left ( \frac{\pi } {2}t\right ) + c_{2}2^{-t}\sin \left ( \frac{\pi } {2}t\right ),}$$

for \(t \in \mathbb{N}_{0}\).
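The solution of Example 3.28 can be checked exactly, since for integer t the values \(\cos (\frac{\pi }{2}t)\) and \(\sin (\frac{\pi }{2}t)\) cycle through \(1,0,-1,0\) and \(0,1,0,-1\). The Python sketch below (illustrative, not part of the text) verifies that both claimed solutions satisfy \(\nabla ^{2}y - 2\nabla y + 5y = 0\) in rational arithmetic.

```python
from fractions import Fraction

COS = [1, 0, -1, 0]   # cos(pi*t/2) for t = 0, 1, 2, 3 (mod 4)
SIN = [0, 1, 0, -1]   # sin(pi*t/2) for t = 0, 1, 2, 3 (mod 4)

def y1(t):
    # 2^(-t) cos(pi*t/2), kept exact with Fractions
    return Fraction(COS[t % 4], 2 ** t)

def y2(t):
    # 2^(-t) sin(pi*t/2)
    return Fraction(SIN[t % 4], 2 ** t)

def residual(y, t):
    # nabla^2 y - 2 nabla y + 5 y at t, written out in backward differences
    d1 = y(t) - y(t - 1)
    d2 = y(t) - 2 * y(t - 1) + y(t - 2)
    return d2 - 2 * d1 + 5 * y(t)

for t in range(2, 12):
    assert residual(y1, t) == 0 and residual(y2, t) == 0
```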

Theorem 3.29 (Double Root).

Assume \(\lambda _{1} =\lambda _{2} = r\neq 1\) is a double root of the characteristic equation. Then

$$\displaystyle{y(t) = c_{1}E_{r}(t,a) + c_{2}(t - a)E_{r}(t,a)}$$

is a general solution of (3.13) .

Proof.

Since \(\lambda _{1} = r\) is a double root of the characteristic equation, we have that \(\lambda ^{2} - 2r\lambda + r^{2} = 0\) is the characteristic equation. It follows that \(p = -2r\) and q = r 2. Therefore

$$\displaystyle{1 + p + q = 1 - 2r + r^{2} = (1 - r)^{2}\neq 0}$$

since r ≠ 1. Hence, Remark 3.22 applies. Since r ≠ 1 is a characteristic root, we have that y 1(t) = E r (t, a) is a nontrivial solution of (3.13). From Exercise 3.14, we have that \(y_{2}(t) = (t - a)E_{r}(t,a)\) is a second solution of (3.13) on \(\mathbb{N}_{a}\). Since these two solutions are linearly independent on \(\mathbb{N}_{a}\), we have from Remark 3.22 that

$$\displaystyle{y(t) = c_{1}E_{r}(t,a) + c_{2}(t - a)E_{r}(t,a)}$$

is a general solution of (3.13). □ 

Example 3.30.

Solve the nabla difference equation

$$\displaystyle{\nabla ^{2}y(t) + 12\nabla y(t) + 36y(t) = 0,\quad t \in \mathbb{N}_{ a+2}.}$$

The corresponding characteristic equation is

$$\displaystyle{\lambda ^{2} + 12\lambda + 36 = (\lambda +6)^{2} = 0.}$$

Hence \(r = -6\neq 1\) is a double root, so by Theorem 3.29 a general solution is given by

$$\displaystyle\begin{array}{rcl} y(t)& =& c_{1}E_{-6}(t,a) + c_{2}(t - a)E_{-6}(t,a) {}\\ & =& c_{1}7^{a-t} + c_{ 2}(t - a)7^{a-t} {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a}\).

3.6 Discrete Nabla Integral

In this section we define the nabla definite and indefinite integral, give several of their properties, and present a nabla fundamental theorem of calculus.

Definition 3.31.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(b \in \mathbb{N}_{a}\). Then the nabla integral of f from a to b is defined by

$$\displaystyle{\int _{a}^{b}f(t)\nabla t:=\sum _{ t=a+1}^{b}f(t)}$$

with the convention that

$$\displaystyle{\int _{a}^{a}f(t)\nabla t =\sum _{ t=a+1}^{a}f(t):= 0.}$$

Note that even if f had the domain \(\mathbb{N}_{a}\) instead of \(\mathbb{N}_{a+1}\), the value of the integral \(\int _{a}^{b}f(t)\nabla t\) would not depend on the value of f at a. Also note that if \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\), then \(F(t):=\int _{ a}^{t}f(\tau )\nabla \tau\) is defined on \(\mathbb{N}_{a}\) with F(a) = 0.
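In computational terms the nabla integral is simply a finite sum over \(\{a+1,\ldots,b\}\). A minimal Python sketch (illustrative only) of Definition 3.31, together with the convention \(\int _{a}^{a}f(t)\nabla t = 0\) and the antidifference property that Theorem 3.32 (vi) will establish:

```python
def nabla_integral(f, a, b):
    # Definition 3.31: the nabla integral of f from a to b is the sum of
    # f(t) over t = a+1, ..., b; the sum is empty (hence 0) when b == a.
    return sum(f(t) for t in range(a + 1, b + 1))

f = lambda t: t * t
a = 0
F = lambda t: nabla_integral(f, a, t)

assert F(a) == 0                       # convention: integral from a to a is 0
for t in range(a + 1, a + 8):
    assert F(t) - F(t - 1) == f(t)     # nabla F = f  (Theorem 3.32 (vi))
```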

The following theorem gives some important properties of this nabla integral.

Theorem 3.32.

Assume \(f,g: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) , \(b,c,d \in \mathbb{N}_{a}\) , b ≤ c ≤ d, and \(\alpha \in \mathbb{R}\) . Then

  1. (i)

    \(\int _{b}^{c}\alpha f(t)\nabla t =\alpha \int _{ b}^{c}f(t)\nabla t;\)

  2. (ii)

    \(\int _{b}^{c}(f(t) + g(t))\nabla t =\int _{ b}^{c}f(t)\nabla t +\int _{ b}^{c}g(t)\nabla t;\)

  3. (iii)

    \(\int _{b}^{b}f(t)\nabla t = 0;\)

  4. (iv)

    \(\int _{b}^{d}f(t)\nabla t =\int _{ b}^{c}f(t)\nabla t +\int _{ c}^{d}f(t)\nabla t;\)

  5. (v)

    \(\vert \int _{b}^{c}f(t)\nabla t\vert \leq \int _{b}^{c}\vert f(t)\vert \nabla t;\)

  6. (vi)

    if \(F(t):=\int _{ b}^{t}f(s)\nabla s\) , for \(t \in \mathbb{N}_{b}^{c},\) then ∇F(t) = f(t), \(t \in \mathbb{N}_{b+1}^{c}\) ;

  7. (vii)

    if f(t) ≥ g(t) for \(t \in \mathbb{N}_{b+1}^{c},\) then \(\int _{b}^{c}f(t)\nabla t \geq \int _{b}^{c}g(t)\nabla t\) .

Proof.

To see that (vi) holds, assume

$$\displaystyle{F(t) =\int _{ b}^{t}f(s)\nabla s,\quad t \in \mathbb{N}_{ b}^{c}.}$$

Then, for \(t \in \mathbb{N}_{b+1}^{c}\), we have that

$$\displaystyle\begin{array}{rcl} \nabla F(t)& =& \nabla \left (\int _{b}^{t}f(s)\nabla s\right ) {}\\ & =& \nabla \left (\sum _{s=b+1}^{t}f(s)\right ) {}\\ & =& \sum _{s=b+1}^{t}f(s) -\sum _{ s=b+1}^{t-1}f(s) {}\\ & =& f(t). {}\\ \end{array}$$

Hence property (vi) holds. All the other properties of the nabla integral in this theorem hold since the corresponding properties for the summations hold. □ 

Definition 3.33.

Assume \(f: \mathbb{N}_{a+1}^{b} \rightarrow \mathbb{R}\). We say \(F: \mathbb{N}_{a}^{b} \rightarrow \mathbb{R}\) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\) provided

$$\displaystyle{\nabla F(t) = f(t),\quad t \in \mathbb{N}_{a+1}^{b}.}$$

If \(f: \mathbb{N}_{a+1}^{b} \rightarrow \mathbb{R}\), then if we define F by

$$\displaystyle{F(t):=\int _{ a}^{t}f(s)\nabla s,\quad t \in \mathbb{N}_{ a}^{b}}$$

we have from part (vi) of Theorem 3.32 that ∇F(t) = f(t), for \(t \in \mathbb{N}_{a+1}^{b}\), that is, F(t) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\). Next we show that if \(f: \mathbb{N}_{a+1}^{b} \rightarrow \mathbb{R}\), then f(t) has infinitely many antidifferences on \(\mathbb{N}_{a}^{b}\).

Theorem 3.34.

If \(f: \mathbb{N}_{a+1}^{b} \rightarrow \mathbb{R}\) and G(t) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\) , then \(F(t) = G(t) + C\) , where C is a constant, is a general nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\) .

Proof.

Assume G(t) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\). Let \(F(t):= G(t) + C\), \(t \in \mathbb{N}_{a}^{b}\), where C is a constant. Then

$$\displaystyle{\nabla F(t) = \nabla G(t) = f(t),\quad t \in \mathbb{N}_{a+1}^{b},}$$

and so, F(t) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\).

Conversely, assume F(t) is a nabla antidifference of f(t) on \(\mathbb{N}_{a}^{b}\). Then

$$\displaystyle{\nabla (F(t) - G(t)) = \nabla F(t) -\nabla G(t) = f(t) - f(t) = 0}$$

for \(t \in \mathbb{N}_{a+1}^{b}\). This implies \(F(t) - G(t) = C\), for \(t \in \mathbb{N}_{a}^{b}\), where C is a constant. Hence

$$\displaystyle{F(t):= G(t) + C,\quad t \in \mathbb{N}_{a}^{b}.}$$

This completes the proof. □ 

Definition 3.35.

If \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\), then the nabla indefinite integral of f is defined by

$$\displaystyle{\int f(t)\nabla t = F(t) + C,}$$

where F(t) is a nabla antidifference of f(t) and C is an arbitrary constant.

Since any formula for a nabla derivative gives us a formula for an indefinite integral, we have the following theorem.

Theorem 3.36.

The following hold:

  1. (i)

    \(\int \alpha ^{t+\beta }\nabla t = \frac{\alpha } {\alpha -1}\alpha ^{t+\beta } + C,\quad \alpha \neq 1;\)

  2. (ii)

    \(\int (t-\alpha )^{\overline{r}}\nabla t = \frac{1} {r+1}(t-\alpha )^{\overline{r + 1}} + C,\quad r\neq - 1;\)

  3. (iii)

    \(\int (\alpha -\rho (t))^{\overline{r}}\nabla t = - \frac{1} {r+1}(\alpha -t)^{\overline{r + 1}} + C,\quad r\neq - 1;\)

  4. (iv)

    \(\int p(t)\;E_{p}(t,a)\nabla t = E_{p}(t,a) + C,\quad \mbox{ if}\quad p \in \mathcal{R};\)

  5. (v)

    \(\int p(t)\mbox{ Cosh}_{p}(t,a)\nabla t = \mbox{ Sinh}_{p}(t,a) + C,\quad \mbox{ if}\quad \pm p \in \mathcal{R};\)

  6. (vi)

    \(\int p(t)\mbox{ Sinh}_{p}(t,a)\nabla t = \mbox{ Cosh}_{p}(t,a) + C,\quad \mbox{ if}\quad \pm p \in \mathcal{R};\)

  7. (vii)

    \(\int p(t)\mbox{ Cos}_{p}(t,a)\nabla t = \mbox{ Sin}_{p}(t,a) + C,\quad \mbox{ if}\quad \pm ip \in \mathcal{R};\)

  8. (viii)

    \(\int p(t)\mbox{ Sin}_{p}(t,a)\nabla t = -\mbox{ Cos}_{p}(t,a) + C,\quad \mbox{ if}\quad \pm ip \in \mathcal{R},\)

where C is an arbitrary constant.

Proof.

The formula

$$\displaystyle{\int \alpha ^{t+\beta }\nabla t = \frac{\alpha } {\alpha -1}\alpha ^{t+\beta } + C,\quad \alpha \neq 1,}$$

is clear when α = 0, and for α ≠ 0 it follows from part (iv) of Theorem 3.1. Parts (ii) and (iii) of this theorem follow from the power rules (3.3) and (3.4), respectively. Part (iv) of this theorem follows from part (iv) of Theorem 3.11. Parts (v) and (vi) of this theorem follow from parts (iv) and (iii) of Theorem 3.16, respectively. Finally, parts (vii) and (viii) of this theorem follow from parts (iv) and (iii) of Theorem 3.19, respectively. □ 

We now state and prove the fundamental theorem for the nabla calculus.

Theorem 3.37 (Fundamental Theorem of Nabla Calculus).

We assume \(f: \mathbb{N}_{a+1}^{b} \rightarrow \mathbb{R}\) and F is any nabla antidifference of f on \(\mathbb{N}_{a}^{b}\) . Then

$$\displaystyle{\int _{a}^{b}f(t)\nabla t = F(t)\big\vert _{ a}^{b}:= F(b) - F(a).}$$

Proof.

By Theorem 3.32, (vi), we have that G defined by \(G(t):=\int _{ a}^{t}f(s)\nabla s\), for \(t \in \mathbb{N}_{a}^{b},\) is a nabla antidifference of f on \(\mathbb{N}_{a}^{b}\). Since F is a nabla antidifference of f on \(\mathbb{N}_{a}^{b}\), it follows from Theorem 3.34 that \(F(t) = G(t) + C\), \(t \in \mathbb{N}_{a}^{b},\) for some constant C. Hence,

$$\displaystyle\begin{array}{rcl} F(t)\big\vert _{a}^{b}& =& F(b) - F(a) {}\\ & =& [G(b) + C] - [G(a) + C] {}\\ & =& G(b) - G(a) {}\\ & =& \int _{a}^{b}f(s)\nabla s -\int _{ a}^{a}f(s)\nabla s {}\\ & =& \int _{a}^{b}f(s)\nabla s. {}\\ \end{array}$$

This completes the proof. □ 

Example 3.38.

Assume p ≠ 0, 1 is a constant. Use the integration formula

$$\displaystyle{\int _{a}^{t}E_{ p}(t,a)\nabla t = \frac{1} {p}E_{p}(t,a)\big\vert _{a}^{t}}$$

to evaluate the integral \(\int _{0}^{4}f(t)\nabla t,\) where \(f(t):= (-3)^{-t},\) \(t \in \mathbb{N}_{0}\). We calculate

$$\displaystyle\begin{array}{rcl} \int _{0}^{4}(-3)^{-t}\nabla t& =& \int _{ 0}^{4}(1 - 4)^{0-t}\nabla t {}\\ & =& \int _{0}^{4}E_{ 4}(t,0)\nabla t {}\\ & =& \frac{1} {4}E_{4}(t,0)\big\vert _{0}^{4} {}\\ & =& \frac{1} {4}[E_{4}(4,0) - E_{4}(0,0)] {}\\ & =& \frac{1} {4}[(-3)^{-4} - 1] {}\\ & =& -\frac{20} {81}. {}\\ \end{array}$$

Check this answer by using part (i) in Theorem 3.36.
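The suggested check can be carried out directly. The Python sketch below (illustrative, not part of the text) sums the definition of the integral and also evaluates the antidifference from part (i) of Theorem 3.36, writing \((-3)^{-t} = \alpha ^{t}\) with \(\alpha = -1/3\).

```python
from fractions import Fraction

# Direct evaluation of the definition: the integral from 0 to 4 of (-3)^(-t)
# is the sum of (-3)^(-t) over t = 1, 2, 3, 4.
direct = sum(Fraction(-3) ** (-t) for t in range(1, 5))
assert direct == Fraction(-20, 81)

# Part (i) of Theorem 3.36 with alpha = -1/3: an antidifference of alpha^t
# is alpha/(alpha - 1) * alpha^t.
alpha = Fraction(-1, 3)
F = lambda t: alpha / (alpha - 1) * alpha ** t
assert F(4) - F(0) == Fraction(-20, 81)
```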

Using the product rule (part (v) in Theorem 3.1) we can prove the following integration by parts formulas.

Theorem 3.39 (Integration by Parts).

Given two functions \(u,v: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and \(b,c \in \mathbb{N}_{a}\) , b < c, we have the integration by parts formulas:

$$\displaystyle{ \int _{b}^{c}u(t)\nabla v(t)\nabla t = u(t)v(t)\Big\vert _{ b}^{c} -\int _{ b}^{c}v(\rho (t))\nabla u(t)\nabla t, }$$
(3.19)
$$\displaystyle{ \int _{b}^{c}u(\rho (t))\nabla v(t)\nabla t = u(t)v(t)\Big\vert _{ b}^{c} -\int _{ b}^{c}v(t)\nabla u(t)\nabla t. }$$
(3.20)

Example 3.40.

Given \(f(t) = (t - 1)3^{1-t}\) for \(t \in \mathbb{N}_{1}\), evaluate the integral \(\int _{1}^{t}f(\tau )\nabla \tau\). Note that

$$\displaystyle{\int _{1}^{t}f(\tau )\nabla \tau =\int _{ 1}^{t}(\tau -1)E_{ -2}(\tau,1)\nabla \tau.}$$

To set up to use the integration by parts formula (3.19), set

$$\displaystyle{u(\tau ) =\tau -1,\quad \nabla v(\tau ) = E_{-2}(\tau,1).}$$

It follows that

$$\displaystyle{\nabla u(\tau ) = 1,\quad v(\tau ) = -\frac{1} {2}E_{-2}(\tau,1),\quad v(\rho (\tau )) = -\frac{3} {2}E_{-2}(\tau,1).}$$

Hence, using the integration by parts formula (3.19), we get

$$\displaystyle\begin{array}{rcl} \int _{1}^{t}f(\tau )\nabla \tau & =& \int _{ 1}^{t}(\tau -1)E_{ -2}(\tau,1)\nabla \tau {}\\ & =& -\frac{1} {2}(\tau -1)E_{-2}(\tau,1)\Big\vert _{\tau =1}^{\tau =t} + \frac{3} {2}\int _{1}^{t}E_{ -2}(\tau,1)\nabla \tau {}\\ & =& -\frac{1} {2}(t - 1)E_{-2}(t,1) -\frac{3} {4}E_{-2}(\tau,1)\Big\vert _{\tau =1}^{\tau =t} {}\\ & =& -\frac{1} {2}(t - 1)E_{-2}(t,1) -\frac{3} {4}E_{-2}(t,1) + \frac{3} {4} {}\\ & =& -\frac{3} {2}t\left (\frac{1} {3}\right )^{t} -\frac{3} {4}\left (\frac{1} {3}\right )^{t} + \frac{3} {4}, {}\\ \end{array}$$

for \(t \in \mathbb{N}_{1}\).
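The integration by parts computation above can be cross-checked against the definition of the integral as a sum. The Python sketch below (illustrative) compares the closed form \(-\frac{1}{2}(t-1)E_{-2}(t,1) -\frac{3}{4}E_{-2}(t,1) + \frac{3}{4}\), with \(E_{-2}(t,1) = 3^{1-t}\), to direct summation.

```python
from fractions import Fraction

def F_direct(t):
    # the integral from 1 to t of (tau - 1) 3^(1 - tau), straight from the
    # definition: sum over tau = 2, ..., t (empty, hence 0, when t = 1)
    return sum((tau - 1) * Fraction(3) ** (1 - tau) for tau in range(2, t + 1))

def F_parts(t):
    # closed form from integration by parts, with E_{-2}(t, 1) = 3^(1 - t)
    E = Fraction(3) ** (1 - t)
    return -Fraction(1, 2) * (t - 1) * E - Fraction(3, 4) * E + Fraction(3, 4)

for t in range(1, 10):
    assert F_direct(t) == F_parts(t)
```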

3.7 First Order Linear Difference Equations

In this section we show how to solve the first order nabla linear equation

$$\displaystyle{ \nabla y(t) = p(t)y(t) + q(t),\quad t \in \mathbb{N}_{a+1}, }$$
(3.21)

where we assume \(p,q: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(p \in \mathcal{R}\). At the end of this section we will then show how to use the fact that we can solve the first order nabla linear equation (3.21) to solve certain nabla second order linear equations with variable coefficients (3.13) by the method of factoring.

We begin by using one of the following nabla Leibniz’s formulas to find a variation of constants formula for (3.21).

Theorem 3.41 (Nabla Leibniz Formulas).

Assume \(f: \mathbb{N}_{a} \times \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then

$$\displaystyle{ \nabla \left (\int _{a}^{t}f(t,\tau )\nabla \tau \right ) =\int _{ a}^{t}\nabla _{ t}f(t,\tau )\nabla \tau + f(\rho (t),t), }$$
(3.22)

\(t \in \mathbb{N}_{a+1}\) . Also

$$\displaystyle{ \nabla \left (\int _{a}^{t}f(t,\tau )\nabla \tau \right ) =\int _{ a}^{t-1}\nabla _{ t}f(t,\tau )\nabla \tau + f(t,t), }$$
(3.23)

for \(t \in \mathbb{N}_{a+1}\).

Proof.

The proof of (3.22) follows from the following:

$$\displaystyle\begin{array}{rcl} \nabla \left (\int _{a}^{t}f(t,\tau )\nabla \tau \right )& =& \int _{ a}^{t}f(t,\tau )\nabla \tau -\int _{ a}^{t-1}f(t - 1,\tau )\nabla \tau {}\\ & =& \int _{a}^{t}[f(t,\tau ) - f(t - 1,\tau )]\nabla \tau +\int _{ t-1}^{t}f(t - 1,\tau )\nabla \tau {}\\ & =& \int _{a}^{t}\nabla _{ t}f(t,\tau )\nabla \tau + f(\rho (t),t) {}\\ & & {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). The proof of (3.23) is Exercise 3.22. □ 

Theorem 3.42 (Variation of Constants Formula).

Assume \(p,q: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(p \in \mathcal{R}\) . Then the unique solution of the IVP

$$\displaystyle\begin{array}{rcl} \nabla y(t)& =& p(t)y(t) + q(t),\quad t \in \mathbb{N}_{a+1} {}\\ y(a)& =& A {}\\ \end{array}$$

is given by

$$\displaystyle{y(t) = AE_{p}(t,a) +\int _{ a}^{t}E_{ p}(t,\rho (s))q(s)\nabla s,\quad t \in \mathbb{N}_{a}.}$$

Proof.

The proof of uniqueness is left to the reader. Let

$$\displaystyle{ y(t):= AE_{p}(t,a) +\int _{ a}^{t}E_{ p}(t,\rho (s))q(s)\nabla s,\quad t \in \mathbb{N}_{a}. }$$

Using the nabla Leibniz formula (3.22), we obtain

$$\displaystyle\begin{array}{rcl} \nabla y(t)& =& Ap(t)E_{p}(t,a) +\int _{ a}^{t}p(t)E_{ p}(t,\rho (s))q(s)\nabla s + E_{p}(\rho (t),\rho (t))q(t) {}\\ & =& p(t)\left [AE_{p}(t,a) +\int _{ a}^{t}E_{ p}(t,\rho (s))q(s)\nabla s\right ] + q(t) {}\\ & =& p(t)y(t) + q(t) {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). We also see that y(a) = A, which completes the proof. □ 

Example 3.43.

Assuming \(r \in \mathcal{R}\), solve the IVP

$$\displaystyle{ \nabla y(t) = r(t)y(t) + E_{r}(t,a),\quad t \in \mathbb{N}_{a+1} }$$
(3.24)
$$\displaystyle{ y(a) = 0. }$$
(3.25)

Using the variation of constants formula in Theorem 3.42, we have

$$\displaystyle\begin{array}{rcl} y(t)& =& \int _{a}^{t}E_{ r}(t,\rho (s))E_{r}(s,a)\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t}E_{ r}(a,\rho (s))E_{r}(s,a)\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t} \frac{E_{r}(s,a)} {E_{r}(\rho (s),a)}\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t} \frac{E_{r}(s,a)} {[1 - r(s)]E_{r}(s,a)}\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t} \frac{1} {1 - r(s)}\nabla s. {}\\ \end{array}$$

If we further assume r(t) = r ≠ 1 is a constant, then we obtain that the function \(\frac{1} {1-r}(t - a)E_{r}(t,a)\) is the solution of the IVP

$$\displaystyle{\nabla y(t) = ry(t) + E_{r}(t,a),\quad y(a) = 0.}$$
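For constant r the closed form just obtained can be verified directly. The Python sketch below (illustrative; it assumes the representation \(E_{r}(t,a) = (1 - r)^{a-t}\) for constant r) checks that \(y(t) = \frac{1}{1-r}(t - a)E_{r}(t,a)\) satisfies the IVP exactly, here with \(r = 1/2\) and a = 0.

```python
from fractions import Fraction

r, a = Fraction(1, 2), 0

def E(t):
    # E_r(t, a) = (1 - r)^(a - t) for constant r != 1
    return (1 - r) ** (a - t)

def y(t):
    # claimed solution: (1/(1 - r)) (t - a) E_r(t, a)
    return (t - a) * E(t) / (1 - r)

assert y(a) == 0                                  # initial condition
for t in range(a + 1, a + 10):
    assert y(t) - y(t - 1) == r * y(t) + E(t)     # nabla y = r y + E_r
```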

A general solution of the linear equation (3.21) is given by adding a general solution of the corresponding homogeneous equation ∇y(t) = p(t)y(t) to a particular solution to the nonhomogeneous difference equation (3.21). Hence,

$$\displaystyle{y(t) = cE_{p}(t,a) +\int _{ a}^{t}E_{ p}(t,\rho (s))q(s)\nabla s}$$

is a general solution of (3.21). We use this fact in the following example.

Example 3.44.

Find a general solution of the linear difference equation

$$\displaystyle{ \nabla y(t) = (\boxminus 3)y(t) + 3t,\quad t \in \mathbb{N}_{1}. }$$
(3.26)

Note that the constant function \(p(t):= \boxminus 3\) is a regressive function on \(\mathbb{N}_{1}\). Hence, the general solution of (3.26) is given by

$$\displaystyle\begin{array}{rcl} y(t)& =& cE_{p}(t,a) +\int _{ a}^{t}E_{ p}(t,\rho (s))q(s)\nabla s {}\\ & =& cE_{\boxminus 3}(t,0) + 3\int _{0}^{t}sE_{ \boxminus 3}(t,\rho (s))\nabla s {}\\ & =& cE_{\boxminus 3}(t,0) + 3\int _{0}^{t}sE_{ 3}(\rho (s),t)\nabla s {}\\ & =& cE_{\boxminus 3}(t,0) - 6\int _{0}^{t}sE_{ 3}(s,t)\nabla s, {}\\ \end{array}$$

for \(t \in \mathbb{N}_{0}\). Integrating by parts we get

$$\displaystyle\begin{array}{rcl} y(t)& =& cE_{\boxminus 3}(t,0) - 2sE_{3}(s,t)\big\vert _{s=0}^{t} + 2\int _{ 0}^{t}E_{ 3}(\rho (s),t)\nabla s {}\\ & =& cE_{\boxminus 3}(t,0) - 2t - 4\int _{0}^{t}E_{ 3}(s,t)\nabla s {}\\ & =& cE_{\boxminus 3}(t,0) - 2t -\frac{4} {3}E_{3}(s,t)\big\vert _{0}^{t} {}\\ & =& cE_{\boxminus 3}(t,0) - 2t -\frac{4} {3} + \frac{4} {3}E_{3}(0,t) {}\\ & =& \alpha E_{\boxminus 3}(t,0) - 2t -\frac{4} {3} {}\\ & =& \alpha (-2)^{t} - 2t -\frac{4} {3}{}\\ && {}\\ \end{array}$$

for \(t \in \mathbb{N}_{0}\).
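The general solution of Example 3.44 can be tested against the original equation. The Python sketch below (illustrative) recalls that \(\boxminus 3 = \frac{-3}{1 - 3} = \frac{3}{2}\) and checks, for one choice of the constant, that \(y(t) =\alpha (-2)^{t} - 2t -\frac{4}{3}\) satisfies \(\nabla y = (\boxminus 3)y + 3t\) exactly.

```python
from fractions import Fraction

boxminus3 = Fraction(-3, 1 - 3)          # ⊟3 = -3/(1 - 3) = 3/2
assert boxminus3 == Fraction(3, 2)

alpha = Fraction(7)                      # any constant works for the check
y = lambda t: alpha * Fraction(-2) ** t - 2 * t - Fraction(4, 3)

for t in range(1, 10):
    # nabla y(t) = (⊟3) y(t) + 3t on N_1
    assert y(t) - y(t - 1) == boxminus3 * y(t) + 3 * t
```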

Example 3.45.

Assuming r ≠ 1, use the method of factoring to solve the nabla difference equation

$$\displaystyle{ \nabla ^{2}y(t) - 2r\nabla y(t) + r^{2}y(t) = 0,\quad t \in \mathbb{N}_{ a}. }$$
(3.27)

A factored form of (3.27) is

$$\displaystyle{ (\nabla - rI)(\nabla - rI)y(t) = 0,\quad t \in \mathbb{N}_{a}. }$$
(3.28)

It follows from (3.28) that any solution of \((\nabla - rI)y(t) = 0\) is a solution of (3.27). Hence y 1(t) = E r (t, a) is a solution of (3.27). It also follows from the factored equation (3.28) that the solution y(t) of the IVP

$$\displaystyle{(\nabla - rI)y(t) = E_{r}(t,a),\quad y(a) = 0}$$

is a solution of (3.27). Hence, by the variation of constants formula in Theorem 3.42,

$$\displaystyle\begin{array}{rcl} y(t)& =& \int _{a}^{t}E_{ r}(t,\rho (s))E_{r}(s,a)\nabla s = E_{r}(t,a)\int _{a}^{t}E_{ r}(a,\rho (s))E_{r}(s,a)\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t}E_{ \boxminus r}(\rho (s),a)E_{r}(s,a)\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t}[1 -\boxminus r]E_{ \boxminus r}(s,a)E_{r}(s,a)\nabla s {}\\ & =& E_{r}(t,a)\int _{a}^{t}[1 -\boxminus r]\nabla s = E_{ r}(t,a)\int _{a}^{t} \frac{1} {1 - r}\nabla s {}\\ & =& \frac{1} {1 - r}(t - a)E_{r}(t,a) {}\\ \end{array}$$

is a solution of (3.27). But this implies that \(y_{2}(t) = (t - a)E_{r}(t,a)\) is a solution of (3.27). Since y 1(t) and y 2(t) are linearly independent on \(\mathbb{N}_{a},\)

$$\displaystyle{y(t) = c_{1}E_{r}(t,a) + c_{2}(t - a)E_{r}(t,a)}$$

is a general solution of (3.27) on \(\mathbb{N}_{a}\).

3.8 Nabla Taylor’s Theorem

In this section we want to prove the nabla version of Taylor’s Theorem. To do this we first study the nabla Taylor monomials and give some of their important properties. These nabla Taylor monomials will appear in the nabla Taylor’s Theorem. We then will find nabla Taylor series expansions for the nabla exponential, hyperbolic, and trigonometric functions. Finally, as a special case of our Taylor’s theorem we will obtain a variation of constants formula for ∇n y(t) = h(t).

Definition 3.46.

We define the nabla Taylor monomials , H n (t, a), \(n \in \mathbb{N}_{0}\), by H 0(t, a) = 1, for \(t \in \mathbb{N}_{a}\), and

$$\displaystyle{ H_{n}(t,a) = \frac{\left (t - a\right )^{\overline{n}}} {n!},\quad t \in \mathbb{N}_{a-n+1},\quad n \in \mathbb{N}_{1}. }$$

Theorem 3.47.

The nabla Taylor monomials satisfy the following:

  1. (i)

    \(H_{n}(t,a) = 0,\quad a - n + 1 \leq t \leq a,\quad n \in \mathbb{N}_{1};\)

  2. (ii)

    \(\nabla H_{n+1}(t,a) = H_{n}(t,a),\quad t \in \mathbb{N}_{a-n+1},\quad n \in \mathbb{N}_{0};\)

  3. (iii)

    \(\int _{a}^{t}H_{n}(\tau,a)\nabla \tau = H_{n+1}(t,a),\quad t \in \mathbb{N}_{a},\quad n \in \mathbb{N}_{0};\)

  4. (iv)

    \(\int _{a}^{t}H_{n}(t,\rho (s))\nabla s = H_{n+1}(t,a),\quad t \in \mathbb{N}_{a},\quad n \in \mathbb{N}_{0}\) .

Proof.

Part (i) of this theorem follows from the definition (Definition 3.46) of the nabla Taylor monomials. By the first power rule (3.3), it follows that

$$\displaystyle\begin{array}{rcl} \nabla H_{n+1}(t,a)& =& \nabla \frac{(t - a)^{\overline{n + 1}}} {(n + 1)!} {}\\ & =& \frac{(t - a)^{\overline{n}}} {n!} {}\\ & =& H_{n}(t,a), {}\\ \end{array}$$

and so, we have that part (ii) of this theorem holds. Part (iii) follows from parts (ii) and (i). Finally, to see that (iv) holds we use the integration formula in part (iii) in Theorem 3.36 to get

$$\displaystyle\begin{array}{rcl} \int _{a}^{t}H_{ n}(t,\rho (s))\nabla s& =& \frac{1} {n!}\int _{a}^{t}(t -\rho (s))^{\overline{n}}\nabla s {}\\ & =& \frac{-1} {(n + 1)!}(t - s)^{\overline{n + 1}}\Big\vert _{s=a}^{s=t} {}\\ & =& \frac{(t - a)^{\overline{n + 1}}} {(n + 1)!} {}\\ & =& H_{n+1}(t,a). {}\\ \end{array}$$

This completes the proof. □ 
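Properties (i) and (ii) of Theorem 3.47 are straightforward to test numerically. The Python sketch below (illustrative) builds \(H_{n}(t,a) = (t - a)^{\overline{n}}/n!\) from the rising factorial \(x^{\overline{n}} = x(x + 1)\cdots (x + n - 1)\) and checks both properties exactly.

```python
from fractions import Fraction
from math import factorial

def rising(x, n):
    # rising factorial x^(overline n) = x (x + 1) ... (x + n - 1)
    out = 1
    for k in range(n):
        out *= x + k
    return out

def H(n, t, a):
    # nabla Taylor monomial H_n(t, a) = (t - a)^(overline n) / n!
    return Fraction(rising(t - a, n), factorial(n))

a = 0
for n in range(0, 6):
    if n >= 1:
        # (i): H_n(t, a) = 0 for a - n + 1 <= t <= a
        assert all(H(n, t, a) == 0 for t in range(a - n + 1, a + 1))
    # (ii): nabla H_{n+1}(t, a) = H_n(t, a)
    for t in range(a - n + 1, a + 8):
        assert H(n + 1, t, a) - H(n + 1, t - 1, a) == H(n, t, a)
```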

Now we state and prove the nabla Taylor’s Theorem.

Theorem 3.48 (Nabla Taylor’s Formula).

Assume \(f: \mathbb{N}_{a-n} \rightarrow \mathbb{R},\) where \(n \in \mathbb{N}_{0}\) . Then

$$\displaystyle{f(t) = p_{n}(t) + R_{n}(t),\quad t \in \mathbb{N}_{a-n},}$$

where the n-th degree nabla Taylor polynomial, p n (t), is given by

$$\displaystyle{p_{n}(t):=\sum _{ k=0}^{n}\nabla ^{k}f(a)\frac{(t - a)^{\overline{k}}} {k!} =\sum _{ k=0}^{n}\nabla ^{k}f(a)H_{ k}(t,a)}$$

and the Taylor remainder, R n (t), is given by

$$\displaystyle{R_{n}(t) =\int _{ a}^{t}\frac{(t -\rho (s))^{\overline{n}}} {n!} \nabla ^{n+1}f(s)\nabla s =\int _{ a}^{t}H_{ n}(t,\rho (s))\nabla ^{n+1}f(s)\nabla s,}$$

for \(t \in \mathbb{N}_{a-n}\) . (By convention we assume R n (t) = 0 for a − n ≤ t < a.)

Proof.

We will use the second integration by parts formula in Theorem 3.39, namely (3.20), to evaluate the integral in the definition of R n (t). To do this we set

$$\displaystyle{u(\rho (s)) = H_{n}(t,\rho (s)),\quad \nabla v(s) = \nabla ^{n+1}f(s).}$$

Then it follows that

$$\displaystyle{u(s) = H_{n}(t,s),\quad v(s) = \nabla ^{n}f(s).}$$

Using part (iv) of Theorem 3.47, we get

$$\displaystyle{\nabla u(s) = -H_{n-1}(t,\rho (s)).}$$

Hence we get from the second integration by parts formula (3.20) that

$$\displaystyle\begin{array}{rcl} R_{n}(t)& =& \int _{a}^{t}H_{ n}(t,\rho (s))\nabla ^{n+1}f(s)\nabla s {}\\ & =& H_{n}(t,s)\nabla ^{n}f(s)\Big\vert _{ s=a}^{s=t} +\int _{ a}^{t}H_{ n-1}(t,\rho (s))\nabla ^{n}f(s)\nabla s {}\\ & =& -\nabla ^{n}f(a)H_{ n}(t,a) +\int _{ a}^{t}H_{ n-1}(t,\rho (s))\nabla ^{n}f(s)\nabla s. {}\\ \end{array}$$

Again, using the second integration by parts formula (3.20), we have that

$$\displaystyle\begin{array}{rcl} R_{n}(t) =& -& \nabla ^{n}f(a)H_{ n}(t,a) + H_{n-1}(t,s)\nabla ^{n-1}f(s)\Big\vert _{ s=a}^{s=t} {}\\ & +& \int _{a}^{t}H_{ n-2}(t,\rho (s))\nabla ^{n-1}f(s)\nabla s {}\\ =& -& \nabla ^{n}f(a)H_{ n}(t,a) -\nabla ^{n-1}f(a)H_{ n-1}(t,a) {}\\ & +& \int _{a}^{t}H_{ n-2}(t,\rho (s))\nabla ^{n-1}f(s)\nabla s. {}\\ & & {}\\ \end{array}$$

By induction on n we obtain

$$\displaystyle\begin{array}{rcl} R_{n}(t)& =& -\sum _{k=1}^{n}\nabla ^{k}f(a)H_{ k}(t,a) +\int _{ a}^{t}H_{ 0}(t,\rho (s))\nabla f(s)\nabla s {}\\ & =& -\sum _{k=1}^{n}\nabla ^{k}f(a)H_{ k}(t,a) + f(t) - f(a)H_{0}(t,a) {}\\ & =& -\sum _{k=0}^{n}\nabla ^{k}f(a)H_{ k}(t,a) + f(t) {}\\ & =& -p_{n}(t) + f(t). {}\\ \end{array}$$

Solving for f(t) we get the desired result. □ 
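Taylor's formula can be tested on a concrete function. The Python sketch below (illustrative; the sample function and the choices a = 0, n = 3 are ours) computes \(p_{n}(t) + R_{n}(t)\) from the definitions, with \(H_{n}(t,\rho (s)) = (t - s + 1)^{\overline{n}}/n!\), and checks it reproduces \(f(t) = 2^{t}\) exactly.

```python
from fractions import Fraction
from math import factorial

def rising(x, n):
    # rising factorial x^(overline n)
    out = 1
    for k in range(n):
        out *= x + k
    return out

def nab(f, k, t):
    # k-th backward difference of f at t
    if k == 0:
        return f(t)
    return nab(f, k - 1, t) - nab(f, k - 1, t - 1)

f = lambda t: Fraction(2) ** t          # sample function, defined on all integers
a, n = 0, 3

def taylor(t):
    # p_n(t): sum of nabla^k f(a) H_k(t, a) for k = 0, ..., n
    p = sum(nab(f, k, a) * Fraction(rising(t - a, k), factorial(k))
            for k in range(n + 1))
    # R_n(t): integral of H_n(t, rho(s)) nabla^{n+1} f(s) from a to t
    R = sum(Fraction(rising(t - s + 1, n), factorial(n)) * nab(f, n + 1, s)
            for s in range(a + 1, t + 1))
    return p + R

for t in range(a + 1, a + 8):
    assert taylor(t) == f(t)
```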

We next define the formal nabla power series of a function at a point.

Definition 3.49.

Let \(a \in \mathbb{R}\) and let

$$\displaystyle{\mathbb{Z}_{a}:=\{\ldots,a - 2,a - 1,a,a + 1,a + 2,\ldots \}.}$$

If \(f: \mathbb{Z}_{a} \rightarrow \mathbb{R}\), then we call

$$\displaystyle{\sum _{k=0}^{\infty }\nabla ^{k}f(a)\frac{(t - a)^{\overline{k}}} {k!} =\sum _{ k=0}^{\infty }\nabla ^{k}f(a)H_{ k}(t,a)}$$

the (formal) nabla Taylor series of f at t = a.

The following theorem gives some convergence results for nabla Taylor series for various functions.

Theorem 3.50.

Assume |p| < 1 is a constant. Then the following hold:

  1. (i)

    \(E_{p}(t,a) =\sum _{ n=0}^{\infty }p^{n}H_{n}(t,a);\)

  2. (ii)

    \(\mbox{ Sin}_{p}(t,a) =\sum _{ n=0}^{\infty }(-1)^{n}p^{2n+1}H_{2n+1}(t,a);\)

  3. (iii)

    \(\mbox{ Cos}_{p}(t,a) =\sum _{ n=0}^{\infty }(-1)^{n}p^{2n}H_{2n}(t,a);\)

  4. (iv)

    \(\mbox{ Cosh}_{p}(t,a) =\sum _{ n=0}^{\infty }p^{2n}H_{2n}(t,a);\)

  5. (v)

    \(\mbox{ Sinh}_{p}(t,a) =\sum _{ n=0}^{\infty }p^{2n+1}H_{2n+1}(t,a),\)

for \(t \in \mathbb{N}_{a}\) .

Proof.

First we prove part (i). Since \(\nabla ^{n}E_{p}(t,a) = p^{n}E_{p}(t,a)\) for \(n \in \mathbb{N}_{0}\), we have that the Taylor series for E p (t, a) is given by

$$\displaystyle{ \sum _{n=0}^{\infty }\nabla ^{n}E_{ p}(a,a)H_{n}(t,a) =\sum _{ n=0}^{\infty }p^{n}H_{ n}(t,a). }$$

To show that the above Taylor series converges to E p (t, a) when | p |  < 1 is a constant, for each \(t \in \mathbb{N}_{a}\), it suffices to show that the remainder term, R n (t), in Taylor’s Formula satisfies

$$\displaystyle{\lim _{n\rightarrow \infty }R_{n}(t) = 0}$$

when \(\vert p\vert < 1\), for each fixed \(t \in \mathbb{N}_{a}\).

So fix \(t \in \mathbb{N}_{a}\) and consider

$$\displaystyle\begin{array}{rcl} \vert R_{n}(t)\vert & =& \left \vert \int _{a}^{t}H_{ n}(t,\rho (s))\nabla ^{n+1}E_{ p}(s,a)\nabla s\right \vert {}\\ & =& \left \vert \int _{a}^{t}H_{ n}(t,\rho (s))p^{n+1}E_{ p}(s,a)\nabla s\right \vert. {}\\ \end{array}$$

Since t is fixed, there is a constant C such that

$$\displaystyle{\vert E_{p}(s,a)\vert \leq C,\quad a \leq s \leq t.}$$

Hence,

$$\displaystyle\begin{array}{rcl} \vert R_{n}(t)\vert & \leq & C\int _{a}^{t}H_{ n}(t,\rho (s))\vert p\vert ^{n+1}\nabla s {}\\ & =& C\vert p\vert ^{n+1}\int _{ a}^{t}H_{ n}(t,\rho (s))\nabla s {}\\ & =& C\vert p\vert ^{n+1}H_{ n+1}(t,a)\quad \mbox{ by Theorem 3.47, (iv)} {}\\ & =& C\vert p\vert ^{n+1}\frac{(t - a)^{\overline{n + 1}}} {(n + 1)!}. {}\\ \end{array}$$

By the ratio test, if | p |  < 1, the series

$$\displaystyle{\sum _{n=0}^{\infty }\frac{\vert p\vert ^{n+1}(t - a)^{\overline{n + 1}}} {(n + 1)!} }$$

converges. It follows that if | p |  < 1, then by the n-th term test

$$\displaystyle{\lim _{n\rightarrow \infty }\frac{\vert p\vert ^{n+1}(t - a)^{\overline{n + 1}}} {(n + 1)!} = 0.}$$

This implies that if | p |  < 1, then for each fixed \(t \in \mathbb{N}_{a}\)

$$\displaystyle{\lim _{n\rightarrow \infty }R_{n}(t) = 0,}$$

and hence if | p |  < 1,

$$\displaystyle{E_{p}(t,a) =\sum _{ n=0}^{\infty }p^{n}H_{ n}(t,a)}$$

for all \(t \in \mathbb{N}_{a}\). Since the functions \(\mbox{Sin}_{p}(t,a)\), \(\mbox{Cos}_{p}(t,a)\), \(\mbox{Sinh}_{p}(t,a)\), and \(\mbox{Cosh}_{p}(t,a)\) are defined in terms of \(E_{p}(t,a)\), parts (ii)–(v) follow easily from part (i). □ 

We now see that the integer order variation of constants formula follows from Taylor’s formula.

Theorem 3.51 (Integer Order Variation of Constants Formula).

Assume \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(n \in \mathbb{N}_{1}\) . Then the solution of the IVP

$$\displaystyle\begin{array}{rcl} \nabla ^{n}y(t)& =& h(t),\quad t \in \mathbb{N}_{ a+1} \\ \nabla ^{k}y(a)& =& C_{ k},\quad 0 \leq k \leq n - 1,{}\end{array}$$
(3.29)

where C k , 0 ≤ k ≤ n − 1, are given constants, is given by the variation of constants formula

$$\displaystyle{y(t) =\sum _{ k=0}^{n-1}C_{ k}H_{k}(t,a) +\int _{ a}^{t}H_{ n-1}(t,\rho (s))h(s)\nabla s,\quad t \in \mathbb{N}_{a-n+1}.}$$

Proof.

It is easy to see that the given IVP has a unique solution y that is defined on \(\mathbb{N}_{a-n+1}\). By Taylor’s formula (see Theorem 3.48) with n replaced by n − 1 we get that

$$\displaystyle\begin{array}{rcl} y(t)& =& \sum _{k=0}^{n-1}\nabla ^{k}y(a)H_{ k}(t,a) +\int _{ a}^{t}H_{ n-1}(t,\rho (s))\nabla ^{n}y(s)\nabla s {}\\ & =& \sum _{k=0}^{n-1}C_{ k}H_{k}(t,a) +\int _{ a}^{t}H_{ n-1}(t,\rho (s))h(s)\nabla s, {}\\ \end{array}$$

\(t \in \mathbb{N}_{a-n+1}\). □ 

We immediately get the following special case of Theorem 3.51. This special case, which we label Corollary 3.52, is also called a variation of constants formula.

Corollary 3.52 (Integer Order Variation of Constants Formula).

Assume the function \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(n \in \mathbb{N}_{1}\) . Then the solution of the IVP

$$\displaystyle\begin{array}{rcl} \nabla ^{n}y(t)& =& h(t),\quad t \in \mathbb{N}_{ a+1} \\ \nabla ^{k}y(a)& =& 0,\quad 0 \leq k \leq n - 1{}\end{array}$$
(3.30)

is given by the variation of constants formula

$$\displaystyle{y(t) =\int _{ a}^{t}H_{ n-1}(t,\rho (s))h(s)\nabla s,\quad t \in \mathbb{N}_{a-n+1}.}$$

Example 3.53.

Use the variation of constants formula to solve the IVP

$$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t)& =& (-2)^{a-t},\quad t \in \mathbb{N}_{ a+1} {}\\ y(a)& =& 2,\quad \nabla y(a) = 1. {}\\ \end{array}$$

By the variation of constants formula in Theorem 3.51 the solution of this IVP is given by

$$\displaystyle\begin{array}{rcl} y(t)& =& C_{0}H_{0}(t,a) + C_{1}H_{1}(t,a) +\int _{ a}^{t}H_{ 1}(t,\rho (s))(-2)^{a-s}\nabla s {}\\ & =& 2H_{0}(t,a) + H_{1}(t,a) +\int _{ a}^{t}H_{ 1}(t,\rho (s))E_{3}(s,a)\nabla s {}\\ & =& 2H_{0}(t,a) + H_{1}(t,a) + \frac{1} {3}H_{1}(t,s)E_{3}(s,a)\Big\vert _{s=a}^{t} + \frac{1} {3}\int _{a}^{t}E_{ 3}(s,a)\nabla s {}\\ & =& 2H_{0}(t,a) + H_{1}(t,a) -\frac{1} {3}H_{1}(t,a) + \frac{1} {9}E_{3}(s,a)\Big\vert _{a}^{t} {}\\ & =& 2H_{0}(t,a) + H_{1}(t,a) -\frac{1} {3}H_{1}(t,a) + \frac{1} {9}E_{3}(t,a) -\frac{1} {9} {}\\ & =& 2 + \frac{2} {3}H_{1}(t,a) + \frac{1} {9}(-2)^{a-t} -\frac{1} {9} {}\\ & =& \frac{17} {9} + \frac{2} {3}(t - a) + \frac{1} {9}(-2)^{a-t}, {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a-1}\).
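Since \(\nabla^{2}y(t) = y(t) - 2y(t-1) + y(t-2)\), the closed-form solution above can be checked directly. A short Python sketch (the names are ours; we take a = 0 for concreteness):

```python
a = 0  # base point; the computation is the same for any a

def y(t):
    return 17/9 + (2/3) * (t - a) + (1/9) * (-2.0) ** (a - t)

# nabla^2 y(t) = y(t) - 2 y(t-1) + y(t-2) should equal (-2)^{a-t} on N_{a+1}
for t in range(a + 1, a + 10):
    assert abs(y(t) - 2 * y(t - 1) + y(t - 2) - (-2.0) ** (a - t)) < 1e-9

# initial conditions: y(a) = 2 and (nabla y)(a) = y(a) - y(a-1) = 1
assert abs(y(a) - 2) < 1e-12
assert abs(y(a) - y(a - 1) - 1) < 1e-12
```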

3.9 Fractional Sums and Differences

With the relevant preliminaries established, we are now ready to develop what we mean by fractional nabla differences and fractional nabla sums. We first give the motivation for how we define integral order nabla sums.

In the previous section (see Corollary 3.52) we saw that

$$\displaystyle{y(t) =\int _{ a}^{t}H_{ n-1}(t,\rho (s))f(s)\nabla s}$$

is the unique solution of the nabla difference equation \(\nabla ^{n}y(t) = f(t)\), \(t \in \mathbb{N}_{a+1}\), satisfying the initial conditions \(\nabla ^{i}y(a) = 0\), \(0 \leq i \leq n - 1\). Integrating both sides of \(\nabla ^{n}y(t) = f(t)\) \(n\) times and using these initial conditions, we get by uniqueness

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{t}\int _{ a}^{\tau _{1} }\cdots \int _{a}^{\tau _{n-1} }f(\tau _{n})\nabla \tau _{n}\cdots \nabla \tau _{2}\nabla \tau _{1} \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \quad =\int _{ a}^{t}H_{ n-1}(t,\rho (s))f(s)\nabla s.{}\end{array}$$
(3.31)

The formula (3.31) can also be easily proved by repeated integration by parts. Motivated by this we define the nabla integral order sum as in the following definition.

Definition 3.54 (Integral Order Sum).

Let \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) be given and \(n \in \mathbb{N}_{1}\). Then

$$\displaystyle{ \nabla _{a}^{-n}f(t):=\int _{ a}^{t}H_{ n-1}(t,\rho (s))f(s)\nabla s,\quad t \in \mathbb{N}_{a}. }$$

Also, we define \(\nabla ^{-0}f(t):= f(t)\).

Note that the function \(\nabla _{a}^{-n}f\) depends on the values of f at all the points \(a + 1 \leq s \leq t\), unlike the positive integer nabla difference \(\nabla ^{n}f(t)\), which just depends on the values of f at the n + 1 points \(t - n \leq s \leq t\). Another interesting observation is that we could think of \(\nabla _{a}^{-n}f(t)\) as defined on \(\mathbb{N}_{a-n+1}\), in which case \(\nabla _{a}^{-n}f(t) = 0\) for \(a - n + 1 \leq t \leq a\) by our convention that the nabla integral from a point to a smaller point is zero (see Definition 3.31). The following example appears in Hein et al. [119].

Example 3.55.

Use the definition (Definition 3.54) of the fractional sum to find \(\nabla _{a}^{-2}E_{p}(t,a),\) where p ≠ 0, 1 is a constant. By definition we obtain, using the second integration by parts formula (3.20),

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{-2}E_{ p}(t,a)& =& \int _{a}^{t}H_{ 1}(t,\rho (s))E_{p}(s,a)\nabla s {}\\ & =& \frac{1} {p}E_{p}(s,a)H_{1}(t,s)\big\vert _{s=a}^{t} + \frac{1} {p}\int _{a}^{t}E_{ p}(s,a)\nabla s {}\\ & =& -\frac{1} {p}H_{1}(t,a) + \frac{1} {p^{2}}E_{p}(s,a)\big\vert _{s=a}^{t} {}\\ & =& -\frac{1} {p}H_{1}(t,a) + \frac{1} {p^{2}}E_{p}(t,a) - \frac{1} {p^{2}} {}\\ & =& -\frac{1} {p}(t - a) + \frac{1} {p^{2}}(1 - p)^{a-t} - \frac{1} {p^{2}}. {}\\ \end{array}$$
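This computation can be verified numerically: with \(E_{p}(t,a) = (1-p)^{a-t}\), \(H_{1}(t,s) = t - s\), and the nabla integral \(\int_a^t g(s)\nabla s = \sum_{s=a+1}^{t} g(s)\), a Python sketch (helper names and the value of p are ours) gives:

```python
a, p = 0, 0.3

def E(p, t, a):
    return (1.0 - p) ** (a - t)

def nabla_sum_2(t):
    # nabla_a^{-2} E_p(t,a) = sum_{s=a+1}^{t} H_1(t, rho(s)) E_p(s,a), rho(s) = s - 1
    return sum((t - (s - 1)) * E(p, s, a) for s in range(a + 1, t + 1))

for t in range(a, a + 8):
    closed = -(t - a) / p + E(p, t, a) / p**2 - 1 / p**2
    assert abs(nabla_sum_2(t) - closed) < 1e-9
```

Note that at t = a both sides are zero, as the convention on empty nabla integrals requires.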

Note that if \(n \in \mathbb{N}_{1}\), then

$$\displaystyle{H_{n}(t,a) = \frac{(t - a)^{\overline{n}}} {n!} = \frac{(t - a)^{\overline{n}}} {\Gamma (n + 1)}.}$$

Motivated by this we define the fractional μ-th order nabla Taylor monomial as follows.

Definition 3.56.

Let \(\mu \neq - 1,-2,-3,\cdots \). Then we define the μ-th order nabla fractional Taylor monomial, H μ (t, a), by

$$\displaystyle{H_{\mu }(t,a) = \frac{(t - a)^{\overline{\mu }}} {\Gamma (\mu +1)},}$$

whenever the right-hand side of this equation is sensible.

In the next theorem we collect some of the properties of fractional nabla Taylor monomials.

Theorem 3.57.

The following hold:

  1. (i)

    H μ (a,a) = 0;

  2. (ii)

    \(\nabla H_{\mu }(t,a) = H_{\mu -1}(t,a);\)

  3. (iii)

    \(\int _{a}^{t}H_{\mu }(s,a)\nabla s = H_{\mu +1}(t,a);\)

  4. (iv)

    \(\int _{a}^{t}H_{\mu }(t,\rho (s))\nabla s = H_{\mu +1}(t,a);\)

  5. (v)

    for \(k \in \mathbb{N}_{1}\) , \(H_{-k}(t,a) = 0\) , \(t \in \mathbb{N}_{a},\)

provided the expressions in this theorem are well defined.

Proof.

Part (i) follows immediately from the definition of \(H_{\mu }(t,a)\). The proofs of parts (ii)–(iv) of this theorem are the same as the proof of Theorem 3.47, where we use the fractional power rules instead of the integer power rules. Finally, part (v) follows since

$$\displaystyle{H_{-k}(t,a) = \frac{(t - a)^{\overline{ - k}}} {\Gamma (-k + 1)} = 0}$$

by our earlier convention when the denominator is undefined but the numerator is defined. □ 

Now we can define the fractional nabla sum in terms of the nabla fractional Taylor monomial as follows.

Definition 3.58 (Nabla Fractional Sum).

Let \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) be given and assume μ > 0. Then

$$\displaystyle{\nabla _{a}^{-\mu }f(t):=\int _{ a}^{t}H_{\mu -1}(t,\rho (s))f(s)\nabla s,}$$

for \(t \in \mathbb{N}_{a}\), where by convention \(\nabla _{a}^{-\mu }f(a) = 0\).

The following example appears in Hein et al. [119].

Example 3.59.

Use the definition (Definition 3.58) of the fractional sum to find \(\nabla _{a}^{-\mu }1\). By definition

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{-\mu }1& =& \int _{ a}^{t}H_{\mu -1}(t,\rho (s)) \cdot 1\nabla s {}\\ & =& \int _{a}^{t}H_{\mu -1}(t,\rho (s))\nabla s {}\\ & =& H_{\mu }(t,a),\quad t \in \mathbb{N}_{a} {}\\ \end{array}$$

by part (iv) of Theorem 3.57.
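Definition 3.58 and this example are easy to check numerically. Below is a Python sketch (names ours); we implement \(H_{\mu}\) via \((t-a)^{\overline{\mu }} = \Gamma (t-a+\mu )/\Gamma (t-a)\) for \(t > a\), with the convention \(H_{\mu }(a,a) = 0\) for \(\mu \neq 0\) from Theorem 3.57 (i):

```python
import math

def H(mu, t, a):
    """Fractional nabla Taylor monomial H_mu(t,a) for t in N_a."""
    m = t - a
    if m == 0:
        return 1.0 if mu == 0 else 0.0
    return math.gamma(m + mu) / (math.gamma(m) * math.gamma(mu + 1))

def frac_sum(mu, f, a, t):
    """Nabla fractional sum: sum_{s=a+1}^{t} H_{mu-1}(t, rho(s)) f(s)."""
    return sum(H(mu - 1, t, s - 1) * f(s) for s in range(a + 1, t + 1))

a, mu = 0, 0.5
for t in range(a, a + 10):
    assert abs(frac_sum(mu, lambda s: 1.0, a, t) - H(mu, t, a)) < 1e-9
```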

For those readers who have read Chap. 2, we next give a relationship between a certain delta fractional sum and a certain nabla fractional sum. This formula is sometimes useful for obtaining results for the nabla fractional calculus from the delta fractional calculus. Since we want this chapter to be self-contained, we will not use this formula in this chapter.

Theorem 3.60.

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and ν > 0. Then

$$\displaystyle{\Delta _{a}^{-\nu }f(t+\nu ) = \nabla _{ a}^{-\nu }f(t) + H_{\nu -1}(t,\rho (a))f(a),}$$

for \(t \in \mathbb{N}_{a}\) . In particular, if f(a) = 0, then

$$\displaystyle{\Delta _{a}^{-\nu }f(t+\nu ) = \nabla _{ a}^{-\nu }f(t),}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

Note that \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) implies \(\Delta _{a}^{-\nu }f(t+\nu )\) is defined for \(t \in \mathbb{N}_{a}\). Using the definition of the delta fractional sum (see Chap. 2) together with Definition 3.58, we find that

$$\displaystyle\begin{array}{rcl} \Delta _{a}^{-\nu }f(t+\nu )& =& \sum _{\tau =a}^{t}h_{\nu -1}(t+\nu,\sigma (\tau ))f(\tau ) {}\\ & =& \sum _{\tau =a}^{t}\frac{(t +\nu -\sigma (\tau ))^{\underline{\nu -1}}} {\Gamma (\nu )} f(\tau ) {}\\ & =& \sum _{\tau =a}^{t} \frac{\Gamma (t +\nu -\tau )} {\Gamma (\nu )\Gamma (t -\tau +1)}f(\tau ) {}\\ & =& \sum _{\tau =a}^{t}\frac{(t -\tau +1)^{\overline{\nu - 1}}} {\Gamma (\nu )} f(\tau ) {}\\ & =& \sum _{\tau =a+1}^{t}H_{\nu -1}(t,\rho (\tau ))f(\tau ) + H_{\nu -1}(t,\rho (a))f(a) {}\\ & =& \nabla _{a}^{-\nu }f(t) + H_{\nu -1}(t,\rho (a))f(a) {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a}\). □ 
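A numerical sketch of Theorem 3.60 in Python (the helper names and the gamma-function form of \(H_{\mu}\) are ours; the shifted delta sum is implemented directly from the summation formula in the proof):

```python
import math

def H(mu, t, a):
    """Nabla Taylor monomial in gamma-function form."""
    m = t - a
    if m == 0:
        return 1.0 if mu == 0 else 0.0
    return math.gamma(m + mu) / (math.gamma(m) * math.gamma(mu + 1))

def nabla_frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, s - 1) * f(s) for s in range(a + 1, t + 1))

def delta_frac_sum_shifted(nu, f, a, t):
    # Delta_a^{-nu} f(t+nu) = sum_{tau=a}^{t} Gamma(t+nu-tau)/(Gamma(nu) Gamma(t-tau+1)) f(tau)
    return sum(math.gamma(t + nu - tau) / (math.gamma(nu) * math.gamma(t - tau + 1)) * f(tau)
               for tau in range(a, t + 1))

a, nu = 0, 0.7
f = lambda s: s * s + 1.0
for t in range(a, a + 8):
    lhs = delta_frac_sum_shifted(nu, f, a, t)
    rhs = nabla_frac_sum(nu, f, a, t) + H(nu - 1, t, a - 1) * f(a)
    assert abs(lhs - rhs) < 1e-9
```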

We next define the nabla fractional difference (nabla Riemann–Liouville fractional difference) in terms of a nabla fractional sum. The Caputo fractional sum (Definition 3.117) will be considered in Sect. 3.18.

Definition 3.61 (Nabla Fractional Difference).

Let \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\), \(\nu \in \mathbb{R}^{+}\) and choose N such that N − 1 < ν ≤ N. Then we define the ν-th order nabla fractional difference, \(\nabla _{a}^{\nu }f(t)\), by

$$\displaystyle{\nabla _{a}^{\nu }f(t):= \nabla ^{N}\nabla _{ a}^{-(N-\nu )}f(t)\quad \mbox{ for}\quad t \in \mathbb{N}_{ a+N}.}$$

We now have a definition for both fractional sums and fractional differences; however, they can still be unified into a similar form. We show here that the traditional definition of a fractional difference can be rewritten in a form similar to the definition of a fractional sum. The following result appears in Ahrendt et al. [3].

Theorem 3.62.

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R},\) ν > 0, \(\nu \not\in \mathbb{N}_{1}\) , and choose \(N \in \mathbb{N}_{1}\) such that N − 1 < ν < N. Then

$$\displaystyle{ \nabla _{a}^{\nu }f(t) =\int _{ a}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau, }$$
(3.32)

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

Note that

$$\displaystyle\begin{array}{rcl} & & \nabla _{a}^{\nu }f(t) = \nabla ^{N}\nabla _{ a}^{-(N-\nu )}f(t) {}\\ & & = \nabla ^{N}\left (\int _{ a}^{t}H_{ N-\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \right ) {}\\ & & = \nabla ^{N-1}\nabla \left (\int _{ a}^{t}H_{ N-\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \right ) {}\\ & & = \nabla ^{N-1}\left (\int _{ a}^{t}H_{ N-\nu -2}(t,\rho (\tau ))f(\tau )\nabla \tau + H_{N-\nu -1}(\rho (t),\rho (t))f(t)\right ) {}\\ & & = \nabla ^{N-1}\int _{ a}^{t}H_{ N-\nu -2}(t,\rho (\tau ))f(\tau )\nabla \tau. {}\\ \end{array}$$

By applying Leibniz’s Rule N − 1 more times, we deduce that

$$\displaystyle{ \nabla _{a}^{\nu }f(t) =\int _{ a}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau, }$$

which is the desired result. □ 
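Theorem 3.62 can be checked numerically for 0 < ν < 1 (so N = 1), where \(\nabla _{a}^{\nu }f(t) = \nabla \nabla _{a}^{-(1-\nu )}f(t)\). A Python sketch (names, test function, and the value of ν are ours):

```python
import math

def H(mu, t, a):
    m = t - a
    if m == 0:
        return 1.0 if mu == 0 else 0.0
    return math.gamma(m + mu) / (math.gamma(m) * math.gamma(mu + 1))

def frac_sum(mu, f, a, t):
    return sum(H(mu - 1, t, s - 1) * f(s) for s in range(a + 1, t + 1))

a, nu = 0, 0.6                            # 0 < nu < 1, so N = 1
f = lambda s: math.sin(s)
g = lambda t: frac_sum(1 - nu, f, a, t)   # g = nabla_a^{-(N-nu)} f

for t in range(a + 1, a + 9):
    lhs = g(t) - g(t - 1)                 # nabla of the fractional sum (N = 1)
    rhs = sum(H(-nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))
    assert abs(lhs - rhs) < 1e-9
```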

In the following theorem we show that the nabla fractional difference, for each fixed \(t \in \mathbb{N}_{a}\), is a continuous function of ν for ν > 0. This result appears in Ahrendt et al. [3].

Theorem 3.63 (Continuity of the Nabla Fractional Difference).

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) . Then the fractional difference \(\nabla _{a}^{\nu }f\) is continuous with respect to ν for ν > 0.

Proof.

It is sufficient for this proof to show that for \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\), N − 1 < ν ≤ N, and \(m \in \mathbb{N}_{0}\), the following hold:

$$\displaystyle{ \nabla _{a}^{\nu }f(a + N + m)\mbox{ is continuous with respect to $\nu $ on $(N - 1,N)$,} }$$
(3.33)
$$\displaystyle{ \nabla _{a}^{\nu }f(a + N + m) \rightarrow \nabla ^{N}f(a + N + m)\text{ as }\nu \rightarrow N^{-}, }$$
(3.34)

and

$$\displaystyle{ \nabla _{a}^{\nu }f(a + N + m) \rightarrow \nabla ^{N-1}f(a + N + m)\text{ as }\nu \rightarrow (N - 1)^{+}. }$$
(3.35)

Let ν be fixed such that N − 1 < ν < N. We now show that (3.33) holds. To see this note that we have the following:

$$\displaystyle\begin{array}{rcl} & & \nabla _{a}^{\nu }f(a + N + m) =\int _{ a}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \bigg\vert _{t=a+N+m} {}\\ & =& \frac{1} {\Gamma (-\nu )}\sum _{\tau =a+1}^{a+N+m}(a + N + m -\rho (\tau ))^{\overline{ -\nu -1}}f(\tau ) {}\\ & =& \sum _{\tau =a+1}^{a+N+m}\frac{\Gamma (a + N + m -\tau -\nu )} {\Gamma (a + N + m -\tau +1)\Gamma (-\nu )} f(\tau ) {}\\ & =& \sum _{\tau =a+1}^{a+N+m}\frac{(a + N + m -\tau -\nu - 1)\cdots (-\nu )\Gamma (-\nu )} {(a + N + m-\tau )!\Gamma (-\nu )} f(\tau ) {}\\ & =& \sum _{\tau =a+1}^{a+N+m-1}\frac{(a + N + m -\tau -\nu - 1)\cdots (-\nu )} {(a + N + m-\tau )!} f(\tau ) + f(a + N + m). {}\\ \end{array}$$

Letting \(i:= a + N + m-\tau\), we get

$$\displaystyle\begin{array}{rcl} & & \nabla _{a}^{\nu }f(a + N + m) {}\\ & =& \sum _{i=1}^{N+m}\frac{(i - 1-\nu )\cdots (1-\nu )(-\nu )} {i!} f(a + N + m - i) + f(a + N + m). {}\\ \end{array}$$

This shows that the ν-th order fractional difference is continuous on N − 1 < ν < N, showing (3.33) holds.

Now we consider the case \(\nu \rightarrow N^{-}\) in order to show that (3.34) holds:

$$\displaystyle\begin{array}{rcl} & & \lim _{\nu \rightarrow N^{-}}\nabla _{a}^{\nu }f(a + N + m) {}\\ & & \quad =\lim _{\nu \rightarrow N^{-}}\sum _{i=1}^{N+m}\frac{(i - 1-\nu )\cdots (1-\nu )(-\nu )} {i!} f(a + N + m - i) {}\\ & & \qquad +\, f(a + N + m) {}\\ & & \quad =\sum _{ i=1}^{N+m}\left (\frac{(i - 1 - N)\cdots (-N)} {i!} f(a + N + m - i)\right ) + f(a + N + m) {}\\ & & \quad =\sum _{ i=0}^{N+m}\left ((-1)^{i}\frac{(N + 1 - i)\cdots (N)} {i!} f(a + N + m - i)\right ) {}\\ & & \quad =\sum _{ i=0}^{N+m}(-1)^{i}{N\choose i}f(a + N + m - i) {}\\ & & \quad =\sum _{ i=0}^{N}(-1)^{i}{N\choose i}f(a + m + N - i) {}\\ & & \quad = \nabla ^{N}f(a + N + m). {}\\ \end{array}$$

Finally we want to show (3.35) holds. So we write

$$\displaystyle\begin{array}{rcl} & & \lim _{\nu \rightarrow (N-1)^{+}}\nabla _{a}^{\nu }f(a + N + m) {}\\ & & \quad =\lim _{\nu \rightarrow (N-1)^{+}}\sum _{i=1}^{N+m}\frac{(i - 1-\nu )\cdots (1-\nu )(-\nu )} {i!} f(a + N + m - i) {}\\ & & \qquad +\, f(a + N + m) {}\\ & & \quad =\sum _{ i=1}^{N+m}\frac{(i - N)(i - 1 - N)\cdots (1 - N)} {i!} f(a + N + m - i) {}\\ & & \qquad +\, f(a + N + m) {}\\ & & \quad =\sum _{ i=0}^{N+m}(-1)^{i}\frac{(N - i)(N + 1 - i)\cdots (N - 1)} {i!} f(a + N + m - i) {}\\ & & \quad =\sum _{ i=0}^{N+m}(-1)^{i}{N - 1\choose i}f(a + N + m - i) {}\\ & & \quad = \nabla ^{N-1}f(a + N + m). {}\\ \end{array}$$

This completes the proof. □ 
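The limit (3.34) can also be observed numerically using the summation form of \(\nabla _{a}^{\nu }f\) from Theorem 3.62. A Python sketch (the test function, the sample values of ν, and the tolerances are ours):

```python
import math

def H(mu, t, a):
    m = t - a
    if m == 0:
        return 1.0 if mu == 0 else 0.0
    return math.gamma(m + mu) / (math.gamma(m) * math.gamma(mu + 1))

def frac_diff(nu, f, a, t):
    # Theorem 3.62: nabla_a^nu f(t) = sum_{tau=a+1}^{t} H_{-nu-1}(t, rho(tau)) f(tau)
    return sum(H(-nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, N, m = 0, 2, 3
f = lambda s: 1.0 / (s + 1)
t = a + N + m
exact = f(t) - 2 * f(t - 1) + f(t - 2)    # nabla^2 f(t)
errs = [abs(frac_diff(nu, f, a, t) - exact) for nu in (1.9, 1.99, 1.999)]
assert errs[2] < errs[1] < errs[0]        # the error shrinks as nu -> 2^-
assert errs[2] < 1e-2
```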

To prove various properties for the nabla fractional sums and differences it is convenient to develop the theory of the nabla Laplace transform, which we do in the next section.

3.10 Nabla Laplace Transforms

Having established the necessary preliminaries, we are now ready to discuss an important application of this material: the Laplace transform. The Laplace transform, as in the standard calculus, will provide us with an elegant way to solve initial value problems for a fractional nabla difference equation. In this section, we will lay the groundwork for this method, prove the basic properties, and establish a means by which to solve various initial value (nabla) fractional difference equations. We begin this section by defining the nabla Laplace transform operator \(\mathcal{L}_{a}\) (based at a) as follows:

Definition 3.64.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). Then the nabla Laplace transform of f is defined by

$$\displaystyle{\mathcal{L}_{a}\{f\}(s) =\int _{ a}^{\infty }E_{ \boxminus s}(\rho (t),a)f(t)\nabla t,}$$

for those values of s ≠ 1 such that this improper integral converges.

In the following theorem we give another formula for the Laplace transform, which is often more convenient to use.

Theorem 3.65.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then

$$\displaystyle{ \mathcal{L}_{a}\{f\}(s) =\sum _{ k=1}^{\infty }(1 - s)^{k-1}f(a + k), }$$
(3.36)

for those values of s such that this infinite series converges.

Proof.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). Then

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{f\}(s)& =& \int _{a}^{\infty }E_{ \boxminus s}(\rho (t),a)f(t)\nabla t {}\\ & =& \int _{a}^{\infty }[1 -\boxminus s]^{a-t+1}f(t)\nabla t {}\\ & =& \int _{a}^{\infty }\left ( \frac{1} {1 - s}\right )^{a-t+1}f(t)\nabla t {}\\ & =& \int _{a}^{\infty }(1 - s)^{t-a-1}f(t)\nabla t {}\\ & =& \sum _{t=a+1}^{\infty }(1 - s)^{t-a-1}f(t) {}\\ & =& \sum _{k=1}^{\infty }(1 - s)^{k-1}f(a + k), {}\\ \end{array}$$

for those values of s such that this infinite series converges. □ 

In the definition of the nabla Laplace transform we assumed s ≠ 1 because \(E_{\boxminus 1}(t,a)\) is not defined. But the formula (3.36) for the nabla Laplace transform is well defined when s = 1. From now on we will always include s = 1 in the domain of convergence of the nabla Laplace transform, although in the proofs we will often assume s ≠ 1. In fact, for any \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\), formula (3.36) gives us that

$$\displaystyle{\mathcal{L}_{a}\{f\}(1) = f(a + 1).}$$

Example 3.66.

We use the last theorem to find \(\mathcal{L}_{a}\{1\}(s)\). By Theorem 3.65 we obtain

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{1\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}1 {}\\ & =& \sum _{k=0}^{\infty }(1 - s)^{k} {}\\ & =& \frac{1} {1 - (1 - s)},\quad \mbox{ for $\vert 1 - s\vert < 1$} {}\\ & =& \frac{1} {s}. {}\\ \end{array}$$

That is

$$\displaystyle{\mathcal{L}_{a}\{1\}(s) = \frac{1} {s},\quad \vert s - 1\vert < 1.}$$

Theorem 3.67.

For all nonnegative integers n, we have that

$$\displaystyle{\mathcal{L}_{a}\{H_{n}(\cdot,a)\}(s) = \frac{1} {s^{n+1}},\quad \mbox{ for}\quad \vert s - 1\vert < 1.}$$

Proof.

The proof is by induction on n. The result is true for n = 0 by the previous example. Suppose now that \(\mathcal{L}_{a}\{H_{n}(\cdot,a)\}(s) = \frac{1} {s^{n+1}}\) for some fixed n ≥ 0 and | s − 1 |  < 1. Then consider

$$\displaystyle{\mathcal{L}_{a}\{H_{n+1}(\cdot,a)\}(s) =\int _{ a}^{\infty }E_{ \boxminus s}(\rho (t),a)H_{n+1}(t,a)\nabla t.}$$

We will apply the first integration by parts formula (3.19) with

$$\displaystyle{u(t) = H_{n+1}(t,a),\quad \mbox{ and}\quad \nabla v(t) = E_{\boxminus s}(\rho (t),a) = -\frac{1} {s} \boxminus sE_{\boxminus s}(t,a).}$$

It follows that

$$\displaystyle{\nabla u(t) = H_{n}(t,a),\quad v(\rho (t)) = -\frac{1} {s}E_{\boxminus s}(\rho (t),a).}$$

Hence by the integration by parts formula (3.19)

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a}\{H_{n+1}(\cdot,a)\}(s) =\int _{ a}^{\infty }E_{ \boxminus s}(\rho (t),a)H_{n+1}(t,a)\nabla t {}\\ & & \quad \quad = -\frac{1} {s}E_{\boxminus s}(t,a)H_{n+1}(t,a)\big\vert _{a}^{\infty } + \frac{1} {s}\int _{a}^{\infty }E_{ \boxminus s}(\rho (t),a)H_{n}(t,a)\nabla t {}\\ & & \quad \quad = -\frac{1} {s}(1 - s)^{t-a}H_{ n+1}(t,a)\big\vert _{a}^{\infty } + \frac{1} {s}\mathcal{L}_{a}\{H_{n}(\cdot,a)\}(s). {}\\ \end{array}$$

Using the nabla form of L’Hôpital’s rule (Exercise 3.19) we calculate

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow \infty }\vert (1 - s)^{t-a}H_{ n+1}(t,a)\vert & =& \lim _{t\rightarrow \infty }\frac{H_{n+1}(t,a)} {\vert 1 - s\vert ^{a-t}} {}\\ & =& \lim _{t\rightarrow \infty } \frac{H_{n}(t,a)} {[1 -\vert 1 - s\vert ]\vert 1 - s\vert ^{a-t}} {}\\ & =& \lim _{t\rightarrow \infty } \frac{H_{n-1}(t,a)} {[1 -\vert 1 - s\vert ]^{2}\vert 1 - s\vert ^{a-t}} {}\\ & =& \cdots {}\\ & =& \lim _{t\rightarrow \infty } \frac{H_{0}(t,a)} {[1 -\vert 1 - s\vert ]^{n+1}\vert 1 - s\vert ^{a-t}} {}\\ & =& 0, {}\\ \end{array}$$

since | s − 1 |  < 1. Thus we have that

$$\displaystyle{\mathcal{L}_{a}\{H_{n+1}(\cdot,a)\}(s) = \frac{1} {s^{n+2}},\quad \vert s - 1\vert < 1}$$

completing the proof. □ 

Definition 3.68.

A function \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) is said to be of exponential order r > 0 if there exist a constant M > 0 and a number \(T \in \mathbb{N}_{a+1}\) such that

$$\displaystyle{\vert f(t)\vert \leq Mr^{t},\quad \mbox{ for all $t \in \mathbb{N}_{ T}$}.}$$

Theorem 3.69.

For \(n \in \mathbb{N}_{1}\) , the Taylor monomials \(H_{n}(t,a)\) are of exponential order 1 + ε for all ε > 0. Also, \(H_{0}(t,a)\) is of exponential order 1.

Proof.

Since

$$\displaystyle{\vert H_{0}(t,a)\vert = 1 \cdot 1^{t},\quad t \in \mathbb{N}_{ a+1},}$$

\(H_{0}(t,a)\) is of exponential order 1. Next, assume \(n \in \mathbb{N}_{1}\) and ε > 0 is fixed. Using repeated applications of the nabla L'Hôpital's rule, we get

$$\displaystyle\begin{array}{rcl} \lim _{t\rightarrow \infty }\frac{H_{n}(t,a)} {(1+\epsilon )^{t}} & =& \lim _{t\rightarrow \infty }\frac{H_{n-1}(t,a)} { \frac{\epsilon }{1+\epsilon }(1+\epsilon )^{t}} {}\\ & =& \lim _{t\rightarrow \infty }\frac{H_{n-2}(t,a)} {( \frac{\epsilon }{1+\epsilon })^{2}(1+\epsilon )^{t}} {}\\ & \cdots & {}\\ & =& \lim _{t\rightarrow \infty } \frac{H_{0}(t,a)} {( \frac{\epsilon }{1+\epsilon })^{n}(1+\epsilon )^{t}} {}\\ & =& 0. {}\\ \end{array}$$

It follows from this that each \(H_{n}(t,a)\), \(n \in \mathbb{N}_{1}\), is of exponential order 1 +ε for all ε > 0. □ 

Theorem 3.70 (Existence of Nabla Laplace Transform).

If \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) is a function of exponential order r > 0, then its Laplace transform exists for \(\vert s - 1\vert < \frac{1} {r}\) .

Proof.

Let f be a function of exponential order r. Then there is a constant M > 0 and a number \(T \in \mathbb{N}_{a+1}\) such that | f(t) | ≤ Mr t for all \(t \in \mathbb{N}_{T}\). Pick K so that \(T = a + K\). Then we have that

$$\displaystyle{\vert f(a + k)\vert \leq Mr^{a+k},\quad k \in \mathbb{N}_{ K}.}$$

We now show that

$$\displaystyle{\mathcal{L}_{a}\{f\}(s) =\sum _{ k=1}^{\infty }(1 - s)^{k-1}f(a + k)}$$

converges for \(\vert s - 1\vert < \frac{1} {r}\). To see this, consider

$$\displaystyle\begin{array}{rcl} \sum _{k=K}^{\infty }\vert (1 - s)^{k-1}f(a + k)\vert & =& \sum _{ k=K}^{\infty }\vert 1 - s\vert ^{k-1}\vert f(a + k)\vert {}\\ &\leq & \sum _{k=K}^{\infty }\vert 1 - s\vert ^{k-1}Mr^{a+k} {}\\ & =& Mr^{a+1}\sum _{ k=K}^{\infty }[r\vert s - 1\vert ]^{k-1}, {}\\ \end{array}$$

which converges since r | s − 1 |  < 1. It follows that \(\mathcal{L}_{a}\{f\}(s)\) converges absolutely for \(\vert s - 1\vert < \frac{1} {r}\). □ 

Theorem 3.71.

The Laplace transform of the Taylor monomial, H n (t,a), \(n \in \mathbb{N}_{0}\) , exists for |s − 1| < 1.

Proof.

The proof of this theorem follows from Theorems 3.69 and 3.70. □ 

Similarly, by Exercise 3.30, each of the functions \(E_{p}(t,a)\), \(\mbox{Cosh}_{p}(t,a)\), \(\mbox{Sinh}_{p}(t,a)\), \(\mbox{Cos}_{p}(t,a)\), and \(\mbox{Sin}_{p}(t,a)\) is of exponential order, and hence by Theorem 3.70 their Laplace transforms exist; the precise domains of convergence are given in Theorem 3.76.

Theorem 3.72 (Uniqueness Theorem).

Assume \(f,g: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then f(t) = g(t), \(t \in \mathbb{N}_{a+1},\) if and only if

$$\displaystyle{\mathcal{L}_{a}\{f\}(s) = \mathcal{L}_{a}\{g\}(s),\quad \mbox{ for}\quad \vert s - 1\vert < r}$$

for some r > 0.

Proof.

Since \(\mathcal{L}_{a}\) is a linear operator it suffices to show that f(t) = 0 for \(t \in \mathbb{N}_{a+1}\) if and only if \(\mathcal{L}_{a}\{f\}(s) = 0\) for | s − 1 |  < r for some r > 0. If f(t) = 0 for \(t \in \mathbb{N}_{a+1}\), then trivially \(\mathcal{L}_{a}\{f\}(s) = 0\) for all \(s \in \mathbb{C}\). Conversely, assume that \(\mathcal{L}_{a}\{f\}(s) = 0\) for | s − 1 |  < r for some r > 0. In this case we have that

$$\displaystyle{\sum _{k=1}^{\infty }f(a + k)(1 - s)^{k-1} = 0,\quad \vert s - 1\vert < r.}$$

Since a power series that vanishes on an open disk must have all of its coefficients equal to zero, this implies that

$$\displaystyle{f(t) = 0,\quad t \in \mathbb{N}_{a+1}.}$$

This completes the proof. □ 

3.11 Fractional Taylor Monomials

To find the formula for the Laplace transform of a fractional nabla Taylor monomial we will use the following lemma, which appears in Hein et al. [119].

Lemma 3.73.

For \(\nu \in \mathbb{C}\setminus \mathbb{Z}\) and n ≥ 0, we have that

$$\displaystyle{ (1+\nu )^{\overline{n}} = \frac{(-1)^{n}\Gamma (-\nu )} {\Gamma (-\nu - n)}. }$$
(3.37)

Proof.

The proof of (3.37) is by induction for \(n \in \mathbb{N}_{0}\). For n = 0, (3.37) clearly holds. Assume (3.37) is true for some fixed n ≥ 0. Then,

$$\displaystyle\begin{array}{rcl} (1+\nu )^{\overline{n + 1}}& =& (1+\nu )^{\overline{n}}(\nu +n + 1) {}\\ & =& \frac{(-1)^{n}\Gamma (-\nu )(\nu +n + 1)} {\Gamma (-\nu - n)},\quad \mbox{ by the induction hypothesis} {}\\ & =& \frac{(-1)^{n+1}\Gamma (-\nu )} {\Gamma (-\nu - (n + 1))}. {}\\ \end{array}$$

The result follows. □ 

We now determine the Laplace transform of the fractional nabla Taylor monomial.

Theorem 3.74.

For ν not an integer, we have that

$$\displaystyle{\mathcal{L}_{a}\{H_{\nu }(\cdot,a)\}(s) = \frac{1} {s^{\nu +1}},\quad \mbox{ for}\quad \vert s - 1\vert < 1.}$$

Proof.

Consider, for | s − 1 |  < 1,

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a}\{H_{\nu }(\cdot,a)\}(s) =\sum _{ k=1}^{\infty }(1 - s)^{k-1}H_{\nu }(a + k,a) =\sum _{ k=1}^{\infty }(1 - s)^{k-1} \frac{k^{\overline{\nu }}} {\Gamma (\nu +1)} {}\\ & & \quad \quad =\sum _{ k=1}^{\infty }(1 - s)^{k-1} \frac{\Gamma (k+\nu )} {\Gamma (k)\Gamma (\nu +1)} =\sum _{ k=0}^{\infty }(1 - s)^{k} \frac{\Gamma (k + 1+\nu )} {\Gamma (k + 1)\Gamma (\nu +1)} {}\\ & & \quad \quad =\sum _{ k=0}^{\infty }(1 - s)^{k} \frac{(1+\nu )^{\overline{k}}} {\Gamma (k + 1)} {}\\ & & \quad \quad =\sum _{ k=0}^{\infty }(-1)^{k}(1 - s)^{k} \frac{\Gamma (-\nu )} {\Gamma (k + 1)\Gamma (-\nu - k)}\quad \quad \quad \text{(by Lemma <InternalRef RefID="FPar73">3.73</InternalRef>)} {}\\ & & \quad \quad =\sum _{ k=0}^{\infty }(-1)^{k}(1 - s)^{k}\frac{[-(\nu +1)]^{\underline{k}}} {\Gamma (k + 1)} {}\\ & & \quad \quad =\sum _{ k=0}^{\infty }(-1)^{k}{-(\nu +1)\choose k}(1 - s)^{k} {}\\ & & \quad \quad = \left [1 - (1 - s)\right ]^{-(\nu +1)}\quad \quad \text{(by the Generalized Binomial Theorem)} {}\\ & & \quad \quad = \frac{1} {s^{\nu +1}}. {}\\ \end{array}$$

This completes the proof. □ 

Combining Theorems 3.67 and 3.74, we get the following corollary:

Corollary 3.75.

For \(\nu \in \mathbb{C}\setminus \{- 1,-2,-3,\ldots \}\) , we have that

$$\displaystyle{\mathcal{L}_{a}\{H_{\nu }(\cdot,a)\}(s) = \frac{1} {s^{\nu +1}},\quad \mbox{ for}\quad \vert 1 - s\vert < 1.}$$

Theorem 3.76.

The following hold:

  1. (i)

    \(\mathcal{L}_{a}\{E_{p}(\cdot,a)\}(s) = \frac{1} {s-p},\quad p\neq 1;\)

  2. (ii)

    \(\mathcal{L}_{a}\{\mbox{ Cosh}_{p}(\cdot,a)\}(s) = \frac{s} {s^{2}-p^{2}},\quad p\neq \pm 1;\)

  3. (iii)

    \(\mathcal{L}_{a}\{\mbox{ Sinh}_{p}(\cdot,a)\}(s) = \frac{p} {s^{2}-p^{2}},\quad p\neq \pm 1;\)

  4. (iv)

    \(\mathcal{L}_{a}\{\mbox{ Cos}_{p}(\cdot,a)\}(s) = \frac{s} {s^{2}+p^{2}},\quad p\neq \pm i;\)

  5. (v)

    \(\mathcal{L}_{a}\{\mbox{ Sin}_{p}(\cdot,a)\}(s) = \frac{p} {s^{2}+p^{2}},\quad p\neq \pm i;\)

where (i) holds for \(\vert s - 1\vert < \vert 1 - p\vert \) , (ii) and (iii) hold for \(\vert s - 1\vert < \mbox{ min}\{\vert 1 - p\vert,\vert 1 + p\vert \},\) and (iv) and (v) hold for \(\vert s - 1\vert < \mbox{ min}\{\vert 1 - ip\vert,\vert 1 + ip\vert \}\) .

Proof.

To see that (i) holds, note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{E_{p}(\cdot,a)\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}E_{ p}(a + k,a) {}\\ & =& \sum _{k=1}^{\infty }(1 - s)^{k-1}(1 - p)^{-k} {}\\ & =& \frac{1} {1 - p}\sum _{k=1}^{\infty }\left (\frac{1 - s} {1 - p}\right )^{k-1} {}\\ & =& \frac{1} {1 - p} \frac{1} {1 -\frac{1-s} {1-p}},\quad \mbox{ for}\quad \left \vert \frac{1 - s} {1 - p}\right \vert < 1 {}\\ & =& \frac{1} {s - p} {}\\ \end{array}$$

for \(\vert s - 1\vert < \vert 1 - p\vert \). To see that (ii) holds, note that for \(\vert s - 1\vert <\min \{ \vert 1 - p\vert,\vert 1 + p\vert \}\)

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\mbox{ Cosh}_{p}(\cdot,a)\}(s)& =& \frac{1} {2}\mathcal{L}_{a}\{E_{p}(\cdot,a)\}(s) + \frac{1} {2}\mathcal{L}_{a}\{E_{-p}(\cdot,a)\}(s) {}\\ & =& \frac{1} {2(s - p)} + \frac{1} {2(s + p)} {}\\ & =& \frac{s} {s^{2} - p^{2}}. {}\\ \end{array}$$

To see that (iv) holds, note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\mbox{ Cos}_{p}(\cdot,a)\}(s)& =& \mathcal{L}_{a}\{Cosh_{ip}(\cdot,a)\}(s) {}\\ & =& \frac{s} {s^{2} - (ip)^{2}} {}\\ & =& \frac{s} {s^{2} + p^{2}} {}\\ \end{array}$$

for \(\vert s - 1\vert <\min \{ \vert 1 - ip\vert,\vert 1 + ip\vert \}\). The proofs of parts (iii) and (v) are left as an exercise (Exercise 3.31). □ 
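Parts (i) and (iv) can be checked numerically from the series form (3.36), using \(E_{p}(t,a) = (1-p)^{a-t}\) (which also makes sense for complex p) and \(\mbox{Cos}_{p} = \frac{1}{2}(E_{ip} + E_{-ip})\). A Python sketch (parameter values and truncation length are ours):

```python
a = 0

def E(p, t):
    return (1 - p) ** (a - t)        # works for complex p as well

def laplace(f, s, K=400):
    return sum((1 - s) ** (k - 1) * f(a + k) for k in range(1, K + 1))

# part (i): L_a{E_p}(s) = 1/(s - p)
p, s = 0.4, 1.2                      # |s - 1| = 0.2 < |1 - p| = 0.6
assert abs(laplace(lambda t: E(p, t), s) - 1 / (s - p)) < 1e-10

# part (iv): Cos_p = (E_{ip} + E_{-ip})/2, so L_a{Cos_p}(s) = s/(s^2 + p^2)
p, s = 0.5, 1.1                      # |s - 1| = 0.1 < min{|1 - ip|, |1 + ip|}
cos_p = lambda t: ((E(1j * p, t) + E(-1j * p, t)) / 2).real
assert abs(laplace(cos_p, s) - s / (s * s + p * p)) < 1e-10
```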

3.12 Convolution

We are now ready to investigate one of the most important properties in solving initial-value fractional nabla difference equations: convolution. This definition is motivated by the desire to express the fractional nabla sums and fractional nabla differences as convolutions of arbitrary functions and Taylor monomials. As a consequence, the resulting properties that stem from this definition are, in fact, consistent with the standard convolution. Many of the results in this section appear in Hein et al. [119] and Ahrendt et al. [3].

Definition 3.77.

For \(f,g: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\), we define the nabla convolution product of f and g by

$$\displaystyle{(f {\ast} g)(t):=\int _{ a}^{t}f(t -\rho (\tau ) + a)g(\tau )\nabla \tau,\quad t \in \mathbb{N}_{ a+1}.}$$

Example 3.78.

Use the definition of the nabla convolution product to find \(1 {\ast}\mbox{Sin}_{p}(\cdot,a)\), \(p\neq 0,\pm i\). By Definition 3.77,

$$\displaystyle\begin{array}{rcl} (1 {\ast}\mbox{ Sin}_{p}(\cdot,a))(t)& =& \int _{a}^{t}1 \cdot \mbox{ Sin}_{ p}(\tau,a)\nabla \tau {}\\ & =& \int _{a}^{t}\mbox{ Sin}_{ p}(\tau,a)\nabla \tau {}\\ & =& -\frac{1} {p}\mbox{ Cos}_{p}(\tau,a)\big\vert _{a}^{t} {}\\ & =& \frac{1} {p} -\frac{1} {p}\mbox{ Cos}_{p}(t,a), {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\).

Example 3.79.

Use the definition of the (nabla) convolution product to find

$$\displaystyle{\left (E_{p}(\cdot,a) {\ast} E_{q}(\cdot,a)\right )(t),\quad p,q\neq 1,\quad p\neq q.}$$

Assume q ≠ 0. By Definition 3.77 and using the second integration by parts formula, we have that

$$\displaystyle\begin{array}{rcl} & & \left (E_{p}(\cdot,a) {\ast} E_{q}(\cdot,a)\right )(t) {}\\ & & =\int _{ a}^{t}E_{ p}(t -\rho (\tau ) + a,a)E_{q}(\tau,a)\nabla \tau {}\\ & & = \frac{1} {q}E_{p}(t -\tau +a,a)E_{q}(\tau,a)\big\vert _{\tau =a}^{\tau =t} + \frac{p} {q}\int _{a}^{t}E_{ p}(t -\rho (\tau ) + a,a)E_{q}(\tau,a)\nabla \tau {}\\ & & = \frac{1} {q}E_{q}(t,a) -\frac{1} {q}E_{p}(t,a) + \frac{p} {q}\left (E_{p}(\cdot,a) {\ast} E_{q}(\cdot,a)\right )(t). {}\\ & & {}\\ \end{array}$$

Solving for \(\left (E_{p}(\cdot,a) {\ast} E_{q}(\cdot,a)\right )(t)\), we obtain

$$\displaystyle{\left (E_{p}(\cdot,a) {\ast} E_{q}(\cdot,a)\right )(t) = \frac{1} {p - q}E_{p}(t,a) + \frac{1} {q - p}E_{q}(t,a)}$$

for \(t \in \mathbb{N}_{a+1}\). We leave it to the reader to show that this last formula is also valid if q = 0.
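Example 3.79 can likewise be verified numerically; the sketch below (helper names ours) takes p = 3, q = 4, so the hypotheses p, q ≠ 1, p ≠ q, q ≠ 0 hold.

```python
# Numerical check of Example 3.79 (helper names are ours).

def E(p, t, a):
    # Nabla exponential E_p(t,a) = (1-p)^{-(t-a)}
    return (1 - p) ** (a - t)

def conv(f, g, a, t):
    # (f*g)(t) = sum_{tau=a+1}^t f(t - rho(tau) + a) g(tau), rho(tau) = tau - 1
    return sum(f(t - tau + 1 + a) * g(tau) for tau in range(a + 1, t + 1))

a, p, q = 0, 3, 4
for t in range(1, 8):
    lhs = conv(lambda u: E(p, u, a), lambda u: E(q, u, a), a, t)
    rhs = E(p, t, a) / (p - q) + E(q, t, a) / (q - p)
    assert abs(lhs - rhs) < 1e-8
```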

Theorem 3.80.

Assume \(\nu \in \mathbb{R}\setminus \{0,-1,-2,\ldots \}\) and \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then

$$\displaystyle{\nabla _{a}^{-\nu }f(t) = (H_{\nu -1}(\cdot,a) {\ast} f)(t),}$$

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

The result follows from the following:

$$\displaystyle\begin{array}{rcl} (H_{\nu -1}(\cdot,a) {\ast} f)(t)& =& \int _{a}^{t}H_{\nu -1}(t -\rho (\tau ) + a,a)f(\tau )\nabla \tau {}\\ & =& \int _{a}^{t}\frac{(t -\rho (\tau ) + a - a)^{\overline{\nu - 1}}} {\Gamma (\nu )} f(\tau )\nabla \tau {}\\ & =& \int _{a}^{t}\frac{(t -\rho (\tau ))^{\overline{\nu - 1}}} {\Gamma (\nu )} f(\tau )\nabla \tau {}\\ & =& \int _{a}^{t}H_{\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau {}\\ & =& \nabla _{a}^{-\nu }f(t), {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). □ 
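Theorem 3.80 can be illustrated numerically. In this sketch (helper names ours), `frac_sum` implements the fractional sum directly from its defining sum, and we compare it against the convolution with the Taylor monomial; taking ν = 1 also recovers the plain nabla sum, since H₀ ≡ 1.

```python
# Numerical sketch of Theorem 3.80 (helper names are ours).
from math import gamma

def H(nu, t, b):
    # Taylor monomial H_nu(t,b) = (t-b)^{rising nu}/Gamma(nu+1), for t >= b
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

def conv(f, g, a, t):
    # (f*g)(t) = sum_{tau=a+1}^t f(t - rho(tau) + a) g(tau)
    return sum(f(t - tau + 1 + a) * g(tau) for tau in range(a + 1, t + 1))

def frac_sum(nu, f, a, t):
    # nabla_a^{-nu} f(t) = sum_{tau=a+1}^t H_{nu-1}(t, rho(tau)) f(tau)
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu = 0, 0.5
f = lambda t: t ** 2 + 1.0
for t in range(1, 8):
    assert abs(frac_sum(nu, f, a, t) - conv(lambda u: H(nu - 1, u, a), f, a, t)) < 1e-10
    # nu = 1 gives the ordinary nabla sum, since H_0 = 1:
    assert abs(frac_sum(1, f, a, t) - sum(f(tau) for tau in range(a + 1, t + 1))) < 1e-10
```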

Theorem 3.81 (Nabla Convolution Theorem).

Assume \(f,g: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and their nabla Laplace transforms converge for |s − 1| < r for some r > 0. Then

$$\displaystyle{\mathcal{L}_{a}\{f {\ast} g\}(s) = \mathcal{L}_{a}\{f\}(s)\mathcal{L}_{a}\{g\}(s),}$$

for |s − 1| < r.

Proof.

The following proves our result:

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{f {\ast} g\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}(f {\ast} g)(a + k) {}\\ & =& \sum _{k=1}^{\infty }(1 - s)^{k-1}\int _{ a}^{a+k}f(a + k -\rho (\tau ) + a)g(\tau )\nabla \tau {}\\ & =& \sum _{k=1}^{\infty }(1 - s)^{k-1}\sum _{ \tau =a+1}^{a+k}f(a + k -\rho (\tau ) + a)g(\tau ) {}\\ & =& \sum _{k=1}^{\infty }\sum _{ \tau =1}^{k}(1 - s)^{k-1}f(k -\rho (\tau ) + a)g(\tau +a) {}\\ & =& \sum _{\tau =1}^{\infty }\sum _{ k=\tau }^{\infty }(1 - s)^{k-1}f(k -\rho (\tau ) + a)g(a+\tau ) {}\\ & =& \left (\sum _{\tau =1}^{\infty }(1 - s)^{\tau -1}g(a+\tau )\right )\left (\sum _{ k=1}^{\infty }(1 - s)^{k-1}f(a + k)\right ) {}\\ & =& \mathcal{L}_{a}\{g\}(s)\mathcal{L}_{a}\{f\}(s) {}\\ \end{array}$$

for | s − 1 |  < r. □ 

With the above result and the uniqueness of the Laplace transform, it follows that the convolution product is commutative and associative (see Exercise 3.32).
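The Nabla Convolution Theorem can also be checked numerically by truncating the transform series; the following sketch (helper names ours) uses two nabla exponentials, whose transforms converge near s = 1.3.

```python
# Numerical check of the Nabla Convolution Theorem via truncated series.

def E(p, t, a):
    # Nabla exponential E_p(t,a) = (1-p)^{-(t-a)}
    return (1 - p) ** (a - t)

def conv(f, g, a, t):
    # (f*g)(t) = sum_{tau=a+1}^t f(t - rho(tau) + a) g(tau)
    return sum(f(t - tau + 1 + a) * g(tau) for tau in range(a + 1, t + 1))

def laplace(f, a, s, K=400):
    # Truncated nabla Laplace transform sum_{k=1}^K (1-s)^{k-1} f(a+k)
    return sum((1 - s) ** (k - 1) * f(a + k) for k in range(1, K + 1))

a, s = 0, 1.3
f = lambda t: E(0.5, t, a)
g = lambda t: E(-0.25, t, a)
lhs = laplace(lambda t: conv(f, g, a, t), a, s)
rhs = laplace(f, a, s) * laplace(g, a, s)
assert abs(lhs - rhs) < 1e-6
```

Here the exact transforms are \(1/(s - 0.5)\) and \(1/(s + 0.25)\), so both sides are approximately 0.80645.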

We next establish properties of the Laplace transform that will be useful in solving initial value problems for integer-order nabla difference equations.

Theorem 3.82 (Transformation of Fractional Sums).

Assume ν > 0 and the nabla Laplace transform of \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) converges for |s − 1| < r for some r > 0. Then

$$\displaystyle{\mathcal{L}_{a}\{\nabla _{a}^{-\nu }f\}(s) = \frac{1} {s^{\nu }}\mathcal{L}_{a}\{f\}(s)}$$

for \(\vert s - 1\vert <\min \{ 1,r\}\) .

Proof.

The result follows since

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla _{a}^{-\nu }f\}(s)& =& \mathcal{L}_{ a}\{H_{\nu -1}(\cdot,a) {\ast} f\}(s) {}\\ & =& \mathcal{L}_{a}\{H_{\nu -1}(\cdot,a)\}(s)\mathcal{L}_{a}\{f\}(s) {}\\ & =& \frac{1} {s^{\nu }}\mathcal{L}_{a}\{f\}(s), {}\\ \end{array}$$

for \(\vert s - 1\vert <\min \{ 1,r\}\). □ 

When ν is a positive integer, this result is consistent with the formula in the continuous case for the Laplace transform of the ν-th iterated integral of a function. We want to establish similar properties for fractional differences; however, we will first establish integer-order difference properties.

Theorem 3.83 (Transform of Nabla Difference).

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) is of exponential order r > 0. Then

$$\displaystyle{\mathcal{L}_{a}\{\nabla f\}(s) = s\mathcal{L}_{a}\{f\}(s) - f(a)}$$

for |s − 1| < r.

Proof.

Note that since we are assuming that \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\), we have that \(\nabla f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and so we can consider \(\mathcal{L}_{a}\{\nabla f\}(s)\). Since \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) is of exponential order r > 0, it follows that \(\nabla f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) is of exponential order r > 0. It follows that for | s − 1 |  < r, 

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla f\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}\nabla f(a + k) {}\\ & =& \sum _{k=1}^{\infty }(1 - s)^{k-1}\left [f(a + k) - f(a + k - 1)\right ] {}\\ & =& \mathcal{L}_{a}\{f\}(s) -\sum _{k=1}^{\infty }(1 - s)^{k-1}f(a + k - 1) {}\\ & =& \mathcal{L}_{a}\{f\}(s) -\sum _{k=0}^{\infty }(1 - s)^{k}f(a + k) {}\\ & =& \mathcal{L}_{a}\{f\}(s) - f(a) - (1 - s)\sum _{k=1}^{\infty }(1 - s)^{k-1}f(a + k) {}\\ & =& \mathcal{L}_{a}\{f\}(s) - f(a) - (1 - s)\mathcal{L}_{a}\{f\}(s) {}\\ & =& s\mathcal{L}_{a}\{f\}(s) - f(a). {}\\ \end{array}$$

This completes the proof. □ 
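As a quick numerical illustration of Theorem 3.83 (helper names ours), with \(f(t) = (1/2)^{t}\) both sides of the formula equal \(-5/12\) at s = 1.4.

```python
# Numerical sketch of Theorem 3.83 (transform of the nabla difference).

def laplace(f, a, s, K=300):
    # Truncated nabla Laplace transform sum_{k=1}^K (1-s)^{k-1} f(a+k)
    return sum((1 - s) ** (k - 1) * f(a + k) for k in range(1, K + 1))

a, s = 0, 1.4
f = lambda t: 0.5 ** t
nabla_f = lambda t: f(t) - f(t - 1)
lhs = laplace(nabla_f, a, s)
rhs = s * laplace(f, a, s) - f(a)
assert abs(lhs - rhs) < 1e-10
```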

The following is a simple example where we will use the Nabla Convolution Theorem and Theorem 3.83 to solve an initial value problem.

Example 3.84.

Use the Nabla Convolution Theorem to help you solve the IVP

$$\displaystyle\begin{array}{rcl} \nabla y(t) - 3y(t)& =& E_{4}(t,a),\quad t \in \mathbb{N}_{a+1} {}\\ y(a)& =& 0. {}\\ \end{array}$$

If y(t) is the solution of this IVP and its Laplace transform \(Y_{a}(s)\) exists, then we have that

$$\displaystyle{ sY _{a}(s) - y(a) - 3Y _{a}(s) = \frac{1} {s - 4}. }$$

Using the initial condition and solving for \(Y_{a}(s)\), we obtain

$$\displaystyle{ Y _{a}(s) = \frac{1} {s - 4} \frac{1} {s - 3}. }$$

Using the Nabla Convolution Theorem we see that

$$\displaystyle{y(t) = \left (E_{4}(\cdot,a) {\ast} E_{3}(\cdot,a)\right )(t).}$$

Using Example 3.79 we find that

$$\displaystyle\begin{array}{rcl} y(t)& =& E_{4}(t,a) - E_{3}(t,a) {}\\ & =& (-3)^{a-t} - (-2)^{a-t} {}\\ \end{array}$$

is the solution of our given IVP on \(\mathbb{N}_{a}\). Of course, in this simple example one could also use partial fractions to find y(t).
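Indeed, the claimed solution can be checked directly (a sanity check of ours, with a = 0): the identity \(\nabla y(t) - 3y(t) = E_{4}(t,0)\) and the initial condition both hold.

```python
# Direct check that the claimed solution satisfies the IVP of Example 3.84.
a = 0
E4 = lambda t: (-3.0) ** (a - t)                       # E_4(t,a) = (1-4)^{-(t-a)}
y = lambda t: (-3.0) ** (a - t) - (-2.0) ** (a - t)    # claimed solution
assert y(a) == 0
for t in range(a + 1, a + 9):
    assert abs((y(t) - y(t - 1)) - 3 * y(t) - E4(t)) < 1e-12
```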

We can then generalize this result for an arbitrary number of nabla differences.

Theorem 3.85 (Transform of n-th-Order Nabla Difference).

Assume \(f: \mathbb{N}_{a-n+1} \rightarrow \mathbb{R}\) is of exponential order r > 0. Then

$$\displaystyle{ \mathcal{L}_{a}\{\nabla ^{n}f\}(s) = s^{n}\mathcal{L}_{ a}\{f\}(s) -\sum _{k=1}^{n}s^{n-k}\nabla ^{k-1}f(a). }$$
(3.38)

for |s − 1| < r, for each \(n \in \mathbb{N}_{1}\) .

Proof.

Note that since f is of exponential order r > 0, ∇n f is of exponential order r > 0 for each \(n \in \mathbb{N}_{1}\). Hence \(\mathcal{L}_{a}\{\nabla ^{n}f\}(s)\) converges for | s − 1 |  < r for each \(n \in \mathbb{N}_{1}\). The proof of (3.38) is by induction for \(n \in \mathbb{N}_{1}\). The base case n = 1 follows from Theorem 3.83. Now assume n ≥ 1 and (3.38) holds for | s − 1 |  < r. Then, using Theorem 3.83, we have that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla ^{n+1}f\}(s)& =& \mathcal{L}_{ a}\{\nabla \left [\nabla ^{n}f\right ]\}(s) {}\\ & =& s\mathcal{L}_{a}\{\nabla ^{n}f\}(s) -\nabla ^{n}f(a) {}\\ & =& s\left [s^{n}\mathcal{L}_{ a}\{f\}(s) -\sum _{k=1}^{n}s^{n-k}\nabla ^{k-1}f(a)\right ] -\nabla ^{n}f(a) {}\\ & =& s^{n+1}\mathcal{L}_{ a}\{f\}(s) -\sum _{k=1}^{n+1}s^{(n+1)-k}\nabla ^{k-1}f(a). {}\\ \end{array}$$

Hence, (3.38) holds when n is replaced by n + 1 and the proof is complete. □ 

Example 3.86.

Solve the IVP

$$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t) - 6\nabla y(t) + 8y(t)& =& 0,\quad t \in \mathbb{N}_{ a+1} {}\\ y(a) = 1,\quad \nabla y(a)& =& -1. {}\\ \end{array}$$

If y(t) is the solution of this IVP and we let \(Y _{a}(s):= \mathcal{L}_{a}\{y\}(s),\) we have that

$$\displaystyle{ \left [s^{2}Y _{ a}(s) - sy(a) -\nabla y(a)\right ] - 6\left [sY _{a}(s) - y(a)\right ] + 8Y _{a}(s) = 0. }$$

Using the initial conditions we have

$$\displaystyle{\left [s^{2}Y _{ a}(s) - s + 1\right ] - 6\left [sY _{a}(s) - 1\right ] + 8Y _{a}(s) = 0.}$$

Solving for \(Y_{a}(s)\), we have that

$$\displaystyle\begin{array}{rcl} Y _{a}(s)& =& \frac{s - 7} {s^{2} - 6s + 8} {}\\ & =& \frac{s - 7} {(s - 2)(s - 4)} {}\\ & =& \frac{5} {2} \frac{1} {s - 2} -\frac{3} {2} \frac{1} {s - 4}. {}\\ \end{array}$$

It follows that

$$\displaystyle\begin{array}{rcl} y(t)& =& \frac{5} {2}E_{2}(t,a) -\frac{3} {2}E_{4}(t,a) {}\\ & =& \frac{5} {2}(-1)^{a-t} -\frac{3} {2}(-3)^{a-t} {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a-1}\).
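As a direct sanity check of ours (with a = 0), the claimed solution satisfies the difference equation and both initial conditions:

```python
# Direct check of Example 3.86.
a = 0
y = lambda t: 2.5 * (-1.0) ** (a - t) - 1.5 * (-3.0) ** (a - t)
nabla = lambda g, t: g(t) - g(t - 1)
assert y(a) == 1.0 and nabla(y, a) == -1.0
for t in range(a + 1, a + 9):
    nabla2 = y(t) - 2 * y(t - 1) + y(t - 2)            # nabla^2 y(t)
    assert abs(nabla2 - 6 * nabla(y, t) + 8 * y(t)) < 1e-9
```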

3.13 Further Properties of the Nabla Laplace Transform

In this section we want to find the Laplace transform of a ν-th order fractional difference of a function, where 0 < ν < 1.

Theorem 3.87.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) is of exponential order r > 0 and 0 < ν < 1. Then

$$\displaystyle{\mathcal{L}_{a}\{\nabla _{a}^{\nu }f\}(s) = s^{\nu }\mathcal{L}_{ a}\{f\}(s)}$$

for |s − 1| < r.

Proof.

Using Theorems 3.82 and 3.83 we have that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla _{a}^{\nu }f\}(s)& =& \mathcal{L}_{ a}\{\nabla \nabla _{a}^{-(1-\nu )}f\}(s) {}\\ & =& s\mathcal{L}_{a}\{\nabla _{a}^{-(1-\nu )}f\}(s) -\nabla _{ a}^{-(1-\nu )}f(a) {}\\ & =& \frac{s} {s^{1-\nu }}\mathcal{L}_{a}\{f\}(s),\quad \mbox{since }\nabla _{a}^{-(1-\nu )}f(a) = 0, {}\\ & =& s^{\nu }\mathcal{L}_{a}\{f\}(s) {}\\ \end{array}$$

for | s − 1 |  < 1. □ 

Next we state and prove a useful lemma (see Hein et al. [119] for n = 1 and see Ahrendt et al. [3] for general n).

Lemma 3.88 (Shifting Base Lemma).

Given \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and \(n \in \mathbb{N}_{1}\) , we have that

$$\displaystyle{\mathcal{L}_{a+n}\{f\}(s) = \left ( \frac{1} {1 - s}\right )^{n}\mathcal{L}_{ a}\{f\}(s) -\sum _{k=1}^{n} \frac{f(a + k)} {(1 - s)^{n-k+1}}.}$$

Proof.

Consider

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+n}\{f\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}f(a + n + k) {}\\ & =& \sum _{k=n+1}^{\infty }(1 - s)^{k-n-1}f(a + k) {}\\ & =& \frac{1} {(1 - s)^{n}}\mathcal{L}_{a}\{f\}(s) -\sum _{k=1}^{n} \frac{f(a + k)} {(1 - s)^{n-k+1}}, {}\\ \end{array}$$

which is what we wanted to prove. □ 
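Lemma 3.88 is easy to exercise numerically; the sketch below (helper names ours) checks the base-shifting formula with n = 3 and a rapidly decaying f.

```python
# Numerical sketch of Lemma 3.88 (shifting the transform base).

def laplace(f, a, s, K=300):
    # Truncated nabla Laplace transform sum_{k=1}^K (1-s)^{k-1} f(a+k)
    return sum((1 - s) ** (k - 1) * f(a + k) for k in range(1, K + 1))

a, n, s = 0, 3, 1.4
f = lambda t: 0.5 ** t
lhs = laplace(f, a + n, s)
rhs = laplace(f, a, s) / (1 - s) ** n - sum(
    f(a + k) / (1 - s) ** (n - k + 1) for k in range(1, n + 1))
assert abs(lhs - rhs) < 1e-8
```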

With this, we are ready to provide the general form of the Laplace transform of a ν-th order fractional difference of a function f, where 0 < ν < 1.

Theorem 3.89.

Let \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and 0 < ν < 1 be given. Then we have

$$\displaystyle{\mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) = s^{\nu }\mathcal{L}_{ a+1}\{f\}(s) -\frac{1 - s^{\nu }} {1 - s}f(a + 1).}$$

Proof.

Consider

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) {}\\ & & \quad = \mathcal{L}_{a+1}\{\nabla \nabla _{a}^{-(1-\nu )}f\}(s) {}\\ & & \quad = s\mathcal{L}_{a+1}\{\nabla _{a}^{-(1-\nu )}f\}(s) -\nabla _{ a}^{-(1-\nu )}f(a + 1),\quad \mbox{ by Theorem 3.83} {}\\ & & \quad = s\mathcal{L}_{a+1}\{\nabla _{a}^{-(1-\nu )}f\}(s) - f(a + 1),\quad \mbox{ by Exercise 3.27.} {}\\ \end{array}$$

From this and Lemma 3.88, we have that

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) {}\\ & & = s\left ( \frac{1} {1 - s}\mathcal{L}_{a}\{\nabla _{a}^{-(1-\nu )}f\}(s) - \frac{1} {1 - s}\nabla _{a}^{-(1-\nu )}f(a + 1)\right ) - f(a + 1) {}\\ & & = \frac{s^{\nu }} {1 - s}\mathcal{L}_{a}\{f\}(s) - \frac{1} {1 - s}f(a + 1)\quad \text{(by Theorem 3.82)}. {}\\ \end{array}$$

Applying Lemma 3.88 again we obtain

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) {}\\ & & = s^{\nu }\left (\mathcal{L}_{a+1}\{f\}(s) + \frac{1} {1 - s}f(a + 1)\right ) - \frac{1} {1 - s}f(a + 1), {}\\ & & {}\\ \end{array}$$

which is the desired result. □ 

The following theorem was proved by Jia Baoguo.

Theorem 3.90.

Let \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) and N − 1 < ν < N be given. Then we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s)& =& s^{\nu }\mathcal{L}_{ a+1}\{f\}(s) + \frac{s^{\nu } - s^{N-1}} {1 - s} f(a + 1) {}\\ & & -\sum _{k=2}^{N}s^{N-k}\nabla ^{k-1}f(a + 1). {}\\ \end{array}$$

Proof.

We first calculate

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) {}\\ & & = \mathcal{L}_{a+1}\{\nabla ^{N}\nabla _{ a}^{-(N-\nu )}f\}(s) {}\\ & & \mathop{=}\limits^{ \text{Theorem 3.85}}s^{N}\mathcal{L}_{ a+1}\{\nabla _{a}^{-(N-\nu )}f\}(s) -\sum _{ k=1}^{N}s^{N-k}\nabla ^{k-1}\nabla _{ a}^{-(N-\nu )}f(a + 1) {}\\ & & \mathop{=}\limits^{ \text{Lemma 3.88}} \frac{s^{N}} {1 - s}\mathcal{L}_{a}\{\nabla _{a}^{-(N-\nu )}f\}(s) - s^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + 1)} {1 - s} {}\\ & & \quad -\sum _{k=1}^{N}s^{N-k}\nabla ^{k-1}\nabla _{ a}^{-(N-\nu )}f(a + 1) {}\\ & & \mathop{=}\limits^{ \text{Theorem 3.82}} \frac{s^{N}} {1 - s} \cdot \frac{1} {s^{N-\nu }}\mathcal{L}_{a}\{f\}(s) - s^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + 1)} {1 - s} {}\\ & & \quad -\sum _{k=1}^{N}s^{N-k}\nabla ^{k-1}\nabla _{ a}^{-(N-\nu )}f(a + 1) {}\\ & & \mathop{=}\limits^{ \text{Lemma 3.88}} \frac{s^{\nu }} {1 - s}[(1 - s)\mathcal{L}_{a+1}\{f\}(s) + f(a + 1)] - s^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + 1)} {1 - s} {}\\ & & \quad -\sum _{k=1}^{N}s^{N-k}\nabla ^{k-1}\nabla _{ a}^{-(N-\nu )}f(a + 1). {}\\ \end{array}$$

Since

$$\displaystyle\begin{array}{rcl} & \nabla _{a}^{-(N-\nu )}f(a + 1) =\int _{ a}^{a+1}H_{N-\nu -1}(a + 1,\rho (s))f(s)\nabla s& {}\\ & = H_{N-\nu -1}(a + 1,a)f(a + 1) = f(a + 1), & {}\\ \end{array}$$

we have

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) {}\\ & & = s^{\nu }\mathcal{L}_{a+1}\{f\}(s) + \frac{s^{\nu }} {1 - s}f(a + 1) -\frac{s^{N}f(a + 1)} {1 - s} {}\\ & & \quad -\sum _{k=1}^{N}s^{N-k}\nabla ^{k-1}f(a + 1) {}\\ & & = s^{\nu }\mathcal{L}_{a+1}\{f\}(s) + \frac{s^{\nu } - s^{N-1}} {1 - s} f(a + 1) -\sum _{k=2}^{N}s^{N-k}\nabla ^{k-1}f(a + 1). {}\\ \end{array}$$

 □ 

Remark 3.91.

When N = 1, Theorem 3.90 reduces to Theorem 3.89. When N = 2, we obtain the following corollary.

Corollary 3.92.

Let \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and 1 < ν < 2 be given. Then we have

$$\displaystyle{\mathcal{L}_{a+1}\{\nabla _{a}^{\nu }f\}(s) = s^{\nu }\mathcal{L}_{ a+1}\{f\}(s) + \frac{s^{\nu } - s} {1 - s}f(a + 1) -\nabla f(a + 1)}$$
$$\displaystyle{= s^{\nu }\mathcal{L}_{a+1}\{f\}(s) + \frac{s^{\nu } - 1} {1 - s}f(a + 1) + f(a).}$$

3.14 Generalized Power Rules

We now see that with the use of the Laplace transform it is very easy to prove the following generalized power rules.

Theorem 3.93 (Generalized Power Rules).

Let \(\nu \in \mathbb{R}^{+}\) and \(\mu \in \mathbb{R}\) be such that none of μ, ν + μ, and μ −ν is a negative integer. Then we have that

  1. (i)

    \(\nabla _{a}^{-\nu }H_{\mu }(t,a) = H_{\mu +\nu }(t,a);\)

  2. (ii)

    \(\nabla _{a}^{\nu }H_{\mu }(t,a) = H_{\mu -\nu }(t,a);\)

  3. (iii)

    \(\nabla _{a}^{-\nu }(t - a)^{\overline{\mu }} = \frac{\Gamma (\mu +1)} {\Gamma (\mu +\nu +1)}\;(t - a)^{\overline{\mu +\nu }};\)

  4. (iv)

    \(\nabla _{a}^{\nu }(t - a)^{\overline{\mu }} = \frac{\Gamma (\mu +1)} {\Gamma (\mu -\nu +1)}\;(t - a)^{\overline{\mu -\nu }};\)

for \(t \in \mathbb{N}_{a}\) .

Proof.

To see that (i) holds, note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla _{a}^{-\nu }H_{\mu }(\cdot,a)\}(s)& =& \frac{1} {s^{\nu }}\mathcal{L}_{a}\{H_{\mu }(\cdot,a)\}(s) {}\\ & =& \frac{1} {s^{\nu +\mu +1}} {}\\ & =& \mathcal{L}_{a}\{H_{\mu +\nu }(\cdot,a)\}(s). {}\\ \end{array}$$

Hence, by the uniqueness theorem (Theorem 3.72) we have that

$$\displaystyle{\nabla _{a}^{-\nu }H_{\mu }(t,a) = H_{\mu +\nu }(t,a),\quad t \in \mathbb{N}_{a+1}.}$$

Also, this last equation holds for t = a and hence (i) holds. To see that (i) implies (iii), note that

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{-\nu }(t - a)^{\overline{\mu }}& =& \Gamma (\mu +1)\nabla _{ a}^{-\nu }H_{\mu }(t,a) {}\\ & =& \Gamma (\mu +1)H_{\mu +\nu }(t,a) {}\\ & =& \frac{\Gamma (\mu +1)} {\Gamma (\mu +\nu + 1)}(t - a)^{\overline{\mu +\nu }}. {}\\ \end{array}$$

To show that part (ii) holds we will first show that

$$\displaystyle{ \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }H_{\mu }(\cdot,a)\}(s) = \mathcal{L}_{ a+1}\{H_{\mu -\nu }(\cdot,a)\}(s), }$$
(3.39)

for | s − 1 |  < 1. On the one hand, using Lemma 3.88 with n = 1 we have that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+1}\{H_{\mu -\nu }(\cdot,a)\}(s)& =& \frac{1} {1 - s}\mathcal{L}_{a}\{H_{\mu -\nu }(\cdot,a)\}(s) -\frac{H_{\mu -\nu }(a + 1,a)} {1 - s} \\ & =& \frac{1} {1 - s} \frac{1} {s^{\mu -\nu +1}} - \frac{1} {1 - s}. {}\end{array}$$
(3.40)

On the other hand, using Theorem 3.89 we have that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+1}\{\nabla _{a}^{\nu }H_{\mu }(\cdot,a)\}(s)& =& s^{\nu }\mathcal{L}_{ a+1}\{H_{\mu }(\cdot,a)\}(s) -\frac{1 - s^{\nu }} {1 - s}H_{\mu }(a + 1,a) \\ & =& s^{\nu }\Big[ \frac{1} {1 - s}\mathcal{L}_{a}\{H_{\mu }(\cdot,a)\}(s) - \frac{1} {1 - s}\Big] -\frac{1 - s^{\nu }} {1 - s} \\ & =& \frac{s^{\nu }} {1 - s}\Big[ \frac{1} {s^{\mu +1}} - 1\Big] -\frac{1 - s^{\nu }} {1 - s} \\ & =& \frac{1} {1 - s} \frac{1} {s^{\mu -\nu +1}} - \frac{1} {1 - s}. {}\end{array}$$
(3.41)

From (3.40) and (3.41) we get that (3.39) holds. Hence, by the uniqueness theorem (Theorem 3.72)

$$\displaystyle{\nabla _{a}^{\nu }H_{\mu }(t,a) = H_{\mu -\nu }(t,a)}$$

for \(t \in \mathbb{N}_{a+1}\). But this last equation also holds for t = a. Thus, part (ii) holds for \(t \in \mathbb{N}_{a}\). The proof of (iv) is left to the reader (Exercise 3.34). □ 
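The power rule (i) is a finite-sum identity and so can be checked exactly in floating point; the sketch below (helper names ours) uses noninteger ν and μ.

```python
# Numerical sketch of power rule (i): nabla_a^{-nu} H_mu = H_{mu+nu}.
from math import gamma

def H(nu, t, b):
    # H_nu(t,b) = (t-b)^{rising nu}/Gamma(nu+1) for t >= b
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

def frac_sum(nu, f, a, t):
    # nabla_a^{-nu} f(t) = sum_{tau=a+1}^t H_{nu-1}(t, rho(tau)) f(tau)
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu, mu = 0, 0.5, 1.25
for t in range(1, 8):
    assert abs(frac_sum(nu, lambda u: H(mu, u, a), a, t) - H(mu + nu, t, a)) < 1e-10
```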

Next we consider the fractional difference equation

$$\displaystyle{ \nabla _{a}^{\nu }x(t) = f(t),\quad t \in \mathbb{N}_{ a+N}, }$$
(3.42)

where N − 1 < ν < N, \(N \in \mathbb{N}_{1}\). First we prove the following existence-uniqueness theorem for the fractional difference equation (3.42).

Theorem 3.94.

Assume \(f: \mathbb{N}_{a+N} \rightarrow \mathbb{R}\) and N − 1 < ν < N, \(N \in \mathbb{N}_{1}\) . Then the IVP

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }x(t)& =& f(t)\text{, }t \in \mathbb{N}_{ a+N} {}\\ x(a + k)& =& c_{k}\text{, }1 \leq k \leq N, {}\\ \end{array}$$

where c k , for 1 ≤ k ≤ N, are given constants, has a unique solution, which is defined on \(\mathbb{N}_{a+1}\) .

Proof.

First note that if we write the fractional equation \(\nabla _{a}^{\nu }x(t) = f(t)\) in expanded form, we have that

$$\displaystyle{ x(t) +\sum _{\tau =a+1}^{t-1}H_{ -\nu -1}(t,\rho (\tau ))x(\tau ) = f(t), }$$

since \(H_{-\nu -1}(t,\rho (t)) = 1\). It follows that the given IVP is equivalent to the summation IVP

$$\displaystyle{ x(t) = f(t) -\sum _{\tau =a+1}^{t-1}H_{ -\nu -1}(t,\rho (\tau ))x(\tau ),\quad t \in \mathbb{N}_{a+N}, }$$
(3.43)
$$\displaystyle{ x(a + k) = c_{k},\quad 1 \leq k \leq N. }$$
(3.44)

Letting \(t = a + N + 1\) in (3.43), we have that x(t) solves our IVP at \(t = a + N + 1\) iff

$$\displaystyle{x(a + N + 1) = f(a + N + 1) -\sum _{\tau =a+1}^{a+N}H_{ -\nu -1}(a + N + 1,\rho (\tau ))x(\tau ),}$$

where the values \(x(a + 1) = c_{1},\ldots,x(a + N) = c_{N}\) on the right-hand side are the given initial values. Hence \(x(a + N + 1)\) exists and is uniquely determined. Proceeding inductively, (3.43) uniquely determines x(t) at each subsequent point, and the result follows.

 □ 

Theorem 3.95.

Assume ν > 0 and N − 1 < ν ≤ N. Then a general solution of \(\nabla _{a}^{\nu }x(t) = 0\) is given by

$$\displaystyle{x(t) = c_{1}H_{\nu -1}(t,a) + c_{2}H_{\nu -2}(t,a) + \cdots + c_{N}H_{\nu -N}(t,a)}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

For 1 ≤ k ≤ N, we have from (3.59) that

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }H_{\nu -k}(t,a)& =& \nabla ^{k}\nabla _{ a}^{\nu -k}H_{\nu -k}(t,a) {}\\ & =& \nabla ^{k}H_{ 0}(t,a)\quad (\mbox{ by Theorem 3.93}) {}\\ & =& \nabla ^{k}1 {}\\ & =& 0 {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a}\). Since these N solutions are linearly independent on \(\mathbb{N}_{a}\) we have that

$$\displaystyle{x(t) = c_{1}H_{\nu -1}(t,a) + c_{2}H_{\nu -2}(t,a) + \cdots + c_{N}H_{\nu -N}(t,a)}$$

is a general solution of \(\nabla _{a}^{\nu }x(t) = 0\) on \(\mathbb{N}_{a}\). □ 

The next theorem relates fractional Taylor monomials based at values that differ by a positive integer. This result is in Hein et al. [119] and Ahrendt et al. [3].

Theorem 3.96.

For \(\nu \in \mathbb{R}\setminus \{- 1,-2,...\}\) and \(N,m \in \mathbb{N}\) ,

$$\displaystyle{ H_{\nu -N}(t,a + m) =\sum \limits _{ k=0}^{m}{m\choose k}(-1)^{k}H_{\nu -N-k}(t,a). }$$

Proof.

The proof is by induction on m for m ≥ 1. Consider the base case m = 1:

$$\displaystyle\begin{array}{rcl} & & H_{\nu -N}(t,a) - H_{\nu -N-1}(t,a) {}\\ & & = \frac{(t - a)^{\overline{\nu - N}}} {\Gamma (\nu -N + 1)} -\frac{(t - a)^{\overline{\nu - N - 1}}} {\Gamma (\nu -N)} {}\\ & & = \frac{\Gamma (t - a +\nu -N)} {\Gamma (\nu -N + 1)\Gamma (t - a)} -\frac{\Gamma (t - a +\nu -N - 1)} {\Gamma (\nu -N)\Gamma (t - a)} {}\\ & & = \frac{\Gamma (t - a +\nu -N - 1)} {\Gamma (t - a)\Gamma (\nu -N + 1)}\big[(t - a +\nu -N - 1) - (\nu -N)\big] {}\\ & & = \frac{(t - a - 1)\Gamma (t - a +\nu -N - 1)} {\Gamma (t - a)\Gamma (\nu -N + 1)} {}\\ & & = \frac{\Gamma (t - a +\nu -N - 1)} {\Gamma (t - a - 1)\Gamma (\nu -N + 1)} {}\\ & & = \frac{[t - (a + 1)]^{\overline{\nu - N}}} {\Gamma (\nu -N + 1)} {}\\ & & = H_{\nu -N}(t,a + 1). {}\\ \end{array}$$

Hence the base case, m = 1, holds. Now assume m ≥ 1 is fixed and

$$\displaystyle{H_{\nu -N}(t,a + m) =\sum \limits _{ k=0}^{m}{m\choose k}(-1)^{k}H_{\nu -N-k}(t,a).}$$

From the base case with the number a replaced by the number a + m we have that

$$\displaystyle{ H_{\nu -N}(t,a + m + 1) = H_{\nu -N}(t,a + m) - H_{\nu -N-1}(t,a + m). }$$

Applying the induction hypothesis to both terms on the right side of this equation gives

$$\displaystyle\begin{array}{rcl} & & H_{\nu -N}(t,a + m + 1) {}\\ & =& \sum \limits _{k=0}^{m}{m\choose k}(-1)^{k}H_{\nu -N-k}(t,a) -\sum \limits _{k=0}^{m}{m\choose k}(-1)^{k}H_{\nu -N-1-k}(t,a) {}\\ & =& \sum \limits _{k=0}^{m}{m\choose k}(-1)^{k}H_{\nu -N-k}(t,a) -\sum \limits _{k=1}^{m+1}{m\choose k - 1}(-1)^{k-1}H_{\nu -N-k}(t,a) {}\\ & =& \sum \limits _{k=0}^{m+1}{m\choose k}(-1)^{k}H_{\nu -N-k}(t,a) -{ m\choose m + 1}(-1)^{m+1}H_{\nu -N-m-1}(t,a) {}\\ & -& \sum \limits _{k=0}^{m+1}{m\choose k - 1}(-1)^{k-1}H_{\nu -N-k}(t,a) +{ m\choose - 1}(-1)^{-1}H_{\nu -N}(t,a) {}\\ & =& \sum \limits _{k=0}^{m+1}\left ({m\choose k} +{ m\choose k - 1}\right )(-1)^{k}H_{\nu -N-k}(t,a) {}\\ & =& \sum \limits _{k=0}^{m+1}{m + 1\choose k}(-1)^{k}H_{\nu -N-k}(t,a). {}\\ \end{array}$$

This completes the proof. □ 
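Theorem 3.96 can be spot-checked numerically; the sketch below (helper names ours) verifies the binomial shift identity for several m and t with a noninteger order.

```python
# Numerical sketch of Theorem 3.96 (shifting the base of a Taylor monomial).
from math import gamma, comb

def H(nu, t, b):
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

nu, N, a = 0.75, 1, 0
for m in range(1, 4):
    for t in range(a + m + 1, a + m + 6):
        lhs = H(nu - N, t, a + m)
        rhs = sum(comb(m, k) * (-1) ** k * H(nu - N - k, t, a) for k in range(m + 1))
        assert abs(lhs - rhs) < 1e-10
```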

3.15 Mittag–Leffler Function

In this section we define the nabla Mittag–Leffler function, which is useful for solving certain IVPs. First we give an alternate proof of part (i) of Theorem 3.50.

Theorem 3.97.

For |p| < 1, we have that \(E_{p}(t,a) =\sum _{ k=0}^{\infty }p^{k}H_{k}(t,a)\) for \(t \in \mathbb{N}_{a}\) .

Proof.

We will show that \(E_{p}(t,a)\) and \(\sum _{k=0}^{\infty }p^{k}H_{k}(t,a)\) have the same Laplace transform. In order to ensure convergence, we restrict the transform domain so that \(\vert s\vert > \vert p\vert \), | 1 − s |  < 1, and \(\vert 1 - s\vert < \vert 1 - p\vert \). First, we determine the Laplace transform of the exponential function as follows:

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\big\{E_{p}(\cdot,a)\big\}(s)& =& \sum _{k=1}^{\infty }(1 - s)^{k-1}(1 - p)^{-k} {}\\ & =& \frac{1} {1 - p}\sum _{k=0}^{\infty }\left (\frac{1 - s} {1 - p}\right )^{k} = \frac{1} {s - p}. {}\\ \end{array}$$

Next, we have

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\bigg\{\sum _{k=0}^{\infty }p^{k}H_{ k}(\cdot,a)\bigg\}(s)& =& \sum _{k=0}^{\infty }p^{k}\mathcal{L}_{ a}\{H_{k}(\cdot,a)\}(s) {}\\ & =& \frac{1} {s}\sum _{k=0}^{\infty }\left (\frac{p} {s}\right )^{k} = \frac{1} {s - p}. {}\\ \end{array}$$

Finally, \(E_{p}(a,a) = 1\) by definition, and \(\sum _{k=0}^{\infty }p^{k}H_{k}(a,a) = 1\) since \(p^{0}H_{0}(a,a) = 1\) and \(p^{k}H_{k}(a,a) = 0\) for k ≥ 1. Therefore, we obtain the desired result on \(\mathbb{N}_{a}\). □ 
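For integer points, \(H_{k}(t,a) = \binom{t-a+k-1}{k}\), so Theorem 3.97 is the negative binomial series in disguise; the sketch below (helper names ours) checks it numerically.

```python
# Numerical check that E_p(t,a) = sum_k p^k H_k(t,a) for |p| < 1.
from math import comb

def H(k, t, a):
    # For integer t - a >= 1: H_k(t,a) = (t-a)^{rising k}/k! = C(t-a+k-1, k)
    if t == a:
        return 1 if k == 0 else 0
    return comb(t - a + k - 1, k)

def E(p, t, a):
    # Nabla exponential E_p(t,a) = (1-p)^{-(t-a)}
    return (1 - p) ** (a - t)

a, p = 0, 0.4
for t in range(0, 6):
    series = sum(p ** k * H(k, t, a) for k in range(150))
    assert abs(series - E(p, t, a)) < 1e-9
```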

Next we define the nabla Mittag–Leffler function, which is a generalization of the exponential function \(E_{p}(t,a)\).

Definition 3.98 (Mittag–Leffler Function).

For | p |  < 1, α > 0, \(\beta \in \mathbb{R}\), we define the nabla Mittag–Leffler function by

$$\displaystyle{E_{p,\alpha,\beta }(t,a):=\sum _{ k=0}^{\infty }p^{k}H_{\alpha k+\beta }(t,a),\quad t \in \mathbb{N}_{a}.}$$

Remark 3.99.

Since \(H_{0}(t,a) = 1\), we have that \(E_{0,\nu,0}(t,a) = 1\) and \(E_{p,\nu,0}(a,a) = 1\). Also note that \(E_{p,1,0}(t,a) = E_{p}(t,a)\) for | p |  < 1.

Theorem 3.100.

Assume |p| < 1, α > 0, \(\beta \in \mathbb{R}\) . Then

$$\displaystyle{ \nabla _{\rho (a)}^{\nu }E_{ p,\alpha,\beta }(t,\rho (a)) = E_{p,\alpha,\beta -\nu }(t,\rho (a)) }$$
(3.45)

for \(t \in \mathbb{N}_{a}\) .

Proof.

Since

$$\displaystyle\begin{array}{rcl} \nabla _{\rho (a)}^{\nu }E_{ p,\alpha,\beta }(t,\rho (a))& =& \nabla _{\rho (a)}^{\nu }\left (\sum _{ k=0}^{\infty }p^{k}H_{\alpha k+\beta }(t,\rho (a))\right ) {}\\ & =& \sum _{k=0}^{\infty }p^{k}\nabla _{\rho (a)}^{\nu }H_{\alpha k+\beta }(t,\rho (a)) {}\\ & =& \sum _{k=0}^{\infty }p^{k}H_{\alpha k+\beta -\nu }(t,\rho (a)) {}\\ & =& E_{p,\alpha,\beta -\nu }(t,\rho (a)) {}\\ \end{array}$$

we have that (3.45) holds for \(t \in \mathbb{N}_{a}\). □ 

Theorem 3.101.

Assume N − 1 < ν ≤ N, \(N \in \mathbb{N}\) and |c| < 1. Then

$$\displaystyle{E_{-c,\nu,\nu -i}(t,\rho (a))\qquad 1 \leq i \leq N}$$

are N linearly independent solutions on \(\mathbb{N}_{a}\) of

$$\displaystyle{\nabla _{\rho (a)}^{\nu }x(t) + cx(t) = 0\text{, }t \in \mathbb{N}_{ a+N}.}$$

In particular, a general solution of the fractional equation \(\nabla _{\rho (a)}^{\nu }x(t) + cx(t) = 0\) is given by

$$\displaystyle{x(t) = c_{1}E_{-c,\nu,\nu -1}(t,\rho (a)) + c_{2}E_{-c,\nu,\nu -2}(t,\rho (a)) + \cdots + c_{N}E_{-c,\nu,\nu -N}(t,\rho (a)),}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

If c = 0, then this result follows from Theorem 3.95. Now assume c ≠ 0. Fix 1 ≤ i ≤ N and consider, for \(t \in \mathbb{N}_{a+1}\),

$$\displaystyle\begin{array}{rcl} \nabla _{\rho (a)}^{\nu }E_{ -c,\nu,\nu -i}(t,\rho (a))& =& E_{-c,\nu,-i}(t,\rho (a)),\quad \mbox{ by (3.45)} {}\\ & =& \sum _{k=0}^{\infty }(-c)^{k}H_{\nu k-i}(t,\rho (a)) {}\\ & =& \sum _{k=1}^{\infty }(-c)^{k}H_{\nu k-i}(t,\rho (a)) {}\\ & =& \sum _{k=0}^{\infty }(-c)^{k+1}H_{\nu (k+1)-i}(t,\rho (a)) {}\\ & =& -c\sum _{k=0}^{\infty }(-c)^{k}H_{\nu k+(\nu -i)}(t,\rho (a)) {}\\ & =& -cE_{-c,\nu,\nu -i}(t,\rho (a)). {}\\ \end{array}$$

Hence, for each 1 ≤ i ≤ N, \(E_{-c,\nu,\nu -i}(t,\rho (a))\) is a solution of \(\nabla _{\rho (a)}^{\nu }x(t) + cx(t) = 0\) on \(\mathbb{N}_{a}\). It follows that a general solution of \(\nabla _{\rho (a)}^{\nu }x(t) + cx(t) = 0\) is given by

$$\displaystyle{x(t) = c_{1}E_{-c,\nu,\nu -1}(t,\rho (a)) + c_{2}E_{-c,\nu,\nu -2}(t,\rho (a)) + \cdots + c_{N}E_{-c,\nu,\nu -N}(t,\rho (a))}$$

for \(t \in \mathbb{N}_{a}\). □ 
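Theorem 3.101 with N = 1 can be tested numerically using the expanded form of the fractional difference; in the sketch below (helper names ours, truncated series), \(E_{-c,\nu,\nu-1}(t,\rho(a))\) annihilates \(\nabla _{\rho (a)}^{\nu } + c\) on \(\mathbb{N}_{a+1}\).

```python
# Numerical sketch of Theorem 3.101 with N = 1 (0 < nu < 1).
from math import gamma

def H(nu, t, b):
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

def ML(p, alpha, beta, t, b, K=200):
    # Nabla Mittag-Leffler E_{p,alpha,beta}(t,b), truncated series
    return sum(p ** k * H(alpha * k + beta, t, b) for k in range(K))

def frac_diff(nu, x, b, t):
    # Expanded form: nabla_b^nu x(t) = sum_{tau=b+1}^t H_{-nu-1}(t, rho(tau)) x(tau)
    return sum(H(-nu - 1, t, tau - 1) * x(tau) for tau in range(b + 1, t + 1))

a, c, nu = 0, 0.3, 0.5
x = lambda t: ML(-c, nu, nu - 1, t, a - 1)     # E_{-c,nu,nu-1}(t, rho(a))
for t in range(a + 1, a + 7):
    assert abs(frac_diff(nu, x, a - 1, t) + c * x(t)) < 1e-8
```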

The following example was suggested by Jia Baoguo.

Example 3.102.

Consider the second order nabla difference equation

$$\displaystyle{ \nabla ^{2}x(t) + cx(t) = 0,\quad \quad 0 < c < 1. }$$
(3.46)

From Definition 3.98, Theorem 3.50, and Theorem 3.101, we have the following are two solutions of (3.46):

$$\displaystyle\begin{array}{rcl} E_{-c,2,1}(t,a)& =& \sum _{k=0}^{\infty }(-c)^{k}H_{ 2k+1}(t,a) \\ & =& \frac{1} {\sqrt{c}}\sum _{k=0}^{\infty }(-1)^{k}(\sqrt{c})^{2k+1}H_{ 2k+1}(t,a) \\ & =& \frac{1} {\sqrt{c}}\mbox{ Sin}_{\sqrt{c}}(t,a). {}\end{array}$$
(3.47)

and

$$\displaystyle\begin{array}{rcl} E_{-c,2,0}(t,a)& =& \sum _{k=0}^{\infty }(-c)^{k}H_{ 2k}(t,a) \\ & =& \sum _{k=0}^{\infty }(-1)^{k}(\sqrt{c})^{2k}H_{ 2k}(t,a) \\ & =& \mbox{ Cos}_{\sqrt{c}}(t,a). {}\end{array}$$
(3.48)

The characteristic values of the equation (3.46) are \(\lambda _{1,2} = \pm \sqrt{c}i\). So the solutions of (3.46) are \(x_{1}(t,a) = E_{\sqrt{c}i}(t,a)\) and \(x_{2}(t,a) = E_{-\sqrt{c}i}(t,a)\). So

$$\displaystyle\begin{array}{rcl} E_{-\sqrt{c}i}(t,a) = (1 + \sqrt{c}i)^{a-t}& =& (1 + c)^{\frac{a-t} {2} }[\cos \theta +i\sin \theta ]^{a-t} \\ & =& (1 + c)^{\frac{a-t} {2} }[\cos (a - t)\theta + i\sin (a - t)\theta ]{}\end{array}$$
(3.49)

and

$$\displaystyle\begin{array}{rcl} E_{\sqrt{c}i}(t,a) = (1 -\sqrt{c}i)^{a-t} = (1 + c)^{\frac{a-t} {2} }[\cos (a - t)\theta - i\sin (a - t)\theta ],& &{}\end{array}$$
(3.50)

where \(\cos \theta = \frac{1} {\sqrt{1+c}}\), \(\sin \theta = \frac{\sqrt{c}} {\sqrt{1+c}}\).

From the definitions (see Definition 3.17) of \(\mbox{ Cos}_{\sqrt{c}}(t,a)\) and \(\mbox{ Sin}_{\sqrt{c}}(t,a)\), we have

$$\displaystyle\begin{array}{rcl} \mbox{ Cos}_{\sqrt{c}}(t,a)& =& \frac{E_{\sqrt{c}i}(t,a) + E_{-\sqrt{c}i}(t,a)} {2} \\ & =& (1 + c)^{\frac{a-t} {2} }\cos (a - t)\theta {}\end{array}$$
(3.51)

and

$$\displaystyle\begin{array}{rcl} \mbox{ Sin}_{\sqrt{c}}(t,a)& =& \frac{E_{\sqrt{c}i}(t,a) - E_{-\sqrt{c}i}(t,a)} {2i} \\ & =& -(1 + c)^{\frac{a-t} {2} }\sin (a - t)\theta. {}\end{array}$$
(3.52)

Consequently, using (3.47), (3.48), (3.51), and (3.52), we find that

$$\displaystyle{E_{-c,2,1}(t,a) = - \frac{1} {\sqrt{c}}(1 + c)^{\frac{a-t} {2} }\sin (a - t)\theta }$$

and that

$$\displaystyle{E_{-c,2,0}(t,a) = (1 + c)^{\frac{a-t} {2} }\cos (a - t)\theta.}$$

Thus, from Theorem 3.101, the general solution of the equation (3.46) is given by

$$\displaystyle{x(t) = (1 + c)^{\frac{a-t} {2} }[c_{1}\sin (a - t)\theta + c_{2}\cos (a - t)\theta ].}$$

Finally, the real part

$$\displaystyle{y_{1}(t,a) = (1 + c)^{\frac{a-t} {2} }\cos (a - t)\theta }$$

and the imaginary part

$$\displaystyle{y_{2}(t,a) = (1 + c)^{\frac{a-t} {2} }\sin (a - t)\theta }$$

are solutions of (3.46).
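The closed forms obtained in this example can be checked against the defining Mittag–Leffler series; the sketch below (helper names ours, truncated series) uses c = 1∕2.

```python
# Numerical check of the closed forms in Example 3.102.
from math import gamma, atan, sqrt, sin, cos

def H(nu, t, b):
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

def ML(p, alpha, beta, t, b, K=80):
    # Nabla Mittag-Leffler function, truncated series
    return sum(p ** k * H(alpha * k + beta, t, b) for k in range(K))

a, c = 0, 0.5
theta = atan(sqrt(c))        # so cos(theta) = 1/sqrt(1+c), sin(theta) = sqrt(c)/sqrt(1+c)
for t in range(0, 7):
    amp = (1 + c) ** ((a - t) / 2)
    # E_{-c,2,1}(t,a) = -(1/sqrt(c)) amp sin((a-t)theta)
    assert abs(ML(-c, 2, 1, t, a) + amp * sin((a - t) * theta) / sqrt(c)) < 1e-9
    # E_{-c,2,0}(t,a) = amp cos((a-t)theta)
    assert abs(ML(-c, 2, 0, t, a) - amp * cos((a - t) * theta)) < 1e-9
```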

We will now determine the Laplace transform of the Mittag–Leffler function.

Theorem 3.103.

Assume p is a constant with |p| < 1, α > 0, and \(\beta \in \mathbb{R}\). Then

$$\displaystyle{\mathcal{L}_{a}\{E_{p,\alpha,\beta }(\cdot,a)\}(s) = \frac{s^{\alpha -\beta -1}} {s^{\alpha } - p}.}$$

for |1 − s| < 1 and \(\vert s^{\alpha }\vert > \vert p\vert \).

Proof.

Note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{E_{p,\alpha,\beta }(\cdot,a)\}(s)& =& \sum _{k=0}^{\infty }p^{k}\mathcal{L}_{ a}\{H_{\alpha k+\beta }(\cdot,a)\}(s) {}\\ & =& \frac{1} {s^{\beta +1}}\sum _{k=0}^{\infty }\left (\frac{p} {s^{\alpha }}\right )^{k} {}\\ & =& \frac{s^{\alpha -\beta -1}} {s^{\alpha } - p}. {}\\ \end{array}$$

This completes the proof. □ 
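A numerical sketch of Theorem 3.103 (helper names ours) with p = 0.4, α = 1∕2, β = 1∕4 and s = 1.35, which lies in the stated convergence region:

```python
# Numerical sketch of Theorem 3.103 (transform of the Mittag-Leffler function).
from math import gamma

def H(nu, t, b):
    if t == b:
        return 1.0 if nu == 0 else 0.0
    return gamma(t - b + nu) / (gamma(t - b) * gamma(nu + 1))

def ML(p, alpha, beta, t, b, K=200):
    return sum(p ** k * H(alpha * k + beta, t, b) for k in range(K))

def laplace(f, a, s, K=60):
    return sum((1 - s) ** (k - 1) * f(a + k) for k in range(1, K + 1))

a, p, alpha, beta, s = 0, 0.4, 0.5, 0.25, 1.35
lhs = laplace(lambda t: ML(p, alpha, beta, t, a), a, s)
rhs = s ** (alpha - beta - 1) / (s ** alpha - p)
assert abs(lhs - rhs) < 1e-5
```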

3.16 Solutions to Initial Value Problems

We will now consider a ν-th order fractional nabla initial value problem and give a formula for its solution in case 0 < ν < 1.

Theorem 3.104 (Fractional Variation of Constants Formula [119]).

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) , |c| < 1 and 0 < ν < 1. Then the unique solution of the fractional initial value problem

$$\displaystyle{ \left \{\begin{array}{l} \nabla _{\rho (a)}^{\nu }x(t) + cx(t) = f(t),\quad t \in \mathbb{N}_{a+1}, \\ x(a) = A,\quad A \in \mathbb{R}\end{array} \right. }$$
(3.53)

is given by

$$\displaystyle{ x(t) =\big [E_{-c,\nu,\nu -1}(\cdot,\rho (a)) {\ast} f(\cdot )\big](t) +\big [A(c + 1) - f(a)\big]E_{-c,\nu,\nu -1}(t,\rho (a)). }$$
(3.54)

Proof.

We begin by taking the Laplace transform based at a of both sides of the fractional equation in (3.53) to get that

$$\displaystyle{\mathcal{L}_{a}\{\nabla _{\rho (a)}^{\nu }x\}(s) + c\mathcal{L}_{ a}\{x\}(s) = \mathcal{L}_{a}\{f\}(s).}$$

Applying Theorem 3.89 and using the initial condition, we have that

$$\displaystyle{(s^{\nu } + c)\mathcal{L}_{a}\{x\}(s) - A\left (\frac{1 - s^{\nu }} {1 - s}\right ) = \mathcal{L}_{a}\{f\}(s).}$$

Using Lemma 3.88 implies that

$$\displaystyle\begin{array}{rcl} & & \left (s^{\nu } + c\right )\underbrace{\mathop{\left [ \frac{1} {1-s}\mathcal{L}_{\rho (a)}\{x\}(s) - \frac{1} {1-s}x(a)\right ]}}\limits _{=\mathcal{L}_{a}\{x\}(s)} - A\left (\frac{1 - s^{\nu }} {1 - s}\right ) {}\\ & & \quad \quad \quad \quad =\underbrace{\mathop{ \frac{1} {1-s}\mathcal{L}_{\rho (a)}\{f\}(s) - \frac{1} {1-s}f(a)}}\limits _{=\mathcal{L}_{a}\{f\}(s)}. {}\\ \end{array}$$

Multiplying both sides of the preceding equality by (1 − s) and then solving for \(\mathcal{L}_{\rho (a)}\{x\}(s)\) we obtain

$$\displaystyle{ \mathcal{L}_{\rho (a)}\{x\}(s) = \frac{1} {s^{\nu } + c}\mathcal{L}_{\rho (a)}\{f\}(s) + \left [A(c + 1) - f(a)\right ] \frac{1} {s^{\nu } + c}. }$$
(3.55)

Since

$$\displaystyle{\mathcal{L}_{\rho (a)}\{E_{-c,\nu,\nu -1}(\cdot,\rho (a))\}(s) = \frac{1} {s^{\nu } + c},}$$

we have by the convolution theorem that

$$\displaystyle{ x(t) =\big (E_{-c,\nu,\nu -1}(\cdot,\rho (a)) {\ast} f(\cdot )\big)(t) + \left [A(c + 1) - f(a)\right ]E_{-c,\nu,\nu -1}(t,\rho (a)) }$$
(3.56)

for \(t \in \mathbb{N}_{a+1}\). □ 

Letting c = 0 in the above fractional initial value problem, we get the following corollary.

Corollary 3.105.

Let \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and 0 < ν < 1. Then the unique solution of the fractional IVP

$$\displaystyle{\left \{\begin{array}{l} \nabla _{\rho (a)}^{\nu }x(t) = f(t),\quad t \in \mathbb{N}_{a+1} \\ x(a) = A,\quad A \in \mathbb{R} \end{array} \right.}$$

is given by

$$\displaystyle{ x(t) = \nabla _{\rho (a)}^{-\nu }f(t) +\big (A - f(a)\big)H_{\nu -1}(t,\rho (a)). }$$
(3.57)

Proof.

First, we observe that

$$\displaystyle{E_{0,\nu,\nu -1}(t,\rho (a)) = H_{\nu -1}(t,\rho (a)).}$$

Finally, we have \(\big(E_{0,\nu,\nu -1}(\cdot,\rho (a)) {\ast} f(\cdot )\big)(t) = \left [H_{\nu -1}(\cdot,\rho (a)) {\ast} f(\cdot )\right ](t) = \nabla _{\rho (a)}^{-\nu }f(t)\) by Theorem 3.80. From this, the stated solution to the initial value problem follows. □ 

Example 3.106.

Use Corollary 3.105 to solve the IVP

$$\displaystyle\begin{array}{rcl} \nabla _{\rho (0)}^{\frac{1} {2} }x(t)& =& t,\quad t \in \mathbb{N}_{1} {}\\ x(0)& =& 2. {}\\ \end{array}$$

By the variation of constants formula (3.57), the solution of this IVP is given by

$$\displaystyle\begin{array}{rcl} x(t)& =& \nabla _{\rho (0)}^{-\frac{1} {2} }H_{1}(t,0) + (2 - 0)H_{-\frac{1} {2} }(t,\rho (0)) {}\\ & =& \nabla _{\rho (0)}^{-\frac{1} {2} }H_{1}(t,0) + 2\frac{(t -\rho (0))^{\overline{ -\frac{1} {2}}}} {\Gamma (\frac{1} {2})} {}\\ & =& \int _{\rho (0)}^{0}H_{ -\frac{1} {2} }(t,\rho (s))s\nabla s +\int _{ 0}^{t}H_{ -\frac{1} {2} }(t,\rho (s))s\nabla s + 2\frac{(t -\rho (0))^{\overline{ -\frac{1} {2}}}} {\Gamma (\frac{1} {2})} {}\\ & =& \int _{0}^{t}H_{ -\frac{1} {2} }(t,\rho (s))s\nabla s + 2\frac{(t -\rho (0))^{\overline{ -\frac{1} {2}}}} {\Gamma (\frac{1} {2})} {}\\ & =& \nabla _{0}^{-\frac{1} {2} }H_{1}(t,0) + 2\frac{(t -\rho (0))^{\overline{ -\frac{1} {2}}}} {\Gamma (\frac{1} {2})} {}\\ & =& \nabla _{0}^{-\frac{1} {2} }H_{1}(t,0) + 2\frac{(t + 1)^{\overline{ -\frac{1} {2}}}} {\Gamma (\frac{1} {2})} {}\\ & =& H_{\frac{3} {2} }(t,0) + \frac{2} {\sqrt{\pi }}(t + 1)^{\overline{ -\frac{1} {2}}} {}\\ & =& \frac{t^{\overline{\frac{3} {2}}}} {\Gamma (\frac{5} {2})} + \frac{2} {\sqrt{\pi }}(t + 1)^{\overline{ -\frac{1} {2}}} {}\\ & =& \frac{4} {3\sqrt{\pi }}t^{\overline{\frac{3} {2}}} + \frac{2} {\sqrt{\pi }}(t + 1)^{\overline{ -\frac{1} {2}}}, {}\\ \end{array}$$

where we used Theorem 3.93 in step seven.
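Since the nabla integrals above are finite sums, this example can be checked numerically. The following Python sketch (our own illustration, not part of the original text; the helper names `H` and `frac_sum` are ours) builds x(t) from the variation of constants formula (3.57) and verifies that x(0) = 2 and that the one-half order nabla difference based at ρ(0) of x equals t:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, b, t):
    # nu-th nabla fractional sum based at b: sum_{tau=b+1}^{t} H_{nu-1}(t, tau-1) f(tau)
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(b + 1, t + 1))

nu, a, A = 0.5, 0, 2.0
f = lambda t: float(t)

# solution from (3.57); the base point is rho(a) = a - 1
x = lambda t: frac_sum(nu, f, a - 1, t) + (A - f(a)) * H(nu - 1, t, a - 1)

# for 0 < nu < 1: nabla_{rho(a)}^{nu} x(t) = nabla( nabla_{rho(a)}^{-(1-nu)} x )(t)
def frac_diff(t):
    g = lambda s: frac_sum(1 - nu, x, a - 1, s)
    return g(t) - g(t - 1)

assert abs(x(a) - A) < 1e-12                 # initial condition x(0) = 2
for t in range(1, 8):
    assert abs(frac_diff(t) - f(t)) < 1e-9   # the fractional equation holds
print("IVP check passed")
```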

3.17 Nabla Fractional Composition Rules

In this section we prove several composition rules for nabla fractional sums and differences. Most of these results can be found in Ahrendt et al. [3]. First we prove the following formula for the composition of two fractional sums.

Theorem 3.107.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) , and ν,μ > 0. Then

$$\displaystyle{ \nabla _{a}^{-\nu }\nabla _{ a}^{-\mu }f(t) = \nabla _{ a}^{-\nu -\mu }f(t),\quad t \in \mathbb{N}_{ a}. }$$

Proof.

Note that

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{\nabla _{a}^{-\nu }\nabla _{ a}^{-\mu }f\}(s)& =& \frac{1} {s^{\nu }}\mathcal{L}_{a}\{\nabla _{a}^{-\mu }f\}(s) {}\\ & =& \frac{1} {s^{\nu +\mu }}\mathcal{L}_{a}\{f\}(s) {}\\ & =& \mathcal{L}_{a}\{\nabla _{a}^{-\nu -\mu }f\}(s). {}\\ & & {}\\ \end{array}$$

By the uniqueness theorem for Laplace transforms (Theorem 3.72), we have

$$\displaystyle{ \nabla _{a}^{-\nu }\nabla _{ a}^{-\mu }f(t) = \nabla _{ a}^{-\nu -\mu }f(t) }$$

for \(t \in \mathbb{N}_{a+1}\). Also

$$\displaystyle{\nabla _{a}^{-\nu }\nabla _{ a}^{-\mu }f(a) = 0 = \nabla _{ a}^{-\nu -\mu }f(a).}$$

 □ 
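Because Theorem 3.107 involves only finite sums, it can be spot-checked numerically. The Python sketch below (our own illustration; `H` and `frac_sum` are our helper names, and the sample values of ν, μ, and f are our choices) compares the iterated and the combined fractional sums:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    # nabla_a^{-nu} f(t) = sum_{tau=a+1}^{t} H_{nu-1}(t, tau-1) f(tau)
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu, mu = 0, 0.4, 1.3
f = lambda t: t * t + 1.0

inner = lambda s: frac_sum(mu, f, a, s)
for t in range(a, a + 8):
    assert abs(frac_sum(nu, inner, a, t) - frac_sum(nu + mu, f, a, t)) < 1e-9
print("composition rule verified")
```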

Next we prove a theorem concerning the composition of an integer-order difference with a fractional sum and with a fractional difference. This result was first proved in this generality by Ahrendt et al. [3].

Lemma 3.108.

Let \(k \in \mathbb{N}_{0}\) , μ > 0, and choose \(N \in \mathbb{N}\) such that N − 1 < μ ≤ N. Then

$$\displaystyle{ \nabla ^{k}\nabla _{ a}^{-\mu }f(t) = \nabla _{ a}^{k-\mu }f(t), }$$
(3.58)

and

$$\displaystyle{ \nabla ^{k}\nabla _{ a}^{\mu }f(t) = \nabla _{ a}^{k+\mu }f(t), }$$
(3.59)

for \(t \in \mathbb{N}_{a+k}\) .

Proof.

Assume \(k \in \mathbb{N}_{0}\), μ > 0, and choose \(N \in \mathbb{N}_{1}\) such that N − 1 < μ ≤ N. First we prove (3.58) for μ = N. To see this first note that

$$\displaystyle\begin{array}{rcl} \nabla \nabla _{a}^{-1}f(t)& =& \nabla \int _{ a}^{t}H_{ 0}(t,\rho (\tau ))f(\tau )\nabla \tau {}\\ & =& \nabla \int _{a}^{t}f(\tau )\nabla \tau {}\\ & =& f(t),\quad t \in \mathbb{N}_{a+1}. {}\\ \end{array}$$

So, then, for the case of μ = N we have

$$\displaystyle\begin{array}{rcl} \nabla ^{k}\nabla _{ a}^{-N}f(t)& =& \nabla ^{k-1}[\nabla \nabla _{ a}^{-1}(\nabla _{ a}^{-(N-1)}f(t))] {}\\ & =& \nabla ^{k-1}\nabla _{a}^{-(N-1)}f(t) {}\\ & =& \nabla ^{k-2}\nabla _{a}^{-(N-2)}f(t) {}\\ & \cdots & {}\\ & =& \nabla _{a}^{-(N-k)}f(t) {}\\ & =& \nabla _{a}^{k-N}f(t),\quad t \in \mathbb{N}_{ a+k}. {}\\ \end{array}$$

Hence (3.58) holds for μ = N. Now we consider (3.59). It is trivial to prove (3.59) when μ = N, so we assume N − 1 < μ < N. First we will show that (3.59) holds for the base case

$$\displaystyle{\nabla \nabla _{a}^{\mu }f(t) = \nabla _{ a}^{1+\mu }f(t),\quad t \in \mathbb{N}_{ a+1}.}$$

This follows from the following:

$$\displaystyle\begin{array}{rcl} & & \nabla \nabla _{a}^{\mu }f(t) {}\\ & & = \nabla \left (\int _{a}^{t}H_{ -\mu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \right ) {}\\ & & =\int _{ a}^{t}H_{ -\mu -2}(t,\rho (\tau ))f(\tau )\nabla \tau + H_{-\mu -1}(\rho (t),\rho (t))f(t)\quad (\mbox{ by (3.22)}) {}\\ & & =\int _{ a}^{t}H_{ -\mu -2}(t,\rho (\tau ))f(\tau )\nabla \tau {}\\ & & = \nabla _{a}^{1+\mu }f(t). {}\\ \end{array}$$

Then, for any \(k \in \mathbb{N}_{0}\),

$$\displaystyle\begin{array}{rcl} \nabla ^{k}\nabla _{ a}^{\mu }f(t)& =& \nabla ^{k-1}(\nabla \nabla _{ a}^{\mu }f(t)) {}\\ & =& \nabla ^{k-1}\nabla _{ a}^{1+\mu }f(t) {}\\ & =& \nabla ^{k-2}\nabla _{ a}^{2+\mu }f(t) {}\\ & \cdots & {}\\ & =& \nabla _{a}^{k+\mu }f(t),\quad t \in \mathbb{N}_{ a+k}, {}\\ \end{array}$$

which shows (3.59) holds for this case. In case N − 1 < μ < N, noticing that \(\nabla _{a}^{k+\mu }f(t)\) can be obtained from \(\nabla _{a}^{k-\mu }f(t)\) by replacing μ by −μ, we obtain by a similar argument that (3.58) holds for the case N − 1 < μ < N. And this completes the proof. □ 

Theorem 3.109.

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and ν,μ > 0. Then

$$\displaystyle{ \nabla _{a}^{\nu }\nabla _{ a}^{-\mu }f(t) = \nabla _{ a}^{\nu -\mu }f(t). }$$

Proof.

Let ν, μ > 0 be given, and \(N \in \mathbb{N}\) such that N − 1 < ν ≤ N. Then we have

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }\nabla _{ a}^{-\mu }f(t)& =& \nabla ^{N}\nabla _{ a}^{-(N-\nu )}\nabla _{ a}^{-\mu }f(t) {}\\ & =& \nabla ^{N}\nabla _{ a}^{-(N-\nu )-\mu }f(t)\quad \text{by Theorem 3.107} {}\\ & =& \nabla _{a}^{N-N+\nu -\mu }f(t)\quad \text{by Lemma 3.108} {}\\ & =& \nabla _{a}^{\nu -\mu }f(t). {}\\ \end{array}$$

This completes the proof. □ 
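Theorem 3.109 can likewise be checked numerically when 0 < ν, μ < 1, writing each fractional difference as ∇ applied to a fractional sum. A Python sketch (our own illustration; the helper names and the sample ν, μ, f are ours):

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    # nabla_a^{-nu} f(t)
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

def frac_diff(nu, f, a, t):
    # for 0 < nu < 1: nabla_a^{nu} f(t) = nabla( nabla_a^{-(1-nu)} f )(t)
    g = lambda s: frac_sum(1 - nu, f, a, s)
    return g(t) - g(t - 1)

a, nu, mu = 0, 0.6, 0.25           # nu - mu = 0.35 is again in (0, 1)
f = lambda t: 2.0 ** (-t) + t
inner = lambda s: frac_sum(mu, f, a, s)

for t in range(a + 1, a + 8):
    assert abs(frac_diff(nu, inner, a, t) - frac_diff(nu - mu, f, a, t)) < 1e-9
print("Theorem 3.109 verified for sample values")
```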

The following theorem for N = 1 appears in Hein et al. [119] and for general N appears in Ahrendt et al. [3].

Theorem 3.110.

Assume the nabla Laplace transform of \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) exists for |s − 1| < r, r > 0, ν > 0, and pick \(N \in \mathbb{N}_{1}\) so that N − 1 < ν ≤ N. Then

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+N}\{\nabla _{a}^{\nu }f\}(s)& =& s^{\nu }\mathcal{L}_{ a+N}\{f\}(s) +\sum \limits _{ k=0}^{N-1}\bigg[ \frac{s^{\nu }} {(1 - s)^{N-k}}f(a + k + 1) {}\\ & \quad -& \frac{s^{N}} {(1 - s)^{N-k}}\nabla _{a}^{-(N-\nu )}f(a + k + 1) {}\\ & \quad -& \nabla ^{N-k-1}\nabla _{ a}^{-(N-\nu )}f(a + N)s^{k}\bigg], {}\\ \end{array}$$

for |s − 1| < r.

Proof.

Consider

$$\displaystyle\begin{array}{rcl} & & \mathcal{L}_{a+N}\{\nabla _{a}^{\nu }f\}(s) = \mathcal{L}_{ a+N}\{\nabla ^{N}\nabla _{ a}^{-(N-\nu )}f\}(s) {}\\ & & = s^{N}\mathcal{L}_{ a+N}\{\nabla _{a}^{-(N-\nu )}f\}(s) -\sum _{ k=1}^{N}s^{N-k}\nabla ^{k-1}\nabla _{ a}^{-(N-\nu )}f(a + N)\quad \mbox{ by 3.85} {}\\ & & = s^{N}\left [ \frac{1} {(1 - s)^{N}}\mathcal{L}_{a}\{\nabla _{a}^{-(N-\nu )}f\}(s) -\sum _{ k=1}^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + k)} {(1 - s)^{N-k+1}} \right ] {}\\ & & \quad \quad \quad -\sum _{k=0}^{N-1}s^{k}\nabla ^{N-k-1}\nabla _{ a}^{-(N-\nu )}f(a + N)\quad \mbox{ by Lemma 3.88} {}\\ & & = \frac{s^{\nu }} {(1 - s)^{N}}\mathcal{L}_{a}\{f\}(s) - s^{N}\sum _{ k=1}^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + k)} {(1 - s)^{N-k+1}} {}\\ & & \quad \quad \quad -\sum _{k=0}^{N-1}s^{k}\nabla ^{N-k-1}\nabla _{ a}^{-(N-\nu )}f(a + N) {}\\ & & = s^{\nu }\bigg[\mathcal{L}_{a+N}\{f\}(s) +\sum _{ k=1}^{N} \frac{f(a + k)} {(1 - s)^{N-k+1}}\bigg] - s^{N}\sum _{ k=1}^{N}\frac{\nabla _{a}^{-(N-\nu )}f(a + k)} {(1 - s)^{N-k+1}} {}\\ & & \quad \quad \quad -\sum _{k=0}^{N-1}s^{k}\nabla ^{N-k-1}\nabla _{ a}^{-(N-\nu )}f(a + N)\quad \mbox{ by Lemma 3.88} {}\\ & & = s^{\nu }\mathcal{L}_{a+N}\{f\}(s) +\sum \limits _{ k=0}^{N-1}\bigg[ \frac{s^{\nu }} {(1 - s)^{N-k}}f(a + k + 1) {}\\ & & \quad \quad \quad - \frac{s^{N}} {(1 - s)^{N-k}}\nabla _{a}^{-(N-\nu )}f(a + k + 1) -\nabla ^{N-k-1}\nabla _{ a}^{-(N-\nu )}f(a + N)s^{k}\bigg], {}\\ \end{array}$$

which is what we wanted to prove. □ 

Theorem 3.111.

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) , ν > 0 and \(k \in \mathbb{N}_{0}\) . Then

$$\displaystyle{ \nabla _{a+k}^{-\nu }\nabla ^{k}f(t) = \nabla _{ a+k}^{k-\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)H_{\nu -k+j}(t,a + k). }$$

Proof.

Integrating by parts on two different occasions below we get

$$\displaystyle\begin{array}{rcl} & & \nabla _{a+k}^{-\nu }\nabla ^{k}f(t) =\int _{ a+k}^{t}H_{\nu -1}(t,\rho (\tau ))\nabla ^{k}f(\tau )\nabla \tau {}\\ & & =\int _{ a+k}^{t}H_{\nu -1}(t,\rho (\tau ))\nabla \nabla ^{k-1}f(\tau )\nabla \tau {}\\ & & = \nabla ^{k-1}f(\tau )H_{\nu -1}(t,\tau )\big\vert _{\tau =a+k}^{\tau =t} +\int _{ a+k}^{t}H_{\nu -2}(t,\rho (\tau ))\nabla ^{k-1}f(\tau )\nabla \tau {}\\ & & = -\nabla ^{k-1}f(a + k)H_{\nu -1}(t,a + k) +\int _{ a+k}^{t}H_{\nu -2}(t,\rho (\tau ))\nabla ^{k-1}f(\tau )\nabla \tau {}\\ & & = \nabla _{a+k}^{-(\nu -1)}\nabla ^{k-1}f(t) -\nabla ^{k-1}f(a + k)H_{\nu -1}(t,a + k) {}\\ & & = \nabla [\nabla _{a+k}^{-\nu }\nabla ^{k-1}f(t)] -\nabla ^{k-1}f(a + k)H_{\nu -1}(t,a + k) {}\\ & & = \nabla _{a+k}^{-(\nu -2)}\nabla ^{k-2}f(t) -\nabla ^{k-2}f(a + k)H_{\nu -2}(t,a + k) {}\\ & & \quad -\nabla ^{k-1}f(a + k)H_{\nu -1}(t,a + k). {}\\ \end{array}$$

Integrating by parts k − 2 more times gives

$$\displaystyle{ \nabla _{a+k}^{-\nu }\nabla ^{k}f(t) = \nabla _{ a+k}^{k-\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)H_{\nu -k+j}(t,a + k), }$$

which was what we wanted to prove. □ 

Theorem 3.112.

Let ν > 0 and \(k \in \mathbb{N}_{0}\) be given and choose \(N \in \mathbb{N}\) such that N − 1 < ν ≤ N. Then

$$\displaystyle{ \nabla _{a+k}^{\nu }\nabla ^{k}f(t) = \nabla _{ a+k}^{k+\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)H_{ -\nu -k+j}(t,a + k). }$$

Proof.

Consider

$$\displaystyle\begin{array}{rcl} & & \nabla _{a+k}^{\nu }\nabla ^{k}f(t) = \nabla ^{N}(\nabla _{ a+k}^{-(N-\nu )}\nabla ^{k}f(t)) {}\\ & =& \nabla ^{N}\left (\nabla _{ a+k}^{k-(N-\nu )}f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)H_{ N-\nu -k+j}(t,a + k)\right ) {}\\ & =& \nabla _{a+k}^{k+\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)\nabla ^{N}H_{ N-\nu -k+j}(t,a + k) {}\\ & =& \nabla _{a+k}^{k+\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)\nabla ^{N-1}\nabla H_{ N-\nu -k+j}(t,a + k) {}\\ & =& \nabla _{a+k}^{k+\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)\nabla ^{N-1}H_{ N-\nu -k+j-1}(t,a + k). {}\\ \end{array}$$

Taking the difference inside the summation N − 1 more times, we get

$$\displaystyle{ \nabla _{a+k}^{\nu }\nabla ^{k}f(t) = \nabla _{ a+k}^{k+\nu }f(t) -\sum _{ j=0}^{k-1}\nabla ^{j}f(a + k)H_{ -\nu -k+j}(t,a + k), }$$

which is what we wanted to prove. □ 

Theorem 3.113.

Assume 1 < ν ≤ 2. Then the unique solution of the fractional IVP

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }x(t)& =& 0,\quad t \in \mathbb{N}_{ a+2} {}\\ x(a + 2)& =& A_{0}, {}\\ \nabla x(a + 2)& =& A_{1}, {}\\ \end{array}$$

where \(A_{0},A_{1} \in \mathbb{R},\) is given by

$$\displaystyle\begin{array}{rcl} x(t)& =& [(2-\nu )A_{0} + (\nu -1)A_{1}]H_{\nu -1}(t,a) \\ & & +[(\nu -1)A_{0} -\nu A_{1}]H_{\nu -2}(t,a),{}\end{array}$$
(3.60)

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

Let x(t) be the solution of the given IVP. Since \(\nabla _{a}^{\nu }x(t) = 0\), we have that

$$\displaystyle{x(t) = c_{1}H_{\nu -1}(t,a) + c_{2}H_{\nu -2}(t,a).}$$

Using the IC’s we get that

$$\displaystyle{x(a + 1) = x(a + 2) -\nabla x(a + 2) = A_{0} - A_{1}.}$$

It follows from this that

$$\displaystyle{x(a + 1) = c_{1}H_{\nu -1}(a + 1,a) + c_{2}H_{\nu -2}(a + 1,a) = A_{0} - A_{1}.}$$

Since \(H_{\nu -1}(a + 1,a) = H_{\nu -2}(a + 1,a) = 1,\) we have that

$$\displaystyle{c_{1} + c_{2} = A_{0} - A_{1}.}$$

Since \(\nabla x(t) = c_{1}H_{\nu -2}(t,a) + c_{2}H_{\nu -3}(t,a),\) we get that

$$\displaystyle\begin{array}{rcl} \nabla x(a + 2)& =& c_{1}H_{\nu -2}(a + 2,a) + c_{2}H_{\nu -3}(a + 2,a) {}\\ & =& c_{1}(\nu -1) + c_{2}(\nu -2) {}\\ & =& A_{1}. {}\\ \end{array}$$

Solving the system

$$\displaystyle\begin{array}{rcl} & \qquad \qquad c_{1} + c_{2} = A_{0} - A_{1} & {}\\ & (\nu -1)c_{1} + (\nu -2)c_{2} = A_{1}& {}\\ \end{array}$$

we get

$$\displaystyle{c_{1} = (2-\nu )A_{0} + (\nu -1)A_{1},\qquad c_{2} = (\nu -1)A_{0} -\nu A_{1}.}$$

Hence,

$$\displaystyle{x(t) = [(2-\nu )A_{0} + (\nu -1)A_{1}]H_{\nu -1}(t,a) + [(\nu -1)A_{0} -\nu A_{1}]H_{\nu -2}(t,a),}$$

for \(t \in \mathbb{N}_{a+1}\). □ 
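For a concrete check of Theorem 3.113, the Python sketch below (our own illustration; the helper names and the choices ν = 1.5, A0 = 3, A1 = 0.4 are ours) verifies the two initial conditions and checks that the ν-th difference of the solution vanishes at the points t ≥ a + 3 beyond the initial point:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu = 0, 1.5
A0, A1 = 3.0, 0.4
c1 = (2 - nu) * A0 + (nu - 1) * A1
c2 = (nu - 1) * A0 - nu * A1
x = lambda t: c1 * H(nu - 1, t, a) + c2 * H(nu - 2, t, a)

assert abs(x(a + 2) - A0) < 1e-12                # x(a+2) = A0
assert abs((x(a + 2) - x(a + 1)) - A1) < 1e-12   # nabla x(a+2) = A1

# nabla_a^{1.5} x(t) = nabla^2 ( nabla_a^{-0.5} x )(t)
g = lambda s: frac_sum(2 - nu, x, a, s)
for t in range(a + 3, a + 10):
    assert abs(g(t) - 2 * g(t - 1) + g(t - 2)) < 1e-9
print("Theorem 3.113 checked")
```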

Next, we look at the nonhomogeneous equation with zero initial conditions.

Theorem 3.114.

Let \(g: \mathbb{N}_{a} \rightarrow \mathbb{R}\) and 1 < ν ≤ 2. Then, for \(t \in \mathbb{N}_{a+2}\) , the fractional initial value problem

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }x(t)& =& g(t),\qquad t \in \mathbb{N}_{ a+2} {}\\ x(a + 2)& =& 0 {}\\ \nabla x(a + 2)& =& 0 {}\\ \end{array}$$

has the unique solution

$$\displaystyle\begin{array}{rcl} & & x(t) = \nabla _{a}^{-\nu }g(t) - [g(a + 1) + g(a + 2)]H_{\nu -1}(t,a) \\ & & \qquad \quad + g(a + 2)H_{\nu -2}(t,a). {}\end{array}$$
(3.61)

Proof.

We take the Laplace transform based at a + 2 of both sides of the equation.

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a+2}\{\nabla _{a}^{\nu }x\}(s)& = \mathcal{L}_{ a+2}\{g\}(s).& {}\\ & & {}\\ \end{array}$$

Next, we use Theorem 3.110 and Lemma 3.88 on the left-hand side and the Laplace transform shifting theorem on the right-hand side of the equation.

$$\displaystyle\begin{array}{rcl} & & \frac{s^{\nu }} {(1 - s)^{2}}\mathcal{L}_{a}\{x\}(s) {}\\ & & \quad \quad - ( \frac{s} {1 - s})^{2}\nabla _{ a}^{-(2-\nu )}x(a + 1) -\nabla \nabla _{ a}^{-(2-\nu )}x(a + 2) {}\\ & & \quad \quad - ( \frac{s^{2}} {1 - s})\nabla _{a}^{-(2-\nu )}x(a + 2) - s\nabla _{ a}^{-(2-\nu )}x(a + 2) {}\\ & & = \frac{1} {(1 - s)^{2}}\mathcal{L}_{a}\{g\}(s) - \frac{1} {(1 - s)^{2}}g(a + 1) - \frac{1} {1 - s}g(a + 2). {}\\ \end{array}$$

Using \(x(a + 2) = 0 = \nabla x(a + 2),\) we obtain

$$\displaystyle\begin{array}{rcl} \frac{s^{\nu }} {(1 - s)^{2}}\mathcal{L}_{a}\{x\}(s)& =& \frac{1} {(1 - s)^{2}}\mathcal{L}_{a}\{g\}(s) - \frac{1} {(1 - s)^{2}}g(a + 1) {}\\ & & \quad \quad \quad \quad \quad \quad \quad \quad - \frac{1} {1 - s}g(a + 2). {}\\ \end{array}$$

Next, we solve for the Laplace transform of x(t) to obtain

$$\displaystyle\begin{array}{rcl} \mathcal{L}_{a}\{x\}(s)& =& \frac{1} {s^{\nu }}\mathcal{L}_{a}\{g\}(s) -\frac{1} {s^{\nu }}g(a + 1) -\frac{(1 - s)} {s^{\nu }} g(a + 2) {}\\ & =& [\mathcal{L}_{a}\{H_{\nu -1}(t,a)\}(s)\mathcal{L}_{a}\{g\}(s)] -\frac{1} {s^{\nu }}g(a + 1) {}\\ & & -\frac{1} {s^{\nu }}g(a + 2) + \frac{1} {s^{\nu -1}}g(a + 2). {}\\ \end{array}$$

Finally, we take the inverse Laplace transform and note that

$$\displaystyle{\nabla _{a}^{-\nu }g(t) = H_{\nu -1}(t,a) {\ast} g(t),}$$

which yields

$$\displaystyle\begin{array}{rcl} x(t)& =& [H_{\nu -1}(\cdot,a) {\ast} g(\cdot )](t) - [g(a + 1) + g(a + 2)]H_{\nu -1}(t,a) {}\\ & & +g(a + 2)H_{\nu -2}(t,a) {}\\ & =& \nabla _{a}^{-\nu }g(t) - [g(a + 1) + g(a + 2)]H_{\nu -1}(t,a) + g(a + 2)H_{\nu -2}(t,a) {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+2}\). □ 
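The same kind of numerical check applies to Theorem 3.114. The sketch below (our own illustration; the helper names, ν = 1.5, and the sample forcing term are ours) confirms the zero initial conditions and the equation at the points t ≥ a + 3:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu = 0, 1.5
gfun = lambda t: t + 0.5            # sample right-hand side g(t)

def x(t):                           # formula (3.61)
    return (frac_sum(nu, gfun, a, t)
            - (gfun(a + 1) + gfun(a + 2)) * H(nu - 1, t, a)
            + gfun(a + 2) * H(nu - 2, t, a))

assert abs(x(a + 2)) < 1e-12
assert abs(x(a + 2) - x(a + 1)) < 1e-12

# nabla_a^{1.5} x(t) = nabla^2 ( nabla_a^{-0.5} x )(t)
u = lambda s: frac_sum(2 - nu, x, a, s)
for t in range(a + 3, a + 10):
    assert abs((u(t) - 2 * u(t - 1) + u(t - 2)) - gfun(t)) < 1e-9
print("Theorem 3.114 checked")
```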

3.18 Monotonicity for the Nabla Case

Most of the results in this section appear in the paper [49]. These results were motivated by the paper by Dahal and Goodrich [67]. The results of Dahal and Goodrich are treated in Sect. 7.2. First, we derive a nabla difference inequality which plays an important role in proving our main result on monotonicity.

Theorem 3.115.

Assume that \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) , \(\nabla _{a}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) , with 1 < ν < 2. Then

$$\displaystyle\begin{array}{rcl} \nabla f(t)& \geq & -f(a + 1)[H_{-\nu -1}(t,a) + H_{-\nu }(t,a + 1)] \\ & & \quad \quad -\sum _{\tau =a+2}^{t-1}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau ) {}\end{array}$$
(3.62)
$$\displaystyle{ = -f(a + 1)\frac{(-\nu + 1)^{\overline{t - a - 1}}} {(t - a - 1)!} -\sum _{\tau =a+2}^{t-1}\frac{(-\nu + 1)^{\overline{t-\tau }}} {(t-\tau )!} \nabla f(\tau ) }$$
(3.63)

for \(t \in \mathbb{N}_{a+1}\) , where for t ≥τ,

$$\displaystyle{ H_{-\nu }(t,\rho (\tau )) = \frac{(-\nu + 1)^{\overline{t-\tau }}} {(t-\tau )!} < 0. }$$
(3.64)

Proof.

Note that

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }f(t)& =& \int _{ a}^{a+1}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau +\int _{ a+1}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \\ & =& H_{-\nu -1}(t,a)f(a + 1) +\int _{ a+1}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau.{}\end{array}$$
(3.65)

Integrating by parts and using the power rule

$$\displaystyle{\nabla _{\tau }H_{-\nu }(t,\tau ) = -H_{-\nu -1}(t,\rho (\tau ))}$$

we have that (where we use \(H_{-\nu }(t,\rho (t)) = 1\))

$$\displaystyle\begin{array}{rcl} & & \int _{a+1}^{t}H_{ -\nu -1}(t,\rho (\tau ))f(\tau )\nabla \tau \\ & =& -f(\tau )H_{-\nu }(t,\tau )\vert _{\tau =a+1}^{t} +\int _{ a+1}^{t}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau )\nabla \tau \\ & =& f(a + 1)H_{-\nu }(t,a + 1) +\sum _{ \tau =a+2}^{t}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau ) \\ & =& f(a + 1)H_{-\nu }(t,a + 1) +\sum _{ \tau =a+2}^{t-1}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau ) \\ & \; & \quad \quad \quad + H_{-\nu }(t,\rho (t))\nabla f(t) \\ & =& f(a + 1)H_{-\nu }(t,a + 1) +\sum _{ \tau =a+2}^{t-1}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau ) + \nabla f(t).{}\end{array}$$
(3.66)

Using (3.65) and (3.66), we obtain

$$\displaystyle\begin{array}{rcl} 0& \leq & \nabla _{a}^{\nu }f(t) {}\\ & =& [H_{-\nu -1}(t,a) + H_{-\nu }(t,a + 1)]f(a + 1) {}\\ & +& \sum _{\tau =a+2}^{t-1}H_{ -\nu }(t,\rho (\tau ))\nabla f(\tau ) + \nabla f(t), {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\) by assumption. Solving this last inequality for ∇f(t) we obtain the desired inequality (3.62). Next we show that (3.64) holds. This follows from the following:

$$\displaystyle\begin{array}{rcl} & & H_{-\nu }(t,\rho (\tau )) {}\\ & & = \frac{(t -\rho (\tau ))^{\overline{-\nu }}} {\Gamma (-\nu + 1)} {}\\ & & = \frac{(t -\tau +1)^{\overline{-\nu }}} {\Gamma (-\nu + 1)} {}\\ & & = \frac{\Gamma (t + 1 -\nu -\tau )} {\Gamma (t -\tau +1)\Gamma (-\nu + 1)} {}\\ & & = \frac{(-\nu + t-\tau )(-\nu + t -\tau -1)\cdots (-\nu + 1)} {(t-\tau )!} \quad (\mbox{ since}\quad t-\tau \geq 1) {}\\ & & = \frac{(-\nu + 1)^{\overline{t-\tau }}} {(t-\tau )!} {}\\ & & < 0 {}\\ \end{array}$$

since 1 < ν < 2. Also

$$\displaystyle\begin{array}{rcl} & & -[H_{-\nu -1}(t,a) + H_{-\nu }(t,a + 1)] {}\\ & =& -\Big[\frac{(t - a)^{\overline{ -\nu -1}}} {\Gamma (-\nu )} + \frac{(t - a - 1)^{\overline{-\nu }}} {\Gamma (-\nu + 1)} \Big] {}\\ & =& -\Big[\frac{\Gamma (-\nu + t - a - 1)} {\Gamma (t - a)\Gamma (-\nu )} + \frac{\Gamma (-\nu + t - a - 1)} {\Gamma (t - a - 1)\Gamma (-\nu + 1)}\Big] {}\\ & =& -\frac{(-\nu + t - a - 1)(-\nu + t - a - 2)\cdots (-\nu + 2)(-\nu + 1)} {(t - a - 1)!} {}\\ & =& - \frac{\Gamma (-\nu + t - a)} {\Gamma (-\nu + 1)(t - a - 1)!} {}\\ & =& -\frac{(-\nu + 1)^{\overline{t - a - 1}}} {(t - a - 1)!} {}\\ & >& 0. {}\\ \end{array}$$

This completes the proof. □ 

Theorem 3.116.

Assume \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) , \(\nabla _{a}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) , with 1 < ν < 2. Then ∇f(t) ≥ 0, for \(t \in \mathbb{N}_{a+2}\) .

Proof.

We prove that ∇f(a + k) ≥ 0, for k ≥ 2, by the principle of strong induction. Since \(\nabla _{a}^{\nu }f(a + 1) = f(a + 1),\) we have by assumption that f(a + 1) ≥ 0. When \(t = a + 2\), it follows that

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{\nu }f(a + 2)& =& \int _{ a}^{a+2}H_{ -\nu -1}(a + 2,\tau -1)f(\tau )\nabla \tau {}\\ & =& f(a + 2) -\nu f(a + 1) {}\\ & =& \nabla f(a + 2) - (\nu -1)f(a + 1). {}\\ \end{array}$$

From our assumption \(\nabla _{a}^{\nu }f(a + 2) \geq 0\) and the fact that \(\nabla _{a}^{\nu }f(a + 1) = f(a + 1)\), we have

$$\displaystyle\begin{array}{rcl} \nabla f(a + 2)& \geq & (\nu -1)f(a + 1) \geq 0. {}\\ \end{array}$$

Suppose k ≥ 2 and that ∇f(a + i) ≥ 0, for i = 2, 3, 4, ⋯ , k. Then from Theorem 3.115, we have \(\nabla f(a + k + 1) \geq 0\), so this completes the proof. □ 

3.19 Caputo Fractional Difference

In this section we define the μ-th Caputo fractional difference operator and give some of its properties. Many of the results in this section and related results are contained in the papers by Anastassiou [713] and the paper by Ahrendt et al. [4].

Definition 3.117.

Assume \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) and μ > 0. Then the μ-th Caputo nabla fractional difference of f is defined by

$$\displaystyle{\nabla _{a{\ast}}^{\mu }f(t) = \nabla _{ a}^{-(N-\mu )}\nabla ^{N}f(t)}$$

for \(t \in \mathbb{N}_{a+1},\) where N = ⌈μ⌉.

One nice property of the Caputo nabla fractional difference is that if μ ≥ 1 and C is any constant, then

$$\displaystyle{\nabla _{a{\ast}}^{\mu }C = 0.}$$

Note that this is not true for the nabla Riemann–Liouville fractional difference, when C ≠ 0 and μ > 0 is not an integer (see Exercise 3.28).
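The contrast between the two operators on constants is easy to see numerically. In the Python sketch below (our own illustration; the helper names and sample values are ours), the Caputo difference of a constant is identically zero while the Riemann–Liouville difference is not:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, mu, C = 0, 0.6, 5.0              # N = 1 since 0 < mu < 1
const = lambda t: C

# Caputo: nabla_{a*}^{mu} C = nabla_a^{-(1-mu)}( nabla C ) = 0, since nabla C = 0
dC = lambda t: const(t) - const(t - 1)
caputo = lambda t: frac_sum(1 - mu, dC, a, t)

# Riemann-Liouville: nabla_a^{mu} C = nabla( nabla_a^{-(1-mu)} C )
g = lambda s: frac_sum(1 - mu, const, a, s)
rl = lambda t: g(t) - g(t - 1)

for t in range(a + 1, a + 6):
    assert abs(caputo(t)) < 1e-12
    assert abs(rl(t)) > 1e-6        # nonzero for noninteger mu
print("Caputo kills constants; Riemann-Liouville does not")
```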

The following theorem follows immediately from the definition of the Caputo nabla fractional difference and the definition of the Taylor monomials.

Theorem 3.118.

Assume μ > 0 and N = ⌈μ⌉. Then the nabla Taylor monomials, \(H_{k}(t,a)\), 0 ≤ k ≤ N − 1, are N linearly independent solutions of \(\nabla _{a{\ast}}^{\mu }x = 0\) on \(\mathbb{N}_{a-N+1}\) .

The reader should compare this theorem (Theorem 3.118) to Theorem 3.95 which gives the analogous result for the nabla Riemann–Liouville case \(\nabla _{a}^{\mu }x = 0\). That is, \(H_{\mu -1-k}(t,a)\), where 0 ≤ k ≤ N − 1, are N linearly independent solutions of \(\nabla _{a}^{\mu }x = 0\).

The following result appears in Anastassiou [7].

Theorem 3.119 (Nabla Taylor’s Theorem with Caputo Differences).

Assume \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , μ > 0 and N − 1 < μ ≤ N. Then

$$\displaystyle{f(t) =\sum _{ k=0}^{N-1}H_{ k}(t,a)\nabla ^{k}f(a) +\int _{ a}^{t}H_{\mu -1}(t,\rho (\tau ))\nabla _{a{\ast}}^{\mu }f(\tau )\nabla \tau,}$$

for \(t \in \mathbb{N}_{a-N+1}\) .

Proof.

By Taylor’s Theorem (Theorem 3.48) with \(n = N - 1\), we have that

$$\displaystyle{f(t) =\sum _{ k=0}^{N-1}H_{ k}(t,a)\nabla ^{k}f(a) +\int _{ a}^{t}H_{ N-1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau,}$$

for \(t \in \mathbb{N}_{a-N+1}\). Hence to complete the proof we just need to show that

$$\displaystyle{ \int _{a}^{t}H_{ N-1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau =\int _{ a}^{t}H_{\mu -1}(t,\rho (\tau ))\nabla _{a{\ast}}^{\mu }f(\tau )\nabla \tau, }$$
(3.67)

holds for \(t \in \mathbb{N}_{a-N+1}\). By convention both integrals in (3.67) are equal to zero for \(t \in \mathbb{N}_{a-N+1}^{a}\). Hence it remains to prove that (3.67) holds for \(t \in \mathbb{N}_{a}\). To see that this is true note that

$$\displaystyle\begin{array}{rcl} \int _{a}^{t}H_{\mu -1}(t,\rho (\tau ))\nabla _{a{\ast}}^{\mu }f(\tau )\nabla \tau & =& \nabla _{ a}^{-\mu }\nabla _{ a{\ast}}^{\mu }f(t) {}\\ & =& \nabla _{a}^{-\mu }\nabla _{ a}^{-(N-\mu )}\nabla ^{N}f(t) {}\\ & =& \nabla _{a}^{-\mu -N+\mu }\nabla ^{N}f(t)\mbox{ by Theorem 3.107} {}\\ & =& \nabla _{a}^{-N}\nabla ^{N}f(t),\quad {}\\ & =& \int _{a}^{t}H_{ N-1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a}\). □ 

3.20 Nabla Fractional Initial Value Problems

In this section we will consider the nabla fractional initial value problem (IVP)

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla _{a{\ast}}^{\nu }x(t) = h(t),\quad t \in \mathbb{N}_{a+1} \\ \quad &\nabla ^{k}x(a) = c_{k},\quad 0 \leq k \leq N - 1,\end{array} \right. }$$
(3.68)

where we always assume that \(a,\nu \in \mathbb{R}\), ν > 0, N: = ⌈ν⌉, \(c_{k} \in \mathbb{R}\) for 0 ≤ k ≤ N − 1, and \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). In the next theorem we will see that this IVP has a unique solution which is defined on \(\mathbb{N}_{a-N+1}\).

Theorem 3.120.

The unique solution to the IVP (3.68) is given by

$$\displaystyle{x(t) =\sum _{ k=0}^{N-1}H_{ k}(t,a)c_{k} + \nabla _{a}^{-\nu }h(t),}$$

for \(t \in \mathbb{N}_{a-N+1}\) , where by convention \(\nabla _{a}^{-\nu }h(t) = 0\) for \(a - N + 1 \leq t \leq a\) .

Proof.

Define \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) by

$$\displaystyle{\nabla ^{k}f(a) = c_{ k},}$$

for 0 ≤ k ≤ N − 1 (note that this uniquely defines f(t) for \(a - N + 1 \leq t \leq a\)), and for \(t \in \mathbb{N}_{a+1}\) define f(t) recursively by

$$\displaystyle\begin{array}{rcl} \nabla ^{N}f(t)& =& h(t) -\int _{ a}^{t-1}H_{ N-\nu -1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau {}\\ & =& h(t) -\sum _{\tau =a+1}^{t-1}H_{ N-\nu -1}(t,\rho (\tau ))\nabla ^{N}f(\tau ). {}\\ \end{array}$$

So for any \(t \in \mathbb{N}_{a+1}\),

$$\displaystyle{ \nabla ^{N}f(t) +\int _{ a}^{t-1}H_{ N-\nu -1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau = h(t). }$$
(3.69)

It follows that

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }f(t)& =& \nabla _{ a}^{-(N-\nu )}\nabla ^{N}f(t) {}\\ & =& \int _{a}^{t}H_{ N-\nu -1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau {}\\ & =& \int _{a}^{t-1}H_{ N-\nu -1}(t,\rho (\tau ))\nabla ^{N}f(\tau )\nabla \tau + \nabla ^{N}f(t) {}\\ & =& h(t)\quad \mbox{ by (3.69)} {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). Therefore, f(t) solves the IVP (3.68). Conversely, if we suppose that there is a function \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) that satisfies the IVP, reversing the above steps would lead to the same recursive definition. Therefore the solution to the IVP is uniquely defined. By the Caputo Discrete Taylor’s Theorem, x(t) = f(t) is given by

$$\displaystyle\begin{array}{rcl} x(t)& =& \sum _{k=0}^{N-1}H_{ k}(t,a)\nabla ^{k}x(a) + \nabla _{ a}^{-\nu }\nabla _{ a{\ast}}^{\nu }x(t) {}\\ & =& \sum _{k=0}^{N-1}c_{ k}H_{k}(t,a) + \nabla _{a}^{-\nu }h(t). {}\\ \end{array}$$

 □ 

The following example appears in [4].

Example 3.121.

Solve the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla _{0{\ast}}^{0.7}x(t) = t,\quad t \in \mathbb{N}_{1} \\ \quad &x(0) = 2. \end{array} \right.}$$

Applying the variation of constants formula in Theorem 3.120 we get

$$\displaystyle\begin{array}{rcl} x(t)& =& \sum _{k=0}^{0}\frac{t^{\overline{k}}} {k!} 2 + \nabla _{0}^{-0.7}h(t) {}\\ & =& 2 +\int _{ 0}^{t}H_{ -.3}(t,\rho (s))s\nabla s {}\\ & & {}\\ \end{array}$$

for \(t \in \mathbb{N}_{0}\). Integrating by parts, we have

$$\displaystyle\begin{array}{rcl} x(t)& =& 2 - sH_{.7}(t,s)\big\vert _{s=0}^{s=t} +\int _{ 0}^{t}H_{.7}(t,\rho (s))\nabla s {}\\ & =& 2 - H_{1.7}(t,s)\big\vert _{s=0}^{s=t} {}\\ & =& 2 + \frac{1} {\Gamma (2.7)}t^{\overline{1.7}} {}\\ \end{array}$$

for \(t \in \mathbb{N}_{0}\).
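The closed form just obtained can be verified numerically. The Python sketch below (our own illustration; the helper names are ours) computes the Caputo difference of order 0.7 of x(t) = 2 + t^(rising 1.7)/Γ(2.7) as the 0.3-th fractional sum of ∇x, and checks the IVP:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

x = lambda t: 2.0 + H(1.7, t, 0)    # 2 + t^{rising 1.7}/Gamma(2.7)
dx = lambda t: x(t) - x(t - 1)

# Caputo difference of order 0.7 (here N = 1): nabla_{0*}^{0.7} x = nabla_0^{-0.3}(nabla x)
caputo = lambda t: frac_sum(0.3, dx, 0, t)

assert abs(x(0) - 2.0) < 1e-12
for t in range(1, 8):
    assert abs(caputo(t) - t) < 1e-9
print("Example 3.121 verified")
```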

Corollary 3.122.

For ν > 0, N = ⌈ν⌉, and \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) , we have that

$$\displaystyle{\nabla _{a}^{-(N-\nu )}\nabla _{ a}^{N-\nu }h(t) = h(t),\ \ t \in \mathbb{N}_{ a+1}.}$$

Proof.

Assume \(N\neq \nu \); otherwise, the proof is trivial. Let ν > 0, N = ⌈ν⌉, and \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). Let \(c_{k} \in \mathbb{R}\) for 0 ≤ k ≤ N − 1, and define \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) in terms of h by

$$\displaystyle{f(t):=\sum _{ k=0}^{N-1}\frac{(t - a)^{\overline{k}}} {k!} c_{k} + \nabla _{a}^{-\nu }h(t).}$$

Then by Theorem 3.120, f(t) solves the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla _{a{\ast}}^{\nu }f(t) = h(t),\quad t \in \mathbb{N}_{a+1} \\ \quad &\nabla ^{k}f(a) = c_{k},\ \ 0 \leq k \leq N - 1. \end{array} \right.}$$

With repeated applications of the Leibniz rule (3.22) we get

$$\displaystyle\begin{array}{rcl} \nabla ^{N}f(t)& =& \nabla ^{N}\bigg[\sum _{ k=0}^{N-1}H_{ k}(t,a)c_{k} + \nabla _{a}^{-\nu }h(t)\bigg] {}\\ & =& \nabla ^{N}\nabla _{ a}^{-\nu }h(t) {}\\ & =& \nabla ^{N}\bigg[\int _{ a}^{t}H_{\nu -1}(t,\rho (\tau ))h(\tau )\nabla \tau \bigg] {}\\ & =& \nabla ^{N-1}\bigg[\int _{ a}^{t}H_{\nu -2}(t,\rho (\tau ))h(\tau )\nabla \tau + H_{\nu -1}(\rho (t),\rho (t))h(t)\bigg] {}\\ & =& \nabla ^{N-1}\bigg[\int _{ a}^{t}H_{\nu -2}(t,\rho (\tau ))h(\tau )\nabla \tau \bigg] {}\\ & =& \nabla ^{N-2}\bigg[\int _{ a}^{t}H_{\nu -3}(t,\rho (\tau ))h(\tau )\nabla \tau + H_{\nu -2}(\rho (t),\rho (t))h(t)\bigg] {}\\ & =& \nabla ^{N-2}\bigg[\int _{ a}^{t}H_{\nu -3}(t,\rho (\tau ))h(\tau )\nabla \tau \bigg] {}\\ & =& \qquad \qquad \qquad \cdots {}\\ & =& \nabla \bigg[\int _{a}^{t}H_{\nu -N}(t,\rho (\tau ))h(\tau )\nabla \tau \bigg] {}\\ & =& \int _{a}^{t}H_{\nu -N-1}(t,\rho (\tau ))h(\tau )\nabla \tau + H_{\nu -N}(\rho (t),\rho (t))h(t) {}\\ & =& \int _{a}^{t}H_{\nu -N-1}(t,\rho (\tau ))h(\tau )\nabla \tau {}\\ & =& \int _{a}^{t}H_{ -(N-\nu )-1}(t,\rho (\tau ))h(\tau )\nabla \tau {}\\ & =& \nabla _{a}^{N-\nu }h(t). {}\\ \end{array}$$

It follows that

$$\displaystyle{\nabla _{a}^{-(N-\nu )}\nabla _{ a}^{N-\nu }h(t) = \nabla _{ a}^{-(N-\nu )}\nabla ^{N}f(t) = \nabla _{ a{\ast}}^{\nu }f(t) = h(t),}$$

and the proof is complete. □ 
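Corollary 3.122 can also be confirmed numerically for a sample h. The sketch below (our own illustration; the helper names and the choice of h are ours) takes ν = 0.7, so that N = 1 and N − ν = 0.3:

```python
from math import gamma

def H(mu, t, s):
    # nabla Taylor monomial H_mu(t,s) = (t-s)^{rising mu}/Gamma(mu+1); 0 for t <= s
    n = t - s
    if n <= 0:
        return 0.0
    return gamma(n + mu) / (gamma(n) * gamma(mu + 1))

def frac_sum(nu, f, a, t):
    return sum(H(nu - 1, t, tau - 1) * f(tau) for tau in range(a + 1, t + 1))

a, nu = 0, 0.7
h = lambda t: 3.0 ** (-t) + t * t

# nabla_a^{0.3} h(t) = nabla( nabla_a^{-0.7} h )(t)
g = lambda s: frac_sum(nu, h, a, s)
rl = lambda t: g(t) - g(t - 1)

# nabla_a^{-0.3} applied to nabla_a^{0.3} h recovers h on N_{a+1}
for t in range(a + 1, a + 8):
    assert abs(frac_sum(1 - nu, rl, a, t) - h(t)) < 1e-9
print("Corollary 3.122 verified")
```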

3.21 Monotonicity (Caputo Case)

Many of the results in this section appear in Baoguo et al. [49]. This work is motivated by the paper by R. Dahal and C. Goodrich [67], where they obtained some interesting monotonicity results for the delta fractional difference operator. These monotonicity results for the delta case will be discussed in Sect. 7.2 In this section, we prove the following corresponding results for Caputo fractional differences.

Theorem 3.123.

Assume that N − 1 < ν < N, \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) , and \(\nabla ^{N-1}f(a) \geq 0\) . Then \(\nabla ^{N-1}f(t) \geq 0\) for \(t \in \mathbb{N}_{a}\) .

Theorem 3.124.

Assume N − 1 < ν < N, \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , and \(\nabla ^{N}f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) .

When N = 2 in Theorem 3.123 we get the important monotonicity result.

Theorem 3.125.

Assume that 1 < ν < 2, \(f: \mathbb{N}_{\rho (a)} \rightarrow \mathbb{R}\) , \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) and f(a) ≥ f(ρ(a)). Then f(t) is an increasing function for \(t \in \mathbb{N}_{\rho (a)}\) .

Also the following partial converse of Theorem 3.123 is true.

Theorem 3.126.

Assume 0 < ν < 1, \(f: \mathbb{N}_{\rho (a)} \rightarrow \mathbb{R}\) and f is an increasing function for \(t \in \mathbb{N}_{a}\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) .

We also give a counterexample to show that the above assumption f(a) ≥ f(ρ(a)) in Theorem 3.125 is essential. We begin by proving the following theorem.

Theorem 3.127.

Assume that \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , and \(\nabla _{a^{{\ast}}}^{\mu }f(t) \geq 0,\) for each \(t \in \mathbb{N}_{a+1}\) , with N − 1 < μ < N. Then

$$\displaystyle\begin{array}{rcl} & & \nabla ^{N-1}f(a + k) \\ & \geq & -\sum _{i=1}^{k-1}\Big[\frac{(k - i + 1)^{\overline{N -\mu -2}}} {\Gamma (N -\mu -1)} \Big]\nabla ^{N-1}f(a + i) \\ & +& H_{N-\mu -1}(a + k,a)\nabla ^{N-1}f(a), {}\end{array}$$
(3.70)

for \(k \in \mathbb{N}_{1}\) (note by our convention on sums the first term on the right-hand side is zero when k = 1).

Proof.

If \(t = a + 1\), we have that

$$\displaystyle\begin{array}{rcl} 0 \leq \nabla _{a^{{\ast}}}^{\mu }f(a + 1)& =& \nabla _{a}^{-(N-\mu )}\nabla ^{N}f(t) {}\\ & =& \int _{a}^{a+1}H_{ N-\mu -1}(a + 1,\rho (s))\nabla ^{N}f(s)\nabla s {}\\ & =& H_{N-\mu -1}(a + 1,a)\nabla ^{N}f(a + 1) {}\\ & =& \nabla ^{N}f(a + 1) = \nabla ^{N-1}f(a + 1) -\nabla ^{N-1}f(a), {}\\ \end{array}$$

where we used \(H_{N-\mu -1}(a + 1,a) = 1\). Solving for \(\nabla ^{N-1}f(a + 1)\) we get the inequality

$$\displaystyle{\nabla ^{N-1}f(a + 1) \geq \nabla ^{N-1}f(a),}$$

which is the inequality (3.70) for \(t = a + 1\).

Next consider the case \(t = a + k\) for k ≥ 2. We have that

$$\displaystyle\begin{array}{rcl} 0& \leq & \nabla _{a^{{\ast}}}^{\mu }f(t) {}\\ & =& \nabla _{a}^{-(N-\mu )}\nabla ^{N}f(t) {}\\ & =& \int _{a}^{t}H_{ N-\mu -1}(t,\rho (s))\nabla ^{N}f(s)\nabla s {}\\ & =& \int _{a}^{a+k}H_{ N-\mu -1}(a + k,\rho (s))\nabla ^{N}f(s)\nabla s {}\\ & =& \sum _{i=1}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N}f(a + i) {}\\ & =& \sum _{i=1}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\left [\nabla ^{N-1}f(a + i) -\nabla ^{N-1}f(a + i - 1)\right ] {}\\ & =& \sum _{i=1}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N-1}f(a + i) {}\\ & & -\sum _{i=1}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N-1}f(a + i - 1) {}\\ & =& \nabla ^{N-1}f(a + k) +\sum _{ i=1}^{k-1}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N-1}f(a + i) {}\\ & -& H_{N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) {}\\ & & -\sum _{i=2}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N-1}f(a + i - 1), {}\\ \end{array}$$

where we used \(H_{N-\mu -1}(a + k,a + k - 1) = 1\). It follows that

$$\displaystyle\begin{array}{rcl} 0& \leq & \nabla ^{N-1}f(a + k) +\sum _{ i=1}^{k-1}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N-1}f(a + i) {}\\ & -& H_{N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) -\sum _{ i=1}^{k-1}H_{ N-\mu -1}(a + k,a + i)\nabla ^{N-1}f(a + i) {}\\ & =& \nabla ^{N-1}f(a + k) - H_{ N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) {}\\ & -& \sum _{i=1}^{k-1}\Big[H_{ N-\mu -1}(a + k,a + i) {}\\ & -& H_{N-\mu -1}(a + k,a + i - 1)\Big]\nabla ^{N-1}f(a + i). {}\\ \end{array}$$

Hence,

$$\displaystyle\begin{array}{rcl} 0& \leq & \nabla ^{N-1}f(a + k) - H_{ N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) {}\\ & & -\sum _{i=1}^{k-1}\nabla _{ s}H_{N-\mu -1}(a + k,s)\vert _{s=a+i}\nabla ^{N-1}f(a + i) {}\\ & =& \nabla ^{N-1}f(a + k) - H_{ N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) {}\\ & & +\sum _{i=1}^{k-1}H_{ N-\mu -2}(a + k,a + i - 1)\nabla ^{N-1}f(a + i) {}\\ & =& \nabla ^{N-1}f(a + k) - H_{ N-\mu -1}(a + k,a)\nabla ^{N-1}f(a) {}\\ & & +\sum _{i=1}^{k-1}\Big[\frac{(k - i + 1)^{\overline{N -\mu -2}}} {\Gamma (N -\mu -1)} \Big]\nabla ^{N-1}f(a + i). {}\\ \end{array}$$

Solving the above inequality for \(\nabla ^{N-1}f(a + k)\), we obtain the desired inequality (3.70).

Next we consider for 1 ≤ i ≤ k − 1

$$\displaystyle\begin{array}{rcl} & & \frac{(k - i + 1)^{\overline{N -\mu -2}}} {\Gamma (N -\mu -1)} = \frac{\Gamma (N -\mu +k - i - 1)} {\Gamma (k - i + 1)\Gamma (N -\mu -1)} {}\\ & & = \frac{(N -\mu +k - i - 2)\cdots (N -\mu -1)} {(k - i)!} < 0 {}\\ \end{array}$$

since N < μ + 1. Also

$$\displaystyle\begin{array}{rcl} H_{N-\mu -1}(a + k,a)& =& \frac{k^{\overline{N -\mu -1}}} {\Gamma (N-\mu )} {}\\ & =& \frac{\Gamma (N -\mu +k - 1)} {\Gamma (k)\Gamma (N-\mu )} {}\\ & =& \frac{(N -\mu +k - 2)\cdots (N-\mu )} {(k - 1)!} > 0. {}\\ \end{array}$$

And this completes the proof. □ 

From Theorem 3.127 we obtain the following result.

Theorem 3.128.

Assume that N − 1 < ν < N, \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) and ∇ N−1 f(a) ≥ 0. Then ∇ N−1 f(t) ≥ 0 for \(t \in \mathbb{N}_{a}\) .

Proof.

We argue by the principle of strong induction. By assumption, the result holds for t = a. Suppose that \(\nabla ^{N-1}f(t) \geq 0\) for \(t = a,a + 1,\ldots,a + k - 1\). Then from Theorem 3.127 and (3.70), we have \(\nabla ^{N-1}f(a + k) \geq 0\), and the proof is complete. □ 

Taking N = 2 and N = 3, we can get the following corollaries.

Corollary 3.129.

Assume that 1 < ν < 2, \(f: \mathbb{N}_{\rho (a)} \rightarrow \mathbb{R}\) , \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) and f(a) ≥ f(ρ(a)). Then f(t) is increasing for \(t \in \mathbb{N}_{\rho (a)}\) .

Corollary 3.130.

Assume that 2 < ν < 3, \(f: \mathbb{N}_{a-2} \rightarrow \mathbb{R}\) , \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) for \(t \in \mathbb{N}_{a+1}\) and ∇ 2 f(a) ≥ 0. Then ∇f(t) is increasing for \(t \in \mathbb{N}_{a}\) .

One should compare the next result with Theorem 3.128.

Theorem 3.131.

Assume that N − 1 < ν < N, \(f: \mathbb{N}_{a-N+1} \rightarrow \mathbb{R}\) , and ∇ N f(t) ≥ 0 for \(t \in \mathbb{N}_{a+1}\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) .

Proof.

Taking \(t = a + k\), we have

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\mu }f(t) \\ & & = \nabla _{a}^{-(N-\mu )}\nabla ^{N}f(t) \\ & & =\int _{ a}^{t}H_{ N-\mu -1}(t,\rho (s))\nabla ^{N}f(s)\nabla s \\ & & =\sum _{ i=1}^{k}H_{ N-\mu -1}(a + k,a + i - 1)\nabla ^{N}f(a + i).{}\end{array}$$
(3.71)

Since

$$\displaystyle\begin{array}{rcl} & & H_{N-\mu -1}(a + k,a + i - 1) \\ & & \quad \quad \quad \quad \quad = \frac{(k - i + 1)^{\overline{N -\mu -1}}} {\Gamma (N-\mu )} \\ & & \quad \quad \quad \quad \quad = \frac{\Gamma (k + N - i-\mu )} {\Gamma (N-\mu )\Gamma (k - i + 1)} \\ & & \quad \quad \quad \quad \quad = \frac{(N -\mu +k - i - 1)\cdots (N -\mu +1)(N-\mu )} {(k - i)!} > 0,{}\end{array}$$
(3.72)

where we used μ < N. From (3.71) and (3.72) we get that \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\), for each \(t \in \mathbb{N}_{a+1}\). □ 

Taking N = 1 and N = 2, we get the following corollaries.

Corollary 3.132.

Assume that 0 < ν < 1, \(f: \mathbb{N}_{\rho (a)} \rightarrow \mathbb{R}\) and f is an increasing function for \(t \in \mathbb{N}_{a}\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for \(t \in \mathbb{N}_{a+1}\) .

Corollary 3.133.

Assume that 1 < ν < 2, \(f: \mathbb{N}_{\rho (a)} \rightarrow \mathbb{R}\) and ∇ 2 f(t) ≥ 0 for \(t \in \mathbb{N}_{a+1}\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) .

In the following, we will give a counterexample to show that the assumption “f(a) ≥ f(ρ(a))” in Corollary 3.129 is essential. To verify this example we will use the following simple lemma.

Lemma 3.134.

Assume \(f \in C^{2}([a,\infty ))\) and \(f^{{\prime\prime}}(t) \geq 0\) on \([a,\infty )\) . Then \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\) , for each \(t \in \mathbb{N}_{a+1}\) , with 1 < ν < 2.

Proof.

By Taylor’s Theorem,

$$\displaystyle{ f(a + i + 1) = f(a + i) + f^{{\prime}}(a + i) + \frac{f^{{\prime\prime}}(\xi ^{i})} {2},\,\,\ \xi ^{i} \in [a + i,a + i + 1] }$$
(3.73)

and

$$\displaystyle{ f(a + i - 1) = f(a + i) - f^{{\prime}}(a + i) + \frac{f^{{\prime\prime}}(\eta ^{i})} {2},\,\,\,\eta ^{i} \in [a + i - 1,a + i] }$$
(3.74)

for \(i = 0,1,\ldots,k - 1\). Using (3.73) and (3.74), we have

$$\displaystyle\begin{array}{rcl} \nabla ^{2}f(a + i + 1)& =& f(a + i + 1) - 2f(a + i) + f(a + i - 1) \\ & =& \frac{f^{{\prime\prime}}(\xi ^{i}) + f^{{\prime\prime}}(\eta ^{i})} {2} \\ & \geq & 0. {}\end{array}$$
(3.75)

From (3.75) and Corollary 3.133, we get that \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\), for each \(t \in \mathbb{N}_{a+1}\), with 1 < ν < 2. □ 

Example 3.135.

Let \(f(t) = -\sqrt{t}\), a = 2. We have \(f^{{\prime\prime}}(t) \geq 0\), for t ≥ 1. By Lemma 3.134, we have \(\nabla _{a^{{\ast}}}^{\nu }f(t) \geq 0\).

Note that \(f(\rho (a)) = f(1) = -1 > f(a) = -\sqrt{2}\). Therefore f(t) does not satisfy the assumptions of Corollary 3.129. In fact, f(t) is decreasing for t ≥ 1.
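This counterexample can be checked numerically. Since 1 < ν < 2 here, the Caputo difference is \(\nabla _{a^{{\ast}}}^{\nu }f(t) = \nabla _{a}^{-(2-\nu )}\nabla ^{2}f(t) =\sum _{s=a+1}^{t}H_{1-\nu }(t,\rho (s))\nabla ^{2}f(s)\). A minimal sketch, with helper names of our own choosing:

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

f = lambda t: -math.sqrt(t)        # Example 3.135: f(t) = -sqrt(t), a = 2
a, nu = 2, 1.5                     # 1 < nu < 2, so N = 2

def caputo2(t):
    # nabla_a^{-(2-nu)} nabla^2 f(t) = sum_{s=a+1}^t H_{1-nu}(t, s-1) nabla^2 f(s)
    return sum(H(1 - nu, t, s - 1) * (f(s) - 2 * f(s - 1) + f(s - 2))
               for s in range(a + 1, t + 1))

vals = [caputo2(t) for t in range(a + 1, a + 25)]
assert all(v >= 0 for v in vals)                    # Caputo difference nonnegative...
assert all(f(t + 1) < f(t) for t in range(1, 25))   # ...while f is strictly decreasing
```

The positivity of the Caputo difference here follows from the convexity of \(-\sqrt{t}\), exactly as Lemma 3.134 predicts, even though f is decreasing.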

Corollary 3.129 could be useful for solving nonlinear fractional equations as the following result shows.

Corollary 3.136.

Let \(h: \mathbb{N}_{a+1} \times \mathbb{R} \rightarrow \mathbb{R}\) be a nonnegative, continuous function. Then any solution of the Caputo nabla fractional difference equation

$$\displaystyle\begin{array}{rcl} \nabla _{a^{{\ast}}}^{\nu }y(t)& =& h(t,y(t)),\quad t \in \mathbb{N}_{a+1},\quad 1 <\nu < 2{}\end{array}$$
(3.76)

satisfying ∇y(a) = A ≥ 0 is increasing on \(\mathbb{N}_{\rho (a)}\) .

3.22 Asymptotic Behavior and Comparison Theorems

In this section we will determine the asymptotic behavior of solutions of a nabla Caputo fractional equation of the form

$$\displaystyle{ \nabla _{a{\ast}}^{\nu }x(t) = c(t)x(t),\quad t \in \mathbb{N}_{ a+1}, }$$
(3.77)

where \(c: \mathbb{N}_{a+1} \rightarrow \mathbb{R},\) 0 < ν < 1. We will prove important comparison theorems to help us prove our asymptotic results. Most of the results in this section appear in Baoguo et al. [52]. The following lemma will be useful.

Lemma 3.137.

Assume that c(t) < 1, 0 < ν < 1. Then any solution of

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }x(t) = c(t)x(t),\quad \quad t \in \mathbb{N}_{ a+1}& &{}\end{array}$$
(3.78)

satisfying x(a) > 0 is positive on \(\mathbb{N}_{a}\) .

Proof.

Using the integration by parts formula (3.23) and

$$\displaystyle{\nabla _{s}H_{-\nu }(t,s) = -H_{-\nu -1}(t,\rho (s)),}$$

we have

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }x(t)& =& \nabla _{ a}^{-(1-\nu )}\nabla x(t) {}\\ & =& \int _{a}^{t}H_{ -\nu }(t,\rho (s))\nabla x(s)\nabla s {}\\ & =& H_{-\nu }(t,s)x(s)\vert _{s=a}^{t} +\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))x(s)\nabla s {}\\ & =& -H_{-\nu }(t,a)x(a) +\sum _{ s=a+1}^{t}H_{ -\nu -1}(t,\rho (s))x(s). {}\\ \end{array}$$

Taking \(t = a + k\), we have

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }x(t)& =& \nabla _{ a{\ast}}^{\nu }x(a + k) {}\\ & =& x(a + k) -\nu x(a + k - 1) -\frac{\nu (-\nu + 1)} {2!} x(a + k - 2) -\cdots {}\\ & & -\frac{\nu (-\nu + 1)\cdots (-\nu + k - 2)} {(k - 1)!} x(a + 1) {}\\ & & -\frac{(-\nu + 1)\cdots (-\nu + k - 1)} {(k - 1)!} x(a). {}\\ \end{array}$$

Using (3.78), we get

$$\displaystyle\begin{array}{rcl} & & x(a + k) {}\\ & & = \frac{1} {1 - c(a + k)}\Big[\nu x(a + k - 1) + \frac{\nu (-\nu + 1)} {2!} x(a + k - 2) + \cdots {}\\ & & \quad + \frac{\nu (-\nu + 1)\cdots (-\nu + k - 2)} {(k - 1)!} x(a + 1) {}\\ & & \quad + \frac{(-\nu + 1)\cdots (-\nu + k - 1)} {(k - 1)!} x(a)\Big]. {}\\ \end{array}$$

From the strong induction principle, 0 < ν < 1, and x(a) > 0, it is easy to prove that x(a + k) > 0, for \(k \in \mathbb{N}_{0}\). □ 
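The recursion used in this proof can be implemented directly: the coefficient of x(t) in \(\nabla _{a{\ast}}^{\nu }x(t)\) is 1, so whenever c(t) ≠ 1 one can solve for x(t) at each step. A sketch with illustrative parameter values of our own choosing (the names `H` and `solve` are ours):

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

def solve(nu, c, x_a, a, steps):
    # step  nabla_{a*}^nu x(t) = c(t) x(t)  forward in t; the s = t term of
    # the Caputo sum is 1 * (x(t) - x(t-1)), so x(t) is isolated when c(t) != 1
    x = {a: x_a}
    for k in range(1, steps + 1):
        t = a + k
        S = sum(H(-nu, t, s - 1) * (x[s] - x[s - 1]) for s in range(a + 1, t))
        x[t] = (x[t - 1] - S) / (1.0 - c(t))
    return x

# Lemma 3.137: c(t) < 1 and x(a) > 0 force positivity (checked for two choices of c)
for cval in (-0.3, 0.5):
    x = solve(nu=0.5, c=lambda t: cval, x_a=1.0, a=0, steps=40)
    assert all(v > 0 for v in x.values())
```

Running the same scheme with c(t) negative or positive also illustrates, numerically, the decay and growth behavior established later in this chapter.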

The following comparison theorem plays an important role in proving our main results.

Theorem 3.138.

Assume c 2 (t) ≤ c 1 (t) < 1, 0 < ν < 1, and x(t),y(t) are solutions of the equations

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }x(t) = c_{1}(t)x(t), }$$
(3.79)

and

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }y(t) = c_{ 2}(t)y(t), }$$
(3.80)

respectively, for \(t \in \mathbb{N}_{a+1}\) satisfying x(a) ≥ y(a) > 0. Then

$$\displaystyle{x(t) \geq y(t),}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

Similar to the proof of Lemma 3.137, taking \(t = a + k\), we have

$$\displaystyle\begin{array}{rcl} & & x(a + k) \\ & & = \frac{1} {1 - c_{1}(a + k)}\Big[\nu x(a + k - 1) + \frac{\nu (-\nu + 1)} {2!} x(a + k - 2) + \cdots \\ & & \quad + \frac{\nu (-\nu + 1)\cdots (-\nu + k - 2)} {(k - 1)!} x(a + 1) + \frac{(-\nu + 1)\cdots (-\nu + k - 1)} {(k - 1)!} x(a)\Big]{}\end{array}$$
(3.81)

and

$$\displaystyle\begin{array}{rcl} & & y(a + k) \\ & & = \frac{1} {1 - c_{2}(a + k)}\Big[\nu y(a + k - 1) + \frac{\nu (-\nu + 1)} {2!} y(a + k - 2) + \cdots \\ & & \quad + \frac{\nu (-\nu + 1)\cdots (-\nu + k - 2)} {(k - 1)!} y(a + 1) + \frac{(-\nu + 1)\cdots (-\nu + k - 1)} {(k - 1)!} y(a)\Big].{}\end{array}$$
(3.82)

We will prove \(x(a + k) \geq y(a + k) > 0\) for \(k \in \mathbb{N}_{0}\) by using the principle of strong induction. By assumption x(a) ≥ y(a) > 0 so the base case holds. Now assume that \(x(a + i) \geq y(a + i) > 0\), for \(i = 0,1,\ldots,k - 1\). Using c 2(t) ≤ c 1(t) < 1,

$$\displaystyle{\frac{\nu (-\nu + 1)\cdots (-\nu + i - 1)} {i!} > 0,}$$

for \(i = 2,3,\ldots,k - 1\),

$$\displaystyle{\frac{(-\nu + 1)(-\nu + 2)\cdots (-\nu + k - 1)} {(k - 1)!} > 0,}$$

(3.81), and (3.82) we have

$$\displaystyle{x(a + k) \geq y(a + k) > 0.}$$

This completes the proof. □ 

Remark 3.139.

Since H 0(t, a) = 1, we have that E 0, ν, 0(t, a) = 1 and E b, ν, 0(a, a) = 1.

Lemma 3.140.

Assume that 0 < ν < 1, |b| < 1. Then

$$\displaystyle{\nabla _{a^{{\ast}}}^{\nu }E_{b,\nu,0}(t,a) = bE_{b,\nu,0}(t,a)}$$

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

Integrating by parts, we have

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }E_{b,\nu,0}(t,a) \\ & & =\int _{ a}^{t}H_{ -\nu }(t,\rho (s))\nabla E_{b,\nu,0}(s,a)\nabla s \\ & & = [H_{-\nu }(t,s)E_{b,\nu,0}(s,a)]_{s=a}^{t} +\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))E_{b,\nu,0}(s,a)\nabla s \\ & & = -H_{-\nu }(t,a) +\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))\sum _{k=0}^{\infty }b^{k}H_{\nu k}(s,a)\nabla s, {}\end{array}$$
(3.83)

where we used \(H_{-\nu }(t,t) = 0\) and E b, ν, 0(a, a) = 1. In the following, we first prove that the infinite series

$$\displaystyle{ H_{-\nu -1}(t,\rho (s))\sum _{k=0}^{\infty }b^{k}H_{\nu k}(s,a) }$$
(3.84)

for each fixed t is uniformly convergent for s ∈ [a, t].

We will first show that

$$\displaystyle{\vert H_{-\nu -1}(t,\rho (s))\vert = \left \vert \frac{\Gamma (-\nu + t - s)} {\Gamma (t - s + 1)\Gamma (-\nu )}\right \vert \leq 1}$$

for a ≤ s ≤ t. For s = t we have that

$$\displaystyle{\vert H_{-\nu -1}(t,\rho (t))\vert = 1.}$$

Now assume that a ≤ s < t. Then

$$\displaystyle\begin{array}{rcl} \left \vert \frac{\Gamma (-\nu + t - s)} {\Gamma (t - s + 1)\Gamma (-\nu )}\right \vert & =& \left \vert \frac{(t - s -\nu -1)(t - s -\nu -2)\cdots (-\nu )} {(t - s)!} \right \vert {}\\ & =& \left \vert \frac{t - s - (\nu +1)} {t - s} \right \vert \left \vert \frac{t - s - 1 - (\nu +1)} {t - s - 1} \right \vert \cdots \left \vert \frac{-\nu } {1} \right \vert {}\\ &\leq & 1. {}\\ \end{array}$$

Also consider

$$\displaystyle\begin{array}{rcl} H_{\nu k}(s,a)& =& \frac{\Gamma (\nu k + s - a)} {\Gamma (s - a)\Gamma (\nu k + 1)} {}\\ & =& \frac{(\nu k + s - a - 1)\cdots (\nu k + 1)} {(s - a - 1)!}. {}\\ \end{array}$$

Note that for large k it follows that

$$\displaystyle\begin{array}{rcl} H_{\nu k}(s,a)& \leq & (\nu k + s - a - 1)^{s-a-1} {}\\ & \leq & (\nu k + t - a - 1)^{t-a-1} {}\\ \end{array}$$

for a ≤ s ≤ t. Applying the Root Test to the infinite series in (3.84) we get that for each fixed t

$$\displaystyle{\lim _{k\rightarrow \infty }\root{k}\of{\vert b\vert ^{k}(\nu k + t - a - 1)^{t-a-1}} = \vert b\vert < 1.}$$

Hence, for each fixed t the infinite series in (3.84) is uniformly convergent for s ∈ [a, t]. So from (3.83), integrating term by term, we obtain, using (3.32) and \(\nabla _{a}^{\nu }H_{\nu k}(s,a) = H_{\nu k-\nu }(s,a)\), that

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }E_{ b,\nu,0}(t,a)& =& -H_{-\nu }(t,a) +\sum _{ k=0}^{\infty }b^{k}\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))H_{\nu k}(s,a)\nabla s {}\\ & =& -H_{-\nu }(t,a) +\sum _{ k=0}^{\infty }b^{k}\nabla _{ a}^{\nu }H_{\nu k}(t,a) {}\\ & =& -H_{-\nu }(t,a) +\sum _{ k=0}^{\infty }b^{k}H_{\nu k-\nu }(t,a) {}\\ & =& \sum _{k=1}^{\infty }b^{k}H_{\nu k-\nu }(t,a) {}\\ & =& bE_{b,\nu,0}(t,a), {}\\ \end{array}$$

where we also used H 0(t, a) = 1. This completes the proof. □ 
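Lemma 3.140 can be verified numerically by truncating the Mittag–Leffler series; since |b| < 1, the tail is negligible. A sketch (helper names and the parameter values b = 0.4, ν = 0.6 are ours):

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

def E(b, nu, t, a, K=120):
    # truncated series E_{b,nu,0}(t,a) = sum_{k>=0} b^k H_{nu k}(t,a);
    # the k = 0 term is H_0(t,a) = 1, and |b| < 1 kills the tail
    return 1.0 + sum(b ** k * H(nu * k, t, a) for k in range(1, K))

b, nu, a = 0.4, 0.6, 0
errs = []
for t in range(a + 1, a + 15):
    caputo = sum(H(-nu, t, s - 1) * (E(b, nu, s, a) - E(b, nu, s - 1, a))
                 for s in range(a + 1, t + 1))
    errs.append(abs(caputo - b * E(b, nu, t, a)))   # should vanish by Lemma 3.140
assert max(errs) < 1e-8
```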

With the aid of Lemma 3.140, we now give a rigorous proof of the following result.

Lemma 3.141.

Assume that 0 < ν < 1, |b| < 1. Then E b,ν,0 (t,a) is the unique solution of Caputo nabla fractional IVP

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }x(t) = bx(t),\quad \quad t \in \mathbb{N}_{a+1} \\ & & x(a) = 1. {}\end{array}$$
(3.85)

Proof.

It is easy to see that the given IVP has a unique solution. If b = 0, then

$$\displaystyle{E_{0,\nu,0}(t,a) = 1}$$

is the solution of the given IVP. For b ≠ 0 the result follows from Lemma 3.140 and the uniqueness. □ 

We will see that the following lemma, given in Podlubny [153], is useful in proving asymptotic properties of certain fractional Taylor monomials and certain nabla Mittag–Leffler functions.

Lemma 3.142.

Assume \(\mathfrak{R}(z) > 0\) . Then

$$\displaystyle{\Gamma (z) =\lim _{n\rightarrow \infty } \frac{n!n^{z}} {z(z + 1)\cdots (z + n)}.}$$
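Lemma 3.142 is Euler's limit formula for the Gamma function. A quick numerical check, working in logarithms to avoid overflow for large n (the function name is ours):

```python
import math

def euler_gamma(z, n):
    # Gamma(z) ~ n! n^z / (z (z+1) ... (z+n)), computed in log space
    log_num = math.lgamma(n + 1) + z * math.log(n)
    log_den = sum(math.log(z + j) for j in range(n + 1))
    return math.exp(log_num - log_den)

for z in (0.3, 0.7, 1.5):
    assert math.isclose(euler_gamma(z, 100_000), math.gamma(z), rel_tol=1e-3)
```

The convergence is slow (the error decays like 1∕n), which is why a large n is needed even for modest accuracy.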

The following lemma is an asymptotic property for certain nabla fractional Taylor monomials.

Lemma 3.143.

Assume that 0 < ν < 1. Then we have

$$\displaystyle\begin{array}{rcl} & \lim _{t\rightarrow \infty }H_{\nu k}(t,a) = \infty,\quad \mbox{ for}\quad k \geq 1,& {}\\ & \lim _{t\rightarrow \infty }H_{\nu k}(t,a) = 1,\quad \mbox{ for}\quad k = 0. & {}\\ \end{array}$$

Proof.

Taking \(t = a + n\), n ≥ 0, we have

$$\displaystyle\begin{array}{rcl} & & \lim _{t\rightarrow \infty }H_{\nu k}(t,a) =\lim _{n\rightarrow \infty }H_{\nu k}(a + n,a) =\lim _{n\rightarrow \infty } \frac{n^{\overline{\nu k}}} {\Gamma (\nu k + 1)} \\ & & \quad \quad \quad =\lim _{n\rightarrow \infty } \frac{\Gamma (\nu k + n)} {\Gamma (n)\Gamma (\nu k + 1)} \\ & & \quad \quad \quad =\lim _{n\rightarrow \infty }\frac{(\nu k + n - 1)(\nu k + n - 2)\cdots (\nu k + 1)} {(n - 2)!(n - 2)^{\nu k+1}} \cdot \frac{(n - 2)^{\nu k+1}} {n - 1}.{}\end{array}$$
(3.86)

Using Lemma 3.142 with \(z =\nu k + 1\) and n replaced by n − 2, we have

$$\displaystyle{\lim _{n\rightarrow \infty }\frac{(\nu k + 1 + n - 2)(\nu k + 1 + n - 3)\cdots (\nu k + 1)} {(n - 2)!(n - 2)^{\nu k+1}} = \frac{1} {\Gamma (\nu k + 1)},}$$

and

$$\displaystyle{\lim _{n\rightarrow \infty }\frac{(n - 2)^{\nu k+1}} {n - 1} = \infty,\quad \mbox{ for}\quad k \geq 1,}$$
$$\displaystyle{\lim _{n\rightarrow \infty }\frac{(n - 2)^{\nu k+1}} {n - 1} = 1,\quad \mbox{ for}\quad k = 0.}$$

Using (3.86), we get the desired results. □ 
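A numerical illustration of Lemma 3.143 (the helper name `H` is ours): the monomials with k ≥ 1 grow without bound in t, while the k = 0 monomial is identically 1.

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

nu, a = 0.5, 0
# k = 0: H_0(a + n, a) = 1 for every n >= 1
assert all(math.isclose(H(0.0, a + n, a), 1.0) for n in (1, 10, 100))
# k = 1: H_nu(a + n, a) increases without bound (growth like n^nu)
grow = [H(nu, a + n, a) for n in (10, 50, 150)]
assert grow[0] < grow[1] < grow[2]
```

(The arguments are kept below roughly 170 so that `math.gamma` does not overflow in double precision.)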

Theorem 3.144.

Assume 0 < b 2 ≤ c(t) < 1, \(t \in \mathbb{N}_{a+1}\) , 0 < ν < 1. Further assume x(t) is a solution of Caputo nabla fractional difference equation

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t),\quad t \in \mathbb{N}_{a+1}{}\end{array}$$
(3.87)

satisfying x(a) > 0. Then

$$\displaystyle{x(t) \geq \frac{x(a)} {2} E_{b_{2},\nu,0}(t,a),}$$

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

From Lemma 3.141, we have

$$\displaystyle{\nabla _{a^{{\ast}}}^{\nu }E_{b_{2},\nu,0}(t,a) = b_{2}E_{b_{2},\nu,0}(t,a)}$$

and \(E_{b_{2},\nu,0}(a,a) = 1\).

In Theorem 3.138, take c 2(t) = b 2. Then x(t) and

$$\displaystyle{y(t) = \frac{x(a)} {2} E_{b_{2},\nu,0}(t,a)}$$

satisfy

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t), }$$
(3.88)

and

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }y(t) = b_{2}y(t), }$$
(3.89)

respectively, for \(t \in \mathbb{N}_{a+1}\) and

$$\displaystyle{x(a) > \frac{x(a)} {2} E_{b_{2},\nu,0}(a,a) = y(a).}$$

From Theorem 3.138, we get that

$$\displaystyle{x(t) \geq \frac{x(a)} {2} E_{b_{2},\nu,0}(t,a),}$$

for \(t \in \mathbb{N}_{a}\). This completes the proof. □ 

From Lemma 3.143 and the definition of \(E_{b_{2},\nu,0}(t,a)\), we get the following theorem.

Theorem 3.145.

For 0 < b 2 < 1, we have

$$\displaystyle{\lim _{t\rightarrow \infty }E_{b_{2},\nu,0}(t,a) = +\infty.}$$

From Theorem 3.144 and Theorem 3.145, we have the following result.

Theorem 3.146.

Assume 0 < ν < 1 and there exists a constant b 2 such that 0 < b 2 ≤ c(t) < 1. Then the solutions of the equation (3.77) with x(a) > 0 satisfy

$$\displaystyle{\lim _{t\rightarrow \infty }x(t) = +\infty.}$$

Next we consider the case c(t) ≤ b 1 < 0, \(t \in \mathbb{N}_{a}\). First we prove some preliminary results.

Lemma 3.147.

Assume \(f: \mathbb{N}_{a} \rightarrow \mathbb{R}\) , 0 < ν < 1. Then

$$\displaystyle{ \nabla _{a}^{-(1-\nu )}\nabla f(t) = \nabla \nabla _{ a}^{-(1-\nu )}f(t) - f(a)H_{ -\nu }(t,a). }$$
(3.90)

Proof.

Using integration by parts and \(H_{-\nu }(t,t) = 0\), we have

$$\displaystyle\begin{array}{rcl} \nabla _{a}^{-(1-\nu )}\nabla f(t)& =& \int _{ a}^{t}H_{ -\nu }(t,\rho (s))\nabla f(s)\nabla s \\ & =& H_{-\nu }(t,s)f(s)\vert _{s=a}^{t} +\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))f(s)\nabla s \\ & =& -H_{-\nu }(t,a)f(a) +\int _{ a}^{t}H_{ -\nu -1}(t,\rho (s))f(s)\nabla s.{}\end{array}$$
(3.91)

Using the composition rule \(\nabla _{a}^{\nu }\nabla _{a}^{-\mu }f(t) = \nabla _{a}^{\nu -\mu }f(t)\), for ν, μ > 0 in Theorem 3.109, we have

$$\displaystyle\begin{array}{rcl} \nabla \nabla _{a}^{-(1-\nu )}f(t)& =& \nabla _{ a}^{\nu }f(t) \\ & =& \int _{a}^{t}H_{ -\nu -1}(t,\rho (s))f(s)\nabla s.{}\end{array}$$
(3.92)

From (3.91) and (3.92), we get that (3.90) holds. □ 

From Lemma 3.147, it is easy to get the following corollary which will be useful later.

Corollary 3.148.

For 0 < ν < 1, the following equality holds:

$$\displaystyle\begin{array}{rcl} & & \nabla _{a}^{-\nu }\nabla f(t) = \nabla \nabla _{ a}^{-\nu }f(t) - H_{\nu -1}(t,a)f(a),{}\end{array}$$
(3.93)

for \(t \in \mathbb{N}_{a}\) .

Lemma 3.149.

Assume that 0 < ν < 1 and x(t) is a solution of the fractional equation

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t),\quad \quad t \in \mathbb{N}_{a+1}{}\end{array}$$
(3.94)

satisfying x(a) > 0. Then x(t) satisfies the integral equation

$$\displaystyle\begin{array}{rcl} x(t)& =& \int _{a}^{t}H_{\nu -1}(t,\rho (s))c(s)x(s)\nabla s + x(a) {}\\ & =& \sum _{s=a+1}^{t}\frac{(t - s + 1)^{\overline{\nu - 1}}} {\Gamma (\nu )} c(s)x(s) + x(a). {}\\ \end{array}$$

Proof.

Using Lemma 3.147 and the composition rule

$$\displaystyle{\nabla _{a}^{\alpha }\nabla _{ a}^{-\beta }f(t) = \nabla _{ a}^{\alpha -\beta }f(t),}$$

for α, β > 0 given in Theorem 3.109, we get

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }x(t)& =& \nabla _{ a}^{-(1-\nu )}\nabla x(t) {}\\ & =& \nabla \nabla _{a}^{-(1-\nu )}x(t) - x(a)H_{ -\nu }(t,a) {}\\ & =& \nabla _{a}^{\nu }x(t) - x(a)H_{ -\nu }(t,a). {}\\ \end{array}$$

From (3.94), we have

$$\displaystyle{\nabla _{a}^{\nu }x(t) = c(t)x(t) + x(a)H_{ -\nu }(t,a).}$$

Applying the operator ∇ a ν to each side we obtain

$$\displaystyle{\nabla _{a}^{-\nu }\nabla _{ a}^{\nu }x(t) = \nabla _{ a}^{-\nu }c(t)x(t) + x(a)\nabla _{ a}^{-\nu }H_{ -\nu }(t,a),}$$

which can be written in the form

$$\displaystyle{\nabla _{a}^{-\nu }\nabla \nabla _{ a}^{-(1-\nu )}x(t) = \nabla _{ a}^{-\nu }c(t)x(t) + x(a)\nabla _{ a}^{-\nu }H_{ -\nu }(t,a).}$$

Using Corollary 3.148, we obtain

$$\displaystyle{\nabla \nabla _{a}^{-\nu }\nabla _{ a}^{-(1-\nu )}x(t) -\frac{(t - a)^{\overline{\nu - 1}}} {\Gamma (\nu )} \nabla _{a}^{-(1-\nu )}x(t)\Big\vert _{ t=a}}$$
$$\displaystyle{= \nabla _{a}^{-\nu }c(t)x(t) + x(a)\nabla _{ a}^{-\nu }H_{ -\nu }(t,a).}$$

On the other hand, using

$$\displaystyle{\nabla _{a}^{-(1-\nu )}x(t)\Big\vert _{ t=a} =\int _{ a}^{a}H_{ -\nu }(a,\rho (s))x(s)\nabla s = 0,}$$

we obtain

$$\displaystyle{\nabla \nabla _{a}^{-\nu }\nabla _{ a}^{-(1-\nu )}x(t) = \nabla _{ a}^{-\nu }c(t)x(t) + x(a)\nabla _{ a}^{-\nu }H_{ -\nu }(t,a).}$$

By the composition rule, namely Theorem 3.107, it follows both that \(\nabla _{a}^{-\nu }\nabla _{a}^{-(1-\nu )}x(t) = \nabla _{a}^{-1}x(t)\) and that \(\nabla \nabla _{a}^{-1}x(t) = x(t)\), from which it follows that

$$\displaystyle{x(t) = \nabla _{a}^{-\nu }c(t)x(t) + x(a)\nabla _{ a}^{-\nu }H_{ -\nu }(t,a).}$$

Finally, by the power rule \(\nabla _{a}^{-\nu }H_{-\nu }(t,a) = H_{0}(t,a) = 1\), we obtain

$$\displaystyle\begin{array}{rcl} x(t)& =& \nabla _{a}^{-\nu }c(t)x(t) + x(a) \\ & =& \int _{a}^{t}H_{\nu -1}(t,\rho (s))c(s)x(s)\nabla s + x(a) \\ & =& \sum _{s=a+1}^{t}H_{\nu -1}(t,\rho (s))c(s)x(s) + x(a) \\ & =& \sum _{s=a+1}^{t}\frac{(t - s + 1)^{\overline{\nu - 1}}} {\Gamma (\nu )} c(s)x(s) + x(a).{}\end{array}$$
(3.95)

And this completes the proof. □ 
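Lemma 3.149 can be checked numerically: step the Caputo equation forward as in the proof of Lemma 3.137, then test the summation equation at each point. A sketch with the illustrative choice c(t) = −0.4 (all names are ours):

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

nu, a = 0.5, 0
c = lambda t: -0.4
x = {a: 1.0}
for k in range(1, 31):            # step  nabla_{a*}^nu x(t) = c(t) x(t)  forward
    t = a + k
    S = sum(H(-nu, t, s - 1) * (x[s] - x[s - 1]) for s in range(a + 1, t))
    x[t] = (x[t - 1] - S) / (1.0 - c(t))

# Lemma 3.149: the same x satisfies the summation (integral) equation
for t in range(a + 1, a + 31):
    rhs = x[a] + sum(H(nu - 1, t, s - 1) * c(s) * x[s] for s in range(a + 1, t + 1))
    assert math.isclose(x[t], rhs, rel_tol=1e-9)
```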

The following lemma appears in [34].

Lemma 3.150.

Assume that 0 < ν < 1, |b| < 1. Then the Mittag–Leffler function \(E_{b,\nu,\nu -1}(t,\rho (a)) =\sum _{ k=0}^{\infty }b^{k}H_{\nu k+\nu -1}(t,\rho (a))\) is the unique solution of the IVP

$$\displaystyle\begin{array}{rcl} & & \nabla _{\rho (a)}^{\nu }x(t) = bx(t),\quad \quad t \in \mathbb{N}_{ a+1} \\ & & x(a) = \frac{1} {1 - b}. {}\end{array}$$
(3.96)

Lemma 3.151.

Assume 0 < ν < 1, |b| < 1. Then any solution of the equation

$$\displaystyle\begin{array}{rcl} & & \nabla _{\rho (a)}^{\nu }x(t) = bx(t),\quad \quad t \in \mathbb{N}_{ a+1}{}\end{array}$$
(3.97)

satisfying x(a) > 0 is positive on \(\mathbb{N}_{a}\) .

Proof.

From (3.32), we have for \(t = a + k\)

$$\displaystyle\begin{array}{rcl} & & \nabla _{\rho (a)}^{\nu }x(t) =\int _{ \rho (a)}^{t}H_{ -\nu -1}(t,\rho (s))x(s)\nabla s {}\\ & & =\sum _{ s=a}^{a+k}H_{ -\nu -1}(a + k,s - 1)x(s) {}\\ & & = x(a + k) -\nu x(a + k - 1) -\frac{\nu (-\nu + 1)} {2} x(a + k - 2) {}\\ & & -\cdots -\frac{\nu (-\nu + 1)\cdots (-\nu + k - 1)} {k!} x(a). {}\\ \end{array}$$

Using (3.97), we have that

$$\displaystyle\begin{array}{rcl} & & (1 - b)x(a + k) \\ & & =\nu x(a + k - 1) + \frac{\nu (-\nu + 1)} {2} x(a + k - 2) \\ & & \quad + \cdots + \frac{\nu (-\nu + 1)\cdots (-\nu + k - 1)} {k!} x(a). {}\end{array}$$
(3.98)

We will prove x(a + k) > 0 for \(k \in \mathbb{N}_{0}\) by using the principle of strong induction. Since x(a) > 0 we have that the base case holds. Now assume that x(a + i) > 0, for \(i = 0,1,\cdots \,,k - 1\). Since

$$\displaystyle{\frac{\nu (-\nu + 1)\cdots (-\nu + i - 1)} {i!} > 0}$$

for \(i = 2,3,\ldots,k - 1\), from (3.98), we have x(a + k) > 0. This completes the proof. □ 

Lemma 3.152.

Assume that 0 < ν < 1, − 1 < b < 0. Then

$$\displaystyle{\lim _{t\rightarrow \infty }E_{b,\nu,0}(t,a) = 0.}$$

Proof.

From Lemma 3.150 and Lemma 3.151, we have E b, ν, ν−1(t, ρ(a)) > 0, for \(t \in \mathbb{N}_{a+1}\). So we have

$$\displaystyle\begin{array}{rcl} \nabla E_{b,\nu,0}(t,a)& =& \sum _{k=0}^{\infty }b^{k}\nabla H_{\nu k}(t,a) {}\\ & =& \sum _{k=0}^{\infty }b^{k}H_{\nu k-1}(t,a) =\sum _{ k=1}^{\infty }b^{k}H_{\nu k-1}(t,a) {}\\ & =& b\sum _{k=1}^{\infty }b^{k-1}H_{\nu k-1}(t,a) = b\sum _{j=0}^{\infty }b^{j}H_{\nu j+\nu -1}(t,a) {}\\ & =& bE_{b,\nu,\nu -1}(t,a) = bE_{b,\nu,\nu -1}(t - 1,\rho (a)) < 0, {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\), where we used \(H_{-1}(t,a) = 0\). Therefore, E b, ν, 0(t, a) is decreasing for \(t \in \mathbb{N}_{a+1}\). From Lemma 3.137, we have E b, ν, 0(t, a) > 0 for \(t \in \mathbb{N}_{a+1}\). Suppose that

$$\displaystyle{\lim _{t\rightarrow \infty }E_{b,\nu,0}(t,a) = A \geq 0.}$$

We claim that A = 0. Suppose not, so that A > 0. Let x(t): = E b, ν, 0(t, a) > 0. From Lemma 3.149, we have

$$\displaystyle{x(t) =\int _{ a}^{t}H_{\nu -1}(t,\rho (s))bx(s)\nabla s + x(a)}$$
$$\displaystyle{= b[x(t) +\nu x(t - 1) + \frac{\nu (\nu +1)} {2!} x(t - 2)}$$
$$\displaystyle{+\cdots + H_{\nu -1}(t,a)x(a + 1)] + x(a).}$$

For fixed k 0 > 0 and large t, we have (since b < 0)

$$\displaystyle{x(t) \leq b\Big[x(t) +\nu x(t - 1) + \frac{\nu (\nu +1)} {2!} x(t - 2)}$$
$$\displaystyle{+\cdots + \frac{\nu (\nu +1)\cdots (\nu +k_{0} - 1)} {k_{0}!} x(t - k_{0})\Big] + x(a).}$$

Letting \(t \rightarrow \infty \), we get that

$$\displaystyle{ 0 < A \leq bA\Big[1 +\nu +\frac{\nu (\nu +1)} {2!} + \cdots + \frac{\nu (\nu +1)\cdots (\nu +k_{0} - 1)} {k_{0}!} \Big] + x(a). }$$
(3.99)

Notice (using mathematical induction in the first step) that

$$\displaystyle\begin{array}{rcl} & & 1 +\nu +\frac{\nu (\nu +1)} {2!} + \cdots + \frac{\nu (\nu +1)\cdots (\nu +k_{0} - 1)} {k_{0}!} {}\\ & & = \frac{(\nu +1)(\nu +2)\cdots (\nu +k_{0})} {k_{0}!} {}\\ & & = \frac{(\nu +1)(\nu +2)\cdots (\nu +1 + k_{0} - 1)} {(k_{0} - 1)!(k_{0} - 1)^{\nu +1}} \frac{(k_{0} - 1)^{\nu +1}} {k_{0}} {}\\ & & \rightarrow +\,\infty, {}\\ \end{array}$$

as \(k_{0} \rightarrow \infty,\) where we used (see Lemma 3.142)

$$\displaystyle{ \frac{1} {\Gamma (\nu +1)} =\lim _{k_{0}\rightarrow \infty }\frac{(\nu +1)(\nu +2)\cdots (\nu +1 + k_{0} - 1)} {(k_{0} - 1)!(k_{0} - 1)^{\nu +1}}.}$$

So in (3.99), for sufficiently large k 0, the right side of (3.99) is negative, but the left side of (3.99) is positive, which is a contradiction. So A = 0. This completes the proof. □ 
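The decay asserted in Lemma 3.152 is visible numerically. A sketch using a truncated Mittag–Leffler series (the helper names and the choice b = −1∕2 are ours):

```python
import math

def H(mu, t, s):
    # H_mu(t, s) = (t - s)^{rising mu} / Gamma(mu + 1), integer t - s >= 0
    n = t - s
    if n == 0:
        return 0.0
    return math.gamma(n + mu) / (math.gamma(n) * math.gamma(mu + 1))

def E(b, nu, t, a, K=160):
    # truncated series E_{b,nu,0}(t,a) = sum_{k>=0} b^k H_{nu k}(t,a);
    # the k = 0 term is H_0(t,a) = 1, and |b| < 1 kills the tail
    return 1.0 + sum(b ** k * H(nu * k, t, a) for k in range(1, K))

b, nu, a = -0.5, 0.5, 0
vals = [E(b, nu, a + n, a) for n in (1, 5, 15, 30)]
assert all(v > 0 for v in vals)                 # positive (cf. Lemma 3.137)
assert vals[0] > vals[1] > vals[2] > vals[3]    # strictly decreasing toward 0
assert math.isclose(vals[0], 1 / (1 - b))       # E_{b,nu,0}(a + 1, a) = 1/(1 - b)
```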

Theorem 3.153.

Assume c(t) ≤ b 1 < 0, 0 < ν < 1, and x(t) is any solution of the Caputo nabla fractional difference equation

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t),\quad t \in \mathbb{N}_{a+1}{}\end{array}$$
(3.100)

satisfying x(a) > 0. Then

$$\displaystyle{x(t) \leq 2x(a)E_{b_{1},\nu,0}(t,a),}$$

for \(t \in \mathbb{N}_{a}\) .

Proof.

Assume that b 1 > −1. Otherwise we can choose \(0 > b_{1}^{{\prime}} > -1\), \(b_{1}^{{\prime}} > b_{1}\) and replace b 1 by \(b_{1}^{{\prime}}\). From Lemma 3.141, we have

$$\displaystyle{\nabla _{a^{{\ast}}}^{\nu }E_{b_{1},\nu,0}(t,a) = b_{1}E_{b_{1},\nu,0}(t,a)}$$

and \(E_{b_{1},\nu,0}(a,a) = H_{0}(a,a) = 1\).

In Theorem 3.138, take c 2(t) = b 1. Then it holds that x(t) and \(y(t) = 2x(a)E_{b_{1},\nu,0}(t,a)\) satisfy

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t), }$$
(3.101)

and

$$\displaystyle{ \nabla _{a^{{\ast}}}^{\nu }y(t) = b_{1}y(t), }$$
(3.102)

respectively, for \(t \in \mathbb{N}_{a+1}\) and

$$\displaystyle{x(a) < 2x(a) = 2x(a)E_{b_{1},\nu,0}(a,a) = y(a).}$$

From Theorem 3.138, we get that

$$\displaystyle{x(t) \leq 2x(a)E_{b_{1},\nu,0}(t,a),}$$

for \(t \in \mathbb{N}_{a}\). This completes the proof. □ 

From Theorem 3.153 and Lemma 3.152, we get the following result.

Theorem 3.154.

Assume 0 < ν < 1 and there exists a constant b 1 such that c(t) ≤ b 1 < 0. Then the solutions of the equation (3.77) with x(a) > 0 satisfy

$$\displaystyle{\lim _{t\rightarrow \infty }x(t) = 0.}$$

Next we consider solutions of the ν-th order Caputo nabla fractional difference equation

$$\displaystyle\begin{array}{rcl} & & \nabla _{a^{{\ast}}}^{\nu }x(t) = c(t)x(t),\quad \quad t \in \mathbb{N}_{a+1},{}\end{array}$$
(3.103)

satisfying x(a) < 0.

By making the transformation \(x(t) = -y(t)\) and using Theorem 3.146 and Theorem 3.154, we get the following theorem.

Theorem 3.155.

Assume 0 < ν < 1 and there exists a constant b 2 such that 0 < b 2 ≤ c(t) < 1, \(t \in \mathbb{N}_{a+1}\) . Then the solutions of the equation (3.103) with x(a) < 0 satisfy

$$\displaystyle{\lim _{t\rightarrow \infty }x(t) = -\infty.}$$

Theorem 3.156.

Assume 0 < ν < 1 and there exists a constant b 1 such that c(t) ≤ b 1 < 0, \(t \in \mathbb{N}_{a+1}\) . Then the solutions of the equation (3.103) with x(a) < 0 satisfy

$$\displaystyle{\lim _{t\rightarrow \infty }x(t) = 0.}$$

3.23 Self-Adjoint Caputo Fractional Difference Equation

Let \(\mathcal{D}_{a}:=\{ x: \mathbb{N}_{a} \rightarrow \mathbb{R}\}\), and let \(L_{a}: \mathcal{D}_{a} \rightarrow \mathcal{D}_{a+1}\) be defined by

$$\displaystyle{ (L_{a}x)(t):= \nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }x(t + 1)] + q(t)x(t),\quad t \in \mathbb{N}_{ a+1}, }$$
(3.104)

where \(x \in \mathcal{D}_{a},0 <\nu < 1\), p(t) > 0, \(t \in \mathbb{N}_{a+1}\) and \(q: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). Most of the results in this section appear in Ahrendt et al. [4].

Theorem 3.157.

The operator L a in (3.104) is a linear transformation.

Proof.

Let \(x,y: \mathbb{N}_{a} \rightarrow \mathbb{R}\), and let \(\alpha,\beta \in \mathbb{R}\). Then

$$\displaystyle\begin{array}{rcl} & & L_{a}[\alpha x +\beta y](t) {}\\ & & = \nabla \bigg[p(t + 1)\nabla _{a{\ast}}^{\nu }[\alpha x(t + 1) +\beta y(t + 1)]\bigg] + q(t)[\alpha x(t) +\beta y(t)] {}\\ & & = \nabla \bigg[p(t + 1)[\alpha \nabla _{a{\ast}}^{\nu }x(t + 1) +\beta \nabla _{ a{\ast}}^{\nu }y(t + 1)]\bigg] +\alpha q(t)x(t) +\beta q(t)y(t) {}\\ & & = \nabla [\alpha p(t + 1)\nabla _{a{\ast}}^{\nu }x(t + 1) +\beta p(t + 1)\nabla _{ a{\ast}}^{\nu }y(t + 1)] +\alpha q(t)x(t) +\beta q(t)y(t) {}\\ & & =\alpha \nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }x(t + 1)] +\alpha q(t)x(t) +\beta \nabla [p(t + 1)\nabla _{ a{\ast}}^{\nu }y(t + 1)] +\beta q(t)y(t) {}\\ & & =\alpha L_{a}x(t) +\beta L_{a}y(t), {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}\). □ 

Theorem 3.158 (Existence and Uniqueness for IVPs).

Let \(A,B \in \mathbb{R}\) be given constants and assume \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then the IVP

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}x(t) = h(t),\quad t \in \mathbb{N}_{a+1} \\ \quad &x(a) = A,\quad \nabla x(a + 1) = B,\end{array} \right.& &{}\end{array}$$
(3.105)

has a unique solution on \(\mathbb{N}_{a}\) .

Proof.

Let \(x: \mathbb{N}_{a} \rightarrow \mathbb{R}\) be defined uniquely by

$$\displaystyle{x(a) = A,\quad x(a + 1) = A + B,}$$

and for \(t \in \mathbb{N}_{a+1},\) x(t) satisfies the summation equation

$$\displaystyle\begin{array}{rcl} x(t + 1)& =& x(t) -\sum _{\tau =a+1}^{t}\frac{(t -\tau +2)^{\overline{-\nu }}} {\Gamma (1-\nu )} \nabla x(\tau ) {}\\ & & + \frac{1} {p(t + 1)}\left [h(t) - q(t)x(t) + p(t)\sum _{\tau =a+1}^{t}\frac{(t -\tau +1)^{\overline{-\nu }}} {\Gamma (1-\nu )} \nabla x(\tau )\right ]. {}\\ \end{array}$$

We will show that x solves the IVP (3.105). Clearly the initial conditions are satisfied. Now we show that x is a solution of the nabla Caputo self-adjoint equation on \(\mathbb{N}_{a}\). To see this note that for \(t \in \mathbb{N}_{a+1}\), we have from the last equation

$$\displaystyle\begin{array}{rcl} & & \nabla x(t + 1) +\int _{ a}^{t}H_{ -\nu }(t + 1,\rho (\tau ))\nabla x(\tau )\nabla \tau \\ & &\qquad = \frac{1} {p(t + 1)}\left [h(t) - q(t)x(t) + p(t)\nabla _{a}^{-(1-\nu )}\nabla x(t)\right ].{}\end{array}$$
(3.106)

But

$$\displaystyle\begin{array}{rcl} & & \nabla _{a{\ast}}^{\nu }x(t + 1) {}\\ & & = \nabla _{a}^{-(1-\nu )}\nabla x(t + 1) {}\\ & & =\int _{ a}^{t+1}H_{ -\nu }(t + 1,\rho (\tau ))\nabla x(\tau )\nabla \tau {}\\ & & = H_{-\nu }(t + 1,t)\nabla x(t + 1) +\int _{ a}^{t}H_{ -\nu }(t + 1,\rho (\tau ))\nabla x(\tau )\nabla \tau {}\\ & & = \nabla x(t + 1) +\int _{ a}^{t}H_{ -\nu }(t + 1,\rho (\tau ))\nabla x(\tau )\nabla \tau. {}\\ \end{array}$$

Hence, from this last equation and (3.106) we get that

$$\displaystyle\begin{array}{rcl} p(t + 1)\nabla _{a{\ast}}^{\nu }x(t + 1)& =& h(t) - q(t)x(t) + p(t)\nabla _{ a}^{-(1-\nu )}\nabla x(t) {}\\ & =& h(t) - q(t)x(t) + p(t)\nabla _{a{\ast}}^{\nu }x(t). {}\\ \end{array}$$

It follows that

$$\displaystyle{ \nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }x(t + 1)] + q(t)x(t) = h(t) }$$

for \(t \in \mathbb{N}_{a+1}\). Reversing the preceding steps shows that if y(t) is a solution to the IVP (3.105), it must be the same solution as x(t). Therefore the IVP (3.105) has a unique solution. □ 
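The summation equation in this proof can be iterated directly to compute the solution. The following numerical sketch (the helper names `rising`, `solve_ivp`, and `caputo` are our own illustrative choices, not from the text) builds x on \(\mathbb{N}_{a}\) from the recurrence:

```python
from math import gamma

def rising(x, nu):
    # rising factorial x^{overline{nu}} = Gamma(x + nu) / Gamma(x)
    return gamma(x + nu) / gamma(x)

def solve_ivp(a, A, B, p, q, h, nu, n):
    # iterate the summation equation from the proof of Theorem 3.158
    # to build x on {a, a+1, ..., a+n}
    g = gamma(1 - nu)
    x = {a: A, a + 1: A + B}
    for t in range(a + 1, a + n):
        s1 = sum(rising(t - tau + 2, -nu) / g * (x[tau] - x[tau - 1])
                 for tau in range(a + 1, t + 1))
        s2 = sum(rising(t - tau + 1, -nu) / g * (x[tau] - x[tau - 1])
                 for tau in range(a + 1, t + 1))
        x[t + 1] = x[t] - s1 + (h(t) - q(t) * x[t] + p(t) * s2) / p(t + 1)
    return x

def caputo(x, a, nu, t):
    # nabla_{a*}^{nu} x(t) = sum_{tau=a+1}^{t} H_{-nu}(t, rho(tau)) nabla x(tau)
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (x[tau] - x[tau - 1])
               for tau in range(a + 1, t + 1))
```

Checking that \(p(t+1)\nabla _{a{\ast}}^{\nu }x(t+1) - p(t)\nabla _{a{\ast}}^{\nu }x(t) + q(t)x(t) - h(t)\) vanishes (to rounding) for sample coefficients reproduces the reversal step of the proof numerically.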

Theorem 3.159.

Let 0 < ν < 1 and let \(x: \mathbb{N}_{a} \rightarrow \mathbb{R}\) . Then

$$\displaystyle{\nabla _{a{\ast}}^{\nu }x(a + 1) = \nabla x(a + 1).}$$

Proof.

Let 0 < ν < 1. Then by the definition of the nabla Caputo fractional difference it holds that

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }x(a + 1)& =& \nabla _{ a}^{-(1-\nu )}\nabla x(a + 1) {}\\ & =& \int _{a}^{a+1}H_{ -\nu }(a + 1,\rho (\tau ))\nabla x(\tau )\nabla \tau {}\\ & =& H_{-\nu }(a + 1,a)\nabla x(a + 1) {}\\ & =& \nabla x(a + 1), {}\\ \end{array}$$

which is what we wanted to prove. □ 
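Theorem 3.159 is easy to check numerically: in the defining sum for \(\nabla _{a{\ast}}^{\nu }x(a+1)\) only the term τ = a + 1 survives, and its coefficient \(H_{-\nu }(a+1,a) = 1\). A minimal sketch (the sample values and helper names are our own):

```python
from math import gamma

def rising(x, nu):
    # rising factorial x^{overline{nu}} = Gamma(x + nu) / Gamma(x)
    return gamma(x + nu) / gamma(x)

def caputo_at(x, a, nu, t):
    # nabla_{a*}^{nu} x(t) = sum_{tau=a+1}^{t} H_{-nu}(t, rho(tau)) nabla x(tau)
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (x[tau] - x[tau - 1])
               for tau in range(a + 1, t + 1))

# arbitrary sample values of a function on {0, 1, 2}
x = {0: 2.0, 1: 5.0, 2: 3.5}
```

For any 0 < ν < 1 the value `caputo_at(x, 0, nu, 1)` agrees with the backward difference x(1) − x(0).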

Theorem 3.160 (General Solution of the Homogeneous Equation).

Assume x 1 ,x 2 are linearly independent solutions of L a x = 0 on \(\mathbb{N}_{a}\) . Then the general solution to L a x = 0 is given by

$$\displaystyle{x(t) = c_{1}x_{1}(t) + c_{2}x_{2}(t),\quad t \in \mathbb{N}_{a}}$$

where \(c_{1},c_{2} \in \mathbb{R}\) are arbitrary constants.

Proof.

Let x 1, x 2 be linearly independent solutions of L a x(t) = 0 on \(\mathbb{N}_{a}\). If we let

$$\displaystyle{\alpha:= x_{1}(a),\quad \beta:= x_{1}(a + 1),\quad \gamma:= x_{2}(a),\quad \delta:= x_{2}(a + 1),}$$

then x 1, x 2 are the unique solutions to the IVPs

$$\displaystyle{\begin{array}{lcr} \left \{\begin{array}{l} Lx_{1} = 0, \\ x_{1}(a) =\alpha, \\ x_{1}(a + 1) =\beta, \end{array} \right.&\mathrm{and}&\left \{\begin{array}{l} Lx_{2} = 0, \\ x_{2}(a) =\gamma, \\ x_{2}(a + 1) =\delta. \end{array} \right. \end{array} }$$

Since L a is a linear operator, for any \(c_{1},c_{2} \in \mathbb{R}\), we have

$$\displaystyle{L_{a}[c_{1}x_{1}(t) + c_{2}x_{2}(t)] = c_{1}L_{a}x_{1}(t) + c_{2}L_{a}x_{2}(t) = 0,}$$

so \(x(t) = c_{1}x_{1}(t) + c_{2}x_{2}(t)\) solves L a x(t) = 0. Conversely, suppose \(x: \mathbb{N}_{a} \rightarrow \mathbb{R}\) solves L a x(t) = 0. Note that if A: = x(a) and \(B:= x(a + 1)\), then x(t) is the unique solution of the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &Lx = 0,\\ \quad &x(a) = A,\quad x(a + 1) = B. \end{array} \right.}$$

It remains to show that the matrix equation

$$\displaystyle{ \left [\begin{array}{cc} x_{1}(a) & x_{2}(a) \\ x_{1}(a + 1)&x_{2}(a + 1) \end{array} \right ]\left [\begin{array}{c} c_{1} \\ c_{2} \end{array} \right ] = \left [\begin{array}{cc} A\\ B \end{array} \right ] }$$
(3.107)

has a unique solution for c 1, c 2. Then x(t) and \(c_{1}x_{1}(t) + c_{2}x_{2}(t)\) solve the same IVP, so by Theorem 3.158, every solution to L a x(t) = 0 may be uniquely expressed as a linear combination of x 1(t) and x 2(t). The above matrix equation can be equivalently expressed as

$$\displaystyle{\left [\begin{array}{cc} \alpha &\gamma \\ \beta &\delta \end{array} \right ]\left [\begin{array}{c} c_{1} \\ c_{2} \end{array} \right ] = \left [\begin{array}{cc} A\\ B \end{array} \right ].}$$

Suppose by way of contradiction that

$$\displaystyle{\left \vert \begin{array}{cc} \alpha &\gamma \\ \beta &\delta \end{array} \right \vert = 0.}$$

Without loss of generality, there exists a constant \(k \in \mathbb{R}\) for which α = k γ and β = k δ. Then \(x_{1}(a) = k\gamma = kx_{2}(a)\), and \(x_{1}(a + 1) = k\delta = kx_{2}(a + 1)\). Since kx 2(t) solves L a x(t) = 0, we have that x 1(t) and kx 2(t) solve the same IVP. By uniqueness, x 1(t) = kx 2(t). But then x 1(t) and x 2(t) are linearly dependent on \(\mathbb{N}_{a}\), so we have a contradiction. Therefore, the matrix equation (3.107) must have a unique solution. □ 

Corollary 3.161.

Assume x 1 ,x 2 are linearly independent solutions of L a x(t) = 0 on \(\mathbb{N}_{a}\) and y 0 is a particular solution to L a x(t) = h(t) on \(\mathbb{N}_{a}\) for some \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then the general solution of L a x(t) = h(t) is given by

$$\displaystyle{x(t) = c_{1}x_{1}(t) + c_{2}x_{2}(t) + y_{0}(t),}$$

where \(c_{1},c_{2} \in \mathbb{R}\) are arbitrary constants, for \(t \in \mathbb{N}_{a}\) .

Proof.

This proof is left to the reader. □ 

Next we define the Cauchy function for the Caputo fractional self-adjoint equation, L a x = 0. Later we will see its importance for finding a variation of constants formula for L a x = h(t) and also its importance for constructing Green’s functions for various boundary value problems.

Definition 3.162.

The Cauchy function for L a x(t) = 0 is the real-valued function x(t, s), defined for \(t,s \in \mathbb{N}_{a}\) with a ≤ s ≤ t, such that, for each fixed \(s \in \mathbb{N}_{a}\), x satisfies the IVP

$$\displaystyle{ \left \{\begin{array}{l} L_{s}x(t) = 0,\quad t \in \mathbb{N}_{s+1} \\ x(s,s) = 0, \\ \nabla x(s + 1,s) = \frac{1} {p(s+1)}. \end{array} \right. }$$
(3.108)

Note by Theorem 3.159, the IVP (3.108) is equivalent to the IVP

$$\displaystyle{\left \{\begin{array}{l} L_{s}x(t) = 0\quad t \in \mathbb{N}_{s+1}, \\ x(s,s) = 0, \\ \nabla _{s{\ast}}^{\nu }x(s + 1,s) = \frac{1} {p(s+1)}.\end{array} \right.}$$

Example 3.163.

We show that the Cauchy function for

$$\displaystyle{\nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1)] = 0,\quad t \in \mathbb{N}_{ a+1}}$$

is given by the formula

$$\displaystyle{ x(t,s) = \nabla _{s}^{-\nu }\left ( \frac{1} {p(t)}\right ) =\int _{ s}^{t}\frac{H_{\nu -1}(t,\rho (\tau ))} {p(\tau )} \nabla \tau. }$$
(3.109)

for t ≥ s ≥ a. We know for each fixed s, the Cauchy function satisfies the equation

$$\displaystyle{\nabla [p(t + 1)\nabla _{s{\ast}}^{\nu }x(t + 1,s)] = 0,}$$

for t ≥ s ≥ a. It follows that

$$\displaystyle\begin{array}{rcl} p(t + 1)\nabla _{s{\ast}}^{\nu }x(t + 1,s)& =& \alpha (s) {}\\ \nabla _{s{\ast}}^{\nu }x(t + 1,s)& =& \frac{\alpha (s)} {p(t + 1)}. {}\\ \end{array}$$

Letting t = s and using the initial condition

$$\displaystyle{\nabla x(s + 1,s) = \nabla _{s{\ast}}^{\nu }x(s + 1,s) = \frac{1} {p(s + 1)}}$$

we get that α(s) = 1. Hence we have that

$$\displaystyle{\nabla _{s{\ast}}^{\nu }x(t + 1,s) = \frac{1} {p(t + 1)}.}$$

By the definition of the Caputo difference, this is equivalent to

$$\displaystyle\begin{array}{rcl} \nabla _{s}^{-(1-\nu )}\nabla x(t + 1,s)& =& \frac{1} {p(t + 1)} {}\\ \nabla _{s}^{1-\nu }\nabla _{ s}^{-(1-\nu )}\nabla x(t + 1,s)& =& \nabla _{ s}^{1-\nu }\left ( \frac{1} {p(t + 1)}\right ) {}\\ \nabla x(t + 1,s)& =& \nabla _{s}^{1-\nu }\left ( \frac{1} {p(t + 1)}\right ). {}\\ \end{array}$$

Replacing t + 1 with t yields

$$\displaystyle{\nabla x(t,s) = \nabla _{s}^{1-\nu }\left ( \frac{1} {p(t)}\right ).}$$

Integrating both sides from s to t and using x(s, s) = 0 we get

$$\displaystyle\begin{array}{rcl} x(t,s)& =& \int _{s}^{t}\nabla _{ s}^{1-\nu } \frac{1} {p(\tau )}\nabla \tau {}\\ & =& \int _{s}^{t}\nabla \nabla _{ s}^{-\nu } \frac{1} {p(\tau )}\nabla \tau {}\\ & =& \bigg[\nabla _{s}^{-\nu } \frac{1} {p(\tau )}\bigg]_{\tau =s}^{\tau =t} {}\\ & =& \nabla _{s}^{-\nu }\left ( \frac{1} {p(t)}\right ) -\nabla _{s}^{-\nu }\left (\frac{1} {p}\right )(s) {}\\ & =& \nabla _{s}^{-\nu }\left ( \frac{1} {p(t)}\right ) =\int _{ s}^{t}\frac{H_{\nu -1}(t,\rho (\tau ))} {p(\tau )} \nabla \tau. {}\\ \end{array}$$

Example 3.164.

Find the Cauchy function for

$$\displaystyle{\nabla \nabla _{a{\ast}}^{\nu }y(t + 1) = 0,\quad t \in \mathbb{N}_{ a+1}.}$$

From Example 3.163 we have that the Cauchy function is given by

$$\displaystyle\begin{array}{rcl} x(t,s)& =& \nabla _{s}^{-\nu }\left ( \frac{1} {p(t)}\right ) =\int _{ s}^{t}\frac{H_{\nu -1}(t,\rho (\tau ))} {p(\tau )} \nabla \tau {}\\ & =& \int _{s}^{t}H_{\nu -1}(t,\rho (\tau ))\nabla \tau {}\\ & =& -H_{\nu }(t,\tau )\bigg\vert _{\tau =s}^{\tau =t} {}\\ & =& H_{\nu }(t,s). {}\\ \end{array}$$

Note that if ν = 1 we get the well-known result that the Cauchy function for \(\nabla ^{2}x(t + 1) = 0\) is given by \(x(t,s) = t - s\).
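For p ≡ 1 this computation can be verified numerically. The sketch below (helper names are ours) implements the nabla Taylor monomial \(H_{\nu }\) and checks that \(\nabla _{s{\ast}}^{\nu }H_{\nu }(t,s) = 1\) for t > s, which is exactly the statement that \(H_{\nu }(t,s)\) is the Cauchy function when p ≡ 1:

```python
from math import gamma

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

def H(nu, t, s):
    # nabla Taylor monomial H_nu(t, s) = (t - s)^{overline{nu}} / Gamma(nu + 1)
    return rising(t - s, nu) / gamma(nu + 1)

def caputo_H(nu, t, s):
    # nabla_{s*}^{nu} H_nu(t, s), evaluated from the defining sum
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (H(nu, tau, s) - H(nu, tau - 1, s))
               for tau in range(s + 1, t + 1))
```

For ν = 1 the helper recovers the classical Cauchy function: `H(1.0, t, s)` equals t − s.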

Theorem 3.165 (Variation of Constants).

Assume \(h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then the solution of the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}y(t) = h(t),\quad t \in \mathbb{N}_{a+1} \\ \quad &y(a) = y(a + 1) = 0, \end{array} \right.}$$

is given by

$$\displaystyle{y(t) =\int _{ a}^{t}x(t,s)h(s)\nabla s,\quad t \in \mathbb{N}_{ a},}$$

where x(t,s) is the Cauchy function for L a x = 0.

Proof.

Let \(y(t) =\int _{ a}^{t}x(t,s)h(s)\nabla s,\) \(t \in \mathbb{N}_{a}\). We first note that y(t) satisfies the initial conditions:

$$\displaystyle\begin{array}{rcl} & & y(a) =\sum _{ s=a+1}^{a}x(a,s)h(s) = 0, {}\\ & & y(a + 1) =\sum _{ s=a+1}^{a+1}x(a + 1,s)h(s) = x(a + 1,a + 1)h(a + 1) = 0. {}\\ \end{array}$$

Next, note that by the Leibniz formula (3.23), we have that

$$\displaystyle\begin{array}{rcl} \nabla y(t)& =& \int _{a}^{t-1}\nabla _{ t}x(t,s)h(s)\nabla s + x(t,t)h(t) \\ & =& \int _{a}^{t-1}\nabla _{ t}x(t,s)h(s)\nabla s. {}\end{array}$$
(3.110)

We now show that

$$\displaystyle{ \nabla _{a{\ast}}^{\nu }y(t) =\int _{ a}^{t-1}\nabla _{ s{\ast}}^{\nu }x(t,s)h(s)\nabla s, }$$
(3.111)

for \(t \in \mathbb{N}_{a+2}\). By the definition of the Caputo fractional difference,

$$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\nu }y(t)& =& \nabla _{ a}^{-(1-\nu )}\nabla y(t) {}\\ & =& \int _{a}^{t}H_{ -\nu }(t,\rho (\tau ))\nabla y(\tau )\nabla \tau {}\\ & =& \int _{a}^{t}H_{ -\nu }(t,\rho (\tau ))\int _{a}^{\tau -1}\nabla _{ t}x(t,s)h(s)\nabla s\nabla \tau,\quad \mbox{ by (3.110)} {}\\ & =& \int _{a}^{t}\int _{ a}^{\tau -1}H_{ -\nu }(t,\rho (\tau ))\nabla _{t}x(t,s)h(s)\nabla s\nabla \tau {}\\ & =& \sum _{\tau =a+1}^{t}\sum _{ s=a+1}^{\tau -1}H_{ -\nu }(t,\rho (\tau ))\nabla _{t}x(t,s)h(s) {}\\ & =& \sum _{\tau =a+2}^{t}\sum _{ s=a+1}^{\tau -1}H_{ -\nu }(t,\rho (\tau ))\nabla _{t}x(t,s)h(s) {}\\ & =& \sum _{s=a+1}^{t-1}\sum _{ \tau =s+1}^{t}H_{ -\nu }(t,\rho (\tau ))\nabla _{t}x(t,s)h(s) {}\\ & =& \int _{a}^{t-1}\int _{ s}^{t}H_{ -\nu }(t,\rho (\tau ))\nabla _{t}x(t,s)h(s)\nabla \tau \nabla s {}\\ & =& \int _{a}^{t-1}\nabla _{ s}^{-(1-\nu )}\nabla _{ t}x(t,s)h(s)\nabla s {}\\ & =& \int _{a}^{t-1}\nabla _{ s{\ast}}^{\nu }x(t,s)h(s)\nabla s {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+2}\). Hence (3.111) holds. Then by (3.111) we have that

$$\displaystyle{p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1) =\int _{ a}^{t}\left [p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,s)\right ]h(s)\nabla s.}$$

Using the Leibniz formula (3.23) we get that

$$\displaystyle\begin{array}{rcl} & & \nabla \left [p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1)\right ] {}\\ & & =\int _{ a}^{t-1}\nabla \left [p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,s)\right ]h(s)\nabla s + p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,t)h(t) {}\\ & & =\int _{ a}^{t-1}\nabla \left [p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,s)\right ]h(s)\nabla s + h(t) {}\\ \end{array}$$

It follows that

$$\displaystyle\begin{array}{rcl} & & L_{a}y(t) = \nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1)] + q(t)y(t) {}\\ & & =\int _{ a}^{t-1}\nabla \left [p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,s)\right ]h(s)\nabla s + h(t) +\int _{ a}^{t-1}q(t)x(t,s)h(s)\nabla s {}\\ & & =\int _{ a}^{t-1}\bigg[\nabla [p(t + 1)\nabla _{ s{\ast}}^{\nu }x(t + 1,s)] + q(t)x(t,s)\bigg]h(s)\nabla s + h(t) {}\\ & & = h(t) +\int _{ a}^{t-1}L_{ s}x(t,s)h(s)\nabla s {}\\ & & = h(t). {}\\ \end{array}$$

Thus, y(t) solves the given IVP. □ 

Theorem 3.166 (Variation of Constants Formula).

Assume \(p,q,h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then the solution to the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}y(t) = h(t), \\ \quad &y(a) = A, \\ \quad &\nabla y(a + 1) = B, \end{array} \right.}$$

for \(t \in \mathbb{N}_{a+1}\) , where \(A,B \in \mathbb{R}\) are arbitrary constants, is given by

$$\displaystyle{y(t) = y_{0}(t) +\int _{ a}^{t}x(t,s)h(s)\nabla s,}$$

where y 0 (t) solves the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}y(t) = 0, \\ \quad &y(a) = A, \\ \quad &\nabla y(a + 1) = B, \end{array} \right.}$$

for \(t \in \mathbb{N}_{a+1}\) .

Proof.

The proof follows from Theorem 3.165 by linearity. □ 

Corollary 3.167.

Assume \(p,h: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) . Then the solution of the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1)] = h(t),\quad t \in \mathbb{N}_{a+1} \\ \quad &y(a) = \nabla y(a + 1) = 0, \end{array} \right.}$$

is given by

$$\displaystyle{y(t) =\int _{ a}^{t}\nabla _{ s}^{-\nu }\left ( \frac{1} {p(t)}\right )h(s)\nabla s.}$$

Proof.

From Theorem 3.165, we know that the solution is given by

$$\displaystyle{y(t) =\int _{ a}^{t}x(t,s)h(s)\nabla s,}$$

where x(t, s) is the Cauchy function for

$$\displaystyle{\nabla [p(t + 1)\nabla _{a{\ast}}^{\nu }y(t + 1)] = 0.}$$

By Example 3.163, we know the Cauchy function for the above difference equation is \(x(t,s) = \nabla _{s}^{-\nu }\left ( \frac{1} {p(t)}\right )\). Hence the solution of our given IVP is given by

$$\displaystyle{y(t) =\int _{ a}^{t}\nabla _{ s}^{-\nu }\left ( \frac{1} {p(t)}\right )h(s)\nabla s,}$$

for \(t \in \mathbb{N}_{a}\). □ 
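Corollary 3.167 can be tested numerically for a nonconstant p. In this sketch (the function names and the sample p, h are our own choices) the Cauchy function is evaluated from its defining sum, and the resulting y is checked against the difference equation:

```python
from math import gamma

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

def cauchy(nu, p, t, s):
    # x(t, s) = nabla_s^{-nu}(1/p)(t) = sum_{tau=s+1}^{t} H_{nu-1}(t, rho(tau)) / p(tau)
    return sum(rising(t - tau + 1, nu - 1) / gamma(nu) / p(tau)
               for tau in range(s + 1, t + 1))

def voc_solution(nu, p, h, a, t):
    # y(t) = int_a^t x(t, s) h(s) nabla s
    return sum(cauchy(nu, p, t, s) * h(s) for s in range(a + 1, t + 1))

def caputo(nu, y, a, t):
    # nabla_{a*}^{nu} y(t) from the defining sum
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (y[tau] - y[tau - 1])
               for tau in range(a + 1, t + 1))
```

Both initial conditions hold automatically since the sums defining y(a) and y(a + 1) are empty or involve x(t, t) = 0.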

The following example appears in [4].

Example 3.168.

Solve the IVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{0{\ast}}^{0.6}x(t + 1) = t,\quad t \in \mathbb{N}_{1}, \\ \quad &x(0) = \nabla x(1) = 0. \end{array} \right.}$$

Note that for this self-adjoint equation, p(t) ≡ 1 and q(t) ≡ 0. From Example 3.164 we have that \(x(t,s) = H_{0.6}(t,s)\).

Then by Theorem 3.165

$$\displaystyle\begin{array}{rcl} x(t)& =& \int _{0}^{t}x(t,s)s\nabla s {}\\ & =& \int _{0}^{t}H_{.6}(t,s)s\nabla s. {}\\ \end{array}$$

Integrating by parts we get from Exercise 3.37 that

$$\displaystyle{ x(t) = \frac{1} {\Gamma (3.6)}(t - 1)^{\overline{2.6}}, }$$

for \(t \in \mathbb{N}_{0}\).
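The claimed closed form can be verified numerically. The following sketch (helper names are ours) evaluates \(x(t) = (t-1)^{\overline{2.6}}/\Gamma (3.6)\) and checks that \(\nabla \nabla _{0{\ast}}^{0.6}x(t+1) = t\) together with the initial conditions:

```python
from math import gamma, isclose

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

def x_closed(t):
    # claimed solution x(t) = (t - 1)^{overline{2.6}} / Gamma(3.6)
    return rising(t - 1, 2.6) / gamma(3.6)

def caputo(nu, f, t):
    # nabla_{0*}^{nu} f(t) from the defining sum
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (f(tau) - f(tau - 1))
               for tau in range(1, t + 1))
```

In particular x(2) = 1 and x(3) = 3.6, which one can also confirm by iterating the summation equation of Theorem 3.158 by hand.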

3.24 Boundary Value Problems

Many of the results in this section appear in Ahrendt et al. [4]. In this section we will consider the nonhomogeneous boundary value problem (BVP)

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}x(t) = h(t),\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \quad &\alpha x(a) -\beta \nabla x(a + 1) = A, \\ \quad &\gamma x(b) +\delta \nabla x(b) = B, \end{array} \right. }$$
(3.112)

and the corresponding homogeneous BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}x(t) = 0,\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \quad &\alpha x(a) -\beta \nabla x(a + 1) = 0, \\ \quad &\gamma x(b) +\delta \nabla x(b) = 0,\end{array} \right. }$$
(3.113)

where \(h: \mathbb{N}_{a+1}^{b-1} \rightarrow \mathbb{R}\), p(t) > 0, \(t \in \mathbb{N}_{a+1}^{b}\), 0 < ν ≤ 1, and \(\alpha,\beta,\gamma,\delta,A,B \in \mathbb{R}\) for which α 2 +β 2 > 0 and γ 2 +δ 2 > 0. Also we always assume b − a is a positive integer large enough so that the boundary conditions are not equivalent to initial conditions. The following theorem gives an important relationship between these two BVPs.

Theorem 3.169.

The homogeneous BVP (3.113) has only the trivial solution iff the nonhomogeneous BVP (3.112) has a unique solution.

Proof.

Let x 1, x 2 be linearly independent solutions to L a x(t) = 0 on \(\mathbb{N}_{a}^{b}\). By Theorem 3.160, a general solution to L a x(t) = 0 is given by

$$\displaystyle{x(t) = c_{1}x_{1}(t) + c_{2}x_{2}(t),}$$

where \(c_{1},c_{2} \in \mathbb{R}\) are arbitrary constants. If x(t) solves the homogeneous boundary conditions, then x(t) is the trivial solution if and only if \(c_{1} = c_{2} = 0\). This is true if and only if the system of equations

$$\displaystyle{\left \{\begin{array}{l} \alpha [c_{1}x_{1}(a) + c_{2}x_{2}(a)] -\beta \nabla [c_{1}x_{1}(a + 1) + c_{2}x_{2}(a + 1)] = 0, \\ \gamma [c_{1}x_{1}(b) + c_{2}x_{2}(b)] +\delta \nabla [c_{1}x_{1}(b) + c_{2}x_{2}(b)] = 0,\end{array} \right.}$$

or equivalently,

$$\displaystyle{\left \{\begin{array}{l} c_{1}[\alpha x_{1}(a) -\beta \nabla x_{1}(a + 1)] + c_{2}[\alpha x_{2}(a) -\beta \nabla x_{2}(a + 1)] = 0, \\ c_{1}[\gamma x_{1}(b) +\delta \nabla x_{1}(b)] + c_{2}[\gamma x_{2}(b) +\delta \nabla x_{2}(b)] = 0,\end{array} \right.}$$

has only the trivial solution \(c_{1} = c_{2} = 0\). In other words, x(t) solves (3.113) if and only if

$$\displaystyle{ D:= \left \vert \begin{array}{cc} \alpha x_{1}(a) -\beta \nabla x_{1}(a + 1)&\alpha x_{2}(a) -\beta \nabla x_{2}(a + 1) \\ \gamma x_{1}(b) +\delta \nabla x_{1}(b) & \gamma x_{2}(b) +\delta \nabla x_{2}(b) \end{array} \right \vert \neq 0. }$$

Now consider the nonhomogeneous BVP (3.112). By Corollary 3.161, a general solution of the nonhomogeneous equation L a y(t) = h(t) is given by

$$\displaystyle{y(t) = a_{1}x_{1}(t) + a_{2}x_{2}(t) + y_{0}(t),}$$

where \(a_{1},a_{2} \in \mathbb{R}\) are arbitrary constants, and \(y_{0}: \mathbb{N}_{a} \rightarrow \mathbb{R}\) is a particular solution of the nonhomogeneous equation L a y(t) = h(t). Then y(t) satisfies the nonhomogeneous boundary conditions in (3.112) if and only if

$$\displaystyle{\left \{\begin{array}{l} \alpha [a_{1}x_{1}(a) + a_{2}x_{2}(a) + y_{0}(a)] \\ \qquad \qquad -\beta \nabla [a_{1}x_{1}(a + 1) + a_{2}x_{2}(a + 1) + y_{0}(a + 1)] = A, \\ \gamma [a_{1}x_{1}(b) + a_{2}x_{2}(b) + y_{0}(b)] +\delta \nabla [a_{1}x_{1}(b) + a_{2}x_{2}(b) + y_{0}(b)] = B.\end{array} \right.}$$

This system is equivalent to the system

$$\displaystyle{\left \{\begin{array}{l} a_{1}[\alpha x_{1}(a) -\beta \nabla x_{1}(a + 1)] + a_{2}[\alpha x_{2}(a) -\beta \nabla x_{2}(a + 1)] \\ = A -\alpha y_{0}(a) +\beta \nabla y_{0}(a + 1), \\ a_{1}[\gamma x_{1}(b) +\delta \nabla x_{1}(b)] + a_{2}[\gamma x_{2}(b) +\delta \nabla x_{2}(b)] \\ = B -\gamma y_{0}(b) -\delta \nabla y_{0}(b). \end{array} \right.}$$

Thus, y(t) satisfies the boundary conditions in (3.112) iff D ≠ 0. Therefore the homogeneous BVP (3.113) has only the trivial solution iff the nonhomogeneous BVP (3.112) has a unique solution. □ 

In the next theorem we give conditions for which Theorem 3.169 applies.

Theorem 3.170.

Let

$$\displaystyle{\rho:=\alpha \gamma \nabla _{a}^{-\nu } \frac{1} {p(b)} + \frac{\alpha \delta } {p(b)} + \frac{\beta \gamma } {p(a + 1)}.}$$

Then the BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla [p(t + 1)\nabla _{a^{{\ast}}}^{\nu }x(t + 1)] = 0,\quad t \in \mathbb{N}_{ a+1}^{b-1}, \\ \quad &\alpha x(a) -\beta \nabla _{a^{{\ast}}}^{\nu }x(a + 1) = 0, \\ \quad &\gamma x(b) +\delta \nabla _{a^{{\ast}}}^{\nu }x(b) = 0, \end{array} \right. }$$
(3.114)

has only the trivial solution if and only if ρ ≠ 0.

Proof.

Note that \(x_{1}(t) = 1,x_{2}(t) = \nabla _{a}^{-\nu } \frac{1} {p(t)}\) are linearly independent solutions to

$$\displaystyle{\nabla \{p(t + 1)\nabla _{a^{{\ast}}}^{\nu }x(t + 1)\} = 0.}$$

Then a general solution of the difference equation is given by

$$\displaystyle{x(t) = c_{1}x_{1}(t) + c_{2}x_{2}(t) = c_{1} + c_{2}\nabla _{a}^{-\nu } \frac{1} {p(t)}.}$$

The boundary conditions \(\alpha x(a) -\beta \nabla _{a^{{\ast}}}^{\nu }x(a + 1) = 0,\) and \(\gamma x(b) +\delta \nabla _{a^{{\ast}}}^{\nu }x(b) = 0\) give us the linear system

$$\displaystyle{c_{1}\alpha + c_{2}\bigg[- \frac{\beta } {p(a + 1)}\bigg] = 0,}$$
$$\displaystyle{c_{1}\gamma + c_{2}\bigg( \frac{\delta } {p(b)} +\gamma \nabla _{a}^{-\nu } \frac{1} {p(b)}\bigg) = 0.}$$

The determinant of the coefficients of this system is given by

$$\displaystyle{\left \vert \begin{array}{cc} \alpha & - \frac{\beta } {p(a+1)} \\ \gamma & \frac{\delta } {p(b)} +\gamma \nabla _{a}^{-\nu } \frac{1} {p(b)} \end{array} \right \vert =\alpha \gamma \nabla _{a}^{-\nu } \frac{1} {p(b)}+ \frac{\alpha \delta } {p(b)}+ \frac{\beta \gamma } {p(a + 1)} =\rho.}$$

Hence, the BVP has only the trivial solution if and only if ρ ≠ 0. □ 
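The determinant computation can be spot-checked numerically. The sketch below (the sample p, α, β, γ, δ and helper names are our own choices) evaluates \(x_{2} = \nabla _{a}^{-\nu }(1/p)\) from its defining sum, confirms that \(\nabla _{a{\ast}}^{\nu }x_{2}(t) = 1/p(t)\), and compares the boundary-condition determinant with ρ:

```python
from math import gamma

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

a, b, nu = 0, 5, 0.4
p = lambda t: 1.5 + 0.25 * t  # sample positive coefficient function

def x2(t):
    # x_2(t) = nabla_a^{-nu}(1/p)(t) = sum_{tau=a+1}^{t} H_{nu-1}(t, rho(tau)) / p(tau)
    return sum(rising(t - tau + 1, nu - 1) / gamma(nu) / p(tau)
               for tau in range(a + 1, t + 1))

def caputo_x2(t):
    # nabla_{a*}^{nu} x_2(t), evaluated from the defining sum
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (x2(tau) - x2(tau - 1))
               for tau in range(a + 1, t + 1))

alpha, beta, gam, delta = 1.0, 2.0, 3.0, 0.5
# rows of the boundary-condition system applied to x_1 = 1 and x_2
det = alpha * (gam * x2(b) + delta * caputo_x2(b)) + beta * gam * caputo_x2(a + 1)
rho = alpha * gam * x2(b) + alpha * delta / p(b) + beta * gam / p(a + 1)
```

The agreement of `det` and `rho` reflects the fact that \(\nabla _{a{\ast}}^{\nu }x_{2}(a+1) = 1/p(a+1)\) and \(\nabla _{a{\ast}}^{\nu }x_{2}(b) = 1/p(b)\).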

Corollary 3.171.

Assume α, β, γ, and δ are all greater than or equal to zero with α 2 + β 2 ≠ 0 and γ 2 + δ 2 ≠ 0. Then the homogeneous BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla [p(t + 1)\nabla _{a^{{\ast}}}^{\nu }x(t + 1)] = 0,\quad t \in \mathbb{N}_{ a+1}^{b-1}, \\ \quad &\alpha x(a) -\beta \nabla _{a^{{\ast}}}^{\nu }x(a + 1) = 0, \\ \quad &\gamma x(b) +\delta \nabla _{a^{{\ast}}}^{\nu }x(b) = 0, \end{array} \right. }$$
(3.115)

has only the trivial solution.

Proof.

The hypotheses of this theorem imply that ρ > 0. Hence the conclusion follows from Theorem 3.170. □ 

Definition 3.172.

Assume the homogeneous BVP (3.113) has only the trivial solution. Then we define the Green’s function, G(t, s), for the homogeneous BVP (3.113) by

$$\displaystyle{G(t,s):= \left \{\begin{array}{lcr} u(t,s),&\quad &a \leq t \leq s \leq b,\\ v(t, s), &\quad &a \leq s \leq t \leq b, \end{array} \right.}$$

where for each fixed \(s \in \mathbb{N}_{a+1}^{b}\), u(t, s) is the unique solution (guaranteed by Theorem 3.169) of the BVP

$$\displaystyle{\left \{\begin{array}{l} L_{a}u(t) = 0,\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \alpha u(a,s) -\beta \nabla u(a + 1,s) = 0, \\ \gamma u(b,s) +\delta \nabla u(b,s) = -[\gamma x(b,s) +\delta \nabla x(b,s)], \end{array} \right.}$$

and

$$\displaystyle{v(t,s):= u(t,s) + x(t,s),}$$

where x(t, s) is the Cauchy function for L a x(t) = 0.

Note that for each fixed \(s \in \mathbb{N}_{a}^{b}\), \(v(t,s) = u(t,s) + x(t,s)\) is a solution of L a x(t) = 0 and since

$$\displaystyle\begin{array}{rcl} \gamma v(b,s) +\delta \nabla v(b,s)& =& \gamma [u(b,s) + x(b,s)] +\delta \nabla [u(b,s) + x(b,s)] {}\\ & =& [\gamma u(b,s) +\delta \nabla u(b,s)] + [\gamma x(b,s) +\delta \nabla x(b,s)] {}\\ & =& -[\gamma x(b,s) +\delta \nabla x(b,s)] + [\gamma x(b,s) +\delta \nabla x(b,s)] {}\\ & =& 0, {}\\ \end{array}$$

we have that for each fixed \(s \in \mathbb{N}_{a}^{b}\) the function v(t, s) satisfies the homogeneous boundary condition in (3.113) at t = b.

Theorem 3.173 (Green’s Function).

If (3.113) has only the trivial solution, then the unique solution to the BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &L_{a}y(t) = h(t),\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \quad &\alpha y(a) -\beta \nabla y(a + 1) = 0, \\ \quad &\gamma y(b) +\delta \nabla y(b) = 0,\end{array} \right. }$$
(3.116)

is given by

$$\displaystyle{y(t) =\int _{ a}^{b}G(t,s)h(s)\nabla s,\quad t \in \mathbb{N}_{ a}^{b},}$$

where G(t,s) is the Green’s function for the homogeneous BVP (3.113).

Proof.

Let

$$\displaystyle\begin{array}{rcl} y(t):=\int _{ a}^{b}G(t,s)h(s)\nabla s& =& \int _{ a}^{t}G(t,s)h(s)\nabla s +\int _{ t}^{b}G(t,s)h(s)\nabla s {}\\ & =& \int _{a}^{t}v(t,s)h(s)\nabla s +\int _{ t}^{b}u(t,s)h(s)\nabla s {}\\ & =& \int _{a}^{t}[u(t,s) + x(t,s)]h(s)\nabla s +\int _{ t}^{b}u(t,s)h(s)\nabla s {}\\ & =& \int _{a}^{b}u(t,s)h(s)\nabla s +\int _{ a}^{t}x(t,s)h(s)\nabla s {}\\ & =& \int _{a}^{b}u(t,s)h(s)\nabla s + z(t), {}\\ \end{array}$$

where \(z(t):=\int _{ a}^{t}x(t,s)h(s)\nabla s\). Since x(t, s) is the Cauchy function for L a x(t) = 0, it follows that z solves the IVP

$$\displaystyle{\left \{\begin{array}{lr} L_{a}z(t) = h(t),\quad t \in \mathbb{N}_{a+1}^{b-1} \\ z(a) = 0, \\ z(a + 1) = 0, \end{array} \right.}$$

for \(t \in \mathbb{N}_{a+1}^{b}\), by Theorem 3.165. Then,

$$\displaystyle\begin{array}{rcl} L_{a}y(t)& =& \int _{a}^{b}L_{ a}u(t,s)h(s)\nabla s + L_{a}z(t) {}\\ & =& 0 + h(t) = h(t), {}\\ \end{array}$$

for \(t \in \mathbb{N}_{a+1}^{b}\). It remains to show that the boundary conditions hold. At t = a, we have

$$\displaystyle\begin{array}{rcl} \alpha y(a) -\beta \nabla y(a + 1)& =& \int _{a}^{b}[\alpha u(a,s) -\beta \nabla u(a + 1,s)]h(s)\nabla s {}\\ & & +[\alpha z(a) -\beta \nabla z(a + 1)] = 0, {}\\ \end{array}$$

and at t = b, we have

$$\displaystyle\begin{array}{rcl} & & \gamma y(b) +\delta \nabla y(b) {}\\ & =\,& \gamma z(b) +\int _{ a}^{b}\gamma u(b,s)h(s)\nabla s +\delta \nabla z(b) +\int _{ a}^{b}\delta \nabla u(b,s)h(s)\nabla s {}\\ & =\,& \gamma \int _{a}^{b}x(b,s)h(s)\nabla s +\delta \nabla \int _{ a}^{b}x(b,s)h(s)\nabla s {}\\ & & +\int _{a}^{b}[\gamma u(b,s) +\delta \nabla u(b,s)]h(s)\nabla s {}\\ & =\,& -\int _{a}^{b}[\gamma x(b,s) +\delta \nabla x(b,s)]h(s)\nabla s +\int _{ a}^{b}[\gamma x(b,s) +\delta \nabla x(b,s)]h(s)\nabla s {}\\ & =\,& 0. {}\\ \end{array}$$

This completes the proof. □ 

Corollary 3.174.

If the homogeneous BVP (3.113) has only the trivial solution, then the unique solution of the nonhomogeneous BVP (3.112) is given by

$$\displaystyle{y(t) = z(t) +\int _{ a}^{b}G(t,s)h(s)\nabla s,\quad t \in \mathbb{N}_{ a}^{b},}$$

where z is the unique solution to the BVP

$$\displaystyle{\left \{\begin{array}{l} L_{a}z(t) = 0,\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \alpha z(a) -\beta \nabla z(a + 1) = A, \\ \gamma z(b) +\delta \nabla z(b) = B.\end{array} \right.}$$

Proof.

This corollary follows directly from Theorem 3.173 by linearity. □ 

Theorem 3.175.

Assume \(a,b \in \mathbb{R}\) and \(b - a \in \mathbb{N}_{2}\) . Then the Green’s function for the BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \nabla \nabla _{a{\ast}}^{\nu }x(t + 1) = 0,\quad t \in \mathbb{N}_{a+1}^{b-1}\quad \\ x(a) = x(b) = 0, \quad \end{array} \right. }$$
(3.117)

is given by

$$\displaystyle{G(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} u(t,s),\quad &a \leq t \leq s \leq b,\\ v(t, s),\quad &a \leq s \leq t \leq b, \end{array} \right.}$$

where

$$\displaystyle{u(t,s) = -\dfrac{(b - s)^{\overline{\nu }}(t - a)^{\overline{\nu }}} {\Gamma (1+\nu )(b - a)^{\overline{\nu }}}}$$

and

$$\displaystyle{v(t,s) = u(t,s) + \frac{(t - s)^{\overline{\nu }}} {\Gamma (\nu +1)}.}$$

Proof.

By the definition of the Green’s function for the boundary value problem (3.117) we have that

$$\displaystyle{G(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} u(t,s),\quad &a \leq t \leq s \leq b,\\ v(t, s),\quad &a \leq s \leq t \leq b, \end{array} \right.}$$

where u(t, s) for each fixed s solves the BVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \nabla [\nabla _{a{\ast}}^{\nu }u(t + 1,s)] = 0,\quad \\ u(a,s) = 0, \quad \\ u(b,s) = -x(b,s), \quad \end{array} \right.}$$

for \(t \in \mathbb{N}_{a+1}\), and \(v(t,s) = u(t,s) + x(t,s)\). By inspection, we see that x 1(t) = 1 is a solution of

$$\displaystyle{\nabla [\nabla _{a{\ast}}^{\nu }y(t + 1)] = 0,}$$

for \(t \in \mathbb{N}_{a+1}\). Also if \(x_{2}(t):= (\nabla _{a}^{-\nu }1)(t)\) we have that

$$\displaystyle\begin{array}{rcl} \nabla [\nabla _{a{\ast}}^{\nu }x_{ 2}(t + 1)]& =& \nabla [\nabla _{a{\ast}}^{\nu }\nabla _{ a}^{-\nu }(1)] {}\\ & =& \nabla [\nabla _{a}^{-(N-\nu )}\nabla ^{N}\nabla _{ a}^{-\nu }(1)] {}\\ & =& \nabla [\nabla _{a}^{-(N-\nu )}\nabla _{ a}^{(N-\nu )}(1)] {}\\ & =& (\nabla 1)(t) {}\\ & =& 0, {}\\ \end{array}$$

so x 2(t) also is a solution \(\nabla [\nabla _{a{\ast}}^{\nu }y(t + 1)] = 0\). Since x 1(t) and x 2(t) are linearly independent, by Theorem 3.160 the general solution is given by

$$\displaystyle{ y(t) = c_{1} + c_{2}(\nabla _{a}^{-\nu }1)(t) = c_{ 1} + c_{2}\frac{(t - a)^{\overline{\nu }}} {\Gamma (1+\nu )}, }$$

and it follows that

$$\displaystyle{u(t,s) = c_{1}(s) + c_{2}(s)\frac{(t - a)^{\overline{\nu }}} {\Gamma (1+\nu )}.}$$

The boundary condition u(a, s) = 0 implies that c 1(s) = 0. The boundary condition \(u(b,s) = -x(b,s)\) then yields

$$\displaystyle{-x(b,s) = u(b,s) = c_{2}(s)\frac{(b - a)^{\overline{\nu }}} {\Gamma (1+\nu )}.}$$

From Example 3.163, we know that

$$\displaystyle{x(b,s) = (\nabla _{s}^{-\nu }1)(b) = \frac{(b - s)^{\overline{\nu }}} {\Gamma (1+\nu )},}$$

and thus

$$\displaystyle{c_{2}(s) = -\frac{(b - s)^{\overline{\nu }}} {(b - a)^{\overline{\nu }}}.}$$

Hence the Green’s function is given by

$$\displaystyle{G(t,s) = \left \{\begin{array}{@{}l@{\quad }l@{}} -\dfrac{(b - s)^{\overline{\nu }}(t - a)^{\overline{\nu }}} {\Gamma (1+\nu )(b - a)^{\overline{\nu }}}, \quad &a \leq t \leq s \leq b, \\ -\dfrac{(b - s)^{\overline{\nu }}(t - a)^{\overline{\nu }}} {\Gamma (1+\nu )(b - a)^{\overline{\nu }}} + \dfrac{(t - s)^{\overline{\nu }}} {\Gamma (1+\nu )},\quad &a \leq s \leq t \leq b. \end{array} \right.}$$

 □ 
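Theorem 3.173 together with the Green’s function just computed admits a direct numerical check. With our own sample data (a = 0, b = 6, ν = 0.5, and an arbitrary forcing term h), the sum \(y(t) = \sum _{s=a+1}^{b}G(t,s)h(s)\) should satisfy \(\nabla \nabla _{a{\ast}}^{\nu }y(t+1) = h(t)\) and vanish at both endpoints:

```python
from math import gamma

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

a, b, nu = 0, 6, 0.5

def u(t, s):
    return -rising(b - s, nu) * rising(t - a, nu) / (gamma(nu + 1) * rising(b - a, nu))

def G(t, s):
    # Green's function of Theorem 3.175: u on t <= s, v = u + H_nu(t, s) on s <= t
    return u(t, s) if t <= s else u(t, s) + rising(t - s, nu) / gamma(nu + 1)

def caputo(y, t):
    # nabla_{a*}^{nu} y(t) from the defining sum
    g = gamma(1 - nu)
    return sum(rising(t - tau + 1, -nu) / g * (y[tau] - y[tau - 1])
               for tau in range(a + 1, t + 1))

h = lambda s: 1.0 + 0.5 * s  # arbitrary sample forcing term
y = {t: sum(G(t, s) * h(s) for s in range(a + 1, b + 1)) for t in range(a, b + 1)}
```

Since G ≤ 0 and h > 0 here, all computed values of y are nonpositive, consistent with Theorem 3.177 (i).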

Remark 3.176.

Note that in the continuous and integer-order discrete cases, the Green’s function is symmetric. This is not necessarily true in the fractional case. By way of counterexample, take a = 0, b = 5, and ν = 0.5. Then one can show that

$$\displaystyle{G(2,3) = u(2,3) = -\frac{(2)^{\overline{0.5}}(2)^{\overline{0.5}}} {\Gamma (1.5)(5)^{\overline{0.5}}} = -\frac{32} {35},}$$

but

$$\displaystyle{G(3,2) = v(3,2) = -\frac{(3)^{\overline{0.5}}(3)^{\overline{0.5}}} {\Gamma (1.5)(5)^{\overline{0.5}}} + \frac{(1)^{\overline{0.5}}} {\Gamma (1.5)} = -\frac{3} {7}.}$$
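The two values in this remark are quick to reproduce (a sketch with our own helper names):

```python
from math import gamma, isclose

def rising(x, nu):
    # x^{overline{nu}}; equals 0 at the nonpositive-integer poles of Gamma(x)
    return 0.0 if x <= 0 and x == int(x) else gamma(x + nu) / gamma(x)

a, b, nu = 0, 5, 0.5

def G(t, s):
    # Green's function of Theorem 3.175
    u = -rising(b - s, nu) * rising(t - a, nu) / (gamma(nu + 1) * rising(b - a, nu))
    return u if t <= s else u + rising(t - s, nu) / gamma(nu + 1)
```

Evaluating `G(2, 3)` and `G(3, 2)` exhibits the asymmetry directly.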

Theorem 3.177.

Assume \(a,b \in \mathbb{R}\) and \(b - a \in \mathbb{N}_{2}\) . Then the Green’s function for the BVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{a{\ast}}^{\nu }x(t + 1) = 0,\quad t \in \mathbb{N}_{a+1}^{b-1} \\ \quad &x(a) = x(b) = 0, \end{array} \right.}$$

satisfies the inequalities

  (i) \(G(t,s) \leq 0,\)

  (ii) \(G(t,s) \geq -\bigg(\frac{b - a} {4} \bigg)\bigg( \frac{\Gamma (b - a + 1)} {\Gamma (\nu +1)\Gamma (b - a+\nu )}\bigg),\)

  (iii) \(\int _{a}^{b}\vert G(t,s)\vert \nabla s \leq \frac{(b - a)^{2}} {4\Gamma (\nu +2)},\)

  for \(t \in \mathbb{N}_{a}^{b}\) , and

  (iv) \(\int _{a}^{b}\vert \nabla _{ t}G(t,s)\vert \nabla s \leq \frac{b - a} {\nu +1},\)

for \(t \in \mathbb{N}_{a+1}^{b}\) .

Proof.

First we show that (i) holds. Let a ≤ t ≤ s ≤ b. Then

$$\displaystyle{G(t,s) = u(t,s) = -\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \leq 0.}$$

Now let a ≤ s < t ≤ b. Then G(t, s) = v(t, s), so we wish to show that v(t, s) is nonpositive. First, we show that v(t, s) is increasing. Taking the nabla difference with respect to t yields

$$\displaystyle\begin{array}{rcl} \nabla _{t}\bigg[-\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} + \frac{(t - s)^{\overline{\nu }}} {\Gamma (1+\nu )}\bigg] = -\frac{(t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {\Gamma (\nu )(b - a)^{\overline{\nu }}} + \frac{(t - s)^{\overline{\nu - 1}}} {\Gamma (\nu )}.& & {}\\ \end{array}$$

This expression is nonnegative if and only if

$$\displaystyle{\frac{(t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {\Gamma (\nu )(b - a)^{\overline{\nu }}} \leq \frac{(t - s)^{\overline{\nu - 1}}} {\Gamma (\nu )}.}$$

Since ts is positive, this is equivalent to

$$\displaystyle{\frac{(t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {(b - a)^{\overline{\nu }}(t - s)^{\overline{\nu - 1}}} \leq 1.}$$

By the definition of the rising function,

$$\displaystyle\begin{array}{rcl} & & \frac{(t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {(b - a)^{\overline{\nu }}(t - s)^{\overline{\nu - 1}}} {}\\ & & = \left [\frac{\Gamma (t - a +\nu -1)} {\Gamma (t - a)} \right ]\left [\frac{\Gamma (b - s+\nu )} {\Gamma (b - s)} \right ]\left [ \frac{\Gamma (b - a)} {\Gamma (b - a+\nu )}\right ]\left [ \frac{\Gamma (t - s)} {\Gamma (t - s +\nu -1)}\right ] {}\\ & & = \left [ \frac{\Gamma (t - a +\nu -1)} {\Gamma (t - s +\nu -1)}\right ]\left [\frac{\Gamma (t - s)} {\Gamma (t - a)}\right ]\left [\frac{\Gamma (b - a)} {\Gamma (b - s)}\right ]\left [\frac{\Gamma (b - s+\nu )} {\Gamma (b - a+\nu )}\right ] {}\\ & & = \frac{(t - s +\nu -1)(t - s+\nu )\cdots (t - a +\nu -2)} {(t - s)(t - s + 1)\cdots (t -\rho (a))} {}\\ & & \qquad \qquad \qquad \cdot \frac{(b - s)(b - s + 1)\cdots (b -\rho (a))} {(b - s+\nu )(b - s +\nu +1)\cdots (b - a +\nu -1)} {}\\ & & = \frac{(t - s +\nu -1)} {(t - s)} \frac{(t - s+\nu )} {(t - s + 1)}\cdots \frac{(t - a +\nu -2)} {(t -\rho (a))} {}\\ & & \qquad \qquad \qquad \cdot \frac{(b - s)} {(b - s+\nu )} \frac{(b - s + 1)} {(b - s +\nu +1)}\cdots \frac{(b -\rho (a))} {(b - a +\nu -1)} {}\\ & & \leq 1 {}\\ \end{array}$$

since each factor in the second to last expression is less than or equal to one. Next, we note that v(t, s) at the right endpoint, t = b, satisfies

$$\displaystyle{v(b,s) = -\frac{(b - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} + \frac{(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)} = 0.}$$

Thus, v(t, s) is nonpositive for a ≤ s ≤ t ≤ b. Therefore, for \(t \in \mathbb{N}_{a}^{b}\), G(t, s) is nonpositive.

Next we show that (ii) holds. Since v(t, s) is increasing in t for a ≤ s ≤ t ≤ b, and since v(t, s) = u(t, s) when s = t, it suffices to show that

$$\displaystyle{u(t,s) \geq -\frac{b - a} {4} \bigg( \frac{\Gamma (b - a + 1)} {\Gamma (\nu +1)\Gamma (b - a+\nu )}\bigg).}$$

Let a ≤ t ≤ s ≤ b. Then

$$\displaystyle{G(t,s) = u(t,s) = -\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \geq -\frac{(s - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}.}$$

Note that for \(\alpha \in \mathbb{N}_{1}\) and 0 < ν < 1,

$$\displaystyle{\alpha ^{\overline{\nu }} = \frac{\Gamma (\alpha +\nu )} {\Gamma (\alpha )} \leq \frac{\Gamma (\alpha +1)} {\Gamma (\alpha )} =\alpha ^{\overline{1}}.}$$

So

$$\displaystyle\begin{array}{rcl} -\frac{(s - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} & \geq & -\frac{(s - a)^{\overline{1}}(b - s)^{\overline{1}}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} {}\\ & \geq & -\frac{(\frac{a+b} {2} - a)(b -\frac{a+b} {2} )} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} {}\\ & =& -\frac{(b - a)(b - a)\Gamma (b - a)} {4\Gamma (\nu +1)\Gamma (b - a+\nu )} {}\\ & =& -\frac{(b - a)\Gamma (b - a + 1)} {4\Gamma (\nu +1)\Gamma (b - a+\nu )} {}\\ & =& -\frac{b - a} {4} \bigg( \frac{\Gamma (b - a + 1)} {\Gamma (\nu +1)\Gamma (b - a+\nu )}\bigg), {}\\ \end{array}$$

and hence (ii) holds.

Now we show that property (iii) holds. We compute

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\vert G(t,s)\vert \nabla s {}\\ & & =\int _{ a}^{t}\vert v(t,s)\vert \nabla s +\int _{ t}^{b}\vert u(t,s)\vert \nabla s {}\\ & & =\int _{ a}^{t}\bigg\vert -\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} + \frac{(t - s)^{\overline{\nu }}} {\Gamma (1+\nu )}\bigg\vert \nabla s +\int _{ t}^{b}\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\nabla s {}\\ & & =\int _{ a}^{t} -\bigg [-\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} + \frac{(t - s)^{\overline{\nu }}} {\Gamma (1+\nu )}\bigg]\nabla s +\int _{ t}^{b}\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\nabla s {}\\ & & =\int _{ a}^{b}\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\nabla s -\int _{a}^{t}\frac{(t - s)^{\overline{\nu }}} {\Gamma (\nu +1)}\nabla s {}\\ & & = -\frac{(t - a)^{\overline{\nu }}(b - s - 1)^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} \bigg\vert _{s=a}^{s=b} + \frac{(t - s - 1)^{\overline{\nu + 1}}} {\Gamma (\nu +2)} \bigg\vert _{s=a}^{s=t} {}\\ & & = \frac{(t - a)^{\overline{\nu }}(b -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} -\frac{(t -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)}{}\\ && {}\\ & & = \frac{(t - a)^{\overline{\nu }}(b -\rho (a))(b - a)^{\overline{\nu }}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} -\frac{(t -\rho (a))(t - a)^{\overline{\nu }}} {\Gamma (\nu +2)} {}\\ & & = \frac{(t - a)^{\overline{\nu }}} {\Gamma (\nu +2)} [b -\rho (a) - (t -\rho (a))] {}\\ & & = \frac{(t - a)^{\overline{\nu }}(b - t)} {\Gamma (\nu +2)}. {}\\ \end{array}$$

Hence,

$$\displaystyle\begin{array}{rcl} \int _{a}^{b}\vert G(t,s)\vert \nabla s& \leq & \frac{(t - a)(b - t)} {\Gamma (\nu +2)} {}\\ & \leq & \frac{(\frac{a+b} {2} - a)(b -\frac{a+b} {2} )} {\Gamma (\nu +2)} {}\\ & =& \frac{(b - a)^{2}} {4\Gamma (\nu +2)}. {}\\ \end{array}$$

Finally, we will show that (iv) holds. First assume that b − a > 1. Taking the nabla difference with respect to t, we have

$$\displaystyle{\nabla _{t}u(t,s) = \nabla _{t}\frac{-(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} = \frac{-\nu (t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \leq 0.}$$

For \(t \in \mathbb{N}_{a+1}^{b}\) we compute

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & =\int _{ a}^{t-1}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s +\int _{ t-1}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & =\int _{ a}^{t-1}\left \vert \nabla _{ t}v(t,s)\right \vert \nabla s +\int _{ t-1}^{b}\left \vert \nabla _{ t}u(t,s)\right \vert \nabla s {}\\ & & =\int _{ a}^{t-1}\nabla _{ t}\left [\frac{-(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} + \frac{(t - s)^{\overline{\nu }}} {\Gamma (\nu +1)}\right ]\nabla s {}\\ & & \quad +\int _{ t-1}^{b}\nabla _{ t}\left [\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\right ]\nabla s {}\\ & & =\int _{ a}^{t-1}\nabla _{ t}\left [\frac{-(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \right ]\nabla s +\int _{ a}^{t-1}\nabla _{ t}\left [\frac{(t - s)^{\overline{\nu }}} {\Gamma (\nu +1)}\right ]\nabla s {}\\ & & \quad +\int _{ t-1}^{b}\nabla _{ t}\left [\frac{(t - a)^{\overline{\nu }}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\right ]\nabla s {}\\ & & =\int _{ a}^{t-1}\left [\frac{-\nu (t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \right ]\nabla s {}\\ & & \quad +\int _{ a}^{t-1}\left [\frac{\nu (t - s)^{\overline{\nu - 1}}} {\Gamma (\nu +1)} \right ]\nabla s +\int _{ t-1}^{b}\left [\frac{\nu (t - a)^{\overline{\nu - 1}}(b - s)^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \right ]\nabla s {}\\ & & =\int _{ a}^{t-1}\left [\frac{-\nu (t - a)^{\overline{\nu - 1}}[(b - 1) -\rho (s)]^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \right ]\nabla s {}\\ & & \quad +\int _{ a}^{t-1}\left [\frac{\nu [(t - 1) -\rho (s)]^{\overline{\nu - 1}}} {\Gamma (\nu +1)} \right ]\nabla s {}\\ & & \quad +\int _{ t-1}^{b}\left [\frac{\nu (t - 
a)^{\overline{\nu - 1}}[(b - 1) -\rho (s)]^{\overline{\nu }}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}} \right ]\nabla s {}\\ & & = \frac{-\nu (t - a)^{\overline{\nu - 1}}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\left [\frac{-1} {\nu +1}(b - s - 1)^{\overline{\nu + 1}}\right ]_{s=a}^{s=t-1} {}\\ & & \quad + \frac{\nu } {\Gamma (\nu +1)}\left [\frac{-1} {\nu } (t - s - 1)^{\overline{\nu }}\right ]_{s=a}^{s=t-1} {}\\ & & \quad + \frac{\nu (t - a)^{\overline{\nu - 1}}} {\Gamma (\nu +1)(b - a)^{\overline{\nu }}}\left [\frac{-1} {\nu +1}(b - s - 1)^{\overline{\nu + 1}}\right ]_{s=t-1}^{s=b} {}\\ & & = \frac{\nu (t - a)^{\overline{\nu - 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}}\left [(b - t)^{\overline{\nu + 1}} - (b -\rho (a))^{\overline{\nu + 1}}\right ] {}\\ & & \quad - \frac{1} {\Gamma (\nu +1)}\left [(t - t + 1 - 1)^{\overline{\nu }} - (t -\rho (a))^{\overline{\nu }}\right ] {}\\ & & \quad + \frac{-\nu (t - a)^{\overline{\nu - 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}}\left [(b - b - 1)^{\overline{\nu + 1}} - (b - t)^{\overline{\nu }}\right ] {}\\ & & = \frac{2\nu (t - a)^{\overline{\nu - 1}}(b - t)^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} + \frac{(t -\rho (a))^{\overline{\nu }}} {\Gamma (\nu +1)} -\frac{\nu (t - a)^{\overline{\nu - 1}}(b -\rho (a))} {\Gamma (\nu +2)}. {}\\ \end{array}$$

Suppose t = b. This would imply that

$$\displaystyle\begin{array}{rcl} \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s& =& \frac{2\nu (b - a)^{\overline{\nu - 1}}(0)^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & \quad + \frac{(b -\rho (a))^{\overline{\nu }}} {\Gamma (\nu +1)} -\frac{\nu (b - a)^{\overline{\nu - 1}}(b -\rho (a))} {\Gamma (\nu +2)} {}\\ & =& \frac{(\nu +1)(b -\rho (a))^{\overline{\nu }}} {\Gamma (\nu +2)} -\frac{\nu (b - a)^{\overline{\nu - 1}}(b -\rho (a))} {\Gamma (\nu +2)}. {}\\ \end{array}$$

For t = b and \(b - a = 2\), this becomes

$$\displaystyle\begin{array}{rcl} \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s& =& \frac{(\nu +1)(1)^{\overline{\nu }}} {\Gamma (\nu +2)} -\frac{\nu (2)^{\overline{\nu - 1}}(1)} {\Gamma (\nu +2)} {}\\ & =& \frac{(\nu +1)\Gamma (\nu +1)} {\Gamma (\nu +2)} -\frac{\nu \Gamma (\nu +1)} {\Gamma (\nu +2)} {}\\ & =& 1 - \frac{\nu } {\nu +1} {}\\ & =& \frac{1} {\nu +1} {}\\ & \leq & \frac{2} {\nu +1} {}\\ & =& \frac{b - a} {\nu +1}. {}\\ \end{array}$$

On the other hand, for t = b and \(b - a = 3\), we have

$$\displaystyle\begin{array}{rcl} \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s& =& \frac{(\nu +1)(2^{\overline{\nu }})} {\Gamma (\nu +2)} -\frac{\nu (3^{\overline{\nu - 1}})(2)} {\Gamma (\nu +2)} {}\\ & =& \frac{(\nu +1)\Gamma (\nu +2)} {\Gamma (\nu +2)} - \frac{2\nu \Gamma (\nu +2)} {\Gamma (\nu +2)\Gamma (3)} {}\\ & =& 1 {}\\ & \leq & \frac{b - a} {\nu +1}. {}\\ \end{array}$$

For t = b and ba ≥ 4, the result holds since

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & = \frac{(\nu +1)(b -\rho (a))^{\overline{\nu }}} {\Gamma (\nu +2)} -\frac{\nu (b - a)^{\overline{\nu - 1}}(b -\rho (a))} {\Gamma (\nu +2)} {}\\ & & = \frac{(\nu +1)(b - a - 2+\nu )\cdots (2+\nu )} {\Gamma (b -\rho (a))} -\frac{\nu \Gamma (b -\rho (a)+\nu )(b -\rho (a))} {\Gamma (2+\nu )\Gamma (b - a)} {}\\ & & = \frac{(\nu +1)(b - a - 2+\nu )\cdots (2+\nu )} {\Gamma (b -\rho (a))} - \frac{\nu \Gamma (b -\rho (a)+\nu )} {\Gamma (2+\nu )\Gamma (b -\rho (a))} {}\\ & & = \frac{(\nu +1)(b - a - 2+\nu )\cdots (2+\nu )} {(b - a - 2)!} -\frac{\nu (b - a - 2+\nu )\cdots (2+\nu )} {(b - a - 2)!} {}\\ & & = \frac{(b - a - 2+\nu )\cdots (2+\nu )} {(b - a - 2)!} {}\\ & & = \frac{(b -\rho (a))(b - a - 2+\nu )\cdots (2+\nu )} {(b -\rho (a))!} {}\\ & & \leq \frac{(b -\rho (a))(b -\rho (a))\cdots (3)} {(b -\rho (a))!} {}\\ & & = \frac{\frac{1} {2}(b -\rho (a))!(b -\rho (a))} {(b -\rho (a))!} = \frac{b -\rho (a)} {2} \leq \frac{b - a} {\nu +1}. {}\\ \end{array}$$

So the result holds in general when t = b. Now, assume t < b. If \(t = a + 1\), then we have

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & = \frac{2\nu (1^{\overline{\nu - 1}})(b -\rho (a))^{\overline{\nu + 1}} + (\nu +1)(0^{\overline{\nu }})(b - a)^{\overline{\nu }} -\nu (1^{\overline{\nu - 1}})(b -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & = \frac{2\nu \Gamma (\nu )(b -\rho (a))(b - a)^{\overline{\nu }} -\nu \Gamma (\nu )(b -\rho (a))(b - a)^{\overline{\nu }}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & = \frac{2\nu \Gamma (\nu )(b -\rho (a)) -\nu \Gamma (\nu )(b -\rho (a))} {\Gamma (\nu +2)} {}\\ & & = \frac{\Gamma (\nu +1)(b -\rho (a))} {\Gamma (\nu +2)} = \frac{\Gamma (\nu +1)(b -\rho (a))} {(\nu +1)\Gamma (\nu +1)} = \frac{b -\rho (a)} {\nu +1} \leq \frac{b - a} {\nu +1}. {}\\ \end{array}$$

If \(t = a + 2\), then

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & = \frac{2\nu (2^{\overline{\nu - 1}})(b - a - 2)^{\overline{\nu + 1}} + (\nu +1)(1^{\overline{\nu }})(b - a)^{\overline{\nu }} -\nu (2^{\overline{\nu - 1}})(b -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & = \frac{2\nu \Gamma (\nu +1)(b - a - 2)^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} + \frac{\nu (\nu +1)\Gamma (\nu )(b - a)^{\overline{\nu }}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & \quad -\frac{\nu \Gamma (\nu +1)(b -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & = \frac{2\nu \Gamma (\nu +1)(b -\rho (a))(b - a - 2)} {(\nu +1)(b -\rho (a)+\nu )} + 1 -\frac{\nu (b -\rho (a))} {\nu +1} {}\\ & & \leq \frac{2\nu (b - a - 2)} {\nu +1} + 1 -\frac{\nu (b -\rho (a))} {\nu +1} {}\\ & & = \frac{2\nu (b - a - 2)} {\nu +1} + \frac{\nu +1} {\nu +1} -\frac{\nu (b -\rho (a))} {\nu +1} {}\\ & & = \frac{\nu (b - a - 2) + 1} {\nu +1} \leq \frac{b -\rho (a)} {\nu +1} \leq \frac{b - a} {\nu +1}. {}\\ \end{array}$$

If \(t = a + 3\), then

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & = \frac{2\nu (3^{\overline{\nu - 1}})(b - a - 3)^{\overline{\nu + 1}} + (\nu +1)(2^{\overline{\nu }})(b - a)^{\overline{\nu }} -\nu (3^{\overline{\nu - 1}})(b -\rho (a))^{\overline{\nu + 1}}} {\Gamma (\nu +2)(b - a)^{\overline{\nu }}} {}\\ & & = \frac{2\nu \Gamma (2+\nu )\Gamma (b - a - 2+\nu )\Gamma (b - a)} {\Gamma (3)\Gamma (b - a+\nu )\Gamma (\nu +2)\Gamma (b - a - 3)} + (\nu +1) {}\\ & & \quad -\frac{\nu \Gamma (\nu +2)(b -\rho (a))} {\Gamma (\nu +2)\Gamma (3)} {}\\ & & = \frac{\nu (b -\rho (a))(b - a - 2)(b - a - 3)} {(b -\rho (a)+\nu )(b - a - 2+\nu )} + (\nu +1) -\frac{\nu (b -\rho (a))} {2} {}\\ \end{array}$$

It follows that

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & \leq \nu (b - a - 3) +\nu +1 -\frac{\nu (b -\rho (a))} {2} {}\\ & & = \frac{2\nu b - 2\nu a - 6\nu + 2\nu + 2 -\nu b +\nu a+\nu } {2} {}\\ & & = \frac{\nu (b - a - 3) + 2} {2} {}\\ & & \leq \frac{b - a - 3 + 2} {2} {}\\ & & = \frac{b -\rho (a)} {2} {}\\ & & \leq \frac{b - a} {\nu +1}. {}\\ & & {}\\ \end{array}$$

Now suppose that \(t = a + k\), where \(k \in \mathbb{N}_{4}^{b-\rho (a)}\). Then

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & = \frac{2\nu (k)^{\overline{\nu - 1}}(b - a - k)^{\overline{\nu + 1}}} {(b - a)^{\overline{\nu }}\Gamma (\nu +2)} + \frac{(\nu +1)(k - 1)^{\overline{\nu }}} {\Gamma (\nu +2)} -\frac{\nu (k)^{\overline{\nu - 1}}(b -\rho (a))} {\Gamma (\nu +2)} {}\\ & & = \frac{2\nu \Gamma (k +\nu -1)\Gamma (b - a - k +\nu +1)\Gamma (b - a)} {\Gamma (k)\Gamma (b - a+\nu )\Gamma (\nu +2)\Gamma (b - a - k)} + \frac{(\nu +1)\Gamma (k - 1+\nu )} {\Gamma (\nu +2)\Gamma (k - 1)} {}\\ & & \quad -\frac{\nu \Gamma (k +\nu -1)(b -\rho (a))} {\Gamma (\nu +2)\Gamma (k)} {}\\ & & = \frac{2\nu (\nu +2)\ldots (\nu +k - 2)(b -\rho (a))\ldots (b - a - k)} {(k - 1)!(b -\rho (a)+\nu )\ldots (b - a - (k - 1)+\nu )} {}\\ & & \quad + \frac{(\nu +1)\ldots (\nu +k - 2)} {(k - 2)!} {}\\ & & \quad -\frac{\nu (\nu +2)\ldots (\nu +k - 2)(b -\rho (a))} {(k - 1)!}{}\\ && {}\\ \end{array}$$

Hence,

$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\left \vert \nabla _{ t}G(t,s)\right \vert \nabla s {}\\ & & \leq \frac{2\nu (\nu +2)\ldots (\nu +k - 2)(b - a - k)} {(k - 1)!} + \frac{(k - 1)(\nu +1)\ldots (\nu +k - 2)} {(k - 1)!} {}\\ & & \quad -\frac{\nu (\nu +2)\ldots (\nu +k - 2)(b -\rho (a))} {(k - 1)!} {}\\ & & = \frac{\nu (\nu +2)\ldots (\nu +k - 2)(2b - 2a - 2k - b + a + 1)} {(k - 1)!} {}\\ & & \quad \quad \quad + \frac{(k - 1)(\nu +1)\ldots (\nu +k - 2)} {(k - 1)!} {}\\ & & = \frac{\nu (\nu +2)\ldots (\nu +k - 2)(b - a + 1 - 2k) + (k - 1)(\nu +1)\ldots (\nu +k - 2)} {(k - 1)!} {}\\ & & \leq \frac{(1)(3)(4)\ldots (k - 1)(b - a + 1 - 2k) + (k - 1)(2)(3)\ldots (k - 1)} {(k - 1)!} {}\\ & & = \frac{\frac{1} {2}(k - 1)!(b - a + 1 - 2k) + (k - 1)(k - 1)!} {(k - 1)!} {}\\ & & = \frac{(b - a + 1 - 2k) + 2(k - 1)} {2} {}\\ & & = \frac{b -\rho (a)} {2} {}\\ & & \leq \frac{b - a} {\nu +1}. {}\\ \end{array}$$

This completes the proof. □ 
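The four bounds can also be spot-checked numerically. The following sketch (an illustration, not part of the proof) takes a = 0, b = 6, ν = 0.5 and evaluates each nabla integral as the sum \(\int _{a}^{b}f(s)\nabla s =\sum _{s=a+1}^{b}f(s)\):

```python
from math import gamma

def rising(x, nu):
    # x^{overline(nu)} = Gamma(x + nu) / Gamma(x); zero where Gamma(x) has a pole
    if x <= 0 and float(x).is_integer():
        return 0.0
    return gamma(x + nu) / gamma(x)

a, b, nu = 0, 6, 0.5

def G(t, s):
    u = -rising(b - s, nu) * rising(t - a, nu) / (gamma(1 + nu) * rising(b - a, nu))
    return u if t <= s else u + rising(t - s, nu) / gamma(1 + nu)

# (i) and (ii): G is nonpositive and bounded below, pointwise
lower = -((b - a) / 4) * gamma(b - a + 1) / (gamma(nu + 1) * gamma(b - a + nu))
ok_sign = all(lower - 1e-9 <= G(t, s) <= 1e-9
              for t in range(a, b + 1) for s in range(a, b + 1))

# (iii): the nabla integral of |G(t, .)| over [a, b]
ok_iii = all(sum(abs(G(t, s)) for s in range(a + 1, b + 1))
             <= (b - a) ** 2 / (4 * gamma(nu + 2)) + 1e-9
             for t in range(a, b + 1))

# (iv): the same check for the nabla difference in t
ok_iv = all(sum(abs(G(t, s) - G(t - 1, s)) for s in range(a + 1, b + 1))
            <= (b - a) / (nu + 1) + 1e-9
            for t in range(a + 1, b + 1))

print(ok_sign, ok_iii, ok_iv)
```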

Example 3.178.

Use an appropriate Green’s function to solve the BVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{0{\ast}}^{0.5}x(t + 1) = 1,\quad t \in \mathbb{N}_{1}^{b-1}, \\ \quad &x(0) = 0 = x(b),\end{array} \right.}$$

where \(b \in \mathbb{N}_{2}\). By Theorem 3.175 we have that the Green’s function for the BVP

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{0{\ast}}^{0.5}x(t + 1) = 0,\quad t \in \mathbb{N}_{1}^{b-1}, \\ \quad &x(0) = 0 = x(b),\end{array} \right. }$$
(3.118)

is given by

$$\displaystyle{G(t,s) = \left \{\begin{array}{ll} u(t,s),&\quad 0 \leq t \leq s \leq b,\\ v(t, s), &\quad 0 \leq s \leq t \leq b, \end{array} \right.}$$

where

$$\displaystyle{u(t,s) = -\frac{(b - s)^{\overline{0.5}}t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}},\quad 0 \leq t \leq s \leq b}$$

and

$$\displaystyle{v(t,s) = -\frac{(b - s)^{\overline{0.5}}t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}} + \frac{(t - s)^{\overline{0.5}}} {\Gamma (1.5)},\quad 0 \leq s \leq t \leq b.}$$

Then the solution of the given BVP (with h(s) ≡ 1) is given by

$$\displaystyle\begin{array}{rcl} x(t)& =& \int _{0}^{b}G(t,s)h(s)\nabla s {}\\ & =& \int _{0}^{t}v(t,s)h(s)\nabla s +\int _{ t}^{b}u(t,s)h(s)\nabla s {}\\ & =& \int _{0}^{t}\left [-\frac{(b - s)^{\overline{0.5}}t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}} + \frac{(t - s)^{\overline{0.5}}} {\Gamma (1.5)} \right ]\nabla s +\int _{ t}^{b}\left [-\frac{(b - s)^{\overline{0.5}}t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}} \right ]\nabla s {}\\ & =& - \frac{t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}}\int _{0}^{b}(b - s)^{\overline{0.5}}\nabla s +\int _{ 0}^{t}\frac{(t - s)^{\overline{0.5}}} {\Gamma (1.5)} \nabla s {}\\ & =& - \frac{t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}}\int _{0}^{b}[(b - 1) -\rho (s)]^{\overline{0.5}}\nabla s +\int _{ 0}^{t}\frac{[(t - 1) -\rho (s)]^{\overline{0.5}}} {\Gamma (1.5)} \nabla s {}\\ & =& \frac{t^{\overline{0.5}}} {\Gamma (1.5)b^{\overline{0.5}}} \frac{[(b - 1) - s]^{\overline{1.5}}} {1.5} \bigg\vert _{s=0}^{b} -\frac{[(t - 1) - s]^{\overline{1.5}}} {\Gamma (1.5)1.5} \bigg\vert _{s=0}^{t} {}\\ & =& -\frac{(b - 1)^{\overline{1.5}}} {1.5\Gamma (1.5)b^{\overline{0.5}}}t^{\overline{0.5}} + \frac{1} {1.5\Gamma (1.5)}(t - 1)^{\overline{1.5}} {}\\ & =& -\frac{4(b - 1)^{\overline{1.5}}} {3\sqrt{\pi }b^{\overline{0.5}}} t^{\overline{0.5}} + \frac{4} {3\sqrt{\pi }}(t - 1)^{\overline{1.5}} {}\\ \end{array}$$

for \(t \in \mathbb{N}_{0}^{b}\).
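The closed form can be checked against the Green’s function representation. The sketch below (with the arbitrary sample choice b = 5, and `rising` for the rising function) compares \(\sum _{s=1}^{b}G(t,s)\) with the formula just derived:

```python
from math import gamma, pi, sqrt

def rising(x, nu):
    # x^{overline(nu)} = Gamma(x + nu) / Gamma(x); zero where Gamma(x) has a pole
    if x <= 0 and float(x).is_integer():
        return 0.0
    return gamma(x + nu) / gamma(x)

b = 5  # any b in N_2 works; b = 5 is an arbitrary sample choice

def G(t, s):
    u = -rising(b - s, 0.5) * rising(t, 0.5) / (gamma(1.5) * rising(b, 0.5))
    return u if t <= s else u + rising(t - s, 0.5) / gamma(1.5)

def x_closed(t):
    # the closed-form solution obtained above
    return (-4 * rising(b - 1, 1.5) * rising(t, 0.5) / (3 * sqrt(pi) * rising(b, 0.5))
            + 4 * rising(t - 1, 1.5) / (3 * sqrt(pi)))

# x(t) = int_0^b G(t,s) * 1 nabla-s = sum_{s=1}^{b} G(t,s)
err = max(abs(sum(G(t, s) for s in range(1, b + 1)) - x_closed(t))
          for t in range(0, b + 1))
print(err)
```

The maximum discrepancy is at the level of floating-point roundoff, and the closed form vanishes at both endpoints, as the boundary conditions require.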

3.25 Exercises

3.1. Assume \(f: \mathbb{N}_{a}^{b} \rightarrow \mathbb{R}\). Show that if ∇f(t) = 0 for \(t \in \mathbb{N}_{a+1}^{b}\), then f(t) = C for \(t \in \mathbb{N}_{a}^{b}\), where C is a constant.

3.2. Assume \(f,g: \mathbb{N}_{a} \rightarrow \mathbb{R}\). Prove the nabla quotient rule

$$\displaystyle{\nabla \left (\frac{f(t)} {g(t)}\right ) = \frac{g(t)\nabla f(t) - f(t)\nabla g(t)} {g(t)g(\rho (t))},\quad g(t)\neq 0,\quad t \in \mathbb{N}_{a+1},}$$

which is part (vi) of Theorem 3.1.

3.3. Prove that

$$\displaystyle{(t - r + 1)^{\overline{r}} = t^{\underline{r}}.}$$
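A numerical sanity check of this identity (not a proof): both sides reduce to \(\Gamma (t + 1)/\Gamma (t - r + 1)\), so the rising and falling functions can be compared directly:

```python
from math import gamma, isclose

def rising(x, nu):
    # x^{overline(nu)} = Gamma(x + nu) / Gamma(x)
    return gamma(x + nu) / gamma(x)

def falling(x, nu):
    # x^{underline(nu)} = Gamma(x + 1) / Gamma(x - nu + 1)
    return gamma(x + 1) / gamma(x - nu + 1)

for t, r in [(5.0, 2.0), (7.3, 1.6), (4.5, 0.5)]:
    assert isclose(rising(t - r + 1, r), falling(t, r))
print("ok")
```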

3.4. Prove that the box plus addition, \(\boxplus \), on \(\mathcal{R}\) is commutative and associative (see Theorem 3.8).

3.5. Show that if \(p,q \in \mathcal{R}\), then

$$\displaystyle{(p \boxminus q)(t) = \frac{p(t) - q(t)} {1 - q(t)},\quad t \in \mathbb{N}_{a}.}$$

3.6. Show that if \(p \in \mathcal{R}^{+}\), then

$$\displaystyle{\frac{1} {2} \boxdot p = \frac{p} {1 + \sqrt{1 - p}}.}$$
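For a numerical check of this exercise, one can take \(\nu \boxdot p = 1 - (1 - p)^{\nu }\) as the working closed form of the box dot for constant \(p \in \mathcal{R}^{+}\) (an assumption of this sketch; compare with the definition in the text) and compare it with the claimed expression:

```python
from math import isclose

def boxdot(nu, p):
    # working closed form for constant p in R+ (i.e. 1 - p > 0): nu ⊡ p = 1 - (1 - p)^nu
    return 1 - (1 - p) ** nu

for p in [0.9, 0.5, 0.1, -3.0]:
    assert isclose(boxdot(0.5, p), p / (1 + (1 - p) ** 0.5))
print("ok")
```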

3.7. Prove directly from the definition of the box dot multiplication, \(\boxdot \), that if \(p \in \mathcal{R}\) and \(n \in \mathbb{N}_{1}\), then

$$\displaystyle{n \boxdot p = p \boxplus p \boxplus p\cdots \boxplus p,}$$

where the right-hand side of the above expression has n terms.

3.8. Show that the set of positively regressive constants \(\mathcal{R}^{+}\) with the addition \(\boxplus \) is an Abelian subgroup of \(\mathcal{R}\).

3.9. Prove part (vi) of Theorem 3.11 for the case a ≤ s ≤ r. That is, if \(p \in \mathcal{R}\) and a ≤ s ≤ r, then

$$\displaystyle{E_{p}(t,s)E_{p}(s,r) = E_{p}(t,r),\quad t \in \mathbb{N}_{a}.}$$

3.10. Assume \(p,q \in \mathcal{R}\) and \(s \in \mathbb{N}_{a}\). Prove the law of exponents (Theorem 3.11, (vii))

$$\displaystyle{E_{p}(t,s)E_{q}(t,s) = E_{p\boxplus q}(t,s),\quad t \in \mathbb{N}_{a}.}$$
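Exercises 3.9 and 3.10 can be sanity-checked numerically using the closed form \(E_{p}(t,a) = (1 - p)^{-(t-a)}\) for constant regressive p (assumed in this sketch) together with \(p \boxplus q = p + q - pq\):

```python
def E(p, t, a):
    # nabla exponential for constant p != 1 (closed form assumed): E_p(t,a) = (1-p)^{-(t-a)}
    return (1 - p) ** (a - t)

def boxplus(p, q):
    # p ⊞ q = p + q - pq, so that 1 - (p ⊞ q) = (1 - p)(1 - q)
    return p + q - p * q

a, p, q = 0, 0.3, -0.7
for t in range(a, a + 6):
    # semigroup property E_p(t,s) E_p(s,r) = E_p(t,r)  (Exercise 3.9)
    for s in range(a, t + 1):
        for r in range(a, s + 1):
            assert abs(E(p, t, s) * E(p, s, r) - E(p, t, r)) < 1e-12
    # law of exponents E_p E_q = E_{p ⊞ q}  (Exercise 3.10)
    assert abs(E(p, t, a) * E(q, t, a) - E(boxplus(p, q), t, a)) < 1e-12
print("ok")
```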

3.11. Prove that if \(p,q \in \mathcal{R}\) and

$$\displaystyle{E_{p}(t,a) = E_{q}(t,a),\quad t \in \mathbb{N}_{a},}$$

then p(t) = q(t), \(t \in \mathbb{N}_{a+1}\).

3.12. Show that if \(\alpha,\beta \in \mathbb{R}\) and \(p \in \mathcal{R}^{+},\) then

$$\displaystyle{(\alpha +\beta ) \boxdot p = (\alpha \boxdot p) \boxplus (\beta \boxdot p).}$$

3.13. Show that if \(p,-p \in \mathcal{R}\), then

$$\displaystyle{\nabla Sinh_{p}(t,a) = p(t)Cosh_{p}(t,a),\quad t \in \mathbb{N}_{a}.}$$

3.14. Show by direct substitution that \(y(t) = (t - a)E_{r}(t,a)\), r ≠ 1, is a nontrivial solution of the second order linear equation \(\nabla ^{2}y(t) - 2r\nabla y(t) + r^{2}y(t) = 0\) on \(\mathbb{N}_{a}\).

3.15. Prove part (iii) of Theorem 3.18. That is, if p ≠ ± i is a constant, then

$$\displaystyle{\mbox{ Sinh}_{ip}(t,a) = i\;\mbox{ Sin}_{p}(t,a),\quad t \in \mathbb{N}_{a}.}$$

3.16. Solve each of the following nabla difference equations:

  1. (i)

    \(\nabla ^{2}u(t) - 4\nabla u(t) + 5u(t) = 0,\quad t \in \mathbb{N}_{0};\)

  2. (ii)

    \(\nabla ^{2}u(t) - 4\nabla u(t) + 4u(t) = 0,\quad t \in \mathbb{N}_{a};\)

  3. (iii)

    \(\nabla ^{2}u(t) - 4\nabla u(t) - 5u(t) = 0,\quad t \in \mathbb{N}_{a}\).

3.17. Solve each of the following nabla linear difference equations:

  1. (i)

    \(\nabla ^{2}x(t) - 10\nabla x(t) + 25x(t) = 0,\quad t \in \mathbb{N}_{0};\)

  2. (ii)

    \(\nabla ^{2}x(t) - 9x(t) = 0,\quad t \in \mathbb{N}_{a};\)

  3. (iii)

    \(\nabla ^{2}x(t) + 2\nabla x(t) + 5x(t) = 0,\quad t \in \mathbb{N}_{a}\).

3.18. Solve each of the following nabla linear difference equations:

  1. (i)

    \(\nabla ^{2}y(t) - 2\nabla y(t) + 2y(t) = 0,\quad t \in \mathbb{N}_{a};\)

  2. (ii)

    \(\nabla ^{2}y(t) - 2\nabla y(t) + 10y(t) = 0,\quad t \in \mathbb{N}_{a}\).

3.19. Prove the nabla version of L’Hôpital’s rule: If \(f,g: \mathbb{N}_{a} \rightarrow \mathbb{R},\) and

$$\displaystyle{\lim _{t\rightarrow \infty }f(t) = 0 =\lim _{t\rightarrow \infty }g(t)}$$

and g(t)∇g(t) < 0 for large t, then

$$\displaystyle{\lim _{t\rightarrow \infty }\frac{f(t)} {g(t)} =\lim _{t\rightarrow \infty }\frac{\nabla f(t)} {\nabla g(t)}}$$

provided \(\lim _{t\rightarrow \infty }\frac{\nabla f(t)} {\nabla g(t)}\) exists.

3.20. Use the integration formula

$$\displaystyle{\int \alpha ^{t+\beta }\nabla t = \frac{\alpha } {\alpha -1}\alpha ^{t+\beta } + C}$$

to prove the integration formula

$$\displaystyle{\int E_{p}(t,a)\nabla t = \frac{1} {p}E_{p}(t,a) + C.}$$
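Both integration formulas can be verified by applying ∇ to the claimed antiderivatives. The sketch below does this numerically, again using the constant-p closed form \(E_{p}(t,a) = (1 - p)^{-(t-a)}\) as an assumption:

```python
def nabla(F, t):
    # backward difference (nabla F)(t) = F(t) - F(t - 1)
    return F(t) - F(t - 1)

alpha, beta = 3.0, 2.0
F1 = lambda t: alpha / (alpha - 1) * alpha ** (t + beta)
ok1 = all(abs(nabla(F1, t) - alpha ** (t + beta)) < 1e-6 for t in range(1, 6))

p, a = 0.4, 0
E = lambda t: (1 - p) ** (a - t)   # constant-p nabla exponential (assumed closed form)
F2 = lambda t: E(t) / p
ok2 = all(abs(nabla(F2, t) - E(t)) < 1e-9 for t in range(1, 6))

print(ok1, ok2)
```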

3.21. Show that if \(1 + p(t) + q(t)\neq 0\), for \(t \in \mathbb{N}_{a+2}\), then the general solution of the linear homogeneous equation

$$\displaystyle{\nabla ^{2}y(t) + p(t)\nabla y(t) + q(t)y(t) = 0}$$

is given by

$$\displaystyle{y(t) = c_{1}y_{1}(t) + c_{2}y_{2}(t),\quad t \in \mathbb{N}_{a},}$$

where \(y_{1}(t)\), \(y_{2}(t)\) are any two linearly independent solutions of (3.13) on \(\mathbb{N}_{a}\).

3.22. Assume \(f: \mathbb{N}_{a} \times \mathbb{N}_{a+1} \rightarrow \mathbb{R}\). Prove the Leibniz formula (3.23). That is,

$$\displaystyle{\nabla \left (\int _{a}^{t}f(t,\tau )\nabla \tau \right ) =\int _{ a}^{t-1}\nabla _{ t}f(t,\tau )\nabla \tau + f(t,t),}$$

for \(t \in \mathbb{N}_{a+1}\).

3.23. Evaluate the nabla integral \(\int _{0}^{t}f(\tau )\nabla \tau\), when

  1. (i)

    \(f(t) = t(-2)^{t},\quad t \in \mathbb{N}_{0};\)

  2. (ii)

    \(f(t) = H_{2}(\rho (t),0)E_{3}(t,0),\quad t \in \mathbb{N}_{0}\).

3.24. Use the variation of constants formula in either Corollary 3.52 or Theorem 3.51 to solve each of the following IVPs.

  1. (i)
    $$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t)& =& 3^{-t},\quad t \in \mathbb{N}_{ 1} {}\\ y(0)& =& 0,\quad \nabla y(0) = 0 {}\\ \end{array}$$
  2. (ii)
    $$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t)& =& \mbox{ Sinh}_{ 4}(t,0),\quad t \in \mathbb{N}_{1} {}\\ y(0)& =& -1,\quad \nabla y(0) = 1 {}\\ \end{array}$$
  3. (iii)
    $$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t)& =& t - 2,\quad t \in \mathbb{N}_{ 3} {}\\ y(2)& =& 0,\quad \nabla y(2) = 0 {}\\ \end{array}$$

3.25. Show that if μ > 0, then

$$\displaystyle{H_{\mu }(a + 1,a) = 1 = H_{-\mu }(a + 1,a).}$$
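With \(H_{\mu }(t,a) = (t - a)^{\overline{\mu }}/\Gamma (\mu +1)\), the claim follows since \(1^{\overline{\mu }} = \Gamma (1+\mu )\). A quick numerical check (restricted to non-integer μ so that \(\Gamma (1-\mu )\) is defined):

```python
from math import gamma

def H(mu, t, a):
    # nabla Taylor monomial H_mu(t,a) = (t - a)^{overline(mu)} / Gamma(mu + 1),
    # taken to be zero at the poles of Gamma(t - a)
    x = t - a
    if x <= 0 and float(x).is_integer():
        return 0.0
    return gamma(x + mu) / (gamma(x) * gamma(mu + 1))

for mu in [0.3, 1.7, 2.5, 5.5]:
    assert abs(H(mu, 1, 0) - 1.0) < 1e-12
    assert abs(H(-mu, 1, 0) - 1.0) < 1e-12
print("ok")
```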

3.26. Show if μ > 0 is not a positive integer, then

$$\displaystyle{H_{\mu }(t,a) = 0,\quad \mbox{ for $t = a,a - 1,a - 2,\cdots \,$.}}$$

Also show that if μ is a positive integer, then

$$\displaystyle{H_{\mu }(t,a) = 0,\quad \mbox{ for $t \in \mathbb{N}_{a-\mu +1}^{a}$.}}$$

3.27. Show that if \(f: \mathbb{N}_{a+1} \rightarrow \mathbb{R}\) and μ > 0, then

$$\displaystyle{\nabla _{a}^{-\mu }f(a + 1) = f(a + 1).}$$
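This exercise can be checked numerically by reading the fractional sum as \(\nabla _{a}^{-\mu }f(t) =\int _{a}^{t}H_{\mu -1}(t,\rho (s))f(s)\nabla s\) (the form of Definition 3.58 assumed in this sketch); at t = a + 1 the sum collapses to the single term \(H_{\mu -1}(a + 1,a)f(a + 1) = f(a + 1)\):

```python
from math import gamma

def H(mu, t, a):
    # nabla Taylor monomial, zero at the poles of Gamma(t - a)
    x = t - a
    if x <= 0 and float(x).is_integer():
        return 0.0
    return gamma(x + mu) / (gamma(x) * gamma(mu + 1))

def frac_sum(mu, f, a, t):
    # (nabla_a^{-mu} f)(t) = sum_{s=a+1}^{t} H_{mu-1}(t, rho(s)) f(s), rho(s) = s - 1
    return sum(H(mu - 1, t, s - 1) * f(s) for s in range(a + 1, t + 1))

f = lambda t: t ** 2 + 1.0
for mu in [0.25, 0.5, 1.3]:
    assert abs(frac_sum(mu, f, 0, 1) - f(1)) < 1e-12
print("ok")
```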

3.28. Use Definition 3.61 to show that if μ > 0 is not an integer and C is a constant, then

$$\displaystyle{\nabla _{a}^{\mu }C = CH_{ -\mu }(t,a),\quad t \in \mathbb{N}_{a}.}$$

On the other hand, if μ = k is a positive integer, show that \(\nabla _{a}^{k}C = 0\).

3.29. Use the definition (Definition 3.58) of the fractional sum to evaluate each of the following integer sums.

  1. (i)

    \(\nabla _{a}^{-2}\mbox{ Cosh}_{3}(t,a),\quad t \in \mathbb{N}_{a};\)

  2. (ii)

    \(\nabla _{a}^{-3}H_{1}(t,a),\quad t \in \mathbb{N}_{a};\)

  3. (iii)

    \(\nabla _{2}^{-2}\mbox{ Sin}_{4}(t,2),\quad t \in \mathbb{N}_{2}\).

3.30. For p ≠ 0, 1 a constant, show that each of the functions \(E_{p}(t,a)\), \(Cosh_{p}(t,a)\), \(Sinh_{p}(t,a)\), \(Cos_{p}(t,a)\), and \(Sin_{p}(t,a)\) is of exponential order \(\frac{1} {\vert 1-p\vert }\).

3.31. Prove parts (iii) and (v) of Theorem 3.76.

3.32. Using the definition (Definition 3.77) of the nabla convolution product, show that the nabla convolution product is commutative—i.e., for all \(f,g: \mathbb{N}_{a+1} \rightarrow \mathbb{R},\)

$$\displaystyle{(f {\ast} g)(t) = (g {\ast} f)(t),\quad t \in \mathbb{N}_{a+1}.}$$

Also show that the nabla convolution product is associative.
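Commutativity can be spot-checked by computing the convolution as \((f {\ast} g)(t) =\sum _{s=a+1}^{t}f(t -\rho (s) + a)g(s)\) (the reading of Definition 3.77 assumed here) for a couple of sample functions:

```python
a = 0
f = lambda t: 2.0 ** (-t) + t
g = lambda t: 3.0 * t - 1.0

def conv(F, G, t):
    # (F * G)(t) = sum_{s=a+1}^{t} F(t - rho(s) + a) G(s), with rho(s) = s - 1
    return sum(F(t - (s - 1) + a) * G(s) for s in range(a + 1, t + 1))

for t in range(a + 1, a + 8):
    assert abs(conv(f, g, t) - conv(g, f, t)) < 1e-9
print("ok")
```

The substitution s ↦ t − s + 1 + a reverses the order of summation, which is exactly why the two sums agree.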

3.33. Solve each of the following IVPs using the nabla Laplace transform:

  1. (i)
    $$\displaystyle\begin{array}{rcl} \nabla y(t) - 4y(t)& =& 2E_{5}(t,a),\quad t \in \mathbb{N}_{a+1} {}\\ y(a)& =& -2; {}\\ \end{array}$$
  2. (ii)
    $$\displaystyle\begin{array}{rcl} \nabla y(t) - 3y(t)& =& 4,\quad t \in \mathbb{N}_{a+1} {}\\ y(a)& =& -2; {}\\ \end{array}$$
  3. (iii)
    $$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t) + \nabla y(t) - 6y(t)& =& 0,\quad t \in \mathbb{N}_{ a+2} {}\\ y(a)& =& 3;\quad y(a + 1) = 0 {}\\ \end{array}$$
  4. (iv)
    $$\displaystyle\begin{array}{rcl} \nabla ^{2}y(t) - 5\nabla y(t) + 6y(t)& =& E_{ 4}(t,a),\quad t \in \mathbb{N}_{a+2} {}\\ y(a)& =& 1,\quad y(a + 1) = -1. {}\\ \end{array}$$

3.34. Prove part (iv) of Theorem 3.93.

3.35. Use Theorem 3.120 to solve each of the following IVPs:

  1. (i)
    $$\displaystyle\begin{array}{rcl} \nabla _{0{\ast}}^{\frac{1} {2} }x(t)& =& 3,\quad t \in \mathbb{N}_{1} {}\\ x(0)& =& \pi; {}\\ \end{array}$$
  2. (ii)
    $$\displaystyle\begin{array}{rcl} \nabla _{0{\ast}}^{\frac{1} {3} }x(t)& =& t^{\overline{\frac{4} {3}}},\quad t \in \mathbb{N}_{1} {}\\ x(0)& =& 2; {}\\ \end{array}$$
  3. (iii)
    $$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{\frac{2} {3} }x(t)& =& t - a,\quad t \in \mathbb{N}_{a+1} {}\\ x(a)& =& 4. {}\\ \end{array}$$

3.36. Use Theorem 3.120 to solve each of the following IVPs:

  1. (i)
    $$\displaystyle\begin{array}{rcl} & & \nabla _{0{\ast}}^{1.6}x(t) = 3,\quad t \in \mathbb{N}_{ 1} {}\\ & & x(0) = 2, \nabla x(0) = -1. {}\\ \end{array}$$
  2. (ii)
    $$\displaystyle\begin{array}{rcl} & & \nabla _{0{\ast}}^{\frac{5} {3} }x(t) = t^{\overline{\frac{4} {3}}},\quad t \in \mathbb{N}_{1} {}\\ & & x(0) = \nabla x(0) = 0. {}\\ \end{array}$$
  3. (iii)
    $$\displaystyle\begin{array}{rcl} \nabla _{a{\ast}}^{2.7}x(t)& =& t - a,\quad t \in \mathbb{N}_{ a+1} {}\\ x(a)& =& 0 = \nabla x(a). {}\\ \end{array}$$

3.37. Show that (see Example 3.168)

$$\displaystyle{\int _{0}^{t}sH_{0.6}(t,s)\nabla s = \frac{1} {\Gamma (3.6)}(t - 1)^{\overline{2.6}}}$$

for \(t \in \mathbb{N}_{0}\).

3.38. Solve the following IVPs using Theorem 3.166:

  1. (i)
    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{a{\ast}}^{0.5}x(t + 1) = t - a,\quad t \in \mathbb{N}_{a+1}, \\ \quad &x(a) = \nabla x(a) = 0.\end{array} \right.}$$
  2. (ii)
    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{0{\ast}}^{0.3}x(t + 1) = t,\quad t \in \mathbb{N}_{1}, \\ \quad &x(0) = 1,\quad \nabla x(0) = 2.\end{array} \right.}$$

3.39. Use an appropriate Green’s function to solve the BVP

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\nabla \nabla _{0{\ast}}^{0.2}x(t + 1) = 1,\quad t \in \mathbb{N}_{1}^{b-1}, \\ \quad &x(0) = 0 = x(b),\end{array} \right.}$$

where \(b \in \mathbb{N}_{2}\).