This chapter is devoted primarily to the exponential and \(l^{p}\)-stability of Volterra difference equations. Lyapunov functionals are the main tools in the analysis. It is pointed out that in the case of exponential stability, Lyapunov functionals are hard to extend to vector Volterra difference equations or to Volterra difference equations with infinite delay. In addition, we use a nonstandard discretization scheme due to Mickens [122] and apply it to continuous Volterra integro-differential equations. We will show that under this discretization scheme the stability of the zero solution of the continuous dynamical system is preserved. Also, under the same discretization, using a combination of Lyapunov functionals, Laplace transforms, and z-transforms, we show that the boundedness of solutions of the continuous dynamical system is preserved. We end the chapter with a brief section introducing semigroups, which should stir up some curiosity about the application of semigroups to Volterra difference equations. The chapter concludes with multiple open problems. The work of this chapter depends heavily on the materials in [9, 51, 59, 76, 91], and [98].

6.1 Exponential Stability

We consider the scalar linear difference equation with multiple delays

$$\displaystyle{ x(t + 1) = a(t)x(t) +\sum _{ s=t-r}^{t-1}b(t,s)x(s),\;t \geq 0, }$$
(6.1.1)

where \(r \in \mathbb{Z}^{+}\), \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\), and \(b: \mathbb{Z}^{+} \times [-r,\infty ) \rightarrow \mathbb{R}.\) We will use Lyapunov functionals to obtain inequalities for the solutions of (6.1.1) from which we can deduce exponential asymptotic stability of the zero solution. We will also provide a criterion for the unboundedness of solutions and the instability of the zero solution of (6.1.1) by means of Lyapunov-type functionals.

Consider the kth-order scalar difference equation

$$\displaystyle{ x(t + k) + p_{1}x(t + k - 1) + p_{2}x(t + k - 2) + \cdots + p_{k}x(t) = 0, }$$
(6.1.2)

where the \(p_{i}\)'s are real numbers. It is well known that the zero solution of (6.1.2) is asymptotically stable if and only if | λ | < 1 for every characteristic root λ of (6.1.2). There are no easy criteria to test for exponential stability of the zero solution of equations similar to (6.1.2) with variable coefficients. This highlights the importance of creativity in constructing a suitable Lyapunov functional that leads to exponential stability. When using Lyapunov functionals, one faces the difficulty of relating the constructed Lyapunov functional back to the solution x so that stability can be deduced. This task is tedious, but we overcome it here. The authors have done an extensive literature search and could not find any work that dealt with the exponential stability of Volterra equations of the form (6.1.1). This research offers easily verifiable conditions that guarantee exponential stability. Moreover, we give criteria for the instability of the zero solution. Most importantly, our results will hold for | a(t) | ≥ 1. We will illustrate our theory with several examples and numerical simulations. Results on the use of Lyapunov functionals in the stability of finite-delay difference equations are scarce due to the unforeseen difficulties in constructing such functionals. This section intends to fill some of that gap; moreover, we will compare the results obtained in this section to known ones obtained by other methods, such as operator theory.
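To make the root criterion for (6.1.2) concrete, the following sketch checks whether both characteristic roots of a second-order equation lie strictly inside the unit circle. The sample coefficients are illustrative only and not taken from the text.

```python
import cmath

def second_order_stable(p1, p2):
    # characteristic equation of x(t+2) + p1*x(t+1) + p2*x(t) = 0:
    # lambda**2 + p1*lambda + p2 = 0
    disc = cmath.sqrt(p1 * p1 - 4 * p2)
    roots = [(-p1 + disc) / 2, (-p1 - disc) / 2]
    return all(abs(lam) < 1 for lam in roots)

# x(t+2) - x(t+1) + 0.3 x(t) = 0: both roots have modulus sqrt(0.3) < 1
print(second_order_stable(-1.0, 0.3))
# lambda**2 - 2.5*lambda + 1 = 0 has the root 2 outside the unit circle
print(second_order_stable(-2.5, 1.0))
```

For variable coefficients no such root test exists, which is exactly what motivates the Lyapunov functional approach of this section.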

In Chapter 2, we looked at systems of functional difference equations of the form

$$\displaystyle{ x(n + 1) = G(n,x_{n}),\;x \in \mathbb{R}^{k} }$$
(6.1.3)

where \(G: \mathbb{Z}^{+} \times \mathbb{R}^{k} \rightarrow \mathbb{R}^{k}\) is continuous in x. Let x be any solution of (6.1.3). Quite often, when using Lyapunov functionals to study system (6.1.3), we encounter a pair of inequalities of the form

$$\displaystyle{ W_{1}(x(n)) \leq V (n,x(\cdot )) = W_{2}(x(n)) +\sum _{ s=0}^{n-1}K(n,s)W_{ 3}(x(s)), }$$
$$\displaystyle{ \bigtriangleup V (n,x(\cdot )) \leq -W_{4}(x(n)) + F(n) }$$

where V is a Lyapunov functional bounded below, x is the unknown solution of the functional difference equation, and K, F, and \(W_{i}\), i = 1, 2, 3, 4, are scalar positive functions. The wedge \(W_{1}\) is mandatory in order to relate the solutions x back to V. Identifying such a \(W_{1}\) is not an easy job, as we shall see later in this section. It is even more difficult when using Lyapunov functionals to obtain exponential stability, since it requires that along the solutions of (6.1.3) we have, for some α > 0,

$$\displaystyle{ \bigtriangleup V (n,x(\cdot )) \leq -\alpha V (n,x(\cdot )). }$$

The above inequality presents us with formidable challenges, as may be seen later in the section. However, a simple but clever rewriting of the difference equation points us in the right direction for constructing the appropriate Lyapunov functional, as we shall see from (6.1.5).

Let \(\psi: [-h,0] \rightarrow (-\infty,\infty )\) be a given bounded initial function with

$$\displaystyle{ \vert \vert \psi \vert \vert =\mathop{\max }\limits_{ -h \leq s \leq 0}\vert \psi (s)\vert. }$$

It should cause no confusion to denote the norm of a function \(\varphi: [-h,\infty ) \rightarrow (-\infty,\infty )\) by

$$\displaystyle{ \vert \vert \varphi \vert \vert =\mathop{\sup }\limits_{ -h \leq s <\infty }\vert \varphi (s)\vert. }$$

The notation \(x_{t}\) means that \(x_{t}(\tau ) = x(t+\tau ),\;\tau \in [-h,0],\) as long as \(x(t+\tau )\) is defined. Thus, \(x_{t}\) is a function mapping the interval \([-h,0]\) into \(\mathbb{R}.\) We say x(t) ≡ x(t, t0, ψ) is a solution of (6.1.1) if x(t) satisfies (6.1.1) for \(t \geq t_{0}\) and \(x_{t_{0}} = x(t_{0} + s) =\psi (s),\) s ∈ [−h, 0]. In preparation for our main results, we let

$$\displaystyle{ A(t,s) =\sum _{ u=t-s}^{r}b(u + s,s). }$$
(6.1.4)

By noting that

$$\displaystyle{ A(t,t - r - 1) = 0, }$$

we have that (6.1.1) is equivalent to

$$\displaystyle{ \bigtriangleup x(t) =\big (a(t) + A(t + 1,t) - 1\big)x(t) -\bigtriangleup _{t}\sum _{s=t-r-1}^{t-1}A(t,s)x(s). }$$
(6.1.5)

In [138], the author used the same method and studied the exponential stability and instability of the zero solution of

$$\displaystyle{ x(t + 1) = a(t)x(t) - b(t)x(t - h). }$$

One of the novelties of rewriting (6.1.1) in the form of (6.1.5) is that it allows us to obtain stability results concerning the totally delayed Volterra difference equation

$$\displaystyle{ x(t + 1) =\sum _{ s=t-r}^{t-1}b(t,s)x(s),\;t \geq 0. }$$
(6.1.6)

We have the following definition.

Definition 6.1.1.

The zero solution of (6.1.1) is said to be exponentially stable if any solution x(t, t0, ψ) of (6.1.1) satisfies

$$\displaystyle{ \vert x(t,t_{0},\psi )\vert \leq C\Big(\vert \vert \psi \vert \vert,t_{0}\Big)\zeta ^{\gamma (t-t_{0})},\;\;\;\mbox{ for all }t \geq t_{ 0}, }$$

where ζ is constant with 0 < ζ < 1, \(C: \mathbb{R}^{+} \times \mathbb{Z}^{+} \rightarrow \mathbb{R}^{+},\) and γ is a positive constant. The zero solution of (6.1.1) is said to be uniformly exponentially stable if C is independent of t0. 

For simplicity we let

$$\displaystyle{ Q(t) = a(t) + A(t + 1,t) - 1. }$$

Assume

$$\displaystyle{ \bigtriangleup _{t}A^{2}(t,z) \leq 0,\;\mbox{ for all}\;t + s + 1 \leq z \leq t - 1. }$$
(6.1.7)

Lemma 6.1 ([98]).

Let A(t, s) be given by  (6.1.4) and that for δ > 0 the inequality

$$\displaystyle{ - \frac{\delta } {(\delta +1)r} \leq Q(t) \leq -r\delta A^{2}(t + 1,t) - Q^{2}(t) }$$
(6.1.8)

holds. If

$$\displaystyle\begin{array}{rcl} V (t)& =& \left [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\right ]^{2} \\ & +& \delta \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z),{}\end{array}$$
(6.1.9)

then along the solutions of ( 6.1.1 ) we have

$$\displaystyle{ \bigtriangleup V (t) \leq Q(t)V (t). }$$

Proof.

First we note that due to condition (6.1.8), Q(t) < 0 for all t ≥ 0. Also, we use the fact that if u(t) is a sequence, then \(\bigtriangleup u^{2}(t) = u(t + 1)\bigtriangleup u(t) + u(t)\bigtriangleup u(t)\). Let x(t) = x(t, t0, ψ) be a solution of (6.1.1) and define V (t) by (6.1.9). Then along solutions of (6.1.5) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& \left [x(t + 1) +\sum _{ s=t-r}^{t}A(t + 1,s)x(s)\right ]\bigtriangleup _{ t}\left [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\right ] \\ & +& \left [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\right ]\bigtriangleup _{ t}\left [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\right ] \\ & +& \delta \bigtriangleup _{t}\sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z). {}\end{array}$$
(6.1.10)

We note that

$$\displaystyle\begin{array}{rcl} x(t + 1) +\sum _{ s=t-r}^{t}A(t + 1,s)x(s)& =& (Q(t) + 1)x(t) -\bigtriangleup _{ t}\sum _{s=t-r-1}^{t-1}A(t,s)x(s) {}\\ & +& \sum _{s=t-r}^{t}A(t + 1,s)x(s) {}\\ & =& (Q(t) + 1)x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s) {}\\ & =& (Q(t) + 1)x(t) +\sum _{ s=t-r}^{t-1}A(t,s)x(s), {}\\ \end{array}$$

since A(t, tr − 1) = 0. With this in mind, (6.1.10) reduces to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& \left [(Q(t) + 1)x(t) +\sum _{ s=t-r}^{t-1}A(t,s)x(s)\right ]Q(t)x(t) \\ & +& \left [x(t) +\sum _{ s=t-r}^{t-1}A(t,s)x(s)\right ]Q(t)x(t) \\ & +& \delta \bigtriangleup _{t}\sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \\ & =& Q(t)V (t) +\big (Q^{2}(t) + Q(t)\big)x^{2}(t) \\ & -& \delta Q(t)\sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \\ & +& \delta \bigtriangleup _{t}\sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \\ & -& Q(t)\left (\sum _{s=t-r}^{t-1}A(t,s)x(s)\right )^{2}. {}\end{array}$$
(6.1.11)

Also, using (6.1.4), we arrive at

$$\displaystyle\begin{array}{rcl} \bigtriangleup _{t}\sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z)& =& \sum _{ s=-r}^{-1}\sum _{ z=t+s+1}^{t}A^{2}(t + 1,z)x^{2}(z) \\ & -& \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \\ & =& \sum _{s=-r}^{-1}\Big[A^{2}(t + 1,t)x^{2}(t) +\sum _{ z=t+s+1}^{t-1}A^{2}(t + 1,z)x^{2}(z) \\ & -& \sum _{z=t+s+1}^{t-1}A^{2}(t,z)x^{2}(z) - A^{2}(t,t + s)x^{2}(t + s)\Big] \\ & =& \sum _{s=-r}^{-1}\Big(A^{2}(t + 1,t)x^{2}(t) - A^{2}(t,t + s)x^{2}(t + s)\Big) \\ & +& \sum _{s=-r}^{-2}\sum _{ z=t+s+1}^{t-1}\bigtriangleup _{ t}A^{2}(t,z)x^{2}(z) \\ & =& rA^{2}(t + 1,t)x^{2}(t) -\sum _{ s=-r}^{-1}A^{2}(t,t + s)x^{2}(t + s) \\ & +& \sum _{s=-r}^{-2}\sum _{ z=t+s+1}^{t-1}\bigtriangleup _{ t}A^{2}(t,z)x^{2}(z) \\ & \leq & rA^{2}(t + 1,t)x^{2}(t) \\ & -& \sum _{s=-r}^{-1}A^{2}(t,t + s)x^{2}(t + s). {}\end{array}$$
(6.1.12)

With the aid of Hölder’s inequality, we have

$$\displaystyle\begin{array}{rcl} \left (\sum _{s=t-r}^{t-1}A(t,s)x(s)\right )^{2} \leq r\sum _{ s=t-r}^{t-1}A^{2}(t,s))x^{2}(s).& &{}\end{array}$$
(6.1.13)

Also,

$$\displaystyle\begin{array}{rcl} \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \leq r\sum _{ s=t-r}^{t-1}A^{2}(t,s)x^{2}(s).& &{}\end{array}$$
(6.1.14)

By invoking (6.1.8) and substituting expressions (6.1.12)– (6.1.14) into (6.1.11), we obtain

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& \leq & Q(t)V (t) +\big (Q^{2}(t) + Q(t) + r\delta A^{2}(t + 1,t)\big)x^{2}(t) \\ & +& [-\big(\delta +1\big)rQ(t)-\delta ]\sum _{s=t-r}^{t-1}A^{2}(t,s)x^{2}(s) \\ & \leq & Q(t)V (t). {}\end{array}$$
(6.1.15)

This completes the proof.
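A minimal numerical check of Lemma 6.1 can be run with constant coefficients a = 1, b = −0.3, r = 1, and δ = 0.5, which satisfy (6.1.8) (these are the data of Example 6.1 below); the initial values are arbitrary.

```python
a, b, r, delta = 1.0, -0.3, 1, 0.5        # constants satisfying (6.1.8)

def A(t, s):
    # constant-coefficient case: A(t, s) = b (r + 1 - t + s) for t - s <= r + 1
    return b * (r + 1 - t + s) if t - s <= r + 1 else 0.0

Q = a + A(1, 0) - 1                        # Q(t) = a + b r - 1 = -0.3, constant

x = {-1: 1.0, 0: -0.5}                     # arbitrary initial function on [-r, 0]
for t in range(0, 40):
    x[t + 1] = a * x[t] + b * x[t - 1]     # equation (6.1.1) with r = 1

def V(t):
    # the Lyapunov functional (6.1.9)
    head = (x[t] + sum(A(t, s) * x[s] for s in range(t - r - 1, t))) ** 2
    tail = delta * sum(A(t, z) ** 2 * x[z] ** 2
                       for s in range(-r, 0) for z in range(t + s, t))
    return head + tail

for t in range(1, 39):
    assert V(t + 1) - V(t) <= Q * V(t) + 1e-12
print("Delta V(t) <= Q(t) V(t) holds along the solution; Q =", Q)
```

The assertions confirm the conclusion of the lemma along this particular solution; they are an illustration, not a proof.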

Theorem 6.1.1 ([98]).

Assume the hypothesis of Lemma  6.1 holds and suppose there exists a number α < 1 such that \(0 <a(t) + A(t + 1,t) \leq \alpha.\) Then any solution x(t) = x(t, t0, ψ) of ( 6.1.1 ) satisfies the exponential inequality

$$\displaystyle{ \vert x(t)\vert \leq \sqrt{\frac{r+\delta } {\delta } V (t_{0})\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big)} }$$
(6.1.16)

for \(t \geq t_{0}.\)

Proof.

First we note that condition (6.1.8) implies that there exists some positive number α < 1 such that \(\vert a(t) + A(t + 1,t)\vert <\alpha.\) Now by changing the order of summation we have

$$\displaystyle\begin{array}{rcl} \ \delta \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z)& =& \delta \sum _{ z=t-r}^{t-1}\sum _{ s=-r}^{z-t}A^{2}(t,z)x^{2}(z) {}\\ & & {}\\ & =& \delta \sum _{z=t-r}^{t-1}A^{2}(t,z)x^{2}(z)(z - t + r + 1) {}\\ & \geq & \delta \sum _{z=t-r}^{t-1}A^{2}(t,z)x^{2}(z), {}\\ \end{array}$$

where we have used the fact that \(t - r \leq z \leq t - 1\Rightarrow1 \leq z - t + r + 1 \leq r.\) Also

$$\displaystyle\begin{array}{rcl} \left (\sum _{z=t-r}^{t-1}A(t,z)x(z)\right )^{2}& \leq & r\sum _{ z=t-r}^{t-1}A^{2}(t,z)x^{2}(z), {}\\ \end{array}$$

and hence,

$$\displaystyle{ \delta \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) \geq \frac{\delta } {r}\left (\sum _{z=t-r}^{t-1}A(t,z)x(z)\right )^{2}. }$$

Let V (t) be given by (6.1.9). Then

$$\displaystyle\begin{array}{rcl} & & V (t) =\big [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\big]^{2} +\delta \sum _{ s=-r}^{-1}\sum _{ z=t+s}^{t-1}A^{2}(t,z)x^{2}(z) {}\\ & \geq & \big[x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\big]^{2} + \frac{\delta } {r}\left (\sum _{z=t-r}^{t-1}A(t,z)x(z)\right )^{2} {}\\ & \geq & \frac{\delta } {r+\delta }x^{2}(t) + \left [\sqrt{ \frac{r} {r+\delta }}x(t) + \sqrt{\frac{r+\delta } {r}} \sum _{z=t-r}^{t-1}A(t,z)x(z)\right ]^{2} {}\\ & \geq & \frac{\delta } {r+\delta }x^{2}(t). {}\\ \end{array}$$

Consequently,

$$\displaystyle\begin{array}{rcl} \frac{\delta } {r+\delta }x^{2}(t) \leq V (t).& & {}\\ \end{array}$$

From (6.1.15) we get

$$\displaystyle{ V (t) \leq V (t_{0})\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big). }$$

Thus we arrive at

$$\displaystyle{ \vert x(t)\vert \leq \sqrt{\frac{r+\delta } {\delta } V (t_{0})\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big)} }$$

for \(t \geq t_{0}.\) This completes the proof.

Corollary 6.1 ([98]).

Assume the hypothesis of Theorem  6.1.1 holds. Then the zero solution of  (6.1.1) is exponentially stable.

Proof.

From inequality (6.1.16) we have that

$$\displaystyle\begin{array}{rcl} \vert x(t)\vert & \leq & \sqrt{\frac{r+\delta } {\delta } V (t_{0})\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big)} {}\\ & \leq & \sqrt{\frac{r+\delta } {\delta } V (t_{0})\alpha ^{t-t_{0}}} {}\\ \end{array}$$

for \(t \geq t_{0}.\) The proof is complete since α ∈ (0, 1). 
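The envelope (6.1.16) can likewise be checked numerically. We reuse the constant data a = 1, b = −0.3, r = 1, δ = 0.5 (so α = a + br = 0.7) with arbitrary initial values; V(0) is evaluated directly from (6.1.9).

```python
import math

a, b, r, delta = 1.0, -0.3, 1, 0.5         # constants satisfying (6.1.8)
alpha = a + b * r                          # a + A(t+1, t) = 0.7 < 1

x = {-1: 1.0, 0: -0.5}                     # arbitrary initial function
for t in range(0, 60):
    x[t + 1] = a * x[t] + b * x[t - 1]

# V(0) for the functional (6.1.9): with r = 1, A(0, -1) = b and A(0, -2) = 0, so
V0 = (x[0] + b * x[-1]) ** 2 + delta * b ** 2 * x[-1] ** 2

for t in range(0, 61):
    envelope = math.sqrt((r + delta) / delta * V0 * alpha ** t)
    assert abs(x[t]) <= envelope + 1e-12
print("|x(t)| stays below the exponential envelope of (6.1.16)")
```

The actual solution decays at rate \(\sqrt{0.3}\approx 0.55\), noticeably faster than the guaranteed envelope rate \(\sqrt{0.7}\approx 0.84\); the bound is conservative but explicit.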

Now we state a corollary regarding the exponential stability of the zero solution of (6.1.6).

Corollary 6.2 ([98]).

Assume the hypothesis of Theorem  6.1.1 holds with \(Q(t) = A(t + 1,t) - 1.\) Then the zero solution of  (6.1.6) is exponentially stable.

Remark 6.1.

If for a positive constant M we have

$$\displaystyle{ V (t_{0}) \leq M,\;\mbox{ for all}\;t_{0} \geq 0, }$$

then the zero solution of (6.1.1) is uniformly exponentially stable. This follows from the exponential inequality (6.1.16).

6.2 Criterion for Instability

In this section we use a nonnegative definite Lyapunov functional and obtain criteria that can be easily applied to test for instability of the zero solution of (6.1.1).

Theorem 6.2.1 ([98]).

Let H > r be a constant. Assume Q(t) > 0 and that

$$\displaystyle{ Q^{2}(t) + Q(t) - HA^{2}(t + 1,t) \geq 0. }$$
(6.2.1)

If

$$\displaystyle\begin{array}{rcl} V (t)& =& \left [x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)x(s)\right ]^{2} \\ & -& H\sum _{s=t-r-1}^{t-1}A^{2}(t,s)x^{2}(s),{}\end{array}$$
(6.2.2)

then along the solutions of ( 6.1.1 ) we have

$$\displaystyle{ \bigtriangleup V (t) \geq Q(t)V (t). }$$

Proof.

Let x(t) = x(t, t0, ψ) be a solution of (6.1.1) and assume V (t) is given by (6.2.2). Since the calculation is similar to the one in Lemma 6.1 we have that

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& Q(t)V (t) +\big (Q^{2}(t) + Q(t) - HA^{2}(t + 1,t)\big)x^{2}(t) \\ & +& Q(t)(H - r)\sum _{s=t-r-1}^{t-1}A^{2}(t,s)x^{2}(s) \\ & \geq & Q(t)V (t), {}\end{array}$$
(6.2.3)

where we have used

$$\displaystyle\begin{array}{rcl} \left (\sum _{s=t-r}^{t-1}A(t,s)x(s)\right )^{2}& \leq & r\sum _{ s=t-r}^{t-1}A^{2}(t,s)x^{2}(s) {}\\ \end{array}$$

and (6.2.1). This completes the proof.

We remark that condition (6.2.1) is satisfied for

$$\displaystyle{ Q(t) \geq \frac{-1 + \sqrt{1 + 4HA^{2 } (t + 1, t)}} {2}. }$$

Theorem 6.2.2 ([98]).

Suppose the hypothesis of Theorem  6.2.1 holds. Then all solutions of  (6.1.1) are unbounded and its zero solution is unstable, provided that

$$\displaystyle{ \prod ^{\infty }\big(a(s) + A(s + 1,s)\big) = \infty. }$$
(6.2.4)

Proof.

From (6.2.3) we have

$$\displaystyle{ V (t) \geq V (t_{0})\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big). }$$
(6.2.5)

Let V (t) be given by (6.2.2). Then

$$\displaystyle\begin{array}{rcl} V (t)& =& x^{2}(t) + 2x(t)\sum _{s=t-r-1}^{t-1}A(t,s)x(s) + \left [\sum _{s=t-r-1}^{t-1}A(t,s)x(s)\right ]^{2} \\ & -& H\sum _{s=t-r-1}^{t-1}A^{2}(t,s)x^{2}(s). {}\end{array}$$
(6.2.6)

Let β = Hr. Then from

$$\displaystyle{ \left (\frac{\sqrt{r}} {\sqrt{\beta }} a - \frac{\sqrt{\beta }} {\sqrt{r}}b\right )^{2} \geq 0, }$$

we have

$$\displaystyle{ 2ab \leq \frac{r} {\beta } a^{2} + \frac{\beta } {r}b^{2}. }$$

With this in mind we arrive at

$$\displaystyle\begin{array}{rcl} 2x(t)\sum _{s=t-r-1}^{t-1}A(t,s)x(s)& \leq & 2\left \vert x(t)\right \vert \left \vert \sum _{s=t-r-1}^{t-1}A(t,s)x(s)\right \vert {}\\ &\leq & \frac{r} {\beta } x^{2}(t) + \frac{\beta } {r}\left [\sum _{s=t-r-1}^{t-1}A(t,s)x(s)\right ]^{2} {}\\ & \leq & \frac{r} {\beta } x^{2}(t) + \frac{\beta } {r}r\sum _{s=t-r}^{t-1}A^{2}(t,s)x^{2}(s). {}\\ \end{array}$$

A substitution of the above inequality into (6.2.6) yields

$$\displaystyle\begin{array}{rcl} V (t)& \leq & x^{2}(t) + \frac{r} {\beta } x^{2}(t) + (\beta +r - H)\sum _{s=t-r-1}^{t-1}A^{2}(t,s)x^{2}(s) {}\\ & =& \frac{\beta +r} {\beta } x^{2}(t) {}\\ & =& \frac{H} {H - r}x^{2}(t). {}\\ \end{array}$$

Using inequality (6.2.5), we get

$$\displaystyle\begin{array}{rcl} \vert x(t)\vert & \geq & \sqrt{\frac{H - r} {H}} \;V ^{1/2}(t) {}\\ & \geq & \sqrt{\frac{H - r} {H}} \;V ^{1/2}(t_{ 0})\left (\prod _{s=t_{0}}^{t-1}\big(a(s) + A(s + 1,s)\big)\right )^{\frac{1} {2} }. {}\\ \end{array}$$

This completes the proof.

We have the following corollary regarding the unboundedness and instability of (6.1.6).

Corollary 6.3 ([98]).

Suppose the hypothesis of Theorem  6.2.1 holds with \(Q(t) = A(t + 1,t) - 1.\) Then all solutions of  (6.1.6) are unbounded and its zero solution is unstable, provided that

$$\displaystyle{ \prod ^{\infty }\big(A(s + 1,s)\big) = \infty. }$$

6.2.1 Applications and Numerical Evidence

In this section we provide examples that illustrate our theoretical results in two instances: when the coefficients a(t) and b(t, s) are constants, and when they are functions.

First, if a(t) = a and b(t, s) = b (\(a,b \in \mathbb{R}\)) we have \(A(t,s) =\sum _{ u=t-s}^{r}b.\) Then A(t, s) = b(r + 1 − t + s). Hence, \(\bigtriangleup _{t}A^{2}(t,s) = b^{2}(r - t + s)^{2} - b^{2}(r + 1 - t + s)^{2} \leq 0\) and thus condition (6.1.7) holds. Also \(A(t + 1,t) = br,\) and hence condition (6.1.8) becomes

$$\displaystyle{ - \frac{\delta } {(\delta +1)r} \leq a + br - 1 \leq -\left [\delta b^{2}r^{3} + (a + br - 1)^{2}\right ]. }$$
(6.2.7)

It is obvious from (6.2.7) that when a = 1, b has to be negative.
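Condition (6.2.7) is easy to check mechanically. The sketch below packages it as a small function; the sample calls use the data of the examples below, and the last call uses the data of Example 6.5, for which the condition fails (there Q > 0).

```python
def condition_627(a, b, r, delta):
    # (6.2.7): -delta/((delta + 1) r) <= a + b r - 1
    #          and a + b r - 1 <= -(delta b^2 r^3 + (a + b r - 1)^2)
    Q = a + b * r - 1
    return -delta / ((delta + 1) * r) <= Q <= -(delta * b ** 2 * r ** 3 + Q ** 2)

print(condition_627(1.0, -0.3, 1, 0.5))       # Example 6.1
print(condition_627(1.2, -0.3, 1, 0.5))       # Example 6.2
print(condition_627(1.125, -0.15, 2, 2 / 3))  # Example 6.4
print(condition_627(1.3, -0.2, 1, 0.5))       # Example 6.5 data: fails (Q > 0)
```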

Next we give four examples where the emphasis is on | a | ≥ 1.

Example 6.1 ([98]).

Let \(a = r = 1,b = -0.3\) and δ = 0. 5. Then one can easily verify that (6.2.7) is satisfied. Hence the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = x(t) - 0.3x(t - 1) }$$
(6.2.8)

is exponentially stable.
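As a quick numerical check of Example 6.1 (with illustrative initial values), iterating (6.2.8) shows a modest transient followed by decay to roundoff level, consistent with the root modulus \(\sqrt{30}/10 \approx 0.55\) computed in Remark 6.2 below.

```python
x = {0: -10.0, 1: 10.3}            # illustrative initial values
for t in range(1, 80):
    x[t + 1] = x[t] - 0.3 * x[t - 1]

peak = max(abs(x[t]) for t in range(81))
print(peak, abs(x[80]))            # modest peak, then decay to roundoff level
```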

Example 6.2 ([98]).

Let \(a = 1.2,b = -0.3,r = 1\), and δ = 0. 5. Then one can easily verify that (6.2.7) is satisfied. Hence the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = 1.2x(t) - 0.3x(t - 1) }$$
(6.2.9)

is exponentially stable as illustrated in Figure 6.1.

Example 6.3 ([98]).

Let \(a = 1.29,b = -0.6,r = 1\), and δ = 0. 5. With these values (6.2.7) is satisfied, and therefore the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = 1.29x(t) - 0.6x(t - 1) }$$

is exponentially stable as shown in Figure 6.2.

Example 6.4 ([98]).

Let \(a = 1.125,b = -0.15,r = 2\), and \(\delta = \frac{2} {3}.\) Then one can easily verify that (6.2.7) is satisfied. Hence the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = 1.125x(t) - 0.15\big(x(t - 1) + x(t - 2)\big) }$$

is exponentially stable as shown in Figure 6.3.

Fig. 6.1

Trajectories of (6.1.1) when a(t) and b(t, s) are constant. Figure 6.1 refers to Example 6.2, where a = 1.2, b = −0.3, and r = 1, with initial conditions x(0) = −10 and x(1) = 10.3. Figure 6.2 refers to Example 6.3, where a = 1.29, b = −0.6, and r = 1, with initial conditions x(0) = −10 and x(1) = 10.3. Figure 6.3 refers to Example 6.4, where a = 1.125, b = −0.15, and r = 2, with initial conditions x(0) = 15, x(1) = 2, and x(2) = −10.

It is worth mentioning that in both papers [87] and [136], in which fixed point theory was used, it was assumed that

$$\displaystyle{ \prod _{s=0}^{t-1}a(s) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty }$$

for the asymptotic stability.

Example 6.5 ([98]).

Let \(a = 1.3,b = -0.2,r = 1\), and H = 1.1. Then Q(t) = 0.1 > 0. Moreover \(Q(t) \geq \frac{-1 + \sqrt{1 + 4HA^{2 } (t + 1, t)}} {2} = 0.0422.\) Thus conditions (6.2.1) and (6.2.4) are satisfied and the zero solution of

$$\displaystyle{ x(t + 1) = 1.3x(t) - 0.2x(t - 1) }$$
(6.2.10)

is unstable. In fact, all its solutions become unbounded for large t. Figure 6.2 shows a trajectory of the above equation with initial conditions x(0) = −10 and x(1) = −1.3.
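Iterating (6.2.10) from the initial values of the figure confirms the unbounded growth; the growth ratio settles at the dominant characteristic root \((13+\sqrt{89})/20 \approx 1.12\), quoted in Remark 6.2 below.

```python
x = {0: -10.0, 1: -1.3}            # the initial values used in Figure 6.2
for t in range(1, 100):
    x[t + 1] = 1.3 * x[t] - 0.2 * x[t - 1]

ratio = x[100] / x[99]
print(abs(x[100]), ratio)          # |x| grows without bound; ratio -> (13 + sqrt(89))/20
```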

Fig. 6.2

Trajectories of (6.1.1) when a(t) and b(t, s) are constant. This graph corresponds to Example 6.5, where a = 1.3, b = −0.2, and r = 1, with initial conditions x(0) = −10 and x(1) = −1.3.

Remark 6.2.

When a(t) and b(t, s) are constant the solution x(t) of the delay difference equation (6.1.1) is the same as the sequence \((x_{n})_{n\in \mathbb{N}_{0}}\) defined recursively as

$$\displaystyle{ x_{n+r+1} = ax_{n+r} + b\left (x_{n+r-1} + \cdots + x_{n}\right ),n \in \mathbb{N}_{0}, }$$
(6.2.11)

and for which the general solution can be obtained analytically. For r = 1 in particular, the general solution to (6.2.11) is easily calculated. For instance, the exact solution to (6.2.8) in Example 6.1 is

$$\displaystyle\begin{array}{rcl} x(t) = \left (\frac{\sqrt{30}} {10} \right )^{t}\left (x(0)\cos \left (t\theta \right ) + \frac{\frac{10x(1)} {\sqrt{30}} - x(0)\cos \theta } {\sin \theta } \sin \left (t\theta \right )\right ),& & {}\\ \end{array}$$

where \(\theta =\arctan \left (\frac{\sqrt{5}} {5} \right )\). Since \(\left \vert \frac{\sqrt{30}} {10} \right \vert <1\) we see that \(\lim \limits _{t\rightarrow +\infty }\vert x(t)\vert = 0\) with an exponential convergence. The exact solution to (6.2.9) in Example 6.2 is

$$\displaystyle\begin{array}{rcl} x(t)& =& \frac{1} {2}\left [\left (x(0) + 10\frac{x(1) -\frac{6x(0)} {10} } {\sqrt{6}} \right )\left (\frac{6 + \sqrt{6}} {10} \right )^{t}\right. {}\\ & +& \left.\left (x(0) - 10\frac{x(1) -\frac{6x(0)} {10} } {\sqrt{6}} \right )\left (\frac{6 -\sqrt{6}} {10} \right )^{t}\right ]. {}\\ \end{array}$$

Since \(\left \vert \frac{6\pm \sqrt{6}} {10} \right \vert <1\), we see that the solution x(t) of Example 6.2 converges exponentially to zero. Similar calculations can be done for Examples 6.3 and 6.4. Finally, the exact solution to (6.2.10) in Example 6.5 is

$$\displaystyle\begin{array}{rcl} x(t)& =& \frac{1} {2}\left [\left (x(0) + 20\frac{x(1) -\frac{13x(0)} {20} } {\sqrt{89}} \right )\left (\frac{13 + \sqrt{89}} {20} \right )^{t}\right. {}\\ & +& \left.\left (x(0) - 20\frac{x(1) -\frac{13x(0)} {20} } {\sqrt{89}} \right )\left (\frac{13 -\sqrt{89}} {20} \right )^{t}\right ]. {}\\ \end{array}$$

Since \(\left \vert \frac{13+\sqrt{89}} {20} \right \vert> 1\), we see that \(\lim \limits _{t\rightarrow +\infty }\vert x(t)\vert = +\infty\).
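The closed form for (6.2.10) quoted above can be cross-checked against the recursion directly; the two agree to roundoff.

```python
import math

s89 = math.sqrt(89.0)
lam1, lam2 = (13 + s89) / 20, (13 - s89) / 20   # characteristic roots of (6.2.10)

def exact(t, x0, x1):
    # the closed form quoted above, with coefficients (x0 + k)/2 and (x0 - k)/2
    k = 20 * (x1 - 13 * x0 / 20) / s89
    return 0.5 * ((x0 + k) * lam1 ** t + (x0 - k) * lam2 ** t)

x0, x1 = -10.0, -1.3
x = {0: x0, 1: x1}
for t in range(1, 30):
    x[t + 1] = 1.3 * x[t] - 0.2 * x[t - 1]

err = max(abs(x[t] - exact(t, x0, x1)) for t in range(31))
print("max deviation between recursion and closed form:", err)
```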

We now give two examples that illustrate the exponentially stable and unstable case when a(t) and b(t, s) are functions. We corroborate our results with numerical simulations.

Example 6.6 ([98]).

Let \(a(t) = d^{2t+1} + \frac{2} {3}\) and \(b(t,s) = -d^{t+s}\) for \(d \in \mathbb{R}\). Then \(A(t,s) = -d^{2s}\sum _{ u=t-s}^{r}d^{u},\) and therefore \(A(t + 1,t) = -d^{2t}\sum _{ u=1}^{r}d^{u} = -d^{2t+1}\) for \(r = 1.\) We can show that \(\bigtriangleup _{t}A^{2}(t,z) \leq 0\) for all t + s + 1 ≤ z ≤ t − 1. If we take \(r = 1\) and \(\delta = 1,\) we obtain \(Q(t) = -\frac{1} {3}\). With these choices we see that the left inequality of condition (6.1.8) is trivially satisfied. To obtain the right inequality, we need to choose d such that \(d^{2}(d^{4})^{t} + \frac{1} {9} \leq -Q(t) = \frac{1} {3}\) for t large enough. It is therefore sufficient to choose d ∈ (0, 1); in that case \(\lim \limits _{t\rightarrow +\infty }(d^{4})^{t} = 0\), which implies that the right inequality of condition (6.1.8) is eventually satisfied. Note that condition (6.1.8) is satisfied for all t ≥ 0 when \(d \in (0, \frac{\sqrt{2}} {3} ]\). With these choices of the parameters d, δ, and r, we conclude that the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = \left (d^{2t+1} + \frac{2} {3}\right )x(t) - d^{2t+1}x(t - 1) }$$

is exponentially stable. We plotted two of its trajectories in Figure 6.3.
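The variable-coefficient case can be simulated directly. The sketch below uses d = 2/3 and the initial values from the caption of Figure 6.3; the trajectory stays bounded and decays, since for large t the recursion behaves like x(t + 1) ≈ (2/3)x(t).

```python
d = 2.0 / 3.0                      # the value used in Figure 6.3(a)
x = {0: -1.0, 1: 0.21}             # the initial values from the figure caption
for t in range(1, 100):
    c = d ** (2 * t + 1)           # d^(2t+1) multiplies both x(t) and x(t-1)
    x[t + 1] = (c + 2.0 / 3.0) * x[t] - c * x[t - 1]

peak = max(abs(v) for v in x.values())
print(peak, abs(x[100]))           # bounded throughout, then exponential decay
```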

Example 6.7 ([98]).

Let \(a(t) = d^{2t+1} + 1.1\) and \(b(t,s) = -d^{t+s}\). Then from Example 6.6 we have \(A(t + 1,t) = -d^{2t+1}\) when \(r = 1.\) In that case choosing \(H = 1\) yields \(Q(t) = 0.1> 0\). With these choices we see that condition (6.2.1) is satisfied if d ∈ (0, 1), and hence the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = \left (d^{2t+1} + 1.1\right )x(t) - d^{2t+1}x(t - 1) }$$

is unstable, as illustrated in Figure 6.4. In fact, the zero solution is unstable for all choices of \(a(t) = d^{2t+1} +\nu\) with ν > 1. We note that with these choices of a(t) and b(t, s) we have

$$\displaystyle{ \prod ^{\infty }\big(a(s) + A(s + 1,s)\big) =\prod ^{\infty }\nu = +\infty, }$$

and hence (6.2.4) is verified.

Fig. 6.3

Trajectories of (6.1.1) when \(a(t) = d^{2t+1} + \frac{2} {3}\) and \(b(t,s) = -d^{t+s}\). These plots refer to Example 6.6 with r = 1. The initial conditions were taken to be x(0) = −1 and x(1) = 0.21. In Figure 6.3(a) we plotted the trajectory obtained with \(d = \frac{2} {3}\), and in Figure 6.3(b) the trajectory with \(d = \frac{2.99} {3}\). In the latter case, since condition (6.1.8) is verified only after a certain value of t, the first few terms of the trajectory x(t) do not converge to zero until condition (6.1.8) is satisfied.

Fig. 6.4

Trajectories of (6.1.1) when \(a(t) = d^{2t+1} + 1.1\) and \(b(t,s) = -d^{t+s}\). This graph corresponds to Example 6.7 with r = 1 and initial conditions x(0) = −1 and x(1) = 0.21.

Next we compare our results to existing ones. Let \(a = 1.2,\;b_{1} = -0.2,\;b_{2} = -0.088,\;r = 2,\;\mbox{ and}\;\delta = 0.5.\) Then one can easily verify that (6.1.8) is satisfied. Hence the zero solution of the difference equation with multiple delays

$$\displaystyle{ x(t + 1) = 1.2x(t) - 0.2x(t - 1) - 0.088x(t - 2) }$$
(6.2.12)

is exponentially stable.

As noted earlier, in both papers [87] and [136] it was assumed that

$$\displaystyle{ \prod _{s=0}^{t-1}a(s) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty }$$

for asymptotic stability. Of course our a = 1.2 does not satisfy such a condition, and yet we concluded exponential stability. Next, let \(a = 1.2,\;b = -0.3,\;r = 1,\;\mbox{ and}\;\delta = 0.5.\) Then one can easily verify that (6.1.8) is satisfied. Hence the zero solution of the delay difference equation

$$\displaystyle{ x(t + 1) = 1.2x(t) - 0.3x(t - 1) }$$
(6.2.13)

is exponentially stable.

Note that the above condition of [87] and [136] again fails, since a = 1.2. Next we compare our results with those obtained by El-Morshedy in [125]. We begin with the statement of the following lemma.

Lemma 6.2 ([125]).

If there exists λ ∈ (0, 1) such that

$$\displaystyle{ \big\vert \prod _{j=0}^{N}a(n - j) + b(n)\big\vert +\sum _{ s=1}^{N}\big\vert \prod _{ j=0}^{s-1}a(n - j)\big\vert \;\vert b(n - s)\vert \leq \lambda, }$$
(6.2.14)

for large n, then the zero solution of

$$\displaystyle{ x(n + 1) = a(n)x(n) + b(n)x(n - N) }$$
(6.2.15)

is globally exponentially stable.

It can be easily seen that condition (6.2.14) cannot be satisfied for the data given in (6.2.12) above. Next we state two major results from [16], by Berezansky and Braverman, so that we can compare them with equations (6.2.12) and (6.2.13).
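As a sketch, Lemma 6.2 concerns the single-delay equation (6.2.15), so we can evaluate the left-hand side of (6.2.14) for the data of (6.2.13) (a(n) = 1.2, b(n) = −0.3, N = 1); it exceeds 1, so no admissible λ exists.

```python
# constant data of (6.2.13): a(n) = 1.2, b(n) = -0.3, N = 1, so the
# left-hand side of (6.2.14) is |a^(N+1) + b| + sum_{s=1}^{N} a^s |b|
a_, b_, N = 1.2, -0.3, 1
lhs = abs(a_ ** (N + 1) + b_) + sum(a_ ** s * abs(b_) for s in range(1, N + 1))
print(lhs)                         # exceeds 1, so no lambda in (0, 1) can work
```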

Lemma 6.3 ([16]).

Let 0 < γ < 1 and \(\sum _{l=2}^{m}\vert a_{ l}(n)\vert + \vert 1 - a_{1}(n)\vert \leq \gamma\) for n large enough. Then the equation

$$\displaystyle{ x(n + 1) - x(n) = -a_{1}(n)x(n) -\sum _{l=2}^{m}a_{ l}(n)x(g_{l}(n)) }$$
(6.2.16)

is exponentially stable. Here nTgl(n) ≤ n for some integer T > 0. 

Lemma 6.4 ([16]).

Suppose that for some γ ∈ (0, 1) the following inequality is satisfied for n large enough:

$$\displaystyle{ \sum _{k=2}^{m}\vert a_{ k}(n)\vert \sum _{j=g_{k}(n)}^{n-1}\sum _{ l=1}^{m}\vert a_{ l}(j)\vert + \vert 1 -\sum _{k=1}^{m}a_{ k}(n)\vert \leq \gamma. }$$
(6.2.17)

Then  (6.2.16) is exponentially stable.

In the spirit of (6.2.16) we rewrite (6.2.13) as

$$\displaystyle{ x(t + 1) - x(t) = 0.2x(t) - 0.3x(t - 1). }$$

Then the condition of Lemma 6.3 reads

$$\displaystyle{ \vert a_{2}(n)\vert + \vert 1 - a_{1}(n)\vert = 0.3 + \vert 1 + 0.2\vert = 1.5> 1, }$$

and so it cannot hold for any γ ∈ (0, 1). Also, condition (6.2.17) becomes

$$\displaystyle{ \vert a_{2}(n)\vert \big(\vert a_{1}(n)\vert + \vert a_{2}(n)\vert \big) + \vert 1 - a_{1}(n) - a_{2}(n)\vert = 0.3(0.5) + 0.9 = 1.05> 1, }$$

and so it is not satisfied either. Thus, we have demonstrated that Lyapunov functionals yield better results, as seen from the improvement over the results of [16] and [125]. For more comparison with the existing literature, we have the following theorem by Berezansky, Braverman, and Karabash.

Theorem 6.2.3 ([17]).

Consider the two delays difference equation

$$\displaystyle{ x(n + 1) - x(n) = -a_{0}x(n) - a_{1}x(n - h_{1}) - a_{2}x(n - h_{2}),\;h_{1},\;h_{2}> 0. }$$
(6.2.18)

Suppose at least one of the following conditions hold:

1) \(1> a_{0}> 0,\;\vert a_{1}\vert + \vert a_{2}\vert <a_{0};\)

2) \(0 <a_{0} + a_{1} + a_{2} <1,\;\vert a_{1}\vert h_{1} + \vert a_{2}\vert h_{2} <\frac{a_{0} + a_{1} + a_{2}} {\vert a_{0}\vert + \vert a_{1}\vert + \vert a_{2}\vert };\)

3) \(0 <a_{0} + a_{2} <1,\;\vert a_{2}\vert h_{2} <\frac{a_{0} + a_{2} -\vert a_{1}\vert } {\vert a_{0}\vert + \vert a_{1}\vert + \vert a_{2}\vert }.\)

Then Equation  (6.2.18) is exponentially stable.

The next theorem is due to Cooke and Győri.

Theorem 6.2.4 ([42]).

The multiple delays difference equation

$$\displaystyle{ x(n + 1) - x(n) = -\sum _{k=1}^{N}a_{ k}x(n - h_{k}),\;a_{k} \geq 0,\;h_{k} \geq 0, }$$
(6.2.19)

is asymptotically stable if

$$\displaystyle{ \sum _{k=1}^{N}a_{ k}h_{k} <1. }$$
(6.2.20)

The next theorem is due to Elaydi (1994) and to Kocić and Ladas (1993).

Theorem 6.2.5 ([63, 94]).

The multiple delays difference equation

$$\displaystyle{ x(n + 1) - x(n) = -a_{0}(n)x(n) -\sum _{k=1}^{N}a_{ k}(n)x(g_{k}(n)),\;g_{k}(n) \leq n }$$
(6.2.21)

is asymptotically stable if

$$\displaystyle\begin{array}{rcl} \sum _{k=1}^{N}\vert a_{ k}(n)\vert \leq \left \{\begin{array}{ccc} a_{0}(n)-\epsilon &\mathit{\mbox{ for $0 <a_{0}(n) <1$}} \\ 2 - a_{0}(n)-\epsilon &\mathit{\mbox{ for $1 \leq a_{0}(n) <2$}} \end{array} \right.& &{}\end{array}$$
(6.2.22)

The next theorem is due to Hartung and Győri.

Theorem 6.2.6 ([77]).

The multiple delays difference equation

$$\displaystyle{ x(n + 1) - x(n) = -\sum _{k=1}^{N}a_{ k}x(g_{k}(n)),\;a_{k} \geq 0,\;g_{k}(n) \leq n, }$$
(6.2.23)

is exponentially stable if

$$\displaystyle{ \lim \sup _{n\rightarrow \infty }(n - g_{k}(n)) <\infty, }$$
(6.2.24)

and

$$\displaystyle{ \sum _{k=1}^{N}a_{ k}\lim \sup _{n\rightarrow \infty }(n - g_{k}(n)) <1 + \frac{1} {e} -\sum _{k=1}^{N}a_{ k}. }$$
(6.2.25)

We remark that Theorems 6.2.4 and 6.2.5 only give results regarding asymptotic stability.

Consider the difference equation from Example 6.4, whose zero solution we have shown to be exponentially stable, written in the form of (6.2.18):

$$\displaystyle{ x(n + 1) - x(n) = -(-0.125)x(n) - 0.15x(n - 1) - 0.15x(n - 2). }$$
(6.2.26)

Then \(a_{0} = -.125,\;a_{1} = 0.15,\;\mbox{ and}\;a_{2} = 0.15.\) It is clear that 1) of Theorem 6.2.3 cannot hold since our a0 is negative. Similarly, condition (6.2.22) cannot hold since it requires that a0 > 0. Notice that Theorems 6.2.4 and 6.2.6 are not applicable to the results of this section since the coefficients of (6.1.1) depend on time. In Example 6.6 we showed the zero solution is exponentially stable. We also remark that our theorems do not require sign conditions on the coefficients, and that if we rewrite the equations of Theorems 6.2.4, 6.2.5, and 6.2.6 in the form of

$$\displaystyle{ x(n + 1) = (a_{0} + 1)x(n) + a_{1}x(n - h_{1}) + a_{2}x(n - h_{2}) }$$

then their first coefficient is a0 + 1 > 1, unlike our theorems, which are applicable when | a0 + 1 | > 1, as was demonstrated in the examples.
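A quick numerical check (our addition) of (6.2.26), rewritten as x(n + 1) = 1.125x(n) − 0.15x(n − 1) − 0.15x(n − 2), confirms the exponential decay even though the first coefficient exceeds 1:

```python
def iterate(x0=1.0, steps=300):
    """x(n+1) = 1.125 x(n) - 0.15 x(n-1) - 0.15 x(n-2), constant initial history."""
    x = [x0, x0, x0]                    # x(-2), x(-1), x(0)
    for n in range(2, 2 + steps):
        x.append(1.125 * x[n] - 0.15 * x[n - 1] - 0.15 * x[n - 2])
    return x

xs = iterate()                          # |xs[-1]| is numerically zero
```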

In general, it is a major problem to find an appropriate Lyapunov functional, and the quality of the corresponding results depends heavily on the functional that is found. However, once a suitable Lyapunov functional is found, investigators may continue for decades deriving more and more information from it. It is common knowledge among researchers that stability and boundedness results go hand in hand with the type of Lyapunov functional being used. To illustrate our concern, we consider the delay difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t) + b(t)x(t-\tau ) + p(t),\;t \in \mathbb{Z}^{+}, }$$
(6.2.27)

where \(a,b,p: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and τ is a positive integer. We have the following theorem.

Theorem 6.2.7.

Assume

$$\displaystyle{ \vert a(t)\vert <1,\;\mathit{\mbox{ for all }}\;t \in \mathbb{Z}^{+} }$$
(6.2.28)

and there is a δ > 0 such that

$$\displaystyle{ \vert a(t)\vert +\delta <1, }$$
(6.2.29)

and

$$\displaystyle{ \vert b(t)\vert \leq \delta,\;\mathit{\text{and}}\;\vert p(t)\vert \leq K,\;\mathit{\text{for some positive constant}}\;K. }$$
(6.2.30)

Then all solutions of  (6.2.27) are bounded. If p(t) = 0 for all t, then the zero solution of  (6.2.27) is (UAS).

Proof.

Consider the Lyapunov functional

$$\displaystyle{ V (t,x(\cdot )) = \vert x(t)\vert +\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert. }$$

Then along solutions of (6.2.27) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V & =& \vert x(t + 1)\vert -\vert x(t)\vert +\delta \sum _{ s=t+1-\tau }^{t}\vert x(s)\vert -\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert {}\\ &\!\leq & \vert a(t)\vert \vert x(t)\vert -\vert x(t)\vert + \vert b(t)\vert \vert x(t-\tau )\vert +\delta \!\sum _{ s=t+1-\tau }^{t}\!\vert x(s)\vert -\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert + \vert p(t)\vert {}\\ & =& \big(\vert a(t)\vert +\delta -1\big)\vert x(t)\vert +\big (\vert b(t)\vert -\delta \big)\vert x(t-\tau )\vert + \vert p(t)\vert {}\\ &\leq & \big(\vert a(t)\vert +\delta -1\big)\vert x(t)\vert + \vert p(t)\vert {}\\ &\leq & -\gamma \vert x(t)\vert + \vert p(t)\vert,\;\mathrm{for\;some\;positive\;constant}\;\gamma. {}\\ \end{array}$$

The results follow from either [133] or Theorem 2.2.4.

It is evident from Example 6.2 that Theorem 6.2.7 does not give any result concerning the exponential stability of the single delay difference equation

$$\displaystyle{ x(t + 1) = 1.2x(t) - 0.3x(t - 1). }$$

This illustrates the uncertainty we face when using Lyapunov functionals. On the other hand, it is tricky to construct a Lyapunov functional that deals with multiple delays.
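A direct computation (ours) shows why exponential stability nevertheless holds for this equation: both roots of the characteristic equation z² − 1.2z + 0.3 = 0 lie inside the unit circle, and the iterates decay accordingly.

```python
import math

# characteristic equation of x(t+1) = 1.2 x(t) - 0.3 x(t-1):  z^2 - 1.2 z + 0.3 = 0
disc = 1.2 ** 2 - 4 * 0.3               # 0.24 > 0: two real roots
r1 = (1.2 + math.sqrt(disc)) / 2        # ~0.845
r2 = (1.2 - math.sqrt(disc)) / 2        # ~0.355

x = [1.0, 1.0]                          # x(-1), x(0)
for t in range(1, 301):
    x.append(1.2 * x[t] - 0.3 * x[t - 1])
```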

As we indicated before, there is always a price to pay. By using Lyapunov functionals, our method relaxed the stringent conditions on the size of the coefficients. On the other hand, it puts a severe demand on the size of the delay h. The next theorem, which is due to Clark [35], does exactly the opposite; however, it asks for the coefficients to be constants.

Theorem 6.2.8 ([35]).

Suppose the coefficients a and b of  (6.2.15) are constants. Then Equation  (6.2.15) is asymptotically stable provided that

$$\displaystyle{ \vert a\vert + \vert b\vert <1. }$$

6.3 Vector Equation

In this section we try to extend the concept of exponential stability of the finite delay scalar Volterra equation to the finite delay vector Volterra equation

$$\displaystyle{ x(t + 1) = Px(t) +\sum _{ s=t-r}^{t-1}C(t,s)g(x(s)), }$$
(6.3.1)

where r is a positive integer, P is a constant n × n matrix, and C is an n × n matrix of functions defined for \(-r \leq s \leq t <\infty\), where \(t,s \in [-r,\infty ) \cap \mathbb{Z}.\) The nonlinear function \(g: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}\) is continuous in x. Throughout this section it is understood that if \(x \in \mathbb{R}^{n},\) then | x | is taken to be the Euclidean norm. Obtaining exponential stability through the method of Lyapunov functionals requires that, along the solutions, △V (t, x) ≤ −αV (t, x), something that is almost impossible to obtain for vector equations. Materials of this section can be found in [51] and the references therein. Let \(U = (u_{ij})\) be an n × n matrix. Then we define the norm | U | to be

$$\displaystyle{ \vert U\vert =\mathop{\max }\limits_{ 1 \leq j \leq n}\sum _{i=1}^{n}\vert u_{ ij}\vert. }$$
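In code this norm, the maximum absolute column sum, reads as follows (a small illustrative helper of ours):

```python
def mat_norm(U):
    """|U| = max over columns j of sum_i |u_ij| (maximum absolute column sum)."""
    n = len(U)
    return max(sum(abs(U[i][j]) for i in range(n)) for j in range(n))

# column sums of this 2x2 example are |1| + |2| = 3 and |-4| + |3| = 7
example = [[1, -4], [2, 3]]
```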

It should cause no confusion to denote the norm of a sequence function \(\varphi: [-r,\infty ) \cap \mathbb{Z} \rightarrow \mathbb{R}^{n}\) with

$$\displaystyle{ \vert \vert \varphi \vert \vert =\mathop{\sup }\limits_{ -r \leq s <\infty }\vert \varphi (s)\vert. }$$

The notation xt means that \(x_{t}(\tau ) = x(t+\tau ),\tau \in [-r,0] \cap \mathbb{Z}\) as long as x(t + τ) is defined. Thus, xt is a function mapping the interval \([-r,0] \cap \mathbb{Z}\) into \(\mathbb{R}^{n}.\) We say x(t) ≡ x(t, t0, ψ) is a solution of (6.3.1) if x(t) satisfies (6.3.1) for \(t \geq t_{0}\) and \(x_{t_{0}}(s) = x(t_{0} + s) =\psi (s),\) \(s \in [-r,0] \cap \mathbb{Z}.\) Throughout this chapter it is to be understood that when a function is written without its argument, then the argument is t. We begin with a stability definition. For t0 ≥ 0 we define

$$\displaystyle{ E_{t_{0}} = [-r,t_{0}] \cap \mathbb{Z}. }$$

Let C(t) denote the set of sequences \(\phi: [-r,\infty ) \cap \mathbb{Z} \rightarrow \mathbb{R}^{n}\) and \(\Vert \phi \Vert =\sup \{ \vert \phi (s)\vert: s \in [-r,\;t] \cap \mathbb{Z}\}\).

Definition 6.3.1.

The zero solution of (6.3.1) is stable if for each ɛ > 0 and each t0 ≥ −r, there exists a δ = δ(ɛ, t0) > 0 such that \([\phi: E_{t_{0}} \rightarrow \mathbb{R}^{n},\phi \in C(t):\Vert \phi \Vert <\delta ]\) implies | x(t, t0, ϕ) | < ɛ for all t ≥ t0.

In order to be able to handle the calculations for △V (t) along the solutions of (6.3.1), we let

$$\displaystyle{ A(t,s):=\sum _{ u=t-s}^{r}C(u + s,s),\;t,s \in [0,\;\infty ) \cap \mathbb{Z}. }$$

Clearly A(t, tr − 1) = 0, and as a consequence, one can easily verify that (6.3.1) is equivalent to the new system

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(t) = Qx(t) + A(t + 1,t)g(x(t)) -\bigtriangleup _{t}\sum _{s=t-r-1}^{t-1}A(t,s)g(x(s)),& &{}\end{array}$$
(6.3.2)

where the matrix Q is given by Q = PI and I is the identity n × n matrix.
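The definition of A(t, s), and the fact that A(t, t − r − 1) = 0 because the sum is then empty, can be checked directly; the scalar kernel C below is a hypothetical choice for illustration:

```python
def A(t, s, C, r):
    """A(t, s) = sum_{u = t-s}^{r} C(u + s, s); empty (hence zero) once t - s > r."""
    return sum(C(u + s, s) for u in range(t - s, r + 1))

C = lambda t, s: 0.5 ** (t - s)         # hypothetical scalar kernel
r = 4
edge = A(10, 10 - r - 1, C, r)          # empty sum: equals 0
```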

Remark 6.3.

Writing (6.3.1) in the form of (6.3.2) allows us to obtain stability results regarding the nonlinear Volterra difference equation

$$\displaystyle{ x(t + 1) =\sum _{ s=t-r}^{t-1}C(t,s)g(x(s)). }$$
(6.3.3)

This is remarkable since (6.3.1) is considered as the perturbed form of \(x(t + 1) = Px(t),\) which implies that the stability of the zero solution of (6.3.1) depends on the stability of the linear part; that is, one must require that the magnitudes of all the eigenvalues of the matrix P be inside the unit circle.

Before we state and prove our next theorem, we assume there exists a positive definite symmetric and constant n × n matrix D such that for positive constants \(\lambda,\mu _{1}\), and μ2 we have

$$\displaystyle{ P^{T}DQ + Q^{T}D = -\mu _{ 1}I. }$$
(6.3.4)

Due to the nonlinearity of the function g, we require that

$$\displaystyle{ x^{T}\big(P^{T}DA(t + 1,t) + DA(t + 1,t)\big)g(x) \leq -\mu _{ 2}\vert x\vert ^{2}\;\mbox{ if}\;x\neq 0, }$$
(6.3.5)

and

$$\displaystyle{ \vert g(x)\vert \leq \lambda \vert x\vert. }$$
(6.3.6)

It is clear that conditions (6.3.5) and (6.3.6) imply that g(0) = 0 and hence x = 0 is a solution of system (6.3.1). We note that since D is a positive definite symmetric matrix, there exists a positive constant k such that

$$\displaystyle{ k^{2}\vert x\vert ^{2} \leq x^{T}Dx,\;\;\mbox{ for all}\;x. }$$
(6.3.7)

If W(t) and Z(t) are two sequences, then \(\bigtriangleup \big(W(t)Z(t)\big) = W(t + 1)\bigtriangleup Z(t) +\big (\bigtriangleup W(t)\big)Z(t).\) 
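This discrete product rule is easy to verify numerically; the two sequences below are arbitrary illustrative choices:

```python
def delta(f, t):
    """Forward difference of the sequence f at t."""
    return f(t + 1) - f(t)

W = lambda t: t * t + 1                  # arbitrary test sequences
Z = lambda t: 3 * t - 2
t = 5
lhs = delta(lambda u: W(u) * Z(u), t)    # difference of the product
rhs = W(t + 1) * delta(Z, t) + delta(W, t) * Z(t)
# lhs == rhs == 254 for these choices
```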

Theorem 6.3.1 ([51]).

Let  (6.3.4)–( 6.3.6 ) hold, and suppose there are constants γ > 0 and α > 0 so that

$$\displaystyle{ -\mu _{1} - 2\mu _{2} +\gamma r\lambda ^{2}\vert A(t + 1,t)\vert +\big (\lambda \vert A^{T}(t + 1,t)D\vert + \vert Q^{T}D\vert \big)\sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \leq -\alpha, }$$
(6.3.8)
$$\displaystyle\begin{array}{rcl} -\gamma +\lambda \vert A^{T}(t + 1,t)D\vert + \vert Q^{T}D\vert \leq 0,& &{}\end{array}$$
(6.3.9)

and

$$\displaystyle{ 1 -\lambda \sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert> 0 }$$
(6.3.10)

then the zero solution of ( 6.3.1 ) is stable.

Proof.

Define the Lyapunov functional V (t) = V (t, x) by

$$\displaystyle\begin{array}{rcl} V (t)& =& \Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T}D\Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) \\ & & +\gamma \sum _{s=-r}^{-1}\sum _{ z=t+s}^{t-1}\vert A(t,z)\vert \vert g(x(z))\vert ^{2}. {}\end{array}$$
(6.3.11)

First we note that the right side of (6.3.11) is scalar. Let x(t) = x(t, t0, ψ) be a solution of (6.3.1) and define V (t) by (6.3.11). Then along solutions of (6.3.1) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& \Big(x(t + 1) +\sum _{ s=t-r}^{t}A(t + 1,s)g(x(s))\Big)^{T}D {}\\ & \times & \bigtriangleup \Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) {}\\ & +& \bigtriangleup \Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T} {}\\ & \times & D\Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) {}\\ & +& \gamma r\vert A(t + 1,t)\vert \vert g(x(t))\vert ^{2} -\gamma \sum _{ s=-r}^{-1}\vert A(t,t + s)\vert \vert g(x(t + s))\vert ^{2}. {}\\ \end{array}$$

Using (6.3.2) one can easily show that

$$\displaystyle{ x(t + 1) +\sum _{ s=t-r}^{t}A(t + 1,s)g(x(s)) = Px(t) +\sum _{ s=t-r}^{t-1}A(t,s)g(x(s)). }$$

With this in mind △V becomes

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& \Big(Px(t) +\sum _{ s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}D\Big(Qx(t) + A(t + 1,t)g(x(t))\Big) {}\\ & +& \Big(Qx(t) + A(t + 1,t)g(x(t))\Big)^{T}D\Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) {}\\ & +& \gamma r\vert A(t + 1,t)\vert \vert g(x(t))\vert ^{2} -\gamma \sum _{ s=-r}^{-1}\vert A(t,t + s)\vert \vert g(x(t + s))\vert ^{2}. {}\\ \end{array}$$

After rearranging terms, the above expression simplifies to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& x^{T}(t)\big(P^{T}DQ + Q^{T}D\big)x(t) + x^{T}(t)P^{T}DA(t + 1,t)g(x(t)) \\ & +& \Big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DQx(t) \\ & +& \Big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DA(t + 1,t)g(x(t)) \\ & +& x^{T}(t)Q^{T}D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) + g^{T}(x(t))A^{T}(t + 1,t)Dx(t) \\ & +& g^{T}(x(t))A^{T}(t + 1,t)D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) +\gamma r\vert A(t + 1,t)\vert \vert g(x(t))\vert ^{2} \\ & -& \gamma \sum _{s=-r}^{-1}\vert A(t,t + s)\vert \vert g(x(t + s))\vert ^{2}. {}\end{array}$$
(6.3.12)

Next we try to simplify (6.3.12) by combining like terms. We begin by noting that \(g^{T}(x)A^{T}(t + 1,t)Dx =\big [x^{T}DA(t + 1,t)g(x)\big]^{T},\) and hence we have

$$\displaystyle\begin{array}{rcl} x^{T}P^{T}DA(t + 1,t)g(x)& +& g^{T}(x)A^{T}(t + 1,t)Dx {}\\ & =& x^{T}\big(P^{T}DA(t + 1,t) + DA(t + 1,t)\big)g(x). {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} \big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DQx(t)& +& x^{T}(t)Q^{T}D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & =& x^{T}(t)Q^{T}D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & +& \Big[\big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DQx(t)\Big]^{T} {}\\ & =& 2x^{T}(t)Q^{T}D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & \leq & 2\vert x^{T}(t)\vert \vert Q^{T}D\vert \sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert {}\\ &\leq & \vert Q^{T}D\vert \sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \big(\vert x(t)\vert ^{2} + \vert g(x(s))\vert ^{2}\big),{}\\ \end{array}$$

where we have used the inequality 2aba 2 + b 2. Similarly,

$$\displaystyle\begin{array}{rcl} & & \Big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DA(t + 1,t)g(x(t)) {}\\ & +& g^{T}(x(t))A^{T}(t + 1,t)D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & =& g^{T}(x(t))A^{T}(t + 1,t)D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & +& \Big[\Big(\sum _{s=t-r}^{t-1}A(t,s)g(x(s))\Big)^{T}DA(t + 1,t)g(x(t))\Big]^{T} {}\\ & =& 2g^{T}(x(t))A^{T}(t + 1,t)D\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s)) {}\\ & \leq & 2\lambda \vert x(t)\vert \vert A^{T}(t + 1,t)D\vert \sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert {}\\ & \leq & \lambda \vert A^{T}(t + 1,t)D\vert \sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \big(\vert x(t)\vert ^{2} + \vert g(x(s))\vert ^{2}\big). {}\\ \end{array}$$

Let u = t + s, then

$$\displaystyle{ -\gamma \sum _{s=-r}^{-1}\vert A(t,t + s)\vert \vert g(x(t + s))\vert ^{2} = -\gamma \sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert ^{2}. }$$

Substituting the above four simplified expressions into (6.3.12) yields

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& \leq & \Big[-\mu _{1} -\mu _{2} +\gamma r\lambda ^{2}\vert A(t + 1,t)\vert \\ & +& \big(\lambda \vert A^{T}(t + 1,t)D\vert + \vert Q^{T}D\vert \big)\sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \Big]\vert x(t)\vert ^{2} \\ & +& \Big[-\gamma +\lambda \vert A^{T}(t + 1,t)D\vert + \vert Q^{T}D\vert \Big]\sum _{ s=t-r}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert ^{2} \\ & \leq & -\alpha \vert x(t)\vert ^{2}. {}\end{array}$$
(6.3.13)

Let ɛ > 0 be given; we will find δ > 0 so that | x(t, t0, ϕ) | < ɛ as long as \([\phi: E_{t_{0}} \rightarrow \mathbb{R}^{n}:\Vert \phi \Vert <\delta ]\). Let

$$\displaystyle{ L^{2} = \vert D\vert \Big(1 +\lambda \sum _{ t_{0}-r}^{t_{0}-1}\vert A(t_{ 0},s)\vert \Big)^{2} +\lambda ^{2}\gamma \sum _{ s=-r}^{-1}\sum _{ z=t_{0}+s}^{t_{0}-1}\vert A(t_{ 0},z)\vert. }$$

By (6.3.13) we have V is decreasing and hence for tt0 ≥ 0 we have that

$$\displaystyle\begin{array}{rcl} V (t,x)& \leq & V (t_{0},\phi ) \\ & \leq & \vert D\vert \Big\vert \phi (t_{0}) +\sum _{ t_{0}-r}^{t_{0}-1}A(t_{ 0},s)g(\phi (s))\Big\vert ^{2} \\ & +& \gamma \lambda ^{2}\sum _{ s=-r}^{-1}\sum _{ z=t_{0}+s}^{t_{0}-1}\vert A(t_{ 0},z)\vert \vert \phi (z)\vert ^{2} \\ & \leq & \delta ^{2}\;\vert D\vert \Big(1 +\lambda \sum _{ t_{0}-r}^{t_{0}-1}\vert A(t_{ 0},s)\vert \Big)^{2} \\ & +& \gamma \lambda ^{2}\;\delta ^{2}\sum _{ s=-r}^{-1}\sum _{ z=t_{0}+s}^{t_{0}-1}\vert A(t_{ 0},z)\vert \\ &\leq & \delta ^{2}L^{2}. {}\end{array}$$
(6.3.14)

By (6.3.11), we have

$$\displaystyle\begin{array}{rcl} V (t,x)& \geq & \Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T} \\ & \times & D\Big(x(t) +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) \\ & \geq & k^{2}\Big(\vert x\vert -\big\vert \sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\big\vert \Big)^{2}.{}\end{array}$$
(6.3.15)

Combining the two inequalities (6.3.14) and (6.3.15) we arrive at

$$\displaystyle{ \vert x(t)\vert \leq \frac{\delta L} {k} +\lambda \sum _{ s=t-r-1}^{t-1}\vert A(t,s)\vert \vert x(s)\vert. }$$

So as long as | x(t) | < ɛ, we have

$$\displaystyle{ \vert x(t)\vert <\frac{\delta L} {k} +\varepsilon \lambda \sum _{ s=t-r-1}^{t-1}\vert A(t,s)\vert,\;\mbox{ for all}\;t \geq t_{ 0}. }$$

Thus, we have from the above inequality

$$\displaystyle{ \vert x(t)\vert <\varepsilon \; \mbox{ for}\;\delta <\frac{k} {L}\big(1 -\lambda \sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert \big)\varepsilon. }$$

Note that by (6.3.10), the above inequality regarding δ is valid. This completes the proof.

We have the following corollary.

Corollary 6.4 ([51]).

Assume all the conditions of Theorem  6.3.1 hold. Let x(t) be any solution of ( 6.3.1 ). Then \(\vert x(t)\vert ^{2} \in l\big([t_{0},\infty ) \cap \mathbb{Z}\big).\)

Proof.

We know from Theorem 6.3.1 that the zero solution is stable. Thus, for the same δ of stability, we take | x(t, t0, ϕ) | < 1. Since V is decreasing, we have by summing (6.3.13) from t0 to t − 1 and using (6.3.14) that,

$$\displaystyle{ V (t,x) \leq V (t_{0},\phi ) \leq \delta ^{2}L^{2} -\alpha \sum _{ t_{0}}^{t-1}\vert x(s)\vert ^{2}. }$$

Since,

$$\displaystyle\begin{array}{rcl} V (t,x) \geq \Big (x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T}D\Big(x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big),& & {}\\ \end{array}$$

we have that

$$\displaystyle\begin{array}{rcl} & & \Big(x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T}D\Big(x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) \\ & \leq & \delta ^{2}L^{2} -\alpha \sum _{ t_{0}}^{t-1}\vert x(s)\vert ^{2}. {}\end{array}$$
(6.3.16)

Also, using the Schwarz inequality, one obtains

$$\displaystyle\begin{array}{rcl} \Big(\sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert \Big)^{2}& =& \Big(\sum _{ s=t-r-1}^{t-1}\vert A(t,s)\vert ^{1/2}\vert A(t,s)\vert ^{1/2}\vert g(x(s))\vert \Big)^{2} {}\\ & \leq & \lambda ^{2}\sum _{ s=t-r-1}^{t-1}\vert A(t,s)\vert \sum _{ s=t-r-1}^{t-1}\vert A(t,s)\vert \vert x(s)\vert ^{2}.{}\\ \end{array}$$

As \(\sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert\) is bounded by (6.3.10) and \(\vert x\vert ^{2} <1\), we have that \(\sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert \vert x(s)\vert ^{2}\) is bounded and hence \(\sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert \vert g(x(s))\vert\) is bounded. Therefore, from (6.3.16), we arrive at

$$\displaystyle\begin{array}{rcl} \alpha \sum _{s=t_{0}}^{t-1}\vert x(s)\vert ^{2}& \leq & \delta ^{2}L^{2} -\Big (x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big)^{T} {}\\ & \times & D\Big(x +\sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\Big) {}\\ & \leq & \delta ^{2}L^{2} + \vert D\vert \Big(\vert x\vert +\big\vert \sum _{ s=t-r-1}^{t-1}A(t,s)g(x(s))\big\vert \Big)^{2} \leq K, {}\\ \end{array}$$

from which we deduce that \(\vert x(t)\vert ^{2} \in l\big([t_{0},\infty ) \cap \mathbb{Z}\big).\)

Due to our previous remark, it is straightforward to extend Theorem 6.3.1 and Corollary 6.4 to (6.3.3) by setting the coefficient matrix P = 0. 

Theorem 6.3.2 ([51]).

Let  (6.3.4) and  (6.3.5) hold with P the zero matrix. Assume  (6.3.6) and suppose there are constants γ > 0 and α > 0 so that

$$\displaystyle\begin{array}{rcl} -\mu _{1} -\mu _{2}& +& \gamma r\lambda ^{2}\vert A(t + 1,t)\vert +\big (\lambda \vert A^{T}(t + 1,t)D\vert {}\\ & +& \vert D\vert \big)\sum _{s=t-r}^{t-1}\vert A(t,s)\vert \leq -\alpha, {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} -\gamma +\lambda \vert A^{T}(t + 1,t)D\vert + \vert D\vert \leq 0,& & {}\\ \end{array}$$

and

$$\displaystyle{ 1 -\lambda \sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert> 0 }$$

then the zero solution of ( 6.3.3 ) is stable and \(\vert x(t)\vert ^{2} \in l\big([t_{0},\infty ) \cap \mathbb{Z}\big).\)

Proof.

The proof is an immediate consequence of Theorem 6.3.1 and Corollary 6.4 by taking the matrix P to be the zero matrix, which implies that Q = −I. 

Next, we resort to the fundamental matrix solution to characterize solutions of (6.3.1) and then compare both results. We begin by considering the homogeneous system,

$$\displaystyle{ x(t + 1) = Ax(t) }$$
(6.3.17)

where \(A = (a_{ij})\) is a constant n × n nonsingular matrix. If Φ(t) is a matrix that is nonsingular for all \(t \geq t_{0}\) and satisfies (6.3.17), then it is said to be a fundamental matrix for (6.3.17). Also, it is known that if all eigenvalues of A reside inside the unit circle, then there exist positive constants l and η ∈ (0, 1) such that \(\vert \varPhi (t)\varPhi ^{-1}(t_{ 0})\vert \leq l\eta ^{t-t_{0} }.\) For more on the linearization of systems of the form (6.3.17), we refer the reader to [57]. Suppose the function g is Lipschitz. That is, there exists a positive constant L such that

$$\displaystyle{ \vert g(x) - g(y)\vert \leq L\vert x - y\vert }$$
(6.3.18)

for all x and y. Then (6.3.18) along with g(0) = 0 imply that | g(x) | ≤ L | x |. 

Theorem 6.3.3 ([51]).

Assume all eigenvalues of A of system  (6.3.17) reside inside the unit circle. Also, assume  (6.3.18) along with g(0) = 0. In addition we ask that for some positive constant R

$$\displaystyle{ \sum _{s=-r}^{\infty }\vert C(u,s)\vert \leq R, }$$
(6.3.19)

then the zero solution of  (6.3.1) is exponentially stable provided that \(RL <\frac{1-\eta } {l}.\)

Proof.

Let Φ(t) be the fundamental matrix for (6.3.17). For a given initial function \(\phi: [-r,\infty ) \cap \mathbb{Z} \rightarrow \mathbb{R}^{n}\), by using the variation of parameters, we have that

$$\displaystyle{ x(t) =\varPhi (t)\varPhi ^{-1}(t_{ 0})\phi (t_{0}) +\sum _{ u=t_{0}}^{t-1}\varPhi (t)\varPhi ^{-1}(u + 1)\sum _{ s=u-r}^{u-1}C(u,s)g(x(s)). }$$
(6.3.20)

Then x(t) given by (6.3.20) is a solution of (6.3.1) (see [57]). Hence, for tt0 we have

$$\displaystyle{ \vert x(t)\vert \leq l\eta ^{t-t_{0} }\vert \phi (t_{0})\vert + RLl\eta ^{t-1}\sum _{ u=t_{0}}^{t-1}\eta ^{-u}\vert x(u)\vert. }$$

The rest of the proof follows along the lines of Theorem 4.35 of [57], by invoking Gronwall’s inequality (see Corollary 1.1).

Theorem 6.3.3 gives a stronger type of stability since it requires the zero solution of (6.3.17) to be exponentially stable. We end this section with an example.

Example 6.8.

Let \(P = \left (\begin{array}{rr} 1/2& 0\\ 0 &1/2 \end{array} \right )\)  and     \(C(t,s) = \left (\begin{array}{rr} 1/3& 0\\ 0 &1/3 \end{array} \right )\),

then \(A(t,s) = \left (\begin{array}{rr} \frac{1} {3}(r - t + s + 1)& 0 \\ 0&\frac{1} {3}(r - t + s + 1) \end{array} \right )\)

and \(A(t+1,t) = \left (\begin{array}{rr} \frac{1} {3}r& 0 \\ 0&\frac{1} {3}r \end{array} \right )\).

From P T DQ + Q T D = −μ1 I, we obtain

\(D = \left (\begin{array}{rr} \frac{4} {3}\mu _{1} & 0 \\ 0&\frac{4} {3}\mu _{1} \end{array} \right )\). Let \(g(x) = \left (\begin{array}{r} \frac{-9\mu _{2}} {8\mu _{1}r} x_{1}\\ \\ \frac{-9\mu _{2}} {8\mu _{1}r} x_{2} \end{array} \right ).\)

Then

$$\displaystyle{ x^{T}\big(P^{T}DA(t + 1,t) + DA(t + 1,t)\big)g(x) = -\mu _{ 2}\left (x_{1}^{2} + x_{ 2}^{2}\right ). }$$

Hence (6.3.5) is satisfied. By letting \(\frac{9\mu _{2}} {8r\mu _{1}} \leq \lambda <\frac{3} {r(r+1)}\) we have that \(\left \vert g(x)\right \vert \leq \lambda \left \vert x\right \vert.\) For the sake of verifying (6.3.10), we note that

$$\displaystyle{ \vert A(t,s)\vert \leq \frac{1} {3}\vert r - t + s + 1\vert \leq \frac{r} {3},\;\mbox{ for}\;s \in [t - r,t - 1]. }$$

Thus,

$$\displaystyle{ \sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert \leq \sum _{ s=t-r-1}^{t-1}\frac{r} {3} \leq \frac{r(r + 1)} {3}. }$$

Thus, \(1 -\lambda \sum _{s=t-r-1}^{t-1}\vert A(t,s)\vert > 0\) for \(\lambda <\frac{3} {r(r + 1)}.\) It remains to verify conditions (6.3.8) and (6.3.9). As before, simple calculations show that (6.3.8) and (6.3.9) correspond to

$$\displaystyle{ -\mu _{1} - 2\mu _{2} +\gamma r\lambda ^{2}\frac{r} {3} + \frac{4\lambda r\mu _{1}} {9} + \frac{2} {3}\mu _{1}(\frac{r} {3}) \leq -\alpha, }$$
(6.3.21)

and

$$\displaystyle{ -\gamma + \frac{4\lambda r\mu _{1}} {9} + \frac{2} {3}\mu _{1} \leq 0, }$$
(6.3.22)

respectively. Now inequalities (6.3.21) and (6.3.22) can be satisfied by the choice of appropriate μ1, μ2, and r. Thus we have shown that the zero solution of

$$\displaystyle{ x(t+1) = \left (\begin{array}{rr} 1/2& 0\\ 0 &1/2 \end{array} \right )x(t)-\sum \limits _{s=t-r}^{t-1}\left (\begin{array}{rr} 1/3& 0 \\ 0&1/3 \end{array} \right )\left (\begin{array}{r} \frac{-9\mu _{2}} {8\mu _{1}r} x_{1}\\ \\ \frac{-9\mu _{2}} {8\mu _{1}r} x_{2} \end{array} \right ) }$$

is stable by invoking Theorem 6.3.1.
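As a numerical illustration of this example (our sketch, with the hypothetical parameter values μ1 = 1, μ2 = 0.1, and r = 1), both components of the iterated system decay to zero:

```python
def simulate_example(mu1=1.0, mu2=0.1, r=1, steps=200):
    """Componentwise iteration of
    x(t+1) = (1/2) x(t) + (1/3)*(9*mu2/(8*mu1*r)) * sum_{s=t-r}^{t-1} x(s),
    i.e. the system above with the two minus signs cancelling."""
    c = (1.0 / 3.0) * 9.0 * mu2 / (8.0 * mu1 * r)
    x = [[1.0, -1.0] for _ in range(r + 1)]      # constant initial history
    for t in range(r, r + steps):
        past = [sum(x[s][i] for s in range(t - r, t)) for i in range(2)]
        x.append([0.5 * x[t][i] + c * past[i] for i in range(2)])
    return x[-1]

x_final = simulate_example()
```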

Next we consider the nonlinear Volterra difference equation

$$\displaystyle{ y(n + 1) = f(y(n)) +\sum _{ s=0}^{n}C(n,s)h(y(s)) + g(n) }$$
(6.3.23)

where f and h are k × 1 vector functions that are continuous in x and g is a k × 1 vector sequence. In addition, C is a k × k matrix function on \(\mathbb{Z}^{+} \times \mathbb{Z}^{+}.\) Note that (6.3.23) has no delay. We are mainly interested in the uniform boundedness of the solutions of (6.3.23) and in its exponential stability when g(n) = 0 for all \(n \in \mathbb{Z}^{+}.\) We make the assumption that for positive constants λ1, λ2, and M,

$$\displaystyle{ \vert f(y)\vert \leq \lambda _{1}\vert y\vert,\;\vert h(y)\vert \leq \lambda _{2}\vert y\vert,\;\mbox{ and}\;\vert g\vert \leq M. }$$
(6.3.24)

If A = (aij) is a k × k real matrix, then we define the norm of A by

$$\displaystyle{ \vert A\vert =\max _{1\leq i\leq k}\{\sum _{j=1}^{k}\vert a_{ ij}\vert \}. }$$

Similarly, for \(x \in \mathbb{R}^{k},\) | x | denotes the maximum norm of x. In the next theorem we construct a Lyapunov functional to obtain uniform boundedness and the exponential stability of the zero solution.

Theorem 6.3.4.

Assume  (6.3.24). Also, we assume that

$$\displaystyle{ \sum \limits _{s=0}^{n-1}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert <\infty, }$$
(6.3.25)
$$\displaystyle{ \lambda _{1} +\lambda _{2}\vert C(n,n)\vert + K\sum \limits _{j=n}^{\infty }\vert C(j,n)\vert \leq 1-\alpha, }$$
(6.3.26)
$$\displaystyle{ \vert C(n,s)\vert \geq \lambda \sum \limits _{j=n}^{\infty }\vert C(j,s)\vert \;\mathit{\mbox{ where}}\;\lambda = \frac{K\alpha } {\epsilon }, }$$
(6.3.27)

where ε, α, and K are positive constants with α ∈ (0, 1) and K = ε + λ2. Then every solution y(n) of  (6.3.23) is uniformly bounded and \(\limsup \limits _{n\rightarrow \infty }\vert y(n)\vert \leq \frac{M} {\alpha }.\) Moreover, if g(n) = 0 for all \(n \in \mathbb{Z}^{+}\) , then the zero solution of  (6.3.23) is exponentially stable.

Proof.

We begin by defining the Lyapunov functional V by

$$\displaystyle{ V (n) =\vert y(n)\vert + K\sum \limits _{s=0}^{n-1}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert \vert y(s)\vert. }$$
(6.3.28)

Then using (6.3.28) we have along the solutions of (6.3.23) that

$$\displaystyle{ \bigtriangleup V (n) =\vert y(n+1)\vert -\vert y(n)\vert +K\left (\sum \limits _{s=0}^{n}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert \vert y(s)\vert -\sum \limits _{ s=0}^{n-1}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert \vert y(s)\vert \right ). }$$

Or

$$\displaystyle{ \bigtriangleup V (n) = \left (\vert y(n + 1)\vert -\vert y(n)\vert \right ) + K\left (\sum \limits _{j=n}^{\infty }\vert C(j,n)\vert \vert y(n)\vert -\sum \limits _{ s=0}^{n-1}\vert C(n,s)\vert \vert y(s)\vert \right ). }$$

Substitute for y(n + 1) from (6.3.23) to obtain

$$\displaystyle{ \bigtriangleup V (n) \leq \left (\vert f(y(n))\vert +\vert g(n)\vert +\sum \limits _{ s=0}^{n}\vert C(n,s)\vert \vert h(y(s))\vert -\vert y(n)\vert \right ) }$$
$$\displaystyle{ +K\left (\sum \limits _{j=n}^{\infty }\vert C(j,n)\vert \vert y(n)\vert -\sum \limits _{ s=0}^{n-1}\vert C(n,s)\vert \vert y(s)\vert \right ). }$$

Or, using (6.3.24),

$$\displaystyle{ \bigtriangleup V (n) \leq \left (\lambda _{1}\vert (y(n))\vert + M +\lambda _{2}\sum \limits _{s=0}^{n}\vert C(n,s)\vert \vert (y(s))\vert -\vert y(n)\vert \right ) }$$
$$\displaystyle{ +K\left (\sum \limits _{j=n}^{\infty }\vert C(j,n)\vert \vert y(n)\vert -\sum \limits _{ s=0}^{n-1}\vert C(n,s)\vert \vert y(s)\vert \right ). }$$

After simplification we arrive at

$$\displaystyle{ \bigtriangleup V (n) \leq \left (\lambda _{1} - 1 +\lambda _{2}\vert C(n,n)\vert + K\sum \limits _{j=n}^{\infty }\vert C(j,n)\vert \right )\vert y(n)\vert }$$
$$\displaystyle{ +M +\sum \limits _{ s=0}^{n-1}(\lambda _{ 2} - K)\vert C(n,s)\vert \vert y(s)\vert, }$$

which, by (6.3.26), reduces to

$$\displaystyle{ \bigtriangleup V (n) \leq -\alpha \vert y(n)\vert + M + (\lambda _{2} - K)\sum \limits _{s=0}^{n-1}\vert C(n,s)\vert \vert y(s)\vert. }$$

Using (6.3.27) we get

$$\displaystyle{ \bigtriangleup V (n) \leq -\alpha \left (\vert y(n)\vert -\frac{M} {\alpha } -\frac{\lambda (\lambda _{2} - K)} {\alpha } \sum \limits _{s=0}^{n-1}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert \vert y(s)\vert \right ). }$$

Now use the facts that \(\lambda = \frac{\alpha K} {\epsilon }\) and \(\epsilon = K -\lambda _{2}\), so that \(-\frac{\lambda (\lambda _{2}-K)} {\alpha } = K\), to simplify to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & -\alpha \left (\vert y(n)\vert + K\sum \limits _{s=0}^{n-1}\sum \limits _{ j=n}^{\infty }\vert C(j,s)\vert \vert y(s)\vert \right ) + M {}\\ & =& -\alpha V (n) + M. {}\\ \end{array}$$

Now, by applying the variation of parameters formula, we get:

$$\displaystyle{ V (n) \leq (1-\alpha )^{n}V (0) + M\sum \limits _{ s=0}^{n-1}(1-\alpha )^{(n-s-1)}, }$$

which simplifies to

$$\displaystyle{ V (n) \leq (1-\alpha )^{n}V (0) + \frac{M} {\alpha }. }$$

Using (6.3.28) we arrive at

$$\displaystyle\begin{array}{rcl} \vert y(n)\vert & \leq & (1-\alpha )^{n}\vert y(0)\vert + \frac{M} {\alpha } \\ & \leq & \vert y(0)\vert + \frac{M} {\alpha }.{}\end{array}$$
(6.3.29)

Hence we have uniform boundedness. If g(n) = 0 for all \(n \in \mathbb{Z}^{+}\), then from (6.3.29) we have

$$\displaystyle{ \vert y(n)\vert \leq (1-\alpha )^{n}\vert y(0)\vert, }$$

which implies the exponential stability. This completes the proof.
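The conclusion of Theorem 6.3.4 can be illustrated numerically. In the sketch below the data f(y) = 0.3y, h(y) = 0.2y, C(n, s) = 0.1(0.5)^{n−s}, and g ≡ 0 are hypothetical choices; one can check that the hypotheses hold with, for example, λ1 = 0.3, λ2 = 0.2, ε = 0.3, K = 0.5, and α = 0.3, so the theorem predicts |y(n)| ≤ (0.7)^n |y(0)|:

```python
def volterra(steps, lam1=0.3, lam2=0.2, c0=0.1, rho=0.5, y0=1.0):
    """y(n+1) = lam1*y(n) + sum_{s=0}^{n} c0*rho**(n-s) * lam2*y(s),
    i.e. the scalar case of (6.3.23) with g identically zero."""
    y = [y0]
    for n in range(steps):
        conv = sum(c0 * rho ** (n - s) * lam2 * y[s] for s in range(n + 1))
        y.append(lam1 * y[n] + conv)
    return y

traj = volterra(60)     # decays well below the guaranteed rate 0.7**n
```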

For the next theorem we consider the scalar Volterra difference equation

$$\displaystyle{ x(n + 1) =\mu (n)x(n) +\sum _{ s=0}^{n-1}h(n,s)x(s) + f(n), }$$
(6.3.30)

and show that, under suitable conditions, all its solutions are uniformly bounded and its zero solution is uniformly exponentially stable when f(n) is identically zero. We assume the existence of a bounded initial sequence \(\phi: \mathbb{Z}^{+} \rightarrow [0,\infty )\) with \(\vert \vert \phi \vert \vert =\max _{0\leq s\leq n_{0}}\vert \phi (s)\vert,\;\;n_{0} \geq 0.\)

Theorem 6.3.5 (Raffoul).

Suppose there is a scalar sequence \(\alpha: \mathbb{Z}^{+} \rightarrow [0,\infty ).\) Assume there are positive constants a > 1 and b such that

$$\displaystyle{ \alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert> 0, }$$
(6.3.31)
$$\displaystyle{ \vert \mu (n)\vert + \vert \alpha (n)\vert -\vert h(n,n)\vert - 1 \leq -(1 - a^{-b}), }$$
(6.3.32)

and for some positive constant M

$$\displaystyle{ \sum _{s=0}^{n-1}a^{-b(n-s-1)}\vert f(s)\vert \leq M,\;\;for\;\;0 \leq n <\infty. }$$

(i) If

$$\displaystyle{ \max _{n\geq n_{0}}\sum _{s=0}^{n}\Big(\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n}a^{-b(n-u-1)}\vert h(u,s)\vert \Big) <\infty }$$

then all solutions of  (6.3.30) are uniformly bounded and its zero solution is uniformly exponentially stable when f(n) is identically zero.

(ii) If for every n0 ≥ 0, there is a constant M(n0) depending on n0 such that

$$\displaystyle{ \sum _{s=0}^{n_{0}-1}\Big[\alpha (s)a^{-b(n_{0}-s-1)} -\sum _{ u=s}^{n_{0}-1}a^{-b(n_{0}-u-1)}\vert h(u,s)\vert \Big] <M(n_{ 0}), }$$

then all solutions of  (6.3.30) are bounded and its zero solution is exponentially stable when f(n) is identically zero.

Proof.

Consider the Lyapunov functional

$$\displaystyle\begin{array}{rcl} & & V (n,x) = \vert x(n)\vert \\ & +& \sum _{s=0}^{n-1}\big[\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert \big]\vert x(s)\vert.{}\end{array}$$
(6.3.33)

Then along the solutions of (6.3.30) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x)& \leq & \vert \mu (n)\vert \vert x(n)\vert +\sum _{ s=0}^{n-1}\vert h(n,s)\vert \vert x(s)\vert + \vert f(n)\vert {}\\ & +& \sum _{s=0}^{n}\big[\alpha (s)a^{-b(n-s)} -\sum _{ u=s}^{n}a^{-b(n-u)}\vert h(u,s)\vert \big]\vert x(s)\vert {}\\ &-& \sum _{s=0}^{n-1}\big[\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert \big]\vert x(s)\vert. {}\\ \end{array}$$

Next we try to simplify △V (n, x).

$$\displaystyle\begin{array}{rcl} & & \sum _{s=0}^{n}\big[\alpha (s)a^{-b(n-s)} -\sum _{ u=s}^{n}a^{-b(n-u)}\vert h(u,s)\vert \big]\vert x(s)\vert {}\\ & =& \sum _{s=0}^{n}\big[\alpha (s)a^{-b(n-s)} -\sum _{ u=s}^{n-1}a^{-b(n-u)}\vert h(u,s)\vert -\vert h(n,s)\vert \big]\vert x(s)\vert {}\\ & =& \sum _{s=0}^{n-1}\big[\alpha (s)a^{-b(n-s)} -\sum _{ u=s}^{n-1}a^{-b(n-u)}\vert h(u,s)\vert -\vert h(n,s)\vert \big]\vert x(s)\vert {}\\ & +& \alpha (n)\vert x(n)\vert -\vert h(n,n)\vert \vert x(n)\vert {}\\ & =& a^{-b}\sum _{ s=0}^{n-1}\big[\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert \big]\vert x(s)\vert {}\\ & -& \sum _{s=0}^{n-1}\vert h(n,s)\vert \vert x(s)\vert +\alpha (n)\vert x(n)\vert -\vert h(n,n)\vert \vert x(n)\vert. {}\\ \end{array}$$

Substituting the above expression into the inequality for △V (n, x) and making use of (6.3.32) yield

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x)& \leq & \big[\vert \mu (n)\vert + \vert \alpha (n)\vert -\vert h(n,n)\vert - 1\big]\vert x(n)\vert \\ &-& (1 - a^{-b})\sum _{ s=0}^{n-1}\big[\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert \big]\vert x(s)\vert + \vert f(n)\vert \\ &\leq & -(1 - a^{-b})\Big[\vert x(n)\vert \\ & +& \sum _{s=0}^{n-1}\big[\alpha (s)a^{-b(n-s-1)} -\sum _{ u=s}^{n-1}a^{-b(n-u-1)}\vert h(u,s)\vert \big]\vert x(s)\vert \Big] + \vert f(n)\vert \\ & =& -(1 - a^{-b})V (n,x) + \vert f(n)\vert. {}\end{array}$$
(6.3.34)

Set \(\beta = 1 - a^{-b} \in (0,1)\) and apply the variation of parameters formula to get

$$\displaystyle\begin{array}{rcl} V (n,x(n))& \leq & (1-\beta )^{n-n_{0} }V (n_{0},\phi ) +\sum \limits _{ s=n_{0}}^{n-1}(1-\beta )^{(n-s-1)}\vert f(s)\vert \\ &\leq & (1-\beta )^{n-n_{0} }\vert \vert \phi \vert \vert \Big[1 +\sum _{ s=0}^{n_{0}-1}\big[\alpha (s)a^{-b(n_{0}-s-1)} -\sum _{ u=s}^{n_{0}-1}a^{-b(n_{0}-u-1)}\vert h(u,s)\vert \big]\Big] \\ & +& \sum \limits _{s=n_{0}}^{n-1}(1-\beta )^{(n-s-1)}\vert f(s)\vert. {}\end{array}$$
(6.3.35)

The results readily follow from (6.3.35) and the fact that | x(n) | ≤ V (n, x). This completes the proof.
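The closing step, passing from the inequality △V ≤ −(1 − a^{−b})V + |f| to the bound (6.3.35), is a discrete variation of parameters. A minimal numerical sketch, with a hypothetical rate β and forcing sequence f chosen only for illustration, confirms that iterating the worst-case recurrence reproduces the closed-form bound:

```python
# Sketch (hypothetical beta and f): the difference inequality
# Delta V(n) <= -beta*V(n) + |f(n)| gives, in the worst case,
# V(n+1) = (1 - beta)*V(n) + |f(n)|, whose closed form is the
# variation of parameters bound of the form used in (6.3.35).
beta = 0.25                          # plays the role of 1 - a^{-b} in (0, 1)
f = lambda n: 1.0 / (n + 1) ** 2     # hypothetical forcing sequence
V0, N = 2.0, 50

# iterate the worst-case recurrence
V = V0
iterates = [V]
for n in range(N):
    V = (1 - beta) * V + abs(f(n))
    iterates.append(V)

# closed form: (1-beta)^n V0 + sum_{s=0}^{n-1} (1-beta)^{n-s-1} |f(s)|
def closed_form(n):
    return (1 - beta) ** n * V0 + sum(
        (1 - beta) ** (n - s - 1) * abs(f(s)) for s in range(n))

assert all(abs(iterates[n] - closed_form(n)) < 1e-9 for n in range(N + 1))
print(iterates[N])   # small: the geometric factor dominates
```

The same closed form, with n0 in place of 0 and V(n0, ϕ) as the initial value, is exactly the first inequality in (6.3.35).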

6.4 z-Transform and Lyapunov Functionals

Next we use a combination of Lyapunov functionals and the z-transform to obtain boundedness and stability results for Equation (6.3.23).

We assume the existence of a sequence φ(n) such that

$$\displaystyle{ \varphi (n) \geq 0,\;\bigtriangleup \varphi (n) \leq 0,\ \mathrm{and}\ \sum _{n=0}^{\infty }\varphi (n) <\infty. }$$
(6.4.1)

Lemma 6.5.

Assume  (6.4.1) holds, and suppose that

$$\displaystyle{ H(n) =\beta (n) +\lambda \sum _{ s=0}^{n-1}\varphi (n - s - 1)\vert \beta (s)\vert, }$$
(6.4.2)
$$\displaystyle{ \bigtriangleup H(n) = -\alpha \beta (n),\ \ \ \beta (0) = 1, }$$
(6.4.3)

where β(n) and H(n) are scalar sequences, then

$$\displaystyle{ \beta (n) +\sum _{ s=0}^{n-1}\{\alpha +\lambda \varphi (n - s - 1)\}\beta (s) = 1\ \mathit{\mbox{ for all $n = 1,2,3,\ldots $}}, }$$
(6.4.4)
$$\displaystyle{ \beta (n)> 0\ \ \ \ \mathit{\mbox{ for all $n = 1,2,3,\ldots $}}, }$$
(6.4.5)
$$\displaystyle{ \sum _{n=0}^{\infty }\beta (n) <\infty, }$$
(6.4.6)

and

$$\displaystyle{ \tilde{\beta }(z) =\Bigg [1 + \frac{\alpha } {z - 1} +\lambda \frac{\tilde{\varphi }(z)} {z}\Bigg]^{-1}\Bigg( \frac{z} {z - 1}\Bigg), }$$
(6.4.7)

where \(\tilde{\beta }(z)\) , \(\tilde{\varphi }(z)\) are Z-transforms of β and φ.

Proof.

By (6.4.3) we obtain

$$\displaystyle{ H(n) = H(0) -\alpha \sum _{s=0}^{n-1}\beta (s), }$$

and hence

$$\displaystyle\begin{array}{rcl} H(n)& =& H(0) -\alpha \sum _{s=0}^{n-1}\beta (s) {}\\ & =& \beta (n) +\sum _{ s=0}^{n-1}\{\alpha +\lambda \varphi (n - s - 1)\}\beta (s). {}\\ \end{array}$$

Since β(0) = H(0), we have (6.4.4). The proof of (6.4.5) follows by an induction argument on (6.4.4) and by noting that the summation term is positive and β(0) = 1. For the proof of (6.4.6) we sum (6.4.3) from s = 0 to s = n − 1 and get

$$\displaystyle{ \alpha \sum _{s=0}^{n-1}\beta (s) = H(0) - H(n). }$$

Since \(\beta (n)> 0\ \forall n \geq 0,\) we have that H is monotonically decreasing. Therefore

$$\displaystyle{ 0 <\sum _{ s=0}^{n-1}\beta (s) <\frac{H(0)} {\alpha } \qquad \mbox{ for every $n$}, }$$

which proves (6.4.6). It remains to prove (6.4.7). The z-transforms of φ and β exist for some | z | > d, where d > 0. Therefore, replacing n by n + 1 in equation (6.4.4) gives

$$\displaystyle{ \beta (n + 1) +\sum _{ s=0}^{n}\{\alpha +\lambda \varphi (n - s)\}\beta (s) = 1. }$$

Taking the z-transform of both sides and using the fact that β(0) = 1 give

$$\displaystyle{ z\tilde{\beta }(z) - z\beta (0) +\alpha \frac{z} {z - 1}\tilde{\beta }(z) +\lambda \tilde{\varphi } (z)\tilde{\beta }(z) = \frac{z} {z - 1}, }$$

or

$$\displaystyle{ \Bigg\{z +\alpha \frac{z} {z - 1} +\lambda \tilde{\varphi } (z)\Bigg\}\tilde{\beta }(z) = \frac{z^{2}} {z - 1}. }$$

Since z ≠ 0, we can divide through by z and get

$$\displaystyle{ \Bigg\{1 +\alpha \frac{1} {z - 1} + \frac{\lambda \tilde{\varphi }(z)} {z} \Bigg\}\tilde{\beta }(z) = \frac{z} {z - 1}. }$$

Finally solving for \(\tilde{\beta }(z)\) gives

$$\displaystyle{ \tilde{\beta }(z) =\Bigg\{ 1 +\alpha \frac{1} {z - 1} + \frac{\lambda \tilde{\varphi }(z)} {z} \Bigg\}^{-1}\Bigg( \frac{z} {z - 1}\Bigg), }$$

which proves (6.4.7).
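Identity (6.4.4) determines β(n) recursively, so the lemma can be checked numerically. The sketch below uses the hypothetical values α = 1/2, λ = 1/4 and φ(n) = 2^{−(n+1)}, chosen only so that (6.4.1) holds, and verifies the positivity (6.4.5) together with the bound Σβ(s) < H(0)∕α obtained in the proof:

```python
# Numerical check of Lemma 6.5 under hypothetical parameters: build beta(n)
# from the renewal-type identity (6.4.4),
#   beta(n) = 1 - sum_{s=0}^{n-1} (alpha + lam*phi(n-s-1)) * beta(s),
# and verify beta(n) > 0 and that the partial sums stay below H(0)/alpha.
alpha, lam = 0.5, 0.25
phi = lambda n: 2.0 ** (-(n + 1))   # phi >= 0, decreasing, summable

N = 60
beta = [1.0]                        # beta(0) = 1
for n in range(1, N):
    beta.append(1.0 - sum((alpha + lam * phi(n - s - 1)) * beta[s]
                          for s in range(n)))

assert all(b > 0 for b in beta)     # (6.4.5)
assert sum(beta) < 1.0 / alpha      # partial sums below H(0)/alpha, cf. (6.4.6)
print(round(sum(beta), 4))
```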

Theorem 6.4.1.

Assume the hypothesis of Lemma  6.5 . Assume there is λ > 0 such that

$$\displaystyle{ \lambda \bigtriangleup \varphi (n - s - 1) +\lambda _{2}\vert C(n,s)\vert \leq 0\qquad \mathrm{for}\ 0 \leq s <n\ \mathrm{and}\ n \in \mathbb{Z}^{+}, }$$
(6.4.8)

and

$$\displaystyle{ \lambda _{1} +\lambda _{2}\vert C(n,n)\vert +\lambda \varphi (0) \leq 1-\alpha, }$$
(6.4.9)

where α ∈ (0, 1). Then every solution y(n) of  (6.3.23) is uniformly bounded and

$$\displaystyle{ \lim _{n\rightarrow \infty }\sup \vert y(n)\vert \leq \frac{M} {\alpha }. }$$

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ \qquad V (n) \equiv \vert y(n)\vert +\lambda \sum _{ s=0}^{n-1}\varphi (n - s - 1)\vert y(s)\vert,\qquad n \geq 0. }$$
(6.4.10)

Then, using (6.4.10) we have along the solutions of (6.3.23) that

$$\displaystyle{ \bigtriangleup V (n) = \vert y(n + 1)\vert +\lambda \sum _{ s=0}^{n}\varphi (n - s)\vert y(s)\vert -\vert y(n)\vert -\lambda \sum _{ s=0}^{n-1}\varphi (n - s - 1)\vert y(s)\vert, }$$

which simplifies to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& =& \Big\{\vert y(n + 1)\vert -\vert y(n)\vert \Big\} {}\\ & +& \lambda \Bigg\{\sum _{s=0}^{n}\varphi (n - s)\vert y(s)\vert -\sum _{ s=0}^{n-1}\varphi (n - s - 1)\vert y(s)\vert \Bigg\} {}\\ & =& \Big\{\vert y(n + 1)\vert -\vert y(n)\vert \Big\} {}\\ & +& \lambda \Bigg\{\sum _{s=0}^{n-1}\bigtriangleup _{ n}\varphi (n - s - 1)\vert y(s)\vert +\varphi (0)\vert y(n)\vert \Bigg\}. {}\\ \end{array}$$

Along the solutions of (6.3.23) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \Big\{\vert f(y(n))\vert + \vert g(n)\vert +\sum _{ s=0}^{n}\vert C(n,s)\vert \vert h(y(s))\vert -\vert y(n)\vert \Big\} {}\\ & +& \lambda \Big\{\sum _{s=0}^{n-1}\bigtriangleup _{ n}\varphi (n - s - 1)\vert y(s)\vert +\varphi (0)\vert y(n)\vert \Big\} {}\\ &\leq & \Big\{\lambda _{1}\vert y(n)\vert + M +\lambda _{2}\sum _{s=0}^{n}\vert C(n,s)\vert \vert y(s)\vert -\vert y(n)\vert \Big\} {}\\ & +& \lambda \Big\{\sum _{s=0}^{n-1}\bigtriangleup _{ n}\varphi (n - s - 1)\vert y(s)\vert +\varphi (0)\vert y(n)\vert \Big\}. {}\\ \end{array}$$

After some algebra, we arrive at the simplified expression,

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \Big\{[\lambda _{1} +\lambda _{2}\vert C(n,n)\vert - 1 +\lambda \varphi (0)]\Big\}\vert y(n)\vert + M {}\\ & +& \Big\{\sum _{s=0}^{n-1}[\lambda _{ 2}\vert C(n,s)\vert +\lambda \bigtriangleup _{n}\varphi (n - s - 1)]\vert y(s)\vert \Big\}. {}\\ \end{array}$$

Using (6.4.8) and (6.4.9) we arrive at

$$\displaystyle{ \bigtriangleup V (n) \leq -\alpha \vert y(n)\vert + M,\qquad M> 0. }$$
(6.4.11)

Due to (6.4.11), there is a nonnegative sequence \(\eta (n): \mathbb{Z}^{+} \rightarrow \mathbb{R}\) such that

$$\displaystyle{ \bigtriangleup V (n) = -\alpha \vert y(n)\vert + M -\eta (n). }$$

Since η is a linear combination of functions of exponential order, η is also of exponential order, and so we can take the z-transform of both sides to obtain

$$\displaystyle{ z\tilde{V }(z) - zV (0) -\tilde{ V }(z) = -\alpha \vert \tilde{y}(z)\vert + M \frac{z} {z - 1} -\tilde{\eta } (z). }$$

We solve for \(\tilde{V }\) and get

$$\displaystyle{ \tilde{V }(z) = \frac{z} {z - 1}V (0) - \frac{\alpha } {z - 1}\vert \tilde{y}(z)\vert + M \frac{z} {(z - 1)^{2}} - \frac{\tilde{\eta }(z)} {z - 1}. }$$

To derive the second expression for \(\tilde{V }\), use the fact that

$$\displaystyle{ Z\Bigg[\sum _{s=0}^{n-1}x(n - s - 1)g(s)\Bigg] = \frac{1} {z}\tilde{x}(z)\tilde{g}(z). }$$

Taking the Z-transform in (6.4.10) we arrive at

$$\displaystyle{ \tilde{V }(z) =\Bigg\{ 1 +\lambda \frac{\tilde{\varphi }(z)} {z} \Bigg\}\vert \tilde{y}(z)\vert. }$$

Substituting it into

$$\displaystyle{ \tilde{V }(z) = \frac{z} {z - 1}V (0) - \frac{\alpha } {z - 1}\vert \tilde{y}(z)\vert + M \frac{z} {(z - 1)^{2}} - \frac{\tilde{\eta }(z)} {z - 1} }$$

gives

$$\displaystyle{ \frac{z} {z - 1}V (0) - \frac{\alpha } {z - 1}\vert \tilde{y}(z)\vert + M \frac{z} {(z - 1)^{2}} - \frac{\tilde{\eta }(z)} {z - 1} = \vert \tilde{y}(z)\vert +\lambda \frac{\tilde{\varphi }(z)\vert \tilde{y}(z)\vert } {z}. }$$

Solving for \(\vert \tilde{y}\vert\) gives

$$\displaystyle\begin{array}{rcl} \vert \tilde{y}(z)\vert & =& \Bigg[M \frac{1} {(z - 1)} + V (0) -\frac{\tilde{\eta }(z)} {z} \Bigg]\Bigg\{ \frac{\alpha } {z - 1} + 1 +\lambda \frac{\tilde{\varphi }(z)} {z} \Bigg\}^{-1}\Bigg( \frac{z} {z - 1}\Bigg) \\ & =& \Bigg[M \frac{1} {(z - 1)} + V (0) -\frac{\tilde{\eta }(z)} {z} \Bigg]\tilde{\beta }(z) \\ & =& \Bigg[M \frac{1} {(z - 1)}\tilde{\beta }(z) + V (0)\tilde{\beta }(z) -\frac{\tilde{\eta }(z)} {z} \tilde{\beta }(z)\Bigg]. {}\end{array}$$
(6.4.12)

Taking the z inverse in (6.4.12) gives

$$\displaystyle\begin{array}{rcl} \vert y(n)\vert & =& V (0)\beta (n) - M\beta (n) + M\sum _{s=0}^{n}\beta (s) -\sum _{ s=0}^{n-1}\eta (n - s - 1)\beta (s) {}\\ & =& V (0)\beta (n) + M\sum _{s=0}^{n-1}\beta (s) -\sum _{ s=0}^{n-1}\eta (n - s - 1)\beta (s) {}\\ & \leq & V (0)\beta (n) + M\sum _{s=0}^{n-1}\beta (s). {}\\ \end{array}$$

Since β(n) is bounded, there exists a positive constant R such that β(n) ≤ R for all n ≥ 0. Hence, the above inequality gives

$$\displaystyle{ \vert y(n)\vert \leq V (0)R + \frac{M} {\alpha }. }$$

This shows that all solutions y(n) of (6.3.23) are uniformly bounded .

Note that since \(\sum _{n=0}^{\infty }\beta (n) <\infty \), we have that β(n) → 0 as n → ∞, and hence

$$\displaystyle{ \lim _{n\rightarrow \infty }\sup \vert y(n)\vert \leq M\sum _{n=0}^{\infty }\beta (n) \leq \frac{M} {\alpha }. }$$

This completes the proof.

We end the section with the following example.

Example 6.9.

Let

$$\displaystyle{ f(y(n)):= \left ( \frac{1} {16\sqrt{2}}\right )\left [\begin{array}{c} \frac{\vert y_{1}(n)\vert } {1+\vert y_{1}(n)\vert } \\ \frac{\vert y_{2}(n)\vert } {1+\vert y_{2}(n)\vert } \end{array} \right ] }$$
$$\displaystyle{ h(y(n)):= \left (\begin{array}{c} y_{1}(n)\\ 0 \end{array} \right ) }$$
$$\displaystyle{ C(n,s):= \frac{1} {2^{(n+4-s)}}\left [\begin{array}{cc} 1& 0\\ 0 & \frac{1} {2} \end{array} \right ] }$$

and

$$\displaystyle{ g(n):=\cos \left (\frac{n\pi } {4} \right )\left [\begin{array}{c} 1\\ 0 \end{array} \right ]. }$$

Then we have \(\vert f(y(n))\vert \leq \frac{1} {16}\vert y(n)\vert,\;\vert h(y(n))\vert \leq \vert y(n)\vert,\;\vert C(n,s)\vert \leq \frac{1} {2^{(n+4-s)}},\;\mbox{ and}\) | g(n) | ≤ 1. Let

$$\displaystyle{ \phi (n) = \frac{1} {2^{(n+3)}} }$$

and define the Lyapunov functional V by

$$\displaystyle{ V (n) =\vert y(n)\vert +\lambda \sum _{ s=0}^{n-1}\phi (n - s - 1)\vert y(s)\vert. }$$

Then we have \(\lambda = \frac{1} {2},\;\lambda _{1} = \frac{1} {16},\;\mbox{ and}\;\lambda _{2} = 1.\) All conditions of Theorem 6.4.1 are satisfied since △ϕ(n) ≤ 0 and \(\lambda \bigtriangleup _{n}\phi (n - s - 1) +\lambda _{2}\vert C(n,s)\vert \leq 0\). Moreover, condition (6.4.9) is satisfied for \(\alpha = \frac{13} {16}.\) Thus, by Theorem 6.4.1 all solutions are uniformly bounded and satisfy

$$\displaystyle{ \lim _{n\rightarrow \infty }\sup \vert y(n)\vert \leq \frac{M} {\alpha } = \frac{16} {13}. }$$
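The conclusion of Example 6.9 can be checked by direct simulation. The sketch below assumes, based on the estimates used in the proof of Theorem 6.4.1, that (6.3.23) has the form y(n + 1) = f(y(n)) + Σ_{s=0}^{n} C(n, s)h(y(s)) + g(n) and that | ⋅ | is the l1 norm on ℝ²; both are assumptions for illustration, since the equation itself is stated earlier in the chapter:

```python
import math

# Simulate a hypothetical form of (6.3.23) with the data of Example 6.9:
#   y(n+1) = f(y(n)) + sum_{s=0}^{n} C(n,s) h(y(s)) + g(n), l1 norm assumed.
def f(y):
    c = 1.0 / (16.0 * math.sqrt(2.0))
    return [c * abs(y[0]) / (1 + abs(y[0])), c * abs(y[1]) / (1 + abs(y[1]))]

def h(y):
    return [y[0], 0.0]

def C(n, s):          # 2x2 diagonal kernel, returned as its diagonal
    w = 1.0 / 2.0 ** (n + 4 - s)
    return [w, w / 2.0]

def g(n):
    return [math.cos(n * math.pi / 4.0), 0.0]

norm = lambda y: abs(y[0]) + abs(y[1])

y = [(0.5, 0.5)]
for n in range(200):
    conv = [0.0, 0.0]
    for s in range(n + 1):
        d, hy = C(n, s), h(y[s])
        conv[0] += d[0] * hy[0]
        conv[1] += d[1] * hy[1]
    fy, gn = f(y[n]), g(n)
    y.append((fy[0] + conv[0] + gn[0], fy[1] + conv[1] + gn[1]))

tail = max(norm(v) for v in y[20:])
assert tail <= 16.0 / 13.0      # the bound lim sup |y(n)| <= M/alpha
print(round(tail, 3))
```

In this run the forcing g oscillates with period 8, and the tail of the trajectory indeed stays below the theoretical ceiling 16∕13.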

6.5 lp-Stability

In this section we state the definition of lp-stability and prove theorems under which it occurs. We begin by considering the nonautonomous nonlinear discrete system

$$\displaystyle{ x(n + 1) = G(n,x(s);\ 0 \leq s \leq n)\mathop{ =}\limits^{ def}G(n,x(\cdot )) }$$
(6.5.1)

where \(G: \mathbb{Z}^{+} \times \mathbb{R}^{k} \rightarrow \mathbb{R}^{k}\) is continuous in x and G(n, 0) = 0. Let C(n) denote the set of functions \(\phi: [0,n] \rightarrow \mathbb{R}^{k}\) and ∥ϕ∥ = sup{ | ϕ(s) |: 0 ≤ s ≤ n}.

We say that x(n) = x(n, n0, ϕ) is a solution of (6.5.1) with a bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) if it satisfies (6.5.1) for n > n0 and x(j) = ϕ(j) for 0 ≤ j ≤ n0. 

Definition 6.5.1.

The zero solution of (6.5.1) is stable (S) if for each ɛ > 0, there is a δ = δ(n0, ɛ) > 0 such that [n0 ≥ 0, ϕ ∈ C(n0),  ∥ϕ∥ < δ] imply | x(n, n0, ϕ) | < ɛ for all n ≥ n0. It is uniformly stable (US) if it is stable and δ is independent of n0. It is asymptotically stable (AS) if it is (S) and | x(n, n0, ϕ) | → 0 as n → ∞. 

Definition 6.5.2.

The zero solution of system (6.5.1) is said to be lp-stable if it is stable and if \(\sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},\phi )\vert \vert ^{p} <\infty\) for some positive constant p. 

We have the following elementary theorem.

Theorem 6.5.1.

If the zero solution of  (6.5.1) is exponentially stable, then it is also lp -stable.

Proof.

Since the zero solution of (6.5.1) is exponentially stable, we have by the above definition that

$$\displaystyle\begin{array}{rcl} \sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},\phi )\vert \vert ^{p}& \leq & [C\big(\vert \vert \phi \vert \vert,n_{0}\big)]^{p}\sum _{ n=n_{0}}^{\infty }a^{p\eta (n-n_{0})} {}\\ & =& [C\big(\vert \vert \phi \vert \vert,n_{0}\big)]^{p}a^{-n_{0}p\eta }\sum _{ n=n_{0}}^{\infty }a^{p\eta n} {}\\ & =& [C\big(\vert \vert \phi \vert \vert,n_{0}\big)]^{p}/(1 - a^{p\eta }), {}\\ \end{array}$$

which is finite. This completes the proof.

We caution that lp-stability is not uniform with respect to p, as the next example shows. Also, it shows that (AS) does not imply lp-stability for all p. In Chapter 1, we considered the difference equation

$$\displaystyle{ x(n + 1) = \frac{n} {n + 1}x(n),\;\,x(n_{0}) = x_{0}\neq 0,\;\;n_{0} \geq 1 }$$

and showed its solution is given by

$$\displaystyle{ x(n):= x(n,n_{0},x_{0}) = \frac{x_{0}n_{0}} {n}. }$$

Clearly the zero solution is (US) and (AS). However, for n0 = n, we have

$$\displaystyle{ x(2n,n,x_{0}) = \frac{x_{0}n} {2n} \rightarrow \frac{x_{0}} {2} \neq 0 }$$

which implies that the zero solution is not (UAS). Moreover,

$$\displaystyle{ \sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},x_{0})\vert \vert ^{p} \leq \sum _{ n=n_{0}}^{\infty }\vert (\frac{x_{0}n_{0}} {n} )\vert ^{p} = \vert x_{ 0}\vert ^{p}(n_{ 0})^{p}\sum _{ n=n_{0}}^{\infty }( \frac{1} {n})^{p}, }$$

which diverges for 0 < p ≤ 1 and converges for p > 1. 
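A quick numerical check of this dichotomy, for the illustrative choice x0 = n0 = 1: the partial sums for p = 1 keep growing between two horizons (harmonic behavior), while for p = 2 the increment between the same horizons is negligible:

```python
# Partial sums of sum_{n >= n0} |x0*n0/n|^p for the solution x(n) = x0*n0/n:
# numerically, p = 1 keeps growing while p = 2 levels off.
x0, n0 = 1.0, 1

def partial(p, N):
    return sum(abs(x0 * n0 / n) ** p for n in range(n0, N))

growth_p1 = partial(1, 200000) - partial(1, 100000)   # roughly log 2
growth_p2 = partial(2, 200000) - partial(2, 100000)   # tiny tail of sum 1/n^2

assert growth_p1 > 0.5     # p = 1: partial sums still increasing noticeably
assert growth_p2 < 1e-4    # p = 2: the series has essentially converged
print(round(growth_p1, 3), growth_p2 < 1e-4)
```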

The next example shows that asymptotic stability does not necessarily imply lp-stability for any p > 0. Let \(g: [0,\infty ) \rightarrow (0,\infty )\) with \(\lim _{n\rightarrow \infty }g(n) = \infty \). Consider the nonautonomous difference equation

$$\displaystyle{ x(n + 1) =\big [g(n)/g(n + 1)\big]x(n),\;x(n_{0}) = x_{0}, }$$
(6.5.2)

which has the solution \(x(n,n_{0},x_{0}) = \frac{g(n_{0})} {g(n)} x_{0}\). It is obvious that the solution tends to zero as n → ∞ for fixed initial time n0, so the zero solution is indeed asymptotically stable. On the other hand

$$\displaystyle{ \sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},x_{0})\vert \vert ^{p} = [g(n_{ 0})x_{0}]^{p}\sum _{ n=n_{0}}^{\infty }\Big( \frac{1} {g(n)}\Big)^{p}, }$$
(6.5.3)

which may not converge for any p > 0. For example, if we take

$$\displaystyle{ g(n) =\log (n + 2), }$$

then from (6.5.3) we have

$$\displaystyle{ \sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},x_{0})\vert \vert ^{p} = [\log (n_{ 0} + 2)]^{p}\vert \vert x_{ 0}\vert \vert ^{p}\sum _{ n=n_{0}}^{\infty }\Big( \frac{1} {\log (n + 2)}\Big)^{p}, }$$

which is known to diverge for all p > 0. 

The next theorem relates lp-stability to Lyapunov functionals.

Theorem 6.5.2.

If there exists a positive definite V (see Definition  1.2.1 ) and along the solutions of  (6.5.1), V satisfies \(\bigtriangleup V \leq -c\vert \vert x\vert \vert ^{p},\) for some positive constants c and p, then the zero solution of  (6.5.1) is lp -stable.

Proof.

Set the solution x(n): = x(n, n0, ϕ). The hypothesis of the theorem implies the zero solution is stable. Thus, for nn0 there is a positive constant M such that | | x(n, n0, ϕ) | | ≤ M. For nn0 we set

$$\displaystyle{ L(n) = V (n,x(n)) + c\sum _{s=n_{0}}^{n-1}\vert \vert x(s)\vert \vert ^{p}. }$$

Then for all nn0 we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup L(n)& =& \bigtriangleup V (n,x) + c\vert \vert x\vert \vert ^{p} {}\\ & \leq & -c\vert \vert x\vert \vert ^{p} + c\vert \vert x\vert \vert ^{p} = 0. {}\\ \end{array}$$

Therefore, L(n) is decreasing and hence \(0 \leq L(n) \leq L(n_{0}) = V (n_{0},\phi ),\;n \geq n_{0}.\) This implies that \(0 \leq L(n) = V (n,x) + c\sum _{s=n_{0}}^{n-1}\vert \vert x(s)\vert \vert ^{p} \leq V (n_{ 0},\phi ),\;n \geq n_{0}\) so that

$$\displaystyle{ 0 \leq V (n,x) \leq -c\sum _{s=n_{0}}^{n-1}\vert \vert x(s)\vert \vert ^{p} + V (n_{ 0},\phi ). }$$

As a consequence,

$$\displaystyle{ \sum _{s=n_{0}}^{n-1}\vert \vert x(s,n_{ 0},\phi )\vert \vert ^{p} \leq V (n_{ 0},\phi )/c,\;n \geq n_{0}. }$$

Letting n on both sides of the above inequality gives

$$\displaystyle{ \sum _{n=n_{0}}^{\infty }\vert \vert x(n,n_{ 0},\phi )\vert \vert ^{p} \leq V (n_{ 0},\phi )/c <\infty. }$$

This completes the proof.

In the next two examples we show that lp-stability depends on the type of Lyapunov functional that is being used. Moreover, there is a price to pay if one wants to obtain lp-stability for higher values of p. 

Example 6.10.

Consider the scalar Volterra difference equation

$$\displaystyle{ x(n + 1) = a(n)x(n) +\sum _{ s=0}^{n-1}b(n,s)f(s,x(s)) }$$
(6.5.4)

with f continuous and satisfying \(\vert f(n,x)\vert \leq \lambda _{1}\vert x\vert \) for some constant λ1. Assume there exists a positive constant α such that

$$\displaystyle{ \vert a(n)\vert +\lambda \sum _{ s=n+1}^{\infty }\vert b(s,n)\vert +\lambda _{ 1}\vert b(n,n)\vert - 1 \leq -\alpha, }$$
(6.5.5)

and for some positive constant λ which is to be specified later, we have

$$\displaystyle{ \lambda _{1} \leq \lambda, }$$
(6.5.6)

then the zero solution of (6.5.4) is l1-stable.

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ V (n,x) = \vert x(n)\vert +\lambda \sum _{ j=0}^{n-1}\sum _{ s=n}^{\infty }\vert b(s,j)\vert \vert x(j)\vert. }$$

We have along the solutions of (6.5.4) that

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \big(\vert a(n)\vert +\lambda \sum _{ s=n+1}^{\infty }\vert b(s,n)\vert +\lambda _{ 1}\vert b(n,n)\vert - 1\big)\vert x(n)\vert {}\\ & & +(\lambda _{1}-\lambda )\sum _{s=0}^{n-1}\vert b(n,s)\vert \vert x(s)\vert {}\\ &\leq & -\alpha \vert x(n)\vert. {}\\ \end{array}$$

This implies the zero solution is stable and l1-stable by Theorem 6.5.2. This completes the proof.
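A concrete instance of Example 6.10 can be simulated directly. The coefficients below are hypothetical, chosen so that (6.5.5) and (6.5.6) hold; Theorem 6.5.2 then predicts Σ|x(n)| ≤ V(0)∕α = |x0|∕α:

```python
# l1-stability check for the scalar Volterra equation (6.5.4) under
# hypothetical coefficients satisfying (6.5.5)-(6.5.6):
#   a(n) = 1/2, b(n,s) = 4^{-(n-s)-1}, f(n,x) = x (so lambda_1 = lambda = 1).
# Then |a| + sum_{s>n}|b(s,n)| + |b(n,n)| - 1 = 1/2 + 1/12 + 1/4 - 1 = -1/6,
# so (6.5.5) holds with alpha = 1/6 and sum |x(n)| should be <= |x0|/alpha = 6.
a = lambda n: 0.5
b = lambda n, s: 4.0 ** (-(n - s) - 1)
f = lambda n, x: x

x = [1.0]
for n in range(300):
    x.append(a(n) * x[n] + sum(b(n, s) * f(s, x[s]) for s in range(n)))

total = sum(abs(v) for v in x)
assert total < 6.0            # the V(0)/alpha bound from Theorem 6.5.2
assert abs(x[-1]) < 1e-30     # the solution itself decays
print(round(total, 4))
```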

Example 6.11.

Consider (6.5.4) and assume f is continuous with \(\vert f(n,x)\vert \leq \lambda _{1}\vert x\vert \). Assume there exists a positive constant α such that

$$\displaystyle{ a^{2}(n) +\lambda \sum _{ s=n+1}^{\infty }\vert b(s,n)\vert +\lambda _{ 1}\vert a(n)\vert \sum _{s=0}^{n}\vert b(n,s)\vert - 1 \leq -\alpha, }$$
(6.5.7)

and for some positive constant λ which is to be specified later, we have

$$\displaystyle{ \lambda _{1}\vert a(n)\vert +\lambda _{ 1}^{2}\sum _{ s=0}^{n-1}\vert b(n,s)\vert -\lambda \leq 0. }$$
(6.5.8)

Then the zero solution of (6.5.4) is l2-stable .

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ V (n,x) = x^{2}(n) +\lambda \sum _{ j=0}^{n-1}\sum _{ s=n}^{\infty }\vert b(s,j)\vert x^{2}(j). }$$

We have along the solutions of (6.5.4) that

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& =& \big(a(n)x(n) +\sum _{ s=0}^{n-1}b(n,s)f(s,x(s))\big)^{2} - x^{2}(n) {}\\ & +& \lambda x^{2}(n)\sum _{ s=n+1}^{\infty }\vert b(s,n)\vert -\lambda \sum _{ s=0}^{n-1}\vert b(n,s)\vert x^{2}(s) {}\\ & \leq & a^{2}(n)x^{2}(n) + 2\lambda _{ 1}\vert a(n)\vert \vert x(n)\vert \sum _{s=0}^{n-1}\vert b(n,s)\vert \vert x(s)\vert +\big (\sum _{ s=0}^{n-1}b(n,s)f(s,x(s))\big)^{2} {}\\ & +& \lambda x^{2}(n)\sum _{ s=n+1}^{\infty }\vert b(s,n)\vert -\lambda \sum _{ s=0}^{n-1}\vert b(n,s)\vert x^{2}(s) - x^{2}(n). {}\\ \end{array}$$

As a consequence of the inequality \(2zw \leq z^{2} + w^{2}\), valid for any real numbers z and w, we have

$$\displaystyle{ 2\lambda _{1}\vert a(n)\vert \vert x(n)\vert \sum _{s=0}^{n-1}\vert b(n,s)\vert \vert x(s)\vert \leq \lambda _{ 1}\vert a(n)\vert \sum _{s=0}^{n-1}\vert b(n,s)\vert (x^{2}(n) + x^{2}(s)). }$$

Also, using the Schwarz inequality we obtain

$$\displaystyle\begin{array}{rcl} \big(\sum _{s=0}^{n-1}b(n,s)f(s,x(s))\big)^{2}& \leq & \big(\sum _{ s=0}^{n-1}\vert b(n,s)\vert ^{1/2}\vert b(n,s)\vert ^{1/2}\vert f(s,x(s))\vert \big)^{2} {}\\ &\leq & \sum _{s=0}^{n-1}\vert b(n,s)\vert \sum _{ s=0}^{n-1}\vert b(n,s)\vert f^{2}(s,x(s)) {}\\ & \leq & \lambda _{1}^{2}\sum _{ s=0}^{n-1}\vert b(n,s)\vert \sum _{ s=0}^{n-1}\vert b(n,s)\vert x^{2}(s). {}\\ \end{array}$$

Putting it all together, we get

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \Big(a^{2}(n) +\lambda \sum _{ s=n+1}^{\infty }\vert b(s,n)\vert +\lambda _{ 1}\vert a(n)\vert \sum _{s=0}^{n}\vert b(n,s)\vert - 1\Big)x^{2}(n) {}\\ & +& \Big(\lambda _{1}\vert a(n)\vert +\lambda _{ 1}^{2}\sum _{ s=0}^{n-1}\vert b(n,s)\vert -\lambda \Big)\sum _{ s=0}^{n-1}\vert b(n,s)\vert x^{2}(s) {}\\ & \leq & -\alpha x^{2}(n). {}\\ \end{array}$$

This implies the zero solution is stable and l2-stable by Theorem 6.5.2. This completes the proof.

A quick comparison of (6.5.5) with (6.5.7) and of (6.5.6) with (6.5.8) reveals that the conditions for l2-stability are more stringent than the conditions for l1-stability.

6.6 Discretization Scheme Preserving Stability and Boundedness

In Chapter 1, we briefly discussed the notion that Volterra discrete equations play a major role in numerical solutions of Volterra integro-differential equations. In this section we apply a nonstandard discretization scheme due to Mickens (see [119]) to a Volterra integro-differential equation, to form a Volterra discrete system. By displaying suitable Lyapunov functionals, one for the Volterra integro-differential equation and another for the Volterra discrete system, we will show that under the same conditions on some of the coefficients, the stability of the zero solution and the boundedness of solutions are preserved in both systems. This section is intended to give a brief introduction to the subject of discretization, although by no means should it be considered a complete study of the subject. The author is not claiming that the discretization scheme used here is the most general, nor that it is the most efficient. The sole purpose of this section is to introduce the reader to the effectiveness of Lyapunov functionals when dealing with preserving the qualitative behavior of solutions. However, this section should set the stage for future research in preserving the characteristics of Volterra integro-differential equations when nonstandard discretization schemes are used in obtaining the corresponding Volterra discrete systems. For a comprehensive treatment of the subject of nonstandard discretization we refer to [119] and [120]. For motivation, consider the differential equation

$$\displaystyle{ x'(t) = ax(t),\;\mbox{ for some constant}\;a <0, }$$
(6.6.1)

which has the solution \(x(t) = x(t_{0})e^{a(t-t_{0})}\), so that x(t) → 0 as t → ∞. 

On the other hand, if we consider the difference equation

$$\displaystyle{ x(t + 1) = ax(t),\;x(t_{0}) = x_{0}, }$$
(6.6.2)

then the unique solution of (6.6.2) is

$$\displaystyle{ x(t) = x_{0}a^{t-t_{0} } }$$

and

$$\displaystyle{ x(t) \rightarrow 0\,\mbox{ as}\;\;t \rightarrow \infty }$$

provided that | a | < 1. We see that the stability is not preserved. Applying the approximations

$$\displaystyle{ x'(t) = \frac{x(t + h) - x(t)} {h},\;x(t) = \frac{x(t + h) + x(t)} {2} }$$
(6.6.3)

to equation (6.6.1) we have the analogous discrete system

$$\displaystyle{ x(n + 1) = \frac{2 + ah} {2 - ah}x(n), }$$
(6.6.4)

where x(n + 1) = x(t + h) and x(n) = x(t). All solutions x(n) of (6.6.4) satisfy x(n) → 0 as n, provided that

$$\displaystyle{ \Big\vert \frac{2 + ah} {2 - ah}\Big\vert <1. }$$
(6.6.5)

Clearly, inequality (6.6.5) is satisfied for a < 0 and 0 < h < 1. Thus, we see that the discretization scheme defined by (6.6.3) preserved the stability of the zero solution. It is noted that the result holds under the restriction that the step size △t = h satisfies

$$\displaystyle{ 0 <h <1. }$$
(6.6.6)

Restriction (6.6.6) is a direct consequence of how we discretize equation (6.6.1). To ease the restriction given by (6.6.6), we use a nonstandard discretization scheme due to Mickens [122]; that is we let

$$\displaystyle{ x'(t) = \frac{x(t + h) - x(t)} {\varPhi (a,h)},\;\;\varPhi (a,h) = \frac{e^{ah} - 1} {a}. }$$
(6.6.7)

We note that this scheme holds for all h > 0. For more on the use of nonstandard discretization, we refer the reader to [119, 120, 121, 122]. Under discretization (6.6.7), equation (6.6.1) becomes

$$\displaystyle{ x(n + 1) = (1 + a\varPhi (a,h))x(n) = e^{ah}x(n). }$$
(6.6.8)

Since a < 0, we have that \(e^{ah} <1\), and hence all solutions of (6.6.8) go to zero asymptotically without any restriction on the step size h. Thus, we see that the discretization scheme defined by (6.6.7) preserved the stability of the zero solution.
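The two one-step amplification factors can be compared concretely. In the sketch below, with the illustrative values a = −3 and h = 2, the factor of (6.6.4) is negative, so its iterates alternate in sign, while Mickens' scheme (6.6.8) reproduces the exact decay factor e^{ah} for every h > 0:

```python
import math

# Compare the one-step factors of the two schemes for x' = a x with a < 0:
# the trapezoidal-type scheme (6.6.3)-(6.6.4) versus Mickens' nonstandard
# scheme (6.6.7)-(6.6.8), at a deliberately large step size h.
a, h = -3.0, 2.0
Phi = (math.exp(a * h) - 1.0) / a       # Mickens denominator function (6.6.7)

r_trap = (2 + a * h) / (2 - a * h)      # one-step factor of scheme (6.6.4)
r_mick = 1 + a * Phi                    # one-step factor of scheme (6.6.8)

assert r_trap < 0                       # sign-alternating iterates at large h
assert abs(r_mick - math.exp(a * h)) < 1e-12   # exact decay factor e^{ah}
print(round(r_trap, 3), round(r_mick, 6))
```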

Definition 6.6.1.

A resulting difference equation is said to be consistent with respect to property P under a given discretization scheme with its continuous counterpart if they both exhibit property P under equivalent conditions.

Based on Definition 6.6.1, we see that (6.6.4) is consistent with respect to asymptotic stability with (6.6.1) under discretization (6.6.3), provided that (6.6.6) holds. The same is true for (6.6.8) under discretization (6.6.7), but without any further restriction on the step size h. 

Next we discuss the stability, uniform asymptotic stability, and exponential stability of Volterra integro-differential equations and their corresponding discrete systems with respect to certain discretization schemes. Consider the scalar Volterra integro-differential equation

$$\displaystyle{ x'(t) = a\,x(t) +\int _{ 0}^{t}B(t,s)\,f(s,x(s))ds,\;t \geq 0. }$$
(6.6.9)

We assume f(t, x) is continuous in t and x and satisfies

$$\displaystyle{ \vert f(t,x)\vert \leq \gamma \,\vert x\vert, }$$
(6.6.10)

where γ is a positive constant. The kernel \(B: \mathbb{R}^{2} \rightarrow \mathbb{R}\) is continuous in both arguments. By considering the discretization scheme (6.6.3) for

$$\displaystyle{ x'(t) = ax(t) }$$

and by approximating the integral term with

$$\displaystyle{ \int _{0}^{t}B(t,s)f(s,x(s))\;ds \approx h\sum _{ s=0}^{t}B(t,s)f(s,x(s)), }$$
(6.6.11)

we arrive at the corresponding discrete Volterra equation,

$$\displaystyle{ x(n + 1) = \frac{2 + ah} {2 - ah}x(n) + \frac{2h^{2}} {2 - ah}\sum _{s=0}^{n}B(n,s)f(s,x(s)),\;n \geq 0, }$$
(6.6.12)

where x(n + 1) = x(t + h),  x(n) = x(t) and 0 < h < 1. Similarly by considering discretizations (6.6.7) and (6.6.11) we arrive at the corresponding discrete Volterra equation,

$$\displaystyle{ x(n + 1) = e^{ah}x(n) + h\varPhi (a,h)\sum _{ s=0}^{n}B(n,s)f(s,x(s)),\;n \geq 0. }$$
(6.6.13)

The study of Volterra discrete systems is important since they play a major role in the fields of numerical analysis, control theory, and computer science. Thus, finding a discretization scheme under which Equation (6.6.12) is consistent with (6.6.9) is important. Throughout this section it is assumed that the step size h satisfies 0 < h < 1. In preparation for the next theorem we make the following assumptions.

$$\displaystyle{ \vert B(t,s)\vert \;\mbox{ is monotonically decreasing in}\;t }$$
(6.6.14)

and there exists a constant α > 0 such that \(\forall t \geq 0\)

$$\displaystyle{ a +\gamma \int _{ t}^{\infty }\vert B(u,t)\vert du \leq -\alpha. }$$
(6.6.15)

Theorem 6.6.1.

Assume conditions  (6.6.14) and  (6.6.15) hold. Then  (6.6.13) is consistent with respect to uniform asymptotic stability under the discretization scheme  (6.6.7) and  (6.6.11) with its continuous counterpart  (6.6.9).

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ V (t) = \vert x(t)\vert +\gamma \int _{ 0}^{t}\int _{ t}^{\infty }\vert B(u,s)\vert \vert x(s)\vert du\;ds. }$$

Then by making use of (6.6.15), we have along the solutions (6.6.9) that

$$\displaystyle\begin{array}{rcl} V ^{{\prime}}(t)& =& \frac{x(t)} {\vert x(t)\vert }x'(t) +\gamma \int _{ t}^{\infty }\vert B(u,t)\vert \vert x(t)\vert du -\gamma \int _{ 0}^{t}\vert B(t,s)\vert \vert x(s)\vert ds {}\\ & \leq & a\vert x(t)\vert +\gamma \int _{ 0}^{t}\vert B(t,s)\vert \vert x(s)\vert ds {}\\ & +& \gamma \int _{t}^{\infty }\vert B(u,t)\vert \vert x(t)\vert du -\gamma \int _{ 0}^{t}\vert B(t,s)\vert \vert x(s)\vert ds {}\\ & \leq & \Big[a +\gamma \int _{ t}^{\infty }\vert B(u,t)\vert du\Big]\vert x(t)\vert {}\\ &\leq & -\alpha \vert x(t)\vert. {}\\ \end{array}$$

Then by Theorem 2.6.1 of [22], the zero solution of (6.6.9) is (UAS). Now we turn our attention to (6.6.13). Define V by

$$\displaystyle{ V (n) = \vert x(n)\vert +\gamma h\varPhi (a,h)\sum _{s=0}^{n-1}\sum _{ u=n}^{\infty }\vert B(u,s)\vert \vert x(s)\vert. }$$

It can be easily shown that along the solutions of (6.6.13)

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \Big[e^{ah} +\gamma h\varPhi (a,h)\sum _{ u=n}^{\infty }\vert B(u,n)\vert \;- 1\Big]\vert x(n)\vert. {}\\ \end{array}$$

Due to condition (6.6.15) there exists a positive constant β such that

$$\displaystyle{ \gamma \int _{t}^{\infty }\vert B(u,t)\vert du \leq \beta. }$$

We can choose h small enough so that the above inequality, combined with (6.6.14) and the fact that a < 0, implies that there exists a positive constant η such that

$$\displaystyle{ e^{ah} +\gamma h\varPhi (a,h)\sum _{ u=n}^{\infty }\vert B(u,n)\vert \;- 1 \leq -\eta. }$$

Therefore,

$$\displaystyle{ \bigtriangleup V (n) \leq -\eta \vert x(n)\vert. }$$

By setting α = 0 in Theorem 2.2.4 we have the zero solution of (6.6.13) is (UAS). The proof is complete.
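The contraction estimate in the proof can be observed on a concrete instance of (6.6.13). The data below are hypothetical: a = −1, B(t, s) = e^{−2(t−s)}, f(s, x) = x∕2 (so γ = 1∕2 and (6.6.15) holds with α = 3∕4), and the kernel is evaluated on the grid t = nh, which is an assumption about how B enters the discrete equation:

```python
import math

# Simulate the nonstandard-discretized Volterra equation (6.6.13) under
# hypothetical data satisfying (6.6.10) and (6.6.14)-(6.6.15):
#   a = -1, B(t,s) = exp(-2(t-s)), f(s,x) = 0.5*x (gamma = 0.5), grid t = n*h.
a, h, gamma = -1.0, 0.5, 0.5
Phi = (math.exp(a * h) - 1.0) / a           # Mickens denominator (6.6.7)
B = lambda n, s: math.exp(-2.0 * (n - s) * h)
f = lambda s, x: gamma * x

x = [1.0]
for n in range(200):
    conv = sum(B(n, s) * f(s, x[s]) for s in range(n + 1))
    x.append(math.exp(a * h) * x[n] + h * Phi * conv)

assert abs(x[-1]) < 1e-6            # decay toward zero, as the theorem predicts
assert abs(x[-1]) < abs(x[10])      # monotone trend over the run
print(abs(x[-1]) < 1e-6)
```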

In Theorem 6.6.1 we showed that the discretization scheme given by (6.6.7) and (6.6.11) preserved the uniform asymptotic stability of the zero solutions of Equations (6.6.9) and (6.6.13). In the next theorem we will show that the same scheme preserves the exponential stability of the zero solutions of Equations (6.6.9) and (6.6.13) under more stringent conditions on the kernel B(t, s). For the next theorem we make the following assumptions.

$$\displaystyle{ \vert B(t,s)\vert \;\mbox{ is monotonically decreasing in}\;t\;\mbox{ and}\;s. }$$
(6.6.16)

Suppose there exist constants k > 1 and α > 0 such that

$$\displaystyle{ a +\gamma \; k\int _{t}^{\infty }\vert B(u,t)\vert du \leq -\alpha <0 }$$
(6.6.17)

where k = 1 + ε for some ε > 0. Suppose

$$\displaystyle{ \vert B(t,s)\vert \geq \lambda \int _{t}^{\infty }\vert B(u,s)\vert du }$$
(6.6.18)

where \(\lambda \geq \frac{k\alpha } {\epsilon }> 0\), 0 ≤ s < tu < , and

$$\displaystyle{ \gamma \int _{0}^{t_{0} }\int _{t_{0}}^{\infty }\vert B(u,s)\vert du\;ds \leq \rho <\infty \;\ \mbox{ for all }t_{ 0} \geq 0. }$$
(6.6.19)

Remark 6.4.

Due to conditions (6.6.16) and (6.6.17) there exists a positive constant β such that

$$\displaystyle{ \gamma \;k\int _{t}^{\infty }\vert B(u,t)\vert \;du \leq \beta. }$$

Similarly, by conditions (6.6.16) and (6.6.19) there is a constant Γ1 such that

$$\displaystyle{ h\varPhi (a,h)\gamma k\sum _{s=0}^{n_{0}-1}\sum _{ j=n_{0}}^{\infty }\vert B(j,s)\vert \leq \varGamma _{ 1}. }$$

Finally, as a consequence of (6.6.16) and (6.6.18) we have

$$\displaystyle{ \vert B(n,s)\vert \geq \lambda \sum _{j=n}^{\infty }\vert B(j,s)\vert. }$$

Theorem 6.6.2 ([91]).

Assume conditions  (6.6.16)–(6.6.19) hold. Then  (6.6.13) is consistent with respect to uniform exponential stability under the discretization scheme  (6.6.7) and  (6.6.11) with its continuous counterpart  (6.6.9).

Proof.

Define

$$\displaystyle{ V (t,x) = \vert x(t)\vert + k\int _{0}^{t}\int _{ t}^{\infty }\vert B(u,s)\vert du\vert f(s,x(s))\vert ds. }$$
(6.6.20)

Let \(V '(t,x) = \frac{d} {dt}V (t,x(t)).\) Then along the solutions of (6.6.9) we have,

$$\displaystyle\begin{array}{rcl} V '(t,x)& =& \frac{x(t)} {\vert x(t)\vert }x'(t) + k\int _{t}^{\infty }\vert B(u,t)\vert du\vert f(t,x(t))\vert - k\int _{ 0}^{t}\vert B(t,s)\vert \vert f(s,x(s))\vert ds \\ & \leq & a\vert x(t)\vert +\int _{ 0}^{t}\vert B(t,s)\vert \vert f(s,x(s))\vert ds \\ & & +k\int _{t}^{\infty }\vert B(u,t)\vert du\vert f(t,x(t))\vert - k\int _{ 0}^{t}\vert B(t,s)\vert \vert f(s,x(s))\vert ds \\ & \leq & \Big[a + k\int _{t}^{\infty }\vert B(u,t)\vert du\gamma \Big]\vert x(t)\vert + (1 - k)\int _{ 0}^{t}\vert B(t,s)\vert \vert f(s,x(s))\vert ds \\ & \leq & -\alpha \vert x(t)\vert -\epsilon \int _{0}^{t}\vert B(t,s)\vert \vert f(s,x(s))\vert ds \\ & \leq & -\alpha \vert x(t)\vert -\epsilon \lambda \int _{0}^{t}\int _{ t}^{\infty }\vert B(u,s)\vert du\vert f(s,x(s))\vert ds \\ & \leq & -\alpha \Big[\vert x(t)\vert + k\int _{0}^{t}\int _{ t}^{\infty }\vert B(u,s)\vert du\vert f(s,x(s))\vert ds\Big] \\ & \leq & -\alpha V (t,x). {}\end{array}$$
(6.6.21)

Hence inequality (6.6.21) yields

$$\displaystyle\begin{array}{rcl} V (t,x) \leq V (t_{0},\phi (.))e^{-\alpha (t-t_{0})}.& & {}\\ \end{array}$$

As a consequence, we have

$$\displaystyle\begin{array}{rcl} \vert x(t)\vert & \leq & V (t_{0},\phi (.))e^{-\alpha (t-t_{0})}\;\;\ \mbox{ for}\;\ t \geq t_{ 0} {}\\ & \leq & \|\phi \|\Big[1 + k\gamma \int _{0}^{t_{0} }\int _{t_{0}}^{\infty }\vert B(u,s)\vert du\;ds\Big]e^{-\alpha (t-t_{0})}\;\;\ \mbox{ for}\;\ t \geq t_{ 0}. {}\\ \end{array}$$

Hence, the zero solution of (6.6.9) is uniformly exponentially stable.

Remark 6.5.

Suppose ρ(t0) is a constant depending on t0. If condition (6.6.19) is replaced with

$$\displaystyle{ \int _{0}^{t_{0} }\int _{t_{0}}^{\infty }\vert B(u,s)\vert du\gamma (s)ds \leq \rho (t_{ 0}),\;\ \mbox{ for }t_{0} \geq 0, }$$

then a slight modification of the preceding proof shows that the zero solution of (6.6.9) is exponentially stable.

To show the zero solution of (6.6.12) is uniformly exponentially stable, we define V (n) = V (n, x) by

$$\displaystyle{ V (n) = \vert x(n)\vert + kh\varPhi (a,h)\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert. }$$

Then along solutions of (6.6.12), we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& =& \vert x(n + 1)\vert -\vert x(n)\vert + kh\varPhi (a,h)\sum _{s=0}^{n}\sum _{ j=n+1}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert {}\\ & &-kh\varPhi (a,h)\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert {}\\ & =& \Big\vert e^{ah}x(n) + h\varPhi (a,h)\sum _{ s=0}^{n}B(n,s)f(s,x(s))\Big\vert {}\\ & &-\vert x(n)\vert + kh\varPhi (a,h)\sum _{s=0}^{n}\Big[\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert {}\\ & &-\vert B(n,s)\vert \vert f(s,x(s))\vert \Big] - kh\varPhi (a,h)\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert {}\\ &\leq & \Big[\;e^{ah} +\gamma kh\varPhi (a,h)\sum _{ j=n}^{\infty }\vert B(j,n)\vert - 1\Big]\vert x(n)\vert {}\\ & & +h\varPhi (a,h)(1 - k)\sum _{s=0}^{n-1}\vert B(n,s)\vert \vert f(s,x(s))\vert. {}\\ \end{array}$$

Let α be defined by (6.6.17). Then by (6.6.16) and (6.6.17), we can choose an appropriate h so that

$$\displaystyle{ e^{ah} +\gamma kh\varPhi (a,h)\sum _{ j=n}^{\infty }\vert B(j,n)\vert - 1 \leq -\alpha. }$$

As a consequence,

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & -\alpha \vert x(n)\vert + h\varPhi (a,h)(1 - k)\sum _{s=0}^{n-1}\vert B(n,s)\vert \vert f(s,x(s))\vert {}\\ &\leq & -\alpha \vert x(n)\vert -\epsilon \lambda h\varPhi (a,h)\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert {}\\ &\leq & -\alpha \Big[\vert x(n)\vert + h\varPhi (a,h)k\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert \Big] {}\\ & =& -\alpha V (n). {}\\ \end{array}$$

The above inequality implies that

$$\displaystyle\begin{array}{rcl} V (n) \leq (1-\alpha )^{n-n_{0} }V (n_{0}),\;n \geq n_{0} \geq 0.& & {}\\ \end{array}$$

Or

$$\displaystyle\begin{array}{rcl} \vert x(n)\vert & \leq & (1-\alpha )^{n-n_{0} }V (n_{0}) {}\\ & \leq & \vert \vert \phi \vert \vert \Big[1 +\gamma kh\varPhi (a,h)\sum _{s=0}^{n_{0}-1}\sum _{ j=n_{0}}^{\infty }\vert B(j,s)\vert \Big](1-\alpha )^{n-n_{0} },\;n \geq n_{0} \geq 0. {}\\ \end{array}$$

This completes the proof.
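The discrete half of the proof can be illustrated numerically. The sketch below iterates a scheme of the form (6.6.12), assuming (hypothetically) the Mickens denominator function Φ(a, h) = (e^{ah} − 1)∕(ah); the kernel B(n, s) = q^{n−s} and the nonlinearity f(s, x) = γ tanh x (which satisfies | f(s, x) | ≤ γ | x | ) are illustrative choices, not taken from the text.

```python
import math

# illustrative parameters; phi is an ASSUMED form of the denominator function
a, h = -1.0, 0.1
phi = (math.exp(a * h) - 1.0) / (a * h)
gamma, q = 0.5, 0.3
B = lambda n, s: q ** (n - s)             # summable kernel
f = lambda s, x: gamma * math.tanh(x)     # |f(s,x)| <= gamma*|x|

# iterate x(n+1) = e^{ah} x(n) + h*phi * sum_{s=0}^{n} B(n,s) f(s, x(s))
x = [1.0]
for n in range(300):
    conv = sum(B(n, s) * f(s, x[s]) for s in range(n + 1))
    x.append(math.exp(a * h) * x[n] + h * phi * conv)

# the solution should decay exponentially toward zero
assert abs(x[-1]) < 1e-2 and abs(x[-1]) < abs(x[0])
```

For these parameters the trajectory contracts at an essentially geometric rate, which is exactly the behavior the theorem guarantees for sufficiently small h.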

Now we turn our attention to the preservation of boundedness. Consider the linear Volterra integro-differential equation

$$\displaystyle{ x'(t) = a\,x(t) +\int _{ 0}^{t}B(t,s)\,x(s)ds + g(t),\;t \geq 0 }$$
(6.6.22)

and its analogous discrete Volterra equation, under discretizations (6.6.7) and (6.6.11)

$$\displaystyle{ x(n + 1) = e^{ah}x(n) + h\varPhi (a,h)\sum _{ s=0}^{n}B(n,s)x(s) + h\varPhi (a,h)g(n),\;n \geq 0 }$$
(6.6.23)

where a, B are as defined before and g is continuous and uniformly bounded. Thus, there exists a positive constant M such that

$$\displaystyle{ h\varPhi (a,h)\vert g(t)\vert \leq M,\;\mbox{ for all }\;t \geq 0. }$$
(6.6.24)

Theorem 6.6.3.

Suppose there is a continuous function ψ: [0, ∞) → [0, ∞) with ψ′ ≤ 0 for t ≥ 0, \(\int _{0}^{t}\psi (u)du <\infty,\) and \(\frac{\partial } {\partial t}\psi (t - s) + \vert B(t,s)\vert \leq 0\) for 0 ≤ s < t < ∞, where | B(t, t) | is uniformly bounded. If for t ≥ 0, a + ψ(0) ≤ −α < 0, for some positive constant α, then  (6.6.23) is consistent with respect to boundedness under the discretization scheme  (6.6.7) and  (6.6.11) with its continuous counterpart  (6.6.22).

Proof.

Define a Lyapunov functional

$$\displaystyle{ V (t,x) = \vert x(t)\vert +\int _{ 0}^{t}\psi (t - s)\vert x(s)\vert ds. }$$

Along the solutions of (6.6.22) we have,

$$\displaystyle\begin{array}{rcl} V '(t,x)& =& \frac{x(t)} {\vert x(t)\vert }x'(t) +\psi (0)\vert x(t)\vert +\int _{ 0}^{t} \frac{\partial } {\partial t}\psi (t - s)\vert x(s)\vert ds {}\\ & \leq & a\vert x(t)\vert +\int _{ 0}^{t}\vert B(t,s)\vert \vert x(s)\vert ds {}\\ & +& \vert g(t)\vert +\psi (0)\vert x(t)\vert +\int _{ 0}^{t} \frac{\partial } {\partial t}\psi (t - s)\vert x(s)\vert ds {}\\ & \leq & [a +\psi (0)]\vert x(t)\vert + M +\int _{ 0}^{t}\Big[ \frac{\partial } {\partial t}\psi (t - s) + \vert B(t,s)\vert \Big]\vert x(s)\vert ds {}\\ & \leq & -\alpha \vert x(t)\vert + M. {}\\ \end{array}$$

By ([23], pp. 109–111), all solutions of (6.6.22) are bounded. With respect to (6.6.23) we consider the Lyapunov functional

$$\displaystyle{ V (n) = \vert x(n)\vert + h\varPhi (a,h)\sum _{s=0}^{n-1}\psi (n - s - 1)\vert x(s)\vert. }$$

Then along solutions of (6.6.23), we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & \Big[e^{ah} + h\varPhi (a,h)\big(\vert B(n,n)\vert +\psi (0)\big) - 1\Big]\vert x(n)\vert {}\\ & & +h\varPhi (a,h)\sum _{s=0}^{n-1}\Big[\bigtriangleup _{ n}\psi (n - s - 1) + \vert B(n,s)\vert \Big]\vert x(s)\vert + \vert g(n)\vert. {}\\ \end{array}$$

Due to the condition \(\frac{\partial } {\partial t}\psi (t - s) + \vert B(t,s)\vert \leq 0\) for 0 ≤ s < t < ∞, we have △n ψ(n − s − 1) + | B(n, s) | ≤ 0 for 0 ≤ s < n < ∞. 

Also, due to the condition a + ψ(0) ≤ −α < 0 we have a < 0. Moreover, since | B(t, t) | is uniformly bounded, we can choose h small enough so that

$$\displaystyle{ e^{ah} + h\varPhi (a,h)\big(\vert B(n,n)\vert +\psi (0)\big) - 1 \leq -\alpha, }$$

for some positive constant α. As a consequence,

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n)& \leq & -\alpha \vert x(n)\vert + M. {}\\ \end{array}$$

By Theorem 2.1.1, all solutions of (6.6.23) are bounded. Thus, (6.6.23) is consistent with respect to boundedness under the discretization scheme (6.6.7) and (6.6.11) with its continuous counterpart (6.6.22).
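A quick numerical sanity check of the discrete boundedness conclusion: the sketch below iterates (6.6.23) with an assumed denominator function Φ(a, h) = (e^{ah} − 1)∕(ah) and purely illustrative choices of B and g (none of these come from the text); it only verifies that the trajectory stays bounded.

```python
import math

a, h = -1.0, 0.1
phi = (math.exp(a * h) - 1.0) / (a * h)   # ASSUMED form of the denominator function
B = lambda n, s: 0.2 * 0.3 ** (n - s)     # illustrative summable kernel
g = lambda n: math.sin(0.5 * n)           # uniformly bounded forcing

# iterate x(n+1) = e^{ah} x(n) + h*phi*(sum B(n,s) x(s)) + h*phi*g(n)
x = [1.0]
for n in range(500):
    conv = sum(B(n, s) * x[s] for s in range(n + 1))
    x.append(math.exp(a * h) * x[n] + h * phi * conv + h * phi * g(n))

assert max(abs(v) for v in x) < 5.0       # bounded, as the theorem predicts
```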

Next we state the following corollaries using discretization (6.6.3).

Corollary 6.5 ([91]).

Assume conditions  (6.6.14) and  (6.6.15) hold. Then  (6.6.13) is consistent with respect to uniform asymptotic stability under the discretization scheme  (6.6.3) and  (6.6.11) with its continuous counterpart  (6.6.9).

The proof follows along the lines of the proof of Theorem 6.6.1 by taking

$$\displaystyle{ V (n) = \vert x(n)\vert +\gamma \frac{2h^{2}} {2 - ah}\sum _{s=0}^{n-1}\sum _{ u=n}^{\infty }\vert B(u,s)\vert \vert x(s)\vert. }$$

Corollary 6.6 ([91]).

Assume conditions  (6.6.16)–(6.6.19) hold. Then  (6.6.13) is consistent with respect to uniform exponential stability under the discretization scheme  (6.6.3) and  (6.6.11) with its continuous counterpart  (6.6.9).

The proof follows along the lines of the proof of Theorem 6.6.2 by taking

$$\displaystyle{ V (n) = \vert x(n)\vert + \frac{2h^{2}} {2 - ah}k\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert B(j,s)\vert \vert f(s,x(s))\vert. }$$

For the next corollary we consider (6.6.22) and its analogous discrete Volterra difference equation

$$\displaystyle{ x(n + 1) = \frac{2 + ah} {2 - ah}x(n) + \frac{2h^{2}} {2 - ah}\sum _{s=0}^{n}B(n,s)x(s) + g(n),\;n \geq 0 }$$
(6.6.25)

under discretization scheme (6.6.3) and (6.6.11).

Corollary 6.7 ([91]).

Suppose there is a continuous function ψ: [0, ∞) → [0, ∞) with ψ′ ≤ 0 for t ≥ 0, \(\int _{0}^{t}\psi (u)du <\infty,\) and \(\frac{\partial } {\partial t}\psi (t - s) + \vert B(t,s)\vert \leq 0\) for 0 ≤ s < t < ∞, where | B(t, t) | is uniformly bounded. If for t ≥ 0, a + ψ(0) ≤ −α < 0, for some positive constant α, then  (6.6.25) is consistent with respect to boundedness under the discretization scheme  (6.6.3) and  (6.6.11) with its continuous counterpart  (6.6.22).

The proof follows along the lines of the proof of Theorem 6.6.3 by taking

$$\displaystyle{ V (n) = \vert x(n)\vert + \frac{2h^{2}} {2 - ah}\sum _{s=0}^{n-1}\psi (n - s - 1)\vert x(s)\vert. }$$
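The same experiment can be repeated for the scheme (6.6.3), that is, for equation (6.6.25), whose coefficients (2 + ah)∕(2 − ah) and 2h²∕(2 − ah) are read directly off the displayed equation; the kernel, forcing, and parameters below are again illustrative.

```python
import math

a, h = -1.0, 0.1
c1 = (2 + a * h) / (2 - a * h)            # coefficient of x(n) in (6.6.25)
c2 = 2 * h * h / (2 - a * h)              # kernel weight in (6.6.25)
B = lambda n, s: 0.2 * 0.3 ** (n - s)     # illustrative summable kernel
g = lambda n: math.sin(0.5 * n)           # uniformly bounded forcing

x = [1.0]
for n in range(500):
    conv = sum(B(n, s) * x[s] for s in range(n + 1))
    x.append(c1 * x[n] + c2 * conv + g(n))

assert max(abs(v) for v in x) < 20.0      # solutions remain bounded
```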

6.7 Semigroup

We end the book with a brief introduction to the concept of semigroups. The notion of semigroup falls under the umbrella of fixed point theory. In continuous dynamical systems, including partial differential equations, semigroups have been among the main tools for studying boundedness, uniform exponential stability, strong stability, weak stability, almost weak stability, the existence of weak solutions, and almost periodic solutions. The theory of semigroups is well developed for continuous dynamical systems, which is not the case for discrete dynamical systems. This section is only intended to raise curiosity about the subject of semigroups and how they can be effectively used to qualitatively study solutions of discrete dynamical systems, in particular Volterra difference equations.

Let X be a Banach space and \(\mathcal{B}(X)\) the Banach algebra of all linear and bounded operators acting on X. 

Definition 6.7.1.

The subset \(\mathbb{T} =\{ T(n)\}_{n\in \mathbb{Z}^{+}}\) of \(\mathcal{B}(X)\) is called a discrete semigroup if it satisfies the following conditions:

(i) T(0) = I,   where I is the identity operator on X. 

(ii) T(n + m) = T(n)T(m),   for all \(n,m \in \mathbb{Z}^{+}.\)

Definition 6.7.2.

A linear operator A is called the generator of the semigroup T if

$$\displaystyle{ Ax =\lim _{s\rightarrow 1}\frac{T(s)x - T(1)x} {s - 1},\;x \in D(A), }$$

where the domain D(A) of A is the set of all x ∈ X for which the above limit exists.

Next we consider the discrete initial value problem

$$\displaystyle{ x(t + 1) = Ax(t),\;x(t_{0}) = x_{0} \in D(A),\;t \geq t_{0},\;t,t_{0} \in \mathbb{Z}^{+}, }$$
(6.7.1)

where A is the generator of T. By [76] the initial value problem (6.7.1) has the unique solution

$$\displaystyle{ x(t) = T(t - t_{0})x_{0}. }$$
(6.7.2)
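In the simplest setting, X = ℝᵏ and A is a k × k matrix acting as a bounded operator, so the semigroup attached to (6.7.1) is just T(n) = Aⁿ and (6.7.2) reads x(t) = A^{t−t₀}x₀. A minimal pure-Python check of the semigroup property and the solution formula (the matrix is an arbitrary illustrative choice):

```python
def matmul(A, B):
    # naive matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A, n):
    # T(n) = A^n, with T(0) = I
    R = [[1.0 if i == j else 0.0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        R = matmul(R, A)
    return R

A = [[0.5, 0.1], [0.0, 0.4]]

# (ii) of Definition 6.7.1: T(n + m) = T(n) T(m)
lhs, rhs = T(A, 5), matmul(T(A, 3), T(A, 2))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# (6.7.2): x(t) = T(t - t0) x0 solves x(t + 1) = A x(t)
x = [[1.0], [2.0]]
for _ in range(5):
    x = matmul(A, x)
closed = matmul(T(A, 5), [[1.0], [2.0]])
assert all(abs(x[i][0] - closed[i][0]) < 1e-12 for i in range(2))
```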

Denote the norms in X and \(\mathcal{B}(X)\) by ∥⋅ ∥. We have the following theorems. First, for concise definitions and terminology regarding stability and boundedness we refer to [76].

Theorem 6.7.1 ([76]).

The following statements are equivalent:

(i) Equation  (6.7.1) is stable;

(ii) \(\{T(t): t \in \mathbb{Z}\}\) is bounded;

(iii) Equation  (6.7.1) is uniformly stable.

Theorem 6.7.2 ([76]).

The following statements are equivalent:

(i) Equation  (6.7.1) is asymptotically stable;

(ii) \(\lim _{t\rightarrow \infty }\vert \vert T(t)x\vert \vert = 0\),   for every x ∈ X;

(iii) Equation  (6.7.1) is globally asymptotically stable;

(iv) Equation  (6.7.1) is uniformly asymptotically stable.

Next we turn our attention to using semigroup in Volterra difference equations. Thus, we consider the linear convolution Volterra difference equations with infinite delays

$$\displaystyle{ x(n + 1) =\sum _{ s=-\infty }^{n}C(n - s)x(s),\;n \geq n_{ 0} \geq 0,\,n,n_{0} \in \mathbb{Z}^{+}, }$$
(6.7.3)

and

$$\displaystyle{ x(n + 1) =\sum _{ s=-\infty }^{n}\{C(n - s) + G(n,s)\}x(s),\;n \geq n_{ 0} \geq 0,\,n,n_{0} \in \mathbb{Z}^{+}. }$$
(6.7.4)

Our intention is to write (6.7.3) as a functional difference equation so that semigroup theory can be used to derive conditions that relate solutions of (6.7.3) and (6.7.4). Let γ be a positive constant. Define the set

$$\displaystyle{ B^{\gamma } =\{\varphi: \mathbb{Z}^{-}\rightarrow \mathbb{C}^{k}:\sup _{ t\in \mathbb{Z}^{-}}\vert \varphi (t)\vert e^{\gamma t} <\infty \}, }$$

where \(\mathbb{C}\) is the set of complex numbers. Then B γ is a Banach space when endowed with the norm

$$\displaystyle{ \vert \vert \varphi \vert \vert =\sup _{t\in \mathbb{Z}^{-}}\vert \varphi (t)\vert e^{\gamma t} <\infty,\,\varphi \in B^{\gamma }. }$$

As we have done before, for \(x_{n} \in B^{\gamma }\), we set

$$\displaystyle{ x_{n}(s) = x(n + s),\;\;s \in \mathbb{Z}^{-}. }$$

Then we may write (6.7.3) as

$$\displaystyle{ x(n + 1) = L(x_{n}), }$$
(6.7.5)

where \(L(\cdot ): B^{\gamma } \rightarrow \mathbb{C}^{k}\) is a functional given by

$$\displaystyle{ L(\varphi ) =\sum _{ j=0}^{\infty }C(j)\varphi (-j),\;\varphi \in B^{\gamma }. }$$

Let T(n) denote the solution operator of (6.7.5). Then \(T(n)\varphi = x_{n}(\varphi )\) for \(\varphi \in B^{\gamma }\). Moreover, we denote by x(⋅ , φ) the solution of (6.7.5) satisfying x(s, φ) = φ(s) for \(s \in \mathbb{Z}^{-}.\) Then it can easily be shown that T(n) is a bounded linear operator on B γ and satisfies the semigroup property

$$\displaystyle{ T(n + m) = T(n)T(m). }$$

We have the following theorems.

Theorem 6.7.3 ([59]).

Suppose system  (6.7.5) possesses an ordinary dichotomy with dichotomy constant M (see [ 59 ]). Assume

$$\displaystyle{ \sum _{n=0}^{\infty }\vert C(n)\vert e^{\gamma n} <\infty \;\;\mathit{\text{and}}\;\sup _{n\geq n_{0}}\sum _{ s=-\infty }^{n}\vert G(n,s)\vert e^{\gamma (n-s)} <\infty, }$$
(6.7.6)
$$\displaystyle{ \sum _{s=n_{0}}^{\infty }\sum _{ j=-\infty }^{n_{0}-1}\vert G(s,j)\vert e^{\gamma (n_{0}-j)} +\sum _{ s=n_{0}}^{\infty }\sum _{ j=n_{0}}^{s}\vert G(s,j)\vert <1/M. }$$
(6.7.7)

Then for any bounded solution x(n) of  (6.7.3) on [n0, ) there exists a unique bounded solution y(n) of  (6.7.4) on [n0, ) such that

$$\displaystyle\begin{array}{rcl} y(n)& =& x(n) +\sum _{ s=n_{0}}^{n-1}T(n - s - 1)PE^{0}\Big(\sum _{ j=-\infty }^{s}G(s,j)y(j)\Big) \\ & -& \sum _{s=n}^{\infty }T(n - s - 1)(I - P)E^{0}\Big(\sum _{ j=-\infty }^{s}G(s,j)y(j)\Big),\;n \geq n_{ 0},{}\end{array}$$
(6.7.8)

where \(E^{0}(t) = I\) if t = 0 and \(E^{0}(t) = 0\) (the zero matrix) if t ≠ 0. 

Theorem 6.7.4 ([59]).

Assume  (6.7.6) and  (6.7.7). Suppose system  (6.7.5) possesses an ordinary dichotomy with dichotomy constant M and related projection P (see [ 59 ]) such that

$$\displaystyle{ \vert \vert T(n)P\vert \vert \leq Ma^{n}\;\mathit{\text{for some}}\;a,0 <a <1. }$$

Then there is a one-to-one correspondence between bounded solutions x(n) of  (6.7.3) on [n0, ) and bounded solutions y(n) of  (6.7.4) on [n0, ), and the asymptotic relation

$$\displaystyle{ y(n) = x(n) + o(1)\;(n \rightarrow \infty ) }$$

holds.

Naturally, the resolvent operator that was developed in Chapter 1, Section 1.3, might be used to define a semigroup for Volterra difference equations. To see this, we consider the Volterra difference equation of convolution type

$$\displaystyle{ x(n + 1) = Ax(n) +\sum _{ s=0}^{n}B(n - s)x(s) }$$
(6.7.9)

for all integers n ≥ 0 and integers 0 ≤ s ≤ n, where A, B are k × k matrix functions, and x is a k × 1 unknown vector. Then, we saw that the resolvent matrix equation of (6.7.9) takes the form

$$\displaystyle{ R(n + 1) = AR(n) +\sum _{ u=0}^{n}B(n - u)R(u),\;R(0) = I,\;n \in \mathbb{Z}^{+}. }$$
(6.7.10)
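The recursion (6.7.10) generates R(n) directly once A and B are specified. In the scalar case (k = 1) the sketch below uses the illustrative value a = 0.5 and a summable kernel B(j) = 0.1 ⋅ (0.5)ʲ, neither taken from the text, and observes the expected decay of the resolvent:

```python
a = 0.5                          # scalar "matrix" A (illustrative)
B = lambda j: 0.1 * 0.5 ** j     # illustrative summable kernel

# R(n+1) = a R(n) + sum_{u=0}^{n} B(n-u) R(u), with R(0) = 1
R = [1.0]
for n in range(100):
    R.append(a * R[n] + sum(B(n - u) * R[u] for u in range(n + 1)))

assert abs(R[-1]) < 1e-6         # resolvent decays for these parameters
```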

Let A and B(⋅ ) be closed operators in X. Hence D(A) endowed with the graph norm \(\vert x\vert = \vert \vert x\vert \vert + \vert \vert Ax\vert \vert\) is a Banach space, denoted by Y. Next we use the resolvent operator to define the solution of the nonhomogeneous Volterra difference equation of convolution type

$$\displaystyle{ x(n + 1) = Ax(n) +\sum _{ s=0}^{n}B(n - s)x(s) + f(n), }$$
(6.7.11)

where f is a k × 1 given vector. First, we have the following definition.

Definition 6.7.3.

R(⋅ ) is a resolvent of (6.7.11) if \(R(n) \in \mathcal{ B}(X)\) for \(n \in \mathbb{Z}^{+}\) and satisfies

  1. R(0) = I (the identity operator on X).

  2. \(R(n) \in \mathcal{ B}(Y )\) for \(n \in \mathbb{Z}^{+}\) and for y ∈ Y, we have

    $$\displaystyle\begin{array}{rcl} R(n + 1)y& =& AR(n)y +\sum _{ u=0}^{n}B(n - u)R(u)y \\ & =& R(n)Ay +\sum _{ u=0}^{n}R(n - u)B(u)y. {}\end{array}$$
    (6.7.12)

We note that item 2 of Definition 6.7.3 is needed for (ii) of Definition 6.7.1. Suppose R(n) is the resolvent operator of (6.7.11). Then it can easily be shown, using the results of Section 1.3, that the solution of (6.7.11) is given by

$$\displaystyle{ x(n) = R(n)x_{0} +\sum _{ u=0}^{n-1}R(n - u - 1)f(u),\;n \in \mathbb{Z}^{+}. }$$
(6.7.13)
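In the scalar case, the variation of parameters formula (6.7.13) can be checked against direct iteration of (6.7.11); the coefficient, kernel, and forcing below are illustrative, and the resolvent is generated by (6.7.10).

```python
import math

a = 0.5                          # illustrative scalar coefficient
B = lambda j: 0.1 * 0.5 ** j     # illustrative summable kernel
f = lambda n: math.cos(n)        # illustrative forcing
x0, N = 1.0, 40

# resolvent: R(n+1) = a R(n) + sum_{u=0}^{n} B(n-u) R(u), R(0) = 1
R = [1.0]
for n in range(N):
    R.append(a * R[n] + sum(B(n - u) * R[u] for u in range(n + 1)))

# direct iteration of x(n+1) = a x(n) + sum_{s=0}^{n} B(n-s) x(s) + f(n)
x = [x0]
for n in range(N):
    x.append(a * x[n] + sum(B(n - s) * x[s] for s in range(n + 1)) + f(n))

# variation of parameters: x(n) = R(n) x0 + sum_{u=0}^{n-1} R(n-u-1) f(u)
for n in range(N + 1):
    vp = R[n] * x0 + sum(R[n - u - 1] * f(u) for u in range(n))
    assert abs(vp - x[n]) < 1e-9
```

The agreement of the two computations, term by term, is exactly the content of the representation (6.7.13).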

Now, one can use the concept of the resolvent operator given by (6.7.12) to obtain various results concerning the qualitative analysis of solutions of Volterra difference equations.

It is worth noting, however, that the use of the resolvent operator to define a semigroup for Volterra difference equations and obtain meaningful results is in dire need of further development.

6.8 Open Problems

Open Problem 1

Prove a parallel theorem to Theorem 6.3.5 by considering (6.3.30) as a vector equation.

Open Problem 2

Extend Theorem 6.3.5 to the delay Volterra difference equation

$$\displaystyle{ x(n + 1) =\mu (n)x(n) +\sum _{ s=n-r}^{n-1}h(n,s)x(s) + f(n), }$$

where r is a positive integer.

Open Problem 3 (Extremely Hard)

Extend the results of Section 6.1 to the following Volterra difference equations

$$\displaystyle{ x(n + 1) = Px(n) +\sum _{ s=-\infty }^{n-1}H(n,s)g(x(s)),\;\;(vector) }$$
$$\displaystyle{ x(n + 1) = a(n)x(n) +\sum _{ s=-\infty }^{n-1}h(n,s)g(x(s)),\;\;(scalar) }$$

and

$$\displaystyle{ x(n + 1) = Px(n) +\sum _{ s=n-h}^{n-1}H(n,s)g(x(s)),\;\;(vector) }$$

where h is a positive integer.