In this chapter we consider functional difference equations and apply the results to all types of Volterra difference equations. Our general theorems will require the construction of suitable Lyapunov functionals, a task that is difficult but possible. As we have seen in Chapter 1, the concept of the resolvent applies only to linear Volterra difference systems. The theorems on functional difference equations will enable us to qualitatively analyze boundedness, uniform ultimate boundedness, and stability of solutions of vector and scalar Volterra difference equations. We extend and prove parallel theorems regarding functional difference equations with finite or infinite delay, and provide many applications. In addition, we point out the need for more research in delay difference equations. In the second part of the chapter, we state and prove theorems that guide us on how to systematically construct suitable Lyapunov functionals for a specific nonlinear Volterra difference equation. We end the chapter with open problems. Most of the results of this chapter can be found in [37, 38, 128, 133, 135, 141, 147, 181], and [182].

2.1 Uniform Boundedness and Uniform Ultimate Boundedness

We begin by considering Lyapunov functionals to prove general theorems about boundedness, uniform ultimate boundedness of solutions, and stability of the zero solution of the nonlinear functional discrete system

$$\displaystyle{ x(n + 1) = G(n,x(s);\ 0 \leq s \leq n)\mathop{ =}\limits^{ def}G(n,x(\cdot )) }$$
(2.1.1)

where \(G: \mathbb{Z}^{+} \times \mathbb{R}^{k} \rightarrow \mathbb{R}^{k}\) is continuous in x. When Lyapunov functionals are used to study the behavior of solutions of functional difference equations of the form of (2.1.1), we often end up with a pair of inequalities of the form

$$\displaystyle{ V (n,x(\cdot )) = W_{1}(x(n)) +\sum _{ s=0}^{n-1}K(n,s)W_{ 2}(x(s)), }$$
(2.1.2)
$$\displaystyle{ \bigtriangleup V (n,x(\cdot )) \leq -W_{3}(x(n)) + F(n) }$$
(2.1.3)

where V is a Lyapunov functional bounded below, x is the known solution of the functional difference equation, and K, F, and Wi, i = 1, 2, 3, are scalar positive functions. Inequalities (2.1.2) and (2.1.3) are rich in information regarding the qualitative behavior of the solutions of (2.1.1).

The goal is to use inequalities (2.1.2) and (2.1.3) to conclude boundedness of x(n) when F is bounded. Also, we obtain stability results about the zero solution of (2.1.1) when F = 0 and G(n, 0) = 0. In the celebrated paper of Kolmanovskii et al. [36], the authors investigated the boundedness of solutions of Volterra difference equations by means of Lyapunov functionals. Also, in [37] the same authors constructed general theorems for the stability of the zero solution of Volterra type difference equations.

As we have seen in Chapter 1, several authors like Medina [113, 115, 116], Islam and Raffoul [83], and Raffoul [135] obtained stability and boundedness results for the solutions of discrete Volterra equations by representing the solution in terms of the resolvent matrix of the corresponding system of Volterra difference equations. Eloe et al. [65] and Elaydi et al. [61] used the notion of total stability and established results on the asymptotic behavior of the solutions of discrete Volterra systems with nonlinear perturbations. Their work depended heavily on the summability of the resolvent matrix. For more results on the stability of the zero solution of Volterra discrete systems we refer the reader to Elaydi [52], Agarwal and Pang [5], and [117].

Boundedness of solutions of linear and nonlinear discrete Volterra equations was also studied by Diblik and Schmeidel [47], Gronek and Schmeidel [72], and the references therein. A survey of the fundamental results on the stability of linear Volterra difference equations, of both convolution and non-convolution type, can be found in Elaydi [59].

We say that x(n) = x(n, n0, ϕ) is a solution of (2.1.1) with a bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) if it satisfies (2.1.1) for n > n0 and x(j) = ϕ(j) for j ≤ n0. 

If D is a matrix or a vector, | D | means the sum of the absolute values of the elements. Since we are now dealing with functional difference equations, we restate the following stability definitions.

Definition 2.1.1.

Solutions of (2.1.1) are uniformly bounded (UB) if for each B1 > 0 there is B2 > 0 such that \(\big[n_{0} \geq 0,\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) with | ϕ(n) | < B1 on \([0,n_{0}],n> n_{0}\big]\) implies | x(n, n0, ϕ) | < B2. 

Definition 2.1.2.

Solutions of (2.1.1) are uniformly ultimately bounded (UUB) for bound B if there is a B > 0 and if for each M > 0 there exists N > 0 such that \(\big[n_{0} \geq 0,\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) with | ϕ(n) | < M on \([0,n_{0}],n> n_{0} + N\big]\) implies | x(n, n0, ϕ) | < B. 

If G(n, 0) = 0, then x(n) = 0 is a solution of (2.1.1). In this case we state the following definitions.

Definition 2.1.3.

The zero solution of (2.1.1) is stable (S) if for each ɛ > 0, there is a δ = δ(ɛ) > 0 such that \(\big[\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) with | ϕ(n) | < δ on \([0,n_{0}],n \geq n_{0}\big]\) implies | x(n, n0, ϕ) | < ɛ. It is uniformly stable (US) if δ is independent of n0. 

Definition 2.1.4.

The zero solution of (2.1.1) is uniformly asymptotically stable (UAS) if it is (US) and there exists a γ > 0 with the property that for each μ > 0 there exists N > 0 such that \(\big[n_{0} \geq 0,\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) with | ϕ(n) | < γ on \([0,n_{0}],n \geq n_{0} + N\big]\) implies | x(n, n0, ϕ) | < μ. 

We begin by proving general theorems regarding boundedness and stability of solutions of (2.1.1).

Theorem 2.1.1 ([133]).

Let φ(n, s) be a scalar sequence for 0 ≤ s ≤ n < ∞ and suppose that φ(n, s) ≥ 0, △n φ(n, s) ≤ 0, △s φ(n, s) ≥ 0, and that there are constants B and J such that \(\sum _{s=0}^{n}\varphi (n,s) \leq B\) and φ(0, s) ≤ J. Also, suppose that for each n0 ≥ 0 and each bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) , every solution x(n) = x(n, n0, ϕ) of  (2.1.1) satisfies

$$\displaystyle{ W_{1}(\vert x(n)\vert ) \leq V (n,x(\cdot )) \leq W_{2}(\vert x(n)\vert ) +\sum _{ s=0}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) }$$
(2.1.4)

and

$$\displaystyle{ \bigtriangleup V _{\text{(2.1.1)}}(n,x(\cdot )) \leq -\rho W_{3}(\vert x(n)\vert ) + K }$$
(2.1.5)

for some constants ρ > 0 and K ≥ 0. Then solutions of  (2.1.1) are (UB).

Proof.

Let H > 0 with | ϕ(n) | < H on [0, n0], and set V (n) = V (n, x(⋅ )). Let \(V (n^{{\ast}}) =\max _{0\leq n\leq n_{0}}V (n)\). If V (n) ≤ V (n ) for all n ≥ n0, then by (2.1.4) we have

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert ) \leq V (n)& \leq & V (n^{{\ast}}) {}\\ & \leq & W_{2}(\vert x(n^{{\ast}})\vert ) +\sum _{ s=0}^{n^{{\ast}}-1 }\varphi (n^{{\ast}},s)W_{ 3}(\vert \phi (s)\vert ) {}\\ & \leq & W_{2}(\vert \phi (n^{{\ast}})\vert ) +\sum _{ s=0}^{n^{{\ast}}-1 }\varphi (n^{{\ast}},s)W_{ 3}(\vert \phi (s)\vert ) {}\\ & \leq & W_{2}(H) + BW_{3}(H). {}\\ \end{array}$$

From which it follows that

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\big[W_{ 2}(H) + BW_{3}(H)\big]. }$$

Suppose, on the other hand, that V (n) > V (n ) for some n ≥ n0. We may choose such an n so that \(V (n) =\max _{0\leq s\leq n}V (s)\). 

Multiplying both sides of (2.1.5) by φ(n, s) and summing from s = n0 to s = n − 1, we obtain

$$\displaystyle{ \sum _{s=n_{0}}^{n-1}\big(\bigtriangleup V (s)\big)\varphi (n,s) \leq -\rho \sum _{ s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) + KB. }$$

Summing by parts the left side we arrive at

$$\displaystyle\begin{array}{rcl} V (n)\varphi (n,n)& -& V (n_{0})\varphi (n,n_{0}) -\sum _{s=n_{0}}^{n-1}V (s + 1)\bigtriangleup _{ s}\varphi (n,s) {}\\ & \leq & -\rho \sum _{s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) + KB. {}\\ \end{array}$$

Hence

$$\displaystyle\begin{array}{rcl} \rho \sum _{s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert )& \leq & V (n)\varphi (n,n) + V (n_{0})\varphi (n,n_{0}) \\ & +& \sum _{s=n_{0}}^{n-1}V (s + 1)\bigtriangleup _{ s}\varphi (n,s) + KB.{}\end{array}$$
(2.1.6)

Since △s φ(n, s) ≥ 0 and \(V (n) =\max _{0\leq s\leq n}V (s) \geq V (s + 1)\) for n0 ≤ s ≤ n − 1, we have

$$\displaystyle\begin{array}{rcl} \sum _{s=n_{0}}^{n-1}V (s + 1)\bigtriangleup _{ s}\varphi (n,s)& \leq & V (n)\sum _{s=n_{0}}^{n-1}\bigtriangleup _{ s}\varphi (n,s) {}\\ & =& V (n)[\varphi (n,n) -\varphi (n,n_{0})]. {}\\ \end{array}$$

Thus, from inequality (2.1.6) we have

$$\displaystyle\begin{array}{rcl} \rho \sum _{s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert )& \leq & V (n)[\varphi (n,n) -\varphi (n,n_{0})] \\ & -& V (n)\varphi (n,n) + V (n_{0})\varphi (n,n_{0}) + KB \\ & \leq & V (n_{0})\varphi (n,n_{0}) + KB \\ & \leq & V (n_{0})\varphi (0,n_{0}) + KB \\ & \leq & V (n_{0})J + KB. {}\end{array}$$
(2.1.7)

In view of (2.1.4), we have

$$\displaystyle\begin{array}{rcl} V (n_{0})& \leq & W_{2}(\vert \phi (n_{0})\vert ) +\sum _{ s=0}^{n_{0}-1}\varphi (n_{ 0},s)W_{3}(\vert \phi (s)\vert ) {}\\ & \leq & W_{2}(H) + BW_{3}(H). {}\\ \end{array}$$

As a result, inequality (2.1.7) yields

$$\displaystyle{ \sum _{s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) \leq \frac{J\big[W_{2}(H) + BW_{3}(H)\big]} {\rho } + \frac{KB} {\rho }. }$$

Now, inequality (2.1.4) implies that

$$\displaystyle\begin{array}{rcl} V (n)& \leq & W_{2}(\vert x(n)\vert ) +\sum _{ s=0}^{n_{0}-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) +\sum _{ s=n_{0}}^{n-1}\varphi (n,s)W_{ 3}(\vert x(s)\vert ) {}\\ & \leq & W_{2}(\vert x(n)\vert ) + BW_{3}(H) + \frac{J\big[W_{2}(H) + BW_{3}(H)\big]} {\rho } + \frac{KB} {\rho } {}\\ & \leq & W_{2}(\vert x(n)\vert ) + D(H), {}\\ \end{array}$$

where \(D(H) = BW_{3}(H) + \frac{J\big[W_{2}(H)+BW_{3}(H)\big]} {\rho } + \frac{KB} {\rho }.\)

As W3(r) →∞ as r →∞, there exists an L > 0 such that \(W_{3}(L) = \frac{K} {\rho }\). Now, by (2.1.5), if | x | > L, then △V < 0. Thus, V (n) attains its maximum when | x | ≤ L. Hence we have

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert )& \leq & V (n) \leq W_{2}(\vert x(n)\vert ) + D(H) {}\\ & \leq & W_{2}(L) + D(H). {}\\ \end{array}$$

Finally, from the above inequality we arrive at

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\big[W_{ 2}(L) + D(H)\big]. }$$

This completes the proof.
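The summation-by-parts identity invoked in the proof is purely algebraic and easy to check numerically. The following sketch (random sequences standing in for V and φ, purely for illustration) confirms it:

```python
# Numerical check (not from the text) of the summation-by-parts identity
#   sum_{s=n0}^{n-1} (Delta V(s)) phi(n,s)
#     = V(n)phi(n,n) - V(n0)phi(n,n0) - sum_{s=n0}^{n-1} V(s+1) Delta_s phi(n,s)
# with arbitrary random sequences standing in for V and phi.
import random

random.seed(1)
n0, n = 2, 9
V = [random.random() for _ in range(n + 1)]
phi = [[random.random() for _ in range(n + 1)] for _ in range(n + 1)]

lhs = sum((V[s + 1] - V[s]) * phi[n][s] for s in range(n0, n))
rhs = (V[n] * phi[n][n] - V[n0] * phi[n][n0]
       - sum(V[s + 1] * (phi[n][s + 1] - phi[n][s]) for s in range(n0, n)))

print(abs(lhs - rhs) < 1e-12)  # the identity holds for arbitrary sequences
```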

The next theorem extends Theorem 2.1.1.

Theorem 2.1.2 ([133]).

Let φi(n, s), i = 1, 2, be scalar sequences for 0 ≤ s ≤ n < ∞ and suppose that φi(n, s) ≥ 0, △n φi(n, s) ≤ 0, △s φi(n, s) ≥ 0, and that there are constants Bi and Ji such that \(\sum _{s=0}^{n}\varphi _{i}(n,s) \leq B_{i}\) and φi(0, s) ≤ Ji . Also, suppose that for each n0 ≥ 0 and each bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) , every solution x(n) = x(n, n0, ϕ) of  (2.1.1) satisfies

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert )& \leq & V (n,x(\cdot )) \\ & \leq & W_{2}(\vert x(n)\vert ) +\sum _{ s=0}^{n-1}\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) \\ & & +\sum _{s=0}^{n-1}\varphi _{ 2}(n,s)W_{4}(\vert x(s)\vert ) {}\end{array}$$
(2.1.8)

and

$$\displaystyle{ \bigtriangleup V _{{\text{(2.1.1)}}}(n,x(\cdot )) \leq -\rho _{1}W_{3}(\vert x(n)\vert ) -\rho _{2}W_{4}(\vert x(n)\vert ) + K }$$
(2.1.9)

for some constants ρi > 0, i = 1, 2, and K ≥ 0. Then solutions of  (2.1.1) are (UB).

Proof.

We follow the proof of the previous theorem. Let \(V (n) =\max _{0\leq s\leq n}V (s)\), n ≥ n0. If the maximum of V occurs on [0, n0], then the bound follows as in Theorem 2.1.1. Multiply both sides of (2.1.9) by φi(n, s) and then sum from s = n0 to s = n − 1 to obtain

$$\displaystyle\begin{array}{rcl} \rho _{i}\sum _{s=n_{0}}^{n-1}\varphi _{ i}(n,s)W_{i+2}(\vert x(s)\vert ) \leq V (n_{0})J_{i} + KB_{i},\;\;\;\ i = 1,2.& &{}\end{array}$$
(2.1.10)

For H > 0 and | ϕ(n) | < H, we have

$$\displaystyle\begin{array}{rcl} V (n_{0}) \leq W_{2}(H) + B_{1}W_{3}(H) + B_{2}W_{4}(H)\mathop{ =}\limits^{ def}R(H),& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{n_{0}-1}\varphi _{ i}(n,s)W_{i+2}(\vert x(s)\vert ) \leq W_{i+2}(H)B_{i}.& & {}\\ \end{array}$$

Thus, inequality (2.1.10) yields,

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{n-1}\varphi _{ i}(n,s)W_{i+2}(\vert x(s)\vert ) \leq \frac{R(H)J_{i} + KB_{i}} {\rho _{i}} + W_{i+2}(H)B_{i}\mathop{ =}\limits^{ def}S_{i}(H).& & {}\\ \end{array}$$

Now, as in the proof of Theorem 2.1.1, by (2.1.9) there exists an L > 0 such that △V < 0 whenever | x | > L. Thus, V (n) attains its maximum when | x | ≤ L and hence

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert )& \leq & V (n) \leq W_{2}(\vert x(n)\vert ) + S_{1}(H) + S_{2}(H) {}\\ & \leq & W_{2}(L) + S_{1}(H) + S_{2}(H). {}\\ \end{array}$$

From the above inequality we obtain

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\big[W_{ 2}(L) + S_{1}(H) + S_{2}(H)\big]. }$$

This completes the proof.

In the next theorem we obtain boundedness and stability results about solutions and the zero solution of (2.1.1).

Theorem 2.1.3 ([133]).

Let φ(n) ≥ 0 be a scalar sequence for n ≥ 0 and V and Wi, i = 1, 2, be defined as before. Also, suppose that for each n0 ≥ 0 and each bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\) , every solution x(n) = x(n, n0, ϕ) of  (2.1.1) satisfies

$$\displaystyle{ W_{1}(\vert x(n)\vert ) \leq V (n,x(\cdot )) \leq \alpha W_{2}(\vert x(n)\vert ) +\sum _{ s=0}^{n-1}\varphi (n - s - 1)W_{ 2}(\vert x(s)\vert ) }$$
(2.1.11)

and

$$\displaystyle{ \bigtriangleup V _{\text{(2.1.1)}}(n,x(\cdot )) \leq -\rho W_{2}(\vert x(n)\vert ) }$$
(2.1.12)

for some constants ρ and α > 0.

a) If \(\sum _{s=0}^{\infty }\varphi (s) = B\) , then solutions of  (2.1.1) are (UB) and the zero solution of  (2.1.1) is (US).

b) If \(\sum _{n=0}^{\infty }\sum _{s=n}^{\infty }\varphi (s) = J\) , then solutions of  (2.1.1) are (UUB) and the zero solution of  (2.1.1) is (UAS) .

Proof.

Let H > 0 and | ϕ(n) | < H on [0, n0], and set V (n) = V (n, x(⋅ )). By (2.1.12), V (n) is monotonically decreasing and hence, by (2.1.11), we have

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert ) \leq V (n)& \leq & V (n_{0}) \\ & \leq & \alpha W_{2}(H) + W_{2}(H)\sum _{u=0}^{n_{0}-1}\varphi (u) \\ & \leq & W_{2}(H)\big(\alpha +B\big). {}\end{array}$$
(2.1.13)

Let ε > 0 be given. Choose H such that H < ε and

$$\displaystyle{ W_{2}(H)\big(\alpha +B\big) <W_{1}(\epsilon ). }$$

Hence from (2.1.13), we have | x(n) | < ε for n ≥ n0. Consequently, the zero solution of (2.1.1) is (US). Also, it follows from (2.1.13) that

$$\displaystyle{ \vert x(n)\vert <W_{1}^{-1}\big[W_{ 2}(H)\big(\alpha +B\big)\big], }$$

which implies solutions of (2.1.1) are (UB) .

Sum (2.1.12) from s = n0 to s = n − 1 to obtain

$$\displaystyle\begin{array}{rcl} -V (n_{0}) \leq V (n) - V (n_{0}) \leq -\rho \sum _{s=n_{0}}^{n-1}W_{ 2}(\vert x(s)\vert )& & {}\\ \end{array}$$

and hence

$$\displaystyle\begin{array}{rcl} \sum _{s=n_{0}}^{n-1}W_{ 2}(\vert x(s)\vert ) \leq \frac{V (n_{0})} {\rho } \leq \frac{(\alpha +B)W_{2}(H)} {\rho }.& & {}\\ \end{array}$$

On the other hand, if we sum (2.1.11) from s = n0 to s = n − 1 we arrive at

$$\displaystyle\begin{array}{rcl} \sum _{s=n_{0}}^{n-1}V (s)& \leq & \alpha \frac{(\alpha +B)W_{2}(H)} {\rho } +\sum _{ u=n_{0}}^{n-1}\sum _{ s=0}^{u-1}\varphi (u - s - 1)W_{ 2}(\vert x(s)\vert ) \\ & \leq & \alpha \frac{(\alpha +B)W_{2}(H)} {\rho } +\sum _{ s=0}^{n_{0}-1}\sum _{ u=n_{0}}^{n-1}\varphi (u - s - 1)W_{ 2}(\vert x(s)\vert ) \\ & & +\sum _{s=n_{0}}^{n-1}\sum _{ u=s+1}^{n-1}\varphi (u - s - 1)W_{ 2}(\vert x(s)\vert ) \\ & \leq & \alpha \frac{(\alpha +B)W_{2}(H)} {\rho } +\sum _{ s=0}^{n_{0}-1}W_{ 2}(\vert x(s)\vert )\sum _{u=n_{0}}^{n-1}\varphi (u - s - 1) \\ & & +\sum _{s=n_{0}}^{n-1}W_{ 2}(\vert x(s)\vert )\sum _{u=s+1}^{n-1}\varphi (u - s - 1) \\ & \leq & \alpha \frac{(\alpha +B)W_{2}(H)} {\rho } + W_{2}(H)\sum _{s=0}^{n_{0}-1}\sum _{ u=n_{0}}^{n-1}\varphi (u - s - 1) \\ & & +B\sum _{s=n_{0}}^{n-1}W_{ 2}(\vert x(s)\vert ) \\ & \leq & \frac{(\alpha +B)^{2}W_{2}(H)} {\rho } + W_{2}(H)\sum _{s=0}^{n_{0}-1}\sum _{ r=n_{0}-s-1}^{n-s-2}\varphi (r) \\ & \leq & \frac{(\alpha +B)^{2}W_{2}(H)} {\rho } + W_{2}(H)\sum _{\xi =0}^{\infty }\sum _{ r=\xi }^{\infty }\varphi (r) \\ & \leq & \frac{(\alpha +B)^{2}W_{2}(H)} {\rho } + W_{2}(H)J \\ & \leq & \big[J + \frac{(\alpha +B)^{2}} {\rho } \big]W_{2}(H)\mathop{ =}\limits^{ def}aW_{2}(H). {}\end{array}$$
(2.1.14)

Since V (n) is positive and decreasing for all nn0 ≥ 0, we have

$$\displaystyle{ \sum _{s=n_{0}}^{n-1}V (s) \geq V (n)(n - n_{ 0}). }$$

Let ε > 0 be given. Then, for \(n \geq n_{0} + \frac{aW_{2}(H)} {W_{1}(\epsilon )}\) we have from (2.1.11) and (2.1.14) that

$$\displaystyle{ W_{1}(\vert x(n)\vert ) \leq V (n) \leq \frac{aW_{2}(H)} {n - n_{0}} <W_{1}(\epsilon ). }$$
(2.1.15)

Hence, inequality (2.1.15) implies that

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\big(\frac{aW_{2}(H)} {n - n_{0}} \big) <\epsilon. }$$

From this, the (UUB) and the (UAS) follow.

2.2 Functional Delay Difference Equations

Next, we discuss the papers by Zhang [181], Zhang and Ming-Po [182], and Raffoul [133, 145], in which the authors prove general theorems regarding boundedness and stability of functional difference equations with infinite or finite delay. In [145], the author proves a general theorem offering necessary and sufficient conditions for the uniform boundedness of all solutions. We begin by noting that Definition 2.1.4 can easily be extended to accommodate systems with infinite or finite delay by considering the initial sequence \(\phi: [-\alpha,n_{0}] \rightarrow \mathbb{R}^{k}\), where α may be finite or infinite. We consider the functional delay difference equation

$$\displaystyle{ x(t + 1) = F(t,x_{t}). }$$
(2.2.1)

We assume that F is continuous in x and that \(F: \mathbb{Z} \times C \rightarrow \mathbb{R}^{n}\) where C is the set of sequences \(\phi: [-\alpha,0] \rightarrow \mathbb{R}^{n},\;\alpha> 0.\) Let

$$\displaystyle{ C(t) =\{\phi: [t-\alpha,t] \rightarrow \mathbb{R}^{n}\}. }$$

It is to be understood that C(t) is C when t = 0. Also, ϕt denotes ϕ ∈ C(t) and \(\vert \vert \phi _{t}\vert \vert =\max _{t-\alpha \leq s\leq t}\vert \phi (s)\vert,\) where | ⋅ | is a convenient norm on \(\mathbb{R}^{n}.\) For t = 0,

$$\displaystyle{ C(0) =\{\phi: [-\alpha,0] \rightarrow \mathbb{R}^{n}\}. }$$

Theorem 2.2.1 ([181]).

Let φ(n) ≥ 0 be a scalar sequence for n ≥ 0 and V and Wi, i = 1, …, 5, be defined as before. Also, suppose that for each n0 ≥ 0 and each bounded initial function \(\phi: [0,n_{0}] \rightarrow \mathbb{R}^{k}\), every solution x(n) = x(n, n0, ϕ) of  (2.2.1) satisfies

$$\displaystyle{ W_{1}(\vert x(n)\vert ) \leq V (n,x(\cdot )) \leq W_{2}(\vert x(n)\vert ) + W_{3}\Big(\sum _{s=l}^{n}\varphi (n - s)W_{ 4}(\vert x(s)\vert )\Big) }$$
(2.2.2)

and

$$\displaystyle{ \bigtriangleup V _{\text{(2.2.1)}}(n,x(\cdot )) \leq -W_{5}(\vert x(n)\vert ). }$$
(2.2.3)

In addition, if \(\sum _{s=0}^{\infty }\varphi (s) = J\) , then the zero solution of  (2.2.1) is (UAS) .

It is widely known that there are two methods for studying the qualitative theory of delay differential or difference equations. The basic and more natural one is what we call the Razumikhin-Lyapunov method, and the more popular one is the direct method of Lyapunov functions or functionals. In some cases one has an advantage over the other, depending on the system being studied. It is the opinion of the author that the Razumikhin-Lyapunov method is easier to use, since the Lyapunov function is readily available; moreover, the imposed conditions are less restrictive. We consider (2.2.1) for \(n \in \mathbb{Z}^{+}.\) We assume \(F: \mathbb{Z}^{+} \times C_{ H} \rightarrow \mathbb{R}^{n},\) where

$$\displaystyle{ C_{H} =\{\phi \in C(0): \vert \vert \phi \vert \vert <H\}, }$$

for some positive constant H. Also, \(x_{t}(s) = x(t + s),\;\;s \in [-\alpha,0].\) We assume that F(t, 0) = 0, so that x = 0 is a solution. It is assumed that for any \(t_{0} \in \mathbb{Z}^{+}\) and a given function ϕ ∈ CH, there is a unique solution of (2.2.1), denoted by x(t, t0, ϕ), that satisfies (2.2.1) for all integers t ≥ t0 and such that x(t0, t0, ϕ) = ϕ. Lastly, we assume there is a constant L > 0 such that

$$\displaystyle{ \vert F(t,\phi )\vert \leq L\vert \vert \phi \vert \vert,\;\mbox{ for }\;t \in \mathbb{Z}^{+}\;\;\mbox{ and}\;\;\phi \in C_{ H}. }$$

In the next theorem, we use Lyapunov-Razumikhin type functions to prove the (UAS) of the zero solution of (2.2.1). Its proof is too long to include here; it can be found in [182].

Theorem 2.2.2 ([182]).

In addition to the above assumptions, suppose there exists a continuous Lyapunov function \(V: \mathbb{Z}^{+} \times B_{ H} \rightarrow \mathbb{R}^{+}\) with \(B_{H} =\{ x \in \mathbb{R}^{k}: \vert x\vert <H\},\) such that

$$\displaystyle{ W_{1}(\vert x\vert ) \leq V (t,x) \leq W_{2}(\vert x\vert ), }$$
(2.2.4)

and

$$\displaystyle{ \bigtriangleup V (t,x(.)) \leq -W_{3}(\vert x(t + 1)\vert ). }$$
(2.2.5)

If (2.2.5) holds whenever \(P\big(V (t+1,x(t+1))\big) \geq V (t+s,x(t+s))\;\mathit{\mbox{ for}}\;s \in [-\alpha,0],\;\mathit{\mbox{ where}}\;P: \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}\;\mathit{\mbox{ is a continuous function with }}\;P(s)> s\;\mathit{\mbox{ for}}\;s> 0,\) then the zero solution of  (2.2.1) is (UAS).

Next, we display an example in the form of a theorem to show the application of Theorem 2.2.2. Consider the Volterra difference equation with multiple delays

$$\displaystyle{ x(t + 1) = a(t)x(t) +\sum _{ i=1}^{k}b_{ i}(t)x(t - h_{i}), }$$
(2.2.6)

where the delays hi are positive integers for i = 1, 2, …, k. 

Theorem 2.2.3.

Let

$$\displaystyle{ a^{{\ast}} =\max _{ t\in \mathbb{Z}^{+}}\vert a(t)\vert,\;\;b_{i}^{{\ast}} =\max _{ t\in \mathbb{Z}^{+}}\vert b_{i}(t)\vert,\;\;i = 1,2,\ldots,k. }$$

If

$$\displaystyle{ a^{{\ast}} +\sum _{ i=1}^{k}b_{ i}^{{\ast}} <1, }$$
(2.2.7)

then the zero solution of  (2.2.6) is (UAS).

Proof.

Consider the Razumikhin type Lyapunov function

$$\displaystyle{ V (t,x) = \vert x(t)\vert. }$$

Then along the solutions of (2.2.6) we have that

$$\displaystyle{ \bigtriangleup V (t,x) = \vert x(t + 1)\vert -\vert x(t)\vert. }$$
(2.2.8)

Due to condition (2.2.7), there exists a constant μ ∈ (0, 1) such that

$$\displaystyle{ \sum _{i=1}^{k}b_{ i}^{{\ast}} <\mu (1 - a^{{\ast}}). }$$

Set \(P(t) = \frac{1} {\mu } t,\;\mbox{ for}\;t \geq 0.\) Whenever

$$\displaystyle{ P\big(V (t + 1,x(t + 1))\big) \geq V (t + s,x(t + s)),\;\mbox{ for}\;s \in [-\alpha,0], }$$

that is,

$$\displaystyle{ \frac{1} {\mu } \vert x(t + 1)\vert \geq \vert x(t + s)\vert,\;\mbox{ for}\;s \in [-\alpha,0], }$$

we have by (2.2.6) that

$$\displaystyle\begin{array}{rcl} \vert x(t + 1)\vert & \leq & \vert a(t)\vert \vert x(t)\vert +\sum _{ i=1}^{k}\vert b_{ i}(t)\vert \vert x(t - h_{i})\vert {}\\ &\leq & a^{{\ast}}\vert x(t)\vert + 1/\mu \big(\sum _{ i=1}^{k}\vert b_{ i}^{{\ast}}\vert \big)\vert x(t + 1)\vert, {}\\ \end{array}$$

which implies that

$$\displaystyle{ \vert x(t)\vert \geq \frac{\mu -\sum _{i=1}^{k}\vert b_{i}^{{\ast}}\vert } {a^{{\ast}}\mu } \vert x(t + 1)\vert. }$$

Thus, by (2.2.8) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x)& \leq & \Big(1 -\frac{\mu -\sum _{i=1}^{k}\vert b_{i}^{{\ast}}\vert } {a^{{\ast}}\mu } \Big)\vert x(t + 1)\vert {}\\ & =& -\Big(\frac{\mu (1 - a^{{\ast}}) -\sum _{i=1}^{k}\vert b_{i}^{{\ast}}\vert } {a^{{\ast}}\mu } \Big)\vert x(t + 1)\vert, {}\\ \end{array}$$

whenever \(P\big(V (t + 1,x(t + 1))\big) \geq V (t + s,x(t + s)),\;\mbox{ for}\;s \in [-\alpha,0].\) Thus the conditions of Theorem 2.2.2 are satisfied with

$$\displaystyle{ W_{1}(\vert x\vert ) = W_{2}(\vert x\vert ) = \vert x\vert }$$

and

$$\displaystyle{ W_{3}(\vert x(t + 1)\vert ) =\Big (\frac{\mu (1 - a^{{\ast}}) -\sum _{i=1}^{k}\vert b_{i}^{{\ast}}\vert } {a^{{\ast}}\mu } \Big)\vert x(t + 1)\vert, }$$

and the zero solution is (UAS). This completes the proof.
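As a numerical illustration of Theorem 2.2.3 (the data below are assumptions, not from the text), consider (2.2.6) with constant coefficients a(t) ≡ 0.4, b1(t) ≡ 0.2, b2(t) ≡ 0.1 and delays h1 = 1, h2 = 3, so that condition (2.2.7) holds with a* + b1* + b2* = 0.7 < 1:

```python
# Assumed data chosen so that (2.2.7) holds: a* + b1* + b2* = 0.7 < 1.
a, b, h = 0.4, [0.2, 0.1], [1, 3]
hmax = max(h)

x = [1.0] * (hmax + 1)              # constant initial history phi = 1

for _ in range(400):
    t = len(x) - 1                  # current time index
    # x(t+1) = a(t)x(t) + sum_i b_i(t) x(t - h_i)
    x.append(a * x[t] + sum(bi * x[t - hi] for bi, hi in zip(b, h)))

print(abs(x[-1]))                   # tends to 0, consistent with (UAS)
```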

Equations of the form (2.2.6) play a leading role in modeling additive neural networks. It is worth mentioning that Theorem 2.2.2 cannot be applied to Volterra equations of the form

$$\displaystyle{ x(n + 1) = A(n)x(n) +\sum _{ s=0}^{n}C(n,s)x(s). }$$

We will revisit such equations later in the chapter using Razumikhin-Lyapunov type functions. The next theorem offers easily verifiable conditions; its proof can be found in [145].

Theorem 2.2.4 ([145]).

Let D > 0 and suppose there is a scalar functional \(V (t,\psi _{t})\) that is continuous in ψ and locally Lipschitz in ψt when t ≥ t0 and ψt ∈ C(t) with | | ψt | | < D. In addition, we assume that if \(x: [t_{0}-\alpha,\infty ) \rightarrow \mathbb{R}^{n}\) is a bounded sequence, then F(t, xt) is bounded on [t0, ∞). If, moreover, V (t, 0) = 0, 

$$\displaystyle{ W_{1}(\vert \psi (t)\vert ) \leq V (t,\psi _{t}) \leq W_{2}(\vert \vert \psi _{t}\vert \vert ), }$$

and

$$\displaystyle{ \bigtriangleup V (t,\psi _{t}) \leq -W_{3}(\vert \psi (t)\vert ), }$$

then the zero solution of  (2.2.1) is (UAS) .

It is noted that Theorem 2.2.4 requires a Lyapunov functional, unlike Theorem 2.2.2, which uses a Lyapunov function. In the next theorem we use a Lyapunov functional and, with the aid of Theorem 2.2.4, show that the zero solution of (2.2.6) is (UAS).

Theorem 2.2.5 ([145]).

Assume there exists a δ > 0 such that

$$\displaystyle{ \vert a(t)\vert - 1 + k\delta <0\;\mathit{\mbox{ and}}\;\delta \geq \sum _{i=1}^{k}\vert b_{ i}(t)\vert. }$$

Then the zero solution of  (2.2.6) is (UAS).

Proof.

Consider the Lyapunov functional

$$\displaystyle{ V (t,x_{t}) = \vert x(t)\vert +\delta \sum _{ i=1}^{k}\sum _{ s=t-h_{i}}^{t-1}\vert x(s)\vert. }$$
(2.2.9)

Then along the solutions of (2.2.6) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x_{t})& =& \vert x(t + 1)\vert -\vert x(t)\vert +\delta \sum _{ i=1}^{k}\big[\sum _{ s=t-h_{i}+1}^{t}\vert x(s)\vert -\sum _{ s=t-h_{i}}^{t-1}\vert x(s)\vert \big] {}\\ & =& \vert x(t + 1)\vert -\vert x(t)\vert +\delta \sum _{ i=1}^{k}\vert x(t)\vert -\delta \sum _{ i=1}^{k}\vert x(t - h_{ i})\vert {}\\ &\leq & (\vert a(t)\vert - 1 + k\delta )\vert x(t)\vert +\sum _{ i=1}^{k}(\vert b_{ i}(t)\vert -\delta )\vert x(t - h_{i})\vert {}\\ &\leq & -\alpha \vert x(t)\vert,\;\mbox{ for some positive constant}\;\alpha. {}\\ \end{array}$$

To make sure the conditions of Theorem 2.2.4 are satisfied, we note that

$$\displaystyle\begin{array}{rcl} \vert x(t)\vert & \leq & V (t,x_{t}) = \vert x(t)\vert +\delta \sum _{ i=1}^{k}\sum _{ s=t-h_{i}}^{t-1}\vert x(s)\vert {}\\ & =& \vert x(t)\vert +\delta \sum _{ i=1}^{k}\sum _{ u=-h_{i}}^{-1}\vert x(u + t)\vert. {}\\ \end{array}$$

Hence, if we take \(W_{1}(\vert \psi (t)\vert ) = \vert \psi (t)\vert,\;W_{3}(\vert \psi (t)\vert ) =\alpha \vert \psi (t)\vert,\;\mbox{ and}\;W_{2}(\vert \vert \psi _{t}\vert \vert ) =\big (1 +\delta \sum _{ i=1}^{k}h_{ i}\big)\vert \vert \psi _{t}\vert \vert,\) then all the requirements of Theorem 2.2.4 are satisfied, and hence the zero solution of (2.2.6) is (UAS). This completes the proof.
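As a quick numerical sanity check (the constants below are assumptions, not from the text), the following sketch iterates a concrete instance of (2.2.6) satisfying the hypotheses of Theorem 2.2.5 and verifies that the Lyapunov functional (2.2.9) is nonincreasing along the solution:

```python
# Assumed data: a = 0.5, b1 = b2 = 0.1, k = 2, delta = 0.2, so that
# |a| - 1 + k*delta = -0.1 < 0 and delta >= |b1| + |b2| = 0.2.
a, b, h = 0.5, [0.1, 0.1], [2, 5]
delta = 0.2
hmax = max(h)

x = [1.0] * (hmax + 1)              # initial history phi = 1 on [0, hmax]

for t in range(hmax, hmax + 150):
    x.append(a * x[t] + sum(bi * x[t - hi] for bi, hi in zip(b, h)))

def V(t):
    # V(t, x_t) = |x(t)| + delta * sum_i sum_{s=t-h_i}^{t-1} |x(s)|
    return abs(x[t]) + delta * sum(
        sum(abs(x[s]) for s in range(t - hi, t)) for hi in h)

vals = [V(t) for t in range(hmax, len(x))]
print(all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:])))  # nonincreasing
```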

Note that Theorem 2.2.1 does not apply to the Lyapunov functional of Theorem 2.2.5; Theorem 2.2.1 is suitable for Volterra difference equations of convolution type. For example, if we consider

$$\displaystyle{ x(n + 1) = a(n)x(n) +\sum _{ s=0}^{n}b(n - s)g(s,x(s)), }$$

then a typical Lyapunov functional would be

$$\displaystyle{ V (n,x) = \vert x(n)\vert +\sum _{ s=0}^{n-1}\sum _{ u=n-s}^{\infty }\vert b(u)\vert \vert x(s)\vert, }$$

where the function g satisfies | g(n, x) | ≤ λ | x | for all \(n \in \mathbb{Z}^{+}\) and some positive constant λ < 1. By assuming that

$$\displaystyle{ \vert a(n)\vert +\lambda \vert b(0)\vert +\sum _{u=1}^{\infty }\vert b(u)\vert - 1 \leq -\alpha,\;\alpha> 0, }$$

we have along the solutions that

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x)& \leq & \big(\vert a(n)\vert +\lambda \vert b(0)\vert +\sum _{u=1}^{\infty }\vert b(u)\vert - 1\big)\vert x(n)\vert + (\lambda -1)\sum _{ s=0}^{n-1}\vert b(n - s)\vert \vert x(s)\vert {}\\ &\leq & -\alpha \vert x(n)\vert. {}\\ \end{array}$$

Thus all the conditions of Theorem 2.2.1 are satisfied with W1 = W2 = W3 = W4 = | x |,  W5 = α | x |,  l = 0, and \(\varphi (n-s) =\sum _{u=n-s}^{\infty }\vert b(u)\vert\). 
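As a numerical sanity check (the data below are assumptions for illustration), the following sketch iterates the convolution equation above with a(n) ≡ 0.3, the geometric kernel b(u) = 0.3·0.5^u, and g(n, x) = 0.5x, and verifies that the displayed Lyapunov functional is nonincreasing along the solution:

```python
# Assumed data: a = 0.3, b(u) = 0.3*0.5**u, g(n, x) = 0.5*x (lambda = 0.5),
# giving a comfortable negative margin in the decrement of V.
a, lam = 0.3, 0.5

def b(u):
    return 0.3 * 0.5 ** u

def tail(m):
    # closed form of sum_{u=m}^{inf} |b(u)| for this geometric kernel
    return 0.6 * 0.5 ** m

x = [1.0]                            # x(0) = 1

def V(n):
    # V(n, x) = |x(n)| + sum_{s=0}^{n-1} sum_{u=n-s}^{inf} |b(u)| |x(s)|
    return abs(x[n]) + sum(tail(n - s) * abs(x[s]) for s in range(n))

for n in range(120):
    x.append(a * x[n] + sum(b(n - s) * lam * x[s] for s in range(n + 1)))

vals = [V(n) for n in range(len(x))]
print(all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:])))  # nonincreasing
```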

We end this section with an application to the second-order difference equation with constant delay r > 0,

$$\displaystyle{ x(t + 2) + ax(t + 1) + bx(t - r) = 0,\;t \in \mathbb{Z}, }$$
(2.2.10)

where a and b are constants.

Theorem 2.2.6 ([145]).

Suppose there are positive constants η1, η2, α1, α2 and γ such that

$$\displaystyle{ \alpha _{1}\vert b\vert -\alpha _{2} +\gamma r \leq -\eta _{1},\;\alpha _{1}\vert a\vert -\alpha _{1} +\alpha _{2} +\gamma r \leq -\eta _{2}, }$$

and

$$\displaystyle{ \alpha _{1}\vert b\vert -\gamma \leq 0. }$$

Then the zero solution of  (2.2.10) is (UAS).

Proof.

First we write (2.2.10) as a system by letting y(t) = x(t + 1). Then by noting that

$$\displaystyle{ \bigtriangleup x(t) = y(t) - x(t), }$$

we have

$$\displaystyle{ b\sum _{s=-r}^{-1}\big(y(t + s) - x(t + s)\big) = b\sum _{ s=-r}^{-1}\bigtriangleup x(t + s) = bx(t) - bx(t - r). }$$

This implies that Equation (2.2.10) is equivalent to the system

$$\displaystyle{ \left \{\begin{array}{l} x(t + 1) = y(t) \\ y(t + 1) = -bx(t) - ay(t) + b\sum _{s=-r}^{-1}\big(y(t + s) - x(t + s)\big)\end{array} \right. }$$
(2.2.11)

Let

$$\displaystyle{ \beta =\min \{\eta _{1},\;\eta _{2}\} }$$

and define the Lyapunov functional

$$\displaystyle{ V (x_{t},y_{t}) =\alpha _{1}\vert y(t)\vert +\alpha _{2}\vert x(t)\vert +\gamma \sum _{ s=-r}^{-1}\sum _{ u=t+s}^{t-1}\big(\vert y(u)\vert + \vert x(u)\vert \big). }$$

Then along the solutions of (2.2.11) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (x_{t},y_{t})& \leq & (\alpha _{1}\vert b\vert -\alpha _{2} +\gamma r)\vert x(t)\vert + (\alpha _{1}\vert a\vert -\alpha _{1} +\alpha _{2} +\gamma r)\vert y(t)\vert \\ & +& (\alpha _{1}\vert b\vert -\gamma )\sum _{s=-r}^{-1}\big(\vert y(t + s)\vert + \vert x(t + s)\vert \big) \\ & \leq & -\beta \big(\vert x\vert + \vert y\vert \big). {}\end{array}$$
(2.2.12)

The results follow from Theorem 2.2.4.
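As a numerical illustration (the constants are assumptions chosen to satisfy the hypotheses of Theorem 2.2.6), the following sketch iterates (2.2.10) directly and observes the predicted decay:

```python
# Assumed data for (2.2.10): a = 0.3, b = 0.05, r = 2.  With alpha1 = 1,
# alpha2 = 0.5, gamma = 0.05 the hypotheses hold, e.g.
#   alpha1*|b| - alpha2 + gamma*r = -0.35,
#   alpha1*|a| - alpha1 + alpha2 + gamma*r = -0.1,
# so the zero solution should be (UAS).
a, b, r = 0.3, 0.05, 2

x = [1.0] * (r + 2)                 # initial history
for _ in range(300):
    # x(t+2) = -a*x(t+1) - b*x(t-r)
    x.append(-a * x[-1] - b * x[-(r + 2)])

print(abs(x[-1]))                   # decays toward 0
```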

Remark 2.1.

In Theorem 2.2.6 we saw that the stability depended on the size of the delay, which was not the case in Theorem 2.2.5.

In [112] the authors considered the linear difference system with diagonal delay

$$\displaystyle\begin{array}{rcl} x(n + 1) = ax(n - h) + by(n)& & \\ y(n + 1) = cx(n) + ay(n - h)& &{}\end{array}$$
(2.2.13)

where a, b, and c are real numbers and h is a positive integer. They used the method of characteristics and proved two theorems on the asymptotic stability of the zero solution of (2.2.13) by imposing conditions on the size of the delay, namely \(\sqrt{ \vert ab\vert } <(h + 1)/h\). They also required that b and c have the same sign and that the delay h be odd. The system above has some limitations, in that all the coefficients are constant and both diagonal entries carry the same coefficient a. Next we display a Lyapunov functional that yields (UAS) of the zero solution of (2.2.13) by appealing to Theorem 2.2.4.

Theorem 2.2.7 ([145]).

Let δ be a positive constant such that

$$\displaystyle{ \vert a\vert -\delta \leq 0,\;\;\vert c\vert +\delta -1 <0,\;\;\mathit{\text{and}}\;\;\vert b\vert +\delta -1 <0. }$$

Then the zero solution of  (2.2.13) is (UAS).

Proof.

Let

$$\displaystyle{ \beta =\min \{ 1 -\vert c\vert -\delta,\;1 -\vert b\vert -\delta \} }$$

and define the Lyapunov functional

$$\displaystyle{ V (x,y) = \vert x(n)\vert + \vert y(n)\vert +\delta \sum _{ s=n-h}^{n-1}\big(\vert x(s)\vert + \vert y(s)\vert \big). }$$

Then along the solutions of (2.2.13),

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (x,y)& \leq & (\vert a\vert -\delta )\vert x(n - h)\vert + (\vert c\vert +\delta -1)\vert x(n)\vert {}\\ & +& (\vert a\vert -\delta )\vert y(n - h)\vert + (\vert b\vert +\delta -1)\vert y(n)\vert {}\\ &\leq & -\beta \big(\vert x\vert + \vert y\vert \big). {}\\ \end{array}$$

The (UAS) follows from Theorem 2.2.4.
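As a quick numerical sanity check (the constants are assumptions, not from the text), the sketch below iterates a concrete instance of (2.2.13) whose data satisfy the hypotheses of Theorem 2.2.7 with δ = 0.1:

```python
# Assumed data: |a| - delta = 0, |c| + delta - 1 = -0.6 < 0,
# |b| + delta - 1 = -0.5 < 0, so Theorem 2.2.7 predicts (UAS).
a, b, c, h = 0.1, 0.4, 0.3, 2

X = [1.0] * (h + 1)                 # initial histories
Y = [1.0] * (h + 1)

for n in range(h, h + 300):
    X.append(a * X[n - h] + b * Y[n])
    Y.append(c * X[n] + a * Y[n - h])

print(abs(X[-1]) + abs(Y[-1]))      # decays toward 0
```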

Next we extend Theorem 2.2.7 to the Volterra delay system

$$\displaystyle\begin{array}{rcl} x(n + 1) = ax(n - h) + by(n) + d_{1}\sum _{s=-h}^{-1}y(n + s)& & \\ y(n + 1) = cx(n) + ay(n - h) + d_{2}\sum _{s=-h}^{-1}x(n + s)& &{}\end{array}$$
(2.2.14)

Theorem 2.2.8 ([145]).

Let δ1 and δ2 be positive constants such that

$$\displaystyle{ \vert c\vert + h\delta _{1} +\delta _{2} - 1 <0,\;\;\vert b\vert + h\delta _{1} +\delta _{2} - 1 <0, }$$

and

$$\displaystyle{ \vert a\vert -\delta _{2} \leq 0,\;\;\mathit{\text{and}}\;\;\vert d_{1}\vert + \vert d_{2}\vert -\delta _{1} \leq 0. }$$

Then the zero solution of  (2.2.14) is (UAS).

Proof.

Let

$$\displaystyle{ \beta =\max \{ \vert a\vert + h\delta _{1} +\delta _{2} - 1,\;\vert b\vert + h\delta _{1} +\delta _{2} - 1\} }$$

and define the Lyapunov functional

$$\displaystyle\begin{array}{rcl} V (x,y)& =& \vert x(n)\vert + \vert y(n)\vert +\delta _{2}\sum _{s=n-h}^{n-1}\big(\vert x(s)\vert + \vert y(s)\vert \big) {}\\ & +& \delta _{1}\sum _{s=-h}^{-1}\sum _{ u=n+s}^{n-1}\big(\vert x(u)\vert + \vert y(u)\vert \big). {}\\ \end{array}$$

Then along the solutions of (2.2.14)

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (x,y)& \leq & (\vert a\vert -\delta _{2})\vert x(n - h)\vert + (\vert a\vert + h\delta _{1} +\delta _{2} - 1)\vert x(n)\vert {}\\ & +& (\vert a\vert -\delta _{2})\vert y(n - h)\vert + (\vert b\vert + h\delta _{1} +\delta _{2} - 1)\vert y(n)\vert {}\\ &\leq & \beta \big(\vert x\vert + \vert y\vert \big) {}\\ & +& (\vert d_{1}\vert + \vert d_{2}\vert -\delta _{1})\sum _{s=-h}^{-1}\big(\vert x(n + s)\vert + \vert y(n + s)\vert \big) {}\\ & \leq & \beta \big(\vert x\vert + \vert y\vert \big). {}\\ \end{array}$$

Hence, the result of (UAS) follows from Theorem 2.2.4.
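The same experiment extends to (2.2.14); the constants below are our own illustrative choices and satisfy the hypotheses of Theorem 2.2.8 with δ1 = 0.1 and δ2 = 0.2 (indeed |a| + hδ1 + δ2 − 1 = −0.4 < 0, |a| − δ2 = 0 ≤ 0, and |d1| + |d2| − δ1 = 0 ≤ 0).

```python
# Simulate the Volterra delay system (2.2.14) with illustrative constants
# chosen to satisfy the hypotheses of Theorem 2.2.8.
a, b, c, d1, d2, h = 0.2, 0.2, 0.2, 0.05, 0.05, 2
steps = 300
x = [1.0] * (h + 1)   # initial data on [-h, 0]
y = [1.0] * (h + 1)
for n in range(h, h + steps):
    sx = sum(x[n + s] for s in range(-h, 0))   # sum_{s=-h}^{-1} x(n+s)
    sy = sum(y[n + s] for s in range(-h, 0))
    x.append(a * x[n - h] + b * y[n] + d1 * sy)
    y.append(c * x[n] + a * y[n - h] + d2 * sx)
final = max(abs(x[-1]), abs(y[-1]))
print(final)
```

Here each new value is at most 0.5 times the maximum over the recent window, so the solution decays as the theorem predicts.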

The next theorem shows that the zero solution of the nonlinear delay difference equation

$$\displaystyle{ x(n + 1) = a(n)f(x(n)) + b(n)g(x(n-\tau )),\;\tau \in \mathbb{Z}^{+} }$$
(2.2.15)

is (UAS).

Theorem 2.2.9.

Suppose f and g are continuous and there are positive constants α, β, and γ with γ > 1 + α such that

$$\displaystyle{ \gamma \vert a(n)\vert \vert f(x)\vert \leq \vert x\vert,\;\vert f(x)\vert \geq \vert g(x)\vert \;\mathit{\text{for}}\;\;0 <\vert x\vert <\beta, }$$
(2.2.16)

and

$$\displaystyle{ (1-\gamma )\vert a(n)\vert + \vert b(n+\tau )\vert \leq -\alpha \vert a(n)\vert. }$$
(2.2.17)

Then the zero solution of  (2.2.15) is (UAS).

Proof.

Define the Lyapunov functional

$$\displaystyle{ V (n,x_{n}) = \vert x(n)\vert +\sum _{ s=n-\tau }^{n-1}\big(\vert b(s+\tau )\vert \vert g(x(s))\vert \big). }$$
(2.2.18)

Then along the solutions of (2.2.15) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x_{n})& \leq & \vert a(n)\vert \vert f(x(n))\vert + \vert b(n)\vert \vert g(x(n-\tau ))\vert -\vert x(n)\vert {}\\ & +& \vert b(n+\tau )\vert \vert g(x(n))\vert -\vert b(n)\vert \vert g(x(n-\tau ))\vert. {}\\ \end{array}$$

Using (2.2.16) and (2.2.17) we arrive at

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x_{n})& \leq & \big(\vert a(n)\vert + \vert b(n+\tau )\vert -\gamma \big\vert a(n)\vert )\vert f(x(n))\vert \\ &\leq & -\alpha \vert a(n)\vert \vert f(x(n))\vert. {}\end{array}$$
(2.2.19)

The (UAS) follows from Theorem 2.2.4. This completes the proof.
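As a sanity check of Theorem 2.2.9 on a hedged special case, take f(x) = g(x) = x, a(n) ≡ 0.4, b(n) ≡ 0.1, and τ = 2 (our own choices); then (2.2.17) holds with γ = 2 and α = 0.5, since (1 − γ)|a| + |b| = −0.3 ≤ −α|a| = −0.2, and the simulated solution of (2.2.15) decays.

```python
# Simulate (2.2.15) in the special case f(x) = g(x) = x with constant
# coefficients (an illustrative instance, not the theorem's full generality).
a, b, tau = 0.4, 0.1, 2
steps = 200
x = [1.0] * (tau + 1)   # initial data on [-tau, 0]
for n in range(tau, tau + steps):
    x.append(a * x[n] + b * x[n - tau])
print(abs(x[-1]))
```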

2.2.1 Application to Volterra Difference Equations

In this section we apply Theorems 2.1.1, 2.1.2, and 2.1.3 to establish stability and boundedness results regarding the nonlinear Volterra discrete system

$$\displaystyle{ x(n + 1) = A(n)x(n) +\sum _{ s=0}^{n}C(n,s)f(x(s)) + g(n,x(n)) }$$
(2.2.20)

where A and C are k × k matrices, and g and f are k × 1 vector functions with | g(n, x(n)) | ≤ N and | f(x) | ≤ λ | x | for some positive constants N and λ. 

In the case f(x) = x, Medina [117] showed that if the zero solution of the homogeneous equation associated with (2.2.20) is uniformly asymptotically stable, then all solutions of (2.2.20) are bounded when C(n, s) = C(n − s) and g(n, x) = g(n) is bounded. In proving his results, Medina used the notion of the resolvent matrix coupled with the variation of parameters formula. Also, the author in [128] used Lyapunov functionals of convolution type coupled with the z-transform and obtained results on the boundedness of solutions of (2.2.20) when g(n, x(n)) = g(n). Moreover, we saw in Chapter 1 that when f is linear in x, unlike the case here, we used total stability and, under suitable conditions, showed that the zero solution of (2.2.20) is uniformly asymptotically stable when | g(n, x) | ≤ λ(n) | x |. We remark that the notion of the resolvent cannot be used to obtain boundedness of solutions of (2.2.20), since the summation term in (2.2.20) is nonlinear.

Theorem 2.2.10 ([133]).

Suppose A(n) = A is a k × k constant matrix, and \(C^{T}(n,s) = C(n,s)\). Let I be the k × k identity matrix. Also, suppose there exist positive constants ρ, μ, and a constant k × k symmetric matrix B such that

$$\displaystyle{ A^{T}BA - B = -\mu I, }$$
(2.2.21)
$$\displaystyle{ \lambda \vert A^{T}B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert + \vert B\vert \sum _{ s=n}^{\infty }\vert C(n,s)\vert + N^{2}-\mu \leq -\rho, }$$
(2.2.22)

and

$$\displaystyle{ \lambda \vert A^{T}B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert +\lambda ^{2}\vert B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert +\lambda -\vert B\vert \leq 0. }$$
(2.2.23)

Then solutions of  (2.2.20) are (UB) .

Proof.

Define the Lyapunov functional V (n) = V (n, x(n, ⋅ )) by

$$\displaystyle{ V (n,x(\cdot )) = x^{T}(n)Bx(n) + \vert B\vert \sum _{ j=0}^{n-1}\sum _{ s=n}^{\infty }\vert C(s,j)\vert x^{2}(j), }$$
(2.2.24)

where x 2(j) = x T(j)x(j). Then along solutions of (2.2.20) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V _{\text{(2.2.20)}}(n)& =& x^{T}(n)\big[A^{T}BA - B\big]x(n) + 2x^{T}(n)A^{T}B\sum _{ s=0}^{n}C(n,s)f(x(s)) \\ & & +2x^{T}(n)A^{T}Bg(n,x(n)) + 2g^{T}(n,x(n))B\sum _{ s=0}^{n}C(n,s)f(x(s)) \\ & & +\sum _{s=0}^{n}f^{T}(x(s))C(n,s)^{T}B\sum _{ s=0}^{n}C(n,s)f(x(s)) \\ & & +\vert B\vert \sum _{s=n+1}^{\infty }\vert C(n,s)\vert x^{2}(n) -\vert B\vert \sum _{ s=0}^{n-1}\vert C(n,s)\vert x^{2}(s) \\ & & +g^{T}(n,x(n))Bg(n,x(n)). {}\end{array}$$
(2.2.25)

Using (2.2.21)–(2.2.23) and the fact that for any two real numbers a and b, \(2ab \leq a^{2} + b^{2}\), equation (2.2.25) reduces to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V _{\text{(2.2.20)}}(n)& \leq & \Big[\lambda \vert A^{T}B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert + \vert B\vert \sum _{ s=n}^{\infty }\vert C(n,s)\vert + N^{2}-\mu \Big]x^{2}(n) {}\\ & & \!+\Big[\lambda \vert A^{T}B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert +\lambda ^{2}\vert B\vert \sum _{ s=0}^{n}\vert C(n,s)\vert +\lambda -\vert B\vert \Big]\sum _{ s=0}^{n}\vert C(n,s)\vert x^{2}(s) {}\\ & & \!+\vert A^{T}B\vert ^{2} +\lambda N^{2}\vert B\vert ^{2}\sum _{ s=0}^{n}\vert C(n,s)\vert + \vert g^{T}Bg\vert {}\\ &\leq & -\rho x^{2}(n) + K, {}\\ \end{array}$$

where \(K = \vert A^{T}B\vert ^{2} +\lambda N^{2}\vert B\vert ^{2}\sum _{s=0}^{n}\vert C(n,s)\vert + \vert g^{T}Bg\vert \). Thus, by Theorem 2.1.1 all solutions of (2.2.20) are (UB).

In the next theorem, we use Theorem 2.1.3 to establish (UB) and (UAS) for (2.2.20), when g(n, x(n)) is identically zero.

Theorem 2.2.11 ([133]).

Assume g(n, x(n)) = 0 and suppose there is a function φ(n) ≥ 0 with \(\bigtriangleup \varphi (n) \leq 0\) for n ≥ 0, and \(\bigtriangleup _{n}\varphi (n - s - 1) + \vert C(n,s)\vert \leq 0\) for 0 ≤ s < n < ∞. Also, suppose that for n ≥ 0, | A(n) | + | C(n, n) | + φ(0) ≤ 1 −ρ for some ρ ∈ (0, 1).

a) If \(\sum _{s=0}^{\infty }\varphi (s) = B\), then solutions of  (2.2.20) are (UB) and the zero solution of  (2.2.20) is (US).

b) If \(\sum _{n=0}^{\infty }\sum _{s=n}^{\infty }\varphi (s) = J\), then solutions of  (2.2.20) are (UUB) and the zero solution of  (2.2.20) is (UAS).

Proof.

Define the Lyapunov functional V (n) = V (n, x(n, ⋅ )), by

$$\displaystyle{ V (n) = \vert x(n)\vert +\sum _{ s=0}^{n-1}\varphi (n - s - 1)\vert x(s)\vert,\;\;\;\;n \geq 0. }$$
(2.2.26)

Then along solutions of (2.2.20) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V _{\text{(2.2.20)}}(n)& \leq & \big(\vert A(n)\vert + \vert C(n,n)\vert +\varphi (0) - 1\big)\vert x(n)\vert {}\\ & & +\sum _{s=0}^{n-1}\big(\bigtriangleup _{ n}\varphi (n - s - 1) + \vert C(n,s)\vert \big)\vert x(s)\vert {}\\ &\leq & -\rho \vert x(n)\vert, {}\\ \end{array}$$

and the results follow from Theorem 2.1.3.
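For a concrete scalar instance of Theorem 2.2.11 (our illustrative choices, not from [133]), take A(n) ≡ 0.7, f(x) = x, g ≡ 0, C(n, s) = 0.1·(0.5)^{n−s}, and φ(n) = 0.1·(0.5)^n. Then △φ ≤ 0, △_n φ(n − s − 1) + |C(n, s)| = 0, and |A| + |C(n, n)| + φ(0) = 0.9 = 1 − ρ with ρ = 0.1; moreover the sums B = 0.2 and J = 0.4 are finite, so (UAS) is expected:

```python
# Scalar instance of (2.2.20) with g = 0: x(n+1) = A*x(n) + sum C(n,s)*x(s),
# where A = 0.7 and C(n,s) = 0.1 * 0.5**(n-s) (illustrative data).
A, c0, r = 0.7, 0.1, 0.5
steps = 400
x = [1.0]
S = x[0]                    # S = sum_{s=0}^{n} r**(n-s) * x(s)
for n in range(steps):
    x.append(A * x[n] + c0 * S)
    S = r * S + x[-1]       # running convolution update
print(abs(x[-1]))
```

The running sum S turns the convolution kernel into a one-term recursion, so the whole simulation is linear in the number of steps.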

2.3 Necessary and Sufficient Conditions

We prove a general theorem giving necessary and sufficient conditions for uniform boundedness of solutions of functional difference systems. We apply our results to finite- and infinite-delay Volterra difference equations. In the analysis we state and prove a discrete Jensen-type inequality. Thus we consider the system of functional difference equations of the form

$$\displaystyle{ x(n + 1) = G(n,x_{n}),\;x \in \mathbb{R}^{k} }$$
(2.3.1)

where \(G: \mathbb{Z}^{+} \times \mathbb{R}^{k} \rightarrow \mathbb{R}^{k}\) is continuous in x. Let C be the set of bounded sequences \(\phi: (-\infty,0] \rightarrow \mathbb{R}^{k}\) with the maximum norm. For \(n \in \mathbb{Z}^{+}\), C(n) denotes the set of sequences \(\psi: [0,n] \rightarrow \mathbb{R}^{k}\) with \(\vert \vert \psi \vert \vert =\max \{ \vert \psi (s)\vert: 0 \leq s \leq n\}\), where | ⋅ | is the Euclidean norm on \(\mathbb{R}^{k}\).

We assume that for each n0 ≥ 0 and each ϕ ∈ C(n0) there is at least one solution x(n, n0, ϕ) of (2.3.1) defined on an interval [n0, α] with \(x_{n_{0}} =\phi\). If the solution remains bounded, then α = ∞. Notationally, xn(s) = x(n + s) for s ≤ 0. If D is a matrix or a vector, | D | means the sum of the absolute values of the elements.

Definition 2.3.1.

Solutions of (2.3.1) are (UB) if for each B1 > 0 there is B2 > 0 such that \(\big[n_{0} \geq 0,\phi \in C,\vert \vert \phi \vert \vert <B_{1},\;n \geq n_{0}\big]\) implies | x(n, n0, ϕ) | < B2. 

Definition 2.3.2.

Solutions of (2.3.1) are (UUB) if there is a B > 0 and for each B3 > 0 there is N > 0 such that \(\big[n_{0} \geq 0,\phi \in C,\vert \vert \phi \vert \vert <B_{3},\;n \geq n_{0} + N\big]\) implies | x(n, n0, ϕ) | < B. 

Theorem 2.3.1 ([133]).

Let \(\mathbb{R}^{+} = [0,\infty )\) and assume there is a scalar sequence \(\varPhi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{+}\) with \(\varPhi \in l^{\infty }(\mathbb{Z}^{+})\). Assume the existence of wedges Wj, j = 1, …, 5, with W1(r) → ∞ as r → ∞, and positive constants K, M with W5(K) > M. Suppose there is a functional \(V: \mathbb{R}^{+} \times C \rightarrow \mathbb{R}^{+}\) such that for each x ∈ C(n), we have:

$$\displaystyle{ W_{1}(\vert x(n)\vert ) \leq V (n,x_{n}) \leq W_{2}(\vert x(n)\vert ) + W_{3}\Big(\sum _{s=0}^{n-1}\varPhi (n - s)W_{ 4}(\vert x(s)\vert )\Big) }$$
(2.3.2)

and

$$\displaystyle{ \bigtriangleup V (n,x_{n}) \leq -W_{5}(\vert x(n)\vert ) + M. }$$
(2.3.3)

Then solutions of  (2.3.1) are (UB) if and only if for each K1 > 0, there exists K2 > 0 such that if x(n) = x(n, n0, ϕ) is a solution of  (2.3.1) with | | ϕ | | ≤ K1, then

$$\displaystyle{ \sum _{s=n_{0}}^{n^{{\ast}}-1 }\varPhi (n^{{\ast}}- s)W_{ 4}(\vert x(s)\vert ) \leq K_{2} }$$
(2.3.4)

whenever v(s) < v(n∗) for n0 ≤ s < n∗, where v(s) = V (s, xs). 

Proof.

Suppose first that solutions of (2.3.1) are (UB), and let x(n) = x(n, n0, ϕ) be any solution. Then, for every B1 > 0 there exists B2 > 0, say B2 > B1, so that n0 ≥ 0, | | ϕ | | < B1, and n ≥ n0 imply | x(n, n0, ϕ) | < B2. Let \(J:=\sum _{u=0}^{\infty }\varPhi (u)\). Then, for n ≥ n0 we have that

$$\displaystyle{ \sum _{s=0}^{n-1}\varPhi (n - s)W_{ 4}(\vert x(s)\vert ) \leq \sum _{s=0}^{n-1}\varPhi (n - s)W_{ 4}(B_{2}) \leq JW_{4}(B_{2}). }$$

This completes the proof of (2.3.4).

Conversely, suppose that (2.3.4) holds. Then for x(n) = x(n, n0, ϕ) and v(n) = V (n, xn) with | | ϕ | | < B1, we have the two cases:

  1. (i)

    v(n) ≤ v(n0) for all n ≥ n0, or

  2. (ii)

        v(s) ≤ v(n∗) for some n∗ > n0 and all n0 ≤ s < n∗. 

If (i) holds, then

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(n)\vert ) \leq V (n)& \leq & V (n_{0}) {}\\ & \leq & W_{2}(\vert \phi (n_{0})\vert ) + W_{3}\Big(\sum _{s=0}^{n_{0}-1}\varPhi (n_{0} - s)W_{4}(\vert \phi (s)\vert )\Big) {}\\ & \leq & W_{2}(B_{1}) + W_{3}\big(JW_{4}(B_{1})\big). {}\\ \end{array}$$

From which it follows that

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\big[W_{2}(B_{1}) + W_{3}\big(JW_{4}(B_{1})\big)\big]. }$$

On the other hand, if (ii) holds, then V (n, ⋅ ) is increasing up to n∗ and hence we have 0 ≤ −W5( | x(n∗) | ) + M; that is, W5( | x(n∗) | ) ≤ M. Now since W5(K) > M, we get \(\vert x(n^{{\ast}})\vert \leq W_{5}^{-1}(M)\). It follows from (2.3.2) and (2.3.4) that

$$\displaystyle\begin{array}{rcl} v(n^{{\ast}})& \leq & W_{ 2}(\vert x(n^{{\ast}})\vert ) + W_{ 3}\Big(\sum _{s=0}^{n_{0}-1}\varPhi (n^{{\ast}}- s)W_{ 4}(\vert x(s)\vert ) +\sum _{ s=n_{0}}^{n^{{\ast}}-1 }\varPhi (n^{{\ast}}- s)W_{ 4}(\vert x(s)\vert )\Big) {}\\ & \leq & W_{2}\left [W_{5}^{-1}(M)\right ] + W_{ 3}\left [JW_{4}(K_{1}) + K_{2}\right ]. {}\\ \end{array}$$

Since n∗ is arbitrary, we have for all n ≥ n0 that

$$\displaystyle\begin{array}{rcl} v(n)& \leq & W_{2}\left [W_{5}^{-1}(M)\right ] + W_{ 3}\left [JW_{4}(K_{1}) + K_{2}\right ] + v(n_{0}). {}\\ & \leq & W_{2}\left [W_{5}^{-1}(M)\right ] + W_{ 3}\left [JW_{4}(K_{1}) + K_{2}\right ] + W_{2}(K_{1}) + W_{3}(JW_{4}(K_{1})).{}\\ \end{array}$$

On the other hand, from (2.3.2) we have W1( | x(n) | ) ≤ v(n), which implies that

$$\displaystyle{ \vert x(n)\vert \leq W_{1}^{-1}\Big[W_{ 2}\left [W_{5}^{-1}(M)\right ]+W_{ 3}\left [JW_{4}(K_{1}) + K_{2}\right ]+W_{2}(K_{1})+W_{3}(JW_{4}(K_{1}))\Big]. }$$
(2.3.5)

Finally, for all n ≥ n0, we have | x(n) | ≤ B2, where B2 is given by the right side of (2.3.5).

This completes the proof. The next lemma is needed for our next results; one could say it is a Jensen-type inequality.

Lemma 2.1 (Raffoul Jensen’s Type Discrete Inequality [133]).

Assume \(\varPhi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{+}\) such that \(\varPhi (u + 1),\bigtriangleup \varPhi (u)u \in l^{1}(\mathbb{Z}^{+}).\) Also, assume that \(q: [n_{0},n] \rightarrow \mathbb{R}^{+}\) is such that for constants α and β, we have

$$\displaystyle{ \frac{1} {n - s}\sum _{u=s}^{n-1}q(u) \leq \alpha + \frac{\beta } {n - s} }$$
(2.3.6)

for all n0s < n. Then,

$$\displaystyle{ \sum _{s=n_{0}}^{n-1}\varPhi (n - s)q(s) \leq \alpha \max _{ n\geq 0}\big\{\varPhi (n)n +\sum _{ u=0}^{n-1}\vert \bigtriangleup \varPhi (u)\vert u\big\} +\beta \sum _{ u=0}^{\infty }\vert \bigtriangleup \varPhi (u)\vert. }$$
(2.3.7)

Proof.

Let b be any positive integer. Then since \(\varPhi (u + 1),\bigtriangleup \varPhi (u)u \in l^{1}(\mathbb{Z}^{+}),\) we have Φ() = 0. Moreover,

$$\displaystyle{ \sum _{u=b}^{\infty }\vert \bigtriangleup \varPhi (u)\vert \geq \vert \sum _{ u=b}^{\infty }\bigtriangleup \varPhi (u)\vert = \vert \varPhi (\infty ) -\varPhi (b)\vert, }$$

from which we have

$$\displaystyle{ \varPhi (b) \leq \sum _{u=b}^{\infty }\vert \bigtriangleup \varPhi (u)\vert. }$$
(2.3.8)

We claim that since \(\varPhi (u + 1),\bigtriangleup \varPhi (u)u \in l^{1}(\mathbb{Z}^{+}),\) we have Φ() = 0. To see this for any two sequences y and z, we use the summation by parts formula

$$\displaystyle{ \sum (\bigtriangleup z)y = yz -\sum Ez\bigtriangleup y, }$$

where Ez(n) = z(n + 1). With this in mind, we have

$$\displaystyle{ \sum _{u=0}^{\infty }\bigtriangleup \varPhi (u)u =\varPhi (u)u\mid _{ u=0}^{\infty }-\sum _{ u=0}^{\infty }\varPhi (u + 1). }$$

From which we get

$$\displaystyle{ \varPhi (u)u\mid _{u=0}^{\infty } =\sum _{ u=0}^{\infty }\bigtriangleup \varPhi (u)u +\sum _{ u=0}^{\infty }\varPhi (u + 1) <\infty. }$$

Suppose the contrary; that is, \(\varPhi (\infty ) \nrightarrow 0.\) Then, there exist ε > 0 and T large enough so that Φ(p) > ε for p > T. Thus,

$$\displaystyle{ \lim _{p\rightarrow \infty }p\varPhi (p) \geq \lim _{p\rightarrow \infty }p\epsilon = \infty, }$$

which contradicts the fact that \(\bigtriangleup \varPhi (u)u,\varPhi (u + 1) \in l^{1}(\mathbb{Z}^{+}).\) This completes the proof of the claim.

Again, using summation by parts, we get

$$\displaystyle{ \sum _{u=0}^{b-1}\vert \bigtriangleup \varPhi (u)u\vert \geq \vert \sum _{ u=0}^{b-1}\bigtriangleup \varPhi (u)u\vert = \vert \varPhi (b)b -\sum _{ u=0}^{b-1}\varPhi (u + 1)\vert. }$$

From this inequality we arrive at,

$$\displaystyle\begin{array}{rcl} \vert \varPhi (b)b\vert -\vert \sum _{u=0}^{b-1}\varPhi (u + 1)\vert & \leq & \vert \varPhi (b)b -\sum _{ u=0}^{b-1}\varPhi (u + 1)\vert {}\\ &\leq & \sum _{u=0}^{b-1}\vert \bigtriangleup \varPhi (u)u\vert, {}\\ \end{array}$$

from which we get

$$\displaystyle{ \varPhi (b)b \leq \sum _{u=0}^{\infty }\left [\varPhi (u + 1) + \vert \bigtriangleup \varPhi (u)u\vert \right ] <\infty. }$$

Let y = Φ(n − s) and △z = q(s). Then we may take \(z = -\sum _{u=s}^{n-1}q(u)\). Hence,

$$\displaystyle\begin{array}{rcl} \sum _{s=n_{0}}^{n-1}\varPhi (n - s)q(s)& \leq & \varPhi (n - s)\big(-\sum _{ u=s}^{n-1}q(u)\big)\mid _{ s=n_{0}}^{n} -\sum _{ s=n_{0}}^{n-1}\bigtriangleup \varPhi (n - s)\sum _{ u=s+1}^{n-1}q(u) {}\\ & =& \varPhi (n - n_{0})\sum _{u=n_{0}}^{n-1}q(u) -\sum _{ s=n_{0}}^{n-1}\bigtriangleup \varPhi (n - s)\sum _{ u=s+1}^{n-1}q(u) {}\\ & \leq & \varPhi (n - n_{0})\sum _{u=n_{0}}^{n-1}q(u) +\sum _{ s=n_{0}}^{n-1}\vert \bigtriangleup \varPhi (n - s)\vert \sum _{ u=s}^{n-1}q(u) {}\\ & \leq & \varPhi (n - n_{0})\Big((n - n_{0})\alpha +\beta \Big) +\sum _{ s=n_{0}}^{n-1}\vert \bigtriangleup \varPhi (n - s)\vert \Big((n - s)\alpha +\beta \Big) {}\\ & =& \left [\varPhi (n - n_{0})(n - n_{0}) +\sum _{ s=n_{0}}^{n-1}\vert \bigtriangleup \varPhi (n - s)\vert (n - s)\right ]\alpha {}\\ & +& \left [\varPhi (n - n_{0}) +\sum _{ s=n_{0}}^{n-1}\vert \bigtriangleup \varPhi (n - s)\vert \right ]\beta {}\\ & =& \left [\varPhi (n - n_{0})(n - n_{0}) +\sum _{ u=1}^{n-n_{0} }\vert \bigtriangleup \varPhi (u)\vert u\right ]\alpha \;(\mbox{ letting}\;u = n - s) {}\\ & +& \left [\varPhi (n - n_{0})(n - n_{0}) +\sum _{ u=1}^{n-n_{0} }\vert \bigtriangleup \varPhi (u)\vert \right ]\beta {}\\ & \leq & \left [\varPhi (n - n_{0})(n - n_{0}) +\sum _{ u=0}^{n-n_{0} }\vert \bigtriangleup \varPhi (u)\vert u\right ]\alpha {}\\ & +& \left [\varPhi (n - n_{0})(n - n_{0}) +\sum _{ u=0}^{n-n_{0} }\vert \bigtriangleup \varPhi (u)\vert \right ]\beta {}\\ & \leq & \alpha \max _{n\geq 0}\{\varPhi (n)n +\sum _{ u=0}^{n-1}\vert \bigtriangleup \varPhi (u)\vert u\} {}\\ & +& \varPhi (n - n_{0})(n - n_{0}) + \left [\sum _{u=0}^{\infty }\vert \bigtriangleup \varPhi (u)\vert -\sum _{ u=n-n_{0}}^{\infty }\vert \bigtriangleup \varPhi (u)\vert \right ]\beta {}\\ & \leq & \alpha \max _{n\geq 0}\{\varPhi (n)n +\sum _{ u=0}^{n-1}\vert \bigtriangleup \varPhi (u)\vert u\} +\beta \sum _{ u=0}^{\infty }\vert \bigtriangleup \varPhi (u)\vert <\infty, {}\\ \end{array}$$

where we have used (2.3.8). This completes the proof.
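A quick numerical sanity check of (2.3.7): with the illustrative choices Φ(n) = (1∕2)^n, q ≡ 1, and n0 = 0, hypothesis (2.3.6) holds with α = 1 and β = 0, and the left side of (2.3.7) indeed never exceeds the right side.

```python
# Numerically verify the Jensen-type bound (2.3.7) for Phi(n) = 0.5**n,
# q(u) = 1 (so (2.3.6) holds with alpha = 1, beta = 0), and n0 = 0.
Phi = lambda n: 0.5 ** n
dPhi = lambda u: Phi(u + 1) - Phi(u)   # forward difference of Phi
alpha, beta, nmax = 1.0, 0.0, 60

inner = lambda n: sum(abs(dPhi(u)) * u for u in range(n))
rhs = alpha * max(Phi(n) * n + inner(n) for n in range(nmax + 1)) \
      + beta * sum(abs(dPhi(u)) for u in range(200))
lhs_max = max(sum(Phi(n - s) for s in range(n)) for n in range(1, nmax + 1))
print(lhs_max, rhs)
```

Here the left side tends to 1 from below while the right side also equals roughly 1, so the bound is nearly sharp for this choice of Φ and q.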

Theorem 2.3.2 ([133]).

Assume there is a scalar sequence \(\varPhi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{+}\) with \(\varPhi \in l^{\infty }(\mathbb{Z}^{+})\). Assume the existence of wedges Wj, j = 1, …, 5, with W1(r) → ∞ as r → ∞, and positive constants K, M with W5(K) > M. Suppose there is a functional \(V: \mathbb{R}^{+} \times C \rightarrow \mathbb{R}^{+}\) such that for each x ∈ C(n), (2.3.2) and  (2.3.3) hold. Suppose that for every α > 0, there exists α∗ > 0 such that for 0 ≤ s < n, 

$$\displaystyle{ \frac{1} {n - s}\sum _{u=s}^{n-1}W_{ 5}(\vert x(u)\vert ) \leq \alpha \Rightarrow \frac{1} {n - s}\sum _{u=s}^{n-1}W_{ 4}(\vert x(u)\vert ) \leq \alpha ^{{\ast}}. }$$
(2.3.9)

Then solutions of  (2.3.1) are (UB).

Proof.

Let x(n) = x(n, n0, ϕ) be a solution of (2.3.1) and B1 > 0 with | | ϕ | | < B1. Set v(n) = V (n, xn). In case (ii) of the proof of Theorem 2.3.1, we sum (2.3.3) from s to n∗ − 1 to get

$$\displaystyle{ v(n^{{\ast}}) - v(s) \leq -\sum _{ u=s}^{n^{{\ast}}-1 }W_{5}(\vert x(u)\vert ) + M(n^{{\ast}}- s). }$$

This yields that

$$\displaystyle{ \frac{1} {n^{{\ast}}- s}\sum _{u=s}^{n^{{\ast}}-1 }W_{5}(\vert x(u)\vert ) \leq M. }$$
(2.3.10)

Then by (2.3.9), there exists M > 0 such that

$$\displaystyle{ \frac{1} {n^{{\ast}}- s}\sum _{u=s}^{n^{{\ast}}-1 }W_{4}(\vert x(u)\vert ) \leq M^{{\ast}}. }$$

If we let q(u) = W4( | x(u) | ) and α = M∗, then an application of Lemma 2.1 with β = 0 yields (2.3.4), and hence by Theorem 2.3.1 solutions are (UB).

It is worth noting that in general (2.3.9) does not hold for arbitrary wedges. To see this, we let \(W_{5}(r) = \frac{1} {r},r> 0\) and W4(r) = r. Let \(x(u) = a^{u-1},\;a> 1.\) Then

$$\displaystyle{ \frac{1} {n - 1}\sum _{u=1}^{n}W_{ 4}(\vert x(u)\vert ) = \frac{1} {n - 1}\sum _{u=1}^{n}a^{u-1} = \frac{1} {n - 1}\,\frac{a^{n} - 1} {a - 1} \rightarrow \infty,\;\mbox{ as}\;n \rightarrow \infty. }$$

On the other hand

$$\displaystyle{ \frac{1} {n - 1}\sum _{u=1}^{n}W_{ 5}(\vert x(u)\vert ) = \frac{1} {n - 1}\sum _{u=1}^{n}\Big(\frac{1} {a}\Big)^{u-1} = \frac{1} {n - 1}\, \frac{1 - (\frac{1} {a})^{n}} {1 -\frac{1} {a}} \leq \frac{a} {a - 1}, }$$

for n > 1. 
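The failure is easy to confirm numerically: taking the growing sequence x(u) = a^{u−1} with a = 2, the averages of W5(|x(u)|) = 2^{−(u−1)} stay bounded while the averages of W4(|x(u)|) = 2^{u−1} blow up, so no implication of the form (2.3.9) can hold for these wedges.

```python
# Averages of W4(r) = r and W5(r) = 1/r along x(u) = 2**(u-1):
# the W5-average stays bounded while the W4-average diverges.
a = 2.0
def w4_avg(n):
    return sum(a ** (u - 1) for u in range(1, n + 1)) / (n - 1)
def w5_avg(n):
    return sum(a ** (-(u - 1)) for u in range(1, n + 1)) / (n - 1)
print(w5_avg(30), w4_avg(30))
```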

Remark 2.2.

If \(\varPhi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{+}\) with △Φ(n) ≤ 0, for all \(n \in \mathbb{Z}^{+}\) and \(\varPhi (u + 1) \in l^{1}(\mathbb{Z}^{+})\), then \(\bigtriangleup \varPhi (u)u \in l^{1}(\mathbb{Z}^{+}).\) As a matter of fact,

$$\displaystyle{ \varPhi (n)n +\sum _{ u=0}^{n-1}\vert \bigtriangleup \varPhi (u)\vert u =\varPhi (n)n -\sum _{ u=0}^{n-1}\bigtriangleup \varPhi (u)u =\sum _{ u=0}^{n-1}\varPhi (u + 1). }$$

For example, if we take \(\varPhi (n) = a^{n}sin(n\pi /2)\) for 0 < a < 1, then

$$\displaystyle{ \bigtriangleup \varPhi (n) = a^{n+1}cos(n\pi /2) - a^{n}sin(n\pi /2). }$$

It is easy to see that \(\bigtriangleup \varPhi (3) = a^{3}> 0.\) On the other hand

$$\displaystyle{ \sum _{n=0}^{\infty }\vert \bigtriangleup \varPhi (n)\vert n <\infty. }$$

In Theorem 2.1.1 we asked that \(\bigtriangleup \varPhi (n) \leq 0\). 

We end the section with the following theorem.

Theorem 2.3.3 ([133]).

Solutions of the scalar difference equation

$$\displaystyle{ x(n + 1) = a(n)x(n) +\sum _{ s=0}^{n}D(n - s)x(s) + p(n), }$$
(2.3.11)

are (UB) if and only if

$$\displaystyle{ -1 + \vert a(n)\vert +\sum _{ u=0}^{\infty }\vert D(u)\vert \leq -\beta,\;\beta> 0. }$$
(2.3.12)

Proof.

First it is obvious that (2.3.12) implies that \(\sum _{u=0}^{\infty }\vert D(u)\vert \) is finite. Consider the Lyapunov functional

$$\displaystyle{ V (n,x_{n}) = \vert x(n)\vert +\sum _{ s=0}^{n-1}\sum _{ u=n-s}^{\infty }\vert D(u)\vert \vert x(s)\vert. }$$
(2.3.13)

Let \(M =\sup _{n\geq 0}\vert p(n)\vert <\infty \). Then along the solutions of (2.3.11) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x_{n})& =& (\vert x(n + 1)\vert -\vert x(n)\vert ) +\sum _{ s=0}^{n}\sum _{ u=n-s+1}^{\infty }\vert D(u)\vert \vert x(s)\vert \\ &-& \sum _{s=0}^{n-1}\sum _{ u=n-s}^{\infty }\vert D(u)\vert \vert x(s)\vert + \vert p(n)\vert \\ &\leq & \Big(-1 + \vert a(n)\vert +\sum _{ u=0}^{\infty }\vert D(u)\vert \Big)\vert x(n)\vert + \vert p(n)\vert \\ &\leq & -\beta \vert x(n)\vert + M {}\end{array}$$
(2.3.14)

if and only if (2.3.12) holds (by noting that (2.3.12) is the condition given by (2.3.4)). We have from equation (2.3.13) and inequality (2.3.14) that

$$\displaystyle{ \vert x(n)\vert \leq V (n,x_{n}) \leq \vert x(n)\vert +\sum _{ s=0}^{n-1}\varTheta (n - s)\vert x(s)\vert }$$
(2.3.15)

where \(\varTheta (n) =\sum _{u=n}^{\infty }\vert D(u)\vert.\) The result follows from Theorem 2.3.1.
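The sufficiency half of Theorem 2.3.3 is easy to observe numerically. With the illustrative data a(n) ≡ 0.5, D(u) = 0.2·(0.5)^u, and p(n) = cos n (our choices, not from [133]), we have −1 + |a(n)| + Σ|D(u)| = −0.1, so (2.3.12) holds with β = 0.1, and the simulated solution of (2.3.11) stays uniformly bounded.

```python
import math

# Simulate (2.3.11): x(n+1) = a*x(n) + sum_{s=0}^{n} D(n-s)*x(s) + p(n),
# with a = 0.5, D(u) = 0.2*0.5**u, p(n) = cos(n) (illustrative data).
a, d0, r = 0.5, 0.2, 0.5
steps = 500
x = [0.0]
S = 0.0
for n in range(steps):
    S = r * S + x[n]                     # S = sum_{s=0}^{n} r**(n-s)*x(s)
    x.append(a * x[n] + d0 * S + math.cos(n))
peak = max(abs(v) for v in x)
print(peak)
```

A crude bound confirms what the run shows: if |x(s)| ≤ m up to time n, then |x(n+1)| ≤ 0.9 m + 1, so the trajectory can never exceed roughly 10.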

2.4 More on Boundedness

In this section, we state and prove general theorems that guarantee boundedness of all solutions of (2.1.1). Then we utilize the theorems and use nonnegative definite Lyapunov functionals to obtain sufficient conditions that guarantee boundedness of solutions of (2.1.1). The theory is illustrated with several examples. A prototype of equation (2.1.1) is the Volterra discrete system

$$\displaystyle{ x(n + 1) = A(n)x(n) +\sum _{ s=0}^{n}B(n,s)f(s,x(s)). }$$
(2.4.1)

Also, in [85], the author studied the exponential stability and boundedness of solutions of the nonlinear discrete system

$$\displaystyle\begin{array}{rcl} & x(n + 1) = F(n,x(n));\ n \geq 0& {}\\ & x(n_{0}) = x_{0};\ n_{0} \geq 0. & {}\\ \end{array}$$

We emphasize that the results of [85] do not apply to equations similar to (2.4.1). We are mainly interested in applying our results to Volterra discrete systems of the form (2.4.1) with \(f(x) = x^{p}\), where p is positive and rational. This section offers a new perspective on constructing Lyapunov functionals that can be effectively used to obtain boundedness results. For this section, we use a slightly different boundedness definition.

Definition 2.4.1.

We say that solutions of system (2.1.1) are bounded, if any solution x(n, n0, ϕ) of (2.1.1) satisfies

$$\displaystyle{ \vert \vert x(n,n_{0},\phi )\vert \vert \leq C\Big(\vert \vert \phi \vert \vert,n_{0}\Big),\;\;\;\mbox{ for all }n \geq n_{0}, }$$

where \(C: \mathbb{R}^{+} \times \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}\), so that the bounding constant depends on n0 and on the given bounded initial function ϕ. We say that solutions of system (2.1.1) are uniformly bounded if C is independent of n0. 

Theorem 2.4.1 ([147]).

Let D be a set in \(\mathbb{R}^{k}\) . Suppose there exists a Lyapunov functional \(V: \mathbb{Z}^{+} \times D \rightarrow \mathbb{R}^{+}\) that satisfies

$$\displaystyle{ \lambda _{1}W_{1}(\vert x\vert ) \leq V (n,x(.)) \leq \lambda _{2}W_{2}(\vert x\vert ) +\lambda _{2}\sum _{s=0}^{n-1}\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) }$$
(2.4.2)

and

$$\displaystyle{ \bigtriangleup V (n,x(.)) \leq -\lambda _{3}W_{4}(\vert x\vert ) -\lambda _{3}\sum _{s=0}^{n-1}\varphi _{ 2}(n,s)W_{5}(\vert x(s)\vert ) + L }$$
(2.4.3)

for some positive constants λ1, λ2, λ3 and L, with λ2 > λ3 , where φi(n, s) ≥ 0 is a scalar sequence for 0 ≤ s ≤ n < ∞, i = 1, 2, such that for some constant γ ≥ 0 the inequality

$$\displaystyle{ W_{2}(\vert x\vert ) - W_{4}(\vert x\vert ) +\sum _{ s=0}^{n-1}\Big(\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) -\varphi _{2}(n,s)W_{5}(\vert x(s)\vert )\Big) \leq \gamma }$$
(2.4.4)

holds. Moreover, if \(\sum _{s=0}^{n-1}\varphi _{1}(n,s) \leq B\) for some positive constant B, then all solutions of  (2.1.1) that stay in D are uniformly bounded .

Proof.

Let \(M = -\ln (1 -\lambda _{3}/\lambda _{2})> 0\). For any initial value \(n_{0} \in \mathbb{Z}^{+}\), let x(n) be any solution of (2.1.1) with x(n) = ϕ(n) for 0 ≤ n ≤ n0. Taking the difference of the function \(V (n,x)e^{M(n-n_{0})}\), we have

$$\displaystyle{ \bigtriangleup \Big(V (n,x)e^{M(n-n_{0})}\Big) =\Big [V (n + 1,x)e^{M} - V (n,x)\Big]e^{M(n-n_{0})}. }$$

For xD, using (2.4.2) and (2.4.3) we get

$$\displaystyle\begin{array}{rcl} & & \bigtriangleup \Big(V (n,x)e^{M(n-n_{0})}\Big) \\ & \leq & \Big[V (n,x)e^{M} -\lambda _{ 3}W_{4}(\vert x\vert )e^{M}-\lambda _{ 3}\sum _{s=0}^{n-1}\varphi _{ 2}(n,s)W_{5}(\vert x(s)\vert )e^{M}+Le^{M}-V (n,x)\Big]e^{M(n-n_{0})} \\ & =& \Big[V (n,x)(e^{M} - 1) -\lambda _{ 3}e^{M}\Big(W_{ 4}(\vert x\vert ) +\sum _{ s=0}^{n-1}\varphi _{ 2}(n,s)W_{5}(\vert x(s)\vert )\Big) + Le^{M}\Big]e^{M(n-n_{0})} \\ & \leq & \Big[(e^{M} - 1)\lambda _{ 2}\Big(W_{2}(\vert x\vert ) +\sum _{ s=0}^{n-1}\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert )\Big) -\lambda _{3}e^{M}\Big(W_{ 4}(\vert x\vert ) \\ & & +\sum _{s=0}^{n-1}\varphi _{ 2}(n,s)W_{5}(\vert x(s)\vert )\Big) + Le^{M}\Big]e^{M(n-n_{0})}. {}\end{array}$$
(2.4.5)

Since \(M = -\ln (1 -\lambda _{3}/\lambda _{2})> 0\), we have \(\lambda _{2}(e^{M} - 1) =\lambda _{3}e^{M}\). Thus, the above inequality reduces to

$$\displaystyle\begin{array}{rcl} \bigtriangleup \Big (V (n,x)e^{M(n-n_{0})}\Big)& \leq & \Big[(e^{M} - 1)\lambda _{ 2}\Big(W_{2}(\vert x\vert ) - W_{4}(\vert x\vert ) +\sum _{ s=0}^{n-1}\big(\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) \\ & & -\varphi _{2}(n,s)W_{5}(\vert x(s)\vert )\big)\Big) + Le^{M}\Big]e^{M(n-n_{0})}. {}\end{array}$$
(2.4.6)

By invoking condition (2.4.4), the inequality (2.4.6) takes the form

$$\displaystyle{ \bigtriangleup \Big(V (n,x)e^{M(n-n_{0})}\Big) \leq \Big ((e^{M} - 1)\lambda _{ 2}\gamma + Le^{M}\Big)e^{M(n-n_{0})} \leq \alpha e^{M(n-n_{0})}, }$$

where \(\alpha = (e^{M} - 1)\lambda _{2}\gamma + Le^{M}\). Summing the above inequality from n0 to n − 1, we obtain,

$$\displaystyle\begin{array}{rcl} V (n,x)e^{M(n-n_{0})} - V (n_{ 0},\phi ) \leq \alpha \sum _{s=0}^{n-1}e^{M(s-n_{0})} \leq \alpha e^{-Mn_{0} }\sum _{s=0}^{n-1}(e^{M})^{s} \leq \frac{\alpha } {e^{M} - 1}e^{M(n-n_{0})},& & {}\\ \end{array}$$

that is,

$$\displaystyle\begin{array}{rcl} V (n,x) \leq V (n_{0},\phi )e^{-M(n-n_{0})} + \frac{\alpha } {e^{M} - 1} \leq V (n_{0},\phi ) + \frac{\alpha } {e^{M} - 1}.& & {}\\ \end{array}$$

From condition (2.4.2), we have

$$\displaystyle\begin{array}{rcl} \|x\|& \leq & W_{1}^{-1}\Big[\frac{1} {\lambda _{1}} \Big(\lambda _{2}W_{2}(\vert \phi \vert ) +\lambda _{2}W_{3}(\vert \phi \vert )\sum _{s=0}^{n_{0}-1}\varphi _{ 1}(n_{0},s)\Big) + \frac{\alpha } {e^{M} - 1}\Big];\mbox{ for all }\;n \geq n_{0}. {}\\ \end{array}$$

This completes the proof.

In the next theorems, we consider variable coefficients λi(n),  i = 1, 2, 3. 

Theorem 2.4.2 ([147]).

Let D be a set in \(\mathbb{R}^{k}\) . Suppose there exists a Lyapunov functional \(V: \mathbb{Z}^{+} \times D \rightarrow \mathbb{R}^{+}\) that satisfies

$$\displaystyle{ \lambda _{1}(n)W_{1}(\vert x\vert ) \leq V (n,x(.)) \leq \lambda _{2}(n)W_{2}(\vert x\vert ) +\lambda _{2}(n)\sum _{s=0}^{n-1}\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) }$$
(2.4.7)

and

$$\displaystyle{ \bigtriangleup V (n,x(.)) \leq -\lambda _{3}(n)W_{4}(\vert x\vert ) -\lambda _{3}(n)\sum _{s=0}^{n-1}\varphi _{ 2}(n,s)W_{5}(\vert x(s)\vert ) + L }$$
(2.4.8)

for some positive constant L and positive sequences λ1(n), λ2(n), λ3(n), where λ1(n) is nondecreasing, and φi(n, s) ≥ 0 is a scalar sequence for 0 ≤ s ≤ n < ∞, i = 1, 2. Assume that for some positive constants θ and γ the inequalities

$$\displaystyle{ 0 <\frac{\lambda _{3}(n)} {\lambda _{2}(n)} \leq \theta <1, }$$
(2.4.9)

and

$$\displaystyle{ W_{2}(\vert x\vert ) - W_{4}(\vert x\vert ) +\sum _{ s=0}^{n-1}\Big(\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) -\varphi _{2}(n,s)W_{5}(\vert x(s)\vert )\Big) \leq \gamma }$$
(2.4.10)

holds. If ∑s = 0 n−1 ϕ1(n, s) ≤ B,   λ2(n) ≤ N for some positive constants B and N, then all solutions of  (2.1.1) that stay in D are uniformly bounded .

Proof.

First we note that due to condition (2.4.9), λ3(n) is bounded for all n ≥ n0 ≥ 0. For any initial value n0 > 0, let x(n) be any solution of (2.1.1) with x(n) = ϕ(n) for 0 ≤ n ≤ n0. Taking the difference of the function \(V (n,x)e^{M(n-n_{0})}\) with

$$\displaystyle{ M =\inf _{n\in \mathbb{Z}^{+}}\Big(-\ln (1 -\frac{\lambda _{3}(n)} {\lambda _{2}(n)})\Big)> 0, }$$

we have

$$\displaystyle{ \bigtriangleup \Big(V (n,x)e^{M(n-n_{0})}\Big) =\Big [V (n + 1,x)e^{M} - V (n,x)\Big]e^{M(n-n_{0})}. }$$

By a similar argument as in Theorem 2.4.1 we obtain,

$$\displaystyle\begin{array}{rcl} \bigtriangleup \Big(V (n,x)e^{M(n-n_{0})}\Big)& \leq & \Big((e^{M} - 1)\lambda _{ 2}(n)\gamma + Le^{M}\Big)e^{M(n-n_{0})} {}\\ & \leq & \Big((e^{M} - 1)N\gamma + Le^{M}\Big)e^{M(n-n_{0})}. {}\\ \end{array}$$

We let \(\beta = (e^{M} - 1)N\gamma + Le^{M}\), and summing the above inequality from n0 to n − 1, we obtain

$$\displaystyle\begin{array}{rcl} V (n,x)e^{M(n-n_{0})}& \leq & V (n_{ 0},\phi (n_{0})) +\beta \sum _{ s=0}^{n-1}e^{M(s-n_{0})} {}\\ & \leq & V (n_{0},\phi (n_{0})) + \frac{\beta } {e^{M} - 1}e^{M(n-n_{0})}. {}\\ \end{array}$$

By condition (2.4.7), we have

$$\displaystyle\begin{array}{rcl} \|x\|& \leq & W_{1}^{-1}\Big[ \frac{1} {\lambda _{1}(n_{0})}\Big(V (n_{0},\phi (n_{0})) + \frac{\beta } {e^{M} - 1}\Big)\Big] {}\\ & \leq & W_{1}^{-1}\Big[ \frac{1} {\lambda _{1}(n_{0})}\Big(\lambda _{2}(n_{0})W_{2}(\vert \phi \vert ) +\lambda _{2}(n_{0})W_{3}(\vert \phi \vert )\sum _{s=0}^{n_{0}-1}\varphi _{ 1}(n_{0},s) + \frac{\beta } {e^{M} - 1}\Big)\Big], {}\\ & & {}\\ \end{array}$$

for all nn0. Hence, the solutions of (2.1.1) that start in D are uniformly bounded. This completes the proof.

Theorems 2.4.1 and 2.4.2 are of special importance since, with the aid of suitably constructed Lyapunov functionals, they can be applied to nonlinear Volterra systems of the form

$$\displaystyle{ x(n + 1) =\sigma (n)x(n) +\sum _{ s=0}^{n}B(n,s)x^{2/3}(s),\;n \geq 0,\;x(n) =\phi (n)\;\mbox{ for }\;0 \leq n \leq n_{ 0}, }$$
(2.4.11)

where ϕ(n) is a given bounded initial sequence.
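For instance, with σ(n) ≡ 0.4 and B(n, s) = 0.05·(0.5)^{n−s} (our illustrative data, chosen small enough that the right-hand side of (2.4.11) contracts bounded sets), a solution launched from x(0) = 2 stays below its initial size and settles near a small positive equilibrium.

```python
# Simulate (2.4.11): x(n+1) = sigma*x(n) + sum_{s=0}^{n} B(n,s)*x(s)**(2/3),
# with sigma = 0.4 and B(n,s) = 0.05*0.5**(n-s) (illustrative data; the
# solution stays positive, so x**(2/3) is well defined).
sigma, b0, r = 0.4, 0.05, 0.5
steps = 300
x = [2.0]
T = x[0] ** (2.0 / 3.0)     # T = sum_{s=0}^{n} r**(n-s) * x(s)**(2/3)
for n in range(steps):
    x.append(sigma * x[n] + b0 * T)
    T = r * T + x[-1] ** (2.0 / 3.0)
peak, final = max(x), x[-1]
print(peak, final)
```

The limiting value solves x = 0.4x + 0.1x^{2∕3}, i.e., x^{1∕3} = 1∕6, giving x ≈ 0.0046, which the run approaches.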

2.5 Applications to Nonlinear Volterra Difference Equations

In this section, we apply the results of the previous section to nonlinear Volterra difference equations. As we shall see, Theorems 2.4.1 and 2.4.2 will guide us step by step in constructing such Lyapunov functionals.

Theorem 2.5.1 ([147]).

Consider the scalar nonlinear Volterra difference equation given by  (2.4.11). Suppose there are constants β1,  β2 ∈ (0, 1) such that

$$\displaystyle{ \sigma ^{2}(n) + \frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\vert B(n,n)\vert +\sum _{ u=n+1}^{\infty }\vert B(u,n)\vert - 1 \leq -\beta _{ 1}, }$$

and

$$\displaystyle{ \frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big) - 1 \leq -\beta _{ 2}. }$$
(2.5.1)

If

$$\displaystyle{ \sum _{s=0}^{n-1}\sum _{ u=n}^{\infty }\vert B(u,s)\vert,\;\;\sum _{ s=0}^{n}\vert B(n,s)\vert <\infty, }$$

and

$$\displaystyle{ \vert B(n,s)\vert \geq \sum _{u=n}^{\infty }\vert B(u,s)\vert, }$$

then all solutions of  (2.4.11) are uniformly bounded .

Proof.

To see this, we consider the Lyapunov functional

$$\displaystyle{ V (n,x) = x^{2}(n) +\sum _{ s=0}^{n-1}\sum _{ u=n}^{\infty }\vert B(u,s)\vert x^{2}(s). }$$

Then along solutions of (2.4.11) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x)& =& x^{2}(n + 1) - x^{2}(n) +\sum _{ u=n+1}^{\infty }\vert B(u,n)\vert x^{2}(n) -\sum _{ s=0}^{n-1}\vert B(n,s)\vert x^{2}(s) \\ & =& \Big(\sigma (n)x(n) +\sum _{ s=0}^{n}B(n,s)x^{2/3}(s)\Big)^{2} - x^{2}(n) \\ & & +\sum _{u=n+1}^{\infty }\vert B(u,n)\vert x^{2}(n) -\sum _{ s=0}^{n-1}\vert B(n,s)\vert x^{2}(s) \\ & =& \Big(\sigma ^{2}(n) +\sum _{ u=n+1}^{\infty }\vert B(u,n)\vert - 1\Big)x^{2}(n) \\ & & +2\sigma (n)x(n)\sum _{s=0}^{n}\vert B(n,s)\vert x^{2/3}(s) +\Big (\sum _{ s=0}^{n}\vert B(n,s)\vert x^{2/3}(s)\Big)^{2} \\ & & -\sum _{s=0}^{n-1}\vert B(n,s)\vert x^{2}(s). {}\end{array}$$
(2.5.2)

To further simplify the above expression we perform the following calculations. Using the fact that \(ab \leq a^{2}/2 + b^{2}/2\),

$$\displaystyle\begin{array}{rcl} 2\sigma (n)x(n)\sum _{s=0}^{n}\vert B(n,s)\vert x^{2/3}(s)& \leq & 2\vert \sigma (n)\vert \vert x(n)\vert \sum _{ s=0}^{n}\vert B(n,s)\vert x^{2/3}(s) {}\\ & \leq & \ \vert \sigma (n)\vert \sum _{s=0}^{n}\vert B(n,s)\vert \Big(x^{2}(n) + x^{4/3}(s)\Big). {}\\ \end{array}$$

Using the Cauchy-Schwarz inequality for series, one obtains

$$\displaystyle\begin{array}{rcl} \Big(\sum _{s=0}^{n}\vert B(n,s)\vert x^{2/3}(s)\Big)^{2}& \leq & \Big(\sum _{ s=0}^{n}\vert B(n,s)\vert ^{1/2}\vert B(n,s)\vert ^{1/2}x^{2/3}(s)\Big)^{2} {}\\ & \leq & \sum _{s=0}^{n}\vert B(n,s)\vert \sum _{ s=0}^{n}\vert B(n,s)\vert x^{4/3}(s). {}\\ \end{array}$$
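The Cauchy-Schwarz step above, with \(a_{s} = \vert B(n,s)\vert ^{1/2}\) and \(b_{s} = \vert B(n,s)\vert ^{1/2}x^{2/3}(s)\), is easy to check numerically; the kernel values and the sequence below are illustrative assumptions. Note that \(x^{2/3} = (x^{2})^{1/3} \geq 0\), so absolute values may be used throughout.

```python
import random

random.seed(0)
n = 20
B = [random.uniform(0.0, 1.0) for _ in range(n + 1)]   # stands in for |B(n, s)|
x = [random.uniform(-2.0, 2.0) for _ in range(n + 1)]  # stands in for x(s)

# Left side: (sum_s |B(n,s)| x^{2/3}(s))^2.
lhs = sum(b * abs(v) ** (2.0 / 3.0) for b, v in zip(B, x)) ** 2
# Right side: (sum_s |B(n,s)|) * (sum_s |B(n,s)| x^{4/3}(s)).
rhs = sum(B) * sum(b * abs(v) ** (4.0 / 3.0) for b, v in zip(B, x))
print(lhs <= rhs + 1e-12)  # True
```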

Adding the above two inequalities yields

$$\displaystyle\begin{array}{rcl} & & 2\sigma (n)x(n)\sum _{s=0}^{n}\vert B(n,s)\vert x^{2/3}(s) +\Big (\sum _{ s=0}^{n}\vert B(n,s)\vert x^{2/3}(s)\Big)^{2} \\ & \leq & \vert \sigma (n)\vert \sum _{s=0}^{n}\vert B(n,s)\vert \Big(x^{2}(n) + x^{4/3}(s)\Big) +\sum _{ s=0}^{n}\vert B(n,s)\vert \sum _{ s=0}^{n}\vert B(n,s)\vert x^{4/3}(s) \\ & =& \vert \sigma (n)\vert \sum _{s=0}^{n}\vert B(n,s)\vert x^{2}(n) \\ & & +\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\sum _{ s=0}^{n}\vert B(n,s)\vert x^{4/3}(s). {}\end{array}$$
(2.5.3)

Finally, we make use of Young’s inequality, which says for any two nonnegative real numbers ω and ϖ, we have

$$\displaystyle{ \omega \varpi \leq \frac{\omega ^{e}} {e} + \frac{\varpi ^{f}} {f},\;\mbox{ with }\;1/e + 1/f = 1. }$$

Thus, for e = 3 and f = 3∕2, we get

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{n}\vert B(n,s)\vert x^{4/3}(s)& =& \sum _{ s=0}^{n}\vert B(n,s)\vert ^{1/3}\vert B(n,s)\vert ^{2/3}x^{4/3}(s) {}\\ & \leq & \sum _{s=0}^{n}\Big(\frac{\vert B(n,s)\vert } {3} + \frac{2} {3}\vert B(n,s)\vert x^{2}(s)\Big) {}\\ & =& \sum _{s=0}^{n}\frac{\vert B(n,s)\vert } {3} + \frac{2} {3}\vert B(n,n)\vert x^{2}(n) + \frac{2} {3}\sum _{s=0}^{n-1}\vert B(n,s)\vert x^{2}(s).{}\\ \end{array}$$
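The Young's inequality step can be spot-checked numerically: with e = 3 and f = 3/2 it reads ωϖ ≤ ω³/3 + (2/3)ϖ^{3/2} for all nonnegative ω and ϖ. The sample grid below is an illustrative check, not part of the proof.

```python
def young_bound(w, v, e=3.0, f=1.5):
    """Right-hand side of Young's inequality w*v <= w**e/e + v**f/f, 1/e + 1/f = 1."""
    return w ** e / e + v ** f / f

# Check the inequality on a grid of nonnegative values.
grid = [i * 0.25 for i in range(0, 41)]
ok = all(w * v <= young_bound(w, v) + 1e-12 for w in grid for v in grid)
print(ok)  # True
```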

With this in mind, inequality (2.5.3) reduces to

$$\displaystyle\begin{array}{rcl} & & 2\sigma (n)x(n)\sum _{s=0}^{n}\vert B(n,s)\vert x^{2/3}(s) +\Big (\sum _{ s=0}^{n}\vert B(n,s)\vert x^{2/3}(s)\Big)^{2} \\ & \leq & \frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\vert B(n,n)\vert x^{2}(n) \\ & & +\frac{1} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\sum _{ s=0}^{n}\vert B(n,s)\vert \\ & & +\frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\sum _{ s=0}^{n-1}\vert B(n,s)\vert x^{2}(s). {}\end{array}$$
(2.5.4)

By substituting (2.5.4) into (2.5.2), we arrive at

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x)& \leq & \Big[\sigma ^{2}(n) + \frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\vert B(n,n)\vert +\sum _{ u=n+1}^{\infty }\vert B(u,n)\vert - 1\Big]x^{2}(n) \\ & & +\Big[\frac{2} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big) - 1\Big]\sum _{ s=0}^{n-1}\vert B(n,s)\vert x^{2}(s) \\ & & +\frac{1} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\sum _{ s=0}^{n}\vert B(n,s)\vert. {}\end{array}$$
(2.5.5)

Let \(L = \frac{1} {3}\Big(\vert \sigma (n)\vert +\sum _{ s=0}^{n}\vert B(n,s)\vert \Big)\sum _{ s=0}^{n}\vert B(n,s)\vert\). Take W1 = W2 = W4 = x 2(n),  W3 = W5 = x 2(s),  λ1 = λ2 = 1, and λ3 = min{β1, β2}. Also, choosing \(\varphi _{1}(n,s) =\sum _{ u=n}^{\infty }\vert B(u,s)\vert \) and \(\varphi _{2}(n,s) = \vert B(n,s)\vert \), we see that conditions (2.4.7) and (2.4.8) of Theorem 2.4.2 are satisfied. Next we make sure condition (2.4.10) is satisfied. To see this,

$$\displaystyle\begin{array}{rcl} & & W_{2}(\vert x\vert ) - W_{4}(\vert x\vert ) +\sum _{ s=0}^{n-1}\Big(\varphi _{ 1}(n,s)W_{3}(\vert x(s)\vert ) -\varphi _{2}(n,s)W_{5}(\vert x(s)\vert )\Big) {}\\ & =& x^{2}(n) - x^{2}(n) +\sum _{ s=0}^{n-1}\Big(\sum _{ u=n}^{\infty }\vert B(u,s)\vert -\vert B(n,s)\vert \Big)x^{2}(s) {}\\ & =& \sum _{s=0}^{n-1}\Big(\sum _{ u=n}^{\infty }\vert B(u,s)\vert -\vert B(n,s)\vert \Big)x^{2}(s) \leq 0. {}\\ \end{array}$$

Thus, condition (2.4.10) is satisfied for γ = 0. An application of Theorem 2.4.2 yields the results.

In the next theorem we establish sufficient conditions that guarantee the boundedness of all solutions of the vector Volterra difference equation

$$\displaystyle{ \bigtriangleup x(t) = Ax(t) +\sum _{ s=0}^{t-1}C(t,s)x(s) + g(t), }$$
(2.5.6)

where t ≥ 0,  x(t) = ϕ(t)  for  0 ≤ t ≤ t0, and ϕ(t) is a given bounded initial k × 1 vector function. Also, A and C are k × k matrices and g is a k × 1 vector function. If D is a matrix, | D | means the sum of the absolute values of its elements. In what follows we write g and x for g(t) and x(t), respectively.

Theorem 2.5.2.

Suppose \(C^{T}(t,s) = C(t,s)\). Let I be the k × k identity matrix. Assume there exist positive constants L, ν, ξ, β 1, β 2, λ 3 , and a k × k positive definite constant symmetric matrix B such that

$$\displaystyle{ \Big[A^{T}B + BA + A^{T}BA\Big] \leq -\xi I, }$$
(2.5.7)
$$\displaystyle\begin{array}{rcl} \Big[-\xi + \vert Bg\vert & +& \sum _{s=0}^{t-1}\vert B\vert \vert C(t,s)\vert +\sum _{ s=0}^{t-1}\vert A^{T}B\vert \vert C(t,s)\vert \\ & +& \nu \sum _{u=t+1}^{\infty }\vert C(u,t)\vert \Big](1 +\lambda _{ 3}) \leq -\beta _{1}, {}\end{array}$$
(2.5.8)
$$\displaystyle\begin{array}{rcl} \Big[\vert B\vert -\nu +\Big((g^{T}B)^{2}& +& 1 + \vert A^{T}B\vert \\ & +& \sum _{s=0}^{t-1}\vert C(t,s)\vert \Big)\Big](1 +\lambda _{ 3}) \leq -\beta _{2},{}\end{array}$$
(2.5.9)
$$\displaystyle{ (\vert g^{T}g\vert + \vert Bg\vert )(1 +\lambda _{ 3}) = L, }$$
(2.5.10)
$$\displaystyle\begin{array}{rcl} \vert C(t,s)\vert \geq \nu \sum _{u=t+1}^{\infty }\vert C(u,s)\vert,& &{}\end{array}$$
(2.5.11)

and

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{t-1}\sum _{ u=t}^{\infty }\vert C(u,s)\vert,\;\sum _{ s=0}^{t-1}\vert C(t,s)\vert <\infty.& &{}\end{array}$$
(2.5.12)

Then solutions of  (2.5.6) are uniformly bounded .

Proof.

Since B is a k × k positive definite constant symmetric matrix, there exist r1 ∈ (0, 1] and r2 > 0 such that

$$\displaystyle{ r_{1}x^{T}x \leq x^{T}Bx \leq r_{ 2}x^{T}x. }$$
(2.5.13)
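The constants r1 and r2 in (2.5.13) can be taken as the smallest and largest eigenvalues of B. The 2 × 2 matrix below is an illustrative assumption used only to demonstrate the bounds numerically.

```python
import math
import random

# Illustrative 2x2 positive definite symmetric matrix B.
B = [[2.0, 0.5],
     [0.5, 1.0]]

# Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]].
a, b, d = B[0][0], B[0][1], B[1][1]
disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
r1, r2 = (a + d) / 2.0 - disc, (a + d) / 2.0 + disc  # lambda_min, lambda_max

# Check r1 * x^T x <= x^T B x <= r2 * x^T x on random vectors x.
random.seed(1)
for _ in range(100):
    x = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    xtx = x[0] ** 2 + x[1] ** 2
    xBx = a * x[0] ** 2 + 2 * b * x[0] * x[1] + d * x[1] ** 2
    assert r1 * xtx - 1e-12 <= xBx <= r2 * xtx + 1e-12
print(round(r1, 4), round(r2, 4))  # → 0.7929 2.2071
```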

Define

$$\displaystyle{ V (t,x) = x^{T}Bx +\nu \sum _{ s=0}^{t-1}\sum _{ u=t}^{\infty }\vert C(u,s)\vert x^{2}(s). }$$

Here \(x^{T}x = x^{2} = x_{1}^{2} + x_{2}^{2} + \cdots + x_{k}^{2}\). We have along the solutions that

$$\displaystyle\begin{array}{rcl} & & \bigtriangleup V (t,x) =\Big [Ax +\sum _{ s=0}^{t-1}C(t,s)x(s) + g\Big]^{T}Bx \\ & +& x^{T}B\Big[Ax +\sum _{ s=0}^{t-1}C(t,s)x(s) + g\Big] \\ & +& \Big[Ax +\sum _{ s=0}^{t-1}C(t,s)x(s) + g\Big]^{T}B\Big[Ax +\sum _{ s=0}^{t-1}C(t,s)x(s) + g\Big] \\ & -& \nu \sum _{s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) +\nu \sum _{ u=t+1}^{\infty }\vert C(u,t)\vert \;x^{2}. {}\end{array}$$
(2.5.14)

By noting that the right side of (2.5.14) is scalar and by recalling that B is a symmetric matrix, expression (2.5.14) simplifies to

$$\displaystyle\begin{array}{rcl} & & \bigtriangleup V (t,x) = x^{T}\Big(A^{T}B + BA + A^{T}BA\Big)x + 2x^{T}Bg \\ & +& 2\sum _{s=0}^{t-1}x^{T}BC(t,s)x(s) \\ & +& \Big[2x^{T}A^{T}Bg + 2g^{T}B\sum _{ s=0}^{t-1}C(t,s)x(s) + 2x^{T}A^{T}B\sum _{ s=0}^{t-1}C(t,s)x(s) \\ & +& \sum _{s=0}^{t-1}x^{T}(s)C(t,s)\;B\sum _{s=0}^{t-1}C(t,s)x(s) + g^{T}\;B\;g\Big] \\ & -& \nu \sum _{s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) +\nu \sum _{ u=t+1}^{\infty }\vert C(u,t)\vert x^{2} \\ & \leq & -\xi x^{2} + 2\vert x^{T}\vert \vert Bg\vert + 2\sum _{ s=0}^{t-1}\vert x^{T}\vert \vert B\vert \vert C(t,s)\vert \vert x(s)\vert \\ & +& \Big[\sum _{s=0}^{t-1}\vert C(t,s)\vert 2\vert g^{T}B\vert \vert x(s)\vert + 2\sum _{ s=0}^{t-1}\vert x^{T}\vert \vert A^{T}B\vert \vert C(t,s)\vert \vert x(s)\vert \\ & +& \sum _{s=0}^{t-1}x^{T}(s)C(t,s)\;B\;\sum _{ s=0}^{t-1}C(t,s)x(s) + \vert g^{T}g\vert \Big] \\ & -& \nu \sum _{s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) +\nu \sum _{ u=t+1}^{\infty }\vert C(u,t)\vert x^{2}. {}\end{array}$$
(2.5.15)

Next, we perform some calculations to simplify inequality (2.5.15).

$$\displaystyle{ 2\vert x^{T}\vert \vert Bg\vert = 2\vert x^{T}\vert \vert Bg\vert ^{1/2}\vert Bg\vert ^{1/2} \leq x^{2}\vert Bg\vert + \vert Bg\vert, }$$
$$\displaystyle{ 2\sum _{s=0}^{t-1}\vert x^{T}\vert \vert B\vert \vert C(t,s)\vert \vert x(s)\vert \leq \sum _{ s=0}^{t-1}\vert B\vert \vert C(t,s)\vert (x^{2} + x^{2}(s)), }$$
$$\displaystyle{ \sum _{s=0}^{t-1}\vert C(t,s)\vert 2\vert g^{T}B\vert \vert x(s)\vert \leq \sum _{ s=0}^{t-1}\vert C(t,s)\vert (\vert g^{T}B\vert ^{2} + x^{2}(s)), }$$

and

$$\displaystyle{ 2\sum _{s=0}^{t-1}\vert x^{T}\vert \vert A^{T}B\vert \vert C(t,s)\vert \vert x(s)\vert \leq \sum _{ s=0}^{t-1}\vert A^{T}B\vert \vert C(t,s)\vert (x^{2} + x^{2}(s)). }$$

Finally,

$$\displaystyle\begin{array}{rcl} & & \sum _{s=0}^{t-1}x^{T}(s)C(t,s)\;B\;\sum _{ s=0}^{t-1}C(t,s)x(s) {}\\ & & \leq \vert B\vert \;\vert \sum _{s=0}^{t-1}x^{T}(s)C(t,s)\vert \vert \sum _{ s=0}^{t-1}C(t,s)x(s)\vert {}\\ & &\leq \vert B\vert \Big(\sum _{s=0}^{t-1}x^{T}(s)C(t,s)\Big)^{2}\Big/2 + \vert B\vert \Big(\sum _{ s=0}^{t-1}C(t,s)x(s)\Big)^{2}\Big/2 {}\\ & & = \vert B\vert \Big(\sum _{s=0}^{t-1}C(t,s)x(s)\Big)^{2} {}\\ & & \leq \vert B\vert \Big(\sum _{s=0}^{t-1}\vert C(t,s)\vert ^{\frac{1} {2} }\vert C(t,s)\vert ^{\frac{1} {2} }\vert x(s)\vert \Big)^{2} {}\\ & & \leq \vert B\vert \sum _{s=0}^{t-1}\vert C(t,s)\vert \;\sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s). {}\\ \end{array}$$

Substitution of the above inequalities into (2.5.15) yields

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x)& \leq & \Big[-\xi + \vert Bg\vert +\sum _{ s=0}^{t-1}\vert B\vert \vert C(t,s)\vert {}\\ & +& \sum _{s=0}^{t-1}\vert A^{T}B\vert \vert C(t,s)\vert +\nu \sum _{ u=t+1}^{\infty }\vert C(u,t)\vert \Big]x^{2} {}\\ & +& \Big[\vert B\vert -\nu +\Big((g^{T}B)^{2} + 1 + \vert A^{T}B\vert + \vert B\vert \sum _{ s=0}^{t-1}\vert C(t,s)\vert \Big)\Big]\sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) {}\\ & +& (\vert g^{T}g\vert + \vert Bg\vert )(1 +\lambda _{ 3}). {}\\ \end{array}$$

Applying conditions (2.5.8), (2.5.9), and (2.5.10), △V (t, x) reduces to

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x)& \leq & -\beta _{1}x^{2} -\beta _{ 2}\sum _{s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) + L, {}\\ \end{array}$$

where \(L = (\vert g^{T}g\vert + \vert Bg\vert )(1 +\lambda _{3})\), as in (2.5.10). By taking W1 = r1 x T x, W2 = x T Bx,  W4 = r2 x T x,  W3 = W5 = x 2(s),  λ1 = λ2 = 1 and λ3 = min{β1, β2}, \(\phi _{1}(t,s) =\nu \sum _{u=t}^{\infty }\vert C(u,s)\vert \) and \(\phi _{2}(t,s) = \vert C(t,s)\vert \), we see that conditions (2.4.7) and (2.4.8) of Theorem 2.4.2 are satisfied. Next we make sure condition (2.4.10) is satisfied. Using (2.5.11) and (2.5.13) we obtain

$$\displaystyle\begin{array}{rcl} & & W_{2}(\vert x\vert ) - W_{4}(\vert x\vert ) +\sum _{ s=0}^{t-1}\left (\phi _{ 1}(t,s)W_{3}(\vert x(s)\vert ) -\phi _{2}(t,s)W_{5}(\vert x(s)\vert )\right ) {}\\ & =& x^{T}Bx - r_{ 2}x^{T}x +\sum _{ s=0}^{t-1}\Big(\nu \sum _{ u=t}^{\infty }\vert C(u,s)\vert -\vert C(t,s)\vert \Big)x^{2}(s) \leq 0. {}\\ \end{array}$$

Thus condition (2.4.10) is satisfied with γ = 0. An application of Theorem 2.4.2 yields the results.

2.6 Open Problems

Open Problem 1.

Reformulate Theorems 2.4.1 and 2.4.2 to obtain results concerning the exponential stability of the zero solution of (2.1.1).

For Open Problem 2, we consider the functional delay difference equation

$$\displaystyle{ x(t + 1) = F(t,x_{t}). }$$
(2.6.1)

We assume that F is continuous in x and that \(F: \mathbb{Z} \times C \rightarrow \mathbb{R}^{n}\) where C is the set of sequences \(\phi: [-\alpha,0] \rightarrow \mathbb{R}^{n},\;\alpha> 0.\) Let

$$\displaystyle{ C(t) =\{\phi: [t-\alpha,t] \rightarrow \mathbb{R}^{n}\}. }$$

It is to be understood that C(t) is C when t = 0. For \(\phi \in C(t)\) we denote

$$\displaystyle{ \vert \vert \vert \phi _{t}\vert \vert \vert =\Big [\sum _{i=1}^{n}\sum _{ s=t-\alpha }^{t-1}\phi _{ i}^{2}(s)\Big]^{1/2} }$$

where ϕ(t) = (ϕ1(t), ⋯ , ϕn(t)). 
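The norm |||ϕt||| above is simply the Euclidean norm of the delayed segment of the sequence. A direct translation, with an illustrative scalar-valued example, is:

```python
import math

def segment_norm(phi, t, alpha):
    """|||phi_t||| = sqrt( sum_i sum_{s=t-alpha}^{t-1} phi_i(s)^2 )
    for a vector sequence phi(s) = (phi_1(s), ..., phi_n(s))."""
    return math.sqrt(sum(phi(s)[i] ** 2
                         for s in range(t - alpha, t)
                         for i in range(len(phi(s)))))

# Illustrative example: phi(s) = (s,), alpha = 3, t = 3,
# so the norm is sqrt(0^2 + 1^2 + 2^2) = sqrt(5).
print(segment_norm(lambda s: (float(s),), 3, 3))  # → 2.23606...
```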

Open Problem 2.

Theorem 2.6.1.

Let D > 0 and suppose there is a scalar functional \(V (t,\psi _{t})\) that is continuous in ψ and locally Lipschitz in ψt when t ≥ t0 and \(\psi _{t} \in C(t)\) with | | ψt | | < D. In addition, we assume that if \(x: [t_{0}-\alpha,\infty ) \rightarrow \mathbb{R}^{n}\) is a bounded sequence, then F(t, xt) is bounded on [t0, ∞). If V is such that V (t, 0) = 0, 

$$\displaystyle{ W_{1}(\vert \phi (t)\vert ) \leq V (t,\phi _{t}) \leq W_{2}(\vert \phi _{t}\vert ) + W_{3}(\vert \vert \vert \phi _{t}\vert \vert \vert ), }$$

and

$$\displaystyle{ \bigtriangleup V (t,\phi _{t}) \leq -W_{4}(\vert \phi (t)\vert ), }$$

then the zero solution of  (2.6.1) is uniformly asymptotically stable .

Open Problem 3.

General theorems in the spirit of Theorems 2.1.1, 2.1.2, and 2.1.3 for functional delay difference equations are nowhere to be found, and hence there is a pressing need for such theorems. In particular, for h > 0 and constant, we ask that theorems parallel to Theorems 2.1.1, 2.1.2, and 2.1.3 be developed for the functional discrete system

$$\displaystyle{ x(n + 1) = G(n,x(s);\ -h \leq s \leq n)\mathop{ =}\limits^{ def}G(n,x(\cdot )) }$$
(2.6.2)

where \(G: \mathbb{Z}^{+} \times \mathbb{R}^{k} \rightarrow \mathbb{R}^{k}\) is continuous in x. Then such theorems can be applied to Volterra difference systems of the form

$$\displaystyle\begin{array}{rcl} x(n + 1) = b(n)x(n) +\sum _{ s=-h}^{n-1}C(n,s)g(x(s)),& &{}\end{array}$$
(2.6.3)

and

$$\displaystyle\begin{array}{rcl} x(n + 1) = b(n)x(n) +\sum _{ s=n-h}^{n-1}C(n,s)g(x(s)).& &{}\end{array}$$
(2.6.4)

In the next theorem we establish sufficient conditions that guarantee the boundedness of all solutions of the vector Volterra difference equation by using the Lyapunov-Razumikhin method. It should serve as guidance for formulating and proving boundedness results concerning functional difference equations. Thus, we consider the Volterra difference equation

$$\displaystyle{ x(t + 1) = Ax(t) +\sum _{ s=0}^{t-1}C(t,s)x(s) + g(t), }$$
(2.6.5)

where t ≥ 0,  x(t) = ϕ(t)  for  0 ≤ t ≤ t0, and ϕ(t) is a given bounded initial k × 1 vector function. Also, A and C are k × k matrices and g is a k × 1 vector function. If D is a matrix, | D | means the sum of the absolute values of its elements. Let | | g | |[0,∞) denote the norm of g.

Theorem 2.6.2 ([144]).

Let I be the k × k identity matrix. Assume there exists a k × k positive definite constant symmetric matrix B such that

$$\displaystyle{ A^{T}B + BA = -I. }$$
(2.6.6)

Suppose that there is a positive constant M such that

$$\displaystyle{ \sum _{s=0}^{t-1}\vert BC(t,s)\vert \leq M, }$$

so that

$$\displaystyle{ \frac{2\beta M} {\alpha } <1, }$$

where α and β are positive constants to be specified in the proof. If, in addition, g is bounded, then all solutions of  (2.6.5) are uniformly bounded .

Proof.

Since B is a k × k positive definite constant symmetric matrix, there exist positive constants α and β such that

$$\displaystyle{ \alpha ^{2}\vert x\vert ^{2} \leq x^{T}Bx \leq \beta ^{2}\vert x\vert ^{2}. }$$

Define the Lyapunov-Razumikhini function

$$\displaystyle{ V (t,x) = x^{T}Bx. }$$

Then clearly

$$\displaystyle{ \alpha ^{2}\vert x\vert ^{2} \leq V (t,x) \leq \beta ^{2}\vert x\vert ^{2}. }$$

Then along the solutions of (2.6.5) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x)& =& \Big[Ax +\sum _{ 0}^{t-1}C(t,s)x(s) + g\Big]^{T}Bx {}\\ & +& x^{T}B\Big[Ax +\sum _{ s=0}^{t-1}C(t,s)x(s) + g\Big] {}\\ & =& x^{T}\Big(A^{T}B + BA\Big)x + 2x^{T}Bg + 2\sum _{ s=0}^{t-1}x^{T}BC(t,s)x(s) {}\\ & \leq & -\vert x\vert ^{2} + 2\vert x\vert \vert B\vert \vert \vert g\vert \vert ^{[0,\infty )} + 2\sum _{ s=0}^{t-1}\vert BC(t,s)\vert \vert x(s)\vert. {}\\ \end{array}$$

Now, if \(h^{2}V (t,x(t))> V (s,x(s))\)  for 0 ≤ s ≤ t − 1,  where h > 1 is a constant to be determined, then

$$\displaystyle{ \alpha ^{2}\vert x(s)\vert ^{2} \leq V (s,x(s)) \leq h^{2}V (t,x(t)) \leq h^{2}\beta ^{2}\vert x\vert ^{2}, }$$

and

$$\displaystyle{ \frac{h\beta } {\alpha } \vert x(t)\vert \geq \vert x(s)\vert,\;\;s \leq t - 1. }$$

Thus,

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x)& \leq & -\vert x\vert ^{2} + 2\frac{h\beta } {\alpha } \vert x\vert ^{2}\sum _{ s=0}^{t-1}\vert BC(t,s)\vert + 2\vert x\vert \vert B\vert \vert \vert g\vert \vert ^{[0,\infty )} {}\\ & \leq & -\vert x\vert ^{2} + 2\frac{h\beta M} {\alpha } \vert x\vert ^{2} + 2\vert x\vert \vert B\vert \vert \vert g\vert \vert ^{[0,\infty )}. {}\\ \end{array}$$

Since \(\frac{2\beta M} {\alpha } <1,\)h may be chosen so that h > 1 and \(\frac{2h\beta M} {\alpha } <1,\) yielding

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t,x) \leq \big (\frac{2h\beta M} {\alpha } - 1\big)\vert x\vert ^{2} + 2\vert x\vert \vert B\vert \vert \vert g\vert \vert ^{[0,\infty )} \leq 0& & {}\\ \end{array}$$

provided that

$$\displaystyle{ \vert x\vert \geq \frac{2\vert B\vert \vert \vert g\vert \vert ^{[0,\infty )}} {1 -\frac{2\beta hM} {\alpha } }:= K. }$$

Now we summarize what we have

  1. (a)

    \(W_{1}(\vert x\vert ) \leq V (t,x) \leq W_{2}(\vert x\vert ),\)

  2. (b)

    there exists K > 0 so that if x(t) is a solution of (2.6.5) with \(\vert x(t)\vert \geq K\) for some t ≥ 0 and V (s, x(s)) < p(V (t, x)) for 0 ≤ s ≤ t − 1 and p(u) > u, then \(\bigtriangleup V (t,x) \leq 0,\) where \(p(u) = h^{2}u.\)

Now choose any solution x(t) such that | ϕ(t) | < H for 0 ≤ t ≤ n0 for some H > 0. Let L > max{H, K} and choose D > 0 with W2(L) < W1(D). If this solution is unbounded, then there is t1 > 0 such that | x(t1) | > D and | x(t) | ≤ D   for   0 < t ≤ t1 − 1. If V (t1, x(t1)) ≤ V (t0, x(t0)), then we would have

$$\displaystyle\begin{array}{rcl} W_{1}(\vert x(t_{1})\vert ) \leq V (t_{1},x(t_{1}))& \leq & V (t_{0},\phi (t_{0})) \leq W_{2}(\vert \phi (t_{0})\vert ) {}\\ & \leq & W_{2}(L) <W_{1}(D), {}\\ \end{array}$$

from which we get | x(t1) | < D, a contradiction. Thus, V (t1, x(t1)) > V (t0, x(t0)). We leave it to the reader to complete the proof, using (a) and (b) as guidance.
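As a numerical sanity check on this Razumikhin-type boundedness argument, the sketch below iterates a two-dimensional instance of (2.6.5). The matrices A and C(t, s) and the forcing g are illustrative assumptions, chosen so that A is stable and the kernel is summable.

```python
import math

def step(A, kernel, g, history, t):
    """One step of x(t+1) = A x(t) + sum_{s=0}^{t-1} C(t,s) x(s) + g(t) in R^2."""
    x = history[t]
    acc = [g(t)[0], g(t)[1]]
    for i in range(2):
        acc[i] += A[i][0] * x[0] + A[i][1] * x[1]
        for s in range(t):  # convolution over the full past
            C = kernel(t, s)
            acc[i] += C[i][0] * history[s][0] + C[i][1] * history[s][1]
    return acc

# Illustrative data: a stable A, a geometrically decaying kernel, bounded g.
A = [[0.4, 0.1], [0.0, 0.3]]
kernel = lambda t, s: [[0.05 * 2.0 ** (-(t - s)), 0.0],
                       [0.0, 0.05 * 2.0 ** (-(t - s))]]
g = lambda t: [0.1 * math.sin(t), 0.1]

history = [[1.0, -1.0]]
for t in range(150):
    history.append(step(A, kernel, g, history, t))
print(max(max(abs(v) for v in x) for x in history))  # remains bounded
```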

Open Problem 4.

We propose that the reader develop general theorems for the boundedness and stability of solutions of functional difference equations using the Lyapunov-Razumikhin method. Conditions (a) and (b) should serve as guidance for stating and proving such theorems. For more on the subject we refer to [181] and [182].