Over the past one hundred and fifty years, Lyapunov functions/functionals have been used extensively and successfully in the study of stability and of the existence of periodic and bounded solutions. The author has made extensive use of Lyapunov functions/functionals in analyzing solutions of functional equations, and each time the suitable Lyapunov functional presented unique difficulties that could only be overcome by imposing severe conditions on the given coefficients. In practice, Lyapunov's direct method requires pointwise conditions, whereas many real-life problems call for averages. Moreover, it is rare to encounter a problem for which a suitable Lyapunov functional can be easily constructed. It is common knowledge among researchers that results on stability and boundedness go hand in hand with the constructed Lyapunov functional.

In this chapter, we begin a systematic study of stability theory for ordinary and functional difference equations by means of fixed point theory. The study of fixed point theory is motivated by a number of difficulties encountered in the study of stability by means of Lyapunov’s direct method; we notice that these difficulties frequently vanish when we apply fixed point theory. We provide a brief introduction to Cauchy sequences, metric spaces, compactness, the contraction mapping principle, and Banach spaces. In some cases, the contraction mapping principle fails to produce any results. This forces us to look for other alternatives, namely the concept of Large Contraction. We will restate the contraction mapping principle and Krasnoselskii’s fixed point theorem, with the regular contraction replaced by a Large Contraction. Most of the work in this chapter can be found in [4, 140, 142, 150, 166], and [167].

3.1 Motivation

We begin by offering an example that exposes the difficulties encountered by the use of Lyapunov functionals. Fixed point theory was first used in difference equations by Raffoul in [136] to study the stability and the existence of periodic solutions of the linear delay difference equation

$$\displaystyle{ \bigtriangleup x(t) = -a(t)x(t-\tau ). }$$

It was followed by a series of papers in which different authors considered the same idea and analyzed various types of difference and Volterra difference equations. For example, in [134] the author initiated the use of fixed point theory to alleviate some of the difficulties that arise from the deployment of Lyapunov functionals to study boundedness and stability of the neutral nonlinear delay differential equation

$$\displaystyle{ x'(t) = -a(t)x(t) + c(t)x'(t - g(t)) + q(t,x(t),x(t - g(t))), }$$

where a(t), c(t), g(t), and q are continuous in their respective arguments. Later on, Islam and Yankson [87] extended the work of [134] to the neutral nonlinear delay difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t) + c(t)\bigtriangleup x(t - g(t)) + q(x(t),x(t - g(t))), }$$

where \(a,c: \mathbb{Z} \rightarrow \mathbb{R},q: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\), and \(g: \mathbb{Z} \rightarrow \mathbb{Z}.\)

To illustrate some of the difficulties that arise from the deployment of Lyapunov functionals, we consider the delay difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t) + b(t)x(t-\tau ) + p(t),\;t \in \mathbb{Z}^{+}, }$$
(3.1.1)

where \(a,b,p: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and τ is a positive integer. Assume

$$\displaystyle{ \vert a(t)\vert <1,\;\mbox{ for all }\;t \in \mathbb{Z}^{+} }$$
(3.1.2)

and there is a δ > 0 such that

$$\displaystyle{ \vert a(t)\vert +\delta <1,\;t \in \mathbb{Z}^{+} }$$
(3.1.3)

and

$$\displaystyle{ \vert b(t)\vert \leq \delta,\;\text{and}\;\vert p(t)\vert \leq K,\;\text{for some positive constant}\;K. }$$
(3.1.4)

Then all solutions of (3.1.1) are bounded. If p(t) = 0 for all t, then the zero solution of (3.1.1) is uniformly asymptotically stable (UAS). To see this we consider the Lyapunov functional

$$\displaystyle{ V (t,x(\cdot )) = \vert x(t)\vert +\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert. }$$

Then along solutions of (3.1.1) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V & =& \vert x(t + 1)\vert -\vert x(t)\vert +\delta \sum _{ s=t+1-\tau }^{t}\vert x(s)\vert -\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert {}\\ &\leq & \vert a(t)\vert \vert x(t)\vert -\vert x(t)\vert +\vert b(t)\vert \vert x(t-\tau )\vert +\delta \sum _{s=t+1-\tau }^{t}\vert x(s)\vert -\delta \sum _{ s=t-\tau }^{t-1}\vert x(s)\vert +\vert p(t)\vert {}\\ & =& \big(\vert a(t)\vert +\delta -1\big)\vert x(t)\vert +\big (\vert b(t)\vert -\delta \big)\vert x(t-\tau )\vert + \vert p(t)\vert {}\\ &\leq & \big(\vert a(t)\vert +\delta -1\big)\vert x(t)\vert + \vert p(t)\vert {}\\ &\leq & -\gamma \vert x(t)\vert,\;\mathrm{for\;some\;positive\;constant}\;\gamma. {}\\ \end{array}$$
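This telescoping estimate is easy to check numerically. The sketch below iterates (3.1.1) with illustrative coefficients of our own choosing that satisfy |a(t)| + δ < 1 and |b(t)| ≤ δ, with p ≡ 0, and verifies △V ≤ −γ|x(t)| at every step.

```python
# Numerical check of the Lyapunov estimate Delta V <= -gamma*|x(t)| for (3.1.1).
# The coefficients are illustrative (not from the text): |a(t)| = 0.5, so
# |a(t)| + delta = 0.8 < 1, and |b(t)| <= 0.25 <= delta, with p(t) = 0.
tau, delta = 3, 0.3
a = lambda t: 0.5 * (-1) ** t
b = lambda t: 0.25 * (t % 2)
gamma = 1 - 0.5 - delta                # = 0.2

# Solve forward from the constant initial function psi = 1 on {-tau, ..., 0}.
x = {t: 1.0 for t in range(-tau, 1)}
for t in range(0, 60):
    x[t + 1] = a(t) * x[t] + b(t) * x[t - tau]

# V(t) = |x(t)| + delta * sum_{s=t-tau}^{t-1} |x(s)|
V = lambda t: abs(x[t]) + delta * sum(abs(x[s]) for s in range(t - tau, t))
for t in range(0, 59):
    assert V(t + 1) - V(t) <= -gamma * abs(x[t]) + 1e-12
```

Since V decreases along solutions, the solution values stay bounded and in fact decay.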

The results follow from Chapter 2. It is severe to ask that a and b be bounded and that | b(t) | be bounded by δ for all time. For another illustration, we consider the nonlinear delay difference equation

$$\displaystyle{ x(t + 1) = a(t)g(x(t)) + b(t)h(x(t - r)), }$$
(3.1.5)

where the functions g and h are continuous. Define the Lyapunov functional V by

$$\displaystyle{ V (t) = \vert x(t)\vert +\sum _{ s=t-r}^{t-1}\vert b(s + r)\vert \vert h(x(s))\vert. }$$

We assume that there are positive constants γ1 and γ2 such that | g(x) | ≤ γ1 | x | and | h(x) | ≤ γ2 | x |, and that

$$\displaystyle{ \gamma _{1}\vert a(t)\vert +\gamma _{2}\vert b(t + r)\vert - 1 \leq -\beta,\;\beta> 0. }$$

Then along solutions of (3.1.5) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V & =& \vert x(t + 1)\vert -\vert x(t)\vert + \vert b(t + r)\vert \vert h(x(t))\vert -\vert b(t)\vert \vert h(x(t - r))\vert {}\\ &\leq & \vert a(t)\vert \vert g(x(t))\vert + \vert b(t)\vert \vert h(x(t - r))\vert -\vert x(t)\vert {}\\ & +& \vert b(t + r)\vert \vert h(x(t))\vert -\vert b(t)\vert \vert h(x(t - r))\vert {}\\ &\leq & (\gamma _{1}\vert a(t)\vert +\gamma _{2}\vert b(t + r)\vert - 1)\vert x(t)\vert {}\\ &\leq & -\beta \vert x(t)\vert. {}\\ \end{array}$$

Now one may refer to Chapter 2 and argue that the zero solution of (3.1.5) is asymptotically stable.
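A similar numerical check works for (3.1.5). The data below are illustrative choices of ours: g = h = tanh (so γ1 = γ2 = 1, since |tanh x| ≤ |x|) and constant coefficients a = 0.4, b = 0.3, giving β = 0.3.

```python
import math

# Check Delta V <= -beta*|x(t)| along (3.1.5) with illustrative data:
# g = h = tanh, gamma1 = gamma2 = 1, a = 0.4, b = 0.3, beta = 1 - 0.4 - 0.3.
r, a, b, beta = 2, 0.4, 0.3, 0.3
g = h = math.tanh

x = {t: 0.8 * (-1) ** t for t in range(-r, 1)}   # an initial function
for t in range(0, 60):
    x[t + 1] = a * g(x[t]) + b * h(x[t - r])

# V(t) = |x(t)| + sum_{s=t-r}^{t-1} |b(s+r)| |h(x(s))|  (constant b here)
V = lambda t: abs(x[t]) + sum(b * abs(h(x[s])) for s in range(t - r, t))
for t in range(0, 59):
    assert V(t + 1) - V(t) <= -beta * abs(x[t]) + 1e-12
```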

3.2 Metrics and Banach Spaces

This section is devoted to introductory material related to Cauchy sequences, metric spaces, contractions, compactness, the contraction mapping principle, and Banach spaces. The material in this section is taken from class notes that the author has used in a graduate course on real analysis. For an excellent reference, we refer the reader to [23].

Definition 3.2.1.

A pair (E, ρ) is a metric space if E is a set and ρ: E × E → [0, ∞) is such that whenever y, z, and u are in E, then

  1. (a)

    \(\rho (y,z) \geq 0,\;\rho (y,y) = 0,\;\mbox{ and}\;\rho (y,z) = 0\;\mbox{ implies}\;y = z.\)

  2. (b)

    \(\rho (y,z) =\rho (z,y),\) and

  3. (c)

    \(\rho (y,z) \leq \rho (y,u) +\rho (u,z).\)

Definition 3.2.2 (Cauchy Sequence).

A sequence {xn} ⊆ E is a Cauchy sequence if for each ɛ > 0 there exists an \(N \in \mathbb{N}\) such that \(n,m> N\Rightarrow\rho (x_{n},x_{m}) <\varepsilon\).

Definition 3.2.3 (Completeness of Metric Space).

A metric space (E, ρ) is said to be complete if every Cauchy sequence in E converges to a point in E.

Definition 3.2.4.

A set L in a metric space (E, ρ) is compact if each sequence in L has a subsequence with a limit in L.

Definition 3.2.5.

Let {fn} be a sequence of real functions with \(f_{n}: [a,b] \rightarrow \mathbb{R}\).

  1. 1.

    {fn} is uniformly bounded on [a, b] if there exists M > 0 such that | fn(t) | ≤ M for all \(n \in \mathbb{N}\) and for all t ∈ [a, b].

  2. 2.

    {fn} is equicontinuous at t0 if for each ɛ > 0 there exists δ > 0 such that for all \(n \in \mathbb{N}\), if t ∈ [a, b] and | t0 − t | < δ, then | fn(t0) − fn(t) | < ɛ. Also, {fn} is equicontinuous if {fn} is equicontinuous at each t0 ∈ [a, b].

  3. 3.

    {fn} is uniformly equicontinuous if for each ɛ > 0 there exists δ > 0 such that for all \(n \in \mathbb{N}\), if t1, t2 ∈ [a, b] and | t1 − t2 | < δ, then | fn(t1) − fn(t2) | < ɛ.

It is easy to see that {fn} with \(f_{n}(x) = x^{n}\) is not an equicontinuous sequence of functions on [0, 1], although each fn is uniformly continuous.
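This failure of equicontinuity at x = 1 can be illustrated computationally: for ɛ = 1∕2, no δ works, because for every δ some n pushes the oscillation past ɛ.

```python
# f_n(x) = x**n on [0, 1] is not equicontinuous at x = 1: for eps = 1/2 and any
# delta > 0, some n gives f_n(1) - f_n(1 - delta) = 1 - (1 - delta)**n > 1/2.
eps = 0.5
for delta in [0.5, 0.1, 0.01, 0.001]:
    n = 1
    while 1 - (1 - delta) ** n <= eps:   # search for an n defeating this delta
        n += 1
    assert 1 - (1 - delta) ** n > eps
# Each individual f_n, by contrast, is uniformly continuous on [0, 1]
# (it is Lipschitz there with constant n).
```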

Proposition 3.1 (Cauchy Criterion for Uniform Convergence).

If {Fn} is a sequence of bounded functions that is Cauchy in the uniform norm, then {Fn} converges uniformly.

Definition 3.2.6.

A real-valued function f defined on \(E \subseteq \mathbb{R}\) is said to be Lipschitz continuous with Lipschitz constant M if | f(x) − f(y) | ≤ M | x − y | for all x, y ∈ E.

Remark 3.1.

It is an easy exercise to show that a Lipschitz continuous function is uniformly continuous. Also, if each fn in a sequence of functions {fn} has the same Lipschitz constant, then the sequence is uniformly equicontinuous.

Lemma 3.1.

If {fn} is an equicontinuous sequence of functions on a closed bounded interval, then {fn} is uniformly equicontinuous.

Proof.

Suppose {fn} is an equicontinuous sequence defined on [a, b]. Let ɛ > 0. For each x ∈ [a, b], let δx > 0 be such that \(\vert y - x\vert <\delta _{x}\Rightarrow\vert f_{n}(x) - f_{n}(y)\vert <\varepsilon /2\) for all \(n \in \mathbb{N}\). The collection {B(x, δx∕2): x ∈ [a, b]} is an open cover of the compact set [a, b], so it has a finite subcover \(\{B(x_{i},\delta _{x_{i}}/2): i = 1,\ldots,k\}\). Let \(\delta =\min \{\delta _{x_{i}}/2: i = 1,\ldots,k\}\). Then, if x, y ∈ [a, b] with | x − y | < δ, there is some i with \(x \in B(x_{i},\delta _{x_{i}}/2)\). Since \(\vert x - y\vert <\delta \leq \delta _{x_{i}}/2\), we have \(\vert x_{i} - y\vert \leq \vert x_{i} - x\vert + \vert x - y\vert <\delta _{x_{i}}/2 +\delta _{x_{i}}/2 =\delta _{x_{i}}\). Hence \(\vert x_{i} - y\vert <\delta _{x_{i}}\) and \(\vert x_{i} - x\vert <\delta _{x_{i}}\). So, for any \(n \in \mathbb{N}\) we have | fn(x) − fn(y) | ≤ | fn(x) − fn(xi) | + | fn(xi) − fn(y) | < ɛ∕2 + ɛ∕2 = ɛ. So, {fn} is uniformly equicontinuous.

The next theorem gives us the main method of proving compactness in the spaces in which we are interested.

Theorem 3.2.1 (Ascoli-Arzelà).

If {fn(t)} is a uniformly bounded and equicontinuous sequence of real valued functions on an interval [a, b], then there is a subsequence which converges uniformly on [a, b] to a continuous function.

Proof.

Since {fn(t)} is equicontinuous on [a, b], by Lemma 3.1, {fn(t)} is uniformly equicontinuous. Let t1, t2, … be a listing of the rational numbers in [a, b] (note, the set of rational numbers is countable, so this enumeration is possible). The sequence \(\{f_{n}(t_{1})\}_{n=1}^{\infty }\) is a bounded sequence of real numbers (since {fn} is uniformly bounded), so it has a subsequence \(\{f_{n_{k}}(t_{1})\}\) converging to a number which we call ϕ(t1). It will be more convenient to represent this subsequence without sub-subscripts, so we write \(f_{k}^{1}\) for \(f_{n_{k}}\) and switch the index from k to n. So, the subsequence is written as \(\{f_{n}^{1}(t_{1})\}_{n=1}^{\infty }\). Now, the sequence \(\{f_{n}^{1}(t_{2})\}\) is bounded, so it has a convergent subsequence, say \(\{f_{n}^{2}(t_{2})\}\), with limit ϕ(t2). We continue in this way, obtaining a sequence of sequences \(\{f_{n}^{m}(t)\}_{n=1}^{\infty }\) (one sequence for each m), each of which is a subsequence of the previous one. Furthermore, we have \(f_{n}^{m}(t_{m}) \rightarrow \phi (t_{m})\) as n → ∞ for each \(m \in \mathbb{N}\). Now, consider the “diagonal” functions defined by \(F_{k}(t) = f_{k}^{k}(t)\). Since \(f_{n}^{m}(t_{m}) \rightarrow \phi (t_{m})\), it follows that Fr(tm) → ϕ(tm) as r → ∞ for each \(m \in \mathbb{N}\) (in other words, the sequence {Fr(t)} converges pointwise at each tm). We now show that {Fk(t)} converges uniformly on [a, b] by showing it is Cauchy in the uniform norm. Let ɛ > 0. Let δ > 0 be as in the definition of uniform equicontinuity for {fn(t)}, applied with ɛ∕3. Divide [a, b] into p intervals, where \(p> \frac{b-a} {\delta }\). Let ξj be a rational number in the j th interval, for j = 1, …, p. Recall that {Fr(t)} converges at each of the points ξj, since they are rational numbers. So, for each j, there is \(M_{j} \in \mathbb{N}\) such that | Fr(ξj) − Fs(ξj) | < ɛ∕3 whenever r, s > Mj. Let M = max{Mj: j = 1, …, p}. If t ∈ [a, b], then it is in one of the p intervals, say the j th. So, | t − ξj | < δ and hence \(\vert f_{r}^{r}(t) - f_{r}^{r}(\xi _{j})\vert = \vert F_{r}(t) - F_{r}(\xi _{j})\vert <\varepsilon /3\) for every r. Also, if r, s > M, then | Fr(ξj) − Fs(ξj) | < ɛ∕3 (since M is the maximum of the Mj’s).
So, we have for r, s > M,

$$\displaystyle{ \vert F_{r}(t) - F_{s}(t)\vert = \vert F_{r}(t) - F_{r}(\xi _{j}) + F_{r}(\xi _{j}) - F_{s}(\xi _{j}) + F_{s}(\xi _{j}) - F_{s}(t)\vert }$$
$$\displaystyle{ \leq \vert F_{r}(t) - F_{r}(\xi _{j})\vert + \vert F_{r}(\xi _{j}) - F_{s}(\xi _{j})\vert + \vert F_{s}(\xi _{j}) - F_{s}(t)\vert }$$
$$\displaystyle{ \leq \frac{\varepsilon } {3} + \frac{\varepsilon } {3} + \frac{\varepsilon } {3} =\varepsilon. }$$

By the Cauchy Criterion for convergence, the sequence {Fr(t)} converges uniformly on [a, b]. Since each Fr(t) is continuous, the limit function ϕ(t) is also continuous.

Remark 3.2.

The Ascoli-Arzelà Theorem can be generalized to a sequence of functions from [a, b] to \(\mathbb{R}^{n}\). Apply the Ascoli-Arzelà Theorem to the first coordinate functions to get a uniformly convergent subsequence; then apply the theorem again to the corresponding subsequence of functions restricted to the second coordinate, getting a sub-subsequence, and so on.

Banach spaces form an important class of metric spaces. We now define Banach spaces in several steps.

Definition 3.2.7.

A triple (V, +, ⋅ ) is said to be a linear (or vector) space over a field F if V is a set and the following are true.

  1. 1.

    Properties of +

    1. a.

      + is a function from V × V to V. Outputs are denoted x + y.

    2. b.

      for all x, yV, x + y = y + x. (+ is commutative)

    3. c.

      for all x, y, wV, x + (y + w) = (x + y) + w. (+ is associative)

    4. d.

      there is a unique element of V which we denote 0 such that for all xV, 0 + x = x + 0 = x. (additive identity)

    5. e.

      for each xV there is a unique element of V which we denote − x such that x + (−x) = −x + x = 0. (additive inverse)

  2. 2.

    Scalar multiplication

    1. a.

      ⋅ is a function from F × V to V. Outputs are denoted α ⋅ x, or αx.

    2. b.

      for all α, βF and xV, α(βx) = (αβ)x.

    3. c.

      for all xV, 1 ⋅ x = x.

    4. d.

      for all α, βF and xV, (α + β)x = αx + βx.

    5. e.

      for all αF and x, yV, α(x + y) = αx + αy.

Commonly, the real numbers or complex numbers are the field in the above definition. For our purposes, we only consider the field of real numbers \(F = \mathbb{R}\).

Definition 3.2.8 (Normed Spaces).

A vector space (V, +, ⋅ ) is a normed space if for each xV there is a nonnegative real number ∥x∥, called the norm of x, such that for each x, yV and \(\alpha \in \mathbb{R}\)

  1. 1.

    ∥x∥ = 0 if and only if x = 0

  2. 2.

    ∥αx∥ = | α | ∥x∥

  3. 3.

    ∥x + y∥ ≤ ∥x∥ + ∥y∥

Remark 3.3.

A norm on a vector space always defines a metric ρ(x, y) = ∥x − y∥ on the vector space. Given a metric ρ defined on a vector space, it is tempting to define ∥v∥ = ρ(v, 0). But this is not always a norm.

Definition 3.2.9.

A Banach space is a complete normed vector space. That is, a vector space (X, +, ⋅ ) with norm ∥⋅ ∥ for which the metric ρ(x, y) = ∥x − y∥ is complete.

Example 3.1.

The space \((\mathbb{R}^{n},+,\cdot )\) over the field \(\mathbb{R}\) is a vector space (with the usual vector addition + and scalar multiplication ⋅ ) and there are many suitable norms for it. For example, if x = (x1, x2, …, xn), then

  1. 1.

    \(\|x\| =\max _{1\leq i\leq n}\vert x_{i}\vert\),

  2. 2.

    \(\|x\| = \sqrt{\sum _{i=1 }^{n }x_{i }^{2}}\), or

  3. 3.

    \(\|x\| =\sum _{ i=1}^{n}\vert x_{ i}\vert\)

are all suitable norms. Norm 2. is the Euclidean norm: the norm of a vector is its Euclidean distance to the zero vector and the metric defined from this norm is the usual Euclidean metric. Norm 3. generates the “taxi-cab” metric on \(\mathbb{R}^{2}\).

Remark 3.4.

Consider the vector space \((\mathbb{R}^{n},+,\cdot )\) as a metric space with metric defined by ρ(x, y) = ∥x − y∥, where ∥⋅ ∥ is any of the norms in Example 3.1. The completeness of this metric space comes directly from the completeness of \(\mathbb{R}\); hence \((\mathbb{R}^{n},\|\cdot \|)\) is a Banach space.

Remark 3.5.

In the Euclidean space \(\mathbb{R}^{n}\), compactness is equivalent to closed and bounded (Heine-Borel Theorem). In fact, the metrics generated from any of the norms in Example 3.1 are equivalent in the sense that they generate the same topologies. Moreover, compactness is equivalent to closed and bounded in each of those metrics.

Example 3.2.

Let \(C([a,b], \mathbb{R}^{n})\) denote the space of all continuous functions \(f: [a,b] \rightarrow \mathbb{R}^{n}\).

  1. 1.

    \(C([a,b], \mathbb{R}^{n})\) is a vector space over \(\mathbb{R}\).

  2. 2.

    If \(\|f\| =\max _{a\leq t\leq b}\vert f(t)\vert\) where | ⋅ | is a norm on \(\mathbb{R}^{n}\), then \(\left (C([a,b], \mathbb{R}^{n}),\|\cdot \|\right )\) is a Banach space.

  3. 3.

    Let M and K be two positive constants and define

    $$\displaystyle{ L =\{ f \in C([a,b], \mathbb{R}^{n}):\| f\| \leq M;\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$

    then L is compact.

Proof.

(of Part 3.) Let {fn} be any sequence in L. The functions are uniformly bounded by M and have the same Lipschitz constant, K. So, the sequence is uniformly equicontinuous . By the Ascoli-Arzelà Theorem, there is a subsequence, \(\{f_{n_{k}}\}\), that converges uniformly to a continuous function \(f: [a,b] \rightarrow \mathbb{R}^{n}\). We now show that fL. Well, | fn(t) | ≤ M for each t ∈ [a, b], so | f(t) | ≤ M for each t ∈ [a, b] and hence ∥f∥ ≤ M. Now, fix u, v ∈ [a, b] and fix ɛ > 0. Since \(\{f_{n_{k}}\}\) converges uniformly to f, there is \(N \in \mathbb{N}\) such that \(\vert f_{n_{k}}(t) - f(t)\vert <\varepsilon /2\) for all t ∈ [a, b] and all kN. So, fix any kN and we have

$$\displaystyle{ \vert f(u) - f(v)\vert = \vert f(u) - f_{n_{k}}(u) + f_{n_{k}}(u) - f_{n_{k}}(v) + f_{n_{k}}(v) - f(v)\vert }$$
$$\displaystyle{ \leq \vert f(u) - f_{n_{k}}(u)\vert + \vert f_{n_{k}}(u) - f_{n_{k}}(v)\vert + \vert f_{n_{k}}(v) - f(v)\vert }$$
$$\displaystyle{ <\varepsilon /2 + K\vert u - v\vert +\varepsilon /2 = K\vert u - v\vert +\varepsilon. }$$

Since ɛ > 0 was arbitrary, | f(u) − f(v) | ≤ K | u − v |. Hence f ∈ L. We have demonstrated that {fn} has a subsequence converging to an element of L. Hence, L is compact.

Example 3.3.

Consider \(\mathbb{R}\) as a vector space over \(\mathbb{R}\) and define the metric \(d(x,y) = \frac{\vert x - y\vert } {1 + \vert x - y\vert }\). For each \(x \in \mathbb{R}\), we can define ∥x∥ = d(x, 0). Explain why ∥⋅ ∥ is not a norm on \(\mathbb{R}\).
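As a hint toward the answer, the sketch below checks the norm axioms numerically and shows that it is absolute homogeneity, ∥αx∥ = |α| ∥x∥, that fails for this ∥⋅∥.

```python
# For the metric d(x, y) = |x - y| / (1 + |x - y|), the candidate norm
# ||x|| = d(x, 0) fails absolute homogeneity (the triangle inequality, by
# contrast, is inherited from the metric's translation invariance).
norm = lambda x: abs(x) / (1 + abs(x))

assert norm(2 * 1.0) != 2 * norm(1.0)       # ||2x|| = 2/3, but 2||x|| = 1
assert norm(1.0 + 1.0) <= norm(1.0) + norm(1.0)   # triangle inequality holds
```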

Example 3.4.

Let \(\phi: [a,b] \rightarrow \mathbb{R}^{n}\) be continuous and let S be the set of continuous functions \(f: [a,c] \rightarrow \mathbb{R}^{n}\), with c > b, such that f(t) = ϕ(t) for a ≤ t ≤ b. Define \(\rho (f,g) =\| f - g\| =\sup _{a\leq t\leq c}\vert f(t) - g(t)\vert\) for f, g ∈ S. Then (S, ρ) is a complete metric space but not a Banach space, since for f, g ∈ S the sum \(f + g\) is not in S (unless ϕ ≡ 0).

Example 3.5.

Let (S, ρ) be the space of continuous bounded functions \(f: (-\infty,0] \rightarrow \mathbb{R}\) with \(\rho (f,g) =\| f - g\| =\sup _{-\infty <t\leq 0}\vert f(t) - g(t)\vert\).

  1. 1.

    Show that (S, ρ) is a Banach space .

  2. 2.

    The set L = {f ∈ S: ∥f∥ ≤ 1, | f(u) − f(v) | ≤ | u − v | } is not compact in (S, ρ).

Proof.

(of 2.) Consider the sequence of functions defined

$$\displaystyle{ f_{n}(t) = \left \{\begin{array}{lr} 0 & \mathrm{if}\ \ t \leq -n \\ \frac{t} {n} + 1\ \ \ \ &\mathrm{if} - n <t \leq 0 \end{array} \right. }$$

Then, the sequence converges pointwise to f = 1, but ρ(fn, f) = 1 for all \(n \in \mathbb{N}\). So, there is no subsequence of {fn} converging in the norm ∥⋅ ∥ (i.e., converging uniformly) to f.
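The two claims in this proof, pointwise convergence to 1 together with uniform distance 1, can be reproduced numerically; the evaluation grid below is an illustrative choice of ours.

```python
# f_n from the proof of Example 3.5(2): f_n(t) = 0 for t <= -n, t/n + 1 for
# -n < t <= 0.  Pointwise limit is the constant 1, but sup-distance stays 1.
f = lambda n, t: 0.0 if t <= -n else t / n + 1

# pointwise convergence at a fixed t
t0 = -5.0
assert abs(f(1000, t0) - 1.0) < 1e-2

# the sup-distance to the limit (approximated on a grid reaching t <= -n) is 1
for n in (1, 10, 100):
    grid = [-0.5 * k for k in range(4 * n)]
    assert max(abs(f(n, t) - 1.0) for t in grid) == 1.0
```

Since every subsequence keeps sup-distance 1 from the pointwise limit, none can converge uniformly.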

Example 3.6.

Let (S, ρ) be the space of continuous functions \(f: (-\infty,0] \rightarrow \mathbb{R}^{n}\) with

$$\displaystyle{ \rho (f,g) =\sum _{ n=1}^{\infty }2^{-n}\rho _{ n}(f,g)/\{1 +\rho _{n}(f,g)\} }$$

where

$$\displaystyle{ \rho _{n}(f,g) =\max _{-n\leq s\leq 0}\vert f(s) - g(s)\vert }$$

and | ⋅ | is the Euclidean norm on \(\mathbb{R}^{n}\).

  1. 1.

    Then (S, ρ) is a complete metric space. The distance between any two functions is bounded by 1.

  2. 2.

    (S, +, ⋅ ) is a vector space over \(\mathbb{R}.\)

  3. 3.

    (S, ρ) is not a Banach space because ρ does not arise from a norm: ∥x∥ := ρ(x, 0) does not satisfy ∥αx∥ = | α | ∥x∥.

  4. 4.

    Let M and K be given positive constants. Then the set

    $$\displaystyle{ L =\{ f \in S:\| f\| \leq M\;\mbox{ on}\;(-\infty,0],\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$

    is compact in (S, ρ).

Proof.

(of 4.) Let {fn} be a sequence in L. It is clear that if fn → f uniformly on compact subsets of (−∞, 0], then ρ(fn, f) → 0 as n → ∞. We begin by considering {fn} on [−1, 0]. The sequence is uniformly bounded and equicontinuous, so there is a subsequence, say \(\{f_{n}^{1}\}\), converging uniformly to some continuous f on [−1, 0]. Moreover, the argument of Example 3.2 shows that | f(t) | ≤ M and | f(u) − f(v) | ≤ K | u − v |. Next we consider \(\{f_{n}^{1}\}\) on [−2, 0]. Again the sequence is uniformly bounded and equicontinuous, so there is a subsequence, say \(\{f_{n}^{2}\}\), converging uniformly on [−2, 0] to a continuous extension of f. Continuing in this way and taking the diagonal \(F_{n} = f_{n}^{n}\), we obtain a subsequence of {fn} that converges uniformly on compact subsets of (−∞, 0] to a function f ∈ L. This proves L is compact.
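To make the metric of Example 3.6 concrete, the sketch below evaluates a truncated version of ρ for two scalar functions (the truncation level and the sampling grid for each ρn are illustrative choices of ours). Since each summand is at most 2^{−n}, every distance comes out below 1, as claimed in part 1.

```python
import math

# Truncated evaluation of rho(f, g) = sum_n 2**-n * rho_n/(1 + rho_n) for
# scalar f, g, where rho_n(f, g) = max over [-n, 0] of |f(s) - g(s)|
# (approximated on a finite grid).
def rho(f, g, terms=40, pts=200):
    total = 0.0
    for n in range(1, terms + 1):
        grid = [-n * k / pts for k in range(pts + 1)]        # samples of [-n, 0]
        rn = max(abs(f(s) - g(s)) for s in grid)             # approx. rho_n
        total += 2 ** (-n) * rn / (1 + rn)
    return total

# Two bounded continuous functions on (-infinity, 0]:
d = rho(math.exp, math.cos)
assert 0.0 < d < 1.0    # positive, and below the universal bound 1
```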

The next result is stated in the form of a theorem whose proof we leave to the reader.

Theorem 3.2.2.

Let g: (−∞, 0] → [1, ∞) be a continuous strictly decreasing function with g(0) = 1 and g(r) → ∞ as r → −∞. Let (S, | ⋅ |g) be the space of continuous functions \(f: (-\infty,0] \rightarrow \mathbb{R}^{n}\) for which

$$\displaystyle{ \vert f\vert _{g}:=\sup _{-\infty <t\leq 0}\frac{\vert f(t)\vert } {\vert g(t)\vert } }$$

exists. Then

  1. 1.

    (S, | ⋅ |g) is a Banach space.

  2. 2.

    Let M and K be given positive constants. Then the set

    $$\displaystyle{ L =\{ f \in S:\| f\| \leq M\;\mathit{\mbox{ on}}\;(-\infty,0],\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$

    is compact in (S, | ⋅ |g).

Definition 3.2.10.

Let (E, ρ) be a metric space and D: E → E. The operator or mapping D is a contraction if there exists an α ∈ (0, 1) such that for all x, y ∈ E,

$$\displaystyle{ \rho \Big(D(x),D(y)\Big) \leq \alpha \rho (x,y). }$$

Theorem 3.2.3 (Contraction Mapping Principle).

Let (E, ρ) be a complete metric space and D: E → E a contraction operator. Then there exists a unique ϕ ∈ E with \(D(\phi ) =\phi.\) Moreover, if ψ ∈ E and if {ψn} is defined inductively by \(\psi _{1} = D(\psi )\) and \(\psi _{n+1} = D(\psi _{n}),\) then \(\psi _{n} \rightarrow \phi\) , the unique fixed point.

Proof.

Let y0E and define a sequence {yn} in E by \(y_{1} = Dy_{0},\;y_{2} = Dy_{1} = D(Dy_{0}) = D^{2}y_{ 0},\mathop{\ldots },y_{n} = Dy_{n-1} = D^{n}y_{ 0}.\) Next we show that {yn} is a Cauchy sequence. To see this, if m > n, then

$$\displaystyle\begin{array}{rcl} \rho (y_{n},y_{m})& =& \rho (D^{n}y_{ 0},D^{m}y_{ 0}) {}\\ & \leq & \alpha \rho (D^{n-1}y_{ 0},D^{m-1}y_{ 0}) {}\\ & \vdots & {}\\ & \leq & \alpha ^{n}\rho (y_{ 0},y_{m-n}) {}\\ & \leq & \alpha ^{n}\big\{\rho (y_{ 0},y_{1}) +\rho (y_{1},y_{2}) + \mathop{\ldots } +\rho (y_{m-n-1},y_{m-n})\big\} {}\\ & \leq & \alpha ^{n}\big\{\rho (y_{ 0},y_{1}) +\alpha \rho (y_{0},y_{1}) + \mathop{\ldots } +\alpha ^{m-n-1}\rho (y_{ 0},y_{1})\big\} {}\\ & =& \alpha ^{n}\rho (y_{ 0},y_{1})\big\{1 +\alpha +\mathop{\ldots } +\alpha ^{m-n-1}\big\} {}\\ & \leq & \alpha ^{n}\rho (y_{ 0},y_{1}) \frac{1} {1-\alpha }. {}\\ \end{array}$$

Thus, since α ∈ (0, 1), we have that

$$\displaystyle{ \rho (y_{n},y_{m}) \rightarrow 0\;\mbox{ as }\;n \rightarrow \infty. }$$

This shows the sequence {yn} is Cauchy. Since (E, ρ) is a complete metric space, {yn} has a limit, say y in E. Since the mapping D is continuous we have that

$$\displaystyle{ D(y) = D(\lim _{n\rightarrow \infty }y_{n}) =\lim _{n\rightarrow \infty }D(y_{n}) =\lim _{n\rightarrow \infty }y_{n+1} = y, }$$

and y is a fixed point. It remains to show that y is unique. Let x, y ∈ E be such that D(x) = x and D(y) = y. Then

$$\displaystyle{ 0 \leq \rho (x,y) =\rho \big (D(x),D(y)\big) \leq \alpha \rho (x,y), }$$

which implies that

$$\displaystyle{ 0 \leq (1-\alpha )\rho (x,y) \leq 0. }$$

Since 1 −α ≠ 0, we must have ρ(x, y) = 0 and hence x = y. This completes the proof.
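The proof is constructive, and the iteration is easy to run. The sketch below applies it to the contraction D(x) = cos x on the complete space [0, 1] (an illustrative choice; |D′(x)| = |sin x| ≤ sin 1 < 1 there, and cos maps [0, 1] into itself) and checks the a priori bound ρ(y_n, ϕ) ≤ α^n ρ(y_0, y_1)∕(1 − α) extracted from the proof.

```python
import math

# Picard iteration y_{n+1} = cos(y_n) on [0, 1]; contraction constant
# alpha = sin(1) < 1 by the mean value theorem.
alpha = math.sin(1.0)
y = [0.2]
for _ in range(300):
    y.append(math.cos(y[-1]))

phi = y[-1]                              # numerical fixed point
assert abs(math.cos(phi) - phi) < 1e-12  # D(phi) = phi to machine precision

# a priori error bound from the Cauchy estimate in the proof
for n in range(40):
    assert abs(y[n] - phi) <= alpha ** n * abs(y[1] - y[0]) / (1 - alpha) + 1e-12
```

Note that convergence holds from any starting point in the space, as the theorem asserts.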

The following is another form of the contraction mapping principle.

Theorem 3.2.4 (Contraction Mapping Principle, Banach Fixed Point Theorem).

Let (E, ρ) be a complete metric space and P: EE such that P m is a contraction for some fixed positive integer m. Then there is a unique xE with P(x) = x.
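A minimal illustration of this sharper form (our own example, not from the text): an affine map P(x) = Ax + c on \(\mathbb{R}^{2}\) with A nilpotent can double distances, so P itself is not a contraction, yet P² is a contraction (with constant 0), and P still has a unique fixed point.

```python
# P(x) = A x + c with nilpotent A (A*A = 0): P is not a contraction, but
# P o P is, so Theorem 3.2.4 applies with m = 2.
A = [[0.0, 2.0], [0.0, 0.0]]
c = [1.0, 3.0]

def P(x):
    return [A[0][0]*x[0] + A[0][1]*x[1] + c[0],
            A[1][0]*x[0] + A[1][1]*x[1] + c[1]]

# P doubles the distance between these two points, so it is not a contraction.
Px, Py = P([0.0, 0.0]), P([0.0, 1.0])
assert abs(Px[0] - Py[0]) == 2.0

# P(P(x)) = A*A*x + A*c + c = A*c + c is constant: a contraction with alpha = 0.
z = P(P([17.0, -4.0]))
w = P(P([-100.0, 55.0]))
assert z == w == [7.0, 3.0]

# The unique fixed point of P solves x = A x + c, namely x = (7, 3).
assert P([7.0, 3.0]) == [7.0, 3.0]
```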

3.3 Highly Nonlinear Delay Equations

We limit our study to the highly nonlinear delay difference equation, typified by

$$\displaystyle{ x(t + 1) = a(t)g(x(t - r)) }$$
(3.3.1)

where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and r is a positive integer. More conditions on g are forthcoming. Results of this section can be found in [140]. In the paper of Raffoul [136], the author considered the linear difference equation

$$\displaystyle{ \bigtriangleup x(t) = -a(t)x(t - r) }$$

and used fixed point theory to obtain asymptotic stability and periodicity results. It is worth mentioning here that (3.3.1) differs fundamentally from the above-mentioned equation due to the nonlinearity introduced by the function g. Moreover, when inverting (3.3.1) in order to construct a mapping suitable for fixed point theory, one has to introduce a linear term, which results in the additional term x − g(x). Also, the results of this section employ a nonconventional metric so that the contraction constant does not depend on the Lipschitz constant K that g will be required to satisfy. First we rewrite (3.3.1) to have it ready for inversion so that fixed point theory can be used. Rewrite (3.3.1) as

$$\displaystyle{ x(t + 1) = a(t + r)g(x(t)) -\bigtriangleup _{t}\sum _{s=t-r}^{t-1}a(s + r)g(x(s)), }$$

where △t represents the difference with respect to t. We must create a linear term in x in order to be able to invert. Thus, we add and subtract a(t + r)x(t) and get

$$\displaystyle{ x(t + 1) = a(t + r)x(t) -a(t + r)[x(t) -g(x(t))] -\bigtriangleup _{t}\sum _{s=t-r}^{t-1}a(s + r)g(x(s)). }$$
(3.3.2)
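Since the added and subtracted terms cancel, (3.3.2) generates exactly the same solution as (3.3.1). The sketch below confirms this numerically with illustrative data of our choosing (a(t) = 0.5 + 0.3 cos t, g = tanh, r = 2).

```python
import math

# Check that the rewritten form (3.3.2) reproduces (3.3.1) step by step.
r = 2
a = lambda t: 0.5 + 0.3 * math.cos(t)       # illustrative coefficient
g = math.tanh                                # illustrative odd, Lipschitz g
psi = {t: 0.5 * t + 1.0 for t in range(-r, 1)}   # an initial function

x = dict(psi)   # driven by (3.3.1): x(t+1) = a(t) g(x(t-r))
y = dict(psi)   # driven by (3.3.2), with the telescoping sum written out
S = lambda z, t: sum(a(s + r) * g(z[s]) for s in range(t - r, t))
for t in range(0, 40):
    x[t + 1] = a(t) * g(x[t - r])
    y[t + 1] = (a(t + r) * y[t] - a(t + r) * (y[t] - g(y[t]))
                - (S(y, t + 1) - S(y, t)))   # the Delta_t of the sum
    assert abs(x[t + 1] - y[t + 1]) < 1e-12
```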

For each t0 ≥ 0, equation (3.3.2) requires an initial function \(\psi: [t_{0} - r,t_{0}] \rightarrow \mathbb{R}\) in order to specify a solution x(t, t0, ψ). The computation is the same for any t0 ≥ 0, and so we take t0 = 0. Thus, we say x(t): = x(t, 0, ψ) is a solution of (3.3.2) if x(t) = ψ(t) on [−r, 0] and x(t) satisfies (3.3.2) for t ≥ 0. We begin with the following lemma, whose proof we omit.

Lemma 3.2.

Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\). Then x(t) is a solution of equation  (3.3.2) if and only if

$$\displaystyle\begin{array}{rcl} x(t)& =& \psi (0)\prod _{s=0}^{t-1}a(s + r) -\sum _{ s=t-r}^{t-1}a(s + r)g(x(s)) +\prod _{ u=0}^{t-1}a(u + r)\sum _{ s=-r}^{-1}a(s + r)g(\psi (s)) \\ & +& \sum _{s=0}^{t-1}\Big(a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\sum _{ u=s-r}^{s-1}a(u + r)g(x(u))\Big) \\ & -& \sum _{s=0}^{t-1}\Big(\prod _{ u=s+1}^{t-1}a(u + r)\Big)a(s + r)[x(s) - g(x(s))],\;t \geq 0. {}\end{array}$$
(3.3.3)

The proof of Lemma 3.2 is easily obtained from the variation of parameters formula followed by summation by parts. It is assumed that the function g is continuous, odd, and locally Lipschitz with Lipschitz constant K. In addition, we assume that x − g(x) is nondecreasing and that g(x) is increasing on an interval [0, L] for some L > 0. Under these assumptions, the functions g(x) and x − g(x) are locally Lipschitz with the same Lipschitz constant K > 0.

Note that if 0 < L1 < L, then the conditions on g hold on [−L1, L1]. Also note that if \(\phi: [-r,\infty ) \rightarrow \mathbb{R}\) with ϕ0 = ψ, and if | ϕ(t) | ≤ L, then for t ≥ 0 we have

$$\displaystyle{ \vert \phi (t) - g(\phi (t))\vert \leq L - g(L), }$$

since x − g(x) is odd and nondecreasing on [0, L]. Here ϕ0(s) = ψ(s) for − r ≤ s ≤ 0. Let

$$\displaystyle{ S =\Big\{\phi: [-r,\infty ) \rightarrow \mathbb{R}:\phi _{0} =\psi,\vert \phi (t)\vert \leq L\Big\}. }$$

For ϕ ∈ S, we define \(P: S \rightarrow S\) by

$$\displaystyle{ (P\phi )(t) =\psi (t)\;\mbox{ if}\; - r \leq t \leq 0 }$$

and

$$\displaystyle\begin{array}{rcl} (P\phi )(t)& =& \psi (0)\prod _{s=0}^{t-1}a(s+r) -\sum _{ s=t-r}^{t-1}a(s+r)g(\phi (s))+\prod _{ u=0}^{t-1}a(u+r)\sum _{ s=-r}^{-1}a(s + r)g(\psi (s)) \\ & +& \sum _{s=0}^{t-1}\Big(a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\sum _{ u=s-r}^{s-1}a(u + r)g(\phi (u))\Big) \\ & -& \sum _{s=0}^{t-1}\Big(\prod _{ u=s+1}^{t-1}a(u + r)\Big)a(s + r)[\phi (s) - g(\phi (s))],\;t \geq 0. {}\end{array}$$
(3.3.4)

Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that for each L1 ∈ (0, L],

$$\displaystyle\begin{array}{rcl} & & \vert L_{1} - g(L_{1})\vert \max _{t\geq 0}\sum _{s=0}^{t-1}\vert (\prod _{ u=s+1}^{t-1}a(u + r))a(s + r)\vert + g(L_{ 1})\sum _{s=t-r}^{t-1}\vert a(s + r)\vert \\ & +& g(L_{1})\max _{t\geq 0}\sum _{s=0}^{t-1}\vert (a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}\vert a(u + r)\vert <L_{ 1}. {}\end{array}$$
(3.3.5)

We note that since g(x) is Lipschitz with Lipschitz constant K and g(0) = 0 (as g is odd), we have | g(x) | ≤ K | x |.

Theorem 3.3.1 ([140]).

Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . If  (3.3.5) holds, then every solution x(t, 0, ψ) of  (3.3.2) with small initial function ψ(t) is bounded, provided P is a contraction.

Proof.

Let ϕ ∈ S. Then, by (3.3.5), there exists an α ∈ (0, 1) such that for t ≥ 0,

$$\displaystyle\begin{array}{rcl} \vert (P\phi )(t)\vert & \leq & \vert \vert \psi \vert \vert \;\vert \prod _{s=0}^{t-1}a(s + r)\vert + \vert \prod _{ u=0}^{t-1}a(u + r)\vert \;\vert \vert g(\psi )\vert \vert \sum _{ s=-r}^{-1}\vert a(s + r)\vert \\ & +& \vert L - g(L)\vert \max _{t\geq 0}\sum _{s=0}^{t-1}\vert (\prod _{ u=s+1}^{t-1}a(u + r))a(s + r)\vert + g(L)\sum _{ s=t-r}^{t-1}\vert a(s + r)\vert \\ & +& g(L)\max _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}\vert a(u + r)\vert \\ &\leq & \vert \vert \psi \vert \vert \prod _{s=0}^{t-1}\vert a(s + r)\vert + \vert \prod _{ u=0}^{t-1}a(u + r)\vert \;\vert \vert g(\psi )\vert \vert \sum _{ s=-r}^{-1}\vert a(s + r)\vert +\alpha L \\ & \leq & \prod _{s=0}^{t-1}\vert a(s + r)\vert \big[\vert \vert \psi \vert \vert + K\vert \vert \psi \vert \vert \big]\sum _{ s=-r}^{-1}\vert a(s + r)\vert +\alpha L. {}\end{array}$$
(3.3.6)

If we choose the initial function ψ small enough so that we have

$$\displaystyle{ \prod _{s=0}^{t-1}\vert a(s + r)\vert \big[\vert \vert \psi \vert \vert + K\vert \vert \psi \vert \vert \big]\sum _{ s=-r}^{-1}\vert a(s + r)\vert <(1-\alpha )L, }$$

then this yields

$$\displaystyle{ \vert (P\phi )(t)\vert \leq (1-\alpha )L +\alpha L = L. }$$

Thus, \(P: S \rightarrow S\). This shows that any solution x(t, 0, ψ) of (3.3.2) that is in S is bounded. Next we show that P defines a contraction map. Using the regular maximum norm would require the contraction constant to depend on the Lipschitz constant K. Instead, we use the weighted norm | ⋅ |K, where for ϕ ∈ S we have

$$\displaystyle{ \vert \phi \vert _{K} =\sup _{t\geq 0} \frac{1} {dK}\Big(\prod _{s=0}^{t-1}\vert a(s + r)\vert \Big)\vert \phi (t)\vert,\;\mbox{ for some constant }\;d> 0. }$$

Proposition 3.2 ([140]).

Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) with \(\vert a(t + r)\vert \leq \frac{1} {2}.\) Then P is a contraction, with contraction constant 3∕d for d > 3.

Proof.

Let ϕ, φS. Then for t ≥ 0, we have

$$\displaystyle\begin{array}{rcl} & & \vert (P\phi ) - (P\varphi )\vert _{K} \leq \sum _{s=t-r}^{t-1}\vert a(s + r)\vert \vert g(\phi (s)) - g(\varphi (s))\vert \frac{1} {dK}\prod _{u=0}^{t-1}\vert a(u + r)\vert \\ & +& \sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}\vert a(u + r)\vert \vert g(\phi (u)) - g(\varphi (u))\vert \frac{1} {dK}\prod _{u=0}^{t-1}\vert a(u + r)\vert \\ & +& \sum _{s=0}^{t-1}\Big(\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert \Big)\vert a(s + r)\vert \vert \phi (s) - g(\phi (s)) \\ & -& \big(\varphi (s) - g(\varphi (s))\big)\vert \frac{1} {dK}\prod _{u=0}^{t-1}\vert a(u + r)\vert. {}\end{array}$$
(3.3.7)

Our aim is to simplify (3.3.7). First we remind the reader that, due to the conditions on g(x) and x − g(x), both functions share the same Lipschitz constant K. Moreover, since \(\vert a(t + r)\vert \leq \frac{1} {2}\), we have \(\vert a(t + r)\vert \leq 1 -\vert a(t + r)\vert \) and \(\vert a(t + r)\vert ^{2} \leq 1 -\vert a(t + r)\vert ^{2}\). Next, we consider the first term of (3.3.7).

$$\displaystyle\begin{array}{rcl} & & \sum _{s=t-r}^{t-1}\vert a(s + r)\vert \vert g(\phi (s)) - g(\varphi (s))\vert \frac{1} {dK}\prod _{u=0}^{t-1}\vert a(u + r)\vert {}\\ & \leq & \sup _{t\geq 0} \frac{K} {dK}\sum _{s=t-r}^{t-1}\vert a(s + r)\vert \vert \phi (s) -\varphi (s)\vert \prod _{ u=0}^{s-1}\vert a(u + r)\vert \prod _{ u=s}^{t-1}\vert a(u + r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=t-r}^{t-1}\vert a(s + r)\vert \prod _{ u=s}^{t-1}\vert a(u + r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=t-r}^{t-1}\vert a(s + r)\vert \prod _{ u=s+1}^{t-1}\vert a(u + r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=t-r}^{t-1}(1 -\vert a(s + r)\vert )\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=t-r}^{t-1}\bigtriangleup _{ s}\Big(\prod _{u=s}^{t-1}\vert a(u + r)\vert \Big) {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\Big(1 -\prod _{u=t-r}^{t-1}\vert a(u + r)\vert \Big) {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}. {}\\ \end{array}$$
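The telescoping step used here, \(\bigtriangleup _{s}\big(\prod _{u=s}^{t-1}\vert a(u + r)\vert \big) = (1 -\vert a(s + r)\vert )\prod _{u=s+1}^{t-1}\vert a(u + r)\vert \), can be verified numerically. The sketch below uses randomly chosen values of | a(u + r) | in (0, 1∕2] (an assumption matching the hypothesis of Proposition 3.2) and checks the summed form of the identity:

```python
import random

# Values of |a(u + r)| for u = 0, ..., t-1, chosen so that |a| <= 1/2.
t, r = 12, 5
a = [random.uniform(0.01, 0.5) for _ in range(t)]

def prod(seq):
    p = 1.0
    for v in seq:
        p *= v
    return p

# Summed telescoping identity:
#   sum_{s=t-r}^{t-1} (1 - a_s) prod_{u=s+1}^{t-1} a_u = 1 - prod_{u=t-r}^{t-1} a_u
lhs = sum((1 - a[s]) * prod(a[s + 1:t]) for s in range(t - r, t))
rhs = 1 - prod(a[t - r:t])
assert abs(lhs - rhs) < 1e-9
```

The same identity is what drives the bounds for the second and third terms of (3.3.7) below.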

Next we turn our attention to the second term of (3.3.7).

$$\displaystyle\begin{array}{rcl} & & \sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}\vert a(u + r)\vert \vert g(\phi (u)) - g(\varphi (u))\vert \frac{1} {dK}\prod _{l=0}^{t-1}\vert a(l + r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k+r)\vert \sum _{ u=s-r}^{s-1}\vert a(u+r)\vert \prod _{l=u+1}^{t-1}\vert a(l+r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}(1 -\vert a(u + r)\vert )\prod _{ l=u+1}^{t-1}\vert a(l + r)\vert {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \sum _{ u=s-r}^{s-1}\bigtriangleup _{ u}\Big(\prod _{l=u}^{t-1}\vert a(l + r)\vert \Big) {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\vert \prod _{ l=s}^{t-1}\vert a(l + r)\vert {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\vert ^{2}\Big(\prod _{ k=s+1}^{t-1}\vert a(k + r)\vert \Big)^{2} {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}(1 -\vert a(s + r)\vert ^{2})\Big(\prod _{ k=s+1}^{t-1}\vert a(k + r)\vert \Big)^{2} {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\bigtriangleup _{ s}\Big(\prod _{k=s}^{t-1}\vert a(k + r)\vert \Big)^{2} {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}. {}\\ \end{array}$$

Now we deal with the last term of (3.3.7).

$$\displaystyle\begin{array}{rcl} & & \sum _{s=0}^{t-1}\Big(\prod _{ u=s+1}^{t-1}\vert a(u+r)\vert \Big)\vert a(s+r)\vert \vert \phi (s)-g(\phi (s))-\big(\varphi (s)-g(\varphi (s))\big)\vert \frac{1} {dK}\prod _{u=0}^{t-1}\vert a(u+r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert \vert a(s + r)\vert \prod _{ u=s}^{t-1}\vert a(u + r)\vert {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\vert a(s + r)\vert ^{2}\Big(\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert \Big)^{2} {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}(1 -\vert a(s + r)\vert ^{2})\Big(\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert \Big)^{2} {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\sum _{s=0}^{t-1}\bigtriangleup _{ s}\Big(\prod _{u=s}^{t-1}\vert a(u + r)\vert \Big)^{2} {}\\ & =& \frac{1} {d}\vert \phi -\varphi \vert _{K}\sup _{t\geq 0}\Big(1 - \Big(\prod _{u=0}^{t-1}\vert a(u + r)\vert \Big)^{2}\Big) {}\\ & \leq & \frac{1} {d}\vert \phi -\varphi \vert _{K}. {}\\ \end{array}$$

Substituting the above three expressions into (3.3.7) yields

$$\displaystyle{ \vert (P\phi ) - (P\varphi )\vert _{K} \leq (\frac{1} {d} + \frac{1} {d} + \frac{1} {d})\vert \phi -\varphi \vert _{K}, }$$

which makes P a contraction for d > 3. Let \((\mathcal{X},\vert \cdot \vert )\) be the Banach space of bounded sequences \(\phi: [0,\infty ) \rightarrow \mathbb{R}\). Since S is a closed and bounded subset of the Banach space \(\mathcal{X}\), the set S is complete. Thus, P: S → S has a unique fixed point. This completes the proof.

We have the following corollary.

Corollary 3.1 ([140]).

Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . If  (3.3.5) holds with \(\vert a(t + r)\vert \leq \frac{1} {2}\) , then the unique solution x(t, 0, ψ) of  (3.3.2) with small initial function ψ(t) is bounded and its zero solution is stable.

Proof.

Let P be defined by (3.3.4). Then by Theorem 3.3.1, P maps S into S. Moreover, by Proposition 3.2, P is a contraction on S, and hence the unique solution of (3.3.2) is bounded by Theorem 3.3.1. It remains to show that the zero solution is stable. Let L be given by (3.3.6) and set 0 < ε < L. Choose \(\delta = \frac{\epsilon (1-\alpha )} {(1+K)\sup _{t\geq 0}\prod _{s=0}^{t-1}\vert a(s+r)\vert \sum _{s=-r}^{-1}\vert a(s+r)\vert }.\) Then for | | ψ | | < δ, we have by (3.3.6) that

$$\displaystyle\begin{array}{rcl} \vert (P\phi )(t)\vert & \leq & \prod _{s=0}^{t-1}\vert a(s + r)\vert \big[\vert \vert \psi \vert \vert + K\vert \vert \psi \vert \vert \big]\sum _{ s=-r}^{-1}\vert a(s + r)\vert +\alpha L {}\\ & \leq & \delta (1 + K)\prod _{s=0}^{t-1}\vert a(s + r)\vert \sum _{ s=-r}^{-1}\vert a(s + r)\vert +\alpha L {}\\ & \leq & \delta (1 + K)\prod _{s=0}^{t-1}\vert a(s + r)\vert \sum _{ s=-r}^{-1}\vert a(s + r)\vert +\alpha \epsilon {}\\ & \leq & \epsilon (1-\alpha )+\alpha \epsilon =\epsilon. {}\\ \end{array}$$

Hence the zero solution is stable. This completes the proof.

We mention here that the requirement | a(t + r) | ≤ 1∕2 was necessitated by the use of the norm | ⋅ |K. However, in proving that P is a contraction we did not have to involve K in the contraction constant. We have the following application.

Example 3.7 ([140]).

Let a(t + r) ≠ 0 such that \(\vert a(t + r)\vert \leq \frac{1} {2}.\) Consider

$$\displaystyle{ x(t + 1) = -a(t)x^{3}(t - r). }$$
(3.3.8)

In view of (3.3.2) we have

$$\displaystyle{ x(t + 1) = a(t + r)x(t) -a(t + r)[x(t) -x^{3}(t-r)] + \bigtriangleup _{ t}\sum _{s=t-r}^{t-1}a(s + r)x^{3}(s). }$$

Let \(f(x) = x - x^{3}.\) Then f(x) is increasing on \((0, \frac{1} {\sqrt{3}})\) and has a maximum of \(\frac{2} {3\sqrt{3}}\) at \(x = \frac{1} {\sqrt{3}}.\) For any bounded initial sequence ψ on [−r, 0] with \(\vert \psi (t)\vert \leq \frac{1} {\sqrt{3}}\) we set

$$\displaystyle{ S =\Big\{\phi: [-r,\infty ) \rightarrow \mathbb{R}:\phi _{0} =\psi,\vert \phi (t)\vert \leq \frac{1} {\sqrt{3}}\Big\}. }$$

For ϕ ∈ S, we define \(P: S \rightarrow S\) by

$$\displaystyle{ (P\phi )(t) =\psi (t)\;\mbox{ if}\; - r \leq t \leq 0, }$$

and

$$\displaystyle\begin{array}{rcl} (P\phi )(t)& =& \psi (0)\prod _{s=0}^{t-1}a(s + r) +\sum _{ s=t-r}^{t-1}a(s + r)\phi ^{3}(s) -\prod _{ u=0}^{t-1}a(u + r)\sum _{ s=-r}^{-1}a(s + r)\psi ^{3}(s) {}\\ & -& \sum _{s=0}^{t-1}\Big(a(s + r)\prod _{ k=s+1}^{t-1}a(k + r)\sum _{ u=s-r}^{s-1}a(u + r)\phi ^{3}(u)\Big) {}\\ & -& \sum _{s=0}^{t-1}(\prod _{ u=s+1}^{t-1}a(u + r))a(s + r)[\phi (s) -\phi ^{3}(s)],\;t \geq 0. {}\\ \end{array}$$

Let ψ be small enough so that

$$\displaystyle\begin{array}{rcl} & & \vert \vert \psi \vert \vert \prod _{s=0}^{t-1}\vert a(s + r)\vert + \frac{\sqrt{3}} {9} \sum _{s=t-r}^{t-1}\vert a(s + r)\vert + \vert \vert \psi \vert \vert \prod _{ u=0}^{t-1}\vert a(u + r)\vert \sum _{ s=-r}^{-1}\vert a(s + r)\vert {}\\ & +& \frac{\sqrt{3}} {9} \sum _{s=0}^{t-1}\Big(\vert a(s + r)\vert \prod _{ k=s+1}^{t-1}\vert a(k + r)\vert \sum _{ u=s-r}^{s-1}\vert a(u + r)\vert \Big) {}\\ & +& \frac{2} {3\sqrt{3}}\sum _{s=0}^{t-1}\Big(\prod _{ u=s+1}^{t-1}\vert a(u + r)\vert \Big)\vert a(s + r)\vert \leq \frac{1} {\sqrt{3}}. {}\\ \end{array}$$

Then

$$\displaystyle{ \vert (P\phi )(t)\vert \leq \frac{1} {\sqrt{3}}. }$$

Moreover, since \(\frac{d} {dx}x^{3} = 3x^{2} \leq 1\) on \([-\frac{1} {\sqrt{3}}, \frac{1} {\sqrt{3}}]\), the Lipschitz constant is K = 1. Let d be a positive constant such that d > 3. Using

$$\displaystyle{ \vert \phi \vert _{1} =\sup _{t\geq 0}\frac{1} {d}\Big(\prod _{s=0}^{t-1}\vert a(s + r)\vert \Big)\vert \phi (t)\vert, }$$

we see that P is a contraction on S, and hence all solutions of (3.3.8) are bounded and its zero solution is stable.
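A quick numerical experiment illustrates the conclusion of Example 3.7. The constants below (a(t) ≡ 1∕2, r = 2, and a particular initial sequence ψ) are illustrative choices, not values from the text:

```python
import math

a = 0.5            # assumed constant value of a(t), satisfying |a| <= 1/2
r = 2              # assumed delay
bound = 1 / math.sqrt(3)

# Initial sequence psi on [-r, 0] with |psi(t)| <= 1/sqrt(3).
x = {-2: 0.5, -1: -0.4, 0: 0.55}
for t in range(0, 40):
    x[t + 1] = -a * x[t - r] ** 3   # iterate equation (3.3.8)

assert all(abs(v) <= bound for v in x.values())   # solution stays in S
assert abs(x[40]) < 1e-6                          # and tends to zero
```

In this toy case |x(t + 1)| = |a||x(t − r)|³ ≤ (1∕2)(1∕√3)³ < 1∕√3, so the invariance of S is visible directly, and the cubing drives the solution to zero rapidly.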

3.4 Multiple and Functional Delays

In this section, we consider the multiple and functional delays difference equation

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) = -\sum _{j=1}^{N}a_{ j}(n)(x(n -\tau _{j}(n)),& &{}\end{array}$$
(3.4.1)

where \(a_{j}: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and \(\tau _{j}: \mathbb{Z}^{+} \rightarrow \mathbb{Z}^{+}\) with n − τj(n) → ∞ as n → ∞. For each n0, define mj(n0) = inf{s − τj(s) :  s ≥ n0} and m(n0) = min{mj(n0) :   1 ≤ j ≤ N}. In [87], Islam and Yankson showed that the zero solution of the equation

$$\displaystyle{ x(n + 1) = b(n)x(n) + a(n)x(n -\tau (n)) }$$

is asymptotically stable with one of the assumptions being that

$$\displaystyle{ \prod _{s=0}^{n-1}b(s) \rightarrow 0\;\text{as}\;n \rightarrow \infty. }$$
(3.4.2)

However, as pointed out in [136], condition (3.4.2) cannot hold for (3.4.1), since in that case b(n) = 1 for all \(n \in \mathbb{Z}\). The results we obtain in this section overcome the requirement of (3.4.2). Let D(n0) denote the set of bounded sequences \(\psi: [m(n_{0}),n_{0}] \rightarrow \mathbb{R}\) with the maximum norm | | ⋅ | |. Also, let (B, | | ⋅ | | ) be the Banach space of bounded sequences \(\varphi: [m(n_{0}),\infty ) \rightarrow \mathbb{R}\) with the maximum norm. Denote by gj(n) the inverse of n − τj(n), when it exists, and set

$$\displaystyle\begin{array}{rcl} Q(n) =\sum _{ j=1}^{N}b_{j}(g_{ j}(n)),& & {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} \sum _{j=1}^{N}b_{j}(g_{ j}(n)) = 1 -\sum _{j=1}^{N}a_{j}(g_{ j}(n)).& & {}\\ \end{array}$$

For each \((n_{0},\psi ) \in \mathbb{Z}^{+} \times D(n_{0}),\) a solution of (3.4.1) through (n0, ψ) is a function \(x: [m(n_{0}),n_{0}+\alpha ] \rightarrow \mathbb{R}\) for some positive constant α > 0 such that x(n) satisfies (3.4.1) on [n0, n0 + α] and x(n) = ψ(n) for n ∈ [m(n0), n0]. We denote such a solution by x(n) = x(n, n0, ψ). For a fixed n0, we define

$$\displaystyle\begin{array}{rcl} \vert \vert \psi \vert \vert =\max \{ \vert \psi (n)\vert \;:\; m(n_{0}) \leq n \leq n_{0}\}.& & {}\\ \end{array}$$

We begin by rewriting (3.4.1) as

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) = -\sum _{j=1}^{N}a_{ j}(g_{j}(n))x(n) + \bigtriangleup _{n}\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s),& &{}\end{array}$$
(3.4.3)

where △n indicates that the difference is taken with respect to n. But (3.4.3) implies that

$$\displaystyle\begin{array}{rcl} x(n + 1) - x(n)& =& -\sum _{j=1}^{N}a_{ j}(g_{j}(n))x(n) + \bigtriangleup _{n}\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s) {}\\ x(n + 1)& =& \Big(1 -\sum _{j=1}^{N}a_{ j}(g_{j}(n))\Big)x(n) + \bigtriangleup _{n}\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s),{}\\ \end{array}$$

which is equivalent to

$$\displaystyle\begin{array}{rcl} x(n + 1) =\sum _{ j=1}^{N}b_{ j}(g_{j}(n))x(n) + \bigtriangleup _{n}\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s).& &{}\end{array}$$
(3.4.4)

Suppose that Q(n) ≠ 0 for all \(n \in \mathbb{Z}^{+}\) and the inverse function gj(n) of nτj(n) exists. Then x(n) is a solution of (3.4.1) if and only if

$$\displaystyle\begin{array}{rcl} x(n)& =& \Big(x(n_{0}) -\sum _{j=1}^{N}\sum _{ s=n_{0}-\tau _{j}(n_{0})}^{n_{0}-1}a_{ j}(g_{j}(s))x(s)\Big)\prod _{s=n_{0}}^{n-1}Q(s) {}\\ & & \;\;+\;\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s) {}\\ & & \;\;-\;\sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(k)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))x(u)\Big),\;\;n \geq n_{0}.{}\\ \end{array}$$
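Before deriving this closed form, it can be sanity-checked numerically. The sketch below uses assumed toy data (not from the text): N = 1, τ(n) = 2, a(n) = 0.1, and n0 = 0, so that a(g(s)) = 0.1 and Q(n) = 0.9, and it compares the closed form against direct iteration of (3.4.1):

```python
# Iterate (3.4.1), which here reads  x(n+1) = x(n) - 0.1 x(n-2).
x = {-2: 0.3, -1: -0.2, 0: 0.5}               # initial data on [m(0), 0]
for n in range(0, 25):
    x[n + 1] = x[n] - 0.1 * x[n - 2]

# Evaluate the three terms of the closed form at n = 20.
n = 20
term1 = (x[0] - 0.1 * (x[-2] + x[-1])) * 0.9 ** n
term2 = 0.1 * (x[n - 2] + x[n - 1])
term3 = sum(0.1 * 0.9 ** (n - 1 - s) * 0.1 * (x[s - 2] + x[s - 1])
            for s in range(0, n))
assert abs(x[n] - (term1 + term2 - term3)) < 1e-9
```

Here 0.1 plays the role of both a(gj(s)) and 1 − Q(s), and 0.9 ** (n − 1 − s) is the product of Q(k) over k = s + 1, …, n − 1.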

To see this we have by the variation of parameters formula

$$\displaystyle\begin{array}{rcl} x(n)& =& x(n_{0})\prod _{s=n_{0}}^{n-1}Q(s) \\ & & \;\;+\;\sum _{k=0}^{n-1}\Big(\prod _{ s=k}^{n-1}Q(s)\bigtriangleup _{ k}\sum _{j=1}^{N}\sum _{ s=k-\tau _{j}(k)}^{k-1}a_{ j}(g_{j}(s))x(s)\Big).{}\end{array}$$
(3.4.5)

Using the summation by parts formula we obtain

$$\displaystyle\begin{array}{rcl} & & \sum _{k=0}^{n-1}\Big(\prod _{ s=k}^{n-1}Q(s)\bigtriangleup _{ k}\sum _{j=1}^{N}\sum _{ s=k-\tau _{j}(k)}^{k-1}a_{ j}(g_{j}(s))x(s)\Big) \\ & =& \sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))x(s) \\ & & \;\;-\prod _{s=n_{0}}^{n-1}Q(s)\sum _{ j=1}^{N}\sum _{ s=n_{0}-\tau _{j}(n_{0})}^{n_{0}-1}a_{ j}(g_{j}(s))x(s) \\ & & \;\;-\sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(k)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))x(u)\Big).{}\end{array}$$
(3.4.6)

Substituting (3.4.6) into (3.4.5) gives the desired result. We have the following theorem, which is due to Yankson [166].

Theorem 3.4.1 ([166]).

Suppose that the inverse function gj(n) of nτj(n) exists, and assume there exists a constant α ∈ (0, 1) such that

$$\displaystyle\begin{array}{rcl} & & \sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}\vert a_{ j}(g_{j}(s))\vert \\ & & +\sum _{s=n_{0}}^{n-1}\Big(\vert [1 - Q(s)]\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big) \leq \alpha.{}\end{array}$$
(3.4.7)

Moreover, assume that there exists a positive constant M such that

$$\displaystyle\begin{array}{rcl} \Big\vert \prod _{s=n_{0}}^{n-1}Q(s)\Big\vert \leq M.& & {}\\ \end{array}$$

Then the zero solution of  (3.4.1) is stable.

Proof.

Let ε > 0 be given. Choose δ > 0 such that

$$\displaystyle\begin{array}{rcl} (M + M\alpha )\delta +\alpha \epsilon \leq \epsilon.& & {}\\ \end{array}$$

Let ψD(n0) such that ∣ψ(n)∣ ≤ δ. Define S = { φB :   φ(n) = ψ(n) if n ∈ [m(n0), n0], ∥φ ∥ ≤ ε}. Then (S, ∥⋅ ∥) is a complete metric space, where ∥⋅ ∥ is the maximum norm.

Define the mapping P: SS by

$$\displaystyle\begin{array}{rcl} (P\varphi )(n) =\psi (n)\;\mbox{ for}\;n \in [m(n_{0}),n_{0}],& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} (P\varphi )(n)& =& \Big(\psi (n_{0}) -\sum _{j=1}^{N}\sum _{ s=n_{0}-\tau _{j}(n_{0})}^{n_{0}-1}a_{ j}(g_{j}(s))\psi (s)\Big)\prod _{s=n_{0}}^{n-1}Q(s) \\ & & \;\;+\;\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))\varphi (s) \\ & & \;\;-\;\sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(s)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))\varphi (u)\Big).{}\end{array}$$
(3.4.8)

We first show that P maps S into S. For φ ∈ S,

$$\displaystyle\begin{array}{rcl} \mid (P\varphi )(n)\mid & \leq & M\delta + M\alpha \delta +\Big\{\sum _{ j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}\vert a_{ j}(g_{j}(s))\vert {}\\ & & \;+\;\sum _{s=n_{0}}^{n-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big)\Big\} \parallel \varphi \parallel {}\\ &\leq & (M + M\alpha )\delta +\alpha \epsilon {}\\ & \leq & \epsilon. {}\\ \end{array}$$

Thus P maps S into itself. We next show that P is continuous.

Let φ, ϕS. Given any ε > 0, choose \(\delta = \frac{\epsilon }{\alpha }\) such that | | φϕ | | < δ. Then,

$$\displaystyle\begin{array}{rcl} \vert \vert (P\varphi ) - (P\phi )\vert \vert & \leq & \sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}\vert a_{ j}(g_{j}(s))\vert \vert \vert \varphi -\phi \vert \vert {}\\ & &\;\;+\;\sum _{s=n_{0}}^{n-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big) {}\\ & & \;\;\times \;\vert \vert \varphi -\phi \vert \vert {}\\ &\leq & \alpha \vert \vert \varphi -\phi \vert \vert {}\\ &\leq & \epsilon. {}\\ \end{array}$$

This shows that P is continuous. Finally we show that P is a contraction.

Let φ, ηS. Then

$$\displaystyle\begin{array}{rcl} & & \vert (P\varphi )(n) - (P\eta )(n)\vert {}\\ & =& \Big\vert \Big(\psi (n_{0}) -\sum _{j=1}^{N}\sum _{ s=n_{0}-\tau _{j}(n_{0})}^{n_{0}-1}a_{ j}(g_{j}(s))\psi (s)\Big)\prod _{s=n_{0}}^{n-1}Q(s) {}\\ & & \;\;+\;\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))\varphi (s) {}\\ & & \;\;-\;\sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(k)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))\varphi (u)\Big) {}\\ & &\;\;-\;\Big(\psi (n_{0}) -\sum _{j=1}^{N}\sum _{ s=n_{0}-\tau _{j}(n_{0})}^{n_{0}-1}a_{ j}(g_{j}(s))\psi (s)\Big)\prod _{s=n_{0}}^{n-1}Q(s) {}\\ & & \;\;-\;\sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}a_{ j}(g_{j}(s))\eta (s) {}\\ & & \;\;+\;\sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(k)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))\eta (u)\Big)\Big\vert {}\\ & \leq & \sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}\vert a_{ j}(g_{j}(s))\vert \|\varphi -\eta \| {}\\ & &\;\;+\;\sum _{s=n_{0}}^{n-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big)\|\varphi -\eta \| {}\\ & \leq & \alpha \|\varphi -\eta \|. {}\\ \end{array}$$

This shows that P is a contraction. Thus, by the contraction mapping principle, P has a unique fixed point in S which solves (3.4.3), and for any φ ∈ S, ∥Pφ∥ ≤ ε. This proves that the zero solution of (3.4.3) is stable.

In the next theorem we address the asymptotic stability of the zero solution.

Theorem 3.4.2 ([166]).

Assume that the hypotheses of Theorem  3.4.1 hold. Also assume that

$$\displaystyle\begin{array}{rcl} \prod _{k=n_{0}}^{n-1}Q(k) \rightarrow 0\;\;\mathit{\mbox{ as}}\;\;n \rightarrow \infty.& &{}\end{array}$$
(3.4.9)

Then the zero solution of  (3.4.3) is asymptotically stable.

Proof.

We have already proved that the zero solution of (3.4.3) is stable. Let ψD(n0) such that | ψ(n) | ≤ δ and define

$$\displaystyle\begin{array}{rcl} S^{{\ast}}& =& \Big\{\varphi \in B\;\vert \;\varphi (n) =\psi (n)\;\mbox{ if}\;n \in [m(n_{ 0}),n_{0}],\;\vert \vert \varphi \vert \vert \leq \epsilon \;\mbox{ and} {}\\ & & \varphi (n) \rightarrow 0,\;\mbox{ as}\;n \rightarrow \infty \Big\}. {}\\ \end{array}$$

Define P: S∗ → S∗ by (3.4.8). From the proof of Theorem 3.4.1, the map P is a contraction and, for every φ ∈ S∗, ∥Pφ∥ ≤ ε. 

Next we show that (Pφ)(n) → 0 as n → ∞. The first term on the right-hand side of (3.4.8) goes to zero because of condition (3.4.9). It is clear from (3.4.7) and the fact that φ(n) → 0 as n → ∞ that

$$\displaystyle\begin{array}{rcl} \sum _{j=1}^{N}\sum _{ s=n-\tau _{j}(n)}^{n-1}\Big\vert a_{ j}(g_{j}(s))\Big\vert \vert \varphi (s)\vert \rightarrow 0\;\mbox{ as}\;n \rightarrow \infty.& & {}\\ \end{array}$$

Now we show that the last term on the right-hand side of (3.4.8) goes to zero as n → ∞. Since φ(n) → 0 and n − τj(n) → ∞ as n → ∞, for each ε1 > 0 there exists an N1 > n0 such that s ≥ N1 implies | φ(s − τj(s)) | < ε1 for j = 1, 2, …, N. Thus for n ≥ N1, the last term, call it I3, of (3.4.8) satisfies

$$\displaystyle\begin{array}{rcl} \vert I_{3}\vert & =& \Big\vert \sum _{s=n_{0}}^{n-1}\Big([1 - Q(s)]\prod _{ k=s+1}^{n-1}Q(k)\sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}a_{ j}(g_{j}(u))\varphi (u)\Big)\Big\vert {}\\ &\leq & \sum _{s=n_{0}}^{N_{1}-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \vert \varphi (u)\vert \Big) {}\\ & & \;+\;\sum _{s=N_{1}}^{n}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \vert \varphi (u)\vert \Big) {}\\ & \leq & \max _{\sigma \geq m(n_{0})}\vert \varphi (\sigma )\vert \sum _{s=n_{0}}^{N_{1}-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big) {}\\ & &\;+\;\epsilon _{1}\sum _{s=N_{1}}^{n}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big). {}\\ \end{array}$$

By (3.4.9), there exists N2 > N1 such that nN2 implies

$$\displaystyle\begin{array}{rcl} \max _{\sigma \geq m(n_{0})}\vert \varphi (\sigma )\vert \sum _{s=n_{0}}^{N_{1}-1}\Big(\vert 1 - Q(s)\vert \Big\vert \prod _{ k=s+1}^{n-1}Q(k)\Big\vert \sum _{ j=1}^{N}\sum _{ u=s-\tau _{j}(s)}^{s-1}\vert a_{ j}(g_{j}(u))\vert \Big) <\epsilon _{1}.& & {}\\ \end{array}$$

Applying (3.4.7) gives | I3 | ≤ ε1 + ε1 α < 2ε1. Thus, I3 → 0 as n → ∞. Hence (Pφ)(n) → 0 as n → ∞, and so Pφ ∈ S∗. 

By the contraction mapping principle, P has a unique fixed point that solves (3.4.3) and goes to zero as n goes to infinity. Therefore the zero solution of (3.4.3) is asymptotically stable.
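As a concrete illustration of Theorems 3.4.1 and 3.4.2, the sketch below uses assumed toy data (not from the text): N = 1, τ(n) = 2, a(n) = 0.1, and n0 = 0, so that a(g(s)) = 0.1, Q(n) = 0.9, and M = 1. It checks condition (3.4.7) numerically with α = 0.4, notes that (3.4.9) holds since 0.9ⁿ → 0, and observes the predicted decay:

```python
# Condition (3.4.7) here reads
#   0.2 + sum_{s=0}^{n-1} |1 - 0.9| * 0.9**(n-1-s) * 0.2 <= alpha,
# which is bounded by 0.2 + 0.2 = 0.4 < 1 uniformly in n.
for n in range(1, 200):
    lhs = 0.2 + sum(0.1 * 0.9 ** (n - 1 - s) * 0.2 for s in range(n))
    assert lhs <= 0.4 + 1e-9

# The solution of (3.4.1), i.e. Dx(n) = -0.1 x(n-2), should then tend to 0.
x = {-2: 0.3, -1: -0.2, 0: 0.5}
for n in range(0, 300):
    x[n + 1] = x[n] - 0.1 * x[n - 2]
assert abs(x[300]) < 1e-8
```

The decay is consistent with the characteristic roots of x(n + 1) = x(n) − 0.1 x(n − 2) all lying inside the unit circle in this toy case.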

3.5 Neutral Volterra Equations

The results of this section pertain to asymptotic stability of the zero solution of the neutral type Volterra difference equation

$$\displaystyle{ x(n + 1) = a(n)x(n) + c(n) \bigtriangleup x(n - g(n)) +\sum _{ s=n-g(n)}^{n-1}k(n,s)h(x(s)) }$$
(3.5.1)

where \(a,c: \mathbb{Z} \rightarrow \mathbb{R},\;k: \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{R},\;h: \mathbb{R} \rightarrow \mathbb{R}\), and \(g: \mathbb{Z} \rightarrow \mathbb{Z}^{+}\). Throughout this section we assume that a(n) and c(n) are bounded and that 0 ≤ g(n) ≤ g0 for some integer g0. We also assume that h(0) = 0 and that h satisfies the Lipschitz condition

$$\displaystyle{ \vert h(x) - h(z)\vert \leq L\vert x - z\vert. }$$

For any integer n0 ≥ 0, we define \(\mathbb{Z}_{0}\) to be the set of integers in [−g0, n0]. Let \(\psi (n): \mathbb{Z}_{0} \rightarrow \mathbb{R}\) be an initial discrete bounded function.

Definition 3.5.1.

The zero solution of (3.5.1) is Lyapunov stable if for any ε > 0 and any integer n0 ≥ 0 there exists a δ > 0 such that | ψ(n) | ≤ δ on \(\mathbb{Z}_{0}\) implies | x(n, n0, ψ) | ≤ ε for n ≥ n0.

Definition 3.5.2.

The zero solution of (3.5.1) is asymptotically stable if it is Lyapunov stable and if for any integer n0 ≥ 0 there exists r(n0) > 0 such that | ψ(n) | ≤ r(n0) on \(\mathbb{Z}_{0}\) implies | x(n, n0, ψ) | → 0 as n → ∞.

Suppose that a(n) ≠ 0 for all \(n \in \mathbb{Z}\). Then x(n) is a solution of the equation (3.5.1) if and only if

$$\displaystyle{ x(n) = \left [x(n_{0}) - c(n_{0} - 1)x(n_{0} - g(n_{0}))\right ]\prod _{s=n_{0}}^{n-1}a(s) + c(n - 1)x(n - g(n)) }$$
$$\displaystyle{ +\sum _{r=n_{0}}^{n-1}[-x(r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))]\prod _{ s=r+1}^{n-1}a(s),n \geq n_{ 0} }$$

where Φ(r) = c(r) − c(r − 1)a(r).

To see this, we first note that (3.5.1) is equivalent to

$$\displaystyle{ \bigtriangleup \Big[x(n)\prod _{s=n_{0}}^{n-1}a^{-1}(s)\Big] = \left [c(n) \bigtriangleup x(n - g(n)) +\sum _{ u=n-g(n)}^{n-1}k(n,u)h(x(u))\right ]\prod _{ s=n_{0}}^{n}a^{-1}(s). }$$

Summing the above equation from n0 to n − 1 gives

$$\displaystyle{ \sum _{r=n_{0}}^{n-1}\bigtriangleup \Big[x(r)\prod _{ s=n_{0}}^{r-1}a^{-1}(s)\Big] =\sum _{ r=n_{0}}^{n-1}\Big[c(r)\bigtriangleup x(r - g(r))+\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))\Big]\prod _{ s=n_{0}}^{r}a^{-1}(s). }$$

Or,

$$\displaystyle{ x(r)\prod _{s=n_{0}}^{r-1}a^{-1}(s)\Big\vert _{ n_{0}}^{n} =\sum _{ r=n_{0}}^{n-1}\Big[\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))+c(r)\bigtriangleup x(r - g(r))\Big]\prod _{ s=n_{0}}^{r}a^{-1}(s). }$$

Thus,

$$\displaystyle{ x(n) = x(n_{0})\prod _{s=n_{0}}^{n-1}a(s)+\sum _{ r=n_{0}}^{n-1}\Big[\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))+c(r)\bigtriangleup x(r - g(r))\Big]\prod _{ s=n_{0}}^{r}a^{-1}(s)\prod _{ s=n_{0}}^{n-1}a(s). }$$

Performing a summation by parts yields,

$$\displaystyle\begin{array}{rcl} & & \sum _{r=n_{0}}^{n-1}\Big[c(r) \bigtriangleup x(r - g(r))\prod _{ s=r+1}^{n-1}a(s)\Big] = \Big[c(r - 1)x(r - g(r))\prod _{ s=r}^{n-1}a(s)\Big]_{ n_{0}}^{n} {}\\ & & -\sum _{r=n_{0}}^{n-1}x(r - g(r)) \bigtriangleup \Big[c(r - 1)\prod _{ s=r}^{n-1}a(s)\Big] {}\\ & & = c(n - 1)x(n - g(n))\prod _{s=n}^{n-1}a(s) - c(n_{ 0} - 1)x(n_{0} - g(n_{0}))\prod _{s=n_{0}}^{n-1}a(s) {}\\ & & -\sum _{r=n_{0}}^{n-1}x(r - g(r)) \bigtriangleup \Big[c(r - 1)\prod _{ s=r}^{n-1}a(s)\Big]. {}\\ \end{array}$$

Since the empty product \(\prod _{s=n}^{n-1}a(s)\) equals 1, this reduces to

$$\displaystyle\begin{array}{rcl} \sum _{r=n_{0}}^{n-1}\Big[c(r) \bigtriangleup x(r - g(r))\prod _{ s=r+1}^{n-1}a(s)\Big]& =& c(n - 1)x(n - g(n)) - c(n_{ 0} - 1)x(n_{0} - g(n_{0}))\prod _{s=n_{0}}^{n-1}a(s) {}\\ & & -\sum _{r=n_{0}}^{n-1}x(r - g(r)) \bigtriangleup \Big[c(r - 1)\prod _{ s=r}^{n-1}a(s)\Big]. {}\\ \end{array}$$

A substitution into the above expression gives,

$$\displaystyle\begin{array}{rcl} x(n)& =& x(n_{0})\prod _{s=n_{0}}^{n-1}a(s) +\sum _{ r=n_{0}}^{n-1}\Big[\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))\Big]\prod _{ s=r+1}^{n-1}a(s) + c(n - 1)x(n - g(n)) {}\\ & & -c(n_{0} - 1)x(n_{0} - g(n_{0}))\prod _{s=n_{0}}^{n-1}a(s) -\sum _{ r=n_{0}}^{n-1}x(r - g(r)) \bigtriangleup \Big[c(r - 1)\prod _{ s=r}^{n-1}a(s)\Big] {}\\ & =& \left [x(n_{0}) - c(n_{0} - 1)x(n_{0} - g(n_{0}))\right ]\prod _{s=n_{0}}^{n-1}a(s) + c(n - 1)x(n - g(n)) {}\\ & & +\sum _{r=n_{0}}^{n-1}\Big(-x(r - g(r)) \bigtriangleup \Big[c(r - 1)\prod _{ s=r}^{n-1}a(s)\Big] +\Big[\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))\Big]\prod _{ s=r+1}^{n-1}a(s)\Big). {}\\ \end{array}$$

Combining all expressions and using \(\bigtriangleup \big[c(r - 1)\prod _{s=r}^{n-1}a(s)\big] =\varPhi (r)\prod _{s=r+1}^{n-1}a(s)\), we arrive at

$$\displaystyle\begin{array}{rcl} x(n)& =& \left [x(n_{0}) - c(n_{0} - 1)x(n_{0} - g(n_{0}))\right ]\prod _{s=n_{0}}^{n-1}a(s) + c(n - 1)x(n - g(n)) {}\\ & & +\sum _{r=n_{0}}^{n-1}\Big[-x(r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(x(u))\Big]\prod _{ s=r+1}^{n-1}a(s),\;n \geq n_{ 0}.{}\\ \end{array}$$

This completes the process.
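The inversion just completed can be sanity-checked numerically. The constants below (a ≡ 0.5, c ≡ 0.1, g ≡ 2, k(r, u) ≡ 0.02, h(x) = x, n0 = 0) are illustrative choices, not values from the text; with them Φ(r) = c − c·a = 0.05:

```python
a, c, k, g = 0.5, 0.1, 0.02, 2
x = {-2: 0.3, -1: -0.2, 0: 0.5}               # initial data psi
for n in range(0, 20):                         # iterate (3.5.1) directly
    x[n + 1] = a * x[n] + c * (x[n - 1] - x[n - 2]) + k * (x[n - 2] + x[n - 1])

# Evaluate the closed form at n = 15 and compare.
n = 15
phi_r = c - c * a                              # Phi(r), constant here
term1 = (x[0] - c * x[-2]) * a ** n
term2 = c * x[n - 2]
term3 = sum((-phi_r * x[r - 2] + k * (x[r - 2] + x[r - 1])) * a ** (n - 1 - r)
            for r in range(0, n))
assert abs(x[n] - (term1 + term2 + term3)) < 1e-9
```

Here a ** (n − 1 − r) is the product of a(s) over s = r + 1, …, n − 1, and term2 is c(n − 1)x(n − g(n)).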

Define

$$\displaystyle{ S =\{\varphi: \mathbb{Z} \rightarrow \mathbb{R}\mid \varphi (n) \rightarrow 0\;\mbox{ as}\;n \rightarrow \infty \}, }$$

where

$$\displaystyle{ \|\varphi \|=\max \{ \vert \varphi (n)\vert,\;n \geq n_{0}\}. }$$

Then \(\left (S,\|\cdot \|\right )\) is a Banach space . Let \(\psi: (-\infty,n_{0}] \rightarrow \mathbb{R}\) be a given initial bounded sequence. Define mapping H: SS by

$$\displaystyle{ (H\varphi )(n) =\psi (n)\;\mbox{ for}\;n \leq n_{0}, }$$

and

$$\displaystyle\begin{array}{rcl} (H\varphi )(n)& =& \left [\psi (n_{0}) - c(n_{0} - 1)\psi (n_{0} - g(n_{0}))\right ]\prod _{s=n_{0}}^{n-1}a(s) + c(n - 1)\varphi (n - g(n)) \\ & & +\sum _{r=n_{0}}^{n-1}(-\varphi (r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))]\prod _{ s=r+1}^{n-1}a(s),n \geq n_{ 0}.{}\end{array}$$
(3.5.2)

It should cause no confusion to write

$$\displaystyle{ \|\psi \|=\max \{ \vert \psi (n)\vert,\;n \leq n_{0}\}. }$$

We state Krasnoselskii’s fixed point theorem which will be used to prove the zero solution of (3.5.1) is asymptotically stable. We emphasize that it is the only appropriate theorem to use for such equation since the inversion of a neutral equation results in two mappings.

Theorem 3.5.1 (Krasnoselskii [97]).

Let \(\mathbb{M}\) be a closed convex nonempty subset of a Banach space \((\mathbb{B},\vert \vert \cdot \vert \vert ).\) Suppose that C and B map \(\mathbb{M}\) into \(\mathbb{B}\) such that

  1. (i)

    C is continuous and \(C\mathbb{M}\) is contained in a compact set,

  2. (ii)

     B is a contraction mapping.

  3. (iii)

    \(x,y \in \mathbb{M}\) implies \(Cx + By \in \mathbb{M}\) .

Then there exists \(z \in \mathbb{M}\) with z = Cz + Bz. 

We are now ready to prove our main results. According to Theorem 3.5.1 we need to construct two mappings, one is a contraction and the other is compact. Hence we write the mapping H that is given by (3.5.2) as

$$\displaystyle{ (H\varphi )(n) = (Q\varphi )(n) + (A\varphi )(n), }$$

where A, Q: SS are given by

$$\displaystyle{ (Q\varphi )(n) = \left [\psi (n_{0}) - c(n_{0} - 1)\psi (n_{0} - g(n_{0}))\right ]\prod _{s=n_{0}}^{n-1}a(s) + c(n - 1)\varphi (n - g(n)) }$$
(3.5.3)

and

$$\displaystyle{ (A\varphi )(n) =\sum _{ r=n_{0}}^{n-1}\Big[-\varphi (r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))\Big]\prod _{ s=r+1}^{n-1}a(s). }$$
(3.5.4)

Theorem 3.5.2.

Assume the Lipschitz condition on h. Suppose that

$$\displaystyle{ \prod _{s=n_{0}}^{n-1}a(s) \rightarrow 0\;as\;n \rightarrow \infty, }$$
(3.5.5)
$$\displaystyle{ n - g(n) \rightarrow \infty \;as\;n \rightarrow \infty, }$$
(3.5.6)

and there exists α ∈ (0,1) such that

$$\displaystyle{ \vert c(n - 1)\vert +\sum _{ r=n_{0}}^{n-1}\left [\vert \varPhi (r)\vert + L\sum _{ u=r-g(r)}^{r-1}\vert k(r,u)\vert \right ]\left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert \leq \alpha,\;n \geq n_{ 0}. }$$
(3.5.7)

Then the zero solution of  (3.5.1) is asymptotically stable.

Proof.

First we show that, for φ ∈ S, the mapping H defined by (3.5.2) satisfies (Hφ)(n) → 0 as n → ∞. The first term on the right of (3.5.2) goes to zero because of condition (3.5.5). The second term on the right goes to zero because of condition (3.5.6) and the fact that φ ∈ S.

It remains to show that the last term

$$\displaystyle{ \sum _{r=n_{0}}^{n-1}\left [(-\varPhi (r)\varphi (r - g(r)) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))\right ]\prod _{ s=r+1}^{n-1}a(s) }$$

on the right of (3.5.2) goes to zero as n → ∞. Let σ > 0 be such that | φ(n − g(n)) | < σ for all n; such a σ exists since φ is bounded. Also, since φ(n − g(n)) → 0 as n − g(n) → ∞, for a given ε2 > 0 there exists an n2 > n0 such that n > n2 implies | φ(n − g(n)) | < ε2. Due to condition (3.5.5), there exists an n3 > n2 such that n > n3 implies

$$\displaystyle{ \left \vert \prod _{s=n_{2}}^{n-1}a(s)\right \vert <\frac{\epsilon _{2}} {\alpha \sigma }. }$$

Thus for n > n3, we have

$$\displaystyle\begin{array}{rcl} & & \left \vert \sum _{r=n_{0}}^{n-1}\Big[-\varphi (r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))\Big]\prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & &\leq \sum _{r=n_{0}}^{n_{2}-1}\Big[\vert \varphi (r - g(r))\vert \vert \varPhi (r)\vert + L\sum _{ u=r-g(r)}^{r-1}\vert k(r,u)\vert \vert \varphi (u)\vert \Big]\left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & & +\sum _{r=n_{2}}^{n-1}\Big[\vert \varphi (r - g(r))\vert \vert \varPhi (r)\vert + L\sum _{ u=r-g(r)}^{r-1}\vert k(r,u)\vert \vert \varphi (u)\vert \Big]\left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & &\leq \sigma \sum _{r=n_{0}}^{n_{2}-1}\Big[\vert \varPhi (r)\vert + L\sum _{ u=r-g(r)}^{r-1}\vert k(r,u)\vert \Big]\left \vert \prod _{ s=r+1}^{n_{2}-1}a(s)\right \vert \left \vert \prod _{ s=n_{2}}^{n-1}a(s)\right \vert +\epsilon _{ 2}\alpha {}\\ & & \leq \sigma \alpha \left \vert \prod _{s=n_{2}}^{n-1}a(s)\right \vert +\epsilon _{ 2}\alpha \leq \epsilon _{2} +\epsilon _{2}\alpha. {}\\ \end{array}$$

Hence, the map defined by (Aφ)(n) + (Qφ)(n) takes S into S. Next we show that Q is a contraction. Let Q be given by (3.5.3). Then for φ, ζ ∈ S, we have from (3.5.7) that

$$\displaystyle\begin{array}{rcl} \|(Q\varphi ) - (Q\zeta )\|& \leq & \vert c(n - 1)\vert \|\varphi -\zeta \| {}\\ &\leq & \eta \|\varphi -\zeta \|,\;\mbox{ for some}\;\eta \in (0,1). {}\\ \end{array}$$

Now we are ready to prove that the map A is compact. We note that the proof given in [167] for the compactness of A is not correct, since our map is defined on an unbounded interval, which rules out the use of the Ascoli-Arzelà theorem. First we show A is continuous. Let {φ^l} be a sequence in S such that

$$\displaystyle{ \lim _{l\rightarrow \infty }\vert \vert \varphi ^{l} -\varphi \vert \vert = 0. }$$

Since S is closed, we have φ ∈ S. Then by the definition of A,

$$\displaystyle{ \vert \vert A(\varphi ^{l}) - A(\varphi )\vert \vert =\max _{ n\in \mathbb{Z}}\vert A(\varphi ^{l})(n) - A(\varphi )(n)\vert. }$$

Thus, for φ ∈ S, we have by (3.5.4) that

$$\displaystyle\begin{array}{rcl} \vert (A\varphi ^{l})(n) - (A\varphi )(n)\vert & \leq & \sum _{ r=n_{0}}^{n-1}\vert \varPhi (r)\vert \left \vert \varphi ^{l}(r - g(r)) -\varphi (r - g(r))\right \vert \left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & & +\sum _{r=n_{0}}^{n-1}\left \vert \sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi ^{l}(u)) -\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))\right \vert \left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & =& \sum _{r=n_{0}}^{n-1}\vert \varPhi (r)\vert \left \vert \varphi ^{l}(r - g(r)) -\varphi (r - g(r))\right \vert \left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & & +\sum _{r=n_{0}}^{n-1}\sum _{ u=r-g(r)}^{r-1}\vert k(r,u)\vert \vert \big(h(\varphi ^{l}(u)) - h(\varphi (u)\big)\vert \vert \prod _{ s=r+1}^{n-1}a(s)\vert. {}\\ \end{array}$$

The continuity of φ and h, along with the Lebesgue dominated convergence theorem, implies that

$$\displaystyle{ \lim _{l\rightarrow \infty }\max _{n\in \mathbb{Z}}\vert A(\varphi ^{l})(n) - A(\varphi )(n)\vert = 0. }$$

This shows A is continuous. Finally, we have to show that AS is precompact. Let {φ^l} be a sequence in S. Then for each \(n \in \mathbb{Z},\) {φ^l(n)} is a bounded sequence of real numbers, and hence has a convergent subsequence. By the diagonal process, we can construct a convergent subsequence \(\{\varphi ^{l_{k}}\}\) of {φ^l} in S. Since A is continuous, \(\{A\varphi ^{l_{k}}\}\) is a convergent subsequence in AS. This means AS is precompact, which completes the proof of compactness. It remains to show that the zero solution is stable. Due to condition (3.5.5) there exists a positive constant ρ such that \(\big\vert \prod _{s=n_{0}}^{n-1}a(s)\big\vert \leq \rho.\) Let ε > 0 be given. Choose δ > 0 such that

$$\displaystyle{ \left \vert 1 - c(n_{0} - 1)\right \vert \delta \rho +\alpha \epsilon <\epsilon. }$$

Let ψ(n) be any given initial function such that | ψ(n) | < δ. 

Define \(\mathbb{M} =\{\varphi \in S:\|\varphi \|<\epsilon \}.\) Let \(\varphi,\zeta \in \mathbb{M}\), then

$$\displaystyle\begin{array}{rcl} & & \hspace{-10.0pt}\|(Q\zeta ) + (A\varphi )\| \leq \left \vert [\psi (n_{0}) - c(n_{0} - 1)\psi (n_{0} - g(n_{0}))]\prod _{s=n_{0}}^{n-1}a(s)\right \vert + \left \vert c(n - 1)\zeta (n - g(n))\right \vert {}\\ & +& \sum _{r=n_{0}}^{n-1}\left \vert \varphi (r - g(r))\varPhi (r) +\sum _{ u=r-g(r)}^{r-1}k(r,u)h(\varphi (u))\prod _{ s=r+1}^{n-1}a(s)\right \vert {}\\ & \leq & \left \vert 1 - c(n_{0} - 1)\right \vert \delta \rho + \vert c(n - 1)\vert +\sum _{ r=n_{0}}^{n-1}\left \vert \varPhi (r) + L\sum _{ u=r-g(r)}^{r-1}k(r,u)\right \vert \left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert \epsilon {}\\ & \leq & \left \vert 1 - c(n_{0} - 1)\right \vert \delta \rho + \left \{\vert c(n - 1)\vert +\sum _{ r=n_{0}}^{n-1}\left \vert \varPhi (r) + L\sum _{ u=r-g(r)}^{r-1}k(r,u)\right \vert \left \vert \prod _{ s=r+1}^{n-1}a(s)\right \vert \right \}\epsilon {}\\ & \leq & \left \vert 1 - c(n_{0} - 1)\right \vert \delta \rho +\alpha \epsilon {}\\ & \leq & \epsilon. {}\\ \end{array}$$

It follows from the above work that all the conditions of Krasnoselskii's fixed point theorem are satisfied on \(\mathbb{M}\). Thus there exists a fixed point z in \(\mathbb{M}\) such that z = Az + Qz. This completes the proof.

We end this section with the following example.

Example 3.8 ([167]).

Consider the difference equation

$$\displaystyle{ x(n+1) = \frac{1} {1 + n}x(n)+ \frac{2^{n+1}} {16(n + 1)!}\bigtriangleup x(n-2)+\sum _{s=n-2}^{n-1} \frac{2^{n}} {8(1 - n)!(s + 2)}x(s),n \geq 0. }$$
(3.5.8)

In this example we take n0 = 0. We observe that

$$\displaystyle{ \prod _{s=0}^{n-1} \frac{1} {1 + s} = \frac{1} {n!} \rightarrow 0\;as\;n \rightarrow \infty, }$$

and hence condition (3.5.5) is satisfied. Condition (3.5.6) is also satisfied since

$$\displaystyle{ n - 2 \rightarrow \infty \;as\;n \rightarrow \infty. }$$

Next we verify condition (3.5.7).

$$\displaystyle\begin{array}{rcl} & & \left \vert \frac{2^{n}} {16n!}\right \vert +\sum _{ r=0}^{n-1}\left [ \frac{2^{r+1}} {16(r + 1)!} + \frac{2^{r}} {16r!(r + 1)!}\right ]\prod _{s=r+1}^{n-1} \frac{1} {1 + s} {}\\ & & +\sum _{r=0}^{n-1}\sum _{ u=r-2}^{r-1} \frac{2^{r}} {8(1 - r)!(u + 2)}\prod _{s=r+1}^{n-1} \frac{1} {1 + s} {}\\ & & = \left \vert \frac{2^{n}} {16n!}\right \vert + \frac{1} {8n!}\sum _{r=0}^{n-1}2^{r} - \frac{1} {16n!}\sum _{r=0}^{n-1}2^{r} +\sum _{ r=0}^{n-1} \frac{2^{r}} {8n!} {}\\ & & \leq \left \vert \frac{2^{n}} {16n!}\right \vert + \frac{1} {8n!}(2^{n} - 1) + \frac{1} {8n!}(2^{n} - 1) {}\\ & & \leq \left \vert \frac{2^{n}} {16n!}\right \vert + \frac{1} {8n!}2^{n} + \frac{1} {8n!}2^{n} {}\\ & & \leq \frac{1} {8} + \frac{1} {4} + \frac{1} {4} {}\\ & & = \frac{5} {8} <1. {}\\ \end{array}$$

Hence condition (3.5.7) is satisfied. All the conditions of Theorem 3.5.2 are satisfied, and the zero solution of (3.5.8) is asymptotically stable.
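As a quick numerical sanity check of condition (3.5.5) for this example, the product of the coefficients a(s) = 1/(1 + s) indeed collapses to 1/n!; a minimal Python sketch (the helper name is ours):

```python
import math

def partial_product(n):
    """prod_{s=0}^{n-1} 1/(1+s), the product appearing in condition (3.5.5)."""
    p = 1.0
    for s in range(n):
        p *= 1.0 / (1 + s)
    return p

# The product collapses to 1/n! and therefore tends to zero as n grows.
for n in (1, 5, 10):
    assert abs(partial_product(n) - 1.0 / math.factorial(n)) < 1e-15
assert partial_product(25) < 1e-20
```

Since 1/n! decays faster than any geometric rate, the decay required by (3.5.5) holds with much to spare.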

3.6 Almost-Linear Volterra Equations

We consider the scalar Volterra difference equation

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) = a(n)h(x(n)) +\sum _{ k=0}^{n-1}c(n,k)g(x(k)),\;x(0) = x_{ 0},\;n \geq 0.& &{}\end{array}$$
(3.6.1)

We assume that the functions h and g are continuous and that there exist positive constants H, H∗, G, and G∗ such that

$$\displaystyle\begin{array}{rcl} \mid h(x) - Hx\mid \leq H^{{\ast}},& &{}\end{array}$$
(3.6.2)

and

$$\displaystyle\begin{array}{rcl} \mid g(x) - Gx\mid \leq G^{{\ast}}.& &{}\end{array}$$
(3.6.3)

Equation (3.6.1) will be called Almost-Linear if (3.6.2) and (3.6.3) hold. In [53] Burton introduced this concept of Almost-Linear equations for the continuous case and studied certain important properties of the resolvent of a linear Volterra equation. The work of this section is found in [150]. Our objective here is to apply the concept of Almost-Linear equations to Volterra difference equations and prove that the solutions of these Volterra difference equations are also bounded if they satisfy (3.6.2) and (3.6.3). Due to (3.6.2) and (3.6.3), the contraction mapping principle cannot be used, since our mapping cannot be made into a contraction. Therefore, we resort to the use of Krasnoselskii's fixed point theorem. At the end of the section we will construct a suitable Lyapunov functional and refer to Chapter 2 to deduce that all solutions of (3.6.1) are bounded. It turns out that each method has advantages and disadvantages.
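To make conditions (3.6.2) and (3.6.3) concrete, here is a small numerical check of a hypothetical almost-linear pair; the functions h, g and the constants below are our own illustrative choices, not taken from the text:

```python
import math

# Hypothetical almost-linear pair (our illustrative choices):
#   h(x) = x + sin(x)          -> |h(x) - 1*x| <= 1,   so H = 1,   H* = 1
#   g(x) = x/2 + x/(1 + x**2)  -> |g(x) - x/2| <= 1/2, so G = 1/2, G* = 1/2
def h(x):
    return x + math.sin(x)

def g(x):
    return 0.5 * x + x / (1.0 + x * x)

# Verify (3.6.2) and (3.6.3) on a grid covering [-100, 100].
for i in range(-2000, 2001):
    x = i / 20.0
    assert abs(h(x) - x) <= 1.0 + 1e-12          # condition (3.6.2)
    assert abs(g(x) - 0.5 * x) <= 0.5 + 1e-12    # condition (3.6.3)
```

The point is that each function deviates from a fixed linear function by a globally bounded amount, which is exactly what "almost linear" requires.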

We begin with the following lemma which is essential to the construction of our mappings. Consider the general difference equation

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) - Ha(n)x(n) = f(n),\;x(0) = x_{0},\;n \geq 0.& &{}\end{array}$$
(3.6.4)

Lemma 3.3.

Suppose 1 + Ha(n) ≠ 0 for all \(n \in [0,\infty ) \cap \mathbb{Z}.\) Then x(n) is a solution of equation  (3.6.4) if and only if

$$\displaystyle\begin{array}{rcl} x(n)& =& x(0)\prod _{s=0}^{n-1}(1 + Ha(s)) +\sum _{ u=0}^{n-1}f(u)\prod _{ s=u+1}^{n-1}(1 + Ha(s)).{}\end{array}$$
(3.6.5)

Proof.

First we note that (3.6.4) is equivalent to

$$\displaystyle\begin{array}{rcl} \bigtriangleup \Big[\prod _{s=0}^{n-1}(1 + Ha(s))^{-1}x(n)\Big] = f(n)\prod _{ s=0}^{n}(1 + Ha(s))^{-1}& &{}\end{array}$$
(3.6.6)

Summing equation (3.6.6) from 0 to n − 1 and dividing both sides by

$$\displaystyle{ \prod _{s=0}^{n-1}(1 + Ha(s))^{-1} }$$

gives (3.6.5).
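The equivalence in Lemma 3.3 can be sanity-checked numerically: the closed form (3.6.5) should reproduce the recurrence (3.6.4), i.e., x(n+1) = (1 + Ha(n))x(n) + f(n). The constant H, the initial value, and the sequences a and f below are arbitrary sample choices:

```python
import math

H = 2.0
x0 = 1.5
a = lambda n: 0.5 / (n + 1)          # sample coefficient (our choice)
f = lambda n: (-1.0) ** n / (n + 2)  # sample forcing term (our choice)

def prod(lo, hi):
    """prod_{s=lo}^{hi-1} (1 + H a(s)); empty products equal 1."""
    return math.prod(1.0 + H * a(s) for s in range(lo, hi))

def x_closed(n):
    """Right-hand side of (3.6.5)."""
    return x0 * prod(0, n) + sum(f(u) * prod(u + 1, n) for u in range(n))

# The closed form satisfies x(n+1) = (1 + H a(n)) x(n) + f(n), which is (3.6.4).
for n in range(20):
    assert abs(x_closed(n + 1) - ((1.0 + H * a(n)) * x_closed(n) + f(n))) < 1e-9
```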

Lemma 3.4.

Suppose 1 + Ha(n) ≠ 0 for all \(n \in [0,\infty ) \cap \mathbb{Z}.\) Then x(n) is a solution of equation  (3.6.1) if and only if

$$\displaystyle\begin{array}{rcl} x(n)& =& x(0)\prod _{s=0}^{n-1}(1 + Ha(s)) +\sum _{ u=0}^{n-1}\Big[a(u)\Big(-Hx(u) + h(x(u))\Big)\Big]\prod _{ s=u+1}^{n-1}(1 + Ha(s)) \\ & & +\;\sum _{u=0}^{n-1}\sum _{ k=0}^{u-1}c(u,k)\Big[g(x(k)) - Gx(k)\Big]\prod _{ s=u+1}^{n-1}(1 + Ha(s)) \\ & & +\;\sum _{u=0}^{n-1}\sum _{ k=0}^{u-1}c(u,k)Gx(k)\prod _{ s=u+1}^{n-1}(1 + Ha(s)). {}\end{array}$$
(3.6.7)

Proof.

Rewrite equation (3.6.1) as

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) - Ha(n)x(n)& =& -Ha(n)x(n) + a(n)h(x(n)) {}\\ & & \;+\;\sum _{k=0}^{n-1}c(n,k)\Big[g(x(k)) - Gx(k)\Big] +\sum _{ k=0}^{n-1}c(n,k)Gx(k).{}\\ \end{array}$$

If we let

$$\displaystyle\begin{array}{rcl} f(n)& =& -Ha(n)x(n) + a(n)h(x(n)) +\sum _{ k=0}^{n-1}c(n,k)\Big[g(x(k)) - Gx(k)\Big] {}\\ & +& \sum _{k=0}^{n-1}c(n,k)Gx(k), {}\\ \end{array}$$

then the results follow from Lemma 3.3.

We rely on the following theorem for the relative compactness criterion, since the Ascoli-Arzelà theorem cannot be utilized here due to the unbounded domain.

Theorem 3.6.1 ([7]).

Let M be the space of all bounded continuous (vector-valued) functions on [0, ∞) and S ⊂ M. Then S is relatively compact in M if the following conditions hold:

  1. (i)

    S is bounded in M;

  2. (ii)

    the functions in S are equicontinuous on any compact interval of [0, ∞);

  3. (iii)

    the functions in S are equiconvergent, that is, given ε > 0, there exists a T = T(ε) > 0 such that \(\parallel \phi (t) -\phi (\infty ) \parallel _{\mathbb{R}^{n}} <\epsilon,\) for all t > T and all ϕ ∈ S. 

We assume that

$$\displaystyle\begin{array}{rcl} \lim _{n\rightarrow \infty }a(n) = 0,& &{}\end{array}$$
(3.6.8)

and for some positive constant L,

$$\displaystyle\begin{array}{rcl} 0 \leq \sum _{k=0}^{u-1}\vert c(u,k)\vert \leq L\vert a(u)\vert \;\mbox{ for all}\;u \in [0,\infty ) \cap \mathbb{Z},& &{}\end{array}$$
(3.6.9)

and

$$\displaystyle\begin{array}{rcl} H\vert a(n)\vert \leq 1 -\big\vert 1 + Ha(n)\big\vert \;\mbox{ for all}\;n \in [0,\infty ) \cap \mathbb{Z}.& &{}\end{array}$$
(3.6.10)

Moreover, we assume

$$\displaystyle\begin{array}{rcl} \sum _{u=0}^{n-1}\Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big\vert \sum _{ k=0}^{u-1}G\vert c(u,k)\vert \leq \alpha <1,& &{}\end{array}$$
(3.6.11)

and

$$\displaystyle\begin{array}{rcl} \sum _{u=0}^{n-1}\Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big\vert \Big[\vert a(u)\vert H^{{\ast}} +\sum _{ k=0}^{u-1}G^{{\ast}}\vert c(u,k)\vert \Big] \leq \beta <\infty.& &{}\end{array}$$
(3.6.12)

Finally, choose a constant ρ > 0 such that

$$\displaystyle\begin{array}{rcl} \vert x_{0}\vert \Big\vert \prod _{s=0}^{n-1}(1 + Ha(s))\Big\vert +\alpha \rho +\beta \leq \rho & &{}\end{array}$$
(3.6.13)

for all n ≥ 0. Let S be the Banach space of bounded sequences with the maximum norm. Let

$$\displaystyle\begin{array}{rcl} M =\{\psi \in S,\;\psi (0) = x_{0}\;: \vert \vert \psi \vert \vert \leq \rho \}.& &{}\end{array}$$
(3.6.14)

Then M is a closed convex subset of S.

Define mappings \(\mathcal{A}: M \rightarrow S\) and \(\mathcal{B}: M \rightarrow M\) as follows.

$$\displaystyle\begin{array}{rcl} (\mathcal{A}\phi )(n)& =& \sum _{u=0}^{n-1}\Big[a(u)\Big(-H\phi (u) + h(\phi (u))\Big)\Big]\prod _{ s=u+1}^{n-1}(1 + Ha(s)) \\ & & +\;\sum _{u=0}^{n-1}\sum _{ k=0}^{u-1}c(u,k)\Big[g(\phi (k)) - G\phi (k)\Big]\prod _{ s=u+1}^{n-1}(1 + Ha(s)),{}\end{array}$$
(3.6.15)

and

$$\displaystyle\begin{array}{rcl} (\mathcal{B}\phi )(n)& =& x(0)\prod _{s=0}^{n-1}(1 + Ha(s)) \\ & & +\;\sum _{u=0}^{n-1}\sum _{ k=0}^{u-1}c(u,k)G\phi (k)\prod _{ s=u+1}^{n-1}(1 + Ha(s)).{}\end{array}$$
(3.6.16)

We have the following lemma.

Lemma 3.5.

Suppose  (3.6.11) and  (3.6.13) hold. Then the map \(\mathcal{B}\) is a contraction from M into M. 

Proof.

Let ϕM. It follows from (3.6.11) and (3.6.13) that

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{B}\phi )(n)\vert & \leq & \vert x_{0}\vert \Big\vert \prod _{s=0}^{n-1}(1 + Ha(s))\Big\vert +\alpha \rho \leq \rho.{}\end{array}$$
(3.6.17)

Also, for ϕ, ψM, we obtain

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{B}\phi )(n) - (\mathcal{B}\psi )(n)\vert & \leq & \sum _{u=0}^{n-1}\Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big\vert \sum _{ k=0}^{u-1}G\vert c(u,k)\vert \vert \vert \phi -\psi \vert \vert {}\\ &\leq & \alpha \vert \vert \phi -\psi \vert \vert. {}\\ \end{array}$$

This proves that \(\mathcal{B}\) is a contraction from M into M. 

Lemma 3.6.

The mapping \(\mathcal{A}\) is a continuous mapping on M.

Proof.

Let {ϕ_n} be any sequence of functions in M with ∥ϕ_n − ϕ∥ → 0 as n → ∞. Then one can easily verify that

$$\displaystyle\begin{array}{rcl} \parallel \mathcal{ A}\phi _{n} -\mathcal{ A}\phi \parallel \rightarrow 0\;\mbox{ as}\;n \rightarrow \infty.& & {}\\ \end{array}$$

Lemma 3.7.

Suppose  (3.6.2),  (3.6.3),  (3.6.8),  (3.6.9), and  (3.6.10) hold. Then \(\mathcal{A}(M)\) is relatively compact.

Proof.

We use Theorem 3.6.1 to prove the relative compactness of \(\mathcal{A}(M)\) by showing that all three conditions of Theorem 3.6.1 hold. To see that \(\mathcal{A}(M)\) is uniformly bounded, we use conditions (3.6.2), (3.6.3), (3.6.9), and (3.6.10) to obtain

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{A}\phi )(n)\vert & \leq & \frac{H^{{\ast}} + LG^{{\ast}}} {H} \sum _{u=0}^{n-1}H\vert a(u)\vert \Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big\vert {}\\ &\leq & \frac{H^{{\ast}} + LG^{{\ast}}} {H} \sum _{u=0}^{n-1}\big(1 -\big\vert 1 + Ha(u)\big\vert \big)\Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big\vert {}\\ & =& \frac{H^{{\ast}} + LG^{{\ast}}} {H} \sum _{u=0}^{n-1}\bigtriangleup _{ u}\Big[\prod _{s=u}^{n-1}\vert (1 + Ha(s))\vert \Big] {}\\ & \leq & \frac{H^{{\ast}} + LG^{{\ast}}} {H} \Big[1 -\prod _{s=0}^{n-1}\vert (1 + Ha(s))\vert \Big]:=\sigma \; \mbox{ for all}\;n \in [0,\infty ) \cap \mathbb{Z}. {}\\ \end{array}$$

This shows that \(\mathcal{A}(M)\) is uniformly bounded. To show equicontinuity of \(\mathcal{A}(M)\), without loss of generality, we let n1 > n2 for \(n_{1},n_{2} \in [0,\infty ) \cap \mathbb{Z}\) and use the notations

$$\displaystyle\begin{array}{rcl} F(\phi (u)) = a(u)[H\phi (u) - h(\phi (u))],& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} J(\phi (u)) =\sum _{ k=0}^{u-1}c(u,k)\Big[g(\phi (k)) - G\phi (k)\Big].& & {}\\ \end{array}$$

Then, we may write

$$\displaystyle\begin{array}{rcl} (\mathcal{A}\phi )(n)& =& \sum _{u=0}^{n-1}\prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big].{}\end{array}$$
(3.6.18)

Hence we have

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{A}\phi )(n_{1}) - (\mathcal{A}\phi )(n_{2})\vert & =& \Big\vert \sum _{u=0}^{n_{1}-1}\prod _{ s=u+1}^{n_{1}-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big] {}\\ & & -\;\sum _{u=0}^{n_{2}-1}\prod _{ s=u+1}^{n_{2}-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ & =& \Big\vert \sum _{u=0}^{n_{2}-1}\Big[\prod _{ s=u+1}^{n_{1}-1}(1 + Ha(s)) {}\\ & & \;-\;\prod _{s=u+1}^{n_{2}-1}(1 + Ha(s))\Big]\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ & &\;+\;\Big\vert \sum _{u=n_{2}}^{n_{1}-1}\prod _{ s=u+1}^{n_{1}-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & =& \sum _{u=0}^{n_{2}-1}\Big\vert \prod _{ s=u+1}^{n_{2}-1}(1 + Ha(s)) {}\\ & & \;-\;\prod _{s=u+1}^{n_{1}-1}(1 + Ha(s))\Big\vert \Big\vert F(\phi (u)) + J(\phi (u))\Big\vert {}\\ & &\;+\;\sum _{u=n_{2}}^{n_{1}-1}\prod _{ s=u+1}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big\vert F(\phi (u)) + J(\phi (u))\Big\vert {}\\ & \leq & \sigma \sum _{u=0}^{n_{2}-1}H\vert a(u)\vert \Big\vert \prod _{ s=u+1}^{n_{2}-1}\vert (1 + Ha(s))\vert -\prod _{ s=u+1}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big\vert {}\\ & &\;+\;\sigma \sum _{u=n_{2}}^{n_{1}-1}H\vert a(u)\vert \prod _{ s=u+1}^{n_{1}-1}\vert (1 + Ha(s))\vert {}\\ & \leq & \sigma \sum _{u=0}^{n_{2}-1}[1 -\vert 1 + Ha(u)\vert ]\Big\vert \prod _{ s=u+1}^{n_{2}-1}\vert (1 + Ha(s))\vert -\prod _{ s=u+1}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big\vert {}\\ & &\;+\;\sigma \sum _{u=n_{2}}^{n_{1}-1}[1 -\vert 1 + Ha(u)\vert ]\Big\vert \prod _{ s=u+1}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big\vert {}\\ & \leq & \sigma \sum _{u=0}^{n_{2}-1}\Big\vert \bigtriangleup _{ u}\Big[\prod _{s=u}^{n_{2}-1}\vert (1 + Ha(s))\vert -\prod _{ s=u}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big]\Big\vert {}\\ & &\;+\;\sigma \sum _{u=n_{2}}^{n_{1}-1}\bigtriangleup _{ u}\Big[\prod _{s=u}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big] {}\\ & \leq & \sigma \Big[2 - 2\prod _{s=n_{2}}^{n_{1}-1}\vert (1 + Ha(s))\vert -\prod _{ s=0}^{n_{2}-1}\vert (1 + Ha(s))\vert {}\\ & &\;+\;\prod _{s=0}^{n_{1}-1}\vert (1 + Ha(s))\vert \Big]\; \rightarrow 0\;\mbox{ as}\;n_{ 2} \rightarrow n_{1}. {}\\ \end{array}$$

This shows that \(\mathcal{A}(M)\) is equicontinuous.

To see that \(\mathcal{A}(M)\) is equiconvergent, we define

$$\displaystyle\begin{array}{rcl} & & \lim _{n\rightarrow \infty }\sum _{u=0}^{n-1}\prod _{ s=u}^{n-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big] = {}\\ & & \sum _{u=0}^{\infty }\prod _{ s=u}^{\infty }(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big]. {}\\ \end{array}$$

Then we have

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{A}\phi )(\infty ) - (\mathcal{A}\phi )(n)\vert & =& \Big\vert \sum _{u=0}^{\infty }\prod _{ s=u+1}^{\infty }(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big] {}\\ & & -\;\sum _{u=0}^{n-1}\prod _{ s=u+1}^{n-1}(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ & =& \Big\vert \sum _{u=0}^{n-1}\Big[\prod _{ s=u+1}^{\infty }(1 + Ha(s)) {}\\ & & \;-\;\prod _{s=u+1}^{n-1}(1 + Ha(s))\Big]\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ & &\;+\;\Big\vert \sum _{u=n}^{\infty }\prod _{ s=u+1}^{\infty }(1 + Ha(s))\Big[F(\phi (u)) + J(\phi (u))\Big]\Big\vert {}\\ & =& \sum _{u=0}^{n-1}\Big\vert \prod _{ s=u+1}^{n-1}(1 + Ha(s)) {}\\ & & \;-\;\prod _{s=u+1}^{\infty }(1 + Ha(s))\Big\vert \Big\vert F(\phi (u)) + J(\phi (u))\Big\vert {}\\ & &\;+\;\sigma \sum _{u=n}^{\infty }\bigtriangleup _{ u}\Big[\prod _{s=u}^{\infty }\vert (1 + Ha(s))\vert \Big] {}\\ & \leq & \sigma \sum _{u=0}^{n-1}\Big\vert \bigtriangleup _{ u}\Big[\prod _{s=u}^{n-1}\vert (1 + Ha(s))\vert -\prod _{ s=u}^{\infty }\vert (1 + Ha(s))\vert \Big]\Big\vert {}\\ & &\;+\;\sigma [1 -\prod _{s=n}^{\infty }\vert (1 + Ha(s))\vert ] {}\\ & \leq & \sigma \Big[2 - 2\prod _{s=n}^{\infty }\vert (1 + Ha(s))\vert -\prod _{ s=0}^{n-1}\vert (1 + Ha(s))\vert {}\\ & &\;+\;\prod _{s=0}^{\infty }\vert (1 + Ha(s))\vert \Big]\; \rightarrow 0\;\mbox{ as}\;n \rightarrow \infty, {}\\ \end{array}$$

where we used (3.6.8) which yields \(\lim _{n\rightarrow \infty }\prod _{s=n}^{\infty }(1 + Ha(s)) = 1.\)
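The limit \(\lim _{n\rightarrow \infty }\prod _{s=n}^{\infty }(1 + Ha(s)) = 1\) is easy to observe numerically. The sketch below uses H = 1 and a(s) = −2^{−s} (the coefficient appearing later in Section 3.6.1) and truncates the infinite product at a large index N:

```python
def tail_product(n, N=60):
    """Approximate prod_{s=n}^{inf} (1 + H a(s)) with H = 1 and a(s) = -2**(-s),
    truncating the product at s = N - 1."""
    p = 1.0
    for s in range(n, N):
        p *= 1.0 - 2.0 ** (-s)
    return p

# The tail products increase monotonically to 1 as n grows, which is the
# limit used at the end of the equiconvergence argument.
assert tail_product(5) < tail_product(10) < tail_product(30) <= 1.0
assert tail_product(30) > 1.0 - 1e-8
```

Since a(n) → 0, the factors 1 + Ha(s) approach 1 so quickly that the tail products converge to 1, as (3.6.8) guarantees.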

Theorem 3.6.2.

Assume  (3.6.2),  (3.6.3), and  (3.6.8)– (3.6.13) hold. Then  (3.6.1) has a bounded solution.

Proof.

For ϕ, ψM, we obtain

$$\displaystyle\begin{array}{rcl} \vert (\mathcal{A}\phi )(n) + (\mathcal{B}\psi )(n)\vert & \leq & \vert x_{0}\vert \Big\vert \prod _{s=0}^{n-1}(1 + Ha(s))\Big\vert +\alpha \rho +\beta \leq \rho. {}\\ \end{array}$$

Thus, \(\mathcal{A}\phi +\mathcal{ B}\psi \in M.\) Moreover, Lemmas 3.5–3.7 satisfy the requirements of Krasnoselskii's fixed point theorem, and hence there exists a function x(n) ∈ M such that

$$\displaystyle{ x(n) =\mathcal{ A}x(n) +\mathcal{ B}x(n). }$$

This proves that (3.6.1) has a bounded solution x(n). 

3.6.1 Application to Nonlinear Volterra Difference Equations

Consider the Volterra difference equation

$$\displaystyle\begin{array}{rcl} \bigtriangleup x(n) = - \frac{1} {2^{n}}h(x(n)) +\sum _{ k=0}^{n-1} \frac{4^{k}} {4(2^{n})n!}g(x(k)),\;x(0) = x_{0},\;n \geq 0,& &{}\end{array}$$
(3.6.19)

where the functions h and g satisfy conditions (3.6.2) and (3.6.3), respectively. Let H, G, H∗, and G∗ be positive constants with G < 1 and H = 1. We choose ρ > 0 such that for any initial point x0, the inequality

$$\displaystyle\begin{array}{rcl} \vert x_{0}\vert \Big\vert \prod _{s=0}^{n-1}(1 - 2^{-s})\Big\vert + G\rho + (H^{{\ast}} + G^{{\ast}}) \leq \rho & & {}\\ \end{array}$$

holds. Then (3.6.19) has a bounded solution x(n) satisfying | | x | | ≤ ρ. 

We let \(a(n) = -\frac{1} {2^{n}}\) and \(c(n,k) = \frac{4^{k}} {4(2^{n})n!}.\)

Thus,

$$\displaystyle\begin{array}{rcl} \sum _{u=0}^{n-1}\vert c(n,u)\vert & =& \sum _{ u=0}^{n-1} \frac{4^{u}} {4(2^{n})n!} {}\\ & \leq & \frac{1} {4(2^{n})n!}\Big(4^{n} - 1\Big) {}\\ & \leq & \frac{1} {2^{n}}. {}\\ \end{array}$$

This shows that condition (3.6.9) is satisfied with L = 1. Condition (3.6.8) can be easily verified. Moreover,

$$\displaystyle\begin{array}{rcl} H\vert a(n)\vert = 2^{-n} = 1 - (1 - 2^{-n}) \leq 1 -\vert 1 + Ha(n)\vert,& & {}\\ \end{array}$$

thus showing that condition (3.6.10) is satisfied. Next, we verify (3.6.11) as follows.

$$\displaystyle\begin{array}{rcl} & & \sum _{u=0}^{n-1}\vert \prod _{ s=u+1}^{n-1}(1 - 2^{-s})\vert G\sum _{ k=0}^{u-1} \frac{4^{k}} {4(2^{n})n!} {}\\ & & \leq G\sum _{u=0}^{n-1} \frac{1} {2^{u}} = G(1 - \frac{1} {2^{n}}) {}\\ & & \leq G <1. {}\\ \end{array}$$

Finally, we verify (3.6.12).

$$\displaystyle\begin{array}{rcl} & & \sum _{u=0}^{n-1}\Big\vert \prod _{ s=u+1}^{n-1}(1 - 2^{-s})\Big\vert \Big[2^{-u}H^{{\ast}} +\sum _{ k=0}^{u-1}G^{{\ast}} \frac{4^{k}} {4(2^{n})n!}\Big] {}\\ & & \leq \sum _{u=0}^{n-1}\Big[2^{-u}H^{{\ast}} + G^{{\ast}} \frac{1} {2^{u}}\Big] {}\\ & & = (H^{{\ast}} + G^{{\ast}})\sum _{ u=0}^{n-1} \frac{1} {2^{u}} {}\\ & & \leq (H^{{\ast}} + G^{{\ast}})(1 - \frac{1} {2^{n}}) <(H^{{\ast}} + G^{{\ast}}). {}\\ \end{array}$$

Thus, by Theorem 3.6.2, Equation (3.6.19) has a bounded solution.
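Conditions (3.6.8) and (3.6.9) for equation (3.6.19) can also be confirmed numerically. The following sketch checks the bound ∑_k |c(n,k)| ≤ |a(n)| (that is, L = 1) over a range of n:

```python
import math

# Kernel and coefficient from equation (3.6.19):
#   c(n, k) = 4**k / (4 * 2**n * n!)   and   a(n) = -1 / 2**n.
def c(n, k):
    return 4.0 ** k / (4.0 * 2.0 ** n * math.factorial(n))

def a(n):
    return -2.0 ** (-n)

# Condition (3.6.9) with L = 1: sum_{k < n} |c(n,k)| <= |a(n)|.
for n in range(1, 15):
    assert sum(abs(c(n, k)) for k in range(n)) <= abs(a(n)) + 1e-15

# Condition (3.6.8): a(n) -> 0 as n -> infinity.
assert abs(a(30)) < 1e-8
```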

3.7 Lyapunov Functionals or Fixed Points

In this section, we construct a Lyapunov functional and then refer to Theorem 2.1.1 to deduce boundedness of all solutions of (3.6.1). Then we will compare the results, via an example, with Theorem 3.6.2. First we rewrite (3.6.1) as

$$\displaystyle\begin{array}{rcl} x(n + 1) = b(n)h(x(n)) +\sum _{ s=0}^{n-1}C(n,s)g(x(s)),\;x(0) = x_{ 0},\;n \geq 0,& &{}\end{array}$$
(3.7.1)

where \(b(n) = 1 + a(n)\) and C(n,s) = c(n,s). Before we state the next theorem we note that as a consequence of (3.6.2) and (3.6.3) we have, respectively, that

$$\displaystyle\begin{array}{rcl} \mid h(x)\mid \leq H\mid x\mid + H^{{\ast}},& &{}\end{array}$$
(3.7.2)

and

$$\displaystyle\begin{array}{rcl} \vert g(x)\vert \leq G\mid x\mid + G^{{\ast}}.& &{}\end{array}$$
(3.7.3)

Theorem 3.7.1.

Suppose  (3.7.2) and  (3.7.3) hold and for some α ∈ (0, 1), we have that

$$\displaystyle{ H\vert b(n)\vert + G\sum _{j=n+1}^{\infty }\vert C(j,n)\vert - 1 \leq -\alpha. }$$
(3.7.4)

Also, assume that

$$\displaystyle{ \sum _{s=0}^{n}\sum _{ j=n}^{\infty }\vert C(j,s)\vert <\infty }$$
(3.7.5)

and

$$\displaystyle{ \bigtriangleup _{s}\vert C(j,s)\vert \geq 0 }$$
(3.7.6)

then solutions of  (3.7.1) are bounded.

Proof.

Define

$$\displaystyle{ V (n,x(\cdot )) = \vert x(n)\vert +\sum _{ s=0}^{n-1}\sum _{ j=n}^{\infty }\vert C(j,s)\vert \vert g(x(s))\vert. }$$
(3.7.7)

Then along solutions of (3.7.1), we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (n,x(\cdot ))& =& \vert x(n + 1)\vert -\vert x(n)\vert +\sum _{ s=0}^{n}\sum _{ j=n+1}^{\infty }\vert C(j,s)\vert \vert g(x(s))\vert {}\\ & &-\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert C(j,s)\vert \vert g(x(s))\vert {}\\ & =& \big\vert b(n)h(x(n)) +\sum _{ s=0}^{n-1}C(n,s)g(x(s))\big\vert {}\\ & &-\vert x(n)\vert +\sum _{ s=0}^{n}\sum _{ j=n+1}^{\infty }\vert C(j,s)\vert \vert g(x(s))\vert {}\\ & &-\sum _{s=0}^{n-1}\sum _{ j=n}^{\infty }\vert C(j,s)\vert \vert g(x(s))\vert {}\\ &\leq & \Big[H\vert b(n)\vert + G\sum _{j=n+1}^{\infty }\vert C(j,n)\vert - 1\Big]\vert x(n)\vert + M {}\\ & \leq & -\alpha \vert x(n)\vert + M, {}\\ \end{array}$$

where \(M\) denotes an upper bound for \(H^{{\ast}}\vert b(n)\vert + G^{{\ast}}\sum _{ j=n+1}^{\infty }\vert C(j,n)\vert\).

Let \(\varphi (n,s) =\sum _{ j=n}^{\infty }\vert C(j,s)\vert.\) Then, all the conditions of Theorem 2.1.1 are satisfied which implies that all solutions of (3.7.1) are bounded.

We note that Theorem 2.1.1 gives conditions under which all solutions of (3.7.1) are bounded, unlike Theorem 3.6.2 from which one can only conclude the existence of a bounded solution.

Next, we use the results of Section 3.6.1 to compare the conditions of Theorem 2.1.1 to those of Theorem 3.6.2. Let a(n), G, and H be given as in Theorem 3.7.1 and consider condition (3.7.4) for n ≥ 0. Then,

$$\displaystyle\begin{array}{rcl} H\vert b(n)\vert + G\sum _{j=n+1}^{\infty }\vert C(j,n)\vert - 1& =& \vert 1 - \frac{1} {2^{n}}\vert - 1 +\sum _{ j=n+1}^{\infty } \frac{4^{n}} {4(2^{j})j!} \\ & =& - \frac{1} {2^{n}} + 4^{n-1}\big[\sum _{ j=0}^{\infty } \frac{1} {(2^{j})j!} -\sum _{j=0}^{n} \frac{1} {(2^{j})j!}\big] \\ & =& - \frac{1} {2^{n}} + 4^{n-1}\big[\sqrt{e} -\sum _{ j=0}^{n} \frac{1} {(2^{j})j!}\big]. {}\end{array}$$
(3.7.8)

Next we perform the following calculations by using n! > 2^n for n ≥ 4. 

$$\displaystyle\begin{array}{rcl} -\sum _{j=0}^{n} \frac{1} {(2^{j})j!}& =& -\frac{3} {2} -\frac{1} {8} - \frac{1} {48} -\sum _{j=4}^{n} \frac{1} {(2^{j})j!} \\ & \geq & -\frac{3} {2} -\frac{1} {8} - \frac{1} {48} -\sum _{j=4}^{n} \frac{1} {(4^{j})} \\ & =& -\frac{3} {2} -\frac{1} {8} - \frac{1} {48} + \frac{1} {4} + \frac{1} {4^{2}} + \frac{1} {4^{3}} -\sum _{j=1}^{n} \frac{1} {(4^{j})} \\ & =& -\frac{78} {48} + \frac{21} {4^{3}} -\frac{1} {4}\Big(\frac{1 - (1/4)^{n}} {1 - 1/4} \Big). {}\end{array}$$
(3.7.9)

Thus, substitution of (3.7.9) into (3.7.8) yields,

$$\displaystyle\begin{array}{rcl} H\vert b(n)\vert + G\sum _{j=n+1}^{\infty }\vert C(j,n)\vert - 1& \geq & - \frac{1} {2^{n}} + 4^{n-1}\big[\sqrt{e} -\frac{78} {48} + \frac{21} {4^{3}} -\frac{1} {4}\Big(\frac{1 - (1/4)^{n}} {1 - 1/4} \Big)\big] {}\\ &>& 0,\;\mbox{ for}\;n = 3. {}\\ \end{array}$$

This shows that condition (3.7.4) does not hold for all n ≥ 0. Hence, Theorem 2.1.1 gives no information regarding the solutions and yet Theorem 3.6.2 implies the existence of at least one bounded solution.
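Two ingredients of the above comparison can be confirmed numerically: the identity ∑_{j≥0} 1/(2^j j!) = √e used in (3.7.8), and the factorial bound n! > 2^n for n ≥ 4 behind the estimate (3.7.9):

```python
import math

# The series sum_{j>=0} 1/(2**j * j!) is the Maclaurin series of e**x at
# x = 1/2, hence equals sqrt(e); thirty terms are far more than enough.
partial = sum(1.0 / (2.0 ** j * math.factorial(j)) for j in range(30))
assert abs(partial - math.sqrt(math.e)) < 1e-12

# The factorial bound used to replace 1/(2**j * j!) by 1/4**j for j >= 4.
for n in range(4, 40):
    assert math.factorial(n) > 2 ** n
```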

3.8 Delay Functional Difference Equations

We consider a functional difference equation with infinite delay and use fixed point theory to obtain necessary and sufficient conditions for the asymptotic stability of its zero solution. We will apply the results to nonlinear Volterra difference equations. Let \(\mathbb{R} = (-\infty,\infty ),\; \mathbb{Z}^{+} = [0,\infty ) \cap \mathbb{Z},\) and \(\mathbb{Z}^{-} = (-\infty,0] \cap \mathbb{Z}.\) We concentrate on the delay functional difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t) + g(t,x_{t}), }$$
(3.8.1)

where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and \(g: \mathbb{Z}^{+} \times \mathcal{ C}\rightarrow \mathbb{R}\) is continuous, with \(\mathcal{C}\) being the Banach space of bounded functions \(\phi: \mathbb{Z}^{-}\rightarrow \mathbb{R}\) with the maximum norm \(\vert \vert \cdot \vert \vert.\) If \(x_{t} \in \mathcal{ C},\) then \(x_{t}(s) = x(t + s)\) for \(s \in \mathbb{Z}^{-}\).

We will use fixed point theory to obtain necessary and sufficient conditions for the asymptotic stability of the zero solution of (3.8.1). Throughout this section we assume \(g(t,0) = 0\) so that x = 0 is a solution of (3.8.1). For every β > 0, we define the set

$$\displaystyle{ \mathcal{C}(\beta ) =\{\phi \in \mathcal{ C}: \vert \vert \phi \vert \vert \leq \beta \}. }$$

Given a function \(\psi: \mathbb{Z} \rightarrow \mathbb{R},\) we define \(\vert \vert \psi \vert \vert ^{[s,t]} =\max \{ \vert \psi (u)\vert: s \leq u \leq t\}.\) Moreover, for D > 0 a sequence \(x: (-\infty,D] \rightarrow \mathbb{R}\) is called a solution of (3.8.1) through \((t_{0},\phi ) \in \mathbb{Z}^{+} \times \mathcal{ C}\) if \(x_{t_{0}} =\phi\) and x satisfies (3.8.1) on [t0, D]. Due to the importance of the next result, we summarize it in the following lemma.

Lemma 3.8.

Suppose that a(t) 0 for all \(t \in \mathbb{Z}^{+}\) . Then x(t) is a solution of equation  (3.8.1) if and only if

$$\displaystyle\begin{array}{rcl} x(t) =\phi (t_{0})\prod _{s=t_{0}}^{t-1}a(s) +\sum _{ s=t_{0}}^{t-1}\prod _{ u=s+1}^{t-1}a(u)\;g(s,x_{ s})\;\mathit{\mbox{ for}}\;t \geq t_{0}.& &{}\end{array}$$
(3.8.2)

The proof of Lemma 3.8 follows easily from the variation of parameters formula given in Chapter 1, and hence we omit it.
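Although we omit the formal proof, the representation (3.8.2) is easy to confirm on a toy instance of (3.8.1); the coefficient a, the delay functional g, and the initial history below are entirely our own choices:

```python
import math

# Toy instance of (3.8.1) (all choices ours): a(t) = 0.5 and
# g(t, x_t) = 0.1 * x(t - 1), with history x(-1) = 2, x(0) = 1 and t0 = 0.
t0 = 0

def a(t):
    return 0.5

def solve_forward(T):
    """Iterate x(t+1) = a(t) x(t) + 0.1 x(t-1) from the given history."""
    x = {-1: 2.0, 0: 1.0}
    for t in range(T):
        x[t + 1] = a(t) * x[t] + 0.1 * x[t - 1]
    return x

def x_formula(t, x):
    """Right-hand side of (3.8.2) with g(s, x_s) = 0.1 * x(s - 1)."""
    prod = lambda lo, hi: math.prod(a(u) for u in range(lo, hi))
    return x[t0] * prod(t0, t) + sum(prod(s + 1, t) * 0.1 * x[s - 1]
                                     for s in range(t0, t))

# The variation of parameters formula reproduces the iterated solution.
x = solve_forward(15)
for t in range(1, 16):
    assert abs(x_formula(t, x) - x[t]) < 1e-12
```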

In preparation for our next theorem, we let L > 0 be a constant, δ0 ≥ 0, and t0 ≥ 0. Let \(\phi \in \mathcal{ C}(\delta _{0})\) be fixed and set

$$\displaystyle{ S =\Big\{ x: \mathbb{Z} \rightarrow \mathbb{R}: x_{t_{0}} =\phi,\;x_{t} \in \mathcal{ C}(L)\;\mbox{ for}\;t \geq t_{0},x(t) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty \Big\}. }$$

Then, S is a complete metric space with metric

$$\displaystyle{ \rho (x,y) =\max _{t\geq t_{0}}\vert x(t) - y(t)\vert. }$$

Define the mapping P: SS by

$$\displaystyle\begin{array}{rcl} \big(Px)(t) =\phi (t)\;\mbox{ if}\;t \leq t_{0}& & {}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \big(Px\big)(t) =\phi (t_{0})\prod _{s=t_{0}}^{t-1}a(s) +\sum _{ s=t_{0}}^{t-1}\prod _{ u=s+1}^{t-1}a(u)\;g(s,x_{ s})\;\mbox{ for}\;t \geq t_{0}.& & {}\\ \end{array}$$

It is clear that for φ ∈ S, Pφ is continuous.

Theorem 3.8.1 ([146]).

Assume the existence of positive constants α, L, and a sequence \(b: \mathbb{Z}^{+} \rightarrow [0,\infty )\) such that the following conditions hold:

  1. (i)

    \(a(t)\neq 0\) for all \(t \in \mathbb{Z}^{+}.\)

  2. (ii)

    \(\sum _{s=0}^{t-1}\Big\vert \prod _{ u=s+1}^{t-1}a(u)\Big\vert b(s) \leq \alpha <1\)   for all \(t \in \mathbb{Z}^{+}.\)

  3. (iii)

    \(\vert g(t,\phi ) - g(t,\psi )\vert \leq b(t)\vert \vert \phi -\psi \vert \vert\) for all \(\phi,\psi \in \mathcal{ C}(L).\)

  4. (iv)

    For each ε > 0 and t1 ≥ 0, there exists a t2 > t1 such that for \(t> t_{2},x_{t} \in \mathcal{ C}(L)\) imply

    $$\displaystyle{ \vert g(t,x_{t})\vert \leq b(t)\Big(\epsilon +\vert \vert x\vert \vert ^{[t_{1},t-1]}\Big). }$$

    Then the zero solution of  (3.8.1) is asymptotically stable if and only if

  5. (v)

    \(\vert \prod _{s=0}^{t-1}a(s)\vert \rightarrow 0\)   as t. 

Proof.

Suppose (v) holds and let \(K =\max _{t\geq t_{0}}\vert \prod _{s=t_{0}}^{t-1}a(s)\vert.\) Then K > 0 due to (i). Choose δ0 > 0 such that \(\delta _{0}K +\alpha L \leq L.\) Then for x(t) ∈ S and for fixed \(\phi \in \mathcal{ C}(\delta _{0})\) we have

$$\displaystyle\begin{array}{rcl} \vert \big(Px\big)(t)\vert & \leq & \vert \phi (t_{0})\vert \;\vert \prod _{s=t_{0}}^{t-1}a(s)\vert +\sum _{ s=t_{0}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert b(s)\vert \vert x_{ s}\vert \vert {}\\ &\leq & \delta _{0}K +\alpha L \leq L,\;\mbox{ for}\;t \geq t_{0}. {}\\ \end{array}$$

Hence, \(\big(Px\big)_{t} \in \mathcal{ C}(L)\) for t ≥ t0. Next we show that \(\big(Px\big)(t) \rightarrow 0\) as t → ∞. Let x ∈ S. As a consequence of \(x(t) \rightarrow 0\) as t → ∞, there exists t1 > t0 such that | x(t) | < ε for all t ≥ t1. Moreover, since | x(t) | ≤ L for all \(t \in \mathbb{Z}\), by (iv) there is a t2 > t1 such that for t > t2 we have

$$\displaystyle{ \vert g(t,x_{t})\vert \leq b(t)\Big(\epsilon +\vert \vert x\vert \vert ^{[t_{1},t-1]}\Big). }$$

Thus, for tt2, we have

$$\displaystyle\begin{array}{rcl} \big\vert \sum _{s=t_{0}}^{t-1}\prod _{ u=s+1}^{t-1}a(u)\;g(s,x_{ s})\big\vert & \leq & \sum _{s=t_{0}}^{t_{2}-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \;\vert g(s,x_{ s})\vert {}\\ & +& \sum _{s=t_{2}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \;\vert g(s,x_{ s})\vert {}\\ &\leq & \sum _{s=t_{0}}^{t_{2}-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \;\vert \vert x_{ s}\vert \vert {}\\ & +& \sum _{s=t_{2}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \;b(s)\Big(\epsilon +\vert \vert x\vert \vert ^{[t_{1},s-1]}\Big) {}\\ & \leq & \sum _{s=t_{0}}^{t_{2}-1}\vert \prod _{ u=s+1}^{t_{2}-1}a(u)\vert \vert \prod _{ u=t_{2}}^{t-1}a(u)\vert \;\vert \vert x_{ s}\vert \vert + 2\alpha \epsilon {}\\ & \leq & \alpha L\vert \prod _{u=t_{2}}^{t-1}a(u)\vert + 2\alpha \epsilon. {}\\ \end{array}$$

By (v), there exists t3 > t2 such that

$$\displaystyle{ \delta _{0}\vert \prod _{s=t_{0}}^{t-1}a(s)\vert +\alpha L\vert \prod _{ u=t_{2}}^{t-1}a(u)\vert <\epsilon. }$$

Thus, for \(t \geq t_{3}\), we have

$$\displaystyle{ \vert \big(Px\big)(t)\vert \leq \delta _{0}\vert \prod _{s=t_{0}}^{t-1}a(s)\vert +\alpha L\vert \prod _{ u=t_{2}}^{t-1}a(u)\vert + 2\alpha \epsilon <3\epsilon. }$$

Hence, \(\big(Px\big)(t) \rightarrow 0\) as \(t \rightarrow \infty \). It remains to show that P is a contraction under the maximum norm. Let \(\zeta,\eta \in S\). Then

$$\displaystyle\begin{array}{rcl} \Big\vert (P\zeta )(t) - (P\eta )(t)\Big\vert & \leq & \sum _{s=t_{0}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \;\vert g(s,\zeta _{ s}) - g(s,\eta _{s})\vert {}\\ &\leq & \sum _{s=t_{0}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert b(s)\;\vert \zeta _{ s} -\eta _{s}\vert {}\\ &\leq & \alpha \rho (\zeta,\eta ). {}\\ \end{array}$$

Or,

$$\displaystyle{ \rho (P\zeta,P\eta ) \leq \alpha \rho (\zeta,\eta ). }$$

Thus, by the contraction mapping principle, P has a unique fixed point in S which solves (3.8.1) with \(\phi \in \mathcal{ C}(\delta _{0})\), and x(t) = x(t, t0, ϕ) → 0 as \(t \rightarrow \infty \). We are left with showing that the zero solution of (3.8.1) is stable. Let ε > 0 with ε < L be given, and choose 0 < δ < ε so that \(\delta K+\alpha \epsilon <\epsilon.\) Let x(t) = x(t, t0, ϕ) be a solution of (3.8.1) with | | ϕ | | < δ. By the choice of δ we have | x(t0) | < ε. Suppose, by way of contradiction, that there is a \(t^{{\ast}}\geq t_{0} + 1\) such that \(\vert x(t^{{\ast}})\vert \geq \epsilon \) and | x(s) | < ε for \(t_{0} \leq s \leq t^{{\ast}}- 1\). Then

$$\displaystyle\begin{array}{rcl} \vert x(t^{{\ast}})\vert & \leq & \delta \;\vert \prod _{ s=t_{0}}^{t^{{\ast}}-1 }a(s)\vert +\sum _{ s=t_{0}}^{t^{{\ast}}-1 }\vert \prod _{u=s+1}^{t^{{\ast}}-1 }a(u)\vert b(s)\vert \vert x_{s}\vert \vert {}\\ &\leq & \delta K+\alpha \epsilon <\epsilon, {}\\ \end{array}$$

which contradicts the definition of \(t^{{\ast}}\). Thus | x(t) | < ε for all \(t \geq t_{0}\), and hence the zero solution of (3.8.1) is asymptotically stable.

Conversely, suppose (v) does not hold. Then by (i) there exist a sequence \(\{t_{n}\}\) with \(t_{n} \rightarrow \infty \) and a positive constant q such that

$$\displaystyle{ \Big(\vert \prod _{u=0}^{t_{n}-1}a(u)\vert \Big)^{-1} = q,\;\mbox{ for}\;n = 1,2,3,\cdots \,. }$$

Now by (ii) we have that

$$\displaystyle{ \sum _{s=0}^{t_{n}-1}\vert \prod _{ u=s+1}^{t_{n}-1}a(u)\vert b(s) \leq \alpha, }$$

from which we get that

$$\displaystyle{ \Big(\vert \prod _{u=0}^{t_{n}-1}a(u)\vert \Big)^{-1}\sum _{ s=0}^{t_{n}-1}\vert \prod _{ u=s+1}^{t_{n}-1}a(u)\vert b(s) \leq \alpha \Big (\vert \prod _{ u=0}^{t_{n}-1}a(u)\vert \Big)^{-1}. }$$

This simplifies to

$$\displaystyle{ \sum _{s=0}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s) \leq \alpha q. }$$

Thus the sequence \(\{\sum _{s=0}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s)\}\) is bounded, and hence it has a convergent subsequence. To keep the notation simple, we may assume

$$\displaystyle{ \mathop{\lim }\limits_{n \rightarrow \infty }\sum _{s=0}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s) =\omega }$$

for some positive constant ω. Next we may choose a positive integer \(\widetilde{n}\) large enough so that

$$\displaystyle{ \sum _{s=t_{\widetilde{n}}}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s) <\frac{1-\alpha } {2K^{2}} }$$

for all \(n \geq \widetilde{ n}.\)

Consider the solution \(x(t,t_{\widetilde{n}},\phi )\) with \(\phi (s) =\delta _{0}\) for \(s \leq t_{\widetilde{n}}.\) Then \(\vert x(t)\vert \leq L\) for all \(t \geq t_{\widetilde{n}}\) and

$$\displaystyle\begin{array}{rcl} \vert x(t)\vert & \leq & \delta _{0}\;\vert \prod _{s=t_{\widetilde{n}}}^{t-1}a(s)\vert +\sum _{ s=t_{\widetilde{n}}}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert b(s)\vert \vert x_{ s}\vert \vert {}\\ &\leq & \delta _{0}K +\alpha \vert \vert x_{t}\vert \vert. {}\\ \end{array}$$

This implies

$$\displaystyle{ \vert x(t)\vert \leq \frac{\delta _{0}K} {1-\alpha },\;\mbox{ for all}\;t \geq t_{\widetilde{n}}. }$$

On the other hand, for \(n \geq \widetilde{ n}\), we also have

$$\displaystyle\begin{array}{rcl} \vert x(t_{n})\vert & \geq & \delta _{0}\;\vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert -\sum _{ s=t_{\widetilde{n}}}^{t_{n}-1}\vert \prod _{ u=s+1}^{t_{n}-1}a(u)\vert b(s)\vert \vert x_{ s}\vert \vert {}\\ &\geq & \delta _{0}\;\vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert -\frac{\delta _{0}K} {1-\alpha }\vert \prod _{u=0}^{t_{n}-1}a(u)\vert \sum _{ s=t_{\widetilde{n}}}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s) {}\\ & =& \delta _{0}\;\vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert -\frac{\delta _{0}K} {1-\alpha }\vert \prod _{u=0}^{t_{\widetilde{n}}-1}a(u)\vert \;\vert \prod _{ u=t_{\widetilde{n}}}^{t_{n}-1}a(u)\vert \sum _{ s=t_{\widetilde{n}}}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s) {}\\ & \geq & \vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert \Big(\delta _{ 0} -\frac{\delta _{0}K^{2}} {1-\alpha }\sum _{s=t_{\widetilde{n}}}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s)\Big) {}\\ & \geq & \vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert \Big(\delta _{ 0} -\frac{\delta _{0}K^{2}} {1-\alpha }\; \frac{1-\alpha } {2K^{2}}\Big) = \frac{\delta _{0}} {2}\vert \prod _{s=t_{\widetilde{n}}}^{t_{n}-1}a(s)\vert {}\\ & =& \frac{\delta _{0}} {2}\vert \prod _{u=0}^{t_{n}-1}a(u)\vert \Big(\vert \prod _{ u=0}^{t_{\widetilde{n}}-1}a(u)\vert \Big)^{-1} \rightarrow \frac{\delta _{0}} {2q}\Big(\vert \prod _{u=0}^{t_{\widetilde{n}}-1}a(u)\vert \Big)^{-1}\neq 0\;\mbox{ as}\;n \rightarrow \infty. {}\\ \end{array}$$

Hence, condition (v) is necessary. This completes the proof.

Now we apply the results of Theorem 3.8.1 to the nonlinear Volterra infinite delay equation

$$\displaystyle{ x(t + 1) = a(t)x(t) +\sum _{ s=-\infty }^{t-1}G(t,s,x(s)) }$$
(3.8.3)

where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\;\mbox{ and}\;G:\varOmega \times \mathbb{R} \rightarrow \mathbb{R},\varOmega =\{ (t,s) \in \mathbb{Z}^{2}: t \geq s\}\) and G is continuous in x. The next theorem gives necessary and sufficient conditions for the stability of the zero solution of (3.8.3).

Theorem 3.8.2.

Assume the existence of positive constants α, L, and a sequence \(p:\varOmega \rightarrow \mathbb{R}^{+}\) such that the following conditions hold:

  1. (I)

    \(a(t)\neq 0\) for all \(t \in \mathbb{Z}^{+},\)

  2. (II)

    \(\mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \sum _{\tau =0}^{s-1}p(s,\tau ) \leq \alpha <1\)   for all \(t \in \mathbb{Z}^{+},\)

  3. (III)

    If \(\vert x\vert,\vert y\vert \leq L\) , then

    $$\displaystyle{ \vert G(t,s,x) - G(t,s,y)\vert \leq p(t,s)\vert x - y\vert }$$

     and  G(t,s,0) = 0 for all (t, s) ∈ Ω, 

  4. (IV)

    For each ε > 0 and t1 ≥ 0, there exists a t2 > t1 such that \(t \geq t_{2}\) implies

    $$\displaystyle{ \sum _{s=-\infty }^{t_{1}-1}p(t,s) \leq \epsilon \sum _{ s=-\infty }^{t-1}p(t,s). }$$

    Then the zero solution of  (3.8.3) is asymptotically stable if and only if

  5. (V)

    \(\vert \prod _{s=0}^{t-1}a(s)\vert \rightarrow 0\)   as \(t \rightarrow \infty \). 

Proof.

We only need to verify that (iii) and (iv) of Theorem 3.8.1 hold. First we remark that due to condition (III) we have that \(\vert G(t,s,x)\vert \leq p(t,s)L.\) Equation (3.8.3) can be put in the form of Equation (3.8.1) by letting

$$\displaystyle{ g(t,\phi ) =\sum _{ s=-\infty }^{-1}G(t,t + s,\phi (s)). }$$

To verify (iii) we let \(b(t) =\sum _{ s=-\infty }^{t-1}p(t,s)\) and then for any functions \(\phi,\varphi \in \mathcal{ C}(L),\) we have

$$\displaystyle\begin{array}{rcl} \vert g(t,\phi ) - g(t,\varphi )\vert & \leq & \Big\vert \sum _{s=-\infty }^{-1}G(t,t + s,\phi (s)) -\sum _{ s=-\infty }^{-1}G(t,t + s,\varphi (s))\Big\vert {}\\ &\leq & \sum _{s=-\infty }^{-1}p(t,t + s)\;\vert \vert \phi -\varphi \vert \vert {}\\ & =& b(t)\vert \vert \phi -\varphi \vert \vert. {}\\ \end{array}$$

Next we verify (iv). Let ε > 0 and t1 ≥ 0 be given. By (IV) there exists a t2 > t1 such that

$$\displaystyle{ L\sum _{s=-\infty }^{t_{1}-1}p(t,s) <\epsilon \sum _{ s=-\infty }^{t-1}p(t,s)\;\mbox{ for all}\;t> t_{ 2}. }$$

Let \(x_{t} \in \mathcal{ C}(L)\). Then for t > t2 we have

$$\displaystyle\begin{array}{rcl} \vert g(t,x_{t})\vert & \leq & \sum _{s=-\infty }^{t_{1}-1}\vert G(t,s,x(s))\vert +\sum _{ s=t_{1}}^{t-1}\vert G(t,s,x(s))\vert {}\\ &\leq & \sum _{s=-\infty }^{t_{1}-1}Lp(t,s) +\sum _{ s=t_{1}}^{t-1}p(t,s)\vert x(s)\vert {}\\ &\leq & \epsilon \sum _{s=-\infty }^{t-1}p(t,s) +\sum _{ s=t_{1}}^{t-1}p(t,s)\vert \vert x\vert \vert ^{[t_{1},t-1]} {}\\ & \leq & b(t)\Big(\epsilon +\vert \vert x\vert \vert ^{[t_{1},t-1]}\Big). {}\\ \end{array}$$

This implies that (iv) is satisfied, and hence by Theorem 3.8.1, the zero solution of (3.8.3) is asymptotically stable if and only if (V) holds.

We end the section with the following example.

Example 3.9.

Consider the difference equation

$$\displaystyle{ x(t + 1) = \frac{1} {2^{t}}x(t) +\sum _{ s=-\infty }^{t-1}2^{s-t}x(s),\;t \geq 0. }$$
(3.8.4)

In this example we take t0 = 0 and verify that all conditions of Theorem 3.8.2 are satisfied. We observe that \(a(t) = \frac{1} {2^{t}}\) and \(G(t,s,x) = 2^{s-t}x.\) Thus,

$$\displaystyle{ \prod _{s=0}^{t-1} \frac{1} {2^{s}} \rightarrow 0\;\mbox{ as}\;t \rightarrow \infty, }$$

and hence condition (V) is satisfied. It is clear that \(p(t,s) = 2^{s-t}\). Next we verify that condition (II) is satisfied.

$$\displaystyle\begin{array}{rcl} & & \mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}\big\vert \prod _{ u=s+1}^{t-1}a(u)\big\vert \sum _{\tau =0}^{s-1}p(s,\tau ) {}\\ & =& \mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}\big\vert \prod _{ u=s+1}^{t-1}2^{-u}\big\vert \sum _{\tau =0}^{s-1}2^{\tau -s} {}\\ & =& \mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}\big\vert \prod _{ u=s+1}^{t-1}2^{-u}\big\vert (1 - 2^{-s}) {}\\ & \leq & \mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}2^{1-t}(1 - 2^{-s}) {}\\ & \leq & 2^{1-t}[-2^{1-t} + 2 + \frac{4^{1-t}} {3} - 4/3] {}\\ & \leq & 2/3,\;\text{for all}\;t \in \mathbb{Z}^{+}. {}\\ \end{array}$$

Hence (II) is satisfied. It remains to show that (IV) is satisfied. Let t1 ≥ 0 be given. Then

$$\displaystyle\begin{array}{rcl} \sum _{s=-\infty }^{t_{1}-1}p(t,s)& =& \sum _{ s=-\infty }^{t_{1}-1}2^{s-t} {}\\ & =& 2^{t_{1}-t} \leq 2^{t_{1}-t_{2} } {}\\ & =& 2^{t_{1}-t_{2} }\sum _{s=-\infty }^{t-1}2^{s-t} {}\\ & \leq & \epsilon \sum _{s=-\infty }^{t-1}p(t,s),\;\mbox{ for}\;t \geq t_{ 2},\;\mbox{ provided}\;2^{t_{1}-t_{2} } \leq \epsilon. {}\\ \end{array}$$

Thus all the conditions of Theorem 3.8.2 are satisfied and the zero solution of (3.8.4) is asymptotically stable.

3.9 Volterra Summation Equations

We shift our focus to different types of Volterra difference equations, which we call Volterra summation equations. Volterra integral equations were first studied by Miller [123], who proposed extending the use of Lyapunov functionals from Volterra integro-differential equations to integral equations. Years later, Burton took upon himself such a tedious task and successfully used Lyapunov functionals in the qualitative analysis of integral equations. For such references we mention the papers [27, 29, 30]. Since then, the study of integral equations has been fully developed, unlike its counterpart, Volterra summation equations. All the results of this section are new and not published anywhere else. Volterra summation equations play a major role in the qualitative analysis of neutral difference equations. To see this we consider the neutral difference equation

$$\displaystyle{ \bigtriangleup \big(D(n,x_{n})\big) = f(n,x_{n}),\;\;n \in \mathbb{Z}^{+}. }$$
(3.9.1)

If

$$\displaystyle{ D(n,0) = f(n,0) = 0, }$$

then one would have to ask that the zero solution of D(n, xn) = 0 be stable in order for the zero solution of (3.9.1) to be stable. On the other hand, if we are interested in studying boundedness of solutions of (3.9.1), one would have to require that the solutions of

$$\displaystyle{ D(n,x_{n}) = h(n), }$$

be bounded for a suitable function h(n) with suitable conditions. Equation (3.9.1) is typified by neutral equations of the form

$$\displaystyle{ \bigtriangleup \Big(x(n) +\sum _{ s=n-h}^{n-1}b(s + h)x(s)\Big) = f(n,x_{ n}). }$$
(3.9.2)

Equation (3.9.2) will be studied in detail in Chapter 6. For more on neutral difference equations, we refer to [183]. Most of this section’s materials can be found in [142]. We consider the vector Volterra summation equation

$$\displaystyle{ x(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)x(s),\;t \in \mathbb{Z}^{+} }$$
(3.9.3)

where x and a are k-vectors, k ≥ 1, while C is a k × k matrix. To clear any confusion, we note that the summation term in (3.9.3) could have been started at any initial time t0 ≥ 0. We will use the resolvent equation that was established on time scales in [2], combined with Lyapunov functionals and fixed point theory, to obtain boundedness of solutions and their asymptotic behaviors. One of the major difficulties when using a suitable Lyapunov functional on a Volterra summation equation is relating the solution back to that Lyapunov functional. For \(x \in \mathbb{R}^{k}\), | x | denotes the Euclidean norm of x. For any k × k matrix A, define the norm of A by | A | = sup{ | Ax |: | x | ≤ 1}. Let X denote the set of bounded sequences \(\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\) with the norm ∥ϕ∥ = sup{ | ϕ(s) |: \(s \in \mathbb{Z}^{+}\)}. We have the following theorem regarding the existence of solutions of (3.9.3).

Theorem 3.9.1.

Assume the existence of two positive constants K and α ∈ (0, 1) such that

$$\displaystyle{ \vert a(t)\vert \leq K,\;\mathit{\mbox{ and}}\;\max _{t\geq 0}\sum _{s=0}^{t-1}\vert C(t,s)\vert \leq \alpha, }$$
(3.9.4)

then there is a unique bounded solution of  (3.9.3).

Proof.

Define a mapping \(D: X \rightarrow X\) by

$$\displaystyle{ (D\phi )(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)\phi (s). }$$

It is clear that (X, ∥⋅ ∥) is a Banach space. Now for \(\phi \in X\) with ∥ϕ∥ ≤ q for some positive constant q, we have that

$$\displaystyle{ \|(D\phi )\| \leq K +\alpha q. }$$

Thus \(D: X \rightarrow X\). It remains to show that D defines a contraction mapping on X. Let \(\phi,\varphi \in X\). Then

$$\displaystyle\begin{array}{rcl} \|(D\phi ) - (D\varphi )\|& \leq & \max _{t\geq 0}\sum _{s=0}^{t-1}\vert C(t,s)\vert \|\phi -\varphi \| {}\\ &\leq & \alpha \|\phi -\varphi \|. {}\\ \end{array}$$

Hence, D is a contraction, and by the contraction mapping principle it has a unique fixed point in X; that fixed point is the unique bounded solution of (3.9.3). This completes the proof.
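Because the sum in (3.9.3) involves only x(0), …, x(t − 1), the bounded solution can also be computed by direct forward recursion, which makes the contraction estimate \(\|x\| \leq K/(1-\alpha )\) easy to check numerically. The following sketch uses hypothetical scalar data, a(t) = cos t and C(t, s) = 0.4 · 2^{s−t}, chosen so that (3.9.4) holds with K = 1 and α = 0.4.

```python
import math

def solve_volterra(a, C, T):
    """Forward recursion for x(t) = a(t) - sum_{s=0}^{t-1} C(t, s) x(s)."""
    x = []
    for t in range(T):
        x.append(a(t) - sum(C(t, s) * x[s] for s in range(t)))
    return x

# Hypothetical scalar data satisfying (3.9.4): |a(t)| <= K = 1 and
# max_t sum_{s=0}^{t-1} |C(t, s)| <= alpha = 0.4 < 1.
a = lambda t: math.cos(t)
C = lambda t, s: 0.4 * 2.0 ** (s - t)

K, alpha = 1.0, 0.4
x = solve_volterra(a, C, 300)

# The contraction estimate in the proof gives the uniform bound K/(1 - alpha).
assert max(abs(v) for v in x) <= K / (1 - alpha)
```

The bound follows by induction from \(\|D\phi \| \leq K +\alpha \|\phi \|\), exactly as in the proof above.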

We have the following theorem in which we use a Lyapunov functional to drive solutions to zero.

Theorem 3.9.2.

Assume the existence of two positive constants K1 and α ∈ (0, 1) such that

$$\displaystyle{ \sum _{t=0}^{\infty }\vert a(t)\vert \leq K_{ 1},\;\mathit{\mbox{ and}}\;\sum _{u=1}^{\infty }\vert C(u + t,t)\vert \leq \alpha, }$$
(3.9.5)

then every solution x(t) of  (3.9.3) satisfies \(x \in l^{1}[0,\infty )\) and x(t) → 0 as \(t \rightarrow \infty \). 

Proof.

Using (3.9.3) we obtain

$$\displaystyle{ \vert x(t)\vert -\vert a(t)\vert \leq \sum _{s=0}^{t-1}\vert C(t,s)\vert \vert x(s)\vert. }$$
(3.9.6)

Define the Lyapunov functional V by

$$\displaystyle{ V (t) =\sum _{ s=0}^{t-1}\sum _{ u=t-s}^{\infty }\vert C(u + s,s)\vert \vert x(s)\vert. }$$

A direct computation yields

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t) = \vert x(t)\vert \sum _{u=1}^{\infty }\vert C(u + t,t)\vert -\sum _{ s=0}^{t-1}\vert C(t,s)\vert \vert x(s)\vert.& & {}\\ \end{array}$$

A substitution of (3.9.6) in the above expression yields

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& \leq & \vert x(t)\vert \sum _{u=1}^{\infty }\vert C(u + t,t)\vert -\vert x(t)\vert + \vert a(t)\vert {}\\ & \leq & (\alpha -1)\vert x(t)\vert + \vert a(t)\vert. {}\\ \end{array}$$

Summing the above inequality from 0 to t − 1 gives

$$\displaystyle{ 0 \leq V (t) \leq V (0) + (\alpha -1)\sum _{s=0}^{t-1}\vert x(s)\vert +\sum _{ s=0}^{t-1}\vert a(s)\vert. }$$

Since V (0) = 0 and α ∈ (0, 1) we arrive at

$$\displaystyle{ \sum _{s=0}^{t-1}\vert x(s)\vert \leq \frac{1} {1-\alpha }\sum _{s=0}^{t-1}\vert a(s)\vert. }$$

Letting \(t \rightarrow \infty \) we have

$$\displaystyle{ \sum _{t=0}^{\infty }\vert x(t)\vert \leq \frac{1} {1-\alpha }\sum _{t=0}^{\infty }\vert a(t)\vert \leq \frac{1} {1-\alpha }K_{1}, }$$

which implies that x(t) → 0 as \(t \rightarrow \infty \). This completes the proof.
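The \(l^{1}\) estimate at the end of the proof can be checked numerically on hypothetical data satisfying (3.9.5): here a(t) = 2^{−t}, so that \(\sum \vert a(t)\vert = K_{1} = 2\), and C(t, s) = 0.3 · 2^{s−t}, so that \(\sum _{u=1}^{\infty }\vert C(u + t,t)\vert = 0.3 =\alpha \).

```python
def solve_volterra(a, C, T):
    # forward recursion for x(t) = a(t) - sum_{s=0}^{t-1} C(t, s) x(s)
    x = []
    for t in range(T):
        x.append(a(t) - sum(C(t, s) * x[s] for s in range(t)))
    return x

# Hypothetical data for (3.9.5): sum_t |a(t)| = 2 = K1 and
# sum_{u=1}^inf |C(u + t, t)| = 0.3 * sum_{u>=1} 2^(-u) = 0.3 = alpha.
a = lambda t: 2.0 ** (-t)
C = lambda t, s: 0.3 * 2.0 ** (s - t)

K1, alpha = 2.0, 0.3
x = solve_volterra(a, C, 200)

# l^1 bound from the proof, and the resulting decay of x(t).
assert sum(abs(v) for v in x) <= K1 / (1 - alpha)
assert abs(x[-1]) < 1e-6
```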

We note that the use of a Lyapunov functional has an advantage here: our original equation (3.9.3) has no linear term, and such a term is necessary for the use of variation of parameters. The next result is about the existence of a unique periodic solution for the Volterra summation equation

$$\displaystyle{ x(t) = a(t) -\sum _{s=-\infty }^{t-1}C(t,s)x(s),\;t \in \mathbb{Z} }$$
(3.9.7)

where x and a are k-vectors, k ≥ 1, while C is a k × k matrix.

Theorem 3.9.3.

Assume the existence of a constant α ∈ (0, 1) such that

$$\displaystyle{ \max _{t\geq 0}\sum _{s=-\infty }^{t-1}\vert C(t,s)\vert \leq \alpha, }$$

and there is a positive integer T such that

$$\displaystyle{ a(t + T) = a(t)\;and\;C(t + T,s + T) = C(t,s), }$$

then there is a unique periodic solution of  (3.9.7).

Proof.

Let X be the space of periodic sequences of period T. Then, it is clear that (X, ∥⋅ ∥) is a Banach space. Now for \(\phi \in X\), we define \(D: X \rightarrow X\) by

$$\displaystyle{ (D\phi )(t) = a(t) -\sum _{s=-\infty }^{t-1}C(t,s)\phi (s). }$$

It is clear that is periodic of period T; that is, \((D\phi )(t + T) = (D\phi )(t).\) It remains to show that D defines a contraction mapping on X. Let \(\phi,\varphi \in X\). Then

$$\displaystyle\begin{array}{rcl} \|(D\phi ) - (D\varphi )\|& \leq & \max _{t\in \mathbb{Z}}\sum _{s=-\infty }^{t-1}\vert C(t,s)\vert \|\phi -\varphi \| {}\\ &\leq & \alpha \|\phi -\varphi \|. {}\\ \end{array}$$

Hence, D is a contraction, and by the contraction mapping principle it has a unique fixed point in X; that fixed point is the unique T-periodic solution of (3.9.7). This completes the proof.
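Picard iteration of the contraction D also gives a practical way to compute the periodic solution. The sketch below uses hypothetical data: period T = 6, forcing a(t) = sin(2πt∕T), and the convolution-type kernel C(t, s) = β · 2^{s−t} with β = 0.5, which satisfies both hypotheses of Theorem 3.9.3; the infinite sum is truncated where 2^{s−t} is negligible.

```python
import math

T = 6                                        # period
beta = 0.5                                   # alpha = sum_{s<t} |C(t, s)| = beta < 1
a = lambda t: math.sin(2 * math.pi * t / T)  # hypothetical T-periodic forcing
C = lambda t, s: beta * 2.0 ** (s - t)       # satisfies C(t + T, s + T) = C(t, s)

def D(phi, cutoff=80):
    """(D phi)(t) = a(t) - sum_{s=-inf}^{t-1} C(t, s) phi(s) on one period,
    with the tail truncated at s = t - cutoff, where 2^(s-t) is negligible."""
    return [a(t) - sum(C(t, s) * phi[s % T] for s in range(t - cutoff, t))
            for t in range(T)]

phi = [0.0] * T
for _ in range(60):                          # Picard iterates converge at rate beta
    phi = D(phi)

# phi is (numerically) a fixed point of D, i.e., the periodic solution of (3.9.7).
err = max(abs(u - v) for u, v in zip(phi, D(phi)))
assert err < 1e-9
```

Note that `phi[s % T]` encodes the T-periodic extension of the one-period table, which is how the space X enters the computation.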

In the next theorem we return to (3.9.3) and rewrite it so we can show it has an asymptotically periodic solution. Thus we rewrite (3.9.3) in the form

$$\displaystyle{ x(t) = a(t) -\sum _{s=-\infty }^{t-1}C(t,s)x(s) +\sum _{ s=-\infty }^{-1}C(t,s)x(s). }$$
(3.9.8)

Note that the term \(a(t) -\sum _{s=-\infty }^{t-1}C(t,s)x(s)\) produced a unique periodic solution, as we have seen in Theorem 3.9.3, and this indicates that for any bounded x the term \(\sum _{s=-\infty }^{-1}C(t,s)x(s) \rightarrow 0\) as \(t \rightarrow \infty \). Hence it is natural to expect a solution x of (3.9.8) to be written as x = y + z, where y is periodic and z → 0 as \(t \rightarrow \infty \). We need to properly define our spaces. Let

$$\displaystyle{ P_{T} =\{\phi: \mathbb{Z} \rightarrow \mathbb{R}^{k}\mid \phi (t + T) =\phi (t)\} }$$

and

$$\displaystyle{ Q =\{ q: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\mid q(t) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty \}. }$$

We have the following theorem.

Theorem 3.9.4.

Suppose for \(\phi \in P_{T}\), 

$$\displaystyle{ \sum _{s=-\infty }^{-1}C(t,s)\phi (s) \rightarrow 0,\;\mathit{\mbox{ as}}\;t \rightarrow \infty }$$

and for each \(z \in Q\), 

$$\displaystyle{ \sum _{s=0}^{t-1}C(t,s)z(s) \rightarrow 0,\;\mathit{\mbox{ as}}\;t \rightarrow \infty. }$$

Assume the existence of a constant α ∈ (0, 1) such that

$$\displaystyle{ \max _{t\geq 0}\sum _{s=0}^{t-1}\vert C(t,s)\vert \leq \alpha, }$$

and there is a positive integer T such that a(t + T) = a(t) and C(t + T, s + T) = C(t, s). Then  (3.9.3) has a solution \(x(t) = y(t) + z(t)\) where \(y \in P_{T}\) and \(z \in Q\). 

Proof.

Let X be the space of sequences \(\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\) such that \(\phi \in X\) implies there are \(y \in P_{T}\) and \(z \in Q\) with ϕ = y + z. We claim that \(\big(X,\|\cdot \|\big)\) is a Banach space, where ∥⋅ ∥ is the maximum norm. To see this, we let {yn + zn} be a Cauchy sequence in \(\big(X,\|\cdot \|\big).\) Given an ɛ > 0 there is an N such that for \(n,m \geq N\) we have

$$\displaystyle{ \vert y_{n}(t) + z_{n}(t) - y_{m}(t) - z_{m}(t)\vert <\frac{\varepsilon } {2}. }$$

Since \(z \in Q\), for each ɛ > 0 and each \(z \in Q\) there is an L > 0 such that \(t \geq L\) implies that | z(t) | ≤ ɛ∕4. Fix \(n,m \geq N\) and for this ɛ∕4 find L > 0 such that \(t \geq L\) implies that

$$\displaystyle{ \vert y_{n}(t) - y_{m}(t)\vert -\vert z_{n}(t)\vert -\vert z_{m}(t)\vert \leq \vert y_{n}(t) + z_{n}(t) - y_{m}(t) - z_{m}(t)\vert <\frac{\varepsilon } {2} }$$

so that \(t \geq L\) implies that

$$\displaystyle{ \vert y_{n}(t) - y_{m}(t)\vert \leq \frac{\varepsilon } {2} + \vert z_{n}(t)\vert + \vert z_{m}(t)\vert <\varepsilon, }$$

for all t, since \(y_{n},y_{m} \in P_{T}\). Since this holds for every pair with \(n,m \geq N\), it follows that {yn} is a Cauchy sequence; the same reasoning shows that {zn} is a Cauchy sequence. This completes the proof of the claim. Let ϕ = y + z where \(y \in P_{T}\) and \(z \in Q\). Define the mapping \(H: X \rightarrow X\) by

$$\displaystyle{ (H\phi )(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)\phi (s). }$$

Then H is a contraction mapping, by the estimate in the proof of Theorem 3.9.1. Now we observe that

$$\displaystyle\begin{array}{rcl} (H\phi )(t)& =& a(t) -\sum _{s=0}^{t-1}C(t,s)\phi (s) {}\\ & =& a(t) -\sum _{s=0}^{t-1}C(t,s)(y(s) + z(s)) {}\\ & =& \Big[a(t) -\sum _{s=-\infty }^{t-1}C(t,s)y(s)\Big] {}\\ & +& \Big[\sum _{s=-\infty }^{-1}C(t,s)y(s) -\sum _{ s=0}^{t-1}C(t,s)z(s)\Big] {}\\ &:=& B\phi + A\phi. {}\\ \end{array}$$

This defines mappings A and B on X. Note that \(B: X \rightarrow P_{T} \subset X\) and \(A: X \rightarrow Q \subset X.\) Since H defines a contraction mapping, it has a unique fixed point in X that solves (3.9.3). The proof is complete.

Next we discuss the resolvent equation for the Volterra summation equation (3.9.3) and use variation of parameters to obtain a variety of results concerning its solutions. Adivar and Raffoul [2] were the first to establish the existence of the resolvent of an equation that is similar to (3.9.3) on time scales. Due to the importance of such results, we will state them as they were presented in [2] on time scales. We then set the time scale equal to \(\mathbb{Z}\) to suit our equation (3.9.3). For a good reference on time scales we refer the reader to the famous book by Martin Bohner and Al Peterson [18]. A time scale, denoted \(\mathbb{T}\), is a nonempty closed subset of the real numbers. The set \(\mathbb{T}^{\kappa }\) is derived from the time scale \(\mathbb{T}\) as follows: if \(\mathbb{T}\) has a left-scattered maximum M, then \(\mathbb{T}^{\kappa } = \mathbb{T}-\left \{M\right \}\); otherwise \(\mathbb{T}^{\kappa } = \mathbb{T}\). The delta derivative f Δ of a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is defined at a point \(t \in \mathbb{T}^{\kappa }\) by

$$\displaystyle{ f^{\varDelta }(t):=\lim _{s\rightarrow t,\;s\in \mathbb{T}\setminus \left \{\sigma (t)\right \}}\frac{f(\sigma (t)) - f(s)} {\sigma (t) - s}. }$$
(3.9.9)

In (3.9.9), \(\sigma: \mathbb{T} \rightarrow \mathbb{T}\) is the forward jump operator defined by σ(t): = \(\inf \left \{s\! \in \! \mathbb{T}: s> t\right \}\). Hereafter, we denote by μ(t) the step size function \(\mu: \mathbb{T} \rightarrow \mathbb{R}\) defined by μ(t): = σ(t) − t. A point \(t \in \mathbb{T}\) is said to be right dense (right scattered) if μ(t) = 0 (μ(t) > 0). A point is said to be left dense if \(\sup \left \{s \in \mathbb{T}: s <t\right \} = t\). We note that when the time scale is the set of integers, \(\mathbb{T} = \mathbb{Z}\), then σ(t) = t + 1 and μ(t) = 1. Based on the results of [2], we have the following. Let \(t_{0} \in \mathbb{T}^{\kappa }\) be fixed and let the functions \(a: I_{\mathbb{T}} \rightarrow \mathbb{R}\) and \(C: I_{\mathbb{T}} \times I_{\mathbb{T}} \rightarrow \mathbb{R}\) be given. Given a linear system of integral equations of the form

$$\displaystyle{ x(t) = a(t) -\int _{t_{0}}^{t}C(t,s)x(s)\varDelta s,\ \ t_{ 0} \in \mathbb{T}^{\kappa } }$$
(3.9.10)

the corresponding resolvent equation associated with C(t, s) is given by

$$\displaystyle{ R(t,s) = C(t,s) -\int _{\sigma (s)}^{t}R(t,u)C(u,s)\varDelta u. }$$
(3.9.11)

If C is scalar valued, then so is R. If C is n × n matrix, then so is R. Moreover, the solution of (3.9.10) in terms of R is given by the variation of parameters formula

$$\displaystyle{ x(t) = a(t) -\int _{t_{0}}^{t}R(t,u)a(u)\varDelta u. }$$
(3.9.12)

It should cause no difficulties to take the initial time t0 = 0. With this in mind, if we set \(\mathbb{T} = \mathbb{Z},\) then equations (3.9.10) and (3.9.11) become

$$\displaystyle{ x(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)x(s), }$$
(3.9.13)

and

$$\displaystyle{ R(t,s) = C(t,s) -\sum _{u=s+1}^{t-1}R(t,u)C(u,s), }$$
(3.9.14)

respectively. If C is scalar valued, then so is R. If C is n × n matrix, then so is R. Moreover, the solution of (3.9.13) in terms of R is given by

$$\displaystyle{ x(t) = a(t) -\sum _{s=0}^{t-1}R(t,s)a(s). }$$
(3.9.15)
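The resolvent recursion (3.9.14) can be solved backwards in s for each fixed t, and the variation of parameters formula (3.9.15) can then be checked against the forward recursion for (3.9.13). The scalar data below are hypothetical illustrations.

```python
def solve_direct(a, C, T):
    # forward recursion for (3.9.13): x(t) = a(t) - sum_{s=0}^{t-1} C(t, s) x(s)
    x = []
    for t in range(T):
        x.append(a(t) - sum(C(t, s) * x[s] for s in range(t)))
    return x

def resolvent_row(C, t):
    """R(t, s) for s = 0, ..., t-1 from (3.9.14):
    R(t, s) = C(t, s) - sum_{u=s+1}^{t-1} R(t, u) C(u, s), solved backwards in s."""
    R = {}
    for s in range(t - 1, -1, -1):
        R[s] = C(t, s) - sum(R[u] * C(u, s) for u in range(s + 1, t))
    return R

# Hypothetical scalar data.
a = lambda t: 1.0 / (1 + t)
C = lambda t, s: 0.4 * 2.0 ** (s - t)

x = solve_direct(a, C, 40)
for t in (0, 7, 25, 39):
    R = resolvent_row(C, t)
    # variation of parameters formula (3.9.15)
    assert abs(a(t) - sum(R[s] * a(s) for s in range(t)) - x[t]) < 1e-10
```

The agreement is exact up to rounding, since (3.9.15) with R from (3.9.14) is an identity, not an approximation.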

For the remainder of this section we denote the vector space of bounded sequences \(\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\) by \(\mathcal{BC}.\) The next theorem is an extension of Perron’s theorem for integral equations over the reals to an arbitrary time scale. Its proof can be found in [2]. Only for the next theorem we define \(\mathcal{B}\) to be the space of bounded continuous functions on \(I_{\mathbb{T}} = [t_{0},\infty )_{\mathbb{T}}\) endowed with the supremum norm \(\vert \vert \cdot \vert \vert _{\mathcal{B}}\), given by

$$\displaystyle{ \vert \vert f\vert \vert _{\mathcal{B}}:=\sup _{t\in I_{\mathbb{T}}}\vert f(t)\vert. }$$

One can easily see that \((\mathcal{B},\vert \vert \cdot \vert \vert _{\mathcal{B}})\) is a Banach space.

Theorem 3.9.5 ([2]).

Let \(C: I_{\mathbb{T}} \times I_{\mathbb{T}} \rightarrow \mathbb{R}\) be a continuous real valued function on \(t_{0} \leq s \leq t <\infty \). If \(\int _{t_{0}}^{t}R(t,s)f(s)\varDelta s \in \mathcal{ BC}\) for each \(f \in \mathcal{ BC}\) , then there exists a positive constant K such that \(\int _{t_{0}}^{t}\vert R(t,s)\vert \;\varDelta s <K,\) for all \(t \in I_{\mathbb{T}}.\)

Just for the record, we restate Theorem 3.9.5 for \(\mathbb{T} = \mathbb{Z}\).

Theorem 3.9.6.

Let \(C: \mathbb{Z}^{+} \times \mathbb{Z}^{+} \rightarrow \mathbb{R}\) be a real valued sequence on \(0 \leq t_{0} \leq s \leq t <\infty \). If \(\sum _{s=0}^{t-1}R(t,s)f(s) \in \mathcal{ BC}\) for each \(f \in \mathcal{ BC}\) , then there exists a positive constant K such that ∑s = 0 t−1 | R(t, s) | < K, for all \(t \in \mathbb{Z}^{+}.\)

Theorem 3.9.7.

Suppose R(t, s) satisfies  (3.9.14) and that \(a \in \mathcal{ BC}\) . Then every solution x(t) of  (3.9.13) is bounded if and only if

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert R(t,s)\vert <\infty }$$
(3.9.16)

holds.

Proof.

Suppose (3.9.16) holds. Then, using (3.9.15), it is trivial to show that x(t) is bounded. If x(t) and a(t) are bounded, then from (3.9.15), we have

$$\displaystyle{ \sum _{s=0}^{t-1}\vert R(t,s)\vert \vert a(s)\vert \leq \gamma }$$

for some positive constant γ and the proof follows from Theorem 3.9.6.

The intuitive idea here is that when C(t, s) is well behaved, the solution of (3.9.13) follows a(t). 

Theorem 3.9.8.

Let C be a k × k matrix. Assume the existence of a constant α ∈ (0, 1) such that

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert C(t,s)\vert \leq \alpha, }$$
(3.9.17)
  1. (i)

    If \(a \in \mathcal{ BC}\) , then so is the solution x of  (3.9.13); hence,  (3.9.16) holds.

  2. (ii)

    Suppose, in addition, that for each T > 0 we have \(\sum\limits _{s=0}^{T}\vert C(t,s)\vert \rightarrow 0\) as \(t \rightarrow \infty \). If \(a(t) \rightarrow 0\) as \(t \rightarrow \infty \), then so do x(t) and \(\sum \limits_{s=0}^{t-1}R(t,s)a(s).\)

  3. (iii)

    \(\sum _{s=0}^{t-1}\vert R(t,s)\vert \leq \frac{\alpha } {1-\alpha }.\)

Proof.

The proof of (i) is the same as the proof of Theorem 3.9.1. For the proof of (ii) we define the set

$$\displaystyle{ M =\{\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\mid \vert \phi (t)\vert \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty \}. }$$

For ϕM, define the mapping Q by

$$\displaystyle{ \big(Q\phi \big)(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)\phi (s). }$$

Then

$$\displaystyle{ \big\vert \big(Q\phi \big)(t)\big\vert \leq \vert a(t)\vert +\sum _{ s=0}^{t-1}\vert C(t,s)\phi (s)\vert. }$$

We already know that a(t) → 0 as \(t \rightarrow \infty \). Given an ɛ > 0 and \(\phi \in M\), find T such that | ϕ(t) | < ɛ if \(t \geq T\), and find d with | ϕ(t) | ≤ d for all t. For this fixed T, find η > T such that \(t \geq \eta \) implies that \(\sum _{s=0}^{T-1}\vert C(t,s)\vert \leq \frac{\varepsilon } {d}.\) Then \(t \geq \eta \) implies that

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{t-1}\vert C(t,s)\vert \vert \phi (s)\vert & \leq & d\sum _{ s=0}^{T-1}\vert C(t,s)\vert +\varepsilon \sum _{ s=T}^{t-1}\vert C(t,s)\vert {}\\ &\leq & d(\varepsilon /d)+\alpha \varepsilon <2\varepsilon. {}\\ \end{array}$$

Thus, \(Q: M \rightarrow M\), and the fixed point satisfies x(t) → 0 as \(t \rightarrow \infty \), for every vector sequence \(a \in M\). Using (3.9.15) we have

$$\displaystyle{ \sum _{s=0}^{t-1}R(t,s)a(s) = a(t) - x(t) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty. }$$

This completes the proof of (ii). Using (3.9.14) and (3.9.17) we have by changing of order of summations

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{t-1}\vert R(t,s)\vert & \leq & \sum _{ s=0}^{t-1}\vert C(t,s)\vert +\sum _{ s=0}^{t-1}\sum _{ u=s+1}^{t-1}\vert R(t,u)\vert \vert C(u,s)\vert {}\\ & =& \sum _{s=0}^{t-1}\vert C(t,s)\vert +\sum _{ u=0}^{t-1}\vert R(t,u)\vert \sum _{ s=0}^{u-1}\vert C(u,s)\vert {}\\ &\leq & \alpha +\alpha \sum _{u=0}^{t-1}\vert R(t,u)\vert. {}\\ \end{array}$$

Therefore,

$$\displaystyle{ (1-\alpha )\sum _{s=0}^{t-1}\vert R(t,s)\vert \leq \alpha. }$$
(3.9.18)

That is,

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert R(t,s)\vert \leq \frac{\alpha } {1-\alpha }. }$$
(3.9.19)
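Bound (3.9.19) can be checked numerically with the same backward recursion for R(t, s); the kernel below is a hypothetical scalar choice satisfying (3.9.17) with α = 0.4.

```python
def resolvent_row(C, t):
    # backward recursion for (3.9.14): R(t,s) = C(t,s) - sum_{u=s+1}^{t-1} R(t,u) C(u,s)
    R = {}
    for s in range(t - 1, -1, -1):
        R[s] = C(t, s) - sum(R[u] * C(u, s) for u in range(s + 1, t))
    return R

alpha = 0.4
C = lambda t, s: alpha * 2.0 ** (s - t)   # sum_{s=0}^{t-1} |C(t, s)| < alpha

# estimate (3.9.19): sum_s |R(t, s)| <= alpha / (1 - alpha)
for t in (5, 20, 60):
    R = resolvent_row(C, t)
    assert sum(abs(R[s]) for s in range(t)) <= alpha / (1 - alpha)
```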

Example 3.10.

Suppose there is a sequence \(r: \mathbb{Z}^{+} \rightarrow (0,1]\) with \(r(t) \rightarrow 0\) as \(t \rightarrow \infty \), such that

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert C(t,s)\vert (r(s)/r(t)) \leq \alpha,\;\alpha \in (0,1) }$$
(3.9.20)

and

$$\displaystyle{ \vert a(t)\vert \leq kr(t) }$$
(3.9.21)

for some positive constant k. Then the unique solution x(t) of (3.9.13) is bounded and goes to zero as t approaches infinity. Moreover, \(\sum _{s=0}^{t-1}R(t,s)a(s) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty.\)

Proof.

Let

$$\displaystyle{ \mathcal{M} =\{\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\mid \vert \phi \vert _{ r} =\max _{t\in \mathbb{Z}^{+}} \frac{\vert \phi (t)\vert } {r(t)} <\infty \}. }$$

Then \(\big(\mathcal{M},\vert \cdot \vert _{r}\big)\) is a Banach space . For \(\phi \in \mathcal{ M},\) define the mapping Q by

$$\displaystyle{ \big(Q\phi \big)(t) = a(t) -\sum _{s=0}^{t-1}C(t,s)\phi (s). }$$

Then,

$$\displaystyle\begin{array}{rcl} \big\vert \big(Q\phi \big)(t)\big\vert /r(t)& \leq & \vert a(t)\vert /r(t) +\sum _{ s=0}^{t-1}\vert C(t,s)\vert (r(s)/r(t))\vert \phi (s)\vert /r(s) {}\\ & \leq & k + \vert \phi \vert _{r}\sum _{s=0}^{t-1}\vert C(t,s)\vert (r(s)/r(t)) {}\\ & \leq & k +\alpha \vert \phi \vert _{r}, {}\\ \end{array}$$

which shows that \(Q\phi \in \mathcal{ M}.\) Let \(\phi,\eta \in \mathcal{ M}.\) Then we readily have that

$$\displaystyle{ \Big\vert \big(Q\phi \big)(t)\big) -\big (Q\eta \big)(t)\Big\vert /r(t) \leq \alpha \vert \phi -\eta \vert _{r} }$$

and so Q is a contraction on \(\mathcal{M}\); therefore it has a unique fixed point x(t) in \(\mathcal{M}\) that solves (3.9.13). Moreover, \(\max _{t\in \mathbb{Z}^{+}} \frac{\vert x(t)\vert } {r(t)} <\infty \) implies that \(\vert x(t)\vert \leq \vert x\vert _{r}\,r(t) \rightarrow 0\) as \(t \rightarrow \infty \). Also, by (3.9.21) we have | a(t) | → 0 as \(t \rightarrow \infty \), and hence using (3.9.15) we have

$$\displaystyle{ \Big\vert \sum _{s=0}^{t-1}R(t,s)a(s)\Big\vert \leq \vert a(t)\vert + \vert x(t)\vert \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty. }$$

This completes the proof.
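The weighted-norm argument can be illustrated numerically. The data below are hypothetical: r(t) = 0.9^t and C(t, s) = 0.4 · 2^{s−t} give \(\sum _{s=0}^{t-1}\vert C(t,s)\vert (r(s)/r(t)) \leq 0.4\sum _{j\geq 1}1.8^{-j} = 0.5 =\alpha \), and a(t) = r(t) gives k = 1 in (3.9.21), so the proof yields | x(t) | ≤ k r(t)∕(1 − α) = 2r(t) → 0.

```python
def solve_direct(a, C, T):
    # forward recursion for (3.9.13)
    x = []
    for t in range(T):
        x.append(a(t) - sum(C(t, s) * x[s] for s in range(t)))
    return x

r = lambda t: 0.9 ** t                   # weight: r maps Z+ into (0, 1], r(t) -> 0
a = lambda t: r(t)                       # so (3.9.21) holds with k = 1
C = lambda t, s: 0.4 * 2.0 ** (s - t)    # weighted condition (3.9.20) holds, alpha = 0.5

k, alpha = 1.0, 0.5
x = solve_direct(a, C, 300)

# |x|_r <= k/(1 - alpha) forces |x(t)| <= 2 r(t) -> 0 as t -> infinity
assert all(abs(x[t]) <= k / (1 - alpha) * r(t) for t in range(300))
assert abs(x[-1]) < 1e-10
```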

The next theorem relates the kernel to a kernel of convolution type. Also, unlike Theorem 3.9.2, we only ask for boundedness of a(t). 

Theorem 3.9.9.

Assume the existence of a constant α ∈ (0, 1) such that

$$\displaystyle{ \sum _{u=1}^{\infty }\vert C(u + t,t)\vert \leq \alpha, }$$
(3.9.22)

and let a(t) be a bounded sequence. Suppose there is a decreasing sequence \(\varPhi: [0,\infty ) \rightarrow (0,\infty )\) with \(\varPhi \in l^{1}[0,\infty )\), and

$$\displaystyle{ \varPhi (t - s) \geq \sum _{u=t-s}^{\infty }\vert C(u + s,s)\vert. }$$
(3.9.23)

If in addition there exists a positive constant K with

$$\displaystyle{ \sum _{u=t-s}^{\infty }\vert C(u + s,s)\vert \geq K\vert C(t,s)\vert, }$$
(3.9.24)

then the unique solution x(t) of  (3.9.13) is bounded and

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert R(t,s)\vert <\infty. }$$

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ V (t) =\sum _{ s=0}^{t-1}\sum _{ u=t-s}^{\infty }\vert C(u + s,s)\vert \vert x(s)\vert. }$$

Then along the solutions of (3.9.13) we have

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t) = \vert x(t)\vert \sum _{u=1}^{\infty }\vert C(u + t,t)\vert -\sum _{ s=0}^{t-1}\vert C(t,s)\vert \vert x(s)\vert.& & {}\\ \end{array}$$

Using (3.9.13) we arrive at

$$\displaystyle{ \vert x(t)\vert -\vert a(t)\vert \leq \sum _{s=0}^{t-1}\vert C(t,s)\vert \vert x(s)\vert. }$$

Hence,

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& \leq & \vert x(t)\vert \sum _{u=1}^{\infty }\vert C(u + t,t)\vert -\vert x(t)\vert + \vert a(t)\vert {}\\ & \leq & (\alpha -1)\vert x(t)\vert + \vert a(t)\vert:= -\delta \vert x(t)\vert + \vert a(t)\vert, {}\\ \end{array}$$

for δ = 1 −α > 0. Replace t with s in the above expression and then multiply both sides by Φ(t − s) for 0 ≤ s ≤ t < ∞:

$$\displaystyle{ \bigtriangleup _{s}V (s)\varPhi (t - s) \leq -\delta \vert x(s)\vert \varPhi (t - s) + \vert a(s)\vert \varPhi (t - s). }$$
(3.9.25)

Suppose there is a t > 0 satisfying

$$\displaystyle{ V (t) =\max _{0\leq s\leq t-1}V (s + 1). }$$

Then summing from 0 to t − 1 followed with summation by parts and by noting that V (0) = 0 we arrive at

$$\displaystyle\begin{array}{rcl} \sum _{s=0}^{t-1}\bigtriangleup _{ s}V (s)\varPhi (t - s)& =& V (s)\varPhi (t - s)\big\vert _{s=0}^{t} -\sum _{ s=0}^{t-1}V (s + 1)\bigtriangleup _{ s}\varPhi (t - s) \\ & \geq & V (t)\varPhi (0) - V (0)\varPhi (t) - V (t)\sum _{s=0}^{t-1}\bigtriangleup _{ s}\varPhi (t - s) \\ & =& V (t)\varPhi (0) - V (t)[\varPhi (0) -\varPhi (t)] \\ & =& V (t)\varPhi (t). {}\end{array}$$
(3.9.26)

Hence (3.9.26) combined with (3.9.25) and making use of (3.9.24) yield

$$\displaystyle\begin{array}{rcl} V (t)\varPhi (t)& \leq & \sum _{s=0}^{t-1}\bigtriangleup _{ s}V (s)\varPhi (t - s) {}\\ & \leq & -\delta \sum _{s=0}^{t-1}\varPhi (t - s)\vert x(s)\vert +\sum _{ s=0}^{t-1}\varPhi (t - s)\vert a(s)\vert {}\\ &\leq & -\delta \sum _{s=0}^{t-1}\sum _{ u=t-s}^{\infty }\vert C(u + s,s)\vert \vert x(s)\vert +\sum _{ s=0}^{t-1}\varPhi (t - s)\vert a(s)\vert {}\\ &\leq & -\delta V (t) + \vert \vert a\vert \vert q, {}\\ \end{array}$$

where ||a|| is the maximum norm of a and q is a positive constant obtained from \(\varPhi \in l^{1}[0,\infty )\). The above inequality implies that

$$\displaystyle{ V (t) \leq \frac{\vert \vert a\vert \vert q} {\varPhi (t)+\delta }, }$$

and V (t) is bounded. Using (3.9.24) in V (t) gives

$$\displaystyle{ V (t) \geq K\sum _{s=0}^{t-1}\vert C(t,s)\vert \vert x(s)\vert \geq K[\vert x(t)\vert -\vert a(t)\vert ], }$$

from which we conclude that x(t) is bounded since both V (t) and a(t) are bounded. Now from (3.9.15) we have that \(\sum _{s=0}^{t-1}\vert R(t,s)a(s)\vert \leq \vert x(t)\vert + \vert a(t)\vert\) and hence \(\sum _{s=0}^{t-1}\vert R(t,s)a(s)\vert\) is bounded, and by Theorem 3.9.6, we have that \(\max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert R(t,s)\vert <\infty.\) This completes the proof.

For Theorem 3.9.10 we assume (3.9.13) is scalar.

Theorem 3.9.10.

Assume the existence of constants α,  β ∈ (0, 1) such that

$$\displaystyle{ \sum _{u=1}^{\infty }\vert C(u + t,t)\vert \leq \alpha, }$$
(3.9.27)

and

$$\displaystyle{ \max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert C(t,s)\vert \leq \beta. }$$
(3.9.28)

If \(a \in l^{2}[0,\infty )\), then so is the solution x of  (3.9.13).

Proof.

Define the Lyapunov functional V by

$$\displaystyle{ V (t) =\sum _{ s=0}^{t-1}\sum _{ u=t-s}^{\infty }\vert C(u + s,s)\vert x^{2}(s). }$$

Then along the solutions of (3.9.13) we have that

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t) = x^{2}(t)\sum _{ u=1}^{\infty }\vert C(u + t,t)\vert -\sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s).& & {}\\ \end{array}$$

Squaring both sides of (3.9.13) gives

$$\displaystyle\begin{array}{rcl} x^{2}(t)& =& a^{2}(t) - 2a(t)\sum _{ s=0}^{t-1}C(t,s)x(s) +\Big (\sum _{ s=0}^{t-1}C(t,s)x(s)\Big)^{2} {}\\ & \leq & 2\Big(a^{2}(t) +\big (\sum _{ s=0}^{t-1}C(t,s)x(s)\big)^{2}\Big) {}\\ & \leq & 2a^{2}(t) + 2\sum _{ s=0}^{t-1}\vert C(t,s)\vert \sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) {}\\ & \leq & 2a^{2}(t) + 2\beta \sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s). {}\\ \end{array}$$

This implies that

$$\displaystyle{ -\sum _{s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) \leq a^{2}(t)/\beta - x^{2}(t)/(2\beta ). }$$

Substituting into △V gives

$$\displaystyle\begin{array}{rcl} \bigtriangleup V (t)& =& x^{2}(t)\sum _{ u=1}^{\infty }\vert C(u + t,t)\vert -\sum _{ s=0}^{t-1}\vert C(t,s)\vert x^{2}(s) {}\\ & \leq & a^{2}(t)/\beta - (1/(2\beta )-\alpha )x^{2}(t). {}\\ \end{array}$$

Summing the above inequality from t = 0 to n − 1 yields

$$\displaystyle{ 0 \leq V (n) - V (0) \leq 1/\beta \sum _{s=0}^{n-1}a^{2}(s) -\big (1/(2\beta )-\alpha \big)\sum _{ s=0}^{n-1}x^{2}(s), }$$

and hence the result follows. This completes the proof.
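As a sanity check on Theorem 3.9.10, the sketch below (with an illustrative kernel and forcing, not taken from the text) uses C(t,s) = 0.1(0.5)^{t−s}, so that α = β = 0.1, and a(t) = 1∕(t + 1) ∈ l², and verifies numerically the bound read off from the last display.

```python
# Illustrative sketch (kernel and forcing invented): with
# sum_{u>=1} |C(u+t,t)| <= alpha and sum_{s<t} |C(t,s)| <= beta, an l^2
# forcing a should produce an l^2 solution of (3.9.13), with the bound
# (1/(2 beta) - alpha) * sum x^2 <= (1/beta) * sum a^2 read off the proof.

T = 2000
C = lambda t, s: 0.1 * 0.5**(t - s)     # here alpha = beta = 0.1
a = [1.0 / (t + 1) for t in range(T)]   # a is in l^2

x = []
for t in range(T):
    # terms with t - s > 60 are below 1e-19 and are safely truncated
    x.append(a[t] - sum(C(t, s) * x[s] for s in range(max(0, t - 60), t)))

alpha = beta = 0.1
lhs = (1 / (2 * beta) - alpha) * sum(v * v for v in x)
rhs = (1 / beta) * sum(v * v for v in a)
print(lhs <= rhs)                       # the l^2 bound of Theorem 3.9.10
```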

3.10 The Need for Large Contraction

So far, we have been successful in using fixed point theorems, including the contraction mapping principle, to obtain various results concerning functional difference equations. It is naive to believe that every map can be defined so that it is a contraction, even under the strictest conditions. For example, consider

$$\displaystyle{ f(x) = x - x^{3} }$$

then for \(x,y \in \mathbb{R}\) with \(x^{2} + y^{2} \leq 1\) we have that

$$\displaystyle{ \vert f(x) - f(y)\vert = \vert x - x^{3} - y + y^{3}\vert \leq \vert x - y\vert \left (1 -\frac{x^{2} + y^{2}} {2} \right ) }$$

and the contraction constant tends to one as x 2 + y 2 → 0. As a consequence, the regular contraction mapping principle fails to produce any results. This forces us to look for other alternatives, namely the concept of Large Contraction. We will restate the contraction mapping principle and Krasnoselskii’s fixed point theorems in which the regular contraction is replaced with Large Contraction. Then, based on the notion of Large Contraction, we introduce two theorems to obtain boundedness and periodicity results in which Large Contraction is substituted for regular contraction.
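The failure is easy to see numerically. The short sketch below evaluates the Lipschitz quotient of f(x) = x − x³ on the symmetric pairs (ε, −ε), for which it equals 1 − ε², and watches it climb toward one.

```python
# The best Lipschitz constant of f(x) = x - x^3 on pairs approaching the
# origin tends to 1, so no single contraction constant delta < 1 works on
# any neighborhood of 0.

f = lambda x: x - x**3

for eps in (0.5, 0.1, 0.01, 0.001):
    ratio = abs(f(eps) - f(-eps)) / (2 * eps)   # equals 1 - eps**2 here
    print(ratio)
```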

Definition 3.10.1.

Let \((\mathcal{M},d)\) be a metric space and \(B:\mathcal{ M} \rightarrow \mathcal{ M}.\) The map B is said to be a large contraction if for \(\phi,\varphi \in \mathcal{ M}\) with ϕ ≠ φ we have \(d(B\phi,B\varphi ) <d(\phi,\varphi )\), and if for all ɛ > 0, there exists a δ ∈ (0, 1) such that

$$\displaystyle{ [\phi,\varphi \in \mathcal{ M},d(\phi,\varphi ) \geq \varepsilon ] \Rightarrow d(B\phi,B\varphi ) \leq \delta d(\phi,\varphi ). }$$

The next theorems are alternatives to the regular Contraction Mapping Principle and Krasnoselskii’s fixed point theorem, in which we substitute Large Contraction for regular contraction. The proofs of the two theorems and the statement of Definition 3.10.1 can be found in [24].

Theorem 3.10.1.

Let \((\mathcal{M},\rho )\) be a complete metric space and B be a large contraction. Suppose there are an \(x \in \mathcal{ M}\) and an L > 0 such that ρ(x, B n x) ≤ L for all n ≥ 1. Then B has a unique fixed point in \(\mathcal{M}.\)

Theorem 3.10.2.

Let \(\mathcal{M}\) be a bounded convex nonempty subset of a Banach space \((\mathbb{B},\|\cdot \|).\) Suppose that A and B map \(\mathcal{M}\) into \(\mathbb{B}\) such that

  1. i.

    \(x,y \in \mathcal{ M}\) implies \(Ax + By \in \mathcal{ M}\) ;

  2. ii.

    A is compact and continuous;

  3. iii.

    B is a large contraction mapping.

Then there exists \(z \in \mathcal{ M}\) with z = Az + Bz. 

Next, we consider the completely nonlinear difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t)^{5} + p(t), }$$
(3.10.1)

where \(a,p: \mathbb{Z} \rightarrow \mathbb{R}\). To invert our equation, we create a linear term by letting

$$\displaystyle{ H(x) = -x + x^{5}. }$$
(3.10.2)

It will become clear later on that H(x) is not a contraction and, as a consequence, the Contraction Mapping Principle cannot be used. Instead, we will show that H is a Large Contraction and hence our mapping, to be constructed, will define a Large Contraction. Then we use Theorem 3.10.2 to show that solutions of (3.10.1) are bounded. Substituting (3.10.2) allows us to put (3.10.1) in the form

$$\displaystyle{ x(t + 1) - a(t)x(t) = a(t)H(x(t)) + p(t). }$$
(3.10.3)

Let x(0) = x0, then by the variation of parameters formula, one can easily show that for t ≥ 0, x(t) is a solution of (3.10.3) if and only if

$$\displaystyle{ x(t) = x_{0}\prod _{s=0}^{t-1}a(s)+\sum _{ s=0}^{t-1}\Big(a(s)H(x(s))\prod _{ u=s+1}^{t-1}a(u)\Big)+\sum _{ s=0}^{t-1}\Big(p(s)\prod _{ u=s+1}^{t-1}a(u)\Big). }$$
(3.10.4)

We begin with the following lemma.

Lemma 3.9.

Let ∥⋅ ∥ denote the maximum norm. If

$$\displaystyle{ \mathbb{M} = \left \{\phi: \mathbb{Z} \rightarrow \mathbb{R}\;\vert \;\phi (0) =\phi _{0},\;\mathit{\text{ and }}\left \Vert \phi \right \Vert \leq 5^{-1/4}\right \}, }$$

then the mapping H defined by ( 3.10.2 ) is a large contraction on the set \(\mathbb{M}.\)

Proof.

For any reals a and b we have the following inequalities

$$\displaystyle{ 0 \leq (a + b)^{4} = a^{4} + b^{4} + ab(4a^{2} + 6ab + 4b^{2}), }$$

and

$$\displaystyle{ -ab(a^{2} + ab + b^{2}) \leq \frac{a^{4} + b^{4}} {4} + \frac{a^{2}b^{2}} {2} \leq \frac{a^{4} + b^{4}} {2}. }$$

If \(x,y \in \mathbb{M}\) with xy, then x(t)4 + y(t)4 < 1. Hence, we arrive at

$$\displaystyle\begin{array}{rcl} \vert H(u) - H(v)\vert & \leq & \vert u - v\vert \left \vert 1 -\left (\frac{u^{5} - v^{5}} {u - v} \right )\right \vert \\ & =& \vert u - v\vert \left \{1 - u^{4} - v^{4} - uv(u^{2} + uv + v^{2})\right \} \\ & \leq & \vert u - v\vert \left \{1 -\frac{\left (u^{4} + v^{4}\right )} {2} \right \} \leq \vert u - v\vert, {}\end{array}$$
(3.10.5)

where we use the notations u = x(t) and v = y(t) for brevity. Now, we are ready to show that H is a large contraction on \(\mathbb{M}\). For a given ɛ ∈ (0, 1), suppose \(x,y \in \mathbb{M}\) with ∥x − y∥ ≥ ɛ. For each fixed \(t \in \mathbb{Z}\), there are two cases:

  1. a.
    $$\displaystyle{ \frac{\varepsilon } {2} \leq \vert x(t) - y(t)\vert, }$$

    or

  2. b.
    $$\displaystyle{ \vert x(t) - y(t)\vert \leq \frac{\varepsilon } {2}. }$$

If ɛ∕2 ≤ | x(t) − y(t) | for some \(t \in \mathbb{Z}\), then

$$\displaystyle{ (\varepsilon /2)^{4} \leq \vert x(t) - y(t)\vert ^{4} \leq 8(x(t)^{4} + y(t)^{4}), }$$

or

$$\displaystyle{ x(t)^{4} + y(t)^{4} \geq \frac{\varepsilon ^{4}} {2^{7}}. }$$

For all such t, we get by (3.10.5) that

$$\displaystyle{ \vert H(x(t)) - H(y(t))\vert \leq \vert x(t) - y(t)\vert \left (1 - \frac{\varepsilon ^{4}} {2^{8}}\right ). }$$

On the other hand, if | x(t) − y(t) | ≤ ɛ∕2 for some \(t \in \mathbb{Z}\), then along with (3.10.5) we find

$$\displaystyle{ \vert H(x(t)) - H(y(t))\vert \leq \vert x(t) - y(t)\vert \leq \frac{1} {2}\|x - y\|. }$$

Hence, in both cases we have

$$\displaystyle{ \vert H(x(t)) - H(y(t))\vert \leq \max \left \{1 - \frac{\varepsilon ^{4}} {2^{8}}, \frac{1} {2}\right \}\|x - y\|. }$$

Thus, H is a large contraction on the set \(\mathbb{M}\) with \(\delta =\max \left \{1 -\varepsilon ^{4}/2^{8},1/2\right \} <1.\) The proof is complete.
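A quick numerical spot-check of the key pointwise estimate (3.10.5) can be made by random sampling on the ball ∥x∥ ≤ 5^{−1∕4} (a check, not a proof).

```python
# Random spot-check of the pointwise estimate (3.10.5) in Lemma 3.9 (a
# numerical check, not a proof): on |u|, |v| <= 5**(-1/4) the map
# H(x) = x - x^5 satisfies |H(u) - H(v)| <= |u - v| (1 - (u^4 + v^4)/2).

import random

H = lambda x: x - x**5
R = 5**-0.25
random.seed(0)

ok = all(
    abs(H(u) - H(v)) <= abs(u - v) * (1 - (u**4 + v**4) / 2) + 1e-12
    for u, v in ((random.uniform(-R, R), random.uniform(-R, R))
                 for _ in range(10_000))
)
print(ok)   # the estimate holds on every sampled pair
```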

Remark 3.6.

It is clear from inequality (3.10.5) that as (u 4 + v 4)∕2 → 0, the contraction constant approaches one. Hence, H(x) does not define a contraction mapping, as we claimed before.

For \(\psi \in \mathbb{M}\), we define the map \(B: \mathbb{M} \rightarrow \mathbb{M}\) by

$$\displaystyle{ (B\psi )(t) =\psi _{0}\prod _{s=0}^{t-1}a(s)+\sum _{ s=0}^{t-1}\Big(a(s)H(\psi (s))\prod _{ u=s+1}^{t-1}a(u)\Big)+\sum _{ s=0}^{t-1}\Big(p(s)\prod _{ u=s+1}^{t-1}a(u)\Big). }$$
(3.10.6)

Lemma 3.10.

Assume for all \(t \in \mathbb{Z}\)

$$\displaystyle{ \vert \psi _{0}\vert \Big\vert \prod _{s=0}^{t-1}a(s)\Big\vert +4(5^{-5/4})\sum _{ s=0}^{t-1}\Big\vert \prod _{ u=s}^{t-1}a(u)\Big\vert +\sum _{ s=0}^{t-1}\Big(\Big\vert p(s)\prod _{ u=s+1}^{t-1}a(u)\Big\vert \Big) \leq 5^{-1/4}. }$$
(3.10.7)

If H is a large contraction on \(\mathbb{M}\) , then so is the mapping B. 

Proof.

It is easy to see that

$$\displaystyle{ \vert H(x(t))\vert = \vert x(t) - x(t)^{5}\vert \leq 4(5^{-5/4})\text{ for all}\;x \in \mathbb{M}. }$$

By Lemma 3.9 H is a large contraction on \(\mathbb{M}.\) Hence, for \(x,y \in \mathbb{M}\) with xy, we have ∥HxHy∥ ≤ ∥xy∥. Hence,

$$\displaystyle\begin{array}{rcl} \vert Bx(t) - By(t)\vert & \leq & \sum _{s=0}^{t-1}\vert H(x(s)) - H(y(s))\vert \Big\vert \prod _{ u=s}^{t-1}a(u)\Big\vert {}\\ &\leq & 4(5^{-5/4})\sum _{ s=0}^{t-1}\Big\vert \prod _{ u=s}^{t-1}a(u)\Big\vert \|x - y\| {}\\ & \leq & \|x - y\|. {}\\ \end{array}$$

Taking the maximum norm over the set \([0,\infty )\), we get that ∥Bx − By∥ ≤ ∥x − y∥. Now, from the proof of Lemma 3.9, for a given ɛ ∈ (0, 1), suppose \(x,y \in \mathbb{M}\) with ∥x − y∥ ≥ ɛ. Then \(\delta =\max \left \{1 -\varepsilon ^{4}/2^{8},1/2\right \},\) which implies that 0 < δ < 1. Hence, for all such ɛ > 0 we know that

$$\displaystyle{ [x,y \in \mathbb{M},\|x - y\| \geq \varepsilon ] \Rightarrow \| Hx - Hy\| \leq \delta \| x - y\|. }$$

Therefore, using (3.10.7), one easily verifies that

$$\displaystyle{ \|Bx - By\| \leq \delta \| x - y\|. }$$

The proof is complete.

We arrive at the following theorem in which we prove boundedness.

Theorem 3.10.3.

Assume  (3.10.7). Then  (3.10.3) has a unique solution in \(\mathbb{M}\) which is bounded.

Proof.

\((\mathbb{M},\|\cdot \|)\) is a complete metric space of bounded sequences. For \(\psi \in \mathbb{M}\) we must show that \((B\psi )(t) \in \mathbb{M}.\) From (3.10.6) and the fact that

$$\displaystyle{ \left \vert H(x(t))\right \vert = \vert x(t) - x(t)^{5}\vert \leq 4(5^{-5/4})\;\text{ for all }x \in \mathbb{M}, }$$

we have

$$\displaystyle\begin{array}{rcl} \vert (B\psi )(t)\vert & \leq & \vert \psi _{0}\vert \Big\vert \prod _{s=0}^{t-1}a(s)\Big\vert + 4(5^{-5/4})\sum _{ s=0}^{t-1}\Big\vert \prod _{ u=s}^{t-1}a(u)\Big\vert +\sum _{ s=0}^{t-1}\Big(\Big\vert p(s)\prod _{ u=s+1}^{t-1}a(u)\Big\vert \Big) {}\\ & \leq & 5^{-1/4}. {}\\ \end{array}$$

This shows that \((B\psi )(t) \in \mathbb{M}.\) Lemma 3.10 implies the map B is a large contraction and hence, by Theorem 3.10.1, the map B has a unique fixed point in \(\mathbb{M}\) which is a solution of (3.10.3). This completes the proof.
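To illustrate Theorem 3.10.3, the sketch below iterates (3.10.1) with invented coefficients small enough that a smallness condition in the spirit of (3.10.7) holds, and confirms that the solution never leaves the ball of radius 5^{−1∕4}.

```python
# Illustrative sketch (coefficients invented, small enough that a smallness
# condition like (3.10.7) holds): iterate x(t+1) = a(t) x(t)^5 + p(t) and
# confirm the solution never leaves the ball |x| <= 5**(-1/4) of Lemma 3.9.

import math

R = 5**-0.25                       # radius of M, about 0.6687
a = lambda t: 0.3
p = lambda t: 0.2 * math.cos(t)

x = 0.1                            # x(0) = x_0, chosen in M
trajectory = [x]
for t in range(500):
    x = a(t) * x**5 + p(t)
    trajectory.append(x)

print(max(abs(v) for v in trajectory) <= R)   # the solution stays in M
```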

Next we use Theorem 3.10.2 and prove the existence of a periodic solution of the nonlinear delay difference equation

$$\displaystyle{ x(t + 1) = a(t)x(t)^{5} + G(t,x(t - r)) + p(t),\;t \in \mathbb{Z}, }$$
(3.10.8)

where r is a positive integer and

$$\displaystyle{ a(t + T) = a(t),\;p(t + T) = p(t),\;\text{ and}\;G(t + T,\cdot ) = G(t,\cdot ) }$$
(3.10.9)

and T is the least positive integer for which these hold. As before, for the sake of inversion, we rewrite (3.10.8) as

$$\displaystyle{ x(t + 1) - a(t)x(t) = a(t)H(x(t)) + G(t,x(t - r)) + p(t), }$$
(3.10.10)

where

$$\displaystyle{ H(x) = -x + x^{5}. }$$
(3.10.11)

We begin with the following lemma, whose proof we omit.

Lemma 3.11.

Suppose that \(1 -\prod _{s=t-T}^{t-1}a(s)\neq 0\) for all \(t \in \mathbb{Z}\) . Then x(t) is a solution of  (3.10.10) if and only if

$$\displaystyle\begin{array}{rcl} x(t) =\Big (1 -\prod _{s=t-T}^{t-1}a(s)\Big)^{-1}\sum _{ u=t-T}^{t-1}\big(a(u)H(x(u)) + G(u,x(u - r)) + p(u)\big)\prod _{ s=u+1}^{t-1}a(s).& & {}\\ \end{array}$$

Let PT be the set of all sequences x(t), periodic in t of period T. Then (PT, ∥⋅ ∥) is a Banach space when it is endowed with the maximum norm

$$\displaystyle{ \|x\| =\max _{t\in \mathbb{Z}}\vert x(t)\vert =\max _{t\in [0,T-1]}\vert x(t)\vert. }$$

Set

$$\displaystyle{ \mathbb{M} =\{\varphi \in P_{T}:\|\varphi \|\leq 5^{-1/4}\}. }$$
(3.10.12)

Obviously, \(\mathbb{M}\) is a bounded and convex subset of the Banach space PT. Let the map \(A: \mathbb{M} \rightarrow P_{T}\) be defined by

$$\displaystyle{ (A\varphi )(t) =\Big (1-\prod _{s=t-T}^{t-1}a(s)\Big)^{-1}\sum _{ u=t-T}^{t-1}(G(u,\varphi (u-r))+p(u))\prod _{ s=u+1}^{t-1}a(s),\;t \in \mathbb{Z}. }$$
(3.10.13)

In a similar way, we set the map \(B: \mathbb{M} \rightarrow P_{T}\) by

$$\displaystyle{ (B\psi )(t) =\Big (1 -\prod _{s=t-T}^{t-1}a(s)\Big)^{-1}\sum _{ u=t-T}^{t-1}\big(a(u)H(\psi (u))\big)\prod _{ s=u+1}^{t-1}a(s),\ t \in \mathbb{Z}. }$$
(3.10.14)

It is clear from (3.10.13) and (3.10.14) that (Aφ)(t) and (Bψ)(t) are T-periodic in t. For simplicity we let

$$\displaystyle{ \eta:=\Big \vert \Big(1 -\prod _{s=t-T}^{t-1}a(s)\Big)^{-1}\Big\vert. }$$

Let

$$\displaystyle{ G(u,\psi (u - r)) = b(u)\psi (u - r)^{5}. }$$
(3.10.15)

For \(x \in \mathbb{M},\) we have

$$\displaystyle{ \vert x(t)\vert ^{5} \leq 5^{-5/4}, }$$

and therefore,

$$\displaystyle\begin{array}{rcl} \vert G(u,x(u - r)) + p(u)\vert & =& \vert b(u)x(u - r)^{5} + p(u)\vert \\ & \leq & 5^{-5/4}\vert b(u)\vert + \vert p(u)\vert {}\end{array}$$
(3.10.16)

and

$$\displaystyle{ \vert H(x(t))\vert = \vert x(t) - x(t)^{5}\vert \leq 4(5^{-5/4})\;\text{ for all}\;x \in \mathbb{M}. }$$

We have the following theorem.

Theorem 3.10.4.

Suppose G(u, ψ(ur)) is given by  (3.10.15). Assume for all \(t \in \mathbb{Z}\)

$$\displaystyle{ \eta \sum _{u=t-T}^{t-1}\Big(5^{-5/4}\vert b(u)\vert + \vert p(u)\vert + 4(5^{-5/4})\vert a(u)\vert \Big)\Big\vert \prod _{ s=u+1}^{t-1}a(s)\Big\vert \leq 5^{-1/4}. }$$
(3.10.17)

Then  (3.10.8) has a periodic solution.

Proof.

Using condition (3.10.17) and by a similar argument as in Lemma 3.9, one can easily show that B is a large contraction since H is a large contraction. Also, the map A is continuous and maps bounded sets into compact sets and hence it is compact. Moreover, for \(\varphi,\psi \in \mathbb{M}\), we have by (3.10.17) that

$$\displaystyle{ A\varphi + B\psi: \mathbb{M} \rightarrow \mathbb{M}. }$$

Hence an application of Theorem 3.10.2 implies the existence of a periodic solution in \(\mathbb{M}\). This completes the proof.
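The following sketch illustrates Theorem 3.10.4 with invented T-periodic data (T = 6, r = 2, and G as in (3.10.15)); in this strongly contractive regime the iteration locks onto a T-periodic orbit, the kind of solution whose existence the theorem guarantees.

```python
# Illustrative sketch with invented T-periodic data: iterate the delay
# equation x(t+1) = a(t) x(t)^5 + b(t) x(t-r)^5 + p(t), with G as in
# (3.10.15) and period T = 6 coefficients; in this strongly contractive
# regime the trajectory locks onto a T-periodic orbit.

import math

T, r = 6, 2
a = lambda t: 0.2
b = lambda t: 0.1
p = lambda t: 0.2 * math.cos(2 * math.pi * t / T)

hist = [0.0] * (r + 1)             # zero initial sequence x(-r), ..., x(0)
for t in range(600):
    hist.append(a(t) * hist[-1]**5 + b(t) * hist[-1 - r]**5 + p(t))

tail = hist[-120:]
residual = max(abs(tail[i + T] - tail[i]) for i in range(len(tail) - T))
print(residual < 1e-9)             # tail is T-periodic to machine precision
```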

It is evident from Lemma 3.9 that proving that a given map is a large contraction is tedious and long, and it is not practical to argue case by case for each function. Consider the mapping H defined by

$$\displaystyle{ H(x(u)) = x(u) - h(x(u)). }$$
(3.10.18)

We have observed from Lemma 3.9 that the properties of the function h in (3.10.18) play a substantial role in obtaining a large contraction on a convenient set. Next we state and prove a remarkable theorem that generalizes the concept of Large Contraction. The theorem provides easily checked sufficient conditions under which a mapping is a Large Contraction. The theorem is due to Adivar, Raffoul, and Islam. Several authors have published it in their work without the proper citations. Let α ∈ (0, 1] be a fixed real number. Define the set \(\mathbb{M}_{\alpha }\) by

$$\displaystyle{ \mathbb{M}_{\alpha } = \left \{\phi:\phi \in C(\mathbb{R}, \mathbb{R})\text{ and }\left \Vert \phi \right \Vert \leq \alpha \right \}. }$$
(3.10.19)
  1. H.1.

    \(h: \mathbb{R} \rightarrow \mathbb{R}\) is continuous on [−α, α] and differentiable on (−α, α),

  2. H.2.

    The function h is strictly increasing on [−α, α],

  3. H.3.

    \(\sup \limits _{t\in (-\alpha,\alpha )}h^{{\prime}}(t) \leq 1\).

Theorem 3.10.5.

[Adivar-Raffoul-Islam [ 4 ] (Classifications of Large Contraction Theorem)] Let \(h: \mathbb{R} \rightarrow \mathbb{R}\) be a function satisfying (H.1-H.3). Then the mapping H in ( 3.10.18 ) is a large contraction on the set \(\mathbb{M}_{\alpha }\) .

Proof.

Let \(\phi,\varphi \in \mathbb{M}_{\alpha }\) with ϕ ≠ φ. Then ϕ(t) ≠ φ(t) for some \(t \in \mathbb{R}\). Let us denote the set of all such t by D(ϕ, φ), i.e.,

$$\displaystyle{ D(\phi,\varphi ) = \left \{t \in \mathbb{R}:\phi (t)\neq \varphi (t)\right \}. }$$

For all t ∈ D(ϕ, φ), we have

$$\displaystyle\begin{array}{rcl} \left \vert H\phi (t) - H\varphi (t)\right \vert & =& \left \vert \phi (t) - h(\phi (t)) -\varphi (t) + h(\varphi (t))\right \vert \\ & =& \left \vert \phi (t) -\varphi (t)\right \vert \left \vert 1 -\left (\frac{h(\phi (t)) - h(\varphi (t))} {\phi (t) -\varphi (t)} \right )\right \vert.{}\end{array}$$
(3.10.20)

Since h is a strictly increasing function we have

$$\displaystyle{ \frac{h(\phi (t)) - h(\varphi (t))} {\phi (t) -\varphi (t)}> 0\text{ for all }t \in D(\phi,\varphi ). }$$
(3.10.21)

For each fixed t ∈ D(ϕ, φ) define the interval Ut ⊂ [−α, α] by

$$\displaystyle{ U_{t} = \left \{\begin{array}{ll} (\varphi (t),\phi (t))&\text{if }\phi (t)>\varphi (t)\\ (\phi (t),\varphi (t)) &\text{if } \phi (t) <\varphi (t) \end{array} \right.. }$$

The Mean Value Theorem implies that for each fixed t ∈ D(ϕ, φ) there exists a real number ct ∈ Ut such that

$$\displaystyle{ \frac{h(\phi (t)) - h(\varphi (t))} {\phi (t) -\varphi (t)} = h^{{\prime}}(c_{ t}). }$$

By (H.2-H.3) we have

$$\displaystyle{ 0 \leq \inf \limits _{u\in (-\alpha,\alpha )}h^{{\prime}}(u) \leq \inf \limits _{ u\in U_{t}}h^{{\prime}}(u) \leq h^{{\prime}}(c_{ t}) \leq \sup \limits _{u\in U_{t}}h^{{\prime}}(u) \leq \sup \limits _{ u\in (-\alpha,\alpha )}h^{{\prime}}(u) \leq 1. }$$
(3.10.22)

Hence, by (3.10.20)–(3.10.22) we obtain

$$\displaystyle{ \left \vert H\phi (t) - H\varphi (t)\right \vert \leq \left \vert 1 -\inf \limits _{u\in (-\alpha,\alpha )}h^{{\prime}}(u)\right \vert \left \vert \phi (t) -\varphi (t)\right \vert. }$$
(3.10.23)

for all t ∈ D(ϕ, φ). This implies a large contraction in the supremum norm. To see this, choose a fixed ɛ ∈ (0, 1) and assume that ϕ and φ are two functions in \(\mathbb{M}_{\alpha }\) satisfying

$$\displaystyle{ \varepsilon \leq \sup _{t\in D(\phi,\varphi )}\left \vert \phi (t) -\varphi (t)\right \vert = \left \Vert \phi -\varphi \right \Vert. }$$

If \(\left \vert \phi (t) -\varphi (t)\right \vert \leq \frac{\varepsilon } {2}\) for some t ∈ D(ϕ, φ), then we get by (3.10.22) and (3.10.23) that

$$\displaystyle{ \left \vert H(\phi (t)) - H(\varphi (t))\right \vert \leq \left \vert \phi (t) -\varphi (t)\right \vert \leq \frac{1} {2}\left \Vert \phi -\varphi \right \Vert. }$$
(3.10.24)

Since h is continuous and strictly increasing, the function \(h\left (u + \frac{\varepsilon } {2}\right ) - h(u)\) attains its minimum on the closed and bounded interval \(\left [-\alpha,\alpha \right ]\). Thus, if \(\frac{\varepsilon }{2} <\left \vert \phi (t) -\varphi (t)\right \vert\) for some t ∈ D(ϕ, φ), then by (H.2) and (H.3) we conclude that

$$\displaystyle{ 1 \geq \frac{h(\phi (t)) - h(\varphi (t))} {\phi (t) -\varphi (t)}>\lambda, }$$

where

$$\displaystyle{ \lambda:= \frac{1} {2\alpha }\min \left \{h(u +\varepsilon /2) - h(u): u \in \left [-\alpha,\alpha \right ]\right \}> 0. }$$

Hence, (3.10.20) implies

$$\displaystyle{ \left \vert H\phi (t) - H\varphi (t)\right \vert \leq (1-\lambda )\left \Vert \phi -\varphi \right \Vert. }$$
(3.10.25)

Consequently, combining (3.10.24) and (3.10.25) we obtain

$$\displaystyle{ \left \vert H\phi (t) - H\varphi (t)\right \vert \leq \delta \left \Vert \phi -\varphi \right \Vert, }$$

where

$$\displaystyle{ \delta =\max \left \{\frac{1} {2},1-\lambda \right \} <1. }$$

The proof is complete.

Example 3.11.

Let α ∈ (0, 1) and \(k \in \mathbb{N}\) be fixed elements and u ∈ (−1, 1).

  1. 1.

    The condition (H.2) is not satisfied for the function \(h_{1}(u) = \frac{1} {2k}u^{2k}.\)

  2. 2.

    The function \(h_{2}(u) = \frac{1} {2k+1}u^{2k+1}\) satisfies (H.1-H.3).

Proof.

Since \(h_{1}^{{\prime}}(u) = u^{2k-1} <0\) for − 1 < u < 0, the condition (H.2) is not satisfied for h1. Evidently, (H.1-H.2) hold for h2. (H.3) follows from the fact that \(h_{2}^{{\prime}}(u) = u^{2k} \leq \alpha ^{2k} <1\) on (−α,α), since α ∈ (0, 1).
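The claims of Example 3.11 can be spot-checked numerically on a grid, with the illustrative choices α = 0.9 and k = 2.

```python
# Grid spot-check of Example 3.11 with the illustrative choices alpha = 0.9
# and k = 2: h2(u) = u^(2k+1)/(2k+1) satisfies (H.1)-(H.3), while
# h1(u) = u^(2k)/(2k) decreases on (-alpha, 0) and so fails (H.2).

alpha, k = 0.9, 2
h1 = lambda u: u**(2 * k) / (2 * k)
h2 = lambda u: u**(2 * k + 1) / (2 * k + 1)
h2p = lambda u: u**(2 * k)           # derivative of h2

grid = [-alpha + i * (2 * alpha) / 1000 for i in range(1001)]

mono = all(h2(grid[i]) < h2(grid[i + 1]) for i in range(1000))      # (H.2)
sup_ok = max(h2p(u) for u in grid) <= alpha**(2 * k) + 1e-12 < 1    # (H.3)
h1_fails = any(h1(grid[i]) > h1(grid[i + 1]) for i in range(500))   # h1 decreasing

print(mono and sup_ok and h1_fails)
```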

3.11 Open Problems

Open Problem 1.

Consider the neutral delay functional difference equation

$$\displaystyle{ x(n + 1) =\alpha x(n + 1 - h) + ax(n) - q(n,x(n),x(n - h)),\;n \in \mathbb{Z}^{+} }$$
(3.11.1)

where the function \(q: \mathbb{Z}^{+} \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\) is continuous and α and a are constants. For a > 1,  (3.11.1) is equivalent to

$$\displaystyle{ \bigtriangleup \big[(x(n) -\alpha x(n - h))a^{1-n}\big] =\big [a\alpha x(n - h) - q(n,x(n),x(n - h))\big]a^{-n}. }$$
(3.11.2)

We search for a solution of (3.11.1) having the property

$$\displaystyle{ (x(n) -\alpha x(n - h))a^{-n} \rightarrow 0,\;\mbox{ as}\;n \rightarrow \infty. }$$

Hence, by summing (3.11.2) from n to ∞ we get the following advanced type Volterra difference equation,

$$\displaystyle{ x(n + 1) =\alpha x(n - h) -\sum _{s=n}^{\infty }\big[a\alpha x(s - h) - q(s,x(s),x(s - h))\big]a^{n-s} }$$
(3.11.3)

which is an indication to study the general advanced type Volterra difference equation

$$\displaystyle{ x(n) = f(n,x(n),x(n - h)) -\sum _{s=n}^{\infty }Q(s,x(s),x(s - h))C(n - s). }$$
(3.11.4)

Under suitable conditions, one might explore the boundedness of solutions and the existence of periodic solutions. Finding suitable Lyapunov functionals to imply the results is almost impossible. It is my suggestion that the use of fixed point theory would be fruitful. For instance, if the right spaces are set up, one has the choice of using the contraction mapping principle (Theorem 3.5.1) or the following Krasnoselskii-Schaefer theorem.

Theorem 3.11.1 (Krasnoselskii-Schaefer Theorem [25]).

Let (S, ∥⋅ ∥) be a Banach space. Suppose B: S → S is a contraction map, and A: S → S is continuous and maps bounded sets into compact sets. Then either

  1. (i)

    \(x =\lambda B(\frac{x} {\lambda } ) +\lambda Ax\) has a solution in S for λ = 1, or

  2. (ii)

    the set of all such solutions with 0 < λ < 1 is unbounded.

It is noted that Krasnoselskii-Schaefer’s theorem requires a priori bounds on solutions of a corresponding auxiliary equation. To obtain such bounds, one would have to construct a Lyapunov functional that is suitable for Krasnoselskii-Schaefer’s theorem. For problem (3.11.4), the auxiliary equation is given by

$$\displaystyle{ x(n) =\lambda f(n,x(n)/\lambda,x(n - h)/\lambda ) -\lambda \sum _{s=n}^{\infty }Q(s,x(s),x(s - h))C(n - s). }$$

Open Problem 2.

Extend the results of Section 3.9 to Volterra summation equations of the form

$$\displaystyle{ x(n) = a(n) -\sum _{s=0}^{n-1}C(n,s)g(s,x(s)). }$$

This is an unexplored area of research.

Open Problem 3.

We have seen in Chapter 3 that fixed point theory was successfully used to obtain asymptotic stability, whereas in Chapter 2 we obtained uniform asymptotic stability using Lyapunov functionals. To my knowledge, no one has been able to use fixed point theory to obtain uniform asymptotic stability. Such results would mainly rest on how the set S is defined. Being able to do so would revolutionize the concept of fixed point theory and open the door for new research in differential/difference equations and even in dynamical systems on time scales.

Open Problem 4.

Consider the following system of two neurons:

$$\displaystyle{ \left \{\begin{array}{c} x_{1}(n + 1) = h_{1}x_{1}(n) +\beta _{1}f(x_{1}(n-\tau )) + a_{1}f(x_{2}(n -\tau _{1})) \\ x_{2}(n + 1) = h_{2}x_{2}(n) +\beta _{2}f(x_{2}(n-\tau )) + a_{2}f(x_{1}(n -\tau _{2}))\end{array} \right. }$$
(3.11.5)

where x1(n) and x2(n) denote the activations of the two neurons, τi (i = 1, 2) and τ denote the synaptic transmission delays, a1 and a2 are the synaptic coupling weights, and \(f: \mathbb{R} \rightarrow \mathbb{R}\) is the activation function with f(0) = 0. 

Use either Lyapunov functionals (Chapter 2) or fixed point theory (Chapter 3) to analyze the system and compare both methods.
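As a starting point, the sketch below simulates (3.11.5) with invented parameters and f = tanh (so f(0) = 0); with |h i | < 1 and small synaptic weights the activations decay to zero, the kind of behavior either method would be used to verify.

```python
# Illustrative simulation of the two-neuron system (3.11.5) with invented
# parameters and f = tanh (so f(0) = 0): with |h_i| < 1 and small synaptic
# weights the activations decay to zero.

import math

h1, h2 = 0.5, 0.4
b1, b2 = 0.1, 0.1                  # beta_1, beta_2
a1, a2 = 0.2, 0.2
tau, t1, t2 = 3, 1, 2              # delays tau, tau_1, tau_2
f = math.tanh

d = max(tau, t1, t2)
x1 = [0.5] * (d + 1)               # constant initial sequences
x2 = [-0.3] * (d + 1)
for n in range(400):
    new1 = h1 * x1[-1] + b1 * f(x1[-1 - tau]) + a1 * f(x2[-1 - t1])
    new2 = h2 * x2[-1] + b2 * f(x2[-1 - tau]) + a2 * f(x1[-1 - t2])
    x1.append(new1)
    x2.append(new2)

print(abs(x1[-1]) < 1e-6 and abs(x2[-1]) < 1e-6)   # activations decay to 0
```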