Abstract
In the past hundred and fifty years, Lyapunov functions/functionals have been exclusively and successfully used in the study of stability and existence of periodic and bounded solutions. The author has extensively used Lyapunov functions/functionals for the purpose of analyzing solutions of functional equations, and each time the suitable Lyapunov functional presented us with unique difficulties that could only be overcome by the imposition of severe conditions on the given coefficients.
In the past hundred and fifty years, Lyapunov functions/functionals have been exclusively and successfully used in the study of stability and existence of periodic and bounded solutions. The author has extensively used Lyapunov functions/functionals for the purpose of analyzing solutions of functional equations, and each time the suitable Lyapunov functional presented us with unique difficulties that could only be overcome by the imposition of severe conditions on the given coefficients. In practice, Lyapunov's direct method requires pointwise conditions, while many real-life problems call for averages. Moreover, it is rare that we encounter a problem for which a suitable Lyapunov functional can be easily constructed. It is common knowledge among researchers that results on stability and boundedness go hand in hand with the constructed Lyapunov functional.
In this chapter, we begin a systematic study of stability theory for ordinary and functional difference equations by means of fixed point theory. The study of fixed point theory is motivated by a number of difficulties encountered in the study of stability by means of Lyapunov's direct method. We notice that these difficulties frequently vanish when we apply fixed point theory. We provide a brief introduction to Cauchy sequences, metric spaces, compactness, the contraction mapping principle, and Banach spaces. In some cases, the contraction mapping principle fails to produce any results. This forces us to look for other alternatives, namely the concept of Large Contraction. We will restate the contraction mapping principle and Krasnoselskii's fixed point theorem in which the regular contraction is replaced with Large Contraction. Most of the work in this chapter can be found in [4, 140, 142, 150, 166], and [167].
3.1 Motivation
We begin by offering an example that exposes the difficulties encountered by the use of Lyapunov functionals. Fixed point theory was first used in difference equations by Raffoul in [136] to study the stability and the existence of periodic solutions of the linear delay difference equation
It was followed by a series of papers in which different authors considered the same idea and analyzed various types of difference and Volterra difference equations. For example, in [134] the author initiated the use of fixed point theory to alleviate some of the difficulties that arise from the deployment of Lyapunov functionals to study boundedness and stability of the neutral nonlinear delay differential equation
where a(t), b(t), g(t), and q are continuous in their respective arguments. Later on, Islam and Yankson [87] extended the work of [134] to the neutral nonlinear delay difference equation
where \(a,c: \mathbb{Z} \rightarrow \mathbb{R},q: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\), and \(g: \mathbb{Z} \rightarrow \mathbb{Z}.\)
To illustrate some of the difficulties that arise from the deployment of Lyapunov functionals, we consider the delay difference equation
where \(a,b,p: \mathbb{Z}^{+} \rightarrow \mathbb{R}\), τ is a positive integer. Assume
and there is a δ > 0 such that
and
Then all solutions of (3.1.1) are bounded. If p(t) = 0 for all t, then the zero solution of (3.1.1) is uniformly asymptotically stable (UAS). To see this, we consider the Lyapunov functional
Then along solutions of (3.1.1) we have
The results follow from Chapter 2. It is severe to ask that a and b be bounded and that | b(t) | be bounded by a at all times. For another illustration, we consider the nonlinear delay difference equation
where the functions g and h are continuous. Define the Lyapunov functional V by
We assume that there are positive constants γ1 and γ2 such that | g(x) | ≤ γ1 | x | and | h(x) | ≤ γ2 | x |, so that
Then along solutions of (3.1.5) we have
Now one may refer to Chapter 2 and argue that the zero solution of (3.1.5) is asymptotically stable.
3.2 Metrics and Banach Spaces
This section is devoted to introductory materials related to Cauchy sequences, metric spaces, contractions, compactness, the contraction mapping principle, and Banach spaces. Materials in this section are taken from class notes that the author has used in a graduate course on real analysis. For an excellent reference, we refer the reader to [23].
Definition 3.2.1.
A pair (E, ρ) is a metric space if E is a set and ρ: E × E → [0, ∞) such that when y, z, and u are in E, then
-
(a)
\(\rho (y,z) \geq 0,\;\rho (y,y) = 0,\;\mbox{ and}\;\rho (y,z) = 0\;\mbox{ implies}\;y = z.\)
-
(b)
\(\rho (y,z) =\rho (z,y),\) and
-
(c)
\(\rho (y,z) \leq \rho (y,u) +\rho (u,z).\)
Definition 3.2.2 (Cauchy Sequence).
A sequence {xn} ⊆ E is a Cauchy sequence if for each ɛ > 0 there exists an \(N \in \mathbb{N}\) such that \(n,m> N\Rightarrow\rho (x_{n},x_{m}) <\varepsilon\).
Definition 3.2.3 (Completeness of Metric Space).
A metric space (E, ρ) is said to be complete if every Cauchy sequence in E converges to a point in E.
Definition 3.2.4.
A set L in a metric space (E, ρ) is compact if each sequence in L has a subsequence with a limit in L.
Definition 3.2.5.
Let {fn} be a sequence of real functions with \(f_{n}: [a,b] \rightarrow \mathbb{R}\).
-
1.
{fn} is uniformly bounded on [a, b] if there exists M > 0 such that | fn(t) | ≤ M for all \(n \in \mathbb{N}\) and for all t ∈ [a, b].
-
2.
{fn} is equicontinuous at t0 if for each ɛ > 0 there exists δ > 0 such that for all \(n \in \mathbb{N}\), if t ∈ [a, b] and | t0 − t | < δ, then | fn(t0) − fn(t) | < ɛ. Also, {fn} is equicontinuous if {fn} is equicontinuous at each t0 ∈ [a, b].
-
3.
{fn} is uniformly equicontinuous if for each ɛ > 0 there exists δ > 0 such that for all \(n \in \mathbb{N}\), if t1, t2 ∈ [a, b] and | t1 − t2 | < δ, then | fn(t1) − fn(t2) | < ɛ.
It is easy to see that the sequence \(\{f_{n}\}\) with \(f_{n}(t) = t^{n}\) is not an equicontinuous sequence of functions on [0, 1], although each fn is uniformly continuous.
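Although the text contains no code, the failure of equicontinuity here is easy to check numerically. The following is an illustrative sketch (the sample points and tolerances are our own choices): for a fixed δ, the oscillation of t^n near t0 = 1 does not stay small as n grows.

```python
# Illustrative sketch: f_n(t) = t**n on [0, 1] is uniformly bounded by 1
# and each f_n is uniformly continuous, yet the family is not
# equicontinuous at t0 = 1: for a fixed delta, the gap
# |f_n(1) - f_n(1 - delta)| = 1 - (1 - delta)**n approaches 1 as n grows.
def f(n, t):
    return t ** n

delta = 0.1
gaps = [abs(f(n, 1.0) - f(n, 1.0 - delta)) for n in (1, 10, 100)]
print(gaps)  # the gaps increase toward 1, so no single delta works for all n
```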
Proposition 3.1 (Cauchy Criterion for Uniform Convergence).
If {Fn} is a sequence of bounded functions that is Cauchy in the uniform norm, then {Fn} converges uniformly.
Definition 3.2.6.
A real-valued function f defined on \(E \subseteq \mathbb{R}\) is said to be Lipschitz continuous with Lipschitz constant M if | f(x) − f(y) | ≤ M | x − y | for all x, y ∈ E.
Remark 3.1.
It is an easy exercise to show that a Lipschitz continuous function is uniformly continuous. Also, if each fn in a sequence of functions {fn} has the same Lipschitz constant, then the sequence is uniformly equicontinuous.
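The first claim of Remark 3.1 is a one-line estimate (assuming M > 0; the case M = 0 is trivial): given ε > 0, take δ = ε∕M, so that

```latex
|x-y| < \delta = \frac{\varepsilon}{M}
\;\Longrightarrow\;
|f(x)-f(y)| \le M\,|x-y| < M\cdot\frac{\varepsilon}{M} = \varepsilon .
```

For the second claim, the same δ serves every fn simultaneously, which is precisely uniform equicontinuity.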
Lemma 3.1.
If {fn} is an equicontinuous sequence of functions on a closed bounded interval, then {fn} is uniformly equicontinuous .
Proof.
Suppose {fn} is equicontinuous on [a, b] (which is compact). Let ɛ > 0. For each x ∈ [a, b], let δx > 0 be such that \(\vert y - x\vert <\delta _{x}\Rightarrow\vert f_{n}(x) - f_{n}(y)\vert <\varepsilon /2\) for all \(n \in \mathbb{N}\). The collection {B(x, δx∕2): x ∈ [a, b]} is an open cover of [a, b], so it has a finite subcover \(\{B(x_{i},\delta _{x_{i}}/2): i = 1,\ldots,k\}\). Let \(\delta =\min \{\delta _{x_{i}}/2: i = 1,\ldots,k\}\). Then, if x, y ∈ [a, b] with | x − y | < δ, there is some i with \(x \in B(x_{i},\delta _{x_{i}}/2)\). Since \(\vert x - y\vert <\delta \leq \delta _{x_{i}}/2\), we have \(\vert x_{i} - y\vert \leq \vert x_{i} - x\vert + \vert x - y\vert <\delta _{x_{i}}/2 +\delta _{x_{i}}/2 =\delta _{x_{i}}\). Hence \(\vert x_{i} - y\vert <\delta _{x_{i}}\) and \(\vert x_{i} - x\vert <\delta _{x_{i}}\). So, for any \(n \in \mathbb{N}\) we have | fn(x) − fn(y) | ≤ | fn(x) − fn(xi) | + | fn(xi) − fn(y) | < ɛ∕2 + ɛ∕2 = ɛ. So, {fn} is uniformly equicontinuous.
The next theorem gives us the main method of proving compactness in the spaces in which we are interested.
Theorem 3.2.1 (Ascoli-Arzelà).
If {fn(t)} is a uniformly bounded and equicontinuous sequence of real valued functions on an interval [a, b], then there is a subsequence which converges uniformly on [a, b] to a continuous function.
Proof.
Since {fn(t)} is equicontinuous on [a, b], by Lemma 3.1, {fn(t)} is uniformly equicontinuous. Let t1, t2, … be a listing of the rational numbers in [a, b] (the set of rational numbers is countable, so this enumeration is possible). The sequence {fn(t1)}n = 1 ∞ is a bounded sequence of real numbers (since {fn} is uniformly bounded), so it has a subsequence \(\{f_{n_{k}}(t_{1})\}\) converging to a number which we call ϕ(t1). It will be more convenient to represent this subsequence without sub-subscripts, so we write fk 1 for \(f_{n_{k}}\) and switch the index from k to n. So, the subsequence is written as {fn 1(t1)}n = 1 ∞. Now, the sequence {fn 1(t2)} is bounded, so it has a convergent subsequence, say {fn 2(t2)}, with limit ϕ(t2). We continue in this way, obtaining a sequence of sequences {fn m(t)}n = 1 ∞ (one sequence for each m), each of which is a subsequence of the previous one. Furthermore, we have fn m(tm) → ϕ(tm) as n → ∞ for each \(m \in \mathbb{N}\). Now, consider the “diagonal” functions defined by Fk(t) = fk k(t). Since fn m(tm) → ϕ(tm), it follows that Fr(tm) → ϕ(tm) as r → ∞ for each \(m \in \mathbb{N}\) (in other words, the sequence {Fr(t)} converges pointwise at each tm). We now show that {Fk(t)} converges uniformly on [a, b] by showing it is Cauchy in the uniform norm. Let ɛ > 0. Let δ > 0 be as in the definition of uniform equicontinuity for {fn(t)} applied with ɛ∕3. Divide [a, b] into p intervals where \(p> \frac{b-a} {\delta }\). Let ξj be a rational number in the j th interval, for j = 1, …, p. Recall that {Fr(t)} converges at each of the points ξj, since they are rational numbers. So, for each j, there is \(M_{j} \in \mathbb{N}\) such that | Fr(ξj) − Fs(ξj) | < ɛ∕3 whenever r, s > Mj. Let M = max{Mj: j = 1, …, p}. If t ∈ [a, b], then it is in one of the p intervals, say the j th. So, | t −ξj | < δ and hence | fr r(t) − fr r(ξj) | = | Fr(t) − Fr(ξj) | < ɛ∕3 for every r.
Also, if r, s > M, then | Fr(ξj) − Fs(ξj) | < ɛ∕3 (since M is the max of the Mi’s). So, we have for r, s > M,
By the Cauchy Criterion for convergence, the sequence {Fr(t)} converges uniformly on [a, b]. Since each Fr(t) is continuous, the limit function ϕ(t) is also continuous.
Remark 3.2.
The Ascoli-Arzelà Theorem can be generalized to a sequence of functions from [a, b] to \(\mathbb{R}^{n}\). You apply the Ascoli-Arzelà to the first coordinate function to get a uniformly convergent subsequence. Then, apply the theorem again, this time to the corresponding subsequence of functions restricted to the second coordinate, getting a sub-subsequence, and so on.
Banach spaces form an important class of metric spaces. We now define Banach spaces in several steps.
Definition 3.2.7.
A triple (V, +, ⋅ ) is said to be a linear (or vector) space over a field F if V is a set and the following are true.
-
1.
Properties of +
-
a.
+ is a function from V × V to V. Outputs are denoted x + y.
-
b.
for all x, y ∈ V, x + y = y + x. (+ is commutative)
-
c.
for all x, y, w ∈ V, x + (y + w) = (x + y) + w. (+ is associative)
-
d.
there is a unique element of V which we denote 0 such that for all x ∈ V, 0 + x = x + 0 = x. (additive identity)
-
e.
for each x ∈ V there is a unique element of V which we denote − x such that x + (−x) = −x + x = 0. (additive inverse)
-
2.
Scalar multiplication
-
a.
⋅ is a function from F × V to V. Outputs are denoted α ⋅ x, or αx.
-
b.
for all α, β ∈ F and x ∈ V, α(βx) = (αβ)x.
-
c.
for all x ∈ V, 1 ⋅ x = x.
-
d.
for all α, β ∈ F and x ∈ V, (α + β)x = αx + βx.
-
e.
for all α ∈ F and x, y ∈ V, α(x + y) = αx + αy.
Commonly, the real numbers or complex numbers are the field in the above definition. For our purposes, we only consider the field of real numbers \(F = \mathbb{R}\).
Definition 3.2.8 (Normed Spaces).
A vector space (V, +, ⋅ ) is a normed space if for each x ∈ V there is a nonnegative real number ∥x∥, called the norm of x, such that for each x, y ∈ V and \(\alpha \in \mathbb{R}\)
-
1.
∥x∥ = 0 if and only if x = 0
-
2.
∥αx∥ = | α | ∥x∥
-
3.
∥x + y∥ ≤ ∥x∥ + ∥y∥
Remark 3.3.
A norm on a vector space always defines a metric ρ(x, y) = ∥x − y∥ on the vector space. Given a metric ρ defined on a vector space, it is tempting to define ∥v∥ = ρ(v, 0). But this is not always a norm.
Definition 3.2.9.
A Banach space is a complete normed vector space. That is, a vector space (X, +, ⋅ ) with norm ∥⋅ ∥ for which the metric ρ(x, y) = ∥x − y∥ is complete.
Example 3.1.
The space \((\mathbb{R}^{n},+,\cdot )\) over the field \(\mathbb{R}\) is a vector space (with the usual vector addition, + and scalar multiplication, ⋅ ) and there are many suitable norms for it. For example, if x = (x1, x2, …, xn), then
-
1.
\(\|x\| =\max _{1\leq i\leq n}\vert x_{i}\vert\),
-
2.
\(\|x\| = \sqrt{\sum _{i=1 }^{n }x_{i }^{2}}\), or
-
3.
\(\|x\| =\sum _{ i=1}^{n}\vert x_{ i}\vert\)
are all suitable norms. Norm 2 is the Euclidean norm: the norm of a vector is its Euclidean distance to the zero vector, and the metric defined from this norm is the usual Euclidean metric. Norm 3 generates the “taxi-cab” metric on \(\mathbb{R}^{2}\).
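For concreteness, the three norms of Example 3.1 can be computed directly; a brief sketch (the function names are ours):

```python
import math

def norm_max(x):      # norm 1: the max norm
    return max(abs(xi) for xi in x)

def norm_euclid(x):   # norm 2: the Euclidean norm
    return math.sqrt(sum(xi * xi for xi in x))

def norm_taxicab(x):  # norm 3: the taxi-cab norm
    return sum(abs(xi) for xi in x)

x = (3.0, -4.0)
print(norm_max(x), norm_euclid(x), norm_taxicab(x))  # 4.0 5.0 7.0
# standard comparison between the three norms on R^n:
assert norm_max(x) <= norm_euclid(x) <= norm_taxicab(x)
```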
Remark 3.4.
Consider the vector space \((\mathbb{R}^{n},+,\cdot )\) as a metric space with its metric defined by ρ(x, y) = ∥x − y∥, where ∥⋅ ∥ is any of the norms in Example 3.1. The completeness of this metric space comes directly from the completeness of \(\mathbb{R}\); hence \((\mathbb{R}^{n},\|\cdot \|)\) is a Banach space.
Remark 3.5.
In the Euclidean space \(\mathbb{R}^{n}\), compactness is equivalent to closed and bounded (Heine-Borel Theorem). In fact, the metrics generated from any of the norms in Example 3.1 are equivalent in the sense that they generate the same topologies. Moreover, compactness is equivalent to closed and bounded in each of those metrics.
Example 3.2.
Let \(C([a,b], \mathbb{R}^{n})\) denote the space of all continuous functions \(f: [a,b] \rightarrow \mathbb{R}^{n}\).
-
1.
\(C([a,b], \mathbb{R}^{n})\) is a vector space over \(\mathbb{R}\).
-
2.
If \(\|f\| =\max _{a\leq t\leq b}\vert f(t)\vert\) where | ⋅ | is a norm on \(\mathbb{R}^{n}\), then \(\left (C([a,b], \mathbb{R}^{n}),\|\cdot \|\right )\) is a Banach space.
-
3.
Let M and K be two positive constants and define
$$\displaystyle{ L =\{ f \in C([a,b], \mathbb{R}^{n}):\| f\| \leq M;\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$then L is compact.
Proof.
(of Part 3.) Let {fn} be any sequence in L. The functions are uniformly bounded by M and have the same Lipschitz constant, K. So, the sequence is uniformly equicontinuous. By the Ascoli-Arzelà Theorem, there is a subsequence, \(\{f_{n_{k}}\}\), that converges uniformly to a continuous function \(f: [a,b] \rightarrow \mathbb{R}^{n}\). We now show that f ∈ L. Since \(\vert f_{n_{k}}(t)\vert \leq M\) for each t ∈ [a, b], we have | f(t) | ≤ M for each t ∈ [a, b] and hence ∥f∥ ≤ M. Now, fix u, v ∈ [a, b] and fix ɛ > 0. Since \(\{f_{n_{k}}\}\) converges uniformly to f, there is \(N \in \mathbb{N}\) such that \(\vert f_{n_{k}}(t) - f(t)\vert <\varepsilon /2\) for all t ∈ [a, b] and all k ≥ N. So, fix any k ≥ N and we have
Since ɛ > 0 was arbitrary, | f(u) − f(v) | ≤ K | u − v |. Hence f ∈ L. We have demonstrated that {fn} has a subsequence converging to an element of L. Hence, L is compact.
Example 3.3.
Consider \(\mathbb{R}\) as a vector space over \(\mathbb{R}\) and define the metric \(d(x,y) = \frac{\vert x - y\vert } {1 + \vert x - y\vert }\). For each \(x \in \mathbb{R}\), we can define ∥x∥ = d(x, 0). Explain why ∥⋅ ∥ is not a norm on \(\mathbb{R}\).
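The answer to Example 3.3 is that homogeneity fails: with ∥x∥ = d(x, 0) = | x | ∕(1 + | x | ), in general ∥αx∥ ≠ | α | ∥x∥. A quick numerical check (the sample values are our own):

```python
def d_norm(x):
    # candidate "norm" built from the metric d(x, y) = |x - y| / (1 + |x - y|)
    return abs(x) / (1.0 + abs(x))

x, alpha = 3.0, 2.0
lhs = d_norm(alpha * x)   # ||2x|| = 6/7
rhs = alpha * d_norm(x)   # 2 ||x|| = 2 * 3/4 = 3/2
assert abs(lhs - rhs) > 0.5  # homogeneity fails, so this is not a norm
```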
Example 3.4.
Let \(\phi: [a,b] \rightarrow \mathbb{R}^{n}\) be continuous and let S be the set of continuous functions \(f: [a,c] \rightarrow \mathbb{R}^{n}\) with c > b and with f(t) = ϕ(t) for a ≤ t ≤ b. Define \(\rho (f,g) =\| f - g\| =\sup _{a\leq t\leq c}\vert f(t) - g(t)\vert\) for f, g ∈ S. Then (S, ρ) is a complete metric space but not a Banach space, since S is not a vector space: if f, g ∈ S, then \(f + g\) equals 2ϕ on [a, b], so \(f + g\) is not in S.
Example 3.5.
Let (S, ρ) be the space of continuous bounded functions \(f: (-\infty,0] \rightarrow \mathbb{R}\) with \(\rho (f,g) =\| f - g\| =\sup _{-\infty <t\leq 0}\vert f(t) - g(t)\vert\).
-
1.
Show that (S, ρ) is a Banach space .
-
2.
The set L = {f ∈ S: ∥f∥ ≤ 1, | f(u) − f(v) | ≤ | u − v | } is not compact in (S, ρ).
Proof.
(of 2.) Consider the sequence of functions defined
Then, the sequence converges pointwise to f = 1, but ρ(fn, f) = 1 for all \(n \in \mathbb{N}\). So, there is no subsequence of {fn} converging in the norm ∥⋅ ∥ (i.e., converging uniformly) to f.
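The display defining the sequence {fn} is missing from this copy. One standard choice with exactly the stated behavior, which we assume here purely for illustration, is fn(t) = e^{t∕n}: each fn lies in L, fn → 1 pointwise on (−∞, 0], yet ρ(fn, 1) = 1 for every n.

```python
import math

# Hypothetical concrete choice (the book's display is missing here):
# f_n(t) = exp(t / n) on (-inf, 0]. Then |f_n| <= 1, f_n is Lipschitz
# with constant 1/n <= 1, and f_n -> 1 pointwise, but
# sup_{t <= 0} |f_n(t) - 1| = 1 for every n, so convergence is not uniform.
def f(n, t):
    return math.exp(t / n)

n = 50
assert abs(f(n, -1.0) - 1.0) < 0.03      # pointwise convergence at t = -1
assert abs(f(n, -1e6) - 1.0) > 0.999     # but the sup distance stays near 1
```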
Example 3.6.
Let (S, ρ) be the space of continuous functions \(f: (-\infty,0] \rightarrow \mathbb{R}^{n}\) with
where
and | ⋅ | is the Euclidean norm on \(\mathbb{R}^{n}\)
-
1.
Then (S, ρ) is a complete metric space. The distance between any two functions is bounded by 1.
-
2.
(S, +, ⋅ ) is a vector space over \(\mathbb{R}.\)
-
3.
(S, ρ) is not a Banach space because ρ does not define a norm, since ρ(x, 0) = ∥x∥ does not satisfy ∥αx∥ = | α | ∥x∥.
-
4.
Let M and K be given positive constants. Then the set
$$\displaystyle{ L =\{ f \in S:\| f\| \leq M\;\mbox{ on}\;(-\infty,0],\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$is compact in (S, ρ).
Proof.
(of 4.) Let {fn} be a sequence in L. It is clear that if fn → f uniformly on compact subsets of (−∞, 0], then ρ(fn, f) → 0 as n → ∞. We begin by considering {fn} on [−1, 0]. There the sequence is uniformly bounded and equicontinuous, and so there is a subsequence, say {fn 1}, converging uniformly to some continuous f on [−1, 0]. Moreover, the argument of Example 3.2 shows that | f(t) | ≤ M and | f(u) − f(v) | ≤ K | u − v |. Next we consider {fn 1} on [−2, 0]. Again the sequence is uniformly bounded and equicontinuous, and so there is a subsequence, say {fn 2}, converging uniformly to some continuous f on [−2, 0]. Continuing this way, we arrive at the diagonal sequence Fn = fn n, which is a subsequence of {fn} and converges uniformly on compact subsets of (−∞, 0] to a function f ∈ L. This proves L is compact.
The next result is stated in the form of a theorem whose proof we leave to the reader.
Theorem 3.2.2.
Let g: (−∞, 0] → [1, ∞) be a continuous strictly decreasing function with g(0) = 1 and g(r) → ∞ as r → −∞. Let (S, | ⋅ |g) be the space of continuous functions \(f: (-\infty,0] \rightarrow \mathbb{R}^{n}\) for which
exists. Then
-
1.
(S, | ⋅ |g) is a Banach space .
-
2.
Let M and K be given positive constants. Then the set
$$\displaystyle{ L =\{ f \in S:\| f\| \leq M\;\mathit{\mbox{ on}}\;(-\infty,0],\vert f(u) - f(v)\vert \leq K\vert u - v\vert \} }$$is compact in \((S,\vert \cdot \vert _{g})\).
Definition 3.2.10.
Let (E, ρ) be a metric space and D: E → E. The operator or mapping D is a contraction if there exists an α ∈ (0, 1) such that
Theorem 3.2.3 (Contraction Mapping Principle).
Let (E, ρ) be a complete metric space and D: E → E a contraction operator. Then there exists a unique ϕ ∈ E with \(D(\phi ) =\phi.\) Moreover, if ψ ∈ E and {ψn} is defined inductively by \(\psi _{1} = D(\psi )\) and \(\psi _{n+1} = D(\psi _{n}),\) then \(\psi _{n} \rightarrow \phi\), the unique fixed point.
Proof.
Let y0 ∈ E and define a sequence {yn} in E by \(y_{1} = Dy_{0},\;y_{2} = Dy_{1} = D(Dy_{0}) = D^{2}y_{ 0},\mathop{\ldots },y_{n} = Dy_{n-1} = D^{n}y_{ 0}.\) Next we show that {yn} is a Cauchy sequence. To see this, if m > n, then
Thus, since α ∈ (0, 1), we have that
This shows the sequence {yn} is Cauchy. Since (E, ρ) is a complete metric space, {yn} has a limit, say y in E. Since the mapping D is continuous we have that
and y is a fixed point. It remains to show that y is unique. Let x, y ∈ E such that D(x) = x and D(y) = y. Then
which implies that
Since 1 −α ≠ 0, we must have ρ(x, y) = 0 and hence x = y. This completes the proof.
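The successive-approximation scheme in this proof is constructive, and a minimal sketch is easy to run. Here we take D(x) = cos x, which is a contraction on [0, 1] with α = sin 1 < 1 (our own choice of example, not one from the text):

```python
import math

def fixed_point(D, y0, tol=1e-12, max_iter=1000):
    """Iterate y_{n+1} = D(y_n); for a contraction on a complete metric
    space this converges to the unique fixed point."""
    y = y0
    for _ in range(max_iter):
        y_next = D(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("iteration did not converge")

# D(x) = cos x is a contraction on [0, 1] since |D'(x)| <= sin 1 < 1 there
phi = fixed_point(math.cos, 0.5)
assert abs(math.cos(phi) - phi) < 1e-10  # D(phi) = phi
```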
The contraction mapping principle has another useful form.
Theorem 3.2.4 (Contraction Mapping Principle, Banach Fixed Point Theorem).
Let (E, ρ) be a complete metric space and P: E → E such that P m is a contraction for some fixed positive integer m. Then there is a unique x ∈ E with P(x) = x.
3.3 Highly Nonlinear Delay Equations
We limit our study to the highly nonlinear delay difference equation, typified by
where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and r is a positive integer. More conditions on g are forthcoming. Results of this section can be found in [140]. In the paper [136], Raffoul considered the linear difference equation
and used fixed point theory to obtain asymptotic stability and periodicity results. It is worth mentioning here that (3.3.1) differs fundamentally from the above-mentioned equation due to the nonlinearity that the function g presents. Moreover, when inverting (3.3.1) in order to construct a mapping that is suitable for fixed point theory, one has to introduce a linear term, which results in the additional term x − g(x). Also, the results of this section use a nonconventional metric so that the contraction constant does not depend on the Lipschitz constant K that g will be required to satisfy. First we rewrite (3.3.1) to have it ready for inversion so that fixed point theory can be used. Rewrite (3.3.1) as
where △t represents the difference with respect to t. We must create a linear term in x in order to be able to invert. Thus, we add and subtract a(t + r)x(t) and get
For each t0 ≥ 0, equation (3.3.2) requires an initial function \(\psi: [t_{0} - r,t_{0}] \rightarrow \mathbb{R}\) in order to specify a solution x(t, t0, ψ). The computation is the same for any t0 ≥ 0, and so we take t0 = 0. Thus, we say x(t): = x(t, 0, ψ) is a solution of (3.3.2) if x(t) = ψ(t) on [−r, 0] and x(t) satisfies (3.3.2) for t ≥ 0. We begin with the following lemma, whose proof we omit.
Lemma 3.2.
Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . Then x(t) is a solution of equation (3.3.2) if and only if
The proof of Lemma 3.2 is easily obtained from the variation of parameters formula followed by summation by parts. It is assumed that the function g is continuous, locally Lipschitz with Lipschitz constant K, and odd. In addition, we assume that x − g(x) is nondecreasing and g(x) is increasing on an interval [0, L] for some L > 0. Under these assumptions, the functions g(x) and x − g(x) are locally Lipschitz with the same Lipschitz constant K > 0.
Note that if 0 < L1 < L, then the conditions on g hold on [−L1, L1]. Also note that if \(\phi: [-r,\infty ) \rightarrow \mathbb{R}\) with ϕ0 = ψ, and if | ϕ(t) | ≤ L, then for t ≥ 0 we have
since x − g(x) is odd and nondecreasing on [0, L]. Here ϕ0 = ψ(s) for − r ≤ s ≤ 0. Let
For ϕ ∈ S, we define \(P: S \rightarrow S\) by
and
Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that if L1 ∈ (0, L], then
We note that since g(x) is Lipschitz with Lipschitz constant K and g(0) = 0, then | g(x) | ≤ K | x |.
Theorem 3.3.1 ([140]).
Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . If (3.3.5) holds, then every solution x(t, 0, ψ) of (3.3.2) with small initial function ψ is bounded, provided P is a contraction.
Proof.
Let ϕ ∈ S. Then, by (3.3.5), there exists an α ∈ (0, 1) such that for t ≥ 0,
If we choose the initial function ψ small enough so that we have
then this yields
Thus, \(P: S \rightarrow S\). This shows that any solution x(t, 0, ψ) of (3.3.2) that lies in S is bounded. Next we show that P defines a contraction map. Using the regular maximum norm would force the contraction constant to depend on the Lipschitz constant K. Instead, we use the weighted norm | ⋅ |K, where for ϕ ∈ S we have
Proposition 3.2 ([140]).
Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) with \(\vert a(t + r)\vert \leq \frac{1} {2}.\) Then P is a contraction provided d > 3.
Proof.
Let ϕ, φ ∈ S. Then for t ≥ 0, we have
Our aim is to simplify (3.3.7). First we remind the reader that due to the conditions on g(x) and x − g(x), both functions share the same Lipschitz constant K. Moreover, since \(\vert a(t + r)\vert \leq \frac{1} {2}\), we have | a(t + r) | ≤ 1 − | a(t + r) | and | a(t + r) |2 ≤ 1 − | a(t + r) |2. Next, we consider the first term of (3.3.7)
Next we turn our attention to the second term of (3.3.7).
Now we deal with the last term of (3.3.7).
Substituting the above three expressions into (3.3.7) yields
which makes P a contraction for d > 3. Let \((\mathcal{X},\vert \cdot \vert )\) be the Banach space of bounded sequences \(\phi: [0,\infty ) \rightarrow \mathbb{R}\). Since S is a closed subset of the Banach space \(\mathcal{X}\), S is complete. Thus, P: S → S has a unique fixed point. This completes the proof.
We have the following corollary.
Corollary 3.1 ([140]).
Let g be odd, increasing on [0, L], satisfy a Lipschitz condition, and let x − g(x) be nondecreasing on [0, L]. Suppose that a(t + r) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . If (3.3.5) holds with \(\vert a(t + r)\vert \leq \frac{1} {2}\) , then the unique solution x(t, 0, ψ) of (3.3.2) with small initial function ψ(t) is bounded and the zero solution is stable.
Proof.
Let P be defined by (3.3.4). Then by Theorem 3.3.1, P maps S into S. Moreover, by Proposition 3.2, P is a contraction on S, and hence the unique solution of (3.3.2) is bounded by Theorem 3.3.1. It remains to show that the zero solution is stable. Let L be given by (3.3.6) and set 0 < ε < L. Choose \(\delta = \frac{\epsilon (1-\alpha )} {(1+K)\prod _{s=0}^{t-1}\vert a(s+r)\vert \sum _{s=-r}^{-1}\vert a(s+r)\vert }.\) Then for | ψ | < δ, we have by (3.3.6) that
Hence the zero solution is stable. This completes the proof.
We mention here that the requirement | a(t + r) | ≤ 1∕2 was necessitated by the use of the norm | ⋅ |K. However, in proving that P is a contraction we did not have to involve K in the contraction constant. We have the following application.
Example 3.7 ([140]).
Let a(t + r) ≠ 0 such that \(\vert a(t + r)\vert \leq \frac{1} {2}.\) Consider
In view of (3.3.2) we have
Let \(f(x) = x - x^{3}.\) Then f(x) is increasing on \((0, \frac{1} {\sqrt{3}})\) and has a maximum of \(\frac{2} {3\sqrt{3}}\) at \(x = \frac{1} {\sqrt{3}}.\) For any bounded initial sequence ψ on [−r, 0] with \(\vert \psi (t)\vert \leq \frac{1} {\sqrt{3}}\) we set
For ϕ ∈ S, we define \(P: S \rightarrow S\) by
and
Let ψ be small enough so that
Then
Moreover, it is obvious that the Lipschitz constant K = 1. Let d be a positive constant such that d > 3. Using
we have P is a contraction on S and hence all solutions of (3.3.8) are bounded and its zero solution is stable.
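The calculus facts quoted in Example 3.7 for f(x) = x − x³ can be verified directly; a small numerical check:

```python
import math

f = lambda x: x - x ** 3

c = 1.0 / math.sqrt(3.0)  # critical point: f'(x) = 1 - 3x**2 vanishes at x = c
assert abs(f(c) - 2.0 / (3.0 * math.sqrt(3.0))) < 1e-12  # maximum 2/(3*sqrt(3))

# f is increasing on (0, 1/sqrt(3)): sampled values are strictly increasing
xs = [i * c / 100.0 for i in range(101)]
vals = [f(x) for x in xs]
assert all(vals[i] < vals[i + 1] for i in range(100))
```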
3.4 Multiple and Functional Delays
In this section, we consider the multiple and functional delays difference equation
where \(a_{j}: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and \(\tau _{j}: \mathbb{Z}^{+} \rightarrow \mathbb{Z}^{+}\) with n −τj(n) → ∞ as n → ∞. For each n0, define mj(n0) = inf{s −τj(s): s ≥ n0} and m(n0) = min{mj(n0) : 1 ≤ j ≤ N}. In [87], Islam and Yankson showed that the zero solution of the equation
is asymptotically stable with one of the assumptions being that
However, as pointed out in [136], condition (3.4.2) cannot hold for (3.4.1), since b(n) = 1 for all \(n \in \mathbb{Z}\). The results we obtain in this section overcome the requirement of (3.4.2). Let D(n0) denote the set of bounded sequences \(\psi: [m(n_{0}),n_{0}] \rightarrow \mathbb{R}\) with the maximum norm | | ⋅ | |. Also, let (B, | | ⋅ | | ) be the Banach space of bounded sequences \(\varphi: [m(n_{0}),\infty ) \rightarrow \mathbb{R}\) with the maximum norm. Define the inverse of n −τj(n) by gj(n), if it exists, and the set
where
For each \((n_{0},\psi ) \in \mathbb{Z}^{+} \times D(n_{0}),\) a solution of (3.4.1) through (n0, ψ) is a function \(x: [m(n_{0}),n_{0}+\alpha ] \rightarrow \mathbb{R}\) for some constant α > 0 such that x(n) satisfies (3.4.1) on [n0, n0 + α] and x(n) = ψ(n) for n ∈ [m(n0), n0]. We denote such a solution by x(n) = x(n, n0, ψ). For a fixed n0, we define
We begin by rewriting (3.4.1) as
where △n represents the difference with respect to n. But (3.4.3) implies that
which is equivalent to
Suppose that Q(n) ≠ 0 for all \(n \in \mathbb{Z}^{+}\) and the inverse function gj(n) of n −τj(n) exists. Then x(n) is a solution of (3.4.1) if and only if
To see this we have by the variation of parameters formula
Using the summation by parts formula we obtain
Substituting (3.4.6) into (3.4.5) gives the desired result. We have the following theorem, which is due to Yankson [166].
Theorem 3.4.1 ([166]).
Suppose that the inverse function gj(n) of n −τj(n) exists, and assume there exists a constant α ∈ (0, 1) such that
Moreover, assume that there exists a positive constant M such that
Then the zero solution of (3.4.1) is stable.
Proof.
Let ε > 0 be given. Choose δ > 0 such that
Let ψ ∈ D(n0) such that ∣ψ(n)∣ ≤ δ. Define S = { φ ∈ B : φ(n) = ψ(n) if n ∈ [m(n0), n0], ∥φ ∥ ≤ ε}. Then (S, ∥⋅ ∥) is a complete metric space, where ∥⋅ ∥ is the maximum norm.
Define the mapping P: S → S by
and
We first show that P maps from S to S.
Thus P maps from S into itself. We next show that Pφ is continuous.
Let φ, ϕ ∈ S. Given any ε > 0, choose \(\delta = \frac{\epsilon }{\alpha }\). Then, for | | φ −ϕ | | < δ,
This shows that Pφ is continuous. Finally, we show that P is a contraction.
Let φ, η ∈ S. Then
This shows that P is a contraction. Thus, by the contraction mapping principle, P has a unique fixed point in S which solves (3.4.3) and for any φ ∈ S, ∥Pφ∥ ≤ ε. This proves that the zero solution of (3.4.3) is stable.
In the next theorem we address the asymptotic stability of the zero solution.
Theorem 3.4.2 ([166]).
Assume that the hypotheses of Theorem 3.4.1 hold. Also assume that
Then the zero solution of (3.4.3) is asymptotically stable.
Proof.
We have already proved that the zero solution of (3.4.3) is stable. Let ψ ∈ D(n0) such that | ψ(n) | ≤ δ and define
Define P: S ∗ → S ∗ by (3.4.8). From the proof of Theorem 3.4.1, the map P is a contraction and for every φ ∈ S ∗, | | Pφ | | ≤ ε.
Next we show that (Pφ)(n) → 0 as n → ∞. The first term on the right-hand side of (3.4.8) goes to zero because of condition (3.4.9). It is clear from (3.4.7) and the fact that φ(n) → 0 as n → ∞ that
Now we show that the last term on the right-hand side of (3.4.8) goes to zero as n → ∞. Since φ(n) → 0 and n −τj(n) → ∞ as n → ∞, for each ε1 > 0 there exists an N1 > n0 such that s ≥ N1 implies | φ(s −τj(s)) | < ε1 for j = 1, 2, …, N. Thus for n ≥ N1, the last term I3 in (3.4.8) satisfies
By (3.4.9), there exists N2 > N1 such that n ≥ N2 implies
Applying (3.4.7) gives | I3 | ≤ ε1 + ε1 α < 2ε1. Thus, I3 → 0 as n → ∞. Hence (Pφ)(n) → 0 as n → ∞, and so Pφ ∈ S ∗.
By the contraction mapping principle, P has a unique fixed point that solves (3.4.3) and goes to zero as n goes to infinity. Therefore the zero solution of (3.4.3) is asymptotically stable.
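To illustrate the kind of asymptotic stability established in Theorem 3.4.2, one can simulate a single-delay instance of the form (3.4.1) with a constant coefficient; the values a = 0.1 and τ = 2 below are our own toy choices, not taken from [166]:

```python
# Toy simulation: x(n+1) = x(n) - a * x(n - tau) with a small constant
# initial sequence psi. For a = 0.1, tau = 2, all characteristic roots lie
# inside the unit circle, so solutions stay small and decay to zero.
a, tau, N = 0.1, 2, 300
x = [0.01] * (tau + 1)          # initial sequence psi on [-tau, 0]
for n in range(tau, tau + N):
    x.append(x[n] - a * x[n - tau])

assert max(abs(v) for v in x) <= 0.02  # stability: the solution stays small
assert abs(x[-1]) < 1e-6               # asymptotic stability: it decays to 0
```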
3.5 Neutral Volterra Equations
The results of this section pertain to asymptotic stability of the zero solution of the neutral type Volterra difference equation
where \(a,c: \mathbb{Z} \rightarrow \mathbb{R},\;k: \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{R},\;h: \mathbb{Z} \rightarrow \mathbb{R}\), and \(g: \mathbb{Z} \rightarrow \mathbb{Z}^{+}\). Throughout this section we assume that a(n) and c(n) are bounded whereas 0 ≤ g(n) ≤ g0 for some integer g0. We also assume that h(0) = 0 and
For any integer n0 ≥ 0, we define \(\mathbb{Z}_{0}\) to be the set of integers in [−g0, n0]. Let \(\psi (n): \mathbb{Z}_{0} \rightarrow \mathbb{R}\) be an initial discrete bounded function.
Definition 3.5.1.
The zero solution of (3.5.1) is Lyapunov stable if for any ε > 0 and any integer n0 ≥ 0 there exists a δ > 0 such that | ψ(n) | ≤ δ on \(\mathbb{Z}_{0}\) implies | x(n, n0, ψ) | ≤ ε for n ≥ n0.
Definition 3.5.2.
The zero solution of (3.5.1) is asymptotically stable if it is Lyapunov stable and if for any integer n0 ≥ 0 there exists r(n0) > 0 such that | ψ(n) | ≤ r(n0) on \(\mathbb{Z}_{0}\) implies | x(n, n0, ψ) | → 0 as n → ∞.
Suppose that a(n) ≠ 0 for all \(n \in \mathbb{Z}\). Then x(n) is a solution of the equation (3.5.1) if and only if
where Φ(r) = c(r) − c(r − 1)a(r).
To see this, we first note that (3.5.1) is equivalent to
Summing the above equation from n0 to n − 1 gives
Or,
Thus,
Performing a summation by parts yields,
Also,
A substitution into the above expression gives,
Combining all expressions, we arrive at
This completes the process.
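The summation by parts step above rests on the standard discrete product rule; a brief reminder (with Δ denoting the forward difference operator used throughout):

```latex
% Discrete product rule:
%   \Delta\bigl(u(s)v(s)\bigr) = u(s)\,\Delta v(s) + v(s+1)\,\Delta u(s).
% Summing both sides from s = m to s = n-1 and telescoping the left side gives
% the summation by parts formula:
\sum_{s=m}^{n-1} u(s)\,\Delta v(s)
  = u(n)v(n) - u(m)v(m) - \sum_{s=m}^{n-1} v(s+1)\,\Delta u(s).
```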
Define
where
Then \(\left (S,\|\cdot \|\right )\) is a Banach space. Let \(\psi: (-\infty,n_{0}] \rightarrow \mathbb{R}\) be a given initial bounded sequence. Define the mapping H: S → S by
and
It should cause no confusion to write
We state Krasnoselskii’s fixed point theorem, which will be used to prove that the zero solution of (3.5.1) is asymptotically stable. We emphasize that it is the only appropriate theorem to use for such an equation, since the inversion of a neutral equation results in two mappings.
Theorem 3.5.1 (Krasnoselskii [97]).
Let \(\mathbb{M}\) be a closed convex nonempty subset of a Banach space \((\mathbb{B},\vert \vert \cdot \vert \vert ).\) Suppose that C and B map \(\mathbb{M}\) into \(\mathbb{B}\) such that
-
(i)
C is continuous and \(C\mathbb{M}\) is contained in a compact set,
-
(ii)
B is a contraction mapping.
-
(iii)
\(x,y \in \mathbb{M}\) implies \(Cx + By \in \mathbb{M}\) .
Then there exists \(z \in \mathbb{M}\) with z = Cz + Bz.
We are now ready to prove our main results. According to Theorem 3.5.1, we need to construct two mappings: one a contraction and the other compact. Hence we write the mapping H that is given by (3.5.2) as
where A, Q: S → S are given by
and
Theorem 3.5.2.
Assume the Lipschitz condition on h. Suppose that
and there exists α ∈ (0, 1) such that
Then the zero solution of (3.5.1) is asymptotically stable.
Proof.
First we show that for φ ∈ S, (Hφ)(n) → 0 as n → ∞, where H is the mapping defined by (3.5.2). The first term on the right of (3.5.2) goes to zero because of condition (3.5.5). The second term on the right goes to zero because of condition (3.5.6) and the fact that φ ∈ S.
It remains to show that the last term
on the right of (3.5.2) goes to zero as n → ∞. Let m > 0 such that for φ ∈ S, | φ(n − g(n)) | < σ for σ > 0. Also, since φ(n − g(n)) → 0 as n − g(n) → ∞, there exists an n2 > m such that for n > n2, | φ(n − g(n)) | < ε2 for ε2 > 0. Due to condition (3.5.5) there exists an n3 > n2 such that n > n3 implies that
Thus for n > n3, we have
Hence, H = Q + A maps S into S. Next we show that Q is a contraction. Let Q be given by (3.5.3). Then for φ, ζ ∈ S, we have from (3.5.7) that
Now we are ready to prove that the map A is compact. We note that the proof given in [167] for the compactness of A is not correct, since our map is defined on an unbounded interval, which rules out the use of the Ascoli–Arzelà theorem. First we show A is continuous. Let {φ^l} be a sequence in S such that
Since S is closed, we have φ ∈ S. Then by the definition of A
Thus, for φ ∈ S, we have by (3.5.4) that
The continuity of φ and h along with the Lebesgue dominated convergence theorem imply that
This shows A is continuous. Finally, we have to show that AS is precompact. Let {φ^l} be a sequence in S. Then for each \(n \in \mathbb{Z},\) {φ^l(n)} is a bounded sequence of real numbers, and hence has a convergent subsequence. By the diagonal process, we can construct a convergent subsequence \(\{\varphi ^{l_{k}}\}\) of {φ^l} in S. Since A is continuous, \(\{A\varphi ^{l_{k}}\}\) converges in AS. This means AS is precompact, which completes the proof of compactness. It remains to show that the zero solution is stable. Due to condition (3.5.5) there exists a positive constant ρ such that \(\big\vert \prod _{s=n_{0}}^{n-1}a(s)\big\vert \leq \rho.\) Let ε > 0 be given. Choose δ > 0 such that
Let ψ(n) be any given initial function such that | ψ(n) | < δ.
Define \(\mathbb{M} =\{\varphi \in S:\|\varphi \|<\epsilon \}.\) Let \(\varphi,\zeta \in \mathbb{M}\), then
It follows from the above work that all the conditions of the Krasnoselskii’s fixed point theorem are satisfied on \(\mathbb{M}\). Thus there exists a fixed point z in \(\mathbb{M}\) such that z = Az + Qz. This completes the proof.
We end this section with the following example.
Example 3.8 ([167]).
Consider the difference equation
In this example we take n0 = 0. We observe that
and hence condition (3.5.5) is satisfied. Condition (3.5.6) is also satisfied since
Next we verify condition (3.5.7).
Hence condition (3.5.7) is satisfied. All the conditions of Theorem 3.5.2 are satisfied and the zero solution of (3.5.1) is asymptotically stable.
3.6 Almost-Linear Volterra Equations
We consider the scalar Volterra difference equation
We assume that the functions h and g are continuous and that there exist positive constants H, H ∗, G, and G ∗ such that
and
Equation (3.6.1) will be called Almost-Linear if (3.6.2) and (3.6.3) hold. In [53] Burton introduced this concept of Almost-Linear equations for the continuous case and studied certain important properties of the resolvent of a linear Volterra equation. The work of this section is found in [150]. Our objective here is to apply the concept of Almost-Linear equations to Volterra difference equations and prove that the solutions of these Volterra difference equations are also bounded if they satisfy (3.6.2) and (3.6.3). Due to (3.6.2) and (3.6.3), the contraction mapping principle cannot be used, since our mapping cannot be made into a contraction. Therefore, we resort to the use of Krasnoselskii’s fixed point theorem. At the end of the section we will construct a suitable Lyapunov functional and refer to Chapter 2 to deduce that all solutions of (3.6.1) are bounded. It turns out that either method has advantages and disadvantages.
We begin with the following lemma which is essential to the construction of our mappings. Consider the general difference equation
Lemma 3.3.
Suppose 1 + Ha(n) ≠ 0 for all \(n \in [0,\infty ) \cap \mathbb{Z}.\) Then x(n) is a solution of equation (3.6.4) if and only if
Proof.
First we note that (3.6.4) is equivalent to
Summing equation (3.6.6) from 0 to n − 1 and dividing both sides by
gives (3.6.5).
Lemma 3.4.
Suppose 1 + Ha(n) ≠ 0 for all \(n \in [0,\infty ) \cap \mathbb{Z}.\) Then x(n) is a solution of equation (3.6.1) if and only if
Proof.
Rewrite equation (3.6.1) as
If we let
then the results follow from Lemma 3.3.
We rely on the following theorem for the relative compactness criterion, since the Ascoli–Arzelà theorem cannot be utilized here due to the unbounded domain.
Theorem 3.6.1 ([7]).
Let M be the space of all bounded continuous (vector-valued) functions on [0, ∞) and S ⊂ M. Then S is relatively compact in M if the following conditions hold:
-
(i)
S is bounded in M;
-
(ii)
the functions in S are equicontinuous on any compact interval of [0, ∞);
-
(iii)
the functions in S are equiconvergent, that is, given ε > 0, there exists a T = T(ε) > 0 such that \(\parallel \phi (t) -\phi (\infty ) \parallel _{\mathbb{R}^{n}} <\epsilon,\) for all t > T and all ϕ ∈ S.
We assume that
and for some positive constant L,
and
Moreover, we assume
and
Finally, choose a constant ρ > 0 such that
for all n ≥ 0. Let S be the Banach space of bounded sequences with the maximum norm. Let
Then M is a closed convex subset of S.
Define mappings \(\mathcal{A}: M \rightarrow S\) and \(\mathcal{B}: M \rightarrow M\) as follows.
and
We have the following lemma.
Lemma 3.5.
Suppose (3.6.11) and (3.6.13) hold. Then the map \(\mathcal{B}\) is a contraction from M into M.
Proof.
Let ϕ ∈ M. It follows from (3.6.11) and (3.6.13) that
Also, for ϕ, ψ ∈ M, we obtain
This proves that \(\mathcal{B}\) is a contraction from M into M.
Lemma 3.6.
The mapping \(\mathcal{A}\) is a continuous mapping on M.
Proof.
Let {ϕn} be any sequence of functions in M with ∥ϕn −ϕ ∥ → 0 as n → ∞. Then one can easily verify that
Lemma 3.7.
Suppose (3.6.2), (3.6.3), (3.6.8), (3.6.9), and (3.6.10) hold. Then \(\mathcal{A}(M)\) is relatively compact.
Proof.
We use Theorem 3.6.1 to prove the relative compactness of \(\mathcal{A}(M)\) by showing that all three conditions of Theorem 3.6.1 hold. To see that \(\mathcal{A}(M)\) is uniformly bounded, we use conditions (3.6.2), (3.6.3), (3.6.9), and (3.6.10) to obtain
This shows that \(\mathcal{A}(M)\) is uniformly bounded. To show the equicontinuity of \(\mathcal{A}(M)\), without loss of generality we let n1 > n2 for \(n_{1},n_{2} \in [0,\infty ) \cap \mathbb{Z}\) and use the notations
and
Then, we may write
Hence we have
This shows that \(\mathcal{A}(M)\) is equicontinuous.
To see that \(\mathcal{A}(M)\) is equiconvergent, we let
Then we have
where we used (3.6.8) which yields \(\lim _{n\rightarrow \infty }\prod _{s=n}^{\infty }(1 + Ha(s)) = 1.\)
Theorem 3.6.2.
Assume (3.6.2), (3.6.3), and (3.6.8)– (3.6.13) hold. Then (3.6.1) has a bounded solution.
Proof.
For ϕ, ψ ∈ M, we obtain
Thus, \(\mathcal{A}\phi +\mathcal{ B}\psi \in M.\) Moreover, Lemmas 3.5–3.7 show that the requirements of Krasnoselskii’s fixed point theorem are satisfied, and hence there exists a function x(n) ∈ M such that
This proves that (3.6.1) has a bounded solution x(n).
3.6.1 Application to Nonlinear Volterra Difference Equations
Consider the Volterra difference equation
where the functions h and g satisfy conditions (3.6.2) and (3.6.3), respectively. Let H, G, H ∗, and G ∗ be positive constants with G < 1 and H = 1. We choose ρ > 0 such that for any initial point x0, the inequality
holds. Then (3.6.19) has a bounded solution x(n) satisfying | | x | | ≤ ρ.
We let \(a(n) = -\frac{1} {2^{n}}\) and \(c(n,k) = \frac{4^{k}} {4(2^{n})n!}.\)
Thus,
This shows that condition (3.6.9) is satisfied with L = 1. Condition (3.6.8) can be easily verified. Moreover,
thus showing that condition (3.6.10) is satisfied. Next, we verify (3.6.11) as follows.
Finally, we verify (3.6.12).
Thus, by Theorem 3.6.2, Equation (3.6.19) has a bounded solution.
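As a numerical spot-check of the decay built into this example (a sketch only; the displayed conditions (3.6.8)–(3.6.12) are not reproduced here), the row sums of the kernel \(c(n,k) = 4^{k}/(4\cdot 2^{n}\,n!)\) admit the closed form \((4^{n}-1)/(12\cdot 2^{n}\,n!)\), which decays roughly like \(2^{n}/(12\,n!)\):

```python
from fractions import Fraction
from math import factorial

def a(n):
    # a(n) = -1 / 2**n from the example
    return Fraction(-1, 2**n)

def c(n, k):
    # c(n, k) = 4**k / (4 * 2**n * n!) from the example
    return Fraction(4**k, 4 * 2**n * factorial(n))

def row_sum(n):
    # sum_{k=0}^{n-1} |c(n, k)|, a finite geometric sum
    return sum(abs(c(n, k)) for k in range(n))

# Exact check of the closed form (4**n - 1)/(12 * 2**n * n!) for small n
for n in range(1, 15):
    assert row_sum(n) == Fraction(4**n - 1, 12 * 2**n * factorial(n))

# The row sums decay rapidly, the behavior the boundedness conditions rely on
assert float(row_sum(10)) < 1e-4
```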
3.7 Lyapunov Functionals or Fixed Points
In this section, we construct a Lyapunov functional and then refer to Theorem 2.1.1 to deduce boundedness of all solutions of (3.6.1). Then we will compare the results via an example with Theorem 3.6.2. First we rewrite (3.6.1) as
where \(b(n) = 1 - a(n).\) Before we state the next theorem we note that as a consequence of (3.6.2) and (3.6.3) we have, respectively, that
and
Theorem 3.7.1.
Suppose (3.7.2) and (3.7.3) hold and for some α ∈ (0, 1), we have that
Also, assume that
and
then solutions of (3.7.1) are bounded.
Proof.
Define
Then along solutions of (3.7.1), we have
where \(M = H^{{\ast}}\vert b(n)\vert + G^{{\ast}}\sum _{ j=n+1}^{\infty }\vert C(j,n)\vert\).
Let \(\varphi (n,s) =\sum _{ j=n}^{\infty }\vert C(j,s)\vert.\) Then, all the conditions of Theorem 2.1.1 are satisfied which implies that all solutions of (3.7.1) are bounded.
We note that Theorem 2.1.1 gives conditions under which all solutions of (3.7.1) are bounded, unlike Theorem 3.6.2 from which one can only conclude the existence of a bounded solution.
Next, we use the results of Section 3.6.1 to compare the conditions of Theorem 2.1.1 to those of Theorem 3.6.2. Let a(n), G, and H be given as in Theorem 3.7.1 and consider condition (3.7.4) for n ≥ 0. Then,
Next we perform the following calculations by using n! > 2n for n ≥ 4.
Thus, substitution of (3.7.9) into (3.7.8) yields,
This shows that condition (3.7.4) does not hold for all n ≥ 0. Hence, Theorem 2.1.1 gives no information regarding the solutions and yet Theorem 3.6.2 implies the existence of at least one bounded solution.
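The elementary bound \(n! > 2^{n}\) for \(n \geq 4\) used in the calculation above is easily checked (base case \(4! = 24 > 16 = 2^{4}\); induction step \((n+1)! = (n+1)\,n! > 2\cdot 2^{n}\)):

```python
from math import factorial

# Base case: 4! = 24 > 16 = 2**4.
# Induction step: (n+1)! = (n+1) * n! > 2 * 2**n whenever n >= 4.
for n in range(4, 50):
    assert factorial(n) > 2**n

# The bound fails below n = 4, e.g. 3! = 6 < 8 = 2**3,
# which is why the restriction n >= 4 is needed.
assert factorial(3) < 2**3
```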
3.8 Delay Functional Difference Equations
We consider a functional infinite delay difference equation and use fixed point theory to obtain necessary and sufficient conditions for the asymptotic stability of its zero solution. We will apply the results to nonlinear Volterra difference equations. Let \(\mathbb{R} = (-\infty,\infty )\), and let \(\mathbb{Z}^{+} = [0,\infty )\) and \(\mathbb{Z}^{-} = (-\infty,0]\) denote the nonnegative and nonpositive integers, respectively. We concentrate on the delay functional difference equation
where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and \(g: \mathbb{Z}^{+} \times \mathcal{C} \rightarrow \mathbb{R}\) is continuous, with \(\mathcal{C}\) being the Banach space of bounded functions \(\phi: \mathbb{Z}^{-}\rightarrow \mathbb{R}\) with the maximum norm \(\vert \vert \cdot \vert \vert.\) If \(x_{t} \in \mathcal{ C},\) then \(x_{t}(s) = x(t + s)\) for \(s \in \mathbb{Z}^{-}\).
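The segment notation can be made concrete with a small sketch (the sequence values and the helper below are hypothetical, chosen only to illustrate \(x_t(s) = x(t+s)\) and the maximum norm of a segment):

```python
# Sketch of the segment notation: for a sequence x stored as a dict keyed by
# integers, the segment x_t is the function s -> x(t + s) for s <= 0.
def segment(x, t):
    return lambda s: x[t + s]  # defined for s in Z^- with t + s in the domain of x

# A short stored history on -3..2 (hypothetical values)
x = {-3: 0.1, -2: 0.4, -1: 0.9, 0: 1.0, 1: 0.5, 2: 0.25}
x2 = segment(x, 2)
assert x2(0) == x[2] and x2(-3) == x[-1]

# The maximum norm of the segment over the stored history
norm = max(abs(x2(s)) for s in range(-5, 1))
assert norm == 1.0
```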
We will use fixed point theory to obtain necessary and sufficient conditions for the asymptotic stability of the zero solution of (3.8.1). Throughout this section we assume \(g(t,0) = 0\) so that x = 0 is a solution of (3.8.1). For every β > 0, we define the set
Given a function \(\psi: \mathbb{Z} \rightarrow \mathbb{R},\) we define \(\vert \vert \psi \vert \vert ^{[s,t]} =\max \{ \vert \psi (u)\vert: s \leq u \leq t\}.\) Moreover, for D > 0 a sequence \(x: (-\infty,D] \rightarrow \mathbb{R}\) is called a solution of (3.8.1) through \((t_{0},\phi ) \in \mathbb{Z}^{+} \times \mathcal{ C}\) if \(x_{t_{0}} =\phi\) and x satisfies (3.8.1) on [t0, D]. Due to the importance of the next result, we summarize it in the following lemma.
Lemma 3.8.
Suppose that a(t) ≠ 0 for all \(t \in \mathbb{Z}^{+}\) . Then x(t) is a solution of equation (3.8.1) if and only if
The proof of Lemma 3.8 follows easily from the variation of parameters formula given in Chapter 1, and hence we omit it.
In preparation for our next theorem, we let L > 0 be a constant, δ0 ≥ 0, and t0 ≥ 0. Let \(\phi \in \mathcal{ C}(\delta _{0})\) be fixed and set
Then, S is a complete metric space with metric
Define the mapping P: S → S by
and
It is clear that for φ ∈ S, Pφ is continuous.
Theorem 3.8.1 ([146]).
Assume the existence of positive constants α, L, and a sequence \(b: \mathbb{Z}^{+} \rightarrow [0,\infty )\) such that the following conditions hold:
-
(i)
\(a(t)\neq 0\) for all \(t \in \mathbb{Z}^{+}.\)
-
(ii)
\(\sum _{s=0}^{t-1}\Big\vert \prod _{ u=s+1}^{t-1}a(u)\Big\vert b(s) \leq \alpha <1\) for all \(t \in \mathbb{Z}^{+}.\)
-
(iii)
\(\vert g(t,\phi ) - g(t,\psi )\vert \leq b(t)\vert \vert \phi -\psi \vert \vert\) for all \(\phi,\psi \in \mathcal{ C}(L).\)
-
(iv)
For each ε > 0 and t1 ≥ 0, there exists a t2 > t1 such that \(t> t_{2}\) and \(x_{t} \in \mathcal{ C}(L)\) imply
$$\displaystyle{ \vert g(t,x_{t})\vert \leq b(t)\Big(\epsilon +\vert \vert x\vert \vert ^{[t_{1},t-1]}\Big). }$$
Then the zero solution of (3.8.1) is asymptotically stable if and only if
-
(v)
\(\vert \prod _{s=0}^{t-1}a(s)\vert \rightarrow 0\) as t → ∞.
Proof.
Suppose (v) holds and let \(K =\max _{t\geq t_{0}}\vert \prod _{s=t_{0}}^{t-1}a(s)\vert.\) Then K > 0 due to (i). Choose δ0 > 0 such that \(\delta _{0}K +\alpha L \leq L.\) Then for x(t) ∈ S and for fixed \(\phi \in \mathcal{ C}(\delta _{0})\) we have
Hence, \(\big(Px\big) \in \mathcal{ C}(L).\) Next we show that \(\big(Px\big)(t) \rightarrow 0\) as t → ∞. Let x ∈ S. As a consequence of \(x(t) \rightarrow 0\) as t → ∞, there exists t1 > t0 such that | x(t) | < ε for all t ≥ t1. Moreover, since | x(t) | ≤ L, for all \(t \in \mathbb{Z}\), by (iv) there is a t2 > t1 such that for t > t2 we have
Thus, for t ≥ t2, we have
By (v), there exists t3 > t2 such that
Thus, for t ≥ t3, we have
Hence, \(\big(Px\big)(t) \rightarrow 0\) as t → ∞. It remains to show that P is a contraction under the maximum norm. Let ζ, η ∈ S. Then
Or,
Thus, by the contraction mapping principle P has a unique fixed point in S which solves (3.8.1) with \(\phi \in \mathcal{ C}(\delta _{0})\) and x(t) = x(t, t0, ϕ) → 0 as t → ∞. We are left with showing that the zero solution of (3.8.1) is stable. Let ε > 0, ε < L, be given and choose 0 < δ < ε so that \(\delta K+\alpha \epsilon <\epsilon.\) Let x(t) = x(t, t0, ϕ) be a solution of (3.8.1) with | | ϕ | | < δ. By the choice of δ we have | x(t0) | < ε. Suppose, for contradiction, that there is a t ∗ ≥ t0 + 1 such that | x(t ∗) | ≥ ε and | x(s) | < ε for t0 ≤ s ≤ t ∗− 1. Then
which contradicts the definition of t ∗. Thus | x(t) | < ε for all t ≥ t0, so the zero solution of (3.8.1) is stable, and therefore asymptotically stable.
Conversely, suppose (v) does not hold. Then by (i) there exists a sequence {tn} such that for some positive constant q,
Now by (ii) we have that
from which we get that
This simplifies to
Thus the sequence \(\{\sum _{s=0}^{t_{n}-1}\Big(\vert \prod _{ u=0}^{s}a(u)\vert \Big)^{-1}b(s)\}\) is bounded and hence has a convergent subsequence. To keep the notation simple, we may assume
for some positive constant ω. Next we may choose a positive integer \(\widetilde{n}\) large enough so that
for all \(n \geq \widetilde{ n}.\)
Consider the solution \(x(t,t_{\widetilde{n}},\phi )\) with \(\phi (s) =\delta _{0}\) for \(s \leq \widetilde{ n}.\) Then \(\vert x(t)\vert \leq L\) for all \(t \geq \widetilde{ n}\) and
This implies
On the other hand, for \(n \geq \widetilde{ n}\), we also have
Hence, condition (v) is necessary. This completes the proof.
Now we apply the results of Theorem 3.8.1 to the nonlinear Volterra infinite delay equation
where \(a: \mathbb{Z}^{+} \rightarrow \mathbb{R}\) and \(G:\varOmega \times \mathbb{R} \rightarrow \mathbb{R}\), with \(\varOmega =\{ (t,s) \in \mathbb{Z}^{2}: t \geq s\}\), and G is continuous in x. The next theorem gives necessary and sufficient conditions for the asymptotic stability of the zero solution of (3.8.3).
Theorem 3.8.2.
Assume the existence of positive constants α, L, and a sequence \(p:\varOmega \rightarrow \mathbb{R}^{+}\) such that the following conditions hold:
-
(I)
\(a(t)\neq 0\) for all \(t \in \mathbb{Z}^{+},\)
-
(II)
\(\mathop{\max }\limits_{t \in \mathbb{Z}^{+}}\sum _{ s=0}^{t-1}\vert \prod _{ u=s+1}^{t-1}a(u)\vert \sum _{\tau =0}^{s-1}p(s,\tau ) \leq \alpha <1,\)
-
(III)
If \(\vert x\vert,\vert y\vert \leq L\) , then
$$\displaystyle{ \vert G(t,s,x) - G(t,s,y)\vert \leq p(t,s)\vert x - y\vert }$$
and G(t,s,0) = 0 for all (t, s) ∈ Ω,
-
(IV)
For each ε > 0 and t1 ≥ 0, there exists a t2 > t1 such that t ≥ t2 implies
$$\displaystyle{ \sum _{s=-\infty }^{t_{1}-1}p(t,s) \leq \epsilon \sum _{ s=-\infty }^{t-1}p(t,s). }$$
Then the zero solution of (3.8.3) is asymptotically stable if and only if
-
(V)
\(\vert \prod _{s=0}^{t-1}a(s)\vert \rightarrow 0\) as t → ∞.
Proof.
We only need to verify that (iii) and (iv) of Theorem 3.8.1 hold. First we remark that due to condition (III) we have that \(\vert G(t,s,x)\vert \leq p(t,s)L.\) Equation (3.8.3) can be put in the form of Equation (3.8.1) by letting
To verify (iii) we let \(b(t) =\sum _{ s=-\infty }^{t-1}p(t,s)\) and then for any functions \(\phi,\varphi \in \mathcal{ C}(L),\) we have
Next we verify (iv). Let ε > 0 and t1 ≥ 0 be given. By (IV) there exists a t2 > t1 such that
Let \(x_{t} \in \mathcal{ C}(L)\) and for t > t2 we have
This implies that (iv) is satisfied, and hence by Theorem 3.8.1, the zero solution of (3.8.3) is asymptotically stable if and only if (V) holds.
We end this section with the following example.
Example 3.9.
Consider the difference equation
In this example we take t0 = 0. We make sure all conditions of Theorem 3.8.2 are satisfied. We observe that \(a(t) = \frac{1} {2^{t}}\) and \(G(t,s,x) = 2^{s-t}x(s)\). Thus,
and hence condition (V) is satisfied. It is clear that \(p(t,s) = 2^{s-t}\). Next we make sure condition (II) is satisfied.
Hence (II) is satisfied. It remains to show that (IV) is satisfied. Let t1 ≥ 0 be given. Then
Thus all the conditions of Theorem 3.8.2 are satisfied and the zero solution of (3.8.4) is asymptotically stable.
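The sums in condition (II) can also be checked numerically (a sketch, using the data \(a(t) = 1/2^{t}\), \(p(t,s) = 2^{s-t}\) of this example). For each t the sum telescopes to the closed form \(1 - 2^{-t(t-1)/2}\), so it stays strictly below 1, and the product in condition (V) equals \(2^{-t(t-1)/2}\), which tends to 0:

```python
from fractions import Fraction

# Example data: a(t) = 1/2**t and p(t, s) = 2**(s - t)
def a(t):
    return Fraction(1, 2**t)

def p(t, s):
    return Fraction(2**s, 2**t)

def prod_a(lo, hi):
    # |prod_{u=lo}^{hi} a(u)|; the empty product is 1
    out = Fraction(1)
    for u in range(lo, hi + 1):
        out *= abs(a(u))
    return out

def cond_II_sum(t):
    # sum_{s=0}^{t-1} |prod_{u=s+1}^{t-1} a(u)| * sum_{tau=0}^{s-1} p(s, tau)
    return sum(prod_a(s + 1, t - 1) * sum(p(s, tau) for tau in range(s))
               for s in range(t))

for t in range(1, 15):
    # The sum telescopes to 1 - 2**(-t*(t-1)/2), hence is strictly below 1
    assert cond_II_sum(t) == 1 - Fraction(1, 2**(t * (t - 1) // 2))
    # Condition (V): |prod_{s=0}^{t-1} a(s)| = 2**(-t*(t-1)/2) -> 0 as t -> infinity
    assert prod_a(0, t - 1) == Fraction(1, 2**(t * (t - 1) // 2))
```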
3.9 Volterra Summation Equations
We shift our focus to different types of Volterra difference equations, which we call Volterra summation equations. Volterra integral equations were first studied by Miller [123], who proposed extending the use of Lyapunov functionals from Volterra integro-differential equations to integral equations. Years later, Burton took upon himself such a tedious task and successfully used Lyapunov functionals in the qualitative analysis of integral equations; for references we mention the papers [27, 29, 30]. Since then, the study of integral equations has been fully developed, unlike its counterpart, Volterra summation equations. All the results of this section are new and not published anywhere else. Volterra summation equations play a major role in the qualitative analysis of neutral difference equations. To see this we consider the neutral difference equation
If
then one would have to ask that the zero solution of D(n, 0) = 0 be stable in order for the zero solution of (3.9.1) to be stable. On the other hand, if we are interested in studying boundedness of solutions of (3.9.1), one would have to require that the solutions of
be bounded for a suitable function h(n) with suitable conditions. Equation (3.9.1) is typified by neutral equations of the form
Equation (3.9.2) will be studied in detail in Chapter 6. For more on neutral difference equations, we refer to [183]. Most of this section’s materials can be found in [142]. We consider the vector Volterra summation equation
where x and a are k-vectors, k ≥ 1, while C is a k × k matrix. To clear any confusion, we note that the summation term in (3.9.3) could have started at any initial time t0 ≥ 0. We will use the resolvent equation that was established on time scales in [2], combined with Lyapunov functionals and fixed point theory, to obtain boundedness of solutions and their asymptotic behavior. One of the major difficulties when using a suitable Lyapunov functional on a Volterra summation equation is relating the solution back to that Lyapunov functional. For \(x \in \mathbb{R}^{k}\), | x | denotes the Euclidean norm of x. For any k × k matrix A, define the norm of A by | A | = sup{ | Ax |: | x | ≤ 1}. Let X denote the set of functions \(\phi: [0,n] \rightarrow \mathbb{R}^{k}\) with ∥ϕ∥ = max{ | ϕ(s) |: 0 ≤ s ≤ n}. We have the following theorem regarding the existence of solutions of (3.9.3).
Theorem 3.9.1.
Assume the existence of two positive constants K and α ∈ (0, 1) such that
then there is a unique bounded solution of (3.9.3).
Proof.
Define a mapping D: X → X, by
It is clear that (X, ∥⋅ ∥) is a Banach space. Now for ϕ ∈ X with ∥ϕ∥ ≤ q for some positive constant q, we have that
Thus D: X → X. It remains to show that D defines a contraction mapping on X. Let ϕ, φ ∈ X. Then
Hence, D is a contraction, and by the contraction mapping principle it has a unique fixed point in X, which solves (3.9.3). This completes the proof.
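The contraction argument can be sketched numerically. The displayed form of (3.9.3) is not reproduced in this text, so we assume the scalar form \(x(t) = a(t) - \sum_{s=0}^{t-1} C(t,s)x(s)\) suggested by the rewrite preceding (3.9.8); the sequences a and C below are hypothetical, chosen so that the kernel row sums stay below α = 1/4 and the map D is a contraction:

```python
# Picard iteration for the mapping (D phi)(t) = a(t) - sum_{s<t} C(t, s) phi(s)
# on a finite horizon; hypothetical data with kernel row sums below 1/4 < 1.
N = 50  # finite horizon for the sketch

def a(t):
    return 1.0 / (1 + t)

def C(t, s):
    # row sums: 0.25 * sum_{s<t} 0.5**(t-s) < 0.25, so D contracts by at least 1/4
    return 0.25 * 0.5 ** (t - s)

def D(phi):
    return [a(t) - sum(C(t, s) * phi[s] for s in range(t)) for t in range(N)]

phi = [0.0] * N
for _ in range(60):  # Picard iteration; the error shrinks by a factor >= 4 per step
    phi = D(phi)

# The fixed point solves the equation: the residual is essentially zero.
residual = max(abs(phi[t] - (a(t) - sum(C(t, s) * phi[s] for s in range(t))))
               for t in range(N))
assert residual < 1e-9
```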
We have the following theorem in which we use a Lyapunov functional to drive solutions to zero.
Theorem 3.9.2.
Assume the existence of two positive constants K1 and α ∈ (0, 1) such that
then every solution x(t) of (3.9.3) satisfies x ∈ l1[0, ∞) and x(t) → 0, as t → ∞.
Proof.
Using (3.9.3) we obtain
Define the Lyapunov functional V by
Moreover,
A substitution of (3.9.6) in the above expression yields
Summing the above inequality from 0 to t − 1 gives
Since V (0) = 0 and α ∈ (0, 1) we arrive at
Letting t → ∞ we have
which implies that x(t) → 0 as t → ∞. This completes the proof.
We note that the use of Lyapunov functional has an advantage here due to the absence of a linear term in our original equation (3.9.3) which is necessary for the use of variation of parameters. The next result is about the existence of a unique periodic solution for the Volterra summation equation
where x and a are k-vectors, k ≥ 1, while C is a k × k matrix.
Theorem 3.9.3.
Assume the existence of a constant α ∈ (0, 1) such that
and there is a positive integer T such that
then there is a unique periodic solution of (3.9.7).
Proof.
Let X be the space of periodic sequences of period T. Then, it is clear that (X, ∥⋅ ∥) is a Banach space. Now for ϕ ∈ X, we define D: X → X by
It is clear that Dϕ is periodic of period T; that is, (Dϕ)(t + T) = (Dϕ)(t). It remains to show that D defines a contraction mapping on X. Let ϕ, φ ∈ X. Then
Hence, D is a contraction, and by the contraction mapping principle it has a unique fixed point in X, which solves (3.9.7). This completes the proof.
In the next theorem we return to (3.9.3) and rewrite it so we can show it has an asymptotically periodic solution. Thus we rewrite (3.9.3) in the form
Note that the term \(a(t) -\sum _{s=-\infty }^{t-1}C(t,s)x(s)\) produced a unique periodic solution, as we have seen in Theorem 3.9.3, and this indicates that for any bounded x, the term \(\sum _{s=-\infty }^{-1}C(t,s)x(s) \rightarrow 0\) as t → ∞. Hence it is intuitive to expect a solution x of (3.9.8) to be written as x = y + z, where y is periodic and z → 0 as t → ∞. We need to properly define our spaces. Let
and
We have the following theorem.
Theorem 3.9.4.
Suppose for ϕ ∈ PT,
and for each z ∈ Q,
Assume the existence of a constant α ∈ (0, 1) such that
and there is a positive integer T such that a(t + T) = a(t) and C(t + T, s + T) = C(t, s). Then (3.9.3) has a solution \(x(t) = y(t) + z(t)\) where y ∈ PT and z ∈ Q.
Proof.
Let X be the space of sequences \(\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\) such that ϕ ∈ X implies there is a y ∈ PT and z ∈ Q with ϕ = y + z. We claim that \(\big(X,\|\cdot \|\big)\) is a Banach space, where ∥⋅ ∥ is the maximum norm. To see this, we let {yn + zn} be a Cauchy sequence in \(\big(X,\|\cdot \|\big).\) Given an ɛ > 0 there is an N such that for n, m ≥ N we have
Since z ∈ Q we have for each ɛ > 0 and each z ∈ Q there is an L > 0 such that t ≥ L, implies that | z(t) | ≤ ɛ∕4. Fix n, m ≥ N and for the ɛ∕4 find L > 0 such that t ≥ L implies that
so that t ≥ L implies that
for all t since yn, ym ∈ PT. Since this holds for every pair with n ≥ N and m ≥ N, it follows that {yn} is a Cauchy sequence. The same argument can be repeated for the sequence {zn}. This completes the proof of the claim. Let ϕ = y + z where y ∈ PT and z ∈ Q. Define the mapping \(H: X \rightarrow X\) by
Then H is a contraction mapping by Theorem 3.9.3. Now we observe that
This defines mappings A and B on X. Note that \(B: X \rightarrow P_{T} \subset X\) and \(A: X \rightarrow Q \subset X.\) Since H defines a contraction mapping, it has a unique fixed point in X that solves (3.9.3). The proof is complete.
Next we discuss the resolvent equation for the Volterra summation equation (3.9.3) and use variation of parameters to obtain a variety of results concerning the solutions. Adivar and Raffoul [2] were the first to establish the existence of the resolvent of an equation that is similar to (3.9.3) on time scales. Due to the importance of such results, we will state them as they were presented in [2] on time scales. We then set the time scale equal to \(\mathbb{Z}\) to suit our equation (3.9.3). For a good reference on time scales we refer the reader to the well-known book by Martin Bohner and Al Peterson [18]. A time scale, denoted \(\mathbb{T}\), is a nonempty closed subset of the real numbers. The set \(\mathbb{T}^{\kappa }\) is derived from the time scale \(\mathbb{T}\) as follows: if \(\mathbb{T}\) has a left-scattered maximum M, then \(\mathbb{T}^{\kappa } = \mathbb{T}-\left \{M\right \}\); otherwise \(\mathbb{T}^{\kappa } = \mathbb{T}\). The delta derivative f Δ of a function \(f: \mathbb{T} \rightarrow \mathbb{R}\) is defined at a point \(t \in \mathbb{T}^{\kappa }\) by
In (3.9.9), \(\sigma: \mathbb{T} \rightarrow \mathbb{T}\) is the forward jump operator defined by σ(t): = \(\inf \left \{s\! \in \! \mathbb{T}: s> t\right \}\). Hereafter, we denote by μ(t) the step size function \(\mu: \mathbb{T} \rightarrow \mathbb{R}\) defined by μ(t): = σ(t) − t. A point \(t \in \mathbb{T}\) is said to be right dense (right scattered) if μ(t) = 0 (μ(t) > 0). A point is said to be left dense if \(\sup \left \{s \in \mathbb{T}: s <t\right \} = t\). We note that when the time scale is the set of integers, \(\mathbb{T} = \mathbb{Z}\), then σ(t) = t + 1 and μ(t) = 1. Based on the results of [2], we have the following. Given a linear system of integral equations of the form (3.9.10), where \(t_{0} \in \mathbb{T}^{\kappa }\) is fixed and the functions \(a: I_{\mathbb{T}} \rightarrow \mathbb{R}\), \(C: I_{\mathbb{T}} \times I_{\mathbb{T}} \rightarrow \mathbb{R}\),
the corresponding resolvent equation associated with C(t, s) is given by
If C is scalar valued, then so is R. If C is an n × n matrix, then so is R. Moreover, the solution of (3.9.10) in terms of R is given by the variation of parameters formula
It should cause no difficulties to take the initial time t0 = 0. With this in mind, if we set \(\mathbb{T} = \mathbb{Z},\) then equations (3.9.10) and (3.9.11) become
and
respectively. If C is scalar valued, then so is R. If C is an n × n matrix, then so is R. Moreover, the solution of (3.9.13) in terms of R is given by
For the remainder of this section we denote the vector space of bounded sequences \(\phi: \mathbb{Z}^{+} \rightarrow \mathbb{R}^{k}\) by \(\mathcal{BC}.\) The next theorem is an extension of Perron’s Theorem for integral equations over the reals to an arbitrary time scale. Its proof can be found in [2]. Only for the next theorem we define \(\mathcal{B}\) to be the space of bounded continuous functions on \(I_{\mathbb{T}} = [t_{0},\infty )_{\mathbb{T}}\) endowed with the supremum norm \(\|\cdot \|_{\mathcal{B}}\), given by
One can easily see that \((\mathcal{B},\|\cdot \|_{\mathcal{B}})\) is a Banach space.
Theorem 3.9.5 ([2]).
Let \(C: I_{\mathbb{T}} \times I_{\mathbb{T}} \rightarrow \mathbb{R}\) be a continuous real valued function on \(t_{0} \leq s \leq t <\infty \). If \(\int _{t_{0}}^{t}R(t,s)f(s)\varDelta s \in \mathcal{ BC}\) for each \(f \in \mathcal{ BC}\) , then there exists a positive constant K such that \(\int _{t_{0}}^{t}\vert R(t,s)\vert \;\varDelta s <K,\) for all \(t \in I_{\mathbb{T}}.\)
Just for the record, we restate Theorem 3.9.5 for \(\mathbb{T} = \mathbb{Z}\).
Theorem 3.9.6.
Let \(C: \mathbb{Z}^{+} \times \mathbb{Z}^{+} \rightarrow \mathbb{R}\) be a real valued sequence on 0 ≤ t0 ≤ s ≤ t < ∞. If \(\sum _{s=0}^{t-1}R(t,s)f(s) \in \mathcal{ BC}\) for each \(f \in \mathcal{ BC}\) , then there exists a positive constant K such that \(\sum _{s=0}^{t-1}\vert R(t,s)\vert <K\) for all \(t \in \mathbb{Z}^{+}.\)
Theorem 3.9.7.
Suppose R(t, s) satisfies (3.9.14) and that \(a \in \mathcal{ BC}\) . Then every solution x(t) of (3.9.13) is bounded if and only if
holds.
Proof.
Suppose (3.9.16) holds. Then, using (3.9.15), it is trivial to show that x(t) is bounded. If x(t) and a(t) are bounded, then from (3.9.15), we have
for some positive constant γ and the proof follows from Theorem 3.9.6.
The intuitive idea here is that if C(t, s) is well behaved, then the solution of (3.9.13) follows a(t).
Theorem 3.9.8.
Let C be a k × k matrix. Assume the existence of a constant α ∈ (0, 1) such that
-
(i)
If \(a \in \mathcal{ BC}\), then so is the solution x of (3.9.13); hence, (3.9.17) holds.
-
(ii)
Suppose, in addition, that for each T > 0 we have \(\sum\limits _{s=0}^{T}\vert C(t,s)\vert \rightarrow 0\) as t → ∞. If \(a(t) \rightarrow 0\) as t → ∞, then so does x(t) and \(\sum \limits_{s=0}^{t-1}R(t,s)a(s).\)
-
(iii)
\(\sum _{s=0}^{t-1}\vert R(t,s)\vert \leq \frac{\alpha } {1-\alpha }.\)
Proof.
The proof of (i) is the same as the proof of Theorem 3.9.1. For the proof of (ii) we define the set
For ϕ ∈ M, define the mapping Q by
Then
We already know that a(t) → 0 as t → ∞. Given an ɛ > 0 and ϕ ∈ M, find T such that | ϕ(t) | < ɛ if t ≥ T, and find d with | ϕ(t) | ≤ d for all t. For this fixed T, find η > T such that t ≥ η implies that \(\sum _{s=0}^{T-1}\vert C(t,s)\vert \leq \frac{\varepsilon } {d}.\) Then t ≥ η implies that
Thus, \(Q: M \rightarrow M\) and the fixed point satisfies x(t) → 0, as t → ∞, for every vector sequence a ∈ M. Using (3.9.15) we have
This completes the proof of (ii). Using (3.9.14) and (3.9.17), by changing the order of summation we have
Therefore,
That is,
Example 3.10.
Suppose there is a sequence \(r: \mathbb{Z}^{+} \rightarrow (0,1]\) with r(t) ↓ 0 such that
and
for some positive constant k. Then the unique solution x(t) of (3.9.13) is bounded and goes to zero as t approaches infinity. Moreover, \(\sum _{s=0}^{t-1}R(t,s)a(s) \rightarrow 0,\;\mbox{ as}\;t \rightarrow \infty.\)
Proof.
Let
Then \(\big(\mathcal{M},\vert \cdot \vert _{r}\big)\) is a Banach space . For \(\phi \in \mathcal{ M},\) define the mapping Q by
Then,
which shows that \(Q\phi \in \mathcal{ M}.\) Let \(\phi,\eta \in \mathcal{ M},\) then we readily have that
and so Q is a contraction on \(\mathcal{M}\), and therefore it has a unique fixed point x(t) in \(\mathcal{M}\) that solves (3.9.13). Moreover, \(\max _{t\in \mathbb{Z}^{+}} \frac{\vert x(t)\vert } {\vert r(t)\vert } <\infty\) implies that \(\vert x(t)\vert \leq k^{*}r(t) \rightarrow 0\) as t → ∞. Also, by (3.9.21) we have | a(t) | → 0 as t → ∞, and hence, using (3.9.15), we have
This completes the proof.
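Before moving on, the resolvent bound of Theorem 3.9.8 (iii), \(\sum _{s=0}^{t-1}\vert R(t,s)\vert \leq \alpha /(1-\alpha )\), can be sanity-checked in a finite-window matrix model, where a discrete Volterra kernel is a strictly lower-triangular matrix and the resolvent is the Neumann series \(R = \sum _{n\geq 1}C^{n}\). This is a hedged sketch with a synthetic random kernel; it does not reproduce the resolvent equation (3.9.14) itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 60, 0.4

# Strictly lower-triangular kernel C(t, s), 0 <= s < t < N, scaled so
# that every nonzero row satisfies sum_s |C(t, s)| <= alpha < 1.
C = np.tril(rng.uniform(-1.0, 1.0, (N, N)), k=-1)
row = np.abs(C).sum(axis=1)
C[row > 0] *= (alpha / row[row > 0])[:, None]

# Resolvent as the Neumann series R = C + C^2 + ... = (I - C)^{-1} C;
# the series is finite here since C is nilpotent.
R = np.linalg.solve(np.eye(N) - C, C)

# Row sums of |R| obey the geometric bound alpha/(1 - alpha) of (iii).
print(np.abs(R).sum(axis=1).max(), "<=", alpha / (1.0 - alpha))
```

The bound is exactly the geometric-series estimate: the induced max-row-sum norm of C is at most α, so the norm of R is at most \(\sum _{n\geq 1}\alpha ^{n} = \alpha /(1-\alpha )\).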
The next theorem relates the kernel to a kernel of convolution type. Also, unlike Theorem 3.9.2, we only require boundedness of a(t).
Theorem 3.9.9.
Assume the existence of a constant α ∈ (0, 1) such that
and let a(t) be a bounded sequence. Suppose there is a decreasing sequence \(\varPhi: [0,\infty ) \rightarrow (0,\infty )\) with Φ ∈ l1[0, ∞), and
If in addition there exists a positive constant K with
then the unique solution x(t) of (3.9.13) is bounded and
Proof.
Define the Lyapunov functional V by
Then along the solutions of (3.9.13) we have
Using (3.9.13) we arrive at
Hence,
for δ > 0. Replace t with s in the above expression and then multiply both sides by Φ(t − s) for 0 ≤ s ≤ t < ∞.
Suppose there is a t > 0 satisfying
Then summing from 0 to t − 1 followed with summation by parts and by noting that V (0) = 0 we arrive at
Hence (3.9.26), combined with (3.9.25) and (3.9.24), yields
where ∥a∥ is the maximum norm of a and q is a positive constant obtained from Φ ∈ l1[0, ∞). The above inequality implies that
and V (t) is bounded. Using (3.9.24) in V (t) gives
from which we conclude x(t) is bounded, since both V (t) and a(t) are bounded. Now from (3.9.15) we have that \(\sum _{s=0}^{t-1}\vert R(t,s)a(s)\vert \leq \vert x(t)\vert + \vert a(t)\vert\), and hence \(\sum _{s=0}^{t-1}\vert R(t,s)a(s)\vert\) is bounded; by Theorem 3.9.6, we have that \(\max _{t\in \mathbb{Z}^{+}}\sum _{s=0}^{t-1}\vert R(t,s)\vert <\infty.\) This completes the proof.
For Theorem 3.9.10 we assume (3.9.13) is scalar.
Theorem 3.9.10.
Assume the existence of constants α, β ∈ (0, 1) such that
and
If a ∈ l2[0, ∞) so is the solution x of (3.9.13).
Proof.
Define the Lyapunov functional V by
Then along the solutions of (3.9.13) we have that
Squaring both sides of (3.9.13) gives
This implies that
Substituting into △V gives
Summing the above inequality for 0 to n − 1 yields
and hence the result follows. This completes the proof.
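The l² conclusion of Theorem 3.9.10 can be checked numerically. The displayed form of (3.9.13) did not survive extraction, so the sketch below assumes the standard scalar form x(t+1) = a(t) + Σ_{s=0}^{t} C(t, s)x(s) with a geometric kernel whose row sums stay below α < 1; all parameter values are illustrative assumptions.

```python
import numpy as np

# Assumed stand-in for (3.9.13): x(t+1) = a(t) + sum_{s<=t} C(t,s) x(s),
# with C(t, s) = alpha*(1 - lam)*lam**(t - s), so sum_s |C(t, s)| <= alpha.
alpha, lam, N = 0.5, 0.5, 5000
a = 1.0 / (np.arange(N) + 1.0)      # a is in l^2: sum_t a(t)^2 < infinity
x = np.zeros(N)
S = 0.0                             # running sum_{s<=t} lam**(t-s) * x(s)
for t in range(N - 1):
    S = lam * S + x[t]
    x[t + 1] = a[t] + alpha * (1.0 - lam) * S

# Partial sums of x(t)^2 stay bounded, consistent with x in l^2.
print(np.sum(x**2))
```

The geometric kernel allows the convolution sum to be updated in O(1) per step, which keeps the experiment fast even for long horizons.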
3.10 The Need for Large Contraction
So far, we have been successful in using fixed point theorems, including the contraction mapping principle, to obtain various results concerning functional difference equations. It is naive to believe that every map can be defined so that it is a contraction, even under the strictest conditions. For example, consider
then for \(x,y \in \mathbb{R}\) we have that
and the contraction constant tends to one as \(x^{2} + y^{2} \rightarrow 0\). As a consequence, the regular contraction mapping principle fails to produce any results. This forces us to look for other alternatives, namely the concept of Large Contraction. We will restate the contraction mapping principle and Krasnoselskii's fixed point theorems in which the regular contraction is replaced with Large Contraction. Then, based on the notion of Large Contraction, we introduce two theorems to obtain boundedness and periodicity results in which Large Contraction is substituted for regular contraction.
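The displayed map and estimate above did not survive extraction; the classic instance with exactly this behavior is \(H(x) = x - x^{3}\), for which \(\vert Hx - Hy\vert = \vert x - y\vert \,\vert 1 - (x^{2} + xy + y^{2})\vert\), so the best contraction constant degenerates to one near the origin. A quick numerical confirmation (the specific map is an assumption here, not a quote from the text):

```python
# H(x) = x - x**3: the ratio |Hx - Hy| / |x - y| tends to 1 as
# x^2 + y^2 -> 0, so no single constant k < 1 works near the origin.
def H(x):
    return x - x**3

for scale in (1e-1, 1e-2, 1e-3):
    x, y = scale, -scale
    ratio = abs(H(x) - H(y)) / abs(x - y)   # equals 1 - scale**2 here
    print(scale, ratio)
```

With x = s and y = −s the ratio is exactly 1 − s², which makes the degeneration visible at a glance.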
Definition 3.10.1.
Let \((\mathcal{M},d)\) be a metric space and \(B:\mathcal{ M} \rightarrow \mathcal{ M}.\) The map B is said to be a large contraction if for \(\phi,\varphi \in \mathcal{ M}\) with ϕ ≠ φ we have d(Bϕ, Bφ) < d(ϕ, φ), and if for all ɛ > 0 there exists a δ ∈ (0, 1) such that
The next theorems are alternatives to the regular Contraction Mapping Principle and Krasnoselskii's fixed point theorem, in which Large Contraction is substituted for regular contraction. The proofs of the two theorems and the statement of Definition 3.10.1 can be found in [24].
Theorem 3.10.1.
Let \((\mathcal{M},\rho )\) be a complete metric space and B be a large contraction. Suppose there are an \(x \in \mathcal{ M}\) and an L > 0 such that \(\rho (x,B^{n}x) \leq L\) for all n ≥ 1. Then B has a unique fixed point in \(\mathcal{M}.\)
Theorem 3.10.2.
Let \(\mathcal{M}\) be a bounded convex nonempty subset of a Banach space \((\mathbb{B},\|\cdot \|).\) Suppose that A and B map \(\mathcal{M}\) into \(\mathbb{B}\) such that
-
i.
\(x,y \in \mathcal{ M}\) implies \(Ax + By \in \mathcal{ M}\) ;
-
ii.
A is compact and continuous;
-
iii.
B is a large contraction mapping.
Then there exists \(z \in \mathcal{ M}\) with z = Az + Bz.
Next, we consider the completely nonlinear difference equation
where \(a,p: \mathbb{Z} \rightarrow \mathbb{R}\). To invert our equation, we create a linear term by letting
It will become clear later on that H(x) is not a contraction and, as a consequence, the Contraction Mapping Principle cannot be used. Instead, we will show that H is a Large Contraction, and hence the mapping to be constructed will define a Large Contraction. We then use Theorem 3.10.2 to show that solutions of (3.10.1) are bounded. This allows us to put (3.10.1) in the form
Let x(0) = x0. Then, by the variation of parameters formula, one can easily show that, for t ≥ 0, x(t) is a solution of (3.10.3) if and only if
We begin with the following lemma.
Lemma 3.9.
Let ∥⋅ ∥ denote the maximum norm. If
then the mapping H defined by ( 3.10.2 ) is a large contraction on the set \(\mathbb{M}.\)
Proof.
For any reals a and b we have the following inequalities
and
If \(x,y \in \mathbb{M}\) with x ≠ y, then \(x(t)^{4} + y(t)^{4} < 1\). Hence, we arrive at
where we use the notations u = x(t) and v = y(t) for brevity. Now, we are ready to show that H is a large contraction on \(\mathbb{M}\). For a given ɛ ∈ (0, 1), suppose \(x,y \in \mathbb{M}\) with ∥x − y∥ ≥ ɛ. There are two cases:
-
a.
$$\displaystyle{ \frac{\varepsilon } {2} \leq \vert x(t) - y(t)\vert \;\text{ for some }\;t \in \mathbb{Z}, }$$
or
-
b.
$$\displaystyle{ \vert x(t) - y(t)\vert \leq \frac{\varepsilon } {2}\;\text{ for all }\;t \in \mathbb{Z}. }$$
If ɛ∕2 ≤ | x(t) − y(t) | for some \(t \in \mathbb{Z}\), then
or
For all such t, we get by (3.10.5) that
On the other hand, if | x(t) − y(t) | ≤ ɛ∕2 for all \(t \in \mathbb{Z}\), then along with (3.10.5) we find
Hence, in both cases we have
Thus, H is a large contraction on the set \(\mathbb{M}\) with \(\delta =\min \left \{1 -\varepsilon ^{4}/2^{7},1/2\right \}.\) The proof is complete.
Remark 3.6.
It is clear from inequality (3.10.5) that as (u 4 + v 4)∕2 → 0, the contraction constant approaches one. Hence, H(x) does not define a contraction mapping, as we claimed before.
For \(\psi \in \mathbb{M}\), we define the map \(B: \mathbb{M} \rightarrow \mathbb{M}\) by
Lemma 3.10.
Assume for all \(t \in \mathbb{Z}\)
If H is a large contraction on \(\mathbb{M}\) , then so is the mapping B.
Proof.
It is easy to see that
By Lemma 3.9, H is a large contraction on \(\mathbb{M}.\) Thus, for \(x,y \in \mathbb{M}\) with x ≠ y, we have ∥Hx − Hy∥ ≤ ∥x − y∥. Hence,
Taking the maximum norm over the set [0, ∞), we get that ∥Bx − By∥ ≤ ∥x − y∥. Now, from the proof of Lemma 3.9, for a given ɛ ∈ (0, 1), suppose \(x,y \in \mathbb{M}\) with ∥x − y∥ ≥ ɛ. Then \(\delta =\min \left \{1 -\varepsilon ^{4}/2^{7},1/2\right \},\) which implies that 0 < δ < 1. Hence, for all such ɛ > 0 we know that
Therefore, using (3.10.7), one easily verifies that
The proof is complete.
We arrive at the following theorem in which we prove boundedness.
Theorem 3.10.3.
Assume (3.10.7). Then (3.10.3) has a unique solution in \(\mathbb{M}\) which is bounded.
Proof.
\((\mathbb{M},\|\cdot \|)\) is a complete metric space of bounded sequences. For \(\psi \in \mathbb{M}\) we must show that \((B\psi )(t) \in \mathbb{M}.\) From (3.10.6) and the fact that
we have
This shows that \((B\psi )(t) \in \mathbb{M}.\) Lemma 3.10 implies the map B is a large contraction, and hence, by Theorem 3.10.1, the map B has a unique fixed point in \(\mathbb{M}\), which is a solution of (3.10.3). This completes the proof.
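The fixed point obtained here comes from a variation of parameters representation. The displays (3.10.2)–(3.10.4) are not reproduced in this copy, so the sketch below checks the formula on the generic scalar model x(t+1) = b x(t) + p(t), a hypothetical stand-in:

```python
# Variation of parameters for x(t+1) = b*x(t) + p(t):
#     x(t) = b**t * x0 + sum_{s=0}^{t-1} b**(t-1-s) * p(s).
b, x0 = 0.7, 1.0
p = lambda s: 1.0 / (s + 1)

x = x0
for t in range(20):           # direct recursion up to t = 20
    x = b * x + p(t)

t = 20                        # closed form evaluated at the same t
closed = b**t * x0 + sum(b**(t - 1 - s) * p(s) for s in range(t))
print(x, closed)              # the two values agree
```

The agreement of the recursion and the closed form is exactly the "if and only if" content of the representation.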
Next we use Theorem 3.10.2 and prove the existence of a periodic solution of the nonlinear delay difference equation
where r is a positive integer and
and T is the least positive integer for which these hold. As before, for the sake of inversion, we rewrite (3.10.8) as
where
We begin with the following lemma, whose proof we omit.
Lemma 3.11.
Suppose that \(1 -\prod _{s=t-T}^{t-1}a(s)\neq 0\) for all \(t \in \mathbb{Z}\). Then x(t) is a solution of (3.10.10) if and only if
Let \(P_{T}\) be the set of all sequences x(t) that are periodic in t of period T. Then \((P_{T},\|\cdot \|)\) is a Banach space when endowed with the maximum norm
Set
Obviously, \(\mathbb{M}\) is a bounded and convex subset of the Banach space \(P_{T}\). Let the map \(A: \mathbb{M} \rightarrow P_{T}\) be defined by
In a similar way, we set the map \(B: \mathbb{M} \rightarrow P_{T}\) by
It is clear from (3.10.13) and (3.10.14) that Aφ and Bψ are T-periodic in t. For simplicity we let
Let
For \(x \in \mathbb{M},\) we have
and therefore,
and
We have the following theorem.
Theorem 3.10.4.
Suppose G(u, ψ(u − r)) is given by (3.10.15). Assume for all \(t \in \mathbb{Z}\)
Then (3.10.8) has a periodic solution.
Proof.
Using condition (3.10.17) and a similar argument as in Lemma 3.9, one can easily show that B is a large contraction, since H is a large contraction. Also, the map A is continuous and maps bounded sets into compact sets; hence it is compact. Moreover, for \(\varphi,\psi \in \mathbb{M}\), we have by (3.10.17) that
Hence an application of Theorem 3.10.2 implies the existence of a periodic solution in \(\mathbb{M}\). This completes the proof.
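The inversion behind Lemma 3.11 can be verified numerically on the linear skeleton x(t+1) = a(t)x(t) + g(t), an assumed stand-in since the displays (3.10.10)–(3.10.12) are missing here. For T-periodic a and g with \(1 -\prod _{s=t-T}^{t-1}a(s)\neq 0\), the standard candidate is \(x(t) = \big(1 -\prod _{s=t-T}^{t-1}a(s)\big)^{-1}\sum _{u=t-T}^{t-1}\big(\prod _{s=u+1}^{t-1}a(s)\big)g(u)\):

```python
import math

T = 6
a = [0.5 + 0.3 * math.cos(2 * math.pi * k / T) for k in range(T)]
g = [math.sin(2 * math.pi * k / T) + 0.2 for k in range(T)]
A = lambda k: a[k % T]          # T-periodic extensions
G = lambda k: g[k % T]

def prod_a(lo, hi):             # product of a(s) for lo <= s < hi
    p = 1.0
    for s in range(lo, hi):
        p *= A(s)
    return p

def x(t):                       # candidate periodic solution
    denom = 1.0 - prod_a(t - T, t)
    return sum(prod_a(u + 1, t) * G(u) for u in range(t - T, t)) / denom

# x is T-periodic and solves x(t+1) = a(t)*x(t) + g(t) exactly.
for t in range(12):
    assert abs(x(t + T) - x(t)) < 1e-9
    assert abs(x(t + 1) - (A(t) * x(t) + G(t))) < 1e-9
```

Shifting the summation window by one step cancels all but a boundary term, which is the computation the omitted proof rests on.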
It is evident from Lemma 3.9 that proving that a map is a large contraction is tedious and long, and it is not practical to treat functions case by case. Consider the mapping H defined by
We observed in Lemma 3.9 that the properties of the function h in (3.10.18) play a substantial role in obtaining a large contraction on a convenient set. Next we state and prove a remarkable theorem that generalizes the concept of Large Contraction by providing easily checked sufficient conditions under which a mapping is a Large Contraction. The theorem is due to Adivar, Raffoul, and Islam; several authors have published it in their work without the proper citations. Let α ∈ (0, 1] be a fixed real number and define the set \(\mathbb{M}_{\alpha }\) by
-
H.1.
\(h: \mathbb{R} \rightarrow \mathbb{R}\) is continuous on [−α, α] and differentiable on (−α, α),
-
H.2.
The function h is strictly increasing on [−α, α],
-
H.3.
\(\sup \limits _{t\in (-\alpha,\alpha )}h^{{\prime}}(t) \leq 1\).
Theorem 3.10.5.
[Adivar-Raffoul-Islam [ 4 ] (Classifications of Large Contraction Theorem)] Let \(h: \mathbb{R} \rightarrow \mathbb{R}\) be a function satisfying (H.1-H.3). Then the mapping H in ( 3.10.18 ) is a large contraction on the set \(\mathbb{M}_{\alpha }\) .
Proof.
Let \(\phi,\varphi \in \mathbb{M}_{\alpha }\) with ϕ ≠ φ. Then ϕ(t) ≠ φ(t) for some t. Denote the set of all such t by D(ϕ, φ), i.e.,
For all t ∈ D(ϕ, φ), we have
Since h is a strictly increasing function we have
For each fixed t ∈ D(ϕ, φ) define the interval \(U_{t} \subset [-\alpha,\alpha ]\) by
The Mean Value Theorem implies that for each fixed t ∈ D(ϕ, φ) there exists a real number \(c_{t} \in U_{t}\) such that
By (H.2-H.3) we have
Hence, by (3.10.20)–(3.10.22) we obtain
for all t ∈ D(ϕ, φ). This implies a large contraction in the supremum norm. To see this, choose a fixed ɛ ∈ (0, 1) and assume that ϕ and φ are two functions in \(\mathbb{M}_{\alpha }\) satisfying
If \(\left \vert \phi (t) -\varphi (t)\right \vert \leq \frac{\varepsilon } {2}\) for some t ∈ D(ϕ, φ), then we get by (3.10.22) and (3.10.23) that
Since h is continuous and strictly increasing, the function \(h\left (u + \frac{\varepsilon } {2}\right ) - h(u)\) attains its minimum on the closed and bounded interval \(\left [-\alpha,\alpha \right ]\). Thus, if \(\frac{\varepsilon }{2} <\left \vert \phi (t) -\varphi (t)\right \vert\) for some t ∈ D(ϕ, φ), then by (H.2) and (H.3) we conclude that
where
Hence, (3.10.20) implies
Consequently, combining (3.10.24) and (3.10.25) we obtain
where
The proof is complete.
Example 3.11.
Let α ∈ (0, 1) and \(k \in \mathbb{N}\) be fixed, and let u ∈ (−1, 1).
-
1.
The condition (H.2) is not satisfied for the function \(h_{1}(u) = \frac{1} {2k}u^{2k}.\)
-
2.
The function \(h_{2}(u) = \frac{1} {2k+1}u^{2k+1}\) satisfies (H.1-H.3).
Proof.
Since \(h_{1}^{{\prime}}(u) = u^{2k-1} < 0\) for − 1 < u < 0, the condition (H.2) is not satisfied for h1. Evidently, (H.1)–(H.2) hold for h2. (H.3) follows from the fact that \(h_{2}^{{\prime}}(u) = u^{2k} \leq \alpha ^{2k} < 1\) on (−α, α), since α ∈ (0, 1).
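Theorem 3.10.5 and Example 3.11 can be checked numerically. Assuming, as is standard for a map of the form (3.10.18), that \(H(x) = x - h(x)\), take h = h₂ with k = 1, i.e. \(h(u) = u^{3}/3\): on \(\mathbb{M}_{\alpha }\) the map H is never expansive, and pairs at distance at least ɛ are contracted by a factor strictly below one. The grid and parameter values below are illustrative assumptions.

```python
import numpy as np

alpha, eps = 0.9, 0.2
h = lambda u: u**3 / 3.0          # Example 3.11's h_2 with k = 1
H = lambda u: u - h(u)            # assumed form of the map (3.10.18)

grid = np.linspace(-alpha, alpha, 401)
U, V = np.meshgrid(grid, grid)
num, den = np.abs(H(U) - H(V)), np.abs(U - V)

distinct = den > 0                # all pairs with u != v
far = den >= eps                  # pairs separated by at least eps
print(np.max(num[distinct] / den[distinct]))  # at most 1 on M_alpha
print(np.max(num[far] / den[far]))            # strictly below 1
```

Here \(\vert H(u) - H(v)\vert = \vert u - v\vert \,\vert 1 - (u^{2} + uv + v^{2})/3\vert\), so the contraction factor for separated pairs is controlled exactly as in the proof of Theorem 3.10.5.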
3.11 Open Problems
Open Problem 1.
Consider the neutral delay functional difference equation
where the function \(q: \mathbb{Z}^{+} \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\) is continuous and α and a are constants. For a > 1, (3.11.1) is equivalent to
We search for a solution of (3.11.1) having the property
Hence, by summing (3.11.2) from 0 to ∞ we get the following advanced type Volterra difference equation,
which is an indication to study the general advanced type Volterra difference equation
Under suitable conditions, one might explore the boundedness of solutions and the existence of periodic solutions. Finding suitable Lyapunov functionals to imply the results is almost impossible. It is my suggestion that the use of fixed point theory would be fruitful. For instance, if the right spaces are set up, one has the choice of using the contraction mapping principle, Theorem 3.5.1, or the following Krasnoselskii-Schaefer theorem.
Theorem 3.11.1 (Krasnoselskii-Schaefer Theorem [25]).
Let (S, ∥⋅ ∥) be a Banach space . Suppose B: S → S is a contraction map, and A: S → S is continuous and maps bounded sets into compact sets. Then either
-
(i)
\(x =\lambda B(\frac{x} {\lambda } ) +\lambda Ax\) has a solution in S for λ = 1, or
-
(ii)
the set of all such solutions, 0 < λ < 1, is unbounded.
It is noted that the Krasnoselskii-Schaefer theorem requires a priori bounds on solutions of a corresponding auxiliary equation. To obtain such bounds, one would have to construct a Lyapunov functional that is suitable for the theorem. For problem (3.11.4), the auxiliary equation is given by
Open Problem 2.
Extend the results of Section 3.9 to Volterra summation equations of the form
This is an unexplored area of research.
Open Problem 3.
We have seen in Chapter 3 that fixed point theory was successfully used to obtain asymptotic stability, whereas in Chapter 2 we obtained uniform asymptotic stability using Lyapunov functionals. To my knowledge, no one has been able to use fixed point theory to obtain uniform asymptotic stability. Such results would mainly rest on how the set S is defined. Being able to do so would revolutionize the concept of fixed point theory and open the door for new research in differential/difference equations and even in dynamical systems on time scales.
Open Problem 4.
Consider the following system of two neurons:
where \(x_{1}(n)\) and \(x_{2}(n)\) denote the activations of the two neurons, \(\tau _{i}\ (i = 1,2)\) and τ denote the synaptic transmission delays, \(a_{1}\) and \(a_{2}\) are the synaptic coupling weights, and \(f: \mathbb{R} \rightarrow \mathbb{R}\) is the activation function with f(0) = 0.
Use either Lyapunov functionals (Chapter 2) or fixed point theory (Chapter 3) to analyze the system and compare both methods.
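The displayed system did not survive extraction; a common discrete-time two-neuron model consistent with the description (every parameter name and value below is an assumption, not taken from the text) can at least be simulated to see the kind of behavior the open problem asks about:

```python
import math

# Assumed model:
#   x1(n+1) = b*x1(n) + a1*f(x2(n - tau1)),
#   x2(n+1) = b*x2(n) + a2*f(x1(n - tau2)),
# with activation f satisfying f(0) = 0.
b, a1, a2 = 0.5, 0.4, -0.3
tau1, tau2 = 2, 3
f = math.tanh

hist = max(tau1, tau2)                  # length of the initial history
x1 = [0.3] * (hist + 1)
x2 = [-0.2] * (hist + 1)
for n in range(hist, hist + 500):
    x1.append(b * x1[n] + a1 * f(x2[n - tau1]))
    x2.append(b * x2[n] + a2 * f(x1[n - tau2]))

# Since |b| + max(|a1|, |a2|) = 0.9 < 1 and |tanh(u)| <= |u|, the sup of
# |x1|, |x2| over a delay window contracts, so both activations die out.
print(x1[-1], x2[-1])
```

With this parameter choice the zero solution is attracting; comparing a Lyapunov-functional proof of this decay with a fixed point proof is exactly the exercise proposed above.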
References
Adivar, M., and Raffoul, Y., Existence of resolvent for Volterra integral equations on time scales, Bull. Aust. Math. Soc. 82 (2010), 139–155.
Adivar, M., Islam, M., and Raffoul, Y., Separate contraction and existence of periodic solutions in totally nonlinear delay differential equations, Hacettepe Journal of Mathematics and Statistics 41 (1) (2012), 1–13.
Agarwal, R., and O'Regan, D., Infinite Interval Problems for Differential, Difference, and Integral Equations, Kluwer Academic, 2001.
Bohner, M., and Peterson, A., Dynamic Equations on Time Scales, An Introduction with Applications, Birkhäuser, Boston, 2001.
Burton, T. A., Stability and Periodic Solutions of Ordinary and Functional Differential Equations, Academic Press, New York, 1985.
Burton, T.A., Integral equations, implicit functions, and fixed points, Proc. Amer. Math. Soc. 124 (1996), 2383–2390.
Burton, T.A. and Kirk, C., A fixed point theorem of Krasnoselskii-Schaefer type, Math. Nachr., 189 (1998), 23–31.
Burton, T.A., Fixed points, Volterra equations, and Becker's resolvent, Acta Math. Hungar. 108 (2005), 261–281.
Burton, T.A., Integral equations, Volterra equations, and the remarkable resolvent, E. J. Qualitative Theory of Diff. Equ. (2006), No. 2, 1–17.
Burton, T.A., Integral equations, \(L^{p}\)-forcing, remarkable resolvent: Lyapunov functionals, Nonlinear Anal. 68, 35–46.
Elaydi, S.E., and Zhang, S., Periodic solutions of Volterra difference equations with infinite delay I: The linear case, Proceedings of the First International Conference on Difference Equations, Gordon and Breach (1994), 163–174.
Islam, M., and Yankson, E., Boundedness and stability in nonlinear delay difference equations employing fixed point theory, Electron. J. Qual. Theory Differ. Equ. 26 (2005).
Krasnoselskii, M.A., Positive Solutions of Operator Equations, Noordhoff, Groningen, 1964.
Miller, R.K., Nonlinear Volterra Integral Equations, Benjamin, New York, 1971.
Raffoul, Y., Stability in neutral nonlinear differential equations with functional delays using fixed point theory, Mathematical and Computer Modelling, 40(2004), 691–700.
Raffoul, Y., Stability and periodicity in discrete delay equations, J. Math. Anal. Appl. 324 (2006), 1356–1362.
Raffoul, Y., Stability in functional difference equations using fixed point theory, Communications of the Korean Mathematical Society 29 (1), (2014), 195–204.
Raffoul, Y., Fixed point theory in Volterra summation equations, preprint.
Raffoul, Y., Stability in functional difference equations with applications to infinite delay Volterra difference equations, preprint.
Raffoul, Y., and Yankson, E., Existence of bounded solutions for almost-linear Volterra difference equations using fixed point theory and Lyapunov functionals, Nonlinear Studies 21 (2014), 663–674.
Yankson, E., Stability in discrete equations with variable delays, Electron. J. Qual. Theory Differ. Equ. 2009, No. 8, 1–7.
Yankson, E., Stability of Volterra difference delay equations, Electron. J. Qual. Theory Differ. Equ. 2006, No. 20, 1–14.
Zhang, S., Stability of neutral delay difference systems, Computers Math. Applic. (2001) Vol. 42, pp. 291–299.
© 2018 Springer Nature Switzerland AG
Raffoul, Y.N. (2018). Fixed Point Theory in Stability and Boundedness. In: Qualitative Theory of Volterra Difference Equations. Springer, Cham. https://doi.org/10.1007/978-3-319-97190-2_3
Print ISBN: 978-3-319-97189-6
Online ISBN: 978-3-319-97190-2