
1 Introduction

Considering the most general divided difference derivative [5, 6],

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \mathcal{D} f (t(s))=\frac{f(t(s+\frac{1}{2}))-f(t(s-\frac{1}{2}))} {t(s+\frac{1}{2})-t(s-\frac{1}{2})}, &\displaystyle {} \end{array} \end{aligned} $$
(1)

which has the property that if f(t) = P n(t(s)) is a polynomial of degree n in t(s), then \(\mathcal {D} f (t(s))=\tilde {P}_{n-1}(t(s))\) is a polynomial of degree n − 1 in t(s), one is led to the following canonical forms for t(s), in order of increasing complexity:

$$\displaystyle \begin{aligned} \begin{array}{rcl} t(s)&\displaystyle =&\displaystyle t(0);{} \end{array} \end{aligned} $$
(2)
$$\displaystyle \begin{aligned} \begin{array}{rcl} t(s)&\displaystyle =&\displaystyle s;{} \end{array} \end{aligned} $$
(3)
$$\displaystyle \begin{aligned} \begin{array}{rcl} t(s)&\displaystyle =&\displaystyle q^s;{} \end{array} \end{aligned} $$
(4)
$$\displaystyle \begin{aligned} \begin{array}{rcl} t(s)&\displaystyle =&\displaystyle {q^s+q^{-s}\over 2},\; q\in \mathbf{C}, s\in \mathbb{Z}.{} \end{array} \end{aligned} $$
(5)

When the function t(s) is given by (2)–(4), the divided difference derivative (1) leads to the ordinary differential derivative \(D f (t)=\frac {d}{dt}f(t)\), the finite difference derivative

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta f(s)=f(s+1)-f(s)=(e^{\frac{d}{ds}}-1) f(s) {} \end{array} \end{aligned} $$
(6)

and the q-difference derivative (or Jackson derivative [4])

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_{q} f(t)= \frac{f(qt)-f(t)}{qt-t}=\frac{q^{t\frac{d}{dt}}-1}{qt-t} f(t){} \end{array} \end{aligned} $$
(7)

respectively. When t(s) is given by (5), the corresponding derivative is usually referred to as the Askey-Wilson first order divided difference operator [1], which one can write as:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \mathcal{D} f (x(z))=\frac{f(x(q^{\frac{1}{2}}z))-f(x(q^{-\frac{1}{2}}z))} {x(q^{\frac{1}{2}}z)-x(q^{-\frac{1}{2}}z)},&\displaystyle {} \end{array} \end{aligned} $$
(8)

where \(x(z)=\frac {z+z^{-1}}{2}\), having in mind that z = q s.
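Each of these operators lowers polynomial degree by one, which is easy to check on \(f(t)=t^{2}\). The following Python sketch is illustrative only; all helper names are ours, not from the references:

```python
# Illustrative sketch (ours): the three discrete derivatives (6)-(8)
# applied to f(t) = t^2; each returns a degree-1 polynomial evaluated
# at the corresponding grid point.

def delta(f, s):
    """Finite difference derivative (6): f(s+1) - f(s)."""
    return f(s + 1) - f(s)

def d_q(f, t, q):
    """Jackson q-derivative (7): (f(qt) - f(t)) / (qt - t)."""
    return (f(q * t) - f(t)) / (q * t - t)

def x(z):
    """Askey-Wilson grid point x(z) = (z + 1/z) / 2."""
    return (z + 1 / z) / 2

def askey_wilson(f, z, q):
    """Askey-Wilson divided difference (8) of f, viewed as a function of x."""
    zp, zm = q ** 0.5 * z, z / q ** 0.5
    return (f(x(zp)) - f(x(zm))) / (x(zp) - x(zm))

sq = lambda t: t * t
# delta(sq, s) = 2s + 1, d_q(sq, t, q) = (q + 1) t, and
# askey_wilson(sq, z, q) = (q^(1/2) + q^(-1/2)) x(z): all of degree 1.
```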

The calculus related to the differential derivative, the continuous or differential calculus, is clearly the classical one. The calculus related to the derivatives (6)–(8) (difference, q-difference and q-nonuniform difference, respectively) is referred to as the discrete calculus. Its interest is twofold: on the one hand it generalizes the continuous calculus, and on the other hand it involves discrete variables.

This work is concerned with the difference calculus. We particularly aim to establish difference versions of the integral inequalities well known in differential calculus: Hölder, Cauchy-Schwartz, Minkowski, Grönwall, Bernoulli, and Lyapunov. We note that these inequalities were proved in [3] for a more general difference operator than (6); however, apart from the classical recipes used for the inequalities of classical analysis (Hölder, Cauchy-Schwartz and Minkowski), our approach here is essentially different. It is based on the Lagrange method, and it can therefore be extended to the more general derivative (7) or even (8) (see [2]), the latter being, to the best of our knowledge, the most general one having the mentioned property of sending a polynomial of degree n to a polynomial of degree n − 1.

In the following lines, we first introduce the basic concepts of difference calculus and linear first order difference equations necessary for the sequel, and then study the mentioned integral inequalities.

2 Preliminaries

2.1 Difference Derivative and Integral

Consider again the difference derivative, that is, the derivative related to the grid in (3):

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta F(s)=F(s+1)-F(s)=f(s)&\displaystyle {} \end{array} \end{aligned} $$
(1)

Based on this derivative, one defines the integral, which is the inverse of the differentiation operation, as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \int_{s_{0}}^{s}f(s)d_{\Delta}s=^{def}\sum_{i=s_{0}}^{s-1}f(i).&\displaystyle {} \end{array} \end{aligned} $$
(2)

The defined integral admits the following properties:

Fundamental Principle of Analysis

One easily verifies that

$$\displaystyle \begin{aligned} \begin{array}{rcl} (i)&\displaystyle \Delta\left( \int_{s_{0}}^{s}f(s)d_{\Delta}s\right) =\Delta\left(\sum_{i=s_{0}}^{s-1}f(i)\right)=f(s),&\displaystyle {} \end{array} \end{aligned} $$
(3)
$$\displaystyle \begin{aligned} \begin{array}{rcl} (ii)&\displaystyle \int_{s_{0}}^{s}\left(\Delta F(s)\right) d_{\Delta}s =\sum_{i=s_{0}}^{s-1}\Delta F(i)=F(s)-F(s_{0}). &\displaystyle {} \end{array} \end{aligned} $$
(4)
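Both identities can be confirmed numerically. A minimal Python sketch (the helper names and sample functions are our own), with the integral (2) implemented as its defining sum:

```python
# Minimal sketch (ours) of the difference integral (2) and the
# fundamental principle (3)-(4).

def delta(h, s):
    """Difference derivative: h(s+1) - h(s)."""
    return h(s + 1) - h(s)

def integral(h, s0, s):
    """Difference integral (2): sum of h(i) for i = s0 .. s-1 (0 if s <= s0)."""
    return sum(h(i) for i in range(s0, s))

f = lambda s: s * s - 3 * s   # arbitrary integrand
F = lambda s: 2 ** s          # arbitrary function to differentiate
```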

Integration by Parts

Integrating both sides of the equality

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle f(s)\Delta g(s)=\Delta(f(s)g(s))-g(s+1)\Delta f(s)&\displaystyle {} \end{array} \end{aligned} $$
(5)

and applying (4), one gets

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \int_{s_{0}}^{s}f(s)\Delta g(s)d_{\Delta}s=\left[f(s)g(s)\right]_{s_{0}}^{s}-\int_{s_{0}}^{s}g(s+1)\Delta f(s)d_{\Delta}s,&\displaystyle {} \end{array} \end{aligned} $$
(6)

which is the integration by parts formula.
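As a quick sanity check of (6) on sample sequences (a sketch; the sample functions are our own choice):

```python
# Verify the summation-by-parts formula (6) exactly on integer data (ours).

def delta(h, s):
    return h(s + 1) - h(s)

def integral(h, s0, s):
    return sum(h(i) for i in range(s0, s))

f = lambda s: 3 * s + 1
g = lambda s: s * s
a, b = 0, 6

lhs = integral(lambda i: f(i) * delta(g, i), a, b)
rhs = (f(b) * g(b) - f(a) * g(a)) - integral(lambda i: g(i + 1) * delta(f, i), a, b)
```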

Positivity of the Integral

We finally remark that when f(s) is positive, the integral in (2) is clearly positive, which gives the following property and its corollary useful for the sequel.

Property 2.1

If \(f(s){\geqslant } 0\) and \(s_{1} < s_{2}\), then

$$\displaystyle \begin{gathered} \int_{s_{1}}^{s_{2}}f(s)d_{\Delta}s{\geqslant} 0. {} \end{gathered} $$
(7)

Corollary 2.1

If \(f(s){\geqslant } g(s)\) and \(s_{1} < s_{2}\), then

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \int_{s_{1}}^{s_{2}}f(s)d_{\Delta}s{\geqslant}\int_{s_{1}}^{s_{2}}g(s)d_{\Delta}s.&\displaystyle {} \end{array} \end{aligned} $$
(8)

2.2 Linear Difference Equations of First Order

A linear difference equation of first order can be written as

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s+1)+b(s)&\displaystyle {} \end{array} \end{aligned} $$
(9)

or

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s)+b(s).&\displaystyle {} \end{array} \end{aligned} $$
(10)

Consider first the homogeneous equation corresponding to (9):

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s+1).&\displaystyle {} \end{array} \end{aligned} $$
(11)

Equation (11) gives

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s+1)=(\frac{1}{1-a(s)})y(s),&\displaystyle {} \end{array} \end{aligned} $$
(12)

which by recursion leads to

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=E_{a}(s_{0},s)y(s_{0}),&\displaystyle {} \end{array} \end{aligned} $$
(13)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} E_{a}(s_{0},s)=^{def}\left\{\begin{array}{ll} \prod_{i=s_{0}}^{s-1}\frac{1}{1-a(i)},& s> s_{0}\\ 1, &s{\leqslant} s_{0}\end{array}\right. {} \end{array} \end{aligned} $$
(14)

is a difference version of the exponential function (since Eq. (11) is a difference version of the differential equation y′(x) = a(x)y(x)). Consider now the homogeneous equation corresponding to (10):

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s).&\displaystyle {} \end{array} \end{aligned} $$
(15)

Equation (15) gives

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s+1)=(1+a(s))y(s),&\displaystyle {} \end{array} \end{aligned} $$
(16)

which by recursion leads to

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=e_{a}(s_{0},s)y(s_{0}),&\displaystyle {} \end{array} \end{aligned} $$
(17)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} e_{a}(s_{0},s)=^{def}\left\{\begin{array}{ll} \prod_{i=s_{0}}^{s-1}(1+a(i)),& s> s_{0}\\ 1, &s{\leqslant} s_{0}\end{array}\right. {} \end{array} \end{aligned} $$
(18)

is another difference version of the exponential function. Clearly, we have

Theorem 2.1

$$\displaystyle \begin{aligned} \begin{array}{rcl} e_{a}(s_{0},s)\, E_{-a}(s_{0},s)= e_{-a}(s_{0},s)\, E_{a}(s_{0},s)=1.{} \end{array} \end{aligned} $$
(19)
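Theorem 2.1 follows directly from the defining products (14) and (18), and is easy to confirm numerically. A Python sketch (the coefficient a(i) is a sample of our own):

```python
# Check Theorem 2.1: e_a * E_{-a} = 1 and e_{-a} * E_a = 1, using the
# defining products (14) and (18); the coefficient is an arbitrary sample.
from math import prod

def e_exp(a, s0, s):
    """e_a(s0, s) from (18)."""
    return prod(1 + a(i) for i in range(s0, s)) if s > s0 else 1.0

def E_exp(a, s0, s):
    """E_a(s0, s) from (14)."""
    return prod(1 / (1 - a(i)) for i in range(s0, s)) if s > s0 else 1.0

a = lambda i: 0.1 + 0.02 * i
neg_a = lambda i: -a(i)
product = e_exp(a, 0, 8) * E_exp(neg_a, 0, 8)
```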

More generally, we have

Theorem 2.2

If

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s+1),&\displaystyle \\ &\displaystyle \Delta z(s)=-a(s)z(s),&\displaystyle {} \end{array} \end{aligned} $$
(20)

with

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s_{0})z(s_{0})=1,&\displaystyle {} \end{array} \end{aligned} $$
(21)

then

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)z(s)=1.&\displaystyle {} \end{array} \end{aligned} $$
(22)

Proof. \(\Delta \left ( y(s)z(s)\right )=y(s+1)\Delta z(s)+z(s)\Delta y(s) = y(s+1)(-a(s))z(s) + z(s)a(s)y(s+1) = 0\). This implies that y(s)z(s) = const., which by (21) gives (22), and the theorem is proved.

Nonhomogeneous Cases

Consider first the equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s+1)+b(s).&\displaystyle {} \end{array} \end{aligned} $$
(23)

Solving (23) by the method of variation of constants or method of Lagrange, we suppose that

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y_{0}(s)=a(s)y_{0}(s+1)&\displaystyle {} \end{array} \end{aligned} $$
(24)

and search the solution of (23) as

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=c(s)y_{0}(s)&\displaystyle {} \end{array} \end{aligned} $$
(25)

where c(s) is to be determined. Placing (25) in (23) and using (24), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y_{0}(s)\Delta c(s)=b(s),&\displaystyle {} \end{array} \end{aligned} $$
(26)

or

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle c(s)=c+\sum_{i=s_{0}}^{s-1}y_{0}^{-1}(i)b(i).&\displaystyle {} \end{array} \end{aligned} $$
(27)

Placing this in (25), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=y_{0}(s)c+y_{0}(s)\sum_{i=s_{0}}^{s-1}y_{0}^{-1}(i)b(i),&\displaystyle {} \end{array} \end{aligned} $$
(28)

with \(c=y_{0}^{-1}(s_{0})y(s_{0})\) (we suppose that \(\sum _{i=s_{1}}^{s_{2}}h(i)=0\) if \(s_{1} > s_{2}\)), or equivalently

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=\phi(s,s_{0})\left[ y(s_{0})+\sum_{i=s_{0}}^{s-1}\phi(s_{0},i)b(i)\right] ,&\displaystyle {} \end{array} \end{aligned} $$
(29)

where \(\phi (a,b)=y_{0}(a)y_{0}^{-1}(b).\)

Consider now the nonhomogeneous equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y(s)=a(s)y(s)+b(s).&\displaystyle {} \end{array} \end{aligned} $$
(30)

Here also, solving the equation by the method of Lagrange, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=y_{0}(s)c+y_{0}(s)\sum_{i=s_{0}}^{s-1}y_{0}^{-1}(i+1)b(i),&\displaystyle {} \end{array} \end{aligned} $$
(31)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta y_{0}(s)=a(s)y_{0}(s)&\displaystyle {} \end{array} \end{aligned} $$
(32)

and \(c=y_{0}^{-1}(s_{0})y(s_{0})\), or equivalently,

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle y(s)=\phi(s,s_{0})\left[ y(s_{0})+\sum_{i=s_{0}}^{s-1}\phi(s_{0},i+1)b(i)\right].&\displaystyle {} \end{array} \end{aligned} $$
(33)
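The closed form (31) can be checked against the direct recursion \(y(s+1)=(1+a(s))y(s)+b(s)\). A sketch with sample coefficients of our own choosing:

```python
# Solve Delta y = a(s) y(s) + b(s), i.e. y(s+1) = (1 + a(s)) y(s) + b(s),
# both by direct recursion and by the closed form (31), and compare.
from math import prod

def y0(a, s0, s):
    """Solution of (32) with y0(s0) = 1."""
    return prod(1 + a(i) for i in range(s0, s)) if s > s0 else 1.0

def closed_form(a, b, s0, ys0, s):
    """Formula (31) with c = y(s0)."""
    return y0(a, s0, s) * (ys0 + sum(b(i) / y0(a, s0, i + 1) for i in range(s0, s)))

def by_recursion(a, b, s0, ys0, s):
    y = ys0
    for i in range(s0, s):
        y = (1 + a(i)) * y + b(i)
    return y

a = lambda i: 0.05 * i        # sample coefficients (ours)
b = lambda i: 1.0 - 0.3 * i
```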

3 Difference Integral Inequalities

In this section, we deal with the main content of the work, that is, we establish the mentioned integral inequalities. In the first two subsections, where we prove the Hölder, Cauchy-Schwartz, and Minkowski inequalities, we rely on classical recipes currently used in differential situations. In the last three subsections, where we prove the Grönwall, Bernoulli, and Lyapunov inequalities, we mainly rely on the method of variation of constants of Lagrange.

3.1 Hölder and Cauchy-Schwartz Inequalities

Theorem 3.1 (Hölder Inequality)

Let \(a, b \in \mathbb {Z}\) . For all functions \(f,g:[a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{a}^{b}|f(s)g(s)|d_{\Delta}s{\leqslant} \left(\int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s \right)^{\frac{1}{\alpha}} \left(\int_{a}^{b}|g(s)|{}^{\beta}d_{\Delta}s \right)^{\frac{1}{\beta}},{}\vspace{-4pt} \end{array} \end{aligned} $$
(1)

with \(\frac {1}{\alpha }+\frac {1}{\beta }=1\) and \(\alpha , \beta > 1\).

Proof

For A, B ∈ [0, ∞[, by the concavity of the logarithm function, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \log\left( \frac{A}{\alpha}+\frac{B}{\beta}\right) {\geqslant}\frac{\log(A)}{\alpha}+\frac{\log(B)}{\beta}=\log\left(A^{\frac{1}{\alpha}}B^{\frac{1}{\beta}}\right). {}\vspace{-4pt} \end{array} \end{aligned} $$
(2)

which leads to

$$\displaystyle \begin{aligned} \begin{array}{rcl} A^{\frac{1}{\alpha}}B^{\frac{1}{\beta}}{\leqslant} \frac{A}{\alpha}+\frac{B}{\beta}.{}\vspace{-4pt} \end{array} \end{aligned} $$
(3)

Now let

$$\displaystyle \begin{aligned} \begin{array}{rcl} A(s)=\frac{|f(s)|{}^{\alpha}}{\int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s}; B(s)=\frac{|g(s)|{}^{\beta}}{\int_{a}^{b}|g(s)|{}^{\beta}d_{\Delta}s}{}\vspace{-4pt} \end{array} \end{aligned} $$
(4)

with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left( {\int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s}\right) \left( {\int_{a}^{b}|g(s)|{}^{\beta}d_{\Delta}s}\right)\neq 0, {}\vspace{-4pt} \end{array} \end{aligned} $$
(5)

since \({\int _{a}^{b}|f(s)|{ }^{\alpha }d_{\Delta }s}=0\) or \( {\int _{a}^{b}|g(s)|{ }^{\beta }d_{\Delta }s}=0\) implies that f(s) ≡ 0 or g(s) ≡ 0, in which case (1) holds trivially.

Next, substituting A and B in (3) and integrating from a to b, considering Corollary 2.1, one gets

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{a}^{b}\frac{|f(s)|}{\left( \int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}} }\frac{|g(s)|}{\left( \int_{a}^{b}|g(s)|{}^{\beta}d_{\Delta}s\right)^{\frac{1}{\beta}} }d_{\Delta}s\vspace{-4pt} \end{array} \end{aligned} $$
$$\displaystyle \begin{aligned} \begin{array}{rcl} {\leqslant} \int_{a}^{b}\left\lbrace \frac{1}{\alpha}\frac{|f(s)|{}^{\alpha}}{\int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s}+\frac{1}{\beta}\frac{|g(s)|{}^{\beta}}{\int_{a}^{b}|g(s)|{}^{\beta}d_{\Delta}s}\right\rbrace d_{\Delta}s\\ =\frac{1}{\alpha}+\frac{1}{\beta}=1,{}\vspace{-4pt} \end{array} \end{aligned} $$
(6)

which gives directly the Hölder inequality and the theorem is proved.

If we set α = β = 2 in the Hölder inequality (1), we get the Cauchy-Schwartz inequality.

Corollary 3.1 (Cauchy-Schwartz Inequality)

Let \(a, b \in \mathbb {Z}\) . For all functions \(f, g: [a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{a}^{b}|f(s)g(s)|d_{\Delta}s{\leqslant} \sqrt{\left(\int_{a}^{b}|f(s)|{}^{2}d_{\Delta}s \right) \left(\int_{a}^{b}|g(s)|{}^{2}d_{\Delta}s \right)}.{} \end{array} \end{aligned} $$
(7)
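Both inequalities are straightforward to sanity-check on sample data. A Python sketch (the sequences and exponents are our own choices):

```python
# Numeric check (ours) of the discrete Hoelder inequality (1), with
# alpha = 3 and beta = 3/2, and the Cauchy-Schwartz inequality (7).

def integral(h, a, b):
    return sum(h(i) for i in range(a, b))

f = lambda s: (-1) ** s * (s + 1.0)   # sample functions (ours)
g = lambda s: 2.0 - 0.4 * s
a, b = 0, 7
alpha, beta = 3.0, 1.5                # 1/alpha + 1/beta = 1

lhs = integral(lambda s: abs(f(s) * g(s)), a, b)
holder = (integral(lambda s: abs(f(s)) ** alpha, a, b) ** (1 / alpha)
          * integral(lambda s: abs(g(s)) ** beta, a, b) ** (1 / beta))
schwartz = (integral(lambda s: f(s) ** 2, a, b)
            * integral(lambda s: g(s) ** 2, a, b)) ** 0.5
```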

Next, we can use the Hölder inequality to prove the Minkowski one.

3.2 Minkowski Inequality

Theorem 3.2 (Minkowski Inequality)

Let \(a, b\in \mathbb {Z}\) . For all functions \(f, g: [a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left( \int_{a}^{b}|f(s)+g(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}} {\leqslant} \left(\int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s \right)^{\frac{1}{\alpha}}+ \left(\int_{a}^{b}|g(s)|{}^{\alpha}d_{\Delta}s \right)^{\frac{1}{\alpha}}.{} \end{array} \end{aligned} $$
(8)

Proof

We apply the Hölder inequality to obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{a}^{b}|f(s)+g(s)|{}^{\alpha}d_{\Delta}s=\int_{a}^{b}|f(s)+g(s)|{}^{\alpha-1}|f(s)+g(s)|d_{\Delta}s\\ {\leqslant}\int_{a}^{b}|f(s)+g(s)|{}^{\alpha-1}|f(s)|d_{\Delta}s+\int_{a}^{b}|f(s)+g(s)|{}^{\alpha-1}|g(s)|d_{\Delta}s\\ {\leqslant} \left( \int_{a}^{b}|f(s)+g(s)|{}^{(\alpha-1)\beta}d_{\Delta}s\right)^{\frac{1}{\beta}} \left[ \left( \int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}}+\left( \int_{a}^{b}|g(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}}\right]. \end{array} \end{aligned} $$

Dividing both sides of the inequality by \(\left ( \int _{a}^{b}|f(s)+g(s)|{ }^{(\alpha -1)\beta }d_{\Delta }s\right )^{\frac {1}{\beta }}\), with (α − 1)β = α, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left( \int_{a}^{b}|f(s)+g(s)|{}^{\alpha}d_{\Delta}s\right)^{1-\frac{1}{\beta}}{\leqslant} \left[ \left( \int_{a}^{b}|f(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}}+\left( \int_{a}^{b}|g(s)|{}^{\alpha}d_{\Delta}s\right)^{\frac{1}{\alpha}}\right], \end{array} \end{aligned} $$

which is the Minkowski inequality since \(1-\frac {1}{\beta }=\frac {1}{\alpha }\).
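A numerical sanity check of (8) on sample data (the sequences and the exponent are our own choices):

```python
# Numeric check (ours) of the discrete Minkowski inequality (8), alpha = 2.5.

def integral(h, a, b):
    return sum(h(i) for i in range(a, b))

f = lambda s: 1.5 * s - 3.0           # sample functions (ours)
g = lambda s: (-1) ** s * 2.0
a, b, alpha = 0, 8, 2.5

lhs = integral(lambda s: abs(f(s) + g(s)) ** alpha, a, b) ** (1 / alpha)
rhs = (integral(lambda s: abs(f(s)) ** alpha, a, b) ** (1 / alpha)
       + integral(lambda s: abs(g(s)) ** alpha, a, b) ** (1 / alpha))
```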

3.3 Grönwall Inequality

Let us first prove the following:

Lemma 3.1

Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\) . Suppose that \(y_{0}(s)\) is the solution of \(\Delta y_{0}(s) = a(s)y_{0}(s)\), such that \(y_{0}(s_{0}) = 1\).

In that case, if

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta y(s){\leqslant} a(s)y(s)+f(s) {} \end{array} \end{aligned} $$
(9)

for all \(s\in \mathbb {Z}\) , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y_{0}(s)y(s_{0})+ y_{0}(s)\int_{s_{0}}^{s}y_{0}^{-1}(s+1)f(s)d_{\Delta}s.{} \end{array} \end{aligned} $$
(10)

Proof

Let \(y_{0}(s)\) be the solution of the homogeneous equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta y_{0}(s)=a(s)y_{0}(s). {} \end{array} \end{aligned} $$
(11)

To establish (10), we apply the method of variation of constants and write

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s)=c(s)y_{0}(s), {} \end{array} \end{aligned} $$
(12)

where c(s) is unknown. Placing (12) in (9) and considering (11), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} y_{0}(s+1)\Delta c(s){\leqslant} f(s). {} \end{array} \end{aligned} $$
(13)

Since \(a(s){\geqslant } 0\), we have \(y_{0}(s) > 0\), and the relation (13) simplifies to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta c(s){\leqslant} y_{0}^{-1}(s+1) f(s). {} \end{array} \end{aligned} $$
(14)

Integrating both sides of the inequality from \(s_{0}\) to s, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s)-c(s_{0}){\leqslant} \int_{s_{0}}^{s} y_{0}^{-1}(s+1) f(s)d_{\Delta}s. {} \end{array} \end{aligned} $$
(15)

Since \(y_{0}(s_{0}) = 1\), (12) gives \(c(s_{0}) = y(s_{0})\), and (15) simplifies to

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s){\leqslant} y(s_{0})+\int_{s_{0}}^{s} y_{0}^{-1}(s+1) f(s)d_{\Delta}s. {} \end{array} \end{aligned} $$
(16)

Hence

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s)y_{0}(s){\leqslant} y_{0}(s)\left[ y(s_{0})+\int_{s_{0}}^{s} y_{0}^{-1}(s+1) f(s)d_{\Delta}s\right] , {} \end{array} \end{aligned} $$
(17)

which gives the expected result:

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y_{0}(s) y(s_{0})+y_{0}(s)\int_{s_{0}}^{s} y_{0}^{-1}(s+1) f(s)d_{\Delta}s. \end{array} \end{aligned} $$

Considering Theorem 2.1, we obtain the following

Corollary 3.2

If the functions y, f, a satisfy the conditions of Lemma 3.1 , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y(s_{0}) e_{a}(s_{0},s)+e_{a}(s_{0},s)\int_{s_{0}}^{s}E_{-a}(s_{0},s+1) f(s)d_{\Delta}s. \end{array} \end{aligned} $$

Lemma 3.2

Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\leqslant } 0\).

Suppose that \(y_{0}(s)\) is the solution of \(\Delta y_{0}(s) = a(s)y_{0}(s+1)\), such that \(y_{0}(s_{0}) = 1\).

In that case, if

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta y(s){\leqslant} a(s)y(s+1)+f(s) {} \end{array} \end{aligned} $$
(18)

for all \(s\in \mathbb {Z}\) , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y_{0}(s)y(s_{0})+ y_{0}(s)\int_{s_{0}}^{s}y_{0}^{-1}(s)f(s)d_{\Delta}s{} \end{array} \end{aligned} $$
(19)

Proof

Let \(y_{0}(s)\) be the solution of the homogeneous equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta y_{0}(s)=a(s)y_{0}(s+1). {} \end{array} \end{aligned} $$
(20)

To establish (19), we apply the method of variation of constants and write

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s)=c(s)y_{0}(s), {} \end{array} \end{aligned} $$
(21)

where c(s) is unknown. Placing (21) in (18) and considering (20), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} y_{0}(s)\Delta c(s){\leqslant} f(s). {} \end{array} \end{aligned} $$
(22)

Since \(a(s){\leqslant } 0\), we have \(y_{0}(s) > 0\), and the relation (22) simplifies to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta c(s){\leqslant} y_{0}^{-1}(s) f(s). {} \end{array} \end{aligned} $$
(23)

Integrating both sides of the inequality from \(s_{0}\) to s, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s)-c(s_{0}){\leqslant} \int_{s_{0}}^{s} y_{0}^{-1}(s) f(s)d_{\Delta}s. {} \end{array} \end{aligned} $$
(24)

Since \(y_{0}(s_{0}) = 1\), (21) gives \(c(s_{0}) = y(s_{0})\), and (24) simplifies to

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s){\leqslant} y(s_{0})+\int_{s_{0}}^{s} y_{0}^{-1}(s) f(s)d_{\Delta}s. {} \end{array} \end{aligned} $$
(25)

Hence

$$\displaystyle \begin{aligned} \begin{array}{rcl} c(s)y_{0}(s){\leqslant} y_{0}(s)\left[ y(s_{0})+\int_{s_{0}}^{s} y_{0}^{-1}(s) f(s)d_{\Delta}s\right] , {} \end{array} \end{aligned} $$
(26)

which gives the expected result:

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y_{0}(s) y(s_{0})+y_{0}(s)\int_{s_{0}}^{s} y_{0}^{-1}(s) f(s)d_{\Delta}s. \end{array} \end{aligned} $$

For the same reasons as in Corollary 3.2, we obtain the following:

Corollary 3.3

If the functions y, f, a satisfy the conditions of Lemma 3.2 , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} y(s_{0}) E_{a}(s_{0},s)+E_{a}(s_{0},s)\int_{s_{0}}^{s}e_{-a}(s_{0},s) f(s)d_{\Delta}s. \end{array} \end{aligned} $$

We can now prove the following:

Theorem 3.3 (Grönwall Inequality)

Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\).

Suppose that \(y_{0}(s)\) is the solution of \(\Delta y_{0}(s) = a(s)y_{0}(s)\), such that \(y_{0}(s_{0}) = 1\).

In that case, if

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} f(s)+\int_{s_{0}}^{s}y(s)a(s)d_{\Delta}s, {} \end{array} \end{aligned} $$
(27)

then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} f(s)+ e_{a}(s_{0},s)\int_{s_{0}}^{s}a(s)f(s)E_{-a}(s_{0},s+1)d_{\Delta}s.{} \end{array} \end{aligned} $$
(28)

Proof

Defining

$$\displaystyle \begin{aligned} \begin{array}{rcl} v(s)=\int_{s_{0}}^{s}y(s)a(s)d_{\Delta}s,{} \end{array} \end{aligned} $$
(29)

(27) gives

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} f(s)+v(s), {} \end{array} \end{aligned} $$
(30)

and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta v(s)=y(s)a(s){\leqslant} f(s)a(s)+a(s)v(s). {} \end{array} \end{aligned} $$
(31)

By Corollary 3.2 of Lemma 3.1, the inequality (31) leads to

$$\displaystyle \begin{aligned} \begin{array}{rcl} v(s){\leqslant} v(s_{0})e_{a}(s_{0},s)+ e_{a}(s_{0},s)\int_{s_{0}}^{s}a(s)f(s)E_{-a}(s_{0},s+1)d_{\Delta}s. {} \end{array} \end{aligned} $$
(32)

Since \(v(s_{0}) = 0\), (30) and (32) imply that

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} f(s)+ e_{a}(s_{0},s)\int_{s_{0}}^{s}a(s)f(s)E_{-a}(s_{0},s+1)d_{\Delta}s, {} \end{array} \end{aligned} $$
(33)

which is the expected Grönwall inequality.
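The Grönwall bound (28) can be tested numerically by building a sequence that satisfies the hypothesis (27) and comparing it with the right-hand side of (28). A sketch (the coefficients, and the factor 0.9 that enforces the hypothesis, are our own choices):

```python
# Sketch (ours): build a sequence y obeying the hypothesis (27) with
# a(s) >= 0, then confirm the Groenwall bound (28) pointwise.
from math import prod

def e_a(a, s0, s):
    """e_a(s0, s) from (18)."""
    return prod(1 + a(i) for i in range(s0, s)) if s > s0 else 1.0

def E_neg_a(a, s0, s):
    """E_{-a}(s0, s) from (14): product of 1 / (1 + a(i))."""
    return prod(1 / (1 + a(i)) for i in range(s0, s)) if s > s0 else 1.0

a = lambda i: 0.1 + 0.03 * i
f = lambda i: 2.0 + 0.5 * i
s0, N = 0, 12

# y(s) = f(s) + 0.9 * sum_{i<s} y(i) a(i); since f > 0 the sum is >= 0,
# so y satisfies the hypothesis (27).
y = {}
for s in range(s0, N + 1):
    y[s] = f(s) + 0.9 * sum(y[i] * a(i) for i in range(s0, s))

def bound(s):
    """Right-hand side of (28)."""
    return f(s) + e_a(a, s0, s) * sum(
        a(i) * f(i) * E_neg_a(a, s0, i + 1) for i in range(s0, s))
```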

As direct consequences, we obtain the following results:

Corollary 3.4

Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\) . If

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} \int_{s_{0}}^{s}y(s)a(s)d_{\Delta}s, {} \end{array} \end{aligned} $$
(34)

for all \(s\in \mathbb {Z}\) , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} 0. {} \end{array} \end{aligned} $$
(35)

Proof

This follows from Theorem 3.3 with f(s) ≡ 0.

Corollary 3.5

Let \(a(s){\geqslant } 0\) and \(\alpha \in \mathbb {R}\) . If

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} \alpha+\int_{s_{0}}^{s}y(s)a(s)d_{\Delta}s, {} \end{array} \end{aligned} $$
(36)

for all \(s\in \mathbb {Z}\) , then

$$\displaystyle \begin{aligned} \begin{array}{rcl} y(s){\leqslant} \alpha e_{a}(s_{0},s). {} \end{array} \end{aligned} $$
(37)

Proof

From the Grönwall inequality with f(s) = α, one gets

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y(s){\leq}\, \alpha+e_{a}(s_{0},s)\int_{s_{0}}^{s}\alpha a(s)E_{-a}(s_{0},s+1)d_{\Delta}s \\ &\displaystyle =&\displaystyle \alpha\left( 1-e_{a}(s_{0},s)\int_{s_{0}}^{s}\Delta E_{-a}(s_{0},s)d_{\Delta}s\right) \\ &\displaystyle =&\displaystyle \alpha\left( 1-e_{a}(s_{0},s)\left[ E_{-a}(s_{0},s)-E_{-a}(s_{0},s_{0})\right] \right) \\ &\displaystyle =&\displaystyle \alpha-\alpha e_{a}(s_{0},s) E_{-a}(s_{0},s)+\alpha e_{a}(s_{0},s) \\ &\displaystyle =&\displaystyle \alpha e_{a}(s_{0},s), \\ \end{array} \end{aligned} $$
(38)

which gives the expected inequality.

3.4 Bernoulli Inequality

Theorem 3.4 (Bernoulli Inequality)

Let \(\alpha {\geqslant } 0\) . Then for all \(s, s_{0} \in \mathbb {Z}\) with \(s > s_{0}\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} e_{\alpha}(s_{0},s){\geq} 1+\alpha (s-s_{0}). {} \end{array} \end{aligned} $$
(39)

Proof

Let \(y(s) = \alpha (s-s_{0})\), \(s > s_{0}\). Then \(\Delta y(s) = \alpha \) and \(\alpha y(s) + \alpha = \alpha ^{2}(s-s_{0}) + \alpha {\geqslant } \alpha = \Delta y(s)\), which implies that \(\Delta y(s){\leqslant } \alpha y(s) + \alpha \).

By Corollary 3.2 of Lemma 3.1, applied with a(s) = α and f(s) = α, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y(s){\leq} y(s_{0})e_{\alpha}(s_{0},s)+ e_{\alpha}(s_{0},s)\int_{s_{0}}^{s}\alpha E_{-\alpha}(s_{0},s+1)d_{\Delta}s, \\ &\displaystyle =&\displaystyle -e_{\alpha}(s_{0},s)\int_{s_{0}}^{s}\Delta E_{-\alpha}(s_{0},s)d_{\Delta}s \quad (\text{since } y(s_{0})=0)\\ &\displaystyle =&\displaystyle -e_{\alpha}(s_{0},s)[E_{-\alpha}(s_{0},s)-1] \\ &\displaystyle =&\displaystyle -1+e_{\alpha}(s_{0},s). \\ \end{array} \end{aligned} $$
(40)

Hence \(e_{\alpha}(s_{0},s){\geqslant } 1+\alpha (s-s_{0})\) for \(s > s_{0}\), as expected.
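For constant \(a(s)=\alpha \), the product (18) collapses to \((1+\alpha )^{s-s_{0}}\), so the Bernoulli inequality can be checked directly. A sketch with a sample \(\alpha \) of our own choosing:

```python
# For constant a(s) = alpha, e_alpha(s0, s) = (1 + alpha)^(s - s0) by (18).
# Check the Bernoulli bound (39) for a sample alpha >= 0 over gaps n = s - s0.

alpha = 0.25
violations = [n for n in range(1, 30) if (1 + alpha) ** n < 1 + alpha * n]
```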

3.5 Lyapunov Inequality

Let \(f: \mathbb {Z}\longrightarrow [0, \infty [\). Consider the Sturm-Liouville difference equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \Delta^{2}u(s)+f(s)u(s+1)=0, s\in \mathbb{Z}. &\displaystyle {} \end{array} \end{aligned} $$
(41)

Define the function F by

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle F(y)=\int_{a}^{b}\left[ \left(\Delta y(s) \right)^{2}-f(s) y^{2}(s+1) \right]d_{\Delta}s. &\displaystyle {} \end{array} \end{aligned} $$
(42)

We prove first the following lemmas:

Lemma 3.3

Let u(s) be a nontrivial solution of the Sturm-Liouville difference equation (41). In that case, for all y belonging to the domain of definition of F, the following equality holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle F(y)-F(u)-F(y-u)=2 (y-u)(b)\Delta u(b)-2(y-u)(a)\Delta u(a). &\displaystyle {} \end{array} \end{aligned} $$
(43)

Proof

We have

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle F(y)-F(u)-F(y-u)\\ &\displaystyle =&\displaystyle \int_{a}^{b}[ \left( \Delta y(s)\right)^{2}-f(s)y^{2}(s+1)-\left(\Delta u(s) \right)^{2}\\ &\displaystyle &\displaystyle +f(s)u^{2}(s+1)-\left( \Delta(y-u)(s)\right)^{2}+f(s)(y-u)^{2}(s+1) ] d_{\Delta}s \\ &\displaystyle =&\displaystyle 2\int_{a}^{b}[ -\left( \Delta u(s)\right)^{2}+f(s)u^{2}(s+1)+\Delta y(s)\Delta u(s)\\ &\displaystyle &\displaystyle -f(s)y(s+1)u(s+1)] d_{\Delta}s \\ &\displaystyle =&\displaystyle 2\int_{a}^{b}[ \Delta y(s)\Delta u(s)+y(s+1)\Delta^{2} u(s)-\left(\Delta u(s) \right)^{2}\\ &\displaystyle &\displaystyle -\Delta^{2}u(s)u(s+1)]d_{\Delta}s \\ &\displaystyle =&\displaystyle 2\int_{a}^{b}[ \Delta [ y(s)\Delta u(s)]-\Delta[u(s)\Delta u(s)]]d_{\Delta}s \\ &\displaystyle =&\displaystyle 2\int_{a}^{b} \Delta \left[ \left( y(s)-u(s)\right) \Delta u(s)\right] d_{\Delta}s \\ &\displaystyle =&\displaystyle 2\left( y(b)-u(b)\right)\Delta u(b)-2\left( y(a)-u(a)\right)\Delta u(a), {} \end{array} \end{aligned} $$
(44)

which proves the lemma.

Lemma 3.4

Let y be in the domain of definition of F. For all \(c,d \in [a,b]\cap \mathbb {Z}\) , \(a,b \in \mathbb {Z}\) and \(a{\leqslant } c< d{\leqslant } b\) , we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \int_{c}^{d} \left(\Delta y(s)\right)^{2}d_{\Delta}s{\geqslant} \frac{\left(y(d)-y(c) \right)^{2} }{d-c}.&\displaystyle {} \end{array} \end{aligned} $$
(45)

Proof

Let \(u(s)=\frac {y(d)-y(c)}{d-c}s+\frac {dy(c)-cy(d)}{d-c}.\) Then \(\Delta u(s)=\frac {y(d)-y(c)}{d-c}\) and \(\Delta ^{2} u(s) = 0\). This proves that u(s) is a solution of (41) with f(s) = 0 for all \(s \in \mathbb {Z}\), in which case \(F(y)=\int _{a}^{b}\left (\Delta y(s)\right )^{2}d_{\Delta }s \) for all y from the domain of definition of F. Moreover, u(c) = y(c) and u(d) = y(d), so that (y − u)(c) = (y − u)(d) = 0. By Lemma 3.3, applied on the interval [c, d], we get F(y) − F(u) − F(y − u) = 0, and consequently \(F(y)=F(u)+F(y-u){\geqslant } F(u)\), since \(F(y-u){\geqslant } 0\) when f(s) = 0. This leads to the following result:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle \int_{c}^{d}\left( \Delta y(s)\right)^{2}d_{\Delta}s{\geqslant} \int_{c}^{d}\left( \Delta u(s)\right)^{2}d_{\Delta}s&\displaystyle \\ {} &\displaystyle =\int_{c}^{d}\left( \frac{y(d)-y(c)}{d-c}\right)^{2}d_{\Delta}s&\displaystyle \\ {} &\displaystyle =\frac{\left( y(d)-y(c)\right)^{2} }{d-c},&\displaystyle {} \end{array} \end{aligned} $$
(46)

which proves the lemma.

Theorem 3.5 (Lyapunov Inequality)

Let \(f: \mathbb {Z}\longrightarrow [0, \infty [\) and let u be a nontrivial solution of Eq. (41) with \(u(a)=u(b)=0\), where \(a, b\in \mathbb {Z}\) and a < b. Then

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{a}^{b}f(s)d_{\Delta}s{\geqslant}\frac{4}{b-a}.{} \end{array} \end{aligned} $$
(47)

Proof

By Lemma 3.3 with y = 0 and u(a) = u(b) = 0, one gets F(0) − F(u) − F(−u) = −2u(b) Δu(b) + 2u(a) Δu(a) = 0. Since F(0) = 0 and F(−u) = F(u), this gives F(u) = 0. Thus

$$\displaystyle \begin{aligned} \begin{array}{rcl} F(u)=\int_{a}^{b}\left[ \left( \Delta u(s)\right)^{2}-f(s)u^{2}(s+1) \right]d_{\Delta}s=0. {} \end{array} \end{aligned} $$
(48)

Let \(M=\max \left [ u^{2}(s); s\in [a, b]\cap \mathbb {Z}\right ] \) and let \(c\in [a, b]\cap \mathbb {Z}\) be such that \(u^{2}(c) = M\) (note that a < c < b, since u is nontrivial and u(a) = u(b) = 0). Then \(M=u^{2}(c){\geqslant } u^{2}(s+1) \), and using (48), Lemma 3.4 and the fact that u(a) = u(b) = 0, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle M\int_{a}^{b}f(s)d_{\Delta}s{\geqslant}\int_{a}^{b}f(s)u^{2}(s+1)d_{\Delta}s\\ &\displaystyle =&\displaystyle \int_{a}^{b}\left( \Delta u(s)\right) ^{2}d_{\Delta}s\\ &\displaystyle =&\displaystyle \int_{a}^{c}\left( \Delta u(s)\right) ^{2}d_{\Delta}s+\int_{c}^{b}\left( \Delta u(s)\right) ^{2}d_{\Delta}s\\ &\displaystyle &\displaystyle {\geqslant} \frac{\left(u(c)-u(a) \right)^{2} }{c-a}+\frac{\left(u(b)-u(c) \right)^{2} }{b-c} \\ &\displaystyle =&\displaystyle M\left[\frac{1}{c-a}+\frac{1}{b-c} \right]{\geqslant} M\frac{4}{b-a}, \\ \end{array} \end{aligned} $$
(49)

which, after dividing by M > 0 (u being nontrivial), proves the Lyapunov inequality.
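The inequality can be illustrated on an explicit example: for \(u(s)=\sin (\pi s/N)\) one checks that (41) holds with the constant coefficient \(f(s)=4\sin ^{2}(\pi /(2N))\) and u(0) = u(N) = 0, so (47) reads \(4N\sin ^{2}(\pi /(2N)){\geqslant } 4/N\). A Python sketch of this check (the example is ours, not from the text):

```python
# Explicit example (ours): u(s) = sin(pi s / N) solves (41) with the
# constant coefficient f(s) = 4 sin^2(pi/(2N)) and u(0) = u(N) = 0, so the
# Lyapunov inequality (47) with a = 0, b = N reads 4 N sin^2(pi/(2N)) >= 4/N.
from math import sin, pi

N = 9
u = lambda s: sin(pi * s / N)
f_val = 4 * sin(pi / (2 * N)) ** 2

# residual of Delta^2 u(s) + f u(s+1) at each interior grid point
residuals = [u(s + 2) - 2 * u(s + 1) + u(s) + f_val * u(s + 1)
             for s in range(0, N - 1)]

lyapunov_lhs = N * f_val  # difference integral of the constant f over [0, N]
```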