Abstract
We establish difference versions of the classical integral inequalities of Hölder, Cauchy-Schwarz, and Minkowski, as well as of the integral inequalities of Grönwall, Bernoulli, and Lyapunov, based on the Lagrange method for first-order linear difference equations.
Dedicated to Prof. Hounkonnou M N, for his 60th birthday.
1 Introduction
Considering the most general divided difference derivative [5, 6],
admitting the property that if f(t) = P n(t(s)) is a polynomial of degree n in t(s), then \(\mathcal {D} f (t(s))=\tilde {P}_{n-1}(t(s))\) is a polynomial in t(s) of degree n − 1, one is led to the following most important canonical forms for t(s) in order of increasing complexity:
When the function t(s) is given by (2)–(4), the divided difference derivative (1) leads to the ordinary differential derivative \(D f (t)=\frac {d}{dt}f(t)\), finite difference derivative
and q-difference derivative (or Jackson derivative [4])
respectively. When x(s) is given by (5), the corresponding derivative is usually referred to as the Askey-Wilson first order divided difference operator [1] that one can write:
where \(x(z)=\frac {z+z^{-1}}{2}\), having in mind that z = q s.
The calculus related to the differential derivative, namely the continuous or differential calculus, is clearly the classical one. The calculus related to the derivatives (6)–(8) (difference, q-difference, and q-nonuniform difference, respectively) is referred to as the discrete calculus. Its interest is twofold: on the one hand it generalizes the continuous calculus, and on the other hand it works with a discrete variable.
This work is concerned with the difference calculus. We aim in particular to establish difference versions of the integral inequalities of Hölder, Cauchy-Schwarz, Minkowski, Grönwall, Bernoulli, and Lyapunov, which are well known in differential calculus. We note that these inequalities were proved in [3] for a more general difference operator than (6); however, apart from the classical recipes used for the inequalities of classical analysis (Hölder, Cauchy-Schwarz, and Minkowski), our approach here is essentially different. It is based on the Lagrange method, and as such it can be extended to the more general derivative (7) or even (8) (see [2]), the latter being, to the best of our knowledge, the most general one having the mentioned property of sending a polynomial of degree n to a polynomial of degree n − 1.
In the following lines, we first introduce the basic concepts of difference calculus and of linear first-order difference equations needed in the sequel, and then study the mentioned integral inequalities.
2 Preliminaries
2.1 Difference Derivative and Integral
Consider again the difference derivative, that is, the derivative related to the grid in (2):
Based on this derivative, one defines integration as the inverse operation of differentiation, as follows:
The defined integral admits the following properties:
Fundamental Principle of Analysis
One easily verifies that
Integration by Parts
Integrating both sides of the equality
and applying (4), one gets
which is the integration by parts formula.
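The difference derivative, the integral, the fundamental principle, and the integration by parts formula can be checked numerically. The sketch below assumes the standard conventions \(\Delta f(s)=f(s+1)-f(s)\) and \(\int_{s_1}^{s_2}f(s)d_\Delta s=\sum_{i=s_1}^{s_2-1}f(i)\) (the function names are ours):

```python
# Minimal sketch of the difference derivative and its inverse integral,
# assuming Delta f(s) = f(s+1) - f(s) and the sum convention for the integral.

def delta(f):
    """Difference derivative: (Delta f)(s) = f(s+1) - f(s)."""
    return lambda s: f(s + 1) - f(s)

def integral(f, s1, s2):
    """Difference integral: sum of f over the grid points s1, ..., s2 - 1."""
    return sum(f(i) for i in range(s1, s2))

f = lambda s: s * s          # sample functions on the integer grid
g = lambda s: 3 * s + 1

# Fundamental principle of analysis: integrating the derivative
# recovers the difference of the endpoint values.
assert integral(delta(f), 2, 10) == f(10) - f(2)

# Product rule Delta(fg)(s) = f(s+1) Delta g(s) + g(s) Delta f(s),
# hence the integration by parts formula.
fg = lambda s: f(s) * g(s)
lhs = integral(lambda s: f(s + 1) * delta(g)(s), 2, 10)
rhs = fg(10) - fg(2) - integral(lambda s: g(s) * delta(f)(s), 2, 10)
assert lhs == rhs
```

Both checks are exact on integer data, so no tolerances are needed.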
Positivity of the Integral
We finally remark that when f(s) is positive, the integral in (2) is clearly positive, which gives the following property and its corollary, both useful in the sequel.
Property 2.1
If \(f(s){\geqslant } 0\) and s 1 < s 2 , then
Corollary 2.1
If \(f(s){\geqslant } g(s)\) and s 1 < s 2 , then
2.2 Linear Difference Equations of First Order
A linear difference equation of first order can be written as
or
Consider first the homogeneous equation corresponding to (9):
Equation (11) gives
which by recursion leads to
where
is a difference version of the exponential function (since Eq. (11) is a difference version of the differential equation y′(x) = a(x)y(x)). Consider now the homogeneous equation corresponding to (10):
Equation (15) gives
which by recursion leads to
where
is another difference version of the exponential function. Clearly, we have
Theorem 2.1
More generally, we have
Theorem 2.2
If
with
then
Proof \(\Delta \left ( y(s)z(s)\right )=y(s+1)\Delta z(s)+z(s)\Delta y(s) \) = y(s + 1)(−a(s))z(s) + z(s)a(s)y(s + 1) = 0. This implies that y(s)z(s) = const., which by (21) gives (22), and the theorem is proved.
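The two difference exponentials and the reciprocity suggested by the proof of Theorem 2.2 can be verified numerically. The sketch below assumes the standard product formulas \(e_a(s_0,s)=\prod_{i=s_0}^{s-1}(1+a(i))\) for \(\Delta y=a(s)y(s)\) and \(\hat e_a(s_0,s)=\prod_{i=s_0}^{s-1}(1-a(i))^{-1}\) for \(\Delta y=a(s)y(s+1)\) (the names e1, e2 are ours):

```python
# Two difference exponentials, assuming the standard product formulas
# obtained by iterating y(s+1) = (1 + a(s)) y(s) and y(s+1) = y(s)/(1 - a(s)).

def e1(a, s0, s):
    """Solution of Delta y = a(s) y(s) with y(s0) = 1."""
    p = 1.0
    for i in range(s0, s):
        p *= 1.0 + a(i)
    return p

def e2(a, s0, s):
    """Solution of Delta y = a(s) y(s+1) with y(s0) = 1."""
    p = 1.0
    for i in range(s0, s):
        p /= 1.0 - a(i)
    return p

a, s0 = (lambda s: 1.0 / (s + 2)), 0
for s in range(s0, 8):
    y, z = e1(a, s0, s), e2(a, s0, s)
    # e1 satisfies Delta y = a(s) y(s); e2 satisfies Delta y = a(s) y(s+1).
    assert abs((e1(a, s0, s + 1) - y) - a(s) * y) < 1e-12
    assert abs((e2(a, s0, s + 1) - z) - a(s) * e2(a, s0, s + 1)) < 1e-12
    # Reciprocity in the spirit of Theorem 2.2: the product of the second
    # exponential for a and the first exponential for -a is constant (= 1).
    assert abs(z * e1(lambda i: -a(i), s0, s) - 1.0) < 1e-12
```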
Nonhomogeneous Cases
Consider first the equation
Solving (23) by the method of variation of constants (the method of Lagrange), we suppose that
and search the solution of (23) as
where c(s) is to be determined. Placing (25) in (23) and using (24), we get
or
Placing this in (25), we get
with \(c=y_{0}^{-1}(s_{0})y(s_{0})\) (we suppose that \(\sum _{i=s_{1}}^{s_{2}}h(i)=0\) if s 1 > s 2), or equivalently
where \(\phi (a,b)=y_{0}(a)y_{0}^{-1}(b).\)
Consider now the nonhomogeneous equation
Here also, solving the equation by the method of Lagrange, we get
where
and \(c=y_{0}^{-1}(s_{0})y(s_{0})\), or equivalently,
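The Lagrange (variation of constants) solution can be checked against the defining recursion. The sketch below assumes the equation reads \(\Delta y(s)=a(s)y(s)+h(s)\), i.e. y(s + 1) = (1 + a(s))y(s) + h(s), and uses the solution formula \(y(s)=y_0(s)\big(y(s_0)+\sum_{i=s_0}^{s-1}y_0^{-1}(i+1)h(i)\big)\) (the function names are ours):

```python
# Variation-of-constants solution of Delta y = a(s) y(s) + h(s), checked
# against the defining recursion y(s+1) = (1 + a(s)) y(s) + h(s).

def y0(a, s0, s):
    """Homogeneous solution of Delta y = a(s) y(s) with y(s0) = 1."""
    p = 1.0
    for i in range(s0, s):
        p *= 1.0 + a(i)
    return p

def lagrange_solution(a, h, s0, ys0, s):
    """y(s) = y0(s) * (y(s0) + sum_{i=s0}^{s-1} h(i) / y0(i+1))."""
    acc = ys0
    for i in range(s0, s):
        acc += h(i) / y0(a, s0, i + 1)
    return y0(a, s0, s) * acc

a = lambda s: 0.5 / (s + 1)
h = lambda s: (-1.0) ** s / (s + 3)

y, s0 = 2.0, 0
for s in range(s0, 10):
    assert abs(lagrange_solution(a, h, s0, 2.0, s) - y) < 1e-12
    y = (1.0 + a(s)) * y + h(s)   # step the recursion forward
```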
3 Difference Integral Inequalities
In this section we deal with the main content of the work, that is, we establish the mentioned integral inequalities. In the first two subsections, where we prove the Hölder, Cauchy-Schwarz, and Minkowski inequalities, we rely on classical recipes commonly used in the differential setting. In the last three subsections, where we prove the Grönwall, Bernoulli, and Lyapunov inequalities, we rely mainly on Lagrange's method of variation of constants.
3.1 Hölder and Cauchy-Schwarz Inequalities
Theorem 3.1 (Hölder Inequality)
Let \(a, b \in \mathbb {Z}\) . For all functions \(f,g:[a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have
with \(\frac {1}{\alpha }+\frac {1}{\beta }=1\).
Proof
For A, B ∈ [0, ∞[, by the concavity of the logarithm function, we have
which leads to
Now let
with
since \({\int _{a}^{b}|f(s)|{ }^{\alpha }d_{\Delta }s}=0\) or \( {\int _{a}^{b}|g(s)|{ }^{\beta }d_{\Delta }s}=0\) implies that f(s) ≡ 0 or g(s) ≡ 0, in which case (1) becomes an identity.
Next, substituting A and B in (3) and integrating from a to b, using Corollary 2.1, one gets
which directly gives the Hölder inequality, and the theorem is proved.
If we set α = β = 2 in the Hölder inequality (1), we get the Cauchy-Schwarz inequality.
Corollary 3.1 (Cauchy-Schwarz Inequality)
Let \(a, b \in \mathbb {Z}\) . For all functions \(f, g: [a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have
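The Hölder inequality of Theorem 3.1, with Cauchy-Schwarz as the case α = β = 2, can be sanity-checked numerically, reading the integral as the sum over s = a, …, b − 1:

```python
# Numerical check of the difference Hölder inequality on random data,
# with the difference integral read as a sum over s = a, ..., b-1.
import random

def integral(F, a, b):
    return sum(F(s) for s in range(a, b))

random.seed(1)
a, b = 0, 20
f = {s: random.uniform(-2, 2) for s in range(a, b)}
g = {s: random.uniform(-2, 2) for s in range(a, b)}

for alpha in (1.5, 2.0, 3.0):              # alpha = 2 gives Cauchy-Schwarz
    beta = alpha / (alpha - 1.0)           # conjugate exponent: 1/alpha + 1/beta = 1
    lhs = integral(lambda s: abs(f[s] * g[s]), a, b)
    rhs = (integral(lambda s: abs(f[s]) ** alpha, a, b) ** (1.0 / alpha)
           * integral(lambda s: abs(g[s]) ** beta, a, b) ** (1.0 / beta))
    assert lhs <= rhs + 1e-12
```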
Next, we can use the Hölder inequality to prove the Minkowski one.
3.2 Minkowski Inequality
Theorem 3.2 (Minkowski Inequality)
Let \(a, b\in \mathbb {Z}\) . For all functions \(f, g: [a, b]\cap \mathbb {Z}\longrightarrow \mathbb {R}\) , we have
Proof
We apply the Hölder inequality to obtain
Dividing both sides of the inequality by \(\left ( \int _{a}^{b}|f(s)+g(s)|{ }^{(\alpha -1)\beta }d_{\Delta }s\right )^{\frac {1}{\beta }}\), with (α − 1)β = α, we get
which is the Minkowski inequality since \(1-\frac {1}{\beta }=\frac {1}{\alpha }\).
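The Minkowski inequality of Theorem 3.2 can likewise be checked on random data, with the discrete α-norm built from the same sum convention for the integral:

```python
# Numerical check of the difference Minkowski inequality (triangle
# inequality for the discrete alpha-norm) on random data.
import random

def norm(F, a, b, alpha):
    """Discrete alpha-norm: (sum_{s=a}^{b-1} |F(s)|^alpha)^(1/alpha)."""
    return sum(abs(F[s]) ** alpha for s in range(a, b)) ** (1.0 / alpha)

random.seed(2)
a, b = 0, 25
f = {s: random.uniform(-3, 3) for s in range(a, b)}
g = {s: random.uniform(-3, 3) for s in range(a, b)}
h = {s: f[s] + g[s] for s in range(a, b)}

for alpha in (1.5, 2.0, 4.0):
    assert norm(h, a, b, alpha) <= norm(f, a, b, alpha) + norm(g, a, b, alpha) + 1e-12
```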
3.3 Grönwall Inequality
We first prove the following:
Lemma 3.1
Let y, f, a be real-valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\) . Suppose that y 0(s) is the solution of Δy 0(s) = a(s)y 0(s) such that y 0(s 0) = 1.
In that case, if
for all \(s\in \mathbb {Z}\) , then
Proof
Let y 0(s) be the solution of the homogeneous equation
Searching the solution y(s) of (9) verifying (10), by the method of variation of constants
where c(s) is unknown, we place (12) in (9), considering (11) and get
Given that \(a(s){\geqslant } 0\), we have y 0(s) > 0, and relation (13) simplifies to
Integrating both sides of the inequality from s 0 to s, we get
Since y 0(s 0) = 1, (12) gives c(s 0) = y(s 0), and (15) simplifies to
Hence
which gives the expected result:
Considering Theorem 2.1, we obtain the following
Corollary 3.2
If the functions y, f, a verify the conditions of Lemma 3.1 , then
Lemma 3.2
Let y, f, a be real-valued functions defined on \(\mathbb {Z}\) , with \(a(s){\leqslant } 0\).
Suppose that y 0(s) is the solution of Δy 0(s) = a(s)y 0(s + 1), such that y 0(s 0) = 1.
In that case, if
for all \(s\in \mathbb {Z}\) , then
Proof
Let y 0(s) be the solution of the homogeneous equation
Searching the solution y(s) of (18) verifying (19), by the method of variation of constants
where c(s) is unknown, we place (21) in (18), considering (20) and get
Given that \(a(s){\leqslant } 0\), we have y 0(s) > 0, and relation (22) simplifies to
Integrating both sides of the inequality from s 0 to s, we get
Since y 0(s 0) = 1, (21) gives c(s 0) = y(s 0), and (24) simplifies to
Hence
which gives the expected result:
For the same reasons as in Corollary 3.2, we obtain the following:
Corollary 3.3
If the functions y, f, a verify the conditions of Lemma 3.2 , then
We can now prove the following:
Theorem 3.3 (Grönwall Inequality)
Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\).
Suppose that y 0(s) is the solution of Δy 0(s) = a(s)y 0(s), such that y 0(s 0) = 1.
In that case if
then
Proof
Defining
(27) gives
and
By the Corollary 3.2 of Lemma 3.1, the inequality (31) leads to
Since v(s 0) = 0, (30) and (32) imply that
which is the expected Grönwall inequality.
As direct consequences, we obtain the following results:
Corollary 3.4
Let y, f, a be real valued functions defined on \(\mathbb {Z}\) , with \(a(s){\geqslant } 0\) . If
for all \(s\in \mathbb {Z}\) , then
Proof
This follows from Theorem 3.3 with f(s) ≡ 0.
Corollary 3.5
Let \(a(s){\geqslant } 0\) and \(\alpha \in \mathbb {R}\) . If
for all \(s\in \mathbb {Z}\) , then
Proof
From the Grönwall inequality with f(s) = α, one gets
which gives the expected inequality.
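Corollary 3.5 can be exercised numerically. The sketch below assumes the hypothesis reads \(y(s)\leqslant\alpha+\int_{s_0}^{s}a(i)y(i)d_\Delta i\) and the conclusion \(y(s)\leqslant\alpha\,e_a(s_0,s)=\alpha\prod_{i=s_0}^{s-1}(1+a(i))\) (the displayed formulas are not reproduced above, so this reading is ours):

```python
# Gronwall-type check: build a y satisfying, with slack, the hypothesis
# y(s) <= alpha + sum_{i=s0}^{s-1} a(i) y(i)  with a(s) >= 0,
# and verify the conclusion y(s) <= alpha * prod_{i=s0}^{s-1} (1 + a(i)).
import random

random.seed(7)
s0, n, alpha = 0, 15, 1.0
a = {s: random.uniform(0.0, 0.5) for s in range(s0, n)}

y = {s0: 0.8 * alpha}                       # strictly below alpha at the start
for s in range(s0, n):
    bound_next = alpha + sum(a[i] * y[i] for i in range(s0, s + 1))
    y[s + 1] = 0.9 * bound_next             # strictly below the allowed value

prod = 1.0
for s in range(s0, n + 1):
    assert y[s] <= alpha * prod + 1e-12     # Gronwall bound holds
    if s < n:
        prod *= 1.0 + a[s]
```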
3.4 Bernoulli Inequality
Theorem 3.4 (Bernoulli Inequality)
Let \(\alpha \in \mathbb {R}\) . Then for all \(s, s_{0} \in \mathbb {Z}\) , with s > s 0 , we have
Proof
Let y(s) = α(s − s 0), s > s 0. Then \(\Delta y(s)=\alpha \) and \(\alpha y(s)+\alpha =\alpha ^{2}(s-s_{0})+\alpha {\geqslant }\alpha =\Delta y(s)\), which implies that \(\Delta y(s){\leqslant }\alpha y(s)+\alpha \).
By the Corollary 3.2 of Lemma 3.1, we obtain
Hence \(e_{\alpha }(s_{0}, s){\geqslant } 1+\alpha (s-s_{0})\) for s > s 0, as expected.
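For a constant coefficient, the difference exponential of Sect. 2.2 reduces to \(e_\alpha(s_0,s)=(1+\alpha)^{s-s_0}\), so Theorem 3.4 reads \((1+\alpha)^{s-s_0}\geqslant 1+\alpha(s-s_0)\); this can be checked directly (the proof above uses Corollary 3.2, which needs \(\alpha\geqslant 0\), though the inequality also holds for \(\alpha\geqslant -1\)):

```python
# Direct check of the difference Bernoulli inequality
# (1 + alpha)^n >= 1 + alpha * n for n = s - s0 >= 1.

def e_const(alpha, n):
    """Difference exponential with constant coefficient alpha over n steps."""
    return (1.0 + alpha) ** n

for alpha in (-0.5, 0.0, 0.3, 1.0, 2.0):
    for n in range(1, 12):
        assert e_const(alpha, n) >= 1.0 + alpha * n - 1e-12
```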
3.5 Lyapunov Inequality
Let \(f: \mathbb {Z}\longrightarrow [0, \infty [\). Consider the Sturm-Liouville difference equation
Define the function F by
We prove first the following lemmas:
Lemma 3.3
Let u(s) be a nontrivial solution of the Sturm-Liouville difference equation (41). In that case, for all y in the domain of definition of F, the following equality holds:
Proof
We have
which proves the lemma.
Lemma 3.4
Let y be in the domain of definition of F. For all \(c,d \in [a,b]\cap \mathbb {Z}\) , \(a,b \in \mathbb {Z}\) and \(a{\leqslant } c{\leqslant } d{\leqslant } b\) , we have
Proof
Let \(u(s)=\frac {y(d)-y(c)}{d-c}s+\frac {dy(c)-cy(d)}{d-c}.\) Then \(\Delta u(s)=\frac {y(d)-y(c)}{d-c}\) and Δ2 u(s) = 0. This shows that u(s) is a solution of (41) with f(s) = 0 for all \(s \in \mathbb {Z}\), in which case \(F(y)=\int _{a}^{b}\left (\Delta y(s)\right )^{2}d_{\Delta }s \) for all y in the domain of definition of F. By Lemma 3.3, we get F(y) − F(u) − F(y − u) = 0, and consequently \(F(y)=F(u)+F(y-u){\geqslant } F(u)\). This leads to the following result:
which proves the lemma.
Theorem 3.5 (Lyapunov Inequality)
Given \(f: \mathbb {Z}\longrightarrow [0, \infty [\) and u a nontrivial solution of Eq.(41) with \(u(a)=u(b)=0, a, b\in \mathbb {Z}\) and a < b, then
Proof
By Lemma 3.3 with y = 0 and u(a) = u(b) = 0, one gets F(0) − F(u) − F(−u) = −2u(b) Δu(b) + 2u(a) Δu(a). This gives F(u) = 0, since F(0) = 0 and F(u) = F(−u). Thus
Let \(M=\max \left \{ u^{2}(s) : s\in [a, b]\cap \mathbb {Z}\right \} \) and let \(c\in [a, b]\cap \mathbb {Z}\) be such that u 2(c) = M. Then \(M=u^{2}(c){\geqslant } u^{2}(s+1) \), and using (48), Lemma 3.4 and the fact that u(a) = u(b) = 0, we get
which proves the Lyapunov inequality.
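A concrete instance can be checked numerically. The sketch below assumes, from the steps of the proof, that (41) reads \(\Delta^2 u(s)+f(s)u(s+1)=0\) and that the conclusion is \(\sum_s f(s)\geqslant 4/(b-a)\) over the interior grid points (the displayed formulas are not reproduced above, so both readings are ours):

```python
# Check of the Lyapunov inequality on a tent-shaped solution, assuming (41)
# is Delta^2 u(s) + f(s) u(s+1) = 0 with u(a) = u(b) = 0, f >= 0.
a, b = 0, 4
u = [0.0, 1.0, 2.0, 1.0, 0.0]            # nontrivial, u(a) = u(b) = 0

# Recover f from the assumed equation: f(s) = -Delta^2 u(s) / u(s+1).
f = []
for s in range(a, b - 1):
    d2 = u[s + 2] - 2.0 * u[s + 1] + u[s]
    f.append(-d2 / u[s + 1])

assert all(fs >= 0.0 for fs in f)        # f maps into [0, infinity)
assert sum(f) >= 4.0 / (b - a) - 1e-12   # Lyapunov bound (equality here)
```

For this tent-shaped u the bound is attained with equality, matching the sharpness one expects from the proof (the maximum of u² sits at the midpoint c = (a + b)/2).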
References
R. Askey, J.A. Wilson, Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Mem. Am. Math. Soc. 54, 1–55 (1985)
G. Bangerezako, J.P. Nuwacu, q-Non uniform difference calculus and classical integral inequalities. J. Inequal. Appl. (accepted for publication)
S. Elaydi, An Introduction to Difference Equations (Springer, New York, 2006)
F.H. Jackson, q-Difference equations. Am. J. Math. 32, 305–314 (1910)
A.P. Magnus, Associated Askey-Wilson polynomials as Laguerre-Hahn orthogonal polynomials. Lecture Notes in Mathematics, vol. 1329 (Springer, Berlin, 1988), pp. 261–278
A.P. Magnus, Special nonuniform lattices (snul) orthogonal polynomials on discrete dense sets of points. J. Comput. Appl. Math. 65, 253–265 (1995)
Bangerezako, G., Nuwacu, J.P. (2018). Some Difference Integral Inequalities. In: Diagana, T., Toni, B. (eds) Mathematical Structures and Applications. STEAM-H: Science, Technology, Engineering, Agriculture, Mathematics & Health. Springer, Cham. https://doi.org/10.1007/978-3-319-97175-9_5