Abstract
We aim to establish refinements of the Jensen inequality for the classes of completely convex and absolutely convex functions. In the first case the refinement is expressed in terms of the alternating sum of Lidstone polynomials, while in the second case we deal with the sum of the Lidstone polynomials. As an application, more accurate power mean inequalities are derived. In particular, we obtain strengthened versions of the arithmetic–geometric mean inequality in a difference and a quotient form. Finally, we also establish a more accurate form of the Hölder inequality.
1 Introduction
The Lidstone polynomials (see [11]) were introduced to provide an approximation of a function in the neighborhood of two points, instead of the one-point approximation given by the Taylor expansion. From a practical point of view, the Lidstone polynomial expansion turns out to be very useful. Besides approximation theory, such interpolation is widely used in boundary value problems appearing in engineering and other areas of the physical sciences.
The Lidstone polynomial \(\Lambda _n(t)\) is the unique polynomial of degree \(2n+1\) defined by the following boundary value problem:
After the initial polynomial \( \Lambda _0(t)=t\), the next two polynomials are \(\Lambda _1(t)=\frac{t^{3}}{6}-\frac{t}{6}\) and \(\Lambda _2(t)=\frac{t^{5}}{120}-\frac{t^{3}}{36}+\frac{7t}{360}\). Clearly, it follows from the above boundary value problem that \(\Lambda _n(t)\), \(n\ge 1\), contains only odd powers (see also [2]). It is well known from the theory of differential equations that the Lidstone polynomials can be expressed in integral form as
where
is the homogeneous Green function of the differential operator \(\textrm{d}^{2}/\textrm{d}s^{2}\) on the unit interval, and with the successive iterates
The Lidstone polynomials can also be represented by Fourier series, and by Bernoulli polynomials and numbers (for more details, see [2] and the references cited therein).
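As a numerical illustration (not part of the formal development), the polynomials listed above can be reproduced exactly. The sketch below assumes the standard defining recursion \(\Lambda _0(t)=t\), \(\Lambda _n''(t)=\Lambda _{n-1}(t)\), \(\Lambda _n(0)=\Lambda _n(1)=0\), which is the content of the boundary value problem (1):

```python
from fractions import Fraction

def lidstone(n):
    """Coefficient list of Lambda_n(t): coeffs[k] is the coefficient of t**k."""
    p = [Fraction(0), Fraction(1)]                  # Lambda_0(t) = t
    for _ in range(n):
        # integrate twice: t**k -> t**(k+2) / ((k+1)*(k+2)); the constant term
        # is 0 by Lambda_n(0) = 0, and the linear term is fixed by Lambda_n(1) = 0
        q = [Fraction(0), Fraction(0)] + [
            c / ((k + 1) * (k + 2)) for k, c in enumerate(p)
        ]
        q[1] = -sum(q)
        p = q
    return p

print(lidstone(1))   # coefficients of t^3/6 - t/6
print(lidstone(2))   # coefficients of t^5/120 - t^3/36 + 7t/360
```

The even-indexed coefficients produced by this recursion are all zero, confirming that \(\Lambda _n(t)\), \(n\ge 1\), contains only odd powers.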
In the middle of the last century, Widder [14] established the following basic interpolation formula for a 2n-times continuously differentiable function \(f\in C^{(2n)}([0,1])\):
By a simple linear transform \(x\mapsto \hat{x}=\frac{x-a}{b-a}\), the above representation can be extended to hold for an arbitrary interval [a, b], that is, if \(f\in C^{(2n)}([a,b])\), then
where \(\hat{x}^{*}=1-\hat{x}=\frac{b-x}{b-a}\) and \(\hat{s}=\frac{s-a}{b-a}\). For a comprehensive review on Lidstone interpolation including error representations and estimates, the reader is referred to [2].
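A quick consistency check of such expansions is possible, assuming the standard unit-interval form of Widder's formula, \(f(x)=\sum _{k=0}^{n-1}\big [f^{(2k)}(0)\Lambda _k(1-x)+f^{(2k)}(1)\Lambda _k(x)\big ]+\int _0^1 G_n(x,s)f^{(2n)}(s)\,\textrm{d}s\): the integral remainder vanishes for polynomials of degree at most \(2n-1\), so the finite sum must be exact. The sketch below verifies this on \(f(t)=t^{3}\) with \(n=2\), using exact rational arithmetic:

```python
from fractions import Fraction

def lidstone(n):
    """Coefficients of Lambda_n(t), built from Lambda_n'' = Lambda_{n-1}."""
    p = [Fraction(0), Fraction(1)]                  # Lambda_0(t) = t
    for _ in range(n):
        q = [Fraction(0), Fraction(0)] + [
            c / ((k + 1) * (k + 2)) for k, c in enumerate(p)
        ]
        q[1] = -sum(q)                              # Lambda_n(1) = 0
        p = q
    return p

def poly_eval(coeffs, t):
    return sum(c * t**k for k, c in enumerate(coeffs))

# f(t) = t^3: f(0) = 0, f(1) = 1, f''(0) = 0, f''(1) = 6, f'''' = 0,
# so with n = 2 the remainder vanishes and the Lidstone sum is exact.
L0, L1 = lidstone(0), lidstone(1)
for x in [Fraction(1, 4), Fraction(1, 3), Fraction(9, 10)]:
    approx = (0 * poly_eval(L0, 1 - x) + 1 * poly_eval(L0, x)
              + 0 * poly_eval(L1, 1 - x) + 6 * poly_eval(L1, x))
    assert approx == x**3
```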
In this paper, we use polynomial expansion (5) in a slightly different context. More precisely, this formula will be exploited in establishing refinements of the Jensen inequality for certain classes of convex functions. Recall that the function \(f:I\rightarrow \mathbb {R}\), where \(I\subset \mathbb {R}\) is an interval, is said to be convex if the relation \( f\left( \left( 1-t \right) x+t y \right) \le \left( 1-t \right) f\left( x \right) +t f\left( y \right) \) holds for all \(x,y\in I\) and \(0\le t \le 1\). The Jensen inequality can be rewritten in the form of the corresponding functional, i.e.
where \(f:I\rightarrow \mathbb {R}\) is a convex function, \(\textbf{x}=(x_1, x_2, \ldots , x_m)\in I^{m}\) and \(\textbf{p}=(p_1, p_2, \ldots , p_m)\in \mathbb {R}_+^{m}\) is such that \(P_m=\sum _{i=1}^m p_i>0\). Here, \(\mathbb {R}_+\) stands for the set of non-negative real numbers. This functional is monotonic, that is, \( \mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge \mathcal {J}_m(f, \textbf{x}, \textbf{q})\ge 0, \) whenever \(\textbf{p}\ge \textbf{q}\), i.e. \(p_i\ge q_i\), \(i=1,2,\ldots ,m\) (see [5] and [12, p. 717]). Moreover, this monotonicity has been exploited in [10] to establish mutual bounds for the Jensen functional expressed in terms of the associated non-weighted functional, that is,
where \(p_{\min }=\min _{1\le i\le m}p_i\), \(p_{\max }=\max _{1\le i\le m}p_i\), and where \(\mathcal {I}_m(f, \textbf{x})\) stands for the corresponding non-weighted functional, i.e.
The lower bound in (7) is the refinement, while the upper one represents the reverse of the Jensen inequality. Based on (7), numerous inequalities such as inequalities of Young and Hölder, means inequalities, etc. have been refined (for more details, see [9, 10] and the references cited therein). For a comprehensive overview of classical and new results about the Jensen inequality, the reader is referred to monographs [8, 12, 13] and the references cited therein.
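The monotonicity of the Jensen functional stated above is easy to probe numerically. In the sketch below, `jensen_functional` is our own illustrative helper (not taken from the cited papers); it checks \(\mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge \mathcal {J}_m(f, \textbf{x}, \textbf{q})\ge 0\) on random data with \(\textbf{p}\ge \textbf{q}\) componentwise:

```python
import random

def jensen_functional(f, x, p):
    """J_m(f, x, p) = sum p_i f(x_i) - P_m f((1/P_m) sum p_i x_i)."""
    P = sum(p)
    mean = sum(pi * xi for pi, xi in zip(p, x)) / P
    return sum(pi * f(xi) for pi, xi in zip(p, x)) - P * f(mean)

random.seed(0)
f = lambda t: t * t                                  # a convex function
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(4)]
    q = [random.uniform(0.1, 1) for _ in range(4)]
    p = [qi + random.uniform(0, 1) for qi in q]      # p >= q componentwise
    jp, jq = jensen_functional(f, x, p), jensen_functional(f, x, q)
    assert jp >= jq - 1e-9 and jq >= -1e-9           # J(p) >= J(q) >= 0
```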
The main task in this paper is to establish another type of lower bound for the Jensen functional, regardless of the weights. In other words, our intention is to derive a non-trivial bound that is also valid for the non-weighted functional \(\mathcal {I}_m(f, \textbf{x})\), since (7) is trivial in that case. It turns out that the Lidstone interpolation is quite suitable for some classes of convex functions. In fact, this is not the first attempt to improve the Jensen inequality in this way. For example, Aras-Gazić et al. [3] derived the following identity, given here in its simplest discrete form,
where \(\hat{x}_i=\frac{x_i-a}{b-a}\), \(\hat{x}_i^{*}=1-\hat{x}_i\), and \(\hat{x}_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i \hat{x}_i\), \(\hat{x}_{P_m}^{*}=1-\hat{x}_{P_m}\). This identity was the basic step in deriving Jensen-type inequalities for the classes of 2n-convex, completely convex and absolutely convex functions. The resulting inequalities were established by imposing positivity of the Jensen functionals appearing on the right-hand side of (9). However, no deeper analysis of the Lidstone polynomials and the corresponding Green functions regarding convexity was carried out. We will show that conditions of this type are redundant in the case of a non-negative m-tuple \(\textbf{p}\). However, it will be necessary to rewrite (9) in a more suitable form.
Let us first recall the definitions of the functions mentioned above. It is well known that an n-convex function is defined via the n-th order divided difference (see, e.g. [2, 13]), but since we deal with differentiable functions, we will use the simplest characterization, which asserts that if the n-th order derivative \(f^{(n)}\) exists on the given interval, then the function f is n-convex if and only if \(f^{(n)}\ge 0\) on that interval. Similarly, f is n-concave if and only if \(f^{(n)}\le 0\). Recall that if \(n=2\), then n-convexity reduces to the usual convexity. Further, the function \(f:[a,b]\rightarrow \mathbb {R}\) is completely convex if it has derivatives of all orders and if \((-1)^{k}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for every non-negative integer k. In particular, \(f(x)=\sin x\) is completely convex on the interval \([0,\pi ]\), while \(g(x)=\cos x\) is completely convex on \([-\frac{\pi }{2}, \frac{\pi }{2}]\). Finally, the function \(f:[a,b]\rightarrow \mathbb {R}\) is absolutely convex if it has derivatives of all orders and if \(f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for every non-negative integer k. Clearly, \(h(x)=\exp x\) is absolutely convex on any interval.
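These example functions are easy to check numerically: for the sine function, \(\sin ^{(2k)}x=(-1)^{k}\sin x\), so \((-1)^{k}\sin ^{(2k)}x=\sin x\ge 0\) on \([0,\pi ]\), and analogously for the cosine on \([-\frac{\pi }{2},\frac{\pi }{2}]\); every even derivative of \(\exp \) is \(\exp \) itself. A minimal sanity-check sketch:

```python
import math

# sin is completely convex on [0, pi]: sin^(2k)(x) = (-1)^k sin(x),
# so (-1)^k sin^(2k)(x) = sin(x) >= 0 there; similarly for cos on [-pi/2, pi/2].
for k in range(6):
    for j in range(51):
        x = math.pi * j / 50                      # grid on [0, pi]
        assert (-1) ** k * ((-1) ** k * math.sin(x)) >= -1e-15
        y = -math.pi / 2 + math.pi * j / 50       # grid on [-pi/2, pi/2]
        assert (-1) ** k * ((-1) ** k * math.cos(y)) >= -1e-15

# exp is absolutely convex on any interval: every even derivative is exp itself.
for x in [-3.0, 0.0, 2.5]:
    assert math.exp(x) >= 0
```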
We aim here to establish non-negative lower bounds for functional (6) expressed in terms of the sum and alternating sum of the Lidstone polynomials. This can be done for some classes of convex functions. The outline of the paper is as follows: after this Introduction, in Sect. 2 we discuss some known properties of the Lidstone polynomials and the corresponding Green functions regarding their convexity on the unit interval. We also introduce the Euler polynomials, which are closely connected to the Lidstone polynomials. Moreover, we discuss conditions under which the sum and alternating sum of the Lidstone polynomials are convex. Section 3 is devoted to our main results. The first result corresponds to a convex function f whose opposite function \(-f\) is completely convex. In this case the Jensen functional can easily be bounded by the functionals that correspond to the alternating sum of the Lidstone and Euler polynomials. The case of an absolutely convex function is much more complicated. Hence, it is necessary to impose some additional conditions to obtain a refinement of the Jensen inequality expressed in terms of the sum of the Lidstone and Euler polynomials. Roughly speaking, the corresponding refinement can be established on a small enough interval. As an application, in Sect. 4, we derive more accurate power mean inequalities. In particular, we obtain strengthened versions of the arithmetic–geometric mean inequality in a difference and a quotient form. Finally, in Sect. 5 we establish an improvement of the Hölder inequality based on the method developed in this paper.
To keep our discussion as concise as possible, we introduce the following notation that will be valid throughout the paper. As above, \(\hat{x}\) stands for the linear transform \(x\mapsto \frac{x-a}{b-a}\), i.e. \(\hat{x}=\frac{x-a}{b-a}\). In addition, \(\hat{x}^{*}=1- \hat{x}=\frac{b-x}{b-a}\). The same holds for a real m-tuple \(\textbf{x}=(x_1, x_2, \ldots , x_m)\), i.e. \(\hat{\textbf{x}}=(\hat{x}_1, \hat{x}_2, \ldots , \hat{x}_m)\) and \(\hat{\textbf{x}}^{*}=\textbf{1}-\hat{{\textbf{x}}}=(1-\hat{x}_1, 1-\hat{x}_2, \ldots , 1-\hat{x}_m)\). Furthermore, if \(x_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i x_i\), then \(\hat{x}_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i \hat{x}_i\) and \(\hat{x}_{P_m}^{*}=1-\hat{x}_{P_m}\).
2 Preliminaries on the Lidstone Polynomials
At this point, we recall some basic properties of the Lidstone polynomials. These polynomials do not change sign on the unit interval. Moreover, their signs alternate, i.e. \(\Lambda _{2k}(t)\ge 0\), \(\Lambda _{2k+1}(t)\le 0\), for \(t\in [0,1]\). This conclusion is easily drawn from (2), (3) and (4), since the initial Green function \(G_1\) is obviously negative on the unit square. In addition, taking into account the boundary value problem (1) that defines the Lidstone polynomials, one concludes that the signs of their second derivatives also alternate on the unit interval. This means that the Lidstone polynomials alternate with respect to convexity. A similar conclusion can be drawn for the sequence of Green functions \(G_n\). For the reader's convenience, these properties are clearly stated in the following proposition; for more details, the reader is referred to [2].
Proposition 1
(see [2]) Let n and m be non-negative integers. Then the following properties hold:
-
(i)
\((-1)^{n}\Lambda _n(t)\ge 0\), \(t\in [0,1]\),
-
(ii)
\((-1)^{n+1}\Lambda _n(t)\) is convex on interval [0, 1],
-
(iii)
\((-1)^{n}G_n(t,s)\ge 0\), \(t,s\in [0,1]\), \(n\ge 1\),
-
(iv)
\(\frac{\partial ^{2\,m}G_n(t,s)}{\partial t^{2\,m}}=G_{n-m}(t,s)\), \(t,s\in [0,1]\), \(n-m\ge 1\), in particular, \(\frac{\partial ^{2}G_n(t,s)}{\partial t^{2}}=G_{n-1}(t,s)\),
-
(v)
\((-1)^{n+1}G_n(t,s)\) is convex on [0, 1], for every fixed value \(s\in [0,1]\).
The Lidstone polynomials are closely connected to another class of special polynomials. The Euler polynomial \(E_n(t)\) of degree n may be defined by means of the generating function such that
The first few Euler polynomials are \(E_0(t)=1\), \(E_1(t)=t-\frac{1}{2}\), \(E_2(t)=\frac{1}{2}t^{2}-\frac{1}{2}t\), \(E_3(t)=\frac{1}{6}t^{3}-\frac{1}{4}t^{2}+\frac{1}{24}\), etc. These polynomials have numerous interesting properties (see, e.g. [1, 2, 6]); here, of course, we are interested in their connection with the Lidstone polynomials. It is well known (see, e.g. [2]) that the Euler polynomial of an even order can be expressed in the following way:
It is clear from (10) that the sign of \(E_{2n}(t)\) coincides with the sign of polynomial \(\Lambda _n(t)\) on the unit interval, i.e. \((-1)^{n}E_{2n}(t)\ge 0\), \(0\le t\le 1\). Furthermore, in terms of convexity, the Euler polynomial of even order behaves exactly the same as the corresponding Lidstone polynomial, that is, polynomial \((-1)^{n+1}E_{2n}(t)\) is convex on the unit interval.
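The listed polynomials and the sign property can be verified numerically. The sketch below assumes the normalization implied by the examples above (the classical Euler polynomial divided by \(n!\)), in which \(E_n'(t)=E_{n-1}(t)\) and \(E_n(0)+E_n(1)=0\) for \(n\ge 1\); it regenerates \(E_2\), \(E_3\) and checks \((-1)^{n}E_{2n}(t)\ge 0\) on a grid of the unit interval:

```python
from fractions import Fraction

def euler(n):
    """Coefficients of E_n(t) in the paper's normalization (classical E_n / n!)."""
    p = [Fraction(1)]                                # E_0(t) = 1
    for _ in range(n):
        # antiderivative, since E_n'(t) = E_{n-1}(t) in this normalization
        q = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]
        q[0] = -sum(q) / 2                           # E_n(0) + E_n(1) = 0, n >= 1
        p = q
    return p

print(euler(2))   # coefficients of t^2/2 - t/2
print(euler(3))   # coefficients of t^3/6 - t^2/4 + 1/24

# sign check: (-1)^n E_{2n}(t) >= 0 on [0, 1]
for n in range(4):
    c = euler(2 * n)
    for j in range(101):
        t = Fraction(j, 100)
        assert (-1) ** n * sum(ck * t**k for k, ck in enumerate(c)) >= 0
```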
Roughly speaking, in this paper we establish lower bounds for the Jensen functional expressed in terms of the sum and alternating sum of the Lidstone and Euler polynomials. More precisely, we consider the sum and alternating sum of polynomials of the form \((b-a)^{2k}\Lambda _k(t)\) and \((b-a)^{2k}E_{2k}(t)\), that is, we define polynomials
where n is a non-negative integer and \(a,b\in \mathbb {R}\). We want to answer the question of when these polynomials are convex. Of course, the matter is extremely simple for alternating sums.
Proposition 2
Let n be a non-negative integer and let \(a,b\in \mathbb {R}\). Then, polynomials \(\alpha _n(t)\) and \(\widetilde{\alpha }_n(t)\) are convex on interval [0, 1].
Proof
Due to Proposition 1 and (10), both polynomials \((b-a)^{2k}(-1)^{k+1}\Lambda _k(t)\) and \((b-a)^{2k}(-1)^{k+1}E_{2k}(t)\) are convex for every non-negative integer k. Consequently, the corresponding sums \(\alpha _n(t)\) and \(\widetilde{\alpha }_n(t)\) are also convex, which completes the proof. \(\square \)
Things get much more complicated if we consider the sum of polynomials \((b-a)^{2k}\Lambda _k(t)\) or \((b-a)^{2k}E_{2k}(t)\). In fact, we will see that increasing the value \(|b-a|\) ruins the convexity on the unit interval. Fortunately, we are able to establish convexity for smaller values of \(|b-a|\).
Proposition 3
Let n be a positive integer and let \(|b-a|\le \sqrt{6}\). Then, polynomials \(\omega _{n}(t)\) and \(\widetilde{\omega }_{n}(t)\) are convex on interval [0, 1].
Proof
We claim that all coefficients of polynomial \(\omega _{n}(t)\) are non-negative. This will lead to convexity of \(\omega _{n}(t)\) on the unit interval. We prove our assertion by mathematical induction. Clearly, \(\omega _0(t)=\Lambda _0(t)=t\). Moreover,
has non-negative coefficients provided that \(|b-a|\le \sqrt{6}\). Now, suppose that \( \omega _{n-1}(t)=\sum _{k=0}^{n-1}\alpha _{2k+1}t^{2k+1}, \) where \(\alpha _{2k+1}\ge 0\), \(k=0,1,2,\ldots ,n-1.\) Note also that \(\sum _{k=0}^{n-1}\alpha _{2k+1}=1\), since \(\omega _{n-1}(1)=1\). Now, we have that
Moreover, since \(\omega _n(0)=0\), by integrating, it follows that
where \(\alpha \) is a real constant. It remains to prove that \(\alpha \ge 0\). Namely, since \((b-a)^{2}\le (2k+2)(2k+3)\), for any non-negative integer k, it follows that
On the other hand, since \(\omega _n(1)=1\), taking into account (11) and (12), we have that
Consequently, \(\omega _n(t)\) is convex on [0, 1]. Finally, the polynomial \(\omega _n(1-t)\) is also convex on the unit interval which yields convexity of \(\widetilde{\omega }_n(t)\). \(\square \)
Remark 1
Note that polynomial \(\omega _1(t)\) is convex on the unit interval regardless of the parameters a and b. On the other hand, since \(\omega _2''(t)=(b-a)^{2}\omega _1(t)\), as soon as \(|b-a|>\sqrt{6}\), polynomial \(\omega _2(t)\) has an inflection point at \(t_0=\frac{\sqrt{(b-a)^{2}-6}}{|b-a|}\in (0,1)\), so it is neither convex nor concave on the unit interval.
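The sharpness of the threshold \(\sqrt{6}\) can be illustrated numerically via \(\omega _2''(t)=(b-a)^{2}\big (t+(b-a)^{2}\Lambda _1(t)\big )\), with \(\Lambda _1(t)=\frac{t^{3}}{6}-\frac{t}{6}\) as above; a minimal sketch:

```python
import math

def lam1(t):
    """Lambda_1(t) = t^3/6 - t/6."""
    return t**3 / 6 - t / 6

def omega2_second(t, d):
    """omega_2''(t) = d^2 * omega_1(t) = d^2 * (t + d^2 * Lambda_1(t)), d = b - a."""
    return d**2 * (t + d**2 * lam1(t))

# d <= sqrt(6): omega_2'' >= 0 on [0, 1], so omega_2 is convex there
d = math.sqrt(6)
assert all(omega2_second(j / 100, d) >= -1e-12 for j in range(101))

# d > sqrt(6): a sign change at t0 = sqrt(d^2 - 6)/d, so convexity fails
d = 3.0
t0 = math.sqrt(d**2 - 6) / d
assert omega2_second(t0 / 2, d) < 0 < omega2_second((1 + t0) / 2, d)
assert abs(omega2_second(t0, d)) < 1e-9
```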
3 Main Results
In this section we establish more accurate Jensen-type inequalities for the classes of completely convex and absolutely convex functions. To keep our discussion more concise, we will rewrite identity (9) in a more suitable form. Namely, the right-hand side of (9) consists of three Jensen functionals
so, it can be represented as
Taking into account properties (ii) and (v) from Proposition 1, we see that successive Jensen functionals for \(\Lambda _k\) and \(G_k\) alternate in sign, i.e. \((-1)^{k+1}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge 0\), \((-1)^{k+1}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\), \((-1)^{k+1}\mathcal {J}_m(G_k, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\), for any non-negative integer k. Therefore, it is natural to consider the Jensen functional for completely convex functions, since their even derivatives alternate in sign on the corresponding interval. To be as precise as possible, we need a notion of 2n-complete convexity. We say that the function \(f\in C^{(2n)}([a,b])\) is 2n-completely convex if \((-1)^{k}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for \(k=0,1,2,\ldots ,n\). In fact, the improvement of the Jensen inequality will be established for the function f whose opposite function \(-f\) is completely convex, i.e. \((-1)^{k+1}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for \(k=0,1,2,\ldots ,n\). It should be noticed here that such a function f is convex on [a, b], since the case \(k=1\) gives \(f''\ge 0\). Now, we are ready to state and prove our first result.
Theorem 1
Let n be a positive integer, let \(f\in C^{(2n)}([a,b])\), and let \(\textbf{x}\in [a,b]^{m}\), \(\textbf{p}\in \mathbb {R}_+^{m}\). If \(-f\) is a 2n-completely convex function, then
where \(a_{\min }=\min _{0\le k\le n-1}|f^{(2k)}(a)|\) and \(b_{\min }=\min _{0\le k\le n-1}|f^{(2k)}(b)|\). Otherwise, if f is a 2n-completely convex function, then the following inequality holds:
Proof
Let \(-f\) be a 2n-completely convex function. Then,
which ensures positivity of the integral on the right-hand side of (13). Consequently, we have that
Now, our aim is to estimate both sums on the right-hand side of the previous inequality. Since \(|f^{(2k)}(a)|=(-1)^{k+1}f^{(2k)}(a)\ge a_{\min }\), for every \(k=0,1,2,\ldots , n-1\), it follows that
and similarly,
which yields the first inequality sign in (14). Further, the second inequality sign in (14) holds due to an obvious relation
and finally, the last one is a consequence of the convexity of polynomial \(\widetilde{\alpha }_{n-1}\) on the unit interval. This proves (14). On the other hand, if f is 2n-completely convex, then applying (14) to \(-f\) yields (15), as claimed. \(\square \)
Remark 2
Relations (14) and (15) are homogeneous with respect to m-tuple \(\textbf{p}\). Moreover, since \(\mathcal {J}_m(f, \textbf{x}, \textbf{1})=m\mathcal {I}_m(f, \textbf{x})\), where \(\textbf{1}=(1,1,\ldots ,1)\), relation (14) implies
Clearly, the situation is similar with inequality (15). It is important to note that inequalities in (17) provide non-trivial lower bounds for the non-weighted functional (via the Lidstone polynomials), which was not the case in [10].
Remark 3
Consider the cosine function restricted to the interval \([\frac{2\pi }{3},\frac{5\pi }{4}]\). Since \(\cos ^{(2k)}x=(-1)^{k}\cos x\), it follows that \(-\cos x\) is completely convex on that interval. In addition, we have that \(\cos ^{(2k)}\frac{2\pi }{3}=\frac{(-1)^{k+1}}{2}\) and \(\cos ^{(2k)}\frac{5\pi }{4}=\frac{(-1)^{k+1}}{\sqrt{2}}\), so that \(a_{\min }=\frac{1}{2}\) and \(b_{\min }=\frac{1}{\sqrt{2}}\). In particular, if \(m=2\), then (17) yields
where \( {\hat{{\textbf{x}}}}=\left( \frac{12x_1-8\pi }{7\pi },\frac{12x_2-8\pi }{7\pi } \right) \), \(x_1, x_2\in [\frac{2\pi }{3},\frac{5\pi }{4}]\). Clearly, this inequality provides an explicit refinement of the non-weighted Jensen inequality.
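The ingredients of this example are easy to confirm numerically: \(\cos x\le 0\) on \([\frac{2\pi }{3},\frac{5\pi }{4}]\) (so \(-\cos \) is completely convex there and \(\cos \) itself is convex, since \(\cos ''=-\cos \ge 0\)), and the endpoint magnitudes are \(\frac{1}{2}\) and \(\frac{1}{\sqrt{2}}\). A minimal sketch:

```python
import math

a, b = 2 * math.pi / 3, 5 * math.pi / 4

# cos(x) <= 0 on [2pi/3, 5pi/4], hence (-1)^(k+1) cos^(2k)(x) = -cos(x) >= 0
for j in range(101):
    x = a + (b - a) * j / 100
    assert -math.cos(x) >= 0

# endpoint magnitudes: |cos^(2k)(2pi/3)| = 1/2, |cos^(2k)(5pi/4)| = 1/sqrt(2)
assert abs(abs(math.cos(a)) - 0.5) < 1e-12
assert abs(abs(math.cos(b)) - 1 / math.sqrt(2)) < 1e-12

# cos is convex there, so the non-weighted Jensen functional is non-negative:
# I_2(cos, x) = (cos x1 + cos x2)/2 - cos((x1 + x2)/2) >= 0
x1, x2 = 2.2, 3.6
assert (math.cos(x1) + math.cos(x2)) / 2 - math.cos((x1 + x2) / 2) >= 0
```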
Remark 4
Theorem 1 refers to functions whose even derivatives alternate in sign on the interval [a, b]. If we take a look at its proof, we see that it is sufficient to assume that the 2n-th order derivative of the function f does not change sign on [a, b] and that the lower derivatives of even order alternate at the endpoints of the interval. More precisely, it suffices to assume that \((-1)^{n}f^{(2n)}(x)\ge 0\), \(x\in [a,b]\), and \((-1)^{k}f^{(2k)}(a)\ge 0\), \((-1)^{k}f^{(2k)}(b)\ge 0\) for \(k=0,1,2,\ldots , n-1\). It is easy to see that these conditions imply 2n-complete convexity. Namely, without loss of generality, we can suppose that n is even, i.e. \(f^{(2n)}(x)\ge 0\) on [a, b], which means that \(f^{(2n-2)}\) is convex on [a, b]. Now, let \(x\in [a,b]\). Then, \(x=(1-t)a+tb\), for some \(t\in (0,1)\). Therefore, since \(f^{(2n-2)}(a)\le 0\) and \(f^{(2n-2)}(b)\le 0\), we have that \(f^{(2n-2)}(x)=f^{(2n-2)}((1-t)a+tb)\le (1-t)f^{(2n-2)}(a)+tf^{(2n-2)}(b)\le 0\). This means that \(f^{(2n-4)}\) is concave on [a, b] and, by the same argument, it follows that \(f^{(2n-4)}(x)\ge 0\) on [a, b]. Clearly, iterating this procedure yields 2n-complete convexity.
Theorem 1 yields the lower bound for the Jensen functional expressed in terms of the alternating sum of the Lidstone polynomials. It is much more complicated to establish the lower bound expressed in terms of the usual sum of the Lidstone polynomials. Our next goal is to establish a result that corresponds to functions with non-negative even derivatives. In fact, we will derive a more general result that also covers the case of completely convex functions. Bearing in mind Remark 4, we now deal with functions whose derivatives of order \(4l+2\) are non-negative at the endpoints of the interval [a, b]. However, to establish a more accurate Jensen inequality, some additional conditions for derivatives of order 4l need to be imposed. Proposition 3 will certainly play an important role in obtaining the refinement of the Jensen inequality.
Theorem 2
Let n be a positive integer and \(0<b-a\le \sqrt{6}\). Let \(f\in C^{(2n)}([a,b])\), and let \(\textbf{x}\in [a,b]^{m}\), \(\textbf{p}\in \mathbb {R}_+^{m}\). Further, suppose that there exist \(\overline{a},\overline{b}\ge 0\) such that
and
If n is odd and f is 2n-convex, or n is even and f is 2n-concave, then the following inequality holds:
Proof
First, let n be odd and let f be 2n-convex. Then \(\mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\) and \(f^{(2n)}(s)\ge 0\), \(s\in [a,b]\), so the integral on the right-hand side of (13) is non-negative, which means that (16) holds in this case. Similarly, the case of an even n and 2n-concave function f also yields (16).
Now, we aim to establish suitable lower bounds for both terms on the right-hand side of (16). If k is odd, i.e. \(k=2l+1\), then \(\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge 0\), by Proposition 1, and \(f^{(2k)}(a)=f^{(4l+2)}(a)\ge \overline{a}\) by (18), so we have that \(f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge \overline{a}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\). If \(k\ge 2\) is even, i.e. \(k=2l\), then \(\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\le 0\), by Proposition 1, and \(f^{(2k)}(a)=f^{(4l)}(a)\le \overline{a}\), so we again obtain \(f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge \overline{a}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\). Consequently, since \(\mathcal {J}_m(\Lambda _0, {\hat{{\textbf{x}}^{*}}}, \textbf{p})=0\), we have that
and similarly,
which yields the first inequality sign in (20). Clearly, the second inequality sign in (20) holds due to identity
while the last sign holds due to Proposition 3. The proof is now complete. \(\square \)
Remark 5
It should be noticed here that if \(n=1\), both Theorems 1 and 2 reduce to the classical Jensen inequality.
Remark 6
Similarly to Remark 2, the non-weighted form of relation (20) reads
which represents a non-trivial lower bound for the non-weighted Jensen functional.
Remark 7
The function f fulfilling the conditions of Theorem 2 is convex since it satisfies (20). The same conclusion can be drawn from identity (5). Namely, by taking the second derivative, we have that
since the above integral is non-negative. In addition, taking into account (18) and (19), we have that
since the coefficients of polynomial \(\omega _{n-2}\) are non-negative, as proved in Proposition 3. Moreover, by repeating this procedure, we conclude that \(f^{(4l+2)}(x)\ge 0\), \(x\in [a,b]\), \(0\le l\le \lfloor \!\frac{n-1}{2}\!\rfloor \).
Remark 8
It is not hard to find examples of functions satisfying (18) and (19). Namely, Theorem 2 covers the case of a convex function whose opposite function is completely convex. Such a function has non-negative derivatives of order \(4l+2\) and non-positive derivatives of order 4l, so conditions (18) and (19) are always fulfilled. In particular, the cosine function from Remark 3 satisfies these conditions. On the other hand, the functions \(\exp x\), \(\exp (-x)\), \(\cosh x\) have equal even derivatives that are always non-negative. In particular, if \(f(x)=\exp x\), then we can take \(\overline{a}=\exp a\) and \(\overline{b}=\exp b\). These functions will be crucial in establishing refinements of some power mean inequalities.
4 Strengthened Power Mean Inequalities in Terms of the Lidstone Polynomials
We aim now to derive more accurate power mean inequalities based on our Theorem 2. Recall that a power mean is defined by
while the case of \(p_1=p_2=\cdots =p_m\) yields the corresponding non-weighted mean
Here, and throughout this section, \(\textbf{x}=(x_1, x_2, \ldots , x_m)\) stands for a positive m-tuple, i.e. \(x_i>0\), \(i=1,2,\ldots , m\). In particular, for \(r=-1,0,1\), we obtain the harmonic, geometric and arithmetic mean, respectively. The basic power mean inequality, describing the monotonic behavior of means, asserts that if \(r<s\), then
This inequality is still of interest to numerous mathematicians. For a comprehensive study of power means including refinements and generalizations, the reader is referred to monographs [12, 13], as well as to papers [7, 9, 10] and the references cited therein. In particular, the mentioned paper [10] provides mutual bounds for the differences of means in terms of the corresponding non-weighted means.
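The basic monotonicity \(M_{r}\left( \textbf{x}, \textbf{p} \right) \le M_{s}\left( \textbf{x}, \textbf{p} \right) \), \(r<s\), can be probed numerically. In the sketch below, `power_mean` is an illustrative helper implementing the standard weighted power mean, with the geometric mean as the limiting case \(r=0\):

```python
import math
import random

def power_mean(x, p, r):
    """Weighted power mean M_r(x, p); r = 0 is the geometric mean."""
    P = sum(p)
    if r == 0:
        return math.exp(sum(pi * math.log(xi) for pi, xi in zip(p, x)) / P)
    return (sum(pi * xi**r for pi, xi in zip(p, x)) / P) ** (1 / r)

random.seed(1)
for _ in range(200):
    x = [random.uniform(0.5, 4) for _ in range(5)]
    p = [random.uniform(0.1, 1) for _ in range(5)]
    # monotonicity in the exponent: r < s implies M_r <= M_s
    assert power_mean(x, p, -1) <= power_mean(x, p, 0) + 1e-9
    assert power_mean(x, p, 0) <= power_mean(x, p, 1) + 1e-9
    assert power_mean(x, p, 1) <= power_mean(x, p, 2) + 1e-9
```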
According to Sect. 3, we establish here a different kind of lower bound for the difference \(M_{s}\left( \textbf{x}, \textbf{p} \right) -M_{r}\left( \textbf{x}, \textbf{p} \right) \). To apply Theorem 2, we have to choose the appropriate functions in the Jensen functional. We first consider the cases when one of the parameters r and s in (21) is equal to zero. The first case we deal with is the function \(f(t)=\frac{1}{r}\log t\), where \(r\ne 0\). Since \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\), it follows that f is 2n-convex for \(r<0\). Therefore, in this setting, Theorem 2 cannot be applied for even n. We have already commented that the case of \(n=1\) is trivial, so the first non-trivial case appears for \(n=3\). Of course, conditions (18) and (19) also need to be satisfied. This case is carried out in the sequel.
We also use here the transformation
It should be noticed here that if \(x_i\in [a,b]\), then \(x_i^{r}\in [\min \{a^{r},b^{r}\},\max \{a^{r},b^{r}\} ]\), so in order to apply Theorem 2, we deal with transformation \(x_i^{r}\mapsto \frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }\). Clearly, if \(r>0\), then \(\frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }=\frac{x_i^{r}-a^{r}}{b^{r}-a^{r}}\), while \(\frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }=\frac{b^{r}-x_i^{r}}{b^{r}-a^{r}}\), for \(r<0\). However, due to the symmetry, we can define \(\hat{x_i^{r}}=\frac{x_i^{r}-a^{r}}{b^{r}-a^{r}}\), \(\hat{x_i^{r *}}=\frac{b^{r}-x_i^{r}}{b^{r}-a^{r}}\), \(i=1,2,\ldots , m\), and so \({{\hat{\textbf{x}}^{r}}}=\left( \hat{x_1^{r}}, \hat{x_2^{r}},\ldots , \hat{x_m^{r}} \right) \) and \({{\hat{\textbf{x}}^{r *}}}=\left( \hat{x_1^{r *}}, \hat{x_2^{r *}},\ldots , \hat{x_m^{r *}} \right) \).
Considering the above discussion, we give the first application of Theorem 2, which provides a more accurate estimate between the means \(M_{0}\left( \textbf{x}, \textbf{p} \right) \) and \(M_{r}\left( \textbf{x}, \textbf{p} \right) \) in the so-called quotient form. As in the previous section, these estimates depend on the corresponding interval.
Corollary 1
Let \(r\ne 0\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(|b^{r}-a^{r}|\le \sqrt{6}\) and \(\min \{a^{r}, b^{r}\}\ge \sqrt{6}\). Then the following inequalities hold:
where \(\Omega _n(t)=\sum _{k=0}^{n}(b^{r}-a^{r})^{2k}\Lambda _k(t)\) and \(\overline{\Omega }_n(t)=\sum _{k=0}^{n}(b^{r}-a^{r})^{2k}E_{2k}(t)\).
Proof
We consider two cases depending on whether \(r<0\) or \(r>0\). First, let \(r<0\). If \(f(t)=\frac{1}{r}\log t\) and \(\textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), the left-hand side of (20) becomes
Moreover, since \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\), f is 2n-convex for \(r<0\), so we can choose \(n=3\), due to Theorem 2. However, it is necessary to fulfill conditions (18) and (19). It is easy to see that \(f^{(2)}(t)\ge f^{(4)}(t)\) for \(t\ge \sqrt{6}\), so these conditions hold for \(\overline{a}=f^{(2)}(a^{r})=-{a^{-2r}}/{r}\) and \(\overline{b}=f^{(2)}(b^{r})=-{b^{-2r}}/{r}\), since \(\min \{a^{r}, b^{r}\}\ge \sqrt{6}\). Consequently, (20) reduces to
It remains to consider the case when \(r>0\). Then, by putting \(f(t)=-\frac{1}{r}\log t\), \( \textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), \(n=3\), in (20), and following the lines as in the above case, we arrive at the relation
Clearly, inequalities (23) and (24) yield (22), which completes the proof. \(\square \)
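The condition \(f^{(2)}(t)\ge f^{(4)}(t)\) for \(t\ge \sqrt{6}\), used in the proof above, follows from \(f^{(2)}(t)-f^{(4)}(t)=-\frac{1}{r}\cdot \frac{t^{2}-6}{t^{4}}\) for \(f(t)=\frac{1}{r}\log t\) with \(r<0\); a minimal numerical check (with an arbitrary illustrative choice of r):

```python
import math

# f(t) = (1/r) log t with r < 0: f''(t) = -1/(r t^2), f''''(t) = -6/(r t^4),
# and f''(t) - f''''(t) = (-1/r) * (t^2 - 6)/t^4 >= 0 exactly for t >= sqrt(6)
r = -2.0                                  # illustrative value, any r < 0 works
d2 = lambda t: -1 / (r * t**2)
d4 = lambda t: -6 / (r * t**4)

assert all(d2(t) >= d4(t) for t in [2.5, 3.0, 10.0])   # above sqrt(6) ~ 2.449
assert d2(2.0) < d4(2.0)                               # fails below sqrt(6)
```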
Remark 9
By putting \(r=-1\) in (22), we obtain the refinement of the geometric-harmonic mean inequality in a quotient form. More precisely, we have that
provided that \([a,b]\subseteq (0,\frac{1}{\sqrt{6}}]\) and \(|b^{-1}-a^{-1}|\le \sqrt{6}\). Similarly, the case of \(r=1\) represents a strengthened version of the arithmetic–geometric mean inequality. Namely, if \([a,b]\subseteq [{\sqrt{6}},\infty )\) and \(|b-a|\le \sqrt{6}\), then
Of course, the obtained relations also describe refinements of the corresponding non-weighted versions of inequalities. In particular, the latter relation reads
The next case we deal with refers to the exponential function \(f(t)=e^{st}\). Since \(f^{(2n)}(t)=s^{2n}e^{st}\), it follows that f is 2n-convex for each \(n\in \mathbb {N}\). Hence, exactly as in the previous case, we consider Theorem 2 for \(n=3\). In addition, the associated coordinate transformation is
Now, with \(\widehat{\log \textbf{x}}=(\widehat{\log x_1},\widehat{\log x_2},\ldots ,\widehat{ \log x_m})\) and \(\widehat{\log \textbf{x}}{}^{*}=(\widehat{\log x_1}{}^{*},\widehat{\log x_2}{}^{*},\ldots ,\widehat{ \log x_m}{}^{*})\), where \(\widehat{\log x_i}=\frac{\log x_i-\log a}{\log b-\log a}\) and \(\widehat{\log x_i}{}^{*}=\frac{\log b -\log x_i}{\log b-\log a}\), we obtain a more precise estimate between the means \(M_{0}\left( \textbf{x}, \textbf{p} \right) \) and \(M_{s}\left( \textbf{x}, \textbf{p} \right) \), \(s\in [-1,1]\), in the so-called difference form.
Corollary 2
Let \(s\in [-1,1]\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(b\le e^{\sqrt{6}}a\). Then the following inequalities hold:
where \(L_n(t)=\sum _{k=0}^{n}\log ^{2k}\!\frac{b}{a}\Lambda _k(t)\) and \(\overline{L}_n(t)=\sum _{k=0}^{n}\log ^{2k}\!\frac{b}{a}E_{2k}(t)\).
Proof
Again, the starting point is relation (20) for \(n=3\). Then, by putting \(f(t)=e^{st}\) and \(\log \textbf{x}=(\log x_1,\log x_2,\ldots , \log x_m)\), it follows that
Moreover, since \(f^{(2n)}(t)=s^{2n}e^{st}\), we have that \(f^{(2)}(t)\ge f^{(4)}(t)\) for \(s\in [-1,1]\). Finally, conditions (18) and (19) are satisfied with \(\overline{a}=f^{(2)}(\log a)=s^{2}a^{s}\) and \(\overline{b}=f^{(2)}(\log b)=s^{2}b^{s}\), which provides (25), as claimed. \(\square \)
Remark 10
If \(s=1\), then (25) yields the improved arithmetic–geometric inequality in a difference form, while the case of \(s=-1\) provides the corresponding geometric–harmonic inequality in a difference form. Let us dwell a little longer on these two particular cases. Namely, if \(s=1\), then \(f(t)=e^{t}\), i.e. \(f^{(2n)}(t)=e^{t}\), which means that in this case Theorem 2 can be applied for an arbitrary odd n, since conditions (18) and (19) are trivially fulfilled for \(\overline{a}=a\) and \(\overline{b}=b\). Consequently, we have that
where n is an arbitrary positive odd integer. A similar conclusion can be drawn for the case of \(s=-1\), i.e. the relation
holds for an arbitrary positive odd integer n.
Remark 11
Contrary to Remark 10, if \(|s|<1\), then the sequence \(f^{(2n)}(t)=s^{2n}e^{st}\) is decreasing in n. Consequently, conditions (18) and (19) cannot be fulfilled for \(f(t)=e^{st}\) when \(n\ge 5\), so the case of \(n=3\) is the best we can get in Corollary 2. A similar conclusion can be drawn for Corollary 1, since the sequence \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\) (i.e. when \(f(t)=\frac{1}{r}\log t\)) is decreasing or increasing, depending on whether \(r>0\) or \(r<0\).
Finally, it remains to consider the case when both parameters \(r\) and \(s\) in (21) are nonzero. Then the corresponding power mean inequalities will be established via the power function \(f(t)=t^{\frac{s}{r}}\). Clearly, \( f^{(2n)}(t)=\prod _{k=1}^{2n} \left( \frac{s}{r}-k+1\right) t^{\frac{s}{r}-2n}, \) so \( f^{(2n)}(t)\ge 0\) if \(\frac{s}{r}\in \overline{S}_n\), while \( f^{(2n)}(t)< 0\) if \(\frac{s}{r}\in {S}_n\), where \(S_n=\bigcup _{k=0}^{n-1}(2k, 2k+1)\) and \(\overline{S}_n=\mathbb {R}\setminus S_n\) (for more details, see [4]). In this setting, Theorem 2 can be applied both for \(n=2\) and \(n=3\). We start with the case \(n=3\) since it is closely related to Corollaries 1 and 2.
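The sign pattern of \(f^{(2n)}\) for the power function can be checked directly: the product \(\prod _{k=1}^{2n}\left( \frac{s}{r}-k+1\right) \) is negative precisely when \(\frac{s}{r}\) lies in \(S_n\). A short numerical probe (the helper name is ours):

```python
import math

def deriv_sign(q, n):
    """Sign of f^(2n)(t) for f(t) = t**q, t > 0:
    the sign of prod_{k=1}^{2n} (q - k + 1)."""
    prod = 1.0
    for k in range(1, 2 * n + 1):
        prod *= q - k + 1
    return math.copysign(1.0, prod) if prod != 0 else 0.0

# S_3 = (0,1) U (2,3) U (4,5): f^(6) < 0 there, f^(6) >= 0 elsewhere
assert deriv_sign(0.5, 3) == -1.0   # 0.5 in (0, 1)
assert deriv_sign(2.5, 3) == -1.0   # 2.5 in (2, 3)
assert deriv_sign(1.5, 3) == 1.0    # 1.5 in [1, 2]
assert deriv_sign(6.0, 3) == 1.0    # 6 in [5, infinity)
```

For \(n=3\) this recovers exactly the admissible set \((-\infty ,0]\cup [1,2]\cup [3,4]\cup [5,\infty )\) appearing in Corollary 3.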
Corollary 3
Let \(\frac{s}{r}\in (-\infty ,0]\cup [1,2]\cup [3,4]\cup [5,\infty )\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(|b^{r}-a^{r}|\le \sqrt{6}\) and \(\min \{a^{r}, b^{r}\}\ge \frac{\sqrt{(s-2r)(s-3r)}}{|r|}\). Then the following inequalities hold:
where \(\Omega _2\) and \(\overline{\Omega }_2\) are defined in Corollary 1.
Proof
We utilize Theorem 2 with \(f(t)=t^{\frac{s}{r}}\), \(\textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), and \(n=3\). Then, the left-hand side of (20) reduces to
Further, since \(n=3\), the even derivatives \(f^{(2)}\), \(f^{(4)}\) and \(f^{(6)}\) are non-negative on the interval between \(a^{r}\) and \(b^{r}\), provided that \(\frac{s}{r}\in (-\infty ,0]\cup [1,2]\cup [3,4]\cup [5,\infty )\). Moreover, since \(f^{(2)}(t)\ge f^{(4)}(t) \), \(t>0\), if and only if \(t\ge \frac{\sqrt{(s-2r)(s-3r)}}{|r|}\), conditions (18) and (19) are satisfied for \(\overline{a}=f^{(2)}(a^{r})=\frac{s(s-r)a^{s-2r}}{r^{2}}\) and \(\overline{b}=f^{(2)}(b^{r})=\frac{s(s-r)b^{s-2r}}{r^{2}}\), so we arrive at (27). \(\square \)
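The threshold in the proof of Corollary 3 comes from comparing consecutive even derivatives of the power function: for \(f(t)=t^{\frac{s}{r}}\) one has

```latex
f^{(4)}(t)=f^{(2)}(t)\Bigl(\frac{s}{r}-2\Bigr)\Bigl(\frac{s}{r}-3\Bigr)t^{-2},
\qquad\text{so}\qquad
f^{(2)}(t)\ge f^{(4)}(t)
\iff t^{2}\ge \Bigl(\frac{s}{r}-2\Bigr)\Bigl(\frac{s}{r}-3\Bigr)
=\frac{(s-2r)(s-3r)}{r^{2}},
```

since \(f^{(2)}(t)\ge 0\) and \(\bigl(\frac{s}{r}-2\bigr)\bigl(\frac{s}{r}-3\bigr)\ge 0\) under the assumption on \(\frac{s}{r}\); taking square roots gives the stated condition \(t\ge \frac{\sqrt{(s-2r)(s-3r)}}{|r|}\).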
To complete our previous discussion, we also consider Theorem 2 for \(f(t)=t^{\frac{s}{r}}\) and \(n=2\). It turns out that the conditions under which the corresponding inequality holds can be significantly weakened. More precisely, the following result involves the polynomials \(\Omega _1(t)=\Lambda _0(t)+(b^{r}-a^{r})^{2}\Lambda _1(t)\) and \(\overline{\Omega }_1(t)=1+(b^{r}-a^{r})^{2}E_2(t)\), which are convex regardless of the parameters \(a\), \(b\), and \(r\) (see also Remark 1).
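The convexity of \(\Omega _1\) can be verified directly from the explicit form of \(\Lambda _1\): writing \(c=b^{r}-a^{r}\),

```latex
\Omega _1(t)=t+c^{2}\Bigl(\frac{t^{3}}{6}-\frac{t}{6}\Bigr),
\qquad
\Omega _1''(t)=c^{2}\,t\ \ge \ 0\quad \text{for } t\in [0,1],
```

so \(\Omega _1\) is convex on \([0,1]\) for every choice of \(a\), \(b\), and \(r\).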
Corollary 4
Let \(\frac{s}{r}\in [2,3]\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\). Then the following inequalities hold:
Proof
The result follows by putting \(n=2\) in (20) and by noting that \(f(t)=t^{\frac{s}{r}}\) is simultaneously convex and 4-concave if and only if \(\frac{s}{r}\in [2,3]\). \(\square \)
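The characterization used in this proof is easy to probe numerically: for \(f(t)=t^{q}\) and \(t>0\), the signs of \(f''(t)=q(q-1)t^{q-2}\) and \(f^{(4)}(t)=q(q-1)(q-2)(q-3)t^{q-4}\) are governed by the leading coefficients. A quick check (the helper names are ours):

```python
def second_coeff(q):
    # coefficient q(q-1) of f''(t) for f(t) = t**q
    return q * (q - 1)

def fourth_coeff(q):
    # coefficient q(q-1)(q-2)(q-3) of f^(4)(t)
    return q * (q - 1) * (q - 2) * (q - 3)

# convex (f'' >= 0) and 4-concave (f^(4) <= 0) exactly for q in [2, 3]
for q in (2.0, 2.5, 3.0):
    assert second_coeff(q) >= 0 and fourth_coeff(q) <= 0

# outside [2, 3] at least one of the two sign conditions fails
for q in (1.5, 3.5):
    assert fourth_coeff(q) > 0 or second_coeff(q) < 0
```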
5 Application to the Hölder Inequality
We conclude this paper with a simple application of the arithmetic–geometric mean inequality (26) to the Hölder inequality. Let \((\Omega , \Sigma ,\mu )\) be a \(\sigma \)-finite measure space and let \(\sum _{i=1}^m \frac{1}{q_i}=1\), \(q_i>1\). If \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), are non-negative measurable functions, then the following inequality holds:
One way of proving the Hölder inequality is via the arithmetic–geometric mean inequality (for more details, see [12, 13]). Therefore, the strengthened arithmetic–geometric mean inequality (26) can be exploited to improve the Hölder inequality (29). However, it is necessary to impose some extra conditions on the non-negative measurable functions \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), since the Lidstone interpolation refers to the interval \([a, b]\). Now, with the abbreviation
and accordingly,
and \(\widehat{\log \textbf{f}_0^{\textbf{q}}(x)}{}^{*}=\textbf{1}-\widehat{\log \textbf{f}_0^{\textbf{q}}(x)}\), we arrive at the following refinement of the Hölder inequality.
Theorem 3
Let \((\Omega , \Sigma ,\mu )\) be a \(\sigma \)-finite measure space and let \(\sum _{i=1}^m \frac{1}{q_i}=1\), \(q_i>1\), \(i=1,2,\ldots , m\). Further, suppose that \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), are non-negative measurable functions such that
where \(0<a<b\le e^{\sqrt{6}}a\). Then the inequalities
where the polynomials \(L_n\) and \(\overline{L}_n\) are defined in Corollary 2, hold for any non-negative odd integer n.
Proof
The Young form of relation (26) reads
where \(q_i=\frac{P_m}{p_i}\), \(\textbf{q}^{-1}=\big (\frac{1}{q_1},\frac{1}{q_2},\ldots , \frac{1}{q_m} \big )\) and \(\sum _{i=1}^m \frac{1}{q_i}=1\). Now, by putting \({f_i^{q_i}(x)}/{\Vert f_i\Vert _{q_i}^{q_i}}\), \(x\in \Omega \), instead of \(x_i\), \(i=1,2,\ldots , m\), in the above inequality, which is meaningful due to conditions in (30), we arrive at the inequalities
Finally, integrating the last set of inequalities over \(\Omega \) with respect to the measure \(\mu \), we have that
which yields (31), due to \(\sum _{i=1}^m \frac{1}{q_i}=1\). \(\square \)
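The outer bound in Theorem 3 is the classical Hölder inequality (29), which can be sanity-checked in its discrete (counting-measure) form; the exponents and data below are purely illustrative:

```python
import math

# exponents with sum of reciprocals equal to 1: 1/2 + 1/3 + 1/6 = 1
q = [2.0, 3.0, 6.0]

# three non-negative "functions" on a three-point space
f = [
    [1.0, 2.0, 0.5],
    [0.8, 1.5, 1.2],
    [2.0, 1.0, 0.7],
]

# left-hand side: integral (here, sum) of the pointwise product
lhs = sum(math.prod(fi[j] for fi in f) for j in range(3))

# right-hand side: product of the L^{q_i} norms
rhs = math.prod(
    sum(fij ** qi for fij in fi) ** (1.0 / qi) for fi, qi in zip(f, q)
)

assert lhs <= rhs
```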
Data availability
The author confirms that all data and materials used for this study are included in this article.
References
Abramowitz, M., Stegun, I.A.: Handbook of mathematical functions with formulas, graphs, and mathematical tables, National Bureau of Standards, Applied Math. Series 55, 4th printing, Washington (1965)
Agarwal, R.P., Wong, P.J.Y.: Error Inequalities in Polynomial Interpolation and Their Applications. Kluwer Academic Publishers, Dordrecht, Boston, London (1993)
Aras-Gazić, G., Čuljak, V., Pečarić, J., Vukelić, A.: Generalization of Jensen’s inequality by Lidstone’s polynomial and related results. Math. Inequal. Appl. 16, 1243–1267 (2013)
Bošnjak, M., Krnić, M., Pečarić, J.: Jensen-type inequalities, Montgomery identity and higher-order convexity. Mediterr. J. Math. 19, 22 (2022)
Dragomir, S.S., Pečarić, J.E., Persson, L.E.: Properties of some functionals related to Jensen’s inequality. Acta Math. Hungar. 1–2, 129–143 (1996)
Fort, T.: Finite Differences and Difference Equations in the Real Domain. Oxford University Press, London (1948)
Krnić, M., Mikić, R., Pečarić, J.: Double precision of the Jensen-type operator inequalities for bounded and Lipschitzian functions. Aequat. Math. 93, 669–690 (2019)
Krnić, M., Lovričević, N., Pečarić, J., Perić, J.: Superadditivity and monotonicity of the Jensen-type functionals, Element, Zagreb, ISBN 978-953-197-599-5 (2015)
Krnić, M., Lovričević, N., Pečarić, J.: On the properties of McShane’s functional and their applications. Period. Math. Hung. 66, 159–180 (2013)
Krnić, M., Lovričević, N., Pečarić, J.: Jessen’s functional, its properties and applications. An. Şt. Univ. Ovidius Constanţa 20, 225–248 (2012)
Lidstone, G.J.: Notes on the extension of Aitken’s theorem (for polynomial interpolation) to the Everett types. Proc. Edinburgh Math. Soc. 2, 16–19 (1929)
Mitrinović, D.S., Pečarić, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Kluwer Academic Publishers, Dordrecht/Boston/London (1993)
Pečarić, J.E., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings, and Statistical Applications. Academic Press, Singapore (1992)
Widder, D.V.: Completely convex functions and Lidstone series. Trans. Amer. Math. Soc. 51, 387–398 (1942)
Acknowledgements
The authors would like to thank the anonymous referees for some valuable comments and useful suggestions.
Communicated by Fuad Kittaneh.
Krnić, M. Improving Jensen-type Inequalities Via the Sum of the Lidstone Polynomials. Bull. Malays. Math. Sci. Soc. 47, 71 (2024). https://doi.org/10.1007/s40840-024-01670-y