1 Introduction

The Lidstone polynomials (see [11]) were introduced to provide an approximation of a function in the neighborhood of two points, instead of the one-point approximation given by the Taylor expansion. From a practical point of view, the Lidstone polynomial expansion turns out to be very useful. Besides its role in approximation theory, such interpolation is widely used in boundary value problems appearing in engineering and other areas of the physical sciences.

The Lidstone polynomial \(\Lambda _n(t)\) is the unique polynomial of degree \(2n+1\) defined by the following boundary value problem:

$$\begin{aligned} \begin{aligned} \Lambda _0(t)&= t,\\ \Lambda _{n}^{''}(t)&=\Lambda _{n-1}(t),\\ \Lambda _n(0)&=\Lambda _n(1)=0,\quad n\ge 1. \end{aligned} \end{aligned}$$
(1)

After the initial polynomial \( \Lambda _0(t)=t\), the next two polynomials are \(\Lambda _1(t)=\frac{t^{3}}{6}-\frac{t}{6}\) and \(\Lambda _2(t)=\frac{t^{5}}{120}-\frac{t^{3}}{36}+\frac{7t}{360}\). Clearly, it follows from the above boundary value problem that \(\Lambda _n(t)\), \(n\ge 1\), contains only odd powers (see also [2]). It is well known from the theory of differential equations that the Lidstone polynomials can be expressed in integral form as

$$\begin{aligned} \Lambda _n(t)=\int _0^{1} G_n(t,s)s\textrm{d}s, \end{aligned}$$
(2)

where

$$\begin{aligned} G_1(t,s) =\left\{ \begin{array}{c} (t-1)s,\quad s\le t, \\ (s-1)t,\quad t<s, \end{array} \right. \end{aligned}$$
(3)

is the homogeneous Green function of the differential operator \(\textrm{d}^{2}/\textrm{d}s^{2}\) on the unit interval, with the successive iterates defined by

$$\begin{aligned} G_n(t,s)=\int _0^{1}G_1(t,u)G_{n-1}(u,s)\textrm{d}u,\quad n\ge 2. \end{aligned}$$
(4)

The Lidstone polynomials can also be represented by Fourier series, and by Bernoulli polynomials and numbers (for more details, see [2] and the references cited therein).
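For the reader who wishes to experiment, the following Python sketch (our illustration, not part of the formal development; the helper name `lidstone` is ours) computes the first few Lidstone polynomials directly from the boundary value problem (1) with SymPy and confirms the expressions for \(\Lambda _1\) and \(\Lambda _2\) given above.

```python
import sympy as sp

t = sp.symbols('t')

def lidstone(n):
    # Solve the BVP (1): Lambda_0(t) = t, Lambda_k'' = Lambda_{k-1},
    # Lambda_k(0) = Lambda_k(1) = 0, by integrating twice and fixing
    # the homogeneous part c*t + d from the boundary conditions.
    polys = [t]
    for _ in range(n):
        part = sp.integrate(sp.integrate(polys[-1], t), t)  # particular solution
        d = -part.subs(t, 0)
        c = -(part.subs(t, 1) + d)
        polys.append(sp.expand(part + c*t + d))
    return polys

Lam = lidstone(2)
print(Lam[1])  # t**3/6 - t/6
print(Lam[2])  # t**5/120 - t**3/36 + 7*t/360
assert sp.simplify(sp.diff(Lam[2], t, 2) - Lam[1]) == 0  # recurrence (1)
```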

In the middle of the last century, Widder [14] established the following basic interpolation formula for a 2n-times continuously differentiable function \(f\in C^{(2n)}([0,1])\):

$$\begin{aligned} f(x)=\sum _{k=0}^{n-1}\left[ f^{(2k)}(0)\Lambda _k(1-x) + f^{(2k)}(1)\Lambda _k(x)\right] +\int _0^{1}G_n(x,s)f^{(2n)}(s)\textrm{d}s. \end{aligned}$$

By the simple linear transform \(x\mapsto \hat{x}=\frac{x-a}{b-a}\), the above representation can be extended to an arbitrary interval \([a,b]\), that is, if \(f\in C^{(2n)}([a,b])\), then

$$\begin{aligned} \begin{aligned} f(x)=&\sum _{k=0}^{n-1}(b-a)^{2k}\left[ f^{(2k)}(a)\Lambda _k\left( \hat{x}^{*}\right) + f^{(2k)}(b)\Lambda _k\left( \hat{x}\right) \right] \\&\ \ +(b-a)^{2n-1}\int _a^{b}G_n\left( \hat{x},\hat{s}\right) f^{(2n)}(s)\textrm{d}s, \end{aligned} \end{aligned}$$
(5)

where \(\hat{x}^{*}=1-\hat{x}=\frac{b-x}{b-a}\) and \(\hat{s}=\frac{s-a}{b-a}\). For a comprehensive review on Lidstone interpolation including error representations and estimates, the reader is referred to [2].
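As a quick numerical check of (5) (our illustration, not part of the original argument), the sketch below verifies the case \(n=1\), where the formula reduces to linear interpolation plus a Green-function remainder; the interval \([0,2]\) and the function \(\sin \) are arbitrary choices of ours.

```python
import numpy as np
from scipy.integrate import quad

def G1(t, s):
    # Green function (3) of d^2/ds^2 on the unit interval
    return (t - 1.0)*s if s <= t else (s - 1.0)*t

a, b = 0.0, 2.0
f = np.sin
f2 = lambda s: -np.sin(s)            # f''
x = 0.7
xh = (x - a)/(b - a)                 # hat-transform of x

# remainder term of (5) for n = 1
rem, _ = quad(lambda s: G1(xh, (s - a)/(b - a))*f2(s), a, b, points=[x])
approx = f(a)*(1 - xh) + f(b)*xh + (b - a)*rem
print(approx, f(x))                  # both ~ 0.644218
```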

In this paper, we use polynomial expansion (5) in a slightly different context. More precisely, this formula will be exploited in establishing refinements of the Jensen inequality for certain classes of convex functions. Recall that a function \(f:I\rightarrow \mathbb {R}\), where \(I\subset \mathbb {R}\) is an interval, is said to be convex if the relation \( f\left( \left( 1-t \right) x+t y \right) \le \left( 1-t \right) f\left( x \right) +t f\left( y \right) \) holds for all \(x,y\in I\) and \(0\le t \le 1\). The Jensen inequality can be rewritten in the form of the corresponding functional, i.e.

$$\begin{aligned} \mathcal {J}_m(f, \textbf{x}, \textbf{p})=\sum _{i=1}^m p_if(x_i)-P_m f\left( \frac{1}{P_m}\sum _{i=1}^m p_i x_i \right) \ge 0, \end{aligned}$$
(6)

where \(f:I\rightarrow \mathbb {R}\) is a convex function, \(\textbf{x}=(x_1, x_2, \ldots , x_m)\in I^{m}\) and \(\textbf{p}=(p_1, p_2, \ldots , p_m)\in \mathbb {R}_+^{m}\) is such that \(P_m=\sum _{i=1}^m p_i>0\). Here, \(\mathbb {R}_+\) stands for the set of non-negative real numbers. This functional is monotonic, that is, \( \mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge \mathcal {J}_m(f, \textbf{x}, \textbf{q})\ge 0, \) whenever \(\textbf{p}\ge \textbf{q}\), i.e. \(p_i\ge q_i\), \(i=1,2,\ldots ,m\) (see [5] and [12, p. 717]). Moreover, this monotonicity has been exploited in [10] to establish mutual bounds for the Jensen functional expressed in terms of the associated non-weighted functional, that is,

$$\begin{aligned} mp_{\max }\mathcal {I}_m(f, \textbf{x})\ge \mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge mp_{\min }\mathcal {I}_m(f, \textbf{x}), \end{aligned}$$
(7)

where \(p_{\min }=\min _{1\le i\le m}p_i\), \(p_{\max }=\max _{1\le i\le m}p_i\), and where \(\mathcal {I}_m(f, \textbf{x})\) stands for the corresponding non-weighted functional, i.e.

$$\begin{aligned} \mathcal {I}_m(f, \textbf{x})=\frac{\sum _{i=1}^m f(x_i)}{m}-f\left( \frac{\sum _{i=1}^m x_i}{m} \right) . \end{aligned}$$
(8)

The lower bound in (7) is the refinement, while the upper one represents the reverse of the Jensen inequality. Based on (7), numerous inequalities such as inequalities of Young and Hölder, means inequalities, etc. have been refined (for more details, see [9, 10] and the references cited therein). For a comprehensive overview of classical and new results about the Jensen inequality, the reader is referred to monographs [8, 12, 13] and the references cited therein.
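The functionals (6) and (8), together with the bounds (7), are straightforward to test numerically. The following Python sketch (ours; the names `jensen` and `jensen0` are illustrative) checks (7) for \(f=\exp \) on random data.

```python
import numpy as np

def jensen(f, x, p):
    # weighted Jensen functional (6)
    P = p.sum()
    return p @ f(x) - P*f(p @ x / P)

def jensen0(f, x):
    # non-weighted functional (8)
    return f(x).mean() - f(x.mean())

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 5)
p = rng.uniform(0.1, 2.0, 5)
m = len(x)

jf = jensen(np.exp, x, p)
# mutual bounds (7)
assert m*p.max()*jensen0(np.exp, x) >= jf >= m*p.min()*jensen0(np.exp, x) >= 0
print(jf)
```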

The main task in this paper is to establish another type of lower bound for the Jensen functional, regardless of weights. In other words, our intention is to derive a non-trivial bound that is also valid for the non-weighted functional \(\mathcal {I}_m(f, \textbf{x})\), since (7) is trivial in that case. It turns out that the Lidstone interpolation is quite suitable for some classes of convex functions. In fact, this is not the first attempt to improve the Jensen inequality in this way. For example, Aras-Gazić et al. [3] derived the following identity, given here in its simplest discrete form,

$$\begin{aligned} \mathcal {J}_m(f, \textbf{x}, \textbf{p})=&\ \sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(a)\left( \sum _{i=1}^m p_i\Lambda _k\left( \hat{x}_i^{*} \right) -P_m \Lambda _k\left( \hat{x}_{P_m}^{*}\right) \right) \\&+\sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(b)\left( \sum _{i=1}^m p_i\Lambda _k\left( \hat{x}_i \right) -P_m \Lambda _k\left( \hat{x}_{P_m}\right) \right) \\&+(b-a)^{2n-1}\int _a^{b}\left( \sum _{i=1}^m p_iG_n\left( \hat{x}_i,\hat{s} \right) - P_m G_n\left( \hat{x}_{P_m},\hat{s} \right) \right) f^{(2n)}(s)\textrm{d}s, \end{aligned}$$
(9)

where \(\hat{x}_i=\frac{x_i-a}{b-a}\), \(\hat{x}_i^{*}=1-\hat{x}_i\), and \(\hat{x}_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i \hat{x}_i\), \(\hat{x}_{P_m}^{*}=1-\hat{x}_{P_m}\). This identity was the basic step in deriving Jensen-type inequalities for the classes of 2n-convex, completely convex and absolutely convex functions. The resulting inequalities were established by imposing positivity of the Jensen functionals appearing on the right-hand side of (9). However, no deeper analysis of the Lidstone polynomials and the corresponding Green functions regarding convexity was done. We will show that conditions of this type are redundant in the case of a non-negative m-tuple \(\textbf{p}\). However, it will be necessary to rewrite (9) in a more suitable form.

Let us first recall the definitions of the functions mentioned above. It is well known that an n-convex function is defined via the n-th order divided difference (see, e.g. [2, 13]), but since we deal with differentiable functions, we will use the simplest characterization, which asserts that if the n-th order derivative \(f^{(n)}\) exists on the given interval, then the function f is n-convex if and only if \(f^{(n)}\ge 0\) on that interval. Analogously, f is n-concave if and only if \(f^{(n)}\le 0\). Recall that if \(n=2\), then n-convexity reduces to the usual convexity. Further, the function \(f:[a,b]\rightarrow \mathbb {R}\) is completely convex if it has derivatives of all orders and if \((-1)^{k}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for every non-negative integer k. In particular, \(f(x)=\sin x\) is completely convex on the interval \([0,\pi ]\), while \(g(x)=\cos x\) is completely convex on \([-\frac{\pi }{2}, \frac{\pi }{2}]\). Finally, the function \(f:[a,b]\rightarrow \mathbb {R}\) is absolutely convex if it has derivatives of all orders and if \(f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for every non-negative integer k. Clearly, \(h(x)=\exp x\) is absolutely convex on any interval.

We aim here to establish non-negative lower bounds for functional (6) expressed in terms of the sum and alternating sum of the Lidstone polynomials. This can be done for some classes of convex functions. The outline of the paper is as follows: after this Introduction, in Sect. 2 we discuss some known properties of the Lidstone polynomials and the corresponding Green functions regarding their convexity on a unit interval. We also introduce Euler polynomials that are closely connected to the Lidstone polynomials. Moreover, we discuss conditions under which the sum and alternating sum of the Lidstone polynomials are convex. Section 3 is devoted to our main results. The first result corresponds to a convex function f whose opposite function \(-f\) is completely convex. In this case the Jensen functional can be easily bounded by the functionals that correspond to alternating sum of the Lidstone and Euler polynomials. The case of an absolutely convex function is much more complicated. Hence, it is necessary to impose some additional conditions to obtain refinement of the Jensen inequality expressed in terms of the sum of the Lidstone and Euler polynomials. Roughly speaking, the corresponding refinement can be established on a small enough interval. As an application, in Sect. 4, we derive more accurate power mean inequalities. In particular, we obtain strengthened versions of arithmetic–geometric mean inequality in a difference and a quotient form. Finally, in Sect. 5 we establish an improvement of the Hölder inequality based on the method developed in this paper.

To keep our discussion as concise as possible, we introduce the following notation that will be valid throughout the paper. As above, \(\hat{x}\) stands for the linear transform \(x\mapsto \frac{x-a}{b-a}\), i.e. \(\hat{x}=\frac{x-a}{b-a}\). In addition, \(\hat{x}^{*}=1- \hat{x}=\frac{b-x}{b-a}\). The same holds for a real m-tuple \(\textbf{x}=(x_1, x_2, \ldots , x_m)\), i.e. \(\hat{\textbf{x}}=(\hat{x}_1, \hat{x}_2, \ldots , \hat{x}_m)\) and \(\hat{\textbf{x}}^{*}=\textbf{1}-\hat{{\textbf{x}}}=(1-\hat{x}_1, 1-\hat{x}_2, \ldots , 1-\hat{x}_m)\). Furthermore, if \(x_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i x_i\), then \(\hat{x}_{P_m}=\frac{1}{P_m}\sum _{i=1}^m p_i \hat{x}_i\) and \(\hat{x}_{P_m}^{*}=1-\hat{x}_{P_m}\).

2 Preliminaries on the Lidstone Polynomials

At this point, we recall some basic properties of the Lidstone polynomials. These polynomials do not change sign on the unit interval. Moreover, their signs alternate, i.e. \(\Lambda _{2k}(t)\ge 0\), \(\Lambda _{2k+1}(t)\le 0\), for \(t\in [0,1]\). This conclusion is easily drawn from (2), (3) and (4), since the initial Green function \(G_1\) is obviously non-positive on the unit square. In addition, taking into account boundary value problem (1) that defines the Lidstone polynomials, one concludes that the signs of their second derivatives also alternate on the unit interval. This means that the Lidstone polynomials alternate with respect to convexity. A similar conclusion can be drawn for the sequence of Green functions \(G_n\). For the reader's convenience, these properties are clearly stated in the following proposition; for more details, the reader is referred to [2].

Proposition 1

(see [2]) Let n and m be non-negative integers. Then the following properties hold:

(i) \((-1)^{n}\Lambda _n(t)\ge 0\), \(t\in [0,1]\),

(ii) \((-1)^{n+1}\Lambda _n(t)\) is convex on the interval \([0,1]\),

(iii) \((-1)^{n}G_n(t,s)\ge 0\), \(t,s\in [0,1]\), \(n\ge 1\),

(iv) \(\frac{\partial ^{2m}G_n(t,s)}{\partial t^{2m}}=G_{n-m}(t,s)\), \(t,s\in [0,1]\), \(n-m\ge 1\); in particular, \(\frac{\partial ^{2}G_n(t,s)}{\partial t^{2}}=G_{n-1}(t,s)\),

(v) \((-1)^{n+1}G_n(t,s)\) is convex on \([0,1]\), for every fixed value \(s\in [0,1]\).

The Lidstone polynomials are closely connected to another class of special polynomials. The Euler polynomial \(E_n(t)\) of degree n may be defined by means of the generating function

$$\begin{aligned} \frac{2e^{tx}}{e^{x}+1}=\sum _{n=0}^\infty E_n(t){x^{n}}. \end{aligned}$$

The first few Euler polynomials are \(E_0(t)=1\), \(E_1(t)=t-\frac{1}{2}\), \(E_2(t)=\frac{1}{2}t^{2}-\frac{1}{2}t\), \(E_3(t)=\frac{1}{6}t^{3}-\frac{1}{4}t^{2}+\frac{1}{24}\), etc. Note that, with this normalization, \(E_n(t)\) equals the classical Euler polynomial divided by \(n!\). These polynomials have numerous interesting properties (see, e.g. [1, 2, 6]); here, of course, we are interested in their connection with the Lidstone polynomials. It is well known (see, e.g. [2]) that the Euler polynomial of an even order can be expressed in the following way:

$$\begin{aligned} E_{2n}(t)=\Lambda _n(t)+\Lambda _n(1-t). \end{aligned}$$
(10)

It is clear from (10) that the sign of \(E_{2n}(t)\) coincides with the sign of polynomial \(\Lambda _n(t)\) on the unit interval, i.e. \((-1)^{n}E_{2n}(t)\ge 0\), \(0\le t\le 1\). Furthermore, in terms of convexity, the Euler polynomial of even order behaves exactly the same as the corresponding Lidstone polynomial, that is, polynomial \((-1)^{n+1}E_{2n}(t)\) is convex on the unit interval.
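Identity (10) is easy to confirm symbolically. The sketch below (ours, for illustration) uses SymPy's classical Euler polynomials, rescaled by \(n!\) to match the normalization adopted here.

```python
import sympy as sp

t = sp.symbols('t')
Lam = [t, t**3/6 - t/6, t**5/120 - t**3/36 + 7*t/360]  # Lambda_0..Lambda_2

for n in range(3):
    # scaled Euler polynomial matching the generating function used here:
    # E_n(t) = (classical Euler polynomial) / n!
    E2n = sp.euler(2*n, t)/sp.factorial(2*n)
    assert sp.expand(Lam[n] + Lam[n].subs(t, 1 - t) - E2n) == 0
print("identity (10) holds for n = 0, 1, 2")
```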

Roughly speaking, in this paper we establish lower bounds for the Jensen functional expressed in terms of the sum and alternating sum of the Lidstone and Euler polynomials. More precisely, we consider the sum and alternating sum of polynomials of the form \((b-a)^{2k}\Lambda _k(t)\) and \((b-a)^{2k}E_{2k}(t)\), that is, we define polynomials

$$\begin{aligned} \begin{aligned} \alpha _{n}(t)=&\ \sum _{k=0}^{n}(b-a)^{2k}(-1)^{k+1}\Lambda _k(t),\\ \widetilde{\alpha }_{n}(t)=&\ \sum _{k=0}^{n}(b-a)^{2k}(-1)^{k+1}E_{2k}(t),\\ \omega _{n}(t)=&\ \sum _{k=0}^{n}(b-a)^{2k}\Lambda _k(t),\\ \widetilde{\omega }_{n}(t)=&\ \sum _{k=0}^{n}(b-a)^{2k}E_{2k}(t), \end{aligned} \end{aligned}$$

where n is a non-negative integer and \(a,b\in \mathbb {R}\). We want to answer the question of when these polynomials are convex. Of course, the matter is extremely simple for alternating sums.

Proposition 2

Let n be a non-negative integer and let \(a,b\in \mathbb {R}\). Then, polynomials \(\alpha _n(t)\) and \(\widetilde{\alpha }_n(t)\) are convex on interval [0, 1].

Proof

Due to Proposition 1 and (10), both polynomials \((b-a)^{2k}(-1)^{k+1}\Lambda _k(t)\) and \((b-a)^{2k}(-1)^{k+1}E_{2k}(t)\) are convex for every non-negative integer k. Consequently, the corresponding sums \(\alpha _n(t)\) and \(\widetilde{\alpha }_n(t)\) are also convex, which completes the proof. \(\square \)

Things get much more complicated if we consider the sum of polynomials \((b-a)^{2k}\Lambda _k(t)\) or \((b-a)^{2k}E_{2k}(t)\). In fact, we will see that increasing the value \(|b-a|\) ruins the convexity on the unit interval. Fortunately, we are able to establish convexity for smaller values of \(|b-a|\).

Proposition 3

Let n be a positive integer and let \(|b-a|\le \sqrt{6}\). Then, polynomials \(\omega _{n}(t)\) and \(\widetilde{\omega }_{n}(t)\) are convex on interval [0, 1].

Proof

We claim that all coefficients of the polynomial \(\omega _{n}(t)\) are non-negative. This yields convexity of \(\omega _{n}(t)\) on the unit interval, since a polynomial with non-negative coefficients has a non-negative second derivative for \(t\ge 0\). We prove our assertion by mathematical induction. Clearly, \(\omega _0(t)=\Lambda _0(t)=t\). Moreover,

$$\begin{aligned} \omega _1(t)=\Lambda _0(t)+(b-a)^{2}\Lambda _1(t)=\frac{(b-a)^{2}}{6}t^3+\frac{6-(b-a)^{2}}{6}t \end{aligned}$$

has non-negative coefficients provided that \(|b-a|\le \sqrt{6}\). Now, suppose that \( \omega _{n-1}(t)=\sum _{k=0}^{n-1}\alpha _{2k+1}t^{2k+1}, \) where \(\alpha _{2k+1}\ge 0\), \(k=0,1,2,\ldots ,n-1.\) Note also that \(\sum _{k=0}^{n-1}\alpha _{2k+1}=1\), since \(\omega _{n-1}(1)=1\). Now, we have that

$$\begin{aligned} \begin{aligned} \omega _n''(t)=&\ \sum _{k=0}^n (b-a)^{2k}\Lambda _k''(t)=(b-a)^{2}\sum _{k=1}^n (b-a)^{2(k-1)}\Lambda _{k-1}(t)\\ =&\ (b-a)^{2}\sum _{k=0}^{n-1} (b-a)^{2k}\Lambda _{k}(t)=(b-a)^{2}{\omega }_{n-1}(t)\\ =&\ (b-a)^{2}\sum _{k=0}^{n-1}\alpha _{2k+1}t^{2k+1}. \end{aligned} \end{aligned}$$

Moreover, since \(\omega _n(0)=0\), by integrating, it follows that

$$\begin{aligned} \omega _n(t)=(b-a)^{2}\sum _{k=0}^{n-1}\frac{\alpha _{2k+1}}{(2k+2)(2k+3)}t^{2k+3}+\alpha t, \end{aligned}$$
(11)

where \(\alpha \) is a real constant. It remains to prove that \(\alpha \ge 0\). Namely, since \((b-a)^{2}\le 6\le (2k+2)(2k+3)\) for any non-negative integer k, it follows that

$$\begin{aligned} (b-a)^{2}\sum _{k=0}^{n-1}\frac{\alpha _{2k+1}}{(2k+2)(2k+3)}\le \sum _{k=0}^{n-1}\alpha _{2k+1}=1. \end{aligned}$$
(12)

On the other hand, since \(\omega _n(1)=1\), taking into account (11) and (12), we have that

$$\begin{aligned} \alpha =1-(b-a)^{2}\sum _{k=0}^{n-1}\frac{\alpha _{2k+1}}{(2k+2)(2k+3)}\ge 0. \end{aligned}$$

Consequently, \(\omega _n(t)\) is convex on [0, 1]. Finally, the polynomial \(\omega _n(1-t)\) is also convex on the unit interval, which yields convexity of \(\widetilde{\omega }_n(t)\). \(\square \)

Remark 1

Note that the polynomial \(\omega _1(t)\) is convex on the unit interval regardless of the parameters a and b. On the other hand, since \(\omega _2''(t)=(b-a)^{2}\omega _1(t)\), as soon as \(|b-a|>\sqrt{6}\), the polynomial \(\omega _2(t)\) has an inflection point at \(t_0=\frac{\sqrt{(b-a)^{2}-6}}{|b-a|}\in (0,1)\), so it is neither convex nor concave on the unit interval.
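Both Proposition 3 and the remark above can be inspected symbolically. The following SymPy sketch (ours) prints the coefficients of \(\omega _2\) as functions of \(h=b-a\) and confirms that \(\omega _2''\) vanishes at the inflection point \(t_0\) when \(h=3>\sqrt{6}\).

```python
import sympy as sp

t, h = sp.symbols('t h', positive=True)   # h stands for b - a
Lam = [t, t**3/6 - t/6, t**5/120 - t**3/36 + 7*t/360]

omega2 = sp.expand(sum(h**(2*k)*Lam[k] for k in range(3)))
# coefficients of omega_2 in t (expressions in h); all are
# non-negative as long as h <= sqrt(6), in line with Proposition 3
print(sp.Poly(omega2, t).all_coeffs())

# for h > sqrt(6), omega_2'' = h**2 * omega_1 vanishes inside (0, 1)
h0 = sp.Integer(3)                        # 3 > sqrt(6)
t0 = sp.sqrt(h0**2 - 6)/h0                # inflection point from Remark 1
print(t0, sp.simplify(sp.diff(omega2, t, 2).subs({h: h0, t: t0})))  # sqrt(3)/3, 0
```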

3 Main Results

In this section we establish more accurate Jensen-type inequalities for the classes of completely convex and absolutely convex functions. To keep our discussion more concise, we will rewrite identity (9) in a more suitable form. Namely, the right-hand side of (9) consists of three Jensen functionals

$$\begin{aligned} \begin{aligned} \mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})&=\sum _{i=1}^m p_i\Lambda _k(\hat{x}_i^{*} )-P_m \Lambda _k\left( \hat{x}_{P_m}^{*} \right) , \\ \mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})&=\sum _{i=1}^m p_i\Lambda _k\left( \hat{x}_i \right) -P_m \Lambda _k\left( \hat{x}_{P_m} \right) ,\\ \mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})&=\sum _{i=1}^m p_iG_n\left( \hat{x}_i,\hat{s} \right) - P_m G_n\left( \hat{x}_{P_m},\hat{s} \right) , \end{aligned} \end{aligned}$$

so, it can be represented as

$$\begin{aligned} \begin{aligned}&\mathcal {J}_m(f, \textbf{x}, \textbf{p}) \\ =&\sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p}) +\sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(b)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})\\&\quad +(b-a)^{2n-1}\int _a^{b}\mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})f^{(2n)}(s)\textrm{d}s. \end{aligned} \end{aligned}$$
(13)

Taking into account properties (ii) and (v) from Proposition 1, we see that successive Jensen functionals for \(\Lambda _k\) and \(G_k\) alternate in sign, i.e. \((-1)^{k+1}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge 0\), \((-1)^{k+1}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\), \((-1)^{k+1}\mathcal {J}_m(G_k, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\), for any non-negative integer k. Therefore, it is natural to consider the Jensen functional for completely convex functions since their even derivatives alternate in sign on the corresponding interval. To be as precise as possible, we need a notion of 2n-complete convexity. We say that the function \(f\in C^{(2n)}([a,b])\) is 2n-completely convex if \((-1)^{k}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for \(k=0,1,2,\ldots ,n\). In fact, the improvement of the Jensen inequality will be established for the function f whose opposite function \(-f\) is completely convex, i.e. \((-1)^{k+1}f^{(2k)}(x)\ge 0\), \(x\in [a,b]\), for \(k=0,1,2,\ldots ,n\). It should be noticed here that \(-f\) is convex on \([a,b]\). Now, we are ready to state and prove our first result.

Theorem 1

Let n be a positive integer, let \(f\in C^{(2n)}([a,b])\), and let \(\textbf{x}\in [a,b]^{m}\), \(\textbf{p}\in \mathbb {R}_+^{m}\). If \(-f\) is a 2n-completely convex function, then

$$\begin{aligned} \begin{aligned} \mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge&\ a_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p})+b_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\\ \ge&\ \min \{a_{\min }, b_{\min }\}\mathcal {J}_m(\widetilde{\alpha }_{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0, \end{aligned} \end{aligned}$$
(14)

where \(a_{\min }=\min _{0\le k\le n-1}|f^{(2k)}(a)|\) and \(b_{\min }=\min _{0\le k\le n-1}|f^{(2k)}(b)|\). On the other hand, if f is a 2n-completely convex function, then the following inequality holds:

$$\begin{aligned} \begin{aligned} \mathcal {J}_m(f, \textbf{x}, \textbf{p})\le&-a_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p})-b_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\\ \le&-\min \{a_{\min }, b_{\min }\}\mathcal {J}_m(\widetilde{\alpha }_{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\le 0. \end{aligned} \end{aligned}$$
(15)

Proof

Let \(-f\) be a 2n-completely convex function. Then,

$$\begin{aligned} \mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})f^{(2n)}(s)=(-1)^{n+1}\mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})(-1)^{n+1}f^{(2n)}(s)\ge 0, \end{aligned}$$

which provides non-negativity of the integral on the right-hand side of (13). Consequently, we have that

$$\begin{aligned} \begin{aligned}&\mathcal {J}_m(f, \textbf{x}, \textbf{p}) \\ \ge&\ \sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p}) +\sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(b)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p}). \end{aligned} \end{aligned}$$
(16)

Now, our aim is to estimate both sums on the right-hand side of the previous inequality. Since \(|f^{(2k)}(a)|=(-1)^{k+1}f^{(2k)}(a)\ge a_{\min }\), for every \(k=0,1,2,\ldots , n-1\), it follows that

$$\begin{aligned} \begin{aligned}&\sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\\ =&\sum _{k=0}^{n-1}(-1)^{k+1}f^{(2k)}(a)(b-a)^{2k}(-1)^{k+1}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\\ \ge&\ a_{\min }\sum _{k=0}^{n-1}(b-a)^{2k}(-1)^{k+1}\left( \sum _{i=1}^m p_i\Lambda _k(\hat{x}_i^{*} )-P_m \Lambda _k\left( \hat{x}_{P_m}^{*} \right) \right) \\ =&\ a_{\min }\!\left( \sum _{i=1}^m p_i\!\left( \sum _{k=0}^{n-1}(b-a)^{2k}(-1)^{k+1} \Lambda _k(\hat{x}_i^{*} ) \right) -P_m\!\left( \sum _{k=0}^{n-1}(b-a)^{2k}(-1)^{k+1} \Lambda _k\left( \hat{x}_{P_m}^{*} \right) \right) \!\right) \\ =&\ a_{\min }\left( \sum _{i=1}^m p_i{\alpha }_{n-1}(\hat{x}_i^{*} )-P_m {\alpha }_{n-1}\left( \hat{x}_{P_m}^{*} \right) \right) = a_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p}), \end{aligned} \end{aligned}$$

and similarly,

$$\begin{aligned} \sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(b)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})\ge b_{\min }\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p}), \end{aligned}$$

which yields the first inequality sign in (14). Further, the second inequality sign in (14) holds due to the obvious relation

$$\begin{aligned} \mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p})+\mathcal {J}_m(\alpha _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})=\mathcal {J}_m(\widetilde{\alpha }_{n-1}, {\hat{{\textbf{x}}}}, \textbf{p}), \end{aligned}$$

and finally, the last one is a consequence of convexity of polynomial \(\widetilde{\alpha }_{n-1}\) on the unit interval. This proves (14). On the other hand, if f is 2n-completely convex, then by putting \(-f\) in (14), we obtain (15), as claimed. \(\square \)

Remark 2

Relations (14) and (15) are homogeneous with respect to m-tuple \(\textbf{p}\). Moreover, since \(\mathcal {J}_m(f, \textbf{x}, \textbf{1})=m\mathcal {I}_m(f, \textbf{x})\), where \(\textbf{1}=(1,1,\ldots ,1)\), relation (14) implies

$$\begin{aligned} \begin{aligned} \mathcal {I}_m(f, \textbf{x})\ge&\ a_{\min }\mathcal {I}_m(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}})+b_{\min }\mathcal {I}_m(\alpha _{n-1}, {\hat{{\textbf{x}}}})\\ \ge&\ \min \{a_{\min }, b_{\min }\}\mathcal {I}_m(\widetilde{\alpha }_{n-1}, {\hat{{\textbf{x}}}})\ge 0. \end{aligned} \end{aligned}$$
(17)

Clearly, the situation is similar with inequality (15). It is important to note that inequalities in (17) provide non-trivial lower bounds for the non-weighted functional (via the Lidstone polynomials), which was not the case in [10].

Remark 3

Consider the cosine function restricted to the interval \([\frac{2\pi }{3},\frac{5\pi }{4}]\). Since \(\cos ^{(2k)}x=(-1)^{k}\cos x\), it follows that \(-\cos x\) is completely convex on that interval. In addition, we have that \(\cos ^{(2k)}\frac{2\pi }{3}=\frac{(-1)^{k+1}}{2}\) and \(\cos ^{(2k)}\frac{5\pi }{4}=\frac{(-1)^{k+1}}{\sqrt{2}}\), so that \(a_{\min }=\frac{1}{2}\) and \(b_{\min }=\frac{1}{\sqrt{2}}\). In particular, if \(m=2\), then (17) yields

$$\begin{aligned} \begin{aligned} \frac{\cos x_1+\cos x_2}{2}-\cos \left( \frac{x_1+x_2}{2} \right) \ge&\ \frac{1}{2}\mathcal {I}_2(\alpha _{n-1}, {\hat{{\textbf{x}}^{*}}})+\frac{1}{\sqrt{2}}\mathcal {I}_2(\alpha _{n-1}, {\hat{{\textbf{x}}}})\\ \ge&\ \frac{1}{2}\mathcal {I}_2(\widetilde{\alpha }_{n-1}, {\hat{{\textbf{x}}}})\ge 0, \end{aligned} \end{aligned}$$

where \( {\hat{{\textbf{x}}}}=\left( \frac{12x_1-8\pi }{7\pi },\frac{12x_2-8\pi }{7\pi } \right) \), \(x_1, x_2\in [\frac{2\pi }{3},\frac{5\pi }{4}]\). Clearly, this inequality provides an explicit refinement of the non-weighted Jensen inequality.
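The chain of inequalities above can be evaluated explicitly. The following Python sketch (ours; the sample points are arbitrary) checks the refinement for \(n=3\), i.e. with the polynomial \(\alpha _2\), including the final Euler-sum bound.

```python
import numpy as np

a, b = 2*np.pi/3, 5*np.pi/4
h = b - a                                   # 7*pi/12 < sqrt(6)

def lam(k, t):
    return [t, t**3/6 - t/6, t**5/120 - t**3/36 + 7*t/360][k]

def alpha2(t):
    # alternating sum alpha_2(t) = sum_{k=0}^{2} h^{2k} (-1)^{k+1} Lambda_k(t)
    return sum(h**(2*k)*(-1)**(k + 1)*lam(k, t) for k in range(3))

def I2(g, x):
    # non-weighted functional (8) for m = 2
    return (g(x[0]) + g(x[1]))/2 - g((x[0] + x[1])/2)

x = np.array([2.1, 3.8])                    # two points in [2*pi/3, 5*pi/4]
xh = (x - a)/h

lhs = I2(np.cos, x)
mid = 0.5*I2(alpha2, 1 - xh) + I2(alpha2, xh)/np.sqrt(2)
low = 0.5*(I2(alpha2, 1 - xh) + I2(alpha2, xh))   # Euler-sum bound via (10)
print(lhs, mid, low)
assert lhs >= mid >= low >= 0
```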

Remark 4

Theorem 1 refers to functions whose even derivatives alternate in sign on the interval \([a,b]\). If we take a look at its proof, we see that it is sufficient to assume that the 2n-th order derivative of the function f does not change sign on \([a,b]\) and that the lower derivatives of even order alternate at the endpoints of the interval. More precisely, it suffices to assume that \((-1)^{n}f^{(2n)}(x)\ge 0\), \(x\in [a,b]\), and \((-1)^{k}f^{(2k)}(a)\ge 0\), \((-1)^{k}f^{(2k)}(b)\ge 0\) for \(k=0,1,2,\ldots , n-1\). It is easy to see that these conditions imply 2n-complete convexity. Namely, without loss of generality, we can suppose that n is even, i.e. \(f^{(2n)}(x)\ge 0\) on \([a,b]\), which means that \(f^{(2n-2)}\) is convex on \([a,b]\). Now, let \(x\in [a,b]\). Then \(x=(1-t)a+tb\) for some \(t\in (0,1)\). Therefore, since \(f^{(2n-2)}(a)\le 0\) and \(f^{(2n-2)}(b)\le 0\), we have that \(f^{(2n-2)}(x)=f^{(2n-2)}((1-t)a+tb)\le (1-t)f^{(2n-2)}(a)+tf^{(2n-2)}(b)\le 0\). This means that \(f^{(2n-4)}\) is concave on \([a,b]\) and, by the same argument, it follows that \(f^{(2n-4)}(x)\ge 0\) on \([a,b]\). Clearly, this procedure provides 2n-complete convexity.

Theorem 1 yields a lower bound for the Jensen functional expressed in terms of the alternating sum of the Lidstone polynomials. It is much more complicated to establish a lower bound expressed in terms of the usual sum of the Lidstone polynomials. Our next goal is to establish a result that corresponds to functions with non-negative even derivatives. In fact, we will derive a more general result that also covers the case of completely convex functions. Bearing in mind Remark 4, we now deal with functions whose derivatives of order \(4l+2\) are non-negative at the endpoints of the interval \([a,b]\). However, to establish a more accurate Jensen inequality, some additional conditions for derivatives of order 4l need to be imposed. Proposition 3 will certainly play an important role in obtaining the refinement of the Jensen inequality.

Theorem 2

Let n be a positive integer and \(0<b-a\le \sqrt{6}\). Let \(f\in C^{(2n)}([a,b])\), and let \(\textbf{x}\in [a,b]^{m}\), \(\textbf{p}\in \mathbb {R}_+^{m}\). Further, suppose that there exist \(\overline{a},\overline{b}\ge 0\) such that

$$\begin{aligned} \max _{1\le l\le \lfloor \!\frac{n-1}{2}\!\rfloor } f^{(4l)}(a)\le \overline{a}\le \min _{0\le l\le \lfloor \!\frac{n-2}{2}\!\rfloor } f^{(4l+2)}(a) \end{aligned}$$
(18)

and

$$\begin{aligned} \max _{1\le l\le \lfloor \!\frac{n-1}{2}\!\rfloor } f^{(4l)}(b)\le \overline{b}\le \min _{0\le l\le \lfloor \!\frac{n-2}{2}\!\rfloor } f^{(4l+2)}(b). \end{aligned}$$
(19)

If n is odd and f is 2n-convex, or n is even and f is 2n-concave, then the following inequality holds:

$$\begin{aligned} \begin{aligned} \mathcal {J}_m(f, \textbf{x}, \textbf{p})\ge&\ \overline{a}\mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p})+\overline{b}\mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\\ \ge&\ \min \{ \overline{a}, \overline{b}\}\mathcal {J}_m(\widetilde{\omega }_{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0. \end{aligned} \end{aligned}$$
(20)

Proof

First, let n be odd and let f be 2n-convex. Then \(\mathcal {J}_m(G_n, {\hat{{\textbf{x}}}}, \textbf{p})\ge 0\) and \(f^{(2n)}(s)\ge 0\), \(s\in [a,b]\), so the integral on the right-hand side of (13) is non-negative, which means that (16) holds in this case. Similarly, the case of an even n and 2n-concave function f also yields (16).

Now, we aim to establish the suitable lower bounds for both terms on the right-hand side of (16). If k is odd, i.e. \(k=2l+1\), then \(\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge 0\), by Proposition 1, and \(f^{(2k)}(a)=f^{(4l+2)}(a)\ge \overline{a}\) by (18), so we have that \(f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge \overline{a}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\). If \(k\ge 2\) is even, i.e. \(k=2l\), then \(\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\le 0\), by Proposition 1, and \(f^{(2k)}(a)=f^{(4l)}(a)\le \overline{a}\), so we again obtain \(f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\ge \overline{a}\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p})\). Consequently, since \(\mathcal {J}_m(\Lambda _0, {\hat{{\textbf{x}}^{*}}}, \textbf{p})=0\), we have that

$$\begin{aligned} \sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(a)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}^{*}}}, \textbf{p}) \ge&\ \overline{a}\sum _{k=0}^{n-1}(b-a)^{2k}\left( \sum _{i=1}^m p_i\Lambda _k(\hat{x}_i^{*} )-P_m \Lambda _k\left( \hat{x}_{P_m}^{*} \right) \right) \\ =&\ \overline{a}\left( \sum _{i=1}^m p_i\left( \sum _{k=0}^{n-1}(b-a)^{2k} \Lambda _k(\hat{x}_i^{*} ) \right) -P_m\left( \sum _{k=0}^{n-1}(b-a)^{2k} \Lambda _k\left( \hat{x}_{P_m}^{*} \right) \right) \right) \\ =&\ \overline{a}\left( \sum _{i=1}^m p_i\omega _{n-1}(\hat{x}_i^{*} )-P_m \omega _{n-1}\left( \hat{x}_{P_m}^{*} \right) \right) =\overline{a}\mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p}), \end{aligned}$$

and similarly,

$$\begin{aligned} \sum _{k=0}^{n-1}(b-a)^{2k}f^{(2k)}(b)\mathcal {J}_m(\Lambda _k, {\hat{{\textbf{x}}}}, \textbf{p})\ge \overline{b}\mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p}), \end{aligned}$$

which yields the first inequality sign in (20). Clearly, the second inequality sign in (20) holds due to identity

$$\begin{aligned} \mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}^{*}}}, \textbf{p})+\mathcal {J}_m(\omega _{n-1}, {\hat{{\textbf{x}}}}, \textbf{p})=\mathcal {J}_m(\widetilde{\omega }_{n-1}, {\hat{{\textbf{x}}}}, \textbf{p}), \end{aligned}$$

while the last sign holds due to Proposition 3. The proof is now completed. \(\square \)

Remark 5

It should be noticed here that if \(n=1\), both Theorems 1 and 2 reduce to the classical Jensen inequality.

Remark 6

Similarly to Remark 2, the non-weighted form of relation (20) reads

$$\begin{aligned} \begin{aligned} \mathcal {I}_m(f, \textbf{x})\ge&\ \overline{a}\mathcal {I}_m(\omega _{n-1}, {\hat{{\textbf{x}}^{*}}})+\overline{b}\mathcal {I}_m(\omega _{n-1}, {\hat{{\textbf{x}}}}) \ge \min \{ \overline{a}, \overline{b}\}\mathcal {I}_m(\widetilde{\omega }_{n-1}, {\hat{{\textbf{x}}}})\ge 0, \end{aligned} \end{aligned}$$

which represents a non-trivial lower bound for the non-weighted Jensen functional.

Remark 7

A function f fulfilling the conditions of Theorem 2 is convex, since it satisfies (20). The same conclusion can be drawn from identity (5). Namely, by taking the second derivative, we have that

$$\begin{aligned} \begin{aligned} f''(x)=&\sum _{k=1}^{n-1}(b-a)^{2k-2}\left[ f^{(2k)}(a)\Lambda _{k-1}\left( \hat{x}^{*}\right) + f^{(2k)}(b)\Lambda _{k-1}\left( \hat{x}\right) \right] \\&\ \ +(b-a)^{2n-3}\int _a^{b}G_{n-1}\left( \hat{x},\hat{s}\right) f^{(2n)}(s)\textrm{d}s\\ \ge&\sum _{k=0}^{n-2}(b-a)^{2k}\left[ f^{(2k+2)}(a)\Lambda _{k}\left( \hat{x}^{*}\right) + f^{(2k+2)}(b)\Lambda _{k}\left( \hat{x}\right) \right] , \end{aligned} \end{aligned}$$

since the above integral is non-negative. In addition, taking into account (18) and (19), we have that

$$\begin{aligned} f''(x)\ge \overline{a}\omega _{n-2}\left( \hat{x}^{*}\right) + \overline{b}\omega _{n-2}\left( \hat{x}\right) \ge 0,\quad x\in [a,b], \end{aligned}$$

since the coefficients of polynomial \(\omega _{n-2}\) are non-negative, as proved in Proposition 3. Moreover, by repeating this procedure, we conclude that \(f^{(4l+2)}(x)\ge 0\), \(x\in [a,b]\), \(0\le l\le \lfloor \!\frac{n-1}{2}\!\rfloor \).

Remark 8

It is not hard to find examples of functions satisfying (18) and (19). Namely, Theorem 2 covers the case of a convex function whose opposite function is completely convex. Such a function has non-negative derivatives of order \(4l+2\) and non-positive derivatives of order 4l, so conditions (18) and (19) are always fulfilled. In particular, the cosine function from Remark 3 satisfies these conditions. On the other hand, the functions \(\exp x\), \(\exp (-x)\), \(\cosh x\) have equal even derivatives that are always non-negative. In particular, if \(f(x)=\exp x\), then we can take \(\overline{a}=\exp a\) and \(\overline{b}=\exp b\). These functions will be crucial in establishing refinements of some power mean inequalities.
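For \(f=\exp \), Theorem 2 with \(n=3\) can be tested directly. The following Python sketch (ours; interval and data are arbitrary choices, subject to \(b-a\le \sqrt{6}\)) checks the first bound in (20) with \(\overline{a}=e^{a}\), \(\overline{b}=e^{b}\).

```python
import numpy as np

a, b = 0.3, 1.5                      # b - a <= sqrt(6)
h = b - a

lam = [lambda t: t,
       lambda t: t**3/6 - t/6,
       lambda t: t**5/120 - t**3/36 + 7*t/360]

def omega2(t):
    # omega_2(t) = sum_{k=0}^{2} (b-a)^{2k} Lambda_k(t)
    return sum(h**(2*k)*lam[k](t) for k in range(3))

def J(g, x, p):
    P = p.sum()
    return p @ g(x) - P*g(p @ x / P)

rng = np.random.default_rng(1)
x = rng.uniform(a, b, 6)
p = rng.uniform(0.2, 1.5, 6)
xh = (x - a)/h                       # hat-transform

jf = J(np.exp, x, p)
bound = np.exp(a)*J(omega2, 1 - xh, p) + np.exp(b)*J(omega2, xh, p)
print(jf, bound)
assert jf >= bound >= 0              # first bound in (20)
```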

4 Strengthened Power Mean Inequalities in Terms of the Lidstone Polynomials

We aim now to derive more accurate power mean inequalities based on our Theorem 2. Recall that a power mean is defined by

$$\begin{aligned} M_{r}\left( \textbf{x}, \textbf{p} \right) =\left\{ \begin{array}{l} \left( \frac{1}{P_m}\sum _{i=1}^m p_i{x_i}^{r} \right) ^{\frac{1}{r}},\quad r\ne 0, \\ \left( \prod _{i=1}^m {x_i}^{p_i} \right) ^{\frac{1}{P_m}},\quad \quad \quad r=0, \end{array} \right. \end{aligned}$$

while the case of \(p_1=p_2=\cdots =p_m\) yields the corresponding non-weighted mean

$$\begin{aligned} m_{r}\left( \textbf{x} \right) =\left\{ \begin{array}{l} \left( \frac{1}{m}\sum _{i=1}^m {x_i}^{r} \right) ^{\frac{1}{r}},\quad r\ne 0, \\ \left( \prod _{i=1}^m {x_i} \right) ^{\frac{1}{m}},\quad \quad \ \ r=0. \end{array} \right. \end{aligned}$$

Here, and throughout this section, \(\textbf{x}=(x_1, x_2, \ldots , x_m)\) stands for a positive m-tuple, i.e. \(x_i>0\), \(i=1,2,\ldots , m\). In particular, for \(r=-1,0,1\), we obtain the harmonic, geometric and arithmetic mean, respectively. The basic power mean inequality, describing the monotonic behavior of means, asserts that if \(r<s\), then

$$\begin{aligned} M_{r}\left( \textbf{x}, \textbf{p} \right) \le M_{s}\left( \textbf{x}, \textbf{p} \right) . \end{aligned}$$
(21)

This inequality is still of interest to numerous mathematicians. For a comprehensive study of power means including refinements and generalizations, the reader is referred to monographs [12, 13], as well as to papers [7, 9, 10] and the references cited therein. In particular, the mentioned paper [10] provides mutual bounds for the differences of means in terms of the corresponding non-weighted means.

According to Sect. 3, we establish here a different kind of lower bound for the difference \(M_{s}\left( \textbf{x}, \textbf{p} \right) -M_{r}\left( \textbf{x}, \textbf{p} \right) \). To apply Theorem 2, we have to choose the appropriate functions in the Jensen functional. We first consider the cases when one of the parameters r and s in (21) is equal to zero. The first case we deal with concerns the function \(f(t)=\frac{1}{r}\log t\), where \(r\ne 0\). Since \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\), it follows that f is 2n-convex for \(r<0\). Therefore, in this setting, Theorem 2 cannot be applied for even n. We have already commented that the case of \(n=1\) is trivial, so the first non-trivial case appears for \(n=3\). Of course, conditions (18) and (19) also need to be satisfied. This case is carried out in the sequel.

We also use here the transformation

$$\begin{aligned} \textbf{x}=(x_1, x_2, \ldots , x_m)\rightarrow \textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r}),\quad r\ne 0. \end{aligned}$$

It should be noticed here that if \(x_i\in [a,b]\), then \(x_i^{r}\in [\min \{a^{r},b^{r}\},\max \{a^{r},b^{r}\} ]\), so in order to apply Theorem 2, we deal with transformation \(x_i^{r}\mapsto \frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }\). Clearly, if \(r>0\), then \(\frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }=\frac{x_i^{r}-a^{r}}{b^{r}-a^{r}}\), while \(\frac{x_i^{r}-\min \{a^{r},b^{r}\}}{\left| b^{r}-a^{r}\right| }=\frac{b^{r}-x_i^{r}}{b^{r}-a^{r}}\), for \(r<0\). However, due to the symmetry, we can define \(\hat{x_i^{r}}=\frac{x_i^{r}-a^{r}}{b^{r}-a^{r}}\), \(\hat{x_i^{r *}}=\frac{b^{r}-x_i^{r}}{b^{r}-a^{r}}\), \(i=1,2,\ldots , m\), and so \({{\hat{\textbf{x}}^{r}}}=\left( \hat{x_1^{r}}, \hat{x_2^{r}},\ldots , \hat{x_m^{r}} \right) \) and \({{\hat{\textbf{x}}^{r *}}}=\left( \hat{x_1^{r *}}, \hat{x_2^{r *}},\ldots , \hat{x_m^{r *}} \right) \).

Considering the above discussion, we give the first application of Theorem 2, which provides a more accurate estimate between the means \(M_{0}\left( \textbf{x}, \textbf{p} \right) \) and \(M_{r}\left( \textbf{x}, \textbf{p} \right) \) in the so-called quotient form. As in the previous section, these estimates depend on the corresponding interval.

Corollary 1

Let \(r\ne 0\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(|b^{r}-a^{r}|\le \sqrt{6}\) and \(\min \{a^{r}, b^{r}\}\ge \sqrt{6}\). Then the following inequalities hold:

$$\begin{aligned} \begin{aligned} \left| \log \frac{M_{0}\left( \textbf{x}, \textbf{p} \right) }{M_{r}\left( \textbf{x}, \textbf{p} \right) }\right|&\ge \frac{1}{|r|P_m}\left[ a^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r *}}}, \textbf{p}) +b^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\right] \\&\ge \frac{\min \{a^{-2r},b^{-2r}\}}{|r|P_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\ge 0, \end{aligned} \end{aligned}$$
(22)

where \(\Omega _n(t)=\sum _{k=0}^{n}(b^{r}-a^{r})^{2k}\Lambda _k(t)\) and \(\overline{\Omega }_n(t)=\sum _{k=0}^{n}(b^{r}-a^{r})^{2k}E_{2k}(t)\).

Proof

We consider two cases depending on whether \(r<0\) or \(r>0\). First, let \(r<0\). If \(f(t)=\frac{1}{r}\log t\) and \(\textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), the left-hand side of (20) becomes

$$\begin{aligned} \mathcal {J}_m(f, \textbf{x}^{r}, \textbf{p})=\sum _{i=1}^m p_i\log x_i-P_m\log \left( \frac{1}{{P_m} }{\sum _{i=1}^m p_ix_i^{r}} \right) ^{\frac{1}{r}} =P_m\log \frac{M_{0}\left( \textbf{x}, \textbf{p} \right) }{ M_{r}\left( \textbf{x}, \textbf{p} \right) }. \end{aligned}$$

Moreover, since \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\), f is 2n-convex for \(r<0\), so we can choose \(n=3\), due to Theorem 2. However, it is necessary to fulfill conditions (18) and (19). It is easy to see that \(f^{(2)}(t)\ge f^{(4)}(t)\) for \(t\ge \sqrt{6}\), so these conditions hold for \(\overline{a}=f^{(2)}(a^{r})=-{a^{-2r}}/{r}\) and \(\overline{b}=f^{(2)}(b^{r})=-{b^{-2r}}/{r}\), since \(\min \{a^{r}, b^{r}\}\ge \sqrt{6}\). Consequently, (20) reduces to

$$\begin{aligned} \begin{aligned} \log \frac{M_{0}\left( \textbf{x}, \textbf{p} \right) }{M_{r}\left( \textbf{x}, \textbf{p} \right) }&\ge -\frac{1}{rP_m}\left[ a^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r *}}}, \textbf{p}) +b^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\right] \\&\ge -\frac{a^{-2r}}{rP_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\ge 0. \end{aligned} \end{aligned}$$
(23)

It remains to consider the case when \(r>0\). Then, by putting \(f(t)=-\frac{1}{r}\log t\), \( \textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), \(n=3\), in (20), and following the lines as in the above case, we arrive at the relation

$$\begin{aligned} \begin{aligned} \log \frac{M_{r}\left( \textbf{x}, \textbf{p} \right) }{M_{0}\left( \textbf{x}, \textbf{p} \right) }&\ge \frac{1}{rP_m}\left[ a^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r *}}}, \textbf{p}) +b^{-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\right] \\&\ge \frac{b^{-2r}}{rP_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\ge 0. \end{aligned} \end{aligned}$$
(24)

Clearly, inequalities (23) and (24) yield (22), which completes the proof. \(\square \)

Remark 9

By putting \(r=-1\) in (22), we obtain a refinement of the geometric-harmonic mean inequality in a quotient form. More precisely, we have that

$$\begin{aligned} \begin{aligned} \log \frac{M_{0}\left( \textbf{x}, \textbf{p} \right) }{M_{-1}\left( \textbf{x}, \textbf{p} \right) }&\ge \frac{1}{P_m}\left[ a^{2}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{-1 *}}}, \textbf{p}) +b^{2}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{-1}}}, \textbf{p})\right] \\&\ge \frac{a^{2}}{P_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}^{-1}}}, \textbf{p})\ge 0, \end{aligned} \end{aligned}$$

provided that \([a,b]\subseteq (0,\frac{1}{\sqrt{6}}]\) and \(|b^{-1}-a^{-1}|\le \sqrt{6}\). Similarly, the case of \(r=1\) represents a strengthened version of the arithmetic–geometric mean inequality. Namely, if \([a,b]\subseteq [{\sqrt{6}},\infty )\) and \(|b-a|\le \sqrt{6}\), then

$$\begin{aligned} \begin{aligned} \log \frac{M_{1}\left( \textbf{x}, \textbf{p} \right) }{M_{0}\left( \textbf{x}, \textbf{p} \right) }&\ge \frac{1}{P_m}\left[ a^{-2}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{ *}}}, \textbf{p}) +b^{-2}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}}}, \textbf{p})\right] \\&\ge \frac{1}{b^{2}P_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}}}, \textbf{p})\ge 0. \end{aligned} \end{aligned}$$

Of course, the obtained relations also describe refinements of the corresponding non-weighted versions of inequalities. In particular, the latter relation reads

$$\begin{aligned} \begin{aligned} \log \frac{m_{1}\left( \textbf{x} \right) }{m_{0}\left( \textbf{x} \right) } \ge a^{-2}\mathcal {I}_m(\Omega _2, {{\hat{\textbf{x}}^{ *}}}) +b^{-2}\mathcal {I}_m(\Omega _2, {{\hat{\textbf{x}}}})\ge b^{-2}\mathcal {I}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}}}) \ge 0. \end{aligned} \end{aligned}$$
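The strengthened arithmetic–geometric mean inequality above is easy to probe numerically. The following Python sketch (ours; the interval \([2.5,4.5]\) is an arbitrary choice satisfying both conditions of Corollary 1 with \(r=1\)) checks the weighted quotient-form bound.

```python
import numpy as np

a, b = 2.5, 4.5                     # [a, b] in [sqrt(6), inf), b - a <= sqrt(6)
h = b - a
lam = [lambda t: t,
       lambda t: t**3/6 - t/6,
       lambda t: t**5/120 - t**3/36 + 7*t/360]
Om2 = lambda t: sum(h**(2*k)*lam[k](t) for k in range(3))   # Omega_2 for r = 1

def J(g, x, p):
    P = p.sum()
    return p @ g(x) - P*g(p @ x / P)

rng = np.random.default_rng(3)
x = rng.uniform(a, b, 4)
p = rng.uniform(0.5, 2.0, 4)
P = p.sum()

M1 = p @ x / P                      # weighted arithmetic mean
M0 = np.exp(p @ np.log(x) / P)      # weighted geometric mean
xh = (x - a)/h
lhs = np.log(M1/M0)
rhs = (J(Om2, 1 - xh, p)/a**2 + J(Om2, xh, p)/b**2)/P
print(lhs, rhs)
assert lhs >= rhs >= 0
```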

The next case we deal with concerns the exponential function \(f(t)=e^{st}\). Since \(f^{(2n)}(t)=s^{2n}e^{st}\), it follows that f is 2n-convex for each \(n\in \mathbb {N}\). Hence, exactly as in the previous case, we apply Theorem 2 with \(n=3\). In addition, the associated coordinate transformation is

$$\begin{aligned} \textbf{x}=(x_1, x_2, \ldots , x_m)\rightarrow \log \textbf{x}=(\log x_1,\log x_2,\ldots , \log x_m). \end{aligned}$$

Now, with \(\widehat{\log \textbf{x}}=(\widehat{\log x_1},\widehat{\log x_2},\ldots ,\widehat{ \log x_m})\) and \(\widehat{\log \textbf{x}}{}^{*}=(\widehat{\log x_1}{}^{*},\widehat{\log x_2}{}^{*},\ldots ,\widehat{ \log x_m}{}^{*})\), where \(\widehat{\log x_i}=\frac{\log x_i-\log a}{\log b-\log a}\) and \(\widehat{\log x_i}{}^{*}=\frac{\log b -\log x_i}{\log b-\log a}\), we obtain more precise estimate between means \(M_{0}\left( \textbf{x}, \textbf{p} \right) \) and \(M_{s}\left( \textbf{x}, \textbf{p} \right) \), \(s\in [-1,1]\), in the so-called difference form.

Corollary 2

Let \(s\in [-1,1]\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(b\le e^{\sqrt{6}}a\). Then the following inequalities hold:

$$\begin{aligned} \begin{aligned} M_{s}^{s}\left( \textbf{x}, \textbf{p} \right) -M_{0}^{s}\left( \textbf{x}, \textbf{p} \right)&\ge \ \frac{s^{2}}{P_m}\left[ a^{s}\mathcal {J}_m(L_2, {{\widehat{\log \textbf{x}}}{}^{*}}, \textbf{p}) +b^{s}\mathcal {J}_m(L_2, {{\widehat{\log \textbf{x}}}}, \textbf{p})\right] \\&\ge \frac{s^{2}\min \{a^{s}, b^{s}\}}{P_m}\mathcal {J}_m(\overline{L}_2, {{\widehat{\log \textbf{x}}}}, \textbf{p})\ge 0, \end{aligned} \end{aligned}$$
(25)

where \(L_n(t)=\sum _{k=0}^{n}\log ^{2k}\!\frac{b}{a}\Lambda _k(t)\) and \(\overline{L}_n(t)=\sum _{k=0}^{n}\log ^{2k}\!\frac{b}{a}E_{2k}(t)\).

Proof

Again, the starting point is relation (20) for \(n=3\). Then, by putting \(f(t)=e^{st}\) and \(\log \textbf{x}=(\log x_1,\log x_2,\ldots , \log x_m)\), it follows that

$$\begin{aligned} \mathcal {J}_m(f, \log \textbf{x}, \textbf{p})=P_m\left[ M_{s}^{s}\left( \textbf{x}, \textbf{p} \right) -M_{0}^{s}\left( \textbf{x}, \textbf{p} \right) \right] . \end{aligned}$$

Moreover, since \(f^{(2n)}(t)=s^{2n}e^{st}\), we have that \(f^{(2)}(t)\ge f^{(4)}(t)\) for \(s\in [-1,1]\). Finally, conditions (18) and (19) are satisfied with \(\overline{a}=f^{(2)}(\log a)=s^{2}a^{s}\) and \(\overline{b}=f^{(2)}(\log b)=s^{2}b^{s}\), which provides (25), as claimed. \(\square \)

Remark 10

If \(s=1\), then (25) yields an improved arithmetic–geometric mean inequality in a difference form, while the case of \(s=-1\) provides the corresponding geometric-harmonic inequality in a difference form. Let us dwell a little longer on these two particular cases. Namely, if \(s=1\), then \(f(t)=e^{t}\), i.e. \(f^{(2n)}(t)=e^{t}\), which means that in this case Theorem 2 can be applied for an arbitrary odd n, since conditions (18) and (19) are trivially fulfilled with \(\overline{a}=a\) and \(\overline{b}=b\). Consequently, we have that

$$\begin{aligned} \begin{aligned} M_{1}\left( \textbf{x}, \textbf{p} \right) -M_{0}\left( \textbf{x}, \textbf{p} \right)&\ge \ \frac{1}{P_m}\left[ a\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}{}^{*}}, \textbf{p}) +b\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{p})\right] \\&\ge \frac{a}{P_m}\mathcal {J}_m(\overline{L}_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{p})\ge 0, \end{aligned} \end{aligned}$$
(26)

where n is a positive odd integer. A similar conclusion can be drawn for the case of \(s=-1\), i.e. the relation

$$\begin{aligned} \begin{aligned} M_{-1}^{-1}\left( \textbf{x}, \textbf{p} \right) -M_{0}^{-1}\left( \textbf{x}, \textbf{p} \right)&\ge \ \frac{1}{P_m}\left[ a^{-1}\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}{}^{*}}, \textbf{p}) +b^{-1}\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{p})\right] \\&\ge \frac{1}{bP_m}\mathcal {J}_m(\overline{L}_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{p})\ge 0 \end{aligned} \end{aligned}$$

holds for an arbitrary positive odd integer n.
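The difference-form refinement (26) can also be verified numerically. The following Python sketch (ours; data and interval are arbitrary, subject to \(b\le e^{\sqrt{6}}a\)) checks it for \(n=3\), i.e. with the polynomial \(L_2\).

```python
import numpy as np

a, b = 1.0, 4.0                     # b <= exp(sqrt(6)) * a
h = np.log(b/a)

lam = [lambda t: t,
       lambda t: t**3/6 - t/6,
       lambda t: t**5/120 - t**3/36 + 7*t/360]

def L2(t):
    # L_2(t) = sum_{k=0}^{2} log(b/a)^{2k} * Lambda_k(t)
    return sum(h**(2*k)*lam[k](t) for k in range(3))

def J(g, x, p):
    P = p.sum()
    return p @ g(x) - P*g(p @ x / P)

rng = np.random.default_rng(2)
x = rng.uniform(a, b, 5)
p = rng.uniform(0.5, 2.0, 5)
P = p.sum()

M1 = p @ x / P
M0 = np.exp(p @ np.log(x) / P)      # weighted geometric mean
lx = (np.log(x) - np.log(a))/h      # hat-transform of log x
bound = (a*J(L2, 1 - lx, p) + b*J(L2, lx, p))/P
print(M1 - M0, bound)
assert M1 - M0 >= bound >= 0        # refinement (26)
```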

Remark 11

In contrast to Remark 10, if \(|s|<1\), then the sequence \(f^{(2n)}(t)=s^{2n}e^{st}\) is decreasing in n. This means that conditions (18) and (19) cannot be fulfilled for \(f(t)=e^{st}\) when \(n\ge 5\), so the case of \(n=3\) is the best we can get in Corollary 2. A similar conclusion can be drawn for Corollary 1, since the sequence \(f^{(2n)}(t)=-\frac{(2n-1)!}{rt^{2n}}\) (i.e. when \(f(t)=\frac{1}{r}\log t\)) is decreasing or increasing, depending on whether \(r>0\) or \(r<0\).

Finally, it remains to consider the case when both parameters r and s in (21) differ from zero. Then, the corresponding power mean inequalities will be established via the power function \(f(t)=t^{\frac{s}{r}}\). Clearly, \( f^{(2n)}(t)=\prod _{k=1}^{2n} \left( \frac{s}{r}-k+1\right) t^{\frac{s}{r}-2n}, \) and so \( f^{(2n)}(t)\ge 0\) if \(\frac{s}{r}\in \overline{S}_n\), while \( f^{(2n)}(t)< 0\) if \(\frac{s}{r}\in {S}_n\), where \(S_n=\bigcup _{k=0}^{n-1}(2k, 2k+1)\) and \(\overline{S}_n\) denotes its complement (for more details, see [4]). In this setting, Theorem 2 can be applied both for \(n=2\) and \(n=3\). We start with the case \(n=3\) since it is closely related to Corollaries 1 and 2.

Corollary 3

Let \(\frac{s}{r}\in (-\infty ,0]\cup [1,2]\cup [3,4]\cup [5,\infty )\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\), where \(|b^{r}-a^{r}|\le \sqrt{6}\) and \(\min \{a^{r}, b^{r}\}\ge \frac{\sqrt{(s-2r)(s-3r)}}{|r|}\). Then the following inequalities hold:

$$\begin{aligned} M_{s}^{s}\left( \textbf{x}, \textbf{p} \right) -M_{r}^{s}\left( \textbf{x}, \textbf{p} \right)&\ge \frac{s(s-r)}{r^{2}P_m}\left[ a^{s-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r *}}}, \textbf{p}) +b^{s-2r}\mathcal {J}_m(\Omega _2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\right] \\&\ge \frac{s(s-r)\min \{a^{s-2r},b^{s-2r}\}}{r^{2}P_m}\mathcal {J}_m(\overline{\Omega }_2, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\ge 0, \end{aligned}$$
(27)

where \(\Omega _2\) and \(\overline{\Omega }_2\) are defined in Corollary 1.

Proof

We utilize Theorem 2 with \(f(t)=t^{\frac{s}{r}}\), \(\textbf{x}^{r}=(x_1^{r}, x_2^{r}, \ldots , x_m^{r})\), and \(n=3\). Then, the left-hand side of (20) reduces to

$$\begin{aligned} \mathcal {J}_m(f, \textbf{x}^{r}, \textbf{p})=\sum _{i=1}^m p_ix_i^{s}-P_m\left( \frac{1}{{P_m} }{\sum _{i=1}^m p_ix_i^{r}} \right) ^{\frac{s}{r}}=P_m\left[ M_{s}^{s}\left( \textbf{x}, \textbf{p} \right) -M_{r}^{s}\left( \textbf{x}, \textbf{p} \right) \right] . \end{aligned}$$

Moreover, since \(n=3\), it follows that the even derivatives \(f^{(2)}\), \(f^{(4)}\) and \(f^{(6)}\) are non-negative on \((0,\infty )\), provided that \(\frac{s}{r}\in (-\infty ,0]\cup [1,2]\cup [3,4]\cup [5,\infty )\). Further, since \(f^{(2)}(t)\ge f^{(4)}(t)\), \(t>0\), holds if and only if \(t\ge \frac{\sqrt{(s-2r)(s-3r)}}{|r|}\), conditions (18) and (19) are satisfied for \(\overline{a}=f^{(2)}(a^{r})=\frac{s(s-r)a^{s-2r}}{r^{2}}\) and \(\overline{b}=f^{(2)}(b^{r})=\frac{s(s-r)b^{s-2r}}{r^{2}}\), so we arrive at (27). \(\square \)

To complete our previous discussion, we also consider Theorem 2 for \(f(t)=t^{\frac{s}{r}}\) and \(n=2\). It turns out that the conditions under which the corresponding inequality holds can be significantly weakened. More precisely, the following result involves the polynomials \(\Omega _1(t)=\Lambda _0(t)+(b^{r}-a^{r})^{2}\Lambda _1(t)\) and \(\overline{\Omega }_1(t)=1+(b^{r}-a^{r})^{2}E_2(t)\), which are convex regardless of the parameters a, b, and r (see also Remark 1).

Corollary 4

Let \(\frac{s}{r}\in [2,3]\) and let \(\textbf{p}\in \mathbb {R}_+^{m}\), \(\textbf{x}\in [a,b]^{m}\). Then the following inequalities hold:

$$\begin{aligned} M_{s}^{s}\left( \textbf{x}, \textbf{p} \right) -M_{r}^{s}\left( \textbf{x}, \textbf{p} \right)&\ge \frac{s(s-r)}{r^{2}P_m}\left[ a^{s-2r}\mathcal {J}_m(\Omega _1, {{\hat{\textbf{x}}^{r *}}}, \textbf{p}) +b^{s-2r}\mathcal {J}_m(\Omega _1, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\right] \\&\ge \frac{s(s-r)\min \{a^{s-2r},b^{s-2r}\}}{r^{2}P_m}\mathcal {J}_m(\overline{\Omega }_1, {{\hat{\textbf{x}}^{r}}}, \textbf{p})\ge 0. \end{aligned}$$
(28)

Proof

This follows by putting \(n=2\) in (20) and by noting that \(f(t)=t^{\frac{s}{r}}\) is simultaneously convex and 4-concave if and only if \(\frac{s}{r}\in [2,3]\). \(\square \)

5 Application to the Hölder Inequality

We conclude this paper with a simple application of the arithmetic–geometric mean inequality (26) to the Hölder inequality. Let \((\Omega , \Sigma ,\mu )\) be a \(\sigma \)-finite measure space and let \(\sum _{i=1}^m \frac{1}{q_i}=1\), \(q_i>1\). If \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), are non-negative measurable functions, then the following inequality holds:

$$\begin{aligned} \int _{\Omega }\prod _{i=1}^m f_i(x)\textrm{d}\mu (x)\le \prod _{i=1}^m \Vert f_i\Vert _{q_i}. \end{aligned}$$
(29)

One way of proving the Hölder inequality is an application of the arithmetic-geometric mean inequality (for more details, see [12, 13]). Therefore, the strengthened arithmetic–geometric mean inequality (26) can be exploited in improving the Hölder inequality (29). However, it will be necessary to impose some extra conditions on the non-negative measurable functions \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), since the Lidstone interpolation refers to an interval \([a,b]\). Now, with the abbreviation

$$\begin{aligned} \textbf{f}_0^{\textbf{q}}(x)=\left( \frac{f_1^{q_1}(x)}{\Vert f_1\Vert _{q_1}^{q_1}}, \frac{f_2^{q_2}(x)}{\Vert f_2\Vert _{q_2}^{q_2}},\ldots , \frac{f_m^{q_m}(x)}{\Vert f_m\Vert _{q_m}^{q_m}} \right) , \end{aligned}$$

and accordingly,

$$\begin{aligned} \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}=\frac{1}{\log \frac{b}{a}}\left( \log \frac{f_1^{q_1}(x)}{a\Vert f_1\Vert _{q_1}^{q_1}}, \log \frac{f_2^{q_2}(x)}{a\Vert f_2\Vert _{q_2}^{q_2}},\ldots , \log \frac{f_m^{q_m}(x)}{a\Vert f_m\Vert _{q_m}^{q_m}} \right) \end{aligned}$$

and \(\widehat{\log \textbf{f}_0^{\textbf{q}}(x)}{}^{*}=\textbf{1}-\widehat{\log \textbf{f}_0^{\textbf{q}}(x)}\), we arrive at the following refinement of the Hölder inequality.

Theorem 3

Let \((\Omega , \Sigma ,\mu )\) be a \(\sigma \)-finite measure space and let \(\sum _{i=1}^m \frac{1}{q_i}=1\), \(q_i>1\), \(i=1,2,\ldots , m\). Further, suppose that \(f_i\in L^{q_i}(\Omega )\), \(i=1,2,\ldots , m\), are non-negative measurable functions such that

$$\begin{aligned} a^{\frac{1}{q_i}}\Vert f_i\Vert _{q_i}\le f_i(x)\le b^{\frac{1}{q_i}}\Vert f_i\Vert _{q_i},\ x\in \Omega ,\ i=1,2,\ldots ,m, \end{aligned}$$
(30)

where \(0<a<b\le e^{\sqrt{6}}a\). Then the inequalities

$$\begin{aligned} \begin{aligned}&\prod _{i=1}^m \Vert f_i\Vert _{q_i}-\int _{\Omega }\prod _{i=1}^m f_i(x)\textrm{d}\mu (x)\\&\ge \prod _{i=1}^m \Vert f_i\Vert _{q_i}\Big [ a\int _{\Omega }\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}{}^{*}, \textbf{q}^{-1})\textrm{d}\mu (x)\\&\qquad +b\int _{\Omega }\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\textrm{d}\mu (x)\Big ]\\&\ge a\prod _{i=1}^m \Vert f_i\Vert _{q_i}\int _{\Omega }\mathcal {J}_m(\overline{L}_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\textrm{d}\mu (x)\ge 0, \end{aligned} \end{aligned}$$
(31)

where the polynomials \(L_n\) and \(\overline{L}_n\) are defined in Corollary 2, hold for any positive odd integer n.

Proof

The Young form of relation (26) reads

$$\begin{aligned} \begin{aligned} \sum _{i=1}^m \frac{x_i}{q_i}-\prod _{i=1}^m x_i^{\frac{1}{q_i}}&\ge \ a\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}{}^{*}}, \textbf{q}^{-1}) +b\mathcal {J}_m(L_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{q}^{-1})\\&\ge a\mathcal {J}_m(\overline{L}_{n-1}, {{\widehat{\log \textbf{x}}}}, \textbf{q}^{-1})\ge 0, \end{aligned} \end{aligned}$$

where \(q_i=\frac{P_m}{p_i}\), \(\textbf{q}^{-1}=\big (\frac{1}{q_1},\frac{1}{q_2},\ldots , \frac{1}{q_m} \big )\) and \(\sum _{i=1}^m \frac{1}{q_i}=1\). Now, by putting \({f_i^{q_i}(x)}/{\Vert f_i\Vert _{q_i}^{q_i}}\), \(x\in \Omega \), instead of \(x_i\), \(i=1,2,\ldots , m\), in the above inequality, which is meaningful due to conditions in (30), we arrive at the inequalities

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^m\frac{f_i^{q_i}(x)}{q_i\Vert f_i\Vert _{q_i}^{q_i}}-\prod _{i=1}^m \frac{f_i(x)}{\Vert f_i\Vert _{q_i}}\\&\ge a\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}{}^{*}, \textbf{q}^{-1}) +b\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\\&\ge a\mathcal {J}_m(\overline{L}_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\ge 0. \end{aligned} \end{aligned}$$

In addition, integrating the last set of inequalities over \(\Omega \), with respect to measure \(\mu \), we have that

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^m\frac{1}{q_i}-\frac{\int _{\Omega }\prod _{i=1}^m f_i(x)\textrm{d}\mu (x)}{\prod _{i=1}^m \Vert f_i\Vert _{q_i}}\\&\ge a\int _{\Omega }\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}{}^{*}, \textbf{q}^{-1})\textrm{d}\mu (x)+b\int _{\Omega }\mathcal {J}_m(L_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\textrm{d}\mu (x)\\&\ge a\int _{\Omega }\mathcal {J}_m(\overline{L}_{n-1}, \widehat{\log \textbf{f}_0^{\textbf{q}}(x)}, \textbf{q}^{-1})\textrm{d}\mu (x)\ge 0, \end{aligned} \end{aligned}$$

which yields (31), due to \(\sum _{i=1}^m \frac{1}{q_i}=1\). \(\square \)
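The Young form of (26) used in the proof above can be checked numerically. The following Python sketch (ours; the conjugate exponents and sample points are arbitrary choices) verifies it for \(m=3\) with \(n=3\), i.e. with the polynomial \(L_2\).

```python
import numpy as np

a, b = 1.0, 4.0                     # b <= exp(sqrt(6)) * a
h = np.log(b/a)

lam = [lambda t: t,
       lambda t: t**3/6 - t/6,
       lambda t: t**5/120 - t**3/36 + 7*t/360]

def L2(t):
    # L_2(t) = sum_{k=0}^{2} log(b/a)^{2k} * Lambda_k(t)
    return sum(h**(2*k)*lam[k](t) for k in range(3))

def J(g, x, w):
    # Jensen functional with weights w summing to 1 (P_m = 1)
    return w @ g(x) - g(w @ x)

q = np.array([2.0, 3.0, 6.0])       # 1/2 + 1/3 + 1/6 = 1
w = 1.0/q
x = np.array([1.3, 2.5, 3.7])       # points in [a, b]
lx = (np.log(x) - np.log(a))/h      # hat-transform of log x

lhs = w @ x - np.prod(x**w)         # Young difference
rhs = a*J(L2, 1 - lx, w) + b*J(L2, lx, w)
print(lhs, rhs)
assert lhs >= rhs >= 0
```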