Keywords

Mathematics Subject Classification (2010) Primary 26A51; Secondary 26D10, 39B62

11.1 Introduction

In the present paper, we look at Hermite–Hadamard type inequalities from the perspective provided by the stochastic convex order. This approach is mainly due to de la Cal and Cárcamo. In the paper [12], the Hermite–Hadamard type inequalities are interpreted in terms of the convex stochastic ordering between random variables. Recently, also in [19, 32, 35–38, 40–42], the Hermite–Hadamard inequalities have been studied based on the convex ordering properties. Here, we want to draw the reader’s attention to some selected topics by presenting some theorems on the convex ordering that can be useful in the study of Hermite–Hadamard type inequalities.

The Ohlin lemma [31] on sufficient conditions for convex stochastic ordering was first used in [36] to give a simple proof of some known Hermite–Hadamard type inequalities as well as to obtain new Hermite–Hadamard type inequalities. In [32, 41, 42], the authors used the Levin–Stečkin theorem [25] to study Hermite–Hadamard type inequalities.

Many results on higher-order generalizations of the Hermite–Hadamard inequality can be found, among others, in [15, 16, 36, 37]. In the recent papers [36, 37], the theorem of Denuit, Lefèvre, and Shaked [13] was used to prove Hermite–Hadamard type inequalities for higher-order convex functions. The theorem of Denuit, Lefèvre, and Shaked [13] on sufficient conditions for s-convex ordering is a counterpart of the Ohlin lemma concerning convex ordering. A theorem on necessary and sufficient conditions for higher-order convex stochastic ordering, which is a counterpart of the Levin–Stečkin theorem [25] concerning convex stochastic ordering, is given in the paper [38]. Based on this theorem, useful criteria for the verification of higher-order convex stochastic ordering are given. These criteria can be useful in the study of Hermite–Hadamard type inequalities for higher-order convex functions, and in particular of inequalities between quadrature operators. They may be easier to apply when verifying higher-order convex orders than the criteria given in [13, 22].

In Section 11.2, we give simple proofs of known as well as new Hermite–Hadamard type inequalities, using Ohlin’s lemma and the Levin–Stečkin theorem.

In Sections 11.3 and 11.4, we study inequalities of the Hermite–Hadamard type involving numerical differentiation formulas of the first order and the second order, respectively.

In Section 11.5, we give simple proofs of Hermite–Hadamard type inequalities for higher-order convex functions, using the theorem of Denuit, Lefèvre, and Shaked, and a generalization of the Levin–Stečkin theorem to higher orders. These results are applied to derive some inequalities between quadrature operators.

11.2 Some Generalizations of the Hermite–Hadamard Inequality

Let \(f: [a,b] \rightarrow \mathbb{R}\) be a convex function (\(a,b \in \mathbb{R},\) a < b). The following double inequality

$$\displaystyle{ f\left (\frac{a + b} {2} \right ) \leq \frac{1} {b - a}\int _{a}^{b}f(x)\,dx \leq \frac{f(a) + f(b)} {2} }$$
(11.1)

is known as the Hermite–Hadamard inequality (see [16] for many generalizations and applications of (11.1)).

In many papers, the Hermite–Hadamard type inequalities are studied based on the convex stochastic ordering properties (see, for example, [19, 32, 35–37, 40, 41]). In the paper [36], the Ohlin lemma on sufficient conditions for convex stochastic ordering is used to give a simple proof of some known Hermite–Hadamard type inequalities as well as to obtain new Hermite–Hadamard type inequalities. Recently, the Ohlin lemma has also been used to study inequalities of the Hermite–Hadamard type for convex functions in [32, 35, 40, 41]. In [37], inequalities of the Hermite–Hadamard type for delta-convex functions are also studied by means of the Ohlin lemma. Furthermore, in the papers [32, 40, 41], the Levin–Stečkin theorem [25] (see also [30]) is used to examine Hermite–Hadamard type inequalities. This theorem gives necessary and sufficient conditions for the stochastic convex ordering.

Let us recall some basic notions and results on the stochastic convex order (see, for example, [13]). As usual, \(F_X\) denotes the distribution function of a random variable X and \(\mu_X\) is the distribution corresponding to X. For real-valued random variables X, Y with finite expectations, we say that X is dominated by Y in the convex ordering sense if

$$\displaystyle{\mathbb{E}f(X) \leq \mathbb{E}f(Y )}$$

for all convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}\) (for which the expectations exist). In that case, we write \(X \leq_{cx} Y\), or \(\mu_X \leq_{cx} \mu_Y\).

The following Ohlin lemma [31] gives sufficient conditions for convex stochastic ordering.

Lemma 11.1 (Ohlin [31])

Let X, Y be two random variables such that \(\mathbb{E}X = \mathbb{E}Y\) . If the distribution functions \(F_X\) , \(F_Y\) cross exactly once, i.e., for some \(x_0\) we have

$$\displaystyle{ F_{X}(x) \leq F_{Y }(x)\mathit{\mbox{ if }}x <x_{0}\quad \mathit{\mbox{ and}}\quad F_{X}(x) \geq F_{Y }(x)\mathit{\mbox{ if }}x> x_{0}, }$$

then

$$\displaystyle{ \mathbb{E}f(X) \leq \mathbb{E}f(Y ) }$$
(11.2)

for all convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}\) .

The inequality (11.1) may be easily proved with the use of the Ohlin lemma (see [36]). Indeed, let X, Y, Z be three random variables with the distributions \(\mu_X = \delta_{(a+b)/2}\), \(\mu_Y\) uniformly distributed on [a, b], and \(\mu _{Z} = \frac{1} {2}\,(\delta _{a} +\delta _{b})\), respectively. Then, it is easy to see that the pairs (X, Y ) and (Y, Z) satisfy the assumptions of the Ohlin lemma, and using (11.2), we obtain (11.1).
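
As a quick numerical illustration (ours, not part of the original argument in [36]), the convex-order chain behind (11.1) can be checked directly; the minimal Python sketch below uses the arbitrarily chosen convex test function f(t) = e^t and a grid approximation of the integral mean.

```python
import numpy as np

a, b = 0.0, 2.0
f = np.exp                      # an arbitrary convex test function

# mu_X = delta_{(a+b)/2}, mu_Y = uniform on [a, b], mu_Z = (delta_a + delta_b)/2;
# the distribution functions of (X, Y) and of (Y, Z) each cross exactly once.
t = np.linspace(a, b, 200001)
Ef_X = f((a + b) / 2)
Ef_Y = np.mean(f(t))            # grid approximation of the integral mean of f over [a, b]
Ef_Z = (f(a) + f(b)) / 2

print(Ef_X, Ef_Y, Ef_Z)
assert Ef_X <= Ef_Y <= Ef_Z     # the Hermite-Hadamard inequality (11.1)
```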

Let a < c < d < b. Let \(f: I \rightarrow \mathbb{R}\) be a convex function, \(a, b \in I\). Then (see [21]),

$$\displaystyle{ \frac{f(c) + f(d)} {2} \, - f\left (\frac{c + d} {2} \right ) \leq \frac{f(a) + f(b)} {2} - f\left (\frac{a + b} {2} \right ). }$$
(11.3)

To prove (11.3) from the Ohlin lemma, it suffices to take random variables X, Y (see [27]) with

$$\displaystyle\begin{array}{rcl} & \mu _{X} = \frac{1} {4}\,\left (\delta _{c} +\delta _{d}\right ) + \frac{1} {2}\delta _{(a+b)/2},& {}\\ & \mu _{Y } = \frac{1} {4}\,\left (\delta _{a} +\delta _{b}\right ) + \frac{1} {2}\delta _{(c+d)/2}.& {}\\ \end{array}$$

Then, by Lemma 11.1, we obtain

$$\displaystyle{ \frac{f(c) + f(d)} {2} \, + f\left (\frac{a + b} {2} \right ) \leq \frac{f(a) + f(b)} {2} + f\left (\frac{c + d} {2} \right ), }$$
(11.4)

which implies (11.3).

Similarly, one can prove the Popoviciu inequality

$$\displaystyle{ \frac{2} {3}\,\left [f\left (\frac{x + y} {2} \right ) + f\left (\frac{y + z} {2} \right ) + f\left (\frac{z + x} {2} \right )\right ] \leq \frac{f(x) + f(y) + f(z)} {3} + f\left (\frac{x + y + z} {3} \right ), }$$
(11.5)

where \(x, y, z \in I\) and \(f: I \rightarrow \mathbb{R}\) is a convex function. To prove (11.5) from the Ohlin lemma, it suffices (assuming \(x \leq y \leq z\)) to take random variables X, Y (see [27]) with

$$\displaystyle\begin{array}{rcl} & \mu _{X} = \frac{1} {4}\,\left (\delta _{(x+y)/2} +\delta _{(y+z)/2} +\delta _{(z+x)/2}\right ),& {}\\ & \mu _{Y } = \frac{1} {6}\,\left (\delta _{x} +\delta _{y} +\delta _{z}\right ) + \frac{1} {2}\delta _{(x+y+z)/3}. & {}\\ \end{array}$$

Convexity has a nice probabilistic characterization, known as Jensen’s inequality (see [6]).

Proposition 11.1 ([6])

A function \(f: (a,b) \rightarrow \mathbb{R}\) is convex if, and only if,

$$\displaystyle{ f(\mathbb{E}X) \leq \mathbb{E}f(X) }$$
(11.6)

for all (a, b)-valued integrable random variables X.

To prove (11.6) from the Ohlin lemma, it suffices to take a random variable Y (see [35]) with

$$\displaystyle{\mu _{Y } =\delta _{\mathbb{E}X},}$$

then we have

$$\displaystyle{ \mathbb{E}f(Y ) = f(\mathbb{E}X). }$$
(11.7)

By the Ohlin lemma, we obtain \(\mathbb{E}f(Y ) \leq \mathbb{E}f(X)\), which, taking into account (11.7), implies (11.6).

Remark 11.1

Note that in [29], the Ohlin lemma was used to obtain a solution of the problem of Raşa concerning inequalities for Bernstein operators.

In [17], Fejér gave a generalization of the inequality (11.1).

Proposition 11.2 ([17])

Let \(f: I \rightarrow \mathbb{R}\) be a convex function defined on a real interval I, a, bI with a < b and let \(g: [a,b] \rightarrow \mathbb{R}\) be nonnegative and symmetric with respect to the point (a + b)∕2 (the existence of integrals is assumed in all formulas). Then,

$$\displaystyle{ f\left (\frac{a + b} {2} \right ) \cdot \int _{a}^{b}g(x)\,dx \leq \int _{ a}^{b}f(x)g(x)\,dx \leq \frac{f(a) + f(b)} {2} \cdot \int _{a}^{b}g(x)\,dx. }$$
(11.8)

The double inequality (11.8) is known in the literature as the Fejér inequality or the Hermite–Hadamard-Fejér inequality (see [16, 28, 33] for the historical background).

Remark 11.2 ([36])

Using the Ohlin lemma (Lemma 11.1), we get a simple proof of (11.8). Let f and g satisfy the assumptions of Proposition 11.2 and assume that \(\int_a^b g(x)\,dx > 0\). Let X, Y, Z be three random variables such that \(\mu_X = \delta_{(a+b)/2}\), \(\mu_Y(dx) = \left(\int_a^b g(x)\,dx\right)^{-1}g(x)\,dx\), and \(\mu _{Z} = \frac{1} {2}\,(\delta _{a} +\delta _{b})\). Then, by Lemma 11.1, we obtain that \(X \leq_{cx} Y\) and \(Y \leq_{cx} Z\), which implies (11.8).
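
Remark 11.2 can likewise be probed numerically. The sketch below (ours) evaluates both sides of (11.8) for one arbitrary convex f and one nonnegative weight g symmetric with respect to (a + b)/2; any other admissible choices would do.

```python
import numpy as np

a, b = -1.0, 3.0
m = (a + b) / 2
f = lambda t: (t - 0.5) ** 2                              # arbitrary convex function
g = lambda t: 1.0 + np.cos(np.pi * (t - m) / (b - a))     # nonnegative, symmetric about m

t = np.linspace(a, b, 400001)
int_g = np.mean(g(t)) * (b - a)                # ~ integral of g over [a, b]
int_fg = np.mean(f(t) * g(t)) * (b - a)        # ~ integral of f*g over [a, b]

lower = f(m) * int_g
upper = (f(a) + f(b)) / 2 * int_g
print(lower, int_fg, upper)
assert lower <= int_fg <= upper                # the Fejer inequality (11.8)
```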

Remark 11.3

Note that for g(x) = w(x) such that \(\int_a^b w(x)\,dx = 1\), the inequality (11.8) can be rewritten in the form

$$\displaystyle{ f\left (\frac{a + b} {2} \,\right ) \leq \int _{a}^{b}f(x)w(x)dx \leq \frac{f(a) + f(b)} {2}. }$$
(11.9)

Conversely, the inequality (11.8) follows from the inequality (11.9). Indeed, if \(\int_a^b g(x)\,dx > 0\), it suffices to take \(w(x) = \left (\int _{a}^{b}g(x)dx\right )^{-1}g(x)\). If \(\int_a^b g(x)\,dx = 0\), then (11.8) is obvious.

For various modifications of (11.1) and (11.8), see, e.g., [3–5, 10, 11, 16], and the references given there.

As Fink noted in [18], one wonders what the symmetry has to do with the inequality (11.8) and if such an inequality holds for other functions (cf. [16, p. 53]).

As an immediate consequence of Lemma 11.1, we obtain the following theorem, which is a generalization of the Fejér inequality.

Theorem 11.1 ([36])

Let 0 < p < 1. Let \(f: I \rightarrow \mathbb{R}\) be a convex function, \(a, b \in I\) with a < b. Let μ be a finite measure on \(\mathcal{B}([a,b])\) such that: (i) \(\mu([a, pa + qb]) \leq pP_0\) , (ii) \(\mu((pa + qb, b]) \leq qP_0\) , and (iii) \(\int_{[a,b]} x\,\mu(dx) = (pa + qb)P_0\) , where q = 1 − p, \(P_0 = \mu([a, b])\). Then,

$$\displaystyle{ f(pa + qb)P_{0} \leq \int _{[a,b]}f(x)\mu (dx) \leq [pf(a) + qf(b)]P_{0}. }$$
(11.10)

Fink proved in [18] a general weighted version of the Hermite–Hadamard inequality. In particular, we have the following probabilistic version of this inequality.

Proposition 11.3 ([18])

Let X be a random variable taking values in the interval [a, b] such that m is the expectation of X and \(\mu_X\) is the distribution corresponding to X. Then, for every convex function \(f: [a,b] \rightarrow \mathbb{R}\),

$$\displaystyle{ f\left (m\right ) \leq \int _{a}^{b}f(x)\:\mu _{ X}(dx) \leq \frac{b - m} {b - a} \,f(a) + \frac{m - a} {b - a} f(b). }$$
(11.11)

Moreover, in [19] it was proved that, starting from such a fixed random variable X, we can fill the whole space between the Hermite–Hadamard bounds by highlighting some parametric families of random variables. The authors propose two alternative constructions based on the convex ordering properties.

In [35], based on Lemma 11.1, a very simple proof of Proposition 11.3 is given. Let X be a random variable satisfying the assumptions of Proposition 11.3. Let Y, Z be two random variables such that \(\mu_Y = \delta_m\) and \(\mu _{Z} = \frac{b-m} {b-a} \,\delta _{a} + \frac{m-a} {b-a} \delta _{b}\). Then, by Lemma 11.1, we obtain that \(Y \leq_{cx} X\) and \(X \leq_{cx} Z\), which implies (11.11).

In [36], some results related to the Brenner–Alzer inequality are given. In the paper [23] by Klaričić Bakula, Pečarić, and Perić, some improvements of various forms of the Hermite–Hadamard inequality can be found, namely those of Fejér, Lupas, Brenner–Alzer, and Beesack–Pečarić. These improvements imply the Hammer–Bullen inequality. In 1991, Brenner and Alzer [9] obtained the following result, generalizing Fejér’s result as well as the results of Vasić and Lacković [43] and Lupas [26] (see also [33]).

Proposition 11.4 ([9])

Let p, q be given positive numbers and \(a_1 \leq a < b \leq b_1\). Then, the inequalities

$$\displaystyle{ f\left (\frac{pa + qb} {p + q} \,\right ) \leq \frac{1} {2y}\int _{A-y}^{A+y}f(t)dt \leq \frac{pf(a) + qf(b)} {p + q} }$$
(11.12)

hold for \(A = \frac{pa+qb} {p+q} \,\) , y > 0, and all continuous convex functions \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) if, and only if,

$$\displaystyle{y \leq \frac{b - a} {p + q}\,\min \{p,q\}.}$$

Remark 11.4

It is known [33, p. 144] that, under the same conditions under which (11.12) holds, the following refinement of (11.12)

$$\displaystyle{ f\left (\frac{pa + qb} {p + q} \,\right ) \leq \frac{1} {2y}\int _{A-y}^{A+y}f(t)dt \leq \frac{1} {2}\left \{f(A - y) + f(A + y)\right \} \leq \frac{pf(a) + qf(b)} {p + q} }$$
(11.13)

holds.

In the following theorem, we give a generalization of the Brenner–Alzer inequalities (11.13), which we prove using the Ohlin lemma.

Theorem 11.2 ([36])

Let p, q be given positive numbers, \(a_1 \leq a < b \leq b_1\), \(0 <y \leq \frac{b-a} {p+q}\,\min \{p,q\}\), and let \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) be a convex function. Then,

$$\displaystyle\begin{array}{rcl} & f\left (\frac{pa+qb} {p+q} \,\right ) \leq & {}\\ & \frac{\alpha }{2}\,\left \{f(A - (1-\alpha )y) + f(A + (1-\alpha )y)\right \} + \frac{1} {2y}\int _{A-(1-\alpha )y}^{A+(1-\alpha )y}f(t)dt \leq & {}\\ & \frac{\alpha }{2n}\,\sum _{k=1}^{n}\left \{f\left (A - y + k\frac{\alpha y} {n}\right ) + f\left (A + y - k\frac{\alpha y} {n}\right )\right \} + \frac{1} {2y}\int _{A-(1-\alpha )y}^{A+(1-\alpha )y}f(t)dt \leq & {}\\ \end{array}$$
$$\displaystyle{ \frac{1} {2y}\,\int _{A-y}^{A+y}f(t)dt, }$$
(11.14)

where 0 ≤ α ≤ 1, n = 1, 2, …,

$$\displaystyle\begin{array}{rcl} \frac{1} {2y}\,\int _{A-y}^{A+y}f(t)dt& \leq & \frac{\beta } {2}\{f(A - y) + f(A + y)\} + (1-\beta ) \frac{1} {2y}\int _{A-y}^{A+y}f(t)dt \\ & \leq & \frac{1} {2}\,\{f(A - y) + f(A + y)\}, {}\end{array}$$
(11.15)

where 0 ≤ β ≤ 1,

$$\displaystyle\begin{array}{rcl} & \frac{1} {2}\,\{f(A - y) + f(A + y)\} \leq & {}\\ & (\frac{1} {2}\,-\gamma )\{f(A - y - c) + f(A + y + c)\} +\gamma \{ f(A - y) + f(A + y)\} \leq & {}\\ \end{array}$$
$$\displaystyle{ \frac{pf(a) + qf(b)} {p + q} \,, }$$
(11.16)

where c = min{b − (A + y), (A − y) − a} and \(\gamma = \left \vert \frac{1} {2}\, - p\right \vert\) .

To prove this theorem, it suffices to consider random variables X, Y, W, Z, \(\xi_n\), η, and λ such that:

$$\displaystyle\begin{array}{rcl} \mu _{X}& =& \delta _{\frac{pa+qb} {p+q} \,}, {}\\ \mu _{Y }(dx)& =& \frac{1} {2y}\,\chi _{[A-y,A+y]}(x)dx, {}\\ \mu _{Z}& =& \frac{p} {p + q}\,\delta _{a} + \frac{q} {p + q}\delta _{b},\mu _{W} = \frac{1} {2}\delta _{A-y} + \frac{1} {2}\delta _{A+y}, {}\\ \mu _{\xi _{n}}(dx)& =& \frac{\alpha } {2n}\,\sum _{k=1}^{n}\{\delta _{ A-y+k \frac{\alpha y} {n} } +\delta _{A+y-k \frac{\alpha y} {n} }\} + \frac{1} {2y}\chi _{[A-(1-\alpha )y,A+(1-\alpha )y]}(x)dx, {}\\ \mu _{\eta }(dx)& =& \frac{\beta } {2}\,\{\delta _{A-y} +\delta _{A+y}\} + \frac{1-\beta } {2y} \chi _{[A-y,A+y]}(x)dx, {}\\ \mu _{\lambda }& =& (\frac{1} {2}\,-\gamma )\{\delta _{A-y-c} +\delta _{A+y+c}\} +\gamma \{\delta _{A-y} +\delta _{A+y}\}. {}\\ \end{array}$$

Then, using the Ohlin lemma, we obtain:

  • \(X \leq_{cx} Y\), \(Y \leq_{cx} W\), and \(W \leq_{cx} Z\), which implies the inequalities (11.13),

  • \(X \leq_{cx} \xi_1\), \(\xi_1 \leq_{cx} \xi_n\), and \(\xi_n \leq_{cx} Y\), which implies (11.14),

  • \(Y \leq_{cx} \eta\) and \(\eta \leq_{cx} W\), which implies (11.15), and

  • \(W \leq_{cx} \lambda\) and \(\lambda \leq_{cx} Z\), which implies (11.16).

Theorem 11.3 ([36])

Let p, q be given positive numbers, 0 < α < 1, \(a_1 \leq a < b \leq b_1\), \(0 <y \leq \frac{b-a} {p+q}\,\min \{p,q\}\) and \(0 \leq \frac{\alpha } {1-\alpha }y \leq \frac{b-a} {p+q}\min \{p,q\}\) . Let \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) be a convex function. Then,

$$\displaystyle\begin{array}{rcl} f(A)& \leq & \frac{\alpha } {y}\,\int _{A-y}^{A}f(t)dt + \frac{(1-\alpha )^{2}} {\alpha y} \int _{A}^{A+ \frac{\alpha }{1-\alpha }y}f(t)dt \\ & \leq & \alpha f(A - y) + (1-\alpha )f(A + \frac{\alpha } {1-\alpha }\,y) \\ & \leq & \frac{p} {p + q}\,f(a) + \frac{q} {p + q}f(b), {}\end{array}$$
(11.17)

where \(A = \frac{pa+qb} {p+q} \,\) .

Let X, Y, Z, and W be random variables such that:

$$\displaystyle\begin{array}{rcl} \mu _{X}& =& \delta _{A}, {}\\ \mu _{Y }(dx)& =& \frac{\alpha } {y}\,\chi _{[A-y,A]}(x)dx + \frac{(1-\alpha )^{2}} {\alpha y} \chi _{[A,A+ \frac{\alpha }{ 1-\alpha }y]}(x)dx, {}\\ \mu _{W}& =& \alpha \delta _{A-y} + (1-\alpha )\delta _{A+ \frac{\alpha }{ 1-\alpha }\,y}, {}\\ \mu _{Z}& =& \frac{p} {p + q}\,\delta _{a} + \frac{q} {p + q}\delta _{b}. {}\\ \end{array}$$

Then, using the Ohlin lemma, we obtain \(X \leq_{cx} Y\), \(Y \leq_{cx} W\), and \(W \leq_{cx} Z\), which implies the inequalities (11.17).
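
As a numerical sanity check (ours, not taken from [36]), each term of the chain (11.17) can be evaluated for concrete data; the parameter values below are arbitrary but chosen so that the hypotheses of Theorem 11.3 hold.

```python
import numpy as np

f = np.exp                                     # arbitrary convex test function
a, b = 0.0, 1.0
p, q = 1.0, 2.0
alpha = 0.4
A = (p * a + q * b) / (p + q)
ymax = (b - a) / (p + q) * min(p, q)
y = 0.8 * min(ymax, (1 - alpha) / alpha * ymax)   # keeps both constraints of Theorem 11.3

def mean(lo, hi, n=200001):
    """Grid approximation of the integral mean of f over [lo, hi]."""
    return np.mean(f(np.linspace(lo, hi, n)))

term1 = f(A)
term2 = alpha * mean(A - y, A) + (1 - alpha) * mean(A, A + alpha / (1 - alpha) * y)
term3 = alpha * f(A - y) + (1 - alpha) * f(A + alpha / (1 - alpha) * y)
term4 = p / (p + q) * f(a) + q / (p + q) * f(b)
print(term1, term2, term3, term4)
assert term1 <= term2 <= term3 <= term4        # the chain (11.17)
```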

Remark 11.5

If we choose \(\alpha = \frac{1} {2}\,\) in Theorem 11.3, then the inequalities (11.17) reduce to the inequalities (11.15).

Remark 11.6

If we choose \(\alpha = \frac{p} {p+q}\,\) and y = (1 − p)z in Theorem 11.3, then we have

$$\displaystyle\begin{array}{rcl} f(A)& \leq & \frac{p} {qz}\,\int _{A- \frac{q} {p+q}z}^{A}f(t)dt + \frac{q} {pz}\int _{A}^{A+ \frac{p} {p+q}z}f(t)dt {}\\ & \leq & \frac{p} {p + q}\,f(A - \frac{q} {p + q}z) + \frac{q} {p + q}f(A + \frac{p} {p + q}z) {}\\ & \leq & \frac{p} {p + q}\,f(a) + \frac{q} {p + q}f(b), {}\\ \end{array}$$

where \(A = \frac{pa+qb} {p+q} \,\) and \(0 < z \leq b - a\).

In the paper [40], the author used Ohlin’s lemma to prove some new inequalities of the Hermite–Hadamard type, which are a generalization of known Hermite–Hadamard type inequalities.

Theorem 11.4 ([40])

The inequality

$$\displaystyle{ af(\alpha x + (1-\alpha )y) + (1 - a)f(\beta x + (1-\beta )y) \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt, }$$
(11.18)

with some a, α, β ∈ [0, 1], α > β is satisfied for all \(x,y \in \mathbb{R}\) and all continuous and convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if,

$$\displaystyle{ a\alpha + (1 - a)\beta = \frac{1} {2}, }$$
(11.19)

and one of the following conditions holds true:

  1. (i)

    a + α ≤ 1,

  2. (ii)

    a + β ≥ 1, and

  3. (iii)

    a + α > 1, a + β < 1, and a + 2α ≤ 2. 

Theorem 11.5 ([40])

Let a, b, c, α ∈ (0, 1) be numbers such that a + b + c = 1. Then, the inequality

$$\displaystyle{ af(x) + bf(\alpha x + (1-\alpha )y) + cf(y) \geq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt }$$
(11.20)

is satisfied for all \(x,y \in \mathbb{R}\) and all continuous and convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if,

$$\displaystyle{ b(1-\alpha ) + c = \frac{1} {2} }$$
(11.21)

and one of the following conditions holds true:

  1. (i)

    a + α ≥ 1, 

  2. (ii)

    a + b + α ≤ 1, and

  3. (iii)

    a + α < 1, a + b + α > 1, and 2a + α ≥ 1. 

Note that the original Hermite–Hadamard inequality consists of two parts. We treated these cases separately. However, it is possible to formulate a result containing both inequalities.

Corollary 11.1 ([40])

If a, α, β ∈ (0, 1) satisfy ( 11.19 ) and one of the conditions (i)–(iii) of Theorem  11.4 , then the inequality

$$\displaystyle\begin{array}{rcl} & af(\alpha x + (1 -\alpha )y) + (1 - a)f(\beta x + (1-\beta )y) \leq \frac{1} {y-x}\,\int _{x}^{y}f(t)dt \leq & {}\\ & (1-\alpha )f(x) + (\alpha -\beta )f(ax + (1 - a)y) +\beta f(y) & {}\\ \end{array}$$

is satisfied for all \(x,y \in \mathbb{R}\) and for all continuous and convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}.\)

As we can see, the Ohlin lemma is very useful; however, it is worth noticing that in the case of some inequalities, the distribution functions cross more than once. Therefore, a simple application of the Ohlin lemma is impossible.

In the papers [32, 41], the authors used the Levin–Stečkin theorem [25] (see also [30, Theorem 4.2.7]), which gives necessary and sufficient conditions for convex ordering of functions with bounded variation, which are distribution functions of signed measures.

Theorem 11.6 (Levin, Stečkin [25])

Let \(a,b \in \mathbb{R}\) , a < b and let \(F_{1},F_{2}: [a,b] \rightarrow \mathbb{R}\) be functions with bounded variation such that F 1(a) = F 2(a). Then, in order that

$$\displaystyle{ \int _{a}^{b}f(x)dF_{ 1}(x) \leq \int _{a}^{b}f(x)dF_{ 2}(x) }$$
(11.22)

for all continuous convex functions \(f: [a,b] \rightarrow \mathbb{R},\) it is necessary and sufficient that F 1 and F 2 verify the following three conditions:

$$\displaystyle\begin{array}{rcl} F_{1}(b)& =& F_{2}(b),{}\end{array}$$
(11.23)
$$\displaystyle\begin{array}{rcl} \int _{a}^{b}F_{ 1}(x)dx& =& \int _{a}^{b}F_{ 2}(x)dx,{}\end{array}$$
(11.24)
$$\displaystyle\begin{array}{rcl} \int _{a}^{x}F_{ 1}(t)dt& \leq & \int _{a}^{x}F_{ 2}(t)dt\quad \mathit{\mbox{ for all}}\quad x \in (a,b).{}\end{array}$$
(11.25)

Define the number of sign changes of a function \(\varphi: \mathbb{R} \rightarrow \mathbb{R}\) by

$$\displaystyle{S^{-}(\varphi ) = \mathop{\mathrm{sup}}\nolimits \{S^{-}[\varphi (x_{ 1}),\varphi (x_{2}),\ldots,\varphi (x_{k})]: x_{1} <x_{2} <\ldots <x_{k} \in \mathbb{R},\:k \in \mathbb{N}\},}$$

where \(S^{-}[y_1, y_2, \ldots, y_k]\) denotes the number of sign changes in the sequence \(y_1, y_2, \ldots, y_k\) (zero terms being discarded). Two real functions φ 1, φ 2 are said to have n crossing points (or to cross each other n times) if S −(φ 1 − φ 2) = n. Let a = x 0 < x 1 < … < x n < x n+1 = b. We say that the functions φ 1, φ 2 cross n times at the points x 1, x 2, …, x n (or that x 1, x 2, …, x n are the points of sign changes of φ 1 − φ 2) if S −(φ 1 − φ 2) = n and there exist a < ξ 1 < x 1 < … < ξ n < x n < ξ n+1 < b such that \(S^{-}[(\varphi_1 - \varphi_2)(\xi_1), (\varphi_1 - \varphi_2)(\xi_2), \ldots, (\varphi_1 - \varphi_2)(\xi_{n+1})] = n\).

Szostok [41] used Theorem 11.6 to make an observation which is more general than the Ohlin lemma and concerns the situation when the functions F 1 and F 2 have more than one crossing point. In [41], a useful modification of the Levin–Stečkin theorem [25] is given, which can be rewritten in the following form.

Lemma 11.2 ([41])

Let \(a,b \in \mathbb{R}\) , a < b, and let \(F_{1},F_{2}: (a,b) \rightarrow \mathbb{R}\) be functions with bounded variation such that F(a) = F(b) = 0 and \(\int_a^b F(x)\,dx = 0\), where \(F = F_2 - F_1\) . Let a < x 1 < … < x m < b be the points of sign changes of the function F. Assume that F(t) ≥ 0 for t ∈ (a, x 1).

  • If m is even, then the inequality

    $$\displaystyle{ \int _{a}^{b}f(x)dF_{ 1}(x) \leq \int _{a}^{b}f(x)dF_{ 2}(x) }$$
    (11.26)

    is not satisfied by all continuous convex functions \(f: [a,b] \rightarrow \mathbb{R}\) .

  • If m is odd, define \(A_i\) (i = 0, 1, …, m; \(x_0 = a\), \(x_{m+1} = b\)) by

    $$\displaystyle{A_{i} =\int _{x_{i}}^{x_{i+1}}\vert F(x)\vert dx.}$$

    Then, the inequality  (11.26) is satisfied for all continuous convex functions \(f: [a,b] \rightarrow \mathbb{R},\) if, and only if, the following inequalities hold true:

    $$\displaystyle{ \begin{array}{rl} A_{0} & \geq A_{1}, \\ A_{0} + A_{2} & \geq A_{1} + A_{3},\\ &\vdots \\ A_{0} + A_{2} +\ldots +A_{m-3} & \geq A_{1} + A_{3} +\ldots +A_{m-2}.\end{array} }$$
    (11.27)

Remark 11.7 ([38])

Let

$$\displaystyle{ H(x) =\int _{ a}^{x}F(t)dt. }$$

Then, the inequalities (11.27) are equivalent to the following inequalities

$$\displaystyle{ H(x_{2}) \geq 0,\:H(x_{4}) \geq 0,\:H(x_{6}) \geq 0,\:\ldots,\:H(x_{m-1}) \geq 0. }$$
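
The criterion of Lemma 11.2, in the form given in Remark 11.7, is straightforward to test numerically: locate the sign changes of F = F_2 − F_1 and check that H is nonnegative at the even-indexed ones. The helper below is a rough sketch of ours (the grid-based sign-change detection and all names are our own, not from [41]); it presupposes that F vanishes at the endpoints, has zero integral, and that the sign changes are transversal.

```python
import numpy as np

def convex_order_check(F, a, b, n=200001):
    """Grid-based check of the criterion of Lemma 11.2 / Remark 11.7 for F = F_2 - F_1.

    Assumes F(a) = F(b) = 0, that the integral of F over [a, b] vanishes and that the
    sign changes are transversal (none of this is verified here).  Returns the number m
    of sign changes found on the grid and the values H(x_2), H(x_4), ..., where
    H(x) = int_a^x F(t) dt.
    """
    t = np.linspace(a, b, n)
    v = F(t)
    dt = t[1] - t[0]
    H = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0) * dt))
    s = np.sign(v)
    s[s == 0] = 1.0
    idx = np.where(np.diff(s) != 0)[0]            # grid indices just before a sign change
    H_even = [H[i] for k, i in enumerate(idx, start=1) if k % 2 == 0]
    return len(idx), H_even

# Example: F_1 is the distribution function of 0.25*delta_0.2 + 0.5*delta_0.5 + 0.25*delta_0.8
# (a three-point rule with mean 1/2), F_2(t) = t is the uniform distribution function on [0, 1].
def F1(t):
    return np.select([t < 0.2, t < 0.5, t < 0.8], [0.0, 0.25, 0.75], default=1.0)

m, H_even = convex_order_check(lambda t: t - F1(t), 0.0, 1.0)
print(m, H_even)   # the three-point rule is dominated by the integral mean in the convex
                   # order iff m is odd and all printed H-values are nonnegative
```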

In [41], Lemma 11.2 is used to prove results that extend the inequalities (11.18) and (11.20), as well as inequalities between quadrature operators.

Theorem 11.7 ([41])

Let numbers a 1, a 2, a 3, α 1, α 2, α 3 ∈ (0, 1) satisfy a 1 + a 2 + a 3 = 1 and α 1 > α 2 > α 3. 

Then, the inequality

$$\displaystyle{ \sum _{i=1}^{3}a_{ i}f(\alpha _{i}x + (1 -\alpha _{i})y) \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt }$$
(11.28)

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, we have

$$\displaystyle{ \sum _{i=1}^{3}a_{ i}(1 -\alpha _{i}) = \frac{1} {2} }$$
(11.29)

and one of the following conditions is satisfied

  1. (i)

    a 1 ≤ 1 −α 1 and a 1 + a 2 ≥ 1 −α 3, 

  2. (ii)

    a 1 ≥ 1 −α 2 and a 1 + a 2 ≥ 1 −α 3, 

  3. (iii)

    a 1 ≤ 1 −α 1 and a 1 + a 2 ≤ 1 −α 2, 

  4. (iv)

    a 1 ≤ 1 −α 1, a 1 + a 2 ∈ (1 −α 2, 1 −α 3), and 2α 3 ≥ a 3, 

  5. (v)

    a 1 ≥ 1 −α 2, a 1 + a 2 < 1 −α 3 , and 2α 3 ≥ a 3, 

  6. (vi)

    a 1 > 1 −α 1, a 1 + a 2 ≤ 1 −α 2 , and \(1 -\alpha _{1} \geq \frac{a_{1}} {2} \,,\)

  7. (vii)

    a 1 ∈ (1 −α 1, 1 −α 2), a 1 + a 2 ≥ 1 −α 3, and \(1 -\alpha _{1} \geq \frac{a_{1}} {2} \,\) , and

  8. (viii)

    a 1 ∈ (1 −α 1, 1 −α 2), a 1 + a 2 ∈ (1 −α 2, 1 −α 3), \(1 -\alpha _{1} \geq \frac{a_{1}} {2} \,\) , and 2a 1(1 −α 1) + 2a 2(1 −α 2) ≥ (a 1 + a 2)2. 

To prove Theorem 11.7, we note that, if the inequality (11.28) is satisfied for every convex function f defined on the interval [0, 1], then it is satisfied by every convex function f defined on a given interval [x, y]. Therefore, without loss of generality, it suffices to consider the interval [0, 1] in place of [x, y]. 

To prove Theorem 11.7, we consider the functions \(F_{1},F_{2}: \mathbb{R} \rightarrow \mathbb{R}\) given by the following formulas

$$\displaystyle{ F_{1}(t):= \left \{\begin{array}{ll} 0, &\ t <1 -\alpha _{1}, \\ a_{1}, &\ t \in [1 -\alpha _{1},1 -\alpha _{2}), \\ a_{1} + a_{2},&\ t \in [1 -\alpha _{2},1 -\alpha _{3}), \\ 1, &\ t \geq 1 -\alpha _{3}, \end{array} \right. }$$
(11.30)

and

$$\displaystyle{ F_{2}(t):= \left \{\begin{array}{ll} 0,&\quad t <0, \\ t, &\quad t \in [0,1), \\ 1,&\quad t \geq 1. \end{array} \right. }$$
(11.31)

Observe that the equality (11.29) gives us

$$\displaystyle{\int _{0}^{1}tdF_{ 1}(t) =\int _{ 0}^{1}tdF_{ 2}(t).}$$

Further, it is easy to see that in the cases (i)–(iii) the pair (F 1, F 2) crosses exactly once and, consequently, the inequality (11.28) follows from the Ohlin lemma.

In the case (iv), the pair (F 1, F 2) crosses three times. Let A 0, …, A 3 be defined as in Lemma 11.2. In order to prove the inequality (11.28), we have to show that \(A_0 \geq A_1\). Since \(A_0 - A_1 + A_2 - A_3 = 0\), it suffices to show that \(A_2 \leq A_3\). We have

$$\displaystyle{A_{2} =\int _{ a_{1}+a_{2}}^{1-\alpha _{3} }(t - a_{1} - a_{2})dt = \frac{(1 -\alpha _{3} - a_{1} - a_{2})^{2}} {2} \, = \frac{a_{3}^{2} - 2a_{3}\alpha _{3} +\alpha _{ 3}^{2}} {2} }$$

and

$$\displaystyle{A_{3} =\int _{ 1-\alpha _{3}}^{1}(1 - t)dt = \frac{\alpha _{3}^{2}} {2} \,.}$$

This means that \(A_2 \leq A_3\) is equivalent to \(2\alpha_3 \geq a_3\), as claimed.

We omit similar proofs in the cases (v)–(vii) and we pass to the case (viii). In this case, the pair (F 1, F 2) crosses five times. We have

$$\displaystyle{A_{0} =\int _{ 0}^{1-\alpha _{1} }tdt = \frac{(1 -\alpha _{1})^{2}} {2} \,}$$

and

$$\displaystyle{A_{1} =\int _{ 1-\alpha _{1}}^{a_{1} }(a_{1} - t)dt = a_{1}(a_{1} - (1 -\alpha _{1})) -\frac{a_{1}^{2} - (1 -\alpha _{1})^{2}} {2} \, = \frac{[a_{1} - (1 -\alpha _{1})]^{2}} {2} \,.}$$

This means that the inequality A 0A 1 is satisfied if, and only if, \(1 -\alpha _{1} \geq \frac{a_{1}} {2} \,\).

Further,

$$\displaystyle{A_{2} =\int _{ a_{1}}^{1-\alpha _{2} }(t - a_{1})dt = \frac{(1 -\alpha _{2})^{2} - a_{1}^{2}} {2} \, - a_{1}(1 -\alpha _{2} - a_{1})}$$

and

$$\displaystyle{A_{3} =\int _{ 1-\alpha _{2}}^{a_{1}+a_{2} }(a_{1}+a_{2}-t)dt = (a_{1}+a_{2})(a_{1}+a_{2}-(1-\alpha _{2}))-\frac{(a_{1} + a_{2})^{2} - (1 -\alpha _{2})^{2}} {2} \,,}$$

therefore, the inequality A 0 + A 2A 3 + A 1 is satisfied if, and only if,

$$\displaystyle{(1 -\alpha _{1})^{2} + (1 -\alpha _{ 2} - a_{1})^{2} \geq (a_{ 1} - (1 -\alpha _{1}))^{2} + (a_{ 1} + a_{2} - 1 +\alpha _{2})^{2},}$$

which, after some calculations, gives us the last inequality from (viii).

Using assertions (i) and (vii) of Theorem 11.7, it is easy to get the following example.

Example 11.1 ([41])

Let \(x,y \in \mathbb{R},\) \(\alpha \in (\frac{1} {2},1)\), and a, b ∈ (0, 1) be such that 2a + b = 1. Then, the inequality

$$\displaystyle{ af(\alpha x + (1-\alpha )y) + bf\left (\frac{x + y} {2} \,\right ) + af((1-\alpha )x +\alpha y) \leq \frac{1} {y - x}\int _{x}^{y}f(t)dt }$$
(11.32)

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, a ≤ 2 − 2α. 
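
Since both sides of (11.32) have total mass 1 and barycenter (x + y)/2, the inequality holds for all convex f if, and only if, it holds for the angle functions f_c(t) = max(t − c, 0). The brute-force sketch below (ours, not from [41]) uses this to spot-check the threshold a = 2 − 2α of Example 11.1 at α = 0.8.

```python
import numpy as np

def gap(a_w, alpha, c, x=0.0, y=1.0, n=200001):
    """Right-hand side minus left-hand side of (11.32) for the angle f(t) = max(t - c, 0)."""
    b_w = 1.0 - 2.0 * a_w
    f = lambda t: np.maximum(t - c, 0.0)
    lhs = (a_w * f(alpha * x + (1 - alpha) * y)
           + b_w * f((x + y) / 2)
           + a_w * f((1 - alpha) * x + alpha * y))
    rhs = np.mean(f(np.linspace(x, y, n)))
    return rhs - lhs

alpha = 0.8                        # critical value of Example 11.1: a = 2 - 2*alpha = 0.4
cs = np.linspace(0.0, 1.0, 501)
below = min(gap(0.39, alpha, c) for c in cs)   # a below the threshold
above = min(gap(0.45, alpha, c) for c in cs)   # a above the threshold
print(below, above)                # expected: below >= 0, above < 0
```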

In the next theorem, we obtain inequalities that extend the second of the Hermite–Hadamard inequalities.

Theorem 11.8 ([41])

Let numbers a 1, a 2, a 3, a 4 ∈ (0, 1), α 1, α 2, α 3, α 4 ∈ [0, 1] satisfy a 1 + a 2 + a 3 + a 4 = 1 and 1 = α 1 > α 2 > α 3 > α 4 = 0.

Then, the inequality

$$\displaystyle{ \sum _{i=1}^{4}a_{ i}f(\alpha _{i}x + (1 -\alpha _{i})y) \geq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt }$$
(11.33)

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, we have

$$\displaystyle{ \sum _{i=1}^{4}a_{ i}(1 -\alpha _{i}) = \frac{1} {2} }$$
(11.34)

and one of the following conditions is satisfied:

  1. (i)

    a 1 ≥ 1 −α 2 and a 1 + a 2 ≥ 1 −α 3, 

  2. (ii)

    a 1 + a 2 ≤ 1 −α 2 and a 1 + a 2 + a 3 ≤ 1 −α 3, 

  3. (iii)

    1 −α 2 ≤ a 1 and 1 −α 3 ≥ a 1 + a 2 + a 3, 

  4. (iv)

    1 −α 2 ≤ a 1, 1 −α 3 ∈ (a 1 + a 2, a 1 + a 2 + a 3), and α 3 ≤ 2a 4, 

  5. (v)

    1 −α 2 ≥ a 1 + a 2, a 1 + a 2 + a 3 > 1 −α 3 , and α 3 ≤ 2a 4, 

  6. (vi)

    a 1 < 1 −α 2, a 1 + a 2 ≥ 1 −α 3 , and 2a 1 + α 2 ≥ 1, 

  7. (vii)

    a 1 < 1 −α 2, a 1 + a 2 > 1 −α 2, a 1 + a 2 + a 3 ≤ 1 −α 3 , and 2a 1 + α 2 ≥ 1, 

  8. (viii)

    1 −α 2 ∈ (a 1, a 1 + a 2), 1 −α 3 ∈ (a 1 + a 2, a 1 + a 2 + a 3), 2a 1 + α 2 ≥ 1, and 2a 1(1 −α 3) + 2a 2(α 2α 3) ≥ (1 −α 3)2. 

To prove Theorem 11.8, we assume that \(F_{1}: \mathbb{R} \rightarrow \mathbb{R}\) is the function given by the following formula

$$\displaystyle{ F_{1}(t):= \left \{\begin{array}{ll} 0, &\ t <0, \\ a_{1}, &\ t \in [0,1 -\alpha _{2}), \\ a_{1} + a_{2}, &\ t \in [1 -\alpha _{2},1 -\alpha _{3}), \\ a_{1} + a_{2} + a_{3},&\ t \in [1 -\alpha _{3},1), \\ 1, &\ t \geq 1. \end{array} \right. }$$
(11.35)

and let F 2 be the function given by (11.31). In view of (11.34), we have

$$\displaystyle{\int _{0}^{1}F_{ 1}(t)dt =\int _{ 0}^{1}F_{ 2}(t)dt.}$$

In cases (i)–(iii), there is only one crossing point of (F 2, F 1) and our assertion is a consequence of the Ohlin lemma.

In the cases (iv)–(vii), the pair (F 2, F 1) crosses three times and, therefore, we have to use Lemma 11.2.

In the case (iv), the inequality (11.33) is satisfied by all convex functions f if, and only if, \(A_0 \geq A_1\). Further, we know that

$$\displaystyle{A_{0} - A_{1} + A_{2} - A_{3} = 0,}$$

which implies that the inequality \(A_0 \geq A_1\) is equivalent to \(A_3 \geq A_2\). Clearly, we have

$$\displaystyle{ \begin{array}{rl} A_{2} =&\int _{1-\alpha _{3}}^{1-a_{4}}(F_{1}(t) - F_{2}(t))dt = (\alpha _{3} - a_{4})(1 - a_{4}) -\frac{(1-a_{4})^{2}-(1-\alpha _{ 3})^{2}} {2} \, \\ =&\frac{(\alpha _{3} - a_{4})^{2}} {2} \end{array} }$$
(11.36)

and

$$\displaystyle{ A_{3} =\int _{ 1-a_{4}}^{1}(t - (1 - a_{ 4}))dt = \frac{1 - (1 - a_{4})^{2}} {2} \, - (1 - a_{4})a_{4} }$$
(11.37)

that is, \(A_3 \geq A_2\) is equivalent to α 3 ≤ 2a 4.

We omit similar reasoning in the cases (v)–(vii) and we pass to the most interesting case (viii). In this case, (F 2, F 1) has five crossing points and, therefore, we must check that the inequalities

$$\displaystyle{A_{0} \geq A_{1}\;\;\mathrm{and}\;\;A_{0} - A_{1} + A_{2} \geq A_{3}}$$

are equivalent to the inequalities of the condition (viii), respectively. To this end, we write

$$\displaystyle\begin{array}{rcl} & A_{0} =\int _{ 0}^{a_{1}}(a_{1} - t)dt = \frac{a_{1}^{2}} {2} \,, & {}\\ & A_{1} =\int _{ a_{1}}^{1-\alpha _{2}}(t - a_{1})dt = \frac{(1 -\alpha _{2} - a_{1})^{2}} {2} \,,& {}\\ \end{array}$$

which means that \(A_0 \geq A_1\) if, and only if, 2a 1 + α 2 ≥ 1. Further, A 2 and A 3 are computed similarly to (11.36) and (11.37). Thus, the inequality \(A_0 - A_1 + A_2 \geq A_3\) is equivalent to

$$\displaystyle{a_{1}^{2} + (a_{ 1} + a_{2} - (1 -\alpha _{2}))^{2} \geq (1 -\alpha _{ 2} - a_{1})^{2} + (1 -\alpha _{ 3} - a_{1} - a_{2})^{2},}$$

which yields

$$\displaystyle{2a_{1}(1 -\alpha _{3}) + 2a_{2}(\alpha _{2} -\alpha _{3}) \geq (1 -\alpha _{3})^{2}.}$$

Using assertions (ii) and (vii) of Theorem 11.8, we get the following example.

Example 11.2 ([41])

Let \(x,y \in \mathbb{R},\) let \(\alpha \in (\frac{1} {2},1)\), and let a, b ∈ (0, 1) be such that 2a + 2b = 1. Then, the inequality

$$\displaystyle{af(x) + bf(\alpha x + (1-\alpha )y) + bf\left ((1-\alpha )x +\alpha y\right ) + af(y) \geq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt}$$

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, \(a \geq \frac{1-\alpha } {2} \,.\)

In the next theorem, we show that the same tools may be used to obtain some inequalities between quadrature operators, which do not involve the integral mean.

Theorem 11.9 ([41])

Let a, α 1, α 2, β ∈ (0, 1) and let b 1, b 2, b 3 ∈ (0, 1) satisfy b 1 + b 2 + b 3 = 1. 

Then, the inequality

$$\displaystyle{ af(\alpha _{1}x + (1 -\alpha _{1})y) + (1 - a)f(\alpha _{2}x + (1 -\alpha _{2})y) \leq }$$
$$\displaystyle{ b_{1}f(x) + b_{2}f(\beta x + (1-\beta )y) + b_{3}f(y) }$$
(11.38)

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, we have

$$\displaystyle{ b_{2}(1-\beta ) + b_{3} = a(1 -\alpha _{1}) + (1 - a)(1 -\alpha _{2}) }$$
(11.39)

and one of the following conditions is satisfied:

  1. (i)

    a ≤ b 1, 

  2. (ii)

    a ≥ b 1 + b 2, and

  3. (iii)

    α 2 ≥ β

or

  1. (iv)

    a ∈ (b 1, b 1 + b 2), α 2 < β, and (1 −α 1)b 1 ≥ (α 1 −β)(a − b 1). 

Now, using this theorem, we shall present positive and negative examples of inequalities of the type (11.38).

Example 11.3 ([41])

Let \(\alpha \in \left (\frac{1} {2},1\right ).\) The inequality

$$\displaystyle{\frac{f(\alpha x + (1-\alpha )y) + f((1-\alpha )x +\alpha y)} {2} \, \leq \frac{f(x) + f\left (\frac{x+y} {2} \right ) + f(y)} {3} }$$

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, \(\alpha \leq \frac{5} {6}.\)

Example 11.4 ([41])

Let \(\alpha \in \left (\frac{1} {2},1\right ).\) The inequality

$$\displaystyle{\frac{f(\alpha x + (1-\alpha )y) + f((1-\alpha )x +\alpha y)} {2} \, \leq \frac{1} {6}f(x) + \frac{2} {3}f\left (\frac{x + y} {2} \right ) + \frac{1} {6}f(y)}$$

is satisfied by all convex functions \(f: [x,y] \rightarrow \mathbb{R}\) if, and only if, \(\alpha \leq \frac{2} {3}.\)
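
Examples 11.3 and 11.4 compare two discrete quadrature formulas with equal weight sums and equal barycenters, so again it suffices to test the angle functions t ↦ max(t − c, 0). The helper below is a sketch of ours (all names are ours, not code from [41]) that spot-checks both thresholds.

```python
import numpy as np

def convex_dominated(nodes1, w1, nodes2, w2, grid=np.linspace(0.0, 1.0, 2001)):
    """Spot-check whether sum(w1 * f(nodes1)) <= sum(w2 * f(nodes2)) for all convex f on [0, 1].

    Assumes equal weight sums and equal barycenters, so it is enough (up to the finite
    grid of c values) to test the angle functions f_c(t) = max(t - c, 0).
    """
    n1, n2 = np.asarray(nodes1), np.asarray(nodes2)
    w1, w2 = np.asarray(w1), np.asarray(w2)
    gaps = [np.dot(w2, np.maximum(n2 - c, 0.0)) - np.dot(w1, np.maximum(n1 - c, 0.0))
            for c in grid]
    return min(gaps) >= -1e-12

# Example 11.3 on [x, y] = [0, 1]: the threshold is alpha <= 5/6
for alpha in (0.83, 0.84):
    print(alpha, convex_dominated([1 - alpha, alpha], [0.5, 0.5],
                                  [0.0, 0.5, 1.0], [1/3, 1/3, 1/3]))

# Example 11.4 on [0, 1]: the threshold is alpha <= 2/3
for alpha in (0.66, 0.67):
    print(alpha, convex_dominated([1 - alpha, alpha], [0.5, 0.5],
                                  [0.0, 0.5, 1.0], [1/6, 2/3, 1/6]))
```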

11.3 Inequalities of the Hermite–Hadamard Type Involving Numerical Differentiation Formulas of the First Order

In the paper [32], expressions connected with numerical differentiation formulas of order 1 are studied. The authors used the Ohlin lemma and the Levin–Stečkin theorem to study inequalities of the Hermite–Hadamard type connected with these expressions.

First, we recall the classical Hermite–Hadamard inequality

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{1} {y - x}\int _{x}^{y}f(t)dt \leq \frac{f(x) + f(y)} {2}. }$$
(11.40)

Now, let us write (11.40) in the form

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{F(y) - F(x)} {y - x} \leq \frac{f(x) + f(y)} {2}. }$$
(11.41)

Clearly, this inequality is satisfied by every convex function f and its primitive function F. However, (11.41) may be viewed as an inequality involving two types of expressions, used in numerical integration and in numerical differentiation, respectively. Namely, \(f\left (\frac{x+y} {2} \,\right )\) and \(\frac{f(x)+f(y)} {2}\) are the simplest quadrature formulas used to approximate the definite integral, whereas \(\frac{F(y)-F(x)} {y-x} \,\) is the simplest expression used to approximate the derivative of F. Moreover, as is known from numerical analysis, if F′ = f, then the following equality is satisfied

$$\displaystyle{ f(x) = \frac{F(x + h) - F(x - h)} {2h} \, -\frac{h^{2}} {6} f''(\xi ) }$$
(11.42)

for some ξ ∈ (x − h, x + h). This means that (11.42) provides an alternative proof of the first inequality in (11.41) (for twice differentiable f).
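
The reading of (11.41) as “quadrature versus numerical differentiation” is easy to illustrate; the sketch below (ours) compares the midpoint value, the difference quotient of an antiderivative, and the trapezoidal value for an arbitrarily chosen convex f, and then looks at the error term from (11.42).

```python
import numpy as np

f = np.exp                       # arbitrary convex function
F = np.exp                       # an antiderivative of f (F' = f)
x, y = 0.0, 1.5

midpoint  = f((x + y) / 2)                    # left end of (11.41)
diff_quot = (F(y) - F(x)) / (y - x)           # difference quotient of F, cf. (11.42) with h=(y-x)/2
trapezoid = (f(x) + f(y)) / 2                 # right end of (11.41)
print(midpoint, diff_quot, trapezoid)
assert midpoint <= diff_quot <= trapezoid     # inequality (11.41)

# By (11.42), diff_quot - f((x+y)/2) = h**2/6 * f''(xi) >= 0 for convex, twice
# differentiable f; for f = exp the two quantities below should be of the same size.
h, m = (y - x) / 2, (x + y) / 2
print(diff_quot - f(m), h**2 / 6 * f(m))      # f'' = f for the exponential function
```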

This new formulation of the Hermite–Hadamard inequality was the inspiration in [32] for replacing the middle term of the Hermite–Hadamard inequality by expressions more complicated than those used in (11.40). In [32], the authors study inequalities of the form

$$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{a_{1}F(x) + a_{2}F(\alpha x + (1-\alpha )y) + a_{3}F(\beta x + (1-\beta )y) + a_{4}F(y)} {y - x} }$$

and

$$\displaystyle{\frac{a_{1}F(x) + a_{2}F(\alpha x + (1-\alpha )y) + a_{3}F(\beta x + (1-\beta )y) + a_{4}F(y)} {y - x} \, \leq \frac{f(x) + f(y)} {2},}$$

where \(f: [x,y] \rightarrow \mathbb{R}\) is a convex function, F′ = f, α, β ∈ (0, 1), and a 1 + a 2 + a 3 + a 4 = 0. 

Proposition 11.5 ([32])

Let \(n \in \mathbb{N},\) α i ∈ (0, 1), \(a_{i} \in \mathbb{R}\) , i = 1, …, n, be such that α 1 > α 2 > … > α n and a 1 + a 2 + … + a n = 0, and let F be a differentiable function with F′ = f. Then,

$$\displaystyle{\frac{\sum _{i=1}^{n}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, =\int fd\mu,}$$

with

$$\displaystyle{\mu (A) = - \frac{1} {y - x}\,\sum _{i=1}^{n-1}(a_{ 1} + \cdots + a_{i})l_{1}(A \cap [\alpha _{i}x + (1 -\alpha _{i})y,\alpha _{i+1}x + (1 -\alpha _{i+1})y]),}$$

where l 1 stands for the one-dimensional Lebesgue measure.

Remark 11.8 ([32])

Taking \(F_1(t):= \mu((-\infty, t])\) with μ from Proposition 11.5, we can see that

$$\displaystyle{ \frac{\sum _{i=1}^{n}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, =\int fdF_{1}. }$$
(11.43)
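
The representation of Proposition 11.5 and Remark 11.8 can be verified numerically for concrete data; the snippet below (a sketch of ours) builds the piecewise-constant density −(a_1 + ⋯ + a_i)/(y − x) and compares ∫ f dμ with the left-hand side of (11.43). The particular coefficients and nodes are arbitrary, subject only to the condition that the coefficients sum to zero.

```python
import numpy as np

# data: nodes alpha_1 > ... > alpha_n in (0, 1) and coefficients with sum(a_i) = 0
alpha = np.array([0.9, 0.6, 0.3, 0.1])
a = np.array([1.0, -3.0, 3.0, -1.0])
x, y = 0.0, 2.0
f, F = np.exp, np.exp                          # arbitrary convex f and an antiderivative

z = alpha * x + (1 - alpha) * y                # increasing nodes in [x, y]
lhs = np.dot(a, F(z)) / (y - x)                # left-hand side of (11.43)

rhs = 0.0                                      # integral of f against mu from Proposition 11.5
for i in range(len(a) - 1):
    density = -np.sum(a[: i + 1]) / (y - x)    # constant density of mu on [z_i, z_{i+1}]
    t = np.linspace(z[i], z[i + 1], 20001)
    rhs += density * np.mean(f(t)) * (z[i + 1] - z[i])

print(lhs, rhs)                                # the two values agree up to quadrature error
```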

The next proposition shows that, in order to get some inequalities of the Hermite–Hadamard type, we have to use sums containing more than three summands.

Proposition 11.6 ([32])

There are no numbers \(\alpha _{i},a_{i} \in \mathbb{R},i = 1,2,3\) , satisfying 1 = α 1 > α 2 > α 3 = 0 such that any of the inequalities

$$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{\sum _{i=1}^{3}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} }$$

or

$$\displaystyle{\frac{\sum _{i=1}^{3}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, \leq \frac{f(x) + f(y)} {2} }$$

is fulfilled by every continuous and convex function f and its antiderivative F. 

To prove Proposition 11.6, we note that by Proposition 11.5, we can see that

$$\displaystyle{\frac{\sum _{i=1}^{3}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, =\int _{ x}^{y}fd\mu,}$$

with

$$\displaystyle\begin{array}{rcl} & \mu (A) = - \frac{1} {y-x}\,\big(a_{1}l_{1}(A \cap [x,\alpha _{2}x + (1 -\alpha _{2})y])+& {}\\ & (a_{2} + a_{1})l_{1}(A \cap [\alpha _{2}x + (1 -\alpha _{2})y,y])\big), & {}\\ \end{array}$$

and

$$\displaystyle{\frac{\sum _{i=1}^{3}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, =\int _{ x}^{y}f(t)dF_{ 1}(t),}$$

where

$$\displaystyle{ F_{1}(t) =\mu \{ (-\infty,t]\}. }$$
(11.44)

Now, if

$$\displaystyle{F_{2}(t) = \frac{1} {y - x}\,l_{1}\{(-\infty,t] \cap [x,y]\},}$$

then F 1 lies strictly above or below F 2 (on [x, y]). This means that

$$\displaystyle{ \int _{x}^{y}F_{ 2}(t)dt\neq \int _{x}^{y}F_{ 1}(t)dt. }$$
(11.45)

But, on the other hand, if

$$\displaystyle{ F_{3}(t):= \left \{\begin{array}{ll} 0,\quad &t <x, \\ \frac{1} {2}\,,\ &t \in [x,y),\\ 1,\ &t \geq y, \end{array} \right. }$$
(11.46)

and

$$\displaystyle{ F_{4}(t):= \left \{\begin{array}{ll} 0,\quad &t <\frac{x+y} {2} \,, \\ 1,\quad &t \geq \frac{x+y} {2} \,, \end{array} \right. }$$
(11.47)

then

$$\displaystyle{\int _{x}^{y}F_{ 2}(t)dt =\int _{ x}^{y}F_{ 3}(t)dt =\int _{ x}^{y}F_{ 4}(t)dt = \frac{y - x} {2} \,.}$$

This, together with (11.45), shows that neither

$$\displaystyle{\int _{x}^{y}fdF_{ 2} \leq \int _{x}^{y}fdF_{ 3}}$$

nor

$$\displaystyle{\int _{x}^{y}fdF_{ 2} \geq \int _{x}^{y}fdF_{ 4}}$$

is satisfied. To complete the proof, it suffices to observe that

$$\displaystyle\begin{array}{rcl} & \int _{x}^{y}fdF_{3} = \frac{f(x)+f(y)} {2} \,,\;& {}\\ & \int _{x}^{y}fdF_{4} = f\left (\frac{x+y} {2} \,\right ). & {}\\ \end{array}$$

Remark 11.9 ([32])

Observe that the assumptions of Proposition 11.6, α 1 = 1 and α 3 = 0, are essential. For example, it follows from the Ohlin lemma that the inequality

$$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{-3F(\frac{3} {4}x + \frac{1} {4}y) + \frac{25} {11}F(\frac{11} {20}x + \frac{9} {20}y) + \frac{8} {11}F(y)} {y - x} \leq \frac{1} {y - x}\int f(t)dt}$$

is satisfied by all continuous and convex functions f (where F′ = f). Clearly, there are many more examples of inequalities of this type.

Lemma 11.3 ([32])

If any of the inequalities

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{\sum _{i=1}^{4}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} }$$
(11.48)

or

$$\displaystyle{ \frac{\sum _{i=1}^{4}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, \leq \frac{f(x) + f(y)} {2} }$$
(11.49)

is satisfied for all continuous and convex functions \(f: [x,y] \rightarrow \mathbb{R}\) (where F′ = f), then

$$\displaystyle{ a_{1}(\alpha _{2} -\alpha _{1}) + (a_{2} + a_{1})(\alpha _{3} -\alpha _{2}) + (a_{3} + a_{2} + a_{1})(\alpha _{4} -\alpha _{3}) = 1 }$$
(11.50)

and

$$\displaystyle{ a_{1}(\alpha _{2}^{2} -\alpha _{ 1}^{2}) + (a_{ 2} + a_{1})(\alpha _{3}^{2} -\alpha _{ 2}^{2}) + (a_{ 3} + a_{2} + a_{1})(\alpha _{4}^{2} -\alpha _{ 3}^{2}) = 1. }$$
(11.51)

To prove this lemma, we take x = 0, y = 1. Then, using Proposition 11.5, we can see that

$$\displaystyle\begin{array}{rcl} & \sum _{i=1}^{4}a_{i}F(1 -\alpha _{i}) =\int _{ 0}^{1}fd\mu = -a_{1}\int _{1-\alpha _{1}}^{1-\alpha _{2}}f(x)dx & {}\\ & -(a_{1} + a_{2})\int _{1-\alpha _{2}}^{1-\alpha _{3}}f(x)dx - (a_{1} + a_{2} + a_{3})\int _{1-\alpha _{3}}^{1-\alpha _{4}}f(x)dx.& {}\\ \end{array}$$

Now, we consider the functions F 1, F 3, and F 4 given by the formulas (11.44), (11.46), and (11.47), respectively. Then, the inequalities (11.48) and (11.49) may be written in the form

$$\displaystyle{\int fdF_{4} \leq \int fdF_{1}}$$

and

$$\displaystyle{\int fdF_{1} \leq \int fdF_{3}.}$$

This means that, if, for example, the inequality (11.48) is satisfied, then we have F 1(1) = F 4(1) = 1, which yields (11.50). Further,

$$\displaystyle{\int _{0}^{1}F_{ 1}(t)dt =\int _{ 0}^{1}F_{ 4}(t)dt = \frac{1} {2},}$$

which gives us (11.51).

Proposition 11.7 ([32])

Let α i ∈ (0, 1), \(a_{i} \in \mathbb{R}\) , i = 1, , 4, be such that 1 = α 1 > α 2 > α 3 > α 4 = 0, a 1 + a 2 + a 3 + a 4 = 0, and the equalities  (11.50) and  (11.51) are satisfied. If F 1 is such that

$$\displaystyle{\frac{\sum _{i=1}^{4}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, =\int _{ x}^{y}fdF_{ 1}}$$

and F 2 is the distribution function of a measure which is uniformly distributed in the interval [x, y], then (F 1, F 2) crosses exactly once.

Indeed, from (11.50) we can see that F 1(x) = F 2(x) = 0 and F 1(y) = F 2(y) = 1. Note that, in view of Proposition 11.5, the graph of the restriction of F 1 to the interval [x, y] consists of three segments. Therefore, F 1 and F 2 cannot have more than one crossing point. On the other hand, if graphs F 1 and F 2 do not cross, then

$$\displaystyle{\int _{x}^{y}tdF_{ 1}(t)\neq \int _{x}^{y}tdF_{ 2}(t)}$$

that is, (11.51) is not satisfied.

Theorem 11.10

Let α i ∈ (0, 1), \(a_{i} \in \mathbb{R}\) , i = 1, , 4, be such that 1 = α 1 > α 2 > α 3 > α 4 = 0, a 1 + a 2 + a 3 + a 4 = 0, and the equalities  (11.50) and  (11.51) are satisfied. Let \(F,f: [x,y] \rightarrow \mathbb{R}\) be functions such that f is continuous and convex and F′ = f. Then,

  1. (i)

    If a 1 > −1, then

    $$\displaystyle{\frac{\sum _{i=1}^{4}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \, \leq \frac{1} {y - x}\int _{x}^{y}f(t)dt \leq \frac{f(x) + f(y)} {2},}$$
  2. (ii)

    If a 1 < −1, then

    $$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{1} {y - x}\int _{x}^{y}f(t)dt \leq \frac{\sum _{i=1}^{4}a_{ i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x},}$$
  3. (iii)

    If a 1 ∈ (−1, 0], then

    $$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{\sum _{i=1}^{4}a_{i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \leq \frac{1} {y - x}\int _{x}^{y}f(t)dt,and}$$
  4. (iv)

    If a 1 < −1 and a 2 + a 1 ≤ 0, then

    $$\displaystyle{ \frac{1} {y - x}\,\int _{x}^{y}f(t)dt \leq \frac{\sum _{i=1}^{4}a_{ i}F(\alpha _{i}x + (1 -\alpha _{i})y)} {y - x} \leq \frac{f(x) + f(y)} {2}.}$$

We shall prove the first assertion; the other proofs are similar and will be omitted. It is easy to see that if the inequalities which we consider are satisfied by every continuous and convex function defined on the interval [0, 1], then they are true for every continuous and convex function on a given interval [x, y]. Therefore, we assume that x = 0 and y = 1. Let F 1 be such that (11.43) is satisfied and let F 2 be the distribution function of a measure which is uniformly distributed in the interval [0, 1]. From Proposition 11.5 and Remark 11.8, we can see that the graph of F 1 consists of three segments and, since a 1 > −1, the slope of the first segment is smaller than 1, i.e., F 1 lies below F 2 on some right-hand neighborhood of x. In view of Proposition 11.7, this means that the assumptions of the Ohlin lemma are satisfied, and we get our result from this lemma.

Now, we shall present examples of inequalities, which may be obtained from this theorem.

Example 11.5 ([32])

Using (i), we can see that the inequality

$$\displaystyle{\frac{\frac{1} {3}F(x) -\frac{8} {3}F\left (\frac{3x + y} {4} \right ) + \frac{8} {3}F\left (\frac{x + 3y} {4} \right ) -\frac{1} {3}F(y)} {y - x} \leq \frac{\int _{x}^{y}f(t)dt} {y - x} }$$

is satisfied for every continuous and convex f and its antiderivative F. 
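
For intuition, assertion (i) of Theorem 11.10 can be tested with the data of Example 11.5 (a_1 = 1/3 and nodes x, (3x + y)/4, (x + 3y)/4, y); the script below is a sketch of ours using a fixed convex test function.

```python
import numpy as np

f, F = np.exp, np.exp            # arbitrary convex f with antiderivative F
x, y = 0.0, 2.0

nodes = np.array([x, (3 * x + y) / 4, (x + 3 * y) / 4, y])
coeff = np.array([1 / 3, -8 / 3, 8 / 3, -1 / 3])     # a_1 = 1/3 > -1, coefficients sum to 0

expr = np.dot(coeff, F(nodes)) / (y - x)
t = np.linspace(x, y, 400001)
mean_f = np.mean(f(t))
trap = (f(x) + f(y)) / 2

print(expr, mean_f, trap)
assert expr <= mean_f <= trap    # assertion (i) of Theorem 11.10
```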

Example 11.6 ([32])

Using (ii), we can see that the inequality

$$\displaystyle{\frac{-2F(x) + 3F\left (\frac{2x + y} {3} \,\right ) - 3F\left (\frac{x + 2y} {3} \right ) + 2F(y)} {y - x} \geq \frac{\int _{x}^{y}f(t)dt} {y - x} \,}$$

is satisfied by every continuous and convex function f and its antiderivative F. 

Example 11.7 ([32])

Using (iii), we can see that the inequality

$$\displaystyle{\frac{\int _{x}^{y}f(t)dt} {y - x} \, \geq \frac{-\frac{1} {2}F(x) -\frac{3} {2}F\left (\frac{2x+y} {3} \right ) + \frac{3} {2}F\left (\frac{x+2y} {3} \right ) + \frac{1} {2}F(y)} {y - x} \geq f\left (\frac{x + y} {2} \right )}$$

is satisfied by every continuous and convex function f and its antiderivative F. 

Example 11.8 ([32])

Using (iv), we can see that the inequality

$$\displaystyle{\frac{\int _{x}^{y}f(t)dt} {y - x} \, \leq \frac{-\frac{3} {2}F(x) + 2F\left (\frac{3x+y} {4} \right ) - 2F\left (\frac{x+3y} {4} \right ) + \frac{3} {2}F(y)} {y - x} \leq \frac{f(x) + f(y)} {2} }$$

is satisfied by every continuous and convex function f and its antiderivative F. 

In all the cases considered in the above theorem, we used only the Ohlin lemma. Using Lemma 11.2, it is possible to obtain more subtle inequalities. However (for the sake of simplicity), in the next result we restrict our considerations to expressions of a simpler form. Note that the inequality between \(f\left (\frac{x+y} {2} \,\right )\) and the expressions which we consider is a bit unexpected.

Theorem 11.11 ([32])

Let \(\alpha \in \left (0, \frac{1} {2}\right )\), \(a,b \in \mathbb{R}\).

  1. (i)

    If a > 0, then the inequality

    $$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \geq \frac{aF(x) + bF(\alpha x + (1-\alpha )y) - bF((1-\alpha )x +\alpha y) - aF(y)} {y - x} }$$

    is satisfied by every continuous and convex f and its antiderivative F if, and only if,

    $$\displaystyle{ (1-\alpha )^{2} \frac{ab} {a + b}\,> \frac{1} {2} - (1-\alpha ) \frac{b} {a + b},and }$$
    (11.52)
  2. (ii)

    If a < −1 and a − b > 0, then the inequality

    $$\displaystyle{ \frac{aF(x) + bF(\alpha x + (1-\alpha )y) - bF((1-\alpha )x +\alpha y) - aF(y)} {y - x} \, \leq \frac{f(x) + f(y)} {2} }$$

    is satisfied by every continuous and convex f and its antiderivative F if, and only if,

    $$\displaystyle{- \frac{1} {4a}\,> \left (-a(1-\alpha ) -\frac{1} {2}\right )\left (\frac{1} {2} + \frac{1} {2a}\right ).}$$

We shall prove the assertion (i) of Theorem 11.11. The proof of (ii) is similar and will be omitted. Similarly as before, we may assume without loss of generality that x = 0, y = 1. Let F 1 be such that

$$\displaystyle{aF(0) + bF(1-\alpha ) - bF(\alpha ) - aF(1) =\int _{ 0}^{1}fdF_{ 1}}$$

and let F 4 be given by (11.47). Then, it is easy to see that (F 1, F 4) crosses three times: at \(\frac{(1-\alpha )b} {a+b} \,,\) \(\frac{1} {2}\), and at \(\frac{a+\alpha b} {a+b}\).

We are going to use Lemma 11.2. Since, from (11.51), we have that

$$\displaystyle{A_{0} - A_{1} + A_{2} - A_{3} = 0,}$$

it suffices to check that \(A_0 \geq A_1\) if, and only if, the inequality (11.52) is satisfied. Since \(F_4(x) = 0\) for \(x \in \left (0, \frac{1} {2}\right ),\) we get

$$\displaystyle{A_{0} = -\int _{0}^{\frac{(1-\alpha )b} {a+b} \,}F_{1}(t)dt}$$

and

$$\displaystyle{A_{1} =\int _{ \frac{(1-\alpha )b} {a+b} \,}^{\frac{1} {2} }F_{1}(t)dt,}$$

which yields our assertion.

Example 11.9 ([32])

Neither inequality

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{\frac{1} {3}F(x) -\frac{8} {3}F\left (\frac{3x+y} {4} \right ) + \frac{8} {3}F\left (\frac{x+3y} {4} \right ) -\frac{1} {3}F(y)} {y - x} }$$
(11.53)

nor

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \geq \frac{\frac{1} {3}F(x) -\frac{8} {3}F\left (\frac{3x+y} {4} \right ) + \frac{8} {3}F\left (\frac{x+3y} {4} \right ) -\frac{1} {3}F(y)} {y - x} }$$
(11.54)

is satisfied for all continuous and convex \(f: [x,y] \rightarrow \mathbb{R}.\) Indeed, if F 1 is such that

$$\displaystyle{\int _{x}^{y}f(t)dF_{ 1}(t) = \frac{\frac{1} {3}F(x) -\frac{8} {3}F\left (\frac{3x+y} {4} \right ) + \frac{8} {3}F\left (\frac{x+3y} {4} \right ) -\frac{1} {3}F(y)} {y - x} \,,}$$

then

$$\displaystyle{\int _{x}^{\frac{3x+y} {4} \,}F_{1}(t)dt <\int _{ x}^{\frac{3x+y} {4} }F_{4}(t)dt,}$$

thus inequality (11.53) cannot be satisfied. On the other hand, the coefficients and nodes of the expression considered do not satisfy (11.52). Therefore, (11.54) is also not satisfied for all continuous and convex \(f: [x,y] \rightarrow \mathbb{R}.\)

Example 11.10 ([32])

Using assertion (i) of Theorem 11.11, we can see that the inequality

$$\displaystyle{\frac{2F(x) - 3F\left (\frac{3x+y} {4} \right ) + 3F\left (\frac{x+3y} {4} \right ) - 2F(y)} {y - x} \, \leq f\left (\frac{x + y} {2} \,\right )}$$

is satisfied for every continuous and convex f and its antiderivative F. 

Example 11.11 ([32])

Using assertion (ii) of Theorem 11.11, we can see that the inequality

$$\displaystyle{\frac{-2F(x) + 3F\left (\frac{2x+y} {3} \right ) - 3F\left (\frac{x+2y} {3} \right ) + 2F(y)} {y - x} \, \leq \frac{f(x) + f(y)} {2} \,}$$

is satisfied for every continuous and convex f and its antiderivative F. 

11.4 Inequalities of the Hermite–Hadamard Type Involving Numerical Differentiation Formulas of Order Two

In the paper [42], expressions connected with numerical differentiation formulas of order 2 are studied. The author used the Ohlin lemma and the Levin–Stečkin theorem to study inequalities connected with these expressions. In particular, the author presents a new proof of the inequality

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )ds\:dt \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt, }$$
(11.55)

satisfied by every convex function \(f: \mathbb{R} \rightarrow \mathbb{R}\), and obtains extensions of (11.55). In the previous section, inequalities involving expressions of the form

$$\displaystyle{\frac{\sum _{i=1}^{n}a_{i}F(\alpha _{i}x +\beta _{i}y)} {y - x} \,,}$$

where \(\sum_{i=1}^{n} a_i = 0\), α i + β i = 1, and F′ = f, were considered. In this section, we study inequalities for expressions of the form

$$\displaystyle{ \frac{\sum _{i=1}^{n}a_{i}F(\alpha _{i}x +\beta _{i}y)} {(y - x)^{2}} \,, }$$

which we use to approximate the second order derivative of F and, surprisingly, we discover a connection between our approach and the inequality (11.55) (see [42]).

First, we make the following simple observation.

Remark 11.10 ([42])

Let \(f,F,\varPhi: [x,y] \rightarrow \mathbb{R}\) be such that Φ′ = F, F′ = f. Let \(n_{i},m_{i} \in \mathbb{N} \cup \{ 0\}\), i = 1, 2, 3; \(a_{i,j} \in \mathbb{R}\), α i, j , β i, j ∈ [0, 1], i = 1, 2, 3, j = 1, …, n i ; \(b_{i,j} \in \mathbb{R}\), γ i, j , δ i, j ∈ [0, 1], i = 1, 2, 3, j = 1, …, m i . If the inequality

$$\displaystyle\begin{array}{rcl} & & \sum _{i=1}^{n_{1} }a_{1,i}f(\alpha _{1,i}x +\beta _{1,i}y) + \frac{\sum _{i=1}^{n_{2}}a_{2,i}F(\alpha _{2,i}x +\beta _{2,i}y)} {y - x} \, \\ & & \phantom{\sum _{i=1}^{n_{1} }a_{1,i}f(\alpha _{1,i}} + \frac{\sum _{i=1}^{n_{3}}a_{3,i}\varPhi (\alpha _{3,i}x +\beta _{3,i}y)} {(y - x)^{2}} \leq \sum _{i=1}^{m_{1} }b_{1,i}f(\gamma _{1,i}x +\delta _{1,i}y) \\ & & \phantom{\sum _{i=1}^{n_{1} }a_{1,i}f(\alpha _{1,i}} + \frac{\sum _{i=1}^{m_{2}}b_{2,i}F(\gamma _{2,i}x +\delta _{2,i}y)} {y - x} \, + \frac{\sum _{i=1}^{m_{3}}b_{3,i}\varPhi (\gamma _{3,i}x +\delta _{3,i}y)} {(y - x)^{2}} {}\end{array}$$
(11.56)

is satisfied for x = 0, y = 1 and for all continuous and convex functions \(f: [0,1] \rightarrow \mathbb{R}\), then it is satisfied for all \(x,y \in \mathbb{R}\), x < y and for each continuous and convex function \(f: [x,y] \rightarrow \mathbb{R}.\) To see this, it is enough to observe that the expressions from (11.56) remain unchanged if we replace \(f: [x,y] \rightarrow \mathbb{R}\) by \(\varphi: [0,1] \rightarrow \mathbb{R}\) given by \(\varphi (t):= f\left (x + t(y - x)\right ).\)

The simplest expression used to approximate the second order derivative of f is of the form

$$\displaystyle{f''\left (\frac{x + y} {2} \,\right ) \approx \frac{f(x) - 2f\left (\frac{x+y} {2} \right ) + f(y)} {\left (\frac{y-x} {2} \right )^{2}}.}$$

Remark 11.11 ([42])

From numerical analysis, it is known that

$$\displaystyle{f''\left (\frac{x + y} {2} \,\right ) = \frac{f(x) - 2f\left (\frac{x+y} {2} \right ) + f(y)} {\left (\frac{y-x} {2} \right )^{2}} -\frac{\left (\frac{y-x} {2} \right )^{2}} {12} \,f^{(4)}(\xi ).}$$

This means that for a convex function g and for G such that G″ = g we have

$$\displaystyle{g\left (\frac{x + y} {2} \,\right ) \leq \frac{G(x) - 2G\left (\frac{x+y} {2} \right ) + G(y)} {\left (\frac{y-x} {2} \right )^{2}}.}$$

In the paper [42], some inequalities for convex functions which do not follow from formulas used in numerical differentiation are obtained.

Let now \(f: [x,y] \rightarrow \mathbb{R}\) be any function and let \(F,\varPhi: [x,y] \rightarrow \mathbb{R}\) be such that F′ = f and Φ″ = f. We need to write the expression

$$\displaystyle{ \frac{\varPhi (x) - 2\varPhi \left (\frac{x+y} {2} \right ) +\varPhi (y)} {\left (\frac{y-x} {2} \right )^{2}} \, }$$
(11.57)

in the form

$$\displaystyle{\int _{x}^{y}fdF_{ 1}}$$

for some F 1. In the next proposition, we show that it is possible—here for the sake of simplicity we shall work on the interval [0, 1]. 

Proposition 11.8 ([42])

Let \(f: [0,1] \rightarrow \mathbb{R}\) be any function and let \(\varPhi: [0,1] \rightarrow \mathbb{R}\) be such that Φ″ = f. Then, we have

$$\displaystyle{4\left (\varPhi (0) - 2\varPhi \left (\frac{1} {2}\,\right ) +\varPhi (1)\right ) =\int _{ 0}^{1}fdF_{ 1},}$$

where \(F_{1}: [0,1] \rightarrow \mathbb{R}\) is given by

$$\displaystyle{ F_{1}(t):= \left \{\begin{array}{ll} 2t^{2},\ &t \leq \frac{1} {2}, \\ - 2t^{2} + 4t - 1,\ &t> \frac{1} {2}. \end{array} \right. }$$
(11.58)

Now, we observe that the following equality is satisfied

$$\displaystyle{\frac{\varPhi (x) - 2\varPhi \left (\frac{x+y} {2} \right ) +\varPhi (y)} {\left (\frac{y-x} {2} \right )^{2}} \, = \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \,\right )ds\:dt.}$$

After this observation, it turns out that inequalities involving the expression (11.57) were considered in the paper of Dragomir [14], where (among others) the following inequalities were obtained

$$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )ds\:dt \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt. }$$
(11.59)

As we already know (Remark 11.11), the first one of the above inequalities may be obtained using the numerical analysis results.
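
Both the identity behind (11.57) and the chain (11.59) are easy to confirm numerically; the sketch below (ours) approximates the double integral by a grid average and compares it with the second difference of Φ and with the Hermite–Hadamard bounds.

```python
import numpy as np

f = np.exp                       # arbitrary convex function
Phi = np.exp                     # Phi'' = f (the exponential is its own second antiderivative)
x, y = 0.0, 2.0

s, t = np.meshgrid(np.linspace(x, y, 1201), np.linspace(x, y, 1201))
double_avg = np.mean(f((s + t) / 2))                 # ~ (y-x)^{-2} * double integral in (11.59)
second_diff = (Phi(x) - 2 * Phi((x + y) / 2) + Phi(y)) / ((y - x) / 2) ** 2

u = np.linspace(x, y, 400001)
lower, mean_f = f((x + y) / 2), np.mean(f(u))

print(double_avg, second_diff)                       # agree up to discretisation error
print(lower, double_avg, mean_f)
assert lower <= double_avg <= mean_f                 # the chain (11.59)
```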

Now, the inequalities from Dragomir’s paper easily follow from the Ohlin lemma, but there are many possibilities of generalizations and modifications of the inequalities (11.59). These generalizations will be discussed in this section.

First, we consider the symmetric case. We start with the following remark.

Remark 11.12 ([42])

Let F (t) = at 2 + bt + c for some \(a,b,c \in \mathbb{R},a\neq 0.\) It is impossible to obtain inequalities involving x y fdF and any of the expressions:

$$\displaystyle{ \frac{1} {y - x}\,\int _{x}^{y}f(t)dt,\quad \ \ f\left (\frac{x + y} {2} \right ),\quad \ \ \frac{f(x) + f(y)} {2},}$$

which are satisfied for all convex functions \(f: [x,y] \rightarrow \mathbb{R}.\) Indeed, suppose that we have

$$\displaystyle{\int _{x}^{y}fdF_{ {\ast}}\leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt}$$

for all convex \(f: [x,y] \rightarrow \mathbb{R}.\) Without loss of generality, we may assume that \(F_{*}(x) = 0\); then from Theorem 11.6 we have \(F_{*}(y) = 1.\) Also from Theorem 11.6 we get

$$\displaystyle{\int _{x}^{y}F_{ {\ast}}(t)dt =\int _{ x}^{y}F_{ 0}dt,}$$

where \(F_{0}(t) = \frac{t-x} {y-x}\,\), t ∈ [x, y], which is impossible, because \(F_{*}\) is either strictly convex or strictly concave.

This remark means that in order to get some new inequalities of the Hermite–Hadamard type we have to integrate with respect to functions constructed with the use of (at least) two quadratic functions.

Now, we present the main result of this section.

Theorem 11.12 ([42])

Let x, y be some real numbers such that x < y and let \(a \in \mathbb{R}.\) Let \(f,F,\varPhi: [x,y] \rightarrow \mathbb{R}\) be any functions such that F′ = f and Φ′ = F and let T a f(x, y) be the function defined by the following formula

$$\displaystyle{T_{a}f(x,y) = \left (1 -\frac{a} {2}\,\right )\frac{F(y) - F(x)} {y - x} + 2a\frac{\varPhi (x) - 2\varPhi \left (\frac{x+y} {2} \right ) +\varPhi (y)} {(y - x)^{2}}.}$$

Then, the following inequalities hold for all convex functions \(f: [x,y] \rightarrow \mathbb{R}:\)

  • If a ≥ 0, then

    $$\displaystyle{ T_{a}f(x,y) \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt, }$$
    (11.60)
  • If a ≤ 0, then

    $$\displaystyle{ T_{a}f(x,y) \geq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt, }$$
    (11.61)
  • If a ≤ 2, then

    $$\displaystyle{ f\left (\frac{x + y} {2} \,\right ) \leq T_{a}f(x,y), }$$
    (11.62)
  • If a ≥ 6, then

    $$\displaystyle{ T_{a}f(x,y) \leq f\left (\frac{x + y} {2} \,\right ), }$$
    (11.63)
  • If a ≥ −6, then

    $$\displaystyle{ T_{a}f(x,y) \leq \frac{f(x) + f(y)} {2} \,, }$$
    (11.64)

Furthermore,

  • If a ∈ (2, 6), then the expressions T a f(x, y), \(f\left (\frac{x+y} {2} \,\right )\) are not comparable in the class of convex functions, and

  • If a < −6, then expressions T a f(x, y), \(\frac{f(x)+f(y)} {2} \,\) are not comparable in the class of convex functions.

To prove Theorem 11.12, we note that we may restrict ourselves to the case x = 0, y = 1. Take \(a \in \mathbb{R},\) let \(f: [0,1] \rightarrow \mathbb{R}\) be any convex function, and let \(F,\varPhi: [0,1] \rightarrow \mathbb{R}\) be such that F′ = f, Φ′ = F. Define \(F_{1}: [0,1] \rightarrow \mathbb{R}\) by the formula

$$\displaystyle{ F_{1}(t):= \left \{\begin{array}{ll} at^{2} + \left (1 -\frac{a} {2} \,\right )t,\quad &t <\frac{1} {2}, \\ - at^{2} + \left (1 + \frac{3a} {2} \,\right )t -\frac{a} {2},\quad &t \geq \frac{1} {2}. \end{array} \right. }$$
(11.65)

First, we prove that \(T_{a}f(0,1) =\int _{0}^{1}f\,dF_{1}\). Now, let F 2(t) = t, t ∈ [0, 1]. Then, the functions F 1, F 2 have exactly one crossing point (at \(\frac{1} {2}\)) and

$$\displaystyle{\int _{0}^{1}F_{ 1}(t)dt = \frac{1} {2} =\int _{ 0}^{1}tdt.}$$
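The equality \(T_{a}f(0,1) =\int _{0}^{1}f\,dF_{1}\) itself can be checked as follows: on \((0,1)\setminus \{\frac{1} {2}\}\) we have \(F_{1}'(t) = 1 -\frac{a} {2} + 2a\min (t,1 - t)\), and the weight \(4\min (t,1 - t)\) is the density of the function defined in (11.58); hence, by Proposition 11.8,

$$\displaystyle{\int _{0}^{1}f\,dF_{1} = \left (1 -\frac{a} {2}\right )\big(F(1) - F(0)\big) + \frac{a} {2}\cdot 4\left (\varPhi (0) - 2\varPhi \left (\tfrac{1} {2}\right ) +\varPhi (1)\right ) = T_{a}f(0,1).}$$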

Moreover, if a > 0, then the function F 1 is convex on the interval \((0, \frac{1} {2})\) and concave on \((\frac{1} {2},1).\) Therefore, it follows from the Ohlin lemma that for a > 0 we have

$$\displaystyle{ \int _{0}^{1}fdF_{ 1} \leq \int _{0}^{1}fdF_{ 2}, }$$

which, in view of Remark 11.10, yields (11.60); for a < 0 the opposite inequality is satisfied, which gives (11.61). Take

$$\displaystyle{ F_{3}(t):= \left \{\begin{array}{ll} 0,\quad &t \leq \frac{1} {2}, \\ 1,\quad &t> \frac{1} {2}. \end{array} \right. }$$

It is easy to calculate that for a ≤ 2 we have F 1(t) ≥ F 3(t) for \(t \in \left [0, \frac{1} {2}\right ],\) and F 1(t) ≤ F 3(t) for \(t \in \left [\frac{1} {2},1\right ]\), and this means that from the Ohlin lemma we get (11.62). Let now

$$\displaystyle{ F_{4}(t):= \left \{\begin{array}{ll} 0,\quad &t = 0, \\ \frac{1} {2},\quad &t \in (0,1),\\ 1, \quad &t = 1. \end{array} \right. }$$

Similarly as before, if a ≥ −2, then we have F 1(t) ≤ F 4(t) for \(t \in \left [0, \frac{1} {2}\right ]\) and F 1(t) ≥ F 4(t) for \(t \in \left [\frac{1} {2},1\right ].\) Therefore, from the Ohlin lemma, we get (11.64).

Suppose that a > 2. Then there are three crossing points of the functions F 1 and F 3: \(x_{0}\), \(\frac{1} {2}\), \(x_{1}\), where \(x_{0} \in (0, \frac{1} {2})\), \(x_{1} \in (\frac{1} {2},1)\). The function

$$\displaystyle{\varphi (s):=\int _{ 0}^{s}(F_{ 3}(t) - F_{1}(t))dt,\;s \in [0,1]}$$

is increasing on the intervals \([0,x_{0}],[\frac{1} {2},x_{1}]\) and decreasing on \([x_{0}, \frac{1} {2}]\) and on [x 1, 1]. Since, moreover, φ(0) = φ(1) = 0, the function φ is nonnegative on [0, 1] if, and only if, \(\varphi \left (\frac{1} {2}\right ) \geq 0\). It is easy to calculate that \(\varphi \left (\frac{1} {2}\right ) \geq 0\) if a ≥ 6, which, in view of Theorem 11.6, gives us (11.63).
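Indeed, since F 3 vanishes on \(\left [0, \frac{1} {2}\right ]\),

$$\displaystyle{\varphi \left (\tfrac{1} {2}\right ) = -\int _{0}^{\frac{1} {2}}F_{1}(t)\,dt = -\int _{0}^{\frac{1} {2}}\left (at^{2} + \left (1 -\tfrac{a} {2}\right )t\right )dt = \frac{a} {48} -\frac{1} {8},}$$

which is nonnegative exactly when a ≥ 6.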

To see that, for a ∈ (2, 6), the expressions T a f(x, y) and \(f\left (\frac{x+y} {2} \,\right )\) are not comparable in the class of convex functions, it is enough to observe that in this case φ(x 0) > 0 and \(\varphi \left (\frac{1} {2}\right ) <0.\)

Analogously (using the functions F 1 and F 4), we show that for a ∈ [−6, −2) we have (11.64), and in the case a < −6 the expressions T a f(x, y) and \(\frac{f(x)+f(y)} {2} \,\) are not comparable in the class of convex functions. This theorem provides us with a full description of the inequalities which may be obtained using the Stieltjes integral with respect to a function of the form (11.65). Some of the obtained inequalities are already known. For example, from (11.60) (taking a = 2) we obtain the inequality

$$\displaystyle{ \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )ds\:dt \leq \frac{1} {y - x}\,\int _{x}^{y}f(t)dt,}$$

whereas from (11.62) for a = 2 we get the inequality

$$\displaystyle{f\left (\frac{x + y} {2} \,\right ) \leq \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )ds\:dt.}$$

However, the inequalities obtained for the “critical” values of a, i.e., −6 and 6, are particularly interesting here. In the following corollary, we write these inequalities explicitly.

Corollary 11.2 ([42])

For every convex function \(f: [x,y] \rightarrow \mathbb{R}\) , the following inequalities are satisfied

$$\displaystyle{ 3 \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt \leq \frac{2} {y - x}\,\int _{x}^{y}f(t)dt + f\left (\frac{x + y} {2} \right ), }$$
(11.66)
$$\displaystyle{ \frac{4} {y - x}\,\int _{x}^{y}f(t)dt \leq 3 \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt + \frac{f(x) + f(y)} {2}. }$$
(11.67)
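To derive these inequalities from Theorem 11.12, write \(M = \frac{1} {y-x}\int _{x}^{y}f(t)dt\) and \(D = \frac{1} {(y-x)^{2}}\int _{x}^{y}\int _{x}^{y}f\left (\frac{s+t} {2}\right )ds\,dt\). The equality relating (11.57) to the double integral gives \(T_{a}f(x,y) = \left (1 -\frac{a} {2}\right )M + \frac{a} {2}D\). Taking a = 6 in (11.63), we get \(3D - 2M \leq f\left (\frac{x+y} {2}\right )\), which is (11.66); taking a = −6 in (11.64), we get \(4M - 3D \leq \frac{f(x)+f(y)} {2}\), which is (11.67).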

Remark 11.13 ([42])

In the paper [15], Dragomir and Gomm obtained the following inequality

$$\displaystyle{ 3 \frac{1} {y - x}\,\int _{x}^{y}f(t)dt \leq 2 \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt + \frac{f(x) + f(y)} {2}. }$$
(11.68)

Inequality (11.67) from Corollary 11.2 is stronger than (11.68). Moreover, as was observed in Theorem 11.12, the inequalities (11.66) and (11.67) cannot be improved, i.e., the inequality

$$\displaystyle{ \frac{1} {y - x}\,\int _{x}^{y}f(t)dt \leq \lambda \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt + (1-\lambda )\frac{f(x) + f(y)} {2} }$$

for \(\lambda> \frac{3} {4}\) is not satisfied by every convex function \(f: [x,y] \rightarrow \mathbb{R}\) and the inequality

$$\displaystyle{ \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt \leq \gamma \frac{1} {y - x}\,\int _{x}^{y}f(t)dt + (1-\gamma )f\left (\frac{x + y} {2} \right )}$$

with \(\gamma> \frac{2} {3}\) is not true for all convex functions \(f: [x,y] \rightarrow \mathbb{R}.\)

In Corollary 11.2, we obtained inequalities for the triples:

$$\displaystyle{ \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt,\quad \int _{x}^{y}f(t)dt,\quad \frac{f(x) + f(y)} {2} }$$

and

$$\displaystyle{ \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt,\quad \int _{x}^{y}f(t)dt,\quad f\left (\frac{x + y} {2} \right ).}$$

In the next remark, we present an analogous result for expressions

$$\displaystyle{ \frac{1} {(y - x)^{2}}\,\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt,\quad \frac{f(x) + f(y)} {2},\quad f\left (\frac{x + y} {2} \right ).}$$

Remark 11.14 ([42])

Using the functions F 1 defined by (11.58) and F 5 given by

$$\displaystyle{ F_{5}(t):= \left \{\begin{array}{ll} 0, &\quad t = 0, \\ \frac{1} {6},&\quad t \in \left (0, \frac{1} {2}\right ), \\ \frac{5} {6},&\quad t \in \left [\frac{1} {2},1\right ),\\ 1, &\quad t = 1, \end{array} \right. }$$

we can see that

$$\displaystyle{\frac{1} {6}f(x) + \frac{2} {3}f\left (\frac{x + y} {2} \right ) + \frac{1} {6}f(y) \geq \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt}$$

for all convex functions \(f: [x,y] \rightarrow \mathbb{R}.\)

Moreover, it is easy to see that the above inequality cannot be strengthened, which means that if a, b ≥ 0, 2a + b = 1 and \(a <\frac{1} {6}\), then the inequality

$$\displaystyle{af(x) + bf\left (\frac{x + y} {2} \,\right ) + af(y) \geq \frac{1} {(y - x)^{2}}\int _{x}^{y}\int _{ x}^{y}f\left (\frac{s + t} {2} \right )dsdt,}$$

is not satisfied by all convex functions f.

In [42], inequalities for f(αx + (1 −α)y) and for αf(x) + (1 −α)f(y), where α is not necessarily equal to \(\frac{1} {2}\) (the nonsymmetric case), are also obtained.

Theorem 11.13 ([42])

Let x, y be some real numbers such that x < y and let α ∈ [0, 1]. Let \(f: [x,y] \rightarrow \mathbb{R}\) be a convex function, let F be such that F′ = f, and let Φ satisfy Φ′ = F. If S α 2 f(x, y) is defined by

$$\displaystyle{S_{\alpha }^{2}f(x,y):= \frac{(4 - 6\alpha )F(y) + (2 - 6\alpha )F(x)} {y - x} \, + \frac{(12\alpha - 6)(\varPhi (y) -\varPhi (x))} {(y - x)^{2}},}$$

then the following conditions hold true:

  • $$\displaystyle{ S_{\alpha }^{2}f(x,y) \leq \alpha f(x) + (1-\alpha )f(y), }$$
  • If \(\alpha \in \left [\frac{1} {3}, \frac{2} {3}\right ]\) , then

    $$\displaystyle{ S_{\alpha }^{2}f(x,y) \geq f(\alpha x + (1-\alpha )y), }$$
  • If \(\alpha \in [0,1]\setminus \left [\frac{1} {3}, \frac{2} {3}\right ]\) , then the expressions S α 2 f(x, y) and f(αx + (1 −α)y) are incomparable in the class of convex functions,

  • If \(\alpha \in \left (0, \frac{1} {3}\right ] \cup \left [\frac{2} {3},1\right ),\) then

    $$\displaystyle{ S_{\alpha }^{2}f(x,y) \leq S_{\alpha }^{1}f(x,y), }$$ and
  • If \(\alpha \in \left (\frac{1} {3}, \frac{1} {2}\right ) \cup \left (\frac{1} {2}, \frac{2} {3}\right )\) , then S α 1 f(x, y) and S α 2 f(x, y) are incomparable in the class of convex functions.
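As a quick consistency check of the definition of \(S_{\alpha }^{2}f(x,y)\) (this computation is not taken from [42]), restrict to [x, y] = [0, 1] and choose F(0) = Φ(0) = 0. For f ≡ 1 (so F(t) = t, \(\varPhi (t) = \frac{t^{2}} {2}\)) we get \(S_{\alpha }^{2}f(0,1) = (4 - 6\alpha ) + \frac{12\alpha - 6} {2} = 1\), while for f(t) = t (so \(F(t) = \frac{t^{2}} {2}\), \(\varPhi (t) = \frac{t^{3}} {6}\)) we get \(S_{\alpha }^{2}f(0,1) = \frac{4 - 6\alpha } {2} + \frac{12\alpha - 6} {6} = 1-\alpha =\alpha f(0) + (1-\alpha )f(1)\). Thus \(S_{\alpha }^{2}\) reproduces affine functions exactly, as any such operator must.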

11.5 The Hermite–Hadamard Type Inequalities for n-th Order Convex Functions

Now, we are going to study Hermite–Hadamard type inequalities for higher-order convex functions. Many results on higher-order generalizations of the Hermite–Hadamard type inequality can be found, among others, in [15, 16, 20, 36, 37]. In the recent papers [36, 37], the theorem of Denuit, Lefèvre, and Shaked [13] on sufficient conditions for s-convex ordering was used to prove Hermite–Hadamard type inequalities for higher-order convex functions.

Let us review some notations. The convexity of n-th order (or n-convexity) was defined in terms of divided differences by Popoviciu [34]; however, we will not state it here. Instead, we list some properties of n-th order convexity which are equivalent to Popoviciu’s definition (see [24]).

Proposition 11.9

A function \(f: (a,b) \rightarrow \mathbb{R}\) is n-convex on (a, b) (n ≥ 1) if, and only if, its derivative f (n−1) exists and is convex on (a, b) (with the convention f (0)(x) = f(x)).

Proposition 11.10

Assume that \(f: [a,b] \rightarrow \mathbb{R}\) is (n + 1)-times differentiable on (a, b) and continuous on [a, b] (n ≥ 1). Then, f is n-convex if, and only if, f (n+1)(x) ≥ 0, x ∈ (a, b).

For real-valued random variables X, Y and any integer s ≥ 2, we say that X is dominated by Y in the s-convex ordering sense if \(\mathbb{E}f(X) \leq \mathbb{E}f(Y )\) for all (s − 1)-convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}\), for which the expectations exist [13]. In that case, we write \(X \leq _{s-cx}Y\), or \(\mu _{X} \leq _{s-cx}\mu _{Y }\), or \(F_{X} \leq _{s-cx}F_{Y }\). Then, the order \(\leq _{2-cx}\) is just the usual convex order \(\leq _{cx}\).

A very useful criterion for the verification of the s-convex order is given by Denuit, Lefèvre, and Shaked in [13].

Proposition 11.11 ([13])

Let X and Y be two random variables such that \(\mathbb{E}(X^{j} - Y ^{j}) = 0\), j = 1, 2, …, s − 1 (s ≥ 2). If the number of sign changes of \(F_{X} - F_{Y }\) equals s − 1 and the last sign of \(F_{X} - F_{Y }\) is positive, then \(X \leq _{s-cx}Y\).
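In particular, for s = 2 the assumptions read: \(\mathbb{E}X = \mathbb{E}Y\) and \(F_{X} - F_{Y }\) changes sign exactly once, with the last sign positive; the conclusion is then \(\mathbb{E}f(X) \leq \mathbb{E}f(Y )\) for all convex functions f, which is precisely the statement of the Ohlin lemma.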

We now apply Proposition 11.11 to obtain the following results.

Theorem 11.14 ([36])

Let n ≥ 1, \(a_{1} \leq a < b \leq b_{1}\).

Let \(a(n) = \left [\frac{n} {2} \,\right ] + 1\), \(b(n) = \left [\frac{n+1} {2} \right ] + 1\) .

Let \(\alpha _{1},\ldots,\alpha _{a(n)}\), \(x_{1},\ldots,x_{a(n)}\), \(\beta _{1},\ldots,\beta _{b(n)}\), \(y_{1},\ldots,y_{b(n)}\) be real numbers such that

  • If n is even, then

    $$\displaystyle\begin{array}{rcl} & 0 <\beta _{1} <\alpha _{1} <\beta _{1} +\beta _{2} <\alpha _{1} +\alpha _{2} <\ldots <\alpha _{1} +\ldots +\alpha _{a(n)} =\beta _{1} +\ldots +\beta _{b(n)} = 1,& {}\\ & a \leq y_{1} <x_{1} <y_{2} <x_{2} <\ldots <x_{a(n)} <y_{b(n)} \leq b, & {}\\ \end{array}$$
  • If n is odd, then

    $$\displaystyle\begin{array}{rcl} & 0 <\beta _{1} <\alpha _{1} <\beta _{1} +\beta _{2} <\alpha _{1} +\alpha _{2} <\ldots <\beta _{1} +\ldots +\beta _{b(n)} <\alpha _{1} +\ldots +\alpha _{a(n)} = 1& {}\\ & a \leq y_{1} <x_{1} <y_{2} <x_{2} <\ldots <y_{b(n)} <x_{a(n)} \leq b; & {}\\ \end{array}$$

and

$$\displaystyle{\sum _{i=1}^{a(n)}x_{ i}^{k}\alpha _{ i} =\sum _{ j=1}^{b(n)}y_{ j}^{k}\beta _{ j}}$$

for any k = 1, 2, …, n.

Let \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) be an n-convex function. Then, we have the following inequalities:

  • If n is even, then

    $$\displaystyle{ \sum _{i=1}^{a(n)}\alpha _{ i}f(x_{i}) \leq \sum _{j=1}^{b(n)}\beta _{ j}f(y_{j}), }$$
  • If n is odd, then

    $$\displaystyle{ \sum _{j=1}^{b(n)}\beta _{ j}f(y_{j}) \leq \sum _{i=1}^{a(n)}\alpha _{ i}f(x_{i}). }$$

Theorem 11.15 ([36])

Let n ≥ 1, \(a_{1} \leq a < b \leq b_{1}\). Let \(a(n),b(n) \in \mathbb{N}\). Let \(\alpha _{1},\ldots,\alpha _{a(n)}\), \(\beta _{1},\ldots,\beta _{b(n)}\) be positive real numbers such that \(\alpha _{1} +\ldots +\alpha _{a(n)} =\beta _{1} +\ldots +\beta _{b(n)} = 1\). Let \(x_{1},\ldots,x_{a(n)}\), \(y_{1},\ldots,y_{b(n)}\) be real numbers such that

  • \(a \leq x_{1} \leq x_{2} \leq \ldots \leq x_{a(n)} \leq b\) and \(a \leq y_{1} \leq y_{2} \leq \ldots \leq y_{b(n)} \leq b\),

  • \(\sum _{i=1}^{a(n)}x_{i}^{k}\alpha _{i} =\sum _{j=1}^{b(n)}y_{j}^{k}\beta _{j}\) for any k = 1, 2, …, n.

Let \(\alpha _{0} =\beta _{0} = 0\), \(x_{0} = y_{0} = -\infty\). Let \(F_{1},F_{2}: \mathbb{R} \rightarrow \mathbb{R}\) be two functions given by the following formulas: \(F_{1}(x) =\alpha _{0} +\alpha _{1} +\ldots +\alpha _{k}\) if \(x_{k} < x \leq x_{k+1}\) (k = 0, 1, …, a(n) − 1) and \(F_{1}(x) = 1\) if \(x > x_{a(n)}\); \(F_{2}(x) =\beta _{0} +\beta _{1} +\ldots +\beta _{k}\) if \(y_{k} < x \leq y_{k+1}\) (k = 0, 1, …, b(n) − 1) and \(F_{2}(x) = 1\) if \(x > y_{b(n)}\). If the functions F 1, F 2 have n crossing points and the last sign of \(F_{1} - F_{2}\) is a +, then for any n-convex function \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) we have the following inequality

$$\displaystyle{ \sum _{i=1}^{a(n)}\alpha _{ i}f(x_{i}) \leq \sum _{j=1}^{b(n)}\beta _{ j}f(y_{j}). }$$

Theorem 11.16 ([36])

Let n ≥ 1, \(a_{1} \leq a < b \leq b_{1}\). Let \(a(n) = \left [\frac{n} {2} \,\right ] + 1\), \(b(n) = \left [\frac{n+1} {2} \right ] + 1\). Let \(x_{1},\ldots,x_{a(n)}\), \(y_{1},\ldots,y_{b(n)}\) be real numbers, and \(\alpha _{1},\ldots,\alpha _{a(n)}\), \(\beta _{1},\ldots,\beta _{b(n)}\) be positive numbers, such that \(\alpha _{1} +\ldots +\alpha _{a(n)} = 1\), \(\beta _{1} +\ldots +\beta _{b(n)} = 1\),

$$\displaystyle{ \frac{1} {b - a}\,\int _{a}^{b}x^{k}dx =\sum _{ j=1}^{b(n)}y_{ j}^{k}\beta _{ j} =\sum _{ i=1}^{a(n)}x_{ i}^{k}\alpha _{ i}\quad (k = 1,2,\ldots,n),}$$

\(a \leq x_{1} < x_{2} <\ldots < x_{a(n)} \leq b\), \(a \leq y_{1} < y_{2} <\ldots < y_{b(n)} \leq b\),

$$\displaystyle\begin{array}{rcl} & \frac{x_{1}-a} {b-a} \, <\alpha _{1} <\frac{x_{2}-a} {b-a}, & {}\\ & \frac{x_{2}-a} {b-a} \, <\alpha _{1} +\alpha _{2} <\frac{x_{3}-a} {b-a}, & {}\\ & \ldots & {}\\ & \frac{x_{a(n)-1}-a} {b-a} \, <\alpha _{1} +\ldots +\alpha _{a(n)-1} <\frac{x_{a(n)}-a} {b-a},& {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & \frac{y_{1}-a} {b-a} \, <\beta _{1} <\frac{y_{2}-a} {b-a}, & {}\\ & \frac{y_{2}-a} {b-a} \, <\beta _{1} +\beta _{2} <\frac{y_{3}-a} {b-a}, & {}\\ & \ldots & {}\\ & \frac{y_{b(n)-1}-a} {b-a} \, <\beta _{1} +\ldots +\beta _{b(n)-1} <\frac{y_{b(n)}-a} {b-a};& {}\\ \end{array}$$

if n is even, then y 1 = a, y b(n) = b, x 1 > a, x a(n) < b;

if n is odd, then y 1 = a, y b(n) < b, x 1 > a, x a(n) = b.

Let \(f: [a_{1},b_{1}] \rightarrow \mathbb{R}\) be an n-convex function. Then, we have the following inequalities:

  • If n is even, then

    $$\displaystyle{ \sum _{i=1}^{a(n)}\alpha _{ i}f(x_{i}) \leq \frac{1} {b - a}\,\int _{a}^{b}f(x)dx \leq \sum _{ j=1}^{b(n)}\beta _{ j}f(y_{j}), }$$
  • If n is odd, then

    $$\displaystyle{ \sum _{j=1}^{b(n)}\beta _{ j}f(y_{j}) \leq \frac{1} {b - a}\,\int _{a}^{b}f(x)dx \leq \sum _{ i=1}^{a(n)}\alpha _{ i}f(x_{i}). }$$

Note that Proposition 11.11 can be rewritten in the following form.

Proposition 11.12 ([13])

Let X and Y be two random variables such that

$$\displaystyle{ \mathbb{E}(X^{j} - Y ^{j}) = 0,\quad j = 1,2,\ldots,s\:(s \geq 1). }$$

If the distribution functions F X and F Y cross exactly s times at points x 1 < x 2 < … < x s and

$$\displaystyle{ (-1)^{s+1}\left (F_{ Y }(x) - F_{X}(x)\right ) \geq 0\quad for\:all\:x \leq x_{1}, }$$

then

$$\displaystyle{ \mathbb{E}f(X) \leq \mathbb{E}f(Y ) }$$
(11.69)

for all s-convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}\) .

Proposition 11.11 is a counterpart of the Ohlin lemma concerning convex ordering. This proposition gives sufficient conditions for s-convex ordering and is very useful for the verification of higher-order convex orders. However, it is worth noticing that in the case of some inequalities the distribution functions cross more than s times; therefore, a direct application of this proposition is impossible.

In the paper [38], a theorem on necessary and sufficient conditions for higher-order convex stochastic ordering is given. This theorem is a counterpart of the Levin–Stečkin theorem [25] concerning convex stochastic ordering. Based on this theorem, useful criteria for the verification of higher-order convex stochastic ordering are given. These results can be useful in the study of Hermite–Hadamard type inequalities for higher-order convex functions, and in particular inequalities between the quadrature operators. It is worth noticing that these criteria can make the verification of higher-order convex orders easier than the criteria given in [13, 22].

Let \(F_{1},F_{2}: [a,b] \rightarrow \mathbb{R}\) be two functions with bounded variation and μ 1, μ 2 be the signed measures corresponding to F 1, F 2, respectively. We say that F 1 is dominated by F 2 in (n + 1)-convex ordering sense (n ≥ 1) if

$$\displaystyle{ \int _{-\infty }^{\infty }f(x)dF_{ 1}(x) \leq \int _{-\infty }^{\infty }f(x)dF_{ 2}(x) }$$

for all n-convex functions \(f: [a,b] \rightarrow \mathbb{R}\). In that case, we write \(F_{1} \leq _{(n+1)-cx}F_{2}\), or \(\mu _{1} \leq _{(n+1)-cx}\mu _{2}\). In the following theorem, we give necessary and sufficient conditions for (n + 1)-convex ordering of two functions with bounded variation.

Theorem 11.17 ([38])

Let \(a,b \in \mathbb{R}\) , a < b, \(n \in \mathbb{N}\) and let \(F_{1},F_{2}: [a,b] \rightarrow \mathbb{R}\) be two functions with bounded variation such that F 1(a) = F 2(a). Then, in order that

$$\displaystyle{ \int _{a}^{b}f(x)dF_{ 1}(x) \leq \int _{a}^{b}f(x)dF_{ 2}(x) }$$

for all continuous n-convex functions \(f: [a,b] \rightarrow \mathbb{R},\) it is necessary and sufficient that F 1 and F 2 verify the following conditions:

$$\displaystyle{ F_{1}(b) = F_{2}(b), }$$
$$\displaystyle{ \int _{a}^{b}F_{ 1}(x)dx =\int _{ a}^{b}F_{ 2}(x)dx, }$$
$$\displaystyle\begin{array}{rcl} & & \int _{a}^{b}\int _{ a}^{x_{k-1} }\ldots \int _{a}^{x_{1} }F_{1}(t)dtdx_{1}\ldots dx_{k-1} = \\ & & \phantom{\int _{a}^{b}\int _{ a}^{x_{k-1} }}\int _{a}^{b}\int _{ a}^{x_{k-1} }\ldots \int _{a}^{x_{1} }F_{2}(t)dtdx_{1}\ldots dx_{k-1}\quad \mathit{\mbox{ for }}\:k = 2,\ldots,n,{}\end{array}$$
(11.70)
$$\displaystyle\begin{array}{rcl} & & (-1)^{n+1}\int _{ a}^{x}\int _{ a}^{x_{n-1} }\ldots \int _{a}^{x_{1} }F_{1}(t)dtdx_{1}\ldots dx_{n-1} \leq \\ & &\quad (-1)^{n+1}\int _{ a}^{x}\int _{ a}^{x_{n-1} }\ldots \int _{a}^{x_{1} }F_{2}(t)dtdx_{1}\ldots dx_{n-1}\quad \mathit{\mbox{ for all }}\:x \in (a,b).{}\end{array}$$
(11.71)
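For instance, for n = 1 the conditions (11.70) are vacuous and (11.71) reduces to \(\int _{a}^{x}F_{1}(t)dt \leq \int _{a}^{x}F_{2}(t)dt\) for all x ∈ (a, b); together with F 1(b) = F 2(b) and the equality of the integrals over [a, b], this is the Levin–Stečkin-type characterization of the convex ordering used in Section 11.2.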

Corollary 11.3 ([38])

Let μ 1 , μ 2 be two signed measures on \(\mathcal{B}(\mathbb{R})\) , which are concentrated on (a, b), and such that \(\int _{a}^{b}\vert x\vert ^{n}\mu _{i}(dx) <\infty\) , i = 1, 2. Then, in order that

$$\displaystyle{ \int _{a}^{b}f(x)d\mu _{ 1}(x) \leq \int _{a}^{b}f(x)d\mu _{ 2}(x) }$$

for continuous n-convex functions \(f: [a,b] \rightarrow \mathbb{R}\) , it is necessary and sufficient that μ 1 , μ 2 verify the following conditions:

$$\displaystyle{ \mu _{1}\left ((a,b)\right ) =\mu _{2}\left ((a,b)\right ), }$$
(11.72)
$$\displaystyle{ \int _{a}^{b}x^{k}\mu _{ 1}(dx) =\int _{ a}^{b}x^{k}\mu _{ 2}(dx)\quad \mathit{\mbox{ for }}\:\:k = 1,\ldots,n, }$$
(11.73)
$$\displaystyle{ \int _{a}^{b}\big(t - x\big)_{ +}^{n}\mu _{ 1}(dt) \leq \int _{ a}^{b}\big(t - x\big)_{ +}^{n}\mu _{ 2}(dt)\quad \mathit{\mbox{ for all }}\:\:x \in (a,b), }$$
(11.74)

where \(y_{+}^{n} =\Big \{\max \{y,0\}\Big\}^{n}\), \(y \in \mathbb{R}\) .

In [13], the following necessary and sufficient conditions for the verification of the (s + 1)-convex order can be found.

Proposition 11.13 ([13])

If X and Y are two real-valued random variables such that \(\mathbb{E}\vert X\vert ^{s} <\infty\) and \(\mathbb{E}\vert Y \vert ^{s} <\infty\) , then

$$\displaystyle{ \mathbb{E}f(X) \leq \mathbb{E}f(Y ) }$$

for all continuous s-convex functions \(f: \mathbb{R} \rightarrow \mathbb{R}\) if, and only if,

$$\displaystyle{ \mathbb{E}X^{k} = \mathbb{E}Y ^{k}\quad \mathit{\mbox{ for }}\:k = 1,2,\ldots,s, }$$
(11.75)
$$\displaystyle{ \mathbb{E}(X - t)_{+}^{s} \leq \mathbb{E}(Y - t)_{ +}^{s}\quad \mathit{\mbox{ for all }}\:t \in \mathbb{R}. }$$
(11.76)

Remark 11.15 ([38])

Note that if the measures μ X , μ Y , corresponding to the random variables X, Y, respectively, occurring in Proposition 11.13, are concentrated on some interval [a, b], then this proposition is an easy consequence of Corollary 11.3.

Theorem 11.17 can be rewritten in the following form.

Theorem 11.18 ([38])

Let \(F_{1},F_{2}: [a,b] \rightarrow \mathbb{R}\) be two functions with bounded variation such that F 1(a) = F 2(a). Let

$$\displaystyle\begin{array}{rcl} H_{0}(t_{0})& =& F_{2}(t_{0}) - F_{1}(t_{0})\quad \mathit{\mbox{ for }}\:t_{0} \in [a,b], {}\\ H_{k}(t_{k})& =& \int _{a}^{t_{k} }H_{k-1}(t_{k-1})dt_{k-1}\quad \mathit{\mbox{ for }}\:t_{k} \in [a,b],\ k = 1,2,\ldots,n. {}\\ \end{array}$$

Then, in order that

$$\displaystyle{ \int _{a}^{b}f(x)dF_{ 1}(x) \leq \int _{a}^{b}f(x)dF_{ 2}(x) }$$

for all continuous n-convex functions \(f: [a,b] \rightarrow \mathbb{R},\) it is necessary and sufficient that the following conditions are satisfied:

$$\displaystyle\begin{array}{rcl} H_{k}(b)& =& 0\quad \mathit{\mbox{ for }}\:k = 0,1,2,\ldots,n, {}\\ (-1)^{n+1}H_{ n}(x)& \geq & 0\quad \mathit{\mbox{ for all }}\:x \in (a,b). {}\\ \end{array}$$

Remark 11.16 ([38])

The functions H 1, …, H n that appear in Theorem 11.18 can be obtained from the following formulas

$$\displaystyle{ H_{n}(x) = (-1)^{n+1}\int _{ a}^{b}\frac{(t - x)_{+}^{n}} {n!} \,d(F_{2}(t) - F_{1}(t)), }$$
(11.77)
$$\displaystyle{ H_{k-1}(x) = H_{k}^{\,'}(x),\quad k = 2,3,\ldots,n. }$$
(11.78)
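Indeed, iterating (11.78) starting from (11.77) and using \(\frac{d} {dx}(t - x)_{+}^{k} = -k(t - x)_{+}^{k-1}\), one obtains the closed form

$$\displaystyle{H_{k}(x) = (-1)^{k+1}\int _{a}^{b}\frac{(t - x)_{+}^{k}} {k!} \,d(F_{2}(t) - F_{1}(t)),\quad k = 1,2,\ldots,n.}$$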

Note that the function \((-1)^{n+1}H_{n-1}\), which appears in Theorem 11.18, plays a role similar to that of the function \(F = F_{2} - F_{1}\) in Lemma 11.2. Consequently, from Theorem 11.18, Lemma 11.2, and Remarks 11.7 and 11.16, we obtain immediately the following criterion, which can be useful for the verification of higher-order convex ordering.

Corollary 11.4 ([38])

Let \(F_{1},F_{2}: [a,b] \rightarrow \mathbb{R}\) be functions with bounded variation such that F 1(a) = F 2(a), F 1(b) = F 2(b) and H k (b) = 0 (k = 1, 2, …, n), where H k (x) (k = 1, 2, …, n) are given by  (11.77) and  (11.78). Let a < x 1 < … < x m < b be the points of sign changes of the function H n−1 and let (−1)n+1 H n−1(x) ≥ 0 for x ∈ (a, x 1).

  • If m is even, then the inequality

    $$\displaystyle{ \int _{a}^{b}f(x)dF_{ 1}(x) \leq \int _{a}^{b}f(x)dF_{ 2}(x), }$$
    (11.79)

    is not satisfied by all continuous n-convex functions \(f: [a,b] \rightarrow \mathbb{R}\) .

  • If m is odd, then the inequality  (11.79) is satisfied for all continuous n-convex functions \(f: [a,b] \rightarrow \mathbb{R}\) if, and only if,

    $$\displaystyle{ (-1)^{n+1}H_{ n}(x_{2}) \geq 0,\:\:\;(-1)^{n+1}H_{ n}(x_{4}) \geq 0,\:\;\ldots,\;\:(-1)^{n+1}H_{ n}(x_{m-1}) \geq 0. }$$
    (11.80)

In numerical analysis, some inequalities connected with quadrature operators are studied. These inequalities, called extremalities, are a particular case of the Hermite–Hadamard type inequalities. Many extremalities are known in numerical analysis (cf. [1, 7, 8] and the references therein). Numerical analysts prove them using suitable differentiability assumptions. As proved by Wąsowicz in the papers [44, 45, 47], for convex functions of higher order, some extremalities can be obtained without assumptions of this kind, using only the higher-order convexity itself. The support-type properties play the crucial role here. As we show in [36, 37], some extremalities can be proved using a probabilistic characterization. The extremalities which we study are known; however, our method, using the Ohlin lemma [31] and the Denuit–Lefèvre–Shaked theorem [13] on sufficient conditions for the convex stochastic ordering, seems to be quite easy. It is worth noticing that these theorems concern only sufficient conditions and cannot be used to prove some extremalities (see [36, 37]). In these cases, the results given in the paper [38] may be useful.

For a function \(f: [-1,1] \rightarrow \mathbb{R}\), we consider six operators approximating the integral mean value

$$\displaystyle{\mathcal{I}(f):= \tfrac{1} {2}\int \limits _{-1}^{1}f(x)dx.}$$

They are given by

$$\displaystyle\begin{array}{rcl} C(f)&:=& \tfrac{1} {3}\Big(f\big(-\tfrac{\sqrt{2}} {2} \big) + f(0) + f\big(\tfrac{\sqrt{2}} {2} \big)\Big), {}\\ \mathcal{G}_{2}(f)&:=& \tfrac{1} {2}\Big(f\big(-\tfrac{\sqrt{3}} {3} \big) + f\big(\tfrac{\sqrt{3}} {3} \big)\Big), {}\\ \mathcal{G}_{3}(f)&:=& \tfrac{4} {9}f(0) + \tfrac{5} {18}\Big(f\big(-\tfrac{\sqrt{15}} {5} \big) + f\big(\tfrac{\sqrt{15}} {5} \big)\Big), {}\\ \mathcal{L}_{4}(f)&:=& \tfrac{1} {12}\big(f(-1) + f(1)\big) + \tfrac{5} {12}\Big(f\big(-\tfrac{\sqrt{5}} {5} \big) + f\big(\tfrac{\sqrt{5}} {5} \big)\Big), {}\\ \mathcal{L}_{5}(f)&:=& \tfrac{16} {45}f(0) + \tfrac{1} {20}\big(f(-1) + f(1)\big) + \tfrac{49} {180}\Big(f\big(-\tfrac{\sqrt{21}} {7} \big) + f\big(\tfrac{\sqrt{21}} {7} \big)\Big),\ \mbox{and} {}\\ S(f)&:=& \tfrac{1} {6}\big(f(-1) + f(1)\big) + \tfrac{2} {3}f(0). {}\\ \end{array}$$

The operators \(\mathcal{G}_{2}\) and \(\mathcal{G}_{3}\) are connected with Gauss–Legendre rules. The operators \(\mathcal{L}_{4}\) and \(\mathcal{L}_{5}\) are connected with Lobatto quadratures. The operators S and C concern Simpson and Chebyshev quadrature rules, respectively. The operator \(\mathcal{I}\) stands for the integral mean value (see, e.g., [39, 4851]).

We will establish all possible inequalities between these operators in the class of higher-order convex functions.

Remark 11.17

Let X 2, X 3, Y 4, Y 5, U, V, and Z be random variables such that

$$\displaystyle\begin{array}{rcl} \mu _{X_{2}}& =& \frac{1} {2}\,\left (\delta _{-\frac{\sqrt{3}} {3} } +\delta _{\frac{\sqrt{3}} {3} }\right ), {}\\ \mu _{X_{3}}& =& \frac{4} {9}\,\delta _{0} + \frac{5} {18}\left (\delta _{-\frac{\sqrt{15}} {5} } +\delta _{\frac{\sqrt{15}} {5} }\right ), {}\\ \mu _{Y _{4}}& =& \frac{1} {12}\,(\delta _{-1} +\delta _{1}) + \frac{5} {12}\left (\delta _{-\frac{\sqrt{5}} {5} } +\delta _{\frac{\sqrt{5}} {5} }\right ), {}\\ \mu _{Y _{5}}& =& \frac{16} {45}\,\delta _{0} + \frac{1} {20}(\delta _{-1} +\delta _{1}) + \frac{49} {180}\left (\delta _{-\frac{\sqrt{21}} {7} } +\delta _{\frac{\sqrt{21}} {7} }\right ), {}\\ \mu _{U}& =& \frac{2} {3}\,\delta _{0} + \frac{1} {6}(\delta _{-1} +\delta _{1}), {}\\ \mu _{V }& =& \frac{1} {3}\,\left (\delta _{-\frac{\sqrt{2}} {2} } +\delta _{0} +\delta _{\frac{\sqrt{2}} {2} }\right ),\ \mbox{and} {}\\ \mu _{Z}(dx)& =& \frac{1} {2}\,\chi _{[-1,1]}(x)dx. {}\\ \end{array}$$

Then, we have

$$\displaystyle\begin{array}{rcl} & \mathcal{G}_{2}(f) = \mathbb{E}[f(X_{2})],\quad \mathcal{G}_{3}(f) = \mathbb{E}[f(X_{3})], & {}\\ & \mathcal{L}_{4}(f) = \mathbb{E}[f(Y _{4})],\quad \mathcal{L}_{5}(f) = \mathbb{E}[f(Y _{5})], & {}\\ & S(f) = \mathbb{E}[f(U)],\quad C(f) = \mathbb{E}[f(V )],\quad \mathcal{I}(f) = \mathbb{E}[f(Z)].& {}\\ \end{array}$$

Theorem 11.19

Let \(f: [-1,1] \rightarrow \mathbb{R}\) be 5-convex. Then,

$$\displaystyle{ \mathcal{G}_{3}(f) \leq \mathcal{ I}(f) \leq \mathcal{ L}_{4}(f), }$$
(11.81)
$$\displaystyle{ \mathcal{G}_{3}(f) \leq \mathcal{ L}_{5}(f) \leq \mathcal{ L}_{4}(f). }$$
(11.82)

Note that the inequalities (11.81) and (11.82) can be simply derived from Theorems 11.16 and 11.15 (see [38]).

Remark 11.18

The inequalities (11.82) can be found in [45, 47]. Wąsowicz [45] proved that in the class of 5-convex functions the operators \(\mathcal{G}_{2},C,S\) are not comparable either with each other or with \(\mathcal{G}_{3},\mathcal{L}_{4},\mathcal{L}_{5}\).

Theorem 11.20

Let \(f: [-1,1] \rightarrow \mathbb{R}\) be 3-convex. Then,

$$\displaystyle{ \mathcal{G}_{2}(f) \leq \mathcal{ I}(f) \leq S(f), }$$
(11.83)
$$\displaystyle{ \mathcal{G}_{2}(f) \leq C(f) \leq T(f) \leq S(f), }$$
(11.84)

where \(T \in \{\mathcal{ G}_{3},\mathcal{L}_{5}\}\) .

In [38], a new simple proof of Theorem 11.20 is given. Note that from Theorem 11.16, we obtain \(\mathcal{G}_{2}(f) \leq \mathcal{I}(f)\) and \(\mathcal{I}(f) \leq S(f)\), which implies (11.83). From Theorem 11.14, we obtain \(\mathcal{G}_{2}(f) \leq C(f)\). By Theorem 11.15, we get \(C(f) \leq \mathcal{ G}_{3}(f)\), \(C(f) \leq \mathcal{ L}_{5}(f)\), \(\mathcal{G}_{3}(f) \leq S(f)\), \(\mathcal{L}_{5}(f) \leq S(f)\).
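These orderings can also be illustrated numerically. The following short script is only a sanity check added here (it is not part of [36, 37, 38]); the test functions \(x^{6}\) and \(x^{4}\) are sample 5-convex and 3-convex functions, since their sixth and fourth derivatives are nonnegative.

```python
from math import sqrt

# The six operators of this section, acting on a function f: [-1, 1] -> R.
def C(f):  return (f(-sqrt(2)/2) + f(0) + f(sqrt(2)/2)) / 3
def G2(f): return (f(-sqrt(3)/3) + f(sqrt(3)/3)) / 2
def G3(f): return 4/9 * f(0) + 5/18 * (f(-sqrt(15)/5) + f(sqrt(15)/5))
def L4(f): return (f(-1) + f(1)) / 12 + 5/12 * (f(-sqrt(5)/5) + f(sqrt(5)/5))
def L5(f): return (16/45) * f(0) + (f(-1) + f(1)) / 20 \
                  + 49/180 * (f(-sqrt(21)/7) + f(sqrt(21)/7))
def S(f):  return (f(-1) + f(1)) / 6 + 2/3 * f(0)

def I(f, n=20000):
    # Midpoint-rule approximation of the integral mean (1/2) * int_{-1}^{1} f(x) dx.
    h = 2.0 / n
    return 0.5 * h * sum(f(-1 + (k + 0.5) * h) for k in range(n))

f5 = lambda x: x**6  # sixth derivative 720 >= 0, hence 5-convex
f3 = lambda x: x**4  # fourth derivative 24 >= 0, hence 3-convex

# Theorem 11.19: G3 <= I <= L4 and G3 <= L5 <= L4 for 5-convex functions.
assert G3(f5) <= I(f5) <= L4(f5) and G3(f5) <= L5(f5) <= L4(f5)
# Theorem 11.20: G2 <= I <= S and G2 <= C <= T <= S for T in {G3, L5}, for 3-convex functions.
assert G2(f3) <= I(f3) <= S(f3)
assert G2(f3) <= C(f3) <= G3(f3) <= S(f3) and C(f3) <= L5(f3) <= S(f3)
```

Of course, such a check only verifies particular instances; the general statements are those proved above.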

Remark 11.19

The inequalities (11.84) can be found in [44]. Wąsowicz [44] proved that the quadratures \(\mathcal{L}_{4}\), \(\mathcal{L}_{5}\), and \(\mathcal{G}_{3}\) are not comparable in the class of 3-convex functions.

Remark 11.20

Moreover, Wąsowicz [44, 46] proved that

$$\displaystyle{ C(f) \leq \mathcal{ L}_{4}(f), }$$
(11.85)

if f is 3-convex.

The proof given in [44] is rather complicated and was done using computer software. In [46], a new proof of (11.85) can be found, without the use of any computer software, based on the spline approximation of convex functions of higher order. It is worth noticing that Proposition 11.11 cannot be applied to prove (11.85), because the distribution functions F V and \(F_{Y _{4}}\) cross exactly five times.

In [38], the following new proof of (11.85), based on Corollary 11.4, is given. Note that we have F 1 = F V , \(F_{2} = F_{Y _{4}}\), and \(H_{0} = F = F_{Y _{4}} - F_{V }\). By (11.77) and (11.78), we obtain

$$\displaystyle\begin{array}{rcl} H_{3}(x)& =& \tfrac{1} {72}\left \{\left (-1 - x\right )_{+}^{3} + \left (1 - x\right )_{ +}^{3} + 5\left [\left (-\tfrac{\sqrt{5}} {5} - x\right )_{+}^{3} + \left (\tfrac{\sqrt{5}} {5} - x\right )_{+}^{3}\right ]\right. {}\\ & & {}\\ \quad & \, & \left.-4\left [\left (-\tfrac{\sqrt{2}} {2} - x\right )_{+}^{3} + \left (-x\right )_{ +}^{3} + \left (\tfrac{\sqrt{2}} {2} - x\right )_{+}^{3}\right ]\right \}, {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} H_{2}(x)& =& \tfrac{1} {24}\left \{-\left (-1 - x\right )_{+}^{2} -\left (1 - x\right )_{ +}^{2} - 5\left [\left (-\tfrac{\sqrt{5}} {5} - x\right )_{+}^{2} + \left (\tfrac{\sqrt{5}} {5} - x\right )_{+}^{2}\right ]\right. {}\\ & & {}\\ \quad & \, & \left.+4\left [\left (-\tfrac{\sqrt{2}} {2} - x\right )_{+}^{2} + \left (-x\right )_{ +}^{2} + \left (\tfrac{\sqrt{2}} {2} - x\right )_{+}^{2}\right ]\right \}. {}\\ \end{array}$$

Similarly, H 1(x) can be obtained from the equality \(H_{1}(x) = H_{2}'(x)\). We compute that \(x_{1} = -1 -\sqrt{5} + 2\sqrt{2}\), x 2 = 0, and \(x_{3} = 1 + \sqrt{5} - 2\sqrt{2}\) are the points of sign changes of the function H 2(x). It is not difficult to check that the assumptions of Corollary 11.4 are satisfied. Since

$$\displaystyle{(-1)^{3+1}H_{ 3}(x_{2}) = (-1)^{3+1}H_{ 3}(0) = \tfrac{1} {72} + \tfrac{\sqrt{5}} {360} -\tfrac{\sqrt{2}} {72}> 0,}$$

it follows that the inequalities (11.80) are satisfied. From Corollary 11.4, we conclude that the relation (11.85) holds.
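Numerically, \(\tfrac{1} {72} + \tfrac{\sqrt{5}} {360} -\tfrac{\sqrt{2}} {72} = \tfrac{5 + \sqrt{5} - 5\sqrt{2}} {360} \approx 0.00046 > 0\), which confirms the sign condition checked above.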