Abstract
In this article, we present some new improvements of Jensen-type inequalities via 4-convex and Green functions. These improvements are demonstrated in discrete as well as in integral versions. The aforesaid results enable us to give improvements of Jensen's and the Jensen–Steffensen inequalities. We also present improvements of the reverse Jensen and reverse Jensen–Steffensen inequalities. Then, as consequences of the improved Jensen inequality, we deduce new bounds for the power, geometric and quasi-arithmetic means, obtain bounds for the Hermite–Hadamard gap, and give improvements of the Hölder inequality. Finally, as applications of the improved Jensen inequality, we present new bounds for various divergences and for the Zipf–Mandelbrot entropy.
1 Introduction and preliminaries
Jensen’s inequality is one of the most significant inequalities in the literature on mathematical inequalities for convex functions. Several well-known mathematical inequalities, for example the Hölder, Minkowski, Ky Fan, Levinson, Hermite–Hadamard and Young inequalities, can be deduced from it. This inequality can be utilized to solve certain optimization problems in modern analysis. In more detail, it can be used to estimate the Csiszár divergence and the Zipf–Mandelbrot entropy [1, 4, 5, 7, 15, 16], and it helps to investigate the stability of time-delayed systems [18]; moreover, dynamically consistent nonlinear evaluations in probability spaces, Rao–Blackwell estimates for certain parameters in their respective probability spaces, and superlinear expectations with applications in economics can be investigated through this inequality [19, 20, 27]. Because of its significant role in modern applied analysis, several mathematicians have obtained useful results related to Jensen’s inequality over the last couple of decades [2, 6, 8,9,10,11,12, 14, 17, 21, 22, 24]. In what follows, we present some improvements of Jensen-type inequalities, in discrete as well as in integral form, via 4-convex and Green functions.
In the following theorem, the discrete form of Jensen’s inequality is given; its integral version in the Riemann sense can be found in [15].
Theorem 1.1
Let \(T:[\rho _1,\rho _2]\rightarrow \mathbb {R}\) be a convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_k\ge 0\) for \(k=1,2,\dots ,m\) with \(U_m=\sum _{k=1}^{m}u_k>0\), then
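the classical discrete Jensen inequality holds:

$$T\left( \frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\right) \le \frac{1}{U_m}\sum _{k=1}^{m}u_kT(s_k).$$(1.1)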
Concerning Jensen’s inequality, a natural question arises: is it possible to relax the non-negativity condition on the \(u_k \) \((k=1,2,\dots ,m)\) at the expense of restricting the \(s_k \) \((k=1,2,\dots ,m)\) more severely? The answer was given by Steffensen in the following theorem [26].
Theorem 1.2
Let \(T:[\rho _1,\rho _2]\rightarrow \mathbb {R}\) be a convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_k\in \mathbb {R},\) \(k=1,2,\dots ,m\). If \(s_1\le s_2\le \cdots \le s_m\) or \(s_1\ge s_2\ge \cdots \ge s_m\) and
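the weights satisfy the classical Jensen–Steffensen condition, which with the partial sums \(U_k=\sum _{j=1}^{k}u_j\) reads

$$0\le U_{k}\le U_{m}\quad (k=1,2,\dots ,m),\qquad U_{m}>0,$$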
then (1.1) holds.
The integral form of the above theorem can be seen in [13].
The following reverse of Jensen’s inequality has been given in [25, p. 83]:
Theorem 1.3
Let \(T:[\rho _1,\rho _2]\rightarrow \mathbb {R}\) be a convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_1>0\), \(u_k\le 0\) for \(k=2,3,\dots ,m\) with \(U_m=\sum _{k=1}^{m}u_k>0\). Also, let \(\frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\in [\rho _1,\rho _2],\) then
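the reverse of Jensen’s inequality holds:

$$\frac{1}{U_m}\sum _{k=1}^{m}u_kT(s_k)\le T\left( \frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\right) .$$(1.2)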
The following theorem presents a reverse of the Jensen–Steffensen inequality, also given in [25, p. 83]:
Theorem 1.4
Let \(T:[\rho _1,\rho _2]\rightarrow \mathbb {R}\) be a convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_k\in \mathbb {R}\) for \(k=1,2,\dots ,m\). Let \(U_k=\sum _{j=1}^{k}u_j\) for \(k=1,2,\dots ,m\) with \(U_m>0\) and \(\frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\in [\rho _1,\rho _2]\). If the m-tuple \((s_1,s_2,\dots ,s_m)\) is monotonic, and there exists a number \(p\in \{1,2,\dots ,m\}\) such that
then (1.2) holds.
To derive the main results, we need the following Green functions \(G_{i}\) for \(i=1,2,3,4,5,\) defined on \([\rho _{1},\rho _{2}]\times [\rho _{1},\rho _{2}]\) [23]:
These functions are continuous and convex with respect to both the variables z and x. Also, the following identities hold, for a function \(T\in C^{2}[\rho _{1},\rho _{2}]\) [23]:
Lemma 1.5
Let \(T\in C^{2}[\rho _{1},\rho _{2}],\) then the following identities hold.
where \(G_{i}\), for \(i=1,2,3,4,5\) are given in (1.3)–(1.7) respectively.
To present the main results, the following inequality (1.13), a simple consequence of Jensen’s inequality, will be useful.
Lemma 1.6
Let \(T:[\rho _{1},\rho _{2}]\rightarrow \mathbb {R}\) be a convex function and p(x) be a nonnegative weight function with \(\int _{\rho _{1}}^{\rho _{2}}p(x)dx>0,\) then
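the integral Jensen inequality holds:

$$T\left( \frac{\int _{\rho _{1}}^{\rho _{2}}x\,p(x)dx}{\int _{\rho _{1}}^{\rho _{2}}p(x)dx}\right) \le \frac{\int _{\rho _{1}}^{\rho _{2}}p(x)T(x)dx}{\int _{\rho _{1}}^{\rho _{2}}p(x)dx}.$$(1.13)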
2 Main results
We begin with our first main result.
Theorem 2.1
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],~u_{k}\in \mathbb {R}\) for \(k=1,2,\dots ,m\) with \(U_{m}:=\sum _{k=1}^{m}u_{k}\ne 0\) and \(\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k}\in [\rho _{1},\rho _{2}]\). Also, let \(G_{i} (i=1,2,3,4,5)\) be as defined in (1.3)–(1.7). If
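for all \(x\in [\rho _{1},\rho _{2}]\) the expression appearing in the proofs below is nonnegative, that is,

$$\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}G_{i}(s_{k},x)-G_{i}\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k},x\right) \ge 0,$$(2.14)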
then
If the reverse inequality holds in (2.14), then the reverse inequality holds in (2.15).
If T is a 4-concave function, then the reverse inequality holds in (2.15).
Proof
Using (1.8)-(1.12) in \(\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}T(s_{k})-T\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k}\right) \), we obtain
Since (2.14) holds and T is 4-convex, that is, \(T''\) is convex, applying the definition of convexity to the right-hand side of (2.16) we obtain
Now, if \(T(x)=\frac{\rho _{2}x^{2}}{2}-\frac{x^{3}}{6},\) then \(T''(x)=\rho _{2}-x\) and using (2.16) for these functions we get
Similarly, using (2.16) for \(T(x)=\frac{x^{3}}{6}-\frac{\rho _{1}x^{2}}{2},\) we get
Using (2.18) and (2.19) in (2.17), we get (2.15). \(\square \)
As an application of Theorem 2.1, we give an improvement of Jensen’s inequality.
Theorem 2.2
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],~u_{k}\ge 0\) for \(k=1,2,\dots ,m\) with \(\sum _{k=1}^{m}u_{k}=U_{m}>0,\) then (2.15) holds. If T is a 4-concave function, then the reverse inequality holds in (2.15).
Proof
Since \(u_k\ge 0\) for all k with \(U_m>0\) and each \(G_{i}\) is convex, Jensen’s inequality implies that (2.14) holds. Applying Theorem 2.1 then gives (2.15). \(\square \)
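The Jensen step used in this proof can be illustrated numerically: for nonnegative weights, the weighted Jensen gap of any convex function is nonnegative. The following sketch uses arbitrary illustrative weights and nodes, with simple convex functions standing in for the Green functions \(G_{i}(\cdot ,x)\):

```python
# Weighted Jensen gap: (1/U_m) * sum u_k g(s_k) - g((1/U_m) * sum u_k s_k).
# For a convex g and nonnegative weights, this gap is nonnegative,
# which is how (2.14) is obtained for each convex G_i.

def jensen_gap(g, u, s):
    Um = sum(u)
    mean = sum(uk * sk for uk, sk in zip(u, s)) / Um
    return sum(uk * g(sk) for uk, sk in zip(u, s)) / Um - g(mean)

u = [0.5, 1.0, 2.5]            # illustrative nonnegative weights
s = [0.2, 1.7, 3.1]            # illustrative nodes

print(jensen_gap(lambda x: x * x, u, s) >= 0)       # convex stand-in
print(jensen_gap(lambda x: abs(x - 1), u, s) >= 0)  # piecewise-linear convex stand-in
```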
As applications of Theorem 2.2, we give two new upper bounds for the Hölder difference.
Corollary 2.3
Let \(q>1,~p\not \in (2,3)\) such that \(\frac{1}{q}+\frac{1}{p}=1.\) Also, let \([\rho _{1},\rho _{2}]\) be a positive interval and \((a_{1},a_{2},\dots ,a_{m}),(b_{1},b_{2},\dots ,b_{m})\) be two positive m-tuples with \(\frac{\sum _{k=1}^{m}a_{k}b_{k}}{\sum _{k=1}^{m}b^{q}_{k}},~a_{k}b_{k}^{-\frac{q}{p}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Using (2.15) for \(T(x)=x^{p},~u_{k}=b_{k}^{q}\) and \(s_{k}=a_{k}b_{k}^{-\frac{q}{p}},\) we derive
By utilizing the inequality \(\xi ^{e}-\zeta ^{e}\le (\xi -\zeta )^{e},~0\le \zeta \le \xi ,~e\in [0,1]\) for \(\xi =\Big (\sum _{k=1}^{m}a_{k}^{p}\Big ) \Big (\sum _{k=1}^{m}b_{k}^{q}\Big )^{p-1},~\zeta =\Big (\sum _{k=1}^{m}a_{k}b_{k}\Big )^{p}\) and \(e=\frac{1}{p},\) we obtain
Now using (2.22) in (2.21), we get (2.20). \(\square \)
Corollary 2.4
Let \(0<p<1,~q=\frac{p}{p-1}\) such that \(\frac{1}{p}\not \in (2,3).\) Also, let \([\rho _{1},\rho _{2}]\) be a positive interval and \((a_{1},a_{2},\dots ,a_{m}),(b_{1},b_{2},\dots ,b_{m})\) be two positive m-tuples with \(\frac{\sum _{k=1}^{m}a_{k}^{p}}{\sum _{k=1}^{m}b^{q}_{k}},~a_{k}^{p}b_{k}^{-q}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
For the given values of p, the function \(T(x)=x^{\frac{1}{p}}\) for \(x\in [\rho _{1},\rho _{2}],\) is convex as well as 4-convex. Therefore by using (2.15) for \(T(x)=x^{\frac{1}{p}},~u_{k}=b_{k}^{q}\) and \(s_{k}=a_{k}^{p}b_{k}^{-q},\) we get (2.23). \(\square \)
Definition 2.5
Let \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m})\) and \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Then the power mean of order \(\alpha \in \mathbb {R}\) is defined as
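In the standard notation used throughout, this means

$$\mathcal {M}_{\alpha }(\mathbf {u},\mathbf {s})=\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k}^{\alpha }\right) ^{\frac{1}{\alpha }}\quad (\alpha \ne 0),\qquad \mathcal {M}_{0}(\mathbf {u},\mathbf {s})=\left( \prod _{k=1}^{m}s_{k}^{u_{k}}\right) ^{\frac{1}{U_{m}}}.$$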
As an application of Theorem 2.2, in the following corollary we present a bound for the power mean.
Corollary 2.6
Let \(0<\rho _{1}<\rho _{2}\) and \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m}),\) \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Also, let r, t be two nonzero real numbers such that
(i) If \(r>0\) with \(3r\le t\) or \(r\le t\le 2r\) or \(t<0,\) then we have
$$\begin{aligned}&\mathcal {M}^{t}_{t}(\mathbf {u},\mathbf {s})-\mathcal {M}^{t}_{r}(\mathbf {u},\mathbf {s}) \nonumber \\&\quad \le \frac{t(t-r)}{6r^{2}(\rho _{2}-\rho _{1})}\left( \rho ^{\frac{t}{r}-2}_{2}-\rho ^{\frac{t}{r}-2}_{1}\right) \left( \mathcal {M}^{3r}_{3r}(\mathbf {u},\mathbf {s})-\mathcal {M}^{3r}_{r}(\mathbf {u},\mathbf {s})\right) \nonumber \\&\qquad +\frac{t(t-r)}{2r^{2}(\rho _{2}-\rho _{1})}\left( \rho _{2}\rho ^{\frac{t}{r}-2}_{1}-\rho _{1}\rho ^{\frac{t}{r}-2}_{2}\right) \left( \mathcal {M}^{2r}_{2r}(\mathbf {u},\mathbf {s})-\mathcal {M}^{2r}_{r}(\mathbf {u},\mathbf {s})\right) . \end{aligned}$$(2.24)

(ii) If \(r<0\) with \(3r\ge t\) or \(r\ge t\ge 2r\) or \(t>0,\) then (2.24) holds.
(iii) If \(r>0\) with \(2r<t<3r\) or \(r<0\) with \(3r<t<2r,\) then the reverse inequality holds in (2.24).
Proof
(i) Let \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}],\) then the function T is 4-convex. Therefore, using (2.15) for \(T(x)=x^{\frac{t}{r}}\) and \(s_{k}\rightarrow s^{r}_{k},\) we get (2.24).
(ii) In this case too, the function \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}]\) is 4-convex; therefore, adopting the procedure of part (i), we obtain (2.24).
(iii) For such values of r, t the function \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}]\) is 4-concave. Thus, following the procedure of part (i) but with T a 4-concave function, we obtain the reverse inequality in (2.24).
\(\square \)
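As a quick numerical sanity check of case (i), both sides of (2.24) can be evaluated for concrete data. The weights, nodes and exponents below are illustrative choices (with \(r=1\), \(t=4\), so \(3r\le t\)), and \([\rho _{1},\rho _{2}]\) is taken to bracket the values \(s_{k}^{r}\):

```python
# Numerical check of (2.24), case (i): r > 0 with 3r <= t.
# M(alpha) below is M_alpha(u, s)^alpha = (1/U_m) * sum u_k s_k^alpha,
# so M_r^t = M(r)**(t/r), M_r^{3r} = M(r)**3, M_r^{2r} = M(r)**2.

u = [1.0, 2.0, 3.0]   # illustrative positive weights
s = [1.0, 2.0, 3.0]   # illustrative positive nodes
r, t = 1.0, 4.0       # r > 0 and 3r <= t

def M(alpha):
    return sum(uk * sk**alpha for uk, sk in zip(u, s)) / sum(u)

rho1, rho2 = min(s)**r, max(s)**r   # interval containing all s_k^r

lhs = M(t) - M(r)**(t / r)
rhs = (t*(t - r) / (6*r**2*(rho2 - rho1))
       * (rho2**(t/r - 2) - rho1**(t/r - 2)) * (M(3*r) - M(r)**3)
       + t*(t - r) / (2*r**2*(rho2 - rho1))
       * (rho2*rho1**(t/r - 2) - rho1*rho2**(t/r - 2)) * (M(2*r) - M(r)**2))

print(lhs <= rhs + 1e-9)
```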
The following corollary provides an interesting relation between different means as an application of Theorem 2.2.
Corollary 2.7
Let \(0<\rho _{1}<\rho _{2}\) and \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m}),\) \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k},\) then
Proof
(i) Let \(T(x)=-\ln x\) for \(x\in [\rho _{1},\rho _{2}],\) then T is 4-convex. Therefore, using (2.15) for this function, we get (2.25).
(ii) Using (2.15) for the 4-convex function \(T(x)=e^{x},~x\in [\rho _{1},\rho _{2}]\) and \(s_{k}=\ln s_{k},\) we get (2.26).
\(\square \)
Definition 2.8
Let \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m})\) and \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Then, for a strictly monotone, continuous function \(\varphi \), the quasi-arithmetic mean is defined as
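In the standard notation, this means

$$\mathcal {M}_{\varphi }(\mathbf {u},\mathbf {s})=\varphi ^{-1}\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}\varphi (s_{k})\right) .$$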
As an application of Theorem 2.2, in the following corollary we present a bound for the quasi-arithmetic mean.
Corollary 2.9
Let \(0<\rho _{1}<\rho _{2},\) and \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m}),~\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Also, let \(\varphi \) be a strictly monotone, continuous function and assume that \(\beta \circ \varphi ^{-1}\) is a 4-convex function on \([\rho _{1},\rho _{2}],\) then the following inequality holds
Proof
Inequality (2.27) follows from (2.15) with the substitutions \(s_{k}\rightarrow \varphi (s_{k})\) and \(T\rightarrow \beta \circ \varphi ^{-1}.\) \(\square \)
As an application of Theorem 2.1, we obtain an improvement of the Jensen–Steffensen inequality.
Corollary 2.10
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],\) \(u_{k}\in \mathbb {R}\) for \(k=1,2,\dots ,m\). If \(s_1\le s_2\le \cdots \le s_m\) or \(s_1\ge s_2\ge \cdots \ge s_m\) and
then (2.15) holds. If T is a 4-concave function, then the reverse inequality holds in (2.15).
Proof
Since the Jensen–Steffensen conditions hold and each \(G_{i}\) is convex, the Jensen–Steffensen inequality implies that (2.14) holds. Applying Theorem 2.1, we get (2.15). \(\square \)
In the following corollary, we present a refinement of the reverse Jensen inequality under the conditions stated in Theorem 1.3.
Corollary 2.11
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_1>0\), \(u_k\le 0\) for \(k=2,3,\dots ,m\) with \(U_m=\sum _{k=1}^{m}u_k>0\). Also, let \(\frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\in [\rho _1,\rho _2]\), then the reverse inequality in (2.15) holds.
Proof
Since each \(G_i\) \((i=1,2,3,4,5)\) is a convex function, Theorem 1.3 gives \(\bar{G}(x):=\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}G_{i}(s_{k},x)-G_{i}\Big (\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k},x\Big )\le 0\). Hence, using Theorem 2.1, we obtain the reverse inequality in (2.15). \(\square \)
In the following corollary, we present a refinement of the reverse of the Jensen–Steffensen inequality under the conditions stated in Theorem 1.4.
Corollary 2.12
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function, \(s_k\in [\rho _1,\rho _2]\), \(u_k\in \mathbb {R}\) for \(k=1,2,\dots ,m\). Let \(U_k=\sum _{j=1}^{k}u_j\) for \(k=1,2,\dots ,m\) with \(U_m>0\) and \(\frac{1}{U_m}\sum _{k=1}^{m}u_ks_k\in [\rho _1,\rho _2]\). If the m-tuple \((s_1,s_2,\dots ,s_m)\) is monotonic, and there exists a number \(p\in \{1,2,\dots ,m\}\) such that
then the reverse inequality in (2.15) holds.
Proof
The proof is similar to the proof of Corollary 2.11, but using Theorem 1.4 instead of Theorem 1.3. \(\square \)
The following theorem is the integral version of Theorem 2.1.
Theorem 2.13
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function. Also, let \(f_{1},f_{2}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be two integrable functions such that \(f_{1}(y)\in [\rho _{1},\rho _{2}]\) for all \(y\in [a_{1},a_{2}]\) with \(D:=\int _{a_{1}}^{a_{2}}f_{2}(y)dy\ne 0\) and \(\frac{1}{D}\int _{a_{1}}^{a_{2}}f_{1}(y)f_{2}(y)dy\in [\rho _{1},\rho _{2}]\). Suppose that \(G_{i} (i=1,2,3,4,5)\) are defined as in (1.3)–(1.7), and
then
If the reverse inequality holds in (2.28), then the reverse inequality holds in (2.29).
If T is a 4-concave function, then the reverse inequality holds in (2.29).
The following corollary is the integral version of Theorem 2.2.
Corollary 2.14
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function. Also, let \(f_{1},f_{2}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be two integrable functions such that \(f_{1}(y)\in [\rho _{1},\rho _{2}]\) and \(f_{2}(y)\) is nonnegative for all \(y\in [a_{1},a_{2}]\) with \(D:=\int _{a_{1}}^{a_{2}}f_{2}(y)dy>0,\) then (2.29) holds. If T is a 4-concave function, then the reverse inequality holds in (2.29).
Remark 2.15
As applications of Corollary 2.14, two new bounds for the Hölder difference in integral form can be obtained; the procedure is similar to that in Corollary 2.3 and Corollary 2.4.
Remark 2.16
As applications of Corollary 2.14, the integral versions of Corollary 2.6, Corollary 2.7 and Corollary 2.9 can be presented.
As an application of Corollary 2.14, we present a bound for the Hermite–Hadamard gap.
Corollary 2.17
Let \(\psi \in C^{2}[a_{1},a_{2}]\) be a 4-convex function, then
Proof
Using (2.29) for \(\psi =T,~[\rho _{1},\rho _{2}]=[a_{1},a_{2}]\) and \(f_{2}(y)=1,~f_{1}(y)=y\) for all \(y\in [a_{1},a_{2}],\) we get (2.30). \(\square \)
In the following corollary, we obtain a refinement of the integral Jensen–Steffensen inequality.
Corollary 2.18
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function. Also, let \(f_{1},f_{2}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be two integrable functions such that \(f_{1}(y)\in [\rho _{1},\rho _{2}]\) for all \(y\in [a_{1},a_{2}].\) If \(f_1\) is a monotonic function on \([a_{1},a_{2}]\) and \(f_{2}\) satisfies
then (2.29) holds.
If T is a 4-concave function, then the reverse inequality holds in (2.29).
In the following theorem, we present another improvement of Jensen’s type inequality in discrete form.
Theorem 2.19
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],~u_{k}\in \mathbb {R}\) for \(k=1,2,\dots ,m\). If (2.14) holds, then
If the reverse inequality holds in (2.14), then the reverse inequality holds in (2.31).
If T is a 4-concave function, then the reverse inequality holds in (2.31).
Proof
Using (1.13) with \(p(x)=\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}G_{i}(s_{k},x)-G_{i}\Big (\frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k},x\Big )\) and with T replaced by \(T''\), and then with the help of (2.16), we get the following inequality
where \(\bar{G_{i}}(x)=\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}G_{i}(s_{k},x)-G_{i}\left( \frac{1}{U_{m}}\sum _{k=1}^{m}u_{k}s_{k},x\right) \right) .\)
Now, let \(T(x)=\frac{x^{2}}{2},\) then \(T''(x)=1\) and so using (2.16) for these functions we obtain
Also, let \(T(x)=\frac{x^{3}}{6},\) then \(T''(x)=x\) and so using (2.16) for these functions we obtain
Using (2.33) and (2.34) in (2.32), we get (2.31). \(\square \)
As an application of the above theorem, we give a refinement of Jensen’s inequality.
Corollary 2.20
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],~u_{k}\ge 0\) for \(k=1,2,\dots ,m\) with \(\sum _{k=1}^{m}u_{k}=U_{m}>0,\) then (2.31) holds.
If T is a 4-concave function, then the reverse inequality holds in (2.31).
Proof
The proof is analogous to the proof of Theorem 2.2. \(\square \)
The following corollary provides a refinement of the Hölder-type inequality as an application of Corollary 2.20.
Corollary 2.21
Let \(0<p<1,~q=\frac{p}{p-1}\) such that \(\frac{1}{p}\not \in (2,3).\) Also, let \([\rho _{1},\rho _{2}]\) be a positive interval and \((a_{1},a_{2},\dots ,a_{m}),(b_{1},b_{2},\dots ,b_{m})\) be two positive m-tuples with \(\frac{\sum _{k=1}^{m}a_{k}^{p}}{\sum _{k=1}^{m}b^{q}_{k}},~a_{k}^{p}b_{k}^{-q}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m.\) Then
Proof
For the given values of p, the function \(T(x)=x^{\frac{1}{p}}\) for \(x\in [\rho _{1},\rho _{2}]\) is convex as well as 4-convex. Using (2.31) for \(T(x)=x^{\frac{1}{p}},~u_{k}=b_{k}^{q}\) and \(s_{k}=a_{k}^{p}b_{k}^{-q},\) we get (2.35). \(\square \)
As an application of Corollary 2.20, in the following corollary we present another bound for the power mean.
Corollary 2.22
Let \(0<\rho _{1}<\rho _{2}\) and \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m}),\) \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Also, let r, t be two nonzero real numbers such that
(i) If \(r>0\) with \(3r\le t\) or \(r\le t\le 2r\) or \(t<0,\) then
$$\begin{aligned}&\mathcal {M}^{t}_{t}(\mathbf {u},\mathbf {s})-\mathcal {M}^{t}_{r}(\mathbf {u},\mathbf {s})\nonumber \\&\quad \ge \frac{t(t-r)}{2r^{2}}\left( \mathcal {M}^{2r}_{2r}(\mathbf {u},\mathbf {s})-\mathcal {M}^{2r}_{r}(\mathbf {u},\mathbf {s})\right) \left( \frac{\mathcal {M}^{3r}_{3r}(\mathbf {u},\mathbf {s})-\mathcal {M}^{3r}_{r}(\mathbf {u},\mathbf {s})}{3\left( \mathcal {M}^{2r}_{2r}(\mathbf {u},\mathbf {s})-\mathcal {M}^{2r}_{r}(\mathbf {u},\mathbf {s})\right) }\right) ^{\frac{t}{r}-2}. \end{aligned}$$(2.36)

(ii) If \(r<0\) with \(3r\ge t\) or \(r\ge t\ge 2r\) or \(t>0,\) then we again get (2.36).
(iii) If \(r>0\) with \(2r<t<3r\) or \(r<0\) with \(3r<t<2r,\) then the reverse inequality holds in (2.36).
Proof
(i) Let \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}],\) then the function T is 4-convex. Therefore, using (2.31) for \(T(x)=x^{\frac{t}{r}}\) and \(s_{k}\rightarrow s^{r}_{k},\) we get (2.36).
(ii) In this case too, the function \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}]\) is 4-convex; therefore, adopting the procedure of part (i), we obtain (2.36).
(iii) For such values of r, t the function \(T(x)=x^{\frac{t}{r}}\) for \(x\in [\rho _{1},\rho _{2}]\) is 4-concave. Thus, following the procedure of part (i) but with T a 4-concave function, we obtain the reverse inequality in (2.36).\(\square \)
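Analogously to the check after Corollary 2.6, one can test the lower bound (2.36) numerically; the data below are illustrative (with \(r=1\), \(t=4\), so case (i) applies):

```python
# Numerical check of the lower bound (2.36), case (i): r > 0 with 3r <= t.
# M(alpha) is M_alpha(u, s)^alpha = (1/U_m) * sum u_k s_k^alpha,
# so M_r^t = M(r)**(t/r).

u = [1.0, 2.0, 3.0]   # illustrative positive weights
s = [1.0, 2.0, 3.0]   # illustrative positive nodes
r, t = 1.0, 4.0       # r > 0 and 3r <= t

def M(alpha):
    return sum(uk * sk**alpha for uk, sk in zip(u, s)) / sum(u)

lhs = M(t) - M(r)**(t / r)
ratio = (M(3*r) - M(r)**3) / (3 * (M(2*r) - M(r)**2))
rhs = t*(t - r) / (2*r**2) * (M(2*r) - M(r)**2) * ratio**(t/r - 2)

print(lhs >= rhs - 1e-9)
```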
The following corollary provides an interesting relationship between different means as an application of Corollary 2.20.
Corollary 2.23
Let \(0<\rho _{1}<\rho _{2},\) and \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m}),\) \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k},\) then
Proof
(i) Using (2.31) for the 4-convex function \(T(x)=-\ln x,~x\in [\rho _{1},\rho _{2}],\) we get (2.37).
(ii) Let \(T(x)=e^{x}\) for \(x\in [\rho _{1},\rho _{2}],\) then T is a 4-convex function. Thus, using (2.31) for \(T(x)=e^{x}\) and \(s_{k}=\ln s_{k},\) we get (2.38).\(\square \)
As an application of Corollary 2.20, in the following corollary we present a bound for the quasi-arithmetic mean.
Corollary 2.24
Let \(\mathbf {u}=(u_{1},u_{2},\dots ,u_{m})\) and \(\mathbf {s}=(s_{1},s_{2},\dots ,s_{m})\) be two positive m-tuples with \(U_{m}=\sum _{k=1}^{m}u_{k}.\) Also, let \(\varphi \) be a strictly monotone, continuous function and assume that \(\beta \circ \varphi ^{-1}\) is a 4-convex function, then the following inequality holds
Proof
Inequality (2.39) follows from (2.31) with the substitutions \(s_{k}\rightarrow \varphi (s_{k})\) and \(T\rightarrow \beta \circ \varphi ^{-1}.\) \(\square \)
As an application of Theorem 2.19, we give a refinement of the Jensen–Steffensen inequality.
Corollary 2.25
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(s_{k}\in [\rho _{1},\rho _{2}],~u_{k}\in \mathbb {R}\) for \(k=1,2,\dots ,m\). If \(s_1\le s_2\le \cdots \le s_m\) or \(s_1\ge s_2\ge \cdots \ge s_m\) and
then (2.31) holds.
If T is a 4-concave function, then the reverse inequality holds in (2.31).
Proof
The proof is analogous to the proof of Corollary 2.10. \(\square \)
Corollary 2.26
Under the assumptions of Corollary 2.11, the reverse inequality in (2.31) holds.
Proof
The idea of the proof is similar to the proof of Corollary 2.11. \(\square \)
Corollary 2.27
Under the assumptions of Corollary 2.12, the reverse inequality in (2.31) holds.
Proof
The idea of the proof is similar to the proof of Corollary 2.12. \(\square \)
In the following theorem, we state the integral version of Theorem 2.19.
Theorem 2.28
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function. Also, let \(f_{1},f_{2}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be two integrable functions such that \(f_{1}(y)\in [\rho _{1},\rho _{2}]\) for all \(y\in [a_{1},a_{2}]\) with \(D:=\int _{a_{1}}^{a_{2}}f_{2}(y)dy\ne 0\) and \(\frac{1}{D}\int _{a_{1}}^{a_{2}}f_{1}(y)f_{2}(y)dy\in [\rho _{1},\rho _{2}]\). Suppose that the inequality (2.28) holds, then
If the reverse inequality holds in (2.28), then the reverse inequality holds in (2.40).
If T is a 4-concave function, then the reverse inequality holds in (2.40).
As an application of Theorem 2.28, we give a refinement of Jensen’s inequality.
Corollary 2.29
Let \(T\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function. Also, let \(f_{1}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be an integrable function such that \(f_{1}(y)\in [\rho _{1},\rho _{2}]\) for all \(y\in [a_{1},a_{2}]\) and \(f_{2}:[a_{1},a_{2}]\rightarrow \mathbb {R}\) be a nonnegative function with \(\int _{a_{1}}^{a_{2}}f_{2}(y)dy=D>0,\) then (2.40) holds. If T is a 4-concave function, then the reverse inequality holds in (2.40).
Remark 2.30
Similarly, we can present an integral version of Corollary 2.25.
Remark 2.31
Adopting the procedure of Corollary 2.21, one can present a refinement of the Hölder type inequality in integral form as an application of Corollary 2.29.
Remark 2.32
Integral versions of Corollary 2.22, Corollary 2.23 and Corollary 2.24 can be presented as applications of Corollary 2.29.
As an application of Corollary 2.29, we present another bound for the Hermite–Hadamard gap.
Corollary 2.33
Let \(\psi \in C^{2}[a_{1},a_{2}]\) be a 4-convex function, then
Proof
Using (2.40) for \(\psi =T,~[\rho _{1},\rho _{2}]=[a_{1},a_{2}]\) and \(f_{2}(y)=1,~f_{1}(y)=y\) for all \(y\in [a_{1},a_{2}],\) we get (2.41). \(\square \)
3 Applications in information theory
Definition 3.1
(Csiszár divergence) Let \([\rho _{1},\rho _{2}]\subset \mathbb {R}\) and \(f:[\rho _{1},\rho _{2}]\rightarrow \mathbb {R}\) be a function, then for \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m})\in \mathbb {R}^{m}\) and \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\in \mathbb {R}^{m}_{+}\) with \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]~(k=1,2,\dots ,m),\) the Csiszár divergence is defined by
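In the standard notation, consistent with the substitutions used in the proofs below,

$$C_{f}(\mathbf {r},\mathbf {w})=\sum _{k=1}^{m}w_{k}f\left( \frac{r_{k}}{w_{k}}\right) .$$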
Theorem 3.2
Let \(f\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m})\in \mathbb {R}^{m},\) \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\in \mathbb {R}^{m}_{+}\) such that \(\frac{\sum _{k=1}^{m}r_{k}}{\sum _{k=1}^{m}w_{k}},\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
The result (3.42) can easily be deduced from (2.15) by choosing \(T=f,~s_{k}=\frac{r_{k}}{w_{k}},~u_{k}=\frac{w_{k}}{\sum _{k=1}^{m}w_{k}}.\) \(\square \)
Theorem 3.3
Let \(f\in C^{2}[\rho _{1},\rho _{2}]\) be a 4-convex function and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m})\in \mathbb {R}^{m},\) \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\in \mathbb {R}^{m}_{+}\) such that \(\frac{\sum _{k=1}^{m}r_{k}}{\sum _{k=1}^{m}w_{k}},\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
The result (3.43) can easily be deduced from (2.31) by choosing \(T=f,~s_{k}=\frac{r_{k}}{w_{k}},~u_{k}=\frac{w_{k}}{\sum _{k=1}^{m}w_{k}}.\) \(\square \)
Definition 3.4
(Rényi-divergence) For two positive probability distributions \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) and a nonnegative real number \(\mu \) such that \(\mu \ne 1,\) the Rényi-divergence is defined by
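In the standard notation,

$$D_{\mu }(\mathbf {r},\mathbf {w})=\frac{1}{\mu -1}\log \left( \sum _{k=1}^{m}r_{k}^{\mu }w_{k}^{1-\mu }\right) .$$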
Corollary 3.5
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\). Also, let \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be positive probability distributions and \(\mu >1\) such that \(\sum _{k=1}^{m}w_{k}\left( \frac{r_{k}}{w_{k}}\right) ^{\mu },\left( \frac{r_{k}}{w_{k}}\right) ^{\mu -1}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m.\) Then
Proof
Let \(T(x)=-\frac{1}{\mu -1}\log x,~x\in [\rho _{1},\rho _{2}],\) then \(T''(x)=\frac{1}{(\mu -1)x^{2}}>0\) and \(T''''(x)=\frac{6}{(\mu -1)x^{4}}>0.\) This verifies that T is a 4-convex function, therefore using (2.15) for \(T(x)=-\frac{1}{\mu -1}\log x,~u_{k}=r_{k}\) and \(s_{k}=\left( \frac{r_{k}}{w_{k}}\right) ^{\mu -1}\), we obtain (3.44). \(\square \)
Corollary 3.6
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\). Also, let \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be positive probability distributions and \(\mu >1\) with \(\sum _{k=1}^{m}w_{k}\left( \frac{r_{k}}{w_{k}}\right) ^{\mu },\left( \frac{r_{k}}{w_{k}}\right) ^{\mu -1}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m.\) Then
Proof
Using (2.31) for the 4-convex function \(T(x)=-\frac{1}{\mu -1}\log x,~u_{k}=r_{k}\) and \(s_{k}=\left( \frac{r_{k}}{w_{k}}\right) ^{\mu -1}\), we get (3.45). \(\square \)
Definition 3.7
(Shannon entropy) If \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) is a positive probability distribution, then the Shannon entropy is defined by
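In the standard notation,

$$S(\mathbf {w})=-\sum _{k=1}^{m}w_{k}\log w_{k}.$$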
Corollary 3.8
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be a positive probability distribution such that \(\frac{1}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Let \(f(x)=-\log x,~x\in [\rho _{1},\rho _{2}],\) then \(f''''(x)=\frac{6}{x^{4}}>0.\) This shows that f is a 4-convex function, therefore using (3.42) for \(f(x)=-\log x\) and \((r_{1},r_{2},\dots ,r_{m})=(1,1,\dots ,1),\) we get (3.46). \(\square \)
Corollary 3.9
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be a positive probability distribution such that \(\frac{1}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Using (3.43) for the 4-convex function \(f(x)=-\log x\) and \((r_{1},r_{2},\dots ,r_{m})=(1,1,\dots ,1),\) we get (3.47). \(\square \)
Definition 3.10
(Kullback–Leibler divergence) If \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m})\) and \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) are two positive probability distributions, then the Kullback–Leibler divergence is defined by
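In the standard notation, consistent with the choice \(f(x)=x\log x\) in the proof below,

$$D(\mathbf {r}\Vert \mathbf {w})=\sum _{k=1}^{m}r_{k}\log \frac{r_{k}}{w_{k}}.$$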
Corollary 3.11
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be positive probability distributions such that \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Let \(f(x)=x\log x,~x\in [\rho _{1},\rho _{2}],\) then \(f''''(x)=\frac{2}{x^{3}}>0,\) which shows that f is a 4-convex function. So we get (3.48) by using (3.42) for \(f(x)=x\log x.\) \(\square \)
Corollary 3.12
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be positive probability distributions such that \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
We get (3.49) by using (3.43) for the 4-convex function \(f(x)=x\log x.\) \(\square \)
Definition 3.13
(\(\chi ^{2}\)-divergence) For two positive probability distributions \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}), \mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\), the \(\chi ^{2}\)-divergence is defined by
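In the standard notation, consistent with the choice \(f(x)=(x-1)^{2}\) in the proof below,

$$\chi ^{2}(\mathbf {r},\mathbf {w})=\sum _{k=1}^{m}\frac{(r_{k}-w_{k})^{2}}{w_{k}}.$$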
Corollary 3.14
If \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) are two positive probability distributions such that \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Let \(f(x)=(x-1)^{2}\) for \(x\in [\rho _{1},\rho _{2}],\) then \(f''''(x)=0.\) This shows that f is a 4-convex function, therefore inequality (3.50) follows by using (3.42) for \(f(x)=(x-1)^{2}.\) \(\square \)
Corollary 3.15
If \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) are two positive probability distributions with \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
The inequality (3.51) follows by using (3.43) for \(f(x)=(x-1)^{2}.\) \(\square \)
Definition 3.16
(Bhattacharyya coefficient) The Bhattacharyya coefficient for two positive probability distributions \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m})\) and \(\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) is defined by
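In the standard notation, consistent with the choice \(f(x)=-\sqrt{x}\) in the proof below,

$$B(\mathbf {r},\mathbf {w})=\sum _{k=1}^{m}\sqrt{r_{k}w_{k}}.$$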
Corollary 3.17
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be two positive probability distributions such that \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m.\) Then
Proof
Let \(f(x)=-\sqrt{x},~x\in [\rho _{1},\rho _{2}].\) Then \(f''''(x)=\frac{15}{16x^{\frac{7}{2}}}>0,\) which shows that f is a 4-convex function. Thus we get (3.52) by following (3.42) for \(f(x)=-\sqrt{x}.\) \(\square \)
Corollary 3.18
Let \([\rho _{1},\rho _{2}]\subseteq \mathbb {R}^{+}\) and \(\mathbf {r}=(r_{1},r_{2},\dots ,r_{m}),~\mathbf {w}=(w_{1},w_{2},\dots ,w_{m})\) be two positive probability distributions such that \(\frac{r_{k}}{w_{k}}\in [\rho _{1},\rho _{2}]\) for \(k=1,2,\dots ,m,\) then
Proof
Inequality (3.53) can be obtained by using (3.43) for the 4-convex function \(f(x)=-\sqrt{x}\). \(\square \)
3.1 Applications for the Zipf–Mandelbrot entropy
The probability mass function for the Zipf–Mandelbrot law can be written as:
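In the standard form,

$$f(k;m,\theta ,s)=\frac{1}{(k+\theta )^{s}M_{m,\theta ,s}},$$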
for \(k=1,2,\dots ,m,\) \(m\in \{1,2,\dots \}\), \(\theta \ge 0,\) \(s>0\), where \(M_{m,\theta ,s}=\sum _{k=1}^{m}\frac{1}{(k+\theta )^{s}}\) is a generalized harmonic number. In information theory, entropies are used to quantify the amount of information in a written text. The Zipf–Mandelbrot entropy mentioned in [3] is given by:
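In the standard form (with natural logarithms), \(Z(M,\theta ,s)=\frac{s}{M_{m,\theta ,s}}\sum _{k=1}^{m}\frac{\ln (k+\theta )}{(k+\theta )^{s}}+\ln M_{m,\theta ,s}\). A short numerical sketch (the function names and parameter values below are illustrative) confirms that this expression coincides with the Shannon entropy of the Zipf–Mandelbrot law:

```python
import math

def zm_pmf(m, theta, s):
    """Zipf-Mandelbrot law: f(k) = 1 / ((k + theta)**s * M), k = 1..m,
    where M = M_{m,theta,s} is the generalized harmonic number."""
    M = sum(1.0 / (k + theta)**s for k in range(1, m + 1))
    return [1.0 / ((k + theta)**s * M) for k in range(1, m + 1)], M

def zm_entropy(m, theta, s):
    """Z(M, theta, s) = (s/M) * sum ln(k+theta)/(k+theta)**s + ln M."""
    _, M = zm_pmf(m, theta, s)
    return (s / M) * sum(math.log(k + theta) / (k + theta)**s
                         for k in range(1, m + 1)) + math.log(M)

f, M = zm_pmf(m=10, theta=1.5, s=1.2)
shannon = -sum(fk * math.log(fk) for fk in f)   # -sum f_k ln f_k

print(abs(sum(f) - 1.0) < 1e-12)                # pmf sums to 1
print(abs(zm_entropy(10, 1.5, 1.2) - shannon) < 1e-9)
```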
Corollary 3.19
Let \(0<\rho _{1}<\rho _{2},\) \(\theta \ge 0,~s>0\) and \(w_{k}\ge 0\) for \(k=1,2,\dots ,m\) with \(\sum _{k=1}^{m}w_{k}=1\). Then
Proof
For \(r_{k}=\frac{1}{(k+\theta )^{s}M_{m,\theta ,s}},~k=1,2,\dots ,m,\) we have
Also,
Now using (3.55) and (3.56) in (3.48), we get (3.54). \(\square \)
Corollary 3.20
Let \(0<\rho _{1}<\rho _{2},\) \(\theta _{1},\theta _{2}\ge 0\) and \(s_{1},s_{2}>0.\) Then
Proof
For \(r_{k}=\frac{1}{(k+\theta _{1})^{s_{1}}M_{m,\theta _{1},s_{1}}},~w_{k}=\frac{1}{(k+\theta _{2})^{s_{2}}M_{m,\theta _{2},s_{2}}},~k=1,2,\dots ,m,\) we have
Also,
Now utilizing (3.58) and (3.59) in (3.48), we get (3.57). \(\square \)
Corollary 3.21
Let \(0<\rho _{1}<\rho _{2},\) \(\theta \ge 0,~s>0\) and \(w_{k}\ge 0\) for \(k=1,2,\dots ,m\) with \(\sum _{k=1}^{m}w_{k}=1\). Then
Proof
Using (3.49) for \(r_{k}=\frac{1}{(k+\theta )^{s}M_{m,\theta ,s}},~k=1,2,\dots ,m,\) we get (3.60). \(\square \)
Corollary 3.22
Let \(0<\rho _{1}<\rho _{2},\) \(\theta _{1},\theta _{2}\ge 0\) and \(s_{1},s_{2}>0.\) Then
Proof
Using (3.49) for \(r_{k}=\frac{1}{(k+\theta _{1})^{s_{1}}M_{m,\theta _{1},s_{1}}},~w_{k}=\frac{1}{(k+\theta _{2})^{s_{2}}M_{m,\theta _{2},s_{2}}},~k=1,2,\dots ,m,\) we get (3.61). \(\square \)
References
Adil Khan, M., Khan, S., Chu, Y.-M.: A new bound for the Jensen gap with applications in information theory. IEEE Access 8, 98001–98008 (2020)
Adil Khan, M., Pečarić, J., Chu, Y.-M.: Refinements of Jensen’s and McShane’s inequalities with applications. AIMS Math. 5(5), 4931–4945 (2020)
Adil Khan, M., Pečarić, Đ., Pečarić, J.: Bounds for Shannon and Zipf-Mandelbrot entropies. Math. Methods Appl. Sci. 40(18), 7316–7322 (2017)
Adil Khan, M., Pečarić, Đ., Pečarić, J.: On Zipf-Mandelbrot entropy. J. Comput. Appl. Math. 346, 192–204 (2019)
Adil Khan, M., Pečarić, Đ., Pečarić, J.: Bounds for Csiszár divergence and hybrid Zipf-Mandelbrot entropy. Math. Methods Appl. Sci. 42, 7411–7424 (2019)
Adil Khan, M., Pečarić, Đ., Pečarić, J.: New refinement of the Jensen inequality associated to certain functions with applications. J. Inequal. Appl. 2020, Article ID 76 (2020)
Ahmad, K., Adil Khan, M., Khan, S., Ali, A., Chu, Y.-M.: New estimates for generalized Shannon and Zipf-Mandelbrot entropies via convexity results. Results Phys. 18, Article ID 103305 (2020)
Bakula, M.K., Nikodem, K.: On the converse Jensen inequality for strongly convex functions. J. Math. Anal. Appl. 434, 516–522 (2016)
Dragomir, S.S.: A new refinement of Jensen’s inequality in linear spaces with applications. Math. Comput. Model. 52, 1497–1505 (2010)
Horváth, L.: New refinements of the discrete Jensen’s inequality generated by finite or infinite permutations. Aequat. Math. 94, 1109–1121 (2020)
Horváth, L., Khan, K.A., Pečarić, J.: Cyclic refinements of the different versions of operator Jensen’s inequality. Electron. J. Linear Algebra 31(1), 125–133 (2016)
Hussain, S., Pečarić, J.: An improvement of Jensen’s inequality with some applications. Asian-Eur. J. Math. 2(1), 85–94 (2009)
Khalid, S., Pečarić, J.: On the refinements of the integral Jensen-Steffensen inequality. J. Inequal. Appl. 2013, Article ID 20 (2013)
Khan, S., Adil Khan, M., Butt, S.I., Chu, Y.-M.: A new bound for the Jensen gap pertaining twice differentiable functions with applications. Adv. Differ. Equ. 2020, Article ID 333 (2020)
Khan, S., Adil Khan, M., Chu, Y.-M.: Converses of the Jensen inequality derived from the Green functions with applications in information theory. Math. Methods Appl. Sci. 43, 2577–2587 (2020)
Khan, S., Adil Khan, M., Chu, Y.-M.: New converses of Jensen inequality via Green functions with applications. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114(3), 114 (2020)
Khan, K.A., Niaz, T., Pečarić, Đ., Pečarić, J.: Refinement of Jensen's inequality and estimation of f- and Rényi divergence via Montgomery identity. J. Inequal. Appl. 2018, Article ID 318 (2018)
Kim, J.-H.: Further improvement of Jensen inequality and application to stability of time-delayed systems. Automatica 64, 121–125 (2016)
Liao, J.G., Berg, A.: Sharpening Jensen’s inequality. Am. Statist. 4, 1–4 (2018)
Lin, Q.: Jensen inequality for superlinear expectations. Stat. Probabil. Lett. 151, 79–83 (2019)
Mehmood, N., Butt, S.I., Pečarić, J.: Generalization of cyclic refinements of Jensen inequality by Montgomery identity and Green's function. Asian-Eur. J. Math. 11(4), Article ID 1850060 (2018)
Niaz, T., Khan, K.A., Pečarić, J.: On refinement of Jensen’s inequality for 3-convex function at a point. Turkish J. Inequal. 4(1), 70–80 (2020)
Pečarić, Đ., Pečarić, J., Rodić, M.: About the sharpness of the Jensen inequality. J. Inequal. Appl. 2018, Article ID 337 (2018)
Pečarić, J., Perić, J.: New improvement of the converse Jensen inequality. Math. Inequal. Appl. 21(1), 217–234 (2018)
Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings and Statistical Applications. Academic Press, New York (1992)
Steffensen, J.F.: On certain inequalities and methods of approximation. J. Inst. Actuaries 51, 274–297 (1919)
Zong, Z., Hu, F., Yin, C., Wu, H.: On Jensen's inequality, Hölder's inequality, and Minkowski's inequality for dynamically consistent nonlinear evaluations. J. Inequal. Appl. 2015, Article ID 152 (2015)
Acknowledgements
The authors would like to express their sincere thanks to the anonymous reviewer for their helpful comments and suggestions.
The publication was supported by the Ministry of Education and Science of the Russian Federation (the Agreement number No. 02.a03.21.0008.).
Adil Khan, M., Khan, S., Pečarić, Đ. et al.: New improvements of Jensen's type inequalities via 4-convex functions with applications. RACSAM 115, 43 (2021). https://doi.org/10.1007/s13398-020-00971-8