1 Introduction and preliminaries

In this article, \({\mathcal {H}}\) will denote a Hilbert space, and by the term “operator” we shall mean a bounded linear endomorphism of \({\mathcal {H}}\). The following result, which provides an operator version of the Jensen inequality, is due to Mond and Pečarić [12]:

Theorem 1.1

(Jensen’s operator inequality for convex functions) Let \(A\in {\mathcal {B}}\left( {\mathcal {H}} \right) \) be a self-adjoint operator with \(Sp\left( A \right) \subseteq \left[ m,M \right] \) for some scalars \(m<M\). If \(f\left( t \right) \) is a convex function on \(\left[ m,M \right] \), then

$$\begin{aligned} f\left( \left\langle Ax,x \right\rangle \right) \le \left\langle f\left( A \right) x,x \right\rangle , \end{aligned}$$
(1.1)

for every unit vector \(x\in {\mathcal {H}}\).

Over the years, various extensions and generalizations of (1.1) have been obtained in the literature, e.g., [6, 7, 13]. For further background we refer the reader to expository texts such as [5].

The aim of this paper is to find an inequality which contains (1.1) as a special case. Our result also allows us to obtain a refinement and a reverse of the scalar Young inequality. More precisely, it will be shown that for two positive numbers \(a,b\) we have

$$\begin{aligned} \begin{aligned}&{{K}^{r}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{a-b}{D} \right) }^{2}} \right) \\&\quad \le \frac{a{{\nabla }_{v}}b}{a{{\sharp }_{v}}b} \\&\quad \le {{K}^{R}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{a-b}{D} \right) }^{2}} \right) , \end{aligned} \end{aligned}$$

where \(r=\min \left\{ v,1-v \right\} , R=\max \left\{ v,1-v \right\} , D=\max \left\{ a,b \right\} \) and \(K(h,2) = \frac{(h+1)^2}{4h}\) is the Kantorovich constant with \(h = \frac{b}{a}\).

To make the text more self-contained we give a brief overview of convexifiable functions. Given a continuous \(f:I\rightarrow {\mathbb {R}}\) defined on the compact interval \(I\subset {\mathbb {R}}\), consider a function \(\varphi :I\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) defined by \(\varphi \left( x,\alpha \right) =f\left( x \right) -\frac{1}{2}\alpha {{x}^{2}}\). If \(\varphi \left( x,\alpha \right) \) is a convex function on I for some \(\alpha ={{\alpha }^{*}}\), then \(\varphi \left( x,\alpha \right) \) is called a convexification of f and \({{\alpha }^{*}}\) a convexifier on I. A function f is convexifiable if it has a convexification. It is noted in [17, Corollary 2.9] that if the continuously differentiable function f has Lipschitz derivative (i.e., \(\left| f'\left( x \right) -f'\left( y \right) \right| \le L\left| x-y \right| \) for any \(x,y\in I\) and some constant L), then \(\alpha =-L\) is a convexifier of f.
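The Lipschitz criterion above can be checked numerically. The following sketch (our own illustration, not part of the paper's argument) verifies on a grid that \(\alpha =-1\) is a convexifier of f(t) = sin t on [0, 2π], since f'(t) = cos t is 1-Lipschitz; discrete convexity of φ(x, α) is tested via non-negative second differences.

```python
import math

# Sketch (assumption: grid of 1000 subintervals is fine enough): check that
# alpha = -L = -1 convexifies f(t) = sin(t) on [0, 2*pi], i.e. that
# phi(x) = f(x) - 0.5*alpha*x**2 has non-negative second differences.
f = math.sin
alpha = -1.0

def phi(x):
    return f(x) - 0.5 * alpha * x * x

xs = [2 * math.pi * i / 1000 for i in range(1001)]
# A convex function has non-negative second differences on any grid.
second_diffs = [phi(xs[i - 1]) - 2 * phi(xs[i]) + phi(xs[i + 1])
                for i in range(1, 1000)]
print(min(second_diffs) >= -1e-9)  # True
```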

The following fact concerning convexifiable functions plays an important role in our discussion (see [17, Corollary 2.8]):

(P) Every twice continuously differentiable function f on I is convexifiable, with convexifier \(\alpha ={\mathop {\min }_{t\in I}}\,f''\left( t \right) \).

The reader may consult [16] for additional information about this topic. For all other notions used in the paper, we refer the reader to the monograph [5].

2 Main results

After the above preparation, we are ready to prove the analogue of (1.1) for non-convex functions.

Theorem 2.1

(Jensen’s operator inequality for non-convex functions) Let f be a continuous convexifiable function on the interval I and \(\alpha \) a convexifier of f. Then

$$\begin{aligned} f\left( \left\langle Ax,x \right\rangle \right) \le \left\langle f\left( A \right) x,x \right\rangle -\frac{1}{2}\alpha \left( \left\langle {{A}^{2}}x,x \right\rangle -{{\left\langle Ax,x \right\rangle }^{2}} \right) , \end{aligned}$$
(2.1)

for every self-adjoint operator A with \(Sp\left( A \right) \subseteq I\) and every unit vector \(x\in {\mathcal {H}}\).

Proof

The idea of the proof comes from the approach in [15]. Let \({{g}_{\alpha }}:I\rightarrow {\mathbb {R}}\) with \({{g}_{\alpha }}\left( x \right) =\varphi \left( x,\alpha \right) \). According to the assumption, \({{g}_{\alpha }}\left( x \right) \) is convex. Therefore

$$\begin{aligned} {{g}_{\alpha }}\left( \left\langle Ax,x \right\rangle \right) \le \left\langle {{g}_{\alpha }}\left( A \right) x,x \right\rangle , \end{aligned}$$

for every unit vector \(x\in {\mathcal {H}}\). This expression is equivalent to the desired inequality (2.1). \(\square \)

A few remarks concerning Theorem 2.1 are in order.

Remark 2.1

  1. (a)

    Using the fact that for a convex function f one can choose the convexifier \(\alpha =0\), one recovers the inequality (1.1).

  2. (b)

    For a continuously differentiable function f with Lipschitz derivative and Lipschitz constant L, we have

    $$\begin{aligned} f\left( \left\langle Ax,x \right\rangle \right) \le \left\langle f\left( A \right) x,x \right\rangle +\frac{1}{2}L\left( \left\langle {{A}^{2}}x,x \right\rangle -{{\left\langle Ax,x \right\rangle }^{2}} \right) . \end{aligned}$$
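Inequality (2.1) can be illustrated numerically. The sketch below (our own choice of data, not from the paper) uses a diagonal operator A = diag(lam), for which the functional calculus acts entrywise, so that ⟨f(A)x, x⟩ = Σ f(λ_i)|x_i|², together with the non-convex function f(t) = sin t on [0, 2π] and convexifier α = −1.

```python
import math

# Illustrative check of Theorem 2.1 / Remark 2.1(b) (sample data is ours):
# diagonal A with spectrum lam inside [0, 2*pi], unit vector x, f = sin,
# convexifier alpha = -1 (f' = cos is 1-Lipschitz).
f, alpha = math.sin, -1.0
lam = [0.3, 2.0, 4.5, 6.0]          # eigenvalues of A
x = [0.5, 0.5, 0.5, 0.5]            # unit vector

mean_A  = sum(l * c * c for l, c in zip(lam, x))       # <A x, x>
mean_fA = sum(f(l) * c * c for l, c in zip(lam, x))    # <f(A) x, x>
mean_A2 = sum(l * l * c * c for l, c in zip(lam, x))   # <A^2 x, x>

lhs = f(mean_A)
rhs = mean_fA - 0.5 * alpha * (mean_A2 - mean_A ** 2)
print(lhs <= rhs)  # True
```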

An important special case of Theorem 2.1, which refines inequality (1.1), can be explicitly stated using the property (P).

Remark 2.2

Let \(f:I\rightarrow {\mathbb {R}}\) be a twice continuously differentiable strictly convex function and \(\alpha ={\mathop {\min }_{t\in I}}\,f''\left( t \right) \). Then

$$\begin{aligned} f\left( \left\langle Ax,x \right\rangle \right) \le \left\langle f\left( A \right) x,x \right\rangle -\frac{1}{2}\alpha \left( \left\langle {{A}^{2}}x,x \right\rangle -{{\left\langle Ax,x \right\rangle }^{2}} \right) \le \left\langle f\left( A \right) x,x \right\rangle , \end{aligned}$$
(2.2)

for every positive operator A with \(Sp\left( A \right) \subseteq I\) and every unit vector \(x\in {\mathcal {H}}\).

The inequality (2.2) is obtained in the paper [13, Theorem 3.3] (where this result was derived for strongly convex functions) with a different technique (see also [4]).

The proof of the following corollary is adapted from that of [5, Theorem 1.3]; we include a sketch for the reader's convenience.

Corollary 2.1

Let f be a continuous convexifiable function on the interval I and \(\alpha \) a convexifier. Let \({{A}_{1}},\ldots ,{{A}_{n}}\) be self-adjoint operators on \({\mathcal {H}}\) with \(Sp\left( {{A}_{i}} \right) \subseteq I\) for \(1\le i\le n\) and \({{x}_{1}},\ldots ,{{x}_{n}}\in {\mathcal {H}}\) be such that \(\sum \nolimits _{i=1}^{n}{{{\left\| {{x}_{i}} \right\| }^{2}}}=1\). Then

$$\begin{aligned} f\left( \sum \limits _{i=1}^{n}{\left\langle {{A}_{i}}{{x}_{i}},{{x}_{i}} \right\rangle } \right) \le \sum \limits _{i=1}^{n}{\left\langle f\left( {{A}_{i}} \right) {{x}_{i}},{{x}_{i}} \right\rangle }-\frac{1}{2}\alpha \left( \sum \limits _{i=1}^{n}{\left\langle A_{i}^{2}{{x}_{i}},{{x}_{i}} \right\rangle }-{{\left( \sum \limits _{i=1}^{n}{\left\langle {{A}_{i}}{{x}_{i}},{{x}_{i}} \right\rangle } \right) }^{2}} \right) .\nonumber \\ \end{aligned}$$
(2.3)

Proof

In fact, \(\mathrm {x}:=\left( \begin{matrix} {{x}_{1}} \\ \vdots \\ {{x}_{n}} \\ \end{matrix} \right) \) is a unit vector in the Hilbert space \({{\mathcal {H}}^n}\). If we introduce the “diagonal” operator on \({{\mathcal {H}}^n}\)

$$\begin{aligned}\mathrm {A}:=\left( \begin{matrix} {{A}_{1}} &{} \quad \cdots &{} \quad 0 \\ \vdots &{} \quad \ddots &{} \quad \vdots \\ 0 &{} \quad \cdots &{} \quad {{A}_{n}} \\ \end{matrix} \right) ,\end{aligned}$$

then, obviously, \(Sp\left( \mathrm {A} \right) \subseteq I\), \(\left\| \mathrm {x} \right\| =1\), \(\left\langle f\left( \mathrm {A} \right) \mathrm {x},\mathrm {x} \right\rangle =\sum \nolimits _{i=1}^{n}{\left\langle f\left( {{A}_{i}} \right) {{x}_{i}},{{x}_{i}} \right\rangle }\), \(\left\langle \mathrm {A}\mathrm {x},\mathrm {x} \right\rangle =\sum \nolimits _{i=1}^{n}{\left\langle {{A}_{i}}{{x}_{i}},{{x}_{i}} \right\rangle }\), \(\left\langle {{\mathrm {A}}^{2}}\mathrm {x},\mathrm {x} \right\rangle =\sum \nolimits _{i=1}^{n}{\left\langle A_{i}^{2}{{x}_{i}},{{x}_{i}} \right\rangle }\). Hence, to complete the proof, it is enough to apply Theorem 2.1 for \(\mathrm {A}\) and \(\mathrm {x}\). \(\square \)
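Inequality (2.3) can also be checked with concrete data. The following sketch (our own sample, not from the paper) again uses diagonal operators A_i, so every inner product reduces to a weighted sum over eigenvalues, with f = sin and α = −1 as before.

```python
import math

# Illustrative check of Corollary 2.1 (sample data is ours): diagonal
# A_i = diag(lam_i) with spectra in [0, 2*pi], vectors x_i whose squared
# norms sum to 1, f = sin with convexifier alpha = -1.
f, alpha = math.sin, -1.0
spectra = [[0.5, 1.5], [3.0, 5.5]]      # spectra of A_1, A_2
vecs = [[0.5, 0.5], [0.5, 0.5]]         # sum of squared norms is 1

pairs = [(l, c) for lam, x in zip(spectra, vecs) for l, c in zip(lam, x)]
s_A  = sum(l * c * c for l, c in pairs)         # sum <A_i x_i, x_i>
s_fA = sum(f(l) * c * c for l, c in pairs)      # sum <f(A_i) x_i, x_i>
s_A2 = sum(l * l * c * c for l, c in pairs)     # sum <A_i^2 x_i, x_i>

lhs = f(s_A)
rhs = s_fA - 0.5 * alpha * (s_A2 - s_A ** 2)
print(lhs <= rhs)  # True
```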

Corollary 2.1 leads us to the following result. The argument depends on an idea of [1, Corollary 1].

Corollary 2.2

Let f be a continuous convexifiable function on the interval I and \(\alpha \) a convexifier. Let \({{A}_{1}},\ldots ,{{A}_{n}}\) be self-adjoint operators on \({\mathcal {H}}\) with \(Sp\left( {{A}_{i}} \right) \subseteq I\) for \(1\le i\le n\) and let \({{p}_{1}},\ldots ,{{p}_{n}}\) be positive scalars such that \(\sum \nolimits _{i=1}^{n}{{{p}_{i}}}=1\). Then

$$\begin{aligned} f\left( \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}{{A}_{i}}x,x \right\rangle } \right) \le \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}f\left( {{A}_{i}} \right) x,x \right\rangle }-\frac{1}{2}\alpha \left( \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}A_{i}^{2}x,x \right\rangle }-{{\left( \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}{{A}_{i}}x,x \right\rangle } \right) }^{2}} \right) ,\nonumber \\ \end{aligned}$$
(2.4)

for every unit vector \(x\in {\mathcal {H}}\).

Proof

Suppose that \(x\in {\mathcal {H}}\) is a unit vector. Putting \({{x}_{i}}=\sqrt{{{p}_{i}}}x\in {\mathcal {H}}\) so that \(\sum \nolimits _{i=1}^{n}{{{\left\| {{x}_{i}} \right\| }^{2}}}=1\) and applying Corollary 2.1 we obtain the desired result (2.4). \(\square \)

The clear advantage of our approach over the Jensen operator inequality is shown in the following example. Before proceeding, we recall the following multiple-operator version of Jensen's inequality [1, Corollary 1]: Let \(f:\left[ m,M \right] \subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a convex function and let \({{A}_{i}}\), \(i=1,\ldots ,n\), be self-adjoint operators with \(Sp\left( {{A}_{i}} \right) \subseteq \left[ m,M \right] \) for some scalars \(m<M\). If \({{p}_{i}}\ge 0\), \(i=1,\ldots ,n\) with \(\sum \nolimits _{i=1}^{n}{{{p}_{i}}}=1\), then

$$\begin{aligned} f\left( \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}{{A}_{i}}x,x \right\rangle } \right) \le \sum \limits _{i=1}^{n}{\left\langle {{p}_{i}}f\left( {{A}_{i}} \right) x,x \right\rangle }, \end{aligned}$$
(2.5)

for every \(x\in {\mathcal {H}}\) with \(\left\| x \right\| =1\).

Example 2.1

We use the same idea as in [15, Illustration 1]. Let \(f\left( t \right) =\sin t\text { }\left( 0\le t\le 2\pi \right) \), \(\alpha ={{\min }_{0\le t\le 2\pi }}\,f''\left( t \right) =-1\), \(n=2\), \({{p}_{1}}=p\), \({{p}_{2}}=1-p\), \({\mathcal {H}}={{{\mathbb {R}}}^{2}}\), \({{A}_{1}}=\left( \begin{matrix} 2\pi &{} \quad 0 \\ 0 &{} \quad 0 \\ \end{matrix} \right) \), \({{A}_{2}}=\left( \begin{matrix} 0 &{} \quad 0 \\ 0 &{} \quad 2\pi \\ \end{matrix} \right) \) and \(x=\left( \begin{matrix} 0 \\ 1 \\ \end{matrix} \right) \). After simple calculations (thanks to the continuous functional calculus), from (2.4) we infer that

$$\begin{aligned} \sin \left( 2\pi \left( 1-p \right) \right) \le 2{{\pi }^{2}}p\left( 1-p \right) ,\qquad \text { }0\le p\le 1 \end{aligned}$$
(2.6)

and (2.5) implies

$$\begin{aligned} \sin \left( 2\pi \left( 1-p \right) \right) \le 0,\qquad \text { }0\le p\le 1. \end{aligned}$$
(2.7)

Not so surprisingly, the inequality (2.7) breaks down when \(\frac{1}{2}<p<1\) (i.e., (2.5) is not applicable here). However, the new upper bound in (2.6) holds for all \(0\le p\le 1\).
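The two bounds of Example 2.1 are easy to compare on a grid. The following sketch (our own check) confirms that (2.6) holds for all sampled p in [0, 1], while the left-hand side becomes strictly positive for some p, so (2.7) fails there.

```python
import math

# Check of Example 2.1 on a grid of p-values (grid resolution is our choice):
# (2.6): sin(2*pi*(1-p)) <= 2*pi^2*p*(1-p) should hold for all p in [0, 1];
# (2.7): sin(2*pi*(1-p)) <= 0 should fail for some p (namely 1/2 < p < 1).
ps = [i / 1000 for i in range(1001)]
ok_26 = all(math.sin(2 * math.pi * (1 - p)) <= 2 * math.pi ** 2 * p * (1 - p) + 1e-12
            for p in ps)
fails_27 = any(math.sin(2 * math.pi * (1 - p)) > 1e-9 for p in ps)
print(ok_26, fails_27)  # True True
```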

The weighted version of [15, Theorem 3] follows from Corollary 2.2, i.e.,

$$\begin{aligned} f\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{t}_{i}}} \right) \le \sum \limits _{i=1}^{n}{{{p}_{i}}f\left( {{t}_{i}} \right) }-\frac{1}{2}\alpha \left( \sum \limits _{i=1}^{n}{{{p}_{i}}t_{i}^{2}}-{{\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{t}_{i}}} \right) }^{2}} \right) , \end{aligned}$$
(2.8)

where \({{t}_{i}}\in I\) and \(\sum \nolimits _{i=1}^{n}{{{p}_{i}}}=1\). For the case \(n=2\), the inequality (2.8) reduces to

$$\begin{aligned} f\left( \left( 1-v \right) {{t}_{1}}+v{{t}_{2}} \right) \le \left( 1-v \right) f\left( {{t}_{1}} \right) +vf\left( {{t}_{2}} \right) -\frac{v\left( 1-v \right) }{2}\alpha {{\left( {{t}_{1}}-{{t}_{2}} \right) }^{2}}, \end{aligned}$$
(2.9)

where \(0\le v\le 1\). In particular

$$\begin{aligned} f\left( \frac{{{t}_{1}}+{{t}_{2}}}{2} \right) \le \frac{f\left( {{t}_{1}} \right) +f\left( {{t}_{2}} \right) }{2}-\frac{1}{8}\alpha {{\left( {{t}_{1}}-{{t}_{2}} \right) }^{2}}. \end{aligned}$$
(2.10)

It is notable that Theorem 2.1 is equivalent to the inequality (2.8). The following provides a refinement of the arithmetic-geometric mean inequality.

Proposition 2.1

For each \(a,b>0\) and \(0\le v\le 1\), we have

$$\begin{aligned} \sqrt{ab}\le {{H}_{v}}\left( a,b \right) -\frac{d}{8}{{\left( \left( 1-2v \right) \left( \log \frac{a}{b} \right) \right) }^{2}}\le \frac{a+b}{2}-\frac{d}{8}{{\left( \log \frac{a}{b} \right) }^{2}}\le \frac{a+b}{2}, \end{aligned}$$
(2.11)

where \(d=\min \left\{ a,b \right\} \) and \({{H}_{v}}\left( a,b \right) = \frac{{{a}^{1-v}}{{b}^{v}}+{{b}^{1-v}}{{a}^{v}}}{2}\) is the Heinz mean.

Proof

Assume that f is a twice differentiable convex function such that \(\alpha \le f''\) where \(\alpha \in {\mathbb {R}}\). Under these conditions, it follows that

$$\begin{aligned} \begin{aligned} f\left( \frac{a+b}{2} \right)&=f\left( \frac{\left( 1-v \right) a+vb+\left( 1-v \right) b+va}{2} \right) \\&\le \frac{f\left( \left( 1-v \right) a+vb \right) +f\left( \left( 1-v \right) b+va \right) }{2}-\frac{1}{8}\alpha {{\left( \left( a-b \right) \left( 1-2v \right) \right) }^{2}} \quad \left( \hbox {by }(2.10)\right) \\&\le \frac{f\left( a \right) +f\left( b \right) }{2}-\frac{1}{8}\alpha {{\left( a-b \right) }^{2}} \quad \left( \hbox {by }(2.9)\right) \\&\le \frac{f\left( a \right) +f\left( b \right) }{2}, \end{aligned} \end{aligned}$$

for \(\alpha \ge 0\). Now taking \(f\left( t \right) ={{e}^{t}}\) and replacing a and b by \(\log a\) and \(\log b\) in the above inequalities (so that the underlying interval I has endpoints \(\log a\) and \(\log b\), and we may take \(\alpha ={\mathop {\min }_{t\in I}}\,{{e}^{t}}=\min \left\{ a,b \right\} =d\)), we deduce the desired inequality (2.11). \(\square \)
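The chain (2.11) can be verified on sample data. The sketch below (our own grid of values of a, b, v, not from the paper) tests all three inequalities, including the endpoint cases v = 0 and v = 1.

```python
import math

# Numerical sketch of Proposition 2.1 on a small sample grid (our choice):
# sqrt(ab) <= H_v - (d/8)((1-2v) log(a/b))^2
#          <= (a+b)/2 - (d/8)(log(a/b))^2 <= (a+b)/2.
def heinz(a, b, v):
    return (a ** (1 - v) * b ** v + b ** (1 - v) * a ** v) / 2

ok = True
for a in [0.5, 1.0, 3.0]:
    for b in [0.7, 2.0, 9.0]:
        d, c = min(a, b), math.log(a / b) ** 2
        for v in [0.0, 0.25, 0.5, 0.8, 1.0]:
            t1 = heinz(a, b, v) - d / 8 * (1 - 2 * v) ** 2 * c
            t2 = (a + b) / 2 - d / 8 * c
            ok = ok and (math.sqrt(a * b) <= t1 + 1e-12
                         and t1 <= t2 + 1e-12
                         and t2 <= (a + b) / 2)
print(ok)  # True
```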

Remark 2.3

As Bhatia pointed out in [2], the Heinz means interpolate between the geometric mean and the arithmetic mean, i.e.,

$$\begin{aligned} \sqrt{ab}\le {{H}_{v}}\left( a,b \right) \le \frac{a+b}{2}. \end{aligned}$$
(2.12)

Of course, the first inequality in (2.11) yields an improvement of (2.12). The inequalities in (2.11) also sharpen the following inequality, due to Dragomir (see [3, Remark 1]):

$$\begin{aligned} \frac{d}{8}{{\left( \log \frac{a}{b} \right) }^{2}}\le \frac{a+b}{2}-\sqrt{ab}. \end{aligned}$$

When studying the arithmetic-geometric mean inequality, we cannot avoid mentioning its cousin, the Young inequality. The following inequalities provide a multiplicative-type refinement and reverse of Young's inequality:

$$\begin{aligned} {{K}^{r}}\left( h,2 \right) \le \frac{\left( 1-v \right) a+vb}{{{a}^{1-v}}{{b}^{v}}}\le {{K}^{R}}\left( h,2 \right) , \end{aligned}$$
(2.13)

where \(0\le v\le 1\), \(r=\min \left\{ v,1-v \right\} \), \(R=\max \left\{ v,1-v \right\} \) and \(K(h,2) = \frac{(h+1)^2}{4h}\) with \(h = \frac{b}{a}\). The first one was proved by Zuo et al. [18, Corollary 3], while the second one was given by Liao et al. [9, Corollary 2.2].

Our aim in the following is to establish a refinement of the inequalities in (2.13). The following facts will play a crucial role for our purposes:

If f is a convex function on the fixed closed interval I, then

$$\begin{aligned} n\lambda \left\{ \sum \limits _{i=1}^{n}{\frac{1}{n}f\left( {{x}_{i}} \right) -f\left( \sum \limits _{i=1}^{n}{\frac{1}{n}{{x}_{i}}} \right) } \right\} \le \sum \limits _{i=1}^{n}{{{p}_{i}}f\left( {{x}_{i}} \right) }-f\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{x}_{i}}} \right) , \end{aligned}$$
(2.14)
$$\begin{aligned} \sum \limits _{i=1}^{n}{{{p}_{i}}f\left( {{x}_{i}} \right) }-f\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{x}_{i}}} \right) \le n\mu \left\{ \sum \limits _{i=1}^{n}{\frac{1}{n}f\left( {{x}_{i}} \right) -f\left( \sum \limits _{i=1}^{n}{\frac{1}{n}{{x}_{i}}} \right) } \right\} , \end{aligned}$$
(2.15)

where \({{p}_{1}},\ldots ,{{p}_{n}}\ge 0\) with \(\sum \nolimits _{i=1}^{n}{{{p}_{i}}}=1\), \(\lambda =\min \left\{ {{p}_{1}},\ldots ,{{p}_{n}} \right\} \), \(\mu =\max \left\{ {{p}_{1}},\ldots ,{{p}_{n}} \right\} \). Notice that the first inequality goes back to Pečarić et al. [10, Theorem 1, p. 717], while the second one was obtained by Mitroi in [11, Corollary 3.1].

Now we come to the announced theorem. In order to simplify the notations, we put \(a{{\sharp }_{v}}b={{a}^{1-v}}{{b}^{v}}\) and \(a{{\nabla }_{v}}b=\left( 1-v \right) a+vb\).

Theorem 2.2

Let \(a,b>0\) and \(0\le v\le 1\). Then

$$\begin{aligned}&{{K}^{r}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{a-b}{D} \right) }^{2}} \right) \nonumber \\&\quad \le \frac{a{{\nabla }_{v}}b}{a{{\sharp }_{v}}b}\nonumber \\&\quad \le {{K}^{R}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{a-b}{D} \right) }^{2}} \right) , \end{aligned}$$
(2.16)

where \(r=\min \left\{ v,1-v \right\} \), \(R=\max \left\{ v,1-v \right\} \), \(D=\max \left\{ a,b \right\} \) and \(K(h,2) = \frac{(h+1)^2}{4h}\) with \(h = \frac{b}{a}\).

Proof

Employing the inequality (2.14) for the twice differentiable convex function f with \(\alpha \le f''\), we have

$$\begin{aligned}\begin{aligned}&n\lambda \left\{ \frac{1}{n}\sum \limits _{i=1}^{n}{f\left( {{x}_{i}} \right) }-f\left( \frac{1}{n}\sum \limits _{i=1}^{n}{{{x}_{i}}} \right) \right\} -\sum \limits _{i=1}^{n}{{{p}_{i}}f\left( {{x}_{i}} \right) }+f\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{x}_{i}}} \right) \\&\le \frac{\alpha }{2}\left\{ n\lambda \left[ \frac{1}{n}\sum \limits _{i=1}^{n}{x_{i}^{2}}-{{\left( \frac{1}{n}\sum \limits _{i=1}^{n}{{{x}_{i}}} \right) }^{2}} \right] -\left( \sum \limits _{i=1}^{n}{{{p}_{i}}x_{i}^{2}}-{{\left( \sum \limits _{i=1}^{n}{{{p}_{i}}{{x}_{i}}} \right) }^{2}} \right) \right\} . \\ \end{aligned}\end{aligned}$$

Here we set \(n=2\), \(x_1=a\), \(x_2=b\), \(p_1=1-v\), \(p_2=v\), \(\lambda =r\) and \(f(x) =-\log x\) with \(I=[a,b]\) (so \(\alpha ={\mathop {\min }_{x\in I}}\,f''\left( x \right) =\frac{1}{{{D}^{2}}}\)). Thus we deduce the first inequality in (2.16). The second inequality in (2.16) is also obtained similarly by using the inequality (2.15). \(\square \)
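Theorem 2.2 can be illustrated numerically. The sketch below (with our own sample values of a, b, v) verifies the sandwich (2.16), including the case v = 1/2, where the two bounds coincide.

```python
import math

# Numerical sketch of Theorem 2.2 on a sample grid (our choice): the ratio
# (a nabla_v b)/(a sharp_v b) lies between the two bounds of (2.16).
def bounds(a, b, v):
    h = b / a
    K = (h + 1) ** 2 / (4 * h)                # Kantorovich constant K(h, 2)
    r, R = min(v, 1 - v), max(v, 1 - v)
    q = ((a - b) / max(a, b)) ** 2            # ((a-b)/D)^2
    lower = K ** r * math.exp((v * (1 - v) / 2 - r / 4) * q)
    upper = K ** R * math.exp((v * (1 - v) / 2 - R / 4) * q)
    ratio = ((1 - v) * a + v * b) / (a ** (1 - v) * b ** v)
    return lower, ratio, upper

ok = all(lo <= mid + 1e-12 and mid <= up + 1e-12
         for a in [0.5, 1.0, 4.0] for b in [0.3, 1.0, 7.0]
         for v in [0.1, 0.5, 0.9]
         for lo, mid, up in [bounds(a, b, v)])
print(ok)  # True
```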

Remark 2.4

  1. (a)

    Since \(\frac{v(1-v)}{2}-\frac{r}{4}\ge 0\) for each \(0\le v\le 1\), we have \(\exp \left( {\left( {\frac{{v\left( {1 - v} \right) }}{2} - \frac{r}{4}} \right) {{\left( {\frac{{a - b}}{D}} \right) }^2}} \right) \ge 1\). Therefore the first inequality in (2.16) provides an improvement for the first inequality in (2.13).

  2. (b)

    Since \(\frac{v(1-v)}{2}-\frac{R}{4}\le 0\) for each \(0\le v\le 1\), we get \(\exp \left( {\left( {\frac{{v\left( {1 - v} \right) }}{2} - \frac{R}{4}} \right) {{\left( {\frac{{a - b}}{D}} \right) }^2}} \right) \le 1\). Therefore the second inequality in (2.16) provides an improvement for the second inequality in (2.13).

Proposition 2.2

Under the same assumptions as in Theorem 2.2, we have

$$\begin{aligned} \frac{(h+1)^2}{4h} \ge \exp \left( \frac{1}{4}\left( \frac{a-b}{D}\right) ^2\right) . \end{aligned}$$

Proof

We first prove the case \(a \le b\); then \(h \ge 1\). We set \( f_1(h) \equiv 2\log (h+1)-\log h-2 \log 2 -\frac{1}{4}\frac{(h-1)^2}{h^2}. \) It is quite easy to see that \( f_1'(h)=\frac{(2h+1)(h-1)^2}{2h^3(h+1)} \ge 0, \) so that \(f_1(h) \ge f_1(1)=0\). For the case \(a \ge b\) (then \(0<h \le 1\)), we set \( f_2(h) \equiv 2\log (h+1)-\log h-2 \log 2 -\frac{1}{4}(h-1)^2. \) By direct calculation, \( f_2'(h)=-\frac{(h-1)^2(h+2)}{2h(h+1)} \le 0, \) so that \(f_2(h) \ge f_2(1) =0\). Thus the statement follows. \(\square \)
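Proposition 2.2 is a one-variable inequality in the ratio h = b/a, so it can be sampled directly. The sketch below (grid of h-values is our choice) checks both branches of the proof.

```python
import math

# Numerical sketch of Proposition 2.2 over a grid of ratios h = b/a:
# K(h,2) = (h+1)^2/(4h) >= exp((1/4)((a-b)/D)^2) with D = max(a, b).
def check(h):
    K = (h + 1) ** 2 / (4 * h)
    # ((a-b)/D)^2 equals ((h-1)/h)^2 when h >= 1 and (h-1)^2 when h <= 1.
    q = ((h - 1) / h) ** 2 if h >= 1 else (h - 1) ** 2
    return K >= math.exp(q / 4)

hs = [i / 20 for i in range(1, 201)]   # h in (0, 10]
print(all(check(h) for h in hs))  # True
```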

Remark 2.5

Dragomir obtained a refinement and reverse of Young’s inequality in [3, Theorem 3] as:

$$\begin{aligned} \exp \left( {\frac{{v\left( {1 - v} \right) }}{2}{{\left( {\frac{{a - b}}{D}} \right) }^2}} \right) \le \frac{{a{\nabla _v}b}}{{a{\sharp _v}b}} \le \exp \left( {\frac{{v\left( {1 - v} \right) }}{2}{{\left( {\frac{{a - b}}{d}} \right) }^2}} \right) , \end{aligned}$$
(2.17)

where \(d = \min \{a,b\}\). The following facts (a) and (b) show that our inequalities are non-trivial.

  1. (a)

    From Proposition 2.2, our lower bound in (2.16) is tighter than the one in (2.17).

  2. (b)

    Numerical computations show that there is no ordering between the right-hand side of (2.16) and that of the second inequality in (2.17) shown in [3, Theorem 3]. For example, if we take \(a=2\), \(b=1\) and \(v=0.1\), then

    $$\begin{aligned}&{{K}^{R}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{a-b}{D} \right) }^{2}} \right) -\exp \left( \frac{v(1-v)}{2}\left( \frac{a-b}{d}\right) ^2 \right) \nonumber \\&\quad \simeq 0.0168761, \end{aligned}$$

    whereas it approximately equals \(-\,0.0436069\) when \(a=2\), \(b=1\) and \(v=0.3\).
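The two values quoted in (b) are easy to reproduce. The sketch below (our own reimplementation of the two upper bounds) evaluates the difference at the stated sample points.

```python
import math

# Reproduce the comparison in Remark 2.5(b): difference between the upper
# bound of (2.16) and the upper bound of Dragomir's (2.17).
def diff(a, b, v):
    h, d, D = b / a, min(a, b), max(a, b)
    R = max(v, 1 - v)
    K = (h + 1) ** 2 / (4 * h)
    ours = K ** R * math.exp((v * (1 - v) / 2 - R / 4) * ((a - b) / D) ** 2)
    dragomir = math.exp(v * (1 - v) / 2 * ((a - b) / d) ** 2)
    return ours - dragomir

print(round(diff(2, 1, 0.1), 7))   # ≈ 0.0168761
print(round(diff(2, 1, 0.3), 7))   # ≈ -0.0436070
```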

We give a further remark in relation to comparisons with other inequalities.

Remark 2.6

The following refined Young inequality and its reverse are known

$$\begin{aligned} K^{r'}(\sqrt{t},2)t^v +r(1-\sqrt{t})^2 \le (1-v) +v t \le K^{R'}(\sqrt{t},2)t^v +r(1-\sqrt{t})^2, \end{aligned}$$
(2.18)

where \(t>0\), \(r=\min \{v,1-v\}\), \(r'=\min \{2r,1-2r\}\) and \(R'=\max \{2r,1-2r\}\). The first and second inequalities were given in [14, Lemma 2.1] and [9, Theorem 2.1], respectively.

Numerical computations show that there is no ordering between our inequalities (2.16) and the above ones. Actually, if we take \(v=0.45\) and \(t=0.1\) (we set \(t=\frac{b}{a}\) with \(a \ge b\) in (2.16)), then

$$\begin{aligned} K^{R'}(\sqrt{t},2)t^v +r(1-\sqrt{t})^2 - t^v K^R(h,2) \exp \left( \left( \frac{v(1-v)}{2}-\frac{R}{4}\right) (1-t)^2 \right) \simeq 0.0363059, \end{aligned}$$

while it equals approximately \(-\,0.0860004\) when \(v=0.9\) and \(t=0.1\).

Similarly, when \(v=0.45\) and \(t=0.1\) we get

$$\begin{aligned} K^{r'}(\sqrt{t},2)t^v +r(1-\sqrt{t})^2- t^v K^r(h,2) \exp \left( \left( \frac{v(1-v)}{2}-\frac{r}{4}\right) (1-t)^2 \right) \simeq -\, 0.0126828, \end{aligned}$$

while it equals approximately 0.037896 when \(v=0.9\) and \(t=0.1\).
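The sign pattern claimed in this remark can be reproduced directly. The sketch below (our own reimplementation, with t = b/a ≤ 1 so that h = t and D = a, and the bounds of (2.16) rescaled by a) compares both pairs of bounds at the stated sample points.

```python
import math

# Numerical sketch of Remark 2.6 with t = b/a <= 1: compare the bounds of
# (2.18) with the corresponding bounds of (2.16), rescaled by a.
def diffs(v, t):
    r, R = min(v, 1 - v), max(v, 1 - v)
    rp, Rp = min(2 * r, 1 - 2 * r), max(2 * r, 1 - 2 * r)
    K = lambda x: (x + 1) ** 2 / (4 * x)
    e = lambda s: math.exp((v * (1 - v) / 2 - s / 4) * (1 - t) ** 2)
    ours_up = t ** v * K(t) ** R * e(R)
    ours_lo = t ** v * K(t) ** r * e(r)
    known_up = K(math.sqrt(t)) ** Rp * t ** v + r * (1 - math.sqrt(t)) ** 2
    known_lo = K(math.sqrt(t)) ** rp * t ** v + r * (1 - math.sqrt(t)) ** 2
    return known_up - ours_up, known_lo - ours_lo

d1a, d2a = diffs(0.45, 0.1)
d1b, d2b = diffs(0.9, 0.1)
print(d1a > 0, d1b < 0)  # True True  (no ordering between the upper bounds)
print(d2a < 0, d2b > 0)  # True True  (no ordering between the lower bounds)
```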

Obviously, in the inequality (2.13), we cannot replace \({{K}^{r}}\left( h,2 \right) \) by \({{K}^{R}}\left( h,2 \right) \), or vice versa. In this regard, we have the following theorem. The proof is almost the same as that of Theorem 2.2 (it is enough to use the convexity of the function \({{g}_{\beta }}\left( x \right) =\frac{\beta }{2}{{x}^{2}}-f\left( x \right) \) where \(\beta ={\mathop {\min }_{x\in I}}\,f''\left( x \right) \)).

Theorem 2.3

Let the assumptions of Theorem 2.2 hold, with \(d=\min \left\{ a,b \right\} \) in place of D. Then

$$\begin{aligned}\begin{aligned} {{K}^{R}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{a-b}{d} \right) }^{2}} \right)&\le \frac{a{{\nabla }_{v}}b}{a{{\sharp }_{v}}b} \\&\le {{K}^{r}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{a-b}{d} \right) }^{2}} \right) . \end{aligned} \end{aligned}$$

We end this paper by presenting the operator inequalities based on Theorems 2.2 and 2.3, thanks to the Kubo-Ando theory [8].

Corollary 2.3

Let A, B be two positive invertible operators and let m, \(m'\), M, \(M'\) be positive real numbers satisfying one of the following conditions:

  1. (i)

    \(0<m'I\le A\le mI<MI\le B\le M'I\).

  2. (ii)

    \(0<m'I\le B\le mI<MI\le A\le M'I\).

Then

$$\begin{aligned}&{{K}^{r}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{1-h}{h} \right) }^{2}} \right) A{{\sharp }_{v}}B \nonumber \\&\quad \le A{{\nabla }_{v}}B \nonumber \\&\quad \le {{K}^{R}}\left( h',2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{1-h'}{h'} \right) }^{2}} \right) A{{\sharp }_{v}}B \end{aligned}$$
(2.19)

and

$$\begin{aligned}\begin{aligned}&{{K}^{R}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{1-h'}{h'} \right) }^{2}} \right) A{{\sharp }_{v}}B \\&\quad \le A{{\nabla }_{v}}B \\&\quad \le {{K}^{r}}\left( h',2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{1-h}{h} \right) }^{2}} \right) A{{\sharp }_{v}}B, \end{aligned}\end{aligned}$$

where \(r=\min \left\{ v,1-v \right\} \), \(R=\max \left\{ v,1-v \right\} \) and \(K(h,2) = \frac{(h+1)^2}{4h}\) with \(h=\frac{M}{m}\) and \(h'=\frac{M'}{m'}\).

Proof

On account of (2.16), we have

$$\begin{aligned}\begin{aligned}&\underset{h\le x\le h'}{\mathop {\min }}\,\left\{ {{K}^{r}}\left( x,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{1-x}{\max \left\{ 1,x \right\} } \right) }^{2}} \right) \right\} {{T}^{v}} \\&\quad \le \left( 1-v \right) I+vT \\&\quad \le \underset{h\le x\le h'}{\mathop {\max }}\,\left\{ {{K}^{R}}\left( x,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{1-x}{\max \left\{ 1,x \right\} } \right) }^{2}} \right) \right\} {{T}^{v}}, \end{aligned} \end{aligned}$$

for the positive operator T such that \(hI\le T\le h'I\). Set \(T={{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}}\).

In the first case we have \(I<hI=\frac{M}{m}I\le {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}}\le \frac{M'}{m'}I=h'I\), which implies that

$$\begin{aligned}&\underset{1\le h\le x\le h'}{\mathop {\min }}\,\left\{ {{K}^{r}}\left( x,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{1-x}{x} \right) }^{2}} \right) \right\} {{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right) }^{v}} \nonumber \\&\quad \le \left( 1-v \right) I+v{{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \nonumber \\&\quad \le \underset{1\le h\le x\le h'}{\mathop {\max }}\,\left\{ {{K}^{R}}\left( x,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{1-x}{x} \right) }^{2}} \right) \right\} {{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right) }^{v}}.\nonumber \\ \end{aligned}$$
(2.20)

We can write (2.20) in the form

$$\begin{aligned}\begin{aligned}&{{K}^{r}}\left( h,2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{r}{4} \right) {{\left( \frac{1-h}{h} \right) }^{2}} \right) {{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right) }^{v}} \\&\quad \le \left( 1-v \right) I+v{{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \\&\quad \le {{K}^{R}}\left( h',2 \right) \exp \left( \left( \frac{v\left( 1-v \right) }{2}-\frac{R}{4} \right) {{\left( \frac{1-h'}{h'} \right) }^{2}} \right) {{\left( {{A}^{-\frac{1}{2}}}B{{A}^{-\frac{1}{2}}} \right) }^{v}}. \end{aligned}\end{aligned}$$

Finally, multiplying the previous inequality by \({{A}^{\frac{1}{2}}}\) on both sides, we get the desired result (2.19).

The proof of the other case is similar; we omit the details. \(\square \)
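For commuting operators, (2.19) reduces to a scalar inequality at each joint eigenvalue, which makes Corollary 2.3 easy to test. The sketch below (our own illustration, with sample diagonal matrices satisfying condition (i)) checks this special case.

```python
import math

# Sketch of Corollary 2.3 for commuting (diagonal) operators, where (2.19)
# reduces to scalar inequalities at each joint eigenvalue.  Sample data:
# A = diag(1.0, 1.2), B = diag(2.0, 2.4), with m' = 1.0, m = 1.2, M = 2.0,
# M' = 2.4, so condition (i) holds and h = M/m, h' = M'/m'.
K = lambda x: (x + 1) ** 2 / (4 * x)
m_, m, M, M_ = 1.0, 1.2, 2.0, 2.4
h, h_ = M / m, M_ / m_
A, B = [1.0, 1.2], [2.0, 2.4]       # diagonal entries of A and B

ok = True
for v in [0.1, 0.5, 0.9]:
    r, R = min(v, 1 - v), max(v, 1 - v)
    lo = K(h) ** r * math.exp((v * (1 - v) / 2 - r / 4) * ((1 - h) / h) ** 2)
    up = K(h_) ** R * math.exp((v * (1 - v) / 2 - R / 4) * ((1 - h_) / h_) ** 2)
    for a, b in zip(A, B):
        sharp = a ** (1 - v) * b ** v        # eigenvalue of A sharp_v B
        nabla = (1 - v) * a + v * b          # eigenvalue of A nabla_v B
        ok = ok and lo * sharp <= nabla + 1e-12 and nabla <= up * sharp + 1e-12
print(ok)  # True
```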