1 Introduction

The famous weighted arithmetic and geometric mean inequality can be stated as follows: Let m be a positive integer and \(x_{k}\) and \(p_{k}\) (\(k=1,2,\ldots ,m\)) be positive real numbers with \(\sum \limits _{k=1}^{m}p_{k}=1\), then

$$\begin{aligned} \prod \limits _{k=1}^{m}x_{k}^{p_{k}}\le \sum \limits _{k=1}^{m}p_{k}x_{k}. \end{aligned}$$
(1)

Equality holds if and only if \(x_{1}=x_{2}=\cdots =x_{m}\). When \(m=2\), inequality (1) is just the classical Young’s inequality

$$\begin{aligned} a^{1-v}b^{v}\le (1-v)a+vb, \end{aligned}$$
(2)

where a and b are positive real numbers and \(0\le v\le 1\).

Young’s inequality and its reverse can be refined by adding a positive term to the left-hand (respectively, right-hand) side, a possibility that has attracted the attention of many researchers.

In [19, 20], Kittaneh and Manasrah presented a refinement of Young’s and its reverse inequalities:

$$\begin{aligned} a^{v}b^{1-v}+r(\sqrt{a}-\sqrt{b})^{2}\le va+(1-v)b\le a^{v}b^{1-v}+R(\sqrt{a}-\sqrt{b})^{2}, \end{aligned}$$
(3)

where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).

The squared versions of the refined Young inequality and its reverse, obtained by Hirzallah et al. [15] and He et al. [12], respectively, can be stated as follows:

$$\begin{aligned} r^{2}(a-b)^{2}\le \big (va+(1-v)b\big )^{2}-\big (a^{v}b^{1-v}\big )^{2}\le R^{2}(a-b)^{2}, \end{aligned}$$
(4)

where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
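As a quick illustration (not part of the original argument), the following minimal Python sketch spot-checks inequalities (3) and (4) on a small grid of values; the grid, the variable names and the tolerance for floating-point rounding are ad hoc choices.

```python
# Numerical sanity check of inequalities (3) and (4) on a sample grid.
import itertools

points = [0.3, 1.0, 2.5, 7.0]               # sample values for a and b
weights = [0.0, 0.1, 0.25, 0.5, 0.8, 1.0]   # sample values for v
eps = 1e-9                                  # tolerance for rounding error

for a, b in itertools.product(points, repeat=2):
    for v in weights:
        r, R = min(v, 1 - v), max(v, 1 - v)
        geo = a**v * b**(1 - v)             # weighted geometric mean
        ari = v * a + (1 - v) * b           # weighted arithmetic mean
        gap = (a**0.5 - b**0.5)**2
        assert geo + r * gap <= ari + eps                   # left part of (3)
        assert ari <= geo + R * gap + eps                   # right part of (3)
        assert r**2 * (a - b)**2 <= ari**2 - geo**2 + eps   # left part of (4)
        assert ari**2 - geo**2 <= R**2 * (a - b)**2 + eps   # right part of (4)

print("inequalities (3) and (4) hold on the sample grid")
```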

In 2015, Manasrah and Kittaneh [24, Theorem 2] deduced the following generalization of inequalities (3) and (4):

$$\begin{aligned} \big (a^{v}b^{1-v}\big )^{m}+r^{m}\big (a^{\frac{m}{2}}-b^{\frac{m}{2}}\big )^{2}\le \big (va+(1-v)b\big )^{m} \end{aligned}$$
(5)

for any positive integer m, where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), and \(r=\min \{v,1-v\}\).

It should be mentioned here that Akkouchi and Ighachane [1] gave an alternative proof to inequality (5).

In 2017, Al-Manasrah and Kittaneh [2, Theorem 3] presented another generalization of inequalities (3) as follows: Let a, b and v be positive real numbers with \(0\le v\le 1\) and m be a positive integer. Then

$$\begin{aligned} r^{m}(a^{\frac{m}{2}}-b^{\frac{m}{2}})^{2}&\le r^{m}\big [(a+b)^{m}-2^{m}(ab)^{\frac{m}{2}}\big ]\nonumber \\&\le \big [va+(1-v)b\big ]^{m}-\big (a^{v}b^{1-v}\big )^{m}\nonumber \\&\le R^{m}\big [(a+b)^{m}-2^{m}(ab)^{\frac{m}{2}}\big ], \end{aligned}$$
(6)

where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).

It is easy to see that the left-hand side of inequalities (6) is also a refinement of inequality (5).
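The following Python sketch (an illustrative addition, not taken from the source) checks inequality (5) and the chain of inequalities (6) numerically for several values of m; the sample grid and tolerance are arbitrary choices.

```python
# Numerical sanity check of inequalities (5) and (6).
points = [0.4, 1.0, 3.0, 9.0]
eps = 1e-6

for m in (1, 2, 3, 5):
    for a in points:
        for b in points:
            for v in (0.1, 0.3, 0.5, 0.75, 0.9):
                r, R = min(v, 1 - v), max(v, 1 - v)
                geo_m = (a**v * b**(1 - v))**m
                ari_m = (v * a + (1 - v) * b)**m
                bridge = (a + b)**m - 2**m * (a * b)**(m / 2)
                refine = r**m * (a**(m / 2) - b**(m / 2))**2
                assert geo_m + refine <= ari_m + eps            # inequality (5)
                assert refine <= r**m * bridge + eps            # (6), first step
                assert r**m * bridge <= ari_m - geo_m + eps     # (6), second step
                assert ari_m - geo_m <= R**m * bridge + eps     # (6), third step

print("inequalities (5) and (6) hold on the sample grid")
```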

Alzer, Fonseca and Kovačec [3, Theorem 2.1] obtained an important refinement of the Young’s inequalities (3): Let a, b, v, \(\mu \) and \(\lambda \) be positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\). Then

$$\begin{aligned} \Big (\frac{\mu }{v}\Big )^{\lambda }\le \frac{\big [\mu a+(1-\mu )b\big ]^{\lambda }-(a^{\mu }b^{1-\mu })^{\lambda }}{\big [v a+(1-v)b\big ]^{\lambda }-(a^{v}b^{1-v})^{\lambda }}\le \Big (\frac{1-\mu }{1-v}\Big )^{\lambda }. \end{aligned}$$
(7)

The Alzer–Fonseca–Kovačec inequalities (7) can be regarded as a major development concerning Young’s inequality in recent years.

Later, Liao and Wu [23, Theorem 2.1] deduced the following inequalities between the arithmetic mean and the harmonic mean:

$$\begin{aligned} \Big (\frac{\mu }{v}\Big )^{\lambda }\le \frac{\big [\mu a+(1-\mu )b\big ]^{\lambda }-\big [\mu a^{-1}+(1-\mu )b^{-1}\big ]^{-\lambda }}{\big [v a+(1-v)b\big ]^{\lambda }-\big [v a^{-1}+(1-v)b^{-1}\big ]^{-\lambda }}\le \Big (\frac{1-\mu }{1-v}\Big )^{\lambda }, \end{aligned}$$
(8)

where a, b, v, \(\mu \) and \(\lambda \) are positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\).
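For readers who want a quick empirical confirmation, here is a small Python sketch (not part of the paper) that checks the two-sided bounds (7) and (8) for a few parameter choices with \(a\ne b\) and \(0<\mu <v<1\); the helper names are ad hoc.

```python
# Numerical sanity check of inequalities (7) and (8).
eps = 1e-9

for a, b in [(4.0, 1.0), (0.2, 3.0), (7.5, 2.0)]:
    for mu, v in [(0.1, 0.3), (0.25, 0.5), (0.4, 0.9), (0.5, 0.7)]:
        for lam in (1.0, 1.5, 2.0, 3.0):
            young_gap = lambda t: (t * a + (1 - t) * b)**lam - (a**t * b**(1 - t))**lam
            ah_gap = lambda t: (t * a + (1 - t) * b)**lam - (t / a + (1 - t) / b)**(-lam)
            lower, upper = (mu / v)**lam, ((1 - mu) / (1 - v))**lam
            assert lower - eps <= young_gap(mu) / young_gap(v) <= upper + eps   # (7)
            assert lower - eps <= ah_gap(mu) / ah_gap(v) <= upper + eps         # (8)

print("inequalities (7) and (8) hold on the sample grid")
```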

As far as convex functions are concerned, Sababheh [28, Theorem 2.1] generalized inequalities (7) and (8): Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and v, \(\mu \) and \(\lambda \) be positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\). Then

$$\begin{aligned} \Big (\frac{\mu }{v}\Big )^{\lambda }\le \frac{\big [\mu f(0)+(1-\mu )f(1)\big ]^{\lambda }-f(\mu )^{\lambda }}{\big [v f(0)+(1-v)f(1)\big ]^{\lambda }-f(v)^{\lambda }}\le \Big (\frac{1-\mu }{1-v}\Big )^{\lambda }. \end{aligned}$$

In 2020, Ren [27, Theorems 2.1 and 2.3] refined the Alzer–Fonseca–Kovačec inequalities (7) in the cases \(\lambda =1,2\) under some conditions.

In 2021, Ighachane, Akkouchi and Benabdi [17, Theorem 2.2] gave a refinement of the left-hand side of the Alzer–Fonseca–Kovačec inequalities (7) for \(\lambda =1,2,3,\ldots \).

Young’s inequality and its reverse, though very simple, are important in functional analysis, matrix theory, operator theory, electrical networks, etc. Many scholars have done much research on this topic; we refer the reader to [5, 8,9,10, 21, 25, 26, 29,30,31,32,33,34,35,36,37] and the references therein for other works.

The main aim of this work is to study generalizations and refinements of Young’s inequality and its reverse. First, we give an alternative proof of the Al-Manasrah–Kittaneh inequalities (6). We then give some generalizations of the Al-Manasrah–Kittaneh inequalities (6) and refinements of the Alzer–Fonseca–Kovačec inequalities (7) for \(\lambda =1, 2, 3, \ldots \). Based on them, we present some refined matrix versions of Young’s inequality and its reverse for convex functions. Moreover, inequalities for determinants and operators are also given.

2 Generalizations and Refinements of the Young’s and Its Reverse Inequalities for Scalars

In this section, we mainly study generalizations and refinements of the Young’s and its reverse inequalities for scalars. Before giving the main results, we need the following lemmas. The first lemma was given by Akkouchi and Ighachane [1].

Lemma 1

Let m be a positive integer and v be a positive real number with \(0\le v\le 1\). Then

$$\begin{aligned} \sum \limits _{k=1}^{m}C_{m}^{k}kv^{k}(1-v)^{m-k}=mv, \end{aligned}$$

where \(C_{m}^{k}=\binom{m}{k}\) is the binomial coefficient.
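The identity in Lemma 1 is elementary (it is the mean of a binomial distribution); the short Python check below (an illustrative addition assuming Python 3.8+ for math.comb) confirms it numerically for a few values of m and v.

```python
# Numerical check of the identity in Lemma 1.
from math import comb, isclose

for m in (1, 2, 5, 9):
    for v in (0.0, 0.17, 0.5, 0.83, 1.0):
        s = sum(comb(m, k) * k * v**k * (1 - v)**(m - k) for k in range(1, m + 1))
        assert isclose(s, m * v, abs_tol=1e-12)

print("Lemma 1 verified for the sampled m and v")
```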

Lemma 2

Let \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{k}(1-v)^{m-k}-\mu ^{k}(1-\mu )^{m-k}\ge 0, \end{aligned}$$

for \(k=0, 1,\ldots ,m\), with equality only when \(k=0\).

Proof

Since \(0\le \mu < v\le 1\), we have \(\frac{v}{\mu }>1\) and \(\frac{1-\mu }{1-v}>1\). For \(k=0\) both terms equal \((1-\mu )^{m}\), while for \(k\ge 1\) we get

$$\begin{aligned} \Big (\frac{1-\mu }{1-v}\Big )^{m}\frac{v^{k}(1-v)^{m-k}}{\mu ^{k}(1-\mu )^{m-k}} =\Big (\frac{v}{\mu }\Big )^{k}\cdot \Big (\frac{1-\mu }{1-v}\Big )^{k}>1. \end{aligned}$$

This completes the proof. \(\square \)

The third lemma was obtained by Ighachane, Akkouchi and Benabdi [17, Lemma 2.2].

Lemma 3

Let \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} \mu ^{k}(1-\mu )^{m-k}-\Big (\frac{\mu }{v}\Big )^{m}v^{k}(1-v)^{m-k}\ge 0, \end{aligned}$$

for \(k=0,1,\ldots ,m\), with equality only when \(k=m\).
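The sign patterns of Lemmas 2 and 3, including the equality cases at \(k=0\) and \(k=m\), can also be confirmed numerically; the following Python sketch (an illustrative addition) does so for a few parameter choices.

```python
# Numerical check of Lemmas 2 and 3.
m, eps = 7, 1e-12

for mu, v in [(0.1, 0.4), (0.25, 0.5), (0.3, 0.8)]:
    for k in range(m + 1):
        d2 = ((1 - mu) / (1 - v))**m * v**k * (1 - v)**(m - k) - mu**k * (1 - mu)**(m - k)
        d3 = mu**k * (1 - mu)**(m - k) - (mu / v)**m * v**k * (1 - v)**(m - k)
        assert d2 >= -eps and d3 >= -eps
        if k == 0:
            assert abs(d2) <= eps        # equality case in Lemma 2
        if k == m:
            assert abs(d3) <= eps        # equality case in Lemma 3

print("Lemmas 2 and 3 verified for the sampled parameters")
```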

Utilizing the famous weighted arithmetic and geometric mean inequality, we can give an alternative proof to inequalities (6). For convenience, we present it as a theorem.

Theorem 1

Let a, b and v be positive real numbers with \(0\le v\le 1\) and m be a positive integer. Then

$$\begin{aligned} r^{m}\Big [(a+b)^{m}-2^{m}(ab)^{\frac{m}{2}}\Big ]&\le \big [va+(1-v)b\big ]^{m}-\big (a^{v}b^{1-v}\big )^{m}\nonumber \\&\le R^{m}\Big [(a+b)^{m}-2^{m}(ab)^{\frac{m}{2}}\Big ], \end{aligned}$$
(9)

where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).

Proof

By the binomial expansion of \(\big (va+(1-v)\big )^{m}\) and \((a+1)^{m}\), we have

$$\begin{aligned} \big (va+(1-v)\big )^{m}=\sum \limits _{k=0}^{m}p_{k}a^{k}\end{aligned}$$

and

$$\begin{aligned}(a+1)^{m}=\sum \limits _{k=0}^{m}C_{m}^{k}a^{k},\end{aligned}$$

where \(p_{k}=C_{m}^{k}v^{k}(1-v)^{m-k}\) and \(C_{m}^{k}=\binom{m}{k}\) is the binomial coefficient, \(k=0,1,\ldots ,m\).

Since \(r=\min \{v,1-v\}\), we have \(v^{k}(1-v)^{m-k}\ge r^{m}\), so the coefficients \(p_{k}-r^{m}C_{m}^{k}\) (\(k=0,1,\ldots ,m\)) and \(2^{m}r^{m}\) are nonnegative and, by the binomial theorem, sum to 1. Therefore, we get

$$\begin{aligned} \big [va+(1&-v)\big ]^{m}-a^{mv}-r^{m}\Big [(a+1)^{m}-2^{m}a^{\frac{m}{2}}\Big ]\nonumber \\&=\sum \limits _{k=0}^{m}p_{k}a^{k}-a^{mv}-r^{m}\Big [\sum \limits _{k=0}^{m}C_{m}^{k}a^{k}-2^{m}a^{\frac{m}{2}}\Big ]\nonumber \\&=\sum \limits _{k=0}^{m}\Big [p_{k}-r^{m}C_{m}^{k}\Big ]a^{k}+2^{m}r^{m}a^{\frac{m}{2}}-a^{mv}\nonumber \\&\ge a^{\sum \limits _{k=0}^{m}k\Big [p_{k}-r^{m}C_{m}^{k}\Big ]+m2^{m-1}r^{m}}-a^{mv}\ (inequality\ (1))\nonumber \\&=a^{mv-m2^{m-1}r^{m}+m2^{m-1}r^{m}}-a^{mv}\ (Lemma 1)\nonumber \\&=0. \end{aligned}$$
(10)

Inequality (10) gives

$$\begin{aligned} r^{m}\Big [(a+1)^{m}-2^{m}a^{\frac{m}{2}}\Big ]\le \big [va+(1-v)\big ]^{m}-a^{mv}. \end{aligned}$$
(11)

Putting \(a=:\frac{a}{b}\) in inequality (11), we have

$$\begin{aligned} r^{m}\Big [\Big (\frac{a}{b}+1\Big )^{m}-2^{m}\Big (\frac{a}{b}\Big )^{\frac{m}{2}}\Big ]\le \Big [v\frac{a}{b}+(1-v)\Big ]^{m}-\Big [\frac{a}{b}\Big ]^{mv}. \end{aligned}$$
(12)

Multiplying \(b^{m}\) on both sides of inequality (12), we get the left-hand side of inequality (9).

Similarly, since \(R=\max \{v,1-v\}\), we have \(2R\ge 1\) and \(C_{m}^{k}R^{m}-p_{k}=C_{m}^{k}\big [R^{m}-v^{k}(1-v)^{m-k}\big ]\ge 0\); thus we obtain

$$\begin{aligned} R^{m}&\big [(a+1)^{m}-2^{m}a^{\frac{m}{2}}\big ]-(va+1-v)^{m}+a^{mv}\\&=\sum \limits _{k=0}^{m}\Big [C_{m}^{k}R^{m}-p_{k}\Big ]a^{k}+a^{mv}-(2R)^{m}a^{\frac{m}{2}}\\&=(2R)^{m}\Big [\sum \limits _{k=0}^{m}(2R)^{-m}\big (C_{m}^{k}R^{m}-p_{k}\big )a^{k}+(2R)^{-m}a^{mv}\Big ]-(2R)^{m}a^{\frac{m}{2}}\\&\ge (2R)^{m}a^{(2R)^{-m}\Big \{\sum \limits _{k=0}^{m}k\big (C_{m}^{k}R^{m}-p_{k}\big )+mv\Big \}} -(2R)^{m}a^{\frac{m}{2}}(\ inequality (1))\\&=(2R)^{m}a^{\big [\frac{m}{2}-mv(2R)^{-m}+mv(2R)^{-m}\big ]}-(2R)^{m}a^{\frac{m}{2}}\ (Lemma 1)\\&=0, \end{aligned}$$

which implies

$$\begin{aligned} (va+1-v)^{m}-a^{mv}\le R^{m}\big [(a+1)^{m}-2^{m}a^{\frac{m}{2}}\big ]. \end{aligned}$$
(13)

Setting \(a=:\frac{a}{b}\) in inequality (13), we get

$$\begin{aligned} \Big [v\frac{a}{b}+1-v\Big ]^{m}-\Big (\frac{a}{b}\Big )^{mv}\le R^{m}\Big [\Big (\frac{a}{b}+1\Big )^{m}-2^{m}\Big (\frac{a}{b}\Big )^{\frac{m}{2}}\Big ]. \end{aligned}$$
(14)

Multiplying \(b^{m}\) on both sides of inequality (14), we get the right-hand side of inequality (9).

This completes the proof. \(\square \)
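A direct numerical test of inequality (9) may help the reader see how the two bounds behave; the sketch below (an illustrative addition, not part of the proof) checks (9) on a small grid.

```python
# Numerical check of inequality (9) in Theorem 1.
eps = 1e-6

for m in (1, 2, 4, 7):
    for a, b in [(0.5, 2.0), (1.0, 1.0), (3.0, 8.0), (10.0, 0.1)]:
        for v in (0.05, 0.3, 0.5, 0.7, 0.95):
            r, R = min(v, 1 - v), max(v, 1 - v)
            mid = (v * a + (1 - v) * b)**m - (a**v * b**(1 - v))**m
            bridge = (a + b)**m - 2**m * (a * b)**(m / 2)
            assert r**m * bridge <= mid + eps
            assert mid <= R**m * bridge + eps

print("inequality (9) holds on the sample grid")
```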

It is worth mentioning that the authors of [2, Theorem 2] proved that if \(\phi \) is a strictly increasing convex function on an interval I, then \(\phi (z)-\phi (w)\le \phi (x)-\phi (y)\), where x, y, w, z are points in I with \(w\le z\le x\), \(y\le x\) and \(z-w\le x-y\). By using this result for the function \(\phi (x)=x^{p}\) \((p\ge 1)\), they obtained the inequalities of Theorem 1. It is easy to see that our proof of Theorem 1 is different from that of Al-Manasrah and Kittaneh [2, Theorem 3].

The positive integer m in inequality (9) can be replaced by a positive real number \(\lambda \ge 1\). Indeed, we have the following results.

Theorem 2

Let a, b and \(\lambda \) be positive real numbers with \(\lambda \ge 1\). Then

$$\begin{aligned} \frac{\big [va+(1-v)b\big ]^{\lambda }-(a^{v}b^{1-v})^{\lambda }}{\min \{v,1-v\}^{\lambda }}\ge \frac{\big [\tau a+(1-\tau )b\big ]^{\lambda }-(a^{\tau }b^{1-\tau })^{\lambda }}{\min \{\tau ,1-\tau \}^{\lambda }}\end{aligned}$$

for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,

$$\begin{aligned}{}[va+(1-v)b]^{\lambda }-(a^{v}b^{1-v})^{\lambda }\ge r^{\lambda }\big [(a+b)^{\lambda }-2^{\lambda }(ab)^{\frac{\lambda }{2}}\big ], \end{aligned}$$

where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).

Proof

The proof idea is due to Alzer, Fonseca and Kovačec [3, Theorem 2.1].

Let \(h(x)=x(1-\ln x)-1\). Then \(h'(x)=-\ln x\), and hence \(h(x)<h(1)=0\) for all positive \(x\ne 1\). Define the function \(F(v,\lambda ,a)=\frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\min \{v,1-v\}^{\lambda }}\). Then, for \(0< v\le \frac{1}{2}\), we get

$$\begin{aligned} \frac{\partial }{\partial v}F(v,\lambda ,a)&=\frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}} \Big [a^{v}(1-v\ln a)-\Big (\frac{va+1-v}{a^{v}}\Big )^{\lambda -1}\Big ]\\&\le \frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}} \Big [a^{v}(1-v\ln a)-1\Big ]\\&=\frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}}h(a^{v})\\&\le 0. \end{aligned}$$

This gives that

$$\begin{aligned} \frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\min \{v,1-v\}^{\lambda }} \ge \frac{(\tau a+1-\tau )^{\lambda }-a^{\tau \lambda }}{\min \{\tau ,1-\tau \}^{\lambda }}, \end{aligned}$$
(15)

for \(0\le v\le \tau \le \frac{1}{2}\).

On the other hand, for \(\frac{1}{2}\le v< 1\), we have

$$\begin{aligned} \frac{\partial }{\partial v}F(v,\lambda ,a)&=\frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}} \Big [\Big (\frac{va+1-v}{a^{v}}\Big )^{\lambda -1}\\&\quad -a^{v-1}\big ((1-v)\ln a+1\big )\Big ]\\&\ge \frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}} \Big [1-a^{v-1}\big ((1-v)\ln a+1\big )\Big ]\\&=-\frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}}h(a^{v-1})\ge 0, \end{aligned}$$

which entails that

$$\begin{aligned} \frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\min \{v,1-v\}^{\lambda }} \le \frac{(\tau a+1-\tau )^{\lambda }-a^{\tau \lambda }}{\min \{\tau ,1-\tau \}^{\lambda }}, \end{aligned}$$
(16)

for \(\frac{1}{2}\le v\le \tau \le 1\).

Replacing a by \(\frac{a}{b}\) in inequalities (15) and (16) and multiplying both sides by \(b^{\lambda }\), we get the desired results; the particular inequality follows by taking \(\tau =\frac{1}{2}\). This completes the proof. \(\square \)

The next theorem is a reverse of Theorem 2.

Theorem 3

Let a, b and \(\lambda \) be positive real numbers with \(\lambda \ge 1\). Then

$$\begin{aligned} \frac{\big [va+(1-v)b\big ]^{\lambda }-(a^{v}b^{1-v})^{\lambda }}{\max \{v,1-v\}^{\lambda }}\le \frac{\big [\tau a+(1-\tau )b\big ]^{\lambda }-(a^{\tau }b^{1-\tau })^{\lambda }}{\max \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). Moreover,

$$\begin{aligned}\big [va+(1-v)b\big ]^{\lambda }-\big (a^{v}b^{1-v}\big )^{\lambda }\le R^{\lambda }\big [(a+b)^{\lambda }-2^{\lambda }(ab)^{\frac{\lambda }{2}}\big ],\end{aligned}$$

where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).

Proof

The proof idea is the same as that of Theorem 2, with the same function h(x). Define the function \(G(v,\lambda ,a)=\frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\max \{v,1-v\}^{\lambda }}\). Then, for \(0\le v\le \frac{1}{2}\), we get

$$\begin{aligned} \frac{\partial }{\partial v}G(v,\lambda ,a)&=\frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}} \Big [\Big (\frac{va+1-v}{a^{v}}\Big )^{\lambda -1}\\&\quad -a^{v-1}\big ((1-v)\ln a+1\big )\Big ]\\&\ge \frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}} \Big [1-a^{v-1}\big ((1-v)\ln a+1\big )\Big ]\\&=-\frac{\lambda }{(1-v)^{1+\lambda }a^{v(1-\lambda )-1}}h(a^{v-1})\\&\ge 0. \end{aligned}$$

This gives that

$$\begin{aligned} \frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\max \{v,1-v\}^{\lambda }} \le \frac{(\tau a+1-\tau )^{\lambda }-a^{\tau \lambda }}{\max \{\tau ,1-\tau \}^{\lambda }}, \end{aligned}$$
(17)

for \(0\le v\le \tau \le \frac{1}{2}\).

On the other hand, if \(\frac{1}{2}\le v\le 1\), then we have

$$\begin{aligned} \frac{\partial }{\partial v}G(v,\lambda ,a)&=\frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}} \Big [a^{v}(1-v\ln a)-\Big (\frac{va+1-v}{a^{v}}\Big )^{\lambda -1}\Big ]\\&\le \frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}} \Big [a^{v}(1-v\ln a)-1\Big ]\\&=\frac{\lambda }{v^{1+\lambda }a^{v(1-\lambda )}}h(a^{v})\\&\le 0, \end{aligned}$$

which entails that

$$\begin{aligned} \frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\max \{v,1-v\}^{\lambda }} \ge \frac{(\tau a+1-\tau )^{\lambda }-a^{\tau \lambda }}{\max \{\tau ,1-\tau \}^{\lambda }}, \end{aligned}$$
(18)

for \(\frac{1}{2}\le v\le \tau \le 1\).

Replacing a by \(\frac{a}{b}\) in inequalities (17) and (18) and multiplying both sides by \(b^{\lambda }\), we get the desired results; the inequality in the last assertion follows by taking \(\tau =\frac{1}{2}\).

This completes the proof. \(\square \)
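The monotonicity statements of Theorems 2 and 3 can be spot-checked numerically; the Python sketch below (an illustrative addition, with ad hoc names F and G for the two quotients) samples v and \(\tau \) on a grid.

```python
# Numerical check of the monotonicity statements in Theorems 2 and 3.
eps = 1e-9

def gap(a, b, lam, t):
    return (t * a + (1 - t) * b)**lam - (a**t * b**(1 - t))**lam

for a, b in [(4.0, 1.0), (0.3, 2.0)]:
    for lam in (1.0, 2.0, 3.5):
        F = lambda t: gap(a, b, lam, t) / min(t, 1 - t)**lam   # Theorem 2
        G = lambda t: gap(a, b, lam, t) / max(t, 1 - t)**lam   # Theorem 3
        grid = [0.05 * i for i in range(1, 20)]                # 0.05, 0.10, ..., 0.95
        for v in grid:
            for tau in grid:
                if v <= tau <= 0.5:
                    assert F(v) >= F(tau) - eps and G(v) <= G(tau) + eps
                if 0.5 <= v <= tau:
                    assert F(v) <= F(tau) + eps and G(v) >= G(tau) - eps

print("Theorems 2 and 3 hold on the sample grid")
```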

For convex functions, the following results hold.

Theorem 4

Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and let \(\lambda \ge 1\). Then

$$\begin{aligned} \frac{\big [(1-v)f(0)+vf(1)\big ]^{\lambda }-f^{\lambda }(v)}{\min \{v,1-v\}^{\lambda }} \ge \frac{\big [(1-\tau )f(0)+\tau f(1)\big ]^{\lambda }-f^{\lambda }(\tau )}{\min \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0< v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau < 1\). In particular,

$$\begin{aligned} r^{\lambda }\Big [\big (f(0)+f(1)\big )^{\lambda }-2^{\lambda }f^{\lambda }(\frac{1}{2})\Big ] \le \Big [(1-v)f(0)+vf(1)\Big ]^{\lambda }-f^{\lambda }(v),\end{aligned}$$

where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).

Proof

The proof idea is the same as that of Sababheh’s [28, Theorem 2.1].

First, we assume that the function f is twice differentiable, then \(f''(x)\ge 0\) for \(0\le x\le 1\). Define the function

$$\begin{aligned} F(v)=\frac{\big [(1-v)f(0)+vf(1)\big ]^{\lambda }-f^{\lambda }(v)}{\min \{v,1-v\}^{\lambda }}, \end{aligned}$$

where \(0\le v\le 1\). Then, for \(0\le v\le \frac{1}{2}\), we have

$$\begin{aligned} F'(v)&=\frac{\lambda f^{\lambda -1}(v)}{v^{\lambda +1}}\Big [-\Big (\frac{(1-v)f(0)+vf(1)}{f(v)}\Big )^{\lambda -1}f(0)+f(v)-vf'(v)\Big ]\nonumber \\&\le \frac{\lambda f^{\lambda -1}(v)}{v^{\lambda +1}}\Big [-f(0)+f(v)-vf'(v)\Big ]. \end{aligned}$$
(19)

Putting \(g_{1}(v)=-f(0)+f(v)-vf'(v)\) (\(v\in [0,1]\)), since \(f''(v)\ge 0\) for \(0\le v\le 1\), then \(g_{1}'(v)=-vf''(v)\le 0\). This shows \(g_{1}(v)\le g_{1}(0)=0\), \(v\in [0,1]\). Therefore, inequality (19) implies F(v) is decreasing with respect to v on \([0,\frac{1}{2}]\). Similarly, if \(\frac{1}{2}\le v\le 1\), then

$$\begin{aligned} F'(v)&=\frac{\lambda f^{\lambda -1}(v)}{(1-v)^{\lambda +1}}\Big [\Big (\frac{(1-v)f(0)+vf(1)}{f(v)}\Big )^{\lambda -1}f(1)-f(v)-(1-v)f'(v)\Big ]\nonumber \\&\ge \frac{\lambda f^{\lambda -1}(v)}{(1-v)^{\lambda +1}}\Big [f(1)-f(v)-(1-v)f'(v)\Big ]. \end{aligned}$$
(20)

Setting \(g_{2}(v)=f(1)-f(v)-(1-v)f'(v)\) (\(v\in [0,1]\)), then \(g_{2}'(v)=-(1-v)f''(v)\le 0\), which gives \(g_{2}(v)\ge g_{2}(1)=0\), \(v\in [0,1]\). Thus, inequality (20) implies F(v) is increasing with respect to v on \([\frac{1}{2},1]\).

The general case follows from the fact that any convex function is a uniform limit of smooth convex functions [4, Theorem 1].

This completes the proof. \(\square \)

A reverse of Theorem 4 can be stated as follows.

Theorem 5

Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and let \(\lambda \ge 1\). Then

$$\begin{aligned} \frac{\big [(1-v)f(0)+vf(1)\big ]^{\lambda }-f^{\lambda }(v)}{\max \{v,1-v\}^{\lambda }} \le \frac{\big [(1-\tau )f(0)+\tau f(1)\big ]^{\lambda }-f^{\lambda }(\tau )}{\max \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0<v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}<v\le \tau \le 1\). Moreover,

$$\begin{aligned} \Big [(1-v)f(0)+vf(1)\Big ]^{\lambda }-f^{\lambda }(v)\le R^{\lambda }\Big [\big (f(0)+f(1)\big )^{\lambda }-2^{\lambda }f^{\lambda }(\frac{1}{2})\Big ],\end{aligned}$$

where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).

Proof

Using the same method as in the proof of Theorem 4, we can complete the proof. First, we assume that f is twice differentiable. Define the function

$$\begin{aligned} G(v)=\frac{\big [(1-v)f(0)+vf(1)\big ]^{\lambda }-f^{\lambda }(v)}{\max \{v,1-v\}^{\lambda }}, \end{aligned}$$

where \(0\le v\le 1\). Then, for \(0\le v\le \frac{1}{2}\), we have

$$\begin{aligned} G'(v)&=\frac{\lambda f^{\lambda -1}(v)}{(1-v)^{\lambda +1}}\Big [\Big (\frac{(1-v)f(0)+vf(1)}{f(v)}\Big )^{\lambda -1}f(1)-f(v)-(1-v)f'(v)\Big ]\nonumber \\&\ge \frac{\lambda f^{\lambda -1}(v)}{(1-v)^{\lambda +1}}\Big [f(1)-f(v)-(1-v)f'(v)\Big ]. \end{aligned}$$
(21)

Setting \(g_{2}(v)=f(1)-f(v)-(1-v)f'(v)\) (\(v\in [0,1]\)), then \(g_{2}'(v)=-(1-v)f''(v)\le 0\), which gives \(g_{2}(v)\ge g_{2}(1)=0\), \(v\in [0,1]\). By inequality (21), we conclude that \( G'(v)\ge 0\), which implies G(v) is increasing with respect to v on \([0,\frac{1}{2}]\). Similarly, if \(\frac{1}{2}\le v\le 1\), then

$$\begin{aligned} G'(v)&=\frac{\lambda f^{\lambda -1}(v)}{v^{\lambda +1}}\Big [-\Big (\frac{(1-v)f(0)+vf(1)}{f(v)}\Big )^{\lambda -1}f(0)+f(v)-vf'(v)\Big ]\nonumber \\&\le \frac{\lambda f^{\lambda -1}(v)}{v^{\lambda +1}}\Big [-f(0)+f(v)-vf'(v)\Big ]. \end{aligned}$$
(22)

Putting \(g_{1}(v)=-f(0)+f(v)-vf'(v)\) (\(v\in [0,1]\)), then \(g_{1}'(v)=-vf''(v)\le 0\), \(v\in [0,1]\). This is to say that \(g_{1}(v)\le g_{1}(0)=0\), \(v\in [0,1]\). Therefore, inequality (22) implies G(v) is decreasing with respect to v on \([\frac{1}{2},1]\).

The general case follows from the fact that any convex function is a uniform limit of smooth convex functions.

This completes the proof. \(\square \)

Remark 1

Let a and b be positive real numbers. Then the function \(f(v)=a^{v}b^{1-v}\) is convex on [0, 1]. Replacing f by \(f(v)=a^{v}b^{1-v}\) in Theorems 4 and 5, we recover Theorems 2 and 3, respectively.
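To see Theorems 4 and 5 at work beyond this choice of f, the following Python sketch (an illustrative addition) checks their particular cases for three sample convex functions on [0, 1].

```python
# Numerical check of the particular cases in Theorems 4 and 5.
from math import exp
eps = 1e-9

convex_functions = [
    lambda t: 4.0**t * 1.5**(1 - t),   # f(v) = a^v b^(1-v)
    lambda t: exp(1.3 * t),            # a smooth convex function
    lambda t: 2.0 + 0.5 * t,           # an affine (hence convex) function
]
for f in convex_functions:
    for lam in (1.0, 2.0, 3.0):
        for v in (0.05, 0.2, 0.5, 0.8, 0.95):
            r, R = min(v, 1 - v), max(v, 1 - v)
            gap = ((1 - v) * f(0) + v * f(1))**lam - f(v)**lam
            bridge = (f(0) + f(1))**lam - 2**lam * f(0.5)**lam
            assert r**lam * bridge <= gap + eps     # Theorem 4, particular case
            assert gap <= R**lam * bridge + eps     # Theorem 5, particular case

print("particular cases of Theorems 4 and 5 hold for the sample functions")
```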

The following two theorems are refinements of the Alzer–Fonseca–Kovačec inequalities (7) when \(\lambda =1,2,3,\ldots \).

Theorem 6

Let a, b, \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} \big [\mu a+(1-\mu )b\big ]^{m}&-\big (a^{\mu }b^{1-\mu }\big )^{m}+r\Big [a^{\frac{m}{2}}-\big (a^{v}b^{1-v}\big )^{\frac{m}{2}}\Big ]^{2}\nonumber \\&\le \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [\big (va+(1-v)b\big )^{m}-\big (a^{v}b^{1-v}\big )^{m}\Big ] , \end{aligned}$$
(23)

where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{m}-\mu ^{m}\).

Proof

Let \(\tilde{r}=\Big (\frac{1-\mu }{1-v}\Big )^{m}+r\) and \(p_{k}(x)=C_{m}^{k}x^{k}(1-x)^{m-k}\) (\(x\in [0,1]\), \(k=0,1,\ldots ,m\)). Set \(\alpha _{k}=\Big (\frac{1-\mu }{1-v}\Big )^{m}p_{k}(v)-p_{k}(\mu )\) (\(k=0,1,\ldots ,m-1\)), \(\alpha _{m}=\Big (\frac{1-\mu }{1-v}\Big )^{m}p_{m}(v)-p_{m}(\mu )-r\), \(\alpha _{m+1}=1\) and \(\alpha _{m+2}=2r\). By Lemma 2 and the choice of r, we have \(\alpha _{k}\ge 0\) (\(k=0,1,\ldots ,m+2\)) and \(\sum \limits _{k=0}^{m+2}\alpha _{k}=\tilde{r}\). Therefore, we deduce that

$$\begin{aligned} \Big (\frac{1-\mu }{1-v}\Big )^{m}&\Big [(va+1-v)^{m}-a^{mv}\Big ]-(\mu a+1-\mu )^{m}+a^{m\mu }-r\Big [a^{\frac{m}{2}}-a^{\frac{mv}{2}}\Big ]^{2}\nonumber \\&=\sum \limits _{k=0}^{m}\alpha _{k}a^{k}+\alpha _{m+1}a^{m\mu }+\alpha _{m+2}a^{\frac{m(1+v)}{2}}-\tilde{r}a^{mv}\nonumber \\&=\tilde{r}\Big \{\sum \limits _{k=0}^{m}\frac{\alpha _{k}}{\tilde{r}}a^{k} +\frac{\alpha _{m+1}}{\tilde{r}}a^{m\mu }+\frac{\alpha _{m+2}}{\tilde{r}}a^{\frac{m(1+v)}{2}}\Big \}-\tilde{r}a^{mv}\nonumber \\&\ge \tilde{r}a^{\tilde{r}^{-1}\Big \{\sum \limits _{k=0}^{m}k\alpha _{k}+m\mu +\alpha _{m+2}\frac{m(1+v)}{2}\Big \}} -\tilde{r}a^{mv}\ (inequality\ (1))\nonumber \\&=\tilde{r}a^{mv}-\tilde{r}a^{mv}\ (Lemma\ 1)\nonumber \\&=0,\nonumber \end{aligned}$$

which gives

$$\begin{aligned} \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [(va+1-v)^{m}-a^{mv}\Big ]\ge (\mu a+1-\mu )^{m}-a^{m\mu }+r\Big [a^{\frac{m}{2}}-a^{\frac{mv}{2}}\Big ]^{2}. \end{aligned}$$
(24)

Replacing a by \(\frac{a}{b}\) in inequality (24) and multiplying both sides by \(b^{m}\), we get the desired inequality (23).

This completes the proof. \(\square \)
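The sketch below (an illustrative addition) checks inequality (23) numerically with r taken at its largest admissible value.

```python
# Numerical check of inequality (23) with r = ((1-mu)/(1-v))^m v^m - mu^m.
eps = 1e-6

for m in (1, 2, 3, 5):
    for a, b in [(4.0, 1.0), (0.5, 3.0), (2.0, 2.0)]:
        for mu, v in [(0.1, 0.4), (0.25, 0.5), (0.3, 0.8)]:
            ratio = ((1 - mu) / (1 - v))**m
            r = ratio * v**m - mu**m
            geo_v = (a**v * b**(1 - v))**m
            lhs = ((mu * a + (1 - mu) * b)**m - (a**mu * b**(1 - mu))**m
                   + r * (a**(m / 2) - geo_v**0.5)**2)
            rhs = ratio * ((v * a + (1 - v) * b)**m - geo_v)
            assert lhs <= rhs + eps

print("inequality (23) holds on the sample grid")
```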

Theorem 7

Let a, b, \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} r_{1}\Big [b^{\frac{m}{2}}-\big (a^{v}b^{1-v}\big )^{\frac{m}{2}}\Big ]^{2}&+\Big (\frac{\mu }{v}\Big )^{m}\Big [\big (va+(1-v)b\big )^{m}-\big (a^{v}b^{1-v}\big )^{m}\Big ]\nonumber \\&\le \big (\mu a+(1-\mu )b\big )^{m}-\big (a^{\mu }b^{1-\mu }\big )^{m}, \end{aligned}$$
(25)

where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{m},(1-\mu )^{m}-\Big (\frac{\mu }{v}\Big )^{m}(1-v)^{m}\Big \}\).

Proof

Take \(p_{k}(x)=C_{m}^{k}x^{k}(1-x)^{m-k}\) (\(x\in [0,1]\), \(k=0,1,\ldots ,m\)) and set \(\beta _{0}=(1-\mu )^{m}-\Big (\frac{\mu }{v}\Big )^{m}(1-v)^{m}-r_{1}\), \(\beta _{k}=p_{k}(\mu )-\Big (\frac{\mu }{v}\Big )^{m}p_{k}(v)\) (\(k=1,\ldots ,m\)), \(\beta _{m+1}=\Big (\frac{\mu }{v}\Big )^{m}-r_{1}\) and \(\beta _{m+2}=2r_{1}\). Then, by Lemma 3 and the choice of \(r_{1}\), we have \(\beta _{k}\ge 0\) (\(k=0,1,\ldots ,m+2\)) and \(\sum \limits _{k=0}^{m+2}\beta _{k}=1\). Therefore, we obtain

$$\begin{aligned} (\mu a+1-\mu )^{m}&-a^{m\mu }-\Big (\frac{\mu }{v}\Big )^{m}\Big [(va+1-v)^{m}-a^{m v}\Big ]-r_{1}\Big [1-a^{\frac{mv}{2}}\Big ]^{2}\nonumber \\&=\sum \limits _{k=0}^{m}\beta _{k}a^{k}+\beta _{m+1}a^{mv}+\beta _{m+2}a^{\frac{mv}{2}}-a^{m\mu }\nonumber \\&\ge a^{\Big \{\sum \limits _{k=0}^{m}k\beta _{k}+mv\beta _{m+1}+\frac{mv}{2}\beta _{m+2}\Big \}}-a^{m\mu }\ (inequality\ (1))\nonumber \\&=a^{m\mu }-a^{m\mu }\ (Lemma\ 1)\nonumber \\&=0, \end{aligned}$$

or equivalently,

$$\begin{aligned} \Big (\frac{\mu }{v}\Big )^{m}\Big [(va+1-v)^{m}-a^{mv}\Big ]+r_{1}\Big [1-a^{\frac{mv}{2}}\Big ]^{2}\le (\mu a+1-\mu )^{m}-a^{m\mu }. \end{aligned}$$
(26)

Replacing a by \(\frac{a}{b}\) in inequality (26) and multiplying both sides by \(b^{m}\), we obtain inequality (25).

This completes the proof. \(\square \)
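Analogously, the following sketch (an illustrative addition) checks inequality (25) numerically with \(r_{1}\) taken at its largest admissible value.

```python
# Numerical check of inequality (25) with r_1 at its maximal admissible value.
eps = 1e-6

for m in (1, 2, 3, 5):
    for a, b in [(4.0, 1.0), (0.5, 3.0), (2.0, 2.0)]:
        for mu, v in [(0.1, 0.4), (0.25, 0.5), (0.3, 0.8)]:
            r1 = min((mu / v)**m, (1 - mu)**m - (mu / v)**m * (1 - v)**m)
            geo_v = (a**v * b**(1 - v))**m
            lhs = (r1 * (b**(m / 2) - geo_v**0.5)**2
                   + (mu / v)**m * ((v * a + (1 - v) * b)**m - geo_v))
            rhs = (mu * a + (1 - mu) * b)**m - (a**mu * b**(1 - mu))**m
            assert lhs <= rhs + eps

print("inequality (25) holds on the sample grid")
```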

3 Generalizations and Refinements of the Young’s and Its Reverse Inequalities for Matrices and Operators

3.1 Generalizations and Refinements of the Young’s and Its Reverse Inequalities for Convex Functions

Let \(\mathcal {M}_{n}(\mathbb {C})\) be the algebra of all \(n\times n\) complex matrices. For any \(A\in \mathcal {M}_{n}(\mathbb {C})\), the absolute value of A is the positive semidefinite matrix \(|A|=(A^{*}A)^{\frac{1}{2}}\), where \(A^{*}\) is the conjugate transpose of A. A norm \(\Vert \cdot \Vert \) on \(\mathcal {M}_{n}(\mathbb {C})\) is called a unitarily invariant norm if \(\Vert UAV\Vert =\Vert A\Vert \) for all \(A\in \mathcal {M}_{n}(\mathbb {C})\) and all unitary U, \(V\in \mathcal {M}_{n}(\mathbb {C})\). Examples of such norms are the operator norm, the trace norm and the Hilbert–Schmidt norm.

Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and let \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Bhatia and Davis [6] proved that the function \(f(v)=\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \) is convex on the interval [0, 1] and attains its minimum at \(v=\frac{1}{2}\). Therefore, it is decreasing on \([0,\frac{1}{2}]\) and increasing on \([\frac{1}{2},1]\). Thus the celebrated Heinz inequality holds:

$$\begin{aligned} 2\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert \le \big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \le \big \Vert AX+XB\big \Vert ,\end{aligned}$$

which is a significant refinement of the arithmetic-geometric mean inequality:

$$\begin{aligned}2\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert \le \big \Vert AX+XB\big \Vert . \end{aligned}$$

Based on the convexity of \(f(v)=\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \) on [0, 1] and Theorems 4 and 5, we get the following theorems.

Theorem 8

Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and let \(\lambda \ge 1\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then

$$\begin{aligned}&\frac{\big \Vert AX+XB\big \Vert ^{\lambda }-\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert ^{\lambda }}{\min \{v,1-v\}^{\lambda }}\\&\qquad \qquad \qquad \qquad \ge \frac{\big \Vert AX+XB\big \Vert ^{\lambda }-\big \Vert A^{\tau }XB^{1-\tau }+A^{1-\tau }XB^{\tau }\big \Vert ^{\lambda }}{\min \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,

$$\begin{aligned} (2r)^{\lambda }\Big [\big \Vert AX+XB\big \Vert ^{\lambda }&-2^{\lambda }\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert ^{\lambda }\Big ]\\&\le \big \Vert AX+XB\big \Vert ^{\lambda }-\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert ^{\lambda }, \end{aligned}$$

where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).

Theorem 9

Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and let \(\lambda \ge 1\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then

$$\begin{aligned}&\frac{\big \Vert AX+XB\big \Vert ^{\lambda }-\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert ^{\lambda }}{\max \{v,1-v\}^{\lambda }}\\&\qquad \qquad \qquad \qquad \le \frac{\big \Vert AX+XB\big \Vert ^{\lambda }-\big \Vert A^{\tau }XB^{1-\tau }+A^{1-\tau }XB^{\tau }\big \Vert ^{\lambda }}{\max \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,

$$\begin{aligned} \big \Vert AX+XB\big \Vert ^{\lambda }&-\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert ^{\lambda }\\&\le (2R)^{\lambda }\Big [\big \Vert AX+XB\big \Vert ^{\lambda }-2^{\lambda }\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert ^{\lambda }\Big ], \end{aligned}$$

where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).
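The particular cases of Theorems 8 and 9 can be tested for concrete matrices; the NumPy sketch below (an illustrative addition, with fractional powers computed by spectral decomposition and the Frobenius norm playing the role of the Hilbert–Schmidt norm) does so for random positive definite A and B.

```python
# NumPy check of the particular cases of Theorems 8 and 9 (Frobenius norm).
import numpy as np

rng = np.random.default_rng(1)

def random_pd(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M @ M.conj().T + 0.1 * np.eye(n)       # strictly positive definite

def mpow(P, t):
    w, U = np.linalg.eigh(P)
    return (U * w**t) @ U.conj().T                # P**t via spectral decomposition

n = 4
A, B = random_pd(n), random_pd(n)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
hs = lambda M: np.linalg.norm(M, 'fro')

def f(t):                                         # the Bhatia-Davis convex function
    return hs(mpow(A, t) @ X @ mpow(B, 1 - t) + mpow(A, 1 - t) @ X @ mpow(B, t))

for lam in (1.0, 2.0, 3.0):
    for v in (0.1, 0.3, 0.45, 0.7, 0.9):
        r, R = min(v, 1 - v), max(v, 1 - v)
        gap = f(1.0)**lam - f(v)**lam             # f(0) = f(1) = ||AX + XB||
        bridge = f(1.0)**lam - f(0.5)**lam        # f(1/2) = 2 ||A^(1/2) X B^(1/2)||
        tol = 1e-9 * f(1.0)**lam
        assert (2 * r)**lam * bridge <= gap + tol     # Theorem 8, particular case
        assert gap <= (2 * R)**lam * bridge + tol     # Theorem 9, particular case

print("particular cases of Theorems 8 and 9 hold for this random sample")
```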

Similarly, let \(\phi (v)=\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \), where \(0\le v\le 1\), \(r>0\), A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and \(\Vert \cdot \Vert \) is a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Hiai and Zhan [14, Theorem 1] proved that \(\phi \) is convex on the interval [0, 1] and attains its minimum at \(v=\frac{1}{2}\). Consequently, it is decreasing on \([0,\frac{1}{2}]\) and increasing on \([\frac{1}{2},1]\). An immediate consequence of this result is that

$$\begin{aligned} \big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2}&\le \big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \\&\le \big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert , \end{aligned}$$

which interpolates the Cauchy–Schwarz inequality

$$\begin{aligned} \big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2}\le \big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert , \end{aligned}$$

obtained by Bhatia and Davis [7] and Hiai [13]. Applying the convex function \(\phi (v)=\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \) to Theorems 4 and 5, we obtain the following Cauchy–Schwarz-type inequalities.

Theorem 10

Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and let \(\lambda \ge 1\), \(r>0\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then

$$\begin{aligned}&\frac{\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \Big )^{\lambda }}{\min \{v,1-v\}^{\lambda }}\\&\qquad \ge \frac{\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{\tau }XB^{1-\tau }\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-\tau }XB^{\tau }\mid ^{r}\big \Vert \Big )^{\lambda }}{\min \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,

$$\begin{aligned} (2r_{1})^{\lambda }&\Big [\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }- \big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2\lambda }\Big ]\\&\le \Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \Big )^{\lambda }, \end{aligned}$$

where \(0\le v\le 1\) and \(r_{1}=\min \{v,1-v\}\).

Theorem 11

Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite, and let \(\lambda \ge 1\), \(r>0\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then

$$\begin{aligned}&\frac{\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \Big )^{\lambda }}{\max \{v,1-v\}^{\lambda }}\\&\qquad \le \frac{\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{\tau }XB^{1-\tau }\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-\tau }XB^{\tau }\mid ^{r}\big \Vert \Big )^{\lambda }}{\max \{\tau ,1-\tau \}^{\lambda }} \end{aligned}$$

holds for \(0\le v\le \tau \le \frac{1}{2}\); the reverse inequality holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,

$$\begin{aligned} \Big (\big \Vert \mid AX\mid ^{r}\big \Vert&\cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }-\Big (\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \Big )^{\lambda }\\&\le (2R_{1})^{\lambda }\Big [\Big (\big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \Big )^{\lambda }- \big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2\lambda }\Big ], \end{aligned}$$

where \(0\le v\le 1\) and \(R_{1}=\max \{v,1-v\}\).

3.2 Generalizations and Refinements of the Young’s and Its Reverse Inequalities for Determinants

In this subsection, we mainly give generalizations and refinements of the Young’s and its reverse inequalities for determinants. To achieve our goal, we need the following lemma (see, e.g., [16, p.482]), which is the Minkowski inequality for determinants.

Lemma 4

Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices. Then

$$\begin{aligned} \det (A+B)^{\frac{1}{n}}\ge \det A^{\frac{1}{n}}+\det B^{\frac{1}{n}}.\end{aligned}$$

A determinant version of Young’s inequality (see, e.g., [16, p. 467]) is also well known; it can be stated as

$$\begin{aligned}\det (A^{v}B^{1-v})\le \det \big (vA+(1-v)B\big ),\end{aligned}$$

where A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) are positive semidefinite matrices and \(0\le v\le 1\). Based on Theorems 6 and 7, we obtain the generalizations and refinements of the Young’s and its reverse inequalities for determinants.
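Before turning to the theorems, the reader may verify Lemma 4 and the determinant Young inequality numerically; the NumPy sketch below (an illustrative addition) uses \(\det (A^{v}B^{1-v})=(\det A)^{v}(\det B)^{1-v}\).

```python
# NumPy check of Lemma 4 and of the determinant Young inequality.
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + 1e-3 * np.eye(n)

for _ in range(50):
    A, B = random_pd(n), random_pd(n)
    dA, dB = np.linalg.det(A), np.linalg.det(B)
    # Lemma 4 (Minkowski's determinant inequality)
    assert np.linalg.det(A + B)**(1 / n) >= dA**(1 / n) + dB**(1 / n) - 1e-8
    # determinant version of Young's inequality
    for v in (0.2, 0.5, 0.8):
        assert dA**v * dB**(1 - v) <= np.linalg.det(v * A + (1 - v) * B) + 1e-8

print("Lemma 4 and the determinant Young inequality hold for this random sample")
```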

Theorem 12

Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices and \(0\le \mu < v\le 1\). Then for positive integer m, the following inequality holds

$$\begin{aligned} \Big [\mu \det A^{\frac{1}{n}}+&(1-\mu )\det B^{\frac{1}{n}}\Big ]^{mn}\\ {}&\quad -\det \big (A^{\mu }B^{1-\mu }\big )^{m}+ r\Big [\det A^{\frac{m}{2}}-\det (A^{v}B^{1-v})^{\frac{m}{2}}\Big ]^{2}\\&\le \Big (\frac{1-\mu }{1-v}\Big )^{mn}\Big [\det (vA+(1-v)B)^{m}-\det \big (A^{v}B^{1-v}\big )^{m}\Big ], \end{aligned}$$

where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{mn}v^{mn}-\mu ^{mn}\).

Proof

By Lemma 4 and Theorem 6, we get

$$\begin{aligned}&\Big (\frac{1-\mu }{1-v}\Big )^{mn}\Big [\det (vA+(1-v)B)^{m}-\det \big (A^{v}B^{1-v}\big )^{m}\Big ]\\&=\Big (\frac{1-\mu }{1-v}\Big )^{mn}\Big [\big (\det (vA+(1-v)B)^{\frac{1}{n}}\big )^{mn}- \big ((\det A^{\frac{1}{n}})^{v}\big (\det B^{\frac{1}{n}}\big )^{1-v}\big )^{mn}\Big ]\\&\ge \Big (\frac{1-\mu }{1-v}\Big )^{mn}\Big [\big (\det (vA)^{\frac{1}{n}}+\det ((1-v)B)^{\frac{1}{n}}\big )^{mn}\\&\qquad \qquad \qquad -\big ((\det A^{\frac{1}{n}})^{v}\big (\det B^{\frac{1}{n}}\big )^{1-v}\big )^{mn}\Big ]\\&=\Big (\frac{1-\mu }{1-v}\Big )^{mn}\Big [\big (v\det A^{\frac{1}{n}}+(1-v)\det B^{\frac{1}{n}}\big )^{mn}\\&\qquad \qquad \qquad -\big ((\det A^{\frac{1}{n}})^{v}\big (\det B^{\frac{1}{n}}\big )^{1-v}\big )^{mn}\Big ]\\&\ge \Big [\mu \det A^{\frac{1}{n}}+(1-\mu )\det B^{\frac{1}{n}}\Big ]^{mn}-\Big [\big (\det A^{\frac{1}{n}}\big )^{\mu }\big (\det B^{\frac{1}{n}}\big )^{1-\mu }\Big ]^{mn}\\&\qquad \qquad \qquad +r\Big [(\det A^{\frac{1}{n}})^{\frac{mn}{2}}- \big ((\det A^{\frac{1}{n}})^{v}(\det B^{\frac{1}{n}})^{1-v}\big )^{\frac{mn}{2}}\Big ]^{2}\\&=\Big [\mu \det A^{\frac{1}{n}}+(1-\mu )\det B^{\frac{1}{n}}\Big ]^{mn}-\det (A^{\mu }B^{1-\mu })^{m}\\&\qquad \qquad \qquad +r\Big [\det A^{\frac{m}{2}}- \det (A^{v}B^{1-v})^{\frac{m}{2}}\Big ]^{2}. \end{aligned}$$

This completes the proof. \(\square \)

Theorem 13

Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices and \(0\le \mu < v\le 1\). Then for positive integer m, the following inequality holds

$$\begin{aligned} \Big (\frac{\mu }{v}\Big )^{mn}\Big [\Big (v\det A^{\frac{1}{n}}+&(1-v)\det B^{\frac{1}{n}}\Big )^{mn}-\det (A^{v}B^{1-v})^{m}\Big ]\\&\qquad +r_{1}\Big [\det B^{\frac{m}{2}}-\det (A^{v}B^{1-v})^{\frac{m}{2}}\Big ]^{2}\\&\le \det \big (\mu A+(1-\mu )B\big )^{m}-\det \big (A^{\mu }B^{1-\mu }\big )^{m}, \end{aligned}$$

where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{mn},(1-\mu )^{mn}-\Big (\frac{\mu }{v}\Big )^{mn}(1-v)^{mn}\Big \}\).

Proof

By Lemma 4 and Theorem 7, we have

$$\begin{aligned}&\det \big (\mu A+(1-\mu )B\big )^{m}-\det \big (A^{\mu }B^{1-\mu }\big )^{m}\\&=\Big [\det \big (\mu A+(1-\mu )B\big )^{\frac{1}{n}}\Big ]^{mn}-\Big [\big (\det A^{\frac{1}{n}}\big )^{\mu }\big (\det B^{\frac{1}{n}}\big )^{1-\mu }\Big ]^{mn}\\&\ge \Big [\det \big (\mu A\big )^{\frac{1}{n}}+\det \big ((1-\mu )B\big )^{\frac{1}{n}}\Big ]^{mn}-\Big [\big (\det A^{\frac{1}{n}}\big )^{\mu }\big (\det B^{\frac{1}{n}}\big )^{1-\mu }\Big ]^{mn}\\&=\Big [\mu \det A^{\frac{1}{n}}+(1-\mu )\det B^{\frac{1}{n}}\Big ]^{mn}-\Big [\big (\det A^{\frac{1}{n}}\big )^{\mu }\big (\det B^{\frac{1}{n}}\big )^{1-\mu }\Big ]^{mn}\\&\ge \Big (\frac{\mu }{v}\Big )^{mn}\Big [\big (v\det A^{\frac{1}{n}}+(1-v)\det B^{\frac{1}{n}}\big )^{mn} -\Big (\big (\det A^{\frac{1}{n}}\big )^{v}\big (\det B^{\frac{1}{n}}\big )^{1-v}\Big )^{mn}\Big ]\\&\qquad +r_{1}\Big [\big (\det B^{\frac{1}{n}}\big )^{\frac{mn}{2}}- \Big (\big (\det A^{\frac{1}{n}}\big )^{v}\big (\det B^{\frac{1}{n}}\big )^{1-v}\Big )^{\frac{mn}{2}}\Big ]^{2}\\&=\Big (\frac{\mu }{v}\Big )^{mn}\Big [\big (v\det A^{\frac{1}{n}}+(1-v)\det B^{\frac{1}{n}}\big )^{mn} -\det (A^{v}B^{1-v})^{m}\Big ]\\&\qquad +r_{1}\Big [\det B^{\frac{m}{2}}- \det (A^{v}B^{1-v})^{\frac{m}{2}}\Big ]^{2}. \end{aligned}$$

This completes the proof. \(\square \)
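The determinant inequalities of Theorems 12 and 13 can be spot-checked in the same spirit; the NumPy sketch below (an illustrative addition) takes r and \(r_{1}\) at their largest admissible values.

```python
# NumPy check of Theorems 12 and 13 for random positive definite matrices.
import numpy as np

rng = np.random.default_rng(3)
n, mu, v = 3, 0.25, 0.5

def random_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

for m in (1, 2, 3):
    for _ in range(20):
        A, B = random_pd(n), random_pd(n)
        dA, dB = np.linalg.det(A), np.linalg.det(B)
        geo = lambda t: dA**t * dB**(1 - t)              # det(A^t B^(1-t))
        ratio = ((1 - mu) / (1 - v))**(m * n)
        # Theorem 12
        r = ratio * v**(m * n) - mu**(m * n)
        lhs12 = ((mu * dA**(1 / n) + (1 - mu) * dB**(1 / n))**(m * n) - geo(mu)**m
                 + r * (dA**(m / 2) - geo(v)**(m / 2))**2)
        rhs12 = ratio * (np.linalg.det(v * A + (1 - v) * B)**m - geo(v)**m)
        # Theorem 13
        r1 = min((mu / v)**(m * n),
                 (1 - mu)**(m * n) - (mu / v)**(m * n) * (1 - v)**(m * n))
        lhs13 = ((mu / v)**(m * n) * ((v * dA**(1 / n) + (1 - v) * dB**(1 / n))**(m * n) - geo(v)**m)
                 + r1 * (dB**(m / 2) - geo(v)**(m / 2))**2)
        rhs13 = np.linalg.det(mu * A + (1 - mu) * B)**m - geo(mu)**m
        tol = 1e-9 * (abs(rhs12) + abs(rhs13) + 1.0)
        assert lhs12 <= rhs12 + tol and lhs13 <= rhs13 + tol

print("Theorems 12 and 13 hold for this random sample")
```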

3.3 Generalizations and Refinements of the Young’s and Its Reverse Inequalities for Operators

In this subsection, we mainly give operator inequalities for the Young’s and its reverse inequalities. We need some preparations. Let \(\mathcal {B}(\mathbb {H})\) be the \(C^{*}\)-algebra of all bounded linear operators on a complex Hilbert space \(\mathbb {H}\) and \(I_{\mathbb {H}}\) \((\in \mathcal {B}(\mathbb {H}))\) be the identity operator. For two self-adjoint operators A and B, the symbol \(B\le A\) means that \(A-B\) is a positive operator.

Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators and \(0\le v\le 1\). The \(v-\)weighted arithmetic operator mean of A and B, denoted by \(A\nabla _{v}B\), is defined by

$$\begin{aligned}A\nabla _{v}B=(1-v)A+vB.\end{aligned}$$

Moreover, if A is an invertible positive operator, the \(v-\) weighted geometric operator mean of A and B, denoted by \(A\sharp _{v}B\), is defined by

$$\begin{aligned} A\sharp _{v}B=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^{v}A^{\frac{1}{2}}.\end{aligned}$$

For \(v>1\), the expression \(A\sharp _{v}B=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^{v}A^{\frac{1}{2}}\) is still well defined, and in the following we use \(A\sharp _{v}B=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^{v}A^{\frac{1}{2}}\) for all \(v\ge 0\). When \(v=\frac{1}{2}\), the operators \(A\nabla _{\frac{1}{2}}B\) and \(A\sharp _{\frac{1}{2}}B\) are called the arithmetic operator mean and the geometric operator mean, respectively; for brevity, we write \(A\nabla B\) and \(A\sharp B\). For more details, see Kubo and Ando [22].

The operator version of inequality (2) can be stated as

$$\begin{aligned} A\sharp _{v}B\le A\nabla _{v}B, \end{aligned}$$

where A, \(B\in \mathcal {B}(\mathbb {H})\) are positive operators with A invertible and \(0\le v \le 1\).

Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices with A invertible and \(0\le v \le 1\). Kittaneh and Manasrah in [19] and [20] presented the following matrix version of inequality (3):

$$\begin{aligned} 2r(A\nabla B-A\sharp B)\le A\nabla _{v}B-A\sharp _{v}B\le 2R(A\nabla B-A\sharp B), \end{aligned}$$
(27)

where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).

It should be mentioned that Furuichi [9] independently proved inequality (27) for positive operators and Kittaneh et al. [18] also established inequality (27) by a different method.

Before giving the main results of this part, we need the following lemma, which is the monotonicity property for operator functions [11].

Lemma 5

Let \(X\in \mathcal {B}(\mathbb {H})\) be a self-adjoint operator and let f and g be continuous functions such that \(f(t)\ge g(t)\) for all \(t\in Sp(X)\) (the spectrum of X). Then \(f(X)\ge g(X)\).

Based on Theorem 6, we have the following operator inequality.

Theorem 14

Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators with A invertible, let \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} A\sharp _{m}(A\nabla _{\mu }B)-A\sharp _{m\mu }B&+r\Big [A\sharp _{m}B-2A\sharp _{\frac{m(1+v)}{2}}B+A\sharp _{mv}B\Big ]\\&\le \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [A\sharp _{m}(A\nabla _{v}B)-A\sharp _{mv}B\Big ], \end{aligned}$$

where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{m}-\mu ^{m}\).

Proof

By inequality (24), we have

$$\begin{aligned} (\mu a+1-\mu )^{m}-a^{m\mu }+r\Big [a^{\frac{m}{2}}-a^{\frac{mv}{2}}\Big ]^{2}\le \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [(va+1-v)^{m}-a^{m v}\Big ], \end{aligned}$$

for \(a>0\).

Since A and B are positive operators and A is invertible, the operator \(T=A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\) is positive. Therefore, by Lemma 5, we obtain

$$\begin{aligned} \big [\mu T+(1-\mu )I_{\mathbb {H}}\big ]^{m}&-T^{m\mu }+r\Big [T^{\frac{m}{2}}-T^{\frac{mv}{2}}\Big ]^{2}\nonumber \\&\le \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [(vT+(1-v)I_{\mathbb {H}})^{m}-T^{m v}\Big ]. \end{aligned}$$
(28)

Multiplying inequality (28) by \(A^{\frac{1}{2}}\), we have

$$\begin{aligned} A^{\frac{1}{2}}\big [\mu T&+(1-\mu )I_{\mathbb {H}}\big ]^{m}A^{\frac{1}{2}}-A^{\frac{1}{2}}T^{m\mu }A^{\frac{1}{2}} +rA^{\frac{1}{2}}\Big [T^{\frac{m}{2}}-T^{\frac{mv}{2}}\Big ]^{2}A^{\frac{1}{2}}\nonumber \\&\le \Big (\frac{1-\mu }{1-v}\Big )^{m}A^{\frac{1}{2}}\Big [(vT+(1-v)I_{\mathbb {H}})^{m}-T^{m v}\Big ]A^{\frac{1}{2}}. \end{aligned}$$

or equivalently,

$$\begin{aligned} A\sharp _{m}(A\nabla _{\mu }B)-A\sharp _{m\mu }B&+r\Big [A\sharp _{m}B-2A\sharp _{\frac{m(1+v)}{2}}B+A\sharp _{mv}B\Big ]\\&\le \Big (\frac{1-\mu }{1-v}\Big )^{m}\Big [A\sharp _{m}(A\nabla _{v}B)-A\sharp _{mv}B\Big ]. \end{aligned}$$

This completes the proof. \(\square \)
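Finally, the operator inequality of Theorem 14 can be tested for matrices by checking that the difference of the two sides is positive semidefinite; the NumPy sketch below (an illustrative addition) does so with r at its largest admissible value.

```python
# NumPy check of Theorem 14 for random positive definite matrices.
import numpy as np

rng = np.random.default_rng(4)
n, mu, v = 4, 0.25, 0.5

def random_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

def mpow(P, t):
    w, U = np.linalg.eigh(P)
    return (U * w**t) @ U.T

def sharp(A, B, t):          # A #_t B = A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2)
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, t) @ Ah

def nabla(A, B, t):          # A nabla_t B = (1-t) A + t B
    return (1 - t) * A + t * B

for m in (1, 2, 3):
    A, B = random_pd(n), random_pd(n)
    r = ((1 - mu) / (1 - v))**m * v**m - mu**m
    lhs = (sharp(A, nabla(A, B, mu), m) - sharp(A, B, m * mu)
           + r * (sharp(A, B, m) - 2 * sharp(A, B, m * (1 + v) / 2) + sharp(A, B, m * v)))
    rhs = ((1 - mu) / (1 - v))**m * (sharp(A, nabla(A, B, v), m) - sharp(A, B, m * v))
    gap = rhs - lhs
    tol = 1e-8 * (np.linalg.norm(lhs) + np.linalg.norm(rhs) + 1.0)
    assert np.linalg.eigvalsh((gap + gap.T) / 2).min() >= -tol

print("Theorem 14 holds for this random sample")
```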

In the same way, based on Theorem 7, we get

Theorem 15

Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators with A invertible, let \(0\le \mu < v\le 1\) and let m be a positive integer. Then

$$\begin{aligned} r_{1}\Big [A-2A\sharp _{\frac{mv}{2}}B+A\sharp _{mv}B\Big ]+\Big (\frac{\mu }{v}\Big )^{m}&\Big [A\sharp _{m}(A\nabla _{v}B)-A\sharp _{mv}B\Big ]\\&\le A\sharp _{m}(A\nabla _{\mu }B)-A\sharp _{m\mu }B,\\ \end{aligned}$$

where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{m},(1-\mu )^{m}-\Big (\frac{\mu }{v}\Big )^{m}(1-v)^{m}\Big \}\).

Proof

The proof is very similar to that of Theorem 14, so we omit the details. \(\square \)