Abstract
In this work, we study refinements and generalizations of Young's inequality and its reverse. First, utilizing the famous weighted arithmetic and geometric mean inequality, we provide an alternative proof of the power version of Young's inequality given by Al-Manasrah and Kittaneh. In addition, some generalizations of the Al-Manasrah–Kittaneh inequalities are given. Further, we establish a refinement of the Alzer–Fonseca–Kova\(\check{c}\)ec version of Young's inequality as follows. Let a, b, \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then
and
where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{m}-\mu ^{m}\) and \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{m},\mu ^{m}-\Big (\frac{\mu }{v}\Big )^{m}v^{m}\Big \}\). As an application, we present corresponding operator and matrix inequalities following from the established scalar inequalities.
1 Introduction
The famous weighted arithmetic and geometric mean inequality can be stated as follows: Let m be a positive integer and let \(x_{k}\) and \(p_{k}\) (\(k=1,2,\ldots ,m\)) be positive real numbers with \(\sum \limits _{k=1}^{m}p_{k}=1\). Then
\(x_{1}^{p_{1}}x_{2}^{p_{2}}\cdots x_{m}^{p_{m}}\le p_{1}x_{1}+p_{2}x_{2}+\cdots +p_{m}x_{m}. \qquad (1)\)
Equality holds if and only if \(x_{1}=x_{2}=\cdots =x_{m}\). When \(m=2\), inequality (1) is just the classical Young's inequality
\(a^{v}b^{1-v}\le va+(1-v)b, \qquad (2)\)
where a and b are positive real numbers and \(0\le v\le 1\).
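These scalar inequalities are easy to sanity-check numerically. The following pure-Python sketch (the function names are ours, not from the paper) evaluates the gap in the weighted arithmetic–geometric mean inequality (1), with Young's inequality as its two-point case:

```python
import math

def weighted_amgm_gap(xs, ps):
    """Gap (weighted arithmetic mean) - (weighted geometric mean);
    inequality (1) says this is nonnegative when the weights sum to 1."""
    assert abs(sum(ps) - 1.0) < 1e-12
    arith = sum(p * x for p, x in zip(ps, xs))
    geom = math.prod(x ** p for p, x in zip(ps, xs))
    return arith - geom

def young_gap(a, b, v):
    """Gap va + (1-v)b - a^v b^(1-v); Young's inequality is the m = 2 case of (1)."""
    return weighted_amgm_gap([a, b], [v, 1.0 - v])
```

The gap vanishes exactly when all the \(x_{k}\) coincide, matching the equality condition stated above.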
Young's inequality and its reverse can be refined by adding a positive term to the left (resp. right) side, a possibility that has attracted the attention of many researchers.
In [19, 20], Kittaneh and Manasrah presented a refinement of Young's inequality and its reverse:
\(a^{v}b^{1-v}+r\big (\sqrt{a}-\sqrt{b}\big )^{2}\le va+(1-v)b\le a^{v}b^{1-v}+R\big (\sqrt{a}-\sqrt{b}\big )^{2}, \qquad (3)\)
where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
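The refinement terms can likewise be checked on sample values. A minimal sketch (names are ours) of the three quantities in the Kittaneh–Manasrah refinement, where the correction term is \(r(\sqrt{a}-\sqrt{b})^{2}\) on the left and \(R(\sqrt{a}-\sqrt{b})^{2}\) on the right:

```python
import math

def km_bounds(a, b, v):
    """Return (lower, middle, upper) for the Kittaneh-Manasrah refinement:
    a^v b^(1-v) + r(sqrt a - sqrt b)^2 <= va + (1-v)b <= a^v b^(1-v) + R(...)^2,
    with r = min(v, 1-v) and R = max(v, 1-v)."""
    r, R = min(v, 1 - v), max(v, 1 - v)
    gap = (math.sqrt(a) - math.sqrt(b)) ** 2
    mid = v * a + (1 - v) * b
    base = a ** v * b ** (1 - v)
    return base + r * gap, mid, base + R * gap
```

At \(v=\frac{1}{2}\) we have \(r=R=\frac{1}{2}\) and all three quantities collapse to the arithmetic mean.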
The squared versions of the refined Young's inequality and its reverse, obtained by Hirzallah and Kittaneh [15] and by He and Zou [12], respectively, can be stated as follows:
\(\big (a^{v}b^{1-v}\big )^{2}+r^{2}(a-b)^{2}\le \big (va+(1-v)b\big )^{2}\le \big (a^{v}b^{1-v}\big )^{2}+R^{2}(a-b)^{2}, \qquad (4)\)
where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
In 2015, Manasrah and Kittaneh [24, Theorem 2] deduced a further generalization of inequalities (3) and (4):
\(\big (a^{v}b^{1-v}\big )^{m}+r^{m}\big (a^{\frac{m}{2}}-b^{\frac{m}{2}}\big )^{2}\le \big (va+(1-v)b\big )^{m} \qquad (5)\)
for any positive integer m, where \(a\ge 0\), \(b\ge 0\), \(0\le v\le 1\), and \(r=\min \{v,1-v\}\).
It should be mentioned here that Akkouchi and Ighachane [1] gave an alternative proof to inequality (5).
In 2017, Al-Manasrah and Kittaneh [2, Theorem 3] presented another generalization of inequality (3) as follows: Let a, b and v be positive real numbers with \(0\le v\le 1\) and let m be a positive integer. Then
where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
It is easy to see that the left-hand side of inequalities (6) is also a refinement of inequality (5).
Alzer, Fonseca and Kova\(\check{c}\)ec [3, Theorem 2.1] obtained an important refinement of Young's inequality (3): Let a, b, v, \(\mu \) and \(\lambda \) be positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\). Then
\(\Big (\frac{\mu }{v}\Big )^{\lambda }\Big [\big (va+(1-v)b\big )^{\lambda }-a^{v\lambda }b^{(1-v)\lambda }\Big ]\le \big (\mu a+(1-\mu )b\big )^{\lambda }-a^{\mu \lambda }b^{(1-\mu )\lambda }\le \Big (\frac{1-\mu }{1-v}\Big )^{\lambda }\Big [\big (va+(1-v)b\big )^{\lambda }-a^{v\lambda }b^{(1-v)\lambda }\Big ]. \qquad (7)\)
The Alzer–Fonseca–Kova\(\check{c}\)ec inequalities (7) can be regarded as a major development concerning Young's inequality in recent years.
Later, Liao and Wu [23, Theorem 2.1] deduced the following inequalities between the arithmetic mean and the harmonic mean:
where a, b, v, \(\mu \) and \(\lambda \) are positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\).
As far as convex functions are concerned, Sababheh [28, Theorem 2.1] generalized inequalities (7) and (8): Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and v, \(\mu \) and \(\lambda \) be positive real numbers with \(0\le \mu \le v\le 1\) and \(\lambda \ge 1\). Then
In 2020, Ren [27, Theorems 2.1 and 2.3] refined the Alzer–Fonseca–Kova\(\check{c}\)ec inequalities (7) in the cases \(\lambda =1,2\) under some conditions.
In 2021, Ighachane, Akkouchi and Benabdi [17, Theorem 2.2] gave a refinement of the left-hand side of the Alzer–Fonseca–Kova\(\check{c}\)ec inequalities (7) for \(\lambda =1,2,3,\ldots \).
Young's inequality and its reverse, though very simple, are important in functional analysis, matrix theory, operator theory, electrical networks, etc. Many scholars have done much research on this topic. We refer the readers to [5, 8,9,10, 21, 25, 26, 29,30,31,32,33,34,35,36,37] and the references therein for other works.
The main aim of this work is to study generalizations and refinements of Young's inequality and its reverse. First, we give an alternative proof of the Al-Manasrah–Kittaneh inequalities (6). We then give some generalizations of the Al-Manasrah–Kittaneh inequalities (6) and refinements of the Alzer–Fonseca–Kova\(\check{c}\)ec inequalities (7) for \(\lambda =1, 2, 3, \ldots \). Based on them, we present some refined matrix versions of Young's inequality and its reverse for convex functions. Moreover, inequalities for determinants and operators are also given.
2 Generalizations and Refinements of Young's Inequality and Its Reverse for Scalars
In this section, we mainly study generalizations and refinements of Young's inequality and its reverse for scalars. Before giving the main results, we need the following lemmas. The first lemma was given by Akkouchi and Ighachane [1].
Lemma 1
Let m be a positive integer and v be a positive real number with \(0\le v\le 1\). Then
where \(C_{m}^{k}=\left( \begin{array}{c} m \\ k \\ \end{array} \right) \) is the binomial coefficient.
Lemma 2
Let \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then the following inequality holds:
\(\Big (\frac{1-\mu }{1-v}\Big )^{m}C_{m}^{k}v^{k}(1-v)^{m-k}\ge C_{m}^{k}\mu ^{k}(1-\mu )^{m-k}\)
for \(k=0, 1,\ldots ,m\).
Proof
Since \(0\le \mu < v\le 1\), we have \(\frac{v}{\mu }>1\) and \(\frac{1-\mu }{1-v}>1\). So we get
\(\Big (\frac{1-\mu }{1-v}\Big )^{m}C_{m}^{k}v^{k}(1-v)^{m-k}=C_{m}^{k}\mu ^{k}(1-\mu )^{m-k}\Big (\frac{v}{\mu }\Big )^{k}\Big (\frac{1-\mu }{1-v}\Big )^{k}\ge C_{m}^{k}\mu ^{k}(1-\mu )^{m-k}.\)
This completes the proof. \(\square \)
The third lemma was obtained by Ighachane, Akkouchi and Benabdi [17, Lemma 2.2].
Lemma 3
Let \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and let m be a positive integer. Then the following inequality holds:
\(C_{m}^{k}\mu ^{k}(1-\mu )^{m-k}\ge \Big (\frac{\mu }{v}\Big )^{m}C_{m}^{k}v^{k}(1-v)^{m-k}\)
for \(k=0,1,\ldots ,m\).
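The weight comparisons behind Lemmas 2 and 3 — that \(\big (\frac{1-\mu }{1-v}\big )^{m}p_{k}(v)\ge p_{k}(\mu )\) and \(p_{k}(\mu )\ge \big (\frac{\mu }{v}\big )^{m}p_{k}(v)\) for the binomial weights \(p_{k}(x)=C_{m}^{k}x^{k}(1-x)^{m-k}\), as used later in the proofs of Theorems 6 and 7 — can be spot-checked in pure Python (function names are ours):

```python
from math import comb

def p(k, m, x):
    """Bernstein/binomial weight C(m,k) x^k (1-x)^(m-k)."""
    return comb(m, k) * x ** k * (1 - x) ** (m - k)

def lemma2_holds(mu, v, m):
    """Check ((1-mu)/(1-v))^m * p_k(v) >= p_k(mu) for every k = 0..m."""
    c = ((1 - mu) / (1 - v)) ** m
    return all(c * p(k, m, v) >= p(k, m, mu) - 1e-12 for k in range(m + 1))

def lemma3_holds(mu, v, m):
    """Check p_k(mu) >= (mu/v)^m * p_k(v) for every k = 0..m."""
    c = (mu / v) ** m
    return all(p(k, m, mu) >= c * p(k, m, v) - 1e-12 for k in range(m + 1))
```

Equality occurs at \(k=0\) in Lemma 2 and at \(k=m\) in Lemma 3, so a small floating-point tolerance is included.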
Utilizing the famous weighted arithmetic and geometric mean inequality, we can give an alternative proof of inequalities (6). For convenience, we present it as a theorem.
Theorem 1
Let a, b and v be positive real numbers with \(0\le v\le 1\) and m be a positive integer. Then
where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
Proof
By the binomial expansions of \(\big (va+(1-v)\big )^{m}\) and \((a+1)^{m}\), we have
\(\big (va+(1-v)\big )^{m}=\sum \limits _{k=0}^{m}C_{m}^{k}v^{k}(1-v)^{m-k}a^{k}=\sum \limits _{k=0}^{m}p_{k}a^{k}\)
and
\((a+1)^{m}=\sum \limits _{k=0}^{m}C_{m}^{k}a^{k},\)
where \(p_{k}=C_{m}^{k}v^{k}(1-v)^{m-k}\) and \(C_{m}^{k}=\left( \begin{array}{c} m \\ k \\ \end{array} \right) \) is the binomial coefficient, \(k=0,1,\ldots ,m\).
Therefore, we get
Inequality (10) gives
Replacing a by \(\frac{a}{b}\) in inequality (11), we have
Multiplying both sides of inequality (12) by \(b^{m}\), we get the left-hand side of inequality (9).
Similarly, since \(R=\max \{v,1-v\}\), we have \(2R\ge 1\), and thus we obtain
which implies
Replacing a by \(\frac{a}{b}\) in inequality (13), we get
Multiplying both sides of inequality (14) by \(b^{m}\), we get the right-hand side of inequality (9).
This completes the proof. \(\square \)
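The key step in the above proof is the weighted arithmetic–geometric mean inequality applied to the binomial weights \(p_{k}\): since \(\sum _{k}kp_{k}=mv\), the weighted geometric mean of the points \(a^{k}\) is exactly \(a^{mv}\). A small numeric sketch of this step (names are ours):

```python
import math

def binomial_amgm(a, v, m):
    """Expand (va + 1 - v)^m as sum_k p_k a^k with p_k = C(m,k) v^k (1-v)^(m-k),
    then compare with the weighted geometric mean prod_k (a^k)^{p_k} = a^{mv}.
    Returns (expanded sum, direct power, geometric mean)."""
    pk = [math.comb(m, k) * v ** k * (1 - v) ** (m - k) for k in range(m + 1)]
    expanded = sum(p * a ** k for k, p in enumerate(pk))   # = (va + 1 - v)^m
    geo = a ** (m * v)                                     # = prod (a^k)^{p_k}
    return expanded, (v * a + 1 - v) ** m, geo
```

The first two returned values agree (the binomial identity), and both dominate the third (weighted AM–GM).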
It is worth mentioning that the authors of [2, Theorem 2] proved that if \(\phi \) is a strictly increasing convex function on an interval I, then \(\phi (z)-\phi (w)\le \phi (x)-\phi (y)\), where x, y, w, z are points in I with \(w\le z\le x\), \(y\le x\) and \(z-w\le x-y\). By applying this result to the function \(\phi (x)=x^{p}\) \((p\ge 1)\), they obtained Theorem 1. It is easy to see that our proof of Theorem 1 is different from that of Al-Manasrah and Kittaneh [2, Theorem 3].
The positive integer m in inequality (9) can be replaced by a positive real number \(\lambda \ge 1\). Actually, we have the following results.
Theorem 2
Let a, b and \(\lambda \) be positive real numbers with \(\lambda \ge 1\). Then
for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).
Proof
The proof idea is due to Alzer–Fonseca–Kova\(\check{c}\)ec [3, Theorem 2.1].
Taking \(h(x)=x(1-\ln x)-1\), we have \(h'(x)=-\ln x\); thus \(h(x)<h(1)=0\) for all \(0<x\ne 1\). Define the function \(F(v,\lambda ,a)=\frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\min \{v,1-v\}^{\lambda }}\). Then, for \(0\le v\le \frac{1}{2}\), we get
This gives that
for \(0\le v\le \tau \le \frac{1}{2}\).
On the other hand, for \(\frac{1}{2}\le v\le 1\), we have
which entails that
for \(\frac{1}{2}\le v\le \tau \le 1\).
Replacing a by \(\frac{a}{b}\) and multiplying both sides by \(b^{\lambda }\) in inequalities (15) and (16), we get the desired results. This completes the proof. \(\square \)
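The monotonicity used in the proof can be probed numerically. The sketch below (names are ours) evaluates the auxiliary function \(F(v,\lambda ,a)\) on a grid and checks that it is nonincreasing on \((0,\frac{1}{2}]\), which is how we read the proof's conclusion:

```python
def F(v, lam, a):
    """Auxiliary function from the proof of Theorem 2:
    F(v, lam, a) = ((v*a + 1 - v)**lam - a**(v*lam)) / min(v, 1-v)**lam."""
    return ((v * a + 1 - v) ** lam - a ** (v * lam)) / min(v, 1 - v) ** lam

def nonincreasing_on_left_half(lam, a, steps=40):
    """Spot-check that v -> F(v, lam, a) is nonincreasing on (0, 1/2]."""
    vs = [0.5 * (i + 1) / steps for i in range(steps)]
    vals = [F(v, lam, a) for v in vs]
    return all(x >= y - 1e-9 for x, y in zip(vals, vals[1:]))
```

The same function with \(\max \{v,1-v\}\) in the denominator is the function G used in Theorem 3, whose monotonicity runs the other way.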
The next theorem is a reverse of Theorem 2.
Theorem 3
Let a, b and \(\lambda \) be positive real numbers with \(\lambda \ge 1\). Then
for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). Moreover,
where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).
Proof
The proof idea is the same as that of Theorem 2. The function h(x) is as in Theorem 2, and we define \(G(v,\lambda ,a)=\frac{(va+1-v)^{\lambda }-a^{v\lambda }}{\max \{v,1-v\}^{\lambda }}\). Then, for \(0\le v\le \frac{1}{2}\), we get
This gives that
for \(0\le v\le \tau \le \frac{1}{2}\).
On the other hand, if \(\frac{1}{2}\le v\le 1\), then we have
which entails that
for \(\frac{1}{2}\le v\le \tau \le 1\).
Replacing a by \(\frac{a}{b}\) and multiplying both sides by \(b^{\lambda }\) in inequalities (17) and (18), we get the desired results.
This completes the proof. \(\square \)
For convex functions, the following results hold.
Theorem 4
Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and \(\lambda \ge 1\). Then
holds for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).
Proof
The proof idea is the same as that of Sababheh’s [28, Theorem 2.1].
First, we assume that the function f is twice differentiable, then \(f''(x)\ge 0\) for \(0\le x\le 1\). Define the function
where \(0\le v\le 1\). Then, for \(0\le v\le \frac{1}{2}\), we have
Putting \(g_{1}(v)=-f(0)+f(v)-vf'(v)\) (\(v\in [0,1]\)), we have \(g_{1}'(v)=-vf''(v)\le 0\) since \(f''(v)\ge 0\) for \(0\le v\le 1\). This shows that \(g_{1}(v)\le g_{1}(0)=0\), \(v\in [0,1]\). Therefore, inequality (19) implies that F(v) is decreasing with respect to v on \([0,\frac{1}{2}]\). Similarly, if \(\frac{1}{2}\le v\le 1\), then
Setting \(g_{2}(v)=f(1)-f(v)-(1-v)f'(v)\) (\(v\in [0,1]\)), then \(g_{2}'(v)=-(1-v)f''(v)\le 0\), which gives \(g_{2}(v)\ge g_{2}(1)=0\), \(v\in [0,1]\). Thus, inequality (20) implies F(v) is increasing with respect to v on \([\frac{1}{2},1]\).
The general case follows from the fact that any convex function is a uniform limit of smooth convex functions [4, Theorem 1].
This completes the proof. \(\square \)
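The sign conditions on \(g_{1}\) and \(g_{2}\) in the proof are easy to verify for a concrete smooth convex function such as \(f(v)=e^{v}\). A minimal sketch (names are ours), passing the derivative \(f'\) as a second argument:

```python
import math

def g1(f, fp, v):
    """g1(v) = -f(0) + f(v) - v f'(v); nonpositive on [0,1] when f is convex."""
    return -f(0.0) + f(v) - v * fp(v)

def g2(f, fp, v):
    """g2(v) = f(1) - f(v) - (1 - v) f'(v); nonnegative on [0,1] when f is convex."""
    return f(1.0) - f(v) - (1.0 - v) * fp(v)
```

For \(f=f'=\exp \), one checks \(g_{1}\le 0\) and \(g_{2}\ge 0\) across \([0,1]\), with equality at the respective endpoints.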
A reverse of Theorem 4 can be stated as follows.
Theorem 5
Let \(f:[0,1]\rightarrow [0,+\infty )\) be a convex function and \(\lambda \ge 1\). Then
holds for \(0<v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}<v\le \tau \le 1\). Moreover,
where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).
Proof
Using the same method as in the proof of Theorem 4, we can complete the proof. First, we assume that f is twice differentiable. Define the function
where \(0\le v\le 1\). Then, for \(0\le v\le \frac{1}{2}\), we have
Setting \(g_{2}(v)=f(1)-f(v)-(1-v)f'(v)\) (\(v\in [0,1]\)), then \(g_{2}'(v)=-(1-v)f''(v)\le 0\), which gives \(g_{2}(v)\ge g_{2}(1)=0\), \(v\in [0,1]\). By inequality (21), we conclude that \( G'(v)\ge 0\), which implies G(v) is increasing with respect to v on \([0,\frac{1}{2}]\). Similarly, if \(\frac{1}{2}\le v\le 1\), then
Putting \(g_{1}(v)=-f(0)+f(v)-vf'(v)\) (\(v\in [0,1]\)), we have \(g_{1}'(v)=-vf''(v)\le 0\), which gives \(g_{1}(v)\le g_{1}(0)=0\), \(v\in [0,1]\). Therefore, inequality (22) implies that G(v) is decreasing with respect to v on \([\frac{1}{2},1]\).
The general case follows from the fact that any convex function is a uniform limit of smooth convex functions.
This completes the proof. \(\square \)
Remark 1
Let a and b be positive real numbers. Then the function \(f(v)=a^{v}b^{1-v}\) is convex on [0, 1]. Taking \(f(v)=a^{v}b^{1-v}\) in Theorems 4 and 5, we obtain Theorems 2 and 3, respectively.
The following two theorems are refinements of the Alzer–Fonseca–Kova\(\check{c}\)ec inequalities (7) when \(\lambda =1,2,3,\ldots \).
Theorem 6
Let a, b, \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and m be a positive integer. Then
where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{m}-\mu ^{m}\).
Proof
Let \(\tilde{r}=\Big (\frac{1-\mu }{1-v}\Big )^{m}+r\), \(p_{k}(x)=C_{m}^{k}x^{k}(1-x)^{m-k}\) (\(x\in [0,1]\), \(k=0,1,\ldots ,m\)), \(\alpha _{k}=\Big (\frac{1-\mu }{1-v}\Big )^{m}p_{k}(v)-p_{k}(\mu )\) (\(k=0,1,\ldots ,m-1\)), \(\alpha _{m}=\Big (\frac{1-\mu }{1-v}\Big )^{m}p_{m}(v)-p_{m}(\mu )-r\), \(\alpha _{m+1}=1\) and \(\alpha _{m+2}=2r\). By Lemma 2, we have \(\alpha _{k}\ge 0\) (\(k=0,1,\ldots ,m+2\)) and \(\sum \limits _{k=0}^{m+2}\alpha _{k}=\tilde{r}\). Therefore, we deduce that
which gives
Replacing a by \(\frac{a}{b}\) and multiplying \(b^{m}\) in the above inequality (24), we get the desired inequality (23).
This completes the proof. \(\square \)
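The bookkeeping in the above proof — that the weights \(\alpha _{0},\ldots ,\alpha _{m+2}\) are nonnegative and sum to \(\tilde{r}\) — can be spot-checked numerically (a sketch with our own function names):

```python
from math import comb

def p(k, m, x):
    """Binomial weight C(m,k) x^k (1-x)^(m-k)."""
    return comb(m, k) * x ** k * (1 - x) ** (m - k)

def alphas(mu, v, m, r):
    """Weights from the proof of Theorem 6; for 0 < r <= ((1-mu)/(1-v))**m * v**m - mu**m
    they should all be >= 0 and sum to r_tilde = ((1-mu)/(1-v))**m + r."""
    c = ((1 - mu) / (1 - v)) ** m
    head = [c * p(k, m, v) - p(k, m, mu) for k in range(m)]
    return head + [c * v ** m - mu ** m - r, 1.0, 2.0 * r]
```

Note that \(p_{m}(v)=v^{m}\) and \(p_{m}(\mu )=\mu ^{m}\), so the bound on r is exactly the condition \(\alpha _{m}\ge 0\).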
Theorem 7
Let a, b, \(\mu \) and v be positive real numbers with \(0\le \mu < v\le 1\) and m be a positive integer. Then
where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{m},\mu ^{m}-\Big (\frac{\mu }{v}\Big )^{m}v^{m}\Big \}\).
Proof
Taking \(p_{k}(x)=C_{m}^{k}x^{k}(1-x)^{m-k}\) (\(x\in [0,1]\), \(k=0,1,\ldots ,m\)), \(\beta _{k}=p_{k}(\mu )-\Big (\frac{\mu }{v}\Big )^{m}p_{k}(v)\) (\(k=0,1,\ldots ,m-1\)), \(\beta _{m}=\mu ^{m}-\Big (\frac{\mu }{v}\Big )^{m}v^{m}-r_{1}\), \(\beta _{m+1}=\Big (\frac{\mu }{v}\Big )^{m}-r_{1}\) and \(\beta _{m+2}=2r_{1}\), then by Lemma 3, we have \(\beta _{k}\ge 0\) and \(\sum \limits _{k=0}^{m+2}\beta _{k}=1\). Therefore, we obtain
or equivalently,
Replacing a by \(\frac{a}{b}\) and multiplying \(b^{m}\) in inequality (26), we obtain inequality (25).
This completes the proof. \(\square \)
3 Generalizations and Refinements of Young's Inequality and Its Reverse for Matrices and Operators
3.1 Generalizations and Refinements of Young's Inequality and Its Reverse for Convex Functions
Let \(\mathcal {M}_{n}(\mathbb {C})\) be the matrix algebra of all \(n\times n\) complex matrices. For any \(A\in \mathcal {M}_{n}(\mathbb {C})\), the absolute value of A is the positive semidefinite matrix \(|A|=(A^{*}A)^{\frac{1}{2}}\), where \(A^{*}\) is the conjugate transpose of A. A norm \(\Vert \cdot \Vert \) on \(\mathcal {M}_{n}(\mathbb {C})\) is called a unitarily invariant norm if \(\Vert UAV\Vert =\Vert A\Vert \) for all \(A\in \mathcal {M}_{n}(\mathbb {C})\) and all unitary matrices U, \(V\in \mathcal {M}_{n}(\mathbb {C})\). Examples of such norms are the operator norm, the trace norm and the Hilbert–Schmidt norm.
Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and let \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Bhatia and Davis [6] proved that the function \(f(v)=\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \) is convex on the interval [0, 1] and attains its minimum at \(v=\frac{1}{2}\). Therefore, it is decreasing on \([0,\frac{1}{2}]\) and increasing on \([\frac{1}{2},1]\). Thus, the celebrated Heinz inequality is valid:
\(2\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert \le \big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \le \Vert AX+XB\Vert ,\)
which is a significant refinement of the arithmetic-geometric mean inequality
\(2\big \Vert A^{\frac{1}{2}}XB^{\frac{1}{2}}\big \Vert \le \Vert AX+XB\Vert .\)
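For \(1\times 1\) matrices (positive scalars, with \(X=1\) and \(\Vert \cdot \Vert \) the absolute value), the Heinz inequality reduces to the scalar bound \(a^{v}b^{1-v}+a^{1-v}b^{v}\le a+b\), which is easy to check numerically (a sketch, names ours):

```python
def heinz_scalar_gap(a, b, v):
    """For positive scalars a, b (the 1x1 case with X = 1), the Heinz inequality
    reduces to a^v b^(1-v) + a^(1-v) b^v <= a + b; return the nonnegative gap."""
    return (a + b) - (a ** v * b ** (1 - v) + a ** (1 - v) * b ** v)
```

The gap is largest at \(v=\frac{1}{2}\), mirroring the fact that f attains its minimum there.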
Based on the convexity of \(f(v)=\big \Vert A^{v}XB^{1-v}+A^{1-v}XB^{v}\big \Vert \) on [0, 1] and Theorems 4 and 5, we get the following theorems.
Theorem 8
Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and let \(\lambda \ge 1\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then
holds for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(r=\min \{v,1-v\}\).
Theorem 9
Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and let \(\lambda \ge 1\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then
holds for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(R=\max \{v,1-v\}\).
Similarly, let \(\phi (v)=\big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \), where \(0\le v\le 1\), \(r>0\), A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and \(\Vert \cdot \Vert \) is a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Hiai and Zhan [14, Theorem 1] proved that \(\phi \) is convex on the interval [0, 1] and attains its minimum at \(v=\frac{1}{2}\). Consequently, it is decreasing on \([0,\frac{1}{2}]\) and increasing on \([\frac{1}{2},1]\). An immediate consequence of this result is that
\(\big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2}\le \big \Vert \mid A^{v}XB^{1-v}\mid ^{r}\big \Vert \cdot \big \Vert \mid A^{1-v}XB^{v}\mid ^{r}\big \Vert \le \big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert ,\)
which interpolates the Cauchy–Schwarz inequality
\(\big \Vert \mid A^{\frac{1}{2}}XB^{\frac{1}{2}}\mid ^{r}\big \Vert ^{2}\le \big \Vert \mid AX\mid ^{r}\big \Vert \cdot \big \Vert \mid XB\mid ^{r}\big \Vert \)
obtained by Bhatia and Davis [7] and Hiai [13]. Applying Theorems 4 and 5 to the convex function \(\phi \), we obtain the following Cauchy–Schwarz-type inequalities.
Theorem 10
Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and let \(\lambda \ge 1\), \(r>0\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then
holds for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(r_{1}=\min \{v,1-v\}\).
Theorem 11
Let A, B and \(X\in \mathcal {M}_{n}(\mathbb {C})\) with A and B positive semidefinite matrices, and let \(\lambda \ge 1\), \(r>0\) and \(\Vert \cdot \Vert \) be a unitarily invariant norm on \(\mathcal {M}_{n}(\mathbb {C})\). Then
holds for \(0\le v\le \tau \le \frac{1}{2}\), its reverse holds for \(\frac{1}{2}\le v\le \tau \le 1\). In particular,
where \(0\le v\le 1\) and \(R_{1}=\max \{v,1-v\}\).
3.2 Generalizations and Refinements of Young's Inequality and Its Reverse for Determinants
In this subsection, we mainly give generalizations and refinements of Young's inequality and its reverse for determinants. To achieve our goal, we need the following lemma (see, e.g., [16, p. 482]), which is the Minkowski inequality for determinants.
Lemma 4
Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices. Then
\(\big (\det (A+B)\big )^{\frac{1}{n}}\ge \big (\det A\big )^{\frac{1}{n}}+\big (\det B\big )^{\frac{1}{n}}.\)
A determinant version of Young's inequality (see, e.g., [16, p. 467]) is also well known and can be stated as
\(\big (\det A\big )^{v}\big (\det B\big )^{1-v}\le \det \big (vA+(1-v)B\big ),\)
where A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) are positive semidefinite matrices and \(0\le v\le 1\). Based on Theorems 6 and 7, we obtain the following generalizations and refinements of Young's inequality and its reverse for determinants.
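Both determinant inequalities can be spot-checked for \(2\times 2\) positive semidefinite matrices, since only determinants of linear combinations are involved. A pure-Python sketch (names ours):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def comb2(s, A, t, B):
    """The 2x2 matrix s*A + t*B."""
    return tuple(tuple(s * A[i][j] + t * B[i][j] for j in range(2)) for i in range(2))

def minkowski_gap(A, B):
    """det(A+B)^(1/2) - det(A)^(1/2) - det(B)^(1/2), nonnegative for 2x2 PSD A, B
    (the n = 2 case of Lemma 4)."""
    return det2(comb2(1.0, A, 1.0, B)) ** 0.5 - det2(A) ** 0.5 - det2(B) ** 0.5

def det_young_gap(A, B, v):
    """det(vA + (1-v)B) - det(A)^v det(B)^(1-v), nonnegative for 2x2 PSD A, B
    (the determinant Young inequality)."""
    return det2(comb2(v, A, 1 - v, B)) - det2(A) ** v * det2(B) ** (1 - v)
```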
Theorem 12
Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices and \(0\le \mu < v\le 1\). Then, for any positive integer m, the following inequality holds:
where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{mn}v^{mn}-\mu ^{mn}\).
Proof
By Lemma 4 and Theorem 6, we get
This completes the proof. \(\square \)
Theorem 13
Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices and \(0\le \mu < v\le 1\). Then, for any positive integer m, the following inequality holds:
where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{mn},\mu ^{mn}-\Big (\frac{\mu }{v}\Big )^{mn}v^{mn}\Big \}\).
Proof
By Lemma 4 and Theorem 7, we have
This completes the proof. \(\square \)
3.3 Generalizations and Refinements of Young's Inequality and Its Reverse for Operators
In this subsection, we mainly give operator versions of Young's inequality and its reverse. We need some preparations. Let \(\mathcal {B}(\mathbb {H})\) be the \(C^{*}\)-algebra of all bounded linear operators on a complex Hilbert space \(\mathbb {H}\) and let \(I_{\mathbb {H}}\in \mathcal {B}(\mathbb {H})\) be the identity operator. For two self-adjoint operators A and B, the symbol \(B\le A\) means that \(A-B\) is a positive operator.
Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators and \(0\le v\le 1\). The \(v\)-weighted arithmetic operator mean of A and B, denoted by \(A\nabla _{v}B\), is defined by
\(A\nabla _{v}B=(1-v)A+vB.\)
Moreover, if A is an invertible positive operator, the \(v\)-weighted geometric operator mean of A and B, denoted by \(A\sharp _{v}B\), is defined by
\(A\sharp _{v}B=A^{\frac{1}{2}}\big (A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\big )^{v}A^{\frac{1}{2}}.\)
For \(v>1\), the expression \(A\sharp _{v}B=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^{v}A^{\frac{1}{2}}\) still makes sense, and in the following we use it for all \(v\ge 0\). When \(v=\frac{1}{2}\), the operators \(A\nabla _{\frac{1}{2}}B\) and \(A\sharp _{\frac{1}{2}}B\) are called the arithmetic operator mean and the geometric operator mean, respectively, and are usually written \(A\nabla B\) and \(A\sharp B\) for brevity. For more details, see Kubo and Ando [22].
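In the commuting (scalar, \(1\times 1\)) case these means reduce to simple scalar expressions. Assuming the Kubo–Ando convention \(A\nabla _{v}B=(1-v)A+vB\), a quick sketch (names ours):

```python
def nabla(a, b, v):
    """Scalar (1x1) case of the weighted arithmetic operator mean (1-v)a + vb."""
    return (1 - v) * a + v * b

def sharp(a, b, v):
    """Scalar case of A#_v B = A^(1/2)(A^(-1/2) B A^(-1/2))^v A^(1/2),
    which reduces to a*(b/a)^v = a^(1-v) b^v for positive scalars."""
    return a * (b / a) ** v
```

In this scalar case the operator Young inequality \(A\sharp _{v}B\le A\nabla _{v}B\) is just the classical \(a^{1-v}b^{v}\le (1-v)a+vb\).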
The operator version of inequality (2) can be stated as
\(A\sharp _{v}B\le A\nabla _{v}B,\)
where A, \(B\in \mathcal {B}(\mathbb {H})\) are positive operators with A invertible and \(0\le v \le 1\).
Let A, \(B\in \mathcal {M}_{n}(\mathbb {C})\) be positive semidefinite matrices with A invertible and \(0\le v \le 1\). Kittaneh and Manasrah in [19] and [20] presented the matrix version of inequality (3):
\(A\sharp _{v}B+2r\big (A\nabla B-A\sharp B\big )\le A\nabla _{v}B\le A\sharp _{v}B+2R\big (A\nabla B-A\sharp B\big ), \qquad (27)\)
where \(r=\min \{v,1-v\}\) and \(R=\max \{v,1-v\}\).
It should be mentioned that Furuichi [9] independently proved inequality (27) for positive operators and Kittaneh et al. [18] also established inequality (27) by a different method.
Before giving the main results of this part, we need the following lemma, which is the monotonicity property for operator functions [11].
Lemma 5
Let \(X\in \mathcal {B}(\mathbb {H})\) be a self-adjoint operator and let f and g be continuous functions such that \(f(t)\ge g(t)\) for all \(t\in Sp(X)\) (the spectrum of X). Then \(f(X)\ge g(X)\).
Based on Theorem 6, we have the following operator inequality.
Theorem 14
Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators with A invertible, let \(0\le \mu < v\le 1\) and let m be a positive integer. Then
where r is a constant with \(0<r\le \Big (\frac{1-\mu }{1-v}\Big )^{m}v^{m}-\mu ^{m}\).
Proof
By inequality (23), we have
for \(a>0\).
Since A and B are positive operators and A is invertible, the operator \(T=A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\) is also positive. Therefore, by Lemma 5, we obtain
Multiplying both sides of inequality (28) by \(A^{\frac{1}{2}}\), we have
or equivalently,
This completes the proof. \(\square \)
In the same way, based on Theorem 7, we get
Theorem 15
Let A, \(B\in \mathcal {B}(\mathbb {H})\) be positive operators with A invertible, let \(0\le \mu < v\le 1\) and let m be a positive integer. Then
where \(r_{1}\) is a constant with \(0<r_{1}\le \min \Big \{\Big (\frac{\mu }{v}\Big )^{m},\mu ^{m}-\Big (\frac{\mu }{v}\Big )^{m}v^{m}\Big \}\).
Proof
The proof is very similar to that of Theorem 14, so we omit it here. \(\square \)
Availability of data and material
Data sharing is not applicable to this manuscript as no new data were created or analyzed in this study.
References
Akkouchi, M., Ighachane, M.: A new proof of a refined Young inequality. Bull. Int. Math. Virtual. Inst. 10(3), 4255–4258 (2020)
Al-Manasrah, Y., Kittaneh, F.: Further generalizations, refinements and reverses of the Young and Heinz inequalities. Results Math. 71, 1063–1072 (2017)
Alzer, H., da Fonseca, C.M., Kovacec, A.: Young-type inequalities and their matrix analogues. Linear Multilinear Algebra. 63(3), 622–635 (2015)
Azagra, D.: Global and fine approximation of convex functions. Proc. Lond. Math. Soc. 107, 799–824 (2013)
Bakherad, M., Krnic, M., Moslehian, M.S.: Reverses of the Young inequality for matrices and operators. Rocky Mt. J. Math. 46, 1089–1105 (2016)
Bhatia, R., Davis, C.: More matrix forms of the arithmetic-geometric mean inequality. SIAM J. Matrix Anal. Appl. 14, 132–136 (1993)
Bhatia, R., Davis, C.: A Cauchy-Schwarz inequality for operators with applications. Linear Algebra Appl. 223(224), 119–129 (1995)
Burqan, A., Khandaqji, M.: Reverses of Young type inequalities. J. Math. Inequal. 9(1), 113–120 (2015)
Furuichi, S.: On refined Young inequalities and reverse inequalities. J. Math. Inequal. 5(1), 21–31 (2011)
Furuichi, S.: Further improvements of Young inequality. Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A. Math. 113, 255–266 (2019)
Furuta, T., Mi\(\acute{c}\)i\(\acute{c}\) Hot, J., Pe\(\check{c}\)ari\(\acute{c}\), J., Seo, Y.: Mond-Pe\(\check{c}\)ari\(\acute{c}\) method in operator inequalities. Element, Zagreb (2005)
He, C., Zou, L.: Some inequalities involving unitarily invariant norms. Math. Inequal. Appl. 12, 757–768 (2012)
Hiai, F.: Log-majorizations and norm inequalities for exponential operators, In: Linear Operators. Banach Center Publications, vol. 38, Polish Academy of Sciences, Warszawa (1997)
Hiai, F., Zhan, X.: Inequalities involving unitarily invariant norms and operator monotone functions. Linear Algebra Appl. 341, 151–169 (2002)
Hirzallah, O., Kittaneh, F.: Matrix Young inequality for the Hilbert-Schmidt norm. Linear Algebra Appl. 308, 77–84 (2000)
Horn, R., Johnson, C.: Matrix Analysis. Cambridge University Press, New York (1985)
Ighachane, M., Akkouchi, M., Benabdi, E.: Further refinements of Alzer-Fonseca-Kova\(\check{c}\)ec’s inequalities and applications. Rev. R. Acad. Cienc. Exactas F\(\acute{i}\)s. Nat. Ser. A. Math. 115(3), 152 (2021)
Kittaneh, F., Krni\(\acute{c}\), M., Lovri\(\check{c}\)evi\(\acute{c}\), N., Pe\(\check{c}\)ari\(\acute{c}\), J.: Improved arithmetic-geometric and Heinz means inequalities for Hilbert space operators. Publ. Math. Debrecen. 80(3-4), 465–478 (2012)
Kittaneh, F., Manasrah, Y.: Improved Young and Heinz inequalities for matrices. J. Math. Anal. Appl. 361, 262–269 (2010)
Kittaneh, F., Manasrah, Y.: Reversed Young and Heinz inequalities for matrices. Linear Multilinear Algebra 59(9), 1031–1037 (2011)
Krni\(\acute{c}\), M., Lovri\(\check{c}\)evi\(\acute{c}\), N., Pe\(\check{c}\)ari\(\acute{c}\), J.: Jensen’s functional, its properties and applications. An. St. Univ. Ovidius Constanta. 20, 225-248 (2012)
Kubo, F., Ando, T.: Means of positive operators. Math. Ann. 246, 205–224 (1980)
Liao, W., Wu, J.: Matrix inequalities for the difference between arithmetic mean and harmonic mean. Ann. Funct. Anal. 6(3), 191–202 (2015)
Manasrah, Y., Kittaneh, F.: A generalization of two refined Young inequalities. Positivity 19, 757–768 (2015)
Moradi, H., Furuichi, S., Mitroi-Symeonidis, F., Naseri, R.: An extension of Jensen's operator inequality and its application to Young inequality. Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A. Math. 113, 605–614 (2019)
Nasiri, L., Shakoori, M.: Reverses of Young type inequalities for matrices using the classical Kantorovich constant. Results Math. 74, 16 (2019)
Ren, Y.: Some results of Young-type inequalities. Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A. Math. 114(3–4), 143 (2020)
Sababheh, M.: Convexity and matrix means. Linear Algebra Appl. 506, 588–602 (2016)
Sababheh, M., Choi, D.: A complete refinement of Young's inequality. J. Math. Anal. Appl. 440, 379–393 (2016)
Sheikhhosseini, A., Khosravi, M.: On a refined operator version of Young’s inequality and its reverse. Electron. J. Linear Algebra. 35, 441–448 (2019)
Yang, C., Gao, Y., Lu, F.: Some refinements of Young type inequality for positive linear map. Mathematica Slovaca 69(4), 919–930 (2019)
Yang, C., Li, Y.: On the further refinement of some operator inequalities for positive linear map. J. Math. Inequal. 15, 781–797 (2021)
Zhao, J., Wu, J.: Operator inequalities involving improved Young and its reverse inequalities. J. Math. Anal. Appl. 432, 1779–1889 (2015)
Zhao, J., Wu, J.: Operator inequalities and reverse inequalities related to the Kittaneh-Manasrah inequalities. Linear Multilinear Algebra 62(7), 884–894 (2014)
Zhu, L.: New refinements of Young’s inequality. Rev. R. Acad. Cienc. Exactas. Fis. nat. Ser. A. Math. 113(2), 909–915 (2019)
Zhu, L.: Natural approaches of Young’s inequality. Rev. R. Acad. Cienc. Exactas. Fis. nat. Ser. A. Math. 114, 1 (2020)
Zou, H., Shi, G., Fujii, M.: Refined Young inequality with Kantorovich constant. J. Math. Inequal. 5(4), 551–556 (2011)
Acknowledgements
We would like to thank the editor and the anonymous referees for their valuable comments and helpful suggestions, which led to a great improvement for this paper.
Funding
This research received no external funding.
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest related to this work.
Additional information
Communicated by Fuad Kittaneh.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Cite this article
Zhao, J. Further Refinements and Generalizations of the Young’s and Its Reverse Inequalities. Bull. Malays. Math. Sci. Soc. 46, 52 (2023). https://doi.org/10.1007/s40840-022-01448-0
Keywords
- Young’s and its reverse inequalities
- Al-Manasrah–Kittaneh’s inequalities
- Alzer–Fonseca–Kova\(\check{c}\)ec’s inequalities
- Determinants
- Positive semidefinite matrices
- Operators