Abstract
In this paper, we study the norm and skew angular distances in a normed space \({\mathscr {X}}\), where convex functions are used to obtain refinements and reverses of some outstanding results in the literature. For example, in this regard, we show that if \(a,b\in {\mathscr {X}}\) are non-zero and if \(p,q>0\) are such that \(\frac{1}{p}+\frac{1}{q}=1\), then
$$\begin{aligned} 2\lambda \left( \frac{p^{r}+q^{r}}{2}-\left\| \frac{1}{2}\left( \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right) \right\| ^{r}\right) \le p^{r-1}+q^{r-1}-\left\| \frac{a}{\Vert a\Vert }+\frac{b}{\Vert b\Vert }\right\| ^{r}\le 2\mu \left( \frac{p^{r}+q^{r}}{2}-\left\| \frac{1}{2}\left( \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right) \right\| ^{r}\right) , \end{aligned}$$
where \(r\ge 1\), \(\lambda =\min \left\{ {1}/{p},{1}/{q}\right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \). We then explain how this result extends some known results in the literature; many other related results will also be shown. Continuing the theme of convexity, we employ a log-convex approach on certain matrix functions to obtain improvements and new insights into some matrix inequalities, including possible bounds of \(\Vert A^{t}XB^{1-t}\Vert ,\) where A, B are positive definite matrices, X is an arbitrary matrix, \(\Vert \cdot \Vert \) is a unitarily invariant norm and \(0\le t\le 1.\) Many other results involving matrix and scalar log-convex functions will be presented as well.
1 Introduction
Let \(f: J\rightarrow {\mathbb {R}}\) be a real-valued function defined on the interval J. We say that f is convex on J if it satisfies the simple inequality
$$\begin{aligned} f\left( (1-t)a+tb \right) \le (1-t)f(a)+tf(b) \end{aligned}$$
for every \(a,b\in J\) and \(0\le t\le 1\). Though simple, this inequality has been used to obtain numerous useful inequalities, such as the arithmetic–geometric mean inequality, the Cauchy–Schwarz inequality, the Bellman inequality, and many others.
Refining and reversing this inequality, Dragomir [10] showed that if \(a,b\in J\) and \(f: J\rightarrow {\mathbb {R}}\) is convex, then
$$\begin{aligned} 2\gamma \left( \frac{f(a)+f(b)}{2}-f\left( \frac{a+b}{2}\right) \right) \le (1-t)f(a)+tf(b)-f\left( (1-t)a+tb \right) \le 2\Gamma \left( \frac{f(a)+f(b)}{2}-f\left( \frac{a+b}{2}\right) \right) , \end{aligned}$$
(1.1)
where \(\gamma =\min \left\{ t,1-t \right\} \), \(\Gamma =\max \left\{ t,1-t \right\} \) and \(0\le t\le 1\). Although this result is stated for functions defined on an interval of real numbers, it remains valid for convex functions defined on normed spaces.
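As a quick illustration (our own numerical sketch, not part of the original argument), the following snippet tests (1.1) for the convex function \(f(x)=x^{2}\) at randomly sampled points; the choice of function, sample size and tolerance are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2  # a convex function on the real line

for _ in range(1000):
    a, b = rng.normal(size=2)
    t = rng.uniform()
    gamma, Gamma = min(t, 1 - t), max(t, 1 - t)
    gap = (f(a) + f(b)) / 2 - f((a + b) / 2)              # Jensen gap at the midpoint
    mid = (1 - t) * f(a) + t * f(b) - f((1 - t) * a + t * b)
    assert 2 * gamma * gap - 1e-12 <= mid <= 2 * Gamma * gap + 1e-12
```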
This inequality has received the attention of numerous researchers due to its use in sharpening and reversing some celebrated inequalities in the literature. We refer the reader to [6, 21, 22, 27] for a sample of possible applications and related results. A stronger form of a convex function is a log-convex function: we recall that a function \(f:J\rightarrow (0,\infty )\) is said to be log-convex if \(\log f\) is convex or, equivalently, if \(f((1-t)a+tb)\le f(a)^{1-t}f(b)^{t}\) for all \(a,b\in J\) and \(0\le t\le 1.\)
The main goal of this paper is to present some interesting results on the so-called norm-angular distance, the skew angular distance, and matrix norms. Our approach will be based on delicate treatments of convex and log-convex results.
The organization of this paper is as follows. In the next section, we employ convexity as a new approach to obtain refined bounds for the angular distance between vectors in normed spaces and in inner product spaces. We then study properties of log-convex functions, with applications to matrix norm inequalities. For the sake of convenience, each topic is introduced in its own section, with an emphasis on the recent progress in that direction.
2 The Angular Distance
This section further explores the norm and skew angular distances, with related results in inner product spaces. Let \({\mathscr {X}}=({\mathscr {X}},\Vert \cdot \Vert )\) be a normed space (real or complex). Among the most important inequalities in \({\mathscr {X}}\) is the triangle inequality, which states that
$$\begin{aligned} \Vert a+b\Vert \le \Vert a\Vert +\Vert b\Vert ,\quad a,b\in {\mathscr {X}}. \end{aligned}$$
(2.1)
A significant application of this inequality is its role in obtaining convergence results. We refer the reader to [2, 5, 20, 24] for relatively recent discussions of the triangle inequality.
In mathematical inequalities, sharpening and reversing known inequalities is a topic of continuing interest, where researchers seek better bounds than the known ones, together with reversed versions. Such results lead to a deeper understanding of these inequalities and to further applications.
In [16], Maligranda showed the following interesting double inequality, which provides an improvement and a reverse of (2.1):
$$\begin{aligned} \Vert a\Vert +\Vert b\Vert -A[a,b]\max \left\{ \Vert a\Vert ,\Vert b\Vert \right\} \le \Vert a+b\Vert \le \Vert a\Vert +\Vert b\Vert -A[a,b]\min \left\{ \Vert a\Vert ,\Vert b\Vert \right\} , \end{aligned}$$
(2.2)
where \(A[a,b]=\Big ( 2 - \Big \Vert \dfrac{a}{\Vert a\Vert }+\dfrac{b}{\Vert b\Vert }\Big \Vert \Big )\ge 0\), and a and b are nonzero vectors in a normed space \({\mathscr {X}} = ({\mathscr {X}},\Vert \cdot \Vert )\).
Using this inequality, Maligranda [17] obtained a lower bound and an upper bound for the norm angular distance, or Clarkson distance (see, e.g., [4]), between nonzero vectors a and b in a normed space \(({\mathscr {X}},\Vert \cdot \Vert ).\) More precisely, given non-zero elements \(a,b\in {\mathscr {X}}\), the norm angular distance between a and b was defined in [4] as
$$\begin{aligned} \alpha [a,b]=\Big \Vert \frac{a}{\Vert a\Vert }-\frac{b}{\Vert b\Vert }\Big \Vert . \end{aligned}$$
In [17], the following double inequality was shown:
$$\begin{aligned} \frac{\Vert a-b\Vert -\left| \Vert a\Vert -\Vert b\Vert \right| }{\min \left\{ \Vert a\Vert ,\Vert b\Vert \right\} }\le \alpha [a,b]\le \frac{\Vert a-b\Vert +\left| \Vert a\Vert -\Vert b\Vert \right| }{\max \left\{ \Vert a\Vert ,\Vert b\Vert \right\} }, \end{aligned}$$
(2.3)
as an improvement and a reverse of the Massera–Schäffer inequality, which states [18]
$$\begin{aligned} \alpha [a,b]\le \frac{2\Vert a-b\Vert }{\max \left\{ \Vert a\Vert ,\Vert b\Vert \right\} } \end{aligned}$$
(2.4)
for all nonzero vectors a and b in \({\mathscr {X}}\). The latter inequality is stronger than the Dunkl–Williams inequality, given in [12] as
$$\begin{aligned} \alpha [a,b]\le \frac{4\Vert a-b\Vert }{\Vert a\Vert +\Vert b\Vert }. \end{aligned}$$
(2.5)
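For concreteness, the following sketch (an illustration of ours, using the Euclidean norm on \({\mathbb {R}}^4\); the dimension, sample size and tolerances are arbitrary choices) checks (2.3), (2.4) and (2.5) on random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    a, b = rng.normal(size=4), rng.normal(size=4)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    alpha = np.linalg.norm(a / na - b / nb)   # norm angular distance alpha[a, b]
    d = np.linalg.norm(a - b)
    # Maligranda's bounds (2.3)
    assert (d - abs(na - nb)) / min(na, nb) - 1e-12 <= alpha
    assert alpha <= (d + abs(na - nb)) / max(na, nb) + 1e-12
    # Massera-Schaffer (2.4) and Dunkl-Williams (2.5)
    assert alpha <= 2 * d / max(na, nb) + 1e-12
    assert alpha <= 4 * d / (na + nb) + 1e-12
```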
Other results related to the angular distance, known as Dunkl–Williams-type theorems (see [12]), were given by Moslehian et al. [23].
Dehghan [7] presented a new refinement of the triangle inequality and defined the skew angular distance between nonzero vectors a and b by \(\beta [a,b] = \Big \Vert \dfrac{a}{\Vert b\Vert }-\dfrac{b}{\Vert a\Vert }\Big \Vert \). In [7], the following double inequality was shown
for any nonzero elements a and b in a real normed linear space \({\mathscr {X}} = ({\mathscr {X}},\Vert \cdot \Vert )\).
Recently, numerous improvements and generalizations of bounds for the angular distance and the skew angular distance have been established in [11, 14, 15, 25]. In this direction, we establish several estimates for the angular distance and the skew angular distance between two non-zero vectors in a normed space. We also obtain some refinements of Maligranda’s inequality and some refinements of Dehghan’s inequality. Our approach is based on delicate treatments of convex functions and their inequalities.
2.1 Normed Spaces
In this subsection, we use convex functions to study the norm and skew angular distances. We state several corollaries and remarks that explain the relationship with existing results. In particular, improvements of (2.3), (2.4) and (2.5) will be presented. The significance lies not only in the results themselves but also in the convexity approach that yields them.
Theorem 2.1
Let \(\left( {\mathscr {X}},\left\| \cdot \right\| \right) \) be a real or complex normed space, and let \(p,q>0\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). If \(a,b\in {\mathscr {X}}\) and \(r\ge 1,\) then
$$\begin{aligned} 2\lambda \left( \frac{p^{r}\Vert a\Vert ^{r}+q^{r}\Vert b\Vert ^{r}}{2}-\left\| \frac{pa+qb}{2}\right\| ^{r}\right) \le p^{r-1}\Vert a\Vert ^{r}+q^{r-1}\Vert b\Vert ^{r}-\Vert a+b\Vert ^{r}\le 2\mu \left( \frac{p^{r}\Vert a\Vert ^{r}+q^{r}\Vert b\Vert ^{r}}{2}-\left\| \frac{pa+qb}{2}\right\| ^{r}\right) , \end{aligned}$$
(2.6)
where \(\lambda =\min \left\{ {1}/{p},{1}/{q}\right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \).
Proof
Let \(f:{\mathscr {X}}\rightarrow {\mathbb {R}}\) be a convex function, \(a,b\in {\mathscr {X}}\), and let \(p,q>0\) with \({1}/{p}+{1}/{q}=1\). If we replace \(1-t\) and t by \({1}/{p}\) and \({1}/{q}\) in (1.1), we deduce
$$\begin{aligned} 2\lambda \left( \frac{f(a)+f(b)}{2}-f\left( \frac{a+b}{2}\right) \right) \le \frac{f(a)}{p}+\frac{f(b)}{q}-f\left( \frac{a}{p}+\frac{b}{q}\right) \le 2\mu \left( \frac{f(a)+f(b)}{2}-f\left( \frac{a+b}{2}\right) \right) , \end{aligned}$$
(2.7)
where \(\lambda =\min \left\{ {1}/{p},{1}/{q} \right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \). Since this is valid for all \(a,b\in {\mathscr {X}}\), we may replace a by pa and b by qb in (2.7), to get
$$\begin{aligned} 2\lambda \left( \frac{f(pa)+f(qb)}{2}-f\left( \frac{pa+qb}{2}\right) \right) \le \frac{f(pa)}{p}+\frac{f(qb)}{q}-f(a+b)\le 2\mu \left( \frac{f(pa)+f(qb)}{2}-f\left( \frac{pa+qb}{2}\right) \right) . \end{aligned}$$
(2.8)
Noting that the function \(f:{\mathscr {X}}\rightarrow {\mathbb {R}}\) defined by \(f\left( x \right) ={\Vert x\Vert ^{r}}\) (\(x\in {\mathscr {X}}\) and \(1\le r<\infty \)) is convex, applying (2.8) implies the desired result. \(\square \)
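For illustration, here is a small numerical sanity check of (2.6) (our own sketch, in \({\mathbb {R}}^3\) with the Euclidean norm; the sampling ranges and the tolerance are ours).

```python
import numpy as np

rng = np.random.default_rng(2)

for _ in range(1000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    p = rng.uniform(1.1, 4.0)
    q = p / (p - 1)                           # conjugate exponent: 1/p + 1/q = 1
    r = rng.uniform(1.0, 3.0)
    lam, mu = min(1 / p, 1 / q), max(1 / p, 1 / q)
    f = lambda x: np.linalg.norm(x) ** r      # convex for r >= 1
    gap = (f(p * a) + f(q * b)) / 2 - f((p * a + q * b) / 2)
    mid = f(p * a) / p + f(q * b) / q - f(a + b)
    assert 2 * lam * gap - 1e-9 <= mid <= 2 * mu * gap + 1e-9
```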
In the following corollary, we state a simpler form that follows easily from (2.6).
Corollary 2.1
Let a and b be two nonzero vectors in a normed linear space \(\left( {\mathscr {X}},\left\| \cdot \right\| \right) \) and let \(p,q>0\) be such that \(\frac{1}{p}+\frac{1}{q}=1.\) If \(r\ge 1\), then
$$\begin{aligned} 2\lambda \left( \frac{p^{r}+q^{r}}{2}-\left\| \frac{1}{2}\left( \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right) \right\| ^{r}\right) \le p^{r-1}+q^{r-1}-\left\| \frac{a}{\Vert a\Vert }+\frac{b}{\Vert b\Vert }\right\| ^{r}\le 2\mu \left( \frac{p^{r}+q^{r}}{2}-\left\| \frac{1}{2}\left( \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right) \right\| ^{r}\right) , \end{aligned}$$
(2.9)
where \(\lambda =\min \left\{ {1}/{p},{1}/{q}\right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \).
Proof
In (2.6), replace a and b by \(\dfrac{a}{\left\| a \right\| }\) and \(\dfrac{b}{\left\| b \right\| },\) respectively, to get the desired inequality. \(\square \)
Now we explain the significance of (2.9).
Remark 2.1
Taking \(r=1\) in (2.9), we get
$$\begin{aligned} \lambda \left( p+q-\left\| \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right\| \right) \le 2-\left\| \frac{a}{\Vert a\Vert }+\frac{b}{\Vert b\Vert }\right\| \le \mu \left( p+q-\left\| \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right\| \right) . \end{aligned}$$
Using inequalities (2.2) and (2.9), we deduce
$$\begin{aligned} \Vert a+b\Vert \le \Vert a\Vert +\Vert b\Vert -A[a,b]\min \left\{ \Vert a\Vert ,\Vert b\Vert \right\} \le \Vert a\Vert +\Vert b\Vert -\lambda \left( p+q-\left\| \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right\| \right) \min \left\{ \Vert a\Vert ,\Vert b\Vert \right\} . \end{aligned}$$
Therefore, we obtain the following refinement of the triangle inequality:
$$\begin{aligned} \Vert a+b\Vert +\lambda \left( p+q-\left\| \frac{pa}{\Vert a\Vert }+\frac{qb}{\Vert b\Vert }\right\| \right) \min \left\{ \Vert a\Vert ,\Vert b\Vert \right\} \le \Vert a\Vert +\Vert b\Vert \end{aligned}$$
for all nonzero vectors a and b in a normed space \(({\mathscr {X}},\Vert \cdot \Vert )\).
Another refinement of the triangle inequality can be stated as follows, with the same parameters as before.
Corollary 2.2
Let \(\left( {\mathscr {X}},\left\| \cdot \right\| \right) \) be a real or complex normed linear space. The following inequality holds:
$$\begin{aligned} \lambda \left( p\Vert a\Vert +q\Vert b\Vert -\Vert pa+qb\Vert \right) \le \Vert a\Vert +\Vert b\Vert -\Vert a+b\Vert \le \mu \left( p\Vert a\Vert +q\Vert b\Vert -\Vert pa+qb\Vert \right) \end{aligned}$$
(2.10)
for all vectors a and b in \({\mathscr {X}}\).
Proof
Letting \(r=1\) in (2.6), we obtain the desired inequality. \(\square \)
Remark 2.2
Inequality (2.10) can be obtained as a consequence of [19, Theorem 1]. It is easy to see that inequality (2.10) improves and reverses the triangle inequality.
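As a sanity check of (2.10) (an illustrative sketch of ours, again with the Euclidean norm; all sampling choices and tolerances are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

for _ in range(1000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    p = rng.uniform(1.1, 4.0)
    q = p / (p - 1)                           # 1/p + 1/q = 1
    lam, mu = min(1 / p, 1 / q), max(1 / p, 1 / q)
    slack = p * np.linalg.norm(a) + q * np.linalg.norm(b) - np.linalg.norm(p * a + q * b)
    mid = np.linalg.norm(a) + np.linalg.norm(b) - np.linalg.norm(a + b)
    assert lam * slack - 1e-10 <= mid <= mu * slack + 1e-10
```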
In the next result, we present a more elaborate form that improves both (2.3) and (2.4).
Theorem 2.2
Let \(\left( {\mathscr {X}},\left\| \cdot \right\| \right) \) be a real or complex normed space, and let \(p,q>0\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). If \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \), then
and
where
and \(\lambda =\min \left\{ {1}/{p},{1}/{q}\right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \).
Proof
By employing the triangle inequality, it is easy to see that \(\gamma _1,\gamma _2\ge 0.\) Let \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \). Replacing a and b by \({\left( a-b \right) }/{\left\| a \right\| }\) and \(b\left( {1}/{\left\| a \right\| }-{1}/{\left\| b \right\| } \right) \), in (2.10), we obtain
If we replace a and b by \(a\left( {1}/{\left\| a \right\| }-{1}/{\left\| b \right\| } \right) \) and \({\left( a-b \right) }/{\left\| b \right\| },\) in (2.10), then we have
We deduce the desired result using inequalities in (2.13) and (2.14). \(\square \)
The significance of the above theorem and its relation with (2.3) and (2.4) is explained next.
Remark 2.3
In terms of the angular distance, inequalities (2.11) and (2.12) become
for all vectors a and b in \({\mathscr {X}}\backslash \left\{ 0 \right\} \), where \(\gamma _1\) and \(\gamma _2\) are as given above. Therefore, (2.15) improves (2.3) proved by Maligranda.
We also have the inequality \(|\Vert \alpha \Vert -\Vert \beta \Vert |\le \Vert \alpha -\beta \Vert \). So
for all vectors a and b in \({\mathscr {X}}\backslash \left\{ 0 \right\} \), where \(\gamma _1\) and \(\gamma _2\) are as given above. The right part of inequality (2.16) represents another improvement of the Massera–Schäffer inequality (2.4).
So far, we have studied the norm angular distance. Now we investigate the skew angular distance.
Theorem 2.3
Let \(\left( {\mathscr {X}},\left\| \cdot \right\| \right) \) be a real or complex normed space, and let \(p,q>0\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). If \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \), then
and
where
and \(\lambda =\min \left\{ {1}/{p},{1}/{q}\right\} \) and \(\mu =\max \left\{ {1}/{p},{1}/{q} \right\} \).
Proof
Using the triangle inequality, it is straightforward to see that \(\rho _1,\rho _2\ge 0.\) Let \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \). Replacing the vectors a and b by the vectors \({\left( a-b \right) }/{\left\| b \right\| }\) and \(b\left( {1}/{\left\| b \right\| }-{1}/{\left\| a \right\| } \right) \), in (2.10), we get
If we replace a and b by \(a\left( {1}/{\left\| b \right\| }-{1}/{\left\| a \right\| } \right) \) and \({\left( a-b \right) }/{\left\| a \right\| },\) in (2.10), then we have
We deduce the desired inequalities using (2.19) and (2.20). \(\square \)
In the following remark, we explain how the above theorem improves (2.5).
Remark 2.4
In terms of the skew angular distance, inequalities (2.17) and (2.18) become
for all vectors a and b in \({\mathscr {X}}\backslash \left\{ 0 \right\} \), where \(\rho _1\) and \(\rho _2\) are given above. Therefore, (2.21) improves (2.5) proved by Dehghan. We also have the inequality \(|\Vert \alpha \Vert -\Vert \beta \Vert |\le \Vert \alpha -\beta \Vert \). Thus, we obtain
for all vectors a and b in \({\mathscr {X}}\backslash \left\{ 0 \right\} \), where \(\rho _1\) and \(\rho _2\) are as given above.
In the following result, we present a more elaborate estimate for the angular distance, in which arbitrary powers of \(\Vert a\Vert \) and \(\Vert b\Vert \) are studied.
Proposition 2.1
Let \({\mathscr {X}}\) be a normed space, \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \) and \(m\ge 0\). If \(p,q>0\) with \({1}/{p}+{1}/{q}=1\), then
where
and \(\lambda =\min \{1/p,1/q\}.\)
Proof
Replacing a and b by \(\Vert a\Vert ^{m-1}\left( a-b \right) \) and \(b\left( \Vert a \Vert ^{m-1}-\left\| b \right\| ^{m-1} \right) \), in (2.10), we get
We get the desired result if we interchange the roles of a and b in (2.23). \(\square \)
In particular, we may state the following refinement of some results in the proof of [16, Theorem 2].
Corollary 2.3
Let \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \), \(m\ge 0\) and \(p,q>0\) be such that \({1}/{p}+{1}/{q}=1\).
(1) If \(0\le m<1,\) then
$$\begin{aligned} \Big \Vert \Vert a\Vert ^{m-1} a-\Vert b\Vert ^{m-1}b \Big \Vert +\lambda \max \left\{ {{\alpha }_{1}},{{\alpha }_{2}} \right\} \le (2-m)\frac{\Vert a-b\Vert }{\max \{\Vert a\Vert , \Vert b\Vert \}^{1-m}}. \end{aligned}$$
(2) If \(m\ge 1\), then
$$\begin{aligned} \Big \Vert \Vert a\Vert ^{m-1} a-\Vert b\Vert ^{m-1}b \Big \Vert +\lambda \max \left\{ {{\alpha }_{1}},{{\alpha }_{2}} \right\} \le m\max \{\Vert a\Vert , \Vert b\Vert \}^{m-1}\Vert a-b\Vert , \end{aligned}$$
where
and \(\lambda =\min \{1/p,1/q\}.\)
In particular, if we consider the case \(m=0\) in Proposition 2.1, we obtain the following improvement of (2.3).
Corollary 2.4
Let \(a,b\in {\mathscr {X}}\backslash \left\{ 0 \right\} \). If \(p,q>0\) are such that \({1}/{p}+{1}/{q}=1\), then
where
and \(\lambda =\min \{1/p,1/q\}.\)
Remark 2.5
In the reverse direction, we also have
where
and \(\mu =\max \{1/p,1/q\}.\)
2.2 Inner Product Spaces
Let \({\mathscr {X}}=\left( {\mathscr {X}},\left<\cdot ,\cdot \right>\right) \) be a real or complex inner product space, with induced norm \(\Vert \cdot \Vert \). A significant inequality in this space is the Cauchy–Schwarz inequality, which states that
$$\begin{aligned} \left| \left< a,b\right> \right| \le \Vert a\Vert \Vert b\Vert \end{aligned}$$
(2.24)
for all \(a,b \in {\mathscr {X}}\). In addition, equality in (2.24) holds if and only if a and b are linearly dependent. We refer the reader to [1, 9, 29] for further discussion of this inequality.
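A minimal numerical illustration (ours; complex vectors in \({\mathbb {C}}^5\), using numpy's `vdot`, which conjugates its first argument):

```python
import numpy as np

rng = np.random.default_rng(4)

for _ in range(1000):
    a = rng.normal(size=5) + 1j * rng.normal(size=5)
    b = rng.normal(size=5) + 1j * rng.normal(size=5)
    # |<a, b>| <= ||a|| ||b||
    assert abs(np.vdot(b, a)) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12

# equality for linearly dependent vectors
a = rng.normal(size=5) + 1j * rng.normal(size=5)
b = (2 - 3j) * a
assert np.isclose(abs(np.vdot(b, a)), np.linalg.norm(a) * np.linalg.norm(b))
```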
This short section presents results similar to those for the previously discussed angular distances. As a consequence, we obtain an identity for \({\mathfrak {R}}\left<a,b\right>\) for nonzero vectors \(a,b\in {\mathscr {X}}\), where \({\mathfrak {R}}\) denotes the real part. This identity is related to the angle between two vectors in an inner product space.
Theorem 2.4
Let \({\mathscr {X}}\) be an inner product space, and let \(a,b\in {\mathscr {X}}\backslash \{0\}\). If \(p,q>0\) are such that \(\frac{1}{p}+\frac{1}{q}=1,\) then
and
where \(\lambda =\min \{1/p,1/q\}\) and \(\mu =\max \{1/p,1/q\}.\)
Proof
Assume that \(\left\| a \right\| =\left\| b \right\| =1\) and \({\mathscr {X}}\) is an inner product space. Letting \(r=2\) in the first inequality in (2.6) implies
Thus,
Now, replacing a and b by \({a}/{\left\| a \right\| }\) and \({b}/{\left\| b \right\| }\), respectively, gives the first inequality. If we apply the same method to the second inequality in (2.6), we infer the second inequality. This completes the proof. \(\square \)
Remark 2.6
If we let \(a, b \in {\mathscr {X}}\backslash \{0\}\) and \(p=q=2\) in Theorem 2.4, then
Using the admissible definition for the angle between the vectors a and b, given by [28]
we have
Note that the last identity is related to the notion of the angle between two non-zero vectors a and b in a real normed space introduced by Diminnie et al. in [8].
3 Log-Convex Functions
In the previous section, we have seen how Theorem 2.1 plays a major role in obtaining our results about the angular distance between vectors in normed spaces. We notice that (1.1) is the key tool behind Theorem 2.1. Thus, convexity has been used to obtain those results concerning the angular distance.
As mentioned in the introduction, a more potent form of convexity is the so-called log-convexity. In this section, we present some inequalities for log-convex functions and then employ them to obtain new inequalities for norms of matrices.
In this context, we use the notation \({\mathcal {M}}_n\) to denote the algebra of all \(n\times n\) complex matrices. If \(\left<Ax,x\right>\ge 0\) for all \(x\in {\mathbb {C}}^n\), we say that A is positive semi-definite. On the other hand, if \(\left<Ax,x\right>>0\) for all non-zero vectors \(x\in {\mathbb {C}}^n\), we say that A is positive definite. A matrix norm \(\Vert \cdot \Vert \) defined on \({\mathcal {M}}_n\) is said to be unitarily invariant if \(\Vert UAV\Vert =\Vert A\Vert \) for all \(A\in {\mathcal {M}}_n\) and all unitary matrices \(U,V\in {\mathcal {M}}_n\). In the literature, researchers showed interest in studying possible bounds of \(\Vert A^tXB^{1-t}\Vert \) where \(X\in {\mathcal {M}}_n,\) \(A,B\in {\mathcal {M}}_n\) are positive (definite or semi-definite) and \(0\le t\le 1.\)
The following two related lemmas will be needed in our analysis. The first one has been shown in the proof of [13, Theorem 2.1], while the second one is shown in [26, Proposition 2.21].
Lemma 3.1
Let \(A\in {\mathcal {M}}_n\) be positive definite and \(x\in {\mathbb {C}}^n\). Then
$$\begin{aligned} f\left( t \right) =\left< A^{t}x,x\right> \end{aligned}$$
is log-convex on \(\left( -\infty ,\infty \right) \).
Some applications of Lemma 3.1 can be found in [22].
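To see Lemma 3.1 in action, the following sketch (our illustration; the random positive definite matrix, the shift `4*np.eye(4)` and the interval \([-3,3]\) are arbitrary choices) tests midpoint log-convexity of \(f(t)=\left<A^{t}x,x\right>\), computing fractional powers through the spectral decomposition.

```python
import numpy as np

rng = np.random.default_rng(5)

M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)      # positive definite
x = rng.normal(size=4)

w, U = np.linalg.eigh(A)         # A = U diag(w) U^T with w > 0
f = lambda t: (U @ np.diag(w**t) @ U.T @ x) @ x   # f(t) = <A^t x, x>

# midpoint log-convexity: f((s+t)/2)^2 <= f(s) f(t)
for _ in range(1000):
    s, t = rng.uniform(-3, 3, size=2)
    assert f((s + t) / 2) ** 2 <= f(s) * f(t) * (1 + 1e-9)
```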
Lemma 3.2
Let \(A,B,X\in {{{\mathcal {M}}}_{n}}\) be such that A, B are positive. Then the function
$$\begin{aligned} f\left( t \right) =\left\| A^{t}XB^{1-t}\right\| \end{aligned}$$
is log-convex on \(\left[ 0,1 \right] \), for every unitarily invariant norm \(\Vert \cdot \Vert .\)
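A similar check for Lemma 3.2 (ours; we take the trace norm, computed by numpy as the nuclear norm, as a concrete unitarily invariant norm, and the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

def pd(n):                       # a random positive definite matrix
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

def power(A, t):                 # fractional power via the spectral decomposition
    w, U = np.linalg.eigh(A)
    return U @ np.diag(w**t) @ U.T

A, B, X = pd(4), pd(4), rng.normal(size=(4, 4))
trace_norm = lambda Y: np.linalg.norm(Y, ord='nuc')

f = lambda t: trace_norm(power(A, t) @ X @ power(B, 1 - t))

# midpoint log-convexity on [0, 1]
for _ in range(500):
    s, t = rng.uniform(0, 1, size=2)
    assert f((s + t) / 2) ** 2 <= f(s) * f(t) * (1 + 1e-9)
```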
Before proceeding, we remind the reader that every convex function \(f:[0,\infty )\rightarrow [0,\infty )\) with \(f\left( 0 \right) = 0\) satisfies the following two inequalities:
$$\begin{aligned} f(tx)\le tf(x),\quad x\ge 0,\ 0\le t\le 1, \end{aligned}$$
and
$$\begin{aligned} f(a)+f(b)\le f(a+b),\quad a,b\ge 0. \end{aligned}$$
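Both inequalities are immediate to test numerically; here is a minimal sketch of ours with the convex function \(f(x)=x^{2}\), which satisfies \(f(0)=0\).

```python
import numpy as np

rng = np.random.default_rng(7)
f = lambda x: x**2               # convex on [0, inf) with f(0) = 0

for _ in range(1000):
    a, b, t = rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform()
    assert f(t * a) <= t * f(a) + 1e-12       # f(tx) <= t f(x) for 0 <= t <= 1
    assert f(a) + f(b) <= f(a + b) + 1e-12    # superadditivity
```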
Now we prove some inequalities for log-convex functions and then employ them to obtain matrix norm results. After that, we show further inequalities involving powers of real numbers, based on a more delicate treatment of log-convex functions. The first result is a multiplicative–additive type inequality for log-convex functions.
Theorem 3.1
Let \(f:[0,\infty )\rightarrow \left( 0,\infty \right) \) be a log-convex function. Then for any \(a,b \ge 0\),
and
where \(r'=\min \left\{ a,b \right\} \) and \(R'=\max \left\{ a,b \right\} \).
Proof
Since f is log-convex, it is convex. So, by (1.1), we have
where \(r=\min \left\{ t,1-t \right\} \), \(a,b\ge 0\) and \(0\le t\le 1\). Since f is log-convex, \(\log f\) is convex. Accordingly,
Consequently (see also [6, (5)]),
Now, from the inequality (3.1), we can write
for any \(x\ge 0\) and \(0\le t\le 1\). Employing (3.2), we obtain
where \(r'=\min \left\{ a,b \right\} \) and \(r_1=\min \{\frac{a}{a+b},\frac{b}{a+b}\}=\frac{r'}{a+b}\). So,
For the reverse side, the second inequality in (1.1) assures that
holds, where \(R=\max \left\{ t,1-t \right\} \) and \(0\le t\le 1\). Therefore, for the log-convex function f, we have (see also [6, (8)])
Hence,
for any \(x\ge 0\) and \(0\le t\le 1\). By (3.4), we conclude that
where \(R'=\max \left\{ a,b \right\} \). This completes the proof. \(\square \)
Remark 3.1
Since, for log-convex functions,
and \(f\left( \frac{a+b}{2}\right) \le \sqrt{f(0)f(a+b)}\), we have
If f enjoys the inequality \(f\left( a \right) f\left( b \right) \le f\left( a+b \right) \), and if 0 is in its domain, then \(f\left( 0 \right) \le 1\); indeed, \(f\left( 0 \right) \le \frac{f\left( a+0 \right) }{f\left( a \right) }=1\). The following corollary provides an interesting additive–multiplicative inequality for log-convex functions.
Corollary 3.1
Let the assumptions of Theorem 3.1 hold. If \(f\left( 0 \right) =1\), then
On the other hand, the following theorem presents super- and sub-additive inequalities for certain inner products.
Theorem 3.2
Let \(A\in {\mathcal {M}}_n\) be positive definite and let \(x\in {\mathbb {C}}^n\). Then for any \(s,t\ge 0\),
and
where \(\lambda =\min \left\{ s,t \right\} \) and \(\nu =\max \left\{ s,t \right\} \).
Proof
The result follows from Lemma 3.1 and Corollary 3.1 by taking into account that \({{A}^{0}}=I\). \(\square \)
Now we are ready to present the following matrix norm inequalities, related to the quantity \(\Vert A^tXB^{1-t}\Vert ,\) where we use Lemma 3.2 and Corollary 3.1.
Theorem 3.3
Let \(A,B,X\in {{{\mathcal {M}}}_{n}}\) be such that A, B are positive definite. Then for any \(0\le s,t \le 1\) and any unitarily invariant norm \(\Vert \cdot \Vert \),
and
where \(\lambda =\min \left\{ s,t \right\} \) and \(\nu =\max \left\{ s,t \right\} \).
Remark 3.2
At this point, we remark that Theorems 3.2 and 3.3 have been shown for positive definite matrices \(A,B\in {\mathcal {M}}_n.\) It is natural to ask about the validity of these results in infinite dimensional Hilbert spaces. In this context, let \({\mathcal {B}}({\mathcal {H}})\) be the algebra of all bounded linear operators on a Hilbert space \({\mathcal {H}}.\) If \(A\in {\mathcal {B}}({\mathcal {H}})\) is such that \(\left<Ax,x\right>>0\) for all non-zero \(x\in {\mathcal {H}}\), then A is said to be a strictly positive operator.
The proof of Theorem 3.2 was based on Lemma 3.1, which asserts log-convexity of the function \(f(t)=\left<A^tx,x\right>\) when \(A\in {\mathcal {M}}_n\) is positive definite.
Referring to [13, Theorem 2.1], it is shown that when \(A\in {\mathcal {B}}({\mathcal {H}})\) is strictly positive, then \(f(t)=\left<A^tx,x\right>\) is log-convex, for each \(x\in {\mathcal {H}}\). Thus, Theorem 3.2 is valid for \(A\in {\mathcal {B}}({\mathcal {H}})\) and \(x\in {\mathcal {H}}.\)
On the other hand, Theorem 3.3 was based on Lemma 3.2, which asserts log-convexity of the function \(f(t)= \Vert A^{t}XB^{1-t}\Vert \), for positive definite \(A,B\in {\mathcal {M}}_n\) and arbitrary \(X\in {\mathcal {M}}_n\), where \(\Vert \cdot \Vert \) is an arbitrary unitarily invariant norm on \({\mathcal {M}}_n\). Recall that unitarily invariant norms on \({\mathcal {B}}({\mathcal {H}})\) are defined on norm ideals associated with these norms. Let \(\Vert \cdot \Vert \) be a unitarily invariant norm defined on a norm ideal \({\mathcal {N}}_{\Vert \cdot \Vert }\) of \({\mathcal {B}}({\mathcal {H}})\). When we write \(\Vert A\Vert \), we implicitly mean that \(A\in {\mathcal {N}}_{\Vert \cdot \Vert }\). In [3, Theorem 2], it is shown that when \(\Vert \cdot \Vert \) is a unitarily invariant norm on \({\mathcal {B}}({\mathcal {H}})\), with associated norm ideal \({\mathcal {N}}_{\Vert \cdot \Vert }\), then the function \(f(t)= \Vert A^{t}XB^{1-t}\Vert \) is log-convex, where \(A,B,X\in {\mathcal {N}}_{\Vert \cdot \Vert }\). Therefore, we deduce that Theorem 3.3 is true for Hilbert space operators.
So, the conclusion of this remark is that both Theorems 3.2 and 3.3 are valid for Hilbert space operators, not only for the algebra \({\mathcal {M}}_n\).
As a conclusion of this paper, we present a more delicate treatment of log-convex functions. This treatment is related to the fact that if f is a convex function on the interval J and \(x,y,z\in J\), with \(x< y< z\), then
$$\begin{aligned} \left( z-x \right) f\left( y \right) \le \left( z-y \right) f\left( x \right) +\left( y-x \right) f\left( z \right) . \end{aligned}$$
This inequality can be translated to the log-convex setting as
$$\begin{aligned} f^{z-x}\left( y \right) \le f^{z-y}\left( x \right) f^{y-x}\left( z \right) . \end{aligned}$$
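For illustration (a sketch of ours, with the log-convex function \(f(u)=e^{u^{2}}\); the comparison is done on the logarithmic scale to avoid overflow, and the sampling interval is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
logf = lambda u: u**2            # log f(u) = u^2 is convex, so f is log-convex

for _ in range(1000):
    x, y, z = np.sort(rng.uniform(-2, 2, size=3))
    # f^{z-x}(y) <= f^{z-y}(x) f^{y-x}(z), taken on the log scale
    assert (z - x) * logf(y) <= (z - y) * logf(x) + (y - x) * logf(z) + 1e-10
```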
The following theorem provides an improvement and a reverse of this inequality.
Theorem 3.4
Let \(f:J\rightarrow \left( 0,\infty \right) \) be a log-convex function and let \(x,y,z\in J\). Then for any \(x< y< z\),
and
Proof
If we let \(1-t=\frac{y-x}{z-x}\), \(t=\frac{z-y}{z-x}\), \(a=z\), and \(b=x\), we get from (3.1) that
Raising both sides of (3.7) to the power \(z-x\), we obtain
Multiplying both sides of the above inequality by \(\frac{{{f}^{x-y}}\left( y \right) }{{{f}^{z-y}}\left( x \right) }\), we get
From this, we can write
as desired. We can deduce the second inequality by applying the same technique to the inequality (3.3). \(\square \)
It is worth mentioning here that inequalities (3.1) and (3.5) (resp. (3.3) and (3.6)) are equivalent. The implication (3.1) \(\Rightarrow \) (3.5) (resp. (3.3) \(\Rightarrow \) (3.6)) was presented in the proof of Theorem 3.4, so it remains to show (3.5) \(\Rightarrow \) (3.1) (resp. (3.6) \(\Rightarrow \) (3.3)). If we put \(x=0\), \(y=t\), \(z=1\) in (3.5), we get
for any \(0\le t\le 1\). Define
It is not hard to check that if f is log-convex, then g is log-convex too. Therefore, g satisfies the inequality (3.8). We reach (3.1) since \(g\left( \frac{1}{2} \right) =f\left( \frac{a+b}{2} \right) \), \(g\left( 0 \right) =f\left( a \right) \), and \(g\left( 1 \right) =f\left( b \right) \). (3.6) \(\Rightarrow \) (3.3) can be obtained in the same way.
Our last result in this direction is the following Clarkson-type inequality, where we explore the relation between \(\frac{|a|^p+|b|^p}{2}\) and \(\left| \frac{a^p+b^p}{2}\right| \) for real numbers a, b and \(p\ge 1.\) Such inequalities are important when studying the \(p\)-norm in \(L^p\) spaces.
Theorem 3.5
Let \(a,b \in {\mathbb {R}}\). Then for any \(p\ge 1\),
and
In particular,
and
where \(r=\min \{t,1-t\}\), \(R=\max \{t,1-t\}\), and \(0 \le t \le 1\).
Proof
We can write the inequality (3.1) in the following form:
The function \(f\left( x \right) =\exp \left( {{\left| x \right| }^{p}} \right) \left( p\ge 1 \right) \) is a log-convex function. Thus, for any \(p\ge 1\), we have
Taking the logarithm of both sides of (3.9), we get
This proves the first inequality. The third inequality can be obtained from this one by setting \(p=1\). The inequality (3.3) can also be written as
Using the same arguments as above, we infer the other inequalities. \(\square \)
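The log-convexity of \(f\left( x \right) =\exp \left( {{\left| x \right| }^{p}} \right) \) used in the proof amounts to the convexity of \(|x|^{p}\) for \(p\ge 1\), which can be checked numerically as follows (our sketch; the sampling choices are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(9)

for _ in range(1000):
    p = rng.uniform(1, 4)
    a, b, t = rng.normal(), rng.normal(), rng.uniform()
    # |(1-t)a + t b|^p <= (1-t)|a|^p + t|b|^p  (convexity of |x|^p)
    assert abs((1 - t) * a + t * b) ** p <= (1 - t) * abs(a) ** p + t * abs(b) ** p + 1e-10
```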
References
Aldaz, J.M.: Strengthened Cauchy–Schwarz and Hölder inequalities, J. Inequal. Pure Appl. Math. 10(4), Article 116 (2009)
Al-Natoor, A., Audeh, W.: Refinement of triangle inequality for the Schatten \(p\)-norm. Adv. Oper. Theory 5, 1635–1645 (2020)
Bhatia, R., Davis, C.: A Cauchy–Schwarz inequality for operators with applications. Linear Algebra Appl. 223, 119–129 (1995)
Clarkson, J.A.: Uniformly convex spaces. Trans. Am. Math. Soc. 40, 396–414 (1936)
Dadipour, F., Moslehian, M.S., Rassias, J.M., Takahasi, S.E.: Characterization of a generalized triangle inequality in normed spaces. Nonlinear Anal. 75, 735–741 (2012)
Davarpanah, S.M., Moradi, H.R.: A log-convex approach to Jensen–Mercer inequality. J. Linear. Topol. Algebra 11(3), 169–176 (2022)
Dehghan, H.: A characterization of inner product spaces related to the skew-angular distance. Math. Notes 93(4), 556–560 (2013)
Diminnie, C.R., Andalafte, E.Z., Freese, R.W.: Angles in normed linear spaces and a characterization of real inner product spaces. Math. Nachr. 129, 197–204 (1986)
Dragomir S.S.: A potpourri of Schwarz related inequalities in inner product spaces (II), J. Inequal. Pure Appl. Math. 7(1), Article 14 (2006)
Dragomir, S.S.: Bounds for the normalised Jensen functional. Bull. Austral. Math. Soc. 74, 471–478 (2006)
Dragomir, S.S.: Upper and lower bounds for the \(p\)-angular distance in normed spaces with applications. J. Math. Inequal. 8, 947–961 (2014)
Dunkl, C.F., Williams, K.S.: A simple norm inequality. Am. Math. Monthly 71, 53–54 (1964)
Fujii, M., Nakamoto, R.: Refinements of Holder–McCarthy inequality and Young inequality. Adv. Oper. Theory 1(2), 184–188 (2016)
Krnić, M., Minculete, N.: Bounds for the \(p\)-angular distance and characterizations of inner product spaces. Mediterr. J. Math. 18, 140 (2021)
Krnić, M., Minculete, N.: Characterizations of inner product spaces via angular distances and Cauchy-Schwarz inequality. Aequat. Math. 95, 147–166 (2021)
Maligranda, L.: Simple norm inequalities. Amer. Math. Monthly 113, 256–260 (2006)
Maligranda, L.: Some remarks on the triangle inequality for norms. Banach J. Math. Anal. 2, 31–41 (2008)
Massera, J.L., Schäffer, J.J.: Linear differential equations and functional analysis, I. Ann. Math. 67(3), 517–573 (1958)
Minculete, N., Moradi, H.R.: Some improvements of the Cauchy-Schwarz inequality using the Tapia semi-inner-product. Mathematics. 8, 2112 (2020)
Minculete, N., Păltănea, R.: Improved estimates for the triangle inequality. J. Inequal. Appl. 2017, 17 (2017)
Moradi, H.R., Sababheh, M.: More accurate numerical radius inequalities (II). Linear Multilinear Algebra. 69(5), 921–933 (2021)
Moradi, H.R., Furuichi, S., Sababheh, M.: Some operator inequalities via convexity. Linear Multilinear Algebra. https://doi.org/10.1080/03081087.2021.2006592.
Moslehian, M.S., Dadipour, F., Rajić, R., Marić, A.: A glimpse at the Dunkl-Williams inequality. Banach J. Math. Anal. 5, 138–151 (2011)
Pečarić, J., Rajić, R.: On some generalized norm triangle inequalities. Rad HAZU. 515, 43–52 (2013)
Rooin, J., Rajabi, S., Moslehian, M.S.: \(p\)-angular distance orthogonality. Aequationes Math. 94(1), 103–121 (2020)
Sababheh, M.: Log and Harmonically log-convex functions related to matrix norms. Oper. Matrix 10(2), 453–465 (2016)
Sababheh, M.: Means refinements via convexity. Mediterr. J. Math. 14, 125 (2017). https://doi.org/10.1007/s00009-017-0924-8
Scharnhorst, K.: Angles in complex vector spaces. Acta Appl. Math. 69, 95–103 (2001)
Wigren, T.: The Cauchy-Schwarz inequality: Proofs and applications in various spaces, monograph
Acknowledgements
The authors would like to express their deep gratitude to the reviewer for valuable comments.
Contributions
The authors declare that they have contributed equally to this paper. All authors have read and approved this version.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.