1 Introduction

Among all compact domains in Euclidean space \(\mathbb {R}^n\) with given surface area S, the volume \(V_n\) of a domain D is maximized only by the ball. This isoperimetric property of the ball is usually formulated as the following classical isoperimetric inequality

$$\begin{aligned} S(D)^n\ge n^n\omega _n V_n(D)^{n-1}, \end{aligned}$$
(1.1)

with equality if and only if the compact domain D is a ball, where \(\omega _n=\pi ^{n/2}/\Gamma (1+\frac{n}{2})\) is the volume of the unit ball B in \(\mathbb {R}^n\). The literature on the isoperimetric inequality, as well as its various generalizations and applications, is abundant. See, e.g., the excellent survey articles by Osserman [20] and Gardner [4].

Let K be a convex body in \(\mathbb {R}^n\). Write \(V_j(K|\xi )\) for the j-dimensional volume of the projection of K onto a j-dimensional subspace \(\xi \subseteq \mathbb {R}^n\), and call it the jth projection function. An important geometric quantity related to \(V_j(K|\xi )\) is the jth surface area, defined by

$$\begin{aligned} S_j(K)=\frac{n\omega _n}{\omega _j}\int _{G_{n,j}}V_j(K|\xi )\,d\mu _j(\xi ),\quad j=1,2,\ldots ,n-1,n, \end{aligned}$$
(1.2)

where the Grassmann manifold \(G_{n,j}\) is endowed with the normalized Haar measure \(\mu _j\). The jth surface area generalizes both the surface area and the volume. Indeed, \(\frac{1}{n}S_{n}(K)\) is the volume of K. For \(j=n-1\), we have the celebrated Cauchy surface area formula

$$\begin{aligned} S_{n-1}(K)=S(K) =\frac{1}{\omega _{n-1}}\int _{\mathbb {S}^{n-1}}V_{n-1}(K|u^\bot )\,d\mathcal {H}^{n-1}(u), \end{aligned}$$

where \(u^\bot \) denotes the \((n-1)\)-dimensional subspace orthogonal to u, and \(\mathcal {H}^{n-1}\) denotes the spherical Lebesgue measure on the unit sphere \(\mathbb {S}^{n-1}\). This formula says that the surface area of a convex body is, up to a factor depending only on n, the average volume of its shadows.
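As a quick consistency check of the normalizing constants, take \(K=B\): every shadow \(B|u^\bot \) is an \((n-1)\)-dimensional unit ball, so \(V_{n-1}(B|u^\bot )=\omega _{n-1}\) for all \(u\in \mathbb {S}^{n-1}\), and

$$\begin{aligned} \frac{1}{\omega _{n-1}}\int _{\mathbb {S}^{n-1}}V_{n-1}(B|u^\bot )\,d\mathcal {H}^{n-1}(u) = \frac{1}{\omega _{n-1}}\cdot \omega _{n-1}\cdot n\omega _n = n\omega _n = S(B). \end{aligned}$$

The same computation applied to (1.2) gives \(S_j(B)=n\omega _n\) for every j, so the ball attains equality in the extended isoperimetric inequality (1.3) below.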

Note that, in accordance with the conventional terminology in convex geometry, \(\frac{1}{n}S_j(K)\) is precisely the so-called \((n-j)\)th quermassintegral \(W_{n-j}(K)\) of the convex body K. Here, we prefer to call it the jth surface area and to denote it by \(S_{j}\), because the integral in (1.2) shows the true nature of “surface area”.

For the jth surface area \(S_j(K)\), there holds the extended isoperimetric inequality

$$\begin{aligned} S_j(K)^n\ge n^n\omega _n^{n-j}V_n(K)^{j}, \end{aligned}$$
(1.3)

with equality if and only if K is a ball.

Without doubt, the Euclidean ball, uniquely characterized by isoperimetric inequalities such as (1.1) and (1.3), is one of the most important geometric objects. However, to study isoperimetric features of other important geometric objects, such as ellipsoids, simplices and parallelotopes, a fruitful and natural approach is via affine geometry. First of all, we need to study geometric quantities which are affine invariant; note that the jth surface area is not affine invariant. In some sense, establishing sharp affine isoperimetric inequalities is a central problem in isoperimetric theory, as well as in affine geometry.

In the 1970s, Petty [22] proved the following celebrated affine isoperimetric inequality, which is now known as the Petty projection inequality

$$\begin{aligned} {V_n}({\Pi ^*}K){V_n}{(K)^{n - 1}} \le {\left( {\frac{{{\omega _n}}}{{{\omega _{n - 1}}}}} \right) ^n}, \end{aligned}$$
(1.4)

with equality if and only if K is an ellipsoid. Here, \(\Pi K\) is the projection body of a convex body K, whose support function is \(h_{\Pi K}(u)=V_{n-1}(K|u^\bot )\), for \(u\in \mathbb {S}^{n-1}\), and \(\Pi ^*K\) denotes the polar body of \(\Pi K\). Note that, by the monotonicity of power means, the Petty projection inequality (1.4) is far stronger than the Euclidean isoperimetric inequality (1.1). The reverse form of (1.4) is known as the Zhang projection inequality, which was conjectured by Ball [1] and was first proved by Zhang [26].

Since

$$\begin{aligned} \left[ {V_n}({\Pi ^*}K)\right] ^{-\frac{1}{n}} = {\left( \frac{1}{n}\int _{\mathbb {S}^{n-1}}V_{n-1}(K|u^\bot )^{-n}\,d\mathcal {H}^{n-1}(u)\right) }^{-\frac{1}{n}}, \end{aligned}$$

this indicates that the functional \([{V_n}({\Pi ^*}K)]^{-\frac{1}{n}} \) also has the nature of a “surface area”. Analogous quantities were later considered by Lutwak and Grinberg in the setting of convex bodies. In [11], Lutwak proposed to define affine quermassintegrals \(\Phi _0(K)\), \(\Phi _1(K)\), \(\ldots \), \(\Phi _n(K)\) for each convex body K in \(\mathbb {R}^n\), by taking \(\Phi _0(K)=V_n(K)\), \(\Phi _n(K)=\omega _n\), and for \(1\le j\le n-1\), by

$$\begin{aligned} {\Phi _j}(K) = \frac{{{\omega _n}}}{{{\omega _{n - j}}}}{\left( {\int _{{G_{n,n - j}}} {{V_{n - j}}{{(K|\xi )}^{ - n}}\,d{\mu _{n - j}}(\xi )} } \right) ^{-1/n}}. \end{aligned}$$
(1.5)

Grinberg [7] proved that these geometric quantities, as their names suggest, are invariant under volume-preserving affine transformations. For the extension of the Lutwak dual affine quermassintegrals and the related affine isoperimetric inequalities to bounded integrable functions, we refer to the excellent article [2] by S. Dann, G. Paouris and P. Pivovarov.

Since the integral in (1.5) has the character of a surface area, we slightly modify the quantities \({\Phi _j}(K)\) and write

$$\begin{aligned} \Lambda _j(K)=n\Phi _{n-j}(K),\quad j=0, 1,\ldots ,n-1,n. \end{aligned}$$
(1.6)

We call \(\Lambda _j(K)\) the jth integral affine surface area of the convex body K. Note that \(\Lambda _{n-1}(K)\) is a constant multiple (depending only on n) of \([{V_n}({\Pi ^*}K)]^{-\frac{1}{n}} \). Thus, the Petty projection inequality (1.4) can be reformulated as follows:

$$\begin{aligned} {\Lambda _{n - 1}}{(K)^n} \ge {n^n}{\omega _n}{V_n}(K)^{n-1}, \end{aligned}$$

with equality if and only if K is an ellipsoid.
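For the reader's convenience, here is a sketch of this reformulation; it uses the standard identification of the normalized Haar measure \(\mu _{n-1}\) on \(G_{n,n-1}\) with the normalized spherical measure under \(u\mapsto u^\bot \). From (1.5), (1.6) and the integral formula for \([{V_n}({\Pi ^*}K)]^{-\frac{1}{n}}\) above,

$$\begin{aligned} \Lambda _{n-1}(K) = \frac{n\omega _n}{\omega _{n-1}}{\left( \frac{1}{n\omega _n}\int _{\mathbb {S}^{n-1}}V_{n-1}(K|u^\bot )^{-n}\,d\mathcal {H}^{n-1}(u)\right) }^{-\frac{1}{n}} = \frac{n\omega _n^{\frac{n+1}{n}}}{\omega _{n-1}}\,[{V_n}({\Pi ^*}K)]^{-\frac{1}{n}}. \end{aligned}$$

Thus the Petty projection inequality (1.4), written as \([{V_n}({\Pi ^*}K)]^{-\frac{1}{n}}\ge \frac{\omega _{n-1}}{\omega _n}V_n(K)^{\frac{n-1}{n}}\), is equivalent to \(\Lambda _{n-1}(K)\ge n\,\omega _n^{1/n}V_n(K)^{(n-1)/n}\), which is the displayed inequality after raising both sides to the nth power.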

In analogy with the classical isoperimetric inequality for the surface area functional and the Petty projection inequality for the \((n-1)\)th integral affine surface area, Lutwak [12] proposed the following insightful conjecture for the general jth integral affine surface areas.

The Lutwak conjecture Suppose K is a convex body in \(\mathbb {R}^n\). Then

$$\begin{aligned}{\Lambda _j}{(K)^n} \ge {n^n}\omega _n^{n - j}{V_n}{(K)^j}, \quad j=1, 2,\ldots ,n-1,\end{aligned}$$

with equality if and only if K is an ellipsoid.

Unfortunately, no essential progress has been made on the Lutwak conjecture during the last three decades. It has not even received the attention it deserves, because only two nontrivial cases follow from classical results: when \(j=n-1,\) it is the above mentioned Petty projection inequality; when \(j=1\) and K is centrally symmetric, it is exactly the celebrated Blaschke–Santaló inequality. In each case equality holds precisely when K is an ellipsoid. For \(j=2, 3,\ldots , n-2,\) the Lutwak conjecture still remains open.

In this article, we focus on the Lutwak integral affine surface areas. In Sect. 2, a variational formula for the integral affine surface area \(\Lambda _{j}(K)\) of a convex body K in \(\mathbb {R}^n\) is established for \(j=1, 2,\ldots , n-1.\) From this variational formula, we define a new measure, called the affine projection measure, and show that this measure is indeed affine invariant. In Sect. 3, we introduce a new ellipsoid \({\mathrm{{P}}_j}K\), which is associated with the jth projection function \(V_j(K|\cdot )\) of the convex body K, and call it the jth projection mean ellipsoid of K. It is with this projection mean ellipsoid that we prove the following main results in Sects. 4 and 5, respectively.

Theorem 1.1

Suppose K is a convex body in \(\mathbb {R}^n\). Then,

$$\begin{aligned} {\Lambda _j}{(K)^n} \ge {n^n}\omega _n^{n - j}{V_n}{({\mathrm{{P}}_j}K)^j}, \quad j=1, 2,\ldots ,n-1. \end{aligned}$$
(1.7)

If \(j=2, 3, \ldots ,n-1\), or \(j=1\) and K is centrally symmetric, the equality holds if and only if K is an ellipsoid. If \(j=1\), the equality holds if and only if K has an \(\mathrm{SL}(n)\) image with constant width.

Theorem 1.2

Suppose K is an origin-symmetric convex body in \(\mathbb {R}^n\). Then,

$$\begin{aligned} {V_n}({K^*}){V_n}({\mathrm{{P}}_1}K) \le \omega _n^2, \end{aligned}$$
(1.8)

with equality if and only if K is an ellipsoid.

The sharp affine isoperimetric inequality (1.7) in Theorem 1.1, including its equality condition, can be viewed as a modified version of the Lutwak conjecture. In this new geometric inequality, as well as in inequality (1.8), the projection mean ellipsoid plays a crucial and indispensable role.

It is worth mentioning that projection and intersection are two of the most fundamental geometric operations for studying the structure of convex bodies in convex geometry. Meanwhile, ellipsoids, especially the classical John ellipsoid and its various generalizations, such as the \(L_{p}\) John ellipsoids [16], mixed \(L_{p}\) John ellipsoids [9], Orlicz–John ellipsoids [28] and Orlicz–Legendre ellipsoids [29], are powerful tools for attacking reverse isoperimetric problems and effective in establishing reverse isoperimetric inequalities. See, e.g., [1, 9, 13, 15, 17, 18, 24, 25, 27, 28], etc. In this article, for the first time, we combine these two important ingredients, projection and ellipsoid, and introduce a new ellipsoid by using the projection function. Remarkably, this new ellipsoid is tailor-made for the extremum problem for the Lutwak integral affine surface areas, which opens up an entirely different route to the longstanding Lutwak conjecture in convex geometry.

As for the ellipsoid associated with the intersection function and its applications to affine isoperimetric problems, one can refer to [10]. In Sect. 6, we provide an example comparing the volume of a convex body with that of its projection mean ellipsoid.

2 A variational formula for the integral affine surface area

The setting for this paper is Euclidean n-dimensional space \(\mathbb {R}^n\). As usual, write B and \(\mathbb {S}^{n-1}\) for the standard Euclidean unit ball and unit sphere in \(\mathbb {R}^n\), respectively. Write \(G_{n,j}\) for the Grassmannian manifold of all j-dimensional linear subspaces of \(\mathbb {R}^n\). For \(\xi \in G_{n,j}\), let \(\cdot \,|\xi \) denote the orthogonal projection from \(\mathbb {R}^n\) onto \(\xi \).

2.1 Basics on convex bodies

Write \(\mathcal {K}^n\) for the class of convex bodies in \(\mathbb {R}^n\). A compact convex set K in \(\mathbb {R}^n\) is uniquely determined by its support function \(h_K: \mathbb {R}^n \rightarrow \mathbb {R}\), defined for \(x\in \mathbb {R}^n\) by

$$\begin{aligned} h_K(x)=\max \left\{ x\cdot y: y\in K \right\} . \end{aligned}$$
(2.1)

It is clear that the support function is positively homogeneous of degree 1.

Suppose K is a convex body in \(\mathbb {R}^n\) with the origin in its interior. Its radial function \(\rho _K: \mathbb {S}^{n-1}\rightarrow (0,\infty )\) is defined for \(u\in {\mathbb S}^{n-1}\) by \(\rho _K(u)=\max \{\lambda >0: \lambda u\in K\}.\) The polar body \(K^*\) of K is still a convex body with the origin in its interior, and \(\rho _{K^*}(u)=h_K(u)^{-1}.\)

For compact convex sets K and L, their Hausdorff distance is defined by

$$\begin{aligned} \delta (K,L)=\Vert h_K-h_L\Vert _\infty , \end{aligned}$$
(2.2)

where \(\Vert \cdot \Vert _\infty \) denotes the \(L_\infty \) norm on \(\mathbb {S}^{n-1}\).

For compact convex sets K and L in \(\mathbb {R}^n\), the volume of \(K+\varepsilon L\), \(\varepsilon \ge 0\), can be expanded as the following Steiner–Minkowski polynomial

$$\begin{aligned} {V_n}(K + \varepsilon L) = \sum _{j = 0}^n {\left( {\begin{array}{*{20}{c}} n \\ j \\ \end{array}} \right) {V_{n,j}}(K,L){\varepsilon ^j}}, \end{aligned}$$
(2.3)

where \(V_{n,j}(K,L)\) is called the jth mixed volume of (K, L). Note that the notation \(V_{n,j}(K,L)\) is slightly different from common usage, but it is convenient for our purpose. When \(L=B\), we have \(nV_{n,j}(K,B)=S_j(K)\).

From (2.3), it follows that

$$\begin{aligned} {V_{n,1}}(K,L) = \lim _{\varepsilon \rightarrow 0^+} \frac{{{V_n}(K + \varepsilon L) - {V_n}(K)}}{{n\varepsilon }}. \end{aligned}$$
(2.4)

If in addition K is a convex body, then there is the following integral representation

$$\begin{aligned} V_{n,1}(K,L)=\frac{1}{n}\int _{\mathbb {S}^{n-1}}h_L(u)\,dS(K,u). \end{aligned}$$
(2.5)

Here, \(S(K,\cdot )\) denotes the surface area measure of K. For more information on surface area measure, see, e.g., Gardner [5], Gruber [8] and Schneider [23].

For \(\xi \in G_{n,j}\) and \(1\le j\le n-1\), write \(V_{j,1}(K|\xi ,L|\xi )\) for the first mixed volume of \((K|\xi ,L|\xi )\) defined in the subspace \(\xi \). It is convenient to use the normalization of \(V_{j,1}(K|\xi , L|\xi )\). That is,

$$\begin{aligned} \bar{V}_{j,1}(K|\xi , L|\xi )=\frac{V_{j,1}(K|\xi , L|\xi )}{V_j(K|\xi )}. \end{aligned}$$
(2.6)

2.2 Affine projection measures

Let \(K\in \mathcal {K}^n\) and \(1\le j\le n-1\). It is useful to introduce a Borel measure \(\mu _j(K,\cdot )\) of the convex body K, which is defined on \(G_{n,j}\) and called the jth affine projection measure of K. The measure \(\mu _j(K,\cdot )\) is absolutely continuous with respect to the Haar measure \(\mu _j\), with Radon–Nikodym derivative

$$\begin{aligned} \frac{{d{\mu _j}(K,\xi )}}{{d{\mu _j}(\xi )}} = {V_j}{(K|\xi )^{ - n}}. \end{aligned}$$
(2.7)

Obviously, \(\mu _j(K+x,\cdot )=\mu _j(K,\cdot )\) for \(x\in \mathbb {R}^n\), and \(\mu _j(\alpha K,\cdot ) =\alpha ^{-nj}\mu _j(K,\cdot )\) for \(\alpha >0\).
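Both identities are immediate from (2.7); for instance, for the homogeneity,

$$\begin{aligned} \frac{d\mu _j(\alpha K,\xi )}{d\mu _j(\xi )} = V_j(\alpha K|\xi )^{-n} = \left( \alpha ^{j}V_j(K|\xi )\right) ^{-n} = \alpha ^{-nj}\,\frac{d\mu _j(K,\xi )}{d\mu _j(\xi )}. \end{aligned}$$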

Note that the total mass \(\mu _j(K, G_{n,j})\) of \(\mu _j(K,\cdot )\) and the jth integral affine surface area \(\Lambda _j(K)\) are related by

$$\begin{aligned} {\mu _j}(K,{G_{n,j}}) = {\left( {\frac{{{\omega _j}{\Lambda _j}(K)}}{{n{\omega _n}}}} \right) ^{ - n}}. \end{aligned}$$

So, \(\mu _j(K,\cdot )\) can be viewed as the differential of the jth integral affine surface area \(\Lambda _j\).
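As a sanity check of this relation, take \(K=B\): then \(V_j(B|\xi )=\omega _j\) for every \(\xi \in G_{n,j}\) and \(\Lambda _j(B)=n\omega _n\) (see the proof of Lemma 4.4), so

$$\begin{aligned} \mu _j(B,G_{n,j}) = \int _{G_{n,j}}\omega _j^{-n}\,d\mu _j(\xi ) = \omega _j^{-n} = {\left( \frac{\omega _j\,\Lambda _j(B)}{n\omega _n}\right) }^{-n}. \end{aligned}$$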

For convenience, write \(\bar{\mu }_j(K,\cdot )\) for the normalization of \(\mu _j(K,\cdot )\), that is,

$$\begin{aligned} \bar{\mu }_j(K,\cdot )=\mu _j(K,\cdot )/\mu _j(K,G_{n,j}), \end{aligned}$$
(2.8)

which will appear in the variational formula of Theorem 2.3.

The following theorem shows \(\mu _j(K,\cdot )\) is indeed affine invariant.

Theorem 2.1

Suppose \(K\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then for \(g\in \mathrm{SL}(n)\),

$$\begin{aligned} d\mu _j(gK,\xi )=d\mu _j(K, g^T\xi ), \quad \forall \xi \in G_{n,j}. \end{aligned}$$

Proof

Since g induces a linear transformation from \(\xi \) to \(g\xi \), for any Lebesgue measurable \(A\subset \xi \) with positive Lebesgue measure, the volume ratio \(V_j(gA)/V_j(A)\) depends only on g (and is independent of the choice of A). Thus, it is reasonable to define

$$\begin{aligned} \sigma _j(g,\xi )=V_j(gA)/V_j(A). \end{aligned}$$
(2.9)

Let \(g\mu _j\) be the image measure of \(\mu _j\) under the map \(g: G_{n,j}\rightarrow G_{n,j}\), \(\xi \mapsto g\xi \). Since the Grassmannian \(G_{n,j}\) is of class \(C^\infty \), through local coordinates, its Riemannian volume element \(d\mu _j(\xi )\) is always represented as the differential form \(f(x_1,\ldots ,x_l)dx_1\cdots dx_l\), where f is of class \(C^\infty \) and \(l=\dim (G_{n,j})\). So, \(g\mu _j\) is absolutely continuous with respect to \(\mu _j\), with a positive Radon–Nikodym derivative everywhere.

Hence, we can take the Radon–Nikodym derivative, \(\sigma _{G_{n,j}}(g,\xi )\), of \(g^{-1}\mu _j\) with respect to \(\mu _j\). Then, \( \sigma _{G_{n,j}}(g,\xi )=d\mu _j(g^{-1}\xi )/d\mu _j(\xi )\). Using the fact that \(\sigma _{G_{n,j}}(g,\xi )=\sigma _j(g,\xi )^{-n}\), proved by Furstenberg and Tzkoni [3], we have

$$\begin{aligned} d\mu _j(g^{-1}\xi )=\sigma _j(g,\xi )^{-n} d\mu _j(\xi ). \end{aligned}$$
(2.10)

Recall that in [7], Grinberg proved the following identity

$$\begin{aligned} V_j((gK)|\xi )=\sigma _j(g^T,\xi )V_j(K|g^T\xi ). \end{aligned}$$
(2.11)

Now, from (2.7), (2.11), the fact \(\xi =g^{-T}(g^{T}\xi )\), (2.10), (2.9) and finally (2.7) again, it follows that

$$\begin{aligned} d{\mu _j}(gK,\xi )&= {V_j}{((gK)|\xi )^{ - n}}\,d{\mu _j}(\xi ) \\&= {V_j}{(K|{g^T}\xi )^{ - n}}{\sigma _j}{({g^T},\xi )^{ - n}}\,d{\mu _j}(\xi ) \\&= {V_j}{(K|{g^T}\xi )^{ - n}}{\sigma _j}{({g^T},{g^{ - T}}({g^T}\xi ))^{ - n}}\,d{\mu _j}({g^{ - T}}({g^T}\xi )) \\&= {V_j}{(K|{g^T}\xi )^{ - n}}{\sigma _j}{({g^T},{g^{ - T}}({g^T}\xi ))^{ - n}}{\sigma _j}{({g^{ - T}},{g^T}\xi )^{ - n}}\,d{\mu _j}({g^T}\xi ) \\&= {V_j}{(K|{g^T}\xi )^{ - n}}{\sigma _j}{({g^T}{g^{ - T}},{g^T}\xi )^{ - n}}\,d{\mu _j}({g^T}\xi ) \\&= {V_j}{(K|{g^T}\xi )^{ - n}}d{\mu _j}({g^T}\xi ) \\&= d{\mu _j}(K,{g^T}\xi ), \end{aligned}$$

as desired. \(\square \)

The following lemma establishes the weak convergence of affine projection measures.

Lemma 2.2

Suppose \(K,K_i\in \mathcal {K}^n\), \(i\in \mathbb {N}\) and \(1\le j\le n-1\). If \(K_i\rightarrow K\) in the Hausdorff metric as \(i\rightarrow \infty \), then \(\mu _j(K_i,\cdot )\rightarrow \mu _j(K,\cdot )\) weakly.

Proof

Let f be a continuous function on \(G_{n,j}\). We aim to prove the convergence

$$\begin{aligned} \int _{{G_{n,j}}} {f(\xi )\,d{\mu _j}({K_i},\xi )} \rightarrow \int _{{G_{n,j}}} {f(\xi )\,d{\mu _j}(K,\xi )} . \end{aligned}$$

For each \(\xi \in G_{n,j}\), since \(K_i\rightarrow K\), it follows that \(K_i|\xi \rightarrow K|\xi \). Since the volume functional \(V_j\) is continuous in the Hausdorff metric, this implies that \(V_j(K_i|\xi )\rightarrow V_j(K|\xi )\). So,

$$\begin{aligned} f(\xi )V_j(K_i|\xi )^{-n} \rightarrow f(\xi )V_j(K|\xi )^{-n}. \end{aligned}$$

To apply the Lebesgue dominated convergence theorem and obtain the desired limit, we need to show that

$$\begin{aligned} \sup _{(i,\xi )\in \mathbb {N}\times G_{n,j}}|f(\xi )|V_j(K_i|\xi )^{-n}<\infty . \end{aligned}$$

Since \(G_{n,j}\) is compact, the continuity of f implies that \(\max _{G_{n,j}}|f|<\infty \). So, it suffices to prove

$$\begin{aligned} 0< c_1:=\inf _{(i,\xi )\in \mathbb {N}\times G_{n,j}}V_j(K_i|\xi ). \end{aligned}$$

In fact, by the convergence \(K_i\rightarrow K\), there exist a constant \(c_2>0\), a point \(x\in K\) and an index \(i_0\in \mathbb {N}\), such that \(c_2B+x\subset \mathrm{int}\,K\) and \(c_2B+x\subseteq K_i\) for \(i\ge i_0+1\). Note that

$$\begin{aligned} 0<\min \left\{ \min _{\xi \in G_{n,j}}V_j(K_1|\xi ), \ldots , \min _{\xi \in G_{n,j}}V_j(K_{i_0}|\xi ), c_2^{j}\omega _j \right\} \le c_1, \end{aligned}$$

which completes the proof. \(\square \)

2.3 Integral affine surface area

The starting point of this article is to calculate the first variation of \(\Lambda _j\).

Theorem 2.3

Suppose \(K,L\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then,

$$\begin{aligned} \lim _{\varepsilon \rightarrow {0^ + }} \frac{{{\Lambda _j}(K + \varepsilon L) - {\Lambda _j}(K)}}{{j{\Lambda _j}(K)\varepsilon }} = \int _{{G_{n,j}}} {{{\bar{V}}_{j,1}}(K|\xi ,L|\xi )\,d{\bar{\mu }_j}(K,\xi )} \end{aligned}$$

Proof

By the compactness of convex bodies, there are positive constants \(R_K\) and \(R_L\) such that \(K\subseteq R_KB^n\) and \(L\subseteq R_L B^n\). Let \(0<\varepsilon \le \varepsilon _0<\infty \) and \(\xi \in G_{n,j}\). From the monotonicity of mixed volumes with respect to set inclusion and the homogeneity of mixed volumes, for \(1\le l\le j\), we have

$$\begin{aligned} {V_{j,l}}(K|\xi ,L|\xi )&\le {V_{j,l}}(({R_K}{B^n})|\xi ,({R_L}{B^n})|\xi ) \\&= {R_K}^{j - l}{R_L}^l{V_{j,l}}({B^n}|\xi ,{B^n}|\xi ) \\&= {R_K}^{j - l}{R_L}^l{V_j}({B^n}|\xi ) \\&= {R_K}^{j - l}{R_L}^l{\omega _j}. \end{aligned}$$

Applying the Steiner–Minkowski expansion (2.3) to \(V_j((K|\xi )+\varepsilon (L|\xi ))\) yields

$$\begin{aligned} \frac{{{V_j}((K + \varepsilon L)|\xi ) - {V_j}(K|\xi )}}{\varepsilon } \le c: = {\omega _j}\sum \limits _{l = 1}^j \left( {\begin{array}{c} j \\ l \end{array}} \right) {R_K}^{j - l}{R_L}^l{\varepsilon _0}^{l - 1} . \end{aligned}$$

Observe that the constant c is positive and finite, and is independent of \(\xi \in G_{n,j}\). Hence, the following family of positive integrable functions

$$\begin{aligned} \left\{ {\frac{{{V_j}((K + \varepsilon L)| \cdot ) - {V_j}(K| \cdot )}}{\varepsilon }:\mathrm{{0 < }}\varepsilon \le {\varepsilon _0}} \right\} \end{aligned}$$

is uniformly bounded on the Grassmannian \(G_{n,j}\).

Moreover,

$$\begin{aligned} {\frac{\left| {{V_j}{{((K + \varepsilon L)| \cdot )}^{ - n}} - {V_j}{{(K| \cdot )}^{ - n}}}\right| }{\varepsilon }}&\le \frac{n}{{{V_j}{{(K| \cdot )}^{n + 1}}}}\cdot \frac{{{V_j}((K + \varepsilon L)|\cdot ) - {V_j}(K|\cdot )}}{\varepsilon } \\&\le \frac{{nc}}{{{{\min }_{{G_{n,j}}}}{V_j}{{(K| \cdot )}^{n + 1}}}} \\&< \infty . \end{aligned}$$

Thus, the set

$$\begin{aligned} \left\{ {\frac{{{V_j}{{((K + \varepsilon L)| \cdot )}^{ - n}} - {V_j}{{(K| \cdot )}^{ - n}}}}{\varepsilon }:\mathrm{{0 < }}\varepsilon \le {\varepsilon _0}} \right\} \end{aligned}$$

is also uniformly bounded on the Grassmannian \(G_{n,j}\).

Meanwhile, by (2.4) and (2.6), for each \(\varepsilon \), the function \(\varepsilon ^{-1}\left( {{V_j}{{((K + \varepsilon L)| \cdot )}^{ - n}} - {V_j}{{(K| \cdot )}^{ - n}}} \right) \) is \(\mu _j\)-integrable on \(G_{n,j}\), and for each \(\xi \in G_{n,j}\), there holds the limit

$$\begin{aligned} \lim _{\varepsilon \rightarrow {0^ + }} \frac{{{V_j}{{((K + \varepsilon L)|\xi )}^{ - n}} - {V_j}{{(K|\xi )}^{ - n}}}}{\varepsilon } = - nj{V_j}{(K|\xi )^{ - n}}{\bar{V}_{j,1}}(K|\xi ,L|\xi ). \end{aligned}$$

By the Lebesgue dominated convergence theorem, the functional \(V_j(K|\cdot )^{-n}\bar{V}_{j,1}(K|\cdot ,L|\cdot )\) is integrable with respect to \(\mu _j\), and the limit may be taken inside the integral. From (2.7), we have

$$\begin{aligned} {\left. {\frac{d}{{d\varepsilon }}} \right| _{\varepsilon = {0^ + }}}\int _{{G_{n,j}}} {{V_j}{{((K + \varepsilon L)|\xi )}^{ - n}}\,d{\mu _j}(\xi )} = - nj\int _{{G_{n,j}}} {{{\bar{V}}_{j,1}}(K|\xi ,L|\xi )\,d{\mu _j}(K,\xi )} . \end{aligned}$$

This shows that \(\Lambda _j(K+\varepsilon L)^{-n}\) has a right derivative at \(\varepsilon =0\). By direct calculations (sketched below), we obtain the desired formula. \(\square \)
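For completeness, here is one way to carry out the direct calculation just mentioned; it is only a sketch, using (1.5), (1.6), (2.7) and (2.8). Write

$$\begin{aligned} F(\varepsilon ) = \int _{{G_{n,j}}} {{V_j}{{((K + \varepsilon L)|\xi )}^{ - n}}\,d{\mu _j}(\xi )} ,\qquad \text {so that}\qquad \Lambda _j(K+\varepsilon L)=\frac{n\omega _n}{\omega _j}\,F(\varepsilon )^{-1/n}. \end{aligned}$$

The previous display, together with (2.8), gives \(F'(0^+)=-nj\,\mu _j(K,G_{n,j})\int _{G_{n,j}}\bar{V}_{j,1}(K|\xi ,L|\xi )\,d\bar{\mu }_j(K,\xi )\), while \(F(0)=\mu _j(K,G_{n,j})\). Hence

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0^+}\frac{\Lambda _j(K+\varepsilon L)-\Lambda _j(K)}{\varepsilon } = -\frac{1}{n}\,\frac{n\omega _n}{\omega _j}\,F(0)^{-\frac{1}{n}-1}F'(0^+) = j\,\Lambda _j(K)\int _{G_{n,j}}\bar{V}_{j,1}(K|\xi ,L|\xi )\,d\bar{\mu }_j(K,\xi ), \end{aligned}$$

which is the asserted variational formula after dividing both sides by \(j\Lambda _j(K)\).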

For \(K,L\in \mathcal {K}^n\) and \(1\le j\le n-1\), the previous theorem suggests defining the following geometric quantity

$$\begin{aligned} {{\bar{\Lambda }}_j}(K,L) = \int _{{G_{n,j}}} {{{\bar{V}}_{j,1}}(K|\xi ,L|\xi )\,d{\bar{\mu } _j}(K,\xi )}. \end{aligned}$$
(2.12)

Then, \({{\bar{\Lambda }}_j}(K,K)=1\). If we set \(\Lambda _n(K)=V_n(K)\), then \({\bar{\Lambda }}_n(K,L) = \bar{V}_{n,1}(K,L).\)

The following lemmas provide some fundamental properties of \(\Lambda _j\) and \(\bar{\Lambda }_j(K,L)\).

Lemma 2.4

Suppose \(K\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then the following claims hold.

(1) \(\Lambda _j(gK)=\Lambda _j(K)\), for \(g\in \mathrm{SL}(n)\).

(2) \(\Lambda _j(\alpha K)=\alpha ^j \Lambda _j(K)\), for \(\alpha >0\).

(3) \(\Lambda _j(K+x)=\Lambda _j(K)\), for \(x\in \mathbb {R}^n\).

Proof

(1) was shown by Grinberg [7]. Also, it is an immediate consequence of Theorem 2.1. From the definition of \(\Lambda _j\) and the fact that \(V_j((\lambda K+x)|\xi )=\lambda ^j V_j(K|\xi )\), for \(\lambda >0\) and \(x\in \mathbb {R}^n\), (2) and (3) are obtained. \(\square \)

Lemma 2.5

Suppose \(K,L\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then the following claims hold.

(1) \( \bar{\Lambda }_j(gK,L)=\bar{\Lambda }_j(K,g^{-1}L)\), for \(g\in \mathrm{SL}(n)\).

(2) \(\bar{\Lambda }_j(\alpha _1K, \alpha _2 L)=\alpha _1^{-1}\alpha _2 \bar{\Lambda }_j(K,L)\), for \(\alpha _1, \alpha _2>0\).

(3) \( \bar{\Lambda }_j(K+x,L+y)=\bar{\Lambda }_j(K,L)\), for \(x,y\in \mathbb {R}^n\).

Proof

From (2.12) together with Theorem 2.3, Lemma 2.4 (1), and Theorem 2.3 together with (2.12) again, we have

$$\begin{aligned} {{\bar{\Lambda }}_j}(gK,L)&= \lim _{\varepsilon \rightarrow {0^ + }} \frac{{{\Lambda _j}(gK + \varepsilon L) - {\Lambda _j}(gK)}}{{j{\Lambda _j}(gK)\varepsilon }} \\&= \lim _{\varepsilon \rightarrow {0^ + }} \frac{{{\Lambda _j}(K + \varepsilon {g^{ - 1}}L) - {\Lambda _j}(K)}}{{j{\Lambda _j}(K)\varepsilon }} \\&= {{\bar{\Lambda }}_j}(K,{g^{ - 1}}L), \end{aligned}$$

as desired. Combining (2.12) with Theorem 2.3 and Lemma 2.4, (2) and (3) can be obtained similarly. \(\square \)

Lemma 2.6

Suppose \(K,L_1, L_2\in \mathcal {K}^n\) and \(1\le j\le n-1\). If \(L_1\subseteq L_2\), then

$$\begin{aligned} \bar{\Lambda }_j(K,L_1)\le \bar{\Lambda }_j(K,L_2). \end{aligned}$$

Proof

Let \(L_1\subseteq L_2\). By the monotonicity of mixed volumes with respect to set inclusion, \(V_{j,1}(K|\xi , L_1|\xi )\le V_{j,1}(K|\xi , L_2|\xi )\) for any \(\xi \in G_{n,j}\). From this fact together with the definition of \(\bar{\Lambda }_j(K,\cdot )\), the desired inequality is obtained. \(\square \)

From the definition of \(\bar{\Lambda }_j(K,\cdot )\) together with the fact that for \(\xi \in G_{n,j}\), we have

$$\begin{aligned} V_{j,1}(K|\xi , (L_1+L_2)|\xi )=V_{j,1}(K|\xi , L_1|\xi )+ V_{j,1}(K|\xi , L_2|\xi ), \end{aligned}$$

the following lemma is obtained.

Lemma 2.7

Suppose \(K,L_1, L_2\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then,

$$\begin{aligned} \bar{\Lambda }_j(K,L_1+L_2)= \bar{\Lambda }_j(K,L_1) + \bar{\Lambda }_j(K,L_2). \end{aligned}$$

3 Projection mean ellipsoids

In this section, a new family of ellipsoid operators \(\mathrm{P}_j\), \(j=1,\ldots ,n-1\), associated with the projection functions of convex bodies is introduced. It is remarkable that these ellipsoid operators are closely connected with the Lutwak conjecture. For \(K\in \mathcal {K}^n\), the ellipsoids \(\mathrm{P}_j K\) are well defined by solving an optimization problem.

Theorem 3.1

Suppose K is a convex body in \(\mathbb {R}^n\) and \(j=1,\ldots ,n-1\). Among all origin-symmetric ellipsoids E, there exists a unique ellipsoid \(\mathrm{P}_jK\) which solves the constrained maximization problem

$$\begin{aligned} \mathop {\max }\limits _E \; {V_n}(E)\quad {{\text {subject to}}} \quad \bar{\Lambda }_j(K,E) \le 1. \end{aligned}$$

Proof

Given an ellipsoid E, let \(d_E\) denote its maximal principal radius and \(u_E\) a corresponding principal direction. Write \([-d_Eu_E, d_Eu_E]\) for the line segment with endpoints \(\pm d_Eu_E\). Then, \([-d_E u_E, d_Eu_E]|\xi \subseteq E|\xi \).

By the compactness of the convex body K, there exist finite positive numbers r, R and a point \(x\in K\) such that \(rB+x\subseteq K\subseteq RB.\) From the monotonicity of mixed volumes with respect to set inclusion together with the fact \((rB+x)|\xi \subseteq K|\xi \), the homogeneity and translation invariance of mixed volumes, and (2.5), it follows that for any \(\xi \in G_{n,j}\),

$$\begin{aligned} {V_{j,1}}(K|\xi ,E|\xi )&\ge {V_{j,1}}((r{B^n} + x)|\xi ,[ - {d_E}{u_E},{d_E}{u_E}]|\xi ) \\&\ge {r^{j - 1}}{d_E}{V_{j,1}}({B^n}|\xi ,[-{u_E},{u_E}]|\xi ) \\&= \frac{{{r^{j - 1}}{d_E}}}{j}\int _{\xi \cap {\mathbb {S}^{n - 1}}} {|{u_E} \cdot v|d{\mathcal {H}^{j - 1}}(v)}. \end{aligned}$$

Thus, from (2.12) together with (2.7) and (2.8), the fact that \(r^j\omega _j\le V_j(K|\xi )\le R^j\omega _j\) for all \(\xi \in G_{n,j}\), Fubini’s theorem, and the fact that \(\int _{\mathbb {S}^{n-1}}|u_E\cdot v|d\mathcal {H}^{n-1}(v)=2V_{n-1}(B|u_E^\bot )\), we have

$$\begin{aligned} {{\bar{\Lambda }}_j}(K,E)&= \frac{{\int _{{G_{n,j}}} {{V_{j,1}}(K|\xi ,E|\xi ){V_j}{{(K|\xi )}^{ - (n + 1)}}d{\mu _j}(\xi )} }}{{\int _{G_{n,j}} {{V_j}{{(K|\xi )}^{ - n}}d{\mu _j}(\xi )} }} \\&\ge {\left( {\frac{r}{R}} \right) ^{j(n + 1)}}\frac{{{d_E}}}{r}\frac{1}{{j{\omega _j}}}\int _{{G_{n,j}}} {\int _{\xi \cap {\mathbb {S}^{n - 1}}} {|{u_E} \cdot v|d{\mathcal {H}^{j - 1}}(v)} d{\mu _j}(\xi )} \\&= {\left( {\frac{r}{R}} \right) ^{j(n + 1)}}\frac{{{d_E}}}{r}\frac{1}{{n{\omega _n}}}\int _{{\mathbb {S}^{n - 1}}} {|{u_E} \cdot v|d{\mathcal {H}^{n - 1}}(v)} \\&= {\left( {\frac{r}{R}} \right) ^{j(n + 1)}}\frac{{{d_E}}}{r}\frac{{2{\omega _{n - 1}}}}{{n{\omega _n}}}. \end{aligned}$$

Hence, any origin-symmetric ellipsoid E satisfying the constraint \(\bar{\Lambda }_j(K,E)\le 1\) must satisfy

$$\begin{aligned} {d_E} \le \frac{{n{\omega _n}}}{{2{\omega _{n - 1}}}}{\left( {\frac{R}{r}} \right) ^{j(n + 1)}}r < \infty . \end{aligned}$$
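The Fubini step in the chain above rests on the following integral-geometric identity: for every nonnegative measurable function f on \(\mathbb {S}^{n-1}\),

$$\begin{aligned} \int _{G_{n,j}}\int _{\xi \cap \mathbb {S}^{n-1}} f(v)\,d\mathcal {H}^{j-1}(v)\,d\mu _j(\xi ) = \frac{j\omega _j}{n\omega _n}\int _{\mathbb {S}^{n-1}} f(v)\,d\mathcal {H}^{n-1}(v). \end{aligned}$$

Indeed, the left-hand side defines a rotation-invariant measure on \(\mathbb {S}^{n-1}\), hence a constant multiple of \(\mathcal {H}^{n-1}\), and the constant \(\frac{j\omega _j}{n\omega _n}\) is found by taking \(f\equiv 1\). Applied with \(f(v)=|u_E\cdot v|\), it gives the passage from the second to the third line of the chain, and hence the bound on \(d_E\).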

Consequently, any maximizing sequence of ellipsoids \(\{E_i\}_{i\in \mathbb {N}}\) for the extremum problem is bounded. By the Blaschke selection theorem, there exists a subsequence \(\{E_{i_k}\}_{k\in \mathbb {N}}\) converging to an origin-symmetric ellipsoid \(E_0\). It remains to prove that \(E_0\) is not degenerate. Note that \(0<{{\bar{\Lambda }}_j}(K,B) <\infty \). Then, \({{\bar{\Lambda }}_j}\left( {K,\frac{B}{{{{\bar{\Lambda }}_j}(K,B)}}} \right) = 1\). This implies that the ball \({{\bar{\Lambda }}_j}(K,B)^{-1}B\) satisfies the constraint. Therefore,

$$\begin{aligned} 0 < {{\bar{\Lambda }}_j}{(K,B)^{-n}}{\omega _n} \le {V_n}({E_0}), \end{aligned}$$

which ensures \(\dim (E_0)=n\).

Now, we show the uniqueness. Assume two positive definite symmetric transformations \(g_1, g_2\in \mathrm{GL}(n)\) are such that the ellipsoids \(E_i=g_iB\), \(i=1,2\), solve the maximization problem. We aim to prove that \(g_1=g_2\). From the definition of the support function of an ellipsoid and the triangle inequality, we obtain that \(\frac{{{g_1} + {g_2}}}{2}B \subseteq \frac{{{g_1}B + {g_2}B}}{2}\). So, from Lemmas 2.6, 2.7 and the fact that \(\bar{\Lambda }_j(K,E_i)\le 1\) for \(i=1,2\), it follows that

$$\begin{aligned} \bar{\Lambda }_j\left( {K,\frac{{{g_1} + {g_2}}}{2}B} \right) \le 1. \end{aligned}$$

This means that the ellipsoid \({\frac{{{g_1} + {g_2}}}{2}B}\) also satisfies the constraint of the extremum problem. So, \({V_n}\left( {\frac{{{g_1} + {g_2}}}{2}B} \right) \le {V_n}({g_1}B) = {V_n}({g_2}B)\). Consequently,

$$\begin{aligned} \det {\left( {\frac{{{g_1} + {g_2}}}{2}} \right) ^{1/n}} \le \frac{{\det {{({g_1})}^{1/n}} + \det {{({g_2})}^{1/n}}}}{2}. \end{aligned}$$

On the other hand, the Minkowski inequality for positive definite matrices asserts that the reverse of the above inequality always holds. Thus, equality has to occur in the above inequality. By the equality condition of the Minkowski inequality, \(g_1=\lambda g_2\) for some \(\lambda >0\). Since \(\det (g_1)=\det (g_2)\), it follows that \(g_1=g_2\). \(\square \)

Therefore, for \(K\in \mathcal {K}^n\), Theorem 3.1 produces a family of ellipsoids \(\mathrm{P}_jK\), \(j=1,\ldots ,n-1\). We call \(\mathrm{P}_jK\) the jth projection mean ellipsoid of K.

Recall that for a convex body \(K\in \mathcal {K}^n\), the John ellipsoid \(\mathrm{J}K\) is the unique ellipsoid of maximal volume contained in K. For each \(\xi \in G_{n,j}\), \(1\le j\le n-1\), we have \(\mathrm{J}K |\xi \subseteq K|\xi \), so Lemmas 2.6 and 2.5 (3) give \(\bar{\Lambda }_j(K,\mathrm{J}K-c)\le \bar{\Lambda }_j(K,K)=1\), where c is the center of \(\mathrm{J}K\). Thus the translated John ellipsoid satisfies the constraint in Theorem 3.1, and therefore \(V_n(\mathrm{P}_j K)\ge V_n(\mathrm{J}K).\)

In addition, \(\bar{\Lambda }_n(K,L)\) is just the normalized mixed volume \(\bar{V}_{n,1}(K,L)\). When \(j=n\), it is interesting that the nth projection mean ellipsoid \(\mathrm{P}_nK\) is precisely the classical Petty ellipsoid \(\mathrm{P} K\). The volume-normalized Petty ellipsoid [21] is obtained by minimizing the surface area of K over \(\mathrm{SL}(n)\) transformations of K. See also Giannopoulos [6].

From Theorem 3.1 and Lemma 2.5, we obtain the following result.

Lemma 3.2

Suppose \(K\in \mathcal {K}^n\) and \(1\le j\le n-1\). Then for any \(g\in \mathrm{GL}(n)\) and \(x\in \mathbb {R}^n\),

$$\begin{aligned}\mathrm{P}_j(gK+x)=g \mathrm{P}_j K.\end{aligned}$$
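Here is a brief sketch of how Lemma 3.2 follows from Theorem 3.1 and Lemma 2.5. For \(g\in \mathrm{SL}(n)\) and \(x\in \mathbb {R}^n\), Lemma 2.5 (1) and (3) give

$$\begin{aligned} \bar{\Lambda }_j(gK+x,E)=\bar{\Lambda }_j(K,g^{-1}E), \end{aligned}$$

so an origin-symmetric ellipsoid E satisfies the constraint for \(gK+x\) if and only if \(g^{-1}E\) satisfies it for K; since \(V_n(E)=V_n(g^{-1}E)\), the unique maximizers correspond under \(E\mapsto g^{-1}E\), that is, \(\mathrm{P}_j(gK+x)=g\mathrm{P}_jK\). Similarly, Lemma 2.5 (2) gives \(\bar{\Lambda }_j(\alpha K,E)=\bar{\Lambda }_j(K,\alpha ^{-1}E)\) for \(\alpha >0\), whence \(\mathrm{P}_j(\alpha K)=\alpha \mathrm{P}_jK\); writing a general \(g\in \mathrm{GL}(n)\) as a dilation composed with a volume-preserving linear map combines the two cases.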

4 A new affine isoperimetric inequality for the integral affine surface area

For a convex body K in \(\mathbb {R}^n\), Lutwak [11] conjectured that

$$\begin{aligned} \Lambda _j(K)^n\ge n^n \omega _n^{n-j}V_n(K)^j,\quad j=2,\ldots ,n-2, \end{aligned}$$

with equality if and only if K is an ellipsoid. In this section, we present a variant of the Lutwak conjecture.

Theorem 4.1

Suppose K is a convex body in \(\mathbb {R}^n\). Then,

$$\begin{aligned} {\Lambda _j}{(K)^n} \ge {n^n}\omega _n^{n - j}{V_n}{({\mathrm{{P}}_j}K)^j}, \quad j=1,\ldots ,n-1. \end{aligned}$$

If \( j=2,3,\ldots , n-1\), or \(j=1\) and K is centrally symmetric, the equality holds if and only if K is an ellipsoid. If \(j=1\), the equality holds if and only if K has an \(\mathrm{SL}(n)\) image with constant width.

To prove this theorem, we need several lemmas.

Lemma 4.2

Suppose \(K,L\in \mathcal {K}^n\). Then,

$$\begin{aligned} \bar{\Lambda }_j(K,L) \ge {\left( {\frac{{{\Lambda _j}(L)}}{{{\Lambda _j}(K)}}} \right) ^{1/j}}, \quad j=1,\ldots ,n-1. \end{aligned}$$
(4.1)

If \(j=2,3,\ldots , n-1\), the equality holds if and only if K and L are homothetic. If \(j=1\), the equality holds if and only if \(w_K=\lambda w_L\) for some \(\lambda >0\), where \(w_K(u)=h_K(u)+h_K(-u)\) denotes the width function of K. If \(j=1\) and K, L are centrally symmetric, the equality holds if and only if K and L are homothetic.

Proof

By Minkowski's first inequality, for each \(\xi \in G_{n,j}\), there holds

$$\begin{aligned} \bar{V}_{j,1}(K|\xi ,L|\xi ) \ge {\left( {\frac{{{V_j}(L|\xi )}}{{{V_j}(K|\xi )}}} \right) ^{{1}/{j}}}, \end{aligned}$$

with equality if and only if \(K|\xi \) and \(L|\xi \) are homothetic. If \(j=1\), the equality always holds.

From (2.12), Minkowski’s first inequality, the definition of \(\mu _j(K,\cdot )\), Hölder’s inequality, and finally the definition of \(\Lambda _j\), it follows that

$$\begin{aligned} \bar{\Lambda }_j(K,L)&= \int _{{G_{n,j}}} \bar{V}_{j,1}(K|\xi , L|\xi )\, d{\bar{\mu }_j}(K,\xi ) \\&\ge \int _{{G_{n,j}}} {{{\left( {\frac{{{V_j}(L|\xi )}}{{{V_j}(K|\xi )}}} \right) }^{\frac{1}{j}}}\,d{\bar{\mu }_j}(K,\xi )} \\&= \frac{{\int _{{G_{n,j}}} {{V_j}{{(L|\xi )}^{( - n) \cdot \frac{{ - 1}}{{jn}}}}{V_j}{{(K|\xi )}^{( - n) \cdot \frac{{jn + 1}}{{jn}}}}\,d{\mu _j}(\xi )} }}{{\int _{{G_{n,j}}} {{V_j}{{(K|\xi )}^{ - n}}\,d{\mu _j}(\xi )} }} \\&\ge \frac{{{{\left( {\int _{{G_{n,j}}} {{V_j}{{(L|\xi )}^{ - n}}\,d{\mu _j}(\xi )} } \right) }^{\frac{{ - 1}}{{jn}}}}{{\left( {\int _{{G_{n,j}}} {{V_j}{{(K|\xi )}^{ - n}}\,d{\mu _j}(\xi )} } \right) }^{\frac{{jn + 1}}{{jn}}}}}}{{\int _{{G_{n,j}}} {{V_j}{{(K|\xi )}^{ - n}}\,d{\mu _j}(\xi )} }} \\&= {\left( {\frac{{{\Lambda _j}(L)}}{{{\Lambda _j}(K)}}} \right) ^{\frac{1}{j}}}, \end{aligned}$$

which establishes inequality (4.1).
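The fourth line above is an application of Hölder's inequality; since this form is used without comment, we record a sketch of it. For nonnegative integrable functions F, G on \(G_{n,j}\) and \(\alpha >0\), Hölder's inequality with exponents \(1+\alpha \) and \(\frac{1+\alpha }{\alpha }\), applied to \(G=(G^{1+\alpha }F^{-\alpha })^{\frac{1}{1+\alpha }}F^{\frac{\alpha }{1+\alpha }}\), yields

$$\begin{aligned} \int _{G_{n,j}} G^{1+\alpha }F^{-\alpha }\,d\mu _j \ge {\left( \int _{G_{n,j}} G\,d\mu _j\right) }^{1+\alpha }{\left( \int _{G_{n,j}} F\,d\mu _j\right) }^{-\alpha }. \end{aligned}$$

Taking \(F=V_j(L|\cdot )^{-n}\), \(G=V_j(K|\cdot )^{-n}\) and \(\alpha =\frac{1}{jn}\) gives exactly the fourth line, with equality if and only if \(V_j(L|\cdot )/V_j(K|\cdot )\) is constant \(\mu _j\)-almost everywhere.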

Assume the equality holds in (4.1). Then the equalities in the second line and the fourth line both hold. If \(2\le j\le n-1\), by the equality condition of Minkowski's inequality, \(K|\xi \) and \(L|\xi \) are homothetic for all \(\xi \in G_{n,j}\), and therefore K and L are homothetic (see, e.g., Theorem 3.1.3 in [5]). If \(j=1\), the equality condition of the Hölder inequality implies that \(w_K=\lambda w_L\) for some constant \(\lambda >0\). If in addition K and L are centrally symmetric, then they are homothetic.

Conversely, if K and L are homothetic, then by Lemma 2.4 (2), (3) and Lemma 2.5 (2), (3), equality holds in (4.1). \(\square \)

From Lemma 2.5 together with the definition of \(\mathrm{P}_jK\), we obtain the following result.

Lemma 4.3

Suppose \(K\in \mathcal {K}^n\). Then,

$$\begin{aligned} \bar{\Lambda }_j(K,\mathrm{P}_jK)=1, \quad j=1,\ldots ,n-1. \end{aligned}$$

Lemma 4.4

Suppose E is an ellipsoid in \(\mathbb {R}^n\). Then,

$$\begin{aligned} {\Lambda _j}(E) = n\omega _n^{(n-j)/n}V_n(E)^{j/n}, \quad j=1,2, \ldots ,n-1. \end{aligned}$$

Proof

From the jth positive homogeneity of \(\Lambda _j\), the \(\mathrm{SL}(n)\) invariance of \(\Lambda _j(K)\), and the fact \(\Lambda _j(B)=n\omega _n\), it follows that

$$\begin{aligned} {\Lambda _j}(E)&= {\Lambda _j}\left( {{V_n}{{(E)}^{1/n}}\frac{E}{{{V_n}{{(E)}^{1/n}}}}} \right) \\&= {V_n}{(E)^{j/n}}{\Lambda _j}\left( {\frac{E}{{{V_n}{{(E)}^{1/n}}}}} \right) \\&= {V_n}{(E)^{j/n}}{\Lambda _j}\left( {\frac{B}{{\omega _n^{1/n}}}} \right) \\&= {V_n}{(E)^{j/n}}\omega _n^{ - j/n}{\Lambda _j}\left( B \right) \\&= n\omega _n^{(n - j)/n}{V_n}{(E)^{j/n}}, \end{aligned}$$

as desired. \(\square \)

Lemma 4.5

Suppose E is an ellipsoid with center \(c_E\). Then,

$$\begin{aligned} \mathrm{P}_jE=E-c_E, \quad j=1,2, \ldots ,n-1,n. \end{aligned}$$

Proof

By Lemma 3.2, it suffices to prove \(\mathrm{P}_j B=B.\) From Lemmas 4.3, 4.2 and 4.4, it follows that

$$\begin{aligned} 1 = \bar{\Lambda }_j(B,{\mathrm{{P}}_j}B) \ge {\left( {\frac{{{\Lambda _j}({\mathrm{{P}}_j}B)}}{{{\Lambda _j}(B)}}} \right) ^{1/j}} = {\left( {\frac{{{V_n}({\mathrm{{P}}_j}B)}}{{{V_n}(B)}}} \right) ^{1/n}}. \end{aligned}$$

So, \({V_n}({\mathrm{{P}}_j}B) \ge {V_n}(B).\)

On the other hand, since \(\bar{\Lambda }_j(B,B)=1\), i.e., the unit ball B satisfies the constraint of the extremum problem in Theorem 3.1 for (B, j), it follows that \(V_n (\mathrm{P}_j B) \le V_n(B). \)

Thus, \( V_n (\mathrm{P}_j B)=V_n(B)\). By the uniqueness of the maximizer in Theorem 3.1, \(\mathrm{P}_j B=B\). \(\square \)

Lemma 4.6

Suppose \(K\in \mathcal {K}^n\). Then,

$$\begin{aligned} \Lambda _j(K)\ge \Lambda _j(\mathrm{P}_j K), \quad j=1,2, \ldots ,n-1. \end{aligned}$$
(4.2)

If \( j=2,3, \ldots ,n-1\), or \(j=1\) and K is centrally symmetric, the equality holds if and only if K is an ellipsoid. If \(j=1\), the equality holds if and only if K has an \(\mathrm{SL}(n)\) image with constant width.

Proof

From Lemmas 4.2 and 4.3, it follows that for \(j=1,2, \ldots ,n-1\),

$$\begin{aligned} 1 = \bar{\Lambda }_j(K,{\mathrm{{P}}_j}K) \ge {\left( {\frac{{{\Lambda _j}({\mathrm{{P}}_j}K)}}{{{\Lambda _j}(K)}}} \right) ^{\frac{1}{j}}}. \end{aligned}$$

That is, \(\Lambda _j(K)\ge \Lambda _j(\mathrm{P}_j K)\).

Assume the equality holds. If \(j=2,3, \ldots ,n-1\), then by Lemma 4.2, the bodies \(\mathrm{P}_jK\) and K are homothetic. Therefore, K is an ellipsoid. Let \(j=1\). By Lemma 4.2, \(w_{K}=\alpha w_{\mathrm{P}_1K}\) for some \(\alpha >0\). Since \(\mathrm{P}_1K\) is an ellipsoid, there exists an \(\mathrm{SL}(n)\) transformation g such that \(g\mathrm{P}_1 K\) is an origin-symmetric ball; by Lemma 3.2, \(\mathrm{P}_1(gK)=g\mathrm{P}_1K\), and thus \(w_{gK}=\alpha w_{\mathrm{P}_1(gK)}\) is constant. That is, the body gK is of constant width. Moreover, if in addition K is centrally symmetric, then gK is a ball, and therefore, K is an ellipsoid.

Assume that K is an ellipsoid. By Lemma 4.5, \(\mathrm{P}_jK=K-c_K\), where \(c_K\) is the center of K. By Lemma 2.4 (3), \(\Lambda _j(\mathrm{P}_jK)=\Lambda _j(K)\). \(\square \)

We are now in a position to finish the proof of Theorem 4.1.

Proof of Theorem 4.1

From Lemma 4.4, it follows that

$$\begin{aligned} {\Lambda _j}({\mathrm{{P}}_j}K) = n\omega _n^{(n - j)/n}{V_n}{({\mathrm{{P}}_j}K)^{j/n}}. \end{aligned}$$

Combining this fact with Lemma 4.6, it follows that

$$\begin{aligned} \Lambda _j(K)\ge {\Lambda _j}({\mathrm{{P}}_j}K) = n\omega _n^{(n - j)/n}{V_n}{({\mathrm{{P}}_j}K)^{j/n}}, \end{aligned}$$

as desired. The equality conditions follow immediately from Lemma 4.6. \(\square \)

5 A sharp affine isoperimetric inequality for 1st projection mean ellipsoid

Lemma 5.1

Suppose K and L are origin-symmetric convex bodies in \(\mathbb {R}^n\) with the origin in their interior. Then,

$$\begin{aligned} \bar{\Lambda }_1(K,L) = \frac{{\int _{{S^{n - 1}}} {{\rho _{{L^*}}}{{(u)}^{ - 1}}{\rho _{{K^*}}}{{(u)}^{n + 1}}\,d{\mathcal {H}^{n - 1}}(u)} }}{{n{V_n}({K^*})}}. \end{aligned}$$
(5.1)

Proof

Since

$$\begin{aligned}&\int _{{G_{n,1}}} {{V_{1,1}}(K|\xi ,L|\xi ){V_1}{{(K|\xi )}^{ - (n + 1)}}\,d{\mu _1}(\xi )}\\&\quad = {(n{\omega _n})^{ - 1}}\int _{{\mathbb {S}^{n - 1}}} {{w_L}(u){w_K}{{(u)}^{ - (n + 1)}}\,d{\mathcal {H}^{n - 1}}(u)} \\&\quad = 2^{-n}{(n{\omega _n})^{ - 1}}\int _{{\mathbb {S}^{n - 1}}} {{h_L}(u){h_K}{{(u)}^{ - (n + 1)}}\,d{\mathcal {H}^{n - 1}}(u)} \\&\quad = 2^{-n}{(n{\omega _n})^{ - 1}}\int _{{\mathbb {S}^{n - 1}}} {{\rho _{{L^*}}}{{(u)}^{ - 1}}{\rho _{{K^*}}}{{(u)}^{n + 1}}\,d{\mathcal {H}^{n - 1}}(u)}, \end{aligned}$$

and

$$\begin{aligned} \int _{{G_{n,1}}} {{V_1}{{(K|\xi )}^{ - n}}d{\mu _1}(\xi )} = 2^{-n}{(n{\omega _n})^{ - 1}}\int _{{\mathbb {S}^{n - 1}}} {{\rho _{{K^*}}}{{(u)}^n}d{\mathcal {H}^{n - 1}}(u)} . \end{aligned}$$

Combining these two identities with the definition (2.12) of \(\bar{\Lambda }_1(K,L)\), we obtain (5.1). \(\square \)
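Two standard identifications were used in the displays above; we record them explicitly, as they reappear in Sect. 6. For \(\xi \in G_{n,1}\) spanned by \(u\in \mathbb {S}^{n-1}\),

$$\begin{aligned} V_1(K|\xi )=w_K(u),\qquad V_{1,1}(K|\xi ,L|\xi )=V_1(L|\xi )=w_L(u), \end{aligned}$$

and \(\int _{G_{n,1}}f\,d\mu _1=\frac{1}{n\omega _n}\int _{\mathbb {S}^{n-1}}f(\mathrm{span}(u))\,d\mathcal {H}^{n-1}(u)\) for integrable f on \(G_{n,1}\); the origin symmetry of K and L then gives \(w_K=2h_K\) and \(w_L=2h_L\), which produces the factors \(2^{-n}\).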

Theorem 5.2

Suppose K is an origin-symmetric convex body in \(\mathbb {R}^n\). Then,

$$\begin{aligned} {V_n}({K^*}){V_n}({\mathrm{{P}}_1}K) \le \omega _n^2, \end{aligned}$$

with equality if and only if K is an ellipsoid.

Proof

From the Hölder inequality and the polar coordinate formula for volume, it follows that

$$\begin{aligned} {n^{ - 1}}\int _{{S^{n - 1}}} {{\rho _{{L^*}}}{{(u)}^{ - 1}}{\rho _{{K^*}}}{{(u)}^{n + 1}}d{\mathcal {H}^{n - 1}}(u)} \ge {V_n}{({K^*})^{(n + 1)/n}}{V_n}{({L^*})^{ - 1/n}}, \end{aligned}$$

with equality if and only if \(K^*\) and \(L^*\) are dilates. From Lemma 5.1, it follows that

$$\begin{aligned} \bar{\Lambda }_1(K,L) \ge {\left( {\frac{{{V_n}({K^*})}}{{{V_n}({L^*})}}} \right) ^{\frac{1}{n}}}, \end{aligned}$$

with equality if and only if K and L are dilates.

Let \(L=\mathrm{P}_1 K\). Using Lemma 4.3, we obtain

$$\begin{aligned} 1 \ge \frac{{{V_n}({K^*})}}{{{V_n}({{({\mathrm{{P}}_1}K)}^*})}}, \end{aligned}$$

with equality if and only if K is an ellipsoid. Since \(\mathrm{P}_1K\) is an origin-symmetric ellipsoid, the equality case of the Blaschke–Santaló inequality gives

$$\begin{aligned} \frac{{{\omega _n}^2}}{{{V_n}({{({\mathrm{{P}}_1}K)}^*})}} = {V_n}({\mathrm{{P}}_1}K). \end{aligned}$$

Thus,

$$\begin{aligned} {\omega _n}^2 \ge \frac{{{\omega _n}^2{V_n}({K^*})}}{{{V_n}({{({\mathrm{{P}}_1}K)}^*})}} = {V_n}({K^*}){V_n}({\mathrm{{P}}_1}K), \end{aligned}$$

as desired. \(\square \)

6 Which one is bigger, \(V_{n}(K)\) or \(V_{n}(\mathrm{P}_j K)\)?

In light of the longstanding Lutwak conjecture, a natural question is posed as follows: For a convex body K in \(\mathbb {R}^n\), which geometric quantity is bigger, \(V_n(K)\) or \(V_n(\mathrm{P}_j K)\)?

Recall that when \(j=n\), \(\mathrm{P}_nK\) is just the classical Petty ellipsoid of K, and it is known that \(V_n(K)\ge V_n(\mathrm{P}_n K)\). As a result, one may be tempted to conjecture that \(V_n(K)\ge V_n(\mathrm{P}_j K)\) for \(j=1,2, \ldots ,n-1\).

In this section, we provide an example showing that this is not always true. So, the projection mean ellipsoid not only carries strong geometric intuition, but is also of great value in attacking the Lutwak conjecture for the integral affine surface areas.

Lemma 6.1

Suppose \(B_p\) is the unit ball of the \(l_p\) norm in \(\mathbb {R}^n\), \(1\le p\le \infty \). Then the projection mean ellipsoid \(\mathrm{P}_j B_p\), \(j=1,2, \ldots ,n\), is an origin-symmetric Euclidean ball.

Proof

We argue by contradiction. Assume \(\mathrm{P}_j B_p\) is not a Euclidean ball. We prove that there exists an orthogonal transformation g such that

$$\begin{aligned} gB_p=B_p\quad \mathrm{but}\quad g \mathrm{P}_j B_p\ne \mathrm{P}_j B_p. \end{aligned}$$

However, this is impossible, since by Lemma 3.2 and \(gB_p=B_p\), it necessarily follows that \(g \mathrm{P}_j B_p= \mathrm{P}_j B_p\).

By the above assumption, among the principal radii of the ellipsoid \(\mathrm{P}_j B_p\) there exists a principal radius, say \(\lambda _{0}\), which differs from the others. Suppose \(\pm u_{0}\) are the principal directions corresponding to \(\lambda _{0}\), say \(u_{0}=(u^{0}_1, \ldots , u^{0}_n)\).

We first handle the case where \(u^0_{i_0}=1\) or \(-1\), for some index \(i_0\). W.l.o.g., assume that \(i_0=1\). Then, \(u^0_i=0\) for \(i\ne 1\). Take the orthogonal transformation \(g: \mathbb {R}^n\rightarrow \mathbb {R}^n\),

$$\begin{aligned} ({x_1},{x_2},{x_3}, \ldots ,{x_n}) \mapsto ({x_2},{x_1},{x_3}, \ldots ,{x_n}). \end{aligned}$$

Clearly, \(gB_p=B_p\). Observe that the principal radii of \(g\mathrm{P}_j B_p\) are identical to those of \(\mathrm{P}_j B_p\), and \(\pm gu_0\) are the unit principal directions corresponding to principal radius \(\lambda _0\) of \(g\mathrm{P}_j B_p\). The choice of g implies that \(\{\pm gu_0\}\ne \{\pm u_0\}\). Moreover, \(g \mathrm{P}_j B_p\ne \mathrm{P}_j B_p\), since if \(g \mathrm{P}_j B_p= \mathrm{P}_j B_p\), then it yields that \(\{\pm gu_0\}=\{\pm u_0\}\).

To complete the proof, it remains to consider the case where the vector \(u_0\) has at least two nonzero components, say \(u^0_{i_1}\) and \(u^0_{i_2}\). W.l.o.g., assume that \(i_1=1\) and \(i_2=2\). Take the orthogonal transformation \(g: \mathbb {R}^n\rightarrow \mathbb {R}^n\),

$$\begin{aligned} ({x_1},{x_2},{x_3}, \ldots ,{x_n}) \mapsto ( - {x_2},{x_1},{x_3}, \ldots ,{x_n}). \end{aligned}$$

Clearly, \(gB_p=B_p\). An argument similar to the above yields that \(g \mathrm{P}_j B_p\ne \mathrm{P}_j B_p\). \(\square \)

For a convex body K with centroid at the origin, the isotropic constant \(L_K\) of K is given by

$$\begin{aligned} {L_K}^2 = \frac{1}{n}\min \left\{ {\frac{1}{{V{{(gK)}^{1 + \frac{2}{n}}}}}\int \limits _{gK} {|x{|^2}\,dx} :g \in \mathrm{{GL}}(n)} \right\} . \end{aligned}$$

In particular, modulo orthogonal transformations, there is a unique \(\mathrm{SL}(n)\) transformation g such that

$$\begin{aligned} \frac{1}{{nV{{(gK)}^{1 + \frac{2}{n}}}}}\int \limits _{gK} {|x{|^2}dx} = {L_K}^2, \end{aligned}$$

i.e.,

$$\begin{aligned} \frac{1}{{nV(gK)}}\int \limits _{{\mathbb {S}^{n - 1}}} {\rho _{gK}^{n + 2}(u)\,d{\mathcal {H}^{n - 1}}}(u) = (n + 2)V{(gK)^{\frac{2}{n}}}{L_K}^2. \end{aligned}$$
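The equivalence of the last two displays is simply integration in polar coordinates: since the centroid of gK is at the origin,

$$\begin{aligned} \int \limits _{gK} |x|^2\,dx = \int _{\mathbb {S}^{n-1}}\int _0^{\rho _{gK}(u)} r^{2}\,r^{n-1}\,dr\,d\mathcal {H}^{n-1}(u) = \frac{1}{n+2}\int _{\mathbb {S}^{n-1}}\rho _{gK}(u)^{n+2}\,d\mathcal {H}^{n-1}(u). \end{aligned}$$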

If, in addition, the above g can be taken to be orthogonal, then K is said to be isotropic. One of the main remaining open problems in the asymptotic theory of convex bodies is the hyperplane conjecture, which equivalently asks whether there exists an absolute upper bound for the isotropic constant. For more information, see, e.g., the classical paper by Milman and Pajor [19].

Recall the known fact that \(B_p\), \(1\le p\le \infty \), is isotropic. Meanwhile, note that \(B_p^*=B_{p^*}\), where \(p^*\) denotes the conjugate exponent of p. Thus, we have

$$\begin{aligned} \frac{1}{{nV(B_p^*)}}\int \limits _{{\mathbb {S}^{n - 1}}} {\rho _{B_p^*}^{n + 2}(u)\,d{\mathcal {H}^{n - 1}}}(u) = (n + 2)V{(B_p^*)^{\frac{2}{n}}}{L_{B_p^*}}^2. \end{aligned}$$
(6.1)

Theorem 6.2

Let \(1\le p\le \infty \). Then,

$$\begin{aligned} {\left( {\frac{{{V_n}({\mathrm{{P}}_1}{B_p})}}{{{V_n}({B_p})}}} \right) ^{1/n}} \ge \frac{{\omega _n^{ - 1/n}}}{{\sqrt{n + 2} }}L_{B_p^*}^{ - 1}, \end{aligned}$$
(6.2)

with equality if and only if \(p=2\).

Proof

By Lemma 6.1, \(\mathrm{P}_1 B_p\) is an origin-symmetric Euclidean ball. Let \(r_p\) be its radius. From Lemmas 4.3 and 5.1, it follows that

$$\begin{aligned} \frac{{{r_p}}}{n{{V_n}(B_p^*)}}\int _{{\mathbb {S}^{n - 1}}} {{\rho _{B_p^*}}{{(u)}^{n + 1}}\,d{\mathcal {H}^{n - 1}}(u)} = 1. \end{aligned}$$
(6.3)

Then,

$$\begin{aligned} \frac{{{V_n}({\mathrm{{P}}_1}{B_p})}}{{{V_n}({B_p})}} = \frac{{{n^n}\omega _n{V_n}{{(B_p^*)}^n}}}{{{V_n}({B_p}){{\left( {\int _{{S^{n - 1}}} {{\rho _{B_p^*}}{{(u)}^{n + 1}}\,d{\mathcal {H}^{n - 1}}(u)} } \right) }^n}}}. \end{aligned}$$

Meanwhile, from the Jensen inequality and (6.1), it follows that

$$\begin{aligned} \frac{1}{{n{V_n}(B_p^*)}}\int _{{S^{n - 1}}} {{\rho _{B_p^*}}{{(u)}^{n + 1}}d{\mathcal {H}^{n - 1}}(u)}&\le {\left( {\frac{1}{{n{V_n}(B_p^*)}}\int _{{S^{n - 1}}} {{\rho _{B_p^*}}{{(u)}^{n + 2}}d{\mathcal {H}^{n - 1}}(u)} } \right) ^{1/2}} \\&= {(n + 2)^{1/2}}{V_n}{(B_p^*)^{1/n}}{L_{B_p^*}}, \end{aligned}$$

with equality in the first line if and only if \(p=2\). Thus,

$$\begin{aligned} \frac{{{V_n}({\mathrm{{P}}_1}{B_p})}}{{{V_n}({B_p})}} \ge \frac{{{\omega _n}}}{{{V_n}({B_p}){V_n}(B_p^*){{(n + 2)}^{n/2}}{L_{B_p^*}}^n}}, \end{aligned}$$

with equality if and only if \(p=2\). Finally, with the Blaschke–Santaló inequality, the desired inequality is obtained. \(\square \)

An important fact, which goes back to Milman and Pajor [19] (see also LYZ [14]), states that for a convex body K with centroid at the origin,

$$\begin{aligned} {L_K} \ge \frac{{\omega _n^{ - 1/n}}}{{\sqrt{n + 2} }}, \end{aligned}$$
(6.4)

with equality if and only if K is an origin-symmetric ellipsoid.

Corollary 6.3

Let \(1\le p\le \infty \). Then,

$$\begin{aligned} \max _{1 \le p \le \infty } \frac{{{V_n}({\mathrm{{P}}_1}{B_p})}}{{{V_n}({B_p})}} \ge 1. \end{aligned}$$

Proof

Since \(B^*_p\) depends continuously on \(p\in [1,\infty ]\), the radius \(r_p\) is also continuous in \(p\in [1,\infty ]\) by Eq. (6.3). From (6.2) and (6.4), it follows that

$$\begin{aligned} \max _{1 \le p \le \infty } \frac{{{V_n}({\mathrm{{P}}_1}{B_p})}}{{{V_n}({B_p})}} \ge \frac{1}{{\min _{1 \le p \le \infty } {\omega _n}{{(n + 2)}^{n/2}}{L_{B_p^*}}^n}}\ge 1, \end{aligned}$$

where the last inequality holds because, by the equality case of (6.4), the minimum in the middle term is attained at \(p=2\) and equals 1. \(\square \)

Specifically, let \(n=2\) and \(p=\infty \). We show that \(V_2(\mathrm{P}_1 B_\infty )> V_2(B_\infty ).\)

To this end, we use polar coordinates \((\rho ,\theta )\), \(0\le \rho <\infty \), \(0\le \theta \le 2\pi \). Since \(\rho _{B^*_\infty }(\theta )=(|\cos \theta |+|\sin \theta |)^{-1},\) we obtain

$$\begin{aligned} \frac{{{V_2}({\mathrm{{P}}_1}{B_\infty })}}{{{V_2}({B_\infty })}}&= \frac{{4\pi {V_2}{{(B_\infty ^*)}^2}}}{{{V_2}({B_\infty })}}{\left( {\int \limits _0^{2\pi } {{\rho _{B_\infty ^*}}{{(\theta )}^3}d\theta } } \right) ^{ - 2}} \\&= \frac{\pi }{4}{\left( {\int \limits _0^{\pi /2} {{{\left( {\cos \theta + \sin \theta } \right) }^{ - 3}}d\theta } } \right) ^{ - 2}} \\&= 2\pi {\left( {2\int \limits _{\pi /4}^{\pi /2} {{{\sin }^{ - 3}}\theta d\theta } } \right) ^{ - 2}} \\&= \frac{{2\pi }}{{{{\left( {\sqrt{2} + \log (\sqrt{2} + 1)} \right) }^2}}} = 1.1923 \cdots > 1. \end{aligned}$$
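For completeness, the elementary integral in the last step can be evaluated with the standard antiderivative \(\int \csc ^3\theta \,d\theta = -\frac{1}{2}\csc \theta \cot \theta +\frac{1}{2}\log \left| \tan \frac{\theta }{2}\right| +C\):

$$\begin{aligned} \int \limits _{\pi /4}^{\pi /2}\frac{d\theta }{\sin ^{3}\theta } = \left[ -\frac{\cos \theta }{2\sin ^2\theta }+\frac{1}{2}\log \tan \frac{\theta }{2}\right] _{\pi /4}^{\pi /2} = \frac{\sqrt{2}}{2}+\frac{1}{2}\log (\sqrt{2}+1), \end{aligned}$$

so that \(2\int _{\pi /4}^{\pi /2}\sin ^{-3}\theta \,d\theta =\sqrt{2}+\log (\sqrt{2}+1)\approx 2.2956\) and \(2\pi /\left( \sqrt{2}+\log (\sqrt{2}+1)\right) ^2\approx 1.1923\), as stated.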

Recall that \(\mathrm{P}_1 B_p=r_pB\) is continuous in \(p\in [1,\infty ]\). So, there exists \(p_0\in (2,\infty )\) such that \(V_2(\mathrm{P}_1 B_p)> V_2(B_p)\) for \(p_0<p\le \infty .\)