1 Introduction

The focus of this paper is distributional inequalities for the volume of random convex sets. Typical models involve convex hulls or Minkowski sums of line segments generated by independent random points in \(\mathbb{R }^n\). Specifically, let \(\mu \) be a probability measure on \(\mathbb{R }^n\). Sample \(N\ge n\) independent points \(X_1,\ldots ,X_N\) according to \(\mu \). Let \(K_N\) be the absolute convex hull of the \(X_i\)’s, i.e.,

$$\begin{aligned} {K}_{N}:={\mathrm{conv}}\big \{\pm {X}_{1},\ldots , \pm {X}_{N}\big \} \end{aligned}$$
(1.1)

and let \(Z_N\) be the zonotope, i.e., the Minkowski sum of the line segments \([-X_i,X_i]\),

$$\begin{aligned} {Z}_{N}:=\sum ^{N}_{i=1}[-{X}_{i},{X}_{i}] = \Bigl \{\sum _{i=1}^{N} \lambda _{i} {X}_{i}: \lambda _{i}\in [-1,1], \; i=1,\ldots ,N \Bigr \}. \end{aligned}$$
(1.2)
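Though not needed for any argument in this paper, both definitions are easy to experiment with numerically in the plane. The following Python sketch (all helper names are ours, and the Gaussian sample is only for illustration) computes \(\mathrm{vol}_2(K_N)\) by a standard convex-hull routine and \(\mathrm{vol}_2(Z_N)\) via the classical decomposition of a zonotope into parallelepipeds, \(\mathrm{vol}_n\big (\sum _i [0,v_i]\big )=\sum _{|S|=n}|\det (v_S)|\), which for \(Z_N\) gives \(2^n\sum _{|S|=n}|\det (X_S)|\).

```python
import itertools, random

def hull_area(pts):
    # area of the convex hull of planar points (monotone chain + shoelace)
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(points):
        out = []
        for p in points:
            while len(out) >= 2 and \
                  ((out[-1][0] - out[-2][0]) * (p[1] - out[-2][1]) -
                   (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    hull = chain(pts) + chain(pts[::-1])
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1] -
                         hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

def vol_KN(X):
    # vol_2 of conv{+-X_1, ..., +-X_N}, as in (1.1)
    return hull_area([p for x in X for p in (x, (-x[0], -x[1]))])

def vol_ZN(X):
    # vol_2 of the zonotope (1.2): 2^n * sum over n-subsets of |det|
    return 4.0 * sum(abs(a[0] * b[1] - a[1] * b[0])
                     for a, b in itertools.combinations(X, 2))

random.seed(0)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10)]
print(vol_KN(X), vol_ZN(X))
```

Since \(\pm X_i\in Z_N\) and \(Z_N\) is convex, one always has \(K_N\subset Z_N\), so the first volume never exceeds the second.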

The literature contains a wealth of results aimed at quantifying the size of \({K}_{N}\) and its non-symmetric analogue \({\mathrm{conv}}\big \{{X}_{1},\ldots , {X}_{N}\big \}\) in terms of metric quantities such as volume, surface area and mean-width; especially in the asymptotic setting where the dimension \(n\) is fixed and \(N\rightarrow \infty \). The measure \(\mu \) strongly determines the corresponding properties of \({K}_{N}\) and \({Z}_{N}\). Common models include the case when \(\mu \) is the standard Gaussian measure, see e.g., [10, 39]; the uniform measure on a convex body, see e.g., the survey [7]; among many others, e.g., [71]. These are just a sample of recent articles and we refer the reader to the thorough list of references given therein.

A different asymptotic setting involves the case when the dimension \(n\) is large and one is interested in precise dependence on \(N\) and phenomena that hold uniformly for a large family of measures \(\mu \). In this setting, various geometric properties of \(K_N\) and \(Z_N\) such as Banach–Mazur distance, in-radius and other metric quantities have been analyzed. For zonotopes, see e.g., [14, 15, 35]. Concerning \(K_N\) there have been a number of recent results with special attention paid to estimates that hold “with high probability.” These include, for instance, the case when \(\mu \) is the uniform measure on the vertices of the cube [29], measures with “Gaussian-like” features [45, 50] and the case when \(\mu \) is the uniform measure on a convex body [21, 30]. We are interested in distributional inequalities for \({\text{ vol }}_{n}{\big ({K}_{N}\big )}\) and \({\text{ vol }}_{n}{\big ({Z}_{N}\big )}\), where \({\text{ vol }}_{n}{(\cdot )}\) denotes \(n\)-dimensional Lebesgue measure, with precise dependence on \(n\) and \(N\) for a broad class of measures.

Let \(\mathcal P _{n}\) denote the set of all probability measures on \(\mathbb{R }^n\) that are absolutely continuous with respect to Lebesgue measure. Our setting involves those \(\mu \) in \(\mathcal P _{n}\) whose densities \(f_{\mu }=\frac{d\mu }{dx}\) are bounded. To fix the normalization, we set

$$\begin{aligned} \mathcal P _{n}^{b}:= \big \{\mu \in \mathcal P _{n}:\Vert f_{\mu }\Vert _{\infty }= 1\big \}, \end{aligned}$$

where \(\Vert f\Vert _{\infty }\) is the essential supremum of \(f\). In particular, our setting includes the Gaussian measure and the uniform measure on a convex body \(K\subset \mathbb{R }^n\) but not the case of discrete measures. We assume that \(\mu _{1},\ldots ,\mu _{N}\in \mathcal P _{n}^{b}\) and that \({X}_{1},\ldots ,{X}_{N}\) are independent random vectors with \({X}_{i}\) distributed according to \(\mu _{i}\). Since we will compare \({K}_{N}\) and \({Z}_{N}\) (which depend on the \(X_i\)’s) for various underlying measures, we will write \(\mathbb P _{\otimes _{i=1}^N \mu _{i}}\) (or simply \(\mathbb P _{\otimes \mu _{i}}\)) for the product measure associated with \(\mu _{1},\ldots ,\mu _{N}\); we denote the corresponding expectation by \(\mathbb{E }_{\otimes _{i=1}^{N} \mu _{i}} = \mathbb{E }_{\otimes \mu _{i}}\).

Our main interest is in bounding the quantity

$$\begin{aligned} \mathbb{P }_{\otimes \mu _{i}}\big (\text{ vol }_{n}{\big ({K_{N}}\big )}^{1/n} \le \varepsilon \big ), \end{aligned}$$
(1.3)

for small values of \(\varepsilon \); in particular, the precise dependence on \(\varepsilon , n\) and \(N\). Such estimates are often referred to as small-ball probabilities. Our aim is to find and quantify universal behavior of small-ball probabilities for \({\text{ vol }}_{n}{\big ({K_N}\big )}\), as well as \({\text{ vol }}_{n}{\big ({Z_N}\big )}\), for \(\mu _i\in \mathcal P _n^b\). For the expectation \(\mathbb{E }_{\otimes \mu _i}{\text{ vol }}_{n}{\big ({K_N}\big )}\), the behavior can be far from uniform. Indeed, even for the Euclidean norm \(|X_1|\) of a single vector, the quantity \(\mathbb{E }_{\mu }|X_1|\) need not be finite. Thus in such a general setting, searching for uniform concentration phenomena seems a lost cause. We will show, however, that small-ball-type estimates always hold and are surprisingly uniform.
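As a toy illustration of (1.3) (not a proof technique used below), such small-ball probabilities can be estimated by simulation. The sketch below takes all \(\mu _i\) equal to the uniform measure on a unit square, which has density one and hence lies in \(\mathcal P _2^b\); the sample sizes and the grid of \(\varepsilon \)-values are arbitrary choices of ours.

```python
import random

def hull_area(pts):
    # area of the convex hull of planar points (monotone chain + shoelace)
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(points):
        out = []
        for p in points:
            while len(out) >= 2 and \
                  ((out[-1][0] - out[-2][0]) * (p[1] - out[-2][1]) -
                   (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    hull = chain(pts) + chain(pts[::-1])
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1] -
                         hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

random.seed(1)
n, N, trials = 2, 6, 2000
vols = []
for _ in range(trials):
    X = [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)) for _ in range(N)]
    # absolute convex hull K_N = conv{+-X_1, ..., +-X_N}
    pts = [p for x in X for p in (x, (-x[0], -x[1]))]
    vols.append(hull_area(pts) ** (1.0 / n))

eps_grid = [0.05, 0.15, 0.30]
probs = [sum(v <= eps for v in vols) / trials for eps in eps_grid]
print(probs)   # non-decreasing in eps
```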

To the best of our knowledge, apart from particular cases, general small-ball estimates are unknown. From a survey of related results in the literature, it was unclear to us even what order of magnitude to expect. One reason for this is that the volume problem is often approached indirectly. Many cases involve stronger statements about, e.g., the in-radius of \({K}_{N}\) or inclusion of other naturally associated sets. For instance, the main focus of [50] is singular values of certain random matrices; volume estimates for \({K}_{N}\) arise as consequences. To put our problem in context, we state a sample result from the latter paper. Specifically, in [50], \({K}_{N}\) is the absolute convex hull of the rows of a random matrix, the entries of which are symmetric, independent and identically distributed random variables with sub-Gaussian tail-decay. In this case, the authors prove that if \(N\ge (1+\zeta )n\), where \(\zeta >1/\ln n\), and \(\beta \in (0,1/2)\), then

$$\begin{aligned} \mathbb{P }\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n}\le c(\zeta ) \sqrt{\frac{\beta \ln (2N/n)}{n}}\,\Big ) \le \exp (-c_1N^{1-\beta }n^{\beta }); \end{aligned}$$

here \(c(\zeta )\) is a constant that depends on \(\zeta \) and the sub-Gaussian constant of the measure and \(c_1\) is a positive numeric constant. The latter is proved by estimating the in-radius of \(K_N\). The factor \(N^{1-\beta }n^{\beta }\) in the exponent is the best possible for the analogous statement involving the in-radius of \(K_N\) in the class of measures they consider (see [50, Theorem 4.2 & subsequent remark]). In the class \(\mathcal P _n^b\), however, the volume \({\text{ vol }}_{n}{\big ({K_N}\big )}\) behaves differently.

A similar result involves the case when \(\mu _K\) is the uniform measure on a convex body \(K\subset \mathbb{R }^n\) of volume one. In this case, it is known that if \(N\ge n\), then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _K}\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n}\le c \sqrt{\frac{\ln (2N/n)}{n}}\,\Big ) \le \text{ e }^{-n}, \end{aligned}$$

where \(c\) is a positive numeric constant. See the discussion in [21, §3.1] (and [68, Proposition 1] for the case \(N=n\)).

The quantity \(\sqrt{\frac{\ln (2N/n)}{n}}\) that appears in both of the latter examples corresponds to the expectation of \({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n}\) for the uniform measure \(\lambda _{D_n}\) on the Euclidean ball of volume one. More precisely, for \(n \le N \le \text{ e }^n\), one has

$$\begin{aligned} \Big (\mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({K_N}\big )}\Big )^{1/n}\simeq \sqrt{\frac{\ln (2N/n)}{n}}; \end{aligned}$$

see, e.g., [30] (see also the references in Sect. 4). Here \(A\simeq B\) means that \(c_{1} B\le A\le c_{2}B\) for some positive numeric constants \(c_1\) and \(c_2\). It is proved in [66] that among all measures \(\mu \in \mathcal P _n^b\) the uniform measure \(\lambda _{D_n}\) on the Euclidean ball of volume one minimizes the expected volume of \(K_N\), namely,

$$\begin{aligned} \mathbb{E }_{\otimes \mu _i}{\text{ vol }}_{n}{\big ({K_N}\big )} \ge \mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({K_N}\big )}. \end{aligned}$$
(1.4)

Similarly, it is shown in [66] that

$$\begin{aligned} \mathbb{E }_{\otimes \mu _i}{\text{ vol }}_{n}{\big ({Z_N}\big )}\ge \mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({Z_N}\big )}. \end{aligned}$$
(1.5)

One can check that for \(N\ge n\),

$$\begin{aligned} \Big (\mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({Z_N}\big )}\Big )^{1/n} \simeq \frac{N}{\sqrt{n}}; \end{aligned}$$

(use, e.g., Lemma 4.7; see also [15] as well as (9.4) for a more general result). Thus it is always meaningful to ask for the dependence on \(\varepsilon , n\) and \(N\) in the following quantities:

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n} \le c \varepsilon \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \end{aligned}$$
(1.6)

and

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\text{ vol }}_{n}{\big ({Z_N}\big )}^{1/n} \le \frac{c \varepsilon N}{\sqrt{n}}\Big ) \end{aligned}$$
(1.7)

for all measures in \(\mathcal P _n^b\).
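For instance, the growth rate \(N/\sqrt{n}\) in (1.7) can be observed numerically for \(n=2\) and \(\mu _i=\lambda _{D_2}\). The sketch below is a rough Monte Carlo check (sample sizes and seed are arbitrary choices of ours); it uses the classical planar zonotope volume formula \(\mathrm{vol}_2(Z_N)=4\sum _{i<j}|\det [X_i\,X_j]|\).

```python
import itertools, math, random

random.seed(2)
R = 1.0 / math.sqrt(math.pi)   # D_2: the disk of area one

def sample_disk():
    # rejection sampling from the uniform measure on D_2
    while True:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return (x, y)

def vol_ZN(X):
    # classical zonotope volume in the plane: 2^2 * sum of pairwise |det|
    return 4.0 * sum(abs(a[0] * b[1] - a[1] * b[0])
                     for a, b in itertools.combinations(X, 2))

trials, ratios = 300, {}
for N in (20, 40):
    mean = sum(vol_ZN([sample_disk() for _ in range(N)])
               for _ in range(trials)) / trials
    ratios[N] = math.sqrt(mean) / N   # (E vol_2(Z_N))^{1/2} / N

print(ratios)   # roughly constant in N, consistent with (1.7) for n = 2
```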

Our first main result is the following theorem.

Theorem 1.1

Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Let \(\delta >1\) and \(\varepsilon \in (0,1)\). Then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ( {\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c_1\varepsilon }{\delta } \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{c_2 N^{1-1/\delta ^2}n^{1/\delta ^2}}. \end{aligned}$$
(1.8)

Moreover, if \(N\le n\text{ e }^{\delta ^2}\), then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c_3\varepsilon }{\delta } \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$
(1.9)

where the \(c_i\)’s are positive numeric constants.

Here and throughout the paper, we use the notation \(o(1)\) to denote a quantity in \([0,1]\) that tends to \(0\) as \(N,n\rightarrow \infty \). For zonotopes, we prove the following theorem.

Theorem 1.2

Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Then for each \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{1/n} \le \frac{c\varepsilon N}{\sqrt{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$
(1.10)

where \(c\) is a positive numeric constant.

In Sect. 8, we also give lower bounds for the quantities in Theorems 1.1 and 1.2, which suggest that the estimates (1.9) and (1.10) are essentially optimal.

It has been observed in various other contexts that achieving the best bounds in small-ball estimates in high-dimensional geometry often requires different techniques than those used for proving large deviations; see, e.g., [33, Proposition 3], [43], [45, Proposition 2.6]. To describe the techniques used in this paper, we outline our viewpoint.

As in [66], we adopt an operator-theoretic point of view from the Local Theory of Banach spaces, e.g., [53–55]. Namely, we view \(K_N\) and \(Z_N\) as the image of the cross-polytope \(B_1^N\) and the cube \(B_{\infty }^N\), respectively, under the random matrix \([X_1\ldots X_N]\), i.e., \(K_N = [X_1\ldots X_N]B_1^N\) and \(Z_N=[X_1\ldots X_N]B_{\infty }^N\). In the same way, for any convex body \(C\subset \mathbb{R }^N\), we generate a random \(n\)-dimensional convex body by applying \([X_1\ldots X_N]\) to \(C\):

$$\begin{aligned}{}[X_1\ldots X_N]C = \Big \{\sum _{i=1}^N c_i X_i: c=(c_i)\in C \Big \}. \end{aligned}$$
(1.11)

Our first step is to identify the extremal measures \(\mu _i\in \mathcal{P }_n^b\) that maximize the small-ball probability

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big ({\text{ vol }}_{n}{\big ({[X_1\ldots X_N] C}\big )}^{\tfrac{1}{n}} \le \varepsilon \big ). \end{aligned}$$

This is done by means of symmetrization as in [66]. We show that the probability in question is maximized for \(\mu _i = \lambda _{D_n}\), the uniform measure on the Euclidean ball of volume one. While this simplifies the problem, computing the small-ball probability directly for \(\lambda _{D_n}\) is non-trivial. We turn instead to \(\mu = \gamma _n\), the standard Gaussian measure. Working with \(\gamma _n\) allows us to recast the small-ball problem in more geometric terms by using the Gaussian representation of intrinsic volumes [72, 74] and a suitable extension. A key point in our approach is that purely geometric properties of \(C\)—its intrinsic volumes and natural generalizations—dictate the small-ball behavior for \({\text{ vol }}_{n}{\big ([X_1\ldots X_N]C\big )}\). In this way, we reduce Theorems 1.1 and 1.2 to questions from the realm of classical convexity about the cross-polytope and the cube. In particular, Theorem 1.2 depends on verification of an isomorphic version of a conjecture of Lutwak about affine quermassintegrals; a key tool here is a result due to Grinberg [37] (see Sect. 5). Wherever possible, we outline proofs for a general convex body \(C\subset \mathbb{R }^N\). However, the focus of the paper is on \(B_1^N\) and \(B_{\infty }^N\).

A more common normalization than ours (although a slightly more restrictive one) is to assume that the covariance matrix of \(\mu \) is the identity, i.e., that \(\mu \) is isotropic. We prove estimates analogous to those of Theorems 1.1 and 1.2 under this normalization in Sect. 9; we also treat the important subclass of log-concave measures (see Sects. 2 and 9 for definitions). In the last several years, there have been many important results concerning random matrices generated by log-concave measures, see e.g., [1, 2] and the references therein. In this important class we obtain more precise estimates, such as the following theorem.

Theorem 1.3

Let \(n\le N\le \text{ e }^n\) and \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({Z_{N}}\big )}^{\tfrac{1}{n}} \le c\varepsilon \Big ( \mathbb{E }_{\otimes \mu } {\mathrm{vol}}_{n}{\big ({Z_{N}}\big )} \Big )^{\tfrac{1}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4} \end{aligned}$$
(1.12)

and

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{1/n} \le c_1\varepsilon \Big ( \mathbb{E }_{\otimes \mu } {\mathrm{vol}}_{n} {\big ({Z_{N}}\big )}\Big )^{\tfrac{1}{n}}\Big )\ge \varepsilon ^{nN}, \end{aligned}$$
(1.13)

where \(c\) and \(c_1\) are positive numeric constants.

See Sect. 9 for the definition of the isotropic constant and the corresponding result for \(K_N\).

The paper is organized as follows. In Sect. 2 we give basic notation and definitions used in the paper. The reduction to the uniform measure on the Euclidean ball via symmetrization is described in Sect. 3; we simply sketch the main points from [66]. In Sect. 4 we discuss the Gaussian representation of intrinsic volumes and show how an extension thereof is connected to the small-ball problem. Generalizations of intrinsic volumes are discussed in Sect. 5. Section 6 involves technical computations for the generalized intrinsic volumes of \(B_1^N\) and \(B_{\infty }^N\). In Sect. 7, we transfer the small-ball estimates obtained for \(\gamma _n\) to \(\lambda _{D_n}\). In Sect. 8 we prove Theorems 1.1 and 1.2 and give complementary lower bounds. In Sect. 9 we deal with the isotropic normalization and the log-concave case. We conclude with a discussion in Sect. 10 about general random convex sets \([X_1\ldots X_N]C\) and show how results from the asymptotic theory of convex bodies [59, 67] can be applied to the general problem of small-ball estimates for random convex sets.

2 Preliminaries

In this section we record notation and definitions used throughout the paper. The setting is \(\mathbb{R }^n\), where \(n\ge 2\), with the usual inner-product \(\langle \cdot , \cdot \rangle \), standard Euclidean norm \(|\cdot |\) and standard unit vector basis \(e_1,\ldots ,e_n\); \(n\)-dimensional Lebesgue measure \({\text{ vol }}_{n}{\big ({\cdot }\big )}\); Euclidean ball of radius one \(B_2^n\) with volume \(\omega _n={\text{ vol }}_{n}{\big ({B_2^n}\big )}\). We reserve \(D_n\) for the Euclidean ball of volume one, i.e., \(D_n=\omega _n^{-1/n} B_2^n\); Lebesgue measure restricted to \(D_n\) is \(\lambda _{D_n}\). The unit sphere is \(S^{n-1}\) and is equipped with the Haar probability measure \(\sigma \). The Grassmannian manifold of all \(n\)-dimensional subspaces of \(\mathbb{R }^N\) is denoted \(G_{N,n}\), with Haar probability measure \(\nu _{N,n}\). For a subspace \(F\in G_{N,n}\), we write \(P_F\) for the orthogonal projection onto \(F\). The standard Gaussian measure is \(\gamma _n\), i.e., \(\text{ d }\gamma _n(x) = (2\pi )^{-n/2}\text{ e }^{-|x|^2/2}\text{ d }x\), while \(\overline{\gamma }_n\) is the Gaussian measure with \(\text{ d }\overline{\gamma }_n(x)= \text{ e }^{-\pi |x|^2}\text{ d }x\).

Throughout the paper we reserve the symbols \(c,c_1,c_2,\ldots \) for positive numeric constants (not necessarily the same in each occurrence). We use the convention \(A\simeq B\) to signify that \(c_{1} B\le A\le c_{2}B\) for some positive numeric constants \(c_1\) and \(c_2\). Wherever necessary, we assume without loss of generality that \(n\) is larger than a fixed numeric constant. By adjusting the constants involved one can always force the results to hold for all \(n\ge 2\).

A convex body \(K\subset \mathbb{R }^n\) is a compact, convex set with non-empty interior. The support function of a convex body \(K\) is given by

$$\begin{aligned} h_K(y) =\sup \{\langle x, y \rangle : x\in K\} \quad (y\in \mathbb{R }^n) \end{aligned}$$

and the mean-width of \(K\) is

$$\begin{aligned} W(K)= \int _{S^{n-1}}h_K(\theta )\text{ d }\sigma (\theta ) +\int _{S^{n-1}}h_K(-\theta )\text{ d }\sigma (\theta ) =2 \int _{S^{n-1}}h_K(\theta )\text{ d }\sigma (\theta ). \end{aligned}$$
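As a quick, purely illustrative sanity check of this definition, the mean-width of the square \([-1,1]^2\) can be evaluated by discretizing the integral over \(S^1\); the exact value is \(8/\pi \). The helper name below is ours.

```python
import math

def support(vertices, theta):
    # h_K(theta) = max over vertices of <x, (cos theta, sin theta)>, K a polygon
    c, s = math.cos(theta), math.sin(theta)
    return max(c * x + s * y for x, y in vertices)

square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
M = 100_000
# W(K) = 2 * integral of h_K over the unit circle (sigma the normalized measure)
W = 2.0 * sum(support(square, 2 * math.pi * k / M) for k in range(M)) / M
print(W)   # mean width of [-1,1]^2: 8/pi, approximately 2.546
```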

We say that \(K\) is origin-symmetric if \(K=-K\). If the origin is an interior point of \(K\), the polar body \(K^{\circ }\) of \(K\) is defined by \(K^{\circ }=\{y\in \mathbb{R }^n:h_K(y)\le 1\}\). A convex body is isotropic if its volume is one, its center of mass is the origin and

$$\begin{aligned} \int _{K}\langle x, \theta \rangle ^2 \text{ d }x = L_K^2\quad \forall \theta \in S^{n-1}; \end{aligned}$$
(2.1)

the constant \(L_K\) is called the isotropic constant of \(K\). We say that a convex body \(K\subset \mathbb{R }^n\) is \(1\)-symmetric (with respect to the standard basis \(e_1,\ldots ,e_n\)), if

$$\begin{aligned} (\alpha _{\xi (1)} x_{\xi (1)},\ldots ,\alpha _{\xi (n)}x_{\xi (n)})\in K \end{aligned}$$
(2.2)

whenever \(x=(x_1,\ldots ,x_n)\in K, \alpha _i\in [-1,1]\) for each \(i=1,\ldots ,n\) and \(\xi :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation. We say that \(K\) is \(1\)-unconditional if (2.2) holds whenever \(x=(x_1,\ldots ,x_n)\in K, \alpha _i\in [-1,1]\) for each \(i=1,\ldots ,n\) and \(\xi \) is the identity. We also let \(B_p^n\) denote the unit-ball in \(\ell _p^n\).

Let \(\mathcal P _n\) denote the class of all probability measures on \(\mathbb{R }^n\) that are absolutely continuous with respect to Lebesgue measure. The subclass \(\mathcal P ^b_n\subset \mathcal P _n\) consists of all those measures \(\mu \) in \(\mathcal P _n\) whose densities \(f_\mu :=\frac{\text{ d }\mu }{\text{ d }x}\) satisfy \(\Vert f_\mu \Vert _{\infty }=1\), where \(\Vert \cdot \Vert _{\infty }\) is the essential supremum.

A Borel measure \(\mu \) on \(\mathbb{R }^n\) is said to be log-concave if for any compact sets \(A,B\subset \mathbb{R }^n\) and \(t \in [0,1]\),

$$\begin{aligned} \mu ( tA +(1-t)B)\ge \mu (A)^{t}\mu (B)^{1-t}. \end{aligned}$$

Similarly, a function \(f:\mathbb{R }^n\rightarrow \mathbb{R }^{+}\) is log-concave if \(\log f\) is concave on its support. It is known that if \(\mu \) is a log-concave measure on \(\mathbb{R }^n\) that is not supported on any proper affine subspace, then \(\mu \in \mathcal P _n\) and its density \(f_\mu \) is log-concave [13].
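For a concrete one-dimensional instance, the inequality can be checked for the standard Gaussian measure \(\gamma _1\) (which is log-concave) on two intervals. The sketch below, with arbitrarily chosen \(A\), \(B\) and \(t\), is purely illustrative.

```python
import math

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gamma1(I):
    # gamma_1-measure of an interval I = (a, b)
    a, b = I
    return Phi(b) - Phi(a)

A, B, t = (0.0, 1.0), (2.0, 4.0), 0.5
# Minkowski combination tA + (1-t)B of two intervals is again an interval
M = (t * A[0] + (1 - t) * B[0], t * A[1] + (1 - t) * B[1])

lhs = gamma1(M)
rhs = gamma1(A) ** t * gamma1(B) ** (1 - t)
print(lhs, rhs)   # lhs >= rhs, as log-concavity requires
```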

If \(A\subset \mathbb{R }^n\) is a Borel set with finite volume, the symmetric rearrangement \(A^{*}\) of \(A\) is the (open) Euclidean ball centered at the origin whose volume is equal to that of \(A\). The symmetric decreasing rearrangement of \(\chi _A\) is defined by \(\chi _A^{*}:=\chi _{A^{*}}\). If \(f:{\mathbb{R }}^n\rightarrow {\mathbb{R }}^+\) is an integrable function, we define its symmetric decreasing rearrangement \(f^{*}\) by

$$\begin{aligned} f^{*}(x)=\int _0^{\infty }\chi ^{*}_{\{ f> t\}}(x)\text{ d }t =\int _0^{\infty }\chi _{\{ f>t\}^{*}}(x)\text{ d }t. \end{aligned}$$

The latter should be compared with the “layer-cake representation” of \(f\):

$$\begin{aligned} f(x)=\int _0^{\infty }\chi _{\{ f> t\}}(x)\text{ d }t. \end{aligned}$$
(2.3)

See [47, Theorem 1.13]. The function \(f^{*}\) is radially-symmetric, decreasing and equimeasurable with \(f\), i.e., \(\{f>\alpha \}\) and \(\{f^*>\alpha \}\) have the same volume for each \(\alpha \ge 0\). By equimeasurability and (2.3), one has \(\Vert f\Vert _p=\Vert f^*\Vert _p\) for each \(1\le p\le \infty \), where \(\Vert \cdot \Vert _p\) denotes the \(L_p\)-norm. If \(\mu \in \mathcal P _n^b\) has density \(f_\mu \), we let \(\mu ^*\) denote the measure in \(\mathcal P _n^b\) with density \(f_{\mu }^*\). See [18, 47] for further background material on rearrangements.
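The layer-cake formula gives a direct, if crude, way to tabulate \(f^{*}\) numerically. The following sketch (a simple discretization, with our own helper names and an arbitrary two-level step function \(f\) on the line) recovers the expected rearrangement: the super-level set \(\{f>t\}\) is replaced by the interval of the same measure centered at the origin.

```python
def f(x):
    # a simple step function: 2 on [0,1), 1 on [2,4), 0 elsewhere
    if 0 <= x < 1:
        return 2.0
    if 2 <= x < 4:
        return 1.0
    return 0.0

def measure_above(t, a=-1.0, b=5.0, m=6000):
    # Lebesgue measure of the super-level set {f > t}, by a midpoint rule
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) > t for k in range(m))

steps, tmax = 300, 3.0
dt = tmax / steps
meas = [measure_above((j + 0.5) * dt) for j in range(steps)]

def f_star(x):
    # layer-cake: {f > t}* is the interval (-meas/2, meas/2)
    return dt * sum(abs(x) < m_ / 2.0 for m_ in meas)

print(f_star(0.0), f_star(1.0), f_star(2.0))
```

Here \(f^*\) equals \(2\) on \((-1/2,1/2)\), \(1\) on \(1/2\le |x|<3/2\) and \(0\) beyond, matching the printed values up to discretization.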

For the reader’s convenience, we list a few basic linear algebra facts used in the paper.

Proposition 2.1

Suppose that \(N\ge n\) and that \(T:\mathbb{R }^N\rightarrow \mathbb{R }^n\) is a linear operator. Denote the adjoint of \(T\) by \(T^{*}\).

  (i)

    (Polar decomposition) There is an isometry \(U:\mathbb{R }^n\rightarrow \mathbb{R }^N\) such that \(T^*=U(TT^*)^{1/2}\).

  (ii)

    If \(v_1,\ldots ,v_n\in \mathbb{R }^N\) denote the columns of \(T^*\) (as a matrix with respect to the standard unit vector basis), then

    $$\begin{aligned} {\mathrm{vol}}_{n}{\big ({T^* [0,1]^n}\big )}&= \det {(TT^*)^{1/2}} \end{aligned}$$
    (2.4)
    $$\begin{aligned}&= |v_1||P_{V_1^{\perp }}v_2||P_{V_2^{\perp }}v_3| \cdots |P_{V_{n-1}^{\perp }}v_n|, \end{aligned}$$
    (2.5)

    where

    $$\begin{aligned} V_k:=\mathrm{span}\{v_1,\ldots ,v_k\}, \quad V_0:=\{0\}, \end{aligned}$$

    for \(k=1,\ldots ,n-1\).

  (iii)

    Let \(E=\ker (T)^{\perp }\) and let \(T\vert _E\) be the restriction of \(T\) to \(E\). If \(B\subset \mathbb{R }^N\) is a compact set then

    $$\begin{aligned} {\mathrm{vol}}_{n}{\big ({TB}\big )} = |\mathrm{det}(T\vert _E)| {\mathrm{vol}}_{n}{\big ({P_EB}\big )}, \end{aligned}$$
    (2.6)

    where \(|\mathrm{det}(T\vert _E)| = \det {(TT^*)^{1/2}}\).

For (i) see, e.g., [24, §3.2]; (2.4) follows from (i), while (2.5) is the well-known formula for the volume of the parallelepiped spanned by \(v_1,\ldots ,v_n\), which follows from Gram–Schmidt (see, e.g., [4, Theorem 7.5.1]). For (iii), note that \(E =\mathrm{Range}(T^*)\) and

$$\begin{aligned} \mathrm{det}(TT^{*})&= {\text{ vol }}_{n}{\big ({T T^*[0,1]^n}\big )} \\&= |\mathrm{det}(T\vert _E)| {\text{ vol }}_{n}{\big ({T^*[0,1]^n}\big )}\\&= |\mathrm{det}(T\vert _E)|\mathrm{det} (TT^{*})^{1/2}, \end{aligned}$$

hence \(|\mathrm{det}(T\vert _E)|=\mathrm{det}(TT^*)^{1/2}\); (2.6) follows from the fact that \(TB = T\vert _E P_E B\).
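The identity between (2.4) and (2.5) is easy to test numerically. The following sketch, with an arbitrary \(2\times 3\) example of ours, compares the Gram-matrix determinant with the Gram–Schmidt product.

```python
import math

# columns of T^* (rows of T), for an operator T: R^3 -> R^2
v1, v2 = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# det(T T^*)^{1/2} via the 2x2 Gram matrix of v1, v2 -- this is (2.4)
g11, g12, g22 = dot(v1, v1), dot(v1, v2), dot(v2, v2)
sqrt_det = math.sqrt(g11 * g22 - g12 * g12)

# Gram-Schmidt form (2.5): |v1| * |P_{V_1^perp} v2|
t = g12 / g11
resid = [v2[i] - t * v1[i] for i in range(3)]
gram_schmidt = math.sqrt(g11) * math.sqrt(dot(resid, resid))

print(sqrt_det, gram_schmidt)   # both equal sqrt(46)
```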

3 Distributional Inequalities via Symmetrization

The main goal of this section is to show that the small-ball probabilities in Theorems 1.1 and 1.2 are maximized for \(\lambda _{D_n}\). This is done by adapting [66, Theorem 1.1], which (in the notation of the introduction) asserts that if \(\mu _1,\ldots ,\mu _N\in \mathcal P _n^b\) and \(C\subset \mathbb{R }^N\) is a convex body, then

$$\begin{aligned} \mathbb{E }_{\otimes \mu _i }{\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )} \ge \mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}. \end{aligned}$$
(3.1)

The next theorem is a distributional form of (3.1) in the case when \(C\) is \(1\)-unconditional (which suffices for our purposes).

Theorem 3.1

Let \(N\ge n\) and let \(\mu _1,\ldots ,\mu _N\in \mathcal P ^b_n\). Suppose that \(C\subset \mathbb{R }^N\) is a \(1\)-unconditional convex body. Then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big ({\mathrm{vol}}_{n}{\big ([X_1\ldots X_N]C\big )}\ge \alpha \big ) \ge \mathbb{P }_{\otimes \lambda _{D_n}}\big ({\mathrm{vol}}_{n}{\big ([X_1\ldots X_N]C\big )} \ge \alpha \big ). \end{aligned}$$

Remark 3.2

The analogous result for the convex hull of random points sampled in a convex body of volume one was proved by Giannopoulos and Tsolomitis [32, Lemma 3.3].

Remark 3.3

In Theorem 3.1, one can replace \({\text{ vol }}_{n}{\big ({\cdot }\big )}\) by other intrinsic volumes (see [66, Remark 4.4]). In this paper we focus all of our efforts on \({\text{ vol }}_{n}{\big ({\cdot }\big )}\).

The proof of Theorem 3.1 is a straightforward modification of that of (3.1). To clarify the role of the extra unconditionality assumption in the present context, we sketch the main points. Recall that if \(\mu \in \mathcal P _n^b\) has density \(f_\mu \), then \(\mu ^*\) denotes the measure in \(\mathcal P _n^b\) whose density is the symmetric decreasing rearrangement \(f_{\mu }^*\).

Theorem 3.4

Let \(N\) and \(n\) be positive integers. Let \(\mu _1,\ldots ,\mu _N\in \mathcal P ^b_n\) and let \(\alpha >0\). Suppose that \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) satisfies the following condition: for each \(z \in S^{n-1}\), for all \(y_1,\ldots ,y_N\in z^{\perp }\), the level set

$$\begin{aligned} \big \{t\in \mathbb{R }^N: F(y_1+t_1 z, \ldots , y_N+t_N z)\le \alpha \big \} \end{aligned}$$

is origin-symmetric and convex. Then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big (\{F >\alpha \}\big ) \ge \mathbb{P }_{\otimes \mu _i^*}\big (\{F >\alpha \}\big ). \end{aligned}$$
(3.2)

The latter theorem makes use of the Brascamp–Lieb–Luttinger rearrangement inequality [17] (see also [20]); the proof is given in detail in [66, Proposition 3.2] (use the fact that \(\mathbb{P }_{\otimes \mu _i}\Big (\{F >\alpha \}\Big ) = \mathbb{E }_{\otimes \mu _i} {{\small 1}\!\!1}_{\{F>\alpha \}}\)).

If \(K\subset \mathbb{R }^n\) is a compact set of volume one and all \(\mu _i\) are equal to the uniform measure on \(K\), then Theorem 3.4 gives immediately

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big (\{F >\alpha \}\big )\ge \mathbb{P }_{\otimes \lambda _{D_n}}\big (\{F>\alpha \}\big ). \end{aligned}$$
(3.3)

For general measures \(\mu \in \mathcal P _n^b\), an additional step is required to pass to the uniform measure on the ball. We say that \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) is coordinate-wise increasing if for all \(x_1,\ldots , x_N\) in \(\mathbb{R }^n\),

$$\begin{aligned} F(s_{1}x_{1}, \ldots , s_{N}x_{N}) \le F(t_{1}x_{1}, \ldots , t_{N}x_{N}), \end{aligned}$$
(3.4)

whenever \(0\le s_{i}\le t_{i}, i=1,\ldots ,N\). For such functions, one can pass from rotationally-invariant measures \(\mu \in \mathcal P _n^b\) to \(\lambda _{D_n}\). Here and elsewhere, we use the term “increasing” in the non-strict sense.

Proposition 3.5

Let \(\mu _1,\ldots ,\mu _N\in \mathcal P _n^b\) and suppose that \(\mu _i=\mu _i^*\) for each \(i=1,\ldots ,N\). Assume that \(F\) is coordinate-wise increasing as in (3.4). Then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big (\{F >\alpha \}\big ) \ge \mathbb{P }_{\otimes \lambda _{D_n}}\big (\{F >\alpha \}\big ). \end{aligned}$$

Proof

Using spherical coordinates \(x_i=r_i\theta _i\), where \(r_i\in \mathbb{R }^{+}\) and \(\theta _i\in S^{n-1}\), and writing \(f_i:=f_{\mu _i}\) for the densities, \(\text{ d }\overline{r} = \text{ d }r_1\ldots \text{ d }r_N\) and \(\text{ d }{\overline{\theta }} =\text{ d }\sigma (\theta _1)\ldots \text{ d }\sigma (\theta _N)\), we have

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big (\{F>\alpha \}\Big )&= \int _{\mathbb{R }^n}\ldots \int _{\mathbb{R }^n}{{\small 1}\!\!1}_{ \{F>\alpha \}}(x_1,\ldots ,x_N) \prod _{i=1}^N f_i(x_i) \text{ d }x_1\ldots \text{ d }x_N\\&= (n\omega _n)^N\int _{(\mathbb{R }^{+})^N} \int _{(S^{n-1})^N}{{\small 1}\!\!1}_{\{F>\alpha \}} (r_1\theta _1,\ldots ,r_N\theta _N) \prod _{i=1}^N f_i(r_i\theta _i)r_i^{n-1} \text{ d }{\overline{ \theta }}\text{ d }{\overline{r}}. \end{aligned}$$

By our assumption on \(F\),

$$\begin{aligned} \mathbb{R }^{+}\ni r_j \mapsto {{\small 1}\!\!1}_{\{F> \alpha \}} (r_1\theta _1,\ldots , r_j \theta _j,\ldots ,r_N\theta _N) \end{aligned}$$

is increasing, hence

$$\begin{aligned}&\int _0^{\infty } {{\small 1}\!\!1}_{\{F> \alpha \}} (r_1\theta _1,\ldots , r_j \theta _j,\ldots ,r_N\theta _N) f_j(r_j\theta _j) r_j^{n-1}\text{ d }{r_j}\\&\quad \ge \int _0^{\omega _n^{-1/n}} {{\small 1}\!\!1}_{\{F>\alpha \}} (r_1\theta _1,\ldots , r_j \theta _j,\ldots ,r_N\theta _N) r_j^{n-1}\text{ d }{r_j}; \end{aligned}$$

(see, e.g., [66, Lemma 3.5]). Applying the latter inequality for each \(j\), together with Fubini’s Theorem, yields the result.\(\square \)

Proof of Theorem 3.1

Let \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) be defined by

$$\begin{aligned} F(x_1,\ldots , x_N):={\text{ vol }}_{n}{\big ([x_1\ldots x_N]C\big )}. \end{aligned}$$

Using an argument due to Groemer [38], it is shown in [66, Proposition 4.1] that \(F\) satisfies the assumption in Theorem 3.4, hence (3.2) holds. The unconditionality assumption on \(C\) guarantees that for each \(x_1,\ldots ,x_N\) in \(\mathbb{R }^n\),

$$\begin{aligned}{}[s_1x_1\ldots s_Nx_N]C\subset [t_1x_1\ldots t_Nx_N]C, \end{aligned}$$

whenever \(0\le s_i\le t_i\), for \(i=1,\ldots ,N\), hence \(F\) is coordinate-wise increasing and Proposition 3.5 applies.\(\square \)

While Theorem 3.1 reduces Theorems 1.1 and 1.2 to the case of \(\mathbb P _{\otimes \lambda _{D_n}}\), our path will involve first calculating the small-ball probability for the Gaussian measure, to which we now turn our attention.

4 An Extension of the Gaussian Representation of Intrinsic Volumes

This section is our first step towards estimating the small-ball probabilities in Theorems 1.1 and 1.2 for \(\mu =\gamma _n\), the standard Gaussian measure. As in the previous section, we work with random sets of the form \([X_1\ldots X_N]C\) for a general convex body \(C\subset \mathbb{R }^N\).

When \(N=n\), the small-ball problem for any \(C\) reduces to estimates for random determinants. Indeed, \({\text{ vol }}_{n}{\big ([X_1\ldots X_n]C\big )}=|\det {[X_1\ldots X_n]}| {\text{ vol }}_{n}{\big ({C}\big )}\). As in [53, Fact 1.5], one can bound

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\big (|\det {[X_1\ldots X_n]}| \le \varepsilon \big ), \end{aligned}$$

by estimating moments \(\mathbb{E }_{\otimes \gamma _n}|\det {[X_1\ldots X_n]}|^{-p}\) for \(p>0\) (see also [68, Proposition 2] for such estimates beyond the Gaussian setting).
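The identity \({\mathrm{vol}}_{n}(TC)=|\det T|\,{\mathrm{vol}}_{n}(C)\) underlying this reduction can be checked directly in the plane for \(C=B_1^2\) (whose area is \(2\)); the matrix below is an arbitrary example of ours, and the illustration is not part of the argument.

```python
def shoelace(poly):
    # area of a polygon given by vertices in cyclic order
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1] -
                         poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))

T = [[2.0, 1.0],
     [0.5, 3.0]]
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]

# The vertices of B_1^2 are +-e_1, +-e_2; their images are +- the columns of T,
# and T B_1^2 is their convex hull (a centrally symmetric quadrilateral).
t1, t2 = (T[0][0], T[1][0]), (T[0][1], T[1][1])
vol_TC = shoelace([t1, t2, (-t1[0], -t1[1]), (-t2[0], -t2[1])])

print(vol_TC, abs(detT) * 2.0)   # vol_2(B_1^2) = 2, so both equal 2|det T|
```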

As we mentioned in the introduction, for the random polytope \(K_N\), the in-radius is well-studied, see [34, 50] and the references therein. Aside from implications stemming from in-radius estimates, we are not aware of small deviations for the volume \({\text{ vol }}_{n}{\big ({K_N}\big )}\) for the full range of parameters \(n, N\) and \(\varepsilon \) considered in this paper.

As in the case \(N=n\), our approach will involve estimation of moments \(\mathbb{E }_{\otimes \gamma _n}{\text{ vol }}_{n}{\big ([X_1\ldots X_N]C\big )}^{-p}\) for \(p>0\). Unlike the case \(N=n\), however, the geometry of \(C\) plays a crucial role, which we quantify through intrinsic volumes and suitable extensions.

Recall that the intrinsic volumes of a convex body \(C\subset \mathbb{R }^N\) can be defined via the Steiner formula for the outer parallel volume of \(C\):

$$\begin{aligned} {\text{ vol }}_{N}{\big ({C+ \alpha B_2^N}\big )} = \sum _{n=0}^N \omega _n V_{N-n}(C){\alpha }^{n}. \end{aligned}$$
(4.1)

The quantities \(V_n, n=1,\ldots ,N\), are the \(n\)-th intrinsic volumes of \(C\) (we set \(V_0 \equiv 1\)). Of particular interest are \(V_1, V_{N-1}\) and \(V_N\), which are multiples of the mean-width, surface area and volume, respectively. Intrinsic volumes are also referred to as quermassintegrals (under an alternate labelling and normalization). For further background on intrinsic volumes, we refer the reader to [70]. We will make use of the following fact, which is a special case of Kubota’s integral recursion:

$$\begin{aligned} V_n(C)= \binom{N}{n}\frac{\omega _N}{\omega _n\omega _{N-n}} \int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_E C}\big )}\text{ d }\nu _{N,n}(E). \end{aligned}$$
(4.2)

There is a version of the latter formula that uses Gaussian random matrices, termed the Gaussian representation of intrinsic volumes in [74] and which appeared previously in another context in [72]. If \(G=[\gamma _{ij}]\) is an \(n\times N\) matrix with independent standard Gaussian entries, then the \(n\)th intrinsic volume of \(C\subset \mathbb{R }^N\) is given by

$$\begin{aligned} V_n(C)=\frac{(2\pi )^{n/2}}{\omega _n n!}\mathbb{E }\,{\text{ vol }}_{n}{\big ({GC}\big )}. \end{aligned}$$
(4.3)

(Here we have omitted the subscript on \(\mathbb{E }_{\otimes \gamma _n}\) and will do so when the context is clear.)
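As an illustrative numerical sanity check of the two representations (4.2) and (4.3) (not part of the argument; the body \(C=[0,1]^3\), the sample sizes and the random seed below are our own choices), one can take \(n=2\), \(N=3\). Then \(V_2([0,1]^3)=3\), the prefactor in (4.3) is \((2\pi )^{1}/(\omega _2\, 2!)=1\), and the Kubota coefficient in (4.2) is \(\binom{3}{2}\omega _3/(\omega _2\omega _1)=2\); moreover every planar image of the cube is a zonogon, whose area is the sum of \(|\det |\) over pairs of generators.

```python
import numpy as np
from itertools import combinations

# area of the planar zonogon sum_i [0, v_i]: sum of |det| over pairs of generators
def zonogon_area(vectors):
    return sum(abs(v[0] * w[1] - v[1] * w[0])
               for v, w in combinations(vectors, 2))

rng = np.random.default_rng(1)
trials = 20000

# Gaussian representation (4.3): prefactor is 1 here, so E vol_2(G[0,1]^3) = V_2 = 3.
gauss_est = float(np.mean([zonogon_area(rng.standard_normal((2, 3)).T)
                           for _ in range(trials)]))

# Kubota (4.2): coefficient is 2 here, so 2 * E_F vol_2(P_F [0,1]^3) = V_2 = 3.
areas = []
for _ in range(trials):
    U, _ = np.linalg.qr(rng.standard_normal((3, 2)))  # orthonormal basis of a random E
    areas.append(zonogon_area(U))  # row i of U is P_E e_i in coordinates of E
kubota_est = 2 * float(np.mean(areas))
```

Both estimates agree with \(V_2([0,1]^3)=3\) up to Monte Carlo error.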

The next proposition is an extension of (4.3), which connects powers of \({\text{ vol }}_{n}{\big ({GC}\big )}\) and the following parameter \(W_{[n,p]}(C)\), defined in [22],

$$\begin{aligned} W_{[n,p]}(C) :=\Big (\int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_F C}\big )}^{p} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{np}}, \end{aligned}$$
(4.4)

for \(p\in [-\infty , \infty ]\). In the latter expression, the \(p=0\) case is interpreted as \(\lim _{p\rightarrow 0}W_{[n,p]}(C)\); a similar convention is made for \(0\)th moments throughout the paper. The quantities \(W_{[n,p]}(C)\) are discussed in greater detail in Sect. 5. The proof we give below is the same as that of [72, Theorem 6], although presented differently; see also [73, Theorem 1] for a probabilistic derivation of the Steiner formula (4.1), which led us to the connection.

Proposition 4.1

Let \(n\le N\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Let \(C\subset \mathbb{R }^N\) be a compact set with non-empty interior and \(p >-(N-n+1)\). Then

$$\begin{aligned} \Big (\mathbb{E }\,{\mathrm{vol}}_{n}{\big ({G C}\big )}^p\Big )^{\tfrac{1}{p}} = \Big (\mathbb{E }\det {(GG^{*})^{\tfrac{p}{2}}}\Big )^{\tfrac{1}{p}} W^n_{[n,p]}(C). \end{aligned}$$
(4.5)

If \(C\) is a convex body and \(p=1\), then (4.5) reduces to

$$\begin{aligned} \mathbb{E }\,{\mathrm{vol}}_{n}{\big ({G C}\big )} = \frac{1}{(2\pi )^{n/2}}\frac{N!}{(N-n)!}\frac{\omega _N}{\omega _{N-n}} \int _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_E C}\big )}\text{ d }\nu _{N,n}(E), \end{aligned}$$

which is the Gaussian representation of intrinsic volumes. The random matrix \(GG^*\) in Proposition 4.1 is distributed according to the Wishart density and explicit formulas for \(\mathbb{E }\mathrm{det}(GG^{*})^{p/2}\) are well-known, e.g., [4, Chap. 7]; a direct argument giving the order of magnitude of \(\mathbb{E }\mathrm{det}(GG^{*})^{p/2}\) is given below in Lemma 4.2. For a strong stochastic equivalence involving projections of regular simplices on \(G_{N,n}\) and Gaussian vectors, see [11, Theorem 1].

In a different context, passage between Gaussian random operators and random projections on the Grassmannian manifold has been used to great effect in studying volumetric invariants that arise in Banach–Mazur distance investigations; see [53, 55].

Proof of Proposition 4.1

Let \(h_1,\ldots ,h_n\in \mathbb{R }^N\) be the columns of \(G^{*}\). Then \(G^*[0,1]^n\) is the parallelepiped generated by \(h_1,\ldots , h_n\) and \({\text{ vol }}_{n}{\big ({G^*[0,1]^n}\big )}=\mathrm{det}(GG^*)^{1/2}\), by Proposition 2.1(ii). Let \(H\) be the subspace spanned by \(h_1,\ldots ,h_n\), so that

$$\begin{aligned} H=\mathrm{Range}(G^*)=\ker (G)^{\perp }. \end{aligned}$$

Let \(U\) be a random matrix distributed uniformly on the orthogonal group \(\mathcal O (N)\), independent of \(G\). Note that \((GU)^*[0,1]^n\) is the parallelepiped spanned by the vectors \(U^*h_1,\ldots , U^*h_n\), hence

$$\begin{aligned} {\text{ vol }}_{n}{\big ({(GU)^*[0,1]^n}\big )}= \mathrm{det}((GU)(GU)^*)^{1/2} = \mathrm{det}(GG^*)^{1/2}. \end{aligned}$$

Combining the latter equality with Proposition 2.1(iii), we have

$$\begin{aligned} {\text{ vol }}_{n}{\big ({GUC}\big )} =\mathrm{det}(GG^*)^{\tfrac{1}{2}} {\text{ vol }}_{n}{\big ({P_{U^*H}C}\big )}. \end{aligned}$$

Let \(\mathbb{E }_{\otimes _{i=1}^N \gamma _n}=\mathbb{E }_{\otimes _{i=1}^n \gamma _N}\) denote expectation with respect to \(G\); similarly let \(\mathbb{E }_{U}\) denote expectation with respect to \(U\). By rotational invariance of \(\gamma _N\), the matrices \(G\) and \(GU\) have the same distribution, hence

$$\begin{aligned} \mathbb{E }_{\otimes _{i=1}^n \gamma _N} {\text{ vol }}_{n}{\big ({GC}\big )}^p&= \mathbb{E }_{\otimes _{i=1}^n \gamma _N} \mathbb{E }_U {\text{ vol }}_{n}{\big ({GUC}\big )}^p \\&= \mathbb{E }_{\otimes _{i=1}^n \gamma _N} \Big (\mathrm{det}(GG^*)^{\tfrac{p}{2}}\mathbb{E }_U {\text{ vol }}_{n}{\big ({P_{U^* H}C}\big )}^p \Big ) \\&= \mathbb{E }_{\otimes _{i=1}^n \gamma _N}\mathrm{det}(GG^*)^{\tfrac{p}{2}} \int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_EC}\big )}^p \text{ d }\nu _{N,n}(E). \end{aligned}$$

\(\square \)

As mentioned above, we give the order of magnitude of \(\mathbb{E }\det {(GG^*)}^{p/2}\). Since the resulting estimate is closely connected to the small-ball estimate in the Gaussian case, we include a detailed proof.

Lemma 4.2

Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Then for all \(p \in [-(N-n+1-\text{ e }^{-n(N-n+1)}), N]\),

$$\begin{aligned} \Big (\mathbb{E }\det {(GG^*)}^{\tfrac{p}{2}}\Big )^{ \tfrac{1}{pn}} \simeq \sqrt{N}. \end{aligned}$$

Proof

Let \(X=(x_1,\ldots ,x_N)\) be an \(N\)-dimensional standard Gaussian vector. Let \(m \in \{1,\ldots ,N\}\) and \(F\in G_{N,m}\). For each \(\eta >0\) and for all \(p\in [-(m-\text{ e }^{-\eta m}), m]\), we have

$$\begin{aligned} c \text{ e }^{-\eta }\sqrt{m} \le \left( \mathbb{E }|P_{F} X|^{ p }\right) ^{\tfrac{1}{p}} \le c_1\sqrt{m} . \end{aligned}$$
(4.6)

Indeed, note that for \(a\in (0,1)\), \(\mathbb{E }_{\gamma _{1}}|x_1|^{-a} \simeq \frac{1}{1-a}\). Then, for \(p_{0}= m-\text{ e }^{-\eta m}\), we have

$$\begin{aligned} \left( \mathbb{E }|P_{F} X|^{- p_{0} }\right) ^{-\tfrac{1}{p_{0}}}&= \left( \mathbb{E }_{\gamma _{m}} |(x_1,\ldots ,x_m)|^ {-p_{0}} \right) ^{-\tfrac{1}{p_{0}}} \\&= \Big (\frac{m\omega _{m}}{(2\pi )^{\tfrac{m}{2}}} \int _{0}^{\infty } r^{m-(m-\text{ e }^{-\eta m})-1} \text{ e }^{-\tfrac{r^{2}}{2}} \text{ d }r \Big )^{-\tfrac{1}{p_{0}}} \\&= \frac{1}{(m\omega _{m})^{\tfrac{1}{p_{0}}}} (2\pi )^ {\tfrac{m-1}{2p_{0}}} \Big (\frac{1}{2}\mathbb{E }_{\gamma _{1}}|x_1|^{-(1-\text{ e }^{- \eta m})}\Big )^{-\tfrac{1}{p_{0}}} \\&\ge c\text{ e }^{-\eta } \sqrt{m}. \end{aligned}$$

For the positive range,

$$\begin{aligned} \big (\mathbb{E }|P_{F} X|^{m}\big )^{\tfrac{1}{m}}&= \big (\mathbb{E }_{\gamma _{m}} |(x_1,\ldots ,x_m)|^{m } \big )^{\tfrac{1}{m}} \\&= \Big (\frac{m\omega _{m}}{(2\pi )^{\tfrac{m}{2}}} \int _{0}^{\infty } r^{2m-1} \text{ e }^{-\tfrac{r^{2}}{2}} \text{ d }r \Big )^{\tfrac{1}{m}}\\&\simeq \sqrt{m}. \end{aligned}$$

As in the proof of Proposition 4.1, let \(h_1,\ldots ,h_n\in \mathbb{R }^N\) be the columns of \(G^{*}\). Let \(H_0=\{0\}\). For \(k=1,\ldots ,n-1\), set

$$\begin{aligned} H_k:=\mathrm{span}\{h_1,\ldots ,h_k\}. \end{aligned}$$

By Proposition 2.1 (ii), we have

$$\begin{aligned} \det {(GG^{*})^{\tfrac{p}{2}}}=\prod _{k=1}^n |P_{H^{\perp }_{k-1}}h_k|^p. \end{aligned}$$
(4.7)

Let \(p_{1}=-\big (N-n+1-\text{ e }^{-n(N-n+1)} \big )\). Integrating first with respect to \(h_{n}\), then \(h_{n-1}\) and so forth, at each stage applying (4.6) with \(m=N-k+1\) and \(\eta _{k}= 2^{-k}\) for \(k\ge 2\) and \(\eta _{1}=n\), we obtain

$$\begin{aligned} \Big ( \mathbb{E }\,\mathrm{det}(GG^{*})^{\tfrac{p_{1}}{2}}\Big )^{\tfrac{1}{p_{1}n}}&= \Big ( \mathbb{E }\prod _{k=1}^{n}|P_{H_{k-1}^{\perp }} h_{k}|^{p_1}\Big )^{\tfrac{1}{p_1n}}\\&\ge c\Big ( \prod _{k=1}^{n} (N-k+1)\Big )^{\tfrac{1}{2n}} \text{ e }^{- \tfrac{1}{n} \sum _{k=1}^{n}\eta _{k}} \\&\ge c\Big (\binom{N}{n} n!\Big )^{\tfrac{1}{2n}}\text{ e }^{-\tfrac{n+1/2}{n}}\\&\ge c^{\prime }\sqrt{N}. \end{aligned}$$

Similarly, for the positive range, we have

$$\begin{aligned} \big ( \mathbb{E }\,\mathrm{det} (GG^{*})^{\tfrac{N}{2}}\big )^ {\tfrac{1}{Nn}} \simeq \sqrt{N}. \end{aligned}$$

The result follows by Hölder’s inequality.\(\square \)
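For a numerical illustration of Lemma 4.2 (a sketch under parameter choices of our own, not part of the proof), one can use the case \(p=2\), where the Wishart determinant moment is exact: \(\mathbb{E }\,\mathrm{det}(GG^{*})=N(N-1)\cdots (N-n+1)=N!/(N-n)!\), so that \(\big (\mathbb{E }\,\mathrm{det}(GG^{*})\big )^{1/(2n)}\) is indeed of order \(\sqrt{N}\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, trials = 2, 5, 40000

# Monte Carlo estimate of E det(GG^T) for an n x N standard Gaussian matrix G
dets = [np.linalg.det(G @ G.T) for G in rng.standard_normal((trials, n, N))]
est = float(np.mean(dets))

# exact value N!/(N-n)! = N(N-1)...(N-n+1); here 5*4 = 20
exact = 1.0
for k in range(n):
    exact *= N - k
```

The normalized quantity `exact ** (1/(2*n))` is then comparable to \(\sqrt{N}\), as the lemma asserts.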

The following proposition will be used to show that Theorem 1.2 is sharp for the Gaussian measure (cf. Proposition 6.7). As the proof is similar to that of the preceding lemma, we include it here.

Proposition 4.3

Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Then for any \(\varepsilon \in (0,1/2)\),

$$\begin{aligned} \mathbb{P }\Big (\mathrm{det}(GG^*)^{1/(2n)} \le c\varepsilon \sqrt{N}\Big )\ge \varepsilon ^{n(N-n+1)}, \end{aligned}$$

where \(c\) is a positive numeric constant.

Proof

Let \(X\) be an \(N\)-dimensional standard Gaussian vector. Let \(m \in \{1,\ldots ,N\}\) and \(F\in G_{N,m}\). By Chebyshev’s inequality,

$$\begin{aligned} \mathbb{P }\Big (|P_F X|\le \sqrt{2m}\Big )\ge \frac{1}{2}. \end{aligned}$$
(4.8)

Moreover, a direct computation shows that for any \(\varepsilon \in (0,1/2)\),

$$\begin{aligned} \mathbb{P }\Big (|P_F X| \le c_1\varepsilon \sqrt{m}\Big )\ge \varepsilon ^m, \end{aligned}$$
(4.9)

where \(c_1\) is a positive numeric constant (see Proposition 9.5 for a more general result).

As in the previous proof, let \(h_1,\ldots ,h_n\) denote the columns of \(G^{*}\); set \(H_0=\{0\}\) and \(H_k=\mathrm{span}\{h_1,\ldots ,h_k\}\). For each \(k=1,\ldots ,n-1\), let \(a_k=\sqrt{2(N-k+1)}\) and let \(a_n=\varepsilon ^n \sqrt{N-n+1}\). Using (4.7), we have

$$\begin{aligned} \mathbb{P }\big (\mathrm{det}(GG^*)^{\tfrac{1}{2n}}\le c\varepsilon \sqrt{N} \big )&\ge \mathbb{P }\big (|P_{H_{k-1}^{\perp }}h_k| \le c a_k \text{ for } \text{ each } k=1,\ldots ,n\big ), \end{aligned}$$

where \(c\) is a positive numeric constant. Applying Fubini’s theorem iteratively (integrating first with respect to \(h_n\), then \(h_{n-1}\) and so on), using (4.9) with \(m=N-n+1\) and (4.8) for \(m=N-k+1\) (for \(k=n-1,\ldots ,1\)) gives the desired result.\(\square \)

Proposition 4.1 and Lemma 4.2 reduce the small-ball problem for \(\gamma _n\) to capturing the asymptotics of the quantities \(W_{[n,p]}(C)\). We make this explicit in the next subsection.

4.1 Connection to Small-Ball Estimates for the Gaussian Case

For a convex body \(C\subset \mathbb{R }^N\), positive integers \(n\le N\), and \(p\in [-1, \infty ]\), we define

$$\begin{aligned} A_{n,p}(C) := \frac{ W_{[n,1]}(C)}{ W_{[n,-p]}(C)} = \frac{\big (\int _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}\text{ d }\nu _{N,n}(F) \big )^{\tfrac{1}{n}}}{\big (\int _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}^{-p}\text{ d }\nu _{N,n}(F) \big )^{-\tfrac{1}{pn}}}. \end{aligned}$$
(4.10)

By Hölder’s inequality, \(A_{n,p}(C) \ge 1\) and

$$\begin{aligned}{}[-1,\infty )\ni p \mapsto A_{n,p}(C) \end{aligned}$$

is an increasing function.

Proposition 4.4

Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Let \(C\subset \mathbb{R }^N\) be a convex body and \(p\in [0,N-n+1-\text{ e }^{-n(N-n+1)}]\). Then

$$\begin{aligned} \Big (\mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({GC}\big )}^{-p}\Big )^{-\tfrac{1}{pn}} \ge \frac{\Big (\mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({GC}\big )}\Big )^{\tfrac{1}{n}}}{c_0A_{n,p}(C)}, \end{aligned}$$
(4.11)

where \(c_0\) is a positive numeric constant. Consequently, for each \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\mathrm{vol}}_{n}{\big ({GC}\big )}^{1/n}\le \frac{\varepsilon }{cA_{n,p}(C)} \Big (\mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({GC}\big )}\Big )^{1/n}\Big ) \le \varepsilon ^{pn}, \end{aligned}$$
(4.12)

where \(c\) is a positive numeric constant.

Proof

By Proposition 4.1 and Lemma 4.2, we get

$$\begin{aligned} A_{n,p}(C) \simeq \frac{ \big ( \mathbb{E }\,{\text{ vol }}_{n} {\big ({GC}\big )}\big )^{\tfrac{1}{n}}}{\Big ( \mathbb{E }\,{\text{ vol }}_{n}{\big ({GC}\big )}^{-p}\Big )^{-\tfrac{1}{pn}} }, \end{aligned}$$
(4.13)

which implies (4.11). Using the latter equivalence and Markov’s inequality, for any \(\eta >0\), we have

$$\begin{aligned}&\mathbb{P }\Big ({\text{ vol }}_{n}{\big ({GC}\big )}^{1/n} \le \frac{\eta }{A_{n,p}(C)} \big (\mathbb{E }{\text{ vol }}_{n}{\big ({GC}\big )}\big )^{\tfrac{1}{n}}\Big ) \\&\quad \le \mathbb{P }\Big ({\text{ vol }}_{n}{\big ({GC}\big )}^{1/n} \le c\eta \big (\mathbb{E }{\text{ vol }}_{n}{\big ({GC}\big )}^{-p}\big )^{-\tfrac{1}{pn}} \Big )\\&\quad \le (c\eta )^{pn}, \end{aligned}$$

where \(c\) is a positive numeric constant. The small-ball estimate (4.12) follows on substituting \(\varepsilon = c\eta \).\(\square \)
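The Markov step used above is elementary but worth isolating: whenever \(\mathbb{E }\,Y^{-p}<\infty \), one has \(\mathbb{P }\big (Y\le \varepsilon (\mathbb{E }\,Y^{-p})^{-1/p}\big )\le \varepsilon ^p\). The following sketch demonstrates this mechanism numerically; the choice \(Y\sim \chi ^2\) with \(5\) degrees of freedom (for which \(\mathbb{E }\,Y^{-1}=1/3\) exactly), the seed and the value of \(\varepsilon \) are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
trials, p, eps = 200000, 1.0, 0.3

# Y ~ chi^2 with 5 degrees of freedom; E Y^{-1} = 1/(5-2) = 1/3
Y = np.sum(rng.standard_normal((trials, 5)) ** 2, axis=1)
neg_moment = float(np.mean(Y ** (-p)))        # empirical E Y^{-p}, close to 1/3
threshold = eps * neg_moment ** (-1.0 / p)    # eps * (E Y^{-p})^{-1/p}
frac = float(np.mean(Y <= threshold))         # should be at most eps**p
```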

As (4.12) indicates, we have reduced the small-ball problem to bounding the ratio

$$\begin{aligned} A_{n,p}(C)=\frac{ W_{[n,1]}(C)}{ W_{[n,-p]}(C)}. \end{aligned}$$

For \(C=B_1^N\) and \(C=B_{\infty }^N\), bounds for the numerators \(W_{[n,1]}(B_1^N)\) and \(W_{[n,1]}(B_{\infty }^N)\) are well-known. We state them here in their Gaussian form (cf. (4.3)) as this is more convenient for our purpose. These are also well-known results from the perspective of Gaussian random polytopes.

Proposition 4.5

Let \(N\ge n\) and let \(G\) be an \(n\times N\) matrix with independent standard Gaussian entries. Then, for \(N\le \text{ e }^n\), we have

$$\begin{aligned} \Big (\mathbb{E }\,{\mathrm{vol}}_{n}{\big ({G B_{1}^{N}}\big )}\Big )^{\tfrac{1}{n}} \simeq \sqrt{\frac{\ln (2N/n)}{n}}. \end{aligned}$$
(4.14)

For any \(N\ge n\), we have

$$\begin{aligned} \Big (\mathbb{E }\,{\mathrm{vol}}_{n}{\big ({G B_{\infty }^{N}}\big )}\Big )^{\tfrac{1}{n}} \simeq \frac{N}{\sqrt{n}}. \end{aligned}$$
(4.15)

The intrinsic volumes of \(B_1^N\) are computed explicitly in [12]. For \(B_{\infty }^N\), one has \(V_n(B_{\infty }^N)=2^n\binom{N}{n}\). Alternatively, taking the view of random sets generated by the Gaussian measure, the estimates in Proposition 4.5 have been proved by numerous methods. One approach for the upper bounds involves volume estimates for the convex hull and Minkowski sum of arbitrary points in \(\mathbb{R }^n\). As these will be needed again in Sect. 8, we record them here.

Theorem 4.6

Let \(N\ge n\) and let \(x_1,\ldots ,x_N\in \mathbb{R }^n\) with \(|x_i|\le M\) for \(i=1,\ldots ,N\). Then

$$\begin{aligned} \Big ({\mathrm{vol}}_{n}{\big ({[x_1\ldots x_N]B_1^N}\big )}\Big )^{1/n} \le \frac{cM\sqrt{\ln (2N/n)}}{n}, \end{aligned}$$

where \(c\) is a positive numeric constant.

The latter theorem can be proved in a number of ways, see [6, 8, 9, 19, 34]. For zonotopes, we use the following elementary lemma. Here we use \(|I|\) to denote the cardinality of the set \(I\).

Lemma 4.7

Let \(N\ge n\) and let \(x_1,\ldots ,x_N\in \mathbb{R }^n\). Then

$$\begin{aligned} {\mathrm{vol}}_{n}{\Big ({\sum _{i=1}^N[-x_i,x_i]}\Big )} = 2^n \sum _{\begin{array}{c} I\subset \{1,\ldots ,N\}\\ |I|=n \end{array}}|\mathrm{det}[x_i]_{i\in I}|. \end{aligned}$$
(4.16)

Moreover, if \(|x_i|\le M\) for each \(i=1,\ldots ,N\), then

$$\begin{aligned} {\mathrm{vol}}_{n}{\big ({[x_1\ldots x_N]B_{\infty }^N}\big )}^{1/n} \le \frac{cNM}{n}, \end{aligned}$$

where \(c\) is a positive numeric constant.

Remark 4.8

Analogous volume estimates for \({\text{ vol }}_{n}{\big ([x_1\ldots x_N]B_{p}^N\big )}\), where \(1\le p\le \infty \), are proved in [36].

Proof

(Sketch) The first assertion (4.16) is the well-known zonotope volume formula (see, e.g., [58, p. 73]). The second assertion follows from the first since

$$\begin{aligned} {\text{ vol }}_{n}{\big ([x_1\ldots x_N]B_{\infty }^N\big )}= 2^n \sum _{|I|=n} d_I \le 2^n\binom{N}{n} \max _{|I|=n} d_I, \end{aligned}$$

where \(d_I=|\mathrm{det}([x_i]_{i\in I})|\). We conclude by using the estimate \(\binom{N}{n}\le (\text{ e }N/n)^n\) together with Hadamard's determinant inequality: \(d_I\le \prod _{i\in I} |x_i|\le M^n\).\(\square \)
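The zonotope volume formula (4.16) can be verified deterministically in the plane by comparing it with the area of the convex hull of all sign combinations \(\sum _i \varepsilon _i x_i\), \(\varepsilon _i\in \{-1,1\}\) (these contain all vertices of the zonogon). The generators below are an arbitrary illustrative choice of ours; the hull is computed with Andrew's monotone chain and the shoelace formula.

```python
from itertools import combinations, product

gens = [(1, 0), (0, 1), (1, 1), (2, -1)]   # illustrative generators in R^2

# formula (4.16) with n = 2: 2^2 * sum over pairs of |det|
formula = 4 * sum(abs(a[0] * b[1] - a[1] * b[0])
                  for a, b in combinations(gens, 2))

# all points sum_i eps_i x_i; the zonogon is their convex hull
pts = sorted({tuple(sum(e * g[j] for e, g in zip(signs, gens)) for j in range(2))
              for signs in product((-1, 1), repeat=len(gens))})

def chain(points):  # one half of Andrew's monotone-chain convex hull
    h = []
    for q in points:
        while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (q[1] - h[-2][1])
                               - (h[-1][1] - h[-2][1]) * (q[0] - h[-2][0])) <= 0:
            h.pop()
        h.append(q)
    return h

hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
hull_area = 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                          - hull[(i + 1) % len(hull)][0] * hull[i][1]
                          for i in range(len(hull))))
```

Both computations return the same area, as (4.16) predicts.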

Thus if \(g_1,\ldots , g_N\) denote the columns of \(G\) in Proposition 4.5, then the upper bound for \(\mathbb{E }\,{\text{ vol }}_{n}{\big ({G B_{1}^{N}}\big )}\) follows from Theorem 4.6 and the fact that with high probability, \(|g_i|\simeq \sqrt{n}\) (cf. (4.6)). The lower bound, for \(N\ge 2n\), follows from Gluskin’s lemma [34] (see also [48, 61]) or by computing the in-radius of \(GB_1^N\) as in [30] (which treats the case of vectors distributed according to \(\lambda _{D_n}\)); for \(N=n\), one can simply estimate the determinant: \((\mathbb{E }|\mathrm{det}([g_1\ldots g_n])|)^{1/n}\simeq \sqrt{n}\) (e.g., take \(N=n\) in Lemma 4.2). For asymptotic values as \(N\rightarrow \infty \) (in the non-symmetric case), see [3]. Similarly, for \(GB_{\infty }^N=\sum _{i=1}^N[-g_i,g_i]\) one applies (4.16) and the fact that \((\mathbb{E }|\mathrm{det}([g_1\ldots g_n])|)^{1/n}\simeq \sqrt{n}\).

To estimate the quantities \(W_{[n,-p]}(B_1^{N})\) and \(W_{[n,-p]}(B_{\infty }^{N})\), we require additional machinery which we describe in the next two sections.

5 Generalized Intrinsic Volumes

In this section we delve further into properties of the quantities \(W_{[n,p]}(C)\) for an arbitrary convex body \(C\subset \mathbb{R }^N\). As in Sect. 4, for every \(p\in [-\infty , \infty ]\) and \(1\le n\le N-1\), we set

$$\begin{aligned} W_{[n,p]}(C) :=\Big (\int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_F C}\big )}^{p} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{np}}. \end{aligned}$$
(5.1)

Note that \( W_{[n]}(C):= W_{[n,1]}(C)\) is simply a constant multiple (depending on \(N\) and \(n\)) of the \(n\)th intrinsic volume of \(C\). We also set \( W_{[N]}(C):= {\text{ vol }}_{N}{\big ({C}\big )}^{1/N}\). The Aleksandrov–Fenchel inequality (e.g., [70, Chap. 6]) implies that for \(1\le n_{1}\le n_{2} \le N\),

$$\begin{aligned} \frac{W_{[n_{2}]}(C)}{ W_{[n_{2}]}(B_{2}^{N})} \le \frac{W_{[n_{1}]}(C)}{ W_{[n_{1}]}(B_{2}^{N})}. \end{aligned}$$

The latter inequality, together with the fact that \({\text{ vol }}_{N}{\big ({B_2^N}\big )}^{1/N}\simeq \frac{1}{\sqrt{N}}\), implies that

$$\begin{aligned} c_{1}\sqrt{\frac{N}{n}} {\text{ vol }}_{N}{\big ({C}\big )}^{\tfrac{1}{N}} \le W_{[n]}(C) \le \frac{c_{2}}{\sqrt{n}} W(C). \end{aligned}$$
(5.2)

We now define variants of the normalized affine quermassintegrals, introduced by Lutwak [51]. For a convex body \(C\subset \mathbb{R }^N\) of volume one, set

$$\begin{aligned} \Phi _{[n]}(C) := W_{[n,-N]} (C) := \Big (\int _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}^{-N}\text{ d } \nu _{N,n}(F)\Big )^{-\tfrac{1}{nN}}. \end{aligned}$$
(5.3)

The fact that \(\Phi _{[n]}(C)\) is invariant under volume-preserving affine transformations was proved by Grinberg [37, Theorem 2] (see also [26]). It was conjectured by Lutwak in [52] that if \(C\subset \mathbb{R }^N\) is a convex body of \({\text{ vol }}_{N}{\big ({C}\big )}=1\), then for \(1<n<N-1\),

$$\begin{aligned} \Phi _{[n]}(C)\ge \Phi _{[n]}(D_N), \end{aligned}$$
(5.4)

where \(D_N\subset \mathbb{R }^N\) is the Euclidean ball of volume one, with equality if and only if \(C\) is an ellipsoid. Here we follow the normalization used in [22]. When \(n=N-1\), inequality (5.4) is true and known as the Petty projection inequality; when \(n=1\) and the centroid of \(C\) is the origin, (5.4) is the Blaschke–Santalo inequality; see [27, Chap. 9] and the references and notes therein. In [22], it is conjectured that the quantities \(\Phi _{[n]}(C)\) are asymptotically of the same order as \(\Phi _{[n]} (D_N)\), i.e., if \(C\subset \mathbb{R }^N\) is a convex body of \({\text{ vol }}_{N}{\big ({C}\big )}=1\), then for \(1<n<N-1\),

$$\begin{aligned} \Phi _{[n]}(C) \simeq \sqrt{\frac{N}{n}}. \end{aligned}$$

In [22], the upper bound is shown to be correct up to a logarithmic factor. In this section, we verify that the lower bound holds as well.

Theorem 5.1

Let \(C\subset \mathbb{R }^N\) be a convex body of volume one. Then for \(1\le n \le N-1\),

$$\begin{aligned} \Phi _{[n]}(C)\ge c\sqrt{\frac{N}{n}}, \end{aligned}$$

where \(c\) is a positive numeric constant.

The proof uses a duality argument. The first ingredient is the following theorem due to Grinberg [37]; see also [28].

Theorem 5.2

Let \(K\subset \mathbb{R }^N\) be a compact set of volume \(1\). Then

$$\begin{aligned} \Big ( \int _{G_{N,n}} {\mathrm{vol}}_{n}{\big ({K\cap F}\big )}^{N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{nN}} \le \Big ( \int _{G_{N,n}} {\mathrm{vol}}_{n}{\big ({D_{N}\cap F}\big )}^{N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{nN}}. \end{aligned}$$

We will also use the Blaschke–Santaló inequality [69].

Theorem 5.3

Let \(C\subset \mathbb{R }^N\) be a convex body with center of mass at the origin. Then

$$\begin{aligned} ({\mathrm{vol}}_{N}{\big ({C}\big )}{\mathrm{vol}}_{N}{\big ({C^{\circ }}\big )})^{\tfrac{1}{N}} \le \omega _N^{\tfrac{2}{N}}, \end{aligned}$$
(5.5)

with equality if and only if \(C\) is an ellipsoid.

The proof in the origin-symmetric case can be found in, e.g., [27], together with additional notes and references; we also refer to the introduction of [31] for a discussion relating the role of the center of mass and the Santalo point of \(C\).

The reverse inequality, proved by Bourgain and Milman [16], will also be used.

Theorem 5.4

Let \(C\subset \mathbb{R }^N\) be a convex body with the origin in its interior. Then

$$\begin{aligned} c \omega _N^{\tfrac{2}{N}}\le ({\mathrm{vol}}_{N}{\big ({C}\big )} {\mathrm{vol}}_{N}{\big ({C^{\circ }}\big )})^{\tfrac{1}{N}}, \end{aligned}$$
(5.6)

where \(c\) is a positive numeric constant.

See [44] for the best-known constant \(c\) in the latter theorem in the origin-symmetric case; for recent developments and further references, see [31].
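For a concrete illustration of the two-sided bounds in Theorems 5.3 and 5.4, one can evaluate both sides exactly for the polar pair \(C=B_{\infty }^N\), \(C^{\circ }=B_1^N\), using \({\text{ vol }}_{N}{\big ({B_{\infty }^N}\big )}=2^N\), \({\text{ vol }}_{N}{\big ({B_1^N}\big )}=2^N/N!\) and \(\omega _N=\pi ^{N/2}/\Gamma (N/2+1)\). The range of dimensions and the numeric lower threshold below are our own illustrative choices.

```python
from math import gamma, pi

ratios = []
for N in range(2, 13):
    # (vol(C) vol(C°))^{1/N} for the cube/cross-polytope pair
    vol_product = (4.0 ** N / gamma(N + 1)) ** (1.0 / N)
    # omega_N^{2/N}, the (sharp) Santalo upper bound
    omega_term = (pi ** (N / 2.0) / gamma(N / 2.0 + 1)) ** (2.0 / N)
    ratios.append(vol_product / omega_term)
# Blaschke-Santalo: each ratio is below 1 (strictly, since the cube is not an
# ellipsoid); Bourgain-Milman: the ratios stay bounded away from 0.
```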

Proof of Theorem 5.1

Without loss of generality we can assume that the center of mass of \(C\) is the origin. Let \(F\in G_{N,n}\). Applying Theorem 5.4, we have

$$\begin{aligned} {\text{ vol }}_{n}{\big ({P_{F} C }\big )}^{-\tfrac{1}{n}} \le cn{\text{ vol }}_{n}{\big ({(P_{F} C)^{\circ }}\big )}^{\tfrac{1}{n}} = cn{\text{ vol }}_{n}{\big ({C^{\circ }\cap F}\big )}^{\tfrac{1}{n}}, \end{aligned}$$

where \(c\) is a positive numeric constant. Set \(K= C^{\circ }\) and write \(\widetilde{K} := K/{\text{ vol }}_{N}{\big ({K}\big )}^{1/N}\). Since \({\text{ vol }}_{N}{\big ({C}\big )}=1\), Theorem 5.3 gives the upper bound \({\text{ vol }}_{N}{\big ({K}\big )}^{1/N}\le c/N\), where \(c\) is a positive numeric constant, hence

$$\begin{aligned} {\text{ vol }}_{n}{\big ({K \cap F}\big )}^{\tfrac{1}{n}} = {\text{ vol }}_{N}{\big ({K}\big )}^{\tfrac{1}{N}}{\text{ vol }}_{n}{\big ({\widetilde{K} \cap F}\big )}^{\tfrac{1}{n}} \le \frac{c}{N}{\text{ vol }}_{n}{\big ({\widetilde{K} \cap F}\big )}^{\tfrac{1}{n}}. \end{aligned}$$

The latter two inequalities imply that

$$\begin{aligned} \Big (\int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_{F} C}\big )}^{-N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{Nn}} \le \frac{c_1n}{N} \Big ( \int _{G_{N,n}} {\text{ vol }}_{n}{\big ({\widetilde{K} \cap F}\big )}^{N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{nN}}, \end{aligned}$$

where \(c_1\) is a positive numeric constant. Now we apply Theorem 5.2 to obtain

$$\begin{aligned} \Big (\int _{G_{N,n}} {\text{ vol }}_{n}{\big ({P_{F} C}\big )}^{-N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{Nn}}&\le \frac{c_1n}{N}\Big ( \int _{G_{N,n}} {\text{ vol }}_{n}{\big ({D_{N}\cap F}\big )}^{N} \text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{nN}}\\&\le c_2\sqrt{\frac{n}{N}}, \end{aligned}$$

where \(c_2\) is a positive numeric constant, from which the result follows.\(\square \)

Lastly, we will make use of a result from [22] (Theorem 3.2 and the subsequent remark (3.22)). For completeness, we give the proof. If \(C\subset \mathbb{R }^N\) is a convex body with the origin in its interior and \(p\in [-\infty , \infty ]\), define its generalized mean-width by

$$\begin{aligned} W_{p} (C) := \Big (\int _{S^{N-1}} h_{C}(\theta )^{p} \text{ d }\sigma (\theta )\Big )^{\tfrac{1}{p}}. \end{aligned}$$
(5.7)

Proposition 5.5

Let \(C\subset \mathbb{R }^N\) be a convex body with the origin in its interior. Then for each \(p\ge 1\),

$$\begin{aligned} W_{[n,-p]}(C) \ge \frac{c}{\sqrt{n}} W_{-np}(C), \end{aligned}$$
(5.8)

where \(c\) is a positive numeric constant.

Proof

Let \(F\in G_{N,n}\) and write \(S_F=S^{N-1}\cap F\); let \(\sigma _F\) denote the Haar probability measure on \(S_F\). By Theorem 5.4,

$$\begin{aligned} {\text{ vol }}_{n}{\big ({P_FC}\big )}^{-p} \le \frac{{\text{ vol }}_{n}{\big ({(P_FC)^{\circ }}\big )}^p}{c^{np} \omega _n^{2p}}. \end{aligned}$$

Using the fact that \(h_{P_FC}(\theta ) = h_C(\theta )\) for \(\theta \in S_F\), together with Hölder’s inequality, we have

$$\begin{aligned} {\text{ vol }}_{n}{\big ({(P_FC)^{\circ }}\big )}^p&= \Big (\omega _n \int _{S_F}h_C^{-n}(\theta )\text{ d }\sigma _F(\theta )\Big )^p \le \omega _n^p\int _{S_F}h_C^{-np}(\theta )\text{ d }\sigma _F(\theta ). \end{aligned}$$

The latter two inequalities imply that

$$\begin{aligned} \Big (\int _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_FC}\big )}^{-p}\text{ d }\nu _{N,n}(F) \Big )^{\tfrac{1}{np}}&\le c_1\sqrt{n} \Big (\int _{G_{N,n}}\int _{S_F}h_C^{-np}(\theta ) \text{ d }\sigma _F(\theta )\text{ d }\nu _{N,n}(F)\Big )^{\tfrac{1}{np}}\\&= c_1\sqrt{n}\Big (\int _{S^{N-1}}h_C^{-np}(\theta )\text{ d }\sigma (\theta ) \Big )^{\tfrac{1}{np}}\\&= c_1\sqrt{n}W^{-1}_{-np}(C), \end{aligned}$$

where \(c_1\) is a positive numeric constant.\(\square \)

We refer the reader to [22] for further information on the quantities \(W_{[n,p]}(C)\).

6 Bounds for Generalized Intrinsic Volumes of \(B_1^N\) and \(B_{\infty }^N\)

By Proposition 4.4, we can obtain small-ball estimates in the Gaussian case by bounding the quantities \(A_{n,p}(B_1^N)\) and \(A_{n,p}(B_{\infty }^N)\). We will invoke Proposition 5.5, which relates \(W_{[n,-p]}(C)\) and the generalized mean-width \(W_{-p}(C)\) (defined in (5.7)) and thus we start by estimating \(W_{-p}(B_1^N)\).

Proposition 6.1

Let \(1\le p\le N\). Then

$$\begin{aligned} W_{-p}(B_{1}^{N}) \simeq \frac{\sqrt{ \ln {\frac{2N}{p}}}}{\sqrt{N}}. \end{aligned}$$
(6.1)

Proof

Using integration in spherical coordinates, one may verify that

$$\begin{aligned} W_{-p}(C) \simeq \frac{1}{\sqrt{N}} \Big ( \int _\mathbb{R ^N} h_{C}^{-p}(x) \text{ d }\gamma _{N}(x) \Big )^{-\tfrac{1}{p}} \end{aligned}$$

for all \(0<p\le \frac{N}{2}\). Note that for all \(r>0\),

$$\begin{aligned} \gamma _{N}\Big (\{ x: h_{B_{1}^{N}} (x)\le r\}\Big ) = \gamma _{N}\Big ( r [-1,1]^N\Big ) = \big (1-2\Phi (r)\big )^{N}, \end{aligned}$$

where

$$\begin{aligned} \Phi (r):=\frac{1}{\sqrt{2\pi }}\int _{r}^{\infty }\text{ e }^{-x^2/2}\text{ d }x. \end{aligned}$$

Assume first that \(p\le c_1N\) for some numeric constant \(c_{1}\in (0,1)\) to be specified later. Write

$$\begin{aligned} \int _{\mathbb{R }^N} h_{B_{1}^{N}}^{-p}(x) \text{ d }\gamma _{N}(x)&= p\int _{0}^{\infty } \frac{\Big (1-2\Phi (s)\Big )^{N}}{s^{p+1}} \text{ d }s \\&= p\int _{0}^{1} \frac{\Big (1-2\Phi (s)\Big )^{N}}{s^{p+1}} \text{ d }s + p\int _{1}^{\infty } \frac{\Big (1-2\Phi (s)\Big )^{N}}{s^{p+1}} \text{ d }s. \end{aligned}$$

Using the inequality \(1-2\Phi (r)\le \sqrt{\frac{2}{\pi }} r\) for \(r\in [0,1]\), we choose \(c_1\in (0,1)\) to ensure that

$$\begin{aligned} p\int _{0}^{1} \frac{\big (1-2\Phi (s)\big )^{N}}{s^{p+1}}\text{ d }s \le p \Big ( \frac{ 2}{\pi }\Big )^{\tfrac{N}{2}} \int _{0}^{1} s^{N-p-1} \text{ d }s \le \Big ( \frac{ 2}{\pi }\Big )^{\tfrac{N}{2}}. \end{aligned}$$

For the remainder of the integral, we use the rough estimate

$$\begin{aligned} p\int _{1}^{\infty } \frac{\big (1-2\Phi (s)\big )^{N}}{s^{p+1}} \text{ d }s \le p\int _{1}^{\infty } \frac{(1-2\text{ e }^{-8s^2})^N}{s^{p+1}}\text{ d }s. \end{aligned}$$

A routine calculation shows that the integrand

$$\begin{aligned} g(s):=\frac{(1-2\text{ e }^{-8s^2})^N}{s^{p+1}} \end{aligned}$$

is increasing on \((1,s_0)\) where \(s_0 :=\frac{1}{3}\sqrt{\ln (2N/p)}\). Thus

$$\begin{aligned} p\int _{1}^{s_{0}} g(s) \text{ d }s \le p(s_{0}-1) g(s_{0}) \le \frac{p}{s_{0}^{p}} \end{aligned}$$

and

$$\begin{aligned} p\int _{s_{0}}^{\infty } \frac{1}{s^{p+1}} \text{ d }s = \frac{1}{s_{0}^{p}}. \end{aligned}$$

Combining each of the estimates yields

$$\begin{aligned} p\int _{0}^{\infty } \frac{\big (1-2\Phi (s)\big )^{N}}{ s^{p+1}}\text{ d }s \le \frac{p+2}{s_{0}^{p}}. \end{aligned}$$
(6.2)

The reverse inequality is proved similarly.

Lastly, we treat the case \(c_1N\le p \le N\). Note that

$$\begin{aligned} W_{-N}(B_{1}^{N}) = \Big (\frac{{\text{ vol }}_{N}{\big ({B_{2}^{N}}\big )}}{{\text{ vol }}_{N}{\big ({B_{\infty }^{N}}\big )}}\Big )^{\tfrac{1}{N}} \simeq \frac{1}{\sqrt{N}}, \end{aligned}$$

hence Hölder’s inequality yields

$$\begin{aligned} \frac{1}{\sqrt{N}} \simeq W_{-{c_1 N}}(B_{1}^{N}) \ge W_{-p}(B_{1}^{N}) \ge W_{-N}(B_{1}^{N}) \simeq \frac{1}{\sqrt{N}}. \end{aligned}$$

\(\square \)
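The distribution-function integral at the heart of the above proof can also be evaluated numerically against the bound (6.2). The sketch below is illustrative only: the pair \((N,p)\) is our own choice, taken with \(N/p\) large enough that \(s_0=\frac{1}{3}\sqrt{\ln (2N/p)}>1\), and the integral is truncated at \(s=20\), far beyond where the integrand decays.

```python
import numpy as np
from math import erf, log, sqrt

N, p = 100000, 2
s = np.linspace(1e-6, 20.0, 200001)
# 1 - 2*Phi(s) = P(|g| <= s) = erf(s/sqrt(2)), with Phi the Gaussian tail
inner = np.array([erf(x / sqrt(2.0)) for x in s])
integrand = inner ** N / s ** (p + 1)
# trapezoidal rule for p * int_0^infty (1 - 2*Phi(s))^N / s^{p+1} ds
integral = p * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(s)) / 2.0)

s0 = sqrt(log(2.0 * N / p)) / 3.0
# consistency with (6.1): integral^{-1/p} should be of order sqrt(log(2N/p))
ratio = integral ** (-1.0 / p) / sqrt(log(2.0 * N / p))
```

The computed integral sits well below the bound \((p+2)/s_0^p\) from (6.2), and `ratio` is of constant order, as (6.1) predicts.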

Proposition 6.2

Let \(N\ge n\) and let \(\delta \ge 1\). Then for \(1\le p\le \big (\frac{N}{n}\big )^{1-\frac{1}{\delta ^{2}}}\), we have

$$\begin{aligned} A_{n,p}(B_{1}^{N}) \le c^{\prime } \delta . \end{aligned}$$

Moreover, for \(N\le n\text{ e }^{\delta ^{2}}\),

$$\begin{aligned} A_{n,N}(B_{1}^{N}) \le c^{\prime \prime }\delta , \end{aligned}$$

where \(c^{\prime }\) and \(c^{\prime \prime }\) are positive numeric constants.

Proof

Set \(p_{0}= \big (\frac{N}{n}\big )^{1-\frac{1}{\delta ^{2}}}\). By Proposition 5.5 and Hölder’s inequality, for \(p\le p_{0}\), we have

$$\begin{aligned} W_{[n,-p]}(B_{1}^{N}) \ge \frac{c}{\sqrt{n}} W_{-np}(B_{1}^{N}) \ge \frac{c}{\sqrt{n}} W_{-np_{0}}(B_{1}^{N}). \end{aligned}$$

By Proposition 6.1, the latter quantity is at least as large as

$$\begin{aligned} \frac{c^{\prime }\sqrt{\ln {(2N/(np_{0}))}}}{ \sqrt{nN}}= \frac{c^{\prime }}{\delta }\frac{\sqrt{\ln { (2N/n)}}}{ \sqrt{nN}}. \end{aligned}$$

Moreover, by Proposition 4.1, Lemma 4.2 and Proposition 4.5, we have

$$\begin{aligned} W_{[n,1]}(B_{1}^{N}) \simeq \frac{\sqrt{\ln {(2N/n)}}}{\sqrt{nN}}. \end{aligned}$$
(6.3)

Combining the latter two estimates, we have

$$\begin{aligned} A_{n,p} (B_{1}^{N}) = \frac{W_{[n,1]}(B_1^N)}{W_{[n,-p]} (B_1^N)}\le c\delta . \end{aligned}$$

Finally, for any \(p\le N\), Hölder’s inequality and Theorem 5.1 imply that

$$\begin{aligned} W_{[n,-p]}(B_{1}^{N}) \ge W_{[n,-N]}(B_{1}^{N}) = W_{[n,-N]}(\widetilde{B_{1}^{N}}) {\text{ vol }}_{N} {\big ({B_{1}^{N}}\big )}^{\tfrac{1}{N}} \ge \frac{c}{\sqrt{nN}}, \end{aligned}$$
(6.4)

where \(\widetilde{B_1^N}\) is the volume-one homothet of \(B_1^N\). Thus by (6.3), (6.4) and the definition of \(A_{n,p}\) we get that

$$\begin{aligned} A_{n,p} (B_{1}^{N}) \le c \sqrt{\ln {(2N/n)}} \le c^{\prime \prime }\delta , \end{aligned}$$

provided that \(N\le n\text{ e }^{\delta ^2}\).\(\square \)

Proposition 6.3

Let \(n\le N\) and let \(0< p\le N\). Then

$$\begin{aligned} A_{n,p}(B_{\infty }^N)\le c_0, \end{aligned}$$

where \(c_0\) is a positive numeric constant.

Proof

Since \(W(B_{\infty }^N)\le \mathrm{diam}(B_{\infty }^N)=2\sqrt{N}\), (5.2) yields

$$\begin{aligned} W_{[n,1]}(B_{\infty }^N)=W_{[n]}(B_{\infty }^N) \le \frac{c_2}{\sqrt{n}} W(B_{\infty }^N)\le 2c_2\sqrt{\frac{N}{n}}. \end{aligned}$$

By Theorem 5.1, we have

$$\begin{aligned} W_{[n,-N]}(B_{\infty }^N)=2\Phi _{[n]}((1/2)B_{\infty }^N) \ge c_1\sqrt{\frac{N}{n}}, \end{aligned}$$

where \(c_1\) is a positive numeric constant. Since \(W_{[n,-p]}(B_{\infty }^N)\ge W_{[n,-N]}(B_{\infty }^N)\) whenever \(0<p\le N\), we obtain

$$\begin{aligned} A_{n,p}(B_{\infty }^N)=\frac{W_{[n,1]}(B_{\infty }^N)}{W_{[n,-p]}(B_{\infty }^N)}\le \frac{2c_2}{c_1}. \end{aligned}$$

\(\square \)
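The width bounds used in the proof can be checked by Monte Carlo, using the fact that the support function of \(B_{\infty }^N=[-1,1]^N\) is \(h(\theta )=\Vert \theta \Vert _1\), so that \(W(B_{\infty }^N)=2\mathbb{E }\Vert \theta \Vert _1\simeq \sqrt{N}\) for \(\theta \) uniform on \(S^{N-1}\). The following sketch is illustrative only (the function name, sample size and dimension are arbitrary choices):

```python
import math, random

def mean_width_cube_mc(N, trials=20000, seed=0):
    # Monte Carlo estimate of W(B_inf^N) = 2 E ||theta||_1 over theta uniform
    # on S^{N-1}; the support function of [-1,1]^N is h(theta) = ||theta||_1
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        g = [rng.gauss(0.0, 1.0) for _ in range(N)]
        norm = math.sqrt(sum(x * x for x in g))
        total += sum(abs(x) for x in g) / norm
    return 2 * total / trials

N = 50
W = mean_width_cube_mc(N)
# the bound used in the proof: W(B_inf^N) <= diam(B_inf^N) = 2 sqrt(N)
assert W <= 2 * math.sqrt(N)
# in fact W(B_inf^N) = 2 sqrt(2N/pi) (1 + o(1)), so here W >= sqrt(N) as well
assert W >= math.sqrt(N)
```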

Remark 6.4

The proof of Proposition 6.3 shows that if \(C\subset \mathbb{R }^N\) is a convex body with \({\text{ vol }}_{N}{\big ({C}\big )}=1\) and \(W(C)\le c\sqrt{N}\), then for any \(0<p\le N\) we have \(A_{n,p}(C)\le c^{\prime }\), where \(c^{\prime }\) is a constant that depends only on \(c\). Any zonoid in Löwner's position satisfies this property (see [62]). In particular, there is a positive numeric constant \(c_1\) such that \(A_{n,p}(B_q^N)\le c_1\) whenever \(0<p\le N\) and \(2\le q\le \infty \). Note that by Urysohn's inequality (see, e.g., [67, Corollary 1.4]), the inequality \(W(C)\ge c\sqrt{N}\) holds for any convex body \(C\) satisfying \({\text{ vol }}_{N}{\big ({C}\big )}=1\).

6.1 Small-Ball Estimates in the Gaussian Case

The results of the previous subsection lead to the following small-ball estimates.

Proposition 6.5

Let \(n\le N\le \text{ e }^n\), let \(\varepsilon \in (0,1)\) and let \(\delta >1\). Then

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\mathrm{vol}}_{n}{\big ({{K_N}}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_1\delta } \mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \Big ) \le \varepsilon ^{N^{1-1/\delta ^2}n^{1/\delta ^2}}, \end{aligned}$$

where \(c_1\) is a positive numeric constant. Moreover, if \(N\le n\text{ e }^{\delta ^2}\), then

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\mathrm{vol}}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_2\delta } \mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))}, \end{aligned}$$

where \(c_2\) is a positive numeric constant.

Proof

Let \(p_0=\big (\frac{N}{n}\big )^{1-1/\delta ^2}\). Then \(p_0\le N-n+1\), hence Propositions 4.4 and 6.2 imply that

$$\begin{aligned}&\mathbb{P }_{\otimes \gamma _n}\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_1\delta } \mathbb{E }_{\otimes \gamma _n}{\text{ vol }}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \Big )\\&\quad \le \mathbb{P }_{\otimes \gamma _n}\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{cA_{n,p_0}(B_1^N)} \Big (\mathbb{E }_{\otimes \gamma _n}{\text{ vol }}_{n}{\big ({K_N}\big )}\Big )^{\tfrac{1}{n}}\Big )\\&\quad \le \varepsilon ^{np_0}, \end{aligned}$$

where \(c_1\) is a positive numeric constant. If \(N\le n\text{ e }^{\delta ^2}\), we take \(p_1=N-n+1-\text{ e }^{-n(N-n+1)}\) and argue as above.\(\square \)

For zonotopes generated by the Gaussian measure we have the following.

Proposition 6.6

Let \(N\ge n\) and let \(\varepsilon \in (0,1)\). Then

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_1} \mathbb{E }_{\otimes \gamma _n}{\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \Big ) \le \varepsilon ^{n(N-n+1-o(1))}, \end{aligned}$$

where \(c_1\) is a positive numeric constant.

Proof

Use Propositions 4.4 and 6.3 and argue as in the proof of the previous proposition.\(\square \)

We conclude this section with a complementary lower bound that shows Proposition 6.6 is essentially optimal.

Proposition 6.7

Let \(N\ge n\) and let \(\varepsilon \in (0,1/2)\). Then

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_2} \mathbb{E }_{\otimes \gamma _n} {\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \Big ) \ge \varepsilon ^{n(N-n+1)}, \end{aligned}$$

where \(c_2\) is a positive numeric constant.

Proof

Let \(G\) be an \(n\times N\) matrix with independent standard Gaussian entries. Then \(Z_N= GB_{\infty }^N \subset \sqrt{N}GB_2^N\), hence

$$\begin{aligned} {\text{ vol }}_{n}{\big ({Z_N}\big )} \le N^{n/2} \mathrm{det}(GG^*)^{1/2} \omega _n. \end{aligned}$$

Using the latter inequality and Proposition 4.5, we have

$$\begin{aligned} \mathbb{P }_{\otimes \gamma _n}\Big ({\text{ vol }}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \le \frac{\varepsilon }{c_2} \mathbb{E }_{\otimes \gamma _n}{\text{ vol }}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}}\Big )\ge \mathbb{P }_{\otimes \gamma _n}\Big (\mathrm{det}(GG^*)^{\tfrac{1}{2n}} \le \frac{\varepsilon }{c_2} \sqrt{N}\Big ), \end{aligned}$$

where \(c_2\) is a positive numeric constant. The result follows from Proposition .\(\square \)
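To illustrate the containment \(Z_N=GB_{\infty }^N\subset \sqrt{N}\,GB_2^N\) and the resulting volume bound, the following sketch compares the exact zonotope volume (computed via the determinant formula of Sect. 4, the factor \(2^n\) coming from the segments \([-X_i,X_i]\)) with the upper bound \(N^{n/2}\mathrm{det}(GG^*)^{1/2}\omega _n\); the sizes \(n=2\), \(N=5\) are arbitrary and the check is illustrative, not part of the proof:

```python
import itertools, math, random

rng = random.Random(1)
n, N = 2, 5  # small arbitrary sizes so the volume can be computed exactly
G = [[rng.gauss(0.0, 1.0) for _ in range(N)] for _ in range(n)]  # n x N Gaussian

def det2(a, b, c, d):
    return a * d - b * c

# exact volume of Z_N = G B_inf^N: 2^n times the sum of |det| over n-subsets
vol_Z = 2**n * sum(
    abs(det2(G[0][i], G[0][j], G[1][i], G[1][j]))
    for i, j in itertools.combinations(range(N), 2)
)

# det(GG^*) and the bound vol_n(Z_N) <= N^{n/2} det(GG^*)^{1/2} omega_n
GGt = [[sum(G[r][k] * G[s][k] for k in range(N)) for s in range(n)]
       for r in range(n)]
upper = N ** (n / 2) * math.sqrt(det2(GGt[0][0], GGt[0][1],
                                      GGt[1][0], GGt[1][1])) * math.pi
assert vol_Z <= upper  # omega_2 = pi
```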

Unlike the case of \(Z_N\), we do not know whether the probabilities in Proposition 6.5 are optimal. In Sect. 9.1, we prove lower bounds for such probabilities in a more general setting.

7 From the Gaussian Measure to the Ball

With estimates for the Gaussian measure in hand, we now transfer them to the uniform measure on the Euclidean ball. Let \(\overline{\gamma }_{n}\) be the Gaussian measure on \(\mathbb{R }^n\) with density \(\text{ d }\overline{\gamma }_{n}(x) = \text{ e }^{-\pi |x|^{2} } \text{ d }x\); in particular, \(\overline{\gamma }_n\) belongs to the class \(\mathcal P _n^b\).

The main goal of this section is to establish the following proposition.

Proposition 7.1

Let \(n < N\le \text{ e }^n\) and set \(m=N/2+(n-1)/2\). Then for any \(p\in (0, (N-n+1)/4)\), we have

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^m \overline{\gamma }_n} {\mathrm{vol}}_{n}{\big ({K_{m}}\big )}^{-p}\Big )^{-\tfrac{1}{pn}} \le c\Big (\mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}} {\mathrm{vol}}_{n}{\big ({K_{N}}\big )}^{-p}\Big )^{-\tfrac{1}{pn}} \end{aligned}$$
(7.1)

and

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^m \overline{\gamma }_n} {\mathrm{vol}}_{n}{\big ({Z_{m}}\big )}^{-p}\Big )^{-\tfrac{1}{pn}} \le c\Big (\mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}} {\mathrm{vol}}_{n}{\big ({Z_{N}}\big )}^{-p}\Big )^{-\tfrac{1}{pn}}, \end{aligned}$$
(7.2)

where \(c\) is a positive numeric constant.

For the case when \(N=n\), see Remark 7.3. For simplicity, we assume throughout that \(m= N/2 +(n-1)/2\) is an integer; simple modifications will yield the result for all \(n\) and \(N\).

As in the previous sections, we will prove a more general statement. Let \(C\subset \mathbb{R }^N\) be a \(1\)-symmetric convex body. For convenience of notation, we write \(\overline{x}=(x_1,\ldots , x_N)\in (\mathbb{R }^n)^N\) and set

$$\begin{aligned} F(\overline{x}) := F(x_{1}, \ldots , x_{N}):= {\text{ vol }}_{n}{\big ({[x_{1} \ldots x_{N}]C}\big )}. \end{aligned}$$
(7.3)

The main properties of \(F\) used here are the following:

  1. (i)

    \(F\) is coordinate-wise increasing: for fixed \(x_1,\ldots , x_N\in \mathbb{R }^n\) and for \(0<s_{i}\le t_{i}, i\le N\), we have

    $$\begin{aligned} F(s_{1}x_{1}, \ldots , s_{N}x_{N}) \le F(t_{1}x_{1}, \ldots , t_{N}x_{N}); \end{aligned}$$
    (7.4)

    see the proof of Theorem 3.1.

  2. (ii)

    \(F\) is \(n\)-homogeneous, i.e., \(F(a\overline{x}) = a^{n} F(\overline{x})\) for \(a>0\);

  3. (iii)

    \(F\) is invariant under permutation of its coordinates, i.e., \(F(x_1,\ldots ,x_N)=F(x_{\xi (1)},\ldots ,x_{\xi (N)})\) for any permutation \(\xi :\{1,\ldots ,N\} \rightarrow \{1,\ldots , N\}\).
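Properties (i)–(iii) can be checked numerically in a concrete instance. Taking \(C=B_{\infty }^N\), the body \([x_1\ldots x_N]C\) is the zonotope \(\sum _i[-x_i,x_i]\), whose volume for \(n=2\) equals \(2^n\sum _{|I|=n}|\mathrm{det}([x_i]_{i\in I})|\). The sketch below (all parameters arbitrary) verifies homogeneity, permutation invariance and monotonicity on random data; it is an illustration, not part of the proof:

```python
import itertools, random

rng = random.Random(0)
n, N = 2, 4  # arbitrary small sizes

def F(xs):
    # F(x_1,...,x_N) = vol_2([x_1 ... x_N] B_inf^N), the volume of the
    # zonotope sum of [-x_i, x_i]; for n = 2: 2^n * sum of |det| over pairs
    return 2**n * sum(
        abs(xs[i][0] * xs[j][1] - xs[j][0] * xs[i][1])
        for i, j in itertools.combinations(range(N), 2)
    )

xs = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(N)]
a = 2.7
# (ii) n-homogeneity: F(a x) = a^n F(x)
assert abs(F([[a * t for t in x] for x in xs]) - a**n * F(xs)) < 1e-9 * F(xs)
# (iii) invariance under permutation of the coordinates
perm = [2, 0, 3, 1]
assert abs(F([xs[k] for k in perm]) - F(xs)) < 1e-9
# (i) monotonicity under coordinate-wise scalings 0 < s_i <= t_i
s, t = [0.5, 1.0, 0.2, 0.9], [0.7, 1.5, 0.2, 2.0]
Fs = F([[s[i] * v for v in xs[i]] for i in range(N)])
Ft = F([[t[i] * v for v in xs[i]] for i in range(N)])
assert Fs <= Ft
```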

Proposition 7.2

Let \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) be defined by (7.3). Let \(n<N\le \text{ e }^n\) and set \(m=N/2+(n-1)/2\). If \(p\in (0,(N-n+1)/4)\), then

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^m \overline{\gamma }_n} F(X_1,\ldots ,X_{m},0,\ldots ,0)^{-p}\Big )^{-\tfrac{1}{pn}} \!\le \! c\Big (\mathbb{E }_{\otimes _{i=1}^N\lambda _{D_n}} F(X_1,\ldots ,X_N)^{-p}\Big )^{-\tfrac{1}{pn}},\nonumber \\ \end{aligned}$$
(7.5)

where \(c\) is a positive numeric constant.

The complementary inequality

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^N \overline{\gamma }_n} F(X_1,\ldots ,X_N)^{-p}\Big )^{-\tfrac{1}{pn}} \ge \Big (\mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}} F(X_1,\ldots ,X_N)^{-p}\Big )^{-\tfrac{1}{pn}} \end{aligned}$$

follows from Theorem 3.1.

To prove the proposition, we will express the expectations in (7.5) in spherical coordinates and compare them with the corresponding expectations on the \(N\)-fold product of spheres \(S_n^N:=S^{n-1}\times \cdots \times S^{n-1}\), equipped with the product of the Haar probability measures \(\sigma \), denoted here by \(\mathbb P _{\otimes _{i=1}^N \sigma }\). Before doing so, we discuss the case \(N=n\).

Remark 7.3

If \(N=n\), then \(F(x_1,\ldots ,x_n)=|\mathrm{det}([x_1\ldots x_N])|{\text{ vol }}_{n}{\big ({C}\big )}\). In this case, if \(X_1,\ldots ,X_n\) are independent and distributed according to \(\overline{\gamma }_n\), then one can write \(X_i=|X_i|\theta _i\), where \(\theta _i=X_i/|X_i|\) is uniformly distributed on the sphere and is independent of \(|X_i|\). Thus for any \(p\in (0,1)\),

$$\begin{aligned} \mathbb{E }_{\otimes _{i=1}^n\overline{\gamma }_n}F(X_1,\ldots ,X_n)^{-p} = \mathbb{E }_{\otimes _{i=1}^n \overline{\gamma }_n}|X_1|^{-p} \ldots |X_n|^{-p} \mathbb{E }_{\otimes _{i=1}^n \sigma }F(\theta _1,\ldots ,\theta _n)^{-p}. \end{aligned}$$

Similarly, if the \(X_i\)’s are independent and sampled according to \(\lambda _{D_n}\), we have

$$\begin{aligned} \mathbb{E }_{\otimes _{i=1}^n\lambda _{D_n}}F(X_1,\ldots ,X_n)^{-p} = \mathbb{E }_{\otimes _{i=1}^n \lambda _{D_n}}|X_1|^{-p}\ldots |X_n|^{-p} \mathbb{E }_{\otimes _{i=1}^n \sigma }F (\theta _1,\ldots ,\theta _n)^{-p}. \end{aligned}$$

Thus we obtain the equality

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^n\overline{\gamma }_n}F(X_1,\ldots , X_n)^{-p}\Big )^{-\tfrac{1}{pn}} = c_{n,p} \Big (\mathbb{E }_{\otimes _{i=1}^n\lambda _{D_n}}F(X_1,\ldots , X_n)^{-p}\Big )^{-\tfrac{1}{pn}}, \end{aligned}$$

for all \(p\in (0,1)\), where

$$\begin{aligned} c_{n,p}=\Big (\frac{\mathbb{E }_{\otimes _{i=1}^n \overline{\gamma }_n}|X_1|^{-p}\ldots |X_n|^{-p}}{ \mathbb{E }_{\otimes _{i=1}^n \lambda _{D_n}}|X_1|^{-p}\ldots |X_n|^{-p}}\Big )^{-\tfrac{1}{pn}} = \Big (\frac{\mathbb{E }_{\overline{\gamma }_n}|X_1|^{-p}}{\mathbb{E }_{\lambda _{D_n}}|X_1|^{-p}}\Big )^{-\tfrac{1}{p}}\simeq 1. \end{aligned}$$

In particular, the constant \(4\) in Proposition 7.2 is not needed when \(N=n\).
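The equivalence \(c_{n,p}\simeq 1\) can be made concrete. Polar integration gives \(\mathbb{E }_{\overline{\gamma }_n}|X_1|^{-p}=\tfrac{n}{2}\omega _n\pi ^{-(n-p)/2}\Gamma (\tfrac{n-p}{2})\) and, assuming \(D_n\) denotes the centered Euclidean ball of volume one (of radius \(r_n=\omega _n^{-1/n}\)), \(\mathbb{E }_{\lambda _{D_n}}|X_1|^{-p}=\tfrac{n}{n-p}r_n^{-p}\). The sketch below evaluates \(c_{n,p}\) for a few arbitrarily chosen values of \(n\) and \(p\) and checks that it stays between absolute constants; the helper names are ours:

```python
import math

def lomega(n):
    # log of omega_n, the volume of the n-dimensional Euclidean unit ball
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

def E_gauss_neg(n, p):
    # E |X_1|^{-p} for the density e^{-pi|x|^2}, by polar integration:
    # n * omega_n * (1/2) * pi^{-(n-p)/2} * Gamma((n-p)/2)
    return math.exp(math.log(n) + lomega(n) - math.log(2)
                    - ((n - p) / 2) * math.log(math.pi)
                    + math.lgamma((n - p) / 2))

def E_ball_neg(n, p):
    # E |X_1|^{-p} for the uniform measure on the volume-one ball of
    # radius r_n = omega_n^{-1/n}
    r_n = math.exp(-lomega(n) / n)
    return (n / (n - p)) * r_n ** (-p)

# sanity: p -> 0 recovers total mass 1 for the Gaussian density
assert abs(E_gauss_neg(10, 1e-12) - 1.0) < 1e-9
for n in [5, 20, 100]:
    for p in [0.25, 0.5, 0.9]:
        c_np = (E_gauss_neg(n, p) / E_ball_neg(n, p)) ** (-1 / p)
        assert 0.5 < c_np < 2.0  # c_{n,p} ~= 1 up to numeric constants
```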

Proof of Proposition 7.2

Assume first that \(X_1,\ldots ,X_N\) are independent random vectors distributed according to \(\overline{\gamma }_n\) and write \(\overline{X}=(X_1,\ldots ,X_N)\). Then for each \(t_0>0\), we have

$$\begin{aligned}&\mathbb{E }_{\otimes _{i=1}^N \overline{\gamma }_n} F(\overline{X})^{-p} \nonumber \\&\quad \ge \int _{t_{0}B_2^n} \ldots \int _{t_{0} B_2^n} F^{-p}(x_{1}, \ldots , x_{N}) \text{ d } \overline{\gamma }_{n}(x_{N})\ldots \text{ d } \overline{\gamma }_{n}(x_{1})\nonumber \\&\quad = (n\omega _{n})^{N} \int _{[0,t_0]^N}\int _{S_n^N} F^{-p} (r_{1}\theta _{1}, \ldots , r_{N} \theta _{N}) \prod _{i=1}^{N} r_{i}^{n-1} \text{ e }^{-\pi r_{i}^{2}} \text{ d }\sigma _{n}^{N}(\overline{\theta }) \text{ d }{\overline{r}} \nonumber \\&\quad \ge (n\omega _{n})^{N} \int _{[0,t_0]^N}\int _{S_{n}^{N}} F^{-p} (t_{0}\theta _{1}, \ldots , t_{0} \theta _{N}) \prod _{i=1}^{N}r_{i}^{n-1} \text{ e }^{-\pi r_{i}^{2}} \text{ d }\sigma _{n}^{N}(\overline{\theta }) \text{ d }{\overline{r}}\nonumber \\&\quad = t_{0}^{-pn}(n\omega _{n})^{N} \int _{[0,t_0]^N} \prod _{i=1}^{N} r_{i}^{n-1} \text{ e }^{-\pi r_{i}^{2}} \text{ d }{\overline{r}} \int _{S_{n}^{N}} F^{-p} ( \theta _{1}, \ldots , \theta _{N}) \text{ d }\sigma _{n}^{N}(\overline{\theta })\nonumber \\&\quad = t_{0}^{-pn}\overline{\gamma }_{n}(t_0B_2^n)^N \mathbb{E }_{\otimes _{i=1}^N \sigma } F(\overline{\theta })^{-p}, \end{aligned}$$
(7.6)

where \(\overline{\theta }=(\theta _1,\ldots ,\theta _N)\) is distributed according to \(\mathbb P _{\otimes _{i=1}^N \sigma }\).

At this point, we choose \(t_0\) such that \(\overline{\gamma }_n(t_0B_2^n) =1-\text{ e }^{-n}\); one can check that \(t_0\simeq \sqrt{n}\). Then, for \(N\le \text{ e }^{n}\), we have

$$\begin{aligned} 1\ge \big ( \overline{\gamma }_{n}( t_{0} B_2^n)\big )^{N} = (1-\text{ e }^{-n})^{N} \ge \frac{1}{\text{ e }}. \end{aligned}$$
(7.7)

Combining (7.6) and (7.7) yields

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^N \overline{\gamma }_n} F^{-p}( \overline{X})\Big )^{-\tfrac{1}{pn}} \le c \sqrt{n} \Big (\mathbb{E }_{\otimes _{i=1}^N \sigma } F^{-p}(\overline{\theta })\Big )^{-\tfrac{1}{pn}}, \end{aligned}$$
(7.8)

where \(c>0\) is a positive numeric constant.

Assume now that \(X_1,X_2,\ldots , X_N\) are independent random vectors distributed uniformly in \(D_n\) and write \(\overline{X}=(X_1,\ldots ,X_N)\). Note that for each \(i=1,\ldots ,N\), we can write \(X_i=|X_i| \theta _i\), where \(|X_i|\) is the Euclidean norm of \(X_i\), and \(\theta _i = X_i/|X_i|\) is distributed uniformly on the sphere \(S^{n-1}\) and is independent of \(|X_i|\).

Let \(s_0\) be such that \(\mathbb{P }_{\lambda _{D_n}}\Big (|X_1|\ge s_0\Big )=1-\text{ e }^{-n}\) and note that \(s_0\simeq \sqrt{n}\). Since \(N\le \text{ e }^n\),

$$\begin{aligned} \mathbb{P }_{\otimes _{i=1}^N\lambda _{D_n}}\Big (|X_i|\ge s_0 \text{ for } \text{ each } i=1,\ldots ,N \Big ) = (1-\text{ e }^{-n})^N\ge \frac{1}{\text{ e }}. \end{aligned}$$

Denote the decreasing rearrangement of the sequence \((|X_i|)\) by \((|X_i|^{*})\). Then

$$\begin{aligned} \mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n} }|X_N|^* = \mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n} }{\mathop {\text{ min }} \limits _{i\le N}}{|X_i|} \ge s_0/\text{ e }. \end{aligned}$$
(7.9)

Since \(F\) is invariant under permutations, we have

$$\begin{aligned} F(X_1,\ldots , X_N) = F(\theta _1|X_1|^{*}, \ldots , \theta _N|X_N|^*). \end{aligned}$$

We partition the sequence \((|X_i|^*)\) into three blocks as follows:

$$\begin{aligned} \underbrace{|X_1|^*, \ldots }_{n-1},\;\underbrace{|X_n|^*\ldots }_{(N-n+1)/2},\; \underbrace{\ldots , {|X_N|^*}}_{(N-n+1)/2}. \end{aligned}$$

Taking \(m=n-1+(N-n+1)/2 = N/2 +(n-1)/2\) and using monotonicity and homogeneity of \(F\), we have

$$\begin{aligned} F(X_1,\ldots , X_N) \ge (|X_m|^*)^n F(\theta _1,\ldots , \theta _{m},0,\ldots ,0). \end{aligned}$$
(7.10)

Since \(N-m = (N-n+1)/2\), we have

$$\begin{aligned} \mathbb{P }_{\otimes _{i=1}^N\lambda _{D_n} }\Big (|X_m|^* \le c \varepsilon \sqrt{n}\Big )&\le \sum _{|I|=(N-n+1)/2} \mathbb{P }_{\otimes _{i=1}^N\lambda _{D_n}}\Big (\bigcap _{i\in I}\{|X_i|\le c \varepsilon \sqrt{n}\}\Big ) \\&\le {N \atopwithdelims ()(N-n+1)/2} \mathbb{P }_{\lambda _{D_n}}\Big (|X_1| \le c \varepsilon \sqrt{n}\Big )^{(N-n+1)/2}\\&\le \Big (\frac{2\text{ e }N}{N-n+1}\Big )^{(N-n+1)/2}\varepsilon ^{n(N-n+1)/2}. \end{aligned}$$
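The last step uses the standard binomial estimate \(\binom{N}{k}\le (\text{ e }N/k)^k\) with \(k=(N-n+1)/2\), so that \(\text{ e }N/k = 2\text{ e }N/(N-n+1)\). A quick numerical check of this estimate via log-gamma (the sample values of \(N\) and \(n\) are arbitrary, chosen so that \(k\) is an integer):

```python
import math

def log_binom(N, k):
    # log of the binomial coefficient C(N, k)
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

# C(N, k) <= (eN/k)^k, applied with k = (N-n+1)/2, where eN/k = 2eN/(N-n+1)
for N, n in [(20, 11), (100, 41), (1000, 501)]:
    k = (N - n + 1) // 2
    assert log_binom(N, k) <= k * (1 + math.log(N / k))
```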

By the distribution formula for non-negative random variables, we obtain

$$\begin{aligned} \left( \mathbb{E }_{\otimes _{i=1}^N\lambda _{D_n}} (|X_m|^*)^ {-pn}\right) ^{-\tfrac{1}{pn}}\ge c_0\mathbb{E }_{\otimes _{i=1}^N\lambda _{D_n}} |X_m|^* \end{aligned}$$
(7.11)

for all \(0< p \le (N-n+1)/4\), where \(c_0\) is a positive numeric constant. By (7.9) we have

$$\begin{aligned} \mathbb{E }_{\otimes _{i=1}^N\lambda _{D_n}} |X_m|^* \ge \mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}} |X_N|^*\ge c_1\sqrt{n}, \end{aligned}$$

where \(c_1\) is a positive numeric constant. Taking powers and then expectations in (7.10) and applying (7.11), we get that for \(0<p<(N-n+1)/4\),

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}} F(\overline{X}) ^{-p}\Big )^{-\tfrac{1}{pn}} \ge c_1\sqrt{n} \Big (\mathbb{E }_{\otimes _{i=1}^m \sigma } F(\overline{\theta }) ^{-p}\Big )^{-\tfrac{1}{pn}}, \end{aligned}$$

where \(\overline{\theta }=(\theta _1,\ldots ,\theta _{m},0,\ldots ,0)\) and \(\theta _1,\ldots ,\theta _{m}\) are independent and uniformly distributed on the sphere \(S^{n-1}\). The proposition now follows by applying (7.8) (with \(N\) replaced by \(m\)).\(\square \)

Remark 7.4

(1) The assumption \(N\le \text{ e }^n\) in Proposition 7.1 is essential for \(K_N\): beyond this range, \(\mathbb{E }_{\otimes \overline{\gamma }_n}{\text{ vol }}_{n}{\big ({K_N}\big )}\) is much larger than \(\mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({K_N}\big )}\).

(2) We do not believe the constant \(4\) in Proposition 7.1 is necessary; perhaps the optimal constant is \(1+o(1)\). Any improvement here will lead to better constants in the exponents of the small-ball estimates in Theorems 1.1–1.3.

8 Proof of the Main Theorems and Further Remarks

We are now ready to prove the two main results of this paper.

Theorem 8.1

Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Let \(\delta >1\) and let \(\varepsilon \in (0,1)\). Then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ( {\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c\varepsilon }{\delta } \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{c_1 N^{1-1/\delta ^2}n^{1/\delta ^2}} \end{aligned}$$
(8.1)

and, if \(N\le n\text{ e }^{\delta ^2}\), then

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c\varepsilon }{\delta } \sqrt{\frac{\ln (2N/n)}{n}} \Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}. \end{aligned}$$
(8.2)

Proof

Let \(m=N/2+(n-1)/2\) and let \(p_0 =\big (\frac{m}{n}\big )^{1-1/\delta ^2}\). By (4.13) and Proposition 6.2,

$$\begin{aligned} \frac{\Big (\mathbb{E }_{\otimes _{i=1}^m\gamma _n}{\text{ vol }}_{n}{\big ({K_m}\big )}\Big )^{\tfrac{1}{n}}}{\Big (\mathbb{E }_{\otimes _{i=1}^m\gamma _n}{\text{ vol }}_{n} {\big ({K_m}\big )}^{-p_0}\Big )^{-\tfrac{1}{p_0n}}} \simeq A_{n,p_0}(B_1^m)\le c^{\prime }\delta , \end{aligned}$$
(8.3)

where \(c^{\prime }\) is a positive numeric constant. Since \(p_0\le (N-n+1)/4\), by Proposition 7.2 and (8.3), we have

$$\begin{aligned} \Big (\mathbb{E }_{\otimes _{i=1}^N \lambda _{D_n}}{\text{ vol }}_{n} {\big ({K_N}\big )}^{-p_0}\Big )^{-\tfrac{1}{p_0n}}&\ge c_0\Big (\mathbb{E }_{\otimes _{i=1}^m \overline{\gamma }_n} {\text{ vol }}_{n}{\big ({K_{m}}\big )}^{-p_0}\Big )^{-\tfrac{1}{p_0n}}\\&\ge c_1\Big (\mathbb{E }_{\otimes _{i=1}^m \gamma _n} {\text{ vol }}_{n}{\big ({K_{m}}\big )}^{-p_0}\Big )^{-\tfrac{1}{p_0n}}\\&\ge \frac{c_2}{\delta } \Big (\mathbb{E }_{\otimes _{i=1}^m \gamma _n}{\text{ vol }}_{n}{\big ({K_{m}}\big )}\Big )^{1/n}\\&\ge \frac{c_3}{\delta }\sqrt{\frac{\ln (2N/n)}{n}}. \end{aligned}$$

By Markov’s inequality, we obtain

$$\begin{aligned} \mathbb{P }_{\otimes _{i=1}^N \lambda _{D_n}}\Big ( {\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c\varepsilon }{\delta } \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{c_1 N^{1-1/\delta ^2}n^{1/\delta ^2}}. \end{aligned}$$

Lastly, apply Theorem 3.1. The proof of (8.2) follows the same argument.\(\square \)

Theorem 8.2

Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Then for each \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{1/n}\le \frac{c\varepsilon N}{\sqrt{n}} \Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}. \end{aligned}$$
(8.4)

Proof

Argue as in the proof of the previous theorem and apply Proposition 6.3 instead of Proposition 6.2.\(\square \)

Remark 8.3

Note that when \(N=2n\) the estimate in (8.2) is much stronger than the estimate in (8.1), which suggests that a better exponent can be achieved in general. As we will see in the next subsection, the estimates in (8.2) and (8.4) are sharp up to the numeric constants involved.

Remark 8.4

We wish to emphasize several points regarding the exponent \(n(N-n+1-o(1))/4\) in (8.2) and (8.4). Firstly, as we mentioned in Remark 7.4, the constant \(4\) is an artifact of the proof of Proposition 7.2 and \(N-n+1-o(1)\) is the best possible in (8.4) (cf. Proposition 6.7). Secondly, the \(o(1)\)-term can be estimated to a high degree of accuracy (cf. Proposition 6.5 and its proof). Finally, the ‘+1’ in the exponent accommodates the case \(N=n\), in which case \(n(N-n+1-o(1))= n(1-o(1))\) is the best that can be achieved in general; note also that the \(4\) is not needed in this case (cf. Remark 7.3).

8.1 Complementary Small-Ball Estimates

In this section we give lower bounds for the probabilities in Theorems 8.1 and 8.2. We make use of known bounds for the volume of the convex hull and zonotope generated by arbitrary points in \(\mathbb{R }^n\) (which we stated in Sect. 4).

Let \(\mu \in \mathcal P _n^b\) and assume that \(f_{\mu }(0)=\left||f_{\mu }\right||_{\infty }=1\). Suppose there exists \(\varepsilon _0= \varepsilon _0(\mu )\) such that

$$\begin{aligned} \mathbb{P }_{\mu }\Big (|X|\le c \varepsilon \sqrt{n}\Big )\ge \varepsilon ^n \; \text{ whenever } \; \varepsilon < \varepsilon _0, \end{aligned}$$
(8.5)

where \(c\) is a positive numeric constant. For instance, if \(f_{\mu }\) is continuous at \(0\) then there exists \(\varepsilon _0=\varepsilon _0(\mu )\) such that \(|f_{\mu }(x)|\ge 1/2\) whenever \(|x|\le \varepsilon _0 c\sqrt{n}\), hence (8.5) holds (with \(c\) replaced by \(2^{1/n}c\)). Given \(\varepsilon _0(\mu )\), we can apply Theorem 4.6 to obtain, for each \(\varepsilon \le \varepsilon _0(\mu )\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n} \le c \varepsilon \sqrt{\frac{\ln (2N/n)}{n}}\Big )&\ge \mathbb{P }_{\otimes \mu }\Big (|X_i|\le c \varepsilon \sqrt{n} \text{ for } i=1,\ldots ,N\Big )\\&= \mathbb{P }_{\mu }\Big (|X_1|\le c \varepsilon \sqrt{n}\Big )^N\\&\ge \varepsilon ^{nN}. \end{aligned}$$

Similarly, for \(Z_N\) we apply Lemma 4.7: for any \(\varepsilon \le \varepsilon _0(\mu )\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\text{ vol }}_{n}{\big ({Z_N}\big )}^{1/n} \le \frac{c \varepsilon N}{\sqrt{n}}\Big )&\ge \mathbb{P }_{\otimes \mu }\Big (|X_i| \le c \varepsilon \sqrt{n} \text{ for } i=1,\ldots ,N\Big )\\&= \mathbb{P }_{\mu }\Big (|X_1|\le c \varepsilon \sqrt{n}\Big )^N\\&\ge \varepsilon ^{nN}. \end{aligned}$$

Thus even though \(\varepsilon _0(\mu )\) depends on \(\mu \) and \(\inf \{\varepsilon _0(\mu ):\mu \in \mathcal P _n^b\}=0\), the asymptotic behavior of the small-ball estimates for \(K_N\) and \(Z_N\) as \(\varepsilon \rightarrow 0\) is at least \(\varepsilon ^{nN}\). In some classes of measures, one can control the value of \(\varepsilon _0(\mu )\); in particular, for the class of isotropic log-concave probability measures (see Sect. 9.1).
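For a concrete instance of (8.5), consider \(\mu \) uniform on a centered Euclidean ball of radius \(R\): then \(\mathbb{P }_{\mu }(|X|\le t)=(t/R)^n\) exactly, so with \(t=\varepsilon R\) the small-ball probability is exactly \(\varepsilon ^n\). A minimal sketch of this computation (the calibration \(R=\sqrt{n}\) is an arbitrary illustrative choice, and this measure lies in \(\mathcal P _n^b\) only after normalizing its density):

```python
import math

def small_ball_uniform_ball(n, R, t):
    # P(|X| <= t) for X uniform on the centered Euclidean ball of radius R
    return (min(t, R) / R) ** n

n, R = 8, math.sqrt(8)  # R = sqrt(n), an arbitrary illustrative calibration
for eps in [0.1, 0.25, 0.5]:
    # with t = eps * sqrt(n) the small-ball probability is exactly eps^n,
    # so (8.5) holds with epsilon_0 = 1 for this measure
    p = small_ball_uniform_ball(n, R, eps * math.sqrt(n))
    assert abs(p - eps**n) < 1e-15
```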

9 Isotropicity and Log-Concavity

In many cases, the literature on volumetric bounds for random convex sets involves isotropic measures rather than those in \(\mathcal P _n^b\). However, one can easily deduce results for isotropic measures from our main theorems.

Let \(\mathcal P _n^{\mathrm{cov}}\) denote the set of measures \(\mu \in \mathcal P _n\) with bounded densities such that the covariance matrix of \(\mu \) is well-defined. We say that a probability measure \(\mu \in \mathcal P _n^{\mathrm{cov}}\) is isotropic if its covariance matrix is the identity. When \(\mu \) is isotropic, we define its isotropic constant \(L_{\mu }\) by

$$\begin{aligned} L_{\mu }:= \left||f_{\mu }\right||_{\infty }^{1/n}, \end{aligned}$$

where \(f_{\mu }\) is the density of \(\mu \). Given any measure \(\mu \in \mathcal P _n^{\mathrm{cov}}\) with barycenter at the origin, one can find a linear map \(T:\mathbb{R }^n\rightarrow \mathbb{R }^n\) (unique modulo orthogonal transformations) of determinant one such that \(\mu \circ T^{-1}\) is an isotropic probability measure; in this way, the isotropic constant is uniquely defined for all \(\mu \in \mathcal P _n^\mathrm{cov}\).

Let \(a>0\) and \(\mu \in \mathcal P _n^{\mathrm{cov}}\) with density \(f_\mu \). We define a new probability measure \(\mu _{a}\) on \(\mathbb{R }^n\) as the measure that has density \(f_{\mu _{a}}(x)= a^{n} f_{\mu }(ax)\). Obviously,

$$\begin{aligned} \left||f_{\mu _{a}}\right||_{\infty } = a^{n}\left||f_{\mu }\right||_{\infty } . \end{aligned}$$

Moreover, if \(F:(\mathbb{R }^{n})^{N}\rightarrow \mathbb{R }^{+}\) is \(p\)-homogeneous, then

$$\begin{aligned} \mathbb{E }_{\otimes \mu } F(X_1,\ldots ,X_N) = a^{p} \mathbb E _{\otimes \mu _{a}} F(X_1,\ldots ,X_N). \end{aligned}$$

Thus if \(\mu \in \mathcal P _n^{\mathrm{cov}}\) is isotropic then \(\mu ^{\prime }:= \mu _{\frac{1}{L_{\mu }}} \) satisfies \(\left||f_{\mu ^{\prime }}\right||_{\infty }=1\),

$$\begin{aligned} \left( \mathbb{E }_{\otimes \mu }{\text{ vol }}_{n}{\big ({[X_1\ldots X_N] C}\big )}\right) ^{\tfrac{1}{n}} =\frac{1}{ L_{\mu } } \left( \mathbb{E }_{\otimes \mu ^{\prime }} {\text{ vol }}_{n}{\big ({[X_1 \ldots X_N]C}\big )}\right) ^{\tfrac{1}{n}} \end{aligned}$$

and

$$\begin{aligned} \frac{\left( \mathbb{E }_{\otimes \mu }{\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}\right) ^{\tfrac{1}{n}}}{\left( \mathbb{E }_{\otimes \mu } {\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )} ^{p} \right) ^{\tfrac{1}{pn}}} = \frac{ \left( \mathbb{E }_{\otimes \mu ^{\prime }} {\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}\right) ^{\tfrac{1}{n}}}{\left( \mathbb{E }_{\otimes \mu ^{\prime }}{\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}^{p} \right) ^{\tfrac{1}{pn}}}. \end{aligned}$$

By a change of variables, note that for any \(S\in SL(n)\), we have

$$\begin{aligned} \mathbb{E }_{\otimes \mu \circ S^{-1}} {\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}^{p} = \mathbb{E }_{\otimes \mu } {\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}^{p} \end{aligned}$$

for any \(p\) for which the expressions are defined. Thus there is no loss of generality in assuming that \(\mu \) is isotropic.

Following the proof of our main theorem we obtain a corresponding result for isotropic probability measures.

Theorem 9.1

Let \(n\le N\le \text{ e }^n\). Let \(\mu \in \mathcal P ^{\mathrm{cov}}_n\) and assume that \(\mu \) is isotropic. Let \(\delta >1\). Then for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ( {\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c\varepsilon }{\delta L_{\mu }} \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{c_1 N^{1-1/\delta ^2}n^{1/\delta ^2}} \end{aligned}$$

and, if \(N\le n\text{ e }^{\delta ^2}\), then

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({K_N}\big )}^{1/n} \le \frac{c\varepsilon }{\delta L_{\mu }} \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$

where \(c\) and \(c_1\) are positive numeric constants. Similarly, for each \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({Z_{N}}\big )}^{\tfrac{1}{n}} \le \frac{c_2\varepsilon }{L_{\mu }} \frac{N}{\sqrt{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$

where \(c_2\) is a positive numeric constant.

It is known that \(\inf \{L_{\mu }: \mu \in \mathcal P _n^{\mathrm{cov}}\}\ge c\), where \(c\) is a positive numeric constant (see [5, 58]). On the other hand, \(L_{\mu }\) does not admit a uniform upper bound as \(\mu \) varies in \(\mathcal P _n^{\mathrm{cov}}\). However, in the important class of log-concave probability measures \(\mathcal LP _n\) it has been conjectured that

$$\begin{aligned} \sup \{L_{\mu }:n\in \mathbb{N }, \mu \in \mathcal LP _n\}\le c, \end{aligned}$$
(9.1)

where \(c>0\) is a positive numeric constant. This is known to be equivalent to a famous open problem in convex geometry, namely, the Hyperplane Conjecture. We refer to [58] for an introductory survey and to [23, 40, 42] for the best known results. In many large subclasses of \(\mathcal LP _n\), it has been verified that \(L_{\mu }\) admits a uniform upper bound, independent of the dimension; see, e.g., the references given in [63]. Henceforth, we say that \(\mu \in \mathcal LP _n\) has bounded isotropic constant if \(L_{\mu }\le c\), where \(c\) is a positive numeric constant (independent of \(\mu \) and \(n\)).

It is known that if \(\mu \) is an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant and \(n\le N\le \text{ e }^{n}\), then

$$\begin{aligned} \Big (\mathbb{E }_{\otimes \mu } {\text{ vol }}_{n}{\big ({K_{N}}\big )}\Big )^{\tfrac{1}{n}} \simeq \frac{\sqrt{\ln {(2N/n)}}}{\sqrt{n}} \simeq \Big (\mathbb{E }_{\otimes \lambda _{D_{n} } } {\text{ vol }}_{n}{\big ({K_{N}}\big )}\Big )^{\tfrac{1}{n}}; \end{aligned}$$

see [21]. In this case, we obtain the following result.

Theorem 9.2

Let \(n\le N\le \text{ e }^{n}\), let \(\delta >1\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({K_{N}}\big )}^{\tfrac{1}{n}} \le \frac{c\varepsilon }{\delta } \Big ( \mathbb{E }_{\otimes \mu } {\mathrm{vol}}_{n}{\big ({K_{N}}\big )} \Big )^{\tfrac{1}{n}}\Big ) \le \varepsilon ^{c_1N^{1-1/\delta ^2}n^{1/\delta ^2}} \end{aligned}$$
(9.2)

and, if \(N\le n\text{ e }^{\delta ^2}\), then

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({K_{N}}\big )}^{\tfrac{1}{n}} \le \frac{c\varepsilon }{\delta } \Big (\mathbb{E }_{\otimes \mu } {\mathrm{vol}}_{n}{\big ({K_{N}}\big )} \Big )^{\tfrac{1}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$
(9.3)

where \(c\) and \(c_1\) are positive numeric constants.

A similar theorem is true for random zonotopes. If \(\mu \) is an isotropic log-concave probability measure on \(\mathbb{R }^n\), then

$$\begin{aligned} \mathbb{E }_{\otimes \mu }{\text{ vol }}_{n}{\big ({Z_N}\big )}^{1/n} \simeq \frac{N}{\sqrt{n}}; \end{aligned}$$
(9.4)

the latter equivalence is proved in [68, Proposition 6 & Remark 3]. For the reader’s convenience we sketch the proof. Note that for any subspace \(E\subset \mathbb{R }^n\), the isotropicity of \(\mu \) implies that

$$\begin{aligned} \mathbb{E }_{\mu }|P_E X_1|^2 = \dim (E). \end{aligned}$$

Thus

$$\begin{aligned} \mathbb{E }_{\otimes \mu }|\mathrm{det}[X_1\ldots X_n]|^2=n! \end{aligned}$$

(apply (2.5) and use Fubini’s theorem, integrating with respect to \(X_n\), then \(X_{n-1}\), and so on). For \(I\subset \{1,\ldots ,N\}\), write \(d_I := |\mathrm{det}([X_i]_{i\in I})|\) and apply the zonotope volume formula (4.16) and Jensen’s inequality:

$$\begin{aligned} \mathbb{E }_{\otimes \mu } \Big (\sum _{|I|=n} d_I\Big ) ^{1/n}\le {N\atopwithdelims ()n}^{1/n} (\mathbb{E }_{\otimes \mu } d_{I_0})^{1/n} \le \frac{eN}{n} (n!)^{1/(2n)}\le \frac{cN}{\sqrt{n}}, \end{aligned}$$

where \(c\) is a positive numeric constant and \(I_0=\{1,\ldots ,n\}\). For the lower bound, we use concavity of \(x\mapsto x^{1/n}\) in (4.16):

$$\begin{aligned} \mathbb{E }_{\otimes \mu } \Big (\sum _{|I|=n} d_I \Big )^{1/n}\ge {N\atopwithdelims ()n}^{1/n-1} \sum _{|I|=n}\mathbb{E }_{\otimes \mu } d_I^{1/n} \ge \frac{N}{n}\mathbb{E }_{\otimes \mu } d_{I_0}^{1/n}. \end{aligned}$$

One completes the proof of (9.4) by using the fact that \(\mathbb{E }_{\otimes \mu } |\det ([X_1\ldots X_n])|^{1/n}\simeq \sqrt{n}\) (see [68, Corollary 1]).
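The identity \(\mathbb{E }_{\otimes \mu }|\mathrm{det}[X_1\ldots X_n]|^2=n!\) can be checked by simulation in the Gaussian case, which is isotropic. A Monte Carlo sketch with \(n=3\) (simulation parameters arbitrary; the target value is \(3!=6\)), illustrative only:

```python
import math, random

rng = random.Random(7)
n, trials = 3, 100000  # arbitrary simulation parameters

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

acc = 0.0
for _ in range(trials):
    # columns X_1, X_2, X_3: independent standard Gaussian vectors (isotropic)
    M = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    acc += det3(M) ** 2
mean_sq = acc / trials
# E |det[X_1 X_2 X_3]|^2 = 3! = 6
assert abs(mean_sq - math.factorial(n)) < 0.5
```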

Theorem 9.1 and the equivalence in (9.4) lead to the following.

Theorem 9.3

Let \(n\le N\le \text{ e }^n\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then, for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({Z_{N}}\big )}^{\tfrac{1}{n}} \le c\varepsilon \Big ( \mathbb{E }_{\otimes \mu } {\mathrm{vol}}_{n}{\big ({Z_{N}}\big )} \Big )^{\tfrac{1}{n}}\Big ) \le \varepsilon ^{n(N-n+1-o(1))/4}, \end{aligned}$$
(9.5)

where \(c\) is a positive numeric constant.

9.1 Complementary Lower Bounds

In this section we prove that the small-ball probabilities in (9.3) (for \(N=2n\)) and (9.5) (for \(N\ge 2n\)) are essentially sharp (up to the numeric constants involved).

Proposition 9.4

Let \(n\le N\le \text{ e }^n\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\). Then for every \(\varepsilon \in (0,c_0)\),

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({K_N}\big )}^{\tfrac{1}{n}} \le c_1\varepsilon \sqrt{\frac{\ln (2N/n)}{n}}\Big ) \ge \varepsilon ^{Nn}, \end{aligned}$$

and

$$\begin{aligned} \mathbb{P }_{\otimes \mu }\Big ({\mathrm{vol}}_{n}{\big ({Z_N}\big )}^{\tfrac{1}{n}} \le \frac{c_2\varepsilon N}{\sqrt{n}}\Big ) \ge \varepsilon ^{Nn}, \end{aligned}$$

where the \(c_i\)’s are positive numeric constants.

Note that a sharper bound for the Gaussian measure \(\gamma _n\) was given in Proposition 6.7. The proof of Proposition 9.4 is analogous to that of the general case given in Sect. 8.1. All that remains is to show the following proposition.

Proposition 9.5

Let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\). Let \(X\) be a random vector distributed according to \(\mu \). Then for every \(\varepsilon \in (0,c_0)\),

$$\begin{aligned} \mathbb{P }_{\mu }\Big (|X|\le c_1\varepsilon \sqrt{n}\Big ) \ge \varepsilon ^{n}, \end{aligned}$$

where \(c_0\) and \(c_1\) are positive numeric constants.

Proposition 9.5 shows that for any isotropic log-concave probability measure \(\mu \) on \(\mathbb{R }^n\) the quantity \(\varepsilon _0(\mu )\) defined in Sect. 8.1 satisfies \(\varepsilon _0(\mu )\ge c_0\) (with \(c=c_1\) in (8.5)). By the argument given in Sect. 8.1, the small-ball estimates in (9.3) and (9.5) are essentially sharp.

The first step in the proof of Proposition 9.5 involves covering numbers. Recall that if \(C\) and \(D\) are convex bodies in \(\mathbb{R }^N\), the covering number of \(C\) with respect to \(D\) is the minimum number \(N(C, D)\) of translates of \(D\) whose union covers \(C\), i.e.,

$$\begin{aligned} N(C,D):=\inf \Big \{M:\exists x_1,\ldots ,x_M\in \mathbb{R }^N, \;C\subset \bigcup _{i=1}^M (D+x_i) \Big \}. \end{aligned}$$
(9.6)

For further information on covering numbers see, e.g., [67].
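To make (9.6) concrete, here is a toy one-dimensional illustration (ours, not from the paper): the covering number of a segment by translates of a symmetric interval is explicit, and it is consistent with the standard volumetric bound \(N(C,D)\le {\text{ vol }}(C+\tfrac{1}{2}D)/{\text{ vol }}(\tfrac{1}{2}D)\) for origin-symmetric \(D\).

```python
import math

def covering_number_interval(L, r):
    """Exact covering number N([0, L], [-r, r]): each translate of
    [-r, r] is an interval of length 2r, so ceil(L / (2r)) translates
    suffice, and a measure comparison shows no fewer can cover."""
    return max(1, math.ceil(L / (2 * r)))

def volumetric_bound(L, r):
    """The bound N(C, D) <= vol(C + D/2) / vol(D/2) for symmetric D,
    specialized to C = [0, L], D = [-r, r] on the real line."""
    return (L + r) / r

# the exact value never exceeds the volumetric bound
for L, r in [(10, 1), (10, 0.25), (3, 2), (100, 0.5)]:
    assert covering_number_interval(L, r) <= volumetric_bound(L, r)
```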

The second ingredient is the following technical lemma about log-concave functions, which is essentially shown in [41, Lemmas 4.4, 5.2].

Lemma 9.6

Let \(f: \mathbb R ^{+} \rightarrow \mathbb R ^{+} \) be a \(C^{2}\) log-concave function with \(\int _{0}^{\infty } f(t)\text{ d }t <\infty \). Suppose that \(\Vert f\Vert _{\infty } \le \text{ e }^{n} f(0)\). Then for \(n\ge 2\) and any \(b>0\),

$$\begin{aligned} \int _{0}^{b} t^{n-1} f(t) \text{ d }t \ge c^{n} \min \Big \{ \int _{0}^{\infty } t^{n-1} f(t) \text{ d }t , f(0)b^{n}\Big \}. \end{aligned}$$
(9.7)

Proof

For convenience, let \(g(t)= t^{n-1} f(t)\) and \(h:= \int _{0}^{\infty } g(t) \text{ d }t\). Let \(t_{n}\) be the (unique) positive real such that \( g^{\prime }(t_{n}) = 0\). Let \(\varepsilon \in (0,1)\) and \(a\ge 5\). Set \(t_{0}:= \sup \{ s>0 : f(s) \ge \text{ e }^{-an} f(0)\}\). It is shown in [41, Lemmas 4.4, 5.2] that

$$\begin{aligned} \int _{t_{n} (1-\varepsilon )}^{t_{n} (1+\varepsilon )} t^{n-1} f(t) \text{ d } t \ge \Big ( 1- c_1\text{ e }^{-c\varepsilon ^{2} n} \Big ) \int _{0}^{\infty } t^{n-1} f(t) \text{ d } t \end{aligned}$$
(9.8)

where \(c_1>1\) and \(0<c<1\) are positive numeric constants, and

$$\begin{aligned} \int _{0}^{t_{0}} t^{n-1} f(t) \text{ d }t \ge \Big ( 1- \text{ e }^{-an/8} \Big ) \int _{0}^{\infty } t^{n-1} f(t) \text{ d } t. \end{aligned}$$
(9.9)

Taking \(\varepsilon =1/2\) in (9.8) and using the definition of \(t_n\), we have

$$\begin{aligned} h\le c_2 \int _{t_n/2}^{\tfrac{3t_{n}}{2}} t^{n-1} f(t) \text{ d } t \le c_2 t_{n}^{n} f(t_{n}) \le c_2t_{n}^{n} \Vert f\Vert _{\infty } \le c_3^{n} t_{n}^{n} f(0), \end{aligned}$$
(9.10)

where \(c_2\) and \(c_3\) are positive numeric constants. Taking \(a=5\) in (9.9) and using (9.8), we have \(t_{0} \ge t_n/2\), which means that for \(s\le t_n/2\),

$$\begin{aligned} f(s) \ge \text{ e }^{-5n} f(0). \end{aligned}$$
(9.11)

Applying (9.8) once more, together with (9.11), we have

$$\begin{aligned} c_1\text{ e }^{-\tfrac{c}{4}n} h \ge \int _{0}^{ t_n/2} t^{n-1} f(t) \text{ d }t \ge \text{ e }^{-5n} f(0) \int _{0}^{ t_n/2} t^{n-1} \text{ d }t = \frac{\text{ e }^{-5n}}{n2^{n}} f(0) t_{n}^{n}, \end{aligned}$$

which implies \(h\ge c_4^{n} f(0) t_{n}^{n}\) for some positive numeric constant \(c_4\). Finally, if \(0<b\le t_n/2\), then (9.11) yields

$$\begin{aligned} \int _{0}^{b} t^{n-1} f(t) \text{ d } t \ge \frac{\text{ e }^{-5n}}{n} f(0) b^{n}. \end{aligned}$$

On the other hand, if \(b\ge t_n/2\), we apply (9.10) to get

$$\begin{aligned} \int _{0}^{b} t^{n-1} f(t) \text{ d }t \ge \int _{0}^{t_n/2} t^{n-1} f(t) \text{ d }t \ge \frac{\text{ e }^{-5n}}{n2^{n}}f(0)t_{n}^{n} \ge c^{n} h, \end{aligned}$$
(9.12)

from which the result follows.\(\square \)
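As a quick numeric check of (9.7) (ours, not part of the paper), take \(f(t)=\text{ e }^{-t}\), which is log-concave with \(\Vert f\Vert _{\infty }=f(0)=1\) and \(\int _0^{\infty } t^{n-1}f(t)\,\text{ d }t=(n-1)!\); the value \(c=\text{ e }^{-6}\) below is a crude constant compatible with the proof's \(\text{ e }^{-5n}/n2^n\) bounds, not an optimized one.

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule with m (even) subintervals on [a, b]."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

c = math.exp(-6)  # crude candidate constant for (9.7)
for n in (3, 5, 8):
    g = lambda t: t ** (n - 1) * math.exp(-t)
    total = math.gamma(n)  # = int_0^infty t^{n-1} e^{-t} dt
    for b in (0.5, 2.0, 20.0):
        partial = simpson(g, 0.0, b)
        # inequality (9.7): int_0^b >= c^n * min(full integral, f(0) * b^n)
        assert partial >= (c ** n) * min(total, b ** n)
```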

Proof of Proposition 9.5

Let \(K\) be an isotropic convex body with isotropic constant \(L_K\) (cf. (2.1)). We will first show that for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} {\text{ vol }}_{n}{\big ({K\cap \varepsilon L_{K} D_{n}}\big )} \ge (c\varepsilon )^{n}, \end{aligned}$$
(9.13)

where \(c>0\) is a positive numeric constant. By [49, Lemma 4], the covering number \(N(K,L_K D_n)\) satisfies

$$\begin{aligned} N(K, L_{K} D_{n}) \le \text{ e }^{c_{0}n}, \end{aligned}$$

where \(c_{0}>0\) is a positive numeric constant. Standard volumetric arguments (as in, e.g., [31, Lemma 4.2]) yield

$$\begin{aligned} N(K, \varepsilon L_{K} D_{n})&\le \frac{ 4^{n}{\text{ vol }}_{n}{\big ({K+ \varepsilon L_{K} D_{n}}\big )} }{ \varepsilon ^{n} {\text{ vol }}_{n}{\big ({L_{K} D_{n}}\big )}} \le \frac{4^{n}{\text{ vol }}_{n}{\big ({K+L_{K} D_{n}}\big )}}{ \varepsilon ^{n} {\text{ vol }}_{n}{\big ({L_{K} D_{n}}\big )}} \\&\le \Big (\frac{c_{1}}{ \varepsilon }\Big )^{n} N(K, L_{K} D_{n}) \le \Big (\frac{c_{2}}{\varepsilon }\Big )^{n}. \end{aligned}$$

By the Brunn–Minkowski inequality and [25, Theorem 4], if \(C_{1}, C_{2}\subset \mathbb{R }^n\) are convex bodies such that the center of mass of \(C_{1}\) is the origin, \({\text{ vol }}_{n}{\big ({C_1}\big )}=1\), and \(C_{2}\) is origin-symmetric, then

$$\begin{aligned} 1\le \max _{x\in \mathbb{R }^n} {\text{ vol }}_{n}{\big ({C_{1} \cap (x+ C_{2})}\big )} N(C_{1}, C_{2}) \le \text{ e }^n{\text{ vol }}_{n}{\big ({C_{1} \cap C_{2}}\big )} N(C_{1}, C_{2}). \end{aligned}$$

Thus

$$\begin{aligned} {\text{ vol }}_{n}{\big ({K\cap \varepsilon L_{K} D_{n}}\big )} \ge \frac{\text{ e }^{-n}}{ N(K, \varepsilon L_{K}D_{n})} \ge (c\varepsilon )^{n}, \end{aligned}$$

which establishes (9.13).

Without loss of generality we may assume that the density \(f\) of \(\mu \) is \(C^{2}\). Let \(b_{n}:= \omega _{n}^{-1/n}\) and set

$$\begin{aligned} \rho _{K}(\theta ):= \Big ( \frac{n}{f(0)} \int _{0}^{\infty } t^{n-1} f(t\theta ) \text{ d }t\Big )^{\tfrac{1}{n}}. \end{aligned}$$

By [5], \(\rho _{K}\) is the radial function of a convex body \(K\). It is known that \({\text{ vol }}_{n}{\big ({K}\big )}^{1/n} =f(0)^{-1/n}, L_K\simeq f(0)^{1/n}\) and there exists \(T\in SL(n)\) satisfying \(|Tx|\simeq |x|\) for all \(x\in S^{n-1}\) such that \(TK\) is an isotropic convex body (see, e.g., [64, Propositions 3.3, 3.5]). Thus if \(\widetilde{K}\) is the volume-one homothet of \(K\), we have

$$\begin{aligned} \rho _{\widetilde{K}} (\theta ) = n^{\tfrac{1}{n}} \Big (\int _{0}^{\infty } t^{n-1} f(t\theta ) \text{ d }t\Big )^{\tfrac{1}{n}}. \end{aligned}$$

Note that

$$\begin{aligned} \rho _{\widetilde{K} \cap \varepsilon f(0)^{\tfrac{1}{n}}D_{n}} (\theta ) = \min \Big \{ n^{\tfrac{1}{n}} \Big (\int _{0}^{\infty } t^{n-1} f(t\theta ) \text{ d }t\Big )^{\tfrac{1}{n}}, \varepsilon f(0)^{\tfrac{1}{n}} b_{n}\Big \}. \end{aligned}$$
(9.14)

Since \(\mu \) is isotropic, [25, Theorem 4] gives \(\Vert f\Vert _{\infty } \le \text{ e }^{n}f(0)\). Using Lemma 9.6 and (9.14) we have

$$\begin{aligned} \mu (\varepsilon D_{n})&= n\omega _{n} \int _{S^{n-1}} \int _{0}^{\varepsilon b_{n}} t^{n-1} f(t\theta ) \text{ d }t \text{ d }\sigma (\theta ) \nonumber \\&\ge c^{n} \omega _{n} \int _{S^{n-1}} \min \{ \rho _{\widetilde{K}}^{n} (\theta ), \varepsilon ^n f(0) b_{n}^{n}\} \text{ d }\sigma (\theta )\nonumber \\&= c^{n} \omega _{n} \int _{S^{n-1}} \rho _{\widetilde{K} \cap \varepsilon f(0)^{\tfrac{1}{n}}D_{n}}^ {n} (\theta ) \text{ d }\sigma (\theta )\nonumber \\&= c^{n}{\text{ vol }}_{n}{\big ({\widetilde{K}\cap \varepsilon f(0)^{\tfrac{1}{n}}D_{n}}\big )}\nonumber \\&\ge c^{n} {\text{ vol }}_{n}{\big ({\widetilde{K}\cap \varepsilon c^{\prime } L_{K}D_{n}}\big )}. \end{aligned}$$

By adjusting the constants and applying (9.13) for \(\widetilde{K}\), we conclude the proof.\(\square \)
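For the standard Gaussian measure \(\gamma _n\), a particular isotropic log-concave measure, the quantity in Proposition 9.5 can be evaluated exactly, since \(|X|^2\) is chi-square distributed with \(n\) degrees of freedom. The sketch below (ours, not from the paper) checks the small-ball inequality in a few cases; in these ranges even \(c_1=1\) suffices.

```python
import math

def chi2_cdf_even(n, x):
    """Exact chi-square CDF for even n: P(chi^2_n <= x)
    = 1 - exp(-x/2) * sum_{j < n/2} (x/2)^j / j!."""
    k = n // 2
    s = sum((x / 2) ** j / math.factorial(j) for j in range(k))
    return 1.0 - math.exp(-x / 2) * s

# For X standard Gaussian in R^n, |X|^2 ~ chi^2_n, so
# P(|X| <= eps * sqrt(n)) = chi2_cdf_even(n, eps^2 * n).
# Proposition 9.5 predicts a lower bound of order eps^n.
for n in (6, 10, 20):
    for eps in (0.3, 0.5):
        p = chi2_cdf_even(n, (eps ** 2) * n)
        assert p >= eps ** n
```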

10 Bounds for a General Convex Body \(C\)

A large part of this paper has involved general random convex sets \([X_1\ldots X_N]C\), and we have emphasized only the small-ball probabilities for \(C=B_1^N\) and \(C=B_{\infty }^N\). The approach of applying a random linear operator \([X_1\ldots X_N]\) to a general convex body \(C\) has led to several applications [65, 66, §4,5], and we feel it is of interest to outline how to obtain small-ball probabilities for \({\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}\) in the general case.

If \(C\subset \mathbb{R }^N\) is nearly degenerate, one cannot expect to control the small-ball probability

$$\begin{aligned} \mathbb{P }_{\otimes \mu _i}\big ({\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}^{1/n}\le \varepsilon \big ). \end{aligned}$$

To ensure that \(C\) is not degenerate, we make assumptions about its “position.” By a position of a convex body, we mean a linear image, chosen to satisfy certain conditions. As Proposition 4.4 indicates, a key part of the proof is to bound the quantity \(A_{n,p}(C)\). As we did for \(B_1^N\) and \(B_{\infty }^N\), we will give nearly optimal estimates when \(N\) is proportional to \(n\), assuming that \(C\) is in a suitable position. We will also provide non-trivial estimates in the general case.

10.1 \(M\)-Position and the Proportional Case

Our first method for bounding \(A_{n,p}(C)\) is applicable when \(N\) is proportional to \(n\) and depends on a deep result due to Milman [57]; see also [67, Chap. 7]. Milman proved that given any convex body \(C\), one can find a suitable position such that the covering number of \(C\) by a ball of the same volume is of minimal possible order. As in Sect. 9.1, we use \(N(C, D)\) to denote the covering number of \(C\) with respect to \(D\) (cf. (9.6)). Using the above notation, Milman’s theorem reads as follows.

Theorem 10.1

There exists a constant \(\beta >0\) such that for any convex body \(C\subset \mathbb{R }^N\) there exists a linear operator \(T:\mathbb{R }^N\rightarrow \mathbb{R }^N\) such that \({\text{ vol }}_{N}{\big ({TC}\big )} =1\) and

$$\begin{aligned} N(TC, D_{N}) \le \text{ e }^{\beta N}. \end{aligned}$$
(10.1)

We say that \(C\) is in \(M\)-position if \(T\) is the identity operator. We refer to [67] for further information about \(M\)-position.

The following proposition is a well-known property of bodies in \(M\)-position; the proof is included for completeness.

Proposition 10.2

Let \(C\subset \mathbb R ^N\) be an origin-symmetric convex body in \(M\)-position with constant \(\beta >0\). Let \(\lambda \in (0,1)\) and set \(n=\lambda N\). Then

$$\begin{aligned} \frac{\max _{F\in G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}} }{\min _{F\in G_{N,n}} {\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}} } \le c(\lambda ,\beta ), \end{aligned}$$
(10.2)

where \( c (\lambda ,\beta )>0\) depends only on \(\lambda , \beta \).

Proof

Let \(F\in G_{N,n}\). Then

$$\begin{aligned} \frac{{\text{ vol }}_{n}{\big ({P_{F} C}\big )}}{ {\text{ vol }}_{n}{\big ({P_{F} D_{N}}\big )}} \le N(P_{F} C, P_{F} D_{N}) \le N( C, D_{N}) \le \text{ e }^{\beta N}, \end{aligned}$$

hence

$$\begin{aligned} {\text{ vol }}_{n}{\big ({P_{F} C}\big )}^{\tfrac{1}{n}} \le \text{ e }^{\beta \frac{N}{n}} {\text{ vol }}_{n}{\big ({P_{F} D_{N}}\big )}^{\tfrac{1}{n}}. \end{aligned}$$
(10.3)

Since \({\text{ vol }}_{N-n}{\big ({C\cap F^{\perp }}\big )} {\text{ vol }}_{n}{\big ({P_{F} C}\big )}\ge {\text{ vol }}_{N}{\big ({C}\big )}=1\) (by Fubini’s theorem, together with the Brunn–Minkowski inequality, which shows that the central section \(C\cap F^{\perp }\) has maximal volume among the parallel sections of the origin-symmetric body \(C\)), we have

$$\begin{aligned} {\text{ vol }}_{N-n}{\big ({C\cap F^{\perp }}\big )} \ge \frac{1}{ {\text{ vol }}_{n}{\big ({P_{F} C}\big )}} \ge \frac{1}{ \text{ e }^{\beta N} {\text{ vol }}_{n}{\big ({P_{F} D_{N}}\big )}}. \end{aligned}$$

Thus for every \(1\le \ell <N\) and \(E\in G_{N, N-\ell }\) we obtain

$$\begin{aligned} {\text{ vol }}_{N-\ell }{\big ({P_{E}C}\big )} \ge {\text{ vol }}_{N-\ell }{\big ({C\cap E}\big )} \ge \text{ e }^{-\beta N} \frac{1}{ {\text{ vol }}_{\ell }{\big ({P_{E^{\perp }} D_{N}}\big )}}. \end{aligned}$$

Applying the latter inequality for \(\ell := N-n\) and \(E\in G_{N,n}\) yields

$$\begin{aligned} {\text{ vol }}_{n}{\big ({P_{E}C}\big )}^{\tfrac{1}{n}}&\ge \text{ e }^{-\beta \frac{N}{n}} \frac{1}{ {\text{ vol }}_{N-n}{\big ({P_{E^{\perp }}D_{N}}\big )}^{\tfrac{1}{n}}}\nonumber \\&= \text{ e }^{-\beta \frac{N}{n}} \frac{1}{ {\text{ vol }}_{N-n}{\big ({D_{N}\cap E^{\perp }}\big )}^{\tfrac{1}{n}}} \nonumber \\&\ge c\text{ e }^{-\tfrac{\beta N}{n}}. \end{aligned}$$
(10.4)

By (10.3), (10.4) and the fact that \({\text{ vol }}_{n}{\big ({P_{F} D_{N}}\big )}^{\tfrac{1}{n}} \simeq \sqrt{N/n}\), we conclude that

$$\begin{aligned} \frac{\max _{F\in G_{N,n}} {\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}} }{ \min _{F\in G_{N,n}} {\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}} } \le c\text{ e }^{2\frac{\beta N}{n}} \sqrt{\frac{N}{n}}. \end{aligned}$$

This yields (10.2) with \(c(\lambda , \beta ):= \frac{c}{\sqrt{\lambda } } \text{ e }^{\tfrac{2\beta }{\lambda }}\).\(\square \)

Proposition 10.3

Let \(C\subset \mathbb{R }^N\) be an origin-symmetric convex body in \(M\)-position with constant \(\beta \). Let \(\lambda \in (0,1)\) and let \(n=\lambda N\). Then for all \(p\in [1, \infty ]\),

$$\begin{aligned} A_{n, p} (C) \le c_1\text{ e }^{3\beta /\lambda }. \end{aligned}$$
(10.5)

Proof

Recalling the definition of \(A_{n,p}(C)\) (cf. (4.10)), we have

$$\begin{aligned} A_{n, p}(C) \le \frac{\max _{F\in G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}}}{\min _{F\in G_{N,n}}{\text{ vol }}_{n}{\big ({P_F C}\big )}^{\tfrac{1}{n}}}. \end{aligned}$$

Applying Proposition 10.2 gives the result.\(\square \)

By applying Proposition 4.4, one obtains small-ball estimates for \({{\text{ vol }}_{n}{\big ({GC}\big )}}\) when \(N\) is proportional to \(n\) and \(C\) is in \(M\)-position. Proceeding to the case of arbitrary measures \(\mu _i\in \mathcal P _n^b\) then depends on the comparison in Proposition 7.2 (where we have assumed \(C\) is \(1\)-symmetric) and the proof follows that of Theorem 8.1. It is not difficult to show that any \(1\)-symmetric convex body of volume one is in \(M\)-position.

10.2 Small-Ball Estimates for Norms: Implications for Generalized Intrinsic Volumes

Our second method for bounding \(A_{n,p}(C)\) involves Proposition 5.5 and therefore depends on lower bounds for generalized mean-widths \(W_{-p}(C)\); this, in turn, depends on small-ball estimates for norms. The study of small-ball probabilities for norms was initiated in [43, 46] and shown to have close connections to Milman’s proof of Dvoretzky’s theorem on nearly-Euclidean sections of convex bodies. We will give bounds for \(A_{n,p}(C)\) in terms of the Dvoretzky dimension of \(C\) (defined below). Actually, one can replace the Dvoretzky dimension by a potentially larger quantity. For this we will make use of a theorem from [43], which we state below in terms of support functions (dual to the setting there).

If \(C\subset \mathbb{R }^N\) is a convex body, the Dvoretzky dimension \(k_*(C)\) is defined by

$$\begin{aligned} k_*(C) = N\Big (\frac{W(C)}{\mathrm{diam}(C)}\Big )^2, \end{aligned}$$

where \(\mathrm{diam}(C)\) is the diameter of \(C\) and \(W(C)\) is the mean-width of \(C\). As shown by Milman [56] (see also [60]), the parameter \(k_*(C)\) is the dimension up to which “most” projections of \(C\) are nearly Euclidean; more precisely, for \(n\le k_*(C)\) the \(\nu _{N,n}\)-measure of subspaces \(E\in G_{N,n}\) satisfying

$$\begin{aligned} c_1W(C) P_E B_2^N \subset P_E C \subset c_2 W(C) P_E B_2^N \end{aligned}$$
(10.6)

for some positive numeric constants \(c_1\) and \(c_2\) is at least \(1-\text{ e }^{-n}\); see [59] or [67] for further background information.
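For example, for \(C=B_1^N\) one has \(h_{B_1^N}(\theta )=\Vert \theta \Vert _{\infty }\), and \(k_*(B_1^N)\) is of order \(\ln N\). The following Monte Carlo sketch (ours, with illustrative parameters) estimates the mean-width and hence \(k_*\) directly from the definition.

```python
import math
import random

random.seed(1)

def mean_width_linfty(N, trials=400):
    """Monte Carlo estimate of W(B_1^N) = 2 E ||theta||_inf, where theta
    is uniform on the sphere S^{N-1} (sampled as a normalized Gaussian
    vector) and the ell_inf norm is the support function of B_1^N."""
    acc = 0.0
    for _ in range(trials):
        g = [random.gauss(0.0, 1.0) for _ in range(N)]
        norm = math.sqrt(sum(x * x for x in g))
        acc += max(abs(x) for x in g) / norm
    return 2.0 * acc / trials

N = 500
w = mean_width_linfty(N)
k_star = N * (w / 2.0) ** 2  # diam(B_1^N) = 2, per the definition above
# k_star should be of order log N for the cross-polytope
```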

It has been observed that if one requires only the left-hand inclusion of (10.6), then the dimension at which this holds can increase dramatically. The critical dimension depends on the following quantity, introduced in [43],

$$\begin{aligned} d_*(C) := \min \{ - \ln \sigma \{\theta \in S^{N-1}: h_C(\theta )\le W(C)/2\}, N\}. \end{aligned}$$

One has \(d_*(C)\ge c_1k_*(C)\) for a positive numeric constant \(c_1\); see [43].

Theorem 10.4

[43] Let \(C\) be an origin-symmetric convex body in \(\mathbb{R }^N\). Assume that \(0<p\le d_*(C)\). Then

$$\begin{aligned} c_1W(C)\le W_{-p}(C)\le c_2 W(C) \end{aligned}$$

where \(c_1\) and \(c_2\) are positive numeric constants.

When \(C\) is in a suitable position, for instance when \(C^{\circ }\) is in John’s position (see, e.g., [59, Chap. 3]), we have \(k_*(C)\ge c \ln N\), where \(c\) is a positive numeric constant.

Proposition 10.5

Let \(C\subset \mathbb{R }^N\) be an origin-symmetric convex body. If \( np\le k_*(C)\le d_*(C)\), then

$$\begin{aligned} A_{n,p} (C) \le c, \end{aligned}$$
(10.7)

where \(c\) is a positive numeric constant. In particular, if \(C\) is a convex body such that \(C^{\circ }\) is in John’s position and \(1\le p\le \frac{c\ln N}{n}\), then (10.7) holds.

Proof

By (5.2), we have

$$\begin{aligned} W_{[n,1]}(C)\le \frac{c_2}{\sqrt{n}}W(C). \end{aligned}$$

On the other hand, Proposition 5.5 gives

$$\begin{aligned} W_{[n,-p]}(C) \ge \frac{c_1}{\sqrt{n}}W_{-np}(C). \end{aligned}$$

Thus

$$\begin{aligned} A_{n,p}(C) = \frac{ W_{[n,1]}(C)}{ W_{[n,-p]}(C)} \le \frac{c_2 W(C)}{c_1W_{-np}(C)}. \end{aligned}$$

Applying Theorem 10.4 yields \(A_{n,p}(C)\le c\).\(\square \)

Remark 10.6

It is shown in [43] that \(d_*(B_1^N)\) is much larger than \(k_*(B_1^N)\). In fact, the calculation in [43, Remark 2 on page 204] led us to consider Proposition 6.1, and our proof is based on similar estimates.