Abstract
We prove small-deviation estimates for the volume of random convex sets. The focus is on convex hulls and Minkowski sums of line segments generated by independent random points. The random models considered include (Lebesgue) absolutely continuous probability measures with bounded densities and the class of log-concave measures.
1 Introduction
The focus of this paper is distributional inequalities for the volume of random convex sets. Typical models involve convex hulls or Minkowski sums of line segments generated by independent random points in \(\mathbb{R }^n\). Specifically, let \(\mu \) be a probability measure on \(\mathbb{R }^n\). Sample \(N\ge n\) independent points \(X_1,\ldots ,X_N\) according to \(\mu \). Let \(K_N\) be the absolute convex hull of the \(X_i\)’s, i.e.,
$$\begin{aligned} K_N:={\mathrm{conv}}\big \{\pm X_1,\ldots ,\pm X_N\big \}, \end{aligned}$$
and let \(Z_N\) be the zonotope, i.e., the Minkowski sum of the line segments \([-X_i,X_i]\),
$$\begin{aligned} Z_N:=\sum _{i=1}^{N}[-X_i,X_i]. \end{aligned}$$
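As a concrete illustration (our own, not part of the formal development), the volume of such a zonotope can be computed exactly via the classical determinant expansion \(\mathrm{vol}_n\big (\sum _{i=1}^{N}[-x_i,x_i]\big )=2^n\sum _{|I|=n}|\det [x_i]_{i\in I}|\). A minimal sketch, assuming only NumPy:

```python
import itertools
import numpy as np

def zonotope_volume(X):
    """Exact volume of Z = sum_i [-x_i, x_i], where the x_i are the columns
    of the n x N matrix X, via the classical determinant expansion
    vol_n(Z) = 2^n * sum over |I| = n of |det X_I|."""
    n, N = X.shape
    return 2 ** n * sum(
        abs(np.linalg.det(X[:, list(I)]))
        for I in itertools.combinations(range(N), n)
    )

# Planar sanity check: for x1 = (1,0), x2 = (0,1), x3 = (1,1), the zonotope
# is the square [-1,1]^2 plus the segment [-(1,1),(1,1)], of total area 12.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(zonotope_volume(X))  # -> 12.0 (up to floating-point rounding)
```

For \(N=n\) the sum has a single term and the formula degenerates to \(2^n|\det X|\), the volume of a parallelepiped.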
The literature contains a wealth of results aimed at quantifying the size of \({K}_{N}\) and its non-symmetric analogue \({\mathrm{conv}}\big \{{X}_{1},\ldots , {X}_{N}\big \}\) in terms of metric quantities such as volume, surface area and mean-width, especially in the asymptotic setting where the dimension \(n\) is fixed and \(N\rightarrow \infty \). The measure \(\mu \) strongly determines the corresponding properties of \({K}_{N}\) and \({Z}_{N}\). Common models include the case when \(\mu \) is the standard Gaussian measure, see e.g., [10, 39]; the uniform measure on a convex body, see e.g., the survey [7]; among many others, e.g., [71]. These are just a sample of recent articles and we refer the reader to the thorough list of references given therein.
A different asymptotic setting involves the case when the dimension \(n\) is large and one is interested in precise dependence on \(N\) and phenomena that hold uniformly for a large family of measures \(\mu \). In this setting, various geometric properties of \(K_N\) and \(Z_N\) such as Banach–Mazur distance, in-radius and other metric quantities have been analyzed. For zonotopes, see e.g., [14, 15, 35]. Concerning \(K_N\) there have been a number of recent results with special attention paid to estimates that hold “with high probability.” These include, for instance, the case when \(\mu \) is the uniform measure on the vertices of the cube [29], measures with “Gaussian-like” features [45, 50] and the case when \(\mu \) is the uniform measure on a convex body [21, 30]. We are interested in distributional inequalities for \({\text{ vol }}_{n}{\big ({K}_{N}\big )}\) and \({\text{ vol }}_{n}{\big ({Z}_{N}\big )}\), where \({\text{ vol }}_{n}{(\cdot )}\) denotes \(n\)-dimensional Lebesgue measure, with precise dependence on \(n\) and \(N\) for a broad class of measures.
Let \(\mathcal P _{n}\) denote the set of all probability measures on \(\mathbb{R }^n\) that are absolutely continuous with respect to Lebesgue measure. Our setting involves those \(\mu \) in \(\mathcal P _{n}\) whose densities \(f_{\mu }=\frac{d\mu }{dx}\) are bounded. To fix the normalization, we set
$$\begin{aligned} \mathcal P _{n}^{b}:=\Big \{\mu \in \mathcal P _{n}: \left||f_{\mu }\right||_{\infty }=1\Big \}, \end{aligned}$$
where \(\left||f\right||_{\infty }\) is the essential supremum of \(f\). In particular, our setting includes the Gaussian measure and the uniform measure on a convex body \(K\subset \mathbb{R }^n\) but not the case of discrete measures. We assume that \(\mu _{1},\ldots ,\mu _{N}\in \mathcal P _{n}^{b}\) and that \({X}_{1},\ldots ,{X}_{N}\) are independent random vectors with \({X}_{i}\) distributed according to \(\mu _{i}\). Since we will compare \({K}_{N}\) and \({Z}_{N}\) (which depend on the \(X_i\)’s) for various underlying measures, we will write \(\mathbb P _{\otimes _{i=1}^N \mu _{i}}\) (or simply \(\mathbb P _{\otimes \mu _{i}}\)) for the product measure associated with \(\mu _{1},\ldots ,\mu _{N}\), and denote the corresponding expectation by \(\mathbb{E }_{\otimes _{i=1}^{N} \mu _{i}} = \mathbb{E }_{\otimes \mu _{i}}\).
Our main interest is in bounding the quantity
for small values of \(\varepsilon \); in particular, the precise dependence on \(\varepsilon , n\) and \(N\). Such estimates are often referred to as small-ball probabilities. Our aim is to find and quantify universal behavior of small-ball probabilities for \({\text{ vol }}_{n}{\big ({K_N}\big )}\), as well as \({\text{ vol }}_{n}{\big ({Z_N}\big )}\), for \(\mu _i\in \mathcal P _n^b\). For the expectation \(\mathbb{E }_{\otimes \mu _i}{\text{ vol }}_{n}{\big ({K_N}\big )}\), the behavior can be far from uniform. Indeed, even for the Euclidean norm \(|X_1|\) of a single vector, the quantity \(\mathbb{E }_{\mu }|X_1|\) need not be finite. Thus in such a general setting, searching for uniform concentration phenomena seems a lost cause. We will show, however, that small-ball-type estimates always hold and are surprisingly uniform.
To the best of our knowledge, apart from particular cases, general small-ball estimates are unknown. Even after surveying related results in the literature, it was unclear to us what order of magnitude to expect. One reason for this is that the volume problem is often approached indirectly. Many cases involve stronger statements about, e.g., the in-radius of \({K}_{N}\) or inclusion of other naturally associated sets. For instance, the main focus of [50] is singular values of certain random matrices; volume estimates for \({K}_{N}\) arise as consequences. To put our problem in context, we state a sample result from the latter paper. Specifically, in [50], \({K}_{N}\) is the absolute convex hull of the rows of a random matrix, the entries of which are symmetric, independent and identically distributed random variables with sub-Gaussian tail-decay. In this case, they prove that if \(N\ge (1+\zeta )n\), where \(\zeta >1/\ln n\), and \(\beta \in (0,1/2)\), then
here \(c(\zeta )\) is a constant that depends on \(\zeta \) and the sub-Gaussian constant of the measure and \(c_1\) is a positive numeric constant. The latter is proved by estimating the in-radius of \(K_N\). The factor \(N^{1-\beta }n^{\beta }\) in the exponent is the best possible for the analogous statement involving the in-radius of \(K_N\) in the class of measures they consider (see [50, Theorem 4.2 & subsequent remark]). In the class \(\mathcal P _n^b\), however, the volume \({\text{ vol }}_{n}{\big ({K_N}\big )}\) behaves differently.
A similar result involves the case when \(\mu _K\) is the uniform measure on a convex body \(K\subset \mathbb{R }^n\) of volume one. In this case, it is known that if \(N\ge n\), then
where \(c\) is a positive numeric constant. See the discussion in [21, §3.1] (and [68, Proposition 1] for the case \(N=n\)).
The quantity \(\sqrt{\frac{\ln (2N/n)}{n}}\) that appears in both of the latter examples corresponds to the expectation of \({\text{ vol }}_{n}{\big ({K_N}\big )}^{1/n}\) for the uniform measure \(\lambda _{D_n}\) on the Euclidean ball of volume one. More precisely, for \(n \le N \le \text{ e }^n\), one has
see, e.g., [30] (see also the references in Sect. 4). Here \(A\simeq B\) means that \(c_{1} B\le A\le c_{2}B\) for some positive numeric constants \(c_1\) and \(c_2\). It is proved in [66] that among all measures \(\mu \in \mathcal P _n^b\) the uniform measure \(\lambda _{D_n}\) on the Euclidean ball of volume one minimizes the expected volume of \(K_N\), namely,
Similarly, it is shown in [66] that
One can check that for \(N\ge n\),
(use, e.g., Lemma 4.7; see also [15] as well as (9.4) for a more general result). Thus it is always meaningful to ask for the dependence on \(\varepsilon , n\) and \(N\) in the following quantities:
and
for all measures in \(\mathcal P _n^b\).
Our first main result is the following theorem.
Theorem 1.1
Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Let \(\delta >1\) and \(\varepsilon \in (0,1)\). Then
Moreover, if \(N\le n\text{ e }^{\delta ^2}\), then
where the \(c_i\)’s are positive numeric constants.
Here and throughout the paper, we use the notation \(o(1)\) to denote a quantity in \([0,1]\) that tends to \(0\) as \(N,n\rightarrow \infty \). For zonotopes, we prove the following theorem.
Theorem 1.2
Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Then for each \(\varepsilon \in (0,1)\),
where \(c\) is a positive numeric constant.
In Sect. 8, we also give lower bounds for the quantities in Theorems 1.1 and 1.2, which suggest that the estimates (1.9) and (1.10) are essentially optimal.
It has been observed in various other contexts that achieving the best bounds in small-ball estimates in high-dimensional geometry often requires different techniques than those used for proving large deviations, see e.g., [33, Proposition 3], [43], [45, Proposition 2.6]. To describe the techniques used in this paper, we outline our viewpoint.
As in [66], we adopt an operator-theoretic point of view from the Local Theory of Banach spaces, e.g., [53–55]. Namely, we view \(K_N\) and \(Z_N\) as the image of the cross-polytope \(B_1^N\) and the cube \(B_{\infty }^N\), respectively, under the random matrix \([X_1\ldots X_N]\), i.e., \(K_N = [X_1\ldots X_N]B_1^N\) and \(Z_N=[X_1\ldots X_N]B_{\infty }^N\). In the same way, for any convex body \(C\subset \mathbb{R }^N\), we generate a random \(n\)-dimensional convex body by applying \([X_1\ldots X_N]\) to \(C\):
Our first step is to identify the extremal measures \(\mu _i\in \mathcal{P }_n^b\) that maximize the small-ball probability
This is done by means of symmetrization as in [66]. We show that the probability in question is maximized for \(\mu _i = \lambda _{D_n}\), the uniform measure on the Euclidean ball of volume one. While this simplifies the problem, computing the small-ball probability directly for \(\lambda _{D_n}\) is non-trivial. We turn instead to \(\mu = \gamma _n\), the standard Gaussian measure. Working with \(\gamma _n\) allows us to recast the small-ball problem in more geometric terms by using the Gaussian representation of intrinsic volumes [72, 74] and a suitable extension. A key point in our approach is that purely geometric properties of \(C\)—its intrinsic volumes and natural generalizations—dictate the small-ball behavior for \({\text{ vol }}_{n}{\big ([X_1\ldots X_N]C\big )}\). In this way, we reduce Theorems 1.1 and 1.2 to questions from the realm of classical convexity about the cross-polytope and the cube. In particular, Theorem 1.2 depends on verification of an isomorphic version of a conjecture of Lutwak about affine quermassintegrals; a key tool here is a result due to Grinberg [37] (see Sect. 5). Wherever possible, we outline proofs for a general convex body \(C\subset \mathbb{R }^N\). However, the focus of the paper is on \(B_1^N\) and \(B_{\infty }^N\).
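As a toy illustration of this operator-theoretic viewpoint in the square case \(N=n\) (our own sanity check, not from the paper), the identity \({\mathrm{vol}}_{n}(TB_1^n)=|\det T|\,{\mathrm{vol}}_{n}(B_1^n)\) with \({\mathrm{vol}}_{n}(B_1^n)=2^n/n!\) can be verified by Monte Carlo, using the membership test \(y\in TB_1^n \iff \Vert T^{-1}y\Vert _1\le 1\):

```python
import math
import numpy as np

# Columns of T play the role of the points X_1, X_2; then T B_1^2 is the
# absolute convex hull conv{+-X_1, +-X_2}.
T = np.array([[2.0, 1.0],
              [0.5, 3.0]])
n = 2
exact = abs(np.linalg.det(T)) * 2 ** n / math.factorial(n)  # |det T| * vol(B_1^2)

# Monte Carlo check: y lies in T B_1^2 iff ||T^{-1} y||_1 <= 1.
rng = np.random.default_rng(0)
b = np.abs(T).max(axis=1)                # bounding box [-b_1,b_1] x [-b_2,b_2]
pts = rng.uniform(-1.0, 1.0, size=(200_000, n)) * b
inside = np.abs(np.linalg.solve(T, pts.T)).sum(axis=0) <= 1.0
mc = inside.mean() * np.prod(2 * b)

print(exact)   # 11.0
print(mc)      # agrees with `exact` to within roughly 1%
```

The same picture holds for any convex body \(C\subset \mathbb{R }^N\) in place of \(B_1^N\), which is why the geometry of \(C\) enters the analysis.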
A more common (although slightly more restrictive) normalization than ours is to assume that the covariance matrix of \(\mu \) is the identity, i.e., that \(\mu \) is isotropic. We prove estimates analogous to those of Theorems 1.1 and 1.2 under this normalization in Sect. 9; we also treat the important subclass of log-concave measures (see Sects. 2 and 9 for definitions). In the last several years, there have been many significant results concerning random matrices generated by log-concave measures, see e.g., [1, 2] and the references therein. For this class we obtain more precise estimates, such as the following theorem.
Theorem 1.3
Let \(n\le N\le \text{ e }^n\) and \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then for every \(\varepsilon \in (0,1)\),
and
where \(c\) and \(c_1\) are positive numeric constants.
See Sect. 9 for the definition of the isotropic constant and the corresponding result for \(K_N\).
The paper is organized as follows. In Sect. 2 we give basic notation and definitions used in the paper. The reduction to the uniform measure on the Euclidean ball via symmetrization is described in Sect. 3; we simply sketch the main points from [66]. In Sect. 4 we discuss the Gaussian representation of intrinsic volumes and show how an extension thereof is connected to the small-ball problem. Generalizations of intrinsic volumes are discussed in Sect. 5. Section 6 involves technical computations for the generalized intrinsic volumes of \(B_1^N\) and \(B_{\infty }^N\). In Sect. 7, we transfer the small-ball estimates obtained for \(\gamma _n\) to \(\lambda _{D_n}\). In Sect. 8 we prove Theorems 1.1 and 1.2 and give complementary lower bounds. In Sect. 9 we deal with the isotropic normalization and the log-concave case. We conclude with a discussion in Sect. 10 about general random convex sets \([X_1\ldots X_N]C\) and show how results from the asymptotic theory of convex bodies [59, 67] can be applied to the general problem of small-ball estimates for random convex sets.
2 Preliminaries
In this section we record notation and definitions used throughout the paper. The setting is \(\mathbb{R }^n\), where \(n\ge 2\), with the usual inner-product \(\langle \cdot , \cdot \rangle \), standard Euclidean norm \(|\cdot |\) and standard unit vector basis \(e_1,\ldots ,e_n\); \(n\)-dimensional Lebesgue measure \({\text{ vol }}_{n}{\big ({\cdot }\big )}\); Euclidean ball of radius one \(B_2^n\) with volume \(\omega _n={\text{ vol }}_{n}{\big ({B_2^n}\big )}\). We reserve \(D_n\) for the Euclidean ball of volume one, i.e., \(D_n=\omega _n^{-1/n} B_2^n\); Lebesgue measure restricted to \(D_n\) is \(\lambda _{D_n}\). The unit sphere is \(S^{n-1}\) and is equipped with the Haar probability measure \(\sigma \). The Grassmannian manifold of all \(n\)-dimensional subspaces of \(\mathbb{R }^N\) is denoted \(G_{N,n}\), with Haar probability measure \(\nu _{N,n}\). For a subspace \(F\in G_{N,n}\), we write \(P_F\) for the orthogonal projection onto \(F\). The standard Gaussian measure is \(\gamma _n\), i.e., \(\text{ d }\gamma _n(x) = (2\pi )^{-n/2}\text{ e }^{-|x|^2/2}\text{ d }x\), while \(\overline{\gamma }_n\) is the Gaussian measure with \(\text{ d }\overline{\gamma }_n(x)= \text{ e }^{-\pi |x|^2}\text{ d }x\).
Throughout the paper we reserve the symbols \(c,c_1,c_2,\ldots \) for positive numeric constants (not necessarily the same in each occurrence). We use the convention \(A\simeq B\) to signify that \(c_{1} B\le A\le c_{2}B\) for some positive numeric constants \(c_1\) and \(c_2\). Wherever necessary, we assume without loss of generality that \(n\) is larger than a fixed numeric constant. By adjusting the constants involved one can always force the results to hold for all \(n\ge 2\).
A convex body \(K\subset \mathbb{R }^n\) is a compact, convex set with non-empty interior. The support function of a convex body \(K\) is given by
$$\begin{aligned} h_{K}(y):=\max _{x\in K}\langle x,y\rangle , \quad y\in \mathbb{R }^n, \end{aligned}$$
and the mean-width of \(K\) is
$$\begin{aligned} w(K):=2\int \limits _{S^{n-1}}h_{K}(\theta )\,\text{ d }\sigma (\theta ). \end{aligned}$$
We say that \(K\) is origin-symmetric if \(K=-K\). If the origin is an interior point of \(K\), the polar body \(K^{\circ }\) of \(K\) is defined by \(K^{\circ }=\{y\in \mathbb{R }^n:h_K(y)\le 1\}\). A convex body is isotropic if its volume is one, its center of mass is the origin and
$$\begin{aligned} \int \limits _{K}\langle x,\theta \rangle ^{2}\,\text{ d }x = L_{K}^{2} \quad \text{ for } \text{ all } \theta \in S^{n-1}; \end{aligned}$$
the constant \(L_K\) is called the isotropic constant of \(K\). We say that a convex body \(K\subset \mathbb{R }^n\) is \(1\)-symmetric (with respect to the standard basis \(e_1,\ldots ,e_n\)), if
$$\begin{aligned} (\alpha _{1}x_{\xi (1)},\ldots ,\alpha _{n}x_{\xi (n)})\in K, \qquad (2.2) \end{aligned}$$
whenever \(x=(x_1,\ldots ,x_n)\in K, \alpha _i\in [-1,1]\) for each \(i=1,\ldots ,n\) and \(\xi :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation. We say that \(K\) is \(1\)-unconditional if (2.2) holds whenever \(x=(x_1,\ldots ,x_n)\in K, \alpha _i\in [-1,1]\) for each \(i=1,\ldots ,n\) and \(\xi \) is the identity. We also let \(B_p^n\) denote the unit-ball in \(\ell _p^n\).
Let \(\mathcal P _n\) denote the class of all probability measures on \(\mathbb{R }^n\) that are absolutely continuous with respect to Lebesgue measure. The subclass \(\mathcal P ^b_n\subset \mathcal P _n\) consists of all those measures \(\mu \) in \(\mathcal P _n\) whose densities \(f_\mu :=\frac{\text{ d }\mu }{\text{ d }x}\) satisfy \(\left||f_\mu \right||_{\infty }=1\), where \(\left||\cdot \right||_{\infty }\) is the essential supremum.
A Borel measure \(\mu \) on \(\mathbb{R }^n\) is said to be log-concave if for any compact sets \(A,B\subset \mathbb{R }^n\) and \(t \in [0,1]\),
$$\begin{aligned} \mu \big (tA+(1-t)B\big )\ge \mu (A)^{t}\,\mu (B)^{1-t}. \end{aligned}$$
Similarly, a function \(f:\mathbb{R }^n\rightarrow \mathbb{R }^{+}\) is log-concave if \(\log f\) is concave on its support. It is known that if \(\mu \) is a log-concave measure on \(\mathbb{R }^n\) that is not supported on any proper affine subspace, then \(\mu \in \mathcal P _n\) and its density \(f_\mu \) is log-concave [13].
If \(A\subset \mathbb{R }^n\) is a Borel set with finite volume, the symmetric rearrangement \(A^{*}\) of \(A\) is the (open) Euclidean ball centered at the origin whose volume is equal to that of \(A\). The symmetric decreasing rearrangement of \(\chi _A\) is defined by \(\chi _A^{*}:=\chi _{A^{*}}\). If \(f:{\mathbb{R }}^n\rightarrow {\mathbb{R }}^+\) is an integrable function, we define its symmetric decreasing rearrangement \(f^{*}\) by
$$\begin{aligned} f^{*}(x):=\int \limits _{0}^{\infty }\chi ^{*}_{\{f>\alpha \}}(x)\,\text{ d }\alpha . \end{aligned}$$
The latter should be compared with the “layer-cake representation” of \(f\):
$$\begin{aligned} f(x)=\int \limits _{0}^{\infty }\chi _{\{f>\alpha \}}(x)\,\text{ d }\alpha , \qquad (2.3) \end{aligned}$$
see [47, Theorem 1.13]. The function \(f^{*}\) is radially-symmetric, decreasing and equimeasurable with \(f\), i.e., \(\{f>\alpha \}\) and \(\{f^*>\alpha \}\) have the same volume for each \(\alpha \ge 0\). By equimeasurability and (2.3), one has \(\left||f\right||_p=\left||f^*\right||_p\) for each \(1\le p\le \infty \), where \(\left||\cdot \right||_p\) denotes the \(L_p\)-norm. If \(\mu \in \mathcal P _n^b\) has density \(f_\mu \), we let \(\mu ^*\) denote the measure in \(\mathcal P _n^b\) with density \(f_{\mu }^*\). See [18, 47] for further background material on rearrangements.
For the reader’s convenience, we list a few basic linear algebra facts used in the paper.
Proposition 2.1
Suppose that \(N\ge n\) and that \(T:\mathbb{R }^N\rightarrow \mathbb{R }^n\) is a linear operator. Denote the adjoint of \(T\) by \(T^{*}\).
-
(i)
(Polar decomposition) There is an isometry \(U:\mathbb{R }^n\rightarrow \mathbb{R }^N\) such that \(T^*=U(TT^*)^{1/2}\).
-
(ii)
If \(v_1,\ldots ,v_n\in \mathbb{R }^N\) denote the columns of \(T^*\) (as a matrix with respect to the standard unit vector basis), then
$$\begin{aligned} {\mathrm{vol}}_{n}{\big ({T^* [0,1]^n}\big )}&= \det {(TT^*)^{1/2}} \qquad (2.4)\\&= |v_1||P_{V_1^{\perp }}v_2||P_{V_2^{\perp }}v_3| \cdots |P_{V_{n-1}^{\perp }}v_n|, \qquad (2.5) \end{aligned}$$
where
$$\begin{aligned} V_k:=\mathrm{span}\{v_1,\ldots ,v_k\}, \quad V_0:=\{0\}, \end{aligned}$$
for \(k=1,\ldots ,n-1\).
-
(iii)
Let \(E=\ker (T)^{\perp }\) and let \(T\vert _E\) be the restriction of \(T\) to \(E\). If \(B\subset \mathbb{R }^N\) is a compact set then
$$\begin{aligned} {\mathrm{vol}}_{n}{\big ({TB}\big )} = |\mathrm{det}(T\vert _E)|\, {\mathrm{vol}}_{n}{\big ({P_EB}\big )}, \qquad (2.6) \end{aligned}$$
where \(|\mathrm{det}(T\vert _E)| = \det {(TT^*)^{1/2}}\).
For (i) see, e.g., [24, §3.2]; (2.4) follows from (i), while (2.5) is the well-known formula for the volume of the parallelepiped spanned by \(v_1,\ldots ,v_n\), which follows from Gram–Schmidt (see, e.g., [4, Theorem 7.5.1]). For (iii), note that \(E =\mathrm{Range}(T^*)\) and
hence \(\mathrm{det}(T\vert _E)=\mathrm{det}(TT^*)^{1/2}\); (2.6) follows from the fact that \(TB = T\vert _E P_E B\).
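Identity (2.5) — the Gram determinant as a product of successive orthogonal-complement projections — is easy to check numerically; the factors \(|P_{V_{k-1}^{\perp }}v_k|\) are exactly the moduli of the diagonal entries of \(R\) in a QR factorization of \(T^*\). A sketch with NumPy (our own verification):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 7
T = rng.standard_normal((n, N))
V = T.T                                    # columns v_1,...,v_n of T* in R^N

lhs = np.sqrt(np.linalg.det(T @ T.T))      # det(TT*)^{1/2}

# Right-hand side of (2.5): |P_{V_{k-1}^perp} v_k| via Gram-Schmidt.
prod, basis = 1.0, []
for k in range(n):
    v = V[:, k].copy()
    for u in basis:                        # remove component in span{v_1..v_{k-1}}
        v -= (u @ V[:, k]) * u
    prod *= np.linalg.norm(v)
    basis.append(v / np.linalg.norm(v))

print(lhs, prod)                           # the two values agree up to rounding
```

The same computation underlies the iterated-integration arguments for \(\mathbb{E }\det (GG^*)^{p/2}\) later in Sect. 4.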
3 Distributional Inequalities via Symmetrization
The main goal of this section is to show that the small-ball probabilities in Theorems 1.1 and 1.2 are maximized for \(\lambda _{D_n}\). This is done by adapting [66, Theorem 1.1], which (in the notation of the introduction) asserts that if \(\mu _1,\ldots ,\mu _N\in \mathcal P _n^b\) and \(C\subset \mathbb{R }^N\) is a convex body, then
The next theorem is a distributional form of (3.1) in the case when \(C\) is \(1\)-unconditional (which suffices for our purposes).
Theorem 3.1
Let \(N\ge n\) and let \(\mu _1,\ldots ,\mu _N\in \mathcal P ^b_n\). Suppose that \(C\subset \mathbb{R }^N\) is a \(1\)-unconditional convex body. Then for every \(\alpha >0\),
$$\begin{aligned} \mathbb P _{\otimes \mu _{i}}\Big ({\text{ vol }}_{n}{\big ({[X_{1}\ldots X_{N}]C}\big )}\le \alpha \Big )\le \mathbb P _{\otimes \lambda _{D_n}}\Big ({\text{ vol }}_{n}{\big ({[X_{1}\ldots X_{N}]C}\big )}\le \alpha \Big ). \end{aligned}$$
Remark 3.2
The analogous result for the convex hull of random points sampled in a convex body of volume one was proved by Giannopoulos and Tsolomitis [32, Lemma 3.3].
Remark 3.3
In Theorem 3.1, one can replace \({\text{ vol }}_{n}{\big ({\cdot }\big )}\) by other intrinsic volumes (see [66, Remark 4.4]). In this paper we focus all of our efforts on \({\text{ vol }}_{n}{\big ({\cdot }\big )}\).
The proof of Theorem 3.1 is a straightforward modification of that of (3.1). To clarify the role of the extra unconditionality assumption in the present context, we sketch the main points. Recall that if \(\mu \in \mathcal P _n^b\) has density \(f_\mu \), then \(\mu ^*\) denotes the measure in \(\mathcal P _n^b\) whose density is the symmetric decreasing rearrangement \(f_{\mu }^*\).
Theorem 3.4
Let \(N\) and \(n\) be positive integers. Let \(\mu _1,\ldots ,\mu _N\in \mathcal P ^b_n\) and let \(\alpha >0\). Suppose that \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) satisfies the following condition: for each \(z \in S^{n-1}\), for all \(y_1,\ldots ,y_N\in z^{\perp }\), the level set
is origin-symmetric and convex. Then
The latter theorem makes use of the Brascamp–Lieb–Luttinger rearrangement inequality [17] (see also [20]); the proof is given in detail in [66, Proposition 3.2] (use the fact that \(\mathbb{P }_{\otimes \mu _i}\Big (\{F >\alpha \}\Big ) = \mathbb{E }_{\otimes \mu _i} \mathbb 1 _{\{F>\alpha \}}\)).
If \(K\subset \mathbb{R }^n\) is a compact set of volume one and all \(\mu _i\) are equal to the uniform measure on \(K\), then Theorem 3.4 gives immediately
For general measures \(\mu \in \mathcal P _n^b\), an additional step is required to pass to the uniform measure on the ball. We say that \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) is coordinate-wise increasing if for all \(x_1,\ldots , x_N\) in \(\mathbb{R }^n\),
whenever \(0\le s_{i}\le t_{i}, i=1,\ldots ,N\). For such functions, one can pass from rotationally-invariant measures \(\mu \in \mathcal P _n^b\) to \(\lambda _{D_n}\). Here and elsewhere, we use the term “increasing” in the non-strict sense.
Proposition 3.5
Let \(\mu _1,\ldots ,\mu _N\in \mathcal P _n^b\) and suppose that \(\mu _i=\mu _i^*\) for each \(i=1,\ldots ,N\). Assume that \(F\) is coordinate-wise increasing as in (3.4). Then
Proof
Using spherical coordinates \(x_i=r_i\theta _i\), where \(r_i\in \mathbb{R }^{+}\) and \(\theta _i\in S^{n-1}\) and writing \(\text{ d }\overline{r} = \text{ d }r_1\ldots \text{ d }r_N\) and \(\text{ d }{\overline{\theta }} =\text{ d }\sigma (\theta _1)\ldots \text{ d }\sigma (\theta _N)\), we have
By our assumption on \(F\),
is increasing, hence
(see, e.g., [66, Lemma 3.5]). Applying the latter inequality for each \(j\), together with Fubini’s Theorem, yields the result.\(\square \)
Proof of Theorem 3.1
Let \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) be defined by
Using an argument due to Groemer [38], it is shown in [66, Proposition 4.1] that \(F\) satisfies the assumption in Theorem 3.4, hence (3.2) holds. The unconditionality assumption on \(C\) guarantees that for each \(x_1,\ldots ,x_N\) in \(\mathbb{R }^n\),
whenever \(0\le s_i\le t_i\), for \(i=1,\ldots ,N\), hence \(F\) is coordinate-wise increasing and Proposition 3.5 applies.\(\square \)
While Theorem 3.1 reduces Theorems 1.1 and 1.2 to the case of \(\mathbb P _{\otimes \lambda _{D_n}}\), our path will involve first calculating the small-ball probability for the Gaussian measure, to which we now turn our attention.
4 An Extension of the Gaussian Representation of Intrinsic Volumes
This section is our first step towards estimating the small-ball probabilities in Theorems 1.1 and 1.2 for \(\mu =\gamma _n\), the standard Gaussian measure. As in the previous section, we work with random sets of the form \([X_1\ldots X_N]C\) for a general convex body \(C\subset \mathbb{R }^N\).
When \(N=n\), the small-ball problem for any \(C\) reduces to estimates for random determinants. Indeed, \({\text{ vol }}_{n}{\big ([X_1\ldots X_n]C\big )}=|\det {[X_1\ldots X_n]}| {\text{ vol }}_{n}{\big ({C}\big )}\). As in [53, Fact 1.5], one can bound
by estimating moments \(\mathbb{E }_{\otimes \gamma _n}|\det {[X_1\ldots X_n]}|^{-p}\) for \(p>0\) (see also [68, Proposition 2] for such estimates beyond the Gaussian setting).
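For completeness, the standard passage from negative moments to small-ball estimates is just Markov's inequality: for a positive random variable \(V\) (here \(V={\mathrm{vol}}_{n}([X_1\ldots X_n]C)\)) and any \(p>0\),

```latex
\mathbb{P}\big( V \le \varepsilon \big)
  \;=\; \mathbb{P}\big( V^{-p} \ge \varepsilon^{-p} \big)
  \;\le\; \varepsilon^{p}\, \mathbb{E}\, V^{-p} ,
```

so any bound on the negative moments \(\mathbb{E }\,V^{-p}\) yields a small-ball estimate, with the freedom to optimize over \(p\).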
As we mentioned in the introduction, for the random polytope \(K_N\), the in-radius is well-studied, see [34, 50] and the references therein. Aside from implications stemming from in-radius estimates, we are not aware of small deviations for the volume \({\text{ vol }}_{n}{\big ({K_N}\big )}\) for the full range of parameters \(n, N\) and \(\varepsilon \) considered in this paper.
As in the case \(N=n\), our approach will involve estimation of moments \(\mathbb{E }_{\otimes \gamma _n}{\text{ vol }}_{n}{\big ([X_1\ldots X_N]C\big )}^{-p}\) for \(p>0\). Unlike the case \(N=n\), however, the geometry of \(C\) plays a crucial role, which we quantify through intrinsic volumes and suitable extensions.
Recall that the intrinsic volumes of a convex body \(C\subset \mathbb{R }^N\) can be defined via the Steiner formula for the outer parallel volume of \(C\):
$$\begin{aligned} {\text{ vol }}_{N}{\big ({C+tB_2^N}\big )}=\sum _{n=0}^{N}V_{n}(C)\,\omega _{N-n}\,t^{N-n}, \quad t>0. \qquad (4.1) \end{aligned}$$
The quantities \(V_n, n=1,\ldots ,N\), are the \(n\)-th intrinsic volumes of \(C\) (we set \(V_0 \equiv 1\)). Of particular interest are \(V_1, V_{N-1}\) and \(V_N\), which are multiples of the mean-width, surface area and volume, respectively. Intrinsic volumes are also referred to as quermassintegrals (under an alternate labelling and normalization). For further background on intrinsic volumes, we refer the reader to [70]. We will make use of the following fact, which is a special case of Kubota’s integral recursion:
$$\begin{aligned} V_{n}(C)=\binom{N}{n}\frac{\omega _{N}}{\omega _{n}\,\omega _{N-n}}\int \limits _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_{F}C}\big )}\,\text{ d }\nu _{N,n}(F). \qquad (4.2) \end{aligned}$$
There is a version of the latter formula that uses Gaussian random matrices, termed the Gaussian representation of intrinsic volumes in [74] and which appeared previously in another context in [72]. If \(G=[\gamma _{ij}]\) is an \(n\times N\) matrix with independent standard Gaussian entries, then the \(n\)th intrinsic volume of \(C\subset \mathbb{R }^N\) is given by
$$\begin{aligned} V_{n}(C)=\frac{(2\pi )^{n/2}}{n!\,\omega _{n}}\,\mathbb{E }\,{\text{ vol }}_{n}{\big ({GC}\big )}. \qquad (4.3) \end{aligned}$$
(Here we have omitted the subscript on \(\mathbb{E }_{\otimes \gamma _n}\) and will do so when the context is clear.)
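With the normalization \(V_n(C)=\frac{(2\pi )^{n/2}}{n!\,\omega _n}\,\mathbb{E }\,{\mathrm{vol}}_{n}(GC)\), the representation is easy to test by simulation (a sanity check of our own). For \(C=[0,1]^2\subset \mathbb{R }^2\) and \(n=1\), \(GC\) is a segment of length \(|g_1|+|g_2|\) and \(V_1([0,1]^2)=2\):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal(size=(400_000, 2))   # each row is a 1 x 2 Gaussian matrix

# vol_1(G C) for C = [0,1]^2 is the length of {<g, x> : x in C} = |g_1| + |g_2|.
lengths = np.abs(G).sum(axis=1)

omega_1 = 2.0                                 # volume of the 1-dim unit ball [-1,1]
V1_est = math.sqrt(2 * math.pi) / (math.factorial(1) * omega_1) * lengths.mean()
print(V1_est)                                 # close to V_1([0,1]^2) = 2
```

Since \(\mathbb{E }|g|=\sqrt{2/\pi }\), the estimate converges to \(\sqrt{2\pi }\cdot \sqrt{2/\pi }=2\), as predicted.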
The next proposition is an extension of (4.3), which connects powers of \({\text{ vol }}_{n}{\big ({GC}\big )}\) and the following parameter \(W_{[n,p]}(C)\), defined in [22],
for \(p\in [-\infty , \infty ]\). In the latter expression, the \(p=0\) case is interpreted as \(\lim _{p\rightarrow 0}W_{[n,p]}(C)\); a similar convention is made for \(0\)th moments throughout the paper. The quantities \(W_{[n,p]}(C)\) are discussed in greater detail in Sect. 5. The proof we give below is the same as that of [72, Theorem 6], although presented differently; see also [73, Theorem 1] for a probabilistic derivation of the Steiner formula (4.1), which led us to the connection.
Proposition 4.1
Let \(n\le N\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Let \(C\subset \mathbb{R }^N\) be a compact set with non-empty interior and \(p >-(N-n+1)\). Then
$$\begin{aligned} \mathbb{E }\,{\text{ vol }}_{n}{\big ({GC}\big )}^{p}=\mathbb{E }\,\mathrm{det}(GG^{*})^{p/2}\int \limits _{G_{N,n}}{\text{ vol }}_{n}{\big ({P_{F}C}\big )}^{p}\,\text{ d }\nu _{N,n}(F). \qquad (4.5) \end{aligned}$$
If \(C\) is a convex body and \(p=1\), then (4.5) reduces to
$$\begin{aligned} V_{n}(C)=\frac{(2\pi )^{n/2}}{n!\,\omega _{n}}\,\mathbb{E }\,{\text{ vol }}_{n}{\big ({GC}\big )}, \end{aligned}$$
which is the Gaussian representation of intrinsic volumes. The random matrix \(GG^*\) in Proposition 4.1 is distributed according to the Wishart density and explicit formulas for \(\mathbb{E }\mathrm{det}(GG^{*})^{p/2}\) are well-known, e.g., [4, Chap. 7]; a direct argument giving the order of magnitude of \(\mathbb{E }\mathrm{det}(GG^{*})^{p/2}\) is given below in Lemma 4.2. For a strong stochastic equivalence involving projections of regular simplices on \(G_{N,n}\) and Gaussian vectors, see [11, Theorem 1].
In a different context, passage between Gaussian random operators and random projections on the Grassmannian manifold has been used to great effect in studying volumetric invariants that arise in Banach–Mazur distance investigations; see [53, 55].
Proof of Proposition 4.1
Let \(h_1,\ldots ,h_n\in \mathbb{R }^N\) be the columns of \(G^{*}\). Then \(G^*[0,1]^n\) is the parallelepiped generated by \(h_1,\ldots , h_n\) and \({\text{ vol }}_{n}{\big ({G^*[0,1]^n}\big )}=\mathrm{det}(GG^*)^{1/2}\), by Proposition 2.1(ii). Let \(H\) be the subspace spanned by \(h_1,\ldots ,h_n\) so that
Let \(U\) be a random matrix distributed uniformly on the orthogonal group \(\mathcal O (N)\), independent of \(G\). Note that \((GU)^*[0,1]^n\) is the parallelepiped spanned by the vectors \(U^*h_1,\ldots , U^*h_n\), hence
Combining the latter equality with Proposition 2.1(iii), we have
Let \(\mathbb{E }_{\otimes _{i=1}^N \gamma _n}=\mathbb{E }_{\otimes _{i=1}^n \gamma _N}\) denote expectation with respect to \(G\); similarly let \(\mathbb{E }_{U}\) denote expectation with respect to \(U\). By rotational invariance of \(\gamma _N, G\) and \(GU\) have the same distribution, hence
\(\square \)
As mentioned above, we give the order of magnitude of \(\mathbb{E }\det {(GG^*)}^{p/2}\). Since the resulting estimate is closely connected to the small-ball estimate in the Gaussian case, we include a detailed proof.
Lemma 4.2
Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Then for all \(p \in [-(N-n+1-\text{ e }^{-n(N-n+1)}), N]\),
Proof
Let \(X=(x_1,\ldots ,x_N)\) be an \(N\)-dimensional standard Gaussian vector. Let \(m \in \{1,\ldots ,N\}\) and \(F\in G_{N,m}\). For each \(\eta >0\) and for all \(p\in [-(m-\text{ e }^{-\eta m}), m]\), we have
Indeed, note that for \(a\in (0,1), \mathbb{E }_{\gamma _{1}}|x_1|^{-a} \simeq \frac{1}{1-a}\). Then, for \(p_{0}= m-\text{ e }^{-\eta m}\), we have
For the positive range,
As in the proof of Proposition 4.1, let \(h_1,\ldots ,h_n\in \mathbb{R }^N\) be the columns of \(G^{*}\). Let \(H_0=\{0\}\). For \(k=1,\ldots ,n-1\), set
By Proposition 2.1 (ii), we have
Let \(p_{1}=-\big (N-n+1-\text{ e }^{-n(N-n+1)} \big )\). Integrating first with respect to \(h_{n}\), then \(h_{n-1}\) and so forth, at each stage applying (4.6) with \(m=N-k+1\) and \(\eta _{k}= 2^{-k}\) for \(k\ge 2\) and \(\eta _{1}=n\), we obtain
Similarly, for the positive range, we have
The result follows by Hölder’s inequality.\(\square \)
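The one-dimensional ingredient \(\mathbb{E }_{\gamma _{1}}|x_1|^{-a} \simeq \frac{1}{1-a}\) for \(a\in (0,1)\), used at the start of the proof, can also be checked in closed form: for a standard Gaussian \(g\) one has \(\mathbb{E }|g|^{-a}=2^{-a/2}\,\Gamma \big (\tfrac{1-a}{2}\big )/\Gamma \big (\tfrac{1}{2}\big )\). A quick numerical confirmation (our own sketch):

```python
import math

def neg_moment(a):
    """E|g|^{-a} for a standard Gaussian g and 0 <= a < 1 (closed form)."""
    return 2 ** (-a / 2) * math.gamma((1 - a) / 2) / math.gamma(0.5)

# (1 - a) * E|g|^{-a} stays between absolute constants as a -> 1,
# i.e. E|g|^{-a} is of order 1/(1 - a).
for a in [0.0, 0.5, 0.9, 0.99, 0.999]:
    print(a, (1 - a) * neg_moment(a))
```

Since \(\Gamma \big (\tfrac{1-a}{2}\big )\sim \tfrac{2}{1-a}\) as \(a\rightarrow 1\), the product \((1-a)\,\mathbb{E }|g|^{-a}\) tends to \(\sqrt{2/\pi }\), consistent with the stated equivalence.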
The following proposition will be used to show that Theorem 1.2 is sharp for the Gaussian measure (cf. Proposition 6.7). As the proof is similar to the latter lemma, we include it here.
Proposition 4.3
Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Then for any \(\varepsilon \in (0,1/2)\),
where \(c\) is a positive numeric constant.
Proof
Let \(X\) be an \(N\)-dimensional standard Gaussian vector. Let \(m \in \{1,\ldots ,N\}\) and \(F\in G_{N,m}\). By Chebyshev’s inequality,
Moreover, a direct computation shows that for any \(\varepsilon \in (0,1/2)\),
where \(c_1\) is a positive numeric constant (see Proposition 9.5 for a more general result).
As in the previous proof, let \(h_1,\ldots ,h_n\) denote the columns of \(G^{*}\); set \(H_0=\{0\}\) and \(H_k=\mathrm{span}\{h_1,\ldots ,h_k\}\). For each \(k=1,\ldots ,n-1\), let \(a_k=\sqrt{2(N-k+1)}\) and let \(a_n=\varepsilon ^n \sqrt{N-n+1}\). Using (4.7), we have
where \(c\) is a positive numeric constant. Applying Fubini’s theorem iteratively (integrating first with respect to \(h_n\), then \(h_{n-1}\) and so on), using (4.9) with \(m=N-n+1\) and (4.8) for \(m=N-k+1\) (for \(k=n-1,\ldots ,1\)) gives the desired result.\(\square \)
Proposition 4.1 and Lemma 4.2 reduce the small-ball problem for \(\gamma _n\) to capturing the asymptotics of the quantities \(W_{[n,p]}(C)\). We make this explicit in the next subsection.
4.1 Connection to Small-Ball Estimates for the Gaussian Case
For a convex body \(C\subset \mathbb{R }^N\), positive integers \(n\le N\), and \(p\in [-1, \infty ]\), we define
By Hölder’s inequality, \(A_{n,p}(C) \ge 1\) and
is an increasing function.
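The monotonicity claimed here is an instance of the monotonicity of power means (Lyapunov's inequality): for a nonnegative random variable \(Z\) and \(-\infty \le p\le q\le \infty \),

```latex
\big( \mathbb{E}\, Z^{p} \big)^{1/p} \;\le\; \big( \mathbb{E}\, Z^{q} \big)^{1/q} ,
```

applied here with \(Z={\mathrm{vol}}_{n}(P_FC)\) and \(F\) distributed according to \(\nu _{N,n}\); in particular \(A_{n,p}(C)\ge 1\) for the range of \(p\) considered.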
Proposition 4.4
Let \(N\ge n\) and let \(G\) be an \(n\times N\) random matrix with independent standard Gaussian entries. Let \(C\subset \mathbb{R }^N\) be a convex body and \(p\in [0,N-n+1-\text{ e }^{-n(N-n+1)}]\). Then
where \(c_0\) is a positive numeric constant. Consequently, for each \(\varepsilon \in (0,1)\),
where \(c\) is a positive numeric constant.
Proof
By Proposition 4.1 and Lemma 4.2, we get
which implies (4.11). Using the latter equivalence and Markov’s inequality, for any \(\eta >0\), we have
where \(c\) is a positive numeric constant. The small-ball estimate (4.12) follows on substituting \(\varepsilon = c\eta \).\(\square \)
As (4.12) indicates, we have reduced the small-ball problem to bounding the ratio
For \(C=B_1^N\) and \(C=B_{\infty }^N\), bounds for the numerators \(W_{[n,1]}(B_1^N)\) and \(W_{[n,1]}(B_{\infty }^N)\) are well-known. We state them here in their Gaussian form (cf. (4.3)) as this is more convenient for our purpose. These are also well-known results from the perspective of Gaussian random polytopes.
Proposition 4.5
Let \(N\ge n\) and let \(G\) be an \(n\times N\) matrix with independent standard Gaussian entries. Then, for \(N\le \text{ e }^n\), we have
For any \(N\ge n\), we have
The intrinsic volumes of \(B_1^N\) are computed explicitly in [12]. For \(B_{\infty }^N\), one has \(V_n(B_{\infty }^N)=2^n\binom{N}{n}\). Alternatively, taking the view of random sets generated by the Gaussian measure, the estimates in Proposition 4.5 have been proved by numerous methods. One approach for the upper bounds involves volume estimates for the convex hull and Minkowski sum of arbitrary points in \(\mathbb{R }^n\). As these will be needed again in Sect. 8, we record them here.
Theorem 4.6
Let \(N\ge n\) and let \(x_1,\ldots ,x_N\in \mathbb{R }^n\) with \(|x_i|\le M\) for \(i=1,\ldots ,N\). Then
where \(c\) is a positive numeric constant.
The latter theorem can be proved in a number of ways, see [6, 8, 9, 19, 34]. For zonotopes, we use the following elementary lemma. Here we use \(|I|\) to denote the cardinality of the set \(I\).
Lemma 4.7
Let \(N\ge n\) and let \(x_1,\ldots ,x_N\in \mathbb{R }^n\). Then
Moreover, if \(|x_i|\le M\) for each \(i=1,\ldots ,N\), then
where \(c\) is a positive numeric constant.
Remark 4.8
Analogous volume estimates for \({\text{ vol }}_{n}{\big ([x_1\ldots x_N]B_{p}^N\big )}\), where \(1\le p\le \infty \), are proved in [36].
Proof
(Sketch) The first assertion (4.16) is the well-known zonotope volume formula (see, e.g., [58, p. 73]). The second assertion follows from the first since
where \(\text{ d }_I=|\mathrm{det}([x_i]_{i\in I})|\). We conclude by using the estimate \(\binom{N}{n}\le (\text{ e }N/n)^n\) together with Hadamard’s determinant inequality: \(\text{ d }_I\le \prod _{i\in I} |x_i|\).\(\square \)
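To make the two estimates concrete, the following Python sketch (the function name is ours; this is a numerical illustration, not part of the proof) evaluates the zonotope volume formula (4.16) directly and checks it against Hadamard's bound and the binomial estimate.

```python
from itertools import combinations
from math import comb, e

import numpy as np

def zonotope_volume(xs):
    """vol_n(sum_i [-x_i, x_i]) via the classical formula (4.16):
    2^n * sum over n-element subsets I of |det([x_i]_{i in I})|."""
    n = xs[0].shape[0]
    return 2**n * sum(abs(np.linalg.det(np.column_stack(cols)))
                      for cols in combinations(xs, n))

# Sanity check: the standard basis of R^3 generates the cube [-1, 1]^3.
n = 3
basis = [np.eye(n)[:, i] for i in range(n)]
assert abs(zonotope_volume(basis) - 2**n) < 1e-9

# Hadamard's inequality d_I <= prod_{i in I} |x_i| gives the upper bound
# 2^n * C(N, n) * M^n when all generators satisfy |x_i| <= M, and
# C(N, n) <= (e N / n)^n is the binomial estimate used in the proof.
rng = np.random.default_rng(0)
N = 6
xs = [rng.standard_normal(n) for _ in range(N)]
M = max(np.linalg.norm(x) for x in xs)
vol = zonotope_volume(xs)
assert vol <= 2**n * comb(N, n) * M**n
assert comb(N, n) <= (e * N / n)**n
```

The exhaustive sum over \(\binom{N}{n}\) subsets is only feasible for small \(N\); it is meant to mirror the formula, not to be an efficient algorithm.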
Thus if \(g_1,\ldots , g_N\) denote the columns of \(G\) in Proposition 4.5, then the upper bound for \(\mathbb{E }\,{\text{ vol }}_{n}{\big ({G B_{1}^{N}}\big )}\) follows from Theorem 4.6 and the fact that with high probability, \(|g_i|\simeq \sqrt{n}\) (cf. (4.6)). The lower bound, for \(N\ge 2n\), follows from Gluskin’s lemma [34] (see also [48, 61]) or by computing the in-radius of \(GB_1^N\) as in [30] (which treats the case of vectors distributed according to \(\lambda _{D_n}\)); for \(N=n\), one can simply estimate the determinant: \((\mathbb{E }|\mathrm{det}([g_1\ldots g_n])|)^{1/n}\simeq \sqrt{n}\) (e.g., take \(N=n\) in Lemma 4.2). For asymptotic values as \(N\rightarrow \infty \) (in the non-symmetric case), see [3]. Similarly, for \(GB_{\infty }^N=\sum _{i=1}^N[-g_i,g_i]\) one applies (4.16) and the fact that \((\mathbb{E }|\mathrm{det}([g_1\ldots g_n])|)^{1/n}\simeq \sqrt{n}\).
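The two Gaussian facts invoked here lend themselves to a quick Monte Carlo check. The following hedged Python sketch (parameters and loose tolerances are ours) estimates the column norms and the determinant quantity for a moderate \(n\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 20000

# Column norms of a Gaussian matrix concentrate around sqrt(n) ...
norms = np.linalg.norm(rng.standard_normal((trials, n)), axis=1)
assert 0.8 < norms.mean() / np.sqrt(n) < 1.2

# ... and (E |det([g_1 ... g_n])|)^{1/n} is of order sqrt(n); the
# constant is strictly below 1, so we only assert a broad bracket.
dets = np.abs([np.linalg.det(rng.standard_normal((n, n))) for _ in range(trials)])
ratio = np.mean(dets)**(1.0 / n) / np.sqrt(n)
assert 0.4 < ratio < 1.0
```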
To estimate the quantities \(W_{[n,-p]}(B_1^{N})\) and \(W_{[n,-p]}(B_{\infty }^{N})\), we require additional machinery which we describe in the next two sections.
5 Generalized Intrinsic Volumes
In this section we delve further into properties of the quantities \(W_{[n,p]}(C)\) for an arbitrary convex body \(C\subset \mathbb{R }^N\). As in Sect. 4, for every \(p\in [-\infty , \infty ]\) and \(1\le n\le N-1\), we set
Note that \( W_{[n]}(C):= W_{[n,1]}(C)\) is simply a constant multiple (depending on \(N\) and \(n\)) of the \(n\)th intrinsic volume of \(C\). We also set \( W_{[N]}(C):= {\text{ vol }}_{N}{\big ({C}\big )}^{1/N}\). The Aleksandrov–Fenchel inequality (e.g., [70, Chap. 6]) implies that for \(1\le n_{1}\le n_{2} \le N\),
The latter inequality, together with the fact that \({\text{ vol }}_{N}{\big ({B_2^N}\big )}^{1/N}\simeq \frac{1}{\sqrt{N}}\), implies that
We now define variants of the normalized affine quermassintegrals, introduced by Lutwak [51]. For a convex body \(C\subset \mathbb{R }^N\) of volume one, set
The fact that \(\Phi _{[n]}(C)\) is invariant under volume-preserving affine transformations was proved by Grinberg [37, Theorem 2] (see also [26]). It was conjectured by Lutwak in [52] that if \(C\subset \mathbb{R }^N\) is a convex body of \({\text{ vol }}_{N}{\big ({C}\big )}=1\), then for \(1<n<N-1\),
where \(D_N\subset \mathbb{R }^N\) is the Euclidean ball of volume one, with equality if and only if \(C\) is an ellipsoid. Here we follow the normalization used in [22]. When \(n=N-1\), inequality (5.4) is true and known as the Petty projection inequality; when \(n=1\) and the centroid of \(C\) is the origin, (5.4) is the Blaschke–Santaló inequality; see [27, Chap. 9] and the references and notes therein. In [22], it is conjectured that the quantities \(\Phi _{[n]}(C)\) are asymptotically of the same order as \(\Phi _{[n]} (D_N)\), i.e., if \(C\subset \mathbb{R }^N\) is a convex body of \({\text{ vol }}_{N}{\big ({C}\big )}=1\), then for \(1<n<N-1\),
In [22], the upper bound is shown to be correct up to a logarithmic factor. In this section, we verify that the lower bound holds as well.
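The normalization \({\text{ vol }}_{N}{\big ({B_2^N}\big )}^{1/N}\simeq \frac{1}{\sqrt{N}}\) invoked earlier in this section admits a quick numerical sanity check. The sketch below assumes only the classical formula \(\omega _N=\pi ^{N/2}/\Gamma (N/2+1)\) and its Stirling asymptotics; the function name is ours.

```python
from math import e, exp, lgamma, log, pi, sqrt

def ball_volume_root(N):
    """omega_N^(1/N), where omega_N = pi^(N/2) / Gamma(N/2 + 1);
    computed in log space to avoid overflow for large N."""
    return exp((N / 2 * log(pi) - lgamma(N / 2 + 1)) / N)

# Stirling gives omega_N^(1/N) ~ sqrt(2*pi*e/N), so vol_N(B_2^N)^{1/N} ~ 1/sqrt(N).
for N in (10, 100, 1000):
    assert abs(ball_volume_root(N) / sqrt(2 * pi * e / N) - 1) < 0.2
assert abs(ball_volume_root(1000) / sqrt(2 * pi * e / 1000) - 1) < 0.02
```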
Theorem 5.1
Let \(C\subset \mathbb{R }^N\) be a convex body of volume one. Then for \(1\le n \le N-1\),
where \(c\) is a positive numeric constant.
The proof uses a duality argument. The first ingredient is the following theorem due to Grinberg [37]; see also [28].
Theorem 5.2
Let \(K\subset \mathbb{R }^N\) be a compact set of volume \(1\). Then
We will also use the Blaschke–Santaló inequality [69].
Theorem 5.3
Let \(C\subset \mathbb{R }^N\) be a convex body with center of mass at the origin. Then
with equality if and only if \(C\) is an ellipsoid.
The proof in the origin-symmetric case can be found in, e.g., [27], together with additional notes and references; we also refer to the introduction of [31] for a discussion relating the role of the center of mass and the Santaló point of \(C\).
The reverse inequality, proved by Bourgain and Milman [16], will also be used.
Theorem 5.4
Let \(C\subset \mathbb{R }^N\) be a convex body with the origin in its interior. Then
where \(c\) is a positive numeric constant.
See [44] for the best-known constant \(c\) in the latter theorem in the origin-symmetric case; for recent developments and further references, see [31].
Proof of Theorem 5.1
Without loss of generality we can assume that the center of mass of \(C\) is the origin. Let \(F\in G_{N,n}\). Applying Theorem 5.4, we have
where \(c\) is a positive numeric constant. Set \(K= C^{\circ }\) and write \(\widetilde{K} := K/{\text{ vol }}_{N}{\big ({K}\big )}^{1/N}\). Since \({\text{ vol }}_{N}{\big ({C}\big )}=1\), Theorem 5.3 gives the upper bound \({\text{ vol }}_{N}{\big ({K}\big )}^{1/N}\le c/N\), where \(c\) is a positive numeric constant, hence
The latter two inequalities imply that
where \(c_1\) is a positive numeric constant. Now we apply Theorem 5.2 to obtain
where \(c_2\) is a positive numeric constant, from which the result follows.\(\square \)
Lastly, we will make use of a result from [22] (Theorem 3.2 and the subsequent remark (3.22)). For completeness, we give the proof. If \(C\subset \mathbb{R }^N\) is a convex body with the origin in its interior and \(p\in [-\infty , \infty ]\), define its generalized mean-width by
Proposition 5.5
Let \(C\subset \mathbb{R }^N\) be a convex body with the origin in its interior. Then for each \(p\ge 1\),
where \(c\) is a positive numeric constant.
Proof
Let \(F\in G_{N,n}\) and write \(S_F=S^{N-1}\cap F\); let \(\sigma _F\) denote the Haar probability measure on \(S_F\). By Theorem 5.4,
Using the fact that \(h_{P_FC}(\theta ) = h_C(\theta )\) for \(\theta \in S_F\), together with Hölder’s inequality, we have
The latter two inequalities imply that
where \(c_1\) is a positive numeric constant.\(\square \)
We refer the reader to [22] for further information on the quantities \(W_{[n,p]}(C)\).
6 Bounds for Generalized Intrinsic Volumes of \(B_1^N\) and \(B_{\infty }^N\)
By Proposition 4.4, we can obtain small-ball estimates in the Gaussian case by bounding the quantities \(A_{n,p}(B_1^N)\) and \(A_{n,p}(B_{\infty }^N)\). We will invoke Proposition 5.5, which relates \(W_{[n,-p]}(C)\) and the generalized mean-width \(W_{-p}(C)\) (defined in (5.7)) and thus we start by estimating \(W_{-p}(B_1^N)\).
Proposition 6.1
Let \(1\le p\le N\). Then
Proof
Using integration in spherical coordinates, one may verify that
for all \(0<p\le \frac{N}{2}\). Note that for all \(r>0\),
where
Assume first that \(p\le c_1N\) for some numeric constant \(c_{1}\in (0,1)\) to be specified later. Write
Using the inequality \(1-2\Phi (r)\le \sqrt{\frac{2}{\pi }} r\) for \(r\in [0,1]\), we choose \(c_1\in (0,1)\) to ensure that
For the remainder of the integral, we use the rough estimate
A routine calculation shows that the integrand
is increasing on \((1,s_0)\) where \(s_0 :=\frac{1}{3}\sqrt{\ln (2N/p)}\). Thus
and
Combining each of the estimates yields
The reverse inequality is proved similarly.
Lastly, we treat the case \(c_1N\le p \le N\). Note that
hence Hölder’s inequality yields
\(\square \)
Proposition 6.2
Let \(N\ge n\) and let \(\delta \ge 1\). Then for \(1\le p\le \big (\frac{N}{n}\big )^{1-\frac{1}{\delta ^{2}}}\), we have
Moreover, for \(N\le n\text{ e }^{\delta ^{2}}\),
where \(c^{\prime }\) and \(c^{\prime \prime }\) are positive numeric constants.
Proof
Set \(p_{0}= \big (\frac{N}{n}\big )^{1-\frac{1}{\delta ^{2}}}\). By Proposition 5.5 and Hölder’s inequality, for \(p\le p_{0}\), we have
By Proposition 6.1, the latter quantity is at least as large as
Moreover, by Proposition 4.1, Lemma 4.2 and Proposition 4.5, we have
Combining the latter two estimates, we have
Finally, for any \(p\le N\), Hölder’s inequality and Theorem 5.1 imply that
where \(\widetilde{B_1^N}\) is the volume-one homothet of \(B_1^N\). Thus by (6.3), (6.4) and the definition of \(A_{n,p}\) we get that
provided that \(N\le n\text{ e }^{\delta ^2}\).\(\square \)
Proposition 6.3
Let \(n\le N\) and let \(0< p\le N\). Then
where \(c_0\) is a positive numeric constant.
Proof
Since \(W(B_{\infty }^N)\le \mathrm{diam}(B_{\infty }^N)=2\sqrt{N}\), (5.2) yields
By Theorem 5.1, we have
where \(c_1\) is a positive numeric constant. Since \(W_{[n,-p]}(B_{\infty }^N)\ge W_{[n,-N]}(B_{\infty }^N)\) whenever \(0<p\le N\), we obtain
\(\square \)
Remark 6.4
The proof of Proposition 6.3 shows that if \(C\subset \mathbb{R }^N\) is a convex body with \({\text{ vol }}_{N}{\big ({C}\big )}=1\) and \(W(C)\le c\sqrt{N}\), then for any \(0<p\le N\) we have \(A_{n,p}(C)\le c^{\prime }\), where \(c^{\prime }\) is a constant that depends only on \(c\). Any zonoid in Löwner’s position satisfies this property (see [62]). In particular, there is a positive numeric constant \(c_1\) such that \(A_{n,p}(B_q^N)\le c_1\) whenever \(0<p\le N\) and \(2\le q\le \infty \). Note that by Urysohn’s inequality (see, e.g., [67, Corollary 1.4]), the inequality \(W(C)\ge c\sqrt{N}\) holds for any convex body \(C\) satisfying \({\text{ vol }}_{N}{\big ({C}\big )}=1\).
6.1 Small-Ball Estimates in the Gaussian Case
The results of the previous subsection lead to the following small-ball estimates.
Proposition 6.5
Let \(n\le N\le \text{ e }^n\) and let \(\varepsilon \in (0,1)\) and \(\delta >1\). Then
where \(c_1\) is a positive numeric constant. Moreover, if \(N\le n\text{ e }^{\delta ^2}\), then
where \(c_2\) is a positive numeric constant.
Proof
Let \(p_0=\big (\frac{N}{n}\big )^{1-1/\delta ^2}\). Then \(p_0\le N-n+1\), hence Propositions 4.4 and 6.2 imply that
where \(c_1\) is a positive numeric constant. If \(N\le n\text{ e }^{\delta ^2}\), we take \(p_1=N-n+1-\text{ e }^{-n(N-n+1)}\) and argue as above.\(\square \)
For zonotopes generated by the Gaussian measure we have the following.
Proposition 6.6
Let \(N\ge n\) and let \(\varepsilon \in (0,1)\). Then
where \(c_1\) is a positive numeric constant.
Proof
Use Propositions 4.4 and 6.3 and argue as in the proof of the previous proposition.\(\square \)
We conclude this section with a complementary lower bound that shows Proposition 6.6 is essentially optimal.
Proposition 6.7
Let \(N\ge n\) and let \(\varepsilon \in (0,1/2)\). Then
where \(c_2\) is a positive numeric constant.
Proof
Let \(G\) be an \(n\times N\) matrix with independent standard Gaussian entries. Then \(Z_N= GB_{\infty }^N \subset \sqrt{N}GB_2^N\), hence
Using the latter inequality and Proposition 4.5, we have
where \(c_2\) is a positive numeric constant. The result follows from Proposition 4.3.\(\square \)
Unlike the case of \(Z_N\), we do not know whether the probabilities in Proposition 6.5 are optimal. In Sect. 9.1, we prove lower bounds for such probabilities in a more general setting.
7 From the Gaussian Measure to the Ball
With estimates for the Gaussian measure in hand, we proceed to transfer them to the uniform measure on the Euclidean ball. Let \(\overline{\gamma }_{n}\) be the Gaussian measure on \(\mathbb{R }^n\) with density \(\text{ d }\overline{\gamma }_{n}(x) = \text{ e }^{-\pi |x|^{2} } \text{ d }x\); in particular, \(\overline{\gamma }_n\) belongs to the class \(\mathcal P _n^b\).
The main goal of this section is to establish the following proposition.
Proposition 7.1
Let \(n < N\le \text{ e }^n\) and set \(m=N/2+(n-1)/2\). Then for any \(p\in (0, (N-n+1)/4)\), we have
and
where \(c\) is a positive numeric constant.
For the case when \(N=n\), see Remark 7.3. For simplicity, we assume throughout that \(m= N/2 +(n-1)/2\) is an integer; simple modifications will yield the result for all \(n\) and \(N\).
As in the previous sections, we will prove a more general statement. Let \(C\subset \mathbb{R }^N\) be a \(1\)-symmetric convex body. For convenience of notation, we write \(\overline{x}=(x_1,\ldots , x_N)\in (\mathbb{R }^n)^N\) and set
The main properties of \(F\) used here are the following:
(i) \(F\) is coordinate-wise increasing: for fixed \(x_1,\ldots , x_N\in \mathbb{R }^n\) and for \(0<s_{i}\le t_{i}\), \(i\le N\), we have
$$\begin{aligned} F(s_{1}x_{1}, \ldots , s_{N}x_{N}) \le F(t_{1}x_{1}, \ldots , t_{N}x_{N}); \end{aligned}$$
(7.4)
see the proof of Theorem 3.1.
(ii) \(F\) is \(n\)-homogeneous, i.e., \(F(a\overline{x}) = a^{n} F(\overline{x})\) for \(a>0\);
(iii) \(F\) is invariant under permutation of its coordinates, i.e., \(F(x_1,\ldots ,x_N)=F(x_{\xi (1)},\ldots ,x_{\xi (N)})\) for any permutation \(\xi :\{1,\ldots ,N\} \rightarrow \{1,\ldots , N\}\).
Proposition 7.2
Let \(F:(\mathbb{R }^n)^N\rightarrow \mathbb{R }^{+}\) be defined by (7.3). Let \(n<N\le \text{ e }^n\) and set \(m=N/2+(n-1)/2\). If \(p\in (0,(N-n+1)/4)\), then
where \(c\) is a positive numeric constant.
The complementary inequality
follows from Theorem 3.1.
To prove the proposition, we will express the expectations in (7.5) in spherical coordinates and compare them with the corresponding expectations on the \(N\)-fold product of spheres \(S_n^N:=S^{n-1}\times \cdots \times S^{n-1}\), equipped with the product of the Haar probability measures \(\sigma \), denoted here by \(\mathbb P _{\otimes _{i=1}^N \sigma }\). Before doing so, we discuss the case \(N=n\).
Remark 7.3
If \(N=n\), then \(F(x_1,\ldots ,x_n)=|\mathrm{det}([x_1\ldots x_n])|{\text{ vol }}_{n}{\big ({C}\big )}\). In this case, if \(X_1,\ldots ,X_n\) are independent and distributed according to \(\overline{\gamma }_n\), then one can write \(X_i=|X_i|\theta _i\), where \(\theta _i=X_i/|X_i|\) is uniformly distributed on the sphere and is independent of \(|X_i|\). Thus for any \(p\in (0,1)\),
Similarly, if the \(X_i\)’s are independent and sampled according to \(\lambda _{D_n}\), we have
Thus equality holds
for all \(p\in (0,1)\), where
In particular, the constant \(4\) in Proposition 7.2 is not needed when \(N=n\).
Proof of Proposition 7.2
Assume first that \(X_1,\ldots ,X_N\) are independent random vectors distributed according to \(\overline{\gamma }_n\) and write \(\overline{X}=(X_1,\ldots ,X_N)\). Then for each \(t_0>0\), we have
where \(\overline{\theta }=(\theta _1,\ldots ,\theta _N)\) is distributed according to \(\mathbb P _{\otimes _{i=1}^N \sigma }\).
At this point, we choose \(t_0\) such that \(\overline{\gamma }_n(t_0B_2^n) =1-\text{ e }^{-n}\); one can check that \(t_0\simeq \sqrt{n}\). Then, for \(N\le \text{ e }^{n}\), we have
Combining (7.6) and (7.7) yields
where \(c>0\) is a positive numeric constant.
Assume now that \(X_1,X_2,\ldots , X_N\) are independent random vectors distributed uniformly in \(D_n\) and write \(\overline{X}=(X_1,\ldots ,X_N)\). Note that for each \(i=1,\ldots ,N\), we can write \(X_i=|X_i| \theta _i\), where \(|X_i|\) is the Euclidean norm of \(X_i\), and \(\theta _i = X_i/|X_i|\) is distributed uniformly on the sphere \(S^{n-1}\) and is independent of \(|X_i|\).
Let \(s_0\) be such that \(\mathbb{P }_{\lambda _{D_n}}\Big (|X_1|\ge s_0\Big )=1-\text{ e }^{-n}\) and note that \(s_0\simeq \sqrt{n}\). Since \(N\le \text{ e }^n\),
Denote the decreasing rearrangement of the sequence \((|X_i|)\) by \((|X_i|^{*})\). Then
Since \(F\) is invariant under permutations, we have
We partition the sequence \((|X_i|^*)\) into three blocks as follows:
Taking \(m=n-1+(N-n+1)/2 = N/2 +(n-1)/2\) and using monotonicity and homogeneity of \(F\), we have
Since \(N-m = (N-n+1)/2\), we have
By the distribution formula for non-negative random variables, we obtain
for all \(0< p \le (N-n+1)/4\), where \(c_0\) is a positive numeric constant. By (7.9) we have
where \(c_1\) is a positive numeric constant. Taking powers and then expectations in (7.10) and applying (7.11), we get that for \(0<p<(N-n+1)/4\),
where \(\overline{\theta }=(\theta _1,\ldots ,\theta _{m},0,\ldots ,0)\) and \(\theta _1,\ldots ,\theta _{m}\) are independent and uniformly distributed on the sphere \(S^{n-1}\). The proposition now follows by applying (7.8) (with \(N\) replaced by \(m\)).\(\square \)
Remark 7.4
(1) The assumption \(N\le \text{ e }^n\) in Proposition 7.1 is essential for \(K_N\) since after this point \(\mathbb{E }_{\otimes \overline{\gamma }_n}{\text{ vol }}_{n}{\big ({K_N}\big )}\) is much larger than \(\mathbb{E }_{\otimes \lambda _{D_n}}{\text{ vol }}_{n}{\big ({K_N}\big )}\).
(2) We do not believe the constant \(4\) in Proposition 7.1 is necessary; perhaps the optimal constant is \(1+o(1)\). Any improvement here will lead to better constants in the exponents of the small-ball estimates in Theorems 1.1–1.3.
8 Proof of the Main Theorems and Further Remarks
We are now ready to prove the two main results of this paper.
Theorem 8.1
Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Let \(\delta >1\) and let \(\varepsilon \in (0,1)\). Then
and, if \(N\le n\text{ e }^{\delta ^2}\), then
Proof
Let \(m=N/2+(n-1)/2\) and let \(p_0 =\big (\frac{m}{n}\big )^{1-1/\delta ^2}\). By (4.13) and Proposition 6.2,
where \(c^{\prime }\) is a positive numeric constant. Since \(p_0\le (N-n+1)/4\), by Proposition 7.2 and (8.3), we have
By Markov’s inequality, we obtain
Lastly, apply Theorem 3.1. The proof of (8.2) follows the same argument.\(\square \)
Theorem 8.2
Let \(n\le N\le \text{ e }^n\) and let \(\mu _1,\ldots , \mu _N\in \mathcal P _n^b\). Then for each \(\varepsilon \in (0,1)\),
Proof
Argue as in the proof of the previous theorem and apply Proposition 6.3 instead of Proposition 6.2.\(\square \)
Remark 8.3
Note that when \(N=2n\) the estimate in (8.2) is much stronger than the estimate in (8.1), which suggests that a better exponent can be achieved in general. As we will see in the next subsection, the estimates in (8.2) and (8.4) are sharp up to the numeric constants involved.
Remark 8.4
We wish to emphasize several points regarding the exponent \(n(N-n+1-o(1))/4\) in (8.2) and (8.4). Firstly, as we mentioned in Remark 7.4, the constant \(4\) is an artifact of the proof of Proposition 7.2 and \(N-n+1-o(1)\) is the best possible in (8.4) (cf. Proposition 6.7). Secondly, the \(o(1)\)-term can be estimated to a high degree of accuracy (cf. Proposition 6.5 and its proof). Finally, the ‘+1’ in the exponent accommodates the case \(N=n\), in which case \(n(N-n+1-o(1))= n(1-o(1))\) is the best that can be achieved in general; note also that the \(4\) is not needed in this case (cf. Remark 7.3).
8.1 Complementary Small-Ball Estimates
In this section we give lower bounds for the probabilities in Theorems 8.1 and 8.2. We make use of known bounds for the volume of the convex hull and zonotope generated by arbitrary points in \(\mathbb{R }^n\) (which we stated in Sect. 4).
Let \(\mu \in \mathcal P _n^b\) and assume that \(f_{\mu }(0)=\Vert f_{\mu }\Vert _{\infty }=1\). Suppose there exists \(\varepsilon _0= \varepsilon _0(\mu )\) such that
where \(c\) is a positive numeric constant. For instance, if \(f_{\mu }\) is continuous at \(0\) then there exists \(\varepsilon _0=\varepsilon _0(\mu )\) such that \(|f_{\mu }(x)|\ge 1/2\) whenever \(|x|\le \varepsilon _0 c\sqrt{n}\), hence (8.5) holds (with \(c\) replaced by \(2^{1/n}c\)). Given \(\varepsilon _0(\mu )\), we can apply Theorem 4.6 to obtain, for each \(\varepsilon \le \varepsilon _0(\mu )\),
Similarly, for \(Z_N\) we apply Lemma 4.7: for any \(\varepsilon \le \varepsilon _0(\mu )\),
Thus even though \(\varepsilon _0(\mu )\) depends on \(\mu \) and \(\inf \{\varepsilon _0(\mu ):\mu \in \mathcal P _n^b\}=0\), the asymptotic behavior of the small-ball estimates for \(K_N\) and \(Z_N\) as \(\varepsilon \rightarrow 0\) is at least \(\varepsilon ^{nN}\). In some classes of measures, one can control the value of \(\varepsilon _0(\mu )\); in particular, for the class of isotropic log-concave probability measures (see Sect. 9.1).
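In the simplest instance, \(n=2\) and \(\mu \) uniform on the unit disk, the event that all \(N\) samples land in the \(\varepsilon \)-homothet has probability exactly \(\varepsilon ^{nN}\) and forces \({\text{ vol }}_{2}{(K_N)}\le \pi \varepsilon ^2\). A hedged Monte Carlo sketch (parameters and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, eps, trials = 2, 3, 0.7, 200_000

# For a point uniform in the unit disk, the radius is sqrt(U) with U
# uniform on [0, 1], so P(|X| <= eps) = eps^2 and, by independence,
# P(all N points lie in the eps-disk) = eps^(n*N).
radii = np.sqrt(rng.random((trials, N)))
est = np.mean(np.all(radii <= eps, axis=1))
assert abs(est - eps**(n * N)) < 0.01
# On this event the convex hull K_N is contained in the eps-disk, so
# vol_2(K_N) <= pi * eps^2: the small-ball probability decays like eps^(n*N).
```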
9 Isotropicity and Log-Concavity
In many cases, the literature on volumetric bounds for random convex sets involves isotropic measures rather than those in \(\mathcal P _n^b\). However, one can easily deduce results for isotropic measures from our main theorems.
Let \(\mathcal P _n^{\mathrm{cov}}\) denote the set of measures \(\mu \in \mathcal P _n\) with bounded densities such that the covariance matrix of \(\mu \) is well-defined. We say that a probability measure \(\mu \in \mathcal P _n^{\mathrm{cov}}\) is isotropic if its covariance matrix is the identity. When \(\mu \) is isotropic, we define its isotropic constant \(L_{\mu }\) by
where \(f_{\mu }\) is the density of \(\mu \). Given any measure \(\mu \in \mathcal P _n^{\mathrm{cov}}\) with barycenter at the origin, one can find a linear map \(T:\mathbb{R }^n\rightarrow \mathbb{R }^n\) (unique modulo orthogonal transformations) of determinant one such that \(\mu \circ T^{-1}\) is an isotropic probability measure; in this way, the isotropic constant is uniquely defined for all \(\mu \in \mathcal P _n^\mathrm{cov}\).
Let \(a>0\) and \(\mu \in \mathcal P _n^{\mathrm{cov}}\) with density \(f_\mu \). We define a new probability measure \(\mu _{a}\) on \(\mathbb{R }^n\) as the measure that has density \(f_{\mu _{a}}(x)= a^{n} f_{\mu }(ax)\). Obviously,
Moreover, if \(F:(\mathbb{R }^{n})^{N}\rightarrow \mathbb{R }^{+}\) is \(p\)-homogeneous, then
Thus if \(\mu \in \mathcal P _n^{\mathrm{cov}}\) is isotropic then \(\mu ^{\prime }:= \mu _{\frac{1}{L_{\mu }}} \) satisfies \(\Vert f_{\mu ^{\prime }}\Vert _{\infty }=1\),
and
By a change of variables, note that for any \(S\in SL(n)\), we have
for any \(p\) for which the expressions are defined. Thus there is no loss in generality in assuming that \(\mu \) is isotropic.
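For concreteness, the homogeneity identity used above can be spelled out. The computation below is a hedged sketch: it assumes only that \(F\) is \(p\)-homogeneous and that \(\mu _a\) has density \(f_{\mu _a}(x)=a^n f_{\mu }(ax)\), as defined above.

```latex
\begin{aligned}
\mathbb{E}_{\otimes \mu_a} F(X_1,\ldots ,X_N)
&=\int_{(\mathbb{R}^n)^N} F(x_1,\ldots ,x_N)\prod_{i=1}^{N} a^{n} f_{\mu}(a x_i)\,\mathrm{d}x_i\\
&=\int_{(\mathbb{R}^n)^N} F\Big(\frac{y_1}{a},\ldots ,\frac{y_N}{a}\Big)\prod_{i=1}^{N} f_{\mu}(y_i)\,\mathrm{d}y_i
\qquad (y_i = a x_i)\\
&=a^{-p}\,\mathbb{E}_{\otimes \mu} F(X_1,\ldots ,X_N).
\end{aligned}
```

In particular, taking \(a=1/L_{\mu }\) and \(F\) \(n\)-homogeneous, the expectations for \(\mu ^{\prime }\) and \(\mu \) differ exactly by the factor \(L_{\mu }^{\,n}\).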
Following the proof of our main theorem we obtain a corresponding result for isotropic probability measures.
Theorem 9.1
Let \(n\le N\le \text{ e }^n\). Let \(\mu \in \mathcal P ^{\mathrm{cov}}_n\) and assume that \(\mu \) is isotropic. Then for every \(\varepsilon \in (0,1)\),
and, if \(N\le n\text{ e }^{\delta ^2}\), then
where \(c\) and \(c_1\) are positive numeric constants. Similarly, for each \(\varepsilon \in (0,1)\),
where \(c_2\) is a positive numeric constant.
It is known that \(\inf \{L_{\mu }: \mu \in \mathcal P _n^{\mathrm{cov}}\}\ge c\), where \(c\) is a positive numeric constant (see [5, 58]). On the other hand, \(L_{\mu }\) does not admit a uniform upper bound as \(\mu \) varies in \(\mathcal P _n^{\mathrm{cov}}\). However, in the important class of log-concave probability measures \(\mathcal LP _n\) it has been conjectured that
where \(c>0\) is a positive numeric constant. This is known to be equivalent to a famous open problem in convex geometry, namely, the Hyperplane Conjecture. We refer to [58] for an introductory survey and to [23, 40, 42] for the best known results. In many large subclasses of \(\mathcal LP _n\), it has been verified that \(L_{\mu }\) admits a uniform upper bound, independent of the dimension; see, e.g., the references given in [63]. Henceforth, we say that \(\mu \in \mathcal LP _n\) has bounded isotropic constant if \(L_{\mu }\le c\), where \(c\) is a positive numeric constant (independent of \(\mu \) and \(n\)).
It is known that if \(\mu \) is an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant and \(n\le N\le \text{ e }^{n}\), then
see [21]. In this case, we obtain the following result.
Theorem 9.2
Let \(n\le N\le \text{ e }^{n}\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then for every \(\varepsilon \in (0,1)\),
and, if \(N\le n\text{ e }^{\delta ^2}\), then
where \(c\) and \(c_1\) are positive numeric constants.
A similar theorem is true for random zonotopes. If \(\mu \) is an isotropic log-concave probability measure on \(\mathbb{R }^n\), then
the latter equivalence is proved in [68, Proposition 6 & Remark 3]. For the reader’s convenience we sketch the proof. Note that for any subspace \(E\subset \mathbb{R }^n\), the isotropicity of \(\mu \) implies that
Thus
(apply (2.5) and use Fubini’s theorem, integrating with respect to \(X_n\), then \(X_{n-1}\), and so on). For \(I\subset \{1,\ldots ,N\}\), write \(d_I := |\mathrm{det}([X_i]_{i\in I})|\) and apply the zonotope volume formula (4.16) and Jensen’s inequality:
where \(c\) is a positive numeric constant and \(I_0=\{1,\ldots ,n\}\). For the lower bound, we use concavity of \(x\mapsto x^{1/n}\) in (4.16):
One completes the proof of (9.4) by using the fact that \(\mathbb{E }_{\otimes \mu } |\mathrm{det}([X_1\ldots X_n])|^{1/n}\simeq \sqrt{n}\) (see [68, Corollary 1]).
Theorem 9.1 and the equivalence in (9.4) lead to the following.
Theorem 9.3
Let \(n\le N\le \text{ e }^n\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\) with bounded isotropic constant. Then, for every \(\varepsilon \in (0,1)\),
where \(c\) is a positive numeric constant.
9.1 Complementary Lower Bounds
In this section we prove that the small-ball probabilities in (9.3) (for \(N=2n\)) and (9.5) (for \(N\ge 2n\)) are essentially sharp (up to the numeric constants involved).
Proposition 9.4
Let \(n\le N\le \text{ e }^n\) and let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\). Then for every \(\varepsilon \in (0,c_0)\),
and
where the \(c_i\)’s are positive numeric constants.
Note that a sharper bound for the Gaussian measure \(\gamma _n\) was given in Proposition 6.7. The proof is analogous to the general case, which we gave in Sect. 8.1. All that remains is to show the following proposition.
Proposition 9.5
Let \(\mu \) be an isotropic log-concave probability measure on \(\mathbb{R }^n\). Let \(X\) be a random vector distributed according to \(\mu \). Then for every \(\varepsilon \in (0,c_0)\),
where \(c_0\) and \(c_1\) are positive numeric constants.
Proposition 9.5 shows that for any isotropic log-concave probability measure \(\mu \) on \(\mathbb{R }^n\) the quantity \(\varepsilon _0(\mu )\) defined in Sect. 8.1 satisfies \(\varepsilon _0(\mu )\ge c_0\) (with \(c=c_1\) in (8.5)). By the argument given in Sect. 8.1, the small-ball estimates in (9.3) and (9.5) are essentially sharp.
The first step in the proof of Proposition 9.5 involves covering numbers. Recall that if \(C\) and \(D\) are convex bodies in \(\mathbb{R }^N\), the covering number of \(C\) with respect to \(D\) is the minimum number \(N(C, D)\) of translates of \(D\) whose union covers \(C\), i.e., \(N(C,D)=\min \big \{m\in \mathbb{N }: \exists \,y_{1},\ldots ,y_{m}\in \mathbb{R }^N \text{ such that } C\subset \bigcup _{i=1}^{m}(y_{i}+D)\big \}\).
For further information on covering numbers see, e.g., [67].
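As a concrete instance of the definition, the regular grid covering of the cube respects the classical volumetric bound \(N(C,\varepsilon C)\le (1+2/\varepsilon )^N\) for origin-symmetric convex \(C\) (see, e.g., [67]). The following Python sketch (function name ours) illustrates this:

```python
from math import ceil

def cube_covering_count(N, eps):
    """Translates of eps*B_inf^N (cubes of side 2*eps) needed to cover
    B_inf^N (side 2) by a regular grid: ceil(1/eps) boxes per axis."""
    return ceil(1.0 / eps)**N

# The grid construction stays below the volumetric bound (1 + 2/eps)^N.
for N in (2, 5, 10):
    for eps in (0.5, 0.25, 0.1):
        assert cube_covering_count(N, eps) <= (1 + 2 / eps)**N
```

The grid count is tight up to constants for the cube; for general bodies only the volumetric bound is available.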
The second ingredient is the following technical lemma about log-concave functions, which is essentially shown in [41, Lemmas 4.4, 5.2].
Lemma 9.6
Let \(f: \mathbb R ^{+} \rightarrow \mathbb R ^{+} \) be a \(C^{2}\) log-concave function with \(\int _{0}^{\infty } f(t)\,\text{ d }t <\infty \). Suppose that \(\Vert f\Vert _{\infty } \le \text{ e }^{n} f(0)\). Then for \(n\ge 2\) and any \(b>0\),
Proof
For convenience, let \(g(t)= t^{n-1} f(t)\) and \(h:= \int _{0}^{\infty } g(t) \text{ d }t\). Let \(t_{n}\) be the (unique) positive real such that \( g^{\prime }(t_{n}) = 0\). Let \(\varepsilon \in (0,1)\) and \(a\ge 5\). Set \(t_{0}:= \sup \{ s>0 : f(s) \ge \text{ e }^{-an} f(0)\}\). It is shown in [41, Lemmas 4.4, 5.2] that
where \(c_1>1\) and \(0<c<1\) are positive numeric constants, and
Taking \(\varepsilon =1/2\) in (9.8) and using the definition of \(t_n\), we have
where \(c_2\) and \(c_3\) are positive numeric constants. Taking \(a=5\) in (9.9) and using (9.8), we have \(t_{0} \ge t_n/2\), which means that for \(s\le t_n/2\),
Applying (9.8) once more, together with (9.11), we have
which implies \(h\ge c_{1}^{n} f(0) t_{n}^{n}\). Finally, if \(0<b\le t_n/2\), then (9.11) yields
On the other hand, if \(b\ge t_n/2\), we apply (9.10) to get
from which the result follows.\(\square \)
Proof of Proposition 9.5
Let \(K\) be an isotropic convex body with isotropic constant \(L_K\) (cf. (2.1)). We will first show that for every \(\varepsilon \in (0,1)\),
where \(c>0\) is a positive numeric constant. By [49, Lemma 4], the covering number \(N(K,L_K D_n)\) satisfies
where \(c_{0}>0\) is a positive numeric constant. Standard volumetric arguments (as in, e.g., [31, Lemma 4.2]) yield
By the Brunn–Minkowski inequality and [25, Theorem 4], if \(C_{1}, C_{2}\subset \mathbb{R }^N\) are convex bodies such that the center of mass of \(C_{1}\) is the origin, \({\text{ vol }}_{n}{\big ({C_1}\big )}=1\), and \(C_{2}\) is origin-symmetric, then
Thus
which establishes (9.13).
Without loss of generality we may assume that the density \(f\) of \(\mu \) is \(C^{2}\). Let \(b_{n}:= \omega _{n}^{-1/n}\) and set
By [5], \(\rho _{K}\) is the radial function of a convex body \(K\). It is known that \({\text{ vol }}_{n}{\big ({K}\big )}^{1/n} =f(0)^{-1/n}\), \(L_K\simeq f(0)^{1/n}\), and there exists \(T\in SL(n)\) satisfying \(|Tx|\simeq |x|\) for all \(x\in S^{n-1}\) such that \(TK\) is an isotropic convex body (see, e.g., [64, Propositions 3.3, 3.5]). Thus if \(\widetilde{K}\) is the volume-one homothet of \(K\), we have
Note that
Since \(\mu \) is isotropic, [25, Theorem 4] gives \(\Vert f\Vert _{\infty } \le \text{ e }^{n}f(0)\). Using Lemma 9.6 and (9.14) we have
By adjusting the constants and applying Lemma 9.6 and (9.13) for \(\widetilde{K}\), we conclude the proof.\(\square \)
10 Bounds for a General Convex Body \(C\)
A large part of this paper has involved general random convex sets \([X_1\ldots X_N]C\) and we have emphasized the small-ball probabilities for \(C=B_1^N\) and \(C=B_{\infty }^N\) only. The approach of applying a random linear operator \([X_1\ldots X_N]\) to a general convex body \(C\) has led to several applications [65, 66, §4,5] and we feel it is of interest to outline how to obtain small-ball probabilities for \({\text{ vol }}_{n}{\big ({[X_1\ldots X_N]C}\big )}\) in the general case.
If \(C\subset \mathbb{R }^N\) is nearly degenerate, one cannot expect to control the small-ball probability
To ensure that \(C\) is not degenerate, we make assumptions about its “position.” By a position of a convex body, we mean a linear image, chosen to satisfy certain conditions. As Proposition 4.4 indicates, a key part of the proof is to bound the quantity \(A_{n,p}(C)\). As we did for \(B_1^N\) and \(B_{\infty }^N\), we will give nearly optimal estimates when \(N\) is proportional to \(n\), assuming that \(C\) is in a suitable position. We will also provide non-trivial estimates in the general case.
10.1 \(M\)-Position and the Proportional Case
Our first method for bounding \(A_{n,p}(C)\) is applicable when \(N\) is proportional to \(n\) and depends on a deep result due to Milman [57]; see also [67, Chap. 7]. Milman proved that given any convex body \(C\), one can find a suitable position such that the covering number of \(C\) by a ball of the same volume is of minimal possible order. As in Sect. 9.1, we use \(N(C, D)\) to denote the covering number of \(C\) with respect to \(D\) (cf. (9.6)). Using the above notation, Milman’s theorem reads as follows.
Theorem 10.1
There exists a constant \(\beta >0\) such that for any convex body \(C\subset \mathbb{R }^N\) there exists a linear operator \(T:\mathbb{R }^N\rightarrow \mathbb{R }^N\) such that \({\text{ vol }}_{N}{\big ({TC}\big )} =1\) and
We say that \(C\) is in \(M\)-position with constant \(\beta \) if the operator \(T\) in Theorem 10.1 can be taken to be the identity. We refer to [67] for further information about \(M\)-position.
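For orientation, the covering estimate asserted in Milman's theorem is usually stated as follows (a standard formulation, cf. [67, Chap. 7]; here \(\overline{D}_N\) denotes the Euclidean ball of volume one, a normalization we fix for this sketch):

```latex
\max\Big\{N\big(TC,\overline{D}_N\big),\,
          N\big(\overline{D}_N,TC\big),\,
          N\big((TC)^{\circ },\overline{D}_N\big),\,
          N\big(\overline{D}_N,(TC)^{\circ }\big)\Big\}
\;\le\; \mathrm{e}^{\beta N}.
```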
The following proposition is a well-known property of bodies in \(M\)-position; the proof is included for completeness.
Proposition 10.2
Let \(C\subset \mathbb{R }^N\) be an origin-symmetric convex body in \(M\)-position with constant \(\beta >0\). Let \(\lambda \in (0,1)\) and set \(n=\lambda N\). Then
where \( c (\lambda ,\beta )>0\) depends only on \(\lambda , \beta \).
Proof
Let \(F\in G_{N,n}\). Then
hence
Since \({\text{ vol }}_{N-n}{\big ({C\cap F^{\perp }}\big )} {\text{ vol }}_{n}{\big ({P_{F} C}\big )}\ge 1\), we have
Thus for every \(1\le \ell <N\) and \(E\in G_{N, N-\ell }\) we obtain
Applying the latter inequality for \(\ell := N-n\) and \(E\in G_{N,n}\) yields
By (10.3), (10.4) and the fact that \({\text{ vol }}_{n}{\big ({P_{F} D_{N}}\big )}^{\tfrac{1}{n}} \simeq \sqrt{N/n}\), we conclude that
This yields (10.2) with \(c(\lambda , \beta ):= \frac{c}{\sqrt{\lambda } } \text{ e }^{\tfrac{2\beta }{\lambda }}\).\(\square \)
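A step worth isolating from the proof above is the standard fact that covering numbers control volumes of projections. Sketched under the volume-one Euclidean-ball normalization (the notation \(\overline{D}_N\) is ours): if \(C\subset \bigcup _{i\le M}(x_i+\overline{D}_N)\) with \(M=N(C,\overline{D}_N)\), then for any subspace \(E\subset \mathbb{R }^N\),

```latex
P_{E}C \;\subset\; \bigcup_{i\le M}\big(P_{E}x_{i}+P_{E}\overline{D}_N\big)
\quad\Longrightarrow\quad
\mathrm{vol}_{\dim E}\big(P_{E}C\big)
\;\le\; N\big(C,\overline{D}_N\big)\,
        \mathrm{vol}_{\dim E}\big(P_{E}\overline{D}_N\big).
```

Combined with \(N(C,\overline{D}_N)\le \mathrm{e}^{\beta N}\) for \(C\) in \(M\)-position, this yields the covering-based volume bounds used in the proof.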
Proposition 10.3
Let \(C\subset \mathbb{R }^N\) be an origin-symmetric convex body in \(M\)-position with constant \(\beta \). Let \(\lambda \in (0,1)\) and let \(n=\lambda N\). Then for all \(p\in [1, \infty ]\),
Proof
Recalling the definition of \(A_{n,p}(C)\) (cf. (4.10)), we have
Applying Proposition 10.2 gives the result.\(\square \)
By applying Proposition 4.4, one obtains small-ball estimates for \({{\text{ vol }}_{n}{\big ({GC}\big )}}\) when \(N\) is proportional to \(n\) and \(C\) is in \(M\)-position. Proceeding to the case of arbitrary measures \(\mu _i\in \mathcal P _n^b\) then depends on the comparison in Proposition 7.2 (where we have assumed \(C\) is \(1\)-symmetric) and the proof follows that of Theorem 8.1. It is not difficult to show that any \(1\)-symmetric convex body of volume one is in \(M\)-position.
10.2 Small-Ball Estimates for Norms: Implications for Generalized Intrinsic Volumes
Our second method for bounding \(A_{n,p}(C)\) involves Proposition 5.5 and therefore depends on lower bounds for generalized mean-widths \(W_{-p}(C)\); this, in turn, depends on small-ball estimates for norms. The study of small-ball probabilities for norms was initiated in [43, 46] and shown to have close connections to Milman’s proof of Dvoretzky’s theorem on nearly-Euclidean sections of convex bodies. We will give bounds for \(A_{n,p}(C)\) in terms of the Dvoretzky dimension of \(C\) (defined below). Actually, one can replace the Dvoretzky dimension by a potentially larger quantity. For this we will make use of a theorem from [43], which we state below in terms of support functions (dual to the setting there).
If \(C\subset \mathbb{R }^N\) is a convex body, the Dvoretzky dimension \(k_*(C)\) is defined by
where \(\mathrm{diam}(C)\) is the diameter of \(C\) and \(W(C)\) is the mean-width of \(C\). As shown by Milman [56] (see also [60]), the parameter \(k_*(C)\) is the dimension up to which “most” projections of \(C\) are nearly Euclidean; more precisely, for \(n\le k_*(C)\) the \(\nu _{N,n}\)-measure of subspaces \(E\in G_{N,n}\) satisfying
for some positive numeric constants \(c_1\) and \(c_2\) is at least \(1-\text{ e }^{-n}\); see [59] or [67] for further background information.
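Concretely, a standard formulation of the Dvoretzky dimension (see, e.g., [56, 59]; the numeric constant \(c\) depends on the chosen normalization) is

```latex
k_{*}(C) \;=\; c\, N \left(\frac{W(C)}{\operatorname{diam}(C)}\right)^{2}.
```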
It has been observed that if one requires only the left-hand inclusion of (10.6), then the dimension at which this holds can increase dramatically. The critical dimension depends on the following quantity, introduced in [43],
One has \(d_*(C)\ge c_1k_*(C)\), where \(c_1>0\) is a numeric constant; see [43].
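For the reader's convenience, one common dual formulation of this quantity from [43], transcribed here in terms of support functions with \(\overline{h}:=\int _{S^{N-1}}h_C\,d\sigma \) (our notation; normalizations vary across the literature), is

```latex
d_{*}(C) \;=\; \min\Big\{-\ln \sigma \Big(\Big\{\theta \in S^{N-1}:
   h_{C}(\theta )\le \tfrac{1}{2}\,\overline{h}\Big\}\Big),\; N\Big\},
```

where \(\sigma \) denotes the rotation-invariant probability measure on \(S^{N-1}\).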
Theorem 10.4
[43] Let \(C\) be an origin-symmetric convex body in \(\mathbb{R }^N\). Assume that \(0<p\le d_*(C)\). Then
where \(c, c_1, c_2\) are positive numeric constants.
When \(C\) is in a suitable position, for instance when \(C^{\circ }\) is in John’s position (see, e.g., [59, Chap. 3]), we have \(k_*(C)\ge c \ln N\), where \(c\) is a positive numeric constant.
Proposition 10.5
Let \(C\subset \mathbb{R }^N\) be an origin-symmetric convex body. If \( np\le k_*(C)\le d_*(C)\), then
where \(c\) is a positive numeric constant. In particular, if \(C\) is a convex body such that \(C^{\circ }\) is in John’s position and \(0\le p\le \frac{\ln N}{n}\), then (10.7) holds.
Proof
By (5.2), we have
On the other hand, Proposition 5.5 gives
Thus
Applying Theorem 10.4 yields \(A_{n,p}(C)\le c\).\(\square \)
Remark 10.6
It is shown in [43] that \(d_*(B_1^N)\) is much larger than \(k_*(B_1^N)\). In fact, the calculation in [43, Remark 2 on page 204] led us to consider Proposition 6.1, and our proof is based on similar estimates.
References
Adamczak, R., Guédon, O., Litvak, A., Pajor, A., Tomczak-Jaegermann, N.: Condition number of a square matrix with i.i.d. columns drawn from a convex body. Proc. Am. Math. Soc. 140, 987–998 (2012)
Adamczak, R., Litvak, A.E., Pajor, A., Tomczak-Jaegermann, N.: Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles. J. Am. Math. Soc. 23(2), 535–561 (2010)
Affentranger, F.: The convex hull of random points with spherically symmetric distributions. Rend. Sem. Mat. Univ. Politec. Torino 49(3), 359–383 (1991)
Anderson, T. W.: An Introduction to Multivariate Statistical Analysis, 3rd edn., Wiley Series in Probability and Statistics. Wiley, Hoboken, NJ (2003)
Ball, K.: Logarithmically concave functions and sections of convex sets in \(\mathbb{R}^{n}\). Studia Math. 88(1), 69–84 (1988)
Ball, K., Pajor, A.: Convex bodies with few faces. Proc. Am. Math. Soc. 110(1), 225–231 (1990)
Bárány, I.: Random polytopes, convex bodies, and approximation. In: Stochastic Geometry. Lecture Notes in Mathematics, vol. 1892, pp. 77–118. Springer, Berlin (2007)
Bárány, I., Füredi, Z.: Computing the volume is difficult. Discrete Comput. Geom. 2(4), 319–326 (1987)
Bárány, I., Füredi, Z.: Approximation of the sphere by polytopes having few vertices. Proc. Am. Math. Soc. 102(3), 651–659 (1988)
Bárány, I., Vu, V.: Central limit theorems for Gaussian polytopes. Ann. Probab. 35(4), 1593–1621 (2007)
Baryshnikov, Y.M., Vitale, R.A.: Regular simplices and Gaussian samples. Discrete Comput. Geom. 11(2), 141–147 (1994)
Betke, U., Henk, M.: Intrinsic volumes and lattice points of crosspolytopes. Monatsh. Math. 115(1–2), 27–33 (1993)
Borell, C.: Convex set functions in \(d\)-space. Period. Math. Hungar. 6(2), 111–136 (1975)
Bourgain, J., Lindenstrauss, J., Milman, V.: Approximation of zonoids by zonotopes. Acta Math. 162(1–2), 73–141 (1989)
Bourgain, J., Meyer, M., Milman, V., Pajor, A.: On a geometric inequality. In: Geometric Aspects of Functional Analysis (1986/1987). Lecture Notes in Mathematics, vol. 1317, pp. 271–282. Springer, Berlin (1988)
Bourgain, J., Milman, V.D.: New volume ratio properties for convex symmetric bodies in \(\mathbb{R}^{n}\). Invent. Math. 88(2), 319–340 (1987)
Brascamp, H.J., Lieb, E.H., Luttinger, J.M.: A general rearrangement inequality for multiple integrals. J. Funct. Anal. 17, 227–237 (1974)
Burchard, A.: A short course on rearrangement inequalities. Available at http://www.math.utoronto.ca/almut/rearrange.pdf (2009)
Carl, B., Pajor, A.: Gel’ fand numbers of operators with values in a Hilbert space. Invent. Math. 94(3), 479–504 (1988)
Christ, M.: Estimates for the \(k\)-plane transform. Indiana Univ. Math. J. 33(6), 891–910 (1984)
Dafnis, N., Giannopoulos, A., Tsolomitis, A.: Asymptotic shape of a random polytope in a convex body. J. Funct. Anal. 257(9), 2820–2839 (2009)
Dafnis, N., Paouris, G.: Estimates for the affine and dual affine quermassintegrals of convex bodies. Ill. J. Math. (to appear)
Dafnis, N., Paouris, G.: Small ball probability estimates, \(\psi _2\)-behavior and the hyperplane conjecture. J. Funct. Anal. 258(6), 1933–1964 (2010)
Evans, L.C., Gariepy, R.F.: Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics. CRC Press, Boca Raton, FL (1992)
Fradelizi, M.: Sections of convex bodies through their centroid. Arch. Math. (Basel) 69(6), 515–522 (1997)
Furstenberg, H., Tzkoni, I.: Spherical functions and integral geometry. Israel J. Math. 10, 327–338 (1971)
Gardner, R. J.: Geometric Tomography, 2nd edn., Encyclopedia of Mathematics and Its Applications, vol. 58. Cambridge University Press, Cambridge (2006)
Gardner, R.J.: The dual Brunn-Minkowski theory for bounded Borel sets: dual affine quermassintegrals and inequalities. Adv. Math. 216(1), 358–386 (2007)
Giannopoulos, A., Hartzoulaki, M.: Random spaces generated by vertices of the cube. Discrete Comput. Geom. 28(2), 255–273 (2002)
Giannopoulos, A., Hartzoulaki, M., Tsolomitis, A.: Random points in isotropic unconditional convex bodies. J. London Math. Soc. (2) 72(3), 779–798 (2005)
Giannopoulos, A., Paouris, G., Vritsiou, B.H.: The isotropic position and the reverse Santaló inequality. Israel J. Math. doi:10.1007/s11856-012-0173-2
Giannopoulos, A., Tsolomitis, A.: Volume radius of a random polytope in a convex body. Math. Proc. Camb. Philos. Soc. 134(1), 13–21 (2003)
Gluskin, E., Milman, V.: Geometric probability and random cotype 2. In: Geometric Aspects of Functional Analysis. Lecture Notes in Mathematics, vol. 1850, pp. 123–138. Springer, Berlin (2004)
Gluskin, E. D.: Extremal properties of orthogonal parallelepipeds and their applications to the geometry of Banach spaces. Mat. Sb. (N.S.) 136(178)(1), 85–96 (1988)
Gluskin, E.D.: On the sum of intervals. In: Geometric Aspects of Functional Analysis. Lecture Notes in Mathematics, vol. 1807, pp. 122–130. Springer, Berlin (2003)
Gordon, Y., Junge, M.: Volume formulas in \(L_p\)-spaces. Positivity 1(1), 7–43 (1997)
Grinberg, E.L.: Isoperimetric inequalities and identities for \(k\)-dimensional cross-sections of convex bodies. Math. Ann. 291(1), 75–86 (1991)
Groemer, H.: On the mean value of the volume of a random polytope in a convex set. Arch. Math. (Basel) 25, 86–90 (1974)
Hug, D., Reitzner, M.: Gaussian polytopes: variances and limit theorems. Adv. Appl. Probab. 37(2), 297–320 (2005)
Klartag, B.: On convex perturbations with a bounded isotropic constant. Geom. Funct. Anal. 16(6), 1274–1290 (2006)
Klartag, B.: A central limit theorem for convex sets. Invent. Math. 168(1), 91–131 (2007)
Klartag, B., Milman, E.: Centroid bodies and the logarithmic Laplace transform—a unified approach. J. Funct. Anal. 262, 10–34 (2012)
Klartag, B., Vershynin, R.: Small ball probability and Dvoretzky’s theorem. Israel J. Math. 157, 193–207 (2007)
Kuperberg, G.: From the Mahler conjecture to Gauss linking integrals. Geom. Funct. Anal. 18(3), 870–892 (2008)
Latala, R., Mankiewicz, P., Oleszkiewicz, K., Tomczak-Jaegermann, N.: Banach–Mazur distances and projections on random subgaussian polytopes. Discrete Comput. Geom. 38(1), 29–50 (2007)
Latała, R., Oleszkiewicz, K.: Small ball probability estimates in terms of widths. Studia Math. 169(3), 305–314 (2005)
Lieb, E.H., Loss, M.: Analysis. In: Graduate Studies in Mathematics, vol. 14. American Mathematical Society, Providence, RI (1997)
Litvak, A., Mankiewicz, P., Tomczak-Jaegermann, N.: Randomized isomorphic Dvoretzky theorem. C. R. Math. Acad. Sci. Paris 335(4), 345–350 (2002)
Litvak, A.E., Milman, V.D., Pajor, A.: The covering numbers and “low \(M^\ast \)-estimate” for quasi-convex bodies. Proc. Am. Math. Soc. 127(5), 1499–1507 (1999)
Litvak, A.E., Pajor, A., Rudelson, M., Tomczak-Jaegermann, N.: Smallest singular value of random matrices and geometry of random polytopes. Adv. Math. 195(2), 491–523 (2005)
Lutwak, E.: A general isepiphanic inequality. Proc. Am. Math. Soc. 90(3), 415–421 (1984)
Lutwak, E.: Inequalities for Hadwiger’s harmonic Quermassintegrals. Math. Ann. 280(1), 165–175 (1988)
Mankiewicz, P., Tomczak-Jaegermann, N.: Geometry of families of random projections of symmetric convex bodies. Geom. Funct. Anal. 11(6), 1282–1326 (2001)
Mankiewicz, P., Tomczak-Jaegermann, N.: Quotients of finite-dimensional Banach spaces; random phenomena. In: Handbook of the Geometry of Banach Spaces, vol. 2, pp. 1201–1246. North-Holland, Amsterdam (2003)
Mankiewicz, P., Tomczak-Jaegermann, N.: Volumetric invariants and operators on random families of Banach spaces. Studia Math. 159(2), 315–335 (2003) (Dedicated to Professor Aleksander Pełczyński on the occasion of his 70th birthday (Polish))
Milman, V. D.: A new proof of A. Dvoretzky’s theorem on cross-sections of convex bodies. Funkcional. Anal. i Priložen. 5(4), 28–37 (1971)
Milman, V. D.: Isomorphic symmetrizations and geometric inequalities. In: Geometric Aspects of Functional Analysis (1986/1987). Lecture Notes in Mathematics, vol. 1317, pp. 107–131. Springer, Berlin (1988)
Milman, V.D., Pajor, A.: Isotropic position and inertia ellipsoids and zonoids of the unit ball of a normed \(n\)-dimensional space. In: Geometric Aspects of Functional Analysis (1987–1988). Lecture Notes in Mathematics, vol. 1376, pp. 64–104. Springer, Berlin (1989)
Milman, V.D., Schechtman, G.: Asymptotic theory of finite-dimensional normed spaces. In: Lecture Notes in Mathematics, vol. 1200. Springer, Berlin (1986) (With an appendix by M. Gromov)
Milman, V.D., Schechtman, G.: Global versus local asymptotic theories of finite-dimensional normed spaces. Duke Math. J. 90(1), 73–93 (1997)
Milman, V.D., Schechtman, G.: An “isomorphic” version of Dvoretzky’s theorem. II. In: Convex Geometric Analysis (Berkeley, CA, 1996). Mathematical Sciences Research Institute Publicaion, vol. 34, pp. 159–164. Cambridge University Press, Cambridge (1999)
Paouris, G.: \(\Psi _2\)-Estimates for linear functionals on zonoids. In: Geometric Aspects of Functional Analysis. Lecture Notes in Mathematics, vol. 1807, pp. 211–222. Springer, Berlin (2003)
Paouris, G.: On the existence of supergaussian directions on convex bodies. Mathematika 58, 389–408 (2012)
Paouris, G.: Small ball probability estimates for log-concave measures. Trans. Am. Math. Soc. 364(1), 287–308 (2012)
Paouris, G., Pivovarov, P.: Intrinsic volumes and linear contractions. Proc. Amer. Math. Soc. 141, 1805–1808 (2013)
Paouris, G., Pivovarov, P.: A probabilistic take on isoperimetric-type inequalities. Adv. Math. 230, 1402–1422 (2012)
Pisier, G.: The volume of convex bodies and Banach space geometry. In: Cambridge Tracts in Mathematics, vol. 94. Cambridge University Press, Cambridge (1989)
Pivovarov, P.: On determinants and the volume of random polytopes in isotropic convex bodies. Geom. Dedicata 149, 45–58 (2010)
Santaló, L.A.: An affine invariant for convex bodies of \(n\)-dimensional space. Portugaliae Math. 8, 155–161 (1949)
Schneider, R.: Convex bodies: the Brunn-Minkowski theory. In: Encyclopedia of Mathematics and Its Applications, vol. 44. Cambridge University Press, Cambridge (1993)
Schneider, R.: Recent results on random polytopes. Boll. Unione Mat. Ital. (9), 1(1), 17–39 (2008)
Tsirelson, B.S.: A geometric approach to maximum likelihood estimation for an infinite-dimensional Gaussian location. II. Teor. Veroyatnost. i Primenen. 30(4), 772–779 (1985)
Vitale, R.A.: On the volume of parallel bodies: a probabilistic derivation of the Steiner formula. Adv. Appl. Probab. 27(1), 97–101 (1995)
Vitale, R.A.: On the Gaussian representation of intrinsic volumes. Statist. Probab. Lett. 78(10), 1246–1249 (2008)
Acknowledgments
Grigoris Paouris is supported by the A. Sloan Foundation, BSF grant 2010288 and the US National Science Foundation, grants DMS-0906150 and CAREER-1151711. Peter Pivovarov was supported by a Postdoctoral Fellowship award from the Natural Sciences and Engineering Research Council of Canada and the Department of Mathematics at Texas A&M University. It is our pleasure to thank D. Cordero-Erausquin, A. Giannopoulos, N. Tomczak-Jaegermann and R. Vitale for useful discussions. We would also like to thank the anonymous referee for careful reading and useful comments which improved the presentation of the paper.
Paouris, G., Pivovarov, P. Small-Ball Probabilities for the Volume of Random Convex Sets. Discrete Comput Geom 49, 601–646 (2013). https://doi.org/10.1007/s00454-013-9492-2