1 Introduction

In this paper we estimate the isotropic constant of some random polytopes (for the definitions and notation see Sect. 2). It is known (see, e.g., [28]) that among all convex bodies in \(\mathbb {R}^n\) the Euclidean ball is the one with the smallest isotropic constant, that is, \(L_K\ge L_{B_2^n}\ge c\), where \(c\) is an absolute positive constant. However, it is still an open problem to determine whether or not there exists an absolute constant \(C\) such that for every convex body \(K\subset \mathbb {R}^n\) one has \(L_K\le C\). The boundedness of \(L_K\) by an absolute constant is equivalent to the long-standing hyperplane conjecture [9]. The best general upper bound known up to now is \(L_K\le Cn^{1/4}\) [22]. This estimate slightly improves (by a logarithmic factor) Bourgain's earlier upper bound [10].

Since the remarkable result of Gluskin [18], random polytopes have been known to provide many examples of convex bodies (and related normed spaces) with a “pathologically bad” behavior of various parameters of a linear and geometric nature (we refer to the survey [27] and references therein; see also recent examples in [19] and [21]). Not surprisingly, they were also a natural candidate for a potential counterexample to the hyperplane conjecture. This was resolved in [23], where it was shown that the convex hull or the symmetric convex hull of independent Gaussian random vectors in \(\mathbb {R}^n\) with high probability has a bounded isotropic constant. Some other distributions for the vertices were also considered. In all of them the vertices had independent coordinates.

Following the ideas in [23], the problem of estimating the isotropic constant of random polytopes was considered in [4], for independent random vectors distributed uniformly on the sphere \(S^{n-1}\), and in [13], for independent random vectors uniformly distributed on an isotropic unconditional convex body. Also in these cases the isotropic constant of random polytopes generated by these vectors is bounded with high probability. One can check that the same method works for independent random vectors uniformly distributed on a \(\psi _2\) isotropic convex body as well.

In this paper we estimate the isotropic constant of a random polytope in an isotropic convex body (see Sect. 2 for the definitions). It is known (see [6, 20] or [5]) that if \(K_N\) is a polytope in \(\mathbb {R}^n\) with \(N\) vertices then

$$\begin{aligned} L_{K_N}\le C\min \left\{ \sqrt{\frac{N}{n}},\log N\right\} , \end{aligned}$$

where \(C\) is an absolute constant.

In [14, 15], the authors provided a lower estimate for the volume of a random polytope \(K_N\) obtained as the convex hull of \(N\le e^{\sqrt{n}}\) random points, namely

$$\begin{aligned} |K_N|^\frac{1}{n}\ge c\sqrt{\frac{\log \frac{N}{n}}{n}}L_K \end{aligned}$$

(see the end of Sect. 2 for more precise formulation and details). On the other hand, the proof of the estimate \(L_{K_N}\le C\log N\) in [5] passes through showing that if \(X_1,\dots , X_N\) are the vertices of \(K_N\), then for any affine transformation \(T\) we have

$$\begin{aligned} L_{K_N}\le \frac{C\max _{1\le i\le N} |TX_i|\log N}{n|TK_N|^\frac{1}{n}}. \end{aligned}$$

Consequently, taking \(T\) to be the identity operator and using the concentration of measure result proved by Paouris [30], we obtain that if \(K_N\) is the convex hull or the symmetric convex hull of \(N\) independent random vectors (for \(n+1\le N\le e^{\sqrt{n}}\)) uniformly distributed on an isotropic convex body, then with high probability

$$\begin{aligned} L_{K_N}\le \frac{C\log N}{\sqrt{\log \frac{N}{n}}}. \end{aligned}$$
(1.1)

Notice that if \(N\ge n^{1+\delta }\), \(\delta \in (0,1)\), this estimate does not exceed \((C/\delta )\sqrt{\log \frac{N}{n}}\). However, \(C/\delta \) tends to infinity as \(\delta \) tends to 0. On the other hand, if \(N\) is proportional to \(n\) the isotropic constant of \(K_N\) is bounded (by an absolute constant), while the upper bound in (1.1) is not. The following theorem closes the gap between \(N\le cn\) and \(N\ge n^{1+\delta }\).

Theorem 1.1

There exist absolute positive constants \(c\), \(C\) such that if \(n\le N\), \(X_1,\dots ,X_N\) are independent random vectors uniformly distributed on an isotropic convex body \(K\), and \(K_N\) is their symmetric convex hull, then

$$\begin{aligned} \mathbb {P}\left( \left\{ L_{K_N}\le C\sqrt{\log \frac{2N}{n}} \right\} \right) \ge 1-\exp {(-c\sqrt{n})}. \end{aligned}$$

Furthermore, we study a natural, more general family of perturbations of random polytopes. Namely, for any \(X_1,\dots , X_N\in \mathbb {R}^n\) and \(y\in \mathbb {R}^N\), we define

$$\begin{aligned} K_{N,y} =\mathrm{conv}\Big \{\pm y_1X_1,\dots ,\pm y_NX_N\Big \}. \end{aligned}$$

As it turns out, Theorem 1.1 is valid for this family as well. To describe this result we need more notation.

For a vector \(y=(y_1, \ldots , y_N)\in \mathbb {R}^N\), let \((y_1^*, \ldots , y_N^*)\) be the decreasing rearrangement of \((|y_1|, \ldots , |y_N|)\). For \(1\le k \le N\) and \(n\le m\le N\) denote

$$\begin{aligned} \Vert y\Vert _{k, 2} {:=} \left( \sum _{i=1}^{k} y_i^{*2}\right) ^{1/2} \quad \mathrm{and}\quad \alpha _{y,m} := \left( \prod _{i=m-n+1}^{m} y_i^{*} \right) ^\frac{1}{n} \ge y_m^{*}. \end{aligned}$$
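For readers who prefer a computational description, the two quantities just defined can be evaluated directly from the decreasing rearrangement. The following Python sketch (ours, purely illustrative; all function names are hypothetical) computes \(\Vert y\Vert _{k,2}\) and \(\alpha _{y,m}\) and illustrates the inequality \(\alpha _{y,m}\ge y_m^*\):

```python
import math

def decreasing_rearrangement(y):
    """The absolute values of the coordinates, sorted in decreasing order."""
    return sorted((abs(t) for t in y), reverse=True)

def k_norm(y, k):
    """||y||_{k,2}: Euclidean norm of the k largest coordinates in absolute value."""
    ystar = decreasing_rearrangement(y)
    return math.sqrt(sum(t * t for t in ystar[:k]))

def alpha(y, m, n):
    """alpha_{y,m}: geometric mean of y*_{m-n+1}, ..., y*_m (1-based indices)."""
    ystar = decreasing_rearrangement(y)
    block = ystar[m - n:m]
    return math.prod(block) ** (1.0 / n)

y = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0, 2.0, 6.0]
n, m = 3, 5
ystar = decreasing_rearrangement(y)
# alpha_{y,m} is a geometric mean of entries that are each >= y*_m:
print(k_norm(y, n), alpha(y, m, n), ystar[m - 1])
```

The inequality \(\alpha _{y,m}\ge y_m^*\) holds because \(\alpha _{y,m}\) is the geometric mean of the \(n\) entries \(y^*_{m-n+1},\dots ,y^*_m\), each of which is at least \(y^*_m\).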

Our main result is the following theorem.

Theorem 1.2

There exist absolute positive constants \(c\) and \(C\) such that the following holds. Let \(n\le N\) and let \(X_1,\dots ,X_N\) be independent random vectors uniformly distributed on an isotropic convex body \(K\). Then for every \(y\in \mathbb {R}^N\) the event

$$\begin{aligned} L_{K_{N,y}}\le C\, \frac{\Vert y\Vert _{n,2}}{\sqrt{n}} \, \inf \left\{ \alpha ^{-1}_{y,m} \, \frac{\log (2N/n)}{\sqrt{\log (2m/n)}} \, \, \mid \, \, n\le m\le N \right\} \end{aligned}$$

occurs with probability greater than \(1-e^{-c\sqrt{n}}\). Moreover, the event

$$\begin{aligned} \sup _{\Vert y\Vert _{n, 2}\le 1 } L_{K_{N,y}}\le C\, \alpha ^{-1}_{y,m_0} \, \,\sqrt{ \frac{ \log (2N/n) }{n}} , \end{aligned}$$

where \(m_0 = N (1- c/\log N)\), occurs with probability greater than \(1-e^{-c\sqrt{n}}\).

Remark 1

Note that if \(n\le N\le C n\) then \(L_K\le C\) for any symmetric polytope \(K\) generated by \(N\) vectors [6].

Remark 2

Clearly Theorem 1.2 applied to the vector \(y=(1,\dots ,1)\) implies Theorem 1.1.

Finally, we apply Theorem 1.2 to the case when the vector describing the perturbation is also random. Such a setting has been recently considered in [7]. In Theorem 4.1 we show that for the Gaussian vector \(G=(g_1, \ldots , g_N)\) in \(\mathbb {R}^N\) with high probability we have

$$\begin{aligned} L_{K_{N,G}}\le C\sqrt{\log \frac{2N}{n}}. \end{aligned}$$

2 Preliminaries

In this paper the letters \(c,C,c_1, C_1, \dots \) will always denote absolute positive constants, whose values may change from line to line. Given two functions \(f\) and \(g\) we say that they are equivalent and write \(f\approx g\) if \(c_1 f \le g \le c_2 f\) for some absolute positive constants \(c_1\), \(c_2\).

By \(|\cdot |\) and \(\left\langle \cdot , \cdot \right\rangle \) we denote the canonical Euclidean norm and the canonical inner product on \(\mathbb {R}^n\). The (unit) Euclidean ball and sphere are denoted by \(B_2^n\) and \(S^{n-1}\). Let \(K\) be a symmetric convex body in \(\mathbb {R}^n\) and let \(\Vert \cdot \Vert _K\) be its associated norm

$$\begin{aligned} \Vert x\Vert _K=\inf \Big \{\lambda >0\,:\,x\in \lambda K\Big \}. \end{aligned}$$

The support function of \(K\) is \(\displaystyle {h_K(y)=\max _{x\in K}\langle x,y\rangle }\) and it is the norm associated with the polar body of \(K\),

$$\begin{aligned} K^\circ =\Big \{y\in \mathbb {R}^n\,:\,\langle x,y\rangle \le 1\, \, \forall x\in K\Big \}. \end{aligned}$$

Given a convex body \(K\), we denote by \(|K|\) its volume. We also denote by \(|E|\) the cardinality of a finite set \(E\). For \(E\subset \{1, \dots , N\}\) the coordinate projection onto \(\mathbb {R}^E\) is denoted by \(P_E\).

We say that a convex body \(K\subseteq \mathbb {R}^n\) is isotropic if it has volume \(|K|=1\), its center of mass is at \(0\) (i.e., \(\int _Kxdx=0\)) and for every \(\theta \in S^{n-1}\) one has

$$\begin{aligned} \int _K\langle x,\theta \rangle ^2dx=L_K^2, \end{aligned}$$

where \(L_K\) is a constant independent of \(\theta \). \(L_K\) is called the isotropic constant of \(K\).
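As a quick numerical illustration (not part of the argument), one can check the isotropy of the cube \([-1/2,1/2]^n\): for every unit direction \(\theta \) one has \(\int _K\langle x,\theta \rangle ^2dx=1/12\), so \(L_K=1/\sqrt{12}\). The Python sketch below (ours; names are hypothetical) estimates the integral by Monte Carlo for two directions:

```python
import math
import random

def second_moment(theta, samples=100_000, seed=0):
    """Monte Carlo estimate of the integral of <x, theta>^2 over the
    unit-volume cube K = [-1/2, 1/2]^n (an isotropic position of the cube)."""
    rng = random.Random(seed)
    n = len(theta)
    total = 0.0
    for _ in range(samples):
        x = [rng.uniform(-0.5, 0.5) for _ in range(n)]
        t = sum(xi * ti for xi, ti in zip(x, theta))
        total += t * t
    return total / samples

# Both directions give (approximately) the same value L_K^2 = 1/12,
# reflecting the fact that the integral does not depend on theta.
e1 = [1.0, 0.0, 0.0]
diag = [1 / math.sqrt(3)] * 3
print(second_moment(e1), second_moment(diag), 1 / 12)
```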

It is known that every convex body has a unique (up to an orthogonal transformation) affine image that is isotropic. This allows us to define the isotropic constant of any convex body as the isotropic constant of its isotropic image. It is also known (see, e.g., [28]) that

$$\begin{aligned} nL_K^2=\inf \left\{ \frac{1}{|TK|^{1+\frac{2}{n}}}\int _{a+TK}|x|^2dx\,:\, T\in GL(n),a\in \mathbb {R}^n\right\} \!. \end{aligned}$$
(2.1)

We need two more results on the distribution of Euclidean norms of random vectors and their sums. Let \(X_i\), \(i\le N\), be independent random vectors uniformly distributed in an isotropic convex body \(K\subset \mathbb {R}^n\). Let \(A\) be a random \(n\times N\) matrix, whose columns are the \(X_i\)’s. For \(m\le N\) denote

$$\begin{aligned} A_m=\sup \Big \{ |Ay|\, \, \mid \, \, y\in B_2^N, \, |\mathrm{supp}\, y|\le m \Big \} \end{aligned}$$

(supp denotes the support of \(y\)). Theorem 3.13 in [1] (note the different normalization) implies the following estimate.

Theorem 2.1

There is an absolute positive constant \(C\) such that for every \(\gamma \ge 1\) and every \(m\le N\)

$$\begin{aligned} \mathbb {P}\left( \left\{ \, A_m\ge C L_K\gamma \sqrt{m} \log \frac{2N}{m} + 6 \max _i {|X_i|}\right\} \right) \le \exp \left( - \gamma \sqrt{m} \log \frac{2N}{m} \right) . \end{aligned}$$

The following theorem is a combination of Paouris’s theorem ([30], see also [3] for a short proof) with the union bound (cf. Lemma 3.1 in [1]).

Theorem 2.2

There exists an absolute positive constant \(C\) such that for any \(N\le \exp (\sqrt{n})\) and for every \(\lambda \ge 1\) one has

$$\begin{aligned} \mathbb {P}\left( \left\{ \max _{i\le N} |X_i| \ge C \lambda \sqrt{n} \, L_K \right\} \right) \le \exp \left( -\lambda \sqrt{n} \right) . \end{aligned}$$

Finally we need the estimate on the volume of the random polytope

$$\begin{aligned} K_{N} =\mathrm{conv}\{\pm X_1,\dots ,\pm X_N\}, \end{aligned}$$

where \(X_i\), \(i\le N\), are independent random vectors uniformly distributed in an isotropic convex body \(K\subset \mathbb {R}^n\). The estimates of the following theorem were observed in [14] (see Fact 3.2, the remarks following it, and Fact 3.3 there; see also [32] and Chapter 11 of [11], where the assumption \(N\ge C n\) was reduced to \(N\ge n\)).

Theorem 2.3

There are absolute positive constants \(c_1\), \(c_2\) such that for \(n\le N\le e^{\sqrt{n}}\),

$$\begin{aligned} \mathbb {P}\left( \left\{ |K_N|^{1/n}\ge c_1 \sqrt{\frac{\log (N/n)}{n}} \, L_K \right\} \right) \ge 1- \exp (-c_2 \sqrt{N}). \end{aligned}$$

In fact this theorem is a combination of three results. The first says that \(K_N\) contains a constant multiple of the centroid body. Recall that for \(p\ge 1\) the \(p\)-centroid body \(Z_p(K)\) was introduced in [26] (with a different normalization) as the convex body whose support function is

$$\begin{aligned} h_{Z_p(K)}(\theta )=\left( \int _K|\langle x,\theta \rangle |^pdx\right) ^\frac{1}{p}. \end{aligned}$$
(2.2)

In [15] (Theorem 1.1) the authors proved that for all parameters \(\beta \in (0, 1/2)\) and \(\gamma >1\) one has the inclusion \(K_N\supset c_1 Z_p(K)\) for \(p=c_2\beta \log (2N/n)\) and \(N\ge c_3 \gamma n\) with probability greater than

$$\begin{aligned} 1 - \exp (-c_4 N^{1-\beta } n^{\beta }) - \mathbb {P}\left( \left\{ \Vert A\Vert > \gamma L_K \sqrt{N} \right\} \right) , \end{aligned}$$

where \(A\) is the random matrix whose columns are \(X_1, \dots , X_N\). The probability that the norm of \(A\) (note that \(\Vert A\Vert =A_N\)) is large was estimated in [1] (combine Theorems 2.1 and 2.2 above). Finally, from results of [24] and [30] the bound

$$\begin{aligned} |Z_p(K)|^{1/n} \approx \sqrt{p/n}\, L_K \end{aligned}$$
(2.3)

follows provided that \(p\le \sqrt{n}\) (it improves the bound provided in [25]).

In Sect. 4 we will use the following standard estimate. For the sake of completeness we provide a proof (cf. Example 10 in [16]).

Lemma 2.4

There exists an absolute positive constant \(C\) such that for \(Cm\le N\) and independent standard Gaussian random variables \(g_1\), ..., \(g_N\) one has

$$\begin{aligned} \mathbb {P}\left( \left\{ g^*_m \le \sqrt{\log (e N/m)}\right\} \right) \le \exp \left( -\frac{\sqrt{mN}}{10\sqrt{\log \left( \frac{eN}{m}\right) }}\right) \end{aligned}$$

Proof

Denote \(\alpha = \sqrt{ \log (eN/m)}\). Note that \(g^*_m \le \alpha \) means in particular that there exists \(\sigma \subset \{1, \ldots , N\}\) of cardinality \(\ell := N-m\) such that \(|g_i| \le \alpha \) for every \(i\in \sigma \). Therefore,

$$\begin{aligned} p:= \mathbb {P}\left( \left\{ g^*_m \le \alpha \right\} \right) \le {N \atopwithdelims ()\ell } \left( \mathbb {P}\left( \left\{ |g_1| \le \alpha \right\} \right) \right) ^{\ell }. \end{aligned}$$

Using that

$$\begin{aligned} \mathbb {P}\left( \left\{ |g_1|\le \alpha \right\} \right) = 1-\sqrt{\frac{2}{\pi }}\, \int _{\alpha }^{\infty } e^{-x^2/2}\, dx \le 1- \frac{e^{-\alpha ^2/2}}{\sqrt{2\pi }\alpha } \le \exp \left( - \frac{e^{-\alpha ^2/2}}{\sqrt{2\pi }\alpha } \right) , \end{aligned}$$

\(\alpha = \sqrt{ \log (eN/m)}\) and \({N \atopwithdelims ()\ell } = {N \atopwithdelims ()m} \le (eN/m)^{m}\) we obtain

$$\begin{aligned} p \le \exp \left( m\, \log (eN/m) - \ell \, \,\frac{\sqrt{\frac{m}{eN}}}{\sqrt{2\pi \log \left( \frac{eN}{m}\right) }} \right) . \end{aligned}$$

As \(Cm\le N\), we have \(\ell = N-m\ge N/2\); for \(C\) large enough the second term in the exponent dominates the first, which implies the result. \(\square \)
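Lemma 2.4 can also be illustrated numerically: when \(N\) is much larger than \(m\), the \(m\)-th largest of \(N\) Gaussian magnitudes exceeds \(\sqrt{\log (eN/m)}\) in the vast majority of samples. The Python sketch below is ours and purely illustrative; the parameters are chosen for speed, not to match the constants of the lemma.

```python
import math
import random

def rare_event_frequency(N, m, trials=200, seed=0):
    """Empirical frequency of the event { g*_m <= sqrt(log(eN/m)) },
    where g*_m is the m-th largest of |g_1|, ..., |g_N|."""
    rng = random.Random(seed)
    alpha = math.sqrt(math.log(math.e * N / m))
    hits = 0
    for _ in range(trials):
        g = sorted((abs(rng.gauss(0.0, 1.0)) for _ in range(N)), reverse=True)
        if g[m - 1] <= alpha:
            hits += 1
    return hits / trials

# With N/m large the event is very unlikely, in line with the lemma.
print(rare_event_frequency(N=2000, m=10))
```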

3 Proofs

In this section we prove Theorem 1.2. The proof consists of two propositions.

Proposition 3.1

Let \(n\le N\le e^{\sqrt{n}}\) and let \(X_1,\dots ,X_N\) be independent random vectors distributed uniformly on an isotropic convex body \(K\). Then the event

$$\begin{aligned} \sup \left\{ \frac{1}{|K_{N,y}|} \int _{K_{N,y}}|x|^2 dx \, \, \mid \, \, \Vert y\Vert _{n, 2}\le 1, \, y_n^*>0 \right\} \le C\, \frac{1}{n} \, L_K^2\log ^2\frac{2N}{n} \end{aligned}$$

occurs with probability greater than \(1- \exp (- \sqrt{n} \, \log (2N/n))\), where \(C\) is an absolute constant.

To prove this proposition we need the following lemma.

Lemma 3.2

Let \(1\le n\le N\) be integers and let \(P=\text{ conv }\{P_1,\dots , P_N\}\) be a non-degenerate symmetric polytope in \(\mathbb {R}^n\). Then

$$\begin{aligned} \frac{1}{|P|}\int _P|x|^2dx\le \frac{1}{(n+1)(n+2)} \sup _E\left( \sum _{i\in E} |P_i|^2+\left| \sum _{i\in E}P_i\right| ^2\right) , \end{aligned}$$

where the supremum is taken over all subsets \(E\subset \{1,\dots , N\}\) of cardinality \(n\).

Proof

We can decompose \(P\) as a disjoint union of simplices (up to sets of measure 0), say \(P=\cup _{i=1}^{\ell } C_i\), where each \(C_i\) is of the form \(\mathrm{conv}\{0,P_{i_1},\dots ,P_{i_n}\}\) for some choice of \(P_{i_j}\)’s. For every such \(C_i\), denote \(F_i:= \mathrm{conv}\{P_{i_1},\dots ,P_{i_n}\}\). Then for any integrable function \(f\) we have

$$\begin{aligned} \int _{C_i}f(x)dx\!=\!\int _{F_i}\int _0^1r^{n-1}f(ry)|\langle y,\nu (y)\rangle |dydr \!=\!d(0,F_i)\int _{F_i}\int _0^1r^{n-1}f(ry)dydr, \end{aligned}$$

where \(\nu (y)\) is the outer normal vector to \(P\) at the point \(y\) and \(d(0,F_i)\) is the distance from the origin to the affine subspace spanned by \(F_i\). Thus, as in [23], for every \(i\le \ell \) one has

$$\begin{aligned} |C_i|=n^{-1} |F_i|d(0,F_i) \quad \quad \text{ and } \quad \quad \int _{C_i}|x|^2dx=\frac{d(0,F_i)}{n+2}\int _{F_i}|y|^2dy. \end{aligned}$$

In particular,

$$\begin{aligned} |P|=\sum _{i=1}^{\ell } |C_i| = \frac{1}{n}\, \sum _{i=1}^{\ell } |F_i|d(0,F_i). \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{|P|}\int _{P}|x|^2dx&= \frac{1}{|P|}\sum _{i=1}^{\ell } \frac{d(0,F_i)}{n+2}\int _{F_i} |y|^2dy \\&\le \frac{1}{|P|} \sum _{i=1}^{\ell } \frac{d(0,F_i) |F_i|}{n+2} \, \sup _{1\le i\le \ell }\frac{1}{|F_i|}\int _{F_i}|y|^2dy\\&\le \frac{n}{n+2}\, \sup _{F} \frac{1}{|F|}\int _F |y|^2dy, \end{aligned}$$

where the supremum is taken over all \(F= \mathrm{conv}\{P_{i_1},\dots ,P_{i_{n}}\}\). Note that any such \(F\) can be presented as \(F=T\Delta ^{n-1}\), where \(\Delta ^{n-1}=\mathrm{conv}\{e_1,\dots ,e_n\}\) denotes the regular \((n-1)\)-dimensional simplex in \(\mathbb {R}^n\) and \(T\) is the matrix whose columns are the vectors \(P_{i_j}\). Since

$$\begin{aligned} \frac{1}{|\Delta ^{n-1}|}\int _{\Delta ^{n-1}}y_iy_jdy=\frac{1+\delta _{ij}}{n(n+1)}, \end{aligned}$$

where \(\delta _{ij}\) is the Kronecker delta, for every \(F=\mathrm{conv}\{P_{i_1},\dots ,P_{i_n}\}\) we obtain

$$\begin{aligned} \frac{1}{|F|}\int _F|y|^2dy=\frac{1}{n(n+1)}\left( \sum _{j=1}^n|P_{i_j}|^2+ \left| \sum _{j=1}^nP_{i_j}\right| ^2\right) . \end{aligned}$$

This implies the desired estimate. \(\square \)
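The second-moment identity for the simplex used above can be checked numerically: a uniform point on \(\Delta ^{n-1}\) is a Dirichlet\((1,\dots ,1)\) vector, obtainable as normalized independent exponentials. The following Python sketch (ours; all names hypothetical) estimates \(\mathbb {E}[y_iy_j]\) and compares it with \((1+\delta _{ij})/(n(n+1))\):

```python
import random

def uniform_on_simplex(n, rng):
    """Uniform point on conv{e_1, ..., e_n}: normalized independent exponentials."""
    e = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return [t / s for t in e]

def simplex_moments(n, samples=200_000, seed=1):
    """Monte Carlo estimates of E[y_1^2] and E[y_1 y_2] for y uniform on the simplex."""
    rng = random.Random(seed)
    sq = cross = 0.0
    for _ in range(samples):
        y = uniform_on_simplex(n, rng)
        sq += y[0] * y[0]
        cross += y[0] * y[1]
    return sq / samples, cross / samples

n = 4
sq, cross = simplex_moments(n)
# Predicted values (1 + delta_ij) / (n (n + 1)):
print(sq, 2 / (n * (n + 1)))      # diagonal case, near 0.1
print(cross, 1 / (n * (n + 1)))   # off-diagonal case, near 0.05
```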

Proof of Proposition 3.1

Note that if \(y^*_n>0\) then the cardinality of the support of \(y\) is at least \(n\), so \(K_{N,y}\) is non-degenerate with probability one. Therefore, with probability one \(K_{N,y}\) is non-degenerate simultaneously for all \(y\) in a countable dense subset of \(B_0:=\{ y\in \mathbb {R}^N\, \, \mid \, \, \Vert y\Vert _{n, 2}\le 1, \, y^*_n >0 \}\). Clearly, the supremum in question is the same over \(y\in B_0\) and over such a dense set.

Now, by Lemma 3.2 we have that \(\displaystyle {\sup _{y\in B_0}|K_{N,y}|^{-1} \int _{K_{N,y}}|x|^2dx}\) is bounded from above by

$$\begin{aligned} \frac{1}{(n+1)(n+2)}\sup _{y\in B_0} \sup _{|E|=n}\left( \sum _{i\in E}|y_i X_i|^2+\left| \sum _{i\in E} y_iX_i\right| ^2\right) \end{aligned}$$

(formally, we should additionally take the supremum over \(\varepsilon _i=\pm 1\) and have \(y_i \varepsilon _i X_i\) in the formula under the suprema, but, since \(B_0\) is unconditional, the supremum over the \(\varepsilon _i\)'s can be omitted).

Note that

$$\begin{aligned} \sum _{i\in E}|y_i X_i|^2\le \Vert y\Vert ^2 _{n, 2}\, \max _{i\le N} |X_i|^2 \end{aligned}$$

and

$$\begin{aligned} \left| \sum _{i\in E} y_i X_i\right| = |A P_E y |\le A_n \, \Vert y\Vert _{n, 2}, \end{aligned}$$

where \(A\) is the matrix whose columns are \(X_1, \dots , X_N\). Therefore, applying Theorems 2.1 and 2.2 (with \(m=n\) and \(\lambda =2\log (2N/n)\)) we obtain that

$$\begin{aligned} \sup _{y\in B_0} \sup _{|E|=n}\left( \sum _{i\in E}|y_i X_i|^2+\left| \sum _{i\in E} y_iX_i\right| ^2\right) \le C\, n L_K^2\log ^2\frac{2N}{n} \end{aligned}$$

with probability greater than \(1- \exp (- \sqrt{n} \, \log (2N/n))\). \(\square \)

Proposition 3.3

There exist absolute positive constants \(c_1\), \(c_2\) such that if \(n\le N\le e^{\sqrt{n}}\) and \(X_1,\dots ,X_N\) are independent random vectors distributed uniformly on an isotropic convex body \(K\), then for every \(y\in \mathbb {R}^N\),

$$\begin{aligned} \mathbb {P}\left( \left\{ \forall m\ge n : \quad |K_{N,y}|^\frac{1}{n}\ge c_1 \alpha _{y,m} L_K\sqrt{\frac{\log (2m/n)}{n}} \right\} \right) \ge 1-\exp {(-c_2\sqrt{n})}. \end{aligned}$$

Moreover, the event

$$\begin{aligned} \forall m\ge N-\frac{c_1 N}{\log N} \, \, \, \, \, \, \forall y\in \mathbb {R}^N \quad \quad |K_{N,y}|^\frac{1}{n}\ge c_1\alpha _{y,m} L_K\sqrt{\frac{\log (2m/n)}{n}} \end{aligned}$$

occurs with probability greater than \(1-\exp {(-c_2\sqrt{N})}\).

The probability estimates in Proposition 3.3 are based on an estimate of corresponding probability for a fixed \(y\) and the union bound. We start with the following lemma.

Lemma 3.4

There exist absolute positive constants \(c_1\), \(c_2\) such that the following holds. Let \(n\le m\le N\le e^{\sqrt{n}}\) and let \(X_1,\dots ,X_N\) be independent random vectors distributed uniformly on an isotropic convex body \(K\). Then for every \(y\in \mathbb {R}^N\) with \(y_m^*>0\) there exists \(v=v(y)\in \mathbb {R}^N\) having \(0/1\) coordinates with exactly \(m\) ones such that

$$\begin{aligned} |K_{N,y}| \ge \alpha _{y,m}^n |K_{N,v}| \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}\left( \left\{ |K_{N,v}|^\frac{1}{n}\ge c_1 \, L_K\sqrt{\frac{\log (2m/n)}{n}} \right\} \right) \ge 1-\exp {\left( -c_2\sqrt{m}\right) }. \end{aligned}$$

Proof

Fix \(y\in \mathbb {R}^N\) with \(y^*_m>0\) (i.e., \(|\mathrm{supp}\, y|\ge m\)). Let \(i_1,\dots ,i_m\) be the indices such that \(y_{i_j}=y_j^*\) and let \(v=v(y)\in \mathbb {R}^N\) be the vector with \(v_{k}=1\) if \(k=i_j\) and 0 otherwise. Decompose the polytope \(K_{N,v}\) into a disjoint union of simplices (up to a set of zero measure)

$$\begin{aligned} K_{N,v} = \bigcup _{k=1}^{\ell } C_k, \end{aligned}$$

where \(C_k=\mathrm{conv}\{0,\varepsilon _{k_1}X_{k_1},\dots ,\varepsilon _{k_n}X_{k_n}\}\) for some \(\varepsilon _{k_j}=\pm 1\) and some vectors \(X_{k_j}\), given by the simplicial decomposition of the facets of \(K_{N,v}\). Denote

$$\begin{aligned} C_{k, y} = \mathrm{conv}\Big \{0,\varepsilon _{k_1} |y_{k_1}| X_{k_1},\dots , \varepsilon _{k_n} |y_{k_n} | X_{k_n}\Big \} \subset K_{N,y}. \end{aligned}$$

Clearly, \(C_{k, y}\)’s are also disjoint up to a set of zero measure and

$$\begin{aligned} |C_k|&= |\det \left( \varepsilon _{k_1}X_{k_1},\dots , \varepsilon _{k_n}X_{k_n}\right) |\, |\mathrm{conv}\{0,e_1,\dots ,e_n\}| \\&\le \frac{1}{\alpha _{y,m}^n} |\det \left( \varepsilon _{k_1}|y_{k_1}| X_{k_1}, \dots , \varepsilon _{k_n}|y_{k_n}|X_{k_n}\right) | |\mathrm{conv}\{0,e_1,\dots ,e_n\}|. \end{aligned}$$

This implies

$$\begin{aligned} |K_{N,v}| = \sum _{k=1}^{\ell } |C_k| \le \alpha _{y,m}^{-n} \sum _{k=1}^{\ell } |C_{k,y}| \le \alpha _{y,m}^{-n} |K_{N,y}|. \end{aligned}$$

This proves the first estimate. The second one follows by Theorem 2.3, since \(K_{N,v}\) is a symmetric random polytope in an isotropic convex body generated by \(m\ge n\) random points. \(\square \)

Proof of Proposition 3.3

Without loss of generality we only consider \(y\)’s satisfying \(y_n^*>0\) (otherwise estimates are trivial).

The first estimate follows from Lemma 3.4 and the union bound, since

$$\begin{aligned} \sum _{m\ge n} e^{-c_2 \sqrt{m}} \le e^{-c_0\sqrt{n}} \end{aligned}$$
(3.1)

for an absolute positive constant \(c_0\).
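The tail bound (3.1) is elementary (compare the sum with \(\int _n^{\infty }e^{-c\sqrt{x}}\,dx\)); a direct numerical check, with the illustrative choices \(c=1\) and \(c_0=1/2\), is below. The Python sketch is ours and not part of the proof.

```python
import math

def tail_sum(n, c, upper=500_000):
    """Truncated evaluation of sum_{m >= n} exp(-c sqrt(m));
    for these parameters the discarded tail is numerically zero."""
    return sum(math.exp(-c * math.sqrt(m)) for m in range(n, upper))

n, c = 100, 1.0
s = tail_sum(n, c)
print(s, math.exp(-0.5 * math.sqrt(n)))  # the sum is the smaller of the two
```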

To prove the second bound note that the set \(\{v(y)\}_{y\in \mathbb {R}^N}\), where \(v(y)\) is from Lemma 3.4, has cardinality at most \({N \atopwithdelims ()m}\), and that, denoting \(k=N-m\),

$$\begin{aligned} {N \atopwithdelims ()m} \exp {(-c_2\sqrt{m})} \le \exp (-c_2\sqrt{m} + k\log (eN/k))\le \exp {(-c_2\sqrt{m} /2)}, \end{aligned}$$

provided that \(k\le c\sqrt{N}/\log N\). Lemma 3.4 and the union bound imply

$$\begin{aligned} \mathbb {P}\left( \left\{ \forall y\in \mathbb {R}^N : \, \, \, |K_{N,y}|^\frac{1}{n}\ge c_1 \, \alpha _{y,m}\, L_K\sqrt{\frac{\log (2m/n)}{n}}\right\} \right) \ge 1-\exp {\left( -c_2\sqrt{m}/2\right) }. \end{aligned}$$

The result follows by the union bound and (3.1). \(\square \)

Proof of Theorem 1.2

For \(n\le N \le e^{\sqrt{N}}\, \) Propositions 3.1 and 3.3 imply the result, since, by (2.1),

$$\begin{aligned} n L_{K_{N,y}}^2\le \frac{1}{|K_{N,y}|^{1+\frac{2}{n}}} \int _{K_{N,y}}|x|^2 dx. \end{aligned}$$

For \(N\ge e^{\sqrt{n}}\) the theorem follows from the general estimate \(L_K\le Cn^{1/4}\) for any \(n\)-dimensional convex body [22]. \(\square \)

4 Random Perturbations of Random Polytopes

In this section \(G=(g_1,\dots , g_N)\) denotes a standard Gaussian random vector in \(\mathbb {R}^N\), independent of any other random variables. In the following theorem, which is a consequence of Theorem 1.2 and Lemma 2.4, we estimate \(L_{K_{N, G}}\).

Theorem 4.1

Let \(n\le N\). Let \(X_1,\dots , X_N\) be independent copies of a random vector uniformly distributed on an isotropic convex body. Then

$$\begin{aligned} \mathbb {P}_{G,X_1,\dots ,X_N} \left( \left\{ L_{K_{N,G}}\le c_1\sqrt{\log \frac{2N}{n}}\right\} \right) \ge 1-\exp { \left( -c_2\sqrt{n} \right) }, \end{aligned}$$

where \(c_1\) and \(c_2\) are absolute positive constants.

Proof

Without loss of generality we assume that \(N\ge Cn\) for a sufficiently large absolute constant \(C\) (see Remark 1 following Theorem 1.2). It is well-known (and can be directly calculated) that for the Gaussian vector \(G=(g_1,\dots ,g_N)\) one has

$$\begin{aligned} {\mathbb {E}} \Vert G\Vert _{n,2} \approx \sqrt{n\, \log {\frac{N}{n}}}. \end{aligned}$$
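(The direct calculation behind this equivalence is a standard computation with Gaussian order statistics; the following Monte Carlo sketch in Python, ours and purely illustrative, confirms that the two sides agree up to absolute constants for one choice of parameters.)

```python
import math
import random

def avg_top_norm(n, N, trials=300, seed=2):
    """Monte Carlo estimate of E ||G||_{n,2}: the Euclidean norm of the n
    largest (in absolute value) coordinates of a standard Gaussian in R^N."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        g = sorted((abs(rng.gauss(0.0, 1.0)) for _ in range(N)), reverse=True)
        total += math.sqrt(sum(t * t for t in g[:n]))
    return total / trials

n, N = 10, 1000
est = avg_top_norm(n, N)
pred = math.sqrt(n * math.log(N / n))
print(est, pred)  # same order of magnitude
```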

Using concentration (see, e.g., Theorem 1.5 in [31]), we observe that for some absolute constant \(C_1>0\),

$$\begin{aligned} \mathbb {P}_G\left( \Vert G\Vert _{n,2}\ge C_1\, \sqrt{n\, \log \frac{N}{n}} \right) \le \exp {(- n \log ({N}/{n}))}. \end{aligned}$$

By Lemma 2.4 with \(m=\left\lceil \sqrt{n N}\right\rceil \) we have

$$\begin{aligned} \mathbb {P}\left( \left\{ g^*_m \le \sqrt{\log (\sqrt{N/n})}\right\} \right) \le \exp \left( - \frac{N^{3/4} n^{1/4}}{10\sqrt{\log \left( e\sqrt{\frac{N}{n}}\right) }}\right) . \end{aligned}$$

Since \(\alpha _{G,m}\ge g_m^*\), Theorem 1.2 with \(m=\left\lceil \sqrt{n N}\right\rceil \) in the infimum implies

$$\begin{aligned} L_{K_{N,G}}\le C_2\, \frac{\Vert G\Vert _{n,2}}{\sqrt{n}} \, \alpha ^{-1}_{G,m} \, \frac{\log (2N/n)}{\sqrt{\log (2m/n)}} \le C_3 \, \sqrt{\log \frac{N}{n}} \end{aligned}$$

with probability at least \(1-\exp {(-c\sqrt{n})} - \exp (-n) - \exp \left( - c^\prime \frac{N^{3/4} n^{1/4}}{\sqrt{\log \frac{eN}{n}}}\right) \). This implies the desired result. \(\square \)

5 Concluding Remarks

In this section we show that under additional (strong) assumption that random vectors are \(\psi _2\) vectors, the polytope \(K_{N, G}\) contains \(Z_p\) for an appropriate \(p\) (recall that \(G\) is a standard Gaussian random vector in \(\mathbb {R}^N\) and the \(Z_p\) body was defined in (2.2)).

We first recall the definition of \(\psi _{\alpha }\) norm. For a real random variable \(z\) and \(\alpha \in [1,2]\) we define the \(\psi _{\alpha }\)-norm by

$$\begin{aligned} \Vert z\Vert _{\psi _{\alpha }}=\inf \left\{ C>0\,\mid \, \, \mathbb {E}\exp \left( \left( |z|/C\right) ^{\alpha }\right) \le 2\right\} . \end{aligned}$$

It is well known (see, e.g., [12]) that the condition \(\Vert z\Vert _{\psi _{\alpha }}\le c_1\) is equivalent to the condition

$$\begin{aligned} \forall p>1 \, : \quad \left( {\mathbb {E}} |z|^p \right) ^{1/p} \le c_2\, p^{1/\alpha } \, {\mathbb {E}} |z|. \end{aligned}$$
(5.1)

Let \(X\) be a centered random vector in \(\mathbb {R}^n\) and \(\alpha >0\). We say that \(X\) is \(\psi _{\alpha }\) or a \(\psi _{\alpha }\) vector, if

$$\begin{aligned} \Vert X\Vert _{\psi _{\alpha }} :=\sup _{y\in S^{n-1}} \Vert \left\langle X, y \right\rangle \Vert _{\psi _{\alpha }} <\infty . \end{aligned}$$
(5.2)

We also denote

$$\begin{aligned} \gamma _p := \left( \mathbb {E}|g_1|^p\right) ^{1/p} \approx \sqrt{p}. \end{aligned}$$
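The equivalence \(\gamma _p\approx \sqrt{p}\) follows from the closed form \(\mathbb {E}|g_1|^p=2^{p/2}\Gamma ((p+1)/2)/\sqrt{\pi }\); the short Python check below (ours, illustrative) evaluates the ratio \(\gamma _p/\sqrt{p}\) for a few values of \(p\):

```python
import math

def gamma_p(p):
    """(E|g|^p)^(1/p) for a standard Gaussian g, via the closed-form moment
    E|g|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    moment = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
    return moment ** (1 / p)

# The ratio gamma_p / sqrt(p) stays between absolute constants
# (it tends to 1/sqrt(e) as p grows):
for p in (1, 2, 4, 8, 16, 64):
    print(p, round(gamma_p(p), 3), round(gamma_p(p) / math.sqrt(p), 3))
```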

Proposition 5.1

There are absolute positive constants \(C\), \(c_1\) and \(c_2\) such that the following holds. Let \(\beta >2\) and \(N\ge C (\log \beta )^2 \, n \log (3n)/ (\log \log (3n))\). Let \(X_1,\dots , X_N\) be independent copies of a random vector uniformly distributed on an isotropic convex body \(K\subset \mathbb {R}^n\). Assume that \(\Vert X_1\Vert _{\psi _{2}} \le \beta \, L_K \sqrt{\log \frac{N}{n}}\). Then

$$\begin{aligned} \mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ K_{N,G}\supseteq c_1\sqrt{\log \frac{N}{n}}\, Z_{\log (N/n)}(K)\right\} \right) \ge 1-\alpha , \end{aligned}$$

where \(\alpha =\exp {(-c_2\sqrt{N})}\) for \(N \ge (n/\log (3n))^2\) and \(\alpha =(N/n)^{-c_2 N/n}\) otherwise.

Remark 1

We would like to note that for \(N\ge n^2\) this proposition (with a slightly worse probability) was proved in [7] (see Lemma 4 there) without any assumption on the \(\psi _2\)-norm.

Remark 2

Proposition 5.1 together with Proposition 3.1 and volume estimate (2.3) can be used to obtain the estimate of Theorem 4.1 (under additional \(\psi _2\) assumption).

Our proof is very similar to the one given in [7]. We provide it for the sake of completeness. The main new ingredient is the following lemma, which is needed to estimate the norm of the matrix \(A\) in the proof of Proposition 5.1.

Lemma 5.2

Let \(g\) be a standard Gaussian random variable. Let \(X\) be uniformly distributed on an isotropic convex body \(K\subset \mathbb {R}^n\). Assume that \(X\) is a \(\psi _{2}\) random vector and that \(\Vert X\Vert _{\psi _{2}}=\psi L_K\). Then for every \(p\ge 1\),

$$\begin{aligned} \left( \mathbb {E}|g X|^p\right) ^{1/p} \le C L_K \left( \sqrt{p n} + p \psi \right) , \end{aligned}$$

where \(C\) is an absolute positive constant.

Proof

We use the following form of Paouris's theorem, which may be deduced directly from Paouris's work [30] (as formulated here, it first appeared as Theorem 2 in [2]; a short proof was given in [3]):

$$\begin{aligned} \left( \mathbb {E}| X|^p\right) ^{1/p} \le C \left( \mathbb {E}| X| + \sup _{y\in S^{n-1}} \left( \mathbb {E}|\langle X, y \rangle |^p\right) ^{1/p} \right) . \end{aligned}$$

Thus, using the assumptions on \(X\), we obtain

$$\begin{aligned} \left( \mathbb {E}|g X|^p\right) ^{1/p} \le \gamma _p\, C \left( L_K \sqrt{n} + \sqrt{p} \, L_K \, \psi \right) , \end{aligned}$$

which implies the result. \(\square \)

Proof of Proposition 5.1

First note that \(g_i X_i\), \(i\le N\), have \(\psi _1\) norm bounded by \(c_1 \, \beta \, L_K\, \sqrt{\log (N/n)}\). Indeed, for any \(p\ge 1\) and \(\theta \in S^{n-1}\), one has

$$\begin{aligned} \left( \mathbb {E}|\langle g_iX_i,\theta \rangle |^p\right) ^{1/p} = \gamma _p \left( \mathbb {E}|\langle X_i,\theta \rangle |^p\right) ^{1/p} \le c_2 p \, \beta \, L_K\, \sqrt{\log \frac{N}{n}} \, \mathbb {E}| \langle X_i,\theta \rangle |. \end{aligned}$$

Denote by \(A\) the \(n\times N\) random matrix whose columns are the vectors \(g_i X_i\). By Theorem 3.13 in [1] (cf. Theorem 2.1),

$$\begin{aligned} \mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \Vert A\Vert \ge c_3 \beta L_K \, \sqrt{N \, \log \frac{N}{n}} + 6 \max _{i\le N} {|g_i X_i|} \right\} \right) \le \exp \left( - 2 \sqrt{N} \right) . \end{aligned}$$

Now we estimate \(\max _{i\le N} {|g_i X_i|}\). If \(N\ge (n/\ln (3n))^2\) we choose \(p= 4\sqrt{N}\), otherwise \(p=4 (N/n)\ln (N/n)\). In both cases,

$$\begin{aligned} \max \left\{ \sqrt{pn}, p\sqrt{\ln (N/n)} \right\} \le 4 \sqrt{N \ln (N/n)}. \end{aligned}$$

Then by Lemma 5.2 and the Chebyshev inequality we have for every \(i\le N\),

$$\begin{aligned} \mathbb {P}\left( \left\{ |g_i X_i| \ge 8 e\, C\, \beta \, L_K\, \sqrt{N \ln (N/n)}\right\} \right) \le \frac{\mathbb {E}|g_i X_i|^p}{ \left( 8 e\, C\, \beta \, L_K\, \sqrt{N \ln (N/n)} \right) ^{p}} \le e^{-p}. \end{aligned}$$

The union bound implies

$$\begin{aligned} \mathbb {P}\left( \left\{ \max _{i\le N} {|g_i X_i|} \ge 8 e\, C\, \beta \, L_K\, \sqrt{N \ln (N/n)}\right\} \right) \le N e^{-p } \le e^{-p/2} \end{aligned}$$

due to the conditions on \(N\) and \(n\). This implies

$$\begin{aligned} \mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \Vert A\Vert \ge c_5 \beta L_K\, \sqrt{N \, \log \frac{N}{n}} \right\} \right) \le \alpha _0, \end{aligned}$$
(5.3)

where \(\alpha _0= e^{- \sqrt{N} }\) for \(N\ge (n/\ln (3n))^2\) and \(\alpha _0= \exp (- (N/n)\ln (N/n))\) otherwise.

On the other hand, for every \(\sigma \subseteq \{1,\dots , N\}\), \(q\ge 1\) and \(\theta \in S^{n-1}\), by the Paley–Zygmund inequality,

$$\begin{aligned}&\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \max _{i\in \sigma }|\langle g_iX_i,\theta \rangle |\le \frac{1}{2} \left( \mathbb {E}|g_1|^q\right) ^\frac{1}{q} \left( \mathbb {E}|\langle X_1,\theta \rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad =\prod _{i\in \sigma }\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ |\langle g_iX_i,\theta \rangle |\le \frac{1}{2} \left( \mathbb {E}|g_1|^q\right) ^\frac{1}{q}\left( \mathbb {E}|\langle X_1,\theta \rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad \le \left( 1-\left( 1-\left( \frac{1}{2}\right) ^q\right) ^2\frac{\left( \mathbb {E}|g_1|^q\mathbb {E}|\langle X_1,\theta \rangle |^q\right) ^2}{\mathbb {E}|g_1|^{2q}\mathbb {E}|\langle X_1,\theta \rangle |^{2q}}\right) ^{|\sigma |}. \end{aligned}$$

Since \(\gamma _p \approx \sqrt{p}\) and, by Borell’s lemma ([8], see also Appendix III in [29]),

$$\begin{aligned} \mathbb {E}|\langle X_1,\theta \rangle |^{2q}\le c_6^q \, \left( \mathbb {E}|\langle X_1,\theta \rangle |^q\right) ^2, \end{aligned}$$

the quantity above is bounded by

$$\begin{aligned} \left( 1-\frac{1}{4C^q}\right) ^{|\sigma |}\le \exp {\left( -\frac{|\sigma |}{4C^q}\right) }. \end{aligned}$$
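Here one may take \(C=c\, c_6\), where \(c\) is the absolute constant in the Gaussian moment comparison: since \(\gamma _{2q}\approx \sqrt{2q}\) and \(\gamma _q\approx \sqrt{q}\),

$$\begin{aligned} \mathbb {E}|g_1|^{2q}=\gamma _{2q}^{2q}\le c^q\, \gamma _q^{2q}=c^q \left( \mathbb {E}|g_1|^{q}\right) ^2, \end{aligned}$$

and combining this with Borell’s bound for \(X_1\) and with \(\left( 1-(1/2)^q\right) ^2\ge 9/16\) (valid as \(q\ge 2\)) shows that each factor in the product above is at most \(1-\frac{1}{4C^q}\).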

Set \(m=\left[ N/n\right] \). Let \(\sigma _1,\dots ,\sigma _n\) be a partition of \(\{1,\dots , N\}\) with \(|\sigma _i|\ge m\) for every \(i\), and let \(\Vert \cdot \Vert _0\) be the norm

$$\begin{aligned} \Vert u\Vert _0=\frac{1}{n}\sum _{i=1}^n \max _{j\in \sigma _i}|u_j|. \end{aligned}$$

Note that \(\Vert \cdot \Vert _0 \le n^{-1/2} |\cdot |\) (by the Cauchy–Schwarz inequality). Since for all \(1\le i\le n\) and every \(z\in \mathbb {R}^n\)

$$\begin{aligned} h_{K_{N,G}}(z)=\max _{1\le j\le N}|\langle g_jX_j,z\rangle |\ge \max _{j\in \sigma _i}|\langle g_jX_j,z\rangle |, \end{aligned}$$

we get, for every \(z\in \mathbb {R}^n\),

$$\begin{aligned} h_{K_{N,G}}(z)\ge \Vert A^t z\Vert _0, \end{aligned}$$

where \(A^t\) is the transpose of \(A\).

Clearly, if \(z\in \mathbb {R}^n\) satisfies \(\Vert A^t z\Vert _0\le \frac{1}{4}\gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\), then there exists a set \(I\subseteq \{1,\dots ,n\}\) with \(|I|\ge \frac{n}{2}\) such that

$$\begin{aligned} \max _{j\in \sigma _i}|\langle g_jX_j,z\rangle |\le \frac{1}{2}\gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q} \end{aligned}$$

for every \(i\in I\); indeed, otherwise more than \(n/2\) of the \(n\) terms in the average defining \(\Vert A^t z\Vert _0\) would exceed \(\frac{1}{2}\gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\), forcing \(\Vert A^t z\Vert _0 > \frac{1}{4}\gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\). Thus, for every \(z\in \mathbb {R}^n\),

$$\begin{aligned}&\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \Vert A^t z\Vert _0\le \frac{1}{4} \gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad \le \sum _{|I|=\lceil \frac{n}{2}\rceil }\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \,\forall i\in I\, : \, \, \max _{j\in \sigma _i} |\langle g_jX_j,z\rangle |\le \frac{1}{2} \gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad \le \sum _{|I|=\lceil \frac{n}{2}\rceil }\prod _{i\in I}\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \max _{j\in \sigma _i} |\langle g_jX_j,z\rangle |\le \frac{1}{2} \gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad \le \sum _{|I|=\lceil \frac{n}{2}\rceil }\prod _{i\in I}\exp {\left( -\frac{|\sigma _i|}{4C^q}\right) } \le 2^n \exp {\left( -\frac{n m }{4C^q}\right) } \le 2^n \exp {\left( -\frac{N }{8C^q}\right) } \\&\quad \le \exp {\left( -\frac{\sqrt{N n} }{16}\right) } \end{aligned}$$

where we have chosen \(q:=(1/2) \log (N/n)\) and used that \(N>125 n\).
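The arithmetic behind the last steps of the chain above can be sketched as follows, assuming for simplicity of exposition that \(C\le e\), so that \(C^q=e^{q\ln C}\le \sqrt{N/n}\):

$$\begin{aligned} nm\ge n\left( \frac{N}{n}-1\right) =N-n\ge \frac{N}{2} \qquad \text {and} \qquad \frac{N}{8C^q}\ge \frac{N}{8\sqrt{N/n}}=\frac{\sqrt{Nn}}{8}, \end{aligned}$$

and \(2^n \exp (-\sqrt{Nn}/8)\le \exp (-\sqrt{Nn}/16)\) reduces to \(n\ln 2\le \sqrt{Nn}/16\), that is to \(N\ge 256(\ln 2)^2\, n\approx 123\, n\), which holds since \(N>125 n\).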

Now, let

$$\begin{aligned} S=\left\{ z\in \mathbb {R}^n\,\mid \,\, \frac{1}{2}\gamma _{q}\left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}=1\right\} \end{aligned}$$

and let \(U\subset S\) be a \(\delta \)-net (in the metric defined by \(S\)) with cardinality \(|U|\le \left( \frac{3}{\delta }\right) ^n\), i.e., for every \(z\in S\) there is \(u\in U\) such that \(\frac{1}{2}\gamma _q\left( \mathbb {E}|\langle X_1,z-u\rangle |^q\right) ^\frac{1}{q}\le \delta \). Then

$$\begin{aligned} \mathbb {P}\left( \left\{ \exists u\in U\,:\,\Vert A^t u\Vert _0\le \frac{1}{2}\right\} \right) \le \exp {\left( n \log \frac{3}{\delta } - \sqrt{N n}/16\right) }. \end{aligned}$$

By isotropicity we have \(\left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\ge \left( \mathbb {E}|\langle X_1,z\rangle |^2\right) ^\frac{1}{2}= L_K |z|\), because we have chosen \(q=(1/2) \log (N/n)>2\). Thus, assuming \(\Vert A^t \Vert =\Vert A\Vert \le c_5 \beta L_K \sqrt{N\, \log \frac{N}{n}}\), we have

$$\begin{aligned} \Vert A^t z\Vert _0\le \frac{1}{\sqrt{n}}|A^t z|&\le c_5 \beta L_K\, \sqrt{\frac{N}{n}}\, \sqrt{\log \frac{N}{n}}\, |z|\le c_5 \beta \sqrt{\frac{N}{n}}\sqrt{2 q }\, \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\\&\le c_6 \beta \, \gamma _q\, \sqrt{\frac{N}{n}} \, \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}. \end{aligned}$$

Therefore, if \(u\in U\) approximates \(z\in S\), that is if \(\frac{1}{2}\gamma _{q}\left( \mathbb {E}|\langle X_1,z-u\rangle |^q\right) ^\frac{1}{q}\le \delta \), then \(u\) also satisfies

$$\begin{aligned} \Vert A^t u\Vert _0\le \Vert A^t z\Vert _0+ c_7\beta \, \sqrt{\frac{N}{n}}\, \delta . \end{aligned}$$
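Indeed, the chain of inequalities above applies to any vector, in particular to \(z-u\); since \(\frac{1}{2}\gamma _q\left( \mathbb {E}|\langle X_1,z-u\rangle |^q\right) ^\frac{1}{q}\le \delta \), this gives (with, say, \(c_7=2c_6\))

$$\begin{aligned} \Vert A^t u\Vert _0\le \Vert A^t z\Vert _0+\Vert A^t (z-u)\Vert _0\le \Vert A^t z\Vert _0+c_6 \beta \, \gamma _q \, \sqrt{\frac{N}{n}}\, \left( \mathbb {E}|\langle X_1,z-u\rangle |^q\right) ^\frac{1}{q}\le \Vert A^t z\Vert _0+2 c_6 \beta \, \sqrt{\frac{N}{n}}\, \delta . \end{aligned}$$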

Choosing \(\delta =\sqrt{n}/(4 \beta c_7 \sqrt{N})\) and denoting by \(\Omega _0\) the event

$$\begin{aligned} \Omega _0 := \left\{ \Vert A\Vert \le c_5\beta L_K \sqrt{N\, \log \frac{N}{n}} \right\} \end{aligned}$$

we obtain

$$\begin{aligned}&\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \omega \in \Omega _0\, \mid \, \, \exists z\in \mathbb {R}^n\,:\, \Vert A^t z\Vert _0\le \frac{1}{8} \gamma _q \left( \mathbb {E}|\langle X_1,z\rangle |^q\right) ^\frac{1}{q}\right\} \right) \\&\quad =\mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \omega \in \Omega _0\, \mid \, \, \exists z\in S\,:\, \Vert A^t z\Vert _0\le \frac{1}{4}\right\} \right) \\&\quad \le \mathbb {P}_{G,X_1,\dots ,X_N}\left( \left\{ \omega \in \Omega _0\, \mid \, \, \exists u\in U\,:\, \Vert A^t u\Vert _0\le \frac{1}{2}\right\} \right) \\&\quad \le \exp {\left( n \log \frac{12 c_7\, \beta \, \sqrt{N}}{\sqrt{n}} - \sqrt{N n}/16\right) } \le \exp {\left( - \sqrt{N n}/20\right) }, \end{aligned}$$

provided \(N\ge C (\log \beta )^2 n\) for a large enough absolute constant \(C\) (writing \(t=\sqrt{N/n}\), the last inequality reduces to \(\log (12 c_7 \beta t)\le t/80\)). Since \(h_{K_{N, G}}(z) =\Vert A^t z \Vert _{\infty } \ge \Vert A^t z\Vert _0\), this, together with (5.3) and the definition (2.2), implies that with probability at least \(1-\alpha _0 - \exp {\left( -\sqrt{N n}/20\right) }\),

$$\begin{aligned} K_{N,G} \supseteq \frac{1}{8}\, \gamma _q \, Z_q(K) \supseteq c \sqrt{\log \frac{N}{n}}\, Z_{\log (N/n)} (K). \end{aligned}$$

\(\square \)