Abstract
This is a somewhat expanded form of a four-hour course given, with small variations, first at the educational workshop Probabilistic methods in geometry, Bedlewo, Poland, July 6–12, 2008, and a few weeks later at the Summer school on Fourier analytic and probabilistic methods in geometric functional analysis and convexity, Kent, Ohio, August 13–20, 2008. The main part of these notes gives yet another exposition of Dvoretzky's theorem on Euclidean sections of convex bodies, with a proof based on Milman's. This material is by now quite standard. Towards the end of these notes we discuss issues related to fine estimates in Dvoretzky's theorem, and there are some results that didn't appear in print before. In particular, there is an exposition of an unpublished result of Figiel (Claim 1) which gives an upper bound on the possible dependence on \(\epsilon \) in Milman's theorem. We would like to thank Tadek Figiel for allowing us to include it here. There is also a better version of the proof of one of the results from Schechtman (Adv. Math. 200(1), 125–135, 2006), giving a lower bound on the dependence on \(\epsilon \) in Dvoretzky's theorem. The improvement is in the statement and proof of Proposition 2 here, which is a stronger version of the corresponding Corollary 1 in that paper.
Mathematics Subject Classification (2010): 46B07, 52A20, 46B09
1 Lecture 1
By a convex symmetric body \(K \subset {\mathbb{R}}^{n}\) we mean a compact set with non-empty interior which is convex and symmetric about the origin (i.e., \(x \in K\) implies \(-x \in K\)).
This series of lectures will revolve around the following theorem of Dvoretzky.
Theorem 1 (A. Dvoretzky, 1960).
There is a function \(k : (0,1) \times \mathbb{N} \rightarrow \mathbb{N}\) satisfying, for all \(0 < \epsilon < 1\), \(k(\epsilon,n) \rightarrow \infty \) as \(n \rightarrow \infty \), such that for every \(0 < \epsilon < 1\), every \(n \in \mathbb{N}\) and every convex symmetric body \(K \subset {\mathbb{R}}^{n}\) there exists a subspace \(V \subseteq {\mathbb{R}}^{n}\) satisfying:
1.
\(\dim V = k(\epsilon,n)\) .
2.
\(V \cap K\) is “ \(\epsilon \) -euclidean,” which means that there exists r > 0 such that:
$$r \cdot V \cap B_{2}^{n} \subset V \cap K \subset (1 + \epsilon )r \cdot V \cap B_{ 2}^{n}.$$
The theorem was proved by Aryeh Dvoretzky [3], answering a question of Grothendieck. Grothendieck's question was asked in [8], in relation to a paper of Dvoretzky and Rogers [4]. In [8] Grothendieck gives another proof of the main application (the existence, in any infinite-dimensional Banach space, of an unconditionally convergent series which is not absolutely convergent) of the result of Dvoretzky and Rogers [4], a version of which is used below (Lemma 2).
The original proof of Dvoretzky is very involved. Several simplified proofs were given in the beginning of the 1970s: one by Figiel [5], one by Szankowski [17], and the earliest one, a version of which we will present here, by Milman [10]. This proof, which turned out to be very influential, is based on the notion of concentration of measure. Milman was also the first to get the right estimate (\(\log n\)) for the dimension \(k = k(\epsilon,n)\) of the almost Euclidean section as a function of the dimension n. The dependence of k on \(\epsilon \) is still wide open and we will discuss it in detail later in this survey. Milman's version of Dvoretzky's theorem is the following.
Theorem 2.
For every \(\epsilon > 0\) there exists a constant \(c = c(\epsilon ) > 0\) such that for every \(n \in \mathbb{N}\) and every convex symmetric body \(K \subset {\mathbb{R}}^{n}\) there exists a subspace \(V \subseteq {\mathbb{R}}^{n}\) satisfying:
1.
\(\dim V = k\) , where \(k \geq c \cdot \log n\) .
2.
\(V \cap K\) is \(\epsilon \) -euclidean:
$$r \cdot V \cap B_{2}^{n} \subset V \cap K \subset (1 + \epsilon )r \cdot V \cap B_{ 2}^{n}.$$
For example, the unit ball of \(\ell_{\infty }^{n}\)—the n-dimensional cube—is far from the Euclidean ball. It is easy to see that
$$B_{2}^{n} \subseteq B_{\infty }^{n} \subseteq \sqrt{n}\,B_{2}^{n},$$
so the ratio of the radii of the bounding and the bounded Euclidean balls is \(\sqrt{n}\), and \(\sqrt{n}\) is the best constant. Yet, according to Theorem 2, we can find a subspace of \({\mathbb{R}}^{n}\) of dimension proportional to \(\log n\) in which the ratio of bounding and bounded balls will be \(1 + \epsilon \).
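As a quick numerical illustration (not part of the original text; it assumes NumPy), one can sample a random low-dimensional section of the cube and estimate how close it is to a Euclidean ball by evaluating \(\Vert \cdot \Vert _{\infty }\) on random unit vectors of the section:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5   # note k is about log(n), in line with Theorem 2

# A random k-dimensional subspace of R^n: orthonormal columns via QR.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Evaluate the cube norm ||.||_inf on random Euclidean unit vectors of the section.
x = rng.standard_normal((k, 10000))
x /= np.linalg.norm(x, axis=0)
vals = np.abs(Q @ x).max(axis=0)

ratio = vals.max() / vals.min()   # sampled distortion of the section
print(ratio, np.sqrt(n))          # compare with sqrt(n) ~ 14 for the whole cube
```

The sampled distortion of the 5-dimensional section comes out close to 1, while the whole 200-dimensional cube has distortion \(\sqrt{200}\approx 14\). (This is only a Monte Carlo sketch over sampled directions, not an exact computation of the section's distortion.)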
There is a simple correspondence between symmetric convex sets in \({\mathbb{R}}^{n}\)and norms on \({\mathbb{R}}^{n}\)given by \(\Vert x\Vert _{K} =\inf \{ \lambda > 0\; :\; \frac{x} {\lambda } \in K\}\). The following is an equivalent formulation of Theorem 2 in terms of norms.
Theorem 3.
For every \(\epsilon > 0\) there exists a constant \(c = c(\epsilon ) > 0\) such that for every \(n \in \mathbb{N}\) and every norm \(\Vert \cdot \Vert \) on \({\mathbb{R}}^{n}\), \(\ell_{2}^{k}\) \((1 + \epsilon )\)-embeds into \(({\mathbb{R}}^{n},\Vert \cdot \Vert )\) for some \(k \geq c \cdot \log n\).
By “X C-embeds into Y” we mean that there exists a one-to-one bounded linear operator \(T : X \rightarrow Y\) with \(\|T\|\,\|{(T_{\vert TX})}^{-1}\| \leq C\).
Clearly, Theorem 2 implies Theorem 3. Also, Theorem 3 clearly implies a weaker version of Theorem 2, with \(B_{2}^{n}\) replaced by some ellipsoid (which by definition is an invertible linear image of \(B_{2}^{n}\)). But, since any k-dimensional ellipsoid is easily seen to have a \(k/2\)-dimensional section which is a multiple of the Euclidean ball, we see that Theorem 3 also implies Theorem 2. This argument also shows that proving Theorem 2 for K is equivalent to proving it for some invertible linear image of K. Before starting the actual proof of Theorem 3 here is a very vague sketch of the proof: Consider the unit sphere of \(\ell_{2}^{n}\), i.e., the surface of \(B_{2}^{n}\), which we will denote by \({S}^{n-1} =\{ x \in {\mathbb{R}}^{n}\; :\;\Vert x\Vert _{2} = 1\}\). Let \(\Vert \cdot \Vert \) be some arbitrary norm on \({\mathbb{R}}^{n}\). The first task will be to show that there exists a “large” set \(S_{\text{good}} \subset {S}^{n-1}\) satisfying \(\vert \Vert x\Vert - M\vert < \epsilon M\) for all \(x \in S_{\text{good}}\), where M is the average of \(\Vert x\Vert \) on \({S}^{n-1}\). Moreover, we shall see that, depending on the Lipschitz constant of \(\|\cdot \|\), the set \(S_{\text{good}}\) is “almost all” the sphere in the measure sense. This phenomenon is called concentration of measure.
The next stage will be to pass from the “large” set to a large dimensional subspace of \({\mathbb{R}}^{n}\) contained in it. Denote by O(n) the group of orthogonal transformations of \({\mathbb{R}}^{n}\). Choose some subspace \(V_{0}\) of appropriate dimension k and fix an \(\epsilon \)-net N on \(V _{0} \cap {S}^{n-1}\). For each fixed \(x_{0} \in N\), “almost all” transformations \(U \in O(n)\) will send it into some point in \(S_{\text{good}}\). Moreover, if the “almost all” notion is good enough, we will be able to find a transformation that sends all the points of the \(\epsilon \)-net into \(S_{\text{good}}\). A standard approximation procedure will then let us pass from the \(\epsilon \)-net to all points in the subspace.
In preparation for the actual proof denote by \(\mu \) the normalized Haar measure on \({S}^{n-1}\)—the unique probability measure which is invariant under the group of orthogonal transformations. The main tool will be the following concentration of measure theorem of Paul Lévy (for a proof see e.g. [14]).
Theorem 4 (P. Lévy).
Let \(f : {S}^{n-1}\rightarrow \mathbb{R}\) be a Lipschitz function with a constant L; i.e.,
$$\vert f(x) - f(y)\vert \leq L\,\Vert x - y\Vert _{2}\quad \text{for all } x,y \in {S}^{n-1}.$$
Then,
$$\mu \Big(\Big\{x \in {S}^{n-1}\; :\; \Big\vert f(x) -\int _{{S}^{n-1}}f\,\mathrm{d}\mu \Big\vert > \epsilon \Big\}\Big) \leq 2{\mathrm{e}}^{-{\epsilon }^{2}n/2{L}^{2} }.$$
Remark.
The theorem also holds with the expectation of f replaced by its median.
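The concentration phenomenon of Theorem 4 is easy to observe numerically. The following sketch (not from the original text; it assumes NumPy) uses the function \(f(x) = \Vert x\Vert _{1}/\sqrt{n}\), which is 1-Lipschitz on \({S}^{n-1}\) by Cauchy–Schwarz, and shows its empirical spread around the mean shrinking as n grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere_sample(n, m, rng):
    """m independent uniform points on S^{n-1} (normalized Gaussian vectors)."""
    g = rng.standard_normal((m, n))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

# f(x) = ||x||_1 / sqrt(n) is 1-Lipschitz on S^{n-1}, so by Theorem 4 its
# fluctuations around the mean should be of order 1/sqrt(n).
stds = []
for n in (10, 100, 1000):
    f = np.abs(sphere_sample(n, 20000, rng)).sum(axis=1) / np.sqrt(n)
    stds.append(f.std())
    print(n, f.mean(), f.std())
```

The empirical standard deviations decay roughly like \(1/\sqrt{n}\), while the means approach \(\sqrt{2/\pi }\).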
Our next goal is to prove the following theorem of Milman, which gives a lower bound on the dimension of almost Euclidean sections of an arbitrary convex body. It will be the main tool in the proof of Theorem 3.
Theorem 5 (V. Milman).
For every \(\epsilon > 0\) there exists a constant \(c = c(\epsilon ) > 0\) such that for every \(n \in \mathbb{N}\) and every norm \(\Vert \cdot \Vert _{}\) in \({\mathbb{R}}^{n}\) there exists a subspace \(V \subseteq {\mathbb{R}}^{n}\) satisfying:
1.
\(\dim V = k\) , where \(k \geq c \cdot {\left(\frac{E}{b}\right)}^{2}n\) .
2.
For every \(x \in V\)
$$(1 - \epsilon )E \cdot \Vert x\Vert _{2} \leq \Vert x\Vert _{} \leq (1 + \epsilon )E \cdot \Vert x\Vert _{2}.$$
Here \(E =\int _{{S}^{n-1}}\Vert x\Vert _{}\mathrm{d}\mu \) and b is the smallest constant satisfying \(\Vert x\Vert _{} \leq b\Vert x\Vert _{2}\) .
The definition of b implies that the function \(\|\cdot \|\) is Lipschitz with constant b on \({S}^{n-1}\); normalizing the norm, as we may (this changes neither \(E/b\) nor the conclusion), assume b = 1. Applying Theorem 4 (with deviation \(\epsilon E\)) we get a subset of \({S}^{n-1}\) of probability very close to one (\(\geq 1 - 2{\mathrm{e}}^{-{\epsilon }^{2}{E}^{2}n/2 }\)), assuming E is not too small, on which
$$(1 - \epsilon )E \leq \Vert x\Vert \leq (1 + \epsilon )E. \qquad (1)$$
We need to replace this set of large measure with a set which is large in the algebraic sense: a set of the form \(V \cap {S}^{n-1}\)for a subspace V of relatively high dimension. The way to overcome this difficulty is to fix an \(\epsilon \)-net in \(V _{0} \cap {S}^{n-1}\)(i.e., a finite set such that any other point in \(V _{0} \cap {S}^{n-1}\)is of distance at most \(\epsilon \)from one of the points in this set) for some fixed subspace V 0 (of dimension k to be decided upon later) and show that we can find an orthogonal transformation U such that \(\|Ux\|\)satisfies Eq. (1) for each x in the \(\epsilon \)-net. A successive approximation argument (the details of which can be found, e.g., in [11], as all other details which are not explained here) then gives a similar inequality (maybe with \(2\epsilon \)replacing \(\epsilon \)) for all \(x \in V _{0} \cap {S}^{n-1}\), showing that \(V = UV _{0}\)can serve as the needed subspace.
To find the required \(U \in O(n)\) we need two simple facts. The first is to notice that if we denote by \(\nu \) the normalized Haar measure on the orthogonal group O(n), then, using the uniqueness of the Haar measure on \({S}^{n-1}\), we get that, for each fixed \(x \in {S}^{n-1}\), the distribution of Ux, where U is distributed according to \(\nu \), is \(\mu \). It follows that, for each fixed \(x \in {S}^{n-1}\), with \(\nu \)-probability at least \(1 - 2{\mathrm{e}}^{-{\epsilon }^{2}{E}^{2}n/2 }\),
$$(1 - \epsilon )E \leq \Vert Ux\Vert \leq (1 + \epsilon )E.$$
Using a simple union bound we get that for any finite set \(N \subset {S}^{n-1}\), with \(\nu \)-probability \(\geq 1 - 2\vert N\vert {\mathrm{e}}^{-{\epsilon }^{2}{E}^{2}n/2 }\), \(U\) satisfies
$$(1 - \epsilon )E \leq \Vert Ux\Vert \leq (1 + \epsilon )E$$
for all \(x \in N\) (\(\vert N\vert \) denotes the cardinality of N).
Lemma 1.
For every \(0 < \epsilon < 1\) there exists an \(\epsilon \)-net N on \({S}^{k-1}\) of cardinality \(\leq {\left(\frac{3}{\epsilon }\right)}^{k}\).
So as long as \(2{\left(\frac{3}{\epsilon }\right)}^{k}{\mathrm{e}}^{-{\epsilon }^{2}{E}^{2}n/2 } < 1\) we can find the required U. This translates into \(k = \big\lfloor c\frac{{\epsilon }^{2}}{\log \frac{3}{\epsilon }}{E}^{2}n\big\rfloor \) for some absolute c > 0, as is needed in the conclusion of Theorem 5.
Remark.
This proof gives that the \(c(\epsilon )\) in Theorem 5 can be taken to be \(c\frac{{\epsilon }^{2}}{\log \frac{3}{\epsilon }}\) for some absolute c > 0. This can be improved to \(c(\epsilon ) \geq c{\epsilon }^{2}\), as was done first by Gordon in [7]. (See also [13] for a proof that is more along the lines presented here.) This latter estimate can't be improved, as we shall see below in Claim 1.
To prove the lemma, let \(N =\{ x_{i}\}_{i=1}^{m}\) be a maximal set in \({S}^{k-1}\) such that \(\Vert x_{i} - x_{j}\Vert _{2} \geq \epsilon \) for all \(i\neq j\). The maximality of N implies that it is an \(\epsilon \)-net for \({S}^{k-1}\). Consider \(\{B(x_{i}, \frac{\epsilon }{2})\}_{i=1}^{m}\)—the collection of balls of radius \(\frac{\epsilon }{2}\) around the \(x_{i}\)'s. They are mutually disjoint and completely contained in \(B(0,1 + \frac{\epsilon }{2})\). Hence:
$$m\,\mathrm{Vol}\Big(B\big(0, \tfrac{\epsilon }{2}\big)\Big) \leq \mathrm{Vol}\Big(B\big(0,1 + \tfrac{\epsilon }{2}\big)\Big).$$
The homogeneity (of order k) of the Lebesgue measure in \({\mathbb{R}}^{k}\) now implies that \(m \leq {\left(\frac{1+\epsilon /2}{\epsilon /2}\right)}^{k} = {\left(1 + \frac{2}{\epsilon }\right)}^{k} \leq {\left(\frac{3}{\epsilon }\right)}^{k}\) for \(0 < \epsilon < 1\).
This completes the sketch of the proof of Theorem 5.
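For the case k = 2, Lemma 1 can be checked by an explicit construction (a numerical sketch, not part of the original text; it assumes NumPy): equally spaced points on the circle form an \(\epsilon \)-net, and their number stays well below \((3/\epsilon )^{2}\).

```python
import numpy as np

eps = 0.1
# Equally spaced points on the circle: each net point covers the arc of points
# within Euclidean (chord) distance eps, i.e. within angle 2*arcsin(eps/2).
m = int(np.ceil(np.pi / (2 * np.arcsin(eps / 2))))
theta = 2 * np.pi * np.arange(m) / m
net = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Numerical coverage check on a fine grid of the circle.
t = np.linspace(0, 2 * np.pi, 20000)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
worst = np.linalg.norm(pts[:, None, :] - net[None, :, :], axis=2).min(axis=1).max()
print(m, worst)  # m = 32 points suffice for eps = 0.1; worst distance < eps
```

Here 32 points cover the circle within distance 0.1, far below the bound \((3/0.1)^{2} = 900\) of Lemma 1; the volumetric bound is of course very crude in fixed small dimension.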
2 Lecture 2
In order to prove Theorem 3 we need to estimate E and b for a general symmetric convex body. Since the problem is invariant under invertible linear transformations we may assume that \({S}^{n-1}\) is included in K, i.e., b = 1. It remains to estimate E from below. As we will see, this can be done quite effectively for many interesting examples (we will show the computation for the \(\ell_{p}^{n}\) balls). However, in general it may happen that E is very small even if we assume, as we may, that \({S}^{n-1}\) touches the boundary of K. This is easy to see.
The way to overcome this difficulty is to assume in addition that S n − 1 is the ellipsoid of maximal volume inscribed in K. An ellipsoid is just an invertible linear image of the canonical Euclidean ball. Given a convex body one can find by compactness an ellipsoid of maximal volume inscribed in it. It is known that this maximum is attained for a unique inscribed ellipsoid but this fact will not be used in the reasoning below. The invariance of the problem lets us assume that the canonical Euclidean ball is such an ellipsoid. The advantage of this special situation comes from the following lemma.
Lemma 2 (Dvoretzky–Rogers).
Let \(\Vert \cdot \Vert \) be some norm on \({\mathbb{R}}^{n}\) and denote its unit ball by \(K = B_{\Vert \cdot \Vert }\). Assume the Euclidean ball \(B_{2}^{n} = B_{\|\cdot \|_{2}}\) is (the) ellipsoid of maximal volume inscribed in K. Then there exists an orthonormal basis \(x_{1},\ldots,x_{n}\) such that
$$\Vert x_{j}\Vert \geq {\left(\frac{j - 1}{n}\right)}^{\frac{j-1}{n-j+1} }\cdot \frac{n - j + 1}{n}\qquad \text{for } j = 2,\ldots,n.$$
Remark 1.
This is a weaker version of the original Dvoretzky–Rogers lemma. It shows in particular that half of the \(x_{i}\)'s have norm bounded from below: for all \(1 \leq i \leq \lfloor \frac{n}{2} \rfloor \), \(\Vert x_{i}\Vert \geq {(2\mathrm{e})}^{-1}\). This is what will be used in the proof of the main theorem.
Proof.
First of all choose an arbitrary \(x_{1} \in {S}^{n-1}\) of maximal norm. Of course, \(\Vert x_{1}\Vert = 1\). Suppose we have chosen \(\{x_{1},\ldots,x_{i-1}\}\) orthonormal. Choose \(x_{i}\) as a vector of maximal norm among all \(x \in {S}^{n-1}\) that are orthogonal to \(\{x_{1},\ldots,x_{i-1}\}\). Now fix j and, for parameters a, b > 0, define a new ellipsoid which is smaller in some directions and bigger in others:
$$\mathcal{E} =\Big\{\sum _{i=1}^{n}b_{i}x_{i}\;:\; \frac{1}{{a}^{2}}\sum _{i=1}^{j-1}b_{i}^{2} + \frac{1}{{b}^{2}}\sum _{i=j}^{n}b_{i}^{2} \leq 1\Big\}.$$
Suppose \(\sum _{i=1}^{n}b_{i}x_{i} \in \mathcal{E}\). Then \(\sum _{i=1}^{j-1}b_{i}x_{i} \in aB_{2}^{n}\); hence \(\Vert \sum _{i=1}^{j-1}b_{i}x_{i}\Vert \leq a\). Moreover, for each \(x \in \mathrm{span}\{x_{j},\ldots,x_{n}\}\cap B_{2}^{n}\) we have \(\Vert x\Vert \leq \Vert x_{j}\Vert \), and since \(\sum _{i=j}^{n}b_{i}x_{i} \in bB_{2}^{n}\), \(\Vert \sum _{i=j}^{n}b_{i}x_{i}\Vert \leq \Vert x_{j}\Vert b\). Thus,
$$\Big\Vert \sum _{i=1}^{n}b_{i}x_{i}\Big\Vert \leq a +\Vert x_{j}\Vert \,b.$$
The relation between the volumes of \(\mathcal{E}\) and \(B_{2}^{n}\) is \(\mathrm{Vol}(\mathcal{E}) = {a}^{j-1}{b}^{n-j+1}\mathrm{Vol}(B_{2}^{n})\). If \(a +\Vert x_{j}\Vert \cdot b \leq 1\), then \(\mathcal{E}\subseteq K\). Using the fact that \(B_{2}^{n}\) is the ellipsoid of the maximal volume inscribed in K we conclude that in this case
$${a}^{j-1}{b}^{n-j+1} \leq 1.$$
Substituting \(b = \frac{1-a}{\Vert x_{j}\Vert }\) and \(a = \frac{j-1}{n}\) it follows that for every \(j \geq 2\)
$$\Vert x_{j}\Vert \geq {\left(\frac{j - 1}{n}\right)}^{\frac{j-1}{n-j+1} }\cdot \frac{n - j + 1}{n}.$$
We are now ready to prove Theorem 3 and consequently also Theorem 2.
As we have indicated, using Theorem 5, and assuming as we may that \(B_{2}^{n}\) is the ellipsoid of maximal volume inscribed in \(K = B_{\|\cdot \|}\) (so that b = 1), it is enough to prove that
$$E =\int _{{S}^{n-1}}\Vert x\Vert \,\mathrm{d}\mu \geq c\sqrt{\frac{\log n}{n}} \qquad (2)$$
for some absolute constant c > 0.
This will prove Theorems 2 and 3 with the bound \(k \geq c\frac{{\epsilon }^{2}} {\log \frac{1} {\epsilon }} \log n\).
We now turn to prove Inequality (2). According to the Dvoretzky–Rogers Lemma 2 there are orthonormal vectors \(x_{1},\ldots,x_{n}\) such that \(\|x_{i}\| \geq 1/2\mathrm{e}\) for all \(1 \leq i \leq \lfloor \frac{n}{2} \rfloor \). Since \(\mu \) is invariant under changes of signs of the coordinates with respect to the basis \(x_{1},\ldots,x_{n}\), a symmetrization argument (as in Claim 4 in Lecture 4 below) gives
$$E =\int _{{S}^{n-1}}\Vert x\Vert \,\mathrm{d}\mu \geq \frac{1}{4\mathrm{e}}\int _{{S}^{n-1}}\max _{1\leq i\leq \lfloor \frac{n}{2} \rfloor }\vert \langle x,x_{i}\rangle \vert \,\mathrm{d}\mu (x).$$
To evaluate the last integral we notice that, because of the invariance of the canonical Gaussian distribution in \({\mathbb{R}}^{n}\) under orthogonal transformations and (again!) the uniqueness of the Haar measure on \({S}^{n-1}\), the vector \({(\sum g_{i}^{2})}^{-1/2}(g_{1},g_{2},\ldots,g_{n})\) is distributed \(\mu \). Here \(g_{1},g_{2},\ldots,g_{n}\) are i.i.d. N(0, 1) variables. Thus
$$\int _{{S}^{n-1}}\max _{1\leq i\leq \lfloor \frac{n}{2} \rfloor }\vert \langle x,x_{i}\rangle \vert \,\mathrm{d}\mu (x) = \frac{\mathbb{E}\max _{1\leq i\leq \lfloor \frac{n}{2} \rfloor }\vert g_{i}\vert }{\mathbb{E}{(\sum _{i=1}^{n}g_{i}^{2})}^{1/2}}.$$
(The last equation follows from the fact that the random vector \({(\sum g_{i}^{2})}^{-1/2}\) \((g_{1},g_{2},\ldots,g_{n})\)and the random variable \({(\sum g_{i}^{2})}^{1/2}\)are independent.)
To evaluate the denominator from above note that by Jensen's inequality:
$$\mathbb{E}{\Big(\sum _{i=1}^{n}g_{i}^{2}\Big)}^{1/2} \leq {\Big(\mathbb{E}\sum _{i=1}^{n}g_{i}^{2}\Big)}^{1/2} = \sqrt{n}.$$
The numerator is known to be of order \(\sqrt{\log n}\)(estimate the tail behaviour of \(\max _{1\leq i\leq \lfloor \frac{n} {2} \rfloor }\vert g_{i}\vert\)).
This gives the required estimate and concludes the proof of Theorems 2 and 3.
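The two estimates just used (the denominator is at most \(\sqrt{n}\), the numerator is of order \(\sqrt{\log n}\)) can be combined in a short numerical check that, for the cube norm \(\Vert \cdot \Vert _{\infty }\), E is indeed of order \(\sqrt{\log n/n}\). A Monte Carlo sketch (not part of the original text; it assumes NumPy), using the Gaussian representation of \(\mu \):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10000, 500
g = rng.standard_normal((m, n))

# E = average of ||x||_inf over S^{n-1}, via the Gaussian representation
# x = g / ||g||_2 of the Haar measure.
E = (np.abs(g).max(axis=1) / np.linalg.norm(g, axis=1)).mean()
ratio = E * np.sqrt(n / np.log(n))
print(E, ratio)   # the ratio is a constant of order 1
```

The printed ratio stays bounded between absolute constants as n grows, reflecting \(\mathbb{E}\max \vert g_{i}\vert \asymp \sqrt{\log n}\).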
As another application of Theorem 5 we will estimate the almost Euclidean sections of the \(\ell_{p}^{n}\) balls \(B_{p}^{n} =\{ x \in {\mathbb{R}}^{n}\; :\; \|x\|_{p} = {(\sum _{i=1}^{n}\vert x_{i}{\vert }^{p})}^{1/p} \leq 1\}\).
Using the connection between the Gaussian distribution and \(\mu \) we can write
$$E_{p} :=\int _{{S}^{n-1}}\Vert x\Vert _{p}\,\mathrm{d}\mu = \frac{\mathbb{E}{(\sum _{i=1}^{n}\vert g_{i}{\vert }^{p})}^{1/p}}{\mathbb{E}{(\sum _{i=1}^{n}g_{i}^{2})}^{1/2}}.$$
To bound the last quantity from below we will use the following inequality (Minkowski's integral inequality, i.e., the triangle inequality for the \(\ell_{p}^{n}\)-valued \(L_{1}\) norm):
$$\mathbb{E}{\Big(\sum _{i=1}^{n}\vert g_{i}{\vert }^{p}\Big)}^{1/p} \geq {\Big(\sum _{i=1}^{n}{(\mathbb{E}\vert g_{i}\vert )}^{p}\Big)}^{1/p} = {n}^{1/p}\,\mathbb{E}\vert g_{1}\vert.$$
Hence:
$$E_{p} \geq \frac{{n}^{1/p}\,\mathbb{E}\vert g_{1}\vert }{\sqrt{n}} = \sqrt{2/\pi }\;{n}^{\frac{1}{p}-\frac{1}{2} }.$$
For p > 2 we have \(\Vert x\Vert _{p} \leq \Vert x\Vert _{2}\), i.e., b = 1. For \(1 \leq p < 2\) we have \(\Vert x\Vert _{p} \leq {n}^{\frac{1}{p}-\frac{1}{2} } \cdot \Vert x\Vert _{2}\), i.e., \(b = {n}^{\frac{1}{p}-\frac{1}{2} }\). It now follows from Theorem 5 that the dimension k of the largest \(\epsilon \)-Euclidean section of the \(\ell_{p}^{n}\) ball satisfies
$$k \geq \left\{\begin{array}{ll} c(\epsilon )\,n, &\quad 1 \leq p < 2,\\ c(\epsilon )\,{n}^{2/p}, &\quad 2 < p < \infty.\end{array}\right.$$
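The computation of \(E_{p}\) above is easy to reproduce numerically. A Monte Carlo sketch (not part of the original text; it assumes NumPy) showing that \(E_{p}/{n}^{1/p-1/2}\) stays bounded between absolute constants for several values of p:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
g = rng.standard_normal((5000, n))
l2 = np.linalg.norm(g, axis=1)

ratios = {}
for p in (1.0, 1.5, 4.0, 8.0):
    lp = (np.abs(g) ** p).sum(axis=1) ** (1.0 / p)
    E_p = (lp / l2).mean()                  # E for the l_p norm, via Gaussians
    ratios[p] = E_p / n ** (1.0 / p - 0.5)
    print(p, ratios[p])   # bounded above and below by absolute constants
```

For each p the ratio is close to \({(\mathbb{E}\vert g_{1}{\vert }^{p})}^{1/p}\), in line with the lower bound \(\sqrt{2/\pi }\) obtained from Minkowski's inequality.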
3 Lecture 3
In this section we will mostly be concerned with the question of how good the estimates we got are. We begin with the last result of the last section concerning the dimension of almost Euclidean sections of the \(\ell_{p}^{n}\)balls.
Clearly, for \(1 \leq p < 2\)the dependence of k on n is best possible. The following proposition of Bennett, Dor, Goodman, Johnson and Newman [2] shows that this is the case also for \(2 < p < \infty \).
Proposition 1.
Let \(2 < p < \infty \) and suppose that \(\ell_{2}^{k}\) C-embeds into \(\ell_{p}^{n}\), meaning that there exists a linear operator \(T : {\mathbb{R}}^{k} \rightarrow {\mathbb{R}}^{n}\) such that
$$\Vert x\Vert _{2} \leq \Vert Tx\Vert _{p} \leq C\Vert x\Vert _{2}\quad \text{for all } x \in {\mathbb{R}}^{k}. \qquad (3)$$
Then \(k \leq c(p,C){n}^{2/p}\).
Proof.
Let \(T : {\mathbb{R}}^{k} \rightarrow {\mathbb{R}}^{n}\), \(T = {(a_{ij})}_{1\leq i\leq n,\,1\leq j\leq k}\), be the linear operator from the statement of the claim. Then for every \(x \in {\mathbb{R}}^{k}\)
$$\Vert x\Vert _{2} \leq {\Big(\sum _{i=1}^{n}\Big\vert \sum _{j=1}^{k}a_{ij}x_{j}\Big\vert ^{p}\Big)}^{1/p} \leq C\Vert x\Vert _{2}. \qquad (4)$$
In particular, for every \(1 \leq l \leq n\), substituting instead of x the lth row \(a_{l} = (a_{l1},\ldots,a_{lk})\) of T we get
$$\Vert a_{l}\Vert _{2}^{2} = \vert \langle a_{l},a_{l}\rangle \vert \leq {\Big(\sum _{i=1}^{n}\vert \langle a_{i},a_{l}\rangle {\vert }^{p}\Big)}^{1/p} \leq C\Vert a_{l}\Vert _{2}.$$
Hence, for every \(1 \leq l \leq n\)
$$\Vert a_{l}\Vert _{2} = {\Big(\sum _{j=1}^{k}a_{lj}^{2}\Big)}^{1/2} \leq C.$$
Let \(g_{1},\ldots,g_{k}\) be independent standard normal random variables. Then, using the fact that \(\sum _{j=1}^{k}g_{j}a_{lj}\) has the same distribution as \({(\sum _{j=1}^{k}a_{lj}^{2})}^{1/2}g_{1}\) and the left-hand side of Inequality (4), we have
$$\mathbb{E}{\Big(\sum _{j=1}^{k}g_{j}^{2}\Big)}^{p/2} \leq \mathbb{E}\Vert T(g_{1},\ldots,g_{k})\Vert _{p}^{p} =\sum _{l=1}^{n}{\Big(\sum _{j=1}^{k}a_{lj}^{2}\Big)}^{p/2}\mathbb{E}\vert g{_{1}\vert }^{p} \leq n\,{C}^{p}\,\mathbb{E}\vert g{_{1}\vert }^{p}.$$
On the other hand we can evaluate \(\mathbb{E}{(\sum _{j=1}^{k}g_{j}^{2})}^{p/2}\) from below using the convexity of the function \(t \mapsto {t}^{p/2}\) for \(p/2 > 1\):
$$\mathbb{E}{\Big(\sum _{j=1}^{k}g_{j}^{2}\Big)}^{p/2} \geq {\Big(\mathbb{E}\sum _{j=1}^{k}g_{j}^{2}\Big)}^{p/2} = {k}^{p/2}.$$
Combining the last two inequalities we get an upper bound for k:
$$k \leq {n}^{2/p}\,{C}^{2}\,{(\mathbb{E}\vert g{_{1}\vert }^{p})}^{2/p}.$$
Remarks.
1.
There exist absolute constants \(0 < \alpha \leq A < \infty \) such that \(\alpha \sqrt{p} \leq {(\mathbb{E}\vert g{_{1}\vert }^{p})}^{1/p} \leq A\sqrt{p}\). Hence the estimate we get for c(p, C) is \(c(p,C) \leq Ap{C}^{2}\). In particular, for \(p =\log n\), we have
$$k \leq A{C}^{2}\log n$$
for an absolute A. \(\ell_{\log n}^{n}\) is e-isomorphic to \(\ell_{\infty }^{n}\). Hence, if we C-embed \(\ell_{2}^{k}\) into \(\ell_{\infty }^{n}\), then \(k \leq A{C}^{2}\log n\), which means that the \(\log n\) bound in Theorem 2 is sharp.
2.
The exact dependence on \(\epsilon \) in Theorem 2 is an open question. From the proof we obtained the estimate \(k \geq \frac{c{\epsilon }^{2}}{\log (1/\epsilon )}\log n\). We will deal more with this issue below.
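The moment estimate \(\alpha \sqrt{p} \leq {(\mathbb{E}\vert g{_{1}\vert }^{p})}^{1/p} \leq A\sqrt{p}\) used in the first remark can be verified directly from the closed formula \(\mathbb{E}\vert g{_{1}\vert }^{p} = {2}^{p/2}\Gamma (\frac{p+1}{2})/\sqrt{\pi }\) (a short sketch, not part of the original text, using only the Python standard library):

```python
import math

def abs_moment_root(p):
    """(E|g|^p)^(1/p) for g ~ N(0,1), via E|g|^p = 2^(p/2) Gamma((p+1)/2) / sqrt(pi)."""
    return (2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (1 / p)

rows = [(p, abs_moment_root(p) / math.sqrt(p)) for p in (2, 4, 8, 16, 32, 64)]
for p, r in rows:
    print(p, round(r, 4))   # stays between two absolute constants
```

The ratio decreases from \(1/\sqrt{2}\approx 0.707\) at p = 2 towards \({\mathrm{e}}^{-1/2}\approx 0.607\) as p grows, so \(\alpha \) and \(A\) can indeed be taken absolute.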
Although the last result doesn't directly give good results concerning the dependence on \(\epsilon \) in Dvoretzky's theorem, it can be used to show that one can't expect any better behaviour in \(\epsilon \) than \({\epsilon }^{2}\) in Milman's Theorem 5. This was observed by Tadek Figiel and didn't appear in print before. We thank Figiel for permitting us to include it here.
Claim 1 (Figiel).
For any 0 < ε < 1 and n large enough ( \(n > {\epsilon }^{-4}\) will do), there is a 1-symmetric norm, \(\|\cdot \|\) , on \({\mathbb{R}}^{n}\) which is 2-equivalent to the \(\ell_{2}\) norm and such that if \(V\) is a subspace of \({\mathbb{R}}^{n}\) on which the \(\|\cdot \|\) and \(\|\cdot \|_{2}\) are \((1 + \epsilon )\) -equivalent then \(\mathrm{dim}V \leq C{\epsilon }^{2}n\) (C is an absolute constant).
Proof.
Given \(\epsilon \) and \(n > {\epsilon }^{-4}\) (say), let 2 < p < 4 be such that \({n}^{\frac{1}{p}-\frac{1}{2} } = 2\epsilon \). Put
$$\Vert x\Vert =\Vert x\Vert _{2} +\Vert x\Vert _{p}$$
on \({\mathbb{R}}^{n}\). Note that \((1 + {n}^{\frac{1}{p}-\frac{1}{2} })\Vert x\Vert _{2} \leq \Vert x\Vert \leq 2\Vert x\Vert _{2}\) for all x, so \(\|\cdot \|\) is 1-symmetric and 2-equivalent to the Euclidean norm. Assume that for some A and all \(x \in V\),
$$A\Vert x\Vert _{2} \leq \Vert x\Vert \leq (1 + \epsilon )A\Vert x\Vert _{2}.$$
Clearly, \(1 + \frac{\epsilon }{2} \leq \frac{1+{n}^{\frac{1}{p}-\frac{1}{2} }}{1+\epsilon } \leq A \leq 2\), and we get that for all \(x \in V\),
$$(A - 1)\Vert x\Vert _{2} \leq \Vert x\Vert _{p} =\Vert x\Vert -\Vert x\Vert _{2} \leq ((1 + \epsilon )A - 1)\Vert x\Vert _{2}.$$
Since \(\epsilon A \leq {n}^{\frac{1}{p}-\frac{1}{2} } \leq 4(A - 1)\), we get that, for \(B = A - 1\),
$$B\Vert x\Vert _{2} \leq \Vert x\Vert _{p} \leq (B + \epsilon A)\Vert x\Vert _{2} \leq 5B\Vert x\Vert _{2}\quad \text{for all } x \in V;$$
that is, \(\ell_{2}^{\mathrm{dim}\,V }\) 5-embeds into \(\ell_{p}^{n}\).
It follows from [2] (Proposition 1 above, applied with C = 5) that for some absolute C,
$$\mathrm{dim}\,V \leq Cp\,{n}^{2/p} = Cp\,{\big({n}^{\frac{1}{p}-\frac{1}{2} }\big)}^{2}n = 4Cp\,{\epsilon }^{2}n \leq {C}^{{\prime}}{\epsilon }^{2}n,$$
since p < 4.
Next we will see another relatively simple way of obtaining an upper bound on k in Dvoretzky’s theorem, which, unlike the estimate in Remark 1, tends to 0 as \(\epsilon \rightarrow 0\). It still leaves a big gap with the lower bound above.
Claim 2.
If \(\ell_{2}^{k}\) \((1 + \epsilon )\)-embeds into \(\ell_{\infty }^{n}\), then
$$k \leq \frac{C\log n}{\log \frac{1}{c\epsilon }}$$
for some absolute constants \(0 < c,C < \infty \) .
Proof.
Assume we have a \({(1 - \epsilon )}^{-1}\)-embedding of \(\ell_{2}^{k}\) into \(\ell_{\infty }^{n}\), i.e., we have an operator \(T = {(a_{ij})}_{1\leq i\leq n,\,1\leq j\leq k}\) satisfying, for every \(x \in {\mathbb{R}}^{k}\),
$$(1 - \epsilon )\Vert x\Vert _{2} \leq \Vert Tx\Vert _{\infty } \leq \Vert x\Vert _{2}. \qquad (5)$$
This means that there exist vectors \(v_{1},\ldots,v_{n} \in {\mathbb{R}}^{k}\) (the rows of T) such that for every \(x \in {\mathbb{R}}^{k}\):
$$(1 - \epsilon )\Vert x\Vert _{2} \leq \max _{1\leq i\leq n}\vert \langle v_{i},x\rangle \vert \leq \Vert x\Vert _{2}. \qquad (6)$$
In particular, \(\Vert v_{i}\Vert _{2} \leq 1\)for every \(1 \leq i \leq n\).
Suppose \(x \in {S}^{k-1}\); then the left-hand side of Eq. (6) states that there exists \(1 \leq i \leq n\) such that \(\langle v_{i},x\rangle \geq 1 - \epsilon \) (replacing \(v_{i}\) by \(-v_{i}\) if necessary); hence
$$\Vert v_{i} - x\Vert _{2}^{2} =\Vert v_{i}\Vert _{2}^{2} - 2\langle v_{i},x\rangle + 1 \leq 2 - 2(1 - \epsilon ) = 2\epsilon.$$
Thus, the vectors \(\pm v_{1},\ldots,\pm v_{n}\) form a \(\sqrt{2\epsilon }\)-net on \({S}^{k-1}\), which means that n is much larger (exponentially) than k. Indeed, comparing volumes as in the proof of Lemma 1, any \(\sqrt{2\epsilon }\)-net on \({S}^{k-1}\) has cardinality at least \({(c/\sqrt{\epsilon })}^{k-1}\) for an absolute c > 0, so
$$2n \geq {\Big( \frac{c}{\sqrt{\epsilon }}\Big)}^{k-1}.$$
This gives, for \(\epsilon < \frac{1}{32}\) and \(k \geq 12\),
$$n \geq {\Big( \frac{1}{c\epsilon }\Big)}^{ck}$$
or
$$k \leq \frac{C\log n}{\log \frac{1}{c\epsilon }}.$$
This shows that the c(ε) in the statement of Theorem 2 can’t be larger than \(\frac{C} {\log (1/c\epsilon )}\).
Our last objective in this survey is to improve somewhat the lower estimate on \(c(\epsilon )\)in the version of Dvoretzky’s theorem we proved. For that we will need the inverse to Claim 2.
Claim 3.
\(\ell_{2}^{k}\) \((1 + \epsilon )\)-embeds into \(\ell_{\infty }^{n}\) for
$$k \geq \frac{c\log n}{\log \frac{C}{\epsilon }}$$
for some absolute constants \(0 < c,C < \infty \) .
The proof is very simple and we only state the embedding. Use Lemma 1 to find an \(\epsilon \)-net \(\{x_{i}\}_{i=1}^{n}\) on \({S}^{k-1}\), where k and n are related as in the statement of the claim. The embedding of \(\ell_{2}^{k}\) into \(\ell_{\infty }^{n}\) is given by \(x \rightarrow \{\langle x,x_{i}\rangle \}_{i=1}^{n}\).
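For k = 2 the embedding of Claim 3 can be written out explicitly: take the \(\epsilon \)-net of Lemma 1 on the circle and map x to its inner products with the net points. A numerical sketch (not part of the original text; it assumes NumPy) confirming that the distortion is at most \(1 + \epsilon \):

```python
import numpy as np

eps = 0.1
m = int(np.ceil(np.pi / (2 * np.arcsin(eps / 2))))      # eps-net size on the circle
theta = 2 * np.pi * np.arange(m) / m
net = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # the net points x_i

# The embedding T: l_2^2 -> l_infty^m of Claim 3:  x -> (<x, x_i>)_{i=1}^m.
rng = np.random.default_rng(4)
x = rng.standard_normal((2, 5000))
x /= np.linalg.norm(x, axis=0)          # Euclidean unit vectors
vals = np.abs(net @ x).max(axis=0)      # ||Tx||_inf
distortion = vals.max() / vals.min()
print(m, distortion)
```

For a unit vector x the nearest net point \(x_{i}\) gives \(\langle x,x_{i}\rangle = 1 -\Vert x - x_{i}\Vert _{2}^{2}/2 \geq 1 - {\epsilon }^{2}/2\), so the distortion is even better than \(1 + \epsilon \), as the sampled values confirm.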
4 Lecture 4
In this last section we will prove a somewhat improved version of Dvoretzky’s theorem, replacing the \({\epsilon }^{2}\)dependence by \(\epsilon \)(except for a \(\log\)factor).
Theorem 6.
There is a constant c > 0 such that for all \(n \in \mathbb{N}\), all \(\epsilon > 0\) and every n-dimensional normed space \(({\mathbb{R}}^{n},\Vert \cdot \Vert )\), \(\ell_{2}^{k}\) \((1 + \epsilon )\)-embeds into \(({\mathbb{R}}^{n},\Vert \cdot \Vert )\) for some \(k \geq \frac{c\epsilon }{{(\log \frac{1}{\epsilon } )}^{2}} \log n\).
The idea of the proof is the following: We start as in the proof of Milman's Theorem 5, assuming \(B_{2}^{n}\) is the ellipsoid of maximal volume inscribed in the unit ball \(B_{\Vert \cdot \Vert }\). If E is large enough (so that \({\epsilon }^{2}{E}^{2}n \geq \frac{\epsilon }{{(\log \frac{1}{\epsilon } )}^{2}} \log n\)) we get the result from Milman's theorem. If not, we will show that the space actually contains a relatively high dimensional \(\ell_{\infty }^{m}\) and then use Claim 3 to get an estimate on the dimension of the embedded \(\ell_{2}^{k}\).
The main proposition is the following one which improves the main proposition of [15]:
Proposition 2.
Let \((X,\|\cdot \|)\) be a normed space and let \(x_{1},\ldots,x_{n}\) be a sequence in X satisfying \(\|x_{i}\| \geq 1/10\) for all i and
$$\mathbb{E}\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert \leq L\sqrt{\log n},$$
where \(g_{1},\ldots,g_{n}\) are independent standard Gaussian random variables.
Then, there is a subspace of X of dimension \(k \geq \frac{{n}^{1/4}} {CL}\) which is CL-isomorphic to \(\ell_{\infty }^{k}\) . C is a universal constant.
Let us assume the proposition and continue with the
Proof (Proof of Theorem 6).
We start as in the proof of Theorem 2, assuming \(B_{2}^{n}\) is the ellipsoid of maximal volume inscribed in the unit ball of \(({\mathbb{R}}^{n},\|\cdot \|)\). As we already said, we may assume \({\epsilon }^{2}{E}^{2}n \leq \frac{\epsilon }{{(\log \frac{1}{\epsilon } )}^{2}} \log n\), or \(E\sqrt{n} \leq \frac{\sqrt{\log n}}{\sqrt{\epsilon }\log \frac{1}{\epsilon }}\). Let \(x_{1},\ldots,x_{n}\) be the orthonormal basis given by the Dvoretzky–Rogers lemma, so that in particular \(\|x_{i}\| \geq 1/10\) for \(i = 1,\ldots,\lfloor n/2\rfloor \). It follows from the triangle inequality for the first inequality and from the relation between the distribution of a canonical Gaussian vector and the Haar measure on the sphere that
$$\mathbb{E}\Big\Vert \sum _{i=1}^{\lfloor n/2\rfloor }g_{i}x_{i}\Big\Vert \leq \mathbb{E}\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert = E\,\mathbb{E}{\Big(\sum _{i=1}^{n}g_{i}^{2}\Big)}^{1/2} \leq E\sqrt{n}.$$
So
$$\mathbb{E}\Big\Vert \sum _{i=1}^{\lfloor n/2\rfloor }g_{i}x_{i}\Big\Vert \leq L\sqrt{\log n},\qquad L = \frac{1}{\sqrt{\epsilon }\log \frac{1}{\epsilon }},$$
and by Proposition 2 there is a subspace of \(({\mathbb{R}}^{n},\|\cdot \|)\) of dimension \(k \geq \frac{{n}^{1/4}}{CL}\) which is CL-isomorphic to \(\ell_{\infty }^{k}\), where \(L = \frac{1}{\sqrt{\epsilon }\log \frac{1}{\epsilon }}\). It now follows from an iteration result of James (see Lemma 3 below and Corollary 1 following it) that for any \(0 < \epsilon < 1\) there is a subspace of \(({\mathbb{R}}^{n},\|\cdot \|)\) of dimension \(k \geq c{n}^{\frac{c\epsilon }{\log L} }\) which is \((1 + \epsilon )\)-isomorphic to \(\ell_{\infty }^{k}\); c > 0 is a universal constant. We now use Claim 3 to conclude that \(\ell_{2}^{k}\) embeds in our space for some \(k \geq \frac{c\log (c{n}^{\frac{c\epsilon }{\log L} })}{\log (1/c\epsilon )} = \frac{{c}^{{\prime}}\epsilon \log n}{{(\log (1/c\epsilon ))}^{2}}\).
The following simple lemma is due to R.C. James:
Lemma 3.
Let \(x_{1},\ldots,x_{m}\) be vectors in some normed space X such that \(\|x_{i}\| \geq 1\) for all i and
$$\Big\Vert \sum _{i=1}^{m}a_{i}x_{i}\Big\Vert \leq L\max _{1\leq i\leq m}\vert a_{i}\vert $$
for all sequences of coefficients \(a_{1},\ldots,a_{m} \in \mathbb{R}\). Then X contains a sequence \(y_{1},\ldots,y_{\lfloor \sqrt{m}\rfloor }\) satisfying \(\|y_{j}\| \geq 1\) for all j and
$$\Big\Vert \sum _{j=1}^{\lfloor \sqrt{m}\rfloor }a_{j}y_{j}\Big\Vert \leq \sqrt{L}\max _{1\leq j\leq \lfloor \sqrt{m}\rfloor }\vert a_{j}\vert $$
for all sequences of coefficients \(a_{1},\ldots,a_{\lfloor \sqrt{m}\rfloor }\in \mathbb{R}\).
Proof.
Let \(\sigma _{j}\), \(j = 1,\ldots,\lfloor \sqrt{m}\rfloor \), be disjoint subsets of \(\{1,\ldots,m\}\), each of cardinality \(\lfloor \sqrt{m}\rfloor \). If for some j
$$\Big\Vert \sum _{i\in \sigma _{j}}a_{i}x_{i}\Big\Vert \leq \sqrt{L}\max _{i\in \sigma _{j}}\vert a_{i}\vert $$
for all sequences of coefficients, we are done: take as the \(y_{j}\)'s the vectors \(\{x_{i}\}_{i\in \sigma _{j}}\). Otherwise, for each j we can find a vector \(y_{j} =\sum _{i\in \sigma _{j}}a_{i}x_{i}\) such that \(\|y_{j}\| = 1\) and \(\sqrt{L}\max _{i\in \sigma _{j}}\vert a_{i}\vert < 1\). But then, for all coefficients \(b_{1},\ldots,b_{\lfloor \sqrt{m}\rfloor }\),
$$\Big\Vert \sum _{j=1}^{\lfloor \sqrt{m}\rfloor }b_{j}y_{j}\Big\Vert =\Big\Vert \sum _{j=1}^{\lfloor \sqrt{m}\rfloor }\sum _{i\in \sigma _{j}}b_{j}a_{i}x_{i}\Big\Vert \leq L\max _{j}\max _{i\in \sigma _{j}}\vert b_{j}a_{i}\vert < L\,\frac{\max _{j}\vert b_{j}\vert }{\sqrt{L}} = \sqrt{L}\max _{j}\vert b_{j}\vert.$$
Corollary 1.
If \(\ell_{\infty }^{m}\) L-embeds into a normed space X, then for all 0 < ε < 1, \(\ell_{\infty }^{k}\) \(\frac{1+\epsilon } {1-\epsilon }\) -embeds into X for \(k \sim {m}^{\epsilon /\log L}\) .
Proof.
By iterating the lemma (pretending, for the sake of simplicity of notation, that \({m}^{{2}^{-s} }\) is an integer for all the relevant s's), for every positive integer t there is a sequence of length \(k = {m}^{{2}^{-t} }\) of vectors \(x_{1},\ldots,x_{k}\) in X with \(\|x_{i}\| \geq 1\) satisfying
$$\Big\Vert \sum _{i=1}^{k}a_{i}x_{i}\Big\Vert \leq {L}^{{2}^{-t} }\max _{1\leq i\leq k}\vert a_{i}\vert $$
for all coefficients; we may clearly also take the \(x_{i}\)'s of norm one. Pick a t such that \({L}^{{2}^{-t} } = 1 + \epsilon \) (approximately); i.e., \({2}^{-t} = \frac{\log (1+\epsilon )}{\log L} \sim \frac{\epsilon }{\log L}\). Thus \(k \sim {m}^{\epsilon /\log L}\) and
$$\Big\Vert \sum _{i=1}^{k}a_{i}x_{i}\Big\Vert \leq (1 + \epsilon )\max _{1\leq i\leq k}\vert a_{i}\vert.$$
To get a similar lower bound on \(\|\sum _{i=1}^{k}a_{i}x_{i}\|\), assume without loss of generality that \(\max \vert a_{i}\vert = a_{1}\). Then
$$\Big\Vert \sum _{i=1}^{k}a_{i}x_{i}\Big\Vert \geq 2\Vert a_{1}x_{1}\Vert -\Big\Vert a_{1}x_{1} -\sum _{i=2}^{k}a_{i}x_{i}\Big\Vert \geq 2a_{1} - (1 + \epsilon )a_{1} = (1 - \epsilon )\max _{1\leq i\leq k}\vert a_{i}\vert.$$
We are left with the task of proving Proposition 2. We begin with
Claim 4.
Let \(x_{1},\ldots,x_{n}\) be normalized vectors in a normed space and let \(\epsilon _{1},\ldots,\epsilon _{n}\) be independent random signs. Then for all real \(a_{1},\ldots,a_{n}\)
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}\epsilon _{i}a_{i}x_{i}\Big\Vert <\max _{1\leq i\leq n}\vert a_{i}\vert \Big) \leq \frac{1}{2}.$$
Proof.
Assume as we may \(a_{1} =\max _{1\leq i\leq n}\vert a_{i}\vert \); by symmetry we may also condition on \(\epsilon _{1} = 1\). If \(\|a_{1}x_{1} +\sum _{ i=2}^{n}\epsilon _{i}a_{i}x_{i}\| < a_{1}\) then
$$\Big\Vert a_{1}x_{1} -\sum _{i=2}^{n}\epsilon _{i}a_{i}x_{i}\Big\Vert \geq 2\Vert a_{1}x_{1}\Vert -\Big\Vert a_{1}x_{1} +\sum _{i=2}^{n}\epsilon _{i}a_{i}x_{i}\Big\Vert $$
and thus
$$\Big\Vert a_{1}x_{1} -\sum _{i=2}^{n}\epsilon _{i}a_{i}x_{i}\Big\Vert > 2a_{1} - a_{1} = a_{1}.$$
So, since \((\epsilon _{2},\ldots,\epsilon _{n})\) and \((-\epsilon _{2},\ldots,-\epsilon _{n})\) are equidistributed and at most one member of each such pair of sign patterns can give norm smaller than \(a_{1}\),
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}\epsilon _{i}a_{i}x_{i}\Big\Vert <\max _{1\leq i\leq n}\vert a_{i}\vert \Big) \leq \frac{1}{2}.$$
Remark 2.
If \(x_{1} = x_{2}\), \(a_{1} = a_{2} = 1\)and \(a_{3} = \cdots = a_{n} = 0\)then the 1 ∕ 2 in the statement of Claim 4 cannot be replaced by any smaller constant.
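Claim 4 can be checked by brute force for small n (a sketch, not part of the original text; it assumes NumPy): enumerate all sign patterns for a fixed choice of vectors normalized in \(({\mathbb{R}}^{3},\Vert \cdot \Vert _{\infty })\) and fixed coefficients, and count the bad patterns.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, d = 8, 3
# Vectors normalized in the sup-norm of R^3 (the norm we test Claim 4 in).
x = rng.standard_normal((n, d))
x /= np.abs(x).max(axis=1, keepdims=True)
a = rng.standard_normal(n)

bad = 0  # sign patterns with ||sum eps_i a_i x_i||_inf < max|a_i|
for signs in itertools.product((-1.0, 1.0), repeat=n):
    v = (np.array(signs) * a) @ x
    if np.abs(v).max() < np.abs(a).max():
        bad += 1
frac = bad / 2 ** n
print(frac)   # Claim 4: at most 1/2
```

The fraction of bad sign patterns never exceeds 1/2, for any norm and any choice of vectors and coefficients; the example of Remark 2 shows the value 1/2 is attained.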
Proposition 3.
Let \(x_{1},\ldots,x_{n}\) be vectors in a normed space with \(\|x_{i}\| \geq 1/10\) for all i and let \(g_{1},\ldots,g_{n}\) be a sequence of independent standard Gaussian variables. Then, for n large enough,
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert > \frac{\sqrt{\log n}}{100}\Big) \geq \frac{1}{3}.$$
Proof.
Note first that it follows from Claim 4 that
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert \geq \frac{1}{10}\max _{1\leq i\leq n}\vert g_{i}\vert \Big) \geq \frac{1}{2}. \qquad (8)$$
This is easily seen by noticing that \((g_{1},\ldots,g_{n})\) is distributed identically to \((\epsilon _{1}\vert g_{1}\vert,\ldots,\epsilon _{n}\vert g_{n}\vert )\), where \(\epsilon _{1},\ldots,\epsilon _{n}\) are independent random signs independent of the \(g_{i}\)'s, and then computing
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}\epsilon _{i}\vert g_{i}\vert x_{i}\Big\Vert \geq \frac{1}{10}\max _{1\leq i\leq n}\vert g_{i}\vert \Big)$$
by first conditioning on the \(g_{i}\)'s and applying Claim 4 to the normalized vectors \(x_{i}/\|x_{i}\|\) with coefficients \(\vert g_{i}\vert \,\|x_{i}\| \geq \vert g_{i}\vert /10\). We use (8) in the following sequence of inequalities:
$$\mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert > \frac{\sqrt{\log n}}{100}\Big) \geq \mathbb{P}\Big(\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert \geq \frac{\max _{i}\vert g_{i}\vert }{10}\ \text{and}\ \max _{i}\vert g_{i}\vert > \frac{\sqrt{\log n}}{10}\Big) \geq \frac{1}{2} -\mathbb{P}\Big(\max _{1\leq i\leq n}\vert g_{i}\vert \leq \frac{\sqrt{\log n}}{10}\Big) \geq \frac{1}{3}$$
for n large enough, since \(\mathbb{P}(\max _{1\leq i\leq n}\vert g_{i}\vert \leq \frac{\sqrt{\log n}}{10}) = {\big(1 -\mathbb{P}(\vert g_{1}\vert > \frac{\sqrt{\log n}}{10})\big)}^{n} \rightarrow 0\).
In the proof of Proposition 2 we shall use a theorem of Alon and Milman [1] (see [18] for a simpler proof) which has a very similar statement: Gaussians are replaced by random signs and \(\sqrt{\log n}\)by a constant.
Theorem 7 (Alon and Milman).
Let \((X,\|\cdot \|)\) be a normed space and let \(x_{1},\ldots,x_{n}\) be a sequence in X satisfying \(\|x_{i}\| \geq 1\) for all i and
$$\mathbb{E}\Big\Vert \sum _{i=1}^{n}\epsilon _{i}x_{i}\Big\Vert \leq L,$$
where \(\epsilon _{1},\ldots,\epsilon _{n}\) are independent random signs.
Then, there is a subspace of X of dimension \(k \geq \frac{{n}^{1/2}} {CL}\) which is CL-isomorphic to \(\ell_{\infty }^{k}\) . C is a universal constant.
Proof of Proposition 2.
Let \(\sigma _{1},\ldots,\sigma _{\lfloor \sqrt{n}\rfloor }\subset \{ 1,\ldots,n\}\) be disjoint with \(\vert \sigma _{j}\vert = \lfloor \sqrt{n}\rfloor \) for all j. We will show that there is a subset \(J \subset \{ 1,\ldots,\lfloor \sqrt{n}\rfloor \}\) of cardinality at least \(\frac{\sqrt{n}}{4}\) and there are \(\{y_{j}\}_{j\in J}\), with \(y_{j}\) supported on \(\sigma _{j}\), such that \(\|y_{j}\| = 1\) for all \(j \in J\) and
$$\mathbb{E}\Big\Vert \sum _{j\in J}\epsilon _{j}y_{j}\Big\Vert \leq CL,$$
where the \(\epsilon _{j}\)'s are independent random signs.
We then apply the theorem above.
To show this notice that the events \(\|\sum _{i\in \sigma _{j}}g_{i}x_{i}\| < \frac{\sqrt{\log n}}{200}\), \(j = 1,\ldots,\lfloor \sqrt{n}\rfloor \), are independent and by Proposition 3 have probability at most 2/3 each. So with probability at least 1/2 there is a subset \(J \subset \{ 1,\ldots,\lfloor \sqrt{n}\rfloor \}\) with \(\vert J\vert \geq \frac{\lfloor \sqrt{n}\rfloor }{4}\) such that \(\|\sum _{i\in \sigma _{j}}g_{i}x_{i}\| > \frac{1}{200}\sqrt{\log n}\) for all \(j \in J\). Denote the event that such a J exists by A, so \(\mathbb{P}(A) \geq \frac{1}{2}\). Let \(\{r_{j}\}_{j=1}^{\lfloor \sqrt{n}\rfloor }\) be a sequence of independent signs independent of the original Gaussian sequence. Since \(\{r_{j}g_{i}\}_{j,\,i\in \sigma _{j}}\) is distributed as \(\{g_{i}\}\), we get that
$$\mathbb{E}\Big(\Big\Vert \sum _{j=1}^{\lfloor \sqrt{n}\rfloor }r_{j}\sum _{i\in \sigma _{j}}g_{i}x_{i}\Big\Vert \;\Big\vert \;A\Big) \leq \frac{1}{\mathbb{P}(A)}\,\mathbb{E}\Big\Vert \sum _{i=1}^{n}g_{i}x_{i}\Big\Vert \leq 2L\sqrt{\log n}.$$
It follows that for some \(\omega \in A\) there exists a \(J \subset \{ 1,\ldots,\lfloor \sqrt{n}\rfloor \}\) with \(\vert J\vert \geq \frac{\lfloor \sqrt{n}\rfloor }{4}\) such that, putting \(\bar{y}_{j} =\sum _{i\in \sigma _{j}}g_{i}(\omega )x_{i}\), one has \(\|\bar{y}_{j}\| > \frac{1}{200}\sqrt{\log n}\) for all \(j \in J\) and
$$\mathbb{E}_{r}\Big\Vert \sum _{j=1}^{\lfloor \sqrt{n}\rfloor }r_{j}\bar{y}_{j}\Big\Vert \leq 2L\sqrt{\log n}.$$
Take \(y_{j} =\bar{ y}_{j}/\|\bar{y}_{j}\|\); since \(\|\bar{y}_{j}\| > \frac{\sqrt{\log n}}{200}\) for \(j \in J\), the contraction principle gives \(\mathbb{E}_{r}\Vert \sum _{j\in J}r_{j}y_{j}\Vert \leq 400L\), as required.
References
N. Alon, V.D. Milman, Embedding of \(l_{\infty }^{k}\)in finite-dimensional Banach spaces. Israel J. Math. 45(4), 265–280 (1983)
G. Bennett, L.E. Dor, V. Goodman, W.B. Johnson, C. Newman, On uncomplemented subspaces of \(L_{p}\), 1 < p < 2. Israel J. Math. 26(2), 178–187 (1977)
A. Dvoretzky, Some results on convex bodies and Banach spaces, in Proceedings International Symposium on Linear spaces, 1961 (Jerusalem Academic Press, Jerusalem; Pergamon, Oxford, 1960), pp. 123–160
A. Dvoretzky, C.A. Rogers, Absolute and unconditional convergence in normed linear spaces. Proc. Nat. Acad. Sci. U.S.A. 36, 192–197 (1950)
T. Figiel, A short proof of Dvoretzky’s theorem on almost spherical sections of convex bodies. Compositio Math. 33(3), 297–301 (1976)
A.A. Giannopoulos, V.D. Milman, Euclidean structure in finite dimensional normed spaces. Handbook of the geometry of Banach spaces, vol. I (Elsevier, North-Holland, Amsterdam, 2001), pp. 707–779
Y. Gordon, Some inequalities for Gaussian processes and applications. Israel J. Math. 50(4), 265–289 (1985)
A. Grothendieck, Sur certaines classes de suites dans les espaces de Banach et le théorème de Dvoretzky-Rogers (French). Bol. Soc. Mat. São Paulo 8, 1953, pp. 81–110 (1956)
W.B. Johnson, G. Schechtman, Finite dimensional subspaces of L p. Handbook of the geometry of Banach spaces, vol. I (Elsevier, North-Holland, Amsterdam, 2001), pp. 837–870
V.D. Milman, A new proof of A. Dvoretzky's theorem on cross-sections of convex bodies (Russian). Funkcional. Anal. i Priložen. 5(4), 28–37 (1971)
V.D. Milman, G. Schechtman, Asymptotic Theory of Finite-Dimensional Normed Spaces. Lecture Notes in Mathematics, vol. 1200 (Springer, Berlin, 1986)
G. Pisier, The volumes of convex bodies and Banach space geometry (Cambridge University Press, Cambridge 1989)
G. Schechtman, A remark concerning the dependence on \(\epsilon \) in Dvoretzky's theorem, in Geometric Aspects of Functional Analysis (1987–88). Lecture Notes in Mathematics, vol. 1376 (Springer, Berlin, 1989), pp. 274–277
G. Schechtman, Concentration, results and applications. Handbook of the geometry of Banach spaces, vol. 2 (Elsevier, North-Holland, Amsterdam, 2003), pp. 1603–1634
G. Schechtman, Two observations regarding embedding subsets of Euclidean spaces in normed spaces. Adv. Math. 200(1), 125–135 (2006)
R. Schneider, in Convex bodies: the Brunn-Minkowski theory, Encyclopedia of Mathematics and its Applications, vol. 44 (Cambridge University Press, Cambridge, 1993)
A. Szankowski, On Dvoretzky’s theorem on almost spherical sections of convex bodies. Israel J. Math. 17, 325–338 (1974)
M. Talagrand, Embedding of \(l_{k}^{\infty }\)and a theorem of Alon and Milman. Geometric aspects of functional analysis (Israel, 1992–1994). Oper. Theory Adv. Appl. 77, 289–293 (1995)
Acknowledgements
The work was supported in part by the Israel Science Foundation.
We have also included in the list of references some books and expository papers not directly referred to in the text above.
Schechtman, G. (2013). Euclidean Sections of Convex Bodies. In: Ludwig, M., Milman, V., Pestov, V., Tomczak-Jaegermann, N. (eds) Asymptotic Geometric Analysis. Fields Institute Communications, vol 68. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6406-8_12