Abstract
A Radon measure μ on a locally convex linear space F is called logarithmically concave (log-concave in short) if for any compact nonempty sets K, L ⊂ F and λ ∈ [0, 1], \(\mu (\lambda K + (1-\lambda )L) \geq \mu (K)^{\lambda }\mu (L)^{1-\lambda }\). A random vector with values in F is called log-concave if its distribution is logarithmically concave.
1 Introduction
A Radon measure μ on a locally convex linear space F is called logarithmically concave (log-concave in short) if for any compact nonempty sets K, L ⊂ F and λ ∈ [0, 1], \(\mu (\lambda K + (1-\lambda )L) \geq \mu (K)^{\lambda }\mu (L)^{1-\lambda }\). A random vector with values in F is called log-concave if its distribution is logarithmically concave.
The class of log-concave measures is closed under affine transformations, convolutions, and weak limits. By the result of Borell [4], an n-dimensional vector with full-dimensional support is log-concave iff it has a log-concave density, i.e. a density of the form \(e^{-h}\), where h is a convex function with values in (−∞, ∞]. A typical example of a log-concave vector is a vector uniformly distributed over a convex body. It may be shown that the class of log-concave distributions on \(\mathbb{R}^{n}\) is the smallest class that contains uniform distributions on convex bodies and is closed under affine transformations and weak limits.
Every full-dimensional logarithmically concave probability measure on \(\mathbb{R}^{n}\) may be affinely transformed into an isotropic distribution, i.e. a distribution with mean zero and identity covariance matrix.
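The isotropic normalization is easy to carry out numerically. The following sketch (our illustration, not part of the paper) whitens a sample from a two-dimensional log-concave distribution: it centers the points and applies the inverse Cholesky factor of the empirical covariance matrix, after which the empirical covariance is approximately the identity.

```python
import random

random.seed(0)
# Sample from a 2-dimensional log-concave distribution: (e1, e1 + 2*e2 - 1)
# with e1, e2 independent standard exponentials (an affine image of a
# log-concave product measure, hence log-concave).
pts = []
for _ in range(100_000):
    e1, e2 = random.expovariate(1.0), random.expovariate(1.0)
    pts.append((e1, e1 + 2 * e2 - 1))

n = len(pts)
mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n
centered = [(x - mx, y - my) for x, y in pts]

# empirical covariance matrix [[a, b], [b, c]]
a = sum(x * x for x, _ in centered) / n
b = sum(x * y for x, y in centered) / n
c = sum(y * y for _, y in centered) / n

# Cholesky factor L of the covariance; whitening maps v to L^{-1} v
l11 = a ** 0.5
l21 = b / l11
l22 = (c - l21 * l21) ** 0.5
iso = [(x / l11, (y - l21 * (x / l11)) / l22) for x, y in centered]
```

The whitened cloud `iso` has mean approximately zero and empirical covariance approximately the identity, i.e. it is (empirically) isotropic.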
In recent years the study of log-concave vectors has attracted the attention of many researchers, cf. the monographs [2] and [5]. There are reasons to believe that logarithmically concave isotropic distributions have properties similar to those of product distributions. The most important results confirming this belief are the central limit theorem of Klartag [9] and Paouris' large deviation inequality for Euclidean norms [21]. However, many important questions concerning log-concave measures are still open; in this note we present and discuss some of them.
Notation. By ⟨⋅, ⋅⟩ we denote the standard scalar product on \(\mathbb{R}^{n}\). For \(x \in \mathbb{R}^{n}\) we put \(\|x\|_{p} = (\sum _{i=1}^{n}\vert x_{i}\vert ^{p})^{1/p}\) for 1 ≤ p < ∞ and \(\|x\|_{\infty } =\max _{i}\vert x_{i}\vert \); we also write |x| for \(\|x\|_{2}\). We write \(B_{p}^{n}\) for the unit ball in \(\ell_{p}^{n}\), i.e. \(B_{p}^{n} =\{ x \in \mathbb{R}^{n}: \|x\|_{p} \leq 1\}\). \(\mathcal{B}(\mathbb{R}^{n})\) stands for the family of Borel sets on \(\mathbb{R}^{n}\).
By the letter C we denote absolute constants; the value of C may differ at each occurrence. Whenever we want to fix the value of an absolute constant we use the letters \(C_{1},C_{2},\ldots\).
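For concreteness, the ℓ_p-norms from the notation can be coded in a few lines (our helper, not from the paper):

```python
def norm_p(x, p):
    # ||x||_p = (sum_i |x_i|^p)^(1/p) for 1 <= p < infinity, max_i |x_i| for p = infinity
    if p == float("inf"):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -4.0]
# norm_p(x, 1) = 7, norm_p(x, 2) = |x| = 5, norm_p(x, float("inf")) = 4
```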
2 Optimal Concentration
Let ν be a symmetric exponential measure with parameter 1, i.e. the measure on the real line with the density \(\frac{1} {2}e^{-\vert x\vert }\). Talagrand [23] (see also [17] for a simpler proof based on a functional inequality) showed that the product measure ν n satisfies the following two-sided concentration inequality
This is a very strong result – a simple transportation of measure argument shows that it yields the Gaussian concentration inequality
where γ_n is the canonical Gaussian measure on \(\mathbb{R}^{n}\), i.e. the measure with the density \((2\pi )^{-n/2}\exp (-\vert x\vert ^{2}/2)\).
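The Gaussian concentration phenomenon is easy to observe numerically. The sketch below (our illustration; the bound used is the classical one for 1-Lipschitz functions around the median) samples the 1-Lipschitz function f(x) = max_i x_i under γ_n and compares its upper tail with exp(−t²∕2):

```python
import math, random

random.seed(2)

# f(x) = max_i x_i is 1-Lipschitz, so Gaussian concentration gives
# gamma_n(f >= Med f + t) <= exp(-t^2 / 2).
n, reps, t = 50, 20_000, 1.0
samples = sorted(max(random.gauss(0, 1) for _ in range(n)) for _ in range(reps))
med = samples[reps // 2]                       # empirical median of f
tail = sum(s >= med + t for s in samples) / reps
# empirically tail is about 0.04 here, far below exp(-1/2) ~ 0.61
```

The empirical tail is in fact much smaller than the guaranteed bound, since the maximum of n Gaussians fluctuates on a scale of order 1∕√(2 log n).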
It is natural to ask whether similar inequalities may be derived for other measures. To answer this question we should first find the right way to enlarge sets.
Definition 1
Let μ be a probability measure on \(\mathbb{R}^{n}\), for p ≥ 1 we define the following sets
and
The sets \(\mathcal{Z}_{p}(\mu _{K})\), p ≥ 1, where μ_K is the uniform distribution on a convex body K, are called L_p-centroid bodies of K. They were introduced (under a different normalization) in [16]; their properties were also investigated in [21]. Observe that for isotropic measures \(\mathcal{M}_{2}(\mu ) = \mathcal{Z}_{2}(\mu ) = B_{2}^{n}\).
Obviously \(\mathcal{M}_{p}(\mu ) \subset \mathcal{M}_{q}(\mu )\) and \(\mathcal{Z}_{p}(\mu ) \supset \mathcal{Z}_{q}(\mu )\) for p ≥ q. The next definition allows us to reverse these inclusions.
Definition 2
We say that moments of a probability measure μ on \(\mathbb{R}^{n}\) grow α-regularly for some α ∈ [1, ∞) if for any p ≥ q ≥ 2 and \(v \in \mathbb{R}^{n}\),
It is easy to see that for measures with α-regular growth of moments and p ≥ q ≥ 2 we have \(\alpha \frac{p} {q}\mathcal{M}_{p}(\mu ) \supset \mathcal{M}_{q}(\mu )\) and \(\mathcal{Z}_{p}(\mu ) \subset \alpha \frac{p} {q}\mathcal{Z}_{q}(\mu )\).
Moments of log-concave measures grow 3-regularly (1-regularly for symmetric measures and 2-regularly for centered measures). The following easy observation was noted in [15].
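As a concrete instance of this regularity (our check, for a single one-dimensional example): for the symmetric exponential distribution, \(\mathbb{E}\vert X\vert ^{p} =\Gamma (p+1)\), and 1-regular growth \((\mathbb{E}\vert X\vert ^{p})^{1/p} \leq \frac{p}{q}(\mathbb{E}\vert X\vert ^{q})^{1/q}\) can be verified on a grid of exponents:

```python
import math

def mom(p):
    # (E|X|^p)^{1/p} = Gamma(p+1)^{1/p} for the symmetric exponential law;
    # lgamma avoids the overflow of Gamma(p+1) for large p
    return math.exp(math.lgamma(p + 1) / p)

one_regular = all(mom(p) <= (p / q) * mom(q) + 1e-9
                  for q in range(2, 30) for p in range(q, 80))
# one_regular is True; also mom(2) = sqrt(2) and mom(p)/p -> 1/e
```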
Proposition 3
Suppose that μ is a symmetric probability measure on \(\mathbb{R}^{n}\) with α-regular growth of moments. Let K be a convex set such that for any halfspace A,
Then \(K \supset c(\alpha )\mathcal{Z}_{p}\) if p ≥ p(α), where c(α) and p(α) depend only on α.
The above motivates the following definition.
Definition 4
We say that a measure μ satisfies the optimal concentration inequality with constant β ( CI(β) in short) if
By the result of Gluskin and Kwapień [6], \(\mathcal{M}_{p}(\nu ^{n}) \sim p^{-1}B_{\infty }^{n} \cap p^{-1/2}B_{2}^{n}\), so \(\mathcal{Z}_{p}(\nu ^{n}) \sim pB_{1}^{n} + p^{1/2}B_{2}^{n}\). Therefore Talagrand's two-sided concentration inequality states that ν^n satisfies CI(β) with β ≤ C.
Remark 5
By Proposition 2.7 in [15] CI(β) may be equivalently stated as
In [15] a very strong conjecture was posed that every symmetric log-concave measure on \(\mathbb{R}^{n}\) satisfies CI(β) with a uniform constant β. Unfortunately, only very few examples supporting this conjecture are known.
Theorem 6
The following probability measures satisfy the optimal concentration inequality with an absolute constant β:
i) symmetric product log-concave measures;
ii) uniform distributions on \(B_{p}^{n}\)-balls, 1 ≤ p ≤ ∞;
iii) rotationally invariant log-concave measures.
Parts i) and ii) were shown in [15]; iii) may be shown using a radial transportation argument and the Gaussian concentration inequality.
Property CI(β) is invariant under linear transformations, so it is enough to study it for isotropic measures. For isotropic log-concave measures and p ≥ 2 we have \(\mathcal{Z}_{p}(\mu ) \subset p\mathcal{Z}_{2}(\mu ) = pB_{2}^{n}\), so CI(β) implies the exponential concentration:
By the result of E. Milman [20] the exponential concentration for log-concave measures is equivalent to Cheeger’s inequality:
and the constants β, β′ are comparable up to universal multiplicative factors. The long-standing open conjecture of Kannan, Lovász, and Simonovits [8] states that isotropic log-concave probability measures satisfy Cheeger's inequality with a uniform constant.
The best known bound for the exponential concentration constant for isotropic log-concave measures \(\beta \leq Cn^{1/3}\sqrt{\log n}\) is due to Eldan [7]. We will show a weaker estimate for the CI constant.
Proposition 7
Every centered log-concave probability measure on \(\mathbb{R}^{n}\) satisfies the optimal concentration inequality with constant \(\beta \leq C\sqrt{n}\).
Our proof is based on the following two simple lemmas.
Lemma 8
Let μ be a probability measure on \(\mathbb{R}^{n}\). Then
Proof
Let T = {u_1, …, u_N} be a 1∕2-net in \(\mathcal{M}_{p}(\mu )\) of cardinality N ≤ 5^n, i.e. a set \(T \subset \mathcal{M}_{p}(\mu )\) such that \(\mathcal{M}_{p}(\mu ) \subset T + \frac{1} {2}\mathcal{M}_{p}(\mu )\). Then the condition \(x\notin \mathcal{Z}_{p}(\mu )\) implies ⟨u_j, x⟩ > 1∕2 for some j ≤ N. Hence
where the second inequality follows by Chebyshev’s inequality. □
Lemma 9
Let μ be a log-concave probability measure on \(\mathbb{R}^{n}\) and K be a symmetric convex set such that μ(K) ≥ 1 − e −p for some p ≥ 2. Then for any Borel set A in \(\mathbb{R}^{n}\),
Proof
By Borell’s lemma [4] we have for t ≥ 1,
Let μ(A) = e −u for some u ≥ 0. Set
We have
Observe that if \(x \in \tilde{ A}\) then \(\frac{2p} {\tilde{u}} x \in 8K\), therefore \((1 -\frac{2p} {\tilde{u}} )\tilde{A} \subset \tilde{ A} + 8K\) and
□
Proof of Proposition 7.
By the linear invariance we may and will assume that μ is isotropic.
Applying Lemma 8 with λ = e and Lemma 9 with \(K = 10e\mathcal{Z}_{p}(\mu )\) we see that (1) holds with β = 90e for p ≥ n. For \(p \geq \sqrt{n}\) we have \(2\sqrt{n}\mathcal{Z}_{p}(\mu ) \supset \mathcal{Z}_{p\sqrt{n}}(\mu )\) and we get (1) with \(\beta = 180e\sqrt{n}\) in this case.
The Paouris inequality (4) gives
Together with Lemma 9 this yields for any Borel set A and t ≥ 1,
Using the above bound for t = 1 and the inclusion \(\mathcal{Z}_{p}(\mu ) \supset \mathcal{Z}_{2}(\mu ) = B_{2}^{n}\) we obtain (1) with \(\beta = 9C_{1}\sqrt{n}\) for \(2 \leq p \leq \sqrt{n}\). □
It would be of interest to improve the estimate from Proposition 7 to β ≤ Cn 1∕2−ɛ for some ɛ > 0. Suppose that we are able to show that
Then (assuming again that μ is isotropic):
i) if \(p \leq p_{0}:= n^{1/9}(\log n)^{-1/3}\), we obtain by Eldan's bound on Cheeger's constant
ii) if \(p_{0} \leq p \leq n\), then by (2) and Lemma 9,
So (2) would yield CI(β) for μ with \(\beta \leq Cn^{4/9}(\log n)^{1/3}\). Unfortunately we do not know whether (2) holds for symmetric log-concave measures (we are able to show it in the unconditional case).
A measure μ on \(\mathbb{R}^{n}\) is called unconditional if it is invariant under symmetries with respect to the coordinate axes. If μ is a log-concave, isotropic, and unconditional measure on \(\mathbb{R}^{n}\), then the result of Bobkov and Nazarov [3] yields \(\mathcal{Z}_{p}(\mu ) \subset C\mathcal{Z}_{p}(\nu ^{n})\). Therefore property CI(β) yields a two-level concentration inequality for such measures
Klartag [10] showed that unconditional isotropic log-concave measures satisfy the exponential concentration inequality with a constant β ≤ C log n. We do not know if a similar bound for β holds for the optimal concentration inequality or its weaker form (3).
3 Weak and Strong Moments
One of the fundamental properties of log-concave vectors is the Paouris inequality [21] (see also [1] for a shorter proof).
Theorem 10
For any log-concave vector X in \(\mathbb{R}^{n}\),
where
Equivalently, in terms of tails we have
Observe that if X is additionally isotropic then \(\sigma _{X}(p) \leq p\sigma _{X}(2) = p\) for p ≥ 2 and \(\mathbb{E}\vert X\vert \leq (\mathbb{E}\vert X\vert ^{2})^{1/2} = \sqrt{n}\), so we get
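A closed-form sanity check (ours, for the Gaussian case only): if X is standard Gaussian in \(\mathbb{R}^{n}\), then |X| has the chi distribution with \(\mathbb{E}\vert X\vert ^{p} = 2^{p/2}\Gamma ((n+p)/2)/\Gamma (n/2)\), and these moments indeed stay below a constant multiple of √n + p:

```python
import math

def chi_moment(n, p):
    # (E|X|^p)^{1/p} for X standard Gaussian in R^n: |X| ~ chi_n
    return math.sqrt(2.0) * math.exp((math.lgamma((n + p) / 2) - math.lgamma(n / 2)) / p)

# chi_moment(n, 2) = sqrt(n), and chi_moment(n, p) <= 2 * (sqrt(n) + p)
# on the grid below, matching the Paouris-type bound with C = 2
ok = all(chi_moment(n, p) <= 2 * (math.sqrt(n) + p)
         for n in (1, 2, 5, 10, 100, 1000) for p in (1, 2, 5, 10, 50, 100))
```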
It would be very valuable to have a reasonable characterization of random vectors which satisfy the Paouris inequality. The following example shows that the regular growth of moments is not enough.
Example 11
Let \(Y = \sqrt{n}gU\), where U has a uniform distribution on S n−1 and g is the standard normal \(\mathcal{N}(0,1)\) r.v., independent of U. Then it is easy to see that Y is isotropic, rotationally invariant and for any seminorm on \(\mathbb{R}^{n}\)
In particular this implies that for any \(v \in \mathbb{R}^{n}\),
So moments of Y grow C-regularly. Moreover
thus for 1 ≪ p ≪ n, \((\mathbb{E}\vert Y \vert ^{p})^{1/p} \gg (\mathbb{E}\vert Y \vert ^{2})^{1/2} +\sigma _{Y }(p)\).
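The gap in Example 11 can be seen numerically (our computation; we use the exact identity \(\vert Y\vert = \sqrt{n}\vert g\vert \) and \(\gamma _{p} = (\mathbb{E}\vert g\vert ^{p})^{1/p}\)): already for moderate n and p the strong moment √n·γ_p exceeds √n + p by a sizeable factor.

```python
import math

def gamma_p(p):
    # (E|g|^p)^{1/p} for g ~ N(0,1): E|g|^p = 2^{p/2} Gamma((p+1)/2) / Gamma(1/2)
    return math.sqrt(2.0) * math.exp((math.lgamma((p + 1) / 2) - math.lgamma(0.5)) / p)

n, p = 10_000, 100
strong = math.sqrt(n) * gamma_p(p)   # (E|Y|^p)^{1/p}, roughly sqrt(n * p)
scale = math.sqrt(n) + p             # the scale sqrt(n) + p from the Paouris bound
# here strong / scale is about 3, and it grows like sqrt(p) for 1 << p <= sqrt(n)
```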
It is natural to ask whether Theorem 10 may be generalized to non-Euclidean norms. In [11] the following conjecture was formulated and discussed.
Conjecture 12
There exists a universal constant C such that for any n-dimensional log-concave vector X and any norm ∥ ∥ on \(\mathbb{R}^{n}\),
where ∥v∥ ∗ = sup {|〈 v,x〉 |: ∥x∥≤ 1} denotes the dual norm on \(\mathbb{R}^{n}\).
Note that obviously for any random vector X and p ≥ 1,
The following simple observation from [15] shows that the optimal concentration yields comparison of weak and strong moments.
Proposition 13
Suppose that the law of an n-dimensional random vector X is α-regular and satisfies the optimal concentration inequality with constant β. Then for any norm ∥ ∥ on \(\mathbb{R}^{n}\),
Recall that log-concave measures are 3-regular. Therefore if the law of X is of one of three types listed in Theorem 6 then for any norm ∥ ∥ ,
We do not know if such inequality is satisfied for Euclidean norms and arbitrary log-concave vectors, i.e. whether Paouris inequality holds with the constant 1 in front of \(\mathbb{E}\vert X\vert \). This question is related to the so-called variance conjecture, discussed in [2].
The following extension of the Paouris inequality was shown in [13].
Theorem 14
Let X be a log-concave vector with values in a normed space (F,∥ ∥) which may be isometrically embedded in ℓ r for some r ∈ [1,∞). Then for p ≥ 1,
Remark 15
Let X and F be as above. Then by Chebyshev's inequality we obtain a large deviation estimate for ∥X∥:
where
denotes the weak p-th moment of ∥ X ∥ .
Remark 16
If i: F → ℓ r is a nonisometric embedding and \(\lambda =\| i\|_{F\rightarrow \ell_{r}}\|i^{-1}\|_{i(F)\rightarrow F}\), then we may define another norm on F by \(\|x\|':=\| i(x)\|/\|i\|_{F\rightarrow \ell_{r}}\). Obviously (F, ∥ ∥ ′) isometrically embeds in ℓ r , moreover ∥ x ∥ ′ ≤ ∥ x ∥ ≤ λ ∥ x ∥ ′ for x ∈ F. Hence Theorem 14 gives
Since log-concavity is preserved under linear transformations and, by the Hahn-Banach theorem, any linear functional on a subspace of ℓ_r is the restriction of a functional on the whole of ℓ_r with the same norm, it is enough to prove Theorem 14 for F = ℓ_r. An easy approximation argument shows that we may consider finite-dimensional spaces ℓ_r^n. This way Theorem 14 reduces to the following finite-dimensional statement.
Theorem 17
Let X be a log-concave vector in \(\mathbb{R}^{n}\) and r ∈ [1,∞). Then
where
and r′ denotes the Hölder’s dual of r, i.e. \(r' = \frac{r} {r-1}\) for r > 1 and r′ = ∞ for r = 1.
Any finite dimensional space embeds isometrically in ℓ ∞ , so to show Conjecture 12 it is enough to establish Theorem 17 (with a universal constant in place of Cr) for r = ∞. Such a result was shown for isotropic log-concave vectors.
Theorem 18 ([12])
Let X be an isotropic log-concave vector in \(\mathbb{R}^{n}\) . Then for any a 1 ,…,a n and p ≥ 1,
However a linear image of an isotropic vector does not have to be isotropic, so to establish the conjecture we need to consider either isotropic vectors and an arbitrary norm or vectors with a general covariance structure and the standard ℓ ∞ -norm.
In the case of unconditional vectors slightly more is known.
Theorem 19 ([11])
Let X be an n-dimensional isotropic, unconditional, log-concave vector and Y = (Y 1 ,…,Y n ), where Y i are independent symmetric exponential r.v’s with variance 1 (i.e., with the density \(2^{-1/2}\exp (-\sqrt{2}\vert x\vert )\) ). Then for any norm ∥ ∥ on \(\mathbb{R}^{n}\) and p ≥ 1,
The proof is based on Talagrand's generic-chaining-type two-sided estimate of \(\mathbb{E}\|Y \|\) [24] and the Bobkov-Nazarov bound [3] for the joint distribution function of X, which implies \((\mathbb{E}\vert \langle v,X\rangle \vert ^{p})^{1/p} \leq C(\mathbb{E}\vert \langle v,Y \rangle \vert ^{p})^{1/p}\) for p ≥ 1 and \(v \in \mathbb{R}^{n}\).
Using the easy estimate \(\mathbb{E}\|Y \| \leq C\log n\ \mathbb{E}\|X\|\) we get the following.
Corollary 20
For any n-dimensional unconditional, log-concave vector X, any norm ∥ ∥ on \(\mathbb{R}^{n}\) and p ≥ 1 one has
The Maurey-Pisier result [18] implies \(\mathbb{E}\|Y \| \leq C\mathbb{E}\|X\|\) in spaces with nontrivial cotype.
Corollary 21
Let 2 ≤ q < ∞ and let \(F = (\mathbb{R}^{n},\|\ \|)\) have q-cotype constant bounded by β < ∞. Then for any n-dimensional unconditional, log-concave vector X and p ≥ 1,
where C(q,β) is a constant that depends only on q and β.
For a class of invariant measures Conjecture 12 was established in [12].
Proposition 22
Let X be an n-dimensional random vector with the density of the form \(e^{-\varphi (\|x\|_{r})}\) , where 1 ≤ r ≤∞ and φ: [0,∞) → (−∞,∞] is nondecreasing and convex. Then for any norm ∥ ∥ on \(\mathbb{R}^{n}\) and any p ≥ 1,
4 Sudakov Minoration
For any norm ∥ ∥ on \(\mathbb{R}^{n}\) we have
Thus to estimate the mean of a norm of a random vector X one needs to investigate \(\mathbb{E}\sup _{v,w\in V }\langle v - w,X\rangle\) for bounded subsets V in \(\mathbb{R}^{n}\).
There are numerous powerful methods to estimate suprema of stochastic processes (cf. the monograph [25]); let us, however, present only a very easy upper bound. Namely, for any p ≥ 1,
In particular,
It is natural to ask when the above estimate may be reversed. Namely, when is it true that if a set \(V \subset \mathbb{R}^{n}\) has large cardinality (say, at least e^p) and the variables (⟨v, X⟩)_{v ∈ V} are A-separated with respect to the L_p-distance, then \(\mathbb{E}\sup _{v,w\in V }\langle v - w,X\rangle\) is at least of the order of A? The following definition gives a more precise formulation of this property.
Definition 23
Let X be a random n-dimensional vector. We say that X satisfies the L_p-Sudakov minoration principle with a constant κ > 0 (SMP_p(κ) in short) if for any nonempty set \(V \subset \mathbb{R}^{n}\) with |V| ≥ e^p such that
we have
A random vector X satisfies the Sudakov minoration principle with a constant κ (SMP(κ) in short) if it satisfies SMP_p(κ) for every p ≥ 1.
Example 24
If X has the canonical n-dimensional Gaussian distribution \(\mathcal{N}(0,I_{n})\) then \((\mathbb{E}\vert \langle v,X\rangle \vert ^{p})^{1/p} =\gamma _{p}\vert v\vert \), where \(\gamma _{p} = (\mathbb{E}\vert \mathcal{N}(0,1)\vert ^{p})^{1/p} \sim \sqrt{p}\) for p ≥ 1. Hence condition (6) is equivalent to |v − w| ≥ A∕γ_p for distinct vectors v, w ∈ V, and the classical Sudakov minoration principle for Gaussian processes [22] then yields
provided that |V| ≥ e^p. Therefore X satisfies the Sudakov minoration principle with a universal constant. In fact it is not hard to see that for centered Gaussian vectors the Sudakov minoration principle in the sense of Definition 23 is formally equivalent to the minoration property established by Sudakov.
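The Gaussian ingredient in Example 24 can be checked by simulation (ours, not from the paper): the expected maximum of N independent standard Gaussians, which governs the classical Sudakov bound, is of order √(2 log N).

```python
import math, random

random.seed(3)

def expected_max(N, reps=1000):
    # Monte Carlo estimate of E max_{i <= N} g_i for i.i.d. standard Gaussians
    return sum(max(random.gauss(0, 1) for _ in range(N)) for _ in range(reps)) / reps

N = 1000
m = expected_max(N)
# m is about 3.2, between 0.5 * sqrt(2 log N) ~ 1.9 and sqrt(2 log N) ~ 3.7
```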
The Sudakov minoration principle for vectors X with independent coordinates was investigated in detail in [14]. It was shown there that in this case a sufficient condition for SMP (which is also necessary if the coordinates of X are identically distributed) is the regular growth of the moments of the coordinates of X, i.e. the existence of α < ∞ such that \((\mathbb{E}\vert X_{i}\vert ^{p})^{1/p} \leq \alpha \frac{p} {q}(\mathbb{E}\vert X_{i}\vert ^{q})^{1/q}\) for all i and p ≥ q ≥ 1. In particular, log-concave vectors X with independent coordinates satisfy SMP with a universal constant κ.
In the sequel we will discuss the following conjecture.
Conjecture 25
Every n-dimensional log-concave random vector satisfies the Sudakov-minoration principle with a universal constant.
Remark 26
Suppose that X is log-concave and (6) is satisfied, but |V| = e^q with 1 ≤ q ≤ p. Since \(d_{X,q}(v,w) \geq \frac{q} {3p}d_{X,p}(v,w)\), the Sudakov minoration principle for a log-concave vector X implies the following formally stronger statement: for any nonempty \(V \subset \mathbb{R}^{n}\) and A > 0,
where N(V, d, ɛ) denotes the minimal number of balls in metric d of radius ɛ that cover V.
The Sudakov minoration principle and Conjecture 25 were posed independently by Shahar Mendelson, Emanuel Milman, and Grigoris Paouris (unpublished) and by the author in [12]. In [19] an approach to the Sudakov minoration and its dual version, based on variants of the Johnson-Lindenstrauss dimension reduction lemma, is discussed. The results presented below were proven in [12].
It is easy to see that the Sudakov minoration property is affinely invariant, so it is enough to investigate it only for isotropic random vectors. Using the fact that isotropic log-concave vectors satisfy exponential concentration with constant Cn^γ, γ < 1∕2, one may show that the lower bound (7) holds for special classes of sets.
Proposition 27
Suppose that X is an n-dimensional log-concave random vector, p ≥ 2, \(V \subset \mathbb{R}^{n}\) satisfies (6) and Cov (〈 v,X〉 ,〈 w,X〉 ) = 0 for v,w ∈ V with v ≠ w. Then (7) holds with a universal constant κ provided that |V |≥ e p.
In the case of general sets, we currently know only the following much weaker form of the Sudakov minoration principle.
Theorem 28
Let X be a log-concave vector, p ≥ 1 and \(V \subset \mathbb{R}^{n}\) be such that \(\vert V \vert \geq e^{e^{p} }\) and (6) holds. Then
Stronger bounds may be derived in the unconditional case. Comparing unconditional log-concave vectors with vectors with independent symmetric exponential coordinates one gets the following bound on κ.
Proposition 29
Suppose that X is an n-dimensional log-concave unconditional vector. Then X satisfies SMP(1∕(C log(n + 1))).
The next result presents a bound on κ independent of the dimension, but under a stronger assumption on the cardinality of V than in the definition of SMP.
Theorem 30
Let X be a log-concave unconditional vector in \(\mathbb{R}^{n}\) , p ≥ 1 and \(V \subset \mathbb{R}^{n}\) be such that \(\vert V \vert \geq e^{p^{2} }\) and (6) holds. Then
Remark 31
Theorems 28 and 30 may be rephrased in terms of entropy numbers as in Remark 26. Namely, for any nonempty set \(V \subset \mathbb{R}^{n}\) and log-concave vector X,
If X is unconditional and log-concave, then
We know that a class of invariant log-concave vectors satisfies SMP(κ) with a uniform κ.
Theorem 32
All n-dimensional random vectors with densities of the form exp (−φ(∥x∥ p )), where 1 ≤ p ≤∞ and φ: [0,∞) → (−∞,∞] is nondecreasing and convex satisfy the Sudakov minoration principle with a universal constant. In particular all rotationally invariant log-concave random vectors satisfy the Sudakov minoration principle with a universal constant.
One of the important consequences of the SMP-property is the following comparison-type result for random vectors.
Proposition 33
Suppose that a random vector X in \(\mathbb{R}^{n}\) satisfies SMP (κ). Let Y be a random n-dimensional vector such that \(\mathbb{E}\vert \langle v,Y \rangle \vert ^{p} \leq \mathbb{E}\vert \langle v,X\rangle \vert ^{p}\) for all p ≥ 1, \(v \in \mathbb{R}^{n}\) . Then for any norm ∥ ∥ on \(\mathbb{R}^{n}\) and p ≥ 1,
As a consequence, for random vectors which satisfy the Sudakov minoration principle, weak and strong moments are comparable up to a logarithmic factor.
Corollary 34
Suppose that X is an n-dimensional random vector, which satisfies SMP (κ). Then for any norm ∥ ∥ on \(\mathbb{R}^{n}\) and any p ≥ 1,
References
R. Adamczak, R. Latała, A. E. Litvak, K. Oleszkiewicz, A. Pajor and N. Tomczak-Jaegermann, A short proof of Paouris’ inequality, Canad. Math. Bull. 57 (2014), 3–8.
D. Alonso-Gutiérrez and J. Bastero, Approaching the Kannan-Lovász-Simonovits and variance conjectures, Lecture Notes in Mathematics 2131, Springer, Cham, 2015
S. Bobkov and F. L. Nazarov, On convex bodies and log-concave probability measures with unconditional basis, in: Geometric aspects of functional analysis, 53–69, Lecture Notes in Math. 1807, Springer, Berlin, 2003
C. Borell, Convex measures on locally convex spaces, Ark. Mat. 12 (1974), 239–252.
S. Brazitikos, A. Giannopoulos, P. Valettas and B. H. Vritsiou, Geometry of isotropic convex bodies, Mathematical Surveys and Monographs 196, American Mathematical Society, Providence, RI, 2014.
E. D. Gluskin and S. Kwapień, Tail and moment estimates for sums of independent random variables with logarithmically concave tails, Studia Math. 114 (1995), 303–309.
R. Eldan, Thin shell implies spectral gap up to polylog via a stochastic localization scheme, Geom. Funct. Anal. 23 (2013), 532–569.
R. Kannan, L. Lovász and M. Simonovits, Isoperimetric problems for convex bodies and a localization lemma, Discrete Comput. Geom. 13 (1995), 541–559.
B. Klartag, A central limit theorem for convex sets, Invent. Math. 168 (2007), 91–131.
B. Klartag, A Berry-Esseen type inequality for convex bodies with an unconditional basis, Probab. Theory Related Fields, 145 (2009), 1–33.
R. Latała, Weak and strong moments of random vectors, in: Marcinkiewicz centenary volume, 115–121, Banach Center Publ. 95, Polish Acad. Sci. Inst. Math., Warsaw, 2011.
R. Latała, Sudakov-type minoration for log-concave vectors, Studia Math. 223 (2014), 251–274.
R. Latała and M. Strzelecka, Weak and strong moments of l r -norms of log-concave vectors, Proc. Amer. Math. Soc. 144 (2016), 3597–3608.
R. Latała and T. Tkocz, A note on suprema of canonical processes based on random variables with regular moments, Electron. J. Probab. 20 (2015), no. 36, 1–17.
R. Latała and J. O. Wojtaszczyk, On the infimum convolution inequality, Studia Math. 189 (2008), 147–187.
E. Lutwak and G. Zhang, Blaschke-Santaló inequalities, J. Differential Geom. 47 (1997), 1–16.
B. Maurey, Some deviation inequalities, Geom. Funct. Anal. 1 (1991), 188–197.
B. Maurey and G. Pisier, Séries de variables aléatoires vectorielles indépendantes et propriétés géométriques des espaces de Banach, Studia Math. 58 (1976), 45–90.
S. Mendelson, E. Milman and G. Paouris, Generalized Sudakov via dimension reduction - a program, arXiv:1610.09287.
E. Milman, On the role of convexity in isoperimetry, spectral gap and concentration, Invent. Math. 177 (2009), 1–43.
G. Paouris, Concentration of mass on convex bodies, Geom. Funct. Anal. 16 (2006), 1021–1049.
V. N. Sudakov, Gaussian measures, Cauchy measures and ɛ -entropy, Soviet Math. Dokl. 10 (1969), 310–313.
M. Talagrand, A new isoperimetric inequality and the concentration of measure phenomenon, in: Israel Seminar (GAFA), Lecture Notes in Math. 1469, 94–124, Springer, Berlin 1991
M. Talagrand, The supremum of some canonical processes, Amer. J. Math. 116 (1994), 283–325.
M. Talagrand, Upper and lower bounds for stochastic processes. Modern methods and classical problems, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics 60, Springer, Heidelberg, 2014.
Acknowledgements
This research was supported by the National Science Centre, Poland grant 2015/18/A/ST1/00553.
© 2017 Springer Science+Business Media LLC

Latała, R. (2017). On Some Problems Concerning Log-Concave Random Vectors. In: Carlen, E., Madiman, M., Werner, E. (eds) Convexity and Concentration. The IMA Volumes in Mathematics and its Applications, vol 161. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-7005-6_16